System downtime plagues organizations throughout every industry. A recent survey by Globalscape shows that 90 percent of organizations have experienced downtime and a third deal with it at least once a month. On the surface, losing access to core systems, including email servers and backend processors, is frustrating and cripples employee productivity. Unfortunately, that’s not the worst of it — finances, data and security are all casualties of planned and unplanned downtime.
On the monetary side, nearly two-thirds of those surveyed estimated that a single hour of downtime costs their company between $250,000 and $500,000, and one in six reported losses of $1 million or more per hour. For the 60 percent of Fortune 500 companies that average 1.6 hours of downtime a week, that adds up to as much as $41.6 million every year.
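The $41.6 million figure follows directly from the survey's numbers, as this short calculation shows (assuming the high end of $500,000 per hour and a 52-week year):

```python
# Reproduce the yearly cost estimate from the survey figures:
# 1.6 hours of downtime per week at the high-end rate of $500,000 per hour.
hours_per_week = 1.6
cost_per_hour = 500_000
weeks_per_year = 52

annual_cost = hours_per_week * weeks_per_year * cost_per_hour
print(f"${annual_cost:,.0f}")  # $41,600,000
```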
Making matters worse, the negative ramifications of downtime extend beyond finances. More than 75 percent of senior executives reported losing critical information as a result of downtime. And when crucial communication systems go down, such as email and file transfer servers, many employees turn to consumer tools to handle corporate data and remain productive, a major security and compliance risk. In fact, according to a recent survey on the sharing of sensitive information:
- 63 percent of employees use remote storage devices, such as USBs.
- 45 percent rely on third-party consumer file storage sites, including Dropbox and Box.com.
- Nearly a third use their personal email accounts.
Although IT may not be able to completely eliminate downtime, there are five ways IT administrators can minimize the risk of losing access to core systems during outages, whether planned or unexpected.
James Bindseil is president and CEO of Globalscape, a managed file transfer solutions provider that helps organizations securely and efficiently send and receive files and data.
Scope out the right vendor
If you’re experiencing a loss of critical systems, it’s possible that there’s more to blame than your current environment. Vendors and partners may not be able to commit to the level of availability that your organization requires. Before signing on with a vendor, take a close look at the service-level agreement to make sure it’s aligned with your organization’s goals and requirements. If your current vendors are consistently falling short of their SLAs, consider implementing an active-active cluster or switching to a vendor that meets your needs.
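When reviewing an SLA, it helps to translate the advertised availability percentage into concrete hours of permitted downtime. A minimal sketch of that arithmetic (assuming a 365-day year; the function name is illustrative):

```python
# Convert an SLA availability percentage into the maximum downtime
# it permits over a given period. (Illustrative helper, not from any
# vendor's toolkit.)
def allowed_downtime_hours(sla_percent: float, period_hours: float) -> float:
    """Maximum hours of downtime an SLA permits over period_hours."""
    return period_hours * (1 - sla_percent / 100)

YEAR_HOURS = 365 * 24  # 8,760 hours in a non-leap year
for sla in (99.0, 99.9, 99.99):
    yearly = allowed_downtime_hours(sla, YEAR_HOURS)
    print(f"{sla}% uptime still allows {yearly:.2f} hours of downtime per year")
```

Three nines (99.9 percent) still permits nearly nine hours of outage a year, which at the survey's cost estimates is a multimillion-dollar allowance.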
Implement active-active cluster architectures
Implementing active-active or active-passive clustering is a common way for IT to minimize downtime. While active-passive clustering may have been a viable solution in the past, it doesn’t live up to the needs of today’s organizations. According to Globalscape’s study, IT departments that rely on active-passive clustering reported losing 34 percent more data and important communications than those departments that depend on active-active clustering. Active-active clustering environments provide organizations with more reliable uptime for core systems, which promotes efficiency and lowers risk.
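To make the distinction concrete, here is a minimal, hypothetical sketch (not any specific product's API) of why active-active degrades more gracefully: every healthy node serves traffic, so losing one node simply shrinks the pool rather than forcing a failover to a cold standby.

```python
# Illustrative active-active routing: all healthy nodes serve requests,
# so a node failure shrinks the pool instead of triggering a failover.
# (Hypothetical example for explanation only.)
class ActiveActivePool:
    def __init__(self, nodes):
        self.healthy = {node: True for node in nodes}
        self._counter = 0  # round-robin position

    def mark_down(self, node):
        """Record a node failure reported by health checks."""
        self.healthy[node] = False

    def route(self):
        """Pick the next healthy node, round-robin."""
        candidates = [n for n, up in self.healthy.items() if up]
        if not candidates:
            raise RuntimeError("no healthy nodes: total outage")
        node = candidates[self._counter % len(candidates)]
        self._counter += 1
        return node

pool = ActiveActivePool(["node-a", "node-b", "node-c"])
pool.mark_down("node-b")   # simulated failure
target = pool.route()      # traffic continues on a surviving node
```

In an active-passive setup, by contrast, the standby must detect the failure and take over before it serves anything, and work in flight during that window is what gets lost.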
System audits aren’t enough
Your IT department probably runs system audits on a regular basis — it’s a fundamental measure to promote uptime — but why not take it a step further? IT should regularly perform process audits in order to streamline organizational practices and eliminate redundancies. Identify which processes have a significant effect on the organization when downtime occurs, and work to protect those systems. In addition to ensuring uptime for core systems, process audits also provide visibility into where funds are best allocated.
Implement dynamic and durable systems
Scalability is a priority for any growing company and a natural defense against instances of downtime. From a server perspective, implementing a solution that offers load-balancing features can reduce the number of issues associated with planned outages. For example, if one server needs to be taken offline for maintenance, other nodes can seamlessly take over without the risk of losing mission-critical data or hindering employee productivity. Investing in software that can continually meet the needs of a growing organization through planned and unplanned outages can save IT time and money in the long run.
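The maintenance scenario above amounts to a "drain before you patch" routine: stop routing new work to the node, let its in-flight jobs finish, then take it offline. A minimal sketch of that idea (the class and function names are hypothetical, not a real product's API):

```python
# Hypothetical "drain before maintenance" sketch: stop accepting new work,
# wait for in-flight jobs to complete, then the node can go offline with
# nothing lost. (Illustrative only.)
import time

class Node:
    def __init__(self, name):
        self.name = name
        self.accepting = True
        self.in_flight = 0

    def start_job(self):
        if not self.accepting:
            raise RuntimeError(f"{self.name} is draining; route elsewhere")
        self.in_flight += 1

    def finish_job(self):
        self.in_flight -= 1

def drain(node, poll_seconds=0.01):
    """Drain a node gracefully ahead of planned maintenance."""
    node.accepting = False            # load balancer stops sending work here
    while node.in_flight > 0:         # let current transfers complete
        time.sleep(poll_seconds)
    return f"{node.name} offline for maintenance"

node = Node("node-a")
node.start_job()        # one transfer in progress
node.finish_job()       # it completes
print(drain(node))      # safe to patch or reboot now
```

With other nodes absorbing the drained node's share of traffic, a planned outage becomes invisible to users.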
Standardize with a single vendor
The IT landscape is full of vendors offering similar solutions, many claiming to work seamlessly with a competitor’s product suite. The truth is that mixing multiple vendors’ products in a single environment for the same function can introduce unwanted and unnoticed system-to-system vulnerabilities just waiting to be exploited. Standardizing on a single solution provider is the surest way to avoid these potential system glitches and secure sensitive data. By relying on one provider, IT can ensure uptime through compatible solutions.