Five Deadly Sins of Disaster Recovery Planning
Common blunders that result in data recovery disasters.
There’s nothing like a major national disaster to remind people about the need to prepare for major national disasters. Such is the case again this week as “Superstorm” Sandy breaks apart in the Northeast, leaving much of the U.S. East Coast a waterlogged mess.
But looking on the bright side, every calamity is a learning experience, so it only helps to take stock of what went right for the enterprise industry, and what went wrong.
All in all, the damage to data infrastructure was not that bad. Most of the severe outages were concentrated in New York City, which saw a record tidal surge that left many downtown organizations, particularly those around Wall Street, in the dark. As the Wall Street Journal reported, even firms that were able to successfully switch over to backup facilities had to sit on their hands for two days because all the main exchanges were closed. Others found that while some of their systems were easily ported to working facilities, things like phone service were hampered because they were still tied to central offices.
And even though New Jersey took a severe beating from the storm, many organizations were able to stay on-line due to properly sealed backup generator facilities, according to datacenterknowledge.com. Colocation provider Equinix, for example, was able to shift customer loads to backup generators that are supplied with 48 hours of fuel, which gives the company time to line up new fuel deliveries should local grids remain dark. As of Wednesday morning, however, the company was already switching back to utility power at many sites.
Indeed, the potential for interrupted utility service even without a major disaster is causing some organizations to rethink the traditional roles of “regular” and “backup” power. In Utah, eBay has added 6 megawatts of fuel cell capacity to one of its newest facilities with an eye toward using them as the primary source and reserving utility power supplies as the alternate. Not only are the fuel cells cleaner than the largely coal-powered electrical grid, but they provide built-in redundancy through a multi-brick design. And the initial capital costs of the cells are at least partially offset by the elimination of on-site UPS and power generation equipment.
As Sandy showed, however, this approach may fly in relatively dry areas like Utah, but not in flooded Manhattan. Sites like Gawker and Gizmodo were struggling to get back on-line throughout the storm as rising waters overwhelmed basement-level backup fuel pumps at key downtown locations. So even though backup systems kicked on as expected after electrical grids were purposely shut down in anticipation of the storm, they eventually failed as the waters rose.
So what have we learned from this? Several things. First, despite a dramatic awakening on the part of the IT industry to the need for adequate backup facilities and data sustainability, no system is foolproof. So if you think you are fully protected for all contingencies, you’re not.
But more importantly, events like Sandy remind us that no matter how much we talk about the cloud and the ability to shift loads from one place to another, all data ultimately relies on physical infrastructure that is as vulnerable to the vagaries of Mother Nature as the buildings that house it and the roads that connect it.
So the next time someone says that hardware doesn’t matter, ask them what they would do if they found their data center suddenly under water.