Five Deadly Sins of Disaster Recovery Planning
Common blunders that result in data recovery disasters.
In the old TV series "Taxi," dispatcher Louie DePalma (Danny DeVito) told his frightened cabbies that the one word he never wanted to hear was "accident." If that show were reset in a modern data center, CIO DePalma would feel the same about the word "outage."
Nearly every enterprise has experienced downtime at one point or another, and it's then that the true test of both your infrastructure and your IT staff takes place. And while outages can occur for a wide variety of reasons, the most common cause is external to the data center: namely, loss of power in the surrounding grid. And we're not talking about catastrophic natural or man-made disasters here. According to the University of Minnesota, non-disaster blackouts have more than doubled over the past two decades.
This puts a lot of pressure on the enterprise to keep the data flowing using backup power infrastructure, a prospect that can grow quite expensive depending on the length of time you expect to be in the dark. Of course, one of the lesser discussed benefits of increased energy efficiency in enterprise systems is that it stretches your power backup capabilities as well, says ZDNet's David Chernicoff. The downside is that, depending on data load fluctuations, many organizations may already be falling behind the curve even if they have instituted a low-power strategy. All the more reason why recovery plans need to be constantly monitored and updated.
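The arithmetic behind that point is simple enough to sketch. The model below is a deliberate simplification (the function name, battery figures, and flat inverter-efficiency derating are all illustrative assumptions, not any vendor's sizing formula), but it shows why shaving the IT load directly stretches ride-through time on the same battery bank:

```python
def ups_runtime_hours(battery_capacity_wh: float, load_w: float,
                      inverter_efficiency: float = 0.9) -> float:
    """Estimate how long a UPS can carry a given IT load.

    Simplified model: usable energy divided by power drawn, derated by
    a flat inverter efficiency. Real batteries also derate with
    discharge rate and age, which this sketch ignores.
    """
    return battery_capacity_wh * inverter_efficiency / load_w

# A hypothetical 40 kWh battery bank carrying a 10 kW load:
baseline = ups_runtime_hours(40_000, 10_000)   # 3.6 hours of runtime
# The same bank after a 20% efficiency gain trims the load to 8 kW:
improved = ups_runtime_hours(40_000, 8_000)    # 4.5 hours of runtime
```

The flip side, as Chernicoff notes, is that the load figure is a moving target: if data load creeps back up, the runtime you planned for quietly shrinks, which is exactly why the plan needs periodic re-validation.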
Of course, that's easier said than done, particularly in an age of constantly shifting data patterns and infrastructure. Fujitsu Laboratories says it has a fix in the form of a new data center simulation system that plots changes in data and energy infrastructure to gauge the impact on continuity, disaster recovery and other functions. The system is said to calculate conditions like thermal air flow and power consumption at speeds 1,000 times faster than current platforms, offering a more accurate picture of energy-saving initiatives.
At the same time, new energy management software is proving more effective at assessing the risks to seemingly stable energy supplies. AiNET recently patented its Critical Power Protection Supervisor (CPPS) system that the company says identifies a number of threats to power infrastructure that often escape the attention of data center operators. These include input voltage variations caused by lightning strikes and other events, as well as loss of phase and UPS failures. The company says it has integrated monitoring tools for these factors into its automation stack, providing the ability to adapt and respond to threats before they become critical.
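To make the threat categories concrete, here is a minimal monitoring sketch. It is not AiNET's implementation (CPPS internals are not public); the nominal voltage and the 90%/110% sag/swell thresholds are illustrative assumptions, but the checks map directly to two of the conditions named above: input voltage variation and loss of phase.

```python
from dataclasses import dataclass

# Assumed nominal three-phase supply and thresholds for this sketch;
# real deployments would tune these to their own power quality limits.
NOMINAL_V = 480.0
SAG_THRESHOLD = 0.90    # flag input sags below 90% of nominal
SWELL_THRESHOLD = 1.10  # flag surges above 110% of nominal

@dataclass
class PhaseReading:
    phase: str    # "A", "B", or "C"
    volts: float  # measured RMS input voltage

def assess(readings: list[PhaseReading]) -> list[str]:
    """Return alerts for voltage sags, swells, and loss of phase."""
    alerts = []
    seen = {r.phase for r in readings}
    for missing in sorted({"A", "B", "C"} - seen):
        alerts.append(f"loss of phase {missing}")
    for r in readings:
        ratio = r.volts / NOMINAL_V
        if ratio < SAG_THRESHOLD:
            alerts.append(f"voltage sag on phase {r.phase}: {r.volts:.0f} V")
        elif ratio > SWELL_THRESHOLD:
            alerts.append(f"voltage swell on phase {r.phase}: {r.volts:.0f} V")
    return alerts

# A lightning-induced swell on phase B, with phase C dropped entirely:
print(assess([PhaseReading("A", 478.0), PhaseReading("B", 545.0)]))
```

The value of wiring such checks into an automation stack, as the article describes, is that an alert can trigger a response (transferring load, starting generators) before a transient becomes an outage.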
The symbiotic relationship between power efficiency and disaster recovery is just one of many that have arisen in the data center over the past decade. For the first time, organizations can actually improve data capacity and processing capability while reducing their hardware footprint through virtualization and the cloud. And advances in solid-state technology are enabling both faster throughput and tighter integration with network and server infrastructure.