Disaster Preparedness Vital in the Enterprise, on the Outside Network


Christmas came a bit early this year for manufacturers of backup power sources. The U.S. Federal Communications Commission has released rules mandating that mobile providers and local exchange carriers install backup power for cell sites and remote telecom facilities, according to this story in Data Center Knowledge.


The rules, the piece says, are a direct response to the communications meltdown after Hurricane Katrina and, though not stated in the article, 9/11. More recently, the bridge collapse in Minneapolis demonstrated how fragile -- and still inadequate -- the system is. A less publicized goal of the 700 MHz auction set for next month is to create a national broadband emergency network.


The story says that there are more than 210,000 cellular sites and about 20,000 telecom central offices (COs) across the country, along with innumerable switches and terminals. The new rules demand that central offices be able to run for 24 hours off the grid. Cell sites, remote switches and terminals must keep on keeping on for eight hours without network power. The rules -- which the story, in an understatement, says could "affect the market for diesel generators" -- require companies to file a plan within six months describing how they will meet the new demands. The CTIA-The Wireless Association has filed a lawsuit seeking to stop the FCC's plans.


The FCC's efforts fit under the broad banner of disaster recovery and business continuity, a major concern both in the telecommunications and enterprise sectors. The two areas -- inside and outside the company, building or campus -- are deeply related.


Organizations considering merging their voice and data networks must think carefully about business continuity and disaster recovery. This Computer Stuff post points out that outages can come from something as localized and innocuous as a malfunctioning sprinkler system or something as dramatic as a hurricane -- or a terrorist attack.


The piece lists what a company should look for in its VoIP platform, including a highly reliable architecture; redundancy of key components; a long mean time between failures; the ability for employees to use the system off site; network redundancy; and backup power.


This is a nice outline of what a disaster-recovery plan should look like, from a second-year Master of Science in Information Systems student at the University of North Carolina. The brief introduction highlights the events that could cause the organization to experience an emergency (natural, technical, legal and/or environmental) and what areas must be addressed (electrical, telecommunications, records, data, PC data, legal and safety).


The first step is to form a planning committee and to identify elements of concern in each risk area. (For instance, the key concerns in telecommunications are telephones, fax machines, computer networks and printers.) The team must brainstorm to create a list of steps of varying importance ("essential," "needed," "nice," "handy") in each category. Steps to take during and after a crisis are outlined (contain the crisis, be decisive and communicate, resolve the crisis, avoid blame and learn from the crisis). The outline mandates that the plan be written in "an abbreviated but thorough table form."
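The outline's structure -- risk areas, each holding steps tagged with one of the four priority labels, rendered as a table -- can be sketched in a few lines of Python. The area names and sample steps below are illustrative assumptions, not taken from the outline itself:

```python
# Sketch of the outline's plan structure: risk areas, each with recovery
# steps filed under the four priority tiers the outline names.
PRIORITIES = ("essential", "needed", "nice", "handy")

def add_step(plan, area, priority, step):
    """File a recovery step under its risk area and priority tier."""
    if priority not in PRIORITIES:
        raise ValueError(f"unknown priority: {priority}")
    plan.setdefault(area, {p: [] for p in PRIORITIES})[priority].append(step)

def plan_table(plan):
    """Render the plan in 'abbreviated but thorough table form'."""
    rows = [f"{'AREA':<20}{'PRIORITY':<12}STEP"]
    for area, tiers in plan.items():
        for priority in PRIORITIES:
            for step in tiers[priority]:
                rows.append(f"{area:<20}{priority:<12}{step}")
    return "\n".join(rows)

# Hypothetical entries for two of the outline's risk areas.
plan = {}
add_step(plan, "telecommunications", "essential", "restore telephone service")
add_step(plan, "telecommunications", "needed", "bring fax machines back online")
add_step(plan, "electrical", "essential", "switch to backup power")
print(plan_table(plan))
```

The point of the sketch is simply that keeping every step tagged by area and tier makes the "abbreviated but thorough" table trivial to produce and keep current.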


Byte and Switch says that the heavy rains, flooding and winds in Oregon and Washington -- the second most recent natural disaster, after the big freeze in the middle of the nation -- weren't too big a deal for many companies. It wasn't that the wind and rain weren't intense. It was that folks had planned well.


For instance, ViaWest, a managed-service provider, chose Hillsboro, Ore., for its home because it is about 300 feet higher than downtown Portland, about 10 miles away. The company also mirrors its data to four data centers in Utah and Colorado. Finally, the state of Oregon's IT team was well prepared for an emergency of a different sort because it recently participated in a three-day simulated terrorist attack.


It is senseless in this day and age to treat enterprise and telecommunications disaster recovery and business continuity as separate concerns. The bottom line is that service providers and corporations must work together to prepare for man-made or natural disasters.