New Options for Fault Tolerance

Arthur Cole

Fault tolerance is quickly becoming a must-have feature on new servers as organizations seek to boost availability and quality of service (QoS), not just for transactional and other mission-critical applications but for general data processing as well.


Part of the trend is driven by the fact that more and more lower-value processes are running on virtual partitions within single servers, making disruptions caused by physical failures all the more acute. But as demand for fault tolerance heads further into the mainstream, traditional high-end, hardware-based systems are feeling pressure from mid-level software solutions, according to this piece in eChannelLine.


One of the major beneficiaries of this trend is Massachusetts-based Stratus Technologies, which has developed a system that can monitor server health across the globe. The company recently unveiled the first fruit of a development deal with NEC, the 320Fc-MR. The system features Stratus' Active Upgrade module, which lets administrators install updates and OS patches without rebooting.


Stratus also recently signed on with Emulex to include that company's FibreSpy 850 embedded storage switch in its own V Series fault-tolerant server. The move makes it easier for Stratus customers to deploy backup storage environments for improved data reliability.


Other top-tier vendors are beefing up fault-tolerance capabilities on some of their most popular lines. Sun Microsystems, for instance, has begun loading the Solaris 10 OS with the Zettabyte File System (ZFS), featuring RAID-Z protection, in conjunction with the Lustre object file system.
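For readers unfamiliar with RAID-Z, ZFS builds the redundancy into the pool itself rather than relying on a hardware RAID controller. A minimal sketch of creating a single-parity RAID-Z pool on Solaris 10 follows; the pool name and disk device names here are placeholders, not from the article:

```shell
# Create a single-parity RAID-Z pool named "tank" from three disks.
# (Device names are hypothetical; substitute your system's devices.)
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0

# Check pool health and the redundancy layout.
zpool status tank

# Filesystems created inside the pool inherit RAID-Z protection automatically.
zfs create tank/data
```

With single-parity RAID-Z, the pool survives the loss of any one disk; ZFS also checksums every block, so silent corruption on a surviving disk can be detected and repaired from parity.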


Fault tolerance is best viewed as one component in an overall system reliability framework that includes automated backup, failover and other tools. But now that software-based systems are finally bringing some diversity to the market, it should be easier for enterprises to find solutions that fit their specialized needs.

Jul 13, 2007 12:48 PM Buck McCune says:
The Tandem/Compaq/HP NonStop servers are the best example of FT and availability...
Jul 17, 2007 9:29 AM Andy says:
I fail to see why people are so hung up on hardware fault tolerance. What we really need to focus on is application designs that are fault tolerant. Too often applications and software solutions are developed without this design principle; then implementers put them on some fault-tolerant hardware and sell it as 'great'. In fact, if the software were designed correctly, it wouldn't need fault-tolerant hardware to continue to provide continuous service.
Jul 20, 2007 12:30 PM Chris T. says:
For ease of implementation (i.e., no clustering amendments requiring software mods) at little more cost than a basic server (i.e., 4 CPUs or fewer), I've found the NEC Express5800 320Fa range unbeatable. I've seen installations running Windows and Linux OSes and applications in true 7x24 fashion.
Jul 26, 2007 8:44 AM Fedor says:
"In fact if the software was designed correctly it wouldn't need fault tolerant hardware to continue to provide continuous service." Not true, of course; perfect software will never boot from a corrupted disk, to give the most obvious example. You can work around that using cluster technology, but that will immediately introduce extra cost and complexity. Fault-tolerant hardware builds in a buffer that goes beyond standard redundancy. Therefore, for TRUE business-critical applications, fault tolerance provides the best availability level, especially when the systems-management burden of the available alternatives is taken into consideration. In the list of products mentioned so far, Bull's NovaScale R620 ought to be mentioned.
