Fault tolerance is quickly becoming a must-have feature on new servers as organizations seek to boost availability and quality of service (QoS), not just for transactional and other mission-critical applications but for general data processing as well.
The trend is partly driven by the growing number of lower-value processes running on virtual partitions within single servers, which makes disruptions caused by physical failures all the more acute. But as demand for fault tolerance heads further into the mainstream, traditional high-end, hardware-based systems are feeling pressure from mid-level software solutions, according to this piece in eChannelLine.
One of the major beneficiaries of this trend is Massachusetts-based Stratus Technologies, which has developed technology for monitoring server health across the globe. The company recently unveiled the first fruit of a development deal with NEC, the 320Fc-MR. The system features Stratus' Active Upgrade module, which lets administrators install updates and OS patches without rebooting.
Stratus also recently signed on with Emulex to include that company's FibreSpy 850 embedded storage switch in its own V Series fault-tolerant servers. The move makes it easier for Stratus customers to deploy backup storage environments for improved data reliability.
Other top-tier vendors are beefing up fault-tolerance capabilities on some of their most popular lines. Sun Microsystems, for instance, has begun loading the Solaris 10 OS with ZFS (the Zettabyte File System), which features RAID-Z protection, in conjunction with the Lustre object file system.
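For readers unfamiliar with how parity-based protection like RAID-Z keeps data available after a disk loss, the core idea can be sketched in a few lines. This is a toy single-parity illustration only, not Sun's implementation; the block sizes and function names are hypothetical:

```python
# Toy sketch of single-parity striping, the idea underlying RAID-Z1:
# one parity block per stripe lets any single lost block be rebuilt.

def parity(blocks):
    """XOR all data blocks together to form the parity block."""
    result = bytes(len(blocks[0]))
    for b in blocks:
        result = bytes(x ^ y for x, y in zip(result, b))
    return result

def rebuild(surviving_blocks, parity_block):
    """Recover one missing data block from the survivors plus parity."""
    return parity(surviving_blocks + [parity_block])

# Hypothetical 4-byte data blocks striped across three disks.
data = [b"AAAA", b"BBBB", b"CCCC"]
p = parity(data)

# Simulate losing the second disk's block and rebuilding it.
recovered = rebuild([data[0], data[2]], p)
assert recovered == b"BBBB"
```

Real implementations add checksumming and multi-parity variants on top of this principle, but the XOR relationship is what allows a degraded array to keep serving data.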
Fault tolerance is best viewed as one component in an overall system reliability framework that includes automated backup, failover and other tools. But now that software-based systems are finally bringing some diversity to the market, it should be easier for enterprises to find solutions that fit their specialized needs.