Tapping into the Enterprise Need for Hyperscale

Arthur Cole

As if the enterprise did not have enough to worry about already, it seems the next big push toward hyperscale architectures is on the horizon.

Having sold the industry on the wonders of Big Data and the Internet of Things, vendors have arrived at the inevitable conclusion that organizations will need to either bulk up data infrastructure to handle larger loads or streamline existing footprints to enable high-density hardware configurations for critical functions.

A few months ago, companies like Arista Networks and Dell began rolling out hyperscale platforms for the average enterprise, banking on the notion that even if organizations did not aspire to Google- or Facebook-level scale, they could still benefit from the modularity and simplified management aspects of integrated data architectures.

In recent weeks, however, these efforts have evolved from a handful of systems to full-blown strategic initiatives to tap what is expected to be the next growth market in data center hardware. Dell, for example, recently created the Datacenter Scalable Solutions (DSS) group specifically to target mid-level organizations that are looking to expand their computing capabilities for emerging applications like web services and Big Data analytics. The company anticipates a $7 billion market in which demand for x86 servers like the new PowerEdge C6320 will grow three times faster than in traditional data center settings.


Meanwhile, a number of start-ups are taking aim at hyperscale for the average enterprise as well. A company called Infinidat recently emerged from stealth with a plan to enable web-scale storage capability within a form factor conducive to today’s data center confines. The platform provides 2PB within a 42U rack, augmented by 12Gbps throughput and 750,000 IOPS performance. It also offers an impressive 7-nines reliability using an active-active-active controller architecture featuring DRAM and Flash on top of 480 nearline SAS hard drives that utilize a proprietary driver to spread workloads across all three nodes. At the moment, the system provides block-level storage only, although NFS, mainframe and object storage are expected within a few months.
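To put that 7-nines figure in perspective, availability claims translate directly into allowed downtime per year. The sketch below is just the standard availability arithmetic, not anything from Infinidat's documentation:

```python
# Illustrative: translate "N nines" availability into allowed downtime per year.
# 7 nines = 99.99999% availability.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def downtime_seconds(nines: int) -> float:
    """Allowed downtime per year for an availability of `nines` nines."""
    unavailability = 10 ** -nines
    return SECONDS_PER_YEAR * unavailability

for n in (3, 5, 7):
    print(f"{n} nines -> {downtime_seconds(n):.2f} s/year")
# 7 nines works out to roughly 3 seconds of downtime per year,
# versus about 5 minutes for the 5 nines often quoted for enterprise arrays.
```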

Networking will be key to any hyperscale operation, of course, so it’s no surprise that many fabric architectures are gravitating toward extreme scale. Big Switch Networks is looking to satisfy these requirements by uniting physical and virtual networks within an OpenStack-based framework, which the company says will enable hyperscale network principles within a normal data center. The Big Cloud Fabric 3.0 utilizes standard switches and a proprietary controller to enable management via a single interface even as the fabric scales across multiple nodes. This style of network disaggregation is expected to provide the kind of agility required to implement full software-defined networking (SDN) over hyperscale infrastructure.
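The disaggregated model described above boils down to one logical control point fanning configuration out to many standard switches. The conceptual sketch below illustrates that single-pane-of-glass idea; the class and method names are purely illustrative, not Big Switch's actual API:

```python
# Conceptual sketch of a fabric controller: one logical policy,
# pushed uniformly to every standard switch registered with it.
# Names here are hypothetical, for illustration only.

class FabricController:
    def __init__(self):
        self.switches = []

    def register(self, switch_id: str) -> None:
        """Add a switch to the fabric under this controller's management."""
        self.switches.append(switch_id)

    def apply_policy(self, policy: dict) -> dict:
        # One administrative action fans out to every node in the fabric,
        # so operators manage the whole fabric as if it were a single switch.
        return {sw: dict(policy) for sw in self.switches}

ctrl = FabricController()
for sw in ("leaf-1", "leaf-2", "spine-1"):
    ctrl.register(sw)
print(ctrl.apply_policy({"vlan": 100, "acl": "allow-web"}))
```

The point of the pattern is that the fabric scales by adding switches, while the management surface stays a single interface.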


Pushing hyperscale architectures to the common enterprise market is the only logical way in which traditional IT vendors can get in on the action. The true hyperscale players like Google and Facebook have already designed their own platforms and have the market clout to purchase all the hardware they need from ODM suppliers in Asia.

By tailoring the technology to mid-level deployments, and by pitching its ability both to streamline existing infrastructure and to prepare for additional scale as Big Data needs swell, the vendor and distributor communities gain a lifeline for their stagnating hardware businesses.

And the enterprise gains a robust, modular platform on which to build a highly agile, exceedingly efficient infrastructure for next-generation data needs.

Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.



Aug 25, 2015 9:26 AM, Bob Noel says:
Traditional static leaf/spine data center networking approaches, which have not changed in the last 25 years, are an inhibitor to achieving the speed and agility needed to support the hyperscale growth of data and the need for real-time analytics. This transition has the potential to be the key tipping point where established vendors fall short, opening the door to more nimble, purpose-built solutions. Storage and compute have radically changed over the last decade. It's time for the network to evolve as well, but this will require a fundamental shift in approach. Software-defined capabilities will play an important role, but the network must also flatten into a single tier to accommodate the tremendous east/west traffic patterns generated in highly virtualized and hyperscale environments. Multi-tiered leaf/spine approaches were beautifully designed for a time when traffic was predictable and north/south in nature, but the virtual data centers of today and the hyperscale environments of tomorrow are driving completely different traffic patterns. Bob Noel, Plexxi
