On the Road to the Dynamic Data Center

Arthur Cole

Your enterprise may be virtual, consolidated, even cloud-ready, but is it dynamic?

If you take a close look at the major technology announcements lately, the overriding goal is to provide a flexible architecture that allows you to repurpose and reprovision resources, preferably on the fly, to suit ever-changing data requirements. It's a far cry from the static silos that have existed in enterprise architectures for the past several decades, but is it realistic?

The people who are providing these new technologies certainly think so. The latest entrant is 3Leaf Systems, which this week introduced the Dynamic Data Center Server built around a proprietary ASIC using AMD Opterons running the Linux OS. The package can hold up to 192 cores providing as much as 1 TB of shared memory and 8 TB of storage using the InfiniBand interconnect. Future iterations will tap Intel chips using the company's QuickPath Interconnect (QPI).

With the DDC ASIC as a base, the company has devised a software stack aimed at pooling and sharing data center resources across a wide range of configurations. The pooling module, for example, lets you collect any number of x86 servers into a single entity, allowing the OS to be rebooted across the entire system or in designated clusters. The sharing module then lets you allocate resources down to individual cores, enabling the OS to use select portions of individual servers, with static reconfiguration on reboot. There's also a flex module that can dynamically expand or shrink the OS image based on available CPU, memory or I/O resources.
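To make the pooling/sharing/flex distinction concrete, here is a minimal conceptual sketch in Python. To be clear, this is not 3Leaf's actual software or API -- all class and method names here are hypothetical -- it simply models the three ideas described above: aggregating servers into a pool, carving out a share of that pool for an OS image, and flexing that share up or down against available capacity.

```python
# Hypothetical model of the pooling/sharing/flex concept -- illustrative
# only; names and structure are NOT 3Leaf's actual software stack.

from dataclasses import dataclass, field


@dataclass
class Server:
    name: str
    cores: int


@dataclass
class Pool:
    """Pooling: collect any number of x86 servers into one logical entity."""
    servers: list = field(default_factory=list)

    def add(self, server: Server) -> None:
        self.servers.append(server)

    @property
    def total_cores(self) -> int:
        return sum(s.cores for s in self.servers)


class OSImage:
    """Sharing: an OS image allocated a slice of the pool, down to cores."""
    def __init__(self, pool: Pool, cores: int):
        if cores > pool.total_cores:
            raise ValueError("cannot allocate beyond the pool's capacity")
        self.pool = pool
        self.cores = cores

    def flex(self, delta: int) -> int:
        """Flex: dynamically grow or shrink the image within capacity."""
        new_size = self.cores + delta
        if 0 < new_size <= self.pool.total_cores:
            self.cores = new_size
        return self.cores


# Pool two 48-core servers, share 64 cores with one image, then flex it.
pool = Pool()
pool.add(Server("node1", 48))
pool.add(Server("node2", 48))
image = OSImage(pool, cores=64)
image.flex(+16)  # expand the image to 80 of the pool's 96 cores
```

The point of the model is that the OS image is sized against the pool, not against any one physical box -- which is exactly the decoupling these dynamic data center architectures are after.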

And following up on yesterday's news of Cisco's expanded relationship with EMC and VMware to devise what is essentially a commodity data center package, we have word from HP that it is close to unveiling a new Converged Infrastructure framework designed to increase the management capability across its server, storage and network portfolios. As our own Mike Vizard reports on CTO Edge, the goal is to extend its systems management approach over distributed architectures, essentially allowing you to establish a grid of dynamic resource pools tied together over a unified fabric.

It's interesting to note that HP doesn't have an actual system just yet, just a concept -- which makes me wonder how much of this is real and how much is simple marketing tit-for-tat in an effort to take the wind out of Cisco's Vblock announcement. Still, there's no doubt HP is on the road to something, considering it is the only major platform vendor that has all three data center technologies in-house -- servers, storage and networking.

Naturally, when it comes to this level of architectural reorganization, we're not simply talking about swapping out old boxes in favor of new ones. Rather, as I mentioned last month, this is a top-to-bottom revamp of some pretty entrenched ways of thinking -- not only about how systems interact with each other, but with people as well. In the end, however, the change is probably inevitable, because the rise of virtual infrastructure, and the potentially vast number of resource elements it can support, simply cannot be managed in the static, linear fashion that has characterized IT oversight thus far. And in terms of lowering costs and increasing efficiency, a dynamic infrastructure -- one that extends from the local server to the cloud halfway around the world -- will be unbeatable.

The problem is getting from here to there. And if in the end it turns out that this new paradigm means IT will be better able to manage these vast resources with fewer people, at least there will be plenty of work to do in the meantime.
