High-performance computing (HPC) begins in the research labs and eventually trickles down to the commercial world. And while much of the attention goes to faster processing, the fact is that HPC is having a real-world effect on the data center.
Leading researchers are taking a good, hard look at how ever-increasing compute power is affecting storage environments. Questions like how many disks will be needed or how quickly data can be written and retrieved are just the tip of the iceberg. Management issues will become increasingly complex, particularly when searching for a single document somewhere amid the petabytes.
These issues may also be nearer than previously thought. AMD, for one, is talking about chip advances in which the CPU and the GPU are integrated at the silicon level. Theoretically, that could boost the simple desktop up to the teraflop level, provided such things as power management, memory hierarchy and verification get worked out.
In all likelihood, this will prove a fertile market for firms already working on management issues for the major research houses. Panasas is one such firm, having recently released the latest version of its Active Scale storage environment, which tripled the throughput per client to 500 Mbps. Los Alamos National Laboratory uses it for its Roadrunner system, billed as the first attempt at a sustained petaflop supercomputer.
High-powered computing offers tremendous benefits to both large and small enterprises, but it is only one aspect of an overall data environment. And like all environments, changes in one area can vastly affect others. Faster processing is great, but only if all systems, storage included, can keep up.