As the enterprise delves ever deeper into virtual and cloud infrastructure, one salient fact is becoming clearer: Attributes like scalability and flexibility are not part and parcel of the technology. They must be deliberately designed and integrated into the environment before they can deliver the benefits that users expect.
Even at this early stage of the cloud transition, providers are already feeling the blowback that comes from overpromising and under-delivering. According to a recent study by Enterprise Management Associates (EMA), one-third of IT executives say that scaling, whether up or down, was not as easy as they had been led to believe. With data loads ebbing and flowing in a continual and often chaotic fashion, simply matching loads to available resources is a challenge even with modern automation and orchestration software.
Scalability is not reserved for the cloud alone, however. Internal data infrastructure will need to scale as well, although, as InfoWorld’s Eric Knorr points out, it helps to determine whether your applications would be better served by scaling up or scaling out. In most cases, he says, scale-out provides greater flexibility and reduces the points of failure that can cripple data services. With NoSQL and other open source, commodity platforms taking hold, scaling out on low-cost, white-box infrastructure can often deliver extensive capability at lower cost, and that trend is likely to continue now that virtualized, clustered server architectures can be augmented with software-defined networking and storage.
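To make the scale-out idea concrete, here is a minimal sketch, not drawn from any of the sources above, of consistent hashing, the partitioning technique many NoSQL stores use to spread data across low-cost nodes so that no single box becomes a point of failure. The node names and replica count are illustrative assumptions.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Maps keys to nodes so that adding or removing a commodity node
    only remaps a small fraction of the keys -- the basic mechanic
    behind many scale-out data stores."""

    def __init__(self, nodes=None, replicas=100):
        self.replicas = replicas      # virtual nodes per physical node
        self._ring = {}               # hash position -> node name
        self._sorted_keys = []
        for node in nodes or []:
            self.add_node(node)

    def _hash(self, value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def add_node(self, node):
        for i in range(self.replicas):
            pos = self._hash(f"{node}:{i}")
            self._ring[pos] = node
            bisect.insort(self._sorted_keys, pos)

    def remove_node(self, node):
        for i in range(self.replicas):
            pos = self._hash(f"{node}:{i}")
            del self._ring[pos]
            self._sorted_keys.remove(pos)

    def get_node(self, key):
        """Walk clockwise around the ring to the first virtual node."""
        if not self._ring:
            return None
        idx = bisect.bisect(self._sorted_keys, self._hash(key)) % len(self._sorted_keys)
        return self._ring[self._sorted_keys[idx]]

# Adding a fourth white-box node shifts only a portion of the keys.
ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
print(ring.get_node("customer:42"))
ring.add_node("node-d")
print(ring.get_node("customer:42"))
```

The design choice worth noting is the virtual-node count: more replicas per physical server smooths out the key distribution when cheap nodes are added or retired.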
But it’s also important to realize that scalability does not stand alone when it comes to delivering next-generation data infrastructure. Availability, reliability and numerous other “-abilities” are necessary as well. According to Xtivia’s Bob Dietrich, these are best delivered through broad redundancy across database, web server and other platforms, along with flexible hardware and software configuration, load balancing and related techniques, and liberal use of distributed processing and virtualization. Expect this to touch virtually every aspect of your data environment, from initial authorization and authentication to database analysis and systems monitoring.
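As a simple illustration of the redundancy-plus-load-balancing pattern Dietrich describes, the following sketch rotates requests across duplicate backends and routes around one that has failed. The backend names and health-check hooks are hypothetical, not part of any product cited here.

```python
import itertools

class RedundantPool:
    """Round-robin balancer over redundant backends; unhealthy
    backends are skipped so a single failure does not stop service."""

    def __init__(self, backends):
        self.backends = list(backends)   # e.g. redundant web or database servers
        self.healthy = set(self.backends)
        self._cycle = itertools.cycle(self.backends)

    def mark_down(self, backend):
        self.healthy.discard(backend)    # called by an external health check

    def mark_up(self, backend):
        self.healthy.add(backend)

    def next_backend(self):
        """Return the next healthy backend, or raise if none remain."""
        for _ in range(len(self.backends)):
            candidate = next(self._cycle)
            if candidate in self.healthy:
                return candidate
        raise RuntimeError("no healthy backends available")

# Hypothetical pool of redundant web servers behind the balancer.
pool = RedundantPool(["web-1", "web-2", "web-3"])
pool.mark_down("web-2")                            # simulated failure
print([pool.next_backend() for _ in range(4)])     # web-2 is never selected
```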
And even once you’ve gotten past all that, you’ll need to accept that different kinds of scale correspond to the various functions the enterprise performs to drive business processes. As Pegasystems CTO Don Schuerman notes, a single application area like Business Process Management (BPM) encompasses three distinct dimensions of scalability. First is physical scale, the need to ensure that loads do not exceed resources, a task that grows increasingly difficult as the number of processes, rules, user interfaces and the like mounts. There is also organizational scale, which covers things like customer segments, departments and product lines, all of which can slow down BPM and other processes. And there is customer scale, which requires consistency across all consumer-facing access points, as well as the delivery of an optimized experience even across legacy applications and infrastructure.
Achieving scalability, then, is not simply a matter of deploying the latest cloud or virtual technology, but a holistic endeavor that encompasses a wide range of systems, processes and organizational attributes.
No one can deliver “scale” to your doorstep. It’s something you have to build for yourself.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.