The Dragons of Cloud Infrastructure Integration

Loraine Lawson

I've often wondered how cloud computing works with on-premise infrastructure. I'm not talking SaaS here. By cloud computing, I mean processing power, storage and other functions that once required on-site hardware.


It seemed to me to require some form of integration, but most of the focus has been on SaaS-to-on-premise integration, with precious little written about integrating infrastructure itself.


I was beginning to think it was just me. Maybe everybody else knew something I didn't, I thought, like infrastructure integration being a non-issue, or it all being handled by invisible duct tape or something.


So you can understand why I was excited to see Lori MacVittie address this issue on her blog, Two Different Socks. Finally, proof I wasn't completely clueless in wondering if this is an issue.


In fact, it turns out cloud computing is forcing infrastructure to shift to a more Web 2.0 model of integration, which, by and large, means the lightweight integration you get by using APIs.


"What cloud computing is doing is forcing infrastructure - network, storage and application delivery - models to adopt many facets of development," MacVittie writes. That means, among other things, supporting the sharing of data by integration and using a service-based approach to provisioning, she explains.


For that matter, she argues, the idea that you can have cloud computing without integration doesn't hold up. Follow any cloud computing resource from end to end and, at some point, you'll almost certainly find a physical connection. Sooner or later, using the cloud means integrating infrastructure, she explains:

Cloud bursting or cloud extension or cloud-what-have-you models that leverage cheaper compute and storage resources from public cloud providers require integration at the infrastructure layers. Using storage resources from the cloud as part of a larger tiering strategy mean that some piece of infrastructure-storage virtualization likely-is integrating those resources via an API. Similarly, compute resources must be integrated-included-in architectures in the data center if they are used as part of a dynamic capacity extension strategy. That requires some integration via an API or infrastructure capable of natively managing those resources in public cloud environments (which, if we peer close enough, we'll see is enabled via .. an API).
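MacVittie's point about dynamic capacity extension can be sketched in a few lines of code. The snippet below is a minimal, hypothetical illustration only: `CloudClient` and `provision_instance` are made-up stand-ins for a real provider SDK or REST API, not any actual product. The idea is simply that the integration point is an API call, not a cable into a rack.

```python
# Minimal sketch of a "cloud bursting" decision: when on-premise
# utilization crosses a threshold, provision extra capacity through a
# provider API. CloudClient and its methods are hypothetical stand-ins
# for a real cloud provider's SDK or REST endpoint.
from typing import Optional


class CloudClient:
    """Stub for a public cloud provider's provisioning API."""

    def __init__(self) -> None:
        self.instances: list[str] = []

    def provision_instance(self, instance_type: str) -> str:
        # A real client would issue an authenticated API request here.
        instance_id = f"cloud-{len(self.instances)}"
        self.instances.append(instance_id)
        return instance_id


def burst_if_needed(utilization: float, client: CloudClient,
                    threshold: float = 0.85) -> Optional[str]:
    """Extend on-premise capacity into the cloud past a utilization threshold."""
    if utilization > threshold:
        # This is the integration MacVittie describes: infrastructure
        # reaching into the cloud via an API rather than new hardware.
        return client.provision_instance("general-purpose")
    return None  # below threshold: stay on-premise


client = CloudClient()
print(burst_if_needed(0.60, client))  # below threshold, no cloud resource
print(burst_if_needed(0.92, client))  # over threshold, bursts to the cloud
```

Even in this toy form, the decision logic, the provider API and the on-premise monitoring all sit in different environments, which is exactly where the integration challenge she describes comes from.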

There's been a lot written about how APIs make life simpler. And as Dion Hinchcliffe recently explained to me, APIs and lightweight integration can be applied within enterprises in smart ways.


But MacVittie warns this issue of infrastructure integration - even with APIs - may not be so simple because you're integrating across environments, models and architectures:

I'd also suggest investing heavily in turkeys. Because if enterprise application integration required sacrificial chickens, we're probably going to need something a bit bigger to meet the challenge ...

MacVittie's blog tends toward the technical, and this is no exception. I point it out here as a sort of early warning for technology leaders who want to explore the new realm of cloud computing: Yes, there could be gold in the form of savings, but here there be integration dragons.
