Cloud computing tends to condition people into thinking that where a remote data center is physically located doesn’t matter as long as it’s accessible. But from a performance perspective, the laws of physics still very much apply in the cloud, which is one reason we’re starting to see cloud service providers focus on specific regional markets.
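To see why physics still matters, consider a back-of-envelope sketch of the latency floor imposed by distance alone. This assumes light propagates through optical fiber at roughly two-thirds its speed in a vacuum (about 200,000 km/s), and the city-pair distances used are rough illustrative estimates, not measured route lengths:

```python
def min_rtt_ms(distance_km: float, fiber_speed_km_s: float = 200_000) -> float:
    """Lower bound on round-trip time (ms) over a fiber path of the
    given one-way distance, ignoring routing, queuing, and processing
    delays -- real RTTs are always higher than this floor."""
    return 2 * distance_km / fiber_speed_km_s * 1000

# Rough straight-line distances (illustrative assumptions):
print(min_rtt_ms(300))    # Boston to New York: ~3 ms RTT floor
print(min_rtt_ms(6600))   # Miami to Sao Paulo: ~66 ms RTT floor
```

No amount of engineering can push a round trip below this bound, which is why placing servers near the users (or the exchange point) they serve pays off.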
For example, Markley Group this week announced the availability of its new infrastructure-as-a-service (IaaS) offering, Markley Cloud Services (MCS), which runs on VMware-based servers housed in a data center next to the Boston Internet Exchange, the hub through which most of the Internet traffic in New England passes.
According to Joshua Myles, product manager for MCS, that proximity makes a performance difference for any company with operations in New England, because its Internet traffic no longer has to pass through some other hub, such as one in New York, before heading back to New England.
Similarly, Telefonica has taken great care to set up a data center in Miami at precisely the point where the undersea cable it operates carries Internet traffic between North and South America. According to Tim Marsden, director of cloud verticals for Telefonica, that data center is an ideal location for any North American organization to host an application that primarily serves South American customers.
As more mission-critical production applications find their way into the cloud, IT organizations are rediscovering the critical role network latency plays in application performance, which means that, just as in the physical world, location in the cloud can mean everything.
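The latency effect compounds quickly for "chatty" applications that make many sequential round trips per user action. A minimal sketch, using hypothetical round-trip times (1 ms for a nearby data center versus 70 ms for a distant one) and an assumed 40 sequential back-end calls per page load:

```python
def sequential_latency_ms(round_trips: int, rtt_ms: float) -> float:
    """Total network wait time when each call must complete before the
    next begins, so per-call latency adds up rather than overlapping."""
    return round_trips * rtt_ms

# Hypothetical page load issuing 40 sequential back-end calls:
print(sequential_latency_ms(40, 1))    # nearby data center: 40 ms of network wait
print(sequential_latency_ms(40, 70))   # distant data center: 2800 ms of network wait
```

The same application goes from imperceptible network overhead to nearly three seconds of waiting purely because of where it is hosted, which is the rediscovery the paragraph above describes.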