While much of the industry is focused on what’s going on in enterprise networking inside the data center to enable cloud computing, IT also needs to consider the network that connects the data centers – particularly how these inter-data-center networks need to change to support new cloud use cases and the associated requirements for bandwidth scalability, low latency, security, virtualization and automation. In this slideshow, Ciena tackles the connectivity issues between data centers and from data centers to the cloud, and outlines 10 reasons why you need a better network to the cloud.
Click through for 10 reasons why you need to focus on a better network between data centers and the cloud, as identified by Ciena.
You need a flatter Layer 0/1/2 network architecture between data centers that delivers better scalability, lower latency and deterministic performance. With this architecture, the network won’t be a bottleneck when you need it the most, and it will deliver throughput at the lowest cost per bit.
Today you are running SaaS or infrastructure applications that stay resident in the provider cloud, often locked behind semi-proprietary architectures. This is a logical first implementation that’s common to most technology rollouts. The next step is also predictable – a turn to a more open architecture with standards in place for APIs and management tools. By taking this step, you unlock the potential to federate your private data centers with multiple provider cloud data centers, enabling greater workload mobility and the ability to deploy entirely new applications that will require a better network to run effectively.
As cloud implementations mature from trials or toe-dipping to full production, management will demand to see tangible benefits from the cloud deployment. The way you design your network can make or break that success. The network is the strategic lever for achieving the cloud benefits of lower costs, reduced deployment time and new functionality. In short, the cloud is only as good as the network.
Today, the enterprise typically uses cloud infrastructure for simple storage backup to cloud data centers over low-speed IP networks, trickling the data asynchronously as the network permits. This use case typically fits small businesses with relatively modest amounts of data. Tomorrow, the enterprise will want to take advantage of lower-cost cloud storage for much larger data sets. It will want a network that can respond dynamically when terabytes need to move — without bottlenecks, security holes or dropped packets. No more sending hard disks via FedEx to the cloud data center.
Workload orchestration between cloud data centers, and between enterprise and cloud data centers, will be driven by policy-based software automation tools. On-demand change in performance parameters such as bandwidth scalability will be accomplished through high-level software interfaces into the network control planes, not by an operator using a command line interface to the equipment. This performance-on-demand will be triggered at the application level, ensuring that the adjustments to the network and to the cloud meet the business requirements.
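To make the idea concrete, here is a minimal sketch of what an application-level trigger might look like, assuming a hypothetical REST-style controller API (the endpoint, payload fields and policy name are illustrative, not any specific vendor's interface):

    import requests  # assumes the widely used 'requests' HTTP library

    CONTROLLER = "https://network-controller.example.com/api/v1"  # hypothetical endpoint

    def request_bandwidth(service_id, gbps, duration_hours):
        """Ask the network control plane to scale an inter-data-center service."""
        payload = {
            "service": service_id,              # e.g., a data center interconnect circuit
            "bandwidth_gbps": gbps,             # target capacity
            "duration_hours": duration_hours,   # release the extra capacity automatically
            "policy": "vm-migration-burst",     # policy that authorizes the change
        }
        resp = requests.post(CONTROLLER + "/bandwidth-requests", json=payload, timeout=10)
        resp.raise_for_status()
        return resp.json()

    # A workload orchestrator might call this before a large VM migration:
    # request_bandwidth("dc1-dc2-wave-7", gbps=100, duration_hours=4)

The point is the pattern, not the particular call: the orchestration software, acting on policy, asks the control plane for more capacity and gets it without anyone touching a command line.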
A virtualized network partitions resources in many different ways, e.g., virtual circuits (EVPLs), virtual wavelengths (Optical Transport Network – OTN), virtual switches (VSIs) and virtual networks (VPNs, Optical Virtual Private Network – OVPNs). Network virtualization improves efficiency by matching bandwidth and topology to a specific application’s needs at a given point in time. This eliminates the need to size every data center interconnection facility for a peak capacity that is only rarely used, which lowers costs by avoiding unnecessary network equipment investment.
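As a rough illustration of why on-demand capacity lowers cost (the figures below are assumptions chosen for the arithmetic, not Ciena numbers), compare an interconnect sized permanently for peak demand with one that scales up only when needed:

    # Illustrative comparison of peak-sized vs. on-demand interconnect capacity.
    # All figures are assumptions for the sake of the example.
    peak_gbps = 400          # capacity needed only during a nightly storage sync
    average_gbps = 80        # typical steady-state demand
    cost_per_gbps = 1.0      # normalized cost per provisioned Gb/s

    static_cost = peak_gbps * cost_per_gbps                      # always built for peak
    burst_premium = 0.25                                         # assumed surcharge for bursting
    on_demand_cost = average_gbps * cost_per_gbps * (1 + burst_premium)

    print("Static peak sizing: %.0f units" % static_cost)        # 400 units
    print("On-demand sizing:   %.0f units" % on_demand_cost)     # 100 units
    print("Savings:            %.0f%%" % (100 * (1 - on_demand_cost / static_cost)))  # 75%

The exact savings depend on how spiky the traffic is, but the wider the gap between peak and average demand, the more a virtualized, on-demand network pays off.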
IT organizations are highly motivated to drive efficiencies — first through data center consolidation, then virtualization. The next step is to use cloud services, which can offer enterprises up to 25 percent infrastructure savings on IT services and hardware expenses.[1] In turn, the network is also a key enabler for cloud service providers to operate multiple data centers as a shared pool of virtual data centers, enabling a 35 percent reduction in total cloud data center resources, according to Ciena. The network is the key ingredient that ties everything together, serving as a backplane across the data centers for flexible delivery of applications and services.
[1] Sand Hill, “Job Growth in the Forecast: How Cloud Computing is Generating New Business Opportunities and Fueling Job Growth in the United States,” 2011.
A north-south traffic flow is user-to-machine through a tiered IP network architecture. This reflects the client/server model and is exercised in the cloud for SaaS-type applications, where the application is simply moved from the in-house data center to the cloud provider’s data center servers. East-west traffic flows are machine-to-machine within, and increasingly between, data centers. Such traffic has much more stringent quality-of-service requirements. In the near future, we’ll see at least an order of magnitude more east-west data workloads, driven by applications like storage synchronization, inter-data-center storage virtualization and virtual machine migrations. This means the cloud network needs to be designed for performance to meet the traffic flow change and intensity challenges of the future.
Whether you use the cloud as a fail-safe to on-premises operations or as your primary IT resource, you need to be prepared for a worst-case scenario such as a natural disaster or unexpected peaks in traffic. You or your cloud provider may need to move hundreds of your virtual machines and/or data stores from the current data center to another data center in the cloud — potentially alongside those of many other enterprises. To move these workloads in time to avoid or recover from disaster — without damaging your business — you’ll need a network that can rapidly add and reallocate capacity to your business continuity/disaster recovery data center.
“Data Center Without Walls” describes an architecture that creates a multi-data center, hybrid cloud environment able to function as a set of virtual data centers drawing on a common resource pool, addressing any magnitude of workload demand and offering seamless workload movement. Enterprises will want to access cloud resources anywhere, anytime. Service providers will want to offer cloud services differentiated by state-of-the-art, programmable network access and economies of scale that leverage their data center footprints. Cloud providers will handle uncertain demand and failover by allocating workloads more efficiently across multiple data centers. The cloud backbone network is the critical link for providing the cost-effective scalability, security and on-demand services that enable the virtual data center.