A recent survey of 240 IT and security professionals found that many organizations plan to migrate data center applications to the cloud despite often experiencing application connectivity disruptions in the process.
Virtualizing a physical data center and moving it to a private or hybrid cloud requires careful analysis of both security and connectivity needs. The challenge is that you must do so without disrupting existing services and without unplanned downtime. And because data center applications are typically composed of servers, networking and storage components, and complex security infrastructure, deploying a new application or making a connectivity update is often fraught with risk. In this slideshow, AlgoSec, a security policy management company, examines five things to consider before migrating data center applications to the cloud.
Click through for five things to consider before migrating data center applications to the cloud, as identified by AlgoSec.
Understand that application connectivity is tied to firewall rules.
Business applications are both critical and abundant in the data center. In a recent survey, nearly one-third of respondents (32 percent) said that they had more than 100 critical applications in their data center, and nearly one in five (19 percent) were responsible for more than 200. The modern data center not only runs high volumes of business applications, from commercial off-the-shelf products such as SAP and SharePoint to homegrown applications performing custom business logic, but these applications are also extremely complex. With more applications and increased complexity, IT teams are pressured not only to ensure availability and security, but also to keep up with the speed of the business.
The impact of an unplanned outage to a critical application can be detrimental to a company, both technically and financially. At a technical level, organizations need to understand that firewall changes are primarily driven by business application connectivity needs. All firewall change requests should be linked to the appropriate application, and the impact on those applications and on the network must be understood.
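As a minimal sketch of this idea, firewall rules can be mapped to the applications that depend on them, so that any change request can first be checked for the applications it would impact. The data model and names below are purely illustrative, not AlgoSec's product or API:

```python
from dataclasses import dataclass, field


@dataclass
class Application:
    """A business application and the firewall rule IDs it depends on."""
    name: str
    rules: set = field(default_factory=set)


def impacted_applications(rule_id, applications):
    """Return the applications whose connectivity depends on a given rule,
    so a change request can be assessed before it is approved."""
    return [app.name for app in applications if rule_id in app.rules]


apps = [
    Application("SAP", {"fw-101", "fw-102"}),
    Application("SharePoint", {"fw-102"}),
]
print(impacted_applications("fw-102", apps))  # ['SAP', 'SharePoint']
print(impacted_applications("fw-101", apps))  # ['SAP']
```

With a mapping like this in place, a request to modify rule fw-102 immediately surfaces both SAP and SharePoint as affected, rather than leaving the reviewer to reverse-engineer the dependency.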
Remove access rules when applications are decommissioned.
More often than not, organizations leave firewall rules in place when an application is decommissioned, for fear that removing them will cause an outage. While breaking connectivity for a critical application has a significant impact on the business, organizations should make it a priority to remove unneeded access quickly and accurately, because leaving it in place creates openings that attackers can exploit. Additionally, organizations should use their firewall rules to identify network components and applications that can be removed, so that unneeded access is eliminated without impacting the business.
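One way to remove access safely is to distinguish rules used only by the decommissioned application from rules shared with live applications. The sketch below assumes a simple inventory mapping each application to the firewall rule IDs it depends on; the names are hypothetical:

```python
def removable_rules(decommissioned_app, app_rules):
    """app_rules maps each application name to the set of firewall rule IDs
    it depends on. Rules used only by the decommissioned application are
    candidates for removal; rules shared with live applications must stay."""
    still_needed = set()
    for app, rules in app_rules.items():
        if app != decommissioned_app:
            still_needed |= rules
    return app_rules[decommissioned_app] - still_needed


app_rules = {
    "legacy-crm": {"fw-7", "fw-8", "fw-9"},
    "sap":        {"fw-8", "fw-10"},
}
print(sorted(removable_rules("legacy-crm", app_rules)))  # ['fw-7', 'fw-9']
```

Note that fw-8 survives because SAP still depends on it; deleting every rule the retired application touched would have caused exactly the outage this process is meant to avoid.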
Identify risks from the business perspective.
Traditional risk management practices have a very technical focus, displaying risks for servers, IP addresses, and other elements seldom understood by the business. But, according to a recent survey, nearly half of respondents said that they want to view risk by the business application, as opposed to only 30 percent who want to see their exposure by network segment, and 22 percent by server or device. This is important because it not only allows security teams to more effectively communicate with business owners, but it also prepares and encourages them to “own the risk.”
One method of achieving this application-centric approach to risk management is to integrate security policy management with the vulnerability scanners already in use in the organization. Viewing risk by application allows the organization to make better risk decisions with the business in mind.
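A minimal sketch of that integration is to roll server-level scanner findings up to the applications that run on those servers, so that risk is reported in business terms. The inventory and findings data below are invented for illustration; a real integration would pull them from a CMDB and a scanner's export:

```python
def risk_by_application(app_servers, scanner_findings):
    """Aggregate per-server vulnerability counts by business application.
    app_servers: application name -> set of server IPs it runs on.
    scanner_findings: server IP -> number of open vulnerabilities."""
    totals = {}
    for app, servers in app_servers.items():
        totals[app] = sum(scanner_findings.get(ip, 0) for ip in servers)
    return totals


app_servers = {
    "SharePoint": {"10.0.0.5", "10.0.0.6"},
    "Payroll":    {"10.0.0.7"},
}
findings = {"10.0.0.5": 3, "10.0.0.6": 1, "10.0.0.7": 0}
print(risk_by_application(app_servers, findings))
# {'SharePoint': 4, 'Payroll': 0}
```

Presented this way, a business owner sees "SharePoint carries four open vulnerabilities" rather than a list of IP addresses, which is what makes it possible for them to own the risk.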
Beware of Shadow IT.
The availability and ease of use of public clouds create another threat to IT teams known as “shadow IT.” Developers and business owners whose requirements are not addressed by IT can quickly spin up public cloud services without IT even knowing — causing the worst kind of security gaps, ones that the organization doesn’t even know exist. Therefore, security teams must ensure that they evolve from the “no” or “bottleneck” department to a business enabler that adopts agile and orchestrated processes required to support cloud environments and fast application delivery.
Reduce complexity that is inherent in the security change process.
The modern data center is full of business applications composed of complex, multi-tier architectures, numerous components, and intricate underlying communication patterns that drive network security policies. While individual rules support various applications, a single communication flow may have to traverse multiple policy enforcement points. Hundreds or even thousands of rules can be involved, with many potential interdependencies, configured across tens to hundreds of devices.
Along with this complexity is the challenge of keeping up with dynamic business needs — whether it’s spinning up new applications or keeping existing applications relevant for your users, change is something that organizations must both deal with and embrace. A quarter of respondents to a recent study reported that they must wait more than 11 weeks for a new application to go live. The time required by IT to deploy application updates is also slower than what the business demands; the majority of respondents (59 percent) spend more than eight hours on each application connectivity change. Streamlining and automating the change process can ensure a more agile business while also ensuring that changes are accurate and secure.
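One small example of the kind of automation that speeds up the change process is a pre-check that determines whether the requested connectivity is already allowed, so that redundant requests can be closed without touching any device. This is a generic sketch using Python's standard ipaddress module; the request and rule structures are assumptions, not any vendor's format:

```python
import ipaddress


def change_needed(request, rules):
    """Return False if an existing rule already allows the requested flow,
    meaning the change request can be closed without any device changes."""
    src = ipaddress.ip_address(request["source"])
    dst = ipaddress.ip_address(request["destination"])
    for rule in rules:
        if (src in ipaddress.ip_network(rule["source"])
                and dst in ipaddress.ip_network(rule["destination"])
                and request["port"] in rule["ports"]):
            return False  # traffic is already permitted
    return True


rules = [
    {"source": "10.1.0.0/24", "destination": "10.2.0.0/24", "ports": {443}},
]
req = {"source": "10.1.0.9", "destination": "10.2.0.20", "port": 443}
print(change_needed(req, rules))  # False: already allowed, no change required
```

In practice, studies of firewall operations often find a substantial share of change requests are already covered by existing rules, so even this simple check removes work from the queue and shortens the multi-week deployment times the survey describes.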