Unlike traditional servers, cloud servers are highly susceptible to outside attack if the right preparations are not made. Many enterprises don’t realize that security responsibilities are shared with their cloud service provider: under the IaaS model, the provider is responsible for securing the underlying infrastructure, while the buyer is responsible for securing the workloads that run on it.
Starting a cloud server workload without proper configuration is like putting out a beacon alerting hackers to an easy mark. In fact, a CloudPassage study called The Gauntlet showed that even a novice hacker can compromise a poorly configured cloud server in a matter of hours.
Click through for five important considerations for safe configuration of cloud servers that all enterprises should cover before launching into the cloud, as identified by CloudPassage.
Verify tight hardening
Most cloud providers have a marketplace or catalog where master images can be obtained. These images are usually vetted and advertised as pre-hardened, but additional verification is always recommended. Because multiple instances will be spawned from this image, even a single vulnerability propagates with every launch and becomes a much larger issue to rectify.
For example, even the pre-built AMIs from Amazon Web Services have been known to contain vulnerabilities. Under the shared responsibility model for security, both the cloud provider and the buyer need to verify an image before it is used. Never completely trust the master image; always verify it internally against possible exposure.
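As a hedged illustration, the sketch below spot-checks two SSH settings that a pre-hardened image would typically be expected to ship with. The directives and expected values here are assumptions for the example, not a complete hardening baseline, and the script audits a sample config file it creates itself.

```shell
#!/bin/sh
# Illustrative spot-check of an image's SSH hardening, not a full audit.
# For the sketch we audit a sample config; on a real instance, point
# CONFIG at /etc/ssh/sshd_config instead.
CONFIG=$(mktemp)
printf 'PermitRootLogin no\nPasswordAuthentication no\n' > "$CONFIG"

FAIL=0
check() {
    # check <directive> <expected-value>: compare the last occurrence
    actual=$(awk -v key="$1" 'tolower($1) == tolower(key) { v = $2 } END { print v }' "$CONFIG")
    if [ "$actual" = "$2" ]; then
        echo "PASS: $1 $2"
    else
        echo "FAIL: $1 is '${actual:-unset}', expected '$2'"
        FAIL=1
    fi
}

check PermitRootLogin no
check PasswordAuthentication no
```

Running a handful of checks like this against every new image before it becomes the parent of a fleet is cheap insurance against propagating a weak setting.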
Watch out for who or what is at the helm
Disabling unused accounts and limiting account access on servers should go without saying, but pay special attention when granting access to accommodate APIs. Monitoring the use of server accounts is important: as recent breaches such as eBay’s have shown, misuse of stolen yet authorized credentials is a serious issue. And always limit root access.
Enforce multi-factor authentication for all access types. Where possible, use credentials that time out for API access, and log all activity so that everything happening on the servers is accounted for.
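The idea behind time-limited API credentials can be sketched as a token that carries its own expiry and a signature. This is a toy illustration with an invented secret and token format; in practice you would rely on your provider’s short-lived credential mechanism rather than rolling your own.

```shell
#!/bin/sh
# Toy sketch of a time-limited credential: an expiry timestamp signed
# with a shared secret. Invented for illustration; real deployments
# should use the provider's own short-lived credential mechanism.
SECRET='demo-signing-secret'   # assumption: shared between issuer and verifier

sign() {
    printf '%s' "$1" | openssl dgst -sha256 -hmac "$SECRET" -r | cut -d' ' -f1
}

issue() {   # issue <lifetime-seconds> -> prints "expiry.signature"
    exp=$(( $(date +%s) + $1 ))
    echo "$exp.$(sign "$exp")"
}

verify() {  # verify <token> -> succeeds only if untampered and unexpired
    exp=${1%%.*}
    sig=${1#*.}
    [ "$sig" = "$(sign "$exp")" ] && [ "$(date +%s)" -lt "$exp" ]
}

TOKEN=$(issue 900)   # valid for 15 minutes
verify "$TOKEN" && echo "token accepted"
```

A credential that expires on its own limits how long a stolen copy stays useful, which is exactly the exposure that long-lived, always-valid keys create.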
Configure out slack
Disabling unnecessary services and ports reduces the opportunity for exploits. Keep a lean profile, and decide how updates will be handled over time, whether applied automatically or through a defined process that avoids running unnecessary risks.
Breaches often occur on unmonitored services, so reduce the scope for any mischief. If you don’t need it, then don’t enable it.
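One way to configure out slack is to keep an explicit allowlist of the services a server is supposed to run and flag everything else. The sketch below uses sample data standing in for real inventory; on a live host you would feed it actual output, for example from `systemctl list-unit-files --state=enabled`.

```shell
#!/bin/sh
# Allowlist audit sketch: flag enabled services that are not explicitly
# needed. Sample data stands in for real inventory from the host.
allowed=$(mktemp); enabled=$(mktemp)

cat > "$allowed" <<'EOF'
sshd
chronyd
EOF

# In practice, generate this list from the host itself.
cat > "$enabled" <<'EOF'
sshd
chronyd
cups
bluetooth
EOF

# Anything enabled but not allowlisted is a candidate to disable.
extras=$(grep -Fxv -f "$allowed" "$enabled")
echo "Enabled but not on the allowlist:"
echo "$extras"
```

Here `cups` and `bluetooth` would be flagged: if the workload doesn’t need them, they shouldn’t be running.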
Watch for drift
Manage drift from hardened configurations through careful, deliberate patching. For even better cloud efficiency, some companies forgo patching altogether and instead refresh from completely new server images each time.
Granting excess roles and responsibilities creates additional slack and can introduce complications such as changes made outside of maintenance windows. Even when the purpose is to drive business agility, the balance between short-term goals and the exposures being created needs to be carefully weighed. Though there are exceptions, the smoothest cloud experience comes from shoring up loose ends.
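Whether you patch in place or refresh from new images, drift from the hardened baseline can be detected with simple checksums. A minimal sketch, using a temporary copy of a config file with invented content; on a real host, the baseline would be recorded at launch and re-verified on a schedule.

```shell
#!/bin/sh
# Drift-detection sketch: record checksums of hardened config files at
# launch, then re-verify later. Uses a temp dir; file names and content
# are invented for the example.
dir=$(mktemp -d)
echo 'PermitRootLogin no' > "$dir/sshd_config"

# At launch: record the known-good baseline.
( cd "$dir" && sha256sum sshd_config > baseline.sha256 )

# Later: someone (or something) changes the config outside a window.
echo 'PermitRootLogin yes' > "$dir/sshd_config"

# On a schedule: verify against the baseline.
if ( cd "$dir" && sha256sum -c --quiet baseline.sha256 ); then
    DRIFT=0
else
    DRIFT=1
    echo "drift detected: config differs from hardened baseline"
fi
```

A check like this turns “did anyone change something outside the window?” from a guess into a yes-or-no answer.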
Continuously watch for anomalies
Even with the most diligent security hygiene, threatening situations still occur. Having a team that is rewarded for being ‘all hands on deck’ about security in dynamically changing cloud conditions and elastic compute environments is vital to responding quickly when something breaks.
Anomalies can suddenly appear or fly under the radar. Traditional security isn’t able to keep pace in these environments and, worse, can weigh down cloud flexibility.
Heuristic monitoring takes too long to discern actionable patterns, and baselining every server individually is wasteful, since each is propagated from a known-good image. Monitoring for file and configuration changes, such as the addition of a user account, is a more streamlined way to catch anomalous behavior and thwart it quickly.
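The configuration-change monitoring described above, such as catching a newly added user account, can be sketched by diffing snapshots against the known-good state. The account data here is sample data invented for the example; on a live host, the snapshot would come from /etc/passwd.

```shell
#!/bin/sh
# Sketch: detect an added user account by comparing snapshots of the
# account list. Sample data stands in for /etc/passwd on a live host.
dir=$(mktemp -d)
printf 'root:x:0:0\nsshd:x:74:74\n' > "$dir/passwd"

# Baseline snapshot, taken when the server came from a known-good image.
cut -d: -f1 "$dir/passwd" | sort > "$dir/accounts.base"

# ...later, an unexpected account appears...
printf 'backdoor:x:0:0\n' >> "$dir/passwd"

# Current snapshot vs. baseline: comm -13 prints lines only in the new list.
cut -d: -f1 "$dir/passwd" | sort > "$dir/accounts.now"
new_accounts=$(comm -13 "$dir/accounts.base" "$dir/accounts.now")
[ -n "$new_accounts" ] && echo "new accounts detected: $new_accounts"
```

Because every server starts from the same image, one baseline covers the fleet, and any diff is a signal worth investigating immediately.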