Cloud adoption shows no sign of slowing down. Couple high reliability and availability with minimal IT management overhead, and it is easy to see why so many organizations are implementing cloud-based infrastructure and software-as-a-service (SaaS) applications. And yet, few have the mechanisms in place to protect the data within their SaaS infrastructure. Most organizations still mistakenly assume their SLAs cover data (they don’t) or rely on legacy backup tools to handle new cloud-based systems (they can’t).
In an age where data is the lifeblood of every business, companies are vulnerable to costly, irreparable damage if they aren’t prepared to protect data within the SaaS infrastructure from cyber threats, data loss, malicious (or careless) employees, and simple user errors.
In this slideshow, OwnBackup CEO Sam Gutmann outlines the top three data dangers lurking in cloud environments and offers three tips for how to manage data protection and backup in a SaaS-based world.
Data Risks in the Cloud
Click through for the top three data dangers lurking in cloud environments and tips for how to manage data protection and backup in a SaaS-based world, as identified by OwnBackup CEO Sam Gutmann.
Data Danger #1 – Believing Your SaaS Agreement Covers Data
One of the many benefits of the cloud environment is that organizations can rely on an outsourced vendor to manage the entire implementation and maintenance of any given application. Unfortunately, the “set it and forget it” mentality can quickly become a detriment as organizations assume the vendor will not only address ongoing IT management headaches, but also anything associated with the application itself – including the data housed within. A few seemingly harmless hours of business-critical cloud application downtime can become a permanent data loss event for many organizations. SaaS vendors are not responsible for the information contained in their applications … you are.
To mitigate the damage to your data and manage your IT team’s expectations for working within cloud environments, read through your SaaS agreement carefully and understand what is covered and what is not. Gutmann guarantees that 99.9 percent of the time, data is not addressed.
Data Danger #2 – Underestimating Your Role as the Compliance Data Steward
No matter where the data is housed, on premises or in the cloud, the compliance requirements are the same. Organizations must be able to show that their information is backed up at least once a day and demonstrate that any missing information can be restored. As far as regulators are concerned, the business – not the SaaS vendor – is responsible for owning the data, protecting it, and backing it up. Accepting the role of de facto data steward is especially critical as companies are placing more and more sensitive data into SaaS and PaaS applications.
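The once-a-day backup requirement is easy to state and easy to drift out of. A minimal sketch of how a team might spot-check it, assuming a hypothetical list of completed-backup timestamps pulled from a backup vendor's reporting API (the function name and inputs here are illustrative, not any specific product's interface):

```python
from datetime import datetime, timedelta

def backups_compliant(backup_times, max_gap_hours=24):
    """Check that backups occurred at least once every `max_gap_hours`.

    `backup_times` is a hypothetical list of datetime objects marking
    completed backups. Returns False if any gap between consecutive
    backups exceeds the limit, or if the latest backup is stale.
    """
    times = sorted(backup_times)
    for earlier, later in zip(times, times[1:]):
        if later - earlier > timedelta(hours=max_gap_hours):
            return False  # a gap longer than a day -> out of compliance
    # Also require the most recent backup to be fresh right now.
    return datetime.utcnow() - times[-1] <= timedelta(hours=max_gap_hours)
```

A check like this only demonstrates that backups ran; demonstrating that missing information can actually be restored still requires periodic restore tests.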
For example, financial institutions are increasingly deploying cloud-based applications for customer transactions. Bank records lost through inadvertent user error could mean hundreds of millions of dollars in loan data gone – data that could take months or years to recover, if it can be recovered at all. This level of compliance offense has a catastrophic impact on the business and often generates irreparable damage. To avoid such a scenario, businesses need to own their responsibility as compliance data stewards and maintain tight control of their data even when it is housed outside the four walls of their organization.
Legacy Backup Tools
Data Danger #3 – Relying on Legacy Backup Tools for Modern Infrastructure
The challenges associated with managing cloud environments may no longer be new, but expecting legacy backup tools to support new cloud applications is dangerous. Traditional backup methods often involve taking a disk image or snapshot of the server and spinning up the entire image if something goes wrong. SaaS customers, by nature, have no access to the underlying infrastructure, so everything is backed up at the record level, which means recovery must also happen at that level of granularity.
Instead, consider cloud-to-cloud backup tools that are designed with SaaS requirements in mind. Implementing application-specific backup tools that can quickly access and interact with the data can rapidly resolve any issues regardless of where the infrastructure lives. In addition, by isolating the corruption or data loss event to restore only the affected data and avoid overwriting accurate data changes, organizations can avoid the tedious task of trying to manually restore data field by field.
Tip #1: Start with the Infrastructure Layer
First and foremost, you have to put the same level of thought and care into protecting your data no matter where it lives. There are essentially three tiers of managing data protection and recovery in a cloud environment. The first happens at the infrastructure layer. These are the components that should be covered in your vendor agreements, including the uptime availability guarantees and overall accessibility for the application itself (not the data).
This information should help organizations develop a contingency plan outlining what the next steps are in case of an outage or data loss event. Make sure you understand the intricacies of your data and select the data backup and recovery vendor that can solve the pain points you are most likely to encounter.
Tip #2: Govern the Data Layer
Industry reports have noted that while it hasn’t happened yet, 2016 could be the year cyber attacks and data breaches in the cloud go from a potential threat to harsh reality. Preparing to manage the all-important data layer within SaaS applications can make a significant difference when faced with the risk of mass data corruption by a script that went haywire or by a genuinely malicious attack.
Given the scope of these risks, organizations need recovery tools that enable them to back up data at the record level and then take it a step further by restoring data at the field level. These capabilities must also be able to scale. For instance, if 10,000 records are impacted, but only fields one and two have corrupted data, the organization needs the capability to isolate the “bad” data and replicate the fix 9,999 more times without corrupting the existing “good” data. Seek out tools that can help govern the changes to only the data that has been affected and minimize any compromise of existing information or impact to business operations.
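The field-level restore described above can be sketched in a few lines. This is an illustrative model, assuming records are dictionaries keyed by record ID and that a clean backup snapshot is available; the function and field names are hypothetical, not any vendor's actual API:

```python
def restore_fields(live_records, backup_records, corrupted_fields):
    """Restore only the corrupted fields from backup, record by record.

    `live_records` and `backup_records` are hypothetical dicts of
    record dicts keyed by record ID; `corrupted_fields` names the
    fields known to be bad. Fields outside that list keep their
    current ("good") values, so legitimate changes made since the
    backup are not overwritten. Returns the number of fields repaired.
    """
    repaired = 0
    for record_id, live in live_records.items():
        backup = backup_records.get(record_id)
        if backup is None:
            continue  # no backup copy for this record; leave it alone
        for field in corrupted_fields:
            if field in backup and live.get(field) != backup[field]:
                live[field] = backup[field]  # surgical, field-level fix
                repaired += 1
    return repaired
```

The key design point is that the loop touches only the named fields, which is exactly how a fix to fields one and two can be replicated across 10,000 records without disturbing the good data in the rest of each record.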
Tip #3: Plan for the Worst-Case Scenario
The third and final layer of data protection in a cloud environment is the one that no business ever wants to think about. What if your SaaS provider cannot meet its SLAs? What if the application goes down for 48 hours in the middle of the week? What if your data is trapped inside the application itself? No matter the scenario, organizations still need a way to access their data and make it available to users.
Before implementing a SaaS application, IT leaders need to consider how they will be able to easily search the backups and/or export the data to another database to run queries against it and provide users the data they need if the original application is not online. Planning for the worst-case scenario will allow the business to keep running even if access to the original SaaS application isn’t feasible.
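The export-and-query contingency above can be prototyped with nothing more than an in-memory database. A minimal sketch, assuming backup records have already been exported as simple tuples; the `accounts` table schema and sample data are hypothetical placeholders for whatever the real SaaS objects look like:

```python
import sqlite3

def load_backup(records):
    """Load exported backup records into an in-memory SQLite database.

    `records` is a hypothetical list of (id, name, status) tuples
    exported from a SaaS backup; the schema here is illustrative.
    Returns an open connection that users can query directly.
    """
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE accounts (id TEXT PRIMARY KEY, name TEXT, status TEXT)"
    )
    conn.executemany("INSERT INTO accounts VALUES (?, ?, ?)", records)
    conn.commit()
    return conn

# With the SaaS application offline, users can still answer questions
# by querying the backup directly:
conn = load_backup([("1", "Acme", "active"), ("2", "Globex", "churned")])
rows = conn.execute(
    "SELECT name FROM accounts WHERE status = 'active'"
).fetchall()
```

In practice the export target would be a durable database rather than an in-memory one, but the principle is the same: the backup remains searchable and queryable even when the original application is not online.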
The Golden Rule of Recovery
While the risks and challenges of data storage, protection, and recovery will evolve over time, the most important principle will remain unchanged: Businesses need access to their data – all of it, all the time.
High-profile data breaches of sensitive customer information are not the only type of data event that organizations should fear (and prevent). Any data loss can cause significant damage to the organization’s ability to serve its customers and keep the business moving – ultimately, any data loss can negatively impact the bottom line. Prepare yourself so that when the time comes, you will be able to understand exactly what happened in a data loss event and put your data back together exactly how it was before – even within the constraints of today’s relational SaaS databases.