Data backups are arguably the backbone of an organization. If data is corrupted, encrypted, or deleted, the ability of the enterprise to survive is seriously impaired.
But as comedian Rodney Dangerfield might say, backup “don’t get no respect.” It’s routine and boring compared to the glory jobs of infosec such as threat hunting and penetration testing.
“One thing that hasn’t changed about backup is no one wants to be the backup administrator,” says W. Curtis Preston, chief technical evangelist at Druva and author of several books on backup and recovery. “They want to hand this responsibility off to anybody.”
Interviewed for World Backup Day, Preston has a message for CISOs: “Since it’s such an unloved part of IT, take this day to look at the vulnerabilities of your backup system and see if you can address them, either by redesigning it in a more secure way or by handing them to a service provider.”
The good news is technology is getting to the point where backup can be set up and almost forgotten, he said. Depending on the product, backups can be reliable and don’t need “daily care and feeding.”
But Preston says there are many misconceptions about backup services:
- Many IT administrators don’t realize that software-as-a-service offerings such as Google Workspace, Office 365 and Salesforce don’t come with backup and recovery built in. Backing up that data is IT’s responsibility.
- Windows-based backup systems can no longer rely on direct-attached storage or network-attached storage (NAS), because both are fully exposed to ransomware. Find other ways to connect your storage, use a backup platform with a non-Windows operating system, or back up offsite.
- Managing the growth and daily security administration of a typical backup system is challenging. Backup servers are just as likely to be attacked as data servers.
Preston urges CISOs to use role-based administration of the backup system to minimize the possibility of compromised credentials. “I’m amazed there are still backup systems that run as root or administrator,” he said.
He also says CISOs should take advantage of products’ capabilities that specify which backups can’t be deleted outside a defined retention period (such as X number of days) as protection against insider attacks.
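The retention-lock idea can be sketched in a few lines of Python. This is an illustration of the policy logic only, not any particular product’s API; the `RETENTION_DAYS` value and the `can_delete` check are invented for the example, and in a real product the check is enforced server-side so that even compromised administrator credentials cannot purge recent backups.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

RETENTION_DAYS = 30  # illustrative "X number of days" policy


def can_delete(backup_created_at: datetime,
               now: Optional[datetime] = None) -> bool:
    """Return True only if the backup is older than the retention window.

    Until the window expires, deletion requests are refused, which is
    what protects recent backups from insider or ransomware deletion.
    """
    now = now or datetime.now(timezone.utc)
    return now - backup_created_at >= timedelta(days=RETENTION_DAYS)


now = datetime.now(timezone.utc)
print(can_delete(now - timedelta(days=5)))   # False: still locked
print(can_delete(now - timedelta(days=45)))  # True: past retention
```

The key design point is that the clock comparison happens inside the backup platform, not in any client the administrator controls.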
“The need for backup is real, (but) it’s something that doesn’t cross the mind of many businesses until and unless they are attacked,” Ryan Crompton, senior product manager at Carbonite, a division of Waterloo, Ont.-based OpenText, said in an interview. “If you only consider backup after a failure has occurred then you’re already too late.”
But don’t start by clicking the backup system’s “On” button. First, he said, organizations have to categorize and classify their data, determining what they have and what needs to be backed up. This includes data staff has squirrelled away on Dropbox, Google Docs, AWS and elsewhere without management knowledge. IT needs to understand the backup tools these applications come with.
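A minimal sketch of that categorization step, in Python, using file extension as a stand-in for a real classification policy. The tier names and the extension-to-tier mapping are invented for illustration; an actual policy would be driven by business requirements, data sensitivity, and regulation, not extensions alone.

```python
from pathlib import Path

# Hypothetical mapping from file type to backup tier.
TIER_BY_EXTENSION = {
    ".db": "critical",    # databases: back up frequently
    ".docx": "standard",  # documents: daily backup is enough
    ".tmp": "skip",       # scratch files: no backup needed
}


def classify(path: Path) -> str:
    """Assign a backup tier, defaulting to 'standard' for unknown types."""
    return TIER_BY_EXTENSION.get(path.suffix.lower(), "standard")


for name in ["orders.db", "notes.docx", "cache.tmp", "report.pdf"]:
    print(name, "->", classify(Path(name)))
```

Even a crude first pass like this forces the inventory question the experts raise: you cannot tier what you have not found and labelled.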
Create a data recovery plan that can survive the failure of more than one application. Make sure all required staff have access to the plan and understand their roles in it.
Test the recovery plan in a stress environment (i.e., no phone or internet access to call for help). Beware of a single point of failure, such as having only one person who knows how recovery works. Make sure people are cross-trained.
“I would argue up to half of businesses have properly implemented backup, but do not have a properly executable business continuity and disaster recovery strategy,” Crompton said.
Matthew Tyrer, senior manager of solutions marketing at Commvault, said the most common mistake organizations make is trying to “paint everything with the same brush.”
“Not all your data needs to be quickly recoverable and some doesn’t need to be backed up or retained at all,” he said in an email.
Not all data belongs to tier-1 workloads, and not all of it needs the short Recovery Time Objectives (RTOs) and Recovery Point Objectives (RPOs) or the strict Service Level Agreements (SLAs) that tier-1 workloads demand.
By adjusting RTOs, RPOs and SLAs to reflect the unique backup and recovery requirements of each workload, Tyrer says companies can reduce their storage and data protection costs.
This is especially true when one considers the lower cost (and lower performance) backup and recovery options that are available today on the cloud, he said.
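The tiering idea can be made concrete with a small Python sketch. The tier names, the RTO/RPO numbers, and the storage-class labels here are all invented examples of the cost-matching principle, not recommendations.

```python
from dataclasses import dataclass


@dataclass
class ProtectionTier:
    name: str
    rto_hours: float  # how quickly the workload must be restored
    rpo_hours: float  # how much recent data loss is tolerable
    storage: str      # illustrative storage class, cheaper as tiers relax


# Example tiers: recovery targets loosen as cost drops.
TIERS = {
    "tier-1": ProtectionTier("tier-1", rto_hours=1, rpo_hours=0.25,
                             storage="replicated fast storage"),
    "tier-2": ProtectionTier("tier-2", rto_hours=24, rpo_hours=24,
                             storage="standard cloud object storage"),
    "archive": ProtectionTier("archive", rto_hours=72, rpo_hours=168,
                              storage="cold/archive cloud storage"),
}


def tier_for(criticality: str) -> ProtectionTier:
    """Map a workload's criticality label to a protection tier."""
    return TIERS.get(criticality, TIERS["tier-2"])


print(tier_for("tier-1").storage)
print(tier_for("reporting").name)  # unknown label falls back to tier-2
```

Encoding the policy as data rather than prose also makes it auditable: anyone can see which workloads are paying for fast recovery.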
When looking for a backup solution, Tyrer said organizations should keep three things in mind:
- Flexibility. Can it cover tomorrow’s data challenges? Is it flexible to license, deploy and operate?
- Coverage. Can it address all the workloads needed? Does doing so stretch it, and require kludges or over-customizing?
- Scalability. Not just in raw size and horsepower but also in terms of functionality and use cases. What about disaster recovery use cases? What about DevOps use cases, where it is often about quickly creating a new copy of a set of data for the DevOps team to play with? Avoid using multiple products to protect your data, which may cause visibility problems.
Other expert tips
From a data protection standpoint, the rush to accommodate new and necessary ways to work, shop and live opened the door to cybercriminals, said Surya Varanasi, CTO of Nexsan. Consequently, there was a dramatic increase in ransomware attacks and high-profile data breaches that further cemented the importance of backup.
The overall objective of backup is the ability to recover from any failure or data loss within a specified period of time, he argued. But now, as ransomware and other malware attacks continue to increase in severity and sophistication, there is a need to protect backed up data by making it immutable and by eliminating any way that data can be deleted or corrupted.
Those looking for what he calls ‘unbreakable backup’ should seek a solution that delivers data integrity through policy-driven, scheduled integrity checks that can scrub the data for faults and auto-heal without any user intervention. They should also seek high availability, with dual controllers and RAID-based protection that can guarantee data access in the event of a component failure.
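The detection half of such a scrub can be sketched in Python with standard-library hashing. A real product scrubs at the storage layer and auto-heals from redundant copies; this example only shows how re-hashing a backup against a digest recorded at write time catches silent corruption. The filenames and the corruption step are contrived for the demo.

```python
import hashlib
import tempfile
from pathlib import Path


def record_checksum(path: Path) -> str:
    """Compute a SHA-256 digest when the backup is written."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def scrub(path: Path, expected: str) -> bool:
    """Scheduled integrity check: re-hash and compare to the stored digest."""
    return record_checksum(path) == expected


# Demonstrate detection of silent corruption.
with tempfile.TemporaryDirectory() as d:
    backup = Path(d) / "backup.bin"
    backup.write_bytes(b"important data")
    digest = record_checksum(backup)
    print(scrub(backup, digest))             # True: intact backup passes

    backup.write_bytes(b"important dataX")   # simulate bit rot / tampering
    print(scrub(backup, digest))             # False: corruption detected
```

Detection is the prerequisite for auto-healing: once a bad copy is identified, the platform can rebuild it from a redundant good one.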
JG Heithcock, general manager of Retrospect, said IT should have a backup strategy that utilizes the “3-2-1 rule” — having at least three copies of data across multiple locations: the original, a first backup stored onsite, and a second backup located offsite.
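The 3-2-1 rule can be illustrated with a short Python sketch. Here the “offsite” directory merely stands in for remote or cloud storage, and a real setup would also spread the copies across two different media types; the function and paths are invented for the example.

```python
import shutil
import tempfile
from pathlib import Path


def three_two_one(original: Path, onsite: Path, offsite: Path) -> None:
    """Keep three copies: the original plus one onsite and one offsite backup."""
    for dest in (onsite, offsite):
        dest.mkdir(parents=True, exist_ok=True)
        shutil.copy2(original, dest / original.name)  # copy2 keeps metadata


with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    src = root / "data.txt"
    src.write_text("payroll records")
    three_two_one(src, root / "onsite", root / "offsite")
    print(sum(1 for _ in root.rglob("data.txt")))  # 3 copies exist
```

The point of the rule is failure independence: no single event, whether a disk crash, ransomware, or a building fire, should be able to destroy all three copies at once.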
Vinay Mathur, chief strategy officer at Next Pathway, noted in an email that the first foray for enterprise companies moving to the cloud is often to use public cloud for backup and disaster recovery because of its scalability.
A good example, he said, is disaster recovery strategies for large on-premises Hadoop clusters. Especially at large companies, legacy Hadoop clusters store massive amounts of data (often at petabyte scale), and an on-premises backup and disaster recovery method would come with an enormous price tag. Instead, companies are looking to the hyperscale cloud providers to solve this challenge. In doing so, enterprise companies, some of which have been reticent to consider a move to the public cloud, are venturing into unfamiliar territory with massive upside for their IT operations.