Backups are obviously important. Everyone—both individuals and businesses—should regularly back up their devices and data in case they become compromised. However, there is a distinct difference between backing up the data generated and owned by an individual versus the data created by an organization, no matter the size.
While simple backups may be sufficient for an individual, traditional backups alone are not enough for businesses. A primary purpose of having a secondary set of data is getting a business up and running after data loss, data corruption or ransomware, and backup systems are simply not up to the task of disaster recovery (DR).
It’s time to rethink data protection. Archiving data and sending it off to some faraway place with the hope that it will never be needed again is antiquated. Businesses can no longer afford to wait weeks, days or even hours to restore their data. Recovery must be instant. Thanks to the public cloud, DR and backup have been radically transformed in ways that make this possible.
In this eWEEK Data Points article, Sazzala Reddy, co-founder and chief technology officer at Datrium, explains why backup is useless for DR and how to do DR right to recover quickly from disasters big and small.
Data Point No. 1: There are two ugly truths about backups.
One: It’s a Schrödinger’s backup situation: The state of a backup is unknown until you have to restore from it.
Two: Backup systems are built for backing up data, not for recovery. It will take you days or weeks to recover your data center from backup systems. Mass recovery was never the design goal of backups.
Data Point No. 2: System availability is of paramount importance.
Times have changed. In today’s on-demand economy, we expect our IT systems to always be up and running. Any amount of downtime impacts customers, employees and the bottom line.
Data Point No. 3: Ransomware is emerging as the leading cause of downtime.
Downtime can be caused by floods, tornadoes, fires, accidental human error and other unexpected events. However, ransomware, a rapidly growing threat, is emerging as a leading cause of downtime. According to The State of Enterprise Data Resiliency and Disaster Recovery 2019, disasters ranging from natural events to power outages to ransomware affected more than 50% of enterprises in the last 24 months. Among these disasters, ransomware was identified as the leading cause, with 36% of respondents reporting they had been the victim of an attack.
Data Point No. 4: Disaster recovery is more important than ever.
The sharp increase in ransomware attacks and other data threats has made backup useless and DR more important than ever before. While there are newer backup systems on the market today, they still aren’t capable of rapid and reliable recovery. Today, speed of recovery and how quickly you can get back online after an event are the name of the game—and winning requires a comprehensive DR-focused strategy.
Data Point No. 5: Backups are useless in the event of a disaster.
While backups are a great first step, they are not an effective DR strategy due to the sheer amount of time and manual labor required to recover from traditional backups after a disaster. Imagine 100 terabytes of data stored in a backup system that can restore at 500 MB/sec (a generous figure). In a disaster scenario, it will take more than two days just to copy the data from the backup system into a primary storage system. Effective and fast DR requires automation in the form of disaster recovery orchestration.
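The arithmetic behind that two-plus-day estimate is simple enough to check:

```python
# Back-of-the-envelope restore time from a traditional backup system.
data_tb = 100                    # total data to restore, in terabytes
throughput_mb_s = 500            # generous sequential restore rate, in MB/sec

total_mb = data_tb * 1_000_000   # 1 TB = 1,000,000 MB (decimal units)
seconds = total_mb / throughput_mb_s
days = seconds / 86_400          # 86,400 seconds in a day
print(f"{days:.1f} days")        # → 2.3 days
```

And that assumes the backup system sustains its full rated throughput for the entire restore, with no manual intervention along the way.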
Data Point No. 6: Full DR orchestration entails a series of steps.
Step 1: Gain access to the right data on different infrastructure (backup and storage vendors sometimes forget that DR involves more than this step).
Step 2: Bring up the workloads, in the right order, on the right systems, handling differences in networking and the like. This is vastly more practical and automatable for virtualized workloads than for physical ones.
Step 3: Fail everything back to the originating site with the same concerns for workload sequencing, mapping, etc. (These last two steps require runbook orchestration, which is a key component to comprehensive DR.)
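The three steps above can be sketched as a minimal runbook. This is an illustration only, assuming a hypothetical orchestration interface: the Workload type, tier-based ordering, and the restore/power-on callables are invented for the example, not any vendor's actual API.

```python
# Illustrative sketch of DR runbook orchestration (hypothetical API).
from dataclasses import dataclass, field

@dataclass
class Workload:
    name: str
    tier: int                    # lower tier boots first (e.g. databases before apps)
    network_map: dict = field(default_factory=dict)  # prod subnet -> DR subnet

def failover(workloads, restore, power_on):
    """Bring workloads up at the DR site in dependency order."""
    for wl in sorted(workloads, key=lambda w: w.tier):
        restore(wl)              # Step 1: make the right data available
        power_on(wl)             # Step 2: boot in order, with remapped networking

def failback(workloads, replicate_back, power_on_primary):
    """Step 3: return to the originating site with the same sequencing concerns."""
    for wl in sorted(workloads, key=lambda w: w.tier):
        replicate_back(wl)
        power_on_primary(wl)
```

The point of the sketch is the sequencing: every step is scripted and repeatable, which is what makes DR testable, unlike an ad hoc restore from backup.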
Data Point No. 7: Modern business requires instant RTO.
Because no business can afford to lose access to its systems for hours, days or even weeks, effective disaster recovery needs an instant RTO (recovery time objective), and the bottom line is that legacy backup systems were not designed for that. Effective DR solutions need to deliver instant RTO restarts.
Data Point No. 8: The public cloud has changed the game for DR.
The public cloud offers on-demand compute and elastic storage. You can keep your data in a geographical region of your choice on low-cost media and spin up compute only when disaster strikes, so you can work with that data. Additionally, you pay for resources only when you use them, in a disaster or during testing. That's how the cloud is supposed to be used: elastic and pay as you go. It's like only paying for insurance after you've had a car accident.
Data Point No. 9: The key to effective cloud DR is converging cloud backup and DR.
A key part of DR is getting the data to a second site that's unaffected by the disaster and has compute resources available for post-recovery operation. To do this, your backups need to be deduplicated and stored in a steady state in a cloud object store such as Amazon S3. Then, in the event of a disaster, runbook automation instantly turns these backups into live VMs, delivering instant RTO for thousands of VMs.
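As a rough illustration of the deduplication half of this approach, here is a minimal content-addressed store. The fixed chunk size, the in-memory dictionary standing in for an object store like S3, and the function names are all assumptions made for the example; production systems use variable-size chunking and far more machinery.

```python
# Minimal sketch of content-addressed deduplication, the property that keeps
# steady-state cloud backups cheap: identical chunks are uploaded only once.
import hashlib

CHUNK = 4 * 1024 * 1024          # 4 MiB fixed-size chunks (illustrative)
store = {}                       # stands in for a cloud object store

def backup(data: bytes):
    """Upload only chunks the store has not seen; return the restore recipe."""
    recipe = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        key = hashlib.sha256(chunk).hexdigest()  # chunk named by its content hash
        if key not in store:     # duplicate chunks cost nothing extra
            store[key] = chunk
        recipe.append(key)
    return recipe

def restore(recipe):
    """Reassemble the original data from its chunk recipe."""
    return b"".join(store[k] for k in recipe)
```

Backing up the same data a second time adds no new objects to the store, which is why repeated steady-state backups stay affordable on low-cost cloud media.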
Data Point No. 10: To be effective, DR needs to be simple, fast and affordable.
By leveraging the public cloud and new technologies, it is now possible to converge backup (low-cost media, granular recovery) and DR (orchestration software, random I/O performance). This truly simplifies DR with an approach that enables instant failover of an entire data center with the push of a button, eliminating the need to cobble together all of the backup and DR software pieces manually.
If you have a suggestion for an eWEEK Data Points article, email cpreimesberger@eweek.com.