Replication vs. Backup: Explained
By: David Finster, Vultr’s Technical Editor & Developer Advocate
Vultr Technical Editor and Developer Advocate David Finster specializes in making things work. David’s professional experience spans developer community relations, small business tech support, and much more. He’s always finding creative solutions to technical problems. One of the topics that continues to cross his desk is backup and data replication. What are the differences? Why would you choose one over the other?
What are the drawbacks to each method? These are just some of the questions David addresses below.
Enjoy!
The first time I needed a backup, a notoriously unreliable TRS-80 cassette player had just shredded the blackjack game I’d saved on a 60-minute cassette tape. I didn’t have a backup, but I had a printout of the BASIC code, so I learned my lesson that afternoon by retyping the code.
A few years later, I landed a good job working in a hard drive repair shop, where I spent a lot of quality time with oscilloscopes and Wilson drive analyzers. It was amazing to watch a fluttery analog signal get converted into clean digital data. It seemed like magic at the time, and the more I learn about how digital stuff works, the more amazed I am that it works at all.
Hardware is more reliable now, and it’s easy to be complacent. It’s tempting to rely on replication, RAID, and cloud storage instead of a solid backup system. But, perfectly reliable hardware doesn’t exist, and even if it did, you still must protect against human error, malicious activity, and natural disasters. So, let’s talk a little about replication and backups and how they are different.
What is Replication?
Replication keeps multiple data copies in sync, either synchronously or asynchronously. The copies might be physically close, like RAID, or distant, like cloud storage.
RAID arrays and high-performance SANs use synchronous replication to increase availability and performance. A mirrored array can tolerate a drive failure and stay online, a striped array can improve performance, and some configurations do both, but none of these creates separate, disconnected copies. Synchronous replication requires high-performance, low-latency hardware because it blocks file access until all copies are in sync.
Products like Dropbox, Microsoft OneDrive, and Apple iCloud Drive are asynchronous replication systems that synchronize to the cloud in the background. The process isn’t instantaneous, but the delay seldom inconveniences you because they don’t block file access while syncing.
Replication is like a spare tire. The tire improves your car’s availability, but it can’t protect you against dead batteries, accidents, or mischief. Replication systems can be convenient, increase availability, and improve performance, but they don’t care if the data is good, corrupt, or malicious; they dutifully synchronize everything.
Replication is not a backup system.
3-2-1 Backup Strategy
A 3-2-1 backup means you always have three copies of data stored two different ways, and one copy is physically separated. For example, a production Vultr server in Los Angeles is the first copy. If you enable Vultr automatic backups, you’ll have a second copy that is regularly updated on a configurable schedule, also stored in Los Angeles. If you make file-by-file copies to object storage in New Jersey, you’ve met the minimum requirement for a 3-2-1 backup. You have three copies stored two different ways, and one of them is in a different physical location.
Automatic backups at Vultr make scheduled snapshots of the filesystem using hot-sector tracking. The server runs uninterrupted, and disk activity is deferred to scratch blocks until the snapshot completes. You can deploy a new cloud server from a snapshot, and snapshots are excellent insurance before you upgrade or perform risky operations on your server.
Automatic backups are convenient, but they have some drawbacks.
- They are stored in the same location as the live system.
- They can’t guarantee database consistency.
- If the server has high disk I/O, it might overrun the scratch block buffer, resulting in a failed image.
- You must deploy the complete server to restore a single file.
To compensate for these issues, consider supplementing your automatic backups with a file-by-file backup, which has some advantages over snapshots. Daily backups are faster when only a few files have changed. If you only need to restore a few files, it’s easier than deploying a complete server. It’s easy to script an efficient backup to Object Storage with rclone or create rolling backups with command-line tools.
On the downside, it’s not a bootable backup. If you need to rebuild entirely, it takes longer to reconstruct a server from a file-by-file backup than to boot a snapshot copy.
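For a concrete sense of the rclone approach, here is a minimal sketch. The remote name (vultr-s3), the bucket, and the paths are placeholders, and it assumes you have already run rclone config to point the remote at your Object Storage credentials:

```bash
#!/bin/sh
# Minimal file-by-file backup sketch. "vultr-s3" is a hypothetical rclone
# remote created beforehand with `rclone config`; the bucket and paths are
# placeholders for your own layout.

# Make the bucket paths match the live directories.
rclone sync /var/www   vultr-s3:example-bucket/www --checksum
rclone sync /etc/nginx vultr-s3:example-bucket/etc-nginx --checksum
```

Note that rclone sync makes the destination match the source, so a file deleted on the server disappears from the bucket on the next run. Adding the --backup-dir flag (for example, --backup-dir vultr-s3:example-bucket/archive/$(date +%F)) moves changed and deleted files into a dated directory instead, which is one simple way to build the rolling backups mentioned above.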
By combining the strengths of snapshots and file-by-file backups, you can create a solid 3-2-1 system that might look like this (a minimal script tying the steps together is sketched after the list):
- Create a daily database dump to a directory on the primary server to ensure you have a consistent database backup.
- Make automatic snapshot backups, which also include the database dump.
- Run a file-by-file backup, including the database dump, to an offsite location.
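A rough sketch of a daily job covering the first and third steps might look like the following; the database name, bucket, and rclone remote are placeholders, credentials are assumed to come from ~/.my.cnf, and the snapshot in the middle step is handled by Vultr on its own schedule:

```bash
#!/bin/sh
# Hypothetical daily backup job, e.g. dropped into /etc/cron.daily/.
set -eu

DUMP_DIR=/var/backups/db
mkdir -p "$DUMP_DIR"

# 1. Consistent database dump on the primary server.
#    --single-transaction avoids locking InnoDB tables while dumping.
mysqldump --single-transaction --databases example_db \
  > "$DUMP_DIR/example_db-$(date +%F).sql"

# 2. Vultr automatic snapshots run separately and capture the dump
#    directory along with the rest of the disk.

# 3. File-by-file copy, including the dump, to offsite object storage.
rclone sync /var/backups vultr-s3:example-bucket/backups
rclone sync /var/www     vultr-s3:example-bucket/www
```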
In some cases, you may not care whether you have a bootable system because you can rebuild completely from only a few essential files. WordPress, for example, only needs the database, theme, and upload files. With tools like UpdraftPlus, you can send rolling versions of those files to multiple locations, including Vultr Object Storage. Then, if your server is hacked, launch a clean Vultr One-Click WordPress instance and restore your backup in a few minutes.
Security
Now that you have a backup, protect it from snooping. Use encryption, and verify that access to the backup is restricted. I’ve seen many cases where sensitive HR information was locked down, but the backups were easy for employees to browse. Carelessly stored database dumps are a common source of data breaches.
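One straightforward way to do that, sketched here as a continuation of the hypothetical dump job above, is to encrypt the dump on the server before it is uploaded and keep the passphrase (or GPG private key) anywhere but on that server:

```bash
# Encrypt today's dump before it is uploaded; the filename matches the
# hypothetical backup script sketched earlier.
DUMP=/var/backups/db/example_db-$(date +%F).sql
gpg --symmetric --cipher-algo AES256 --output "$DUMP.gpg" "$DUMP"

# Remove the plaintext so only the encrypted copy reaches object storage.
shred -u "$DUMP"
```

rclone also offers a crypt remote type that encrypts files transparently as they are uploaded, which achieves the same goal without handling encrypted files by hand.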
Don’t allow a malicious actor to tamper with the backups. If they gain control of the live system, do they have access to delete the remote backups? If you send backups offsite by FTP, you might consider configuring the FTP server to ignore deletes. Or, you could use an external system to pull the backups and prevent the live system from accessing the offsite storage location at all.
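As a minimal sketch of the pull approach, a separate, locked-down backup host can fetch the dumps over SSH on its own schedule, so the production server never holds credentials for the backup store. The hostnames and paths here are placeholders:

```bash
# Run from cron on the dedicated backup host, never on the production server.
# The production server only grants this host read-only SSH access.
rsync -a backup@prod.example.com:/var/backups/ \
    "/srv/pulled-backups/prod/$(date +%F)/"
```

Each run lands in a dated directory, so a compromise of the live server can’t reach into yesterday’s copies. If full daily copies take too much space, rsync’s --link-dest option can hard-link unchanged files against the previous day’s directory.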
Restoration
Restoration is the end goal. If you haven’t restored a backup, you only have a theoretical backup and recovery system. Is your backup system encrypted? Great! Do you have the decryption keys and know how to use them? Can you restore the database? Do you know how to deploy a new system from backup? Will you need to change DNS or network settings after restoration? Answers to these questions and periodic data restoration drills should be a part of any robust backup system.
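A restore drill can be as simple as scripting the steps on a scratch server and checking that they actually produce usable data. This sketch assumes the encrypted dump and rclone remote from the earlier examples, and the table name in the final check is purely illustrative:

```bash
#!/bin/sh
# Hypothetical restore drill -- run on a scratch server, never on production.
set -eu

# Fetch today's encrypted dump from object storage.
DUMP=example_db-$(date +%F).sql.gpg
rclone copy "vultr-s3:example-bucket/backups/db/$DUMP" .

# Decrypt it -- this is where missing keys and forgotten passphrases surface.
gpg --output example_db.sql --decrypt "$DUMP"

# Load it into a throwaway database and run a basic sanity check.
mysql -e "CREATE DATABASE restore_test"
mysql restore_test < example_db.sql
mysql -e "SELECT COUNT(*) FROM restore_test.users"   # "users" is a placeholder table
```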
Summary
Trust nothing. Assume each part of your backup system can fail, and have a contingency. Don’t rely on replication. Make sure your backups are secure from snooping and tampering. And finally, test your recovery process.
With a proper 3-2-1 backup strategy, you won’t need to retype twenty pages of BASIC code to play a blackjack game or send your customers a note explaining why they lost their data. If your backup and recovery plan assumes the hardware is unreliable, you’ll never be disappointed.
Discover the right storage solution for your needs with Vultr’s Object Storage and Block Storage options. Add Vultr Automatic Backups to your compute instances for easy-to-manage data protection.