The amount of data organizations deal with has increased massively in the last few years. How has this affected the way they handle backups?
You're correct. The amount of data we are dealing with today is significantly larger. In fact, our Business Data Centers are managing over 30% more data than they were one year ago. This has forced enterprise data centers to operate smarter and employ new technologies to curb costs. A couple of the technologies we now utilize are data de-duplication and OpenStack Swift.
With de-duplication, the resulting footprint of our backup process is significantly smaller than it would be without this technology. However, with the data growth we've been experiencing, optimizing backups alone is not enough to keep pace. And, just like other data centers in the industry, we do not have the luxury of always throwing more money at ongoing operations. Cost-effective storage solutions like OpenStack Swift allow us to keep costs at a digestible level while also keeping data online and more readily available.
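The core idea behind de-duplication can be sketched in a few lines: chunks of backup data are stored under their content hash, so identical chunks across backups occupy space only once. This is a minimal illustrative sketch, not the interviewee's actual system; the function name and chunking are assumptions.

```python
import hashlib

def dedupe_store(chunks, store=None):
    """Store data chunks by content hash; duplicate chunks are kept only once.

    Returns the list of hash references for this backup plus the shared store.
    """
    store = {} if store is None else store
    refs = []
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:   # new content: store the bytes once
            store[digest] = chunk
        refs.append(digest)       # always record a reference for this backup
    return refs, store

# Two backups that share most of their data add little to the store.
backup1 = [b"block-a", b"block-b", b"block-c"]
backup2 = [b"block-a", b"block-b", b"block-d"]  # only one changed block
refs1, store = dedupe_store(backup1)
refs2, store = dedupe_store(backup2, store)
# store now holds 4 unique blocks instead of 6
```

Real de-duplication engines add variable-size chunking and reference counting for deletion, but the space savings come from exactly this content-addressing trick.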
In an always-online world, what would you say is an appropriate recovery time?
The obvious answer is that less is best, and always on is even better. It has been our experience that our most critical services (those application services that drive revenue or differentiate our business from our competitors) should operate at 100% availability. We utilize extremely high redundancy so that failures do not impact the application service and no recovery is needed at all.
Unfortunately, a highly redundant architecture typically cannot be implemented for all of your application services due to cost. For these less-critical services it is a matter of weighing the solution cost against the recovery time. Typically, the closer you drive toward zero downtime, the more expensive the solution will be. Understand what is acceptable. Have the conversation with the consumer of the application service: determine their recovery time expectations and what you are both willing to pay in ongoing costs. It is a delicate balance. Find the middle ground that everyone can work within.
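That cost-versus-recovery-time conversation can be made concrete with back-of-the-envelope arithmetic: compare each option's yearly solution cost plus its expected downtime cost. All figures and option names below are hypothetical, chosen only to illustrate the trade-off the interviewee describes.

```python
def annual_downtime_cost(rto_hours, incidents_per_year, revenue_loss_per_hour):
    """Expected yearly downtime cost given a recovery time objective (RTO)."""
    return rto_hours * incidents_per_year * revenue_loss_per_hour

# Hypothetical recovery options for one application service.
options = {
    "hot standby":         {"solution_cost": 200_000, "rto_hours": 0.1},
    "warm standby":        {"solution_cost": 80_000,  "rto_hours": 4},
    "restore from backup": {"solution_cost": 20_000,  "rto_hours": 24},
}

# Total yearly cost = solution cost + expected downtime cost
# (assuming 2 incidents/year and $5,000 revenue lost per hour of downtime).
totals = {
    name: o["solution_cost"]
    + annual_downtime_cost(o["rto_hours"], incidents_per_year=2,
                           revenue_loss_per_hour=5_000)
    for name, o in options.items()
}
cheapest = min(totals, key=totals.get)
```

With these illustrative numbers the middle option wins: the near-zero-RTO architecture costs more than its downtime savings, and the cheap option bleeds revenue during long recoveries. That middle ground is exactly what the conversation with the service consumer should find.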
What advice would you give to an organization that wants to strengthen its basic backup plan?
Most organizations have not zoomed out from their day-to-day backup operations in a long time. I suggest they first step out of the weeds and validate why backups are being done. For example, are you archiving data for long-term retrieval in addition to supporting short-term recovery? Verify how often you are backing up and what is being backed up. Also determine the various backup and recovery methods in use.
Answering these questions will lead an organization to complete some very valuable housekeeping and to determine which backup and recovery solutions are optimal. In all honesty, your backup plan is not the most critical aspect; your recovery plan is. So I suggest running regular recovery exercises that test every backup solution you have in place. Define failure scenarios of varying degrees of impact (from an isolated hardware failure to a complete site disaster) and then execute your recovery process. This validates that your backup and recovery process is sound and that you can meet recovery time expectations.
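One simple, automatable piece of such a recovery exercise is verifying that restored data actually matches the original. A minimal sketch, assuming a directory-to-directory restore; the function names are illustrative, and a real exercise would run this against production-scale datasets.

```python
import hashlib
import pathlib
import shutil
import tempfile

def checksum(path):
    """SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_restore(original_dir, restored_dir):
    """Return the files whose restored copy is missing or differs from the original."""
    failures = []
    for src in pathlib.Path(original_dir).rglob("*"):
        if src.is_file():
            dst = pathlib.Path(restored_dir) / src.relative_to(original_dir)
            if not dst.exists() or checksum(src) != checksum(dst):
                failures.append(str(src))
    return failures

# Simulate a recovery exercise with temporary directories.
with tempfile.TemporaryDirectory() as orig, tempfile.TemporaryDirectory() as rest:
    (pathlib.Path(orig) / "db.dump").write_text("payload")
    shutil.copytree(orig, rest, dirs_exist_ok=True)  # stand-in for a real restore
    clean = verify_restore(orig, rest)  # empty list means the restore checks out
```

An exercise that ends with an empty failure list is evidence the backup and recovery process is sound; a non-empty list tells you exactly which files to investigate before a real disaster forces the question.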