Friday 14 August 2015

DBA Training Week 3 - Automating Restores

Doing the Fastest Database Restores
If you're doing transaction log backups, forget using the GUI. Even if you only have one transaction log backup per hour, it'll take you way too long to click through all the different files.

Think about what your backup folder might look like if we named our backup files by database, date, time, and a different extension per type of backup (BAK for fulls, DIF for differentials, and TRN for transaction logs):
  • MyDatabase_20130718_0000.bak
  • MyDatabase_20130718_0100.trn
  • MyDatabase_20130718_0200.trn
  • MyDatabase_20130718_0300.trn
  • MyDatabase_20130718_0400.trn
  • MyDatabase_20130718_0500.trn
  • MyDatabase_20130718_0600.dif
  • MyDatabase_20130718_0700.trn
  • MyDatabase_20130718_0800.trn
In that scenario, I took my full backup at midnight, then hourly transaction logs, with differential backups every 6 hours. (This is never a scenario I'd use in the real world, but it's the simplest way to get the point across in a tiny email. Hey, you try teaching tough concepts via email, buddy.)
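Just so you can see where names like that come from, here's a minimal sketch of the T-SQL a backup job might run. The database name, the backup folder, and the timestamp format are all assumptions for illustration (and FORMAT needs SQL Server 2012 or newer):

  DECLARE @stamp VARCHAR(13) = FORMAT(GETDATE(), 'yyyyMMdd_HHmm');
  DECLARE @path  VARCHAR(260);

  -- Full backup (midnight in this scenario)
  SET @path = 'X:\Backups\MyDatabase\MyDatabase_' + @stamp + '.bak';
  BACKUP DATABASE MyDatabase TO DISK = @path WITH INIT;

  -- Transaction log backup (hourly)
  SET @path = 'X:\Backups\MyDatabase\MyDatabase_' + @stamp + '.trn';
  BACKUP LOG MyDatabase TO DISK = @path WITH INIT;

  -- Differential backup (every 6 hours)
  SET @path = 'X:\Backups\MyDatabase\MyDatabase_' + @stamp + '.dif';
  BACKUP DATABASE MyDatabase TO DISK = @path WITH DIFFERENTIAL, INIT;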

If disaster strikes at 8:15AM and I lose the server, I need to restore the full, the most recent differential, and the transaction logs that follow the differential, like this (T-SQL sketch right after the list):
  • MyDatabase_20130718_0000.bak
  • MyDatabase_20130718_0600.dif
  • MyDatabase_20130718_0700.trn
  • MyDatabase_20130718_0800.trn
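Here's roughly what that sequence looks like in plain T-SQL - just a sketch with made-up paths, assuming the data and log files can go back to their original locations:

  -- Restore the full, but stay in restoring mode so we can apply more
  RESTORE DATABASE MyDatabase
    FROM DISK = 'X:\Backups\MyDatabase\MyDatabase_20130718_0000.bak'
    WITH NORECOVERY;

  -- Apply the most recent differential
  RESTORE DATABASE MyDatabase
    FROM DISK = 'X:\Backups\MyDatabase\MyDatabase_20130718_0600.dif'
    WITH NORECOVERY;

  -- Apply every log backup taken after that differential, in order
  RESTORE LOG MyDatabase
    FROM DISK = 'X:\Backups\MyDatabase\MyDatabase_20130718_0700.trn'
    WITH NORECOVERY;
  RESTORE LOG MyDatabase
    FROM DISK = 'X:\Backups\MyDatabase\MyDatabase_20130718_0800.trn'
    WITH NORECOVERY;

  -- Bring the database online
  RESTORE DATABASE MyDatabase WITH RECOVERY;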
That's a really easy script to write - and thankfully, MSSQLTips has already done it. Just change the variables for your database name and the path where your restore files live, and presto, the script will restore all of the files for you automatically.

This Script Has Implications For You

You need to back up your databases intelligently. You want to put each database in its own folder, and you want the file names to have the database name and the time in them. I'm a big fan of that anyway - it makes life easier when I want to quickly scan and see what backups have been done.

It assumes the same database path when you restore. If you're the kind of DBA who likes to change drive letters and folders all the time, if you've got files scattered all over the place, or if your production and development database servers have different drive letters, life is going to be harder for you. There are other restore scripts that can adjust data and log file locations at restore time, but they're not quite as elegant when it comes to restoring fulls, diffs, and t-logs.
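That said, if all you need is to land the files on different drives, the MOVE option handles it. A minimal sketch - the logical file names MyDatabase and MyDatabase_log are assumptions, so check yours with RESTORE FILELISTONLY first:

  -- See the logical file names stored inside the backup
  RESTORE FILELISTONLY
    FROM DISK = 'X:\Backups\MyDatabase\MyDatabase_20130718_0000.bak';

  -- Relocate the data and log files during the restore
  RESTORE DATABASE MyDatabase
    FROM DISK = 'X:\Backups\MyDatabase\MyDatabase_20130718_0000.bak'
    WITH MOVE 'MyDatabase'     TO 'D:\Data\MyDatabase.mdf',
         MOVE 'MyDatabase_log' TO 'L:\Logs\MyDatabase_log.ldf',
         NORECOVERY;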

You can do transaction log backups as often as you want. If the restore process is fully automated, why not do transaction log backups every minute? It doesn't cost you any extra. Rather than incurring extra overhead, it can actually incur LESS overhead. If you're only backing up logs once an hour, your users probably feel that hourly load big time when it kicks off. If you do smaller backups more frequently, they'll be less likely to notice the impact.

You can automate your own fire drill testing. Since this is done with plain old T-SQL, you can set up a SQL Agent job to restore last night's backups onto your staging server. I like setting up a DBA utility server chock full of really cheap (but really large) SATA drives, like 2-4TB drives in a RAID 5. I have a series of Agent jobs to restore a different server's backups every day, and then run DBCC jobs on them. This way I know I'm getting really good backups.
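The guts of a fire drill job step can be as simple as a restore followed by a consistency check. A sketch, assuming a made-up UNC path to the production backups and the utility server's cheap SATA drive:

  -- Restore last night's full onto the utility server
  RESTORE DATABASE MyDatabase_Test
    FROM DISK = '\\PROD1\Backups\MyDatabase\MyDatabase_20130718_0000.bak'
    WITH MOVE 'MyDatabase'     TO 'S:\SATA\MyDatabase_Test.mdf',
         MOVE 'MyDatabase_log' TO 'S:\SATA\MyDatabase_Test_log.ldf',
         RECOVERY, REPLACE;

  -- If this throws errors, I want to know long before disaster strikes
  DBCC CHECKDB (MyDatabase_Test) WITH NO_INFOMSGS, ALL_ERRORMSGS;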

Your Homework This Week

You don't have to go build this whole infrastructure out - but start laying the groundwork by making sure your backup files are named in a restore-friendly way.

Then, try out the MSSQLTips restore script to make sure you can quickly restore a database from scratch into a development or staging environment. If you get errors, leave comments on that blog post describing the error, and ideally, contribute your script improvements back. Now, when disaster strikes, you won't be clicking around blindly in a GUI - you'll just open a proven script and hit Execute. Bam!

Brent says: You can use this technique to build smaller copies of your production databases. For example, at StackOverflow, I built an ultra-compressed backup with even more tricks. On my nightly restore server, after restoring the production backups, I dropped the non-clustered indexes, rebuilt all objects with 100% fill factor, and shrank the data and log files down.

The end result was a database that was over 50% smaller!

I then backed it up with compression, and left those backups in a folder that the developers could access. That made it easier for developers to quickly grab a copy of production data as of last night, copy it to their workstation as quickly as possible, and restore it faster with less drive space required.
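If you want to try the same trick, the broad strokes look something like this. This is a sketch, not Brent's actual script - the table, index, and logical file names are made up, and in real life you'd generate the DROP INDEX statements dynamically from sys.indexes:

  -- 1. Drop the non-clustered indexes (generate these from sys.indexes)
  DROP INDEX IX_Posts_OwnerUserId ON dbo.Posts;

  -- 2. Rebuild at 100% fill factor to squeeze out the free space
  ALTER INDEX ALL ON dbo.Posts REBUILD WITH (FILLFACTOR = 100);

  -- 3. Shrink the data and log files (target sizes in MB)
  DBCC SHRINKFILE (MyDatabase, 1024);
  DBCC SHRINKFILE (MyDatabase_log, 128);

  -- 4. Back it up compressed for the developers to grab
  BACKUP DATABASE MyDatabase
    TO DISK = 'X:\DevCopies\MyDatabase_DevCopy.bak'
    WITH COMPRESSION, INIT;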

Log Shipping Training Video: Jes explains how log shipping is really just doing your restores ahead of time.

Backup Planning Video: Jeremiah explains 3 company scenarios and asks you to pick the right backup strategy for each.
