Optimizing SQL Server Backup Performance

Configuring Backup Settings

This video covers the five key settings for tuning database backups.

Keywords

  • SQL Server
  • backups
  • block size
  • max transfer size
  • buffer count
  • file count
  • compression

About this video

Author(s)
Kevin Feasel
First online
09 June 2021
DOI
https://doi.org/10.1007/978-1-4842-7194-0_3
Online ISBN
978-1-4842-7194-0
Publisher
Apress
Copyright information
© Kevin Feasel 2021

Video Transcript

Welcome to the foundation for the rest of this video. In this section, we’re going to talk about a few important configuration settings which can help with backup performance. The first of these is block size. Block size is the physical block size: that is, how large a block is on the device you’re writing your backups to. Block sizes range from 512 bytes up to 64 kilobytes. Now the thing about block size is that it really only matters if you’re writing to a tape drive or a CD-ROM, where writing a full block is critical for performance. If you’re just writing to disk or the cloud, leave the setting at the default. It really doesn’t matter.
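
To make the syntax concrete, here is a minimal sketch of setting block size explicitly. The database name and drive match the demo later in this video, but the exact path is illustrative; on disk targets you would normally just omit BLOCKSIZE.

    -- A sketch only: on disk or cloud targets, the default block size is fine.
    -- Valid values are powers of two from 512 bytes through 65536 (64 KB).
    BACKUP DATABASE WideWorldImporters
        TO DISK = N'G:\WideWorldImporters.bak'
        WITH BLOCKSIZE = 65536;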

The next setting is Max Transfer Size. This is the maximum amount of data which can be transferred per operation. There are seven valid values ranging from 64 kilobytes up to 4 megabytes. The metaphor I like to use here is a bucket. Max transfer size is akin to how much water that bucket can hold.

Next up is buffer count. This represents the number of buffers in memory that should be created for a backup. You can make however many buffers you want, as long as it’s a whole number. To extend the prior metaphor, if max transfer size is how much water a bucket can hold, buffer count is the number of buckets you have available. I do recommend keeping this number at or below 1,024, however. The reason why is that if you have 1,024 buffers, each of which holds 4 megabytes of data, that’s 4 gigabytes of memory used just to take a backup. I don’t know about you, but I don’t like the idea of using 4 gigs of RAM to back up a single database. I mean, I need that memory for my buffer pool.
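
To put numbers on the bucket metaphor: the memory a backup uses is roughly buffer count times max transfer size. Here is a minimal sketch combining both options; the values are illustrative, not a recommendation.

    -- Approximate backup memory = BUFFERCOUNT * MAXTRANSFERSIZE.
    -- At the extreme, 1,024 buffers * 4 MB each = 4 GB for one backup;
    -- the values below work out to about 100 MB instead.
    BACKUP DATABASE WideWorldImporters
        TO DISK = N'G:\WideWorldImporters.bak'
        WITH MAXTRANSFERSIZE = 2097152,  -- 2 MB per buffer, specified in bytes
             BUFFERCOUNT = 50;           -- 50 buffers * 2 MB = roughly 100 MB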

One sneaky contender for performance benefits is to set the file count. This allows you to stripe your backup across a number of files. You can choose however many files you’d like, as long as it’s a whole number. The benefit to this is that it’s a nice way of getting some extra throughput out of those drives. I’ve noticed that even if you’re writing everything to the same drive, you still benefit from using multiple files. If you’re writing to direct-attached storage, I recommend somewhere between 1 and 12 files per drive, and for a SAN, 1 to 12 total. That’s a pretty wide range, but we’ll try to narrow it down a bit as we go forward.
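
There is no separate keyword for file count; you stripe a backup simply by listing more than one file in the TO clause. A minimal sketch with two files on the same drive (paths are illustrative):

    -- Striping one backup across two files on the same drive.
    BACKUP DATABASE WideWorldImporters
        TO DISK = N'G:\WideWorldImporters_1.bak',
           DISK = N'G:\WideWorldImporters_2.bak'
        WITH NOFORMAT, INIT;

One thing to keep in mind: a restore needs every file in the stripe set, so keep those files together.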

Next up is compression. This is an interesting setting. Conceptually, it’s really easy. You tell SQL Server whether you want that backup to be compressed or not. There are only two valid values here, true and false. There is a minor CPU cost to performing this backup compression. But it usually results in much smaller backups, so oftentimes the trade-off is worth it. That said, I do need to talk to you about some weirdness with Transparent Data Encryption and compressed backups. If you’re using a version of SQL Server prior to 2016, and you’ve encrypted a database using Transparent Data Encryption, it won’t compress at all. So you have to be using at least SQL Server 2016.
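
In T-SQL, compression is just one more WITH option:

    -- Request a compressed backup. If you omit the option, the instance-level
    -- 'backup compression default' setting decides whether compression is used.
    BACKUP DATABASE WideWorldImporters
        TO DISK = N'G:\WideWorldImporters.bak'
        WITH COMPRESSION;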

But there were some bugs in SQL Server 2016 which led to backup corruption when TDE was enabled. It didn’t cause any problem with the database, but sometimes restoration of those backups would fail, and well, that’s a mess. These bugs have been fixed, but be sure to install patches and keep those servers up to date. Also, max transfer size must be greater than 64 kilobytes for this to work. But what’s the default max transfer size in SQL Server? Well, it’s 64 kilobytes. So you have to set this explicitly to get backup compression to work. As of SQL Server 2019 CU5, if you back up an encrypted database and do not set the max transfer size above 64 kilobytes, the backup service will automatically set your max transfer size to 128 kilobytes. That’s another benefit to staying up to date.
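
Here is a sketch of a compressed backup of a TDE-protected database. EncryptedDB is a hypothetical database name; the explicit max transfer size is what lets compression engage on builds older than SQL Server 2019 CU5.

    -- EncryptedDB is a hypothetical TDE-enabled database.
    -- MAXTRANSFERSIZE above the 64 KB default is required for compression
    -- to work with TDE on builds before SQL Server 2019 CU5.
    BACKUP DATABASE EncryptedDB
        TO DISK = N'G:\EncryptedDB.bak'
        WITH COMPRESSION,
             MAXTRANSFERSIZE = 131072;  -- 128 KB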

Now that we’ve walked through these settings, let’s give them a try against a real database. In this demo, we are going to back up a database. The database I will back up here is the Wide World Importers database. So the T-SQL command to do this is BACKUP DATABASE, and I’ll say TO DISK. Save it as a .bak file on the G drive with the NOFORMAT and INIT options. Those two settings essentially say: if there’s a WideWorldImporters.bak file already there, reinitialize it and start over from scratch instead of appending to the file. And I don’t really care about what prior backups might have been in that file.
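
Reconstructed from that description, the baseline command looks roughly like this (the exact folder on the G drive is not shown, so the path is illustrative):

    -- Baseline backup with default performance settings.
    BACKUP DATABASE WideWorldImporters
        TO DISK = N'G:\WideWorldImporters.bak'
        WITH NOFORMAT, INIT;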

Now this whole thing will take roughly 21 seconds to finish. So here we go. It took 21 and a half seconds. That is with our default settings. Now I’m going to try to back up with some other settings. So we’ll say BACKUP DATABASE WideWorldImporters. This time, instead of saving to one file, I’m going to save to two separate files. Both files will be on the same drive, but that’s OK. I can do this, and you can still get performance benefits from it, up to a point. I will once again say NOFORMAT and INIT. In addition, let’s set max transfer size to 2 megabytes, which I have to type out in bytes. I’ll set buffer count to 50. I’ll set block size to 8 kilobytes, and I’ll turn on compression. When we run this configuration, it’s going to take about seven seconds to complete versus the 21 seconds that the original settings took. So we can see that changing settings gives us a nice performance benefit.
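
Reconstructed from that description, the tuned command looks roughly like this (file paths again illustrative):

    -- Tuned backup: two stripes, 2 MB transfers, 50 buffers,
    -- 8 KB blocks, and compression turned on.
    BACKUP DATABASE WideWorldImporters
        TO DISK = N'G:\WideWorldImporters_1.bak',
           DISK = N'G:\WideWorldImporters_2.bak'
        WITH NOFORMAT, INIT,
             MAXTRANSFERSIZE = 2097152,  -- 2 MB, specified in bytes
             BUFFERCOUNT = 50,
             BLOCKSIZE = 8192,
             COMPRESSION;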