Paul Randal

Transaction Log Configuration Issues


In my last two posts I discussed ways to reduce the amount of transaction log being generated and how to ensure the transaction log can always clear properly. In this post I want to continue with the transaction log performance theme and discuss some transaction log configuration issues that can cause problems.

Too Many VLFs

The transaction log is split up into chunks called virtual log files (VLFs) so the log management system can easily keep track of which portions of the transaction log are available for reuse. There is a formula for how many VLFs you get when you create your transaction log, manually grow it, or it auto-grows:

Up to 1MB: 2 VLFs, each roughly 1/2 of the total size
1MB to 64MB: 4 VLFs, each roughly 1/4 of the total size
64MB to 1GB: 8 VLFs, each roughly 1/8 of the total size
More than 1GB: 16 VLFs, each roughly 1/16 of the total size

For example, if you create a transaction log to be 8GB you’ll get 16 VLFs where each is roughly 512MB. If you then grow the log by another 4GB, you’ll get an additional 16 VLFs with each being roughly 256MB, for a total of 32 VLFs.
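As a minimal sketch of how that plays out (the database name, file paths, and sizes here are purely illustrative), this creates a log at 8GB and then grows it to 12GB:

  -- Create a database with an 8GB log: 16 VLFs of roughly 512MB each.
  CREATE DATABASE SalesDB
  ON PRIMARY
      (NAME = N'SalesDB_data', FILENAME = N'D:\Data\SalesDB.mdf', SIZE = 10GB)
  LOG ON
      (NAME = N'SalesDB_log', FILENAME = N'L:\Logs\SalesDB_log.ldf', SIZE = 8GB);
  GO

  -- Grow the log by 4GB (SIZE is the new total): that single growth adds
  -- 16 more VLFs of roughly 256MB each.
  ALTER DATABASE SalesDB
  MODIFY FILE (NAME = N'SalesDB_log', SIZE = 12GB);
  GO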

Note: this algorithm changed slightly for SQL Server 2014 to alleviate the VLF fragmentation problems – see this blog post for details.

A general best practice is to set the log auto-growth to something other than the default 10%, so that you can control the pause that’s required when zero-initializing new transaction log space. Let’s say you create a 256MB transaction log and set the auto-growth to 32MB, and then the log grows to a steady-state size of 16GB. Given the formula above, each 32MB growth adds 4 VLFs, and it takes roughly 500 growths to get from 256MB to 16GB, so your transaction log will end up with more than 2,000 VLFs.
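For instance, a fixed-size growth increment can be set with a simple ALTER DATABASE (the database and logical file names below are placeholders):

  -- Replace the default 10% auto-growth with a fixed 512MB increment,
  -- so each growth (and its zero-initialization pause) is predictable.
  ALTER DATABASE SalesDB
  MODIFY FILE (NAME = N'SalesDB_log', FILEGROWTH = 512MB);
  GO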

This many VLFs will likely result in some performance issues for operations that process the transaction log (e.g. crash recovery, log clearing, log backups, transactional replication, database restores). This situation is called having VLF fragmentation. Generally any number of VLFs more than a thousand or so is going to be problematic and needs to be addressed (the most I’ve ever heard of is 1.54 million VLFs in a transaction log that was more than 1TB in size!).

The way to tell how many VLFs you have is to use the undocumented (and completely safe) DBCC LOGINFO command. The number of rows of output is the number of VLFs in your transaction log. If you think you have too many, the way to reduce them is:

  1. Allow the log to clear
  2. Manually shrink the log
  3. Repeat steps 1 and 2 until the log reaches a small size (which may be tricky on a busy production system)
  4. Manually grow the log to the size it should be, in up to 8GB steps so each VLF is no larger than about 0.5GB
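A rough sketch of that process, assuming log backups (or a checkpoint in the simple recovery model) have allowed the log to clear, and using placeholder names:

  USE SalesDB;
  GO
  -- Each row returned is one VLF, so the row count is your VLF count.
  DBCC LOGINFO;
  GO
  -- Shrink the log as far as possible (repeat the clear/shrink cycle on a busy system).
  DBCC SHRINKFILE (N'SalesDB_log', 1);
  GO
  -- Grow back to the required size in steps of 8GB or less,
  -- so each new VLF is no larger than about 0.5GB.
  ALTER DATABASE SalesDB MODIFY FILE (NAME = N'SalesDB_log', SIZE = 8GB);
  ALTER DATABASE SalesDB MODIFY FILE (NAME = N'SalesDB_log', SIZE = 16GB);
  GO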

You can read more about VLF fragmentation issues, and the process to fix them, in my other posts and articles on the topic.

Tempdb

Tempdb needs to have its transaction log configured just like any other database, and it may grow just like any other database. But it also has some insidious behavior that can cause you problems.
When a SQL Server instance restarts for any reason, tempdb’s data and log files will revert to the size they were most recently set to. This is different from all other databases, which remain at their current size after an instance restart.

This behavior means that if the tempdb transaction log has grown to accommodate the normal workload, you must perform an ALTER DATABASE to set the log file size; otherwise its size will drop after an instance restart and it will have to grow again. Every time a log file grows or auto-grows, the new space must be zero-initialized and logging activity pauses while that is done. So if you do not manage your tempdb log file size correctly, you’ll pay a performance penalty as it grows after each instance restart.
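A minimal example, assuming the default logical file name templog and a steady-state log size of 8GB:

  -- Persist the tempdb log at its steady-state size so it doesn't have to
  -- auto-grow (and zero-initialize) all over again after every instance restart.
  ALTER DATABASE tempdb
  MODIFY FILE (NAME = N'templog', SIZE = 8GB);
  GO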

Regular Log File Shrinking

Quite often I hear people say that they regularly shrink a database’s transaction log after it grows because of a regular operation (e.g. a weekly data import). This is not a good thing to do.

Just as I explained above, whenever the transaction log grows or auto-grows, there’s a pause while the new portion of the log file is zero-initialized. If you’re regularly shrinking the transaction log because it grows to size X, that means you’re regularly suffering performance problems as the transaction log auto-grows back to size X again.

If your transaction log keeps growing to size X, leave it alone! Proactively set it to size X, managing your VLFs as I explained above, and accept size X as the size that’s required for your normal workload. A larger transaction log is not a problem.

Multiple Log Files

There is no performance gain from creating multiple log files for a database. Adding a second log file may be necessary, however, if the existing log file runs out of space and you’re unwilling to force the transaction log to clear by switching to the simple recovery model and performing a checkpoint (as this breaks the log backup chain).

I’m often asked whether there is any pressing reason to remove the second log file or whether it’s ok to leave it in place. The answer is that you should remove it as soon as you can.
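As a sketch (file names and sizes are placeholders), adding a second log file in an emergency and then removing it once the active log has moved back into the original file might look like this:

  -- Emergency: the volume holding the existing log file is full,
  -- so add a second log file on another volume.
  ALTER DATABASE SalesDB
  ADD LOG FILE (NAME = N'SalesDB_log2', FILENAME = N'M:\Logs\SalesDB_log2.ldf', SIZE = 4GB);
  GO

  -- Later, once log backups have allowed the log to clear and no active VLFs
  -- remain in the second file, remove it (REMOVE FILE fails if the file is still in use).
  ALTER DATABASE SalesDB REMOVE FILE SalesDB_log2;
  GO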

Although the second log file doesn’t cause performance problems for your workload, it does affect disaster recovery. If your database is destroyed for some reason, you’ll need to restore it from scratch. The first phase of any restore sequence is to create the data and log files if they don’t exist.

You can make the data file creation almost instantaneous by enabling instant file initialization, which skips the zero-initialization, but that doesn’t apply to log files. This means that the restore has to create all log files that existed when the full backup was taken (or that are created during the period of time covered by a transaction log backup) and zero-initialize them. If you created a second log file and forgot to drop it again, zero-initializing it during a disaster recovery operation is going to add to the total downtime. This isn’t a workload performance problem, but it affects the availability of the server as a whole.
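On recent versions (SQL Server 2016 SP1 and later) you can check whether instant file initialization is enabled for the engine’s service account; remember, though, that it only ever helps data files:

  -- Y means data files can skip zero-initialization; log files are always zeroed.
  SELECT servicename, instant_file_initialization_enabled
  FROM sys.dm_server_services;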

Reverting from a Database Snapshot

The final issue in my list is actually a bug in SQL Server. If you use a database snapshot as a way to quickly recover back to a known point in time without having to restore backups (known as reverting from the snapshot), you can save a lot of time. However, there is a big downside.

When the database reverts from the database snapshot, the transaction log is recreated with two 0.25MB VLFs. This means you will have to grow your transaction log back to its optimal size and number of VLFs (or it will auto-grow itself), with all the zero-initialization and workload pauses I’ve discussed previously. Clearly not the desired behavior.
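A sketch of the revert and the follow-up resize (snapshot and database names are placeholders):

  -- Revert the database to the point at which the snapshot was created.
  RESTORE DATABASE SalesDB FROM DATABASE_SNAPSHOT = N'SalesDB_Snapshot';
  GO
  -- The revert leaves the log as two 0.25MB VLFs, so immediately grow it back
  -- (in steps of 8GB or less) rather than letting it auto-grow during the workload.
  ALTER DATABASE SalesDB MODIFY FILE (NAME = N'SalesDB_log', SIZE = 8GB);
  GO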

Summary

As you can see from this post and my previous two posts, there are many things that can lead to poor transaction log performance, which then has a knock-on effect on the performance of your overall workload.

If you can take care of all these things, you’ll have healthy transaction logs. But it doesn’t end there, as you also need to monitor your transaction logs so you’re alerted to things like auto-growth and excessive read and write I/O latencies. I’ll cover how to do that in a future post.