Knee-Jerk Wait Statistics : PAGELATCH

By Paul Randal, October 19, 2015, in SQL Performance, Waits

Over the last 18 months I’ve been focusing on knee-jerk reactions to wait statistics analysis and other performance-tuning related topics, and in this post I’m going to continue that and discuss the PAGELATCH_XX waits. The XX at the end of the wait means that there are multiple types of PAGELATCH wait, and the most common examples are:

  • PAGELATCH_SH – (SHare) waiting for access to a data file page in memory so that the page contents can be read
  • PAGELATCH_EX or PAGELATCH_UP – (EXclusive or UPdate) waiting for access to a data file page in memory so that the page contents can be modified

When one of these wait types is the most prevalent on a server, the knee-jerk reaction is that the problem is something to do with I/O (i.e. confusion with the PAGEIOLATCH_XX wait type, which I covered in a post back in 2014) and someone tries adding more memory or tweaking the I/O subsystem. Neither of these reactions will have any effect at all, as the data file pages under contention are already in memory in the buffer pool!

In all cases, you can see whether you have a problem with PAGELATCH_XX contention using the sys.dm_os_waiting_tasks script on my blog or using a tool like Performance Advisor, as demonstrated (for a different wait type) in this post.
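As a starting point, a trimmed-down version of that kind of check looks like the following (the full script on my blog does more decoding; this is just a minimal sketch):

```sql
-- Minimal sketch: list tasks currently waiting on in-memory page latches.
-- For PAGELATCH waits, resource_description is in the form dbid:fileid:pageid.
SELECT
    owt.session_id,
    owt.wait_type,
    owt.wait_duration_ms,
    owt.resource_description
FROM sys.dm_os_waiting_tasks AS owt
WHERE owt.wait_type LIKE N'PAGELATCH%'
ORDER BY owt.wait_duration_ms DESC;
```

If this repeatedly shows many sessions waiting on the same page, that page is the contention point to investigate.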

So what’s the source of the contention? First I’ll explain the background behind these wait types, and then I’ll discuss the two most common causes of PAGELATCH_XX contention.

Background: Latches

Before I go into some of the causes of PAGELATCH_XX waits, I want to explain why they even exist.

In any multi-threaded system, data structures that can be accessed and manipulated by multiple threads need to be protected to prevent scenarios such as:

  • Two threads updating a data structure concurrently, and some of the updates are lost
  • A thread updating a data structure concurrently with another thread reading the data structure, so the reading thread sees a mixture of old and new data

This is basic computer science, and SQL Server is no different, so all data structures inside SQL Server need to have multi-threaded access control.

One of the mechanisms that SQL Server uses to do this is called a latch, where holding the latch in exclusive mode prevents other threads from accessing the data structure, and holding the latch in share mode prevents other threads from changing the data structure. SQL Server also uses spinlocks for some data structures and I discussed these in this post back in 2014.

But why is a data file page in memory protected by a latch, you might wonder? Well, a data file page is just a data structure, albeit a special purpose one, and so needs the same access controls as any other data structure. So when one thread needs to modify a data file page it needs to acquire an exclusive or update latch on the page, and if it can’t and needs to wait, the wait type PAGELATCH_EX or PAGELATCH_UP results.

Classic tempdb Contention

PAGELATCH contention in tempdb is typically on allocation bitmaps and occurs in workloads where many concurrent connections create and drop small temporary tables (which are stored in tempdb).

When the first row is inserted into a temporary table, two pages must be allocated (a data page and an IAM page, which tracks the data page). These pages need to be marked as allocated in a special allocation page called a PFS page, and by default are allocated from special data extents that are tracked by another allocation page called an SGAM page (details of these can be found in my old blog post here). When the temporary table is dropped, these pages need to be deallocated again, necessitating more changes to the PFS and SGAM pages.

If the temporary tables are small, and the cumulative size of all concurrently created temporary tables is less than 64MB, then all these allocation bitmap changes are centered on the very first PFS and SGAM pages in the tempdb data file (with page ID (1:1) and (1:3) respectively). Updating one of these allocation pages requires latching the page, and only one thread at a time can be changing the page, so all other threads have to wait – with wait type PAGELATCH_UP.
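You can spot this pattern directly in the waiting-tasks output. The sketch below (assuming tempdb is database ID 2, and looking only at the first PFS and SGAM pages in each file) labels the contended pages:

```sql
-- Sketch: are current PAGELATCH waits on tempdb allocation bitmaps?
-- resource_description is dbid:fileid:pageid; tempdb is database ID 2.
-- Page 1 in each data file is the first PFS page, page 3 the first SGAM page.
-- (PFS and SGAM pages repeat at intervals further into large files, so this
-- only labels the first ones - the usual hotspot.)
SELECT
    owt.session_id,
    owt.wait_type,
    owt.resource_description,
    CASE
        WHEN owt.resource_description LIKE N'2:%:1' THEN 'PFS'
        WHEN owt.resource_description LIKE N'2:%:3' THEN 'SGAM'
        ELSE 'Other'
    END AS tempdb_page_type
FROM sys.dm_os_waiting_tasks AS owt
WHERE owt.wait_type LIKE N'PAGELATCH%'
  AND owt.resource_description LIKE N'2:%';
```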

From SQL Server 2005 onwards, temporary tables can be cached when dropped, as long as they’re less than 8MB in size (and in SQL Server 2014 aren’t created in a stored procedure that also has DDL statements on the temporary table). This means that the next thread that executes the same query plan can take the temporary table out of the cache and not have to deal with the initial allocations. This cuts down on contention on the allocation bitmaps, but the temporary table cache isn’t very big, so workloads with hundreds of concurrent temporary table creates/drops will still see lots of contention.

It’s trivial to prevent the contention on the SGAM pages in tempdb by enabling documented trace flag 1118 on the server, which I say should be enabled on all servers across the world, and is actually the unchangeable default behavior in SQL Server 2016.
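Enabling the trace flag for the running instance is a one-liner; to survive restarts it should also be a startup parameter:

```sql
-- Enable trace flag 1118 globally for the running instance:
DBCC TRACEON (1118, -1);

-- To make it persist across restarts, also add -T1118 as a startup
-- parameter via SQL Server Configuration Manager. (In SQL Server 2016
-- this allocation behavior is the default for tempdb and the trace
-- flag no longer has any effect.)
```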

Preventing contention on the PFS pages in tempdb is a bit more difficult. Assuming that the temporary tables are needed for performance, the trick is to have multiple data files for tempdb so that the allocations are done round-robin among the files, the contention is split over multiple PFS pages, and so the overall contention goes down. There is no right answer for how many data files you should have unfortunately. You can read more about the generally accepted guidance on this in KB article 2154845 and in this blog post.
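Adding a data file is straightforward; the key is to keep all the tempdb files the same size and growth settings so the round-robin allocation stays even. A hypothetical example (the file path and sizes are placeholders; adjust for your system):

```sql
-- Add another tempdb data file, matching the size and growth of the
-- existing file(s) so allocations round-robin evenly across files.
ALTER DATABASE tempdb
ADD FILE
(
    NAME = N'tempdev2',
    FILENAME = N'T:\TempDB\tempdev2.ndf',
    SIZE = 4GB,
    FILEGROWTH = 512MB
);
```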

Insert Hotspot

In user databases, a common cause of a high number of PAGELATCH_EX waits is an insert hotspot.

This can occur when a table has a clustered index with an int or bigint cluster key, and a row size that’s small enough so that many tens or more table rows can fit on a data page at the leaf level of the clustered index.

For such a table, if the workload involves many tens or hundreds of concurrent threads inserting into the table, many of the threads will generate rows with identity values (and hence cluster keys) that need to be inserted onto the same leaf-level data page.
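As a concrete illustration, a hypothetical schema with exactly this shape (the table and column names are made up):

```sql
-- Hypothetical hotspot-prone table: narrow rows plus an ever-increasing
-- identity cluster key, so every concurrent insert targets the current
-- right-most leaf page of the clustered index.
CREATE TABLE dbo.Orders
(
    OrderID   BIGINT IDENTITY (1, 1) NOT NULL,
    OrderDate DATETIME2 (0) NOT NULL,
    Amount    MONEY NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID)
);
```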

Now remember that making any change to a data file page in memory requires an exclusive latch, so each of the threads trying to insert onto the same page must acquire the page’s latch exclusively. While each thread is holding the exclusive latch, the other threads will be waiting for PAGELATCH_EX for that page, essentially making the concurrent inserts into a hugely-bottlenecked synchronous process.

There are a few possible fixes for this problem:

  • Use a more random key, and recognize that this will lead to index fragmentation so also make use of an index fill factor to help prevent page splits
  • Spread the inserts out in the table using some kind of artificial partitioning mechanism
  • Use a longer table row size (this is obviously the least palatable option)
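As a sketch of the artificial-partitioning idea (again with hypothetical names), one common approach is to hash the identity value into a small number of buckets and lead the cluster key with the bucket, so concurrent inserts spread over a handful of "hot" pages instead of one:

```sql
-- Sketch: spread inserts across 8 insertion points by leading the
-- cluster key with a persisted computed "bucket" column.
CREATE TABLE dbo.Orders
(
    OrderID   BIGINT IDENTITY (1, 1) NOT NULL,
    BucketID  AS (CONVERT (TINYINT, OrderID % 8)) PERSISTED NOT NULL,
    OrderDate DATETIME2 (0) NOT NULL,
    Amount    MONEY NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (BucketID, OrderID)
);
```

The trade-off is that a range scan on OrderID now has to touch all eight buckets, so test whether the reduced insert contention outweighs the extra read cost for your workload.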

I’ve seen an insert hotspot like this crop up when someone has tried to remove index fragmentation problems by changing a random GUID cluster key to an int or bigint identity cluster key, but failed to test the new table schema under production loads.

Summary

Just as with other wait types, understanding exactly what PAGELATCH_XX waits mean is key to understanding how to troubleshoot them.

As far as general wait statistics are concerned, you can find more information about using them for performance troubleshooting in my previous posts in this series.

Until next time, happy troubleshooting!