Knee-Jerk PerfMon Counters : Page Life Expectancy

October 20, 2014 by Paul Randal in SQL Performance

All my posts this year have been about knee-jerk reactions to wait statistics, but in this post I’m deviating from that theme to talk about a particular bugbear of mine: the page life expectancy performance counter (which I’ll call PLE).

What Does PLE Mean?

There are all kinds of incorrect statements about page life expectancy out there on the Internet, and the most egregious are those stating that 300 is the threshold at which you should start worrying.

To understand why this statement is so misleading, you need to understand what PLE actually is.

The definition of PLE is the expected time, in seconds, that a data file page read into the buffer pool (the in-memory cache of data file pages) will remain in memory before being pushed out of memory to make room for a different data file page. Another way to think of PLE is as an instantaneous measure of the pressure on the buffer pool to make free space for pages being read from disk. For both of these definitions, a higher number is better.

What’s A Good PLE Threshold?

A PLE of 300 means your entire buffer pool is effectively being flushed and re-read every five minutes. When Microsoft first gave the threshold guidance of 300 for PLE, around 2005/2006, that number may have made more sense, as the average amount of memory on a server was much lower.

Nowadays, when servers routinely have 64GB, 128GB, and higher amounts of memory, having roughly that much data read from disk every five minutes would itself likely be a crippling performance issue. (For instance, cycling a 128GB buffer pool every 300 seconds works out to more than 400MB per second of sustained read I/O just to repopulate it.)

In reality then, by the time PLE is hovering at or below 300, your server is already in dire straits. You’d want to start worrying way, way before PLE gets that low.

So what’s the threshold to use for when you should be worried?

Well, that’s just the point. I can’t give you a threshold, as that number’s going to vary for everyone. If you really, really want a number to use, my colleague Jonathan Kehayias came up with a formula:

( Buffer pool memory in GB / 4 ) x 300

Even that number is somewhat arbitrary, and your mileage is going to vary.
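
If you want to see what that formula works out to on a particular server, here’s a minimal example query – treat it as a sketch rather than a polished diagnostic script. It derives the number from the Buffer Manager:Database pages counter, which reports the current buffer pool size in 8KB pages (the LIKE is there because named instances report the object as MSSQL$<instance>:Buffer Manager):

    -- Derive the (buffer pool GB / 4) x 300 baseline from the current buffer pool size
    DECLARE @BufferPoolGB decimal(10, 2);

    SELECT @BufferPoolGB = cntr_value * 8.0 / 1024 / 1024   -- 8KB pages -> GB
    FROM sys.dm_os_performance_counters
    WHERE object_name LIKE N'%Buffer Manager%'
        AND counter_name = N'Database pages';

    SELECT
        @BufferPoolGB AS [Buffer Pool (GB)],
        (@BufferPoolGB / 4) * 300 AS [Suggested PLE Threshold (seconds)];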

I don’t like to recommend any numbers. My advice is for you to measure your PLE when performance is at the desired level – that’s the threshold that you use.

So do you start to worry as soon as PLE drops below that threshold? No. You start to worry when PLE drops below that threshold and stays below that threshold, or if it drops precipitously and you don’t know why.

This is because some operations (e.g. running DBCC CHECKDB, or index rebuilds) will sometimes cause a PLE drop and aren’t cause for concern. But if you see a large PLE drop and you don’t know what’s causing it, that’s when you should be concerned.

You might be wondering how DBCC CHECKDB can cause a PLE drop when it does disfavoring and tries hard to avoid flushing the buffer pool with the data it uses (see this blog post for an explanation). It’s because the query execution memory grant for DBCC CHECKDB is miscalculated by the Query Optimizer and can cause a big reduction in the size of the buffer pool (the memory for the grant is stolen from the buffer pool) and a consequent drop in PLE.
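
If you want to see those grants for yourself, here’s a minimal example query – again just a sketch – using the sys.dm_exec_query_memory_grants DMV (which also comes up later in this post). A very large granted_memory_kb value while DBCC CHECKDB is running is memory that has been stolen from the buffer pool:

    -- Current query execution memory grants, largest first; this memory is
    -- stolen from the buffer pool, so a huge grant shrinks it and drops PLE
    SELECT
        mg.session_id,
        mg.requested_memory_kb,
        mg.granted_memory_kb,
        mg.used_memory_kb,
        t.text AS [query_text]
    FROM sys.dm_exec_query_memory_grants AS mg
    CROSS APPLY sys.dm_exec_sql_text(mg.sql_handle) AS t
    ORDER BY mg.granted_memory_kb DESC;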

How Do You Monitor PLE?

This is the tricky bit. Most people will go straight to the Buffer Manager performance object in PerfMon and monitor the Page life expectancy counter. Is this the right approach? Most likely not.

I’d say that a large majority of servers out there today are using NUMA architecture, and this has a profound effect on how you monitor PLE.

When NUMA is involved, the buffer pool is split up into buffer nodes, with one buffer node per NUMA node that SQL Server can ‘see’. Each buffer node tracks PLE separately and the Buffer Manager:Page life expectancy counter is the average of the buffer node PLEs. If you’re just monitoring the overall buffer pool PLE, then pressure on one of the buffer nodes may be masked by the averaging (I discuss this in a blog post here).

So if your server is using NUMA, you need to monitor the individual Buffer Node:Page life expectancy counters (there will be one Buffer Node performance object for each NUMA node); otherwise, you’re fine monitoring the Buffer Manager:Page life expectancy counter.
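
Here’s a minimal example query – a sketch you can adapt – that reads PLE per buffer node from sys.dm_os_performance_counters, with the overall Buffer Manager average alongside for comparison. On a non-NUMA server you’ll simply see one Buffer Node row matching the Buffer Manager value:

    -- PLE per NUMA node (Buffer Node) plus the overall average (Buffer Manager)
    SELECT
        RTRIM(object_name)   AS [Object],
        RTRIM(instance_name) AS [Node],          -- NUMA node id for Buffer Node rows
        cntr_value           AS [PLE (seconds)]
    FROM sys.dm_os_performance_counters
    WHERE counter_name = N'Page life expectancy'
        AND (object_name LIKE N'%Buffer Manager%' OR object_name LIKE N'%Buffer Node%')
    ORDER BY [Object], [Node];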

Even better is to use a monitoring tool like SQL Sentry Performance Advisor, which will show this counter as part of the dashboard, taking into account the NUMA nodes on the server, and allow you to easily configure alerts.

Examples of Using Performance Advisor

Below is an example portion of a screen capture from Performance Advisor for a system with a single NUMA node:

On the right-hand side of the capture, the pink-dashed line is the PLE between 10.30am and about 11.20am – it’s climbing steadily up to 5,000 or so, a really healthy number. Just before 11.20am there’s a huge drop, and then it starts to climb again until 11.45am, when it drops again.

This is typically what you would see if the buffer pool is full, with all the pages being used, and then a query runs that causes a huge amount of different data to be read from disk, displacing much of what’s already in memory and causing a precipitous drop in PLE. If you didn’t know what caused something like this, you’d want to investigate, as I describe further down.

As a second example, the screen capture below is from one of our Remote DBA clients where the server has two NUMA nodes (you can see that there are two purple PLE lines), and where we use Performance Advisor extensively:

On this client’s server, every morning at around 5am, an index maintenance and consistency checking job kicks off that causes the PLE to drop in both buffer nodes. This is expected behavior, so there’s no need to investigate as long as PLE rises up again during the day.

What Can You Do About PLE Dropping?

If the cause of the PLE drop isn’t known, you can do a number of things:

  1. If the problem is happening now, investigate which queries are causing reads by using the sys.dm_os_waiting_tasks DMV to see which threads are waiting for pages to be read from disk (i.e. those waiting for PAGEIOLATCH_SH – there’s an example query after this list), and then fix those queries.
  2. If the problem happened in the past, look in the sys.dm_exec_query_stats DMV for queries with high numbers of physical reads, or use a monitoring tool that can give you that information (e.g. the Top SQL view in Performance Advisor), and then fix those queries.
  3. Correlate the PLE drop with scheduled Agent jobs that perform database maintenance.
  4. Look for queries with very large query execution memory grants using the sys.dm_exec_query_memory_grants DMV (as in the example query earlier), and then fix those queries.
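
As an illustration of #1, here’s a minimal example query – a starting-point sketch, not a complete diagnostic script – that lists the sessions currently waiting on data file page reads and the statements driving them:

    -- Sessions currently waiting on data file page reads, longest waits first
    SELECT
        owt.session_id,
        owt.wait_duration_ms,
        owt.wait_type,
        owt.resource_description,                -- dbid:fileid:pageid being read in
        st.text AS [query_text]
    FROM sys.dm_os_waiting_tasks AS owt
    INNER JOIN sys.dm_exec_requests AS er
        ON owt.session_id = er.session_id
    CROSS APPLY sys.dm_exec_sql_text(er.sql_handle) AS st
    WHERE owt.wait_type LIKE N'PAGEIOLATCH%'
    ORDER BY owt.wait_duration_ms DESC;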

My previous post here explains more about #1 and #2, and a script to investigate waits occurring on a server and link them to their query plans is here.

The "fix those queries" is beyond the scope of this post, so I’ll leave that for another time or as an exercise for the reader ☺

Summary

Don’t fall into the trap of believing any recommended PLE threshold you might read online. The right time to react to a PLE change is when PLE drops below whatever your comfort level is and stays there – that’s the indication of a performance problem you should investigate.

In the next article in the series, I’ll discuss another common cause of knee-jerk performance tuning. Until then, happy troubleshooting!