Risk When Using Dynamic Memory within Hyper-V

By Tim Radney

Virtualization is very popular with organizations: it allows them to better utilize hardware by consolidating multiple servers onto a single host, provides high-availability capabilities, and reduces costs for things like heating/cooling, SQL Server licensing, and hardware. I've been involved in numerous projects helping organizations migrate from physical to virtual environments, and I've watched them realize these benefits.

What I want to share with you in this article is a peculiar issue I came across while working with Hyper-V on Windows Server 2012 R2 using Dynamic Memory. I must admit that most of my virtualization experience has been with VMware; however, that's changing now.

When working with SQL Server on VMware, I always recommend setting memory reservations, so when I encountered the Dynamic Memory feature in Hyper-V I had to do some research. I found an article (Hyper-V Dynamic Memory Configuration Guide) that explains many of the benefits and system requirements for using Dynamic Memory. The feature is pretty cool: it lets you give a virtual machine more or less memory without powering it off.

Playing around with Hyper-V, I found provisioning virtual machines to be straightforward and easy to learn. With little effort I was able to build a lab environment to simulate the experience my customer was having. Credit goes to my boss for providing me with awesome hardware to work with: I am running a Dell M6800 with an i7 processor, 32GB of RAM, and two 1TB SSDs. This beast is better than a lot of servers I have worked on.

Using VMware Workstation 11 on my laptop, I created a Windows Server 2012 R2 guest with 4 vCPUs, 24GB of RAM, and 100GB of storage. Once the guest was created and patched, I added the Hyper-V role and provisioned a guest under Hyper-V. The new guest was built with Windows Server 2012 R2, 2 vCPUs, 22GB of RAM, and 60GB of storage, running SQL Server 2014 RTM.

I ran three sets of tests, each using Dynamic Memory. For each test I used Red Gate's SQL Data Generator against the AdventureWorks2014 database to fill up the buffer pool. For the first test I started with 512MB for the Startup RAM value, since that is the minimum amount of memory needed to start Windows Server 2012 R2, and the buffer pool stopped growing at around 8GB.

Buffer pool limit with 512MB startup RAM
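
To watch the buffer pool grow (and then stall) during each run, you can poll the Buffer Manager performance counters from inside the guest. Here's a minimal sketch; the counter values are reported as 8KB pages, so multiplying by 8 and dividing by 1024 converts them to MB:

-- Current and target buffer pool size, in MB (cntr_value is in 8KB pages)
SELECT RTRIM ([counter_name]) AS [Counter],
    [cntr_value] * 8 / 1024 AS [SizeMB]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE N'%Buffer Manager%'
AND [counter_name] IN (N'Database pages', N'Target pages');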

For each test I would truncate my test table, shut down the guest, modify the memory settings, and start the guest back up. For the second test I increased the Startup RAM to 768MB, and the buffer pool grew only to just over 12GB.

Buffer pool limit with 768MB startup RAM

For the third and final test, I increased the Startup RAM to 1024MB, ran the data generator, and the buffer pool was able to grow to just under 16GB.

Buffer pool limit with 1024MB startup RAM

Doing a little math on these values shows that the buffer pool can't grow beyond 16 times the Startup RAM. This can be very problematic for SQL Server if the Startup RAM is less than 1/16th of the guest's maximum memory. Consider a Hyper-V guest with 64GB of RAM running SQL Server with a Startup RAM value of 1GB: based on what we've observed, the buffer pool would not be able to use more than 16GB of it. If we instead set the Startup RAM value to 4096MB, the buffer pool could grow to 16 times that, allowing it to use all 64GB.
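
Put another way, the rule of thumb is simply: minimum Startup RAM = desired maximum buffer pool size / 16. A trivial sketch of that arithmetic for the 64GB guest above:

DECLARE @MaxGuestMemoryMB int = 64 * 1024;           -- 64GB guest
SELECT @MaxGuestMemoryMB / 16 AS [MinStartupRAM_MB]; -- returns 4096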

The only references I could find about why the buffer pool is limited to 16 times the Startup RAM value were on pages 8 and 16 of the whitepaper, Best Practices for Running SQL Server with HVDM. This document explains that since the buffer cache value is computed at startup time, it is a static value and doesn't grow. However, if SQL Server detects that Hot Add Memory is supported, it reserves virtual address space for the buffer pool equal to 16 times the startup memory. The document also states that this behavior affects SQL Server 2008 R2 and earlier; however, my tests were conducted on Windows Server 2012 R2 with SQL Server 2014, so I will be contacting Microsoft to get the best practices document updated.
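
If you want to see what SQL Server detected on your own systems, searching the error log for memory-related startup messages is a quick first step. A sketch using the undocumented (but widely used) xp_readerrorlog, whose first three parameters are the log file number, the log type (1 = SQL Server error log), and a search string:

-- Search the current SQL Server error log for memory-related messages
EXEC sys.xp_readerrorlog 0, 1, N'memory';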

Since most production DBAs do not provision virtual machines or work heavily in that space, and virtualization engineers are not studying the latest and greatest SQL Server technology, I can understand why this important detail about how the buffer pool handles Dynamic Memory is unknown to a lot of people.

Even following the official documentation can be misleading. In the Hyper-V Dynamic Memory Configuration Guide article, the description for Startup RAM reads:

Specifies the amount of memory required to start the virtual machine. The value needs to be high enough to allow the guest operating system to start, but should be as low as possible to allow for optimal memory utilization and potentially high consolidation ratios.

Optimal memory utilization for whom, the host or the guest? A virtualization admin reading this would likely conclude that it means the minimum memory required to start the operating system.

Being responsible for SQL Server means we need to know about other technologies that can influence our environment. With the introduction of SANs and virtualization, we need to fully understand how things in those environments can negatively impact SQL Server and, more importantly, how to effectively communicate our concerns to the people responsible for those systems. A DBA doesn't necessarily need to know how to provision storage on a SAN, or how to provision and administer a VMware or Hyper-V environment, but they should know the basics of how these things work.

By knowing the basics of how a SAN works, with storage arrays, storage networks, multi-pathing, and so on, as well as how the hypervisor handles CPU scheduling and storage allocation, a DBA can communicate and troubleshoot more effectively when issues arise. Over the years I have successfully worked with a number of SAN and virtualization admins to build standards for SQL Server. These standards are unique to SQL Server and don't necessarily apply to web or application servers.

DBAs can't really rely on SAN and virtualization admins to fully understand best practices for SQL Server, as nice as that would be, so we need to educate ourselves as best we can on how their areas of expertise can impact us.

During my testing I used a query from Paul Randal's blog post, Performance issues from wasted buffer pool memory, to determine how much of the buffer pool the AdventureWorks2014 database was using. I have included the code below:

-- Buffer pool usage per database; each buffer page is 8KB, and
-- free_space_in_bytes measures the empty space within those pages
SELECT
    (CASE WHEN ([database_id] = 32767)
        THEN N'Resource Database'
        ELSE DB_NAME ([database_id]) END) AS [DatabaseName],
    COUNT (*) * 8 / 1024 AS [MBUsed],
    SUM (CAST ([free_space_in_bytes] AS BIGINT)) / (1024 * 1024) AS [MBEmpty]
FROM sys.dm_os_buffer_descriptors
GROUP BY [database_id];

This code is also great for troubleshooting which of your databases is consuming the majority of your buffer pool, so you know which database to focus on when tuning high-cost queries. If you are a Hyper-V shop, check with your admin to see whether Dynamic Memory is configured in a way that is negatively impacting your server.
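
And if you're not sure whether a given SQL Server instance is even running under a hypervisor, you can check from inside the guest. A minimal sketch using sys.dm_os_sys_info (SQL Server 2012 and later):

-- 'HYPERVISOR' in the first column indicates a virtualized guest;
-- the memory columns show what SQL Server currently sees
SELECT [virtual_machine_type_desc],
    [physical_memory_kb] / 1024 AS [PhysicalMemoryMB],
    [committed_target_kb] / 1024 AS [CommittedTargetMB]
FROM sys.dm_os_sys_info;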