Selecting and Configuring Hardware for SQL Server 2016 Standard Edition -
SentryOne - SQL Sentry
Mar 21, 2017

Since the release of SQL Server 2016 Service Pack 1 on November 16, 2016, which brought many very useful programmability-related features that were previously available only in Enterprise Edition, it has become much more feasible for many organizations to purposely choose SQL Server 2016 Standard Edition than it was in the past.

If you are thinking about doing this, you need to be aware of some common issues and pitfalls that you may run into when you install and use SQL Server 2016 Standard Edition on a new server with modern hardware.

Memory Limits and Configuration

The first issue is the set of per-instance license limits for SQL Server 2016 Standard Edition. The most important is the amount of memory you can use for the buffer pool in each instance, which is limited to 128GB, just as it was in SQL Server 2014 Standard Edition. Personally, I think this limit is artificially low given the memory density of modern two-socket servers, but it is a limit we must deal with.

Current two-socket servers that use Intel Xeon E5-2600 v4 product family processors can use up to 12 DIMMs per processor, and 32GB DDR4 ECC DIMMs are the highest capacity that are also affordable per GB. This processor family has four memory channels per processor, with each channel supporting up to three DIMMs. A fully populated two-socket server with twenty-four 32GB DIMMs would have 768GB of RAM, which is far more than a single instance of SQL Server 2016 Standard Edition is allowed to use.

Since SQL Server 2016 Standard Edition has such a low per-instance memory limit, you should purposely choose an appropriate memory configuration that will let you use all of the license-limit memory while also getting the best memory performance possible. Only populating one DIMM per memory channel will give you the absolute best memory performance supported by your processor(s).

The major server vendors, such as Dell, offer detailed guidance on the possible memory configurations for their servers, depending on the number and specific type of processor selected. For SQL Server 2016 Standard Edition in a two-socket server with two Intel Xeon E5-2600 v4 family processors, choosing eight 32GB DDR4 DIMMs (one per memory channel) would give you 256GB of RAM, running at the maximum supported speed of 2400 MT/s.

This would allow you to set max server memory (for the buffer pool) to 131,072 MB (128GB), and still have plenty of memory left over for the operating system and for possible use by columnstore indexes and in-memory-OLTP. You would also have sixteen empty DIMM slots that could be used for future RAM expansion (which you could take advantage of if you did a subsequent Edition upgrade to Enterprise Edition). Another use for some of those empty DIMM slots would be for “tail of the log caching” on NVDIMMs (which is supported in SQL Server 2016 Standard Edition with SP1).
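As a sketch (the value should still be tuned for your own environment and any other memory consumers on the host), setting max server memory to the 128GB Standard Edition buffer pool limit looks like this:

    -- Set max server memory to the Standard Edition buffer pool limit
    -- (128GB = 131,072 MB); adjust downward if other processes need more RAM
    EXEC sys.sp_configure N'show advanced options', 1;
    RECONFIGURE;
    EXEC sys.sp_configure N'max server memory (MB)', 131072;
    RECONFIGURE;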

Processor License Limits

SQL Server 2016 Standard Edition is also limited to the lesser of four sockets or 24 physical processor cores per instance. With current and upcoming processor families from both Intel and AMD offering up to 32 physical cores, it is very easy to inadvertently exceed the per-instance core limit, with dire consequences for both performance and licensing costs.

The first negative effect of doing this is how SQL Server 2016 Standard Edition will allocate your available license-limit physical cores across your NUMA nodes. For example, if you had a new two-socket server with two 16-core Intel Xeon E5-2697A v4 processors, by default SQL Server 2016 Standard Edition would use sixteen physical cores on NUMA node 0 and only eight cores on NUMA node 1, an unbalanced configuration that won't perform as well as it could. You can fix this issue with an ALTER SERVER CONFIGURATION command as I describe here.
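As a hedged sketch of that fix: assuming logical processors 0-15 sit on NUMA node 0 and 16-31 on node 1 with hyper-threading disabled (verify your actual layout first), you could spread the 24 licensed cores evenly, 12 per node, like this:

    -- Balance the 24 license-limit cores across both NUMA nodes
    -- (assumes cores 0-15 are on node 0 and 16-31 on node 1)
    ALTER SERVER CONFIGURATION
    SET PROCESS AFFINITY CPU = 0 TO 11, 16 TO 27;

    -- Confirm the resulting scheduler distribution per NUMA node
    SELECT parent_node_id, COUNT(*) AS online_schedulers
    FROM sys.dm_os_schedulers
    WHERE [status] = N'VISIBLE ONLINE'
    GROUP BY parent_node_id;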

To add insult to injury in this situation, Microsoft would also expect you to purchase core licenses for all 32 physical cores in the machine, even though you are only allowed to use 24 physical cores per instance. This would be a roughly $15K additional license cost, for core licenses that you would not be able to use, unless you decided to run multiple instances on the same host machine. The additional license cost would pay for a typical two-socket server, depending on how it was configured.

Another common pitfall that you should avoid with Standard Edition is creating a virtual machine that has more than four sockets. If you do that, SQL Server Standard Edition will only use four sockets because of the socket license limit.
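One quick way to check how many sockets and cores SQL Server actually sees in a VM (or a physical host) is to query sys.dm_os_sys_info; the socket_count and cores_per_socket columns were added in SQL Server 2016 SP1:

    -- How many sockets, cores, and NUMA nodes does this instance see?
    SELECT socket_count, cores_per_socket, numa_node_count, cpu_count
    FROM sys.dm_os_sys_info;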

Processor Selection

Currently, the most modern Intel Xeon processor family for two-socket servers is the 14nm Intel Xeon E5-2600 v4 product family (Broadwell-EP) that was released in Q1 of 2016. Intel is on the verge of releasing the next generation 14nm Intel Xeon E5-2600 v5 (Skylake-EP), which is already available in the Google Cloud Platform. My guess is that these new processors (which will require new model servers) will be publicly available in Q2 of 2017.

Given this 24-physical core license limit, it is extremely important that you do not select a processor that has more than 12 physical cores (if you plan on populating both sockets of a two-socket server). This limits your selection of processor SKUs somewhat, but there are still four great choices available, as shown in Table 1.

Model              Total Cores (2 sockets)   Est. TPC-E System Score   Score/Physical Core   License Cost
Xeon E5-2687W v4   24                        3,673.00                  153.04                $44,592.00
Xeon E5-2667 v4    16                        2,611.91                  163.24                $29,728.00
Xeon E5-2643 v4    12                        2,081.36                  173.44                $22,296.00
Xeon E5-2637 v4    8                         1,428.39                  178.54                $14,864.00

Table 1: Comparative two-socket system processor metrics

Table 1 shows the total physical cores, estimated TPC-E score, estimated TPC-E score/physical core, and total SQL Server 2016 Standard Edition license cost for a two-socket system populated with two of the selected processors. You might notice that Table 1 includes a twelve-core processor, an eight-core processor, a six-core processor, and a four-core processor, but no ten-core processor. This is by design, since the three available ten-core processor models are all very bad choices for SQL Server, because of their very low base clock speeds.

The estimated TPC-E score for the entire system is a measure of the total CPU capacity of the system, while the score/core is a measure of the single-threaded CPU performance of that specific processor.


If you want the best performance possible at the lowest hardware and SQL Server licensing cost for a SQL Server 2016 Standard Edition instance, you should choose a memory configuration that only uses one DIMM per memory channel (meaning eight DIMMs total in a two-socket system with two Intel Xeon E5-2600 v4 family processors).

You should also purposely choose one of the four processors listed in Table 1. Any other processor choice is a potentially expensive mistake from this perspective.

Finally, if you are going to be using SQL Server 2016 Standard Edition, you should investigate and test whether Buffer Pool Extension (BPE) might help performance with your workload. Depending on your workload, “tail of the log” caching on an NVDIMM might also be very beneficial for your transaction log performance.
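As a hedged sketch of enabling Buffer Pool Extension on a fast local SSD (the file path and the 256GB size below are purely illustrative values, not recommendations):

    -- Enable Buffer Pool Extension on a fast local SSD
    -- (the file path and the 256GB size are hypothetical examples)
    ALTER SERVER CONFIGURATION
    SET BUFFER POOL EXTENSION ON
        (FILENAME = N'F:\SSDCache\BPE.bpe', SIZE = 256 GB);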

  14 Responses to “Selecting and Configuring Hardware for SQL Server 2016 Standard Edition”

  1. Where do you get these "Est. TPC-E System Score" values?

  2. Some proprietary calculations I have done based on actual TPC-E benchmark submissions for the same generation processor. I think they are pretty accurate.

  3. SQL goes ahead and blocks the max memory that's allocated to it. Thus a memory used graph looks like a straight line. How can we find the actual memory used by SQL over a period of one month? This graph would help decide if memory is over provisioned.

  4. No, SQL Server does not "block the max memory that's allocated to it". That is incorrect.

    This query will tell you how much memory SQL Server is actually using.

    SELECT physical_memory_in_use_kb/1024 AS [SQL Server Memory Usage (MB)]
    FROM sys.dm_os_process_memory;

    • If the "Lock pages in memory" option is checked, then SQL will indeed lock the allocated memory. Also, in that case, this query will always show the max memory allocated.

  5. Dinesh,

    No, that is not correct either. SQL Server does not immediately allocate all of the memory from the max server memory setting if LPIM is enabled. The SQL Server Memory Usage value from my query above fluctuates depending on the workload, even with LPIM enabled. If you restart SQL Server, then start running some queries, you will see the memory usage climb. It does not instantly go from zero to maximum just because LPIM is enabled.

    LPIM prevents the OS from paging out memory from the SQL Server process.
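    If you want to confirm which memory model an instance is actually using, you can check sys.dm_os_sys_info; the sql_memory_model_desc column was added in SQL Server 2016 SP1:

    -- Returns CONVENTIONAL, LOCK_PAGES, or LARGE_PAGES
    SELECT sql_memory_model_desc
    FROM sys.dm_os_sys_info;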

    • I think Dinesh and others are thinking of Large Pages in SQL Server. If your instance uses large pages, the memory is indeed blocked out at startup. One of the requirements is that Lock Pages in Memory privileges are granted to the SQL Server service account. This is where I think Dinesh's confusion comes in.

      Another requirement for large pages is Enterprise Edition. So this is not a possibility when talking about Standard Edition. Since Standard Edition is the topic of this article, I agree with Glenn's assessment that it doesn't happen. It can happen under the right conditions in Enterprise Edition, but that's a different discussion.

      See "SQL Server and Large Pages Explained" for more info:

    • I'm a bit puzzled by this conversation, as not certain if all parties are talking about buffer pool memory, SQL Server process memory, or SS maximum server memory (which might be the same as buffer or might be all memory, I'm only on SS2014 and not sure what terms mean what in SS 2016).

      But I'll dive in anyway. Glenn, when you say: "If you restart SQL Server, then start running some queries, you will see the memory usage climb. It does not instantly go from zero to maximum just because LPIM is enabled" .. yes I would agree, that is what happens. But, it never goes down, it keeps going up with more use until it reaches maximum server memory and then it flat-lines there. So it does not fluctuate with workload thereafter.

      Are Racharla and Dinesh talking about buffer pool memory, but you're talking about SQL Server process memory?

  6. Hi, Glenn, great stuff. So many folks remain unaware of the licensing costs & perf optimizations made possible by leveraging lower-core high-frequency processors.

    Question re: the absence in your table of a 10-core processor, you state, "the three available ten-core processor models are all very bad choices for SQL Server, because of their very low base clock speeds". Yet according to the Intel ARK database as well as Wikipedia, the Xeon E5-2689 v4 10-core clocks at 3.1GHz (right between the E5-2687W v4 12-core at 3.0 and the Xeon E5-2667 v4 8-core at 3.2).

    Can you explain your objection to the 10-core? Thanks much, & keep up your evangelization efforts!

  7. @Jimmy May I actually missed the Xeon E5-2689 v4 processor, since it doesn't always show up in the new Intel Ark database web pages. On the other hand, as I look at it, there is an issue with the high TDP of 165 watts. Several server models I have looked at don't offer that processor because of its very high TDP. The Dell R730 does, but you have to get a special heat-sink to use that processor.

    That said, that processor does have pretty good specifications for a 10-core processor. The only small performance weakness is that its 25MB L3 cache is shared by 10 cores, giving it less cache per core than the six- and eight-core models.

    • Thanks, @Glenn. As you suggest, not only is the E5-2689 v4 10-core high wattage, but its OEM supportability is limited. My DPL Lab Team suggests support may be limited to specific HPE single-socket systems.

      They likewise report that the E5-2687W v4 12-core is not only high wattage, but it's not fully supported by Lenovo.

      It seems that the E5-2667 v4 8-core hits the sweet spot for SQL 2016 SP1 SE.

  8. Hi Glenn,

    We're running SQL 2016 Standard SP1 CU2 on a dual 12-core with 256GB RAM, and a PCIe BPE of 512GB.

    I ran your simple SQL Memory usage query and it says 204,982 MB. As this is a bit above the 128GB "limit", you could say there's some confusion here as to what is in that count. Our "max server memory (MB)" is set to 229,264 MB (deliberately).

    Just for sanity's sake, I went to your (irreplaceable) 2016 Diagnostic Queries and ran #7. Same 204,983 (it grew 1MB, but who's counting), then Large = 0, Locked = 206,966,128 KB

    Q17 gives me 262,018 Physical Memory, 204,930 Committed

    Q35 says that our User-DB is consuming 204,407, tempdb another 26,233

    Q42 – PLE, is hilarious. BPE really messes it up – 3. Just 3. Needless to say, I ignore it completely.

    Can you enlighten me, please? Am I misreading something? What is going on…?

    Thanks, in hopes.

  9. The madness of the number of CPU cores today is why we are moving to virtualized SQL Server going forward. The extra cores don't do much for performance except in some cases, and the other parts of the CPU, like the cache and all of the custom instructions Intel has, are a lot more useful.

  10. What would the cost difference be to upgrade one of those setups to Enterprise Edition?
