SQL Server 2016 : In-Memory OLTP Enhancements
SentryOne - SQL Sentry
Nov 30, 2015

Update: November 30, 2015

The SQL Server team has published a blog post with some new functionality for In-Memory OLTP in CTP 3.1:

Update: November 17, 2015
Jos de Bruijn has posted an updated list of In-Memory OLTP changes as of CTP 3.0:

I posted earlier about the changes to Availability Groups in SQL Server 2016, which I learned about at MS Ignite largely from a session by Joey D'Antoni and Denny Cherry. Another great session was from Kevin Farlee and Sunil Agarwal on the changes in store for In-Memory OLTP (the feature formerly known as "Hekaton"). An interesting side note: the video of this session shows a demo where Kevin is running CTP 2.0 (build 13.0.200) – though it is probably not the build we'll see publicly this summer.

Feature/Limit                                      SQL Server 2014   SQL Server 2016
Maximum combined size of durable tables            256 GB            2 TB
LOB (varbinary(max), [n]varchar(max))              Not supported     Supported*
Transparent Data Encryption (TDE)                  Not supported     Supported
Offline Checkpoint Threads                         1                 1 per container
ALTER PROCEDURE / sp_recompile                     Not supported     Supported (fully online)
Nested native procedure calls                      Not supported     Supported
Natively-compiled scalar UDFs                      Not supported     Supported
ALTER TABLE                                        Not supported     Partially supported (offline; details below)
DML triggers                                       Not supported     Partially supported (AFTER, natively compiled)
Indexes on NULLable columns                        Not supported     Supported
Non-BIN2 collations in index key columns           Not supported     Supported
Non-Latin codepages for [var]char columns          Not supported     Supported
Non-BIN2 comparison / sorting in native modules    Not supported     Supported
Foreign Keys                                       Not supported     Supported
Check/Unique Constraints                           Not supported     Supported
Parallelism                                        Not supported     Supported
Multiple Active Result Sets (MARS)                 Not supported     Supported
  (Means better Entity Framework support.)
SSMS Table Designer                                Not supported     Supported

* LOB support will not be available in the CTP shipping this summer.

ALTER TABLE is an offline operation, and will support adding/dropping columns, indexes, and constraints. There will be new syntax extensions to support some of these actions. You can change your bucket count values with a simple rebuild (note, however, that any rebuild will require 2X memory):

ALTER TABLE dbo.InMemoryTable
    ALTER INDEX IX_Hash -- hypothetical hash index name
    REBUILD WITH (BUCKET_COUNT = 1048576);

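The syntax extensions for adding schema elements might look like the following sketch (the table, column, and index names are illustrative, and remember that the operation is offline):

ALTER TABLE dbo.InMemoryTable
    ADD Quantity int NULL,
        INDEX IX_Quantity NONCLUSTERED (Quantity);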
In addition to these capacity and feature enhancements, there are also several performance enhancements. For example, there will be the ability to add an in-memory, updateable, non-clustered columnstore index over either disk-based or in-memory tables. And they have simplified the way that deleted rows are processed (in 2014, those operations use FileStream; in 2016, they will skip this step). There have also been improvements to the migration advisors and the best practices analyzer – they are now lighter on data gathering and provide more context about migration complexity.
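As a sketch of the columnstore-over-in-memory combination, the index can be declared inline in the table definition (object names here are illustrative, and the exact syntax may shift between CTPs):

CREATE TABLE dbo.Sales
(
    SaleID   int IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED,
    Amount   decimal(10,2) NOT NULL,
    SaleDate datetime2 NOT NULL,
    -- updateable columnstore index over the in-memory rows:
    INDEX ccsi CLUSTERED COLUMNSTORE
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);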

There are still some limitations with some of these changes. TDE, as an example, requires additional steps when upgrading a database. But it's clear that as In-Memory OLTP gets more mature, they are chipping away at many of the biggest roadblocks to adoption.

But wait, there's more! If you want to use In-Memory OLTP in Azure SQL Database, there will be a public preview with full support coming this summer. So you won't need your own physical server with 2 TB of memory to push this feature to its limits. Do not expect any trickling of this feature into Standard Edition, however.

  32 Responses to “SQL Server 2016 : In-Memory OLTP Enhancements”

  1. Can we have a non-in-memory nonclustered columnstore index that is updatable? That would be the real killer. If it has to be in-memory it's not useful.

  2. "Do not expect any trickling of this feature into Standard Edition, however." <= and therein lies the problem.

    This feature can't be retrofitted by a DBA; it needs to be designed in by the software authors (generally ISVs). And apart from large customers that write their own software, most software houses (ISVs) won't use it because if they do, they can only then sell their own software to customers running enterprise edition. Ask yourself if SQL Sentry would limit itself to only selling to enterprise edition customers. I can't imagine it. So most ISVs won't then touch it as they don't want that limitation.

    The only way this will get widespread use is for ISVs to be building into their own code. That means standard edition.

    • That's a "problem" for ISVs (well, the ones who won't differentiate by putting any work into edition-specific features) and customers who aren't getting any of the other Enterprise features, either. But Microsoft doesn't hold a lot of stock in giving features to Standard Edition customers in any case.

      From the ISV side I can say – without mentioning any specifics – that we have had discussions about tailoring the use of certain Enterprise features for our customers who are using Enterprise, and those features simply wouldn't turn on for customers running our database on other editions. Hekaton is certainly on the opposite end of the complexity scale from, say, indexed views, but really, outside of the additional configuration options and the conditional logic, nothing would be extremely difficult – just need to weigh effort against customer benefit. It's certainly not impossible to fathom that ISVs would spend R&D on features that have the potential to *really* help a portion of their customer base, even if it doesn't benefit all. You see this with every software vendor that has different SKUs of any product, but having edition-specific features does not mean the software itself has to come in different SKUs.

      Those who don't have capable hardware or "spare change" for Enterprise licenses (and don't have other disqualification reasons) will be able to make use of this performance feature in Azure SQL DB, so it's not like all Standard Edition customers are permanently hosed, either.

      I think they are making this feature more and more accessible, and while I made that statement because in my opinion it's not going to move into Standard, that doesn't mean it won't. I made similar statements about backup compression in 2008, and AGs in 2012. If they want the feature to become more widespread, they may downgrade it to Standard, but while some features don't really make sense to be Enterprise only, many performance features do, IMHO. Does the Standard Edition customer on a budget really have the hardware required to make the best use of this feature? Sure, it would make their stuff faster, but do they need it? Is it the kind of feature they can take advantage of to the point that not having it will drive them to other platforms? I imagine that is a small (or extremely quiet) subset of Microsoft's customer base.

  3. With many features I'd agree. For example, we're writing code to use table compression when it's available. But Hekaton needs a fundamental shift in thinking. That's much much more than just a config switch.

    I'm even seeing them shy away from it in Azure. The reason is that although it's there, it's only in Premium Edition. So you can't write code that runs on both Standard Edition and on Premium Edition. In Azure, it's even more of a problem because in an SaaS environment, I might want some customers on Premium and others on Standard, while concurrently running the same application. That gets really ugly really fast.

    One of the key arguments for Azure is the scale-up/scale-down story. That story is broken when there are code-surface differences between editions.

    • It's only much much more than a config switch if you want to take absolute full advantage of the feature, i.e. create a natively-compiled copy of every single stored procedure. While this is certainly possible, it's probably not worth the extra scope effort for ISVs. You can get a mem-opt filegroup in place pretty easily; and when you control the schema, having the table created with or without buckets depending on the edition is not rocket science. This will be even easier in Azure SQL DB because the memory-optimized filegroup will already have to be there in order for them to even offer the feature. And you can get a pretty big performance gain by having the table in memory, especially if you have a highly concurrent workload, even if you don't go all the way to natively-compiled procedures. Why spend $65K on a Cayman S when you can still be pretty happy with performance in a standard Cayman at $50K? In fact, I would guess that even an ISV's customers running Enterprise Edition would want the *option* to use or not use Hekaton. So even if it were on all editions, you'd still have to write conditional code, because some customers would simply choose to not have In-Memory OLTP shoved down their throat. After all, it doesn't benefit all schemas and workloads. My main point here: adding the feature to Standard Edition/Tier doesn't solve your problem.
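      To sketch that conditional creation (illustrative table and names; SERVERPROPERTY('IsXTPSupported') reports whether the instance supports In-Memory OLTP):

      IF SERVERPROPERTY('IsXTPSupported') = 1
      BEGIN
          -- edition/instance supports it: create the table with buckets
          EXEC('CREATE TABLE dbo.SessionState
                (
                    SessionID int NOT NULL PRIMARY KEY NONCLUSTERED
                        HASH WITH (BUCKET_COUNT = 131072),
                    Payload   varbinary(8000) NULL
                ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);');
      END
      ELSE
      BEGIN
          -- fall back to a plain disk-based table
          EXEC('CREATE TABLE dbo.SessionState
                (
                    SessionID int NOT NULL PRIMARY KEY,
                    Payload   varbinary(8000) NULL
                );');
      END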

      Also, how many ISVs are writing code that actually lives inside Azure SQL DB? And mustn't they already write code that differentiates between Standard and Premium (since, surely, you don't suggest that Hekaton will be the only difference, now or ever)? If you want to write code that takes advantage of features that only exist in one of the tiers, then you need to consider writing conditional code that will allow it to work correctly in both (or simply have different versions to deploy, just like anybody using partitioning, or columnstore, or snapshots, or resource governor, or TDE, or CDC would have to – and let's look ahead too, to features like the query store). Or reconsider whether you need to write code that takes advantage of tier-specific features. You're talking about really narrow edge cases here, but I still contest your allegation that it gets really ugly really fast – either it's worth it for you as an ISV and your customers, or it isn't… and again, I suspect your evaluation of "ugliness" is based on full-fledged buy-in of every single aspect of In-Memory OLTP, as opposed to just the memory-optimized filegroup and putting the tables there.

  4. Have to differ on that one Aaron.

    I think having different code surfaces for different editions in Azure is a mistake. Might well be different for a tools vendor but app vendors that I talk to have no interest in using features that are only present in a single edition, particularly in SaaS environments. There's a difference between features that can be applied outside the application (like geo-replication, etc.) and ones that make the code go bang when you change editions (ie: syntax not supported in all editions).

    And when I test in-memory oltp on-premises, the real benefit I get is from the native stored procs, not from the in-memory tables by themselves. I have one sample where the execution of a proc against a disk-based table takes around 25 seconds. Moving the table to in-memory drops it to 24.2 seconds. Native compiled stored proc drops it to milliseconds. It's a much bigger problem than deciding whether or not a table will be memory-based. Appropriate use requires an architectural shift, not a config change.
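    For reference, the shape of such a natively compiled procedure looks roughly like this (a sketch only; the names are illustrative, and dbo.Counters stands in for a memory-optimized table):

    CREATE PROCEDURE dbo.usp_IncrementCounter
    WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
    AS
    BEGIN ATOMIC WITH
    (
        TRANSACTION ISOLATION LEVEL = SNAPSHOT,
        LANGUAGE = N'us_english'
    )
        -- the body may reference memory-optimized tables only
        UPDATE dbo.Counters SET Total = Total + 1 WHERE CounterID = 1;
    END;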

    • That's fine. Just a few quick points:

      (1) I wasn't suggesting that all use cases would benefit from only in-memory tables without native code. But I think it's disingenuous to imply that all scenarios can *only* benefit from both. You cite one specific example, but I do not believe it is representative of all implementations everywhere, and there are probably particular issues with the original code that made it run in 20+ seconds in the first place. Have you blogged about the specifics anywhere? Was it a multi-level nested cursor? Really bad recursive CTE? Messy string operations? Updating 8 million rows one at a time?

      (2) In cases where there is a huge benefit from natively compiled code, then you can decide if you need it. I wasn't suggesting that you have to deploy in-memory tables but can't invest in native code. I was suggesting that you can get some benefits from just the former (your counter-example notwithstanding). Again, that's up to each customer, whether or not the benefits are worth the effort to do either or both. And I'll say it again: if you're going to make that investment, it's not like a choice of one over the other – even if you only have Enterprise customers or even if the feature was moved to Standard, you'd still have customers who want to opt out regardless of their edition or portability concerns, so you still need to invest in a fallback in addition to any investments you make in taking advantage of this feature.

      (3) I'm not sure where the idea comes from that Standard Edition customers deserve this feature. As I suggested above, the highest impact performance features are selling features for the Enterprise / Premium tiers. You feel differently. I'm relieved that I have no say in this, and that I'm not the one you need to convince that your reasons for giving it to Standard trump their reasons for keeping it in Enterprise.

  5. Agreed.

    But in the circles that I spend most of my time in (ie: software houses or ISVs), that isn't how the comparisons go. They don't say "ah we just need to get customers to use Enterprise and they can use all that performance stuff". What they do is compare SQL Server (standard edition) with other competing products such as PostgreSQL.

    What's scary then are the comparisons. I've been in meetings where they say "SQL Server doesn't have a high availability story". Their argument was that mirroring was deprecated and AGs were EE only. They say "SQL Server doesn't have table compression", etc. etc. It's important when the product team is pushing ever more features into EE that they don't lose sight of what's coming up behind SE.

    Previously it wasn't as much of an issue as many of the features could be retrofitted by a DBA, but when you get into features that require architectural coding support, things change. Very few sites could retrofit Hekaton to an existing application.

  6. Hekaton would be a lot more compelling if it could be used in an automatic mode where you have few restrictions on schema, no bucket count, and the ability to page ranges of range indexes to disk. Something that you can just turn on for nearly everything and the engine manages it all.

    Oracle has such a feature for in-memory columnstore "supplemental indexing". Turn it on, and hot data that fits into cache gets a secondary columnstore representation. Turn-key.

  7. Adding to list of the previous post: It would be important that all queries could automatically be compiled natively if eligible. I don't see any technological reason why that could not be done. There could be a heuristic that uses normal compilation at first and switches to native compilation if the query is executed a lot. Multiple queries could be compiled into the same DLL for efficiency reasons. The DLLs could be stored in system tables so that recompilations on server restart are not necessary.

    All of that can be solved and done. Would be insanely cool if all OLTP-style queries suddenly start becoming 10x faster by flipping a switch.

  8. Off topic, but still – 1) will the NCCI support a WHERE clause? 2) Will there be ways to specify order for CCI so that segments are nicely divided w.r.t. min_data_id and max_data_id without the rowstore clustered index workaround?

  9. Any idea how they are handling write-write locking? 2014 used to abort one of the transactions. Is that still the same?

  10. https://msdn.microsoft.com/en-us/library/dn246937.aspx

    Seriously, are you guys joking? Such a big selling buzz, and inside we don't even support all of the above? The question is, how can this be practical at all apart from some scenarios…

    • Ahsan, it's not meant for all scenarios. In v1 (SQL 2014) the use cases were extremely narrow. In v2 (SQL 2016), they've taken away several restrictions, which opens it up to more use cases (but not all). In vNext, even more of these limitations will be removed. But, like all features, it will *never* be something that is useful in every single scenario without putting any thought or planning into it… it's not something you just turn on for every table.

  11. * LOB support will not be available in the CTP shipping this summer.

    Hi, I have just installed SQL 2016 and it is not yet available.

    Do you have any idea when this LOB (varbinary(max), [n]varchar(max)) support will ship? It is a big limitation not being able to have rows larger than 8,060 bytes.

    Is there any workaround to overcome this issue?

    • Sorry, no idea when it will ship (other than "when it's ready"). I don't know of any workarounds, except keeping the LOB data in a related, traditional table. Or breaking it up somehow so that it fits in-row.
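      The related-table workaround might look something like this (an illustrative schema; note that you cannot declare a foreign key between a memory-optimized table and a disk-based one, so the relationship has to be enforced in code):

      -- the hot, fixed-width columns stay in memory...
      CREATE TABLE dbo.Documents
      (
          DocumentID int NOT NULL PRIMARY KEY NONCLUSTERED,
          Title      nvarchar(200) NOT NULL
      )
      WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

      -- ...while the LOB payload lives in a traditional disk-based table
      CREATE TABLE dbo.DocumentBodies
      (
          DocumentID int NOT NULL PRIMARY KEY,
          Body       nvarchar(max) NULL
      );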

  12. There are huge limitations in the SQL Server 2014 In-Memory feature. I was waiting for 2016 and tried to utilize the enhanced in-memory feature using the 2016 CTP. Still there is no solution for inserting data from a non-memory table into an in-memory table.

    For example: I have a DB named "non-memDB" containing a non-memory table, and I am trying to insert data into an in-memory optimized DB named "in-memDB". SQL Server 2014 and 2016 don't support this.

    My question is: How can I do it? I tried using table variable but no performance gain.

    Even I tried using stored procedure below:

    CREATE PROCEDURE dbo.usp_InsertSampleData
    WITH NATIVE_COMPILATION, SCHEMABINDING
    AS BEGIN ATOMIC WITH
        (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')

        DECLARE @imTable table (
            [column1] nvarchar(50),
            [column2] nvarchar(50),
            [column3] nvarchar(50)
        );

        INSERT INTO @imTable
        SELECT [column1],[column2],[column3]
        FROM [non-memDB].[dbo].[non-MemTable];

        INSERT INTO [in-memDB].[dbo].[in-MemTable] SELECT * FROM @imTable;
    END;


    Getting the following error:
    Msg 12305, Level 16, State 22, Procedure usp_InsertSampleData, Line 7
    Inline table variables are not supported with natively compiled modules.
    Msg 1054, Level 15, State 1, Procedure usp_InsertSampleData, Line 17
    Syntax '*' is not allowed in schema-bound objects.

    Has anyone faced this situation and solved it?

    • The second error message is easy to fix – just change * to column1, column2, column3. The first one is a bit more complicated but, really, have you tried this *not* using a natively compiled procedure and not bothering with an intermediate table variable? You're just moving data, so I'm not sure exactly why this has to be a natively-compiled procedure (and I'm not sure there's much to gain for such an operation). How big is this table? How long does the operation take when you perform the operation normally? How often are you moving all of the data from one table to another?
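      If the cross-database hop does turn out to need the intermediate table variable, a plain interpreted (interop) version of the procedure, which avoids both compile errors, might look like this (object names taken from the comment above):

      CREATE PROCEDURE dbo.usp_InsertSampleData
      AS
      BEGIN
          DECLARE @imTable table
          (
              [column1] nvarchar(50),
              [column2] nvarchar(50),
              [column3] nvarchar(50)
          );

          INSERT INTO @imTable ([column1],[column2],[column3])
          SELECT [column1],[column2],[column3]
          FROM [non-memDB].[dbo].[non-MemTable];

          -- explicit column list instead of *
          INSERT INTO [in-memDB].[dbo].[in-MemTable] ([column1],[column2],[column3])
          SELECT [column1],[column2],[column3]
          FROM @imTable;
      END;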

  13. Hi,

    With the long list of T-SQL limitations inside natively compiled stored procedures, how practical is it to use the in-memory feature?

    Because we have very big tables which are not going to be stored in memory (they are disk-based), so how are we going to use this feature….

    Zaim Raza.

    • The nice thing is, if the limitations within natively compiled procedures are prohibitive for you, you can still use In-Memory OLTP and benefit from the lack of locking and latching, just without the *additional* benefit of native compilation (which can help performance in some situations, but is marginal in most others). So don't look at it as "In-Memory OLTP requires native compilation" or even "In-Memory OLTP requires native compilation to derive any benefit," because it has plenty of benefits even without it. And also remember that the limitations in T-SQL surface area are getting chipped away with every release.

  14. Hi Aaron,

    My main concern is the T-SQL features which are not available in In-Memory OLTP. How does OLTP work without referential integrity? And last but not least, it's hard to write T-SQL without the features that are available in a table-based environment.

    I would say it's at a very early stage to adopt it at the enterprise level.

    Zaim Raza.

  15. Will there be support for Identity columns or Sequence numbers in 2016?

  16. Dear Aaron, when I refer to the FileStream documentation https://msdn.microsoft.com/en-IN/library/gg471497.aspx, it says two things:
    1) Encryption is not supported
    2) SQL logins don't work with FileStream containers. Is encryption supported for in-memory resident tables in SQL 2016? How does it affect performance when the data is encrypted? In-memory transactions are logged to regular transaction logs, and FileStream uses the NT system cache.
    What are the best practices for achieving the high performance provided by the In-Memory engine?

    • Hi Basava, I haven't done any performance testing of encrypted data in In-Memory OLTP in SQL Server 2016 (and think it would be unfair to do so at this early stage), and have not developed any high-performance best practices around it…

  17. Hi Aaron,
    If I have 256 GB of RAM and an in-memory table which is larger than RAM, how will in-memory processing be done? What is the max limit of available RAM which can be used by SQL Server (the max memory configuration)? And what about a max memory setting for in-memory processing?


  18. Basava – If you don't have enough RAM to hold a table in-memory, then (assuming you are processing the INSERT in batches), you will eventually run out of memory, and no more data can be added to that in-memory table.

    In SQL 2016, there is a limit of 2 TB of in-memory data per database. That does not include memory you would need for row versions, indexes, table variables, non-durable tables, the buffer pool (assuming you also have disk-based tables), etc. You can use up to 80% of the available memory on a server for memory-optimized data.

    You can use Resource Governor to limit the amount of memory that in-memory data can use, so that it doesn't affect other processing on the server.
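    To sketch that binding (the pool name, database name, and percentage here are illustrative):

    -- cap memory-optimized data at 50% of the pool's memory target
    CREATE RESOURCE POOL InMemoryPool WITH (MAX_MEMORY_PERCENT = 50);
    ALTER RESOURCE GOVERNOR RECONFIGURE;

    EXEC sp_xtp_bind_db_resource_pool
        @database_name = N'InMemoryDB',
        @pool_name     = N'InMemoryPool';

    -- the binding takes effect the next time the database is brought online
    ALTER DATABASE InMemoryDB SET OFFLINE;
    ALTER DATABASE InMemoryDB SET ONLINE;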

    Note that there are many things to be aware of for backup/recovery of databases that contain durable memory-optimized data.

  19. @Basava
    In SQL 2016 it's possible that tables partially reside in memory… read the intro article dn133165 on MSDN; it explains that stuff quite well.

    However that won't be much faster than in SQL 2012 or older. All SQL Servers cache the most recently used tables…. The three magic things here are:
    a) write transactions are committed in memory and not only after a committed write to physical storage; also, the locking is more "optimistic" than with physical writes
    b) these natively compiled SPs seem to speed up some things in the T-SQL area too
    c) the whole database might go into memory, up to 2 TB in size.

  20. @Andreas –

    Not sure where you are getting your information from, but much of what you wrote is either incorrect, or "it depends".

    To my knowledge, nothing in the MS documentation states that a memory-optimized table can partially reside in memory. Where would the rest of it reside? Assuming the memory-optimized table is durable, there is no representation on disk for memory-optimized tables, other than in the data/delta files. I'd be interested in exactly which part of "dn133165 on MSDN" states that memory-optimized tables can partially reside in-memory.

    You stated that "won't be much faster than in SQL2012 or older…." It's been proven many times that given the right type of workload, In-Memory OLTP is capable of several orders of magnitude increase in performance.

    There is no "caching" of memory-optimized data, because it's already in memory. Memory-optimized tables do not use the Buffer Pool.

    Also, the 2TB limit has been eliminated – the size of memory-optimized data is now limited only by the OS.
