Some of you may have seen the published Hekaton In-Memory OLTP demo scripts involving AdventureWorks; the most recent sample is published here. These examples piggyback on the AdventureWorks2012 sample database on CodePlex. If you have tried these samples, you may have come across a couple of issues that can dramatically change your first experience with this technology.
A lot of people download the "AdventureWorks2012 Data File" – a 200 MB .mdf file that you can attach – without a log – using the following syntax:
CREATE DATABASE AdventureWorks2012
ON
(
  NAME = AdventureWorks2012_Data,
  FILENAME = '<path>\AdventureWorks2012_Data.mdf'
)
FOR ATTACH_REBUILD_LOG;
The problem is that, if you are connected to the SQL Server instance with your Windows account, you may inadvertently end up as the database owner. That isn't a big deal in most scenarios, but if you create stored procedures with
EXECUTE AS OWNER, as many samples you come across will do, it can cause an issue. You may find this line, for example, in many natively-compiled stored procedures:
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
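For reference, a minimal natively-compiled procedure using that clause looks something like the following sketch (the table and procedure names are hypothetical; it assumes a memory-optimized table dbo.SalesOrder_InMem exists):

```sql
-- Hypothetical procedure against a hypothetical memory-optimized table
CREATE PROCEDURE dbo.usp_GetSalesOrder
    @SalesOrderID INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    SELECT SalesOrderID, OrderDate, CustomerID
      FROM dbo.SalesOrder_InMem
     WHERE SalesOrderID = @SalesOrderID;
END;
```

Note that the BEGIN ATOMIC block, with explicit isolation level and language settings, is required for natively-compiled procedures.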
Unless you've already mitigated this issue in other ways, if the owner of the database is your Windows account, you are likely to get the following error when trying to create such a procedure:
Cannot execute as the database principal because the principal "dbo" does not exist, this type of principal cannot be impersonated, or you do not have permission.
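You can confirm who currently owns the database with a quick query against sys.databases:

```sql
-- Shows the current owner of the database
SELECT name, SUSER_SNAME(owner_sid) AS owner_name
  FROM sys.databases
 WHERE name = N'AdventureWorks2012';
```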
Depending on your environment, you may want to weigh carefully how you deal with this; in my case, I took the easy path and simply changed the authorization on the database to sa:
ALTER AUTHORIZATION ON DATABASE::AdventureWorks2012 TO sa;
At this point I was able to run the demo script without issue (well, I got errors when it tried to add another memory-optimized filegroup, but that is a completely different and ignorable problem).
There doesn't seem to be much practical guidance about how to choose the bucket count for your memory-optimized tables. There is this article on MSDN, which goes into some technical details, and Klaus Aschenbrenner has written this post about making smart choices in this area. Beyond that, you're pretty much on your own to experiment. The rule of thumb I've heard most often is 1x-2x the number of unique key values, so that point lookups are most efficient. However, some of the samples you'll find out there either consistently use 1,000,000 buckets, or much smaller numbers like 100 (and even 5 in one case), or a mix. Keep that in mind as you start to experiment with your own schema and data access patterns – you may have to drop and re-create tables with different bucket counts to find the "sweet spot" for your scenario.
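As a sketch of what that experimentation can look like, the following creates a memory-optimized table with a hash index sized at roughly 2x an assumed 500,000 unique keys, then inspects bucket utilization with the sys.dm_db_xtp_hash_index_stats DMV (the table name and bucket count here are hypothetical):

```sql
-- Hypothetical table; BUCKET_COUNT is rounded up internally to a power of two
CREATE TABLE dbo.SalesOrder_InMem
(
    SalesOrderID INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    OrderDate    DATETIME2 NOT NULL,
    CustomerID   INT NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- After loading representative data, check how full the buckets are:
-- many empty buckets suggests the count is too high; long chains, too low
SELECT OBJECT_NAME(object_id) AS [table],
       total_bucket_count,
       empty_bucket_count,
       avg_chain_length,
       max_chain_length
  FROM sys.dm_db_xtp_hash_index_stats;
```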
The AdventureWorks2012 database is set to
SIMPLE recovery. Like the database owner issue, in most cases this isn't a big deal for a sample database. But when you are testing In-Memory OLTP – likely in combination with other technologies that make
SIMPLE recovery a deal-breaker, such as Availability Groups – it may make much more sense to perform your testing against a database with recovery set to
FULL. Otherwise you may fail to observe behaviors that differ between recovery models. You can change AdventureWorks2012 to
FULL as follows:
ALTER DATABASE AdventureWorks2012 SET RECOVERY FULL;
And don't forget to take a full backup, so that a backup chain is established and the database isn't operating in pseudo-SIMPLE recovery mode.
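A minimal example, assuming a local backup folder of your choosing (the path below is a placeholder):

```sql
-- The first full backup after switching to FULL recovery starts the log chain
BACKUP DATABASE AdventureWorks2012
  TO DISK = N'C:\Backups\AdventureWorks2012.bak'
  WITH INIT;
```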