On June 27th, the PASS Performance Virtual Chapter held its 2013 Summer Performance Palooza – kind of a scaled-down 24 Hours of PASS, but focused solely on performance-related topics. I gave a session entitled, "10 Bad Habits that can Kill Performance," treating the following 10 concepts:
- SELECT *
- Blind indexes
- No schema prefix
- Default cursor options
- sp_ prefix
- Allowing cache bloat
- Wide data types
- SQL Server defaults
- Overusing functions
- "Works on my machine"
You may remember some of these topics from such presentations as my "Bad Habits and Best Practices" talk or our weekly Query Tuning webinars that I've been hosting with Kevin Kline from the beginning of June through this week. (Those 6 videos, by the way, will be available in early August on YouTube.)
My session had 351 attendees, and I got some great feedback. I wanted to address some of that.
First, a configuration issue: I was using a brand new microphone, and had no idea that every keystroke would sound like thunder. I've addressed that issue with better placement of my peripherals, but I do want to apologize to everyone affected by that.
Next, the downloads: the deck and samples are posted to the event site. They're at the bottom of the page, but you can also download them right here.
Finally, what follows is a list of questions that were posted during the session, and I wanted to make sure I addressed any that weren't answered during the live Q & A. I apologize that it has taken me just under a month to squeeze this in, but there were a lot of questions, and I didn't want to publish them in parts.
Q: If you have a proc that can have wildly varying input values for the given parameters and the result is that the cached plan isn't optimal for most cases, is it best to create the proc WITH RECOMPILE and take the small performance hit each time it runs?
A: You will have to approach this on a case-by-case basis, since it will really depend on a variety of factors (including the complexity of the plan). Also note that you can do statement-level recompile such that only the affected statements have to take the hit, as opposed to the entire module. Paul White reminded me that people often 'fix' parameter sniffing with RECOMPILE, but too often that means 2000-style WITH RECOMPILE rather than the much better OPTION (RECOMPILE), which not only limits itself to the statement, but also enables parameter embedding, which WITH RECOMPILE does not. So, if you're going to use RECOMPILE to thwart parameter sniffing, add it to the statement, not the module.
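For illustration, here is a minimal sketch (against the AdventureWorks sample database, with hypothetical procedure names) contrasting the two placements:
-- Module-level recompile: the whole procedure is recompiled on every call,
-- and no parameter embedding takes place.
CREATE PROCEDURE dbo.GetOrders_ModuleRecompile
    @SalesPersonID int
WITH RECOMPILE
AS
BEGIN
    SELECT SalesOrderID, OrderDate, TotalDue
      FROM Sales.SalesOrderHeader
      WHERE SalesPersonID = @SalesPersonID;
END
GO

-- Statement-level recompile: only this statement takes the hit, and the
-- optimizer can embed the runtime value of @SalesPersonID into the plan.
CREATE PROCEDURE dbo.GetOrders_StatementRecompile
    @SalesPersonID int
AS
BEGIN
    SELECT SalesOrderID, OrderDate, TotalDue
      FROM Sales.SalesOrderHeader
      WHERE SalesPersonID = @SalesPersonID
      OPTION (RECOMPILE);
END
GO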
Q: If you use OPTION (RECOMPILE) on dynamic SQL, would you see a big performance hit?
A: As above, this will depend on the cost and complexity of the plans and there is no way to say, "Yes, there will always be a big performance hit." You also have to be sure to compare that to the alternative.
Q: If there is a clustered index on insertdate, and later when retrieving data we use a convert function because with a direct comparison the query date is not readable – in the real world, what's the better choice?
A: I'm not sure what "readable in real world" means. If you mean you want the output in a specific format, you are usually better off converting to a string on the client side. C# and most of the other languages you are likely using at the presentation tier are more than capable of formatting date/time output from the database in whatever regional format you want.
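As a quick illustration of keeping the predicate sargable and leaving formatting to the client, here is a sketch using a hypothetical dbo.Orders table with a clustered index on InsertDate:
-- Wrapping the indexed column in a function prevents an index seek:
SELECT OrderID
  FROM dbo.Orders
  WHERE CONVERT(char(8), InsertDate, 112) = '20130627';

-- An open-ended range on the bare column can seek; format the value
-- for display at the presentation tier, not in the WHERE clause:
SELECT OrderID
  FROM dbo.Orders
  WHERE InsertDate >= '20130627'
    AND InsertDate <  '20130628';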
Q: How does one determine the number of times a cached plan is used – is there a column with that value or any queries on the internet that will give this value? Finally, would such counts only be relevant since the last restart?
A: Most DMVs are only valid since the last service start, and even others can be flushed more frequently (even on demand – both inadvertently and on purpose). The plan cache is, of course, in constant flux, and AFAIK plans that drop out of the cache don't maintain their previous count if they pop back in. So even when you see a plan in the cache I'm not 100% confident you can believe the use count you find.
That said, what you are probably looking for is sys.dm_exec_cached_plans.usecounts, and you may also find sys.dm_exec_procedure_stats.execution_count helpful to supplement information for procedures where individual statements within the procedures are not found in the cache.
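For reference, a rough sketch of querying both DMVs (adjust the filters and ordering to suit your environment):
-- Use counts for plans currently in the cache:
SELECT t.[text], p.usecounts
  FROM sys.dm_exec_cached_plans AS p
  CROSS APPLY sys.dm_exec_sql_text(p.plan_handle) AS t
  ORDER BY p.usecounts DESC;

-- Execution counts for procedures, since they were last cached:
SELECT [procedure] = OBJECT_NAME(ps.[object_id], ps.database_id),
       ps.execution_count
  FROM sys.dm_exec_procedure_stats AS ps
  ORDER BY ps.execution_count DESC;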
Q: What are the issues when upgrading the db engine to a new version but leaving the user databases in older compatibility modes?
A: The main concerns around this are the inability to use certain syntax, such as OUTER APPLY or variables with a table-valued function. I am not aware of any cases where using a lower compatibility level has any direct impact on performance, but a couple of things typically recommended are to rebuild indexes and update statistics (and to get your vendor to support the newer compatibility level ASAP). I have seen that resolve unexpected performance degradation in a substantial number of cases, but I've also heard some opinions that doing so is not necessary and perhaps even unwise.
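For completeness, checking and changing the level is straightforward (a minimal sketch; 110 is simply the SQL Server 2012 level):
-- See where each database currently sits:
SELECT name, compatibility_level FROM sys.databases;

-- Move a database to the newer level, then rebuild indexes and
-- update statistics as part of the same maintenance effort:
ALTER DATABASE AdventureWorks2012 SET COMPATIBILITY_LEVEL = 110;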
Q: On the *, does it matter when doing an EXISTS clause?
A: No; at least in terms of performance, one exception where SELECT * doesn't matter is when it's used inside an EXISTS clause. But why would you use * here? I prefer to use EXISTS (SELECT 1 ... – the optimizer will treat those the same, but in a way it self-documents the code and ensures that readers understand the subquery does not return any data (even if they miss the big EXISTS outside). Some people use NULL, and I have no idea why I started using 1, but I find NULL slightly unintuitive as well.
*Note* you will need to be careful if you try to use EXISTS (SELECT * inside a module that is schema-bound:
CREATE VIEW dbo.ThisWillNotWork
WITH SCHEMABINDING
AS
SELECT BusinessEntityID
FROM Person.Person AS p
WHERE EXISTS (SELECT * FROM Sales.SalesOrderHeader AS h
WHERE h.SalesPersonID = p.BusinessEntityID);
You get this error:
Syntax '*' is not allowed in schema-bound objects.
However, changing it to SELECT 1 works just fine. So maybe that is another argument for avoiding SELECT * even in that scenario.
Q: Is there any resource link for best coding standards?
A: There are probably hundreds across a variety of languages. Like naming conventions, coding standards are a very subjective thing. It doesn't really matter what convention you decide works best for you; if you like tbl prefixes, go nuts! Prefer Pascal over bigEndian? Have at it. Want to prefix your column names with the data type, like intCustomerID? I'm not going to stop you. The more important thing is that you define a convention and employ it *consistently.*
That said, if you want my opinions, I have no shortage of them.
Q: Is XACT_ABORT something that can be used in SQL Server 2008 onwards?
A: I don't know of any plans to deprecate XACT_ABORT, so it should continue to work just fine. Frankly I don't see it used very often now that we have TRY / CATCH (and THROW as of SQL Server 2012).
Q: How does an Inline Table Function on a cross apply compare to the scalar function that was called 1,000x?
A: I didn't test this, but in many cases replacing a scalar function with an inline table-valued function can have a great impact on performance. The problem I find is that making this switch can be a substantial amount of work on systems that were written before APPLY existed, or are still managed by folks who haven't embraced this better approach.
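To make that concrete, here is a rough sketch (AdventureWorks tables, hypothetical function names) of the kind of rewrite I mean:
-- The scalar version is invoked once per row, hiding its cost from the optimizer:
CREATE FUNCTION dbo.TotalDueForCustomer_Scalar (@CustomerID int)
RETURNS money
AS
BEGIN
    RETURN (SELECT SUM(TotalDue)
              FROM Sales.SalesOrderHeader
              WHERE CustomerID = @CustomerID);
END
GO

-- The inline table-valued version can be expanded into the outer plan:
CREATE FUNCTION dbo.TotalDueForCustomer_Inline (@CustomerID int)
RETURNS TABLE
AS
RETURN
(
    SELECT TotalDue = SUM(TotalDue)
      FROM Sales.SalesOrderHeader
      WHERE CustomerID = @CustomerID
);
GO

-- Called via APPLY instead of in the SELECT list:
SELECT c.CustomerID, t.TotalDue
  FROM Sales.Customer AS c
  OUTER APPLY dbo.TotalDueForCustomer_Inline(c.CustomerID) AS t;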
Q: I have a query that runs really slow the first time around (~1 min) and fast (~3 seconds) every other time. Where should I start looking for where the performance issue comes from the first time?
A: Two things jump to mind: (1) the delay has to do with compilation time, or (2) the delay has to do with the amount of data that is being loaded to satisfy the query, and the first time it has to come from disk and not memory. For (1) you can execute the query in SQL Sentry Plan Explorer and the status bar will show you compile time for the first and subsequent invocations (though a minute seems rather excessive for this, and unlikely). If you don't find any difference, then it may just be the nature of the system: insufficient memory to support the amount of data you're trying to load with this query in combination with other data that was already in the buffer pool. If you don't believe either of these is the issue, then see if the two different executions actually yield different plans – if there are any differences, post the plans to answers.sqlperformance.com and we'll be happy to take a look. In fact, capturing actual plans for both executions using Plan Explorer in any case may also show you any differences in I/O, and may point to where SQL Server is spending its time on the first, slower run.
Q: I'm getting parameter sniffing using sp_executesql, would Optimize for ad hoc workloads solve this as only the plan stub is in cache?
A: No, I don't think the Optimize for ad hoc workloads setting will help this scenario, since parameter sniffing implies that subsequent executions of the same plan are used for different parameters and with significantly different performance behaviors. Optimize for ad hoc workloads is used to minimize the drastic impact on the plan cache that can happen when you have a high number of different SQL statements. So unless you are talking about the impact to the plan cache of many different statements you are sending to sp_executesql – which wouldn't be characterized as parameter sniffing – I think experimenting with OPTION (RECOMPILE) may have a better result; or, if you know the parameter values that *do* produce good results across a variety of parameter combinations, use OPTIMIZE FOR. This answer from Paul White may provide much better insight.
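For example, a hypothetical sketch of hinting a specific, representative value while still passing whatever the caller supplied:
EXEC sys.sp_executesql
    N'SELECT SalesOrderID, OrderDate, TotalDue
        FROM Sales.SalesOrderHeader
        WHERE SalesPersonID = @SalesPersonID
        OPTION (OPTIMIZE FOR (@SalesPersonID = 277));',  -- 277 is just an illustrative value
    N'@SalesPersonID int',
    @SalesPersonID = 285;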
Q: Is there a way to run dynamic SQL and NOT save the query plan?
A: Sure, just include OPTION (RECOMPILE) in the dynamic SQL text:
DBCC FREEPROCCACHE;
USE AdventureWorks2012;
GO
SET NOCOUNT ON;
GO
EXEC sp_executesql
N'SELECT TOP (1) * INTO #x FROM Sales.SalesOrderHeader;';
GO
EXEC sp_executesql
N'SELECT TOP (1) * INTO #x FROM Sales.SalesOrderDetail OPTION (RECOMPILE);'
GO
SELECT t.[text], p.usecounts
FROM sys.dm_exec_cached_plans AS p
CROSS APPLY sys.dm_exec_sql_text(p.[plan_handle]) AS t
WHERE t.[text] LIKE N'%Sales.' + 'SalesOrder%';
Results: 1 row, showing the Sales.SalesOrderHeader query.
Now, if any statement in the batch does NOT include OPTION (RECOMPILE), the plan may still be cached; it just can't be re-used.
Q: Can you use BETWEEN on the date example from #9 instead of >= and <?
A: Well, BETWEEN is not semantically equivalent to >= AND <, but rather to >= AND <=, and it optimizes and performs in exactly the same way as that form. In any case, I purposely don't use BETWEEN on date range queries – ever – because there is no way to make it an open-ended range. With BETWEEN, both ends are inclusive, and this can be very problematic depending on the underlying data type (now or due to some future change you might not know about). The title might seem a bit harsh, but I go into great detail about this in the following blog post:
What do BETWEEN and the devil have in common?
Q: In a cursor what does "local fast_forward" really do?
A: FAST_FORWARD is actually the short form of READ_ONLY and FORWARD_ONLY. Here is what they do:
- LOCAL makes the cursor local to the scope in which it was created, rather than visible to outer scopes (by default a cursor is GLOBAL unless you have changed the database-level option).
- READ_ONLY makes it so that you can't update the cursor directly, e.g. using WHERE CURRENT OF.
- FORWARD_ONLY prevents the ability to scroll, e.g. using FETCH PRIOR or FETCH ABSOLUTE instead of FETCH NEXT.
Setting these options, as I demonstrated (and have blogged about), can have a significant impact on performance. Very rarely do I see cursors in production that actually need to deviate from this set of features, but usually they are written to accept the much more expensive defaults anyway.
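For reference, a minimal sketch of a cursor declared with those cheaper options:
DECLARE @name sysname;

DECLARE c CURSOR LOCAL FAST_FORWARD
  FOR SELECT name FROM sys.databases;

OPEN c;
FETCH NEXT FROM c INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @name;
    FETCH NEXT FROM c INTO @name;
END
CLOSE c;
DEALLOCATE c;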
Q: What is more efficient, a cursor or a WHILE loop?
A: A WHILE loop will probably be more efficient than an equivalent cursor with the default options, but I suspect you will find little if any difference if you use LOCAL FAST_FORWARD. Generally speaking, a WHILE loop *is* a cursor without being called a cursor, and I challenged some highly esteemed colleagues to prove me wrong last year. Their WHILE loops didn't fare so well.
Q: You don't recommend the usp prefix for user stored procedures; does this have the same negative impact?
A: A usp_ prefix (or any prefix other than sp_, or no prefix for that matter) does *not* have the same impact that I demonstrated. I find little value, though, in using a prefix on stored procedures because there is very rarely ever any doubt that when I find code that says EXEC something, that something is a stored procedure (unlike, say, prefixing views to distinguish them from tables, since they can be used interchangeably). Giving every procedure the same prefix also makes it that much harder to find the object you're after in, say, Object Explorer. Imagine if every last name in the phone book was prefixed with LastName_ – in what way does that help you?
Q: Is there a way to clean up cached plans where there are multiple copies?
A: Yes! Well, if you are on SQL Server 2008 or greater. Once you have identified two plans that are identical, they will still have separate plan_handle values. So, identify the one you *don't* want to keep, copy its plan_handle, and put it inside this DBCC command:
DBCC FREEPROCCACHE(0x06.....);
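If you first need to find the duplicates, one rough approach (SQL Server 2008+, and only one of several ways) is to group cached statements by query_hash and look for more than one plan:
-- Queries that currently have more than one plan in the cache:
SELECT qs.query_hash,
       plan_count = COUNT(DISTINCT qs.plan_handle)
  FROM sys.dm_exec_query_stats AS qs
  GROUP BY qs.query_hash
  HAVING COUNT(DISTINCT qs.plan_handle) > 1;
-- Then pull the individual plan_handle values (and the text via
-- sys.dm_exec_sql_text) for any hash you want to investigate.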
Q: Does using IF/ELSE etc. in a proc cause bad plans; does it get optimized for the first run and only optimize for that path? So do the sections of code in each IF need to be made into separate procedures?
A: Since SQL Server can now perform statement-level optimization, this has a less drastic effect today than it did on older versions, where the entire procedure had to be recompiled as one unit.
Q: I have sometimes found that writing dynamic SQL can be better because it eliminates the parameter sniffing problem for a stored procedure. Is this true? Are there tradeoffs or other considerations for this scenario?
A: Yes, dynamic SQL can often thwart parameter sniffing, particularly in the case where a massive "kitchen sink" query has a lot of optional parameters. I addressed some other considerations in questions above.
Q: If I had a computed column on my table defined as DATEPART(YEAR, mycolumn) and an index on it, would SQL Server use this with a SEEK?
A: It should, but of course it depends on the query. The index might not be suitable to cover the output columns or satisfy other filters and the parameter you use may not be selective enough to justify a seek.
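As a rough sketch (using a hypothetical dbo.Orders table), persisting and indexing the computed column looks like this:
ALTER TABLE dbo.Orders
  ADD OrderYear AS DATEPART(YEAR, OrderDate) PERSISTED;

CREATE INDEX IX_Orders_OrderYear
  ON dbo.Orders(OrderYear)
  INCLUDE (OrderID);

-- This predicate can seek on IX_Orders_OrderYear, provided the index
-- covers the query and the value is selective enough:
SELECT OrderID FROM dbo.Orders WHERE OrderYear = 2013;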
Q: Is a plan generated for EVERY query? Is a plan generated even for trivial ones?
A: As far as I know, a plan is generated for every valid query, even trivial ones, unless there is an error that prevents a plan from being generated (this can happen in multiple scenarios, such as invalid hints). Whether they are cached or not (and how long they stay in cache) depends on a variety of other factors, some of which I've discussed above.
Q: Does a call to sp_executesql generate (and reuse) the cached plan?
A: Yes, if you send the exact same query text; it doesn't really matter if you issue it directly or send it through sp_executesql, SQL Server will cache and reuse the plan.
Q: Is it ok to enforce a rule (for a dev environment) where all dev machines use instant file initialization?
A: I don't see why not. The only concern I would have is that with instant file initialization the developers may not notice large numbers of autogrow events, which can reflect poor autogrowth settings that can have a very different impact on the production environment (especially if any of those servers do *not* have IFI enabled).
Q: With the function in the SELECT clause, would it be correct to say then that it is better to duplicate code?
A: Personally, I would say yes. I've gotten a lot of performance mileage out of replacing scalar functions in the SELECT list with an inline equivalent, even in cases where I have to repeat that code. As mentioned above, though, you may find in some cases that replacing that with an inline table-valued function can give you code re-use without the nasty performance penalty.
Q: Can we use data generators to get the same data size for development use instead of using (hard to get) production data? Is the data skew important for the resulting plans?
A: Data skew can be a factor, and I suspect it depends on what kind of data you're generating / simulating and how far off the skew might be. If you have, say, a varchar(100) column which in production is typically 90 characters long and your data generation produces data that averages 50 (which is what SQL Server will assume), you're going to find a much different impact on number of pages and optimization, and probably not very realistic tests.
But I'll be honest: this specific facet isn't something I've invested a lot of time in, because I can usually bully my way into getting real data. :-)
Q: Are all functions created equal when examining query performance? If not, is there a list of known functions you should avoid when possible?
A: No, not all functions are created equal in terms of performance. There are three different types of functions that we can create (ignoring CLR functions for the time being):
- Multi-statement scalar functions
- Multi-statement table-valued functions
- Inline table-valued functions
Inline scalar functions are mentioned in the documentation, but they are a myth and, as of SQL Server 2014 at least, may as well be mentioned alongside Sasquatch and the Loch Ness Monster.
In general, and I would put that in 80pt font if I could, inline table-valued functions are good, and the others should be avoided when possible, since they're much harder to optimize.
Functions can also have different properties that affect their performance, such as whether they are deterministic and whether they are schema-bound.
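You can check both of those properties for any existing function (the name below is just a placeholder):
SELECT is_deterministic = OBJECTPROPERTY(OBJECT_ID(N'dbo.MyFunction'), 'IsDeterministic'),
       is_schema_bound  = OBJECTPROPERTY(OBJECT_ID(N'dbo.MyFunction'), 'IsSchemaBound');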
For many function patterns, there are definitely performance considerations you need to make, and you should also be aware of this Connect item which aims to address them.
Q: Can we keep running totals without cursors?
A: Yes, we can; there are several methods other than a cursor (as detailed in my blog post, Best approaches for running totals – updated for SQL Server 2012):
- Subquery in SELECT list
- Recursive CTE
- Self-join
- "Quirky update"
- SQL Server 2012+ only: SUM() OVER() (using default / RANGE)
- SQL Server 2012+ only: SUM() OVER() (using ROWS)
The last option is by far the best approach if you're on SQL Server 2012; if not, there are restrictions on the other non-cursor options that will often make a cursor the more attractive choice. For example, the quirky update method is undocumented and not guaranteed to work in the order you expect; the recursive CTE requires there are no gaps in whatever sequential mechanism you are using; and the subquery and self-join approaches simply don't scale.
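For reference, a minimal sketch of that last option against AdventureWorks, with the ROWS frame spelled out (the default RANGE frame gives the same answer here because the ORDER BY is unique, but it is measurably slower):
SELECT SalesOrderID,
       OrderDate,
       TotalDue,
       RunningTotal = SUM(TotalDue) OVER
       (
           ORDER BY OrderDate, SalesOrderID
           ROWS UNBOUNDED PRECEDING
       )
  FROM Sales.SalesOrderHeader
  ORDER BY OrderDate, SalesOrderID;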
Nice post. Full of really useful stuff :)
Hi, Aaron, thanks for some useful information!
A small note: that seems to be not quite true:
"An interesting parameter sniffing tidbit from Paul White (that I'll confess I had no idea about): the branch that does not get executed still gets compiled, but with a parameter value of NULL."
The branch really is compiled, but with the sniffed parameter value, so the parameter sniffing problem is fully applicable in this case.
Here is an example:
use AdventureWorks2008
go
if object_id('p1','P') is not null drop procedure p1;
go
create proc p1 @branch int, @ProductID int
as
if @branch = 1
select * from Sales.SalesOrderDetail where ProductID < @ProductID
else
select * from Production.Product where ProductID < @ProductID
go
dbcc freeproccache;
go
set statistics xml on
go
-- exec branch 2, compile and sniff value NULL for branch 1 also
exec p1 @branch = 2, @ProductID = null;
go
-- exec branch 1, no recompilation, use "bad" plan for branch 1 with seek+lookup for earlier sniffed NULL value
exec p1 @branch = 1, @ProductID = 1000;
go
set statistics xml off
go
In my opinion, splitting code into multiple paths is helpful to address what are sometimes called "catch-all" or "optional parameter" queries.
This is also true if the parameters are independent for each branch.
use AdventureWorks2008
go
if object_id('p1','P') is not null drop procedure p1;
go
create proc p1 @branch int, @ProductID int, @Name nvarchar(50)
as
if @branch = 1
select * from Sales.SalesOrderDetail where ProductID < @ProductID
else
select * from Production.Product where Name < @Name
go
dbcc freeproccache;
go
set statistics xml on
go
exec p1 @branch = 2, @ProductID = 1, @Name = 'a';
go
exec p1 @branch = 1, @ProductID = 1000, @Name = 'b';
go
set statistics xml off
go
Hello Dima,
Yes you're right of course. I misremembered something I had been testing recently – I'll ask Aaron to remove that particular comment. Well spotted – good to see you're paying close attention every time my name is mentioned =)
Paul
Hello, Paul.
I try to pay attention to everything that is interesting and involves the optimizer, not you personally =)
But you are one of the most advanced, interesting and active authors on this topic, which I really appreciate, so of course I read carefully!