If you've ever used Management Studio, this output message will probably look familiar:
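(1 row(s) affected)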
This comes from SQL Server's DONE_IN_PROC message, which is sent at the successful completion of any SQL statement that has returned a result (including the retrieval of an execution plan, which is why you see two of these messages when you've actually only executed a single query).
You can suppress these messages with the following command:
SET NOCOUNT ON;
Why would you do that? Because these messages are chatty and often useless. In my Bad Habits and Best Practices presentations, I often talk about adding SET NOCOUNT ON; to all stored procedures, and turning it on in application code that submits ad hoc queries. (During debugging, though, you might want a flag to turn the messages back on, as the output can be useful in those cases.)
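Here's a minimal sketch of that pattern (the procedure and table names are hypothetical):

CREATE PROCEDURE dbo.TouchWidget
    @WidgetID INT,
    @Debug    BIT = 0
AS
BEGIN
    SET NOCOUNT ON;
    IF @Debug = 1
        SET NOCOUNT OFF; -- bring the "rows affected" output back while debugging

    UPDATE dbo.Widgets
      SET LastTouched = SYSDATETIME()
      WHERE WidgetID = @WidgetID;
END
GO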
I always added the disclaimer that the advice to turn this option on everywhere isn't universal; it depends. Old-school ADO recordsets actually interpreted these messages as resultsets, so adding SET NOCOUNT ON; after the fact could break applications that are already manually skipping them. And some ORMs (cough NHibernate cough) parse the results to determine the success of DML commands (ugh!). Please test your changes.
I know that at one point I had proven to myself that these chatty messages could impact performance, especially over a slow network. But it's been a long time, and last week Erin Stellato asked me if I had ever formally documented it. I haven't, so here goes. We'll take a very simple loop, where we'll update a table variable a million times:
SET NOCOUNT OFF; -- row count messages on (the default)
DECLARE @i INT = 1;
DECLARE @x TABLE(a INT);
INSERT @x(a) VALUES(1);
SELECT SYSDATETIME(); -- start time
WHILE @i <= 1000000 -- one million iterations
BEGIN
  UPDATE @x SET a = 1; -- each UPDATE emits its own DONE_IN_PROC message
  SET @i += 1;
END
SELECT SYSDATETIME(); -- end time
A couple of things you might notice:
- The messages pane is flooded with instances of the (1 row(s) affected) message.
- The initial SELECT SYSDATETIME(); does not present itself in the results pane until after the entire batch has completed. This is because of the flooding.
- This batch took about 21 seconds to run.
Now, let's repeat this without the DONE_IN_PROC messages, by changing SET NOCOUNT OFF; to SET NOCOUNT ON; and running it again.
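The only change is the first line of the batch:

SET NOCOUNT ON; -- suppress the per-statement row count messages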
While the messages pane was no longer flooded with the row(s) affected messages, the batch still took ~21 seconds to run.
Then I thought, wait a second, I know what's going on. I'm on a local machine, with no network involved, using Shared Memory; I have only SSDs and gobs and gobs of RAM...
So I repeated the tests using my local copy of SSMS against a remote Azure SQL Database - a Standard, S0, V12. This time, the queries took a lot longer, even after reducing the iterations from 1,000,000 to 100,000. But again there was no tangible difference in performance whether DONE_IN_PROC messages were being sent or not. Both batches took about 104 seconds, and this was repeatable over many iterations.
Conclusion
For years, I had been operating under the impression that SET NOCOUNT ON; was a critical part of any performance strategy. This was based on observations I had made in, arguably, a different era, and that are less likely to manifest today.

That said, I will continue to use SET NOCOUNT ON;, even if on today's hardware there is no noticeable difference in performance. I still feel pretty strongly about minimizing network traffic where possible. I should consider implementing a test where I have much more constrained bandwidth (maybe someone has an AOL CD they can lend me?), or a machine where the amount of memory is lower than Management Studio's output buffer limits, to be sure that there isn't a potential impact in worst-case scenarios. In the meantime, while it might not change the perceived performance of your application, it might still help your wallet to always turn this set option on, especially in situations like Azure - where you may be charged for egress traffic.
There must be more to this. For example, Azure could be so slow that the workload is server-bound.
Also, I wonder how an S0 managed to execute 10,000 updates per second over an extended period of time. Are you sure the measurement is valid?
I also remember that I measured completion messages to have a severe effect 1-2 years ago. I think this test just does not manage to demonstrate the issue for some reason. But I can reproduce your findings on 2014 and on 2008 R2.
Sorry tobi, you're right – for the S0 server I did reduce the batch size by one zero, so it was more like 1,000 updates per second. The point remains, though, that NOCOUNT had no impact.
And I do recall this having a severe effect before, as well – so consistently that it became an ingrained force of habit. But now I'm having a hard time reproducing the scenario. If you can figure out which variable needs to be constrained in order to reproduce this effect, I'm all ears.
On my old Core i7-3770K machine, this loop takes about 14.5 seconds (with SQL Server 2014 SSMS). On a newer Core i7-6700K, it takes about 9 seconds with SET NOCOUNT ON; and about 12 seconds with SET NOCOUNT OFF; (with SQL Server 2016 CTP 3.3 SSMS).
This just means that my hardware is faster than yours… :-) More seriously, I am seeing some variations in timing results based on which version of SSMS is involved, which makes some sense with this exact test.
I think the more important point is the reduction in network traffic, as you point out.
I also tried on the S0 server by changing the batch a little bit, interspersing some selects (IF @i % 1000 = 0 SELECT @i;). I thought context switching might make a difference for SSMS, but both batches finished within 200-300ms of each other every time. Still about 1,000 updates per second (and also with @i % 100 – though in that case SSMS stopped producing results of the select at about @i = 72,800).

What I didn't try? Reproducing the issue on ancient versions of Management Studio. Maybe that's actually where the improvement has happened; I could picture some issue in the code where writing the DONE_IN_PROC message was serialized or blocking somehow. Which may have never been an issue for regular applications either (unless they were coded similarly).

I did try that. I ran an ancient 2008 (no R2) SSMS in a Win7 x64 VM. I connected to: local, remote 2014, remote 2008 R2. It was fast.
I also tried using a named table in tempdb.
It is hard to believe this! I am sure that I had a case where NOCOUNT provided very relevant speedups, not just a few percent.
Could you use a Microsoft contact that works in a relevant area of the product? This is bugging me.
I did some testing a long time ago on this and always had noticeable differences in time. However, the best gains came from jobs in SQL Agent. I remember one job that usually took about 45 minutes, and when I added SET NOCOUNT ON it took less than 2 minutes. No lie. This was a while ago and the job (not mine :) ) was full of cursors, and that simple change made a world of difference and a believer out of me. Now hardware and especially the network have gotten faster, so the effects seem less for sure. But the bottom line is that the extra round trips, buffer flushes, etc. take more time and resources than skipping all those garbage messages. Think of what the network would be like if no one used SET NOCOUNT ON in a busy environment. There would be millions of useless packets doing nothing but extra work. So while a simple controlled test may not show much difference, I personally would want my network a bit cleaner :).
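To see why cursor-heavy jobs are the worst case, consider a minimal sketch like this (the table is made up) — with NOCOUNT OFF, every FETCH and every UPDATE inside the loop sends its own row count message back to the client:

SET NOCOUNT ON; -- without this, each iteration below sends multiple messages

DECLARE @id INT;
DECLARE c CURSOR LOCAL FAST_FORWARD FOR
    SELECT id FROM dbo.SomeQueue; -- hypothetical work table
OPEN c;
FETCH NEXT FROM c INTO @id;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.SomeQueue SET processed = 1 WHERE id = @id;
    FETCH NEXT FROM c INTO @id;
END
CLOSE c;
DEALLOCATE c;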
What I'm still most surprised by is that it still doesn't default to ON. I'd have expected that to happen one sunny release, with the usual warnings on BOL articles in the run-up.
I'm guessing that the net gain (against breaking existing applications) was deemed insufficient, and so instead they just added it to the default SSMS template along with the now-ubiquitous comment "SET NOCOUNT ON added to prevent extra result sets from interfering with SELECT statements."
Reeks of a workaround to me, but it's a pretty handy warning flag for identifying who within your remit is creating stored procs via the use of a mouse :)
I'd say you're gonna get the results you want in sprocs that use cursors.
To Aaron's point, even if performance isn't impacted there are other reasons to set NOCOUNT on. For instance, we have a deployment process that logs the results produced by the engine. When the developers forget and leave NOCOUNT off, the log gets flooded with useless record(s) affected messages. If your query results are getting saved off to log files somewhere, this is no bueno.
Aaron, I wonder if it causes more issues when it causes a second (that is, additional) packet to be sent over the network.
The interesting thing is that, if client statistics are turned on in SSMS for both executions, there is no difference reported by the "Bytes received from server" metric.
However, "Rows affected…" and "Rows returned…" are reported as 0 when executing with SET NOCOUNT ON.
Daniel
In the paragraph where you start with:
"Why would you do that? Because these messages are chatty and often useless…"
You say:
"I always talk about adding SET NOCOUNT ON; to all stored procedures, and turning it on in application code that submits ad hoc queries."
I think you meant to say SET NOCOUNT OFF; in that context.
No Jacob, for years I've been suggesting people always – with the noted exceptions – add SET NOCOUNT ON; both in stored procedures and in ad hoc queries (regardless of source).

SSIS consumes these messages in some horribly performing exponential way. I've seen it add as much as 25% overhead on a multi-hour process when the messages were generated from RBAR statements.
Nothing new to me, I've always found the suggestion to SET NOCOUNT ON to be the least useful performance advice in the various SQL literature (and that advice was just everywhere). Sure it makes a difference in a few rather rare circumstances, but most of the time it makes no difference.
I think the point was that while it won't always help, it can't hurt. I just think we have fewer scenarios now where it helps (but it still doesn't hurt).
Just had an Excel super-user say that without SET NOCOUNT ON, he can't use stored procedures in Excel.
I would imagine testing multiple connections performing this test would be a better comparison of real-world activity (users from multiple desktops / laptops / sites, etc.). And this is where the usage of SET NOCOUNT ON versus OFF would still demonstrate its usefulness.
From the foggy SQL 2000 days I always assumed the lag from DONE_IN_PROC and other floods was due to display rendering lag, rather than data retrieval. I seem to recall that this lag was noticeable when a query was executed in SQL Query Analyzer, where the no count messages were displayed, rather than when it was run from a client-side app. My hypothesis is that the performance delta for whether NOCOUNT is turned on has more to do with the advent of more powerful client-side graphics cards, especially in laptops, than anything going on on the SQL Server side.
Thoughts?
My guess is that the delta at the time had more to do with the fact that the app was just ignoring the messages while Query Analyzer had to accept them and render them. But now I'm really not sure.
I'm still pushing for "set nocount on" in (ALMOST) all of our code. We call maybe 50 million transactions per server per day… saving 1 ms per transaction will help reduce lock durations and CPU usage.
I wonder if it is simply when SQL Server is sending the row count messages over TCP. With TCP you have to wait on the acknowledgement packet. I could see this adding a significant amount of overhead for RBAR operations, especially if the network was over a VPN. The latency for the acknowledgement packets can really start to add up. This problem could manifest itself in other ways if you have a distributed query as well.
I suspect when I connected to a remote Azure SQL Database, the row count messages were being sent back to my local SSMS using TCP. But I still saw no difference. :-)
Also, I want to stress that in the above post I am absolutely not advocating that you shouldn't bother using SET NOCOUNT ON anymore. I still think this is a best practice. I just think that now we do it for slightly different and less visible reasons.

The larger issue for me has not been performance but functionality. There are tools out there that would see the first (1 row(s) affected) as the end of the stored procedure and then say they got no response, or disconnect. SET NOCOUNT ON prevents that from happening.
I did a quick test in a production SQL Server (SSMS 2012 – SQL Server 11.0.5343):

01:03 NOCOUNT OFF
00:45 NOCOUNT ON
00:42 NOCOUNT ON, discard results on
http://daleburnett.com/2014/01/everything-ever-wanted-know-set-nocount/
This guy took a deep dive into what happens with NOCOUNT.
It seems like the potential for performance problems will scale with the number of statements in the batch.
I may try to test out on some different SQL Server builds as I am curious where the performance improvement is coming from.
I also want to say thanks for the work you put into your articles. They always go well with coffee in the morning. :)
I can't remember when I first discovered the impact of all the "rows affected" messages, it could have been anything between 6.5 and 2000, but using SET NOCOUNT ON made a huge improvement on our stored proc and trigger heavy application which we were running with a PowerBuilder v5 GUI. I hadn't considered that someone had gotten around to changing SQL Server, although I suspect it's a change between MDAC and SNAC.
I've run your test WHILE loop on a SQL Server 2008 (SP3) box and the NOCOUNT setting doesn't make a difference. Looking at the Client Network Statistics there's a small drop in the number of TDS packets sent by the server with NOCOUNT ON.
I might have to do an install of SQL 2000 and use WireShark to have a look at what's really going on.
The point of having this set ON is missed with this test. Guess which one of these columns was used in a STORED PROCEDURE that had SET NOCOUNT OFF?

TDS packets received from server    9541      1
Client processing time             24689      0
"By removing this extra overhead from the network it can greatly improve overall performance for your database and application."
Duration is never the only metric you should use when making a decision on how something performs. Any time you have RBAR code (how many applications have you come across that don't have this?), the payoff can be dramatic.
Thanks Mark, but my point remains, I used to be able to clearly demonstrate a noticeable change in duration, and I can't anymore. End users aren't counting packets, they're watching a clock. While of course duration isn't the only thing you should care about, it's the only thing end users care about, probably 100% of the time.
And I will repeat once again: I am in no way, shape, or form advocating that you should stop using SET NOCOUNT ON. I was merely pointing out that, unlike at some point in the past, duration is no longer impacted by this setting the way it used to be, in all the tests I've contrived.

I wanted to repeat your code with memory-optimized tables. Your code, copied verbatim, ran for ~10 seconds on my laptop (both NOCOUNT ON and NOCOUNT OFF). Below is the code I used, which ran for ~25 seconds with NOCOUNT ON and failed with error 701 with NOCOUNT OFF. Any explanation for that?…
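A comparable memory-optimized variant (purely a hypothetical sketch — the type name is invented, and it assumes a database with a MEMORY_OPTIMIZED_DATA filegroup) might look something like:

-- Hypothetical memory-optimized table type standing in for the disk-based table variable
CREATE TYPE dbo.InMemTest AS TABLE
(
    id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
    a  INT NOT NULL
) WITH (MEMORY_OPTIMIZED = ON);
GO

DECLARE @i INT = 1;
DECLARE @x dbo.InMemTest; -- memory-optimized table variable
INSERT @x(id, a) VALUES(1, 1);

SELECT SYSDATETIME();

WHILE @i <= 1000000
BEGIN
    UPDATE @x SET a = 1 WHERE id = 1;
    SET @i += 1;
END

SELECT SYSDATETIME();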
Thank you Aaron Bertrand. The information you shared is very useful to me.
I am seeing one of the weird surprises of "set nocount on" and maybe you can help! I have inherited an app which uses classic ASP and also msxmlhttp. In all the existing code, I have not seen any SQL execution which gets the identity back from the database after inserting a row. After a recent request, I have a need to summarize the times when certain messages are displayed. I added the following code:
SQLL = "set nocount on "
SQLL = SQLL + " INSERT INTO Pirep_History (msgtxt, IPAddress, Area) VALUES ('" + msg + "','" + top.topw.document.forms.ESIS.IPAddress.value + "','" +TargetArea + "') "
SQLL = SQLL + " SELECT SCOPE_IDENTITY() PIREP_ID"
SQLL = SQLL + " set nocount off"
I see weird behavior:
1. If I keep this code, I get 4 rows inserted every time this code is executed, and it gets me the id of the 4th row back.
2. If I comment out the first line (since the extra "chatty" message messes things up), it inserts only 1 row (which is good), but then I do not get any id back!!! Any idea?
I am curious why you are bothering with set nocount off at the end, but I would also suggest running a trace or Extended Events session to see exactly what code is being sent to SQL Server on your behalf (I bet you get the right results if you run the same code in Management Studio, for example). I also doubt set nocount on has anything to do with it…
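A minimal sketch of that kind of Extended Events session (the session name is invented, and it assumes permission to create event sessions):

CREATE EVENT SESSION WhatIsAspSending ON SERVER
ADD EVENT sqlserver.sql_batch_completed
    (ACTION (sqlserver.client_app_name, sqlserver.sql_text))
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION WhatIsAspSending ON SERVER STATE = START;
GO
-- after exercising the ASP page, inspect what actually arrived:
SELECT CAST(t.target_data AS XML)
FROM sys.dm_xe_sessions AS s
INNER JOIN sys.dm_xe_session_targets AS t
    ON s.address = t.event_session_address
WHERE s.name = N'WhatIsAspSending';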
Yup, same code in Management Studio works fine. It's just msxml and classic ASP messing it all up. I have done getting SCOPE_IDENTITY() back in asp.net many times, but this old app I have inherited is giving me trouble!
Hello!
Just put together a blog post… NOCOUNT OFF can have an impact throughout the span of a WHILE loop. But the impact is ASYNC_NETWORK_IO waits throughout. If there isn't enough latency for the operations it's performing, the waits will disappear. But with sufficient lag between SQL Server and the client, the ASYNC_NETWORK_IO wait time can be severe.
http://sql-sasquatch.blogspot.com/2017/11/hey-whats-deal-with-nocount-and-t-sql.html
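One quick way to watch for this (assuming VIEW SERVER STATE permission) is to sample the cumulative wait stats before and after running the loop:

-- server-wide, cumulative since the last restart or stats clear
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = N'ASYNC_NETWORK_IO';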
Thanks Lonny! It used to be really bad – and really obvious – in SSMS. I think in newer versions they are packaging up all those messages to SSMS much more efficiently. But it doesn't surprise me that there are still scenarios where you can see a glaring difference (and I still think the messages should be turned off as a rule, with few exceptions).