A couple of quick things on PASS feedback

By Aaron Bertrand, November 8, 2013, in SQL Performance

Earlier this week, the PASS speaker portal was updated with the session evaluations from this year's PASS Summit in Charlotte, NC. Most people were very happy with their evaluations, though a few were pretty hard on themselves. One thing I wanted to say was:

Stop worrying about your ranking!

It's hard to avoid comparing your score to the rest of the field. But I have to point out that almost the entire conference scored above 4.0 on a 5-point scale (and more than half were above 4.5). That is truly remarkable, and it is a testament to the quality of the speakers and to the job the selection committee did vetting speakers and sessions. This really was a high-quality event and, depending on the size of your audience, your rank amongst such an esteemed group of peers could be crushed by a single grumpy reviewer giving you a 3 or 4 instead of a 5. Maybe you made fun of Canadians and they didn't like it, or maybe they blamed you for the temperature of the room, or for the fact that your talk was in a small room and they had to stand. So don't take your ranking too seriously: focus on the qualitative feedback about your content and delivery, and try to remember the things you know you need to improve on. Work on those things for next time; don't work on how to jockey up a few places in the rankings so you can finish ahead of so-and-so.

Now, since I know that people will still want to dwell on the numbers, there was a little snafu in this year's survey which threw off the original overall scores and rankings. Question #6 asked:

Did you learn what you expected from this session?

And then the possible answers were:

1. Yes     2. No     3. Sort of

This is not a sliding scale: 1 is best, 2 is worst, and 3 is mediocre. Kendra Little (@Kendra_Little) put it best in this graphic.

So it's okay to collect information this way if you're just going to count up the Yesses, Nos, and Sort-ofs, but it can't be used to calculate a numerical score. For example, the average for one of my sessions was 1.18. What does that mean? Did I have five people who learned nothing from my session, or two people who "sort of" learned what they expected? Calculating an average here, even on its own, was entirely meaningless (a quick numeric illustration follows the quote below); worse yet, a *better* result on this question meant a *lower* average, so it actually pulled your overall score *down* further when it was included in the overall score calculation. To their credit, PASS was quick to correct this (unlike the other survey snafu); within about two hours of publishing the results, they had recalculated the scores, dropping Question #6 altogether:

The session evaluations have been updated and the overall ranking will now show your true score. The individual reports still show the average score for question #6; however, this score no longer affects your overall score for the session or your overall ranking. To see your new overall score, please refresh …
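To make the ambiguity concrete, here's a quick illustration. The response counts are entirely made up; the point is only that very different audiences can produce the same average when Yes = 1, No = 2, and Sort of = 3:

```sql
-- Hypothetical response mixes that both round to an average of 1.18:
--   23 Yes (1) + 5 No (2):      (23*1 + 5*2) / 28 = 33/28 ~ 1.179
--   20 Yes (1) + 2 Sort of (3): (20*1 + 2*3) / 22 = 26/22 ~ 1.182
SELECT CONVERT(decimal(4,2), 33.0/28) AS FiveGrumpyNos,
       CONVERT(decimal(4,2), 26.0/22) AS TwoSortOfs;
-- Both columns return 1.18, even though one session had five attendees
-- who learned nothing, and the other had two who only "sort of" got
-- what they expected.
```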

Now, my suggestion is, if you really want to know your overall score while incorporating the data from Question #6, just change the scale: make Yes a 5, No a 1, and Sort of a 3. You can do this by downloading the CSV, bulk inserting it, and updating the data directly (a rough sketch follows below). Or, if you're a really good Excel wrangler, you could probably just do it there. I tried to automate this, but I really suck at Excel, and the CSV format is not really made for bulk insert (some of the columns are actually separated by CR/LF, not commas). So you may need to do some work to get it into a format you can work with, but the idea is there, anyway.
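For the T-SQL route, here's a minimal sketch of what that could look like. Everything in it is hypothetical: the table and column names are made up, the file path is a placeholder, and the BULK INSERT options assume you've already cleaned the file so that every row ends in CR/LF and every field is comma-separated, which (as noted above) the raw export doesn't guarantee:

```sql
-- Hypothetical staging table; the real export layout will differ.
CREATE TABLE dbo.SessionEvals
(
  SessionID int,
  Question  tinyint,
  Response  tinyint  -- for Q6: 1 = Yes, 2 = No, 3 = Sort of
);

-- Assumes a pre-cleaned, comma-delimited file with a header row.
BULK INSERT dbo.SessionEvals
  FROM 'C:\temp\evals_cleaned.csv'
  WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\r\n', FIRSTROW = 2);

-- Remap Question #6 onto the same 5-point scale as the rest:
-- Yes (1) -> 5, No (2) -> 1, Sort of (3) -> 3.
UPDATE dbo.SessionEvals
  SET Response = CASE Response WHEN 1 THEN 5 WHEN 2 THEN 1 ELSE 3 END
  WHERE Question = 6;

-- With all questions on the same scale, an overall average that
-- includes Question #6 actually means something:
SELECT SessionID,
       AVG(CONVERT(decimal(4,2), Response)) AS OverallScore
  FROM dbo.SessionEvals
  GROUP BY SessionID;
```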