Itzik Ben-Gan

Special Islands

Gaps and Islands tasks are classic querying challenges where you need to identify ranges of missing values and ranges of existing values in a sequence. The sequence is often based on date or date-and-time values that should normally appear at regular intervals, but some entries are missing. The gaps task looks for the missing periods and the islands task looks for the existing periods. I covered many solutions to gaps and islands tasks in my books and articles in the past. Recently, my friend Adam Machanic presented me with a new, special islands challenge, and solving it required a bit of creativity. In this article I present the challenge and the solutions I came up with.

The challenge

In your database you keep track of services your company supports in a table called CompanyServices, and each service normally reports about once a minute that it’s online in a table called EventLog. The following code creates these tables and populates them with small sets of sample data:

 SET NOCOUNT ON;
 USE tempdb;
 IF OBJECT_ID(N'dbo.EventLog') IS NOT NULL DROP TABLE dbo.EventLog;
 IF OBJECT_ID(N'dbo.CompanyServices') IS NOT NULL DROP TABLE dbo.CompanyServices;
 
 CREATE TABLE dbo.CompanyServices
 (
   serviceid INT NOT NULL,
   CONSTRAINT PK_CompanyServices PRIMARY KEY(serviceid)
 );
 GO
 
 INSERT INTO dbo.CompanyServices(serviceid) VALUES(1), (2), (3);

 CREATE TABLE dbo.EventLog
 (
   logid     INT          NOT NULL IDENTITY,
   serviceid INT          NOT NULL,
   logtime   DATETIME2(0) NOT NULL,
   CONSTRAINT PK_EventLog PRIMARY KEY(logid)
 );
 GO
 
 INSERT INTO dbo.EventLog(serviceid, logtime) VALUES
   (1, '20180912 08:00:00'),
   (1, '20180912 08:01:01'),
   (1, '20180912 08:01:59'),
   (1, '20180912 08:03:00'),
   (1, '20180912 08:05:00'),
   (1, '20180912 08:06:02'),
   (2, '20180912 08:00:02'),
   (2, '20180912 08:01:03'),
   (2, '20180912 08:02:01'),
   (2, '20180912 08:03:00'),
   (2, '20180912 08:03:59'),
   (2, '20180912 08:05:01'),
   (2, '20180912 08:06:01'),
   (3, '20180912 08:00:01'),
   (3, '20180912 08:03:01'),
   (3, '20180912 08:04:02'),
   (3, '20180912 08:06:00');

 SELECT * FROM dbo.EventLog;

The EventLog table is currently populated with the following data:

 logid       serviceid   logtime
 ----------- ----------- ---------------------------
 1           1           2018-09-12 08:00:00
 2           1           2018-09-12 08:01:01
 3           1           2018-09-12 08:01:59
 4           1           2018-09-12 08:03:00
 5           1           2018-09-12 08:05:00
 6           1           2018-09-12 08:06:02
 7           2           2018-09-12 08:00:02
 8           2           2018-09-12 08:01:03
 9           2           2018-09-12 08:02:01
 10          2           2018-09-12 08:03:00
 11          2           2018-09-12 08:03:59
 12          2           2018-09-12 08:05:01
 13          2           2018-09-12 08:06:01
 14          3           2018-09-12 08:00:01
 15          3           2018-09-12 08:03:01
 16          3           2018-09-12 08:04:02
 17          3           2018-09-12 08:06:00

The special islands task is to identify the availability periods (serviceid, starttime, endtime). One catch is that there’s no assurance that a service will report that it’s online exactly every minute; you’re supposed to tolerate an interval of up to, say, 66 seconds from the previous log entry and still consider it part of the same availability period (island). Beyond 66 seconds, the new log entry starts a new availability period. So, for the input sample data above, your solution is supposed to return the following result set (not necessarily in this order):

 serviceid   starttime                   endtime
 ----------- --------------------------- ---------------------------
 1           2018-09-12 08:00:00         2018-09-12 08:03:00
 1           2018-09-12 08:05:00         2018-09-12 08:06:02
 2           2018-09-12 08:00:02         2018-09-12 08:06:01
 3           2018-09-12 08:00:01         2018-09-12 08:00:01
 3           2018-09-12 08:03:01         2018-09-12 08:04:02
 3           2018-09-12 08:06:00         2018-09-12 08:06:00

Notice, for instance, how log entry 5 starts a new island since the interval from the previous log entry is 120 seconds (> 66), whereas log entry 6 does not start a new island since the interval from the previous entry is only 62 seconds (<= 66).

Another catch is that Adam wanted the solution to be compatible with pre-SQL Server 2012 environments, which makes it a much harder challenge, since you cannot use window aggregate functions with a frame to compute running totals, nor offset window functions like LAG and LEAD.

As usual, I suggest trying to solve the challenge yourself before looking at my solutions. Use the small sets of sample data to check the validity of your solutions. Use the following code to populate your tables with large sets of sample data (500 services, ~10M log entries), which you’ll use to test the performance of your solutions:

 -- Helper function dbo.GetNums
 IF OBJECT_ID(N'dbo.GetNums') IS NOT NULL DROP FUNCTION dbo.GetNums;
 GO
 CREATE FUNCTION dbo.GetNums(@low AS BIGINT, @high AS BIGINT) RETURNS TABLE
 AS
 RETURN
   WITH
     L0   AS (SELECT c FROM (SELECT 1 UNION ALL SELECT 1) AS D(c)),
     L1   AS (SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B),
     L2   AS (SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B),
     L3   AS (SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B),
     L4   AS (SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B),
     L5   AS (SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B),
     Nums AS (SELECT ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) AS rownum
              FROM L5)
    SELECT TOP(@high - @low + 1) @low + rownum - 1 AS n
   FROM Nums
   ORDER BY rownum;
 GO
 
 -- ~10,000,000 intervals
 DECLARE 
   @numservices      AS INT          = 500,
   @logsperservice   AS INT          = 20000,
   @enddate          AS DATETIME2(0) = '20180912',
   @validinterval    AS INT          = 60, -- seconds
   @normdifferential AS INT          = 3,  -- seconds
   @percentmissing   AS FLOAT        = 0.01;

 TRUNCATE TABLE dbo.EventLog;
 TRUNCATE TABLE dbo.CompanyServices;

 INSERT INTO dbo.CompanyServices(serviceid)
   SELECT A.n AS serviceid
   FROM dbo.GetNums(1, @numservices) AS A;

 WITH C AS
 (
   SELECT S.n AS serviceid,
     DATEADD(second, -L.n * @validinterval + CHECKSUM(NEWID()) % (@normdifferential + 1), @enddate) AS logtime,
     RAND(CHECKSUM(NEWID())) AS rnd
   FROM dbo.GetNums(1, @numservices) AS S
     CROSS JOIN dbo.GetNums(1, @logsperservice) AS L
 )
 INSERT INTO dbo.EventLog WITH (TABLOCK) (serviceid, logtime)
   SELECT serviceid, logtime
   FROM C
   WHERE rnd > @percentmissing;

The outputs that I’ll provide for the steps of my solutions will assume the small sets of sample data, and the performance numbers that I’ll provide will assume the large sets.

All of the solutions that I’ll present benefit from the following index, whose keys match the partitioning column (serviceid) and ordering columns (logtime, logid) used by the solutions’ window functions:

CREATE INDEX idx_sid_ltm_lid ON dbo.EventLog(serviceid, logtime, logid);
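
If you want an easy way to check a candidate solution against the expected result set for the small sample data, here’s a minimal sketch: load the expected rows into a temp table and compare both ways with EXCEPT (the #Expected and #Actual temp table names are my own, not part of Adam’s challenge):

 IF OBJECT_ID(N'tempdb..#Expected') IS NOT NULL DROP TABLE #Expected;

 CREATE TABLE #Expected
 (
   serviceid INT          NOT NULL,
   starttime DATETIME2(0) NOT NULL,
   endtime   DATETIME2(0) NOT NULL
 );

 -- Expected islands for the small sample data
 INSERT INTO #Expected(serviceid, starttime, endtime) VALUES
   (1, '20180912 08:00:00', '20180912 08:03:00'),
   (1, '20180912 08:05:00', '20180912 08:06:02'),
   (2, '20180912 08:00:02', '20180912 08:06:01'),
   (3, '20180912 08:00:01', '20180912 08:00:01'),
   (3, '20180912 08:03:01', '20180912 08:04:02'),
   (3, '20180912 08:06:00', '20180912 08:06:00');

 -- Load your solution's output into a #Actual table with the same three
 -- columns; both of the following EXCEPT queries should then return no rows:
 -- SELECT serviceid, starttime, endtime FROM #Expected
 -- EXCEPT
 -- SELECT serviceid, starttime, endtime FROM #Actual;
 --
 -- SELECT serviceid, starttime, endtime FROM #Actual
 -- EXCEPT
 -- SELECT serviceid, starttime, endtime FROM #Expected;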

Good luck!

Solution 1 for SQL Server 2012+

Before I cover a solution that is compatible with pre-SQL Server 2012 environments, I’ll cover one that does require a minimum of SQL Server 2012. I’ll call it Solution 1.

The first step in the solution is to compute a flag called isstart that is 0 if the event does not start a new island, and 1 otherwise. This can be achieved by using the LAG function to obtain the log time of the previous event and checking if the time difference in seconds between the previous and current events is less than or equal to the allowed gap. Here’s the code implementing this step:

 DECLARE @allowedgap AS INT = 66; -- in seconds

 SELECT *,
   CASE
     WHEN DATEDIFF(second,
            LAG(logtime) OVER(PARTITION BY serviceid ORDER BY logtime, logid),
            logtime) <= @allowedgap THEN 0
     ELSE 1
   END AS isstart
 FROM dbo.EventLog;

This code generates the following output:

 logid       serviceid   logtime                     isstart
 ----------- ----------- --------------------------- -----------
 1           1           2018-09-12 08:00:00         1
 2           1           2018-09-12 08:01:01         0
 3           1           2018-09-12 08:01:59         0
 4           1           2018-09-12 08:03:00         0
 5           1           2018-09-12 08:05:00         1
 6           1           2018-09-12 08:06:02         0
 7           2           2018-09-12 08:00:02         1
 8           2           2018-09-12 08:01:03         0
 9           2           2018-09-12 08:02:01         0
 10          2           2018-09-12 08:03:00         0
 11          2           2018-09-12 08:03:59         0
 12          2           2018-09-12 08:05:01         0
 13          2           2018-09-12 08:06:01         0
 14          3           2018-09-12 08:00:01         1
 15          3           2018-09-12 08:03:01         1
 16          3           2018-09-12 08:04:02         0
 17          3           2018-09-12 08:06:00         1

Next, a simple running total of the isstart flag produces an island identifier (I’ll call it grp). Here’s the code implementing this step:

 DECLARE @allowedgap AS INT = 66;

 WITH C1 AS
 (
   SELECT *,
     CASE
       WHEN DATEDIFF(second,
              LAG(logtime) OVER(PARTITION BY serviceid ORDER BY logtime, logid),
              logtime) <= @allowedgap THEN 0
       ELSE 1
     END AS isstart
   FROM dbo.EventLog
 )
 SELECT *,
   SUM(isstart) OVER(PARTITION BY serviceid ORDER BY logtime, logid
                     ROWS UNBOUNDED PRECEDING) AS grp
 FROM C1;

This code generates the following output:

 logid       serviceid   logtime                     isstart     grp
 ----------- ----------- --------------------------- ----------- -----------
 1           1           2018-09-12 08:00:00         1           1
 2           1           2018-09-12 08:01:01         0           1
 3           1           2018-09-12 08:01:59         0           1
 4           1           2018-09-12 08:03:00         0           1
 5           1           2018-09-12 08:05:00         1           2
 6           1           2018-09-12 08:06:02         0           2
 7           2           2018-09-12 08:00:02         1           1
 8           2           2018-09-12 08:01:03         0           1
 9           2           2018-09-12 08:02:01         0           1
 10          2           2018-09-12 08:03:00         0           1
 11          2           2018-09-12 08:03:59         0           1
 12          2           2018-09-12 08:05:01         0           1
 13          2           2018-09-12 08:06:01         0           1
 14          3           2018-09-12 08:00:01         1           1
 15          3           2018-09-12 08:03:01         1           2
 16          3           2018-09-12 08:04:02         0           2
 17          3           2018-09-12 08:06:00         1           3

Lastly, you group the rows by service ID and the island identifier and return the minimum and maximum log times as the start time and end time of each island. Here’s the complete solution:

 DECLARE @allowedgap AS INT = 66;
 WITH C1 AS
 (
   SELECT *,
     CASE
       WHEN DATEDIFF(second,
              LAG(logtime) OVER(PARTITION BY serviceid ORDER BY logtime, logid),
              logtime) <= @allowedgap THEN 0
       ELSE 1
     END AS isstart
   FROM dbo.EventLog
 ),
 C2 AS
 (
   SELECT *,
     SUM(isstart) OVER(PARTITION BY serviceid ORDER BY logtime, logid
                       ROWS UNBOUNDED PRECEDING) AS grp
   FROM C1
 )
 SELECT serviceid, MIN(logtime) AS starttime, MAX(logtime) AS endtime
 FROM C2
 GROUP BY serviceid, grp;

This solution took 41 seconds to complete on my system, and produced the plan shown in Figure 1.

Figure 1: Plan for Solution 1

As you can see, both window functions are computed based on index order, without the need for explicit sorting.

If you are using SQL Server 2016 or later, you can use the trick I cover here to enable the batch mode Window Aggregate operator by creating an empty filtered columnstore index, like so:

 CREATE NONCLUSTERED COLUMNSTORE INDEX idx_cs 
  ON dbo.EventLog(logid) WHERE logid = -1 AND logid = -2;

The same solution now takes only 5 seconds to complete on my system, producing the plan shown in Figure 2.

Figure 2: Plan for Solution 1 using the batch mode Window Aggregate operator

This is all great, but as mentioned, Adam was looking for a solution that can run on pre-2012 environments.

Before you continue, make sure you drop the columnstore index for cleanup:

 DROP INDEX idx_cs ON dbo.EventLog;

Solution 2 for pre-SQL Server 2012 environments

Unfortunately, prior to SQL Server 2012, we didn’t have support for offset window functions like LAG, nor did we have support for computing running totals with window aggregate functions with a frame. This means that you’ll need to work much harder in order to come up with a reasonable solution.

The trick I used is to turn each log entry into an artificial interval whose start time is the entry’s log time and whose end time is the entry’s log time plus the allowed gap. You can then treat the task as a classic interval packing task.
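
To see why this works, consider service 3 in the small sample data. Its log entries at 08:00:01, 08:03:01, 08:04:02, and 08:06:00 become the intervals [08:00:01, 08:01:07], [08:03:01, 08:04:07], [08:04:02, 08:05:08], and [08:06:00, 08:07:06]. The second and third intervals overlap and therefore pack into [08:03:01, 08:05:08], whereas the first and last ones stand alone. Subtracting the 66-second allowed gap from each packed end time yields exactly the expected islands for service 3: (08:00:01, 08:00:01), (08:03:01, 08:04:02), and (08:06:00, 08:06:00).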

The first step in the solution computes the artificial interval delimiters (s and e), as well as a row number per log entry (counteach), which, after the unpivoting in the next step, will mark the position of each event among the events of its own kind. Here’s the code implementing this step:

 DECLARE @allowedgap AS INT = 66;

 SELECT logid, serviceid,
   logtime AS s, -- important, 's' > 'e', for later ordering
   DATEADD(second, @allowedgap, logtime) AS e,
   ROW_NUMBER() OVER(PARTITION BY serviceid ORDER BY logtime, logid) AS counteach
 FROM dbo.EventLog;

This code generates the following output:

 logid  serviceid  s                    e                    counteach
 ------ ---------- -------------------- -------------------- ----------
 1      1          2018-09-12 08:00:00  2018-09-12 08:01:06  1
 2      1          2018-09-12 08:01:01  2018-09-12 08:02:07  2
 3      1          2018-09-12 08:01:59  2018-09-12 08:03:05  3
 4      1          2018-09-12 08:03:00  2018-09-12 08:04:06  4
 5      1          2018-09-12 08:05:00  2018-09-12 08:06:06  5
 6      1          2018-09-12 08:06:02  2018-09-12 08:07:08  6
 7      2          2018-09-12 08:00:02  2018-09-12 08:01:08  1
 8      2          2018-09-12 08:01:03  2018-09-12 08:02:09  2
 9      2          2018-09-12 08:02:01  2018-09-12 08:03:07  3
 10     2          2018-09-12 08:03:00  2018-09-12 08:04:06  4
 11     2          2018-09-12 08:03:59  2018-09-12 08:05:05  5
 12     2          2018-09-12 08:05:01  2018-09-12 08:06:07  6
 13     2          2018-09-12 08:06:01  2018-09-12 08:07:07  7
 14     3          2018-09-12 08:00:01  2018-09-12 08:01:07  1
 15     3          2018-09-12 08:03:01  2018-09-12 08:04:07  2
 16     3          2018-09-12 08:04:02  2018-09-12 08:05:08  3
 17     3          2018-09-12 08:06:00  2018-09-12 08:07:06  4

The next step is to unpivot the intervals into a chronological sequence of start and end events, identified as event types 's' and 'e', respectively. Note that the choice of the letters s and e is important ('s' > 'e'). This step computes row numbers marking the correct chronological position of each event among the combined, interleaved events of both kinds (countboth). In case one interval ends exactly where another starts, positioning the start event before the end event (which the ORDER BY logtime, eventtype DESC, logid clause achieves) ensures that the two get packed together. Here’s the code implementing this step:

 DECLARE @allowedgap AS INT = 66;

 WITH C1 AS
 (
   SELECT logid, serviceid,
     logtime AS s, -- important, 's' > 'e', for later ordering
     DATEADD(second, @allowedgap, logtime) AS e,
     ROW_NUMBER() OVER(PARTITION BY serviceid ORDER BY logtime, logid) AS counteach
   FROM dbo.EventLog
 )
 SELECT logid, serviceid, logtime, eventtype, counteach,
   ROW_NUMBER() OVER(PARTITION BY serviceid ORDER BY logtime, eventtype DESC, logid) AS countboth
 FROM C1
   UNPIVOT(logtime FOR eventtype IN (s, e)) AS U;

This code generates the following output:

 logid  serviceid  logtime              eventtype  counteach  countboth
 ------ ---------- -------------------- ---------- ---------- ----------
 1      1          2018-09-12 08:00:00  s          1          1
 2      1          2018-09-12 08:01:01  s          2          2
 1      1          2018-09-12 08:01:06  e          1          3
 3      1          2018-09-12 08:01:59  s          3          4
 2      1          2018-09-12 08:02:07  e          2          5
 4      1          2018-09-12 08:03:00  s          4          6
 3      1          2018-09-12 08:03:05  e          3          7
 4      1          2018-09-12 08:04:06  e          4          8
 5      1          2018-09-12 08:05:00  s          5          9
 6      1          2018-09-12 08:06:02  s          6          10
 5      1          2018-09-12 08:06:06  e          5          11
 6      1          2018-09-12 08:07:08  e          6          12
 7      2          2018-09-12 08:00:02  s          1          1
 8      2          2018-09-12 08:01:03  s          2          2
 7      2          2018-09-12 08:01:08  e          1          3
 9      2          2018-09-12 08:02:01  s          3          4
 8      2          2018-09-12 08:02:09  e          2          5
 10     2          2018-09-12 08:03:00  s          4          6
 9      2          2018-09-12 08:03:07  e          3          7
 11     2          2018-09-12 08:03:59  s          5          8
 10     2          2018-09-12 08:04:06  e          4          9
 12     2          2018-09-12 08:05:01  s          6          10
 11     2          2018-09-12 08:05:05  e          5          11
 13     2          2018-09-12 08:06:01  s          7          12
 12     2          2018-09-12 08:06:07  e          6          13
 13     2          2018-09-12 08:07:07  e          7          14
 14     3          2018-09-12 08:00:01  s          1          1
 14     3          2018-09-12 08:01:07  e          1          2
 15     3          2018-09-12 08:03:01  s          2          3
 16     3          2018-09-12 08:04:02  s          3          4
 15     3          2018-09-12 08:04:07  e          2          5
 16     3          2018-09-12 08:05:08  e          3          6
 17     3          2018-09-12 08:06:00  s          4          7
 17     3          2018-09-12 08:07:06  e          4          8

As mentioned, counteach marks the position of the event among only the events of the same kind, and countboth marks the position of the event among the combined, interleaved, events of both kinds.
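
For instance, in the output above, the end event of logid 1 for service 1 (at 08:01:06) has counteach = 1, since it’s the first end event for that service, but countboth = 3, since two start events precede it in the interleaved sequence.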

The magic is then handled by the next step—computing the count of active intervals after every event based on counteach and countboth. The number of active intervals is the number of start events that happened so far minus the number of end events that happened so far. For start events, counteach tells you how many start events happened so far, and you can figure out how many ended so far by subtracting counteach from countboth. So, the complete expression telling you how many intervals are active is then:

 counteach - (countboth - counteach)

For end events, counteach tells you how many end events happened so far, and you can figure out how many started so far by subtracting counteach from countboth. So, the complete expression telling you how many intervals are active is then:

 (countboth - counteach) - counteach

Using the following CASE expression, you compute the countactive column based on the event type:

 CASE
   WHEN eventtype = 's' THEN
     counteach - (countboth - counteach)
   WHEN eventtype = 'e' THEN
     (countboth - counteach) - counteach
 END
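
As a quick check against the previous step’s output, take service 1: the end event of logid 4 has counteach = 4 and countboth = 8, so countactive = (8 - 4) - 4 = 0, meaning no interval remains active and a packed interval just ended; the start event of logid 5 has counteach = 5 and countboth = 9, so countactive = 5 - (9 - 5) = 1, meaning it opens a new packed interval.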

In the same step you filter only events representing starts and ends of packed intervals. Starts of packed intervals have a type 's' and a countactive 1. Ends of packed intervals have a type 'e' and a countactive 0.

After filtering, you’re left with pairs of start-end events of packed intervals, but each pair is split into two rows: one for the start event and another for the end event. Therefore, the same step computes a pair identifier (grp) using row numbers, with the formula (rownum - 1) / 2 + 1.
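
With integer division, the filtered rows numbered 1 and 2 within each service get (1 - 1) / 2 + 1 = 1 and (2 - 1) / 2 + 1 = 1, rows 3 and 4 get 2, and so on, so each surviving start event is paired with the end event that immediately follows it.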

Here’s the code implementing this step:

 DECLARE @allowedgap AS INT = 66;

 WITH C1 AS
 (
   SELECT logid, serviceid,
     logtime AS s, -- important, 's' > 'e', for later ordering
     DATEADD(second, @allowedgap, logtime) AS e,
     ROW_NUMBER() OVER(PARTITION BY serviceid ORDER BY logtime, logid) AS counteach
   FROM dbo.EventLog
 ),
 C2 AS
 (
   SELECT logid, serviceid, logtime, eventtype, counteach,
     ROW_NUMBER() OVER(PARTITION BY serviceid ORDER BY logtime, eventtype DESC, logid) AS countboth
   FROM C1
     UNPIVOT(logtime FOR eventtype IN (s, e)) AS U
 )
 SELECT serviceid, eventtype, logtime,
   (ROW_NUMBER() OVER(PARTITION BY serviceid ORDER BY logtime, eventtype DESC, logid) - 1) / 2 + 1 AS grp
 FROM C2
   CROSS APPLY ( VALUES( CASE
                           WHEN eventtype = 's' THEN
                             counteach - (countboth - counteach)
                           WHEN eventtype = 'e' THEN
                             (countboth - counteach) - counteach
                         END ) ) AS A(countactive)
 WHERE (eventtype = 's' AND countactive = 1)
    OR (eventtype = 'e' AND countactive = 0);

This code generates the following output:

 serviceid   eventtype  logtime              grp
 ----------- ---------- -------------------- ----
 1           s          2018-09-12 08:00:00  1
 1           e          2018-09-12 08:04:06  1
 1           s          2018-09-12 08:05:00  2
 1           e          2018-09-12 08:07:08  2
 2           s          2018-09-12 08:00:02  1
 2           e          2018-09-12 08:07:07  1
 3           s          2018-09-12 08:00:01  1
 3           e          2018-09-12 08:01:07  1
 3           s          2018-09-12 08:03:01  2
 3           e          2018-09-12 08:05:08  2
 3           s          2018-09-12 08:06:00  3
 3           e          2018-09-12 08:07:06  3

The last step pivots each pair of events into a single row per island, and subtracts the allowed gap from the end time to recover the log time of the island’s last event. Here’s the complete solution’s code:

 DECLARE @allowedgap AS INT = 66;

 WITH C1 AS
 (
   SELECT logid, serviceid,
     logtime AS s, -- important, 's' > 'e', for later ordering
     DATEADD(second, @allowedgap, logtime) AS e,
     ROW_NUMBER() OVER(PARTITION BY serviceid ORDER BY logtime, logid) AS counteach
   FROM dbo.EventLog
 ),
 C2 AS
 (
   SELECT logid, serviceid, logtime, eventtype, counteach,
     ROW_NUMBER() OVER(PARTITION BY serviceid ORDER BY logtime, eventtype DESC, logid) AS countboth
   FROM C1
     UNPIVOT(logtime FOR eventtype IN (s, e)) AS U
 ),
 C3 AS
 (
   SELECT serviceid, eventtype, logtime,
     (ROW_NUMBER() OVER(PARTITION BY serviceid ORDER BY logtime, eventtype DESC, logid) - 1) / 2 + 1 AS grp
   FROM C2
     CROSS APPLY ( VALUES( CASE
                             WHEN eventtype = 's' THEN
                               counteach - (countboth - counteach)
                             WHEN eventtype = 'e' THEN
                               (countboth - counteach) - counteach
                           END ) ) AS A(countactive)
   WHERE (eventtype = 's' AND countactive = 1)
      OR (eventtype = 'e' AND countactive = 0)
 )
 SELECT serviceid, s AS starttime, DATEADD(second, -@allowedgap, e) AS endtime
 FROM C3
   PIVOT( MAX(logtime) FOR eventtype IN (s, e) ) AS P;

This solution took 43 seconds to complete on my system and generated the plan shown in Figure 3.

Figure 3: Plan for Solution 2

As you can see, the first row number calculation is computed based on index order, but the next two do involve explicit sorting. Still, the performance is not that bad considering there are about 10,000,000 rows involved.

Even though the point of this solution is that it can run in a pre-SQL Server 2012 environment, just for fun I tested its performance after creating the filtered columnstore index again to see how it does with batch processing enabled:

 CREATE NONCLUSTERED COLUMNSTORE INDEX idx_cs 
 ON dbo.EventLog(logid) WHERE logid = -1 AND logid = -2;

With batch processing enabled, this solution took 29 seconds to finish on my system, producing the plan shown in Figure 4.

Figure 4: Plan for Solution 2 with batch processing enabled

Conclusion

It’s natural that the more limited your environment is, the more challenging it becomes to solve querying tasks. Adam’s special islands challenge is much easier to solve on newer versions of SQL Server than on older ones, but working under such restrictions forces you to come up with more creative techniques. So, as an exercise to improve your querying skills, you can tackle challenges that you’re already familiar with while intentionally imposing certain restrictions. You never know what kinds of interesting ideas you might stumble into!