
Random order by performance

Tags: tsql

What is the best way to get the top n rows in random order?
I use a query like:

Select top(10) field1,field2 .. fieldn
from Table1
order by checksum(newid())

The problem with the above query is that it keeps getting slower as the table grows: it always does a full clustered index scan to find the top 10 rows in random order.

Is there a better way to do this?
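
The full scan is visible in the I/O statistics; a minimal way to confirm it, using the placeholder names from the query above:

set statistics io, time on;

Select top(10) field1, field2
from Table1
order by checksum(newid())
-- the Messages tab then reports a scan count and logical reads covering the whole clustered index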

asked Jul 13 '11 by kumar

2 Answers

I have tested this and got better performance by changing the query.

The DDL for the table I used in my tests:

CREATE TABLE [dbo].[TestTable]
(
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [Col1] [nvarchar](100) NOT NULL,
    [Col2] [nvarchar](38) NOT NULL,
    [Col3] [datetime] NULL,
    [Col4] [nvarchar](50) NULL,
    [Col5] [int] NULL,
 CONSTRAINT [PK_TestTable] PRIMARY KEY CLUSTERED 
 (
    [ID] ASC
 )
)

GO

CREATE NONCLUSTERED INDEX [IX_TestTable_Col5] ON [dbo].[TestTable] 
(
    [Col5] ASC
)

The table has 722,888 rows.
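
The answer does not show how the table was populated. A minimal sketch that fills it with random filler data (any data distribution will do for this test):

insert into dbo.TestTable (Col1, Col2, Col3, Col4, Col5)
select top (722888)
    convert(nvarchar(100), newid()),                            -- Col1: random text
    convert(nvarchar(38), newid()),                             -- Col2: random text
    dateadd(day, abs(checksum(newid()) % 3650), '20000101'),    -- Col3: random date
    convert(nvarchar(50), newid()),                             -- Col4: random text
    abs(checksum(newid()) % 100000)                             -- Col5: random int
from sys.all_objects as a
cross join sys.all_objects as b;  -- cross join just to generate enough rows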

First query:

select top 10
  T.ID,
  T.Col1,
  T.Col2,
  T.Col3,
  T.Col4,
  T.Col5
from TestTable as T
order by newid()

Statistics for first query:

SQL Server parse and compile time: 
   CPU time = 0 ms, elapsed time = 13 ms.

(10 row(s) affected)
Table 'TestTable'. Scan count 1, logical reads 12492, physical reads 14, read-ahead reads 6437, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

 SQL Server Execution Times:
   CPU time = 859 ms,  elapsed time = 1700 ms.

Execution plan for the first query: (plan image omitted)

Second query:

select 
  T.ID,
  T.Col1,
  T.Col2,
  T.Col3,
  T.Col4,
  T.Col5
from TestTable as T
  inner join (select top 10 ID
              from TestTable
              order by newid()) as C
    on T.ID = C.ID

Statistics for second query:

SQL Server parse and compile time: 
   CPU time = 125 ms, elapsed time = 183 ms.

(10 row(s) affected)
Table 'TestTable'. Scan count 1, logical reads 1291, physical reads 10, read-ahead reads 399, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

 SQL Server Execution Times:
   CPU time = 516 ms,  elapsed time = 706 ms.

Execution plan for the second query: (plan image omitted)

Summary:

The second query scans the nonclustered index on Col5 to order the rows by newid(), and then does a Clustered Index Seek 10 times to fetch the output columns.

The performance gain comes from the fact that the index on Col5 is narrower than the clustered index, so ordering all the rows by newid() requires far fewer page reads.

Thanks to Martin Smith for pointing that out.
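
For readability, the same pattern can also be written with a CTE; a sketch that the optimizer should compile to the same plan as the derived-table join above:

with RandomTen as
(
    select top 10 ID
    from TestTable
    order by newid()
)
select T.ID, T.Col1, T.Col2, T.Col3, T.Col4, T.Col5
from TestTable as T
  inner join RandomTen as C
    on T.ID = C.ID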

answered by Mikael Eriksson


One way to reduce the size of the scan is to combine TABLESAMPLE with ORDER BY newid(), selecting the random rows from a sample of the table's pages rather than scanning the entire table.

The idea is to calculate the average number of rows per page, then use TABLESAMPLE to select one random page of data for each row you want to output, and finally run the ORDER BY newid() query on just that subset. The result is slightly less random than ordering the whole table, but far more random than TABLESAMPLE alone, and it reads much less data from the table.

Unfortunately, the TABLESAMPLE clause does not accept a variable, so dynamic SQL is needed to plug in a rows value derived from the table's average record size.

-- rows per 8 KB page, derived from the average record size of table 'sample'
declare @factor int
select @factor = 8000 / avg_record_size_in_bytes
from sys.dm_db_index_physical_stats(db_id(), object_id('sample'), null, null, 'detailed')
where index_level = 0
-- sample roughly one page per requested row
declare @numRows int = 10
declare @sampledRows int = @factor * @numRows
-- TABLESAMPLE does not accept a variable, so the statement is built dynamically
declare @stmt nvarchar(max) = N'select top (@numRows) * from sample tablesample (' + convert(varchar(32), @sampledRows) + N' rows) order by checksum(newid())'
exec sp_executesql @stmt, N'@numRows int', @numRows
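
For a concrete idea of what gets executed: with @factor = 8, the generated statement would look like the following (values illustrative). Note that TABLESAMPLE picks whole pages, so the sample can occasionally contain fewer than the requested number of rows; increasing @factor adds a safety margin.

select top (10) *
from sample tablesample (80 rows)
order by checksum(newid())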
answered by stevehem