Slow select distinct query on postgres

Tags:

postgresql

I'm running the following two queries quite frequently on a table that essentially gathers up logging information. Both select distinct values from a huge number of rows, but each column has fewer than 10 distinct values.

Here are the plans for both "distinct" queries the page runs:

marchena=> explain select distinct auditrecor0_.bundle_id as col_0_0_ from audit_records auditrecor0_;
                                          QUERY PLAN                                          
----------------------------------------------------------------------------------------------
 HashAggregate  (cost=1070734.05..1070734.11 rows=6 width=21)
   ->  Seq Scan on audit_records auditrecor0_  (cost=0.00..1023050.24 rows=19073524 width=21)
(2 rows)

marchena=> explain select distinct auditrecor0_.server_name as col_0_0_ from audit_records auditrecor0_;
                                          QUERY PLAN                                          
----------------------------------------------------------------------------------------------
 HashAggregate  (cost=1070735.34..1070735.39 rows=5 width=13)
   ->  Seq Scan on audit_records auditrecor0_  (cost=0.00..1023051.47 rows=19073547 width=13)
(2 rows)

Both do sequential scans of the table. However, if I turn off enable_seqscan (despite the name, this only disables sequential scans where an index could be used instead), the queries use the index but are even slower:

marchena=> set enable_seqscan = off;
SET
marchena=> explain select distinct auditrecor0_.bundle_id as col_0_0_ from audit_records auditrecor0_;
                                                       QUERY PLAN                                                       
------------------------------------------------------------------------------------------------------------------------
 Unique  (cost=0.00..19613740.62 rows=6 width=21)
   ->  Index Scan using audit_bundle_idx on audit_records auditrecor0_  (cost=0.00..19566056.69 rows=19073570 width=21)
(2 rows)

marchena=> explain select distinct auditrecor0_.server_name as col_0_0_ from audit_records auditrecor0_;
                                                       QUERY PLAN                                                       
------------------------------------------------------------------------------------------------------------------------
 Unique  (cost=0.00..45851449.96 rows=5 width=13)
   ->  Index Scan using audit_server_idx on audit_records auditrecor0_  (cost=0.00..45803766.04 rows=19073570 width=13)
(2 rows)

Both the bundle_id and server_name columns have btree indexes on them. Should I be using a different type of index to make selecting distinct values fast?

asked May 17 '11 by Sindri Traustason

2 Answers

-- Build a test case: 5 million rows with roughly 51 distinct values.
BEGIN;
CREATE TABLE dist ( x INTEGER NOT NULL );
INSERT INTO dist SELECT random()*50 FROM generate_series( 1, 5000000 );
COMMIT;
CREATE INDEX dist_x ON dist(x);


VACUUM ANALYZE dist;
EXPLAIN ANALYZE SELECT DISTINCT x FROM dist;

HashAggregate  (cost=84624.00..84624.51 rows=51 width=4) (actual time=1840.141..1840.153 rows=51 loops=1)
   ->  Seq Scan on dist  (cost=0.00..72124.00 rows=5000000 width=4) (actual time=0.003..573.819 rows=5000000 loops=1)
 Total runtime: 1848.060 ms

PostgreSQL can't (yet) use an index to skip over identical values for DISTINCT (a "loose index scan"), but you can emulate one like this:

-- Emulate a loose index scan: repeatedly jump to the next distinct value
-- via the index on dist(x) instead of reading every row.
CREATE OR REPLACE FUNCTION distinct_skip_foo()
RETURNS SETOF INTEGER
LANGUAGE plpgsql STABLE
AS $$
DECLARE
    _x  INTEGER;
BEGIN
    _x := min(x) FROM dist;                   -- smallest value, found via the index
    WHILE _x IS NOT NULL LOOP
        RETURN NEXT _x;
        _x := min(x) FROM dist WHERE x > _x;  -- next distinct value, again via the index
    END LOOP;
END;
$$ ;

EXPLAIN ANALYZE SELECT * FROM distinct_skip_foo();
Function Scan on distinct_skip_foo  (cost=0.00..260.00 rows=1000 width=4) (actual time=1.629..1.635 rows=51 loops=1)
 Total runtime: 1.652 ms
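
If you prefer to stay in plain SQL, the same skip-scan idea can be written as a recursive CTE (PostgreSQL 8.4+); this is a sketch assuming the dist table and index created above:

WITH RECURSIVE t AS (
    (SELECT min(x) AS x FROM dist)                      -- smallest value
    UNION ALL
    SELECT (SELECT min(x) FROM dist WHERE x > t.x)      -- next distinct value
    FROM t
    WHERE t.x IS NOT NULL                               -- stop once no larger value exists
)
SELECT x FROM t WHERE x IS NOT NULL;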

answered by bobflux

You're selecting distinct values from the whole table, which automatically leads to a seq scan. With millions of rows, that is necessarily slow.

There's a trick to get the distinct values faster, but it only works when the data has a known (and reasonably small) set of possible values. For instance, I take it that your bundle_id references some kind of bundles table, which is much smaller. That means you can write:

select bundles.bundle_id
from bundles
where exists (
      select 1 from audit_records
      where audit_records.bundle_id = bundles.bundle_id
      );

This should lead to a nested loop / seq scan on bundles -> index scan on audit_records using the index on bundle_id.
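
The same pattern would apply to server_name if you have (or maintain) a small lookup table of server names; a sketch, assuming a hypothetical servers table with one row per possible server_name:

-- servers is a hypothetical small lookup table, one row per known server_name
select servers.server_name
from servers
where exists (
      select 1 from audit_records
      where audit_records.server_name = servers.server_name
      );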

answered by Denis de Bernardy