
Postgres: slow GROUP BY query with max

I am using Postgres 9.1 and I have a table with about 3.5M rows containing an eventtype (varchar) and an eventtime (timestamp), along with some other fields. There are only about 20 distinct eventtypes, and the event times span about 4 years.

I want to get the last timestamp of each event type. If I run a query like:

select eventtype, max(eventtime)
from allevents
group by eventtype

it takes around 20 seconds. Selecting the distinct eventtypes is equally slow. The query plan shows a full sequential scan of the table, so it's not surprising that it is slow.

EXPLAIN ANALYZE for the above query gives:

HashAggregate  (cost=84591.47..84591.68 rows=21 width=21) (actual time=20918.131..20918.141 rows=21 loops=1)
  ->  Seq Scan on allevents  (cost=0.00..66117.98 rows=3694698 width=21) (actual time=0.021..4831.793 rows=3694392 loops=1)
Total runtime: 20918.204 ms

If I add a WHERE clause to select a specific eventtype, it takes anywhere from 40 ms to 150 ms, which is at least decent.
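
The filtered query is essentially the following (a sketch; the 'TEST_EVENT' value is the one that appears in the plan below):

select eventtype, max(eventtime)
from allevents
where eventtype = 'TEST_EVENT'
group by eventtype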

Query plan when selecting a specific eventtype:

GroupAggregate  (cost=343.87..24942.71 rows=1 width=21) (actual time=98.397..98.397 rows=1 loops=1)
  ->  Bitmap Heap Scan on allevents  (cost=343.87..24871.07 rows=14325 width=21) (actual time=6.820..89.610 rows=19736 loops=1)
        Recheck Cond: ((eventtype)::text = 'TEST_EVENT'::text)
        ->  Bitmap Index Scan on allevents_idx2  (cost=0.00..340.28 rows=14325 width=0) (actual time=6.121..6.121 rows=19736 loops=1)
              Index Cond: ((eventtype)::text = 'TEST_EVENT'::text)
Total runtime: 98.482 ms

The primary key is (eventtype, eventtime). I also have the following indexes:

allevents_idx (eventtime DESC, eventtype)
allevents_idx2 (eventtype)
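
In DDL form, the setup is roughly this (the varchar length and the other columns are omitted here):

create table allevents (
    eventtype varchar not null,
    eventtime timestamp not null,
    -- ... other fields ...
    primary key (eventtype, eventtime)
);

create index allevents_idx  on allevents (eventtime desc, eventtype);
create index allevents_idx2 on allevents (eventtype);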

How can I speed up the query?

The query plan for the correlated subquery suggested by @denis below, with 14 manually entered values, gives:

Function Scan on unnest val  (cost=0.00..185.40 rows=100 width=32) (actual time=0.121..8983.134 rows=14 loops=1)
   SubPlan 2
     ->  Result  (cost=1.83..1.84 rows=1 width=0) (actual time=641.644..641.645 rows=1 loops=14)
          InitPlan 1 (returns $1)
             ->  Limit  (cost=0.00..1.83 rows=1 width=8) (actual time=641.640..641.641 rows=1 loops=14)
                  ->  Index Scan using allevents_idx on allevents  (cost=0.00..322672.36 rows=175938 width=8) (actual time=641.638..641.638 rows=1 loops=14)
                         Index Cond: ((eventtime IS NOT NULL) AND ((eventtype)::text = val.val))
Total runtime: 8983.203 ms

Using the recursive query suggested by @jjanes, the query runs between 4 and 5 seconds with the following plan:

CTE Scan on t  (cost=260.32..448.63 rows=101 width=32) (actual time=0.146..4325.598 rows=22 loops=1)
  CTE t
    ->  Recursive Union  (cost=2.52..260.32 rows=101 width=32) (actual time=0.075..1.449 rows=22 loops=1)
          ->  Result  (cost=2.52..2.53 rows=1 width=0) (actual time=0.074..0.074 rows=1 loops=1)
            InitPlan 1 (returns $1)
                  ->  Limit  (cost=0.00..2.52 rows=1 width=13) (actual time=0.070..0.071 rows=1 loops=1)
                        ->  Index Scan using allevents_idx2 on allevents  (cost=0.00..9315751.37 rows=3696851 width=13) (actual time=0.070..0.070 rows=1 loops=1)
                              Index Cond: ((eventtype)::text IS NOT NULL)
          ->  WorkTable Scan on t  (cost=0.00..25.58 rows=10 width=32) (actual time=0.059..0.060 rows=1 loops=22)
                Filter: (eventtype IS NOT NULL)
                SubPlan 3
                  ->  Result  (cost=2.53..2.54 rows=1 width=0) (actual time=0.059..0.059 rows=1 loops=21)
                        InitPlan 2 (returns $3)
                          ->  Limit  (cost=0.00..2.53 rows=1 width=13) (actual time=0.057..0.057 rows=1 loops=21)
                                ->  Index Scan using allevents_idx2 on allevents  (cost=0.00..3114852.66 rows=1232284 width=13) (actual time=0.055..0.055 rows=1 loops=21)
                                      Index Cond: (((eventtype)::text IS NOT NULL) AND ((eventtype)::text > t.eventtype))
  SubPlan 6
    ->  Result  (cost=1.83..1.84 rows=1 width=0) (actual time=196.549..196.549 rows=1 loops=22)
          InitPlan 5 (returns $6)
            ->  Limit  (cost=0.00..1.83 rows=1 width=8) (actual time=196.546..196.546 rows=1 loops=22)
                  ->  Index Scan using allevents_idx on allevents  (cost=0.00..322946.21 rows=176041 width=8) (actual time=196.544..196.544 rows=1 loops=22)
                        Index Cond: ((eventtime IS NOT NULL) AND ((eventtype)::text = t.eventtype))
Total runtime: 4325.694 ms
asked Jan 12 '23 by snorock

2 Answers

What you need is a "skip scan" or "loose index scan". PostgreSQL's planner does not yet implement those automatically, but you can trick it into doing one with a recursive query.

WITH RECURSIVE t AS (
    SELECT min(eventtype) AS eventtype FROM allevents
    UNION ALL
    SELECT (SELECT min(eventtype) FROM allevents WHERE eventtype > t.eventtype)
    FROM t WHERE t.eventtype IS NOT NULL
)
SELECT eventtype, (SELECT max(eventtime) FROM allevents WHERE eventtype = t.eventtype) FROM t;
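
Note that the recursion terminates by producing one trailing NULL row (the plan in the question shows 22 rows for the 21 event types). If that extra row is unwanted, a small variation of the same query filters it out in the outer SELECT:

WITH RECURSIVE t AS (
    SELECT min(eventtype) AS eventtype FROM allevents
    UNION ALL
    SELECT (SELECT min(eventtype) FROM allevents WHERE eventtype > t.eventtype)
    FROM t WHERE t.eventtype IS NOT NULL
)
SELECT eventtype,
       (SELECT max(eventtime) FROM allevents WHERE eventtype = t.eventtype) AS eventtime
FROM t
WHERE eventtype IS NOT NULL;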

There may be a way to collapse the max(eventtime) into the recursive query rather than doing it outside that query, but if so I have not hit upon it.

This needs an index on (eventtype, eventtime) in order to be efficient. You can have it be DESC on the eventtime, but that is not necessary. This is efficient only if eventtype has only a few distinct values (21 of them, in your case).
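
For example (a sketch; the index name is arbitrary, and in this question the primary key on (eventtype, eventtime) already provides such an index, so a separate one would only be needed if it did not):

create index allevents_type_time_idx on allevents (eventtype, eventtime);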

answered Jan 19 '23 by jjanes

Based on the question, you already have the relevant index.

If upgrading to Postgres 9.3 or adding an index on (eventtype, eventtime desc) doesn't make a difference, this is a case where rewriting the query to use a correlated subquery works very well, provided you can enumerate all of the event types manually:

select val as eventtype,
       (select max(eventtime)
        from allevents
        where allevents.eventtype = val
        ) as eventtime
from unnest('{type1,type2,…}'::text[]) as val;
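
If maintaining that literal list by hand is a concern, one possible variation (assuming you keep a small lookup table of the distinct types, here called eventtypes) drives the same correlated subquery from that table instead:

select et.eventtype,
       (select max(eventtime)
        from allevents
        where allevents.eventtype = et.eventtype) as eventtime
from eventtypes et;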

Here are the plans I get when running similar queries:

denis=# select version();
                                                              version                                                              
-----------------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 9.3.1 on x86_64-apple-darwin11.4.2, compiled by Apple LLVM version 4.2 (clang-425.0.28) (based on LLVM 3.2svn), 64-bit
(1 row)

Test data:

denis=# create table test (evttype int, evttime timestamp, primary key (evttype, evttime));
CREATE TABLE
denis=# insert into test (evttype, evttime) select i, now() + (i % 3) * interval '1 min' - j * interval '1 sec' from generate_series(1,10) i, generate_series(1,10000) j;
INSERT 0 100000
denis=# create index on test (evttime, evttype);
CREATE INDEX
denis=# vacuum analyze test;
VACUUM

First query:

denis=# explain analyze select evttype, max(evttime) from test group by evttype;
                                                     QUERY PLAN
-------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=2041.00..2041.10 rows=10 width=12) (actual time=54.983..54.987 rows=10 loops=1)
   ->  Seq Scan on test  (cost=0.00..1541.00 rows=100000 width=12) (actual time=0.009..15.954 rows=100000 loops=1)
 Total runtime: 55.045 ms
(3 rows)

Second query:

denis=# explain analyze select val as evttype, (select max(evttime) from test where test.evttype = val) as evttime from unnest('{1,2,3,4,5,6,7,8,9,10}'::int[]) val;
                                                                        QUERY PLAN                                                                         
-----------------------------------------------------------------------------------------------------------------------------------------------------------
 Function Scan on unnest val  (cost=0.00..48.39 rows=100 width=4) (actual time=0.086..0.292 rows=10 loops=1)
   SubPlan 2
     ->  Result  (cost=0.46..0.47 rows=1 width=0) (actual time=0.024..0.024 rows=1 loops=10)
           InitPlan 1 (returns $1)
             ->  Limit  (cost=0.42..0.46 rows=1 width=8) (actual time=0.021..0.021 rows=1 loops=10)
                   ->  Index Only Scan Backward using test_pkey on test  (cost=0.42..464.42 rows=10000 width=8) (actual time=0.019..0.019 rows=1 loops=10)
                         Index Cond: ((evttype = val.val) AND (evttime IS NOT NULL))
                         Heap Fetches: 0
 Total runtime: 0.370 ms
(9 rows)
answered Jan 19 '23 by Denis de Bernardy