The problem is that I keep getting a sequential scan on a rather simple query for a very trivial setup. What am I doing wrong?
Relevant setting: constraint_exclusion = partition
Here's the create statement:
CREATE TABLE A (
    K int NOT NULL,
    X bigint NOT NULL,
    Date timestamp NOT NULL,
    fy smallint NOT NULL,
    fz decimal(18, 8) NOT NULL,
    fw decimal(18, 8) NOT NULL,
    fv decimal(18, 8) NULL,
    PRIMARY KEY (K, X)
) PARTITION BY LIST (K);
CREATE TABLE A_1 PARTITION OF A FOR VALUES IN (1);
CREATE TABLE A_2 PARTITION OF A FOR VALUES IN (2);
...
CREATE TABLE A_200 PARTITION OF A FOR VALUES IN (200);
CREATE TABLE A_Default PARTITION OF A DEFAULT;
CREATE INDEX IX_A_Date ON A (Date);
The query in question:
SELECT K, MIN(Date), MAX(Date)
FROM A
GROUP BY K
That always produces a sequential scan that takes several minutes, although it is clearly evident that no table data is needed at all: the Date field is indexed, and I'm just asking for the first and last leaf of its B-tree.
Originally the index was on (K, Date), and I quickly found that Postgres would not use it in any query where I expected it to be used. An index on (Date) alone did the trick for other queries, and Postgres seems to propagate the index to the partitions automatically. However, this specific simple query always goes for a sequential scan.
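For reference, the automatically created per-partition indexes can be listed from pg_indexes (the index names are whatever auto-generated names Postgres produced for your setup):

-- List the per-partition indexes that were created automatically
-- when IX_A_Date was defined on the partitioned parent.
SELECT tablename, indexname, indexdef
FROM pg_indexes
WHERE tablename LIKE 'a\_%'
ORDER BY tablename;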
Any thoughts appreciated!
UPDATE
The query plan (EXPLAIN (ANALYZE, BUFFERS)) is as follows:
Finalize GroupAggregate  (cost=4058360.99..4058412.66 rows=200 width=20) (actual time=148448.183..148448.189 rows=5 loops=1)
  Group Key: a_16.k
  Buffers: shared hit=5970 read=548034 dirtied=4851 written=1446
  ->  Gather Merge  (cost=4058360.99..4058407.66 rows=400 width=20) (actual time=148448.166..148463.953 rows=8 loops=1)
        Workers Planned: 2
        Workers Launched: 2
        Buffers: shared hit=5998 read=1919356 dirtied=4865 written=1454
        ->  Sort  (cost=4057360.97..4057361.47 rows=200 width=20) (actual time=148302.271..148302.285 rows=3 loops=3)
              Sort Key: a_16.k
              Sort Method: quicksort Memory: 25kB
              Worker 0: Sort Method: quicksort Memory: 25kB
              Worker 1: Sort Method: quicksort Memory: 25kB
              Buffers: shared hit=5998 read=1919356 dirtied=4865 written=1454
              ->  Partial HashAggregate  (cost=4057351.32..4057353.32 rows=200 width=20) (actual time=148302.199..148302.203 rows=3 loops=3)
                    Group Key: a_16.k
                    Buffers: shared hit=5984 read=1919356 dirtied=4865 written=1454
                    ->  Parallel Append  (cost=0.00..3347409.96 rows=94658849 width=12) (actual time=1.678..116664.051 rows=75662243 loops=3)
                          Buffers: shared hit=5984 read=1919356 dirtied=4865 written=1454
                          ->  Parallel Seq Scan on a_16  (cost=0.00..1302601.32 rows=42870432 width=12) (actual time=0.320..41625.766 rows=34283419 loops=3)
                                Buffers: shared hit=14 read=873883 dirtied=14 written=8
                          ->  Parallel Seq Scan on a_19  (cost=0.00..794121.94 rows=26070794 width=12) (actual time=0.603..54017.937 rows=31276617 loops=2)
                                Buffers: shared read=533414
                          ->  Parallel Seq Scan on a_20  (cost=0.00..447025.50 rows=14900850 width=12) (actual time=0.347..52866.404 rows=35762000 loops=1)
                                Buffers: shared hit=5964 read=292053 dirtied=4850 written=1446
                          ->  Parallel Seq Scan on a_18  (cost=0.00..198330.23 rows=6450422 width=12) (actual time=4.504..27197.706 rows=15481014 loops=1)
                                Buffers: shared read=133826
                          ->  Parallel Seq Scan on a_17  (cost=0.00..129272.31 rows=4308631 width=12) (actual time=3.014..18423.307 rows=10340224 loops=1)
                                Buffers: shared hit=6 read=86180 dirtied=1
                          ...
                          ->  Parallel Seq Scan on a_197  (cost=0.00..14.18 rows=418 width=12) (actual time=0.000..0.000 rows=0 loops=1)
                          ->  Parallel Seq Scan on a_198  (cost=0.00..14.18 rows=418 width=12) (actual time=0.001..0.002 rows=0 loops=1)
                          ->  Parallel Seq Scan on a_199  (cost=0.00..14.18 rows=418 width=12) (actual time=0.001..0.001 rows=0 loops=1)
                          ->  Parallel Seq Scan on a_default  (cost=0.00..14.18 rows=418 width=12) (actual time=0.001..0.002 rows=0 loops=1)
Planning Time: 16.893 ms
Execution Time: 148466.519 ms
UPDATE 2
Just to avoid future comments like “you should index on (K, Date)”: the query plan with both indexes in place is exactly the same, the analyze numbers are the same, and even the buffer hits/reads are almost the same.
Aggregate push-down into the partitions can be enabled by setting enable_partitionwise_aggregate to on.
That will probably speed up your query somewhat, because PostgreSQL doesn't have to pass so much data between the parallel workers.
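For example, to try it on the query from the question:

-- Enable partitionwise aggregation for this session, then re-check the plan.
SET enable_partitionwise_aggregate = on;

EXPLAIN (ANALYZE, BUFFERS)
SELECT K, MIN(Date), MAX(Date)
FROM A
GROUP BY K;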
But it looks like PostgreSQL isn't smart enough to figure out that it can use the index to speed up min and max for each partition, although it is smart enough to do that with a non-partitioned table.
There is no pretty way to work around that; you could resort to querying each partition:
SELECT k, min(min_date), max(max_date)
FROM (
    SELECT 1 AS k, MIN(date) AS min_date, MAX(date) AS max_date FROM a_1
    UNION ALL
    SELECT 2, MIN(date), MAX(date) FROM a_2
    UNION ALL
    ...
    SELECT 200, MIN(date), MAX(date) FROM a_200
    UNION ALL
    SELECT k, MIN(date), MAX(date) FROM a_default GROUP BY k
) AS all_a
GROUP BY k;
Yuck! There is clearly room for improvement here.
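If typing out 200 branches by hand is too painful, the statement text can at least be generated; here is a minimal sketch, assuming the partitions a_1 .. a_200 created above (a_default still needs its own GROUP BY branch, since it can hold more than one k):

-- Sketch: generate the 200 hand-written UNION ALL branches instead of
-- maintaining them manually; each branch keeps a constant k and no GROUP BY,
-- so the MIN/MAX index optimization can apply per partition.
SELECT string_agg(
         format('SELECT %s AS k, MIN(date) AS min_date, MAX(date) AS max_date FROM a_%s',
                i, i),
         E'\nUNION ALL\n') AS union_sql
FROM generate_series(1, 200) AS i;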
I dug into the code and found the reason in src/backend/optimizer/plan/planagg.c:
/*
 * preprocess_minmax_aggregates - preprocess MIN/MAX aggregates
 *
 * Check to see whether the query contains MIN/MAX aggregate functions that
 * might be optimizable via indexscans.  If it does, and all the aggregates
 * are potentially optimizable, then create a MinMaxAggPath and add it to
 * the (UPPERREL_GROUP_AGG, NULL) upperrel.
 [...]
 */
void
preprocess_minmax_aggregates(PlannerInfo *root, List *tlist)
{
    [...]

    /*
     * Reject unoptimizable cases.
     *
     * We don't handle GROUP BY or windowing, because our current
     * implementations of grouping require looking at all the rows anyway, and
     * so there's not much point in optimizing MIN/MAX.
     */
    if (parse->groupClause || list_length(parse->groupingSets) > 1 ||
        parse->hasWindowFuncs)
        return;
Basically, PostgreSQL punts when it sees a GROUP BY clause.
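That check is also why the UNION ALL workaround helps: each branch has no GROUP BY, so the optimization can kick in per partition. For example, against a single partition (a plain table with an index on (date)), the plan should show cheap index lookups rather than a sequential scan:

-- Without GROUP BY, preprocess_minmax_aggregates() is not rejected, so the
-- planner can fetch MIN and MAX from the ends of the index on (date).
EXPLAIN
SELECT MIN(date), MAX(date) FROM a_1;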