I have a table with approximately 300,000 rows that has an INT[] column. Each array contains approximately 2,000 elements. I created an index for this array column:
create index index_name ON table_name USING GIN (column_name);
Then I ran this query:
SELECT COUNT(*)
FROM table_name
WHERE column_name @> ARRAY[1777];
This query runs very slowly (Execution time: 66886.132 ms), and as EXPLAIN ANALYZE shows, the GIN index is not used; only a Seq Scan is performed.
Why doesn't Postgres use the GIN index, and most importantly: how can I make the above query run as fast as possible?
EDIT
This is the result of explain (analyze, verbose) for the above query:
Aggregate (cost=10000024724.75..10000024724.76 rows=1 width=0) (actual time=61087.513..61087.513 rows=1 loops=1)
Output: count(*)
-> Seq Scan on public.users (cost=10000000000.00..10000024724.00 rows=300 width=0) (actual time=12104.651..61087.500 rows=5 loops=1)
Output: id, email, pass, nick, reg_dt, reg_ip, gender, curr_location, about, followed_tag_ids, avatar_img_ext, rep_tag_ids, rep_tag_id_scores, stats, status
Filter: (users.rep_tag_ids @> '{1777}'::integer[])
Rows Removed by Filter: 299995
Planning time: 0.110 ms
Execution time: 61087.564 ms
These are the table and index definitions:
CREATE TABLE users
(
id serial PRIMARY KEY,
rep_tag_ids integer[] DEFAULT '{}'
-- other columns here
);
create index users_rep_tag_ids_idx ON users USING GIN (rep_tag_ids);
You need to help the query optimizer use the index. Install the intarray extension for PostgreSQL if you don't have it yet, and then recreate your index using the gin__int_ops operator class.
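Installing the extension is a single statement (this assumes the contrib modules are available on your server and that you have the privileges to create extensions):
CREATE EXTENSION IF NOT EXISTS intarray;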
DROP INDEX users_rep_tag_ids_idx;
CREATE INDEX users_rep_tag_ids_idx ON users USING gin (rep_tag_ids gin__int_ops);
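After rebuilding the index, re-run the query under EXPLAIN to verify that the planner now uses it; with the intarray operator class the plan should show a Bitmap Index Scan on users_rep_tag_ids_idx instead of a Seq Scan (exact costs and timings will depend on your data):
EXPLAIN (ANALYZE, VERBOSE)
SELECT COUNT(*)
FROM users
WHERE rep_tag_ids @> ARRAY[1777];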