
PostgreSQL Simple JOIN very slow

I have a simple query, and two tables:

drilldown

CREATE SEQUENCE drilldown_id_seq;

CREATE TABLE drilldown (
    transactionid bigint NOT NULL DEFAULT nextval('drilldown_id_seq'),
    userid bigint NOT NULL default 0 REFERENCES users(id),
    pathid bigint NOT NULL default 0,
    reqms bigint NOT NULL default 0,
    quems bigint NOT NULL default 0,
    clicktime timestamp default current_timestamp,
    PRIMARY KEY(transactionid)
);

ALTER SEQUENCE drilldown_id_seq OWNED BY drilldown.transactionid;

CREATE INDEX drilldown_idx1 ON drilldown (clicktime);

querystats

CREATE SEQUENCE querystats_id_seq;

CREATE TABLE querystats (
    id bigint NOT NULL DEFAULT nextval('querystats_id_seq'),
    transactionid bigint NOT NULL default 0 REFERENCES drilldown(transactionid),
    querynameid bigint NOT NULL default 0 REFERENCES queryname(id),
    queryms bigint NOT NULL default 0,
    PRIMARY KEY(id)
);

ALTER SEQUENCE querystats_id_seq OWNED BY querystats.id;

CREATE INDEX querystats_idx1 ON querystats (transactionid);
CREATE INDEX querystats_idx2 ON querystats (querynameid);

drilldown has 1.5 million records, and querystats has 10 million records; the problem happens when I do a join between the two.

QUERY

explain analyse
select avg(qs.queryms)
  from querystats qs
  join drilldown d on (qs.transactionid=d.transactionid)
  where querynameid=1;

QUERY PLAN

Aggregate  (cost=528596.96..528596.97 rows=1 width=8) (actual time=5213.154..5213.154 rows=1 loops=1)
   ->  Hash Join  (cost=274072.53..518367.59 rows=4091746 width=8) (actual time=844.087..3528.788 rows=4117717 loops=1)
         Hash Cond: (qs.transactionid = d.transactionid)
         ->  Bitmap Heap Scan on querystats qs  (cost=88732.62..210990.44 rows=4091746 width=16) (actual time=309.502..1321.029 rows=4117717 loops=1)
               Recheck Cond: (querynameid = 1)
               ->  Bitmap Index Scan on querystats_idx2  (cost=0.00..87709.68 rows=4091746 width=0) (actual time=307.916..307.916 rows=4117718 loops=1)
                     Index Cond: (querynameid = 1)
         ->  Hash  (cost=162842.29..162842.29 rows=1371250 width=8) (actual time=534.065..534.065 rows=1372574 loops=1)
               Buckets: 4096  Batches: 64  Memory Usage: 850kB
               ->  Index Scan using drilldown_pkey on drilldown d  (cost=0.00..162842.29 rows=1371250 width=8) (actual time=0.015..364.657 rows=1372574 loops=1)
 Total runtime: 5213.205 ms
(11 rows)

I know there are some tuning parameters I can adjust in PostgreSQL, but what I want to know is: is the query I am running the most optimal way of joining the two tables?

Or maybe I should use some sort of INNER JOIN? I'm just not sure.

Any pointers are appreciated!

EDIT

database#\d drilldown
                                       Table "public.drilldown"
    Column     |            Type             |                       Modifiers                        
---------------+-----------------------------+--------------------------------------------------------
 transactionid | bigint                      | not null default nextval('drilldown_id_seq'::regclass)
 userid        | bigint                      | not null default 0
 pathid        | bigint                      | not null default 0
 reqms         | bigint                      | not null default 0
 quems         | bigint                      | not null default 0
 clicktime     | timestamp without time zone | default now()
Indexes:
    "drilldown_pkey" PRIMARY KEY, btree (transactionid)
    "drilldown_idx1" btree (clicktime)
Foreign-key constraints:
    "drilldown_userid_fkey" FOREIGN KEY (userid) REFERENCES users(id)
Referenced by:
    TABLE "querystats" CONSTRAINT "querystats_transactionid_fkey" FOREIGN KEY (transactionid) REFERENCES drilldown(transactionid)

database=# \d querystats
                            Table "public.querystats"
    Column     |  Type  |                        Modifiers                        
---------------+--------+---------------------------------------------------------
 id            | bigint | not null default nextval('querystats_id_seq'::regclass)
 transactionid | bigint | not null default 0
 querynameid   | bigint | not null default 0
 queryms       | bigint | not null default 0
Indexes:
    "querystats_pkey" PRIMARY KEY, btree (id)
    "querystats_idx1" btree (transactionid)
    "querystats_idx2" btree (querynameid)
Foreign-key constraints:
    "querystats_querynameid_fkey" FOREIGN KEY (querynameid) REFERENCES queryname(id)
    "querystats_transactionid_fkey" FOREIGN KEY (transactionid) REFERENCES drilldown(transactionid)

So here are the two tables as requested, and the version:

PostgreSQL 9.1.7 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit

So what this query does is get the average of the queryms values across all rows for a given query type (querynameid).

            name            |         current_setting          |        source        
----------------------------+----------------------------------+----------------------
 application_name           | psql                             | client
 client_encoding            | UTF8                             | client
 DateStyle                  | ISO, MDY                         | configuration file
 default_text_search_config | pg_catalog.english               | configuration file
 enable_seqscan             | off                              | session
 external_pid_file          | /var/run/postgresql/9.1-main.pid | configuration file
 lc_messages                | en_US.UTF-8                      | configuration file
 lc_monetary                | en_US.UTF-8                      | configuration file
 lc_numeric                 | en_US.UTF-8                      | configuration file
 lc_time                    | en_US.UTF-8                      | configuration file
 log_line_prefix            | %t                               | configuration file
 log_timezone               | localtime                        | environment variable
 max_connections            | 100                              | configuration file
 max_stack_depth            | 2MB                              | environment variable
 port                       | 5432                             | configuration file
 shared_buffers             | 24MB                             | configuration file
 ssl                        | on                               | configuration file
 TimeZone                   | localtime                        | environment variable
 unix_socket_directory      | /var/run/postgresql              | configuration file
(19 rows)

I see that enable_seqscan=off; I have not touched any settings, and this is a completely default install.

UPDATE

I made some changes based on the comments below, and here are the results.

explain analyse
SELECT (SELECT avg(queryms) AS total
          FROM querystats
         WHERE querynameid=3) as total
  FROM querystats qs
  JOIN drilldown d ON (qs.transactionid=d.transactionid)
 WHERE qs.querynameid=3
 LIMIT 1;
                                                                       QUERY PLAN                                                                        
---------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=196775.99..196776.37 rows=1 width=0) (actual time=2320.876..2320.876 rows=1 loops=1)
   InitPlan 1 (returns $0)
     ->  Aggregate  (cost=196775.94..196775.99 rows=1 width=8) (actual time=2320.815..2320.815 rows=1 loops=1)
           ->  Bitmap Heap Scan on querystats  (cost=24354.25..189291.69 rows=2993698 width=8) (actual time=226.516..1144.690 rows=2999798 loops=1)
                 Recheck Cond: (querynameid = 3)
                 ->  Bitmap Index Scan on querystats_idx  (cost=0.00..23605.83 rows=2993698 width=0) (actual time=225.119..225.119 rows=2999798 loops=1)
                       Index Cond: (querynameid = 3)
   ->  Nested Loop  (cost=0.00..1127817.12 rows=2993698 width=0) (actual time=2320.876..2320.876 rows=1 loops=1)
         ->  Seq Scan on drilldown d  (cost=0.00..76745.10 rows=1498798 width=8) (actual time=0.009..0.009 rows=1 loops=1)
         ->  Index Scan using querystats_idx on querystats qs  (cost=0.00..0.60 rows=2 width=8) (actual time=0.045..0.045 rows=1 loops=1)
               Index Cond: ((querynameid = 3) AND (transactionid = d.transactionid))
 Total runtime: 2320.940 ms
(12 rows)
MichaelM asked Feb 10 '13


1 Answer

It's behaving as though you have set enable_seqscan = off, because it is using an index scan to populate a hash table. Never set any of the planner options off except as a diagnostic step, and if you are showing a plan, please show any options used. This can be run to show a lot of the useful information:

SELECT version();
SELECT name, current_setting(name), source
  FROM pg_settings
  WHERE source NOT IN ('default', 'override');

It also helps if you tell us about the runtime environment, especially the amount of RAM on the machine, what your storage system looks like, and the size of the database (or even better, the active data set of frequently referenced data in the database).
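For the database-size part, something like this will do (a sketch using the standard size functions, which are available in 9.1; table names are taken from the question):

```sql
-- Total on-disk size of the current database
SELECT pg_size_pretty(pg_database_size(current_database())) AS db_size;

-- Size of each table including its indexes and TOAST data
SELECT pg_size_pretty(pg_total_relation_size('querystats')) AS querystats_size,
       pg_size_pretty(pg_total_relation_size('drilldown'))  AS drilldown_size;
```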

As a rough breakdown, the 5.2 seconds breaks down to:

  1. 1.3 seconds to find the 4,117,717 querystats rows that match your selection criterion.
  2. 2.3 seconds to randomly match those against drilldown records.
  3. 1.6 seconds to pass the 4,117,717 rows and calculate an average.

So, even though you seem to have crippled its ability to use the fastest plan, it is taking only 1.26 microseconds (millionths of a second) to locate each row, join it to another, and work it into a calculation of an average. That's not too bad on an absolute basis, but you can almost certainly get a slightly faster plan.
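In fact, if the join to drilldown is only there to follow the foreign key (as it appears to be here), it can be dropped entirely: querystats.transactionid is NOT NULL and references drilldown.transactionid, which is drilldown's primary key, so the join can neither filter out nor duplicate any querystats rows. A sketch of the equivalent query, assuming that FK relationship is the only reason for the join:

```sql
-- Equivalent to the original query under the FK constraint:
-- every querystats row matches exactly one drilldown row,
-- so joining changes nothing about the result.
SELECT avg(queryms)
  FROM querystats
 WHERE querynameid = 1;
```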

First off, if you are using 9.2.x where x is less than 3, upgrade to 9.2.3 immediately. There was a performance regression for some types of plans which was fixed in the recent release which might affect this query. In general, try to stay up-to-date on minor releases (where version number changes past the second dot).

You can test different plans in a single session by setting planning factors on just that connection and running your query (or an EXPLAIN on it). Try something like this:

SET seq_page_cost = 0.1;
SET random_page_cost = 0.1;
SET cpu_tuple_cost = 0.05;
SET effective_cache_size = '3GB'; -- actually use shared_buffers plus OS cache

Make sure that all enable_ settings are on.
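If any of them have been turned off in the session (as enable_seqscan apparently was), RESET restores the defaults; for example:

```sql
-- Restore individual planner settings changed in this session
RESET enable_seqscan;

-- Or undo every session-level override at once
RESET ALL;
```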

kgrittn answered Sep 29 '22