I'm having a problem with the system being overloaded. The query below pulls data from 3 tables, 2 of which have more than 10,000 records, and it takes 50 seconds to run.
SELECT DISTINCT
    p.prod_name,
    p.prod_price,
    SUM(dt.vt_qtd) AS total_qtd
FROM tdb_products p
LEFT JOIN tdb_sales_temp dt ON p.prod_mp_id = dt.vt_product
LEFT JOIN tdb_sales s ON dt.vt_cupom = s.sl_coupom
WHERE
    s.sl_day = $day_link AND
    s.sl_mon = $mon_link AND
    s.sl_year = $year_link
GROUP BY
    p.prod_name
ORDER BY
    p.prod_name ASC
Is this normal?
Resolved!
A query can be slow for two broad reasons:
WAITING: the query spends a long time waiting on a bottleneck (see the detailed list of bottlenecks under types of Waits).
RUNNING: the query spends a long time actively executing, i.e. it is genuinely consuming CPU resources.
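If you're on SQL Server, for example, you can ask the engine which wait types dominate. This is a minimal sketch using the sys.dm_os_wait_stats DMV (the 10-row cutoff is arbitrary):

-- Top wait types on a SQL Server instance (sketch; assumes SQL Server).
-- Large waits point at a bottleneck (I/O, locking); if waits are small
-- but the query is still slow, it is burning CPU while running.
SELECT TOP 10
    wait_type,
    wait_time_ms,
    waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;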
As a rough rule of thumb: 1 ms is a good duration for an SQL query, 100 ms is worrisome, 500-1000 ms definitely needs optimizing, and 10 seconds is a total disaster.
In practice a query takes 20 to 500 ms (sometimes more) depending on the system and the amount of data; the database engine and the server it runs on have a significant influence on speed.
In TiDB, for example, SQL queries with an execution time of more than 300 milliseconds are by default considered slow queries; they are recorded in the slow query log and can be searched via TiDB Dashboard.
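Most engines have a similar knob. If you happen to be on MySQL, this minimal sketch enables the slow query log with the same 300 ms threshold (the variable names are MySQL's; other engines differ):

-- Enable the slow query log and lower the threshold to 300 ms (MySQL).
-- Requires sufficient privileges to set global variables.
SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 0.3;  -- in seconds; fractional values are allowed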
SELECT p.prod_name, p.prod_price, SUM(dt.vt_qtd) AS total_qtd
FROM tdb_sales s
JOIN tdb_sales_temp dt
    ON dt.vt_cupom = s.sl_coupom
JOIN tdb_products p
    ON p.prod_mp_id = dt.vt_product
WHERE (s.sl_day, s.sl_mon, s.sl_year) = ($day_link, $mon_link, $year_link)
GROUP BY
    p.prod_name -- but it's better to group by the product's PRIMARY KEY
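Assuming prod_mp_id is in fact the primary key of tdb_products (not confirmed in the question), the grouped-by-key variant would look like this; on engines that detect functional dependencies (PostgreSQL, recent MySQL) grouping by the key alone is enough, so the extra columns are listed only for portability:

-- Sketch: group by the product's key instead of its name
-- (assumes prod_mp_id is the PRIMARY KEY of tdb_products).
SELECT p.prod_name, p.prod_price, SUM(dt.vt_qtd) AS total_qtd
FROM tdb_sales s
JOIN tdb_sales_temp dt ON dt.vt_cupom = s.sl_coupom
JOIN tdb_products p ON p.prod_mp_id = dt.vt_product
WHERE (s.sl_day, s.sl_mon, s.sl_year) = ($day_link, $mon_link, $year_link)
GROUP BY p.prod_mp_id, p.prod_name, p.prod_price
ORDER BY p.prod_name;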
Remove DISTINCT (it's redundant, as you have GROUP BY and select the grouping field).
Rewrite LEFT JOIN as INNER JOIN, since you have a filtering condition on a LEFT JOIN'ed table (a sketch of why follows below).
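To see why the WHERE clause neutralizes the LEFT JOINs, consider a sketch built on the question's own tables: the NULL rows that an outer join preserves can never satisfy an equality filter, so the result is identical to an INNER JOIN, and the optimizer gains join-ordering freedom:

-- Products with no matching sale survive the LEFT JOINs
-- with NULLs in the dt/s columns...
SELECT p.prod_name
FROM tdb_products p
LEFT JOIN tdb_sales_temp dt ON p.prod_mp_id = dt.vt_product
LEFT JOIN tdb_sales s ON dt.vt_cupom = s.sl_coupom
WHERE s.sl_day = $day_link;  -- ...but NULL = $day_link is never TRUE,
                             -- so those rows are discarded here anyway.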
Create indexes:
tdb_sales (sl_year, sl_mon, sl_day, sl_coupom)
tdb_sales_temp (vt_cupom, vt_product)
tdb_products (prod_mp_id) -- it's probably a PRIMARY KEY and you already have it
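Concretely, the first two could be created like this (the index names are my own invention; pick whatever fits your convention):

-- Composite indexes matching the date filter and the join keys.
CREATE INDEX idx_sales_year_mon_day_coupon
    ON tdb_sales (sl_year, sl_mon, sl_day, sl_coupom);
CREATE INDEX idx_sales_temp_coupon_product
    ON tdb_sales_temp (vt_cupom, vt_product);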
Short answer: no, that is definitely not an acceptable length of time. Any common database system should be able to handle joins across multiple 10,000-row tables in sub-second time.
Not knowing the full schema or DBMS back end, my recommendations would be to look at:
Indexing - make sure the columns used in the joins have proper indexes on them.
Data types - if the joined columns differ in data type, the DBMS has to convert values for every row comparison, which can be a significant performance drain.
Both problems show up in the query plan, as the sketch below shows.
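A quick way to check is to ask for the plan; this sketch assumes a MySQL-style EXPLAIN (PostgreSQL and others accept the same statement but format the output differently):

-- Look for full table scans (type = ALL, no key chosen) and for
-- implicit casts on the join columns in the plan output.
EXPLAIN
SELECT p.prod_name, p.prod_price, SUM(dt.vt_qtd) AS total_qtd
FROM tdb_sales s
JOIN tdb_sales_temp dt ON dt.vt_cupom = s.sl_coupom
JOIN tdb_products p ON p.prod_mp_id = dt.vt_product
WHERE (s.sl_day, s.sl_mon, s.sl_year) = ($day_link, $mon_link, $year_link)
GROUP BY p.prod_name;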