
SQL LIMIT vs. JDBC Statement setMaxRows. Which one is better?

I want to select the Top 10 records for a given query. So, I can use one of the following options:

  • Using the JDBC Statement.setMaxRows() method
  • Using LIMIT and OFFSET in the SQL query

What are the advantages and disadvantages of these two options?

asked Sep 22 '15 by Rong Nguyen


1 Answer

SQL-level LIMIT

To restrict the SQL query result set size, you can use the SQL:2008 syntax:

SELECT title
FROM post
ORDER BY created_on DESC
OFFSET 50 ROWS
FETCH NEXT 50 ROWS ONLY

which works on Oracle 12c, SQL Server 2012, and PostgreSQL 8.4 or newer versions.

For MySQL, you can use the LIMIT and OFFSET clauses:

SELECT title
FROM post
ORDER BY created_on DESC
LIMIT 50
OFFSET 50

The advantage of SQL-level pagination is that the database optimizer can take the row limit into account when building the execution plan.

So, if we have an index on the created_on column:

CREATE INDEX idx_post_created_on ON post (created_on DESC)

And we execute the following query that uses the LIMIT clause:

EXPLAIN ANALYZE
SELECT title
FROM post
ORDER BY created_on DESC
LIMIT 50

We can see that the database engine uses the index since the optimizer knows that only 50 records are to be fetched:

Execution plan:
Limit  (cost=0.28..25.35 rows=50 width=564)
       (actual time=0.038..0.051 rows=50 loops=1)
  ->  Index Scan using idx_post_created_on on post p  
      (cost=0.28..260.04 rows=518 width=564) 
      (actual time=0.037..0.049 rows=50 loops=1)
Planning time: 1.511 ms
Execution time: 0.148 ms
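
Since the question is about JDBC, here is a minimal sketch of how the SQL-level LIMIT can be driven from a PreparedStatement with bind parameters. It assumes an open java.sql.Connection named connection and a driver that accepts parameters in the LIMIT and OFFSET clauses (the PostgreSQL and MySQL drivers do):

int pageSize = 50;
int pageNumber = 0;

try (PreparedStatement statement = connection
    .prepareStatement("""
        SELECT title
        FROM post
        ORDER BY created_on DESC
        LIMIT ?
        OFFSET ?
    """)
) {
    // The limit is part of the SQL statement, so the optimizer can use it.
    statement.setInt(1, pageSize);
    statement.setInt(2, pageNumber * pageSize);
    try (ResultSet resultSet = statement.executeQuery()) {
        while (resultSet.next()) {
            String title = resultSet.getString(1);
            // process the title
        }
    }
}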

JDBC Statement maxRows

According to the setMaxRows Javadoc:

If the limit is exceeded, the excess rows are silently dropped.

That's not very reassuring!

So, if we execute the following query on PostgreSQL:

try (PreparedStatement statement = connection
    .prepareStatement("""
        SELECT title
        FROM post
        ORDER BY created_on DESC
    """)
) {
    // Ask the JDBC driver to cap the result at 50 rows;
    // the SQL itself is unchanged, so the optimizer is unaware of the limit.
    statement.setMaxRows(50);
    ResultSet resultSet = statement.executeQuery();
    int count = 0;
    while (resultSet.next()) {
        String title = resultSet.getString(1);
        count++;
    }
}

We get the following execution plan in the PostgreSQL log:

Execution plan:
  Sort  (cost=65.53..66.83 rows=518 width=564) 
        (actual time=4.339..5.473 rows=5000 loops=1)
  Sort Key: created_on DESC
  Sort Method: quicksort  Memory: 896kB
  ->  Seq Scan on post p  (cost=0.00..42.18 rows=518 width=564) 
                          (actual time=0.041..1.833 rows=5000 loops=1)
Planning time: 1.840 ms
Execution time: 6.611 ms 

Because the database optimizer has no idea that we only need the first 50 records, it assumes that all 5000 rows will be fetched. And when a query is expected to fetch a large number of records, a full-table scan costs less than an index scan, so the execution plan does not use the index at all.

I ran this test on Oracle, SQL Server, PostgreSQL, and MySQL, and it looks like the Oracle and PostgreSQL optimizers don't use the maxRows setting when generating the execution plan.

However, on SQL Server and MySQL, the maxRows JDBC setting is taken into consideration, and the execution plan is equivalent to an SQL query that uses TOP or LIMIT. You can run the tests for yourself, as they are available in my High-Performance Java Persistence GitHub repository.

Conclusion

Although setMaxRows looks like a portable way to limit the size of the ResultSet, SQL-level pagination is much more efficient whenever the database optimizer ignores the JDBC maxRows property.
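
That said, the two options are not mutually exclusive. One possible approach, sketched below under the same assumptions as the examples above, is to keep the LIMIT in the SQL so that the optimizer can use it, and treat setMaxRows as nothing more than a client-side safety cap:

int pageSize = 50;

try (PreparedStatement statement = connection
    .prepareStatement("""
        SELECT title
        FROM post
        ORDER BY created_on DESC
        LIMIT ?
    """)
) {
    // The SQL-level LIMIT drives the execution plan;
    // setMaxRows only caps the ResultSet on the JDBC side.
    statement.setInt(1, pageSize);
    statement.setMaxRows(pageSize);
    try (ResultSet resultSet = statement.executeQuery()) {
        while (resultSet.next()) {
            String title = resultSet.getString(1);
            // process the title
        }
    }
}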

answered Oct 16 '22 by Vlad Mihalcea