 

What happens when a SQL query runs out of memory?

I want to set up a Postgres server on AWS, and the biggest table will be 10GB. Do I have to select 10GB of memory for the instance?

What happens when my query result is larger than 10GB?

asked Dec 06 '17 by msuda


People also ask

What happens when SQL Server runs out of memory?

SQL Server will just keep using more and more memory until there's none left on the system. If the operating system has no memory available, it will start using the page file instead of RAM.

Can SQL run out of memory?

In-Memory OLTP uses more memory and in different ways than SQL Server does. It is possible that the amount of memory you installed and allocated for In-Memory OLTP becomes inadequate for your growing needs. If so, you could run out of memory.

How do I know if my SQL needs more memory?

Sometimes you need to look at what the currently running queries are waiting on. For that, go grab sp_WhoIsActive. If you see queries constantly waiting on memory, it might be a sign you need more of it, because the server has to keep going out to disk to fetch the data those queries need.

How does SQL Server handle out of memory exception?

Use the sqlcmd utility instead of SSMS to run the SQL queries. This method lets queries run without the resources required by the SSMS UI. Additionally, you can use the 64-bit version of sqlcmd.exe to avoid the memory restriction that affects the 32-bit SSMS process.


2 Answers

Nothing will happen: the entire result set is not loaded into memory. The available memory will be used and re-used as needed while the result is prepared, and the work will spill over to disk as needed.

See the PostgreSQL resource consumption documentation (https://www.postgresql.org/docs/current/runtime-config-resource.html) for more info.

Specifically, look at work_mem:

work_mem (integer) Specifies the amount of memory to be used by internal sort operations and hash tables before writing to temporary disk files.

As long as you don't run out of working memory on a single operation or set of parallel operations, you are fine.
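As an illustration (not from the original answer): here is a minimal JDBC sketch that raises work_mem for a single session before running a sort-heavy query. The connection details, big_table, and some_column are placeholders.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class WorkMemDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details -- substitute your own instance.
        String url = "jdbc:postgresql://my-host:5432/mydb";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement stmt = conn.createStatement()) {

            // Raise work_mem for this session only. Sorts and hash tables
            // that exceed this limit spill to temporary disk files rather
            // than failing with an out-of-memory error.
            stmt.execute("SET work_mem = '256MB'");

            // A sort over a large table now gets up to 256MB of memory
            // per sort operation before PostgreSQL falls back to an
            // on-disk merge sort.
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT * FROM big_table ORDER BY some_column")) {
                while (rs.next()) {
                    // process rows...
                }
            }
        }
    }
}
```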

Edit: The above was an answer to the question What happens when you query a 10GB table without 10GB of memory on the server/instance?

Here is an updated answer to the updated question:

  • Only server-side resources are used to produce the result set
  • Assuming a JDBC driver is used, by default the entire result set is sent to your local machine, which can cause out-of-memory errors there

This behavior can be changed by altering the fetch size, which makes the driver use a server-side cursor (see the sketch below).

Reference: PostgreSQL JDBC documentation, Getting results based on a cursor.
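A minimal sketch of that cursor-based approach with the PostgreSQL JDBC driver; the connection details and big_table are placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CursorFetchDemo {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://my-host:5432/mydb"; // placeholder
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            // The driver only uses a server-side cursor when autocommit is off.
            conn.setAutoCommit(false);

            try (Statement stmt = conn.createStatement()) {
                // Fetch 500 rows per round trip instead of buffering the
                // entire result set in client memory.
                stmt.setFetchSize(500);

                try (ResultSet rs = stmt.executeQuery("SELECT * FROM big_table")) {
                    while (rs.next()) {
                        // rs.next() transparently fetches the next batch
                        // from the cursor when the current one is exhausted.
                    }
                }
            }
            conn.commit();
        }
    }
}
```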

answered Oct 19 '22 by Aaron Dietz


On the server side, with a simple query like yours, Postgres just keeps a cursor that points to where it is as it spools the results to you, and uses very little memory. If there were sorts in there that had no index to use, those might use up a lot of memory, though I'm not sure. On the client side, the Postgres JDBC driver by default loads the entire result set into memory before passing it back to you (which you can get around by specifying a fetch size).

With more complex queries (for example: give me all 100M rows, but order them by X, where X is not indexed) I don't know for sure, but internally it probably creates a temporary, disk-backed structure (so it won't run out of RAM), treated like a normal table. If there's a matching index, it can just traverse that with a pointer and still use little RAM.
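One way to check which case you're in (my addition, not part of the original answer) is to run EXPLAIN ANALYZE and look at the sort method in the plan. A quick JDBC sketch, with the connection details, big_table, and x as placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ExplainSortDemo {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://my-host:5432/mydb"; // placeholder
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY x")) {
            while (rs.next()) {
                // Look for "Sort Method: external merge  Disk: ..." (the sort
                // spilled to disk) versus "Sort Method: quicksort  Memory: ..."
                // (it fit in work_mem), or an Index Scan with no Sort node at all.
                System.out.println(rs.getString(1));
            }
        }
    }
}
```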

answered Oct 19 '22 by rogerdpack