I'm trying to understand what this code is doing behind the scenes:
import psycopg2

c = psycopg2.connect('dbname=some_db user=me').cursor()
c.execute('select * from some_table')
for row in c:
    pass
Per PEP 249, my understanding was that this repeatedly calls Cursor.next(), which is equivalent to calling Cursor.fetchone().
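In other words, I assumed the for loop above is just sugar for a fetchone() loop like this (a sketch of my mental model, not psycopg2 internals):

# what I assumed iteration desugars to, per PEP 249's
# next()/fetchone() equivalence
while True:
    row = c.fetchone()
    if row is None:  # fetchone() returns None once the result set is exhausted
        break
    # ... process row ...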
However, the psycopg2 docs say the following:
When a database query is executed, the Psycopg cursor usually fetches all the records returned by the backend, transferring them to the client process.
So I'm confused -- when I run the code above, is it storing the results on the server and fetching them one by one, or is it bringing over everything at once?
It depends on how you configure psycopg2. See itersize and server-side cursors.
By default, psycopg2 fetches the entire result set into client memory when execute() returns, and iterating the cursor just walks over rows that are already local. But per the docs above, you can use a named (server-side) cursor instead, which leaves the result set on the backend and fetches rows in batches of itersize as you iterate.
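For example, here is a minimal sketch contrasting the two modes (dbname, user, and the table are the placeholders from the question; the cursor name 'batched' is arbitrary):

import psycopg2

conn = psycopg2.connect('dbname=some_db user=me')

# Default client-side cursor: execute() transfers the entire
# result set to the client before the loop even starts.
cur = conn.cursor()
cur.execute('select * from some_table')
for row in cur:
    pass  # every row is already in client memory here
cur.close()

# Named (server-side) cursor: the result set stays on the backend
# and rows are fetched lazily, itersize rows per network round trip.
named = conn.cursor(name='batched')
named.itersize = 2000  # rows per round trip; 2000 is also the default
named.execute('select * from some_table')
for row in named:
    pass  # fetched in batches as you iterate
named.close()

conn.close()

One caveat: a named cursor is tied to the enclosing transaction (psycopg2's default, since autocommit is off), so keep the transaction open while iterating.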