I have a result set of rows stored in cursor.rows, returned from a pyodbc cursor.execute() call. What is the fastest way to unpack this data into a list of comma-separated strings (or into a custom object)?
Currently I am doing the following:
cursor.execute(query_str)
f = open(out_file, 'w')
for row in cursor:
    f.write(','.join([str(s) for s in row]))
    f.write('\n')
f.close()
This takes 130ms per row, which seems like a ridiculously expensive operation. How can I speed this up?
I'd use the csv module:
import csv

cursor.execute(query_str)
# newline='' is how the csv module expects the file to be opened;
# without it, extra blank lines can appear on Windows
with open(out_file, 'w', newline='') as f:
    csv.writer(f, quoting=csv.QUOTE_NONE).writerows(cursor)
Beware that with csv.QUOTE_NONE, a csv.Error is raised if there's a comma in a data field (the writer has no way to escape it unless you also set escapechar). The saner choice would be csv.QUOTE_MINIMAL at least.
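For completeness, a minimal sketch of the safer variant. Note that csv.QUOTE_MINIMAL is the writer's default, so passing it explicitly is optional; it quotes only fields containing the delimiter, the quote character, or a newline:

import csv

cursor.execute(query_str)
with open(out_file, 'w', newline='') as f:
    # QUOTE_MINIMAL adds quotes around a field only when it contains
    # a special character, so plain fields stay unquoted
    csv.writer(f, quoting=csv.QUOTE_MINIMAL).writerows(cursor)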
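As for the "unpack into a custom object" part of the question, one common pattern (a sketch, assuming your column names are valid Python identifiers) is to build a namedtuple from the cursor metadata:

from collections import namedtuple

cursor.execute(query_str)
# cursor.description is a DB-API sequence of 7-item tuples;
# the first item of each tuple is the column name
Record = namedtuple('Record', [col[0] for col in cursor.description])
records = [Record(*row) for row in cursor.fetchall()]

Each record then supports attribute access such as records[0].column_name. (pyodbc's own Row objects already allow attribute access by column name, so this is mainly useful if you want plain tuples decoupled from the cursor.)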