I am using MySQL with pandas and SQLAlchemy, but it is extremely slow: a simple query like the one below takes more than 11 minutes to complete on a table with 11 million rows. What actions could improve this performance? The table has no primary key and is indexed on only one column.
from sqlalchemy import create_engine
import pandas as pd
sql_engine_access = 'mysql+pymysql://root:[password]@localhost'
sql_engine = create_engine(sql_engine_access, echo=False)
script = 'select * from my_database.my_table'
df = pd.read_sql(script, con=sql_engine)
You can try out our tool connectorx (pip install -U connectorx). It is implemented in Rust and targets improving the performance of pandas.read_sql. Its API is basically the same as pandas'. For example, in your case the code would look like:
import connectorx as cx
conn_url = "mysql://root:[password]@localhost:port/my_database"
query = "select * from my_table"
df = cx.read_sql(conn_url, query)
If your query result has a numerical column that is evenly distributed, like ID, you can further speed up the process by leveraging multiple cores like this:
df = cx.read_sql(conn_url, query, partition_on="ID", partition_num=4)
This would split the entire query into four smaller ones by filtering on the ID column, and connectorx will run them in parallel. See the connectorx documentation for more usage and examples.
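To illustrate the idea, here is a rough, hypothetical sketch of how such range partitioning works conceptually: find the min and max of the partition column, split that range into equal sub-ranges, and run one filtered query per sub-range in parallel. The function name, the hard-coded min/max, and the generated WHERE clauses are illustrative assumptions, not connectorx's actual internals.

```python
def partition_queries(query, col, col_min, col_max, n):
    """Split `query` into n range-filtered queries on `col` (illustrative only)."""
    step = (col_max - col_min + 1) // n
    queries = []
    for i in range(n):
        lo = col_min + i * step
        # Last partition absorbs any remainder so the full range is covered.
        hi = col_max if i == n - 1 else lo + step - 1
        queries.append(f"{query} WHERE {col} BETWEEN {lo} AND {hi}")
    return queries

# Assuming IDs run from 1 to 11,000,000 (the row count mentioned in the question):
for q in partition_queries("select * from my_table", "ID", 1, 11_000_000, 4):
    print(q)
```

Each of the four queries scans a disjoint ID range, so they can be fetched concurrently and the partial results concatenated into one DataFrame. This also explains why an evenly distributed column matters: a skewed column would leave some partitions much larger than others.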
Here is the benchmark result loading 60M rows x 16 columns from MySQL to pandas DataFrame using 4 cores:
