
Get data from pandas into a SQL server with PYODBC

I am trying to understand how Python can pull data from an FTP server into pandas and then move it into SQL Server. My code here is very rudimentary, to say the least, and I am looking for any advice or help at all. I have tried loading the data from the FTP server first, which works fine. If I remove that code and change it to a select from MS SQL Server it also works, so the connection string is fine, but the insert into SQL Server seems to be causing problems.

import pyodbc
import pandas
from ftplib import FTP
from StringIO import StringIO
import csv

ftp = FTP('ftp.xyz.com', 'user', 'pass')
ftp.set_pasv(True)
r = StringIO()
ftp.retrbinary('RETR filename.csv', r.write)

# read_table expects a path or file-like object, so wrap the downloaded text
df = pandas.read_table(StringIO(r.getvalue()), delimiter=',')


connStr = ('DRIVER={SQL Server Native Client 10.0};SERVER=localhost;DATABASE=TESTFEED;UID=sa;PWD=pass')
conn = pyodbc.connect(connStr)

cursor = conn.cursor()
cursor.execute("INSERT INTO dbo.tblImport (Startdt, Enddt, x, y, z) "
               "VALUES (x, x, x, x, x)")
cursor.close()
conn.commit()
conn.close()
print"Script has successfully run!"

When I remove the FTP code this runs perfectly, but I do not understand how to make the next jump to get the data into Microsoft SQL Server, or even whether it is possible without saving it to a file first.

asked Sep 04 '14 by andy redmayne



6 Answers

For the 'write to SQL Server' part, you can use the convenient to_sql method of pandas (so there is no need to iterate over the rows and do the insert manually). See the docs on interacting with SQL databases with pandas: http://pandas.pydata.org/pandas-docs/stable/io.html#io-sql

You will need at least pandas 0.14 for this to work, and you will also need sqlalchemy installed. An example, assuming df is the DataFrame you got from read_table:

import sqlalchemy
import pyodbc
engine = sqlalchemy.create_engine("mssql+pyodbc://<username>:<password>@<dsnname>")

# write the DataFrame to a table in the sql database
df.to_sql("table_name", engine)

See also the documentation page of to_sql.
More info on how to create the connection engine with sqlalchemy for SQL Server with pyodbc can be found here: http://docs.sqlalchemy.org/en/rel_1_1/dialects/mssql.html#dialect-mssql-pyodbc-connect
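For illustration, a hostname-style (DSN-less) URL might look like the sketch below; the driver name, server, database and credentials are placeholders that have to match your own setup:

import sqlalchemy

# Hostname-style connection: no DSN needed, but the ODBC driver must be named
# explicitly. Server, database and credentials below are placeholders.
engine = sqlalchemy.create_engine(
    "mssql+pyodbc://<username>:<password>@<server>/<database>"
    "?driver=ODBC+Driver+17+for+SQL+Server"
)

df.to_sql("table_name", engine, if_exists="append", index=False)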


But if your goal is just to get the csv data into the SQL database, you could also consider doing this directly from SQL. See e.g. Import CSV file into SQL Server.
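As a rough sketch of that 'directly from SQL' route (not part of the original answer): T-SQL's BULK INSERT can load the file server-side through pyodbc, assuming the csv has first been saved to a path that the SQL Server service account can read; the path and table name below are placeholders:

import pyodbc

conn = pyodbc.connect('DRIVER={SQL Server Native Client 10.0};SERVER=localhost;DATABASE=TESTFEED;UID=sa;PWD=pass')
cursor = conn.cursor()

# BULK INSERT runs on the server, so the file path must be visible to the
# SQL Server machine itself, not just to the Python process.
cursor.execute(
    "BULK INSERT dbo.tblImport "
    "FROM 'C:\\data\\filename.csv' "
    "WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\\n', FIRSTROW = 2)"
)
conn.commit()
conn.close()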

answered by joris


Python 3 version, using a LocalDB SQL Server instance:

from sqlalchemy import create_engine
import urllib
import pyodbc
import pandas as pd

df = pd.read_csv("./data.csv")

quoted = urllib.parse.quote_plus(r"DRIVER={SQL Server Native Client 11.0};SERVER=(localDb)\ProjectsV14;DATABASE=database")
engine = create_engine('mssql+pyodbc:///?odbc_connect={}'.format(quoted))

df.to_sql('TargetTable', schema='dbo', con=engine)

result = engine.execute('SELECT COUNT(*) FROM [dbo].[TargetTable]')
result.fetchall()
answered by Random


Yes, the bcp utility seems to be the best solution for most cases.

If you want to stay within Python, the following code should work.

from sqlalchemy import create_engine
import urllib
import pyodbc

quoted = urllib.parse.quote_plus(r"DRIVER={SQL Server};SERVER=YOUR\ServerName;DATABASE=YOur_Database")
engine = create_engine('mssql+pyodbc:///?odbc_connect={}'.format(quoted))

df.to_sql('Table_Name', schema='dbo', con=engine, chunksize=200, method='multi', index=False, if_exists='replace')

Don't leave out method='multi': it significantly reduces the task execution time.
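As an aside (not from the original answer), newer pyodbc and SQLAlchemy versions offer another speed-up: fast_executemany, which batches the parameterized inserts on the driver side. A sketch, reusing the engine setup above and assuming SQLAlchemy 1.3+:

from sqlalchemy import create_engine
import urllib

quoted = urllib.parse.quote_plus(r"DRIVER={SQL Server};SERVER=YOUR\ServerName;DATABASE=YOur_Database")

# fast_executemany lets pyodbc send the whole batch of parameterized INSERTs
# in one round trip instead of one statement per row.
engine = create_engine(
    'mssql+pyodbc:///?odbc_connect={}'.format(quoted),
    fast_executemany=True,
)

# With fast_executemany the default insert method is already fast,
# so method='multi' is not needed here.
df.to_sql('Table_Name', schema='dbo', con=engine, index=False, if_exists='replace')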

When using method='multi', you may sometimes encounter the following error:

ProgrammingError: ('42000', '[42000] [Microsoft][ODBC SQL Server Driver][SQL Server]The incoming request has too many parameters. The server supports a maximum of 2100 parameters. Reduce the number of parameters and resend the request. (8003) (SQLExecDirectW)')

In such a case, determine the number of columns in your dataframe with df.shape[1]. Divide the maximum supported number of parameters by this value and use the floor of the result as the chunk size.
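A quick sketch of that calculation, reusing the engine and table name from above:

# SQL Server allows at most 2100 parameters per request, and method='multi'
# consumes one parameter per cell, so cap the chunk size accordingly.
max_params = 2100
chunksize = max_params // df.shape[1]   # floor division

df.to_sql('Table_Name', schema='dbo', con=engine, chunksize=chunksize,
          method='multi', index=False, if_exists='replace')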

answered by Serhii Kushchenko


I found that using the bcp utility (https://docs.microsoft.com/en-us/sql/tools/bcp-utility) works best when you have a large dataset. I have 2.7 million rows that insert at 80K rows/sec. You can store your DataFrame as a csv file (use tabs as the separator if your data doesn't contain tabs, and utf-8 encoding). With bcp, I've used the "-c" format option and it has worked without issues so far.
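A rough sketch of that workflow (not from the original answer), assuming bcp is on the PATH and that the server, database and table names below are placeholders; -T uses Windows authentication, so swap in -U/-P for a SQL login:

import subprocess
import pandas as pd

df = pd.read_csv("./data.csv")

# Dump the frame as tab-separated UTF-8 without header or index,
# which matches bcp's character mode (-c) expectations.
df.to_csv("staging.tsv", sep="\t", index=False, header=False, encoding="utf-8")

subprocess.run([
    "bcp", "dbo.TargetTable", "in", "staging.tsv",
    "-S", "localhost", "-d", "TESTFEED",
    "-T",             # trusted (Windows) authentication
    "-c",             # character mode, as mentioned in the answer
    "-t", "\t",       # field terminator: tab
], check=True)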

answered by Babu Arunachalam


This worked for me on Python 3.5.2:

import sqlalchemy as sa
import urllib
import pyodbc

conn = urllib.parse.quote_plus('DRIVER={ODBC Driver 17 for SQL Server};SERVER=' + server + ';DATABASE=' + database + ';UID=' + username + ';PWD=' + password)
engine = sa.create_engine('mssql+pyodbc:///?odbc_connect={}'.format(conn))

frame.to_sql("myTable", engine, schema='dbo', if_exists='append', index=False, index_label='myField')
answered by FedericoSala


"As the Connection represents an open resource against the database, we want to always limit the scope of our use of this object to a specific context, and the best way to do that is by using Python context manager form, also known as the with statement." https://docs.sqlalchemy.org/en/14/tutorial/dbapi_transactions.html

The example would then be:

from sqlalchemy import create_engine
import urllib
import pyodbc
connection_string = (
    "Driver={SQL Server Native Client 11.0};"
    "Server=myserver;"
    "UID=myuser;"
    "PWD=mypwd;"
    "Database=mydb;"
)
quoted = urllib.parse.quote_plus(connection_string)
engine = create_engine(f'mssql+pyodbc:///?odbc_connect={quoted}')
with engine.connect() as cnn:
    df.to_sql('mytable', con=cnn, if_exists='replace', index=False)
answered by ekraus