So. This issue is almost exactly the same as the one discussed here -- but the fix (such as it is) from that post doesn't work for me.
I'm trying to use Python 2.7.5 and pyodbc 3.0.7 to connect from a 64-bit Ubuntu 12.04 machine to an IBM Netezza database. I'm using unixODBC to handle the DSN. The DSN works beautifully from the isql CLI -- so I know it's configured correctly, and unixODBC is ticking right along.
The code is currently dead simple, and easy to reproduce in a REPL:
In [1]: import pyodbc
In [2]: conn = pyodbc.connect(dsn='NZSQL')
In [3]: curs = conn.cursor()
In [4]: curs.execute("SELECT * FROM DB..FOO ORDER BY created_on DESC LIMIT 10")
Out[4]: <pyodbc.Cursor at 0x1a70ab0>
In [5]: curs.fetchall()
---------------------------------------------------------------------------
InvalidOperation Traceback (most recent call last)
<ipython-input-5-ad813e4432e9> in <module>()
----> 1 curs.fetchall()
/usr/lib/python2.7/decimal.pyc in __new__(cls, value, context)
546 context = getcontext()
547 return context._raise_error(ConversionSyntax,
--> 548 "Invalid literal for Decimal: %r" % value)
549
550 if m.group('sign') == "-":
/usr/lib/python2.7/decimal.pyc in _raise_error(self, condition, explanation, *args)
3864 # Errors should only be risked on copies of the context
3865 # self._ignored_flags = []
-> 3866 raise error(explanation)
3867
3868 def _ignore_all_flags(self):
InvalidOperation: Invalid literal for Decimal: u''
So I get a connection, the query executes fine, and then when I try to fetch a row... asplode.
Anybody ever managed to do this?
Turns out pyodbc can't gracefully convert all of Netezza's types. The table I was working with had two that are problematic:

NUMERIC(7,2)
NVARCHAR(255)

The NUMERIC column causes a decimal conversion error on NULL. The NVARCHAR column comes back as a utf-16-le encoded string, which is a pain in the ass.
I haven't found a good driver- or wrapper-level solution yet, but it can be hacked around by casting the types in the SQL statement:
SELECT
foo::FLOAT AS was_numeric
, bar::VARCHAR(255) AS was_nvarchar
I'll post here if I find a lower-level answer.
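One possible lower-level approach (not tested against a live Netezza box, so treat it as a sketch) is pyodbc's output-converter hook, which lets you intercept the raw column bytes before pyodbc tries -- and fails -- to build a Decimal. This assumes your pyodbc build exposes Connection.add_output_converter, and that the driver really does report NULL NUMERICs as an empty string and NVARCHARs as UTF-16LE bytes, as described above:

```python
import decimal

def handle_numeric(raw):
    # Raw column bytes from the driver. Netezza appears to hand back an
    # empty string for NULL NUMERIC values, which Decimal() rejects.
    if not raw:
        return None
    return decimal.Decimal(raw.decode("ascii"))

def handle_nvarchar(raw):
    # NVARCHAR data comes back UTF-16LE encoded; decode it to unicode.
    if raw is None:
        return None
    return raw.decode("utf-16-le")

def netezza_connect(dsn):
    # Import deferred so the converters above can be exercised without
    # a Netezza driver installed.
    import pyodbc
    conn = pyodbc.connect(dsn=dsn)
    conn.add_output_converter(pyodbc.SQL_DECIMAL, handle_numeric)
    conn.add_output_converter(pyodbc.SQL_NUMERIC, handle_numeric)
    conn.add_output_converter(pyodbc.SQL_WVARCHAR, handle_nvarchar)
    return conn
```

With the converters registered, fetchall() should come back with None for NULL numerics and plain unicode strings for the NVARCHAR column, with no casting needed in the SQL.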
I've just encountered the same problem and found a different solution. I managed to solve the issue by:

Making sure the following attributes are part of my driver options in the odbc.ini file:

Adding the following environment variables:

In my case the values are:

I'm using CentOS 6 and have installed both the unixODBC and unixODBC-devel packages.
Hope it helps someone.
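For reference, the environment variables commonly needed by the Netezza Linux client with unixODBC are the three below. The exact values from this answer were not preserved, so these paths are illustrative only -- adjust them to your own install:

```shell
# Illustrative values for a Netezza Linux client install --
# not the original answer's exact values.
export ODBCINI=/etc/odbc.ini                  # where unixODBC finds DSN definitions
export NZ_ODBC_INI_PATH=/etc                  # directory the Netezza driver searches for odbc.ini
export LD_LIBRARY_PATH=/usr/local/nz/lib64:$LD_LIBRARY_PATH   # Netezza ODBC driver libraries
```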
I'm not sure what your error is, but the code below is allowing me to connect to Netezza via ODBC:
# Connect via pyodbc to listed data sources on machine
import pyodbc
print pyodbc.dataSources()
print "Connecting via ODBC"
# get a connection, if a connect cannot be made an exception will be raised here
conn = pyodbc.connect("DRIVER={NetezzaSQL};SERVER=<myserver>;PORT=<myport>;DATABASE=<mydbschema>;UID=<user>;PWD=<password>;")
print "Connected!\n"
# you can then use conn cursor to perform queries
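Following on from that last comment, the cursor dance can be wrapped in a small helper. This is my own sketch, not part of the original answer, and run_query is a name I made up:

```python
def run_query(conn, sql):
    # Open a cursor on the pyodbc connection, run the statement,
    # and return every row, closing the cursor afterwards.
    curs = conn.cursor()
    try:
        curs.execute(sql)
        return curs.fetchall()
    finally:
        curs.close()
```

For example: rows = run_query(conn, "SELECT COUNT(*) FROM _v_table").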
The Netezza Linux client package includes /usr/local/nz/lib/ODBC_README, which lists all the values for those two attributes:

UnicodeTranslationOption: Specify translation option for Unicode. Possible value strings are:

utf8 : unicode data is in utf-8 encoding
utf16 : unicode data is in utf-16 encoding
utf32 : unicode data is in utf-32 encoding

Do not add '-' in the value strings listed above, e.g. "utf-8" is not a valid string value. These value strings are case insensitive. On Windows this option is not available, as the Windows DM always passes unicode data in utf16.

CharacterTranslationOption ("Optimize for ASCII character set" on Windows): Specify translation option for character encodings. Possible value strings are:

all : Support all character encodings
latin9 : Support Latin9 character encoding only

Do not add '-' in the value strings listed above, e.g. "latin-9" is not a valid string value. These value strings are case insensitive.

NPS uses the Latin9 character encoding for char and varchar datatypes. The character encoding on many Windows systems is similar, but not identical to this. For the ASCII subset (letters a-z, A-Z, numbers 0-9 and punctuation marks) they are identical. If your character data in CHAR or VARCHAR datatypes is only in this ASCII subset then it will be faster if this box is checked. If your data has special characters such as the Euro sign (€) then keep the box unchecked to accurately convert the encoding. Characters in the NCHAR or NVARCHAR data types will always be converted appropriately.
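Put together, a DSN entry using those two attributes might look like the fragment below. The driver path, host, port, and database are placeholders, not values from the thread:

```ini
[NZSQL]
Driver                     = /usr/local/nz/lib64/libnzodbc.so
Servername                 = nzhost.example.com
Port                       = 5480
Database                   = MYDB
; per the README: no hyphen in the value, and values are case-insensitive
UnicodeTranslationOption   = utf16
CharacterTranslationOption = all
```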