I'm using SQLAlchemy 0.6.3 with PostgreSQL 8.4 on Debian squeeze. I want a table where one column stores a value that comes back to Python as a list of integer lists or a tuple of integer tuples, e.g.
((1,2), (3,4), (5,6,7))
In the example below that column is model. I thought a reasonable approach would be to store the data as a PostgreSQL two-dimensional array, which in PG is declared as integer[][]. I don't know in what form SQLA will return this to Python, but I'm hoping for something like a tuple of tuples.
However, I can't figure out how to tell SQLA to give me a two-dimensional Integer array. The documentation for sqlalchemy.dialects.postgresql.ARRAY says
item_type – The data type of items of this array. Note that dimensionality is irrelevant here, so multi-dimensional arrays like INTEGER[][], are constructed as ARRAY(Integer), not as ARRAY(ARRAY(Integer)) or such. The type mapping figures this out on the fly.
Unfortunately, I have no idea what that means. How can the type mapping figure this out on the fly? It needs to create the correct DDL.
My first and only guess for how to do this would have been ARRAY(ARRAY(Integer))
. Currently I have
crossval_table = Table(
    name, meta,
    Column('id', Integer, primary_key=True),
    Column('created', TIMESTAMP(), default=now()),
    Column('sample', postgresql.ARRAY(Integer)),
    Column('model', postgresql.ARRAY(Integer)),
    Column('time', Float),
    schema=schema,
)
This creates the following DDL
CREATE TABLE crossval (
id integer NOT NULL,
created timestamp without time zone,
sample integer[],
model integer[],
"time" double precision
);
which isn't right, of course. What am I missing?
I'm answering this myself, since Mike Bayer responded to it on the sqlalchemy-users list; see that thread. As Mike clarified, and as I missed when reading the PG documentation, PG does not actually enforce array dimensions, and neither does SQLA. So one can write integer[][], but PG does not treat it any differently from integer[]. In particular, both PG and SQLA will accept an array expression of any dimension. I'm not sure why this is the case. As quoted by Mike, the PG arrays documentation says
The current implementation does not enforce the declared number of dimensions either. Arrays of a particular element type are all considered to be of the same type, regardless of size or number of dimensions. So, declaring the array size or number of dimensions in CREATE TABLE is simply documentation; it does not affect run-time behavior.
See also the ticket he opened, which proposes enforcing dimensions at the SQLA level.
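You can see this DDL behavior without a live database by compiling a table definition against the PostgreSQL dialect. This is a sketch using a hypothetical table name; the point is that the rendered column type carries no dimensionality:

```python
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects import postgresql
from sqlalchemy.schema import CreateTable

meta = MetaData()
demo = Table(
    'demo', meta,
    # Dimensionality is not part of the type: this renders as INTEGER[]
    # whether you intend a 1-D or 2-D array.
    Column('model', postgresql.ARRAY(Integer)),
)
ddl = str(CreateTable(demo).compile(dialect=postgresql.dialect()))
print(ddl)  # the model column is rendered as INTEGER[]

# Later SQLAlchemy versions (0.8+) added a `dimensions` argument, so
# ARRAY(Integer, dimensions=2) renders INTEGER[][] in the DDL, though
# PG still treats it the same at run time.
```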
I tried this
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.dialects import postgresql

engine = create_engine('postgresql://:5432/test', echo=True)
Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    sample = Column(postgresql.ARRAY(Integer))

Base.metadata.create_all(engine)

Session = sessionmaker(engine)
s = Session()
a = User()
a.name = 'test'
a.sample = [[1, 2], [3, 4]]
s.add(a)
s.commit()
I think this will solve your problem, because the PG documentation also says
However, the current implementation ignores any supplied array size limits, i.e., the behavior is the same as for arrays of unspecified length.
So if you don't declare any dimensions, the column will still accept the array you want.
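Note that SQLA returns PG arrays as nested Python lists, not tuples. If you want the tuple-of-tuples form from the question, a small helper (my own, not part of SQLAlchemy) can convert the result after querying:

```python
def to_tuples(nested):
    """Recursively convert nested lists, as returned for a PG array,
    into nested tuples."""
    if isinstance(nested, list):
        return tuple(to_tuples(item) for item in nested)
    return nested

print(to_tuples([[1, 2], [3, 4], [5, 6, 7]]))  # ((1, 2), (3, 4), (5, 6, 7))
```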