I generated a fixture:
python manage.py dumpdata --all > ./mydump.json
I emptied all my databases using:
python manage.py sqlflush | psql mydatabase -U mydbuser
But when I try to use loaddata:
python manage.py loaddata ./mydump.json
I'm receiving this error:
IntegrityError: Could not load tastypie.ApiKey(pk=1): duplicate key
value violates unique constraint "tastypie_apikey_user_id_key"
DETAIL: Key (user_id)=(2) already exists.
I'm having this problem in production and I'm out of ideas. Has anyone had a similar problem?
Run loaddata with all @receiver functions commented out, because they will be fired when loaddata loads your data. If the @receiver functions create other objects as a side effect, it will cause collisions.
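If commenting the receivers out isn't practical, a common alternative is to make each receiver skip its work when the signal comes from a fixture load: Django passes raw=True to post_save while loaddata is deserializing. A minimal sketch of that guard (plain Python, with a stand-in ApiKey class so it runs outside Django; the real receiver would use tastypie's ApiKey model):

```python
# Sketch: a loaddata-safe post_save receiver. ApiKey here is a stand-in
# so the example runs without Django; swap in the real model in practice.
class ApiKey:
    created = []                      # records which users got a key

    @classmethod
    def create(cls, user):
        cls.created.append(user)

def create_api_key(sender, instance, created=False, raw=False, **kwargs):
    # Django sends raw=True while loaddata is deserializing a fixture,
    # so skip the side effect to avoid duplicate-key collisions.
    if created and not raw:
        ApiKey.create(instance)

# A normal save fires the side effect...
create_api_key(sender=None, instance="alice", created=True)
# ...but a fixture load (raw=True) does not.
create_api_key(sender=None, instance="bob", created=True, raw=True)
print(ApiKey.created)  # ['alice']
```

The same guard works for any receiver that creates rows as a side effect, not just API keys.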
First: I believe your Unix pipe is incorrectly written.
# 1: Dump your json
$ python manage.py dumpdata --all > ./mydump.json
# 2: dump your schema
$ python manage.py sqlflush > schema.sql
# 3: launch psql
# this is how I launch psql ( seems to be more portable between rhel/ubuntu )
# you might use a bit different technique, and that is ok.
Edited (very important): Make sure you do not have any active Django connections running on your server. Then:
$ sudo -u myuser psql mydatabase
# 4: read in schema
mydatabase=# \i schema.sql
mydatabase=# ctrl-d
# 5: load back in your fixture.
$ python manage.py loaddata ./mydump.json
Second: If your pipe is OK (and it might be), then depending on your schema/data you may need to use natural keys.
# 1: Dump your json using ( -n ) natural keys.
$ python manage.py dumpdata -n --all > ./mydump.json
# followed by steps 2-5 above.
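For context on why natural keys help here: without them, dumpdata stores foreign keys as raw primary-key integers, which collide with rows that already exist; with -n, references are serialized by a natural key (for auth.User, the username) that loaddata resolves via get_by_natural_key(). A rough illustration of the difference in the fixture (simplified Python dicts, not exact dumpdata output):

```python
# Simplified fixture entries (illustrative shapes only).
without_natural_keys = {
    "model": "tastypie.apikey",
    "pk": 1,
    "fields": {"user": 2},           # hard-coded primary key -> collisions
}

with_natural_keys = {
    "model": "tastypie.apikey",
    "pk": 1,
    "fields": {"user": ["myuser"]},  # natural key, resolved by username
}

print(without_natural_keys["fields"]["user"])  # 2
print(with_natural_keys["fields"]["user"])     # ['myuser']
```

Because the natural-key form looks the row up by username rather than by pk, it survives being loaded into a database whose primary keys no longer line up with the fixture.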
Jeff Sheffield's solution is correct, but I now find that a solution like django-dbbackup is by far the most generic and simplest way to do it with any database.
python manage.py dbbackup
Restoring later is a single command as well:
python manage.py dbrestore