I often need to clone production data to investigate bugs. Even with a trivially small database, heroku db:pull (which uses Taps) takes 5+ minutes and often fails partway through. Is there an alternative way to pull the database?
Pointers to libraries or articles covering alternative approaches would also be appreciated.
The command looks like this:
$ heroku pg:push mylocaldb HEROKU_POSTGRESQL_MAGENTA --app sushi
This command will take the local database “mylocaldb” and push it to the database identified by HEROKU_POSTGRESQL_MAGENTA on the app “sushi”. In order to prevent accidental data overwrites and loss, the remote database must be empty.
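Since the question is about pulling rather than pushing, the counterpart command may be more useful; assuming a Heroku CLI that supports pg:pull, it works the same way in reverse (the database names here are just placeholders):
$ heroku pg:pull HEROKU_POSTGRESQL_MAGENTA mylocaldb --app sushi
This should create a new local database named “mylocaldb” and fill it with the data from the remote database; like pg:push, it refuses to run if a local database with that name already exists.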
Check out pgbackups. It has replaced the Heroku bundle command and will give you the Postgres equivalent of mysqldump. This is far more civilized than Taps for large datasets.
heroku pgbackups:capture
This will create a dump file and store it. To download the dump file you need its URL, which you get with
heroku pgbackups:url b001 (or whatever the id number of the backup is)
That will return a URL from which you can download your dump. You can paste it into Firefox if you want, or use curl/wget as they suggest. Then use pg_restore to load the dump file into your database, as described in the docs:
pg_restore --verbose --clean --no-acl --no-owner -h localhost -U test_user -d myapp_development /home/mike/Downloads/b001.dump
pg_restore: connecting to database for restore
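Putting the pieces together, the whole pull can be scripted. This is just a sketch; the backup ID (b001), file name, app name, and local connection details are placeholders you would adjust:
# capture a fresh backup and note its ID (shown in the command output)
heroku pgbackups:capture --app sushi
# fetch the signed URL for that backup and download the dump with curl
curl -o latest.dump "$(heroku pgbackups:url b001 --app sushi)"
# restore into a local database (create it first if it does not already exist)
createdb myapp_development
pg_restore --verbose --clean --no-acl --no-owner -h localhost -U test_user -d myapp_development latest.dump
Note that newer versions of the Heroku CLI expose the same workflow under heroku pg:backups (e.g. pg:backups:capture and pg:backups:url), so you may need to adjust the command names.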