I need to back up a huge Drupal database. It has over 1,500 tables (don't blame me, it's a Drupal thing) and is 10 GB in size.
I couldn't do it with phpMyAdmin; I just got an error when it started to build the .sql file.
I want to make sure I won't break anything or take the server down when I try to back it up.
I was going to attempt a mysqldump on my server and then copy the file down locally, but realised that this may cause unforeseen problems. So my question is: is it safe to use mysqldump on so many tables at once, and even if it is safe, are there any problems such a huge file could lead to in the future when rebuilding the database?
Thanks for the input, guys.
"is it safe to use mysqldump on so many tables at once"
I run daily backups with mysqldump on servers literally 10x this size: 15,000+ tables, 100+ GB.
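For the record, a typical invocation might look something like this (the database name drupal_db and the output path are placeholders; --single-transaction assumes InnoDB tables, which lets the dump run from a consistent snapshot without locking anything, and --quick streams rows instead of buffering whole tables in memory):

    mysqldump --single-transaction --quick --routines --triggers \
        --databases drupal_db > /backups/drupal_db.sql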
If you have not examined the contents of a file produced by mysqldump, you should, because to see its output is to understand why it is an intrinsically safe backup utility: the backups are human-readable, and consist entirely of the SQL statements necessary to create a database exactly like the one you backed up.
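By way of illustration, a dump is one table after another in roughly this shape (abridged; the table name node is just an example):

    -- Table structure for table `node`
    DROP TABLE IF EXISTS `node`;
    CREATE TABLE `node` (
      `nid` int(10) unsigned NOT NULL AUTO_INCREMENT,
      ...
    );
    -- Dumping data for table `node`
    INSERT INTO `node` VALUES (1,...),(2,...);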
In this form, their content is easily manipulated with ubiquitous tools like sed, grep, and perl, which can be used, for example, to pluck out just one table from a file for restoration.
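For instance, a minimal sed sketch that relies on the "-- Table structure for table" comment markers mysqldump writes before each table (the table name users is a placeholder; the last line printed will be the next table's header comment, which is harmless):

    sed -n '/^-- Table structure for table `users`/,/^-- Table structure for table/p' \
        backup.sql > users_only.sql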
If a restoration fails, the error will indicate the line number within the file where the error occurred. This is usually related to buggy behavior in the version of the server where the backup was created (e.g., MySQL Server 5.1 allowed you to create views in some situations where the server itself would not accept the output of its own SHOW CREATE VIEW statement; the create statement was not considered -- by the same server -- to be a valid view definition, but this was not a defect in mysqldump, or in the backup file, per se).
Restoring from a mysqldump-created backup is not lightning fast, because the server must execute all of those SQL statements, but from the perspective of safety, I would argue that there isn't a safer alternative, since it is the canonical backup tool and any bugs are likely to be found and fixed by virtue of the large user base, if nothing else.
Do not use the --force option, except in emergencies. It will cause the backup to skip over any errors encountered on the server while the backup is running, leaving your backup incomplete with virtually no warning. Instead, find and fix any errors that occur. Typical errors during backup are related to views that are no longer valid because they reference tables or columns that have been renamed or dropped, or because the user who originally created the view has been removed from the server. Fix these by redefining the view, correctly, as in the sketch below.
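As a hedged sketch, a view whose definer account no longer exists can be redefined under an existing account like so (the view, column, and table names here are hypothetical):

    mysql -u root -p -e "CREATE OR REPLACE
        DEFINER = CURRENT_USER SQL SECURITY INVOKER
        VIEW drupal_db.recent_nodes AS
        SELECT nid, title FROM drupal_db.node"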
Above all, test your backups by restoring them to a different server. If you haven't done this, you don't really have backups.
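For instance, assuming the dump above was taken with --databases (so it contains its own CREATE DATABASE and USE statements), restoring onto a scratch server is just this (host and credentials are placeholders):

    mysql -h test-server -u root -p < /backups/drupal_db.sql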
The output file can be compressed, usually substantially, with gzip/pigz, bzip2/pbzip2, xz/pixz, or zpaq. These are listed in approximate order by amount of space saved (gzip saves the least, zpaq saves the most) and speed (gzip is the fastest, zpaq is the slowest). pigz, pbzip2, pixz, and zpaq will take advantage of multiple cores, if you have them. The others can only use a single core at a time.
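A minimal sketch of compressing on the fly with pigz, so the uncompressed 10 GB file never touches the disk (names are placeholders as before):

    mysqldump --single-transaction --quick --databases drupal_db | pigz > drupal_db.sql.gz
    # restore by streaming it back through the decompressor:
    pigz -dc drupal_db.sql.gz | mysql -h test-server -u root -p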
Use mysqlhotcopy; it works well with large databases.
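Worth hedging, though: mysqlhotcopy is a Perl script that only handles MyISAM and ARCHIVE tables (and was removed in MySQL 5.7), so it won't help if the Drupal tables are InnoDB. Where it does apply, usage is roughly this (credentials and paths are placeholders):

    mysqlhotcopy -u root -p secret drupal_db /backups/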