Yes, it reduces data transfer a lot! But what I wanted to understand was whether mysqldump caches the data on my PC or on the remote server. From what I understood, it does transfer all the data to my PC first.
The mysqldump process, however, is just querying the database, and the queries themselves run on the MySQL server with the usual priority and cause (almost) the same impact on the server. mysqldump is not the bottleneck.
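For example, compare taking the dump from your PC with taking it on the server and pulling back only the compressed file. This is just a sketch; the hostname, user, and database names below are placeholders, and the second command assumes the server-side credentials come from ~/.my.cnf there:
# runs on your PC: every row crosses the network before being written to local_dump.sql
mysqldump -h db.example.com -u backup -p mydb > local_dump.sql
# runs on the server; only the gzipped dump file crosses the network
ssh user@db.example.com "mysqldump mydb | gzip" > mydb.sql.gz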
By default, the mysqldump utility, which lets you back up a MySQL database, locks all tables until the backup is complete.
mysqldump requires at least the SELECT privilege for dumped tables, SHOW VIEW for dumped views, TRIGGER for dumped triggers, LOCK TABLES if the --single-transaction option is not used, and (as of MySQL 8.0.21) the PROCESS privilege if the --no-tablespaces option is not used.
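If you are dumping InnoDB tables and want to avoid the table locks, --single-transaction takes a consistent snapshot instead. A rough sketch, where the user and database names are placeholders:
# minimal privileges for a plain dump (run as an admin on the server):
#   GRANT SELECT, SHOW VIEW, TRIGGER, LOCK TABLES ON mydb.* TO 'backup'@'localhost';
# with --single-transaction (InnoDB), the dump uses a consistent snapshot and tables stay writable
mysqldump --single-transaction -u backup -p mydb > mydb.sql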
As skaffman says, use the --where option:
mysqldump --opt --where="1 limit 1000000" database
Of course, that would give you the first million rows from every table.
If you want to get n records from a specific table you can do something like this:
mysqldump --opt --where="1 limit 1000000" database table > dump.sql
This will dump the first 1000000 rows from the table named table into the file dump.sql.
Since the default order is ASC, which is rarely what you want in this situation, your database design has to cooperate for DESC to work out of the box. If all your tables have a single primary key column with the same name (natural or surrogate), you can easily dump the n latest records using:
mysqldump --opt --where="1 ORDER BY id DESC limit 1000000" --all-databases > dump.sql
This is a perfect example of why you should always name your PKs id and avoid composite PKs, even in association tables (use surrogate keys instead).
mysqldump can be given a WHERE clause (via the --where option) that it appends to the SELECT it runs for each dumped table, and you can include a "LIMIT X" clause in that string to restrict the number of rows.
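The --where value can be any condition, optionally followed by a LIMIT. For instance, a sketch where the table and column names are made up:
# dumps at most 50000 rows of the hypothetical orders table matching the condition
mysqldump --opt --where="created_at >= '2024-01-01' LIMIT 50000" database orders > orders_sample.sql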