We ran an ALTER TABLE today that took down the DB. We failed over to the slave, and in the post-mortem we discovered this in the MySQL error.log:
InnoDB: ERROR: the age of the last checkpoint is 90608129,
InnoDB: which exceeds the log group capacity 90593280.
InnoDB: If you are using big BLOB or TEXT rows, you must set the
InnoDB: combined size of log files at least 10 times bigger than the
InnoDB: largest such row.
This error rings true because we were working on a very large table that contains BLOB data types.
The best answer we found online said
To solve it, you need to stop MySQL cleanly (very important), delete the existing InnoDB log files (probably ib_logfile* in your MySQL data directory, unless you've moved them), then adjust the innodb_log_file_size to suit your needs, and then start MySQL again. This article from the MySQL performance blog might be instructive.
and in the comments
Yes, the database server will effectively hang for any updates to InnoDB tables when the log fills up. It can cripple a site.
which is, I guess, what happened, based on our current (default) innodb_log_file_size of 48 MB:
SHOW GLOBAL VARIABLES LIKE '%innodb_log%';
+-----------------------------+----------+
| Variable_name | Value |
+-----------------------------+----------+
| innodb_log_buffer_size | 8388608 |
| innodb_log_compressed_pages | ON |
| innodb_log_file_size | 50331648 |
| innodb_log_files_in_group | 2 |
| innodb_log_group_home_dir | ./ |
+-----------------------------+----------+
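For what it's worth, the checkpoint age from that error can also be watched live (this needs the PROCESS privilege). In the LOG section of the output, the age is "Log sequence number" minus "Last checkpoint at", and InnoDB blocks writes once that difference approaches the log group capacity reported in the error above:

-- The LOG section reports "Log sequence number" and "Last checkpoint at";
-- their difference is the checkpoint age the error message complained about.
SHOW ENGINE INNODB STATUS\G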
So, this leads me to two pointed questions and one open-ended one:

1. How can we determine the largest row, so that we can set our innodb_log_file_size to be bigger than that?
2. What is the consequence of raising innodb_log_file_size? I've read about slower crash recovery with bigger logs.
3. Is there anything else we should worry about, given that we have a large table (650k rows, 6169.8 GB) full of BLOB columns?

We're running MySQL 5.6, and here's our my.cnf:
[mysqld]
#defaults
basedir = /opt/mysql/server-5.6
datadir = /var/lib/mysql
port = 3306
socket = /var/run/mysqld/mysqld.sock
tmpdir = /tmp
bind-address = 0.0.0.0
#logs
log_error = /var/log/mysql/error.log
expire_logs_days = 4
slow_query_log = on
long_query_time = 1
innodb_buffer_pool_size = 11G
#http://stackoverflow.com/a/10866836/182484
collation-server = utf8_bin
init-connect ='SET NAMES utf8'
init_connect ='SET collation_connection = utf8_bin'
character-set-server = utf8
max_allowed_packet = 64M
skip-character-set-client-handshake
#cache
query_cache_size = 268435456
query_cache_type = 1
query_cache_limit = 1048576
As a follow-up to the suggestions listed below, I began investigating the size of the table in question. I ran a script that wrote the combined byte size of the three BLOB fields to a table called pen_sizes. Here's the result of getting the largest byte sizes:
select pen_size as bytes,
       pen_size / 1024 / 1024 as mb,
       pen_id
from pen_sizes
group by pen_id
order by bytes desc
limit 40;
+---------+------------+--------+
| bytes | mb | pen_id |
+---------+------------+--------+
| 3542620 | 3.37850571 | 84816 |
| 3379107 | 3.22256756 | 74796 |
| 3019237 | 2.87936878 | 569726 |
| 3019237 | 2.87936878 | 576506 |
| 3019237 | 2.87936878 | 576507 |
| 2703177 | 2.57795048 | 346965 |
| 2703177 | 2.57795048 | 346964 |
| 2703177 | 2.57795048 | 93706 |
| 2064807 | 1.96915340 | 154627 |
| 2048592 | 1.95368958 | 237514 |
| 2000695 | 1.90801144 | 46798 |
| 1843034 | 1.75765419 | 231988 |
| 1843024 | 1.75764465 | 230423 |
| 1820514 | 1.73617744 | 76745 |
| 1795494 | 1.71231651 | 650208 |
| 1785353 | 1.70264530 | 74912 |
| 1754059 | 1.67280102 | 444932 |
| 1752609 | 1.67141819 | 76607 |
| 1711492 | 1.63220596 | 224574 |
| 1632405 | 1.55678272 | 76188 |
| 1500157 | 1.43066120 | 77256 |
| 1494572 | 1.42533493 | 137184 |
| 1478692 | 1.41019058 | 238547 |
| 1456973 | 1.38947773 | 181379 |
| 1433240 | 1.36684418 | 77631 |
| 1421452 | 1.35560226 | 102930 |
| 1383872 | 1.31976318 | 77627 |
| 1359317 | 1.29634571 | 454109 |
| 1355701 | 1.29289722 | 631811 |
| 1343621 | 1.28137684 | 75256 |
| 1343621 | 1.28137684 | 75257 |
| 1334071 | 1.27226925 | 77626 |
| 1327063 | 1.26558590 | 129731 |
| 1320627 | 1.25944805 | 636914 |
| 1231918 | 1.17484856 | 117269 |
| 1223975 | 1.16727352 | 75103 |
| 1220233 | 1.16370487 | 326462 |
| 1220233 | 1.16370487 | 326463 |
| 1203432 | 1.14768219 | 183967 |
| 1200373 | 1.14476490 | 420360 |
+---------+------------+--------+
This makes me believe that the average row size is closer to 1 MB than the 10 MB suggested. Maybe the table size I listed earlier includes the indexes, too? I ran:
SELECT table_name AS "Tables",
       round(((data_length + index_length) / 1024 / 1024), 2) "Size in MB"
FROM information_schema.TABLES
WHERE table_schema = 'codepen';
+-------------------+------------+
| Tables | Size in MB |
+-------------------+------------+
...snip
| pens | 6287.89 |
...snip
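A quicker cross-check: information_schema also tracks an estimated average row length for the table (it comes from index statistics, so treat it as approximate):

SELECT table_rows,
       avg_row_length,
       round(data_length / 1024 / 1024, 2) AS data_mb,
       round(index_length / 1024 / 1024, 2) AS index_mb
FROM information_schema.TABLES
WHERE table_schema = 'codepen'
  AND table_name = 'pens';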
innodb_log_file_size is the size (in bytes) of a single redo log file. MySQL writes to multiple such log files in rotation; this system variable sets the size of each one.
The redo log is a disk-based data structure used during crash recovery to correct data written by incomplete transactions. During normal operations, the redo log encodes requests to change table data that result from SQL statements or low-level API calls.
The InnoDB buffer pool size sets how much memory (RAM) is allocated to MySQL for caching data and indexes. It should be set to as large a value as possible without causing excessive paging by other processes.
0. Preliminary information
Your settings:
innodb_log_file_size = 50331648
innodb_log_files_in_group = 2
Therefore your "log group capacity" = 2 × 50331648 bytes = 100663296 bytes ≈ 96 MB. (The error message reports a slightly smaller figure, 90593280 bytes, which is evidently about 90% of the combined size: InnoDB keeps a safety margin for itself.)
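The same figure can be computed directly from the two system variables, without copying numbers by hand:

-- 2 x 50331648 = 100663296 bytes
SELECT @@innodb_log_file_size * @@innodb_log_files_in_group
       AS combined_log_size_bytes;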
1. How to determine the largest row
There is no direct method. But one can easily estimate the size of a given row from the per-column storage requirements documented in the MySQL manual (compression should not matter to us here if, as I assume, rows are not compressed in the log files). In your case the BLOB columns dominate, so summing their byte lengths per row, as in the sketch below, is the fastest route.
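As a sketch of that calculation (blob_a, blob_b, blob_c are placeholder names; substitute the real BLOB columns of pens):

-- LENGTH() returns the byte length of a BLOB; COALESCE() guards against NULLs.
SELECT pen_id,
       COALESCE(LENGTH(blob_a), 0)
     + COALESCE(LENGTH(blob_b), 0)
     + COALESCE(LENGTH(blob_c), 0) AS bytes
FROM pens
ORDER BY bytes DESC
LIMIT 1;

This is essentially what your pen_sizes script already computed.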
2. Impact of innodb_log_file_size
Reference manual:
The larger the value, the less checkpoint flush activity is needed in the buffer pool, saving disk I/O. Larger log files also make crash recovery slower, although improvements to recovery performance in MySQL 5.5 and higher make the log file size less of a consideration.
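Beyond recovery time, one widely used sizing method (quite possibly the one in the performance-blog article mentioned above) is to measure how much redo the server actually generates and make the combined log size big enough to hold roughly an hour of it. The Innodb_os_log_written status counter makes that a two-sample measurement:

-- Cumulative bytes written to the redo log; sample twice, 60 seconds apart.
SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';
SELECT SLEEP(60);
SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';
-- (second value - first value) * 60 ~= redo bytes per hour, a reasonable
-- target for innodb_log_file_size * innodb_log_files_in_group.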
3. Anything else to worry about
6169.8 GB / 650k rows = about 10 MB per row on average. This is a serious problem per se if you intend to use your database in a transactional, multi-user situation. Consider storing your BLOBs as files outside of the database, or at least storing them in a separate MyISAM (non-transactional) table.
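Whatever size you pick, applying it on 5.6 is simpler than the advice quoted in the question: per the 5.6 reference manual you no longer delete the ib_logfile* files by hand, because InnoDB detects the changed setting on startup and rebuilds them itself. A conservative sequence:

-- 1. Make the coming shutdown a full clean one (any value other than 2 is
--    safe for a log resize; 0 additionally runs a complete purge).
SET GLOBAL innodb_fast_shutdown = 0;
-- 2. Outside of SQL: stop mysqld, raise innodb_log_file_size in my.cnf,
--    and start mysqld again; watch the error log for the resize messages.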