
aws rds, lost connection to MySQL server during query, when importing large file

I'm trying to import a 1.4 GB MySQL dump file into AWS RDS. I tried the 2 CPU / 4 GB memory instance option, but I still get the error: Lost connection to MySQL server during query. My question is: how do I import a large MySQL file into RDS?

asked May 14 '17 by kenpeter


People also ask

How do I fix the lost connection to MySQL server during query?

Open the MySQL Workbench Preferences. Check whether the SSH Timeout and DBMS Timeout values are set to only a few seconds. Increase the default connection timeout values, save the settings, close MySQL Workbench, and reopen the connection to see if you are able to connect to the database.

What can the maximum size be for each database in RDS?

You can now create Amazon RDS for MySQL and Amazon RDS for MariaDB database instances with up to 64TB of storage when using R5 instance types. Previously, R5 instances supported 16TB of storage. The new storage limit is available when using the Provisioned IOPS (IO1) and General Purpose (GP2) storage types.

What are some of the common causes why you Cannot connect to a DB instance on AWS?

When you can't connect to a DB instance, the following are common causes: Inbound rules – The access rules enforced by your local firewall and the IP addresses authorized to access your DB instance might not match. The problem is most likely the inbound rules in your security group.


2 Answers

MySQL Server and the MySQL client both have a parameter max_allowed_packet.

This is designed as a safety check to prevent the useless and disruptive allocation of massive amounts of memory that could occur if data corruption caused the receiving end of the connection to believe a packet¹ to be extremely large.

When transmitting queries and result sets, neither client nor server is allowed to send any single "thing" (usually a query or the value of a column) that is larger than max_allowed_packet -- the sending side will throw an error and refuse to send it if you try, and the receiving side will throw an error and then close the connection on you (so the client may or may not actually report the error thrown -- it may simply report that the connection was lost).
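
For illustration, the two failure modes look roughly like this (error numbers are from the standard MySQL client error list; exact wording varies by version). When the client's own limit would be exceeded, the client refuses up front:

ERROR 2020 (HY000): Got packet bigger than 'max_allowed_packet' bytes

When the server's limit is exceeded, the server closes the connection, and the client typically reports only:

ERROR 2013 (HY000): Lost connection to MySQL server during query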

Unfortunately, the client setting and the server setting for this same parameter are two independent settings, and they are uncoordinated. There is technically no requirement that they be the same, but discrepant values only work as long as nothing sent ever exceeds the limit imposed by the other side.

Worse, their defaults are actually different. In recent releases, the server defaults to 4 MiB, while the client defaults to 16 MiB.

Finding the server's value (SELECT @@MAX_ALLOWED_PACKET) and then setting the client to match the server (mysql --max-allowed-packet=max_size_in_bytes) will "fix" the mysterious Lost connection to MySQL server during query error message by causing the client to Do The Right Thing™ and not attempt to send a packet that the server won't accept. But you still get an error -- just a more informative one.
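
As a concrete sketch (the endpoint, user, and database names here are placeholders), matching the client to the server looks like this:

$ mysql -h mydb.example.rds.amazonaws.com -u admin -p -e 'SELECT @@MAX_ALLOWED_PACKET'
$ mysql --max-allowed-packet=4194304 -h mydb.example.rds.amazonaws.com -u admin -p mydb < dumpfile.sql   # assuming the first command reported 4194304 (4 MiB)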

So, we need to reconfigure both sides to something more appropriate... but how do we know the right value?

You have to know your data. What's the largest possible value in any column? If that's a stretch (and in many cases, it is), you can simply start with a reasonably large value based on the longest line in a dump file.

Use this one-liner to find that:

$ perl -ne '$max = length($_) > $max ? length($_) : $max; END { print "$max\n" }' dumpfile.sql

The output will be the length, in bytes, of the longest line in your file.

You might want to round it up to the next power of two, or at least the next increment of 1024 (1024 is the granularity accepted by the server -- values are rounded), or whatever you're comfortable with, but this result gives you a value that should allow you to load your dump file without issue.
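
For example, supposing the one-liner reported 25000000 (a hypothetical value), you can do the rounding in the shell:

$ echo $(( (25000000 + 1023) / 1024 * 1024 ))   # next multiple of 1024
25000960
$ echo $(( 32 * 1024 * 1024 ))                  # or go up to a round 32 MiB
33554432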

Now that we've established a new value that should work, change max_allowed_packet on the server to the new value you've just discovered. In RDS, this is done in the parameter group. Be sure the value has been applied to your server (SELECT @@GLOBAL.MAX_ALLOWED_PACKET;).
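
If you manage the parameter group with the AWS CLI, the change looks roughly like this (the group name my-mysql-params is a placeholder; max_allowed_packet is a dynamic parameter, so it can be applied immediately):

$ aws rds modify-db-parameter-group \
    --db-parameter-group-name my-mysql-params \
    --parameters "ParameterName=max_allowed_packet,ParameterValue=33554432,ApplyMethod=immediate"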

Then, you'll need to pass the same value to your client program, e.g. mysql --max-allowed-packet=33554432, if this value is larger than the default client value. You can find the default client value with this:

$ mysql --help --verbose | grep '^max.allowed.packet'
max-allowed-packet                16777216

The client also allows you to specify the value in SI units, like --max-allowed-packet=32M for 32 MiB (33554432 bytes).

This parameter -- and the fact that there are two of them, one for the client and one for the server -- causes a lot of confusion and has led to the spread of some bad information: you'll find people on the Internet telling you to set it to ridiculous values like 1G (1073741824 bytes, the maximum value possible), but this is not a good strategy because, as mentioned above, this is a protective mechanism. If a packet happens to get corrupted on the network in just the wrong way, the server could conclude that it needs to allocate a substantial amount of memory just so that the packet can be loaded into a buffer -- and this could lead to system impairment or a denial of service by starving the system of available memory.

The actual amount of memory the server normally allocates for reading packets from the wire is net_buffer_length. The size indicated in the packet isn't actually allocated unless it's larger than net_buffer_length.
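
A quick way to inspect both server-side values at once (endpoint and user are placeholders):

$ mysql -h mydb.example.rds.amazonaws.com -u admin -p -e 'SELECT @@GLOBAL.max_allowed_packet, @@GLOBAL.net_buffer_length'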


¹ A packet here refers to a layer 7 packet in the MySQL Client/Server Protocol sense, not to be confused with an IP packet or datagram.

answered Oct 16 '22 by Michael - sqlbot


Your connection may time out if you are importing from your local computer or laptop, or from any machine that is not in the same region as the RDS instance.

Try running the import from an EC2 instance that has access to the RDS instance, as sketched below. You will need to upload the file to S3, SSH into the EC2 instance, and run the import into RDS from there.
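
A minimal sketch of that workflow, assuming hypothetical names for the bucket, key file, EC2 host, and RDS endpoint:

# From your machine: upload the dump, then log in to the EC2 instance.
$ aws s3 cp dumpfile.sql s3://my-bucket/dumpfile.sql
$ ssh -i my-key.pem ec2-user@ec2-12-34-56-78.compute-1.amazonaws.com

# On the EC2 instance: pull the dump down and import over the in-region link.
$ aws s3 cp s3://my-bucket/dumpfile.sql .
$ mysql --max-allowed-packet=33554432 -h mydb.example.rds.amazonaws.com -u admin -p mydb < dumpfile.sql

The EC2 instance needs the mysql client installed and S3 read access, e.g. via an instance role.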

answered Oct 16 '22 by Yeshodhan Kulkarni