 

Reduce memory usage in Google Cloud SQL

We have a simple MySQL database with a few tables, managed by Google Cloud SQL. The database has a few hundred million rows and fairly light read/write traffic. We are nowhere near the storage limits, yet memory usage is climbing linearly with storage and will soon become a scalability issue.

Should memory usage be rising linearly with storage? What should we consider to lower our memory usage? We haven't customized any database flags.

I'm not seeing any information about memory usage and best practices in the Google Cloud documentation. It seems way too early for us to be thinking about horizontal scaling/sharding.

Justin asked Dec 13 '18






1 Answer

MySQL will automatically allocate around 80% of the instance's memory to the InnoDB buffer pool, which caches table data and indexes. This is done to avoid heavy disk I/O and provide good response times.

So, by default, MySQL tries to hold as much data as possible in memory, which is why memory usage appears to scale linearly as more data is inserted.
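If you want to confirm this on your own instance, standard MySQL commands (nothing Cloud SQL-specific) show both the configured buffer pool size and how much of it currently holds data:

```sql
-- Configured InnoDB buffer pool size, in bytes.
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

-- Buffer pool page counters: _total, _free, _data, etc.
-- Pages are 16 KB by default (see innodb_page_size).
SHOW STATUS LIKE 'Innodb_buffer_pool_pages%';
```

If `Innodb_buffer_pool_pages_data` keeps growing roughly in step with your table size, you are simply watching the cache fill up, not a leak.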

This does not necessarily mean you are close to facing issues. It is really a matter of what resources your MySQL machine has. Give it 128 GB of RAM and insert ~120 GB of data, and it will hold ~102 GB of that data in memory (0.8 × 128 GB), maintaining performance; a 64 GB machine with the same 120 GB of data will show slower response times, since far more reads must go to disk.
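If memory pressure does become a real concern before you outgrow the machine, you can cap the buffer pool yourself. On Cloud SQL you can't use `SET GLOBAL` (it requires privileges Cloud SQL does not grant); instead you set the `innodb_buffer_pool_size` database flag. A minimal sketch, assuming a hypothetical instance named `my-instance` and that the flag is supported for your instance tier (changing flags may restart the instance):

```sh
# Cap the InnoDB buffer pool at 4 GiB (value is in bytes).
# "my-instance" is a placeholder; substitute your Cloud SQL instance name.
# Note: --database-flags replaces the full flag list, so include any
# other flags you already rely on.
gcloud sql instances patch my-instance \
  --database-flags=innodb_buffer_pool_size=4294967296
```

Bear in mind that a smaller buffer pool trades memory for disk I/O, so watch query latency after lowering it.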

A side note:

If you store hundreds of millions of rows and are willing to compromise a little on query times (milliseconds to seconds), I would suggest checking out BigQuery. You might end up paying less than for Cloud SQL, with no worries about scale (ever..) and no DBA/DevOps maintenance (memory, CPU, indexing, etc.).

Royzipuff answered Sep 26 '22