Google Compute Engine VM disk is very slow

We just switched over to Google Compute Engine and are having major issues with disk speed: it's roughly 5% of what we saw on Linode, or worse. Writes never exceed 20 MB/s and reads never exceed 10 MB/s; most of the time it's about 15 MB/s for writes and 5 MB/s for reads.

We're currently running an n1-highmem-4 (4 vCPUs, 26 GB memory) machine. CPU and memory aren't the bottleneck: we're just running a script that reads rows from a PostgreSQL database, processes them, and writes them back to PostgreSQL. It's a routine job that updates database rows in batches. We tried running 20 processes in parallel to take advantage of the multiple cores, but overall progress is still slow.

We suspect the disk may be the bottleneck, because the observed disk traffic is abnormally low.
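One way to check whether the disk, rather than CPU or memory, is the limiting factor is to watch per-device utilization while the job runs. A minimal sketch using iostat (assuming a Debian/Ubuntu image, where it is provided by the sysstat package):

    # install sysstat if it isn't already present
    sudo apt-get install -y sysstat

    # print extended per-device stats every second; a %util column
    # near 100 while read/write throughput stays low suggests the
    # disk is saturated at a low throughput limit
    iostat -x 1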

Finally, we decided to run a benchmark. We found that the disk is not only slow, but also seems to have a major, reproducible bug:

  1. create & connect to instance
  2. run the benchmark at least three times:

    dd if=/dev/zero bs=1024 count=5000000 of=~/5Gb.file
    

We found that it becomes extremely slow, and we weren't able to finish the benchmark at all.
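As an aside, bs=1024 issues tiny 1 KiB writes, and without a sync option dd largely measures the page cache rather than the disk. A sketch of an invocation that should better reflect sustained disk throughput (the file name is arbitrary, kept from the original command):

    # write ~5 GB in 1 MiB blocks and flush data to disk before
    # reporting the rate, so the page cache doesn't inflate it
    dd if=/dev/zero of=~/5Gb.file bs=1M count=5000 conv=fdatasync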

asked May 15 '14 by user3641595

1 Answer

Persistent Disk performance is proportional to the size of the disk itself and to the VM that it is attached to. The larger the disk (or the VM), the higher the performance, so in essence, the price you pay for the disk or the VM covers not only the disk/CPU/RAM but also the IOPS and throughput.

Quoting the Persistent Disk documentation:

Persistent disk performance depends on the size of the volume and the type of disk you select. Larger volumes can achieve higher I/O levels than smaller volumes. There are no separate I/O charges as the cost of the I/O capability is included in the price of the persistent disk.

Persistent disk performance can be described as follows:

  • IOPS performance limits grow linearly with the size of the persistent disk volume.
  • Throughput limits also grow linearly, up to the maximum bandwidth for the virtual machine that the persistent disk is attached to.
  • Larger virtual machines have higher bandwidth limits than smaller virtual machines.

There's also a more detailed pricing chart on the page which shows what you get per GB of space that you buy (data below is current as of August 2014):

                                         Standard disks   SSD persistent disks

Price (USD/GB per month)                      $0.04             $0.325
Maximum Sustained IOPS
  Read IOPS/GB                                 0.3               30
  Write IOPS/GB                                1.5               30
  Read IOPS/volume per VM                     3,000            10,000
  Write IOPS/volume per VM                   15,000            15,000
Maximum Sustained Throughput
  Read throughput/GB (MB/s)                    0.12              0.48
  Write throughput/GB (MB/s)                   0.09              0.48
  Read throughput/volume per VM (MB/s)          180               240
  Write throughput/volume per VM (MB/s)         120               240

There is also a concrete example on the page of what a particular disk size will give you:

As an example of how you can use the performance chart to determine the disk volume you want, consider that a 500GB standard persistent disk will give you:

  • (0.3 × 500) = 150 small random reads per second
  • (1.5 × 500) = 750 small random writes per second
  • (0.12 × 500) = 60 MB/s of large sequential reads
  • (0.09 × 500) = 45 MB/s of large sequential writes
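If the VM is using a small standard persistent disk, sustained throughput in the 5-20 MB/s range is therefore expected. Provisioning a larger disk (or an SSD persistent disk) raises these limits; a minimal sketch using the gcloud CLI (the disk name, VM name, and zone below are placeholders, not values from the question):

    # create a 500 GB standard persistent disk; per the chart above,
    # that should allow ~60 MB/s sequential reads and ~45 MB/s writes
    gcloud compute disks create my-data-disk --size=500GB --zone=us-central1-a

    # attach the new disk to the existing VM
    gcloud compute instances attach-disk my-vm --disk=my-data-disk --zone=us-central1-a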
answered Sep 30 '22 by Misha Brukman