
Why linux disables disk write buffer when system ram is greater than 8GB?

Background:

I was setting up an Ubuntu machine on my desktop computer. The whole process took an entire day, including installing the OS and software. I didn't think much of it at the time, though.

Then I tried doing my work on the new machine, and it was significantly slower than my laptop, which was very strange.

I ran iotop and found that disk traffic while decompressing a package was around 1-2 MB/s, which is definitely abnormal.

Then, after hours of research, I found this article that describes exactly the same problem and provides an ugly workaround:

We recently had a major performance issue on some systems, where disk write speed is extremely slow (~1 MB/s — where normal performance is 150+MB/s).

...

EDIT: to solve this, either remove enough RAM, or add “mem=8G” as kernel boot parameter (e.g. in /etc/default/grub on Ubuntu — don’t forget to run update-grub !)
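The quoted workaround can be applied roughly like this on Ubuntu (a sketch, assuming GRUB 2 and the default config paths; your `GRUB_CMDLINE_LINUX_DEFAULT` line may already contain other options that should be kept):

```shell
# Edit the GRUB defaults and cap the memory visible to the kernel at 8 GB.
# Append mem=8G to the existing GRUB_CMDLINE_LINUX_DEFAULT line, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mem=8G"
sudo nano /etc/default/grub

# Regenerate the GRUB configuration, then reboot for it to take effect.
sudo update-grub
sudo reboot
```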

I also looked at this post

https://lonesysadmin.net/2013/12/22/better-linux-disk-caching-performance-vm-dirty_ratio/

and did

cat /proc/vmstat | egrep "dirty|writeback"

output is:

nr_dirty 10
nr_writeback 0
nr_writeback_temp 0
nr_dirty_threshold 0             <-- zero
nr_dirty_background_threshold 0  <-- zero

Those two threshold values were 8223 and 4111 when mem=8g was set.
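For reference, the configured ratios and the thresholds the kernel computed from them can be inspected like this (a sketch; the `vm.*` keys are the standard writeback sysctls):

```shell
# The configured ratios (percent of dirtyable memory)
sysctl vm.dirty_ratio vm.dirty_background_ratio

# The thresholds the kernel actually computed from them, in pages
grep -E "nr_dirty_threshold|nr_dirty_background_threshold" /proc/vmstat
```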

So it appears that when system memory is greater than 8GB (32GB in my case), the actual dirty thresholds drop to 0 and the write buffer is effectively disabled, regardless of the vm.dirty_background_ratio and vm.dirty_ratio settings (5% and 10% in my case)?

Why is this happening?

Is this a bug in the kernel or somewhere else?

Is there a solution other than unplugging RAM or using "mem=8g"?

UPDATE: I'm running the 3.13.0-53-generic kernel with Ubuntu 12.04 32-bit, so it's possible that this only happens on 32-bit systems.

asked Jan 08 '23 by user3528438


1 Answer

If you use a 32 bit kernel with more than 2G of RAM, you are running in a sub-optimal configuration where significant tradeoffs must be made. This is because in these configurations, the kernel can no longer map all of physical memory at once.

As the amount of physical memory increases beyond this point, the tradeoffs become worse and worse, because the struct page array that is used to manage all physical memory must be kept mapped at all times, and that array grows with physical memory.

The physical memory that isn't directly mapped by the kernel is called "highmem", and by default the writeback code treats highmem as undirtyable. This is what results in your zero values for the dirty thresholds.
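A rough sketch of the arithmetic (an assumed simplification of the kernel's actual dirtyable-memory calculation, not the real formula):

```shell
# Simplified model: dirty threshold = dirtyable pages * dirty_ratio / 100.
# With highmem treated as undirtyable on a 32-bit kernel, essentially no
# pages count as dirtyable, so the computed threshold collapses to zero.
dirtyable_pages=0   # highmem excluded => ~0 dirtyable pages (illustrative)
dirty_ratio=10      # vm.dirty_ratio from the question
echo $(( dirtyable_pages * dirty_ratio / 100 ))   # prints 0
```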

You can change this by setting /proc/sys/vm/highmem_is_dirtyable to 1, but with that much memory you will be far better off installing a 64-bit kernel instead.
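If you do go the sysctl route, it might look like this (a sketch; the runtime change needs root and is lost on reboot, hence the second line to persist it):

```shell
# Treat highmem as dirtyable (takes effect immediately, until reboot)
sudo sysctl -w vm.highmem_is_dirtyable=1

# Persist the setting across reboots
echo "vm.highmem_is_dirtyable = 1" | sudo tee -a /etc/sysctl.conf
```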

answered Jan 15 '23 by caf