The problem is that I am not running any performance-intensive program, yet the system load stays high. The normal load is below 0.05, but since yesterday it has constantly been above 1.5. After some time digging into it, I think the I/O activity of jbd2/sda2-8 is what is driving the load up.
Later I went to the room where the PC is located and found that the HDD LED keeps flashing, maybe many times per second, which confirms that I/O usage really is the problem.
This thread, https://www.webhostingtalk.com/showthread.php?t=1148545, explains that jbd2 is not the root cause and that I have to find out which program is actually reading from or writing to the disk. Following that advice, I found that the real cause is snapd.
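To see which processes are actually writing, `sudo iotop -obPat` works well, but the same counters can also be read directly from /proc. Below is a minimal sketch (the helper names `read_io` and `top_writers` are mine, not from any answer) that ranks visible processes by their cumulative `write_bytes` counter; run it as root to see all processes, and sample it twice to spot current activity, since the counters are lifetime totals:

```python
#!/usr/bin/env python3
"""Rank processes by cumulative bytes written, using /proc/<pid>/io.
A rough stand-in for `sudo iotop -obPat`; needs root to read other
users' processes, otherwise it only sees your own."""
import os

def read_io(pid):
    """Return the write_bytes counter for a PID, or None if unreadable."""
    try:
        with open(f"/proc/{pid}/io") as f:
            for line in f:
                key, _, value = line.partition(":")
                if key == "write_bytes":
                    return int(value)
    except OSError:  # permission denied, process exited, etc.
        return None

def top_writers(n=10):
    """List (pid, comm, write_bytes) for the n biggest writers we can see."""
    rows = []
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        wb = read_io(entry)
        if wb is None:
            continue
        try:
            with open(f"/proc/{entry}/comm") as f:
                comm = f.read().strip()
        except OSError:
            comm = "?"
        rows.append((int(entry), comm, wb))
    return sorted(rows, key=lambda r: r[2], reverse=True)[:n]

if __name__ == "__main__":
    for pid, comm, wb in top_writers():
        print(f"{pid:>7} {comm:<20} {wb:>12} B written")
```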
I tried temporarily stopping the snapd service, and the load went down immediately.
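For reference, "temporarily stopping" snapd means stopping both the service and its activation socket, since otherwise the socket restarts the daemon on demand; the unit names below are the standard ones on Ubuntu 20.04:

```shell
# Stop snapd and the socket that would re-activate it
sudo systemctl stop snapd.service snapd.socket
uptime    # watch the load average fall back

# Bring it back later with:
sudo systemctl start snapd.socket snapd.service
```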
OS: Ubuntu 20.04 focal
Kernel: x86_64 Linux 5.4.0-33-generic
Uptime: 12h 8m
Packages: 985
Shell: bash 5.0.16
Disk: 11G / 231G (5%)
CPU: Intel Core2 Duo E8600 @ 2x 3.336GHz
GPU: GeForce 9300 GE
RAM: 766MiB / 3935MiB
You can see that it is a dual-core CPU, so a load of 1.5 is really high.
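Worth noting: on Linux the load average counts not only runnable tasks but also tasks in uninterruptible (D) sleep, which is typical of processes stuck waiting on disk I/O. That is why heavy I/O can push the load above 1 even with the CPU idle. A quick way to list such tasks:

```shell
# Processes currently in uninterruptible sleep (state D),
# i.e. blocked on I/O; these inflate the load average.
ps -eo pid,stat,comm | awk '$2 ~ /D/'
```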
Total DISK READ: 0.00 B/s | Total DISK WRITE: 844.51 K/s
Current DISK READ: 0.00 B/s | Current DISK WRITE: 1643.16 K/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
306 be/3 root 0.00 B/s 0.00 B/s 0.00 % 69.00 % [jbd2/sda2-8]
972 be/4 root 0.00 B/s 324.81 K/s 0.00 % 0.15 % snapd
919 be/4 root 0.00 B/s 259.85 K/s 0.00 % 0.12 % snapd
926 be/4 root 0.00 B/s 259.85 K/s 0.00 % 0.12 % snapd
1 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % init maybe-ubiquity
2 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kthreadd]
3 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [rcu_gp]
4 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [rcu_par_gp]
6 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kworker/0:0H-kblockd]
8 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [mm_percpu_wq]
9 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [ksoftirqd/0]
10 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [rcu_sched]
11 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [migration/0]
12 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [idle_inject/0]
14 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [cpuhp/0]
15 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [cpuhp/1]
16 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [idle_inject/1]
17 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [migration/1]
18 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [ksoftirqd/1]
20 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kworker/1:0H-kblockd]
21 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kdevtmpfs]
22 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [netns]
23 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [rcu_tasks_kthre]
24 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kauditd]
26 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [khungtaskd]
jbd2/sda2-8 is using 69.00% of the I/O time while its throughput is zero, just like some previously reported problems. (jbd2 is the kernel's ext4 journaling thread, so its activity reflects writes issued by other processes.) The difference here is that I was doing nothing, and I did not know which program was causing the problem. I made no large software changes recently; the only change was installing and then uninstalling vsftpd.
I have been looking for solutions online; these are the ones I found (and I tried most of them):
For now I have temporarily fixed the problem by stopping the snapd service.
I uninstalled snapd and the problem was solved.
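For completeness, removing snapd on Ubuntu 20.04 looks roughly like this; note that it is a drastic fix, because purging snapd also removes every installed snap:

```shell
# Stop snapd first, then purge the package
sudo systemctl stop snapd.service snapd.socket
sudo apt purge snapd
# Optionally prevent apt from pulling it back in
sudo apt-mark hold snapd
```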
I had this same problem; my system was a freshly installed Ubuntu 20.04. I fixed it by aborting the snapd job that was running.
this will give you a job number X
snap abort X
snap disable ....
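Spelled out, that abort/disable workflow goes roughly like this (the change ID 42 and the snap name are placeholders, not values from the original answer):

```shell
# List in-flight snap changes; the first column is the change ID
snap changes
# Abort the stuck change (replace 42 with the real ID from above)
sudo snap abort 42
# Or disable a particular snap entirely (name is a placeholder)
sudo snap disable some-snap
```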
It seems that only people behind some kind of powerful firewall (or a similar network restriction that makes snapd's downloads stall and retry) run into this issue.