
Is 2G the size limit of a core dump file on Linux?

My OS is Arch Linux. When there is a core dump, I try to debug it with gdb:

$ coredumpctl gdb 1621
......
       Storage: /var/lib/systemd/coredump/core.runTests.1014.b43166f4bba84bcba55e65ae9460beff.1621.1491901119000000000000.lz4
       Message: Process 1621 (runTests) of user 1014 dumped core.

                Stack trace of thread 1621:
                #0  0x00007ff1c0fcfa10 n/a (n/a)

GNU gdb (GDB) 7.12.1
......
Reading symbols from /home/xiaonan/Project/privDB/build/bin/runTests...done.
BFD: Warning: /var/tmp/coredump-28KzRc is truncated: expected core file size >= 2179375104, found: 2147483648.

Checking the /var/tmp/coredump-28KzRc file:

$ ls -alth /var/tmp/coredump-28KzRc
-rw------- 1 xiaonan xiaonan 2.0G Apr 11 17:00 /var/tmp/coredump-28KzRc

Is 2G the size limit of a core dump file on Linux? I ask because /var/tmp should have plenty of free disk space:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
dev              32G     0   32G   0% /dev
run              32G  3.1M   32G   1% /run
/dev/sda2       229G   86G  132G  40% /
tmpfs            32G  708M   31G   3% /dev/shm
tmpfs            32G     0   32G   0% /sys/fs/cgroup
tmpfs            32G  957M   31G   3% /tmp
/dev/sda1       511M   33M  479M   7% /boot
/dev/sda3       651G  478G  141G  78% /home

P.S. "ulimit -a" outputs:

$ ulimit -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 257039
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 257039
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Update: The /etc/systemd/coredump.conf file:

$ cat coredump.conf
#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See coredump.conf(5) for details.

[Coredump]
#Storage=external
#Compress=yes
#ProcessSizeMax=2G
#ExternalSizeMax=2G
#JournalSizeMax=767M
#MaxUse=
#KeepFree=
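
Note that the commented entries show the compile-time defaults, and ProcessSizeMax=2G / ExternalSizeMax=2G line up exactly with the truncation above: systemd parses the G suffix as GiB, so 2G is 2147483648 bytes, precisely the size at which gdb reported the core file was cut off:

$ echo $((2 * 1024 * 1024 * 1024))
2147483648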
Asked by Nan Xiao

2 Answers

@n.m. is correct.
(1) Modify the /etc/systemd/coredump.conf file:

[Coredump]
ProcessSizeMax=8G
ExternalSizeMax=8G
JournalSizeMax=8G

(2) Reload systemd's configuration:

# systemctl daemon-reload

Note that this only takes effect for newly generated core dump files.
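
To verify, trigger a fresh core dump and check the newest entry; for example (assuming runTests can be made to crash again and that you may read the dump):

$ kill -SEGV $(pidof runTests)
$ coredumpctl list
$ coredumpctl info

The Storage: line of coredumpctl info should now show a core file that is no longer capped at 2.0G.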

Answered by Nan Xiao


Is 2G the size limit of a core dump file on Linux?

No. I routinely deal with core dumps larger than 4GiB.

ulimit -a
core file size (blocks, -c) unlimited

This tells you your current limit in this shell. It tells you nothing about the environment in which runTests ran. That process may be setting its own limit via setrlimit(2), or its parent may be setting a limit for it.
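
If you can keep a runTests process alive long enough, one way to check without rebuilding anything is to read /proc/<pid>/limits, which reports the same values getrlimit(2) would return. For example (output illustrative):

$ grep 'Max core file size' /proc/$(pidof runTests)/limits
Max core file size        unlimited            unlimited            bytes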

You can modify runTests to print its current limit with getrlimit(2) and see what it actually is when the process runs.
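
A minimal sketch of that check, shown standalone here (in runTests you would just drop the getrlimit call near the top of main):

#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl;

    if (getrlimit(RLIMIT_CORE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    /* RLIM_INFINITY means no limit; otherwise the value is in bytes. */
    if (rl.rlim_cur == RLIM_INFINITY)
        printf("core file soft limit: unlimited\n");
    else
        printf("core file soft limit: %llu bytes\n",
               (unsigned long long)rl.rlim_cur);
    return 0;
}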

P.S. Just because the core is truncated doesn't mean it's completely useless (though often it is). At a minimum, you should try the GDB where command.
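
where is an alias for backtrace, so at the prompt of the gdb session that coredumpctl opened:

(gdb) where

It prints whatever call stack gdb can recover; with a truncated core, some frames may be missing or show as ??.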

Answered by Employed Russian