
Why is my program's core dump always zero bytes, when run on an NTFS partition mounted in Linux?

I'm trying to get a usable core dump from code that I am writing. My source is on an NTFS partition that I share between my Windows and Linux installations. I'm doing the development under Linux and have set ulimit -c unlimited in my bash shell. When I execute the code in my project directory on the NTFS partition and purposely cause a SIGSEGV or SIGABRT, the system writes a zero-byte core dump file.

If I execute the binary in my home directory (an ext4 partition), the core dump is generated fine. I've had a look at the man page for core, which gives a list of various circumstances in which a core dump file is not produced. However, I don't think it's a permissions issue as all the files and directories on that partition have full rights (chmod 777).
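In case it helps, the reproduction steps look roughly like this (the paths and binary name are just placeholders, not my real project layout):

    ulimit -c unlimited        # in the same bash shell
    cd /mnt/shared/project     # project directory on the NTFS mount
    ./myprog                   # crashes with SIGSEGV/SIGABRT -> core file is 0 bytes
    cd ~/project               # same binary on my ext4 home partition
    ./myprog                   # same crash -> core file has the expected size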

Any help or thoughts appreciated.

asked Nov 05 '22 by Dan Boswell

2 Answers

Maybe you should check the contents of /proc/sys/kernel/core_pattern. It controls where core dumps are written and how they are named, and if it points at a pipe handler or an unwritable location you may not get a usable core file in the current directory.
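A minimal sketch of checking and, if needed, changing it; the directory and pattern below are only examples:

    # show the current pattern; a leading '|' means cores are piped to a handler
    cat /proc/sys/kernel/core_pattern

    # confirm the core size limit in this shell
    ulimit -c

    # as root, send cores to a fixed local directory instead (example path)
    sudo mkdir -p /var/coredumps
    sudo sysctl -w kernel.core_pattern=/var/coredumps/core.%e.%p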

answered Nov 10 '22 by xda1001


The directory where the application sits is a mount point to another filesystem (in the linked case, a mount from another Linux machine). The core file can't be written to the mounted drive; it has to be written to a local drive.

http://www.experts-exchange.com/OS/Linux/Q_23677186.html

You can create a RAM disk and have the core dumps written to it instead.
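A minimal sketch of that approach, assuming a tmpfs mount is acceptable as the "RAM disk"; the mount point, size, and core pattern are only examples:

    # mount a small RAM-backed tmpfs filesystem for core dumps
    sudo mkdir -p /mnt/coredumps
    sudo mount -t tmpfs -o size=512m tmpfs /mnt/coredumps

    # point the kernel at it
    sudo sysctl -w kernel.core_pattern=/mnt/coredumps/core.%e.%p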

answered Nov 10 '22 by dimafon