Does anyone know how the following three compare in terms of speed:
shared memory
tmpfs (/dev/shm)
mmap (/dev/shm)
Thanks!
Read about tmpfs here. The following is copied from that article, explaining the relation between shared memory and tmpfs in particular.
1) There is always a kernel internal mount which you will not see at
all. This is used for shared anonymous mappings and SYSV shared
memory.
This mount does not depend on CONFIG_TMPFS. If CONFIG_TMPFS is not
set, the user-visible part of tmpfs is not built, but the internal
mechanisms are always present.
2) glibc 2.2 and above expects tmpfs to be mounted at /dev/shm for
POSIX shared memory (shm_open, shm_unlink). Adding the following
line to /etc/fstab should take care of this:
tmpfs /dev/shm tmpfs defaults 0 0
Remember to create the directory that you intend to mount tmpfs on
if necessary (/dev/shm is automagically created if you use devfs).
This mount is _not_ needed for SYSV shared memory. The internal
mount is used for that. (In the 2.3 kernel versions it was
necessary to mount the predecessor of tmpfs (shm fs) to use SYSV
shared memory)
So, when you actually use POSIX shared memory (which I have used before, too), glibc will create a file in /dev/shm that is used to share data between the applications. The file descriptor it returns refers to that file, and you can pass it to mmap to map the file into memory, just as you can with any "real" file. The techniques you listed are therefore complementary, not competing: tmpfs is simply the file system that provides in-memory files as an implementation technique for glibc.
As an example, a process running on my box currently has such a shared memory object registered:
# pwd
/dev/shm
# ls -lh
total 76K
-r-------- 1 js js 65M May 24 16:37 pulse-shm-1802989683
#
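For illustration, here is a minimal sketch of that shm_open + mmap pattern in C. The object name "/my-shm-demo" is made up for this example; on Linux it appears as /dev/shm/my-shm-demo, and on older glibc you may need to link with -lrt.

/* Create a POSIX shared memory object and map it; a second process
 * calling shm_open with the same name would see the same bytes. */
#include <fcntl.h>      /* O_CREAT, O_RDWR */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>   /* shm_open, shm_unlink, mmap, munmap */
#include <unistd.h>     /* ftruncate, close */

int main(void)
{
    const char *name = "/my-shm-demo";   /* hypothetical name for this sketch */
    const size_t size = 4096;

    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd == -1) { perror("shm_open"); return EXIT_FAILURE; }

    /* A freshly created object has length 0, so give it a size first. */
    if (ftruncate(fd, size) == -1) { perror("ftruncate"); return EXIT_FAILURE; }

    char *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return EXIT_FAILURE; }
    close(fd);                           /* the mapping stays valid after close */

    strcpy(p, "hello from shared memory");
    printf("%s\n", p);

    munmap(p, size);
    shm_unlink(name);                    /* removes /dev/shm/my-shm-demo */
    return EXIT_SUCCESS;
}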
"It depends." In general, they're all in-memory and dependent upon system implementation so the performance will be negligible and platform-specific for most uses. If you really care about performance, you should profile and determine your requirements. It's pretty trivial to replace any one of those methods with another.
That said, shared memory is the least intensive, as there are no file operations involved (but again, this is very implementation-dependent). If you need to open and close (map/unmap) repeatedly, many times, the overhead could become significant.
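If you do want to measure that, a rough sketch along these lines compares a single long-lived mapping against mapping and unmapping on every use. The name "/bench-demo" and the iteration count are arbitrary, and the absolute numbers will be platform-specific; the interesting part is the ratio on your own machine.

/* Rough, illustrative timing only. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    const size_t size = 1 << 20;          /* 1 MiB backing object */
    const int iterations = 10000;

    int fd = shm_open("/bench-demo", O_CREAT | O_RDWR, 0600);
    if (fd == -1 || ftruncate(fd, size) == -1) { perror("setup"); return 1; }

    /* Case 1: map once, reuse the mapping for every write. */
    double t0 = now_sec();
    volatile char *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    for (int i = 0; i < iterations; i++)
        p[(size_t)i % size] = (char)i;
    munmap((void *)p, size);
    double t1 = now_sec();

    /* Case 2: map and unmap around every single write. */
    for (int i = 0; i < iterations; i++) {
        char *q = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (q == MAP_FAILED) { perror("mmap"); return 1; }
        q[(size_t)i % size] = (char)i;
        munmap(q, size);
    }
    double t2 = now_sec();

    printf("map once:       %.3f s\n", t1 - t0);
    printf("map every use:  %.3f s\n", t2 - t1);

    close(fd);
    shm_unlink("/bench-demo");
    return 0;
}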
Cheers!
Sean
By "Shared memory" you mean System V shared memory, right?
I think Linux mmaps a hidden tmpfs file when you use it, so it's effectively the same as mmapping a file on tmpfs.
Doing file I/O on tmpfs is going to have a penalty compared to mapping it... mostly (there are special cases where it might make sense, such as working with more than 4 GB of data in a 32-bit process).
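For reference, a minimal sketch of the System V API looks roughly like this (the key 0x1234 is an arbitrary example; IPC_PRIVATE would work just as well). The segment you attach with shmat is backed by that internal tmpfs mount.

/* Minimal sketch of System V shared memory: shmget allocates a segment,
 * shmat maps it into the process (the SysV analogue of mmap). */
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    int id = shmget((key_t)0x1234, 4096, IPC_CREAT | 0600);
    if (id == -1) { perror("shmget"); return 1; }

    char *p = shmat(id, NULL, 0);
    if (p == (char *)-1) { perror("shmat"); return 1; }

    strcpy(p, "hello via SysV shm");
    printf("%s\n", p);

    shmdt(p);                       /* detach from our address space */
    shmctl(id, IPC_RMID, NULL);     /* mark the segment for removal */
    return 0;
}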