I am having trouble with NFS client-side attribute caching. I have several servers: one is an NFS server and the others are NFS clients.
All servers run Debian lenny (Linux 2.6.26-2-amd64), and the relevant package versions are as follows:
% dpkg -l | grep nfs
ii libnfsidmap2 0.20-1 An nfs idmapping library
ii nfs-common 1:1.1.2-6lenny1 NFS support files common to client and server
ii nfs-kernel-server 1:1.1.2-6lenny1 support for NFS kernel server
On the NFS server, /etc/exports contains the following:
/export-path 192.168.0.0/255.255.255.0(async,rw,no_subtree_check)
On the NFS clients, /etc/fstab contains the following:
server:/export-path /mountpoint nfs rw,hard,intr,rsize=8192,async 0 0
As you can see, the "async" option is used for better performance with multiple clients. However, this sometimes causes false-caching errors.
Since I maintain many servers (and do not have sufficient permissions to change the mount options), I would rather not modify /etc/exports or /etc/fstab. It would be enough to have a command-line tool that "cleans" the NFS client-side attribute cache with ordinary user permissions.
Please let me know if such a command exists.
Thanks,
By "false-caching errors", I mean situations like this:
% ls -l /data/1/kabe/foo
ls: cannot access /data/1/kabe/foo: No such file or directory
% ssh another-server 'touch /data/1/kabe/foo'
% ls -l /data/1/kabe/foo
ls: cannot access /data/1/kabe/foo: No such file or directory
Such cases happen from time to time. The problem is not the file content but the file attributes (i.e. dentry information), since NFS guarantees close-to-open consistency for the data itself.
This is therefore the designed behavior of the Linux NFS client. You can disable or bypass NFS client caching by changing the client mount options, or by reading/writing the data with O_DIRECT/O_SYNC. To disable all caching on the NFS client, add "sync" to the mount options. Note: this option may bring about a degradation of performance.
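For example, taking the fstab line from the question as a starting point (untested, shown only as a sketch), the client entry would become:
server:/export-path /mountpoint nfs rw,hard,intr,rsize=8192,sync 0 0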
NFS indexes its cache contents using the NFS file handle, not the file name, which means hard-linked files share the cache correctly. Caching is supported in NFS versions 2, 3, and 4, although each version uses a different branch for caching.
To improve NFS performance, distributed file systems cache the data as well as the metadata read from the server on the clients; this is known as client-side caching. It reduces the time taken for subsequent client accesses, and the cache is also used as a temporary buffer for writes.
With FS-Cache, NFS will not use the persistent cache unless explicitly instructed. To configure an NFS mount to use FS-Cache, include the -o fsc option on the mount command. All access to files under the mount point will then go through the cache, unless the file is opened for direct I/O or writing.
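For example (a sketch using the export and mount point from the question; this assumes FS-Cache with cachefilesd is actually set up on the client):
mount server:/export-path /mountpoint -o fsc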
With the lookupcache=none mount option, the client revalidates both types of directory cache entries before an application can use them. This permits quick detection of files that were created or removed by other clients, but can impact application and server performance. (See also the write-up "Close-To-Open Cache Consistency in the Linux NFS Client".)
If the file in the NFS mount (whose existence is being checked) is created by another application on the same client (possibly through another mount point to the same NFS export), then consider using a single shared NFS cache on the client: set up the NFS mounts with the sharecache option, as sketched below.
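A sketch of what that could look like, based on the question's fstab entry (the second mount point is hypothetical):
server:/export-path /mountpoint nfs rw,hard,intr,rsize=8192,async,sharecache 0 0
server:/export-path /mountpoint2 nfs rw,hard,intr,rsize=8192,async,sharecache 0 0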
Depending on what you mean by "false-caching errors", running sync may get you what you need. This will flush all filesystem buffers. If needed, you can also clear out the VM caches in the kernel using /proc/sys/vm/drop_caches.
# To free pagecache
echo 1 > /proc/sys/vm/drop_caches
# To free dentries and inodes
echo 2 > /proc/sys/vm/drop_caches
# To free pagecache, dentries and inodes
echo 3 > /proc/sys/vm/drop_caches
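Note that writing to /proc/sys/vm/drop_caches requires root. In practice you would run something like this (sync first so dirty data is written back before the caches are dropped):
sync; echo 3 | sudo tee /proc/sys/vm/drop_caches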
Within a given process, calling opendir() and closedir() on the parent directory of a file invalidates the NFS cache. I used this while programming a job scheduler. Very, very helpful. Try it!
Here is a link to the relevant line of code (showing the use in context): https://github.com/earonesty/grun/blob/master/grun#L820
It was the only way I could fix the issue of job #1 completing and job #2, which needed some of its output files, firing off in a context where those files were actually visible.
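A rough command-line analogue of the same idea, assuming that listing the parent directory (which performs opendir()/readdir()/closedir() on it) triggers the same revalidation on the asker's mount, would be:
% ls /data/1/kabe > /dev/null   # re-open the parent directory to refresh its cached entries
% ls -l /data/1/kabe/foo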
AFAIK, the sync and async options aren't the source of attribute caching. async allows the server to delay committing data to the server filesystem, i.e. it affects write durability in case of NFS server failures, but if the NFS server is stable then async does not affect the NFS clients.
There is a lookupcache=positive NFS mount option that might be used to prevent negative lookup caching, i.e. the NFS client returning "No such file or directory" when the file actually exists on the server. See "Directory entry caching" in man nfs.
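As a sketch (based on the fstab line from the question; whether lookupcache is available depends on the client kernel and nfs-utils versions), the mount entry would become:
server:/export-path /mountpoint nfs rw,hard,intr,rsize=8192,async,lookupcache=positive 0 0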