I want to test and compare the speed of two different file-opening approaches over a network using multiprocessing. To see whether the network really is the bottleneck, I want to disable caching of the relevant data for the Python process, or use some other method that forces the process to fetch its data over the network and explicitly not from the cache. Clearing the whole cache is not an option, as I am working in a multi-user environment where caching itself is essential.
I eventually solved this specific problem by piping my file list to another Python process, which called the cachedel command from the nocache package via the shell. This is not a pure-Python solution, but it worked in my case.
import glob
import subprocess

filelist = glob.glob('/path/to/file/*.fileending')
for dat in filelist:
    # Drop the page cache for each file via the nocache package's cachedel tool.
    # Passing the arguments as a list avoids shell injection and quoting issues.
    subprocess.call(["cachedel", dat])
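As an aside, there is also a pure-Python way to ask the kernel to evict a file's cached pages, without shelling out to cachedel: os.posix_fadvise with POSIX_FADV_DONTNEED (Linux/Unix only; the helper name below is my own, not from the original post). A minimal sketch:

```python
import os

def drop_file_cache(path):
    """Advise the kernel to evict cached pages for one file.

    Pure-Python alternative to the cachedel CLI tool; Linux/Unix only.
    Note this is advisory: the kernel may keep pages that are dirty
    or in use elsewhere.
    """
    fd = os.open(path, os.O_RDONLY)
    try:
        # offset=0, length=0 means "from the start to the end of the file".
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
    finally:
        os.close(fd)
```

This keeps everything in one process and avoids the subprocess overhead per file.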
Nevertheless, if you want to bypass the cache for the whole Python process rather than for individual files, it's a better idea to run the interpreter under nocache:

nocache python

with the nocache package installed.
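For the speed comparison itself, a simple timing helper can then be run once with the cache warm and once after evicting the files. This is a hypothetical sketch (the helper name and chunk size are my own, not from the original post):

```python
import time

def time_read(path, chunk_size=1 << 20):
    """Time a full sequential read of one file in fixed-size chunks.

    Run once before and once after dropping the cache to compare
    cached vs. network-bound read times.
    """
    start = time.perf_counter()
    with open(path, 'rb') as f:
        # Read until EOF; chunked reads avoid loading the whole file at once.
        while f.read(chunk_size):
            pass
    return time.perf_counter() - start
```

Using a monotonic clock (time.perf_counter) avoids artifacts from system clock adjustments during the benchmark.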