 

Symlinks - performance hit?

Tags: linux, bash, unix

For deployment reasons, it is slightly easier for me to use symlinks, but these would point to all of my website's core files and configuration, which will be accessed tens of thousands of times a day.

Would it be more sensible to move the documents to their correct positions on the server (a slightly more problematic deployment) rather than using symlinks for everything (a slight performance degradation?)

asked Sep 26 '12 by J.Zil

People also ask

Does symlink affect performance?

Don't worry about a symlink performance hit; practically everything else will become a bottleneck before that. Resolving a symlink does not take many CPU cycles. Running your PHP script, handling database queries and result sets, and possibly using modules like Apache's mod_security will be the real bottlenecks.

Does rm -r follow symlinks?

No. Running rm -rf on a symlink removes only the symlink itself; the directory it points to and its contents are left intact. Be careful with a trailing slash (rm -rf symlink/), though: some implementations will then follow the link and delete the target's contents.

Does rm delete symlinks?

Using the rm Command We know the rm command can delete files and directories; it can also delete symbolic links. The syntax for deleting a symbolic link is the same as for deleting a file, and only the link itself is removed, not its target.
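A quick shell sketch (with hypothetical file names) confirming that rm removes only the link while the target file survives:

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"           # work in a throwaway directory

echo "original data" > target.txt
ln -s target.txt fileLink   # create a symbolic link to the file

rm fileLink                 # delete the link, same syntax as a file
test ! -e fileLink          # the link is gone...
test -f target.txt          # ...but the target file is untouched
cat target.txt              # prints: original data
```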

Can symlinks be copied?

We can use the -l option of rsync to copy symlinks. With this option, rsync copies the symlinks in the source directory as symlinks in the destination directory.
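If rsync is not available, GNU/BSD cp offers similar behavior: -P (--no-dereference) copies the link itself, while the default for a non-recursive copy is to follow the link and copy its contents. A small sketch with hypothetical names:

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"
mkdir src dst
echo data > src/file.txt
ln -s file.txt src/file.link

# -P copies the symlink itself (comparable to rsync -l):
cp -P src/file.link dst/

# Default behavior dereferences: the copy is a regular file.
cp src/file.link dst/file.copy

readlink dst/file.link      # prints: file.txt  (still a symlink)
test -f dst/file.copy       # a regular file containing the data
```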


2 Answers

I have created a file testfile.txt with 1000 lines of blablabla in it, and created a local symlink (testfile.link.txt) to it:

$ ls -n
total 12
lrwxrwxrwx 1 1000 1000    12 2012-09-26 14:09 testfile.link.txt -> testfile.txt
-rw-r--r-- 1 1000 1000 10000 2012-09-26 14:08 testfile.txt

(The -n switch is only there to hide my super-secret username. :))

Then, for each file, I ran 10 rounds of cat-ing it into /dev/null 1000 times. (Results are in seconds.)

Accessing the file directly:

$ for j in `seq 1 10`; do ( time -p ( for i in `seq 1 1000`; do cat testfile.txt >/dev/null; done ) ) 2>&1 | grep 'real'; done
real 2.32
real 2.33
real 2.33
real 2.33
real 2.33
real 2.32
real 2.32
real 2.33
real 2.32
real 2.33

Accessing through symlink:

$ for j in `seq 1 10`; do ( time -p ( for i in `seq 1 1000`; do cat testfile.link.txt >/dev/null; done ) ) 2>&1 | grep 'real'; done
real 2.30
real 2.31
real 2.36
real 2.32
real 2.32
real 2.31
real 2.31
real 2.31
real 2.32
real 2.32

Measured on (a rather old install of) Ubuntu:

$ uname -srvm
Linux 2.6.32-43-generic #97-Ubuntu SMP Wed Sep 5 16:43:09 UTC 2012 i686

Of course it's a dumbed-down example, but based on this I wouldn't expect too much of a performance degradation when using symlinks.

I personally think that using symlinks is more practical:

  • As you said, your deployment process will be simpler.
  • You can also easily use versioning and roll-back if you include some kind of timestamp or version number in the directory names (e.g. my_web_files.v1, my_web_files.v2), and use the "official" name in the symlink (e.g. my_web_files) pointing to the "live" version. If you want to change the version, just re-link to another versioned directory.
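The versioned-directory idea above can be sketched in shell. Directory names follow the answer's example; note that mv -T is GNU-specific, and the rename trick is used because overwriting a live symlink in place with ln -sfn is not atomic:

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"

# Two versioned releases of the site:
mkdir my_web_files.v1 my_web_files.v2
echo "version 1" > my_web_files.v1/index.html
echo "version 2" > my_web_files.v2/index.html

# Point the "official" name at the live version:
ln -s my_web_files.v1 my_web_files
cat my_web_files/index.html          # prints: version 1

# Roll forward: build a fresh link, then rename it over the old one.
# rename(2) replaces the old link atomically, so there is no window
# in which my_web_files does not exist.
ln -s my_web_files.v2 my_web_files.tmp
mv -T my_web_files.tmp my_web_files
cat my_web_files/index.html          # prints: version 2
```

Rolling back is the same operation with the link re-pointed at the previous versioned directory.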
answered Sep 20 '22 by battery


Have you measured this performance degradation? I suspect it's hugely negligible compared to the time taken to fetch pages over the network.

answered Sep 21 '22 by Brian Agnew