I'm working on unit tests for some legacy shell scripts.
In the real world, scripts often call utility programs such as find, tar, cpio, grep, sed, rsync and date with rather complex command lines containing many options. Sometimes regular expressions or wildcard patterns are constructed and used.
An example: a shell script, usually invoked by cron at regular intervals, has the task of mirroring some huge directory trees from one computer to another using the utility rsync. Several types of files and directories should be excluded from the mirroring process:
#!/usr/bin/env bash
...
function mirror() {
    ...
    COMMAND="rsync -aH$VERBOSE$DRY $PROGRESS $DELETE $OTHER_OPTIONS \
        $EXCLUDE_OPTIONS $SOURCE_HOST:$DIRECTORY $TARGET"
    ...
    if eval $COMMAND
    then ...
    else ...
    fi
    ...
}
...
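(As an aside: one way to make such command construction testable at all is to build the command as a Bash array instead of an eval'ed string; arguments containing spaces survive intact, and a test can inspect the array without ever running rsync. A minimal sketch; the function name and sample options below are hypothetical, not taken from the real script:)

```shell
#!/usr/bin/env bash
# Sketch only: build_mirror_command and its sample options are made up
# for illustration; they are not the original script's code.

build_mirror_command() {
    local source_host=$1 directory=$2 target=$3
    # A global array; each element stays one argument, even with spaces.
    cmd=(rsync -aH --delete)
    cmd+=(--exclude '*.tmp' --exclude '.cache')
    cmd+=("$source_host:$directory" "$target")
}

# A unit test can inspect the constructed command without running rsync:
build_mirror_command backuphost /data /mirror
[ "${cmd[0]}" = rsync ] && echo "command starts with rsync"

# Running it later needs no eval:
# "${cmd[@]}"
```

Because the array is data, the "build the command" step becomes a unit of its own that can be tested fast and offline.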
As Michael Feathers wrote in his famous book Working Effectively with Legacy Code, a good unit test runs very fast and does not touch the network, the file system, or a database.
Following Michael Feathers' advice, the technique to use here is dependency injection. The object to replace here is the utility program rsync.
My first idea: in my shell script testing framework (I use bats) I manipulate $PATH so that a mock rsync is found instead of the real rsync utility. This mock object could check the supplied command-line parameters and options. The same goes for the other utilities used in this part of the script under test.
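A minimal sketch of that $PATH manipulation outside of any test framework (the directory layout and the MOCK_LOG variable are assumptions for illustration, not bats conventions):

```shell
#!/usr/bin/env bash
# Sketch only: shadow the real rsync with a mock found earlier in $PATH.

mock_bin=$(mktemp -d)
export MOCK_LOG="$mock_bin/rsync.args"

# The mock records its command line, one argument per line, and succeeds.
cat > "$mock_bin/rsync" <<'EOF'
#!/usr/bin/env bash
printf '%s\n' "$@" >> "$MOCK_LOG"
exit 0
EOF
chmod +x "$mock_bin/rsync"

PATH="$mock_bin:$PATH"
hash -r   # forget any previously hashed location of the real rsync

# Whatever the script under test does with rsync now hits the mock:
rsync -aH --delete source/ host:/target

# The test can then assert on the recorded options:
grep -qx -- '--delete' "$MOCK_LOG" && echo "mock saw --delete"
```

In bats, the mktemp/PATH setup would typically live in setup() and the cleanup in teardown(), but the mechanism is the same.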
In my past experience, real problems in this area of scripting were often bugs caused by special characters in file or directory names, problems with quoting or encodings, missing ssh keys, wrong permissions, and so on. These kinds of bugs would escape this technique of unit testing. (I know: for some of these problems unit testing is simply not the cure.)
Another disadvantage is that writing a mock for a complex utility like rsync or find is error-prone and a tedious engineering task of its own.
I believe the situation described above is general enough that other people might have encountered similar problems. Who has got some clever ideas and would care to share them here with me?
You can mock any command using a function, like this:
function rsync() {
# mock things here if necessary
}
Then export the function and run the unit test:
export -f rsync
unittest
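Such a function mock can also record the arguments it was called with, so the test can assert on them afterwards. A sketch (the RSYNC_CALLS variable and the sample command line are made up for illustration):

```shell
#!/usr/bin/env bash
# Sketch only: a function mock that records its arguments for later assertions.

rsync() {
    # Record the full command line, one argument per line.
    printf '%s\n' "$@" >> "$RSYNC_CALLS"
    return 0    # pretend the transfer succeeded
}
export -f rsync

export RSYNC_CALLS=$(mktemp)

# Child shells (e.g. the script under test) inherit the exported function:
bash -c 'rsync -aH --exclude "*.tmp" source/ host:/target'

# Assert the exclude pattern was passed through unmangled:
grep -qx -- '*.tmp' "$RSYNC_CALLS" && echo "exclude pattern recorded"
```

Note that export -f is a bashism: it only works when the script under test is itself run by bash, not by a POSIX sh.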
Cargill's quandary:
"Any design problem can be solved by adding an additional level of indirection, except for too many levels of indirection."
Why mock system commands? After all, if you are programming Bash, the system is your target, and you should evaluate your script against the system.
A unit test, as the name suggests, gives you confidence in a unitary part of the system you are designing. So you will have to define what your unit is in the case of a bash script. A function? A script file? A command?
Given that you want to define the unit as a function, I would suggest writing a list of well-known errors like the ones you listed above, and writing a test case for each. And try not to deviate from the system commands, since they are an integral part of the system you are delivering.
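For instance, a test for the classic whitespace-in-filename bug can run against the real find in a throwaway sandbox, fast and without touching anything outside it. A sketch with a hypothetical unit under test:

```shell
#!/usr/bin/env bash
# Sketch only: list_backups is a hypothetical function under test,
# exercised against the real find command in a temporary sandbox.

list_backups() {
    # Correctly quoted; an unquoted $1 here would be the classic bug.
    find "$1" -type f -name '*.bak'
}

sandbox=$(mktemp -d)
mkdir -p "$sandbox/dir with spaces"
touch "$sandbox/dir with spaces/file one.bak"

# The test: the file with spaces in its path must be found.
result=$(list_backups "$sandbox")
[ "$result" = "$sandbox/dir with spaces/file one.bak" ] \
    && echo "whitespace test passed"

rm -rf "$sandbox"
```

This keeps the real system command in the loop (it is part of what you deliver) while still isolating the test from the rest of the machine.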