This story highlights the potentially dire consequences of undefined variables in shell scripts.
In summary: entire directories were lost because of a buggy shell script running this command:
rm -rf "$VAR/"*
… which is intended to delete all files under the $VAR directory. However, if $VAR is left undefined, the shell expands it to an empty string, and what actually gets executed is:
rm -rf /*
… which will try to delete every file on the system, starting from the root directory. Oops.
It’s somewhat paradoxical that shell scripts are frequently used to drive mission-critical activities such as starting and stopping processes, or copying, moving and deleting files, and yet these scripts are error-prone, often hard to read and always hard to test.
Being at the mercy of a buggy shell script is not fun. Thankfully there are ways to prevent disaster from happening.
- Add “set -eu” to the script.
“-e” causes the script to terminate as soon as any command fails. “-u” causes it to terminate when it encounters an unbound variable. One additional line of code that will save a lot of trouble; it should be mandatory at the top of every script.
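A minimal, self-contained sketch of the difference “-u” makes; $VAR is the same hypothetical variable as in the story above, and the failing expansion runs in a subshell so the demo itself survives to report the outcome:

```shell
#!/bin/bash
unset VAR   # make sure VAR is not inherited from the environment

# Under `set -u`, expanding the unset $VAR is a hard error: the subshell
# prints "VAR: unbound variable" to stderr (suppressed here) and exits
# nonzero before any rm could ever run.
if ( set -u; echo "would run: rm -rf \"${VAR}/\"*" ) 2>/dev/null; then
    echo "expansion succeeded -- nothing stopped the bad rm"
else
    echo "aborted: VAR is unbound, the rm never runs"
fi
```

Without `set -u`, the same expansion would silently produce an empty string and the dangerous command would proceed.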
- Check if the variables are set before use
[[ -n "$VAR" ]] && rm -rf "$VAR/"*
Note that the glob must stay outside the quotes, otherwise rm looks for a file literally named *. It’s not foolproof though, as it won’t prevent failures due to typos in the variable name.
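A runnable sketch of that guard; the target directory here is a hypothetical temporary one, created just for the demo:

```shell
#!/bin/bash
# Only run the destructive rm when $VAR is verifiably set AND is a directory.
VAR=$(mktemp -d)                 # stand-in for the real target directory
touch "$VAR/a.log" "$VAR/b.log"  # some demo files to delete

if [[ -n "${VAR:-}" && -d "$VAR" ]]; then
    rm -rf "$VAR/"*              # glob outside the quotes so it expands
fi

ls -A "$VAR"                     # prints nothing: contents gone, directory kept
rmdir "$VAR"                     # clean up the demo directory
```

The `${VAR:-}` form keeps the test itself safe even if the script later adopts `set -u`.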
- Use a unit test framework
Yes, shell scripts have unit-test frameworks too; see Roundup or ShUnit.
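As a flavour of what such tests look like, here is a sketch in the spirit of those frameworks. The function under test and the assert helper are hypothetical; a real framework like ShUnit supplies the assertions and test discovery for you, but hand-rolling them keeps this example standalone:

```shell
#!/bin/bash
# Function under test (hypothetical): normalize a directory path so it ends
# with exactly one trailing slash, failing loudly if no argument is given.
build_target() {
    local dir="${1:?build_target: directory argument is required}"
    echo "${dir%/}/"
}

# Minimal hand-rolled assertion (a framework would provide assertEquals).
assert_equals() {
    [ "$1" = "$2" ] || { echo "FAIL: expected '$1', got '$2'"; exit 1; }
}

test_build_target_adds_slash() {
    assert_equals "/tmp/data/" "$(build_target /tmp/data)"
}

test_build_target_keeps_single_slash() {
    assert_equals "/tmp/data/" "$(build_target /tmp/data/)"
}

test_build_target_adds_slash
test_build_target_keeps_single_slash
echo "all tests passed"
```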
- Use a (real) programming language
bash/zsh were never meant for complex programming tasks. Replace them with Python, Perl, Groovy or Java. Programming languages have much better support for functions, variable scoping, conditionals, string handling, etc. than shell scripts do. The idea is to avoid shell scripts except for the simplest tasks, and use a programming language for any kind of elaborate control logic (anything more complex than a pair of if/else statements).