We are finally ready to see what makes the shell such a powerful programming environment. We are going to take the commands we repeat frequently and save them in files so that we can re-run all those operations again later by typing a single command. For historical reasons, a bunch of commands saved in a file is usually called a shell script, but make no mistake: these are actually small programs.
Not only will writing shell scripts make your work faster — you won’t have to retype the same commands over and over again — it will also make it more accurate (fewer chances for typos) and more reproducible. If you come back to your work later (or if someone else finds your work and wants to build on it) you will be able to reproduce the same results simply by running your script, rather than having to remember or retype a long list of commands.
Let’s start by going back to proteins/ and creating a new file, middle.sh which will become our shell script:
jupyter-user:$ cd ~/IntroShell/data/shell-lesson-data/exercise-data/proteins
jupyter-user:$ nano middle.sh
As we have seen, the command nano middle.sh opens the file middle.sh within the text editor nano (which runs within the shell). If the file does not exist, it will be created. We can use the text editor to directly edit the file – we’ll simply insert the following line:
head -n 15 octane.pdb | tail -n 5
This is a variation on the pipe we constructed earlier: it selects lines 11-15 of the file octane.pdb. Remember, we are not running it as a command just yet: we are putting the commands in a file.
Then we save the file (Ctrl-O in nano), and exit the text editor (Ctrl-X in nano). Check that the directory proteins now contains a file called middle.sh.
Once we have saved the file, we can ask the shell to execute the commands it contains. Our shell is called bash, so we run the following command:
jupyter-user:$ bash middle.sh
ATOM 9 H 1 -4.502 0.681 0.785 1.00 0.00
ATOM 10 H 1 -5.254 -0.243 -0.537 1.00 0.00
ATOM 11 H 1 -4.357 1.252 -0.895 1.00 0.00
ATOM 12 H 1 -3.009 -0.741 -1.467 1.00 0.00
ATOM 13 H 1 -3.172 -1.337 0.206 1.00 0.00
Sure enough, our script’s output is exactly what we would get if we ran that pipeline directly.
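As an aside, the same head-then-tail pattern works for any range of lines, not just 11-15: head keeps everything up to the last line we want, and tail then keeps just the lines we care about from the end of that. A minimal sketch (start and end are hypothetical variable names, used only for illustration):

# Print lines start..end of octane.pdb:
# head keeps the first 'end' lines, tail keeps the last (end - start + 1) of those.
start=11
end=15
head -n "$end" octane.pdb | tail -n "$((end - start + 1))"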
We usually call programs like Microsoft Word or LibreOffice Writer “text editors”, but we need to be a bit more careful when it comes to programming. By default, Microsoft Word uses .docx files to store not only text, but also formatting information about fonts, headings, and so on. This extra information isn’t stored as characters and doesn’t mean anything to tools like head: they expect input files to contain nothing but the letters, digits, and punctuation on a standard computer keyboard. When editing programs, therefore, you must either use a plain text editor, or be careful to save files as plain text.
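If you are ever unsure whether a file really is plain text, one quick check (assuming the standard Unix file utility is available on your system) is:

jupyter-user:$ file middle.sh

For our script this should report plain text (for example, ‘ASCII text’), whereas a word-processor document would be reported as something quite different.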
What if we want to select lines from an arbitrary file? We could edit middle.sh each time to change the filename, but that would probably take longer than typing the command out again in the shell and executing it with a new file name. Instead, let’s edit middle.sh and make it more versatile:
jupyter-user:$ nano middle.sh
Now, within nano, replace the text octane.pdb with the special variable called $1:
head -n 15 "$1" | tail -n 5
Inside a shell script, $1 means ‘the first argument on the command line’. We can now run our script like this:
jupyter-user:$ bash middle.sh octane.pdb
ATOM 9 H 1 -4.502 0.681 0.785 1.00 0.00
ATOM 10 H 1 -5.254 -0.243 -0.537 1.00 0.00
ATOM 11 H 1 -4.357 1.252 -0.895 1.00 0.00
ATOM 12 H 1 -3.009 -0.741 -1.467 1.00 0.00
ATOM 13 H 1 -3.172 -1.337 0.206 1.00 0.00
Or on a different file like this:
jupyter-user:$ bash middle.sh pentane.pdb
ATOM 9 H 1 1.324 0.350 -1.332 1.00 0.00
ATOM 10 H 1 1.271 1.378 0.122 1.00 0.00
ATOM 11 H 1 -0.074 -0.384 1.288 1.00 0.00
ATOM 12 H 1 -0.048 -1.362 -0.205 1.00 0.00
ATOM 13 H 1 -1.183 0.500 -1.412 1.00 0.00
We surround $1 with double-quotes for the same reason we put the loop variable inside double-quotes: in case the filename happens to contain any spaces.
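To see why the quotes matter, compare the two forms below; ‘octane copy.pdb’ is a hypothetical filename containing a space, used purely for illustration:

head -n 15 "$1" | tail -n 5    # "$1" is passed to head as a single filename, even if it contains a space
head -n 15 $1 | tail -n 5      # unquoted: 'octane copy.pdb' would be split into the two words 'octane' and 'copy.pdb'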
This works, but it may take the next person who reads middle.sh a moment to figure out what it does. We can improve our script by adding some comments at the top:
jupyter-user:$ nano middle.sh
# Select lines from the middle of a file.
# Usage: bash middle.sh filename end_line num_lines
head -n "$2" "$1" | tail -n "$3"
What if we want to process many files in a single pipeline? For example, if we want to sort our .pdb files by length, we would type:
jupyter-user:$ wc -l *.pdb | sort -n
because wc -l lists the number of lines in the files (recall that wc stands for ‘word count’, and adding the -l option means ‘count lines’ instead) and sort -n sorts things numerically. We could put this in a file, but then it would only ever sort a list of .pdb files in the current directory.
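To make that limitation concrete, such a script would just be the pipeline written out verbatim (a hypothetical sorted-pdb.sh, shown only for contrast):

# Hypothetical inflexible version: always sorts the .pdb files in the current directory.
wc -l *.pdb | sort -n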
If we want to be able to get a sorted list of other kinds of files, we need a way to get all those names into the script. We can’t use $1, $2, and so on because we don’t know how many files there are. Instead, we use the special variable $@, which means ‘all of the command-line arguments to the shell script’. We also should put $@ inside double-quotes to handle the case of arguments containing spaces ($@ is special syntax and is equivalent to "$1" "$2" …).
Here's an example:
jupyter-user:$ nano sorted.sh
# Sort files by their length.
# Usage: bash sorted.sh one_or_more_filenames
wc -l "$@" | sort -n
Now let’s try running it:
jupyter-user:$ bash sorted.sh *.pdb ../creatures/*.dat
9 methane.pdb
12 ethane.pdb
15 propane.pdb
20 cubane.pdb
21 pentane.pdb
30 octane.pdb
163 ../creatures/basilisk.dat
163 ../creatures/minotaur.dat
163 ../creatures/unicorn.dat
596 total
Nelle’s supervisor insisted that all her analytics must be reproducible. The easiest way to capture all the steps is in a script.
First we return to Nelle’s project directory:
jupyter-user:$ cd ../../north-pacific-gyre/
She creates a file using nano…
jupyter-user:$ nano do-stats.sh
…containing the following:
# Calculate stats for data files.
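# Usage: bash do-stats.sh one_or_more_data_filenames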
for datafile in "$@"
do
    echo $datafile
    bash goostats.sh $datafile stats-$datafile
done
She saves this in a file called do-stats.sh so that she can now re-do the first stage of her analysis by typing:
jupyter-user:$ bash do-stats.sh NENE*A.txt NENE*B.txt
She can also do this:
jupyter-user:$ bash do-stats.sh NENE*A.txt NENE*B.txt | wc -l
so that the output is just the number of files processed rather than the names of the files that were processed.