Notes on Lab #11
This lab will introduce you to MPI (Message-Passing Interface), the most popular software platform for HPC (high-performance computing) on clusters (networks of computers connected by a high-speed switch). Because the Unix operating system is by far the most popular OS for HPC, we will start with a discussion of the history and philosophy of Unix, and a crash course on simple Unix tools for software development.
As we will see in our discussion, the Mac OS X operating system is itself a variety of Unix. Since our HBAR cluster also runs Unix (Linux, to be precise), it makes sense to do this lab on a Mac instead of in Windows.
Part I: Your first Unix commands
To get started, click on the Spotlight icon (the magnifying glass) in the upper-right corner of the desktop, use it to search for Terminal, and hit return to launch a terminal window. This is the “virtual terminal” program that you will use to do this lab. You will see a little window where you can type things, next to a prompt – typically a dollar sign, percent sign, or arrow, perhaps with your username and/or the name of the computer and/or directory (the Unix term for folder) you’re working in.
As we did in Matlab, you should get into the habit of using the up- and down-arrow keys (lower right of the keyboard) to retrieve previous commands, rather than re-typing them, which is prone to error. The first Unix command we will use is ls, which in typically terse (cryptic) Unix fashion stands for list directory contents. Enter this command and see what’s in your home directory (where you always start). Then hit the up arrow to retrieve the same command, and enter it again. If you’re a Mac user, you’ll note that this directory contains the same sub-directories (folders) that you’re used to seeing in the Home folder on your Mac. In fact, I will use the terms folder and directory interchangeably to mean the same thing.
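A typical session might look something like this (the exact folders you see will of course vary):
$ ls
Desktop    Documents    Downloads    Movies    Music    Pictures    Public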
Part II: The vi editor
If Unix is the OS for real programmers, then vi is the text editor for real Unix programmers. (In fact, I’m using it right now to edit this web page.) It has a completely stripped-down, keyboard-only interface that saves wear-and-tear on your wrists and puts the entire set of powerful editing commands literally at your fingertips. Switching from using a modern, over-engineered text-editing program like MS Word to using vi is like switching from the family minivan with built-in GPS and Taylor Swift in the CD player to a 1969 Dodge Dart with a V8 engine, standard transmission, and Led Zeppelin on the AM radio. It’s like … actually, it’s just a text editor, I guess.
Really, vi is so simple that you can learn most of it in 10 minutes. So do that on your own now (Parts 1 and 2), while I go listen to some Zeppelin.
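(For quick reference, the only vi commands this lab actually requires are: i to enter insert mode, ESC to get back out of it, :wq to write the file and quit, and :q! to quit without saving if you get into trouble.)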
Part III: Connecting to the HBAR cluster
Okay, I’m back, and it’s time to learn our third Unix command, ssh, which we will use to log on to the hbar cluster. The program that runs in your terminal window, allowing you to talk to Unix, is called a shell, and ssh stands for secure shell: it’s a safe way of connecting to other computers with encrypted communication. Prof. Whitworth has set up a single hbar account that we can all use, whose username and password I will tell you when you’re ready to log in. So to log into hbar you will type ssh username@hbar.wlu.edu at the Unix prompt, where username is the one I will tell you. Unix is very security-minded, so when you respond with your password, you won’t even see a blank space, dot, or other indication for each character that you are typing (which would indicate to someone looking over your shoulder how long your password is).
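The login will look something like this (the exact wording of the password prompt may vary, and remember that nothing will appear as you type the password):
$ ssh username@hbar.wlu.edu
username@hbar.wlu.edu's password:
[cs102@HBAR ~]$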
Once you’re logged in, you’ll want to create your own directory to work in, so you don’t interfere with others sharing the common account. To do this, use the Unix mkdir command with your username as the directory name, and then the cd command to change to that directory. For example, I would type (in bold):
[cs102@HBAR ~]$ mkdir levys
[cs102@HBAR ~]$ cd levys
Part IV: A simple MPI program
This page has a simple illustration of how to write an MPI program, using the classic “Hello World!” example taught to generations of computer science students. On the hbar cluster you can use the vi program to create the file hello.c:
[cs102@HBAR levys]$ vi hello.c
To help avoid typos, I suggest copying and pasting the commands from your web browser to your hbar shell, too – then you can just up-arrow to retrieve them. (In fact, if I see people struggling to type commands into the shell, I’m going to get angry and put Taylor Swift on the speakers or something.) The vi command above will open up an empty file, into which you can insert text by typing a single i, putting you in insert mode. Now you can copy-and-paste from the example into your vi session. You can just copy-and-paste everything between the first two paragraphs – i.e., from the /*The Parallel Hello World Program*/ comment line through the closing }. Once you’re done pasting, hit ESC to get out of insert mode, hit : (colon, i.e. shift-semicolon) to get into last-line (command) mode, and type wq to write (save) the file and quit.
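For reference, here is a sketch of roughly what you should end up with – based on the output format shown below; the version you paste from the linked page may differ in small details:
/* The Parallel Hello World Program */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int node;                              /* this process's rank ("node" number) */

    MPI_Init(&argc, &argv);                /* start up MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &node);  /* ask MPI which process this is */

    printf("Hello World from Node %d\n", node);

    MPI_Finalize();                        /* shut down MPI */
    return 0;
}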
High-performance languages like C use a compiler to convert your human-readable program into low-level commands that can be executed optimally on a particular architecture (like HBAR). To compile your program, issue the following command:
[cs102@HBAR levys]$ mpicc -o hello hello.c
(If you get all kinds of crazy errors, it’s probably because you didn’t copy and paste the code correctly – perhaps you left out the initial or final character or line. In typical Unix fashion, if everything’s okay you get no message, just the prompt.) The mpicc command invokes the MPI C Compiler. The -o option specifies the name of the output file, which is usually the same as the name of the .c file, but without the .c extension.
Now you are ready to run your program. You will use mpirun to run the program on a specified number of processors; for example:
[cs102@HBAR levys]$ mpirun -np 8 hello
If everything is working, you should see an output something like this:
Hello World from Node 0
Hello World from Node 3
Hello World from Node 4
Hello World from Node 5
Hello World from Node 1
Hello World from Node 2
Hello World from Node 6
Hello World from Node 7
(The output may be preceded and followed by some warnings or other messages, but don’t worry about those right now.) Repeat the run several times (up-arrow!), and you’ll see that the node order isn’t always the same. This is because MPI is executing each process (copy of the program) concurrently (simultaneously, independently), with no guarantee of who finishes first. MPI decides internally how to dole out the processes to the processors (individual computing nodes), but to a first approximation we can assume one processor per process.
Part V: Timing
Now we will explore some non-trivial aspects of parallel computing using MPI. Use vi to make a copy of cpi.c, which computes an approximate value of Pi. (As the late, great Leonard Nimoy showed, this task can really tax a computer!) Compile the program as you did with the hello example. To run the new program, you will specify not just the number of processors but also the number of intervals over which to compute the value; for example:
[cs102@HBAR levys]$ mpirun -np 4 cpi 1000
uses four processes and 1000 intervals. Now you can experiment to see how long it takes to execute the program under various combinations of these parameters, using the Unix time command:
[cs102@HBAR levys]$ time mpirun -np 4 cpi 1000
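To see why the run time depends on both parameters, here is a hedged sketch of the core of the standard MPI cpi example – your copy of cpi.c may differ in its details. Each process sums its share of the n intervals of the integral of 4/(1+x²) from 0 to 1 (which equals Pi), and a single MPI_Reduce combines the partial sums at the end:
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int n, myid, numprocs, i;
    double mypi, pi, h, sum, x;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    n = (argc > 1) ? atoi(argv[1]) : 1000;    /* number of intervals */

    h = 1.0 / (double) n;                     /* width of each interval */
    sum = 0.0;
    /* each process handles every numprocs-th interval, offset by its rank */
    for (i = myid + 1; i <= n; i += numprocs) {
        x = h * ((double) i - 0.5);           /* midpoint of the interval */
        sum += 4.0 / (1.0 + x * x);
    }
    mypi = h * sum;                           /* this process's partial result */

    /* combine the partial sums from all processes onto process 0 */
    MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (myid == 0)
        printf("pi is approximately %.16f\n", pi);

    MPI_Finalize();
    return 0;
}
Note that each process does the arithmetic for roughly n/numprocs intervals, while the MPI_Reduce at the end is a single communication step that combines all the partial sums.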
One interesting experiment is to start with just one processor and increase the number of intervals by a factor of 10 until it starts to take a non-trivial amount of time (several seconds) to compute the result. Then add more processors and see what happens. Then, conversely, keep the number of intervals small (like 1000) and see what happens as you increase the number of processors. Does using more processors always give you a faster result? Why not? (Hint: Look at the slides starting at #23 in the lecture notes for this week.)
If you’ve been paying attention, you’ll naturally be wondering how Moore’s Law fits into this. Well, I ran the program with 1000000000 intervals on one of our Mac Minis without MPI, and it took around 11.8 seconds. So you should try to see how many processes you need to beat that on HBAR, and how much better you can do, and include that info in your writeup. While you’re at it, find the average price of a Mac Mini at Apple’s online store, and compare it to the approximately $70,000 that HBAR cost back in 2010. (Of course, HBAR is probably used for more than just computing the digits of Pi!)
In your writeup, include plots for these two experiments, as well as your explanation of the results.
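(One conventional way to present such results – just a suggestion – is to plot speedup, S(p) = T(1)/T(p): the time on one processor divided by the time on p processors. A perfectly parallel program would give S(p) = p.)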
Extra Credit: GalaxSee Quest!
If you’re up for a challenge, try building and running the classic GalaxSee program to illustrate the N-body problem that we’ll study in class.
To get started, download this file to your Mac. Open up another terminal on the Mac, and type
cd Downloads
Now you’ll use scp, the companion program to ssh, to securely copy the file to your directory on hbar. For example, I would do:
scp galaxSeeHPC_source.tar cs102@hbar.wlu.edu:~/levys
Every character counts, so type carefully!
Back on hbar, unpack the new file (the tar flags xvf mean extract, verbosely, from the named file):
[cs102@HBAR levys]$ tar xvf galaxSeeHPC_source.tar
You’ll see a list of new files scroll by, after which you can cd to the new directory containing them:
[cs102@HBAR levys]$ cd GalaxSeeHPC
Now use vi to edit the file named Makefile. Look for the section starting with ###### FFTW OPTIONS, and edit the second and third lines of that section so they look exactly like this:
CFLAGS += -I/export/apps/fftw/3.2.2/include
LIBS += -L/export/apps/fftw/3.2.2/lib
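(These two lines tell the compiler where to find the FFTW library’s header files (-I) and tell the linker where to find the compiled library itself (-L); the paths are specific to where FFTW is installed on HBAR.)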
Save the Makefile and type
[cs102@HBAR levys]$ make
You’ll see a much longer compilation session scroll by, after which you can run the program like this (for one processor):
[cs102@HBAR levys]$ mpirun -np 1 galaxsee simple.gal
You should see some old-school-style ASCII art scrolling by, representing various stages in the evolution of the galaxy. As with the cpi program, try timing with different numbers of processors, and report / plot your results.