The Beowulf Cluster



What is Kenyon's Beowulf cluster?

A Beowulf cluster is a parallel computer made by networking commodity PCs. The name applies despite the fact that our cluster is hardly made up of "commodity" PCs. Our IBM 1350 cluster has thirteen compute nodes, each with two dual-core Opteron processors (for a total of 52 processor cores) and 10 GB of RAM per node. In addition, two similar nodes handle management tasks, interface to the outside world, and serve the 2.6 TB of RAID storage.

Using Kenyon's Beowulf Cluster

Contact Tim Sullivan for an account on the Beowulf cluster. You can log in to the cluster via ssh to cassiopeia.kenyon.edu.
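
For example, from a terminal (substitute your own user name for the placeholder):

ssh your_username@cassiopeia.kenyon.edu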

Kenyon's Beowulf system is set up to have largely the same user interface as the Glenn cluster at the Ohio Supercomputer Center (OSC): we use the PBS batch system and the Maui scheduler. Hence, one can use training materials created by OSC to learn to use our system. Links to much of this material can be found on the training classes page at OSC. Pick a class that seems appropriate for what you need to know; the information you will want is in the course handouts linked at the bottom of the course description pages.

However, submitting a batch job is a little different on cassiopeia than on the OSC cluster. The suggested basic batch file is:

#PBS -l walltime=00:01:00    # maximum run time (hh:mm:ss)
#PBS -l nodes=8:ppn=4        # request 8 nodes with 4 processors per node
#PBS -N PHW                  # job name
#PBS -o PHW.out              # file for the job's standard output
#PBS -j oe                   # merge standard error into standard output

set -x                       # echo each command as it executes
cd $PBS_O_WORKDIR            # move to the directory the job was submitted from
/usr/local/mvapich/bin/mpirun -np 32 ParallelHelloWorld    # run on all 8 x 4 = 32 processors
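
For reference, here is one way to compile and submit such a job using the standard PBS commands. The mpicc path is an assumption modeled on the mpirun path above, and the source and script file names are hypothetical:

/usr/local/mvapich/bin/mpicc -o ParallelHelloWorld ParallelHelloWorld.c    # compile the MPI program (mpicc path assumed)
qsub phw.pbs        # submit the batch script; qsub prints the job ID
qstat -u $USER      # check the status of your jobs
qdel <job_id>       # remove a job from the queue if necessary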

Also note that, to discourage any one user from tying up the entire cluster, jobs are limited to no more than 8 nodes. Since there are four processors per node, this means the maximum number of processors your job can request is 32.
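
Before sizing a job, you can check how busy the cluster is with the standard PBS and Maui status commands (assuming they are on your path, as is typical for such installations):

pbsnodes -a    # list the compute nodes and their current states (PBS)
showq          # show running, idle, and blocked jobs (Maui scheduler)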
