A Beowulf cluster is a parallel computer made by networking commodity PCs. This name applies despite the fact that our cluster is hardly made up of "commodity" PCs. Our IBM 1350 cluster has thirteen compute nodes, each with two dual-core Opteron processors (for a total of 52 processor cores) and GB of RAM per node. In addition, there are two nodes, similar to the ones above, that handle management tasks, provide the interface to the outside world, and serve the 2.6 TB of RAID storage.
Using Kenyon's Beowulf Cluster
See Sullivan for an account on the Beowulf cluster. You can log onto the cluster by ssh to cassiopeia.kenyon.edu.
Kenyon's Beowulf system is set up to have mostly the same user environment as the Glenn cluster at the Ohio Supercomputer
Center (OSC). We use the PBS batch system and the MAUI scheduler.
Hence, one can use training materials created by OSC to learn to use our system. Links to much of this material can be found on the list of training classes at OSC. Pick a class that seems appropriate for what you need to know. The actual information you will want is in the course handouts that are linked to at the bottom of the page of course descriptions. Of particular note is how MPI programs are launched, for example:
/usr/local/mvapich/bin/mpirun -np 32 ParallelHelloWorld
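On a PBS/Maui system such as this one, an mpirun command like the one above is normally placed in a batch script and submitted to the scheduler rather than run interactively on a login node. A minimal sketch of such a script follows; the job name, walltime, and executable name are placeholders, and only the mpirun path is taken from this document:

```
#!/bin/bash
#PBS -N ParallelHelloWorld       # job name (placeholder)
#PBS -l nodes=8:ppn=4            # 8 nodes x 4 cores = 32 processes
#PBS -l walltime=0:10:00         # requested run time (assumption)
#PBS -j oe                       # merge stdout and stderr into one file

# PBS starts the job in the home directory; move to the submission directory
cd $PBS_O_WORKDIR

/usr/local/mvapich/bin/mpirun -np 32 ParallelHelloWorld
```

The script would be submitted with `qsub`, e.g. `qsub hello.pbs`, and its status checked with `qstat`.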
Also note that, to discourage any one user from tying up the entire cluster, jobs are limited to no more than 8 nodes. Since there are four processor cores per node, this means the maximum number of processors your job can request is 32.
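In PBS terms, this cap corresponds to the largest resource request a job may make. A request at the limit would look like the following (the script name is a placeholder):

```
qsub -l nodes=8:ppn=4 myjob.pbs
```

Requests for more than 8 nodes (or more than 4 processors per node) will not be scheduled.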