JLab HPC Clusters
The Scientific Computing group is working with JLab's Theory group to deploy a sequence of high-performance clusters in support of Lattice Quantum Chromodynamics (LQCD). LQCD is the numerical approach to solving QCD, the fundamental theory of quarks and gluons and their interactions. This computationally demanding suite of applications is key to understanding the experimental program of the laboratory.
Three clusters are currently deployed:
Decommissioned Intel Clusters
The 2004 cluster consisted of 384 nodes arranged as a 6x8x2^3 mesh (torus). The nodes were interconnected using three dual gigE cards plus one half of the dual gigE NIC on the motherboard (the other half was used for file services). This 5D wiring could be configured as various 3D tori (4D and 5D running was possible in principle, but was less efficient and so not used). Nodes were single-processor 2.8 GHz Xeons with an 800 MHz front-side bus, 512 MB of memory, and a 36 GB disk. This cluster achieved approximately 0.7 teraflops sustained.
Message passing on this novel architecture was done using an application-optimized library, QMP, whose implementations will also target other custom LQCD machines. The lowest levels of the communications stack were implemented using a VIA driver, and for multiple-link transfers VIA data rates approaching 500 MB/s per node were achieved.
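QMP hides the details of the torus wiring behind a grid-oriented message-passing interface. As a rough illustration of the nearest-neighbor halo-exchange pattern such a machine runs, here is a hedged sketch using standard MPI Cartesian-topology calls rather than the QMP API itself; the 6x8x8 folding of the physical mesh is an assumption made for the example.

```c
/* Sketch only: nearest-neighbor exchange on a 3D torus, expressed with MPI
 * Cartesian topologies as an analogy for what QMP provides.  Run with 384
 * ranks (e.g. mpirun -np 384), or adjust dims[] to match the rank count. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int dims[3]    = {6, 8, 8};   /* assumed folding of the 6x8x2^3 mesh */
    int periods[3] = {1, 1, 1};   /* periodic in every dimension (torus) */
    MPI_Comm torus;
    MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, /*reorder=*/1, &torus);

    int rank, coords[3];
    MPI_Comm_rank(torus, &rank);
    MPI_Cart_coords(torus, rank, 3, coords);

    /* Exchange one value with both neighbors along each dimension -- the
     * pattern a lattice Dirac-operator halo update generates; real code
     * would exchange whole lattice faces instead of a single double. */
    double sendbuf = (double)rank, recvbuf;
    for (int dim = 0; dim < 3; ++dim) {
        int lo, hi;
        MPI_Cart_shift(torus, dim, 1, &lo, &hi);
        MPI_Sendrecv(&sendbuf, 1, MPI_DOUBLE, hi, 0,
                     &recvbuf, 1, MPI_DOUBLE, lo, 0,
                     torus, MPI_STATUS_IGNORE);
    }

    printf("rank %d at (%d,%d,%d) done\n", rank, coords[0], coords[1], coords[2]);
    MPI_Finalize();
    return 0;
}
```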
The cluster deployed in 2003 comprised 256 nodes configured as a 2x2x4x8 mesh. Nodes were interconnected using three dual gigE cards (one per dimension) plus one on-board link, an approach which delivered high aggregate bandwidth while avoiding the expense of a high-performance switch. Nodes were single-processor 2.67 GHz Pentium 4 Xeons with 256 MB of memory. The cluster achieved 0.4 teraflops on LQCD applications. Additional information can be found in an Intel case study (pdf file).
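For illustration, one conventional way to linearize the 2x2x4x8 grid and locate a node's two neighbors in each dimension (each dimension having its own gigE link) is sketched below. The dimension ordering and rank convention are assumptions for the example, not the cluster's actual wiring map.

```c
/* Sketch only: map a linear node rank onto an assumed 2x2x4x8 grid and find
 * its periodic neighbors along each of the four dimensions. */
#include <stdio.h>

#define NDIM 4
static const int dims[NDIM] = {2, 2, 4, 8};   /* 2*2*4*8 = 256 nodes */

/* rank -> coordinates (row-major, last dimension varying fastest) */
static void rank_to_coords(int rank, int coords[NDIM])
{
    for (int d = NDIM - 1; d >= 0; --d) {
        coords[d] = rank % dims[d];
        rank /= dims[d];
    }
}

/* coordinates -> rank (inverse of the mapping above) */
static int coords_to_rank(const int coords[NDIM])
{
    int rank = 0;
    for (int d = 0; d < NDIM; ++d)
        rank = rank * dims[d] + coords[d];
    return rank;
}

int main(void)
{
    int coords[NDIM];
    rank_to_coords(100, coords);
    printf("rank 100 -> (%d,%d,%d,%d)\n",
           coords[0], coords[1], coords[2], coords[3]);

    /* Neighbors of rank 100 along each dimension, with periodic wrap-around. */
    for (int d = 0; d < NDIM; ++d) {
        int up[NDIM], down[NDIM];
        for (int i = 0; i < NDIM; ++i) up[i] = down[i] = coords[i];
        up[d]   = (coords[d] + 1) % dims[d];
        down[d] = (coords[d] - 1 + dims[d]) % dims[d];
        printf("dim %d: down=%d up=%d\n",
               d, coords_to_rank(down), coords_to_rank(up));
    }
    return 0;
}
```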
The oldest cluster, now decommissioned, was installed in 2002. It contained 128 single-processor 2.0 GHz Xeon nodes connected by a Myrinet network. It achieved 1/8 teraflops of performance (Linpack), contained 65 GB of total physical memory, and delivered 270 MB/s of simultaneous node-to-node aggregate network bandwidth. All nodes ran Red Hat Linux release 7.3 with kernel version 2.4.18. Additional configuration information is available.