About CMMS

The University of Pittsburgh established the Center for Molecular and Materials Simulations (CMMS) in 2000 to provide computational resources to researchers in the sciences and engineering. The effort to establish CMMS was spearheaded by researchers in the Departments of Chemistry and Chemical Engineering, with support from the College of Engineering and the School of Arts and Sciences.

When originally established in 2000, the resources in CMMS included a 50-CPU IBM RS/6000 POWER3 workstation cluster and a 9-processor Pentium III cluster. Sixteen of the RS/6000 machines were connected optically via switched Gigabit Ethernet to support parallel applications using up to 32 CPUs. Funding for this hardware was provided by the Major Research Instrumentation (MRI) program of the National Science Foundation and the SUR program of IBM.

Between 2000 and 2003, CMMS added several new computer systems, including four 4-processor IBM 44p workstations, a 32-processor 1.0 GHz Pentium III cluster, a 32-processor Athlon MP 1700+ cluster, and a 20-processor Athlon MP 2200+ cluster.

In the fall of 2004, two new clusters were added: one with six 8-CPU IBM POWER4+ p655 nodes, and the other with 80 nodes, each containing two 2.4 GHz Opteron CPUs. The Opteron nodes are connected via Gigabit Ethernet. These new systems were funded by an MRI grant from the NSF and a SUR grant from IBM. In 2005, six more Opteron nodes, each with two dual-core CPUs, were added to the cluster.

In January 2007, a 24-node cluster with an InfiniBand network was installed. Each node of this cluster has two dual-core 2.6 GHz Opteron 2218 CPUs. Funding for this cluster was provided by the University.

In 2008 the computational facilities were further enhanced with the addition of 66 nodes, each with two quad-core Xeon E5430 CPUs running at 2.66 GHz and between 8 and 16 GB of memory. All Xeon nodes are interconnected with a low-latency InfiniBand fabric. These Xeon nodes were funded by the University. After this addition, the cluster had a total of 964 cores dedicated to CMMS researchers.

June 2009 brought the most significant hardware upgrade yet for CMMS: 98 nodes were purchased from Penguin Computing, together with a QDR InfiniBand fabric. Each of the 98 nodes contains two quad-core Intel Xeon E5550 (Nehalem) CPUs; 90 of the nodes have 12 GB of RAM, and 8 have 48 GB. The latter eight nodes also have 1.8 TB of local disk for I/O-heavy calculations. The Penguin nodes are managed by Scyld ClusterWare, and job scheduling is handled by Taskmaster/Moab. With the addition of these nodes, 1,748 cores are now available to CMMS users. Funding for the Penguin Computing purchase was provided by the University.

See here for the most up-to-date hardware details.

Since October 2008, CMMS has worked in conjunction with the Center for Simulation and Modeling (SaM) to facilitate high-performance computing research at the University of Pittsburgh. The documentation and forums website for both CMMS users and the SaM community can be found here.