What is the GHPCC?
The Massachusetts Green High Performance Computing Center (MGHPCC) is a unique initiative by the University of Massachusetts, Boston University, Harvard University, MIT, Northeastern University, and the Commonwealth of Massachusetts to deliver a world-class computational infrastructure to the research and academic community, indispensable in the increasingly data-rich environment of modern science. As computation becomes ever more integral to basic and applied research, the MGHPCC represents a critical piece of infrastructure that will fuel the innovation economy of the Commonwealth.
The five campuses of the University of Massachusetts form a "consortium within the consortium" of the MGHPCC, and have deployed their own shared high performance computing cluster at the MGHPCC facility in Holyoke. The rationale for a UMass HPC cluster is simple: it is a cost-effective and powerful resource for the UMass research community. The UMass HPC is governed by a research advisory council and a user group.
For purposes of this wiki, we generally refer to the UMass owned and managed cluster as GHPCC.
The GHPCC is a Linux-based cluster. The node images are currently based on Red Hat Linux, with the exception of the single SGI UV2 system, which runs SUSE Linux Enterprise Server.
Job Scheduling Software
To manage cluster resources, GHPCC uses Platform LSF from IBM. All processing on the cluster must be submitted through LSF, which schedules jobs to optimize utilization and minimize conflicts.
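As a hypothetical illustration of how an LSF batch job is typically submitted (the queue name, resource limits, job name, and script name below are assumptions for illustration, not the cluster's actual configuration), a job script carries `#BSUB` directives and is handed to the `bsub` command:

```shell
#!/bin/bash
# myjob.lsf -- example LSF job script; all values are illustrative
#BSUB -J align_sample1        # job name (assumed)
#BSUB -n 4                    # number of cores requested
#BSUB -R "rusage[mem=4096]"   # memory reservation, in MB
#BSUB -W 4:00                 # wall-clock limit (hh:mm)
#BSUB -q long                 # queue name (site-specific; check with bqueues)
#BSUB -o job_%J.out           # stdout file (%J expands to the job ID)
#BSUB -e job_%J.err           # stderr file

./run_alignment.sh            # the actual workload (hypothetical script)
```

The script would be submitted with `bsub < myjob.lsf`; running jobs can then be monitored with `bjobs` and cancelled with `bkill <jobid>`.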
Please see the Governance page here.
The current Acceptable Use policy is available here.
Statements for grant applications
Current Resources (NIH formatted)
Analysis of high-throughput sequencing data is performed on a shared high performance computing cluster with 5,312 cores and 400 TB of high-performance EMC Isilon X-series storage. The Massachusetts Green High Performance Computing Center is located in Holyoke, MA and provides computing to the five University of Massachusetts campuses. The High Performance Computing Cluster (HPCC) consists of the following hardware: an FDR InfiniBand (IB) network and a 10GbE network for the storage environment; three (3) GPU nodes (Intel, 256GB RAM), each with two NVIDIA Tesla C2075 GPU computing processors (6 GB GDDR5, PCI Express 2.0 x16); seven (7) AMD-based Dell chassis (2x AMD Opteron 6278, 2.4GHz, 16C, Turbo CORE, 16M L2/16M L3, 1600MHz) with 64 cores / 512GB RAM per blade (42 blades); two (2) Intel-based chassis (Xeon E5-2650, 2.00GHz, 20M cache, 8.0GT/s QPI, Turbo, 8C, 95W, max memory speed 1600MHz) with 16 cores / 196GB RAM per blade (16 blades); two (2) SGI UV2 systems with 512 Intel Xeon E5-4600 cores and 4TB of fully addressable memory; one (1) AMD-based Dell chassis with 128 cores (Quad-Core AMD Opteron 2376) and 256GB RAM; and three (3) Intel-based Dell chassis (six-core Xeon X5650 @ 2.67GHz) with 12 cores / 48GB RAM per blade (16 blades). The HPC environment runs the IBM LSF scheduling software for job management.