Resources at National PARAM Supercomputing Facility (NPSF)
I. Computer resources

PARAM Yuva II has four compute sub-clusters.
- The first sub-cluster is a 218-node cluster, with each node having two Intel Xeon E5-2670 (Sandy Bridge) processors (eight cores each) with a clock speed of 2.6 GHz, and an FDR InfiniBand interconnect. Each of these nodes also has two Intel Xeon Phi 5110P co-processors, each with 60 cores running at a clock speed of 1.0 GHz and 8 GB of RAM, to boost its computing power (see the offload sketch after this list).
- The second sub-cluster is a 100+ node cluster, with each node having four Intel Xeon X7350 (Tigerton) processors (four cores each) with a clock speed of 2.93 GHz, and both PARAMNet3 and DDR InfiniBand interconnects.
- The third sub-cluster is a four-node cluster, with each node having two Intel Xeon E5-2650 (Sandy Bridge) processors (eight cores each) with a clock speed of 2.0 GHz, and an FDR InfiniBand interconnect. Each of these nodes has two NVIDIA Tesla M2090 GPU cards to accelerate performance.
- The fourth sub-cluster consists of four AMD Opteron 6276 processors (sixteen cores each) with a clock speed of 2.3 GHz. It is connected to the rest of the cluster by Gigabit Ethernet as well as FDR InfiniBand interconnects.
All nodes have a minimum of 64 GB of RAM; eight nodes are equipped with 128 GB of RAM, and the fourth sub-cluster's node(s) have 512 GB of RAM.
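Because the Xeon Phi 5110P cards are co-processors rather than stand-alone hosts, applications usually mark the loops they want accelerated explicitly. The following is a minimal sketch of that pattern using the Intel compiler's offload pragmas (the Intel C/C++ compiler appears in section III); the array length, the OpenMP parallelization, and the build line are illustrative assumptions, not NPSF-prescribed settings.

```c
/* Minimal sketch: offload a vector sum to a Xeon Phi 5110P with the
 * Intel compiler's offload pragmas.
 * Assumed build line (illustrative): icc -std=c99 -openmp offload.c */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const int n = 1000000;               /* illustrative size */
    float *a = malloc(n * sizeof *a);
    float *b = malloc(n * sizeof *b);
    float *c = malloc(n * sizeof *c);

    for (int i = 0; i < n; i++) {
        a[i] = (float)i;
        b[i] = 2.0f * (float)i;
    }

    /* Run the loop on co-processor 0: a and b are copied in,
     * and c is copied back when the offload region completes. */
    #pragma offload target(mic:0) in(a, b : length(n)) out(c : length(n))
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        c[i] = a[i] + b[i];

    printf("c[42] = %f\n", c[42]);
    free(a); free(b); free(c);
    return 0;
}
```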
II. 3-Tier storage

High Performance Computing requires a single system image of the application to enable its execution across the nodes using the Message Passing Interface (MPI). The overall performance of an application on an HPC system depends not only on the memory bandwidth and the sustained compute performance, but also on the bandwidth available for data transfer (data being read or written by the application) during execution.
The National PARAM Supercomputing Facility has state-of-the-art 3-tier HPC storage, offering high I/O bandwidth, high availability, and large storage capacity, integrated with the PARAM Yuva II HPC cluster. The 3-tier HPC storage on PARAM Yuva II consists of three major sub-systems, as listed below.
- Tier 1 (primary storage):
  - 100 TB file system capacity.
  - Aggregate I/O bandwidth of 1 GB/s WRITE and 1 GB/s READ simultaneously.
  - No single point of failure.
- Tier 2 (high-bandwidth parallel storage):
  - 100 TB file system capacity.
  - 10 GB/s sustained aggregate WRITE bandwidth to a single file using multiple clients.
- Tier 3 (backup and archival tape library):
  - 400 TB native backup capacity (800 TB with compression).
  - 16 Ultrium LTO-4 drives.
This storage serves both the bandwidth requirements of I/O-intensive applications and the storage demands of the data generated by users' scientific applications across various domains.
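The second tier's 10 GB/s figure refers to many compute nodes writing one shared file concurrently, an access pattern typically expressed with MPI-IO. Below is a minimal sketch of such a write, with each MPI rank writing a disjoint block of a single file; the file name, block size, and choice of the collective write call are illustrative assumptions.

```c
/* Minimal sketch: all ranks write disjoint blocks of one shared
 * file using MPI-IO's collective write-at. File name and block
 * size are illustrative assumptions. */
#include <mpi.h>
#include <stdlib.h>

enum { COUNT = 1 << 20 };   /* doubles per rank (8 MiB), illustrative */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double *buf = malloc(COUNT * sizeof *buf);
    for (int i = 0; i < COUNT; i++)
        buf[i] = rank + i * 1e-6;

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "shared.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);

    /* Each rank writes its own disjoint block of the same file. */
    MPI_Offset off = (MPI_Offset)rank * COUNT * sizeof(double);
    MPI_File_write_at_all(fh, off, buf, COUNT, MPI_DOUBLE,
                          MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}
```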
III. Software, Tools and Libraries

The PARAM Yuva II HPC cluster software environment consists of a large number of software packages, libraries, utilities, and tools to cater to the needs of a vast and diverse pool of users from the scientific community.

Intel Cluster Studio XE 2013
- Intel Fortran Compiler XE, Version 13.1
- Intel C++ Compiler XE, Version 13.1
- Intel Threading Building Blocks 4.1
- Intel MPI Library 4.1
- Intel Math Kernel Library 11.0
- Intel Integrated Performance Primitives 7.1
- Intel Advisor XE 2013
- Intel Inspector XE 2013
- Intel Trace Analyzer and Collector 8.1
- Intel VTune Amplifier XE 2013

Other compilers and libraries
- PGI parallel Fortran, C, and C++ compilers v13.4
- AMD Core Math Library (ACML) v5.3
- PGI Debugger v13.4
- Hierarchical Data Format (HDF5) Library v1.8.11
- Network Common Data Form (NetCDF) Library v4.0
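As an illustration of using this stack, the following minimal sketch calls the CBLAS interface of the Intel Math Kernel Library listed above; the 2x2 matrices are arbitrary, and the build line (e.g. icc dgemm.c -mkl) is an assumption that may vary with the installed environment.

```c
/* Minimal sketch: a 2x2 double-precision matrix multiply through
 * MKL's CBLAS interface. Assumed build line: icc dgemm.c -mkl */
#include <stdio.h>
#include <mkl.h>

int main(void)
{
    double A[4] = {1, 2, 3, 4};   /* 2x2, row-major */
    double B[4] = {5, 6, 7, 8};
    double C[4] = {0, 0, 0, 0};

    /* C = 1.0 * A * B + 0.0 * C */
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 2, 1.0, A, 2, B, 2, 0.0, C, 2);

    printf("C = [%g %g; %g %g]\n", C[0], C[1], C[2], C[3]);
    return 0;
}
```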
Contact Us
Centre for Development of Advanced Computing
The National PARAM Supercomputing Facility (NPSF)
Pune University Campus, Ganeshkhind
Pune-411007
Phone No.: +91-20-25704183
Email: npsfhelp[at]cdac[dot]in