BRAF HPC Resources
PARAM BioEmbryo
The PARAM BioEmbryo cluster is a 100-teraflop machine that achieves its performance in a 30:70 (CPU:GPU) ratio. PARAM BioEmbryo is based on AMD EPYC processors with the Zen 2 microarchitecture. It has 16 CPU-only nodes and 4 GPU nodes with 2 GPU cards per node. Each node is a dual-socket node with two 32-core AMD processors, giving 1280 CPU cores in the cluster. The cluster has 5120 GB of RAM distributed across the 20 nodes. A Lustre-based parallel file system (PFS) with 240 TB of usable capacity is mounted across the cluster nodes. The primary interconnect is 100 Gbps EDR InfiniBand.
It runs the open-source CentOS operating system and is configured with an OpenHPC cluster stack that includes compilers and libraries. Widely used bioinformatics applications are pre-installed for users.
Hardware Overview:
| Cluster Details | |
| --- | --- |
| Peak performance | 100 TF |
| Number of master nodes | 2 |
| Number of compute nodes | 20 |
| Node type | Dual-socket rack-based servers |
| Processor | AMD EPYC 7502 (2 x 32-core per node) |
| Total RAM | 5120 GB distributed across 20 nodes |
| Total storage | Lustre PFS with 240 TB usable space |
| Interconnect | 100 Gbps EDR InfiniBand |
| Other | Rack enclosure with contained cooling and backup |
| CPU-only nodes | |
| --- | --- |
| No. of nodes | 16 |
| Node specification | Dual-socket node with 64 cores total |
| Processor | 2 x AMD EPYC 7502 |
| RAM per node | 256 GB |

| GPU nodes | |
| --- | --- |
| No. of nodes | 4 |
| Node specification | Dual-socket node with 64 cores total |
| Processor | 2 x AMD EPYC 7502 |
| RAM per node | 256 GB |
| GPU | 2 x NVIDIA Tesla V100 |
Software Overview:

| Component | Details |
| --- | --- |
| Operating system | CentOS 7.7.1908 |
| Cluster suite | OpenHPC |
| Scheduler | SLURM |
| Compilers | GCC, AOCC |
| Libraries | ACML |
| Application software | GROMACS, NAMD, BWA, BLAST, GATK |
Figure: PARAM BioEmbryo software stack.
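Jobs on PARAM BioEmbryo are submitted through the SLURM scheduler. The script below is a minimal sketch of how a GROMACS run might target one GPU node; the partition name (`gpu`), the module name, and the input files are illustrative assumptions rather than documented cluster settings, so check `sinfo` and `module avail` for the actual names.

```bash
#!/bin/bash
#SBATCH --job-name=gmx-md          # job name shown in the queue
#SBATCH --nodes=1                  # one GPU node (dual-socket, 64 cores)
#SBATCH --ntasks=1                 # single process; GROMACS uses thread-MPI here
#SBATCH --cpus-per-task=64         # all 64 cores of the node
#SBATCH --gres=gpu:2               # both V100 cards on the node
#SBATCH --partition=gpu            # assumed partition name; verify with sinfo
#SBATCH --time=24:00:00            # wall-clock limit

# Module name is an assumption; see `module avail` for the installed build
module load gromacs

# 2 thread-MPI ranks x 32 OpenMP threads = 64 cores, one rank per GPU
gmx mdrun -ntmpi 2 -ntomp 32 -gpu_id 01 -deffnm md_run
```

A CPU-only run would drop the `--gres` line and point `--partition` at the CPU-only nodes instead.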
PARAM BioInferno
The PARAM BioInferno cluster is the newest addition to BRAF and extends its capability to handle Big Data applications alongside traditional HPC applications. PARAM BioInferno is based on a design architecture that serves both traditional and emerging computing needs: it consists of heterogeneous hardware with hybrid capabilities for executing HPC as well as Big Data jobs. The primary interconnect is a 100 Gbps HDR InfiniBand network.
Hardware Overview:
| Cluster Details | |
| --- | --- |
| Peak performance | ~150 TF |
| Number of nodes | 34 |
| Node type | Heterogeneous: CPU-only, high-memory, GPU, SMP, and vector nodes |
| Processor | AMD EPYC 7502 / Intel Xeon Platinum 8260 / Intel Xeon Gold 6226 (see node details below) |
| Total RAM | 25 TB distributed across the cluster |
| Total storage | |
| Interconnect | 100 Gbps HDR InfiniBand |
| CPU-only nodes | |
| --- | --- |
| No. of nodes | 24 |
| Node specification | Dual-socket node with 64 cores total |
| Processor | 2 x AMD EPYC 7502 |
| RAM per node | 512 GB |
| Local storage | 32 GB HDD |

| High-memory nodes | |
| --- | --- |
| No. of nodes | 4 |
| Node specification | Dual-socket node with 64 cores total |
| Processor | 2 x AMD EPYC 7502 |
| RAM per node | 1024 GB |
| Local storage | 32 GB HDD |

| GPU nodes | |
| --- | --- |
| No. of nodes | 4 |
| Node specification | Dual-socket node with 64 cores total |
| Processor | 2 x AMD EPYC 7502 |
| RAM per node | 512 GB |
| Local storage | 32 GB HDD |
| Accelerator | 2 x NVIDIA Tesla V100 |
| SMP node | |
| --- | --- |
| No. of nodes | 1 |
| Node specification | 8-socket SMP node in a 4U chassis, 192 cores total |
| Processor | 8 x Intel Xeon Platinum 8260 |
| RAM per node | 6 TB |

| Vector node | |
| --- | --- |
| No. of nodes | 1 |
| Node specification | Dual-socket node with 24 cores total |
| Processor | 2 x Intel Xeon Gold 6226 |
| Accelerator | 8 x NEC Vector Engine Type 10BE cards |
| RAM per node | 196 GB |
Software Overview:

| Component | Details |
| --- | --- |
| Operating system | CentOS 7.7.1908 |
| Cluster suite | Vendor customized |
| Scheduler | SLURM |
| Compilers | GCC, AOCC |
| Libraries | ACML |
| Application software | GROMACS, NAMD, BWA, BLAST, GATK, HBAT |
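SLURM also fronts PARAM BioInferno's heterogeneous nodes, so memory-hungry steps can be steered to the 1024 GB high-memory nodes. The sketch below shows how a GATK GenomicsDBImport job might be sized for one of those nodes; the partition name (`highmem`), the module name, and the input files are illustrative assumptions to be verified against the live cluster configuration.

```bash
#!/bin/bash
#SBATCH --job-name=gatk-import     # job name shown in the queue
#SBATCH --nodes=1
#SBATCH --ntasks=1                 # single Java process
#SBATCH --cpus-per-task=16         # CPU threads reserved for the job
#SBATCH --mem=900G                 # fits inside a 1024 GB high-memory node
#SBATCH --partition=highmem        # assumed partition name; verify with sinfo
#SBATCH --time=48:00:00

# Module name is an assumption; see `module avail` for the installed build
module load gatk

# Consolidate per-sample GVCFs; Java heap sized below the node's physical RAM.
# samples.map and intervals.list are placeholder input files.
gatk --java-options "-Xmx800g" GenomicsDBImport \
     --genomicsdb-workspace-path cohort_db \
     --sample-name-map samples.map \
     -L intervals.list
```

Jobs for the SMP or vector node would follow the same pattern with the corresponding partition selected.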