Universiteit Leiden


Theoretical Chemistry

Computational facilities

Beowulf clusters looking for new frontiers.

In total we host about 335 nodes with roughly 6,200 cores in the server room of the Gorlaeus, divided over nine racks and organised into the following clusters:

Milan

This machine is a 28-node dual-socket AMD EPYC 7313 cluster, boosting up to 3.7 GHz, based on Dell PowerEdge R6525 servers. Each node has 128 GB of DDR4-3200 MT/s memory, a local scratch of 4x 4 TB (SATA) disks in RAID-5 (mdadm) and 32 cores (64 threads). In total 896 cores are available in 31U. The nodes are connected via 10 Gb/s Ethernet using a Dell EMC N3248X-ON switch. The head node is a Dell PowerEdge R7515 with a single-socket 32-core AMD EPYC 7452 at 2.3 GHz, also with 128 GB of DDR4-3200 MT/s memory, and has 8x 8 TB SATA disks in RAID-10 for a 30 TB /home on XFS. This cluster runs Rocky 8 with OpenMPI 4.1.1, SLURM 21.08.03 and GlusterFS 9.4.
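
As a minimal illustration of how this stack is used, the following mpi4py sketch reports the rank, size and host of each MPI process; it assumes mpi4py has been built against the system OpenMPI, and the srun options shown in the comment are only an example that may need adjusting to the local SLURM configuration.

    # hello_mpi.py - sanity check of the OpenMPI/SLURM stack via mpi4py.
    # Assumes mpi4py is installed against the system OpenMPI. Example launch
    # (options are illustrative and depend on the local SLURM configuration):
    #   srun -N 2 --ntasks-per-node=32 python3 hello_mpi.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()           # rank of this process
    size = comm.Get_size()           # total number of MPI ranks
    host = MPI.Get_processor_name()  # node this rank landed on

    print(f"rank {rank:4d} of {size} on {host}")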

Epyc

This machine is a 26-node dual-socket AMD EPYC 7351 cluster running at 2.4 GHz. Each node has 128 GB of DDR4-2666 MT/s memory, a local scratch of 4x 4 TB (SATA) disks in RAID-5 (mdadm) and 32 cores (64 threads). In total 832 cores are available in 30U. Gigabyte mainboards are used with Intel X550T 10 Gb/s Ethernet interfaces, connected through a 48-port NetGear M4300-48X 10 Gb/s switch. The head node has 8x 4 TB SATA disks in RAID-10 for /home and is based on a SuperMicro chassis with an H11DSi-NT mainboard; it also has 128 GB of DDR4-2666 MT/s memory. This cluster runs Rocky 9 with OpenMPI 4.1.5, SLURM 22.05.02 and GlusterFS 11.
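
Since the local scratch on these nodes is an mdadm RAID-5, a quick health check can be scripted from Python by parsing /proc/mdstat; the sketch below is generic Linux and assumes nothing cluster-specific.

    # md_health.py - report the state of all Linux software-RAID (mdadm)
    # arrays by parsing /proc/mdstat; a '_' in the trailing [UUUU]-style
    # pattern indicates a failed member disk.
    import re

    def md_arrays(path="/proc/mdstat"):
        """Yield (array, personality, health) for every md device found."""
        with open(path) as fh:
            lines = fh.readlines()
        for i, line in enumerate(lines):
            m = re.match(r"^(md\d+)\s*:\s*(\S+)\s+(\S+)", line)
            if not m:
                continue
            name, state, personality = m.groups()
            status = lines[i + 1].strip() if i + 1 < len(lines) else ""
            degraded = status and "_" in status.split()[-1]
            yield name, personality, "DEGRADED" if degraded else "ok"

    if __name__ == "__main__":
        for name, personality, health in md_arrays():
            print(f"{name:8s} {personality:8s} {health}")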

VIDI

This Beowulf cluster contains 38 compute nodes, each with two octa-core 2.4 GHz EM64T Xeon E5-2630 v3 processors supporting AVX, all with Hyper-Threading and Turbo Boost enabled. In total 608 cores are available for computing. The nodes are connected by a SuperMicro 1 Gb/s switch. Each node has 64 GB of DDR3-1866 RAM and four 3 TB SATA hard disks, configured as a RAID-1 (OS) and a RAID-5 (scratch). The head node offers a total storage capacity of 11 TB using software RAID-10 over eight 3 TB SATA disks, with the system disks running from a software RAID-1 configuration over the same eight disks. GlusterFS 3.7 is used to aggregate a global distributed scratch storage over the nodes. This system runs Rocky 9 with SLURM 22.05.2 and OpenMPI 4.1.4.
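
With Hyper-Threading enabled each node exposes 32 logical CPUs for its 16 physical cores; the short standard-library sketch below shows how a job can inspect which logical CPUs the scheduler actually granted it.

    # affinity.py - compare the node's logical CPU count with the CPUs this
    # process may actually run on (e.g. inside a SLURM job step).
    # Standard library only; sched_getaffinity is Linux-specific.
    import os

    logical = os.cpu_count()                   # all logical CPUs on the node
    allowed = sorted(os.sched_getaffinity(0))  # CPUs in this process's cpuset

    print(f"logical CPUs on node: {logical}")
    print(f"CPUs granted to job : {len(allowed)} -> {allowed}")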

ERC2

This Beowulf cluster contains 121 nodes, each with two octa-core 2.4 GHz EM64T Xeon E5-2630 v3 processors supporting AVX, all with Hyper-Threading and Turbo Boost enabled. In total 1920 cores are available for computing. The nodes are located in three 19” racks and are connected by three 10 Gb/s SuperMicro switches using Cat 6+ cabling. Each switch in each rack defines a subnet, and IP traffic is routed between the switches over two 40 Gb/s copper links, resulting in a triangle topology with an aggregated bandwidth of 80 Gb/s. Each node has 64 GB of DDR3-1866 RAM and four 2 TB SATA hard disks in a RAID-5 for a local XFS scratch file system. The OS on each node runs from a four-way software RAID-1 over these disks using the ext4 file system. The head node offers a total storage capacity of 7 TB using software RAID-10 over eight 2 TB SATA disks and XFS, with the OS running from a software RAID-1 configuration over the same eight disks using ext4. GlusterFS 3.7.6 is used to aggregate a global distributed scratch storage over the nodes. This system runs Rocky 9 with SLURM 22.05.2 and OpenMPI 4.1.4.
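
The quoted capacities follow from the RAID levels and from the difference between the vendor's decimal terabytes and the binary units reported by the operating system; the small worked example below reproduces the numbers for the node scratch and the head-node array.

    # raid_capacity.py - back-of-the-envelope usable capacities for the RAID
    # layouts described above. Disks are sold in decimal terabytes, while df
    # and friends report binary TiB, which is why 8x 2 TB in RAID-10 shows
    # up as roughly 7 "TB" on the head node.
    TB = 1e12     # vendor terabyte (decimal)
    TiB = 2**40   # tebibyte (binary)

    def raid5(n_disks, disk_tb):    # one disk's worth of parity is lost
        return (n_disks - 1) * disk_tb * TB

    def raid10(n_disks, disk_tb):   # mirrored pairs: half the raw capacity
        return (n_disks // 2) * disk_tb * TB

    scratch = raid5(4, 2)    # compute-node scratch: 4x 2 TB in RAID-5
    head    = raid10(8, 2)   # head-node array:      8x 2 TB in RAID-10

    print(f"node scratch: {scratch / TB:.0f} TB = {scratch / TiB:.1f} TiB")
    print(f"head node   : {head / TB:.0f} TB = {head / TiB:.1f} TiB")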

ERC1

This Beowulf cluster contains 121 nodes, each with two octa-core 2.6 GHz EM64T Xeon E5-2650 v2 processors supporting AVX, all with Hyper-Threading and Turbo Boost enabled. In total 1920 cores are available for computing. The nodes are located in three 19” racks and are connected by three 10 Gb/s SuperMicro switches using Cat 6+ cabling. Each switch in each rack defines a subnet, and IP traffic is routed between the switches over two 40 Gb/s copper links, resulting in a triangle topology with an aggregated bandwidth of 80 Gb/s. Each node has 64 GB of DDR3-1866 RAM and four 2 TB SATA hard disks (WD Red) in a RAID-5 for a local XFS scratch file system. The OS on each node runs from a four-way software RAID-1 over these disks using XFS. The head node offers a total storage capacity of 7 TB using software RAID-10 over eight 2 TB SATA disks and XFS, with the OS running from a software RAID-1 configuration over the same eight disks. GlusterFS 8.1 is used to aggregate a global distributed scratch storage over several nodes. This system runs Rocky 8 with SLURM 20.02.05 and OpenMPI 4.0.5.
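
Which bricks make up the distributed GlusterFS scratch, and whether a volume is started, can be checked by wrapping the standard gluster command line; the sketch below requires the GlusterFS client tools and suitable privileges, and deliberately hard-codes no volume names since those are site-specific.

    # gluster_info.py - thin wrapper around the GlusterFS CLI to list the
    # configured volumes and their bricks. Requires the gluster client
    # tools and sufficient privileges; no volume names are hard-coded.
    import subprocess

    def gluster_volume_info(volume=None):
        """Return the raw output of 'gluster volume info [VOLNAME]'."""
        cmd = ["gluster", "volume", "info"]
        if volume:
            cmd.append(volume)
        return subprocess.run(cmd, capture_output=True, text=True,
                              check=True).stdout

    if __name__ == "__main__":
        print(gluster_volume_info())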

SURFsara Snellius

We frequently make use of the national supercomputer Snellius, located in Amsterdam.

Leiden Grid Infrastructure

All of these computational resources have been connected into a single compute grid, so that we can, for instance, develop Python workflows on our workstations. Information regarding LGI can be found here.
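
Purely as an illustration of the kind of workflow step one might script from a workstation, the sketch below submits a SLURM batch job to a cluster head node over SSH; it does not use the LGI interface itself, and the host name and remote script path are hypothetical placeholders.

    # submit_remote.py - illustrative only: submit a SLURM batch job to a
    # cluster head node over SSH from a workstation. This is NOT the LGI
    # interface; host name and remote script path are placeholders.
    import subprocess

    def submit(host: str, remote_script: str) -> str:
        """Run sbatch on a remote head node and return sbatch's reply."""
        result = subprocess.run(
            ["ssh", host, "sbatch", remote_script],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()  # e.g. "Submitted batch job 12345"

    if __name__ == "__main__":
        # Hypothetical host and job script, for illustration only.
        print(submit("cluster-head.example.org", "jobs/example_job.sh"))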
