Universiteit Leiden


Theoretical Chemistry

Computational facilities

Beowulf clusters looking for new frontiers.

Currently all our clusters and servers are located in the server room in the Gorlaeus Building.

VIDI

This Beowulf cluster contains 38 compute nodes, each with two octo-core 2.4 GHz EM64T Xeon E5-2630 v3 processors supporting AVX, with Hyper-Threading and Turbo Boost enabled. In total 608 cores are available for computing. The nodes are connected by a SuperMicro gigabit switch. Each node has 64 GB DDR3 1866 MHz RAM and four 3 TB SATA hard disks, configured as a RAID-1 (OS) and a RAID-5 (scratch). The head node provides a total storage capacity of ~11 TB using software RAID-10 over eight 3 TB SATA disks, with the system disks running from a software RAID-1 configuration over the same eight disks.
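As a quick sanity check on the figures above, the short Python sketch below redoes the core-count and head-node storage arithmetic. The two-socket, eight-core layout per node and the reading of ~11 TB as the usable RAID-10 capacity are assumptions based on this description, not values read from the machine.

# Back-of-the-envelope check of the VIDI figures quoted above.
# Assumed: 2 sockets per node, 8 physical cores per socket.

NODES = 38
SOCKETS_PER_NODE = 2
CORES_PER_SOCKET = 8

physical_cores = NODES * SOCKETS_PER_NODE * CORES_PER_SOCKET   # 608 cores
hardware_threads = physical_cores * 2                          # 1216 with Hyper-Threading

# RAID-10 keeps half of the raw capacity (mirrored pairs, then striped).
head_node_disks = 8
disk_tb = 3
raid10_usable_tb = head_node_disks * disk_tb / 2               # 12 TB raw -> ~11 TB formatted

print(physical_cores, hardware_threads, raid10_usable_tb)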

GlusterFS 3.7 is used to aggregate a global distributed scratch storage of ~300 TB over the nodes, delivering gigabit throughput (~118 MB/s) in (random) IO. This system runs on CentOS 7.2.
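The sketch below gives a rough idea of where the ~300 TB aggregate comes from, assuming one GlusterFS brick per compute node on top of the local RAID-5 scratch; the exact brick layout is an assumption, and OS partitions plus filesystem overhead account for the difference with the raw estimate.

# Rough estimate of the aggregated GlusterFS scratch on VIDI.
# RAID-5 over n disks keeps (n - 1) disks' worth of usable capacity.

NODES = 38
DISKS_PER_NODE = 4
DISK_TB = 3

per_node_scratch_tb = (DISKS_PER_NODE - 1) * DISK_TB   # 9 TB per RAID-5 set
aggregate_tb = NODES * per_node_scratch_tb              # 342 TB raw, ~300 TB usable
print(per_node_scratch_tb, aggregate_tb)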

ERC2

This Beowulf cluster contains 121 nodes, each with two octo-core 2.4 GHz EM64T Xeon E5-2630 v3 processors supporting AVX, with Hyper-Threading and Turbo Boost enabled. In total 1920 cores are available for computing. The nodes are housed in three 19” racks and connected by three 10G SuperMicro switches using Cat 6+ cabling. Each switch in each rack defines a subnet, and IP traffic is routed between the switches over two 40 Gb/s copper links per switch, giving a triangle topology with an aggregated bandwidth of 80 Gb/s. Each node has 64 GB DDR3 1866 MHz RAM and four 2 TB SATA hard disks in a RAID-5 holding a local XFS scratch file system.
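The triangle topology can be made concrete with the small sketch below; the rack labels are placeholders, and the assumption is that each rack switch has one 40 Gb/s link to each of the other two racks.

# Sketch of the ERC2 inter-rack topology described above: three rack
# switches in a triangle. Rack names are hypothetical labels.

LINK_GBPS = 40
racks = ["rack-A", "rack-B", "rack-C"]

# In a triangle every switch has one link to each of the other two racks,
# so the aggregated uplink bandwidth per switch is 2 x 40 = 80 Gb/s.
links = [(a, b) for i, a in enumerate(racks) for b in racks[i + 1:]]

for rack in racks:
    degree = sum(rack in link for link in links)
    print(f"{rack}: {degree} uplinks, {degree * LINK_GBPS} Gb/s aggregated")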

The OS on each node runs from a four-way software RAID-1 over these disks with an ext4 file system. The head node provides a total storage capacity of ~7 TB using software RAID-10 over eight 2 TB SATA disks with XFS, with the OS running from a software RAID-1 configuration over the same eight disks using the ext4 file system. GlusterFS 3.7.6 is used to aggregate a global distributed scratch storage over the nodes. This system runs on CentOS 7.2. Kickstart is used to deploy the OS on the nodes.

Lewis: ERC1

This Beowulf cluster contains 121 nodes, each with two octo-core 2.6 GHz EM64T Xeon E5-2650 v2 processors supporting AVX, with Hyper-Threading and Turbo Boost enabled. In total 1920 cores are available for computing. The nodes are housed in three 19” racks and connected by three 10G SuperMicro switches using Cat 6+ cabling. Each switch in each rack defines a subnet, and IP traffic is routed between the switches over two 40 Gb/s copper links per switch, giving a triangle topology with an aggregated bandwidth of 80 Gb/s. Each node has 64 GB DDR3 1866 MHz RAM and four 2 TB SATA hard disks (WD Red) in a RAID-5 holding a local XFS scratch file system.

The OS on each node runs from a four-way software RAID-1 over these disks with an ext3 file system. The head node provides a total storage capacity of ~7 TB using software RAID-10 over eight 2 TB SATA disks with XFS, with the OS running from a software RAID-1 configuration over the same eight disks using the ext4 file system. GlusterFS 3.4.2 is used to aggregate a global distributed scratch storage of ~570 TB over the nodes. This system runs on CentOS 6.5. SystemImager is used to deploy the OS on the nodes.

Octo

This Beowulf cluster contains 39 nodes, each with two octo-core 2.0 GHz EM64T Xeon E5-2650 processors supporting AVX, with Hyper-Threading and Turbo Boost enabled. In total 624 cores are available for computing. The nodes are connected by an HP ProCurve 2848 gigabit switch. Each node has 64 GB DDR3 1600 MHz RAM and two 1 TB SATA hard disks in a RAID-0 stripe. The head node provides a total storage capacity of ~5 TB using software RAID-10 over six 2 TB SATA disks, with the system disks running from a software RAID-1 configuration over the same six disks. GlusterFS 3.3 is used to aggregate a global distributed scratch storage of ~70 TB over the nodes, delivering gigabit throughput (~118 MB/s) in (random) IO. Local scratch (a stripe over the two 1 TB disks) reaches roughly 350 MB/s. This system runs on CentOS 6.3.
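To put the two quoted throughput figures in perspective, the sketch below estimates how long an illustrative 100 GB scratch file would take to write on each; the file size is arbitrary and the rates are the nominal figures quoted above, not new measurements.

# Rough comparison of the two scratch options on Octo.

GIGABYTES = 100
GLUSTER_MB_S = 118      # quoted GlusterFS throughput over gigabit
LOCAL_MB_S = 350        # quoted local two-disk stripe throughput

for name, rate in [("GlusterFS scratch", GLUSTER_MB_S), ("local stripe", LOCAL_MB_S)]:
    seconds = GIGABYTES * 1024 / rate
    print(f"{name}: ~{seconds / 60:.1f} min to write {GIGABYTES} GB")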

Hexa

This Beowulf cluster contains 34 nodes, each with two hexa-core 2.67 GHz EM64T Xeon 5650 processors, with Hyper-Threading enabled. In total 420 cores are available for computing. The nodes are connected by an HP ProCurve 2848 gigabit switch. Each node has 24 GB DDR3 1066 MHz RAM and three 150 GB 10,000 rpm SATA hard disks, with scratch in a software RAID-5 configuration. The head node provides a total storage capacity of ~2 TB using software RAID-5 over three 1 TB SATA disks, with the system disks running from a software RAID-1 configuration. This system is being updated to CentOS 7.4 only to keep it available until it is replaced in 2018.

SURFsara

We frequently make use of the national supercomputer Cartesius and the national compute cluster LISA. These machines are located at SURFsara in Amsterdam.

Leiden Grid Infrastructure

We have connected all these computational resources into a computing grid. For this we developed grid middleware with the support of NWO-NCF. The report of that project and the LGI software can be found here.

More information can be found here. Screenshots can be found here.
