EESSI

The European Environment for Scientific Software Installations (EESSI, pronounced "easy") provides a ready-to-use software stack for every architecture available on Deucalion, covering both the ARM and x86 partitions. The EESSI documentation can be found here.

EESSI is currently available on the ARM and GPU-accelerated partitions, with plans to expand to the remaining partition soon.

To use EESSI on any of these partitions, run the following commands on a compute node (or add them to your job script):

unset MODULEPATH
source /cvmfs/software.eessi.io/versions/2023.06/init/bash

These commands ensure that only the EESSI modules are visible. To keep access to both the native and the EESSI modules, run only the second command and skip `unset MODULEPATH`. The available EESSI modules can be found here. Only about 30% of the modules are currently available on the ARM partition; we are working with the EESSI team to supply the remaining ones.
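As a quick sanity check after initialization, the EESSI init script exports a few environment variables describing the detected CPU target; printing one of them confirms the stack is active (a minimal sketch; the variable name follows the EESSI documentation and must be run on a compute node with CernVM-FS mounted):

```shell
# Hide Deucalion's native modules, then initialise the EESSI stack.
unset MODULEPATH
source /cvmfs/software.eessi.io/versions/2023.06/init/bash

# The init script detects the CPU microarchitecture and records it in
# EESSI_SOFTWARE_SUBDIR; on the ARM partition this should report the A64FX target.
echo "$EESSI_SOFTWARE_SUBDIR"
```

If the variable is empty, the init script did not complete and the EESSI modules will not be available.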

To list the available modules, use Lmod's `module avail` command:

{EESSI 2023.06} [malaca@cna0001 ~]$ module avail

------------------------ /cvmfs/software.eessi.io/versions/2023.06/software/linux/aarch64/a64fx/modules/all -------------------------
   archspec/0.2.1-GCCcore-12.3.0                       libwebp/1.3.1-GCCcore-12.3.0
   Bison/3.8.2-GCCcore-12.3.0                          libwebp/1.3.2-GCCcore-13.2.0              (D)
   Bison/3.8.2-GCCcore-13.2.0                  (D)     libxml2/2.11.4-GCCcore-12.3.0
   BLIS/0.9.0-GCC-12.3.0                               libxml2/2.11.5-GCCcore-13.2.0             (D)
   (...)
   LibTIFF/4.6.0-GCCcore-13.2.0                (D)     ZeroMQ/4.3.5-GCCcore-13.2.0               (D)
   libunwind/1.6.2-GCCcore-12.3.0                      zstd/1.5.5-GCCcore-12.3.0
   libunwind/1.6.2-GCCcore-13.2.0              (D)     zstd/1.5.5-GCCcore-13.2.0                 (D)

  Where:
   Aliases:  Aliases exist: foo/1.2.3 (1.2) means that "module load foo/1.2" will load foo/1.2.3
   D:        Default Module
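A module from the listing above can then be loaded in the usual Lmod way (a short sketch using module names taken from the output shown; it assumes the EESSI environment has already been initialised):

```shell
# Load a specific version from the EESSI stack...
module load Bison/3.8.2-GCCcore-13.2.0

# ...or omit the version to get the one marked (D)efault.
module load zstd

# Show what is currently loaded, including dependencies pulled in by Lmod.
module list
```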

Example of a job script that runs ESPResSo on an ARM node using EESSI:

#!/bin/bash
#SBATCH --ntasks=96
#SBATCH --ntasks-per-node=48
#SBATCH --cpus-per-task=1
#SBATCH --time=4:00:00
#SBATCH --partition=normal-arm
#SBATCH --mem=0
#SBATCH --account=<slurm_account>

unset MODULEPATH
source /cvmfs/software.eessi.io/versions/2023.06/init/bash

module load ESPResSo/4.2.2-foss-2023a
mpirun -np 96 python3 lj.py
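Saved as, for example, `espresso.sh` (a hypothetical filename), the script above is submitted and monitored with the standard Slurm commands:

```shell
# Submit the job script to Slurm; sbatch prints the assigned job ID.
sbatch espresso.sh

# Follow the job's state (pending, running) in the queue.
squeue --me
```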