Hands-on #3
Singularity and MPI
Singularity integrates well with MPI. The usage model is to call ‘mpirun’ from outside the container and reference the container on the ‘mpirun’ command line. Singularity must be installed on all of the nodes that will run the container, and a shared (centralized) filesystem works best for this. When ‘mpirun’ is executed, Open MPI forks an ‘orted’ process, which launches Singularity as a container process. The MPI application inside the container is linked against the MPI runtime libraries inside the container, and those runtime libraries then connect back to the ‘orted’ process via the Process Management Interface (PMI).

[Figure: MPI application process for Singularity]
Usage would look like this:
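For example, with Open MPI installed on the host (the container name and application path below are placeholders, not from the original):

```bash
# mpirun runs on the host; each of the 4 ranks executes the MPI
# application inside the container via 'singularity exec'
mpirun -np 4 singularity exec ./container.img /path/to/mpi_application
```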
Warning: the MPI version on the host must be equal to or newer than the MPI version inside the container.
Building the LAMMPS MPI Singularity configuration file
Create (or reuse) the LAMMPS template configuration file lammps-mpi.cfg in the /home directory.
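The exact contents of lammps-mpi.cfg are not reproduced here; as a rough sketch, a Singularity 2.x bootstrap definition for an MPI application typically looks like the following (the base image and package names are assumptions, not the actual template):

```
BootStrap: docker
From: ubuntu:16.04

%post
    # Install a toolchain and MPI inside the container; the MPI
    # version should match the one available on the host nodes
    apt-get update
    apt-get install -y build-essential openmpi-bin libopenmpi-dev
    # ... build/install LAMMPS against the container's MPI here ...
```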
Creating the container image
Create a container called lammps-mpi.img and set its size to 2048 MiB.
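With the Singularity 2.x command line this is a single step:

```bash
# Create an empty 2048 MiB container image
singularity create --size 2048 lammps-mpi.img
```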
Bootstrap the container using the lammps-mpi.cfg configuration file created in the previous step.
Note: the bootstrapping step requires root access.
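For example:

```bash
# Populate the image from the definition file (root required)
sudo singularity bootstrap lammps-mpi.img lammps-mpi.cfg
```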
Running the container image
To execute the container:
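A minimal sketch; the rank count, the lmp_mpi binary name, and the in.lj input file are illustrative assumptions:

```bash
# The host's mpirun launches 4 ranks, each running LAMMPS inside
# the container; the ranks connect back to the host MPI via PMI
mpirun -np 4 singularity exec lammps-mpi.img lmp_mpi -in in.lj
```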
To run LAMMPS on the Cambridge HPC cluster:
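The cluster-specific submission script is not reproduced here; a minimal sketch assuming a SLURM scheduler (node counts, module name, and input file are placeholders):

```bash
#!/bin/bash
#SBATCH --job-name=lammps-singularity
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=16
#SBATCH --time=00:30:00

# Load a host MPI at least as new as the MPI inside the container
module load openmpi

# mpirun picks up the SLURM allocation and launches one containerized
# LAMMPS process per task
mpirun singularity exec lammps-mpi.img lmp_mpi -in in.lj
```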