Hands-on #3
Singularity integrates well with MPI. The usage model is to call `mpirun` from outside the container and reference the container on the `mpirun` command line. MPI must be installed on all of the nodes that will run the job, so a shared filesystem works best. `mpirun` communicates with the application binary via the Process Management Interface (PMI) used by Open MPI: when `mpirun` executes, it forks an `orted` process, which launches Singularity as a container process. The MPI application inside the container is linked against the MPI runtime libraries inside the container, and those libraries then connect and communicate back to the `orted` process via the universal PMI.
MPI application process for Singularity
Usage would look like this:
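A minimal invocation sketch is below. The container and binary names (`mycontainer.img`, `my_mpi_app`) are placeholders, and the guard makes the snippet a no-op on machines where Open MPI or Singularity is not installed:

```shell
# Hypothetical names: mycontainer.img and my_mpi_app are placeholders.
if command -v mpirun >/dev/null 2>&1 && command -v singularity >/dev/null 2>&1; then
    # mpirun runs outside the container and starts one container
    # process per rank; the in-container MPI runtime talks back
    # to the orted process via PMI.
    mpirun -np 4 singularity exec ./mycontainer.img /usr/bin/my_mpi_app
fi
```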
Warning: the MPI version on the host must be equal to or newer than the MPI version inside the container.
Create/use the LAMMPS template configuration file lammps-mpi.cfg in the /home directory.
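The exact contents depend on the LAMMPS build being used; a minimal Singularity 2.x definition-file skeleton might look like the following, where the base image, package names, and build steps are illustrative assumptions rather than the course's actual file:

```
BootStrap: docker
From: ubuntu:16.04

%post
    # Illustrative only: install build tools and an MPI stack,
    # then fetch and build LAMMPS (build steps omitted here).
    apt-get update
    apt-get install -y build-essential openmpi-bin libopenmpi-dev
```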
Create a container called lammps-mpi.img and set its size to 2048 MiB.
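With Singularity 2.x this is done with `singularity create`; the snippet is guarded so it is a no-op where Singularity is not installed:

```shell
# Create an empty 2048 MiB image (Singularity 2.x syntax).
if command -v singularity >/dev/null 2>&1; then
    singularity create --size 2048 lammps-mpi.img
fi
```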
Bootstrap the container using the LAMMPS configuration file created in the previous step.
Note: the bootstrapping step requires root access.
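The bootstrap command would look as follows; `sudo` reflects the root requirement noted above, and the guard makes it a no-op where Singularity is absent:

```shell
# Bootstrapping writes the OS tree into the image, hence root/sudo.
if command -v singularity >/dev/null 2>&1; then
    sudo singularity bootstrap lammps-mpi.img lammps-mpi.cfg
fi
```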
To execute the container:
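A sketch of an interactive run follows the usage model above; the LAMMPS binary name `lmp_mpi`, the input deck `in.lj`, and the rank count are assumptions, and the guard makes this a no-op without MPI or Singularity installed:

```shell
# Hypothetical run: lmp_mpi and the input deck in.lj are assumptions.
if command -v mpirun >/dev/null 2>&1 && command -v singularity >/dev/null 2>&1; then
    mpirun -np 4 singularity exec lammps-mpi.img lmp_mpi -in in.lj
fi
```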
To run LAMMPS on the Cambridge HPC:
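On a batch system this would be submitted as a job script. The sketch below assumes a SLURM scheduler; the node counts, time limit, and module names are placeholders, not the actual Cambridge HPC settings:

```shell
#!/bin/bash
# Illustrative SLURM batch script; resource values and module
# names below are placeholder assumptions.
#SBATCH --job-name=lammps-singularity
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=16
#SBATCH --time=00:30:00

module load openmpi            # placeholder module name
module load singularity        # placeholder module name

# mpirun runs on the host; each rank launches the container.
mpirun singularity exec lammps-mpi.img lmp_mpi -in in.lj
```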