Introduction to containers
|
Containers let you package up an application together with all of its dependencies
By using containers, you can improve the reproducibility, portability and shareability of your computational workflows
|
Basics of Singularity
|
Singularity can run both Singularity and Docker container images
Execute commands in containers with singularity exec
Open a shell in a container with singularity shell
Download a container image to a selected location with singularity pull
Avoid the latest tag, as the image it points to can change over time, limiting workflow reproducibility
The most commonly used registries are Docker Hub, Quay, BioContainers and the NVIDIA GPU Cloud
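A minimal sketch of these commands (the image name and tag are illustrative, and Singularity must be installed):

```shell
# Download an image from Docker Hub to the current directory;
# the docker:// prefix selects the registry
singularity pull ubuntu_18.04.sif docker://ubuntu:18.04

# Execute a single command inside the container
singularity exec ubuntu_18.04.sif cat /etc/os-release

# Open an interactive shell inside the container
singularity shell ubuntu_18.04.sif
```

Note that the tag is pinned explicitly (18.04 rather than latest), in line with the reproducibility advice above.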
|
Share files with the host: BLAST, a bioinformatics example
|
By default Singularity mounts the host's current directory, and uses it as the container's working directory
Map additional host directories into the container with the flag -B
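As a sketch of a bind mount in a BLAST run (the image name, database and file paths are all hypothetical):

```shell
# Files in the host's current directory are visible by default;
# an extra host directory (here /data) is bind mounted with -B,
# using the host_path:container_path syntax
singularity exec -B /data:/data blast_2.9.0.sif \
    blastp -query /data/query.fa -db /data/zebrafish -out results.txt
```

Multiple -B flags can be supplied to mount several directories in one command.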
|
Build your own container image
|
Build images using the Sylabs remote builder
Use the remote builder with the flag -r if you need to build images on a machine where you don't have sudo rights
You can share your Singularity Image File with others, as you would with any other (large) file
Upload images to a web registry with singularity push (Sylabs account required)
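A minimal build sketch, assuming a definition file along these lines (the package and file names are illustrative):

```shell
# Contents of a minimal definition file, my_app.def:
#
#   Bootstrap: docker
#   From: ubuntu:18.04
#
#   %post
#       apt-get update && apt-get -y install fortune
#
#   %environment
#       export PATH=/usr/games:$PATH

# Build locally (requires sudo) ...
sudo singularity build my_app.sif my_app.def

# ... or build with the Sylabs remote builder (no sudo needed)
singularity build -r my_app.sif my_app.def

# Upload the image to the Sylabs registry (account required;
# the library:// path shown is hypothetical)
singularity push my_app.sif library://myuser/mycollection/my_app:1.0
```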
|
Parallel processing with Dask-MPI containers
|
Singularity interfaces with HPC schedulers such as Slurm, with some requirements
You need to build your application inside the container with an MPI version that is ABI-compatible with the MPI libraries on the host
Appropriate environment variables and bind mounts may be required at runtime to get the best performance from MPI applications (your system administrators can help)
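The requirements above can be sketched as a Slurm batch script (the module name, image and program path are hypothetical; a real setup may need extra bind mounts and environment variables for the host interconnect):

```shell
#!/bin/bash -l
#SBATCH --job-name=dask-mpi
#SBATCH --ntasks=4

# Load the host MPI stack; it must be ABI-compatible with the
# MPI library the application was built against inside the image
# (e.g. matching Open MPI or MPICH series)
module load openmpi

# The host launcher starts the ranks; each rank runs inside
# its own container instance
srun singularity exec my_mpi_app.sif /opt/app/mpi_program
```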
|
Molecular dynamics with GPU containers
|
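For GPU workloads, Singularity exposes the host GPU to the container with a runtime flag; a sketch with a molecular dynamics code (the image name and input file are illustrative):

```shell
# --nv mounts the host Nvidia driver and GPU device files into
# the container (--rocm is the equivalent for AMD GPUs)
singularity exec --nv gromacs_2020.sif gmx mdrun -s input.tpr
```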
|
GUI enabled applications: RStudio in a container
|
An interactive session can essentially be executed like any other containerised application, via singularity exec
Use the %startscript section of a def file to configure an image for long running services
Launch/shutdown long running services in the background with singularity instance start/stop
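A sketch of running RStudio Server as a long running service (the image name, instance name and rserver options are illustrative):

```shell
# In the def file, %startscript defines what the instance runs:
#
#   %startscript
#       rserver --www-port 8787

# Launch the service in the background as a named instance
singularity instance start rstudio_4.0.sif myserver

# List running instances, then shut the service down
singularity instance list
singularity instance stop myserver
```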
|