Latest revision as of 23:37, 22 March 2021
=== Singularity ===
Singularity is an open-source application for creating and running software containers, designed primarily for scientific computing on Linux-based computing clusters like the Penn Medicine HPC system. Singularity containers enable self-contained, stable, portable, and reproducible computing environments and software stacks that can be shared and used across different machines and computing clusters, such as for research collaborations spanning multiple institutions.
==== Usage ====
Begin by launching an interactive session (bsub -Is bash) or submitting a batch job. Singularity is installed on all HPC nodes outside of the environment module system, so there is no need to load a module in order to use it; you can run the singularity commands directly.
'''Note: commands that require sudo will not be available on HPC nodes.'''
The current version of Singularity installed is 3.7.1.
For detailed help information run 'singularity help' within an interactive session.
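For example, the installed version can be confirmed from any node (output shown for the 3.7.1 installation noted above):

<pre>
$ singularity --version
singularity version 3.7.1
</pre>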
===== Obtaining Singularity Images =====
Container images are executable files that define the software environment for a container and are used to run it. A single container image can be used to run multiple instances of the same container concurrently for different jobs.

To use container images on the HPC, they must be pulled (i.e., downloaded) from an external resource such as the [https://cloud.sylabs.io/library Container Library], [https://hub.docker.com/ Docker Hub], or [https://singularity-hub.org/ Singularity Hub]. Container images may also be built externally from a definition file and then transferred to the HPC system, where they can be run.
===== Pulling pre-built images =====
If a pre-built container image containing all the necessary software packages has been identified in the [https://cloud.sylabs.io/library Container Library], [https://hub.docker.com/ Docker Hub], or [https://singularity-hub.org/ Singularity Hub], it can be pulled down using the 'singularity pull' command.
The example below shows how to pull the "ubuntu" image from the Singularity Container Library:

<pre>
singularity pull library://ubuntu
</pre>
The above command creates an image file called "ubuntu_latest.sif". To save the image under a different filename, pass the name as the first argument:

<pre>
singularity pull ubuntu.sif library://ubuntu
</pre>
The example below shows how to pull the "ubuntu" image from Docker Hub:

<pre>
singularity pull docker://ubuntu
</pre>
===== Building images from a definition file =====
A custom container image may also be built from a definition file that instructs the build process. For security reasons, users are not permitted to build images directly on our systems; images must be built externally and then transferred to the HPC, where they can be run like any other container image.
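As a sketch, a minimal definition file might look like the following. The file name "mycontainer.def" and the python3 package are arbitrary examples; the section names follow the standard Singularity definition-file format:

<pre>
Bootstrap: docker
From: ubuntu:20.04

%post
    # Commands run inside the container at build time
    apt-get update && apt-get install -y python3

%runscript
    # Executed by 'singularity run'
    exec python3 "$@"
</pre>

On an external machine where you have root access, the image can then be built with 'sudo singularity build mycontainer.sif mycontainer.def' and transferred to the HPC (e.g., with scp or rsync).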
===== Running Singularity container images =====
Use one of the following commands to interact with Singularity containers:

* singularity shell — for an interactive shell within the container
* singularity exec — for executing commands within the container
* singularity run — for executing a pre-defined runscript within the container
Environment variables set outside a container image will be inherited within the container unless the --cleanenv (or -e) option is passed to the singularity commands listed above. Using --cleanenv is recommended to avoid potential issues when using software within the container.
The example below shows how to launch an interactive shell within a container (using the previously downloaded image):

<pre>
singularity shell --cleanenv ubuntu.sif
Singularity>
</pre>
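Similarly, 'singularity exec' runs a single command inside the container and then exits. The transcript below (using the same ubuntu.sif image) is illustrative, with the output abbreviated:

<pre>
singularity exec --cleanenv ubuntu.sif cat /etc/os-release
NAME="Ubuntu"
...
</pre>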
====== '''Note about project directories''' ======
Singularity containers automatically mount the /tmp and $PWD directories from the HPC node where the container job is run. This means /project directories are not available by default and must be bound explicitly in the singularity command.
For example:

<pre>
singularity shell --cleanenv --bind /project/test_lab ubuntu.sif
Singularity> ls /project/
test_lab
</pre>
Alternatively, the '''SINGULARITY_BIND''' environment variable can be set to a comma-separated list of directory paths required within the container. For example:

<pre>
export SINGULARITY_BIND=/project/test_lab,/project/test_lab2
</pre>
The above "export" command can be added to $HOME/.bashrc for added convenience.
Once the SINGULARITY_BIND environment variable is set, images can be run without the --bind option, as shown below.

<pre>
singularity shell --cleanenv ubuntu.sif
Singularity> ls /project/
test_lab test_lab2
</pre>