# CI/CD

## Overview

Deucalion provides a Continuous Integration / Continuous Deployment (CI/CD) environment through a GitLab instance paired with the Jacamar CI executor. This system enables researchers, developers, and students to:
- Version control their research code and workflows
- Automatically build, test, and validate projects on code changes
- Run CI pipelines using Deucalion
- Improve reproducibility and collaboration
- Reduce manual compiling/debugging/submitting work
Instead of developing locally and manually transferring code to Deucalion, users can automate the entire workflow from commit → build → test.
## Access Requirements
To use the CI/CD platform, you need:
| Requirement | Details |
|---|---|
| Deucalion account | A valid account on User Portal |
| SSH Key | Public key uploaded to the Security tab of the Settings page in User Portal |
| TOTP Device | TOTP Device registered on the Security tab of the Settings page in User Portal |
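If you do not yet have an SSH key pair, you can generate one locally; this is a sketch, and the filename and comment below are arbitrary examples, not required values:

```shell
# Generate an ed25519 key pair with no passphrase (filename is an example)
ssh-keygen -t ed25519 -f deucalion_key -N "" -C "deucalion-access"

# Print the public key; this is the text you paste into the
# Security tab of the Settings page in User Portal
cat deucalion_key.pub
```

Only the public key (`.pub` file) is uploaded; the private key stays on your machine.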
## Creating a repository
There are different options for creating a repository:
| Option | Use Case |
|---|---|
| Create a blank project | You have experience with CI/CD in HPC |
| Import an existing project | You want to bring your code from GitHub, Bitbucket, or another GitLab |
| Fork a template project | You prefer to start with our ready-to-use templates |
Whichever method you choose for creating your first repository (GitLab Project), you need to access Deucalion GitLab with your web browser. After authenticating, click New Project, or simply click Fork on one of our Project Templates.
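Once the project exists, the usual workflow is to create a local repository and push it. A minimal sketch, where the remote URL is a placeholder you must replace with the one shown on your project page:

```shell
# Create a local repository with a first commit
mkdir my-project && cd my-project
git init
echo "# My project" > README.md
git add README.md
git -c user.name="Your Name" -c user.email="you@example.com" \
    commit -m "Initial commit"

# Connect it to your GitLab project and push
# (placeholder URL -- copy the real one from the project page):
# git remote add origin git@<deucalion-gitlab-host>:<username>/my-project.git
# git push -u origin main
```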
## Creating a CI pipeline

Every project that uses CI/CD must include a `.gitlab-ci.yml` file in the repository root.
Minimal example:

```yaml
include:
  - project: 'deucalion-templates/default-settings'
    file: 'deucalion-default.yml'

stages:
  - build
  - run

build-job:
  stage: build
  tags:
    - buildx-local
  script:
    - ml load GCCcore/11.3.0
    - gcc sample.c -o sample.out

run-job:
  stage: run
  variables:
    SCHEDULER_PARAMETERS: "--nodes=1 --account=$ACCOUNT --partition=normal-x86 --time=01:00:00"
  tags:
    - buildx-slurm
  script:
    - ml load GCCcore/11.3.0
    - ./sample.out
```
### Key Notes
| Field | Meaning |
|---|---|
| include | Default settings, required for authentication |
| stages | Defines job order |
| tags | The runner that will execute the job (see Available Runners) |
| script | Commands executed during job |
## Available Runners
| Tag | Architecture | Use case | CPU/GPU charge | Time limit | Notes |
|---|---|---|---|---|---|
| buildx-local | x86_64 Intel Xeon | Build software | No charge | 30 minutes | 1, 2 |
| buildx-slurm | x86_64 | Build/Run software | Regular charge | 6 hours | 3 |
We are working on a dedicated aarch64 (ARM) architecture runner that will be available soon.
Notes:

1. The CPU architecture differs from the compute nodes (AMD Rome). If you need CPU-specific optimizations, avoid using this runner.
2. This runner does not have a GPU available.
3. Until a dedicated aarch64 runner is available, you can use this runner to submit your jobs to Slurm. You just need to adjust the PATH to the correct environment.
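Because the build runner's CPU (Intel Xeon) differs from the compute nodes (AMD Rome), flags such as `-march=native` can produce binaries that fail on the compute nodes. One defensive pattern, a sketch rather than an official recommendation, is to pick a generic baseline in your build script:

```shell
# Choose conservative compiler flags when the build host may differ
# from the host that runs the binary
ARCH=$(uname -m)
if [ "$ARCH" = "x86_64" ]; then
    # Generic x86-64 baseline runs on both Intel Xeon and AMD Rome;
    # avoid -march=native on buildx-local
    CFLAGS="-O2 -march=x86-64"
else
    CFLAGS="-O2"
fi
echo "CFLAGS=$CFLAGS"
```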
## Ready-to-use Templates
We provide ready-to-use examples for different use cases.
| Template | Use case |
|---|---|
| Container Job x86 | Python CPU application running inside Singularity container |
| Container job GPU | Python GPU application running inside Singularity container |
We are working on new templates and will make them available soon.
## Getting Support

If you have questions or need help setting up pipelines, please visit the support page.