The .gitlab-ci.yml file defines the structure and job execution order of the GitLab CI pipelines. Jobs are the most fundamental element of this file, and each job runs independently of the others. Usually a pipeline is run for every commit.
Following is a template for such a YAML file:
stages:
  - build
  - execute
  - deploy
  - test

building_job:
  stage: build
  image: # Your Docker image for building (e.g. archlinux/base)
  before_script:
    - # Install any operating-system-specific packages (e.g. pacman -Sy which)
  script:
    - # The main script/command to be executed (e.g. ./building_script.sh)
  artifacts:
    paths:
      - # Relative path to files needed by the next job (e.g. <FOLDER_OF_MY_PROJECT>/<NAME_OF_BINARY>)

execute_job:
  stage: execute
  image: # Your Docker image for running the experiment (e.g. openfoamplus/of_v1812_centos73)
  before_script:
    - # Commands preparing for the execution of the main script (e.g. source /opt/OpenFOAM/OpenFOAM-plus/etc/bashrc)
  script:
    - # The main script/command to be executed (e.g. python run_my_exp.py)
  artifacts:
    paths:
      - # Paths to files needed by later jobs (e.g. data/hex2D/SHEAR_2D.csv)

deploy_job:
  stage: deploy
  image: alpine:latest # Docker image name for deploying the results
  before_script:
    - apk update
    - apk add git openssh # Install git and openssh packages.
    - mkdir ~/.ssh/ # Create a folder for the SSH keys.
    # Move the RSA keys from environment variables into files.
    - echo "$ID_RSA" > ~/.ssh/id_rsa
    - echo "$ID_RSA_PUB" > ~/.ssh/id_rsa.pub
    - echo "$KNOWN_HOSTS" > ~/.ssh/known_hosts
    - chmod 400 ~/.ssh/id_rsa.pub
    - chmod 400 ~/.ssh/id_rsa
    # Pull the GitLab Pages repository.
    - cd / && git clone <NAME_OF_PROJECT_REPOSITORY>.git
  script:
    # Rename any images by appending the commit short hash, so there are no conflicts between commits.
    - mv <NAME_OF_RESULT_IMAGE>.png <NAME_OF_RESULT_IMAGE>-$CI_COMMIT_SHA.png
    # Insert metadata needed by Jekyll to display the file as a post.
    - sed -i '1i ---\nlayout= post\ntitle= <NAME_OF_EXPERIMENT_RUN> commit '"$CI_COMMIT_SHORT_SHA"'\ndate= '"`date "+%Y-%m-%d %H:%M:%S %z"`"'\ntags= '"$CI_COMMIT_SHA"'\n---\n' ../<NAME_OF_RESULT_FILE>.md
    # The colon is awkward to escape in the previous command, so '=' is replaced with ':' here.
    - sed -i 's/\=/:/g' ../<NAME_OF_RESULT_FILE>.md
    # Change references to images inside the Markdown/HTML file, so images can be displayed.
    - sed -i 's/<PREVIOUS_PATH_TO_IMAGE>/\/<PATH_TO_GITLAB_PAGE_REPO>\/assets\/images\/<NAME_OF_RESULT_IMAGE>-'"$CI_COMMIT_SHA"'/g' ../<NAME_OF_FILE_CONTAINING_THE_IMAGE_REFERENCES>.md
    # Rename the file in the format needed by Jekyll, by prepending the current date.
    - mv ../<NAME_OF_RESULT_FILE>.md ../$(date +%Y-%m-%d)-<NAME_OF_RESULT_FILE>-$CI_COMMIT_SHORT_SHA.md
    # Move files to the local GitLab Pages repository.
    - mv <NAME_OF_PROJECT_REPOSITORY_CONTAINING_IMAGES> /<NAME_OF_GITLAB_PAGES_REPOSITORY>/assets/images/
    - mv <NAME_OF_PROJECT_REPOSITORY_CONTAINING_MARKDOWN_OR_HTML> /<NAME_OF_GITLAB_PAGES_REPOSITORY>/_posts/
    - cd /<NAME_OF_GITLAB_PAGES_REPOSITORY>/
    # Push changes upstream.
    - git config --global user.email '<>'
    - git add . && git commit -m "New experiment run" && git push origin master
  dependencies:
    - # Names of previous jobs whose artifacts are needed (e.g. execute_job)

test_job:
  stage: test
  image: # Docker image name for testing the results.
  script:
    - # Script for testing.
  dependencies:
    - execute_job
In this template there are 4 stages and 4 jobs. Each stage, though, can have multiple jobs running in parallel (for example, many jobs in the 'test' stage for testing different things).
The stage keyword defines which stage each job belongs to.
The image defines what Docker image to use for this specific job. It is useful to create images in advance with all the dependencies installed, so that the pipeline executes faster.
The before_script & script blocks are where commands are executed one by one; their contents depend on the specific project and its structure.
The artifacts & dependencies blocks work together to pass files from one job to another. The artifacts block specifies paths to the files that need to be saved for use by later jobs, and the dependencies block specifies the previous jobs from which to download all the artifacts.
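A minimal sketch of that pairing (job names, script names and file names are hypothetical): the producer lists a path under artifacts, and the consumer names the producer under dependencies.

```yaml
producer_job:
  stage: build
  script:
    - ./make_output.sh          # hypothetical script that writes output.bin
  artifacts:
    paths:
      - output.bin              # uploaded to GitLab when the job finishes

consumer_job:
  stage: test
  dependencies:
    - producer_job              # download only producer_job's artifacts
  script:
    - ./check_output.sh output.bin
```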
The working directory when the Runner executes is /builds/<NAME_OF_GITLAB_GROUP>/<NAME_OF_GITLAB_PROJECT>/
All of the jobs are fairly specific to the project, except deploy_job, which is generic enough for publishing results. Here, at the beginning, we update the system and install openssh & git so we can pull and push private repositories. Then follows the export of the RSA keys into the default files in the $HOME folder, and finally the actual pull of the Pages repository. Later, we rename the result files (which should be Markdown or HTML for Jekyll) and images so they have unique names in the Pages repository, and insert the following Jekyll-specific metadata into the result files:
---
layout: post
title: <NAME_OF_RUN> commit <COMMIT_SHORT_HASH>
date: <CURRENT_DATE_AND_TIME>
tags: <COMMIT_HASH>
---
We also change the references to the images inside the result files so they display properly. These last three steps could also be performed in the Pages repository, since they are specific to the Jekyll framework, but for simplicity we keep them here. Finally, we move the files and images to the proper folders of the local Pages repository and push the changes to the master branch.
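The metadata insertion can be tried locally. The sketch below (file name and title are hypothetical) uses GNU sed's 1i command to insert the block above the first line, writing = in place of : first and converting afterwards, exactly as the template's two sed commands do:

```shell
#!/bin/bash
# Hypothetical stand-ins for the predefined CI variables.
CI_COMMIT_SHORT_SHA=ab12cd3
CI_COMMIT_SHA=ab12cd3ef456789

printf 'Experiment results go here.\n' > result.md

# Insert the Jekyll front matter above line 1, using '=' instead of ':' for now.
sed -i '1i ---\nlayout= post\ntitle= my-experiment commit '"$CI_COMMIT_SHORT_SHA"'\ndate= '"$(date "+%Y-%m-%d %H:%M:%S %z")"'\ntags= '"$CI_COMMIT_SHA"'\n---\n' result.md
# Second pass: turn every '=' into ':'.
sed -i 's/\=/:/g' result.md

head -n 2 result.md   # prints "---" and then "layout: post"
```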
To begin with, create a file named .gitlab-ci.yml in the root directory of your repository. GitLab automatically detects a file with that name and begins the CI procedure. There are a lot of templates for different kinds of projects (CI Templates), but we will create a custom one from scratch. Pipelines contain "Jobs", which are picked up by Runners and executed within the environment of the Runner. A Job can have an arbitrary name, but it is good practice for it to be descriptive of the work it actually does.
At the top of the file should be the name of the Docker image to be used. Here we use the official image provided by Docker Hub for the GNU Compiler Collection (gcc), which contains the g++ compiler used to compile C++ programs.
image: gcc
Continuing with the file, we should define the names of the stages that will execute, in order. This step is optional: if no stages are defined, GitLab assumes build, test and deploy, which can still be used to define the jobs. We can have many jobs that belong to the same stage, and these jobs will execute in parallel if there are no further requirements. The code snippet below tells our Runner that there are two stages, a) build and b) test. By this definition, all jobs in stage build can run in parallel, and so can all jobs in stage test, but the test jobs can only start after all build jobs have finished execution.
stages:
  - build
  - test
The name of our first job is lets_compile, which is basically a build step. After that we declare a list of parameters that define the job's behavior. The first parameter is stage, which defines that this Job belongs to the build stage. The second parameter is script, which is used to write a command that compiles our code. This command is written by us and can have any options we want; since we compile a C++ program we just give g++ the name of our source file and the name of our output file. The last parameter here is artifacts, which is used to save the output of the script execution. The artifacts will be sent to GitLab after the job finishes and will be available for download in the GitLab UI. Here we give just the path of our output file, because it will be used in a later Job for testing.
lets_compile:
  stage: build
  script:
    - g++ genesis.cpp -o mybinary
  artifacts:
    paths:
      - mybinary
Again, we define a new Job, with the name lets_test. Remember that, according to our stages definition at the start, this job and every other job belonging to the "test" stage can only start after all "build" Jobs are finished. The first parameter is the name of the stage this job belongs to, which here is test. The second parameter defines dependencies on other Jobs: it controls which jobs' artifacts are downloaded into this one. Here we allow artifacts from job lets_compile to pass to job lets_test, so mybinary is available for testing. Last is our script, where we define the commands to run on the Runner: first we change the permissions of our bash file in order to allow it to execute, and then we just run it.
lets_test:
  stage: test
  dependencies:
    - lets_compile
  script:
    - chmod +x test.sh
    - ./test.sh
genesis.cpp

#include <iostream>
using namespace std;

int main() {
    cout << "Hello world!" << endl;
    return 0;
}
test.sh

#!/bin/bash
echo "Starting testing."
OUTPUT=`./mybinary`
RETVAL=$?
if [ $RETVAL -eq 0 ]; then
    echo "OK"
else
    echo "FAIL"
    exit 1
fi
if [ "$OUTPUT" == "Hello world!" ]; then
    echo "Test passed."
else
    echo "Test failed."
    exit 1
fi
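The pattern used in test.sh — capture the program's standard output and its exit status, then compare both — works with any command. A self-contained sketch, with echo standing in for ./mybinary:

```shell
#!/bin/bash
# 'echo' is a hypothetical stand-in for the binary under test.
OUTPUT=$(echo "Hello world!")
RETVAL=$?                     # exit status of the command substitution above

if [ $RETVAL -eq 0 ] && [ "$OUTPUT" == "Hello world!" ]; then
    echo "Test passed."
else
    echo "Test failed."
    exit 1                    # a nonzero exit marks the CI job as failed
fi
```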
.gitlab-ci.yml

stages:
  - build
  - test

build:
  stage: build
  script:
    - g++ genesis.cpp -o mybinary
  artifacts:
    paths:
      - mybinary

test:
  stage: test
  dependencies:
    - build
  script:
    - chmod +x test.sh
    - ./test.sh
Taking a real project's pipeline configuration as an example (fvc-reconstruct), we can see how it works. This project has 4 stages defined at the start of the YAML file.
stages:
  - build
  - run_experiment
  - test
  - deploy
In the first stage we compile the fvc-reconstruct library/solver, which will be used later for the experiment run. The image being used here is a custom one, containing only OpenFOAM 18.06 already built, with Arch Linux as the base operating system. The before_script block installs the 'which' package, because it is needed for finding the dependencies of the library/solver. Also in this block, execute permission is given to a custom script for compiling the code. This is because the GitLab Runner cannot execute the source command directly, and we need to source the bashrc file for OpenFOAM to work properly. In the script block the bash script is executed; the contents of building_script.sh are provided below. Lastly, 4 files are specified as artifacts, which means they are going to be passed to all the jobs after this one. These files are the ones necessary to use the fvc-reconstruct solver in the experiment run.
build_fvc_reconstruct:
  stage: build
  image: melanoleucos/arch_openfoam
  before_script:
    - pacman -Sy --noconfirm which
    - chmod +x building_script.sh
  script:
    - ./building_script.sh
  artifacts:
    paths:
      - code/foamTestFvcReconstruct/libs.tar
      - code/foamTestFvcReconstruct/etc.tar
      - code/foamTestFvcReconstruct/blockMesh
      - code/foamTestFvcReconstruct/foamTestFvcReconstruct
#!/bin/bash
# Enable OpenFOAM
source /opt/OpenFOAM/OpenFOAM-plus/etc/bashrc
cd code/foamTestFvcReconstruct
wmake # Compile fvc-reconstruct
# Find out what the dependencies are for the solver and tarball them.
ldd $(which foamTestFvcReconstruct) | cut -d" " -f3 | xargs tar --dereference -cf libs.tar
ldd $(which blockMesh) | cut -d" " -f3 | xargs tar --dereference -rvf libs.tar
# Tarball etc & other libraries needed.
tar --dereference -rvf libs.tar /lib64/ld-linux-x86-64.so.2
tar -cf etc.tar /opt/OpenFOAM/OpenFOAM-plus/etc
# Move the two commands to the current working directory.
mv /opt/OpenFOAM/OpenFOAM-plus/platforms/linux64GccDPInt32Opt/bin/blockMesh .
mv /root/OpenFOAM/-v1906/platforms/linux64GccDPInt32Opt/bin/foamTestFvcReconstruct .
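The ldd | cut | xargs tar pipeline above works because ldd prints one line per dependency in the form name => resolved-path (load-address), so the third space-separated field is the resolved library path. A sketch with mock ldd output (the library paths are made up):

```shell
#!/bin/bash
# Two lines mimicking what `ldd some_binary` prints; field 3 is the resolved path.
ldd_output='libm.so.6 => /usr/lib/libm.so.6 (0x00007f1a)
libc.so.6 => /usr/lib/libc.so.6 (0x00007f1b)'

# Extract the third space-separated field, exactly as building_script.sh does
# before handing the paths to `xargs tar`.
echo "$ldd_output" | cut -d" " -f3
# prints:
# /usr/lib/libm.so.6
# /usr/lib/libc.so.6
```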
In this stage we first define an environment variable LD_LIBRARY_PATH, which will be used to look for fvc-reconstruct and its dependencies. In the before_script block we move between directories, untar the libraries and the etc folder (needed by OpenFOAM), and move those files to the appropriate places. Then we change permissions on 'pyFoamRunStudy', a script responsible for running the experiment that is invoked by the main 'studyRun' script, and execute the source command on bashrc, which exposes the OpenFOAM commands. The main script block just moves into the right folder and executes the 'studyRun' bash script. This job produces two artifacts (two .csv files) with the results of the experiment, which are going to be passed to the next stages.
study_run:
  # As an image either use 1) archlinux/base or 2) fvc_study_run_image
  # The second choice is a Docker image with packages already installed to speed things up.
  # If the first choice is used, the following commands must also run in 'before_script':
  # pacman -Sy --noconfirm tar python python-pip
  # pip install --no-cache pyfoam sympy matplotlib pandas
  variables:
    LD_LIBRARY_PATH: /opt/OpenFOAM/OpenFOAM-plus/platforms/linux64GccDPInt32Opt/lib:lib:lib64:/opt/OpenFOAM/OpenFOAM-plus/platforms/linux64GccDPInt32Opt/lib/openmpi-system
  stage: run_experiment
  image: fvc_study_run_image
  before_script:
    - cd code/foamTestFvcReconstruct/
    - tar -xf libs.tar
    - tar -xf etc.tar
    - mv blockMesh foamTestFvcReconstruct /bin/
    - mv opt/ /
    - mv usr/lib/openmpi/ usr/lib/libhwloc.so.5 usr/lib/libltdl.so.7 usr/lib/libnuma.so.1 /usr/lib/
    - chmod +x /opt/OpenFOAM/OpenFOAM-plus/etc/bashrc /builds/leia/fvc-reconstruct/data/pyFoamRunStudy
    - source /opt/OpenFOAM/OpenFOAM-plus/etc/bashrc
  script:
    - cd /builds/leia/fvc-reconstruct/data/hex2D
    - ./studyRun
  artifacts:
    paths:
      - data/hex2D/SHEAR_2D.csv
      - data/hex2D/HADAMARD_RYBCZYNSKY_2D.csv
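LD_LIBRARY_PATH is a colon-separated list of directories that the dynamic linker searches before its default locations, which is why the job can point it at both the untarred OpenFOAM libraries and the local lib folders. A quick way to inspect such a list (a shortened, hypothetical value is used here):

```shell
#!/bin/bash
# Shortened, hypothetical version of the job's variable.
LD_LIBRARY_PATH=/opt/OpenFOAM/OpenFOAM-plus/platforms/linux64GccDPInt32Opt/lib:lib:lib64
# Print one search directory per line by translating ':' to newlines.
echo "$LD_LIBRARY_PATH" | tr ':' '\n'
```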
In this stage we convert the Python notebook into a Markdown file so it can later be used by the Jekyll framework to publish it as a blog page. First we have to take care of the SSH keys: we are using a different repository for the Page, and because it is private we cannot pull or push without the keys. That is why the '.ssh' folder is created; we copy the environment variables (these variables should be kept secret, so they are stored as environment variables in the project settings) into files in that folder and give them read permissions. Now we are able to clone the private repo into the root folder.
The script block is specific to this project, but apart from the file names its structure is what is necessary for converting files so that Jekyll recognizes them as blog posts. To begin with, we execute the jupyter nbconvert command, which executes a notebook from the command line and converts it to Markdown (.md) with the flag --to markdown. After that, we rename the two images that were produced by the notebook, appending the hash of the commit of the push (referring to the commit of fvc-reconstruct, NOT the commit to the fvc-reconstruct-page repo). This renaming helps because the repository will hold images from many commits (each commit produces a new blog post) and we don't want them to be overwritten. Continuing, every Jekyll post needs some metadata in a specific format to be recognized as a post page; using sed we insert that information (date and title of the article) at the top of the file, and also change the references to the old images, since we renamed them. In the end, we just move the files into the necessary folders (_posts/ and assets/images/) and execute git push to the repository. From then on, it is the work of the GitLab Pages repository to build the new version of the website and publish it automatically.
$CI_COMMIT_SHA and $CI_COMMIT_SHORT_SHA are variables provided by the GitLab CI Runner and are available by default.
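The date-prefix renaming that Jekyll's _posts/ convention requires (YYYY-MM-DD-title.md) can be tried locally with a hypothetical value standing in for the short hash:

```shell
#!/bin/bash
# $CI_COMMIT_SHORT_SHA is normally provided by GitLab; a hypothetical value is used here.
CI_COMMIT_SHORT_SHA=ab12cd3

touch fvc-reconstruct-convergence.md
# Prefix the current date and append the short hash, matching the _posts/ naming scheme.
mv fvc-reconstruct-convergence.md "$(date +%Y-%m-%d)-fvc-reconstruct-convergence-$CI_COMMIT_SHORT_SHA.md"

ls ./*-fvc-reconstruct-convergence-"$CI_COMMIT_SHORT_SHA".md
```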
publish_page:
  # As an image either use 1) archlinux/base or 2) fvc_publish_image
  # The second choice is a Docker image with packages already installed to speed things up.
  # If the first choice is used, the following commands must also run in 'before_script':
  # pacman -Sy --noconfirm git python python-pip texlive-core texlive-latexextra openssh
  # pip install pandas matplotlib jupyter
  stage: deploy
  image: fvc-publish-image
  before_script:
    # The RSA key pair is saved from environment variables (stored in the project settings)
    # to files in order to clone the GitLab Pages repository. The public key has also
    # been added to the GitLab Pages repository as a 'Deploy Key' with 'Write access' allowed.
    - mkdir ~/.ssh/
    - echo "$ID_RSA" > ~/.ssh/id_rsa
    - echo "$ID_RSA_PUB" > ~/.ssh/id_rsa.pub
    - echo "$KNOWN_HOSTS" > ~/.ssh/known_hosts
    - chmod 400 ~/.ssh/id_rsa.pub
    - chmod 400 ~/.ssh/id_rsa
    - cd / && git clone git@git.rwth-aachen.de:leia/fvc-reconstruct-gitlab-page.git
  script:
    - jupyter nbconvert fvc-reconstruct-convergence.ipynb --execute --to markdown
    # Rename images by appending the commit short hash, so there are no conflicts between commits.
    - cd /builds/leia/fvc-reconstruct/data/fvc-reconstruct-convergence_files/
    - mv fvc-reconstruct-convergence_1_0.png fvc-reconstruct-SHEAR-convergence-$CI_COMMIT_SHA.png
    - mv fvc-reconstruct-convergence_2_0.png fvc-reconstruct-HADAMARD_RYBCZYNSKY-convergence-$CI_COMMIT_SHA.png
    # Insert metadata needed by Jekyll to display the file as a post.
    - sed -i '1i ---\nlayout= post\ntitle= fvc-reconstruct commit '"$CI_COMMIT_SHORT_SHA"'\ndate= '"`date "+%Y-%m-%d %H:%M:%S %z"`"'\ntags= '"$CI_COMMIT_SHA"'\n---\n' ../fvc-reconstruct-convergence.md
    # The colon is awkward to escape in the previous command, so '=' is replaced with ':' here.
    - sed -i 's/\=/:/g' ../fvc-reconstruct-convergence.md
    # Change references to images inside the Markdown file, so images can be displayed.
    - sed -i 's/fvc-reconstruct-convergence_files\/fvc-reconstruct-convergence_1_0/\/fvc-reconstruct-gitlab-page\/assets\/images\/fvc-reconstruct-SHEAR-convergence-'"$CI_COMMIT_SHA"'/g' ../fvc-reconstruct-convergence.md
    - sed -i 's/fvc-reconstruct-convergence_files\/fvc-reconstruct-convergence_2_0/\/fvc-reconstruct-gitlab-page\/assets\/images\/fvc-reconstruct-HADAMARD_RYBCZYNSKY-convergence-'"$CI_COMMIT_SHA"'/g' ../fvc-reconstruct-convergence.md
    # Rename the file in the format needed by Jekyll, by prepending the current date.
    - mv ../fvc-reconstruct-convergence.md ../$(date +%Y-%m-%d)-fvc-reconstruct-convergence-$CI_COMMIT_SHORT_SHA.md
    # Move files to the local GitLab Pages repository and push changes upstream.
    - mv *.png /fvc-reconstruct-gitlab-page/assets/images/
    - mv ../*.md /fvc-reconstruct-gitlab-page/_posts/
    - cd /fvc-reconstruct-gitlab-page/
    - git config --global user.email '<>'
    - git add . && git commit -m "New experiment run" && git push origin master
This is the final stage and, in this case, it just executes a Python file which checks the values of the experiment. If the values do not satisfy the requirements, the whole pipeline fails.

testing_stage:
  # As an image either use 1) archlinux/base or 2) fvc_testing_image
  # The second choice is a Docker image with packages already installed to speed things up.
  # If the first choice is used, the following commands must also run in 'before_script':
  # pacman -Sy --noconfirm python python-pip texlive-core texlive-latexextra
  # pip install pandas matplotlib jupyter
  stage: test
  image: fvc-testing-image
  before_script:
    - cd data/
  script:
    - python fvc-reconstruct-convergence.py
  dependencies:
    - study_run
Now that the CI pipeline configuration file is done, every time a commit is made the pipeline is activated and starts. We can observe the procedure by visiting the GitLab page of our project and clicking CI/CD → Pipelines in the sidebar. There we can see all the pipelines. Here follow two bad examples and one good one of what we can see.
A. Our building has failed and therefore our test was skipped.
B. Our testing has failed.
C. Everything passed.
We see that our pipelines contain information about the overall pipeline status (passed or failed), a pipeline ID, who initiated the pipeline, the hash of the commit, and two small icons (containing either a tick for a pass or an x for a fail) representing the stages and their statuses. If there were more than two stages, there would be more of these icons. On the far right, we have the download icon for getting the artifacts we mentioned before, and the retry button for running the pipeline again if it failed.
If we click inside one of the failed pipelines we can see more information about the error. By clicking on the “failed” icon we go to the page of the specific pipeline, and then by clicking “Failed Jobs” we have access to the Runner console output.
Here is an example of a commit that was made in order to fail (by taking the "!" out of "Hello world!" and therefore making the test exit with an error).