Jenkins and Docker

This document provides a high-level overview of the reasons to adopt Docker for the Jenkins jobs used by the Apertis infrastructure and covers the steps needed to transition existing non-Docker jobs.

What is Jenkins

Jenkins is the automation server that ties all the components of the Apertis infrastructure together.

It is responsible for:

  • building source packages from git repositories and submitting them to OBS
  • building ospacks and images
  • submitting test jobs to LAVA
  • rendering documentation from Markdown to HTML and PDF and publishing it
  • building sample app-bundles
  • bundling test helpers

What is Docker

Docker is the leading system to build, manage and run server applications in a containerized environment.

It simplifies reproducibility by:

  • providing an easy way to build container images
  • providing a registry for already built container images
  • isolating the applications using the container images from the host system
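
As a minimal sketch of the first point, a work environment can be captured in a short Dockerfile and rebuilt identically anywhere; the base image and package list below are purely illustrative:

```dockerfile
# Start from a published base image pulled from a registry
FROM debian:stretch

# Provision the build dependencies once, at image build time,
# instead of hand-tweaking each worker machine
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        build-essential git && \
    rm -rf /var/lib/apt/lists/*
```

Such an image is built locally with `docker build` and shared through a registry with `docker push`, covering the other two points.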

Why Docker with Jenkins

Running Jenkins jobs directly on a worker machine has several drawbacks:

  • all the jobs share the same work environment which can cause unwanted interactions
  • the work environment has to be provisioned manually by installing packages on the machine and hand-tweaking the configuration
  • the work environment has to be kept up-to-date manually
  • reproducing the same work environment on different workers is very error prone as it relies on manual action
  • customizing the work environment needs privileged operations
  • the work environment can't be reproduced on developers' machines
  • conflicting requirements (for instance, building against different releases) cannot be fulfilled as the work environment is shared
  • scaling is complex

Jenkins jobs can instead be configured to use Docker containers as their work environment, which brings the following advantages:

  • each job runs in a separate container, giving more control over resource usage
  • Docker containers are instantiated automatically by Jenkins
  • rebuilding Docker containers from scratch to get the latest updates can be done with a single click
  • the containers provide a reproducible environment across workers
  • Docker container images are built from Dockerfiles controlled by developers using the normal review workflow with no special privileges
  • the same container images used on the Jenkins workers can be used to reproduce the work environment on developers' machines
  • containers are ephemeral, a job changing the work environment does not affect other jobs nor subsequent runs of the same job
  • containers are isolated from each other, making it possible to address conflicting requirements by using different images
  • several service providers offer Docker support which can be used for scaling
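
As a sketch of how a job opts in, a Jenkins declarative pipeline can request a Docker-based work environment through the agent directive; the image name here is illustrative:

```groovy
pipeline {
    // Run every stage inside the named container image:
    // Jenkins pulls the image and mounts the workspace automatically,
    // and the container is discarded when the job ends.
    agent {
        docker {
            image 'docker-registry.example.org/image-builder'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'make'  // runs inside the ephemeral container
            }
        }
    }
}
```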

Apertis jobs using Docker

Apertis already uses Docker containers for a few key jobs: in particular, the transition to Debos was done by targeting Docker from the start, which greatly simplified setup and maintenance compared to the previous mechanism.

Image recipes

The jobs building ospacks and images use the image-builder Docker container, based on Debian stretch.

A special requirement for these jobs is that /dev/kvm must be made accessible inside the container. Particular care must therefore be taken when selecting the worker machines that will run them, ruling out incompatible virtualization mechanisms (for instance, VirtualBox) and service providers that cannot provide access to the KVM device.

Developers can retrieve and launch the same environment used by Jenkins with a single command (the registry path here is illustrative; use the image published in your Docker registry):

$ docker run \
  -it \
  --device /dev/kvm \
  docker-registry.example.org/image-builder

The jobs building the designs and development websites use the documentation-builder Docker container, based on Debian stretch.

Unlike the containers used to build images, the documentation builder does not have any special requirement.

Docker images

The Docker images used are generated and kept up-to-date through a dedicated Jenkins job that checks out the docker-images repository, uses Docker to build all the needed images and pushes them to our Docker registry to make them available to Jenkins and to developers.
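
The dedicated job boils down to a loop of standard Docker commands over the recipes in the repository; the repository URL and image names below are illustrative:

```console
$ git clone https://gitlab.example.org/infrastructure/docker-images.git
$ cd docker-images
$ docker build -t docker-registry.example.org/image-builder image-builder/
$ docker push docker-registry.example.org/image-builder
```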

Converting the remaining jobs to Docker

All the older jobs still run directly on a specifically configured worker machine. By converting them to use Docker we would gain the benefits listed above and could also repurpose that special worker machine as another Docker host, doubling the number of jobs that can run in parallel.

The affected jobs are:

  • the packages/* and packaging/* jobs
  • the sample app-bundle jobs
  • the build-package-all-masters job
  • the Phabricator jobs

Creating a new Docker image

The first step is to create a new Docker image to reproduce the work environment needed by the jobs.

A new package-builder recipe is introduced.

Unlike other images so far, this one is based on Apertis itself rather than Debian. This means that a minimal Apertis ospack is produced during the build and is then used to seed the Dockerfile, which installs all the needed packages on top of it.
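
Seeding a Docker image from an ospack can be sketched as importing the rootfs tarball and installing packages on top of it; the tarball name and package list below are illustrative:

```dockerfile
# Import the minimal Apertis ospack produced during the build;
# ADD automatically extracts local tarballs into the image
FROM scratch
ADD ospack-minimal.tar.gz /

# Install the tools needed to build source packages on top of it
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        devscripts git-buildpackage && \
    rm -rf /var/lib/apt/lists/*
```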

Converting the packaging jobs

All the packages/* and packaging/* jobs are similar, as they involve checking out the git tree of a package, launching build-snapshot to build it in the work environment, and submitting the resulting source package to OBS.

Once all the required dependencies are again available in the work environment, converting the job templates only requires minor changes.
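
A converted job then essentially runs the same steps as before, just wrapped in a container; the package, image name and options below are illustrative:

```console
$ git clone https://gitlab.example.org/pkg/dash.git
$ docker run -it --rm \
    -v $(pwd)/dash:/src -w /src \
    docker-registry.example.org/package-builder \
    build-snapshot
```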

Converting the sample app-bundle jobs

The jobs building the sample applications need ade and the dependencies of the app-bundles themselves.

The changes required to switch the job template to use Docker are pretty similar to the ones required by the packaging jobs.

Converting the build-package-all-masters job

This job's purpose is to check that no API breakage is introduced in the application framework and HMI packages by building them from sources in sequence.

The changes required to switch the job template to use Docker are pretty similar to the ones required by the packaging jobs.

Converting the Phabricator jobs

While the plan is to officially switch to GitLab for all the code reviews, the jobs used to validate the patches submitted to Phabricator need to be ported to avoid regressions.

The changes to port them to Docker are similar to the ones for the other jobs, but additional fixes are needed to ensure they work smoothly in ephemeral Docker containers, relaxing SSH host key checking and avoiding the interactive behavior of git-phab.
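
Since each run starts from a pristine container with an empty ~/.ssh, host key checking can be relaxed non-interactively with a snippet along these lines in ~/.ssh/config (the host name is illustrative):

```
Host phabricator.example.org
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
```

This trades host authentication for unattended operation, which is acceptable inside short-lived CI containers.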

Steps to be taken by downstreams

Downstreams are already likely to have a Docker-capable worker machine for their Jenkins instance in order to run the Debos-based jobs.

By merging the latest changes in the apertis-docker-images repository a new package-builder image should be available in their Docker registry.

The updates to the templates in the apertis-jenkins-jobs repository can then be merged and deployed to Jenkins to make use of the new Docker image.
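
Assuming the templates are managed with Jenkins Job Builder, deploying the updated definitions is a single command; the configuration file and jobs path are illustrative:

```console
$ jenkins-jobs --conf jenkins.ini update jobs/
```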
