Testing is a core part of the project, and different test data is required to optimise the testing process.

Currently the project does not have a functional and well defined place for storage of the different types of test data, which creates many issues across the testing processes.

The goal of this document is to define a single storage place for all the test data and to build on top of it the foundation for accurate test data processing and reporting.

Current Issues

Test Case Issues

At this time, test cases are stored in the Apertis MediaWiki instance with a single page for each test case. Although this offers a reasonable degree of visibility for the tests, the storage method is not designed to manage this type of data, so only limited features are available for handling the test cases.

The wiki does not provide a convenient way to reuse this data through other tools or infrastructure services. For example, management functions like filtering or detailed searching are not available.

Test cases may also fall out of sync with the automated tests, since they are managed manually in different places: an automated test might not have a test case page available, or a test case could be marked as obsolete while it is still being executed automatically by LAVA.

Another big issue is that test cases are not versioned, so there is no way to keep track of which specific version of a test case was executed for a specific image version.

Test Result Issues

Automated test results are stored in the LAVA database after the tests are executed, while manual test results are added by hand to the wiki report page for the weekly testing round, making the wiki the only storage location for manual results. As with test cases, the wiki offers only limited functionality for handling the results.

LAVA does not offer a complete interface or dashboard to clearly track test results, and the current interface to fetch these results is not user friendly.

Manual results are only available from the Apertis wiki in the Weekly Testing Report page, and they are not stored elsewhere.

The only way to review trends between different test runs is to manually go through the different wiki and LAVA web pages of each report, which is cumbersome and time consuming.

Essentially, there is no canonical place for storing all the test results of the project. This has major repercussions, since there is no way to keep proper track of the overall health of the project.

Testing Process Issues

The biggest issue is the lack of centralised data storage for test results and test cases, which creates the following problems for the testing process:

  • It is not possible to easily analyse test results. For example, there is no interface for tracking test result trends over a period of time or across different releases.

  • Test cases are not versioned, so it is not possible to know exactly which test cases were executed for a specific image version.

  • Test case instructions can differ from the actual test instructions being executed. This tends to happen mainly with automated tests: for example, when a test script is updated but the corresponding test case misses the update.

  • Test results cannot be linked to test cases, because the test data is located in different places and the test cases have no version information.


A data storage backend needs to be defined to store all test cases and test results.

The storage backend need not be the same for all data types, but a well defined mechanism should be available to access the data in a consistent way from our current infrastructure, and one solution should not impose limitations or constraints on the other. For example, one backend could be used only for test cases and another for test results.

Data Backend Requirements

The data storage backend should fulfil at least the following conditions:

  • Store all test cases.
  • Store all manual and automated test results.
  • It should make no distinction between manual and automated test cases, and ideally offer a very transparent and consistent interface for both types of tests.
  • It should offer an API to access the data that can be easily integrated with the rest of the services in the existing infrastructure.
  • It should allow the execution of management operations on the data (querying, filtering, searching).
  • Ideally, it should offer a frontend to simplify management operations.


Test Data

We are interested in storing two types of test data: test cases and test results.

Test Cases

A test case is a specification containing the requirements, environment, inputs, execution steps, expected results and metadata for a specific test.

The test case descriptions in the wiki include custom fields that will need to be defined during the design and development of the data storage solution. The design will also need to consider the management, maintenance and tools required to handle all test case data.

Test Results

Test results can be of two types: manual and automated.

Test results are currently collected in two different places depending on the test type, which makes it very inconvenient to process and analyse the test data.

Therefore, the data backend solution should be able to:

  • Store manual test results, which will be entered by the tester.
  • Store automated test results, which will be fetched from the LAVA database.
  • Keep all results in the same place and format to simplify reporting and manipulation of the data.

Data Usage

The two main uses for the test result data will be reports and statistics.

Test Reports

A test report shows the test results for all the applicable test cases executed on a specific image version.

Test reports are currently created weekly; they are assembled manually with the help of some scripts and stored on the project wiki.

New tools will need to be designed and developed to create reports once the backend solution is implemented.

These tools should be able to query the test data using the backend API to produce reports both as needed and at regular intervals (weekly, monthly).

Test Statistics

Accurate and up-to-date statistics are an important use case for the test data.

Even though these statistics could be generated using different tools, there may still be a need to store this data somewhere. For example, for every release, besides the usual test report, a final release report could be produced giving a more detailed overview of the whole release's history.

The backend should also make it possible to easily access the statistics data for further processing, for example, to download it and manipulate the data using a spreadsheet.
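As a sketch of the kind of further processing mentioned above, the snippet below flattens a SQUAD-style tests dictionary into CSV so the data can be opened in a spreadsheet. The input shape mirrors the tests-file format described later in this document; the test names are invented for illustration.

```python
import csv
import io

# Flatten a tests dict ({"suite/test": "pass"/"fail"}) into CSV rows
# suitable for import into a spreadsheet application.
def results_to_csv(tests):
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    writer.writerow(["test", "result"])
    for name in sorted(tests):
        writer.writerow([name, tests[name]])
    return buffer.getvalue()

csv_text = results_to_csv({"suite/t2": "fail", "suite/t1": "pass"})
```

A real exporter would first fetch the results from the backend API instead of using a hard-coded dictionary.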

Data Format

Test data should ideally be in a well-known standard format that can be reused easily by other services and tools.

The data format is therefore an important consideration when choosing the backend: it will have a major impact on the project, since it helps determine the infrastructure requirements and the tools that need to be developed to interact with the data.

Version Management

Test cases and test results should be versioned.

Though this is more related to the way data will be used, the backend might also have an impact on managing versions of this data.

One of the advantages of versioning is that it will make it possible to link test cases to test results.

Data Storage Backends

These sections give an overview of the different data backend systems that can be used to implement a solution.


SQUAD

SQUAD stands for Software Quality Dashboard; it is an open source test management dashboard.

It can handle tests results with metrics and metadata, but it offers no support for test case management.

SQUAD is a database with an HTTP API to manage test result data. It uses an SQL database, such as MySQL or PostgreSQL, to store results. Its web frontend and API are written using Django.

Therefore, it would not require much effort to modify our current infrastructure services to push test results to, and fetch them from, SQUAD.


Pros

  • Simple HTTP API: POST to submit results, GET to fetch results.
  • Easy integration with all our existing infrastructure.
  • Test results, metrics and metadata are in JSON format.
  • Offers support for PASS/FAIL results with metrics, if available.
  • Supports authentication tokens for use of the HTTP API.
  • Has support for teams and projects. Each team can have multiple projects and each project can have multiple builds with multiple test runs.
  • It offers group permissions and visibility options.
  • It offers optional backend support for LAVA.
  • Actively developed and upstream is open to contributions.
  • It provides a web frontend to visualise test result data with charts.
  • It is a Django application using a stable database system like PostgreSQL.


Cons

  • It offers no built-in support for storing manual test results, but it should be straightforward to develop a new tool or application to submit them.
  • It has no support for test case management. This could be either added to SQUAD or a different solution could be used.
  • The web frontend is very simple and it lacks support for many visual charts. It currently only supports very simple metrics charts for tests results.

Database Systems

Since the problem is about storing data, a plain SQL database is also a valid option to consider.

A reliable database system such as PostgreSQL or MySQL could be used, with an application built on top of it to manage all test data.

Newer database systems, such as CouchDB, can also offer more advanced features. CouchDB is a NoSQL database that stores data as JSON documents, and it offers an HTTP API for managing the stored data.

This database acts like a server that can interact with remote applications through its HTTP API.
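To illustrate this document-per-URL model: in CouchDB, storing or reading a result document is a plain HTTP request against a URL built from the server, database and document name. The server address, database and document id below are placeholders, not part of any existing setup.

```python
# Build the URL under which CouchDB exposes a single JSON document.
def couchdb_doc_url(server, database, doc_id):
    return "{}/{}/{}".format(server, database, doc_id)

url = couchdb_doc_url("http://localhost:5984", "test-results", "run-5399")

# With the requests module, the document could then be written and read:
# import requests
# requests.put(url, json={"test1": "pass"})  # create/update the document
# requests.get(url).json()                   # read it back
```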


Pros

  • Very simple solution to store data.
  • Advanced database systems can offer an API and features to interact with data remotely.


Cons

  • All applications to manage data need to be developed on top of the database system.

Version Control Systems

A version control system (VCS), like Git, could be used to store all or part of the test data.

This approach would require designing all the data management components from scratch, but it has the advantage that the solution can be adapted perfectly to the project's needs.

A data format would need to be defined for all data and document types, alongside a structure for the directory hierarchy within the repository.


Pros

  • It fits the model of the project perfectly. All project members can easily have access to the data and are already familiar with this kind of system.
  • It offers good versioning and history support.
  • It allows other tools, frameworks or infrastructure services to easily reuse data.
  • Due to its simplicity and re-usability, it can be easily adapted to other projects and teams.


Cons

  • All applications and tools need to be developed to interact with this system.
  • Although it is a simple solution, it depends on well defined formats for documents and files to keep data storage in a sane state.
  • It does not offer the query capabilities usually found in database systems, so these would need to be implemented in the application logic.


ResultsDB

ResultsDB is a system specifically designed for storing test results. It can store results from many different test systems and types of tests.

It provides an optional web frontend, but it is built to be compatible with different frontend applications, which can be developed to interact with the stored data.


Pros

  • It has an HTTP REST interface: POST to submit results, GET to fetch results.
  • It provides a Python API for using the JSON/REST interface.
  • It only stores test results, but it has a concept of test cases in the form of namespaced names.
  • It is production ready.


Cons

  • The web frontend is very simple. It lacks metrics charts and groups for project teams.
  • The web frontend is optional. This could involve extra configurations and maintenance efforts.
  • It seems too tied to its upstream project system.


Proposed Solution

This section describes a solution using some of the backends discussed in the previous sections to solve the test data storage problem in the Apertis project.

This solution proposes using a different type of storage backend for each type of data.

SQUAD will be used to store the test result data (both manual and automated), and a VCS (Git is recommended) will be used to store the test case data. This solution also involves defining data formats and writing a tool or custom web application to guide testers through entering manual test results.


Pros

  • It is a very simple solution for all data types.
  • It can be designed to perfectly suit the project needs.
  • It can be easily integrated with our current infrastructure. It fits very well into the current CI workflow.
  • Storing test cases in a VCS will make it easy to manage test case versions in a very flexible way.


Cons

  • Some tools and applications need to be designed and implemented from scratch.
  • Format and standards need to be defined for test cases files.
  • The solution is limited to data storage; further data processing tasks will need to be done by other tools (for example, test case management, generating test result statistics, and so on).

Test Results

SQUAD will be used as the data storage backend for all the test results.

This mechanism for receiving and storing test results fits perfectly into the proposed mechanism to close the CI loop.

Automated Test Results

Automated test results will be received in Jenkins from LAVA using the webhook plugin. These results will then be processed in Jenkins and pushed into SQUAD using the HTTP API.

A tool needs to be developed to properly process the test results received from LAVA. This data is in JSON format, which is the same format required by SQUAD, so it should be simple to write a tool that translates the data into the format accepted by SQUAD.
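A minimal sketch of that translation step is shown below. The field names in the incoming LAVA payload ("suite", "name", "result") are assumptions for illustration; the real tool must match whatever the LAVA webhook actually sends.

```python
import json

# Flatten a list of LAVA test results into the SQUAD tests dict,
# using "suite/test" keys as in the SQUAD tests-file format.
def lava_to_squad(lava_results):
    tests = {}
    for result in lava_results:
        if result.get("suite"):
            key = "{}/{}".format(result["suite"], result["name"])
        else:
            key = result["name"]
        tests[key] = result["result"]
    return tests

# Hypothetical payload, shaped as described in the lead-in above:
lava_payload = [
    {"suite": "librest", "name": "test1", "result": "pass"},
    {"suite": "librest", "name": "test2", "result": "fail"},
]
squad_tests = lava_to_squad(lava_payload)
tests_file = json.dumps(squad_tests)  # body of the SQUAD "tests" file
```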

Manual Test Results

SQUAD does not offer any mechanism to input manual test results, so these results will need to be entered into SQUAD by the tester. Nevertheless, it should be relatively simple to develop a tool or application to submit this data.

The application would need to receive the test data (for example, by prompting the user to input it) and then generate a JSON file that is later sent to SQUAD.

Manual test results will need to be entered by the tester using the new application or tool every time a manual test is executed.
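The core of such a tool could be as simple as the sketch below: prompt for a verdict per executed test case and build the SQUAD tests dict. The prompt function is injected as a parameter so the flow can be exercised without a real tester; the test case names are invented for illustration.

```python
import json

# Ask for a pass/fail verdict per test case and build the SQUAD tests dict.
def collect_results(test_names, ask=input):
    results = {}
    for name in test_names:
        verdict = ask("Result for {} [pass/fail]: ".format(name))
        verdict = verdict.strip().lower()
        results[name] = verdict if verdict in ("pass", "fail") else "unknown"
    return results

# Simulated session in place of real tester input:
answers = iter(["pass", "FAIL "])
results = collect_results(
    ["webkit2gtk-aligned-scroll", "apparmor-basic"],
    ask=lambda prompt: next(answers))
payload = json.dumps(results)  # ready to be submitted to SQUAD
```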

File Format

All test results will be in the standard SQUAD JSON format:

  • For automated tests, Jenkins will receive the test data in the JSON format sent by LAVA; this data then needs to be converted into the JSON format recognised by SQUAD.

  • For manual tests, the tester will enter the test data manually through a tool or application, which will create a JSON file that is also recognised by SQUAD.

In other words, the final format for all test results is determined by the SQUAD backend.

The test data must be submitted to SQUAD either as file attachments or as regular POST parameters.

There are four types of input file formats accepted by SQUAD: tests, metrics, metadata and attachment files.

The tests, metrics and metadata files should all be in JSON format. The attachment files can be in any format (txt, png, and so on).

All test results, both for automated and manual tests, will use these file formats. Here are some examples of the different types:

  1. Tests file: it contains the test results in PASS/FAIL format.

     {
       "test1": "pass",
       "test2": "pass",
       "testsuite1/test1": "pass",
       "testsuite1/test2": "fail",
       "testsuite2/subgroup1/testA": "pass",
       "testsuite2/subgroup2/testA": "pass"
     }

  2. Metrics file: it contains the test results in metrics format.

     {
       "test1": 1,
       "test2": 2.5,
       "metric1/test1": [1.2, 2.1, 3.03],
       "metric2/test2": [200, 109, 13]
     }

  3. Metadata file: it contains metadata for the tests. SQUAD recognises some special values and also accepts new fields to extend the test data with any relevant information.

     {
       "build_url": "https://<url_build_origin>",
       "job_id": "902",
       "job_url": "https://<original_test_run_url>",
       "job_status": "stable",
       "metadata1": "metadata_value1",
       "metadata2": "metadata_value2"
     }

  4. Attachment files: these are any arbitrary files that can be submitted to SQUAD as part of the test results. Multiple attachments can be submitted to SQUAD during a single POST request.

Mandatory Data Fields

The following metadata fields are mandatory for every test file submitted to SQUAD, and must be included in the file: source, image.version, image.release, image.arch, image.board, and image.type.

The metadata file also needs to contain the list of test cases executed in the test job, together with their types (manual or automated). This will help to identify the exact test case versions that were executed.

This metadata will help to identify the test data environment, and it essentially maps to the same metadata available in the LAVA job definitions.

This data should be included both for automated and manual tests, and it can be extended with more fields if necessary.

Processing Test Results

In the end, all test results (both manual and automated) will be stored in a single place, the SQUAD database, and the data will be accessed consistently using the appropriate tools.

The SQUAD backend won't make any distinction between storing manual and automated test results, but each result will carry its type in the metadata so that the processing tools and user interfaces can distinguish them appropriately.

Further processing of all the test data can be done by other tools that can use the respective HTTP API to fetch this data from SQUAD.

All the test result data will be processed by two main tools:

  1. Automated Tests Processor

    This tool will receive test results in the LAVA JSON format and convert it to the JSON format recognised by SQUAD.

    This should be developed as a command line tool that can be executed from the Jenkins job receiving the LAVA results.

  2. Manual Tests Processor

    This tool will be manually executed by the tester to submit manual test results; it will create a JSON file with the test data, which can then be submitted to SQUAD.

Both tools can be written in Python, using the json module to handle the test result data and the requests module to submit the test data to SQUAD.
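A sketch of the submission step is shown below. The URL layout (team/project/build/environment under /api/submit/) follows SQUAD's submit API as described in this document; the server name, team and project values are placeholders.

```python
import json

# Build the SQUAD submission path: team/project/build/environment.
def build_submit_url(base_url, team, project, build, environment):
    return "{}/api/submit/{}/{}/{}/{}".format(
        base_url, team, project, build, environment)

def submit_results(url, token, tests, metadata):
    # Deferred import: requests is only needed when actually submitting.
    import requests
    return requests.post(
        url,
        headers={"Auth-Token": token},
        files={
            "tests": ("tests.json", json.dumps(tests)),
            "metadata": ("metadata.json", json.dumps(metadata)),
        },
    )

url = build_submit_url("https://squad.example.com",
                       "apertis", "18.06", "20180527.0", "amd64")
# submit_results(url, "xxxx", {"test1": "pass"}, {"source": "manual"})
```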

Submitting Test Results

Test data can be submitted to SQUAD by triggering a POST request to the appropriate HTTP API path.

SQUAD works around teams and projects to group the test data, so these are central concepts reflected in its HTTP API. For example, the API path contains the team and project names in the following form:


Tools can make use of this API either using programming modules or invoking command line tools like curl to trigger the request.

An example using the curl command line tool to submit all the results in the tests file common-tests.json for the image release 18.06 with version 20180527.0, including its metadata from the file environment.json, would look like this:

$ curl \
    --header "Auth-Token: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" \
    --form tests=@common-tests.json \
    --form metadata=@environment.json \

Fetching Test Results

Test data can be fetched from SQUAD by triggering a GET request to the specific HTTP API path.

Basically, all the different types of files pushed to SQUAD are accessible through its HTTP API. For example, to fetch the test results contained in a previously submitted tests file, a GET request to the tests file path can be triggered like this:

$ curl

This retrieves the test results contained in the tests file of test run ID 5399. In the same way, the metadata file for this test run can be fetched with a call to the metadata file path like this:

$ curl

The test run ID is generated by SQUAD to identify a specific test, and it can be obtained by triggering some query calls using the HTTP API.

Tools and applications (for example, to generate test reports and project statistics) can conveniently use this HTTP API either from programming modules or command line tools to access all the test data stored in SQUAD.

Test Cases

Git will be used as the data storage backend for all the test cases.

A data format for test cases needs to be defined, along with other standards such as the directory structure in the repository; a procedure to access and edit this data might also be necessary.

File Format

A well defined file format is required for the test cases.

The proposed solution is to reuse the LAVA test definition file format for all test cases, both for manual and automated tests.

The LAVA test definition files are YAML files that contain the instructions to run the automated tests in LAVA, and they are already stored in the automated tests Git repository.

In essence, the YAML format would be extended to add all the required test case data to the automated test definition files, and new YAML test definition files would be created for manual test cases, following the same format as the automated ones.

In this way, all test cases, both for automated and manual tests, will be available in the same YAML format.

The greatest advantage of this approach is that it will avoid the current issue of test case instructions differing from the executed steps in automated tests, since the test case and the definition file will be the same document.

The following examples are intended to give an idea of the file format for the manual and automated test cases. They are not in the final format and serve only as an indication of the format that will be used.

An example of the automated test case file for the librest test. This test case file will be executed by LAVA automatically:

  format: "Lava-Test-Shell Test Definition 1.0"
  name: librest
  type: unit-tests
  exec-type: automated
  target: any
  image-type: any
  description: "Run the unit tests that ship with the library against the running system."
  maintainer: "Luis Araujo <>"

  pre-conditions:
    - "Ensure you have the development repository enabled in your sources.list and you have recently run apt-get update."
    - "Ensure Rootfs is remounted as read/write"
    - sudo mount -o remount,rw /

  install:
    deps:
      - librest-0.7-tests

  run:
    steps:
      - common/run-test-in-systemd --user=user --timeout=900 --name=run-test env DEBUG=2 librest/automated/

  parse:
    pattern: ^(?P<test_case_id>[a-zA-Z0-9_\-\./]+):\s*(?P<result>pass|fail|skip|unknown)$


An example of the manual test case file for the webkit2gtk-aligned-scroll test. This test case can be manually read and executed by the tester, but ideally a new application should be developed to read this file and guide the tester through each step of the test case:

  format: "Manual Test Definition 1.0"
  name: webkit2gtk-aligned-scroll
  type: functional
  exec-type: manual
  target: any
  image-type: any
  description: "Test that scrolling is pinned in a given direction when started mostly towards it."
  maintainer: "Luis Araujo <>"

  resources:
    - "A touchscreen and a mouse (test with both)."

  pre-conditions:
    - "Ensure you have the development repository enabled in your sources.list and you have recently run apt-get update."
    - "Ensure Rootfs is remounted as read/write"
    - sudo mount -o remount,rw /

  install:
    deps:
      - webkit2gtk-testing

  run:
    steps:
      - GtkClutterLauncher -g 400x600
      - "Try scrolling by starting a drag diagonally"
      - "Try scrolling by starting a drag vertically"
      - "Try scrolling by starting a drag horizontally, ensure you can only pan the page horizontally"

  expected:
    - "When the scroll is started by a diagonal drag, you should be able to pan the page freely"
    - "When the scroll is started by a vertical drag, you should only be able to pan the page vertically,
       regardless of if you move your finger/mouse horizontally"
    - "When the scroll is started by a horizontal drag, you should only be able to pan the page horizontally,
       regardless of if you move your finger/mouse vertically"

  - video:

  notes:
    - "Both mouse and touchscreen need to PASS for this test case to be considered a PASS.
       If either does not pass, then the test case has failed."

Mandatory Data Fields

A test case file should contain at least the following data fields, for both automated and manual tests:

format: This is used to identify the format version.
name: Name of test case.
type: This could be used to define a series of test case types (functional, sanity,
      system, unit-test).
exec-type: Manual or automated test case.
image-type: This is the image type (target, minimal, ostree, development, SDK).
image-arch: The image architecture.
description: Brief description of the test case.
priority: low, medium, high, critical.
run: Steps to execute the test.
expected: The expected result after running the test.

The test case file format is very extensible and new fields can be added as necessary.
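The syntax and format checker proposed later in this document could build directly on these mandatory fields. Below is a minimal sketch, operating on a test case already parsed into a dict (a real tool would first load the YAML file, for example with PyYAML); the required-field list follows the table above, and the sample test case is abridged for illustration.

```python
# Mandatory fields taken from the list above.
MANDATORY_FIELDS = ("format", "name", "type", "exec-type",
                    "image-type", "description", "run", "expected")

# Return the mandatory fields absent from a parsed test case.
def missing_fields(test_case):
    return [field for field in MANDATORY_FIELDS if field not in test_case]

case = {
    "format": "Manual Test Definition 1.0",
    "name": "webkit2gtk-aligned-scroll",
    "type": "functional",
    "exec-type": "manual",
    "image-type": "any",
    "description": "Scrolling is pinned in a given direction.",
    "run": ["Try scrolling by starting a drag diagonally"],
}
problems = missing_fields(case)  # the sample above lacks "expected"
```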

Git Repository Structure

A single Git repository can be used to store all test case files, both for automated and manual tests.

Currently, the LAVA automated test definitions are located in the project tests git repository, which contains all the scripts and tools to run tests.

All test cases could be placed inside this git repository. This has the great advantage that both test instructions and test tools will be located in the same place.

The git repository will need to be cleaned up and reorganised so it can contain all the available test cases. A directory hierarchy can be defined to organise all test cases by domain and type.

For example, the path tests/networking/automated/ will contain all automated tests for the networking domain, the path tests/apparmor/manual/ will contain all manual tests for the apparmor domain, and so on.

Further tools and scripts can be developed to keep the repository hierarchy in a sane and standard state.

Updates and Changes

Since all test cases will be available from a git repository as plain YAML files, they can be edited like any other file in that repository.

At the lowest level, the tester or developer can use an editor to edit these files, though it is also possible to develop tools or a frontend to help with editing and at the same time enforce a certain standard on them.


Test Case Execution

The test case files will be in the same format for both automated and manual tests, though the way they are executed is different.

Automated test cases will continue to be executed automatically by LAVA; for manual test cases, a new application could be developed to assist the tester in going through the steps of the test definition files.

This application can be a tool or a web application that, besides guiding the tester through each step of the manual test definition file, will also collect the test results and convert them to JSON format, which can then be sent to the SQUAD backend using the HTTP API.

In this way, both types of test cases, manual and automated, will follow the same file format and live in the same git repository; they will be executed by different applications (LAVA for automated tests, a new application for manual tests), and both types of test results will use the same HTTP API to be pushed into the SQUAD data storage backend.


Viewing Test Cases

Though a Git repository offers many advantages for managing the test case files, it is not a friendly option for users wanting to access and read test cases.

One solution is to develop an application that renders these test case files from the git repository into HTML or another format and publishes them on a server where they can be conveniently accessed by users, testers and developers.
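The core of such a renderer could look like the sketch below: it turns a parsed test case (a dict, e.g. loaded from its YAML file) into an HTML fragment. The field names match the test case format proposed above; the sample values are abridged for illustration.

```python
import html

# Render a parsed test case dict into a small HTML fragment.
def render_test_case(case):
    parts = ["<h2>{}</h2>".format(html.escape(case["name"])),
             "<p>{}</p>".format(html.escape(case["description"])),
             "<ol>"]
    for step in case.get("run", []):
        parts.append("  <li>{}</li>".format(html.escape(str(step))))
    parts.append("</ol>")
    return "\n".join(parts)

page = render_test_case({
    "name": "librest",
    "description": "Run the unit tests against the running system.",
    "run": ["sudo mount -o remount,rw /"],
})
```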

In the same way, other tools to collect statistics, or generate other kinds of information about test cases can be developed to interact with the git repository to fetch the required data.

Test Reports

A single application, or several, can be developed to generate different kinds of reports.

These applications will need to trigger a GET request to the SQUAD HTTP API to fetch the specific tests results (as explained in the Fetching Test Results section) and generate the report pages or documents using that data.

These applications can be developed as command line tools or web applications that can be executed periodically or as needed.
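A core step of such a report generator is summarising a fetched tests file into pass/fail counts per suite, as sketched below. The input shape matches the SQUAD tests-file examples shown earlier; the test names are invented for illustration.

```python
from collections import Counter

# Count pass/fail results per suite from a SQUAD tests dict.
def summarise(tests):
    totals = Counter()
    for name, result in tests.items():
        suite = name.rsplit("/", 1)[0] if "/" in name else "(top level)"
        totals[(suite, result)] += 1
    return totals

summary = summarise({
    "testsuite1/test1": "pass",
    "testsuite1/test2": "fail",
    "test1": "pass",
})
```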


Test Case Versioning

Since all test cases, both for manual and automated tests, will be available as YAML files in a Git repository, these files can be versioned and linked to the corresponding test runs.

Test case groups will be versioned using Git branches. For every image release, the test cases repository will be branched with the same version (for example, 18.03, 18.06, and so on). This will match the whole group of test cases to an image release.

A specific test case can also be identified by the HEAD commit of the repository from which it is executed. It should be relatively simple to retrieve the commit id from the git repository during test execution and add it to the metadata file sent to SQUAD along with the test results. In this way, it will be possible to locate the exact test case version used to execute the test.

For automated test cases, the commit version can be obtained from the LAVA metadata; for manual test cases, the new tool executing the manual tests should take care of retrieving the commit id. Once the commit id is available, it should be added to the JSON metadata file pushed along with the test results to SQUAD.
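The two halves of that step are sketched below: reading the HEAD commit of the checkout, and attaching it to the metadata dict. The metadata field names (test_repo_commit, test_repo_branch) are placeholders; the final names would be part of the metadata format definition.

```python
import subprocess

# HEAD commit of the test case repository (git rev-parse HEAD).
def current_commit(repo_path):
    output = subprocess.check_output(["git", "rev-parse", "HEAD"],
                                     cwd=repo_path)
    return output.decode().strip()

# Attach the test case version to the SQUAD metadata dict.
def tag_with_version(metadata, commit_id, branch):
    tagged = dict(metadata)
    tagged["test_repo_commit"] = commit_id
    tagged["test_repo_branch"] = branch
    return tagged

meta = tag_with_version({"source": "manual"}, "0123abc", "18.06")
```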

SQUAD Configuration

Some configuration is required to start using SQUAD in the project.

Groups, teams and projects need to be created and configured with the correct permissions for all users. Depending on the implementation, some of these values will need to be reconfigured every quarter (for example, if new projects are created for every release).

Authentication tokens need to be created for the users and tools required to submit test results using the HTTP API.


Workflow

This section describes the workflow for each of the components in the proposed solution.

Automated Test Results

  • Automated tests are started by the Jenkins job responsible for triggering tests.
  • The Jenkins job waits for automated tests results using a webhook plugin.
  • Once the test results are received in Jenkins, they are processed with the tool that converts the test data into the SQUAD format.
  • After the data is in the correct format, it is sent by Jenkins to SQUAD using the HTTP API.

Manual Test Results

  • The tester manually executes the application to run manual tests.
  • This application reads the instructions from the manual test definition files in the git repository and guides the tester through the different test steps.
  • Once the test is completed, the tester enters the results into the application.
  • A JSON file is generated with these results in the format recognised by SQUAD.
  • The same application, or a separate one, can be used by the tester to send the test results (the JSON file) to SQUAD using the HTTP API.

Test Cases

  • Test case files can be edited using any text editor in the Git repository.
  • A Jenkins job could be used to periodically generate HTML or PDF pages from the test case files and make them available from a website for easy and convenient access by users and testers.
  • Test cases will be automatically versioned whenever a new branch is created in the git repository, which is done for every release.

Test Reports

  • Reports will be generated either periodically or manually by using the new reporting tools.
  • The SQUAD frontend can be used by all users to easily check test results and generate simple charts showing the trend for test results.


Requirements

This gives a general list of the requirements needed to implement the proposed solution:

  • The test case file format needs to be defined.
  • A directory hierarchy needs to be defined for the tests Git repository to contain the test case files.
  • Develop tools to help work with the test case files (for example, a syntax and format checker, a repository sanity checker).
  • Develop a tool to convert test data from the LAVA format into the SQUAD format.
  • Develop a tool to push test results from Jenkins into SQUAD using the HTTP API.
  • Develop an application to guide execution of manual test cases.
  • Develop an application to push manual test results into SQUAD using the HTTP API (this can be part of the application that guides manual test case execution).
  • Develop a tool or web application to generate the weekly test report.

Deployment Impact

All the additional components proposed in this document (the SQUAD backend, new tools, a web application) are not resource intensive and do not place any new or special requirements on the hosting infrastructure.

The instructions for the deployment of all the new tools and services will be made available with their respective implementations.

Following is a general overview of some important deployment considerations:

  • SQUAD will be deployed using a Docker image. SQUAD is a Django application, and using a Docker image makes the process of setting up an instance very straightforward with no need for special resources, packaging all the required software in a container that can be conveniently deployed by other projects.

    The recommended setup is to use a Docker image for the SQUAD backend and another one for its PostgreSQL database.

  • The application proposed to execute manual tests and collect their results will serve mainly as an extension to the SQUAD backend; therefore, from an infrastructure point of view, its requirements will be in line with the SQUAD deployment.

  • The other new tools proposed in this document will serve as the components that integrate the workflow of the new infrastructure, so they won't require special efforts or resources beyond the infrastructure setup.


Limitations

This document only describes the test data storage issues and proposes a solution for those issues, along with the minimal test data processing required to implement the reporting and visualisation mechanisms on top of it. It does not cover any API in detail and only gives a general overview of the tools required to implement the proposed solution.


Links

  • Apertis Tests

  • Weekly Tests Reports Template Page


  • ResultsDB

  • CouchDB
