Background

Testing is a fundamental part of the project, but it is of limited use unless it is accompanied by an accurate and convenient model for reporting the results of that testing.

Receiving timely notifications about critical issues, easily checking test results, and analyzing test trends across different image versions are examples of the test reporting that helps keep a project in a good state throughout its different phases of development.

The goal of this document is to define a model for timely and accurate reporting of test results and issues in the project. This model should be adapted to the project needs and requirements, along with support for convenient visualization of the test data and reports.

The solution proposed in this document should fit the mechanisms available to process the test data in the project.

Current Issues

  • Test reports are created manually and stored in the wiki.
  • There is no convenient way to analyze test data and check test logs.
  • There is no proper notification system for critical issues.
  • There is no mechanism to generate statistics from the test data.
  • There is no way to visualize test data trends.

Solution

A system or mechanism with a well defined workflow must be implemented that fulfills the project requirements for test reporting and for visualization of all the test data.

The solution will mainly involve designing and implementing a web application dashboard for visualization of test results and test cases, and a notification mechanism for test issues.

Test Cases

Test cases will be available from the Git repository in YAML format as explained in the document about test data storage and processing.

The GitLab web UI will be used to read test cases rendered in HTML format.

A link to the test case page in GitLab will be added to the test result metadata to make it easy to find the exact test case instructions that were used to execute the tests. This link will be shown in the web application dashboard and the SQUAD UI for convenient access to the test case for each test result.

As originally proposed in the test data storage document, the test case file will be the canonical specification for the test instructions, and it will be executed both by the automated tests and during manual test execution.

Test Reports

Test results will be available from the SQUAD backend in JSON format as explained in the document about test data storage and processing.

The proposal for reporting test results involves two solutions, a web application dashboard and a notification mechanism. Both of them will use the SQUAD API to access the test data.

Web Application Dashboard

A web application dashboard must be developed to view test results and generate reports from them. This dashboard will serve as the central place for test data visualization and report generation for the whole project.

The web application dashboard will run as an HTTP web service and can be accessed using a web browser. Details about the specific framework and platform will be defined during implementation.

At a minimum, this application should allow the following:

  • Filtering and viewing test results by priority, test category, image type, architecture and test type (manual or automated).
  • Linking test results to the specific test cases.
  • Generating graphics to analyze test data trends.

The web application won't process test results in any way, nor manipulate or change the test data in the storage backend. Its only purpose is to generate reports and visual statistics from the test data, so it only needs a one way communication channel with the data storage backend in order to fetch the test data.

The application may also be progressively extended to export data in different formats such as spreadsheets and PDFs.

This dashboard will serve as a complement to the SQUAD web UI, which is more suitable for developers.

Components

The web application will consist of at least the following functional modules:

  • Results Fetcher
  • Filter and Search Engine
  • Results Renderer
  • Test Report Generator
  • Graphics Generator
  • Application API (Optional)
  • Format Exporters (Optional)

Each of these components or modules can be an independent tool or be part of a single web development framework. Proper research on the most suitable model and framework should be done during implementation.

Apart from these components, new ones might be added during implementation to support the above components and any other functionality required by the web application dashboard (for example, HTML and data rendering, allowing privileged operations if needed, and so on).

This section will give an overview of each of the above listed components.

Results Fetcher

This component will take care of fetching the test data from the storage backend.

As explained in the test data storage document, the data storage backend is SQUAD, so this component can use the SQUAD API to fetch the required test results.
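
As a rough illustration, the fetcher could be a thin wrapper around the SQUAD REST API. The following is a minimal sketch in Python, assuming a SQUAD instance at squad.example.com and an API token; the filter names, pagination layout and result fields would need to be verified against the SQUAD API documentation during implementation:

```python
import requests

SQUAD_API = "https://squad.example.com/api"       # assumed SQUAD instance URL
HEADERS = {"Authorization": "Token <api-token>"}  # assumed API token


def fetch_tests(build_id):
    """Fetch all test results belonging to one build from the SQUAD backend.

    The filter name ("test_run__build") and the fields of each test entry
    are assumptions and must be checked against the actual SQUAD API.
    """
    tests = []
    url = f"{SQUAD_API}/tests/"
    params = {"test_run__build": build_id}
    while url:
        page = requests.get(url, params=params, headers=HEADERS).json()
        tests.extend(page["results"])
        # Paginated responses carry a "next" URL that already includes the query.
        url, params = page.get("next"), None
    return tests
```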

Filter and Search Engine

This component covers all the filtering and searching capabilities for test data. It can be implemented either by using existing web application modules or by extending them to suit the dashboard needs.

This engine will only search and filter test results data and won't manipulate that data in any other way.
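
As a sketch of the idea, filtering can be a read-only operation over the result dictionaries returned by the results fetcher; the metadata field names used below (priority, image_type, architecture, test_type) are assumptions about how results are tagged:

```python
def filter_results(results, **criteria):
    """Filter test result dicts by arbitrary metadata fields."""
    def matches(result):
        metadata = result.get("metadata", {})
        return all(metadata.get(key) == value for key, value in criteria.items())

    return [result for result in results if matches(result)]


def search_results(results, text):
    """Simple free-text search over test names."""
    return [r for r in results if text.lower() in r.get("name", "").lower()]


# Example: critical manual tests that failed on arm64 images
# failures = [r for r in filter_results(all_results, priority="critical",
#                                       test_type="manual", architecture="arm64")
#             if r.get("status") == "fail"]
```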

Results Renderer

This component will take care of showing the test results visualization. It is basically the HTML renderer for the test data, with all the elements required for the web page design.

Test Report Generator

This includes all the functions to generate all kinds of test reports. It can also be split into several modules (for example, one for each type of report), and it should ideally offer a command line API that can be used to trigger and fetch test reports remotely.
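
A minimal sketch of such a generator with a small command line entry point; the result fields and file layout are placeholders chosen for illustration:

```python
import argparse
import json


def generate_report(results, image_version):
    """Summarize a list of test result dicts into a report structure."""
    statuses = [r.get("status") for r in results]
    return {
        "image_version": image_version,
        "total": len(results),
        "passed": statuses.count("pass"),
        "failed": statuses.count("fail"),
        "skipped": statuses.count("skip"),
        "results": results,
    }


def main():
    # Command line entry point so reports can be triggered remotely,
    # e.g. from a cron job or over SSH.
    parser = argparse.ArgumentParser(description="Generate a test report")
    parser.add_argument("--image-version", required=True)
    parser.add_argument("--results", required=True,
                        help="JSON file with test results fetched from SQUAD")
    parser.add_argument("--output", default="report.json")
    args = parser.parse_args()

    with open(args.results) as f:
        results = json.load(f)
    with open(args.output, "w") as f:
        json.dump(generate_report(results, args.image_version), f, indent=2)


if __name__ == "__main__":
    main()
```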

Graphics Generator

It comprises all the web application modules to generate graphics, charts and any other visual statistics, including the history view. Like the other components, it can be composed of several smaller components.
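
For example, a trend chart across image versions could be produced with a plotting library such as matplotlib; this sketch assumes the history data has already been aggregated into (version, passed, failed) tuples:

```python
import matplotlib.pyplot as plt


def plot_trend(history, output="trend.png"):
    """Plot the pass/fail trend across image versions.

    `history` is assumed to be a list of (image_version, passed, failed)
    tuples, already aggregated from the stored test results.
    """
    versions = [version for version, _, _ in history]
    passed_counts = [passed for _, passed, _ in history]
    failed_counts = [failed for _, _, failed in history]

    plt.figure(figsize=(8, 4))
    plt.plot(versions, passed_counts, marker="o", label="passed")
    plt.plot(versions, failed_counts, marker="x", label="failed")
    plt.xlabel("Image version")
    plt.ylabel("Number of tests")
    plt.title("Test result trend across image versions")
    plt.legend()
    plt.tight_layout()
    plt.savefig(output)


# Example with made-up numbers:
plot_trend([("20190401.0", 120, 5), ("20190408.0", 122, 3), ("20190415.0", 125, 1)])
```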

Application API

Optionally, the web application can also expose an API that can be used to trigger certain actions remotely; the generation and fetching of test reports or the export of test data are some of the possible features for this API.
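
A minimal sketch of what such an API could look like, using Flask purely as an illustration (the actual framework is to be decided during implementation); the route, helper and field names are hypothetical:

```python
from flask import Flask, jsonify

app = Flask(__name__)


def fetch_results_for(image_version):
    """Placeholder for the results fetcher component sketched above."""
    return []  # a real implementation would query the SQUAD backend


@app.route("/api/reports/<image_version>")
def report(image_version):
    """Generate and return a report for the given image version on demand."""
    results = fetch_results_for(image_version)
    return jsonify({
        "image_version": image_version,
        "total": len(results),
        "failed": sum(1 for r in results if r.get("status") == "fail"),
        "results": results,
    })


# A report could then be generated and fetched remotely, for example:
#   curl https://dashboard.example.com/api/reports/20190401.0
```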

Format Exporters

This should initially be considered an optional module, which will include support for exporting the test data into different formats, for example, PDF and spreadsheets.

It can also offer a convenient API for triggering this kind of format generation remotely using command line tools.

History View

The web application should offer a compact historical overview of all the test results over specific periods of time, so that important trends in the results can be spotted at a glance.

This history view will also be able to show results from arbitrarily chosen dates, making it possible to generate views that compare test data between different image cycles (daily, weekly or release images).

This view should be a graphical visualization that can be generated periodically or at any time as needed from the web application dashboard.

In a single compact view, at least the following information should be available (see the aggregation sketch after this list):

  • Names of all tests executed per image.
  • List of image versions.
  • Platforms and image types.
  • Number of failed, passed and total tests.
  • A graphic showing the trend of test results across the different image versions.
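
A minimal sketch of how such a compact view could be aggregated from the raw test results; the status and metadata field names are assumptions about how results are tagged in the backend:

```python
from collections import defaultdict


def build_history(results):
    """Aggregate raw test results into a per-image-version history entry.

    Each result is assumed to be a dict with a "status" field and a
    "metadata" dict carrying "image_version", "image_type" and "platform".
    """
    history = defaultdict(lambda: {
        "tests": set(), "platforms": set(), "image_types": set(),
        "passed": 0, "failed": 0, "total": 0,
    })
    for result in results:
        metadata = result.get("metadata", {})
        entry = history[metadata.get("image_version", "unknown")]
        entry["tests"].add(result.get("name"))
        entry["platforms"].add(metadata.get("platform"))
        entry["image_types"].add(metadata.get("image_type"))
        entry["total"] += 1
        if result.get("status") == "pass":
            entry["passed"] += 1
        elif result.get("status") == "fail":
            entry["failed"] += 1
    return history
```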

Graphical Mockup

The following is an example of how the history view might look for test results:

Weekly Test Report

This report should be generated using the web application dashboard described in the previous section.

The dashboard should allow this report to be generated weekly or at any time as needed, and it should offer both a web UI and a command line interface to generate the report.

The report should contain at least the following data:

  • List of images used to run the tests.
  • List of tests executed, ordered by priority, image type, architecture and category.
  • Test results in the form: PASS, FAIL, SKIP.
  • Image version.
  • Date of test execution.

The report could also include the historical view as explained in the History View section, and it should allow exporting to all formats supported by the web application dashboard.

Application Layout and Behaviour

The web application dashboard will only show test results and generate test reports.

Once launched, the web application will fetch the test data directly from SQUAD to generate all the relevant web pages, graphics and test reports. The web application won't store any test data, and all visual information will be generated at runtime.

For the main layout, the initial page of the application will show the history view for the last 5~10 image versions, as this gives a quick overview of the current test status of the latest images at first glance.

Along with the history view on the main page, a list of links to the latest test reports will also be shown. These links can point to previously saved searches, or they can simply be convenient links to generate test reports for past image versions.

The page should also show the relevant options for filtering and searching test results as explained in the web application dashboard section.

In summary, the minimal required layout of the main page for the web application dashboard comprises the history view, a list of recent test reports, and the searching and filtering options.

Notifications

A notification system must be set up, at least for critical and high priority test failures.

This system could send emails to a mailing list and messages to the Mattermost chat system for greater and more timely visibility.

This system will work as proposed in the closing CI loop document: it will be a Jenkins phase that receives the previously analyzed automated test results and determines the critical test failures in order to send the notifications.

For manual test results, the Jenkins phase could be triggered manually or periodically once all the test results are stored in the SQUAD backend.

Format

The notification message should at least contain the following information:

  • Test name.
  • Test result (FAIL).
  • Test priority.
  • Image type and architecture.
  • Image version.
  • Link to the logs (if any).
  • Link to attachments (if any).
  • Date and time of test execution.
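
As an illustration, the notification step could format these fields into a message and deliver it through a Mattermost incoming webhook and a mailing list email. This is only a sketch; the webhook URL, addresses and test result field names are placeholders:

```python
import smtplib
from email.message import EmailMessage

import requests

MATTERMOST_WEBHOOK = "https://mattermost.example.com/hooks/<webhook-id>"  # assumed incoming webhook
MAILING_LIST = "qa-reports@example.com"                                   # assumed mailing list


def notify_failure(test):
    """Send a notification for one critical or high priority test failure."""
    message = (
        f"Test failure: {test['name']} (priority: {test['priority']})\n"
        f"Result: FAIL\n"
        f"Image: {test['image_type']} / {test['architecture']}, "
        f"version {test['image_version']}\n"
        f"Executed: {test['date']}\n"
        f"Logs: {test.get('log_url', 'n/a')}\n"
        f"Attachments: {test.get('attachments_url', 'n/a')}"
    )

    # Mattermost incoming webhooks accept a JSON payload with a "text" field.
    requests.post(MATTERMOST_WEBHOOK, json={"text": message})

    # Plain email to the mailing list; an SMTP relay is assumed to run locally.
    email = EmailMessage()
    email["Subject"] = f"[QA] Critical test failure: {test['name']}"
    email["From"] = "qa-notifications@example.com"
    email["To"] = MAILING_LIST
    email.set_content(message)
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(email)
```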

Infrastructure

The infrastructure for the web application dashboard and the notification system will be defined during implementation, but it will be aligned with the requirements proposed by the document for closing the CI loop, so it won't impose any special or resource-intensive requirements beyond the current CI loop proposal.

Test Results Submission

For automated tests, the test case will be executed by LAVA and the results will be submitted to the SQUAD backend as explained in the closing CI loop document.

For manual tests, a new tool is required to collect the test results and submit them to SQUAD. This can be either a command line tool or a web application that could render the test case pages for convenient visualization during test execution, or link to the test case GitLab pages for easy reference.

The main function of this application will be to collect the manual test results, optionally guide the tester through the test case steps, generate a JSON file with the test result data, and finally send these results to the SQUAD backend.
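
A minimal sketch of the submission step using the SQUAD submission endpoint; the instance URL, token, group/project/environment names and the exact authentication header are assumptions and should be checked against the SQUAD documentation during implementation:

```python
import json

import requests

SQUAD_URL = "https://squad.example.com"  # assumed SQUAD instance
TOKEN = "<squad-api-token>"              # assumed submission token


def submit_manual_results(group, project, build, environment, results, metadata):
    """Submit manual test results to SQUAD's submission endpoint.

    `results` maps test names to "pass"/"fail"/"skip"; `metadata` can carry
    extra fields such as the link to the test case page in GitLab.
    """
    response = requests.post(
        f"{SQUAD_URL}/api/submit/{group}/{project}/{build}/{environment}",
        headers={"Authorization": f"Token {TOKEN}"},
        data={
            "tests": json.dumps(results),
            "metadata": json.dumps(metadata),
        },
    )
    response.raise_for_status()


if __name__ == "__main__":
    # Hypothetical example values.
    submit_manual_results(
        "apertis", "manual-tests", "20190401.0", "amd64-minimal",
        {"connectivity/wifi": "pass", "multimedia/playback": "fail"},
        {"test_case_url": "https://gitlab.example.com/tests/wifi.yaml"},
    )
```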

SQUAD

SQUAD offers a web UI frontend that allows checking test results and metadata, including their attachments and logs.

This web frontend is very basic: it only shows the tests organized by teams and groups, and lists the test results for each test stored in the backend. Although basic, it can be useful for quickly checking results and making sure the data is properly stored in SQUAD, but it is intended mainly for developers, and occasionally testers, as it is not a complete solution from a project management perspective.

For a more complete visualization of the test data, the new web application dashboard should be used.

Concept Limitations

The platform, framework and infrastructure for the web application are not covered by this document and need to be defined during implementation.

Current Implementation

The QA Test Report is an application to save and report all the test results for the Apertis images.

It supports both types of tests: automated test results executed by LAVA and manual test results submitted by a tester. It only provides static reports, with no analytical tools yet.

Workflow

The deployment consists of two Docker images, one containing the main report application and the other running the PostgreSQL database. The general workflow is as follows:

Automated Tests

  1. The QA Report Application is executed and opens HTTP interfaces to receive HTTP requests and serve HTML pages on specific routes.

  2. Jenkins builds the images and they are pushed to the image server.

  3. Jenkins triggers the LAVA jobs to execute the automated tests on the published images.

  4. Jenkins, when triggering the LAVA jobs, also registers these jobs with the QA Report Application using its specific HTTP interface.

  5. The QA Report Application adds these jobs to its internal queue and waits for the LAVA test job results to be submitted via HTTP.

  6. Once LAVA finishes executing the test jobs, it triggers the configured HTTP callback, sending all the test data to the QA Report Application.

  7. Test data for the respective job is saved into the database.
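
To illustrate steps 4 to 7, the HTTP interfaces could look roughly like the following Flask sketch; the route names and payload fields are hypothetical, and the in-memory queue stands in for the PostgreSQL database used by the real application:

```python
from flask import Flask, request

app = Flask(__name__)

# The real application keeps its queue and results in PostgreSQL; an
# in-memory dict is used here only to keep the sketch self-contained.
registered_jobs = {}


@app.route("/api/register-job", methods=["POST"])  # hypothetical route
def register_job():
    """Called by Jenkins when it triggers a LAVA job (step 4)."""
    job = request.get_json()
    registered_jobs[job["lava_job_id"]] = {"status": "pending", "image": job.get("image")}
    return {"registered": True}


@app.route("/api/lava-callback", methods=["POST"])  # hypothetical route
def lava_callback():
    """Receives the LAVA HTTP callback with the test data (steps 6 and 7)."""
    payload = request.get_json()
    job_id = payload.get("id")
    if job_id in registered_jobs:
        registered_jobs[job_id].update(status="finished",
                                       results=payload.get("results"))
        # Step 7: the real application saves the results to the database here.
    return {"saved": job_id in registered_jobs}
```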

Manual Tests

  1. The user authenticates with GitLab credentials using the Login button on the main page.

  2. Once logged in, the user can click the Submit Manual Test Report button that is now available on the main page.

  3. The tester needs to enter the following information on the Select Image Report page:

    • Release: Image release (19.03, v2020dev0 ..)
    • Version: The daily build identifier (20190705.0, 20190510.1 ..)
    • Select Deployment Type (APT, OSTree)
    • Select Image Type
  4. A new page is shown listing only the valid test cases for the selected image type.

  5. The user selects PASS, FAIL or NOT TESTED for each test case.

  6. An optional Notes text area is available beside each test case for the user to add any extra information (e.g. task links, a brief comment about any issue with the test, etc.).

  7. Once results have been selected for all test cases, the user should submit this data using the Submit All Results button at the top of the page.

  8. The application will then save the results into the database and redirect the user to a page with the following two options:

    • Submit Manual Test Report: To submit test results for a new image type.
    • Go Back to Main Page: To check the recently submitted test results.
  9. If the user wants to update a report, they just repeat the above steps, selecting the specific image type of the existing report and then updating the results for the necessary test cases.

Reports

  1. Reports for the stored test results (both manual and automated) are generated on the fly by the QA application at URLs such as: https://lavaphabbridge.apertis.org/report/v2019dev0/20190401.0

The results of the search are