At Global App Testing, we improved our productivity by improving our continuous integration pipelines. Here’s how.
At Global App Testing we rely heavily on our continuous integration (CI) pipelines to run all sorts of checks, build applications, and deploy them to live environments. We run hundreds of pipelines each day, with thousands of jobs in them.
For that reason, developers need to trust that the pipelines are correct, so that if software fails a test, they can act on the failure. Additionally, the faster the pipeline returns any feedback, the faster developers can get back to their work, address any issues reported, and move on. So pipelines should be fast and reliable to ensure fast and reliable development in general.
At the beginning of the year, we heard complaints from developers about our CI process. Some tests were failing randomly, unrelated to the changes the developers had made. Usually, retrying the job made the test pass, but each failure still required the developer to investigate it and manually retry, which was time-consuming and frustrating.
Developers also complained that slow pipelines made the deployment queue grow longer, making it harder to deploy changes. Combined, these problems led to situations where the deployment queue became stuck: developers had to repeatedly retry tests, wait to see whether they passed, and then unstick everyone behind them.
We decided to look at the data and started experimenting with some improvements that could reduce manual retries and speed up the whole process.
Note: We use GitLab as our CI tool, so some problems and solutions might be specific to this platform.
We run our own GitLab runners on a Kubernetes cluster and, to save further on costs, we use AWS spot instances as much as possible. Because of that, a job can fail for many reasons: the pod can be terminated, the Docker image may fail to pull, or the Postgres service might not start before the tests. Most of these reasons have nothing to do with what the developer is doing, and in most cases a manual retry fixes the issue. So we could safely assume that any failure not caused by the CI job script itself can be retried automatically, and we added a default config for each job to retry when a known issue causes it to fail.
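A minimal sketch of what such a default retry rule can look like in .gitlab-ci.yml; the exact set of failure reasons we retry on and the retry count shown here are illustrative:

```yaml
# Retry jobs automatically, but only for failures outside the job script's control.
default:
  retry:
    max: 2
    when:
      - runner_system_failure      # e.g. a spot instance or pod was terminated
      - stuck_or_timeout_failure
      - api_failure
      - scheduler_failure
      - unknown_failure
```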
The end result is that developers are no longer bothered by issues outside their control, which gets us closer to returning only meaningful results, and we can gather statistics on the failures we retry.
The next issue we wanted to address was flaky tests. According to this survey, corporations like Microsoft and Google estimate that 26-41% of failed builds were due to flaky tests. This problem is prevalent in our industry, so of course flaky tests are also present in our own code base.
Our aim in dealing with them was twofold. Firstly, we wanted to reduce the impact of the flaky tests, so that developers can focus on their changes and reliably deploy to the live environments. Secondly, we wanted to reduce the number of new flaky tests. We probably won’t eliminate them entirely, but the fewer we have, the better.
To reduce the impact of the flaky tests, we decided to let the pipeline pass when the test usually passes. To achieve that, we developed a command, retry-failed-tests, that parses the output of the test execution (in our case from the Jest and RSpec frameworks), scans for failing tests, and reruns them multiple times. If the test usually passes, at a threshold of under 30% failures, we allow the pipeline job to pass. Additionally, any test that fails randomly is reported to Sentry, where we gather the events on failing tests and can decide to investigate any particular one. It's also important that the retry-failed-tests command, not the main test run, decides whether a job should fail; that's why we need to chain the two in a single command using the || operator.
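In the job definition the chaining looks roughly like the sketch below; retry-failed-tests is our in-house command, and the flags and report path shown here are assumptions:

```yaml
rspec:
  script:
    # If RSpec fails, retry-failed-tests parses the JSON results, reruns the failed
    # examples several times, reports flaky ones to Sentry, and exits successfully
    # only if the failure rate stays under the threshold. (Paths and flags are illustrative.)
    - bundle exec rspec --format progress --format json --out tmp/rspec.json || retry-failed-tests tmp/rspec.json
```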
To reduce the number of new flaky tests, we decided to do more of the same. Any tests that are added or changed are retried multiple times in a separate job on the merge request pipeline. If we detect that a test fails randomly across the reruns, we fail this pipeline, which blocks the change from being merged to the main branch and pushes the developer to investigate what they introduced. We've added a separate command, retry-changed-tests, that takes the changed test files as input from the cmd-mr-changes command.
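A sketch of the corresponding merge request job, with RSpec shown; the exact interfaces of cmd-mr-changes and retry-changed-tests are simplified here:

```yaml
check-changed-tests:
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
  script:
    # List the test files touched by this MR and rerun them many times;
    # retry-changed-tests exits non-zero if any of them behaves flakily.
    - cmd-mr-changes | grep '_spec\.rb$' > changed_specs.txt
    - retry-changed-tests changed_specs.txt
```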
To speed up the CI pipeline, the obvious thing is to parallelize it! It’s quite easy to split the workload (e.g. the tests) between multiple jobs and let each perform its part. Initially, we split the test files evenly between the jobs in a pipeline with a deterministic script which, given the list of test files and the ordinal number of the node, returns the files to be run on that node. The script looked roughly like this:
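```bash
#!/usr/bin/env bash
# Sketch only: the real script differed in details. CI_NODE_INDEX (1-based) and
# CI_NODE_TOTAL are provided by GitLab's `parallel:` keyword; spec paths are illustrative.
# Print every CI_NODE_TOTAL-th spec file, offset by this node's index.
find spec -name '*_spec.rb' | sort \
  | awk -v total="$CI_NODE_TOTAL" -v idx="$CI_NODE_INDEX" 'NR % total == idx - 1'
```

Each parallel job then runs only its own slice, e.g. with something like `bundle exec rspec $(scripts/split-tests.sh)` (the script path is assumed).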
However, this naive approach is not the most efficient one: tests come in different shapes and sizes, and splitting the workload evenly by the number of test files results in jobs of various lengths. In our case, the difference between the fastest and slowest jobs in that stage was 5 minutes! And as the whole stage is as slow as its slowest part, we wanted to even it out. Fortunately, we found a solution: a gem named knapsack. It generates a report of the time spent on each test file in each parallel job and then uses this data to split the test files more evenly between the jobs.
As setup, we needed to fetch the report with the test timings generated on the master branch:
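```yaml
# Sketch only: the job name, report path, and the use of the jobs-artifacts API are
# assumptions about how such a fetch can be done.
before_script:
  - 'curl --fail --location --header "JOB-TOKEN: $CI_JOB_TOKEN" --output knapsack_rspec_report.json "$CI_API_V4_URL/projects/$CI_PROJECT_ID/jobs/artifacts/master/raw/knapsack_rspec_report.json?job=merge-knapsack-reports"'
```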
Then we needed to adjust the execution of the tests from calling RSpec directly to using the Knapsack rake task:
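```yaml
# Sketch only: the node count, env vars, and artifact handling are illustrative.
# Knapsack reads CI_NODE_INDEX / CI_NODE_TOTAL (set by `parallel:`) together with the
# fetched report to decide which files this node should run.
rspec:
  parallel: 6
  script:
    - export KNAPSACK_GENERATE_REPORT=true                                  # record this node's timings
    - bundle exec rake knapsack:rspec
    - mv knapsack_rspec_report.json "knapsack_node_${CI_NODE_INDEX}.json"   # keep a per-node copy for the merge job
  artifacts:
    paths:
      - knapsack_node_*.json
```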
And at the end, we merge the reports generated by each job and store the result for future runs. Fortunately, GitLab has a script to merge the reports, so we utilized it in our CI.
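In sketch form, the merging job can look like the following; the location of the borrowed merge script, the stage name, and the report paths are assumptions:

```yaml
merge-knapsack-reports:
  stage: post-test                 # runs after all parallel rspec jobs
  script:
    # Merge the per-node timing files into a single report and publish it so the
    # next pipelines (and the curl step above) can fetch it.
    - scripts/merge-reports knapsack_rspec_report.json knapsack_node_*.json
  artifacts:
    paths:
      - knapsack_rspec_report.json
```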
This brought the difference between the slowest and fastest test jobs down to around 2 minutes.
At the first stage of the pipeline, we install and cache the dependencies needed for the application. The cache key is calculated from the lock file appropriate to the application type; for example, Gemfile.lock is used for Ruby on Rails applications. These jobs take longer when the dependencies need to be updated: when the lock file changes, the whole cache is invalidated and all dependencies have to be installed from scratch. Additionally, in GitLab the cache is always uploaded by default, even when nothing in the cached directory has changed.
To improve the speed of the job, we could address two obvious issues: installing all the dependencies from scratch, and always uploading the cache. The first change we made was to add a fallback cache: a cache, updated on the application's main branch, which contains all the dependencies.
Now, when the lock-file-based cache is invalidated by a file change, the fallback cache is downloaded and the dependencies are installed incrementally, based only on the change to the lock file. With one caveat: in GitLab, all cache keys have an automatic increment that changes when someone manually clears the cache. You can read about it in this issue. When that happens, we need to update our fallback cache config to match.
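A sketch of the resulting cache configuration; key names and paths are illustrative, and the numeric suffix on the fallback key is the manual part mentioned above:

```yaml
variables:
  # Used when the lock-file key below misses; the "-3" suffix mirrors GitLab's internal
  # cache-clear counter and has to be bumped by hand after someone clears the cache.
  CACHE_FALLBACK_KEY: gems-main-3

install-gems:
  cache:
    key:
      files:
        - Gemfile.lock        # exact match while the lock file is unchanged
    paths:
      - vendor/bundle
  script:
    - bundle config set --local path vendor/bundle
    - bundle install          # incremental when only the fallback cache was restored
```

This assumes a job on the main branch that installs everything and pushes its cache under the gems-main key.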
The second thing taking unnecessary time was cache uploads when nothing had changed. There's an issue open in GitLab to change this behavior, but it has not yet been resolved. Fortunately, there are helpful workarounds, one of which we used. There is an option to configure when the upload should happen, i.e. when the job fails or passes. It isn't the greatest option, but it's all we have. We redirected the bundler output to a file and checked whether it contained any information about installing dependencies; if it did, we failed the job with a specific exit code that the job is allowed to fail with. Then we configured the cache to upload only when the job fails, which saved us about 30 seconds on each pipeline run.
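A sketch of that workaround; the exit code, log file name, and the string we grep for are illustrative:

```yaml
install-gems:
  script:
    - set -o pipefail                                # don't let `tee` hide a real bundler failure
    - bundle install | tee bundle_output.log
    # If bundler installed anything, the cached gems changed: exit with a dedicated
    # code so the job counts as an allowed failure and the cache gets uploaded.
    - if grep -q "Installing" bundle_output.log; then exit 99; fi
  allow_failure:
    exit_codes: [99]
  cache:
    # key and fallback as in the previous sketch
    paths:
      - vendor/bundle
    when: on_failure                                 # upload the cache only when gems changed
```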
We use our CI pipeline to build Docker images for our applications, which are later deployed to our cloud infrastructure. Docker also uses a cache, based on its layers. One downside of building Docker images in the CI pipeline is that jobs can be assigned to any one of many runners. This means that when an image is built on one runner, its cached layers stay on that runner; when another runner gets the next job to build an image, it can't use the cached layers generated by the previous job.
Well... what if it could? We decided to set up a long-lived Docker build service that takes requests from CI pipelines, and to point the jobs that build Docker images at it via the DOCKER_HOST environment variable. This way Docker can utilize its cache most of the time and save time on building the images.
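In the image-building jobs this amounts to little more than overriding DOCKER_HOST; the hostname below is a placeholder for our internal build service, and TLS setup is omitted:

```yaml
build-image:
  variables:
    DOCKER_HOST: tcp://docker-build-service.internal:2375   # shared daemon means shared layer cache
  script:
    - docker build --tag "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
```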
We fixed Docker layer caching, but it still wasn't ideal. We were already caching the app's dependencies in the Dockerfile, but it wasn't enough: changing a single dependency would invalidate the cache for the whole "install dependencies" command. To optimize that, we took a similar approach as with dependency installation in the CI jobs and added a fallback cache. Once a week, we cache the stable lock file from master.
Then we copy it into the image first and install those stable dependencies:
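```dockerfile
# Sketch only: file names and the bundler invocation are illustrative.
# Gemfile.lock.stable is the weekly snapshot taken from master.
COPY Gemfile Gemfile.lock.stable ./
RUN cp Gemfile.lock.stable Gemfile.lock && bundle install

# The current lock file usually differs only slightly from the stable one,
# so this second install only adds or updates the changed gems.
COPY Gemfile.lock ./
RUN bundle install
```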
As the stable lock file doesn't change often, its layers stay cached, and when the new, updated lock file is copied afterwards, only new and updated dependencies need to be installed.
At the end of the first iteration of improvements, we reduced the median pipeline duration from 15 minutes to 10. We also decreased the p75 and p90 times. With the number of pipelines and jobs that we run, this was a significant improvement, and developers' moods improved as well. The pipeline failure rate went down from 16% to 8%, and only 0.4% of tests were reported as flaky, giving us less noisy pipelines and higher developer confidence.