How We Used Docker to Lower Test Run Times from 1 Hour to 10 Minutes

When a service grows in size and complexity, we add more tests in order to maintain test coverage. Having proper test coverage allows us to change existing features or add new ones while staying reasonably confident we haven’t broken anything.

This is especially important for “bidder”, the name of our real-time bidding service, where even a small unexpected downtime or bug can have major consequences. Bidder interacts with ad exchanges through HTTP requests to place bids on advertisement opportunities (webpages, mobile apps, etc.) for our advertisers. As bidder gained features and handled more bid opportunities (millions of bid requests per second), the number of test cases also increased.

In the beginning, we could easily run our tests on the same server we used for production bidders, but as we added more and more tests, the time required to run them swelled to over an hour. If an issue is discovered in production, waiting over an hour for the tests to run before deploying a patch is not ideal. Neither is waiting an hour just to see whether your latest commit broke anything. We needed a way to speed up the tests without significantly modifying our existing test framework.

One suggested option was to run multiple tests at a time. So, instead of executing one test after another, a batch of tests would be executed at the same time. However, we realized this solution wasn’t practical due to the nature of how we test bidder.

Each test case typically has the following steps: provide bidder with a specific set of data (campaign / strategy information), simulate a bid request coming from an exchange, and finally, verify that the response from bidder is what we expected.
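
As a rough illustration, a single test case could look something like the sketch below. The endpoint paths, field names, and values here are hypothetical placeholders, not our actual framework:

    import requests

    BIDDER_URL = "http://localhost:8080"  # hypothetical bidder endpoint for this sketch


    def test_video_campaign_bid():
        # 1. Provide bidder with a specific set of data (campaign / strategy information)
        setup_data = {"campaign_id": 42, "strategy": "video", "max_bid": 2.50}
        requests.post(BIDDER_URL + "/setup", json=setup_data).raise_for_status()

        # 2. Simulate a bid request coming from an exchange
        bid_request = {"exchange": "example-exchange", "placement": "video", "floor": 1.00}
        response = requests.post(BIDDER_URL + "/bid", json=bid_request)

        # 3. Verify that the response from bidder is what we expected
        assert response.status_code == 200
        bid = response.json()
        assert bid["campaign_id"] == 42
        assert bid["bid_price"] <= 2.50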

MM Docker Test Framework

We believe each test case should be isolated from the others: test A’s setup data and bid request should not affect test B in any way. If we ran tests in parallel, test A might send its setup data to bidder, but before test A can send its bid request, test B sends its own setup data, overwriting test A’s. When test A then sends its bid request, it gets the wrong response, because bidder is answering with test B’s data.

We could try to combine test A and B’s setup data, but combined data quickly becomes hard to manage, and reworking existing tests that way would take a lot of effort.

Since we couldn’t run multiple tests at once against a single bidder, we decided to run multiple bidders and split the tests among them.


We decided to use Docker to assist with our efforts. Docker is a tool that creates lightweight containers, isolating applications from each other. By running each bidder inside its own container, we were able to spin up multiple bidders on a single machine.

Each container also includes a copy of the test repository and a test receiver HTTP server.

This server listens for an incoming request containing the name of a test, finds that test in its repository, and runs it against the bidder in the same container. Once the test finishes, the server responds with the test results.
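
A minimal sketch of what such a test receiver could look like, assuming for illustration a pytest-style runner and a JSON request body (our actual server and test framework are internal, so treat the details as placeholders):

    from http.server import BaseHTTPRequestHandler, HTTPServer
    import json
    import subprocess


    class TestReceiver(BaseHTTPRequestHandler):
        def do_POST(self):
            # The request body carries the name of the test to run.
            length = int(self.headers.get("Content-Length", 0))
            test_name = json.loads(self.rfile.read(length))["test_name"]

            # Run that test from the container's copy of the test repository
            # against the bidder running in this same container.
            result = subprocess.run(
                ["pytest", "tests/", "-k", test_name],
                capture_output=True, text=True,
            )

            # Respond with the result so the caller can aggregate it.
            body = json.dumps({
                "test_name": test_name,
                "passed": result.returncode == 0,
                "output": result.stdout[-2000:],
            }).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)


    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 9000), TestReceiver).serve_forever()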

Now that we were able to create containers that house both a bidder and a test receiver server, we needed a way to distribute containers to different machines. For example, if we wanted to spin up 10 containers and only had two machines, ideally we’d like to have 5 containers running on each machine.

Luckily, Docker provides another tool with exactly these capabilities. Docker Swarm is a clustering tool for Docker, turning a group of hosts into a single, virtual Docker host. When we send the Swarm the image we want to launch, it automatically creates the container on an available node, which frees us from having to manage hosts manually. In our case, we provisioned four machines to the swarm.
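
Because classic Swarm exposes the same API as a single Docker daemon, launching containers is roughly a matter of pointing the Docker client at the Swarm manager and running the image. A hedged sketch, with a made-up manager address and image tag:

    import subprocess

    SWARM_MANAGER = "tcp://swarm-manager.internal:3376"        # hypothetical manager endpoint
    IMAGE = "registry.internal/bidder-test:release-candidate"  # hypothetical image tag


    def launch_containers(count):
        """Ask the Swarm to schedule `count` containers; it picks the nodes."""
        container_ids = []
        for _ in range(count):
            completed = subprocess.run(
                ["docker", "-H", SWARM_MANAGER, "run", "-d", "-P", IMAGE],
                capture_output=True, text=True, check=True,
            )
            container_ids.append(completed.stdout.strip())
        return container_ids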

With all these pieces ready, our workflow looks like this:

  • When a release branch is cut, we use Jenkins to build bidder.
  • After it’s built, we create a Docker image containing bidder, our test repository, and the test receiver server.
  • This image is then uploaded to our private Docker hub.
  • We then ask Docker Swarm to spin up 40 containers and expose a port so we can communicate with the test receiver.
  • The containers are created using the build image pulled from our private hub.
  • After that, a script on our Jenkins build machine collects the names of all of our tests. For each test name, it sends a request to the test receiver in one of the 40 containers, round-robin style (a sketch of this dispatcher follows this list).
  • After all the tests have been run and their test results received, we shut down the containers and conclude our test run.
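
A rough sketch of that dispatcher, assuming each container’s test receiver is reachable at a known address (in reality we would discover these from the Swarm and the ports each container publishes; the helper names here are illustrative):

    import requests
    from concurrent.futures import ThreadPoolExecutor

    # Addresses of the test receivers, one per container; hard-coded here for illustration.
    RECEIVERS = ["http://node1.internal:32768", "http://node2.internal:32768"]  # ... 40 in total


    def collect_test_names():
        # Placeholder: gather the names of every test in the repository.
        return ["test_video_campaign_bid", "test_mobile_app_bid"]  # ...


    def run_all_tests():
        tests = collect_test_names()

        # Round-robin split: test i goes to receiver i % len(RECEIVERS), so each
        # receiver gets its own slice and only ever runs one test at a time.
        batches = [tests[i::len(RECEIVERS)] for i in range(len(RECEIVERS))]

        def run_batch(receiver, batch):
            results = []
            for name in batch:
                resp = requests.post(receiver, json={"test_name": name}, timeout=600)
                results.append(resp.json())
            return results

        with ThreadPoolExecutor(max_workers=len(RECEIVERS)) as pool:
            per_receiver = pool.map(run_batch, RECEIVERS, batches)

        results = [r for batch in per_receiver for r in batch]
        return [r for r in results if not r["passed"]]

Each worker thread talks to exactly one receiver, so any given bidder only ever handles one test at a time, preserving the isolation described earlier.
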
By utilizing Docker, we were able to speed up our test suite with little change to our existing architecture. By scaling out our testing efforts, we reduced the average testing duration from over an hour to under 10 minutes.


JIMMY HUANG

Jimmy Huang is a Test Engineer at MediaMath, helping the Media engineering team with MediaMath's bidding system. In his spare time, you can find him in the gym lifting weights or at the park playing chess.