Developer Productivity Engineering Blog

Test Distribution FAQs

Develocity features Test Distribution for Gradle builds (since 2020.2) and Maven builds (since 2020.5) and supports auto-scaling Test Distribution agents based on current demand (since 2021.2). You can learn more about this powerful build and test acceleration technology by watching our webcast: Develocity Unveils Test Distribution. We have compiled this list of FAQs to supplement the information shared in the webcast. If you want to try out Test Distribution for your Gradle or Maven builds, you can request a free trial today.

The FAQ entries are organized into the following categories: Prerequisites & Supported Environments, Capabilities & Functionality, Testing Frameworks, Deployment & Operations, CI Integration Details, and Best Practices.

Prerequisites & Supported Environments

Do you need a Develocity subscription to be able to use Test Distribution?
Yes.

Does this work for both Gradle and Maven?
Yes, Gradle builds are supported since 2020.2 and Maven since 2020.5.

What Gradle/Maven versions are required to use Test Distribution?
Gradle 5.4 or later and Maven 3.3.1 or later, respectively.

What are the JDK version requirements?
Tests require a Java 8 or later runtime; the agents themselves run on Java 11 or later.

Does Test Distribution work for Android?
It works for JUnit-based unit tests, but not for testing frameworks like Espresso that require an emulator.

What test frameworks are supported?
Currently, testing frameworks that run on the JUnit Platform are supported, such as JUnit 3, 4, and 5, and Spock.

 

Capabilities & Functionality

Does Test Distribution only work for the test task, or can it be used for other Test tasks like integration tests, smoke tests, functional tests, etc.?
It is available on every Test task.
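As an illustration, enabling distribution on an additional Test task in a Gradle Kotlin DSL build script could look roughly like the sketch below; the integrationTest task name is hypothetical, and the distribution block comes from the Develocity Test Distribution plugin, so the exact syntax may vary by plugin and Gradle version.

    // build.gradle.kts -- illustrative sketch, requires the Develocity Test Distribution plugin
    tasks.named<Test>("integrationTest") {   // "integrationTest" is a hypothetical task name
        useJUnitPlatform()
        distribution {
            enabled.set(true)                // distribute this Test task as well as, or instead of, "test"
        }
    }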

Can the framework be used to distribute the execution of any non-test tasks?
No.

Does maxParallelForks do anything when distribution is enabled?
Yes, it defines the parallelism for executing tests on your local machine.

Can individual test cases be configured to execute on multiple OSs?
You would have to set up several test tasks, one for each OS, because each agent runs on a single operating system.
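A rough sketch of what the per-OS tasks could look like in a Gradle Kotlin DSL build script is shown below; the requirements property and the os=.../jdk=... notation follow the Develocity Test Distribution plugin and may vary by version, and test classes/classpath wiring is omitted for brevity.

    // Two hypothetical Test tasks, each restricted to agents running one operating system.
    tasks.register<Test>("linuxTest") {
        useJUnitPlatform()
        distribution {
            enabled.set(true)
            requirements.set(setOf("os=linux", "jdk=11"))   // only agents declaring these capabilities
        }
    }
    tasks.register<Test>("windowsTest") {
        useJUnitPlatform()
        distribution {
            enabled.set(true)
            requirements.set(setOf("os=windows"))           // only agents declaring os=windows
        }
    }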

How does this work with parameterized tests that generate large sets of test inputs?
It works the same way: the estimated run time of those tests is determined and they are distributed accordingly.

Does the duration of tests have any impact on splitting the test set?
Yes, the test set is split into balanced partitions based on the historical execution time of tests.

Does Test Distribution work when multiple tests need different JDK versions?
Yes. You can configure agents with different JDK versions.

 

Testing Frameworks

Does it run parallel tests inside each fork?
That depends on the test framework.

Does it work for aggregating JaCoCo reports in multi-module applications?
The output merge happens on a per-task basis: the outputs of a test task that was distributed to several agents are merged back together.

How do you make this work when tests need native/C++ libraries?
Install the library on the agents and declare it as a capability; the build then requests that capability as a requirement.
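On the build side, this might look roughly like the following Gradle Kotlin DSL sketch; the capability name opencv-native is hypothetical and must match whatever custom capability the agents with the library installed declare.

    tasks.test {
        distribution {
            enabled.set(true)
            requirements.set(setOf("opencv-native"))   // hypothetical custom capability provided by some agents
        }
    }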

Which code coverage libraries does it work with?
The only code coverage library supported at this time is JaCoCo.

Will this support Firebase Test Lab?
No.

Will this work when running integration tests inside a server environment using a custom JUnit runner?
It should work for any custom runner as long as it runs on the JUnit Platform.

 

Deployment & Operations

Are the agents hosted by Develocity or can they be hosted on-premise?
Develocity and Test Distribution agents are only available on-premise.

Is there any sort of auto-scaling for the agents?
Yes. Develocity 2021.2 and above support auto-scaling Test Distribution agents on a container orchestration platform like Kubernetes, based on current demand, i.e. how many tests are waiting to run.

What logic is used to decide how to separate the JUnit suite into the different groups (i.e. some classes executed on one agent, some on another, etc)?
It estimates the duration of each test and arranges the partitions to make the most efficient use of the available agents.

Does Develocity know about the available resources on each agent? If so, how are agents assigned for a given build?
The agents declare their capabilities (JDK version, OS, etc.) when they connect to the Develocity server. The server then tracks the available capabilities of each agent. If a particular test task requires certain capabilities, the server finds and coordinates with the matching agents.

How would you set up integration and functional tests that depend on a database and external services?
Agents can be configured with specific capabilities like databases. For external services, you could set up your tests the same way you run them on CI or another machine.

What is the performance like on the first build when many agents need to pull the inputs from the build machine?
It depends on your setup, and there is some overhead. Files are transferred over the network, so performance is affected by the quality of your network connection and the amount of data sent. Input files are cached on the agents, so files that change less often, such as binary dependencies, do not need to be sent frequently.

Additionally, Develocity caches input files, so each file only has to be transferred once by the build, even if multiple agents require the same input file. This reduces the network bandwidth required on the build machine, which is especially helpful for local builds where bandwidth is often limited.

 

CI Integration Details

What would the test output look like in Jenkins?
It would look the same as if the task ran on that machine.

Can I create distribution agents from CI tools like Bitrise?
See here.

Will there be a solution for starting agents dynamically on Kubernetes similar to the way the Jenkins Kubernetes Plugin works?
Not in the initial release, but this is on our roadmap (see the auto-scaling entry under Deployment & Operations above).

Would integration tests that need to start Docker containers work with Test Distribution?
Yes, if you are using something like Testcontainers, which can run inside another Docker container and works with one of the supported testing frameworks.

Is it possible to hook into an agent to start up a server environment before running tests on the agent?
To equip specific agents with specific functionality, you can install the required software on those agents and then declare the corresponding capabilities, such as databases, available JDK versions, operating systems, etc.

If you want to configure setup/teardown behaviour (like configuring a database schema) before tests are actually executed on the agent, you can do this by configuring a LauncherSessionListener as documented for Gradle and for Maven.
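A minimal Kotlin sketch of such a listener is shown below, assuming JUnit Platform 1.8+ and hypothetical schema helper functions; it would be registered through a META-INF/services/org.junit.platform.launcher.LauncherSessionListener file containing the fully qualified class name.

    import org.junit.platform.launcher.LauncherSession
    import org.junit.platform.launcher.LauncherSessionListener

    // Runs once per launcher session on each agent: setup before the first test,
    // teardown after the last test of the session.
    class SchemaLifecycleListener : LauncherSessionListener {
        override fun launcherSessionOpened(session: LauncherSession) {
            createSchema()   // hypothetical helper, e.g. run SQL migration scripts
        }

        override fun launcherSessionClosed(session: LauncherSession) {
            dropSchema()     // hypothetical helper, clean up the schema again
        }

        private fun createSchema() { /* ... */ }
        private fun dropSchema() { /* ... */ }
    }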

Best Practices

What is the recommended strategy to transition from dependent tests to fully-independent tests? Can you use distribution with a mix?
To transition, you can start with a mix: Test Distribution has to be explicitly enabled on each Test task, so you can enable it for the tasks whose tests are already independent and leave the others running as before.

Are there ways to group agents to avoid being a bad neighbor? For example, can you assign a group of agents to a specific project?
You can use requirements/capabilities to do that. Give your agents the capability “TEAM_ROBERTO” and then add that requirement to your build configuration, as sketched below.
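In a Gradle Kotlin DSL build script that could look roughly like the following sketch, applied to every Test task of the project; the exact syntax of the requirements property may vary by plugin version.

    // Reserve the team's agent pool for all Test tasks of this project.
    tasks.withType<Test>().configureEach {
        distribution {
            enabled.set(true)
            requirements.set(setOf("TEAM_ROBERTO"))   // must match a capability declared by the team's agents
        }
    }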

(last updated: June 11, 2021)