Recently I discovered a library called Testcontainers. I already wrote about using it on my current project here. It helps you run software that your application depends on in a test context by providing an API to start Docker containers. It’s currently implemented as a JUnit 4 rule, but you can also use it manually with JUnit 5. Native support for JUnit 5 is on the roadmap for the next major release. Testcontainers comes with a few pre-configured database and Selenium containers, but most importantly it also provides a generic container that you can use to start whatever Docker image you need.
In my project we are using Infinispan for distributed caching. For some of our integration tests caching is disabled, but others rely on a running Infinispan instance. Up until now we have been using a virtual machine to run Infinispan and other software on developer machines and build servers. The way we are handling this poses a few problems, and isolated Infinispan instances would help mitigate them. This post shows how you can get Infinispan running in a generic container. I’ll also try to come up with a useful abstraction that makes running Infinispan as a test container easier.
Configuring a generic container for Infinispan
Docker Hub provides a ready-made Infinispan image: jboss/infinispan-server. We’ll be using the latest version at this time, which is 9.1.3.Final. Our first attempt to start the server using Testcontainers looks like this:
```java
import static org.junit.Assert.assertNotNull;

import org.infinispan.client.hotrod.ProtocolVersion;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
import org.junit.Before;
import org.junit.ClassRule;
import org.junit.Test;
import org.testcontainers.containers.GenericContainer;

@ClassRule
public static GenericContainer infinispan =
    new GenericContainer("jboss/infinispan-server:9.1.3.Final");

private RemoteCacheManager cacheManager;

@Before
public void setup() {
    cacheManager = new RemoteCacheManager(new ConfigurationBuilder()
        .addServers(getServerAddress())
        .version(ProtocolVersion.PROTOCOL_VERSION_26)
        .build());
}

@Test
public void should_be_able_to_retrieve_a_cache() {
    assertNotNull(cacheManager.getCache());
}

private String getServerAddress() {
    return infinispan.getContainerIpAddress() + ":"
        + infinispan.getMappedPort(11222);
}
```
You can see a few things here:
- We’re configuring our test class with a class rule that will start a generic container. As a parameter, we use the name of the Infinispan Docker image alongside the required version. You could also use latest here.
- There’s a setup method that creates a RemoteCacheManager to connect to the Infinispan server running inside the Docker container. We extract the network address from the generic container and retrieve the container IP address and the mapped port number for the hotrod port in getServerAddress().
- Then there’s a simple test that will make sure we are able to retrieve an unnamed cache from the server.
Waiting for Infinispan
If we run the test, it doesn’t work, though: it throws a TransportException that mentions an error code hinting at a connection problem. Looking at other pre-configured containers, we see that they have some kind of waiting strategy in place. This is important so that the tests only start after the container has fully loaded. The PostgreSQLContainer, for example, waits for a specific log message. There are other wait strategies available, and you can implement your own as well. One of the default strategies is the HostPortWaitStrategy, which seems like a straightforward choice. With the Infinispan image, however, it doesn’t work: one of the commands used to determine the readiness of the TCP port has a subtle bug in it, and the other relies on the netcat command line tool being present in the Docker image. We’ll stick to the same approach as the PostgreSQLContainer rule and check for a suitable log message to appear on the container’s output. We can determine such a message by manually starting the Docker container on the command line:

```
docker run -it jboss/infinispan-server:9.1.3.Final
```
The configuration of our rule then changes to this:
```java
@ClassRule
public static GenericContainer container =
    new GenericContainer("jboss/infinispan-server:9.1.3.Final")
        .waitingFor(new LogMessageWaitStrategy()
            .withRegEx(".*Infinispan Server.*started in.*\\s"));
```
After this change, the test still doesn’t work correctly, but at least it behaves differently: it waits for a considerable amount of time and again throws a TransportException before the test finishes. Since the underlying TcpTransportFactory swallows exceptions on startup and returns a cache object anyway, the test will still be green. Let’s address this first. I don’t see a way to ask the RemoteCacheManager or the RemoteCache about the state of the connection, so my approach here is to work with a timeout:
```java
private ExecutorService executorService = Executors.newCachedThreadPool();

@Test
public void should_be_able_to_retrieve_a_cache() throws Exception {
    Future<RemoteCache<Object, Object>> result =
        executorService.submit(() -> cacheManager.getCache());
    assertNotNull(result.get(1500, TimeUnit.MILLISECONDS));
}
```
The test will now fail should we not be able to retrieve the cache within 1500 milliseconds. Unfortunately, the resulting TimeoutException will not be linked to the TransportException, though. I’ll take suggestions for how to better write a failing test and leave it at that for the time being.
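As one small improvement (my own sketch, not from the original post), we could at least catch the timeout and fail with a message that points at the likely cause:

```java
// Sketch of an alternative failing test (my own suggestion): catch the
// TimeoutException and fail with a hint at the probable root cause.
@Test
public void should_be_able_to_retrieve_a_cache() throws Exception {
    Future<RemoteCache<Object, Object>> result =
        executorService.submit(() -> cacheManager.getCache());
    try {
        assertNotNull(result.get(1500, TimeUnit.MILLISECONDS));
    } catch (TimeoutException e) {
        fail("Could not retrieve the cache within 1500 ms"
            + " - the connection to the Infinispan server probably failed");
    }
}
```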
Running Infinispan in standalone mode
Looking at the stack trace of the TransportException, we see the following output:
```
INFO: ISPN004006: localhost:33086 sent new topology view (id=1, age=0) containing 1 addresses: [172.17.0.2:11222]
Dez 14, 2017 19:57:43 AM org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory updateTopologyInfo
INFO: ISPN004014: New server added(172.17.0.2:11222), adding to the pool.
```
It looks like the server is running in clustered mode and the client receives a new server address to talk to. The IP address and port number seem correct, but looking more closely we notice that the hotrod port 11222 refers to a port number inside the Docker container. It is not reachable from the host. That’s why Testcontainers gives you the ability to easily retrieve port mappings; we already use this in our getServerAddress() method. Infinispan, or rather the hotrod protocol, however, is not aware of the Docker environment and communicates the internal port to the cluster clients, overwriting our initial configuration.
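To make the mismatch concrete, here is a small illustration (my own sketch, using the container rule from above): the port inside the container is fixed, while the port visible on the host is assigned by Docker when the container starts.

```java
// Hypothetical illustration: the hotrod port inside the container is fixed,
// while the host-visible port is assigned by Docker and differs on every run.
int internalPort = 11222;                               // what Infinispan announces
int mappedPort = container.getMappedPort(internalPort); // e.g. 33086 on the host
String reachable = container.getContainerIpAddress() + ":" + mappedPort;
// Connecting to "<container-ip>:11222" from the host fails, and that is
// exactly what the topology update makes the client do.
```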
To confirm this analysis we can have a look at the output of the server when we start the image manually:
```
19:12:47,368 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-6) ISPN000078: Starting JGroups channel clustered
19:12:47,371 INFO  [org.infinispan.CLUSTER] (MSC service thread 1-6) ISPN000094: Received new cluster view for channel cluster: [9621833c0138|0] (1) [9621833c0138]
...
Dez 14, 2017 19:12:47,376 AM org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory updateTopologyInfo
INFO: ISPN004016: Server not in cluster anymore(localhost:33167), removing from the pool.
```
The server is indeed starting in clustered mode, and the documentation on Docker Hub confirms this as well. For our tests we need a standalone server, though. On the command line we can add a parameter when starting the container (again, we get this from the documentation on Docker Hub):

```
$ docker run -it jboss/infinispan-server:9.1.3.Final standalone
```
The output now tells us that Infinispan is no longer running in clustered mode. In order to start Infinispan as a standalone server using Testcontainers, we need to add a command to the container startup. Once more we change the configuration of the container rule:
```java
@ClassRule
public static GenericContainer container =
    new GenericContainer("jboss/infinispan-server:9.1.3.Final")
        .waitingFor(new LogMessageWaitStrategy()
            .withRegEx(".*Infinispan Server.*started in.*\\s"))
        .withCommand("standalone");
```
Now our test has access to an Infinispan instance running in a container.
Adding a specific configuration
The applications in our project use different caches; these can be configured in the Infinispan standalone configuration file, and for our tests we need them to be present. One solution is to use the withClasspathResourceMapping() method to link a configuration file from the (test) classpath into the container. This configuration file contains the cache configurations. Knowing the location of the configuration file in the container, we can once again change the test container configuration:
```java
public static GenericContainer container =
    new GenericContainer("jboss/infinispan-server:9.1.3.Final")
        .waitingFor(new LogMessageWaitStrategy()
            .withRegEx(".*Infinispan Server.*started in.*\\s"))
        .withCommand("standalone")
        .withClasspathResourceMapping(
            "infinispan-standalone.xml",
            "/opt/jboss/infinispan-server/standalone/configuration/standalone.xml",
            BindMode.READ_ONLY);

@Test
public void should_be_able_to_retrieve_a_cache() throws Exception {
    Future<RemoteCache<Object, Object>> result =
        executorService.submit(() -> cacheManager.getCache("testCache"));
    assertNotNull(result.get(1500, TimeUnit.MILLISECONDS));
}
```
Now we can retrieve and work with a cache from the Infinispan instance in the container.
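For illustration, a minimal interaction with such a cache could look like this (my own sketch; a RemoteCache behaves much like a remote ConcurrentMap):

```java
// Minimal usage sketch (not from the original post): put a value and read it back.
RemoteCache<String, String> cache = cacheManager.getCache("testCache");
cache.put("greeting", "hello from the container");
assertEquals("hello from the container", cache.get("greeting"));
```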
Simplifying the configuration
You can see how it can be a bit of a pain to get an arbitrary Docker image to run correctly using a generic container. For Infinispan we now know what we need to configure, but I really don’t want to think of all this every time I need an Infinispan server for a test. However, we can create our own abstraction similar to the PostgreSQLContainer. It contains the configuration bits that we discovered in the first part of this post, and since it is an implementation of a GenericContainer, we can also use everything that’s provided by the latter.
```java
public class InfinispanContainer extends GenericContainer<InfinispanContainer> {

    private static final String IMAGE_NAME = "jboss/infinispan-server";

    public InfinispanContainer() {
        this(IMAGE_NAME + ":latest");
    }

    public InfinispanContainer(final String imageName) {
        super(imageName);
        withStartupTimeout(Duration.ofMillis(20000));
        withCommand("standalone");
        waitingFor(new LogMessageWaitStrategy()
            .withRegEx(".*Infinispan Server.*started in.*\\s"));
    }
}
```
In our tests we can now create an Infinispan container like this:
```java
@ClassRule
public static InfinispanContainer infinispan = new InfinispanContainer();
```
That’s a lot better than dealing with a generic container.
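The connection setup from the first test still applies unchanged; for example (a sketch, assuming the default hotrod port 11222 worked out above):

```java
// Connecting to the custom container works the same way as before
// (sketch, assuming the default hotrod port 11222).
private RemoteCacheManager cacheManager;

@Before
public void setup() {
    cacheManager = new RemoteCacheManager(new ConfigurationBuilder()
        .addServers(infinispan.getContainerIpAddress() + ":"
            + infinispan.getMappedPort(11222))
        .version(ProtocolVersion.PROTOCOL_VERSION_26)
        .build());
}
```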
Adding easy cache configuration
You may have noticed that I left out the custom configuration part here. We can do better by providing builder methods to create caches programmatically using the RemoteCacheManager. Creating a cache is as easy as this (the second parameter names a configuration template; passing null creates the cache with the default configuration):

```java
cacheManager.administration().createCache("someCache", null);
```
In order to let the container create caches automatically, we make use of the callback method containerIsStarted(). We can override it in our abstraction, create a RemoteCacheManager, and use its API to create the caches that we configured upfront:
```java
...
private RemoteCacheManager cacheManager;
private Collection<String> cacheNames;
...

public InfinispanContainer withCaches(final Collection<String> cacheNames) {
    this.cacheNames = cacheNames;
    return this;
}

@Override
protected void containerIsStarted(final InspectContainerResponse containerInfo) {
    cacheManager = new RemoteCacheManager(new ConfigurationBuilder()
        .addServers(getServerAddress())
        .version(getProtocolVersion())
        .build());

    this.cacheNames.forEach(cacheName ->
        cacheManager.administration().createCache(cacheName, null));
}

public RemoteCacheManager getCacheManager() {
    return cacheManager;
}
```
You can also retrieve the CacheManager from the container and use it in your tests.
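The helpers getServerAddress() and getProtocolVersion(), as well as the withProtocolVersion() and varargs withCaches() builder methods used in the test below, are not shown in the snippet above. Here is a minimal sketch of how they could look (my assumption, based on the configuration we worked out earlier; the actual implementation in the linked repository may differ):

```java
// Sketch of the remaining helpers (assumption, not verbatim from the post):
// the hotrod port stays at the default 11222 determined earlier.
private ProtocolVersion protocolVersion;

public InfinispanContainer withProtocolVersion(final ProtocolVersion protocolVersion) {
    this.protocolVersion = protocolVersion;
    return this;
}

public InfinispanContainer withCaches(final String... cacheNames) {
    // convenience overload, delegates to the Collection variant
    return withCaches(java.util.Arrays.asList(cacheNames));
}

protected ProtocolVersion getProtocolVersion() {
    // fall back to the version used throughout this post
    return protocolVersion != null ? protocolVersion : ProtocolVersion.PROTOCOL_VERSION_26;
}

public String getServerAddress() {
    return getContainerIpAddress() + ":" + getMappedPort(11222);
}
```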
There’s also a problem with this approach: you can only create caches through the API if you use hotrod protocol version 2.0 or above. I’m willing to accept that, as it makes the usage in tests really comfortable:
```java
@ClassRule
public static InfinispanContainer infinispan =
    new InfinispanContainer()
        .withProtocolVersion(ProtocolVersion.PROTOCOL_VERSION_21)
        .withCaches("testCache");

@Test
public void should_get_existing_cache() {
    assertNotNull(infinispan.getCacheManager().getCache("testCache"));
}
```
If you need to work with a protocol version below 2.0, you can still use the approach from above, linking a configuration file into the container.
Conclusion
While it sounds very easy to run any Docker image using Testcontainers, there are a lot of configuration details to know, depending on the complexity of the software that you need to run. In order to work effectively with such a container, it’s a good idea to encapsulate all of this in your own specific container. Ideally, these containers will end up in the Testcontainers repository so that others can benefit from your work as well.
I hope this will be useful for others. If you want to see the full code, have a look at this repository.