
Running Spring Boot GraalVM Native Images with Docker & Heroku

1.6.2020 | 19 minutes of reading time

Combining Spring Boot with the benefits of GraalVM Native Images is really cool. But how about doing all that magic inside a Docker container as well? And how about running those native apps on a cloud infrastructure like Heroku?

Spring Boot & GraalVM – blog series

Part 1: Running Spring Boot apps as GraalVM Native Images
Part 2: Running Spring Boot GraalVM Native Images with Docker & Heroku
Part 3: Simplifying Spring Boot GraalVM Native Image builds with the native-image-maven-plugin

Say ‘works on my machine’ one more time!

Working on the first article about Running Spring Boot apps as GraalVM Native Images, I got really stoked about what's already possible today when we try to use Spring Boot together with GraalVM Native Images. But no matter if I am at the customer's site or giving lectures to my students at Fachhochschule Erfurt, I really try to avoid this 'works on my machine' dilemma. And so far we've only compiled Spring Boot apps into GraalVM Native Images on our local workstation.

Since we’re in 2020, we shouldn’t stop there and instead try to use some sort of container to build and run our apps, right? And we should continuously do that with the help of some Continuous Integration cloud platform. Finally, we need to deploy and run our native apps on some sort of cloud platform!

Logo sources: Docker logo, Spring Boot logo, Computer logo, GraalVM logo

So first things first – let’s figure out how to compile our Spring Boot apps into GraalVM Native Images using Docker!

Compiling Spring Boot Apps into GraalVM Native Images with Docker

The easiest way to use Docker here is to rely on the official GraalVM Docker image from Oracle. Interestingly, this image lacks both Maven and the native-image GraalVM plugin. So let's simply add them to the image by creating our own Dockerfile. Again, all code examples are available in an example project on GitHub.

In the first article of this blog post series we already got used to leveraging SDKMAN to install Maven. As the official GraalVM Docker image from Oracle is based on oraclelinux:7-slim, we need to install unzip and zip first. Both are needed by SDKMAN in order to work properly:

FROM oracle/graalvm-ce:20.0.0-java11

# For SDKMAN to work we need unzip & zip
RUN yum install -y unzip zip

RUN \
    # Install SDKMAN
    curl -s "https://get.sdkman.io" | bash; \
    source "$HOME/.sdkman/bin/sdkman-init.sh"; \
    # Install Maven
    sdk install maven; \
    # Install GraalVM Native Image
    gu install native-image;

RUN source "$HOME/.sdkman/bin/sdkman-init.sh" && mvn --version

RUN native-image --version

# Always use source sdkman-init.sh before any command, so that we will be able to use 'mvn' command
ENTRYPOINT bash -c "source $HOME/.sdkman/bin/sdkman-init.sh && $0"

We shouldn’t forget to enable the mvn command for a user of our Docker image. Therefore we craft a slightly more interesting ENTRYPOINT that always prefixes commands with source $HOME/.sdkman/bin/sdkman-init.sh. Having defined our Dockerfile, we should build our image with:

docker build . --tag=graalvm-ce:20.0.0-java11-mvn-native-image
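As a quick optional smoke test for that ENTRYPOINT, we can start a throwaway container and have it print the Maven version. Since the ENTRYPOINT only references $0, a multi-word command needs to be passed as a single quoted argument; the exact version output will of course depend on the image:

docker run -it --rm graalvm-ce:20.0.0-java11-mvn-native-image "mvn --version"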

After the build is finished, we are able to launch our GraalVM Native Image compilation inside a Docker container. But wait, the following command includes a second Docker volume definition with --volume "$HOME"/.m2:/root/.m2. Why is that? Because I really wanted to avoid downloading all the Spring Maven dependencies over and over again every time we start our Docker container. With this mount we simply use the Maven repository that is already cached on our machine:

docker run -it --rm \
    --volume $(pwd):/build \
    --workdir /build \
    --volume "$HOME"/.m2:/root/.m2 \
    graalvm-ce:20.0.0-java11-mvn-native-image ./compile.sh

The first volume --volume $(pwd):/build simply mounts our Spring Boot app's sources, including our compile.sh script for the GraalVM Native Image compilation, into the Docker container. Running this Docker build, the resulting spring-boot-graal native app should be ready after some minutes of heavy compilation.
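The compile.sh script itself is part of the example project and isn't reprinted here in full. As a rough sketch (the artifact name and main class below are assumptions based on the example project, so adapt them to your own app), it packages the app with Maven, assembles a classpath from the exploded fat JAR and then invokes native-image:

#!/usr/bin/env bash
# Sketch of a compile.sh; artifact name and main class are assumptions
ARTIFACT=spring-boot-graal
MAINCLASS=io.jonashackt.springbootgraal.SpringBootHelloApplication

# Build the Spring Boot fat JAR first
mvn -DskipTests package

# Unpack the fat JAR so native-image can work with an exploded classpath
cd target && mkdir -p native-image && cd native-image
jar -xf ../$ARTIFACT-*.jar
cp -R META-INF BOOT-INF/classes

# Assemble the classpath from the app's classes and all of its dependencies
LIBPATH=$(find BOOT-INF/lib | tr '\n' ':')
CP=BOOT-INF/classes:$LIBPATH

# Run the GraalVM Native Image compilation (the full parameter list is discussed later in this post)
native-image --no-server --no-fallback --initialize-at-build-time \
  -H:Name=$ARTIFACT -cp $CP $MAINCLASS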

Preventing java.lang.OutOfMemoryError errors

When I started experimenting with GraalVM Native Image compilations of Spring Boot apps, I often experienced that the docker run command seemed to take ages to complete, only to throw a java.lang.OutOfMemoryError into the log at the end like this:

14:06:34.609 [ForkJoinPool-2-worker-3] DEBUG io.netty.handler.codec.compression.ZlibCodecFactory - -Dio.netty.noJdkZlibEncoder: false
Exception in thread "native-image pid watcher"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "native-image pid watcher"

In this case it is very likely that your Docker engine is not able to use enough memory. On my Mac's Docker installation the default was only 2.00 GB. As stated in the comments of this Stack Overflow Q&A, you have to give Docker much more memory since the GraalVM Native Image compilation process is really RAM-intensive. Allotting the Docker engine around 9 to 12 GB of RAM, I was able to get my compilation working inside the Docker container.
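By the way, a quick sanity check (not part of the original setup) of how much memory the Docker engine can actually use is to ask docker info for it:

# Prints the total memory available to the Docker engine in bytes
docker info --format '{{.MemTotal}}'

If this reports only something around 2 GB, the native image compilation will most likely fail with the error shown above.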

If everything goes fine, you should find the natively compiled Spring Boot app as spring-boot-graal inside the target/native-image directory. So in order to run our app, simply run it with ./target/native-image/spring-boot-graal:

$ ./spring-boot-graal
zsh: exec format error: ./spring-boot-graal

Ooops! It turns out that this doesn't work! Why? We really need to keep in mind that we are compiling native executables from our Java applications! So they're absolutely platform-dependent now! And our Docker container's base image will most likely be different from our host operating system. I guess this is something new for us Java folks, since from the beginning we were told that Java is platform-independent thanks to its virtual machine. The problem only becomes really obvious at the point where we start compiling our app inside a Docker container.
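If you want to see the mismatch with your own eyes, the file utility (if installed on your host) tells you what kind of binary was produced. On a Linux-based build container we get an ELF executable, which a macOS host simply can't execute; the output below is abbreviated and purely illustrative:

$ file ./target/native-image/spring-boot-graal
./target/native-image/spring-boot-graal: ELF 64-bit LSB executable, x86-64, ...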

The solution to this problem is simple: we not only compile our apps inside Docker containers, but also run them there.

Running native Spring Boot apps in Docker

If we want to run our native Spring Boot apps inside a container, the Docker multi-stage build feature comes in handy. Using it, we can do the GraalVM Native Image compilation inside a first container and only use the resulting native Spring Boot app inside a second container, where we run it. Therefore we need to extend our Dockerfile slightly:

FROM oracle/graalvm-ce:20.1.0-java11

ADD . /build
WORKDIR /build

# For SDKMAN to work we need unzip & zip
RUN yum install -y unzip zip

RUN \
    # Install SDKMAN
    curl -s "https://get.sdkman.io" | bash; \
    source "$HOME/.sdkman/bin/sdkman-init.sh"; \
    # Install Maven
    sdk install maven; \
    # Install GraalVM Native Image
    gu install native-image;

RUN source "$HOME/.sdkman/bin/sdkman-init.sh" && mvn --version

RUN native-image --version

RUN source "$HOME/.sdkman/bin/sdkman-init.sh" && ./compile.sh


# We use a Docker multi-stage build here so that we only take the compiled native Spring Boot app from the first build container
FROM oraclelinux:7-slim

MAINTAINER Jonas Hecht

# Add Spring Boot Native app spring-boot-graal to Container
COPY --from=0 "/build/target/native-image/spring-boot-graal" spring-boot-graal

# Fire up our Spring Boot Native app by default
CMD [ "sh", "-c", "./spring-boot-graal" ]

We simply copy the compilation result from the first build container via COPY --from=0 here. Then we define the app's startup command ./spring-boot-graal as we would do on our machine, just wrapped inside a CMD statement. And as you might notice, we switched to oraclelinux:7-slim as the base image for our second run container. This keeps the resulting image small: it only needs around 180 MB, as opposed to nearly 2 GB if we stuck to oracle/graalvm-ce:20.1.0-java11.
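A small side note: instead of referencing the build stage by its index with --from=0, Docker also lets us name stages, which reads a bit nicer in larger Dockerfiles. This is a purely optional variant; only the two relevant lines of the Dockerfile above would change:

# The first stage gets a name ...
FROM oracle/graalvm-ce:20.1.0-java11 AS graalvm-build

# ... which the second stage references instead of an index
COPY --from=graalvm-build "/build/target/native-image/spring-boot-graal" spring-boot-graal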

That’s already all that needs to be prepared here and we’re now able to run our Docker multi-stage build with the following command:

docker build . --tag=spring-boot-graal

This again will take a while – you may grab a coffee. 🙂 The Docker build is successfully finished when you get something like the following output:

[spring-boot-graal:289]   (typeflow): 114,554.33 ms,  6.58 GB
[spring-boot-graal:289]    (objects):  63,145.07 ms,  6.58 GB
[spring-boot-graal:289]   (features):   6,990.75 ms,  6.58 GB
[spring-boot-graal:289]     analysis: 190,400.92 ms,  6.58 GB
[spring-boot-graal:289]     (clinit):   1,970.98 ms,  6.67 GB
[spring-boot-graal:289]     universe:   6,263.93 ms,  6.67 GB
[spring-boot-graal:289]      (parse):  11,824.83 ms,  6.67 GB
[spring-boot-graal:289]     (inline):   7,216.63 ms,  6.73 GB
[spring-boot-graal:289]    (compile):  63,692.52 ms,  6.77 GB
[spring-boot-graal:289]      compile:  86,836.76 ms,  6.77 GB
[spring-boot-graal:289]        image:  10,050.63 ms,  6.77 GB
[spring-boot-graal:289]        write:   1,319.52 ms,  6.77 GB
[spring-boot-graal:289]      [total]: 313,644.65 ms,  6.77 GB

real  5m16.447s
user  16m32.096s
sys 1m34.441s
Removing intermediate container 151e1413ec2f
 ---> be671d4f237f
Step 10/13 : FROM oracle/graalvm-ce:20.0.0-java11
 ---> 364d0bb387bd
Step 11/13 : MAINTAINER Jonas Hecht
 ---> Using cache
 ---> 445833938b60
Step 12/13 : COPY --from=0 "/build/target/native-image/spring-boot-graal" spring-boot-graal
 ---> 2d717a0db703
Step 13/13 : CMD [ "sh", "-c", "./spring-boot-graal" ]
 ---> Running in 7fa931991d7e
Removing intermediate container 7fa931991d7e
 ---> a0afe30b3619
Successfully built a0afe30b3619
Successfully tagged spring-boot-graal:latest

With an output like that we could simply run our Spring Boot native app with docker run -p 8080:8080 spring-boot-graal:

$ docker run -p 8080:8080 spring-boot-graal

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::

2020-04-19 09:22:51.547  INFO 1 --- [           main] i.j.s.SpringBootHelloApplication         : Starting SpringBootHelloApplication on 06274db526b0 with PID 1 (/spring-boot-graal started by root in /)
2020-04-19 09:22:51.547  INFO 1 --- [           main] i.j.s.SpringBootHelloApplication         : No active profile set, falling back to default profiles: default
2020-04-19 09:22:51.591  WARN 1 --- [           main] io.netty.channel.DefaultChannelId        : Failed to find the current process ID from ''; using a random value: -949685832
2020-04-19 09:22:51.593  INFO 1 --- [           main] o.s.b.web.embedded.netty.NettyWebServer  : Netty started on port(s): 8080
2020-04-19 09:22:51.594  INFO 1 --- [           main] i.j.s.SpringBootHelloApplication         : Started SpringBootHelloApplication in 0.063 seconds (JVM running for 0.065)

Wow, I guess this was simple and fast again. Now finally access your app inside a browser at http://localhost:8080/hello !
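Instead of the browser, a quick curl also does the trick (the exact greeting text depends on the handler implementation in the example project):

curl http://localhost:8080/hello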

Configuring the Spring Boot native app’s port dynamically inside a Docker container

Being able to build and run our natively compiled Spring Boot apps inside Docker containers, we're now truly free in our actions! As some of the readers may already know, I really like Heroku. So why not run our native Spring Boot app there?

Logo sources: Docker logo, Heroku logo, Spring Boot logo, Computer logo, GraalVM logo

One of the things we need for most cloud platform-as-a-service providers is the possibility to configure our Spring Boot native app's port dynamically at runtime. This is simply because most cloud providers put some sort of proxy in front of our apps. And Heroku is no exception here. As the Heroku docs state:

The web process must listen for HTTP traffic on $PORT, which is set by Heroku. EXPOSE in Dockerfile is not respected, but can be used for local testing. Only HTTP requests are supported.

To achieve this, we need to somehow pass a port variable to our Spring Boot native app at runtime. Since the GraalVM support is just in its early stages, we can't rely on extensive documentation. But the answer is quite simple! We only need to pass a -D parameter like -Dserver.port=8087 to the native app, just as we're used to from non-native Spring Boot apps already:

./spring-boot-graal -Dserver.port=8087

After doing this, our app starts using port 8087. With this in mind, we also need to be able to define the port within a docker run command. Therefore a small change to our Dockerfile is required again:

...
# Add Spring Boot Native app spring-boot-graal to Container
COPY --from=0 "/build/target/native-image/spring-boot-graal" spring-boot-graal

# Fire up our Spring Boot Native app by default
CMD [ "sh", "-c", "./spring-boot-graal -Dserver.port=$PORT" ]

With this we are able to run our Dockerized native Spring Boot app with a dynamic port setting from the command line like this:

docker run -e "PORT=8087" -p 8087:8087 spring-boot-graal

Our app can now be accessed at http://localhost:8087/hello.

If you want to simply run a native Spring Boot app without doing all the described steps yourself, you're encouraged to use the example project's Docker image released on hub.docker.com/r/jonashackt/spring-boot-graalvm . Simply run the pre-packaged app by executing: docker run jonashackt/spring-boot-graalvm:latest

Travis CI & Heroku Container Registry & Runtime to save us from ‘exit status 137’ errors

As we move forward to deploy our app on Heroku, we shouldn’t forget to create a Heroku app if we haven’t already:

heroku create spring-boot-graal

As we plan to use Heroku in “Docker mode”, we need to set the Heroku stack to container also:

heroku stack:set container --app spring-boot-graal

Sadly we can’t use the instructions inside the post on Running Spring Boot on Heroku with Docker, JDK 11 & Maven 3.5.x in our case here. Using them, we would run into the following error:

Error: Image build request failed with exit status 137
real  2m51.946s
user  2m9.594s
sys 0m19.085s
The command '/bin/sh -c source "$HOME/.sdkman/bin/sdkman-init.sh" && ./compile.sh' returned a non-zero code: 137

This error usually appears when Docker does not have enough memory. And since the free Heroku dyno only guarantees us 512 MB of RAM 🙁 (see Dyno Types), we won't get far with our GraalVM native compilation here.

But as the docs state, building Docker images with heroku.yml isn't the only option to run Docker containers on Heroku. Luckily there's another way: using the Container Registry & Runtime (Docker Deploys). This allows us to decouple the Docker image build process (which is so memory-hungry!) from running our Docker container.

Work around the Heroku 512 MB RAM cap: Compiling Heroku-ready Docker images with TravisCI

So we need to shift the Docker build process onto another CI cloud platform like TravisCI. It already proved to work directly on the Travis virtual host, so why not also use the Travis Docker service?

Logo sources: Docker logo, GitHub logo, TravisCI logo, Heroku logo, Spring Boot logo, Computer logo, GraalVM logo

And as we know how to do the native compilation of our Spring Boot app inside a Docker container, the required native-image-compile.yml becomes extremely simple:

dist: bionic
language: minimal

services:
  - docker

script:
  # Compile App with Docker
  - docker build . --tag=spring-boot-graal

The example project's native-image-compile.yml additionally implements a separate build job 'Native Image compile on Travis Host' to show how GraalVM Native Image compilation can also be done on TravisCI without Docker.

But also on Travis, we need to brace ourselves against the 'Error: Image build request failed with exit status 137' error. This one happened to me many times before I really solved the issue!

Using native-image with the --no-server option and a suitable -J-Xmx parameter

As mentioned in the Spring docs, we should use the --no-server option when running Native Image compilations with Spring for now. But what does this parameter do to our Native Image compilation process? As the official docs state:

Another prerequisite to consider is the maximum heap size. Physical memory for running a JVM-based application may be insufficient to build a native image. For server-based image building we allow to use 80% of the reported physical RAM for all servers together, but never more than 14 GB per server (for exact details please consult the native-image source code). If you run with --no-server option, you will get the whole 80% of what is reported as physical RAM as the baseline. This mode respects -Xmx arguments additionally.

We could leave out the --no-server option in order to reduce the amount of memory our native image compilation consumes. But there's an open GraalVM issue in combination with Spring which makes image building without --no-server sometimes unreliable. Luckily I found a hint in this GitHub issue that we can configure the amount of memory the --no-server option takes in total. This is done with the help of an Xmx parameter like -J-Xmx4G:

time native-image \
  --no-server -J-Xmx4G \
  --no-fallback \
  --initialize-at-build-time \
  -H:+TraceClassInitialization \
  -H:Name=$ARTIFACT \
  -H:+ReportExceptionStackTraces \
  -Dspring.graal.remove-unused-autoconfig=true \
  -Dspring.graal.remove-yaml-support=true \
  -cp $CP $MAINCLASS;

Using that option in our native-image command like this, we can reliably restrict the amount of memory to 4 GB of RAM. And this should be enough for TravisCI, since it provides us with more than 6 GB using the Docker service (see this build for example). Using the option results in the following output for a native image compilation of our Spring Boot app:

08:07:23.999 [ForkJoinPool-2-worker-3] DEBUG io.netty.util.internal.PlatformDependent - maxDirectMemory: 4294967296 bytes (maybe)
...
[spring-boot-graal:215]   (typeflow): 158,492.53 ms,  4.00 GB
[spring-boot-graal:215]    (objects):  94,986.72 ms,  4.00 GB
[spring-boot-graal:215]   (features): 104,518.36 ms,  4.00 GB
[spring-boot-graal:215]     analysis: 368,005.35 ms,  4.00 GB
[spring-boot-graal:215]     (clinit):   3,107.18 ms,  4.00 GB
[spring-boot-graal:215]     universe:  12,502.04 ms,  4.00 GB
[spring-boot-graal:215]      (parse):  22,617.13 ms,  4.00 GB
[spring-boot-graal:215]     (inline):  10,093.57 ms,  3.49 GB
[spring-boot-graal:215]    (compile):  82,256.99 ms,  3.59 GB
[spring-boot-graal:215]      compile: 119,502.78 ms,  3.59 GB
[spring-boot-graal:215]        image:  12,087.80 ms,  3.59 GB
[spring-boot-graal:215]        write:   3,573.06 ms,  3.59 GB
[spring-boot-graal:215]      [total]: 558,194.13 ms,  3.59 GB

real  9m22.984s
user  24m41.948s
sys 2m3.179s

The one thing to take into account is that native image compilation will be a bit slower now. So if you run on your local machine with lots of memory (I hear you Jan with your 64 GB “Rechenzentrum” 🙂 ), feel free to erase the -J-Xmx4G parameter.

Pushing our dockerized native Spring Boot app to Heroku Container Registry

Now we should be able to finally push the built Docker image into Heroku's Container Registry, from where we're able to run our Spring Boot native app later on. Therefore we need to configure some environment variables in our TravisCI job's settings in order to push to Heroku's Container Registry. The first one, HEROKU_USERNAME, should hold your Heroku email address and HEROKU_PASSWORD your Heroku API key. Be sure to avoid displaying the values in the build log.
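Setting those variables is easiest done in the repository settings of the TravisCI web UI. If you prefer the Travis CLI instead, something along these lines should work as well, where the values are of course just placeholders and the --private flag should keep them out of the build log:

travis env set HEROKU_USERNAME your-heroku-email@example.com --private
travis env set HEROKU_PASSWORD your-heroku-api-key --private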

With the following configuration inside our native-image-compile.yml, we should be able to successfully log in to Heroku Container Registry:

script:
  # Login into Heroku Container Registry first, so that we can push our Image later
  - echo "$HEROKU_PASSWORD" | docker login -u "$HEROKU_USERNAME" --password-stdin registry.heroku.com

Now after a successful Docker build that compiles our Spring Boot app into a native executable, we finally need to push the resulting Docker image into Heroku Container Registry. Therefore we need to use the correct tag for our Docker image build (see the docs):

docker build . --tag=registry.heroku.com/yourAppName/HerokuProcessType
docker push registry.heroku.com/yourAppName/HerokuProcessType

For our example application the concrete docker build and docker push commands inside the native-image-compile.yml look like this:

- docker build . --tag=registry.heroku.com/spring-boot-graal/web
- docker push registry.heroku.com/spring-boot-graal/web

Releasing our dockerized native Spring Boot app on Heroku container infrastructure

The final step after a successful docker push is to release our native Spring Boot app on Heroku container infrastructure. Since May 2018 this is always the last step to really run an app on Heroku using Docker (before that, a push was all you had to do).

There are two ways to achieve this according to the docs: either through the CLI via heroku container:release web, or with the API. The former would require us to install the Heroku CLI inside TravisCI; the latter should work out of the box. So let's craft the required curl command:

curl -X PATCH https://api.heroku.com/apps/spring-boot-graal/formation \
          -d '{
                "updates": [
                {
                  "type": "web",
                  "docker_image": "'"$(docker inspect registry.heroku.com/spring-boot-graal/web --format={{.Id}})"'"
                }]
              }' \
          -H "Content-Type: application/json" \
          -H "Accept: application/vnd.heroku+json; version=3.docker-releases" \
          -H "Authorization: Bearer $DOCKER_PASSWORD"

This command is even better than the one documented in the official Heroku docs. It already incorporates the docker inspect registry.heroku.com/spring-boot-graal/web --format={{.Id}} command to retrieve the required Docker image ID. Additionally, it avoids having to log in with the Heroku CLI beforehand to create the ~/.netrc file mentioned in the docs. This is because we simply use -H "Authorization: Bearer $DOCKER_PASSWORD" here (where $DOCKER_PASSWORD is our Heroku API key).

The problem with Travis: it does not understand our nice curl command, since it interprets it totally wrong, even if we mind the correct multiline usage. I guess our Java User Group Thüringen speaker Kai Tödter already knew that restriction of some CI systems, and that's why he crafted a bash script for exactly that purpose. So at that point I started to work with a simple script called heroku-release.sh in order to release our Spring Boot app on Heroku:

#!/usr/bin/env bash

herokuAppName=$1
dockerImageId=$(docker inspect registry.heroku.com/$herokuAppName/web --format={{.Id}})

curl -X PATCH https://api.heroku.com/apps/$herokuAppName/formation \
          -d '{
                "updates": [
                {
                  "type": "web",
                  "docker_image": "'"$dockerImageId"'"
                }]
              }' \
          -H "Content-Type: application/json" \
          -H "Accept: application/vnd.heroku+json; version=3.docker-releases" \
          -H "Authorization: Bearer $DOCKER_PASSWORD"

Using this script, we finally have our fully working native-image-compile.yml ready:

dist: bionic
language: minimal

services:
  - docker

script:
  # Login into Heroku Container Registry first, so that we can push our Image later
  - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin registry.heroku.com

  # Compile App with Docker
  - docker build . --tag=registry.heroku.com/spring-boot-graal/web

  # Push to Heroku Container Registry
  - docker push registry.heroku.com/spring-boot-graal/web

  # Release Dockerized Native Spring Boot App on Heroku
  - ./heroku-release.sh spring-boot-graal

That’s it! After the next successful TravisCI build, we should be able to see our natively compiled and dockerized Spring Boot app running on Heroku at https://spring-boot-graal.herokuapp.com/hello

You can even use the heroku logs command to see what's happening behind the scenes:

$ heroku logs -a spring-boot-graal

2020-04-24T12:02:14.562471+00:00 heroku[web.1]: State changed from down to starting
2020-04-24T12:02:41.564599+00:00 heroku[web.1]: State changed from starting to up
2020-04-24T12:02:41.283549+00:00 app[web.1]:
2020-04-24T12:02:41.283574+00:00 app[web.1]: .   ____          _            __ _ _
2020-04-24T12:02:41.283575+00:00 app[web.1]: /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
2020-04-24T12:02:41.283575+00:00 app[web.1]: ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
2020-04-24T12:02:41.283576+00:00 app[web.1]: \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
2020-04-24T12:02:41.283576+00:00 app[web.1]: '  |____| .__|_| |_|_| |_\__, | / / / /
2020-04-24T12:02:41.283578+00:00 app[web.1]: =========|_|==============|___/=/_/_/_/
2020-04-24T12:02:41.286498+00:00 app[web.1]: :: Spring Boot ::
2020-04-24T12:02:41.286499+00:00 app[web.1]:
2020-04-24T12:02:41.287774+00:00 app[web.1]: 2020-04-24 12:02:41.287  INFO 3 --- [           main] i.j.s.SpringBootHelloApplication         : Starting SpringBootHelloApplication on 1c7f1944-1f01-4284-8931-bc1a0a2d1fa5 with PID 3 (/spring-boot-graal started by u11658 in /)
2020-04-24T12:02:41.287859+00:00 app[web.1]: 2020-04-24 12:02:41.287  INFO 3 --- [           main] i.j.s.SpringBootHelloApplication         : No active profile set, falling back to default profiles: default
2020-04-24T12:02:41.425964+00:00 app[web.1]: 2020-04-24 12:02:41.425  WARN 3 --- [           main] io.netty.channel.DefaultChannelId        : Failed to find the current process ID from ''; using a random value: -36892848
2020-04-24T12:02:41.427326+00:00 app[web.1]: 2020-04-24 12:02:41.427  INFO 3 --- [           main] o.s.b.web.embedded.netty.NettyWebServer  : Netty started on port(s): 59884
2020-04-24T12:02:41.430874+00:00 app[web.1]: 2020-04-24 12:02:41.430  INFO 3 --- [           main] i.j.s.SpringBootHelloApplication         : Started SpringBootHelloApplication in 0.156 seconds (JVM running for 0.159)

Running Spring Boot apps as GraalVM Native Images with Docker is really cool!

Being able to leverage the power of containers together with the benefits of Spring Boot & GraalVM Native Image really takes us to a new level! Now we're able to build and run our native Spring Boot apps nearly everywhere. If we keep a few basic conditions in mind, we can build our native apps on pretty much every Continuous Integration cloud platform, be it TravisCI, CircleCI or something else. And having built them there, we can simply run them everywhere. As a first example, we saw how to run our native apps on Heroku in this article, and we now know what to watch out for. Having Continuous Integration & Delivery in place, we're again back in calmer waters.

But wait! Didn’t we use GraalVM Native Image compilation to be able to really benefit from cloud-native platforms like Kubernetes? As we reduced the memory footprint and startup time of our Spring Boot app tremendously and are able to ship those native apps inside Docker containers as well, we have everything in place to run our apps inside a Kubernetes cluster! Just as we’re used to from all those hip Quarkus.io or Go apps. 🙂 So as always: Stay tuned for follow-up posts!
