
Log Management for Spring Boot Applications with Logstash, Elasticsearch and Kibana

29.10.2014 | 5 minutes of reading time

In this blog post you will get a brief overview of how to quickly set up a log management solution with the ELK stack (Elasticsearch, Logstash, Kibana) for Spring Boot based microservices. I will show you two ways to parse your application logs and ship them to the Elasticsearch instance. Basically, you can replace Spring Boot with any other application framework that uses Logback, Log4J or any other well-known Java logging framework, so this is also interesting for people who are not using Spring Boot.

This post doesn’t contain detailed insights into the technologies used, but you will find plenty of information about them on the web. Before we start, have a short look at Elasticsearch, Logstash and Kibana. A good starting point is the elasticsearch.org website with its many resources and interesting webinars. My codecentric colleagues have also already blogged about some topics in this area. The reason I selected Spring Boot for this demo is that we are actually using it in some projects and I believe it will help to make the next big step in the area of enterprise Java architectures. With this microservice-based approach there will be a lot more logfiles to monitor, so a solution for this is definitely needed.

First of all, clone the example repository into your workspace and change into the root directory of the project.

git clone http://github.com/denschu/elk-example
cd elk-example

The Spring Boot example application is a small batch job which is located in the directory “loggging-example-batch”. Start the JVM with the following commands:

cd loggging-example-batch/
mvn spring-boot:run

Take a look inside “/tmp/server.log”. There you will find some log statements like these:

2014-10-10 17:21:10.358  INFO 11871 --- [           main] .t.TomcatEmbeddedServletContainerFactory : Server initialized with port: 8090
2014-10-10 17:21:10.591  INFO 11871 --- [           main] o.apache.catalina.core.StandardService   : Starting service Tomcat
2014-10-10 17:21:10.592  INFO 11871 --- [           main] org.apache.catalina.core.StandardEngine  : Starting Servlet Engine: Apache Tomcat/7.0.55
2014-10-10 17:21:10.766  INFO 11871 --- [ost-startStop-1] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring embedded WebApplicationContext
2014-10-10 17:21:10.766  INFO 11871 --- [ost-startStop-1] o.s.web.context.ContextLoader            : Root WebApplicationContext: initialization completed in 2901 ms
2014-10-10 17:21:11.089  INFO 11322 [main] --- s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8090/http
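The sample application writes its log to this fixed location. In Spring Boot this can be configured with a single property; a minimal sketch, assuming the sample uses application.properties rather than a custom logback.xml:

# src/main/resources/application.properties
logging.file=/tmp/server.log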

The question now is: How can we transport and parse those log statements? So let’s set up the ELK stack and try out two methods of parsing and shipping these logfiles with Logstash.

Preparation

Elasticsearch

Open a new shell and download the Elasticsearch archive. Afterwards you can directly start the instance.

curl -O https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.1.1.tar.gz
tar zxvf elasticsearch-1.1.1.tar.gz
./elasticsearch-1.1.1/bin/elasticsearch
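Elasticsearch listens on port 9200 by default. To verify that the instance is up, query the REST endpoint in a separate shell; it responds with a small JSON status document:

curl http://localhost:9200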

Kibana

In another shell, download Kibana and extract the contents of the archive. It contains the JavaScript-based dashboard, which you can serve with any HTTP server. In this example we use a lightweight Python-based HTTP server.

curl -O https://download.elasticsearch.org/kibana/kibana/kibana-3.1.0.tar.gz
tar zxvf kibana-3.1.0.tar.gz
cd kibana-3.1.0/
python -m SimpleHTTPServer 8087
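Note that SimpleHTTPServer is the Python 2 module; if you only have Python 3 available, the equivalent command is:

python3 -m http.server 8087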

Open the preconfigured Logstash dashboard in Kibana and check whether it successfully connects to your running Elasticsearch server. By default it uses the URL “http://localhost:9200” (see config.js to modify it).

http://localhost:8087/index.html#/dashboard/file/logstash.json

Logstash Agent

To collect the logfiles and transport them to our log server we use Logstash. Open a new shell and execute this:

curl -O https://download.elasticsearch.org/logstash/logstash/logstash-1.4.2.tar.gz
tar zxvf logstash-1.4.2.tar.gz

Method 1: Parse unstructured logfiles with Grok

The most widely used method of parsing logs is to create a Grok filter that is able to extract the relevant data from each log statement. I have created such a Grok filter for the standard Logback configuration that Spring Boot actually uses.

input {
  stdin {}
  file {
    path => [ "/tmp/server.log" ]
  }
}
filter {
  multiline {
    pattern => "^(%{TIMESTAMP_ISO8601})"
    negate => true
    what => "previous"
  }
  grok {
    # Do multiline matching with (?m) as the above multiline filter may add newlines to the log messages.
    match => [ "message", "(?m)^%{TIMESTAMP_ISO8601:logtime}%{SPACE}%{LOGLEVEL:loglevel} %{SPACE}%{NUMBER:pid}%{SPACE}%{SYSLOG5424SD:threadname}%{SPACE}---%{SPACE}%{JAVACLASSSHORT:classname}%{SPACE}:%{SPACE}%{GREEDYDATA:logmessage}" ]
  }
}
output {
  elasticsearch { host => "localhost" }
}
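Applied to one of the log lines above, the Grok filter produces an event with separate fields, roughly like this (a sketch; Logstash adds further metadata such as @timestamp and @version on its own):

{
  "logtime":    "2014-10-10 17:21:10.591",
  "loglevel":   "INFO",
  "pid":        "11871",
  "threadname": "[           main]",
  "classname":  "o.apache.catalina.core.StandardService",
  "logmessage": "Starting service Tomcat"
}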

To be able to parse the Java class name correctly, I created an additional pattern (JAVACLASSSHORT). Add it to the patterns directory of the Logstash agent:

cp custompatterns logstash-1.4.2/patterns/
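The pattern has to match Spring Boot’s abbreviated logger names such as o.s.web.context.ContextLoader. A simplified sketch of such a definition (the exact regex in the repository may differ):

JAVACLASSSHORT (?:[a-zA-Z0-9-]+\.)+[A-Za-z0-9$\[\]/]+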

Run Logstash Agent

Start the Logstash agent with the Spring Boot log configuration from above. It’s already placed in logstash-spring-boot.conf.

./logstash-1.4.2/bin/logstash agent -v -f logstash-spring-boot.conf
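Since the configuration also contains a stdin input, you can test the Grok filter by piping a single log line directly into the agent. If you temporarily add stdout { codec => rubydebug } to the output section, the parsed event is printed to the console:

echo "2014-10-10 17:21:10.591  INFO 11871 --- [           main] o.apache.catalina.core.StandardService   : Starting service Tomcat" | ./logstash-1.4.2/bin/logstash agent -f logstash-spring-boot.conf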

Now start a job using this cURL command:

curl --data 'jobParameters=pathToFile=classpath:partner-import.csv' localhost:8090/batch/operations/jobs/flatfileJob

Open the preconfigured Logstash dashboard in Kibana again and you will see the incoming log statements:

http://localhost:8087/index.html#/dashboard/file/logstash.json

Method 2: Use JSON Logback Encoder

One big disadvantage of method 1 is that it’s sometimes not so easy to create a fully working Grok pattern that is able to parse unstructured logfiles. The Spring Boot default log format is one of the better ones, because it uses fixed columns. An alternative is to create the log statements directly in JSON format. To achieve that, add the following artifact (it’s already included in the sample application!) to the pom.xml:

<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>2.5</version>
</dependency>

… and add this special Logstash encoder to the Logback configuration file “logback.xml” (it’s also already included in the sample application!):

<encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
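In context, the encoder is attached to a file appender that produces the JSON logfile; a minimal logback.xml sketch (the appender name is chosen here for illustration, the file path matches the Logstash configuration below):

<configuration>
  <appender name="JSON_FILE" class="ch.qos.logback.core.FileAppender">
    <file>/tmp/server.log.json</file>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
  </appender>
  <root level="INFO">
    <appender-ref ref="JSON_FILE"/>
  </root>
</configuration>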

The new Logstash Configuration (logstash-json.conf) is now much smaller and easier to read:

input {
  file {
    path => [ "/tmp/server.log.json" ]
    codec => json {
      charset => "UTF-8"
    }
  }
}

output {
  elasticsearch { host => "localhost" }
}
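Each log statement now arrives as one self-describing JSON line that needs no Grok parsing at all, roughly like this (a sketch; the exact field set depends on the encoder version):

{"@timestamp":"2014-10-10T17:21:10.591+02:00","@version":1,"message":"Starting service Tomcat","logger_name":"org.apache.catalina.core.StandardService","thread_name":"main","level":"INFO","level_value":20000}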

Alternative Log Shippers

The Logstash agent runs with a memory footprint (up to 1 GB) that is not well suited to small servers (e.g. EC2 micro instances). For our demo here it doesn’t matter, but especially in microservice environments it is recommended to switch to another log shipper, e.g. the Logstash Forwarder (aka Lumberjack). For more information about it, please refer to this link. By the way, for the JavaScript folks there is also a Node.js implementation of Logstash available.
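A forwarder setup consists of a small JSON configuration on each application server and a matching lumberjack input on the central Logstash instance. A minimal sketch (hostname, port and certificate paths are placeholders):

{
  "network": {
    "servers": [ "logserver.example.com:5043" ],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    { "paths": [ "/tmp/server.log.json" ] }
  ]
}

The corresponding input section on the Logstash side would then look like this:

input {
  lumberjack {
    port => 5043
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}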

To sum it up, the ELK stack (Elasticsearch, Logstash, Kibana) is a good combination for setting up a complete log management solution with open source technologies only. For larger environments with a high volume of logs it may be useful to add an additional transport such as Redis to decouple the components (log server, log shipper) and make the setup more reliable. In the near future I will post about some other topics in the area of microservices. So stay tuned and give some feedback 🙂
