
Spring Boot & Apache CXF – Logging & Monitoring with Logback, Elasticsearch, Logstash & Kibana

26.7.2016 | 28 minutes reading time

Cool! SOAP endpoints based on microservice technologies. But how do we find an error inside one of our many “micro servers”? What about the content of our SOAP messages, and how do we log in general? And last but not least: how many products did we sell over the last period? Sounds like we'll need another blog article dealing with logging and monitoring with Spring Boot and Apache CXF!

Spring Boot & Apache CXF – Tutorial

Part 1: Spring Boot & Apache CXF – How to SOAP in 2016
Part 2: Spring Boot & Apache CXF – Testing SOAP web services
Part 3: Spring Boot & Apache CXF – XML validation and custom SOAP faults
Part 4: Spring Boot & Apache CXF – Logging & Monitoring with Logback, Elasticsearch, Logstash & Kibana
Part 5: Spring Boot & Apache CXF – SOAP on steroids fueled by cxf-spring-boot-starter

After the previous three articles of this blog series, working with Spring Boot & Apache CXF should feel like an increasingly familiar task. We set up new SOAP endpoints, test them with unit and integration tests, and respond with XML-schema-compliant messages – even when the requests are incorrect XML fragments. But before we put our first SOAP service into a production environment, we want to know in detail what our SOAP messages contain when they travel over the wire. Not only to achieve reasonable collaboration during testing – we simply need to know what the inbound and outbound SOAP messages are comprised of.

And at the latest when we make our way to production, the stakeholders from the non-IT departments will ask for real numbers explaining how often our services are being called – and maybe also how many products we sold over the last period, and so forth. Additionally, our smart architecture evolves into something like the standard thing to do whenever a SOAP endpoint is needed in a corporate project. The number of servers therefore grows rapidly, and we can't manage to look into each and every machine any more just to know what messages it is processing right at that moment.

We need transparency over all our SOAP messages. But how do we log with Apache CXF? Which framework should we use? And how do we satisfy the questions from the non-IT departments that flood us, without being forced to dig into every single server's log files? We'll try to answer all of those questions, step by step. Let's just catch a breath for the moment – and then start!

A consistent logging framework: slf4j and Logback

As usual, you can reproduce every step on your own – the GitHub repository tutorial-soap-spring-boot-cxf is waiting for you. The entire following step can be found in the project step6_soap_message_logging and is based on the second-to-last step from the second part of this blog series, Testing SOAP Web Services. This is mainly because you won't necessarily need custom SOAP faults, so we start from a common basis.

Initially one could ask which logging framework we should use in our architecture. The usage of the Simple Logging Facade for Java (slf4j) is something we surely don't need to discuss, and Logback represents a really good implementation of it. The leading position of both frameworks convinced the Spring Boot team to set them as the standard inside the Spring project. Sadly, this is not the case with Apache CXF for now, which makes use of Java SE logging from java.util.logging. But there's a remedy that helps us find one logging framework as a common ground for Spring Boot, Apache CXF and our own implementation: from version 2.2.8 onwards, Apache CXF is completely configurable as to which logging framework the whole stack should use. Knowing this, we configure slf4j right now. In order to do so, we create a folder META-INF with another one named cxf inside src/main/resources. There we create a file org.apache.cxf.Logger containing only one line:

org.apache.cxf.common.logging.Slf4jLogger

And we are done. From the next startup onwards, our whole implementation will use slf4j and our Logback configuration. So now we are in the comfortable position of being able to configure every log statement with our logback-spring.xml:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/base.xml"/>
    <logger name="org.springframework" level="WARN"/>
    <logger name="org.apache.cxf" level="INFO"/>
    <logger name="de.jonashackt.tutorial" level="DEBUG"/>

    <appender name="file" class="ch.qos.logback.core.FileAppender">
        <file>weather-service.log</file>
        <append>true</append>
        <encoder>
            <pattern>%-4relative [%thread] %-5level %logger{35} - %msg%n</pattern>
        </encoder>
    </appender>

    <root level="INFO">
        <appender-ref ref="file" />
    </root>
</configuration>

The Logback documentation contains a detailed introduction on how to configure it. For our purpose this quite simple configuration will be sufficient. First we include the Logback base configuration from Spring Boot, so we have a good foundation as a starting point. Second, we configure our main frameworks and implementations with appropriate logging levels – as an example, Apache CXF will run on “INFO” level. This configuration is really just an example, and you can play around with it to match your exact needs. After that we configure a simple log file appender, containing a file name and a simple pattern.

Just a quick note: to produce logging events inside our code, we just use an org.slf4j.Logger and instantiate a concrete logger with the help of the method getLogger(Class clazz) from the class org.slf4j.LoggerFactory:

private static final Logger LOG = LoggerFactory.getLogger(ClassWhereWeUseThisLoggerInside.class);

After that we are free to use the whole bandwidth of logging methods like .info(), .debug() and so on.
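A couple of purely illustrative statements (the messages and the variables zipCode and e are made up for this example, they are not part of the tutorial's code):

// purely illustrative log statements – zipCode and e are hypothetical variables
LOG.info("WeatherService endpoint is up and running");
LOG.debug("Received forecast request for ZIP {}", zipCode);
LOG.error("Could not process forecast request", e);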

How to configure SOAP message logging on Apache CXF endpoints

To make Apache CXF log our SOAP messages, the configuration of a LoggingFeature inside our WebServiceConfiguration will be sufficient. This can be done globally on the SpringBus:

@Bean(name = Bus.DEFAULT_BUS_ID)
public SpringBus springBus() {
    SpringBus springBus = new SpringBus();
    LoggingFeature logFeature = new LoggingFeature();
    logFeature.setPrettyLogging(true);
    logFeature.initialize(springBus);
    springBus.getFeatures().add(logFeature);
    return springBus;
}

Now every endpoint of our SOAP server will log all incoming and outgoing messages. A second option is to configure the logging directly on the Apache CXF endpoint. The configuration looks quite similar:

@Bean
public Endpoint endpoint() {
    EndpointImpl endpoint = new EndpointImpl(springBus(), weatherService());
    endpoint.setServiceName(weather().getServiceName());
    endpoint.setWsdlLocation(weather().getWSDLDocumentLocation().toString());
    endpoint.publish(SERVICE_URL);

    LoggingFeature logFeature = new LoggingFeature();
    logFeature.setPrettyLogging(true);
    logFeature.initialize(springBus());
    endpoint.getFeatures().add(logFeature);

    return endpoint;
}

Whichever option we choose to configure SOAP message logging, we can now fire up one of our (single system) integration tests, like the WeatherServiceSystemTest, which provides everything necessary. Looking at our console, among other things we should be able to find the incoming SOAP message that Apache CXF logged, containing some header information like address, encoding and the HTTP headers:

2016-07-14 17:52:50.988  INFO 42064 --- [nio-8090-exec-1] o.a.c.s.W.WeatherService.WeatherService  : Inbound Message
----------------------------
ID: 1
Address: http://localhost:8090/soap-api/WeatherSoapService_1.0
Encoding: UTF-8
Http-Method: POST
Content-Type: text/xml; charset=UTF-8
Headers: {Accept=[*/*], cache-control=[no-cache], connection=[keep-alive], Content-Length=[662], content-type=[text/xml; charset=UTF-8], host=[localhost:8090], pragma=[no-cache], SOAPAction=["http://www.codecentric.de/namespace/weatherservice/GetCityForecastByZIP"], user-agent=[Apache CXF 3.1.6]}
Payload: <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetCityForecastByZIP xmlns="http://www.codecentric.de/namespace/weatherservice/general" xmlns:ns2="http://www.codecentric.de/namespace/weatherservice/exception" xmlns:ns3="http://www.codecentric.de/namespace/weatherservice/datatypes" xmlns:xmime="http://www.w3.org/2005/05/xmlmime">
      <ForecastRequest>
        <ZIP>99425</ZIP>
        <flagcolor>blackblue</flagcolor>
        <productName>ForecastBasic</productName>
        <ForecastCustomer>
          <Age>67</Age>
          <Contribution>500</Contribution>
          <MethodOfPayment>Bitcoin</MethodOfPayment>
        </ForecastCustomer>
      </ForecastRequest>
    </GetCityForecastByZIP>
  </soap:Body>
</soap:Envelope>

This is only a first step, but it should already be sufficient to know what's going on over the wire.

Tailor Apache CXF's SOAP message log statements

Up to this point everything has been running very smoothly. But we actually want to decide for ourselves what the log statements should look like. For example, there could be the need to log only the payload's content, which represents our SOAP message. This should be no problem, considering the knowledge about the Apache CXF interceptor chains from the last part of this article series. Let's dig into the CXF class org.apache.cxf.interceptor.LoggingInInterceptor. The method protected void logging(Logger logger, Message message) populates an org.apache.cxf.interceptor.LoggingMessage object with all the necessary information for the standard log output – as we just saw in the preceding paragraph. Apart from the encoding, HTTP method and so forth, our payload is stored here. The method's last statement calls another method, formatLoggingMessage(LoggingMessage loggingMessage), which is really simple: it only calls toString() on the populated LoggingMessage object. That's exactly our starting point. We just derive our own class from org.apache.cxf.interceptor.LoggingInInterceptor and override the method formatLoggingMessage(LoggingMessage loggingMessage). And now we are in charge of how Apache CXF will log our SOAP messages.

All the following steps are again prepared inside the GitHub example project step7_soap_message_logging_payload_only. And off we go! Let's create a class LoggingInInterceptorXmlOnly.java and override the mentioned method like this:

import org.apache.cxf.interceptor.LoggingInInterceptor;
import org.apache.cxf.interceptor.LoggingMessage;

public class LoggingInInterceptorXmlOnly extends LoggingInInterceptor {

    @Override
    protected String formatLoggingMessage(LoggingMessage loggingMessage) {
        StringBuilder buffer = new StringBuilder();
        buffer.append("Inbound Message:\n");

        // Only write the payload (SOAP XML) to the logger
        if (loggingMessage.getPayload().length() > 0) {
            buffer.append(loggingMessage.getPayload());
        }
        return buffer.toString();
    }
}

To let Apache CXF use our own LoggingInInterceptor, we have to configure it as a Spring Bean and hook it into the incoming interceptor chain. Therefore we extend our WebServiceConfiguration:

@Bean
public AbstractLoggingInterceptor logInInterceptor() {
    LoggingInInterceptor logInInterceptor = new LoggingInInterceptorXmlOnly();
    // no setPrettyLogging(true) here: the inbound messages are already pretty-printed,
    // and setting it makes Apache CXF throw empty lines into the inbound messages
    return logInInterceptor;
}

What's important here: the bean has to be of the type org.apache.cxf.interceptor.AbstractLoggingInterceptor, and we should refrain from using the obvious method setPrettyLogging(true), because it will just do the exact opposite and deform our SOAP message by adding unattractive empty lines (with one exception: inside a test in IntelliJ, that log message still looks nice).

And we didn't overlook the “In” contained in LoggingInInterceptor – we have to do the same for our responses. For that we create a class LoggingOutInterceptorXmlOnly.java and derive it from org.apache.cxf.interceptor.LoggingOutInterceptor. Besides the log message containing “Outbound”, it is nearly identical to our inbound interceptor implementation (a sketch of the class follows right after the bean definition below). The corresponding Spring Bean in our WebServiceConfiguration will also deliver an AbstractLoggingInterceptor, but in this case we can go ahead and use the method setPrettyLogging(true) – at this point the Apache CXF implementation surprisingly differs completely from the incoming message logging:

@Bean
public AbstractLoggingInterceptor logOutInterceptor() {
    LoggingOutInterceptor logOutInterceptor = new LoggingOutInterceptorXmlOnly();
    logOutInterceptor.setPrettyLogging(true);
    return logOutInterceptor;
}
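For completeness, here is a rough sketch of what that outbound interceptor class could look like. It is not copied from the example project – it simply mirrors the inbound variant shown above, with the message prefix changed to “Outbound”:

import org.apache.cxf.interceptor.LoggingMessage;
import org.apache.cxf.interceptor.LoggingOutInterceptor;

public class LoggingOutInterceptorXmlOnly extends LoggingOutInterceptor {

    @Override
    protected String formatLoggingMessage(LoggingMessage loggingMessage) {
        StringBuilder buffer = new StringBuilder();
        buffer.append("Outbound Message:\n");

        // only write the payload (SOAP XML) to the logger
        if (loggingMessage.getPayload().length() > 0) {
            buffer.append(loggingMessage.getPayload());
        }
        return buffer.toString();
    }
}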

Finally we'll hook our own logging interceptors into the Apache CXF interceptor chains. And since we don't want to lose a single message, we also configure them into the fault chains that are executed in case of an error. All of this is done directly on the SpringBus inside our WebServiceConfiguration:

@Bean(name = Bus.DEFAULT_BUS_ID)
public SpringBus springBus() {
    SpringBus springBus = new SpringBus();
    springBus.getInInterceptors().add(logInInterceptor());
    springBus.getInFaultInterceptors().add(logInInterceptor());
    springBus.getOutInterceptors().add(logOutInterceptor());
    springBus.getOutFaultInterceptors().add(logOutInterceptor());
    return springBus;
}

As we fire up our WeatherServiceSystemTest, the SOAP messages inside our log statements contain only what we intended:

2016-07-15 08:35:05.522  INFO 45895 --- [nio-8090-exec-1] o.a.c.s.W.WeatherService.WeatherService  : Inbound Message:
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetCityForecastByZIP xmlns="http://www.codecentric.de/namespace/weatherservice/general" xmlns:ns2="http://www.codecentric.de/namespace/weatherservice/exception" xmlns:ns3="http://www.codecentric.de/namespace/weatherservice/datatypes" xmlns:xmime="http://www.w3.org/2005/05/xmlmime">
      <ForecastRequest>
        <ZIP>99425</ZIP>
        <flagcolor>blackblue</flagcolor>
        <productName>ForecastBasic</productName>
        <ForecastCustomer>
          <Age>67</Age>
          <Contribution>500</Contribution>
          <MethodOfPayment>Bitcoin</MethodOfPayment>
        </ForecastCustomer>
      </ForecastRequest>
    </GetCityForecastByZIP>
  </soap:Body>
</soap:Envelope>

Elasticsearch, Logstash, Kibana – How to log SOAP messages in 2016

When I learned about the possibilities of the ELK stack (or, more recently, the “Elastic Stack”), I was really enthusiastic. And I am not the only one, considering the countless articles on our codecentric blog covering the topic.

An idea of a colleague of mine made me think that it might be worth a try to map the requirements of monitoring SOAP messages onto the capabilities of an ELK stack. At the beginning we already mentioned that logging only to log files on one server out of many inside a big cluster of servers is probably not a good idea – especially if we set up a whole lot of servers and also think about staging environments that multiply the resulting number again. Not to mention the need to analyze our log data to form key performance indicators – ideally presented in nice-looking dashboards that not only IT nerds get something out of. And not to mention the need for mechanisms to drill into a specific case of error and look at the very specific SOAP request, so that we can analyze in detail what happened.

And last but not least, there was a driver behind my approach that really only came to my mind while writing this blog post: many solutions in the area of enterprise application integration try to sell themselves by promising great analysis features and dashboards. But in my experience these solutions often promise things they can't really deliver. For example, we want transparency over all the incoming and outgoing SOAP messages, which often has a massive performance impact on our integration applications – something we really want to avoid. On top of that, the cost of activating those nice-looking dashboards often exceeds our integration project's budget. As if we didn't already have enough problems inside those projects… Using an open source solution helps us proceed really fast in our project. If we gain the desired success, we can later add some nice enterprise features to our monitoring system, like alerting and security features or enterprise support, just to mention a few.

And the final point: I promised to use the Elastic stack in the first article of this blog series. So now I can deliver on my own promise. OK, let's go!

Initial ELK architecture

There are for sure countless possibilities to set up an Elastic stack. The search engine Elasticsearch is completely dedicated to scalability: if you fire up several instances inside the same network, they'll find each other automatically and connect into one joint cluster. In the same way, the process of shipping the logs – which is quite costly in some cases – could be buffered with some kind of queueing mechanism or the like. We have these options ready if our ELK installation gets really big, so we are steeled for huge amounts of data.

In our use case we want to start with a preferably simple approach for now. It should give us everything we need while remaining extensible. Colleagues of mine have described a whole bunch of solutions for bringing a Spring Boot application together with an ELK server – just have a look at our blog or the Softwerker special on Elasticsearch (German only at the moment). From all of them we just pick the most suitable alternative and expand it to cope with the requirements of SOAP message monitoring. And that's the point where an architectural picture comes in handy:

spring-boot-cxf-elk-architecture

Our SOAP endpoints, which we made available through the usage of Spring Boot and Apache CXF, log everything through slf4j and Logback after working through the starting paragraphs of this article. A really easy but nevertheless powerful approach is to utilize the logstash-logback-encoder, which will do all the heavy lifting of shipping our log events to the ELK server. And that comes with the additional benefit that we don't have to install or manage any separate log shipper or agent on our SOAP servers.

The logstash-logback-encoder's appenders deliver the log events to Logstash – already JSON encoded. Logstash will then index those log events and stuff them into Elasticsearch. Once every log event is pushed to Elasticsearch, we are able to search and visualize the results with the help of the web application Kibana. Optionally we can put an enterprise-firewall-friendly reverse proxy like Nginx in front to provide Kibana on port 80.

Right. That sounds like rocket science? But don't worry, we'll see it in a moment with the aid of our example project. Therefore – and you're already familiar with that – we have a new project step8_logging_into_elasticstack inside our GitHub repository.

Configuring the logstash-logback-encoder

Let's begin with the configuration of the logstash-logback-encoder. It comprises some encoders that preprocess our log events and put their contents into fields in JSON style (key and value). These standard fields make a good starting package for our later analysis of logs inside the ELK server.

But before we dig into the configuration of the logstash-logback-encoder, we have to add the appropriate dependency to our pom.xml:

<!-- Logging with ELK -->
<dependency>
  <groupId>net.logstash.logback</groupId>
  <artifactId>logstash-logback-encoder</artifactId>
  <version>4.6</version>
  <!-- Exclude the older version of logback-core, which conflicts with the newer one shipped by Spring Boot,
       see https://github.com/logstash/logstash-logback-encoder/issues/153 -->
  <exclusions>
    <exclusion>
      <artifactId>logback-core</artifactId>
      <groupId>ch.qos.logback</groupId>
    </exclusion>
  </exclusions>
</dependency>

As you can see, it's recommended to exclude the transitive dependency on Logback itself, because Spring Boot already brings its own version into the mix. And sadly, besides issue 153 there are some more headaches concerning the interaction of Spring Boot, logstash-logback-encoder and Logback. For your own wellbeing, I recommend for now that you stick with Spring Boot version 1.3.3.RELEASE. Otherwise you run into issue 160, which is actually a problem inside Logback 1.1.7. This error is fixed in 1.1.8, which isn't released yet and therefore not a Spring Boot ingredient for now. Hopefully the release schedule of Logback 1.1.8 will match the one of Spring Boot 1.4.0 – then the whole problem will be gone. If you can't wait to use a newer Spring Boot version, you could try overriding the Logback version inside the Maven properties tag (but I can't really recommend that):

<logback.version>1.1.6</logback.version>

But now back to the essence. To configure the encoder, we expand our logback-spring.xml known from the project step7_soap_message_logging_payload_only. We remove our FileAppender and substitute it with the appropriate appender from the logstash-logback-encoder:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/base.xml"/>
    <logger name="org.springframework" level="WARN"/>
    <logger name="de.jonashackt.tutorial" level="DEBUG"/>
    <logger name="org.apache.cxf" level="INFO"/>

    <!-- Logstash configuration -->
    <appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>192.168.99.100:5000</destination>
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <customFields>{"service_name":"WeatherService 1.0"}</customFields>
            <fieldNames>
                <message>log_msg</message>
            </fieldNames>
        </encoder>
        <keepAliveDuration>5 minutes</keepAliveDuration>
    </appender>

    <root level="INFO">
        <appender-ref ref="logstash" />
    </root>
</configuration>

Inside our example project we rely on the LogstashTcpSocketAppender, which is one of the many available appender variants. As a next step, the alternative usage of an async appender would be imaginable, resulting in another dimension of decoupling the generation of logs from our productive web service calls. You could achieve a similar effect by leveraging an intermediary Redis, for example.

But for now and for most cases, the LogstashTcpSocketAppender will suffice, because it will also never block a logging thread. Internally, all TcpAppenders work asynchronously with the help of the LMAX Disruptor RingBuffer. In the unlikely event of the ring buffer being overrun, the log events are dropped to prevent our system from crashing. But for those extreme situations the mentioned alternative solutions could be worth a look. In each and every case we avoid any effects on the performance of our SOAP endpoints, while at the same time gaining complete transparency over every incoming and outgoing message. Hence we are one step ahead of the expensive enterprise application integration suites.

Inside our logback-spring.xml we forward the destination, containing our Elastic server's IP and the Logstash port, to the LogstashTcpSocketAppender. We also add two field definitions inside the LogstashEncoder: first, we configure a custom field containing the name of our SOAP web service, which can be evaluated later on in Kibana. Second, we optionally rename the standard field message to log_msg, just to provide better readability and identification in Kibana.

Finally, we configure a keepAlive every five minutes on the connection between our SOAP server and the Elastic server – just to provide a bit more reliability.

Set up our ELK server and configure Logstash

As we already learned, there are manifold options to set up an Elastic server. Again we'll use a simple but powerful setup: cloning a GitHub repository and calling docker-compose up inside the root folder – pretty simple, right?! The only prerequisites are a running Docker installation together with Docker Compose; Homebrew on the Mac has both available for you. Many thanks at this point to deviantony for providing this easy ELK setup. 🙂

Just one moment before we give in to the temptation of firing up our Elastic server: we should first configure Logstash. An example configuration is again provided inside the mentioned GitHub repository, in the folder logstash/config/logstash.conf. You can ignore the mutate filter that is used there for now:

input {
  tcp {
    port => 5000
  }
}

filter {
  ### parse all fields that were extracted by the logstash-logback-encoder
  json {
    source => "message"
  }
  ### filter out keep-alive messages, which have no valid JSON format and produce a _jsonparsefailure in tags
  if "_jsonparsefailure" in [tags] {
      drop { }
  }
}

output {
  elasticsearch {
      hosts => "elasticsearch:9200"
  }
}

Every Logstash configuration is comprised of three building blocks: input, filter and output. Inside the input block we configure our entry channel. In our concrete use case we utilize the input plugin tcp – one of the many possible input plugins.

Inside the filter block we harness one of the likewise many filter plugins. Based on the fields filled by the logstash-logback-encoder, the json filter plugin expands the contained JSON into actual Logstash data structures, which will then be pushed to Elasticsearch. As we configured a keepAlive in our logstash-logback-encoder, we have to filter the keep-alive messages out again, because we don't want ugly “_jsonparsefailure” log statements inside our Kibana Discovery perspective.

In the last section we configure the goal of all of this inside our output block: the port of our Elasticsearch instance. Now we can finally bring our Elastic server to life by submitting docker-compose up on the command line.

Starting the log analysis with Kibana

It will take some time, so feel free to grab a coffee. But eventually our Elastic server has started and all the components like Logstash, Elasticsearch and Kibana are running. Depending on the IP of your Docker host and on your setup, the URL to Kibana can differ slightly. If your Docker host has the IP 192.168.99.100, the URL to Kibana should look like this: http://192.168.99.100:5601/app/kibana. The port is defined inside the docker-compose.yml of our ELK setup. If you open up your browser, Kibana should look like this:

kibana_first_screen

At this point our Elastic server seems to be running. As a next step, we should create an index pattern under Settings/Indices. We just accept the preallocated logstash-* and click Create. After that, Kibana should show a list of fields:

kibana-create-index

Now we eventually want to produce some log statements and evaluate whether our “initial ELK architecture” is working in the field. Therefore we start a single system integration test (see part 2 of this blog series) like the WeatherServiceSystemTest. This should be a no-brainer, so that we can focus on working with Kibana. Going back to the Discover page, we see our first log statements:

kibana-first-logs

With the help of the add buttons we can configure our Discovery perspective to use the indexed fields as column headings. For example, the custom field service_name and the output of the actual log event inside log_msg should be interesting. The time heading is always presented as the first column:

kibana-fields-added

And that's it! Now our SOAP server based on Spring Boot and Apache CXF is logging into our Elastic server. From now on, the play instinct of some readers will come to life, because now we are able to leverage all the power of the analysis and visualization tooling provided in Kibana – and we can produce some of the shiny dashboards our marketing staff will envy us for.

If we really want to exhaust all the possibilities, we could optimize the data that is flowing into our Elastic server. For instance, we could stuff our SOAP messages into their own Elasticsearch fields, so that we can evaluate them far better. Additionally, we want to know which log statements belong to a specific SOAP request. Erm… Let's actually build that right now. The play instinct of the author is also coming through. 🙂

Logging SOAP messages into their own Elasticsearch fields

To answer all the questions of the non-IT departments, it will pay off to have custom fields especially for the inbound and outbound SOAP messages. That's because an evaluation on a specific Elasticsearch field is done far more easily later on in Kibana, and sometimes it's just impossible otherwise. Therefore we need a concept to push the SOAP messages logged by Apache CXF to Elasticsearch, residing inside their own fields.

Again there's a whole bunch of possible solutions. But one of them is easy to use and at the same time really powerful when it comes to its features. The elegance of the concept is quite thrilling: we just use the Mapped Diagnostic Context – in short, MDC. As part of the slf4j API it is implemented by Logback and based on the book Patterns for Logging Diagnostic Messages in Pattern Languages of Program Design, written by R. Martin, D. Riehle and F. Buschmann. But fear not, you don't have to read the whole book now. From a user's perspective the Logback MDC is just some kind of map in which we can put our log information at the time of its generation, accompanied by an appropriate key. The logstash-logback-encoder then transfers every MDC record into a field inside a log event, and these fields travel through all the intermediate stations, one by one, into Elasticsearch fields. This also works for many SOAP requests in parallel, which are processed inside their respective threads. Because we already know how to hook into the Apache CXF logging mechanism, our only remaining concern is how to write our SOAP messages into such a field.
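Just to illustrate that idea (this snippet is not part of the example project – the class and the key name are made up), putting a value into the MDC around a log call could look roughly like this:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class MdcIllustration {

    private static final Logger LOG = LoggerFactory.getLogger(MdcIllustration.class);

    public void process(String payload) {
        // put the value into the MDC under a key of our choice
        MDC.put("my-custom-field", payload);
        try {
            // every log event emitted while this entry is present is enriched
            // by the logstash-logback-encoder with a matching JSON field
            LOG.info("Processing an inbound message");
        } finally {
            // clean up, because (pooled) threads are reused for later requests
            MDC.remove("my-custom-field");
        }
    }
}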

OK, let's roll up our sleeves! As usual we'll find everything inside a separate project in our GitHub repository, called step9_soap_message_logging_into_custom_elasticsearch_field. We start with the adjustment of the interceptors we built in step7_soap_message_logging_payload_only and rename them appropriately: SoapMsgToMdcExtractionLoggingInInterceptor.java and SoapMsgToMdcExtractionLoggingOutInterceptor.java. Instead of logging the SOAP messages directly, we push them into their own field with the help of the logstash-logback-encoder's method net.logstash.logback.marker.Markers.append. For that we have to initialize an org.slf4j.Logger first:

import org.apache.cxf.interceptor.LoggingInInterceptor;
import org.apache.cxf.interceptor.LoggingMessage;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import static net.logstash.logback.marker.Markers.append;

public class SoapMsgToMdcExtractionLoggingInInterceptor extends LoggingInInterceptor {

    private static final Logger LOG = LoggerFactory.getLogger(SoapMsgToMdcExtractionLoggingInInterceptor.class);
    private static final String SOAP_MESSAGE_INBOUND = "soap-message-inbound";

    @Override
    protected void log(java.util.logging.Logger logger, String message) {
        // do nothing here, because we don't want the CXF implementation to log;
        // we just want to push the SOAP message to Logback -> Logstash -> Elasticsearch -> Kibana
    }

    @Override
    protected String formatLoggingMessage(LoggingMessage loggingMessage) {
        if (loggingMessage.getPayload().length() > 0) {
            LOG.info(append(SOAP_MESSAGE_INBOUND, loggingMessage.getPayload().toString()), "Log Inbound-SoapMessage to Elasticsearch");
        }
        return "";
    }
}

Furthermore, we override the method log(Logger logger, String message) of the class org.apache.cxf.interceptor.AbstractLoggingInterceptor to suppress Apache CXF's own SOAP message logging. We use the same approach for the outbound logging interceptor. After that we just start our WeatherServiceSystemTest and add the new Elasticsearch fields to our Kibana Discovery perspective. Now the SOAP messages should look like this:

kibana-soap-messages-customfields

Now we can fire up any custom query against our new Elasticsearch fields. For example:

soap-message-inbound: "*<MethodOfPayment>Bitcoin</MethodOfPayment>*"

The results would show all incoming SOAP messages that contain Bitcoin as the method of payment. And building on top of that result set, we could set up further analyses such as counting occurrences or setting them in relation to the time of occurrence… and so forth. At that point there are almost no boundaries. You should just watch out for SOAP messages that are really big – for example if they contain more than one base64-encoded PDF file. In this case it is best to use a grok filter to shorten the messages before pushing them to Elasticsearch.

Correlating all log events relating to a specific SOAP request

Secondly, we wanted to know which log statements belong to a specific SOAP request. Within the scope of our example project we are often the only user of our implementation, because we just start one test that fires one request at a time. In this case, the log entries inside Kibana are mostly in chronological order – but this is not guaranteed. The situation changes particularly once our SOAP endpoint goes into production and is used by many users: they will send many parallel requests to our servers, and we can no longer tell which log entry belongs to which SOAP request. Additionally, if we have more than one SOAP server, that situation gets even worse.

So the need for some kind of request ID arises, for which we can use the filter mechanisms of Kibana. Again the concept of the MDC will help us tremendously. Besides the many benefits mentioned, it holds all entries per thread – meaning per request thread. Putting a UUID into the game would be the perfect fit. There's just one thing you have to know about the usage of the MDC: since threads are reused, its contents may survive from one request to the next. Therefore we have to make sure to always generate a fresh UUID at the beginning of a request and to remove it again at the end.

At the same time we want to capture every single log event that our system produces – no matter whether it occurred in our own functional implementation or in Apache CXF. So we have to access the MDC at the earliest possible stage. This is where the Servlet specification, which the Apache CXF servlet implements, comes in handy: it allows the usage of a Servlet Filter, which can hook in before and after every request that a servlet processes.

This seems to be a perfect fit. A Servlet Filter is able to notice every single SOAP request, and the correct usage of the MDC guarantees the uniqueness of every ID that is pushed into Elasticsearch. Sounds like a dream team – and we are going to implement it now. Our project step9_soap_message_logging_into_custom_elasticsearch_field already contains an implementation:

import org.slf4j.MDC;
import javax.servlet.*;
import java.io.IOException;
import java.util.UUID;

public class LogCorrelationFilter implements Filter {

    private static final String SERVICE_CALL_ID_KEY = "service-call-id";

    @Override
    public void init(FilterConfig filterConfig) throws ServletException {}

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {
        MDC.put(SERVICE_CALL_ID_KEY, UUID.randomUUID().toString());
        try {
            chain.doFilter(request, response);
        } finally {
            MDC.remove(SERVICE_CALL_ID_KEY);
        }
    }

    @Override
    public void destroy() {}
}

We implement the interface javax.servlet.Filter and override all necessary methods. We are only interested in doFilter(ServletRequest request, ServletResponse response, FilterChain chain): inside of it we generate a UUID and put it into the org.slf4j.MDC – together with the key service-call-id. After that, a chain.doFilter(request, response) call ensures that the Apache CXF servlet proceeds with its processing. At the end, inside the finally block, we remove the UUID from the MDC to prevent reuse of this concrete ID. And that's all for the Servlet Filter. We only have to configure it as a Spring Bean, so we add it to the WebServiceConfiguration.java:

@Bean
public Filter filter() {
    return new LogCorrelationFilter();
}

Now we can start our WeatherServiceSystemTest and should see the new Elasticsearch field popping up inside our Kibana Discovery perspective. Just to test our feature, we repeatedly start the system test, or we just fire up our SimpleBootCxfApplication.java and use the Load Test feature of SoapUI to treat our SOAP endpoint with many parallel threads. With this approach we generate valid test data and are now able to filter for a specific service-call-id inside our log statements. In order to do so, you just have to open a log entry in the Discovery perspective and use the small magnifier icon with the plus symbol behind the field service-call-id:

kibana-filter-for-service-call-id

The result is quite striking: Kibana shows us all log entries that belong to a specific SOAP request, even if our endpoint handles many parallel calls:

kibana-filtered-for-one-service-call-id

By now we have accomplished nearly every requirement we initially stated for a logging and monitoring solution for SOAP endpoints. Every framework we use now logs through slf4j and Logback. We also know for sure what SOAP messages travel over the wire and are able to help our service consumers quite extensively in case of questions or errors during implementation or testing. Additionally, we are able to spot errors inside our own implementation substantially faster. And all of that holds for a nearly uncountable number of SOAP servers based on Spring Boot and Apache CXF. In other words, we don't have to dig into every single server's log file inside our kind-of microservice SOAP endpoints, which massively shortens the time needed to find an error.

Also, in the case of massive parallel user activity, we know which log entries belong to a specific SOAP request. And we are prepared for nearly every question that arises from a non-IT department, as we have our dedicated custom fields containing the SOAP messages. We can create the appropriate visualization to answer every question and configure nice-looking dashboards – there are many good tutorials out there, for instance on elastic.co. And last but not least, we are one step ahead of the big enterprise application integration suites: we are able to log without affecting productive performance. It's finally a really nice monitoring solution based on recent technologies.

This article answered many of the questions that arose in the first part of this blog post series. Nevertheless there will definitely be a follow-up. At least one topic is ready to get off the starting block: we could package all the ideas of the previous articles and make them available for all our SOAP endpoints… But I don't want to give away too much for now.
