Enterprise Java Batch: A best practice architecture

24.11.2014 | 4 minutes of reading time

More and more companies are doing their batch processing in Java these days – but how do you do it the right way? This is the start of a series on Enterprise Java Batch about how we think it should be done. Today we will start with some simple questions that need to be answered if you want to establish Java Batch in your company, leading to a best practice architecture that’s still agnostic of specific frameworks. Next up is a post about challenges.
There are a lot of ways to write and run batch jobs, and this series’ goal isn’t to list all of them. Here we are talking about the best way according to our experience with a lot of enterprise customers. Later on we also want to clear up what microservices have to do with it.
When introducing Java Batch to your company you have to answer three questions:

  1. Should a framework be used? If yes, which?
  2. How should the batch jobs be operated?
  3. How should the batch jobs be integrated into the company? Who is starting them?

1. Should a framework be used? If yes, which?

There are certain features you always need when developing batch jobs, among them automatic transaction management, persistent job meta data and error handling, and in many cases you’ll want restart and scaling capabilities. A common programming model for jobs has a lot of advantages as well.
It makes sense to use an established framework for those features. We have had very good experiences with Spring Batch, but we’re not bound to it – the batch standard JSR-352 specifies the features above as well, and implementations other than Spring Batch can be a sensible choice, too.
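
To give an impression of such a common programming model, here is a minimal sketch of a Spring Batch job written in Java config. The job, step and bean names are purely illustrative; the framework contributes the transaction management, the persistent job meta data and the restartability mentioned above.

```java
import java.util.Arrays;

import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.item.support.ListItemReader;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableBatchProcessing // sets up JobRepository, JobLauncher and transaction management
public class ExampleJobConfiguration {

    @Autowired
    private JobBuilderFactory jobBuilders;

    @Autowired
    private StepBuilderFactory stepBuilders;

    @Bean
    public Job exampleJob() {
        return jobBuilders.get("exampleJob")
                .start(exampleStep())
                .build();
    }

    @Bean
    public Step exampleStep() {
        return stepBuilders.get("exampleStep")
                // chunk-oriented processing: every 10 items are written in one transaction,
                // and the job meta data stored in the JobRepository enables restarts
                .<String, String> chunk(10)
                .reader(new ListItemReader<>(Arrays.asList("one", "two", "three")))
                .writer(items -> System.out.println("writing " + items))
                .build();
    }
}
```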

2. How should the batch jobs be operated?

Neither JSR-352 nor Spring Batch makes direct assumptions about how jobs should be operated, even though some JSR-352 implementations are bound to JEE containers. So in principle it’s your decision whether you want to start a JVM for each job run, deploy jobs to a JEE application server, or whether a servlet container is enough. We recommend deployment to a servlet container / application server for the following reasons:

  • HTTP is a well-established protocol for communication between applications, even in polyglot environments, and it can be secured easily.
  • A continuously running batch server allows for fail-fast behavior: environment-specific configuration and connections to other systems are checked at boot time, so there are fewer error sources when a job is actually started.
  • Monitoring for servlet containers is well established – whether via HTTP, JMX or vendor-specific support for a particular application server.
  • Memory management for continuously running applications is easier. If JVMs are started and stopped arbitrarily, the operating system may not be able to provide the needed memory.

In addition a lot of companies have guidelines for operating Java applications that restrict execution to certain licensed systems with enterprise support. WebSphere, JBoss, WebLogic and Tomcat are commonly used candidates and all work with our approach.
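
As an illustration of such a continuously running batch server, the following minimal sketch uses Spring Boot with an embedded servlet container. The use of Spring Boot and the class name are assumptions for the example only; a classic WAR deployment to one of the application servers named above works just as well.

```java
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Boots the servlet container once and keeps it running: environment-specific
// configuration and connections to other systems are checked at startup (fail-fast),
// and jobs are later triggered over HTTP without paying JVM startup costs per run.
@SpringBootApplication
@EnableBatchProcessing
public class BatchServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(BatchServerApplication.class, args);
    }
}
```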

3. How should the batch jobs be integrated into the company? Who is starting them?

Job control and job execution should always be decoupled.

We recommend a REST-like HTTP API for the batch application that has four functions (a minimal controller sketch follows the list):

  • Start job
  • Get the state of the job
  • Stop job
  • Get the protocol (log) of the job run
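
A minimal sketch of these four functions as a Spring MVC controller could look like this. The paths, the return values and the use of Spring Batch’s JobOperator and JobExplorer are illustrative assumptions, not a fixed API.

```java
import org.springframework.batch.core.BatchStatus;
import org.springframework.batch.core.explore.JobExplorer;
import org.springframework.batch.core.launch.JobOperator;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/batch/jobs")
public class JobController {

    @Autowired
    private JobOperator jobOperator;

    @Autowired
    private JobExplorer jobExplorer;

    // Start job: returns the id of the new job execution
    @RequestMapping(value = "/{jobName}", method = RequestMethod.POST)
    public Long start(@PathVariable String jobName,
                      @RequestParam(defaultValue = "") String jobParameters) throws Exception {
        return jobOperator.start(jobName, jobParameters);
    }

    // Get the state of the job
    @RequestMapping(value = "/executions/{executionId}", method = RequestMethod.GET)
    public BatchStatus getStatus(@PathVariable long executionId) {
        return jobExplorer.getJobExecution(executionId).getStatus();
    }

    // Stop job
    @RequestMapping(value = "/executions/{executionId}", method = RequestMethod.DELETE)
    public void stop(@PathVariable long executionId) throws Exception {
        jobOperator.stop(executionId);
    }

    // Get the protocol (log) of the job run; here simply a summary of the execution
    @RequestMapping(value = "/executions/{executionId}/log", method = RequestMethod.GET)
    public String getLog(@PathVariable long executionId) {
        return jobExplorer.getJobExecution(executionId).toString();
    }
}
```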

In most of the bigger companies that also host a mainframe there is a central place for job control and scheduling; here the question is how our batch application can be integrated. Should such a scheduler not exist, we are free to choose – anything from a simple cron job to integration into a workflow system is possible. Whatever the client is, when communicating with our batch server it should follow this simple algorithm (a Java sketch follows the list):

  • Start the job,
  • poll for the state of the job at regular intervals, checking if it’s finished,
  • and if that’s the case, get the job protocol and give it back.
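
Here is a minimal sketch of that algorithm in plain Java (in practice this is often a shell or REXX script, see below); the base URL, the endpoints and the response format are assumptions matching the controller sketch above.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class JobClient {

    // assumption: the batch server exposes the REST-like API sketched above
    private static final String BASE_URL = "http://batchserver:8080/batch/jobs";

    public static void main(String[] args) throws Exception {
        String jobName = args[0];

        // Start the job: the server answers with the id of the new job execution
        long executionId = Long.parseLong(call("POST", BASE_URL + "/" + jobName));

        // Poll for the state of the job at regular intervals
        // (simplified: only two terminal states are checked)
        String status;
        do {
            Thread.sleep(10000);
            status = call("GET", BASE_URL + "/executions/" + executionId);
        } while (!status.contains("COMPLETED") && !status.contains("FAILED"));

        // Get the job protocol and give it back, together with a meaningful exit code
        System.out.println(call("GET", BASE_URL + "/executions/" + executionId + "/log"));
        System.exit(status.contains("COMPLETED") ? 0 : 1);
    }

    private static String call(String method, String url) throws Exception {
        HttpURLConnection connection = (HttpURLConnection) new URL(url).openConnection();
        connection.setRequestMethod(method);
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream()))) {
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line);
            }
            return body.toString().trim();
        }
    }
}
```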

We like simple solutions, so one option would be to put this logic into a script, adding a shutdown hook that stops the job if the operator shuts down the script. The place of execution and the language of the script depend a lot on your system – many of our customers run their job scheduling on the mainframe, and in that case REXX is one option. In UNIX-based environments a shell script will do the trick as well.
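
In the JobClient sketch above such a shutdown hook could be registered right after the job has been started; again this is just an illustration reusing the hypothetical call helper and the stop endpoint.

```java
// If the operator kills the client (e.g. the scheduler cancels the script),
// the shutdown hook stops the job execution on the batch server as well.
// In a real script one would skip the stop call if the job already finished.
Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    try {
        call("DELETE", BASE_URL + "/executions/" + executionId);
    } catch (Exception e) {
        e.printStackTrace();
    }
}));
```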

Conclusion

This solution has served us well at a lot of customers, and it is simple compared to other solutions we have seen in the field. Of course, it is just the foundation; there are a lot of questions regarding the details that will be answered in the following parts of this series. The next part will be about the challenges we have met, and still see, at customers regarding this approach.
