
Writing JSR-352 style jobs with Spring Batch Part 2: Dependency injection

1.9.2014 | 5 minutes of reading time

Spring Batch 3.0 supports writing and running batch jobs that comply with the JSR-352 specification, which is the standard for batch processing also included in JEE7. This article series focuses on three topics:

  • configuration options when using Spring Batch’s implementation of the standard
  • integrating the possibility to run JSR-352 style jobs in your existing Spring Batch environment
  • using Spring’s dependency injection functionality within JSR-352 style jobs

You can find the post about the first two topics here; this one is about using Spring’s dependency injection capabilities within JSR-352 style jobs.

JSR-352 doesn’t specify how dependency injection is done. Instead, it leaves it up to each implementation to add support for a certain dependency injection framework or specification, and in addition it requires that two fallback strategies for instantiating batch artifacts are implemented. Let’s take a look at these first.

Referencing batch artifacts by qualified class name in job xml / batch xml

A simple example for this style is the following:

<?xml version="1.0" encoding="UTF-8"?>
<job id="simpleJob" xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://www.oracle.com/webfolder/technetwork/jsc/xml/ns/javaee/jobXML_1_0.xsd">
    <step id="chunkStep">
        <chunk item-count="2">
            <reader ref="de.codecentric.batch.item.DummyItemReader"/>
            <processor ref="de.codecentric.batch.item.LogItemProcessor"/>
            <writer ref="de.codecentric.batch.item.LogItemWriter"/>
        </chunk>
    </step>
</job>

The references to batch artifacts are fully qualified class names. When the JSR-352 implementation starts this job, each class is looked up on the classpath and instantiated via reflection using its no-arg constructor.
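The lookup mechanism itself is plain reflection. The following sketch shows conceptually what an implementation does with such a reference; `java.util.ArrayList` is only a stand-in here for a class like `DummyItemReader`, since the mechanism is identical for any class with a public no-arg constructor:

```java
class ArtifactFactory {

    // Resolve a fully qualified reference from the job xml and create the
    // artifact via its public no-arg constructor, which is the spec's
    // fallback strategy for instantiating batch artifacts.
    public static Object createArtifact(String fqcn) throws Exception {
        Class<?> artifactClass = Class.forName(fqcn);   // classpath lookup
        return artifactClass.getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws Exception {
        // java.util.ArrayList stands in for de.codecentric.batch.item.DummyItemReader
        Object artifact = createArtifact("java.util.ArrayList");
        System.out.println(artifact.getClass().getName());
    }
}
```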
The second option is to specify batch artifacts in a file named batch.xml placed in META-INF. This file is more or less just a mapping of a reference name to a fully qualified class name:

<batch-artifacts xmlns="http://xmlns.jcp.org/xml/ns/javaee">
    <ref id="dummyItemReader" class="de.codecentric.batch.item.DummyItemReader" />
    <ref id="logItemProcessor" class="de.codecentric.batch.item.LogItemProcessor" />
    <ref id="logItemWriter" class="de.codecentric.batch.item.LogItemWriter" />
</batch-artifacts>

Those artifacts may then be referenced by name in the job xml:

<?xml version="1.0" encoding="UTF-8"?>
<job id="simpleJob" xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://www.oracle.com/webfolder/technetwork/jsc/xml/ns/javaee/jobXML_1_0.xsd">
    <step id="chunkStep">
        <chunk item-count="2">
            <reader ref="dummyItemReader"/>
            <processor ref="logItemProcessor"/>
            <writer ref="logItemWriter"/>
        </chunk>
    </step>
</job>

Again, batch artifacts are instantiated via reflection / no-arg constructor.
You may inject batch properties into these batch artifacts:

In the job xml:

<reader ref="de.codecentric.batch.item.PartitionedItemReader">
    <properties>
        <property name="myProperty" value="myValue"/>
    </properties>
</reader>

And in the batch artifact:

@Inject @BatchProperty(name="myProperty")
private String myProperty;
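Under the hood, a container can resolve such a field with plain reflection: find the fields carrying the property annotation and set the configured value. A rough, hypothetical sketch of that mechanism, without the real javax.batch annotations and with PropertyReader as a made-up example artifact:

```java
import java.lang.reflect.Field;

class BatchPropertyInjectionSketch {

    // Made-up example artifact; in real code the field would carry
    // @Inject @BatchProperty(name = "myProperty").
    static class PropertyReader {
        private String myProperty;

        String getMyProperty() {
            return myProperty;
        }
    }

    // Rough sketch of what the container does: set the annotated field
    // to the value configured in the job xml via reflection.
    public static void injectProperty(Object artifact, String fieldName, String value)
            throws Exception {
        Field field = artifact.getClass().getDeclaredField(fieldName);
        field.setAccessible(true);
        field.set(artifact, value);
    }

    public static void main(String[] args) throws Exception {
        PropertyReader reader = new PropertyReader();
        injectProperty(reader, "myProperty", "myValue"); // value from the job xml
        System.out.println(reader.getMyProperty());
    }
}
```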

The JSR-352 implementation of Spring Batch uses a base ApplicationContext containing the batch infrastructure (JobRepository and co.) as the parent context of a job ApplicationContext that is created and destroyed for each job run. The contents of this child ApplicationContext are the job xml and the batch.xml.
Because of this implementation it’s possible to inject any component from the infrastructure context into batch artifacts via @Inject. So if you need the DataSource from the infrastructure context in a reader, you may do just this:

@Inject
private DataSource dataSource;

That’s all the dependency injection you get with this approach. You could put business components into the infrastructure context and inject them like the DataSource above, but mixing business components and infrastructure components is probably not a good idea. And there are technical limitations: it’s not possible to inject batch properties into components from the base context, and you’ll almost certainly have job parameters that need to be injected into your business components.

Using Spring dependency injection in job xmls

You can use Spring dependency injection in job xml files like this:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://www.springframework.org/schema/beans
                              http://www.springframework.org/schema/beans/spring-beans.xsd
                              http://xmlns.jcp.org/xml/ns/javaee
                              http://xmlns.jcp.org/xml/ns/javaee/jobXML_1_0.xsd">

    <bean id="reader" class="de.codecentric.batch.item.PartitionedItemReader" scope="step"/>
    <bean id="processor" class="de.codecentric.batch.item.LogItemProcessor"/>
    <bean id="writer" class="de.codecentric.batch.item.LogItemWriter"/>
    <bean id="mapper" class="de.codecentric.batch.item.SimplePartitionMapper"/>

    <!-- Job is defined using the JSL schema provided in JSR-352 -->
    <job id="partitionMapperJobSpringDI" xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="1.0">
        <step id="chunkStep">
            <chunk item-count="2">
                <reader ref="reader">
                    <properties>
                        <property name="datakey" value="#{partitionPlan['datakeyPartition']}"/>
                    </properties>
                </reader>
                <processor ref="processor"/>
                <writer ref="writer"/>
            </chunk>
            <partition>
                <mapper ref="mapper" />
            </partition>
        </step>
    </job>
</beans>

It’s a combination of a normal Spring xml file and a JSR-352 job xml. It works, but of course it’s no longer a portable JSR-352 xml. You may split that file into two: a valid JSR-352 job xml and a Spring xml that imports the job xml file via Spring’s import tag. Either way, to start the job you then have to use the name of the Spring xml file, not the name of the JSR-352 job xml.

Also working, and maybe the cleanest solution if you don’t have too many jobs in your application (keeping the number of jobs per application small is a best practice anyway): place your Spring configuration in the batch.xml, using Spring DI there instead of the JSR-352 style content.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://www.springframework.org/schema/beans
                              http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean id="partitionedItemReader" class="de.codecentric.batch.item.PartitionedItemReader" scope="step"/>
    <bean id="logItemProcessor" class="de.codecentric.batch.item.LogItemProcessor"/>
    <bean id="logItemWriter" class="de.codecentric.batch.item.LogItemWriter"/>
    <bean id="simplePartitionMapper" class="de.codecentric.batch.item.SimplePartitionMapper"/>

</beans>

And then a clean JSR-352 style job xml:

<?xml version="1.0" encoding="UTF-8"?>
<job id="partitionMapperJobSpringDIBatchXml" xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://www.oracle.com/webfolder/technetwork/jsc/xml/ns/javaee/jobXML_1_0.xsd">
    <step id="chunkStep">
        <chunk item-count="2">
            <reader ref="partitionedItemReader">
                <properties>
                    <property name="datakey" value="#{partitionPlan['datakeyPartition']}"/>
                </properties>
            </reader>
            <processor ref="logItemProcessor"/>
            <writer ref="logItemWriter"/>
        </chunk>
        <partition>
            <mapper ref="simplePartitionMapper" />
        </partition>
    </step>
</job>

Then there’s no need for imports, and the job can be started with the name of the job xml file.
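The partition mapper referenced above has to supply one set of partition properties per partition, each carrying the datakeyPartition entry that the job xml picks up via #{partitionPlan['datakeyPartition']}. Here is a minimal sketch of that logic using only java.util.Properties; the real SimplePartitionMapper would implement javax.batch.api.partition.PartitionMapper and return these properties wrapped in a PartitionPlan, and the datakey values shown are made up:

```java
import java.util.Properties;

class PartitionPropsSketch {

    // Build one Properties object per partition; each carries the
    // datakeyPartition entry the job xml references via
    // #{partitionPlan['datakeyPartition']}. The values are illustrative.
    public static Properties[] buildPartitionProperties(int partitionCount) {
        Properties[] props = new Properties[partitionCount];
        for (int i = 0; i < partitionCount; i++) {
            Properties p = new Properties();
            p.setProperty("datakeyPartition", "datakey" + (i + 1));
            props[i] = p;
        }
        return props;
    }

    public static void main(String[] args) {
        for (Properties p : buildPartitionProperties(2)) {
            System.out.println(p.getProperty("datakeyPartition"));
        }
    }
}
```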

Conclusion

You have two options: either live with more or less no dependency injection, or combine Spring DI xmls and JSR-352 xmls in a way that doesn’t feel 100% compliant with the spec. To be honest, I’d stick to Spring Batch configurations whenever I can, because good portability is only given with the first approach discussed in this article, which means skipping dependency injection. And what use is a limited standard if you can’t even port it to another vendor easily? Portability is an argument often heard when talking about standards, but thinking back on my career, I have never ported a standard technology from one vendor to another. Really never. And if you really have to move away from Spring Batch to some other JSR-352 implementation: original Spring Batch and JSR-352 share the same concepts, so porting is possible. There are always two aspects, though: the runtime and the components. Moving away from Spring Batch’s rich component set will be expensive, because you’ll have to reimplement a lot.
