I hope you enjoyed my first article about Gatling. The second one will go much more into detail and involves a lot of Scala code because this time, we will not only use Gatling but also extend it.
Although some community extensions already exist (MQTT, Kafka …), I get the impression that a lot of protocols for which performance tests could be useful are not covered. Writing an extension is still quite cumbersome because detailed documentation of Gatling’s internal workings does not exist and the Gatling Scala Docs could be more extensive. That is why I wrote this blog post. Along with the post, we will create a JDBC extension for Gatling and implement the first action.
The whole project can be found here: https://github.com/rbraeunlich/gatling-jdbc
It already contains some more commands and has been refactored. Therefore, some classes might look different from how they are presented here. But do not worry, this post contains the initial source code.
Please note that everything within this article is based on reading the code of the existing Gatling modules (official and community), the Scala Docs and my experience working with Gatling. So take all general statements I make with a grain of salt.
This post is based on Gatling 2.2.5.
Basics
Before we can start with the actual implementation, I want to cover two basics because they will appear within the example code at some point. They are not related to the actual simulation “flow” and that’s why we cover them now.
Gatling EL
Because a user could use a feeder for the table names, our implementation shall support the EL right from the beginning. Within the code, the EL makes things a little bit more complicated, but not too much (how to use the Gatling EL is explained in the documentation). What is important for the implementation is that the EL requires us to use Expression[String] wherever we would use a string, e.g. as a parameter for a table name.
When working with an expression, it first has to be validated. This is done by calling apply() with the current session as a parameter. Doing so checks whether any variables are present in the string (marked by the dollar sign) and whether they exist within the session. apply() returns an instance of io.gatling.commons.validation.Validation. A validation can either be a Success or a Failure (those are Gatling classes, do not mix them up with scala.util.Success and scala.util.Failure). Therefore, after applying the session, we have to check which one resulted. Depending on the value returned, we can continue.
If you start with the implementation of a Gatling extension, you can of course avoid using Expression and simply use strings. But be aware that you will later have to replace all strings with expressions if you want to use variables.
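The validate-then-branch pattern described above can be sketched with plain Scala stand-ins. Note that the types below merely mimic Gatling's (the real ones live in io.gatling.core.session and io.gatling.commons.validation); this is a simplified model for illustration, not the actual API:

```scala
object ElSketch {
  // Simplified stand-ins for Gatling's Validation, Success and Failure
  sealed trait Validation[+T]
  case class Success[T](value: T) extends Validation[T]
  case class Failure(message: String) extends Validation[Nothing]

  // A session is essentially a map of attributes per virtual user
  type Session = Map[String, Any]
  // An Expression[T] boils down to a function from the session to a Validation[T]
  type Expression[T] = Session => Validation[T]

  // Resolve an EL reference like "${tableName}" against the session attributes
  def attribute(key: String): Expression[String] = session =>
    session.get(key) match {
      case Some(value: String) => Success(value)
      case _                   => Failure(s"No attribute named '$key' in session")
    }

  // Apply the expression to the session and branch on the result,
  // as our action will do later with the real Gatling types
  def createStatement(tableName: Expression[String], session: Session): String =
    tableName(session) match {
      case Success(name)    => s"CREATE TABLE $name(id INTEGER PRIMARY KEY)"
      case Failure(message) => throw new IllegalArgumentException(message)
    }
}
```

With a session containing the attribute, `ElSketch.createStatement(ElSketch.attribute("table"), Map("table" -> "foo"))` resolves the variable; without it, the Failure branch is taken.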
Stats Engine
The StatsEngine is the class that is needed in order to log response times. It should always be passed from the ActionBuilder to the Action (see the JdbcTableCreationActionBuilder section). You might find tutorials on the internet in which the engine is not used. Those are most probably older ones, written before it was introduced. The default one, DataWritersStatsEngine, writes the results into a file from which the reports are generated.
The most important method the engine provides is logResponse(). This is also the method we will use in our implementation. One of the parameters the method takes is a status. Gatling provides the two objects io.gatling.commons.stats.OK and io.gatling.commons.stats.KO. Using them, either a successful or an unsuccessful response can be logged.
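The essence of what logResponse() records can be sketched without any Gatling classes. All names below are stand-ins for illustration; the real call, as we will see in the action later, is statsEngine.logResponse(session, name, timings, status, None, None):

```scala
object StatsSketch {
  // Stand-ins for io.gatling.commons.stats.OK / KO
  sealed trait Status
  case object OK extends Status
  case object KO extends Status

  // What logResponse conceptually records: a request name, timings and a status
  case class LoggedResponse(requestName: String, startMs: Long, endMs: Long, status: Status)

  // Run an operation, time it, and derive the status from its outcome
  def measure[A](requestName: String)(op: => A): LoggedResponse = {
    val start  = System.currentTimeMillis()
    val status = try { op; OK } catch { case _: Exception => KO }
    LoggedResponse(requestName, start, System.currentTimeMillis(), status)
  }
}
```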
Implementation
Within this section I’ll explain how the different required parts of a Gatling module can be implemented. To start simple, the first command we create is the table creation. For now, we will do this completely statically. The user can give a name for the table and that’s it. A single column representing the ID will be part of the table. Defining arbitrary columns will be added later. For now, this simple implementation is sufficient to assert that everything works as expected.
Predef and JdbcDsl
The combination of the two classes might be slightly over-engineered, but I wanted to resemble the Gatling modules as closely as possible. The Predef object is intended to be imported with the underscore in your simulation code. That way, everything necessary to access the JDBC functionality should be present. Predef extends the JdbcDsl trait. If the module consisted of more than one protocol (e.g. Gatling HTTP also contains WebSocket and Server Sent Events (SSE)), a second *Dsl trait should exist, which Predef should extend, too. Here is Predef’s complete code:
object Predef extends JdbcDsl
JdbcDsl contains a val and two methods, one of which is an implicit conversion method. The value called jdbc contains the object to create the protocol (protocol in the Gatling sense), i.e. the general configuration. The jdbc() method is the starting point for defining the simulation steps. It behaves like http() or jms() from the Gatling modules.
trait JdbcDsl {
  val jdbc = JdbcProtocolBuilderBase
  def jdbc(requestName: Expression[String]) = JdbcActionBuilderBase(requestName)
  implicit def jdbcProtocolBuilder2JdbcProtocol(protocolBuilder: JdbcProtocolBuilder): JdbcProtocol = protocolBuilder.build
}
The final tweak is the implicit conversion. Usually, you would write something like this in your simulation:
val jdbcProtocol = jdbc.url(...).username(...).password(...).driver(...)
This actually creates a JdbcProtocolBuilder, not a JdbcProtocol. The implicit conversion calls build() on the object to actually create the protocol. The protocols() method that follows setUp() requires such a protocol object. That way, the users do not have to call build() explicitly but can rely on the implicit conversion.
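The mechanism can be demonstrated in isolation. The following is a generic Scala sketch of the pattern with made-up names, not Gatling code:

```scala
import scala.language.implicitConversions

object ImplicitBuildSketch {
  // A builder and the finished product, analogous to JdbcProtocolBuilder / JdbcProtocol
  case class ProtocolBuilder(url: String) {
    def build: Protocol = Protocol(url)
  }
  case class Protocol(url: String)

  // The implicit conversion that spares users the explicit build() call
  implicit def builder2Protocol(builder: ProtocolBuilder): Protocol = builder.build

  // A method that, like protocols(), expects the finished product
  def register(protocol: Protocol): String = protocol.url

  // Passing the builder compiles because the conversion kicks in
  def demo(): String = register(ProtocolBuilder("jdbc:h2:mem:test"))
}
```

The compiler inserts builder2Protocol() around the ProtocolBuilder argument, which is exactly what happens when a JdbcProtocolBuilder is passed to protocols().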
JdbcProtocolBuilderBase
Before we can run any simulation using JDBC, a protocol has to be created. That’s what the JdbcProtocolBuilderBase file and its contents are for. The builder base and the inner builder steps define the common configuration that ends up in the protocol and stays the same for all following jdbc() calls. By using the different step classes, we can enforce an exact order and make the users provide all required parameters. I only show a snippet here:
case object JdbcProtocolBuilderBase {
  def url(url: String) = JdbcProtocolBuilderUsernameStep(url)
}

case class JdbcProtocolBuilderUsernameStep(url: String) {
  def username(name: String) = JdbcProtocolBuilderPasswordStep(url, name)
}
...
JdbcProtocol
As already mentioned, the protocol represents what is common for the simulation steps relying on that protocol. In our case, the protocol loads the JDBC driver and establishes the connection to the database. Internally, we use ScalikeJDBC (version 2.5.2) to make things a little bit easier. If we did not use it for the database access, JdbcProtocol might also wrap the database connection. Luckily, ScalikeJDBC does that for us. Also, JdbcProtocol is the first class that actually has to extend a Gatling trait, namely io.gatling.core.protocol.Protocol:
class JdbcProtocol(url: String, username: String, pwd: String, driver: String) extends Protocol {
  Class.forName(driver)
  ConnectionPool.singleton(url, username, pwd)
}
The JdbcProtocol companion object is a little bit more complicated because it contains an io.gatling.core.protocol.ProtocolKey object:
object JdbcProtocol {

  val jdbcProtocolKey = new ProtocolKey {

    override type Protocol = JdbcProtocol
    override type Components = JdbcComponents

    override def protocolClass: Class[protocol.Protocol] = classOf[JdbcProtocol].asInstanceOf[Class[io.gatling.core.protocol.Protocol]]

    override def defaultValue(configuration: GatlingConfiguration): JdbcProtocol =
      throw new IllegalStateException("Can't provide a default value for JdbcProtocol")

    override def newComponents(system: ActorSystem, coreComponents: CoreComponents): (JdbcProtocol) => JdbcComponents = {
      protocol => JdbcComponents(protocol)
    }

  }

  def apply(url: String, username: String, pwd: String, driver: String): JdbcProtocol = new JdbcProtocol(url, username, pwd, driver)
}
The ProtocolKey serves two purposes: Firstly, it allows you to look up your components (see the next section for what components are). Every ActionBuilder class (also see the following sections) receives the io.gatling.core.structure.ScenarioContext as a parameter. By calling
ctx.protocolComponentsRegistry.components(<your ProtocolKey>)
where ctx is the ScenarioContext, you can retrieve your components if you need them while creating an Action.
Secondly, the ProtocolKey creates the components and can define default values. Since there are no meaningful defaults for a JDBC connection, we throw an IllegalStateException when being asked for the defaults.
JdbcComponents
The JdbcComponents class takes the JdbcProtocol as a parameter and extends the io.gatling.core.protocol.ProtocolComponents trait:
case class JdbcComponents(protocol: JdbcProtocol) extends ProtocolComponents {
  override def onStart: Option[(Session) => Session] = None
  override def onExit: Option[(Session) => Unit] = None
}
Based on the trait, the components allow for some initialization and finalization. We can see that the methods are supposed to return functions from sessions to either Session or Unit. Because a session is individual to each virtual user, the start and stop are probably performed on a per-virtual-user basis. Nevertheless, when placing breakpoints, the methods were never called on my computer.
Apart from that, I suppose the only other use for components is to be able to access the JdbcProtocol. The common pattern – at least among the different modules – is to have the protocol as a field within the components.
Now we covered everything for setting up our JDBC module. The classes presented above are all present within the root package and the protocol package. Next, we switch to the builder package.
JdbcActionBuilderBase
The JdbcActionBuilderBase class takes a requestName as a parameter and delegates to the different builders. The request name will appear in the results later. This class contains the “entry points” to the different operations. For now, we place only a create() method in it:
case class JdbcActionBuilderBase(requestName: Expression[String]) {
  def create()(implicit configuration: GatlingConfiguration) = JdbcTableCreationBuilderBase(requestName)
}
Later, we would add select(), insert(), update() …, which would all delegate to different builder classes. In the HTTP module, the classes Http and Ws represent the same idea. Again, this “base” builder is not necessary; it just organizes the code in a convenient way. The jdbc() method from JdbcDsl returns this class, and from there we can navigate to the actual JDBC functionality.
JdbcTableCreationBuilderBase
This is the explicit builder class for the CREATE TABLE command. One could argue that it belongs in the action package. The usual convention seems to be to keep it in a different package. I can see two reasons for that: Firstly, this class could be confused with the builder that actually creates the action (see the JdbcTableCreationActionBuilder section) and secondly, this class does not extend any Gatling class. Within this builder class (or rather, file), we could again use different case classes to represent the individual steps and parameters we need. Especially when we want to allow arbitrary columns, this class has to be extended. For now, we just place a single method in it:
case class JdbcTableCreationBuilderBase(requestName: Expression[String]) {
  def name(name: Expression[String]) = JdbcTableCreationActionBuilder(name)
}
Again, we use an expression to allow the user to utilize variables in the table name. Then we directly return the JdbcTableCreationActionBuilder, which is located in the action package. This looks pretty simple right now, but once we think about columns, we will most probably need column(), dataType() and constraint() methods in the future. Still, for our first example implementation, name() is enough. Next comes the action package.
JdbcTableCreationActionBuilder
ActionBuilders are the classes required by the exec() method, which you use in your simulations. Therefore, this class is actually passed to the simulation and, when being executed, it creates the concrete action:
case class JdbcTableCreationActionBuilder(name: Expression[String]) extends ActionBuilder {

  override def build(ctx: ScenarioContext, next: Action): Action = {
    val statsEngine = ctx.coreComponents.statsEngine
    JdbcCreateTableAction(name, statsEngine, next)
  }

}
The JdbcTableCreationActionBuilder extends the Gatling trait io.gatling.core.action.builder.ActionBuilder and, of course, it has to create an action. There are two important things an ActionBuilder should always do here: Firstly, it should pass the “next” Action to the action it creates. Otherwise, your simulation will stop at this point. Secondly, as you can see in the code above, the ActionBuilder can retrieve the StatsEngine from the core components. If the engine is not passed to the Action, there is no chance to measure the action’s performance.
Apart from that, the ActionBuilder could also retrieve its own components from the context and pass them to the Action, if needed. Now that we know how to create an action, let’s take a look at it.
JdbcCreateTableAction
Within an Action, the action takes place (pun intended). The action is the place to measure the performance and to use your communication protocol for whatever you intend to do. E.g., somewhere within an HTTP POST action you would use your HTTP client to make a POST request. We use the API provided by ScalikeJDBC. Most of the time, actions should extend the io.gatling.core.action.ChainableAction trait. From my experience, the actions you write are usually chainable. After creating a table, the user shall be able to directly insert data and not be forced to stop there; therefore, our action is chainable. Our action class looks more complicated than expected, but the logic within is not:
case class JdbcCreateTableAction(tableName: Expression[String], statsEngine: StatsEngine, next: Action) extends ChainableAction {

  override def name: String = "Create table action"

  override def execute(session: Session): Unit = {
    val start = TimeHelper.nowMillis
    val validatedTableName = tableName.apply(session)
    validatedTableName match {
      case Success(name) =>
        val query = s"CREATE TABLE $name(id INTEGER PRIMARY KEY)"
        DB autoCommit { implicit session =>
          SQL(query).execute().apply()
        }

      case Failure(error) => throw new IllegalArgumentException(error)
    }
    val end = TimeHelper.nowMillis
    val timing = ResponseTimings(start, end)
    statsEngine.logResponse(session, name, timing, OK, None, None)

    next ! session
  }

}
We use the TimeHelper class provided by Gatling to retrieve the exact time. Then, we have to validate the expression for the table name. In case of a success, we create the table; otherwise, we throw an exception. This behaviour should be fine, because the expression only fails if the variable is not placed in the session. Lastly, we log the time and pass the session to the next action. Be careful here: if your action manipulates the session somehow, it has to pass the return value of the session manipulation method to the next action. If the original one is passed on, the change is lost.
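The pitfall can be illustrated with a plain immutable Map standing in for Gatling's Session (session.set in Gatling behaves like + here; names and attribute values are illustrative):

```scala
object SessionSketch {
  // Gatling's Session is immutable, like this Map: set() returns a NEW session
  type Session = Map[String, Any]

  def rightWay(session: Session): Session = {
    val updated = session + ("rowCount" -> 42) // like session.set("rowCount", 42)
    updated // pass this one on, i.e. next ! updated
  }

  def wrongWay(session: Session): Session = {
    session + ("rowCount" -> 42) // return value ignored...
    session // ...so the new attribute is lost when this is passed on
  }
}
```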
Now that we have our basic action, we could write a simulation to test it. Nevertheless, you might have noticed that we only log an OK value. If the create crashes, e.g. because of a duplicated table name, the whole simulation would crash. So let’s improve the action a little bit to handle errors:
...
    validatedTableName match {
      case Success(name) =>
        val query = s"CREATE TABLE $name(id INTEGER PRIMARY KEY)"
        val tried = Try(DB autoCommit { implicit session =>
          SQL(query).execute().apply()
        })
        tried match {
          case scala.util.Success(_) => log(start, TimeHelper.nowMillis, OK, requestName, session)
          case scala.util.Failure(_) => log(start, TimeHelper.nowMillis, KO, requestName, session)
        }
      case Failure(error) => throw new IllegalArgumentException(error)
    }
...

def log(start: Long, end: Long, status: Status, requestName: Expression[String], session: Session): Unit = {
  val timing = ResponseTimings(start, end)
  requestName.apply(session).map { resolvedRequestName =>
    statsEngine.logResponse(session, resolvedRequestName, timing, status, None, None)
  }
}
I left out the parts that did not change. We use the Try class to wrap any exceptions that might occur. Now, when the database returns an error, we log a KO value. At this point, we can start to use our action in a simulation.
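The Try-based branching boils down to this self-contained pattern (OK/KO are plain strings here for illustration; in the action they are the Gatling status objects):

```scala
import scala.util.{Failure, Success, Try}

object TrySketch {
  // Wrap an arbitrary operation and map its outcome to a status label
  def status(op: => Unit): String = Try(op) match {
    case Success(_) => "OK"
    case Failure(_) => "KO"
  }
}
```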
Simulation
As before, we start easy:
class CreateTableSimulation extends Simulation {

  val jdbcConfig = jdbc.url("jdbc:h2:mem:test;DB_CLOSE_ON_EXIT=FALSE").username("sa").password("sa").driver("org.h2.Driver")

  val testScenario = scenario("createTable").
    exec(jdbc("foo table").create().name("foo"))

  setUp(testScenario.inject(atOnceUsers(1))).protocols(jdbcConfig)
}
As you can see, first we configure the JDBC connection. Then, within the scenario, we create the table. Finally, everything is being executed with one simulated user.
After running this simulation, a single OK value should appear. This shows us that everything works as expected. At this point, we could start to add more actions, enable arbitrary columns or refactor a little bit, but wait… didn’t we forget something?
Testing
“Wait, stop, nobody told me I’d have to write tests.”
If you just said this to yourself, you know why I had to add this section 😉
Taking a look at the community extensions referenced in the Gatling documentation (Cassandra, MQTT, Kafka, RabbitMQ, AMQP), Cassandra was the only one that contained basic unit tests. Within all of the extensions, the tests were simulations. This might be sufficient; I do not want to judge the quality of the extensions here, but I think the “job” of a simulation (in the Gatling sense) is to evaluate the performance, not the functionality. Therefore, we will write some unit tests (we should actually have done that at the beginning, but I did not want to scare you 😉)
As a first step, we “borrow” the MockStatsEngine from the JMS module. We use the StatsEngine in the JdbcCreateTableAction and probably in every other action, too. Therefore, the mock will be quite useful. Apart from that, we need three more things to be able to test the action:
- An io.gatling.core.session.Session
- A database
- An Action that is “next”
The latter two are not very difficult. For the tests, a simple H2 database is sufficient, which we start in a beforeAll() method the same way we did in our protocol class. Since any class extending the Action trait can be “next”, we can easily use any stub/mock/class for that.
Luckily, the first point is no big problem either. Session’s constructor is public and provides default values for all attributes except “scenario” (String) and “userId” (Long). As we can see in the Gatling tests (e.g. SessionSpec), doing something like:
val session = Session("scenario", 0)
is fine. What we end up with is the following “preamble” of our first test:
class JdbcCreateTableActionSpec extends FlatSpec with BeforeAndAfter with BeforeAndAfterAll {

  val session = Session("scenario", 0)
  val next = new Action {
    override def name: String = "mockAction"

    override def execute(session: Session): Unit = {}
  }
  val statsEngine = new MockStatsEngine

  override def beforeAll(): Unit = {
    Class.forName("org.h2.Driver")
    ConnectionPool.singleton("jdbc:h2:mem:test", "sa", "sa")
  }

  before {
    statsEngine.dataWriterMsg = List()
  }

  override def afterAll(): Unit = {
    ConnectionPool.closeAll()
  }
We only have to watch out not to use the same table name in different tests unintentionally, because that could cause problems.
We can directly use the ScalikeJDBC API and execute SQL to check the results of our actions. To check, e.g., whether a table was created, we can do the following (please note that this is already the refactored API of JdbcCreateTableAction):
val action = JdbcCreateTableAction("request", "new_table", Seq(column(name("foo"), dataType("INTEGER"), constraint("PRIMARY KEY"))), statsEngine, next)

action.execute(session)

val result = DB readOnly { implicit session =>
  sql"""SELECT * FROM information_schema.tables WHERE TABLE_NAME = 'NEW_TABLE' """.map(rs => rs.toMap()).single().apply()
}
result should not be empty
The rest should be basic unit testing. The builder classes do not interact with Gatling. All of the classes that extend ActionBuilder take Gatling classes as parameters, but those can be mocked. Also, everything in the protocol package can be tested by simply checking the properties. Those classes do not contain much logic.
Finally, when testing, do not forget to include

import io.gatling.core.Predef._

or else the implicit conversion from String to Expression[String] will not work.
Moving On
Now that you know the basics, you should be able to write your own Gatling module. Within the example project, I refactored the create() method to work with arbitrary columns and added actions for DELETE, SELECT, INSERT and DROP TABLE. The implementations for the operations all follow the same pattern: a *BuilderBase class is referenced in the JdbcActionBuilderBase; within it, the different build steps are realised by case classes; and finally, ActionBuilder and Action implementations are created.
For DELETE and SELECT I created two ActionBuilder classes because it is possible to issue both operations without a WHERE clause. I wanted to make

exec(jdbc("selection").select("*").from("bar"))

possible as well as

exec(jdbc("selection").select("*").from("bar").where("abc=4"))

without explicitly using the build() method. Therefore, I used two ActionBuilders.
Finally, there are obviously some SQL operations missing. Those are left as an exercise for the reader 😉 Feel free to create pull requests on GitHub.
Before I finish the article, there are two more things I would like to talk about. The first one is the possibility to add checks to our actions.
Checking the results
If you have already used Gatling for some performance testing, you probably know that it is possible to perform some basic checks. The HTTP module allows you to check the status of the response, and the JMS module provides the generic simpleCheck() method, among others. Here, we want to add something similar to simpleCheck() to our extension. SELECT seems to be the best candidate for that purpose. Before we start, let’s see how the existing modules implement their checks.
The JmsSimpleCheck class shows us how simply a check can be implemented. The Message => Boolean function passed in the constructor is applied to the JMS message. That’s all. In case of a true, everything is fine; otherwise, a failure is recorded. The trait JmsCheck is a type alias for Check[Message], defined in the package.scala file. Based on the JmsSimpleCheck class, we now know that we have to implement a class extending io.gatling.core.check.Check. The type of the check is also important. Since we are working with JDBC and do not want to limit what the user can check, we will use List[Map[String, Any]] as the type. Although the type is slightly inconvenient, it is the simplest one we can come up with that is provided by ScalikeJDBC. Therefore, we end up with the following skeleton:
class JdbcSimpleCheck extends Check[List[Map[String, Any]]] {
  override def check(response: List[Map[String, Any]], session: Session)(implicit cache: mutable.Map[Any, Any]): Validation[CheckResult] = ???
}
Like JmsSimpleCheck, we want the user to provide a function for the evaluation. This results in the implementation:
case class JdbcSimpleCheck(func: List[Map[String, Any]] => Boolean) extends Check[List[Map[String, Any]]] {
  override def check(response: List[Map[String, Any]], session: Session)(implicit cache: mutable.Map[Any, Any]): Validation[CheckResult] = {
    if (func(response)) {
      CheckResult.NoopCheckResultSuccess
    } else {
      Failure("JDBC check failed")
    }
  }
}
Now we have to add the check to the JDBC DSL somehow, so that the user can write something like this:

exec(jdbc("selection").select("*").from("bar").where("id=4").check(result.head("foo") == "test"))
From the implementation of our action, we know that the where() method returns an ActionBuilder object, and that should be the end of the builder chain because the exec() method expects an ActionBuilder. As we can see in the JmsDslBuilder, the checks are simply added to the ActionBuilder, i.e. at that point the checks are simply an additional builder step. Hidden within io.gatling.jms.client.Tracker, we can see how checks are applied:
val (checkSaveUpdate, error) = Check.check(message, session, checks)
val newSession = checkSaveUpdate(session)
error match {
  case Some(Failure(errorMessage)) => executeNext(newSession.markAsFailed, sent, received, KO, next, requestName, Some(errorMessage))
  case _ => executeNext(newSession, sent, received, OK, next, requestName, None)
}
The Check class comes from the Gatling core. Because the JMS environment is asynchronous, the check has to be performed later and in a more complicated way using messages. Because our JDBC environment is synchronous, there is no need to apply the checks later. Therefore, they can be passed directly to the JdbcSelectAction and applied there. For convenience, we follow the same approach as the JMS module and define a type alias:
package object jdbc {
  type JdbcCheck = Check[List[Map[String, Any]]]
}
This is not the prettiest type alias but it will suffice. As mentioned before, a list of maps is the simplest choice which does not limit the user’s checks and can represent every table.
Within our JdbcSelectAction, we apply the checks by using the Check class and, if an error occurs, we log KO values and mark the session as failed. The code for executing the checks looks like this:
private def performChecks(session: Session, start: Long, tried: List[Map[String, Any]]) = {
  val (modifySession, error) = Check.check(tried, session, checks)
  val newSession = modifySession(session)
  error match {
    case Some(failure) =>
      requestName.apply(session).map { resolvedRequestName =>
        statsEngine.logResponse(session, resolvedRequestName, ResponseTimings(start, TimeHelper.nowMillis), KO, None, None)
      }
      next ! newSession.markAsFailed
    case _ =>
      log(start, TimeHelper.nowMillis, scala.util.Success(""), requestName, session, statsEngine)
      next ! newSession
  }
}
The log() method that is being called here is just a helper method and not part of the Gatling API. Finally, the builder classes have to be extended. Because we already have two builders for SELECT, we create a trait:
trait JdbcCheckActionBuilder extends ActionBuilder {

  protected val checks: ArrayBuffer[JdbcCheck] = ArrayBuffer.empty

  def check(check: JdbcCheck): ActionBuilder = {
    checks += check
    this
  }
}
Both selection builder classes extend this trait, and within the build() method, the checks are passed to the action. Lastly, although it is a little bit of overkill, we create a JdbcCheckSupport trait. This resembles the HTTP and JMS module structure again. JdbcDsl has to extend this trait. The trait itself simply contains a single line in order to create a convenient API:
trait JdbcCheckSupport {
  def simpleCheck = JdbcSimpleCheck
}
Now we can write something like:
jdbc("selection").select("*").from("bar").where("abc=4")
  .check(simpleCheck(result => result.head("FOO") == 4))
Except for the additional simpleCheck() method, we have reached our previously defined goal. Now the users can perform arbitrary checks on the results of their selections. A simple example is shown in the SelectCheckSimulation.
Last but not least, I created two more examples in order to show that our Gatling extension is capable of more than just using the H2 in-memory database.
Examples
In order to demonstrate that our JDBC extension is truly made for JDBC and not just H2, there are two more simulations present in the example project. The classes are named de.codecentric.gatling.jdbc.InsertMySqlSimulation and de.codecentric.gatling.jdbc.InsertPostgresSimulation.
As you can easily guess, the classes show the interaction with MySQL and PostgreSQL. They are basically integration tests to show that the whole module works with other databases, too. Under src/test/resources you can find two shell scripts for starting Docker containers with the respective databases.
The End
To summarize the article, you should now know about:
- A module’s core components: Predef and *Dsl, *Protocol and *Components, *BuilderBase, ActionBuilder and Action
- How to test them
- Adding checks to your actions
Knowing about those parts should give you a better understanding of the existing Gatling extensions and how to write your own extension.
I hope you enjoyed the article and that I could show you some parts of Gatling’s internal workings that you did not know about. If you have any questions, comments etc. feel free to leave a comment in the blog or on GitHub.
Blog author
Ronny Bräunlich