Introduction:

A lot of our performance tests are driven by Gatling. When we moved these tests to Google Cloud, we faced a number of questions, one of which is how to manage the lifecycle of a test run. First, we need to generate the .csv file that feeds the Gatling test. Second, we need to know when the test has finished. Third, we need to retrieve the results from the cloud.

According to Distributed load testing with Gatling and Kubernetes, a Kubernetes Job should be used. While that blog provides good information, I still needed to figure out how to create the feeder .csv and share it with the Gatling script. Using InitContainers to pre-populate Volume data in Kubernetes gave me another piece of the puzzle. In this blog post, I will show you my experiment of creating a Kubernetes Job to drive a performance workload with Gatling.

Create Feeder script:

My Gatling script reads the feeder .csv (benchmark.csv). I have a Python script that generates the benchmark.csv file.
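The generator itself is not shown here, so below is a minimal sketch of what such a script might look like. The column names, row count, and example URLs are hypothetical and stand in for whatever fields the simulation actually expects from its feeder:

import csv

# Sketch of the feeder generator: writes benchmark.csv for the Gatling scenario.
# The columns (user_id, request_url) and the row count are made up for illustration.
rows = 1000

with open("benchmark.csv", "w", newline="") as f:
    writer = csv.writer(f)
    # the header row becomes the Gatling session attribute names
    writer.writerow(["user_id", "request_url"])
    for i in range(rows):
        writer.writerow(["user_%d" % i, "https://example.com/items/%d" % i])

On the Gatling side the file is then loaded as a feeder, for example with csv("benchmark.csv"), and in the Kubernetes setup the generator can run as an init container that writes benchmark.csv to a volume shared with the Gatling container, which is where the init-container technique from the second blog comes in.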
Introduction:

Gatling is a lightweight load generator. Though it is 'load test as code', it can sometimes be hard to debug, especially when documentation and tutorials are lacking. I ran into an issue when upgrading from 2.3.1 to 3.0.2, and the error message was not very helpful. Here I will walk through this case to illustrate an upgrade issue, as well as some tricks for printing out debug information.

Some Debug Tricks:

According to the Gatling documentation, you can print out debug information in two ways.

1. Change conf/logback.xml to set the logger named "io.gatling.http.engine.response" to DEBUG or TRACE. This generates too much information and my tests time out.

2. Use println in exec. This is more flexible and prints exactly the information I need. However, I would like to put the debug messages in a separate file. Here is an example that appends the session contents to a separate debug file:

val file = System.getProperty("file", "/tmp/debug_cookie.out")

val scn = scenario(myReadScenario)
  .during(myDur) {
    exec { session =>
      // append whatever needs inspecting to the debug file
      val out = new java.io.FileWriter(file, true)
      out.write(session.toString + "\n")
      out.close()
      session
    }
  }