
Performance Test with Gatling in Cloud

Introduction:

Many of our performance tests are driven by Gatling. When moving these tests to Google Cloud, we faced a number of questions, one of which is how to manage the life cycle of a test. First, we need to generate the .csv file that serves as the feeder for the Gatling test. Second, we need to know when the test has finished. Third, we need to retrieve the results from the cloud.

According to Distributed load testing with Gatling and Kubernetes, a Kubernetes Job should be used. While that blog provides good information, I still needed to figure out how to create the feeder .csv and share it with the Gatling script. Using Init Containers to pre-populate Volume data in Kubernetes provided another piece of the puzzle.

In this blog post, I will show you my experiment with creating a Kubernetes Job that drives a performance workload with Gatling.

Create Feeder script:

My Gatling script reads a feeder .csv (benchmark.csv), which I generate with a Python script.

Usage: create-feeder.py -p <path/filename> -b <path/filename> -m <preload_records> -n <benchmark_records>
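The original create-feeder.py is not shown in this post. Here is a minimal sketch of what such a generator might look like; the column layout (username;givenname;familyname;email;password) and the semicolon separator are assumptions inferred from the Gatling script below, which reads the file with ssv():

```python
#!/usr/bin/env python
"""Minimal sketch of a feeder generator. The real create-feeder.py is not
shown in the post; the column layout is assumed from the Gatling script."""
import argparse


def write_feeder(path, count, prefix):
    # Gatling's ssv() feeder expects semicolon-separated values,
    # one record per line, in the column order the simulation reads.
    with open(path, "w") as f:
        for i in range(count):
            f.write("%suser%d;First%d;Last%d;user%d@example.com;Passw0rd%d\n"
                    % (prefix, i, i, i, i, i))


def main():
    # Mirrors the usage line above: -p preload file, -b benchmark file,
    # -m preload record count, -n benchmark record count.
    parser = argparse.ArgumentParser()
    parser.add_argument("-p", dest="preload", required=True)
    parser.add_argument("-b", dest="bench", required=True)
    parser.add_argument("-m", dest="preload_records", type=int, default=10)
    parser.add_argument("-n", dest="bench_records", type=int, default=3000)
    args = parser.parse_args()
    write_feeder(args.preload, args.preload_records, "pre-")
    write_feeder(args.bench, args.bench_records, "")
```

From the command line, main() would be invoked as in the usage line above, e.g. with -p /tmp/preload.csv -b /tmp/bench.csv -m 10 -n 3000.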

Gatling Script:

Here is my Gatling script, ManUsersCreatePut.scala. Some of its parameters can be set from the Java command line as system properties.
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import io.gatling.http.protocol.HttpProtocolBuilder

class ManUsersCreatePut extends Simulation {
    // Get params from -D system properties, with defaults
    val concurrency: Integer = Integer.getInteger("concurrency", 10)
    val warmup: Integer = Integer.getInteger("warmup", 10)
    val idmHost: String = System.getProperty("idm_host", "myhost.com")
    val idmPort: String = System.getProperty("idm_port", "443")
    val idmProtocol: String = System.getProperty("idm_protocol", "http")
    val duration: Integer = Integer.getInteger("duration", 60)
    val csvfile: String = System.getProperty("csvfile", "bench.csv")

    val idmUrl: String = idmProtocol + "://" + idmHost + ":" + idmPort
    val httpProtocol: HttpProtocolBuilder = http
        .baseURLs(idmUrl)
        .inferHtmlResources()
        .contentTypeHeader("""application/json""")
        .header("X-OpenIDM-Username", "openidm-admin")
        .header("X-OpenIDM-Password", "openidm-admin")
        .userAgentHeader("Robot/Gatling")
        .disableCaching // without this nginx ingress starts returning 412

    // Get data: semicolon-separated feeder, consumed as a queue
    val userData = ssv(csvfile).queue

    val scn = scenario("IDM Create Managed Users")
        .during(duration) {
            feed(userData)
                .exec(http("Create Managed Users Put")
                    .put("/openidm/managed/user/${username}")
                    .body(StringBody("""{
                        "givenName" : "${givenname}",
                        "sn" : "${familyname}",
                        "mail" : "${email}",
                        "telephoneNumber" : "444-444-4444",
                        "password" : "${password}",
                        "description" : "Managed User",
                        "userName" : "${username}"
                    }""")).asJSON)
        }

    setUp(scn.inject(atOnceUsers(concurrency))).protocols(httpProtocol)
}

Kubernetes ConfigMap:

I used a configMap to load the Python script (create-feeder.py) and the Gatling script (ManUsersCreatePut.scala) into the Google cluster. This is a temporary workaround.

Create configmap:
kubectl create configmap prepare-work \
    --from-file=<path>/feeder/create-feeder.py \
    --from-file=<path>/benchmarks/ManUsersCreatePut.scala

Kubernetes Job:

Before you begin, you should have a basic understanding of Kubernetes Jobs.

With a Job, we can create the feeder .csv file in an initContainer, run multiple pods that execute the Gatling script and read from the same feeder file, and see the pods reach the 'Complete' state when the job is done.

Here is my gatling-job.yaml:

apiVersion: batch/v1
kind: Job
metadata:
  name: gatling-test
spec:
  template:
    metadata:
      name: test
    spec:
      volumes:
        - name: config-volume
          configMap:                       # (1)
            name: prepare-work
        - name: tmp-data                   # (2)
          hostPath:
            path: /tmp
            type: Directory
      initContainers:
        - name: feeder
          image: python:alpine3.7          # (4)
          imagePullPolicy: Always
          volumeMounts:
            - name: config-volume          # (5)
              mountPath: /etc/gatling-data
            - name: tmp-data               # (6)
              mountPath: /tmp
          command: [ "python", "/etc/gatling-data/create-feeder.py" ]  # (7)
          args: [ "-p", "/tmp/preload.csv", "-b", "/tmp/bench.csv", "-m", "10", "-n", "3000" ]
      containers:
        - name: gatling-run
          image: forgerock-docker-public.bintray.io/forgerock/gatling:6.5.0  # (3)
          imagePullPolicy: Always
          command:                         # (8)
          - /bin/bash
          - -c
          - cp /etc/gatling-data/ManUsersCreatePut.scala /opt/gatling/user-files/simulations/ &&
            JAVA_OPTS="-Dcsvfile=/tmp/bench.csv -Didm_host=openidm -Didm_port=80" gatling.sh -s ManUsersCreatePut
          volumeMounts:
            - name: config-volume          # (9)
              mountPath: /etc/gatling-data
            - name: tmp-data               # (10)
              mountPath: /tmp
      restartPolicy: Never
  backoffLimit: 1

1 - use configMap prepare-work as the volume config-volume
2 - volume tmp-data is defined so that both the initContainer and the container can access the directory /tmp
3 - pull the Gatling image for the container
4 - pull the Python image for the initContainer
5, 9 - for both the initContainer and the container, mount config-volume at /etc/gatling-data
6, 10 - for both the initContainer and the container, mount tmp-data at /tmp
7 - call create-feeder.py to generate the feeder file at /tmp/bench.csv
8 - copy the Gatling script to /opt/gatling/user-files/simulations, then run gatling.sh. Note that the simulation parameters are passed as system properties via JAVA_OPTS
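Back to the life-cycle questions from the introduction: to know when the test has finished, you can poll the Job status, for example by parsing the output of kubectl get job gatling-test -o json. A batch/v1 Job reports completion through status.conditions, so the decision logic can be a small function over the parsed status dict. This is a sketch of that check, not part of the original setup:

```python
"""Sketch: decide whether a Kubernetes Job has finished, given its parsed
status (e.g. json.loads of `kubectl get job gatling-test -o json`).
Field names follow the batch/v1 Job status schema."""


def job_state(status):
    """Return 'complete', 'failed', or 'running' for a Job status dict."""
    for cond in status.get("conditions", []):
        # A finished Job carries a condition of type Complete (or Failed)
        # with status "True".
        if cond.get("type") == "Complete" and cond.get("status") == "True":
            return "complete"
        if cond.get("type") == "Failed" and cond.get("status") == "True":
            return "failed"
    return "running"
```

Once the state is 'complete', the results can be copied out of the pod with kubectl cp; assuming Gatling's home in this image is /opt/gatling, the reports should land under its results directory.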
