
Posts

Performance Test with Gatling in Cloud

Introduction: A lot of our performance tests are driven by Gatling. When moving our tests to Google Cloud, we faced a number of questions. One of them is how to manage the life cycle of a test. First, we need to generate the .csv file that feeds the Gatling test. Second, we need to know when the test is finished. Third, we need to retrieve the results from the cloud. According to Distributed load testing with Gatling and Kubernetes, a Kubernetes Job should be used. While that blog provides good information, I still needed to figure out how to create the feeder .csv and share it with the Gatling script. Using InitContainers to pre-populate Volume data in Kubernetes provides another piece of the puzzle. In this blog post, I will show you my experiment of creating a Kubernetes Job to drive a performance workload with Gatling.

Create Feeder script: My Gatling script reads the feeder .csv (benchmark.csv). I have a Python script that generates the benchmark.csv file.
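The preview ends there. As a minimal sketch of how the Gatling side consumes that file, the snippet below declares benchmark.csv as a csv feeder and feeds it into a scenario. Only the benchmark.csv file name comes from the post; the simulation class name, target URL, request path and the "id" column are assumptions for illustration.

import io.gatling.core.Predef._
import io.gatling.http.Predef._

class BenchmarkSimulation extends Simulation {

  // benchmark.csv is produced by the Python feeder script; its header row
  // becomes the session attribute names (an "id" column is assumed here).
  val feeder = csv("benchmark.csv").circular

  val httpProtocol = http.baseUrl("http://my-service:8080") // hypothetical target

  val scn = scenario("benchmark")
    .feed(feeder)
    .exec(http("read item").get("/items/${id}"))

  setUp(scn.inject(atOnceUsers(10))).protocols(httpProtocol)
}

With the initContainer pattern from the referenced post, the Python script would write benchmark.csv into a shared volume before this simulation starts, so the feeder can read it from the container's working directory.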
Recent posts

Gatling Load Test Tricks and Debug

Introduction: Gatling is a lightweight load generator. Though it is 'load test as code', it can be hard to debug, especially when documentation and tutorials are lacking. I ran into an issue when upgrading from 2.3.1 to 3.0.2, and the error message was not very helpful. Here I will use this case to illustrate an upgrade issue, as well as tricks to print out some debug information. Some Debug Tricks: According to the Gatling documentation, you can print out debug information in two ways. First, change conf/logback.xml to DEBUG/TRACE for logger name="io.gatling.http.engine.response"; this generates too much information and my tests time out. Second, use println in exec; this is more flexible and prints exactly the information I need. However, I would like to put the debug messages in a separate file. Here is an example:
val file = System.getProperty("file", "/tmp/debug_cookie.out")
val scn = scenario(myReadScenario)
  .during (myDur
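The preview cuts the example off there. As a self-contained sketch of the same trick, the snippet below appends debug output to the file named by the "file" system property from inside an exec session function. Only the /tmp/debug_cookie.out default comes from the post; the scenario name, request and printed attributes are assumptions.

import java.io.{FileWriter, PrintWriter}
import io.gatling.core.Predef._
import io.gatling.http.Predef._

class DebugToFileSimulation extends Simulation {

  // Debug output goes to a separate file, overridable with -Dfile=...
  val file = System.getProperty("file", "/tmp/debug_cookie.out")

  val scn = scenario("read")                     // scenario name is illustrative
    .exec(http("read").get("/read"))             // hypothetical request
    .exec { session =>
      // Append whatever we want to inspect, instead of println to stdout
      val out = new PrintWriter(new FileWriter(file, true))
      out.println(s"user=${session.userId} session=$session")
      out.close()
      session
    }

  setUp(scn.inject(atOnceUsers(1)))
    .protocols(http.baseUrl("http://localhost:8080")) // hypothetical target
}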

Disk IO performance diagnosis and improvement on CentOS

System Stats when I/O Might Be the Bottleneck: 1. iostat -x <delay> <repeats> shows the I/O for each device:
Device:  rrqm/s   wrqm/s    r/s     w/s   rkB/s    wkB/s  avgrq-sz  avgqu-sz  await  r_await  w_await  svctm  %util
sdb        0.00   121.67   0.00  454.67    0.00  7696.00     33.85      0.06   0.12     0.00     0.12   0.04   1.97
sda        0.00     3.00   0.00    5.33    0.00   857.33    321.50      0.07  12.31     0.00    12.31   2.12   1.13
avg-cpu:  %user   %nice  %system  %iowait  %steal   %idle
           3.58    0.00     0.69     0.08    0.00   95.65
2. Identify the process that produces the most I/O with sudo iotop -oPa -d <delay> (shows per-process, accumulated I/O):
Total DISK READ :  0.00 B/s | Total DISK WRITE :  71.88 M/s
Actual DISK READ:  0.00 B/s | Actual DISK WRITE:  78.41 M/s
  PID  PRIO  USER   DISK READ  DISK WRITE  SWAPIN     IO>   COMMAND
 6937  be/3  root     0.00 B     60.00 K   0.00 %  5.78 %   [jbd2/sdb2-8]
  875  be/3  root

Profiler Survey - JMC, Solaris Studio, FlameGraph and VTune

Introduction: In this post, I will compare commonly used Java profilers (Java Flight Recorder, Solaris Studio, FlameGraph and VTune) in terms of usage, overhead and the data presented. The observations are based on default collections; if more data are collected, the observations may change. I run SPECjbb2015 at a preset IR for 5 minutes. During these 5 minutes, I collect system-level statistics and the SPECjbb2015 console output, and collect profiles for 60 seconds (except for VTune).
Software and Hardware: JDK8u144; Ubuntu 16.04.1 LTS; Intel(R) Xeon(R) CPU E5-2665 0 @ 2.40GHz, 2 sockets, 8 cores per socket.
Java Flight Recorder (JFR): JFR gives a Java-level profile and some JVM internal statistics. There are two ways to start JFR: from the java command line, or from jcmd. Command Line: from the command line, add -XX:+UnlockCommercialFeatures -XX:+FlightRecorder -XX:StartFlightRecording=parameter1=value1[,parameter2=value2]. Parameters for -XX:StartFlightRecording (ref
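The preview is cut off above. As a hedged illustration of the two ways to start JFR on JDK 8, the commands below use a 60-second recording; the duration, output path and jar invocation are placeholders, not the exact commands from the post.

# Start a recording from the java command line (duration, file name and jar are illustrative):
java -XX:+UnlockCommercialFeatures -XX:+FlightRecorder \
     -XX:StartFlightRecording=duration=60s,filename=/tmp/profile.jfr \
     -jar specjbb2015.jar

# Or start and check a recording on a running JVM via jcmd
# (on JDK 8 the JVM must already have the commercial-feature flags enabled):
jcmd <pid> JFR.start duration=60s filename=/tmp/profile.jfr
jcmd <pid> JFR.check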