
Disk IO performance diagnose and improvement on CentOS

System Stats when I/O Might Be the Bottleneck:

1. iostat -x <delay> <repeats> shows extended I/O statistics for each device:
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.58    0.00    0.69    0.08    0.00   95.65

Device: rrqm/s   wrqm/s   r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdb       0.00   121.67  0.00  454.67    0.00  7696.00    33.85     0.06    0.12    0.00    0.12   0.04   1.97
sda       0.00     3.00  0.00    5.33    0.00   857.33   321.50     0.07   12.31    0.00   12.31   2.12   1.13
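As a quick way to spot a saturated device in this output, the %util column can be scanned programmatically. A minimal sketch using inlined sample data (with sda's %util inflated here so the flag fires); the 80% threshold and the assumption that %util is the last column are mine, and column layout varies between sysstat versions:

```shell
# Flag devices whose %util (last column) exceeds a threshold in
# `iostat -x` output. Sample data is inlined for illustration; in
# practice pipe the real `iostat -x 5 3` output in instead.
iostat_sample='Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdb 0.00 121.67 0.00 454.67 0.00 7696.00 33.85 0.06 0.12 0.00 0.12 0.04 1.97
sda 0.00 3.00 0.00 5.33 0.00 857.33 321.50 0.07 12.31 0.00 12.31 2.12 91.13'

# NR > 1 skips the header row; $NF + 0 coerces %util to a number.
echo "$iostat_sample" | awk 'NR > 1 && $NF + 0 > 80 { print $1, "is", $NF "% utilized" }'
```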
2. Identify the processes that produce the most I/O: sudo iotop -oPa -d <delay> (-o shows only processes actually doing I/O, -P shows processes rather than threads, -a shows accumulated totals instead of rates)
Total DISK READ : 0.00 B/s | Total DISK WRITE :    71.88 M/s
Actual DISK READ: 0.00 B/s | Actual DISK WRITE:    78.41 M/s
   PID  PRIO  USER      DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
  6937  be/3  root        0.00 B     60.00 K   0.00 %   5.78 %  [jbd2/sdb2-8]
   875  be/3  root        0.00 B    780.00 K   0.00 %   1.02 %  [jbd2/sda1-8]
 16739  be/4  systemd-    0.00 B   1936.03 M   0.00 %   0.19 %  mysqld
   561  be/4  root        0.00 B      0.00 B   0.00 %   0.17 %  [kworker/16:1]
   568  be/3  root        0.00 B     24.00 K   0.00 %   0.15 %  [jbd2/sda3-8]
The top two jbd2 entries are ext4 journaling kernel threads.
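To total the accumulated writes across processes, the DISK WRITE column of iotop's batch output (iotop -boPa -n 1) can be summed. A sketch against inlined sample rows; the column positions (write value in field 6, unit in field 7) and the B/K/M units are assumptions that may vary by iotop version:

```shell
# Sum the accumulated DISK WRITE column from iotop batch output.
# Assumed row layout: PID PRIO USER READ unit WRITE unit SWAPIN % IO % COMMAND
iotop_sample='6937 be/3 root 0.00 B 60.00 K 0.00 % 5.78 % [jbd2/sdb2-8]
875 be/3 root 0.00 B 780.00 K 0.00 % 1.02 % [jbd2/sda1-8]
16739 be/4 systemd- 0.00 B 1936.03 M 0.00 % 0.19 % mysqld'

# $7 is the write unit, $6 the write value; normalize everything to KiB.
echo "$iotop_sample" | awk '
  $7 == "B" { kb += $6 / 1024 }
  $7 == "K" { kb += $6 }
  $7 == "M" { kb += $6 * 1024 }
  END { printf "total accumulated writes: %.1f KiB\n", kb }'
```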
Note: iotop does not work with some kernel versions. Refer to https://unix.stackexchange.com/questions/446624/error-with-command-iotop-on-centos to make it work:
As a rough first fix, edit (as root) /usr/lib/python2.7/site-packages/iotop/data.py around line 195:
def parse_proc_pid_status(pid):
    result_dict = {}
    try:
        for line in open('/proc/%d/status' % pid):
            if not line.strip(): continue
            key, value = line.split(':\t', 1)
            result_dict[key] = value.strip()
    except IOError:
        pass  # No such process

    return result_dict
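For reference, the /proc/<pid>/status format that the patched function parses is just key/value lines separated by ':\t'. The same parse can be sketched in shell against the current shell's own status file (assumes a Linux /proc is mounted):

```shell
# Mirror the Python parse_proc_pid_status loop with awk: split each
# "Key:\tvalue" line of /proc/<pid>/status on the ':\t' separator and
# print the first three key=value pairs for the current shell ($$).
awk -F ':\t' 'NF == 2 { print $1 "=" $2 }' "/proc/$$/status" | head -n 3
```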

Other notes related to configuring I/O

For my performance-testing purposes, ACID durability is not critical, so relaxing journaling increases I/O performance for the MySQL database. According to mariadb-or-mysql-ext3-ext4-jounaling:

Linux ext3/ext4 journaling options from fastest to slowest:
data=writeback
When your ext3/ext4 filesystem is mounted with the data=writeback option, only the metadata is journaled. BTW, this is the default journaling mode for the XFS and ReiserFS filesystems.

data=ordered
data=ordered journals metadata and groups data writes with their related metadata changes, writing data blocks before committing the metadata. It provides more journaling protection than data=writeback at the expense of some performance. This is the default mode for ext3/ext4.

data=journal
data=journal records metadata and all file data in the journal. It is the slowest mode, with the benefit of full protection against data corruption.
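To confirm which data= mode a mounted ext4 file system is actually using, the option string in /proc/mounts can be inspected. A sketch with sample /proc/mounts lines inlined for illustration; on a live system read /proc/mounts directly:

```shell
# Report the data= journaling mode for each ext4 mount. When no data=
# option appears in the mount options, the filesystem default applies
# (data=ordered for ext4).
mounts_sample='/dev/sda1 /local ext4 rw,relatime 0 0
/dev/sdb2 /docker ext4 rw,relatime,data=writeback 0 0'

echo "$mounts_sample" | awk '$3 == "ext4" {
  mode = "ordered (default)"
  if (match($4, /data=[a-z]+/)) mode = substr($4, RSTART + 5, RLENGTH - 5)
  print $2, "->", mode
}'
```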
So I decided to put the database on an ext4 file system mounted with data=writeback. Add an entry to /etc/fstab:
/dev/sdb2 /docker            ext4    defaults,data=writeback        1 2


More about creating the file system:
$lsblk - show the block devices
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 279.4G  0 disk
├─sda1   8:1    0 183.4G  0 part /local
├─sda2   8:2    0    64G  0 part [SWAP]
└─sda3   8:3    0    32G  0 part /
sdb      8:16   0   1.4T  0 disk
├─sdb1   8:17   0 465.7G  0 part
└─sdb2   8:18   0 928.7G  0 part /docker
sr0     11:0    1  1024M  0 rom

$sudo parted /dev/sdb - get device details and configure partitions:
(parted) print
Model: DELL PERC H730 Mini (scsi)
Disk /dev/sdb: 1497GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size   Type     File system  Flags
 1      1049kB  500GB   500GB  primary  ext4
 2      500GB   1497GB  997GB  primary  ext4
$sudo fdisk -l - check the new partition
$/sbin/mkfs -t ext4 /dev/sdb2 - format the new partition
$sudo e2label /dev/sdb2 /docker - assign a label to the new partition
Edit /etc/fstab to mount the newly created file system by adding:
/dev/sdb2 /docker   ext4    defaults,data=writeback        1 2
$sudo mount -a - mount all the file systems defined in /etc/fstab
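Before running mount -a, it is worth sanity-checking the new fstab entry, since a typo there (such as a missing leading slash on the device path) can break boot-time mounts. A minimal sketch with a hypothetical check_fstab_line helper; it only checks the field count and device spelling, not whether the device actually exists:

```shell
# Validate one fstab line: six whitespace-separated fields, and a device
# that is either an absolute path or a LABEL=/UUID= specifier.
check_fstab_line() {
  echo "$1" | awk 'NF == 6 && ($1 ~ /^\// || $1 ~ /^(LABEL|UUID)=/) { ok = 1 }
                   END { exit ok ? 0 : 1 }'
}

if check_fstab_line '/dev/sdb2 /docker ext4 defaults,data=writeback 1 2'; then
  echo "entry looks well-formed"
else
  echo "entry is malformed"
fi
```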

Change Docker Image Installation Directory:

My MySQL runs as a Docker container, so all I need to do is put Docker's storage directory on the newly created file system, where data journaling is relaxed (data=writeback).

Edit /etc/sysconfig/docker and add '-g /path' to the OPTIONS line. For example:
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false -g /docker'
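On newer Docker releases, the -g flag is deprecated in favor of the data-root key in /etc/docker/daemon.json. Assuming the same /docker mount point, the equivalent configuration would be the fragment below (restart the docker service after editing):

```json
{
  "data-root": "/docker"
}
```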


G1GC Performance Tuning Parameters In this post, I will list some common observations from G1GC logs, and the parameters you can try to tune the performance. To print GC logs, please refer to  my blog about how to print gc logs . Threads Related In JDK9, with -Xlog:gc*=info, or -Xlog:gc+cpu=info, you can get log entry like: [12.420s][info][gc,cpu      ] GC(0) User=0.14s Sys=0.03s Real=0.02s This can give you some indication about the cpu utilization for the GC pause. For example, this entry indicates for this gc pause, total user-cpu is 0.14s, wall time is 0.02s, and system time is 0.03s. The ratio of User/Real could be used as an estimation of number of gc threads you need. For this case, 18 gc threads should be good. If you see long termination time,  User/Real less than the gc threads, you can try to reduce the number of gc threads. Here is a list of threads related parameters: Name Pa