Linux QEMU/KVM direct disk write performance

Post by peter_b »

I'm setting up a system on a Backblaze storage pod with 45 drives.
The setup is roughly as follows:

1) OS on the storage pod is Debian Wheezy, acting as KVM-host
2) One virtual client is RHEL 6.5

The disks are grouped into 3 RAID6 arrays (14 disks + 1 spare each) using Linux kernel software RAID. All 3 RAIDs are initialized identically and formatted with XFS.
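
For reference, a minimal sketch of how one such array could be created and formatted (device names are illustrative, not my exact commands):

Code:

# one RAID6 array from 14 member disks plus 1 hot spare
mdadm --create /dev/md1 --level=6 --raid-devices=14 --spare-devices=1 /dev/sd[b-p]
# format it with XFS
mkfs.xfs /dev/md1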

Each of these 3 SoftRAIDs is passed through to the VM-client by adding a block like this to the qemu XML (/etc/libvirt/qemu/<vm_name>.xml):

Code:

<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/md1'/>
  <target dev='vdb1' bus='virtio'/>
</disk>
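
If you'd rather not edit the file by hand, the same <disk> block can also be applied through libvirt (just a sketch; "disk-md1.xml" is a placeholder for a file containing the block above):

Code:

# open the domain XML in an editor via libvirt
virsh edit <vm_name>
# or attach the disk from a snippet file and persist it in the domain definition
virsh attach-device <vm_name> disk-md1.xml --persistent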
To find out which caching and I/O strategy works best (after reading IBM's best-practice article about KVM caching), I've set up each RAID differently for testing:
  • /dev/md1: cache=writethrough, io=native
  • /dev/md2: cache=none, io=native
  • /dev/md3: cache=directsync, io=native

Code:

<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='writethrough' io='native'/>
  <source dev='/dev/md1'/>
  <target dev='vdb1' bus='virtio'/>
</disk>
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/md2'/>
  <target dev='vdc1' bus='virtio'/>
</disk>
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='directsync' io='native'/>
  <source dev='/dev/md3'/>
  <target dev='vdd1' bus='virtio'/>
</disk>
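
A quick way to double-check that these settings actually reached QEMU (a sketch; <vm_name> is a placeholder, and the process name may differ depending on how the guest was started):

Code:

# libvirt's view of the driver settings (cache= and io= should match the XML above)
virsh dumpxml <vm_name> | grep "driver name='qemu'"
# the same settings end up as cache=.../aio=... in the -drive options of the QEMU process
ps -ef | grep [q]emu | grep -o 'cache=[a-z]*'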
The 3 RAID partitions are mounted as "/exports/brick{1,2,3}".
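
Inside the VM-client the passed-through arrays show up as plain virtio disks; a minimal mount sketch, assuming they enumerate as /dev/vdb, /dev/vdc and /dev/vdd (and only mounting on one side at a time; the same XFS must never be mounted on host and guest simultaneously):

Code:

mkdir -p /exports/brick{1,2,3}
mount /dev/vdb /exports/brick1
mount /dev/vdc /exports/brick2
mount /dev/vdd /exports/brick3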

I've now run read/write tests on the VM-host (Debian 7) as well as on the VM-client (RHEL 6.5).

Write-Test #1
Write tests were done with ~1 GB files (bs=1024k, count=1000) using "dd", with and without "oflag=direct":

Code:

$ for i in 1 2 3; do dd if=/dev/zero of=/exports/brick$i/testing/1gb bs=1024k count=1000 oflag=direct; done
  • VM-Host:
    • (normal) average 81 MB/s
    • (oflag=direct) average 40 MB/s
  • VM-Client (normal):
    • writethrough/native = 12.0 MB/s
    • none/native = 82.5 MB/s
    • directsync/native = 51.5 MB/s
  • VM-Client (oflag=direct):
    • writethrough/native = 24.2 MB/s
    • none/native = 43.2 MB/s
    • directsync/native = 33.3 MB/s

Read-Test #1
Read tests were done with ~1 GB files using "dd", with and without "iflag=direct":

Code:

$ for i in 1 2 3; do dd of=/dev/zero if=/exports/brick$i/testing/1gb bs=1024k count=1000 iflag=direct; done
  • VM-Host:
    • (normal) 208 MB/s
    • (iflag=direct) average 165 MB/s
  • VM-Client (normal):
    • writethrough/native = 1.1 GB/s
    • none/native = 124 MB/s
    • directsync/native = 122 MB/s
    Executing the same test a second time gives cached results of around 6.5 GB/s (see the cache-drop snippet after this list).
  • VM-Client (iflag=direct):
    • writethrough/native = 2.3 GB/s :shock: (I don't trust this value)
    • none/native = 151 MB/s
    • directsync/native = 141 MB/s
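
To get uncached numbers on a repeat run, the page cache can be dropped between runs (run as root on the machine doing the reading; this is the standard procfs knob, nothing KVM-specific):

Code:

# flush dirty data, then drop page cache, dentries and inodes
sync
echo 3 > /proc/sys/vm/drop_caches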

Re: Linux QEMU/KVM direct disk write performance

Post by peter_b »

My current setup now produces almost the same write speeds onto the MD RAIDs on the VM-client as directly on the VM-host.
Reading is still a bit slower (156 MB/s on the client, 200 MB/s on the host).

The configuration is as follows:

VM-host (Debian):
  • kernel boot parameter: elevator=deadline (see the sketch after this list for checking/switching the scheduler at runtime)
VM-client (RHEL):
  • kernel boot parameter: elevator=noop
  • QEMU RAID device parameters: cache=none, io=native
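
A small sketch for checking and switching the I/O scheduler without rebooting, plus making it stick on the Debian host (/dev/sda is illustrative; the RHEL 6 guest uses /boot/grub/grub.conf instead of GRUB2's config):

Code:

# the active scheduler is shown in brackets
cat /sys/block/sda/queue/scheduler
# switch it at runtime for a quick test
echo deadline > /sys/block/sda/queue/scheduler
# to make it permanent, add elevator=deadline to GRUB_CMDLINE_LINUX_DEFAULT
# in /etc/default/grub and run update-grub before rebooting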