Linux QEMU/KVM direct disk write performance
Posted: Wed Mar 19, 2014 7:33 pm
I'm setting up a system on a Backblaze storage pod with 45 drives.
The setup is roughly as follows:
1) The OS on the storage pod is Debian Wheezy (7), acting as KVM-host
2) One VM-client runs RHEL 6.5
The disks are partitioned into 3x RAID6 arrays (14 active disks + 1 hot spare each) using Linux kernel software RAID (md). All 3 arrays were initialized identically and formatted with XFS.
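For reference, each array was created along these lines (a sketch only; the actual device names differ per array, and the chunk size is whatever mdadm picks by default):
Code:
$ mdadm --create /dev/md1 --level=6 --raid-devices=14 --spare-devices=1 /dev/sd[b-p]   # 15 devices: 14 active + 1 spare (names assumed)
$ mkfs.xfs /dev/md1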
Each of these 3 software RAIDs is passed through to the VM-client by adding a block like this to the libvirt domain XML (/etc/libvirt/qemu/<vm_name>.xml):
Code:
<disk type='block' device='disk'>
<driver name='qemu' type='raw'/>
<source dev='/dev/md1'/>
<target dev='vdb1' bus='virtio'/>
</disk>
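A side note: libvirtd doesn't reliably pick up manual edits to that file while it's running, so I apply changes through virsh and then restart the guest:
Code:
$ virsh edit <vm_name>      # opens the domain XML in $EDITOR and redefines it
$ virsh shutdown <vm_name>  # disk changes take effect on the next boot
$ virsh start <vm_name>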
In order to find out which caching and I/O strategy works best, I've set up each RAID differently for some testing (after reading IBM's best-practice article about KVM caching):
- /dev/md1: cache=writethrough, io=native
- /dev/md2: cache=none, io=native
- /dev/md3: cache=directsync, io=native
The resulting <disk> definitions look like this:
Code:
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='writethrough' io='native'/>
<source dev='/dev/md1'/>
<target dev='vdb1' bus='virtio'/>
</disk>
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='none' io='native'/>
<source dev='/dev/md2'/>
<target dev='vdc1' bus='virtio'/>
</disk>
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='directsync' io='native'/>
<source dev='/dev/md3'/>
<target dev='vdd1' bus='virtio'/>
</disk>
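To verify the settings actually landed in the running definition, a quick check:
Code:
$ virsh dumpxml <vm_name> | grep driver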
The 3 RAID partitions are mounted as "/exports/brick{1,2,3}". I've now run read/write tests on the VM-host (Debian 7) as well as in the VM-client (RHEL 6.5).
Write-Test #1
Write tests were done with ~1 GB files (1000 x 1 MiB blocks) using dd, with and without "oflag=direct":
Code:
$ for i in 1 2 3; do dd if=/dev/zero of=/exports/brick$i/testing/1gb bs=1024k count=1000 oflag=direct; done
- VM-Host:
- (normal) average 81 MB/s
- (oflag=direct) average 40 MB/s
- VM-Client (normal):
- writethrough/native = 12.0 MB/s
- none/native = 82.5 MB/s
- directsync/native = 51.5 MB/s
- VM-Client (oflag=direct):
- writethrough/native = 24.2 MB/s
- none/native = 43.2 MB/s
- directsync/native = 33.3 MB/s
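A caveat on my "normal" (buffered) numbers: without a sync, dd reports how fast the page cache absorbs the data, not how fast the disks do. A fairer buffered test would flush at the end, e.g.:
Code:
$ for i in 1 2 3; do dd if=/dev/zero of=/exports/brick$i/testing/1gb bs=1024k count=1000 conv=fdatasync; done   # fdatasync forces data to disk before dd reports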
Read-Test #1
Read tests were done on the same ~1 GB files using dd, with and without "iflag=direct":
Code:
$ for i in 1 2 3; do dd if=/exports/brick$i/testing/1gb of=/dev/null bs=1024k count=1000 iflag=direct; done
- VM-Host:
- (normal) 208 MB/s
- (iflag=direct) average 165 MB/s
- VM-Client (normal):
- writethrough/native = 1.1 GB/s
- none/native = 124 MB/s
- directsync/native = 122 MB/s
- VM-Client (iflag=direct):
- writethrough/native = 2.3 GB/s (I don't trust this value)
- none/native = 151 MB/s
- directsync/native = 141 MB/s
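The GB/s read figures under writethrough are almost certainly the host page cache answering instead of the disks (writethrough keeps read data cached on the host). To measure the arrays rather than RAM, the caches should be dropped before each read run, on the host and in the guest:
Code:
# sync && echo 3 > /proc/sys/vm/drop_caches   # as root; drops pagecache, dentries and inodes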