On the same box I ran the following write test:
dd if=/dev/zero of=test bs=16k count=16k conv=fdatasync
and got the following results:
OpenVZ: 52 MB/s
KVM: 6 MB/s
I set up OpenVZ manually on CentOS (since Cloudmin GPL unfortunately doesn't support OpenVZ), and KVM via Cloudmin (using the default settings, e.g. virtio).
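For reference, one way to double-check that virtio is actually in use is to look inside the guest: virtio block devices show up as /dev/vdX, and the virtio devices are listed on the PCI bus. For example:
# run inside the KVM guest
ls -l /dev/vd*
lspci | grep -i virtio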
Any idea why there is such a difference in write performance?
Howdy,
Well, all Cloudmin does is provision virtual machines using the tools available to your distribution. It's not responsible for good or bad performance (similar to how Virtualmin isn't responsible for Apache's performance). Cloudmin simply makes VPSes available.
There are likely to be performance differences between various virtualization types. The way that OpenVZ and KVM perform their virtualization is quite different, and you may be running into differences in how they each work.
KVM provides much better isolation between the various VMs, whereas OpenVZ has less isolation but runs everything within a single kernel. That may allow OpenVZ to be more efficient.
-Eric
KVM seems to be really bad then. I would understand maybe a 30% overhead, but almost 1000% is really bad, right?
Yeah, I would have expected the difference to be lower than that.
Were both tests run on the same machine, with the same Linux distribution and kernel version?
-Eric
Yes, same machine and same distribution (CentOS 5.6), with a reinstall before each test. Regarding the kernel I'm not so sure, since OpenVZ requires a custom kernel, right?
Do you know what kernel version it was you used to test OpenVZ?
I believe the default kernel on CentOS 5.6 is 2.6.18 -- for an accurate comparison, you'd want to use a similar version when testing OpenVZ.
-Eric
Yes, the OpenVZ and KVM tests were based on the same kernel version, 2.6.18. I actually also tried LVM for KVM instead of file images, but the result was virtually the same. I wonder what your test results were (in case you tried)?
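By LVM I mean something like this (the volume group and logical volume names are just examples):
# create a 10 GB logical volume to use as the guest disk
lvcreate -L 10G -n kvmtest vg0
# attach the logical volume to the guest instead of a file-backed image
kvm -m 1024 -drive file=/dev/vg0/kvmtest,if=virtio,media=disk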
Sorry, I don't have an OpenVZ or KVM test system handy to run tests like that on. Hopefully someone else who does can chime in!
I did some Google searches for VPS benchmarks, but found few concrete numbers out there.
-Eric
I asked okinobk to run the same test in this topic, with the same results. Maybe KVM really is that bad?
What type of virtual disk format does Cloudmin use? (raw, qcow, qcow2, vmdk...)
I checked it with the command "qemu-img info" and the format is raw.
What do you think about converting the image to qcow2? Maybe this helps.
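Something like this should do the conversion (the image file name is just an example):
# check the current format
qemu-img info image.img
# convert the raw image to qcow2 (writes a new file and leaves the original untouched)
qemu-img convert -O qcow2 image.img image.qcow2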
I found a solution. Please test it.
See http://blog.bodhizazen.net/linux/improve-kvm-performance/, section "Cache writeback option".
I now run my VM with "-drive file=image.img,cache=writeback,media=disk" instead of "-hda image.img", and my performance is:
dd if=/dev/zero of=test bs=16k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
268435456 bytes (268 MB) copied, 3.4728 s, 77.3 MB/s
OR "-drive file=image.img,cache=none,media=disk"
http://www.linux-kvm.org/page/Tuning_KVM
dd if=/dev/zero of=test bs=16k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
268435456 bytes (268 MB) copied, 2.48537 s, 108 MB/s
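For completeness, a full invocation with the cache option would look roughly like this (the memory size, image path and virtio options are just examples, not the exact Cloudmin command line):
# 'kvm' here is the qemu-kvm binary
kvm -m 1024 -smp 1 \
    -drive file=/path/to/image.img,if=virtio,cache=writeback,media=disk \
    -net nic,model=virtio -net user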
Thanks for the test. I think in another topic you mentioned that the test on your host gave 99 MB/s, but KVM gets 108 MB/s. Is there an explanation? That doesn't make sense to me.
Yes, that's right. I don't know why.
Maybe it's because I run my host on a USB flash drive and my KVM instances are on SATA disks.
What kind of USB flash drive did you use? 100 MB/s is pretty fast for a USB flash drive :-)