Hi, I’m posting this more as a point of reference for myself than anything else, but someone might find it interesting - you never know.
I’ve been working on a new product designed to provide shared / high-performance / high-availability storage for virtual machines on commodity PC hardware - essentially proper virtualisation for people without bottomless pockets. The idea is to serve data over a LAN from storage servers, then have the VM server RAID10 the data and cache part of it locally on SSD. A sort of mash-up of ideas: network RAID10 meets hybrid drives, if you will.
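For anyone curious about the layering (and purely as an illustration - this is my own code rather than these tools, and the hostnames / device paths below are made up), the rough equivalent built from stock Linux pieces would be NBD imports plus md RAID10 on the VM host:

nbd-client store1.lan 10809 /dev/nbd0   # import a block device from each storage server
nbd-client store2.lan 10809 /dev/nbd1
nbd-client store3.lan 10809 /dev/nbd2
nbd-client store4.lan 10809 /dev/nbd3
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3   # network RAID10 on the VM host

…with an SSD caching layer (flashcache / bcache territory) sitting in front of /dev/md0, which is the part my code does differently.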
The product is now at a point where it runs usefully: I can install a copy of Ubuntu on top of the infrastructure and it runs without losing or corrupting anything. Indeed, I’ve spent the last couple of days doing some performance tuning and have the throughput up by a number of orders of magnitude. (debug eh!!)
Anyway, here’s my first set of results. I’m using GNOME’s pretty desktop benchmarking tool for comparison purposes. The first two graphs are taken from my development workstation, which runs a 64G SSD and a bunch of 500G Caviar Black HDs. The last graph is taken from “within” a KVM instance running on the new product, configured with 10G of shared storage (backed by 1TB Fujitsu Spinpoints) with a 1G cache running on a 64G SSD.
- again, bear in mind the first two graphs are local devices …

[also bear in mind my SSDs are limited to SATA II speeds]
[smg id=1568 type=full caption=“Local SSD”]
[smg id=1569 type=full caption=“Local HDD”]
[smg id=1570 type=full caption=“VDC Network RAID10 / Cache”]
Granted the Fujitsus are a little quicker than the Caviars, but the contrast between VDC and a local hard drive is quite interesting … 
[Note; I know the SSD can’t do 3G/sec and the speed is due to various levels of memory cache; however, this was a ‘real’ test at a random point in time, without any pre-caching or understanding of exactly what the GNOME tool is doing, so to an extent it is indicative of the expected average performance of a VM running on the product]
I have no idea what those graphs are telling me … if I’m reading this right, the setup appears to have MASSIVE burst speed (SSD caching), with the VM having access to an average read rate of roughly twice that of a local SSD (partially due to the SSD caching and the rest from striping) ?
I must admit that, not knowing how KVM caches stuff and what it was doing, it’s hard to make much sense of those graphs … but the impressive numbers seem to speak for themselves.
Or am I getting this totally wrong ?
How resilient is this to, say, an SSD (cache) failure?
–
Sure, I’m not going to try to quantify the figures at this stage because it’s not ‘primarily’ about performance (and anyway, I’m certainly not done optimising; I’ve just reached a point where it’s fast enough to justify its existence, so I’m moving on to more features before I worry about extra percentages on the throughput);
The main considerations are;
- There is more than one complete copy of your data, in this instance on more than one machine
- If a machine holding your data is rebooted, you won’t notice, and it recovers automatically, re-mirroring only the changed blocks (v.quick)
- Because the data is available on the network, you can ‘live migrate’ your KVM instance between machines
- I’m supporting up to 7 replicas at the moment, i.e. RAID10 with 7 stripes
- I will be supporting ‘remote’ replicas, so you’ll be able to live migrate between physical locations
- The network connection is NBD compatible, so you can use pre-existing images in either RAW or QCOW* format
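To give a flavour of the NBD bit (the hostname, port and image name here are invented, and this is just stock qemu tooling rather than anything specific to my code), you can either boot a guest straight off an NBD export, or attach an existing image as a block device:

kvm -drive file=nbd:store1.lan:10809,if=virtio ...   # boot a guest directly from an NBD export
qemu-nbd --connect=/dev/nbd0 existing-image.qcow2    # or expose an existing QCOW2 image as /dev/nbd0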
SSD resilience isn’t “too” critical as the SSD is only used for cache; however, two additional features are planned;
- RAID0 for the Cache device (i.e. striping over two devices)
- RAID10 for the Cache device (i.e. striping + mirroring over two devices)
The former adds capacity and read/write speed; the latter gives you more read speed plus resilience against a failed cache SSD.
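For the curious, the cache-side arrangement is roughly what you’d get from stock mdadm over two SSDs (device names invented, and the product will do this internally rather than via md):

mdadm --create /dev/md/cache --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1    # striped cache: more capacity and speed, no resilience
mdadm --create /dev/md/cache --level=10 --raid-devices=2 /dev/sdb1 /dev/sdc1   # striped + mirrored: md’s raid10 level works over two devices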
Formal benchmarks will come, and I’m sure they’ll look a little different, but for now, try spinning up a VM on your machine and run;
palimpsest
You may have to install it first, but when it’s running, do a read-only benchmark and post the resulting graph … assuming your VM is running off a local hard drive, it would be an interesting comparison. (Although again, its raison d’être isn’t performance … if it was you’d just run on local SSDs … )
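On Ubuntu that should just be (package name from memory, so do check it):

sudo apt-get install gnome-disk-utility   # provides palimpsest, if I remember rightly
palimpsest                                # then pick the drive and run the read-only benchmark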
OK, I’ve got something to put this into context. You know Amazon? Well, they have a cloud-based service (Amazon AWS), which is the most established / leading cloud services provider, and I’m afraid over the years I’ve been a little critical of their offering - partly because of the (rather exorbitant) price, but mainly because of the performance.
I run an instance for offsite monitoring purposes, so I’ve just installed and run the benchmark on it … so this is a live / commercial instance that Amazon are providing as a service … how does that compare to my really cheap home-made kit and home-made software …
Bear in mind “my” main concern with VM instances is IO latency (i.e. the time it takes the IO subsystem to respond to a request) rather than the actual linear throughput, as most access tends to be small and random; whilst it’s nice to copy large files quickly, it’s not a frequent operation.
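If you want a number for latency specifically rather than eyeballing graphs, something like a queue-depth-1 4k random read run gets at it - this is a generic fio invocation, and the device path is made up:

fio --name=latency --filename=/dev/vda --readonly --direct=1 --rw=randread --bs=4k --iodepth=1 --runtime=30 --time_based   # the average completion latency here is what a VM actually feels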
[smg id=1571 type=full caption=“Amazon AWS performance”]
So, “why do I want two copies of my data available via the network”?
