How to determine the performance of a local disk array for ESXi

I needed to create an estimation guide to tell system engineers how many virtual machines (running my company's applications) could be consolidated onto a single ESXi host. This required answering a number of questions, including how many IOPs a disk array can deliver based on disk speed, array controller (when using cache) and RAID configuration.

I first looked at the typical virtual machine read-to-write I/O ratio seen in the past, which had been determined previously using esxtop on running virtual machines. The typical profile was somewhere between 40–50% reads and 50–60% writes, so I settled on a 50% read to 50% write ratio to simplify things.

Next I needed to gather details for the hard disks. These figures are generalized, but they give a good idea of what each hard drive type can deliver:

  • 15K rpm drives = 200 IOPs
  • 10K rpm drives = 130 IOPs
  • 7.2K rpm drives = 75 IOPs

From this information we can determine the backend potential of the array simply by multiplying the number of disks by the per-drive IOPs.

Number of drives multiplied by the per-drive IOPs.

Example: 10 x 15K rpm drives

10 * 200 = 2000 IOPs
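
As a quick sanity check, this step is easy to express in a few lines of Python. This is only an illustrative sketch using the generalized per-drive figures listed above; the names are my own and not from any tool.

    # Generalized per-drive IOPs figures from the list above
    DRIVE_IOPS = {"15K": 200, "10K": 130, "7.2K": 75}

    def backend_iops(num_drives, drive_type):
        # Raw backend potential: number of drives multiplied by the per-drive IOPs
        return num_drives * DRIVE_IOPS[drive_type]

    print(backend_iops(10, "15K"))  # 10 * 200 = 2000 IOPs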

Next you need to define the penalty based on the RAID level used in the array. Using a read-to-write ratio of 50/50 simplifies this step. These values are calculated based on the number of extra writes required in the backend of the array to maintain redundancy.

When using RAID 0 there is no penalty: there is no redundancy, so the backend and frontend IOPs are the same.

RAID 10 consumes 2 backend writes for every frontend write. With a 50/50 read-to-write ratio, 1 write request consumes 2 IOPs in the backend and 1 read request consumes 1 IOP, so 2 frontend IOPs consume 3 backend IOPs. This converts to a 2/3, or 0.66, penalty.

Similar principles determine the RAID penalty for RAID 5 and RAID 6, although there is a little generalization here.

When using RAID 5, 1 write request consumes 4 IOPs in the backend and 1 read request consumes 1 IOP, so 2 frontend IOPs consume 5 backend IOPs. This converts to a 2/5, or 0.4, penalty.

When using RAID 6, 1 write request consumes 6 IOPs in the backend and 1 read request consumes 1 IOP, so 2 frontend IOPs consume 7 backend IOPs. This converts to a 2/7, or 0.28, penalty.

Remember that all of these examples assume a 50/50 read-to-write ratio; other ratios do not calculate the same way (a general version of the calculation is sketched after the list below).

  • RAID 10 = 0.66
  • RAID 5 = 0.4
  • RAID 6 = 0.28
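
The same reasoning can be generalized to other read-to-write ratios: each frontend read costs 1 backend IOP, while each frontend write costs the RAID level's write penalty (1 for RAID 0, 2 for RAID 10, 4 for RAID 5, 6 for RAID 6). The sketch below shows the general calculation; it is only an illustration of the arithmetic, with names of my own choosing.

    # Backend IOPs consumed per frontend write for each RAID level
    WRITE_PENALTY = {"RAID0": 1, "RAID10": 2, "RAID5": 4, "RAID6": 6}

    def raid_penalty_factor(raid_level, read_pct=0.5, write_pct=0.5):
        # Fraction of the backend IOPs that survive as frontend IOPs.
        # With a 50/50 ratio, 2 frontend IOPs (1 read + 1 write) consume
        # 1 + write_penalty backend IOPs, e.g. 3 for RAID 10 -> 2/3 = 0.66.
        backend_per_frontend = read_pct * 1 + write_pct * WRITE_PENALTY[raid_level]
        return 1 / backend_per_frontend

    for level in ("RAID10", "RAID5", "RAID6"):
        print(level, round(raid_penalty_factor(level), 3))
    # RAID10 0.667, RAID5 0.4, RAID6 0.286 - the 0.66, 0.4 and 0.28 figures above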

Taking the RAID level into consideration, it is now possible to determine the calculated IOPs figure for the disk array:

Number of drives multiplied by the per-drive IOPs, multiplied by the RAID penalty.

Example: 10 x 15K rpm drives in RAID 10

10 * 200 * 0.66 = 1320 IOPs
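
For completeness, here is the whole formula in the same illustrative Python style (again, a sketch rather than anything authoritative):

    def frontend_iops(num_drives, drive_iops, penalty_factor):
        # Calculated frontend IOPs: drives x per-drive IOPs x RAID penalty factor
        return num_drives * drive_iops * penalty_factor

    # 10 x 15K rpm drives in RAID 10, using the 0.66 factor from above
    print(frontend_iops(10, 200, 0.66))  # 1320.0 IOPs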

This shows the calculated IOPs based only on the disks, but it leaves out the potential of the RAID controller to assist when cache is used.

So how do I determine the potential gain of the cache in the RAID controller?

I needed to find this out so I ran some tests.

I started with a PowerEdge R510 server with a PERC H700 controller with 512 MB cache. The disk array had 10 x 15K rpm disks in RAID 10. To start off, I calculated that the array had the potential to deliver 1320 IOPs at a 50/50 read-to-write ratio. This was a good starting point, but I needed to see how much extra the cache in the controller would offer when write back and adaptive read ahead were configured. To do this I used vSphere 5 and an ESXi host with a virtual machine running IOMeter (http://www.iometer.org/doc/downloads.html).

I installed a Windows Server 2008 R2 virtual machine with 2 vCPUs, 3 GB RAM and a 500 GB hard disk. It is important to ensure that the hard disk size for this virtual machine is at least half the size of the total disk array; this ensures that the disk area being tested is large enough to give an accurate sample of what will happen when many virtual machines are requesting I/O at the same time. I then installed IOMeter.

After installation I needed to configure IOMeter (always run IOMeter as administrator, as it will attempt to access C:\) to mimic a workload similar to my company's applications. I have posted the IOMeter configuration files with a .txt extension (iometerMytest) to help; before they can be used in IOMeter, change the .txt extension to .icf. Images below:

Using a maximum disk size value of 900000000 (IOMeter measures this in 512-byte sectors) builds a test file of roughly 450 GB. This allows a large test area to be used when assessing the array performance.

When IOMeter first runs, it builds a test file according to the maximum disk size; you can see the file in the root of C:\.

The above image shows a 26 KB request size (in blue), 85% random requests (in red), a 50/50 read-to-write ratio (in green) and, importantly, the burstiness settings (in orange), which will need adjusting to suit different arrays. I use burstiness to throttle IOMeter because, by default, it will hit the array as hard as possible with no consideration for latency. Virtual machines are sensitive to latency, so I always consider this. These two settings (the transfer delay and the burst length) allow IOMeter to be throttled to give a more realistic idea of what the array can do.

When the access specification is run, IOMeter displays its own view of the array performance, but esxtop is the best way to view this.

Looking at the virtual machine disk view in esxtop shows that the disk array is performing at 1910 IOPs, but the latency is too high; 25 ms is the maximum that I tolerate under normal circumstances.

Lowering the burst length allows fewer requests in each burst and reduces the IOPs that IOMeter is requesting.

esxtop now shows fewer IOPs and a more acceptable latency.

Conclusion:

To apply this information to the disk array calculations: I first calculated that the disk array should have the potential for 1320 IOPs, but it actually produced 1744 IOPs with acceptable latency. This demonstrates a roughly 30% improvement in IOPs potential when a cache is included in a disk array controller, so I have included this in the estimation guidance. I have seen even better improvements with a 1 GB cache.
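
As a final illustrative sketch in the same Python style as above, this is how the measured cache benefit folds back into the estimate; the uplift factor comes from the 1744 vs. 1320 result on this particular controller and will vary with controller and cache size.

    calculated_iops = 1320   # 10 disks * 200 IOPs * 0.66 RAID 10 penalty, from the earlier steps
    measured_iops = 1744     # observed in esxtop with acceptable latency
    cache_uplift = measured_iops / calculated_iops
    print(f"Cache uplift: {cache_uplift:.2f}x")  # ~1.32x, i.e. roughly a 30% improvement
    # Applying the rounded 30% figure used in the estimation guide
    print(f"Estimate with cache: {calculated_iops * 1.3:.0f} IOPs")  # ~1716 IOPs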