Apologies to those of you who have been following my blog for the delay in publishing Part 3. Part 4 is nearly complete and will be published soon.
I have heard it said, and I am in general agreement, that Enterprise Flash Drives (EFDs) are over-engineered for the needs of the current generation of Enterprise Storage Arrays (ESAs).
An EFD can be thought of as a Ferrari.
A Ferrari is a great car, but it won’t get you to work in the rush hour traffic much quicker than a standard car.
The point I am trying to make is that whilst EFDs can offer amazing IOPS performance, in the real world today storage arrays cannot fully exploit that performance, and more importantly most customer workloads do NOT demand the levels of IOPS that EFDs can deliver.
Customer workloads can benefit from the significantly reduced response times that EFDs offer, which can translate into the ability to process more transactions per minute, to better handle spikes in demand (such as ticket sales for the latest concerts, which sell out in minutes), or to reduce overnight batch run times and large database query times.
Notice I used the words ‘can translate’. This holds provided that HDD performance was the bottleneck in the first place, and that another bottleneck (such as server CPU utilisation) does not immediately appear once the I/O bottleneck is removed and I/O performance improves.
However, in my experience most customers do not need the ultra-high IOPS performance of an EFD, but rather the sub-millisecond response times it can provide in the hundreds to thousands of IOPS range.
HDD response times for small-block (say 8KB) random read-miss I/O are at best in the 5 - 10ms range under a light I/O load. As the I/O load on an HDD increases, its average response time rapidly degrades (climbs) into the tens of milliseconds. This is all down to good old queuing theory and Little’s Law.
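To make the shape of that curve concrete, here is a rough sketch (not from the original post) using a simple M/M/1 queuing model. The ~5ms service time is an assumed figure for a 15K FC drive, and the model ignores command queuing and reordering, so treat the output as illustrative rather than exact.

```python
# Rough M/M/1 illustration of why an HDD's response time climbs with load.
# Assumes a fixed ~5 ms average service time per random 8KB I/O (an assumed
# figure for a 15K FC drive); real drives reorder queued commands, so this
# shows the shape of the curve rather than exact numbers.

service_time_ms = 5.0                   # average time to service one I/O
max_iops = 1000.0 / service_time_ms     # ~200 IOPS at 100% utilisation

for offered_iops in (50, 100, 150, 175, 190):
    utilisation = offered_iops / max_iops
    # M/M/1: average response time = service time / (1 - utilisation)
    response_ms = service_time_ms / (1.0 - utilisation)
    print(f"{offered_iops:>4} IOPS -> {utilisation:4.0%} busy, "
          f"~{response_ms:5.1f} ms average response")
```

Even this simplified model shows the response time roughly doubling at 50% utilisation and then climbing steeply into the tens of milliseconds as the drive approaches saturation.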
For capacity/IOPS sizing purposes it can be assumed that a 15K FC HDD can handle 175 – 200 IOPS at acceptable average response times (roughly the 20 – 30ms range). You might hear people quote that a 15K disk can reach close to 400 IOPS, but that is with the drive at 100% utilisation and average response times in the hundreds of milliseconds, which generally means the end-user experience is unacceptable.
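As a back-of-the-envelope example of how those per-drive figures feed into sizing (the 5,000 IOPS workload, the read/write split and the RAID 5 write penalty of 4 are illustrative assumptions, not figures from this post):

```python
# Back-of-the-envelope spindle-count sizing from an IOPS requirement.
# The workload figures and RAID 5 write penalty are illustrative assumptions.
import math

read_iops = 4000                  # host read IOPS (assumed)
write_iops = 1000                 # host write IOPS (assumed)
write_penalty = 4                 # back-end I/Os per host write under RAID 5
iops_per_15k_hdd = 180            # per-drive IOPS at acceptable response time

backend_iops = read_iops + write_iops * write_penalty
drives = math.ceil(backend_iops / iops_per_15k_hdd)
print(f"Back-end IOPS: {backend_iops}, 15K HDDs needed: {drives}")
```

The point of the sketch is simply that sizing HDDs for IOPS at acceptable response times quickly drives up spindle counts, often well beyond what is needed for capacity alone.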
This is where solid state disks win over the traditional HDD, along with power and cooling savings and silent operation. An EFD provides a consistently low, sub-millisecond response time even as its utilisation climbs into the hundreds and thousands of IOPS, because there is no seek or rotational latency component to impact the I/O response time. Each I/O can be serviced very quickly as it arrives, rather than being queued waiting for the disk to rotate and seek.
So whilst all SSD manufacturers headline the maximum IOPS (for very small block-size I/O - 512 bytes) and MB/s throughput figures for their EFDs, what customers actually require is a balanced IOPS/GB/$ cost at a consistently low, sub-millisecond response time.
In classic (current-generation) Enterprise Storage Arrays, the response time of the I/O from the EFD itself is no longer the biggest component of the I/O response time as seen by the server. An ESA has an inherent overhead in handling/servicing each I/O, typically measured in tens to hundreds of microseconds. With HDD I/O measured in milliseconds, this was a small percentage of the average I/O response time, but with EFD I/O measured in tens to low hundreds of microseconds, the array overhead becomes a significant component of the overall response time.
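A rough worked example makes the proportions clear. The 100 microsecond array overhead and the device service times below are assumed figures for illustration, not measurements:

```python
# Illustrative only: the array's per-I/O overhead as a share of the
# response time the server sees, for an HDD read versus an EFD read.
# The 100 microsecond overhead and device service times are assumed figures.

array_overhead_us = 100
for device, device_us in (("HDD read", 8000), ("EFD read", 200)):
    total_us = device_us + array_overhead_us
    share = array_overhead_us / total_us
    print(f"{device}: {total_us} us total, array overhead = {share:.0%}")
```

On these assumed numbers the same overhead is around 1% of an HDD read but roughly a third of an EFD read, which is exactly why the array itself now matters so much.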
This highlights that whilst the current generation of enterprise storage arrays can use EFDs, they cannot fully exploit them from an IOPS performance perspective. More on this in a future post.
So whilst STEC and the other EFD vendors are continually introducing performance enhancements to their EFDs, the big performance gain for customers comes from moving from HDDs to EFDs in the current generation of Enterprise Storage Arrays. The latest generation of EFDs now coming to market will offer further performance gains, but these are unlikely to deliver the step change that the move from HDDs to EFDs offers, until new storage array architectures and designs are introduced. Again, more on storage array architectures in a future post.
I noted in Part 1 that most storage vendors have followed EMC’s lead and are currently using the STEC Zeus IOPS SSD (EFD), but Dell has bucked this trend and has been shipping its EqualLogic PS6000S array with Samsung’s 50GB server-class SSDs since March of this year.
For the SMB market at which the Dell EqualLogic PS6000 series is targeted, I believe Dell have made the right choice: the lower-performing Samsung SSD offers good-enough performance at a lower price point and a lower capacity point. The Samsung SSD that Dell are using is still an SLC-based SSD, so I should really refer to it as an EFD, but it provides lower maximum IOPS and MB/s throughput than the STEC EFD. Note also that the Samsung EFD is a 2.5” drive with lower power consumption, so whilst multiple Samsung EFDs are required to provide the same capacity as a single STEC EFD, this is mitigated by the smaller form factor and lower power consumption.
Currently the PS6000S array is configured with 8 or 16 of these 50GB Samsung EFDs. So whilst four 50GB Samsung EFDs are required to provide the same capacity as a single 200GB STEC EFD, in a Symmetrix, for example, EMC realistically have to configure a minimum of four drives plus a spare, assuming RAID 5 (3+1) protection. That gives 1000GB of raw capacity (600GB useable, taking into account the RAID 5 (3+1) protection and the spare drive), compared with the EqualLogic’s 400GB of raw capacity.
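A quick sketch of that capacity arithmetic (the drive sizes and the RAID 5 (3+1) plus hot-spare layout are as described above; treat it as an illustration rather than a configuration guide):

```python
# Capacity arithmetic for the two minimum EFD configurations described above.

def raid5_capacity(drive_gb, data_drives, parity_drives=1, spares=0):
    """Return (total drives, raw GB, usable GB) for a RAID 5 group plus spares."""
    total = data_drives + parity_drives + spares
    return total, total * drive_gb, data_drives * drive_gb

# Symmetrix: RAID 5 (3+1) of 200GB STEC EFDs plus one hot spare.
print(raid5_capacity(200, data_drives=3, parity_drives=1, spares=1))
# -> (5, 1000, 600): 5 drives, 1000GB raw, 600GB useable

# EqualLogic PS6000S minimum configuration: 8 x 50GB Samsung EFDs.
print(8 * 50)   # -> 400GB raw
```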
So Dell can provide smaller increments of EFD capacity, and has the performance of 7 EFDs (assuming 1 is reserved as a spare), compared to EMC’s 4, servicing a lower useable capacity, which means the Dell EqualLogic PS6000S can still provide a very performant EFD-based array.
This means Dell EqualLogic has a lower entry-level cost for the deployment of EFDs, because of both the lower minimum raw capacity and the lower cost of the Samsung SSD compared to the STEC EFD.
For the SMB market, given the current cost of EFDs in general, customers are likely only to need, or be able to justify, a small capacity of high-performance Tier 0 storage.
In the real world this means the PS6000S array can offer an effective balance between cost and performance, and can reside in an array group alongside other PS6000 series arrays with SAS and SATA drives.
Which brings me back to my Ferrari. Whilst I would love to have one, I could get to work just as quickly in a lower-performance car, at a significantly lower TCO.
As a note, I assume Dell EqualLogic will qualify the 100GB Samsung EFD, which Samsung have now been shipping for a few months, when they see demand for larger capacities of EFD-based storage. I also assume that Samsung will continue to develop their EFD from both a capacity and a performance perspective, and that Dell will take advantage of these developments as they see fit.
In 2010 I believe the introduction of the STEC MLC-based EFD, with its lower price per GB, will act as a catalyst for the use of EFDs in enterprise-class storage arrays. And with the likes of Dell driving the Samsung EFD, at its lower price/performance point, into the SMB market, and with some small signs of economic recovery starting to appear, I believe we will start to see significant EFD adoption in storage arrays.
Also, with the likes of Intel, Western Digital, Seagate and others either already shipping EFD products or likely to be shipping them in 2010, there is going to be significant competition, helping to further stimulate and drive (no pun intended) EFD adoption in storage arrays as the cost per GB of EFDs continues to fall.