
04 August 2009



Hey Spacrc,

Hats off to you for reminding us that Thin Provisioning, as a technology, has been around for a long time! Back in the days when you had to "vary off CHPIDs" before you could take a cable out, and you could still find miles of "bus & tag" under the tiles! (I've had to work under those STK libraries back in the day!)

However, I’m not sure I fully agree with your answer to the question you pose. Comparing the OS/390 space to the current predominant compute space (which is divided between Microsoft and Open Systems Unix/Linux) only works to an extent. As you point out, the storage technology innovators of the 90s tended to be the larger companies like EMC, HDS and of course IBM (not forgetting Digital!), and their design criterion was always "customer-centric" innovation, so they delivered what the market demanded.

I think as IT became more widely adopted by the SMB/SME space through the late 90s and into the current century, the design criteria began to change to match the demands of those customers, hence we saw the emergence of companies like NetApp, and later the virtualised IP storage vendors like EqualLogic. I think the current adoption of thin provisioning has been a response to customer demand for ever-increasing efficiency in their arrays. And don't forget that it is only in the coming months that we will start to see implementations of "volume re-thinning" within file systems, which will allow customers to reclaim space from an array-based thinly provisioned volume. When this feature hits vSphere and W2K8, I think you will see it as a default setting in many customer deployments.


Keep up the blogging!


Welcome to the storage blogosphere! It's great to have another perspective on storage, especially someone who knows so much about EMC kit!

Thin provisioning is interesting, but it's still only a half-solution to the utilization problem we've all faced for so many years. I've been meaning to blog on this, and maybe I will thanks to your prodding, but in short, even thin-provisioned arrays will be purchased vastly over-sized due to the provision/refresh cycle in enterprise shops and poor capacity forecasting. In fact, thin provisioning promises to make utilization appear much worse, not better, especially in the short run. Instead of doling out 50% of an array's usable capacity as under-used LUNs, we'll dole out 10% as full LUNs. What could go wrong?
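The arithmetic behind that point can be sketched with purely hypothetical numbers (not from any real array) showing why a thin pool looks emptier even though the same data is stored:

```python
# Illustrative numbers only: the same 10 TB of host data on a 100 TB array,
# first thick-provisioned, then thin-provisioned.

usable_tb = 100.0           # array usable capacity
real_data_tb = 10.0         # data actually written by hosts

# Thick: 50 TB carved out as LUNs, but those LUNs average only ~20% full,
# so only 10 TB of the array actually holds data.
thick_utilisation = real_data_tb / usable_tb          # 0.10

# Thin: hosts are presented large virtual LUNs, but the pool only backs
# blocks that have actually been written.
thin_virtual_tb = 200.0     # virtual capacity presented (over-subscribed)
thin_consumed = real_data_tb / usable_tb              # 0.10 of pool consumed
oversubscription = thin_virtual_tb / usable_tb        # 2.0x over-subscribed

print(f"thick array utilisation: {thick_utilisation:.0%}")
print(f"thin pool consumed:      {thin_consumed:.0%}")
print(f"over-subscription ratio: {oversubscription:.1f}x")
```

Either way only 10% of the array holds data; thin provisioning just makes that number visible at the pool level instead of hiding it inside half-empty LUNs.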

Looking forward to this debate!

Suds (Paul)
Many thanks for taking the time to leave a comment, and a good long one at that.
I am not sure if I really provided an answer to the question I posed.
I was not trying to compare the mainframe space to the MS/Unix space, but rather to question why it took so long for the established mainframe storage players to deliver Thin Provisioning to the open systems space. Given that STK delivered it for the mainframe, I would have expected this to have acted as a catalyst, and their slowness in bringing it to market has had a real financial and environmental impact.
I totally agree the SMB/SME space has totally changed the game, and the likes of NetApp, EqualLogic (sorry, I did not mention them in the initial blog), 3PAR, Compellent and others have driven innovation, and EMC, HDS and IBM have had to play feature catch-up, when in fact they could have innovated given their experience from the mid 90s. I am not sure how much truth there is in it, but I have heard it said that EMC were worried about the threat from the STK Iceberg in the early 90s.
And you mention Volume Re-thinning and space reclamation. These are the features I refer to as the integration of Thin Provisioning with the O/S and file system, which will help the effective and successful deployment of Thin Provisioning, and act as a further catalyst to it becoming a de facto feature.
Again, many thanks for your comments and I look forward to further dialogue.

Many thanks for the welcome and comments.
I am not trying to say Thin Provisioning will deliver nirvana for storage.
I totally agree there is far more to driving up storage utilisation, but I believe it is a good start.
And as you point out, when a new array is purchased and provisioned, its utilisation will initially appear much worse.
However, based on my real-world experience with the STK Iceberg, and more recently with the EMC Symmetrix (and I would expect similar results with most other vendors' storage arrays that support Thin Provisioning), it does drive up utilisation levels, as empty unallocated space is not reserved and is available in a pool for use by all. Perhaps I should blog some examples of my experience.
Of course there is a risk from over-provisioning, but I believe this risk is very low, provided the customer implementation is appropriate and adequate monitoring is in place. At Lloyds, with the Icebergs, I typically ran them at an average of 80%+ utilisation, and sometimes they peaked at 90% during busy periods such as end of month/year, but there was never an occasion where an Iceberg got close to 100% utilisation. And as I mentioned, Lloyds acquired 18 Icebergs over a couple of years. Basically, when utilisation levels were getting too high, a few more were installed. So yes, the overall utilisation level dropped when the new Icebergs hit the floor, but as I say, on average the utilisation level was approximately 80%+.
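The kind of monitoring argued for above can be sketched in a few lines. This is a minimal, hypothetical example (not any vendor's actual API), using the rough 80%/90% levels mentioned in the comment as warning thresholds for a thin pool:

```python
# Hypothetical thin-pool alerting sketch, assuming the ~80%/90% thresholds
# described above. Real arrays expose utilisation via their own tooling.

def pool_alert(used_tb: float, capacity_tb: float,
               warn: float = 0.80, critical: float = 0.90) -> str:
    """Classify a thin pool's fullness against warning thresholds."""
    utilisation = used_tb / capacity_tb
    if utilisation >= critical:
        return "CRITICAL: add capacity now"
    if utilisation >= warn:
        return "WARNING: plan capacity purchase"
    return "OK"

# A busy end-of-month peak, as described above:
print(pool_alert(90.0, 100.0))   # hits the critical threshold
print(pool_alert(82.0, 100.0))   # warning band: order more spindles
```

The point of the two-tier threshold is exactly the Iceberg story: the warning band gives you the lead time to install more capacity before any pool approaches 100%.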
I know things are a little different in Open Systems land, but if all customers could get even to 60 or 70%, that would be far better than they are doing now.
