Discussing all things virtualization and storage in the data center.

USPV Review

So it's here, the USPV.  Faster and smarter than its fast and pretty smart predecessor.  Since I've finally got a second to breathe, here's my unbiased take on it – seriously, I'm not biased toward HDS, I think that HP and Sun also do cracking enterprise storage arrays ;-) Warning:  When gauging my overall reaction to the USPV you should bear in mind that I was hoping for two specific things from it –

  1. Thin Provisioning
  2. Wider backend striping.

As I appear to have gotten both, I'm bound to be pleased.  However, there were a few other good things included, as well as a couple of surprising omissions. For me the positives were -

Thin Provisioning (more on that further down)

Wider striping on the back end.  (more on this further down as well)

The new 4GB/sec back end switching architecture (hey, it wouldn't be a true Hitachi-related post if I didn't mention the mighty crossbar switch ;-) ).  I always felt that the USP and its predecessors were a little “front end heavy”.  They seemed to spend most of their energy and effort on the front end, reading and writing to cache.  Fair enough for a cache centric array.  But even the 9980V, mighty storage array of not that long ago, had a puny 1Gbps back end loop architecture!  So for me this is a plus point, but I won't overplay it because compared to some of the other stuff it's not that exciting.  After all, such "feeds n speeds" type improvements are a given for all vendors.

Increased Shared Memory for control data (and associated bandwidth).  This allows for more ShadowImage and TrueCopy pairs.  Down here in the so called trenches this is much needed, particularly in very large environments.  And on the back of such huge uptake of consolidation via externally attached storage, it will be very helpful going forward.

Support for 64 thousand open systems volumes (65,536 actually).  A lot of people might have missed this one.  They actually offered this to the guys in the mainframe world a year or so ago but left us in the open systems world to play around with our paltry 16,000.  This was no doubt a relatively simple improvement to offer but another one with real world implications.  This will be a real help to those of us working in very large environments (externally attached again) who have a lot of ShadowImage – every ShadowImage volume is……well, a volume after all. 

Now for the missing persons -

Support for more disks.  This is two things really. 

  1. The maximum number of disks you can install in the USP remains unchanged at 1152.  This is not really that much of a surprise though.  In fact I’m starting to believe that Hitachi really mean it when they say they no longer think the future is in large monolithic arrays. 
  2. There is no support for 500GB disks or even 300GB 15k, never mind SAS or SATA.  But of course, these are all supported through externally attached devices. 

As for the SATA thing, it actually makes a lot of sense when you think about it.  The USPV has a honking 4GB/sec crossbar switch on the back end now, so why would you attach lower performing disks to such a high performing interconnect?  Load the USPV with your tier 1 FC stuff and hang the rest off the back.

And on the point of not supporting the larger disks (internally), well many of you will know that I'm not a big fan of large disks - from a performance point of view.  Now it's hard for me not to rant over this, but spare me a couple of lines........ If disk A is twice the size of disk B, you are quite likely to place twice the workload on it.  The problem is that although the disk has twice the capacity, it doesn't have twice as many actuators/heads, platters, buffers etc.  So when the drones from the other vendors make a song and dance over this, tell them they can keep their slow clunky big disks and shove them …. well, in their own arrays.  To be honest though, I expect we will see support for larger FC disks later in the year.
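To put a rough number on that rant - assuming something like 180 random IOPS per 15k spindle (a common rule-of-thumb figure I'm plucking out of the air, not a Hitachi spec) - the IOPS available per GB falls by half every time the drive size doubles but the spindle count doesn't:

```python
# Back-of-envelope IOPS density per spindle.
# SPINDLE_IOPS is an assumed rule-of-thumb figure for a 15k FC drive.
SPINDLE_IOPS = 180

def iops_per_gb(capacity_gb):
    """Random IOPS available per GB of capacity on a single spindle."""
    return SPINDLE_IOPS / capacity_gb

for size_gb in (146, 300, 500):
    print(f"{size_gb}GB drive: {iops_per_gb(size_gb):.2f} IOPS/GB")
```

Same heads, same platters, more capacity: the bigger the disk, the thinner the performance is spread.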

An internal disk to store performance metrics.  This is one that I wanted but nobody else cared about.  When implementing Tuning Manager, many people are disappointed that it can't show them stats from before Tuning Manager was installed (duh!).  The addition of an internal disk that stores performance metrics from day one would be nice.  Never expected it though.  Not the end of the world.

The USP still runs on a Windows kernel.  Honestly!  You attach a USP or USPV to another storage array, as you do for externally attached storage, and how does the external array see the USP?  As a Windows host! - must mean that under the hood the USP is running Windows!!  OK, so I'm just kidding ;-)

Still with me?  I'm not sure I'm still here!

A couple of thoughts on the inclusion of Thin Provisioning - the real storage virtualisation ;-) 

A lot of people seem to be asking what it is, so here it is, the skinny on Thin Provisioning ;-) 

You present a volume to a host and the host thinks that, for argument's sake, it's a 1TB volume.  But actually it's not, it's way, way smaller - the host is just oblivious to that fact.  As far as the host is concerned it has a 1TB volume.  So when it writes a file system ("formats" to those of you in the Windows world) it will create all the necessary structures for a 1TB drive.  Now, if the host only uses 100GB for the first year, it will only consume 100GB of USPV disk and the rest will stay in the pool.  Pool, what pool?  Behind it all is a pool of free space, think of it as a buffer, that is allocated to that 1TB volume on demand.  A lot like the way a snapshot grows as data on the original volume is modified.  But the beauty, and the beast, of this is.....<drum roll>..... the pool does not have to contain the additional 900GB of space that the host thinks it has.  It might only contain 100GB of buffer.  The reason being that we know the host is not going to need 1TB for a very long time, so we won't bother buying the disk until it gets closer to that time.  This is beautiful because you have not spent money on space that would have sat there unused for a year, and you can sleep better at night knowing you have not wasted power spinning and cooling disks that were never really used.  The beast side of it, though, is that if you're not careful you can overcommit yourself and not be able to satisfy the demand for space in your application's hour of need.  And we know what happens then - not good.
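If it helps, the beauty-and-beast behaviour above can be sketched in a few lines of Python.  This is a toy model - all the names are made up, it's nothing like Hitachi's actual implementation - but it shows allocate-on-write and the overcommit failure case:

```python
class ThinPool:
    """Toy thin-provisioning pool (hypothetical model, not HDS code)."""
    def __init__(self, physical_gb):
        self.physical_gb = physical_gb  # disk you actually bought
        self.used_gb = 0                # disk consumed by real writes

    def provision(self, virtual_gb):
        # Presenting a volume costs the pool nothing up front.
        return ThinVolume(self, virtual_gb)


class ThinVolume:
    def __init__(self, pool, virtual_gb):
        self.pool = pool
        self.virtual_gb = virtual_gb    # what the host believes it has

    def write(self, gb):
        # Pool space is claimed only as the host actually writes.
        if self.pool.used_gb + gb > self.pool.physical_gb:
            raise RuntimeError("pool exhausted - the 'beast' case")
        self.pool.used_gb += gb


pool = ThinPool(physical_gb=200)       # only 200GB really purchased
vol = pool.provision(virtual_gb=1024)  # host sees a "1TB" volume
vol.write(100)                         # first year's usage: fine
print(pool.used_gb)                    # 100 - the other 900GB was never bought
```

The overcommit risk is exactly that `RuntimeError`: present enough oversized volumes from one undersized pool, and one day a perfectly reasonable host write has nowhere to land.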

Explanation aside, I think it's a technology that was very much needed and I think the rest (IBM and EMC) will have to follow suit sooner or later.  Sure, they may debunk the technology at first and resist for a while.  But what the market demands, the market gets.  Just look at EMC and their positioning on RAID 6 (EMC do not really believe in RAID 6, but their customers wanted it and they got it.  I admire EMC for resisting as long as they did).  3PAR have been making a good go of Thin Provisioning recently, and Hitachi are playing catch-up here, levelling the playing field against 3PAR on this issue.

Then there’s the fact that everybody is moaning that SAN storage has not delivered on its promise to do away with wasted, unused disk space.  I for one remember the talk of pooling your storage and making it accessible to all hosts on the SAN, so that you would no longer have unused storage attached to one server that you could not allocate to another.  The SAN once promised to do away with such stranded space.  And to be honest, in many ways it can and does.  But because everybody is too busy to trend their storage usage, and every company buys as little as possible and takes an age to order more when the business desperately requires it, everyone has taken to the practice of massively overallocating “just to be safe”.  So enter the next technology to promise higher utilization of space - Thin Provisioning.

BTW, on the topic of companies dragging their heels buying more disk when it's needed - better keep those Thin Provisioning buffer pools well stocked!  Hmmmmmmm, I wonder how long it will be before we are back at square one, with the buffer pool containing more free space than we used to waste in the “old days” of massive overallocation.

Then there is the green issue (we’re still on Thin Provisioning here).  While I think the USP could have been greener, I think Thin Provisioning gives it a somewhat greener tint.  If you’re giving your hosts the impression they have more storage than they actually do, on the premise that they won’t need it for a long time, if ever, you are saving power by not spinning and cooling so many disks, natural resources because you haven’t bought as many disks (so not so many need to be made), and of course you save your company a bit of green (£$¥€) – and everybody likes that!  Mind you, I don’t expect the license will be free….. so what Hitachi giveth, Hitachi (will probably) taketh ;-)

So, all of that wind was the plus side of Thin Provisioning.  The negative looks like this to me.  It’s a version 1 product on an enterprise storage array, so it will require a lot of proofing before it goes in anywhere live.  Although Hitachi did a superb job with version 1 of Universal Volume Manager, that was relatively simple to implement compared with Thin Provisioning.  I wonder how long it will be before Thin Provisioning is fully supported alongside other core Hitachi products such as TrueCopy, external storage, ShadowImage……..

Still, once it’s fully up and running and supported with the rest of the software, I think it will be a great addition to the toolset.  I see the marriage of Thin Provisioning and external storage being a happy one.

Then there is the ability to stripe across more disks on the back end.  I’ve not had a chance to look into the Hitachi implementation of this (how are the traditional 4 and 8 disk RAID Groups affected…..).  So I won’t say much on the topic (is there anybody still reading this?), other than it was another one that was much needed.  Another area where Hitachi lagged behind some of its competitors, 3PAR included.

Finally, as usual there will be no SPC benchmarking.  The official reasoning being that the SPC tests do not represent real world environments and workloads.  Funny, that never seems to stop them mentioning the 3.5 million IOPs that were achieved in something probably more resembling Disney World than the world you and I live in ;-)  Actually Snig, don’t you live in Disney World?

PS.  Anyone close to needing the 247PB of externally attached storage that HDS marketing is claiming?

Well, that was so much longer than expected - my fingers will hurt tomorrow.

Nigel
