Discussing all things virtualization and storage in the data center.

USP-V and Hitachi High Availability Manager

Well, there has been a lot of confusion over the past couple of days about the latest announcement from HDS about clustering the USP-V.  I hope to clear up some of the technical details with this post, but I'm sure some questions will remain.

First, let's talk about why someone would want to cluster the USP-V.  For me there are only two reasons to do this.

First, if I have a box that has been on the floor for 3+ years and I'm paying maintenance on it, it might make more sense to buy a new box with more capacity and faster "gazintas and gazouttas".  To do that, I'm going to need to migrate all of my data off of the old system.  What better way to do that than to connect my new box to the old one and migrate everything non-disruptively to my applications?  When I'm done, I scrub the old box and out the door it goes.

Second, say I work for a company that can't have any downtime at all.  Stop counting the .9's of availability; I want 100% availability!  Well, I can now cluster my USP-V (which was already in the six-to-seven-nines range anyway) and have 100% availability for my applications.  The mainframe can already do it, so why can't my open systems have it too?  Now they can.
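For a sense of what those nines actually buy you, here's a quick back-of-the-envelope conversion from availability percentage to downtime per year.  This is just illustrative arithmetic, not an HDS figure:

```python
# Rough downtime-per-year figures for various "nines" of availability.
# Illustrative arithmetic only -- not a vendor specification.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # 525,960 minutes

for nines in range(2, 8):
    unavailability = 10 ** -nines          # e.g. 5 nines -> 0.00001
    downtime_min = unavailability * MINUTES_PER_YEAR
    print(f"{nines} nines: ~{downtime_min:.3f} minutes of downtime/year")
```

Five nines works out to roughly five minutes a year; six to seven nines is down in the seconds.  Clustering is about closing that last gap to zero for planned and unplanned events alike.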

So let's look at how this is done.  Looking at the diagram below, here's how it breaks down.

  • Physical capacity can reside on internal or external disk.  (P-VOL/S-VOL)
  • The P-VOL and S-VOL present the same VOL ID in the SCSI Inquiry; the RCU takes on the MCU's serial number upon failover.  (Nice, eh?)
  • External volumes must be mapped to both storage controllers.
  • A quorum drive is created on both subsystems to check whether there are any data differences between the MDKC and RDKC, in order to ensure data consistency.
    • You can have up to 32 Quorums per USP-V. 
    • They can be from 38MB to 4TB in size. 
    • They can reside on any currently supported device.
  • Alternate path software recognizes the P-VOL and S-VOL as the same LU on multiple paths.  P-VOL paths are the owner paths; S-VOL paths are the non-owner paths.
  • Under normal conditions, write data is transferred from P-VOL cache to S-VOL cache without being destaged on the S-VOL side.
  • When I/Os fail on all P-VOL owner paths, the alternate path software issues I/O to the non-owner S-VOL paths.  The storage controller stops the copy of cache data and the S-VOL becomes write-enabled.
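The path-ownership behavior in the last two bullets can be sketched in a few lines.  This is a hypothetical model of what the alternate path software does, not the actual HDLM/HAM implementation; all names here are made up for illustration:

```python
# Hypothetical sketch: prefer P-VOL "owner" paths, and fail over to
# S-VOL "non-owner" paths only when every owner path has failed.
# Not the actual alternate-path-software implementation.
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    owner: bool          # True = P-VOL (owner) path, False = S-VOL (non-owner)
    healthy: bool = True

def select_path(paths):
    """Return a healthy owner path if one exists; otherwise fall back to a
    healthy non-owner path (at which point the S-VOL becomes write-enabled)."""
    for p in paths:
        if p.owner and p.healthy:
            return p
    for p in paths:
        if not p.owner and p.healthy:
            return p
    raise IOError("no healthy paths to the LU")

paths = [Path("mcu-port-0a", owner=True), Path("rcu-port-1a", owner=False)]
print(select_path(paths).name)   # normal state: owner path carries the I/O
paths[0].healthy = False         # all owner paths fail
print(select_path(paths).name)   # failover: I/O moves to the non-owner path
```

The key point is that the host sees one LU the whole time; only the path it uses changes, which is what makes the failover transparent to applications.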



So hopefully that answers a lot of the questions people have had.  I think it's really simple.  HDS has had this functionality built into the USP-V since day one.  The microcode has been there (LDKC anyone?).  The hardware port(s) have been there as well.  Ever looked at a USP-V and asked yourself, "I wonder what this port is for that nothing is plugged into?"  ;)  I don't think it's any coincidence that HDS is releasing this now, since a lot of the boxes that were bought when these went GA are going to be coming off of lease soon.  I really don't think there is anything else behind this announcement.  Sorry to disappoint anyone.

A few other notes:

  • A USP-V and an HP XP24000 will be able to hook together at GA time.
  • This will work on all supported RAID types.
  • CVS volumes are supported.
  • All currently supported Open Systems OSes are supported.  Mainframe isn't supported today.
  • You can have up to 64 physical paths between boxes.  Fast huh?
  • Max distance is the standard TrueCopy Sync distance.  After all, we're only doing synchronous replication here with some software to manage it.
  • You can create up to 64K pairs between devices.