Archive for February, 2011

Finally – RecoverPoint Replication Licensing Made Simple

February 27, 2011

Replicating SAN/NAS arrays is certainly nothing new.  In fact, it's pretty much the standard nowadays for most IT departments.  Replicating EMC arrays has never really been complicated, but there were several options for doing it.  If you had a Symmetrix it was SRDF, if you had a Clariion it was MirrorView, and if you had a Celerra you had Celerra Replicator.  Of course, if you had a Celerra with backend block enabled you probably needed Celerra Replicator and MirrorView.  And in place of all those replication technologies you could use RecoverPoint, which was the ultimate – if you could afford it.  Fun right?

MirrorView and Celerra Replicator were pretty straightforward: you license each array with the software and you're off to the races.  No need to worry about how many TBs of data you had, nor any "crap, I'm out of capacity" surprises midway through the year.  RecoverPoint, on the other hand, was licensed…..well…..completely the opposite of everything else.  Now I certainly understand the per-TB licensing for RP; EMC is in the business of making a profit after all, and there is really nothing that can compete with what RP can do.  Nonetheless, RP licensing could get expensive quickly for larger environments.

Several months ago there started to be rumblings that EMC would eventually make RecoverPoint its primary means of replication for its arrays.  That naturally generated lots of questions around how exactly that would happen and, of course, how pricing would be handled.  Now that the new VNX line has been released along with the new software bundles, it's become much clearer.  As I have mentioned in previous blogs, RecoverPoint is now licensed per array, and on top of that the pricing is very attractive.

For VNX customers, all the replication technologies are bundled into one remote replication software bundle which gives you MirrorView, Celerra Replicator and RecoverPoint.  This bundle needs to be licensed on both arrays.  To take advantage of RP you still need to purchase RP Appliances at an additional cost, but you don't have to worry about how much data you are replicating.  From what I've seen the "max capacity" is currently 300TB, which shouldn't be a problem for most non-enterprise accounts.

For Celerra and Clariion customers, the software licensing for MirrorView and Celerra Replicator is the same as it always was, but now you can license RP on a per-array model.  If you already have RP (per TB), there is an upgrade path with associated discounts.  If you are adding RP, the same holds as with VNX: you have to license both arrays and you need to buy the appliances.

So if you had considered RecoverPoint in the past but wrote it off due to cost concerns, now is a great time to revisit the product with your reseller.  Especially for the smaller arrays like the CX4-120 and NS-120, pricing is VERY reasonable.

Vault Drives on the New EMC VNX Arrays

February 4, 2011

Let me preface this blog by stating the following information is not all original.  I am simply passing along some of the information from one of my contacts at EMC.  Normally I wouldn't plagiarize like this, but the information is important and you should at least be aware of some of the changes with the new VNX line.

Any of you familiar with EMC arrays such as the Clariion CX4 or Celerra NS lines are, or should be, very familiar with EMC's usage and implementation of "the Vault".  The Vault is where the OS for the array lives.  In the case of Clariion and Celerra, we are talking about the FLARE and DART code that enables the array to do its thing.  Up until now this has always been the first 5 drives in the array – Drive Slots 0 – 4.  These disks were set up as a 4+1 R5 Raid Group on which the FLARE/DART took up a small amount of space on each drive.  The remaining space could be used, just not for any high I/O loads.  Drive size was up to you, but pretty much any drive size/type would work.

One of the first things I noticed when building my first VNX solution in EMC's configuration tool was that the Vault drive choices were much different than before.  For the VNX5100 you're forced to choose 6 drives for the Vault, and for the VNX5300 it's 8 drives.  In reality, the Vault on these arrays only uses 4 drives to store the OS code.  The reason why?  According to what I've heard, it's to give the array a better price point – and given what I've seen so far, this holds true.

Ok, start plagiarism here:

So say you have 300 GB drives in the first shelf, total of 6 drives.

The usable capacity in the first 4 drives (0–3) = 300 GB – 192 GB => 108 GB each.

Drives 4 and 5 have their full 300 GB of usable capacity (in the original diagram, a green box showed usable capacity).

This space can be used two ways:

(a) You can use the available space on the first 4 drives for a stand-alone RG or NAS/Block Pool – for example, RAID 5 (3+1) will give you ~324 GB usable – AND use the remaining drives as another RG or part of a pool. There is no space wastage in this case.

(b) You can create a RG that spans all 6 drives (shown in the original diagram by a black square across all 6 drives). This gives the widest stripe configuration for the RAID Group, *but* you only get 108 GB per drive from all 6 drives, and the 192 GB on drives 4 and 5 is wasted.  (I don't recommend this.)

(c) For larger systems, assign the extra 2 or 4 drives as hot spares, or as "buffer space" for when the customer needs to add storage – in case they wait until they are 98% full to order it.
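The capacity math above can be sketched out in a few lines of Python.  This is just my own illustration of the numbers from the example (300 GB drives, a 192 GB vault reservation on drives 0–3), not output from any EMC tool:

```python
# Vault capacity math for a 6-drive VNX5100 first shelf (example numbers only).
DRIVE_GB = 300          # raw size of each drive in the first shelf
VAULT_RESERVE_GB = 192  # OS/vault reservation on each of drives 0-3
TOTAL_DRIVES = 6        # VNX5100 forces 6 drives for the Vault

# Usable space per vault drive (drives 0-3) after the OS reservation
usable_vault_drive = DRIVE_GB - VAULT_RESERVE_GB      # 108 GB each

# Option (a): RAID 5 (3+1) over the 4 vault drives -> one drive's worth of parity
option_a_usable = usable_vault_drive * 3              # ~324 GB usable
# ...and drives 4-5 go into a separate RG or pool at full capacity
option_a_leftover = DRIVE_GB * 2                      # 600 GB raw, nothing wasted

# Option (b): one RG across all 6 drives is limited by the smallest usable size
option_b_per_drive = min(usable_vault_drive, DRIVE_GB)  # only 108 GB per drive
option_b_wasted = (DRIVE_GB - option_b_per_drive) * 2   # 384 GB lost on drives 4-5

print(f"(a) {option_a_usable} GB RAID 5 + {option_a_leftover} GB raw leftover")
print(f"(b) {option_b_per_drive} GB/drive across 6, {option_b_wasted} GB wasted")
```

Running the numbers makes option (b)'s penalty obvious: you give up 192 GB on each of the two non-vault drives just to get a wider stripe.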

So as you can see, the vault requirements have shrunk from 5 drives to 4 drives but then got jacked up to 6 or 8 drives – hey….. makes sense to me.  Basically, just plan to use the "extra" drives as part of another storage pool or RAID group, or just use them as hot spares.