Archive for December, 2009

I Hooked Up With EMC and She Gave Me VE!

December 31, 2009

Sorry, I couldn't resist the title, but I bet it got your attention.  Don't worry, it's nothing that requires a trip to the clinic or a shot of penicillin.  In fact, it's a good thing.  "VE" is the techno-acronym for Virtual Edition and is used to identify products that either run as a VM or virtual appliance or are geared specifically toward virtualization.  Over the past year it seems EMC has gone VE crazy.  Here is a short list of some of the recent "VE" products from EMC:

– PowerPath/VE
– Avamar/VE
– Rainfinity/VE

If you have or are planning a Virtual Infrastructure, VMware more specifically, then you may be able to benefit from some or all of these technologies.  So let's take a brief look at what each of them does.

PowerPath/VE

It's all about throughput and path redundancy for your VMware hosts.  But wait, VMware vSphere already has built-in failover.  Correct, but it is failover-only multipathing: if one path fails, another takes over.  With PowerPath/VE you get failover plus active load balancing across all available paths, which can give you 2x-3x the throughput you would normally get.  If you have VMware hosts with heavy I/O workloads like databases, this can significantly improve your performance.
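To make the difference concrete, here is a rough Python sketch of failover-only versus actively load-balanced path selection.  It's purely illustrative: the HBA path names are made up, and PowerPath/VE's real path-selection logic is proprietary and considerably smarter than "pick the least-busy path."

```python
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    alive: bool = True
    outstanding_ios: int = 0   # I/Os currently in flight on this path

def failover_only(paths):
    """Failover-only multipathing: always use the first healthy path."""
    for p in paths:
        if p.alive:
            return p
    raise RuntimeError("no healthy paths to the LUN")

def load_balanced(paths):
    """Active/active: spread I/O by picking the least-busy healthy path."""
    healthy = [p for p in paths if p.alive]
    if not healthy:
        raise RuntimeError("no healthy paths to the LUN")
    return min(healthy, key=lambda p: p.outstanding_ios)

# Two hypothetical HBA paths to the same LUN.
paths = [Path("vmhba1:C0:T0:L0"), Path("vmhba2:C0:T0:L0")]

# Failover-only would push every I/O down vmhba1 until it fails;
# load balancing keeps both HBAs busy, which is where the extra throughput comes from.
for _ in range(10):
    chosen = load_balanced(paths)
    chosen.outstanding_ios += 1
print({p.name: p.outstanding_ios for p in paths})
```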

Avamar/VE

Ok, everyone who loves backup to tape please raise your hand.  Now, both of you put your hands down.  Avamar is EMC's cutting-edge, source-based de-duplication backup solution, delivered as an appliance.  The solution can completely eliminate tape from your environment, reduce backup windows and bandwidth usage, and de-dupe data at ratios of 500:1 or more.  For smaller remote sites that may need local restore capabilities, a full-blown appliance-based solution is probably overkill and, more importantly, overspend.  With Avamar/VE you can run the same Avamar software found in the appliances as a virtual appliance on your existing VMware infrastructure and replicate it back to a larger Avamar solution in the datacenter.
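For the curious, here's a toy Python sketch of the source-based de-duplication idea: the client chunks and hashes data locally and only sends chunks the back end has never seen.  This is not how Avamar is actually implemented (it uses variable-length chunking and a much smarter global index); it just shows why repeat backups move so little data over the wire.

```python
import hashlib
import os

server_index = set()    # hashes of chunks the back end already stores

def backup(data: bytes, chunk_size: int = 4096):
    """Chunk and hash locally; count how many chunks actually need to be sent."""
    total = sent = 0
    for i in range(0, len(data), chunk_size):
        digest = hashlib.sha1(data[i:i + chunk_size]).hexdigest()
        total += 1
        if digest not in server_index:    # only new, unique chunks cross the wire
            server_index.add(digest)
            sent += 1
    return total, sent

day1 = os.urandom(40960)                  # first full backup: ~40KB of brand-new data
day2 = day1 + b"a little new data"        # next day: mostly unchanged

print("day 1:", backup(day1))             # nearly every chunk gets sent
print("day 2:", backup(day2))             # only the changed tail gets sent
```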

Rainfinity/VE

Rainfinity FMA is EMC's File Management Appliance, which basically allows you to move files between different tiers of storage based on policies.  For example, let's say you want to archive data off of a Celerra that hasn't been accessed in 6 months or more to your Centera CAS system.  Rainfinity would handle this for you.  Now, with the FMA/VE edition, this too can be run as a virtual appliance on VMware.  I hope to have some future posts on this in the next few weeks.
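To give a flavor of what a policy like that looks like, here's a hedged Python sketch that walks a share and moves anything not accessed in roughly six months to an archive tier.  The mount points and the 180-day cutoff are made-up examples, and the real FMA leaves stubs behind so users can still open archived files, which this sketch doesn't attempt.

```python
import os
import shutil
import time

POLICY_DAYS = 180                        # "not accessed in 6 months or more"
PRIMARY = "/mnt/celerra_share"           # hypothetical primary NAS mount
ARCHIVE = "/mnt/centera_archive"         # hypothetical archive-tier mount

cutoff = time.time() - POLICY_DAYS * 86400

for root, _dirs, files in os.walk(PRIMARY):
    for name in files:
        src = os.path.join(root, name)
        if os.path.getatime(src) < cutoff:           # last access older than the policy allows
            dest = os.path.join(ARCHIVE, os.path.relpath(src, PRIMARY))
            os.makedirs(os.path.dirname(dest), exist_ok=True)
            shutil.move(src, dest)                   # real FMA would leave a stub behind
            print(f"archived {src} -> {dest}")
```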

So, what does all this really mean for you, the client?  Money saved in the case of Avamar and Rainfinity, and performance gained with PowerPath.  Appliance-based solutions such as Avamar and Rainfinity can be pretty expensive, especially for smaller environments and remote offices.  The VE editions are significantly less expensive.  So check out the technology, jump into bed with EMC and get yourself some VE!

Have a great New Year!

Rainfinity FMA/VE

December 22, 2009

Recently EMC released Rainfinity FMA/VE.  VE stands for Virtual Edition, so now, instead of a hardware/appliance-based file management and archiving platform, you can run it as a VM in your existing VMware environment.  You can use FMA/VE to archive older data from your NAS to cheaper tiers of storage or a CAS device like EMC Centera.  Also, FMA/VE works with other NAS vendors such as NetApp.

By putting your FMA solution inside of VMware you can take advantage of some of the features within VMware.  Typically the appliance-based FMA solution would require an HA configuration, which adds rack space and cabling.  Now, with a virtual FMA, you can take advantage of the HA features within VMware itself.

The only real drawback I could find with FMA/VE is the number of files that can be processed.  An FMA appliance can handle 200-250 million files, whereas the VE version of the product can process 50-75 million.  Even still, that is a bunch of files.  For most mid-size companies those limits won't come into play.

Simplifying Avamar

December 6, 2009

There is often confusion around what the available configurations are for EMC Avamar.  I can attest to this because I deal with it on a daily basis and I usually get confused…it could be the rum, but let's not speculate at this point.  🙂

When you start looking at Avamar for your environment you will undoubtedly hear terms like "Single Node", "RAIN", "Grid", "Utility Node", "Gen2 Node", "Gen3 Node", "Deep Node" and "Shallow Node".  Now, unless you're an EMC engineer or have really done a deep dive into the Avamar technology, you're probably thinking, "What does all this crap mean?"  Well, let me see if I can simplify it a little.

First, let's dumb down the terminology to where normal people can understand it:

– Single Node:
  • A single stand-alone node of Avamar with the brains and the storage capacity all in one.
– RAIN Configuration:
  • Redundant Array of Independent Nodes.  Think of a RAID5 array of servers instead of disks.
– Storage Node:
  • A single Avamar server appliance (2U server).  They are available in 1TB, 2TB and now 3.3TB configurations.
– Utility Node:
  • An Avamar appliance similar to the storage node (same physical size) that works as the brains of an Avamar system.
– Deep Node:
  • Typically this was a 2TB Storage Node.  Now that the 3.3TB nodes are out I guess those will be REALLY Deep Nodes…but who knows.
– Shallow Node:
  • A 1TB Storage Node.

 

Now let’s look at the different types of Avamar solutions that are available:

The Single Node

The single node Avamar solution is just what it sounds like: one node (appliance) that backs up your data.  This node contains both the Avamar software "brains" and the disks necessary to store the data.  This solution is available in 1TB, 2TB or 3.3TB sizes.  This is the most basic Avamar configuration and is good for small shops or remote locations that may require fast recovery of data.  The downside to this configuration is that it is limited from a scalability standpoint.  Once you fill it up you have to either add another single node and manage it separately, or buy multiple nodes and upgrade to a "Grid" or "RAIN" configuration, which I'll discuss later.  Another big drawback is that you have to replicate the solution (buy 2 single nodes) to protect against node failure, which would otherwise result in data loss.  Both nodes can be sitting in the same datacenter side by side, but they must be replicated.

The Grid

Avamar can be configured in a "1×2 Grid" architecture.  This includes 2 storage nodes and 1 utility node to do all the work.  The benefit of this configuration is purely capacity: if you need more than 3.3TB of Avamar, this is one way to accomplish that.  The downside of this configuration is the same as the single node in that it also must be replicated to guard against node failure.

The RAIN Configuration

Aha, finally, a configuration that doesn't have to be replicated.  The RAIN configuration is built using a minimum of 4 storage nodes and 1 utility node.  Again, think of this as RAID5 with physical servers instead of just disks.  There are 2 common initial configuration families for this RAIN architecture: one is called a DS510, DS520 or DS530, and the other is called a DS610, DS620 or DS630.  The 500 series consists of 5 nodes and the 600 series consists of 6 nodes – genius huh!  The last two numbers (10, 20 or 30) represent the size of the storage nodes.  A DS510 would have five 1TB nodes, a DS520 would have five 2TB nodes and so on.  Let's take a look at the DS520 and DS620 architectures, which seem to be the most common (there's a quick capacity-math sketch after the lists below).

– DS520
  • 3 Active 2TB Storage Nodes
  • 1 Spare 2TB Storage Node
  • 1 Utility Node
  • Up to 6TB licensable capacity

– DS620
  • 4 Active 2TB Storage Nodes
  • 1 Spare 2TB Storage Node
  • 1 Utility Node
  • Up to 8TB licensable capacity
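Here's the quick capacity-math sketch I promised above: a few lines of Python that decode a DS model number the way I read the naming scheme (first digit after "DS" = total node count including the utility node and spare, last two digits = node size) and compute licensable capacity as active storage nodes times node size.  This is just my interpretation, not an official EMC formula.

```python
NODE_SIZES_TB = {"10": 1.0, "20": 2.0, "30": 3.3}

def decode(model: str) -> dict:
    """Decode a DSxxx model name into node counts and licensable capacity."""
    series, size_code = model[2], model[3:]        # e.g. "DS620" -> "6", "20"
    total_nodes = int(series)                      # 5xx = 5 nodes, 6xx = 6 nodes
    node_tb = NODE_SIZES_TB[size_code]
    storage_nodes = total_nodes - 1                # minus the utility node
    active_nodes = storage_nodes - 1               # minus the spare storage node
    return {
        "model": model,
        "storage_nodes": storage_nodes,
        "active_nodes": active_nodes,
        "licensable_tb": active_nodes * node_tb,
    }

for m in ("DS510", "DS520", "DS620"):
    print(decode(m))
# DS520 -> 3 active 2TB nodes = 6TB licensable; DS620 -> 4 active = 8TB
```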

There are several benefits to going with a RAIN architecture.  First, you don't have to replicate the solution, although if you are trying to protect yourself from a complete datacenter disaster you may still want to replicate to a DR site.  Everything is internally redundant in a RAIN configuration.  The second, and probably biggest, advantage of RAIN is scalability: if you max out your configuration, simply add another node to the array and you're off to the races.

A quick note about the different “GENs” of Avamar:

Avamar is currently in its 3rd generation, which basically means the hardware platform has changed.  Current Avamar nodes are based on a Dell R710 server; before that it was a Dell 2950.  As the server hardware changes, so (usually) does the GEN.  Avamar is backward compatible, so if you have a GEN2 RAIN configuration you can add GEN3 nodes to it without any problems.  The only catch is that you cannot add 3.3TB nodes to a RAIN configuration that is built on 2TB nodes.

Hopefully that simplified things a little; if not, please feel free to send hate mail.  Happy holidays!