Posts Tagged ‘Avamar’

Avamar Gen 4 Hardware Now Available

April 24, 2011

A new generation of Avamar hardware has recently been released: Generation 4 is now available, and it brings some significant changes.  Perhaps the most obvious change is the new node sizes.  Avamar is now available in 1.3TB, 2.6TB, 3.9TB and 7.8TB nodes.  Nice round numbers, right? 🙂  There are a few caveats to this new node sizing, however.  RAIN configs (multi-node systems) can only be built out with the 3.9TB or 7.8TB nodes.  This is a departure from past generations of the Avamar product, where RAIN configs could be built with basically any of the available nodes.  For mid-size and large customers this probably won't cause any issues, but it may make it a bit too pricey for smaller customers to get their foot in the door on Avamar.  The 1.3TB and 2.6TB nodes, as well as the two larger nodes, can all still be used in single-node systems just like before.

Another change that may affect some is the death of the 1×2 configuration.  This configuration had its place in some solutions where a full RAIN was just too large and a single-node system wasn't enough.  I don't see this being a huge problem for most, as in my experience we sold very few of this type of configuration.  On top of that, with the release of the larger nodes, a single-node system can now handle almost 8TB of data.

A few more minor changes that are worth noting:

  • All Gen 4 nodes now use a RAID 1 configuration internally
  • A spare node is no longer required for RAIN configuration (I’d still have one though)
  • The underlying OS is now SLES Linux instead of RHEL Linux
  • Hardware for the 1.3, 2.6 and 3.9TB nodes is the DELL R710
  • Hardware for the 7.8TB node is the DELL R510

For anyone with existing Gen 3 Avamar systems – don’t worry – the 1TB, 2TB and 3.3TB nodes are still available as upgrades for your existing systems.

Thanks for reading!


Avamar Switching

March 28, 2010

Avamar RAIN configurations are composed of at least 5 individual nodes connected together to create a single array with a single point of management.  The nodes are linked together with what EMC calls a “cube switch,” which is simply an IP switch/hub like you would find in any data closet.  Typically when you purchase an Avamar array the switch(es) are included as part of the configuration.  Everything gets networked together when the Avamar array is installed and you’re off to the races.  Simple enough, right? 

Honestly I never put much thought into what brand of switch was being used or any of the speeds and feeds associated with it.  What’s more, no one ever asked, even after over a year and a half of selling Avamar.  Finally the question came up from one of our clients so I had to get my learn on.  After a few calls to various contacts at EMC we had our answers.  So, since I’m always up for sharing the knowledge here is some info on the Avamar Cube switch for any of you who are interested:

Manufacturer:  Allied Telesis

Model:  AT-9924 (dumbed down to basically a hub)

Type:  Multilayer IPv4 and IPv6 Gigabit switch

Ports:  24

Power:  Dual power supplies – 75W maximum

Heat:  256 BTU/hr

Size:  1U each

Switching Capacity:  48Gbps

If you want to see the entire data sheet you can check it out here:  http://centre.com/media/datasheets/9900_family_ds.pdf

Now, something more important to note is that you do not HAVE to use these EMC cube switches when you implement Avamar into your network.  This is good news for many companies that have strict restrictions on what types of networking devices can be installed.  As long as you have 2 available gigabit Ethernet connections per Avamar node, you can connect Avamar directly to your existing infrastructure.  More than likely you would want to VLAN this traffic off, but even that is not required.

EMC Avamar and The 1×2 Grid Configuration

January 22, 2010

During Avamar discussions two things generally come up right off the bat: deduplication ratios and cost.  Most people want to know, “How much Avamar do I need to fit all of my data?” and “How much is it going to cost me?”  Both are very valid questions.  The sizing question is answered with a resounding “it depends,” and the answer to the cost question is relative to the answer to the first.  Now, you won’t get any argument from me that Avamar IS expensive, but when you weigh out all the benefits and do some cost comparisons to other backup options it usually comes out a winner in the end.

In past posts I’ve pointed out the different ways in which Avamar can be set up. There are basically 3:

–          Single Node Configuration

–          1×2 Grid Configuration

–          RAIN Configuration

Since cost is always a factor in any solution, many people will get their feet wet with a small single-node implementation and build from there.  Both the Single Node and 1×2 Grid configurations are non-redundant, so they always need to be replicated to another site or another node.  The RAIN configuration consists of a minimum of 4 storage nodes and a management node and is fully internally redundant.  RAIN can be replicated if you want, or it could simply be placed in the DR site and backups could occur over the WAN.

So what’s with the 1×2 Grid?  Basically the only benefit I can see is that you’re able to combine 2 single nodes to get added capacity while keeping the ability to manage it as one system instead of 2 separate single nodes.  After talking through several scenarios with a few of our account executives I’ve come to the conclusion – why even consider a 1×2 Grid?  For my example below I’ll use 2TB storage nodes and list pricing without any implementation services.  Follow with me:

Single Node Replicated – 2TB Licensed/Max Capacity: $116,000

–          2TB Licensing in Production and 2TB Licensing in DR

1×2 Grid Replicated – 2TB Licensed Capacity / 4TB Max Capacity: $296,000

–          2TB Licensing in Production and 2TB Licensing in DR

–          Adds the ability to expand to 4TB of capacity if needed.

5 Node RAIN – 4TB Licensed Capacity / 6TB Max Capacity:  $181,000

–          4TB Licensing in Production

–          Ability to expand beyond 6TB by adding additional nodes

There are a couple of potential drawbacks to having a non-replicated RAIN configuration:

–          Backups happen across your WAN.  Everything is source based so unless you have a lot of daily changes or a very small link between sites this is probably not an issue.

–          Restores have to be pulled back across the WAN in rehydrated form.  If you are doing restores often or restoring very large files this will obviously affect your bandwidth and restore time.  In most cases, however, the files are small and restores infrequent.   
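To get a feel for that second drawback, here is a back-of-the-envelope sketch for estimating how long a rehydrated restore would take across a WAN link.  The formula is just size divided by usable bandwidth; the 50GB/10Mbps example numbers and the 80% link-utilization figure are my own illustrative assumptions, not from any Avamar spec.

```python
def restore_hours(restore_gb: float, wan_mbps: float, utilization: float = 0.8) -> float:
    """Hours to pull a fully rehydrated restore across a WAN link.

    Assumes decimal units (1GB = 10^9 bytes, 1Mbps = 10^6 bits/sec) and that
    only `utilization` of the link is realistically usable for the restore.
    """
    bits_to_move = restore_gb * 8 * 1000**3
    usable_bits_per_sec = wan_mbps * 1000**2 * utilization
    return bits_to_move / usable_bits_per_sec / 3600

# Example: a 50GB restore over a 10Mbps link at 80% utilization
print(f"{restore_hours(50, 10):.1f} hours")  # roughly 14 hours
```

Numbers like that are why frequent or very large restores over the WAN deserve a hard look before skipping local nodes.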

In my opinion if you are thinking about long-term growth and scalability (you should be) this is a very easy decision – go with a RAIN Configured Avamar solution to start with and license only the capacity you need or start with a replicated Single Node solution and upgrade to RAIN when you run out of space.  The upgrade cost of the Single Node to RAIN is simply the difference between the two costs or $65,000.  Keep in mind everything here is LIST pricing so if you apply a decent discount to this pricing the numbers get even better.
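The arithmetic above is easy to sanity-check.  This little sketch just encodes the list prices quoted in this post and computes the upgrade delta and cost per licensed TB (the per-TB comparison is my own framing, using only the numbers given here):

```python
# List prices from this post (2TB storage nodes, no implementation services)
single_node_replicated = 116_000  # 2TB licensed in production + 2TB in DR
grid_1x2_replicated = 296_000     # 2TB licensed, expandable to 4TB
rain_5_node = 181_000             # 4TB licensed, expandable beyond 6TB

# Upgrading a replicated single node to a 5-node RAIN is just the difference
upgrade_cost = rain_5_node - single_node_replicated
print(f"Single Node -> RAIN upgrade: ${upgrade_cost:,}")  # $65,000

# Cost per licensed TB makes the 1x2 Grid look even worse
print(f"Single Node: ${single_node_replicated / 2:,.0f} per licensed TB")
print(f"1x2 Grid:    ${grid_1x2_replicated / 2:,.0f} per licensed TB")
print(f"5-node RAIN: ${rain_5_node / 4:,.0f} per licensed TB")
```

At list, the RAIN option delivers twice the licensed capacity of the 1×2 Grid at well under two-thirds of the price, which is the whole argument in three lines of arithmetic.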

I Hooked Up With EMC and She Gave Me VE!

December 31, 2009

Sorry, I couldn’t resist the title, but I bet it got your attention.  Don’t worry, it’s nothing that requires a trip to the clinic or a shot of penicillin.  In fact, it’s a good thing.  “VE” is the techno-acronym for Virtual Edition and is used to identify products that either run as a VM or virtual appliance or are geared specifically toward virtualization.  Over the past year it seems EMC has gone VE crazy.  Here is a short list of some of the recent “VE” products from EMC:

–          PowerPath/VE

–          Avamar/VE

–          Rainfinity/VE

If you have or are planning a virtual infrastructure, VMware more specifically, then you may be able to benefit from some or all of these technologies.  So let’s take a brief look at what each of them does.

PowerPath/VE

It’s all about throughput and path redundancy for your VMware hosts.  But wait, VMware vSphere already has built-in failover.  Correct, but it is failover-only multipathing: if one path fails the other takes over.  With PowerPath/VE you can now do failover and active load balancing, which can give you 2x – 3x the throughput you would normally get.  If you have VMware hosts with heavy I/O, like databases, this can significantly improve your performance.

Avamar/VE

Ok, everyone who loves backup to tape please raise your hand.  Now, both of you put your hands down.  Avamar is EMC’s cutting-edge, appliance-based, source-based de-duplication backup solution.   The solution can completely eliminate tape from your environment, reduce backup windows and bandwidth usage, and de-dupe data at ratios of up to 500:1 or more.  For smaller remote sites that may need local restore capabilities, a full-blown appliance-based solution is probably overkill and, more importantly, overspend.  With Avamar/VE you can run the Avamar software found in the appliances as a virtual appliance on your existing VMware infrastructure and replicate it back to a larger Avamar solution in the datacenter.

Rainfinity/VE

Rainfinity FMA is EMC’s File Management Appliance, which basically allows you to move files between different tiers of storage based on policies.  For example, let’s say you want to archive data off of a Celerra that hasn’t been accessed in 6 months or more to your Centera CAS system.  Rainfinity would handle this for you.  Now with the FMA/VE edition, this too can be run as a virtual appliance on VMware.  I hope to have some future posts on this in the next few weeks.

So, what does all this really mean for you, the client?  Money saved in the case of Avamar and Rainfinity, and performance gained with PowerPath.  Appliance-based solutions such as Avamar and Rainfinity can be pretty expensive, especially for smaller environments and remote offices.  VE editions are significantly less expensive.  So check out the technology, jump into bed with EMC and get yourself some VE!

Have a great New Year!

Simplifying Avamar

December 6, 2009

There is often confusion around what the available configurations are for EMC Avamar.  I can attest to this because I deal with it on a daily basis and I usually get confused……it could be the rum but let’s not speculate at this point.  🙂

When you start looking at Avamar for your environment you will undoubtedly hear terms like “Single Node”, “RAIN”, “Grid”, “Utility Node”, “Gen2 Node”, “Gen3 Node”, “Deep Node” and “Shallow Node”.  Now unless you’re an EMC engineer or have really done a deep dive into the Avamar technology you’re probably going, “What does all this crap mean?”.  Well let me see if I can simplify it a little.

First let’s dumb down the terminology to where normal people can understand it:

–          Single Node:

  • A single stand-alone node of Avamar with the brains and the storage capacity all in one.

–          RAIN Configuration:

  • Redundant Array of Independent Nodes.  Think of a RAID5 array of servers instead of disks

–          Storage Node:

  • A single Avamar server appliance (2U server).  They are available in 1TB, 2TB and now 3.3TB configurations.

–          Utility Node:    

  • An Avamar appliance similar to the storage node (same physical size) that works as the brains of an Avamar system.

–          Deep Node:

  • Typically this was a 2TB Storage node.  Now that the 3.3TB nodes are out I guess those will be REALLY Deep Nodes….but who knows.

–          Shallow Node:

  • A 1TB Storage Node


Now let’s look at the different types of Avamar solutions that are available:

The Single Node

The single-node Avamar solution is just what it sounds like: 1 node (appliance) that backs up your data.  This node contains both the Avamar software “brains” and the disks necessary to store the data.  This solution is available in 1TB, 2TB or 3.3TB sizes.  This is the most basic Avamar configuration and is good for small shops or remote locations that may require fast recovery of data.  The downside to this configuration is that it is limited from a scalability standpoint.  Once you fill it up you have to either add another single node and manage it separately, or buy multiple nodes and upgrade to a “Grid” or “RAIN” configuration, which I’ll discuss later.  Another big drawback is that you have to replicate the solution (buy 2 single nodes).  This is to protect from node failure, which would result in data loss.  Both nodes can be sitting in the same datacenter side by side, but they must be replicated.

The Grid

Avamar can be configured in a “1×2 Grid” architecture.  This includes 2 single nodes and 1 utility node to do all the work.  The benefit of this configuration is purely space.  If you need more than 3.3TB of Avamar this would be one way to accomplish that.  The downside of this configuration is the same as the single node in that it also must be replicated to guard against node failure.

The RAIN Configuration

Ah ha, finally, a configuration that doesn’t have to be replicated.  The RAIN configuration is built using a minimum of 4 storage nodes and 1 utility node.  Again, think of this as RAID5 with physical servers instead of just disks.  There are 2 common initial configurations for this RAIN architecture.  One is called a DS510, DS520 or DS530 and the other is called a DS610, DS620 or DS630.  The 500 series consists of 5 nodes and the 600 series consists of 6 nodes – genius, huh!  The last two numbers (10, 20 or 30) represent the size of the storage node.  A DS510 would have five 1TB nodes, a DS520 would have five 2TB nodes and so on.  Let’s take a look at the DS520 and DS620 architectures, which seem to be the most common.

–          DS520

  • 3 Active 2TB Storage Nodes
  • 1 Spare 2TB Storage Node
  • 1 Utility Node
  • Up to 6TB licensable capacity

–          DS620

  • 4 Active 2TB Storage Nodes
  • 1 Spare 2TB Storage Node
  • 1 Utility Node
  • Up to 8TB licensable capacity

There are several benefits of going with a RAIN architecture.  First, you don’t have to replicate the solution although if you are trying to protect yourself from a complete datacenter disaster you may want to replicate to a DR site.  Everything is internally redundant in a RAIN configuration.  The second and probably biggest advantage to RAIN is scalability.  If you max out your configuration simply add another node to the array and you’re off to the races.
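The DS naming convention above is regular enough to capture in a few lines.  This sketch decodes a model name per the scheme described in this post; the mapping of the “30” suffix to the 3.3TB node is my own assumption based on the node sizes mentioned here, and the active-node count assumes one utility node and one spare, matching the DS520/DS620 breakdowns above.

```python
# Per-node capacity implied by the last two digits of the model name.
# Assumption: "30" maps to the 3.3TB node mentioned in this post.
NODE_TB = {"10": 1.0, "20": 2.0, "30": 3.3}

def decode_ds_model(model: str) -> dict:
    """Decode an Avamar DS-series model name, e.g. "DS520"."""
    series_digit, size_code = model[2], model[3:]
    total_nodes = int(series_digit)        # 500 series = 5 nodes, 600 series = 6
    node_tb = NODE_TB[size_code]
    active = total_nodes - 2               # minus 1 utility node and 1 spare
    return {
        "total_nodes": total_nodes,
        "node_tb": node_tb,
        "active_storage_nodes": active,
        "licensable_tb": active * node_tb,
    }

print(decode_ds_model("DS520"))  # 3 active 2TB nodes -> 6TB licensable
print(decode_ds_model("DS620"))  # 4 active 2TB nodes -> 8TB licensable
```

Running it on DS520 and DS620 reproduces the 6TB and 8TB licensable figures from the bullet lists above, which is a decent check that the naming logic is right.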

A quick note about the different “GENs” of Avamar:

Avamar is currently in its 3rd generation, which basically means the hardware platform has changed.  Current Avamar nodes are based on a Dell R710 server; before that it was a Dell 2950.  As the server hardware changes, the GEN usually changes with it.  Avamar is backward compatible, so if you have a GEN2 RAIN configuration you can add GEN3 nodes to it without any problems.  The only catch is that you cannot add 3.3TB nodes to a RAIN configuration that is built on 2TB nodes.

Hopefully that simplified things a little, if not please feel free to send hate mail.  Happy holidays!

Data Domain and Avamar – De-duplication from Different Points of View

August 16, 2009

The recent acquisition of Data Domain by EMC will surely give EMC a powerful advantage in the area of data de-duplication.  We’ve had some questions and statements from clients basically asking, “Isn’t that going to hurt the sales of Avamar?”  So I thought I’d weigh in with my opinion on this, which in short is: I don’t think so.

For those of you not completely familiar with data de-duplication, there are basically two flavors: target based and source based.  Target based means that all the data is sent to a device (Data Domain) and de-duplicated after it gets there.  Source based is the opposite: the data is de-duplicated at the source before it is sent across the network (Avamar).  Typically the “well, which is better?” question comes up, and to that the typical “it depends” answer comes flying back (what a surprise).  One size definitely doesn’t fit all for either of the products. 
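To make the source-based idea concrete, here is a toy illustration (not Avamar’s actual algorithm, which uses variable-length chunking and much more): chunk the data, hash each chunk, and only ship chunks whose hashes the backup target hasn’t seen before.  All names and the fixed 4KB chunk size are my own for the sketch.

```python
import hashlib

CHUNK_SIZE = 4096  # toy fixed-size chunks; real products chunk more cleverly

def source_side_backup(data: bytes, target_index: set) -> list:
    """Return only the chunks the target hasn't seen yet.

    For chunks already in `target_index`, only the hash would cross the wire,
    which is where the WAN bandwidth savings come from.
    """
    new_chunks = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in target_index:  # only unseen data gets sent
            target_index.add(digest)
            new_chunks.append(chunk)
    return new_chunks

index = set()
day1 = b"A" * 16384                 # first backup: 4 identical chunks
sent_day1 = source_side_backup(day1, index)
day2 = b"A" * 16384 + b"B" * 4096   # next day: one genuinely new chunk
sent_day2 = source_side_backup(day2, index)
print(len(sent_day1), len(sent_day2))  # 1 1
```

Even on day one only a single chunk ships, because the four chunks are identical; on day two only the new “B” chunk crosses the wire.  Target-based dedup does the same indexing, just after the full data stream has already traversed the network.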

In some situations, where you may have lots of remote offices and would like to back them all up to one central location, an Avamar solution will probably be just what the doctor ordered.  You can dedup your data before you send it across your WAN and save your precious bandwidth.  In addition, you save on the administrative resources required to back up your remote sites.

Now let’s look at a slightly different situation that involves your datacenter backup environment.  Suppose you have large volumes of database data that needs to be backed up and requires the faster restore times of a backup to disk environment.  This data already resides in your datacenter and doesn’t really need to traverse slower WAN connections.  You still can benefit from data de-duplication to reduce the size of your backups but this doesn’t necessarily have to happen at the source.  With a Data Domain solution you could easily achieve your goals and probably at a lower cost point than an Avamar solution.

In my opinion, for a lot of environments, a combination of both Avamar and Data Domain may provide the best solution in the end.  Use Avamar to achieve very high dedup ratios of your file data and other data that is relatively static or if you have remote office environments.  Use Data Domain to backup data that has higher change rates, like databases, and data that may never have to “leave” the datacenter.

I also think in the end it will come down to the cost of ownership.  Avamar is a very powerful product but not all companies can afford it.  Data Domain offers a simple, less expensive way to de-duplicate your data but it does have some downsides. 

I’ll try to do a side by side Avamar to Data Domain comparison in an upcoming post.  Stay tuned…..

Avamar NDMP Accelerator Overview

February 1, 2009

EMC’s Avamar backup and recovery solutions provide the ability to deduplicate your data at the source before sending it over the WAN or LAN.  This opens up all sorts of opportunities for businesses with widely distributed remote sites and greatly reduces the amount of data that is backed up.  Avamar also provides the ability to replicate itself to another offsite location, which could essentially eliminate the need for tape backups.

So now, for example, you could have your primary datacenter with a SAN/NAS (EMC Celerra for this example) and several small remote offices with no local IT staff.  Using Avamar you are able to back up those remote sites to an Avamar node(s) at the datacenter with no local intervention at the remote sites.  All of your remote sites are now backed up over the WAN, deduped and consolidated at your datacenter.  But what about backing up your NAS at the datacenter?  It would make sense to dedupe and back this data up to the Avamar as well and eliminate multiple backup systems.  How is this done?  Enter the Avamar NDMP Accelerator.

The Avamar NDMP Accelerator uses the standard NDMP protocol to back up your NAS datastores (Celerra and NetApp), just like you would use NDMP to tape.  The appliance must be located on the same LAN as the storage device being protected.


Avamar NDMP Accelerator Supported Devices:

  • EMC Celerra IP Storage with I18N enabled running DART5.5.
  • NetApp running Data ONTAP 6.5.1R1, 7.0.4, 7.0.5, 7.0.6, 7.1 or 7.2

Capabilities and Limitations Summary (partial list):

  • Full support for Storage Device ACLs and Q-tree Data
  • Support for Backup and Restore of Volumes, Q-trees and Directories
  • Support for Multiple Storage Devices Using One Accelerator
  • Support for Multiple Backup Streams From a Single Storage Device
  • Support for Multiple Simultaneous Backups
  • Only One Target Volume or Directory per Backup or Restore
  • Maximum Number of Files = 10 Million
  • Celerra Incremental Backups Should Only Be Performed at the Volume Level
  • File Level Restores Not Supported on Network Appliance Filers