Archive for March, 2010

Avamar Switching

March 28, 2010

Avamar RAIN configurations are composed of at least 5 individual nodes all connected together to create a single array with a single point of management.  The nodes are linked together with what EMC calls a “cube switch,” which is simply an IP switch/hub like you would find in any data closet.  Typically when you purchase an Avamar array these switches are included as part of the configuration.  Everything gets networked together when the Avamar array is installed and you’re off to the races.  Simple enough, right?

Honestly, I never put much thought into what brand of switch was being used or any of the speeds and feeds associated with it.  What’s more, no one ever asked, even after over a year and a half of selling Avamar.  Finally the question came up from one of our clients, so I had to get my learn on.  After a few calls to various contacts at EMC we had our answers.  So, since I’m always up for sharing the knowledge, here is some info on the Avamar cube switch for any of you who are interested:

Manufacturer: Allied Telesis

Model: AT-9924 (dumbed down to basically a hub)

Type: Multilayer IPv4 and IPv6 Gigabit switch

Ports: 24

Power: Dual power supplies – 75W maximum

Heat: 256 BTU/hr

Size: 1U each

Switching capacity: 48 Gbps

If you want to see the entire data sheet you can check it out here:  http://centre.com/media/datasheets/9900_family_ds.pdf

Now something more important to note is that you do not HAVE to use these EMC cube switches when you implement Avamar into your network.  This is good news for many companies that have strict restrictions on what types of networking devices can be installed.  As long as you have 2 available gigabit Ethernet connections per Avamar node, you can connect Avamar directly to your existing infrastructure.  More than likely you would want to VLAN this traffic off, but even that is not required.
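
If you want to do the port math yourself, here’s a quick back-of-the-napkin Python sketch using the 2-connections-per-node rule above.  The 24-port figure comes from the cube switch specs; reserving 2 ports per switch for uplinks is just my assumption for illustration, not an EMC spec.

PORTS_PER_NODE = 2    # gigabit Ethernet connections per Avamar node (from above)
SWITCH_PORTS = 24     # ports on a 24-port access switch like the AT-9924

def switches_needed(nodes, uplinks_per_switch=2):
    """Rough count of node-facing ports and switches for a RAIN grid."""
    node_ports = nodes * PORTS_PER_NODE
    usable = SWITCH_PORTS - uplinks_per_switch   # leave room for uplinks (my assumption)
    switches = -(-node_ports // usable)          # ceiling division
    return node_ports, switches

print(switches_needed(5))    # (10, 1) -- a minimum 5-node grid fits on one switch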

Rainfinity/VE – In Theaters Now!

March 21, 2010

I finally got the latest and greatest version of Rainfinity VE installed in our lab.  I’ve been messing around with the freeware/demo version of this for a few weeks and have been pretty impressed with its ease of use and ease of installation.  I put together a brief 10-minute demo on YouTube of what the product looks like and how it works.  OK, so it’s no Steven Spielberg production, but given my chosen career path it’s not too bad in my own opinion.  Feel free to disagree though, I welcome constructive criticism.

View the Video Here:

http://www.youtube.com/watch?v=MOyxsRm6L6w

EMC Storage 101

March 14, 2010

We recently hired some new account executives from various fields.  I know from experience that trying to learn and understand all the different products that EMC alone represents can be a daunting task.  In an effort to help, I tried to create a high-level overview/cheat sheet for our people that would help non-technical folks get up to speed more quickly and serve as an easy reference resource.  As I thought about it I realized that many of our blogs involve varying degrees of mostly technical information.  So I figured, why not post some of the basics out there for people new to shared storage or EMC?

EMC Storage

SAN vs NAS vs DAS

– SAN – Storage Area Network

  • Clariion is EMC’s SAN product
  • Provides storage for servers

– NAS – Network Attached Storage

  • Celerra is EMC’s NAS product
  • Also called Unified storage
  • NAS adds file server type functionality to a SAN.
    • Instead of having dedicated file servers you can use the Celerra/NAS to manage and store files directly on the array.

– DAS – Direct Attached Storage

  • Think of this as a stand-alone server where data is stored directly on drives in that server.

iSCSI vs Fibre Channel Network or “Fabric”

– iSCSI

  • Uses standard computer Ethernet cables.  All traffic runs across this type of cable.
  • Easy to implement because you can use existing cabling and switches that most companies already have.  Dedicated switches are generally recommended.
  • Servers can use existing network cards to connect into the iSCSI network.

– Fibre Channel

  • Uses fiber optic cables to transfer data between the servers and the storage array.
  • Requires special fibre switches and cabling
  • Typically more expensive and requires additional technical knowledge
  • Faster than iSCSI in most cases
  • Servers require special network cards called Host Bus Adapters (HBAs).
  • Typically you will find Fibre Channel infrastructure in larger environments.

CIFS and NFS

– CIFS is a file share protocol for Windows servers

  • You may hear it called a “CIFS Share”
  • When you map a drive on your Windows laptop to your home drive – that’s a “CIFS Share”

– NFS is a file share protocol for UNIX/Linux servers

  • You may hear it called an “NFS Mount” (see the quick example below)
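
To make the difference concrete, here’s a minimal Python sketch.  The server name, share name and mount point are all made up; the point is just that a CIFS share shows up as a mapped drive or UNC path on Windows, while an NFS mount looks like part of the local filesystem on UNIX/Linux.

# All names here are hypothetical -- swap in your own server/share/mount.

# Windows client: H: is mapped to a CIFS share, so both lines hit the
# same file on the file server.
data = open(r"H:\docs\notes.txt").read()
data = open(r"\\fileserver\home\docs\notes.txt").read()

# UNIX/Linux client: the NFS export is mounted at /mnt/home, so the file
# is accessed like any local path.
data = open("/mnt/home/docs/notes.txt").read()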


EMC Clariion

– Hardware Components

  • Storage Processor (SP)
    • 2 in each array.
    • Handle all movement of data to and from the disks
  • Standby Power Supply (SPS)
    • Usually 2 in each array.
    • Provides battery power to the storage processors and the first 5 disks in case of a power failure.
  • Disk Array Enclosure (DAE)
    • The enclosure(s) that the actual disk drives fit into.
    • Sometimes called “shelves”.
    • Each one holds 15 disks (12 for the AX4)
  • Disk Drives
    • 3 types of drives (see the quick IOPS sketch after this list)
      • SATA
        • Slower performance but higher capacity
        • 80 IOPS or less per drive
        • 1TB or 2TB each
        • 5400 RPM or 7200 RPM – slower RPM = slower performance
        • Commonly used for backup to disk, archiving or file shares
      • Fibre Channel (FC)
        • High performance
        • 10,000 RPM (140 IOPS) or 15,000 RPM (180 IOPS)
        • 146GB, 300GB, 450GB and 600GB sizes
      • Enterprise Flash Drive (EFD)
        • Super high performance
        • 2500 IOPS per drive
        • 73GB, 200GB and 400GB sizes
        • Good for high activity apps and databases
        • Very expensive
      • SATA drives and FC drives cannot be used in the same DAE, so if you wanted 10 FC drives and 10 SATA drives you would need 2 DAEs.
      • FC and EFD drives can exist in the same DAE.
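
Those per-drive numbers make for easy napkin math on raw performance.  Here’s a quick Python sketch using the IOPS figures from the list above; note it deliberately ignores RAID write penalties, cache and workload mix, so treat it as a relative comparison only.

# Rough aggregate IOPS for a drive mix, using the per-drive figures above.
IOPS_PER_DRIVE = {
    "SATA": 80,      # 5400/7200 RPM
    "FC_10K": 140,   # 10,000 RPM Fibre Channel
    "FC_15K": 180,   # 15,000 RPM Fibre Channel
    "EFD": 2500,     # Enterprise Flash Drive
}

def raw_iops(drive_counts):
    """Sum the back-end IOPS a set of drives can deliver."""
    return sum(IOPS_PER_DRIVE[kind] * count for kind, count in drive_counts.items())

# One 15-drive DAE of 15K FC spindles vs. a single EFD:
print(raw_iops({"FC_15K": 15}))  # 2700 -- a full shelf of 15K drives
print(raw_iops({"EFD": 1}))      # 2500 -- nearly matched by one flash drive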

– Software Components

  • SnapView
    • Enables the taking of snapshots on the Clariion
    • We usually add this to every array
  • MirrorView
    • Only required when replicating from one Clariion to another
    • 2 versions (see the lag sketch after this list)
      • MirrorView /A (Asynchronous) – data is replicated in batches
        • Used when there is limited bandwidth or long distances
        • The DR array will be several seconds, minutes or hours “behind” the Production array.
      • MirrorView /S (Synchronous) – data is replicated in real-time
        • The Production and DR arrays are always identical
        • Requires fast connections between sites
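
To get a feel for how far “behind” an asynchronous copy can drift, here’s a toy Python model.  The change rate, link speed and cycle interval below are made-up illustration numbers, not EMC guidance, and real MirrorView /A behavior is more involved.

# Toy model: the DR copy lags by one replication cycle plus the time it
# takes to ship that cycle's changes across the WAN link.
def replication_lag_minutes(change_rate_mbps, link_mbps, cycle_minutes):
    """Worst-case staleness of the DR copy for one replication cycle."""
    if change_rate_mbps > link_mbps:
        return float("inf")  # the link can never catch up
    transfer = cycle_minutes * change_rate_mbps / link_mbps
    return cycle_minutes + transfer

# Example: 20 Mbps of changes over a 100 Mbps WAN link, 15-minute cycles:
print(replication_lag_minutes(20, 100, 15))  # 18.0 minutes behind, worst case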

EMC Celerra

– Hardware Components

  • **The guts of a Celerra are a Clariion, so all the components listed above are also part of the Celerra.  There are a few additional components to the Celerra:**
  • Datamovers (DM) – also called X-Blades or Blades
    • At least 2 in each Celerra.
    • Think of these as file servers that are built into the Celerra
    • Allow you to create and manage CIFS and NFS file shares
  • Management Console
    • 1 per Celerra
    • Basically just a small server-like device used to administer the array.

– Software Components

  • Celerra Replicator
    • Only required when replicating from one Celerra to another

EMC Networker – The Install

March 7, 2010

This past week I tasked myself with getting Networker 7.6 set up in our lab.  The long-term plan is to get it integrated into our entire environment, which includes Celerra NAS arrays, Avamar, Data Protection Advisor, DataDomain, etc., etc.  The cool thing is that Networker can act as your single pane of glass to manage all your backups and backup targets like tape, disk, and source or target dedupe devices.

To be honest, it’s been years since I’ve even had to touch any kind of backup software, and generally speaking I’ve always tried to avoid being the guy responsible for backups.  That being the case, I wasn’t exactly looking forward to setting up Networker from scratch.  Now, I probably should have done some research and read some white papers on the correct way to do the install, but that was too obvious.  I went with the slap-the-CD-in-and-start-hitting-NEXT approach.  I’m sure nobody reading this has ever done that though.  I had heard stories about how the old Networker was complicated and difficult to use, so I really wanted to see how easy and intuitive the new and improved Networker was to install and use.

To my pleasant surprise, everything went off without a hitch.  I installed Networker for Windows 7.6 in less than 15 minutes.  Everything was menu driven, and with the exception of one or two screens everything was really just Next, Next, Next.  Even the rollout of the agents was extremely simple.  A quick discovery process and a few check boxes and you’re off to the races.  I was able to get a couple of standard agents deployed and configured in a few minutes.

Next steps will be to get Avamar and DataDomain integrated, as well as setting up the ability to back up to the EMC Atmos cloud.  More to come.