Posts Tagged ‘EMC’

How-To Generate SP Collect Data from Unisphere

December 22, 2011

Many times you may be asked to provide “SP Collects” for your EMC array.  This may be for EMC Support or for your VAR, who may need to see how your existing array is configured.  Either way, SP Collect data is very useful and relatively easy to generate.  With the release of Unisphere for the EMC Clariion, Celerra and VNX platforms the menus and processes have changed just a little.  Below are step-by-step instructions on how to generate these files from the new Unisphere interface, followed by a scripted alternative for those who prefer the CLI.

• Log in to the Unisphere console.

• Select the array you want to collect data from.

• Click on the “System” tab, then click on “Generate Diagnostic Files – SPA” under the “Diagnostic Files” section on the right.

• Click YES to generate the SP Collect data for SPA.

• Click on “OK” to confirm the process completed successfully.

• Now click on “Generate Diagnostic Files – SPB” under the “Diagnostic Files” section on the right.

• Click YES to generate the SP Collect data for SPB.

• Click on “OK” to confirm the process completed successfully.

• Click on “Get Diagnostic Files – SPA”.

• The SP A – File Transfer Manager window will appear.
  • 1st – Sort the files by date, with the most recent date at the top, by clicking the top of the “Date” column.
  • 2nd – Highlight the “APMxxxxxx.zip” file.  (Depending on your array, your file may not start with “APM” but with a different set of letters.  Regardless, the rest of the file name will be a series of numbers_SPA_date_time.)
  • 3rd – Select the destination directory where you want the file saved.  (Use something easy to find, like c:\)
  • 4th – Click the “Transfer” button below the list of files.

• Click YES to confirm you want to transfer the file.

 

• In the “Transfer Status” window, wait for the file transfer to report as successful (this may take a minute or so).  Once the transfer is successful, click OK.

• Click on “Get Diagnostic Files – SPB”.

 

• The SP B – File Transfer Manager window will appear.
  • 1st – Sort the files by date, with the most recent date at the top, by clicking the top of the “Date” column.
  • 2nd – Highlight the “APMxxxxxx.zip” file.  (Depending on your array, your file may not start with “APM” but with a different set of letters.  Regardless, the rest of the file name will be a series of numbers_SPB_date_time.)
  • 3rd – Select the destination directory where you want the file saved.  (Use something easy to find, like c:\)
  • 4th – Click the “Transfer” button below the list of files.

• Click YES to confirm you want to transfer the file.

 

• In the “Transfer Status” window, wait for the file transfer to report as successful (this may take a minute or so).  Once the transfer is successful, click OK.

• Open Windows Explorer (or whatever file manager you use) and go to the destination directory where you downloaded the files.  Verify that the SPA and SPB .zip files are there.

• Send those files to your VAR or EMC technical consultant/contact.
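If you would rather script this than click through Unisphere, the same SP Collects can also be kicked off with Navisphere Secure CLI (naviseccli).  Below is a minimal sketch of that approach; the spcollect and managefiles commands are naviseccli commands, but the exact flags and output vary by FLARE release, and the SP addresses and destination path here are made-up examples, so verify everything against your Navisphere CLI reference before relying on it.

```python
# Hedged sketch: start SP Collects on both SPs with naviseccli, then list the
# diagnostic files so the newest *_data.zip can be retrieved.  Command syntax
# is assumed -- verify spcollect/managefiles options for your FLARE release.
import subprocess

SP_ADDRESSES = {"SPA": "192.168.1.10", "SPB": "192.168.1.11"}  # example IPs only
DEST_DIR = r"c:\spcollects"  # easy-to-find destination, as suggested above

def navicli(ip, *args):
    """Run one naviseccli command against an SP and return its output."""
    cmd = ["naviseccli", "-h", ip, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

for sp, ip in SP_ADDRESSES.items():
    # Equivalent of "Generate Diagnostic Files" in Unisphere; the collect
    # itself runs on the SP and takes a few minutes to finish.
    navicli(ip, "spcollect")

    # Equivalent of "Get Diagnostic Files": list what is on the SP right now
    # (rerun later to see the new zip once the collect completes).
    print(f"--- {sp} diagnostic files ---")
    print(navicli(ip, "managefiles", "-list"))
    # Retrieval would then be a managefiles retrieve of that file into DEST_DIR;
    # check the exact option names in your CLI reference.
```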

EMC Celerra CIFS Share Management – Two Ways To Skin A Cat

June 18, 2010

First off, the “Skin a cat” thing is just a figure of speech so if you’re a cat lover don’t report me to PETA.  I would never do something like that…….well…….unless the damn thing bit me or really pissed me off, then all bets are off!  Actually I’m sure this will make good FUD for the competitors.  Now on top of all the EMC stories we hear like, “EMC kills puppies and kicks old ladies down the stairs” they can add “they skin cats too!”.  Barbarians!

Anyway, on to the EMC stuff.  This week I’ve had to do a few demos showing some basic functionality of the Celerra platform.  The one thing most people do with a NAS solution is manage their Windows CIFS shares.  So to demonstrate how easy this process is, I created a short YouTube video which illustrates how to do this using both Celerra Manager and Windows MMC.  Check it out below.

Generating NAR Data from Your EMC Array

April 1, 2010

Many times we need performance data from a client’s array.  If you have the full version of Navisphere Analyzer you can gather this yourself, but in many cases Navi Analyzer is not available.  If you are trying to troubleshoot performance problems or design a new array based on existing performance data, you may need to provide a “NAR” file to EMC or your EMC reseller.  This is not something you will do often, and in fact you may never need to do it.  There is not a lot of info on the web on how to gather this information, so I created a short 5-minute video to walk through the process.  Click below to check it out.
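For reference, the same data can also be pulled from the CLI instead of the GUI shown in the video, using naviseccli’s analyzer commands and the same subprocess pattern as the SP Collect sketch earlier on this page.  Treat this as a hedged outline, not a verified procedure: the analyzer command set exists in Navisphere CLI, but option names shift between releases and the SP address below is invented, so double-check against your CLI reference.

```python
# Hedged sketch: check Analyzer logging status and list the NAR archives on an
# SP.  Option names are from memory -- confirm them in the Navisphere CLI
# reference for your release before relying on this.
import subprocess

SP_IP = "192.168.1.10"  # example SP address

def navicli(*args):
    cmd = ["naviseccli", "-h", SP_IP, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

print(navicli("analyzer", "-status"))            # is statistics logging running?
print(navicli("analyzer", "-archive", "-list"))  # NAR files available on the SP
# A specific .nar file is then retrieved through the same analyzer command set
# (an archive-retrieve style option); the exact flag name varies by release.
```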

EMC 146GB Drives Get Shanked!

February 7, 2010

It’s all over for the 146GB drives.  Last week EMC announced that the 146GB fibre channel disk drives will no longer be available for new Clariion and Celerra platforms.  I don’t see this as a huge deal to most customers but for some very specific applications the drives were very useful. 

Some databases that we run across have very high I/O requirements but really do not require much disk capacity.  For applications like these, where you may need 20+ spindles to get the performance you need, the 146GB drives were perfect because of their price point.  The other situation where the 146GB drives were handy was the Vault on the Clariion and Celerra platforms.  As a general practice you don’t want to use the remaining space on the Vault drives for anything with high I/O requirements, so as not to affect the performance of the array.  Many clients would not put anything on the remaining Vault space at all.  So why use 300GB or 450GB drives for the Vault’s 4+1 RAID5 when you really only need around 300GB?  The 146s were perfect for this.

On the flip side of this argument, drive prices over the last year have come down significantly, so having to use 300GB drives really doesn’t cause a huge financial impact on new arrays.  The fact is that for most of the clients we’ve worked with, 146GB drives were just too small for the majority of applications.  Let’s face it, the trend of data growth is steeply upward.  Most reports and studies I’ve seen show industry-average data growth anywhere from 30 – 50% per year.  With that kind of growth, smaller-capacity drives will continue to become “legacy hardware”.

Rainfinity/VE – First Impressions

January 10, 2010

This past week I played around with getting Rainfinity/VE up and running in our lab.  Rainfinity/VE is EMC’s latest file management appliance, which can now be run as a VM in your VMware environment.  I was hoping we would have the latest full-version release of the software, but I had to settle for a 30-day demo version.  The latest version has the ability to archive to the EMC Atmos cloud, which I was hoping to test, but it looks like I’ll have to wait a few weeks.  Anyway, on to the Rainfinity/VE details……

First, I had to find some documentation on how to install and configure the system so it was off to Google search.  I found a couple of sites that were extremely helpful. 

–          Link to some more excellent info and videos from Chad Sakac’s blog: 

–          Link to download the demo:

–          Link to the QuickStart Guide: 

There is also a PDF document that comes as part of the .zip demo package that gives good step-by-step install and setup documentation.  The file name is 300-009-001-a01.pdf

If you want screenshots and very detailed technical info about each step of the process, check out the links above.  I’m lazy and there’s no sense reinventing the wheel, but here are a few things I ran into.  First, get the demo downloaded and unzipped.  The file is almost 700MB so it may take some time.  I was installing the virtual appliance on one of our vSphere hosts and, for whatever reason, had some major issues getting the .OVA appliance format imported.  I converted it over to OVF format and still had problems.  Finally I got the OVF format to import using the free version of VMware Converter.  Hopefully you’ll have better luck than me!  Second, make sure DNS is set up and working correctly.  Due to the setup of our lab I ran into some issues here that can make things go not so smoothly.  Other than these two issues everything was pretty straightforward and simple to set up….. “So easy, a Sales Engineer can do it!”.

For my testing I connected the Rainfinity/VE appliance to our EMC Celerra NS-120.  I created a new CIFS server and share and then copied over a bunch of old MS Office and PDF files that had been sitting on our SharePoint server forever.  I created a second CIFS server and share on some 2nd-tier storage that was also on the same Celerra.  This is where I was hoping to be able to create and use some Atmos storage, but for now the 2nd tier was good enough to do some testing.  From the Rainfinity side of things you basically just tell it where your different storage repositories are and then start creating policies around what data you want to archive and where you want to archive it.  For my test I created two different policies: one to archive files not modified in more than 90 days, and another to archive PDF documents larger than 3MB.  Once the policies are created you can run them immediately or schedule them at whatever frequency you need.  Once the files are archived, a small stub is left behind to link users to the actual files.
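Rainfinity builds these policies in its own UI, but to make the two demo policies concrete, here is a small stand-alone sketch in plain Python (not any Rainfinity/VE API) that walks a share and flags the same candidates: files untouched for more than 90 days, or PDFs larger than 3MB.  The share path is a placeholder.

```python
# Conceptual illustration only -- this is NOT the Rainfinity/VE API, just a
# stand-alone walk of a share to show what the two demo policies select:
#   1) files not modified in > 90 days
#   2) PDF documents > 3 MB
import os
import time

SHARE_ROOT = r"\\cifs-server\demo-share"  # placeholder UNC path
NINETY_DAYS = 90 * 24 * 3600
THREE_MB = 3 * 1024 * 1024
now = time.time()

for dirpath, _dirnames, filenames in os.walk(SHARE_ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        info = os.stat(path)
        stale = (now - info.st_mtime) > NINETY_DAYS
        big_pdf = name.lower().endswith(".pdf") and info.st_size > THREE_MB
        if stale or big_pdf:
            # Rainfinity would archive this file to the second tier and leave
            # a small stub behind; here we just report the candidate.
            print(f"archive candidate: {path}")
```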

Overall I think this is going to be a great solution for mid-sized customers.  It’s affordable, it runs on an existing VMware infrastructure with minimal resource requirements, and it is relatively easy to administer.  In addition you will now have the ability to archive to the cloud and leverage the cost savings that it brings to the table.

EMC Clariion RAID Group Recommendations

October 4, 2009

Last week we had a client that needed a replacement disk drive for an older EMC Clariion array.  Now this is by no means anything complicated, but the drive needed wasn’t available anymore.  The question was posed, “Can you mix 15k and 10k fibre channel drives in the same RAID group?”  Hmmmmm, I hadn’t run across this yet so I had to look it up.  The short answer is YES you can, but it’s not best practice.  So what is the best solution in a case like this?  Simple: buy a larger-capacity drive of the same speed (10k rpm, 15k rpm) and use it in place of the failed drive.  You’ll lose the additional capacity of the drive but the performance won’t be affected.

In the process of looking for the official answer to this question I came across several other little tidbits that are good to know about Clariion arrays in regards to drives.  To give credit where it’s due, most of this and additional info can be found at www.emcstorageinfo.com

–          All disks in a RAID group will be truncated to match the capacity of the smallest drive in the group.

–          The Vault Drives in a Clariion MUST all be the same size.

–          SATA and FC drives can NOT be mixed in the same disk tray or DAE

–          SATA drives can only use a SATA Hot Spare and FC drives can only use a FC Hot Spare

–          A 15k FC drive can use a 10k Hot Spare and vice versa.

–          A DAE allows one speed change within the shelf but it is recommended to have all the same speed drives in a DAE.

–          If drive speeds will be mixed in a DAE the faster drives should be installed in the leftmost drive slots first.

Data Domain and Avamar – De-duplication from Different Points of View

August 16, 2009

The recent acquisition of Data Domain by EMC will surely give EMC a powerful advantage in the area of data de-duplication.  We’ve had some questions and statements from clients basically asking “isn’t that going to hurt the sales of Avamar?”.  So I thought I’d weigh in with my opinion on this, which in short is: I don’t think so.

For those of you not completely familiar with data de-duplication, there are basically two flavors: target-based and source-based.  Target-based means that all the data is sent to a device (Data Domain) and de-duplicated after it gets there.  Source-based is the opposite: the data is de-duplicated at the source before it is sent across the network (Avamar).  Typically the “well, which is better?” question comes up, and to that the typical “it depends” answer comes flying back (what a surprise).  One size definitely doesn’t fit all for either of the products.
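To make the source-versus-target distinction a little more concrete, here is a toy content-hashing sketch; it has nothing to do with either product’s real chunking or indexing, but it shows the core idea both share: only keep a chunk you haven’t seen before.  Source-based dedup (Avamar) runs this kind of check at the client, so duplicate chunks never cross the wire; target-based dedup (Data Domain) ships everything and does the check on the appliance.

```python
# Toy fixed-size-chunk dedup -- purely conceptual, not how Avamar or Data
# Domain actually work.  Source-based dedup would run this at the client and
# send only unseen chunks; target-based dedup runs it after the data arrives.
import hashlib

CHUNK_SIZE = 4096
store = {}  # chunk fingerprint -> chunk data (the "dedup index")

def ingest(data: bytes) -> int:
    """Store unseen chunks and return how many bytes were actually new."""
    new_bytes = 0
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha1(chunk).hexdigest()
        if digest not in store:          # only new chunks cost space/bandwidth
            store[digest] = chunk
            new_bytes += len(chunk)
    return new_bytes

backup = b"A" * 8192 + b"B" * 4096       # 12 KB, but the two "A" chunks repeat
print(ingest(backup))   # 8192 -- the duplicate "A" chunk is deduped away
print(ingest(backup))   # 0 -- a second "full" backup sends nothing new
```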

In some situations, where you may have lots of remote offices and would like to back them all up to one central location, an Avamar solution will probably be just what the doctor ordered.  You can dedup your data before you send it across your WAN and save your precious bandwidth.  In addition, you save on the administrative resources required to back up your remote sites.

Now let’s look at a slightly different situation that involves your datacenter backup environment.  Suppose you have large volumes of database data that needs to be backed up and requires the faster restore times of a backup to disk environment.  This data already resides in your datacenter and doesn’t really need to traverse slower WAN connections.  You still can benefit from data de-duplication to reduce the size of your backups but this doesn’t necessarily have to happen at the source.  With a Data Domain solution you could easily achieve your goals and probably at a lower cost point than an Avamar solution.

In my opinion, for a lot of environments, a combination of both Avamar and Data Domain may provide the best solution in the end.  Use Avamar to achieve very high dedup ratios of your file data and other data that is relatively static or if you have remote office environments.  Use Data Domain to backup data that has higher change rates, like databases, and data that may never have to “leave” the datacenter.

I also think in the end it will come down to the cost of ownership.  Avamar is a very powerful product but not all companies can afford it.  Data Domain offers a simple, less expensive way to de-duplicate your data but it does have some downsides. 

I’ll try to do a side by side Avamar to Data Domain comparison in an upcoming post.  Stay tuned…..

EMC Replication Technologies – Simplified Guide

May 3, 2009

From a DR standpoint, using replicated SANs to provide a fully functional DR site is nothing new, but it was usually a fairly expensive endeavor, especially for smaller organizations.  Now, as SAN prices have come down and technologies like VMware Site Recovery Manager are making replicating data for DR more automated and reliable, more and more organizations are taking a serious look at replicated storage solutions.

Since we are an EMC Partner, I’ll concentrate on some of the things to think about when considering a replicated SAN infrastructure.  EMC has many ways to replicate SAN data.  Here is a short list of some of the technologies and where they’re used:

–          MirrorView:                      

  • MirrorView is used on Clariion and Celerra arrays and comes in two flavors, /A – Async and /S – Sync.

–          Celerra Replicator:         

  • Used to replicate data between Celerra unified storage platforms.

–          RecoverPoint:                  

  • Appliance-based Continuous Data Protection solution that also provides replication between EMC arrays.  This is commonly referred to as a “Tivo for the Datacenter” and is the latest and greatest technology when it comes to data protection and replication.

Each of these replication technologies replicates LUNs between arrays, but they have different overhead requirements that you should consider.

MirrorView

–          MirrorView requires 10 to 20% overhead to operate.  So if you have 10TB of data to replicate you are going to need an additional 1 to 2TB of storage space on your production array.

Celerra Replicator

–          Celerra Replicator can require up to 250% overhead.  This number can vary depending on what you are replicating and how you plan to do it.  This means that 10TB of replicated data could require an additional 25TB of disk space between the Production and DR arrays.

RecoverPoint

–          While RecoverPoint is certainly used as a replication technology, it provides much more than that.  The ability to roll back to any point in time (similar to a Tivo) provides the ultimate granularity for DR.  This is accomplished via a Write Splitter that is built into Clariion arrays.  RecoverPoint is also available for non-EMC arrays.

–          RecoverPoint can be deployed in 3 different configurations, CDP (local only/not replicated), CRR (remote) and CLR (local and remote).

  • CRR replicates data to your DR site where your “Tivo” capability resides.  You essentially have an exact copy of the data you want to protect/replicate and a “Journal” which keeps track of all the writes and changes to the data.  There is an exact copy of your protected data plus roughly 10 to 20% overhead for the Journal.  So 10TB of “protected” data would require around 11 to 12TB of additional storage on the DR array.
  • CLR is simply a combination of local CDP protection and remote CRR protection together.  This provides the ultimate in protection and granularity at both sites and would require additional storage at both sites for the CDP/CRR “Copy” and the Journal.

This is obviously a very simplified summary of replication and replication overhead.  The amount of additional storage required for any replicated solution will depend on the amount, type and change rate of the data being replicated.  There are also many things to consider around bandwidth between sites and the type of replication, Sync or Async, that you need.  EMC and qualified EMC Partners have tools to assess your environment and dial in exactly what is required for replication.
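To put those overhead numbers side by side, here is a quick back-of-the-napkin calculation using the same 10TB example; the percentages are the rough planning figures quoted above, not output from an EMC sizing tool.

```python
# Rough replication-overhead math for a 10 TB protected data set, using the
# planning percentages discussed above (not output from an EMC sizing tool).
protected_tb = 10.0

# MirrorView: ~10-20% overhead on the production array.
mirrorview_extra = (protected_tb * 0.10, protected_tb * 0.20)

# Celerra Replicator: up to ~250% overhead across production + DR.
replicator_extra = protected_tb * 2.5

# RecoverPoint CRR: full copy at the DR site plus ~10-20% for the Journal.
crr_extra = (protected_tb * 1.10, protected_tb * 1.20)

print(f"MirrorView extra:         {mirrorview_extra[0]:.1f}-{mirrorview_extra[1]:.1f} TB")
print(f"Celerra Replicator extra: up to {replicator_extra:.1f} TB")
print(f"RecoverPoint CRR extra:   {crr_extra[0]:.1f}-{crr_extra[1]:.1f} TB")
```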

Flash Drives – Solid State Drive Technology for your SAN

February 8, 2009

Every once in a while at a client meeting the discussion about Flash Drives, or Solid-State Drives (SSDs), comes up.  Flash drives are now an option in EMC’s Clariion arrays.  Usually the first question is “I heard they’re expensive, what do those things cost anyway?”  Well, the short answer is A LOT.  Retail from EMC is over $15,000 each!  Yeah….I’ll take a few trays just in case we need some extra storage space!

Obviously you wouldn’t purchase this type of storage to store your MP3 collection but what are the benefits and use cases for these drives?  According to EMC the new Flash Drive technology is for Tier Zero apps or in other words, applications that require incredible amounts of disk I/O and performance.  Examples of this could be some SQL and Oracle production databases.

So why use Flash Drives?  Marketing numbers from EMC state that “up to 30x the IOPS” of a standard drive can be achieved.  Let’s be conservative and say 15x IOPS for the purposes of this example.  If you had a database that generated, let’s say, 10,000 IOPS, you would need to stripe that data over at least 55 drives to achieve the performance needed (assuming 180 IOPS/drive).  With Flash drives you could theoretically do it with as few as 4 or 5 drives, or even fewer.  Of course, with a maximum drive size of 73GB per drive, space might be an issue.
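Working that spindle math out explicitly, under the same rough assumptions (about 180 IOPS per 15k FC drive and a conservative 15x multiplier for flash):

```python
# Back-of-the-envelope spindle count for a 10,000 IOPS workload, using the
# rough per-drive numbers from the paragraph above.
import math

workload_iops = 10_000
fc_iops_per_drive = 180      # typical planning number for a 15k FC drive
flash_multiplier = 15        # conservative vs. the "up to 30x" marketing figure

fc_drives = math.ceil(workload_iops / fc_iops_per_drive)
flash_drives = math.ceil(workload_iops / (fc_iops_per_drive * flash_multiplier))

print(f"15k FC drives needed: ~{fc_drives}")     # ~56 spindles
print(f"Flash drives needed:  ~{flash_drives}")  # ~4 drives (capacity permitting)
```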

Here is a brief summary of the benefits of these new drives:

• More Performance: At up to 30x the IOPS of a normal high-speed drive, you can reduce the number of spindles you need and increase performance.  Latency is also dramatically reduced.

• Greener: The drives use much less power because there are no moving parts, which translates into less heat, less cooling, a smaller carbon footprint and more financial savings.

• Higher Reliability: No moving parts means less to break, as well as much faster rebuild times.

• Less Rack Space: Using fewer drives will free up rack space and datacenter floor space.

The obvious drawback to these drives is cost.  Most companies with tight budgets will have a hard time justifying the astronomical cost per megabyte that comes along with this technology.  For some high-performance apps though, they may be a great solution.  Hopefully with time and competition the cost will come down and the usable space will go up.

Avamar NDMP Accelerator Overview

February 1, 2009

EMC’s Avamar backup and recovery solutions provide the ability to deduplicate your data at the source, before sending the data over the WAN or LAN.  This opens up all sorts of opportunities for businesses with widely distributed remote sites and greatly reduces the amount of data that is backed up.  Avamar also provides the ability to replicate itself to another offsite location, which could essentially eliminate the need for tape backups.

So now, for example, you could have your primary datacenter with a SAN/NAS (EMC Celerra for this example) and several small remote offices with no local IT staff.  Using Avamar you are able to back up those remote sites to an Avamar node(s) at the datacenter with no local intervention at the remote sites.  All of your remote sites are now backed up over the WAN, deduped and consolidated at your datacenter.  But what about backing up your NAS at the datacenter?  It would make sense to dedupe and back this data up to the Avamar as well and eliminate multiple backup systems.  How is this done?  Enter the Avamar NDMP Accelerator.

The Avamar NDMP Accelerator uses the standard NDMP protocol to back up your NAS datastores (Celerra and NetApp), just like you would use NDMP to tape.  The appliance must be located on the same LAN as the storage device being protected.


Avamar NDMP Accelerator Supported Devices:

  • EMC Celerra IP Storage with I18N enabled running DART 5.5.
  • NetApp running Data ONTAP 6.5.1R1, 7.0.4, 7.0.5, 7.0.6, 7.1 or 7.2

Capabilities and Limitations Summary (partial list):

  • Full support for Storage Device ACLs and Q-tree Data
  • Support for Backup and Restore of Volumes, Q-trees and Directories
  • Support for Multiple Storage Devices Using One Accelerator
  • Support for Multiple Backup Streams From a Single Storage Device
  • Support for Multiple Simultaneous Backups
  • Only One Target Volume or Directory per Backup or Restore
  • Maximum Number of Files = 10 Million
  • Celerra Incremental Backups Should Only Be Performed at the Volume Level
  • File Level Restores Not Supported on Network Appliance Filers