Archive for the ‘Storage’ Category

How-To Generate and Collect NAR Data from VNX Arrays

December 27, 2011
  • Login to the Unisphere console
  • Select the array you want to collect data from

  • Click on the “System” tab, then click on “Monitoring and Alerts”.

  • Click on “Statistics for Block”.

  • Click on “Performance Data Logging”.

  • A “Data Logging” window will appear.  Configure the settings to match the screenshot below.  Leave the “Archive Interval” set to 600 seconds unless otherwise requested by EMC or your EMC Reseller.  Once the settings match, click “START”.

  • Confirm that you wish to continue by clicking “YES”.

 

  • Confirm again that you want to apply these new settings by clicking “YES”.

  • Click “YES” yet again to confirm the data logging warning.

  • Click “OK” to confirm (finally) that the operation was successful.

 

  • You will be returned to the Data Logging settings window.  Click “OK” to close the window.

  • You will now need to let the data collection run for the specified time period – in this case 3 days.  (A quick check on the sample count this produces follows below.)
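Here is a small Python calculation (using the 600-second archive interval and the 3-day window from the steps above) of roughly how many performance samples the array will log per storage processor.

    # Rough sanity check: how many samples a 3-day collection at the default
    # 600-second archive interval produces. The interval and window come from
    # the steps above; the rest is simple arithmetic.

    ARCHIVE_INTERVAL_SECONDS = 600   # "Archive Interval" from the Data Logging window
    COLLECTION_DAYS = 3              # requested collection period

    seconds_collected = COLLECTION_DAYS * 24 * 60 * 60
    samples_per_sp = seconds_collected // ARCHIVE_INTERVAL_SECONDS

    print(f"Samples per storage processor: {samples_per_sp}")   # 432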

Retrieving NAR Files for VNX from the GUI

  • Now that the NAR data collection process has run for 3 days, you will need to retrieve the NAR/NAZ files.
  • Login to the Unisphere console
  • Select the array you want to collect data from

  • Click on the “System” tab, then click on “Monitoring and Alerts”.

 

  • Click on “Statistics for Block”.

  • On the right, click on “Retrieve Archive”.

  • In the Retrieve Archives window, select your array and choose SPA.  (NAR files are ONLY needed for ONE of the SPs.)
  • Change the “Save File(s) As:” destination to a location that is easy to locate.
  • Highlight the .NAR or .NAZ (if encrypted) files that correspond to the 3 days you collected data for.  The timestamps may be off a little, but this will not matter.  There will probably be 4 – 6 files total.
  • Click the “Retrieve” button.

 

  • Click “YES” to confirm the retrieval in the “Confirm: Retrieve Archives” window.

  • At the bottom of the “Retrieve Archives” window, verify that the file transfers complete – this may take a few seconds.
  • Click “Done”.

  • Go to the destination folder you retrieved the files to.

  • Upload all the files to EMC or your EMC Reseller as instructed.
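If you want to double-check the retrieved files before sending them off, here is a minimal Python sketch that lists what landed in the destination folder.  The folder path is an illustrative assumption, not something Unisphere creates for you; adjust it to wherever you saved the archives.

    # Hypothetical convenience script: confirm the retrieved NAR/NAZ archives
    # are present and non-empty before uploading them. The folder path below
    # is an illustrative assumption.
    from pathlib import Path

    RETRIEVE_DIR = Path(r"C:\NAR_Files")   # assumed "Save File(s) As:" destination

    archives = sorted(p for p in RETRIEVE_DIR.iterdir()
                      if p.suffix.lower() in (".nar", ".naz"))

    for f in archives:
        size_mb = f.stat().st_size / (1024 * 1024)
        print(f"{f.name:45} {size_mb:8.1f} MB")

    print(f"\nFound {len(archives)} archive(s); expect roughly 4 - 6 for a 3-day collection.")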

EMC FAST Suite Version 2 – Available Now

August 12, 2010

I blogged earlier about designing with storage pools vs traditional raid groups.  I’m sure you were all overly excited to read about that – both of you.  Well, one of the biggest benefits of using storage pools is now finally available from EMC.  I’m talking about the latest release of the FAST suite from EMC.  This is really two products: FAST sub-LUN tiering and FAST Cache.  With FAST sub-LUN tiering you can now create storage pools with EFD, FC and SATA drives and let the array do all the work of deciding which data needs to run on which tier of drives.  If you have part of a database that constantly blasts 2500 IOPS all day long, no problem, it gets moved onto the EFD tier.  Got some freaky adult entertainment type files that the creepy guy two cubes down from you keeps on his home drive for old times’ sake – yep, that will be hanging out on SATA (after several virus scans I hope).  For most environments this approach really makes sense and makes life easier.  Let’s face it, you probably have no idea what the I/O graphs on most of your LUNs look like on a day-to-day basis.  Automation is your friend!

One thing to remember if you are considering purchasing a new EMC array: it is much easier to implement FAST sub-LUN tiering from the start instead of after you have the array in place.  Existing raid groups have to be migrated to storage pools before sub-LUN tiering can be used, so if you don’t have spare drives available to move things around, the process could be more expensive or even impossible.

Probably even more of a game changer than sub-LUN tiering is the release of FAST Cache.  With FAST Cache you can now add gobs of EFD-based front-end cache to your storage array – up to 2TB depending on the array.  Now before you get too excited about 2TB of cache, remember someone has to pay for that much EFD space and it ain’t cheap.  Not to mention that that capability only exists on the CX4-960, which alone costs more than most of our houses.  Even so, adding as little as 73GB of EFD cache can really improve performance across ALL of your LUNs.  Better yet, you can select which LUNs take advantage of this feature with a simple check box.  On smaller arrays like the CX4-120 this feature certainly gives you the most bang for your buck and is really pretty inexpensive.  Here is a chart that shows the minimum and maximum FAST Cache configurations for EMC arrays:

Using Storage Pools and F.A.S.T on EMC Arrays

July 25, 2010

Traditionally with EMC SAN and NAS arrays the design was based on raid groups (RG) and LUNs.  Overall the process was pretty straightforward.  First you create a raid group, such as a RAID 5 group with 5 drives.  I/O is usually one of the big concerns.  The I/O generated by an application would dictate the number of drives that needed to be in a raid group, the type or speed of those drives and the raid group configuration, such as RAID 5 or RAID 10.  Once we had all of that worked out, it was simply a matter of “carving up” the storage by creating a LUN or LUNs on the raid group.  Present the LUNs to your host and you’re off to the races.  As simple as this whole process may sound, it’s about to get even simpler for EMC arrays.

With the soon-to-be-released Unisphere management console, FAST suite and new OS code from EMC, storage design should get even easier.  The EMC FAST (Fully Automated Storage Tiering) suite has several cool features, but the one I want to focus on is the ability to “move” data around on a sub-LUN level automatically.  This, combined with storage pools, will not only make the design easier but should also boost performance.  To illustrate this concept, let’s assume we have a database application.  The database has several drives, all with varying performance requirements.  Parts of this database require very high performance and generate heavy I/O, while other drives are pretty stagnant and don’t require much I/O at all.  Under the old way of doing things we might have LUNs on dedicated raid groups for certain parts of the database app.  For example, we might put the high-I/O portions of the app on Fibre Channel drives and the other, slower portions on SATA drives.  Using the new storage pool concept, we might have FC, SATA and EFD (flash) drives all in the same storage pool.  We would create all of our LUNs inside this storage pool and let FAST figure out what to do with them.  The cool thing is that this is done on a sub-LUN level.  If performance requirements suddenly go up, the parts of the LUN that are “hot” may be moved over to the EFDs to handle the increase in I/O, while the “cooler” parts of the LUN remain on cheaper SATA drives.
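To make the placement decision concrete, here is a toy Python sketch of the kind of choice FAST makes for each slice of a LUN.  The slice size, tier names and IOPS thresholds are illustrative assumptions, not EMC’s actual algorithm.

    # Toy illustration of sub-LUN tiering: place each 1GB "slice" of a LUN on a
    # tier based on how busy it has been. The thresholds, tiers and slice size
    # are assumptions for illustration only; this is not EMC's actual algorithm.
    from dataclasses import dataclass

    @dataclass
    class Slice:
        lun: str
        offset_gb: int
        avg_iops: float

    def choose_tier(s: Slice) -> str:
        """Hot slices go to EFD, warm slices to FC, cold slices to SATA."""
        if s.avg_iops >= 1000:
            return "EFD"
        if s.avg_iops >= 100:
            return "FC"
        return "SATA"

    slices = [
        Slice("DB_LUN_1", 0, 2500.0),   # hot index region
        Slice("DB_LUN_1", 1, 150.0),    # moderately busy tables
        Slice("DB_LUN_1", 2, 5.0),      # rarely touched archive data
    ]

    for s in slices:
        print(f"{s.lun} @ {s.offset_gb} GB -> {choose_tier(s)} ({s.avg_iops:.0f} IOPS)")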

As cool as this feature may be, you still have to plan for I/O, but it’s more about planning for the total I/O of the environment.  In some cases, such as critical databases where you still want dedicated disks to guarantee a certain level of performance, I still see the old raid group approach being used.  The good news is you can do both raid groups and storage pools on the same array, so you don’t really lose anything from a functionality standpoint.  For many of our clients with medium-sized environments I see this storage pool approach being very beneficial.  The ability to take an array with, say, 60 SATA, FC and EFD drives, put all 60 drives in one storage pool and create all your LUNs on that one pool really simplifies things.  All of this functionality should be available very soon, with the GA releases of Unisphere and FAST right around the corner.

EMC Celerra CIFS Share Management – Two Ways To Skin A Cat

June 18, 2010

First off, the “Skin a cat” thing is just a figure of speech, so if you’re a cat lover don’t report me to PETA.  I would never do something like that… well… unless the damn thing bit me or really pissed me off, then all bets are off!  Actually, I’m sure this will make good FUD for the competitors.  Now, on top of all the EMC stories we hear, like “EMC kills puppies and kicks old ladies down the stairs,” they can add “they skin cats too!”  Barbarians!

Anyway, on to the EMC stuff.  This week I’ve had to do a few demos showing some basic functionality of the Celerra platform.  One thing most people do with a NAS solution is manage their Windows CIFS shares.  So to demonstrate how easy this process is, I created a short YouTube video which illustrates how to do this using both Celerra Manager and the Windows MMC.  Check it out below.

EMC F.A.S.T – Details and Architecture Considerations

April 30, 2010

If you have or are looking at an EMC Clariion, you may have heard about the relatively recent release of F.A.S.T for the Clariion line.  F.A.S.T stands for Fully Automated Storage Tiering, which basically means the ability to move data between different tiers of disks (FC, SATA and Flash).  EMC F.A.S.T is currently in stage 1, which implies that there will be a stage 2… and there will be.  Since stage 2 is an unknown number of months away, let’s concentrate on what F.A.S.T stage 1 really gets you.

At the end of the day the idea is to have SATA, FC and EFDs in your array and then let the array monitor the disk activity and move the data to the most appropriate type of disks.  If you have a part of a database that is cranking out 5000 IOPS, it gets sent to the EFDs, while file data like your users’ shopping lists and porn videos will end up on SATA.  What F.A.S.T does is monitor your disk performance using a tool many of you may already have on your arrays, Navisphere Analyzer.  Once the array has a load on it and enough data has been gathered (usually around 5 days), F.A.S.T can start doing its thing.

You have two choices when it comes to the migration of a LUN: automatic or manual.  Under automatic mode F.A.S.T will identify a LUN that should be moved and do the migration for you.  Under manual mode F.A.S.T will simply make recommendations as to which LUNs should be moved.  You then have a choice to schedule the migration for a later date and time or do it right then.  To summarize the product in a sentence: F.A.S.T automatically analyzes Clariion NAR data for you.  Could you do all of this manually?  Absolutely.  Would you?  That is the question.
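To picture what manual mode hands you, here is a rough Python sketch of a LUN-level recommendation pass over the kind of per-LUN averages that Analyzer data boils down to.  The thresholds and sample figures are made up for illustration; they are not F.A.S.T’s real decision rules.

    # Rough sketch of a LUN-level recommendation pass like F.A.S.T stage 1's
    # manual mode: look at each FC LUN's average load and suggest a better tier.
    # The thresholds and sample figures below are illustrative assumptions.

    fc_luns = {                        # avg IOPS per LUN, summarized from NAR data
        "LUN_10_SQL_Data": 5000,
        "LUN_11_SQL_Logs": 900,
        "LUN_20_File_Share": 40,
    }

    def recommend(avg_iops: int) -> str:
        if avg_iops >= 2000:
            return "migrate to EFD"
        if avg_iops <= 100:
            return "migrate to SATA"
        return "leave on FC"

    for lun, iops in fc_luns.items():
        print(f"{lun:20} {iops:6d} IOPS -> {recommend(iops)}")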

There are a few things to be aware of with F.A.S.T stage 1.

  • Only FC LUNs are analyzed.  If you have LUNs that reside on SATA and EFDs, F.A.S.T will not make recommendations or migrations for you.  This restriction will hopefully be lifted in future versions.
  • Migrations are done at the LUN level (sub-LUN level coming soon)
  • F.A.S.T requires an enabler be loaded on the array first
  • A Windows server is required for the F.A.S.T software (can be a virtual machine)
  • If you are adding F.A.S.T to an existing array FLARE 29 is required

From a SAN design perspective, adding F.A.S.T doesn’t really create any major challenges.  The biggest challenge may be financial, in that you need to make sure you have enough free space on the various tiers of storage to move your LUNs around.  That being said, the biggest benefit of F.A.S.T may also be financial, in that you can maximize the use of your storage infrastructure.

RecoverPoint/CE – What’s that?

February 28, 2010

Recently we’ve had some clients interested in the ability to geographically separate some of their Microsoft clusters.  Honestly, I wasn’t aware that EMC even had a product to enable this until one of my colleagues mentioned RecoverPoint/CE.  So I did a little reading and thought I’d give a brief summary of what it is and how it can benefit you.  RecoverPoint/CE is an add-on software package to RecoverPoint.  The “CE” stands for Cluster Enabler, and it essentially integrates RecoverPoint with your Microsoft Failover Clustering software to allow geographically dispersed cluster nodes in your environment.  RP/CE manages the cluster failover between your production and DR storage systems using the RecoverPoint APIs.

So what are some of the advantages of doing this?  Most importantly, higher availability and a reduced Recovery Time Objective (RTO).  CE enables your clusters to automatically manage application failover and resources.  Should a failure take place, the amount of time required to get your app back online is greatly reduced.  There could also be a reduction in the amount of hardware and software resources required.  In many environments there would be a Microsoft cluster at the production site and then another clustered environment at the DR site.  Now, with the ability to geographically disperse the nodes, DR becomes both cheaper and easier.

From a cost perspective, CE is not terribly expensive, but it is licensed based on the capacity of your existing RecoverPoint environment.  If you have multiple sites and run Microsoft Clustering it is definitely worth taking a look at.  If you want deeper technical detail around RecoverPoint/CE, check out the EMC website or do a quick Google search on the product.  I did find a YouTube video that gives a quick demo of the product.  There is no sound, but it at least shows how the product works.  Check it out here:  http://www.youtube.com/watch?v=r_E9mH8mAww

EMC Replication Technologies – Simplified Guide

May 3, 2009

From a DR standpoint, using replicated SANs to provide a fully functional DR site is nothing new, but it was usually a fairly expensive endeavor, especially for smaller organizations.  Now, as SAN prices have come down and technologies like VMware Site Recovery Manager are making replicating data for DR more automated and reliable, more and more organizations are taking a serious look at replicated storage solutions.

Since we are an EMC Partner, I’ll concentrate on some of the things to think about when considering a replicated SAN infrastructure.  EMC has many ways to replicate SAN data.  Here is a short list of some of the technologies and where they’re used:

MirrorView

  • MirrorView is used on Clariion and Celerra arrays and comes in two flavors: /A (asynchronous) and /S (synchronous).

Celerra Replicator

  • Used to replicate data between Celerra unified storage platforms.

RecoverPoint

  • An appliance-based Continuous Data Protection solution that also provides replication between EMC arrays.  This is commonly referred to as a “Tivo for the datacenter” and is the latest and greatest technology when it comes to data protection and replication.

Each of these replication technologies replicates LUNs between arrays, but they have different overhead requirements that you should consider.  A quick worked example of the overhead math follows the summaries below.

MirrorView

  • MirrorView requires 10 to 20% overhead to operate.  So if you have 10TB of data to replicate, you are going to need an additional 1 to 2TB of storage space on your production array.

Celerra Replicator

  • Celerra Replicator can require up to 250% overhead.  This number can vary depending on what you are replicating and how you plan to do it.  This means that 10TB of replicated data could require an additional 25TB of disk space between the production and DR arrays.

RecoverPoint

  • While RecoverPoint is certainly used as a replication technology, it provides much more than that.  The ability to roll back to any point in time (similar to a Tivo) provides the ultimate granularity for DR.  This is accomplished via a write splitter that is built in to Clariion arrays.  RecoverPoint is also available for non-EMC arrays.

  • RecoverPoint can be deployed in 3 different configurations: CDP (local only/not replicated), CRR (remote) and CLR (local and remote).

  • CRR replicates data to your DR site where your “Tivo” capability resides.  You essentially have an exact copy of the data you want to protect/replicate and a “Journal” which keeps track of all the writes and changes to the data.  There is an exact copy of your protected data plus roughly 10 to 20% overhead for the Journal.  So 10TB of “protected” data would require around 11 to 12TB of additional storage on the DR array.
  • CLR is simply a combination of local CDP and remote CRR protection.  This provides the ultimate in protection and granularity at both sites, and requires additional storage at both sites for the CDP/CRR “Copy” and the Journal.
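To put those overhead figures side by side, here is a small Python calculation for the 10TB example used above.  The percentages come straight from the summaries in this post, and the RecoverPoint line assumes the CRR case (a full remote copy plus a 10 to 20% journal).

    # Worked example: additional storage needed to replicate 10TB under each
    # technology, using the overhead figures quoted above. RecoverPoint is shown
    # for the CRR case: a full remote copy plus a 10-20% journal.

    protected_tb = 10.0

    mirrorview_extra = (protected_tb * 0.10, protected_tb * 0.20)        # 10-20% overhead
    celerra_replicator_extra = protected_tb * 2.5                        # up to 250% overhead
    recoverpoint_crr_extra = (protected_tb * 1.10, protected_tb * 1.20)  # copy + 10-20% journal

    print(f"MirrorView:         +{mirrorview_extra[0]:.0f} to {mirrorview_extra[1]:.0f} TB on the production array")
    print(f"Celerra Replicator: up to +{celerra_replicator_extra:.0f} TB across production and DR")
    print(f"RecoverPoint (CRR): +{recoverpoint_crr_extra[0]:.0f} to {recoverpoint_crr_extra[1]:.0f} TB on the DR array")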

This is obviously a very simplified summary of replication and replication overhead.  The amount of additional storage required for any replicated solution will depend on the amount, type and change rate of the data being replicated.  There are also many things to consider around bandwidth between sites and the type of replication, sync or async, that you need.  EMC and qualified EMC Partners have tools to assess your environment and dial in exactly what is required for replication.

Flash Drives – Solid State Drive Technology for your SAN

February 8, 2009

Every once in a while at a client meeting, the discussion about Flash Drives, or Solid-State Drives (SSDs), comes up.  Flash drives are now an option in EMC’s Clariion arrays.  Usually the first question is “I heard they’re expensive, what do those things cost anyway?”  Well, the short answer is A LOT.  Retail from EMC is over $15,000 each!  Yeah… I’ll take a few trays just in case we need some extra storage space!

Obviously you wouldn’t purchase this type of storage to store your MP3 collection, but what are the benefits and use cases for these drives?  According to EMC, the new flash drive technology is for Tier Zero apps, or in other words, applications that require incredible amounts of disk I/O and performance.  Examples of this could be some SQL and Oracle production databases.

So why use flash drives?  Marketing numbers from EMC state that “up to 30x the IOPS” of a standard drive can be achieved.  Let’s be conservative and say 15x the IOPS for the purposes of this example.  If you had a database that generated, let’s say, 10,000 IOPS, you would need to stripe that data over at least 55 drives to achieve the performance needed (assuming 180 IOPS per drive).  With flash drives you could theoretically do it with as few as 4 or 5 drives, or even fewer.  Of course, with a maximum size of 73GB per drive, space might be an issue.
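For the curious, the spindle math in that example works out like this; a quick Python sketch using the figures from the paragraph above (180 IOPS per FC drive and a conservative 15x multiplier for flash).

    # Spindle-count arithmetic from the example above: a 10,000 IOPS workload on
    # 180 IOPS FC drives versus flash drives assumed (conservatively) to do 15x.
    import math

    workload_iops = 10_000
    fc_drive_iops = 180
    flash_multiplier = 15            # conservative; EMC markets "up to 30x"

    fc_drives_needed = math.ceil(workload_iops / fc_drive_iops)
    flash_drives_needed = math.ceil(workload_iops / (fc_drive_iops * flash_multiplier))

    print(f"FC drives needed:    {fc_drives_needed}")    # 56 spindles (the post rounds to 55)
    print(f"Flash drives needed: {flash_drives_needed}") # 4 drives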

Here is a brief summary of the benefits of these new drives:

  • More Performance:  At up to 30x the IOPS of a normal high-speed drive, you can reduce the number of spindles you need and increase performance.  Latency is also dramatically reduced.
  • Greener:  The drives use much less power because there are no moving parts, which translates into less heat, less cooling, a smaller carbon footprint and more financial savings.
  • Higher Reliability:  No moving parts means less to break, as well as much faster rebuild times.
  • Less Rack Space:  Using fewer drives frees up rack space and datacenter floor space.

The obvious drawback to these drives is cost.  Most companies with tight budgets will have a hard time justifying the astronomical cost per megabyte that comes along with this technology.  For some high-performance apps though, they may be a great solution.  Hopefully with time and competition the cost will come down and the usable space will go up.