Archive for July, 2010

Using Storage Pools and F.A.S.T on EMC Arrays

July 25, 2010

Traditionally, design on EMC SAN and NAS arrays was based on raid groups (RG) and LUNs.  Overall the process was pretty straightforward.  First you create a raid group, such as a raid 5 group with 5 drives.  I/O is usually one of the big concerns: the I/O generated by an application would dictate the number of drives that needed to be in the raid group, the type or speed of those drives, and the raid configuration, such as raid 5 or raid 10.  Once we had all of that worked out, it was simply a matter of “carving up” the storage by creating a LUN or LUNs on the raid group.  Present the LUNs to your host and you’re off to the races.  As simple as this whole process may sound, it’s about to get even simpler for EMC arrays.
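
To make that sizing step concrete, here’s a rough back-of-the-envelope sketch in Python. The per-drive IOPS figures and the helper function are my own ballpark planning assumptions, not EMC numbers; the only “real” math is the standard raid write-penalty rule of thumb (raid 5 = 4 back-end I/Os per write, raid 10 = 2).

```python
# Hypothetical sizing sketch (not an EMC tool): estimate how many drives a
# raid group needs to absorb an application's I/O load.
# Per-drive IOPS values are common rule-of-thumb planning numbers.

import math

DRIVE_IOPS = {"15k_fc": 180, "10k_fc": 140, "sata": 80}
WRITE_PENALTY = {"raid5": 4, "raid10": 2}   # back-end I/Os per host write

def drives_needed(read_iops, write_iops, raid_type, drive_type):
    """Return the number of spindles needed to serve the back-end I/O load."""
    backend_iops = read_iops + write_iops * WRITE_PENALTY[raid_type]
    return math.ceil(backend_iops / DRIVE_IOPS[drive_type])

# Example: 1000 host IOPS at a 70/30 read/write mix on raid 5 with 15k FC drives.
print(drives_needed(read_iops=700, write_iops=300,
                    raid_type="raid5", drive_type="15k_fc"))
# -> 11 spindles' worth of back-end I/O; round up to a supported raid 5 group size.
```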

With the soon-to-be-released Unisphere management console, the FAST suite, and new OS code from EMC, storage design should get even easier.  The EMC FAST (Fully Automated Storage Tiering) suite has several cool features, but the one I want to focus on is the ability to automatically “move” data around at a sub-LUN level.  Combined with Storage Pools, this will not only make the design easier but should also boost performance.  To illustrate the concept, let’s assume we have a database application.  The database has several drives, all with varying performance requirements.  Parts of the database require very high performance and generate large I/O, while other drives are pretty stagnant and don’t require much I/O at all.  Under the old way of doing things we might have LUNs on dedicated raid groups for certain parts of the database app.  For example, maybe we would put the high-I/O portions of the app on Fibre Channel drives and the slower portions on SATA drives.  Using the new storage pool concept we might have FC, SATA and EFD (Flash) drives all in the same storage pool.  We would create all of our LUNs inside this storage pool and let FAST figure out what to do with them.  The cool thing is that this is done at a sub-LUN level.  If performance requirements suddenly go up, the parts of the LUN that are “hot” may be moved over to the EFDs to handle the increase in I/O, while the “cooler” parts of the LUN remain on cheaper SATA drives.
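
Just to illustrate the idea, here’s a purely conceptual sketch. This is not how FAST is implemented or configured; the slice granularity, tier capacities and function names are all made up for the example. The point is simply that a LUN is tracked as slices, and the busiest slices get promoted to the fastest tier with room left while the cold slices stay on SATA.

```python
# Conceptual illustration only -- not EMC FAST itself.
# A LUN is modeled as numbered slices; slices are ranked by observed I/O and
# placed on the fastest tier that still has free capacity.

TIERS = ["efd", "fc", "sata"]                     # fastest to slowest
TIER_CAPACITY_SLICES = {"efd": 2, "fc": 4, "sata": 100}   # assumed capacities

def place_slices(slice_io_counts):
    """Map each LUN slice to a tier: hottest slices land on EFD, coldest on SATA."""
    placement = {}
    ranked = sorted(slice_io_counts, key=slice_io_counts.get, reverse=True)
    free = dict(TIER_CAPACITY_SLICES)
    for slice_id in ranked:
        for tier in TIERS:                        # take the fastest tier with room left
            if free[tier] > 0:
                placement[slice_id] = tier
                free[tier] -= 1
                break
    return placement

# Example: eight slices of one LUN with a very uneven I/O profile.
io_per_slice = {0: 5000, 1: 4200, 2: 900, 3: 850, 4: 40, 5: 30, 6: 10, 7: 5}
print(place_slices(io_per_slice))
# -> slices 0 and 1 go to EFD, slices 2-5 to FC, the cold remainder stays on SATA.
```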

As cool as this feature may be, you still have to plan for I/O, but it’s more about planning for the total I/O of the environment.  In some cases, such as critical databases where you still want dedicated disks to guarantee a certain level of performance, I still see the old raid group approach being used.  The good news is you can use both raid groups and storage pools on the same array, so you don’t really lose anything from a functionality standpoint.  For many of our clients with medium-sized environments I see this storage pool approach being very beneficial.  The ability to take an array with, say, 60 SATA, FC and EFD drives, put all 60 drives in one storage pool, and create all your LUNs on that one pool really simplifies things.  All of this functionality should be available very soon, with the GA releases of Unisphere and FAST right around the corner.
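
For that kind of total-I/O planning, a quick back-of-the-envelope estimate is usually enough. The per-drive IOPS figures and the 60-drive mix below are just illustrative assumptions, not a recommendation.

```python
# Rough aggregate-capacity estimate for a mixed pool (assumed ballpark values).
DRIVE_IOPS = {"efd": 2500, "fc_15k": 180, "sata": 80}

pool = {"efd": 5, "fc_15k": 30, "sata": 25}       # hypothetical 60-drive mix

total = sum(count * DRIVE_IOPS[drive] for drive, count in pool.items())
print(f"~{total} aggregate read IOPS across the pool")   # ~19900 in this example
```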

How-To: Cisco C-Series Rack Server Configuration in Netformix

July 11, 2010

For those of you not on the vendor/reseller side of things this may not be very interesting, unless you’re just curious how Cisco UCS configurations are built.  Netformix is a tool that allows us to build these configurations and output a bill of materials that we can then place orders from.  I put together this short video that walks through the process and shows some of the available options on the new C-Series rack-mount servers.