Archive for May, 2009

EMC Source One – An Overview

May 31, 2009

EMC has recently released SourceOne, the replacement for its EmailXtender product.  More and more companies are faced with e-mail archiving challenges due to compliance requirements, legal discovery, and the sheer explosive growth of e-mail data.  SourceOne solves many of these problems and integrates right into most corporate e-mail solutions. 

There are 4 key areas that SourceOne can address:  Compliance, Legal & Archiving, eDiscovery, and Supervision.

Compliance:

–          The core SourceOne product provides capturing, indexing and retention for compliance purposes.

–          EMC DiskXtender can be used in conjunction with SourceOne to provide advanced archiving features such as archiving to Centera or other WORM storage.  It can also provide tiering of your storage.

–          Works with most e-mail clients such as Outlook.

–          Works with most mobile clients such as BlackBerry and iPhone

Legal and Archiving:

–          Reduces storage requirements on e-mail servers

–          Improves backup and recovery

–          Provides for the management of PST files.  This includes:

  • Discovery
  • Deletion
  • Moving
  • Ingesting the data
  • Copying to the archive while maintaining the PST structure

–          Indexing of messages and attachments for advanced archiving

Supervision (EmailXaminer):

–          By using the EmailXaminer add-on to SourceOne, you additionally get:

  • Proactive monitoring and supervision
  • Policy based keyword sampling
  • Full auditing and reporting

eDiscovery:

–          Finally, if you need a solution for eDiscovery in your environment, the Discovery Manager add-on will allow you to:

  • Search user e-mails
  • Provide for legal retention
  • Maintain chain of custody
  • Manage matter lifecycles

Overall SourceOne provides a comprehensive e-mail management and archive solution.  Here is a diagram that lays out the process of how SourceOne works:

SourceOne process


I Got The Power!

May 25, 2009

Something that is often easy to overlook when putting new toys in your datacenter is power, UPS-protected power in particular.  In most large datacenters this is usually handled by large central UPS systems that power the entire facility.  In smaller datacenters or data closets, a much smaller rack-mount or floor-mount UPS is usually sufficient to power a rack or two of equipment.  We are partnered with, and recommend, APC by Schneider Electric in most cases due to their quality and reliability.

OK, so you know you need a UPS, but which one?  There are lots of different models with all kinds of ratings like “VA” and “kVA”, all kinds of plug options, and other configuration choices.  If you’re like most IT pros you probably understand this a little, but you are far from an electrical engineer.  Fortunately there is a handy tool from APC that can simplify things.  The APC UPS Selector tool allows you to add all kinds of devices from an established product list and then spits out a power report and recommends products for your specific application.  The tool can be found here:  http://www.apcc.com/tools/ups_selector/

Once you’re connected, everything is pretty straightforward: you can configure your system “by Devices” or “by Load” (if you know it).  The main page looks like this:

Main Page

If you select “Configure by Devices” you will get a relatively user-friendly page like this:

Configure by Devices page

From here you can pick and choose the devices in any combination that you may have in your rack (switches, storage, servers, etc.) and generate a load report.

If you prefer the more techy “Configure by Load” approach, you can simply enter the known load of your equipment and you’re good to go.

Configure by Load page

As you work through the different pages there are lots of options, like allowing for a specific growth percentage and desired run-times at load.  Either way you do it, this can be a valuable tool if you are trying to figure out how much UPS to buy for a given amount of equipment or for a given run-time at a specific load.  Check it out.
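If you just want a rough sanity check before (or after) running the tool, the basic math is simple.  Here is a minimal sketch in Python; the device wattages, power factor, growth percentage and battery capacity are made-up illustrative numbers, not APC figures, so treat the output as a ballpark estimate only.

```python
# Rough back-of-the-envelope UPS sizing. The APC UPS Selector does this (and
# much more) for you; this just shows the basic arithmetic behind it.

devices_watts = {              # hypothetical rack contents and nameplate loads
    "2 x 1U servers": 2 * 350,
    "SAN shelf": 450,
    "48-port switch": 120,
}

power_factor = 0.9             # assumed power factor for modern server PSUs
growth = 0.20                  # plan for 20% growth, like the tool's growth option

load_watts = sum(devices_watts.values()) * (1 + growth)
load_va = load_watts / power_factor

# Very rough runtime estimate: usable battery energy (watt-hours) / load (watts).
battery_wh = 1000              # hypothetical UPS battery capacity
runtime_minutes = battery_wh / load_watts * 60

print(f"Load: {load_watts:.0f} W / {load_va:.0f} VA, "
      f"~{runtime_minutes:.0f} min runtime on a {battery_wh} Wh battery")
```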

Memory Configuration in the new HP G6

May 17, 2009

I had a conference call with our guys from HP this past week about the optimal memory configuration for the new HP G6 servers.  The architecture of the G6 is much different from the G5 which really changes the way memory is configured in the new servers.

In the G5 servers everything was configured in pairs and memory was shared between processors if you had more than one.  So if you wanted 32GB of RAM in your server you would have 8 x 4GB DIMMs.  The new G6 is configured in sets of 3 DIMMs and memory is dedicated per processor if you have more than one.  In the case of a VMware ESX server we would usually recommend a DL380 with 32GB of RAM.  For a G5 the memory would be configured in the 8 x 4GB DIMM configuration mentioned above.  With the new G6 architecture you could still use this configuration, but the performance would be significantly affected.  The optimal configuration would be:

Processor 1:    3 x 4GB DIMMs + 3 x 2GB DIMMs (6 memory slots total) = 18GB RAM

Processor 2:    3 x 4GB DIMMs + 3 x 2GB DIMMs (6 memory slots total) = 18GB RAM

Total RAM = 36GB
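To make the math explicit, here is a quick sketch of the arithmetic behind that configuration, assuming the six DIMMs on each processor are spread evenly across its three memory channels (one 4GB and one 2GB RDIMM per channel):

```python
# Per-processor DIMM layout for the 36GB DL380 G6 example above: each Xeon 5500
# has three memory channels, and each channel gets one 4GB and one 2GB RDIMM.
channels_per_cpu = 3
dimms_per_channel_gb = [4, 2]   # GB, one DIMM of each size per channel

per_cpu_gb = channels_per_cpu * sum(dimms_per_channel_gb)   # 3 * 6 = 18GB
total_gb = 2 * per_cpu_gb                                    # two processors = 36GB

print(per_cpu_gb, total_gb)   # 18 36
```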

The new DDR3 memory chips come in two flavors, Unbuffered (UDIMM) and Registered (RDIMM); the configuration above uses registered memory, or RDIMMs.  See details under “DDR-3 memory technology” below.

 Here is some more detailed info that I pulled from the HP website.

 Integrated Memory Controller

One of the biggest improvements in Intel Xeon 5500 series processors is the integrated memory controller. The memory controller uses three channels (up to 1333-MHz each) to access dedicated DDR-3 memory sockets. This delivers a big performance improvement over previous architectures that provide only two memory channels and require processors to share a single pool of system memory. The three memory channels have a total bandwidth of 32 GB/s.
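As a quick sanity check of that 32 GB/s figure: each DDR-3 channel is 8 bytes (64 bits) wide, so the math works out roughly as follows.

```python
# Where the ~32 GB/s figure comes from: each DDR3 channel moves 8 bytes per
# transfer, so at 1333 MT/s one channel is about 10.7 GB/s and three channels
# together are about 32 GB/s.
transfers_per_sec = 1333e6
bytes_per_transfer = 8
channels = 3

per_channel_gb_s = transfers_per_sec * bytes_per_transfer / 1e9   # ~10.7 GB/s
total_gb_s = per_channel_gb_s * channels                          # ~32 GB/s
print(round(per_channel_gb_s, 1), round(total_gb_s, 1))
```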

Each Intel Xeon 5500 series processor has a three-level cache hierarchy that consists of an on-core 64-KB L1 cache, a separate 256-KB L2 cache for each core, and a new inclusive, shared L3 cache of up to 8 MB. The L3 cache duplicates the data stored in the L1 and L2 caches of each core. This data duplication eliminates unnecessary searches, or snoops, to those caches and minimizes latency. Additional data tracking technology in the L3 cache ensures inter-core cache coherency. If one processor needs to access the cache or DDR-3 memory of the other processor, it uses the high-speed QPI (QuickPath Interconnect) link between the two processors.

 DDR-3 memory technology

ProLiant G6 servers based on the Intel Xeon 5500 series processor support DDR-3 memory technology: DDR3-800, DDR3-1066, or DDR3-1333. DDR-3 dual in-line memory modules (DIMMs) provide the same reliability, availability, and serviceability as DDR-2 DIMMs; however, DDR-3 DIMMs use less power, have lower latency, and deliver higher bandwidth. DDR-3 DIMMs operate at 1.5V, compared to 1.8V for DDR-2 DIMMs. This translates into more than 25% in power savings comparing the fastest DDR-2 DIMM (DDR2-800) to the slowest DDR-3 DIMM (DDR3-800). The power savings increase to almost 35% comparing the most commonly used DIMMs, DDR2-667 and DDR3-1066.

It’s important to note that there are two types of DDR-3 DIMMs, registered (RDIMMs) and unbuffered (UDIMMs), and they cannot be used together in a system. ProLiant G6 servers support up to three RDIMMs per channel or up to two UDIMMs per channel. RDIMMs have larger capacity (up to 8 GB each) than UDIMMs (up to 2 GB each). Higher-end ProLiant G6 servers support up to 18 DIMM sockets. In these servers, RDIMMs enable total memory capacity of up to 144 GB, compared to 24 GB for UDIMMs. This makes RDIMMs the ideal choice for virtualization, while UDIMMs provide cost and power savings for less memory-intensive applications.

The memory channels can operate at up to 1333 MHz, but the actual speed depends on the number and type of DIMMs populating the slots. For example, in a fully populated system using DDR3-1333 DIMMs, the memory bus speed drops to 800 MHz to maintain signal integrity. Therefore, the type of workload dictates the optimum number and type of DIMMs to use. Memory capacity may be of primary importance in virtualization environments, while memory channel speed may be more critical for high-performance computing applications.
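As an illustration only, the commonly cited rule of thumb for Xeon 5500 platforms is 1333 MHz with one DIMM per channel, 1066 MHz with two, and 800 MHz with three; the exact speed depends on the processor model and DIMM type, so confirm with HP’s memory configuration tool rather than relying on this sketch.

```python
# Rule-of-thumb memory bus speed vs. DIMMs per channel on Xeon 5500 platforms.
# Illustrative values only -- the actual speed depends on processor model and
# DIMM type; use HP's ProLiant Memory Configuration Tool for the real numbers.
speed_by_dimms_per_channel = {1: 1333, 2: 1066, 3: 800}   # MHz

def bus_speed_mhz(dimms_per_channel: int) -> int:
    return speed_by_dimms_per_channel[dimms_per_channel]

print(bus_speed_mhz(2))   # the 36GB example above populates 2 DIMMs per channel
print(bus_speed_mhz(3))   # fully populated drops to 800 MHz, as noted above
```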

Because there are several memory options, HP simplifies memory selection by providing two helpful resources. First, the on-line ProLiant Memory Configuration Tool (www.hp.com/go/ddr3memory-configurator) will walk you through the steps to configure your server’s memory and provide an orderable parts list. Second, the DDR-3 Memory for Dummies booklet provides information and tips about populating the system memory of ProLiant G6 servers.

 Here are some links that may help if you want more details:

 ProLiant G6 Technology Overview (whitepaper)

 Memory Technology Evolution: An Overview of System Memory Technologies (whitepaper)

HP ProLiant G6: A Backstage Tour (24 minute video)

Why VMware ESX/vSphere Over Other Virtualization Platforms?

May 10, 2009

This week we had a client meeting where I was asked specifically, “Why should we go with VMware over Citrix or Hyper-V for server virtualization?”  It forced me to really sit down and map out what I see as the advantages of using VMware ESX/vSphere as an enterprise virtualization solution.  Let me just preface this by saying that we are partners/resellers of both Citrix and VMware, and both companies make excellent products.  I’ve used and tested Citrix XenServer; the product worked great, was easy to use, and, let’s face it, they have a very strong pricing model.  Price aside, if I were going to run my production systems in a virtual environment today, they would be installed on VMware ESX/vSphere.  So, in my OPINION, here are some of the advantages I see in the VMware product.

–          True Enterprise Solution

  • Tested, Proven, very stable
  • 100% of Fortune 100 companies use it.  85% of them use it in Production
  • Huge investment in R&D
  • Capable of running the most I/O intensive apps

–          Higher Consolidation Ratios

  • Handles memory and resources better
    • Memory ballooning and memory over-allocation
  • Supports the largest number of guest operating systems of any virtualization platform

–          Ease of Use

  • vCenter – Single pane of glass, simple and intuitive
  • Very configurable and customizable
  • Excellent alerts and reporting

–          Dynamic Resource Scheduling (DRS)

  • Active and automated load balancing using vMotion
  • Currently not available with other virtualization platforms

–          vMotion and Storage vMotion

  • vMotion-type functionality is available with other products, but with VMware it is virtually flawless and has been perfected over several versions of the product.
  • Storage vMotion to move VM’s live across storage platforms with no downtime

–          Backup / VCB

  • Better backup solution(s)
  • VCB integrates with most 3rd party backup solutions
  • Data Recovery in vSphere (new) – eliminates need for 3rd party backup software
    • De-Dupe enabled

–          Fault Tolerance (new in vSphere)

  • Real-time, zero data loss clustering solution
    • Enabled with a single checkbox

–          Thin Provisioning

  • Built in to vSphere; enables you to eliminate over-provisioning of storage resources

–          vNetwork Distributed Switch

  • Centralized/Simplified management of networking
  • Cisco Nexus 1000V option – manage virtual switches like other Cisco switches
    • Excellent if your company has a Storage Team and a Network Team

–          Site Recovery Manager – automated disaster recovery

  • The only solution like it on the market; implemented as a plug-in to vCenter
  • Simplifies DR testing, documentation and execution once replicated SANs are implemented for DR/BC

–          VDI Ready

  • Tight integration with VMware View VDI solution

–          3rd Party Tools

  • Currently more 3rd party tools available than for other platforms

EMC Replication Technologies – Simplified Guide

May 3, 2009

From a DR standpoint, using replicated SANs to provide a fully functional DR site is nothing new, but it was usually a fairly expensive endeavor, especially for smaller organizations.  Now, as SAN prices have come down and technologies like VMware Site Recovery Manager are making replicating data for DR more automated and reliable, more and more organizations are taking a serious look at replicated storage solutions.

Since we are an EMC partner, I’ll concentrate on some of the things to think about when considering a replicated SAN infrastructure.  EMC has many ways to replicate SAN data.  Here is a short list of some of the technologies and where they’re used:

–          MirrorView:                      

  • MirrorView is used on Clariion and Celerra arrays and comes in two flavors, /A – Async and /S – Sync.

–          Celerra Replicator:         

  • Used to replicate data between Celerra unified storage platforms.

–          RecoverPoint:                  

  • Appliance-based Continuous Data Protection (CDP) solution that also provides replication between EMC arrays.  This is commonly referred to as a “Tivo for the Datacenter” and is the latest and greatest technology when it comes to data protection and replication.

Each of these replication technologies replicates LUNs between arrays, but they have different overhead requirements that you should consider.

MirrorView

–          MirrorView requires 10 to 20% overhead to operate.  So if you have 10TB of data to replicate, you are going to need an additional 1 to 2TB of storage space on your production array.

Celerra Replicator

–          Celerra Replicator can require up to 250% overhead.  This number can vary depending on what you are replicating and how you plan to do it.  This means that 10TB of replicated data could require an additional 25TB of disk space between the Production and DR arrays.

RecoverPoint

–          While RecoverPoint is certainly used as a replication technology, it provides much more than that.  The ability to roll back to any point in time (similar to a Tivo) provides the ultimate granularity for DR.  This is accomplished via a write splitter that is built into Clariion arrays.  RecoverPoint is also available for non-EMC arrays.

–          RecoverPoint can be deployed in 3 different configurations: CDP (local only/not replicated), CRR (remote), and CLR (local and remote).

  • CRR replicates data to your DR site where your “Tivo” capability resides.  You essentially have an exact copy of the data you want to protect/replicate and a “Journal” which keeps track of all the writes and changes to the data.  There is an exact copy of your protected data plus roughly 10 to 20% overhead for the Journal.  So 10TB of “protected” data would require around 11 to 12TB of additional storage on the DR array.
  • CLR is simply a combination of local CDP protection and remote CRR protection.  This provides the ultimate in protection and granularity at both sites and would require additional storage at both sites for the CDP/CRR “Copy” and the Journal.

This is obviously a very simplified summary of replication and replication overhead.  The amount of additional storage required for any replicated solution will depend on the amount, type and change rate of the data being replicated.  There are also many things to consider around bandwidth between sites and the type of replication, Sync or Async, that you need.  EMC and qualified EMC Partners have tools to assess your environment and dial in exactly what is required for replication.
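To put the rough overhead figures above into numbers, here is a minimal sketch; the percentages are just the ballpark values mentioned in this post, and real sizing should come from EMC’s assessment tools.

```python
# Rough extra-capacity estimates based on the ballpark overhead figures above.
# Real sizing depends on data type, change rate, and journal retention --
# use EMC's assessment tools for anything beyond a sanity check.

def extra_storage_tb(protected_tb: float, overhead_pct: float) -> float:
    """Additional capacity needed beyond the protected data itself."""
    return protected_tb * overhead_pct / 100.0

protected = 10.0   # TB of data to replicate, as in the examples above

print("MirrorView:         %.1f - %.1f TB extra on the production array"
      % (extra_storage_tb(protected, 10), extra_storage_tb(protected, 20)))
print("Celerra Replicator: up to %.1f TB extra across both arrays"
      % extra_storage_tb(protected, 250))
# RecoverPoint CRR keeps a full copy at the DR site plus a 10-20% journal:
print("RecoverPoint CRR:   %.1f - %.1f TB on the DR array"
      % (protected + extra_storage_tb(protected, 10),
         protected + extra_storage_tb(protected, 20)))
```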