SQL Server with Mr. Denny


May 27, 2010  11:00 AM

When using an EMC CX array don’t plan on changing the port that you use for MirrorView



Posted by: Denny Cherry
CLARiiON, EMC

If you are using an EMC CX storage array and you plan on using MirrorView to replicate data between two storage arrays, there are a few catches you need to keep in mind that aren’t all that obvious unless you read through literally hundreds of pages of documentation.

When you go to use MirrorView, the ports you will be using are already defined and cannot be changed without resetting the entire storage array.  The hardware that is installed when the array is first powered up determines which port you can use for MirrorView.

If you have only the base cards which come with the system, then FC0 will be the port used for MirrorView.  If you have an expansion fiber card installed in the system, then port FC4 (the port labeled 0 on the expansion card) is the port which will be used.  The catch here is that FC0 will be a 4 Gig port.  Currently, with the newest CX4 arrays you can’t change the cards the system ships with from 4 Gig to 8 Gig ports.  So if, over time, you find that you need more bandwidth between the arrays for MirrorView, pretty much your only option is to move your hosts off of the FC0 port.  If the only thing using FC0 is MirrorView and you still need more bandwidth, you’ll need to contact EMC support and work with them to come up with a solution short of resetting the array back to the same state you received it in.

The same applies to the iSCSI MirrorView port.  If you get the array with no expansion cards, the iSCSI MirrorView port will be port 0.  If you get the array with an iSCSI expansion card installed at first power up, then iSCSI port 3 (the port labeled 0 on the expansion card) will be your iSCSI MirrorView port.

Another catch that you may not be aware of: MirrorView is SP specific.  If the LUN you set up MirrorView replication on is hosted by SPA, it has to replicate to SPA on the remote array.  If your LUN trespasses to the other SP, the MirrorView replication will stop until the LUN moves back to the hosting SP.  If you need to permanently move the LUN to the other SP, you’ll need to remove MirrorView from the LUN and set it back up.

May 24, 2010  11:00 AM

How do I know if I should be using RAID 5 or not?



Posted by: Denny Cherry
Storage

One of the big questions out there is: how do I know if I should use RAID 5, RAID 10, or something else?

The answer is usually something abstract like “if you have a lot of writes then use RAID 10, otherwise use RAID 5.”  Well, I’ve finally gotten some numbers from someone.  These numbers are all unofficial and your mileage may vary.

On a typical RAID array (JBOD, DAS, etc.), if your disks will have a higher than 10% change rate, you’ll want to look at a RAID 10 array.  If you are using an EMC array (keep in mind I got these numbers from Dell/EMC), then you’ve got more leeway.  The recommended number to stay below on the EMC CX line of arrays is 30%.  So if your data change rate is less than 30% you should be OK on a RAID 5 array; higher than that and you’ll want to move to a RAID 10 array.

Now if you want the extra read performance that RAID 5 gives you, but you want more redundancy than RAID 5 provides, take a look at RAID 6.  Because you carry double parity it is only slightly more expensive per Gig, and the difference shrinks on larger RAID groups as there are more disks to spread the parity across.  While there is a little more overhead for RAID 6 over RAID 5, this additional overhead is typically only an extra 2%.
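To put some rough numbers on the cost difference (this is just back-of-the-envelope parity math, not an EMC-specific figure): on a 10 disk RAID group built from 1 TB drives, RAID 5 leaves you roughly 9 TB usable while RAID 6 leaves roughly 8 TB, so the second parity drive costs you about 11% of your usable space.  On a 15 disk group you only give up one drive out of 14 usable, or about 7%, which is why the per Gig penalty shrinks as the RAID group gets larger.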

Now obviously these numbers are for when the RAID array is running at full load.  The lower the load you are putting on the array, the higher the change percentage you can have and still safely use a RAID 5 (or RAID 6) array.

Denny


May 20, 2010  11:00 AM

When joining fiber switches together, you won’t get the bandwidth you think over that link cable.



Posted by: Denny Cherry
Storage

So you’re going along with your work and you need to add a new server to your fiber channel switches, but you don’t have any more ports left on the switch.  One option is to buy two new switches and link each pair of switches together (the first existing switch to one new switch, the second existing switch to the other new switch).  Logic tells you that you’ve got 4 Gig ports on the switch and 4 Gig cables, so if you plug one cable between the two switches you’ll get 4 Gigs of bandwidth between them, right?

Not so much.  When you ISL the switches together (that’s the fancy technical term for connecting two fiber channel switches into a single fabric) you only get 50% of the bandwidth through the cable.  So in order to get the full 4 Gigs of bandwidth you’ll need to string two cables between each pair of switches.  You’ll see this more clearly in the diagram below.

You’ll see in the diagram that Existing Switch 1 connects to both sides of the storage array, and it connects to New Switch 1 as well.  Existing Switch 2 also connects to both sides of the storage array as well as to New Switch 2.  You’ll also see that in the diagram you don’t connect Existing Switch 1 to New Switch 2, or vice versa.

Now if you have a server which will need more than 4 Gigs of bandwidth plugged into one of the new switches, you’ll want to connect more than two cables between the switches so that you can get the full 8 Gigs of possible bandwidth between the server and the storage array.
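To put numbers on that using the 50% rule of thumb above: each 4 Gig ISL gives you roughly 2 Gigs of usable bandwidth between the switches, so two cables between a pair of switches gets you to about 4 Gigs, and four cables gets you to about 8 Gigs.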

As with Ethernet switches, you can bond these inter-switch ports together into a single logical link via the built-in trunking options within the switch.  As each switch vendor has a different way to do this (and those methods vary depending on the OS running on the switch) I won’t go into the specifics here, but your vendor’s support desk should be able to help you set it up.

One last thing to keep in mind when ISLing these switches together is that some vendors charge a license fee to enable the ISL features, so keep that in mind when doing your planning.

Denny

P.S. This diagram is based on an EMC CX3 Storage Array, but it is perfectly valid for any dual head fiber channel storage array.


May 17, 2010  11:00 AM

Telecommuting is awesome, except when it sucks.



Posted by: Denny Cherry
Brent Ozar, Professional Development

Telecommuting is the holy grail of IT work. You save a ton of money on gas, and you don’t have to work in a cube farm. You can decorate however you see fit and wear whatever you’d like (or as little as you’d like) when you work.

However, everything isn’t all roses and puppy dogs… Continued »


May 15, 2010  12:59 AM

EMC World Day 4 (The final day)



Posted by: Denny Cherry
EMC World, EMC World 2010, Samuel Adams, Tech Ed, Tech Ed 2010

Well, yesterday was the final day of EMC World 2010. This was a great conference (see posts for Days -1, 0, 1, 2, 3).  On day 4 I didn’t really do any official sessions as my brain was simply too full for any more lecture information to be stuffed into it.  Instead I opted to sit in the bloggers lounge, get some work done, and talk with the other bloggers that were there.

We covered some great topics like how to try and stay impartial, and when trying to stay impartial just makes you look like a chump. Continued »


May 13, 2010  7:31 PM

And the winner is…



Posted by: Denny Cherry
Evals, SSWUG

I just received an email from Stephen, who runs the SSWUG Virtual Conference, letting me know that the Storage for the DBA session I gave at the recent Virtual Conference was voted “Best of Show”.

I’d just like to thank everyone who attended my session (especially those that submitted feedback, the feedback is always invaluable).

I would of course like to thank the people behind the Amazon Mechanical Turk, because without them my ballot box stuffing system wouldn’t have worked so well.  (Please, I’m kidding; can you imagine how much it would cost to get that many people to sign into the SSWUG vConference?)

Seriously though, I’m glad that so many people were able to attend the session, and that so many people liked the content.  For those that didn’t make it to the SSWUG vConference but are going to Connections in November, I’ll be giving a very similar presentation there (with some updated material, of course).

Denny


May 13, 2010  4:39 PM

EMC World Day 3



Posted by: Denny Cherry
Celerra, EMC World, EMC World 2010, Emulex, Exchange 2010, FAST, VMware, Xsigo

Yesterday was Day 3 of EMC World, and there were more great sessions packed full of technical information.  Yesterday was also the last day the exhibit hall was open, so it was Apple iPad giveaway day as well; sadly, I didn’t win one.

The first session that I want to recap here was the SAN meets NAS session that I attended.  One of the big takeaways from EMC World was the technology that EMC is putting into all of the mid-tier products.  This includes the EMC Celerra, which is EMC’s Network Attached Storage product.  The Celerra is basically an EMC CLARiiON with no fiber ports and a NAS connector on it, with a lite version of NaviSphere Manager running on it (unless you get the gateway-only version, in which case you present LUNs from another fiber channel storage platform).  What the FAST package lets you do is have the hardware automatically move less used data from expensive storage to cheaper, slower storage.  This allows you to keep the data online so that your users can access it, but the access times will be just a little bit slower.  Instead of having a 5ms response time it may have a 50ms response time, but just for the files which are older and haven’t been touched in a while. Continued »


May 12, 2010  8:16 PM

EMC World Day 2 (2010)



Posted by: Denny Cherry
Bob Ward, Bruce Zimmerman, CLARiiON, EMC World, EMC World 2010, VMware

So yesterday was Day 2 of EMC World, and my body is really starting to feel it.  All of the sessions were top notch.

The first session yesterday was by Bruce Zimmerman.  For those of you in the SQL Server community reading this, Bruce is to the EMC storage community what Bob Ward is to the SQL Server community.  Bruce talks on EMC CLARiiON performance tuning every year at the 400-500 level.  Here are some of the highlights.

When using NaviSphere Analyzer to monitor the utilization of your array you may not be getting the correct picture.  Metrics such as utilization are not true measurements but calculations.  In the case of the utilization metric, Analyzer simply looks at the load that each Storage Processor (SP) is putting on the RAID Group and reports the higher of the two numbers.  If you have a single LUN on the RAID Group, or all the LUNs on the RAID Group are owned by a single SP, this isn’t an issue, as the other number will be 0.  But if the LUNs on a single RAID Group are owned by both SPs and each SP is running the RAID Group at 40%, Analyzer will show a 40% load instead of the true 80% load on the RAID Group.

If you need to dump Analyzer data to a CSV file via the naviseccli command, use the -archivedump switch.  (Someone asked me about this via twitter a while back, which is why I made sure to include it.)
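As a rough sketch of the syntax (double check the naviseccli reference for your FLARE release, as the exact options can vary), dumping a NAR archive that you’ve already pulled from the array out to CSV looks something like this:

naviseccli analyzer -archivedump -data array_data.nar -out array_data.csv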

If you monitor the performance of your Storage Processors you may see the CPU spike to 100% at regular intervals.  This interval will correspond to the data logging interval that you have set within NaviSphere Manager.  While this CPU spike may worry you, unless your normal CPU load on the Storage Processor is very high it will not affect your performance throughput.  If you are concerned that it is affecting throughput through the storage processors, try disabling the data collection for a period of time in the SP properties.

If you look at the NaviSphere properties for the array you’ll see two settings for data logging: one for the background process, and one for live data capture.  If these settings are different, the data logging happens at the lower of the two intervals.  Most people should set both of these options to 300 seconds unless you need to capture data more frequently than that for a specific reason.

One improvement in FLARE release 29 that you’ll notice is that the load placed on the storage processor by the data logging process has been reduced by about 80%, which is a huge savings.  You’ll also notice that with release 29, when doing a non-disruptive update (NDU) the CPU on each storage processor only has to be below 65%; in older versions the CPU load had to be below 50%.  This change was made because the background management processes the array runs can account for about 7-8% of CPU (per SP), and these processes don’t fail over.

Another naviseccli trick is to include the -np flag on your commands.  This tells naviseccli not to poll the array for response information.  If you need information back from the array when you run a command, you won’t want to include this flag.  For example, if you create a LUN, have the array assign the LUN id, and want to do something with that LUN id later in the script, you’ll need to exclude the -np switch.  However, if you specify the LUN id yourself and don’t care about the feedback, including the -np flag will save the Storage Processor quite a bit of work, as the CLI requests a good deal of information from the Storage Processor for each CLI command that is issued.
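As a rough sketch (the bind parameters here are purely illustrative; check the CLI reference for your FLARE release before running anything), binding a LUN with an explicit LUN id while skipping the response polling would look something like this:

naviseccli -h sp_a_address -np bind r5 57 -rg 10 -cap 100 -sq gb

Here the LUN id (57) is specified up front, so there is no feedback from the array that the script needs, and -np keeps naviseccli from pulling all of that extra state back from the Storage Processor.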

I also gathered a lot of information about VMware in other sessions yesterday.

I’m not sure if this was supposed to be released, but the next release of vSphere (aka ESX) will be in Q3 of 2010 and will be vSphere 4.1.  This next release has a lot of enhancements to ease administration and improve integration between ESX and the EMC storage arrays.  You can assume that all of these integrations between vSphere and EMC CLARiiON arrays will require FLARE release 30, which should also be coming out in Q3 2010.

The first improvement is the vStorage APIs.  This is a set of APIs within vSphere 4.1 and the EMC arrays that allows the vCenter server, or the vSphere server itself (if not running with a vCenter server), to talk to the array directly and perform some actions.

These actions include Bulk Zero Acceleration.  This allows the vSphere host, when creating a new file, to tell the array to fill the file with 0s instead of having to transmit all those zeros to the array over fibre or iSCSI.  This is done by the vSphere host writing a single block of all 0s to the array, then telling the array to replicate that block n number of times.  While this won’t reduce the amount of data that the array has to write, it will reduce your network traffic and because of this may save time.  By default this feature will be enabled in vSphere 4.1, but it can be disabled in the advanced settings page of the host.

Another feature is a set of hardware locking changes.  Currently, when vSphere needs to take a lock on a LUN it locks the entire volume, performs its operation, then releases the lock on the LUN.  In vSphere 4.1 it will be able to lock just the specific block on the disk that it wants to work with, then release just that block.  This will allow multiple hosts to take locks on the same LUN at the same time without having to wait in line to complete the operation.  There are a few places where this benefit will be seen, including boot storms (where you’ve got lots of machines booting at the exact same time), and it will allow for more snapshotting to take place (as when each snapshot is created a lock has to be taken on the LUN when the new file is created).  By default this feature will be enabled in vSphere 4.1, but it can be disabled in the advanced settings page of the host.

The next feature is called Full Copy Acceleration.  This is a great feature which will reduce the amount of traffic between the array and the host when cloning a virtual machine.  Today, when you clone a file, the file is copied up from the array to the host, then written from the host back to the array in the new location.  With this feature enabled (which it is by default) the API will simply tell the array to copy the blocks which make up the file from one location to another, preventing the entire file from being transferred from the array up to the host.  If the network between the array and the host is bandwidth limited, this will reduce the time it takes to clone the virtual machine.
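For what it’s worth, all three of these offloads are supposed to be toggleable per host from the advanced settings.  Assuming the setting names don’t change before vSphere 4.1 ships (these names are my best guess from the pre-release material, so treat them as assumptions and verify against the final build), turning them off from the service console would look something like this:

# 0 disables the offload, 1 re-enables it
esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedInit      # Bulk Zero Acceleration
esxcfg-advcfg -s 0 /VMFS3/HardwareAcceleratedLocking       # hardware assisted locking
esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedMove      # Full Copy Acceleration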

Of the new VMware features which require array integration, there is only one which doesn’t require FLARE 30, and that is the Stop and Resume feature, which requires FLARE 29 on the array.  This feature cleans up the way that guest OSs handle a thin provisioning pool running out of space, when the LUN can’t consume any additional space.  Prior to vSphere 4.1 (also known as today), if a thin LUN can’t be expanded as needed on the array because there isn’t any space, the guest OS (within Windows at least) will throw a blue screen of death (BSOD) because the page it’s requesting to write to isn’t available.  In vSphere 4.1 an error message will be thrown as a popup within the guest OS which effectively says that there was a problem writing to the disks.

Something else coming in Q2 of 2010 (so probably within the next six weeks or so) is the CLARiiON Provisioning Plugin for vSphere.  This will let you provision a new LUN on the storage array and attach it to the VMware cluster from a single screen, which should greatly decrease the amount of time required to provision and attach storage from the array to the server.

I’m curious to see how long it takes other storage vendors to get these APIs working on their arrays (with or without VMware’s assistance).

Check back tomorrow for my Day 3 post.

Denny


May 11, 2010  9:02 PM

EMC World Day 1



Posted by: Denny Cherry
EMC World, EMC World 2010

So yesterday was day 1 of EMC World.  I attended some great sessions (and one not so great one).

The first session that I hit was on the futures of the EMC CLARiiON’s FLARE software.  For those that don’t know, FLARE is the software which runs the array and handles all of its functions.  The next release will be FLARE v30.  If you are a CX3 or older customer, this new release will be of no use to you, as this version only supports the CX4 arrays.

Some of the new features being included are headlined by a totally new management interface called Unisphere.  Unisphere will give you a single interface for your EMC CX arrays as well as your Celerra devices and RecoverPoint.  Eventually other EMC products will be integrated into Unisphere, with products such as Replication Manager hopefully coming in 2011. Continued »


May 10, 2010  2:26 PM

EMC World Day 0



Posted by: Denny Cherry
EMC World, EMC World 2010

Yesterday was Day 0 of EMC World, which means that it’s party day.  The day started with registration and the welcome reception.  If you’ve never been to EMC World, registration is probably the longest line in the place, with all 10,000 or so attendees trying to get checked in.  Fortunately for me I’m a returning attendee, so my line was much shorter than the general registration line (thank god).

After the welcome reception was the concert featuring the Counting Crows.  The Counting Crows put on a pretty good show, but I’d have to say that the Barenaked Ladies are still my favorite concert at EMC World so far.

I took a bunch of pictures at the party and registration which I’ve posted to Flickr.

Probably my favorite picture of Day 0 is this one of me with the walking Celerra.

I’ll try and post daily about everything that I’ve seen throughout the sessions.

Denny

