SQL Server with Mr. Denny


March 25, 2011  3:41 AM

Join me Friday morning as I talk SQL Server and Clustering with Keith Combs


Friday morning, March 25th, 2011, I’ll be joining Keith Combs on the “Talk TechNet” webcast at 9:00am.  You should register now (sorry about the late notice; I thought I had posted about this already).

Denny

March 24, 2011  2:00 PM

What is faster, SAN or DAS (local disk), and most importantly, why?


So here’s an answer to the great myth of SAN: it isn’t always going to be faster than local disk (or DAS, JBOD, etc.).  The reason for this is actually pretty straightforward.

Local disk is cheap.  That’s the reason in a nutshell.  Let me see if I can explain in a little more detail.

Because local disk is so cheap (a 300 Gig SAS drive from Dell runs about $469), we can easily throw a lot of them at the problem, getting really, really fast storage really, really cheaply (at least compared to a SAN).  Throwing 10 or 20 disks at a server is only about $4,700 or $9,400 respectively, which in the grand scheme of things isn’t all that much.

Those same 300 Gig disks in an EMC array, as an example, will retail for roughly $2,500 each (about $25,000 for 10 or $50,000 for 20).  So why would I purchase SAN storage instead of buying a ton of local disks?  The local disk is faster and cheaper, so where is the benefit?

The benefit from the SAN storage comes in a few places.

1. Centralized Administration

2. Better use of resources

3. Lower power utilization

4. Lower TCO

Let’s look at each of these one at a time (and yes, there is some overlap).

Centralized Administration

Instead of having to get bad-disk alerts from every server the company owns, I get them all from one place.  Instead of having to connect to each server to manage its storage configuration, I have a single point from which I can do this.

Better use of resources

When using local disk, I have to purchase storage for each server as a one-off purchase.  If a server I bought six months ago doesn’t need anywhere near the amount of storage that I purchased for it, I’m stuck.  That storage will sit there eating power and doing nothing while I go out and purchase new storage for my next server.

When using a storage array, each server has only what it needs.  If a server needs more storage later, that storage can be easily assigned.  If a server has more storage than it needs, you can shrink the LUN (only a couple of vendors can do this so far; the rest will catch up eventually) and that storage can easily be reallocated to another server in the data center.  If a server needs faster storage, or is sitting on storage that is faster than it needs and that faster storage could be better utilized somewhere else, these changes can be made on the array with no chance of data loss and no impact to the system.

Lower Power Utilization

This goes back to the better use of resources point above.  When you have shelves of disks sitting around doing nothing, or next to nothing, those disks still need to be powered.  Power costs money, which affects the bottom line.  When you can re-utilize the disks, the overall power costs are lower, especially when multiple servers are all sharing the spindles.

Lower TCO

This goes back to the power utilization point above.  When you are using more power, you are generating more heat, and the more heat you generate, the more cooling you need to keep everything up and running.  Along with this, and tied into the better use of resources, when you need 50 Gigs of storage you allocate 50 Gigs of storage, and when you need 1 TB you allocate 1 TB, no more and no less.  So while you have to purchase a bit more capacity up front (which is always recommended so that you can get the best possible prices), you only allocate the storage that each server actually needs.  If you do chargebacks, this will be very important.
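If you do chargebacks, you’ll want per-database space numbers to bill against.  Here’s a minimal sketch of how I might pull allocated space per database from SQL Server; it assumes you bill on allocated file size rather than on the space actually used inside the files:

-- Allocated space per database; sys.master_files reports size in 8 KB pages.
SELECT DB_NAME(database_id) AS database_name,
       SUM(CAST(size AS bigint)) * 8 / 1024 AS allocated_mb
FROM sys.master_files
GROUP BY database_id
ORDER BY allocated_mb DESC;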

Storage arrays also provide all sorts of extra goodies.  The array itself can help with your backup and recovery process.  Via the built-in snapshot technologies, it can present full data sets to your Dev/QA/Test/Staging systems without consuming full copies of the data.  And when migrating or upgrading from one server to another, the storage array can make the process very easy.

Migrating between servers is just a matter of disconnecting the LUN(s) from the old server, and attaching them to the new server.

Upgrading SQL Server?  That’s no problem.  Disconnect the LUNs from the old server and take a snapshot of them.  Then attach the LUNs to the new server and fire up the database engine.  If the databases fail to upgrade, just roll back the snapshot and attach the LUNs back to the original system, or take another snapshot and try attaching the databases again.
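On the SQL Server side, the detach and attach steps for either the migration or the upgrade scenario look roughly like the sketch below; the database name and file paths are made up for the example:

-- On the old server, before disconnecting the LUNs:
EXEC sp_detach_db @dbname = N'SalesDB';

-- On the new server, once the LUNs (or snapshots of them) are presented
-- and the volumes are online, attach the database from the existing files:
CREATE DATABASE SalesDB
ON (FILENAME = 'E:\SQLData\SalesDB.mdf'),
   (FILENAME = 'F:\SQLLogs\SalesDB_log.ldf')
FOR ATTACH;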

Want to keep a copy of every database in the company at your DR site, no matter the version of SQL Server?  Storage-based replication can replicate data from one array to another for any application; it doesn’t matter whether that application supports replication or not.  Every time a new or changed block is written to the array, the array will grab that block and send it over the wire to the remote array.  This can be done in real time (synchronously) or on a delay specified by the admin (asynchronously).

Hopefully this opened up the array a little to you, and gave you some insight into how the magic box works.

Denny


March 23, 2011  2:00 PM

DAS, NAS and SAN, oh my.


In the server world we have three different kinds of storage available.  Today, only two of these can be used with your SQL Server (as long as you want to keep the SQL Server in a supportable configuration).  Your three options are Direct Attached Storage (DAS), Network Attached Storage (NAS) and Storage Area Network (SAN).

Network Attached Storage is the configuration that you shouldn’t be using with your SQL Server.  NAS can be used with SQL Server if you drop in a trace flag and run your server in an unsupported configuration.  However, if there is a problem with the SQL Server, Microsoft PSS (CSS, or whatever they are called this month) may not be willing to help you, as SQL Server does not officially support Network Attached Storage.  NAS devices are specialized appliances, typically running some flavor of Linux, which present network shares that Windows can recognize so that people or services can access the storage over the network.  NAS devices can also run Windows, a bit like a traditional file server, where you access the files over the IP network.
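For reference, the trace flag being referred to here is, to the best of my knowledge, trace flag 1807, which tells SQL Server to skip its check for network file paths (at least through SQL Server 2008).  The sketch below shows what that unsupported configuration looks like, with a made-up UNC path; it is an illustration, not a recommendation:

-- Unsupported configuration: allow database files on a UNC path.
-- Trace flag 1807 bypasses SQL Server's check for network file paths.
DBCC TRACEON (1807, -1);

CREATE DATABASE NasDemo
ON (NAME = NasDemo_data, FILENAME = '\\nas01\sqldata\NasDemo.mdf')
LOG ON (NAME = NasDemo_log, FILENAME = '\\nas01\sqldata\NasDemo_log.ldf');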

Direct Attached Storage, also called JBOD (Just a Bunch Of Disks) or local storage, is storage which is directly attached to the back of the server.  There will probably be a couple of disks which sit within the server, and when these are outgrown you’ll get an external shelf of storage with more disks in it.  These will be connected via a SCSI, SAS or fibre channel cable to a card within the server.  For SAS or fibre channel DAS units, the controller which does the RAID will probably be within the shelf that holds the disks.  For older SCSI units, the card which handles the RAID will probably be within the server (though there are SCSI shelves which have the controller within the shelf).  Direct Attached Storage is usually faster than SAN storage, as the disks within the DAS units are dedicated to the server.  What you gain in speed you lose in flexibility and manageability.

Storage Area Network (SAN) storage is very flexible storage which comes with a big management overhead.  But that management overhead gives you lots of options and makes it very easy to reconfigure the storage on the fly without any changes to the servers.  You can easily extend the storage, reduce the storage (if you have Windows 2008 and a storage array which supports making the volumes smaller), or move the volume to faster or slower storage, all with no outage to the server.  You don’t even need to tell the server.  Because there is a lot of management involved with storage arrays (the devices which actually hold the disks, also called arrays among other things), correctly configuring them for maximum performance is quite difficult and usually isn’t done.  Storage arrays don’t ship in the best possible config; they need to be tweaked and tuned to fit the workload that your specific environment will be putting on them.  All too often the people managing the storage array don’t understand what all the knobs within the array management software do, so they either don’t touch them (which is probably good) or they tweak them incorrectly (which is very, very bad).

I’ve talked to people who have deployed storage arrays from EMC and gotten less than half the performance that I’ve been able to get from similar storage arrays.  What makes me think this is a storage array configuration problem is that they were using high-end 15k RPM disks, while I was using 10k RPM disks.  Obviously the workloads were different, but their workload was better suited to high-speed storage than mine was.  They were doing sequential reads (where the blocks being read are right next to each other on the spindles) against a RAID 10 array, while I was doing VERY random reads and writes (where the blocks being read and written are all over the array) against a RAID 5 array.
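If you want to compare storage performance on your own systems rather than take anyone’s word for it, a quick sketch like the following (using the sys.dm_io_virtual_file_stats DMV, available in SQL Server 2005 and later) shows the average read and write stall per database file, which is a reasonable proxy for how the underlying DAS or SAN storage is behaving:

-- Average I/O stall (latency) per database file since the instance started.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_stall_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON vfs.database_id = mf.database_id
   AND vfs.file_id = mf.file_id
ORDER BY avg_read_stall_ms DESC;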

Hopefully this helps shed some light on some of the terms you may hear flying around your office.

Denny


March 22, 2011  2:00 PM

Why can’t I just use RAID 10 for everything?


In a perfect world, you would use RAID 10 for everything.  However, we don’t work in a perfect world; we have budgets to deal with.  And these budgets mean that we have to make sacrifices at times, so we don’t always get what we want.

RAID 10 is very expensive to implement, much more so per gig than RAID 5 or RAID 6, especially if there are a lot of disks in the RAID array.  For example, with ten 300 Gig drives, RAID 10 gives you about 1.5 TB of usable space while RAID 5 gives you about 2.7 TB, so the cost per usable gig of RAID 10 is nearly double.  If the database doesn’t specifically need more performance than a RAID 5 array can provide, then using a RAID 10 array is just a waste of money.  And that money could be used for other projects that the company is trying to complete.

In the real world that we all work in (or at least most of us do), performance comes at a cost, and those costs have to be controlled.  If you need RAID 10, and you actually need it, and you’ve got the budget for it, then use it.  Otherwise, use something less expensive.  If you aren’t sure whether you need RAID 10 or not, start with a lower level such as RAID 5 or RAID 6 and, if needed, switch up to RAID 10.

Denny


March 21, 2011  2:00 PM

Expanding local storage with minimal downtime


Welcome to the first day of storage week of SQL University.  Over the next week, we’ll be covering some different topics, at different levels, but hopefully the information is useful to everyone.

One of the big reasons that companies pay for storage arrays is that when it comes to expanding storage (LUNs), you can do it quickly and easily with no downtime.  This raises the question: how do you expand your storage when you are using local disk, with little to no downtime?

So assume that you’ve got four 72 Gig spindles in your server and you want to upgrade them to 400 Gig spindles; you’ve got two options.  The first requires a lot of downtime, the second requires basically none.

With a big outage

If your RAID card doesn’t support online expansion (the documentation for the RAID card will tell you), then you’ll have to take a decent outage to do this.  You’ll need to connect another hard drive, such as a large USB drive, stop all the services, and copy all the data off of the disks onto the USB drive (backing everything up to tape will work as well).  Then shut down the server, remove the old drives, put in the new drives, and create a new RAID array.  Boot Windows back up, create the partition (don’t forget to align the partition), copy the data back to it, and you are up and running.
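If the disks being replaced hold SQL Server databases, native backups are another way to get the data off and back on.  A rough sketch, with made-up database name, drive letters and logical file names:

-- Before tearing down the old array: back up to the temporary USB drive.
BACKUP DATABASE SalesDB
TO DISK = 'X:\Backups\SalesDB.bak'
WITH INIT, CHECKSUM;

-- After the new array and aligned partition are in place: restore the files
-- back onto the rebuilt volume.
RESTORE DATABASE SalesDB
FROM DISK = 'X:\Backups\SalesDB.bak'
WITH MOVE 'SalesDB' TO 'E:\SQLData\SalesDB.mdf',
     MOVE 'SalesDB_log' TO 'E:\SQLData\SalesDB_log.ldf',
     REPLACE;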

With a really small outage

If you have a RAID card that supports online expansion (the documentation for the RAID card will tell you), then you can do this with little to no downtime.  First, replace each disk one at a time, leaving enough time for the RAID array to rebuild between swaps so that you don’t lose any data.  Once all four disks have been replaced (it will take 2-3 days to get to this point), either open the RAID card’s management tools or reboot into the RAID card’s BIOS and have the RAID card expand the virtual disk (or whatever the manufacturer calls it).  Once that process is done, bring Windows back up if you had to reboot into the BIOS, then open diskpart and use it to extend the volume.

This is done by opening diskpart and typing in the following:

list volume

select volume n

extend

You use “list volume” to find the volume number of the drive you want to grow, then use “select volume n” to select the correct volume, where n is the number shown in the output of “list volume”.  The “extend” command then extends the selected volume to fill the newly available space on the disk.

Denny


March 17, 2011  2:00 PM

Sign up for the LA to Phoenix Code Camp bus


Something which was talked about at the last SoCal Code Camp was running a bus from SoCal to Phoenix to help our two communities get to know each other better, and to give speakers and attendees easier access to more events.

So along those lines, a bus has been set up to take people from SoCal to the next Desert Code Camp, and registration is open to get a seat on the bus.  Now, if you plan on taking the bus, don’t wait until it’s too late to sign up.  If enough people don’t sign up for the bus, it’ll be canceled.

If you plan on attending, be sure to get signed up soon, and we’ll see you in Phoenix.

Denny


March 14, 2011  2:00 PM

Free online class “Microsoft Virtualization for VMware Professionals”


Microsoft is putting on a great free three-day online class for VMware professionals who need to learn more about Hyper-V.

Day 1 will focus on “Platform” (Hyper-V, virtualization architecture, high availability & clustering)

Day 2 will focus on “Management” (System Center Suite, SCVMM 2012 Beta, Opalis, Private Cloud solutions)

Day 3 will focus on “VDI” (VDI Infrastructure/architecture, v-Alliance, application delivery via VDI)

The class is being taught by two top-notch presenters: Microsoft Technical Evangelist Symon Perriman (a good friend of mine who knows his stuff) and leading Hyper-V, VMware, and XEN infrastructure consultant Corey Hynes.

Each of the days is a separate event, so you need to sign up for each one separately.  I’ve made each of the days a separate link to make your life easier.  Just click through each link and register.  The only downside to this amazing training opportunity is that it is happening during the Dev Connections conference, so if you are attending Dev Connections in April 2011, you won’t be able to take advantage of this amazing FREE (did I mention FREE?) training event.  As I’ll be at Dev Connections, I won’t be able to make it, which is something I’m really bummed about.

Denny


March 10, 2011  2:05 PM

SQL Excursions launched yesterday.


In case you missed all the talk about SQL Excursions yesterday, I’ll recap here since a lot of people read blogs but aren’t on Twitter.

Yesterday, with great fanfare, I announced the launch of SQL Excursions.  You can read all about it on the announcement blog post.  The important parts are:

SQL Excursions is designed to provide fun, educational SQL Server training events.  The excursions will be held in beautiful locations across the US (and eventually worldwide).  Each excursion will also have social events for the session attendees as well as spouses, partners, guests, etc., so that they can come along on the excursion without being bored by the technical sessions.

Each excursion will have top-notch speakers, typically Microsoft MVPs and/or Microsoft Certified Master recipients.  As each excursion is announced, the speaker and session outline will be posted as well.  What will make this training different from other kinds of training is that while a basic outline will be posted with what the speakers are planning on speaking about, the exact content will be voted on by you, the session attendees.  This will ensure that you receive the sessions that will be of the most use to you in your day-to-day work life.

If you are on Facebook, I’ve got a page there, and if you are on Twitter, there’s the official SQL Excursions account (@SQLExcusions).  As soon as the first excursion is ready to be announced (there’s a bunch of legal stuff which I need to get out of the way), it’ll be announced on Facebook and Twitter as well as on the SQL Excursions web page.  There’s a newsletter which you can sign up for on the home page of the SQL Excursions web site so that new information will be sent directly to your inbox.

Denny


March 10, 2011  2:00 PM

Thinking about attending the SSWUG vConference? Sign up now and save $30.00


If you were planning on attending the SSWUG virtual conference “DB Tech Con,” which is happening April 20-22, 2011, and you’d like to save $30.00 off the cost of the conference, have I got a deal for you.  When you sign up using the discount code SP11DBTechDC, you’ll be given a $30 discount off of the current price.

You can sign up on the normal registration page, enter the code SP11DBTechDC into the VIP Code box, then press the “Update Registration” button.

Denny


March 7, 2011  2:00 PM

Join me and Marathon Technologies as I talk about Consolidation and Virtualization


Join me Wednesday, March 9th at 8am Pacific (11am Eastern) as I join Marathon Technologies to present a webcast titled “Controlling SQL Server Sprawl: The Consolidation Conundrum and Availability Imperative“.  During this session I’ll be talking about some of the benefits and risks of consolidating SQL Server databases and instances.

Denny

