SQL Server with Mr. Denny


December 17, 2010  7:27 PM

MCM exam passed, time to prep for the lab.

Denny Cherry

I just received notice today that I got a passing score on the first of the MCM exams.  I must say, the last few weeks of waiting for the results have been extremely nerve-racking.  I took the exam the day before Thanksgiving (which was awesome, since the testing center is normally an hour away) and it took until today to get the results.

Now that the first exam is out of the way, I can start worrying about the lab portion.  I’ve got a few weeks before I can take the exam, as not all the testing centers can offer the lab portion, so I’ll have to wait until my center has the exam ready.  But when it does I’ll be there taking that beta exam and praying for the best.

If you are planning on taking the MCM exams when they come out, obviously I can’t tell you what is on the exam (you have to pinky swear not to tell before you can take it), but I will tell you it is by far the hardest Microsoft exam I’ve taken.  Even harder, I think, than the Business Intelligence exams, which were very tough.

I’m off to start studying for the lab, which will be tough as I have no idea what is going to be covered, or what I’ll be asked to do.

Denny

December 16, 2010  2:00 PM

Moving SQL database files using T-SQL only.

Denny Cherry

So last week I needed to rename some physical database files so that I wouldn’t have any naming conflicts when I restored some other databases.  I had about 10 databases for which I needed to rename both the physical database files and the databases themselves, and I really didn’t want to detach each database, go find the files, rename them, and put it all back together one at a time.

With a little work, I was able to beat SQL Server into submission and, using just T-SQL (and xp_cmdshell, which I had to enable for this specific task), detach the databases, rename the physical files, and then attach the databases back.  The trick here is that SQL Server changes the permissions on the physical database files when you detach a database so that only the person who detached it can touch the physical files.  The way I was able to get around that was via the xp_cmdshell proxy account.  I changed the proxy account to use my domain account, but this still didn’t quite do the trick.  The reason is that, as a member of the sysadmin fixed server role, I bypass the proxy account, and anything I do through xp_cmdshell runs as the SQL Server service account by default.
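
For reference, enabling xp_cmdshell and pointing its proxy credential at a domain account looks roughly like the sketch below.  This is just an illustration, not the exact commands from that day; the account name and password are placeholders, but sp_configure and sp_xp_cmdshell_proxy_account are the documented way to do this.

-- Enable xp_cmdshell for this specific task (turn it back off when finished)
exec sp_configure 'show advanced options', 1
reconfigure
exec sp_configure 'xp_cmdshell', 1
reconfigure
go
-- Set the xp_cmdshell proxy credential to a domain account
-- (the account name and password shown here are placeholders)
exec sp_xp_cmdshell_proxy_account 'CONTOSO\denny', 'placeholder_password'
go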

To get around this use of the SQL Server service account, I had to use EXECUTE AS to impersonate a lower-privileged account.  In this case I chose the guest account as the account to use when I executed xp_cmdshell.  The code that ended up getting run against the database instance is shown below.

-- Detach the database so the physical files can be renamed
exec sp_detach_db 'prod_db'
go
-- Impersonate a low-privileged user so xp_cmdshell runs under the proxy
-- account instead of the SQL Server service account
execute as user = 'guest'
go
-- Rename the data and log files at the OS level
exec xp_cmdshell 'rename W:\Data\prod_db.mdf test_db.mdf'
exec xp_cmdshell 'rename T:\TLogs\prod_db.LDF test_db.ldf'
go
-- Switch back to the sysadmin login
revert
go
-- Attach the renamed files under the new database name
exec sp_attach_db 'test_db', 'w:\data\test_db.mdf', 't:\tlogs\test_db.ldf'
go

Don’t you worry, I didn’t write out all that code by hand. I used a SQL query to generate the xp_cmdshell commands and the bulk of the sp_attach_db code. That SQL was…

select 'exec xp_cmdshell ''rename ' + filename + ' ' +
       replace(substring(filename, len(filename) - charindex('\', reverse(filename)) + 2, len(filename)), 'prod_', 'test_') + '''',
       ', ''' + replace(filename, 'prod_', 'test_') + ''''
from prod_db.dbo.sysfiles

Hopefully you never need to go through this exercise, but in case you do, this should come in handy.

Denny


December 13, 2010  2:00 PM

SQL Saturday #62 looks like it is going to be a great event

Denny Cherry

The first SQL Saturday of 2011 is looking like it is going to be an awesome event.  One reason I can say that a month ahead of time is that, for the first time (as far as I know), speakers have had to be turned away because so many speakers submitted sessions.  I count 94 sessions submitted to the event, which is just an amazing number.  Some of the sessions were submitted by well-known speakers and some by lesser-known speakers who are just getting started, which fits one of the goals of the SQL Saturdays: to get new speakers into the community.

I’ve got 4 sessions submitted, and I’ve got no idea if mine will be picked or not (hopefully they will be).  I’ll be there no matter what, as I’ve got a pre-con the day before on Friday.  Personally, I’m thrilled that I’ll be able to be involved with such a popular community event.

I know that Pam and Jose are limiting all the speakers to a single session for the day, which, given the massive popularity of the event, just makes perfect sense.  I also know, from some comments on Andy’s blog, that a lot of the slots are going to newer speakers to give them the chance to get started, which also makes perfect sense, as we need to grow our speaker pool.  Hopefully I’ll be able to get a speaking slot at the event, but if I don’t get one on Saturday, I’ll understand (if you were planning on giving me a slot, please don’t take it away after reading this).  After all, I’ve got all day Friday.

Anyway, see you next month in Tampa.

Denny


December 9, 2010  2:00 PM

Do we have to worry about disk alignment if we have an EMC SAN?

Denny Cherry

Yes.  Every storage platform, no matter which vendor makes it, needs to be correctly aligned.  If the disks aren’t correctly aligned, then you are doing twice as much work for every read and write operation as you need to be doing.

Fortunately, new volumes created on a Windows Server 2008 or higher server (or a Vista or higher client OS) will be correctly aligned.  They will be aligned at the 1,024 KB mark instead of the 64 KB mark, but as long as the offset they are aligned to is evenly divisible by 64 KB it’ll be fine.
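
If you want to verify the alignment yourself, one quick way (a rough sketch, assuming xp_cmdshell is enabled; you can also run the wmic command straight from a command prompt) is to check the partition starting offsets:

-- A starting offset evenly divisible by 65536 (64 KB) means the partition is aligned
exec xp_cmdshell 'wmic partition get Name, StartingOffset'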

Denny


December 6, 2010  2:00 PM

Microsoft SQL Server’s MDS has come a long way.

Denny Cherry

OK, now that I have your attention: pretty much nothing has changed in Denali’s Master Data Services.  The install procedure is a bit different now, but that is about it.  To install the Denali version of MDS, just run through the normal SQL Server installer.  When you get to the service list page, scroll to the bottom to find “Master Data Services” listed down there.  Check it and move through the installer.  After installation, open
Start > Program Files > Microsoft SQL Server Denali CTP 1 > Master Data Services > Configuration Manager.  This will allow you to configure the database and website used by MDS, and it looks identical to the SQL Server 2008 R2 Configuration Manager.  The website used to configure MDS is identical to the SQL Server 2008 R2 website as well.

When it comes to loading and processing data within MDS, nothing has changed.

In other words, good luck with the damn thing.

Denny


December 2, 2010  2:00 PM

Will separating the data file and log file of SQL Server help if they are all on the SAN? From a performance perspective, not a management one.

Denny Cherry

Most likely yes, but it really depends on the storage array and the storage array configuration.  If the array is a shared-everything array like an IBM XIV, then there may not be any benefit to breaking everything onto separate disks, as each LUN is spread over every spindle in the device.  However, if you are using an EMC array where you manually configure which LUNs are created on which disks, then it will definitely matter.

One thing to remember, which I can’t stress enough, is that if you are writing directly to disk in a SAN environment then there is probably something wrong, as the array has a ton of cache which should be accepting all the writes, which are then destaged to the disk later on.

Now, even when using a system like an IBM XIV array, having multiple LUNs can give you some benefits, as Windows will have multiple disk queues and multiple paths over which to send the data.

Basically, like everything else, you’ll need to test what works best in your environment, and retest the other options occasionally to ensure that what you are using is still the best configuration later on.
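
If you want numbers to base that testing on, SQL Server tracks cumulative I/O stalls per database file; a minimal sketch of pulling per-file latency out of sys.dm_io_virtual_file_stats (the DMV and column names are standard, but treat the query itself as a starting point rather than a finished monitoring script) looks like this:

-- Average read and write latency per database file, as seen by SQL Server
select db_name(vfs.database_id) as database_name,
       mf.physical_name,
       vfs.io_stall_read_ms / nullif(vfs.num_of_reads, 0) as avg_read_ms,
       vfs.io_stall_write_ms / nullif(vfs.num_of_writes, 0) as avg_write_ms
from sys.dm_io_virtual_file_stats(null, null) vfs
join sys.master_files mf
    on vfs.database_id = mf.database_id
    and vfs.file_id = mf.file_id
order by avg_write_ms desc

Run it before and after moving the files onto separate LUNs and compare the latency numbers to see whether the separation actually helped.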

Denny


November 29, 2010  11:00 AM

I’ll be presenting a pre-con at #sqlsat62 in Tampa, FL.

Denny Cherry

If you wanted to go to my pre-con at the SQL PASS Summit but couldn’t make it, or couldn’t get the boss to approve the $400 price tag that PASS was charging, do I have a deal for you.  I’ll be presenting the same pre-con (hopefully even better after integrating feedback from the PASS Summit) at the Tampa, FL SQL Saturday.  The cost for the pre-con is $99 (plus a $3.46 processing fee for Eventbrite).  Talk about a hell of a deal: a $400 session for just $99.  The full abstract for the session is shown here.

This session will be a two-part session in which we will be focusing on two of the biggest topics in the DBA field: how to properly design your SAN storage solution and how to properly design your virtualization solution.

The storage portion of this session will focus on SAN storage, but most of the material will apply to direct attached storage as well.

In the first half of the session we’ll be focusing on the storage array.  Storage can be one of the biggest bottlenecks when it comes to database performance.  It’s also one of the hardest places to troubleshoot performance issues, because storage engineers and database administrators often do not speak the same language.  In this session, we’ll be looking at storage from both the database and storage perspectives.  We’ll be digging into LUNs, HBAs, and the fabric, as well as the storage configuration.

After going over the components we’ll dig into some advanced storage configurations.  This includes RAID groups, multi-pathing software, and proper redundant storage network design.  We will also be digging into some advanced storage array backup techniques including taking storage level clones and snapshots.  After going over these advanced techniques we will dig into how these can best be used to backup the SQL Server environment to provide maximum redundancy with no recurring tape costs.

In the second half of the day we’ll be looking into the pros and cons of moving SQL Servers into a virtual server environment.  Specifically, we’ll be looking at when it’s a good idea and when it’s probably not.  Like everything in the database world, there are no hard-set answers as to whether virtualization is a good idea, but there are times when virtualizing a SQL Server is a good idea and can save you some money.  There are other times when you will be shooting yourself in the foot, and virtualization isn’t a good idea.  We’ll be focusing on how to make this decision, and how to gather the metrics that you need in order to come to it.

We’ll look into how to tie the virtual platforms to the storage array so that you can maximize the storage performance for your SQL Servers and the virtual environment.

You can sign up for the normal SQL Saturday event (I’ve got several slide decks submitted, including one brand new one) at the SQL Saturday webpage.  The pre-con has a separate sign-up page, as there is a fee that has to be paid.  There are only 45 seats available for the pre-con (as well as another 45 seats in the BI pre-con which Stacia Misner is presenting).  I haven’t been told that I can’t bring goodies to give away, so I’ll see what sort of goodies I can get my hands on between now and then.

Hopefully I’ll see you on Friday at the pre-con session, but if not, I’ll see you on Saturday at the SQL Saturday event.

Denny


November 25, 2010  11:00 AM

What are the best counters or KPIs to identify which drives would benefit from EFDs, and that can be used to reflect before and after performance metrics?

Denny Cherry

EFDs (Enterprise Flash Disks) or SSDs (Solid State Disks) are all the rage today for systems with very high I/O load requirements.  But knowing whether your disks would benefit from this technology can be a bit of a problem if you don’t know where to look.  After all, these disks are not cheap.

Some perfmon counters that you’ll want to look at include seconds per operation (Avg. Disk sec/Read and Avg. Disk sec/Write), operations per second (Disk Transfers/sec), and the disk queue length.  Now, if you are in a SAN environment, you will also want to look at the metrics which the array presents, to confirm that the problem is actually on the array side and not just a problem on the server.
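
To get before and after numbers from the SQL Server side, one option is to snapshot the file-level I/O stats into a table.  This is just a rough sketch; dbo.io_baseline is a hypothetical table name, and the DMV values are cumulative since the instance last started, so you compare two snapshots rather than reading them in isolation.

-- First capture creates the baseline table (use insert into ... select on later captures)
select getdate() as capture_time,
       db_name(database_id) as database_name,
       file_id,
       num_of_reads, io_stall_read_ms,
       num_of_writes, io_stall_write_ms
into dbo.io_baseline
from sys.dm_io_virtual_file_stats(null, null)

Capture once before the EFDs go in and once after, then diff the stall and operation counts to show the improvement.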

However, if the server has its own disks on the storage array and the array numbers match what the server is seeing, then it may be time to look at EFDs or SSDs for your disk solution.

Denny


November 22, 2010  11:00 AM

What should be done for a Transaction log drive (array) to make sure the log records are ON the disk, even if the power fails?

Denny Cherry

This is a great question that I got a while back during a webcast that I did for the 24 Hours of PASS.  Most storage arrays, at least the better quality ones, have a battery within them which will keep a portion of the array online in the event of a power outage.  While the array won’t be available for use, this will give the array enough time to safely take its write cache from memory, write it down to the disk, and then gracefully shut down.

In the case of the transaction log, every write into the log is put down to disk immediately, and only then written into the database file.  SQL Server tells Windows to do the write, and Windows tells the array to do the write.  When the write lands in the write cache, the array tells Windows that the write has been completed, Windows passes that information to SQL Server, and SQL Server then knows that the write to the log is done and that it can now write to the database files as well.  If the power fails in the middle of this, after the transaction has been committed to the log but before it has been committed to the database, the data has still been written to disk.  When the array sees that the power has gone out, it’ll write the write cache to the disk, then power down.

When the power comes back up, the array will load the write cache back into memory and flush it down to where it needs to be written (in a similar way to how SQL Server does a roll forward when it restarts).  Once the writes have been completed, the array will allow the hosts (the servers) to see the LUNs, and SQL Server can then fire up and do its normal rollback and roll forward of the data within the transaction log.

Hopefully this helps clear up some confusion.

Denny

P.S. Yes, I am well aware that I have greatly oversimplified the process of how SQL Server writes to the log and the database, but that isn’t the point of the article.  I’m sure that Paul has some great articles on the internals of how this works over on his blog.


November 18, 2010  11:00 AM

Given that Live Migration, etc., give you high availability in the event of a server failure, why would you put a SQL cluster on VMs?

Denny Cherry

This question comes up every once in a while when I’m designing a virtualization solution for someone.  When building a virtualization solution (either Hyper-V or VMware), you get some great HA features included when you build a host cluster, so why would you want to go through the pain of creating a Windows cluster on top of an already clustered solution?

Creating a Windows cluster under the hypervisor gives you some additional protection.  If you only have the host HA solution, then while you can Live Migrate or vMotion from one host to another easily, you aren’t protected from a guest OS failure, and only partially from a host failure.  If the host fails, yes, the platform will restart the VM on another host pretty quickly, but pretty quickly may not be good enough for you.  You may need the service to come back within seconds of a host failure.

What happens on Patch Tuesday when all the security patches are rolled out?  The Windows OS within the guest still needs to be rebooted after patching, and the host’s HA solutions won’t do anything for you here.

However, if you need to ensure very high uptime for a SQL Server database (or any other cluster-able service), then building a Windows cluster under the hypervisor of your choice is a viable solution.  Just make sure that you use the host’s rules engine (anti-affinity rules) to ensure that both nodes of the cluster are never running on the same host server, because if they are, the whole point of the cluster will be lost in the event of a host failure.

Denny

P.S. This post and the next bunch of posts are all questions and answers which I have received from online sessions at either SQL PASS Virtual Chapters or the 24 Hours of PASS virtual event.

