SQL Server with Mr. Denny


December 27, 2010  2:00 PM

My book “Securing SQL Server” is coming.

Denny Cherry

I’m happy to say that I officially can’t make a single change to my book “Securing SQL Server”. The publisher made the last little changes and sent it to the printer a couple of days ago.  As I understand it, the book should be shipping out to Amazon, me, etc. around February 1st, 2011, slightly ahead of schedule.  Now that the entire process is complete, I can say that going through the book writing process has been a very interesting learning experience.

The book lists at $49.95, but Amazon currently has it listed at $32.97 for pre-order.  I don’t know whether the book will be available for the Kindle, but hopefully it will be.  Even if it is, feel free to order both the hard copy and the digital version.

Once the book ships, if you bring the hard copy to any event that I’m at I’ll be happy to sign your copy for you.

I will have a few copies to give away, which I’ll be doing at select events over the year (I’ll announce which events once I know where I’ll be giving the books away).

Hopefully you’ll pick up a copy of the book, and hopefully you’ll find it useful.

Denny

December 23, 2010  5:14 PM

What should Developers know about SQL Server?

Denny Cherry

Back at Tech Ed 2010, a good friend Robert Cain (blog | twitter) and I were wandering the exhibit hall floor when we got stopped by the guys from the Deep Fried Bytes podcast, who asked us if we’d be willing to sit down and record a podcast.  Of course we said sure, we’d be happy to, even though we had no idea what they would want to talk about.

Well, that podcast was just posted on the deepfriedbytes.com website as Episode #61.

Denny


December 23, 2010  2:00 PM

Do you have any affordable recommendations (mid-size company) for DR to a secondary site, about 11 TB of storage?

Denny Cherry

It depends on how you are going to handle your data replication.  If you are going to use native array-based replication then you’ll need to either use the array’s data replication options or look at EMC’s RecoverPoint.

If, however, you are going to handle the replication outside of the array:

  • DFS for file servers
  • Mirroring, log shipping, etc. for SQL Server (see the sketch at the end of this answer)
  • Native Exchange replication for mail data

then you’ve got a few options available to you.

Dell has a storage option called EqualLogic which can serve as a great storage solution for a DR environment where you need a lot of storage but less IO than you need in your production environment (I’m not saying that EqualLogic is a slow solution, but there are solutions which are much faster, and they will cost you more).  You’ll probably be able to start with a single shelf, then add another as needed.

EMC also has a smaller array option called the AX4.  The AX4 array is based on the same concepts as the CX4 array which is a much more expensive array, but the CX4 has more features available.  The AX4 runs on SATA disks, and supports up to 60 disks which can be up to 1TB each.  For an 11 TB solution I’d recommend at least 15 disks (one full shelf).  If performance ever becomes an issue you simply buy another shelf and move some of the LUNs to the new shelf.

There are a couple of other options which you can look at from smaller companies, but when it comes to storage I prefer to stick with the bigger companies.  Yes, you have to pay a little more, but you know that EMC and Dell will be around in a few years, and that the technologies they are using have been around for quite a while.
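
For the SQL Server bullet above, the moving parts are pretty simple whichever array you land on; here’s a minimal log shipping sketch, with made-up database, server, and share names:

-- On the primary: back up the transaction log to a share the DR site can reach.
backup log prod_db to disk = '\\drserver\logship\prod_db.trn'

-- On the DR server: restore the log, leaving the database in a restoring
-- state so it can accept the next log backup.
restore log prod_db from disk = '\\drserver\logship\prod_db.trn' with norecovery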

Denny


December 20, 2010  2:00 PM

Database Restores and Torn Pages

Denny Cherry

This weekend I was migrating some databases from one server to another.  As these were rather large databases (about 2 TB in size) and I didn’t have the clusters set up on a SAN (using DAS instead), I needed to set up log shipping from one cluster to another so that the downtime during the move was kept to a minimum.

But a little problem came up: when I restored one database I got a torn page error when trying to roll the logs forward.  My first thought was OK, maybe there was a network hiccup or a problem with the storage on the destination server.  So I restored the database again, and got a torn page error again.  However, the second time the error was on a different page.  This is very important, because the torn page had moved from one page (1:1396373) to another (1:24815312) even though I used the same database backup both times.  That told me the torn page wasn’t caused by the source database or the database backup; if it had been, the torn page would have been the same both times.

Now you’ll notice that the torn page was detected when the transaction logs were being rolled forward, not while the actual restore was happening.  The reason for this is that torn pages aren’t checked for during a full backup restore.  Torn pages are only detected during two specific operations: when the pages are read into the buffer pool (my database is in NORECOVERY so that isn’t going to happen), and when the pages are changed (in this case they are changed when the transaction log rolls forward).  So the only other way I would have been able to verify that there wasn’t a torn page would have been to bring the database up in STANDBY mode and do a scan of every index (both clustered and non-clustered), which would have caused all the pages to be read into the buffer pool and caused the torn page message to be thrown.  But that can’t happen in this case because I’m log shipping from SQL Server 2005 to SQL Server 2008, and the database can’t be brought up in STANDBY mode because a database upgrade needs to be done.  So checking for additional torn pages requires taking a fully rolled forward database out of NORECOVERY and putting it into RECOVERY, then querying every table on every index (by using index hints) so that every page is touched, to ensure that a torn page isn’t lurking.
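
To make that concrete, here’s a minimal sketch of the kind of queries involved; the table and index names are hypothetical, and in a real database you’d generate one query per index from sys.indexes:

restore database prod_db with recovery  -- take the database out of NORECOVERY
go
use prod_db
go
-- Hint each index explicitly so every page gets read into the buffer pool;
-- a torn page throws an 824 error as it is read.
select count(*) from dbo.SomeTable with (index(PK_SomeTable))       -- clustered index
select count(*) from dbo.SomeTable with (index(IX_SomeTable_Col1))  -- each non-clustered index
go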

This not so quick bit of testing (the database backup in question is about 300 GB) told me that there was a problem with either the HBAs or the DAS that the cluster was using.  To test further we failed the SQL cluster over to the other node and ran the restore again, which didn’t cause any torn page error messages when rolling the logs forward.  This would appear to put the DAS in the clear, and put the problem on the other node of the cluster: the HBAs in that node, cables, etc., something which only that server uses to connect to the storage.

If you ever set up log shipping and you get torn pages, let this be a lesson: don’t assume that there is a problem with the source database.  It might be a problem on the destination instead.

Denny


December 17, 2010  7:27 PM

MCM exam passed, time to prep for the lab.

Denny Cherry

I just received notice today that I received a passing score on the first of the MCM exams.  I must say, the last few weeks have been extremely nerve-wracking waiting for the results.  I took the exam the day before Thanksgiving (which was awesome, since the testing center is normally an hour away) and it took until today to get the results.

Now that the first exam is out of the way, I can start worrying about the lab portion.  I’ve got a few weeks before I can take it, as not all the testing centers can offer the lab portion, so I’ll have to wait until my center has the exam ready.  But when it does I’ll be there taking that beta exam and praying for the best.

If you are planning on taking the MCM exams when they come out, obviously I can’t tell you what is on the exam (you have to pinky swear not to tell before you can take it), but I will tell you it is by far the hardest Microsoft exam I’ve taken.  Even harder, I think, than the Business Intelligence exams, which were very tough.

I’m off to start studying for the lab, which will be tough as I have no idea what is going to be covered, or what I’ll be asked to do.

Denny


December 16, 2010  2:00 PM

Moving SQL database files using T-SQL only.

Denny Cherry

So last week I needed to rename some physical database files so that I didn’t have any naming conflicts when I restored some other databases.  I had about 10 databases where I needed to rename both the physical database files and the databases themselves, and I really didn’t want to detach each database, then go find the files, rename them, and put it all back together one at a time.

With a little work, I was able to beat SQL Server into submission and, using just T-SQL (and xp_cmdshell, which I had to enable for this specific task), I was able to detach the databases, rename the physical files, and then attach the databases back.  The trick here is that SQL Server changes the permissions on the physical database files when you detach a database, so that only the person who detaches the database can touch the physical files.  The way I was able to get around that was via the xp_cmdshell proxy account.  I changed the proxy account to use my domain account, but this still didn’t quite do the trick.  The reason it didn’t is that, as a member of the sysadmin fixed server role, I bypass the proxy account, and anything I do using xp_cmdshell uses the SQL Service account by default.
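
For reference, here’s roughly what changing the proxy account looks like; the domain account and password are placeholders:

-- Set the xp_cmdshell proxy account; callers of xp_cmdshell who aren't
-- sysadmins run under this Windows account.
exec sp_xp_cmdshell_proxy_account 'CONTOSO\denny', 'P@ssw0rd!'
go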

To get around this use of the SQL Service account I had to use EXECUTE AS to impersonate a lower-privileged account.  In this case I chose the guest account as the account to use when I executed xp_cmdshell.  The code that ended up getting run against the database instance looked like this:

-- Detach the database under its production name.
exec sp_detach_db 'prod_db'
go
-- Impersonate a low-privileged user so xp_cmdshell runs under the
-- proxy account instead of the SQL Service account.
execute as user = 'guest'
go
-- Rename the physical files (rename takes just the new file name, no path).
exec xp_cmdshell 'rename W:\Data\prod_db.mdf test_db.mdf'
exec xp_cmdshell 'rename T:\TLogs\prod_db.LDF test_db.ldf'
go
-- Switch back to the sysadmin context.
revert
go
-- Attach the database under its new name, pointing at the renamed files.
exec sp_attach_db 'test_db', 'w:\data\test_db.mdf', 't:\tlogs\test_db.ldf'
go

Don’t you worry, I didn’t write out all that code by hand. I used a SQL query to generate the xp_cmdshell and the bulk of the sp_attach_db code. That SQL was…

-- Column 1: the xp_cmdshell rename command for each file.
-- Column 2: the file name portion of the sp_attach_db call.
-- The substring pulls out everything after the last backslash (the file name).
select 'exec xp_cmdshell ''rename ' + filename + ' ' +
	replace(substring(filename, len(filename) - charindex('\', reverse(filename)) + 2, len(filename)), 'prod_', 'test_')
	+ '''',
	', ''' + replace(filename, 'prod_', 'test_') + ''''
from prod_db.dbo.sysfiles

Hopefully you never need to go through this exercise, but if you do, hopefully this post will come in handy.

Denny


December 13, 2010  2:00 PM

SQL Saturday #62 looks like it is going to be a great event

Denny Cherry

The first SQL Saturday of 2011 is looking like it is going to be an awesome event.  One reason I can say that a month ahead of time is that, for the first time (as far as I know), speakers have to be turned away because so many speakers have submitted sessions.  I count 94 sessions submitted to the event, which is just an amazing number.  Some of the sessions were submitted by well known speakers and some by lesser-known speakers who are just getting started, which is one of the goals of the SQL Saturdays: to get new speakers into the community.

Personally I’ve got 4 sessions submitted, and I’ve got no idea if mine will be picked or not (hopefully they will be).  I’ll be there no matter what, as I’ve got a pre-con the day before on Friday.  I’m thrilled that I’ll be able to be involved with such a popular community event.

I know that Pam and Jose are limiting all the speakers to a single session for the day, which, given the massive popularity of the event, just makes perfect sense.  I also know, from some comments on Andy’s blog, that a lot of the slots are going to the newer speakers to give them the chance to get started, which also makes perfect sense, as we need to grow our speaker pool.  Hopefully I’ll be able to get a speaking slot at the event, but if I don’t get one on Saturday, I’ll understand (if you were planning on giving me a slot, please don’t take it away after reading this).  After all, I’ve got all day Friday.

Anyway, see you next month in Tampa.

Denny


December 9, 2010  2:00 PM

Do we have to worry about disk alignment if we have an EMC SAN?

Denny Cherry

Yes.  Every storage platform, no matter which vendor makes it, needs to be correctly aligned.  If the disks aren’t correctly aligned then you are doing twice as much work for every read and write operation as you need to be.

Fortunately, new volumes created on a Windows 2008 or higher server (or Vista or higher client OS) will be correctly aligned.  They will be aligned at the 1024 KB mark instead of the 64 KB mark, but as long as the offset they are aligned to is evenly divisible by 64 KB it’ll be fine.
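
If you want to check an existing server, one quick sketch (using xp_cmdshell here just to stay in T-SQL) is to ask wmic for each partition’s starting offset and make sure it divides evenly by 65,536 bytes:

-- StartingOffset is reported in bytes. The Windows 2008 default of 1,048,576
-- (1024 KB) divided by 65,536 (64 KB) is exactly 16, so it's aligned; the old
-- default of 32,256 bytes (31.5 KB) is the misaligned value to watch for.
exec xp_cmdshell 'wmic partition get Name, StartingOffset'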

Denny


December 6, 2010  2:00 PM

Microsoft SQL Server’s MDS has come a long way.

Denny Cherry

OK, now that I have your attention: pretty much nothing has changed in Denali’s Master Data Services.  The install procedure is a bit different now, but that is about it.  To install the Denali version of MDS just run through the normal SQL Server installer.  When you get to the service list page, scroll to the bottom to find “Master Data Services” listed down there.  Check it and move through the installer.  After installation, open Start > Program Files > Microsoft SQL Server Denali CTP 1 > Master Data Services > Configuration Manager.  This will allow you to configure the database and website for MDS, and it looks identical to the SQL Server 2008 R2 Configuration Manager.  The website to configure MDS is also identical to the SQL Server 2008 R2 website.

When it comes to loading and processing data within MDS, nothing has changed.

In other words, good luck with the damn thing.

Denny


December 2, 2010  2:00 PM

Will separating the data file and log file of SQL Server help if all are on the SAN? From a performance perspective, not management.

Denny Cherry

Most likely yes, but it really depends on the storage array and the storage array configuration.  If the array is a shared-everything array like an IBM XIV then there may not be any benefit to breaking everything onto separate disks, as each LUN is spread over every spindle in the device.  However, if you are using an EMC array where you manually configure which LUNs are created on which disks, then it will definitely matter.
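
As a quick illustration of the separation itself, it’s just a matter of pointing the data and log files at different LUNs when you create the database; the drive letters and names here are hypothetical:

-- Data file on one LUN (E:) and log file on another (F:), so each file gets
-- its own spindles and its own Windows disk queue.
create database SalesDB
on primary (name = SalesDB_data, filename = 'E:\SQLData\SalesDB.mdf')
log on (name = SalesDB_log, filename = 'F:\SQLLogs\SalesDB.ldf')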

One thing to remember, which I can’t stress enough, is that if you are writing directly to disk in a SAN environment then there is probably something wrong, as the array has a ton of cache which should be accepting all the writes, which are then destaged to the disk later on.

Now, even when using a system like an IBM XIV array, having multiple LUNs can give you some benefits, as Windows will then have multiple disk queues and multiple paths over which to send the data.

Basically like everything else, you’ll need to test what works best in your environment, and test the other options occasionally to ensure that what you are using is still the best configuration later on.

Denny

