SQL Server with Mr. Denny


December 30, 2010  2:00 PM

We are currently running on an EMC SAN. Did I hear you correctly that we should have databases, backups, tempdb, and logs on the same LUN? Thanks.

Denny Cherry

No, you want to put them on different LUNs.  IBM currently recommends that on their XIV array everything can be put on a single LUN, because each LUN is striped across all the disks in the array.  For an EMC array you’ll want separate LUNs for databases, backups, tempdb, and logs.
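As a quick sketch, the file layout might look something like this (the drive letters and database name are hypothetical, with each drive letter presented from its own LUN):

-- Data and log files on separate LUNs via separate drive letters.
CREATE DATABASE SalesDB
ON PRIMARY
    (NAME = SalesDB_data, FILENAME = 'D:\Data\SalesDB.mdf')
LOG ON
    (NAME = SalesDB_log, FILENAME = 'L:\TLogs\SalesDB.ldf')
go
-- tempdb would sit on its own LUN (say T:\), and backups would be written
-- to yet another LUN (say K:\) so one failure doesn't take out the others.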

Denny

December 28, 2010  11:33 PM

Some people really just don’t understand how the technology that runs their lives functions.

Denny Cherry

So I’m sitting at home reading an article on the NY Times website about the current travel mess in New England, and a statement within the article really scared me (about two-thirds of the way down).

“He wonders why during times like these, airlines, which are now profitable, cannot simply rent additional computing power and hire temporary customer-service workers.”

Now the person who is wondering this isn’t just your average guy on the street; he is Tom Groenfeldt, who “publishes a blog on financial technology” (http://www.techandfinance.com/), so he should know a little something about technology, or at least I would hope so.

Computer processing power can’t just be rented so that everything magically gets faster the moment you sign the contract.  If you need more web servers, the site needs to be deployed to the new servers, and those servers need to be added to the load balancer.  If you did go to one of the cloud platforms like Amazon’s EC2 or Microsoft’s Azure and deployed your application to their web servers, you’d then need to reconfigure your network to allow those outside connections in.  And your back-end databases aren’t going to get any faster; you just have more web servers.

Increasing the capacity of your database engine is going to require a little more than “poof, it’s faster”.  New hardware needs to be brought in, and the systems migrated to it.  Before that can happen an OS of some sort needs to be installed, the system needs to be tested to ensure that there are no problems with the new hardware, and so on.  And if this new hardware needs to be brought in, how is it going to get there?  The airlines are canceling flights across the country, making it pretty tough to fly large computers around at the drop of a hat.  So you’ll need to truck that new equipment from wherever it is to wherever you need it.  Assuming that you only need to go halfway across the country, that is still a 2-3 day drive (assuming that the roads are drivable).

As to the other half of his statement, “hire temporary customer-service workers”: sure, no problem.  Do you know 500 people (probably more than 500 are needed; after all, there are something like 10 million people who need to talk to customer service at the moment) who can be hired at the drop of a hat and who are located where the call center is?  Does the call center have somewhere for 500 more people to sit?  Fine, we’ll have them all work at home.  Then we need to issue them computers and phones, and they all need to have high-speed internet.  Assume for a moment that they all have high-speed internet and a computer: they either need to be given an office phone which will use that high-speed connection so they can take calls, or they need to be given the company’s Voice over IP (VoIP) softphone software to run on their computers, plus a headset (something else which now needs to be purchased and issued, so we probably need to wait for those to be shipped in from somewhere).

Not to mention the little matter that these people need to be interviewed and background checks need to be run.  Since they will be taking people’s credit card numbers, we’ll probably want to make sure that they aren’t going to steal the customers’ credit card information.

Needless to say, asinine statements like the one above don’t serve anyone’s interest except to get people pissed off for no reason.  This person apparently doesn’t have a clue about how technology really works (I skimmed a couple of pages of his blog and didn’t see anything about technology on there at all).  He should stop talking about technology until he actually understands how it works, and until he has worked with it as a technology professional.

Denny


December 27, 2010  2:00 PM

My book “Securing SQL Server” is coming.

Denny Cherry

I’m happy to say that I officially can’t make a single change to my book “Securing SQL Server”. The publisher made the last little changes and sent it to the printer a couple of days ago.  As I understand it, the book should be shipping out to Amazon, me, etc. around February 1st, 2011, slightly ahead of schedule.  Now that the entire process is complete, I can say that going through the book writing process has been a very interesting learning experience.

The book lists at $49.95, but Amazon currently has it listed at $32.97 for pre-order.  I don’t know if the book will be available for the Kindle or not, but hopefully it will be.  Even if it is, feel free to order both the hard copy and the digital copy.

Once the book ships, if you bring the hard copy to any event that I’m at I’ll be happy to sign your copy for you.

I will have a few copies to give away, which I’ll be doing at select events over the year (I’ll announce which events once I know where I’ll be giving the books away).

Hopefully you’ll pick up a copy of the book, and hopefully you’ll find it useful.

Denny


December 23, 2010  5:14 PM

What should developers know about SQL Server?

Denny Cherry

Back at Tech Ed 2010, a good friend Robert Cain (blog | twitter) and I were wandering the exhibit hall floor when we got stopped by the guys from the Deep Fried Bytes podcast, who asked us if we’d be willing to sit down and record a podcast.  Of course we said sure, we’d be happy to, even though we had no idea what they would want to talk about.

Well, that podcast was just posted on the deepfriedbytes.com website as Episode #61.

Denny


December 23, 2010  2:00 PM

Do you have any affordable recommendations (mid-size company) for DR to a secondary site, about 11 TB of storage?

Denny Cherry

It depends on how you are going to handle your data replication.  If you are going to use native array-based replication then you’ll need to either use the array’s data replication options or look at EMC’s RecoverPoint.

If however you are going to handle the replication outside of the array:

  • DFS for file servers
  • Mirroring, log shipping, etc. for SQL Server (see the sketch below)
  • Native Exchange replication for mail data

Then you’ve got a few options available to you.
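Before getting to the hardware, here’s the SQL Server sketch mentioned above: a minimal database mirroring setup for DR (the server names, endpoint port, and database name are all hypothetical):

-- On each server, create a mirroring endpoint (the port is an example).
CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = PARTNER)
go
-- On the DR server, restore a full backup (and any log backups) WITH NORECOVERY.
RESTORE DATABASE SalesDB FROM DISK = N'\\backups\SalesDB.bak' WITH NORECOVERY
go
-- On the DR server, point the database at the production server.
ALTER DATABASE SalesDB SET PARTNER = N'TCP://prodsql.example.com:5022'
go
-- On the production server, point at the DR server; mirroring starts here.
ALTER DATABASE SalesDB SET PARTNER = N'TCP://drsql.example.com:5022'
go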

Dell has a storage option called EqualLogic which can serve as a great storage solution for a DR environment where you need lots of storage but less IO than you need in your production environment (I’m not saying that EqualLogic is a slow solution, but there are solutions which are much faster, and they will cost you more).  You’ll probably be able to start with a single shelf, then add another as needed.

EMC also has a smaller array option called the AX4.  The AX4 array is based on the same concepts as the CX4, which is a much more expensive array but has more features available.  The AX4 runs on SATA disks and supports up to 60 disks of up to 1 TB each.  For an 11 TB solution I’d recommend at least 15 disks (one full shelf).  If performance ever becomes an issue you simply buy another shelf and move some of the LUNs to it.

There are a couple of other options which you can look at from smaller companies, but when it comes to storage I prefer to stick with the bigger companies.  Yes, you have to pay a little more, but you know that EMC and Dell will be around in a few years, and that the technologies they are using have been around for quite a while.

Denny


December 20, 2010  2:00 PM

Database Restores and Torn Pages

Denny Cherry

This weekend I was migrating some databases from one server to another.  As these were rather large databases (about 2 TB in size) and I didn’t have the clusters set up on a SAN (they use DAS instead), I needed to set up log shipping from one cluster to the other so that the downtime for the move was kept to a minimum.

But a little problem came up: when I restored one database I got a torn page error while trying to roll the logs forward.  My first thought was OK, maybe there was a network hiccup or a problem with the storage on the destination server.  So I restored the database again, and got a torn page error again.  However, the second time the error was on a different page.  This is very important, because the torn page had moved from one page (1:1396373) to another (1:24815312) even though I used the same database backup both times.  That told me something very important: the torn page wasn’t caused by the source database or the database backup.  If it had been, the torn page would have been the same page both times.

Now you’ll notice that the torn page was detected while the transaction logs were being rolled forward, not while the actual restore was happening.  The reason for this is that torn pages aren’t checked for during a full backup restore.  Torn pages are only detected during two specific operations: when pages are read into the buffer pool (my database is in NORECOVERY, so that isn’t going to happen), and when pages are changed (in this case they are changed when the transaction log rolls forward).  So the only other way I could have verified that there wasn’t a torn page would have been to bring the database up in STANDBY mode and do a scan of every index (both clustered and non-clustered), which would have caused all the pages to be read into the buffer pool and caused the torn page message to be thrown.  But that can’t happen in this case, because I’m log shipping from SQL Server 2005 to SQL Server 2008 and the database can’t be brought up in STANDBY mode, as a database upgrade needs to be done first.  So checking for additional torn pages requires taking a fully rolled forward database out of NORECOVERY, putting it into RECOVERY, and then querying every table on every index (using index hints) so that every page is touched, to ensure that no torn page is left.
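A quick way to build those queries is a sketch like the following, which generates one COUNT(*) per index with an index hint so each index gets fully scanned (run it in the database being checked; the system views are SQL Server 2005 and up):

-- Generate one forced-scan query per heap, clustered, and non-clustered
-- index on every user table so that every page gets read into the buffer pool.
select 'SELECT COUNT(*) FROM ' + quotename(schema_name(o.schema_id))
    + '.' + quotename(o.name)
    + ' WITH (INDEX(' + cast(i.index_id as varchar(10)) + '));'
from sys.indexes i
join sys.objects o on o.object_id = i.object_id
where o.type = 'U'
  and i.type <= 2
go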

This not-so-quick bit of testing (the database backup in question is about 300 gigs) told me that there was a problem with either the HBAs or the DAS that the cluster was using.  To test further we failed the SQL cluster over to the other node and ran the restore again, which didn’t cause any torn page errors when rolling the logs forward.  This would appear to put the DAS in the clear, and put the problem on the other node of the cluster: the HBAs in that node, cables, etc.; something which only that server uses to connect to the storage.

If you ever set up log shipping and you get torn pages, let this be a lesson: don’t assume that there is a problem with the source database.  It might be a problem on the destination side instead.

Denny


December 17, 2010  7:27 PM

MCM exam passed, time to prep for the lab.

Denny Cherry

I just received notice today that I got a passing score on the first of the MCM exams.  I must say that the last few weeks have been extremely nerve-wracking while waiting for the results.  I took the exam the day before Thanksgiving (which was awesome, since the testing center is normally an hour away) and it took until today to get the results.

Now that the first exam is out of the way, I can start worrying about the lab portion.  I’ve got a few weeks before I can take it, as not all the testing centers can offer the lab portion, so I’ll have to wait until my center has the exam ready.  But when it does, I’ll be there taking that beta exam and praying for the best.

If you are planning on taking the MCM exams when they come out, obviously I can’t tell you what is on the exam (you have to pinky swear not to tell before you can take it), but I will tell you it is by far the hardest Microsoft exam I’ve taken.  Even harder, I think, than the Business Intelligence exams, which were very tough.

I’m off to start studying for the lab, which will be tough as I have no idea what is going to be covered, or what I’ll be asked to do.

Denny


December 16, 2010  2:00 PM

Moving SQL database files using T-SQL only.

Denny Cherry

So last week I needed to rename some physical database files so that I didn’t have any naming conflicts when I restored some other databases.  I had about 10 databases where I needed to rename both the physical database files and the databases themselves, and I really didn’t want to detach each database, go find the files, rename them, and put it all back together one at a time.

With a little work, I was able to beat SQL Server into submission, and using just T-SQL (and xp_cmdshell, which I had to enable for this specific task) I was able to detach the databases, rename the physical files, and then attach the databases back.  The trick here is that SQL Server changes the permissions on the physical database files when you detach a database, so that only the person who detached it can touch the physical files.  The way I was able to get around that was via the xp_cmdshell proxy account.  I changed the proxy account to use my domain account, but this still didn’t quite do the trick.  The reason is that, as a member of the sysadmin fixed server role, I bypass the proxy account, and anything I do using xp_cmdshell runs under the SQL Server service account by default.
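For reference, enabling xp_cmdshell and setting the proxy account looks roughly like this (the domain account and password here are placeholders):

-- Enable xp_cmdshell for this specific task (turn it back off afterwards).
exec sp_configure 'show advanced options', 1
reconfigure
exec sp_configure 'xp_cmdshell', 1
reconfigure
go
-- Set the proxy account that non-sysadmin callers of xp_cmdshell will use.
exec sp_xp_cmdshell_proxy_account 'DOMAIN\denny', 'P@ssw0rd'
go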

To get around this use of the SQL Server service account I had to use EXECUTE AS to impersonate a lower-privileged account.  In this case I chose the guest account as the account to use when executing xp_cmdshell.  This is the code that ended up getting run against the database instance:

exec sp_detach_db 'prod_db'
go
-- Impersonate a low-privileged user so xp_cmdshell runs under the proxy
-- account instead of the SQL Server service account.
execute as user = 'guest'
go
-- Rename the data and log files in place on the file system.
exec xp_cmdshell 'rename W:\Data\prod_db.mdf test_db.mdf'
exec xp_cmdshell 'rename T:\TLogs\prod_db.LDF test_db.ldf'
go
-- Switch back to the sysadmin login.
revert
go
exec sp_attach_db 'test_db', 'w:\data\test_db.mdf', 't:\tlogs\test_db.ldf'
go

Don’t you worry, I didn’t write out all that code by hand. I used a SQL query to generate the xp_cmdshell commands and the bulk of the sp_attach_db code. That SQL was:

-- Builds the rename command and the sp_attach_db file list for each file.
-- The rtrim() calls guard against sysfiles.filename being space-padded
-- (it is an nchar(260) column), which would throw off the reverse/charindex math.
select 'exec xp_cmdshell ''rename ' + rtrim(filename) + ' ' + 
	replace(substring(rtrim(filename), len(filename)-charindex('\', reverse(rtrim(filename)))+2, len(filename)), 'prod_', 'test_')
 + '''', ', ''' + replace(rtrim(filename), 'prod_', 'test_') + '''' from prod_db.dbo.sysfiles

Hopefully you never need to go through this exercise, but in case you do, hopefully this will come in handy.

Denny


December 13, 2010  2:00 PM

SQL Saturday #62 looks like it is going to be a great event

Denny Cherry

The first SQL Saturday of 2011 is looking like it is going to be an awesome event.  One reason that I can say that a month ahead of time is that, for the first time (as far as I know), speakers have had to be turned away because so many speakers submitted sessions.  I count 94 sessions submitted to the event, which is just an amazing number.  Some of the sessions were submitted by well-known speakers and some by lesser-known speakers who are just getting started, which is one of the goals of the SQL Saturdays: to get new speakers into the community.

I’ve got 4 sessions submitted, and I’ve got no idea if any of mine will be picked (hopefully they will be).  I’ll be there no matter what, as I’ve got a pre-con the day before on Friday.  Personally I’m thrilled that I’ll be able to be involved with such a popular community event.

I know that Pam and Jose are limiting all the speakers to a single session for the day, which given the massive popularity of the event just makes perfect sense.  I also know, from some comments on Andy’s blog, that a lot of the slots are going to newer speakers to give them the chance to get started, which also makes perfect sense, as we need to grow our speaker pool.  Hopefully I’ll be able to get a speaking slot at the event, but if I don’t get one on Saturday I’ll understand (if you were planning on giving me a slot, please don’t take it away after reading this).  After all, I’ve got all day Friday.

Anyway, see you next month in Tampa.

Denny


December 9, 2010  2:00 PM

Do we have to worry about disk alignment if we have an EMC SAN?

Denny Cherry

Yes.  Every storage platform, no matter which vendor makes it, needs to be correctly aligned.  If the partitions aren’t correctly aligned then you are doing twice as much work for every read and write operation as you need to, because a misaligned IO can cross a stripe element boundary and turn one disk operation into two.

Fortunately, new volumes created on a Windows Server 2008 or higher server (or a Vista or higher client OS) will be correctly aligned.  They will be aligned at the 1024 KB mark instead of the 64 KB mark, but as long as the offset they are aligned to is evenly divisible by 64 KB it’ll be fine.
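If you want to check where an existing partition starts, one quick sketch is to ask Windows for the partition offsets; here it is wrapped in xp_cmdshell (assuming xp_cmdshell is enabled, otherwise just run the wmic command from a command prompt):

-- List each partition's starting offset in bytes.  If StartingOffset divides
-- evenly by 65536 (64 KB) the partition is aligned; the classic bad value is
-- 32256 (63 sectors x 512 bytes), the old Windows default.
exec xp_cmdshell 'wmic partition get Name, StartingOffset'
go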

Denny

