Come and join me in Poland on May 24th, 2012 (24-05-2012) at SQL Day 2012. During this day-long session we will be looking at storage and virtualization from a DBA perspective, with the end goal of the day being to improve your knowledge of enterprise storage and enterprise virtualization.
While we won’t be looking at a specific storage vendor or a specific virtualization platform, we’ll be covering a lot of the common techniques between them and looking at a lot of enterprise-class theory. The entire day-long session is open for Q&A (Questions and Answers), so we can discuss vendor-specific issues that you are having in your enterprise today. Be sure to check out the SQL Day 2012 pricing page for more information about pricing (the page is in Polish so I can’t read it, but I’m pretty sure it shows the pre-con pricing at 400 PLN + 23% VAT and the regular conference at 300 PLN + 23% VAT).
There are lots of great pre-cons going on on the 24th, so if mine doesn’t interest you, be sure to check out the other pre-cons which are going on that day as well.
I’ve uploaded my slide decks from SQL Saturday 111. I gave two sessions at SQL Saturday 111 in Atlanta, GA: the first on index internals and the second on SQL Server Table Partitioning. The decks can be found on the session pages for the two sessions.
I had a great time at SQL Saturday 111 and I hope all the other speakers and all the attendees had a great time as well.
I look forward to seeing everyone at the next event, SQL Rally in just a few short weeks.
If you have physical SQL Servers that you plan on moving into a virtual environment, you’ll want to double-check your affinity mask settings before actually moving the machine from a physical server to a VM using P2V software. The reason for this is that if the affinity mask is set for specific CPUs and the number of CPU cores changes, the affinity mask will no longer be correct and you won’t be able to change the advanced settings through sp_configure without getting an invalid settings error like that shown below.
Msg 5832, Level 16, State 1, Line 1
The affinity mask specified does not match the CPU mask on this system.
If you haven’t P2V’ed the system yet, simply change the various affinity masks to 0 before you do, which sets them for all processors. If you have already P2V’ed the system, your best option is to log into the SQL Server using the dedicated admin connection and manually change the value in the system table with the following query.
UPDATE sys.configurations SET value = 0 WHERE name = 'affinity mask'
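For the pre-P2V case, the reset described above can be scripted with sp_configure. A minimal sketch (this assumes you also want to clear the I/O affinity mask; skip that line if you’ve never set it):

```sql
-- Show advanced options so the affinity settings are visible to sp_configure
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Reset the affinity masks to 0, which means "all processors"
EXEC sp_configure 'affinity mask', 0;
EXEC sp_configure 'affinity I/O mask', 0;
RECONFIGURE;
```

Run this before the migration and the virtual machine will come up without the mismatch error.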
Hopefully you never run across this problem, but if you do there’s the solution for you.
UPDATE: Paul Randal reminded me that CPU affinity has been deprecated as of SQL Server 2008 R2, so you probably won’t want to be configuring the CPU affinity anyway.
Something which has come up when upgrading Microsoft Operations Manager 2007 to 2012 is that there is an extra step which isn’t really documented in the Ops Manager upgrade guide. When upgrading from Ops Manager 2007 to 2012 you also need to upgrade the SQL Server to SQL Server 2008 R2, as that is required by Ops Manager 2012. Since the Ops Manager 2007 install probably dates from 2007 or 2008, it’s probably running on SQL Server 2005 today, which means the database has to be upgraded before the Ops Manager software can be, as one of the prerequisites for running Ops Manager 2012 is that you are running SQL Server 2008 R2.
The problem comes from the fact that when you upgrade SQL Server there is a setting called the compatibility level which doesn’t get changed by default. The reason for this is so that you can continue to use older T-SQL syntax while still upgrading the database engine to the newest version. When the compatibility level is left at the older setting (in this case SQL Server 2005 compatibility) newer T-SQL features aren’t available. In the case of Ops Manager going from SQL Server 2005 to SQL Server 2008 R2, the feature in question is the MERGE statement, which wasn’t available in SQL Server 2005.
The annoying thing here is that Microsoft doesn’t test for the compatibility level when going through the Ops Manager upgrade process, so this doesn’t get flagged. This means that you’ll get through the service upgrade, and when you get into the second migration phase (doing the management group updates) the System Center Management Configuration Service will throw error number 29112 and the entire Ops Manager system will stop working. It throws this error because the Management Configuration Service is attempting to create stored procedures which use the MERGE statement, which the SQL Server 2005 compatibility level doesn’t understand.
Thankfully fixing this is very easy. Log into the SQL Server database engine which is hosting the Ops Manager databases. In the object explorer within SQL Server Management Studio, right click on the OperationsManager and OperationsManagerDW databases and select properties (do one database at a time). On the options tab change the compatibility level from SQL Server 2005 to SQL Server 2008. Then click OK as shown below (click to enlarge).
If you prefer, this change can also be made with a couple of simple ALTER DATABASE statements as shown below.
ALTER DATABASE [OperationsManager] SET COMPATIBILITY_LEVEL = 100;
ALTER DATABASE [OperationsManagerDW] SET COMPATIBILITY_LEVEL = 100;
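You can confirm that the change took effect with a quick query against sys.databases (100 is the compatibility level shared by SQL Server 2008 and 2008 R2):

```sql
-- Both Ops Manager databases should report compatibility_level = 100 after the change
SELECT name, compatibility_level
FROM sys.databases
WHERE name IN (N'OperationsManager', N'OperationsManagerDW');
```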
Either way, once the change is made there is no restart of the database engine required. Just fire up the System Center Management Configuration Service and let it do its thing, and it’ll complete that step of the upgrade process.
I hope this helps,
Many companies came pretty close to needing to implement their DR plans yesterday, and many of them probably didn’t even realize it. In case you didn’t see what was going on in the Dallas area yesterday, there was massive hail and there were several tornadoes touching down in the area. The Dallas Fort Worth (DFW) airport was shut down for hours, hundreds of millions of dollars worth of airplanes were damaged, many homes were destroyed, etc.
What does all this have to do with companies’ DR plans? Well, in the DFW area there is a little hosting company called RackSpace. RackSpace hosts a large percentage of their customers in the data center in the area, which they actually call DFW (granted, many companies in the Dallas area refer to that office as DFW). In the case of RackSpace, however, the facility really is close to DFW, very close. In fact it’s at the end of runway 13R/31L, at the north/west end of the runway. The red mark at the top left is the RackSpace CoLo facility; the road at the bottom is runway 13R/31L.
So why am I picking on RackSpace right now? Because if you look at this map you’ll see several tornadoes which touched down not all that far away from RackSpace, just a few miles away in fact. This was a very close call.
If those tornadoes had touched down just a few miles to the west, a lot of companies would be in a really bad state at the moment as they tried to figure out just how much data had been lost between the last tapes shipped from RackSpace to offsite storage and the moment the place was torn apart. Then there’s the problem of how long it would take RackSpace to get new servers delivered and racked in another data center (as it would probably take a while to get this one dried out and rebuilt).
Best case, these companies would be looking at several days of downtime; worst case, weeks. The reality of the situation is that most of the smaller companies would be totally hosed, as odds are that RackSpace would be focused on getting their largest clients online first, since the 30 largest clients probably bring in more revenue than the rest combined (I’ve done work for several of RackSpace’s larger clients so I know how much they are paying). The major computer companies can only produce so many servers at a time, and RackSpace would pretty much need all of them for a couple of months, as RackSpace would probably need thousands of servers and storage arrays to be delivered in order to get everything back up and running.
What would make this even worse is that companies that tried to move to another hosting provider to get online faster probably wouldn’t be able to. First they’d need to get their data from tape at RackSpace, which would be a problem unto itself as there wouldn’t be anywhere for RackSpace to restore the data. Second, the new hosting providers might not be able to get new hardware delivered, as RackSpace would be taking up all the production capacity.
Now, all of this hell could be avoided by properly planning for this sort of disaster hitting the RackSpace hosting facility. RackSpace has several other data centers in the US, so you could easily enough set up some DR machines at another facility and set up data replication between the facilities, so that if one facility were taken offline you would be able to keep running at the second site.
But again, this all requires planning in advance. If you are a RackSpace customer I’d recommend talking to your sales team about getting a DR solution up and running within another of the RackSpace facilities.
If you need assistance with these conversations feel free to engage me and we can make sure that your systems are prepared for the next disaster that strikes near (or on top of) your data center.
So I’ve arrived home from my first SQL Bits conference, which was the 10th SQL Bits event (they do two a year). I’ve got to say that I had a great time at the conference. SQL Bits, if you aren’t aware, is a three-day event, with the first day (Thursday) being all-day sessions and the second and third days being normal hour-long sessions. Day two (Friday) is only for paying attendees, while day three (Saturday) is open to anyone who registers and there is no cost to attend.
I was able to have sessions on all three days, doing my “Storage and Virtualization for the DBA” pre-con on Thursday, a session on AlwaysOn on Friday and a session on virtualization on Saturday. Stacia Misner (blog | @StaciaMisner) and I also had a joint session on Saturday, which was part 1 of an all-day session which we did at PASS. It explores the BI side of the SQL Server workload and how those BI processes impact the OLTP database and the EDW/ODS/Reporting databases as data is loaded into them, reports are run and OLAP cubes are updated. (We briefly showed a link at the end of the slide deck, which points to this page for some additional reading.) Hopefully Stacia and I will be invited back to give the rest of this presentation at the next SQL Bits (thankfully we were able to end at a pretty good place this year).
A few of us made our way to London a few days early (some to other cities as well) in order to make it as easy as possible to kill the jet lag before the conference started (I don’t think there is anything worse than trying to give an all-day presentation when you are 8 hours off of your normal time zone). Kris (my wife) and I spent a few days before SQL Bits doing a little sightseeing with Stacia, Erika Bakse (blog | @BakseDoesBI) and Adam Machanic (blog | @AdamMachanic). We were able to see some of the great parts of London, like the very old Tower of London, parts of which date back to the 1200s or so, as well as Westminster Abbey, and we walked around by the Parliament building.
One of the very cool things that we did was on Wednesday, when Stacia, Erika, Kris and I met up with Buck Woody, Jen Stirrup, Lara Rubbelke and a couple of others (I just can’t remember who else was there; I know that Jen and Stacia took pictures of the group) for lunch in London at “Ye Olde Cheshire Cheese”, which was the pub frequented by Charles Dickens while he wrote many of the works for which he is famous. It was a great lunch with great friends in a pub which has been around for hundreds of years (except for when it burnt down in the Great Fire of London). According to Wikipedia there has been a pub in that location since 1538, and it was last rebuilt (according to Wikipedia and the sign at the pub, also shown on Wikipedia) in 1667.
One thing that SQL Bits did for the first time was invite the speakers’ and sponsors’ spouses/guests/etc. out for the afternoon on Saturday so that they wouldn’t be stuck sitting around the hotel for the day. The outing, which Kris attended, was lunch followed by a matinee showing of a London play. Kris said that she had a great time and she made some new friends (hopefully she remembered to collect email addresses). This was SQL Bits’ way of showing some thanks to the speakers’ partners for loaning them out for the weekend (and giving them some incentive to want to come to the conference and see what it is that we all do at these events, as most of the time our partners want to avoid the conferences like the plague).
Something that I thought was really interesting about London was the mixture of very old buildings mixed in with ultra-modern all-glass buildings. As an example, I was standing on the wall of the Tower of London overlooking the river Thames, looking at the gate where prisoners were brought through, and right across the river were several brand new all-glass skyscrapers looking down on us. You can see this a little in the below picture (click to view full size), which shows the entire front side of the Tower as well as some new buildings at both the left and right edges. (Just before leaving for London I picked up a new Android Nexus cell phone, running Android 4, and it’s got a kick-ass panoramic photo mode built in, which is what I used to take this picture.)
After hours during the conference there were of course some fun activities, as well as the sightseeing around London, which I hadn’t been to since I was a little kid. There were some pretty rare sights to be found…
Surrounded by some of the other attendees and speakers.
Now I’m not going to say that it was a little late when these pictures were taken, because it wasn’t. It was actually pretty early … in the morning. This all happened about 3am and we were still going strong and most importantly we were all there the next day right when we needed to be so that the conference could continue on without a hitch.
Below are a few random pictures from our sightseeing that I wanted to share with everyone.
The statue in front of Buckingham Palace.
The front of Buckingham Palace.
Some chocolates that I know for a fact that Paul Randal would love to have.
In closing, thanks again to the SQL Bits team and all the attendees. I had a great time, and I hope to be able to attend the event next time.
Every once in a while people ask me if they should use SQL Server Replication to get data to a DR site. Typically my answer is “probably not”, for a few reasons.
1. If there are triggers on your tables, replication doesn’t have a way to ensure that the triggers will be there on the remote site.
2. If you need to add tables, procedures, views, etc. you have to reinitialize the subscription to add the new articles to the subscriber.
3. The failback story is pretty much a mess. Assuming that you do have to fail over to your DR server, failing back isn’t exactly the easiest thing to do. Basically you have to take another outage while you move the database back, or you have to set replication back up in the other direction.
Needless to say, these are some pretty good reasons not to use SQL Server Replication to get data to your DR site, especially as there are so many better options such as Database Mirroring, Log Shipping, storage replication, third-party storage replication and, soon enough, AlwaysOn Availability Groups.
If you are using SQL Server Replication to replicate data from your production site to your DR site I urge you to look at the other options which are available to you and you should strongly consider moving to one of the other technology options.
We all get emails from recruiters. I know that I’m shocked on a day that I don’t get any. Personally I love reading these emails, as they are usually pretty damn funny. But I’m seeing a very disturbing trend in these emails: making job candidates pay to attend out-of-town job interviews.
If I were looking for a new position and the magical line “candidate must pay for all interview travel expenses” showed up in a job posting, I’d just keep on moving. If you don’t want to interview out-of-town candidates, then don’t. It’s really not all that hard to figure out. Now, the thing that really pissed me off about this specific job posting was that the company was willing to hire a remote employee, but they expected the remote employee to pay for all travel costs for the interview. So does this mean that if hired I would be expected to pay my own way to company meetings?
If you are willing to hire a remote employee, part of that includes paying for the candidate’s expenses to come out and see you for the job interview. There’s nothing that says that you need to bring every candidate out for an in-person interview. There’s something called a phone screen, which is how you filter out the people you don’t want to bother interviewing in person.
One of the costs of doing business with a modern distributed workforce is paying for a few extra plane tickets when hiring for a new position. So suck it up, stop being so cheap and pay for the flights and a night in the hotel. When you are hiring someone you need to put your best foot forward so that you look as good as possible and can attract the best possible talent.
On occasion I’m asked to come and look into SQL performance problems on systems. Normally when I’m doing this the customer doesn’t have any third-party tools installed and they are actively having problems, and those problems are leading to lost revenue or at least really pissed off customers and/or employees. As the systems are currently in a system-down situation, there’s usually no time to pitch, buy, install and configure a performance monitoring solution from one of the many vendors.
When I run across these sorts of problems the first thing that I do is put sp_whoisactive and sp_who3 on the server. As these are just stored procedures getting approval to install them is usually pretty easy.
Once the stored procedures are installed, sp_whoisactive will pretty quickly tell you which stored procedures are having performance problems. If the output is showing a CXPACKET wait type, that’s when sp_who3 comes into play. Running sp_who3 with a SPID number after the procedure, like “exec sp_who3 125”, will give you the wait type for each thread within the process.
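For reference, the two calls look something like this (the @get_plans parameter is part of sp_WhoIsActive and pulls the execution plan for each session; 125 is just a hypothetical session ID):

```sql
-- Show the active requests, including each session's execution plan
EXEC sp_WhoIsActive @get_plans = 1;

-- Drill into the per-thread waits for one session (hypothetical SPID)
EXEC sp_who3 125;
```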
When doing this recently on a system, sp_whoisactive showed me that CXPACKET was the wait type. After digging into the process with sp_who3 I saw that one of the threads was waiting on SOS_SCHEDULER_YIELD. I then used sp_whoisactive to get the execution plan, which showed me the missing index which needed to be created. In this case the clustered index on the table was being scanned, and based on the page count output from SET STATISTICS IO we were scanning the entire table every time the query was run. This massively expensive scan was causing the query to parallelize and the run time to go insanely high.
Once we added the index, we figured out the man-hours that had been wasted per day while the index was missing. We assumed that the procedure was run 3 times per second over the course of a 12-hour period, and the query was taking 3 seconds to complete before the index was added. After the index was created the query was running in just a few milliseconds. This single query running for 3 seconds, 3 times per second, for 12 hours comes out to 108 man-hours wasted per day while this one query was being run.
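If you want to check the arithmetic, it works out like this:

```sql
-- 3 executions per second * 12 hours * 3,600 seconds per hour = 129,600 executions/day.
-- At 3 seconds of wait per execution, that is 388,800 seconds, or 108 hours.
SELECT (3 * 12 * 60 * 60 * 3) / 3600.0 AS man_hours_wasted_per_day;  -- 108.0
```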
Using two community-written stored procedures and a few minutes of troubleshooting time, 108 man-hours per day are now being saved for that specific company.
Hopefully you’ll find these stored procedures to be useful in your performance troubleshooting. They aren’t hard to use, but they sure are useful.
So I’ve run into a bit of a strange issue with Windows 8 on my laptop that I wanted to share in case anyone else runs into the same problem. My basic problem is that about 50% of the time when my laptop boots up into Windows 8 it can’t find the network. I know that the WiFi card is working correctly because it can see the list of networks in my area; I just can’t actually talk to any of them. If I do an ipconfig from the command line I can’t even see the LAN or WiFi network adapters, even though they are showing up correctly in the network adapter window in the control panel. (All pictures can be clicked on to view them full size.)
As you can see, the WiFi card found the WiFi network; there was just no TCP information coming down from it.
What I was able to figure out is that there’s some TCP/IP setting which isn’t being saved correctly (or is sometimes being wiped out of the registry for some unknown reason). To fix the problem, in the Network Connections window (shown above) double click one of the network icons (I used the WiFi one). To verify the problem is the same, you should see no bytes being passed at all.
If you click Details, the box which would normally have all sorts of useful information will be blank.
Close the Network Connection Details and on the Network Status (WiFi Status in my picture above) click the “Properties” button.
In the list in the middle scroll down until you find “Internet Protocol Version 4 (TCP/IPv4)”. Select this and click the properties button. This will show you a window similar to the following.
If you have values specified in this screen that are different from my settings shown in the screenshot DO NOT CHANGE THEM! If you change these settings without knowing what you are doing you won’t be able to get online and only the person who setup your network will be able to help you fix the problem!
Now click the “OK” button on this screen (without changing anything), and click the “Close” button on the prior screen.
You will notice that under the Activity section the numbers are no longer 0, which means that data is flowing through the network card to the rest of the network (and in my case to the Internet as well). If you click the “Details” button you’ll see that you now have an IP address and everything is happy again.
If you look at the output from ipconfig again, you’ll see that the network adapter is now listed and is receiving IP address information.
If you do end up running into this problem yourself, remember these steps as you’ll probably have to do this pretty regularly.