This past week the University of Florida decided that they no longer need to teach their customers (let’s be realistic with decisions like this; colleges don’t have students anymore, they have customers) Computer Science. This does a major disservice to the customers of the University of Florida. Computer science (and STEM in general) is the future of the American economy. Without offering Computer Science as a major, many of the students won’t be able to compete in the workforce.
What makes this even more insane is that this only saved about $1.7M, and the University of Florida then decided to increase the athletic budget by more than $2M. This just goes to show that the University of Florida doesn’t give a damn about their customers and only cares about making more money to put into their big pile of money. If this weren’t the case, the money would be getting put into something that doesn’t make a profit (like teaching students) instead of things that do make a profit (like football).
Now I’m not against sports, let me get that out there before I start getting hate mail. And while sports are fun and a great way to get publicity for the school, if the school is really, really lucky one player from the football team will be drafted into the NFL each year, while everyone with a CS degree will end up working somewhere doing something with computers. Based on those numbers alone you would think that the Computer Science department would be worth spending a few dollars on.
Now I received a couple of replies on Twitter when this first came out saying that the University of Florida didn’t have a very good Computer Science program. Frankly, with an annual budget of $1.9M I’m not that surprised. Between keeping software refreshed, paying teachers, etc., $1.9M doesn’t exactly go all that far. If they were sick of having a poor Computer Science department, maybe they should have found a way to raise its budget and brought in some industry professionals as guest speakers to try and breathe some life into it, instead of just shutting the doors and being done with it.
I’m done ranting. Back to your regularly scheduled programming.
Microsoft has recently released hotfix 388724 under MS KB 2687741, which resolves a performance issue when failing over a SQL Server 2012 Availability Group from one replica to another.
The basic gist of the problem is that an issue with the inter-node communication within the Windows cluster caused the AG to take longer than expected to fail over.
If you are having this problem I’d recommend reading the MS KB and getting the hotfix installed on your cluster. As this is just a hotfix and not a service pack (it should be included in Windows 2008 R2 SP2), I’d recommend only installing it if you are actually having the problem it describes.
One of the things that people will need to change in their applications when using AlwaysOn under SQL Server 2012 is retry logic: if the SQL Server is down, the application should retry the connection rather than simply fail.
Now this shouldn’t be anything new to application developers, as even today there’s nothing that says the SQL Server database will always be available. Instead of failing the application on the first connection attempt, or the first time a command is run, the command should be rerun, probably a couple of times. If the error you get back is from the SQL Server itself, though, you don’t want to retry; you’ll only want to retry when the connection to the database failed, not when the database was up and handed you back a normal error message.
If you are working with SQL Azure this same logic applies to your application there as well.
While I’d love to provide you with some sample source code here, I’m not a .NET developer and the last thing that you want me doing is writing .NET source code so I’ll leave that for the .NET professionals.
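That said, the pattern itself is language-agnostic, so here’s a minimal sketch of it in Python rather than .NET. `TransientConnectionError` and `flaky_query` are made-up stand-ins for your driver’s connection-failure exception and your real database call; the point is only the shape of the retry loop.

```python
import time

class TransientConnectionError(Exception):
    """Stands in for a driver-level connection failure (server down, failover in progress)."""

def run_with_retry(operation, retries=3, delay_seconds=1):
    """Run operation(), retrying only on transient connection errors.

    Any other exception (a normal error returned by the server itself)
    propagates immediately -- retrying those would be pointless.
    """
    for attempt in range(1, retries + 1):
        try:
            return operation()
        except TransientConnectionError:
            if attempt == retries:
                raise  # out of attempts, surface the failure
            time.sleep(delay_seconds)

# Example: an operation that fails twice (say, mid-failover) and then succeeds.
attempts = {"count": 0}

def flaky_query():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise TransientConnectionError("server not available")
    return "result set"

print(run_with_retry(flaky_query, delay_seconds=0))  # prints "result set"
```

The key design point is in the `except` clause: only the connection-level error type is caught, so a genuine error message from the server still fails fast.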
Come and join me in Poland on May 24th, 2012 (24-05-2012) at SQL Day 2012. During this day-long session we will be looking at storage and virtualization from a DBA perspective, with the end goal of improving your knowledge of enterprise storage and enterprise virtualization.
While we won’t be looking at a specific storage vendor or a specific virtualization platform, we’ll be covering a lot of the common techniques between them and looking at a lot of enterprise-class theory. The entire day-long session is open for Q & A (Questions and Answers), so we can discuss vendor-specific issues that you are having in your enterprise today. Be sure to check out the SQL Day 2012 pricing page for more information (the page is in Polish so I can’t read it, but I’m pretty sure it shows the pre-con pricing at 400 PLN + 23% VAT and the regular conference at 300 PLN + 23% VAT).
There are lots of great pre-cons going on on the 24th, so if mine doesn’t interest you, be sure to check out the others happening that day as well.
I gave two sessions at SQL Saturday 111 in Atlanta, GA: the first on index internals and the second on SQL Server Table Partitioning. I’ve uploaded my slide decks from the event; they can be found on the pages for the two sessions.
I had a great time at SQL Saturday 111 and I hope all the other speakers and all the attendees had a great time as well.
I look forward to seeing everyone at the next event, SQL Rally in just a few short weeks.
If you have physical SQL Servers that you plan on moving into a virtual environment with P2V software, you’ll want to double-check your affinity mask settings before actually moving the machine from a physical server to a VM. The reason is that if the affinity mask is set for specific CPUs and the number of CPU cores changes, the affinity mask won’t be correct and you won’t be able to get into the advanced settings of sp_configure without getting an invalid settings error like that shown below.
Msg 5832, Level 16, State 1, Line 1
The affinity mask specified does not match the CPU mask on this system.
If you haven’t P2V’ed the system yet, simply change the various affinity masks to 0 (which sets them to all processors) before you do. If you have already P2V’ed the system, your best option is to log into the SQL Server using the Dedicated Admin Connection (DAC) and manually change the value in the system catalog with the following query.
UPDATE sys.configurations SET value = 0 WHERE name = 'affinity mask';
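For the pre-P2V case, the reset can be scripted with sp_configure rather than clicked through. This is the standard syntax; if you also use 'affinity I/O mask' (or the 64-CPU 'affinity64' variants), reset those the same way:

```sql
-- Before the P2V migration: set the affinity mask back to 0 (all processors)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'affinity mask', 0;
RECONFIGURE;
```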
Hopefully you never run across this problem, but if you do there’s the solution for you.
UPDATE: Paul Randal reminded me that CPU Affinity has been deprecated as of SQL Server 2008 R2 so you’ll probably not want to be configuring the CPU Affinity anyway.
Something which has come up when upgrading Microsoft Operations Manager 2007 to 2012 is an extra step which isn’t really documented in the Ops Manager upgrade guide. When upgrading from Ops Manager 2007 to 2012 you also need to upgrade the SQL Server to SQL Server 2008 R2, as that is a prerequisite for running Ops Manager 2012. Since the Ops Manager 2007 install probably dates from 2007 or 2008, it’s probably running on SQL Server 2005 today, which means the database must be upgraded before the Ops Manager software can be.
The problem comes from the fact that when you upgrade SQL Server there is a setting called the compatibility mode which doesn’t get changed by default. The reason for this is that you can continue to use older T-SQL syntax while still upgrading the database engine to the newest version. When the compatibility mode is left at the older level (in this case SQL Server 2005 compatibility mode) newer T-SQL features aren’t available. In the case of Ops Manager going from SQL Server 2005 to SQL Server 2008 R2 the feature in question that is needed is the MERGE statement which wasn’t available in SQL Server 2005.
The annoying thing here is that Microsoft doesn’t test for the compatibility mode during the Ops Manager upgrade process, so this doesn’t get flagged. This means you’ll get through the service upgrade, and then during the second phase of the migration (doing the management group updates) the System Center Management Configuration Service will throw error number 29112 and the entire Ops Manager system will stop working. It throws this error because the Management Configuration Service is attempting to create stored procedures which use the MERGE statement, which the SQL Server 2005 compatibility mode doesn’t understand.
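For illustration, here’s a minimal MERGE of the kind those stored procedures rely on; it parses fine at compatibility level 100 but is a syntax error at level 90. The table and column names here are made up:

```sql
-- Upsert pattern introduced in SQL Server 2008; not valid under
-- SQL Server 2005 (compatibility level 90).
MERGE dbo.TargetTable AS t
USING dbo.SourceTable AS s
    ON t.Id = s.Id
WHEN MATCHED THEN
    UPDATE SET t.Value = s.Value
WHEN NOT MATCHED THEN
    INSERT (Id, Value) VALUES (s.Id, s.Value);
```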
Thankfully, fixing this is very easy. Log into the SQL Server database engine which hosts the Ops Manager databases. In Object Explorer within SQL Server Management Studio, right-click on the OperationsManager and OperationsManagerDW databases and select Properties (do one database at a time). On the Options tab change the compatibility level from SQL Server 2005 to SQL Server 2008, then click OK.
If you prefer this change can also be made with a couple of simple ALTER DATABASE statements as shown below.
ALTER DATABASE [OperationsManager] SET COMPATIBILITY_LEVEL = 100;
ALTER DATABASE [OperationsManagerDW] SET COMPATIBILITY_LEVEL = 100;
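You can confirm the change took effect by querying the sys.databases catalog view:

```sql
-- Both databases should now report compatibility_level = 100 (SQL Server 2008/R2)
SELECT name, compatibility_level
FROM sys.databases
WHERE name IN ('OperationsManager', 'OperationsManagerDW');
```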
Either way, once the change is made there is no restart of the database engine required. Just fire up the System Center Management Configuration Service and let it do its thing, and it’ll complete that step of the upgrade process.
I hope this helps,
Many companies came pretty close to needing to implement their DR plans yesterday, and many of them probably didn’t even realize it. In case you didn’t see what was going on in the Dallas area yesterday, there was massive hail and several tornadoes touching down in the area. The Dallas Fort Worth (DFW) airport was shut down for hours, hundreds of millions of dollars worth of airplanes were damaged, many homes were destroyed, etc.
What does all this have to do with companies’ DR plans? Well, in the DFW area there is a little hosting company called RackSpace. RackSpace hosts a large percentage of their customers in the data center in the area, which they call DFW (granted, many companies in the Dallas area refer to their local office as DFW). In the case of RackSpace, however, the facility really is close to DFW, very close. In fact it’s at the end of Runway 13R/31L, at the north/west end of the runway. The red mark at the top left is the RackSpace CoLo facility; the road at the bottom is runway 13R/31L.
So why am I picking on RackSpace right now? Because if you look at this map you’ll see several tornadoes which touched down not all that far away from RackSpace, just a few miles away in fact. This was a very close call.
If those tornadoes had touched down just a few miles to the west, a lot of companies would be in a really bad state at the moment as they tried to figure out just how much data had been lost between the last tapes shipped from RackSpace to offsite storage and the moment the place was torn apart. Then there’s the problem of how long it would take RackSpace to get new servers delivered and racked in another data center (as it would probably take a while to get this one dried out and rebuilt).
Best case, these companies would be looking at several days of downtime; worst case, weeks. The reality of the situation is that most of the smaller companies would be totally hosed, as odds are that RackSpace would focus on getting their largest clients online first, since the 30 largest clients probably bring in more revenue than the rest combined (I’ve done work for several of RackSpace’s larger clients so I know how much they are paying). The major computer companies can only produce so many servers at a time, and RackSpace would pretty much need all of them for a couple of months, as they would probably need thousands of servers and storage arrays delivered in order to get everything back up and running.
What would make this even worse is that companies that tried to move to another hosting provider to get online faster probably wouldn’t be able to. First, they’d need to get their data from tape at RackSpace, which would be a problem unto itself as there wouldn’t be anywhere for RackSpace to restore the data. Second, the new hosting providers may not be able to get new hardware delivered, as RackSpace would be taking up all the production capacity.
Now this hell could all be avoided by properly planning for this sort of disaster hitting the RackSpace hosting facility. RackSpace has several other data centers in the States, so you could easily enough set up some DR machines at another facility and set up data replication between the facilities, so that if one facility were taken offline you would be able to keep running at the second site.
But again this all requires planning this in advance. If you are a RackSpace customer I’d recommend talking to your sales team about getting a DR solution up and running within another of the RackSpace facilities.
If you need assistance with these conversations feel free to engage me and we can make sure that your systems are prepared for the next disaster that strikes near (or on top of) your data center.
So I’ve arrived home from my first SQL Bits conference, which was the 10th SQL Bits event (they do two a year). I’ve got to say that I had a great time. SQL Bits, if you aren’t aware, is a three-day event, with the first day (Thursday) being all-day sessions and the second and third days being normal hour-long sessions. Day two (Friday) is only for paying attendees, while day three (Saturday) is open to anyone who registers and there is no cost to attend.
I had sessions on all three days: my “Storage and Virtualization for the DBA” pre-con on Thursday, a session on AlwaysOn on Friday and a session on virtualization on Saturday. Stacia Misner (blog | @StaciaMisner) and I also had a joint session on Saturday, which was part 1 of an all-day session we did at PASS. It explores the BI side of the SQL Server workload and how those BI processes impact the OLTP database and the EDW/ODS/Reporting databases as data is loaded into them, reports are run and OLAP cubes are updated. (We briefly showed a link at the end of the slide deck, which points to this page for some additional reading.) Hopefully Stacia and I will be invited back to give the rest of this presentation at the next SQL Bits (thankfully we were able to end at a pretty good place this year).
A few of us made our way to London a few days early (and some to other cities) in order to make it as easy as possible to kill the jet lag before the conference started (I don’t think there is anything worse than trying to give an all-day presentation when you are 8 hours off of your normal time zone). Kris (my wife) and I spent a few days before SQL Bits doing a little sightseeing with Stacia, Erika Bakse (blog | @BakseDoesBI) and Adam Machanic (blog | @AdamMachanic). We were able to see some of the great parts of London, like the very old Tower of London (parts of which date back to the 1200s or so) as well as Westminster Abbey, and we walked around by the Parliament building.
One of the very cool things that we did was on Wednesday, when Stacia, Erika, Kris and I met up with Buck Woody, Jen Stirrup, Lara Rubbelke and a couple of others (I just can’t remember who else was there; I know that Jen and Stacia took pictures of the group) for lunch in London at “Ye Olde Cheshire Cheese”, the pub frequented by Charles Dickens while he wrote many of the works for which he is famous. It was a great lunch with great friends in a pub which has been around for hundreds of years (except for when it burnt down in the Great Fire of London). According to Wikipedia there has been a pub in that location since 1538, and it was last rebuilt (according to Wikipedia and the sign at the pub, also shown on Wikipedia) in 1667.
One thing that SQL Bits did for the first time was invite the speakers’ and sponsors’ spouses/guests/etc. out for the afternoon on Saturday so that they wouldn’t be stuck sitting around the hotel for the day. The outing, which Kris attended, was lunch followed by a matinee showing of a London play. Kris said that she had a great time and made some new friends (hopefully she remembered to collect email addresses). This was SQL Bits’ way of showing some thanks to the speakers’ partners for loaning them out for the weekend (and giving them some incentive to want to come to the conference and see what it is that we all do at these events, as most of the time our partners want to avoid the conferences like the plague).
Something that I thought was really interesting about London was the mixture of very old buildings mixed in with ultra-modern, all-glass buildings. As an example, I was standing on the wall of the Tower of London, looking over the River Thames at the gate where prisoners were brought through, and right across the river were several brand-new all-glass skyscrapers looking down on us. You can see this a little in the below picture, which shows the entire front side of the Tower as well as some new buildings at both the left and right edges. (Just before leaving for London I picked up a new Android Nexus cell phone, running Android 4, and it’s got a kick-ass panoramic photo mode built in, which is what I used to take this picture.)
After hours during the conference there were of course some fun activities, as well as the sightseeing around London, which I hadn’t been to since I was a little kid. There were some pretty rare sights to be found…
Surrounded by some of the other attendees and speakers.
Now I’m not going to say that it was a little late when these pictures were taken, because it wasn’t. It was actually pretty early … in the morning. This all happened about 3am and we were still going strong and most importantly we were all there the next day right when we needed to be so that the conference could continue on without a hitch.
Below are a few random pictures from our sightseeing that I wanted to share with everyone.
The statue in front of Buckingham Palace.
The front of Buckingham Palace.
Some chocolates that I know for a fact that Paul Randal would love to have.
In closing, thanks again to the SQL Bits team and all the attendees. I had a great time, and I hope to be able to attend the event next time.
Every once in a while people ask me if they should use SQL Server Replication to get data to a DR site. Typically my answer to them is “probably not”, for a few reasons.
1. If there are triggers on your tables, replication doesn’t have a way to ensure that the triggers will be there on the remote site.
2. If you need to add tables, procedures, views, etc. you have to reinitialize the subscription to add the new articles to the subscriber.
3. The failback story is pretty much a mess. Assuming that you do have to fail over to your DR server, failing back isn’t exactly the easiest thing to do. Basically you have to take another outage while you move the database back, or you have to set replication up again in the other direction.
Needless to say, these are some pretty good reasons not to use SQL Server Replication to get data to your DR site, especially as there are so many better options such as Database Mirroring, Log Shipping, storage replication, third-party storage replication and, soon enough, AlwaysOn Availability Groups.
If you are using SQL Server Replication to replicate data from your production site to your DR site, I urge you to look at the other options available to you and strongly consider moving to one of them.