So it appears that May is going to be one of the busier travel months for me this year (at least so far). With three important week long events scheduled for this month it’s going to be a hell of a month.
First is SQL Rally, where I’m in Dallas, Texas from May 7th-12th (I’ve got a pre-con on the 8th, and I want to be able to relax with friends Friday night before heading home on Saturday).
After that is SQL Day 2012 in Poland, where I’m giving a pre-con, the conference keynote and a couple of regular sessions.
Don’t worry, that’s not everything. I’ve also got on-site work with clients and a data center migration to do between SQL Rally and SQL Excursions.
Basically, the reason I’m writing this is that if you email me during May (or June, which isn’t much better with two TechEd conferences back to back), don’t expect a prompt reply. Do expect the longest, craziest out-of-office response you’ve ever seen.
One of the many projects which I had the pleasure of working on during the second half of 2011 and the first quarter of 2012 was Wrox’s Professional SQL Server 2012 Administration, which was just recently published. Now I didn’t write the whole book (hell, I didn’t even write a quarter of the book), but I worked with a great team of other authors: Adam Jorgensen, Steven Wort, Ross LoForte, Brian Knight, Robert Cain, Jose Chinchilla, Audrey Hammonds, Scott Klein, Jorge Segarra and Gareth Swanepoel.
Now normally on a book project like this you’d be either an author or an editor. I got to play both roles, giving Jason Strate an assist with the editing duties (obviously Jason did all the tech editing on my chapters).
As always, you can find this book listed on the books page of my website (www.mrdenny.com), along with all the other books which I’ve worked on over the years. There are also some handy links over to Amazon, where you can check out the books via Amazon’s “Look Inside!” feature before actually purchasing.
Hopefully you get a copy of the book and enjoy it. It was a bit of a painful process for all of us, but it was great to be working on the project with such a great group of people.
In case you missed my announcement earlier this week over on SecuringSQLServer.com…
I’m pleased to be able to announce that the 2nd edition of Securing SQL Server is going to be available soon. It’s just been made available for pre-order on Amazon.com. The second edition comes in at about 350 pages (according to Amazon, I don’t actually have a copy of it yet) while the first edition came in at about 270 pages so there has been a LOT of material added to the book.
While a lot of the new information is focused on SQL Server 2012, there is also a lot of new material which relates to older versions of SQL Server, including chapters on SQL Server Analysis Services and SQL Server Reporting Services, plus information on Instant File Initialization, EXECUTE AS, Database Firewalls, SAN Security, and Actual Data Security (no idea how this got missed the first time around, but thanks to Brent Ozar for pointing it out).
As far as the SQL Server 2012 information goes, you’ll find updated material about the SHA2 hashing algorithms, securing AlwaysOn Availability Groups, security and SQL Server clustering, security and Contained Databases, and a lot more.
If you already have a copy of the 1st edition I encourage you to take a look at the second edition as well. I know that it’s really soon for a second edition of a book (the first edition just came out in February 2011), but this new edition comes with the release of SQL Server 2012.
Hopefully you’ll pre-order your copy today.
P.S. Yes, this edition will be available for the Kindle as well, but that takes a little time. As soon as I know that it’s been posted for the Kindle (which usually happens a little after Amazon gets the physical books) I’ll post another announcement here.
P.P.S. If you visit my SecuringSQLServer.com site I’ve updated everything there for the new edition. You can always find the old edition listed on the Other Books page on that site or on the Books page on mrdenny.com.
This past week the University of Florida decided that they no longer need to teach their customers (let’s be realistic with decisions like this, colleges don’t have students anymore, they have customers) Computer Science. This is doing a major disservice to the customers of the University of Florida. Computer Science (and STEM in general) is the future of the American economy. Without offering Computer Science as a major, many of the students won’t be able to compete in the work force.
What makes this even more insane is that this only saved about $1.7M, and the University of Florida then decided to increase the athletic budget by more than $2M. This just goes to show that the University of Florida doesn’t give a damn about their customers; they only care about making more money to put into their big pile of money. If this wasn’t the case, the money would be getting put into something that doesn’t make a profit (like teaching students) instead of things that do make a profit (like football).
Now I’m not against sports, let me get that out there before I start getting hate mail from people. And while sports are fun and a great way to get publicity for the school, if the school is really, really lucky one guy on the football team per year will be drafted into the NFL, while everyone with a CS degree will end up working somewhere doing something with computers. Based on those numbers alone you would think that the Computer Science department would be worth spending a few dollars on.
Now I received a couple of replies on Twitter when this first came out saying that the University of Florida didn’t have a very good Computer Science program. And frankly, with an annual budget of $1.9M I’m not that surprised. Between keeping software refreshed, paying teachers, etc., $1.9M doesn’t exactly go all that far. If they were sick of having a really poor Computer Science department, maybe they should have found a way to raise the budget for the department and brought in some industry professionals as guest speakers to try and breathe some life into it, instead of just shutting the doors on the department and being done with it.
I’m done ranting. Back to your regularly scheduled programming.
Microsoft has recently released hot fix 388724 under MS KB 2687741 which resolves a performance issue when failing over a SQL Server 2012 Availability Group from one replica to another.
The basic gist of the problem is that there was an issue with the inter-node communication within the Windows cluster which caused the AG to take longer than expected to fail over.
If you are having this problem I’d recommend reading the KB article and getting the hot fix installed on your cluster. As this is just a hot fix and not a service pack (it should be included in Windows 2008 R2 SP2), I’d recommend only installing it if you are actually having the problem it describes.
One of the things that people will need to change in their applications when using AlwaysOn under SQL Server 2012 is retry logic, so that if the SQL Server is down the application can retry the connection.
Now this shouldn’t be anything new to application developers, as even today there’s nothing that says the SQL Server database will always be available. Instead of failing the application on the first connection attempt, or the first time a command is run, the command should be retried, probably a couple of times. However, if the error you get back is from the SQL Server engine itself you don’t want to retry; you only want to retry when the connection failed because the server was unavailable, not when the server was up and returned a normal error message.
If you are working with SQL Azure this same logic applies to your application there as well.
While I’d love to provide you with some sample source code here, I’m not a .NET developer and the last thing that you want me doing is writing .NET source code so I’ll leave that for the .NET professionals.
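That said, the retry pattern itself is pretty language agnostic. Here’s a rough sketch in Python (not .NET, and not tied to any particular database driver; `TransientConnectionError` and the `query` function are hypothetical placeholders for whatever your data access layer actually raises and does):

```python
import time

class TransientConnectionError(Exception):
    """Placeholder for 'the server couldn't be reached' errors,
    e.g. while an Availability Group failover is in progress."""

def run_with_retry(operation, attempts=3, delay_seconds=2):
    """Run operation(), retrying only on transient connection failures.

    Errors raised by the database engine itself (syntax errors,
    permission errors, etc.) propagate immediately -- retrying those
    would just fail again with the same error.
    """
    last_error = None
    for _ in range(attempts):
        try:
            return operation()
        except TransientConnectionError as err:
            # The server was unreachable; wait and try again, since a
            # failover may still be completing on the other replica.
            last_error = err
            time.sleep(delay_seconds)
    raise last_error

# Usage sketch: simulate a failover that resolves on the second attempt.
state = {"calls": 0}

def query():
    state["calls"] += 1
    if state["calls"] < 2:
        raise TransientConnectionError("server unavailable")
    return "rows"

result = run_with_retry(query, attempts=3, delay_seconds=0)
```

The key design point is the same whatever language you use: classify the error first, and only loop on the “server unreachable” class of failures.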
Come and join me in Poland on May 24th, 2012 (24-05-2012) at SQL Day 2012. During this day-long session we will be looking at storage and virtualization from a DBA perspective, with the end goal of improving your knowledge of enterprise storage and enterprise virtualization.
While we won’t be looking at a specific storage vendor or a specific virtualization platform, we’ll be covering a lot of the common techniques between them and looking at a lot of enterprise-class theory. The entire day-long session is open for Q&A (Questions and Answers), so we can discuss vendor-specific issues that you are having in your enterprise today. Be sure to check out the SQL Day 2012 pricing page for more information (the page is in Polish so I can’t read it, but I’m pretty sure it shows the pre-con pricing at 400 PLN + 23% VAT and the regular conference at 300 PLN + 23% VAT).
There are lots of great pre-cons going on on the 24th, so if mine doesn’t interest you, be sure to check out the other pre-cons which are happening that day as well.
I’ve uploaded my slide decks from SQL Saturday 111. I gave two sessions at SQL Saturday 111 in Atlanta, GA: the first was on index internals and the second was on SQL Server Table Partitioning. The decks can be found on the session pages for the two sessions.
I had a great time at SQL Saturday 111 and I hope all the other speakers and all the attendees had a great time as well.
I look forward to seeing everyone at the next event, SQL Rally in just a few short weeks.
If you have physical SQL Servers that you plan on moving into a virtual environment, you’ll want to double-check your affinity mask settings before actually moving the machine from a physical server to a VM with P2V software. The reason for this is that if the affinity mask is set for specific CPUs and the number of CPU cores changes, the affinity mask won’t be correct and you won’t be able to get into the advanced settings of sp_configure without getting an invalid settings error like the one shown below.
Msg 5832, Level 16, State 1, Line 1
The affinity mask specified does not match the CPU mask on this system.
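Conceptually, the affinity mask is just a bitmask with one bit per CPU, which is why it breaks when the core count changes. Here’s a little illustrative sketch (this is not anything SQL Server actually runs, just an illustration of why a mask built for the physical box becomes invalid on the VM):

```python
def mask_is_valid(affinity_mask, cpu_count):
    # The mask may only reference CPUs that actually exist:
    # every set bit must fall below cpu_count.
    allowed = (1 << cpu_count) - 1
    return (affinity_mask & ~allowed) == 0

# A mask pinning SQL Server to CPUs 4-7 on an 8-core physical server.
physical_mask = 0b11110000

on_physical = mask_is_valid(physical_mask, 8)  # fine on 8 cores
on_vm = mask_is_valid(physical_mask, 4)        # broken after P2V to 4 vCPUs
```

Once the set bits point at CPUs the VM doesn’t have, SQL Server rejects the whole mask, which is exactly the Msg 5832 error above.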
If you haven’t P2V’ed the system yet, simply change the various affinity masks to 0 before you do, which sets them for all processors. If you have already P2V’ed the system, your best option is to log into the SQL Server using the dedicated admin connection and manually change the value in the system catalog using the following query.
UPDATE sys.configurations SET value = 0 WHERE name = 'affinity mask';
Hopefully you never run across this problem, but if you do there’s the solution for you.
UPDATE: Paul Randal reminded me that CPU Affinity has been deprecated as of SQL Server 2008 R2 so you’ll probably not want to be configuring the CPU Affinity anyway.
Something which has come up when upgrading Microsoft Operations Manager 2007 to 2012 is that there is an extra step which isn’t really documented in the Ops Manager upgrade guide. When upgrading from Ops Manager 2007 to 2012 you also need to upgrade the SQL Server to SQL Server 2008 R2, as that is a prerequisite for Ops Manager 2012. As the Ops Manager 2007 install probably dates from 2007 or 2008, it’s probably running on SQL Server 2005 today, which means the database engine has to be upgraded before the Ops Manager software can be.
The problem comes from the fact that when you upgrade SQL Server there is a setting called the compatibility mode which doesn’t get changed by default. The reason for this is that you can continue to use older T-SQL syntax while still upgrading the database engine to the newest version. When the compatibility mode is left at the older level (in this case SQL Server 2005 compatibility mode) newer T-SQL features aren’t available. In the case of Ops Manager going from SQL Server 2005 to SQL Server 2008 R2 the feature in question that is needed is the MERGE statement which wasn’t available in SQL Server 2005.
The annoying thing here is that Microsoft doesn’t test for the compatibility mode during the Ops Manager upgrade process, so this doesn’t get flagged. This means that you’ll get through the service upgrade, and then when you get into the second phase of the migration (doing the management group updates) the System Center Management Configuration Service will throw error number 29112 and the entire Ops Manager system will stop working. It throws this error because the Management Configuration Service is attempting to create stored procedures which use the MERGE statement, which the SQL Server 2005 compatibility mode doesn’t understand.
Thankfully fixing this is very easy. Log into the SQL Server database engine which hosts the Ops Manager databases. In the object explorer within SQL Server Management Studio, right click on the OperationsManager and OperationsManagerDW databases and select Properties (one database at a time). On the Options tab change the compatibility level from SQL Server 2005 to SQL Server 2008, then click OK.
If you prefer this change can also be made with a couple of simple ALTER DATABASE statements as shown below.
ALTER DATABASE [OperationsManager] SET COMPATIBILITY_LEVEL = 100;
ALTER DATABASE [OperationsManagerDW] SET COMPATIBILITY_LEVEL = 100;
Either way, once the change is made no restart of the database engine is required. Just fire up the System Center Management Configuration Service and let it do its thing, and it’ll complete that step of the upgrade process.
I hope this helps,