If you were planning on attending the SSWUG virtual conference “DB Tech Con,” which is happening April 20-22, 2011, and you’d like to save $30.00 off the cost of the conference, have I got a deal for you. When you sign up using the discount code SP11DBTechDC you’ll be given a $30 discount off of the current price.
You can sign up on the normal registration page and enter the code SP11DBTechDC into the VIP Code box then press the “Update Registration” button.
Join me Wednesday, March 9th at 8am Pacific (11am Eastern) as I join Marathon Technologies to present a webcast titled “Controlling SQL Server Sprawl: The Consolidation Conundrum and Availability Imperative”. During this session I’ll be talking about some of the benefits and risks of consolidating SQL Server databases and instances.
If you have a SAN RAID 5 LUN for your data and a SAN RAID 10 LUN for your logs, and say a local RAID 1 array for the C drive in the server itself, where do you usually install the SQL Server binaries to?
In this sort of configuration, which is pretty normal actually, I would put the SQL Server binaries on the C drive.
The last two weekends I’ve presented at two different SQL Saturday events, and since I’m pretty lazy I’m posting the decks for both weekends in a single blog post.
SQL Saturday 47 – Phoenix
- Back to Basics; Getting Back to the Basics of SQL Server
- SQL Server Clustering 101
- Where should I be encrypting my data
SQL Saturday 65 – Vancouver, BC
Now the slide decks for my encryption session are the same for both events, just with different templates, but I wanted to upload what people actually saw.
So a little while back we noticed that we had some high CPU load coming from a single stored procedure in the Session State database. This single procedure (TempUpdateStateItemLong) was taking up 80% of the CPU time that the database was using (we found this using Spotlight for SQL Server Enterprise from Quest Software). But in another session state database that same procedure was down in the single digits. So something must have been different between them.
I opened them both up in SSMS and the code for the procedures was identical (as you would expect), but there was something different. The procedure that had the really high CPU % times was compiled with the SET QUOTED_IDENTIFIER ON setting, while the procedure that had the really low CPU % times was compiled with the SET QUOTED_IDENTIFIER OFF setting.
I have no idea why there was a difference, but I changed the one which was ON to OFF and pushed the procedure into the database. As soon as I did the CPU % for that procedure dropped down into the single digit range where it should have been.
Let this be a lesson: those settings definitely matter. And don’t trust that they are correct, even in Microsoft-provided code like Session State.
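If you want to see which settings a procedure was compiled with without opening it up in SSMS, the sys.sql_modules catalog view records them per module. A quick sketch (the procedure name is the one from this post; swap in your own):

```sql
-- Check which SET options each module was compiled with.
-- uses_quoted_identifier and uses_ansi_nulls are captured at
-- CREATE/ALTER time, not taken from the current session.
SELECT o.name,
       m.uses_quoted_identifier,
       m.uses_ansi_nulls
FROM sys.sql_modules m
JOIN sys.objects o ON m.object_id = o.object_id
WHERE o.name = 'TempUpdateStateItemLong';
```

Comparing this output between the two session state databases would have shown the mismatch without eyeballing the procedure definitions.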
So a while back we were seeing some very strange behavior with our SQL Replication. Every once in a while, for no apparent reason, the log reader would just slow down the pulling of records from a single publisher database. Our replication was set up with a single publisher and a single distributor, with over a dozen publications all being sent to a couple of different subscribers.
At random times we would see the latency for all the publications for a single database start to climb, eventually being a few hours behind for no apparent reason. Looking in the normal places didn’t lead me to much. I looked at some execution plans, and saw a couple of performance issues there (with the Microsoft code) so I threw a couple of new indexes onto the MSlogreader_agents and MSsubscriptions tables (see below) and I also made a couple of tweaks to the sp_MSset_syncstate procedure to fix some of the pathetic code which I found within the procedure (you’ll also find this below).
This helped a little, but it didn’t solve the problem. What did was when I queried the sys.dm_os_waiting_tasks dynamic management view. This showed a large number of processes with a wait_type of TRACEWRITE, and these were waiting long enough that blocking was actually starting to pop up (very sporadically, making it very hard to see). A quick look at sys.traces told me that there were three traces running against the server. I knew that I didn’t have one running, so I took the session_id values which were shown in sys.traces and looked in sys.dm_exec_sessions for those session IDs to find out who needed to be kicked in the junk. Turns out that the traces were being run by Quest Software’s Spotlight for SQL Server Enterprise’s Diagnostic Server (the program_name column read “Quest Diagnostic Server (Trace)”).
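The digging above boils down to a couple of quick queries. This is a sketch of the general approach, with the caveat that the join from sys.traces to the session uses the reader_spid column, which is only populated for rowset (Profiler-style) traces:

```sql
-- Who is stuck waiting on trace I/O?
SELECT session_id, wait_type, wait_duration_ms
FROM sys.dm_os_waiting_tasks
WHERE wait_type = 'TRACEWRITE';

-- What traces are running, and who owns them?
SELECT t.id, t.status, t.path, s.session_id,
       s.program_name, s.login_name, s.host_name
FROM sys.traces t
LEFT JOIN sys.dm_exec_sessions s ON t.reader_spid = s.session_id;
```

The program_name, login_name, and host_name columns are usually enough to identify the offending tool.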
So I logged into the diagnostic server via RDP and opened Spotlight. Then I edited the properties for the server which is our distributor, opened the SQL Analysis window, and disabled SQL Analysis for this server. Pretty much as soon as I clicked OK through the windows the TRACEWRITE waits went away, and the latency went from 2 hours down to 0.
This just goes to show how careful you have to be when using SQL Profiler (or any sort of tracing) against your database server.
P.S. If you decide to make these changes to your distributor, keep in mind that they may cause anything or everything to break, including patches, etc. that you try to install against the SQL Server engine. These changes were made for a distributor running SQL Server 2008 RTM build 10.0.1600; use against another build at your own risk. That said, here’s the code.
```sql
USE distribution
GO
CREATE INDEX IX_sp_MSget_new_errorid ON dbo.MSrepl_errors (id) WITH (FILLFACTOR=100)
GO
CREATE INDEX IX_sp_MSadd_logreader_history ON dbo.MSlogreader_agents (id) INCLUDE (name, publication)
GO
CREATE NONCLUSTERED INDEX IX_sp_MSset_syncstate ON MSsubscriptions
    (publisher_id, publisher_db, article_id, subscription_seqno)
    INCLUDE (publication_id) WITH (FILLFACTOR=80)
GO
CREATE NONCLUSTERED INDEX IX_sp_MSset_syncstate2 ON MSsubscriptions
    (publisher_id, publication_id, sync_type, status, ss_cplt_seqno, publisher_db)
    INCLUDE (article_id, agent_id) WITH (FILLFACTOR=90, DROP_EXISTING=ON)
GO
ALTER PROCEDURE sp_MSset_syncstate
    @publisher_id smallint,
    @publisher_db sysname,
    @article_id int,
    @sync_state int,
    @xact_seqno varbinary(16)
AS
SET NOCOUNT ON

DECLARE @publication_id int

SELECT TOP 1 @publication_id = s.publication_id
FROM MSsubscriptions s
WHERE s.publisher_id = @publisher_id
  AND s.publisher_db = @publisher_db
  AND s.article_id = @article_id
  AND s.subscription_seqno < @xact_seqno

IF @publication_id IS NOT NULL
BEGIN
    IF (@sync_state = 1)
    BEGIN
        IF NOT EXISTS (SELECT *
                       FROM MSsync_states
                       WHERE publisher_id = @publisher_id
                         AND publisher_db = @publisher_db
                         AND publication_id = @publication_id)
        BEGIN
            INSERT INTO MSsync_states (publisher_id, publisher_db, publication_id)
            VALUES (@publisher_id, @publisher_db, @publication_id)
        END
    END
    ELSE IF @sync_state = 0
    BEGIN
        DELETE MSsync_states
        WHERE publisher_id = @publisher_id
          AND publisher_db = @publisher_db
          AND publication_id = @publication_id

        -- activate the subscription(s) so the distribution agent can start processing
        DECLARE @automatic int
        DECLARE @active int
        DECLARE @initiated int
        SELECT @automatic = 1
        SELECT @active = 2
        SELECT @initiated = 3

        -- set status to active, ss_cplt_seqno = commit LSN of xact containing
        -- syncdone token.
        --
        -- VERY IMPORTANT: We can only do this because we know that the publisher
        -- tables are locked in the same transaction that writes the SYNCDONE token.
        -- If the tables were NOT locked, we could get into a situation where data
        -- in the table was changed and committed between the time the SYNCDONE token was
        -- written and the time the SYNCDONE xact was committed. This would cause the
        -- logreader to replicate the xact with no compensation records, but the advance
        -- of the ss_cplt_seqno would cause the dist to skip that command since only commands
        -- with the snapshot bit set will be processed if they are <= ss_cplt_seqno.
        --
        UPDATE MSsubscriptions
        SET status = @active,
            subscription_time = GETDATE(),
            ss_cplt_seqno = @xact_seqno
        WHERE publisher_id = @publisher_id
          AND publisher_db = @publisher_db
          AND publication_id = @publication_id
          AND sync_type = @automatic
          AND status = @initiated
          AND ss_cplt_seqno <= @xact_seqno
        OPTION (OPTIMIZE FOR (@automatic=1, @initiated=3, @publisher_id UNKNOWN,
                              @publisher_db UNKNOWN, @xact_seqno UNKNOWN))
    END
END
GO
```
I know that security people like to remove permissions from everything before certifying that a server is ready to go into production. And 10+ years ago that was something that you might have wanted to do (I’m just talking about SQL Server here). However, in today’s world of SQL Server 2005 and newer that isn’t needed. These newer versions are designed to take security much more seriously than before. The rights that are granted to public in the master and msdb databases should be left the hell alone.
If you are going to go around revoking database permissions that you don’t understand, don’t come to me complaining that your SQL Server isn’t working correctly. Guess what: those permissions were there for a reason, and should be left alone. If you have some outdated security mandate that says that all database permissions must be revoked from public before the server can be used, then you had damn well better understand what that means. And you should probably update your stupid security policy so that it reflects the changes that have been made in the product over the last 10 or so years. Even under SQL Server 2000 I never recommended that people remove all the rights from public. Not if you wanted the SQL Server to work as expected.
If you have already gone and removed all the permissions, then you will probably want to install a new SQL Server, find all the permissions there, and grant them back. That, or restore your master database to a state from before you screwed it up, which is what I recommended the person in the forum thread above do. If you intentionally break your SQL Server, don’t expect much sympathy from me.
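If you need to audit (or restore) what public is granted, you can pull the list straight from the catalog views; a sketch, run per database (master and msdb are the ones that matter here):

```sql
USE master;
GO
-- List everything granted to (or denied from) the public database role.
SELECT dp.class_desc,
       dp.permission_name,
       dp.state_desc,
       OBJECT_NAME(dp.major_id) AS object_name  -- NULL for database-level grants
FROM sys.database_permissions dp
JOIN sys.database_principals pr
    ON dp.grantee_principal_id = pr.principal_id
WHERE pr.name = 'public';
```

Running this against a clean install of the same build gives you the baseline to compare against, which beats guessing at what to grant back.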
So just in case you missed it, I took and passed the Microsoft MCM lab last Thursday. Since I’m the first person to go through the test in the new format I wanted to put together some comments (as best I can thanks to NDAs).
So the MCM lab gives you 6 hours to complete it, and when you take it you’ll need most if not all of that time.
I basically finished in a little over 4 hours, then went back and reviewed a couple of the scenarios which took me until about the 5 hour mark. The exam is made up of several smaller scenarios which are independent and each of which will test your knowledge of one or more parts of the SQL Server engine.
As I mentioned in my prior blog post, you will need to know all aspects of the SQL Server engine very well. After taking the lab I can say that I feel this even more now. With the time limitation you need to know the product very well, as well as where to find useful information in Books Online. The lab is a closed book exam, and the only reference that you have access to is Books Online.
When you do decide to take the exam, don’t expect to get your results back as quickly as I did. Currently Microsoft only has one Lab Exam environment to do the tests on, and that environment had to be reset after I took it for another person so my exam had to be scored very quickly. Once the lab is released for general use scoring will take about 30 business days as the exam will be scored by hand to account for creative solutions to the scenarios as well as to account for creative ways that people may try and cheat on the exam.
I didn’t realize just how worried about taking the exam I was until I was done, and logged off of the environment. I could actually feel the stress just leave once I finished the exam. And this was before I knew if I had passed. I knew that I would get my result quickly because of the next person taking it so quickly, and I knew that if I passed I would be thrilled, but I didn’t realize just how happy I would be to have passed.
When you take the exam, you can’t take any water or food into the exam room, but you can leave some in a locker outside the exam room. You’ll want to bring food and drink with you. Getting hungry during the process sure won’t help you pass the exam; it’ll just distract you. Just keep in mind that snack breaks do count against your 6 hour window.
Personally I didn’t do a whole lot of studying (I talked about my professional experience in my last post). I watched most of the MCM videos that SQL Skills put out. I skipped ones that were on topics that I can speak on in my sleep, like storage, Service Broker, etc., just because I felt that I knew that material the best. I focused on the indexing internals and the security pieces, as those are the most complex parts of the database engine to me.
Hopefully this helps you on your trip through the MCM process.
OK, that’s totally BS, but it sounds awesome, doesn’t it?
But it is sort of true. I was grabbing the URL for my book to send to someone today, and I noticed that the Kindle version had a price listed. The book is $39.96 at the moment (I don’t set the price), so all of you that were waiting for the Kindle version, your wait is over. Amazon does have the paperback in stock again ($32.64 at the moment) if that is your preference.
All too often when I give a SQL Service Broker presentation, or when I’m talking to people about it, they want to know what the odds are that SQL Service Broker will be removed from the platform, like how Notification Services was ripped from the product after just one release.
So here’s just some of the SQL Server features which would need to be re-architected if SQL Service Broker was removed from the SQL Server product.
- Database Mail – Introduced in SQL Server 2005
- Database Mirroring – Introduced in SQL Server 2005
- Always On (HADR) – Introduced in SQL Server “Denali”
There is some other stuff which uses SQL Service Broker; BizTalk can use it, and I’ve heard rumors that Microsoft CRM will start using it at some point (if it doesn’t already). I’m sure, based on the way the SQL Server product team has been going, that even more new features will be based on SQL Service Broker.
So if you were worried about using SQL Service Broker because you’ve heard that not many people are using it, don’t be. The feature isn’t going anywhere.