SQL Server with Mr. Denny


February 7, 2015  5:17 AM

Recommended reading from mrdenny for February 06, 2015

SharePoint, SQL, SQL Server

This week I’ve found some great things for you to read. These are a few of my favorites that I’ve found this week.

This week’s SQL Server person to follow on Twitter is regbac, also known as Régis Baccaro.

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny

February 4, 2015  5:00 PM

No, I Will Not Be Using Your “Secure” Email System

Adobe security, Consumer confidence, Document retrieval, IT security, SQL

I get the occasional email with an attachment that requires me to log into some “secure” system in order to gain access to it. Usually it’s a PDF that I need to sign, either for personal or business reasons, and it’s usually a one-off process. Recently I’ve seen an even more annoying variation: the original email has the attachment, but the attachment (usually a PDF) is password protected, and I have to go to the annoying “secure” system to get the password.

Recently I got one of these from my insurance sales guy. He told me (as he emailed me the PDF without a password) that it was done for compliance reasons.

Let’s review why this is a waste of time.

You’ve sent me a document which has a password. The email includes a link to the website which holds the password. An attacker who wishes to steal this document must already have access to my mailbox. That means the attacker also has the URL, and can click the reset password link on the website, reset the website password, log in, and retrieve the document password. So the attacker now has the file and the password for the file. It took said attacker an extra 20-30 seconds to get the password.

That’s assuming the attacker didn’t spend the $20 for PDF Password Recovery, which would let them simply remove the password from the document without ever needing to know what it is. And that $20 is a one-time fee: they can unlock all the stolen PDFs they want after paying for the software, probably with a stolen credit card. Or they could just find a cracked version; I was too lazy to check whether one exists, but I’m confident that if I spent 10 minutes looking I could find one for free.

In short, I applaud the idea of making sending me a document more secure. “A” for effort, “F” for implementation. Two-factor authentication (which is basically what they are going for) doesn’t work when both factors rely on the same channel, in this case my email.

Now you are probably thinking that I must be crazy for allowing this sensitive information to be emailed around like this. The confidential information in this document was my name (“Denny Cherry”), the policy number of my new insurance policy, the amount of the policy, and my insurance guy’s work address. That’s it. Technically there’s nothing in there that really matters.

If we are going to make things “secure”, let’s make them actually secure. Enough of making things merely look secure to the general public. I get that we need to have some security around this sort of thing. If this system worked correctly, then when he uploaded the document to their secure system it would have asked him for my cell phone number and texted me the password for the document, so that the password arrived over a separate channel from the document itself. That would be secure and just about as easy to use.

Denny


January 30, 2015  6:10 PM

Recommended reading from mrdenny for January 30, 2015

SQL, SQL Server, SQL Server 2005, Windows Server 2003

This week I’ve found some great things for you to read. These are a few of my favorites that I’ve found this week.

This week’s SQL Server person to follow on Twitter is SQLNorthdoor, also known as SQL Northdoor.

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


January 28, 2015  5:00 PM

Random Unexplained Replication Job Failure in a Large SQL Server Replication Topology

Data Replication, Database replication, SQL, SQL replication, SQL Server, SQL Server 2005, SQL Server 2008, SQL Server 2008 R2, SQL Server 2012, SQL Server 2014, SQL Server replication

One of the environments that I work with includes a very large replication topology. While there are only a few servers involved there are dozens of publications between those servers. One of the problems that replication has is that it can only run so many agents at a time.

Now remember that SQL Server replication agents are just SQL Server Agent jobs. And while there are command line applications on the distributor (and on the subscriber if you are doing pull replication), that’s not quite how SQL Server replication runs them. You see, when you set up SQL Server replication there are custom job step types for the various replication tasks. These custom steps tell the SQL Server Agent to call custom DLLs, which then do some magic and do the same thing as the command line applications that sit on the server’s hard drive. At least that’s the theory.

Now on smaller topologies this all works just fine. Everything is happy and the jobs all run as expected. But on larger topologies you start getting some “strange” behavior from these jobs. They’ll randomly throw error messages, or the agents will say that they are running when they actually aren’t. The problem has something to do with the DLL that runs as part of the SQL Agent job.

Thankfully, the trick to fixing this is actually pretty simple. Open up the properties for the SQL Agent job which is running the replication agent, and edit step 2 of the job. Change the type from whatever the replication operation is to “Operating system (CmdExec)”. Then edit the command: before the batch of parameters, add the full command line path to the replication agent that you are trying to run. By default this will be “c:\program files\Microsoft SQL Server\120\com\” for SQL Server 2014. For older versions of SQL Server just change the 120 to the version number: 110 for SQL 2012, 100 for SQL 2008, 90 for SQL 2005, and 80 for SQL 2000. Then append the name of the executable that you need to use; you’ll find the list of applications below.

Application Name    Replication Agent
snapshot.exe        Snapshot Agent
logread.exe         Log Reader Agent
distrib.exe         Distribution Agent

When editing the SQL Agent job, if there are spaces in the path to the executable you’ll need to put double quotes around the full path and executable, but not around the parameters, as shown below.

"c:\program files\Microsoft SQL Server\120\com\snapshot.exe" -Publisher [Server1] -PublisherDB [MyDB] -Distributor [Server2] -Publication [MyPublication] -DistributorSecurityMode 1

You shouldn’t need to make any changes to the parameters. Just save and close the SQL Agent job, then restart the replication agent, either from Replication Monitor or by manually starting the SQL Agent job, and everything will just keep on cranking along as expected.
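
If you have a lot of agents to convert, you can script the same change rather than clicking through each job. Here’s a minimal sketch using msdb.dbo.sp_update_jobstep; the job name below is hypothetical, so look up the real agent job names on your distributor first.

-- Sketch: switch a replication agent job step to CmdExec.
-- The job name is a hypothetical placeholder; verify yours in msdb.
EXEC msdb.dbo.sp_update_jobstep
    @job_name  = N'Server1-MyDB-MyPublication-Snapshot',  -- hypothetical job name
    @step_id   = 2,                                       -- the step that runs the agent
    @subsystem = N'CmdExec',                              -- replaces the custom replication subsystem
    @command   = N'"c:\program files\Microsoft SQL Server\120\com\snapshot.exe" -Publisher [Server1] -PublisherDB [MyDB] -Distributor [Server2] -Publication [MyPublication] -DistributorSecurityMode 1';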

Denny


January 24, 2015  6:46 AM

Recommended reading from mrdenny for January 23, 2015

SQL, SQL Server

This week I’ve found some great things for you to read. These are a few of my favorites that I’ve found this week.

This week’s SQL Server person to follow on Twitter is wsmelton, also known as Shawn Melton.

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


January 21, 2015  7:00 PM

Reducing Locking by the “Distribution clean up” Job

Database replication, SQL, SQL Server, SQL Server 2005, SQL Server 2008, SQL Server 2008 R2, SQL Server 2012, SQL Server 2014

Anyone who has had to manage a large SQL Server replication environment has had to deal with strange locking within the distribution database. This is because the distribution database is basically a giant name/value pair database with a TON of jobs inserting, updating, querying, and deleting huge amounts of data.

Within a SQL Server distribution server there are a few different things that you need to watch. There are the log readers, which are inserting new data into the distribution database; the distribution agents, which are querying and updating rows in the distribution database; and the clean up jobs, which are deleting rows that have been replicated to all the needed subscribers.

One of the clean up jobs, called the “Distribution clean up: distribution” job, likes to take large locks on the distribution tables while it’s trying to delete data. In a smaller environment you probably won’t notice this all that much, because your replication tables are pretty small and the data is deleted pretty quickly. Looking at the Space Used by Top Tables report in SQL Server Management Studio, we can see that this replication topology isn’t all that small.

[Screenshot: Space Used by Top Tables report for the distribution database]

Yes, that says that there are 757M commands to be replicated across 300M transactions. In other words, there’s a lot of stuff that needs to be replicated.

When looking at this system recently I noticed that several of the log reader jobs were in an error state because they hadn’t been able to report in for at least 10 minutes, and some of them had been in this state for quite a while. Looking at the output from sp_whoisactive, the reason became clear very quickly: there was blocking going on in the distribution database, and the log reader jobs were being blocked. Looking into who was doing the blocking, I saw that it was a DELETE statement. Looking at the list of running SQL Agent jobs on the server that are clean up jobs (as these are the only ones which should be deleting data), I saw that the distribution clean up job was running. All this job does is run a stored procedure, which makes it nice and easy to see what it’s doing.
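
If you want to reproduce that investigation, here’s a quick sketch, assuming Adam Machanic’s sp_WhoIsActive is installed (it’s a free download, not a built-in procedure):

-- Show active sessions and flag the ones at the head of a blocking chain,
-- sorted so the worst blockers come first.
EXEC sp_WhoIsActive
    @find_block_leaders = 1,
    @sort_order = '[blocked_session_count] DESC';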

Reading through the stored procedure, I dug through a couple of additional stored procedures until I found sp_MSdelete_publisherdb_trans, which is the one that does the actual deletion of the data from the distribution database. Reading through that stored procedure, I ran across the actual DELETE statements against the MSrepl_commands table. These DELETE statements all have a WITH (PAGELOCK) hint on them, which I assume is there to prevent the DELETE statement from escalating to a table lock and locking the entire table. Unfortunately, when you’ve got a lot of different systems writing data into the MSrepl_commands table, like this system does (there are 47 publications on this distribution server), there can be data from different publications all stored on the same page.

My solution probably isn’t something that Microsoft would be all that happy about, but keeping them happy isn’t really my big worry. I changed the PAGELOCK hints to ROWLOCK. While this does of course require more locks to be taken, I’m OK with that overhead, especially as the DELETE statements all have a TOP (2000) on them. I made the same change to the sp_MSdelete_dodelete procedure as well, to ensure that it doesn’t do any additional bad locking on the MSrepl_transactions table.
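
To show the shape of the change: this is a before/after sketch, not the procedure’s real code. The WHERE clause and the variable are stand-ins for the procedure’s actual cleanup predicate, which tracks which commands have been delivered to every subscriber.

DECLARE @last_delivered_xact_seqno VARBINARY(16) = 0x0;  -- stand-in value

-- As shipped: page locks avoid lock escalation, but one page can hold rows
-- from several publications, so unrelated agents get blocked.
-- DELETE TOP (2000) FROM MSrepl_commands WITH (PAGELOCK)
-- WHERE xact_seqno <= @last_delivered_xact_seqno;  -- stand-in predicate

-- After the change: row locks only touch the rows actually being deleted.
DELETE TOP (2000) FROM MSrepl_commands WITH (ROWLOCK)
WHERE xact_seqno <= @last_delivered_xact_seqno;  -- stand-in predicate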

So if you have a large replication topology with lots of data flying through the system and you are seeing blocking on your log readers or distribution agents by the cleanup job, this may be a way to fix it.

In my case this is a SQL Server 2014 distributor, but these procedures haven’t been changed by Microsoft in a good long while, so this should work with all versions.

Denny


January 14, 2015  5:00 PM

Monitoring SQL Replication Latency

Database replication, SQL Server, SQL Server 2005, SQL Server 2008, SQL Server 2008 R2, SQL Server 2012, SQL Server 2014, SQL Server Distributor

When working with SQL Server replication, you don’t have a whole lot of options for monitoring the latency of the replication. You’ve got the SQL Server Replication Monitor, and that’s basically it.

Well, there is another option: you can manually run the code that Replication Monitor uses to get the values it displays.

To run it for a single publication (which is how Replication Monitor uses the procedure), you’d run it like this:

exec [distribution].sys.sp_replmonitorhelpsubscription @publisher = N'MyPublisher', @publisher_db = N'MyDatabase', @publication = N'MyPublication', @mode = 0, @exclude_anonymous = 0, @refreshpolicy = N'0'

But that only gives you data for the single publication in the single database. Any (or all) of the @publisher, @publisher_db, and @publication values can be NULL, making them wildcards. When you do that, it’ll give you the information for all the publications, publishers, and databases that match. Below you can see that I’m pulling the values for every publication which is monitored by the distributor.

exec [distribution].sys.sp_replmonitorhelpsubscription @publisher = NULL, @publisher_db = NULL, @publication = NULL, @mode = 0, @exclude_anonymous = 0, @refreshpolicy = N'0', @publication_type = 0

Now if I had a monitoring application, I could have it run this stored procedure and look for rows where the latency column is more than some threshold, say 120 seconds.
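
Here’s one way that could be wired up. This is a sketch, not production code: it assumes the “Ad Hoc Distributed Queries” option is enabled so OPENROWSET can capture the procedure’s result set without declaring every output column, and “localhost” is a stand-in for your distributor.

-- Pull the monitor output for every publication and flag high latency.
SELECT publication, publisher_db, subscriber, latency
FROM OPENROWSET('SQLNCLI', 'Server=localhost;Trusted_Connection=yes;',
    'EXEC distribution.sys.sp_replmonitorhelpsubscription
         @publisher = NULL, @publisher_db = NULL, @publication = NULL,
         @mode = 0, @exclude_anonymous = 0, @refreshpolicy = N''0'',
         @publication_type = 0')
WHERE latency > 120;  -- latency is reported in seconds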

Denny


January 9, 2015  4:00 PM

Recommended reading from mrdenny for January 09, 2015

SQL, SQL Server

This week I’ve found some great things for you to read. These are a few of my favorites that I’ve found this week.

This week’s SQL Server person to follow on Twitter is CrespiDB, also known as Rodrigo Crespi.

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


January 7, 2015  5:00 PM

If I have a filtered index, what do I need to have in my query to make SQL use it?

Database index strategy, SQL Server, SQL Server 2008, SQL Server 2008 R2, SQL Server 2012, SQL Server 2014

Filtered indexes are great, because they let you exclude rows from the index. But what do you need to include in your where clause to use the filtered index?

The answer here is surprisingly straightforward. You need to use the exact same predicate in your query’s WHERE clause as appears in the definition of the index. You also need that part of the WHERE clause to be a literal value in the query, not a parameter.

So for example “WHERE Active=1” will work, while “WHERE Active=@Active” won’t. The reason is that at compile time SQL Server doesn’t know what the value of the @Active parameter is, so it can’t use the filtered index. It has to behave this way because you could pass 0 into the @Active parameter, and then a plan using the filtered index wouldn’t be valid.
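
Here’s a quick demonstration you can run and compare the two execution plans. The Customers table is hypothetical, just there to illustrate the point.

-- A table where, typically, most rows are inactive.
CREATE TABLE dbo.Customers
(
    CustomerId INT IDENTITY(1,1) PRIMARY KEY,
    Name       NVARCHAR(100) NOT NULL,
    Active     BIT NOT NULL
);

-- Filtered index that only contains the active rows.
CREATE NONCLUSTERED INDEX IX_Customers_Active
    ON dbo.Customers (Name)
    WHERE Active = 1;

-- Can use the filtered index: the literal matches the index filter.
SELECT Name FROM dbo.Customers WHERE Active = 1;

-- Can't use it: at compile time @Active could be 0, so the plan
-- has to work for every possible value.
DECLARE @Active BIT = 1;
SELECT Name FROM dbo.Customers WHERE Active = @Active;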

Denny


January 2, 2015  10:08 PM

Recommended reading from mrdenny for January 02, 2015

Reporting Services, SQL, SQL Server, Windows 10

This week I’ve found some great things for you to read. These are a few of my favorites that I’ve found this week.

This week’s SQL Server person to follow on Twitter is PASS_RM_Canada, also known as PASS RM Canada.

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny

