SQL Server with Mr. Denny


February 20, 2015  7:07 PM

Recommended reading from mrdenny for February 20, 2015

Denny Cherry
SQL, SQL Server

This week I’ve found some great things for you to read. These are a few of my favorites that I’ve found this week.

This week’s SQL Server person to follow on Twitter is: Aschenbrenner, also known as Klaus Aschenbrenner

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny

February 20, 2015  7:02 PM

Attend PASS BA Con. Get Knowledge and Get Fit at the Same Time

Denny Cherry
SQL

Are you planning on attending the PASS BA Conference, but you’re just looking for that right excuse to get signed up? For a limited time (from now until March 31, 2015), if you sign up for the PASS BA Conference and put in my name and email address as the person who referred you (just like I’ve posted below), you’ll get a FREE FitBit Flex activity tracker (the legal stuff is at the bottom). So go get registered before the price jumps from $1595 to $1795 on March 17, and either way be sure to register before March 31st to get your free FitBit Flex. This offer can’t be combined with a discount code and is only available for the next 200 registrations, so get in quick!

If you aren’t sure whether you should go, check out the fantastic set of sessions scheduled for the conference, as well as the top-tier keynote speaker Mico Yuk, founder of BI Brainz and author of Data Visualization for Dummies. With the session lineup, the speakers, and the great sponsor lineup, this sure does look like the conference to attend for people in the BI and BA space.

Denny

Terms and Conditions: A referral to the conference and eligibility to receive one FitBit Flex (valued at US$99) is defined and limited to a complete registration to the conference not combined with any other offer or discount code. You must enter my first name, last name, and email address in the “How did you hear about the Conference” section of the online registration form in order to be eligible to receive the giveaway. Only one FitBit Flex will be available per individual registering for the Conference. By participating in the above promotion, each participant waives any and all claims of liability against PASS, its employees and agents, the promotion’s sponsors and their respective employees and agents, for any personal injury or other loss which may occur from the conduct of, or participation in, the promotion or from the use of the FitBit Flex. Full Terms & Conditions here.


February 13, 2015  11:56 PM

Recommended reading from mrdenny for February 13, 2015

Denny Cherry
SQL Server

Happy Friday the 13th.  This week I’ve found some great things for you to read. These are a few of my favorites that I’ve found this week.

This week’s SQL Server person to follow on Twitter is: hope_foley, also known as Hope Foley

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


February 11, 2015  4:00 PM

The New Nimble Caching Algorithm is Awesome

Denny Cherry
SQL

I’ve been working with a client who has been having some performance problems with their Nimble array for one very specific workload. This workload was a replication distributor that was receiving a huge number of inserts along with a very small number of updates, and the inserted rows are very wide. The insert performance was fantastic, as expected, since that was being written to flash. The performance problem happened when SQL Server was reading pages from disk so that the new rows could be written to those pages. This was apparently happening in such a pattern that the Nimble couldn’t identify the pattern and do any prefetching of those pages into its SSD cache.

Well, apparently this was a known issue, because at the recommendation of Nimble the firmware on the Nimble array was upgraded and the new caching algorithm enabled. As soon as that happened the array started pre-fetching the correct blocks into SSD cache, and the replication performance improved dramatically. Instead of losing about 3-4 hours per day (getting 20-21 hours of replication completed per 24-hour day) we were able to blaze through the backlog. At the time the firmware was upgraded, replication was about 8 or 9 days behind. Within 1-2 days the replication had completely caught up.

I’d say that’s pretty good.


February 7, 2015  5:17 AM

Recommended reading from mrdenny for February 06, 2015

Denny Cherry
SharePoint, SQL, SQL Server

This week I’ve found some great things for you to read. These are a few of my favorites that I’ve found this week.

This week’s SQL Server person to follow on Twitter is: regbac, also known as Régis Baccaro

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


February 4, 2015  5:00 PM

No, I Will Not Be Using Your “Secure” Email System

Denny Cherry
Adobe security, Consumer confidence, Document retrieval, IT security, SQL

I get the occasional email with some attachment that I then have to log into some “secure” system in order to access. Usually it’s a PDF that I need to sign, either for personal or business reasons, and it’s usually a one-off process. Recently I’ve seen an even more annoying process: the original email has the attachment, but the attachment (usually a PDF) is password protected, and I need to go to the annoying “secure” system in order to get the password.

Recently I got one of these from my insurance sales guy. He told me (as he emailed me the PDF without a password) that it was done for compliance reasons.

Let’s review why this is a waste of time.

You’ve sent me a document which has a password. The email includes a link to the website which has the password. Assume that there’s an attacker who wishes to steal this document; that means the attacker has access to my mailbox. So the attacker also has the URL, and can click on the reset password link on the website, reset my password to the website, log in, and get the document’s password. The attacker now has the file and the password for the file, and it took them an extra 20-30 seconds to get the password.

That’s assuming that the attacker didn’t spend the $20 to get PDF Password Recovery, which would allow them to simply remove the password from the document without needing to know what it is. And that $20 is a one-time fee: they can unlock all the stolen PDFs that they want after paying for the software, probably with a stolen credit card. Or they could just find a cracked version; I was too lazy to check whether one exists, but I’m confident that if I spent 10 minutes looking I could find one for free.

In short, I applaud the idea of making sending me a document more secure. “A” for effort, “F” for implementation. Two-factor authentication (which is basically what they are going for) doesn’t work when both factors rely on the same device, in this case my email software.

Now you are probably thinking that I must be crazy for allowing this sensitive information to be emailed around like this. The confidential information in this document was my name (“Denny Cherry”), the policy number of my new insurance policy, the amount of the policy, and my insurance guy’s work address. That’s it. Technically there’s nothing in there that really matters.

If we are going to make things “secure”, let’s make them actually secure. Enough of this making things merely look secure to the general public. I get that we need to have some security around this sort of thing. If this system worked correctly, when he uploaded the document to their secure system it would have asked him for my cell phone number, and then texted me the password for the document, so that the document and the password arrived through two separate channels. That would be secure, and just about as easy to use.

Denny


January 30, 2015  6:10 PM

Recommended reading from mrdenny for January 30, 2015

Denny Cherry
SQL, SQL Server, SQL Server 2005, Windows Server 2003

This week I’ve found some great things for you to read. These are a few of my favorites that I’ve found this week.

This week’s SQL Server person to follow on Twitter is: SQLNorthdoor, also known as SQL Northdoor

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


January 28, 2015  5:00 PM

Random Unexplained Replication Job Failures in a Large SQL Server Replication Topology

Denny Cherry
Data Replication, Database replication, SQL, SQL replication, SQL Server, SQL Server 2005, SQL Server 2008, SQL Server 2008 R2, SQL Server 2012, SQL Server 2014, SQL Server replication

One of the environments that I work with includes a very large replication topology. While there are only a few servers involved, there are dozens of publications between those servers. One of the problems with replication is that it can only run so many agents at a time.

Now remember that SQL Server replication agents are just SQL Server Agent jobs. And while there are command line applications on the distributor (and on the subscriber if you are doing pull replication), that’s not quite how SQL Server replication works. You see, when you set up SQL Server replication there are custom job step types for the various replication tasks. These custom steps tell the SQL Server Agent to call custom DLLs which then do some magic and do the same thing as the command line applications which sit on the server’s hard drive. At least that’s the theory.
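
If you want to see these custom step types for yourself, a quick query against the standard msdb catalog views will list them. This is just a sketch; the subsystem names shown are the replication job step types, and you may have only some of them in your environment:

-- List SQL Agent jobs whose steps run replication agents.
SELECT j.name AS job_name,
       s.step_id,
       s.subsystem,
       s.command
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobsteps AS s
  ON s.job_id = j.job_id
WHERE s.subsystem IN (N'LogReader', N'Distribution', N'Snapshot', N'Merge', N'QueueReader')
ORDER BY j.name, s.step_id;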

Now on smaller topologies this all works just fine. Everything is happy, and the jobs all run as expected. But on larger topologies you start getting some “strange” behavior from these jobs. They’ll randomly throw error messages, or the agents will say that they are running when they actually aren’t. The problem has something to do with the DLL that you are running as part of the SQL Agent job.

Thankfully, the trick to fixing this is actually pretty simple. Open up the properties for the SQL Agent job which is running the replication agent, and edit step 2 of the job. Change the type from whatever the replication operation is to “Operating system (CmdExec)”. Then edit the command: before the batch of parameters, add the full path to the replication agent that you are trying to run. By default this will be “c:\program files\Microsoft SQL Server\120\com\” for SQL Server 2014. For older versions of SQL Server just change the 120 to the version number: 110 for SQL 2012, 100 for SQL 2008, 90 for SQL 2005 and 80 for SQL 2000. Then put in the name of the executable that you need to use; you’ll find the list of executables below.

Application Name    Replication Agent
snapshot.exe        Snapshot Agent
logread.exe         Log Reader Agent
distrib.exe         Distribution Agent

When editing the SQL Agent job, if there are spaces in the path to the executable, you’ll need to put double quotes around the full path and executable, but not around the parameters, as shown below.

"c:\program files\Microsoft SQL Server\120\com\snapshot.exe" -Publisher [Server1] -PublisherDB [MyDB] -Distributor [Server2] -Publication [MyPublication] -DistributorSecurityMode 1

You shouldn’t need to make any changes to the parameters. Just save and close the SQL Agent job, then restart the replication agent, either from Replication Monitor or by manually starting the SQL Agent job, and everything will just keep on cranking along as expected.
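
If you have a lot of agents to convert, you can script the same change rather than clicking through each job. Here’s a rough sketch using msdb.dbo.sp_update_jobstep; the job name is a placeholder, and you should copy the real parameter string from the existing step 2 of your job:

-- Placeholder job name; pull the real command parameters from the existing step.
EXEC msdb.dbo.sp_update_jobstep
    @job_name  = N'MyReplicationAgentJob',
    @step_id   = 2,
    @subsystem = N'CmdExec',
    @command   = N'"c:\program files\Microsoft SQL Server\120\com\snapshot.exe" -Publisher [Server1] -PublisherDB [MyDB] -Distributor [Server2] -Publication [MyPublication] -DistributorSecurityMode 1';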

Denny


January 24, 2015  6:46 AM

Recommended reading from mrdenny for January 23, 2015

Denny Cherry
SQL, SQL Server

This week I’ve found some great things for you to read. These are a few of my favorites that I’ve found this week.

This week’s SQL Server person to follow on Twitter is: wsmelton, also known as Shawn Melton

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


January 21, 2015  7:00 PM

Reducing Locking by the “Distribution clean up” Job

Denny Cherry
Database replication, SQL, SQL Server, SQL Server 2005, SQL Server 2008, SQL Server 2008 R2, SQL Server 2012, SQL Server 2014

Anyone who has had to manage a large SQL Server replication environment has had to deal with strange locking within the distribution database. This is because the distribution database is basically a giant name-value pair database which has a TON of jobs inserting, updating, querying and deleting huge amounts of data from the database.

Within a SQL Server distribution server there are a few different things that you need to watch. There are the log readers, which are inserting new data into the distribution database; the distribution agents, which are querying and updating rows in the distribution database; and the cleanup jobs, which are deleting rows that have been replicated to all the needed subscribers.

One of the cleanup jobs, called “Distribution clean up: distribution”, likes to take large locks on the distribution tables while it’s trying to delete data. In a smaller environment you probably won’t notice this all that much because your replication tables are pretty small, so the data is deleted pretty quickly. Looking at the Space Used by Top Tables report in SQL Server Management Studio, we can see that this replication topology isn’t all that small.

[Screenshot: Space Used by Top Tables report for the distribution database]

Yes, that says that there are 757M commands to be replicated across 300M transactions. In other words, there’s a lot of stuff that needs to be replicated.
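
If you’d rather skip the report, a couple of sp_spaceused calls against the distribution database will show the same numbers (this assumes the default distribution database name of “distribution”):

USE distribution;
EXEC sp_spaceused N'dbo.MSrepl_commands';
EXEC sp_spaceused N'dbo.MSrepl_transactions';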

When looking at this system recently I noticed that several of the log reader jobs were in an error state because they hadn’t been able to report in for at least 10 minutes, and some of them had been in this state for quite a while. Looking at the output from sp_whoisactive, the reason became clear very quickly: there was blocking going on in the distribution database, and the log reader jobs were being blocked. Looking into who was doing the blocking, I saw that it was a DELETE statement. Looking at the list of running SQL Agent jobs on the server that are cleanup jobs (as these are the only ones which should be deleting data), I saw that the distribution cleanup job was running. All this job does is run a stored procedure, which makes it nice and easy to see what it’s doing.
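
For reference, this is roughly the sp_whoisactive call I’d use to hunt down blocking like this. The @find_block_leaders parameter and the blocked_session_count sort order are options in Adam Machanic’s procedure, though you should double-check them against the version you have installed:

EXEC sp_WhoIsActive
    @find_block_leaders = 1,
    @sort_order = '[blocked_session_count] DESC';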

Reading through the stored procedure, I dug through a couple of additional stored procedures until I found sp_MSdelete_publisherdb_trans, which is the one that does the actual deletion of the data from the distribution database. Reading through that procedure, I ran across the actual DELETE statements against the MSrepl_commands table. These DELETE statements all have a WITH (PAGLOCK) hint on them, which I assume is there to prevent the DELETE statement from escalating to a table lock and locking the entire table. Unfortunately, when you’ve got a lot of different systems writing data into the MSrepl_commands table like this system does (there are 47 publications on this distribution server), there can be data from different publications all stored on the same page.

My solution probably isn’t something that Microsoft would be all that happy about, but keeping them happy isn’t really my big worry. I changed the PAGLOCK hints to ROWLOCK. While this does of course require more locks to be taken, I’m OK with that overhead, especially as the DELETE statements all have a TOP (2000) on them. I made the same change to the sp_MSdelete_dodelete procedure as well, to ensure that it doesn’t do any additional bad locking on the MSrepl_transactions table.
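
To show the shape of the change, here’s an illustrative sketch of that kind of DELETE after swapping the hint. This is not Microsoft’s actual procedure body; the real WHERE clause and variables inside sp_MSdelete_publisherdb_trans are different, and the fix is to edit the procedure itself, not to run this:

-- Illustrative only; the variables are placeholders, not Microsoft's actual code.
DECLARE @publisher_database_id int = 1;
DECLARE @max_xact_seqno varbinary(16) = 0x00;
DELETE TOP (2000)
FROM dbo.MSrepl_commands WITH (ROWLOCK)   -- was WITH (PAGLOCK)
WHERE publisher_database_id = @publisher_database_id
  AND xact_seqno <= @max_xact_seqno;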

So if you have a large replication topology with lots of data flying through the system and you are seeing blocking on your log readers or distribution agents by the cleanup job, this may be a way to fix it.

In my case this is a SQL Server 2014 distributor, but these procedures haven’t been changed by Microsoft in a good long while, so this should work with all versions.

Denny

