SQL Server with Mr. Denny

May 6, 2015  6:00 PM

Should I use Backup Compression or NTFS Compression for Backups?

Denny Cherry

There are a few different ways to compress your database backups when backing them up to disk. The two easiest are database backup compression (using the WITH COMPRESSION option of the BACKUP DATABASE command) or turning on NTFS compression on the backup folder. Recently I was working with a client on a database migration; their old scheme used NTFS compression, while I was using SQL backup compression on the server they were migrating to.

So this sounded like a perfect time to do a little testing while other backups and restores were running.

The backup files that I'm playing with are from a SQL 2014 database. The database was backed up twice: once to a backup file using SQL compression, then to a second backup file without compression. The uncompressed backup file was then copied to the old Windows 2008 R2 server to see how well NTFS handled the compression.
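
If you want to reproduce this comparison yourself, something like the following will do it (a rough sketch only; the instance name, database name, and paths are placeholders for your environment):

Import-Module SQLPS -DisableNameChecking

# Back up the same database twice, once with SQL compression and once
# without, so the resulting files can be compared.
Invoke-Sqlcmd -ServerInstance "MyServer" -QueryTimeout 0 -Query @"
BACKUP DATABASE [MyDatabase]
  TO DISK = N'D:\Backups\MyDatabase_compressed.bak'
  WITH COMPRESSION, INIT;
BACKUP DATABASE [MyDatabase]
  TO DISK = N'D:\Backups\MyDatabase_uncompressed.bak'
  WITH NO_COMPRESSION, INIT;
"@

# Compare the sizes of the two backup files.
Get-ChildItem "D:\Backups\MyDatabase_*.bak" |
    Select-Object Name, @{n="SizeMB"; e={[math]::Round($_.Length / 1MB, 2)}}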

The database itself is about 4768.63 MB in size with 1180.09 MB of free space within the database.

The SQL compressed database backup was about 38 Megs, give or take. The SQL uncompressed database backup was about 3.5 Gigs, which is basically what you'd expect, as that's about how much data is in the database.


So the SQL backup compression here is pretty damn good. Taking those same files and copying them to a folder which has NTFS compression enabled, we see that the uncompressed backup has been compressed to 1.28 Gigs, while the compressed backup is still 38 Megs (NTFS compression saves all of 512 bytes).
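
If you want to try the NTFS side of this yourself, compression can be turned on from the folder's properties dialog or from the command line (the path here is a placeholder):

# Enable NTFS compression on the folder and everything already in it.
compact.exe /C /S:"D:\Backups"

# Report the per-file compression ratios afterwards.
compact.exe /S:"D:\Backups"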



So if you are looking for a way to store your backups in a compressed format, SQL backup compression is going to be the way to go. The good news is that backup compression was first introduced in SQL Server 2008 as an Enterprise Edition feature, then moved into Standard Edition in SQL Server 2008 R2, so almost every production SQL Server out there should be able to use it.


(As always when looking at compression metrics from someone, your mileage may vary depending on the kind of data in the database, the alignment of the moon, the color of the sky, etc.)

April 29, 2015  4:00 PM

Stop Touching the Public Database Role

Denny Cherry
SQL, SQL Server, SQL Server security

Odds are you don't need to make permissions changes to the public database role. Back in the SQL 7 days making changes to it was pretty common, as the database engine wasn't all that well secured. Today, on SQL 2005 and newer, there's no reason to change any of the default permissions that Microsoft has set up.

I've seen auditor requirements to remove access to things like xp_instance_regread, as it could potentially access things in the registry that an attacker could use. I suppose this is true, if you haven't set up the startup accounts correctly. If SQL is running under an account which is a member of the local Administrators group then yes, someone could read registry keys which they shouldn't have access to. But there also shouldn't be anything all that sensitive written to the registry on a production SQL Server. SQL puts basically nothing in there other than a few startup parameters, and there really shouldn't be anything else installed on the system. Anything sensitive that Windows writes to the registry is going to be encrypted.

Now what'll break by changing these permissions could be just about anything. I know, for example, that if users don't have access to run xp_instance_regread they won't be able to use SSMS, as they'll get an error when they connect to the instance. So to make the error go away you now have to grant the EXECUTE right on xp_instance_regread to every user that connects to the database engine with SSMS (which is probably most of them). This means that removing it from public was meaningless, as it's still granted to everyone that connects, just via their login.
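
If an auditor forces the issue anyway, the resulting busy-work looks something like this (just a sketch; the server name and the Windows group are made up, and the group needs a user mapped in master for the grant to work):

# Revoking from public is what the auditors ask for...
Invoke-Sqlcmd -ServerInstance "MyServer" -Database "master" -Query "
REVOKE EXECUTE ON sys.xp_instance_regread FROM public;
"

# ...and then SSMS breaks, so you grant it right back to everyone
# who connects, just via a different principal.
Invoke-Sqlcmd -ServerInstance "MyServer" -Database "master" -Query "
GRANT EXECUTE ON sys.xp_instance_regread TO [CONTOSO\SSMSUsers];
"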

Now I know that it's a lot easier for me to write about this than it is for you to defend leaving it alone to an auditor or to management, but hopefully you can get logic to prevail. So unless you really want to do a lot of security-related troubleshooting that you shouldn't need to do, don't go messing with the public role's permissions.


April 22, 2015  4:00 PM

Take Your Time With Azure Site to Site VPN Configurations

Denny Cherry
SQL, SQL Server, VPN, Windows Azure

Setting up Azure site to site VPN endpoints can be a real treat sometimes. Recently I was setting up a site to site VPN between two Azure sites, one in the US West data center and the other in the West Europe data center.

Now the annoyance when setting up a site to site VPN between two Azure sites is that you have to do some of the work in the Azure portal (manage.windowsazure.com) and some of it through PowerShell. You have to do this because (as of when I'm writing this) you have to create the dynamic routing gateway in the portal, since PowerShell only supports creating a static routing gateway, and you have to change the pre-shared key in PowerShell, since the "Manage Key" button in the portal only shows you the key but doesn't let you change it.

When I was trying to get this set up I was trying to get it done as quickly as possible, as I had other stuff to focus on that day. And apparently I wasn't waiting long enough for the operations which the UI was running to actually finish before running the PowerShell cmdlet Set-AzureVNetGatewayKey to change the pre-shared key. This ended up causing problems for the operations which the UI had kicked off; the gateway would only be half created and wouldn't show up correctly, so I'd drop the gateway and try again.

Long story short, I ended up starting the gateway creation, then went to dinner. When I came back everything was done, and I could change the pre-shared key and get everything up and running without issue.
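
The scriptable version of "go to dinner" looks something like this (a sketch using the old Service Management cmdlets; the VNet and local network site names are placeholders):

# Poll until the gateway is actually provisioned before touching it.
do {
    Start-Sleep -Seconds 60
    $gateway = Get-AzureVNetGateway -VNetName "VNet-USWest"
} while ($gateway.State -ne "Provisioned")

# Only now is it safe to change the pre-shared key. Run the matching
# command against the other VNet so both sides use the same key.
Set-AzureVNetGatewayKey -VNetName "VNet-USWest" `
    -LocalNetworkSiteName "VNet-WestEurope" `
    -SharedKey "UseTheSameKeyOnBothSides"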

When creating site to site VPN links between Azure sites, take your time. You'll spend less time overall if you slow down than if you rush.


April 17, 2015  6:00 PM

Recommended reading from mrdenny for April 17, 2015

Denny Cherry
Apple, SQL, SQL Azure, SQL Server, SQL Server Integration Services

This week I've found some great things for you to read. These are a few of my favorites.

This week's SQL Server person to follow on Twitter is: PASSAppDev, also known as the PASS AppDev VC.

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.


April 15, 2015  4:00 PM

Where to find the Azure PowerShell SDK ToolKit

Denny Cherry
download, downloading, Powershell, SQL, SQL Azure, SQL Server, Windows Azure

If you do basically any work with Azure you'll need to get the Azure PowerShell toolkit at some point. Finding the Azure PowerShell SDK download page is a royal pain, so to make life easier I've set up a quick redirect that will get you to the correct place. Simply browse to www.getazureposh.com and you'll be redirected to the GitHub download page for the Azure PowerShell SDK.

The most recent build is on the top. Use that one.

You are welcome.


April 10, 2015  4:00 PM

Recommended reading from mrdenny for April 10, 2015

Denny Cherry
SQL, SQL Server

This week I've found some great things for you to read. These are a few of my favorites.

This week's SQL Server person to follow on Twitter is: _dbassassin, also known as Tim M. Hidalgo.

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.


April 8, 2015  6:00 PM

PASS Leaks Sponsor Data

Denny Cherry
Data Leakage, Data privacy, Data protection, personal data, SQL, SQL Server

So early this week the PASS organization (www.sqlpass.org) released their shiny new SQL Saturday website (www.sqlsaturday.com). There were a few glitches in the session submission process, but nothing that isn't expected with a major overhaul of a much-used application.

The Problem

There was, however, a bigger problem with the system. Anyone who went to the sponsor page was able to, without even needing to log in, pull up the address that was entered for each sponsor. Now you might think that this isn't all that big a deal, because who cares if sponsors have their office addresses shown online? The problem comes from the fact that many SQL Saturday events offer a blogger level of sponsorship for just a few dollars. This level of sponsorship is usually purchased by individual people who are trying to drive a little more traffic to their blog while helping a SQL Saturday out, so when filling out the sponsor address information in the past they would have put in their home address. Here for example is Robert Davis's information when selected from the sponsor list (with his actual address marked out).


This is obviously a problem. There are about 1390 sponsors in the system, many of which are just people with blogs who have sponsored a SQL Saturday or two for a few bucks each. Now I know that isn't the entire PASS community, but 1000+ people's home addresses is a pretty big leak.

The Response

PASS as an organization has responded to this problem pretty well. The problem was first reported by me on the MVP mailing list, where we were discussing various things that we weren't really thrilled with on the new SQL Saturday website (most if not all of the current board members are Microsoft MVPs, so there's actually a good reason to discuss things like this on the MVP mailing list). The board members then began bringing people from PASS HQ into the conversation with email threads outside of the MVP list. PASS immediately realized that there was a pretty big problem that needed to be addressed. How to address it became a more complex problem.

The quickest solution that PASS was able to come up with was to simply remove the page where a sponsor can sponsor a SQL Saturday and to remove the API which that page used to pull that information from the SQL Saturday database. It was a couple of hours from reporting the problem to getting the offending part of the website pulled down, but in the grand scheme of things that's a pretty decent response time, given that the developer had to do whatever magic is required to remove pages from an ASP.NET site running on DotNetNuke (DNN), do a quick test to make sure nothing else was broken, then publish the changes to the live website.

Later Monday night PASS decided to take the entire SQL Saturday website offline, leaving just a maintenance message on the site while the whole thing is reviewed from a security (and possibly usability) perspective, with the hope of getting the site back up within a day or two. Who this really sucks for is the SQL Saturdays running their events this coming weekend (April 11th, 2015), which includes the SQL Saturday here in Orange County, CA. I'm presenting a session on something, at some time. I honestly have no idea what or when; hopefully the site comes back up before the weekend so that I can grab the correct slide decks and demos.


If you follow things PASS-related on Twitter, you probably noticed that you didn't see much about this there. That was on purpose. Given that it was a data leak, no one wanted the private information of these 1000+ PASS members to be spread even further. The problem was reported privately to a small group of people (the Microsoft MVPs) and to PASS. PASS board members were very transparent with the MVPs as to their handling of the problem and how it was going to get fixed. They did their best to keep me, as the person who found and reported the problem, up to date as to what was going on with the fix.

I'm sure that PASS is still trying to figure out what needs to be done as far as notification, monitoring, etc. But that involves lawyers, money, and time. I'm sure that once they get that all figured out they'll have some sort of communication for those whose data may have been affected.

In the grand scheme of things PASS responded quickly and properly to the data leak.  Hopefully we won’t have another data leak from our organization of data management professionals.


April 1, 2015  4:00 PM

Where Does Magic SAN Dust Come From?

Denny Cherry
Magic Software, SQL, SQL Server, Storage arrays

Magic SAN dust is a very expensive product for the storage vendor to purchase and bundle with the arrays. This is why storage costs so much.

You see, they start by having to catch pixie fairies. From there they need to collect their tears. Now you'd think that this would be pretty easy: a little light torture and you're good to go. However, not just any tears will work. Tears of joy are what's required, and a lot of them. Your standard storage array is going to need somewhere in the neighborhood of 3 ounces of tears of joy.

Once collected, these tears are filtered through a filter made of the tail hair of a unicorn, turning the tears into "mudding". This hair must be plucked, not cut, and must be obtained from a single unicorn. The hair is spun into a fine silk-like string, and a single strand of it is used to create the filter. There can be no breaks in the string, and no contaminants, or the process won't work and the unicorn hair will simply absorb the tears of joy.

Once the tears of joy have been filtered through the unicorn tail hair filter, they are fermented in a leprechaun's pot of gold for 21 days, which turns the "mudding" into "mash" (so called because the "mudding" is mashed flat under the weight of the gold). If the mudding is taken out too early it will be too sweet, and if left in too long it will sour.

Now for the tricky part. The "mash" is then removed from the pot, laid out on a blanket of spider silk, and completely dried. Once completely dried, this ultra-fine powder is the "Magic SAN Dust" that we all know and love. The three ounces of tears of joy that we started with will turn into just 0.05 grams of Magic SAN Dust, which needs to be gently sprinkled on the storage array as it's being shipped. The Magic SAN Dust unfortunately wears off and can't be reapplied, which is why SAN vendors force you to buy a new array every 3-5 years (the length of time the Magic SAN Dust lasts depends on the quality of the tears that you start with).


March 30, 2015  6:12 PM

Recommended reading from mrdenny for March 27, 2015

Denny Cherry
SQL, SQL Server

This week I've found some great things for you to read. These are a few of my favorites.
SQL Automated Backup and Patching Support in Existing SQL Server Virtual Machines
Rebuild Partitioned Clustered Columnstore Indexes
Handling NULL Character \x00 when Exporting to File Using BCP
Monitoring Read/Write Latency
Updated SQL Server PHP Driver Now Available
This week's SQL Server person to follow on Twitter is: denglishbi, also known as Dan English.
Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.


March 25, 2015  6:00 PM

Failover Clustered Instances in Azure

Denny Cherry
Failover Clustering, Microsoft Cluster Server, SIOS, SQL, SQL Server

For a few weeks now, failover clustered instances in the Microsoft Azure cloud have been possible by using SIOS DataKeeper Cluster Edition to cluster the VMs together and get yourself shared storage. This has actually been possible for a while; you just needed to know how to do it. Now it's a fully Azure Certified configuration, and VMs with SIOS DataKeeper preinstalled are even available in the Azure Marketplace.

Now when setting up clustering in Azure you need to be sure to follow the various scripts which are out there so that you can set up what's called the Internal Load Balancer (the ILB) within Azure. The scripts which I like the most are Dave Bermingham's and can be found on his blog.

Now when you get down to the "Create an Internal Load Balancer" section, pay special attention to some of the settings in the Get-AzureVM lines, as some of these values determine how quickly the ILB sees that the SQL instance has moved from one VM to another. Under the default settings shown in Dave's blog post (don't blame him, these are the same scripts that you'll find on MSDN, I just like how Dave presents them better) you'll see that the ProbeIntervalInSeconds parameter is set to 10 seconds, which means that the ILB will only check which VM the clustered IP address is running on every 10 seconds. By default the probe must fail twice before the ILB will move connections to the new VM. This means that the cluster will be down for an additional ~20 seconds between when SQL comes up on the new cluster node and when connections will successfully reach it.

You can reduce this time by setting the ProbeIntervalInSeconds parameter to a lower value. The lowest supported value is 5 seconds, which cuts the outage from ~20 seconds to about ~10 seconds. That's definitely something I'd recommend, as we want to keep the downtime window as short as possible; the whole reason for clustered SQL instances is the most availability possible.
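
For reference, the relevant part of those scripts looks roughly like this with the probe interval turned down to 5 seconds (the service, VM, subnet, IP, and ILB names here are all made up; see Dave's post for the full, authoritative version):

# Create the ILB on the cloud service.
Add-AzureInternalLoadBalancer -ServiceName "SQLCluster" `
    -InternalLoadBalancerName "SQLILB" `
    -SubnetName "Subnet-1" -StaticVNetIPAddress "10.0.0.100"

# Add the load balanced SQL Server endpoint to each cluster node,
# probing every 5 seconds instead of the default 10.
Get-AzureVM -ServiceName "SQLCluster" -Name "SQLNode1" |
    Add-AzureEndpoint -Name "SQLEndpoint" -LBSetName "SQLLBSet" `
        -Protocol tcp -LocalPort 1433 -PublicPort 1433 `
        -ProbePort 59999 -ProbeProtocol tcp `
        -ProbeIntervalInSeconds 5 `
        -InternalLoadBalancerName "SQLILB" -DirectServerReturn $true |
    Update-AzureVM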

