SQL Server with Mr. Denny

May 10, 2017  5:10 PM

Microsoft announces MySQL as a Service

Denny Cherry


Today at Microsoft’s //build conference, Microsoft announced a couple of new data platform services for people who are looking to run MySQL-driven applications in Microsoft’s cloud as a Platform as a Service (PaaS) solution. This solution gives you the same flexibility and ease of management as the other PaaS data platform solutions that Microsoft has made available in their offering, but now with a fully Open Source platform.

In addition to the MySQL platform, they also announced PostgreSQL as an open source offering in their PaaS lineup. While the PostgreSQL announcement is just as huge, I’m focusing on the MySQL announcement as that’s the part of the platform I’ve been working with, since it hosts WordPress.


Why would Microsoft be putting out these open source offerings? Because they’ve been on the roadmap for quite some time. Remember that a large portion of Azure is running open source software and platforms for customers. But moving those open source platforms into a PaaS offering is quite a bit harder than just standing up a couple of VMs and calling it a PaaS platform. There’s a massive amount of tooling and automation that has to happen on the back end to configure all this, plus setting up the portal to handle management, as well as the integration with all the other components in Azure (OMS, firewall, monitoring, alert rules, etc.).

Now I know that some people are going to drone on and on about how this is just Microsoft reacting to Google’s announcement back in March. And that couldn’t be further from the truth. Microsoft has been working on this service for months and it’s been in Private Preview since long before Google’s announcement. I know this because several of our websites have been using this service as the database backend for months now.

What you get when you spin up a MySQL (or PostgreSQL) database in this new PaaS service is a MySQL instance with HA built into the platform. You just connect to it like you would any other MySQL service by using the MySQL command line tools (or any of the web based management tools) and you create databases and point your applications at them.
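For example, connecting from the standard MySQL command line client is just a normal connection (the server name, admin user, and database name below are made-up placeholders; your actual values come from the portal):

```
# Connect with the stock mysql client, then create a database as usual
mysql --host=myserver.mysql.database.azure.com --user=myadmin@myserver -p

CREATE DATABASE wordpress;
```

From there you just point your application's connection string at the new database.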

Is it complete?

My assumption (and hope) is that the goal for these open source PaaS Data Platform services is parity with the SQL Server based solutions. So as this platform gets closer to GA, we’ll hopefully see the feature gap between the SQL Server based data platform solution and the MySQL based solution shrink.  Personally I can’t wait for multi-site configurations to be available so we can stretch a MySQL database across multiple sites; then I’ll have to come up with a plugin that’ll redirect read-only queries to another database server in a MySQL configuration, which will be kind of cool to work on.

This sure is an exciting time to be working in Data Platform, especially in Azure, as Microsoft brings more Open Source Software into the Azure stack.

Using MySQL in PaaS

DCAC has been using this MySQL in PaaS solution for quite a while now, since probably sometime in January, to run several of our sites, including the development version of our main website where we do some WordPress plugin development.  We put the dev site there because developing and testing WordPress plugins generates a lot of controlled workload for a server, so you can see everything that’s coming up and causing problems.  And frankly it’s been pretty easy to forget that this is a private preview service.

The response time for an OLTP application (which was what we were testing) was really good.  The database queries were a few milliseconds longer than when the MySQL database was running locally, and that’s all down to the network difference between talking to a VM in the same subnet and talking to a service in the same region, which is understandable and acceptable.  Going across regions introduces the expected level of network latency that you’d get going across the network from one city to another.  As this service is lit up in more regions this will become less and less of a problem, as you’ll be able to have MySQL databases in the same region as your web servers.

All in all we’ve been very happy with the service as we’ve been using it against development and customer facing websites over the last several months.  And we’re really looking forward to seeing what Microsoft gives us with the product in the coming weeks and months so that we can eventually move all our sites to it and stop managing VMs in Azure.


May 8, 2017  4:00 PM

Your Redgate Winners of an Amazon Echo Are…

Denny Cherry

Back in March Redgate kicked off a contest on a few blogs to give away some SQL Clone licenses and some Amazon Echo Dots.  That contest has come to a close, and Redgate has announced the winners of the contest. So if you entered on the DCAC blog post, Joey’s post, or any of the other pages, go check out Redgate’s post announcing the winners to see if you’ve won the Echo and a license.

(I believe there will be other posts from Redgate in the future that’ll have the winners of SQL Clone in them; I’m not really sure how they’re announcing those.)

Go check out their post, and check out SQL Clone as well with their free trial.


May 3, 2017  4:00 PM

Solving performance problems after large data change occurs

Denny Cherry
SQL Server

If you’ve worked with SQL Server for any period of time you’ve probably run into what appear to be pretty random occurrences of SQL Server starting to have performance problems, which go away after you rebuild the indexes. What’s causing the problem, and what’s actually solving it, are however not what they appear.

When this happens the common thought is that index fragmentation is causing the problem. And while the fragmentation isn’t helping you, it isn’t the root cause of the issue.

In most situations, however the cause of the problem is out of date statistics. And the solution is to update the statistics, or more specifically have SQL Server update them more often automatically.

If you are running SQL Server 2008 R2 SP1 through SQL Server 2014 then you want to turn on trace flag 2371 when SQL Server starts (you can set it as a startup parameter, and then turn it on by running DBCC TRACEON (2371, -1) and that’ll make it take effect until the next restart).  Starting with SQL Server 2016 this behavior is on by default under database compatibility level 130.  There’s really no downside to this trace flag being on, especially for systems that have large amounts of data change impacting query performance. The only negative impact that this trace flag will have is that the system will trigger statistics updates more often throughout the day. To ensure that this doesn’t cause problems we also need to enable asynchronous auto stats updates.

After you’ve made these changes, you’ll need to update statistics manually once, as the automatic update won’t trigger right away, and the problem shouldn’t return.
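Putting those steps together in T-SQL (the database name below is a placeholder; run the ALTER DATABASE against each database that needs it):

```sql
-- Turn on trace flag 2371 until the next restart
-- (also add -T2371 as a startup parameter so it survives restarts)
DBCC TRACEON (2371, -1);

-- Let auto stats updates happen asynchronously so queries
-- don't have to wait for the statistics update to finish
ALTER DATABASE [MyDatabase] SET AUTO_UPDATE_STATISTICS_ASYNC ON;

-- Update statistics once by hand, since the automatic
-- update won't trigger right away
EXEC sp_updatestats;
```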


April 26, 2017  4:00 PM

Partitioned Tables and ColumnStore

Denny Cherry

ColumnStore indexes are all the rage with data warehouses. They’re fast, they’re new(ish), and they solve all sorts of problems when dealing with massive amounts of data.  However they can cause some issues as well if you aren’t very careful about how you set up your partitions on the ColumnStore index.  This is because you can’t split a ColumnStore partition once it contains data.

Now, if everything is going according to plan you create your partitions well in advance and there are no issues.

However, if everything hasn’t gone according to plan and someone forgets to create the partitions and you end up with rows in the final partition, you can’t create any more partitions because you can’t split the partition.

We can see this pretty easily with a quick repro.

create partition function myRangePF1 (int)
AS RANGE LEFT FOR VALUES ( 1, 100, 1000 );

create partition scheme myScheme1
as partition myRangePF1
all to ([primary])

create table t1
(c1 int)
on myScheme1(c1)

create clustered columnstore index cs on t1 on myScheme1(c1)

insert into t1
values (1), (10), (100), (600)

With our table here we’ve got a partition function with break points at 1, 100, and 1000. We then insert rows into a columnstore index on that table with the values of 1, 10, 100 and 600. If we try and split our partitions using the ALTER PARTITION FUNCTION command with any value between 1 and 1000 we’ll get an error message.
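As a sketch of what that failing split looks like (the boundary value 500 is just an example; any value between the populated boundaries fails the same way):

```sql
-- Tell the scheme where the next partition's data would go
alter partition scheme myScheme1 next used [primary];

-- Fails, because the partition the new boundary lands in
-- already contains columnstore data
alter partition function myRangePF1 ()
split range (500);
```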


This is because those partitions contain data. If we try to merge any of the partition ranges (1, 100, or 1000) we’ll also get an error, because we can’t merge them together either.

So how do we work around this issue? Well the easiest way is to delete the data from the table, then fix the partitions, then put the data back. And frankly, that’s the only option. There’s a variety of ways to do that. The easiest option is probably to switch out that partition to an un-partitioned table. Then it’s just a matter of moving that data back into the partition in question.
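A sketch of that switch-out approach (the staging table name and partition number here are made up for illustration; the staging table has to match the partitioned table’s definition, including the columnstore index, and live on the same filegroup):

```sql
-- Staging table matching t1's definition, on the same filegroup
create table t1_staging (c1 int);
create clustered columnstore index cs_staging on t1_staging;

-- Switch the problem partition out to the staging table
alter table t1 switch partition 3 to t1_staging;

-- ...split/fix the partition function while the partition is empty...

-- Then move the data back into the partitioned table
insert into t1 select c1 from t1_staging;
drop table t1_staging;
```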

Now if the partition that you’re currently writing to is the last partition, and you need to keep writing to it while this is all happening, things are going to get a little more interesting, as you’ll need to take an outage in order to do all this work. There’s really no good solution besides taking an outage to move data around and get it correct in order to resolve this issue.

I wish I had a better answer here, but so far there’s no good solution.


April 19, 2017  3:00 PM

Correcting the Outbound Email Address for Office 365 Users When Using AD Connect

Denny Cherry
Office 365

That’s one hell of a long title, isn’t it? I ran across a little issue in our Office 365 account recently. That problem was users having the wrong outbound email address in Office 365. I was trying to figure out why a new account that I had set up in our dcac.co domain kept showing up in Outlook as username@dcassoc.onmicrosoft.com instead of username@dcac.co.

Well the reason it was showing up this way was because that’s how Office 365 was configured, of course. When I create a user (we’ve never had Exchange on prem, so we can’t see any of the Exchange tabs) I set up the account, let it sync to Azure, added the licenses, and didn’t think anything of it.

Later I saw the issue. Fixing it is pretty easy, if you know where to look.

Fixing it before the user is in AAD

If you are creating a new user in AD, making it so that the account is created correctly is actually pretty easy.

In the properties of the user, fill out the email address field in the user’s account. This will tell Azure Active Directory and Office 365 that the user’s outbound email address should be whatever you put in that field (assuming that it’s a domain that you have control over).

If the user’s account has already synced to Azure by the time you discover this problem, this won’t fix it (or at least it didn’t for me).

In the testing that I did, using this field only works at account creation.  But that’s OK, it’s still pretty easy to fix this after the user’s account has synced to Azure.

Here’s what the user account looks like in Azure after the initial sync has completed with the email field blank.


Fixing it after the user is in AAD

If the user is in AAD, and Office 365’s Exchange properties shows the user as the onmicrosoft.com account, all is not lost. You don’t need to do anything drastic like deleting the user or anything.

On a domain controller, probably the one with AD Connect installed on it, open Active Directory Users and Computers.  Click on View, then Advanced Features.

Now go find the problem user and open their account.  You’ll see a new tab called “Attribute Editor”; select it.  Scroll down until you find the proxyAddresses field.  It’s probably listed as “<not set>”. That’s fine, we’re going to fix this.  This proxyAddresses field is how we tell Azure and Office 365 (and Exchange) which email addresses the user can receive email as, and which email address is their outbound email address.

If we edit that we’ll get a dialog that allows us to add items to the list. Now don’t just add in email addresses; that isn’t going to work.  You can list all sorts of items in there.  Specifically, we want to add in our SMTP addresses.  We do this in the format smtp:username@domain.com.  In the case of our test user it’ll be smtp:testuser@dcac.co.  Now we can put in as many email addresses as we need the user to receive email at.  So how do we set the outbound email address?  We make the “smtp” part uppercase.

If you are a company that changes users’ email addresses when they change their name (they get married or divorced, for example) then you’ll want to keep their old email address in there.  So you just set it as “smtp:oldname@domain.com” in all lowercase.

In the screenshot we have two SMTP addresses: username@dcac.co with the uppercase SMTP and oldname@dcac.co with the lowercase smtp.  This allows the user to receive email at both username@dcac.co and oldname@dcac.co while sending any outbound emails as username@dcac.co.
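So the proxyAddresses values from the screenshot look like this (example addresses; the uppercase SMTP entry marks the outbound address):

```
SMTP:username@dcac.co
smtp:oldname@dcac.co
```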

If I go look at the list of addresses the user has, you’ll now see that the uppercase SMTP address is the bolded one (that’s how Office 365 shows which email is the outbound).  And if we look at the user in the mailbox list we’ll now see that the email address has changed from username@dcassoc.onmicrosoft.com to username@dcac.co.


A fairly easy change, and no impact to the user, other than the impact that we wanted: the user’s email address is now correct and matches the company’s branding. It’ll take a while at this point for the user’s email address to be corrected in everyone’s Global Address List (GAL); the GAL doesn’t update that often, but when it gets around to updating, everyone will get the change.

Using PowerShell to fix things in bulk

This change can be made in PowerShell as well. In this case I’ve set up the PowerShell to only run against a single user, but you can set up the -Filter parameter any way you need to in order to get things working the way you need.

$users = Get-ADUser -Filter {UserPrincipalName -eq 'testuser@dcac.co'} -Properties SamAccountName, ProxyAddresses, UserPrincipalName
$users | Foreach {
Set-ADUser -Identity $_.SamAccountName -Add @{ProxyAddresses="SMTP:"+$_.UserPrincipalName} -WhatIf
}

I’ve left the -whatif in there so you can see what it’ll do before running it. You’ll want to be careful running PowerShell against your Active Directory domain as you can really screw things up if you aren’t careful. Be warned, if you already have values in this ProxyAddresses field, this will simply append to them. If there’s a default in there you’ll now have two and Office 365 will get cranky. So you may need more logic and checking in your script. But I’ll leave that to you.
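As a starting point for that extra logic, here’s a sketch (untested against a real domain, with -WhatIf left in on purpose) that removes any existing primary SMTP entry before adding the new one:

```powershell
$users = Get-ADUser -Filter {UserPrincipalName -eq 'testuser@dcac.co'} -Properties SamAccountName, ProxyAddresses, UserPrincipalName
$users | Foreach {
    # Find any existing primary (uppercase SMTP:) entries; -clike is case-sensitive
    $existing = $_.ProxyAddresses | Where { $_ -clike 'SMTP:*' }
    if ($existing) {
        # Remove the old primary so we don't end up with two
        Set-ADUser -Identity $_.SamAccountName -Remove @{ProxyAddresses = $existing} -WhatIf
    }
    Set-ADUser -Identity $_.SamAccountName -Add @{ProxyAddresses = "SMTP:" + $_.UserPrincipalName} -WhatIf
}
```

Remove the -WhatIf switches once the output looks right.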

If you are planning a cloud migration, or you have already started one, be sure to contact DCAC.  We are one of the premier experts in cloud migrations, frequently training other consulting companies on how to perform cloud migrations.


April 12, 2017  2:00 PM

ZyXEL with client VPN and Azure Site to Site VPN Configuration

Denny Cherry
Azure, VPN

I’m going to start out by saying that I’m not a networking guy, at all. But that said, sometimes I have to pretend to be a networking guy. In our lab we have a decent set of machines. We also have our Azure environment up and running, which is where our websites, etc. are all running. To keep things simple we have our VPN set up at the CoLo, as that’s where we need to go most of the time when we’re VPNed in, since that’s where we can spin up servers for free.

Ever since we set up the CoLo we have had this annoying issue of having to RDP to a VM in the CoLo before being able to talk to Azure. It isn’t a major thing, but it’s gotten annoying. Eventually I got sick enough of it to try and fix it.

4 hours later I had a solution.

Now we have a ZyWall 110, but this should be similar for most of their hardware. The first thing I needed to do was add another address object. That address object needs to cover all the networks inside the network as well as the subnet that you’re using to give IPs to your VPN clients. Because I’m lazy I set it up as one object covering both ranges.  This is going to be needed later on.  I’ve got another address object named “Azure-US-West-Production” that defines our IP subnet for our Azure Virtual Network.


Once that’s created you’ll need to modify the VPN Connection to use the new address object as the local policy.


When this was done the VPN dropped, so I had to tell the VPN to reconnect (or allow it to reconnect by itself).

Once that’s done you’ll need to add a route to direct the network traffic to where it needs to go.  For the route, you’ll need to setup the Incoming as “Tunnel”, then select the tunnel that your users VPN through (we use L2TP over IPsec and our tunnel is named ClientVPN).  For the source address I left that as “any”. For the Destination I selected the subnet for the Azure vNet that I want to allow people to connect to.  In the Next-Hop section Change the type to VPN Tunnel and select the correct VPN Tunnel that connects to the correct vNet. In my case it’s called Azure-USWest-Production.


We have several vNets in Azure (4 to be specific) and we want to be able to access all of them directly, so I had to repeat this for each one of them.

Now users that are VPNed in are able to access the VMs in Azure directly from their desktops and laptops without having to RDP to a VM in the CoLo.


April 5, 2017  4:00 PM

Compare Plans in Query Store

Denny Cherry

Have you ever been looking at an execution plan in the SQL Server Query Store reports and wanted to be able to see what the differences between that plan and another plan were?

Did you know that you can?

It’s a two-step process, but you can compare the plan in the Query Store to any plan that is saved as a “.sqlplan” file.  Simply right click on whatever plan you want to compare to (either a real plan, estimated plan, or plan from the Query Store) and click “Save Execution Plan As” to save the plan to your local hard drive.  Then locate the plan you want to compare in the Query Store, right click on that plan, and select “Compare Showplan”.

That’ll give you a pretty GUI like the one shown below (click to enlarge).  By default, anything shaded in red in both plans is identical. Anything not shaded you’ll want to look at, as those areas are different in some way.  Looking at my queries below you’ll notice that they look the same; what’s different is actually the costs, row counts, etc.

You’ll also notice the two sets of properties with the little yellow not-equal-to signs next to them. Those tell you which values in the properties don’t match, giving you a quick, easy visual identifier as to what is different between the two execution plans.

Personally I’d like to just be able to right click on a couple of the plans in the Query Store and compare them without saving, but I didn’t see a way (that doesn’t mean there isn’t one, or that there won’t be one in the future).


March 28, 2017  11:47 PM

Blog, please blog. Just Blog

Denny Cherry

My name is Denny, and I’m a blogger. You should join me in this endeavor of writing a blog.



Why should you be blogging? Because that’s how we learn, and that’s how we share.

What do I say in my blog?

You may think that you don’t have anything new to say, but blogging isn’t always about writing something new. It’s about putting your spin and your flare on the topic. Not every topic needs to be earth shattering, or totally new.

Where do I start?

Start by writing about something that you do at work. When you run across something interesting, or something that you fix, write about it. It’s a great way to find topics. You can even write about something so that you have somewhere to find the information later.  (Here’s a secret: we all do that.)

Don’t worry about writing something new, write about something that you’ve experienced.

It takes so long.

It can, when you get started.  But your blog posts don’t need to be War and Peace. This post is ~200 words. It took 10 minutes to put together (you’ll get faster; I’ve written 1100+ posts and hundreds of articles and book pages).  They can be short and that’s just fine.

Just blog.

Blogging is expensive.

WordPress has free hosted blogging, or you can pay a few bucks and have your own domain name instead of something.wordpress.com.

Who will read what I write?

And when you start blogging, tell me the URL in the comments. That way I can read it, and so can others.


P.S. And yes, I totally stole this from Andy Leonard.  You’ll notice how he talks about everyone having their own style; this is his post in my style. Same basic information and conclusion, different format and style.  Some people will gravitate towards Andy’s approach, some towards mine.  This is why we blog about stuff that’s already been written.

March 22, 2017  4:00 PM

Accessing an Azure hosted Availability Group from On-Prem

Denny Cherry


When building an AlwaysOn Availability Group that’s hosted on VMs in Azure you may have issues querying through the Availability Group name/IP address from on-prem. You won’t see any issues when running queries from inside Azure, but you’ll see issues when running queries from machines on-prem, when connecting to Azure via a site to site VPN connection. The error that you’ll see is going to be similar to:

Msg 121, Level 20, State 0, Line 0
A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 – The semaphore timeout period has expired.)

Thankfully the fix is actually pretty simple. We just need to drop the MTU on the AG members from 1500 to 1350. Why we have to do this, I have no idea, but it works. We do this by logging onto the console of the VM and using the netsh command to change the MTU. You’ll want to schedule a job to run on the VM at startup, as every time the VM restarts and moves to another host, the MTU may change back.

I used PowerShell to change the MTU on startup.

$AdapterName = $(Get-NetAdapter | Where { $_.Name -Match 'Ethernet'}).Name
netsh interface ipv4 set subinterface "$AdapterName" mtu=1350 store=persistent

I chose PowerShell because the network adapter name can change (especially in Classic VMs), so we need to grab the correct name at startup.
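One way to handle the scheduling part is a startup scheduled task (a sketch; the script path below is a placeholder and assumes the two lines above are saved as a .ps1 file):

```powershell
# Run the MTU script at every boot as SYSTEM
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-ExecutionPolicy Bypass -File C:\Scripts\Set-Mtu.ps1'
$trigger = New-ScheduledTaskTrigger -AtStartup
Register-ScheduledTask -TaskName 'Set-MTU' -Action $action -Trigger $trigger -User 'SYSTEM' -RunLevel Highest
```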

Run the PowerShell manually, then schedule it to run at startup and the semaphore timeouts will go away. I am working with SQL Server and Azure engineering to figure out why this happens when using an AG (with an Internal Load Balancer of course) and a site to site VPN (policy based in this case) so that we can fix this and make it not happen anymore. I’ll report back when I hear back about a permanent resolution.


March 15, 2017  4:00 PM

Sending email in SSRS 2016 on port 25 with authentication and SendGrid

Denny Cherry

Recently we moved a client from their aging on-prem environment into Azure. As part of this there was a lot of system upgrading being done to get everything onto current versions. Part of this included upgrading SSRS 2008 R2 to SSRS 2016. As a part of the migration we needed to ensure that email delivery of reports from SSRS was working as expected. This required setting up SMTP with authentication as the only way to send emails out of Azure is to use some sort of authenticated SMTP service such as SendGrid or Office 365. We opted for SendGrid as it includes some nice reporting as part of the email sending.



We set up our nice new SQL 2016 SSRS servers to authenticate using the new SQL 2016 GUI, which now supports it. One problem with the GUI is that it requires that you use encryption. No big deal, SendGrid supports encrypted SMTP.

The problem with SendGrid’s encrypted SMTP is that it has a funky certificate chain which isn’t trusted by default. And it’s a convoluted process to get the correct certificate chain installed on a few servers (we have an SSRS farm, as there are thousands of reports being delivered every morning).

The easier option was to simply set up SMTP to not use encryption. Now the GUI doesn’t support this, but thankfully the XML config file does. Simply go find your rsreportserver.config file (it’ll be in C:\Program Files\Microsoft SQL Server\MSRS13.MSSQLSERVER\Reporting Services\ReportServer by default) and open that puppy up in Notepad. There’s a couple of settings you’ll want to change in here. First you’ll want to change the port number to port 25; you’ll find that in the SMTPServerPort setting in the XML document. The second change is to disable SSL, which is the SMTPUseSSL setting, so change that from True to False (yes, it’s spelled out rather than using 0/1 like any sensible setting would).
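After the edit, the relevant lines of rsreportserver.config look something like this (just the two settings discussed; the surrounding elements of the file are untouched):

```xml
<SMTPServerPort>25</SMTPServerPort>
<SMTPUseSSL>False</SMTPUseSSL>
```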

You’ll probably also want to change the email address that’s in the from field. By default SSRS will put the username in as the from email. Now if you’re using a corporate system to send email, that’s fine. With SendGrid your username is some massive long string, so that isn’t helpful. Directly under the encrypted username and password you’ll find a From tag. Just change that email address to what you want the from field to be and off you go.

In a scale-out configuration like ours you’ll have to edit these settings on each server. I have no idea why these settings aren’t written to the database, but they aren’t.

