SQL Server with Mr. Denny

March 28, 2017  11:47 PM

Blog, please blog. Just Blog

Denny Cherry

My name is Denny, and I’m a blogger. You should join me in this endeavor of writing a blog.



Why should you be blogging? Because that’s how we learn, and that’s how we share.

What do I say in my blog?

You may think that you don’t have anything new to say, but blogging isn’t always about writing something new. It’s about putting your spin and your flair on the topic. Not every topic needs to be earth-shattering, or totally new.

Where do I start?

Start by writing about something that you do at work. When you run across something interesting, or something that you fix, write about it. It’s a great way to find topics. You can even write about something so that you have somewhere to find the information later.  (Here’s a secret, we all do that.)

Don’t worry about writing something new, write about something that you’ve experienced.

It takes so long.

It can, when you get started.  But your blog posts don’t need to be War and Peace. This post is ~200 words. It took 10 minutes to put together (you’ll get faster, I’ve written 1100+ posts and hundreds of articles and book pages).  They can be short and that’s just fine.

Just blog.

Blogging is expensive.

WordPress has free hosted blogging, or you can pay a few bucks and have your own domain name instead of something.wordpress.com.

Who will read what I write?

When you start blogging, post the URL in the comments. That way I can read it, and so can others.


P.S. And yes, I totally stole this from Andy Leonard.  You’ll notice how he talks about everyone having their own style; this is his post in my style. Same basic information and conclusion, different format and style.  Some people will gravitate towards Andy’s approach, some towards mine.  This is why we blog about stuff that’s already been written.

March 22, 2017  4:00 PM

Accessing an Azure hosted Availability Group from On-Prem

Denny Cherry


When building an AlwaysOn Availability Group that’s hosted on VMs in Azure, you may have issues querying through the Availability Group name/IP address from on-prem. You won’t see any issues when running queries from inside Azure, but you will when running queries from on-prem machines that connect to Azure via a site-to-site VPN connection. The error that you’ll see is going to be similar to:

Msg 121, Level 20, State 0, Line 0
A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - The semaphore timeout period has expired.)

Thankfully the fix is actually pretty simple: we just need to drop the MTU on the AG members from 1500 to 1350. Why we have to do this, I have no idea, but it works. We do this by logging onto the console of the VM and using the netsh command to change the MTU. You’ll want to schedule a job to run on the VM at startup, as the MTU may change back whenever the VM restarts or is moved to another host.

I used PowerShell to change the MTU on startup.

$AdapterName = $(Get-NetAdapter | Where { $_.Name -Match 'Ethernet' }).Name
netsh interface ipv4 set subinterface "$AdapterName" mtu=1350 store=persistent

I chose PowerShell because the network adapter name can change (especially in Classic VMs), so we need to grab the correct name at startup.

Run the PowerShell manually, then schedule it to run at startup, and the semaphore timeouts will go away. I am working with SQL Server and Azure engineering to figure out why this happens when using an AG (with an Internal Load Balancer, of course) and a site-to-site VPN (policy-based in this case) so that we can fix this and make it not happen anymore. I’ll report back when I hear about a permanent resolution.


March 15, 2017  4:00 PM

Sending email in SSRS 2016 on port 25 with authentication and SendGrid

Denny Cherry

Recently we moved a client from their aging on-prem environment into Azure. As part of this there was a lot of system upgrading being done to get everything onto current versions. Part of this included upgrading SSRS 2008 R2 to SSRS 2016. As a part of the migration we needed to ensure that email delivery of reports from SSRS was working as expected. This required setting up SMTP with authentication as the only way to send emails out of Azure is to use some sort of authenticated SMTP service such as SendGrid or Office 365. We opted for SendGrid as it includes some nice reporting as part of the email sending.



We set up our nice new SQL 2016 SSRS servers to authenticate using the new SQL 2016 GUI, which now supports SMTP authentication. One problem with the GUI is that it requires you to use encryption. No big deal, SendGrid supports encrypted SMTP.

The problem with SendGrid’s encrypted SMTP is that it has a funky certificate chain which isn’t trusted by default, and it’s a convoluted process to get the correct certificate chain installed on a few servers (we have an SSRS farm, as there are thousands of reports being delivered every morning).

The easier option was simply to set up SMTP without encryption. Now the GUI doesn’t support this, but thankfully the XML config file does. Simply go find your rsreportserver.config file (it’ll be in C:\Program Files\Microsoft SQL Server\MSRS13.MSSQLSERVER\Reporting Services\ReportServer by default) and open that puppy up in Notepad. There are a couple of settings you’ll want to change in here. First you’ll want to change the port number to port 25; you’ll find that in the SMTPServerPort setting in the XML document. The second setting is to disable SSL, which is the SMTPUseSSL setting, so change that from True to False (yes, it’s spelled out rather than using 0/1 like any sensible setting would).
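As a sketch, the relevant elements inside the Report Server Email section of rsreportserver.config end up looking something like this (only the settings discussed here are shown; the server name is an illustrative placeholder, and the surrounding elements are omitted):

```xml
<!-- Sketch of the email delivery settings in rsreportserver.config.
     Surrounding Extension/Configuration elements omitted; the SMTP
     server value is a placeholder. -->
<SMTPServer>smtp.sendgrid.net</SMTPServer>
<SMTPServerPort>25</SMTPServerPort>
<SMTPUseSSL>False</SMTPUseSSL>
```

After saving the file, restart the SSRS service so it re-reads the config.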

You’ll probably also want to change out the email address that’s in the from field. By default SSRS will put the username in as the from email. If you’re using a corporate system to send email, that’s fine. With SendGrid your email address is some massive long string, so that isn’t helpful. Directly under the encrypted username and password you’ll find a From tag. Just change that email address to what you want the from field to be and off you go.

In a scale-out configuration like ours, you’ll have to edit these settings on each server. I have no idea why these settings aren’t written to the database, but they aren’t.


March 8, 2017  4:00 PM

Linux Commands You Need To Know

Denny Cherry

As SQL Server is coming to the Linux platform, there’s going to be a bunch of new commands within the OS that you’re going to need to know so that you can properly move around the Linux operating system.


Let’s start with the basics: there’s no remote desktop (RDP) into most Linux systems (there are options out there for setting up RDP on Linux, but you probably don’t want it on your production Linux servers, as ssh is all you should need).  Because there’s no remote desktop available, you’ll need to use PuTTY to ssh into the server.  PuTTY is basically like a telnet client that supports all manner of connection options including telnet, ssh, rlogin and direct serial port connections.  99.9% of the time all you’ll care about is ssh.

If you need to get files on and off the server then you’ll want to have WinSCP.  WinSCP is kind of like an FTP program, but it uses ssh instead of FTP to move files to and from the server.

“Run As”

When you log into a Linux server you don’t have admin rights to the server.  You’ll need to use a command called sudo to run commands as the root login.  Normally you’ll only need to run a few commands as root, so you’ll just put sudo in front of the command.

sudo someCommand -someParameter

Directory Listings

Linux has directories; in the Windows world we call them either directories or folders, but they are basically the same thing.  To see what’s inside of a folder you can use either the dir command or the ls command.  The output of the two commands is slightly different: the output from dir is all basic text, while the output from ls is color coded to mean different things.  Both dir and ls support the --help flag (there are two dashes there) which will output all the various switches which are available for those commands.  Normally when I’m working in Linux I use ls, mostly because I have the switches that I need memorized.  The one that I probably use the most is ls -l, as that gives me everything in the directory along with the attributes of each file (read, write, execute, owner, size and date) that I normally care about seeing.
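A couple of quick examples (run against /tmp here, but any directory works):

```shell
# Long listing: permissions, link count, owner, group, size, and
# modification date for each entry in the directory.
ls -l /tmp

# Same listing with human-readable sizes, sorted newest first.
ls -lht /tmp
```

Both commands print a "total" line first, then one line per file.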

Changing Directories

Changing directories is done using the cd command, just like in Windows.
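A quick sketch, using pwd to confirm where each cd lands you:

```shell
cd /var        # jump to an absolute path
pwd            # prints /var
cd ..          # move up one directory level
pwd            # prints /
cd ~           # jump to your home directory
pwd            # prints your home directory
```

pwd is worth getting in the habit of using, since a Linux shell prompt doesn’t always show the full current path.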

Free Disk Space

Checking for free drive space in Linux is done via the df command.  By default this will output the number of blocks on each partition, the number used and the number available.  At that point you have to do some math to figure out how much space you actually have.  The easier way is to use one of a couple of switches to get the output in a more useful format. If you use df -h you’ll get the output in gigs (or megs, or KB) instead of blocks, which makes it MUCH easier to figure out what’s going on.  Not only does df give you the size of the partition, it also tells you the percentage of the partition in use.
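For example, checking the root filesystem both ways:

```shell
# Default output: sizes in blocks, harder to read at a glance.
df /

# Human-readable units (G/M/K) plus a Use% column showing how full
# the partition is.
df -h /
```

Passing a path limits the output to whichever filesystem holds that path, which is handy on servers with many mounts.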

Running Processes

There are a few different ways to see what’s running on a server.  The first is ps.  This will give you a list of the processes running under your session. ps -e will give you a list of all the processes running on the server, including background processes.  It doesn’t give you all that much useful information about the processes, just the time the process has taken on the CPU, the pid and the process name.
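The common invocations look like this (ps -e is the System V style option for listing every process; ps aux is the BSD-style equivalent that adds CPU and memory columns):

```shell
# Processes belonging to the current session only.
ps

# Every process on the system: PID, TTY, CPU time, and command name.
ps -e

# BSD-style full listing with per-process CPU and memory usage.
ps aux
```

The header line tells you what each column means, so it’s worth glancing at before you start hunting for a specific process.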

If you want a lot more useful information about what’s running then you’ll want to run top.  top gives you a variety of information about what’s running on the machine, such as the CPU, uptime, number of processes, memory in use and a wealth of information about each process.  While ps is just a dump of information to the screen, top is a real-time feed of data that updates every couple of seconds to give you fresh information.  If you press the ? key you’ll get a help screen which will give you a bunch more information about the processes that are running on the system, including being able to change the columns which are displayed, and do filtering and sorting.

Editing Files

Editing text files is where things start getting really complicated.  There are a few text editors out there for Linux and they are all “interesting”.  Personally I use vi, mostly because I hate myself.  vi is a great text editor that’s VERY powerful, but you have to have all sorts of internal commands memorized to use it.  Now I’ve been using it since I first started playing with *nix 15+ years ago so I’m pretty used to it.  To edit a file with it, just run vi with the filename after it.

vi myfile.txt

That will open the file in vi and let you edit it.  Now vi has two modes, command mode and insert mode.  By default you are in command mode. This lets you run all sorts of commands over the file such as search, etc.  When in command mode, if you type a : it’ll show up at the bottom of the screen.  For example, if you want to search for a text string while in command mode, type /SearchString and press enter.  vi will then take you to the next line that has that string on it.  If you want to do a search and replace, replacing the first occurrence of the text on the current line, you’d use :s/SearchString/ReplaceString/.  You can probably start to see why I said that I hate myself for using vi.

To get into insert mode so that you can type and delete press the letter “a” on the keyboard. This will put you into insert mode and you can now change things, navigate with your arrow keys, delete characters, etc.  To get back into command mode press the escape key.

If you need to delete an entire line from a text file, go into command mode by pressing the escape key, then hit the d key twice. That’ll delete the entire line.

If you want to save the file, go into command mode and type :w and press enter.  That’ll save the file without closing it.  To save and close, type :wq and press enter. To quit without saving, type :q! and press enter.

While using vi you’ll want to avoid the number pad for typing numbers.  Trust me.

One of the things I really like about vi is that I can tell vi to take me to a specific point in the file.  If I know the line number I need to get to, running vi +nn filename will cause vi to open the file and go directly to that line number.

There’s a variety of other text editors out there such as pico, and emacs as well.  I’ve just never liked them as much as vi.

Job Scheduling

If you need to schedule things to run in Windows, you’d use Task Scheduler.  We don’t have Task Scheduler in Linux; we have cron instead.  Cron is edited using the crontab -e command, which opens up a text editor.  The first time you run crontab it’ll ask you which editor you want to use.  Most people use emacs and that’s what you’ll see most of the demos use.

Once in the editor, scroll to the bottom of the file. That’s where you can specify new tasks to run.  The format of the line is pretty annoying.  There are five values which you specify to control when the job should run.  These are space delimited and are:

  • Minute
  • Hour
  • Day of Month
  • Month
  • Day of Week

If you want the job to run on any possible value for one of these fields then use an * for that position.  After those five fields you specify the command and any parameters that you want to run.  Let’s look at some examples, all of which are going to run the command /var/myCommand.

Run daily at midnight

0 0 * * * /var/myCommand

Run at 5am

0 5 * * *  /var/myCommand

Run Sunday at midnight

0 0 * * 7 /var/myCommand

(You can use 7 or 0 to represent Sunday. This is for legacy support, as in the old days some systems supported 0 and others supported 7.)

Run Every 5 Minutes

*/5 * * * * /var/myCommand

Run Every Other Day

0 0 */2 * * /var/myCommand

Run at Midnight on New Years

0 0 1 1 * /var/myCommand

As you can see this is a pretty powerful way of scheduling commands.
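An easy mistake when writing these by hand is dropping one of the five schedule fields. A quick sanity check with awk (the entry below is just an example line, not a real job) is to count the whitespace-delimited tokens; a minimal valid entry has six, the five schedule fields plus the command:

```shell
entry='*/5 * * * * /var/myCommand'

# Count whitespace-delimited tokens: minute, hour, day-of-month,
# month, day-of-week, then the command itself.
tokens=$(echo "$entry" | awk '{ print NF }')
echo "$tokens"   # prints 6
```

If the count comes back as 5 or fewer, cron will misinterpret the line (or reject it) rather than run what you intended.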

That should give you a pretty good understanding of the basic Linux commands that you’ll need to work with files within the Linux operating system while putting SQL Server database instances on Linux.


March 6, 2017  7:00 PM

SQL Clone and an Amazon Echo Dot

Denny Cherry

Want to learn more about Redgate’s SQL Clone and enter to win an Amazon Echo Dot? Then check out the post I just did on the Denny Cherry & Associates Consulting site for more info.


February 28, 2017  8:37 PM

Another Cloud Outage (#awsdown this time) Another Group of Companies Show They Don’t Have DR

Denny Cherry

Today Amazon’s AWS S3 service in US East took an outage. Along with this outage we saw a lot of companies that didn’t build DR plans into their cloud deployments.  The most depressing of these was that Amazon AWS couldn’t update their status board because, apparently, it’s hosted in S3 in US East.  This is what I call a massive design failure of their application.

Along with the issue there were some great graphics that people were creating, probably because they had nothing else they could do at work.

Taken from @melizeche on Twitter. Click for original.


It’s kind of scary just how many services reported as being offline due to the outage; a sampling can be found below.  Even the mighty isitdownrightnow.com was offline.


On top of this, there are people that just aren’t able to work at the moment, because they’re either building something against AWS, or the apps which they use are only hosted in a single AWS data center.

Building your systems so that they can survive a total outage of your primary data center is key when putting services into the cloud. You have to plan for your site that’s hosting your services to fail. This isn’t an AWS issue, or an Azure issue. All the cloud providers will have a failure somewhere along the line that’ll take an entire site offline. How you’ve configured your systems to handle these failures will determine if you can keep working during the outage, or if your staff is sitting around tweeting about how they can’t work because your cloud provider is offline.


February 22, 2017  4:00 PM

Are you using SQL 2016 SSMS? Would you like to use native help? Now you can.

Denny Cherry

Are you using the SQL Server 2016 SSMS? Do you HATE the help experience when you press F1 and it opens the GOD DAMN webpage?

Did you know that you can change this horrible annoying behavior?


It’s a shockingly easy change.  Open SSMS, click Help, then Set Help Preference, and change the default.

See screenshot.

Now, when I did this I had to restart SSMS to get the change to actually do anything.  Half the time help hangs when I open it, but I’m on the fast train for Windows 10 so it might be that.

Best blog post ever!


February 15, 2017  4:00 PM

You may be getting a free hardware upgrade in Azure this month

Denny Cherry

If you’re hosted in Azure in the West US region you may be getting a free hardware upgrade later this month or early next month.  Part of the Azure infrastructure is being upgraded, and VMs that are running in the part which is being upgraded will be automatically moved to the new hardware.



Now if you’re going to be impacted, whoever is your company’s service administrator will be getting (or has gotten) an email about this, and the move will include a reboot of all the impacted VMs.  If your VMs are within an Availability Set then you’re guaranteed to only have one VM in the Availability Set reboot at a time.  This is why we have Availability Sets.

In fact, this is one of the reasons that Azure is a great platform.  We’ve got systems that are getting moved to new, (hopefully) better hardware and there’s basically no impact to the systems in question.  Just a reboot of the impacted systems, that’s it.  All in all, that’s a pretty minimal impact.

Now the email does include instructions if you’d like to schedule the upgrade yourself.  Instead of letting Azure move your services automatically, just power down all the VMs in the Availability Set, then power them back on. When they power back on they’ll be automatically moved to the new hardware.  Yes, this does mean that you have to take an outage, but you’ve got two weeks’ notice to schedule the outage and complete it before Microsoft will force the issue.

All in all, that’s not bad.

If you’re getting the upgrade, enjoy that new server smell.


February 8, 2017  4:00 PM

Thanks Rodney for driving me and Grant all over Florida

Denny Cherry
SQL Server

A few weeks ago Grant Fritchey and I had the chance to speak at five user group meetings in five days, in five cities all over Florida.

Clearly we’re both insane as we agreed to do this. Everything was scheduled by Karla Landrum and she got some big shout outs at the User Group meetings.

But one person who didn’t get enough credit was Rodney. He was kind enough to take a week off of work and drive up to Nashville SQL Saturday to pick me up, then drove Grant and me all over Florida to all these user group meetings. Driving for 6-8 hours a day while Grant and I worked away on our laptops couldn’t have been an easy task, and I just wanted to throw out a shout-out to Rodney for being willing to put up with me, Grant, and Karla for five days of driving around the state.

So thanks Rodney.


February 1, 2017  4:00 PM

Looking for a free time sheet application for your consulting company (or any company)?

Denny Cherry

Back when I started Denny Cherry & Associates Consulting I needed a way to track my time. I used Excel and it worked well enough. But then I decided to start expanding the company, so I needed to find a better solution for tracking time, as that’s how consulting companies get paid.

I found lots of options that all cost a small fortune, and all they did was track time. They didn’t do invoicing, and the ones that did do invoicing cost even more; some were up to $40 per user per month. Needless to say, as a small company there was no way I was going to start paying hundreds of dollars a month just for a few people to be able to log time sheets. Especially as we occasionally have outside people working for us that we subcontract client work out to, so even if those people weren’t working that month we’d have to pay for them to have access to the system, just in case. That wasn’t going to fly.

So I ended up building our own time sheet system as a WordPress plugin (we use WordPress for our website, so it made it an easy place to work). It’s taken a lot of additional work adding in features that others (and we) would want, but we’ve gotten the plugin ready for public release and use by other companies. It’s now available in the WordPress plugin list as “Time Sheets”. There’s also some information available about it on our website. If you’re running a small (or even larger than small) company that needs a free application to track time, then I’d recommend checking out our time sheet system.

The system has all the workflows that you’d expect from a large timesheet system (approvals by supervisors, an invoicing queue, and a payroll queue). You can turn off features as needed, such as the notes for each day’s work, or the expenses section, if those aren’t going to be needed by various teams (or anyone).

We’ve found it to be pretty handy and easy to use for our small team, and hopefully you find it useful as well.

