SQL Server with Mr. Denny

September 19, 2016  11:48 PM

Announcing your PASS Summit 2016 Speaker Idol Contestants

Denny Cherry

Well, the time has come to announce the PASS Summit 2016 Speaker Idol contestants.  We had a great group of people submit to present at Speaker Idol, and I have to say that the selection process was quite tough.  But thankfully it’s all over; the folks who have been accepted into this year’s event have been told (and they replied), and those who weren’t selected have been notified.

All that’s left now is to tell everyone else.  The speaker order for each day will match what’s listed here.




Daniel Janik
Martin Catherall
Kevin G. Boles
Eric Peterson
Amy Herold
Tom Norman
Shabnam Watson
Tzahi Hakikat
Brian Carrig
Todd Kleinhans
Peter Kral
Robert Verell

In the interest of openness, I do know a few of these folks, but that wasn’t taken into account when I sorted the list.  As I’m not a judge, what I want has no effect on the outcome.

By the Friday afternoon of the PASS Summit we will know which one of these speakers will be the first speaker announced for the PASS Summit 2017.

Speaker Idol sessions this year will be in room TCC Yakima 1 (go through the dining hall and down the escalators into the TCC). The Wednesday and Thursday sessions will be held at 4:45pm. The Friday session will be at 11am, and the finals will be at 3:30pm, all in the same room. You can find the Speaker Idol sessions published on the schedule; search for “Idol” or “Denny” and you’ll find them so you can add them to your schedule.

Thanks for supporting PASS, thanks for supporting Speaker Idol, and be sure to swing by the Consultants Corner booth in the exhibit hall at booth 316 to get scanned for our prize drawing and to pick up some of our awesome swag!


September 13, 2016  1:54 AM

If your post title doesn’t represent your article, you have failed as a writer


I was minding my own business, scrolling through Facebook while sitting on the airport shuttle to the rental car lot, when I ran across the steaming pile of BS shown in the image to the right.  If you aren’t into reading graphics, the title of the article is “My Eight Dollar Flight Upgrade Trick” (which I won’t link to).  In the article the author talks about how he buys candy for the flight attendants so that they are nicer to him (shocking: buying gifts for people makes them nicer to you).

In the article the author actually says that he doesn’t do this to get an upgrade (there’s a nice humble brag about getting upgraded all the time due to his status); in fact, he never says that this trick works (hint: it doesn’t, because upgrades are almost always handled by the computer, in order, and are done by the gate agent, not the flight attendant).

This is what we call click bait.  It happens when the author has written such a crap article, usually about nothing with no actual substance, that they have to trick you into clicking on it in order to get page views.  When the editors look at the number of page views that the article gets, the number looks great, so the author keeps getting contracts to write articles (because clicks and views equal ad revenue).  You’ve seen these before on Facebook and Twitter.  They come up in all walks of life, from IT articles to travel articles, news articles, and politics (political authors are REALLY good at writing BS).

When you see this stuff, leave comments calling out the BS title and write to the editors.  Or better yet, if the title seems like crap, just don’t read the article.


September 9, 2016  5:18 PM

Recommended reading from mrdenny for September 9, 2016

SQL Server

This week I’ve found some great things for you to read; these are a few of my favorites.

Hopefully you find these articles as useful as I did. Don’t forget to follow me on Twitter where my username is @mrdenny.


September 7, 2016  7:32 PM

PASS Summit 2016 Attendee Orientation Webcast


On October 7th at 11am PST / 2pm EST, join me (Denny Cherry) for the PASS Summit 2016 Attendee Orientation. During this webcast we’ll cover everything you need to know about the PASS Summit before attending it. This includes hotels, rental cars, parties, places to eat, and getting around the city and the convention center.

By the end of this webcast Seattle and the PASS Summit will feel like your second home.

Register today to hold your spot for this free webcast, and we’ll see you on the 7th.


September 7, 2016  6:00 PM

PASS Summit SQL Karaoke Tickets Going Fast

IT conferences and events, SQL Server

This year, like the past few years, we’ll be kicking off the PASS Summit with a (loud) bang at Amber on 1st street in Seattle. There are still tickets available for this rocking party. This is the event to attend on Tuesday night: come out, have something to drink, sing some karaoke, watch others sing karaoke, and have a great time.

Like we’ve done in prior years we’ll have a live band performing with our singers. This event has always been a hit at the PASS Summit in years past, and it’ll be a popular event this year as well.

Get your tickets before they sell out and come down, have a blast, make some noise and enjoy the party.


August 29, 2016  6:38 PM

PASS Speaker Idol Sign Ups Are Closing Soon


If you were planning on signing up for the PASS Summit Speaker Idol, you have a little over two days left to get signed up. Signups close at 11:59pm on August 31st, 2016, as per the official rules.

So go get signed up, or sign up your favorite speaker that you’ve seen at a SQL Saturday or other event.

See you at the summit.


P.S. Be there Tuesday night for the party at Amber. There are still tickets available, so get signed up for free drinks, free Karaoke and tons of fun on Tuesday night.

August 24, 2016  4:00 PM

Releasing a Page Blob Lease in Azure


Sometimes when firing up VMs or moving VMs from the page or blob store you’ll get an error that there is still a lease on the file.  To solve this you need to release the lease, but waiting won’t do the trick, as the leases don’t have an expiration date.

I found some VB.NET code online that, with some tweaking (and the help of Eli Weinstock-Herman and Christiaan Baes), I was able to use to release the lease.

The first thing you’ll need is Visual Studio 2015.  You’ll also need the Azure Storage Client.

Once those are both installed you need to create a new VB.NET project.  I used a command line app.

Then put this code in the app. Replace the placeholders that I show in {} with the actual values from your Azure account. Then compile and run the code. The lease will be released.

Imports Microsoft.WindowsAzure.Storage
Imports Microsoft.WindowsAzure.Storage.Auth
Imports Microsoft.WindowsAzure.Storage.Blob

Module Module1

    Sub Main()
        ' Authenticate against the storage account that holds the leased blob
        Dim Cred As New StorageCredentials("{StorageAccount}", "{StorageAccountKey}")
        Dim sourceBlobClient As New CloudBlobClient(New Uri("http://{StorageAccount}.blob.core.windows.net/"), Cred)

        Dim sourceContainer As CloudBlobContainer = sourceBlobClient.GetContainerReference("{ContainerName}")

        ' GetBlobReferenceFromServer returns an ICloudBlob, so cast it to a page blob
        Dim sourceBlob As CloudPageBlob = DirectCast(sourceContainer.GetBlobReferenceFromServer("{FileName}"), CloudPageBlob)

        ' Break the lease, giving it one second to expire
        Dim breakTime As TimeSpan = New TimeSpan(0, 0, 1)
        sourceBlob.BreakLease(breakTime)
    End Sub

End Module

Sadly, I haven’t been able to find an easier way of doing this.
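If you’d rather not build a VB.NET project at all, here’s a rough PowerShell sketch of the same idea using the classic Azure storage cmdlets. This isn’t the approach from the post, just an assumed alternative; the {} placeholders are the same made-up values as above.

```
# Sketch only: break the lease on a page blob from PowerShell.
# Replace the {} placeholders with your real account, key, container and blob names.
$ctx  = New-AzureStorageContext -StorageAccountName "{StorageAccount}" -StorageAccountKey "{StorageAccountKey}"
$blob = Get-AzureStorageBlob -Container "{ContainerName}" -Blob "{FileName}" -Context $ctx

# ICloudBlob exposes the same BreakLease call the VB.NET code uses;
# pass a one-second break period plus nulls for the optional arguments.
$blob.ICloudBlob.BreakLease([TimeSpan]::FromSeconds(1), $null, $null, $null)
```

The advantage here is that there’s nothing to compile; the downside is that you still need the Azure PowerShell module installed and a valid storage key.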


August 17, 2016  4:00 PM

Making Azure PowerShell Scripts Work in PowerShell and As RunBooks


Runbooks are very powerful tools which allow you to automate PowerShell commands that need to be run at different times.  One of the problems that I’ve run across when dealing with Azure Runbooks is that there is no easy way to use the same script on-prem during testing and then unchanged when deploying as a runbook. This is because of the way that authentication has to be handled when setting up a runbook.

The best way to handle authentication within a runbook is to store the authentication within the Azure Automation configuration as a stored credential.  The problem here is that you can’t use this credential while developing your runbook in the normal PowerShell ISE.

One option which I’ve come up with is a little bit of TRY/CATCH logic that you can put into the PowerShell Script, which you’ll find below.

In this sample code we use a variable named $cred to pass authentication to the Add-AzureRmAccount (and the Add-AzureAccount) cmdlet. If that variable has no value in it, then we try to call Get-AutomationPSCredential. If the script is being run within the Azure Runbook environment, this will succeed and we’ll get a credential into the $cred variable. If not, the call will fail and the runner will be prompted for their Azure credentials through a PowerShell dialog box. Whatever credentials are entered are saved into the $cred variable.

When we get to the add-AzureRmAccount and/or the add-AzureAccount cmdlets we pass in the value from $cred into the -Credential input parameter.

The reason that I’ve wrapped the Get-AutomationPSCredential cmdlet in the IF block that I have is so that the script can be run over and over again in PowerShell without asking you to authenticate every time. I left the calls to Add-AzureRmAccount and Add-AzureAccount inside the IF block so that they would only be made on the first run, as there’s no point in calling Add-AzureRmAccount every time unless we are authenticating for the first time.

if (!$cred) {
    try {
        [PSCredential] $cred = Get-AutomationPSCredential -Name $AzureAccount
    }
    catch {
        write-warning "Unable to get runbook account. Authenticate manually."
        [PSCredential] $cred = Get-Credential -Message "Enter Azure Portal Creds"

        if (!$cred) {
            write-warning "Credentials were not provided. Exiting."
            return
        }
    }

    try {
        add-AzureRmAccount -Credential $cred -InformationVariable InfoVar -ErrorVariable ErrorVar
    }
    catch {
        Clear-Variable cred
        write-warning "Unable to authenticate to AzureRM using the provided credentials"
    }

    try {
        add-AzureAccount -Credential $cred -InformationVariable InfoVar -ErrorVariable ErrorVar
    }
    catch {
        Clear-Variable cred
        write-warning "Unable to authenticate to AzureSM using the provided credentials"
        write-warning $ErrorVar
    }
}
You’ll be seeing this coming up shortly as part of a larger PowerShell script that I’ll be releasing on GitHub to make life easier for some of us in Azure.


August 10, 2016  4:00 PM

It’s the cloud, it’s highly available. Do I need to worry about HA and DR?

Cloud Computing, Clustering/High availability, High Availability

Short answer: Yes.

While yes, the cloud is highly available, services do get taken offline due to hardware failures, host server reboots for patching, etc. Can your application survive being down for several minutes in the middle of the day?

If the answer to that question is “no”, and the answer to that question probably is “no” then you need to build High Availability into your environment’s design when you move to the cloud. If you don’t build your environment with highly available services, then you’ll be disappointed in your experience being hosted in the cloud.

The same applies to disaster recovery. If you don’t have a DR plan for your systems running within the cloud platform, then when something happens that is outside of your, and your cloud provider’s, control, you won’t have a good experience. Your disaster recovery plan could be as simple as backing up databases and file servers to a storage account in another region of the cloud platform. Or it could be as complicated as running databases configured in Always On Availability Groups with replicas hosted in three regions of the cloud platform, your web tier hosted in three different regions, and a geographic load balancer configured on top of the web tier to route users to their closest geographical site, avoiding the site which is down during a DR event.
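The simple end of that spectrum can be sketched in a few lines of Azure PowerShell. This is just an illustration of the idea, not anything from the post; the account names, keys, and the “backups” container are all placeholders I’ve made up.

```
# Sketch only: copy backup blobs from a primary-region storage account
# to an account in a second region, as a bare-minimum DR plan.
$src = New-AzureStorageContext -StorageAccountName "{PrimaryAccount}" -StorageAccountKey "{PrimaryKey}"
$dst = New-AzureStorageContext -StorageAccountName "{DrAccount}" -StorageAccountKey "{DrKey}"

# Start an asynchronous server-side copy of every blob in the backups container
Get-AzureStorageBlob -Container "backups" -Context $src |
    Start-AzureStorageBlobCopy -DestContainer "backups" -DestContext $dst
```

Scheduled from a runbook, something like this gives you off-region copies of your backups without touching the complex end of the spectrum at all.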

The amount of complexity built into the configuration is completely up to the business: how much or how little high availability and disaster recovery are they willing to pay for, and how much downtime are they willing to accept due to patching (for HA) and a total site failure (for DR)? We all want no downtime and no data loss, but these things come at a price, and we need to understand what those prices are going to be before we start spinning up services in the cloud.


August 2, 2016  7:19 AM

Site to Site VPNs no longer needed for vNets in the same Azure region

Azure, Networking, Windows Azure

Up until August 1st, if you had two vNets in the same Azure region (US West, for example) you needed to create a site to site VPN between them in order for the VMs within each vNet to be able to see each other.  I’m happy to report that this is no longer the case (though it is still the default configuration).  On August 1st, 2016 Microsoft released a new version of the Azure portal which allows you to enable vNet peering between vNets within an account.

Now, this feature is in public preview (a.k.a. beta), so you have to turn it on, which is done through Azure PowerShell. Thankfully it uses the Register-AzureRmProviderFeature cmdlet, so you don’t need to have the newest Azure PowerShell installed, just something fairly recent (I have 1.0.7 installed). To enable the feature, just request to be included in the beta like so (don’t forget to log in with Add-AzureRmAccount and then Select-AzureRmSubscription).

Register-AzureRmProviderFeature -FeatureName AllowVnetPeering -ProviderNamespace Microsoft.Network –force

Now, this “should” register you pretty quickly, but it took about an hour before I was able to actually set up a peer.  The first step is to check whether you are registered or registering by using the script below.  If this says Registering, you have to wait.  If it says Registered, you should be good to go.  If you show as Registered and still get errors, give it an hour and then try again.

Get-AzureRmProviderFeature -FeatureName AllowVnetPeering -ProviderNamespace Microsoft.Network

To actually setup the peering you have to specify which vNets are allowed to talk to which other vNets.  The easiest way to do this is in the Azure Portal (unless you have a bunch to do then use PowerShell).  Log into the portal and navigate to your Virtual Networks.  In the properties of the vNet you’ll see a new option called “Peerings”.


Select this and click the “Add” button. Which will get you the new peering menu shown below.

Give the peer a name (I used the name of the vNet that I was connecting to), specify whether the peering is to an RM or Classic vNet (yes, you read that correctly, Classic is supported, to a degree), then the subscription and the vNet that you want to connect to.  You can then enable and disable access to the vNet over the peer, and specify whether the peer connection should allow forwarded traffic (from a site to site VPN, for example) and whether this peer should be allowed to use this vNet’s gateway (if it has one).  If the vNet you’re configuring doesn’t have a network gateway, you can check that bottom button to use the gateway of the remote vNet instead.  Once that’s done click OK, then set up the same peering on the remote vNet.  Give it a minute or two for the deployments to complete, and then you should have full private IP communications between the two vNets.
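If you do have a bunch of peerings to create, the PowerShell route looks roughly like this. I’m sketching this with the RM networking cmdlets; the vNet and resource group names are placeholders of my own, not values from the post.

```
# Sketch only: create both halves of a vNet peering with PowerShell.
# Replace the vNet and resource group names with your own.
$vnet1 = Get-AzureRmVirtualNetwork -Name "vNet1" -ResourceGroupName "{ResourceGroup}"
$vnet2 = Get-AzureRmVirtualNetwork -Name "vNet2" -ResourceGroupName "{ResourceGroup}"

# Just like in the portal, each side of the peer has to be created separately
Add-AzureRmVirtualNetworkPeering -Name "vNet1-to-vNet2" -VirtualNetwork $vnet1 -RemoteVirtualNetworkId $vnet2.Id
Add-AzureRmVirtualNetworkPeering -Name "vNet2-to-vNet1" -VirtualNetwork $vnet2 -RemoteVirtualNetworkId $vnet1.Id
```

Add-AzureRmVirtualNetworkPeering also takes switches along the lines of the portal checkboxes (allowing forwarded traffic and gateway transit), so the portal and script routes end up configuring the same options.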

Now there’s a couple of restrictions to keep in mind.

  1. This only works across vNets in the same account.
  2. This only works across vNets in the same region, so vNets in different regions will still need to have a site to site VPN, but you only really need to have a single vNet in each region with a site to site VPN.
  3. Classic is supported, sort of.  Classic vNets can ONLY peer to RM vNets, no Classic to Classic peering is supported.

That seems to be about it, or at least all I’ve run across.

Personally I think that this feature is fantastic, and it looks like it’ll solve one of my client’s issues that we’ve been working on for about a week now. I can’t wait to dive into it more and really push the limits.


Update: It appears that there’s a bug in the backend of Azure that’s preventing people from getting set up for the service.  For the time being you have to run a second PowerShell command after the AllowVnetPeering command above.  If the feature isn’t working for you after that command finishes, run the following, which should kick it into working.

Register-AzureRmResourceProvider -ProviderNamespace Microsoft.Network
