SQL Server with Mr. Denny

September 7, 2016  6:00 PM

PASS Summit SQL Karaoke Tickets Going Fast

Denny Cherry
IT conferences and events, SQL Server

This year, like the past few years, we’ll be kicking off the PASS Summit with a (loud) bang at Amber on 1st Street in Seattle. There are still tickets available for this rocking party. This is the event to attend on Tuesday night: come out, have something to drink, sing some karaoke, watch others sing karaoke and have a great time.

Like we’ve done in prior years, we’ll have a live band performing with our singers. This event has always been a hit at the PASS Summit, and it’ll be a popular event this year as well.

Get your tickets before they sell out and come down, have a blast, make some noise and enjoy the party.


August 29, 2016  6:38 PM

PASS Speaker Idol Sign Ups Are Closing Soon

Denny Cherry

If you were planning on signing up for the PASS Summit Speaker Idol, you have a little over two days left to get signed up. Signups close at 11:59pm on August 31st, 2016 as per the official rules.

So go get signed up, or sign up your favorite speaker that you’ve seen at a SQL Saturday or other event.

See you at the summit.


P.S. Be there Tuesday night for the party at Amber. There are still tickets available, so get signed up for free drinks, free karaoke and tons of fun.

August 24, 2016  4:00 PM

Releasing a Page Blob Lease in Azure

Denny Cherry

Sometimes when firing up VMs or moving VMs from the blob store you’ll get an error that there is still a lease on the file.  To solve this you need to release the lease, and waiting won’t do the trick, as the leases don’t have an expiration date.

I found some VB.NET code online that, with some tweaking (and the help of Eli Weinstock-Herman and Christiaan Baes), I was able to get to release the lease.

The first thing you’ll need is Visual Studio 2015.  You’ll also need the Azure Storage Client Library (the WindowsAzure.Storage NuGet package).

Once those are both installed you need to create a new VB.NET project.  I used a command line app.

Then put this code in the app. Replace the placeholders that I show in {} with the actual values from your Azure account. Then compile and run the code. The lease will be released.

Imports Microsoft.WindowsAzure.Storage
Imports Microsoft.WindowsAzure.Storage.Auth
Imports Microsoft.WindowsAzure.Storage.Blob

Module Module1

    Sub Main()
        ' Authenticate against the storage account.
        Dim Cred As New StorageCredentials("{StorageAccount}", "{StorageAccountKey}")
        Dim sourceBlobClient As New CloudBlobClient(New Uri("http://{StorageAccount}.blob.core.windows.net/"), Cred)

        ' Get a reference to the container and the leased page blob.
        Dim sourceContainer As CloudBlobContainer = sourceBlobClient.GetContainerReference("{ContainerName}")
        Dim sourceBlob As CloudPageBlob = CType(sourceContainer.GetBlobReferenceFromServer("{FileName}"), CloudPageBlob)

        ' Break the lease, letting it expire after one second.
        Dim breakTime As TimeSpan = New TimeSpan(0, 0, 1)
        sourceBlob.BreakLease(breakTime)
    End Sub

End Module

Sadly, I haven’t been able to find an easier way of doing this.


August 17, 2016  4:00 PM

Making Azure PowerShell Scripts Work in PowerShell and As RunBooks

Denny Cherry

Runbooks are very powerful tools which allow you to automate PowerShell commands which need to be run at different times.  One of the problems that I’ve run across when dealing with Azure Runbooks is that there is no way to use the same script on-premises during testing and as the runbook when deploying. This is because of the way that authentication has to be handled when setting up a runbook.

The best way to handle authentication within a runbook is to store the authentication within the Azure Automation configuration as a stored credential.  The problem here is that you can’t use this credential while developing your runbook in the normal PowerShell ISE.

One option which I’ve come up with is a little bit of TRY/CATCH logic that you can put into the PowerShell Script, which you’ll find below.

In this sample code we use a variable named $cred to pass authentication to the add-AzureRmAccount (and the add-AzureAccount) cmdlet. If that variable has no value in it then we try to call get-AutomationPSCredential. If the script is being run within the Azure Runbook environment then this will succeed and we’ll get a credential into the $cred variable. If not, the call will fail and the runner will be prompted for their Azure credentials through a PowerShell dialog box. Whatever credentials are entered are saved into the $cred variable.

When we get to the add-AzureRmAccount and/or the add-AzureAccount cmdlets we pass in the value from $cred into the -Credential input parameter.

The reason that I’ve wrapped the get-AutomationPSCredential cmdlet in the IF block that I have is so that the script can be run over and over again in PowerShell without asking you to authenticate every time. I left the calls to add-AzureRmAccount and add-AzureAccount inside the IF block so that they would only be called on the first run, as there’s no point in calling add-AzureRmAccount every time unless we are authenticating for the first time.

if (!$cred) {
    try {
        # Inside the Azure runbook environment this succeeds and returns the stored credential.
        [PSCredential] $cred = Get-AutomationPSCredential -Name $AzureAccount
    }
    catch {
        # Outside of Azure Automation the cmdlet isn't available, so prompt the runner instead.
        write-warning ("Unable to get runbook account. Authenticate Manually")
        [PSCredential] $cred = Get-Credential -Message "Enter Azure Portal Creds"

        if (!$cred) {
            write-warning "Credentials were not provided. Exiting."
            return
        }
    }

    try {
        add-AzureRmAccount -Credential $cred -InformationVariable InfoVar -ErrorVariable ErrorVar
    }
    catch {
        Clear-Variable cred
        write-warning ("Unable to authenticate to AzureRM using the provided credentials")
    }

    try {
        add-AzureAccount -Credential $cred -InformationVariable InfoVar -ErrorVariable ErrorVar
    }
    catch {
        Clear-Variable cred
        write-warning ("Unable to authenticate to AzureSM using the provided credentials")
        write-warning ($ErrorVar)
    }
}

You’ll be seeing this coming up shortly as part of a larger PowerShell script that I’ll be releasing on GitHub to make life easier for some of us in Azure.


August 10, 2016  4:00 PM

It’s the cloud, it’s highly available. Do I need to worry about HA and DR?

Denny Cherry
Cloud Computing, Clustering/High availability, High Availability

Short answer: Yes.

While yes, the cloud is highly available, services can still be taken offline due to hardware failures, host server reboots due to patching, etc. Can your application survive being down for several minutes in the middle of the day?

If the answer to that question is “no”, and the answer to that question probably is “no”, then you need to build high availability into your environment’s design when you move to the cloud. If you don’t build your environment with highly available services, then you’ll be disappointed in your experience being hosted in the cloud.

The same applies for disaster recovery. If you don’t have a DR plan for your systems which are running within the cloud platform, then when something does happen that is outside of your (and your cloud provider’s) control, you won’t have a good experience. Your disaster recovery plan could be as simple as backing up databases and file servers to a storage account in another region of the cloud platform. Or it could be as complicated as running databases configured within Always On Availability Groups with replicas hosted in three regions of the cloud platform, with your web tier hosted in three different regions and a geographic load balancer configured on top of your web tier to route users to their closest geographical site, avoiding the site which is down during a DR event.

The amount of complexity which is built into the configuration is completely up to the business: how much or how little high availability and disaster recovery they are willing to pay for, and how much downtime they are willing to accept due to patching (for HA) and for a total site failure (for DR). We all want no downtime and no data loss, but these things come at a price, and we need to understand what these prices are going to be before we start spinning up services in the cloud.


August 2, 2016  7:19 AM

Site to Site VPNs no longer needed for vNets in the same Azure region

Denny Cherry
Azure, Networking, Windows Azure

Up until August 1st, if you had two vNets in the same Azure region (USWest for example) you needed to create a site to site VPN between them in order for the VMs within each vNet to be able to see each other.  I’m happy to report that this is no longer the case (though no connectivity between vNets is still the default configuration).  On August 1st, 2016 Microsoft released a new version of the Azure portal which allows you to enable vNet peering between vNets within an account.

Now this feature is in public preview (aka beta) so you have to turn it on, which is done through Azure PowerShell. Thankfully it uses the Register-AzureRmProviderFeature cmdlet, so you don’t need to have the newest Azure PowerShell installed, just something fairly recent (I have 1.0.7 installed). To enable the feature, just request to be included in the beta like so (don’t forget to log in with add-AzureRmAccount and then select-AzureRmSubscription).

Register-AzureRmProviderFeature -FeatureName AllowVnetPeering -ProviderNamespace Microsoft.Network -Force

Now this “should” register you pretty quickly, but it took about an hour before I was able to actually set up a peer.  The first step is to check and see if you are registered or registering by using the script below.  If this says Registering then you have to wait.  If it says Registered then you should be good to go.  If you show as Registered and still get errors, give it an hour then try again.

Get-AzureRmProviderFeature -FeatureName AllowVnetPeering -ProviderNamespace Microsoft.Network

To actually set up the peering you have to specify which vNets are allowed to talk to which other vNets.  The easiest way to do this is in the Azure Portal (unless you have a bunch to do, in which case use PowerShell, as shown below).  Log into the portal and navigate to your Virtual Networks.  In the properties of the vNet you’ll see a new option called “Peerings”.


Select this and click the “Add” button, which will get you the new peering menu shown below.

Give the peer a name (I used the name of the vNet that I was connecting to), specify if the peering is to an RM or Classic vNet (yes, you read that correctly, Classic is supported, to a degree), then the subscription and the vNet that you want to connect to.  You can then enable and disable access to the vNet over the peer, and specify if the peer connection should allow forwarded traffic (from a site to site VPN for example) and if this peer should be allowed to use this vNet’s gateway (if it has one).  If the vNet you’re configuring doesn’t have a remote network gateway, you can check that bottom button to use the gateway of the remote vNet instead.  Once that’s done click OK, then set up the same peering on the remote vNet.  Give it a minute or two for the deployments to complete, then you should have full private IP communications between the two vNets.
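If you do have a bunch of peerings to create, here’s a rough PowerShell sketch of the same setup, assuming the AzureRM.Network peering cmdlets and hypothetical vNet and resource group names (swap in your own):

# Hypothetical vNet and resource group names; swap in your own.
$vNet1 = Get-AzureRmVirtualNetwork -Name "vNet1" -ResourceGroupName "RG1"
$vNet2 = Get-AzureRmVirtualNetwork -Name "vNet2" -ResourceGroupName "RG2"

# Peering is one-directional, so it has to be created on both sides before traffic will flow.
Add-AzureRmVirtualNetworkPeering -Name "vNet1-to-vNet2" -VirtualNetwork $vNet1 -RemoteVirtualNetworkId $vNet2.Id
Add-AzureRmVirtualNetworkPeering -Name "vNet2-to-vNet1" -VirtualNetwork $vNet2 -RemoteVirtualNetworkId $vNet1.Id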

Now there’s a couple of restrictions to keep in mind.

  1. This only works across vNets in the same account.
  2. This only works across vNets in the same region, so vNets in different regions will still need to have a site to site VPN, but you only really need to have a single vNet in each region with a site to site VPN.
  3. Classic is supported, sort of.  Classic vNets can ONLY peer to RM vNets, no Classic to Classic peering is supported.

That seems to be about it, or at least all I’ve run across.

Personally I think that this feature is fantastic, and it looks like it’ll solve one of my client’s issues that we’ve been working on for about a week now. I can’t wait to dive into it more and really push the limits.


Update: It appears that there’s a bug in the backend of Azure that’s preventing people from getting set up for the service.  For the time being you have to run a second PowerShell command after the AllowVnetPeering command above.  After that command is finished, run the following command if the feature isn’t working for you.  That should kick it into working.

Register-AzureRmResourceProvider -ProviderNamespace Microsoft.Network

August 1, 2016  4:00 PM

SQL Azure != SQL Server

Denny Cherry
AWS, AWS EC2, Azure, Capacity planning, Disaster Recovery, High Availability, SQL Azure, SQL Database, SQL Server, Windows Azure

This may come as shocking news to some, but Microsoft SQL Server and Microsoft Azure SQL DB (Azure SQL Database, Azure SQL DB, SQL Azure Database, or whatever name you know it as) are very much not the same thing.  They may look similar and they may act similar, but they are very much not the same thing. The same applies to other cloud providers who are offering SQL Server in a Platform as a Service (PaaS) offering such as Amazon’s AWS. They aren’t SQL Server and they aren’t Azure SQL, they are somewhere in between. While other cloud providers are just running a managed SQL Server implementation, they aren’t exposing the full SQL Server instance to you so various features won’t be available to you in their PaaS solutions.

As an administrator it’s very important to remember this, as planning migrations and designing systems to run on one or the other have VERY different requirements. This is because Microsoft has committed to a Cloud First deployment method where the cloud is always going to be ahead of “the box” (what we install on-premises).

The code behind Microsoft SQL Server and Azure SQL DB is actually very similar. In fact SQL DB is a fork of the SQL Server code, and as new features are written into Azure SQL DB those code changes get merged back into the SQL Server codebase for release in the next service pack or the next major version of the SQL Server product.

The code that’s now included in SQL Server 2016 has been running in Azure SQL DB for months, and many of the features which were “introduced” in SQL Server 2016, such as Row Level Security, Dynamic Data Masking, Replica Seeding, etc., were all in use in Azure SQL DB well before they were introduced to the on-premises product.

Because Microsoft is releasing features to the cloud first, this means that treating PaaS services like they are just an extension of the on-premises environment just doesn’t work.

Azure SQL DB for DR

For example, if you wanted to run SQL Server on-premises and use Azure SQL DB for your DR strategy, my response would be that you’re trying to use the wrong technology.  Azure SQL DB and SQL Server are different beasts built to solve different problems.  The appropriate DR solution in Azure for a SQL Server running on-premises would be a virtual machine running in Azure IaaS, as the Azure IaaS solution would be a like-for-like solution supporting all the same features. Attempting to use a PaaS solution as a disaster recovery solution for an on-premises SQL Server deployment would mean that your DR solution isn’t the same as your production solution, with features not being available. While you may not use any of those features today, can you guarantee that 5 years from now you still won’t be using those features?

Moving large dev databases to SQL DB

If you have a large database that you want to move to Azure SQL DB for testing, then you probably don’t want to be running that in Azure SQL DB to begin with.  You probably want to run that within a VM.  That means just uploading your backup to Azure, or if it’s too large for that, shipping it to Microsoft using the Import/Export service.
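Getting a backup file up to a storage account is a quick bit of PowerShell. Here’s a rough sketch, assuming the Azure.Storage cmdlets and hypothetical account, key, container, and file names:

# Hypothetical storage account, key, container, and backup file; swap in your own.
$ctx = New-AzureStorageContext -StorageAccountName "mystorageacct" -StorageAccountKey "{StorageAccountKey}"
Set-AzureStorageBlobContent -File "C:\Backups\MyDatabase.bak" -Container "backups" -Blob "MyDatabase.bak" -Context $ctx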

If you are planning on using Azure SQL DB for your development platform, and SQL Server on-premises, you’re planning incorrectly. Because SQL DB is different from SQL Server the only development databases which should be hosted in SQL DB are databases which are hosted in SQL DB for production, or which are being moved to SQL DB.

One cloud for production, and one for DR

The thought of using multiple cloud providers for DR and expecting that to “just work” when using PaaS is problematic and naïve.  Suggesting this shows a lack of understanding about how PaaS services and the cloud work.  Each cloud provider has implemented their cloud solutions a little differently, so using one as DR for another isn’t going to be something which you can do right out of the box.  If you did want to put something like that together, it would require that your application be written to handle that sort of workload, with error handling and complex asynchronous messaging to ensure that databases at both cloud providers are up to date with current information.

Even when you are using PaaS, a hybrid co-location provider to public cloud provider architecture is a better solution than trying to span public cloud providers. While there are very large firms who broker applications between clouds, this is difficult and requires advanced networking, automation, and coding skills.

Outgrowing SQL DB

There’s the worry that the database might outgrow the PaaS solution.  This is definitely a concern for those who are building services for PaaS platforms.  However, if you have designed the sharding of your database correctly then this shouldn’t be an issue.  If you haven’t designed your sharding correctly, then your implementation will be a nightmare and it will need re-architecting in the future.  If you plan on simply moving monolithic databases from on-premises to a PaaS database platform, your migration will most likely fail. Back in the early days of Azure SQL Database this was a painful proposition (I believe the initial white paper was 150 pages). Now Microsoft offers a set of tools called Elastic Scale which greatly reduce any pain associated with sharding your database.

These are among the reasons that you need the person who designs your cloud architecture to have a solid understanding of the cloud and the limits of the various options available. Even then, you need someone who can think like an architect: documenting what your solution requirements are, understanding how those requirements are supported by a solution, and how to implement them in a way that works with your solution goals. If the person who’s designing your cloud migration doesn’t understand all the various solutions and options, then the migration is bound to have problems, many of which can be very complex to solve, taking a lot of time and potentially a decent amount of outage time to resolve; not to mention wasted consulting dollars.

Not every database (or application) should be running in PaaS. Azure SQL Database is the best choice for apps that live in the cloud. If you are developing a new customer-facing app, this is probably your best choice. In all likelihood you will never run your ERP system in a PaaS database. Knowing when to place systems in a PaaS solution, when to place systems in an IaaS solution, and when to place systems on-premises is the mark of a true solutions architect.


July 27, 2016  4:00 PM

Stopping Azure Services Automatically

Denny Cherry
Azure, SQL Server

Like most speakers who give presentations using Microsoft Azure, I’ve managed to leave services running for days (or weeks) without realizing it. To help solve this I put together a runbook that runs in our Azure account every night. It turns off VMs, scales SQL DBs down to the Basic size, and alerts you if there are any SQL DWs or HDInsight clusters running (after they have been running for 3 days it’ll just delete them for you).
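To give you an idea of what’s involved, here’s a minimal sketch of just the VM shutdown piece, assuming the AzureRM compute cmdlets and that the runbook has already authenticated (the full runbook does quite a bit more):

# Stop every VM in the subscription that is still running.
foreach ($vm in Get-AzureRmVM) {
    $status = Get-AzureRmVM -Name $vm.Name -ResourceGroupName $vm.ResourceGroupName -Status
    $power = ($status.Statuses | Where-Object { $_.Code -like "PowerState/*" }).DisplayStatus

    if ($power -eq "VM running") {
        Stop-AzureRmVM -Name $vm.Name -ResourceGroupName $vm.ResourceGroupName -Force
    }
}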

Since we found it so useful we went ahead and published it on GitHub so that we can share it with the community. You can review the code on GitHub here and deploy it to Azure using the big “Deploy to Azure” button available on the GitHub page.

Check the GitHub repo, as we’ll be adding more resource types to be checked and shut down as time goes on.


July 20, 2016  4:00 PM

PASS Summit 2016: Karaoke Time

Denny Cherry

I normally announce my PASS Summit Karaoke Party during my PASS Summit pre-conference webcast, but I’m not sure when I’ll have a chance to schedule the webcast, and I felt that waiting would put my party announcement too close to PASS for those that wanted to attend.  So this year I’m doing a separate blog post in advance.

This year, I’m pleased to say that for the fifth year in a row (I think it’s the 5th year) the Karaoke party is on again.  This year’s party is sponsored by Idera and SIOS (as well as, of course, Denny Cherry & Associates Consulting).

This year we’re going back to the bar we were at last year, which would be “Amber” at 2214 1st Ave in Seattle. It’s about a 15 minute walk (or so) from the Seattle convention center.  We’ll have the great band from Rockaraoke once again rocking the house down. I’m told that they’ll be bringing an even bigger and better setup than before.  Check out their website for the list of songs that they have in their playbook.

Just like in prior years the party starts at 9:30pm (the welcome reception ends at 9pm) and goes until 1:30am.

Tickets for the party are available through Eventbrite where you’ll see open bar tickets and cash bar tickets.  If you want free drinks you’ll want to select the open bar tickets.  If those run out (or you don’t want to drink) you’ll want the cash bar tickets.  You do NOT need both an open bar ticket and a cash bar ticket, you only need one or the other.

If you haven’t picked up your wristband for the open bar by 11pm that night from the bouncer at the front door it may be given away to someone else.

I look forward to seeing everyone at PASS and as many as possible at the party.  We’ll have the upstairs balcony open this year as well as the patio, so there’s plenty of room for everyone.


July 18, 2016  2:00 PM

New Webcast and #datachat: The first year of SQL Server 2016 in Production

Denny Cherry

Join Joey D’Antoni and me on July 29th at 11am Pacific Time for a webcast where we’ll be talking about our first year of managing Microsoft SQL Server 2016 in production.  You read that correctly: one of our clients was an official early adopter with Microsoft, and they have been running SQL Server 2016 since August 2015.

We’re combining this rather open-format webcast with a #datachat on Twitter, where we’ll be posting questions and answers as well as chatting on the webcast about our experiences with Microsoft SQL Server 2016.

We hope that you’ll join us on the webcast (registration is required).  It should be a good time while being informative about some of the issues that we’ve run across.

