SQL Server with Mr. Denny


August 17, 2016  4:00 PM

Making Azure PowerShell Scripts Work in PowerShell and As RunBooks

Denny Cherry
Azure

Runbooks are very powerful tools which allow you to automate PowerShell commands that need to be run at different times. One of the problems that I've run across when dealing with Azure Runbooks is that there's no built-in way to use the same script on-premises during testing and again as the deployed runbook. This is because of the way that authentication has to be handled when setting up a runbook.

The best way to handle authentication within a runbook is to store the authentication within the Azure Automation configuration as a stored credential. The problem here is that you can't use this credential while developing your runbook in the normal PowerShell ISE.
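
If you haven't created that stored credential yet, here's a minimal sketch of creating the credential asset with PowerShell (you can also do it in the portal). The resource group, Automation account, and asset names below are just placeholders; the asset name is what you'd later pass to Get-AutomationPSCredential.

# Sketch: create the credential asset in an Azure Automation account.
# The resource group, Automation account, and asset names are placeholders.
$user = "runbookaccount@contoso.onmicrosoft.com"
$password = Read-Host -Prompt "Enter the account password" -AsSecureString
$value = New-Object System.Management.Automation.PSCredential ($user, $password)

New-AzureRmAutomationCredential -ResourceGroupName "AutomationRG" `
    -AutomationAccountName "MyAutomationAccount" `
    -Name "AzurePortalCreds" `
    -Value $value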

One option which I’ve come up with is a little bit of TRY/CATCH logic that you can put into the PowerShell Script, which you’ll find below.

In this sample code we use a variable named $cred to pass authentication to the add-AzureRmAccount (and the add-AzureAccount) cmdlet. If that variable has no value in it, then we try to call get-AutomationPSCredential. If the script is being run within the Azure Runbook environment then this will succeed and we'll get a credential into the $cred variable. If not, the call will fail and the runner will be prompted for their Azure credentials through a PowerShell dialog box. Whatever credentials are entered are saved into the $cred variable.

When we get to the add-AzureRmAccount and/or the add-AzureAccount cmdlets we pass in the value from $cred into the -Credential input parameter.

The reason that I've wrapped the get-AutomationPSCredential cmdlet in the IF block that I have is so that the script can be run over and over again in PowerShell without asking you to authenticate each time. I left the calls to add-AzureRmAccount and add-AzureAccount inside the IF block so that they are only called on the first run, as there's no point in calling add-AzureRmAccount every time unless we are authenticating for the first time.

if (!$cred) {
    try {
        # Inside an Azure Automation runbook this pulls the stored credential asset
        # ($AzureAccount holds the name of the credential asset).
        [PSCredential] $cred = Get-AutomationPSCredential -Name $AzureAccount
    }
    catch {
        # Outside of Azure Automation the cmdlet isn't available, so prompt instead.
        write-warning ("Unable to get runbook account. Authenticate manually.")
        [PSCredential] $cred = Get-Credential -Message "Enter Azure Portal Creds"

        if (!$cred) {
            write-warning "Credentials were not provided. Exiting."
            return
        }
    }

    try {
        add-AzureRmAccount -Credential $cred -InformationVariable InfoVar -ErrorVariable ErrorVar -ErrorAction Stop
    }
    catch {
        Clear-Variable cred
        write-warning ("Unable to authenticate to AzureRM using the provided credentials")
        write-warning ($ErrorVar)
        return
    }

    try {
        add-AzureAccount -Credential $cred -InformationVariable InfoVar -ErrorVariable ErrorVar -ErrorAction Stop
    }
    catch {
        Clear-Variable cred
        write-warning ("Unable to authenticate to AzureSM using the provided credentials")
        write-warning ($ErrorVar)
        return
    }
}

You'll be seeing this coming up shortly as part of a larger PowerShell script that I'll be releasing on GitHub to make life easier for some of us in Azure.

Denny

August 10, 2016  4:00 PM

It’s the cloud, it’s highly available. Do I need to worry about HA and DR?

Denny Cherry
Cloud Computing, Clustering/High availability, High Availability

Short answer: Yes.

While yes, the cloud is highly available, services can still be taken offline due to hardware failures, host server reboots for patching, etc. Can your application survive being down for several minutes in the middle of the day?

If the answer to that question is "no" (and the answer to that question probably is "no"), then you need to build High Availability into your environment's design when you move to the cloud. If you don't build your environment with highly available services, then you'll be disappointed in your experience being hosted in the cloud.

The same applies for disaster recovery. If you don't have a DR plan for your systems which are running within the cloud platform, then when something does happen that is outside of your, and your cloud provider's, control, you won't have a good experience. Your disaster recovery plan could be as simple as backing up databases and file servers to a storage account in another region of the cloud platform. Or it could be as complicated as running databases configured in Always On Availability Groups with replicas hosted in three regions of the cloud platform, your web tier hosted in three different regions, and a geographic load balancer configured on top of the web tier to route users to their closest geographical site and away from the site which is down during a DR event.
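
As a rough illustration of the simple end of that spectrum, here's a minimal sketch of pushing a nightly backup file to a storage account sitting in another region with Azure PowerShell. The storage account name, key, container, and file path are all placeholders.

# Sketch: copy a local backup file to a storage account in another region.
# The storage account name, key, container, and file path are placeholders.
$ctx = New-AzureStorageContext -StorageAccountName "mydrbackups" `
    -StorageAccountKey "<storage account key>"

Set-AzureStorageBlobContent -File "D:\Backups\SalesDB_Full.bak" `
    -Container "sqlbackups" `
    -Blob "SalesDB_Full.bak" `
    -Context $ctx `
    -Force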

The amount of complexity which is built into the configuration is completely up to the business as to how much or little high availability and disaster recovery they are willing to pay for, and how much down time they are willing to accept due to patching (for HA) and for a total site failure (for DR). We all want no downtime and no data loss, but these things come at a price and we need to understand what these prices are going to be before we start spinning up services in the cloud.

Denny


August 2, 2016  7:19 AM

Site to Site VPNs no longer needed for vNets in the same Azure region

Denny Cherry
Azure, Networking, Windows Azure

Up until August 1st, if you had two vNets in the same Azure region (US West for example) you needed to create a site to site VPN between them in order for the VMs within each vNet to be able to see each other. I'm happy to report that this is no longer the case (it is still the default configuration, though). On August 1st, 2016 Microsoft released a new version of the Azure portal which allows you to enable vNet peering between vNets within an account.

Now this feature is in public preview (aka beta) so you have to turn it on, which is done through Azure PowerShell. Thankfully it uses the Register-AzureRmProviderFeature cmdlet, so you don't need to have the newest Azure PowerShell installed, just something fairly recent (I have 1.0.7 installed). To enable the feature, just request to be included in the preview like so (don't forget to log in with add-AzureRmAccount and then select-AzureRmSubscription).

Register-AzureRmProviderFeature -FeatureName AllowVnetPeering -ProviderNamespace Microsoft.Network -Force

Now this "should" register you pretty quickly, but it took about an hour before I was able to actually set up a peer. The first step is to check and see if you are registered or registering by using the script below. If this says Registering then you have to wait. If it says Registered then you should be good to go. If you show as Registered and still get errors, give it an hour and then try again.

Get-AzureRmProviderFeature -FeatureName AllowVnetPeering -ProviderNamespace Microsoft.Network

To actually set up the peering you have to specify which vNets are allowed to talk to which other vNets. The easiest way to do this is in the Azure Portal (unless you have a bunch to do, in which case use PowerShell; there's a sketch of that after the portal walkthrough below). Log into the portal and navigate to your Virtual Networks. In the properties of the vNet you'll see a new option called "Peerings".


Select this and click the "Add" button, which will get you the new peering menu shown below.

Give the peer a name (I used the name of the vNet that I was connecting to), then specify whether the peering is to an RM or Classic vNet (yes, you read that correctly, Classic is supported, to a degree), the subscription, and the vNet that you want to connect to. You can then enable or disable access to the vNet over the peer, and specify whether the peer connection should allow forwarded traffic (from a site to site VPN for example) and whether this peer should be allowed to use this vNet's gateway (if it has one). If the vNet you're configuring doesn't have a network gateway of its own, you can check that bottom button to use the gateway of the remote vNet instead. Once that's done click OK, then set up the same peering on the remote vNet. Give it a minute or two for the deployments to complete, then you should have full private IP communications between the two vNets.
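
If you do have a bunch of vNets to connect, here's a rough sketch of the same thing in PowerShell using the Add-AzureRmVirtualNetworkPeering cmdlet. The resource group and vNet names are placeholders, and just like the portal you have to create the peering from each side.

# Sketch: peer two vNets in the same region with PowerShell.
# The resource group and vNet names are placeholders.
$vnet1 = Get-AzureRmVirtualNetwork -ResourceGroupName "MyRG" -Name "vNet1"
$vnet2 = Get-AzureRmVirtualNetwork -ResourceGroupName "MyRG" -Name "vNet2"

# Create the peering in both directions, just like repeating the step in the portal.
Add-AzureRmVirtualNetworkPeering -Name "vNet1-to-vNet2" -VirtualNetwork $vnet1 -RemoteVirtualNetworkId $vnet2.Id
Add-AzureRmVirtualNetworkPeering -Name "vNet2-to-vNet1" -VirtualNetwork $vnet2 -RemoteVirtualNetworkId $vnet1.Id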

Now there’s a couple of restrictions to keep in mind.

  1. This only works across vNets in the same account.
  2. This only works across vNets in the same region, so vNets in different regions will still need to have a site to site VPN, but you only really need to have a single vNet in each region with a site to site VPN.
  3. Classic is supported, sort of.  Classic vNets can ONLY peer to RM vNets, no Classic to Classic peering is supported.

That seems to be about it, or at least all I’ve run across.

Personally I think that this feature is fantastic, and it looks like it’ll solve one of my client’s issues that we’ve been working on for about a week now. I can’t wait to dive into it more and really push the limits.

Denny

Update: It appears that there's a bug in the backend of Azure that's preventing people from getting set up for the service. For the time being you have to run a second PowerShell command after the AllowVnetPeering command above. Once that command has finished, run the following command if the feature isn't working for you. That should kick it into working.

Register-AzureRmResourceProvider -ProviderNamespace Microsoft.Network


August 1, 2016  4:00 PM

SQL Azure != SQL Server

Denny Cherry
AWS, AWS EC2, Azure, Capacity planning, Disaster Recovery, High Availability, SQL Azure, SQL Database, SQL Server, Windows Azure

This may come as shocking news to some, but Microsoft SQL Server and Microsoft Azure SQL DB (Azure SQL Database, Azure SQL DB, SQL Azure Database, or whatever name you know it as) are very much not the same thing.  They may look similar and they may act similar, but they are very much not the same thing. The same applies to other cloud providers who are offering SQL Server in a Platform as a Service (PaaS) offering such as Amazon’s AWS. They aren’t SQL Server and they aren’t Azure SQL, they are somewhere in between. While other cloud providers are just running a managed SQL Server implementation, they aren’t exposing the full SQL Server instance to you so various features won’t be available to you in their PaaS solutions.

As an administrator it's very important to remember this, as planning migrations and designing systems to run on one or the other have VERY different requirements. This is because Microsoft has committed to a cloud-first deployment method where the cloud is always going to be ahead of "the box" (what we install on-premises).

The code behind Microsoft SQL Server and Azure SQL DB is actually very similar. In fact SQL DB is a fork of the SQL Server code, and as new features are written into Azure SQL DB those code changes get merged back into the SQL Server codebase for release in the next service pack or the next major version of the SQL Server product.

The code that’s now included in SQL Server 2016 has been running in Azure SQL DB for months and many of the features which were “introduced” in SQL Server 2016 such as Row Level Security, Dynamic Data Masking, Replica Seeding, etc. were all in use in Azure SQL DB well before they were introduced to the on-premises product.

Because Microsoft is releasing features to the cloud first, this means that treating PaaS services like they are just an extension of the on-premises environment just doesn’t work.

Azure SQL DB for DR

For example, if you wanted to run a SQL Server on-premises and use Azure SQL DB for your DR strategy, my response would be that you’re trying to use the wrong technology.  Azure SQL DB and SQL Server are different beasts built to solve different problems.  The appropriate DR solution in Azure for a SQL Server running on-premises would be a virtual machine running in Azure IaaS  as the Azure IaaS solution would be a like for like solution supporting all the same features. Attempting to use a PaaS solution as a Disaster Recovery solution for an on-premises SQL Server deployment would mean that your DR solution isn’t the same as your production solution with features not being available. While you may not use any of those features today, can you guarantee that 5 years from now you still won’t be using those features?

Moving large dev databases to SQL DB

If you have a large database that you want to move to Azure SQL DB for testing, then you probably don’t want to be running that in Azure SQL DB to begin with.  You probably want to run that within a VM.  That means just uploading your backup to Azure, or if it’s too large for that just ship it to Microsoft using the import/export service.

If you are planning on using Azure SQL DB for your development platform, and SQL Server on-premises, you’re planning incorrectly. Because SQL DB is different from SQL Server the only development databases which should be hosted in SQL DB are databases which are hosted in SQL DB for production, or which are being moved to SQL DB.

One cloud for production, and one for DR

The thought of using multiple cloud providers for DR and expecting that to "just work" when using PaaS is problematic and naïve. Suggesting this shows a lack of understanding about how PaaS services and the cloud work. Each cloud provider has implemented their cloud solutions a little differently, so using one as DR for another isn't going to be something which you can do right out of the box. If you did want to put something like that together, it would require that your application be written to handle that sort of workload and error handling, using complex asynchronous messaging to ensure that the databases at both cloud providers are kept up to date with current information.

Even when you are using PaaS a hybrid co-location provider to public cloud provider architecture is a better solution than trying to span public cloud providers. While there are very large firms who broker applications between clouds, this is difficult and requires advanced networking, automation, and coding skills.

Outgrowing SQL DB

There’s the worry that the database might outgrow the PaaS solution.  This is definitely a concern for those who are building services for PaaS platforms.  However, if you have designed sharding of your database correctly then this shouldn’t be an issue.  If you haven’t designed your sharding correctly, then your implementation will be a nightmare and it will need re-architecting in the future.  If you plan on simply moving monolithic databases from on-premises to a PaaS database platform, your migration will most likely fail. Back in the early days of Azure SQL Database this was a painful proposition (I believe the initial white paper was 150 pages). Now Microsoft offers a set of tools called elastic scale which greatly reduce any pain associated with sharding your database.

These are among the reasons that you need the person who designs your cloud architecture to have a solid understanding of the cloud and the limits of the various options available. Even then, you need someone who can think like an architect: documenting what your solution requirements are, understanding how those requirements are supported by a solution, and how to implement them in a way that works with your solution goals. If the person who's designing your cloud migration doesn't understand all the various solutions and options, then the migration is bound to have problems, many of which can be very complex to solve, taking a lot of time and potentially a decent amount of outage time to resolve; not to mention wasted consulting dollars.

Not every database (or application) should be running in PaaS. Azure SQL Database is the best choice for apps that live in the cloud. If you are developing a new customer-facing app, this is probably your best choice. In all likelihood you will never run your ERP system in a PaaS database. Knowing when to place systems in a PaaS solution, when to place systems in an IaaS solution, and when to place systems on-premises is the mark of a true solutions architect.

Denny


July 27, 2016  4:00 PM

Stopping Azure Services Automatically

Denny Cherry
Azure, SQL Server

Like most speakers who give presentations using Microsoft Azure, I've managed to leave services running for days (or weeks) without realizing it. To help solve this I put together a runbook that runs in our Azure account every night. It turns off VMs, scales SQL DBs down to the Basic size, and alerts you if there are any SQL DWs or HDInsight clusters running (after they have been running for 3 days it'll just delete them for you).
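
If you're curious what the heart of that logic looks like, here's a minimal sketch of the VM shutdown and SQL DB scale-down pieces. The real runbook on GitHub does more than this (exclusion lists, the SQL DW and HDInsight checks, alerting), so treat this as an illustration rather than the published script.

# Sketch: stop every VM and scale every Azure SQL DB down to Basic.
# The published runbook handles exclusions, SQL DW, and HDInsight on top of this.
foreach ($vm in Get-AzureRmVM) {
    Stop-AzureRmVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name -Force
}

foreach ($rg in Get-AzureRmResourceGroup) {
    foreach ($server in Get-AzureRmSqlServer -ResourceGroupName $rg.ResourceGroupName) {
        Get-AzureRmSqlDatabase -ResourceGroupName $rg.ResourceGroupName -ServerName $server.ServerName |
            Where-Object { $_.DatabaseName -ne "master" } |
            ForEach-Object {
                Set-AzureRmSqlDatabase -ResourceGroupName $_.ResourceGroupName -ServerName $_.ServerName `
                    -DatabaseName $_.DatabaseName -Edition "Basic" -RequestedServiceObjectiveName "Basic"
            }
    }
}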

Since we found it so useful we went ahead and published it on GitHub so that we can share it with the community. You can review the code on GitHub here and deploy it to Azure using the big “Deploy to Azure” button available on the GitHub page.

Check the GitHub repo as we'll be adding more resource types to be checked and shut down as time goes on.

Thanks,
Denny


July 20, 2016  4:00 PM

PASS Summit 2016: Karaoke Time

Denny Cherry

I normally announce my PASS Summit Karaoke Party during my PASS Summit pre-conference webcast, but I'm not sure when I'll have a chance to schedule the webcast, and I felt that would put my party announcement too close to PASS for those that want to attend. So this year I'm doing a separate blog post in advance.

This year, I'm pleased to say that for the fifth year in a row (I think it's the 5th year) the Karaoke party is on again. This year's party is sponsored by Idera and SIOS (as well as, of course, Denny Cherry & Associates Consulting).

This year we’re going back to the bar we were at last year which would be “Amber” at 2214 1st Ave in Seattle. It’s about a 15 minute walk from the Seattle convention center (or so).  We’ll have the great band from Rockaraoke once again rocking the house down. I’m told that they’ll be bringing an even bigger and better setup than before.  Check out their website for the list of songs that they have in their playbook.

Just like in prior years the party starts at 9:30pm (the welcome reception ends at 9pm) and goes until 1:30am.

Tickets for the party are available through Eventbrite where you’ll see open bar tickets and cash bar tickets.  If you want free drinks you’ll want to select the open bar tickets.  If those run out (or you don’t want to drink) you’ll want the cash bar tickets.  You do NOT need both an open bar ticket and a cash bar ticket, you only need one or the other.

If you haven’t picked up your wristband for the open bar by 11pm that night from the bouncer at the front door it may be given away to someone else.

I look forward to seeing everyone at PASS and as many as possible at the party. We'll have the upstairs balcony open this year as well as the patio, so there's plenty of room for everyone.

Denny


July 18, 2016  2:00 PM

New Webcast and #datachat: The first year of SQL Server 2016 in Production

Denny Cherry

Join Joey D'Antoni and me on July 29th at 11am Pacific Time for a webcast where we'll be talking about our first year of managing Microsoft SQL Server 2016 in production. You read that correctly; one of our clients was an official early adopter with Microsoft, and they have been running SQL Server 2016 since August 2015.

We’re combining this rather open format webcast with a #datachat on twitter where we’ll be posting questions and answers as well as chatting on the webcast about our experiences with Microsoft SQL Server 2016.

We hope that you’ll join us on the webcast (registration is required).  It should be a good time while being informative about some of the issues that we’ve run across.

Denny


July 13, 2016  3:00 PM

PASS Summit Speaker Idol 2016 Is a Go!

Denny Cherry
conference, presentations

If you have attended the PASS Summit for the last couple of years you have probably heard about the PASS Summit Speaker Idol competition. If you haven't, you're hearing about it now.

For those that haven't been to the PASS Summit before, it's a great conference to learn a ton about Microsoft SQL Server from other people who use SQL Server on a daily basis, as well as from the people at Microsoft who make and support Microsoft SQL Server.

Becoming a speaker at the PASS Summit is always tough. There are hundreds of people who submit almost one thousand sessions for the conference, and there are only a little over one hundred session slots available. Needless to say the competition is tough, especially for someone who hasn't ever spoken at the PASS Summit, so the speaker selection committee doesn't know anything about your speaking style.

So a couple of years ago I had a crazy idea to steal the format and the idea for speaker idol from the TechEd conference and bring it to the PASS summit.   And PASS took a chance and let me run with it.

So now in our third year, the PASS Summit Speaker Idol competition is back.  We’ll be using the same basic rules as last year with some slight tweaks.

Eligibility

You must have spoken at one of the following events (in a format other than a lightning talk, where your session had at least 30 minutes of content delivered by you):

  • SQL Saturday
  • SQL Bits
  • SQL Server Days
  • Microsoft Ignite
  • Dev Connections
  • Other large event with a national or international reach

You have not spoken at the PASS Summit or PASS Business Conference for a breakout, spotlight, or pre-con session. (If you gave a lightning talk and no other session, you are eligible.)

You are not scheduled to speak at the 2016 PASS Summit.

You will be in Seattle during the PASS Summit (attending PASS isn’t needed, and if you aren’t attending PASS we will get you to the session room for Speaker Idol).

You are available to attend whichever Speaker Idol session(s) you are assigned to (including the final session on Friday afternoon).

You have a 5 minute technical presentation which you can give. (Time management is your responsibility.)

You sign up on the registration form before August 31st, 2016 at 11:59:59pm (23:59:59) Pacific time according to Denny Cherry's Verizon Cell Phone.

You are not a member of the PASS Board of Directors or an employee of PASS.

You are not employed by a sponsor of Speaker Idol

Contingencies

If a contestant drops out of the contest before the PASS Summit starts, the organizer (Denny Cherry) will reach out to the next person on his list that wasn't accepted and offer them a position in the contest. People will be offered spots in the competition until someone accepts. The organizer will NOT announce how many people, if any, turned down a spot in the competition.

If a contestant drops out of the contest during the PASS Summit (for purposes of this clause “during the PASS Summit” is defined as from the time that Denny gets to Seattle, which will probably be Saturday or Sunday before the conference, until Friday afternoon) no changes to the lineup will be made.

If a contestant wins their round and can not continue to the finals for any reason, the runner up from their round will be declared the winner of their round and will be moved to the finals.

If a judge is not able to complete the competition they will not be replaced.

Other Rules

The wild card winner will be selected by the judges at the end of the Friday morning round from the runners up from each round.  If a winner is not able to compete in the finals, and the runner up is already speaking in the finals, the judges may consider all contestants from that round as eligible for the wildcard slot.

Judges do not have to recuse themselves because a friend is in the competition.  Judges are expected to be fair.

You are responsible for your own travel to the PASS Summit to compete in this competition.

Presentation Format

You may present your session in any format that you'd like. You can view last year's recordings via PASS TV.

While you can use any format that you’d like, it is recommended that you use a format which will show the judges how your presentation skills will translate from a 5 minute session to a 60 or 75 minute session.

Contestants are allowed to watch the other sessions during speaker idol and make changes to their presentation as they see fit.

Signing Up

In case you missed it above, the Sign Up Form is <- there.

Stay tuned for announcements on the judges and the contestants.  (Judges won’t be announced until after the PASS schedule is finalized as I need to see who’s available for the sessions.)

Thanks,

Denny


July 5, 2016  4:00 PM

What this Query Store bug tells me

Denny Cherry
SQL Server 2016

You may have heard that there's a bug in SQL Server 2016 related to the Query Store in non-Enterprise editions. It's a less-than-dangerous bug: in editions of SQL Server 2016 other than Enterprise Edition, when the space allocated to the Query Store fills up, it stops collecting data. Nothing more dangerous than that.

What this tells me is that at some point during the development cycle the Query Store was going to be Enterprise Edition only.

But thankfully someone at Microsoft realized this would be an awful thing and they introduced it into all the editions.

This also means that this should be pretty quick to fix in a later CU, as it'll just be a matter of moving the call to the function that cleans that up from inside an if/then block to outside the if/then block.

Hopefully this will be that easy to fix, and a fix can get rolled out in one of the first couple of CUs. Until then, just set up a job to clean out the old data from the Query Store nightly. If you are hitting this bug, I'd also recommend increasing the size of your Query Store to 1 GB (or 5 on a really busy system) to help avoid the bug and make sure that you have enough room for data in the Query Store between cleanups.
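
If you want a starting point for that, here's a small sketch of both pieces using Invoke-Sqlcmd. The instance and database names are placeholders, and you'd schedule the CLEAR statement in a nightly Agent job (keep in mind it wipes all of the Query Store data for that database).

# Sketch: raise the Query Store size cap and clear out the Query Store data nightly.
# The instance and database names are placeholders.
$instance = "MySqlServer"
$database = "MyDatabase"

# Bump the cap to 1 GB (use more on a really busy system).
Invoke-Sqlcmd -ServerInstance $instance -Query `
    "ALTER DATABASE [$database] SET QUERY_STORE (MAX_STORAGE_SIZE_MB = 1024);"

# Nightly cleanup: CLEAR removes all Query Store data for the database.
Invoke-Sqlcmd -ServerInstance $instance -Query `
    "ALTER DATABASE [$database] SET QUERY_STORE CLEAR;"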

Denny


June 29, 2016  2:00 PM

How is performance impacted by having too many NUMA nodes?

Denny Cherry
Database performance, NUMA, SQL Server

Having too many NUMA nodes can lead to some really strange CPU behavior. What you'll probably see is that the CPU workload of the machine isn't evenly balanced across the CPUs. If, for example, you have 32 cores on the VM and 16 NUMA nodes with two cores per NUMA node, you'll probably see two cores (it might be four or six; it depends on a lot of factors) running very hot compared to the other cores on the server.

At one client I was working with in 2015, they had a server which was configured very similarly to what I've described above. They had 32 cores spread across 11 NUMA nodes. One NUMA node had four cores, the next nine had three cores each, while the 11th NUMA node had one core in it.

When monitoring the performance on the server what we saw was that three cores were running at 100% (or close to it) while the other cores were running at about 4%.

The reason for this, is that SQL Server is NUMA aware, and it was trying to get the best performance possible out of the configuration that it had access to.

The solution in this case was to reconfigure the virtual machine so that it had only two vNUMA nodes. We did this by changing the number of virtual sockets to 2 and putting 16 virtual cores on each virtual socket. Once this was done, CPU load on the server was much more balanced across the cores. Instead of 3 cores running at 100%, 16 cores were handling the same workload (that workload was still isolated to a single NUMA node, but now the node had 16 cores in it), and the CPUs were bouncing around the 15-20% range. Suddenly the box was happier, and more importantly the application the end users were using was behaving much better, and queries were responding much faster as they no longer needed to wait for a CPU to become available.

Now you may be asking yourself how a system could end up with a screwy NUMA configuration like this. And the answer is pretty simple. The system grew over time, and more and more cores were added to the server little by little as the years went by. As the system needed more CPU resources more cores were added, but no attention was paid to how those cores were presented to Windows. So we ended up with Windows seeing 32 cores in a very strange configuration.

Because most SQL Servers are running inside VMs these days, we need to be careful how we configure those VMs to ensure that we get the proper configuration of the virtual machine. If we don’t have the correct configuration of the virtual hardware there’s little hope of getting good performance out of the VM.

Check your NUMA node configuration to make sure that it's correct, and if it isn't, correct it. Especially if the configuration is very odd looking. And if you aren't sure, check with us; we're happy to help.
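
If you want a quick look at how SQL Server sees the NUMA layout it's been given, here's a small sketch querying sys.dm_os_nodes through Invoke-Sqlcmd. The instance name is a placeholder.

# Sketch: list the NUMA nodes SQL Server sees and how many schedulers each one has.
# The instance name is a placeholder; the DAC node is filtered out.
Invoke-Sqlcmd -ServerInstance "MySqlServer" -Query @"
SELECT node_id,
       memory_node_id,
       online_scheduler_count,
       active_worker_count
FROM sys.dm_os_nodes
WHERE node_state_desc <> 'ONLINE DAC';
"@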

Denny

