SQL Server with Mr. Denny


December 16, 2015  5:00 PM

Get Operation Status of an Azure Async Operation

Denny Cherry
Azure, Powershell, PowerShell Scripts, SQL Server, Windows Azure

Recently I was trying to make a drive bigger on an Azure VM by simply using the Set-AzureOSDisk PowerShell cmdlet (and its -ResizedSizeInGB parameter). The problem is that this command is asynchronous, meaning that it tells you it's done even if it hasn't finished yet.

The problem with this is that there's no way in the Azure portal to see if the request has finished or not. All you can do is hope that it finishes without throwing an error.

Well, I'm pleased to say that (with some hacking of some other scripts that I found online) I was able to put together a PowerShell cmdlet that you can download and use to get this information from Azure. This cmdlet uses the Microsoft Azure Service Management API to get the information about the request.

You can think of this as a PowerShell version of the Microsoft Azure Get Operation Status API.

function Get-AzureStatus
{
    [CmdletBinding()]
    param (
        [Parameter(ParameterSetName="ByRequestId", Position=1, Mandatory)]
        [string] $RequestId,

        [Parameter()]
        [string] $SubscriptionId
    )

    # Default to the current default Azure subscription if one wasn't supplied.
    if (!$SubscriptionId)
    {
        $SubscriptionId = (Get-AzureSubscription | ? IsDefault).SubscriptionId
    }

    # Find a certificate-based account that has access to this subscription.
    $account = Get-AzureAccount | ? { $_.Subscriptions.Contains($SubscriptionId) -and $_.Type -eq "Certificate" }

    if (!$account)
    {
        throw "Can't find an account for Azure subscription $SubscriptionId"
    }

    # For certificate accounts, the account Id is the certificate thumbprint.
    $certificate = ls Cert:\CurrentUser\My | ? Thumbprint -eq $account.Id

    if (!$certificate)
    {
        throw ("Can't find the certificate for Azure account {0}" -f $account.Id)
    }

    # Call the Get Operation Status service management API for this request.
    $uri = "https://management.core.windows.net/${SubscriptionId}/operations/${RequestId}"

    try
    {
        $response = Invoke-WebRequest -Method GET -Uri $uri -Certificate $certificate -Headers @{ "x-ms-version" = "2014-06-01" } -ErrorAction Ignore
    }
    catch
    {
        # The management API returns an XML error body.
        $message = [xml] $_.ErrorDetails.Message
        throw ("{0}: {1}" -f $message.Error.Code, $message.Error.Message)
    }

    # The response body is XML; return the Operation element and its children.
    $content = [xml] $response.Content
    $content.ChildNodes
}

Now using this requires that you have a certificate set up for the subscription to allow access to the Microsoft Azure API. Thankfully Cindy Gross has a great post on how to get a certificate created and uploaded to Azure.

To summarize her post:

1. Create a certificate on your machine (substitute your own certificate name for <YourCertName>).
makecert -sky exchange -r -n "CN=<YourCertName>" -pe -a sha1 -len 2048 -ss My "c:\temp\<YourCertName>.cer"

2. Upload that certificate to the options section of the OLD PORTAL

3. Map the subscription to your certificate
$subID = "Your Subscription Id"
$thumbprint = "Your Cert Thumbprint (from the portal)"
$SubscriptionName = "Your Subscription"
$myCert = Get-Item Cert:\CurrentUser\My\$thumbprint
Set-AzureSubscription -SubscriptionName $SubscriptionName -SubscriptionId $subID -Certificate $myCert

4. Install the PowerShell cmdlet above and run it, passing it the RequestId that you get from any Azure PowerShell cmdlet.
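For example, here's a rough sketch of how that looks end to end (the disk name, size, and request ID below are made-up placeholders; the request ID comes from the resize cmdlet's output or its verbose stream):

# Kick off the asynchronous resize; note the request ID the cmdlet reports.
# The disk name and new size here are placeholder values.
Set-AzureOSDisk -DiskName "MyVM-MyVM-0-201512160000" -ResizedSizeInGB 256 -Verbose

# Check on the operation using the request ID copied from the output above
# (this GUID is a placeholder).
Get-AzureStatus -RequestId "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"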

Denny

December 7, 2015  3:47 PM

Just how bad is the new Windows 2016 pricing?

Denny Cherry

You probably saw the new Windows Server 2016 Pricing FAQ that came out last week. If not, you should go read through it. It's OK, I'll wait.

So, now that you’ve read it you’re probably all worried about the new Windows Server 2016 pricing structure as it is core based, and you have to purchase a minimum of 16 cores per server.  What’s that? You didn’t actually read the FAQ?  No problem, here’s the relevant part.

Core licenses will be sold in packs of two licenses. Each processor will need to be licensed with minimum of 8 cores which is 4 two-core packs. Each physical server, including 1 processor servers, will need to be licensed with minimum of 16 cores which is 8 two-core packs. Additional cores can then be licensed in increments of two cores (one two core pack) for gradual increases in core density growth.

Needless to say, people are beginning to think that this is another SQL Server licensing disaster, like when SQL Server 2012 went from socket-based to core-based licensing. In reality this isn't that big of a deal for most servers. The way the FAQ states things, you have to buy 8 cores per socket, with a minimum of 16 cores per server. That sounds like a lot, but there's a little chart in the FAQ that's really important. I've included it below.

[Chart from the Windows Server 2016 Pricing FAQ comparing licensing costs by socket and core count]

So looking at this chart we can see that any machine with 8 or fewer cores per socket and two or fewer sockets will cost you exactly the same as it does today. Any machine with one socket and 16 or fewer cores will also cost the same as it costs today.

It's important to remember that today dual socket machines are the bulk of the servers being purchased. Most VM farms (vSphere or Hyper-V) work great on dual socket servers. That's the bulk of servers at companies right there. It's only when you get into the really big servers that things are going to change. The really big database servers are going to have a cost increase, but that cost increase is going to be nothing compared to what you are already planning on spending on the SQL Server license. The other big servers at most companies are things like Oracle and SAP, which are already running on something other than Windows.

In the long run, I see this change making almost zero impact on most companies' IT expenses.

Denny


December 3, 2015  4:05 PM

Announcing Internals for Query Tuning Training Class By Kalen Delaney

Denny Cherry
DBA training, IT training, SQL Server, Training

We at Denny Cherry & Associates Consulting are thrilled to announce that we have partnered up with the SQL Server internals guru Kalen Delaney to deliver training classes.  Our first class will be Internals for Query Tuning and will be held Monday-Wednesday April 4th, 5th and 6th of 2016 in Orange County, CA.

This is an advanced course designed for advanced SQL Server professionals. The course includes extensive demonstrations that illustrate the details of SQL Server internals as well as tuning techniques.  While this course will focus on SQL Server 2014 the information presented will be relevant for older versions of SQL Server as well.  In addition, some of the changes due in the upcoming SQL Server 2016 will be described.

This three-day course is made up of six parts, which are outlined below. The cost for this fantastic three-day class is $2,225 US. Seating is limited, so get signed up today.

The outline for this class is:

Part 1: SQL Server Metadata

  • Architecture Overview
  • Metadata Overview
  • Dynamic Management Views

Part 2: Index Structures

  • Metadata for Storage
  • Space Allocation
  • Tools for Examining Physical Structures
  • Heaps and B-Trees
  • Clustered Indexes
  • Nonclustered Indexes
  • Fragmentation
  • Filtered Indexes
  • Rebuilding Indexes
  • Partitioning Overview
  • Creating and Maintaining Partitions
  • Metadata for Partitioning
  • Columnstore Index Storage

DAY 2

Part 3: Query Processing and Query Plans

  • SHOWPLAN Options
  • Query Plan Elements
  • Types of Joins
  • Aggregation
  • Sorting
  • Columnstore Indexes: Batch vs Row Mode
  • Data Modification

Part 4: Optimization and Recompilation

  • Optimization Overview
  • SQL Server’s Query Optimizer
  • Plan Management and Reuse
  • Causes of Recompilation
  • Forcing Recompilation
  • Plan Cache Metadata

DAY 3

Part 5: Index Tuning

  • Special Index Features
  • Statistics
  • Indexed Views
  • Covering Indexes
  • Filtered Indexes
  • Included Columns
  • Indexing Guidelines

Part 6: Query Tuning

  • Query Improvements
  • Search Arguments
  • Constants and Variables
  • User Defined Functions and Computed Columns
  • Plan Guides
  • Query Hints


December 2, 2015  2:00 PM

DR Planning in Azure Requires More Than Replicating VMs to Another Data Center

Denny Cherry
Disaster planning, Disaster Recovery, Disaster recovery planning, SQL Server

While flipping through the Microsoft Azure website recently I ran across this page which talks about HA/DR configurations for SQL Server VMs. Seeing this I figured they’d be talking about AlwaysOn Availability Groups, or even SQL Server FCIs and using SIOS’s DataKeeper Cluster Edition.

Thankfully the article does talk about AGs (and database mirroring), but it doesn't mention FCIs. What it does talk about at the bottom is how you need to set up GRS (geo-redundant storage) for the storage accounts that are part of your DR plan. GRS is a blob storage configuration that keeps a copy of each blob in another data center. If you are counting on this as being part of your DR plan, you have already failed. You shouldn't ever be relying on GRS for DR for your SQL Server VMs, or really any VMs. If you need to be able to fail over to your DR site in anything that resembles a quick timeframe, spinning up new VMs and attaching existing storage to them isn't the way to do it.

More traditional DR plans, like the others talked about in the article (as well as SQL Server failover cluster instances), are going to be a much more useful DR configuration.

Denny


November 30, 2015  8:23 PM

If you fail to plan for your scale, you’ll end up with no workload at all

Denny Cherry

Today is “Cyber Monday”, the artificial “holiday” that stores have decided to create in order to separate customers from their money on the first normal work day of the holiday season. You’d think that as online retailers created this holiday, they would be prepared for it.

Apparently you would be wrong if Target’s webpage was any indication from earlier today. On the biggest online shopping day of the entire calendar year Target’s website couldn’t handle the workload and went offline telling users:

“So sorry, but high traffic’s causing delays. If you wouldn’t mind holding, we’ll refresh automatically and get things going ASAP.”

Planning for peak workloads is VERY important. If your applications are hosted on-premises this can be harder to deal with as you have to have hardware sitting around that you can run the workload on. If however you are running in the Microsoft Azure cloud you can easily scale your databases up and down without any major configuration changes. You just change the size of the databases in the portal and give the Azure backend processes some time to move things around and you are back up and running.
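If you'd rather script it than click through the portal, here's a minimal sketch using the AzureRM SQL cmdlets (the resource group, server, database, and tier below are all made-up placeholders):

# Scale a database up to Premium P2 ahead of an expected traffic spike.
# Substitute your own resource group, server, database, and service objective.
Set-AzureRmSqlDatabase -ResourceGroupName "WebProd-RG" `
    -ServerName "webprod-sql" `
    -DatabaseName "Storefront" `
    -Edition "Premium" `
    -RequestedServiceObjectiveName "P2"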

If you are using SQL Server or Oracle in a VM and you need to scale up, simply change the size of your VMs to a larger size. The VM will need to restart (so hopefully you have HA in place, so you'd scale up the passive node, let it restart, then fail over to it) and you'll have more CPU and memory to work with.
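Here's a rough sketch of that scale-up using the classic service management cmdlets (the cloud service name, VM name, and size are made-up placeholders):

# Resize the passive node; Update-AzureVM applies the change and restarts the VM.
Get-AzureVM -ServiceName "ProdSQL-Svc" -Name "ProdSQL02" |
    Set-AzureVMSize -InstanceSize "Standard_D13" |
    Update-AzureVM

# Once the resized node is back up, fail the SQL Server workload over to it,
# then repeat the resize on the other node.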

If the web farm is what needs to scale up and down, the Microsoft Azure platform can handle that for you as well by using the auto-scaling features for web farms, which allow Azure to add more web servers as load increases and then remove them when load decreases. For workloads that burst like they do on Cyber Monday, this is a perfect use case.

Now all this requires some pre-planning as you can’t just flip all of this on if you aren’t working in Microsoft Azure already. But if you are, your scale problems become very easy to solve.

If you are interested in moving to Azure so that you can have this flexibility, we would love to help you move from your current hosting provider to the Microsoft Azure cloud. Contact us and we will schedule some time to talk about your systems today, the problems that you are seeing, and how Azure can solve those problems.

Denny


November 25, 2015  4:00 PM

Inline Nonclustered Indexes

Denny Cherry
Index, SQL Server, SQL Server Indexes

One of the new features that I just recently learned about in SQL Server 2014 is the ability to create nonclustered indexes on a table inline with the creation of the table. This is handy in a couple of ways.

First, it makes it so that all the indexes get created with the table, so if there's a problem you don't have to figure out which indexes were created and which ones weren't. If the table is there, all the indexes are there.

Second, it helps greatly with caching of tempdb tables (which I talked about recently), as you can now create the indexes at the same time as the table so the temp table can be cached.

As for the syntax, it's pretty straightforward. Below is a sample table with a couple of indexes, one created on the column c2 and one created on c1 and c2. Sadly, included columns aren't supported with inline indexes, but hopefully that'll show up in a future version of SQL Server.

CREATE TABLE MyTable
(c1 int,
 c2 int INDEX IX_c2,
 INDEX IX_c1_c2 (c1, c2))
GO
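And here's a minimal sketch of the same idea applied to a temp table inside a stored procedure (the procedure, columns, and source table are made up), which is what lets tempdb cache the table:

CREATE PROCEDURE dbo.DemoTempTableCaching
AS
BEGIN
    -- The index is declared inline with the table, so no DDL runs after
    -- creation and tempdb is able to cache the temp table between executions.
    CREATE TABLE #Work
        (OrderId int,
         CustomerId int,
         INDEX IX_Work_CustomerId (CustomerId));

    INSERT INTO #Work (OrderId, CustomerId)
    SELECT OrderId, CustomerId
    FROM dbo.Orders;    -- placeholder source table
END
GO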

If you have created objects in MySQL before, this syntax should look pretty familiar, because it's basically the same one MySQL supports, so this should help people move applications from MySQL to SQL Server.

Denny


November 21, 2015  12:06 AM

Recommended reading from mrdenny for November 20, 2015

Denny Cherry
SQL Server

This week I’ve found some great things for you to read. These are a few of my favorites that I’ve found this week.

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


November 18, 2015  2:00 PM

I Don’t Want To Create Indexes Because It Will Slow Down My Application

Denny Cherry
Indexes, SQL Server, SQL Server Indexes

“I Don’t Want To Create Indexes Because It Will Slow Down My Application.”

I hear that argument all the time from software developers at clients when I tell them that we need to create indexes. This is usually followed up by "If we add indexes it will slow down the INSERT, UPDATE and DELETE statements." And yes it will, normally by nanoseconds at most. If you are building the next Twitter or Facebook, then yes, having lots of indexes will be a problem that you can measure. But small applications with hundreds or thousands of users will have such a small performance impact from having the indexes there that you'll never be able to measure the difference.

But with the indexes in place, those slow queries which you have, which are typically the reason that I'm there in the first place, will go from seconds or minutes down to milliseconds. That's a good gain for an acceptable cost: the gain is a well performing application, the cost is an imperceptible performance difference on data changes.

The reason that people are having this argument is because conference speakers (myself included) have apparently done too good of a job telling people about the cost of having indexes on their system without explaining why we still need indexes.

So to the software developers and architects of the world, indexes are good when they are needed and are being used. If I’m there telling you that we need to create a bunch of indexes then we need to create them (my record is a CREATE INDEX script that was 450,000 lines long as there were 1000 databases, one per customer). After we create them we’ll remove any indexes which aren’t being used anymore.

To the speakers who talk about indexes: we need to explain not just the downsides of indexes, but also do a better job of explaining why we need to create indexes and what the real cost of not creating the needed indexes is.

Denny


November 11, 2015  4:00 PM

The Dangers of Indexing Temp Tables

Denny Cherry
Data Tables, SQL Server, SQL Server tables, TempDB, Temporary files

Indexes are good, except when they aren't. Everything that you do in SQL Server has tradeoffs. Usually those tradeoffs are easy to see, unless they aren't.

Indexes are generally a good thing. They make queries within the database engine run faster, often a lot faster. Indexes on temp tables are also usually a good thing, unless you build them incorrectly; then, as one client of mine just found out, things can get very bad very quickly.

This client had just upgraded from SQL Server 2008 R2 to SQL Server 2014, and the Monday after we did the upgrade (the first full business day running on SQL Server 2014) things fell apart, fast. We saw huge amounts of locking and waits on tempdb. The waits were reported as PAGELATCH_IO waits, but the disks according to perfmon had a 1-2ms response time, so it was something happening in memory. All the sessions were waiting on a specific page in tempdb, 2:1:128. I looked at the page with DBCC PAGE and found that it was part of sysobjvalues, a table used to store information about temporary objects.

A Microsoft developer who looked through some minidumps from the SQL Server was able to identify the code pattern that was causing the problem. The root of the problem was that temp tables weren't being properly cached in the tempdb database. The reason for this is that the tables were being created without any indexes, and then a clustered index was being added to the temp table after the fact. In this case the code looked something like this:

CREATE TABLE #t1 (c1 int)
CREATE UNIQUE CLUSTERED INDEX I ON #t1 (c1) WITH (IGNORE_DUP_KEY = ON)
INSERT INTO #t1 (c1) SELECT * FROM @Something

We were able to resolve the issue by removing the clustered index and making the column c1 a primary key, then doing a distinct insert into the table. Long term, the developers are going to clean up the data within the .NET layer so that we know distinct values are coming into the table and we can remove the DISTINCT. The new temporary code looks like this:

CREATE TABLE #t1 (c1 int primary key, c2 int, c3 int)
INSERT INTO #t1 (c1) SELECT DISTINCT c1 FROM @Something

Finding the problem code was pretty easy. The Microsoft developer was able to give me the names of a couple of stored procedures. The rest I was able to find by searching through sys.sql_modules looking for anything with “%WITH%IGNORE_DUP_KEY%” or “%CREATE%INDEX%” in the object code. After fixing these the system started performing MUCH, MUCH better.
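For reference, the search was nothing fancier than a couple of LIKE predicates against sys.sql_modules, along these lines:

-- Find modules that add an index (or use IGNORE_DUP_KEY) after creating
-- the table, which is the pattern that prevents temp table caching.
SELECT OBJECT_SCHEMA_NAME(m.object_id) AS SchemaName,
       OBJECT_NAME(m.object_id) AS ObjectName
FROM sys.sql_modules AS m
WHERE m.definition LIKE '%WITH%IGNORE_DUP_KEY%'
   OR m.definition LIKE '%CREATE%INDEX%';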

Denny


November 5, 2015  11:55 PM

Congrats to Our PASS Booth Prize Winner

Denny Cherry
IT conferences and events, SQL Server

We have done our raffle for the PASS 2015 Consultants Corner prize, which is a $1000 Amazon gift card. Our winner this year is …

David Sun from MasterCard.

(We actually did the drawing a few days ago, but being that David works for a large company there was a lot of checking to be done before this could be announced.)

Congrats David, I’m sure that the gift card will come in handy this year.

For those that came to see us at the PASS Summit, we thank you for coming by and saying hello. If you didn't make it to the PASS Summit, be sure to get signed up for our newsletter (which is about to start up again with some great technical information). You can check out the recordings of our PASS sessions on the PASS website as soon as they are published, and don't forget to check out the Day 1 keynote, which includes videos from myself and Joey.

Denny

