SQL Server with Mr. Denny


January 18, 2016  11:30 PM

Announcing Feed Monsters…

Denny Cherry

Did you use to use Yahoo! Pipes to manipulate RSS feeds? Personally I did, and I REALLY missed it when Yahoo turned it off. So like any good IT person I sat down (eventually) and did something about it. What I ended up building turned into what I’ve called “Feed Monsters”.  Feed Monsters is a web-based service that lets you change values within RSS feeds, re-sort the items within them, and filter them.

After you get signed up you create your monster and give it an RSS feed to chew on.  Then you add parts to your monster, and each part makes a different change to the RSS feed that you’ve given it.  At launch the service supports 10 different parts that you can add to your monster.

Data Change Parts

  • Add Category from Publish Date
  • Add Prefix to Title
  • Add Suffix To Title
  • Clip Content
  • Post URL
  • Publish Date
  • Replace field {x} with field {y}
  • Title

Filter Parts

  • Top {n} Posts

Sorting Parts

  • Random Resorting

There are plenty more parts that we’ll be adding to the system in the near future to make RSS feeds even more changeable.

Registration for the service is free, and lets you try the service out for 10 days without having to pay anything. After that it’s only $20 a year (paid via PayPal) to continue using the service.  Hopefully you find the service as valuable to use as I do.  Check out the Feed Monsters website for the full details about the service.

Denny

January 13, 2016  7:00 PM

Creating a cluster without Domain Admin permissions

Denny Cherry
Cluster, Cluster management, Clustering, Clustering/High availability, Clusters, SQL Server, SQL Server availability

If you’ve ever watched a presentation where someone sets up a cluster, you’ve probably noticed that it goes pretty smoothly. The reason for this is that the account the presenter uses is a Domain Admin account.

But what about the real world?

In the real world, unless you are a systems administrator you probably won’t be a Domain Admin when creating your cluster. There are a couple of ways to get the cluster set up.

The first way is to be granted the permission to create computer objects within the domain. This is the easiest option. If this isn’t an option, then you have the second option available to you.

Pre-staging the objects is option #2. This option requires that a member of the systems administration team create the computer objects for the cluster (and any clustered resources such as Availability Groups and Failover Clustered Instances). You also need to have the Domain Admin disable those accounts. This step is critical, because if the computer accounts are enabled the Failover Cluster Manager won’t be able to use them, and neither will the cluster when it comes time to create the computer objects for the FCIs and/or the AGs.

You’ll also need to configure the computer object for the cluster itself so that it can be managed by the user who is configuring the cluster. For the other computer accounts (for the FCIs and the AGs), you need to set up the cluster’s computer account so that it can manage those other computer accounts. A rough sketch of the pre-staging steps is below.
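As a minimal sketch, assuming the ActiveDirectory PowerShell module, and using made-up names (CLUSTER01 for the cluster name object, AGLISTEN01 for an Availability Group listener, and the contoso.com OU path), pre-staging might look something like this:

# Pre-stage the cluster name object (CNO), created disabled so that
# Failover Cluster Manager can claim it during cluster creation
New-ADComputer -Name "CLUSTER01" -Path "OU=Clusters,DC=contoso,DC=com" -Enabled $false

# Pre-stage a virtual computer object (VCO) for an AG listener, also disabled
New-ADComputer -Name "AGLISTEN01" -Path "OU=Clusters,DC=contoso,DC=com" -Enabled $false

The permission grants themselves (letting the installing user manage CLUSTER01, and letting CLUSTER01 manage AGLISTEN01) are done on the Security tab of each computer object in Active Directory Users and Computers.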

Denny


January 5, 2016  4:51 PM

2015 By The Numbers

Denny Cherry
SQL Server

This post is a little later than normal. It’s been a very busy few weeks with family over until the 3rd of January.

Well, 2015 was a hell of a year for my blogging. I recorded just under 250k page views (244,305), and that isn’t even all of them (2016 will be the first year that I get a truly complete set of data, as I haven’t had page view tracking images on every blog post that I’ve written).

This year’s top 10 posts differ from the prior year’s a lot more than they have in earlier years.

  1. SQL Server 2014 Licensing (Number 3 in 2014)
  2. What Exactly Is MSDTC Any Way, and When Do I Need It? (Number 2 in 2014)
  3. Difference Between An Index and a Primary Key
  4. How To Configure MSDTC On Windows 2008 (Number 1 in 2014)
  5. What Does WITH (NOLOCK) Actually Mean
  6. Calculating Fibre Latency Correctly Is a Must When Trying To Determine Storage Performance Numbers
  7. The Process Could Not Execute sp_replcmds on x
  8. The Least Expensive SQL Server High Availability Solution (Number 22 in 2014)
  9. If you thought Database Restores Were Slow, Try Restoring From an EMC Data Domain
  10. SQL PASS Summit 2015 Karaoke Party

Of the top 10, only three were in the top 10 before (numbers 1, 2 and 3 from 2014 were 4, 2, and 1 in 2015 respectively).  Number 8 from 2015 was number 22 last year.  Everything else in the top 10 appears to be a new post from 2015 (or didn’t have a tracking image until this year).  This is pretty shocking to me.  I haven’t seen the numbers shift like this before, though granted, that could be due to better data.  Next year’s numbers will be interesting to see once I have a full year’s worth of data to look at.

Thanks everyone for reading, and enjoy 2016.

Denny


December 23, 2015  9:00 AM

Introducing Microsoft SQL Server 2016 ebook (Preview Edition)

Denny Cherry
eBook, SQL Server, SQL Server 2016

I’m pleased to report that what is probably one of the first books on Microsoft SQL Server 2016 has been released, or at least the official three-chapter preview of the book has been released by Microsoft Press.  In this three-chapter preview, written by myself, Stacia Varga, and Joseph D’Antoni, we talk about some of the new security features in SQL Server 2016, including Always Encrypted, Row-Level Security and Dynamic Data Masking.  In Chapter 4 we talk about some of the improvements to the database engine in SQL Server 2016 over SQL Server 2014, including the Query Store and Stretch Database.  And in Chapter 7 we talk about some of the improvements to SQL Server Reporting Services, including the new data visualizations, support for mobile, KPIs, and subscription enhancements.

If you are planning on deploying SQL Server 2016, or even if you just want to get a leg up on what’s coming in the new version, this is the book for you.

The best thing about this book is that it is free; there isn’t even a soul-sucking registration to go through.  Just click and download.  It’s available as a standard 8.5×11″ PDF (standard US pages) as well as a smaller PDF for mobile.  If you are looking for EPUB and MOBI files you’ll need to wait a few more weeks, as they are supposed to be available starting in January (don’t hold me to that, I’m just going off the MSDN post).

So stop reading this, and go and download the ebook.  Now!

Denny


December 16, 2015  5:00 PM

Get Operation Status of an Azure Async Operation

Denny Cherry
Azure, Powershell, PowerShell Scripts, SQL Server, Windows Azure

Recently I was trying to make a drive bigger on an Azure VM by simply using the Set-AzureOSDisk PowerShell cmdlet (and the -ResizedSizeInGB switch). The problem is that this command is asynchronous, meaning that it tells you that it’s done even if it hasn’t finished yet.

The problem with this is that there’s no way in the Azure portal to see if the request has finished or not. All you can do is hope that it finishes without throwing an error.

Well, I’m pleased to say that (with some hacking of some other scripts that I found online) I was able to put together a PowerShell cmdlet that you can download and use to get this information from Azure. This cmdlet uses the Microsoft Azure Service Management API to get the information about the request.

You can think of this as a PowerShell version of the Microsoft Azure Get Operation Status API.

function Get-AzureStatus
{
    [CmdletBinding()]
    param (
        [Parameter(ParameterSetName="ByRequestId", Position=1, Mandatory)]
        [string] $RequestId,

        [Parameter()]
        [string] $SubscriptionId
    )

    # Default to the current default subscription if one wasn't specified
    if (!$SubscriptionId)
    {
        $SubscriptionId = (Get-AzureSubscription | ? IsDefault).SubscriptionId
    }

    # Find a certificate-based account that has access to this subscription
    $account = Get-AzureAccount | ? { $_.Subscriptions.Contains($SubscriptionId) -and $_.Type -eq "Certificate" }

    if (!$account)
    {
        throw "Can't find an account for Azure subscription $SubscriptionId"
    }

    # For certificate accounts, the account Id is the certificate thumbprint
    $certificate = ls Cert:\CurrentUser\My | ? Thumbprint -eq $account.Id

    if (!$certificate)
    {
        throw ("Can't find the certificate for Azure account {0}" -f $account.Id)
    }

    $uri = "https://management.core.windows.net/${SubscriptionId}/operations/${RequestId}"

    try
    {
        $response = Invoke-WebRequest -Method GET -Uri $uri -Certificate $certificate -Headers @{ "x-ms-version" = "2014-06-01" } -ErrorAction Ignore
    }
    catch
    {
        # The Service Management API returns its error details as XML
        $message = [xml] $_.ErrorDetails.Message
        throw ("{0}: {1}" -f $message.Error.Code, $message.Error.Message)
    }

    # The response body is XML describing the operation; return its nodes
    [xml] $content = $response.Content
    $content.ChildNodes
}

Now, using this requires that you have a certificate set up for the subscription to allow access to the Microsoft Azure API. Thankfully Cindy Gross has a great post on how to get a certificate created and uploaded to Azure.

To summarize her post:

1. Create a certificate on your machine (substitute your own certificate name for <CertName>):
makecert -sky exchange -r -n "CN=<CertName>" -pe -a sha1 -len 2048 -ss My "c:\temp\<CertName>.cer"

2. Upload that certificate to the options section of the OLD PORTAL

3. Map the subscription to your certificate
$subID = "Your Subscription Id"
$thumbprint = "Your Cert Thumbprint (from the portal)"
$SubscriptionName = "Your Subscription"
$myCert = Get-Item cert:\CurrentUser\My\$thumbprint
Set-AzureSubscription -SubscriptionName $SubscriptionName -SubscriptionId $subID -Certificate $myCert

4. Install the PowerShell cmdlet above and run it, passing the RequestId that you get back from any Azure PowerShell cmdlet.
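As a quick, hypothetical usage sketch (the service and VM names are made up, and this assumes the operation context returned by Update-AzureVM exposes the request id as OperationId):

# Kick off the asynchronous resize and capture the operation context
$op = Get-AzureVM -ServiceName "MyService" -Name "SQL01" |
    Set-AzureOSDisk -ResizedSizeInGB 256 |
    Update-AzureVM

# Check whether the request has actually finished
Get-AzureStatus -RequestId $op.OperationId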

Denny


December 7, 2015  3:47 PM

Just how bad is the new Windows 2016 pricing?

Denny Cherry

You probably saw the new Windows Server 2016 Pricing FAQ that came out last week.  If not, you should go read through it.  It’s OK, I’ll wait.

So, now that you’ve read it, you’re probably all worried about the new Windows Server 2016 pricing structure, as it is core-based and you have to purchase a minimum of 16 cores per server.  What’s that? You didn’t actually read the FAQ?  No problem, here’s the relevant part.

Core licenses will be sold in packs of two licenses. Each processor will need to be licensed with minimum of 8 cores which is 4 two-core packs. Each physical server, including 1 processor servers, will need to be licensed with minimum of 16 cores which is 8 two-core packs. Additional cores can then be licensed in increments of two cores (one two core pack) for gradual increases in core density growth.

Needless to say, people are beginning to think that this is another licensing disaster like when SQL Server 2012 went from socket-based to core-based licensing.  In reality this isn’t that big of a deal for most servers.  The way the FAQ states things, you have to buy 8 cores per socket, with a minimum of 16 cores per server.  Now that sounds like a lot, but there’s a little chart in the FAQ that’s really important.  I’ve included it below.

[Chart from the FAQ comparing licensing costs by socket and core count]

So looking at this chart we can see that any machine with 8 cores per socket or fewer and two sockets or fewer will cost you exactly the same as it does today.  Any machine with one socket and 16 cores or fewer will also cost the same as it costs today.
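To put rough numbers on it (these worked examples just apply the pack math from the FAQ quoted above, not any actual price sheet):

  2 sockets x 8 cores each  = 16 core licenses = 8 two-core packs (exactly the per-server minimum)
  1 socket  x 16 cores      = 16 core licenses = 8 two-core packs (also the minimum)
  2 sockets x 12 cores each = 24 core licenses = 12 two-core packs (the 8-pack minimum plus 4 more packs)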

It’s important to remember that today dual-socket machines are the bulk of the servers being purchased.  Most VM farms (vSphere or Hyper-V) work great on dual-socket servers.  That’s the bulk of servers at companies right there.  It’s only when you get into the really big servers that things are going to change.  The really big database servers are going to see a cost increase, but that increase is going to be nothing compared to what you are already planning on spending on the SQL Server license.  The other big servers at most companies are things like Oracle and SAP, which are already running on something other than Windows.

In the long run, I see this change making almost zero impact on most companies’ IT expenses.

Denny


December 3, 2015  4:05 PM

Announcing Internals for Query Tuning Training Class By Kalen Delaney

Denny Cherry
DBA training, IT training, SQL Server, Training

We at Denny Cherry & Associates Consulting are thrilled to announce that we have partnered with SQL Server internals guru Kalen Delaney to deliver training classes.  Our first class will be Internals for Query Tuning and will be held Monday through Wednesday, April 4th-6th, 2016, in Orange County, CA.

This is an advanced course designed for experienced SQL Server professionals. The course includes extensive demonstrations that illustrate the details of SQL Server internals as well as tuning techniques.  While this course will focus on SQL Server 2014, the information presented will be relevant for older versions of SQL Server as well.  In addition, some of the changes due in the upcoming SQL Server 2016 will be described.

This three-day course is made up of six parts, which are outlined below.  The cost for this fantastic three-day class is $2,225 US.  Seating is limited, so get signed up today.

The outline for this class is:

DAY 1

Part 1: SQL Server Metadata

  • Architecture Overview
  • Metadata Overview
  • Dynamic Management Views

Part 2: Index Structures

  • Metadata for Storage
  • Space Allocation
  • Tools for Examining Physical Structures
  • Heaps and B-Trees
  • Clustered Indexes
  • Nonclustered Indexes
  • Fragmentation
  • Filtered Indexes
  • Rebuilding Indexes
  • Partitioning Overview
  • Creating and Maintaining Partitions
  • Metadata for Partitioning
  • Columnstore Index Storage

DAY 2

Part 3: Query Processing and Query Plans

  • SHOWPLAN Options
  • Query Plan Elements
  • Types of Joins
  • Aggregation
  • Sorting
  • Columnstore Indexes: Batch vs Row Mode
  • Data Modification

Part 4: Optimization and Recompilation

  • Optimization Overview
  • SQL Server’s Query Optimizer
  • Plan Management and Reuse
  • Causes of Recompilation
  • Forcing Recompilation
  • Plan Cache Metadata

DAY 3

Part 5: Index Tuning

  • Special Index Features
  • Statistics
  • Indexed Views
  • Covering Indexes
  • Filtered Indexes
  • Included Columns
  • Indexing Guidelines

Part 6: Query Tuning

  • Query Improvements
  • Search Arguments
  • Constants and Variables
  • User Defined Functions and Computed Columns
  • Plan Guides
  • Query Hints


December 2, 2015  2:00 PM

DR Planning in Azure Requires More Than Replicating VMs to Another Data Center

Denny Cherry
Disaster planning, Disaster Recovery, Disaster recovery planning, SQL Server

While flipping through the Microsoft Azure website recently, I ran across this page which talks about HA/DR configurations for SQL Server VMs. Seeing this, I figured they’d be talking about AlwaysOn Availability Groups, or even SQL Server FCIs using SIOS’s DataKeeper Cluster Edition.

Thankfully the article does talk about AGs (and database mirroring), but it doesn’t mention FCIs. What it does talk about, at the bottom, is how you can set up GRS (geo-redundant storage) as part of your DR plan. GRS is a blob storage configuration that stores a copy of each blob in another data center. If you are counting on this as part of your DR plan, you have already failed. You shouldn’t ever be relying on GRS for DR for your SQL Server VMs, or really any VMs. If you need to be able to fail over to your DR site in anything that resembles a quick timeframe, spinning up new VMs and attaching existing storage to them isn’t the way to do it.

The more traditional DR plans that are talked about in the article (as well as SQL Server Failover Clustered Instances) are going to make for much more useful DR configurations.

Denny


November 30, 2015  8:23 PM

If you fail to plan for your scale, you’ll end up with no workload at all

Denny Cherry

Today is “Cyber Monday”, the artificial “holiday” that stores have decided to create in order to separate customers from their money on the first normal work day of the holiday season. You’d think that as online retailers created this holiday, they would be prepared for it.

Apparently you would be wrong, if Target’s webpage earlier today was any indication. On the biggest online shopping day of the entire calendar year, Target’s website couldn’t handle the workload and went offline, telling users:

“So sorry, but high traffic’s causing delays. If you wouldn’t mind holding, we’ll refresh automatically and get things going ASAP.”

Planning for peak workloads is VERY important. If your applications are hosted on-premises this can be harder to deal with, as you have to have hardware sitting around that you can run the workload on. If however you are running in the Microsoft Azure cloud, you can easily scale your databases up and down without any major configuration changes. You just change the size of the databases in the portal, give the Azure backend processes some time to move things around, and you are back up and running.

If you are using SQL Server or Oracle in a VM and you need to scale up, simply change the size of your VM to a larger size. The VM will need to restart (so hopefully you have HA in place, so that you can scale up the passive node, restart it, then fail over to it) and you’ll have more CPU and memory to work with.
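As a rough sketch using the classic Azure PowerShell module (the service name, VM name, and instance size here are all placeholders):

# Resize the VM to a larger instance size; the VM restarts as part of the update
Get-AzureVM -ServiceName "MyService" -Name "SQL01" |
    Set-AzureVMSize -InstanceSize "Standard_D4" |
    Update-AzureVM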

If the web farm is what needs to scale up and down, the Microsoft Azure platform can handle that for you as well, simply by using the auto-scaling features for web farms, which allow Azure to add more web servers as load increases and then remove servers as load decreases. For workloads that burst like they do on Cyber Monday, this is a perfect use case.

Now, all this requires some pre-planning, as you can’t just flip all of this on if you aren’t working in Microsoft Azure already. But if you are, your scale problems become very easy to solve.

If you are interested in moving to Azure so that you can have this flexibility, we would love to help you move from your current hosting provider to the Microsoft Azure cloud. Contact us and we will schedule some time to talk about your systems today, the problems that you are seeing, and how Azure can solve those problems.

Denny


November 25, 2015  4:00 PM

Inline Nonclustered Indexes

Denny Cherry
Index, SQL Server, SQL Server Indexes

One of the new features that I just recently learned about in SQL Server 2014 is the ability to create nonclustered indexes on a table inline with the creation of the table. This is handy in a couple of ways.

First, it makes it so that all the indexes get created with the table, so if there’s a problem you don’t have to figure out which indexes were created and which ones weren’t. If the table is there, all the indexes are there.

Second, it helps greatly with caching of tempdb tables (which I talked about recently), as you can now create the indexes at the same time as the table so the temp table can be cached.

As for the syntax, it’s pretty straightforward. Below is a sample table with a couple of indexes, one created on the column c2 and one created on c1 and c2. Now sadly, included columns aren’t supported with inline indexes, but hopefully that’ll show up in a future version of SQL Server.

CREATE TABLE MyTable
(
    c1 int,
    c2 int INDEX IX_c2,
    INDEX IX_c1_c2 (c1, c2)
);
GO
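Since the tempdb caching benefit is the piece I care about most, here’s a quick sketch of the same syntax on a temp table (the table and index names are made up for illustration):

-- Creating the indexes inline means no DDL runs against the table after
-- CREATE TABLE, which keeps the temp table eligible for tempdb caching
CREATE TABLE #Orders
(
    OrderId int,
    CustomerId int INDEX IX_CustomerId,
    INDEX IX_OrderId_CustomerId (OrderId, CustomerId)
);
GO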

If you have created objects in MySQL before, this syntax should look pretty familiar, because it’s basically the same one supported by MySQL, which should help people move applications from MySQL to SQL Server.

Denny

