SQL Server with Mr. Denny


December 3, 2015  4:05 PM

Announcing Internals for Query Tuning Training Class By Kalen Delaney

Denny Cherry
DBA training, IT training, SQL Server, Training

We at Denny Cherry & Associates Consulting are thrilled to announce that we have partnered with SQL Server internals guru Kalen Delaney to deliver training classes. Our first class will be Internals for Query Tuning, and it will be held Monday through Wednesday, April 4th-6th, 2016, in Orange County, CA.

This is an advanced course designed for experienced SQL Server professionals. The course includes extensive demonstrations that illustrate the details of SQL Server internals as well as tuning techniques. While this course focuses on SQL Server 2014, the information presented is relevant for older versions of SQL Server as well. In addition, some of the changes coming in the upcoming SQL Server 2016 will be described.

This three-day course is made up of six parts, which are outlined below. The cost for this fantastic three-day class is $2,225 US. Seating is limited, so get signed up today.

The outline for this class is:

DAY 1

Part 1: SQL Server Metadata

  • Architecture Overview
  • Metadata Overview
  • Dynamic Management Views

Part 2: Index Structures

  • Metadata for Storage
  • Space Allocation
  • Tools for Examining Physical Structures
  • Heaps and B-Trees
  • Clustered Indexes
  • Nonclustered Indexes
  • Fragmentation
  • Filtered Indexes
  • Rebuilding Indexes
  • Partitioning Overview
  • Creating and Maintaining Partitions
  • Metadata for Partitioning
  • Columnstore Index Storage

DAY 2

Part 3: Query Processing and Query Plans

  • SHOWPLAN Options
  • Query Plan Elements
  • Types of Joins
  • Aggregation
  • Sorting
  • Columnstore Indexes: Batch vs Row Mode
  • Data Modification

Part 4: Optimization and Recompilation

  • Optimization Overview
  • SQL Server’s Query Optimizer
  • Plan Management and Reuse
  • Causes of Recompilation
  • Forcing Recompilation
  • Plan Cache Metadata

DAY 3

Part 5: Index Tuning

  • Special Index Features
  • Statistics
  • Indexed Views
  • Covering Indexes
  • Filtered Indexes
  • Included Columns
  • Indexing Guidelines

Part 6: Query Tuning

  • Query Improvements
  • Search Arguments
  • Constants and Variables
  • User Defined Functions and Computed Columns
  • Plan Guides
  • Query Hints

December 2, 2015  2:00 PM

DR Planning in Azure Requires More Than Replicating VMs to Another Data Center

Denny Cherry
Disaster planning, Disaster Recovery, Disaster recovery planning, SQL Server

While flipping through the Microsoft Azure website recently, I ran across this page, which talks about HA/DR configurations for SQL Server VMs. Seeing this, I figured they’d be talking about AlwaysOn Availability Groups, or even SQL Server FCIs using SIOS’s DataKeeper Cluster Edition.

Thankfully the article does talk about AGs (and database mirroring), but it doesn’t mention FCIs. What it does talk about at the bottom is setting up GRS (geo-redundant storage) as part of your DR plan. GRS is a blob storage configuration that keeps a copy of each blob in another data center. If you are counting on this as part of your DR plan, you have already failed. You shouldn’t ever be relying on GRS for DR for your SQL Server VMs, or really any VMs. If you need to be able to fail over to your DR site in anything resembling a quick timeframe, spinning up new VMs and attaching existing storage to them isn’t the way to do it.

More traditional DR plans, like the others discussed in the article (as well as SQL Server clustered instances), are going to give you a much more useful DR configuration.

Denny


November 30, 2015  8:23 PM

If you fail to plan for your scale, you’ll end up with no workload at all

Denny Cherry

Today is “Cyber Monday”, the artificial “holiday” that stores have decided to create in order to separate customers from their money on the first normal workday of the holiday season. You’d think that since online retailers created this holiday, they would be prepared for it.

Apparently you would be wrong, if Target’s webpage earlier today was any indication. On the biggest online shopping day of the entire calendar year, Target’s website couldn’t handle the workload and went offline, telling users:

“So sorry, but high traffic’s causing delays. If you wouldn’t mind holding, we’ll refresh automatically and get things going ASAP.”

Planning for peak workloads is VERY important. If your applications are hosted on-premises this can be harder to deal with, as you have to have hardware sitting around that you can run the workload on. If, however, you are running in the Microsoft Azure cloud, you can easily scale your databases up and down without any major configuration changes. You just change the size of the databases in the portal, give the Azure backend processes some time to move things around, and you are back up and running.
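For the Azure SQL Database case, the same resize can also be scripted instead of clicked through in the portal. Here’s a minimal T-SQL sketch; the database name SalesDB and the Standard S3 service objective are placeholders, so substitute your own database and the tier that matches your peak workload.

-- Scale an Azure SQL Database to a larger service objective.
-- SalesDB and 'S3' are placeholders for this sketch.
ALTER DATABASE SalesDB
MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S3');

-- The resize runs asynchronously; progress can be checked from the master database with:
SELECT operation, state_desc, percent_complete, start_time
FROM sys.dm_operation_status
ORDER BY start_time DESC;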

If you are using SQL Server or Oracle in a VM and you need to scale up, simply change the size of your VMs to a larger size. The VM will need to restart (so hopefully you have HA in place, in which case you’d scale up the passive node, restart it, then fail over to it) and you’ll have more CPU and memory to work with.

If the web farm is what needs to scale up and down, the Microsoft Azure platform can handle that for you as well by using the auto-scaling features for web farms, which allow Azure to add web servers as load increases and remove them when load decreases. For workloads that burst like they do on Cyber Monday, this is a perfect use case.

Now all of this requires some pre-planning, as you can’t just flip it on if you aren’t working in Microsoft Azure already. But if you are, your scale problems become very easy to solve.

If you are interested in moving to Azure so that you can have this flexibility, we would love to help you move from your current hosting provider to the Microsoft Azure cloud. Contact us and we will schedule some time to talk about your systems today, the problems that you are seeing, and how Azure can solve those problems.

Denny


November 25, 2015  4:00 PM

Inline Nonclustered Indexes

Denny Cherry
Index, SQL Server, SQL Server Indexes

One of the new features that I just recently learned about in SQL Server 2014 is the ability to create nonclustered indexes on a table inline with the creation of the table. This is handy in a couple of ways.

First, it makes it so that all the indexes get created with the table, so if there’s a problem you don’t have to figure out which indexes were created and which ones weren’t. If the table is there, all the indexes are there.

Second, it helps greatly with caching of tempdb tables (which I talked about recently), as you can now create the indexes at the same time as the table so the temp table can be cached.

As for the syntax, it’s pretty straightforward. Below is a sample table with a couple of indexes: one created on the column c2 and one created on c1 and c2. Sadly, included columns aren’t supported with inline indexes, but hopefully that’ll show up in a future version of SQL Server.

CREATE TABLE MyTable
(c1 int,
 c2 int INDEX IX_c2,          -- inline nonclustered index on c2
 INDEX IX_c1_c2 (c1, c2))     -- inline nonclustered index on c1 and c2
GO
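The same inline syntax also applies to temp tables, which is what makes the tempdb caching point above work; this is just an illustrative sketch, not code from a particular system.

CREATE TABLE #MyTempTable
(c1 int,
 c2 int INDEX IX_c2,          -- declared with the table, so no DDL runs afterwards
 INDEX IX_c1_c2 (c1, c2))
GO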

If you have created objects in MySQL before, this syntax should look pretty familiar, because it’s basically the same one MySQL supports, so this should help people move applications from MySQL to SQL Server.
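For comparison, a roughly equivalent MySQL definition (just a sketch) declares the indexes as table-level elements:

CREATE TABLE MyTable
(c1 int,
 c2 int,
 INDEX IX_c2 (c2),            -- inline index on c2
 INDEX IX_c1_c2 (c1, c2));    -- inline index on c1 and c2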

Denny


November 21, 2015  12:06 AM

Recommended reading from mrdenny for November 20, 2015

Denny Cherry
SQL Server

This week I’ve found some great things for you to read; these are a few of my favorites.

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


November 18, 2015  2:00 PM

I Don’t Want To Create Indexes Because It Will Slow Down My Application

Denny Cherry
Indexes, SQL Server, SQL Server Indexes

“I Don’t Want To Create Indexes Because It Will Slow Down My Application.”

I hear that argument all the time from software developers at clients when I tell them that we need to create indexes. It’s usually followed up with, “If we add indexes, it will slow down the INSERT, UPDATE and DELETE statements.” And yes, it will, normally by nanoseconds at most. If you are building the next Twitter or Facebook, then yes, having lots of indexes will be a problem that you can measure. But small applications with hundreds or thousands of users will see such a small performance impact from having the indexes there that you’ll never be able to measure the difference.

But with the indexes in place, those slow queries you have, which are typically the reason I’m there in the first place, will go from seconds or minutes down to milliseconds. That’s a good gain for an acceptable cost. The gain is a well-performing application; the cost is an imperceptible performance difference on data changes.

The reason people are having this argument is that conference speakers (myself included) have apparently done too good a job telling people about the cost of having indexes on their systems without explaining why we still need indexes.

So, to the software developers and architects of the world: indexes are good when they are needed and are being used. If I’m there telling you that we need to create a bunch of indexes, then we need to create them (my record is a CREATE INDEX script that was 450,000 lines long, as there were 1,000 databases, one per customer). After we create them, we’ll remove any indexes which aren’t being used anymore.
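As a rough sketch of that cleanup step, a query along these lines (generic, not the script from any particular client) surfaces nonclustered indexes that have recorded no reads since the last instance restart:

-- Nonclustered indexes with no recorded seeks, scans, or lookups in this database.
-- sys.dm_db_index_usage_stats resets on instance restart, so only trust the
-- results after a representative amount of uptime.
SELECT  OBJECT_NAME(i.object_id)   AS table_name,
        i.name                     AS index_name,
        ISNULL(s.user_updates, 0)  AS writes_maintained
FROM    sys.indexes AS i
JOIN    sys.objects AS o
        ON o.object_id = i.object_id
LEFT JOIN sys.dm_db_index_usage_stats AS s
        ON  s.object_id = i.object_id
        AND s.index_id = i.index_id
        AND s.database_id = DB_ID()
WHERE   o.is_ms_shipped = 0
  AND   i.type_desc = 'NONCLUSTERED'
  AND   ISNULL(s.user_seeks, 0) + ISNULL(s.user_scans, 0) + ISNULL(s.user_lookups, 0) = 0
ORDER BY table_name, index_name;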

To the speakers who talk about indexes: we need to explain not just the downsides of indexes, but also do a better job of explaining why we need to create them and what the real cost of not creating the needed indexes is.

Denny


November 11, 2015  4:00 PM

The Dangers of Indexing Temp Tables

Denny Cherry
Data Tables, SQL Server, SQL Server tables, TempDB, Temporary files

Indexes are good, except when they aren’t. Everything that you do in SQL Server has trade-offs. Usually those trade-offs are easy to see, unless they aren’t.

Indexes are generally a good thing. They make queries within the database engine go faster, often a lot faster. Indexes on temp tables are also usually a good thing, unless you build them incorrectly; then, as one client of mine just found out, things can get very bad very quickly.

This client had just upgraded from SQL Server 2008 R2 to SQL Server 2014, and the Monday after we did the upgrade (the first full business day running on SQL Server 2014) things fell apart, fast. We saw huge amounts of locking and waits on tempdb. The waits were reported as PAGELATCH_IO waits, but the disks, according to perfmon, had a 1-2ms response time, so it was something happening in memory. All the sessions were contending on a specific page in tempdb, 2:1:128. I looked at the page with DBCC PAGE and found that it was part of sysobjvalues, the table used to store information about temporary objects.
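For reference, this is roughly how such a page can be examined; DBCC PAGE is undocumented, so treat this as a diagnostic sketch rather than a supported procedure.

-- 2:1:128 is database_id 2 (tempdb), file 1, page 128.
DBCC TRACEON (3604);        -- route DBCC output to the client session
DBCC PAGE (2, 1, 128, 3);   -- database_id, file_id, page_id, print option (3 = full detail)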

With the help of a Microsoft developer who looked through some minidumps from the SQL Server, we were able to identify the code pattern that was causing the problem. The root of the problem was that temp tables weren’t being properly cached in the tempdb database. The reason for this is that the tables were being created without any indexes, and a clustered index was then being added to the temp table after the fact. In this case the code looked something like this:

CREATE TABLE #t1 (c1 int)
-- Adding the index after the table is created is DDL that prevents tempdb caching of #t1
CREATE UNIQUE CLUSTERED INDEX I ON #t1 (c1) WITH IGNORE_DUP_KEY
INSERT INTO #t1 (c1) SELECT * FROM @Something

We were able to resolve the issue by removing the clustered index, making the column c1 a primary key, and doing a distinct insert into the table. Long term, the developers are going to clean up the data within the .NET layer so that we know distinct values are coming into the table and we can remove the DISTINCT. The new temporary code looks like this:

CREATE TABLE #t1 (c1 int PRIMARY KEY, c2 int, c3 int)  -- the index is now part of the CREATE TABLE, so #t1 can be cached
INSERT INTO #t1 (c1) SELECT DISTINCT c1 FROM @Something

Finding the problem code was pretty easy. The Microsoft developer was able to give me the names of a couple of stored procedures, and the rest I was able to find by searching through sys.sql_modules looking for anything with “%WITH%IGNORE_DUP_KEY%” or “%CREATE%INDEX%” in the object code (a sketch of that search is below). After fixing these, the system started performing MUCH, MUCH better.
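A generic version of that search looks something like the following; the bracketed underscores are there because an unescaped underscore is a single-character wildcard in LIKE.

-- Find modules whose definitions contain the problem patterns.
SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
       OBJECT_NAME(m.object_id)        AS object_name
FROM sys.sql_modules AS m
WHERE m.definition LIKE '%WITH%IGNORE[_]DUP[_]KEY%'
   OR m.definition LIKE '%CREATE%INDEX%';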

Denny


November 5, 2015  11:55 PM

Congrats to Our PASS Booth Prize Winner

Denny Cherry
IT conferences and events, SQL Server

We have done our raffle for the PASS 2015 Consultants Corner prize, which is a $1000 Amazon gift card. Our winner this year is …

David Sun from MasterCard.

(We actually did the drawing a few days ago, but being that David works for a large company there was a lot of checking to be done before this could be announced.)

Congrats David, I’m sure that the gift card will come in handy this year.

For those that came to see us at the PASS Summit, we thank you for coming by and saying hello. If you didn’t make it to the PASS Summit, be sure to get signed up for our newsletter (which is about to start up again with some great technical information). You can check out the recordings of our PASS sessions on the PASS website as soon as they are published, and don’t forget to check out the Day 1 keynote, which includes videos from myself and Joey.

Denny


November 4, 2015  4:00 PM

In Azure CApS MatTers

Denny Cherry
Azure, PowerShell cmdlets, PowerShell Scripts, SQL Server, Windows Azure, Windows PowerShell

Recently I was doing some work for a client moving some vDisks from standard storage to premium storage. Overall it was a pretty easy migration to do, just requiring a little PowerShell to make it all happen.

The part that made it hard to get done is that the PowerShell parameters for file names on blob storage are case sensitive. This is a pain when the person who set up the virtual drives (not me) wasn’t consistent in how they set up the file names. In this case, the three drives which I needed to move were named like this:

  • sql1.vhd
  • SQL2.vhd
  • Sql3.vhd

This means that it takes some work and some careful looking to ensure that you have the case of each file correct so that the scripts actually work as planned. I got through it, but not without a decent amount of grumbling at my computer.

So when setting up drives for VMs in Azure, keep everything consistent. You’ll thank me later.

Denny


October 28, 2015  4:00 PM

My Experience Moving WordPress From @GoDaddy to @Azure

Denny Cherry
Azure, SQL Server, Web hosting, Web hosting providers

Recently I got so fed up with the lack of answers from GoDaddy about the performance problems that had started randomly showing up that I decided to look into moving our website hosting from GoDaddy to Azure. As a partner we have a bunch of credits with Azure every month that we can use for production services, so we figured why not. And frankly the performance couldn’t be any worse. Page load times with GoDaddy were 5+ seconds, and their answer was always the same: “just disable your plugins and theme and it works fine, so it must be a problem with the plugins and theme”. Except that we had been using the exact same plugins and theme for months without issue.

I knew that moving all the various sites from GoDaddy would be a pain. We have 46 domain names which point to 13 different websites, so getting everything over in a way that didn’t impact users was going to be a challenge.

Step 1 of the process was deciding whether we should use Windows or Linux to host the sites. As every one of the sites is either PHP (like www.dcac.co) or just static HTML (like www.getazureposh.com), we opted for Linux VMs within Azure. For HA reasons we have two Linux VMs, both of which are A1 VMs. These machines each have a single CPU core and 1.75 GB of RAM, and they sit behind an Azure load balancer, so we’ve got HA. Now we’ve got basically the same CPU power and RAM in each VM that we had before (GoDaddy’s site says that the plan we were on had 1 CPU and 1 GB of RAM), so we’ve technically doubled our CPU power for total visitors, but a single user should see the same basic performance if the problem is with the PHP code in the plugins and/or theme.

For the database server we’ll be using MySQL so we stuck it on a Linux VM as well, with a backup database server using MySQL Replication to mirror the data within the database to the second VM.

Our next decision was where we should put these VMs. We opted for Central US, as the bulk of our customers are based in the US, so we wanted the site centrally located for the quickest network access. Because this is Azure, if we started to get a lot of customers in Europe (for example), I could simply drop some web servers in Europe and have them access the database servers here in the US (over a site-to-site VPN between the two Azure data centers), pretty quickly and easily giving the European users faster access to our website.

After that was all decided, I built the VMs. Being that I’m a little paranoid about security, I closed all the firewall holes that Microsoft likes to leave open for new VMs except for HTTP and HTTPS. So all the SSH ports got closed, as I won’t be accessing the VMs via SSH from the outside; we already have a VPN set up into our Windows Azure environment.

The next step was to get all the files for the sites (and the backups of the MySQL databases) downloaded. That was just a matter of using wget and connecting to the GoDaddy FTP server. It took a while to download everything, as some of the sites have a LOT of old junk lying around inside them. Once that was done, the databases were created and all the data for the databases was loaded in. Then Apache was configured, along with all the configuration files for the various websites.

Once that was done DNS was moved over and everything basically started working as expected. There were a couple of glitches with SSL which needed to be addressed. But that didn’t really impact our users, just Joey and myself.

As for performance, everything is MUCH better today than it used to be. Our average CPU utilization on the new machines in Azure is 1.4%, with spikes up to 5%, and that’s while I’m going through and patching all the sites with the newest WordPress build. Webpage response time is under a second, based on just using the website, where before we were getting all sorts of timeouts.

Now I’ve made it look like it was pretty easy, and for the most part it really was. Thankfully I was able to take my time and move site by site, slowly getting each one moved across and making sure that each site was set up correctly before moving on to the next one.

Other than speed, have I really noticed a big difference? Not really. Using the website is basically the same. The only really annoying thing is waiting for files to replicate between the two web servers, as that only happens every 5 minutes (or just VPNing in and forcing them to sync when I want them to, if I’m doing something specific). But as almost everything that WordPress does is done within the database, that isn’t that big a deal.

Denny

