SQL Server with Mr. Denny


January 3, 2014  2:00 PM

Recommended Reading from mrdenny for January 3, 2014

Denny Cherry


This week I’ve found some great things for you to read. These are a few of my favorites from the week.

This week’s SQL Server person to follow on Twitter is @NikoNeugebauer, also known as Niko Neugebauer.

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny

January 2, 2014  2:00 PM

Why would I want to use a non-durable in memory table?

Denny Cherry

With SQL Server 2014 we were introduced to in memory tables, and specifically the ability to use non-durable in memory tables. These non-durable tables are special-case tables: the schema will still exist after the database engine is restarted, but the table will be empty.

These tables are great for filling a specific need, and are useless for most others. The need that they fill is processes such as staging tables in data warehouses and logging tables, and that’s just about it. These tables are fantastic for loading data that can be lost or recreated on the fly, which is why they work so well as loading tables for data warehouses: by their nature, the data being written to loading or staging tables can be reloaded from the source system at a moment’s notice.
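As a minimal sketch of what this looks like (the table, column names, and bucket count below are hypothetical, and the database is assumed to already have a memory-optimized filegroup), the DURABILITY = SCHEMA_ONLY option is what makes the table non-durable:

-- Schema survives a restart, rows do not (DURABILITY = SCHEMA_ONLY).
CREATE TABLE dbo.OrderStaging
(
    OrderID    INT           NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerID INT           NOT NULL,
    OrderDate  DATETIME2     NOT NULL,
    Amount     DECIMAL(18,2) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);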

For production tables within an OLTP system these non-durable tables are a really bad idea: in most situations that data cannot be safely lost, and everything in a non-durable table is gone as soon as the SQL Server instance is restarted.

So as you continue to look into SQL Server 2014 and as you begin to work with the in memory tables, be sure to understand the difference between the durable and non-durable tables and when to use each one.

Denny


January 2, 2014  2:00 PM

Blog Year 2013 In Review (Sort Of)

Denny Cherry

Welcome to 2014. For the last couple of years I’ve done a top-posts-of-the-year post, and this year will be no different.

What is different this year is that my data is much worse than in prior years. I’ve got great data from 10/3/2013 to current, but the data before that is … we’ll say lacking in detail.

Without waiting any longer, the top posts of 2013 are…

10. What Exactly Is MSDTC And When Do I Need It
9. Will SQL Performance Be Better In A VM?
8. What Should Happen To Blackberry
7. If you could give an aspiring DBA just one piece of advice what would it be?
6. TempDB Latch Timeouts With Lots of RAM
5. SQL PASS 2013 Webcast Recording
4. How to Configure DTC on Windows 2008
3. Replication and VLFs
2. Seeks Aren’t Always Better Than Scans
1. Moving the SCCM Database

Comparing my top posts to last year’s, we see a lot of changes in the top 10 (unlike between 2011 and 2012).

The only posts which stayed in the top 10 from last year are:
1. How To Configure DTC on Windows 2008
4. What Exactly Is MSDTC And When Do I Need It

Based on this it sounds like at least some people out there are liking my new posts, which is good.

Denny


December 26, 2013  4:00 PM

Indexes on foreign keys can make a big difference

Denny Cherry

It’s amazing just how much of a difference a non-clustered index on the child table of a foreign key can make when the foreign key has cascading deletes turned on. In the example I’m thinking of, a new table was added when the application was upgraded the week before. Then on Monday morning there were all sorts of blocking and deadlock problems when trying to delete data from a large table with 2.4B rows in it. The problem table wasn’t the big table, but the new table, which had all of 455k rows in it when I looked at it.

Looking at the execution plan for the stored procedure which deletes data from the large table, the problem became painfully clear. There was a clustered index scan on the new table for each row that was deleted. That clustered index scan was part of the deletion transaction, so anyone trying to insert into that little table was being blocked, and a nice chain reaction was underway.

This was easy to see in the CPU workload as well, shown below. You can see the CPU on the system spike to between 40% and almost 50%, which is a lot on a system that has 80 logical CPUs and normally runs at about 11% CPU.

[Chart: server CPU utilization over the period in question]

In the graphic you can see exactly where I added the indexes, as the workload drops right at 11:50 AM.

This just goes to show that if you plan on using foreign keys to handle data deletion between tables, you need to ensure that the child table’s foreign key column has an index on it.
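As a quick sketch of the fix (the parent and child table names and the index name below are made up for illustration), the important piece is the non-clustered index on the child table’s foreign key column:

CREATE TABLE dbo.Parent
(
    ParentID INT NOT NULL PRIMARY KEY
);

CREATE TABLE dbo.Child
(
    ChildID  INT NOT NULL PRIMARY KEY,
    ParentID INT NOT NULL,
    CONSTRAINT FK_Child_Parent FOREIGN KEY (ParentID)
        REFERENCES dbo.Parent (ParentID)
        ON DELETE CASCADE
);

-- Without this index every cascading delete from dbo.Parent has to scan dbo.Child
-- for matching rows inside the delete transaction; with it the delete can seek.
CREATE NONCLUSTERED INDEX IX_Child_ParentID
    ON dbo.Child (ParentID);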


December 18, 2013  2:00 PM

Encryption and Compression

Denny Cherry

We have a variety of options when it comes to compression and encryption in SQL Server. When using both compression and encryption you have to understand how each of them works, and when they will and won’t work together, in order to make using both technologies useful.

The trick to making compression and encryption work together is to ensure that the compression is done first and the encryption is done second. This is most easily done by using TDE for encryption and page-level compression for data compression, because when using these two technologies, no matter in which order you enable them, SQL Server will compress the data first and encrypt it second. This happens even if you have a database which is already encrypted with TDE and you then enable data compression on the tables: when the data is compressed it is rewritten as compressed data and then encrypted post-compression.
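As a rough sketch of setting this up (the database, certificate, and table names below are placeholders rather than anything from a real system; the master key and certificate live in the master database):

USE master;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ngP@ssw0rd!';
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';
GO

USE SalesDB;
GO
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDECert;

ALTER DATABASE SalesDB SET ENCRYPTION ON;
GO

-- Page compression can be enabled before or after TDE; either way the data
-- is compressed first and encrypted second when written to disk.
ALTER TABLE dbo.Orders REBUILD WITH (DATA_COMPRESSION = PAGE);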

Using application-level encryption you can still compress data with the native data compression feature of SQL Server, however the amount of compression that you will typically get in this situation is much less than with TDE and data compression. The same applies if you use TDE and then back up the database using native (or third-party) backup compression. When backups of a TDE-encrypted database are taken, the database pages are not decrypted; they are backed up in the same encrypted state they are normally in, then compressed. By its nature encrypted data looks essentially random, so data compression doesn’t do much good against it.
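For comparison, a compressed backup of that same TDE-encrypted database (the path below is just an example) typically shrinks very little, because the pages are written out still encrypted:

BACKUP DATABASE SalesDB
    TO DISK = N'D:\Backups\SalesDB.bak'
    WITH COMPRESSION, STATS = 10;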

Denny


December 11, 2013  2:00 PM

Clustering in Amazon EC2 – Tomorrow!

Denny Cherry

Tomorrow (December 12th) at noon Pacific Time I’m presenting a session with SIOS Technologies on using their SANLess Clustering Technology to build a clustered SQL Server for High Availability within Amazon’s EC2 cloud. This session is open to the public and is priced just right … FREE!

So get signed up and learn how to set up a traditional Windows cluster for high availability in the Amazon EC2 cloud for your cloud-based applications.

Thanks,
Denny


December 3, 2013  6:15 PM

Upcoming SQL Saturday Precon in Albuquerque

Denny Cherry

I’m very pleased to announce that I will be presenting a precon at SQL Saturday 271 in Albuquerque on January 24th, 2014. The precon is a full-day session titled “SQL Performance Tuning and Optimization”, where we will look at the various ways to find and troubleshoot performance problems in SQL Server today. New content in this session includes knowing when to look at the new SQL Server 2014 features such as Hekaton and ColumnStore (Apollo).

I hope to see you at the SQL Saturday. You do need to sign up for the precon in advance, and there are a limited number of seats available, so be sure to get signed up soon.

Thanks,
Denny


November 27, 2013  5:00 PM

Windows 2012 Clusters on Windows 2003 Domains

Denny Cherry

Recently I was building a new Windows 2012 cluster that was going to hold a SQL 2012 instance. So far so good. Something that I’ve done dozens of times. However this time something strange happened. As the SQL instance was being installed I got an error message saying that the cluster couldn’t be brought online. Looking at the cluster manager I saw that the network name for the SQL instance had failed.  Now the big difference between this cluster and all the other ones that I’ve installed was that the domain was still a Windows 2003 domain and the forest level was still Windows 2003 as well.

[Screenshot: Failover Cluster Manager showing the failed network name resource]

Wanting to make sure that it wasn’t a SQL Server problem, I tried creating a new role on the cluster and giving it a client access point. When that tried to come online it failed as well, so we knew we didn’t have a SQL problem but a Windows problem, and we could ignore the SQL installer for now.

Looking at the cluster log, which I exported via the Get-ClusterLog PowerShell cmdlet, wasn’t much help either. It did tell me that we were getting error 1326 when trying to authenticate against the domain, which is a username and password failure.

[Screenshot: cluster log entries showing error 1326]

That’s a little odd given that Windows creates these passwords for me when the access point is created. You can attempt to fix this by right-clicking on the failed object in Failover Cluster Manager, selecting “More Actions” and then selecting “Repair”. In this case that didn’t work either.

After bouncing around the web for a bit and talking to some people on the clustering product team at Microsoft, we found hotfix 2838043, which is titled “Can’t access a resource that is hosted on a Windows Server 2012-based failover cluster”. This allowed the cluster name that I manually created to come online correctly, however the SQL installer still failed.

To get the SQL installer to work I had to cancel out of the installer, manually delete the Active Directory computer account and the DNS entry, then reboot the cluster node and run through the installer again. At that point it worked without issue.

So if you are planning on installing a Windows 2012 cluster on a Windows 2003 domain, make sure that you’ve got this hotfix installed on the Windows 2012 cluster nodes before you begin.

Denny


November 20, 2013  2:00 PM

Will SQL Performance Be Better In A VM?

Denny Cherry

A question came up during my 24 Hours of PASS presentation a while back: “Is performance of SQL Server 2012 better in a virtual environment than a physical one?” I actually get questions like this all the time when giving presentations.

Thankfully for me, this is one of those times where the answer is very straightforward. Performance in a virtual machine won’t ever be better than on like-for-like physical hardware. The reason is that the hypervisor adds a small amount of performance overhead to the virtual machine. In modern hypervisors this overhead is typically very small, usually 1-2%, but it does mean that the physical server will run a little bit better.

A perfectly normal follow-up question would be: why should we bother setting up SQL Servers as virtual machines at all? This isn’t as easy to answer. If your SQL Server, and more specifically the application and your users, can live with the slight performance hit that comes with being in a virtual machine, then keep virtualizing those SQL Servers. However, if you have an application which is very sensitive to performance problems, virtualizing the database for that application probably isn’t the best of ideas.

Hopefully this helps dispel some myths.

Denny


November 13, 2013  2:00 PM

Old Versions of SQL Server On New Hardware Can Be Interesting

Denny Cherry
NUMA, SQL Server, SQL Server 2008, SQL Server 2012

Recently a friend was working on one of his clients’ SQL Servers and ran into an interesting problem. The hardware in question was an HP DL580 with four sockets, each with 10 cores, and hyper-threading enabled. This should have presented to SQL Server as 80 total logical processors. The problem was that SQL Server was only seeing 40 of them. The operating system was Windows Server 2008 R2 and the SQL Server was SQL Server 2008.

If you are familiar with SQL Server 2012 you may be thinking that this is a licensing limitation, but you would be wrong. The problem here is a NUMA problem.

The reason the problem comes up (which I’ll cover before giving you the solution) is because of the way NUMA works on large systems. We all know that we have NUMA nodes, which on today’s servers are basically one physical CPU socket per NUMA node. When there are lots of logical processors, NUMA nodes are put into processor groups, and a single processor group can only contain 64 logical processors. Looking back at our HP DL580, we have 40 cores with hyper-threading, which is 80 logical processors. That means we need two processor groups.

This is no problem on Windows 2008 R2, as it supports processor groups (Windows 2008 and below do not). However, SQL Server 2008 isn’t processor-group-aware, so it can only see the logical processors that are in group 0 (you can have up to 4 processor groups, numbered 0 to 3).

Because of this the SQL Server was only able to see 40 logical processors, and those 40 were the physical and hyper-threaded cores from CPUs 0 and 1. We could see this in the errorlog file because it only showed CPUs 0 and 1. Why doesn’t SQL Server just use the physical cores from all the processors and ignore the hyper-threaded cores? Because it has no idea that CPUs 2 and 3 exist; it can’t see them over in processor group 1.
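If you want to confirm this without digging through the errorlog, a couple of quick DMV queries (just a sketch) show what SQL Server can actually see:

-- Logical processors and hyper-threading ratio as reported to SQL Server.
SELECT cpu_count, hyperthread_ratio
FROM sys.dm_os_sys_info;

-- Schedulers that SQL Server can actually run user queries on.
SELECT COUNT(*) AS visible_online_schedulers
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE';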

When running the workload on this machine, 1/2 of the physical CPU power just isn’t being used.

Why do we care about this? Well, the first problem is that accessing the memory attached to CPUs 2 and 3 is going to be expensive, as that memory is in a different NUMA node than the CPU doing the work. That’ll slow things down with all the cross-NUMA-node requests. The other problem is that under a heavy CPU workload the SQL Server will be using 20 physical cores and 20 hyper-threaded cores. It would be much better to have access to all of the physical cores.

The solution here was quite simple once we realized what the problem was: disable hyper-threading. SQL Server still only sees 40 logical processors, but now it’s getting all 40 physical cores on the server. This means that cross-NUMA-node memory access should be mostly gone and we’ve got all the CPU power that we paid for available to us.

As our servers get larger and larger we’ll see more and more cases of older versions of SQL Server not being able to see all the CPU power, and this is why. The number of logical processors that SQL Server can see depends on the physical server configuration and how many cores each physical processor has. The basic problem is that not all of the cores are visible to the SQL Server.

Denny

