SQL Server with Mr. Denny


January 10, 2014  8:28 PM

Recommended Reading from mrdenny for January 10, 2014

Denny Cherry


This week I’ve found some great things for you to read; these are a few of my favorites.

This week’s SQL Server person to follow on Twitter is markvsql, also known as Mark Vaillancourt.

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny

January 7, 2014  8:46 AM

SQL 2012 Cluster, Windows 2012 R2 OS, Windows 2008 R2 Domains

Denny Cherry

Recently I was installing SQL Server 2012 on a new Windows 2012 R2 cluster for a client and ran across a bit of a problem. When the SQL installer attempted to start SQL Server for the first time, it threw Windows errors 1069 and 1194, which basically say:

Cluster network name resource '%1' failed to create its associated computer object in domain '%2' for the following reason: %3.

The text for the associated error code is: %4

Please work with your domain administrator to ensure that:
- The cluster identity '%5' can create computer objects. By default all computer objects are created in the 'Computers' container; consult the domain administrator if this location has been changed.
- The quota for computer objects has not been reached.
- If there is an existing computer object, verify the Cluster Identity '%5' has 'Full Control' permission to that computer object using the Active Directory Users and Computers tool.

What this basically means is that the account which is trying to create the computer object within the domain doesn’t have the permissions needed to create it. In this case the domain has a group called “Add Computers To The Domain” which has the “Create Computer” right in all OUs within Active Directory. Adding the CLUSTER$ account for this cluster to that group did nothing, because the group carries only the single “Create Computer” right. For Windows 2012 R2 clusters to successfully create computer objects in Active Directory, the CLUSTER$ account also needs what is considered “Read” permission. Specifically this is three different permissions on the OU:

  • List Contents
  • Read all properties
  • Read Permissions

In addition, this account needs the “Create Computer” right.

My plan was to create a new group in Active Directory and grant that group these four rights. That way, if the problem comes up again for new clusters (which I’m sure it will), the fix would simply be to drop the cluster’s computer account into the group, wait for domain replication to finish, then try starting the client access point again.

In the end, however, I had to grant the rights directly on the OU in which the SQL Server objects were going to be created, because for some reason, when the permissions were granted through the group, they weren’t applied correctly.

Many thanks to Allan Hirt (@SQLHA) for validating that I’m not crazy and that these were the correct permissions, at midnight my time, which was 3am his time.

Denny


January 6, 2014  2:00 PM

SQL Saturday Nashville Pre-Con and Sponsorship

Denny Cherry

I’m pleased to be able to report that I’m giving a pre-con on SQL Server Performance at SQL Saturday Nashville. The sign up page is available here.

In addition to giving the pre-con I’m also signed up as a Silver sponsor, so I’ll have a booth set up where you can come talk to us about the services that we offer, or get some technical advice to fix problems at work.

I hope to see you at the SQL Saturday in Nashville.

Denny


January 3, 2014  2:00 PM

Recommended Reading from mrdenny for January 3, 2014

Denny Cherry


This week I’ve found some great things for you to read; these are a few of my favorites.

This week’s SQL Server person to follow on Twitter is NikoNeugebauer, also known as Niko Neugebauer.

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


January 2, 2014  2:00 PM

Why would I want to use a non-durable in-memory table?

Denny Cherry

With SQL Server 2014 we were introduced to in-memory tables, and specifically the ability to use non-durable in-memory tables. These non-durable tables are special-case tables: the schema will still exist after the database engine is restarted, but the table will be empty.

These tables are great for filling a specific need, and are useless for other needs. The need they fill is workloads such as staging tables in data warehouses, logging tables, and that’s just about it. They are fantastic when loading data that can be lost or recreated on the fly, which is exactly the case for data warehouse loading and staging tables: by their nature, the data written to them can be reloaded from the source system at a moment’s notice.
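To make this concrete, here is a minimal sketch of what such a staging table might look like. The table and column names are hypothetical, and it assumes the database already has the MEMORY_OPTIMIZED_DATA filegroup that memory-optimized tables require:

    -- Non-durable staging table: the schema survives a restart, the rows do not.
    CREATE TABLE dbo.OrderStaging
    (
        OrderID    INT   NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        CustomerID INT   NOT NULL,
        OrderTotal MONEY NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON,
          DURABILITY = SCHEMA_ONLY);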

For production tables within an OLTP system these non-durable tables are a really bad idea: any data stored in them is simply gone when the SQL Server instance is restarted.

So as you continue to look into SQL Server 2014 and begin to work with in-memory tables, be sure to understand the difference between durable and non-durable tables and when to use each one.

Denny


January 2, 2014  2:00 PM

Blog Year 2013 In Review (Sort Of)

Denny Cherry

Welcome to 2014. For the last couple of years I’ve done a top posts of the last year post. This year will be no different.

What is different this year is that I’ve got really bad data compared to prior years. I’ve got great data from 10/3/2013 to current, but the data before that is … we’ll say lacking in detail.

Without waiting any longer the top posts in 2013 are…

10. What Exactly Is MSDTC And When Do I Need It
9. Will SQL Performance Be Better In A VM?
8. What Should Happen To Blackberry
7. If you could give an aspiring DBA just one piece of advice what would it be?
6. TempDB Latch Timeouts With Lots of RAM
5. SQL PASS 2013 Webcast Recording
4. How to Configure DTC on Windows 2008
3. Replication and VLFs
2. Seeks Aren’t Always Better Than Scans
1. Moving the SCCM Database

Comparing my top posts to last year’s, we see a lot of changes in the top 10 (unlike between 2011 and 2012).

The only posts which stayed in the top 10 from last year are:
1. How To Configure DTC on Windows 2008
4. What Exactly Is MSDTC And When Do I Need It

Based on this it sounds like at least some people out there are liking my new posts, which is good.

Denny


December 26, 2013  4:00 PM

Indexes on foreign keys can make a big difference

Denny Cherry

It’s amazing just how much difference a non-clustered index on the child table of a foreign key can make when the foreign key has cascading deletes turned on. In the example I’m thinking of, a new table had been added when the application was upgraded the week before. Then Monday morning there were all sorts of blocking and deadlock problems when trying to delete data from a large table with 2.4B rows in it. The problem table wasn’t the big table, but instead the new table, with all of 455k rows in it when I looked at it.

Looking at the execution plan for the stored procedure which deletes data from the large table, the problem became painfully clear: there was a clustered index scan on the new table for each row which was deleted. That clustered index scan was part of the deletion transaction, so anyone trying to insert into that little table was being blocked. Basically a nice chain reaction was happening.

This was easily seen in the CPU workload as well, shown below. You can see the CPU workload on the system spike to above 40%, almost 50%, which is a lot on a system which has 80 logical CPUs and normally runs at about 11% CPU workload.

[Figure: server CPU utilization graph (sql-cpu-1028)]

In the graphic you can see exactly where I added the indexes, as the workload drops right at 11:50AM.

This just goes to show that if you plan on using foreign keys to handle data deletion between tables, you need to ensure that the child table’s foreign key column has an index on it.
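As a quick sketch (with hypothetical table and column names), the fix is a one-line index on the referencing column, which lets the cascading delete seek to the matching child rows instead of scanning the child table once per deleted parent row:

    -- Index the child table's foreign key column so cascading deletes
    -- can seek rather than scan.
    CREATE NONCLUSTERED INDEX IX_ChildTable_ParentID
        ON dbo.ChildTable (ParentID);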


December 18, 2013  2:00 PM

Encryption and Compression

Denny Cherry

We have a variety of options when it comes to compression and encryption in SQL Server. When using both compression and encryption you have to understand how each of these works, and when they will and won’t work together, to make using both technologies useful.

The trick to making compression and encryption work together is to ensure that the compression is done first and the data encryption is done second. This is most easily done by using TDE for encryption and page-level compression for data compression. This is because when using these two technologies, no matter in which order you have enabled them, SQL Server will compress the data first and encrypt it second. This even happens if you have a database which is already encrypted with TDE and you then enable data compression on the tables: when the data is compressed it is rewritten as compressed data and then encrypted post-compression.
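As a hedged sketch with hypothetical object names, this is all it takes to combine the two; the ALTER DATABASE statement assumes a database encryption key has already been created:

    -- Enable TDE on the database (assumes the DEK already exists).
    ALTER DATABASE SalesDB SET ENCRYPTION ON;

    -- Page-compress a table; pages are compressed first and encrypted
    -- second, regardless of the order the two features were enabled in.
    ALTER TABLE dbo.OrderHistory
        REBUILD WITH (DATA_COMPRESSION = PAGE);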

Using application-level encryption you can still compress data using the native data compression feature of SQL Server, however the amount of compression that you will typically get in this situation will be much less than by using TDE and data compression. The same applies if you use TDE and then back up the database using native (or third-party) backup compression. This is because when backups of a TDE-encrypted database are taken, the database pages are not decrypted when backed up. They are backed up in the same encrypted state that they are normally in, then compressed. By its nature encrypted data looks essentially random, so data compression doesn’t do much good against it.
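For example (hypothetical database name and path), a compressed backup of a TDE-encrypted database is still a perfectly valid command; it just won’t shrink the backup much:

    -- The pages are written out still encrypted, so WITH COMPRESSION
    -- typically achieves very little on a TDE-encrypted database.
    BACKUP DATABASE SalesDB
        TO DISK = N'D:\Backups\SalesDB.bak'
        WITH COMPRESSION;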

Denny


December 11, 2013  2:00 PM

Clustering in Amazon EC2 – Tomorrow!

Denny Cherry

Tomorrow (December 12th) at noon Pacific Time I’m presenting a session with SIOS Technologies on using their SANLess Clustering Technology to build a clustered SQL Server for High Availability within Amazon’s EC2 cloud. This session is open to the public and is priced just right … FREE!

So get signed up and learn more about how to set up a traditional Windows cluster in Amazon EC2 for high availability for your cloud-based applications.

Thanks,
Denny


December 3, 2013  6:15 PM

Upcoming SQL Saturday Precon in Albuquerque

Denny Cherry

I’m very pleased to announce that I will be presenting a precon at SQL Saturday 271 in Albuquerque on January 24th, 2014. I’ll be presenting a full-day session titled “SQL Performance Tuning and Optimization”, where we will look at the various ways to find and troubleshoot performance problems in SQL Server today. New content in this session will include knowing when to look at the new SQL Server 2014 features such as Hekaton and ColumnStore (Apollo).

I hope to see you at the SQL Saturday. You do need to sign up for the SQL Saturday precon in advance, and there are a limited number of seats available, so be sure to get signed up soon.

Thanks,
Denny

