SQL Server with Mr. Denny


June 28, 2012  2:00 PM

What do I do with Standard Edition SQL Servers using Mirroring

Denny Cherry

With the release of SQL Server 2012, many of the people who are using Standard Edition and Database Mirroring are quite nervous about what their HA options are going to be moving forward, and rightfully so, as Microsoft has been pretty silent on this specific topic.  Let me see if I can’t calm your fears a little bit.

The first thing to do is not panic.  Microsoft’s policy is that features stick around for 2-3 releases after the announcement.  Database Mirroring was announced as deprecated in SQL Server 2012, which means that the feature will be around in the next two releases.  That’s probably 6 more years of support for database mirroring before the feature is removed from the product (2 years for SQL 2012 and 2 years for each of the next two releases, though this all depends on how long it takes for each release to come out).

Now no one knows what feature changes are coming in the next release of SQL Server, much less the release after that.  But Microsoft is collecting feedback about the minimal HA story in Standard Edition so that they know how their customers are using the software and what they need to support.  If you have an Enterprise Agreement you should have access to a Technical Account Manager (TAM).  Meet with your TAM and explain how you are using Database Mirroring and how you’ll need some sort of replacement for it moving forward past SQL Server 2012.  Your TAM will then be able to pass your information up to the SQL Server team so that it all gets to the right place.  If you don’t have an Enterprise Agreement, or if you can’t find your Technical Account Manager for some reason, I’ve set up a site for you to send the info to me, and I’ll forward it on to the right people.
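
Before you have that conversation, it helps to have a quick inventory of which databases on each instance are actually mirrored and what role they play.  A query along these lines against the sys.database_mirroring catalog view (just a quick sketch) will pull that information together.

-- List every database that currently has mirroring configured,
-- along with its role, safety level and partner instance.
SELECT d.name AS database_name,
    dm.mirroring_role_desc,
    dm.mirroring_safety_level_desc,
    dm.mirroring_partner_instance
FROM sys.database_mirroring dm
JOIN sys.databases d ON dm.database_id = d.database_id
WHERE dm.mirroring_guid IS NOT NULL
GO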

Now is certainly not the time to be making hasty decisions about your SQL Server Mirroring upgrade paths.  Microsoft is usually pretty good at making the right decision when they have been provided with the right information.  So I’d recommend that you take a moment, talk to your TAM or fill out the form, and get your information about how you are using database mirroring sent in.

So as the sign says, “Keep Calm and Carry On”.  It’s going to be a few years before any of us get to the point of actually needing to make decisions about our mirrored databases.

Denny

June 25, 2012  2:00 PM

Moving huge amounts of data between data centers

Denny Cherry

Every once in a while I get the fun project of moving a data center from one location to another, often from a Managed Service Provider to either a CoLo facility or to another Managed Service Provider.  When I’m doing this the data sizes typically aren’t small.  In the case of my current project, one day’s worth of compressed database backups runs 581 Gigs, which needs to be transferred from a Managed Service Provider in Dallas to a CoLo in New Jersey with minimal cost and time spent.  When I’m doing these sorts of massive data migrations there are basically three different ways they can be done.

  1. Copy the backups to a USB drive and ship it – aka. Sneaker Net
  2. Upload the files across the VPN using RoboCopy
  3. FTP the files up to the new site

While I love using #1, as it’s the easiest and usually the quickest way to move huge amounts of information, it’s the most expensive and it isn’t very repeatable.  If there’s a problem with the data transfer, or you need to redo it, then you need to purchase another USB drive from the first provider and have them ship it up to the new site.  As this all needs to be done pretty quickly, every time you do it you need to pay for overnight shipping, which gets expensive fast.  Not to mention that either you need to be at the destination site or you have to pay for remote hands to open the box and connect the hard drive.  In this case that means paying a few hundred bucks to have a guy at the remote site unbox the drive and connect it, as the data center is in New Jersey and I’m in California, a short 5.5 hour plane flight away.

Option #2 is a decent option as well, except that it is only single threaded unless you do some really interesting stuff to spin up multiple copies of RoboCopy.  The reason that you want multiple threads running is that most Managed Service Providers have some sort of Quality of Service settings configured on their routers so that one connection isn’t able to take all the bandwidth available.  In this case each connection is limited to about 500 kb of bandwidth, so if I run several threads I get more throughput than if I run just a single thread.

Which leads me to option #3.  As I’m moving lots of database files it’s easy enough for me to do multi-threaded FTP, since I can send each file separately and get better aggregate bandwidth (currently I’m pushing 1,662.90 kb per second).  I do this not with the native command line FTP or with the web browser, but with a little FTP application called LeechFTP, which has long been abandoned by its developer.  While it hasn’t been upgraded in years, and some of the error messages aren’t in English, it’s a great app for moving massive amounts of data in a multi-threaded process.

Now because FTP totally sucks when it comes to resuming from a failed upload, I add an additional step to the process: I take whatever data I’m planning on transferring and use either WinRAR or 7-Zip to break the files into smaller chunks.  Typically I’ll just take the entire folder that has all the backups and make one massive upload set out of it.  I usually break the files into 100 Meg segments, as those can usually be uploaded without any sort of interruption, and if there is a problem, reuploading 100 Megs worth of data usually won’t take all that long.  I don’t bother to compress the data; I just put it into a single large rar or 7z fileset.  The reason that I don’t bother trying to compress the data is that it would take hours to compress and the time saved is usually pretty small if any (especially as these backups are already compressed).  Both WinRAR and 7-Zip have store-only options which usually run pretty quickly.  The example 581 Gigs of data that I’m working with here was able to be stored by 7-Zip in about an hour and a half.

I’ve begun using 7-Zip instead of WinRAR for these projects as I’ve found something very annoying about WinRAR when using it to copy files up to a network share (like, say, the shared network drive that the SQL Server is going to restore from).  When WinRAR decompresses all the data it wants to put it into the local %temp% folder, which ends up filling the C drive of whatever server you are doing the work on, while 7-Zip doesn’t have this annoying “feature”.

Once the full backups are copied up (~4 days in this case) I just unpack them and get the restore process started (keep in mind that I’ve got log backups being copied across the same network link as well; I’ll talk about how I get them across later on).
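
For reference, the restore side of this isn’t anything fancy: the full backups are restored WITH NORECOVERY so that the log backups can be applied behind them as they trickle across.  A rough sketch of that process (the database name and file paths below are just placeholders) looks like this.

-- Restore the full backup but leave the database in a restoring state
-- so that log backups can be applied behind it.
RESTORE DATABASE MyDatabase
FROM DISK = 'D:\Restores\MyDatabase_Full.bak'
WITH NORECOVERY, STATS = 5
GO
-- Apply each log backup in sequence, also leaving the database restoring.
RESTORE LOG MyDatabase
FROM DISK = 'D:\Restores\MyDatabase_Log_1.trn'
WITH NORECOVERY
GO
-- Once the last log backup has been applied, bring the database online.
RESTORE DATABASE MyDatabase WITH RECOVERY
GO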

Denny


June 19, 2012  2:00 PM

I’ll be at SQL PASS 2012, will you?

Denny Cherry

It appears that I’ve been selected to give a couple of presentations at the premier Microsoft SQL Server conference again this year.  At the 2012 SQL PASS Summit in Seattle, WA I’ll be presenting a regular session titled “Where Should I Be Encrypting My Data?” where I’ll be talking about all the various places within the application stack that data can be encrypted.  We will also be looking at some of the more spectacular data breaches to talk about where things appear to have gone wrong.

The second session that I’ll be presenting will be an all day pre-con on high availability called “SQL Server 2012 in a Highly Available World“.  During this full day session we will be looking at the legacy high availability solutions such as Database Mirroring, Log Shipping and SQL Server Clustering, but we will also be looking at the new AlwaysOn Availability Groups to see when you should really be looking at each one of these solutions, and when you should be combining some of them to get even higher (and more complex) availability options.  While this high availability session does come with an extra cost, it’ll be worth every penny.

I hope to see you at the SQL PASS summit.

Denny

P.S. Don’t forget to check out my SQL PASS First Timers webcast on October 17th, 2012.


June 18, 2012  2:00 PM

The process could not execute ‘sp_replcmds’ on …

Denny Cherry

When setting up SQL Server replication you might see an error message from the Transactional Replication Log Reader Agent which reads like the following.

Error messages:

  • The process could not execute ‘sp_replcmds’ on ”. (Source: MSSQL_REPL, Error number: MSSQL_REPL20011)
    Get help: http://help/MSSQL_REPL20011
  • Cannot execute as the database principal because the principal “dbo” does not exist, this type of principal cannot be impersonated, or you do not have permission. (Source: MSSQLServer, Error number: 15517)
    Get help: http://help/15517
  • The process could not execute ‘sp_replcmds’ on ”. (Source: MSSQL_REPL, Error number: MSSQL_REPL22037)
    Get help: http://help/MSSQL_REPL22037

Often this error message comes from the database not having a valid owner, or from the SQL Server not being able to correctly identify the owner of the database. The easiest fix is usually to change the database owner using the sp_changedbowner system stored procedure, as shown below. The sa account is a reliable account to change the ownership of the database to.

USE PublishedDatabase
GO
EXEC sp_changedbowner 'sa'
GO

Once the database ownership has been changed the log reader will probably start working right away. If it doesn’t, quickly restarting the log reader should resolve the problem.

While this does require changes to the production database, there is no outage required to make these changes.
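
If you’re not sure which databases on the instance have this problem, a quick read-only check against sys.databases will show any database whose owner no longer maps to a valid login (a minimal sketch; a NULL owner_login is the usual red flag).

-- Show each database's owner; a NULL here usually means the owning
-- login no longer exists on this instance.
SELECT name AS database_name,
    SUSER_SNAME(owner_sid) AS owner_login
FROM sys.databases
ORDER BY name
GO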

Denny


June 14, 2012  9:00 PM

When does SQLCLR launch?

Denny Cherry

A change was made in SQL Server 2012 as to when the SQLCLR is loaded.  In SQL Server 2008 R2 and below the SQLCLR component was loaded only when the “clr enabled” setting, which is controlled through sp_configure, was enabled.  Even with SQLCLR enabled on SQL Server 2008 R2 and below, the SQLCLR is only loaded into SQL Server’s memory when it is needed.

In SQL Server 2012, however, SQLCLR is loaded on startup whether the “clr enabled” setting is enabled or not.  This can be seen in the SQL Server ERRORLOG file by looking for a line similar to the following.

Common language runtime (CLR) functionality initialized using CLR version v4.0.30319 from C:\Windows\Microsoft.NET\Framework64\v4.0.30319\.
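
If you’d rather not dig through the ERRORLOG, the sys.dm_clr_properties DMV reports the same information, including whether the CLR has been initialized.  A quick sketch:

-- Shows the hosted CLR's directory, version and current state
-- (for example "CLR is initialized").
SELECT name, value
FROM sys.dm_clr_properties
GO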

There are a couple of reasons that Microsoft wants the SQLCLR to be loaded automatically when the SQL Server starts instead of waiting for permission to load it.

The first is that there are plenty of native features which require the use of the SQLCLR in SQL Server 2012.  This includes features like the geometry data type, the hierarchyid data type, etc.

The second is that loading the SQLCLR when the SQL Server instance first starts decreases the time that SQL Server must wait the first time the SQLCLR is used. The SQLCLR is a rather large package, and loading it the first time it is used can take up to several seconds to get the CLR into memory.

Just because the SQLCLR is loaded into memory when the SQL Server instance is first started doesn’t mean that the sp_configure setting “clr enabled” is useless.  With this setting disabled, the SQL Server instance will not allow for the creation of CLR objects.  Once the “clr enabled” setting is enabled the CLR doesn’t need to be loaded again, but the SQL Server instance will now allow user created assemblies to be created within the instance.
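
Flipping that switch is just a standard sp_configure change, for example:

-- Allow user assemblies to be created and executed on this instance.
EXEC sp_configure 'clr enabled', 1
GO
RECONFIGURE
GO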

Denny


June 11, 2012  9:00 PM

SQL Server 2012, SQLCLR and Max Server Memory

Denny Cherry

When SQLCLR was first introduced in SQL Server 2005, one of the big complaints that I had with it was that SQLCLR could technically steal all the memory away from the SQL Server’s buffer pool by simply growing its memory pool, which would cause the OS to request memory back from the SQL Server, taking memory away from the buffer pool and the execution plan cache.  Depending on the amount of memory that the SQLCLR needed, this could eventually result in all the memory on the entire server being eaten up by the SQLCLR component of Microsoft SQL Server.

With the release of SQL Server 2012 this problem goes away to some extent, as SQL Server 2012 had many memory architecture related changes put into the product.  Among the variety of changes introduced in SQL Server 2012 is the fact that the SQLCLR feature now gets its memory via the traditional SQL Server memory manager.  One of the side effects of this is that the SQLCLR now falls under the control of the “max server memory” setting, which is accessed through the sp_configure system stored procedure.

While this doesn’t give you the ability to control how much memory the SQLCLR component specifically will take, it does mean that you can, via the “max server memory” setting, keep the SQLCLR processes from taking over all the memory within the Windows OS.

Because of this change in how memory is controlled for the SQLCLR component, when upgrading from a prior version of Microsoft SQL Server to SQL Server 2012 the value used for the “max server memory” setting will need to be rethought, as we no longer need to leave room outside the setting for the SQLCLR.  As an example, if a Windows Server with a single instance of Microsoft SQL Server 2008 had 64 Gigs of RAM installed and the instance was a heavy user of SQLCLR, it might have made sense to configure the SQL Server instance with 50 Gigs of memory as the max server memory, leaving about 4 Gigs for the Windows OS and 10 Gigs for the SQLCLR.  When this server is upgraded to SQL Server 2012 you would then see the memory usage for the Windows OS drop, leaving a large amount of memory unused, as the memory which was previously allocated outside of the “max server memory” setting will sit there unused by the SQLCLR.  Since that memory demand now falls within the “max server memory” setting, the buffer pool and procedure cache will be reduced by the amount of memory that the SQLCLR needs to use below the 50 Gig limit, potentially reducing the amount of memory by 10 Gigs or more.
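
If that happens, the fix is simply to raise the “max server memory” setting to give back the head room that used to be left outside it for the SQLCLR.  The exact value depends on your server and workload; the 58 Gig figure below is only an example.

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
-- Raise the cap since SQLCLR memory now comes out of this setting
-- (value is in MB; 59392 MB = 58 Gigs, leaving about 6 Gigs for the OS).
EXEC sp_configure 'max server memory (MB)', 59392
RECONFIGURE
GO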

The upside to this change in the way that SQLCLR memory is managed is that if the SQLCLR only needs 1 Gig, where before it was being allocated 10 Gigs of memory (as discussed in the example above), it now only takes 1 Gig of memory.  The downside, however, is that there is no knob to turn to limit the amount of memory which the SQLCLR can use at the high end.

Denny


June 4, 2012  3:55 PM

Storage and Virtualization Class Starting Tomorrow

Denny Cherry

SSWUG.ORG’s virtual class will provide database administrators (DBAs) with the skills to properly design storage and virtualization solutions.

Denny Cherry, course instructor and SQL Server MVP and Microsoft Certified Master, will explain why storage can be one of the largest bottlenecks when it comes to database performance.

He will also explain why it can be one of the hardest places to troubleshoot performance issues because storage engineers and database administrators often do not get along. Finally, he will dig into LUNs, HBAs, the fabric, as well as the storage itself.

In the second half of the course, Denny will review the pros and cons of moving SQL Servers into a virtual environment. He will also explain how to tie virtual platforms to the storage array, so that DBAs can maximize the storage performance for their SQL Servers and the virtual environment.

Sign up for the virtual class now and get ready to learn about storage and virtualization at your pace and on your schedule.

Denny


June 4, 2012  2:00 PM

VMware vCenter Performance Tuning

Denny Cherry

As hopefully everyone that is using VMware’s vSphere in their data center knows, VMware’s vCenter runs off of a Microsoft SQL Server database (by default).  Now, as good as the guys at VMware are at building a virtualization platform, their database leaves a little to be desired.  I’ve identified a couple of indexes which, when created against the VMware vSphere 5.0 database, will improve the health of the SQL Server database.

The first index is on the dbo.VPX_PROPERTY_BULLETIN table.  On the system that I happen to be looking at this week, not having this index caused the SQL Server to scan this table 6977 times since the database was last restarted, about 7 days before I began looking at the system.  This table on this SQL Server only contains about 3000 records, but this system is pretty small: just 4 hosts, 4 data stores and about 100 guests at the moment.  The larger this table is, the more impact not having this index will have.

CREATE INDEX mrdenny_OPERATION_TYPE ON dbo.VPX_PROPERTY_BULLETIN
([OPERATION_TYPE])
INCLUDE ([OBJECT_MOID], [GEN_NUMBER])
WITH (FILLFACTOR=70, ONLINE=OFF, PAD_INDEX=ON)

The nice thing about this index is that it also fills the requirements of another index which is needed by the SQL Server.

The second index to create is built on the same table, but on different columns.  While the query which needs this index is run MUCH less often, SQL estimates (on this system at least) that adding it will improve the query performance by 69.94%.  In my mind that’s very much worth it, even if the query is only being run a few times a week.

CREATE INDEX mrdenny_OBJECT_MOID ON dbo.VPX_PROPERTY_BULLETIN
([OBJECT_MOID])
INCLUDE ([GEN_NUMBER])
WITH (FILLFACTOR=50, ONLINE=OFF, PAD_INDEX=ON)

The third index that I’ve identified which needs to be created is against the VPX_LIC_USAGE table.  This table has something to do with the licensing and the size of the table on your system will vary.  This vCenter system has over 16k rows in the table but this system has only been installed for a couple of months at this point.  As your vSphere installation ages it appears that this table will continue to grow and grow.  Best I can tell there’s a couple of rows entered into this table every hour for each host in the farm.  Needless to say this table will grow quite large when you’ve got a large VMware vSphere farm.

CREATE INDEX mrdenny_SAMPLE_TIMESTAMP ON dbo.VPX_LIC_USAGE
([SAMPLE_TIMESTAMP])
INCLUDE ([SAMPLE_TS])
WITH (FILLFACTOR=100, ONLINE=OFF)

As you look at these three indexes which I recommend that you create on your vSphere database, you’ll notice that there is an ONLINE=OFF flag.  If your vCenter database is running on the Enterprise Edition of SQL Server then you can change that to ONLINE=ON, which will allow the indexes to be created online instead of causing blocking while the indexes are being created.  If you don’t have the Enterprise Edition of SQL Server then you can’t create the indexes online.  There should be no impact to the vCenter system if you create the indexes while the system is up and running.  The worst that will happen is that queries which are already running slowly will run a little slower than normal.

You’ll notice that I’ve named these indexes to all start with mrdenny.  This is so that these indexes can be easily identified as coming from my blog (in case the next admin wonders where they came from) and so that you never have to worry about the index names colliding with names that VMware might try and use.
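
If you’d like to confirm that your own vCenter database is generating similar recommendations before creating anything, the missing index DMVs are where suggestions like these (and the estimated improvement percentages) come from.  A rough sketch of that check, run inside the vCenter database:

-- Missing index suggestions for the current database, roughly ordered
-- by how much benefit SQL Server estimates they would provide.
SELECT mid.statement AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    migs.user_seeks,
    migs.avg_user_impact -- estimated % improvement
FROM sys.dm_db_missing_index_details mid
JOIN sys.dm_db_missing_index_groups mig ON mid.index_handle = mig.index_handle
JOIN sys.dm_db_missing_index_group_stats migs ON mig.index_group_handle = migs.group_handle
WHERE mid.database_id = DB_ID()
ORDER BY migs.avg_user_impact * migs.user_seeks DESC
GO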

Denny


May 30, 2012  2:00 PM

Time to vote for VMworld 2012 sessions

Denny Cherry

For those that are interested in the VMware product line and the VMworld conference, it is time to begin the community voting for sessions that will be given at VMworld 2012 in both San Francisco, CA, USA and Barcelona, Spain.

You’ll need to register in order to vote, so create an account, get signed in, and vote for the sessions that you would like to see.  Don’t worry, you don’t need to register for the conference to vote (even though it looks like you do on the site); you only have to create an account on the website in order to get your votes in.

There are a lot of really good looking sessions up on the list, including three of mine, so hopefully I can count on your votes for my sessions.  The three sessions that I have up for vote are:

1331 Optimizing SQL Server Performance in a Virtual Environment

In this session we’ll look over some of the things which you should be looking at within your virtual environment to ensure that you are getting the performance out of it that you should be. This will include how to look for CPU performance issues at the host level. We will also be discussing the Memory Balloon drivers and what they actually do, and how you should be configuring them, and why. We’ll discuss some of the memory sharing technologies which are built into vSphere and Hyper-V and how they relate to SQL Server. Then we will finish up with some storage configuration options to look at.

1332 SQL Server for the VMware Admin

One of the biggest issues in database performance is the storage, and one of the largest consumers of storage is databases. Because of the unfortunate disconnect that often occurs between database administrators and storage engineers, troubleshooting performance problems is tricky at best. In this session we’ll work to demystify the database so it can run at the best possible speed.

During this session we’ll look at how databases work from the database side, how the DBA should be laying out the database on the storage, what questions the DBA should be asking the storage admin, and most importantly what questions the storage admin should be asking your DBA when allocating storage for a new database server.

1717 SQL Server 2012 High Availability Without Shared Storage

In this session we will look at the features which are provided with Microsoft SQL Server 2012 as part of the “Always On” features, including site to site configurations, to allow for a large scale high availability solution without the need for any high end SAN storage solution.

Additionally we will be looking at the ability to have redundant servers which can be used for reporting or for taking your backups, reducing the load on the production database. We will also look at a unique use case using SQL Server 2012’s Always On feature to scale out reads to synchronous read only copies.

I hope that I can count on your votes so that my sessions get picked for VMworld 2012.

Denny


May 28, 2012  2:00 PM

Know your app when migrating your data

Denny Cherry

One of the projects that I’m working on at the moment is moving a data center from being hosted in Dallas, Texas up to New Jersey so that it’s closer to the company’s corporate office in New York City.  As a part of this data center migration the Microsoft SQL Server databases must be moved to the new data center (duh).  The company whose data is being moved is a medical billing company, and their system is pretty busy most of the time.  However, at night the systems aren’t all that busy and on the weekends there is next to nothing going on.

I bring this up because moving 13 Gigs worth of transaction log files per day isn’t exactly something which can be done all that quickly during the week.  However, when I look at the transaction log backups for a Sunday, each of the log backups for the largest database is just a few megs every 12 minutes, where those same log backups during the day are hundreds of megs in size (or larger) every 12 minutes.
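
The pattern is easy to see straight from the backup history in msdb.  Something like the query below (a sketch; the database name is a placeholder) shows how the daily log backup volume drops off on the weekends.

-- Total log backup volume per day for one database, from backup history.
SELECT CONVERT(date, backup_finish_date) AS backup_day,
    COUNT(*) AS log_backups,
    SUM(backup_size) / 1048576.0 AS total_size_mb
FROM msdb.dbo.backupset
WHERE database_name = 'MyLargeDatabase' -- placeholder name
    AND type = 'L' -- log backups only
GROUP BY CONVERT(date, backup_finish_date)
ORDER BY backup_day
GO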

By waiting to do the data transfer of the full backups from Wednesday until Sunday I reduced the amount of data in the initial transaction log transfer by about 5 Gigs, for just this one database.  As the data is transferring at about 5 Mbits per second, that’s a decent amount of time savings per database (there are about 40 databases which have to be moved as a part of this solution).

This just goes to show that with a little knowledge of the systems that you are working with you can save yourself a decent amount of time.

Denny

