SQL Server with Mr. Denny


June 18, 2012  2:00 PM

The process could not execute ‘sp_replcmds’ on …

Denny Cherry

When setting up SQL Server replication you might see an error message from the Transactional Replication Log Reader Agent which reads like the following.

Error messages:

  • The process could not execute ‘sp_replcmds’ on ”. (Source: MSSQL_REPL, Error number: MSSQL_REPL20011)
    Get help: http://help/MSSQL_REPL20011
  • Cannot execute as the database principal because the principal “dbo” does not exist, this type of principal cannot be impersonated, or you do not have permission. (Source: MSSQLServer, Error number: 15517)
    Get help: http://help/15517
  • The process could not execute ‘sp_replcmds’ on ”. (Source: MSSQL_REPL, Error number: MSSQL_REPL22037)
    Get help: http://help/MSSQL_REPL22037

Often this error message comes from the database not having a valid owner, or from SQL Server not being able to correctly identify the owner of the database. The easiest fix is usually to change the database owner using the sp_changedbowner system stored procedure as shown below; the sa account is a reliable account to change the ownership of the database to.
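If you want to confirm that the owner is the problem first, a quick check against sys.databases will show whether the database owner’s SID still maps to a valid login (a minimal sketch; SUSER_SNAME returns NULL when it doesn’t):

USE master
GO
SELECT name, SUSER_SNAME(owner_sid) AS owner_name
FROM sys.databases
WHERE name = 'PublishedDatabase'
GO

If the owner comes back NULL, or as a login which no longer exists, change it: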

USE PublishedDatabase
GO
EXEC sp_changedbowner 'sa'
GO
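
As a side note, sp_changedbowner is marked as deprecated in newer versions of SQL Server, so if you’d rather use the replacement syntax, ALTER AUTHORIZATION does the same thing:

ALTER AUTHORIZATION ON DATABASE::PublishedDatabase TO sa
GO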

Once the database ownership has been changed the log reader will probably start working right away. If it doesn’t, restarting the log reader agent should quickly resolve the problem.

While this does require changes to the production database, there is no outage required to make these changes.

Denny

June 14, 2012  9:00 PM

When does SQLCLR launch?

Denny Cherry

A change was made in SQL Server 2012 as to when the SQLCLR is loaded. In SQL Server 2008 R2 and below the SQLCLR component was loaded only when the “clr enabled” setting, which is controlled through sp_configure, was enabled. Even with SQLCLR enabled on SQL Server 2008 R2 and below, the SQLCLR was only loaded into SQL Server’s memory when it was needed.

In SQL Server 2012, however, the SQLCLR is loaded on startup regardless of whether the “clr enabled” setting is enabled or disabled. This can be seen in the SQL Server ERRORLOG file by looking for a line similar to the following.

Common language runtime (CLR) functionality initialized using CLR version v4.0.30319 from C:\Windows\Microsoft.NET\Framework64\v4.0.30319\.
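
If you’d rather not dig through the ERRORLOG file by hand, the undocumented (but widely used) xp_readerrorlog procedure can search the current log for the line; a quick sketch:

EXEC master.dbo.xp_readerrorlog 0, 1, N'Common language runtime'
GO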

There are a couple of reasons that Microsoft wants the SQLCLR to be loaded automatically when the SQL Server starts instead of waiting for permission to load it.

The first is that there are plenty of native features which require the use of the SQLCLR in SQL Server 2012.  This includes features like the geometry data type, the hierarchyid data type, etc.

The second is that loading the SQLCLR when the SQL Server instance first starts decreases the time that SQL Server must wait the first time the SQLCLR is used. The SQLCLR is a rather large package, and loading it into memory on first use can take several seconds.

Just because the SQLCLR is loaded into memory when the SQL Server instance is first started doesn’t mean that the sp_configure setting “clr enabled” is useless. With this setting disabled, the SQL Server instance will not allow the creation of CLR objects. Once the “clr enabled” setting is enabled the CLR doesn’t need to be loaded again; the SQL Server instance will simply begin allowing user created assemblies to be created within the instance.
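
Flipping the setting on is a one liner; a quick sketch:

EXEC sp_configure 'clr enabled', 1
GO
RECONFIGURE
GO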

Denny


June 11, 2012  9:00 PM

SQL Server 2012, SQLCLR and Max Server Memory

Denny Cherry

When SQLCLR was first introduced in SQL Server 2005, one of the big complaints that I had with it was that SQLCLR could technically steal all the memory away from the SQL Server’s buffer pool. By simply growing its memory pool, the SQLCLR would cause the OS to request memory back from the SQL Server, taking memory away from the buffer pool and the execution plan cache. Depending on the amount of memory that the SQLCLR needed, this could result in the entire memory of the server eventually being eaten up by the SQLCLR component of Microsoft SQL Server.

With the release of SQL Server 2012 this problem goes away to some extent, as SQL Server 2012 had many memory architecture related changes put into the product. Among the variety of changes introduced in SQL Server 2012 is the fact that the SQLCLR feature now gets its memory via the traditional SQL Server memory manager. One of the side effects of this is that the SQLCLR now falls under the control of the “max server memory” setting, which is accessed through the sp_configure system stored procedure.

While this doesn’t give you the ability to control how much memory the SQLCLR component specifically will take, it does mean that via the “max server memory” setting you can keep the SQLCLR processes from taking over all the memory within the Windows OS.

Because of this change in how memory is controlled for the SQLCLR component, when upgrading from a prior version of Microsoft SQL Server to SQL Server 2012 the value used for the “max server memory” setting will need to be rethought, as we no longer need to leave room outside the setting for the SQLCLR. As an example, if a Windows Server with a single instance of Microsoft SQL Server 2008 had 64 Gigs of RAM installed and the instance was a heavy user of SQLCLR, it might have made sense to set max server memory to 50 Gigs, leaving about 4 Gigs for the Windows OS and 10 Gigs for the SQLCLR. When this server is upgraded to SQL Server 2012 you would see the memory usage for the Windows OS drop, leaving a large amount of memory unused, because the memory which was previously allocated outside of the “max server memory” setting will now sit there unused by the SQLCLR. And because the SQLCLR now lives within the “max server memory” setting, the buffer pool and procedure cache will be reduced by whatever amount of memory the SQLCLR needs below the 50 Gig limit, potentially reducing the memory available to them by 10 Gigs or more.
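
For the example server above, raising the setting after the upgrade is straightforward. A sketch, assuming you decide to give the instance 60 of the 64 Gigs and leave the rest for the Windows OS:

EXEC sp_configure 'show advanced options', 1
GO
RECONFIGURE
GO
EXEC sp_configure 'max server memory (MB)', 61440 -- 60 Gigs, which now covers the SQLCLR as well
GO
RECONFIGURE
GO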

The upside to this change in the way that the SQLCLR memory is managed is that if the SQLCLR only needs 1 Gig, where before it was being allocated 10 Gigs of memory (as discussed in the example above), it now only takes 1 Gig of memory. The downside is that there is no knob to turn to limit the amount of memory which the SQLCLR can use at the high end.

Denny


June 4, 2012  3:55 PM

Storage and Virtualization Class Starting Tomorrow

Denny Cherry

SSWUG.ORG’s virtual class will provide database administrators (DBAs) with the skills to properly design storage and virtualization solutions.

Denny Cherry, course instructor, SQL Server MVP and Microsoft Certified Master, will explain why storage can be one of the largest bottlenecks when it comes to database performance.

He will also explain why it can be one of the hardest places to troubleshoot performance issues because storage engineers and database administrators often do not get along. Finally, he will dig into LUNs, HBAs, the fabric, as well as the storage itself.

In the second half of the course, Denny will review the pros and cons of moving SQL Servers into a virtual environment. He will also explain how to tie virtual platforms to the storage array, so that DBAs can maximize the storage performance for their SQL Servers and the virtual environment.

Sign up for the virtual class now and get ready to learn about storage and virtualization at your pace and on your schedule.

Denny


June 4, 2012  2:00 PM

VMware vCenter Performance Tuning

Denny Cherry

As hopefully everyone that is using VMware’s vSphere in their data center knows, VMware’s vCenter runs off of a Microsoft SQL Server database (by default). Now, as good as the guys at VMware are at building a virtualization platform, their database leaves a little to be desired. I’ve identified a couple of indexes which, when created against the VMware vSphere 5.0 database, will improve the health of the SQL Server database.

The first index is on the dbo.VPX_PROPERTY_BULLETIN table. On the system that I happen to be looking at this week, not having this index caused the SQL Server to scan this table 6977 times since the database was last restarted, about 7 days before I began looking at the system. This table on this SQL Server only contains about 3000 records, but this system is pretty small: just 4 hosts, 4 data stores and about 100 guests at the moment. The larger this table is, the more impact not having this index will have.
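
If you want to see how often this table is being scanned on your own system, something like the following should do it (a sketch using the sys.dm_db_index_usage_stats DMV; run it in the vCenter database):

SELECT OBJECT_NAME(s.object_id) AS table_name, s.index_id, s.user_scans, s.user_seeks
FROM sys.dm_db_index_usage_stats s
WHERE s.database_id = DB_ID()
AND s.object_id = OBJECT_ID('dbo.VPX_PROPERTY_BULLETIN')

And here is the index itself.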

CREATE INDEX mrdenny_OPERATION_TYPE ON dbo.VPX_PROPERTY_BULLETIN
([OPERATION_TYPE])
INCLUDE ([OBJECT_MOID], [GEN_NUMBER])
WITH (FILLFACTOR=70, ONLINE=OFF, PAD_INDEX=ON)

The nice thing about this index is that it also fills the requirements of another index which is needed by the SQL Server.

The second index to create is built on the same table, but on different columns.  While the query which needs this index is run MUCH less often, SQL estimates (on this system at least) that adding it will improve the query performance by 69.94%.  In my mind that’s very much worth it, even if the query is only being run a few times a week.

CREATE INDEX mrdenny_OBJECT_MOID ON dbo.VPX_PROPERTY_BULLETIN
([OBJECT_MOID])
INCLUDE ([GEN_NUMBER])
WITH (FILLFACTOR=50, ONLINE=OFF, PAD_INDEX=ON)

The third index that I’ve identified is against the VPX_LIC_USAGE table. This table has something to do with licensing, and its size on your system will vary. This vCenter system has over 16k rows in the table, and the system has only been installed for a couple of months at this point. As your vSphere installation ages it appears that this table will continue to grow and grow; best I can tell, a couple of rows are entered into this table every hour for each host in the farm. Needless to say this table will grow quite large when you’ve got a large VMware vSphere farm.
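
If you’d like to check the growth rate on your own system, here is a quick sketch (this assumes SAMPLE_TIMESTAMP is a datetime column, which the index definition below suggests):

SELECT CONVERT(date, SAMPLE_TIMESTAMP) AS sample_day, COUNT(*) AS rows_added
FROM dbo.VPX_LIC_USAGE
GROUP BY CONVERT(date, SAMPLE_TIMESTAMP)
ORDER BY sample_day

And the index itself.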

CREATE INDEX mrdenny_SAMPLE_TIMESTAMP ON dbo.VPX_LIC_USAGE
([SAMPLE_TIMESTAMP])
INCLUDE ([SAMPLE_TS])
WITH (FILLFACTOR=100, ONLINE=OFF)

As you look at these three indexes which I recommend that you create on your vSphere database, you’ll notice that there is an ONLINE=OFF flag. If your vCenter database is running on the Enterprise Edition of SQL Server then you can change that to ONLINE=ON, which will allow the indexes to be created online instead of causing blocking while the indexes are being created. If you don’t have the Enterprise Edition of SQL Server then you can’t create the indexes online. There should be no impact to the vCenter system if you create the indexes while the system is up and running; the worst that will happen is that queries which are already running slowly will run a little slower than normal while the indexes build.

You’ll notice that I’ve named these indexes to all start with mrdenny. This is so that the indexes can be easily identified as coming from my blog (in case the next admin wonders where they are from), and so that you never have to worry about the index names colliding with names that VMware would try to use.
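
That naming convention also makes the indexes easy to find later; a quick sketch to list them:

SELECT OBJECT_NAME(object_id) AS table_name, name AS index_name
FROM sys.indexes
WHERE name LIKE 'mrdenny%'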

Denny


May 30, 2012  2:00 PM

Time to vote for VMworld 2012 sessions

Denny Cherry

For those that are interested in the VMware product line and the VMworld conference, it is time to begin the community voting for sessions that will be given at VMworld 2012 in both San Francisco, CA, USA and Barcelona, Spain.

You’ll need to register in order to vote, so create an account, get signed in, and vote for the sessions that you would like to see. Don’t worry, you don’t need to register for the conference itself in order to vote (it looks like you do on the site); you only have to create an account on the website to get your votes in.

There are a lot of really good looking sessions up on the list, including three of mine, so hopefully I can count on your votes for my sessions.  The three sessions that I have up for vote are:

1331 Optimizing SQL Server Performance in a Virtual Environment

In this session we’ll look over some of the things which you should be looking at within your virtual environment to ensure that you are getting the performance out of it that you should be. This will include how to look for CPU performance issues at the host level. We will also be discussing the Memory Balloon drivers, what they actually do, how you should be configuring them, and why. We’ll discuss some of the memory sharing technologies which are built into vSphere and Hyper-V and how they relate to SQL Server. Then we will finish up with some storage configuration options to look at.

1332 SQL Server for the VMware Admin

One of the biggest issues in database performance is the storage, and one of the largest consumers of storage is databases. An unfortunate disconnect often occurs between database administrators and storage engineers, which makes troubleshooting performance problems tricky at best. In this session we’ll work to demystify the database so it can run at the best possible speed.

During this session we’ll look at how databases work from the database side, how the DBA should be laying out the database on the storage, what questions the DBA should be asking the storage admin, and most importantly what questions the storage admin should be asking your DBA when allocating storage for a new database server.

1717 SQL Server 2012 High Availability Without Shared Storage

In this session we will look at the features which are provided with Microsoft SQL Server 2012 as part of the “AlwaysOn” feature set, including site to site configurations which allow for a large scale high availability solution without the need for a high end SAN storage solution.

Additionally we will be looking at the ability to have redundant servers which can be used for reporting or for taking your backups, reducing the load on the production database. We will also look at a unique use case which uses SQL Server 2012’s AlwaysOn feature to scale out reads to synchronous read only copies.

I hope that I can count on your votes so that my sessions get picked for VMworld 2012.

Denny


May 28, 2012  2:00 PM

Know your app when migrating your data

Denny Cherry

One of the projects that I’m working on at the moment is moving a data center from being hosted in Dallas, Texas up to New Jersey so that it’s closer to the company’s corporate office in New York City. As a part of this data center migration the Microsoft SQL Server databases must be moved to the new data center (duh). The company whose data is being moved is a medical billing company, and their system is pretty busy most of the time. However, at night the systems aren’t all that busy, and on the weekends there is next to nothing going on.

I bring this up because moving 13 Gigs worth of transaction log files per day isn’t exactly something which can be done all that quickly during the week. However, when I look at the transaction log backups for a Sunday, each of the log backups for the largest database is just a few megs every 12 minutes, whereas those same log backups during the day are hundreds of megs in size (or larger) every 12 minutes.

By waiting to do the data transfer of the full backups from Wednesday until Sunday I reduced the amount of data in the initial transaction log transfer by about 5 Gigs, for just this one database. As the data is transferring at about 5 Mbits per second, that’s roughly two hours of transfer time saved for this one database (5 Gigs is about 40,000 megabits, and at 5 Mbits per second that’s around 8,000 seconds), and there are about 40 databases which have to be moved as a part of this solution.

This just goes to show that with a little knowledge of the systems that you are working with you can save yourself a decent amount of time.

Denny


May 28, 2012  2:00 PM

Figuring out how log shipping is doing

Denny Cherry

A lot of the time when doing data center migrations or system migrations I end up using Transaction Log Shipping to get the data from one SQL Server to another.  I do this for a few reasons.

  1. It’s reliable.
  2. It’s easy to do (I’ve got plenty of scripts sitting around to use)
  3. I can easily see just how far out of sync the databases are

When I’m using log shipping I’ll sometimes be asked how far behind the new system is from the production system. There are all sorts of complex ways of finding this out, like bringing the database into standby mode and looking at some table to see the last value in it. However, the method that I really like is to use the msdb database and simply query for the information using the dbo.backupset and dbo.restorehistory system tables. This first query runs on the destination server.

USE msdb
GO
-- On the destination server: the most recent log backup which has been restored for each database
SELECT database_name, max(backup_finish_date) DatabaseState
FROM dbo.backupset
WHERE EXISTS (SELECT * FROM dbo.restorehistory WHERE backupset.backup_set_id = restorehistory.backup_set_id)
GROUP BY database_name
GO

I can then compare the output from this query to the output from a similar query run on the production server.

USE msdb
GO
-- On the production server: the most recent log backup which has been taken for each database
SELECT database_name, max(backup_finish_date) DatabaseState
FROM dbo.backupset
WHERE database_name NOT IN ('master', 'model', 'msdb', 'distribution')
GROUP BY database_name
GO

If the output from these two queries matches then the destination server has all the log records that have been backed up; if not, it doesn’t. The difference between those values tells you how far behind the destination server is.

Denny


May 22, 2012  9:00 AM

Still time to sign up for SQLDay 2012 pre-con

Denny Cherry

There is still plenty of time to get signed up for my SQLDay 2012 all day session in Poland this week on Thursday the 24th of May. Below you’ll find a description of the session which I’ll be teaching.

This session will be a two part session in which we will be focusing on two of the biggest topics in the DBA field. How to properly design your SAN storage solution and how to properly design your virtualization solution.

The storage portion of this session will focus on SAN storage, but most of the material will apply to direct attached storage as well.

In the first half of the session we’ll be focusing on the storage array. Storage can be one of the biggest bottlenecks when it comes to database performance. It’s also one of the hardest places to troubleshoot performance issues because storage engineers and database administrators often do not speak the same language. In this session, we’ll be looking at storage from both the database and storage perspectives. We’ll be digging into LUNs, HBAs, the fabric, as well as the storage configuration.

After going over the components we’ll dig into some advanced storage configurations. This includes RAID groups, multi-pathing software, and proper redundant storage network design. We will also be digging into some advanced storage array backup techniques including taking storage level clones and snapshots. After going over these advanced techniques we will dig into how these can best be used to backup the SQL Server environment to provide maximum redundancy with no recurring tape costs.

In addition to theory, we’ll be looking at an actual SAN so that we can translate what we see in the Storage Array with what we see on the actual server.

In the second half of the day we’ll be looking into the pros and cons of moving SQL Servers into a virtual server environment. Specifically we’ll be looking into when it’s a good idea and when it’s probably not a good idea. Like everything in the database world there are no hard set answers as to whether virtualization is a good idea or not; there are times when virtualizing a SQL Server is a good idea and can save you some money, and other times when you would be shooting yourself in the foot and virtualization isn’t a good idea. We’ll be focusing on how to make this decision, and how to gather the metrics that you need in order to come to it.

We’ll look into how to tie the virtual platforms to the storage array so that you can maximize the storage performance for your SQL Servers and the virtual environment.

I hope to see you in Poland at SQLDay 2012 later this week.

Denny


May 21, 2012  2:00 PM

What to do when your Twitter account gets hacked

Denny Cherry

On occasion Twitter accounts get hacked and are used to send out spam to your Twitter followers, usually via Direct Message. When your Twitter account is taken over there are two different things that you need to do in order to get your account back under your control.

The first is pretty straightforward: change your Twitter password. This is done by going to the Twitter web site (www.twitter.com) and logging in. Then you’ll need to go to your settings page (by clicking the little person drop down in the upper right, then selecting settings from the menu) and select the Password tab from the menu on the left. You can also get there directly from this link.

The second thing that needs to be done is to reset the rights that all of the applications have to use your account. If you don’t do this, and one of the apps which has access to your account is what is sending out the direct messages, it will still be able to send them. You do this by logging into the Twitter web site (www.twitter.com), going into your account settings (by clicking the little person drop down in the upper right, then selecting settings from the menu), then selecting “Apps” from the menu on the left. You can also get there directly from this link. From this page you’ll want to revoke access to every application listed. Then, as you come across the applications naturally, add them back; don’t assume that every app which currently has rights to your account should keep them.

Once you’ve done both of these things you’ll need to change the password for any third party applications like TweetDeck which log in to the Twitter website directly, and any apps which need access to your Twitter account will need to be reauthorized. This sounds painful, but it isn’t. Check the website for your Twitter client on how to change the password which the application uses to log into the website.

Hopefully you never need this information, but if you do, hopefully this helps fix any problems that you have,
Denny

