SQL Server with Mr. Denny


July 12, 2012  2:00 PM

PBM, Replication and Stored Procedure Names

Denny Cherry

One of the things that Policy Based Management is really good at is making sure that people don’t create stored procedures that start with “sp_”.  However, when the server that you are deploying the policies to is also being used for SQL Server replication, this gets a little more complex, because all of the replication stored procedures start with “sp_” by default and they aren’t marked as system stored procedures.

I came across this problem when setting this up on a client’s system.  The way I was able to get around it was to have the policy evaluate against a condition which filtered out the stored procedures used for SQL Server replication.  I did this by setting up a condition, which I called “User Defined Stored Procedures”, with two values in the Expression.  The first looks at the Schema field and excludes anything in the “sys” schema (which takes care of all the system objects); the second looks at the Name field and excludes everything that matches “sp_MS%”.

Now, to ensure that this only ran against the user databases, I created another condition against the Database facet which checks the IsSystemObject field and makes sure that it is False.  That way I could put procedures like sp_whoisactive and sp_who3 into the master database and not have a problem with them.

The actual check condition of the policy was easy enough to set up, simply checking that the stored procedure name isn’t like “sp%”.
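Since the screenshots aren’t reproduced here, a rough sketch of how the three conditions read in the PBM expression editor follows; the syntax is approximate, but the field names and values are the ones described above.

-- Condition "User Defined Stored Procedures" (StoredProcedure facet, filters out replication procedures)
@Schema != 'sys' AND NOT (@Name LIKE 'sp_MS%')

-- Condition on the Database facet (limits evaluation to user databases)
@IsSystemObject = False

-- Check condition evaluated by the policy itself
NOT (@Name LIKE 'sp%')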

Bringing this all together is the actual policy, which is configured with the check condition, and which filters the objects being checked against the two other conditions, helping to limit the amount of time that the policy takes to execute.  As this is a SQL Server 2008 R2 instance in this example, I had to use a schedule to verify everything nightly, but that’ll do.

Hopefully if you run across this situation this will help you get everything set up faster than I was able to.

Denny

July 9, 2012  6:20 PM

Windows 2012 Dates Announced

Denny Cherry

It appears that Microsoft has announced the dates for RTM (Release To Manufacturing) and GA (General Availability) for both the Windows 2012 server and client operating systems. Both the client and server will RTM the first week in August, with the server OS becoming available in September and the client OS in October. You can read more about the announcements and the schedules in the posts on the server and client teams’ respective blogs.

Denny


July 5, 2012  6:31 PM

VMworld needs to provide more feedback to session submitters

Denny Cherry

I’ve submitted a lot of sessions to a lot of conferences over the years.  And while the feedback from some of the conferences is a little cryptic, thanks to the review team only having access to canned responses, it’ll do.  Now personally I’d love to get specific feedback saying exactly why my session wasn’t selected, but I’m realistic: there are probably hundreds or thousands of sessions submitted for every major conference, so canned responses are as good as it is going to get.

However, VMworld hits the bottom rung of the ladder.  The VMworld content team simply provides this feedback:

Here is a list of the most common reasons why sessions were declined:

  • There were too many submissions with similar topics.
  • The submission was too basic, not enough information was provided in the abstract.
  • The session was too single vendor product focused and likely to have a commercial nature.

Having the most common reasons why sessions weren’t picked isn’t exactly helpful.  Given that my abstracts have been accepted at conferences like TechEd and the SQL PASS Summit, I can assume that the submission wasn’t too basic.  While my sessions are SQL Server specific, virtualizing SQL Server is a major push for VMware, so I’m assuming that I don’t fall into the last bucket.  That leaves that there were just too many submissions, which I can’t believe, as there are currently only three sessions on the schedule that talk about SQL Server (as of my writing this).  The first is about virtualizing SQL Server, the second is about how General Mills succeeded in virtualizing SQL Server using VMware, and the third appears to be a repeat of a SQL virtualization session from last year.

The first session is called “Virtualizing SQL High Availability”, but the abstract doesn’t talk about SQL Server high availability; it only talks about SQL Server performance tuning within a virtual machine, which, let’s be honest, from a SQL Server perspective is identical to tuning SQL Server on a physical machine.  Make sure the disks are fast enough and that you have enough memory and CPU, then do the normal SQL performance tuning.  Looking at the external speaker’s company website, he appears to be knowledgeable about databases in general, but he doesn’t list anything more current than SQL Server 7 in his publications list for SQL Server, and most of his publications are for Oracle.

The second session, called “Virtualizing SQL 2012 at General Mills”, looks basically like a fluff piece being led by a Technical Account Manager from VMware (Sales) and a VMware Architect. In other words, they’ll have no one around who can actually answer SQL Server questions or talk about SQL Server in depth.

The third session is called “Virtualizing SQL Best Practices”, which I’m pretty sure I saw at VMworld last year.  When I saw the session by this same presenter last year, it was full of incorrect and very outdated information, much of which was based on SQL Server 2000 but was being presented for SQL Server 2008 and SQL Server 2008 R2.  If this is an updated version of last year’s session, I REALLY hope, for the attendees’ sake, that the incorrect information has been corrected.

Now, am I writing this because I think I should have been handed a speaking slot at VMworld?  No.  I’m writing this because I think that VMworld needs to provide a better service to its potential speakers, so that the speakers can improve their abstracts based on VMworld’s requirements (as every conference has different criteria that it uses when evaluating session abstracts).  I also feel that VMworld needs to do a better job providing SQL Server specific content to its attendees.  By comparison, VMworld has 9 Oracle sessions listed (granted, I think that only 7 of them are actually Oracle specific).

Hopefully this blog post won’t fall on deaf ears at VMworld and they will do something about their lack of SQL Server content, as out of the three sessions listed I personally don’t have much hope for any of them being very useful to the general VMworld attendee.

Before writing this blog post I emailed VMware asking for more information about why my sessions weren’t selected.  I got a pretty generic response back which doesn’t say anything about why my specific sessions weren’t chosen:

  • Take the time to specify some of the details in your session description.  The title, abstract, outline, and session takeaways are key factors in determining whether your session is selected and can affect session attendance.
  • Be sure your perspective or story is unique.
  • Attendees are there to learn, so educate rather than pitch your product.  Your proposal submission should clearly reflect your intention.
  • Make sure your topic is relevant to the audience you’re targeting. Review the content topics before submitting a session.
  • Session selections are based on the content described in the submission, and the speaker(s). Speakers should have extensive experience presenting at conferences of a similar nature. If a speaker has presented at VMworld in the past, we will look at speaker scores and take them into consideration during selection.

Again, not exactly the most useful information for me, as an abstract writer, to improve with.

Denny


July 5, 2012  2:00 PM

Prepping the security for a data warehouse

Denny Cherry

In an earlier blog post I talked about how I had to grant some users the ability to create indexes on a reporting server.  A couple of people have asked me how I created the domain groups and ensured that the permissions were set up correctly.  In this case I did it with two SQL scripts.

The first generates some command line commands which can then be run on a domain controller in order to create the domain groups that the users will be put into.  This script simply compares the groups which exist as logins against the list of databases on the SQL Server.  In this case I’m filtering the databases down based on a prefix, as I only want to deal with databases that start with rpt or secure.  There are 5 different domain groups created for each database.  I take the output of this first query, run it from a command line window on an Active Directory domain controller, and the domain groups are then created.  Once they are created I simply drop them into the correct folder in AD and the help desk can start dropping users into the groups.


-- Generate the "net group" commands for any reporting database whose domain groups don't yet exist as logins
select 'net group EDW1-' + name + ' /DOMAIN /ADD'
from sys.databases
where (name like 'rpt%'
or name like 'secure%')
and 'MyDomain\EDW1-' + name not in (select name from sys.server_principals)
UNION ALL
select 'net group EDW1_SHOWPLAN_' + name + ' /DOMAIN /ADD'
from sys.databases
where (name like 'rpt%'
or name like 'secure%')
and 'MyDomain\EDW1_SHOWPLAN_' + name not in (select name from sys.server_principals)
UNION ALL
select 'net group EDW1_CREATE_INDEX_' + name + ' /DOMAIN /ADD'
from sys.databases
where (name like 'rpt%'
or name like 'secure%')
and 'MyDomain\EDW1_CREATE_INDEX_' + name not in (select name from sys.server_principals)
UNION ALL
select 'net group EDW1_CREATE_VIEW_' + name + ' /DOMAIN /ADD'
from sys.databases
where (name like 'rpt%'
or name like 'secure%')
and 'MyDomain\EDW1_CREATE_VIEW_' + name not in (select name from sys.server_principals)
UNION ALL
select 'net group EDW1_CREATE_PROC-' + name + ' /DOMAIN /ADD'
from sys.databases
where (name like 'rpt%'
or name like 'secure%')
and 'MyDomain\EDW1_CREATE_PROC-' + name not in (select name from sys.server_principals)
GO
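For a single hypothetical database named rptSales (your database names will obviously differ), the five generated commands look like this:

net group EDW1-rptSales /DOMAIN /ADD
net group EDW1_SHOWPLAN_rptSales /DOMAIN /ADD
net group EDW1_CREATE_INDEX_rptSales /DOMAIN /ADD
net group EDW1_CREATE_VIEW_rptSales /DOMAIN /ADD
net group EDW1_CREATE_PROC-rptSales /DOMAIN /ADD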

The second script creates the logins for each of these domain groups, again by simply looking at the ones that don’t exist in sys.server_principals.  This script generates T-SQL code which I then copy and paste into another SQL query window and execute.  The only thing that I have to remember with this script is that there are line breaks in the output, so I need to ensure that I run it with the results going to text, not to the default grid.


-- Generate the CREATE LOGIN / CREATE USER statements (plus role membership and SHOWPLAN grants) for any missing domain groups
select 'use master
GO
CREATE LOGIN [MyDomain\EDW1-' + name + '] FROM WINDOWS
GO
use ' + name + '
GO
CREATE USER [MyDomain\EDW1-' + name + '] FROM LOGIN [MyDomain\EDW1-' + name + ']
GO
EXEC sp_addrolemember @rolename=''db_datareader'', @membername=''MyDomain\EDW1-' + name + '''
GO'
from sys.databases
where (name like 'rpt%'
or name like 'secure%')
and 'MyDomain\EDW1-' + name not in (select name from sys.server_principals)
union all
select 'use master
GO
CREATE LOGIN [MyDomain\EDW1_CREATE_INDEX_' + name + '] FROM WINDOWS
GO
use ' + name + '
GO
CREATE USER [MyDomain\EDW1_CREATE_INDEX_' + name + '] FROM LOGIN [MyDomain\EDW1_CREATE_INDEX_' + name + ']
GO'
from sys.databases
where (name like 'rpt%'
or name like 'secure%')
and 'MyDomain\EDW1_CREATE_INDEX_' + name not in (select name from sys.server_principals)
UNION ALL
select 'use master
GO
CREATE LOGIN [MyDomain\EDW1_CREATE_PROC-' + name + '] FROM WINDOWS
GO
use ' + name + '
GO
CREATE USER [MyDomain\EDW1_CREATE_PROC-' + name + '] FROM LOGIN [MyDomain\EDW1_CREATE_PROC-' + name + ']
GO
GRANT SHOWPLAN TO  [MyDomain\EDW1_CREATE_PROC-' + name + ']
GO'
from sys.databases
where (name like 'rpt%'
or name like 'secure%')
and 'MyDomain\EDW1_CREATE_PROC-' + name not in (select name from sys.server_principals)
union all
select 'use master
GO
CREATE LOGIN [MyDomain\EDW1_CREATE_VIEW_' + name + '] FROM WINDOWS
GO
use ' + name + '
GO
CREATE USER [MyDomain\EDW1_CREATE_VIEW_' + name + '] FROM LOGIN [MyDomain\EDW1_CREATE_VIEW_' + name + ']
GO
GRANT SHOWPLAN TO  [MyDomain\EDW1_CREATE_VIEW_' + name + ']
GO'
from sys.databases
where (name like 'rpt%'
or name like 'secure%')
and 'MyDomain\EDW1_CREATE_VIEW_' + name not in (select name from sys.server_principals)
union all
select 'use master
GO
CREATE LOGIN [MyDomain\EDW1_SHOWPLAN_' + name + '] FROM WINDOWS
GO
use ' + name + '
GO
CREATE USER [MyDomain\EDW1_SHOWPLAN_' + name + '] FROM LOGIN [MyDomain\EDW1_SHOWPLAN_' + name + ']
GO
GRANT SHOWPLAN TO  [MyDomain\EDW1_SHOWPLAN_' + name + ']
GO'
from sys.databases
where (name like 'rpt%'
or name like 'secure%')
and 'MyDomain\EDW1_SHOWPLAN_' + name not in (select name from sys.server_principals)
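As an example, for the same hypothetical rptSales database, the first branch of the script emits a block like this, ready to paste into a query window:

use master
GO
CREATE LOGIN [MyDomain\EDW1-rptSales] FROM WINDOWS
GO
use rptSales
GO
CREATE USER [MyDomain\EDW1-rptSales] FROM LOGIN [MyDomain\EDW1-rptSales]
GO
EXEC sp_addrolemember @rolename='db_datareader', @membername='MyDomain\EDW1-rptSales'
GO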

The great thing about these scripts is that they produce a consistent set of domain groups for each new database that is added to the reporting server, so the help desk can quickly and easily figure out which groups users need to be placed into.  Could I have done this in PowerShell? Yeah, I’m sure that I could have.  But I don’t know the PowerShell cmdlets that would have been needed to create the domain accounts, so honestly I didn’t bother even trying.  I knew the SQL commands to do this off the top of my head.  While someone else probably could have written this in PowerShell in 10 minutes, I did it in T-SQL in 10 minutes and it’s reproducible.

(Don’t forget that for the index domain groups I’ve got a SQL Agent job, which runs every night, that grants the rights to those groups.)

If I need to add another set of domain groups into the script, I can simply copy and paste an existing one and make the few needed changes.

Denny


July 2, 2012  2:00 PM

Giving users the ability to create indexes

Denny Cherry

So I ran across a pretty interesting requirement when setting up a new reporting database server.  The end users who run queries against the databases on this server are pretty good at performance tuning their own queries, so they wanted the ability to create indexes on the tables on the reporting server.  I figured no problem, I’d just grant them the ALTER ANY INDEX right and be done with it.  The only problem is that there isn’t any such right.

There’s also no right at the table level which can be granted that gives the user the ability to create indexes without giving them other rights on the object.

Some Background

So I started doing a little bit of hacking around the SQL Server and I came up with a solution that’ll work.

First, a little background on how security is set up on this SQL Server.  All the rights on this SQL Server are controlled via Active Directory domain groups.  There are four rights which we are concerned with: being a member of the db_owner fixed database role, being able to query any table or view in the database, being able to create stored procedures, and being able to create indexes on any table in the database.  To handle this there are 4 domain groups for each database on the reporting server, named {ServerName}-{Privilege}-{DatabaseName}; for example, there is a domain group named “EDW1_CREATE_PROC-optin” which grants the members of the domain group the CREATE PROCEDURE right within the optin database on the SQL Server EDW1.

Like the CREATE PROCEDURE domain group, I’ve also got groups named CREATE_INDEX, CREATE_VIEW, etc. to grant all the needed rights.

The Workaround

To work around there not being a right to create any index you want, I granted the CREATE_INDEX group the right to ALTER any table.  This allows the user to change the table, create indexes, and drop the table.  In case new tables are added to the system, I set up a job which simply loops through all the databases and grants the ALTER right on each table to the domain group that maps to that specific database, using the T-SQL code below. The job is configured to run nightly, and the SLA with the business users is that when new tables are added they simply won’t be able to create indexes on the new tables until the next day. As this data is simply being replicated over from production, there shouldn’t be many new tables being added very often.

-- Nightly job: grant ALTER on every user table to that database's CREATE_INDEX domain group (if the group exists)
EXEC sp_MSforeachdb '
USE [?]
declare @schema sysname, @table sysname, @database sysname
declare tables CURSOR for select name, schema_name(schema_id) from sys.tables WHERE is_ms_shipped = 0
declare @cmd nvarchar(4000)
open tables
fetch next from tables into @table, @schema
while @@fetch_status = 0
begin
set @cmd = ''GRANT ALTER ON ['' + @schema + ''].['' + @table + ''] TO [DomainName\EDW1_CREATE_INDEX_'' + db_name()  + '']''
print @cmd
if exists (select * from sys.database_principals WHERE name = ''DomainName\EDW1_CREATE_INDEX_'' + db_name())
exec (@cmd)
fetch next from tables into @table, @schema
end
close tables
deallocate tables'

You’ll notice within the cursor above that I’m also checking whether the domain group exists, so that if it hasn’t been created for some reason the script won’t throw an error message and stop.
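For a hypothetical table dbo.FactSales in a hypothetical database named rptSales, the job prints (and, if the group exists, executes) a command like this:

GRANT ALTER ON [dbo].[FactSales] TO [DomainName\EDW1_CREATE_INDEX_rptSales]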

Now, as the users are being granted the ALTER right on each table, this gives them way more rights than they actually need on the table.  To resolve this I created a DDL trigger for the server which looks to see if the user is a member of the db_owner fixed database role; if they aren’t, it rolls back the command and tells them to call IT, as shown in the T-SQL below.


CREATE TRIGGER PreventSchemaChange
ON ALL SERVER
FOR ALTER_TABLE, DROP_TABLE
AS

SET NOCOUNT ON

create table #groups (account sysname, type sysname, privilege sysname, mapped sysname, path sysname)

declare @login sysname = suser_sname(), @database sysname, @event_data XML = EVENTDATA()

SET @database = @event_data.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'sysname')

insert into #groups
exec xp_logininfo @login, 'all'

-- Roll back unless the login maps to this database's dbowner domain group, or the login is a sysadmin
if not exists (select * from #groups where [path] like '%' + @database + '%dbowner') and IS_SRVROLEMEMBER('sysadmin', suser_sname()) = 0
begin
rollback
raiserror('Access to changing or dropping objects is denied.  Please contact IT to make this change.', 16, 1)

end

GO
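With the trigger in place, a user who isn’t in the database’s dbowner domain group (and isn’t a sysadmin) sees something like the following when trying to change a table; the table name here is hypothetical and the exact message numbers may vary:

DROP TABLE dbo.FactSales
-- Msg 50000, Level 16, State 1
-- Access to changing or dropping objects is denied.  Please contact IT to make this change.
-- Msg 3609: The transaction ended in the trigger. The batch has been aborted.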

You’ll notice that I had to do some hacking around within that DDL trigger to make this work.  That’s because DDL triggers that are server scoped (which mine is) always fire in the master database.  Because of this I couldn’t use the normal IS_ROLEMEMBER function that I wanted to use.  The quickest solution that I could come up with was to get the database name from the EVENTDATA() function and use that in combination with the xp_logininfo system stored procedure to see if the user is a member of the correct domain group.  This works because of the domain group naming convention that I set up earlier.

Now, I’ve put in a Connect item to make all of this easier to deal with, as I can’t be the only one who has this sort of requirement.  Hopefully my request will get some traction, as I wouldn’t think that this would be all that hard to implement.  Granted, even if they do add the feature I’m asking for, it probably won’t be until the next major release, and it probably won’t be back ported into SQL Server 2012 or SQL Server 2008 R2 (this system is SQL Server 2008 R2), but hopefully we can get it in a future release so complex hacks like this aren’t required.

Denny


June 28, 2012  2:00 PM

What do I do with Standard Edition SQL Servers using Mirroring

Denny Cherry

With the release of SQL Server 2012, many of the people who are using Standard Edition and database mirroring are quite nervous about what their HA options are going to be moving forward, and rightfully so, as Microsoft has been pretty silent on this specific topic.  Let me see if I can’t calm your fears a little bit.

The first thing to do is not panic.  Microsoft’s policy is that features stick around for 2-3 releases after the deprecation announcement.  Database mirroring was announced as deprecated in SQL Server 2012, which means the feature will be around for the next two releases.  That’s probably 6 more years of support (2 years for SQL 2012 and 2 years for each of the next two releases, though this all depends on how long each release takes to come out) for database mirroring before the feature is removed from the product.

Now, no one knows what feature changes are coming in the next release of SQL Server, much less the release after that.  But Microsoft is collecting feedback about the minimal HA story in Standard Edition so that they know how their customers are using the software and what they need to support.  If you have an Enterprise Agreement you should have access to a Technical Account Manager (TAM).  Meet with your TAM and explain to them how you are using database mirroring and how you’ll be needing some sort of replacement for it moving forward past SQL Server 2012.  Your TAM will then be able to pass your information up to the SQL Server team so that all the information gets to the right place.  If you don’t have an Enterprise Agreement, or if you can’t find your Technical Account Manager for some reason, I’ve set up a site for you to send the info to me, and I’ll forward it on to the right people.

Now is certainly not the time to be making hasty decisions about your SQL Server mirroring upgrade path.  Microsoft is usually pretty good at making the right decision when they have been provided with the right information.  So I’d recommend that you take a moment and talk to your TAM, or fill out the form, and get your information about how you are using database mirroring sent in.

So as the sign says, “Keep Calm and Carry On”.  It’s going to be a few years before any of us get to the point of actually needing to make decisions about our mirrored databases.

Denny


June 25, 2012  2:00 PM

Moving huge amounts of data between data centers

Denny Cherry

Every once in a while I get the fun project of moving a data center from one location to another, often from a Managed Service Provider to either a CoLo facility or to another Managed Service Provider.  When I’m doing this, the data sizes typically aren’t small.  In the case of my current project, one day’s worth of compressed database backups runs 581 Gigs, which needs to be transferred from a Managed Service Provider in Dallas to a CoLo in New Jersey with minimal cost and time spent.  When I’m doing these sorts of massive data migrations there are basically three different ways they can be done.

  1. Copy the backups to a USB drive and ship it – aka. Sneaker Net
  2. Upload the files across the VPN using RoboCopy
  3. FTP the files up to the new site

While I love using #1, as it’s the easiest and usually the quickest way to move huge amounts of information, it’s the most expensive and it isn’t very repeatable.  If there’s a problem with the data transfer, or you need to redo it, then you need to purchase another USB drive from the first provider and have them ship it up to the new site.  As this all needs to be done pretty quickly, that means that every time you do it you pay for overnight shipping, which gets expensive, fast.  Not to mention that either you need to be at the destination site or you have to pay for remote hands to open the box and connect the hard drive.  In this case that means paying a few hundred bucks to have a guy at the remote site unbox the drive and connect it, as the data center is in New Jersey and I’m in California, a short 5.5 hour plane flight away.

Option #2 is a decent option as well, except that it’s only single threaded unless you do some really interesting stuff to spin up multiple copies of RoboCopy.  The reason that you want multiple threads running is that most Managed Service Providers have some sort of Quality of Service settings configured on their routers so that one connection isn’t able to take all the bandwidth available.  In this case each connection is limited to about 500kb per second, so if I run several threads I get more total throughput than if I run just a single thread.

Which leads me to option #3.  As I’m moving lots of database files it’s easy enough for me to do multi-threaded FTP, as I can send each file separately, getting better aggregate bandwidth (currently I’m pushing 1,662.90 kb per second).  I do this not with the native command line FTP or with the web browser, but by using a little FTP application, long abandoned by its developer, called LeechFTP.  While it hasn’t been upgraded in years, and some of the error messages aren’t in English, it’s a great app for moving massive amounts of data in a multi-threaded process.

Now, because FTP totally sucks when it comes to resuming from a failed upload, I add an additional step to the process: I take whatever data I’m planning on transferring and use either WinRAR or 7-Zip to break the files into smaller chunks, as sketched below.  Typically I’ll just take the entire folder that has all the backups and make one massive upload set out of it.  I usually break the files into 100 Meg segments, as those will usually upload without any sort of interruption, and if there is a problem, re-uploading 100 Megs worth of data usually won’t take all that long.  I don’t bother to compress the data; I just put it into a single large rar or 7z fileset.  The reason that I don’t bother trying to compress the data is that it would take hours to compress, and the time saved is usually pretty small if any (especially as these backups are already compressed).  Both WinRAR and 7-Zip have store-only options which usually run pretty quickly.  The example 581 Gigs of data that I’m working with here was able to be stored by 7-Zip in about an hour and a half.
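As a sketch, the 7-Zip command line for building that store-only, 100 Meg split set looks something like this (the archive name and backup folder are hypothetical):

REM a = add to archive, -mx0 = store only (no compression), -v100m = split into 100 Meg volumes
7z a -mx0 -v100m FullBackups.7z D:\Backups\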

I’ve begun using 7-Zip instead of WinRAR for these projects, as I’ve found something very annoying about WinRAR when using it to extract files onto a network share (like, say, the shared network drive that the SQL Server is going to restore from).  When WinRAR decompresses all the data it wants to stage it in the local %temp% folder, which ends up filling the C drive of whatever server you are doing the work on, while 7-Zip doesn’t have this annoying “feature”.

Once the full backups are copied up (~4 days in this case) I just unpack them and get the restore process started (keep in mind that I’ve got log backups being copied across the same network link as well).  I’ll talk about how I get those across later on.

Denny


June 19, 2012  2:00 PM

I’ll be at SQL PASS 2012, will you?

Denny Cherry

It appears that I’ve been selected to give a couple of presentations at the premier Microsoft SQL Server conference again this year.  At the 2012 SQL PASS Summit in Seattle, WA I’ll be presenting a regular session titled “Where Should I Be Encrypting My Data?”, where I’ll be talking about all the various places within the application stack where data can be encrypted.  We will also be looking at some of the more spectacular data breaches to talk about where things appear to have gone wrong.

The second session that I’ll be presenting will be an all day pre-con on high availability called “SQL Server 2012 in a Highly Available World“.  During this full day session we will be looking at the legacy high availability solutions such as database mirroring, log shipping and SQL Server clustering, but we will also be looking at the new AlwaysOn Availability Groups to see when you should really be looking at each one of these solutions, and when you should be combining some of them to get even higher (and more complex) availability options.  While this high availability session does come with an extra cost, it’ll be worth every penny.

I hope to see you at the SQL PASS summit.

Denny

P.S. Don’t forget to check out my SQL PASS First Timers webcast on October 17th, 2012.


June 18, 2012  2:00 PM

The process could not execute ‘sp_replcmds’ on …

Denny Cherry

When setting up SQL Server replication you might see an error message from the Transactional Replication Log Reader Agent which reads like the following.

Error messages:

  • The process could not execute ‘sp_replcmds’ on ”. (Source: MSSQL_REPL, Error number: MSSQL_REPL20011)
    Get help: http://help/MSSQL_REPL20011
  • Cannot execute as the database principal because the principal “dbo” does not exist, this type of principal cannot be impersonated, or you do not have permission. (Source: MSSQLServer, Error number: 15517)
    Get help: http://help/15517
  • The process could not execute ‘sp_replcmds’ on ”. (Source: MSSQL_REPL, Error number: MSSQL_REPL22037)
    Get help: http://help/MSSQL_REPL22037

Often this error message comes from the database not having a valid owner, or from SQL Server not being able to correctly identify the owner of the database. This is usually easiest to fix by changing the database owner using the sp_changedbowner system stored procedure, as shown below. The sa account is a reliable account to use when changing the ownership of the database.

USE PublishedDatabase
GO
EXEC sp_changedbowner 'sa'
GO
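If you want to confirm the diagnosis first, a quick check (a sketch, not from the original post) is to look for databases whose owner SID no longer maps to a login; SUSER_SNAME returns NULL for an invalid owner:

-- List databases whose owner SID doesn't resolve to a valid login
SELECT name, SUSER_SNAME(owner_sid) AS owner_name
FROM sys.databases
WHERE SUSER_SNAME(owner_sid) IS NULL
GO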

Once the database ownership has been changed, the log reader will probably start working right away. If it doesn’t, restarting the log reader should resolve the problem.

While this does require a change to the production database, there is no outage required to make it.

Denny


June 14, 2012  9:00 PM

When does SQLCLR launch?

Denny Cherry

A change was made to when the SQLCLR is loaded.  In SQL Server 2008 R2 and below, the SQLCLR component was loaded only when the “clr enabled” setting, which is controlled through sp_configure, was enabled.  Even with the SQLCLR enabled on SQL Server 2008 R2 and below, the SQLCLR is only loaded into SQL Server’s memory when it is needed.

In SQL Server 2012, however, the SQLCLR is loaded on startup whether the “clr enabled” setting is enabled or not.  This can be seen in the SQL Server ERRORLOG file by looking for a line similar to the following.

Common language runtime (CLR) functionality initialized using CLR version v4.0.30319 from C:\Windows\Microsoft.NET\Framework64\v4.0.30319\.

There are a couple of reasons that Microsoft wants the SQLCLR to be loaded automatically when the SQL Server starts instead of waiting for permission to load it.

The first is that there are plenty of native features in SQL Server 2012 which require the use of the SQLCLR, including features like the geometry data type, the hierarchyid data type, etc.

The second is that loading the SQLCLR when the SQL Server instance first starts decreases the time that SQL Server must wait the first time the SQLCLR is used. The SQLCLR is a rather large package which must be loaded, and loading it the first time it is used can take up to several seconds.

Just because the SQLCLR is loaded into memory when the SQL Server instance first starts doesn’t mean that the sp_configure setting “clr enabled” is useless.  With the setting disabled, the SQL Server instance will not allow the creation of CLR objects.  Once the “clr enabled” setting is enabled the CLR doesn’t need to be loaded again, but the SQL Server instance will now allow user created assemblies to be created within the instance.
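For reference, here is the usual sp_configure sequence for turning the setting on, plus a quick way to see that the CLR is already loaded; this is a minimal sketch rather than anything from the original post:

EXEC sp_configure 'clr enabled', 1
GO
RECONFIGURE
GO
-- Shows the state and version of the loaded CLR, even while "clr enabled" is off
SELECT name, value FROM sys.dm_clr_properties
GO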

Denny

