SQL Server with Mr. Denny


July 5, 2012  2:00 PM

Prepping the security for a data warehouse

Denny Cherry

So in an earlier blog post I talked about how I had to grant some users the ability to create indexes on a reporting server.  A couple of people have asked me how I created the domain groups and ensured that the permissions were set up correctly.  In this case I did it with two SQL scripts.  The first generates command line commands which can then be run on a domain controller to create the domain groups that the users will be put into.  The script simply compares the groups which already exist as logins against the list of databases on the SQL Server.  In this case I’m filtering the databases down by prefix as I only want to deal with databases that start with rpt or secure.  Five different domain groups are created for each database.  I take the output of this first query, run it from a command line window on an Active Directory domain controller, and the domain groups are created.  Once they are created I simply drop them into the correct folder in AD and the help desk can start dropping users into the groups.


select 'net group EDW1-' + name + ' /DOMAIN /ADD'
from sys.databases
where (name like 'rpt%'
or name like 'secure%')
and 'MyDomain\EDW1-' + name not in (select name from sys.server_principals)
UNION ALL
select 'net group EDW1_SHOWPLAN_' + name + ' /DOMAIN /ADD'
from sys.databases
where (name like 'rpt%'
or name like 'secure%')
and 'MyDomain\EDW1_SHOWPLAN_' + name not in (select name from sys.server_principals)
UNION ALL
select 'net group EDW1_CREATE_INDEX_' + name + ' /DOMAIN /ADD'
from sys.databases
where (name like 'rpt%'
or name like 'secure%')
and 'MyDomain\EDW1_CREATE_INDEX_' + name not in (select name from sys.server_principals)
UNION ALL
select 'net group EDW1_CREATE_VIEW_' + name + ' /DOMAIN /ADD'
from sys.databases
where (name like 'rpt%'
or name like 'secure%')
and 'MyDomain\EDW1_CREATE_VIEW_' + name not in (select name from sys.server_principals)
UNION ALL
select 'net group EDW1_CREATE_PROC-' + name + ' /DOMAIN /ADD'
from sys.databases
where (name like 'rpt%'
or name like 'secure%')
and 'MyDomain\EDW1_CREATE_PROC-' + name not in (select name from sys.server_principals)
GO
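
For a hypothetical database named rptSales, the query produces five commands, one per group, ready to run on the domain controller:

net group EDW1-rptSales /DOMAIN /ADD
net group EDW1_SHOWPLAN_rptSales /DOMAIN /ADD
net group EDW1_CREATE_INDEX_rptSales /DOMAIN /ADD
net group EDW1_CREATE_VIEW_rptSales /DOMAIN /ADD
net group EDW1_CREATE_PROC-rptSales /DOMAIN /ADD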

The second script that I’ve got creates the logins for each of these domain groups, again by simply looking at the ones that don’t exist in sys.server_principals.  This script generates T-SQL code which I then copy and paste into another query window and execute.  The only thing that I have to remember with this script is that there are line breaks in the output, so I need to make sure that I run it with the results going to text rather than the default grid.


select 'use master
GO
CREATE LOGIN [MyDomain\EDW1-' + name + '] FROM WINDOWS
GO
use ' + name + '
GO
CREATE USER [MyDomain\EDW1-' + name + '] FROM LOGIN [MyDomain\EDW1-' + name + ']
GO
EXEC sp_addrolemember @rolename=''db_datareader'', @membername=''MyDomain\EDW1-' + name + '''
GO'
from sys.databases
where (name like 'rpt%'
or name like 'secure%')
and 'MyDomain\EDW1-' + name not in (select name from sys.server_principals)
union all
select 'use master
GO
CREATE LOGIN [MyDomain\EDW1_CREATE_INDEX_' + name + '] FROM WINDOWS
GO
use ' + name + '
GO
CREATE USER [MyDomain\EDW1_CREATE_INDEX_' + name + '] FROM LOGIN [MyDomain\EDW1_CREATE_INDEX_' + name + ']
GO'
from sys.databases
where (name like 'rpt%'
or name like 'secure%')
and 'MyDomain\EDW1_CREATE_INDEX_' + name not in (select name from sys.server_principals)
UNION ALL
select 'use master
GO
CREATE LOGIN [MyDomain\EDW1_CREATE_PROC-' + name + '] FROM WINDOWS
GO
use ' + name + '
GO
CREATE USER [MyDomain\EDW1_CREATE_PROC-' + name + '] FROM LOGIN [MyDomain\EDW1_CREATE_PROC-' + name + ']
GO
GRANT CREATE PROCEDURE TO [MyDomain\EDW1_CREATE_PROC-' + name + ']
GO'
from sys.databases
where (name like 'rpt%'
or name like 'secure%')
and 'MyDomain\EDW1_CREATE_PROC-' + name not in (select name from sys.server_principals)
union all
select 'use master
GO
CREATE LOGIN [MyDomain\EDW1_CREATE_VIEW_' + name + '] FROM WINDOWS
GO
use ' + name + '
GO
CREATE USER [MyDomain\EDW1_CREATE_VIEW_' + name + '] FROM LOGIN [MyDomain\EDW1_CREATE_VIEW_' + name + ']
GO
GRANT CREATE VIEW TO [MyDomain\EDW1_CREATE_VIEW_' + name + ']
GO'
from sys.databases
where (name like 'rpt%'
or name like 'secure%')
and 'MyDomain\EDW1_CREATE_VIEW_' + name not in (select name from sys.server_principals)
union all
select 'use master
GO
CREATE LOGIN [MyDomain\EDW1_SHOWPLAN_' + name + '] FROM WINDOWS
GO
use ' + name + '
GO
CREATE USER [MyDomain\EDW1_SHOWPLAN_' + name + '] FROM LOGIN [MyDomain\EDW1_SHOWPLAN_' + name + ']
GO
GRANT SHOWPLAN TO  [MyDomain\EDW1_SHOWPLAN_' + name + ']
GO'
from sys.databases
where (name like 'rpt%'
or name like 'secure%')
and 'MyDomain\EDW1_SHOWPLAN_' + name not in (select name from sys.server_principals)

The great thing about these scripts is that they produce a consistent set of domain groups for each new database that is added to the reporting server, so the helpdesk can quickly and easily figure out which groups users need to be placed into.  Could I have done this in PowerShell? Yeah, I’m sure that I could have.  But I don’t know the PowerShell cmdlets that would have been needed to create the domain accounts, so honestly I didn’t bother even trying.  I knew the SQL commands to do this off the top of my head.  While someone else probably could have written this in PowerShell in 10 minutes, I did it in T-SQL in 10 minutes and it’s reproducible.

(Don’t forget that for the index domain groups I’ve got a SQL Agent job, which runs every night, that grants the rights to those groups.)

If I need to add another set of domain groups into the script, I can simply copy and paste an existing one and make the few needed changes.

Denny

July 2, 2012  2:00 PM

Giving users the ability to create indexes

Denny Cherry

So I ran across a pretty interesting requirement when setting up a new reporting database server.  The end users who run queries against the databases on this server are pretty good at performance tuning their own queries, so they wanted the ability to create indexes on the tables on the reporting server.  I figured no problem, I’d just grant them the ALTER ANY INDEX right and be done with it.  The only problem is that there is no such right.

There’s also no right at the table level which can be granted that will give the user the ability to create indexes without giving them other rights on the object.

Some Background

So I started doing a little bit of hacking around the SQL Server and I came up with a solution that’ll work.

First, a little background on how security is set up on this SQL Server.  All the rights on this SQL Server are controlled via Active Directory domain groups.  There are four rights which we are concerned with: being a member of the db_owner fixed database role, being able to query any table or view in the database, being able to create stored procedures, and being able to create indexes on any table in the database.  To handle this there are four domain groups for each database on the reporting server, named {ServerName}-{Privilege}-{DatabaseName}.  So for example there is a domain group named “EDW1_CREATE_PROC-optin” which grants the members of the domain group the CREATE PROCEDURE right within the optin database on the SQL Server EDW1.

Like the create procedure domain group, I’ve also got groups named CREATE_INDEX, CREATE_VIEW, etc. to grant all the needed rights.

The Workaround

To work around there not being a right to create any index you want, I granted the CREATE_INDEX group the ALTER right on every table.  This allows the user to change the table, create indexes, and drop the table.  In case new tables are added to the system, I set up a job which simply loops through all the databases and grants the ALTER right on each table to the domain group that maps to that specific database, using the T-SQL code below. The job is configured to run nightly, and the SLA with the business users is that when new tables are added they simply won’t be able to create indexes on the new table until the next day. As this data is simply being replicated over from production, there shouldn’t be many new tables being added very often.

EXEC sp_MSforeachdb '
USE [?]
declare @schema sysname, @table sysname
declare @cmd nvarchar(4000)
-- Walk every user table in the database
declare tables CURSOR for select name, schema_name(schema_id) from sys.tables WHERE is_ms_shipped = 0
open tables
fetch next from tables into @table, @schema
while @@fetch_status = 0
begin
set @cmd = ''GRANT ALTER ON ['' + @schema + ''].['' + @table + ''] TO [DomainName\EDW1_CREATE_INDEX_'' + db_name() + '']''
print @cmd
-- Only execute the GRANT if the domain group actually exists in this database
if exists (select * from sys.database_principals WHERE name = ''DomainName\EDW1_CREATE_INDEX_'' + db_name())
exec (@cmd)
fetch next from tables into @table, @schema
end
close tables
deallocate tables'
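
For a hypothetical table dbo.FactSales in a database named rptSales, the command printed (and, if the group exists, executed) looks like this:

GRANT ALTER ON [dbo].[FactSales] TO [DomainName\EDW1_CREATE_INDEX_rptSales]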

You’ll notice within the cursor above that I’m also checking whether the domain group exists, so that if it hasn’t been created for some reason the script won’t throw an error message and stop.

Now, as the users are being granted the ALTER right on each table, this gives them way more rights than they actually need on the table.  To resolve this I created a server-scoped DDL trigger which looks to see if the user is a member of the db_owner fixed database role, and if they aren’t it rolls back the command and tells them to call IT, as shown in the T-SQL below.


CREATE TRIGGER PreventSchemaChange
ON ALL SERVER
FOR ALTER_TABLE, DROP_TABLE
AS

SET NOCOUNT ON

-- Holds the output of xp_logininfo: one row per group membership / permission path for the login
create table #groups (account sysname, type sysname, privilege sysname, mapped sysname, path sysname)

declare @login sysname = suser_sname(), @database sysname, @event_data XML = EVENTDATA()

-- Server scoped DDL triggers fire in master, so get the database name from the event data
SET @database = @event_data.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'sysname')

insert into #groups
exec xp_logininfo @login, 'all'

-- Roll back unless the login is in the dbowner domain group for this database or is a sysadmin
if not exists (select * from #groups where [path] like '%' + @database + '%dbowner') and IS_SRVROLEMEMBER('sysadmin', @login) = 0
begin
rollback
raiserror('Access to changing or dropping objects is denied.  Please contact IT to make this change.', 16, 1)
end

GO

You’ll notice that I had to do some hacking around within that DDL trigger to make this work.  That’s because DDL triggers that are server scoped (which mine is) always fire in the master database.  Because of this I couldn’t use the normal IS_ROLEMEMBER function that I wanted to use.  The quickest solution that I could come up with was to get the database name from the EVENTDATA() function and use that in combination with the xp_logininfo system stored procedure to see if the user is a member of the correct domain group.  This works because of the domain group naming convention that I set up earlier.

Now I’ve put in a Connect item to make all of this easier to deal with, as I can’t be the only one who has this sort of requirement.  Hopefully my request will get some traction, as I wouldn’t think that this would be all that hard to implement.  Granted, even if they do add the feature I’m asking for, it probably won’t show up until the next major release, and probably won’t be back ported into SQL Server 2012 or SQL Server 2008 R2 (this system is SQL Server 2008 R2), but hopefully we can get it into a future release so complex hacks like this aren’t required.

Denny


June 28, 2012  2:00 PM

What do I do with Standard Edition SQL Servers using Mirroring

Denny Cherry

With the release of SQL Server 2012, many of the people who are using Standard Edition and Database Mirroring are quite nervous about what their HA options are going to be moving forward, and rightfully so, as Microsoft has been pretty silent on this specific topic.  Let me see if I can’t calm your fears a little bit.

The first thing to do is not panic.  Microsoft’s policy is that deprecated features stick around for two to three releases after the announcement.  Database Mirroring was announced as deprecated in SQL Server 2012, which means that the feature will be around in the next two releases.  That’s probably six more years of support (two years for SQL 2012 and two years for each of the next two releases, though this all depends on how long it takes for each release to come out) for database mirroring before the feature is removed from the product.

Now no one knows what feature changes are going to be coming in the next release of SQL Server, much less the release after that.  But Microsoft is collecting feedback about the minimal HA story in Standard Edition so that they know how their customers are using the software and what they need to support.  If you have an Enterprise Agreement you should have access to a Technical Account Manager (TAM).  Meet with your Technical Account Manager and explain to them how you are using Database Mirroring and how you’ll be needing some sort of replacement for it moving forward past SQL Server 2012.  Your TAM will then be able to pass your information up to the SQL Server team so that all the information gets to the right place.  Now if you don’t have an Enterprise Agreement, or if you can’t find your Technical Account Manager for some reason, I’ve set up a site for you to send the info to me, and I’ll forward it on to the right people.
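
If you need a quick inventory of which databases are actually mirrored before that conversation, a simple query against the sys.database_mirroring catalog view will list them:

SELECT db_name(database_id) AS database_name,
mirroring_role_desc,
mirroring_state_desc
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL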

Now is certainly not the time to be making hasty decisions about your SQL Server Mirroring upgrade paths.  Microsoft is usually pretty good at making the right decision when they have been provided with the right information.  So I’d recommend that you take a moment and talk to your TAM, or fill out the form, and get your information about how you are using database mirroring sent in.

So as the sign says, “Keep Calm and Carry On”.  It’s going to be a few years before any of us get to the point of actually needing to make decisions about our mirrored databases.

Denny


June 25, 2012  2:00 PM

Moving huge amounts of data between data centers

Denny Cherry

Every once in a while I get the fun project of moving a data center from one location to another, often from a Managed Service Provider to either a CoLo facility or to another Managed Service Provider.  When I’m doing this the data sizes typically aren’t small.  In the case of my current project, one day’s worth of compressed database backups runs 581 Gigs, which needs to be transferred from a Managed Service Provider in Dallas to a CoLo in New Jersey with minimal cost and time spent.  When I’m doing these sorts of massive data migrations there are basically three different ways it can be done.

  1. Copy the backups to a USB drive and ship it – aka. Sneaker Net
  2. Upload the files across the VPN using RoboCopy
  3. FTP the files up to the new site

While I love using #1, as it’s the easiest and usually the quickest way to move huge amounts of information, it’s also the most expensive, and it isn’t very repeatable.  If there’s a problem with the data transfer, or you need to redo it, then you need to purchase another USB drive from the first provider and have them ship it up to the new site.  As this all needs to be done pretty quickly, that means that every time you do it you need to pay for overnight shipping, which gets expensive, fast.  Not to mention that either you need to be at the destination site or you have to pay for remote hands to open the box and connect the hard drive.  In this case that means paying a few hundred bucks to have a guy at the remote site unbox the drive and connect it, as the data center is in New Jersey and I’m in California, a short 5.5 hour plane flight away.

Option #2 that I give here is a decent option as well, except that it’s only single threaded unless you do some really interesting stuff to spin up multiple copies of RoboCopy.  The reason that you want multiple threads running is that most Managed Service Providers have some sort of Quality of Service settings configured on their routers so that one connection isn’t able to take all the bandwidth available.  In this case each connection is limited to about 500 KB of bandwidth, so if I run several threads I get more throughput than if I run just a single thread.
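
If you’re on Windows 7 / Server 2008 R2 or newer, RoboCopy’s /MT switch will spin up the multiple threads for you without any of that interesting stuff; a minimal sketch, with hypothetical source and destination paths:

robocopy D:\Backups \\NewSite\Backups /E /MT:16 /R:5 /W:30

/E picks up subfolders, /MT:16 runs 16 copy threads in parallel, and /R and /W keep a flaky link from failing the whole job.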

Which leads me to option #3.  As I’m moving lots of database files it’s easy enough for me to do multi-threaded FTP, as I can send each file separately, getting better bandwidth (currently I’m pushing 1,662.90 KB per second).  I do this not with the native command line FTP or with the web browser, but by using a little FTP application, long abandoned by its developer, called LeechFTP.  While it hasn’t been updated in years, and some of the error messages aren’t in English, it’s a great app for moving massive amounts of data in a multi-threaded process.

Now because FTP totally sucks when it comes to resuming from a failed upload, I add an additional step to the process: I take whatever data I’m planning on transferring and use either WinRAR or 7-Zip to break the files into smaller chunks.  Typically I’ll just take the entire folder that has all the backups and make one massive upload set out of it.  I usually break the files into 100 Meg segments, as those will usually upload without any sort of interruption, and if there is a problem, re-uploading 100 Megs worth of data usually won’t take all that long.  Now I don’t bother to compress the data, I just put it into a single large rar or 7z fileset.  The reason that I don’t bother trying to compress the data is that it’ll take hours to compress and the time saved is usually pretty small if any (especially as these backups are already compressed).  Both WinRAR and 7-Zip have store-only options which usually run pretty quickly.  The example 581 Gigs of data that I’m working with here was able to be stored by 7-Zip in about an hour and a half.
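
With 7-Zip’s command line tool that’s a one-liner; -mx=0 is the store-only option and -v100m sets the 100 Meg volume size (the paths here are hypothetical):

7z a -mx=0 -v100m FullBackups.7z D:\Backups\

At the destination you point 7-Zip at the first volume and it unpacks the whole set:

7z x FullBackups.7z.001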

I’ve begun using 7-Zip instead of WinRAR for these projects as I’ve found something very annoying about WinRAR when using it to copy files up to a network share (like, say, the shared network drive that the SQL Server is going to restore from).  When WinRAR decompresses all the data it wants to put it into the local %temp% folder, which ends up filling the C drive of whatever server you are doing the work on, while 7-Zip doesn’t have this annoying “feature”.

Once the full backups are copied up (~4 days in this case) I just unpack them and get the restore process started.  (Keep in mind that I’ve got log backups being copied across the same network link as well.  I’ll talk about how I get them across later on.)

Denny


June 19, 2012  2:00 PM

I’ll be at SQL PASS 2012, will you?

Denny Cherry

It appears that I’ve been selected to give a couple of presentations at the premier Microsoft SQL Server conference again this year.  At the 2012 SQL PASS Summit in Seattle, WA I’ll be presenting a regular session titled “Where Should I Be Encrypting My Data?” where I’ll be talking about all the various places within the application stack that data can be encrypted.  We will also be looking at some of the more spectacular data breaches to talk about where things appeared to have gone wrong.

The second session that I’ll be presenting will be an all day pre-con on high availability called “SQL Server 2012 in a Highly Available World”.  During this full day session we will be looking at the legacy high availability solutions such as Database Mirroring, Log Shipping and SQL Server Clustering, but we will also be looking at the new AlwaysOn Availability Groups to see when you should really be looking at each one of these solutions, and when you should be combining some of them to get even higher (and more complex) availability options.  While this high availability session does come with an extra cost, it’ll be worth every penny.

I hope to see you at the SQL PASS summit.

Denny

P.S. Don’t forget to check out my SQL PASS First Timers webcast on October 17th, 2012.


June 18, 2012  2:00 PM

The process could not execute ‘sp_replcmds’ on …

Denny Cherry

When setting up SQL Server replication you might see an error message from the Transactional Replication Log Reader Agent which reads like the following.

Error messages:

  • The process could not execute ‘sp_replcmds’ on ”. (Source: MSSQL_REPL, Error number: MSSQL_REPL20011)
    Get help: http://help/MSSQL_REPL20011
  • Cannot execute as the database principal because the principal “dbo” does not exist, this type of principal cannot be impersonated, or you do not have permission. (Source: MSSQLServer, Error number: 15517)
    Get help: http://help/15517
  • The process could not execute ‘sp_replcmds’ on ”. (Source: MSSQL_REPL, Error number: MSSQL_REPL22037)
    Get help: http://help/MSSQL_REPL22037

Often this error message comes from the database not having a valid owner, or from SQL Server not being able to correctly identify the owner of the database. The easiest fix is usually to change the database owner using the sp_changedbowner system stored procedure, as shown below. The sa account is a reliable account to change the ownership of the database to.

USE PublishedDatabase
GO
EXEC sp_changedbowner 'sa'
GO
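
If you prefer it, the newer ALTER AUTHORIZATION syntax does the same thing (Microsoft lists sp_changedbowner as deprecated in favor of it):

ALTER AUTHORIZATION ON DATABASE::PublishedDatabase TO sa
GO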

Once the database ownership has been changed the log reader will probably start working right away. If it doesn’t, restarting the log reader should quickly resolve the problem.

While this does require changes to the production database, there is no outage required to make these changes.

Denny


June 14, 2012  9:00 PM

When does SQLCLR launch?

Denny Cherry

A change was made in SQL Server 2012 as to when the SQLCLR is loaded.  In SQL Server 2008 R2 and below, the SQLCLR component was loaded only when the “clr enabled” setting, which is controlled through sp_configure, was enabled.  Even with SQLCLR enabled on SQL Server 2008 R2 and below, the SQLCLR is only loaded into SQL Server’s memory when it is needed.

In SQL Server 2012, however, SQLCLR is loaded on startup regardless of whether the “clr enabled” setting is enabled or disabled.  This can be seen in the SQL Server ERRORLOG file by looking for a line similar to the following.

Common language runtime (CLR) functionality initialized using CLR version v4.0.30319 from C:\Windows\Microsoft.NET\Framework64\v4.0.30319\.

There are a couple of reasons that Microsoft wants the SQLCLR to be loaded automatically when the SQL Server starts instead of waiting for permission to load it.

The first is that there are plenty of native features which require the use of the SQLCLR in SQL Server 2012.  This includes features like the geometry data type, the hierarchyid data type, etc.

The second is that loading the SQLCLR when the SQL Server instance first starts decreases the time that SQL Server must wait the first time the SQLCLR is used. The SQLCLR is a rather large package, and loading it into memory on first use can take up to several seconds.

Just because the SQLCLR is loaded into memory when the SQL Server instance is first started doesn’t mean that the sp_configure setting “clr enabled” is useless.  With this setting disabled, the SQL Server instance will not allow the creation of CLR objects.  Once the “clr enabled” setting is enabled, the CLR doesn’t need to be loaded, but the SQL Server instance will now allow user created assemblies to be created within the instance.
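
Flipping that setting is the usual sp_configure two-step:

EXEC sp_configure 'clr enabled', 1
RECONFIGURE
GO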

Denny


June 11, 2012  9:00 PM

SQL Server 2012, SQLCLR and Max Server Memory

Denny Cherry

When SQLCLR was first introduced in SQL Server 2005, one of the big complaints that I had with it was that SQLCLR could technically steal all the memory away from the SQL Server’s buffer pool by simply growing its memory pool, which would cause the OS to request memory back from the SQL Server, taking memory away from the buffer pool and the execution plan cache.  Depending on the amount of memory that the SQLCLR needed, this could eventually result in the entire memory of the server being eaten up by the SQLCLR component of Microsoft SQL Server.

With the release of SQL Server 2012 this problem goes away to some extent, as SQL Server 2012 had many memory architecture related changes put into the product.  Among the variety of changes introduced in SQL Server 2012 is the fact that the SQLCLR feature now gets its memory via the traditional SQL Server memory manager.  One of the side effects of this is that the SQLCLR now falls under the control of the “max server memory” setting, which is accessed through the sp_configure system stored procedure.

While this doesn’t give you the ability to control how much memory the SQLCLR component specifically will take, it does mean that you can, via the “max server memory” setting, keep the SQLCLR processes from taking over all the memory within the Windows OS.

Because of this change in how memory is controlled for the SQLCLR component, when upgrading from a prior version of Microsoft SQL Server to SQL Server 2012 the value used for the “max server memory” setting will need to be rethought, as we no longer need to leave room outside the setting for the SQLCLR.  As an example, if a Windows Server with a single instance of Microsoft SQL Server 2008 had 64 Gigs of RAM installed and the instance was a heavy user of SQLCLR, it might have made sense to configure the SQL Server instance with 50 Gigs of memory as the max server memory, leaving about 4 Gigs for the Windows OS and 10 Gigs for the SQLCLR.  When this server is upgraded to SQL Server 2012 you would then see the memory usage for the Windows OS drop, leaving a large amount of memory unused, as the memory which was previously allocated outside of the “max server memory” setting will sit there unused by the SQLCLR.  As this memory is now within the “max server memory” setting, the buffer pool and procedure cache will be reduced by whatever memory the SQLCLR needs below the 50 Gig limit, potentially reducing the amount of memory by 10 Gigs or more.
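
Continuing that example, after the upgrade to SQL Server 2012 you might raise “max server memory” to roughly 60 Gigs (61440 MB), leaving just the OS headroom outside the setting; treat the exact number as an assumption to validate against whatever else runs on the server:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'max server memory (MB)', 61440
RECONFIGURE
GO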

The upside to this change in the way that the SQLCLR memory is managed is that if the SQLCLR only needs 1 Gig, where before it was being allocated 10 Gigs of memory (as discussed in the example above), it now only takes 1 Gig of memory.  On the down side, there is no knob to turn to limit the amount of memory which the SQLCLR can use at the high end.

Denny


June 4, 2012  3:55 PM

Storage and Virtualization Class Starting Tomorrow

Denny Cherry

SSWUG.ORG’s virtual class will provide database administrators (DBAs) with the skills to properly design storage and virtualization solutions.

Denny Cherry, course instructor, SQL Server MVP and Microsoft Certified Master, will explain why storage can be one of the largest bottlenecks when it comes to database performance.

He will also explain why it can be one of the hardest places to troubleshoot performance issues because storage engineers and database administrators often do not get along. Finally, he will dig into LUNs, HBAs, the fabric, as well as the storage itself.

In the second half of the course, Denny will review the pros and cons of moving SQL Servers into a virtual environment. He will also explain how to tie virtual platforms to the storage array, so that DBAs can maximize the storage performance for their SQL Servers and the virtual environment.

Sign up for the virtual class now and get ready to learn about storage and virtualization at your pace and on your schedule.

Denny


June 4, 2012  2:00 PM

VMware vCenter Performance Tuning

Denny Cherry

As hopefully everyone that is using VMware’s vSphere in their data center knows, VMware’s vCenter runs off of a Microsoft SQL Server database (by default).  Now, as good as the guys at VMware are at building a virtualization platform, their database leaves a little to be desired.  I’ve identified a couple of indexes which, when created against the VMware vSphere 5.0 database, will improve the health of the SQL Server database.

The first index is on the dbo.VPX_PROPERTY_BULLETIN table.  On the system that I happen to be looking at this week, not having this index caused the SQL Server to scan this table 6977 times since the database was last restarted, about 7 days before I began looking at the system.  This table on this SQL Server only contains about 3000 records, but this system is pretty small: just 4 hosts, 4 data stores and about 100 guests at the moment.  The larger this table is, the more impact not having this index will have.

CREATE INDEX mrdenny_OPERATION_TYPE ON dbo.VPX_PROPERTY_BULLETIN
([OPERATION_TYPE])
INCLUDE ([OBJECT_MOID], [GEN_NUMBER])
WITH (FILLFACTOR=70, ONLINE=OFF, PAD_INDEX=ON)

The nice thing about this index is that it also fills the requirements of another index which is needed by the SQL Server.

The second index to create is built on the same table, but on different columns.  While the query which needs this index runs MUCH less often, SQL Server estimates (on this system at least) that adding it will improve the query performance by 69.94%.  In my mind that’s very much worth it, even if the query is only being run a few times a week.

CREATE INDEX mrdenny_OBJECT_MOID ON dbo.VPX_PROPERTY_BULLETIN
([OBJECT_MOID])
INCLUDE ([GEN_NUMBER])
WITH (FILLFACTOR=50, ONLINE=OFF, PAD_INDEX=ON)

The third index that I’ve identified needs to be created against the VPX_LIC_USAGE table.  This table has something to do with licensing, and its size on your system will vary.  This vCenter system has over 16k rows in the table, but the system has only been installed for a couple of months at this point.  As your vSphere installation ages it appears that this table will continue to grow and grow.  Best I can tell, a couple of rows are entered into this table every hour for each host in the farm.  Needless to say, this table will grow quite large when you’ve got a large VMware vSphere farm.

CREATE INDEX mrdenny_SAMPLE_TIMESTAMP ON dbo.VPX_LIC_USAGE
([SAMPLE_TIMESTAMP])
INCLUDE ([SAMPLE_TS])
WITH (FILLFACTOR=100, ONLINE=OFF)

As you look at these three indexes which I recommend that you create on your vSphere database, you’ll notice that there is an ONLINE=OFF flag.  If your vCenter database is running on the Enterprise Edition of SQL Server then you can change that to ONLINE=ON, which will allow the indexes to be created online instead of causing blocking while the indexes are being created.  If you don’t have the Enterprise Edition of SQL Server then you can’t create the indexes online.  There should be no impact to the vCenter system if you create the indexes while the system is up and running.  The worst that will happen is that queries which are already running slowly will run a little slower than normal.

You’ll notice that I’ve named these indexes to all start with mrdenny.  This is so that these indexes can be easily identified as coming from my blog (in case the next admin wonders where they are from) and so that you never have to worry about the index names colliding with names that VMware would try to use.
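
That naming convention also makes the indexes easy to find later on; a quick query against sys.indexes in the vCenter database will list them:

SELECT object_name(object_id) AS table_name, name AS index_name
FROM sys.indexes
WHERE name LIKE 'mrdenny%'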

Denny

