SQL Server with Mr. Denny

September 15, 2008  11:00 AM

Scaling the database out, not up.

Denny Cherry

When your database has grown beyond the performance capabilities of a single SQL Server, there are still ways to increase the system performance.  This requires using a technique called federating the database, which is also known as scaling the database out.  When you increase a server's capacity by increasing the CPU count within the SQL Server, it is called scaling up the system.  When you increase capacity by adding additional servers to the system, it is called scaling out the system.  By scaling out the system we add entire additional servers to the database, creating a database federation.  There are a couple of ways in which you can create the database federation, and the technique that you use will depend on your own system requirements.
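To make the idea concrete, here is a minimal sketch of one common federation technique, a distributed partitioned view.  All of the server, database, table, and column names here are hypothetical, and the example assumes the second member server (FED2) has already been configured as a linked server:

```sql
-- On member server FED1, the local member table holds one range of the
-- data, enforced by a CHECK constraint on the partitioning column.
CREATE TABLE dbo.Customers_1
(CustomerID int NOT NULL PRIMARY KEY
    CHECK (CustomerID BETWEEN 1 AND 999999),
 CustomerName varchar(100) NOT NULL)
GO

-- The distributed partitioned view unions the local member table with
-- the remote member tables, reached through linked servers.  A copy of
-- this view is created on every server in the federation, so any node
-- can service any query.
CREATE VIEW dbo.Customers
AS
SELECT CustomerID, CustomerName FROM dbo.Customers_1
UNION ALL
SELECT CustomerID, CustomerName FROM FED2.FederatedDB.dbo.Customers_2
GO
```

The CHECK constraints are what let the optimizer route a query on CustomerID to only the member server which holds that range of the data.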

A database federation is not a high availability solution.  The correct solution to use for a high availability solution would be Microsoft Cluster Service (MSCS) or Database Mirroring (SQL 2005 and up).

There are some potential down sides to federating your database which you need to be aware of in order to make an informed decision.

1.       If any server in the federation is taken offline, the entire database system will become unavailable.  This is because the way the federation works requires online access to all nodes of the federation.  As the database which is being federated is probably an important asset to the company, this risk can be mitigated by using clustering in combination with database federation, providing a highly available foundation to build your database federation on top of.

2.       Licensing for a database federation is extremely expensive.  SQL Server Enterprise Edition must be used, as database federation requires the use of distributed partitioned views, which are an Enterprise Edition only feature.  Another reason for Enterprise Edition is the number of CPUs supported.  As the system is apparently CPU bound (which is one of the key reasons to use a federated database), you will want to use SQL Servers which have as many CPUs in them as possible.  This would lead you to select servers along the lines of the HP DL700 series or the Sun Fire 4600 series.  Use of these massive servers will decrease the number of servers in your federation, thereby making the federation easier to set up.

3.       The design of a database federation is not a simple task.  It requires an intimate knowledge of not only the database, but the entire application platform which works with the database backend.  In addition you need a solid grasp of not only the current system requirements, but of the far-reaching expandability requirements of the database as well.  This knowledge is key, as changing the design of your database federation is an extremely complex task which, if not done correctly, can easily lead to hours of downtime and poor performance while data is moved from one node of the federation to another.

While these are some pretty important things to think about, federating your database has some major upsides as well.

1.       By federating your database, you increase the amount of data that can be loaded into cache, as each server loads its own portion of the data into its own cache.  This allows you to go well beyond the 64 GB of memory that Windows 2003 or Windows 2008 Enterprise Edition supports.  With enough servers in the federation this will even allow you to go beyond the 2 TB limit of Windows 2008 Data Center Edition.

2.       In addition to the additional data cache you have access to, you also get access to more CPUs than you would be able to fit in a single server, unless you were to purchase a very high end system such as an HP Superdome or one of the IBM iSeries servers.

3.       Given that the data is laid out across multiple servers, this increases the number of disk controllers or HBAs that you have access to, which can increase the available throughput to the disk.  It also increases the number of PCI buses which you have access to, thereby preventing any sort of contention as the data crosses through the HBAs or RAID controllers and through the PCI bus on its way to the CPUs and RAM.


Now that we’ve gone over some of the basics of federated databases, read through my next few posts as I talk about the various techniques which can be used to federate a database, and we’ll go through the design process for using each one within your database environment.


September 11, 2008  11:00 AM

Service Broker may not remove messages after processing

Denny Cherry

I’ve seen an issue with Service Broker that others may be seeing.  Messages are sent into Service Broker, are processed as normal, and are removed from the message queue.  However, you still see the conversation in the sys.conversation_endpoints DMV in a CONVERSING state instead of a CLOSED state.  The very strange part is that there are no records in sys.transmission_queue.  There are also no errors when checking with SQL Server Profiler.

Apparently this is a known issue which Microsoft is working on.  The strange thing is that when it happens on my system, it only happens on a single queue in my database.

Currently the only workaround is to do an END CONVERSATION WITH CLEANUP on the conversations.  I’ve written this script which clears out the conversations.  I’ve made it so that it only removes the messages which are for the problem conversation which don’t currently exist in the queue (this queue is not auto processed, there is a service which queries the queue every 30 seconds so there can be a backlog of valid messages in the queue which I don’t want to delete).

declare @i int
set @i = 1
while @i <> 10000
begin
  declare @conversation_handle uniqueidentifier
  declare cur CURSOR for
  SELECT TOP (1000) conversation_handle
  FROM sys.conversation_endpoints
  WHERE NOT EXISTS (SELECT a.conversation_handle
    FROM [tcp://AWT/Sonar/Q_ObjectDelete] a
    WHERE a.conversation_handle = sys.conversation_endpoints.conversation_handle)
  AND sys.conversation_endpoints.far_service = 'tcp://AWT/Sonar/Svc_ObjectDelete'
  AND sys.conversation_endpoints.state <> 'CD'
  open cur
  fetch next from cur into @conversation_handle
  while @@fetch_status = 0
  begin
    end conversation @conversation_handle with cleanup
    fetch next from cur into @conversation_handle
  end
  close cur
  deallocate cur
  set @i = @i + 1
end
I run this every hour to clean up the bogus records in the sys.conversation_endpoints DMV.

Without cleaning up the sys.conversation_endpoints DMV, the tempdb will slowly start to fill up and throw out of space messages while sp_spaceused shows that the tempdb is empty, in the same way that it did in the other post I did a while back.


September 9, 2008  11:52 AM

Tuning SQL Server performance via memory and CPU processing

Denny Cherry

Part two of my two part series on hardware tuning of your SQL Server has just been released on SearchSQLServer.com.  This part is titled “Tuning SQL Server performance via memory and CPU processing“.


September 8, 2008  11:00 AM

How to configure DTC on Windows 2008

Denny Cherry

This post is specifically about setting up DTC on Windows 2008.  If you are looking for the post about setting up DTC on Windows 2003 you’ll want to go to the post “How to configure DTC on Windows 2003“.

The basic idea behind DTC setup in Windows 2008 is very similar to Windows 2003.  DTC needs to be installed and then configured.  To install DTC open the Control Panel, then Programs and Features.  Click on the link which says Turn Windows features on or off.  Eventually the Server Manager will open and finish querying the system for the list of features and roles which are set up.  Click on the Add Roles button and select Application Server from the Roles list (you may need to click Next to get past the welcome to the wizard screen).  On the screen which tells you about the Application Server click Next.  On the next page you can select the services which will be installed as part of this role.  Select Incoming and/or Outgoing Remote Transactions depending on which one you need.  If SOAP will be used you may need WS-Atomic Transactions installed; if so select that as well.  (In this example I’ll check them all.)

Select Roles

Then click Next.  If you have selected WS-Atomic Transactions you’ll be presented with a page to select an SSL cert.  You can either select one, create a self-signed cert, or request one later from a CA.  I selected a self-signed cert as I didn’t have one already installed.  Click Next, then Install.  If you didn’t install WS-Atomic Transactions the next screen will simply have the summary and the Install button.

It will take a while as Windows is going to install .NET 3.0 as part of this install.  Now is a good time for coffee or a smoke.

After installation is complete click close.
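As an aside, if you are scripting server builds, the same role can reportedly be installed from the command line with ServerManagerCmd.  This is just a sketch; I’m assuming the role id is Application-Server, so run the query first to confirm the exact id on your build:

```
REM List the available role and feature ids, then install the role.
ServerManagerCmd -query
ServerManagerCmd -install Application-Server
```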

If you click the plus sign next to Roles in the right hand menu you can navigate down to Roles > Application Server > Component Services > Distributed Transaction Coordinator > Local DTC.

Server Manager Right Hand Menu

Right click on Local DTC and select properties, then select the security tab.  This tab looks very similar to the Windows 2003 one.

Check whichever boxes you need in order to get the DTC setup to match the other servers in your environment.  Be sure to enable “Allow Remote Clients” or client machines won’t be able to access the DTC on this machine.  If you want to be able to remotely enable DTC from another machine you’ll need to check “Allow Remote Administration”.  It’s been my experience that once DTC is set up and working, remote administration probably isn’t that important, so I’d say leave that unchecked and simply RDP into the server if it needs to be worked on.  Your screen will look something like this.

Local DTC Properties

When you click OK DTC will prompt you to restart it.  Click yes (unless you want to schedule the restart for another time) and DTC will restart.  You can see the DTC events in the log by selecting the “Application Server” option from the menu on the left.  You are going to want the most recent event to look something like this.

MSDTC started with the following settings:
Security Configuration (OFF = 0 and ON = 1):
Allow Remote Administrator = 0,
Network Clients = 1,
Trasaction Manager Communication:
Allow Inbound Transactions = 1,
Allow Outbound Transactions = 1,
Transaction Internet Protocol (TIP) = 0,
Enable XA Transactions = 1,
MSDTC Communications Security = No Authentication Required,
Account = NT AUTHORITY\NetworkService,
Firewall Exclusion Detected = 0
Transaction Bridge Installed = 1
Filtering Duplicate Events = 1

(Yes, I’m aware of the spelling issues in the error message.  That’s a straight copy and paste from Windows 2008’s event log.)

When configuring DTC on a cluster you only need to configure one node.  This is because DTC is a cluster aware service, so when you install DTC after setting up clustering (or you set up clustering after installing DTC) the DTC service will already be set up as a clustered resource within the first cluster resource group created.  When you configure DTC for network access on a cluster the settings are written to the quorum drive as well as the system registry, which allows both nodes to share the settings.

If you have a cluster, you have to go through a firewall with DTC, and you have followed KB Article 250367 (I’m sure there is a Windows 2008 version of this KB article somewhere, I just haven’t run across it yet), you will need to have more than 20 ports available to DTC.  This is because when you configure the DCOM protocols to use specific ports you are configuring all of RPC to use those specific ports.  This means that the Cluster Administrator needs to use these ports, as does the Component Services window which monitors for distributed transactions.  When dealing with a cluster it is recommended that you have at least 100 ports open between the machines within the transaction.

If you need to set up MS DTC to talk to another DTC coordinator then you will probably need to use the No Authentication Required setting unless they support the other options.  Check with the vendor of the other coordinator to find out.
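For reference, the port restriction from KB Article 250367 boils down to three values under the RPC key in the registry.  This is a sketch which assumes the Windows 2003 key and value names carry over to Windows 2008, and uses a 5000-5200 range to get more than the recommended 100 ports:

```
REM Restrict RPC (and therefore DTC/DCOM) to a fixed port range.
reg add HKLM\SOFTWARE\Microsoft\Rpc\Internet /v Ports /t REG_MULTI_SZ /d 5000-5200
reg add HKLM\SOFTWARE\Microsoft\Rpc\Internet /v PortsInternetAvailable /t REG_SZ /d Y
reg add HKLM\SOFTWARE\Microsoft\Rpc\Internet /v UseInternetPorts /t REG_SZ /d Y
```

A reboot is needed before RPC picks up the new port range, and the same range has to be opened on the firewall between the machines.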

(These are the settings which you need no matter which version or edition of SQL Server you have installed.)


September 4, 2008  11:00 AM

Speaking about Federated Databases at the San Diego SQL Users Group

Denny Cherry

I’ve been asked to come back to the San Diego SQL Server Users Group on September 18, 2008.  This time around I’ll be speaking about Federated Databases, and some various techniques which you can use to federate your systems.

For those that can’t make it I’ll be speaking on this same topic at the SoCal Code Camp on October 25 and 26.

I’m still finishing up the slide deck and demos.  I’ll try and get them posted in advance.  If I can’t I’ll post them shortly after.

Bring your business cards for a drawing as I’ll be giving away a copy of Laptop Cop, the laptop retrieval product by Awareness Technologies (the company which I work for).


September 2, 2008  7:30 PM

Get SQL Server log shipping functionality without Enterprise Edition

Denny Cherry

A new tip of mine has just been published on SearchSQLServer.com.  This tip, “Get SQL Server log shipping functionality without Enterprise Edition” is all about writing your own Log Shipping code without using Microsoft’s, allowing you to use Log Shipping on editions of SQL Server other than Enterprise Edition.


September 1, 2008  11:00 AM

Getting more error data from SQL Server Replication

Denny Cherry

The error reporting in SQL Server Replication isn’t all that great.  This is a well known issue that pretty much everyone is aware of.  Something that I don’t think a whole lot of people know about is that there is a way to get a lot more information from replication about what’s going on, and what’s going wrong.

This is most easily done by running the replication job manually from within a command window on the distributor.  This will allow you to add switches or change values as needed and easily see the output, or redirect the output to a file for analysis, sending to Microsoft, your consultant, etc.

Replication is run by jobs, with job steps of some funky types.  These step types simply mask what is happening in the background.  SQL Server is shelling out and running a command line app, passing it all the switches as they appear within the job step.

All the command line apps which replication uses are in the “C:\Program Files\Microsoft SQL Server\90\COM” folder by default (for SQL 2000 replace the 90 with 80, for SQL 2008 replace the 90 with 100).  In that folder you will find a few apps which are of interest.  When you run the snapshot job snapshot.exe is called.  When you run a distribution job DISTRIB.exe is called (I’ve got no idea why it’s uppercase).  When the log reader is running logread.exe is run.  When you are running merge replication replmerg.exe is run.

All of these can be run manually from a command prompt.  For starters open up the SQL job and edit step two, the one which actually does the work.  Copy all the text in the command window and paste it after the filename in the command prompt window, then press Enter.  You will need to stop the SQL Agent job before you can actually run the command from the command prompt, as replication is designed so that you can only run the commands one at a time.

Now the whole point of this was to get more log info because the replication is failing.  This is done by adding the “-OutputVerboseLevel” switch to the command.  This switch has between 3 and 5 logging levels depending on which command you are running.  0 (zero) is basically no logging, and as the number goes up more data is shown.  distrib.exe, replmerg.exe and snapshot.exe take 0-2, while logread.exe takes 0-4.
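As an example of what the finished command might look like for the distribution agent (everything other than the last two switches should be copied straight from your own job step; the server, database, and publication names below are placeholders):

```
cd "C:\Program Files\Microsoft SQL Server\90\COM"
DISTRIB.exe -Publisher MYPUBLISHER -PublisherDB MyPubDB -Publication MyPublication -Distributor MYDISTRIBUTOR -Subscriber MYSUBSCRIBER -SubscriberDB MySubDB -OutputVerboseLevel 2 -Output C:\temp\distrib.log
```

The “-Output” switch sends everything to a file instead of scrolling it past you on the screen, which makes the log much easier to hand off for analysis.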

You should only do this when replication is failing and you can’t figure out why, and all SQL is telling you is some cryptic error message.

Hopefully you’ll find this information useful.


August 26, 2008  7:55 PM

Don’t Consolidate Yourself Into Performance Problems – Archive

Denny Cherry

The webcast which I did for Quest Software called Don’t Consolidate Yourself Into Performance Problems is now available for viewing after the fact.

This is the webcast which I did last week for Quest Software.  If you had signed up for the session Quest should have sent you an email with this URL already.



August 25, 2008  11:00 AM

How to setup a server to read log files nightly.

Denny Cherry

So you’ve been tasked with setting up a quick and dirty reporting server.  The goal is to restore the log files from the production server to the reporting server nightly.

The backups are simple: use the SQL maintenance plan to back up the logs, and then copy them to the remote machine.  But how do you restore the logs to the reporting server nightly?

Well I’ve got a two step SQL job which should help you out.

Step 1 kills all current sessions in the database, and step 2 does the actual restores.

The code for step 1 is: 

declare @spid varchar(20)
declare cur CURSOR FOR
select spid
from sys.sysprocesses
where dbid = db_id('Your Database Name Here') /*<---Put your database name here*/
and spid > 50
open cur
fetch next from cur into @spid
while @@FETCH_STATUS = 0
begin
  exec ('kill ' + @spid)
  fetch next from cur into @spid
end
close cur
deallocate cur

The code for step 2 is:

create table #Files
(FileName nvarchar(4000),
Depth int,
IsFile bit)
insert into #Files
exec xp_dirtree 'd:\', 1, 1

delete from #Files
where IsFile = 0

declare @FileName nvarchar(4000)
declare cur CURSOR FOR
SELECT FileName from #Files
ORDER BY FileName /*logs must restore in sequence; this assumes the backup file names sort chronologically*/
open cur
fetch next from cur into @FileName
while @@FETCH_STATUS = 0
begin
  SET @FileName = 'D:\Path\To\Your\Log\Backups\' + @FileName
  RESTORE LOG YourDatabaseNameHere FROM DISK=@FileName WITH STANDBY='D:\Path\To\Your\StandBy\File.standby'
  IF @@ERROR = 0 /*only delete the log backup after it restores successfully*/
    exec xp_delete_file @FileName
  fetch next from cur into @FileName
end
close cur
deallocate cur
drop table #Files

I hope this makes your process easier.  Note that this code only works on SQL Server 2005 and up, as some of the system stored procedures which I use were not included until SQL Server 2005.


August 21, 2008  11:00 AM

SoCal Code Camp Sessions are posted

Denny Cherry

I and many of the other presenters have begun entering our sessions for the next SoCal Code Camp on October 25th and 26th at USC in Los Angeles, CA.

The Code Camp is the weekend before PDC, and right near where PDC will be held.  So if you are coming to PDC this year, come a couple of days early and check out what is hoped to be the largest SoCal Code Camp ever.

I’ve got four sessions scheduled, all of which are tagged with “SQL Server“.

If you are planning on attending the Code Camp, be sure to mark the sessions you want to attend with the interest check box.  That way the Code Camp staff knows how big of a room each session will need.

If you are interested in speaking at the Code Camp, feel free to enter more sessions.  The more sessions we have, the more popular the event will be.

As the date gets closer additional sessions will be entered onto the site.  Be sure to check back often.

I’ll do my best to get the slide decks and sample code posted before the Code Camp starts.  Odds are I’ll get them posted on Saturday morning as I did last time.  If you missed any of my sessions from prior Code Camps, let me know in the comments and I’ll see what I can do about adding those sessions to the new schedule.  See you at the Code Camp.

