I came across this problem when setting this up on a client's system. The way I got around it was to have the policy evaluate against a condition I created that filtered out the stored procedures used for SQL Server Replication. I did this by setting up a condition, which I called “User Defined Stored Procedures”, with two values in the expression: the first looks at the Schema field and excludes anything in the “sys” schema (which takes care of all the system objects), and the second looks at the Name field and excludes everything that matches “sp_MS%”. You can see this condition below (click any of the images to enlarge).
To ensure that this only ran against the user databases, I created another condition against the Database facet which checks the IsSystemObject field and makes sure that it is False, shown below. That way I could put procedures like sp_whoisactive and sp_who3 into the master database without them causing a problem.
The actual check condition of the policy was easy enough to set up: it simply verifies that the stored procedure name isn't like “sp%”, as shown below.
Bringing this all together is the actual policy, which is configured with the check condition and is set to filter the objects being evaluated against the two other conditions, which helps limit the amount of time the policy takes to execute, as shown below. Since this example is a SQL Server 2008 R2 instance, I had to use a schedule to evaluate everything nightly, but that'll do.
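If you want to see what the policy would flag without waiting for the scheduled evaluation, the same filtering logic can be expressed as a plain T-SQL query run in each user database. This is just an illustrative sketch of the conditions described above, not part of the policy itself:

-- Procedures named like sp% that aren't system objects and aren't replication procedures
SELECT SCHEMA_NAME(p.schema_id) AS SchemaName, p.name
FROM sys.procedures AS p
WHERE p.name LIKE 'sp%'                    -- the check condition
  AND SCHEMA_NAME(p.schema_id) <> 'sys'    -- exclude the sys schema
  AND p.name NOT LIKE 'sp[_]MS%'           -- exclude the replication procedures (sp_MS...)
  AND p.is_ms_shipped = 0                  -- exclude Microsoft-shipped objects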
Hopefully if you run across this situation this will help you get everything set up faster than I was able to.
- The process could not execute ‘sp_replcmds’ on ”. (Source: MSSQL_REPL, Error number: MSSQL_REPL20011)
Get help: http://help/MSSQL_REPL20011
- Cannot execute as the database principal because the principal “dbo” does not exist, this type of principal cannot be impersonated, or you do not have permission. (Source: MSSQLServer, Error number: 15517)
Get help: http://help/15517
- The process could not execute ‘sp_replcmds’ on ”. (Source: MSSQL_REPL, Error number: MSSQL_REPL22037)
Get help: http://help/MSSQL_REPL22037
This error message generally comes from the database not having a valid owner, or from SQL Server not being able to correctly identify the owner of the database. The easiest fix is usually to change the database owner using the sp_changedbowner system stored procedure, as shown below. The sa account is a reliable account to use as the new owner of the database.
USE PublishedDatabase
GO
EXEC sp_changedbowner 'sa'
GO
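On newer versions of SQL Server, ALTER AUTHORIZATION does the same job without using the deprecated procedure; a minimal sketch using the same hypothetical database name:

-- Equivalent owner change using ALTER AUTHORIZATION
ALTER AUTHORIZATION ON DATABASE::PublishedDatabase TO sa
GO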
Once the database ownership has been changed the log reader will probably start working right away. If it doesn't, quickly restarting the log reader should resolve the problem.
While this does require changes to the production database, there is no outage required to make these changes.
1. If there are triggers on your tables, replication doesn’t have a way to ensure that the triggers will be there on the remote site.
2. If you need to add tables, procedures, views, etc. you have to reinitialize the subscription to add the new articles to the subscriber.
3. The failback story is pretty much a mess. Assuming that you do have to fail over to your DR server, failing back isn't exactly the easiest thing to do. Basically you have to take another outage while you move the database back, or you have to set replication back up in the other direction.
Needless to say, these are some pretty good reasons not to use SQL Server Replication to get data to your DR site, especially as there are so many better options such as Database Mirroring, Log Shipping, storage replication, third party storage replication, and soon enough AlwaysOn Availability Groups.
If you are using SQL Server Replication to replicate data from your production site to your DR site, I urge you to look at the other options available to you and strongly consider moving to one of them.
However, this technique can be used to get replication back up and running after moving the publisher to another SQL Server. Simply set up the publication as normal, then back up the database and add the subscription using the “initialize with backup” value for the @sync_type parameter, as shown in the sample code below.
If you were going to actually initialize a new subscription from a backup, the way the feature was written to be used, then after the backup completes restore the database to the subscriber under the correct database name (a sketch of that restore follows the sample code below).
BACKUP DATABASE YourDatabase
TO DISK = 'E:\Backup\YourDatabase.bak'
WITH FORMAT, STATS = 10
GO
USE YourDatabase
GO
EXEC sp_addsubscription
    @publication = N'YourDatabase Publication',
    @subscriber = N'ReportServer',
    @destination_db = N'ReportingDatabase',
    @article = 'all',
    @sync_type = 'initialize with backup',
    @backupdevicetype = 'disk',
    @backupdevicename = 'e:\Backup\YourDatabase.bak'
GO
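For the true initialize-from-backup scenario, the restore on the subscriber would look something like the sketch below. The logical file names and target paths here are assumptions; adjust them to match your environment:

-- Restore the publisher's backup on the subscriber under the destination database name
RESTORE DATABASE ReportingDatabase
FROM DISK = 'E:\Backup\YourDatabase.bak'
WITH MOVE 'YourDatabase' TO 'E:\Data\ReportingDatabase.mdf',        -- logical data file name is an assumption
     MOVE 'YourDatabase_log' TO 'E:\Logs\ReportingDatabase_log.ldf', -- logical log file name is an assumption
     STATS = 10
GO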
This technique should work on all versions of SQL Server from SQL Server 2000 up through SQL Server 2012 without issue.
Now normally I would just copy the snapshot to the subscriber and run the distribution agent on the subscriber while the snapshot loads (I thought I had a blog post about this, but apparently I don't, so I'll blog about that later). The problem with doing that in this case is that the distributor doesn't have enough drive space to create the snapshot, and since the servers are hosted with RackSpace, adding 600 Gigs of SAN drive space to this cluster would cost about $12,000 a month with a one year contract on the new storage. Fortunately there's another cluster that has enough DAS space available, so I can simply point the replication agent to a network share on that drive instead by using the sp_changepublication procedure as shown below.
EXEC sp_changepublication
    @publication = 'PublicationName',
    @property = 'alt_snapshot_folder',
    @value = '\\Server\NetworkShare\'
However, when I get to the subscriber I don't want to read from the network share; I want to copy the files to the local disk and read from there. There's no switch for this when running the distribution agent manually, so before starting the distribution agent on the subscriber you have to manually edit the dbo.syspublications table on the subscriber, specifically the alt_snapshot_folder column, so that it points to the local path where the files will be stored on the subscriber. You can then run the distribution agent on the subscriber and the snapshot will load. Once the snapshot is loaded, kill the distribution agent on the subscriber and start it on the distributor as normal, and all will once again be well with the world.
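A minimal sketch of that edit on the subscriber is below, assuming the table has the same shape on your build; the publication name and local path are placeholders for whatever you actually use:

-- Point the subscriber's copy of the publication at the local snapshot folder
UPDATE dbo.syspublications
SET alt_snapshot_folder = 'D:\ReplSnapshot\'   -- local path where the copied snapshot files live
WHERE name = 'PublicationName'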
A question that I get when I talk about doing this is why copy the files over the network instead of just letting the distribution agent load the snapshot remotely. I've found (and I've done this several times to deal with slow networks) that the time spent compressing the snapshot files (and they compress really well, as they are just text) and copying them over the network is almost always way less than the time it would take to load the snapshot remotely from the distributor.
In the case of this project the publisher and distributor were in Dallas and the subscriber was in New York City. But I've used this same technique from Orange County, CA to Dallas, TX; from Dallas, TX to Los Angeles, CA; and from Los Angeles, CA to Beijing, China. In every case it's a little slow and a little annoying, but it works (in the other cases the distributor did have enough space to hold the snapshot, so I didn't have to modify the dbo.syspublications table, but everything else I did was the same).
Kendal Van Dyke (blog | @SQLDBA) has another great technique that he posted a couple of years ago.
The log scan number (6367:10747:6) passed to log scan in database ‘%d’ is not valid. This error may indicate data corruption or that the log file (.ldf) does not match the data file (.mdf). If this error occurred during replication, re-create the publication. Otherwise, restore from backup if the problem results in a failure during startup. (Source: MSSQLServer, Error number: 9003)
Needless to say, this error looks pretty damn scary. In reality it isn't actually that bad. What this error is basically saying is that the LSN returned from the database is older than the one logged in the replication database. The best part is that the fix is pretty easy: simply run the stored procedure sp_replrestart in the published database.
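That looks something like the sketch below, using a hypothetical published database name:

-- Run in the published database to move the replicated LSN forward
USE PublishedDatabase
GO
EXEC sp_replrestart
GO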
Local disk is cheap. That's the reason in a nutshell. Let me see if I can't explain in a little more detail.
Because local disk is so cheap (I can buy a 300 Gig SAS drive from Dell for $469), we can easily throw a lot of them at the problem, getting really, really fast storage really, really cheaply (at least compared to a SAN). Throwing 10 or 20 disks at a server is only ~$4,700 or ~$9,400 respectively, which in the grand scheme of things isn't all that much.
Those same 300 Gig disks in an EMC array, as an example, will retail for ~$2,500 each (~$25,000 for 10 or ~$50,000 for 20). So why would I purchase SAN storage instead of buying a ton of local disks? The local disk is faster and cheaper, so where is the benefit?
The benefit from the SAN storage comes in a few places.
1. Centralized Administration
2. Better use of resources
3. Lower power utilization
4. Lower TCO
Let's look at each of these one at a time (and yes, there is some overlap).
Centralized Administration
Instead of having to get bad disk alerts from all the servers that the company owns, I get them all from one place. Instead of having to connect to each server to manage its storage configuration, I have a single point from which I can do this.
Better use of resources
When using local disk I have to purchase storage for each server as a one-off purchase. If a server I bought 6 months ago doesn't need anywhere near the amount of storage that I purchased for it, I'm stuck. That storage will sit there eating power and doing nothing while I go out and purchase new storage for my next server.
When using a storage array, each server only has what it needs. If a server needs more storage later, that can be easily assigned. If a server has more storage than it needs, you can shrink the LUN (only a couple of vendors can do this so far; the rest will catch up eventually) and that storage can be easily reallocated to another server in the data center. If a server needs faster storage, or is on storage which is just too fast and the faster storage could be better utilized somewhere else, these changes can be made on the array with no chance of data loss and no impact to the system.
Lower Power Utilization
This goes back to the better use of resources point above. When you have shelves of disks sitting around doing nothing, or next to nothing, those disks still need to be powered. Power costs money, which affects the bottom line. When you can re-utilize the disks, the overall power costs are lower, especially when multiple servers are all sharing the spindles.
Lower TCO
This goes back to the power utilization point above. When you are using more power, you are generating more heat. The more heat you generate, the more cooling you need to keep everything up and running. Along with this, and tied into the better use of resources, when you need 50 Gigs of storage you use 50 Gigs of storage; when you need 1 TB of storage you use 1 TB of storage, no more and no less. So while you have to purchase a bit more up front (which is always recommended so that you can get the best possible prices), you only end up consuming the storage that you actually need. If you do chargebacks this will be very important.
Storage arrays also provide all sorts of extra goodies. The array itself can help with your backup and recovery process. It can help present full data sets to your Dev/QA/Test/Staging systems without storing full copies of the data, via the built-in snapshot technologies. When migrating or upgrading from one server to another, the storage array can make this very easy.
Migrating between servers is just a matter of disconnecting the LUN(s) from the old server, and attaching them to the new server.
Upgrading SQL Server? That's no problem. Disconnect the LUNs from the old server and take a snapshot of them, then attach the LUNs to the new server. You can then fire up the database engine and, in the event the database upgrade fails, just roll back the snapshot and attach the LUNs back to the original system, or take another snapshot and try attaching the databases again.
Want to keep a copy of every database that you have in the company at your DR site, no matter the version of SQL Server? Storage based replication can replicate the data for any application from one array to another; it doesn't matter whether the application supports replication or not. Every time a new or changed block is written to the array, the array will grab that block and send it over the wire to the remote array. This can be done in real time (synchronously) or on a delay as specified by the admin (asynchronously).
Hopefully this opened up the array a little to you, and gave you some insight into how the magic box works.
At random times we would see the latency for all the publications for a single database start to climb, eventually falling a few hours behind for no apparent reason. Looking in the normal places didn't lead me to much. I looked at some execution plans and saw a couple of performance issues there (with the Microsoft code), so I threw a couple of new indexes onto the MSlogreader_agents and MSsubscriptions tables (see below) and I also made a couple of tweaks to the sp_MSset_syncstate procedure to fix some of the pathetic code which I found within it (you'll also find this below).
This helped a little, but it didn't solve the problem. What did was querying the sys.dm_os_waiting_tasks dynamic management view. This showed a large number of processes with a wait_type of TRACEWRITE, and these were waiting long enough that blocking was actually starting to pop up (very sporadically, making it very hard to see). A quick look at sys.traces told me that there were three traces running against the server. I knew that I didn't have one running, so I took the session_id values shown in sys.traces and looked them up in sys.dm_exec_sessions to find out who needed to be kicked in the junk. Turns out that the traces were being run by Quest Software's Spotlight for SQL Server Enterprise's Diagnostic Server (the program_name column read “Quest Diagnostic Server (Trace)”).
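If you want to do the same lookup in one shot, something like the sketch below works; it assumes the traces you care about are rowset traces, which is why it joins on the reader_spid column:

-- Who owns the running traces on this server?
SELECT t.id, t.reader_spid, s.program_name, s.login_name
FROM sys.traces AS t
JOIN sys.dm_exec_sessions AS s
    ON s.session_id = t.reader_spid
WHERE t.is_rowset = 1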
So I logged into the diagnostic server via RDP, opened Spotlight, and edited the properties for the server which is our distributor. Then I opened the SQL Analysis window and disabled SQL Analysis for this server. Pretty much as soon as I clicked OK through the windows the TRACEWRITE waits went away, and the latency went from 2 hours down to 0.
This just goes to show just how careful you have to be when using SQL Profiler (or any sort of tracing) against your database server.
P.S. If you decide to make these changes to your distributor, keep in mind that they may cause anything or everything to break, including patches that you try to install against the SQL Server engine. These changes were made for a distributor running SQL Server 2008 R1 build 10.0.1600; use them against another build at your own risk. That said, here's the code.
USE distribution
GO
CREATE INDEX IX_sp_MSget_new_errorid ON dbo.MSrepl_errors (id) WITH (FILLFACTOR=100)
GO
CREATE INDEX IX_sp_MSadd_logreader_history ON dbo.MSlogreader_agents (id) INCLUDE (name, publication)
GO
CREATE NONCLUSTERED INDEX IX_sp_MSset_syncstate ON MSsubscriptions (publisher_id, publisher_db, article_id, subscription_seqno) INCLUDE (publication_id) WITH (FILLFACTOR=80)
GO
CREATE NONCLUSTERED INDEX IX_sp_MSset_syncstate2 ON MSsubscriptions (publisher_id, publication_id, sync_type, status, ss_cplt_seqno, publisher_db) INCLUDE (article_id, agent_id) WITH (FILLFACTOR=90, DROP_EXISTING=ON)
GO
ALTER PROCEDURE sp_MSset_syncstate
    @publisher_id smallint,
    @publisher_db sysname,
    @article_id int,
    @sync_state int,
    @xact_seqno varbinary(16)
AS
SET NOCOUNT ON

DECLARE @publication_id int

SELECT TOP 1 @publication_id = s.publication_id
FROM MSsubscriptions s
WHERE s.publisher_id = @publisher_id
    AND s.publisher_db = @publisher_db
    AND s.article_id = @article_id
    AND s.subscription_seqno < @xact_seqno

IF @publication_id IS NOT NULL
BEGIN
    IF ( @sync_state = 1 )
    BEGIN
        IF NOT EXISTS ( SELECT * FROM MSsync_states
                        WHERE publisher_id = @publisher_id
                            AND publisher_db = @publisher_db
                            AND publication_id = @publication_id )
        BEGIN
            INSERT INTO MSsync_states ( publisher_id, publisher_db, publication_id )
            VALUES ( @publisher_id, @publisher_db, @publication_id )
        END
    END
    ELSE IF @sync_state = 0
    BEGIN
        DELETE MSsync_states
        WHERE publisher_id = @publisher_id
            AND publisher_db = @publisher_db
            AND publication_id = @publication_id

        -- activate the subscription(s) so the distribution agent can start processing
        DECLARE @automatic int
        DECLARE @active int
        DECLARE @initiated int

        SELECT @automatic = 1
        SELECT @active = 2
        SELECT @initiated = 3

        -- set status to active, ss_cplt_seqno = commit LSN of xact containing
        -- syncdone token.
        --
        -- VERY IMPORTANT: We can only do this because we know that the publisher
        -- tables are locked in the same transaction that writes the SYNCDONE token.
        -- If the tables were NOT locked, we could get into a situation where data
        -- in the table was changed and committed between the time the SYNCDONE token was
        -- written and the time the SYNCDONE xact was committed. This would cause the
        -- logreader to replicate the xact with no compensation records, but the advance
        -- of the ss_cplt_seqno would cause the dist to skip that command since only commands
        -- with the snapshot bit set will be processed if they are <= ss_cplt_seqno.
        --
        UPDATE MSsubscriptions
        SET status = @active,
            subscription_time = getdate(),
            ss_cplt_seqno = @xact_seqno
        WHERE publisher_id = @publisher_id
            AND publisher_db = @publisher_db
            AND publication_id = @publication_id
            AND sync_type = @automatic
            AND status = @initiated
            AND ss_cplt_seqno <= @xact_seqno
        OPTION (OPTIMIZE FOR (@automatic=1, @initiated=3, @publisher_id UNKNOWN, @publisher_db UNKNOWN, @xact_seqno UNKNOWN))
    END
END
GO
When you are setting up something like synchronous replication between two storage arrays, this latency starts to become more important, as every millisecond that you spend waiting for the storage to respond is time that your application isn't responding to your clients' requests.
So, the basic math is that every meter of fiber optic cable that your data travels takes 5 nanoseconds. If your server is connected to your storage array via a one meter cable, there will be 10 nanoseconds of delay: 5 nanoseconds for the data to get to the array, and 5 nanoseconds for the response to get back to the server from the array.
Using this math, for each 100 meters of fiber optic cable there is 1 microsecond of latency. For every kilometer of cable there are 10 microseconds. For every 100 kilometers of cable there is a 1 millisecond delay, and for every 1000 kilometers of cable there are 10 milliseconds of delay. So if you are replicating data from LA to New York, that's about 2,778 miles, or 4,470 kilometers, which gives us a delay of about 44 milliseconds for each command which is being sent.
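Condensed into a single formula (assuming roughly 5 ns of propagation per meter of fiber, one way): $\text{round-trip delay} \approx 2 \times d_{\text{m}} \times 5\,\text{ns/m} = d_{\text{km}} \times 10\,\mu\text{s/km}$, so for LA to New York that's $4470\,\text{km} \times 10\,\mu\text{s/km} \approx 44.7\,\text{ms}$.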
Now there is something else which needs to be taken into account when figuring out the storage latency: the Fibre Channel switches. If the two ports on the switch are on the same ASIC there is no measurable latency through the switch; however, if the two ports are on different ASICs there is an additional latency of 2 microseconds in each direction. While this isn't much, keep in mind that between LA and New York there are probably hundreds of switches, so those 2 microseconds are going to really start to add up.
Because of these numbers, when using synchronous replication about 30 miles is as far as you want to replicate data. Much farther than that and you'll start to see latency problems in your application. These problems will be amplified with something like SQL Server because, with SQL Server and other databases, every nanosecond counts.
Hopefully this math will help you make more informed storage design decisions.
Why Service Broker
Before I get into the code, it's probably worth talking about why we went with SQL Service Broker over SQL Replication.
The main reason is that we wanted to be able to do ETL on the data as it moved from the production OLTP database to the reporting database. We also wanted the ability to easily scale the feed from one reporting database to many, with the reporting servers being in different sites.
Below you'll find some SQL code that we'll use to create some sample tables. In our OLTP database we have two tables, LoanApplication and Customer, and in our reporting database we have a single table, LoanReporting. As data is inserted or updated in the LoanApplication and Customer tables, that data is packaged up into an XML document and sent to the reporting database. In the case of my sample code here everything is on one server, but the databases could easily be moved to separate servers.
IF EXISTS (SELECT * FROM sys.databases WHERE name = 'Sample_OLTP')
    DROP DATABASE Sample_OLTP
GO
IF EXISTS (SELECT * FROM sys.databases WHERE name = 'Sample_Reporting')
    DROP DATABASE Sample_Reporting
GO
CREATE DATABASE Sample_OLTP
CREATE DATABASE Sample_Reporting
GO
ALTER DATABASE Sample_OLTP SET NEW_BROKER
ALTER DATABASE Sample_Reporting SET NEW_BROKER
GO
ALTER DATABASE Sample_OLTP SET TRUSTWORTHY ON
ALTER DATABASE Sample_Reporting SET TRUSTWORTHY ON
GO
USE Sample_OLTP
GO
CREATE TABLE LoanApplication
   (ApplicationId INT IDENTITY(1,1),
    CreateTimestamp DATETIME,
    LoanAmount MONEY,
    SubmittedOn DATETIME,
    ApprovedOn DATETIME,
    LoanStatusId INT,
    PrimaryCustomerId INT,
    CoSignerCustomerId INT)
GO
CREATE TABLE Customer
   (CustomerId INT IDENTITY(1,1),
    FirstName VARCHAR(50),
    LastName VARCHAR(50),
    EmailAddress VARCHAR(255))
GO
USE Sample_Reporting
GO
CREATE TABLE LoanReporting
   (ApplicationId INT,
    CreateTimestamp DATETIME,
    LoanAmount MONEY,
    SubmittedOn DATETIME,
    ApprovedOn DATETIME,
    LoanStatusId INT,
    PrimaryCustomerId INT,
    PrimaryFirstName VARCHAR(50),
    PrimaryLastName VARCHAR(50),
    PrimaryEmailAddress VARCHAR(255),
    CoSignerCustomerId INT,
    CoSignerFirstName VARCHAR(50),
    CoSignerLastName VARCHAR(50),
    CoSignerEmailAddress VARCHAR(255))
GO
Service Broker Objects
With this system I use a single pair of Service Broker queues to handle all the data transfer. This way transactional consistency can be maintained as the data flows in. These SQL Service Broker objects should be created in both the Sample_OLTP and the Sample_Reporting databases.
CREATE MESSAGE TYPE ReplData_MT
GO
CREATE CONTRACT ReplData_Ct (ReplData_MT SENT BY ANY)
GO
CREATE QUEUE ReplData_Source_Queue
GO
CREATE QUEUE ReplData_Destination_Queue
GO
CREATE SERVICE ReplData_Source_Service ON QUEUE ReplData_Source_Queue (ReplData_Ct)
GO
CREATE SERVICE ReplData_Destination_Service ON QUEUE ReplData_Destination_Queue (ReplData_Ct)
GO
In the OLTP database you create a route like this (just change the BROKER_INSTANCE to match your server).
CREATE ROUTE ReplData_Route
WITH SERVICE_NAME = 'ReplData_Destination_Service',
     BROKER_INSTANCE = '566C7F7A-9373-460A-8BCC-5C1FD4BF49C9',
     ADDRESS = 'LOCAL'
In the reporting database you create a route like this (just change the BROKER_INSTANCE to match your server).
CREATE ROUTE ReplData_Route
WITH SERVICE_NAME = 'ReplData_Source_Service',
     BROKER_INSTANCE = 'A4EC5E44-60AF-4CD3-AAAD-C3D467AC682E',
     ADDRESS = 'LOCAL'
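If you aren't sure which BROKER_INSTANCE values to plug in, you can pull each database's Service Broker GUID from sys.databases; a quick sketch:

-- Find the Service Broker GUID for each sample database
SELECT name, service_broker_guid
FROM sys.databases
WHERE name IN ('Sample_OLTP', 'Sample_Reporting')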
Stored Procedures on the OLTP Database
In the OLTP database we need just a single stored procedure. This stored procedure handles sending the message so that we don't have to put the same code in each table's trigger.
CREATE PROCEDURE SendTriggerData
    @XMLData XML
AS
BEGIN
    DECLARE @handle UNIQUEIDENTIFIER

    BEGIN DIALOG CONVERSATION @handle
        FROM SERVICE ReplData_Source_Service
        TO SERVICE 'ReplData_Destination_Service'
        ON CONTRACT ReplData_Ct
        WITH ENCRYPTION = OFF;

    SEND ON CONVERSATION @handle
        MESSAGE TYPE ReplData_MT (@XMLData)
END
GO
OLTP Database Triggers
The triggers that are on each table on the OLTP database are kept as small as possible so that we put as little additional load on the OLTP server as possible. Obviously there will be some additional load on the OLTP database, but we want to keep that to a minimum.
CREATE TRIGGER t_LoanApplication ON LoanApplication
FOR INSERT, UPDATE
AS
BEGIN
    DECLARE @xml XML

    SET @xml = (SELECT * FROM inserted AS LoanApplication FOR XML AUTO, ROOT('root'))

    EXEC SendTriggerData @xml
END
GO
CREATE TRIGGER t_Customer ON Customer
FOR INSERT, UPDATE
AS
BEGIN
    DECLARE @xml XML

    SET @xml = (SELECT * FROM inserted AS Customer FOR XML AUTO, ROOT('root'))

    EXEC SendTriggerData @xml
END
GO
Procedures on the Reporting Database
The reporting database is where the real work happens. Here we take the XML document, identify which table the data came from, then pass the XML document to a child procedure which processes the data and updates the table.
CREATE PROCEDURE ProcessOLTPData_LoanApplication
    @xml XML
AS
DECLARE @hDoc INT

EXEC sp_xml_preparedocument @hDoc OUTPUT, @xml

UPDATE LoanReporting
SET ApplicationId = a.ApplicationId,
    CreateTimestamp = a.CreateTimestamp,
    LoanAmount = a.LoanAmount,
    SubmittedOn = a.SubmittedOn,
    ApprovedOn = a.ApprovedOn,
    LoanStatusId = a.LoanStatusId
FROM OPENXML (@hDoc, '/root/LoanApplication')
    WITH (ApplicationId INT '@ApplicationId',
          CreateTimestamp DATETIME '@CreateTimestamp',
          LoanAmount MONEY '@LoanAmount',
          SubmittedOn DATETIME '@SubmittedOn',
          ApprovedOn DATETIME '@ApprovedOn',
          LoanStatusId INT '@LoanStatusId',
          PrimaryCustomerId INT '@PrimaryCustomerId',
          CoSignerCustomerId INT '@CoSignerCustomerId') a
WHERE a.ApplicationId = LoanReporting.ApplicationId

INSERT INTO LoanReporting
    (ApplicationId, CreateTimestamp, LoanAmount, SubmittedOn, ApprovedOn, LoanStatusId, PrimaryCustomerId, CoSignerCustomerId)
SELECT ApplicationId, CreateTimestamp, LoanAmount, SubmittedOn, ApprovedOn, LoanStatusId, PrimaryCustomerId, CoSignerCustomerId
FROM OPENXML (@hDoc, '/root/LoanApplication')
    WITH (ApplicationId INT '@ApplicationId',
          CreateTimestamp DATETIME '@CreateTimestamp',
          LoanAmount MONEY '@LoanAmount',
          SubmittedOn DATETIME '@SubmittedOn',
          ApprovedOn DATETIME '@ApprovedOn',
          LoanStatusId INT '@LoanStatusId',
          PrimaryCustomerId INT '@PrimaryCustomerId',
          CoSignerCustomerId INT '@CoSignerCustomerId') a
WHERE NOT EXISTS (SELECT * FROM LoanReporting WHERE a.ApplicationId = LoanReporting.ApplicationId)

EXEC sp_xml_removedocument @hDoc
GO
CREATE PROCEDURE ProcessOLTPData_Customer
    @xml XML
AS
DECLARE @hDoc INT

EXEC sp_xml_preparedocument @hDoc OUTPUT, @xml

UPDATE LoanReporting
SET PrimaryEmailAddress = EmailAddress,
    PrimaryFirstName = FirstName,
    PrimaryLastName = LastName
FROM OPENXML (@hDoc, '/root/Customer')
    WITH (CustomerId INT '@CustomerId',
          FirstName VARCHAR(50) '@FirstName',
          LastName VARCHAR(50) '@LastName',
          EmailAddress VARCHAR(255) '@EmailAddress') a
WHERE PrimaryCustomerId = a.CustomerId

UPDATE LoanReporting
SET CoSignerEmailAddress = EmailAddress,
    CoSignerFirstName = FirstName,
    CoSignerLastName = LastName
FROM OPENXML (@hDoc, '/root/Customer')
    WITH (CustomerId INT '@CustomerId',
          FirstName VARCHAR(50) '@FirstName',
          LastName VARCHAR(50) '@LastName',
          EmailAddress VARCHAR(255) '@EmailAddress') a
WHERE CoSignerCustomerId = a.CustomerId

EXEC sp_xml_removedocument @hDoc
GO
CREATE PROCEDURE ProcessOLTPData
AS
DECLARE @xml XML
DECLARE @handle UNIQUEIDENTIFIER, @hDoc INT

WHILE 1 = 1
BEGIN
    SELECT @xml = NULL, @handle = NULL

    WAITFOR (RECEIVE TOP (1) @handle = conversation_handle,
                             @xml = CAST(message_body AS XML)
             FROM ReplData_Destination_Queue), TIMEOUT 1000

    IF @handle IS NULL
        BREAK

    EXEC sp_xml_preparedocument @hDoc OUTPUT, @xml

    IF EXISTS (SELECT * FROM OPENXML (@hDoc, '/root/LoanApplication'))
    BEGIN
        EXEC ProcessOLTPData_LoanApplication @xml
    END

    IF EXISTS (SELECT * FROM OPENXML (@hDoc, '/root/Customer'))
    BEGIN
        EXEC ProcessOLTPData_Customer @xml
    END

    EXEC sp_xml_removedocument @hDoc

    END CONVERSATION @handle
END
GO
You can now set the queue to run the ProcessOLTPData stored procedure as an activation procedure and it will process the data as it comes in.
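That queue change would look something like the sketch below; the reader count and execution context here are just reasonable defaults rather than anything the setup above dictates:

-- Have Service Broker fire ProcessOLTPData whenever messages arrive on the destination queue
ALTER QUEUE ReplData_Destination_Queue
WITH ACTIVATION (STATUS = ON,
                 PROCEDURE_NAME = ProcessOLTPData,
                 MAX_QUEUE_READERS = 1,   -- assumption: a single reader keeps processing strictly ordered
                 EXECUTE AS OWNER)
GO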
Hopefully someone finds this useful as a framework in their shop. If you have any questions please feel free to post them here.