I have 3 instances of SQL Server 2005 hosted on a 3-node cluster, using Polyserve clustering. Each instance is nominally hosted on its own node in the cluster.
I have configured Service Broker to route messages from INST-A (on NODE-1) to INST-C (on NODE-3), using TCP and a NetBIOS name. This NetBIOS name obviously uses the machine name of the node, rather than the virtual machine name of the instance.
Under normal conditions this works: messages sent from INST-A to INST-C are received and processed.
However, should INST-C fail over (for example, to NODE-1), the route from INST-A to INST-C is no longer valid; INST-C is now on a different node from the one specified in the route.
Service Broker is then unable to deliver messages to the queue on INST-C, so they back up in sys.transmission_queue on INST-A. To fix it, I have to update the route manually.
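For anyone diagnosing the same symptom: the backlog and the delivery error are visible on the sender. This is a minimal check (the column names are from the standard sys.transmission_queue catalog view; transmission_status holds the most recent delivery error text):

```sql
-- On INST-A: count undelivered messages per target service,
-- grouped by the last transmission error reported
SELECT to_service_name,
       transmission_status,
       COUNT(*) AS backlog
FROM   sys.transmission_queue
GROUP BY to_service_name, transmission_status;
```

During a failover of INST-C you would expect transmission_status to show a connection error against the stale node address.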
We have thousands of messages being sent every minute. A delay in delivery during an actual failover is expected, but would you not also expect Service Broker to recover on its own and process the backlog without manual intervention?
Is there something I'm missing in my configuration? Or do I need to set up some other means to automatically update the route upon failover? In which case, how can I programmatically determine the node to which it has failed-over?
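In case it helps frame the question: one approach I am considering is querying the failed-over instance for the physical node it currently runs on, then repointing the route. This is only a sketch; the node name NODE-1 and the port xxxx are placeholders matching the configuration below:

```sql
-- On INST-C (e.g. from a startup job that runs after failover):
-- SERVERPROPERTY('ComputerNamePhysicalNetBIOS') returns the NetBIOS name
-- of the node currently hosting the instance.
SELECT SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS current_node;

-- On INST-A: repoint the existing route at the node returned above
-- (substitute the actual node name and listener port).
ALTER ROUTE RouteToTargetService
WITH ADDRESS = 'TCP://NODE-1:xxxx';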
CREATE ENDPOINT ServiceBrokerTargetEndpoint
    STATE = STARTED
    AS TCP (LISTENER_PORT = xxxx)
    FOR SERVICE_BROKER (AUTHENTICATION = WINDOWS, ENCRYPTION = REQUIRED);

CREATE ROUTE RouteToTargetService
    WITH SERVICE_NAME = 'ServiceBrokerTargetService',
         ADDRESS = 'TCP://INST-C:xxxx';
INST-A and INST-C are both SQL Server 2005 Enterprise Edition.
Thanks for any assistance.
Software/Hardware used: SQL Server 2005 Enterprise, Polyserve clustering