Correct isolation level for a SQL 2005 subscriber

140 pts.
Tags:
OLAP/OLTP
SQL 2005 Database
SQL queries
SQL Server 2005 Enterprise Edition
Hello, I'm specifying a SQL 2005 Enterprise Edition back end for an OLTP application that has one main DB (200GB, which could grow to 600GB), a subset of which will be replicated to a separate subscriber for running read-only queries. Here's the spec; any comments appreciated, especially around the use of READ_COMMITTED_SNAPSHOT on the subscriber.

Transaction server (including distributor), READ_COMMITTED. 2-node active/passive cluster. Each node: 4 quad-core 64-bit processors, 32GB RAM, 72GB disks.
C: 2 disks (1+1 RAID 1) 72GB. OS + binaries + page file for crash log.
P: 2 disks (1+1 RAID 1) 72GB. 2nd page file.
SAN (36GB disks, 64-bit partition offset, dedicated unless specified):
D: 8 disks (4+4 RAID 10) 144GB. TempDB (16 files) + TempDB log.
E: 4 disks (2+2 RAID 10) 72GB. System DBs + distributor.
F: 8 disks (4+4 RAID 10) 144GB. App DB log file (full recovery).
G: 8 disks (4+4 RAID 10) 144GB. App DB file1.
H: 8 disks (4+4 RAID 10) 144GB. App DB file2.
I: 8 disks (4+4 RAID 10) 144GB. App DB file3.
J: 8 disks (4+4 RAID 10) 144GB. App DB file4.
K: 11 disks (10+1 RAID 5) 360GB (SAN shared). Compressed full and transaction log backups.

Subscriber cluster (one-way transactional replication from the transaction server), READ_COMMITTED_SNAPSHOT. 2-node active/passive cluster. Each node: 4 quad-core 64-bit processors, 32GB RAM, 72GB disks.
C: 2 disks (1+1 RAID 1) 72GB. OS + binaries + page file for crash log.
P: 2 disks (1+1 RAID 1) 72GB. 2nd page file.
SAN (36GB disks, 64-bit partition offset, dedicated unless specified):
D: 8 disks (4+4 RAID 10) 144GB. TempDB (16 files) + TempDB log.
E: 2 disks (1+1 RAID 1) 36GB. System DBs.
F: 4 disks (2+2 RAID 10) 72GB. App DB log file (simple recovery).
G: 5 disks (4+1 RAID 5) 144GB. App DB file1.
H: 5 disks (4+1 RAID 5) 144GB. App DB file2.
I: 5 disks (4+1 RAID 5) 144GB. App DB file3.
J: 5 disks (4+1 RAID 5) 144GB. App DB file4.
K: 5 disks (4+1 RAID 5) 144GB (SAN shared). Space for one compressed backup to resync replication.
Instant file initialization will be enabled on both clusters to improve restore and failover times. Indexes will be defragmented as necessary using online rebuilds with the SORT_IN_TEMPDB option. Regards, SCM.
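For reference, a sketch of the subscriber-side settings described above. The thread never names the database or its tables, so AppDB and dbo.SomeLargeTable are placeholders:

```sql
-- Enable read committed snapshot on the subscriber database.
-- This requires (near-)exclusive access to the database;
-- WITH ROLLBACK IMMEDIATE disconnects other sessions so the ALTER can complete.
ALTER DATABASE AppDB SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

-- Online index rebuild, sorting in tempdb (an Enterprise Edition feature).
-- Note: in SQL 2005, ONLINE = ON is not allowed for indexes that include
-- LOB columns, so per-index rebuilds may be needed in practice.
ALTER INDEX ALL ON dbo.SomeLargeTable
REBUILD WITH (ONLINE = ON, SORT_IN_TEMPDB = ON);
```

Worth noting against the disk spec above: row versioning keeps the version store in tempdb, so the subscriber's dedicated TempDB spindles will carry that extra load once READ_COMMITTED_SNAPSHOT is on.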

Answer Wiki


What sort of transaction load are you expecting on this system? This is a very large hardware setup.

Instant file initialization will increase the load on your disks while data is being written to new data pages, as the pages then have to be initialized at the point the data is actually written to disk.

Read Committed should be just fine unless you are pushing very large transactions through the replication.
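Either way, it's worth verifying what a database is actually configured with: the sys.databases catalog view exposes both snapshot-related settings. A quick check (database name assumed):

```sql
-- is_read_committed_snapshot_on = 1 means plain READ COMMITTED reads
-- use row versions from tempdb instead of taking shared locks.
-- snapshot_isolation_state_desc covers the separate SNAPSHOT isolation level.
SELECT name,
       is_read_committed_snapshot_on,
       snapshot_isolation_state_desc
FROM sys.databases
WHERE name = 'AppDB';
```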

Discuss This Question: 2  Replies

 
  • SuperCoolMoss
    Thanks Mr. D. This is an upgrade to a current SQL 2000 solution. The current spec is 32-bit: 4 dual-core CPUs and 4GB memory on the transaction server, 6GB via AWE on the query server, shared SAN disks, and a stand-alone distributor. The SQL 2005 transaction server will co-locate the distributor. We have around 300 connections at peak times on both clusters. I'll try to get some transaction throughput figures. We've had issues with blocking on the subscriber, which were resolved by using NOLOCK hints on the majority of queries. I thought that, as we're moving to SQL 2005, we may as well use 2005's snapshot isolation to hopefully resolve all locking issues without having to rely on dirty reads. Management have decided to beef up the hardware whilst upgrading to SQL 2005. They've decided on the memory and CPUs - I'm specifying the disk infrastructure and have decided to give them the top spec, with the expectation that it'll probably be reduced due to cost. Thanks, I'll take on board your suggestion not to use instant file initialisation to improve writes. Regards, SCM.
  • Denny Cherry
    Snapshot isolation saves you from dirty reads, but it gives you old data instead. It all depends on which you would rather have. Rather than changing the process, it would be better to find out what is causing the blocking and correct the underlying issue. It could simply be that you need to reduce the number of statements being replicated in each transaction, and your blocking issue will go away.
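One concrete way to cap the size of replicated transactions, in line with the advice above (though the thread doesn't spell out a mechanism): the SQL 2005 Log Reader Agent accepts a -MaxCmdsInTran parameter that splits large publisher transactions into smaller batches of commands before they reach the distributor. A hedged sketch of the agent command line, with placeholder server and database names:

```
REM Sketch only - server names, database name, and batch size are assumptions.
REM -MaxCmdsInTran 100 makes the Log Reader Agent chop each large publisher
REM transaction into chunks of at most 100 commands, shortening how long
REM locks are held on the subscriber while each chunk is applied.
logread.exe -Publisher MYPUB01 -PublisherDB AppDB -Distributor MYDIST01 -MaxCmdsInTran 100
```

The trade-off is that the subscriber can briefly expose partial transactions, so it suits read workloads that tolerate that (as a reporting subscriber typically does).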
