Upgrading to a 64-bit SQL Server 2005 environment

Tags:
SQL Server 2005 (64-bit)
SQL Server memory
SQL Server upgrades
We are replacing an existing SQL Server 2005 32-bit environment with a new SQL Server 2005 64-bit Enterprise Edition environment. The new solution will be a two-node Microsoft cluster running clustered SQL Server, all 64-bit Enterprise Edition. The only issue we are having is determining a good baseline for how much memory to install. Can you suggest a minimum amount of memory to handle the 64-bit OS, the application, and two or three instances?
ASKED: February 2, 2009  6:28 PM
UPDATED: March 5, 2009  5:20 PM

Answer Wiki


With the price of memory today, the easy answer would be “as much as the hardware can accommodate”.

A more general rule of thumb for SQL Server, though, is at least 12 GB per CPU core. This is so general that you will get a lot of people questioning it.

The actual answer will depend on an analysis of your system's real performance: how much you will be able to capitalize on IO caching, and how much calculation and aggregation your typical query workload involves. A good performance tuning expert should be able to use Performance Monitor and SQL Server Profiler with some test cases to see what kind of caching benefit you can get.
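
As a concrete starting point for that kind of analysis (a minimal sketch, assuming you can query the instance and have VIEW SERVER STATE permission), the sys.dm_os_buffer_descriptors DMV shows roughly how much of each database is currently held in the buffer pool:

    -- Approximate buffer pool usage per database (each cached page is 8 KB)
    SELECT DB_NAME(database_id) AS database_name,
           COUNT(*) * 8 / 1024  AS cached_mb
    FROM   sys.dm_os_buffer_descriptors
    GROUP  BY database_id
    ORDER  BY cached_mb DESC;

If the hot data already fits comfortably in cache under a representative load, extra RAM buys you little; if pages are constantly being evicted, more memory should pay off.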

Since you are making the move to 64-bit, why not also consider moving up to SQL Server 2008? Since, I assume, you are going to go through a full testing cycle on the new hardware anyway, the benefits of SQL Server 2008 are too significant not to consider.

Discuss This Question: 4  Replies

  • Denny Cherry
12 Gigs per core? That's a ridiculous amount of RAM to recommend that someone purchase. Where did you get this number from? How can you recommend an amount of RAM for a system without knowing how big the database will be? Just because a database is clustered doesn't mean that it is a large database. It could be a critical system holding only 10 Gigs of data on an 8-core system. Why would someone with a system that size need 96 Gigs of RAM? That's just a waste of RAM.
  • Dschwant
    Like I said "This is so general, though, that you will get a lot of people questioning it." My take is that, with the price of RAM, put in as much as the thing can fit. If you are doing a significant amout of re-reads in IO, the performance impovements you get from having it cached are very impressive, and the goal in performance improvement is to remove the bottlenecks. But, it is like playing "whack-a-mole" as once one is dealt with another one will crop up. I guess since I am used to working with very large databases, my recommendations will be on the top end. But, as I said, the only way to really know what you need is to analyse and understand your environment. Noone can give a good recommendation without knowing a lot of mroe details. So, if you just want a recommendatuion with no analysis, then, "Damn the torpedoes! Full speed ahead!", give it 24 GB per core :-)!
  • SuperCoolMoss
Can you suggest a minimum amount of memory to handle the 64-bit OS, application & two or three instances? As a minimum, I'd suggest starting with a similar amount of RAM to what you're using on the existing 32-bit environment(s) and comparing performance under load. I wouldn't correlate memory with the number of CPU cores. Instead, you should consider the current and future database sizes and the number of user connections to those databases (a couple of sizing queries are sketched after the replies). Memory is comparatively cheap, so it's probably OK to slightly over-estimate! Regards, SCM.
  • Naterw
32-bit or 64-bit, the rules for memory are really the same. Watch your page life expectancy counter primarily (there's a query for it sketched after the replies): if it's very low (below 300) you need more memory; if it's very high (hundreds of thousands) you could make do with less. As a general rule for my systems, I like to keep memory equal to roughly 50% of the database size. Anything less than this and the PLE tends to nose-dive very quickly. This, of course, assumes that you're not storing large quantities of infrequently accessed data. It also depends on the usage of the server: a data warehouse should probably be higher than that depending on access frequency, while for an OLTP server you could get away with lower numbers.
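
To put numbers behind the suggestions above, here are a few small sketches (assumptions: SQL Server 2005 or later and VIEW SERVER STATE permission; rough per-database figures are taken to be enough for sizing). The first reads the page life expectancy counter Naterw mentions; the others total file sizes and count current connections per database, the figures SuperCoolMoss suggests basing the estimate on:

    -- Page life expectancy in seconds (sustained low values suggest memory pressure)
    SELECT [object_name], cntr_value AS ple_seconds
    FROM   sys.dm_os_performance_counters
    WHERE  counter_name = 'Page life expectancy'
    AND    [object_name] LIKE '%Buffer Manager%';

    -- Data + log size per database, in MB (sys.master_files reports size in 8 KB pages)
    SELECT DB_NAME(database_id) AS database_name,
           SUM(size) * 8 / 1024 AS size_mb
    FROM   sys.master_files
    GROUP  BY database_id
    ORDER  BY size_mb DESC;

    -- Current connections per database
    SELECT DB_NAME(dbid) AS database_name, COUNT(*) AS connections
    FROM   sys.sysprocesses
    WHERE  dbid > 0
    GROUP  BY dbid;

Running these against the existing 32-bit instances under a typical workload gives a baseline to size the new 64-bit cluster against.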