Urgent Help

Tags:
ECC5
Oracle administration
Oracle database administration
Oracle database availability
I have some doubts related to Oracle databases. I am on an ECC 5 implementation project (OS: Windows, DB: Oracle).

My question: after installing ECC 5, the default database size is around 50 GB, and a 300 GB HDD has been allotted to the database. The client wants us to utilize the remaining 250 GB (300 - 50 = 250) up front by adding datafiles to the tablespaces, so that the administrative work of adding datafiles later is reduced, and also so that I/O will be good since the entire HDD is utilized.

Is the client right? Will having large datafiles affect performance?

Waiting for your reply. Thanks in advance. Regards, Abtest

Answer Wiki



Performance is affected by space definitions for each table (INITIAL/NEXT) or for the entire tablespace. Read Oracle’s documentation on CREATE TABLE and CREATE TABLESPACE for clarification.
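As a sketch of the space definitions this answer refers to (all names and paths below are hypothetical, not from the question):

```sql
-- Hypothetical tablespace and table. In a dictionary-managed tablespace
-- the per-table STORAGE clause (INITIAL/NEXT) drives extent allocation;
-- in a locally managed tablespace with UNIFORM extents, the tablespace
-- setting largely overrides it.
CREATE TABLESPACE app_data
  DATAFILE 'D:\oracle\data\app_data01.dbf' SIZE 2G
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

CREATE TABLE orders (
  order_id NUMBER PRIMARY KEY,
  created  DATE
) TABLESPACE app_data
  STORAGE (INITIAL 10M NEXT 10M);
```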

Performance on a database with only ONE tablespace will depend on the physical placement of tables and their indexes (i.e., primary and foreign keys) within the design. Adjacent structures perform faster than widely separated ones. The type of disk (internal or NAS/SAN), rotation speed, etc., also affects performance.

All the physical tuning is good, but even if the system is slow, as long as it meets the users' needs, performance is ultimately fine.

Discuss This Question: 3  Replies

 
  • Diegop
    Hello Sir, I totally understand your question, and it is difficult to reply. Assuming that there are going to be a lot of transactions, I'd recommend having several tablespaces, each with its own datafiles. Why? Because:
    - in case of a failure, you reduce the chances of losing all of your data, as it will be spread across several datafiles;
    - you can define different block sizes for your tablespaces based on the type of data to store;
    - it is easier to manage.
    So I'd recommend using several datafiles. Regards, Diego
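The nonstandard block size Diego mentions might look like this (tablespace name and path are hypothetical; a nonstandard block size only works if the matching buffer cache is configured first):

```sql
-- Assumption: the instance already has a 16K buffer cache, e.g.
--   ALTER SYSTEM SET DB_16K_CACHE_SIZE = 64M;
-- Without it, the CREATE fails.
CREATE TABLESPACE lob_data
  DATAFILE 'D:\oracle\data\lob_data01.dbf' SIZE 4G
  BLOCKSIZE 16K;
```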
  • Psw2000
    Hi, I have a similarly sized Oracle database on a similarly sized disk (SAN) array. Your current ratio (50/300) is good. I'll list the reasons you need the spare disk space, and then why it's a bad idea to have extra-large tablespaces.

    Spare disk space is needed for:
    - Oracle archive logs.
    - RMAN disk backups (mine are about 1.5x - 2x the size of the database).
    - MS Windows monthly update backups.
    - Space to do Oracle upgrades (room for new software, before-and-after backups of the database, and archive logs).
    - Space to do application upgrades (room for the archive logs generated by updating every row in your largest table. Twice.).
    - 3-5 years' worth of growth (the life expectancy of the disk unit).

    Now, if you fill the remaining space with datafiles:
    - What happens if growth happens in a tablespace you did not expect?
    - What happens if the application changes so that a new tablespace is required?
    - Can you actually do a 'cold' tape backup of the entire disk?
    - Adding a datafile takes less time than reading this response, so where is the measurable time saving?
    - Datafiles can also be set to 'autoextend' up to a maximum (or unlimited) size, achieving the same desired result without taking up all the disk space in one hit.

    My own experience of Windows systems in general is that I/O performance goes down as disk usage goes up; this is noticeable when more than 80% of the disk is utilized. Hope this helps. Regards, Peter Wilkinson
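Peter's autoextend suggestion can be sketched as follows (the file path and size limits are hypothetical examples):

```sql
-- Grow the datafile on demand in 100 MB steps, capped at 20 GB,
-- instead of pre-allocating the whole disk in one hit.
ALTER DATABASE DATAFILE 'D:\oracle\data\app_data01.dbf'
  AUTOEXTEND ON NEXT 100M MAXSIZE 20G;
```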
  • JeffCote
    Please excuse my lack of knowledge surrounding SAP; this response is based solely on Oracle.

    First, a question: is this 300GB on a single platter or multiple drives? If multiple drives, then the client is correct, provided you place the datafiles correctly across drives. If the 300GB is a single drive, however, you'll see no performance gains. (And I might add, this is a very poor physical design, since you have a glaring single point of failure.)

    Another thing to consider, which was briefly touched on here by others, is your backup strategy. If you use RMAN for backups, the increased size of the database (i.e., empty space) will have a minimal impact, aside from potential scattered disk reads. However, if you plan on performing OS-level cold backups, that 300GB will take roughly 6 times longer to back up than your current 50GB database.

    Also, if your tablespaces are locally managed (vs. dictionary-managed) and have segment sizing set to auto, it's not difficult to set the tablespaces to autoextend, provided you set the extension to a multiple of the segment size. This way, the tablespaces will grow uniformly and only as needed. You can save that extra disk for eventual upgrades, etc. Hope this helps! Jeff
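One reading of Jeff's advice, sketched as DDL (names and sizes are hypothetical; "segment sizing set to auto" is interpreted here as locally managed, system-allocated extents):

```sql
-- Locally managed tablespace with Oracle choosing extent sizes
-- (AUTOALLOCATE uses a small set of extent sizes such as 64K/1M/8M),
-- so a 64M autoextend step is a clean multiple of any of them and
-- the file grows uniformly, only as needed.
CREATE TABLESPACE hist_data
  DATAFILE 'D:\oracle\data\hist_data01.dbf' SIZE 1G
    AUTOEXTEND ON NEXT 64M MAXSIZE 20G
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE
  SEGMENT SPACE MANAGEMENT AUTO;
```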
