Oct 22 2007   10:00AM GMT

Oracle patching: The bane of the DBA’s existence



Posted by: Clinek
Tags:
Managing an Oracle shop
Oracle database administration

Ask any Oracle DBA and they’ll tell you that the bane of their existence — well, one of them at least — is keeping up with Oracle’s continuous stream of patches and upgrades. As we reported last week, the latest volley of patches included 51 fixes for security vulnerabilities across Oracle’s array of database and application products. This quarterly Critical Patch Update (CPU) included:

  • 27 fixes for Oracle Database 10g and 9i, five of which may be remotely exploitable without the need for a username and password. The fixes address flaws in the core relational database management system, SQL execution, Oracle Database Vault and advanced queuing.
  • 11 security fixes for Oracle Application Server 10g Releases 2 and 3, seven of which may be remotely exploitable without the need for a username and password. The fixes repair flaws in Oracle HTTP Server, Oracle Portal, Oracle Single Sign-On and Oracle Containers for J2EE.
  • 8 fixes for flaws in Oracle E-Business Suite 11i applications, one of which can be remotely exploited by an attacker without authentication. Areas affected include Oracle Marketing, Oracle Quoting, Oracle Public Sector Human Resources, Oracle Exchange and Oracle Applications Manager.
  • 2 fixes for Oracle Enterprise Manager and 3 for Oracle PeopleSoft Enterprise products, affecting PeopleSoft Human Capital Management and PeopleTools.

Will all these fixes actually work? Will they break other unrelated systems? Welcome to the life of the DBA.

The most common complaint I hear when talking to DBAs is about patching and upgrading. What I don’t hear as much are suggestions about how to improve the process. Does Oracle need to implement automatic updating like Windows Update? Release better tested products? Or is the number of bug fixes manageable? Let’s hear your thoughts!

Thanks, Tim

17  Comments on this Post

 
  • Clinek
    Patching the RDBMS is manageable if you follow the basic patching guidelines: 1. examine the patch content; 2. decide whether it applies to your site; 3. download it and install it on a TEST server first; 4. evaluate the installation process; 5. verify that the fixes do not break your applications (regression testing); 6. plan the production deployment (a rough sketch of this test-first flow appears below). The same steps apply when you are using other Oracle products such as Application Server, E-Business Suite, etc., but that makes patch evaluation, installation and testing more complex, because some patches are incompatible with (or not tested or certified against) others — that is where it starts to get complicated. As for automatic patch updates like Microsoft Windows Update... forget it. If your Windows PC has a problem after an automatic software update, it affects you (possibly just one person), but if your corporate database has an issue after an automatic patch install, the entire organization may be impacted (which, of course, may also be the case if it is the Windows/Outlook mail server being patched). Ciao, Yves
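
To make the six steps above concrete, here is a minimal Python sketch of that test-first flow. It is not Oracle's tooling, just a driver around OPatch and a local test harness; the patch staging directory, the TEST home path and run_regression_tests.sh are placeholders for whatever your site uses, and the authoritative pre/post steps always come from the patch README.

#!/usr/bin/env python
"""Sketch of a test-first patch run: apply the patch to a TEST home,
then regression-test before planning the production rollout.
All paths, the patch staging directory and the test command are
placeholders for this example."""
import os
import subprocess
import sys

TEST_ORACLE_HOME = "/u01/app/oracle/product/10.2.0/test"   # placeholder
PATCH_STAGE      = "/stage/cpu_patch"                       # unzipped patch
REGRESSION_SUITE = ["./run_regression_tests.sh"]            # your own harness

env = dict(os.environ, ORACLE_HOME=TEST_ORACLE_HOME)

def run(cmd, **kwargs):
    """Run one step and abort the workflow on the first failure."""
    print("running:", " ".join(cmd))
    if subprocess.call(cmd, env=env, **kwargs) != 0:
        sys.exit("step failed: " + " ".join(cmd))

# Steps 1-2 (read the patch content, decide applicability) are manual.
# Step 3: install on the TEST server first.
run([TEST_ORACLE_HOME + "/OPatch/opatch", "apply", "-silent"], cwd=PATCH_STAGE)

# Steps 4-5: confirm the install, then regression-test the applications.
run([TEST_ORACLE_HOME + "/OPatch/opatch", "lsinventory"])
run(REGRESSION_SUITE)

# Step 6: only after everything passes do you schedule production deployment.
print("TEST install and regression suite passed; plan the production rollout.")

The point of the sketch is simply that nothing touches production until the TEST install and the regression suite have both passed.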
  • Clinek
    Haven't we always wanted Oracle to do a better job of regression testing their released code? And haven't we always wanted to know about a critical flaw before RMAN is involved? But who has the time to pore over Metalink looking for possible (recent) hits? I think they're going in the right direction with automated patch downloads and SR generation. Human DBAs will never be able to scale the possible permutations to predictively find an appropriate available patch, let alone the ramifications of applying it. I don't fear computers... I fear the lack of them. Ken
  • Clinek
    Oracle has a tool that does patch management... it's called Grid Control. Perhaps a little research about the available product set before a rant?
  • Clinek
    Matt: That's true, but our research shows that most people are still using 9i and not 10g -- and Grid Control only works with 10g, right? (I certainly could be wrong!) Thanks for your comment though, Tim
  • Clinek
    Implementing Oracle's Enterprise Grid Control is not a trivial task; Oracle has a five-day intensive class for Grid Control. 9.2 patches are fairly easy to do; 9.2 databases are very stable. The new "atomic" molecular patches for 10.2.0.3 OPatch are definitely a better direction for Oracle to be taking. Still, patching all databases is definitely a chore -- patching the IAS servers and backend databases is a major pain. I still have to get management to buy into having databases down every quarter for the CPU patches.
  • Clinek
    Tim, Grid Control does support 8.1.7, 9i, and 10g... we are currently using it today. I currently don't use it to patch my mid-tier, but I don't see why it couldn't. When the CPUs come out, it scans the Oracle Homes exposed to the security issues, so I assume this would work for Oracle IAS (or whatever they're calling it these days...). Now it's not a trivial product, with an additional required database and agents to install on all the servers to be managed, but it makes my life easier. It also gives me additional information on security compliance per host. I guess you could say I'm a fan of the product... ;-) I'm looking forward to using 11g with its Automatic Diagnostic Repository, which packages up trace files and environment info for you to upload to Metalink. With any luck, no more RDA requests from support.
  • Clinek
    Grid Control (ideally running on its own server with its own 10g database repository) is a major upgrade to the Enterprise Manager (plus agents) which is part of the 8i/9i database suite. If you're not already using that, then you're missing some major productivity-enhancing stuff. It does correctly identify the patches that need to be installed and will offer to do it for you (minimal downtime IF you have the DB and/or iAS EM Configuration licence $$$$). If the phrase "remotely exploitable without the need for a username and password" sends shivers down your spine, then all updates should be installed, as it seems every CPU has at least one of these. Point 2 should be "Carry out a risk assessment for not installing the patch (and one for installing it)". Management understand the implications of risk assessments and, in this context, should make a better decision as to allowing production servers to be patched.
  • Clinek
    Patching was a real pain until we came up with a technique for creating a Gold Oracle home and cloning this home. This technique saves a LOT of DBA time, reduces system downtime, and makes patches easy (the general shape of the technique is sketched below). Here is a link to an article I wrote on the subject: http://www.dba-oracle.com/t_patching_cloning_oracle_home.htm
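
For readers who want the general shape before clicking through, here is a rough Python sketch of the cloning idea under stated assumptions: the gold home is already fully patched, the paths and home name are placeholders, and the exact clone invocation for your release should be taken from the article above and Oracle's own cloning documentation.

"""Rough outline of cloning a pre-patched 'gold' ORACLE_HOME to a target
server: archive the gold home, unpack it at the target location, then
register the copy with Oracle's clone script. Paths and the home name
are placeholders for this example."""
import subprocess

GOLD_HOME   = "/u01/app/oracle/product/10.2.0/gold"    # already patched
TARGET_HOME = "/u01/app/oracle/product/10.2.0/db_1"    # new home on the target
ARCHIVE     = "/tmp/gold_home.tar.gz"

# 1. Archive the fully patched gold home (run on the gold server).
subprocess.check_call(["tar", "-czf", ARCHIVE, "-C", GOLD_HOME, "."])

# 2. Unpack the archive into the target location (after copying it over).
subprocess.check_call(["mkdir", "-p", TARGET_HOME])
subprocess.check_call(["tar", "-xzf", ARCHIVE, "-C", TARGET_HOME])

# 3. Register and relink the copy so the central inventory knows about it.
subprocess.check_call([
    "perl", TARGET_HOME + "/clone/bin/clone.pl",
    "ORACLE_HOME=" + TARGET_HOME,
    "ORACLE_HOME_NAME=OraDb10g_gold_clone",   # placeholder home name
])

The appeal is that the gold home is patched once and tested once, and each target server only has to receive and register a copy rather than run the whole patch process.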
  • Clinek
    Manageability of quarterly CPU patching is relative to the number of databases, high-availability requirements, etc. in your environment, as well as the need to patch development and QA environments before rolling it out to production. I would think Oracle testing their products better, to ensure fewer security-related bugs before release, would make sense from a customer standpoint. That would perhaps reduce the frequency of CPUs to half-yearly instead of quarterly. Also, the ability to install only what your environment really needs would help, since it would reduce the applicability of certain security patches. I like the idea of having a standard gold copy of a release, patching it and then cloning it to reduce time. But again, if your footprint is large and you've been applying one-off patches in certain environments to fix bugs, it'll just make maintenance of that gold copy a nightmare :) ...My two cents. Bottom line, life would be much easier as a DBA if the products underwent extensive security testing and fixes before release.
  • Clinek
    Creation of a gold Oracle home sounds wonderful. However, there might be an issue here -- my databases run on Red Hat Linux SMP, and kernel patches are applied haphazardly by the systems group, which *should* not make any difference to Oracle, but then again... So the servers on which my database software resides will always be out of sync as far as the kernel goes. However, I think it's a great idea to at least have a separate software area you can patch.
  • Clinek
    Personally, I really like the scheduled, consistent way the CPUs are made available by Oracle. It is much easier to get buy-in from above when you can detail a controlled process, planned out well into the next year. This is actually a great boon for our sysadmins, who double-up on our downtime to perform maintenance activities of their own. As for the actual implementations, we've had a lot of success on our 'vanilla' systems, requiring some basic testing & a thorough scouring of the patch notes for surprises. Streams and Advanced Rep. have never experienced any difficulties in testing or production during the application of a CPU. Of course, patching a system utilizing Data Guard is very sensitive to following an exact process as well, but it really isn't a big deal. However, RAC and HAA have had a small share of irregularities in testing - And of course, these are generally the systems you want to be the most bulletproof. It takes a bit more headwork, but we've never had a CPU that we couldn't get to work well.
  • Clinek
    Grid Control is only half the answer. We used to get Oracle alerts, but more often than not we needed a one-off patch for a bug rather than a standard patch set. Which brings me to the real problem at our site: patches breaking other, currently working code. Exhaustive testing for every patch is not feasible. Complete unit tests for every piece of code and function would allow faster exhaustive testing, but that is still far off for my site. We have too much legacy code. We have had too many patches break other code. We use a lot of Oracle's DB functionality. Our solution has been to patch only when absolutely needed and work around bugs. That is faster than rolling out and testing to dozens of databases with different apps using different functionality. It still comes down to a lack of confidence in Oracle's patches. Bummer, but true.
  • "John
    Patching every 12 weeks is simply a pain. 24*7*365 is a fantasy with this. All it means is failure to meet uptime SLRs and that Oracle has a predictable cornucopia of faults every 3 months. Bah! Humbug! Bob Cratchit... your're fired! An early Merry Xmas to you all!
  • Rayb
    My problem with patching is the process. Why does OUI or OPatch require a separate, unmovable directory for the inventory? Why wasn't that inventory placed directly inside the database? There is no mobility.
  • Clinek
    It is with both amusement (to some extent) and a sense of relief that I went through the comments on this blog! Amusement, because when core DBAs managing only Oracle databases complain so much about patching, I really wonder what Oracle Applications DBAs like me should say! I am left speechless! A sense of relief, because it is good to know that you are not all alone in this world (you know what I mean). Well, bugs are a bit of a headache, but they also seem to be a steady revenue earner for Oracle as well as for DBAs.
  • Clinek
    Has anyone had experience scripting Oracle CPU patches? I mean a hands-off approach that lets it rip through the whole CPU patch process, including pre-patch and post-patch steps (from backing up the current database, ORACLE_HOME directory and inventory directory, to post-install scripts like catcpu.sql to verify the patch has been installed successfully); one possible sequence is sketched below. Are there any risks to doing it this way, especially in a production environment, versus doing it manually to watch every step and correct any errors along the way? Any thoughts or feedback? Thanks, Robert
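
One possible shape for such a hands-off run, sketched in Python. Every path is a placeholder, the instance/listener shutdown and restart are only noted as comments, and the catcpu.sql location and any extra steps must come from the specific patch README; this is an illustration of the sequence Robert describes, not a vetted production script.

"""Sketch of a scripted CPU apply: back up the software home and inventory,
apply the patch with OPatch, then run the post-install catcpu.sql.
Shutdown/startup of the instance and listener are left as comments because
the exact sequence depends on the environment; all paths are placeholders."""
import os
import subprocess

ORACLE_HOME = "/u01/app/oracle/product/10.2.0/db_1"
INVENTORY   = "/u01/app/oraInventory"
PATCH_STAGE = "/stage/cpu_patch"            # unzipped CPU patch
BACKUP_DIR  = "/backup/pre_cpu"
CATCPU      = PATCH_STAGE + "/catcpu.sql"   # actual location comes from the README

env = dict(os.environ, ORACLE_HOME=ORACLE_HOME)

def run(cmd, **kwargs):
    """Run one step; check_call stops the script at the first failure."""
    print("running:", " ".join(cmd))
    subprocess.check_call(cmd, env=env, **kwargs)

# Pre-patch: cold backup of the software home and the central inventory.
run(["tar", "-czf", BACKUP_DIR + "/oracle_home.tar.gz", "-C", ORACLE_HOME, "."])
run(["tar", "-czf", BACKUP_DIR + "/inventory.tar.gz", "-C", INVENTORY, "."])

# (Shut down instances and listeners here, per the patch README.)

# Apply the patch from its staging directory.
run([ORACLE_HOME + "/OPatch/opatch", "apply", "-silent"], cwd=PATCH_STAGE)

# Post-patch: (restart the instance first,) then run the post-install SQL
# as SYSDBA and verify the patch level with an inventory listing.
run(["sqlplus", "/ as sysdba", "@" + CATCPU])
run([ORACLE_HOME + "/OPatch/opatch", "lsinventory"])

Because each step uses check_call, the run stops at the first failing command rather than pressing on, which is the main safety concern Robert raises about letting it run unattended in production.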
  • Clinek
    I agree, patching is a pain. However, we recently moved some databases over to 10g and installed Grid Control. At first GC was a pain, but we're finally getting the hang of it. Without getting bogged down in GC topics (of which there are plenty), the focus of this note is how GC was used in my test environment to apply the latest round of critical patches for January. It actually worked. From the GC main page, go to Targets, select Databases, select a database, then select Maintenance. Search for the patch number on Metalink (you must have a valid Metalink account). Download the patch to the patch repository, which if not specified will install to Oracle Home/emsstagedpatches. You may want to specify pre and post jobs to stop and start the databases/listeners/opmn, etc. I was surprised that it actually worked. After reviewing the output from the job, I checked the OPatch inventory and voila... it was properly updated. I think this is the roadmap for the future. My team and I are very pleased that finally... finally we may have a tool to help with the drudgery of patching each quarter. Here is a link that can help with OEM Grid Control: http://download-west.oracle.com/docs/cd/B16240_01/doc/em.102/b40002/deploy_proc.htm
