Posted by: Richard Evans
10g, 11g, Data Guard, implementation, improvement, logical standby, RMAN
Today we demonstrated the capabilities of Oracle Data Guard. I’ve implemented it at a couple of other locations and absolutely love it. I’d still consider myself a DG newbie, but it’s so straightforward and easy to manage at this point (10gR2) that you don’t have to be an expert. I’ve read that it gets even better with 11g! I can’t wait for some of the new features they’ve integrated into RMAN for Data Guard.
Even if you’re not using RAC, check out page 33 of the Data Guard 11g Installation and Configuration On Oracle RAC Systems document.
And Uwe Hesse, an Oracle Instructor, has put together a nice document demonstrating 11g’s Data Guard Features: Oracle 11g Data Guard in Action
Back on topic now: we have Business Analysts who want to run reports against the data from our production server. Since we don’t really want them bogging down the production server, we’ve been using a Korn shell script to restore the Level 0 backup of the production DB from NetBackup to a reporting Zone. This has worked pretty well for a couple of years now, but when I came on board I asked why we hadn’t considered Data Guard. Mostly it was a lack of experience with the product, and when you’re a small team with a large workload, it’s hard to tackle products that are out of arm’s reach. I completely understand that, been there! Still am there!!
So I put together a scenario using one of their development DBs. I had permission to start/stop that DB as necessary and used it as my primary DB. I duplicated this DB to a separate Solaris Zone that would eventually become my logical standby database.
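For the curious, the broad shape of what I did was an RMAN duplicate for standby, then converting that physical standby into a logical one. This is only a sketch (the connection and DB names here are made up for illustration, and details like standby redo logs, init parameters, and listener entries are left out):

```
-- RMAN: duplicate the primary to the new zone as a physical standby
RMAN> CONNECT TARGET sys@devdb       -- the primary (name invented)
RMAN> CONNECT AUXILIARY sys@rptdb    -- the standby-to-be (name invented)
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY NOFILENAMECHECK;

-- SQL*Plus on the primary: put the LogMiner dictionary into the redo stream
SQL> EXECUTE DBMS_LOGSTDBY.BUILD;

-- SQL*Plus on the standby: convert it to a logical standby and start SQL Apply
SQL> ALTER DATABASE RECOVER TO LOGICAL STANDBY rptdb;
SQL> ALTER DATABASE OPEN RESETLOGS;
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;
```

The full procedure is in the Data Guard documentation for your release; don’t run any of this from a blog post without checking it against the docs first.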
I’ll post the steps I took on here sometime soon — first I want to make sure I have the bumps smoothed over.
The big thing was our demonstration today. I put together a few (~7) PowerPoint slides to give them an idea of what I want to do:
- What is Data Guard?
- How does Data Guard work?
- Why use Data Guard instead?
- What’s the catch?
- How would we implement?
- How long will this take??
Generally our customers don’t care about the details, they just want to know what the catch is, how long it’ll take to implement, and how much downtime they’ll experience. Since I’m a technical guy, I like throwing in some technical information to see exactly how long it takes for their eyes to glaze over!
If you aren’t using, or haven’t looked at creating, a logical standby database for reporting purposes, you really should check it out. You can tell DG to replicate only specific schemas, to apply with a lag time, and you can create users that exist only on the reporting DB as well as index the **** out of it so their queries perform better. Yes, I know indexes come with an overhead expense, but this is our reporting DB specifically, not production, so I really don’t care if it takes a little extra space or a few extra CPU cycles to insert, update, or delete data in the indexes. The benefits outweigh the costs in such a scenario.
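To make those knobs concrete, here’s roughly what they look like in SQL. All schema, service, table, and index names below are invented for the example, so treat this as a sketch and verify against the Data Guard docs for your release:

```sql
-- On the primary: ship redo to the reporting standby with a delay
-- (DELAY is in minutes; 240 = 4 hours, purely as an example)
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=rptdb LGWR ASYNC DELAY=240 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)';

-- On the logical standby: SQL Apply must be stopped before changing skip rules
ALTER DATABASE STOP LOGICAL STANDBY APPLY;

-- Don't replicate a schema the analysts never touch (schema name invented)
EXECUTE DBMS_LOGSTDBY.SKIP('DML', 'SCRATCH', '%');
EXECUTE DBMS_LOGSTDBY.SKIP('SCHEMA_DDL', 'SCRATCH', '%');

-- Reporting-only objects: relax the guard just long enough to add them
ALTER SESSION DISABLE GUARD;
CREATE INDEX sales_rpt_ix ON sales (region, period);  -- index invented for illustration
ALTER SESSION ENABLE GUARD;

ALTER DATABASE START LOGICAL STANDBY APPLY;
```

Reporting-only users can be created the same way while the guard is relaxed, and the database-wide setting (ALTER DATABASE GUARD ALL, STANDBY, or NONE) controls how locked down the standby stays for everyone else.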
It’s always a great feeling when you can alleviate some of the monotonous administrative tasks, e.g. weekly duplications on a Friday afternoon when you really want to be at the bar instead. This way, their data is always current; now they won’t complain that we duplicated it two days before the month ended when they really needed the whole month’s data available! They will have near-real-time data at their fingertips.
The next thing is to get them out of fat-client reporting tools and take a good, hard look at Application Express.