CPU overhead on DASD quota products

Direct access storage device
Disk quota management
We are considering use of one of the OEM products that allow real-time tracking of DASD dataset usage and the setting of quotas on usage, with optional prevention of allocations over budget. I have unsuccessfully tried to find any information on the processor overhead these products carry. Example products: DTS Software DLimit; BMC Mainview SRM Reporting (formerly BUDGET DASD); CA Quota (formerly Sterling VAM Quota).

Answer Wiki


You’ve raised a really good question, particularly for sites with sub-capacity licensing (meaning extra CPU seconds can raise software licensing costs across the board).

I’m unfamiliar with these three products. Mind you, I don’t think I’ve ever seen CPU overhead data published on the web.

However, for many monitoring products, CPU usage and other performance data are available. You just need to ask your software vendor (and don’t be afraid to be persistent).

When you get the stats, take some time to look at the fine print – there’s no doubt that they’ll all want to present their product in the best light.

Hope this helps.

For the benefit of any other users going through this process, I include here the criteria we established. Note that this is purely for the IBM z/OS environment.

DASD Quota Software Evaluation

Core requirements and capabilities have been identified as:

The ability to monitor and control DASD capacity usage in real time for individual programmes, projects or other areas, i.e. a set and sub-set structure controllable at each level.

Each set or sub-set should allow for pre-arranged or default capacity values to be assigned, and these must be dynamically adjustable.

The ability to pre-warn any ‘account’ users who are approaching their capacity limit.
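As an illustration of what such a pre-warning amounts to (this is a minimal sketch, not any vendor’s actual interface; the account names, quotas and 80% threshold below are hypothetical examples):

```python
# Illustrative sketch only: a pre-warning check compares current usage
# against a configurable fraction of the assigned quota.
# Account names, quotas and the 0.80 threshold are hypothetical examples.

def warn_level(used_mb: float, quota_mb: float, threshold: float = 0.80) -> str:
    """Classify an account's usage against its quota."""
    if quota_mb <= 0:
        return "NO-QUOTA"
    ratio = used_mb / quota_mb
    if ratio >= 1.0:
        return "OVER-QUOTA"
    if ratio >= threshold:
        return "WARNING"
    return "OK"

# Hypothetical accounts: name -> (used MB, quota MB)
accounts = {"PROJ1": (850, 1000), "PROJ2": (400, 1000), "PROJ3": (1100, 1000)}
for name, (used, quota) in sorted(accounts.items()):
    print(name, warn_level(used, quota))
```

A real product would drive the warning from allocation-time exits or periodic scans rather than a loop like this, but the threshold comparison is the same.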

The ability for account names to be generated dynamically from literals, generic variables, or a combination of both – for example dataset names (including part names or combined parts), RACF users or groups, program, job or DD names, storage group name masks, volume masks, or SMS classes.
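The mask-driven part of this requirement can be sketched as an ordered rule table where the first matching mask assigns the account. This is purely illustrative: real products have their own mask syntax, and the rules and dataset names below are hypothetical.

```python
# Illustrative sketch only: deriving an account name from a dataset name
# via simple generic masks, first match wins. Real products use their own
# mask syntax; these rules and names are hypothetical examples.
import fnmatch

# Ordered rules: (dataset-name mask, account name)
RULES = [
    ("PROD.PAYROLL.*", "PAYROLL"),
    ("TEST.*",         "TEST-POOL"),
]

def account_for(dsname: str, default: str = "UNASSIGNED") -> str:
    for mask, account in RULES:
        # fnmatchcase: case-sensitive shell-style matching
        if fnmatch.fnmatchcase(dsname, mask):
            return account
    return default
```

For example, `account_for("TEST.PROJ1.DATA")` falls under the `TEST.*` rule, while anything unmatched lands in the default account.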

Allow de-centralised control of ‘accounts’, controllable via RACF or a similar mechanism.
For example, a programme may be assigned capacity over which it controls usage by the sub-projects belonging to that programme.

Real-time reporting of accounts and sub-accounts should be available, with access to these reports controllable via RACF or a similar mechanism.

Ability to build a historical view of each account or sub-account usage, with reporting of selected periods as required.
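A historical view of this kind reduces to keeping dated usage samples per account and querying them for a period, including the high-water mark mentioned later in this thread. A minimal sketch, with hypothetical accounts and sample data:

```python
# Illustrative sketch only: accumulating dated usage samples per account
# and reporting the high-water mark for a selected period.
# Account names and figures are hypothetical examples.
from collections import defaultdict
from datetime import date

history = defaultdict(list)  # account -> list of (sample_date, used_mb)

def record(account: str, when: date, used_mb: int) -> None:
    history[account].append((when, used_mb))

def high_water(account: str, start: date, end: date) -> int:
    """Peak recorded usage for the account within [start, end]."""
    samples = [mb for d, mb in history[account] if start <= d <= end]
    return max(samples, default=0)

record("PROJ1", date(2010, 3, 1), 600)
record("PROJ1", date(2010, 3, 2), 950)
record("PROJ1", date(2010, 3, 3), 700)
```

In practice the samples would come from the product’s database or SMF-style records rather than an in-memory dictionary, but the period query is the same shape.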

The ‘accounts’ in the database should be re-nameable without rebuilding the entire database.

The database build process should have some sort of restart capability to avoid excessively long DASD scanning processes.

There should be a backup or ‘shadow’ database maintained concurrently to provide resilience to failure and avoid a database rebuild.

The database should use a non-proprietary access method and allow backups with standard utilities, etc.

The product should not present a significant overhead to allocation during normal operation or during initial database build.

Any product licence should be enterprise-wide, to allow it to be used on any Sysplex.

The ability to associate DFHSM-migrated data with the main accounts or sub-accounts, if possible.

An ISPF panel or GUI-based interface to the product for set-up, maintenance and reporting, with the capability to export to spreadsheets.

Vendor support for the product installation, with, if possible, an overview presentation of the product’s capabilities.

Evaluation Approach:

• Test installation and maintenance.

• Verify that the basic functionality and operability of the product meet the defined requirements, including the database build and volume scanning functions.

• Test recoverability and resilience to failure.

• Define rule sets for accounts as expected for production running, and run test allocations to confirm the expected results.

• Test Reporting functions including historical data capture.

• Test securability of the product and its database.

• Have Capacity or Performance teams measure operational overheads.

• Carry out performance profiling in a high use environment.

Conclusion of Testing and Decision Point

Following testing of the various products, there would be an evaluation of the capabilities of each product versus the cost of initial purchase and ongoing maintenance. A projection of likely savings achieved through introduction of the product would be included in the conclusions to balance the figures. Savings would come mainly from: greater sharing of DASD resources; more accurate and timely reporting, including projected vs. actuals; control over ‘rogue’, unauthorised allocations that may affect multiple testing environments; and advance warning of capacity issues to prevent downtime for recovery, etc.

Any supplementary functions or products included in the package would also be considered and balanced against the other factors. It is accepted that some functionality could be foregone if this results in a significantly reduced cost and this may ultimately affect the choice of vendor.

Discuss This Question: 1  Reply

  • PeteW
    Thanks dza. We do actually have one of the products on trial, but it has proved a nightmare to install: it is one of those that has been packaged up with lots of other related products, so it uses common agents and has endless STCs, APF libraries and RACF definitions to sort out. The interface is not that great either, although the GUI version is an improvement. The problem is time: we only have it on a test system with low load, so the stats aren’t that helpful. We do have Omegamon and RMF etc. for monitoring, so you can get an idea of what’s going on, but in the real world, with sysplexes of six or more LPARs, things change. I have asked directly for reference sites and information about overhead, but as with most vendors they’re a bit coy about this. Our capacity and performance people were quite adamant that the overhead would be very significant, so I wanted something independent to confirm or refute this.

    We have now got to a stage where the original reasons for the product are becoming less compelling, and it is likely to be dropped altogether. We used to have separate storgrps and user catalogs for each test environment and used ShadowImage or FlashCopy to back up to equivalent DASD to be used for roll-back. We have now moved to shared DASD storgrps and user catalogs and use DFDSS DUMPs for backup, as this saves many terabytes of space. Originally I was concerned that we could have ‘rogue users’ swallowing all the shared DASD, and a quota product would have dealt with that, but that issue has not materialised so far. I also wanted to be able to see high-water marks for usage of each project and build historical data for projected vs. actuals etc., but we can do that other ways, except for high-water marks.
