I normally work with SQL, but the same tactics should apply to Query as well. With SQL, if you issue STRDBG before opening your SQL session, you can view the job log to see any temporary access paths the system built, as well as which logicals it considered, which it used, and why it ruled out the rest. Without specifics on what you're querying and how the data joins together, it's hard to offer concrete suggestions, but if you are joining on computed, trimmed, or substringed fields, things can get very kludgy very fast.
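A minimal sketch of that debug-mode technique. The library and file names here are made up; the point is the sequence of commands. With the job in debug mode, the query optimizer writes informational messages (the CPI432x family, e.g. "Access path built", "Access path used") into the job log:

```
STRDBG  UPDPROD(*NO)     /* Put the job in debug mode                 */
STRSQL                   /* Run the slow query interactively          */
                         /* ...execute the statement in question...   */
ENDDBG                   /* Leave debug mode                          */
DSPJOBLOG                /* Optimizer messages show temporary access  */
                         /* paths built, logicals considered/used,    */
                         /* and reason codes for the ones rejected    */
```

Position to the second-level text of each message (F1 on the message) to see the reason codes for why a given logical was passed over.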
Tuning slow queries can pay off handsomely. I've cut 50-75% off normal system usage just by consistently going after the heavy hitters that show up on the system, building the right logicals when possible, or fixing faulty logic in the queries when not.
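As a hypothetical example of "building the right logicals" via SQL (the library, file, and column names are invented for illustration): if the job log shows a heavy hitter repeatedly building a temporary access path over an order-history file keyed by customer and date, a permanent index with that compound key lets the optimizer skip the build every run:

```sql
-- Assumed example: ORDHST is scanned by customer/date in the slow query.
-- An SQL index is implemented as a keyed access path, like a keyed LF.
CREATE INDEX MYLIB.ORDHST_IX1
    ON MYLIB.ORDHST (CUSTNO, ORDDATE);
```

The key order should match the query's selection and ordering: equal-selection columns first, then range/ordering columns.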
A last resort that has often worked for me: for complicated queries with a lot of joins, you can often speed things up considerably by breaking the query into steps. But indexing or rewriting is preferred.
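A sketch of that step-wise approach, again with invented names. The idea is to materialize the small, highly selective piece first (QTEMP is handy because it is job-scoped), then join against the reduced set rather than forcing the optimizer to handle everything in one statement:

```sql
-- Step 1: isolate the selective subset into a work table in QTEMP
CREATE TABLE QTEMP.RECENT AS
    (SELECT ORDNO, CUSTNO
       FROM MYLIB.ORDHST
      WHERE ORDDATE >= CURRENT DATE - 30 DAYS)
    WITH DATA;

-- Step 2: join the small work table instead of the full history file
SELECT R.ORDNO, C.CUSTNAME
  FROM QTEMP.RECENT R
  JOIN MYLIB.CUSTMAST C
    ON C.CUSTNO = R.CUSTNO;
```

This trades one complex optimization problem for two simple ones; it only wins when the intermediate set is genuinely small.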
If OPNQRYF is performing too slowly, then don't use OPNQRYF. That command is going to build an access path when it runs. If that's too slow, then build the access path ahead of time by creating a LF and using the LF in your program. Or use SQL instead, and use debug during development to guide the creation of supporting indexes.
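For completeness, a sketch of the DDS route (names are hypothetical). The keyed LF carries the access path permanently, so the program just opens it instead of paying for an OPNQRYF build each run:

```
     A          R ORDHSTR                   PFILE(MYLIB/ORDHST)
     A          K CUSTNO
     A          K ORDDATE
```

Compile it with something like:

```
CRTLF FILE(MYLIB/ORDHSTL1) SRCFILE(MYLIB/QDDSSRC) SRCMBR(ORDHSTL1)
```

The trade-off is index maintenance overhead on every update to the physical, so only build logicals that earn their keep.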
OPNQRYF has its uses. For example, it can create access paths with compound keys whose components come from multiple physical files; AFAIK, no other tool can do that.
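A rough sketch of what that looks like, with invented file and field names. The KEYFLD parameter takes qualified fields from both joined files, which is the capability nothing else offers; the OVRDBF with SHARE(*YES) lets the called program read the open query's ODP through the join format:

```
OVRDBF  FILE(ORDRPT) TOFILE(ORDHDR) SHARE(*YES)
OPNQRYF FILE((ORDHDR) (ORDDTL)) FORMAT(ORDRPT)  +
        JFLD((ORDHDR/ORDNO ORDDTL/ORDNO))       +
        KEYFLD((ORDHDR/CUSTNO) (ORDDTL/ITEMNO)) /* key spans both files */
CALL    PGM(ORDRPTPGM)
CLOF    OPNID(ORDHDR)
DLTOVR  FILE(ORDRPT)
```

ORDRPT here is assumed to be a join-format file whose record format the program was compiled over.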
But basic selection, dynamic or otherwise, has long since been done better in other ways.