We are seeing some very strange behaviour in the performance of a query on our database. We have run the same query on the database server itself (s1) using SQL*Plus, and from a server (s2) one hop away (which should largely rule out network performance as a factor). The exact same queries, with no bind variables, are kicked off at exactly the same time. The query run on s1 finishes in 30-40 minutes, whilst the query run from s2 takes 4 hours. Even when the queries are started at different times of day, in either order, or one at a time, the s1 query always takes less than an hour, whilst the s2 query always takes close to 4 hours.

The explain plans for both sessions show exactly the same access paths, but with different costs (I believe this is due to a logon trigger that forces optimizer_index_cost_adj = 10 when the s2 session is created). The OS user used to log in to s1 does not have any specific resource rights.
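In case it helps, a query along these lines should confirm whether the two sessions really do see different optimizer settings; :s1_sid and :s2_sid are placeholders for the actual session IDs:

```sql
-- Compare the effective optimizer environment of the two sessions;
-- :s1_sid and :s2_sid are placeholders for the real SIDs from v$session.
SELECT sid, name, value, isdefault
FROM   v$ses_optimizer_env
WHERE  sid IN (:s1_sid, :s2_sid)
AND    name = 'optimizer_index_cost_adj';
```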
We have noticed that the client-side session spends a lot of time waiting between reads, which does not seem to happen in the server-side session. The session statistics don't tell us much beyond the fact that it is taking a lot longer to run.
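This is a sketch of how those waits can be broken down per session, again with :s1_sid and :s2_sid standing in for the two SIDs:

```sql
-- Break down cumulative waits per session; TIME_WAITED is in centiseconds.
-- Heavy 'SQL*Net message from client' time would point at fetch round trips,
-- 'db file sequential/scattered read' at disk I/O.
SELECT sid, event, total_waits,
       time_waited / 100 AS seconds_waited
FROM   v$session_event
WHERE  sid IN (:s1_sid, :s2_sid)
ORDER  BY time_waited DESC;
```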
Is this behaviour normal for server-side vs client-side queries? Does it make a difference that the costs are higher in the generated explain plan, even though the steps are exactly the same? If not, what else can we look at?
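One avenue we are considering ourselves is pulling the runtime plans (rather than EXPLAIN PLAN output) for both cursors, along these lines; '&sql_id' is a placeholder, and actual row counts only appear if the statement ran with the gather_plan_statistics hint or STATISTICS_LEVEL = ALL:

```sql
-- Pull the runtime plan for every child cursor of the statement;
-- '&sql_id' is a placeholder for the real SQL_ID from v$sql.
-- A-Rows (actual rows) are only populated when the statement was run
-- with the /*+ gather_plan_statistics */ hint or STATISTICS_LEVEL = ALL.
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'ALLSTATS LAST'));
```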