All numbers depend on the types of queries/transactions being run. If you are running an annual report across a large data set, you may see hundreds of thousands of disk reads, but they may all be valid and as efficient as possible.
For common online queries, a 2-second response time is usually acceptable to users, but I treat disk reads as the primary metric. On a good server, a query can respond within 2 seconds while still doing a ton of disk reads, which usually means your indexes are not optimized for that query (or the query itself is badly formed).
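As a rough sketch of how you might find those heavy-read statements, here is what I would run if the database happened to be PostgreSQL with the `pg_stat_statements` extension enabled (that extension and its column names are my assumption; your DBMS will have its own equivalent view):

```sql
-- Hedged sketch, assuming PostgreSQL + pg_stat_statements:
-- list the statements doing the most physical reads per execution,
-- which are the first candidates for index/query review.
SELECT query,
       calls,
       shared_blks_read,   -- blocks actually read from disk
       shared_blks_hit,    -- blocks found in the buffer cache
       round(shared_blks_read::numeric / NULLIF(calls, 0), 1) AS reads_per_call
FROM   pg_stat_statements
ORDER  BY shared_blks_read DESC
LIMIT  20;
```

A statement near the top of that list with a fast response time can still be a problem waiting to happen once the data no longer fits in cache.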
One thing to beware of when looking at performance metrics is database “start-up” cost. If a particular set of tables hasn’t been accessed for a while and a query then goes against those tables, you may see a “spike” in disk reads and response time, since the data is probably no longer cached. When doing performance tuning, first make sure the system is in a “steady state”, so you are looking at meaningful metrics.
If you restart your system (or your database server), I would ignore all metrics for the initial period after the restart. How long to wait depends on how heavily the system is used.
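One way to tell whether you are past that warm-up period, again assuming PostgreSQL (the views and functions below are that system's; adapt to whatever your DBMS exposes), is to watch the overall cache hit ratio and then reset the counters once things have settled:

```sql
-- Rough sketch (PostgreSQL assumed): overall buffer cache hit ratio for the
-- current database. A low ratio right after a restart usually just means the
-- cache is still cold, not that the queries are bad.
SELECT datname,
       blks_hit,
       blks_read,
       round(blks_hit * 100.0 / NULLIF(blks_hit + blks_read, 0), 2) AS cache_hit_pct
FROM   pg_stat_database
WHERE  datname = current_database();

-- Once the system has warmed up, reset the counters so later readings
-- reflect only the steady state.
SELECT pg_stat_reset();
```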
On a well-tuned system with enough memory for caching, common queries should run with very few disk reads (on the order of 0-10). Any query that consistently needs many more than that is worth examining.
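To check an individual query, most databases can show cached versus physical reads per step of the plan. A minimal sketch in PostgreSQL syntax, using a hypothetical `orders` table and `customer_id` column:

```sql
-- BUFFERS reports "shared hit" (cache) vs "read" (disk) counts per plan node,
-- so you can see exactly where the physical reads are coming from.
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM   orders
WHERE  customer_id = 42;
```

If the plan shows a large "read" count on a sequential scan, that is the usual sign of a missing or unusable index for that query.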