Thanks for your response.
We have a high-configuration server for load testing, but I don't have access to the hardware.
I would like to know if there is a specific pattern, technique, or checklist for attacking database performance issues.
What information should I be looking at to diagnose the database performance issue?
Can you give us more details about your hardware and system configuration? The solution could be something as simple as adding more RAM or increasing hard-disk throughput, or something as complicated as a coding problem (loop, race condition, etc.).
9 milliseconds for each procedure execution seems a bit high, unless the procedure needs to evaluate a lot of different rows of data. I would extract the internal query (or queries) from the procedure and look at the execution plan. Are there any Cartesian joins? Any full table scans? Any index full scans? Any or all of these would indicate a problem either with the logic of the query or with the available indexes.
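As a starting point, here is a minimal sketch of how you might capture and inspect a plan. It assumes Oracle syntax and made-up table names (orders, customers); MySQL and PostgreSQL use `EXPLAIN <query>` instead:

    -- Capture the plan for the query extracted from the procedure.
    EXPLAIN PLAN FOR
    SELECT o.order_id, c.customer_name
      FROM orders o
      JOIN customers c ON c.customer_id = o.customer_id
     WHERE o.order_date >= DATE '2010-01-01';

    -- Display it and look for MERGE JOIN CARTESIAN, TABLE ACCESS FULL,
    -- or INDEX FULL SCAN operations.
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);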
If the procedure is running a query that has to somewhat randomly access individual rows in a VERY large table (many, many gigabytes), you may be stuck with the performance, as typically a physical data read from disk will cost you 6-8 milliseconds. If the table isn't that huge, then more likely the join logic isn't quite right.
If you have a "DISTINCT" modifier in your queries, it may be covering up a join that is creating excessive intermediate rows. I dislike DISTINCT in queries unless you thoroughly understand why it is needed - too many people use it as a crutch instead of understanding why their query is returning duplicate rows.
Take the DISTINCTs out and see how many rows you get back - if there are a lot, then you need to figure out why you are getting a large intermediate result set.
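A hypothetical illustration of what that looks like (table names are placeholders): if each order has several order_lines, the join below returns one row per line and the DISTINCT silently collapses them again. Dropping the DISTINCT exposes the true row count, and an EXISTS rewrite avoids producing the duplicates in the first place:

    -- DISTINCT hiding a row-multiplying join.
    SELECT DISTINCT o.order_id, o.order_date
      FROM orders o
      JOIN order_lines l ON l.order_id = o.order_id;

    -- Same result without generating the duplicate intermediate rows.
    SELECT o.order_id, o.order_date
      FROM orders o
     WHERE EXISTS (SELECT 1 FROM order_lines l WHERE l.order_id = o.order_id);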
Functions in WHERE clauses - if you have any functions in your WHERE clauses that operate on the table columns, these may prevent the query optimizer from using available indexes, resulting in full table or index full scans. Functions on indexed columns in WHERE clauses are to be avoided if at all possible.
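A small sketch of the difference, using a hypothetical orders table and Oracle's TRUNC function; the second form compares the indexed column directly, so an index range scan becomes possible:

    -- The function on the column typically defeats the index (full scan).
    SELECT * FROM orders
     WHERE TRUNC(order_date) = DATE '2010-06-01';

    -- Rewritten so the bare column is compared against a range.
    SELECT * FROM orders
     WHERE order_date >= DATE '2010-06-01'
       AND order_date <  DATE '2010-06-02';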
If you think the logic is correct, but you have full table scans in the explain plan, then look at whether you can define additional indexes that would allow the query to be more efficient - look at the columns used in the WHERE clause and identify those that contain lots of different values. Columns that contain widely varying data make much better index columns than columns with a limited set of values (e.g., date of birth is a much better index column than gender). Also - where possible, use multi-column indexes. A single, three-column index is much more efficient than three single-column indexes, if the query references all three columns. Note that with multi-column indexes, try to ensure the column with the most widely varying values is the first column in the index.
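For example (a sketch with assumed table and column names, and assuming customer_id is the most selective of the three columns):

    -- One composite index instead of three single-column indexes,
    -- most selective column first.
    CREATE INDEX ix_orders_cust_status_date
        ON orders (customer_id, status, order_date);

    -- A query filtering on all three columns can then be resolved
    -- with a single index range scan.
    SELECT order_id
      FROM orders
     WHERE customer_id = 12345
       AND status      = 'SHIPPED'
       AND order_date >= DATE '2010-01-01';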
Index trick - create UNIQUE indexes on non-unique data by including the primary key column(s) as the last column(s) in the index. This has two benefits: a unique index is considered "better" by the query optimizer, so it is more likely to be used; and if the result of the query is then joined to another table through the primary key, the primary key is available in the index, so there is no need to physically access the table to fetch it based on the other indexed columns.
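A minimal sketch of the trick, assuming an order_lines table with a non-unique product_id column and a primary key line_id:

    -- Appending the primary key makes the index unique, and joins on
    -- line_id can be answered from the index without touching the table.
    CREATE UNIQUE INDEX ux_lines_product_line
        ON order_lines (product_id, line_id);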
Last but not least - if all else fails, you may need to force a particular index to be used, with a "hint" to the query optimizer. With some types of data, the distribution of values is such that the query optimizer cannot always find the right index, and needs a little help. These are not common, and are unfortunately very difficult to solve with traditional performance analyses.
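For completeness, here is what a hint might look like, assuming Oracle syntax and the hypothetical index from above; other databases use different mechanisms (e.g., FORCE INDEX in MySQL):

    -- Nudge the optimizer toward a specific index when statistics
    -- lead it astray.
    SELECT /*+ INDEX(o ix_orders_cust_status_date) */
           o.order_id
      FROM orders o
     WHERE o.customer_id = 12345;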