If you have a good database engine and server, this question is surprisingly hard to answer directly.
If you use “bulk load” transactions, then you can possibly measure the time from the “commit” until the records become visible to another process, but even that is tricky.
If you aren’t using bulk load transactions, then the index overhead is largely buried inside the row transactions. Most index overhead occurs in memory, so it is very fast; the physical updates to the file system are buffered through the caching mechanism, so timing the completion of those writes is very difficult.
What is the purpose of trying to find this time? What question are you really trying to answer?
If you absolutely need to know the cost of index overhead, then you can compute an average cost using the following approach:
1. Create the data table with NO indexes.
2. Write records to the table and measure the time each write takes. I would normally do this in blocks of a few common sizes, e.g. 500 or 1,000 records. Repeat this many times – I would populate at least several hundred thousand records in total, capturing the elapsed time for each block.
3. Drop the table.
4. Create the data table again, but with the index.
5. Repeat step 2 and measure the times again.
At the end, the difference between the times measured in step 5 and those measured in step 2 gives you the average “cost” of the index overhead.
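The procedure above can be sketched as follows. This is a minimal illustration using SQLite via Python’s `sqlite3` module; the table name, index, block sizes, and payload are all hypothetical stand-ins – substitute your own schema and engine, and expect the absolute numbers to vary widely between databases.

```python
# Sketch of the benchmark: time block inserts with and without an index,
# then compare the average per-block times. All names here are illustrative.
import sqlite3
import time

def timed_inserts(conn, n_blocks=10, block_size=1000):
    """Insert n_blocks blocks of rows, returning the elapsed time per block."""
    times = []
    cur = conn.cursor()
    for b in range(n_blocks):
        rows = [(b * block_size + i, f"value-{i}") for i in range(block_size)]
        t0 = time.perf_counter()
        cur.executemany("INSERT INTO t (id, payload) VALUES (?, ?)", rows)
        conn.commit()
        times.append(time.perf_counter() - t0)
    return times

def run(with_index):
    # Steps 1/4: create the table, with or without the index.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (id INTEGER, payload TEXT)")
    if with_index:
        conn.execute("CREATE INDEX t_payload_idx ON t (payload)")
    # Steps 2/5: write blocks of records and capture the time for each block.
    times = timed_inserts(conn)
    conn.close()  # step 3: discard the table between runs
    return times

baseline = run(with_index=False)  # no indexes
indexed = run(with_index=True)    # same writes, with the index in place
overhead = sum(indexed) / len(indexed) - sum(baseline) / len(baseline)
print(f"average per-block index overhead: {overhead:.6f} s")
```

Note that an in-memory SQLite database deliberately sidesteps the file-system caching issue mentioned above; against a real server you would want many more blocks and repeated runs to average out cache and checkpoint effects.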