Some data is missing.
MIPS is a speed measure, so in order to get a time measure (seconds) from it, you would need the number of instructions performed.
MIPS is the number of millions of instructions per second, as given in CPU manufacturers' specifications.
So if you have a MIPS number, you can work out how long it takes to perform a given number of instructions.
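For example (using made-up numbers), a processor rated at 100 MIPS would need 500 / 100 = 5 CPU seconds to execute 500 million instructions.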
( Instructions_counted / 1000000 ) / MIPS = CPU seconds
Since MIPS is “millions of instructions per second” and CPU seconds is essentially “seconds of CPU execution”, you divide the number of instructions that ran by a million and divide that result by your MIPS value.
MIPS is an (I/S) value — instructions per second.
CPU seconds is an (S) value — seconds.
Instructions_counted is an (I) value — instructions.
Divide an (I) value by an (I/S) value to arrive at an (S) value.
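Purely for illustration, here is a minimal Python sketch of that division; the function name, the 100 MIPS rating, and the 250-million instruction count are all made-up example values:

    # Convert an instruction count to CPU seconds given a MIPS rating.
    def cpu_seconds(instructions_counted, mips):
        # (I) / 1,000,000 gives millions of instructions (MI);
        # (MI) / (MI/S) gives seconds (S).
        return (instructions_counted / 1_000_000) / mips

    # Made-up example: 250 million instructions on a 100 MIPS machine.
    print(cpu_seconds(250_000_000, 100.0))  # prints 2.5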
Of course, the result is effectively meaningless for most purposes. But some people still want to do calculations, for the fun of it, I suppose.
The problem with attempting to relate these is that a single computer instruction may take 1, 5, 10, or 20 cycles to perform; the number of cycles required varies by both the instruction and the processor.
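On the original Intel 8086, for instance, a register-to-register MOV took 2 clock cycles, while a 16-bit register MUL could take well over 100.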
I remember when I was writing assembly language, I came across a table that listed each assembler instruction (e.g., JMP) and the number of CPU cycles it took to complete. The purpose of the table was to show that tighter code could be written and the size of the .com file reduced. (This was in the days of DOS PCs with tiny RAM (3.5K).)
The table was interesting, but of no use for what I was doing at the time. It might still be on the web somewhere.