It seems there are two ways to compute the uptime of client server machines. The first, and I think best, way is to compute the number of minutes between the date and time of each reboot of each client server machine (an instance of an operating system) and the date and time of the following shutdown. If you add up all of these uptime intervals for all of the client server machines in the server pool, you get the total uptime. If you then divide that total uptime by the total possible uptime, you get the uptime percentage, which seems to be the goal of many IT managers because they want to compare it against the uptime percentage promised in their SLAs with end users. Total possible uptime for a 30-day month is 30 x 24 x 60 = 43,200 minutes per machine.
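To make that first method concrete, here is a minimal sketch of the calculation. The machine names, timestamps, and record layout are made up for illustration; real boot/shutdown times would come from whatever event logs or monitoring records you have.

```python
from datetime import datetime

# Hypothetical boot/shutdown pairs for two machines over a 30-day month.
# In practice these would be pulled from system event logs or monitoring data.
boot_shutdown_pairs = {
    "server-a": [("2024-06-01 00:00", "2024-06-15 03:10"),
                 ("2024-06-15 03:20", "2024-07-01 00:00")],
    "server-b": [("2024-06-01 00:00", "2024-07-01 00:00")],
}

fmt = "%Y-%m-%d %H:%M"
minutes_per_month = 30 * 24 * 60                      # 43,200 minutes per machine
possible = minutes_per_month * len(boot_shutdown_pairs)

total_up = 0
for machine, intervals in boot_shutdown_pairs.items():
    for boot, shutdown in intervals:
        delta = datetime.strptime(shutdown, fmt) - datetime.strptime(boot, fmt)
        total_up += delta.total_seconds() / 60        # minutes up in this interval

print(f"Total uptime: {total_up:.0f} of {possible} minutes "
      f"({100 * total_up / possible:.3f}%)")
```

With the made-up numbers above, server-a loses 10 minutes to a reboot, so the pool comes out at 86,390 of 86,400 possible minutes, or about 99.988% uptime.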
The wrong way to compute uptime is to first compute downtime and then subtract that from total possible uptime. People do it this way because, if you are doing the process manually, it is easier to count the smaller amount of downtime than it is to count the very large amount of uptime. The reason this is not as accurate as computing uptime as in my first paragraph is that this "wrong" way is totally dependent on the definition of "down." For instance, a reboot of a server requested by the owner of the server might not be counted as downtime even though it takes the client server machine out of service for five minutes, if not longer.
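A quick sketch of the downtime-subtraction method shows how much the result depends on that definition. The outage records here are invented for illustration; the flag marks whether an event counts as "down" under whatever policy you adopt.

```python
minutes_per_month = 30 * 24 * 60   # 43,200 possible minutes for one machine

# Hypothetical downtime events for one machine in one month.
downtime_events = [
    {"minutes": 5,  "counts_as_down": False},  # owner-requested reboot, often excluded
    {"minutes": 45, "counts_as_down": True},   # unplanned crash
]

counted_down = sum(e["minutes"] for e in downtime_events if e["counts_as_down"])
all_down     = sum(e["minutes"] for e in downtime_events)

print(f"Uptime % excluding the requested reboot: "
      f"{100 * (minutes_per_month - counted_down) / minutes_per_month:.3f}%")
print(f"Uptime % counting every outage:          "
      f"{100 * (minutes_per_month - all_down) / minutes_per_month:.3f}%")
```

The two answers differ even though the machine's actual behavior was identical, which is the weakness of starting from downtime.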
I have two questions:
For those of you who compute uptime as described in the first paragraph, presumably using some counting software, how do you handle crashes that don't produce a shutdown record?
For those of you who compute uptime manually, as described in paragraph two, is there any way to compute uptime without first computing downtime?