Posted by: Arun Gupta
capacity planning, CIO, cloud computing, IT infrastructure, server virtualization
Wishing all a very happy new year, and a great decade ahead!
I am sure that most of you had a wonderful time enjoying your favorite activities with your friends, relatives, and even strangers. The number of messages (SMS, tweets, emails, Web posts, etc.) multiplied over the weekend. And in almost all cases (I am sure there were a few exceptions), they were delivered to the intended recipients. All this was enabled by the IT infrastructure, which worked seamlessly despite the additional load generated by hundreds of messages per sender, several times the average transaction load on the servers and networks.
No one really planned for this surge, unlike the planning that typically goes into catering for month-end or quarter-end processing. It just worked!
Does it mean that most IT organizations deploy infrastructure that is way over the required average load?
Most analyst reports indicate that average utilization of IT infrastructure ranges from 5% to 30%. This is where the virtualization story promises to deliver higher utilization levels. So how would one explain the success of highly virtualized shops, where utilization is higher than the numbers stated by analysts and vendors? Did messages sent on the last day of the year arrive a few days late?
At least in my case, I know for sure that the messages I sent out within a span of 20 minutes (about 10 times the emails I send in a typical day) were all received by the intended recipients within a few minutes.
The bogey of capacity planning, utilization levels, right-sizing of servers, and the like for our messaging and collaboration platforms would appear to be highly overstated. Most IT shops play it safe and buffer in more than 200% capacity in such infrastructure. However, the same hypothesis does not hold for business transaction systems, which do tend to feel the pressure during month-end or quarter-end sales cycles. Users end up at the receiving end during these peaks, and the response to them, in the form of planned upgrades, is slower than expected.
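The arithmetic behind this is simple enough to sketch. A quick back-of-envelope check (illustrative numbers only, drawn from the analyst range quoted above, not from any particular shop) shows why a messaging platform running at typical utilization can quietly absorb a new-year surge:

```python
def max_surge_factor(avg_utilization: float) -> float:
    """Largest multiple of the average load that still fits
    within provisioned capacity, assuming capacity is fixed."""
    if not 0 < avg_utilization <= 1:
        raise ValueError("utilization must be a fraction in (0, 1]")
    return 1.0 / avg_utilization

# At the low and high ends of the 5-30% analyst range:
for avg in (0.05, 0.30):
    print(f"avg utilization {avg:.0%} -> absorbs up to a "
          f"{max_surge_factor(avg):.1f}x surge before saturating")
```

A shop averaging 5% utilization can soak up a 20x spike without anyone planning for it, which is consistent with messages arriving in minutes rather than days; a highly virtualized shop at 30% has far less headroom, which is where the month-end pain shows up.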
Maybe cloud-based models for compute power on demand are an answer to such issues. But their deployment remains experimental (at best) for mission-critical transactional applications like ERP, financial accounting, and supply chain management. As the interoperability of applications and base infrastructure improves, and consistent bandwidth becomes available on demand at affordable rates, the sizing problem will slowly die a natural death.
CIOs should review their capacity planning assumptions in the New Year as they engage with vendors and users, learn from the past, and take some calculated risks. I am sure that sooner or later these questions will be posed; the answers may not be very easy.