My friendly Board Member, who has also been my mentor for some time now, always brings up interesting points in discussion. I have always enjoyed talking to him because he challenges conventional wisdom with ideas well outside the box. He is technology savvy, so our discussions are not just at a high level; at times I have to explain why one solution scores over another. In one such meeting, he brought a new dimension into the open.
Selection of a tool
We were discussing a new predictive analytics solution and its adoption in the industry, not just locally but globally. With no local competitor using such a solution, he started, bright-eyed, to sketch the end game he wanted us to reach. The dimensions were simple in their representation but complex to execute, requiring multiple data sources and algorithms that would challenge most teams. As the big picture unfolded, it left me and the team both scared and excited about the leap forward for our company.
Using a formal matrix to evaluate a solution is normal for IT; functionality, roadmap, customer success, ease of use, scalability, investment, and industry fit are some of the usual parameters. He helped us refine the list, discarding most of them since every solution scored a tick against them. It then came down to a few that focused on the vendor's strategic intent and investments, e.g. how many customers has the vendor invested in to enable them to succeed?
Thus the discussion moved to a different plane, working on the methodology, plan and shared success criteria with direct linkage to business outcomes. I could sense wariness in the business, IT and vendor teams about the perceivably risky proposition. What if the solution does not work? What if the industry changes shape or direction? What if business users don’t use the end result for decision making? How do we enforce the models that the solution recommends? What if …
Playing it safe?
Success and failure rates of IT-led projects are statistics that scare everyone; the reasons have not changed much in the years since I first read about them, more than 15 years back. So there was some hope with a project endorsed by the Board Member, but the big question was whether we could sustain his interest over the 18-24 month period in which the end outcome would be measured. We decided to raise these questions to moderate expectations, and ended up inviting trouble.
Why are you all so risk averse? How would innovation happen if everyone wanted a fool-proof solution that someone has used in the past? Why are you always looking for precedent? Early adopters always gain a competitive advantage even if it is short-lived; in most cases, the followers get smaller benefits. If you keep working with the view that we don’t want to get anything wrong, is there a guarantee that you will not? And not getting it wrong does not imply that you will get it right!
Doing nothing wrong would mean that the status quo is the best place to be; trying something new is always fraught with risk. I am not suggesting we take undue risks on new, untested technology solutions. Getting something right requires the collective buy-in that CIOs seek for most projects; the marquee CIOs take a lot of calculated risks, and yes, they face failed projects more often than others. However, they more than make up for them with their successes.
Inertia is not a good keyword for IT and CIOs; they should seek unexplored avenues to make a difference. I believe that we all strive for success and, in the same vein, have a phobia of failure. The obsession with always succeeding can make for a dull and boring existence, disconnected from the real life of a business that has to compete every day in an uncertain, ever-changing environment. I tend to agree that doing nothing wrong does not mean that you are getting it right!