You know the signs. The project deliverables are vague; no one knows exactly what they are supposed to be doing. The hardware might not be on-site yet — it might not even be ordered. Perhaps there won’t be any hardware at all; instead, vague promises are made that Amazon’s EC2, or some other cloud, will make the hardware needs obsolete.
Somewhere in the back, someone is sighing and nodding their head, saying “This is never gonna work.”
You want to listen to that guy, you really do. Perhaps you are that guy … but the evidence against the status quo boils down to you saying “nu uh!”
Today I’ll introduce a way to move your case from “gut feeling” to reasonable argument that is based on a paper published in, believe it or not, the Harvard Business Review.
Make a betting pool about when the project will actually end. You might do this with two bets – when the project will ship and when it will be fit for use for customers. (As we learned in the Healthcare.gov mess, the two can be different things.)
Once you have the bets, convert each one to a number of days in the future and average them.
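That averaging step is simple enough to sketch in a few lines of Python. This is a minimal illustration, not part of the original article; the function name and the sample dates are hypothetical.

```python
from datetime import date
from statistics import mean

def crowd_estimate(bets, today):
    """Average a list of bet dates by converting each to days-from-today,
    then convert the average back into a calendar date."""
    days_out = [(bet - today).days for bet in bets]
    return date.fromordinal(today.toordinal() + round(mean(days_out)))

# Hypothetical bets on the ship date, collected on January 6th
today = date(2014, 1, 6)
bets = [date(2014, 3, 1), date(2014, 5, 15), date(2014, 4, 10)]
print(crowd_estimate(bets, today))  # → 2014-04-08
```

Averaging days-in-the-future rather than the dates themselves keeps the arithmetic honest: a date three months out and a date five weeks out average cleanly once both are plain integers.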
What you are doing here is relying on the Wisdom of Crowds, or Crowdsourcing, to figure out what is actually going on with the project. The assumption is that the people who are overly optimistic will balance out the folks who are overly pessimistic, and that the average of the answers will be better than any one individual answer.
When you present the idea, you’re likely to be faced with scorn – so point out that you read the idea in the Harvard Business Review.
Here’s the direct quote, from the November 2013 issue, Deciding How To Decide:
Information Aggregation Tools
These tools are used to collect information from diverse sources
* Traditional approaches, including the Delphi method, gather information from a variety of expert sources, aggregate the responses, and generate a range of possible outcomes and their probabilities. Decision makers may then consult with the group until a consensus is reached.
* Prediction or information markets are designed to gather “the wisdom of the crowd” by creating financial markets where investors can trade securities with payoffs linked to uncertain future outcomes (for example, the winner of an election or the release date of a new product).
* Incentivized estimate approaches involve surveying individuals with diverse information sources to estimate the outcomes of variables and then rewarding individuals with the most accurate estimates.
You can purchase the article as a PDF for $6.95, but that’s the gist of it: The average (“aggregate”) of what people know is likely to be accurate; you just need a mechanism to get them to say it and a reward structure that rewards honesty.
A Word Of Warning
I am actually suggesting you try this technique, and tell me how it goes — but be careful. It would be very easy to be perceived as a “non-team-player”, or, perhaps worse, to have long-shot bettors perceived as manipulating the project to get to the date they picked. I suggest doing this quietly, on a project you are clearly motivated to make successful. When pressed, pull out the HBR.
If you want bonus points, take a look at the standard deviation between the estimates and the outliers. Often the most interesting data is in the one person who disagrees.
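The bonus-points step is also easy to automate. Here’s a hedged sketch, again with hypothetical names and numbers: flag any bet that sits more than a chosen number of standard deviations from the mean, and treat a standard deviation of zero as its own warning sign (more on that below).

```python
from statistics import mean, stdev

def flag_outliers(days_out, threshold=1.5):
    """Return the bets (in days-from-today) that sit more than
    `threshold` standard deviations from the mean."""
    if len(days_out) < 2:
        return []
    m, s = mean(days_out), stdev(days_out)
    if s == 0:
        return []  # everyone agrees -- itself worth investigating
    return [d for d in days_out if abs(d - m) > threshold * s]

# Four clustered bets and one long shot
print(flag_outliers([60, 65, 70, 62, 180]))  # → [180]
```

The threshold of 1.5 is arbitrary; with a betting pool of a dozen people, the point is less the statistics and more the conversation you have with whoever placed the flagged bet.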
What if Everybody Agrees?
On one project, the date of May first was set in stone. Everyone I talked to said the software would ship on May first, even though I knew the software couldn’t possibly work. To be clear: My team was doing downstream, speculative work for interfaces that didn’t exist yet in March. Another team was creating reports for tables that did not exist and had not been populated. May first was a fantasy.
Yet if we had tried the betting method, every single person would have said May first.
That, in itself, is a kind of feedback — overly consistent answers indicate that something else is going on, likely a deference to authority. That problem comes up in at least two of the case studies in Reliability and Validity in Qualitative Research.
If you’d like me to do an analysis of that book and present it in plain English, just let me know. There’s gold in them thar’ academic papers.