Uncharted Waters

Dec 11, 2013, 12:45 PM GMT

An Estimation Trick

Profile: Matt Heusser

You know the signs. The project deliverables are vague; no one knows exactly what they are supposed to be doing. The hardware might not be on-site yet — it might not be ordered. Perhaps there won’t be any hardware at all; instead, vague promises are made that Amazon’s EC2, or some other cloud, will make the hardware needs obsolete.

Somewhere in the back, someone is sighing and nodding their head, saying “This is never gonna work.”

You want to listen to that guy, you really do. Perhaps you are that guy … but the evidence against the status quo boils down to you saying “nu uh!”

Today I’ll introduce a way to move your case from “gut feeling” to a reasonable argument, one based on a paper published in, believe it or not, the Harvard Business Review.

The Method

Make a betting pool about when the project will actually end. You might do this with two bets: when the project will ship and when it will be fit for customers to use. (As we learned in the Healthcare.gov mess, the two can be different things.)

Once you have the bets, average them by the number of days each one falls in the future.

That’s it.

What you are doing here is relying on the Wisdom of Crowds, or Crowdsourcing, to figure out what is actually going on with the project. The assumption is that the people who are overly optimistic will balance out the folks who are overly pessimistic, and that the average of the answers will be better than any one individual’s answer.

When you present the idea, you’re likely to be faced with scorn – so point out that you read the idea in the Harvard Business Review.

Here’s the direct quote, from the November 2013 issue, Deciding How To Decide:

Information Aggregation Tools

These tools are used to collect information from diverse sources.

* Traditional approaches, including the Delphi method, gather information from a variety of expert sources, aggregate the responses, and generate a range of possible outcomes and their probabilities. Decision makers may then consult with the group until a consensus is reached.

* Prediction or information markets are designed to gather “the wisdom of the crowd” by creating financial markets where investors can trade securities with payoffs linked to uncertain future outcomes (for example, the winner of an election or the release date of a new product).

* Incentivized estimate approaches involve surveying individuals with diverse information sources to estimate the outcomes of variables and then rewarding individuals with the most accurate estimates.

Whew.

You can purchase the article as a PDF for $6.95, but that’s the gist of it: The average (“aggregate”) of what people know is likely to be accurate; you just need a mechanism to get them to say it and a reward structure that rewards honesty.

A Word of Warning

I am actually suggesting you try this technique, and tell me how it goes — but be careful. It would be very easy to be perceived as a “non-team-player”, or, perhaps worse, to have long-shot bettors perceived as manipulating the project to get to the date they picked. I suggest doing this quietly, on a project you are clearly motivated to make successful. When pressed, pull out the HBR.

If you want bonus points, take a look at the standard deviation of the estimates, and at the outliers. Often the most interesting data is in the one person who disagrees.

What if Everybody Agrees?

On one project, the date of May first was set in stone. Everyone I talked to said the software would ship on May first, even though I knew the software couldn’t possibly work. To be clear: My team was doing downstream, speculative work for interfaces that didn’t exist yet in March. Another team was creating reports for tables that did not exist and had not been populated. May first was a fantasy.

Yet if we had tried the betting method, every single person would have said May first.

That, in itself, is a kind of feedback — overly consistent answers indicate that something else is going on, likely deference to authority. That problem comes up in at least two of the case studies in Reliability and Validity in Qualitative Research.

If you’d like me to do an analysis of that research and present it in plain English, just let me know. There’s gold in them thar’ academic papers.

2 Comments on this Post

 
  • TomLiotta

    I've been asked many times by various managers to give estimates for work that involved significant up-front unknowns. I learned to preface my answer with a short demonstration.

    "I'll give my best estimate if you agree to answer two questions from me. After you answer the first question, I'll ask the second one. Agreed?"

    It's been agreed every time I can remember. The first question:

    "How long will it take you to answer my second question?"

    The problem of 'unknown factors' has always been mitigated by that. It never hurts to manage expectations, even if you must do it yourself.

    And I really like the method of using Crowd Wisdom.

    Tom

  • Matt Heusser
NICE Tom. Not my style, but nice.
