Enterprise IT Watch Blog

May 20 2014   10:00AM GMT

Grounding cognitive computing in probabilistic data analytics

Profile: Michael Tidmarsh

Tags:
Cognitive computing
Data Analytics

Data analytics image via Shutterstock

By James Kobielus (@jameskobielus)

Life’s just a rolling calculation grounded in odds. What you know about the world, you only pretty much, sort of, know for sure. If René Descartes hadn’t been in such a rush to certainty, he might have admitted that his inner voice really told him, “I think, therefore I probably am.”

Having confidence in your knowledge means that the probabilities for what you believe are so high that they are practically indistinguishable from certainties. For example, we all tend to believe the evidence of our eyes, ears, and other senses. However, everyone knows that appearances can deceive. Memory is a faulty gauge of factuality, even for sensory impressions that happened a split-second ago and remain in working memory. And, of course, the art of magic demonstrates the infinite range of intentional illusions that can put the senses to shame.

Real cognition involves organically reckoning with the probabilities that surround us and hammering them down to manageable near-certainties. Humans are not computers that perform deterministic cognitive processing under stored-program control. Instead, our nervous systems are built on probabilistic principles that sift through impressions, heuristics, and odds so that we can get on with the business of living.

Cognitive computing systems should incorporate probabilistic analytic models in order to capture the irreducible uncertainties that inform rational thought. Anybody who wishes to ground cognitive computing on a more solid scientific foundation should check out the research presented in this MIT wiki. As discussed in the wiki, a probabilistic model of cognition should proceed from two axioms.

First, cognition is a process of trial-and-error hypothesis testing and confirmation. In other words, one confirms or rejects an a priori “working model” of a knowledge domain (i.e., cause-and-effect logic) through evaluation of probability-driven empirical observations.

And, second, cognition is a process of learning by conditional inference from confirmed working models. In other words, one’s confidence in any statement about the world rides on the extent to which it derives from a cause-effect model that was confirmed through probabilistic trial-and-error testing.
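To make the two axioms concrete, here is a minimal sketch in Python (my own illustration, not drawn from the MIT wiki): a Beta-Bernoulli update treats each empirical observation as a trial that either supports or contradicts an a priori working model, and the posterior mean becomes the conditional confidence we then reason from. The function names, prior, and sample observations are assumptions for illustration only.

```python
# A minimal sketch (not from the article) of the two axioms using a
# Beta-Bernoulli model: a "working model" of a cause-effect claim is
# confirmed or rejected as probability-weighted observations arrive.

def update_belief(prior_alpha, prior_beta, observations):
    """Bayesian update: each observation either supports (True) or
    contradicts (False) the working model."""
    confirming = sum(observations)                       # evidence for the model
    disconfirming = len(observations) - confirming       # evidence against it
    return prior_alpha + confirming, prior_beta + disconfirming

def confidence(alpha, beta):
    """Posterior mean: our conditional confidence in the working model."""
    return alpha / (alpha + beta)

# Axiom 1: trial-and-error testing of an a priori working model.
alpha, beta = 1, 1   # uninformative prior (hypothetical starting point)
trials = [True, True, False, True, True, True, True, False, True, True]
alpha, beta = update_belief(alpha, beta, trials)

# Axiom 2: learning by conditional inference from the confirmed model.
print(f"Posterior confidence in the working model: {confidence(alpha, beta):.2f}")
```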

These axioms define the extent to which we can trust deterministic approaches to cognitive computing. To the extent that a probabilistic cognitive model has been confirmed over and over through empirical evidence, we can justify coding its cause-effect logic into deterministic processing rules. And as long as fresh empirical data continues to validate those same working models, we can keep executing those rules deterministically.
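As a hedged sketch of that promotion-and-demotion logic (again my own illustration, with made-up threshold values, not the author’s or any vendor’s rules engine), the snippet below keeps executing a cause-effect model as a fixed rule only while fresh evidence holds its confidence above an assumed bar:

```python
# Hypothetical thresholds: promote a model to a deterministic rule only on
# strong confirmation, and demote it if fresh evidence stops validating it.
PROMOTION_THRESHOLD = 0.95
DEMOTION_THRESHOLD = 0.80

def should_execute_deterministically(confidence, currently_deterministic):
    """Decide whether the cause-effect model may run as a fixed rule."""
    if currently_deterministic:
        return confidence >= DEMOTION_THRESHOLD   # keep the rule while evidence holds up
    return confidence >= PROMOTION_THRESHOLD      # promote only on strong confirmation

# Fresh empirical data keeps re-scoring the model's confidence over time.
history = [0.91, 0.96, 0.97, 0.93, 0.78]
deterministic = False
for conf in history:
    deterministic = should_execute_deterministically(conf, deterministic)
    mode = "deterministic rule" if deterministic else "probabilistic model"
    print(f"confidence={conf:.2f} -> execute as {mode}")
```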

In other words, we can’t have full-fledged cognitive computing without predictive models, on the one hand, and business rules management systems on the other.
