Uncharted Waters

May 31 2018   3:12PM GMT

Machine Learning As True AI? Not Yet

Matt Heusser

Tags:
Artificial intelligence
Machine learning

According to Judea Pearl, the developer of Bayesian networks, today’s machine learning is just “curve-fitting.” That’s taking a set of data points and mapping them onto a line. This can be incredibly effective at, say, predicting seasonally-adjusted demand in order to reduce excess inventory while keeping the shelves fully stocked.
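For a concrete picture of what “curve-fitting” means here, the sketch below fits a simple curve to a year of monthly demand numbers. The figures, the choice of a quadratic curve, and the use of Python with numpy are all invented for illustration, not anything from Pearl’s argument:

import numpy as np

# Invented monthly demand figures (units sold), purely for illustration.
months = np.arange(1, 13)
demand = np.array([120, 115, 130, 160, 200, 240, 260, 255, 210, 170, 140, 125])

# "Curve-fitting": find the quadratic that best approximates the data points.
coefficients = np.polyfit(months, demand, deg=2)
curve = np.poly1d(coefficients)

# The fitted curve can then estimate demand for a future month.
print(curve(13))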

But is that really artificial intelligence?

Pearl says no, and that the applications for that kind of specific, directed Machine Learning are fewer than you might think.

True Benefits of Machine Learning

On a forthcoming episode of The Testing Show podcast, Peter Varhol referenced Nicholas Carr as saying “You can’t automate what you can’t understand.” With Machine Learning, the computer can do a little better. For example, with supervised Machine Learning, a tool might get a hundred thousand different poker hands, along with what each hand “scores.” From that data a Machine Learning application can infer the rules of poker, or predict the score for a given hand. With a Bayesian filter, a computer can infer which emails are likely to be spam, even if a human is not capable of writing down the rules.
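To make the spam example concrete, here is a minimal sketch of the Bayesian-filter idea, assuming Python and scikit-learn; the toy emails and labels are invented for illustration, and production filters are far more elaborate:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny, invented training set: emails a human has already labeled.
emails = [
    "win a free prize now",
    "limited time offer, click here",
    "meeting notes for tuesday",
    "are we still on for lunch friday",
]
labels = ["spam", "spam", "ham", "ham"]

# Turn the text into word counts, then let naive Bayes infer the patterns.
vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(emails), labels)

# The model now classifies new mail without anyone writing down explicit rules.
print(model.predict(vectorizer.transform(["claim your free prize now"])))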

These sorts of problems all boil down to curve-fitting. Given a large set of data points, figure out the best line to approximate them. Plenty of algorithms already do this, some of them implemented in Microsoft Excel. What we call “Machine Learning” today is one step better. After doing the math, the tool creates the curve, then re-runs the data against that curve to see how far off it is, creating a new curve that is more accurate. That is, modern Machine Learning tools create the prediction, compare it to what actually happened, and improve themselves in a loop. This continues until they reach a point where improving one area throws off the curve in a different area.
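That predict-compare-improve loop fits in a few lines. The sketch below fits a straight line by hand with a simple gradient-descent loop; the data and learning rate are invented, and real tools use much more sophisticated versions of the same idea:

import numpy as np

# Toy data that roughly follows y = 2x + 1; the numbers are made up for illustration.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

slope, intercept = 0.0, 0.0
learning_rate = 0.02

for _ in range(2000):
    predicted = slope * x + intercept            # create the prediction from the current curve
    error = predicted - y                        # compare it to what actually happened
    slope -= learning_rate * (error * x).mean()  # nudge the curve to be more accurate...
    intercept -= learning_rate * error.mean()    # ...and go around the loop again

print(slope, intercept)  # ends up close to 2 and 1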

With machine learning, it is still possible for a human to execute the math of the curve; it would just take a trained mathematician a week. The tools make it easy enough to write something like this instead:

sub prediction(var input) = ML.Calculate(@input_array, @output_array);

That creates a subroutine that takes an input and predicts the result.
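In a real library the same idea is only slightly longer. A rough Python equivalent, assuming scikit-learn and invented example data, might look like this:

from sklearn.linear_model import LinearRegression

# Historical inputs and outputs; stand-ins for whatever data you actually have.
input_array = [[1], [2], [3], [4], [5]]
output_array = [2.1, 3.9, 6.2, 8.1, 9.8]

model = LinearRegression().fit(input_array, output_array)

def prediction(value):
    # A subroutine that takes an input and predicts the result, as in the pseudocode above.
    return model.predict([[value]])[0]

print(prediction(6))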

That is helpful for certain problems, but isn’t artificial intelligence.

“True AI”

In an interview with The Atlantic, Pearl states that true Artificial Intelligence has not arrived. He gives the example of a team of robot soccer players. In that situation, true intelligence would mean not just playing a game and getting better over time, but having free will. Here’s Pearl’s explanation:

 I think the first evidence will be if robots start communicating with each other counterfactually, like “You should have done better.” If a team of robots playing soccer starts to communicate in this language, then we’ll know that they have a sensation of free will. “You should have passed me the ball—I was waiting for you and you didn’t!” “You should have” means you could have controlled whatever urges made you do what you did, and you didn’t. So the first sign will be communication; the next will be better soccer.

What Judea doesn’t say, but seems obvious, is that the prerequisite to free will is self-awareness.

In order for the robots to say “you should have done better,” each robot needs to recognize that it (“I”) and the other (“you”) are both real actors in the environment, something Pearl implies in his new book, The Book Of Why.

Computers are not aware of their environment. As James Bach has pointed out, a million billion Atari gaming systems lashed together would have essentially unlimited processing power, but they would still lack that spark of self-awareness.

Right now, we have no idea where that spark comes from. It is not in computer systems. Today’s systems still have the same model Alan Turing proposed in the 1940s: a CPU that executes instructions, memory, and long-term storage.

To get a computer to become self-aware, we would need to tell it how, and that turns out to be an algorithm we have not been able to define yet.

And, as Nicholas Carr said: if we can’t define it, we can’t automate it either.
