Enterprise IT Watch Blog

Oct 18 2016   11:05AM GMT

Keeping a clear mind about the potential downsides of AI

Profile: Michael Tidmarsh

Tags:
Artificial intelligence


Artificial image via FreeImages

By James Kobielus (@jameskobielus)

It’s not hard to grab your 15 minutes of attention in the mass media. All you need to do is argue that the latest technological mania is going to ruin the world.

Alarmist warnings about artificial intelligence (AI) seem to be everywhere right now. I’m a bit jaded by all this sensationalism. Earlier this year I published my thoughts on this topic, in which I outlined the principal overheated arguments being made against AI and its data-driven cousin, cognitive computing. If you watched the otherwise excellent October 9 episode of CBS’s “60 Minutes” on AI, you saw many of those arguments rehashed.

Now I’m seeing a new theme in the anti-AI backlash: the notion that growing reliance on data-driven cognitive computing will turn users into gibbering idiots. That’s essentially the thesis of Bernard Marr’s recent Forbes article, as flagged in the headline “Is Stupidity A Dangerous Side Effect Of Big-Data-Driven AI?” In the article itself, Marr softens that tone just a wee bit, using the term “de-skill” to refer to the process under which automating cognitive functions may cause people to forget how to handle them unassisted. But it’s clear that Marr believes the technology risks dumbing down AI-assisted tasks to the point at which people may become passive appendages to the machine (or, at the very least, to machine learning algorithms).

My feeling is that this is more of a red herring than a real issue. The fact that AI has made a specific mental task easier doesn’t imply that you, the person whose cognitive load is being lightened, are in danger of becoming an imbecile. We’ve been living with high-tech cognition offloaders, such as spreadsheets and electronic calculators, for the past couple of generations, but those don’t seem to have spawned mass mathematical illiteracy. People still need to master the same core concepts (addition, subtraction, division, multiplication, etc.) in order to use these tools correctly.

So before we let our fevered imaginations get the better of us, let’s consider how deeply entrenched AI has already become in our lives, and how little we need to fear it. If you consider the range of tasks for which AI is becoming ubiquitous, it’s obvious that none of this is reducing any of us to a state of drooling incoherence. AI’s principal applications so far have been conversational chatbots, speech recognition, face recognition, image classification, natural language processing, computer vision, fraud detection, and environmental sensing. Though these AI-powered applications are everywhere, including on your smartphone, I’m pretty sure that most of us still have no problem speaking to human agents, understanding spoken language, or identifying a familiar face without technological spoonfeeding.

If anything, AI-enriched applications, appliances, the Internet of Things, and intelligent robotics are extending all of our senses. They’re honing our innate intuitions to a finer degree through immersive pattern sensing. And they’re empowering our neuromusculature in new ways, spurring all of us to evolve our organic smarts in order to embrace the amazing possibilities being unlocked.

Of course, it is quite possible that AI will be misapplied in many application contexts. That’s the essence of Marr’s arguments. He sketches out several speculative decision scenarios in which particular professionals may give AI-infused applications too much latitude:

  • AI-guided flight-automation systems may create a new generation of pilots who aren’t knowledgeable or attentive enough to manually override and fly the plane when they need to.
  • AI-driven autonomous vehicles may lessen people’s incentive to learn how to operate cars manually (assuming that this is even possible in future self-driving vehicles).
  • AI-powered medical systems may weaken physicians’ ability to render expert diagnostic judgments grounded in their own manual review of patient records.
  • AI-monitored manufacturing assembly lines may cause quality assurance specialists to ignore the evidence of their own senses, thereby contributing to a surge in defective products that enter the market.

Those scenarios are worth considering, but the more you ponder them, the less likely they seem. Let’s take each one in turn:

  • Only insane pilots would rely unthinkingly on AI-guided navigation systems, considering that their own lives (not to mention those of their passengers and people on the ground) hang in the balance.
  • The same applies to the occupants in AI-steered autonomous vehicles, many of whom won’t enter such vehicles unless there’s a designated human driver who can assume controls in a pinch.
  • Likewise, few responsible doctors will take reckless risks with patients’ lives by delegating unthinkingly to AI-powered systems—or, at the very least, their malpractice insurers will rein in any such temptations.
  • And manufacturing quality control specialists will lose their jobs in no time if they sign off on too many AI-certified false-negatives at the tail end of the production process.

Marr states, rightly, that what would be most problematic is any AI-driven process that entirely lacks human oversight and control. But even in those instances (which in any likely future scenario will be the exception, not the rule), specific humans will still be held responsible for the results of algorithmically guided processes.
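To make that point a bit more concrete, here is a minimal, hypothetical sketch of the kind of human-in-the-loop gate implied above. The names, confidence threshold, and "high stakes" flag are my own illustrative assumptions, not anything Marr or this article specifies: an algorithmic decision is applied automatically only when it is low-stakes and high-confidence, and is otherwise escalated to a named, accountable human reviewer.

```python
from dataclasses import dataclass


@dataclass
class Decision:
    action: str        # what the model recommends, e.g. "approve_claim"
    confidence: float  # the model's own confidence estimate, 0.0 to 1.0
    high_stakes: bool  # set by business rules, not by the model itself


def route_decision(decision: Decision, reviewer: str,
                   confidence_floor: float = 0.95) -> str:
    """Apply a decision automatically only when it is low-stakes and the model
    is confident; otherwise escalate it to the accountable human reviewer."""
    if decision.high_stakes or decision.confidence < confidence_floor:
        # Human-override path: a named person signs off and owns the outcome.
        return (f"ESCALATED to {reviewer}: {decision.action} "
                f"(confidence={decision.confidence:.2f})")
    # Automated path: a named reviewer is still recorded as responsible.
    return (f"AUTO-APPLIED (owner: {reviewer}): {decision.action} "
            f"(confidence={decision.confidence:.2f})")


if __name__ == "__main__":
    print(route_decision(Decision("approve_claim", 0.99, high_stakes=False), "j.smith"))
    print(route_decision(Decision("override_autopilot", 0.99, high_stakes=True), "chief_pilot"))
```

The detail worth noticing is that even on the automated path a specific person's name travels with the outcome, which is the "checks and balances" idea in miniature.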

Real-world checks and balances will keep AI-powered processes from running amok. When circumstances demand it, heads will roll and AI-driven processes’ most adverse decision paths will be reined back in.
