Enterprise IT Watch Blog

Feb 26, 2016, 10:18 AM GMT

The robotic throttling of information overload

Profile: Michael Tidmarsh

Tags:
Machine learning
Robotics

Machine learning image via Shutterstock

By James Kobielus (@jameskobielus)

People will increasingly have to learn how to collaborate with robots in unprecedented situations. And I’m not referring to some “Star Wars” scenario where R2-D2 is all that stands between Luke Skywalker and interstellar annihilation.

Machine learning is increasingly powering search-and-rescue scenarios in which intelligent machines—such as drones—team with human agencies, such as the military and police. In the previous sentence, I used the active voice—“team with”—rather than the passive “are controlled by” to signal that the robotic agents will increasingly be capable of autonomous or semi-autonomous operation.

This distinction will be increasingly important in emergency response situations in which events unfold too fast for human controllers to make the right decision, but in which the machines might—based on sensor-driven real-time and predictive algorithms—act first to save lives or nip potential catastrophes in the bud. There might even be scenarios where the robots take action first while delaying the communication of select pieces of information to human team members, or even withholding it entirely, when there’s a significant risk that the people in question might inadvertently jeopardize the desired outcome by misinterpreting or misapplying the information.

What I’ve just sketched out is a human-machine communication scenario that popped into my mind as I read this recent article on an MIT research project that’s applying algorithmic approaches to reducing communications overload in human-robot emergency response teams. Essentially, it uses machine learning to identify when it’s best for rescue robots to suppress some of the chatter they communicate to human collaborators, on the grounds that these communications not only impose a cost on the machines themselves (processing, memory, bandwidth) but also might drown the humans involved in so much informational noise that they’ll have trouble identifying the gist of what they need to do.
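To make the idea concrete, here is a minimal, hypothetical sketch of what such a suppression decision could look like: the robot scores each candidate message and sends it only when the estimated benefit to the human teammate outweighs the attention and bandwidth cost. This is not the MIT team's actual method; the feature names, weights, and thresholds below are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class Message:
    """A candidate update a rescue robot could send to a human teammate."""
    urgency: float      # 0..1, how time-critical the content is
    relevance: float    # 0..1, how relevant to the human's current task
    redundancy: float   # 0..1, overlap with what the human already knows
    size_cost: float    # relative bandwidth/attention cost of sending it

def should_send(msg: Message, human_load: float, threshold: float = 0.0) -> bool:
    """Hypothetical utility test: send only if the estimated benefit of the
    message outweighs its cost, scaled by how overloaded the human already is."""
    benefit = msg.urgency * msg.relevance * (1.0 - msg.redundancy)
    cost = msg.size_cost * (1.0 + human_load)  # a busy teammate pays more per message
    return benefit - cost > threshold

# A highly urgent, novel observation gets through even when the human operator
# is already saturated; routine, redundant telemetry does not.
critical = Message(urgency=0.9, relevance=0.8, redundancy=0.1, size_cost=0.2)
routine = Message(urgency=0.2, relevance=0.4, redundancy=0.7, size_cost=0.2)
print(should_send(critical, human_load=0.8))  # True
print(should_send(routine, human_load=0.8))   # False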

In other words, it’s a project that’s developing approaches under which robots might, as an operational matter, divulge information to humans on a “need to know” basis or withhold it altogether. Though the team-effectiveness rationale for this makes perfect sense, it raises an uncomfortable question: In scenarios where robots are using the “need to know” criterion to algorithmically throttle the information they provide to humans, will we ever be able to trust them entirely? Even though their algorithmic hearts may not be lying to us, is their failure to be entirely candid and transparent in all circumstances a trust-killer?

Humans, of course, also engage in “need to know” communication throttling under similar circumstances. But that doesn’t always destroy the mutual trust that’s essential to team effectiveness. Recognizing this, the researchers have used machine learning to find communication-throttling patterns within all-human teams engaging in equivalent (albeit virtual) rescue missions. They will use the results to build the interaction logic that guides robots in these same scenarios.
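As a rough illustration of that learning step, the sketch below (hypothetical, not the researchers' actual model) fits a simple classifier on logged human decisions about which messages to pass along, using made-up features such as urgency, relevance, redundancy, and teammate workload. A robot could then consult the fitted model to decide whether to send or suppress its own candidate messages.

# Hypothetical sketch: learn a send/suppress policy from logs of all-human
# rescue teams. Each logged message is reduced to simple features, and the
# label records whether the human teammate actually chose to pass it along.
# The features and the model choice here are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: urgency, relevance, redundancy, teammate workload
X = np.array([
    [0.9, 0.8, 0.1, 0.3],  # shared
    [0.7, 0.9, 0.2, 0.8],  # shared
    [0.2, 0.3, 0.8, 0.7],  # suppressed
    [0.1, 0.5, 0.9, 0.4],  # suppressed
    [0.8, 0.6, 0.3, 0.9],  # shared
    [0.3, 0.2, 0.6, 0.8],  # suppressed
])
y = np.array([1, 1, 0, 0, 1, 0])  # 1 = the human chose to communicate it

policy = LogisticRegression().fit(X, y)

# A robot could then gate its own candidate messages with the fitted policy.
candidate = np.array([[0.6, 0.7, 0.2, 0.9]])
print(policy.predict(candidate))        # 1 means send, 0 means suppress
print(policy.predict_proba(candidate))  # confidence behind the decision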

But the fact that some robots will be capable of autonomous operation means that they will have the power, under pre-programmed circumstances, to act first, perhaps without asking for explicit permission, and then explain their actions later. What that will mean operationally is that we’re creating a world where, under some emergency circumstances, our robotic team members may briefly have more knowledge than we do about what’s going on, even though we’ve programmed them for total candor.

There may simply be no time for them to explain or ask permission when lives are hanging in the balance. And if they had to wait for us to comprehend the urgency of the situation, those lives may be lost.

We will need to trust them to do the right thing and then report back to us when the situation has stabilized.
