Enterprise IT Watch Blog

Jul 21 2016   9:13AM GMT

Surmounting huge hurdles to algorithmic accountability

Profile: Michael Tidmarsh

Tags:
Algorithms

Algorithm image via FreeImages

By James Kobielus (@jameskobielus)

Algorithms are a bit like insects. Most of the time, we’re content to let them buzz innocuously in our environment, pollinating our garden and generally going about their merry business.

Under most scenarios, algorithms are helpful little critters. Embedded in operational applications, they make decisions, take actions, and deliver results continuously, reliably, and invisibly. But on the odd occasion that an algorithm stings, encroaching on your privacy or perhaps targeting you with a barrage of objectionable solicitations, your first impulse may be to swat back in anger.

That image came to mind as I pondered the new European Union (EU) regulation that was discussed by Cade Metz in this recent Wired article. Due to take effect in 2018, the General Data Protection Regulation prohibits any “automated individual decision-making” that “significantly affects” EU citizens. Specifically, it restricts any algorithmic approach that factors a wide range of personal data—including behavior, location, movements, health, interests, preferences, economic status, and so on—into automated decisions.

Considering how pervasive algorithmic processes are in everybody’s lives, this sort of regulation might encourage more people to retaliate against the occasional nuisance using legal channels. The EU’s regulation requires that impacted individuals have the option to review the specific sequence of steps, variables, and data behind a particular algorithmic decision.

Now that’s definitely a tall order to fill. The regulation’s “right to explanation” requires a degree of algorithmic transparency that could be extremely difficult to ensure under many real-world circumstances. Algorithms’ seeming anonymity, coupled with their daunting size, complexity, and obscurity, presents a thorny problem of accountability. Compounding the opacity is the fact that many algorithms, whether machine learning models, convolutional neural networks, or something else entirely, are authored by an ever-changing, seemingly anonymous cavalcade of programmers over many years.
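To make the scale of that order concrete, consider what a single per-decision audit record would have to capture just to support a review of the “steps, variables, and data” behind one automated decision. The following is a minimal, hypothetical sketch; the schema, field names, and helper function are my own illustration, not anything mandated by the regulation or drawn from a real system:

```python
# Hypothetical sketch: the bare minimum a per-decision audit record might need
# to capture to support a "right to explanation" review. Field names are
# illustrative only, not drawn from the GDPR or any production system.
import json
import uuid
from datetime import datetime, timezone

def record_decision(model_id, model_version, inputs, derived_features, score, decision):
    """Build one audit record for a single automated decision."""
    return {
        "decision_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,                   # which algorithm made the call
        "model_version": model_version,         # which revision of that algorithm
        "inputs": inputs,                       # raw personal data considered
        "derived_features": derived_features,   # intermediate variables
        "score": score,                         # the model's numeric output
        "decision": decision,                   # the action actually taken
    }

# Example: one credit-style decision, serialized for an append-only audit log.
record = record_decision(
    model_id="loan_approval",
    model_version="2016-07-18.3",
    inputs={"age": 42, "postcode": "75001", "monthly_income": 3100},
    derived_features={"debt_to_income": 0.41, "recent_defaults": 0},
    score=0.37,
    decision="declined",
)
print(json.dumps(record, indent=2))
```

Even this toy record says nothing about the upstream data pipelines, feature transformations, and training runs that shaped the score, all of which a thorough review would also have to reconstruct.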

Throwing more decision scientists at the problem (even if there were enough of these unicorns to go around) wouldn’t necessarily lighten the burden of assessing algorithmic accountability. As the cited article states, “Explaining what goes on inside a neural network is a complicated task even for the experts. These systems operate by analyzing millions of pieces of data, and though they work quite well, it’s difficult to determine exactly why they work so well. You can’t easily trace their precise path to a final answer.”
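One workaround practitioners reach for, though it is no substitute for genuine transparency, is post-hoc, perturbation-based explanation: treat the trained model as a black box and measure how much its output moves when each input is suppressed. Here is a hypothetical sketch of the idea; the stand-in scoring function and the occlusion approach are my own illustration, not the method any particular vendor uses:

```python
# Hypothetical sketch of post-hoc, perturbation-based explanation: probe an
# opaque model from the outside and see how the score shifts when each input
# feature is zeroed out. The "model" below is a toy stand-in.
import numpy as np

def black_box_score(x):
    # Stand-in for an opaque trained model (e.g., a neural network).
    weights = np.array([0.8, -0.5, 0.3, 0.1])
    return float(1.0 / (1.0 + np.exp(-(x @ weights))))

def feature_influence(x):
    """Approximate each feature's influence by occluding it and re-scoring."""
    baseline = black_box_score(x)
    influences = []
    for i in range(len(x)):
        perturbed = x.copy()
        perturbed[i] = 0.0                     # occlude one feature
        influences.append(baseline - black_box_score(perturbed))
    return baseline, influences

score, influences = feature_influence(np.array([1.2, 0.4, -0.7, 2.0]))
print("score:", round(score, 3))
for i, delta in enumerate(influences):
    print(f"feature {i}: removing it shifts the score by {delta:+.3f}")
```

The catch, as the quoted passage implies, is that this kind of probe yields only a local, approximate story about one decision; it does not trace the millions of internal computations that actually produced it.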

Algorithmic accountability is not for the faint of heart, even among technical professionals who live and breathe this stuff. In many real-world distributed applications, algorithmic decision automation takes place across exceptionally complex environments. These may involve linked algorithmic processes executing on myriad runtime engines, streaming fabrics, database platforms, and middleware layers.

For example, this recent article outlines the challenges that Facebook faces in logging, aggregating, correlating, and analyzing all the decision-automation variables relevant to its troubleshooting, e-discovery, and other real-time operational requirements. In Facebook’s case, the limits of algorithmic accountability are clearly evident in the fact that, though it stores low-level messaging traffic in HDFS, this data can only be replayed for “up to a few days.”
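That replay window matters because decision logs grow without bound, and once records age out of retention there is simply nothing left to reconstruct. A hypothetical sketch of the general constraint follows; the retention policy and data structures are my own illustration, not Facebook's actual HDFS pipeline:

```python
# Hypothetical sketch of a retention window on an append-only decision log.
# Once records age out, the decision can no longer be replayed. This is an
# illustration of the general constraint, not any real company's pipeline.
from collections import deque
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=3)   # assumed "few days" replay window
log = deque()                   # append-only log of (timestamp, record) pairs

def append(record):
    log.append((datetime.now(timezone.utc), record))

def prune():
    cutoff = datetime.now(timezone.utc) - RETENTION
    while log and log[0][0] < cutoff:
        log.popleft()           # evicted records are gone for good

def replay(decision_id):
    prune()
    for _, record in log:
        if record.get("decision_id") == decision_id:
            return record
    return None                 # outside the window: no audit trail to present
```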

Now imagine that decision-automation experts are summoned to replay the entire narrative surrounding a particular algorithmic decision in a court of law, even in environments less complex than Facebook’s. In such circumstances, a well-meaning enterprise may risk serious consequences if a judge rules against its specific approach to algorithmic decision automation. Even if the entire fine-grained algorithmic audit trail somehow materializes, you would need to be a master storyteller to net it out in simple enough terms to satisfy all parties to the proceeding. Most of the people you’re trying to explain this stuff to may not know a machine-learning algorithm from a hole in the ground.

More often than we’d like to believe, there will be no single human expert–or even (irony alert) algorithmic tool–that can frame a specific decision-automation narrative in simple, but not simplistic, English. Check out this post from last year, in which I discuss the challenges of automating the generation of complex decision-automation narratives.

Even if you could replay automated decisions in every fine detail and with perfect narrative clarity, you may still be ill-equipped to assess whether the best algorithmic decision was made. Check out this recent article by Michael Kassner for an excellent discussion of the challenge of independent algorithmic verification.

Given the unfathomable number, speed, and complexity of most algorithmic decisions, very few will, in practice, be submitted for post-mortem third-party reassessment. Only some extraordinary future circumstance—such as a legal proceeding, contractual dispute, or showstopping technical glitch—will compel impacted parties to revisit those automated decisions.

And there may even be fundamental technical constraints that prevent investigators from determining whether a particular algorithm made the best decision. A particular deployed instance of an algorithm may have been unable to consider all relevant factors at decision time due to lack of sufficient short-term, working, and episodic memory. As Facebook’s Yann LeCun stated in this presentation, recurrent neural networks “cannot remember things for very long”—typically holding “thought vector” data structures in memory for no more than 20 seconds during runtime.
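The underlying reason is easy to see in miniature: a recurrent network's entire "memory" of everything it has processed is one fixed-size hidden vector that gets overwritten at every step. The following is a hypothetical sketch with toy dimensions and random weights, my own illustration rather than anything from LeCun's presentation:

```python
# Hypothetical sketch of why a recurrent network's memory is so limited: its
# only memory is one fixed-size hidden vector (the "thought vector"), which
# is blended away a little more at every step. Toy sizes, random weights.
import numpy as np

rng = np.random.default_rng(0)
HIDDEN = 8                                     # fixed memory capacity
W_in = rng.normal(size=(HIDDEN, 4)) * 0.1      # input-to-hidden weights
W_h = rng.normal(size=(HIDDEN, HIDDEN)) * 0.1  # hidden-to-hidden weights

def rnn_step(h, x):
    """One update: the old state is folded into the new one, never archived."""
    return np.tanh(W_h @ h + W_in @ x)

h = np.zeros(HIDDEN)                           # the network's only memory
for t in range(1000):                          # a long input stream
    x = rng.normal(size=4)
    h = rnn_step(h, x)
# After many steps, the influence of early inputs on h has all but vanished,
# so there is nothing left to "replay" about how they shaped earlier outputs.
print(h)
```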

In other words, algorithms, just like you and me, may have limited attention spans and finite memories. Their bias is in-the-moment action. Asking them to retrace their exact decision sequence at some point in the indefinite future is a bit like asking you or me to explain why we used a particular object to swat a particular mosquito nine months ago.
