Nov 28 2017   11:46AM GMT

AI meets the comments section, and the results aren’t in yet

Nicole Laskowski


The comments section at the bottom of news stories, blogs and Instagram posts has become a place for incivility. Without the resources to monitor every single comment written, news organizations such as NPR are beginning to rely on a different tactic: They’re disabling the feature altogether.

But doing so has consequences. “There’s actually a contraction of space online to meet each other and exchange ideas,” said Yasmin Green, director of research and development at Jigsaw, an incubator within Alphabet Inc. (parent company of Google) that attempts to develop technology to solve geopolitical problems.

During a fireside chat at the recent EmTech conference in Cambridge, Mass., Green described how Jigsaw has built a tool, dubbed Perspective, that can flag “toxic” comments. Perspective is an API that relies on artificial intelligence. Specifically, it uses natural language processing (NLP), which, unlike keyword-based systems, relies on patterns to understand context. Perspective can identify tone and learn slang, distinguishing between, say, “you’re killing it today” and “I’m going to kill you today,” according to Green.
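In practice, Perspective is a REST endpoint: a client posts a comment along with the attributes it wants scored, and the response contains a probability between 0 and 1 for each attribute. The sketch below builds such a request body and pulls the toxicity score out of a response; the endpoint URL and field names follow Perspective's published v1alpha1 schema, but the sample responses and their score values are made up for illustration.

```python
import json

# Perspective's analyze endpoint (v1alpha1); a real call also needs an API key.
ANALYZE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_request(text):
    """Build the JSON body for a TOXICITY analysis request."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def toxicity_score(response):
    """Extract the overall toxicity probability (0..1) from a response dict."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Illustrative responses -- these score values are invented, not real API output.
benign = {"attributeScores": {"TOXICITY": {"summaryScore": {"value": 0.03}}}}
hostile = {"attributeScores": {"TOXICITY": {"summaryScore": {"value": 0.92}}}}

print(json.dumps(build_request("you're killing it today"), indent=2))
print(toxicity_score(benign) < toxicity_score(hostile))
```

A moderation pipeline would compare the returned score against a site-chosen threshold to decide whether a comment is held for human review.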

Interest in the tool, so far, is encouraging. Jigsaw is partnering with news organizations like The Economist and The New York Times to moderate the comments section more efficiently and encourage community discussion. But applying AI to moderate language is still a work in progress.

Perspective was trained on internet data, and that can introduce a new wrinkle: human bias. At one point, the tool began to identify words like gay, feminism and Muslim as toxic — that is, as words making people want to leave a conversation. These are terms that, online at least, are “disproportionately skewed toward comments that have a negative effect on people,” Green said. The model started to assume the words intrinsically had negative properties.

So, according to Green, the model had to be retrained on news articles that mention terms like these in a neutral way to remove the bias and, in the greater scheme of things, to keep discussion forums open. “The goal, of course, is to expand the space we have online to meet each other to create more inclusive conversations,” she said.
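The failure mode Green describes, and the retraining fix, can be reproduced with a toy word-level model. Below, a word's "toxicity" is simply the fraction of training comments containing it that were labeled toxic: on skewed data an identity term scores high, and mixing in neutral news-style mentions pulls the score back down. The comments, labels and model here are hypothetical and far simpler than Perspective's actual NLP; they only illustrate the statistical mechanism.

```python
def word_toxicity(corpus, word):
    """Estimate P(toxic | comment contains word) by simple counting."""
    labels = [label for text, label in corpus if word in text.split()]
    if not labels:
        return 0.0
    return sum(1 for label in labels if label == "toxic") / len(labels)

# Hypothetical skewed training data: the identity term appears mostly in attacks.
skewed = [
    ("gay people should not exist", "toxic"),
    ("being gay is disgusting", "toxic"),
    ("ugh the gay agenda again", "toxic"),
    ("my gay friends threw a party", "ok"),
]

# Hypothetical neutral mentions, akin to the news articles used for retraining.
neutral = [
    ("the gay rights march drew thousands", "ok"),
    ("a gay couple opened the bakery", "ok"),
    ("the gay community celebrated the ruling", "ok"),
]

print(word_toxicity(skewed, "gay"))            # 0.75 on skewed data alone
print(word_toxicity(skewed + neutral, "gay"))  # ~0.43 after adding neutral uses
```

The point is that the word itself never carried toxicity; the training distribution did, which is why rebalancing the data, rather than editing the model, was the fix.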

Perspective is now available to the public — and so is the chance to break the model again. “Interestingly, when you offer a group of smart people an AI to use, their instinct is to see if they can trick it,” Green said. “So please do try and trick it because that’s actually very helpful to us.” Even with artificial intelligence, perfection is a goal, not a destination.
