Test your best insults against Perspective, the anti-troll AI

Date: 24 February 2017

Perspective, Google’s new anti-trolling algorithm, can detect toxicity in internet comments and filter them for you.

By: Avery Thompson

One of the biggest problems in online discourse is moderating comment sections. Hateful and derailing comments increasingly dominate online discussion, arriving so fast that human moderators can't keep up. To help solve this problem, Google is turning to artificial intelligence.

Engineers at Google’s Jigsaw division, which focuses on cybersecurity, developed Perspective, an algorithm that can sort online comments based on “toxicity” as rated by other users. Perspective is currently used by Wikipedia and The New York Times, among others, to clean up their comment sections and help overwhelmed moderators.

Today, Google opened up Perspective so that even more people could use it. The tool is now free and open source, meaning anyone can build it into their own comment system for any reason.

In addition to making the code publicly available, Google also created a demo of the technology that you can try. On the site, you can see what happens when you filter comments by toxicity, as determined by Perspective, and write your own comments to see how the AI rates them.
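For developers, Perspective is exposed as a REST API. As a rough illustration, here is a minimal Python sketch of a request to its comments:analyze method, which returns a toxicity score between 0 and 1; the API key is a placeholder, and the helper function name is ours:

```python
import requests

# Perspective's comment-analysis endpoint (v1alpha1 at the time of writing).
API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
API_KEY = "YOUR_API_KEY"  # placeholder: request a key from Google to use the API


def toxicity_score(text: str) -> float:
    """Ask Perspective to rate a comment; returns toxicity from 0.0 to 1.0."""
    body = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(API_URL, params={"key": API_KEY}, json=body)
    response.raise_for_status()
    result = response.json()
    return result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


# Example: a friendly comment should score low, an insult should score high.
print(toxicity_score("You make a fair point, thanks for explaining."))
print(toxicity_score("You are an idiot."))
```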

Perspective seems to be pretty good at weeding out the worst of the worst. Obviously hateful messages are rated very toxic, and the service reliably filters them out. Comments containing profanity or mean-sounding words receive high toxicity ratings, and most negative comments are rated similarly toxic.
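The filtering behavior the demo shows amounts to applying a threshold to that score. A sketch of the idea, reusing the hypothetical toxicity_score helper above:

```python
def filter_comments(comments: list[str], threshold: float = 0.8) -> list[str]:
    """Keep only comments whose Perspective toxicity falls below the threshold."""
    return [c for c in comments if toxicity_score(c) < threshold]
```

Lowering the threshold filters more aggressively, which is exactly the trade-off the demo's slider lets you explore.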

But Perspective is still in its infancy, and there are many ways to fool the algorithm. Most importantly, it seems to have trouble with context. A word like "stupid" can earn a comment a high toxicity rating even when the opinion it expresses is positive. Conversely, a comment that couches a hateful opinion in pleasant-sounding words will probably slip past the filter.

Google says its algorithm will become more sophisticated as more people give it feedback, which means it may soon be better at discerning intent. Until then, people can still send hateful messages, as long as those messages are a little more clever.


This article was originally written for and published by Popular Mechanics USA.