Perspective assigns each comment a score based on the impact it could have on the discussion. The technology flags comments as "toxic" at different levels, such as "Liberals live in bubbles and deserve to lose" or "It's a shame that Donald Trump has been elected. You can never underestimate the stupidity of the interior of the United States." A comment like "You're a stupid idiot" would be rated at the highest level of toxicity. Google will release the technology as an API for media sites. What is interesting here is that each site can define how strict the "toxicity" filter is and what to do with problematic comments – it can simply reject comments that are very harmful, or push "moderately harmful" ones to the bottom of the list, for example.
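To make the workflow concrete, here is a minimal sketch of how a site might request a toxicity score and apply its own thresholds. It assumes the publicly documented Comment Analyzer endpoint and the Python `requests` library; the API key placeholder, the threshold values, and the `handle_comment()` routing are illustrative assumptions, not part of Google's product.

```python
# Sketch: score a comment with Perspective and route it by a site-defined
# threshold. Endpoint and request shape follow Google's public docs; the
# thresholds and routing below are hypothetical.
import requests

API_KEY = "YOUR_API_KEY"  # assumption: issued via the Google Cloud console
ANALYZE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
    f"?key={API_KEY}"
)

def toxicity_score(comment_text: str) -> float:
    """Return Perspective's TOXICITY score (0.0 to 1.0) for a comment."""
    payload = {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
        "languages": ["en"],  # the model currently works in English only
    }
    response = requests.post(ANALYZE_URL, json=payload, timeout=10)
    response.raise_for_status()
    data = response.json()
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def handle_comment(comment_text: str) -> str:
    """Route a comment using site-defined thresholds (hypothetical values)."""
    score = toxicity_score(comment_text)
    if score >= 0.9:      # very harmful: reject outright
        return "rejected"
    if score >= 0.5:      # moderately harmful: demote in the thread
        return "demoted"
    return "published"

if __name__ == "__main__":
    print(handle_comment("You're a stupid idiot"))  # likely scores near the top
```

The key point is that Perspective only returns a score; each publisher decides what the cutoffs mean for its own comment section.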

Google says it is already testing Perspective at The New York Times, which receives about 11,000 comments daily. The newspaper employs a dedicated staff to moderate comments before they are published. With Perspective, which learns how harmful a comment is from human ratings, moderation work can become more efficient. For now, Perspective works only in English and only detects toxic language. Next year, the plan is to expand the technology to other languages and to rate comments on other factors, such as whether a message is off topic or adds nothing to the discussion.

