Twitter is tweaking its review policies to go beyond measuring the health of conversations on the social network, adding a focus on what it calls "behaviors that distort and detract from the public conversation." It expects that a large percentage of these tweets and accounts will not go far enough to break its policies, and therefore cannot be suspended.
Currently, Twitter filters replies that it classes as potentially offensive into a hidden "Show more replies" section at the end of a thread. Under the newly announced process, still carried out by a combination of human reviewers and machine learning, tweets that disrupt conversations and are frequently reported for abuse will also be shifted down into that section.
In addition to reported tweets, Twitter will also pay more attention to "signals" indicating whether messages should be flagged. These include accounts without a verified email address, multiple accounts created by the same person, spammy behavior, and indications of coordinated attacks.
Twitter explained: "These signals will now be considered in how we organize and present content in communal areas like conversation and search. Because this content doesn't violate our policies, it will remain on Twitter, and will be available if you click on 'Show more replies' or choose to see everything in your search setting. The result is that people contributing to the healthy conversation will be more visible in conversations and search."
In early tests, Twitter saw a 4 percent drop in abuse reports from search and an 8 percent fall in reports from conversations.