Twitter appears to be working on a Reply filter that would limit exposure to offensive or harmful tweets.
According to a tweet by app researcher Jane Manchun Wong, the filter would stop users from seeing replies that contain harmful language.
Other users would still be able to view all responses.
It could be a useful way to automatically limit spam replies and focus on more meaningful engagement on the app.
Though the full details haven’t been shared, the feature could be based on the detection algorithm used by the app’s offensive reply warnings launched in February last year.
That feature, which prompts users to review potentially offensive tweets before posting, has been shown to be effective in 30% of cases.
Twitter’s reply filter could then be used to manage tweet replies, automatically block users and replies, mute accounts, and decide who can directly message a user.
It’s not entirely clear when the new features will roll out, but the screenshots shared by Wong may hint at a release coming soon.