“Automatically” Detecting Online Abuse Requires an Editorial Eye

Tow Center
May 17, 2021


Effectively flagging abusive content on social media is hard, but journalists can share their insights — and their data — to improve computational tools.

By Susan E. McGregor

On April 9, 2021, women Twitter users from a range of professions, including law, journalism, academia, and advertising, participated in a one-day boycott to draw attention to the abuse that women regularly experience on the platform. According to a 2018 Amnesty International report highlighted by participant and startup lawyer Moe Odele, a woman is abused on Twitter every 30 seconds.

Women journalists reading these statistics won’t be surprised. In the course of researching my upcoming book, Information Security Essentials: A Guide for Reporters, Editors and Newsroom Leaders, I spoke with dozens of security specialists at news organizations and found that online harassment — which disproportionately, though by no means exclusively, affects journalists who present as women or people of color — was their most pressing security problem. Manifesting as everything from doxxing campaigns to relentless streams of otherwise “innocuous” messages, online harassment is a huge drain on journalistic resources, both in direct financial costs and lost productivity. According to recent research by the International Women’s Media Foundation, in fact, nearly one-third of women journalists who experience online abuse consider leaving the profession altogether. As IWMF executive director Elisa Lees Muñoz recently wrote for Nieman Lab, this type of “[o]nline violence has the same impact as violence experienced offline.”

In 2015, then-Twitter CEO Dick Costolo acknowledged how poor the company was at dealing with abuse on the platform, and since then Twitter has developed a handful of new tools — such as restricting who can reply to your posts — in an effort to improve women users’ experiences. But addressing online abuse at the scale and speed of social media is not a trivial problem, even for the platforms themselves. Scanning every message posted to a social media platform demands a computational solution, yet tasks that tend to be simple for humans — like interpreting the “gist” or tone of an exchange — are effectively impossible for computational systems. At best, computational tools can be trained to mimic this type of understanding by algorithmically “studying” large volumes of actual human communication that has been qualitatively labeled — by humans. But which humans do the labeling also matters, because the language of abuse is not “one-size-fits-all.” Research shows that anti-Semitic speech, for example, taps a different vocabulary than misogynistic or racist speech. This means that efficiently and automatically detecting abuse on social media platforms requires “teaching” computational tools to identify, among other things, the specific linguistic structures of the abuse targeting a particular community, using realistic, expertly labeled examples of the behavior you want to detect.
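To make the mechanics concrete, here is a minimal sketch of that training-on-labeled-examples approach in Python, using the scikit-learn library. The example messages, labels, and model choice are illustrative assumptions for this post, not the system any platform (or our project) actually uses.

```python
# Minimal sketch: training a text classifier on human-labeled examples.
# The messages and labels below are invented for illustration only; a real
# system would learn from a much larger, community-labeled dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each training example pairs a message with a human-assigned label
# (1 = abusive, 0 = not abusive). Who supplies those labels matters,
# because vocabularies of abuse differ by targeted community.
messages = [
    "You should be fired for writing this garbage",
    "Great reporting, thanks for covering this story",
    "Go back to the kitchen where you belong",
    "I disagree with your conclusion, but it's well argued",
]
labels = [1, 0, 1, 0]

# TF-IDF features plus logistic regression: a simple, common baseline for
# text classification; production systems typically use richer models.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

# Score a new, unseen message; the output is the model's estimated
# probability that the message is abusive.
print(model.predict_proba(["Nobody wants to read your opinions"])[0][1])
```

The quality of a model like this depends almost entirely on the examples it learns from, which is why the labeling step is the heart of the problem.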

In my previous research collaborations with computer scientists on this topic, one thing that stood out to me was how poor most of that training data was. So in early 2020, I once again reached out to colleagues in computer science — Julia Hirschberg and Sarah Ita Levitan — to see if we could create better tools for detecting online abuse by collecting better training data. With support from the Tow Center for Digital Journalism and the Brown Institute for Media Innovation, we designed a participatory data-collection process: rather than scraping data from public Twitter accounts and asking research assistants to make judgments about abusive content, we have been recruiting and paying women journalists to share and label their own Twitter conversations. With enough participants, we are confident that we can build a tool that better recognizes the linguistic characteristics of the online abuse targeting women journalists, and that may also help clarify the broader mechanisms of harassment used in these spaces. Once we can accurately and efficiently detect abuse, we’ll continue to work with the journalism community to understand what should be done with it. In some cases, simply blocking or quarantining it will be enough; in others, documenting and reporting it — to platforms and/or law enforcement — will be important. To that end, we are building relationships with apps like Block Party, which are already beginning to provide some of this much-needed functionality.
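As an illustration of how participant-provided labels can feed such a tool, here is a purely hypothetical record structure and conversion step. The field names and label values are invented for this sketch and do not describe our project's actual data format.

```python
# Hypothetical sketch of participant-labeled data; the field names and label
# values are invented here and do not reflect the project's real schema.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class LabeledMessage:
    text: str       # the message the participant received
    label: str      # the participant's own judgment, e.g. "abusive" or "not_abusive"
    thread: List[str] = field(default_factory=list)  # surrounding messages, since tone depends on context

def to_training_pairs(records: List[LabeledMessage]) -> List[Tuple[str, int]]:
    """Convert participant-labeled records into (text, label) pairs for a classifier."""
    return [(r.text, 1 if r.label == "abusive" else 0) for r in records]

# Example: one invented record becomes one training pair.
records = [LabeledMessage(text="Stick to writing recipes", label="abusive")]
print(to_training_pairs(records))  # [('Stick to writing recipes', 1)]
```

The key design choice is that the judgment in each record comes from the person who was targeted, not from an outside annotator guessing at context.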

Realistically, the problem of improving Twitter for women — and especially women journalists — will not be solved by any one effort. The goal of the April 9 boycott, according to organizer Heidi N. Moore, was to spur Twitter to action by exerting economic pressure, since the platform depends on user activity to generate advertising revenue. Opening up social media platforms for independent research and development is also key. On April 19, for example, Block Party founder Tracy Chou wrote in Wired about the need for platforms to create application programming interfaces (APIs) that would help companies like hers more easily build tools for improving Twitter users’ experience — a move that would reflect the origins of many of Twitter’s most successful interfaces, which were originally built by people outside the company. For our part, we hope that our work will demonstrate that creating effective, scalable tools to detect online abuse is well within the realm of technical possibility. Likewise, we hope to demonstrate to other researchers the importance of working in direct collaboration with communities on this type of work, especially to ensure the accuracy and fairness of the tools we produce. And we hope that you — whether you are a woman journalist or a colleague of one — will consider working with us to build a better, more effective online working environment for us all.

If you are a woman journalist, please consider participating in our research and/or sharing this information with your colleagues. We welcome participants from all types of media and freelance practices, and hope to make our work with women journalists just the first of many appropriately participatory research collaborations.
