Dr Rebecca Whittington is the Online Safety Editor at Reach plc, publisher of more than 130 national and regional titles in the UK, Ireland and the US, including The Mirror, The Express, the Manchester Evening News and the Liverpool Echo. Ahead of the general election, Reach has launched a campaign to raise awareness of the rise in election misinformation on social media. Rebecca spoke to us about the online harm journalists face in this landscape and its effects on them and the wider industry.

 

What effect do online harms have on journalists and who is most at risk?

A lot of it is anecdotal, but my day job is working with people who experience online harm, and I've done research into it as well. We know from research that's been done in the UK, as well as globally by UNESCO, that online harm has a significant chilling effect on journalists.

Essentially, it stops people from feeling confident and empowered to do their work. It means that they might choose alternative stories or alternative angles, or choose not to promote their work online, for fear of receiving hate, backlash or threats.

There's quite a lot of evidence as well, which may come out more during election times, of activity online to intimidate journalists, to prevent them from investigating certain stories or covering certain topics. Often we see that intimidation at a low or grassroots level, but it can still have quite a significant effect.

Any journalist can find themselves targeted in this way, but obviously journalists who have a byline, or who are prominent in the public sphere, will find themselves more vulnerable to it.

We do see the use of, for example, misogyny and sexual violence against women, much more so than we see against men working in journalism. We see racism, hate speech, hate crime and intimidation against people of different ethnicities, and also homophobia, transphobia and so on. So the people who were already fighting for equality within our industry often find themselves the target of this online hate.

I would say there are layers of online hate: any journalist might experience it, but if you are in a protected category, you're experiencing online hate in a doubled-down way, which makes it more difficult for you to do your job and to feel confident and empowered.

 

Have you seen any issues particular to AI or the rise of AI?

I wouldn't say I've seen significant issues at Reach, but it's obvious that AI is going to have an impact in this area. We have done quite a lot of work around preventing disinformation and misinformation in the run-up to the election, such as tackling doxxing and impersonation accounts, which are often set up with the intent of smearing a journalist by pretending to be that journalist and then behaving badly online. AI hasn't necessarily been involved in those activities, but there's obviously opportunity in the future for that to happen; I think it's something that we need to be mindful of across our industry.

AI obviously offers lots of opportunities, but it also poses significant threats, and I think those threats are still emerging. Women working at the forefront of broadcast journalism, for example, have their likenesses out there, and those kinds of images are going to be used to create deepfakes. But it's not something that I've seen evidence of at Reach.

 

Speaking of Reach and other media organisations - they come in all shapes and sizes, but what concrete steps should they be taking to tackle online harm?

I think a lot of organisations are taking steps. In the UK there's the National Committee for the Safety of Journalists, which has a working group with representation from multiple news organisations across the UK. It's brought an interesting and valuable group of people together to address this issue, from policing, government, industry, academia and so on. I really want to see that continue under the next government so that good work can thrive.

At Reach, we obviously have my role as Online Safety Editor, and the work I've done is extensive, because I'm there specifically to handle that particular issue. We have a reporting system; we have a support system. I do a huge amount of training in the business to really try and get people up to speed with how to protect themselves online and prevent being targeted by online harm. I also work with editorial teams on risk assessment: we do risk assessments for health and safety and for security, and we build online harm in as part of that. That's been very effective.

In an ideal world, large organisations like Reach and their equivalents would have somebody in a position like my own. Smaller organisations will obviously find that more difficult, and then we have the issue of individual freelancers as well. There are a lot of good organisations out there, such as the NUJ, Women in Journalism and the Rory Peck Trust, that provide really good resources around this. I feed into those quite a lot, because I do think that as a large organisation we have the opportunity to give something back in that sense.

 

And what is the police response like when journalists report online harms?

It's very varied. The Police Service of Northern Ireland and Police Scotland are taking steps to align their responses to journalists' reports of online threats. In England, obviously, we've got multiple forces. There are journalist safety liaison officers present in a lot of forces in England and Wales and elsewhere, but there needs to be more transparency around who those people are and how they can help. On International Women's Day this year, an appeal letter signed by more than 100 media leaders in the UK called for improvements to the recording of crimes reported to police by journalists. This was a joint appeal by Reach, Women in Journalism and Reporters Without Borders.

There needs to be a more joined-up approach by the police, as well as better documentation. At the moment, we don't know what the reporting figures are, because the way reports are categorised varies so much. Those things need to change for us to be able to really look at the crime reporting figures and start seeing where resources need to go in terms of a better response.

I think there needs to be better communication between journalists and policing, to help each other understand what the issues are that are being raised and what the police can do in terms of response. In my experience, a threat has to meet a criminal threshold for us to say it needs to be reported to the police. It's then reported, and it's up to the police to decide what criminal categories these things fall into and what thresholds they meet. This recommendation was one of many in a recently published report.

Cybercrime is so varied and it's still quite new in terms of policing and legislation. It's being categorised in different ways and responded to in different ways, and I know from the work I do, and from the research that's been done, that a threat of harm online can translate into a physical safety threat as well. Physical crimes and online crimes are intrinsically connected.

I sympathise with the police as well, because their resources are exceptionally tight. They've got to work out how and where to prioritise. A committee like the National Committee for the Safety of Journalists, for example, has the opportunity to help coordinate these changes. There need to be those voices coming in to demonstrate what the issue is and to help the police really examine it; they need that support from the outside as well.

 

What would be needed to force the platforms to take effective action on this?

Well, that's the million-dollar question, isn't it? Platforms are currently not taking responsibility. They are putting certain things in place that they can point to and say, 'these are our safety measures', which is the right thing to do. You need to have safety measures within the product you are selling if that product carries a potential safety risk.

I would love to know how we can get the platforms around the table and really take responsibility for this. There are people working within these absolutely huge organisations who do care about safety and who are invested in it, but I think it needs to come from the top down and currently I'm not seeing that. Safety is just not a priority.

These platforms suggest that they are not responsible for the content that is published on them. Every government has this question on its mind: how can we get them to take more responsibility? What's it going to look like in ten years' time if we don't?

I do feel strongly that we shouldn't have to tell these very wealthy, very powerful people to take more responsibility. Their position of power gives them the potential to change the world, and currently I don't feel they're changing it for the better. I think they're shirking that responsibility. So yes, I would love to know the answer to that question.

 

What would you say to someone who argues there is always going to be abuse and misinformation online?

I think that's true. It's like crossing the road: if you don't look both ways, you might get hit by a bus, but there are crossings in place to help people cross safely.

I do genuinely feel more could be done. Meta, for example, recently lowered the minimum age for WhatsApp to 13. They're not strengthening the safety of these platforms. They're encrypting DMs, making it difficult for anyone to actually investigate direct messages, and direct messages are one of the most commonly reported channels for online abuse, because it happens behind closed doors.

On the one hand, there are good resources: Meta has a decent journalist safety platform, and Twitter/X has lots of different safety functions. On the other hand, both have plenty of things that make those platforms unsafe.

There are always going to be people who try to get around the systems. There are always going to be people who try to use new technologies and new tools in ways that can cause harm. That's out of our control. But what we can do is identify the risks and put guardrails in place to stop those risks from going untethered and unchecked.

I genuinely think that there is more that could be done. Identity verification, for a start, because a lot of the problems I see connected to social media come from unverified or anonymous accounts. That person can use a VPN and nobody knows who they are; we're offering people a disguise to go and be harmful and abusive and then walk off scot-free.

I agree there are always going to be people misusing the systems, and that's just got to be accepted. Of course it has. But I do think those in positions of power have an opportunity to make a difference there as well.

 

You're the very first Online Safety Editor at Reach. What advice would you have for someone else starting in that kind of role? Or what advice would you give yourself when you were starting out?

I think, crucially, it's about getting to know the issues; you have to do quite a bit of listening before you start taking action. I did that when I started and I'm really glad I did, because it allowed me to find out from people what they were facing: what their worries were, what their concerns were, and where they thought there was good practice. It's not all about the bad things; it's about what's good and what has worked.

I think that's really, really important, because each organisation will be different. Reach is quite an organic organisation; it has grown over time. We've got very different platforms and portfolios, and very different groups of people working in lots of different locations, with different concerns about online safety.

The other thing we've managed to achieve at Reach is that in our system for online safety reports, a report is only closed at a point of understanding and agreement between the person who made the report and me; there's a point of closure for that person. We've done everything that we can do, we check in with them, make sure they're OK, and then we can close the report and they can move on.

Finally, I've learned over time that you'll have categories of online harm, but the individual response is unique. You can label something as a threat or backlash or doxxing, but what really counts is the individual and the support that individual needs in that moment. One person will say they're terrified, they're really worried, they're not sleeping and they're scared that this person might come to their home. Another person might report exactly the same thing and say, 'just thought you'd better know about it, I'm OK at the moment'. If I went in all guns blazing to the second person and gave them the same response as I would give the first, the second person probably wouldn't report again, because they'd feel overwhelmed and think it was a bit of an overreaction.

 

Is there anything else you'd like to add before we finish?

I've been in this job for two and a half years, and the things that are reported have changed over that time. We still see a lot of personal comments and threats coming in on social platforms like Facebook and Twitter, but there are other emerging things. A lot of the reported issues now come in via email, for example, or direct message, or through your phone. We have seen impersonation rise in the last year.

The threats do change over time and the things that are reported change over time; I anticipate that in a year's time it will be something else. So many things affect that, like the fact that we're in an election year. 

From my point of view, it's not a static job and it's not a static situation, and that's what makes it interesting. It means that, as media organisations, we need to keep moving to make sure we're addressing the current issues and not stuck in the past. Meta has moved away from its focus on news at the moment, which means fewer Meta reports are coming to me, because we're not as exposed there anymore; we're seeing other platforms being reported instead. Flexibility is an important part of this job: analysing what's happening over time and being able to respond and get in front of emerging trends.

 

Follow Rebecca on X/Twitter @RebeccaWMedia; find out more about Reach's user-facing campaign on spotting disinformation.

About Natalie Beale

Natalie is a Senior Editor for Cision, based in London. In addition to interviewing journalists and media industry experts, she manages the US and UK Media Moves newsletters, which showcase the latest journalist news and moves.