In this article, I outline the legislation and its application for publishers and their staff before examining the positives and pitfalls of the law through the lens of the August disruption.
Almost 18 months in the making, the Online Safety Act introduced a series of new offences connected to online activity, with its main focus on making the internet safer for children and young people. However, many elements of the legislation also have potential benefit for journalists and writers who find themselves targeted maliciously online.
The act focuses on online service providers such as hosting providers, social media platforms and search engines. It also aims to protect online journalism content produced by recognised UK news publishers.
Under the act, online service providers have a duty to put in place systems that allow users to easily report content, as well as processes to improve user safety.
Ofcom has been appointed as the regulator tasked with enforcing the civil elements of the act. Ofcom is not responsible for moderating individual pieces of content; instead, it holds tech companies to account over their responsibilities to proactively assess the risk of harm to their users and to put systems and processes in place to keep users safe online.
The act extends and applies to the whole of the UK, except for some of the criminal offences, which apply differently within the home nations.
How the act affects news publishers
Recognised news publishers’ platforms, such as newspaper websites or news apps, are not bound by much of the legislation because their output falls within the remit of ‘journalistic content’ as outlined in the act. Below-the-line comments are also exempt. News providers will need to consider the act when introducing user-to-user features, such as interaction between users in online communities where, for example, those features divorce users from the organisation’s primary purpose of publishing news-related material. Content designed for under-18s will also need to be considered under the legislation.
News publishers’ content is exempt from platforms’ online safety duties, meaning social media platforms, for example, will not be incentivised to remove news publishers’ content. In fact, the legislation imposes a duty on social media companies to safeguard journalistic content shared on their platforms.
The law means platforms seeking to remove journalistic content must notify the recognised news publisher affected and allow an opportunity to appeal before removing or moderating the content or taking action against the publisher’s accounts.
Other provisions include:
- Protections for journalists: Content produced by recognised publishers should also be afforded the same protections if shared by individual journalists on their personal or professional social media. However, those accounts are not protected if individuals breach the act by sharing non-journalistic or illegal content.
- Criminal legislation: Criminal offences introduced by the act came into effect on 31 January 2024 — some of which may have direct benefit to journalists targeted by online violence or other harmful communications.
- False communications: Under the new law, it is now a criminal offence to send a message online containing information known to be false, with the intent of causing non-trivial psychological or physical harm to a likely audience, and without reasonable justification for the communication. That means it is now illegal to share disinformation (false information shared in the knowledge of its inaccuracy) if it is likely to cause harm. The new law benefits journalists by offering protection against falsehoods shared deliberately not only to smear their professional reputation but also to encourage online threats of violence intended to intimidate or silence them through abuse prompted by the false claim. However, the challenge in using this law would be proving the intent to cause non-trivial psychological or physical harm.
- Threatening communications: Among the online safety issues most frequently reported by journalists at Reach are death threats and threats of violence. Thanks to the Online Safety Act, these can now be reported as crimes if they meet the threshold for investigation. To meet that threshold, the message must threaten death or serious harm, and the sender must have intended (or been reckless as to whether) the recipient would fear the threat would be carried out. Again, proving intent or recklessness in this situation would be the challenge in applying the law.
- Sending pornographic images (AKA cyberflashing): Some of the most serious online safety reports I have supported journalists with in the past have included women journalists being sent pornographic images or videos on social and messaging platforms connected to their work. The new legislation has made this type of non-consensual sexual cyber attack illegal if the sender intended to cause distress or alarm or if the message was sent for their own sexual gratification.
- Deepfake pornography and intimate images: In April this year, the government promised to introduce a law criminalising the creation of deepfake pornography using another person’s image without their consent. This would strengthen a provision in the Online Safety Act that criminalises the sharing of intimate images and video without consent, deepfake content included. The promise was announced following the broadcast of an investigation by Channel 4 journalist Cathy Newman, who discovered her own image had been used in deepfake content. It is to be hoped the new Labour government will uphold the pledge to add the offence to the statute books.
Resources, funding and fast action are needed to make the Online Safety Act fit for purpose
When the juggernaut that was the Online Safety Act was finally passed, it was something of a relief. At times, the passage of the bill, which had begun its journey through the legislative process in March 2022, had looked almost impossible due to ongoing debate and disagreement over free speech and levels of harm. While these issues were absolutely important, it was also clear that robust laws were desperately needed to protect the most vulnerable in our society; without the legislation, the protections afforded to young people and children in online spaces were seriously lacking.
The promise in April to make non-consensual deepfake pornography a criminal offence following the excellent investigation by Channel 4 and Cathy Newman is also to be applauded — it provided a degree of hope that emerging digital threats can be promptly identified and answered with a robust legislative response.
However, some of my original concerns with the Online Safety Act from the viewpoint of protecting journalists remain: the law does not go far enough, and implementation is slow and challenging.
Ofcom is not expected to have the online safety regulation plan fully established until the second half of 2025, and although online services have been advised to make changes now in order to be ready for the plan’s requirements, the question remains as to exactly when the regulation will become a reality.
The aftermath of the horrific killing of three young girls in Southport in August is a prime example of where Ofcom’s enforcement of the act currently falls short.
In the hours and days following the attack, false information swirled online. Some of it was shared in genuine innocence or in ignorance of the falsity of the claims. Much of it was shared in the full knowledge that it was harmful disinformation masquerading as ‘the truth’, and that spreading it online would fuel riots and disorder across the UK.
Agitators jumped on the opportunity to further their own agendas and incite violence on the streets through disinformation and fearmongering. Along with targeted communities, journalists and members of the media covering the disruption found themselves in the firing line, in both online and physical spaces.
One example of online abuse stoking threats against journalists arrived via the Reach online safety reporting system: a tweet branding several staff members who had covered the disruption as ‘paedophiles’. The tweet had been reported to X by one of the staff members, who was told in an automated reply that the post did not violate X’s community standards.
On closer inspection of the profile that sent the tweet, it was clear the account holder had been heavily invested in stoking the civil unrest. The account had repeatedly sent Islamophobic tweets glorifying violence, inciting and condoning the destruction of religious buildings and police stations, and advising potential rioters what to wear to hide their identities. It was also sharing a significant amount of disinformation, apparently intended to create fear and incite violence. I reported the entire account to X and was asked by email to provide evidence to substantiate my report.
After a significant amount of time spent gathering links to prove the rule-breaking behaviour, X agreed there was a problem and ‘limited the visibility’ of some of the tweets I had reported, including the tweet attacking our journalists (which by this point was 12 days old, meaning any damage had already been done). Limiting visibility does not remove tweets entirely and does not appear to penalise the account in any way; it simply makes harmful tweets more difficult to find.
The Online Safety Act outlines platforms’ duties to assess the risk of harmful material shared on their services, to stop illegal content and to make it easy for users to report illegal content. Currently, because Ofcom’s laborious rollout of the regulations means many of the rules will not be enforced until at least the end of this year, X does not make it clear how to report content as illegal. Similarly, if the response to material inciting rioting and Islamophobia is anything to go by, it cannot yet be thoroughly assessing the potential harm or illegality of content published on its platform.
I made a report to Ofcom about the experience. However, as it is not possible to share links to evidence in the Ofcom form (to protect against the sharing of harmful material), it felt like a somewhat hollow gesture, more like telling tales on an anonymous bogeyman than lodging a substantive grievance. I received an email telling me the report had been received and not to expect a response to my individual complaint.
While it is understandable that individual complaints cannot be followed up, it feels as though there should be something more to the process. How can Ofcom encourage people to report such issues if those who do so continue to feel unheard?
Since the events of early August, Ofcom has pledged to invest even more in its online safety provision. I hope some of this investment will go towards making the reporting experience more transparent and, if possible, more worthwhile for the individual.
In terms of the criminal elements of the act, it has been refreshing to see so many people who openly used online platforms to incite violence being held accountable in the courts. But for the police to be able to respond seriously to criminal online harm, they need the people and resources to do the job. Too often, anonymous accounts are set up and accessed via a virtual private network (VPN), meaning the person behind the account is difficult to identify. From experience, I know that in many of these cases the police do not have the time or resources to investigate beyond a VPN; in circumstances such as those in August, the wealth of anonymous accounts inciting violence demonstrates the magnitude and difficulty of the task facing investigators.
While the big social media platforms and online service providers continue to generate huge profits and refuse to acknowledge their role in facilitating the dark side of the web, their inaction remains a case of ‘computer says no’, allowing anonymous criminals to get off scot-free.
This article was first published in InPublishing magazine.