
Should the UK Improve its Online Safety Laws?

Greta Sporcich


Image by Nevit Dilmen via WikimediaCommons


Last month, Technology Secretary Peter Kyle called UK laws on internet safety “very uneven” and “unsatisfactory,” raising the question of why these laws fall short and how they might be improved.


In today’s digital age, freedom of expression online has become more complicated than ever, as people have an unprecedented ability to express their views. The Online Safety Act 2023, passed under the previous Conservative government, is intended to protect children and adults online. The Act introduced new criminal offences, including encouraging or assisting serious self-harm, cyberflashing, threatening communications, and intimate image abuse.


The explanatory notes to the bill state that “at present most user-to-user and search services operating in the United Kingdom are not subject to any regulation concerning user safety.” 


The lack of regulation has particularly dangerous effects on young internet users. This January, Ian Russell, father of a teenager who took her own life after seeing harmful online content, addressed a letter to the PM. Russell argued that the Online Safety Act required amending and that a “duty of care” should be imposed on tech firms.


Initially, the legislation would have compelled companies to remove “legal-but-harmful” content, such as posts encouraging eating disorders. However, critics were concerned that removing too much content could amount to censorship. Conservative MP David Davis called it “the biggest accidental curtailment of free speech in modern history.” The provision was dropped, and platforms are instead required to give users more control over filtering the content they see.


While Peter Kyle claimed the act contained some “very good powers,” he also mentioned being “very open-minded” to making changes to the current legislation.


Mark Zuckerberg, CEO of Meta, which owns Facebook and Instagram, laid out plans in early January 2025 to get rid of fact-checkers in the United States. Zuckerberg claimed that Meta’s previous approach, utilising moderators on the platforms, was “too politically biased,” and that it was “time to get back to our roots around free expression.”


While this change to Meta’s platforms will apply only in the US, critics in the UK, including Ian Russell, have accused Zuckerberg of abandoning safety for a laissez-faire model. Peter Kyle commented that in the UK, “you abide by the law, and the law says illegal content must be taken down.”


Russell stated that in order to properly regulate social media in the UK, changes were necessary, or else “the streams of life-sucking content seen by children” would “soon become torrents - a digital disaster.”


Although protecting children and adults online is of the utmost importance, such protections cannot violate the law on free expression. Article 10 of the European Convention on Human Rights, given effect in UK law by the Human Rights Act 1998, protects citizens’ right to hold opinions and express them freely without government interference, including views shared through the internet and social media. However, in certain instances, such as expression encouraging racial or religious hatred, authorities are allowed to restrict this freedom.


UK criminal and civil law applies both online and offline, making the right to freedom of expression subject to a range of restrictions: the Malicious Communications Act 1988 criminalises “indecent or grossly offensive” communications, and the Public Order Act 1986 contains offences of stirring up hatred on the grounds of race, religion, or sexual orientation. Enforcing these offences online is intended to provide the protection that users need.


Overall, online safety is a complex issue. In its 2021 report, the House of Lords Communications and Digital Committee took the view that while freedom of expression was the “hallmark of free societies,” it was not an “unfettered right.” It is along this fine line that lawmakers must find the balance between preserving the right to freedom of expression and protecting online users from harm. While there is good reason to support imposing a “duty of care” on social media companies, lawmakers must proceed with caution in deciding which kinds of speech can and should be restricted online.
