On 20 May 2018, Matt Hancock MP, then the Secretary of State at the Department for Digital, Culture, Media and Sport, announced the intention of Theresa May’s government to ‘make sure the UK is the safest place to be online’. Demonstrating the scale of such an undertaking, it took over five years and almost as many Prime Ministers until, on 26 October 2023, the Online Safety Bill received Royal Assent.

The Online Safety Act 2023 (‘the Act’) is a sizeable piece of legislation. In addition to the regulation of online harms, the Act has brought in changes to the communications offences. In that respect, it has had an immediate effect; the first conviction for the new offence of cyberflashing occurred in March 2024.

However, it will take longer to understand fully how much of an impact the Act’s core focus – imposing a regime of systems and processes on various types of digital platform – will have on the digital landscape, including the extent to which it stymies users’ rights to freedom of expression. Such rights include the right to receive information, as well as to impart it.

Overview of the Act

The Act potentially regulates two classes of online platform. First, there are those, such as social media platforms, which fall within the definition of a ‘user-to-user service’ because they enable users to encounter material that other users upload to the platform or generate directly on it. Secondly, there are those which can be classed as a ‘search service’ because the platform can be used to search the contents of more than one website or database.

In addition, each class of platform will only be regulated by the Act if it also has ‘links with the UK’. Broadly, that means the service has a significant number of UK users, the UK is one of its target markets, or it is capable of being used in the UK and there are ‘reasonable grounds to believe that there is a material risk of significant harm’ to individuals in the UK presented by content that is hosted on the platform or can be found through it. There are also further permutations and exclusions beyond this article’s scope.

OFCOM has estimated that the Act’s prescriptive requirements will apply to more than 100,000 services of various sizes. Some, like the major search engines and social media platforms, are at the heart of the daily lives of many people.

Where are we now?

The regulatory aspect of the Act will, and is intended to, take longer to implement than the new communications offences. Upon being designated the online harms regulator, OFCOM set out a three-year ‘roadmap to regulation’, creating a staged implementation of guidance and codes of practice which elaborate on the statutory provisions. That roadmap is presently due to conclude in Spring 2025.

Potentially regulated organisations, and the lawyers engaged to advise them, are keeping a close eye on the development of the codes of practice. Adherence will create a presumption of compliance with the statutory duties. To that end, the codes will go hand in hand with risk assessments that regulated platforms must undertake to identify risks their services might create for users and, subsequently, demonstrate that systems and processes have been put in place to mitigate such risks.

Once the first codes of practice – which focus on the illegal harms duties relating to matters such as terrorist content – are laid before Parliament at the end of this year, the practical effect of the Act should become clearer.

The Act and freedom of speech

One area of particular interest is the extent to which the Act may create a chilling effect on online speech and access to information. This concern flows from the requirement imposed on regulated entities to identify, mitigate and manage risks of harm arising from illegal and legal but harmful content that can be accessed on their platforms, and from related activities that take place through their platforms.

Regulated organisations have good reason to ensure that they do not fall foul of their obligations under the Act. OFCOM’s supervisory role comes with teeth: it has been empowered to impose substantial financial penalties of up to £18 million or 10% of a company’s relevant global turnover, whichever is greater. It can also impose ‘business disruption measures’, such as requiring a regulated organisation’s payment provider to withdraw or limit access to its services in the UK until such time as OFCOM permits otherwise.

With such significant sanctions available, the natural tendency for regulated entities might be to take a zealous approach to removing potentially offending material from their platforms, at the expense of user interaction with such material.

However, the Act, at least ostensibly, does provide some counterbalance. In addition to creating obligations relating to online harms, the Act also imposes various requirements relating to freedom of speech: all regulated entities must pay ‘particular regard to the importance of protecting users’ right to freedom of expression within the law’ when deciding on, and implementing, safety measures and policies.

In addition, regulated organisations designated as most important or influential – ‘Category 1’ services – are required to adhere to additional duties relating to protecting news publisher content, journalistic content and content ‘of democratic importance’.

While OFCOM can take enforcement action for systemic breaches of what might loosely be described as the ‘free speech duties’, regulated organisations may well still place greater importance on avoiding having their feet held to the regulatory fire for breaches of the harms duties than on guarding against perhaps less readily identifiable breaches of the free speech duties.

Users of regulated platforms are not left without a mechanism to seek redress if they consider that their rights to freedom of expression have been interfered with as a result of the Act. All regulated organisations are required to implement a complaints procedure, and that procedure must include addressing complaints which assert that the platform is not complying with the free speech duties.

Category 1 platforms are also required to put systems in place to ‘empower’ adult users, where proportionate, to enable those users to encounter content that might otherwise be made inaccessible due to compliance with the harms duties.

Systemic failure to comply with such complaints or empowerment requirements can, in the same way as a systemic breach of the duties relating to harmful material, give rise to enforcement action by OFCOM.

Only time will tell, however, whether this mechanism is effective in vindicating users’ complaints. Its effectiveness depends upon the prospect of OFCOM taking enforcement action in respect of problems identified by users. If no such action is taken – or the prospect of such action is insufficiently coercive on a regulated platform – then a user who, for example, finds their content persistently removed or their access blocked has no individual right to bring an action against the regulated provider concerned.

Outlook

The online harms landscape continues to evolve. The completion of OFCOM’s roadmap in 2025 will be the next significant step. OFCOM is alive to tensions such as the concern that compliance with the harms duties could chill free expression.

It is too early to say whether that balance will be struck satisfactorily. However, given the scrutiny that has dogged the proposal for an online harms framework from its inception, it would be surprising if OFCOM did not take steps to protect freedom of speech online as the Act’s impact becomes clearer.