The UK’s Online Safety Act, whose child safety duties came into force in July 2025, introduces stricter requirements for digital platforms to tackle harmful content such as pornography. While the legislation aims to improve online safety, especially for children, it has raised significant GDPR concerns.
Under the new law, platforms that host user-generated content, such as social media platforms, forums, and some messaging services, must take “proportionate measures” to prevent children from accessing harmful or inappropriate content. This includes implementing age verification or age estimation technologies, such as ID checks, facial age estimation, or third-party verification services.
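To make the privacy tension concrete, here is a minimal, purely hypothetical sketch in Python of a data-minimising age check: the platform hands the check off to an imaginary third-party verifier and keeps only a yes/no outcome and a timestamp, never the ID document, date of birth, or facial image. The verifier, its interface, and every name below are assumptions for illustration only, not any real service’s API or any approach mandated by the Act.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


# Hypothetical result returned by an imaginary third-party age verifier.
@dataclass(frozen=True)
class AgeCheckResult:
    over_18: bool          # the only attribute the platform actually needs
    checked_at: datetime   # when the check was performed


def third_party_age_check(id_document: bytes) -> AgeCheckResult:
    """Stand-in for a call to an external verification service.

    In a data-minimising design the raw ID document (or facial image)
    is processed transiently by the verifier and never persisted by
    the platform; only the boolean outcome comes back.
    """
    # Placeholder logic: a real service would actually inspect the document.
    looks_valid = len(id_document) > 0
    return AgeCheckResult(over_18=looks_valid,
                          checked_at=datetime.now(timezone.utc))


def record_verification(user_id: str, result: AgeCheckResult) -> dict:
    """Store only what is strictly necessary: user, outcome, timestamp."""
    return {
        "user_id": user_id,
        "over_18": result.over_18,
        "checked_at": result.checked_at.isoformat(),
        # Deliberately no copy of the ID document, date of birth,
        # or biometric data is retained by the platform.
    }


if __name__ == "__main__":
    outcome = third_party_age_check(b"<uploaded ID scan bytes>")
    print(record_verification("user-123", outcome))
```

The point of the sketch is what it omits: under this kind of minimisation the platform’s own records never contain the document or any inferred personal details, which is one way the GDPR tension discussed below could in principle be narrowed.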
The Act has met considerable backlash. The Information Commissioner’s Office has called for “greater clarity” on how platforms can lawfully process personal data under the Act. Civil liberties groups have also warned that vague definitions of “harmful content” could lead to over-removal, raising issues around data accuracy and proportionality. A parliamentary petition has since been launched, gathering more than 337,000 signatures, with the number still rising rapidly. It states that “this law compels platforms to monitor and censor content, including private messages, in ways that violate our rights to privacy and free expression”. Parliament is now required to consider the petition for debate, as growing public opposition raises questions about both democratic principles and privacy infringement.