Deepfake Danger: 61 Countries Pledge to Protect the Vulnerable

In a landmark move, data protection authorities from 61 countries, including the UK's ICO, have issued a joint statement addressing the dangers posed by AI-generated imagery. The unified stance responds to growing concerns over systems capable of creating realistic images and videos of individuals without their consent, with particular emphasis on protecting children from potential harm.

William Malcolm, the ICO's Executive Director of Regulatory Risk & Innovation, emphasised that while AI offers benefits, people should not have to sacrifice their identity, dignity, or safety. He stressed that responsible innovation requires putting people first, through meaningful safeguards that ensure autonomy, transparency, and control over personal data.

The statement highlights that public trust remains essential for successful AI adoption. This global regulatory alignment aims to provide certainty while sending a clear message: developers and deployers of AI systems must meet their obligations or face enforcement action.

Key Takeaways

• Trust is non-negotiable: Public confidence in AI depends on embedding privacy safeguards from the outset

• Global consensus matters: 61 authorities speaking together signals that AI governance is an international priority requiring coordinated action

• Children require special protection: Regulators explicitly highlight minors as particularly vulnerable to AI-generated imagery harms

• Accountability follows innovation: Those developing AI must anticipate risks proactively, or regulators will intervene to protect the public

Read more: Joint Statement on AI-Generated Imagery