A battle is unfolding in Brussels as the European Commission invites public input on how to classify high-risk AI systems under the EU AI Act. Launched on 6 June 2025, the consultation aims to determine which AI tools are dangerous enough to warrant strict regulation, and who should be held responsible when things go wrong.
Key Questions:
1. What’s High Risk? Industry and regulators disagree over how to interpret “high-risk”: some fear regulatory overreach, whilst others worry about loopholes.
2. GDPR Crossfire: Many high-risk systems process personal or biometric data, pulling them into dual compliance with both the AI Act and the GDPR, which creates a complex legal challenge.
3. Enforcement Gap: National authorities must be ready to oversee AI regulation by August 2026, but questions remain about consistent enforcement across EU member states.
Guidelines are expected to be finalised by late summer. The window for public comment closes 18 July 2025, and with it, a rare chance to shape how we balance AI innovation with rights protection. Share your input on how you think high-risk AI systems should be classified to help the EU make AI safer.