European Union policymakers have endorsed the world’s first comprehensive AI regulation, covering tools like ChatGPT and biometric surveillance, set to take effect next month. The AI Act classifies AI systems into risk categories: high-risk systems, such as those affecting health, safety, and democracy, must meet stringent requirements, including fundamental rights assessments. Lower-risk systems face only minimal transparency obligations, such as labeling AI-generated content. The use of AI in law enforcement is restricted to specific serious crimes and threats, striking a balance between utility and privacy.
General-purpose AI (GPAI) systems and foundation models will be subject to lighter transparency requirements, such as technical documentation and compliance with EU copyright law. Those posing systemic risks must additionally conduct model evaluations and risk assessments and report serious incidents to the European Commission. Prohibited AI practices include biometric categorization based on sensitive characteristics, untargeted scraping of facial images, emotion recognition in sensitive settings such as workplaces and schools, and systems that manipulate free will. The AI Act will be enforced by an AI Office within the European Commission, with penalties for violations ranging from €7.5m to €35m, depending on the severity of the infringement and the size of the company.