The article discusses a new framework for assessing the compliance of AI models with the EU's AI Act. The framework, developed by LatticeFlow AI, is designed to test whether AI systems align with the requirements set out in the EU AI Act, which aims to regulate AI development and deployment.
Here are some key takeaways from the article:
- Compliance gaps: The framework highlights significant compliance gaps in current AI models, particularly with regard to fairness, privacy, and cybersecurity.
- Prioritizing capabilities over compliance: Many AI developers have prioritized building high-capability models over meeting regulatory requirements, leaving critical areas such as fairness and cybersecurity under-addressed.
- LatticeFlow’s framework: The LatticeFlow team has developed an open-source framework that assesses AI systems’ compliance with the EU AI Act. It includes benchmarks covering several aspects of AI models (a minimal, hypothetical sketch of this kind of per-category scoring follows the list):
  - Fairness
  - Privacy
  - Cybersecurity (at both the model and system levels)
  - Copyright and intellectual property protection
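To make the benchmark idea concrete, here is a minimal Python sketch of how per-category scores could be rolled up into a compliance report. Everything in it is an assumption for illustration: the `BenchmarkResult` record, the `compliance_report` function, the 0.5 flag threshold (echoing the "below 50%" figure quoted later), and the demo scores are all hypothetical and do not reflect LatticeFlow's actual API or data.

```python
from dataclasses import dataclass


# Hypothetical result record; LatticeFlow's real framework defines its own schema.
@dataclass
class BenchmarkResult:
    category: str  # e.g. "fairness", "privacy", "cybersecurity", "copyright"
    score: float   # normalized to 0.0-1.0, higher = more compliant


def compliance_report(results: list[BenchmarkResult],
                      threshold: float = 0.5) -> dict[str, dict]:
    """Average scores per category and flag any category that falls
    below the (assumed) 50% pass threshold."""
    by_category: dict[str, list[float]] = {}
    for r in results:
        by_category.setdefault(r.category, []).append(r.score)

    report = {}
    for cat, scores in by_category.items():
        avg = sum(scores) / len(scores)
        report[cat] = {"avg_score": avg, "flagged": avg < threshold}
    return report


if __name__ == "__main__":
    # Made-up scores, for illustration only.
    demo = [
        BenchmarkResult("fairness", 0.42),
        BenchmarkResult("privacy", 0.71),
        BenchmarkResult("cybersecurity", 0.48),
        BenchmarkResult("copyright", 0.66),
    ]
    for cat, info in compliance_report(demo).items():
        status = "BELOW THRESHOLD" if info["flagged"] else "ok"
        print(f"{cat:<15} {info['avg_score']:.2f}  {status}")
```

A real evaluation would draw these scores from the framework's own benchmarks and map each category to the specific requirements of the Act, rather than applying a single uniform threshold.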
- Community engagement: LatticeFlow invites researchers, developers, and regulators to contribute to and improve the framework, so that it can serve organizations across various jurisdictions.
- Industry impact: The development of this framework is significant as it highlights the need for AI developers to prioritize compliance alongside capability.
Notable points from the quotes in the article:
- "Predominantly been optimized for capabilities rather than compliance" (Martin Vechev, founder and scientific director at INSAIT)
- "Notable performance gaps" in fairness and cybersecurity, with many models scoring below 50% (Tsankov)
- Benchmarking LLMs in areas like copyright and privacy remains challenging because current evaluation approaches are limited (Tsankov)