MENU

LatticeFlow AI finds EU AI Act compliance gaps in DeepSeek models

LatticeFlow AI finds EU AI Act compliance gaps in DeepSeek models

Technology News |
By Jean-Pierre Joosting



LatticeFlow AI has used the COMPL-AI evaluation framework for Generative AI models under the EU AI Act to flag critical compliance gaps in two DeepSeek distilled models.

The evaluated DeepSeek models fell short in key regulatory areas, including cybersecurity vulnerabilities and bias mitigation challenges. However, they ranked high in toxicity prevention.

Developed by ETH Zurich, INSAIT, and LatticeFlow AI, COMPL-AI is a compliance-centered framework that translates regulatory requirements into actionable technical checks. It provides independent, systematic evaluations of public foundation models from leading AI organizations, including OpenAI, Meta, Google, Anthropic, Mistral AI, and Alibaba, helping companies assess their compliance readiness under the EU AI Act.

LatticeFlow AI used COMPL-AI to asses the EU AI Act compliance readiness of two DeepSeek distilled models — DeepSeek R1 8B (based on Meta’s Llama 3.1 8B), and DeepSeek R1 14B (built on Alibaba’s Qwen 2.5 14B). Not only did the evaluation benchmark these two models against the regulatory principles of the EU AI Act, it also compared the performance of the two models to their base models as well as to models from OpenAI, Google, Anthropic, and Mistral AI, all featured on the COMPL-AI leaderboard.

The LatticeFlow AI evaluation ranked the two evaluated models lowest in the leaderboard for cybersecurity, which showed increased risks in goal hijacking and prompt leakage protection in comparison to their base models. The DeepSeek models also ranked below average in the leaderboard for bias and exhibited significantly higher bias than their base models.

However, on a positive note, the evaluated DeepSeek models performed well in toxicity mitigation, outperforming their base models.

“As corporate AI governance requirements tighten, enterprises need to bridge internal AI governance and external compliance with technical evaluations to assess risks and ensure their AI systems can be safely deployed for commercial use,” said Petar Tsankov, CEO and Co-founder of LatticeFlow AI. “Our evaluation of DeepSeek models underscores a growing challenge: while progress has been made in improving capabilities and reducing inference costs, one cannot ignore critical gaps in key areas that directly impact business risks — cybersecurity, bias, and censorship.”

The full DeepSeek evaluation results are available at https://compl-ai.org.

If you enjoyed this article, you will like the following ones: don't miss them by subscribing to :    eeNews on Google News

Share:

Linked Articles
10s