
European lessons from DeepSeek AI – updates

The emergence of Chinese AI company DeepSeek holds some key lessons for the European industry, say European AI entrepreneurs from Fractile and Graphcore to Axelera.
DeepSeek, based in Hangzhou in eastern China, has developed two large language models (LLMs) that challenge the performance of those from OpenAI, Perplexity and Google with a fraction of the computing resources. The company used reinforcement learning, without the usual reliance on supervised fine-tuning, to give the models stronger reasoning capabilities. It is also making the technology open source under the MIT license.
The models, with distilled versions of up to 70bn parameters, were trained on Nvidia's H800, a lower-performance GPU produced for the Chinese market, as higher-performance chips such as the H100 are banned from being shipped to China by the US government. Recent reports nevertheless claim DeepSeek has as many as 50,000 H100 processors available.
The seminal paper on the DeepSeek technology is here.
“DeepSeek is not the first to show that a talent-dense team can go toe-to-toe with the leading, most capitalised AI model companies. In Europe, Mistral was able for much of 2024 to provide open source models that rivalled Meta’s open Llama models, yet were trained on a fraction of the budget,” said Walter Goodwin, CEO and Founder of UK AI startup Fractile which recently saw investment from Pat Gelsinger, former CEO of Intel.
“Europe has a high talent density and is less constrained on compute availability than China, and so DeepSeek should be a wake-up call that proves Europe can also afford to play at the leading edge of AI.”
The open source nature of the DeepSeek models has already hit the share price of US competitors that charge for their AI chatbot services. WiMi Hologram Cloud in China is already developing intelligent programming tools based on DeepSeek that can automatically complete code, analyze code quality and offer optimization suggestions, helping programmers write better code more efficiently.
“However, while DeepSeek has kept training costs for its model staggeringly low, it’s important to point out that it’s not had a revolutionary impact on inference costs,” said Goodwin at Fractile which is developing an inference chip. “What we’re seeing here is evidence of a flip, where the cost of training AI models becomes increasingly marginal compared to the cost of inference. It’s inference where we’ll see increased competition for incumbents like Nvidia in the long-run, as the costs remain exceptionally high.”
Nigel Toon, CEO of UK AI chip designer Graphcore, flagged the potential of DeepSeek last month. This is significant as Graphcore is owned by SoftBank, which is leading the US AI project Stargate.
“Another stunning example is the breakthroughs that have been achieved by the Chinese research company DeepSeek-AI, who have also used a reasoning approach that leverages reinforcement learning, combined with a highly diverse mixture-of-experts model, to achieve results that go beyond what has previously been possible with a single large model, and to do so in a far more efficient way,” he said. “Whereas most AI researchers have just been pushing hard on the scaling lever, perhaps held back by recent export restrictions that limit access to GPUs, this Chinese team are showing that necessity can become the mother of invention.”
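The mixture-of-experts idea Toon refers to can be sketched in miniature: a gating function scores a bank of expert networks and only the top-k of them run for a given input, so compute cost scales with k rather than with the total number of experts. The sketch below is an illustrative toy under that general principle, not DeepSeek's architecture; all names in it are invented.

```python
import numpy as np

def moe_forward(x, gate_W, experts, k=2):
    """Toy mixture-of-experts routing: score experts with a linear gate,
    run only the top-k, and combine their outputs with softmax weights.
    Illustrative only -- not DeepSeek's actual model code."""
    logits = x @ gate_W                      # one gate score per expert
    topk = np.argsort(logits)[-k:]           # indices of the k best experts
    weights = np.exp(logits[topk])
    weights /= weights.sum()                 # softmax over selected experts
    # Only the chosen experts execute, so compute scales with k,
    # not with the total number of experts in the bank.
    return sum(w * experts[i](x) for w, i in zip(weights, topk))
```

With, say, 64 experts and k=2, roughly 1/32 of the expert compute runs per token, which is the efficiency lever Toon highlights.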
“I find it very exciting (but also not a surprise) that we are about to see a new wave of innovation in artificial intelligence. When asked about AI development, I always point out that the AI you are using today is the worst AI you will ever use,” he said.
Dutch edge AI chip maker Axelera also sees this as a positive development, particularly as it develops a chip for datacentre applications.
“DeepSeek has proven that constraints can spark creativity. Limited hardware accessibility has forced researchers to rethink how we work with resources, shifting away from treating computing as unlimited—a common mindset within corporate labs with near-endless budgets. But, now that a new method has emerged, I believe many other companies will also follow suit and may invent even more effective solutions,” said Fabrizio del Maffeo, CEO of Axelera in the Netherlands.
“To tackle the constraints, DeepSeek reduced the supervised part of training and leaned heavily into reinforcement learning. This shift emphasized step-by-step reasoning and systematic logic over simple token prediction. At Axelera AI, our inference-optimized architecture thrives in this environment,” he said.
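The shift del Maffeo describes, rewarding answers with an automatic score rather than imitating labelled examples, can be caricatured in a few lines. The helper below is a hypothetical toy, not DeepSeek's GRPO training: it samples from a softmax policy over candidate "reasoning strategies", scores each sample with an automatic reward function, and reinforces the strategies that beat the group average.

```python
import math
import random

def rl_step(weights, actions, reward_fn, lr=0.5, n=8, rng=random):
    """One toy reinforcement-learning step over a discrete set of strategies.
    Sample n actions from a softmax policy, score each with an automatic
    reward, and raise the weight of above-average actions relative to a
    group baseline. Illustrative only -- not DeepSeek's training code."""
    exps = [math.exp(w) for w in weights]
    total = sum(exps)
    probs = [e / total for e in exps]        # softmax policy over actions
    idxs = rng.choices(range(len(actions)), probs, k=n)
    rewards = [reward_fn(actions[i]) for i in idxs]
    baseline = sum(rewards) / n              # group-relative baseline
    for i, r in zip(idxs, rewards):
        weights[i] += lr * (r - baseline)    # reinforce good strategies
    return weights
```

The key property, as in the quote, is that no supervised labels are needed: any automatically checkable reward, such as whether a final answer is correct, is enough to steer the policy toward step-by-step reasoning.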
A chat app from DeepSeek saw 2.6m downloads in three days, but sign-ups have been paused after a reported cyberattack.
www.deepseek.com; www.fractile.ai
