
Implementing multiple AI agents in a telecoms network is increasingly having unintended consequences as the models try to balance the requirements in different ways.
This raises issues for the design of the next generation of chips that embed native AI agent capability for 5G and 6G networks.
“We had two AI agents implemented on Samsung products and they caused problems in the network. We had one agent optimising the air interface and another for load balancing,” said Dan Warren, director of communications research at Samsung R&D UK. The first one was pushing devices off a band to switch the band off, while the second was pushing devices back on to balance the load.
“When you have lots of cells and lots of bands there are lots of applications and a lot of AI and a lot of trouble if you don’t get it right,” he said.
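The conflict Warren describes can be sketched as two independent controllers acting on shared state with opposing objectives. The following is a minimal illustration only; the agent logic, band names and thresholds are hypothetical, not Samsung's implementation:

```python
# Two uncoordinated RAN agents oscillating over shared state.
# All names and numbers are illustrative, not from any real deployment.
state = {"band_a": 10, "band_b": 4}  # devices attached per band

def energy_agent(s):
    """Pushes devices off band_b so the band can be powered down."""
    moved = s["band_b"]
    s["band_a"] += moved
    s["band_b"] = 0
    return f"energy: moved {moved} devices off band_b"

def load_agent(s):
    """Rebalances: moves devices back until the bands are roughly even."""
    excess = (s["band_a"] - s["band_b"]) // 2
    s["band_a"] -= excess
    s["band_b"] += excess
    return f"load: moved {excess} devices to band_b"

# Without coordination, each cycle one agent undoes the other's work.
for cycle in range(3):
    print(cycle, energy_agent(state), "->", load_agent(state), state)
```

Each agent is locally rational, yet the system thrashes: the energy agent empties the band, the load agent refills it, indefinitely.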
“Everyone is aware of the AI conflict issue and it causes people to stumble,” said Rob Curran of telecom analyst Appledore Research. “Different radio front ends, some optimised for power efficiency, some for performance.”

AI agents are used in several layers across the RAN. Courtesy: RANsemi
This has implications for network architectures and for the design of the next generation of chips in the network that will run AI agents.
“Out of that comes the nexus of the idea that you should have a layer in the network where all the intents of the AIs are acknowledged,” said Warren.
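One way to picture such a layer is as a coordinator that registers each agent's declared intent and denies actions that directly conflict with an intent already active on the same resource. This is a hedged sketch of the idea only; the class, method names and conflict rule are illustrative, not drawn from any standard:

```python
# Illustrative "intent layer": agents declare intents on resources, and a
# coordinator rejects requests that conflict with an active intent.
# All identifiers here are hypothetical, not from any real telecom API.
class IntentLayer:
    def __init__(self):
        self.active = {}  # resource -> (agent, goal)

    def request(self, agent, resource, goal):
        """Grant the intent unless another agent holds a different goal
        on the same resource."""
        holder = self.active.get(resource)
        if holder and holder[1] != goal:
            return False  # conflicting intent: deny and force arbitration
        self.active[resource] = (agent, goal)
        return True

layer = IntentLayer()
print(layer.request("energy_agent", "band_b", "power_down"))  # True
print(layer.request("load_agent", "band_b", "add_load"))      # False
```

In a real network the denial would trigger arbitration against a system-wide policy rather than a flat refusal, but the core point stands: the intents must be visible in one place before they can be reconciled.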
“We have faced issues with fighting and conflicts between AI models when analysing their outcomes,” adds Prof Shadi Moazzeni, director of the Smart IoT lab at the University of Bristol, who is looking at the requirements for AI agents in 6G networks.
The issue of fighting AI is particularly important for embedded AI, says UK chip designer RANsemi, which is developing a chip for the radio access network (RAN) optimised for AI with its own accelerator.
“Particularly with embedded AI, you need to start to think about whether there is the right amount of compute and the interfaces to the companion host processors. If you are making a chip for a specific application you have to think of a lot of other things,” said Doug Pulley, CTO of RANsemi.
High-level AI agents operating in the cloud with millisecond latencies are a very different problem from the microsecond requirements of embedded AI, says Pulley.
“The default approach to AI for the RAN relies on placing AI in the cloud, either for both training and inference, or at the very least, for offline training of models. Every base station location is different and has dynamic characteristics over multiple timescales, for RF environment and traffic,” says Oliver Davies, VP of marketing at RANsemi.
“Baseband processing at the RAN edge faces stringent technical constraints, including ultra-low latency, tight synchronisation, and high power efficiency. Offloading AI workloads to general-purpose cloud compute, or even to air-cooled edge servers, is impractical at scale. Instead, baseband AI must be tightly co-designed with its hardware environment, integrating DSP acceleration, dedicated inference engines, and high-throughput I/O to meet wireless telecom-grade performance requirements.”

AI agents in the RAN. Courtesy: RANsemi
An agentic base station is a self-regulating node that uses AI not only for optimising physical layer operations such as beamforming or interference mitigation but also for making autonomous decisions across the different software layers.
This is driving the need for coordination between the AI agents. “We are in the transition phase,” said Curran. “All the work and experimentation going on at the moment is extremely valuable as the industry has to learn extremely fast.”
6G network evolution
“The challenge is that the entire network is a compromise,” said Warren. “Even with the air interface the tradeoff is coverage versus capacity, then there is backhaul capacity vs cost. This is what you can use AI to optimise.”
“Crucially, these agents would learn from local data, reflecting their specific RF environment, traffic patterns, and user behaviour rather than relying solely on centralised training sets. This approach also means that agentic base stations can work together in multi-agent configurations, sharing knowledge and optimising jointly for system-wide goals,” said Davies.
“Cloud-first strategies and repurposed data centre AI chips will not fully drive the transition to AI-native RAN,” he adds. “It requires bespoke, embedded AI architectures tailored to the unique constraints of modern wireless telecoms. An approach that involves rethinking silicon, software, and system design, with a focus on deterministic performance and autonomous adaptability.”
“We need to know the industry requirements to be able to help,” says Moazzeni.
“I would say research has a very good role here: as we understand more of the requirements that industry and the future world need, we should look at the requirements in ten years’ time, which is very hard. We need collaboration with industry to train the students in a way that is useful.”
www.appledoreresearch.com; www.ransemi.com; www.samsung.com; www.bristol.ac.uk
