
Carrier networks made to measure


Technology News
By Jean-Pierre Joosting



The twentieth century saw the decline of made-to-measure, bespoke goods and services. Henry Ford epitomised the new “any colour as long as it’s black” way of thinking: standardise, limit choice, build a focused mass market, cut production costs and undercut the competition. It looked like the death of customisation, but the trend has been reversed in this century.

A combination of factors – such as Internet ordering, cloud services and new manufacturing processes including 3-D printing – has led to a resurgence in goods and services closely tailored to demand. Choosing clothes on a fashion site not only allows a choice of sizes, colours, patterns and materials; you can even “try it on” by pre-modelling the finished product on an avatar of your own body.

Something similar is happening to carrier networks. The customer might assume that they are all basically the same, apart from the distribution of cell towers, but that is far from true. Under the pressures of smartphone and mobile device usage, video bandwidth demands, soaring customer expectations and competition in a mature market, today’s carrier networks are stringently tested during both development and deployment to ensure not just reliable connectivity but demanding service levels and – in the ‘age of the customer’ – the very best QoE (quality of experience).

So how is this achieved?

 

Testing today’s networks

Mobile service providers test their carrier networks using sophisticated test solutions designed to emulate “real-world” control and data plane traffic. This traffic emulation – typified by Spirent’s “Landslide” solution – can scale from realistic day-to-day traffic up to extreme conditions, with millions of subscribers moving through LTE, GSM, UMTS, eHRPD and Wi-Fi networks while consuming IMS and over-the-top services.


These tools do not simply throw bandwidth at the network; they simulate not only the different types of traffic but also the conditions under which it is used: the typical rhythms and patterns of a voice call, for example, are very different from those of a video stream or instant messaging. They can even capture real-life traffic conditions and then magnify or distort them to mimic a signal storm – as when thousands of spectators simultaneously upload a dramatic moment from a sports arena. All these very diverse types of traffic can be massively scaled and run concurrently, and fault conditions and cyber attacks can be superimposed to see how the network would react, or how services would degrade, under stress.
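
To make the idea concrete, here is a minimal, purely illustrative sketch in Python of how traffic profiles with different “rhythms” might be modelled, scaled and pushed into a signal storm. The profile names, rates and storm multiplier are assumptions for illustration; this is not Spirent’s Landslide API.

```python
# Illustrative only: model a few traffic "rhythms" and scale them up.
from dataclasses import dataclass
import random

@dataclass
class TrafficProfile:
    name: str
    packets_per_sec: float   # mean packet rate for one subscriber
    avg_packet_bytes: int    # typical packet size
    burstiness: float        # 0 = smooth, 1 = highly bursty

    def offered_load_bps(self, subscribers: int) -> float:
        """Offered load in bits/s for this many subscribers, with a random
        burst factor to mimic the profile's rhythm."""
        burst = 1.0 + self.burstiness * random.uniform(-0.5, 1.5)
        return subscribers * self.packets_per_sec * self.avg_packet_bytes * 8 * burst

# Hypothetical profiles: voice is smooth and low-rate, video is high-rate,
# messaging is tiny but bursty.
PROFILES = [
    TrafficProfile("volte_voice", packets_per_sec=50, avg_packet_bytes=120, burstiness=0.1),
    TrafficProfile("video_stream", packets_per_sec=300, avg_packet_bytes=1200, burstiness=0.4),
    TrafficProfile("messaging", packets_per_sec=0.2, avg_packet_bytes=300, burstiness=0.9),
]

def emulate_second(subscribers_per_profile: dict, storm_multiplier: float = 1.0) -> float:
    """Total offered load for one simulated second; storm_multiplier magnifies
    everything to mimic a signal storm."""
    return sum(
        p.offered_load_bps(int(subscribers_per_profile[p.name] * storm_multiplier))
        for p in PROFILES
    )

if __name__ == "__main__":
    subs = {"volte_voice": 200_000, "video_stream": 50_000, "messaging": 1_000_000}
    normal = emulate_second(subs)
    storm = emulate_second(subs, storm_multiplier=20)   # e.g. a stadium moment
    print(f"normal: {normal / 1e9:.1f} Gbit/s, storm: {storm / 1e9:.1f} Gbit/s")
```

A real emulator also models the control plane (attaches, handovers, bearer setup), but even this toy version shows why scaling and burstiness, not just raw bandwidth, are what the test has to reproduce.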

At the design stage, this means that the system can model the provider’s actual or proposed network in detail – including nodes, network architecture, Evolved Packet Core, servers and management – and it can emulate every type of mobile device likely to be used on the network, every type of service including VoLTE, Wi-Fi APs and IMS, and every sort of user up to many millions of endpoints. It can also emulate any number and type of connected IoT devices, which will become especially significant.
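
As a rough illustration of what such a scenario description might cover, the sketch below uses an assumed, hypothetical schema – not any real tool’s configuration format – to enumerate core nodes, services and emulated endpoints.

```python
# Illustrative only: a declarative description of the kind of test scenario
# discussed above. Field names and numbers are assumptions, not a real schema.
TEST_SCENARIO = {
    "core_network": {
        "epc_nodes": ["MME", "SGW", "PGW", "HSS", "PCRF"],
        "ims_services": ["VoLTE", "video_calling", "messaging"],
        "access": ["LTE", "UMTS", "GSM", "Wi-Fi"],
    },
    "emulated_endpoints": {
        "smartphones": 5_000_000,
        "tablets": 800_000,
        "iot_devices": 12_000_000,   # smart meters, trackers, sensors...
    },
    "fault_injection": ["sgw_failover", "backhaul_congestion", "ddos_on_pgw"],
}

def total_endpoints(scenario: dict) -> int:
    """Total number of emulated endpoints the scenario asks for."""
    return sum(scenario["emulated_endpoints"].values())

print(f"Scenario emulates {total_endpoints(TEST_SCENARIO):,} endpoints")
```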

This is very far from a “one size fits all” scenario. One might think that the basic requirements for an AT&T network would be the same as for Orange, BT or any other carrier. But each company has its own user profiles, contract packages, SLAs and business models – quite apart from the physical differences in the core networks. So this very stringent testing is not simply to ensure that the network works, but that it also satisfies the needs of every type of application and delivers a consistently high QoE.

This is what is already happening, and not only at the design and development stage. Similar testing can be integrated into the operational network to run end-to-end health tests, isolate faults, validate upgrades, and model cell-site turn-up and backhaul loading. Landslide testing is already deployed by leading mobile providers worldwide – yet according to a Heavy Reading report, 20% of outages are still only detected by operators when users report them on social media!

So is that the end of the story?


Signal storms ahead

With the rise of smartphones and the unquenched appetite for mobile video, the mobile industry is already stretched to deliver. But the signs are that the rapid growth of the Internet of Things (IoT) will shift the challenge to a whole new level – not only in the number of endpoints but also in their diversity and management complexity.

To take just one new addition to the IoT landscape: Beecham Research suggests that there could be nearly 350 million connected cars by 2020. Note that each connected car requires not just one but a host of connectivity services, each with its own traffic protocols, level of criticality and SLA requirements. There will be familiar demands such as Internet connectivity for browsing and email, then voice services, location services, vehicle and engine monitoring, and extremely critical “driverless” or crash-avoidance applications.
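
The sketch below illustrates the point with a hypothetical breakdown of one car’s services; the service names, criticality classes and latency budgets are assumptions for illustration, not published requirements.

```python
# Hypothetical illustration: one connected car maps to many separate
# connectivity services, each with its own criticality and latency budget.
from dataclasses import dataclass

@dataclass
class CarService:
    name: str
    criticality: str      # "safety", "operational" or "comfort" (assumed classes)
    max_latency_ms: int   # assumed service-level target, for illustration

CONNECTED_CAR_SERVICES = [
    CarService("crash_avoidance",    "safety",      max_latency_ms=10),
    CarService("engine_telemetry",   "operational", max_latency_ms=5_000),
    CarService("navigation_traffic", "operational", max_latency_ms=500),
    CarService("voice_call",         "comfort",     max_latency_ms=150),
    CarService("web_and_email",      "comfort",     max_latency_ms=1_000),
]

# Even 350 million cars times a handful of services each means billions of
# distinct service flows to emulate and test.
cars = 350_000_000
print(f"~{cars * len(CONNECTED_CAR_SERVICES):,} service flows")
```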

Overall, Gartner estimates some 26 billion connected devices, and Machina Research around 2 billion cellular M2M connections – all by 2020, just four years ahead.

These are staggering numbers, and yet they can all be accommodated and emulated by test systems such as Landslide. Such scalability will become very important when you consider the factors involved.

Quite apart from sheer network congestion, there will be even more extreme variation in types of traffic and their demands. Probably the most widespread initial uptake will be among smart meters, alarm systems, and domestic and office environmental monitors. These systems typically offload their traffic to Wi-Fi, and only turn to a 3G or 4G connection if that fails. So what happens when a district suffers a major power cut and thousands of devices immediately log in to the carrier network? Such signal storms are not unprecedented, but they could arise in many new and unexpected ways.
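
A toy model of that fallback behaviour shows how quickly an attach storm can build; the device count and retry distribution below are assumptions for illustration only.

```python
# Toy model (assumptions only): devices normally on Wi-Fi re-attach to the
# cellular network when mains power (and the home router) fails, producing a
# spike of signalling on the carrier network.
import random

def attach_storm(devices: int, window_seconds: int = 60) -> list[int]:
    """Spread `devices` cellular attach requests over a short window after a
    power cut; returns attach attempts per second."""
    per_second = [0] * window_seconds
    for _ in range(devices):
        # Most devices retry almost immediately; a few back off for longer.
        t = min(int(random.expovariate(1 / 5)), window_seconds - 1)
        per_second[t] += 1
    return per_second

if __name__ == "__main__":
    storm = attach_storm(devices=200_000)          # one district's smart meters
    print(f"peak signalling: {max(storm):,} attach requests in one second")
```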


Traffic prioritization sets a further challenge. As before, voice and other latency-critical applications will need to be prioritized, but how do we grade the countless IoT demands? Vehicle monitoring for oil levels is important but can take its time, whereas loss of pressure in a tyre could cause a crash. eHealth – and the litigation that could arise from the failure of a medical monitoring application – presents another minefield, let alone the microsecond demands of M2M financial systems.
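
One simple way to reason about such grading is a priority queue keyed on criticality class and latency budget, as in the hedged sketch below; the classes, budgets and message names are illustrative assumptions, not a standardised scheme.

```python
# Sketch: grade mixed IoT traffic with a priority queue keyed on
# (criticality class, latency budget). Lower rank is served first.
import heapq

CLASS_RANK = {"life_safety": 0, "latency_critical": 1, "operational": 2, "background": 3}

def enqueue(queue, message, cls, latency_budget_ms):
    heapq.heappush(queue, (CLASS_RANK[cls], latency_budget_ms, message))

queue: list = []
enqueue(queue, "tyre pressure loss alert",   "life_safety",      10)
enqueue(queue, "algorithmic trade order",    "latency_critical", 1)
enqueue(queue, "engine oil level report",    "operational",      60_000)
enqueue(queue, "smart meter daily reading",  "background",       3_600_000)

while queue:
    rank, budget, msg = heapq.heappop(queue)
    print(f"serve ({budget} ms budget): {msg}")
```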

So the rise of IoT threatens not only a leap in capacity requirements, but also intense competition for resources and a plethora of different QoS and QoE standards to be met. Yet the test solutions needed to address these management challenges already exist and are already being deployed by leading carriers.

 

Conclusion – a testing future

According to a recent Heavy Reading report, Mobile Network Outages & Service Degradations:

In what for the most part still tend to be flat revenue environments for mobile operators, maintaining network availability and excellent service and application performance is exceptionally challenging. That isn’t just a function of the huge growth in traffic volumes and generally flat capex budgets. It’s also a function of the growing diversity and complexity of application types and their underlying service requirements, and the increasing interdependence of different application, service and infrastructure layers within the network.

The report suggests that, although the number of incidents affecting mobile networks is about the same as it was two years ago, more outages now stem from network failures and more take longer to fix. The annual cost of outages has meanwhile increased by 18% to around $20 billion. But the good news is that operators are already confident that they have the necessary testing capacity:

Mobile operators seem to have a lot of confidence in the ability of testing and performance monitoring tools to accurately assess how successful a network upgrade has been. Only 9 percent of respondents believe that only the impact of user loading is capable of providing meaningful validation. Performance monitoring systems (59 percent rated most important) and the ability to test in the production network (32 percent) are both highly valued.


It is good to know that most carriers already have these essential test capabilities in place, and the latest test solutions already enjoy the scalability and sophistication needed to match the complexity of the coming IoT storm. But we do expect a wider market for such testing tools.

The initial demand for sophisticated test solutions came from network equipment manufacturers, the largest carriers and test laboratories. But we also anticipate a growing market among IoT service providers, as well as smaller local providers, needing to address the challenges described above.

The necessary solutions and testing expertise are already available. And they are well proven.

Further Reading:
www.spirent.com/Assets/WP/WP_Mobile-Network-Outages-Service-Degradations

