
Major challenges remain for 5G deployment
Unlike its predecessors, however, the fifth generation will empower not just smartphones and tablets but also fixed-wireless access (FWA) broadband delivery, autonomous vehicles, the “Factory of the Future”, connected cities, and dozens of applications under the broad umbrella of IoT. It’s a tall order, and delivering on all this presents enormous technological challenges, three of which stand out: millimeter-wave operation, small cells, and artificial intelligence.
For example, even though carrier aggregation, more spectrally efficient modulation schemes, and spectrum sharing will help, 5G, along with Wi-Fi and other services, will ultimately consume nearly all that remains of the available spectrum below 6 GHz. After that, there’s nowhere to go but up in frequency, possibly even to 95 GHz and beyond, a prospect even the FCC in the U.S. is on board with (Figure 1). In the spirit of Field of Dreams, the commission is evaluating the viability of allocating more than 21 GHz of unlicensed spectrum between 95 GHz and 3 THz. The hope is that if spectrum is available, “they will come”. After making the announcement, FCC Chairman Ajit Pai noted the limitations of this new frontier but touted the “mammoth swaths of airwaves” available.

However, for anyone who’s developed components and systems in this region, such a plan probably seems comical at best and impossible at worst, considering its immense challenges. There are good reasons why, except for microwave links, satellite communications, and some military systems, millimeter wavelengths have remained uninhabited. But the commitment by the wireless industry illustrates just how much bandwidth it believes it will need to handle the disparate services that 5G will eventually provide. Why else would this or any other industry take it upon itself to operate in a region of the spectrum that is inherently inhospitable to the transmission and reception of electromagnetic energy?
Not only are these frequencies limited to line-of-sight paths, but low-power signals can traverse only a few hundred meters under ideal conditions (which are rare) and are attenuated by almost anything, from precipitation to leaves. They won’t penetrate common building materials either, including the low-emissivity glass used in new construction and replacement windows. “Low-e” glass works fantastically for reducing UV rays from the Sun, but its metal-oxide coating is equally effective at attenuating, and sometimes entirely blocking, millimeter-wave signals.
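The scale of the problem is easy to quantify with the standard free-space path loss formula, which grows with the square of frequency. A minimal sketch (the 200 m distance and the 2.6 GHz comparison band are chosen for illustration):

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 3.0e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Path loss over 200 m in ideal free space (no rain, foliage, or glass):
loss_28ghz = fspl_db(200, 28e9)    # ~107 dB at a 5G mmWave band
loss_2_6ghz = fspl_db(200, 2.6e9)  # ~87 dB at a mid-band LTE frequency

# Moving from 2.6 GHz to 28 GHz costs about 20.6 dB (20*log10(28/2.6))
# before any of the additional real-world attenuators are counted.
print(f"{loss_28ghz - loss_2_6ghz:.1f} dB extra loss")
```

And free space is the best case; rain, foliage, and low-e glass only add to these figures.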
In addition to its propagation challenges, millimeter-wave operation requires further development of semiconductor technologies such as silicon-on-insulator and silicon-germanium, greater integration with baseband components, massive MIMO on a scale that befits its name, and active electronically-steered array (AESA) antennas, currently the exclusive domain of next-generation military radar systems.

Considering the challenges presented by millimeter-wave operation, it may seem odd that one of the first applications of 5G will be at 24 and 28 GHz in the form of fixed wireless access (FWA). On one level, delivering residential wireless broadband at these frequencies makes sense, because it’s a good “beta application” for millimeter-wave technology. It will allow carriers to further develop millimeter-wave wireless systems based on insights from actual operation rather than trials or simulations. It’s also a point-to-multipoint application, so it won’t have to serve mobile devices or deal with integrating these bands into smartphones that are already cramped for space.
The small cell scenario
One often-understated issue is just how much infrastructure, or “densification”, will be required to support 5G in the next few years and later as millimeter-wave frequencies come online. Industry organizations typically estimate that ten times as many base stations will be needed as are in use today for 4G, with “hyper-densification” scenarios calling for 150 small cells per km². More than 10 million small cells have already been deployed throughout the world to serve 4G, but this is just the beginning.
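Simple arithmetic shows what the hyper-densification figure implies (the 300 km² city is a hypothetical, roughly the footprint of a mid-sized metropolis):

```python
# Hyper-densification target cited by industry organizations
SMALL_CELLS_PER_KM2 = 150

def small_cells_needed(area_km2: float) -> int:
    """Small cells required to blanket an area at the hyper-densification target."""
    return round(area_km2 * SMALL_CELLS_PER_KM2)

# A hypothetical 300 km^2 city:
print(small_cells_needed(300))  # 45000 small cells for one city alone
```

Each of those sites needs power, backhaul, siting approval, and maintenance, which is why densification is as much a logistical challenge as a technical one.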
Analysts predict that more than 4 million more will be deployed globally this year alone, and the small cell market, which had been growing slowly for lack of demand, is now expanding at 50% per year to accommodate LTE-Advanced, representing more than $4 billion in revenue. These figures will likely be revised upward once deployments at 3.5 and 5 GHz increase. The estimates also do not include millimeter-wave deployments, which, except for FWA, won’t appear for years (see Table 1).

That said, the same characteristics that make millimeter wavelengths a poor choice for long-range communication make them almost ideal for the very short distances required for small cell-to-small cell and smartphone-to-smartphone links, as well as high-speed backhaul. They might also find homes in some industrial IoT environments, where they will compete with entrenched protocols like ZigBee, Bluetooth, Wi-Fi, Thread, and Z-Wave. Small cells operating at millimeter-wave frequencies could also collaborate with these protocols, providing a more seamless way to integrate short-range mesh networks into the long-range solutions (cellular and LPWAN) that provide connections to the world outside the plant.
AI everywhere
It’s been hard to miss all the hype about artificial intelligence and how it will transform everything it touches. The latest trend is toward “AI at the edge” of IoT deployments, the edge being where data is generated by devices, typically sensors, that monitor various characteristics of the equipment they’re attached to. At the moment, nearly all of this data is sent to cloud data centers, which causes two major problems: high end-to-end latency and a massive burden on the communications pathway between the edge and the cloud. In addition, as even a relatively small IoT deployment can generate huge amounts of data, it’s becoming increasingly obvious that some of this processing should be performed at the edge.
Reducing latency is far from trivial, as the laws of physics dictate the minimum time that can be achieved for a signal traversing a given distance and back. The least latency will always be delivered over the shortest distance, taking into consideration the processing, computing, and other functions performed along the way. For IoT, this is data traveling outward from the edge device to the cloud, and the return response from the cloud to the device (Figure 2).
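This physics floor is easy to compute: propagation distance alone sets a hard lower bound on round-trip time before a single cycle of processing is counted. A sketch with illustrative distances, using the approximate speed of light in optical fiber:

```python
def min_round_trip_ms(distance_m: float, velocity_m_s: float = 2.0e8) -> float:
    """Lower bound on round-trip latency from propagation delay alone.

    The default velocity, ~2e8 m/s, approximates the speed of light in
    optical fiber (about two-thirds of c in a vacuum).
    """
    return 2 * distance_m / velocity_m_s * 1000  # seconds -> milliseconds

# Edge device to a regional cloud data center 500 km away:
cloud_rtt = min_round_trip_ms(500e3)  # 5.0 ms before any processing at all
# Edge device to a local edge node 500 m away:
edge_rtt = min_round_trip_ms(500)     # 0.005 ms
```

No amount of engineering removes the 5 ms in the first case; moving the computation a thousand times closer is the only way under it.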

To alleviate these problems, the goal is to split the tasks of processing and analytics between the cloud and the edge, which would reduce the end-to-end latency to levels suitable for real-time applications at the edge and reduce the amount of data sent to the cloud. Most of the attention to this approach has focused on large-scale IoT applications such as industrial production facilities and “smart” cities, but it will be a major component of 5G as well, and for mostly the same reasons.
As it applies to 5G, most of the talk about AI (and its subsets, machine learning and deep learning) focuses on network management and other high-level applications that reduce operating costs through precision network planning, capacity-expansion forecasting, autonomous network optimization, and dynamic cloud network resource scheduling, among others. However, AI will eventually expand its reach even to smartphones, which today rely on massive resources in the cloud. For this to occur, the semiconductor industry will need to develop “on-device AI” realized by dedicated coprocessors or accelerators, a market that has just emerged and is growing rapidly, with more than 40 start-up companies working on the problem along with the usual cohort of deep-pocketed silicon vendors.
The need for AI at the edge is perhaps most obvious in autonomous transportation, which, when it arrives, will inherently require decisions to be made from sensor data in a few milliseconds or even less. Latency this low can only be achieved over a very short distance, which effectively mandates placing intelligence locally, in the vehicles and the roadside infrastructure that supports them. As intelligent transportation systems will most likely communicate via the cellular industry’s “Cellular Vehicle-to-Everything” (C-V2X) architecture, AI at the edge will become a fundamental element of this application.
To support all this data, network topologies such as Cloud-RAN (C-RAN) will be complemented or replaced by virtualized RAN (vRAN) along with edge computing and integrated AI. C-RAN splits a base station in two, with the baseband unit performing processing (and soon analytics) and the remote radio heads delivering the RF portion of the system. In contrast, vRAN realizes baseband functions “virtually” in software, which makes resource allocation more flexible and allows it to be adjusted in near real time. 5G’s expansion of cellular technology to include IoT requires these resources to be controlled at a local level to reduce latency and improve the performance of the systems it supports, a task vRAN is designed to serve.
Another resource in the carrier toolbox is network slicing, which, among other things, can make more “granular” use of AI. Network slicing allows multiple virtual networks to run end-to-end on top of a single shared physical network, letting carriers partition their resources so that multiple “tenants” can multiplex their signals over a single physical infrastructure. So, for example, traditional high-speed cellular service, low-power IoT, and low-latency applications could all be served by a single network, in slices, and the resources allocated to these three very different applications can be adjusted in near real time.
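Conceptually, a slice is just a named share of a fixed pool of physical resources that can be rebalanced on the fly. A toy sketch of the idea (the slice names, the 100-block pool, and the `SlicedNetwork` class are illustrative, not any vendor's API):

```python
class SlicedNetwork:
    """Toy model: partition a fixed pool of resource blocks among named slices."""

    def __init__(self, total_blocks: int):
        self.total_blocks = total_blocks
        self.slices: dict[str, int] = {}

    def allocate(self, name: str, blocks: int) -> None:
        # Blocks free once this slice's current share is released back to the pool
        free = self.total_blocks - sum(self.slices.values()) + self.slices.get(name, 0)
        if blocks > free:
            raise ValueError(f"only {free} blocks free for slice {name!r}")
        self.slices[name] = blocks  # near-real-time reallocation: just overwrite

net = SlicedNetwork(total_blocks=100)
net.allocate("mobile-broadband", 60)  # traditional high-speed cellular
net.allocate("low-power-iot", 10)     # massive machine-type traffic
net.allocate("low-latency", 30)       # latency-critical applications
net.allocate("low-latency", 20)       # shrink the slice as demand shifts
```

A real implementation spans radio, transport, and core resources with isolation guarantees, but the principle is the same: one physical pool, many independently resizable tenants.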
The benefits delivered by network slicing are similar to those of VPNs, network function virtualization, and other approaches. However, network slicing has one benefit the others can’t provide: the ability to generate additional revenue. By leasing slices on a long- or short-term basis, carriers can create an entirely new market sector tailored to customers with differing needs. As part of the package, various levels of intelligence and other data-centric resources, such as computational horsepower and storage, can be offered to these customers as needed.
Summary
A decade or more from now, everyone having anything to do with the development of 5G will look back on 2019 as the year when the massive investments of time, money, and sweat began to reveal themselves in actual deployed systems. By then, small cells, AI, and dozens of other technologies will have advanced dramatically, and millimeter-wave frequencies will, with luck, have proven themselves useful. Precisely when it will be safe to reminisce remains to be seen: 5G is evolutionary as well as revolutionary, so there may not even be a need for something called 6G.
