5G stands for the fifth generation of wireless technology for mobile cellular communication. Each generation marks a developmental leap in wireless technology that is typically not backward-compatible, and history suggests that such a leap happens roughly once a decade. Let us quickly go over the past four generations before we see what 5G has in store for us.

1G refers to the analogue technologies that were operational in the 1980s, such as the NTT system in Japan, Nordic Mobile Telephone (NMT) in the Nordic countries, and the Advanced Mobile Phone System (AMPS) in North America. They involved the transmission and reception of analogue speech, with the key feature being support for mobility.

2G refers to the digitised voice and text transmission technologies that came into operation in the early 1990s. These technologies offered better efficiency through improved multiplexing techniques, methods that could support more users in the same frequency band. Examples include the Global System for Mobile Communications (GSM), which uses time-division multiple access to allocate different time slots to different users, and Code Division Multiple Access (CDMA) techniques, which use spread-spectrum radio transmission. The speeds were of the order of 10 kbit/s, although later extensions of GSM [namely, General Packet Radio Service (GPRS) and Enhanced Data Rates for GSM Evolution (EDGE)] could provide rates of up to 384 kbit/s.

3G came into operation in the 2000s and witnessed the move of almost all carriers towards CDMA. This enabled better power control, better call handovers across base stations and better statistical multiplexing that allowed a larger number of users to co-exist in the system. 3G ushered in mobile broadband access at rates of a few Mbit/s and played a pivotal role in enabling the smartphone revolution.

4G, which arrived in the 2010s, has provided and continues to provide much higher data rates than 3G, even up to 1 Gbit/s for low-mobility users (e.g., stationary users and pedestrians). Several technological innovations and much engineering ingenuity were needed to achieve such speeds: better equalisation of the channel through Orthogonal Frequency Division Multiplexing (OFDM); frequency-domain statistical multiplexing through linearly precoded OFDM access, which lowered signal distortion at the mobile transmitter; multiple antennas; better error-correcting codes; opportunistic scheduling; link adaptation; and so on. Some of these features were already available in 3G as advanced options but came to be fully utilised only in 4G systems. These improvements were key to bringing the Internet and video to those on the move.
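
To see why OFDM simplifies equalisation, here is a minimal sketch in Python (toy parameters, not any standard's numerology, chosen purely for illustration): by prepending a cyclic prefix longer than the channel's delay spread, the multipath channel is turned into a single complex gain per subcarrier, so the receiver can recover the data with one division per subcarrier instead of a complicated time-domain equaliser.

```python
# Minimal OFDM sketch: a cyclic prefix longer than the channel delay spread
# turns a multipath channel into one complex gain per subcarrier.
import numpy as np

rng = np.random.default_rng(0)

N, CP = 64, 16                             # subcarriers and cyclic-prefix length (illustrative)
h = np.array([1.0, 0.5, 0.25 + 0.25j])     # toy 3-tap multipath channel

# QPSK symbols on each subcarrier
bits = rng.integers(0, 2, size=(N, 2))
X = (2 * bits[:, 0] - 1 + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# OFDM modulation: IFFT, then prepend the cyclic prefix
x = np.fft.ifft(X) * np.sqrt(N)
tx = np.concatenate([x[-CP:], x])

# Propagation: linear convolution with the channel (noise omitted for clarity)
rx = np.convolve(tx, h)

# Receiver: drop the cyclic prefix, go back to the frequency domain
y = rx[CP:CP + N]
Y = np.fft.fft(y) / np.sqrt(N)

# One-tap equalisation: divide by the channel's frequency response
H = np.fft.fft(h, N)
X_hat = Y / H

print("max symbol error:", np.max(np.abs(X_hat - X)))  # ~1e-15, i.e. exact up to rounding
```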

With this backdrop, 5G aims to provide much higher speeds (of up to 2 Gbit/s) to support enhanced mobile broadband, to enable the Internet of Things (IoT) for smart factories and smart cities at lower power consumption, and to reduce latency (to less than 1 ms) for ultra-reliable low-latency communication. The last of these aims will enable the tactile internet, high-speed networked autonomous systems, and robotics. Many challenges need to be overcome before the targeted speed, power efficiency, reliability, and latency requirements can be met. Some trials in late 2019, albeit under ideal conditions, indicate that we are not too far off, at least from the goal of 2 Gbit/s.

In this issue, we have invited articles from leading experts in industry and academia. They discuss some of the technical challenges in bringing 5G to fruition and shed light on what lies ahead. Here is a quick summary.

  1. The opening paper (by Subrahmanya and Farajidana) discusses some guiding principles that will shape the physical layer of the 5G standard. It also discusses the special techniques required to allow 5G to work in the millimeter wave frequency bands, support higher bandwidths, and provide lower latency and much greater flexibility compared to the previous Gs. It concludes with a discussion on what is beyond 5G.

  2. The second paper (by Barati, Dutta, Rangan, and Sabharwal) discusses the challenges arising in our effort to tap new spectrum in the millimeter-wave (mmWave) band, which allows spatial multiplexing through beamforming. Transmissions in such systems are highly directional; this is necessary for providing adequate range and is beneficial for spatial multiplexing. However, it leads to an exploration problem in which transmitters and receivers must first discover each other, which costs energy and time. How best is this exploration done? Should it be done in the analogue domain or in the digital domain? The paper provides an in-depth study of the associated trade-offs (a back-of-the-envelope sketch of this discovery cost appears after this list of summaries).

  3. The third paper (by Mazgula, Sapis, Hashmi, and Viswanathan) explores the use of millimeter waves for industrial automation. The harsh propagation conditions that millimeter waves experience in a factory raise concerns about the viability not only of beamforming but also of ultra-reliable and low-latency connectivity in such environments. The paper formulates optimisation problems and presents low-complexity greedy solutions for beam allocation, keeping in view the channel conditions and the latency requirements.

  4. The fourth paper (by Ramachandran, Surabhi, and Chockalingam) looks at a novel signaling scheme, called orthogonal time-frequency-space (OTFS) signaling, which could be useful at high mobile speeds. At high speeds, multipath causes severe frequency-selective fading, while the Doppler effect causes the more familiar time-selective fading. The paper explores the sparsity that such channels exhibit in the delay-Doppler domain and discusses the relative merits of OTFS over the 4G waveform, OFDM, which enabled 4G systems to combat multipath (the transform underlying OTFS is sketched after this list).

  5. In the fifth paper (by Bhatia, Swami, Sharma, and Mitra), the focus shifts to massive machine-type communications. It explores the so-called “overloaded” setting, with many more users than available orthogonal “dimensions” of transmission, and makes a case for non-orthogonal multiple access as an enabler for massive connectivity at higher capacities (a textbook two-user example is given after this list).

  6. The sixth paper (by Madadi, Baccelli, and de Veciana) studies an interesting issue in dense 5G deployments. Dense deployments are crucial to increasing coverage and capacity, but mobility in such settings will likely lead to an increased rate of handovers (of the mobile) across base stations. Too many handovers lead to a loss of throughput because of the associated delays and overheads. The paper provides the mathematical framework needed for analysing the impact of frequent handovers and presents strategies for optimising handovers based on the amount of a priori information available at the users and the base stations (a simple estimate of the handover rate appears below, after this list).

  7. The seventh paper (by Mashhadi and Gündüz) highlights the challenges in the timely learning of the channel state in massive multiple-input multiple-output (multiple-antenna) systems. In such systems, with many antennas at both the transmitter and the receiver ends, there is a significant training overhead (pilot transmissions, estimation, feedback). The paper provides an overview of how neural networks can learn the key features of the channel and encode them efficiently, thereby helping to reduce the training overhead and complexity (a toy autoencoder sketch follows this list).

  8. The eighth paper (by Gupta, Tripathi, and De) brings to our attention the serious impact of energy consumption on the lifetime of devices as we expand the deployment of the IoT to make our homes, factories, and cities smart. The paper surveys several “green” sensing and communication approaches that can make the anticipated massive IoT expansion sustainable. A key takeaway from the paper is that a holistic, application-driven approach to energy management is necessary for maximising the lifetime of IoT networks without sacrificing the quality of service.

  9. The ninth paper (by Amuru, Ganti, Kuchi, Milleth, and Ramamurthi) points out a crucial difference in the requirements for providing cellular coverage in rural environments in developing and developed countries. Typical rural areas in developing countries are densely populated with low-mobility users, while those in developed countries are sparsely populated with high-mobility users (e.g., on highways). The paper makes a strong case for having larger-than-usual cells with low mobility support, particularly in countries like India, to provide affordable rural cellular coverage. The authors also discuss potential solutions for meeting these requirements within the framework of the present-day cellular standards.

  10. The tenth paper (by Manjeshwar, Jha, Karandikar, and Chaporkar) provides a survey of radio access network architectures and discusses how they can be easily reconfigured, run on commodity hardware, and managed better to handle mobility. Software-defined networking and network functions virtualisation are among the key new concepts that enable this. The paper discusses the limitations of some existing solutions and proposes some fixes.

  11. The 11th paper (by Popovski, Simeone, Boccardi, Gündüz, and Sahin) argues that, post-5G, the communication engineer should be thinking more about design objectives and constraints based on the semantics of the transferred bits, in other words, beyond mere connectivity. Traditionally, communication engineers have addressed only the technical problem of transporting bits reliably across a noisy channel, but perhaps the time has come to change this. The paper introduces the concept of semantic-effectiveness as a core component of a future communication system architecture. Perhaps this could replace the current “next-G paradigm” and usher in a framework of more continuous improvement for wireless technologies.
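
To make the beam-discovery cost discussed in the second paper concrete, here is a rough back-of-the-envelope sketch in Python; the beam counts and measurement time are assumptions for illustration, not values from the paper or from any standard. An analogue receiver can listen in only one direction at a time and so must sweep every transmit-receive beam pair, whereas a fully digital receiver can examine all of its directions from each measurement, at the cost of the power-consumption and quantisation issues that the paper analyses.

```python
# Back-of-the-envelope mmWave beam-discovery time (illustrative numbers only).
n_tx_beams = 64          # transmit directions swept by the base station (assumed)
n_rx_beams = 16          # receive directions at the handset (assumed)
t_meas_ms = 0.1          # time per synchronisation measurement, in ms (assumed)

t_analogue = n_tx_beams * n_rx_beams * t_meas_ms   # one beam pair per measurement
t_digital = n_tx_beams * t_meas_ms                 # all RX directions examined at once

print(f"exhaustive analogue sweep: {t_analogue:.1f} ms")   # 102.4 ms
print(f"fully digital receiver:   {t_digital:.1f} ms")     # 6.4 ms
```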
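
For the fourth paper, the following is one common way of writing the transform at the heart of OTFS (conventions vary across the literature): information symbols x[k, l] placed on an N × M delay-Doppler grid are mapped to time-frequency samples X[n, m] by the inverse symplectic finite Fourier transform,

$$
X[n,m] \;=\; \frac{1}{\sqrt{NM}} \sum_{k=0}^{N-1} \sum_{l=0}^{M-1} x[k,l]\, e^{\,j2\pi\left(\frac{nk}{N} - \frac{ml}{M}\right)},
$$

after which a conventional multicarrier modulator transmits the X[n, m]. A channel that is doubly selective (in both time and frequency) appears in the delay-Doppler domain as a small number of nearly static taps, which is the sparsity the paper exploits.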
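
As a textbook illustration of the non-orthogonal access advocated in the fifth paper (this is the simplest two-user, power-domain special case, not the more general overloaded schemes the paper considers), a base station can superpose two users' signals on the same time-frequency resource, giving the larger share 1 - α of its power P to the weaker user. The weaker user (channel gain |h_2|^2) decodes its own signal treating the other signal as noise, while the stronger user (channel gain |h_1|^2 > |h_2|^2) first decodes and cancels the weaker user's signal, so that the achievable rates are

$$
R_1 = \log_2\!\left(1 + \frac{\alpha P\,|h_1|^2}{\sigma^2}\right),
\qquad
R_2 = \log_2\!\left(1 + \frac{(1-\alpha)\, P\,|h_2|^2}{\alpha P\,|h_2|^2 + \sigma^2}\right),
$$

where σ^2 is the noise power. Both users are thus served simultaneously on a resource that orthogonal access would have had to split between them.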
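
A classical stochastic-geometry estimate, far simpler than the framework of the sixth paper but enough to show why densification inflates the handover rate, is the following: for base stations deployed as a homogeneous Poisson process of density λ, nearest-base-station association, and a user moving along a straight line at speed v, the mean number of handovers per unit time is approximately

$$
\bar{H} \;\approx\; \frac{4}{\pi}\, v \sqrt{\lambda},
$$

so making the network ten times denser increases the handover rate by a factor of about 3.2 at the same user speed.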
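
In the spirit of the seventh paper, the following toy sketch in PyTorch (hypothetical antenna counts and layer sizes; this is not the authors' architecture) shows the basic autoencoder idea: the receiver compresses the estimated channel matrix into a short code, feeds back only that code, and a decoder at the transmitter side reconstructs the channel from it.

```python
# Toy CSI-compression autoencoder (hypothetical sizes, PyTorch).
import torch
import torch.nn as nn

n_tx, n_rx = 64, 4            # antenna counts (assumed for illustration)
dim_in = 2 * n_tx * n_rx      # real and imaginary parts of the channel, flattened
dim_code = 32                 # feedback payload: 16x smaller than the raw CSI

encoder = nn.Sequential(nn.Linear(dim_in, 128), nn.ReLU(), nn.Linear(128, dim_code))
decoder = nn.Sequential(nn.Linear(dim_code, 128), nn.ReLU(), nn.Linear(128, dim_in))

opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in training data: i.i.d. Gaussian "channels". A real system would train on
# channels from a realistic spatial model or measurements; i.i.d. entries have no
# structure for the encoder to exploit, so the reconstruction here is only illustrative.
H = torch.randn(1024, dim_in)

for epoch in range(50):
    opt.zero_grad()
    H_hat = decoder(encoder(H))       # compress to dim_code values, then reconstruct
    loss = loss_fn(H_hat, H)
    loss.backward()
    opt.step()

print("reconstruction MSE:", loss.item())
```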

We hope that these invited articles provide a glimpse of what is likely to come in the next 10 years, and possibly even beyond.