The development of IoT within the energy infrastructure is best seen as a control loop. The control loop is composed of four functions: a physical process (such as the generation, transmission, or consumption of electricity), its measurement, decision making, and actuation. This control structure is shown in Fig. 3.1 where a sensor takes measurements of the states and outputs of a physical system. Wireless and wired communications are used to pass this information between the physical layer and other informatic components. This information is used to make decisions either independently in a decentralized fashion or in coordination with the informatic components of other devices. Decisions are sent back down to network-enabled actuators for implementation.

Fig. 3.1 The development of IoT within energy infrastructure as a networked control loop

In some cases, this control loop acts in near real-time; in other cases, some of the information is used as part of predictive applications that facilitate decisions at a longer timescale. Control algorithms implemented at different layers of this control loop enable the control of individual devices as well as the coordination of smart grid devices that make up other parts of eIoT. Given the connectivity between the functions of this control loop, its successful implementation requires architectures and standards that ensure interoperability between eIoT technologies.
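
To make this loop concrete, the following minimal Python sketch steps a single sense-decide-actuate cycle for a hypothetical electrically heated room. The device model, noise level, and setpoint are illustrative assumptions rather than any particular eIoT product.

```python
import random

def sense(true_temperature_c: float) -> float:
    """Network-enabled sensor: report the process state with measurement noise."""
    return true_temperature_c + random.gauss(0.0, 0.1)

def decide(measured_c: float, setpoint_c: float, deadband_c: float = 0.5) -> bool:
    """Decision making: a simple threshold rule returns True if heating is needed."""
    return measured_c < setpoint_c - deadband_c

def actuate(heating_on: bool, temperature_c: float) -> float:
    """Network-enabled actuator: the physical process responds to the command."""
    return temperature_c + (0.3 if heating_on else -0.1)

temperature_c = 18.0          # state of the physical process
for step in range(20):        # each iteration is one pass around the control loop
    measurement = sense(temperature_c)                   # measurement
    command = decide(measurement, setpoint_c=21.0)       # decision making
    temperature_c = actuate(command, temperature_c)      # actuation on the process
    print(f"t={step:02d}  measured={measurement:5.2f} C  heat={'on' if command else 'off'}")
```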

This chapter serves to summarize the most recent developments of IoT within the energy infrastructure. The discussion proceeds bottom-up by classifying these developments according to the generic control structure shown in Fig. 3.1.

  • Section 3.1 discusses some of the state of the art in network-enabled physical devices, whether they are network-enabled sensors or actuators in the control loop.

  • Section 3.2 focuses on the communication networks that send and receive data to and from these devices.

  • Section 3.3 discusses advancements in distributed control algorithms that coordinate the techno-economic performance of eIoT devices.

    The chapter concludes with two discussions of a cross-cutting nature:

  • Section 3.4 addresses the importance of control architectures and standards in the development of eIoT technologies.

  • Section 3.5 addresses the security and privacy concerns that emerge from the development of eIoT technologies.

3.1 Network-Enabled Physical Devices: Sensors and Actuators

3.1.1 Network-Enabled Physical Devices: Overview

In many ways, the development of network-enabled physical devices forms the heart of eIoT implementation. As such, this section provides a broad review of these technical developments, taking into consideration their tremendous heterogeneity and relative placement within the electric power system. Figure 3.2 provides a schematic overview of the section, distinguishing between the measurement and actuation of primary and secondary electric power system variables.

Fig. 3.2 Schematic overview of Sect. 3.1 on network-enabled physical devices: sensors and actuators

Definition 3.1 (Primary Electric Power System Variables)

Physical quantities that describe the physical behavior of electric systems. They are voltage and current magnitudes and phase angles, active power, reactive power, magnetic flux, and electrical charge. ■

Definition 3.2 (Secondary Electric Power System Variables)

Physical quantities that are distinct from primary electric power system variables and that have a direct impact on the generation, transmission, distribution, and consumption of electric power. They often serve as inputs to the electric power generation and consumption functions (e.g., wind speed, solar irradiance, and building occupancy). ■

  • Section 3.1.2 begins with the (traditional) primary variables in the transmission system.

  • Section 3.1.3 turns the discussion towards concerns around the secondary variables associated with wind, solar, and natural gas generation.

  • Section 3.1.4 returns to the primary variables in the distribution system so as to address smart meters and other “grid modernization” technologies.

  • Section 3.1.5 discusses smart homes, industry, and transportation in the context of demand-side secondary variables. Each of these sections addresses network-enabled sensors and actuators.

Sensing technology plays an indispensable role in providing situational awareness within an eIoT control loop that activates the grid periphery. As such, sensors exist at the periphery of a communication network to relay data and information from the physical grid to a control or decision-making center. Given the tremendous heterogeneity in the number, type, and input of physical eIoT devices, the importance of the measurement role of network-enabled sensing technologies grows immensely. Fortunately, there has been significant innovation in sensing technologies to accommodate these needs. Such advancements include miniaturization, wireless data transfer, and decreasing implementation costs. Miniaturization technologies have enabled monitoring of household devices where it was previously infeasible to collect data. Noninvasive wireless technologies have reduced implementation costs by forgoing wired installation. These two factors have made sensors increasingly ubiquitous in electric grid applications.

Although network-enabled sensors vary in design and location within the power system, they have a commonality of function that is fundamental to measurement within the control loop. At a basic level, a sensor is composed of a sensing unit, a processing unit, a transceiver unit, and a power unit [138]. Depending on its function, a sensor component must balance various design aspects such as power consumption, memory allocation, lifespan, and cost [138]. These trade-offs lead to a heterogeneity in sensor operations such as data collection intervals, wired or wireless communication, type of power source, and their connection to other devices. Furthermore, and as mentioned in Sect. 2.2, the need for precise control and accurate net load forecast also drives the deployment of a greater heterogeneity of sensors [138]. Here, the distinction between primary and secondary variables becomes important. Traditional primary variables have often been measured first due to physical and monetary constraints [157]. However, the need to better characterize variable energy, energy storage, and demand-side resources has led to the development of secondary measurement applications as well. These additional measurements improve situational awareness because they show the underlying causes for the supply and demand of electricity.

3.1.2 Sensing and Actuation of Primary Variables in the Transmission System

3.1.2.1 Network-Enabled Sensors: SCADA and PMUs

The development of monitoring and sensing technologies began in the transmission system in response to the Northeast Blackout of 1965 [158, 159]. It was found that, as the North American power system became ever more connected, it was necessary to deploy new sensing technology so as to gain greater situational awareness of the transmission system as a whole. As shown in Fig. 3.3, a tremendous heterogeneity of sensors is deployed in the transmission system, where they are used in transmission lines and substations to monitor “traditional” variables directly related to power quality, operations, and system limits. These variables are key to ensuring system stability and reliability and include voltage, current, their phase angles, active power, and reactive power. In the transmission system, line monitoring is achieved through sensors that measure voltage, detect faults, and conduct predictive maintenance [160].

Fig. 3.3 Sensor technologies in transmission lines and substations (adapted from [18])

Transmission sensors also help to monitor the physical condition of power supply equipment to improve safety and to determine when to deploy a workforce for repairs or outage prevention [18]. These sensors can be deployed in substations, in overhead lines, or in buried lines used for underground cable systems [18]. Sensors in the transmission system can also inform operational databases [18] to guide decision making that ensures system reliability. The reader is referred to [18] for a deeper review of existing technologies.

The need for situational awareness also motivated the development of sensor networks. As is discussed in greater depth in Sect. 3.2, sensor networks are a collection of sensors tied to a modular communication network that bridge the gap between physical devices and decision-making points elsewhere [161]. These sensing networks are spatially distributed across the electric grid to form an interconnected monitoring and perception layer. The first and most prominent of such sensor networks is the SCADA system [19, 101, 162] shown in Fig. 3.4. SCADA is deployed in substations and distribution feeders where it is able to sense voltage, frequency, and power flows, and then send these measurements to centralized operations control centers. SCADA systems are also able to send remote signals to change generation levels, switch circuit breakers, and control devices through programmable logic controllers (PLCs) [101, 162]. SCADA systems and other sensor networks are discussed further in Sect. 3.2 where they are part of a larger discussion on communication networks. Further mention of the SCADA system in this section refers collectively to its embedded sensors.

Fig. 3.4 SCADA as a network of remote terminal units (RTUs) connected to a master terminal unit (MTU) via modems and radios [19]

Despite the elaborate SCADA-based sensing network in the transmission system, several challenges are yet to be addressed to allow for the effective adoption of eIoT. First, the transmission system is spread out over a wide area, making real-time data collection a challenge [163]. Generally, the transmission system is remote, and deploying resources for scheduled maintenance checks is costly [164]. Many of the sensors are located on transmission carriers with approximately 60–125 carriers between substations [160]. The distance between two carriers ranges from 400 to 800 m [160]. Furthermore, a typical utility with about 25,000 km of high-voltage (≥69 kV) power lines and thousands of transformers, capacitors, and breakers is expected to have 100,000 distinct sensors spread over a 20–80,000 km² area [138].

Traditionally, threats from outside the system have come from weather (such as storms or overheating), aging, physical destruction, and other environmental elements [160]. Given the wide geographical range and the numerous sensors involved, manual checks are less efficient than receiving signals from automated sensors. Furthermore, the Electric Power Research Institute (EPRI) advocates that data communication and automation reflect condition-based rather than time-based management of the transmission system [18]. Probabilistic (rather than deterministic) methods for assessing risk in the transmission system can also be used to preemptively solve faults and address sub-optimal conditions [18]. In all cases, real-time data is needed to better monitor the conditions of the transmission system to ensure safety and reliability [138].

Second, the SCADA system currently in place cannot observe the dynamic phenomena in transient and small signal stability models [163]. SCADA has a relatively low sampling rate of 2–4 s, making dynamic state estimation over a wide area difficult [163]. Instead, SCADA data are often used in static state estimation algorithms [165–168] for manual decision making [169, 170]. Dynamic state estimation is further complicated by SCADA’s lack of measurements with synchronized time stamps [163].

To address these issues, SCADA systems must be equipped with the ability to study temporal trends with finer resolution and synchronization [169]. These requirements imply better coordination and compatibility between SCADA terminals [163]. Such developments in wide-area measurements are set to enhance corrective actions against system-wide disturbances [171]. All in all, the electric grid must be updated with new sensors to enable the better gathering, transfer, and processing of measurement data [172].

Sourcing power for sensors can pose a major challenge to their deployment in sensor networks. The main energy-intensive components in a typical sensor include microcontrollers, wireless interfaces, integrated circuits, voltage regulators, and memory storage devices. Nevertheless, this challenge can be overcome through the use of batteries or environmental power sourcing techniques [18]. A key factor in designing sensors for remote applications is ensuring sustainable energy consumption and supply. In order to minimize operation and maintenance costs, sensors must be designed in a way that optimizes hardware and software energy use while taking advantage of energy harvesting opportunities from naturally occurring sources of energy such as thermal, solar, kinetic, and mechanical energy [138, 173]. Furthermore, some sensors can switch between a static “asleep” and a dynamic “awake” mode as needed.
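
As a rough illustration of why such duty cycling matters, the following back-of-the-envelope sketch estimates the average draw and battery life of a sensor node that sleeps between reports; all current draws, durations, and battery sizes are assumed values.

```python
# Illustrative energy budget for a duty-cycled sensor node.
# All current draws and durations are hypothetical, for illustration only.
AWAKE_CURRENT_MA = 25.0    # microcontroller + radio while measuring and transmitting
SLEEP_CURRENT_MA = 0.02    # deep-sleep ("asleep") mode
AWAKE_SECONDS = 2.0        # time awake per reporting interval
INTERVAL_SECONDS = 300.0   # report every 5 minutes

duty_cycle = AWAKE_SECONDS / INTERVAL_SECONDS
avg_current_ma = duty_cycle * AWAKE_CURRENT_MA + (1 - duty_cycle) * SLEEP_CURRENT_MA

BATTERY_MAH = 2400.0       # assumed battery capacity (roughly a pair of AA cells)
lifetime_hours = BATTERY_MAH / avg_current_ma
print(f"average draw = {avg_current_ma:.3f} mA, "
      f"battery life = {lifetime_hours / 24:.0f} days")
```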

In addition to such energy minimization techniques, designers must also optimize the use of passive components such as capacitors, resistors, and diodes to reduce leakage currents and switching frequencies [138]. Reducing the energy dependence of sensors on the electric power grid is of vital importance to prevent cascading failures between the physical electric grid and the informatic sensor network [174]. Such decoupling of the power grid’s sensors from its physical power flows serves to increase the resilience of the two systems together [174].

These sensing challenges in the transmission system have motivated the deployment of phasor measurement units (PMUs) (that is, synchrophasors). Phasor measurements provide a dynamic perspective of the grid’s operations because their faster sampling rates help capture dynamic system behavior [169, 170, 175–185]. PMUs measure voltage and current, and can calculate watts, vars, frequency, and phase angles up to 120 times per second [163, 176]. Figure 3.5 shows the schematic of a PMU. These PMU data immediately enhance topology error correction and the robustness and accuracy of state estimation [163], while providing faster solution convergence and enhanced observability [186]. Simulations and field experiences also suggest that PMUs can drastically improve the way the power system is monitored and controlled [186]. However, the installation of PMUs and their dependent solutions can be hindered by monetary constraints [186, 187]. A completely observable system requires a large number of PMUs, which utilities usually install incrementally [187].

Fig. 3.5 Schematic of a phasor measurement unit [20]
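
As an illustration of the measurement itself, the sketch below estimates a single voltage phasor from one cycle of samples using a plain single-cycle DFT. Actual PMU firmware (e.g., compliant with IEEE C37.118) uses more elaborate filtering and GPS time-stamping; the waveform, amplitude, and sampling rate here are synthetic assumptions.

```python
import math

def phasor_from_samples(samples, samples_per_cycle):
    """Single-cycle DFT estimate of the fundamental phasor: (RMS magnitude, angle in degrees)."""
    n = samples_per_cycle
    re = (2.0 / n) * sum(s * math.cos(2 * math.pi * k / n) for k, s in enumerate(samples[:n]))
    im = -(2.0 / n) * sum(s * math.sin(2 * math.pi * k / n) for k, s in enumerate(samples[:n]))
    magnitude_rms = math.hypot(re, im) / math.sqrt(2)
    angle_deg = math.degrees(math.atan2(im, re))
    return magnitude_rms, angle_deg

# Synthetic 60 Hz waveform: 169.7 V peak (120 V RMS) at a 30-degree phase angle,
# sampled 32 times per cycle (an assumed sampling rate; real PMUs vary).
N = 32
wave = [169.7 * math.cos(2 * math.pi * k / N + math.radians(30)) for k in range(N)]
print(phasor_from_samples(wave, N))   # approximately (120.0, 30.0)
```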

Recent studies have explored algorithms for optimal placement of PMUs to minimize the number of PMUs required to collect sufficient information [188–190]. PMU-based wide-area monitoring systems (WAMS) use the global positioning system (GPS) to synchronize PMU measurements [170]. Such synchronized measurements allow two quantities to be compared in the real-time analysis of grid conditions [186]. Through wide-area monitoring and synchronization, PMUs have made great strides in power system stability [170], which was often hindered by SCADA’s slow state updates [191]. The implementation of synchrophasors has also allowed voltage and current data from diverse locations to be accurately time-stamped in order to assess system conditions in real-time [186]. Synchrophasors are also available in protection devices, but since requirements for protection devices are fairly restrictive, the full integration of synchrophasors into line protection is still debated [186]. The increasing application of synchrophasors in wide-area monitoring, protection and control systems, post-disturbance analyses, and system model validation has made these measurement tools invaluable [176, 187].
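
The placement problem referenced above can be illustrated with a simple greedy heuristic: treat a bus as observable if it hosts a PMU or is adjacent to one, and repeatedly place a PMU where it newly observes the most buses. The cited studies formulate this exactly (typically as an integer program); the seven-bus network below is purely hypothetical.

```python
def greedy_pmu_placement(adjacency):
    """Greedily place PMUs so every bus is observable (hosts a PMU or neighbors one).
    A heuristic sketch only; the cited studies solve the placement problem exactly."""
    unobserved = set(adjacency)
    placements = []
    while unobserved:
        # Pick the bus whose PMU would newly observe the most buses.
        best = max(adjacency, key=lambda b: len(({b} | adjacency[b]) & unobserved))
        placements.append(best)
        unobserved -= {best} | adjacency[best]
    return placements

# Hypothetical 7-bus test network (bus: set of directly connected buses).
network = {
    1: {2}, 2: {1, 3, 6}, 3: {2, 4, 6}, 4: {3, 5},
    5: {4, 7}, 6: {2, 3, 7}, 7: {5, 6},
}
print(greedy_pmu_placement(network))   # PMUs at buses 2 and 5 suffice for this toy network
```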

While the integration of PMUs into the transmission system will do much to enhance situational awareness in the transmission system, it is by no means sufficient for the grid as a whole. First, PMUs are primarily meant for applications in the transmission system and to a large extent are not feasible in the distribution system. They are even less appropriate for understanding customers’ power consumption profiles. In that regard, the emergence of smart meters has fulfilled a much needed functionality. Second, PMUs only measure voltage and current phasors. As such, they are able to provide much needed insights into grid conditions but are not able to inform why these conditions exist. As the electric grid comes to depend more on interdependent infrastructure, weather conditions, and consumers’ dynamic behavior, secondary measurements of these quantities become increasingly important. In that regard, sensors used in other sectors will have an indispensable role in taking secondary measurements.

3.1.2.2 Network-Enabled Actuators: AGC, AVR, and FACTS

In order to take full advantage of the heterogeneity of sensing and measurement technologies, a heterogeneity of actuation methods is also needed. Much like sensing technologies, actuation technology has long been a part of power systems operations and control. Perhaps the earliest remotely controlled actuator in the electric grid is automatic generation control (AGC) [192], which is used to maintain grid frequency in the face of fluctuating consumer load. In time, power system operations came to include automatic voltage regulation (AVR) [193, 194] to maintain voltage stability. Finally, a plethora of flexible alternating current transmission system (FACTS) [195] devices have been developed to address line congestion in addition to supporting AGC and AVR technologies.

AGC, formerly known as load-frequency control, was established in the early 1950s [196] to adjust the power output of interconnected generators in order to meet variations in load (Fig. 3.6). Imbalances in real power generation and load cause frequency fluctuations that could compromise the stability of the system. For a given control area, each energy control center aims to maintain zero area control error (ACE). The ACE combines the deviation in net interchange power with the deviation in net frequency, expressed in megawatts (MW) [196]. Controlling the ACE is the main role of AGC, and it is achieved through a mix of specialized control algorithms and automatic signals to generators. AGC achieves control of output generation by sending signals to generators every 4 s. The ability of generators to respond to these signals is governed by various characteristics of the generator, such as type of plant, fuel type, age of the unit, as well as operating point and operator actions [197]. In most cases, units under AGC tend to have faster ramping capabilities, such as fast-start natural gas units.

Fig. 3.6 Schematic of automatic generation control [20]
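
A minimal sketch of the ACE calculation and its allocation to regulating units is shown below. It uses the textbook formulation (interchange deviation minus a frequency-bias term); the bias value, unit names, and participation factors are assumptions for illustration, not any utility's actual settings.

```python
def area_control_error(net_interchange_mw, scheduled_interchange_mw,
                       frequency_hz, nominal_hz=60.0, bias_mw_per_0p1hz=-25.0):
    """Textbook ACE: interchange deviation minus the frequency-bias term (MW).
    The frequency bias B is negative by convention (MW per 0.1 Hz)."""
    delta_p = net_interchange_mw - scheduled_interchange_mw
    delta_f = frequency_hz - nominal_hz
    return delta_p - 10.0 * bias_mw_per_0p1hz * delta_f

def allocate_agc(ace_mw, participation_factors):
    """Split the corrective signal among AGC units by participation factor."""
    return {unit: -ace_mw * pf for unit, pf in participation_factors.items()}

# Hypothetical control area: exporting 20 MW above schedule while frequency sags slightly.
ace = area_control_error(net_interchange_mw=520.0, scheduled_interchange_mw=500.0,
                         frequency_hz=59.98)
print(round(ace, 1))                                       # ACE in MW
print(allocate_agc(ace, {"gas_1": 0.6, "hydro_2": 0.4}))   # MW adjustments sent every ~4 s
```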

As the electric grid becomes more and more interconnected, the AGC process has become more complicated, and research into distributed control algorithms for AGC is steadily underway [198]. (See Sect. 3.3 for further explanation.) AGC has also become more decentralized, with the Federal Energy Regulatory Commission (FERC) even allowing third-party AGC [199]. Such decentralized AGC is likely to require advanced communication for any large-scale application to be considered feasible. Specifically, the current star-shaped communication architecture would need to change to a meshed one [172].

In addition to frequency regulation, voltage regulation is a key component in ensuring power stability. Voltage regulation has played a significant role in controlling the reactive power flow in the electric grid. The schematic of automatic voltage regulation is best captured by Fig. 3.7. In North America, voltage control is done at a local level, although there is a possibility of expanding it to a regional level, as has been successfully implemented in China and the UK [172]. Voltage instability occurs when a condition in the system results in deficient reactive power. Currently, voltage instability analyses rely heavily on contingency analysis to prevent conditions that could potentially result in deficient reactive power [172]. This contingency analysis and prevention has been made possible by the use of automatic voltage regulators. With DERs, issues such as steady-state voltage spikes are likely to occur, making the use of a single voltage regulator for multiple feeders infeasible [200]. Going forward, multi-agent approaches could be applied to provide more flexibility to the voltage regulation process [201].

Fig. 3.7 Schematic of automatic voltage regulation [20]

The use of FACTS in power transmission has tremendously improved the amount of power that can be transported within the power grid. This has enhanced the stability of the grid in the face of increasing demand and variable generation capacity. FACTS devices can increase or decrease power flow in certain lines and respond to instability problems almost instantaneously. These devices have aided in power routing and have helped send power to areas that were previously insufficiently connected [202]. FACTS devices are a wide range of power electronic devices that are split into three categories depending on their switching technology: (1) mechanically switched, (2) thyristor switched, or (3) fast-switched [202]. They include but are not limited to: the static synchronous compensator (STATCOM) and static VAR compensator (SVC) for voltage control, the thyristor controlled phase shifting transformer (TCPST) for angle control, and the thyristor controlled series compensator (TCSC) for impedance control [202]. The SVC is an automated impedance matching device that switches in capacitor banks to bring up the voltage under lagging conditions and switches in reactors to absorb vars from the system under leading conditions.
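
One way to see why switched capacitor banks struggle exactly when voltage support is most needed is that their reactive output scales with the square of bus voltage, whereas converter-based devices such as the STATCOM degrade more gracefully. The short sketch below works this out for a hypothetical 50 MVAr bank on a sagging 138 kV bus; the ratings are illustrative assumptions.

```python
def capacitor_bank_q_mvar(bus_voltage_kv, rated_voltage_kv, rated_q_mvar):
    """Reactive power injected by a switched capacitor bank: Q scales with V squared."""
    return rated_q_mvar * (bus_voltage_kv / rated_voltage_kv) ** 2

# Hypothetical 50 MVAr bank on a 138 kV bus that has sagged to 0.95 pu.
print(round(capacitor_bank_q_mvar(0.95 * 138, 138, 50.0), 1))  # about 45.1 MVAr delivered
```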

The SVC and TCSC represent what is commonly referred to as the first generation of FACTS devices [202]. A STATCOM is based on a power electronics voltage source converter and can act as a source or sink of reactive AC power as needed. This device is commonly used for voltage stability and belongs to the second generation of FACTS devices [202]. FACTS devices have played a key role in deregulated markets by helping to increase the loadability of power lines, reduce system losses, improve the stability of the system, reduce production costs, and control the flow of power in the network. These functions make FACTS devices indispensable as the electric grid becomes more interconnected and adopts eIoT. As eIoT develops further, FACTS devices may need to become smarter so as to receive signals and regulate flow as necessary. Such capabilities are particularly helpful in the control of DERs. The ability to connect to communication networks is also necessary for these devices to ensure that they communicate and work with other sensors and wireless devices.

3.1.3 Sensing and Actuation of Supply Side Secondary Variables

As mentioned earlier in the section, the deployment of variable energy, energy storage, and demand-side resources requires a greater understanding of their associated secondary variables. For example, the power injection and withdrawal of these resources depends on solar irradiance, wind direction and speed, temperature, humidity, and rain [160]. Therefore, sensing and actuating these secondary variables enables the control of the supply and demand of electricity based on their root causes.

3.1.3.1 Network-Enabled Sensors: Wind, Solar, and Natural Gas Resources

Perhaps the best way to appreciate the benefits of measuring secondary variables is by observing how IoT analogously enabled “smart manufacturing,” which is defined as “the use of information and communications technology to integrate all aspects of manufacturing, from the device level to the supply chain level, for the purpose of achieving superior control and productivity [203].” Smart manufacturing implies the use of embedded sensors and devices that communicate with each other and other systems [203]. Through data gathering and sharing, these devices inform decision making and automation throughout the manufacturing network [203]. The system uses big data to improve, evaluate, and analyze operations, consumer interests, resource planning, and management systems via cloud-based tools [203].

Smart manufacturing involves a holistic approach where it tracks a product’s life cycle from raw material, to factory, to end use [203]. Most important, smart manufacturing makes use of a distributed approach by ensuring that every entity in an organization has the necessary information, at the time it is needed, to make optimal contributions to the overall operation through informed, data-based decision making [203]. Systems such as Industrie 4.0 advocated for the concept of “intelligent products,” which used “product agents.”

Furthermore, IoT has enabled greater supply chain integration both upstream and downstream of a given production system [119–121]. The information about incoming parts and services from upstream suppliers helps streamline operations management decisions [8, 122, 123]. Similarly, the information about downstream demand allows production systems to manage when and where they need to deploy resources closer to real-time [124–131]. When the electric power system is viewed as a full supply chain, it can mirror smart manufacturing applications to extract the full value of eIoT.

In that regard, the reliable integration of solar and wind resources requires secondary measurement applications in the electric grid. Such measurements include wind speed and solar irradiance. This kind of secondary monitoring of weather-dependent variables is not entirely new to electric power systems. Hydrologists have been monitoring water flows and elevations to understand the potential for hydropower generation for decades [204]. Indeed, as concerns over global climate change and water availability rise, the energy-water nexus has received considerable attention [205–225]. These works have investigated the availability of water for the energy infrastructure [217–225], the co-optimization of water and energy infrastructure [212–216], and the impacts of water consumption on electric grid demand-side management [220, 226, 227].

However, solar and wind resources, unlike hydropower, are often called variable energy resources (VERs). They exhibit intermittency in that their power generation value is not entirely controllable. They also exhibit uncertainty in that their power generation value is not perfectly predictable [228–233]. In both cases, access to real-time secondary measurements of weather-based variables can greatly reduce the uncertainty they impose on electric power system operations [234, 235]. Furthermore, as solar and wind resources become more prevalent at the grid periphery as DG, concerns over voltage fluctuations, power quality, and system stability necessitate better forecasting [109].

Despite these similarities, solar and wind power generation requires distinct prediction and monitoring techniques. Solar PV monitoring is best served by effective short-term predictions of fluctuations in solar irradiance over short intra-day and intra-hourly timescales [109]. When combined with the fixed parameters of the solar PV arrays (for example, size and efficiency), such predictions can be used to calculate power generation values [109]. In most cases, forecasting techniques based purely on historical data are insufficient. Instead, many of the most promising approaches propose hybrid machine-learning techniques that combine historical data with real-time weather data [236].
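
A minimal sketch of the prediction-to-power step is shown below: a forecast irradiance value is mapped to array output with a simple linear performance model and a temperature derating. The array size, temperature coefficient, and forecast values are assumptions; the hybrid machine-learning approaches cited above replace the irradiance forecast itself with far richer models.

```python
def pv_power_kw(irradiance_w_m2, cell_temp_c, rated_kw_stc,
                temp_coeff_per_c=-0.004, stc_irradiance=1000.0, stc_temp_c=25.0):
    """Estimate PV array output from forecast irradiance using a simple
    performance model: linear in irradiance with a temperature derating."""
    derate = 1.0 + temp_coeff_per_c * (cell_temp_c - stc_temp_c)
    return rated_kw_stc * (irradiance_w_m2 / stc_irradiance) * max(derate, 0.0)

# Hypothetical 5 kW rooftop array under a forecast of 650 W/m^2 with 40 C cells.
print(round(pv_power_kw(650.0, 40.0, 5.0), 2))   # about 3.06 kW
```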

Wind power generation also combines wind speed predictions with site-dependent variables such as surface landscape and weather conditions to accurately predict power output [236]. In both cases, solar and wind variability occurs on all timescales, from turbine control occurring from milliseconds to seconds to integrated wind-grid planning occurring from minutes to weeks [237–239]. Furthermore, wind and solar predictions quickly lose accuracy at longer timescales [232, 237, 240–244]. Consequently, a holistic approach to forecasting must address the many applications of power system operations and control [15]. These include reserves procurement and energy market optimizations such as unit commitment and economic dispatch [237, 245–250]. Advanced sensing technologies introduced through eIoT are expected to play a key role in obtaining and communicating raw data inputs to solar and wind prediction models.

Similar to VERs, even dispatchable resources such as natural gas can have variable supply chains that require secondary measurement to ensure reliable grid operation. The challenge of natural gas relative to other dispatchable power generation fuels is that its gaseous state requires purpose-built facilities for its storage. Coal and oil are often stockpiled at the input of power generation resources to ensure an effective ramping response to grid conditions. Natural gas, on the other hand, is fed by pipeline and has only limited storage capability in many geographical regions.

Therefore, the flow of natural gas is quite susceptible to pipeline capacity constraints. As the price of natural gas has fallen in recent years (in response to the expanded availability of shale gas), this susceptibility has only grown. Some ISOs now have over 50% of their power generation capacity come from natural gas units [251]. To ensure reliability, power grid operators must now coordinate their operations with natural gas operators to make certain that sufficient natural gas capacity is available for power generation [252].

And yet, coordinated operation of the natural gas and electric power systems requires a recognition of their inherent similarities and differences. The natural gas industry, like the electric industry, has undergone deregulation to encourage competitive markets [252–254]. The electric power system has wholesale energy markets that implement security-constrained unit commitment (SCUC) and security-constrained economic dispatch (SCED) decisions. They competitively clear 1 day ahead and every 5 min, respectively [253]. Meanwhile, natural gas supply contracts have durations from 1 day to 1 year [254]. The optimal supply mix of natural gas also compensates storage and not just supply and transmission (as is the case in electric power) [254]. Furthermore, natural gas is transported by shipment as liquefied natural gas or by pressure differences in a pipeline network as a gas [252]. In contrast, electricity has no such differentiation of material phase. Finally, the natural gas system has an entirely different set of organizations, regulations, and scopes of jurisdiction that further complicate coordination with the electric power system.

Nevertheless, the presence of deregulation and market forces now means that natural gas and electricity prices are often closely correlated [255]. This is especially true during particularly hot or cold days when both systems experience peak demand from heating, ventilation, and air conditioning (HVAC) units [253]. The challenge during these times is to design the control room operations and the markets for both commodities such that both infrastructures continue to operate reliably and cost-efficiently [252–263]. Naturally, these requirements further motivate the need for secondary measurement from eIoT.

3.1.3.2 Network-Enabled Actuators: Wind and Solar Resources

The effect of VERs on power system stability and control is significant due to the intermittent nature of resources such as wind and solar. However, recent studies and applications are showing that these resources are not so variable after all. In fact, they can be used to provide ancillary services such as frequency and voltage regulation or “artificial inertia.” Wind turbine generators have varying reactive power regulation capabilities, depending on the manufacturer. Types 1 and 2 wind turbines are based on induction generators and have no ability for voltage control, while Types 3, 4, and 5 wind turbine generators have power electronic converters that allow them to control reactive power and regulate voltage [264].

Although Type 1 and 2 wind turbines cannot control voltage directly, they are usually fitted with power correction capacitors to maintain the reactive power output at a fixed set point [264, 265]. These voltage control capabilities can be used to regulate the voltage at the collector bus of the wind farm [264, 265]. A centralized controller would usually communicate with individual wind turbines directly to regulate their voltage. Presently, grid codes require wind power plants (WPPs) to have a specified reactive power capability (for example, 0.9 lagging to 0.9 leading), making reactive power capabilities fundamental to the design of WPPs [264, 265].
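
The reactive power envelope implied by such a power-factor requirement follows directly from |Q| = P·tan(arccos(pf)), as the short sketch below shows for a hypothetical 100 MW plant.

```python
import math

def reactive_envelope_mvar(active_power_mw, power_factor=0.9):
    """Reactive power a plant must be able to absorb or inject at a given
    power-factor requirement: |Q| = P * tan(arccos(pf))."""
    return active_power_mw * math.tan(math.acos(power_factor))

# Hypothetical 100 MW wind power plant at full output with a 0.9 leading/lagging requirement.
q = reactive_envelope_mvar(100.0)
print(f"required range: -{q:.1f} to +{q:.1f} MVAr")   # roughly +/- 48.4 MVAr
```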

In recent years, the concept of “synthetic” or “artificial” inertia has been introduced as a potential application for frequency control. A study conducted on the New Zealand system explored a possible use of wind turbine generators for frequency regulation by providing a megawatt contribution within a small period of time [266]. The study also proposed the following activation mechanism to mimic the first frequency response produced by real inertia: (1) the activation must occur within 0.2 s after the frequency reaches 0.3 Hz below nominal, (2) the ramp rate of the output must be no less than 0.05 pu/s of the machine’s total capacity in megawatts, (3) the output must be maintained for at least 6 s from activation, and (4) the machine must deactivate the artificial megawatt output once the frequency has returned to nominal [266]. With this activation technique, low-inertia devices can contribute megawatts to arrest a falling system frequency. Other studies have also proposed a mechanism of reprogramming power inverters connected to wind turbines to imitate “synchronized spinning masses,” or synthetic inertia [267]. Hydro-Québec TransÉnergie was the first to adopt this application of synthetic inertia, and the general response has been positive, although not yet enough to sustain the growing penetration of wind [267]. As wind turbine designs advance to supply more inertia, they are increasingly viewed as contributors to system stability.
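
The activation rules summarized above can be captured in a few lines of control logic. The sketch below follows rules (1) through (4) from the cited study; the assumed megawatt contribution, plant size, and 50 Hz nominal frequency (as in New Zealand) are illustrative choices rather than the study's exact implementation.

```python
class SyntheticInertia:
    """Sketch of the synthetic-inertia activation rules summarized above [266].
    The contribution level and plant size are assumptions for illustration."""

    def __init__(self, capacity_mw, ramp_pu_per_s=0.05, contribution_pu=0.06):
        self.capacity_mw = capacity_mw
        self.ramp_mw_per_s = ramp_pu_per_s * capacity_mw   # rule (2): ramp >= 0.05 pu/s
        self.target_mw = contribution_pu * capacity_mw     # assumed MW contribution
        self.output_mw = 0.0
        self.active_s = None                               # time since activation

    def step(self, frequency_hz, dt_s, nominal_hz=50.0):
        """Called every dt_s seconds (dt_s << 0.2 s so rule (1)'s latency limit is met)."""
        if self.active_s is None:
            if frequency_hz <= nominal_hz - 0.3:           # rule (1): trigger threshold
                self.active_s = 0.0
            return self.output_mw
        self.active_s += dt_s
        if frequency_hz >= nominal_hz and self.active_s >= 6.0:   # rules (3) and (4)
            self.output_mw, self.active_s = 0.0, None
        else:
            self.output_mw = min(self.target_mw,
                                 self.output_mw + self.ramp_mw_per_s * dt_s)
        return self.output_mw

si = SyntheticInertia(capacity_mw=100.0)
print(si.step(49.65, dt_s=0.1), si.step(49.7, dt_s=0.1))   # activates, then ramps (MW)
```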

The nature of remotely controlled devices requires them to be self-sufficient and self-sustaining. Remote devices include power transmission line monitoring systems, sensors, backbone nodes, and video cameras set up on transmission lines and towers. Given their location, repair and maintenance of these devices is severely limited. As such, remote devices are constrained by battery capacity, processing ability, storage capacity, and bandwidth [161]. These devices are in need of remote sources of power, although they can use power acquisition technology [161] to harvest their own power. In addition, these devices must be suited for varying environmental conditions and must be waterproof, dust-proof, anti-vibration, anti-electromagnetic, anti-high-temperature, and anti-low-temperature [161]. Data fusion technology has been suggested as an application that can be used to collect data more efficiently and combine useful data for these remote devices [161].

As for solar PV actuation, smart inverters are seen as key components for the effective coordination of solar PV systems with other eIoT devices. Inverters play a key role in the intersection between the measurement and decision-making layer of the control loop. New developments in the field of power electronic devices and modern control strategies for inverters have provided numerous operation strategies for efficient management of the inverter-controlled systems. However, future inverter designs need to allow for modularity to ensure independent scalability of components especially when deploying them to distributed systems such as solar PV installations [268]. Modular inverter design is also key to fast and effective standardization [268].

With smart inverters, the integration of IoT devices with direct current interfaces has become much easier [268]. For an inverter to be considered smart, it must have a digital architecture with the capability for two-way communication and a solid software infrastructure. The ability to send and receive messages quickly is imperative for effective eIoT deployment. Smart inverters must be capable of sending granular data to utilities, consumers, and other stakeholders quickly. This allows for faster and more efficient diagnosis of problems as well as maintenance [269]. For solar PV, smart inverters have a key role to play in improving system costs and performance as they provide high redundancy through a distributed AC architecture [269]. Microinverters give a PV system the ability to provide ancillary services such as ramp rate control, power curtailment, fault ride-through, and voltage support through vars [269].

To fully develop and incorporate smart inverters into the grid, designers must work with utilities and regulators to meet the desired standards and regulatory requirements. The Underwriters Laboratories/American National Standards Institute (UL/ANSI) 1741 and IEEE 1547 standards groups, together with the Smart Inverter Working Group (SIWG), are among the groups working collaboratively towards advancing this technology [269].

3.1.4 Sensing and Actuation of Primary Variables in the Distribution System

As was discussed extensively in Chap. 2, the greatest transformation of the electric power grid will occur at the grid periphery. These include the integration of network-enabled sensors and actuators in distributed generation, distribution lines, and end-user power consumption. The discussion provided in Sect. 3.1.3, in many ways, already addressed the sensing and actuation of DG. Because solar PV and wind turbines are effectively scalable technologies, they may be integrated equally effectively in the transmission and distribution systems. Consequently, the conclusions of Sect. 3.1.3 are equally applicable here. This section now addresses the sensing and actuation of primary variables in the distribution system prior to addressing secondary variables in Sect. 3.1.5.

3.1.4.1 Network-Enabled Sensors: The Emergence of the Smart Meter

In many ways, the degree of transformation of distribution system sensing technologies surpasses the transmission system development described previously. Traditionally, the electrical equipment installed at the customer point was mainly a meter, the chief purpose of which was consumer billing [270]. It counted the total number of kilowatt-hours (kWh) consumed and was read once per billing period. This meant that utilities rarely had access to real-time power consumption data at the grid periphery. Instead, real-time data would originate from feeders and substations that were connected to the SCADA network. The remaining “last-mile” of the grid (between these feeders and electricity consumers) was often managed by practical engineering rules based upon feeder data and the feeder’s radial topology. These approaches, however, have limited utility in the presence of DG downstream of the last SCADA-monitored feeder [271, 272]. Furthermore, they are equally inapplicable as demand-side resources begin to participate in demand-response programs [271, 272].

The advent of smart meter technology, however, has greatly expanded the capabilities of demand-side metering technology. First, instead of simply measuring aggregate energy consumption, smart meters measure active power consumption as a temporal variable with a sampling rate of up to 1 Hz [273]. Some smart meters also measure power quality as well as voltage and current phase angles [274]. Such measurements naturally produce significant quantities of data which must ultimately be communicated, processed, and stored in new information technology (IT) infrastructure. Nevertheless, the readings from individual smart meters are valuable because they can be used to make advanced analyses for individual meters or aggregated networks [141, 270].
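
A back-of-the-envelope sketch of the resulting data burden helps explain why new IT infrastructure is needed; the fleet size and per-reading record size below are assumptions, not figures from the cited work.

```python
# Back-of-the-envelope data volume for AMI sampling (all sizes are assumptions).
METERS = 1_000_000          # smart meters in a utility's service territory
SAMPLE_HZ = 1               # 1 Hz active-power sampling, as discussed above
BYTES_PER_READING = 12      # assumed record size: timestamp + value + meter ID

bytes_per_day = METERS * SAMPLE_HZ * 86_400 * BYTES_PER_READING
print(f"{bytes_per_day / 1e12:.1f} TB/day")   # about 1.0 TB/day before compression
```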

Second, smart sensors, such as smart meters in advanced metering infrastructure (AMI), monitor a bidirectional flow of power and allow for two-way communication between the utility and the consumer [275, 276]. AMI is a system of technologies that measures, saves, and analyzes energy usage from devices such as smart meters using various communication media [46]. AMI meters have embedded controllers, typically including a sensor, a display unit, and a communication component such as a wireless transceiver, and they are generally powered by the electrical feed that they are monitoring [276]. AMI can also incorporate older systems such as automatic meter reading (AMR) and automated meter management (AMM) in its applications [46]. An older AMR system may be capable of remotely collecting power consumption data, remotely relaying power usage, remotely turning a system on or off, and generating bills with different pricing rates [277, 278].

Most utilities have upgraded their investments from AMR to AMI to install two-way communication in a transition to smart technologies with improved demand-side management capabilities [141]. In 2013, the number of two-way AMI meters overtook the number of one-way AMR meters for the first time [279] and by 2016, there were about 46.8 million AMR meters and about 70.8 million AMI smart meters installed by utilities [279, 280]. As eIoT advances to include demand-side management, older technologies need to be upgraded in order to maximize the benefits of eIoT technologies.

3.1.4.2 Network-Enabled Actuators: Distribution Automation

Although distribution automation was initially implemented in the USA (in the 1970s) to increase reliability and resilience in the face of electrical faults [281], eIoT is creating increased demand for automated power quality management and real-time network adjustments. Automated feeder switching provides traditional reliability functions in response to fault identification, load control, and load management [282]. Distribution automation is important not only for resilience against faults, but also as a solution to today’s more dynamic loads. Tools such as automated feeder switching must accomplish network-wide reconfigurations for self-healing operations and day-to-day operations with increased load variability [283]. Other tools, such as automated voltage regulation and automated power factor correction, increase efficiency and improve power quality [21, 282]. Optimal load balancing through automation results in decreased power losses, deferred capacity-expansion investment, and improved voltage profiles [21, 283].

Automation in distribution is a step towards a larger, eIoT-enabled smart grid that integrates microgrids for optimal performance [281, 282]. The DOE’s Smart Grid Investment Grant (SGIG) Program advanced distribution automation as an imperative for modernizing the electric grid [21]. Partly funded by the American Recovery and Reinvestment Act (ARRA), utilities in the SGIG program installed 82,000 smart devices on 6500 distribution circuits [21]. Figure 3.8 shows the installations of distribution assets from the program.

Fig. 3.8 Distribution automation upgrades during the Smart Grid Investment Grant program [21]

3.1.5 Sensing and Actuation of Demand-Side Secondary Variables

The sensing and actuation of demand-side secondary variables serves to empower customers to create energy-aware smart homes [284–286], commercial buildings [287, 288], and industrial facilities [289, 290]. In that regard, eIoT developments should be seen as an energy extension to long-standing efforts for automation. Network-enabled sensors again play the key role of providing insights into electricity consumption patterns with potentially device-level granularity. Network-enabled actuators on these devices can then respond to energy-aware decisions that make trade-offs between consumer preferences and energy consumption.

That said, it is important to recognize that secondary variables on the supply and demand sides are fundamentally different. On the electricity supply side, the need for sensing and actuation is entirely motivated by a single purpose: the generation and sale of electricity. On the demand side, secondary variables describe the behaviors of electricity consumers in the residential, commercial, and industrial sectors. Their electrical consumption patterns serve the more fundamental purpose of enabling these sectors to carry out their activities outside of the electricity sector. Consequently, an effective implementation of eIoT on the demand side always needs to answer the question: “What is the electricity used for?” For example, a production facility that uses 1 kW to run a milling machine will not shed that consumption because it directly contributes to production throughput. In contrast, it may shed 1 kW of back-office load because laptop computers can run on their own batteries. Consequently, the remainder of this section breaks the discussion into the various applications of eIoT devices.

3.1.5.1 Energy Monitors with Embedded Data Analytics

While device-level sensing granularity of electricity consumption has become a goal of eIoT, in many cases it is not cost feasible. Instead, energy monitors, particularly in home applications, have developed to fill a much needed gap in the eIoT landscape. They are best understood by comparison to smart meters. Smart meters measure aggregate power approximately every minute and provide data “outward” to the utility. Energy monitors, in contrast, measure a home’s or facility’s aggregate power consumption every millisecond (1 kHz), and the data is sent “inwards” to the homeowner or facility manager [291]. The operating principle of an energy monitor is illustrated in Fig. 3.9. The aggregate power consumption consists of several device-specific “signatures” that make it possible, via data analytics algorithms, to recognize when one device is operating versus another. Such a technique is most effective in differentiating high-consuming devices and less so for small devices. The resulting data can be provided to homeowners and facility managers for cost-saving decisions. Home energy monitors are currently available at a variety of price points from about $150 to $400. Over time, the resulting energy cost savings can outweigh a consumer’s initial investment of roughly $300 in a home energy monitoring system.

Fig. 3.9 Aggregate profile of household electric power consumption [22]
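
The signature-recognition idea behind Fig. 3.9 can be sketched as a simple edge detector that labels step changes in the aggregate trace against a table of known appliance signatures. Real disaggregation algorithms are considerably more sophisticated; the signatures, thresholds, and synthetic trace below are assumptions.

```python
APPLIANCE_SIGNATURES = {"kettle": 1800, "refrigerator compressor": 150, "toaster": 900}  # W (assumed)

def detect_events(aggregate_w, min_step_w=100, tolerance_w=75):
    """Flag step changes in the aggregate trace and label them with the closest signature."""
    events = []
    for t in range(1, len(aggregate_w)):
        step = aggregate_w[t] - aggregate_w[t - 1]
        if abs(step) < min_step_w:
            continue
        name, rated = min(APPLIANCE_SIGNATURES.items(), key=lambda kv: abs(abs(step) - kv[1]))
        if abs(abs(step) - rated) <= tolerance_w:
            events.append((t, name, "on" if step > 0 else "off"))
    return events

# One reading per second (synthetic): a refrigerator cycles on, then a kettle runs.
trace = [300, 300, 450, 450, 2250, 2250, 2250, 450, 450, 300]
print(detect_events(trace))
```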

Meyers, Williams, and Matthews, in an article in Energy and Buildings [292], used the US Energy Information Administration’s Residential Energy Consumption Survey data to estimate the inefficiencies in US home energy usage. The authors estimate that in 2005, 39% of energy delivered to US homes was wasted, costing homeowners a total of $81.5 billion, or $733.60 per household on average. Assuming that 41% of the energy inefficiencies could be reduced in part by using a home monitoring system to identify costly consumption behavior, the homeowner could see benefits within the first year of purchasing the system.
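
A worked version of that payback argument, using the figures above together with an assumed $300 monitor price, is sketched below.

```python
waste_per_household = 733.60      # USD/year of wasted energy (2005 estimate above)
addressable_share = 0.41          # share assumed addressable via home monitoring (as above)
monitor_cost = 300.00             # assumed purchase price of a home energy monitor

annual_savings = waste_per_household * addressable_share
print(f"annual savings = ${annual_savings:.2f}, "
      f"payback = {monitor_cost / annual_savings:.1f} years")
# annual savings = $300.78, payback = 1.0 years
```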

3.1.5.2 Network-Enabled Smart Switches, Outlets, and Lights

While energy monitors are relatively effective in resolving an aggregate power consumption profile into its constituent device-level components, they do leave room for further technological development. First, the data analytics algorithms will never resolve devices whose individual power consumption is comparable to the aggregate power consumption’s noise level. While this may seem like a trivial issue, in reality, it is important because most facilities have large populations of small devices that together may make up a large part of the total power consumption. Indeed, the Department of Energy has provided practical advice about “phantom loads” that draw electric power simply by remaining idle while plugged in [293].

Phantom loads are costly and inefficient [294, 295]. The average US household wastes $100 per year on devices that draw power while not being used [293]. Electronics such as digital video recorders (DVRs) are large users of energy even in standby mode, using 37 W in a home [294]. “Dumb” devices can help decrease phantom loads. For example, connected power strips can make disconnecting groups of appliances easier [294, 296]. Intelligent actuators in home automation overcome inconvenience and human forgetfulness to eliminate phantom loads and provide household savings [297]. Unfortunately, energy monitors do not actuate individual devices without manual intervention. For these reasons, a wide range of smart home devices have been developed in recent years to give homeowners device-level visibility and control.

Device-level visibility and control have the potential to transform energy management. eIoT extends to individual home appliances, or production profiles for factories, or HVAC patterns for commercial buildings. The success of such coordination depends on real-time data exchange between smart devices, electricity operations, and the energy consumer [298]. The data includes forecasts of prosumers (dependent on local variables), the energy usage schedule of consumers, and energy-management signals from economic and operation centers [298]. A smart scheduler can then act autonomously to collect data and control devices without active consumer engagement [298]. In so doing, it smooths a household’s demand curve and optimizes energy costs [298].

In essence, a smart scheduler is a two-way communication device that synthesizes cost data and appliance profiles to ensure that a household’s aggregate consumption does not exceed a predefined limit [298]. The scheduler can shed or defer loads by sending “off,” “on,” “pause,” and “resume” signals to flexible appliances [298]. Hourly profiles can be developed from a month of historical appliance data to determine which appliances a household uses [298]. Finally, a smart scheduler can act as a load aggregator with the potential to communicate with time-dependent retail and wholesale markets [298].
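
A greedy sketch of such a scheduler is shown below: flexible appliances are admitted in priority order only while the household stays under its predefined aggregate limit, and the rest receive a "pause" signal. The appliance list, priorities, and limit are hypothetical, and the cited work describes a richer formulation.

```python
def schedule_hour(base_load_kw, limit_kw, flexible_appliances):
    """Greedy sketch: run flexible appliances (highest priority first) only while
    the household's aggregate consumption stays under the predefined limit."""
    commands, total_kw = {}, base_load_kw
    for name, demand_kw, priority in sorted(flexible_appliances, key=lambda a: -a[2]):
        if total_kw + demand_kw <= limit_kw:
            commands[name] = "on"
            total_kw += demand_kw
        else:
            commands[name] = "pause"        # deferred to a later hour
    return commands, total_kw

# Hypothetical hour: 1.2 kW of must-run load under a 3 kW aggregate cap.
flexible = [("dishwasher", 1.5, 2), ("EV charger", 3.3, 3), ("dryer", 2.0, 1)]
print(schedule_hour(1.2, 3.0, flexible))
```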

Perhaps the most common of smart home devices are smart outlets, switches, and lights. Smart outlets are used to cut off phantom loads at the source, without the inconvenience of unplugging appliances. Smart switches can operate by a button, or remotely through apps or a timer [299]. Motion sensors can detect room occupancy and switch lights on and off accordingly [297]. In addition to energy-efficient bulbs (see [300]), there are smart bulbs that can save energy by customizing brightness or color to a set schedule [301]. Although smart home devices are more expensive than their traditional alternatives, their annual energy savings are a counterbalance to the initial investment. Within smart homes, these devices offer not just cost savings but also a level of convenience that many homeowners may wish to have. Because of this, the rationale for adoption is not strictly based upon a return-on-investment (ROI).

In commercial and industrial applications, however, the investment decision is often strictly based upon ROI. Nevertheless, these sectors (as discussed in Sects. 4.4.2 and 4.4.1) often have larger, more energy-intensive equipment that make it easier to rationalize the investment of network-enabled sensors and actuators and their associated energy savings. Given that at least 40% of electricity generation is consumed in commercial and residential buildings, it is important to invest in energy-efficient systems that are also capable of participating in demand response [302].

3.1.5.3 Network-Enabled Heating and Cooling Appliances

While smart outlets, switches, and lights can go a long way towards reducing demand-side energy consumption, devices that serve a heating or cooling function are the most energy intensive. Reconsider Fig. 3.9. There are clear power consumption spikes associated with refrigerators, kettles, toasters, heaters, and ovens. Furthermore, air conditioners alone account for approximately 6% of US electricity consumption and about $49 billion in energy costs.

The appliance marketplace has recognized the potential for developing “smart appliance” versions of these devices. Some appliances have an established market for smart products, while the markets for others are just forming. For example, smart refrigerators have a broad offering of features, specifications, and efficiency capabilities [301]. Their price depends on variations in size, doors, cooling features, freezing compartments, displays, efficiency, and power usage.

Smaller devices such as toasters and kettles are emerging as niche tech products. A smart kettle or coffee maker can connect to a smart home hub or to a smartphone app via WiFi, 3G, and 4G to program water temperatures [303, 304]. While the kettle does not draw less energy, the scheduling feature has the opportunity to reduce unneeded energy usage. Similarly, a smart toaster can connect to an app on the user’s phone through Bluetooth, enabling remote adjustment of the cooking timer and notifications when the toast is ready [305–307]. Smart ovens are another appliance that can connect to smartphone apps to schedule cooking, measure cooking temperatures, and engage either pre-set or customized cooking programs [308]. There also exist smart all-in-one filter, heating, and cooling devices that are able to measure and transmit the temperature and air quality of a room to a mobile app. These values can then be scheduled and controlled in several automated and semi-automated modes [309, 310].

In all these cases, network-enabled heating and cooling appliances are automated with sensing and software capabilities to optimize their control and performance. Once network-enabled, these devices can be operated remotely to run at the best possible time regardless of the user’s presence. For example, electrified HVAC systems have used a technique called pre-cooling [311]. Instead of cooling a building at the hottest time of the day, the building can be cooled to an artificially low temperature earlier so that it warms but remains at a comfortable temperature during the peak.

Such a technique dramatically reduces electricity consumption because air conditioners are more energy intensive at high ambient temperatures [312]. This technique can be further enhanced with a system that receives and responds to (readily available) weather predictions [311]. Furthermore, smart thermostats can use georeferencing to match the global positioning system (GPS) on a homeowner’s phone to the home’s thermostat [313]. The device then activates the air-conditioning system based on the phone’s proximity and expected time of arrival, and it deactivates the air-conditioning system otherwise.
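
The two ideas above can be combined in a few lines of setpoint logic: relax the setpoint when the occupant's phone is outside a home radius, and pre-cool the hour before a forecast peak. The thresholds, setpoints, and coordinates below are illustrative assumptions rather than any vendor's implementation.

```python
from math import radians, sin, cos, asin, sqrt

def km_between(lat1, lon1, lat2, lon2):
    """Haversine distance between the phone and the home, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def cooling_setpoint_c(hour, peak_hours, phone_km, comfort_c=24.0,
                       precool_c=22.0, away_c=28.0, home_radius_km=5.0):
    """Pre-cool ahead of the forecast peak; relax the setpoint when nobody is nearby."""
    if phone_km > home_radius_km:
        return away_c                       # geofence: occupant is far away
    if hour + 1 in peak_hours:
        return precool_c                    # pre-cool the hour before the peak
    return comfort_c

# Hypothetical home and phone coordinates; forecast peak from 16:00 to 18:00.
distance_km = km_between(42.36, -71.06, 42.34, -71.10)
print(round(distance_km, 1),
      cooling_setpoint_c(hour=15, peak_hours={16, 17, 18}, phone_km=distance_km))
# About 4.0 km away and one hour before the peak: pre-cool to 22.0 C.
```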

3.1.5.4 The Electrification Potential of eIoT

Beyond these traditional electrical devices, it is important to recognize the electrification potential of eIoT. Figure 1.3 shows a Sankey diagram for the American energy system. Electricity consumption accounts for just 12.6 quads of the 97.3 quads total. This means that in order to make radical improvements in decarbonization, many of the energy uses that rely directly on fossil fuels must first be electrified so that they will have the potential to be powered by renewable energy sources. In this regard, the transportation sector with 27.9 quads of energy consumption (28.7% of the US total) is the first candidate for electrification. Of this quantity, electrified transportation accounts for only 0.03 quads (or 0.1% of the transportation total). The manufacturing sectors also consume 24.5 quads of energy (25.2% of the US total). Of this quantity, electricity for manufacturing accounts for only 3.19 quads (or 13.0% of the industrial total). Finally, the residential sector consumes 11.0 quads of energy (11.3% of the US total). Of this quantity, electricity for residential use accounts for only 4.8 quads (or 43.6% of the residential total). In all of these cases, a switch from fossil fuels to electricity as an energy source can have a large decarbonization impact [24].

3.1.5.5 Net-Zero Homes: Electrification of Residential Energy Consumption

In residential applications, eIoT can directly support electrification to achieve homes with net-zero carbon emissions. Returning to Fig. 1.3, the residential consumption of natural gas and petroleum accounts for 5.56 quads of energy, much of which goes to heating applications. Rather than using fossil-fuel furnaces and boilers, net-zero homes [314] often use air-source [314] and water-source [314] heat pumps with electricity as their energy supply.

From an energy balance perspective, heat pumps are often twice as efficient as simple resistive electric heating, boilers, or furnaces [315]. These energy efficiencies translate directly into significant cost savings as well. Furthermore, recent generations of heat pump technology have embraced IoT [316]. They can either be controlled directly from a smartphone or interfaced with a smart thermostat. Such implementations allow homeowners to tune heating schedules so that they coincide with their home (or even room) occupancy for added savings. The introduction of smart heat pumps also facilitates their usage in active demand-response schemes and their coordination with rooftop solar energy.
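
As a rough illustration of this efficiency claim, the sketch below compares the electricity needed to deliver the same heat with resistive heating (coefficient of performance, COP, of 1) and with a heat pump; the assumed COP, heat demand, and electricity price are hypothetical round numbers rather than figures from the cited studies.

```python
def electricity_kwh(heat_demand_kwh, cop):
    """Electricity needed to deliver a given amount of heat at a given COP."""
    return heat_demand_kwh / cop

heat_demand = 10_000.0  # hypothetical annual space-heating demand, kWh (thermal)
price = 0.15            # hypothetical electricity price, $/kWh

for label, cop in [("resistive heating (COP 1.0)", 1.0), ("heat pump (COP 2.5)", 2.5)]:
    kwh = electricity_kwh(heat_demand, cop)
    print(f"{label}: {kwh:,.0f} kWh/yr, ${kwh * price:,.0f}/yr")
```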

3.1.5.6 Net-Zero Industry: Electrification of Industrial Energy Consumption

eIoT can have a similar role in the electrification of industrial energy consumption. Unlike residential applications, the electrification of industrial energy usage must (1) strictly follow a return-on-investment (ROI) rationale and (2) match the required manufacturing processes of the industrial facility. Nevertheless, many industrial sectors have already invested significantly in IoT technologies for supply chain management. Extending these efforts towards energy management is a logical next step.

In 2010, the US Department of Energy conducted a manufacturing energy consumption survey detailing how much of each type of energy was consumed for all major manufacturing sectors [23, 317, 318]. Figure 3.10 shows the associated Sankey diagram for the manufacturing sector in aggregate. It shows a heavy reliance on fossil fuels for steam generation and process heating [23]. In many cases, these fossil-fuel options can be replaced with their electrified alternatives. Figures 3.11 and 3.12 summarize the cost and payback periods of such electrification alternatives for a wide variety of manufacturing sectors. Furthermore, these proposed electrification technologies should be considered as an integral part of eIoT and lend themselves to energy-management practices within the manufacturing plant and the electric grid as a whole [24].

Fig. 3.10
figure 10

Sankey diagram for the energy consumption (TBtu) of the US manufacturing sector [23]

Fig. 3.11
figure 11

Summary of manufacturing sector electrification alternatives (adapted from [24])

Fig. 3.12
figure 12

Summary of manufacturing sector electrification alternatives (adapted from [24])

3.1.5.7 Connected, Automated, and Electrified Multi-Modal Transportation

Finally, the transportation sector represents one of the most prominent applications of eIoT. This is due in large part to three fundamental technological shifts that have the potential to transform the sector as a whole [319]: connected automation, electrification, and IoT-based ride sharing.

First, vehicles (of all types) are increasingly outfitted with connectivity solutions so as to become a veritable part of IoT [320–323]. At first, vehicle connectivity was simply for emergency roadside assistance and extensions of the driver’s mobile phone capabilities [324, 325]. However, connectivity solutions have greatly expanded in the context of vehicle automation. Adaptive cruise control, where a vehicle automatically adjusts its speed in congested conditions to match the fluctuating speed of the car in front, has given rise to a plethora of vehicle-to-vehicle connectivity applications [324–327].

Whereas the first application of adaptive cruise control was driver convenience, it is now being developed for its potential environmental benefits. Research is underway to enable automated vehicle platoons, where vehicles automatically follow each other at short range so as to reduce overall road congestion and reduce fuel consumption through aerodynamic drafting. Such automated solutions motivate the need for vehicle-to-infrastructure connectivity as well. Beyond highway driving, there remains a significant need to reduce traffic congestion, improve air quality, and reduce energy consumption on congested city roads [328, 329].

One important challenge is the coordination of road intersections. Traffic light scheduling, whether it is done statically or dynamically in response to road congestion, has long been an area of extensive research [330–332]. And yet, solutions like traffic lights retain a driver-in-the-loop control paradigm. More recent research envisions the elimination of traffic lights so that the intersection itself can coordinate the crossing of vehicles and potentially even pedestrians [333–336]. Vehicle automation has been classified into five levels of technology development, with some analysts predicting full Level 5 automation by 2030 [337–340].

It is important to recognize that these developments toward connected automation exist in all modes of transport. Planes and trains have been automated to varying degrees for decades [46, 341–343], while buses and trucks are directly benefiting from developments in the car market [344]. Nevertheless, the shift toward connected and automated road vehicles is important because of its share of overall vehicle miles traveled [340] and because of the difficulty of its coordination and control problems.

As a second fundamental shift in technology, electrified transportation greatly complements the benefits of connected and automated vehicles. As mentioned in Chap. 1, the electrification of transportation is one of the five identified energy-management change drivers. Electrified transportation supports energy consumption and CO2 emissions reduction targets [41, 345–348]. Relative to their internal combustion vehicle counterparts, EVs, whether they are trains, buses, or cars, have a greater “well-to-wheel” energy efficiency [348, 349]. They also have the added benefit of not emitting any carbon dioxide in operation, instead shifting their emissions to the existing local fleet of power generation technology [42]. Furthermore, the technical, economic [350–352], and social barriers [82, 353] to their adoption have eased. Despite continuing challenges in battery technology [354–356], a wide variety of battery chemistry options have emerged, leading to greater capacity and, subsequently, longer vehicle ranges [357–359]. Fast chargers have also been introduced into the market that allow 80% of the battery capacity to be charged in 30 min [360–362]. From an economic perspective, both plug-in hybrid EVs and battery EVs show significant learning rates and cost improvements over time [73, 352]. There are also significant improvements in public attitudes [363–366] and social transition rates [82, 349, 353, 367]. As a result, a number of optimistic market penetration and development studies have emerged for a wide variety of geographies [368–374]. Consequently, supportive policy options have taken root worldwide [363, 375, 376].

The true success of electrified (multi-modal) vehicles depends on their successful integration with the infrastructure systems that support them. From a transportation perspective, plug-in electric cars may have only a short range of 150 km [365], and it may still require several hours to charge them [377]. This affects when a vehicle can begin its journey and the route it intends to take. From an electricity perspective, the charging loads can draw large amounts of power that may exceed transformer ratings, cause undesirable line congestion, or cause voltage deviations [378–381]. These loads may be further exacerbated temporally by similar charging patterns driven by similar work and travel lifestyles, or geographically by the relative sparsity of charging infrastructure in high-demand areas [380]. This transportation-electricity nexus (TEN) [31, 89–91, 382] requires new assessment models whose scope includes the functionality of both systems. Recent works have also proposed axiomatic design as a means to model large systems such as transportation and manufacturing systems [383–387]. As the complexity of these systems increases, it becomes more relevant to consider their resilience, with a particular focus on flexibility and reconfigurability [382].

Relatively few studies have considered this coupling from an operations management perspective. A simplified study based on the city of Berlin has been implemented on the multi-agent transport simulation (MATSim) [362]. Meanwhile, the first full-scale study was completed for the city of Abu Dhabi [379, 388–390] using the clean mobility simulator [391]. A third study focused on the differences between conventional plug-in and online (wireless) EVs [31]. More recently, a performance assessment methodology for multi-modal electrified transportation has been developed that integrates the methodologies of previous studies [91]. An older review compares a variety of open-source transportation modeling tools [392].

IoT-based ride sharing, as the third fundamental shift in transportation technology, has the potential to dramatically intertwine vehicle automation and electrification. It expands the transportation options available to travelers so that even incumbent paradigms of vehicle ownership are being questioned [393–395]. Travelers, particularly in large cities, are now more likely to rely on a combination of transportation modes to arrive at their destination. In some cities, IoT-based ride sharing has already shifted transportation behavior away from the traditional use of private cars [393, 395]. This work, however, argues that IoT-based ride sharing is likely to converge with eIoT-based energy management because their underlying decisions are fundamentally coupled.

Consider an EV rideshare fleet operator [379, 388–390]. They must dispatch their vehicles like any other conventional fleet operator, but with the added constraint that the vehicles are only available after the required charging time. Once en route, these vehicles must choose a route subject to the nearby online (wireless) and conventional (plug-in) charging facilities. In real-time, however, much like gas stations, these charging facilities may have a wait time as customers line up to charge. Instead, the EV rideshare driver may opt to charge elsewhere. Once a set of EV rideshare vehicles arrives at a conventional charging station, the EV rideshare fleet operator may wish to implement a coordinated charging scheme [45, 80, 81, 396–404] to limit the charging loads on the electrical grid. The local electric utility may even incentivize this EV rideshare operator to implement a “vehicle-to-grid” scheme [82, 362, 405] to stabilize variability in grid conditions.
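
As an illustration only, the sketch below implements a simple greedy heuristic in the spirit of such coordinated charging: in each time slot, the most urgent (earliest-departing) vehicles are charged first until a hypothetical feeder limit is reached. It is not any of the cited algorithms, and all fleet data are hypothetical.

```python
def coordinated_charging(vehicles, feeder_limit_kw, charger_kw=50.0, slot_hours=0.25):
    """Greedy slot-by-slot charging schedule that respects an aggregate feeder limit.

    vehicles: list of dicts with 'id', 'needed_kwh', and 'departure_slot'.
    Returns a list of per-slot dicts {vehicle_id: charging power in kW}.
    """
    schedule, slot = [], 0
    while any(v["needed_kwh"] > 1e-9 for v in vehicles):
        # Earliest-departing vehicles are served first.
        pending = sorted(
            (v for v in vehicles if v["needed_kwh"] > 1e-9 and v["departure_slot"] > slot),
            key=lambda v: v["departure_slot"],
        )
        available, allocation = feeder_limit_kw, {}
        for v in pending:
            power = min(charger_kw, available, v["needed_kwh"] / slot_hours)
            if power <= 0:
                break
            allocation[v["id"]] = power
            v["needed_kwh"] -= power * slot_hours
            available -= power
        schedule.append(allocation)
        slot += 1
        if not pending:  # remaining vehicles have already departed
            break
    return schedule

fleet = [
    {"id": "EV1", "needed_kwh": 30.0, "departure_slot": 8},
    {"id": "EV2", "needed_kwh": 20.0, "departure_slot": 4},
    {"id": "EV3", "needed_kwh": 40.0, "departure_slot": 12},
]
for t, allocation in enumerate(coordinated_charging(fleet, feeder_limit_kw=100.0)):
    print(t, allocation)
```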

These five transportation-electricity nexus operations management decisions are summarized in Table 3.1 [31, 89]. The integration of such decisions in a coordinated fashion ultimately forms an intelligent transportation-energy system (ITES) [389]. Naturally, significant research remains on how to best integrate these decisions so that they achieve operational benefits in both the transportation and electric power systems. More recently, studies have focused on the design of smart cities and their core infrastructures such as transportation, district heating and cooling (DHC), and electric power grid. Specifically, hetero-functional graph theory has been introduced as a more advanced means of studying coupled infrastructures such as the TEN [406, 407].

Table 3.1 Intelligent transportation-energy system operations decisions in the transportation-electricity nexus [31]

3.1.6 Network-Enabled Physical Devices: Conclusion

This section has provided an extensive discussion of the state of the art in network-enabled physical devices, whether they are network-enabled sensors or actuators in the control loop. In order to organize the discussion, Fig. 3.2 was used to distinguish between primary and secondary electric power system variables. In all, four major categories of network-enabled devices were discussed.

  • Section 3.1.2 addressed the (traditional) primary variables in the transmission system.

  • Section 3.1.3 discussed the concerns around the secondary variables associated with wind, solar, and natural gas generation.

  • Section 3.1.4 returned to the primary variables in the distribution system to address smart meters and other “grid modernization” technologies.

  • Section 3.1.5 discussed smart homes, industry, and transportation in the context of demand-side secondary variables.

3.2 Communication Networks

3.2.1 Overview

The tremendous heterogeneity of network-enabled devices described in the previous section demands advancements in communication networks to route sensed information to control and decision-making entities. Because these devices vary greatly in size, power consumption, use case, and on-board computing, new types of networks that enable two-way flows of information will emerge. Consequently, these networks will differ in scope and ownership.

Figure 3.13 shows several network areas relevant to the electric power system. Starting at the center of the grid, utility networks are the communication backbone for grid operations. Wide-area networks (WAN), as the largest in geographical scope, encompass centralized generation, transmission, and substations under the utility’s domain. Moving “downstream” from the substations, neighborhood area networks (NAN) are of intermediate scope and use public and commercial telecommunication networks throughout the distribution network. The NAN serves AMI, meter aggregations, DER, and microgrids, which can also include utility participation. Finally, local area networks (LAN) address the private communication scope of residential, commercial, and industrial entities. These networks can encompass subnetworks that connect to a NAN or directly to the public internet [25]. The following definitions apply to the rest of this discussion:

Fig. 3.13
figure 13

LAN, NAN and WAN networks across the electric power system (adapted from [25])

Definition 3.3 (Commercial Telecommunication Network)

A telecommunication network that is owned and operated by a commercial telecommunication company.\(\hfill \blacksquare \)

Definition 3.4 (Private Network)

A network that is owned and operated by a private entity, be it residential, commercial, or industrial. In scope, a private network may be a WAN, NAN, or LAN. It may use interoperable, standard, or proprietary technologies.\(\hfill \blacksquare \)

Definition 3.5 (Proprietary Network)

A network that is not based upon an interoperable standard. Note that some networks may use open standards but are not interoperable because the standards themselves are not interoperable.\(\hfill \blacksquare \)

The development of mature eIoT communications is likely to be a gradual migration process. Traditionally, the power system has used private networks within the jurisdiction of grid operators and utilities. These include data transmitted over wired networks (e.g., power-line carrier and fiber optics) as well as wide-area wireless networks such as SCADA (supervisory control and data acquisition). However, with “grid modernization,” commercial telecommunication networks are increasingly playing a role.

Cellular data networks, and in particular 4G and long-term evolution (LTE), have the potential to transmit relatively high bandwidth data across long distances. Furthermore, WiMAX networks can provide connectivity at the grid periphery at the neighborhood length scale. Finally, a large part of eIoT will require local area networks, be they wired Ethernet, WiFi, Z-Wave, ZigBee, or Bluetooth. Naturally, industrial energy-management applications continue to leverage preexisting industrial network infrastructure in addition to these local area network options. Technological developments in communication networks are most likely to occur as a gradual migration rather than a swift shift from one technology to another. Furthermore, these developments are likely to occur in parallel so as to become complementary and mutually co-existing.

  • Tables 3.2, 3.3, and 3.4 summarize the eIoT communication networks discussed in this section.

    Table 3.2 Communication networks for grid operators and utilities
    Table 3.3 Telecommunication networks
    Table 3.4 Local area networks
  • Section 3.2.2 discusses grid operator and utility networks.

  • Section 3.2.3 discusses telecommunication networks.

  • Section 3.2.4 discusses local area networks.

3.2.2 Grid Operator and Utility Networks

Grid operator and utility networks use a range of legacy communication systems and technologies that are very much a product of the regulated electric power industry from several decades ago [428]. Nevertheless, technological developments in data acquisition, data analysis, and renewable energy generation are now pressuring grid communication systems to evolve and adapt. For example, the variability of renewable energy generation (discussed in Chap. 2) requires automatic control whose data rates are faster than what legacy communications systems are able to provide. This section highlights some of these traditional technologies so as to contextualize the discussion of eIoT communication technologies.

This section categorizes grid operator and utility communication into wired and wireless networks, each with their respective trade-offs and applicability within the electric system.

  • For wired communications, power-line carrier networks and fiber optics are covered in Sect. 3.2.2.1 [412]. Wired communications are relatively reliable and secure and very much represent the historical default for electrical utilities. However, their widespread deployment is associated with high rental fees and installation costs [106, 412]. Grid operators and utilities have also made extensive use of wireless networks, which in comparison have lower cost but also lower reliability. Their flexibility and ease of installation, however, often support their adoption.

  • Section 3.2.2.2 is devoted to SCADA-based wide-area monitoring systems as a traditional wireless power grid communication network.

  • Section 3.2.2.3 then delves into the emerging world of low power wide-area networks (LPWAN).

  • Section 3.2.2.4 discusses the wireless smart utility network (Wi-SUN) as a new development. Other types of wired and wireless communication networks are discussed more deeply in the context of commercial telecommunication and local area networks.

3.2.2.1 Wired Communications: Power-Line Carriers and Fiber Optics

Grid operators and utilities have used power-line carriers and fiber optic cables in transmission and neighborhood distribution applications. Over numerous decades, these technologies have undergone several upgrades from their original implementations, including from analog to digital communication [411]. In the past, the primary need for wired communication was fairly limited to applications such as timely and efficient fault detection. This meant that communication systems needed to adhere to stringent cost rationales. A common strategy was to make use of existing utility-owned power poles or to rent telecommunication poles to route information back to a control center [411]. This often required wired communication systems to match the radial topology of the underlying physical infrastructure.

Power-line carrier (PLC) communication uses power cables as a medium for data signal transmission [412]. It falls into four categories:

  • Ultra-narrow band power-line communication (UNB-PLC)

  • Narrowband power-line communication (NB-PLC)

  • Quasi-band power-line communication (QB-PLC)

  • Broadband power-line communication (BB-PLC)

Depending on the PLC technology, data transfer speeds range from 100 bps to 1.8 Gbps [409, 423]. The X-10 PLC protocol was influential in establishing narrowband PLC communication in the USA [409]. Since then, today’s NB-PLC standards include PoweRline Intelligent Metering Evolution (PRIME) (ITU-T G.9904), G3-PLC (ITU-T G.9903), IEEE 1901.2 2013, and ITU-T G.hnem [409]. G3-PLC smart-grid applications have a 1.3–8 km range [409]. Depending on the modulation type, this PLC could have a bandwidth of 30–35 kilobits per second (kbps) or 100 kbps [409]. PLC technologies are used in a diverse array of applications including home, transmission, and connected energy systems [409, 429]. For example, the G3-PLC standard has been used experimentally in the mid-voltage range with several topologies [429]. It has also been used to enable “smart grid” technologies such as AMI, vehicle-to-grid communications, demand-side management, and remote fault detection [408]. Broadband PLC, in particular, is suitable for local area networks (LANs) and AMI applications in the smart grid because it has higher bandwidth (but shorter range) as compared to narrowband PLC [409, 423].

In recent years, utilities have applied optical fiber communication as an upgrade to aging infrastructure [412]. Optical fiber is mainly used as a “backbone” distribution communications network, in what is called fiber-to-pole networks [412]. Optical fiber is characterized by high transfer rates, good stability, strong anti-interference ability, flexible network configuration, large-system capacity, and high reliability [412]. The data rate of optical fiber ranges from 155 megabits per second (Mbps) to 40 Gbps [410]. However, its implementation is a large investment because it requires relatively expensive testing and highly skilled installation and maintenance [411, 412].

The wide-area deployment of wired technologies (that is, PLC and optical fiber) is costly but does provide the benefits of communications capacity, reliability, and security [412]. Some utilities have also installed specialized communication networks according to their specific technical and economic needs. Such specialized lines are mainly composed of twisted-pair cable and provide for small capacity, high reliability, low transfer rate, and moderate anti-interference for a small investment [412].

3.2.2.2 SCADA Networks and Wide-Area Monitoring Systems

SCADA was developed in the 1950s because utilities needed a way to gather power output data from the scattered geography of the electric grid’s sensing endpoints to conduct load-frequency control and economic dispatch [101]. SCADA systems now communicate commands and system state data back and forth between utility control stations and individual substations within several seconds [428]. Due to the expansive geographical area covered by the transmission system, monitoring is a large task, and has special sensor communication requirements. SCADA systems have increased “openness” by connecting to wide-area monitoring systems (WAMS) and other networks through proprietary connections and the Internet [430]. This point is emphasized since connection to the internet is an important stepping stone in the development of eIoT.

The SCADA system in actuality uses a combination of wired and wireless technologies. Wired options include telephone lines and optical fiber; wireless alternatives include microwave and ultra-high frequency (UHF) radio [19]. The choice of implemented technology depends on an individual system’s needs for data rate, cost, and data security [19]. With traditional technologies, the data rate is typically 9.6–115.2 kbps [413]. SCADA protocols are based on IEEE C37.1 for the communication between the remote terminal unit (RTU) and the master terminal unit (MTU) [19]. Traditionally, SCADA allows for serial communication between master and remote terminal units, but newer hybrid protocols allow peer-to-peer communication [272, 413]. These protocols include Modbus, DNP3, PROFIBUS (standardized in IEC 61158 and IEC 61784), DeviceNet, ControlNet, and Fieldbus [272].

The advantages and disadvantages of operating a legacy SCADA system are typical of any aging communication technology. On the one hand, the operating costs are small relative to the initial investment in infrastructure. On the other, the bandwidth and computational capability is relatively low [272]. Furthermore, as SCADA networks have developed, they have suffered unintentional negative consequences. Since the 1990s, utilities began transitioning from closed proprietary networks to interconnected and open internet-based networks [430]. The push for open communication protocols has increased network accessibility and consequently the potential for connection to other networks [413]. This is also an effect of custom networks being standardized so as to be sold as off-the-shelf SCADA systems [430]. As proprietary networks are turned into open networks, and peer-to-peer communication among SCADA devices increases, cybersecurity concerns have naturally increased [413].

In addition to SCADA, WAMS are being deployed as a form of complementary sensor network. A WAMS is a collection of hundreds of phasor measurement units (PMUs) at various locations in the electrical grid [414]. PMUs have faster data collection rates than SCADA systems, with 30–60 data points per second as compared to SCADA’s 1 data point per 1–2 s [431]. Data communications specifications are provided by the IEEE C37.118-2005 standard [414]. A phasor data concentrator (PDC) aggregates measurements from local PMUs through a local communication network, and then routes the data to a utility’s core network using proprietary networks [414]. Data transfers from the PMU to the PDC are required to have minimal latency for an efficient smart grid [414]. PMU data are produced continuously and synchronously and are therefore delay-sensitive [414]. Consequently, these data transfers must be intelligently scheduled to manage the communication load and maintain quality requirements [414].
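
A quick calculation makes the difference in communication load concrete, using the measurement rates quoted above and a hypothetical per-measurement payload size.

```python
SECONDS_PER_DAY = 24 * 3600

def daily_measurements(rate_hz):
    """Number of measurements produced per day at a given reporting rate."""
    return rate_hz * SECONDS_PER_DAY

pmu_rate = 60.0        # PMU: 30-60 measurements per second (upper end used here)
scada_rate = 1.0 / 2.0 # SCADA: roughly 1 measurement every 1-2 s (lower end used here)
payload_bytes = 64     # hypothetical payload per measurement frame

for name, rate in [("PMU", pmu_rate), ("SCADA", scada_rate)]:
    n = daily_measurements(rate)
    print(f"{name}: {n:,.0f} measurements/day, ~{n * payload_bytes / 1e6:.1f} MB/day")
```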

3.2.2.3 LPWAN Commercial Wireless IoT Technologies

Due to power constraints on remote IoT sensors and actuators, IoT devices need to operate in an energy-efficient manner. Recently, commercial applications to support wide-area communication have emerged. Low power wide-area network (LPWAN) is an umbrella term that encompasses technologies and protocols that support wide-area (> 2 km) communication and consume low power over long periods of time [432]. Data rates for these devices range from 10 bps to a few kbps [433]. LPWAN devices should have the following characteristics [433]:

  • Be cheap to deploy

  • Operate on very low power

  • Function when required, preferably in star topologies

  • Ensure secured data transfer

  • Have robust modulation.

LPWAN networks will generally include devices, a network infrastructure, protocols, controllers, network and application servers, and a user interface [433]. This service can be provided as a single package or through coordination among multiple providers [433].

LoRa, short for long range, is a physical-layer LPWAN technology by Semtech Corporation [434]. The system works in the 902–928 megahertz (MHz) frequency band in the USA and in the 863–870 MHz band in Europe [418]. The LoRa PHY layer is proprietary, while the LoRaWAN protocol is an open standard managed by the LoRa Alliance, which has over 300 members [415, 418, 433]. LoRa chips can be produced by various silicon providers to avoid a single source [433]. LoRa networks follow a star topology to relay messages between end-devices and a central network node [415, 416, 418]. Long-range wide-area network (LoRaWAN) radios are used with low power devices to support low bandwidth and infrequent (over 128 s) communication over wide areas [415, 416, 432]. This drives down the cost and extends the battery life of the devices. LoRaWAN devices draw no more than 2 μA while resting and 12 mA when listening [415, 416]. LoRaWAN can use a bandwidth of 125 kHz, 250 kHz, or 500 kHz depending on the region, application, or frequency [435]. The data rates can also be determined based on the frequency chosen [435]. These data rates typically range from 0.3 to 27 kbps [417]. It uses the AES-128 algorithm, similar to the IEEE 802.15.4 standard [435]. LoRaWAN offers two security layers, one for the network layer and one for the application layer [433]. It offers a range of 2–5 km in cities and up to 15 km in suburban areas [417]. Another LPWAN technology is Symphony Link by Link Labs, a proprietary MAC layer built on top of the LoRa physical layer. This technology adds vital connectivity features to LoRaWAN such as guaranteed message receipt [436]. Applications using LoRa technology in the power industry include radiation leak detection from nuclear power plants [437] and air pollution monitoring for thermal power plant systems [438].
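
Using the resting and listening currents quoted above, together with hypothetical values for transmit current, transmit duration, reporting period, and battery capacity, a simple average-current estimate illustrates why such duty-cycled devices can last years on a small battery.

```python
def battery_life_years(capacity_mah, avg_current_ma):
    """Battery life in years for a given capacity and average current draw."""
    return capacity_mah / avg_current_ma / (24 * 365)

# Duty-cycled LoRaWAN end-device, one uplink every 15 minutes (hypothetical profile).
period_s = 15 * 60
tx_s, rx_s = 1.5, 2.0  # hypothetical transmit and receive-window durations, s
tx_ma = 40.0           # hypothetical transmit current, mA
rx_ma = 12.0           # listening current quoted in the text, mA
sleep_ma = 0.002       # 2 uA resting current quoted in the text

avg_ma = (tx_s * tx_ma + rx_s * rx_ma + (period_s - tx_s - rx_s) * sleep_ma) / period_s
print(f"average current: {avg_ma:.3f} mA")
print(f"estimated life on a 2400 mAh cell: {battery_life_years(2400, avg_ma):.1f} years")
```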

NB-IoT is a narrowband communication system standardized by the Third Generation Partnership Project (3GPP) and launched in 2016 [439]. It is used for low power devices with infrequent (over 600 s) communication [415, 439]. It supports a star topology [415, 439]. It can operate in either the GSM or LTE spectrum [415, 439]. NB-IoT can be deployed in three operation modes: (1) stand-alone using GSM, (2) in-band, where it operates within the bandwidth of a wide-band LTE carrier, and (3) within the guard-band of an existing LTE carrier [439]. Since NB-IoT is based on LTE, hardware reuse and spectrum sharing are possible without coexistence issues [439]. NB-IoT is expected to ensure long battery life (up to 10 years) and to support over 52k low-throughput devices [439]. NB-IoT can cover a range of < 25 km and offers high accuracy rates [422]. The expected latency for this system is < 10 s for 99% of the devices [439]. NB-IoT systems are used in applications such as smart metering (gas, water, and electricity), smart parking, smart street lighting, and pet tracking [440, 441]. The NB-IoT forum comprises over 500 members, contributors, and developers [441].

SigFox was launched in 2009 by the French company SigFox as the first LPWAN application for IoT. Compared to LoRa, SigFox is not nearly as widely used in the USA because its frequency band (900 MHz) is very prone to interference and its transmission time (≈3 s) is greater than the maximum transmission time of 0.4 s allowed by the Federal Communications Commission (FCC) [420]. The SigFox physical layer uses an ultra-narrowband technology with a standard radio transmission method: binary phase-shift keying (BPSK) in the uplink and frequency-shift keying in the downlink [418, 419]. The SigFox technology is suitable for applications that require small and infrequent transmissions [419]. The first releases were unidirectional, but recent versions support bidirectional communication [418, 419]. SigFox offers data rates of 100 bps in the uplink with a maximum payload of 12 bytes [417]. It claims to support about a million connected objects with a coverage range of up to 50 km [419]. SigFox has not been as widely adopted, especially in the USA, due to its limiting transmission characteristics, such as a restriction on the number of packets transferred by a device to only 14/day [417]. In the electricity and utility industry, SigFox is used to monitor back-up power supply systems and smart metering (gas, electricity, and water) and for electric pole surveillance [442].

Lastly, Ingenu, formerly known as On-Ramp Wireless, works in the 2.4 GHz frequency band and has a robust physical layer that allows it to operate over wide areas [418]. It offers higher data rates compared to LoRa and SigFox [417]. Specifically, it can transmit up to 624 kbps in the uplink and 156 kbps in the downlink [417]. Its coverage is, however, shorter (around 5–6 km), and it consumes much more energy [417]. Ingenu is based on random phase multiple access (RPMA) [417, 418].

3.2.2.4 Wireless Smart Utility Network

The wireless smart utility (ubiquitous) network (Wi-SUN) is a mesh topology network supported by the Wi-SUN Alliance. The Wi-SUN Alliance was founded in 2012 and comprises 130 members, including product and silicon vendors, software companies, utilities, government institutions, and universities [443]. The goal of the Wi-SUN Alliance is to promote open industry standards for wireless communication networks for both field area networks (FAN) and local area networks (LAN) [443, 444]. It also defines specifications for the testing and certification of said networks to enable multi-vendor interoperable solutions [443]. The Wi-SUN network was developed according to the IEEE 802.15.4g standard, which defines the physical layer (PHY) and medium access control (MAC) layer specifications [445], along with TCP/IP and related standard protocols.

Applications for the utility include the provision of field area networks (FANs) for smart metering infrastructures, distribution automation, and home energy management. The Wi-SUN coverage range is 2–3 km, making it suitable for NANs [446]. AMI systems can use Wi-SUN technology for multiple meters [446]. Wi-SUN networks are usually laid out in a mesh topology, although they support both star and star-mesh hybrid topologies [415]. This allows for enough redundancy in the network to limit single points of failure [415]. The network can be deployed on both mains-powered and battery-operated devices [415]. Devices that support mesh networks transmit over a short range and are suitable for applications that require distributed computing. Wi-SUN mesh networks are self-forming. That is, whenever a new device is added, it immediately finds peers to communicate with, and whenever a device disconnects, the other devices in the peer network reroute accordingly [415]. The short-range feature allows for faster and more consistent data rates. Wi-SUN devices can perform frequent (as often as every 10 s) and low-latency communication, and draw less than 2 μA when resting and 8 mA when transmitting [415].
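
The self-forming and self-healing behavior described above can be illustrated with a toy mesh: when a node disconnects, the remaining devices simply recompute a path to the collection point. This generic breadth-first-search sketch does not model the actual Wi-SUN routing protocol, and all node names are hypothetical.

```python
from collections import deque

def shortest_path(adjacency, source, target):
    """Breadth-first search over an undirected mesh; returns a node list or None."""
    parents, frontier = {source: None}, deque([source])
    while frontier:
        node = frontier.popleft()
        if node == target:
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for neighbor in adjacency.get(node, ()):
            if neighbor not in parents:
                parents[neighbor] = node
                frontier.append(neighbor)
    return None

mesh = {  # hypothetical neighborhood mesh of smart meters
    "meter_A": {"meter_B", "meter_C"},
    "meter_B": {"meter_A", "collector"},
    "meter_C": {"meter_A", "collector"},
    "collector": {"meter_B", "meter_C"},
}
print("before failure:", shortest_path(mesh, "meter_A", "collector"))

# meter_B drops off the network; the remaining devices reroute around it.
mesh.pop("meter_B")
for neighbors in mesh.values():
    neighbors.discard("meter_B")
print("after failure: ", shortest_path(mesh, "meter_A", "collector"))
```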

3.2.2.5 eIoT Perspectives on Grid Operator and Utility Networks

Grid operators and utilities have long made use of communication networks to gain situational awareness as an integral part of power systems operations and control. In many ways, the communication technologies described above were deployed as part of a regulated electric power industry. eIoT, however, as has been discussed at length, will fundamentally change the nature of power system operations and consequently require far more advanced communication system technologies. With the above interoperable LPWAN and Wi-SUN technologies, eIoT communication technologies for grid operators and utilities are likely to improve significantly. Open, interoperable standards also create room for innovation within this area.

One main need is communication beyond the purview of grid operators and utilities alone. In that regard, communication over power-line carriers, proprietary fiber optics, and SCADA leaves many new parties out of the evolving and highly flexible eIoT “cloud” [428]. As the next subsections will discuss, there is much room for these utility networks to be complemented by commercial telecommunication networks and LANs [160, 431]. Such a hybrid communication system architecture is much more likely to meet the new and unprecedented requirements for data access and transfer [447]. Naturally, a shift toward hybrid communication systems brings about very legitimate questions of jurisdiction, ownership, and authority over the data, servers, and communication channels that constitute the system. While it is clear that standards will continue to play a central role in the design of communication systems, it remains unclear what role regulation and legislation will have in these areas. These are still open questions as the grid transforms itself towards an eIoT paradigm.

3.2.3 Commercial Telecommunication Networks

One important trend in the development of eIoT communications is the shift towards commercial telecommunication networks as a complement to existing and dedicated grid operator and utility networks. In many ways, this has been a long-standing trend. The preceding section mentioned that utilities and grid operators have often rented telecommunication poles for wired communications over power-line carriers. A logical technological next step is to switch from power-line carriers to digital subscriber lines (DSL) over the (wired) telephone lines themselves [106]. DSL has high speeds of 1–100 Mbps depending on its type, that is, asymmetric digital subscriber line (ADSL), very-high-bit-rate digital subscriber line (VDSL), and high-bit-rate digital subscriber lines (HDSL) [410].

Although DSL technology is often chosen for smart grid projects because the use of existing telephone infrastructure reduces installation costs [106], the lack of standardization and differing ownership of equipment can cause potential reliability issues related to maintenance and repair [106, 412]. Furthermore, the expansion of telephone infrastructure needs to be cost rationalized in remote applications [106, 412].

Beyond wired telephone lines, eIoT communications is now making extensive use of wireless telecommunications networks for essential “smart grid” applications such as AMI-to-utility control center communications [106]. Wireless solutions have relatively low cost [412] and are easier to implement in less accessible regions [106]. Despite these benefits, wireless options present several challenges including constrained bandwidth, security concerns, power limitations, signal attenuation, and signal interference [106].

With these trade-offs in mind, it is useful to acknowledge the needs of the utilities in choosing the most suitable network. Utility evaluation of communication networks usually involves consideration of the following [412]:

  1. Bandwidth
  2. Data rates
  3. Coverage
  4. Reliability of end-to-end connection solutions
  5. Associated protocols
  6. Integration of existing systems
  7. Ease of deployment
  8. Management tools
  9. Life cycle costs

Section 3.2.3.1 highlights some of the technological developments in cellular data networks, and Sect. 3.2.3.2 covers WiMax networks before discussing their implications on eIoT in Sect. 3.2.3.3.

3.2.3.1 Cellular Data Networks: 2.5G-GPRS, 3G-GSM, 4G, and LTE

Cellular communication systems have provided coverage for data transmission for several decades [157]. They enable utilities to deploy smart metering in a wide-area environment and are a relatively quick and inexpensive option for meter-to-utility as well as distant node-to-node communication [106, 157]. Existing telecommunications infrastructure reduces investment cost and the additional time needed to build communications for a power systems purpose [106]. Systems such as 2.5G, GSM, 3G, and 4G are radio networks that communicate via at least one base station transceiver (or cell) per land area [157].

2.5G, also known as general packet radio service (GPRS), is a packet data bearer service over the global system for mobiles (GSM) [427]. User data packets are transferred between mobile stations and external IP networks so that IP-based applications can run on a GSM network [427]. Data speeds can range from 9.6 to 115 kbps by amalgamating unused time slots in the GSM network [427].

The next-generation cellular network, 3G-GSM, provides data rates of 144 kbps to over 3 Mbps [412]. GSM itself is widely used internationally for mobile telephone systems and is based on circuit-switching technology (as opposed to the sole use of packet-switching in GPRS) [427]. Cellular network operators have approved the use of GSM networks for AMI communications because they provide sufficient bandwidth, data rates, anonymity, and protection of data [412, 424]. At this point, 3G is a mature technology with well-established theory and operational experience [412]. It is secured using various encryption technologies, but its security can still be a concern, and its communication rate is not reliably real-time [412].

More recently, the 4G and LTE standards have been developed. 4G was defined by the International Telecommunication Union (ITU) using many of the 3G standards. In 2007, the Third Generation Partnership Project (3GPP) completed its task of creating the LTE standardization [448]. The project’s objective was to meet increasing requirements on higher wireless access data rate and better quality of service [448]. Subsequently, 3GPP immediately started a standardization process called LTE-Advanced for 4G systems [424, 448]. Because of its high reliability and low latency, LTE is suitable for NAN smart grid applications such as automated metering systems and distribution system control [424]. Furthermore, LTE offers opportunities to scale deployment because it is widely supported and its hardware costs are expected to improve [424].

3.2.3.2 WiMAX Networks

In complement to the cellular data networks described above, the Worldwide Interoperability for Microwave Access (WiMAX) standard was developed by the IEEE 802.16 working group to meet 3G standards and was later revised to meet 4G requirements [448]. It has been developed for “first-mile/last-mile” broadband wireless access as well as backhaul services in high-traffic metropolitan areas [448]. WiMAX is a communication protocol that provides fixed and fully mobile data networking. It has versions that work with licensed and unlicensed FCC frequencies in the 10–66 GHz and 2–11 GHz ranges, respectively [427]. WiMAX has a theoretical data rate of 75 Mbps and is designed for larger areas with a range of up to 50 km with a direct line of sight [410, 427]. As a standard, WiMAX offers interoperable microwave access [424].

A WiMAX deployment can be operated as a proprietary network, which comes with the benefit of complete control for utilities [424]. It is well-suited for use in a NAN due to its bandwidth and range [412, 424]. It offers efficient coverage and high data rates [424]. It also has low latency and relatively low deployment and operating costs [424]. These characteristics favor smart meter networking and are sufficient to support the real-time data transfers required for real-time pricing programs [424]. Disadvantages of WiMAX include a high initial infrastructure cost for radio equipment, which requires optimizing the number of station installations against quality of service requirements [424].

3.2.3.3 eIoT Perspectives on Commercial Telecommunication Networks

As eIoT continues to develop technologically, it is clear that commercial telecommunications networks will have an increasingly important role. They provide sufficient bandwidth for wide-area data transfer, which allows them to be used for distributed smart grid applications such as AMI and DERs [106, 423, 424]. These networks are suitable for NANs, where they can connect peripheral devices to private area networks [424]. The LTE and WiMAX standards also have the bandwidth and quality of service capabilities to support NAN-to-NAN (N2N) communications [106, 423, 424]. Beyond simply speed and quality of service, telecommunication networks and their associated operators offer grid operators and utilities an existing and cost-effective means for networked energy management. Furthermore, utilities (especially smaller ones with limited technical staff) have the opportunity to outsource maintenance and security upgrades in networks that are continually evolving with new generations of technology. This allows utilities to focus more on “core” business services [424].

Despite these many advantages, the integration of telecommunication networks into grid operations faces potential challenges. Cellular networks serve a larger customer market, which may result in network congestion or decreased performance [106]. Critical communications applications may not find cellular networks dependable in an emergency such as a storm or abnormal traffic situations [106]. Furthermore, although the speed of cellular networks continues to evolve, the number of mobile devices and their demands for data is also continually growing [425]. Grid operators, utilities, and telecommunication networks will have to work collaboratively to ensure that telecommunication networks have sufficient capacity to handle a continually evolving eIoT and its associated energy-management applications. In some cases, a utility may prefer its own private network to ensure quality of service and reduce monthly operating costs [106, 424]. It is also possible to develop hybrid utility-telecommunication networks so that congestion events do not interfere with emergency utility operation. LTE, for example, has the ability to operate either as a default or as a backup network [424]. Finally, from the perspective of power grid cybersecurity, a public telecommunication network is often perceived as a vulnerable point of operation [423]. Further work is required to bolster security on public cellular networks given their new role in eIoT energy management [423].

Finally, as telecommunication system operators face the strains of increased mobile and wireless device usage, an advanced, next-generation technology (5G) is needed [425]. Mobile-cellular subscriptions in the USA increased from approximately 109 million to 355 million between 2000 and 2014 [449]. As more devices become wireless, the telecommunications industry must address the physical scarcity of the radio frequency spectra for cellular communications, increased energy consumption, and average spectral efficiency while maintaining high data rates, seamless coverage, and a diversity of quality of service (QoS) requirements [425]. Heterogeneous networks may cause a fragmented user experience, and so the compatibility of these devices and interfaces with networks must be ensured [425]. 4G network data rates may not be sufficient for cellular service providers [425]. Instead, they must adopt new technologies as a solution for the billions, perhaps trillions, of active wireless devices [425]. 5G is expected to be standardized around 2020 [425].

3.2.4 Local Area Networks

In addition to grid operator, utility, and telecommunication networks, there is a growing need for LANs at the consumer’s premises. Such networks use local area, often low energy, communication technologies to connect to a wide variety of devices in the home, commercial building, or industrial site [427]. These LANs also route information from peripheral devices such as smart thermostats and water heaters to energy-management systems and smart meters and monitors [410]. Local area networks are also often connected via smart meters and internet gateways to other “smart grid” actors such as electric utilities or third-party energy service companies (ESCOs). Such gateways enable customer participation in the utility’s NAN applications such as prepaid services, user information messaging, real-time pricing and control, load management, and demand response [410].

Because LANs support a tremendous diversity of peripheral devices, they are also characterized by a diversity of standards and protocols. This section highlights some of the more emergent technologies including [106, 427]:

  1. Wired Ethernet in Sect. 3.2.4.1
  2. WiFi in Sect. 3.2.4.2
  3. Z-Wave in Sect. 3.2.4.3
  4. ZigBee in Sect. 3.2.4.4
  5. Bluetooth in Sect. 3.2.4.5

A brief discussion of industrial networks is also provided (in Sect. 3.2.4.6) to address the specific needs of industrial sites.

3.2.4.1 Wired Ethernet

Ethernet is a dominant wired technology that is widely used in residences and commercial buildings [450]. Almost all personal and commercial computers are equipped with an Ethernet port, and Ethernet connections are increasing among consumer entertainment equipment [426, 450]. Ethernet using an unshielded twisted pair (UTP) cable has four different supported data rates (10 Mbps, 100 Mbps, 1 Gbps, and 10 Gbps) that are covered by the IEEE 802.3 standard [450]. Although Ethernet has a high data rate, not all devices in private networks may be suitable for an Ethernet connection. Many devices, such as home appliances, may not have Ethernet ports or may be in environments that cannot support the power requirements or justify the cost of Ethernet [426].

3.2.4.2 WiFi Networks

WiFi networks are the natural wireless alternative to wired Ethernet. WiFi provides high-speed connections over short distances [427]. The IEEE 802.11 standard defines various WiFi ranges and data rates [427]. Its optimal data rates span from 11 to 320 Mbps, and its optimal range spans from about 30 to 100 m [427]. WiFi is not designed for moving devices, and although not intended for metropolitan areas, it has been extended to larger areas because of its broad support for personal devices requiring wireless internet access [427]. WiFi is an IP-based technology and is widely used for a variety of electronic devices such as computers and mobile phones [426].

3.2.4.3 Z-Wave Networks

Z-Wave is an example of a proprietary wireless communication technology in LANs [426]. It is most suited for residences and commercial environments with low-bandwidth data transfers [426]. It is able to include device metadata in its communications and is easily embedded in consumer electronic products due to its low cost and low power consumption [426]. Unlike WiFi, it operates in the 900 MHz range and can be customized for simple commands such as ON-OFF-DIM for light switches, and Cool-Warm-Temp for HVAC units [426]. Z-Wave compatible devices can also be monitored and controlled by gateway access to broadband Internet [426].

3.2.4.4 ZigBee Networks

ZigBee can be used as an alternative to WiFi and Z-Wave [423]. It is often used in industrial settings [427]. ZigBee can cover about 100 m with a data rate of 20–250 kbps according to the IEEE 802.15.4 standard [412]. In applications that do not require large bandwidth, ZigBee offers a low-cost solution [412, 427]. ZigBee has real-time monitoring, self-organization, self-configuration, and self-healing capabilities [423]. It is also appropriate for eIoT applications because LANs can use it to create a mesh network of devices whose range and reliability increase as more devices are added [412, 426]. ZigBee devices are typically battery-powered, and this may factor into the choice of network topology (star, tree, or mesh) [412]. In general, ZigBee has low power consumption and reliable data transmission [412]. However, since ZigBee devices are smaller, they tend to have limited internal memory, limited processing capability, and low data rates [412, 423].

3.2.4.5 Bluetooth Networks

The Bluetooth protocol was developed to provide point-to-point wireless communication, such as between mobile phones and laptop computers [451, 452]. Currently, it shares the IEEE 802.15 family of standards with ZigBee technologies. Bluetooth operates in the unlicensed 2.4 GHz spectrum [427]. In addition to point-to-point capabilities, it can create meshed networks with a range of 1–100 m at data rates of up to 3 Mbps [412, 427]. Its range and low power consumption make it suitable for local monitoring of devices; however, Bluetooth is vulnerable to network interference and offers weak security [412].

3.2.4.6 Industrial Networks

In addition to the above communication technologies, there exist a number of communication technologies that are specific to industrial applications. As has been mentioned several times in the preceding sections, LANs must offer multi-level security, be cost effective, comply with standards, provide reliable transmission, and offer ease of access and use. Industrial networks have several additional requirements, including predictable throughput and scheduling, extremely low downtimes, reliable operation in hostile environments, scalability, and straightforward operation and maintenance by plant personnel (who are not specialized in communication systems). Ultimately, these (often competing) requirements have led to a diversity of industrial networks. Some of the leading industrial networks include [453, 454]:

  1. DataHighway Plus
  2. Modbus
  3. Highway Addressable Remote Transducer (HART)
  4. DeviceNet
  5. ControlNet
  6. Ethernet/IP
  7. LonWorks
  8. AS-i, P-Net
  9. Profibus/Profinet
  10. Foundation Fieldbus
  11. Ethernet

A detailed review of these technologies is beyond the scope of this work; however, the reader is referred to references [453–456] for an introduction to the topic. In the context of this work, these industrial networks form the communication layer of the “industrial Internet of Things” (IIoT) [457–459]. Naturally, as energy management becomes an increasingly important part of industrial operations, IIoT and eIoT will be viewed as overlapping and complementary developments rather than mutually exclusive ones.

3.2.4.7 Perspectives on Local Area Networks

The wired and wireless networks described above perform the communication function in homes, commercial buildings, and industrial facilities. As eIoT continues to develop, Ethernet, WiFi, Z-Wave, ZigBee, and Bluetooth networks are likely to continue to exist alongside each other [106, 426, 427]. In most cases, the most important role of these networks is to connect peripheral “smart” devices back to centralized applications, such as home energy monitors, home hubs, or utility-facing smart meters. Smart meters, in particular, can act as an interface between the LAN and the NAN [106, 414]. Such an interface can serve several purposes, including remote load control and the monitoring and control of DER and EVs [414].

Beyond traditional fixed applications, local area networks must increasingly support mobile devices. Unlike a fixed network topology, a mobile device must identify the network in which it operates, as well as the identity and location of its peer devices, in order to operate properly [460]. The integration of mobile devices into LANs necessitates networks with changing topology and algorithms that enable the real-time discovery and update of new devices [460]. Such applications raise questions of network security. Data exchange and interface interactions must be supported by trusted and secure devices that gracefully recover from failure [428]. The security risk of an untrusted device entering the network (or of a trusted device being hacked) increases as the attack surface of the network grows. LANs are dispersed, highly fragmented, last-mile communication networks of the electric grid [426]. This heterogeneity of devices and communication channels makes it difficult to protect against security breaches and data poaching.

In addition to network security, the fragmentation of LANs also complicates their interoperability [426]. Each of the communication technologies described above has its associated advantages, and no single standard is likely to emerge for all applications [106, 426, 427]. One solution is to use IP as a unifying translation layer across many heterogeneous networks [426]. In such a case, each “smart” device must have a usable IPv6 address. Beyond LANs, IP can also serve to improve interoperability with other networks such as SCADA. IP and “middleware” can deliver data to utilities in readable formats [412]. For these reasons, IP is viewed as an integral part of the widespread development of eIoT.
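
A small sketch with Python’s standard ipaddress module illustrates why IPv6 addressing comfortably accommodates one address per “smart” device; the prefix shown is from the reserved documentation range, and the interface identifier is hypothetical.

```python
import ipaddress

# Hypothetical prefix delegated to a home LAN; 2001:db8::/32 is reserved for documentation.
lan = ipaddress.ip_network("2001:db8:abcd:12::/64")
print(f"addresses available on this single LAN prefix: {lan.num_addresses:,}")

# Derive an address for an individual "smart" device from a hypothetical interface ID.
device = lan.network_address + 0x1A2B3C4D
print("example device address:", device)
print("belongs to LAN prefix: ", device in lan)
```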

Finally, it is clear that communication networks will continue to require many thoughtfully developed technical standards. As communication networks are advanced, it is important to create protocols that:

  1. Transmit data within a relatively small (private) area
  2. Transmit data back to a central location
  3. Provide backward compatibility to 2G, 3G, 4G, and LTE standards

Successful implementation of these open standards requires engagement of hardware and software companies in both the electric power and telecommunications sectors [132].

3.2.5 IoT Messaging Protocols

The previous sections have covered eIoT communication technologies that enable devices to form machine-to-machine networks using various radio technologies. For LANs, these may include ZigBee, Z-Wave, WiFi, or Bluetooth. This section now covers the messaging protocols that are used over these communication networks. The messaging protocols discussed here include:

  1. eXtensible Messaging and Presence Protocol (XMPP)
  2. Advanced Message Queuing Protocol (AMQP)
  3. Data Distribution Service (DDS)
  4. Message Queue Telemetry Transport (MQTT)
  5. Constrained Application Protocol (CoAP)

3.2.5.1 Data Distribution Service (DDS)

DDS is a message-passing service that provides publish/subscribe capabilities [461, 462]. DDS has been used successfully to provide scalable and efficient applications within the LAN [461, 462]. This service is used for real-time machine-to-machine (M2M) communication. Its architecture does not involve a broker, thus making its communication a distributed service [461, 462]. DDS was developed to support any programming language, and it is the only standard messaging application programming interface (API) for C and C++ [463]. Its publish/subscribe wire protocol allows for interoperability across various programming languages, platforms, and implementations [463]. It provides quality of service (QoS) policies for different behaviors [463], but there have been suggestions to leverage the good features of both DDS and MQTT to provide more flexible QoS for IoT applications [462].

3.2.5.2 Message Queue Telemetry Transport (MQTT)

IBM’s MQTT is optimized for centralized data collection and analysis through a broker [462, 464]. It offers an asynchronous publish/subscribe protocol that runs on top of the transmission control protocol (TCP) stack [464]. Usually, a client publishes information to a broker, and a subscriber elects to receive messages on certain topics [464, 465]. It provides three QoS options [461, 464]:

  1. Fire and forget (no acknowledgement necessary)

  2. Delivered at least once (acknowledgement required; duplicates possible)

  3. Delivered exactly once (a handshake ensures delivery exactly one time)

MQTT has been designed to have low overhead and is suitable for IoT messaging because no responses are needed most of the time [464]. The system may require username/password authentication, especially for brokers, and transport security is achieved through secure socket layer (SSL)/transport layer security (TLS) encryption [464, 466].
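
As a hedged illustration of this broker-based publish/subscribe pattern, the sketch below uses the open-source paho-mqtt client; the broker hostname, topic, and credentials are placeholder assumptions, and the qos argument selects among the three delivery options listed above.

```python
# Minimal MQTT publish/subscribe sketch using the open-source paho-mqtt
# client (pip install paho-mqtt). The broker hostname, topic, and
# credentials are placeholders for illustration only.
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"   # assumed broker hostname
TOPIC = "site1/meter/power"     # assumed topic

def on_connect(client, userdata, flags, rc):
    # Subscribe with QoS 1 ("delivered at least once").
    client.subscribe(TOPIC, qos=1)

def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode()} (qos={msg.qos})")

client = mqtt.Client()
client.username_pw_set("user", "password")  # optional username/password authentication
# client.tls_set()                          # enable TLS where the broker requires it
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, keepalive=60)

# qos=0, 1, or 2 selects fire-and-forget, at-least-once, or exactly-once delivery.
client.publish(TOPIC, payload="412.7", qos=0)
client.loop_forever()
```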

3.2.5.3 Constrained Application Protocol (CoAP)

The CoAP was designed by the Internet Engineering Task Force (IETF) and follows the HTTP request/response model, making it interoperable with the web [467]. It offers a request/response protocol that supports both asynchronous and synchronous responses [464]. It provides four types of messages [464]:

  1. Confirmable

  2. Non-confirmable

  3. Acknowledgement

  4. Reset

It also provides a stop-and-wait retransmission mechanism for confirmable messages, and a 16-bit “Message ID” is used to detect duplicates [464]. Due to its compatibility with HTTP, CoAP clients can access HTTP resources through a translation system [464, 468]. The base protocol does not offer any security features [464].
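
As a hedged illustration, the sketch below issues a single GET request with the aiocoap Python library; the device URI and resource path are assumptions. Requests are typically sent as confirmable messages, so the library handles the stop-and-wait retransmission and Message ID matching described above.

```python
# Minimal CoAP GET request sketch using the aiocoap library
# (pip install aiocoap). The device URI is a placeholder assumption.
import asyncio
from aiocoap import Context, Message, GET

async def main():
    # Create a client context (the local CoAP endpoint over UDP).
    protocol = await Context.create_client_context()
    request = Message(code=GET, uri="coap://device.example.com/sensors/temperature")
    # Confirmable requests are retransmitted (stop-and-wait) until acknowledged.
    response = await protocol.request(request).response
    print(response.code, response.payload.decode())

asyncio.run(main())
```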

3.2.5.4 eXtensible Messaging and Presence Protocol (XMPP)

XMPP was initially designed for messaging and has been in wide use for over 10 years. However, due to its age, XMPP is starting to become outdated for some of the newer messaging requirements [464]. For instance, Google recently stopped supporting it [469]. XMPP runs on TCP and provides both asynchronous publish/subscribe and synchronous request/response messaging. Given that it was designed for near real-time communication, XMPP is suitable for small, low-latency applications [464, 470]. It offers XMPP extension protocols to expand its functionality [464]. It has TLS/SSL built in for security purposes but does not offer any QoS [464]. It also uses XML, which may cause additional data overhead and increased power consumption [464].

3.2.5.5 Advanced Message Queuing Protocol (AMQP)

AMQP came out of the financial industry [464]. It mainly uses TCP but can use other transport services as well. It offers asynchronous publish/subscribe messaging and has a store-and-forward feature that ensures reliability when connectivity is lost [464, 471]. It provides three QoS levels [464]:

  1. At most once (message sent once, whether it is delivered or not)

  2. At least once (message delivered at least one time; duplicates possible)

  3. Exactly once (message delivered only once)

Security is provided through TLS/SSL. AMQP may suffer from low data rates in bandwidth-constrained environments [464, 472].
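
As a hedged sketch of these queue-based semantics, the snippet below uses the pika client against a RabbitMQ-style broker; the hostname and queue name are placeholder assumptions, and the durable queue plus explicit acknowledgement approximate the store-and-forward and at-least-once behaviors described above.

```python
# Minimal AMQP publish/consume sketch using the pika client for a
# RabbitMQ-style broker (pip install pika). Hostname and queue name
# are placeholder assumptions.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# A durable queue lets the broker store and forward messages across restarts.
channel.queue_declare(queue="meter_readings", durable=True)

# Publish one persistent message (delivery_mode=2 asks the broker to store it).
channel.basic_publish(
    exchange="",
    routing_key="meter_readings",
    body="412.7",
    properties=pika.BasicProperties(delivery_mode=2),
)

def on_message(ch, method, properties, body):
    print("received:", body.decode())
    # Explicit acknowledgement gives at-least-once delivery semantics.
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="meter_readings", on_message_callback=on_message)
channel.start_consuming()
```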

3.3 Distributed Control and Decision Making

Thus far, this chapter has closely followed the generic control structure in Fig. 3.1. Section 3.1 highlighted the tremendous heterogeneity of network-enabled physical devices that are integrated across the electric power grid to measure and control primary and secondary variables on the supply and demand sides. Their deployment naturally inspired the development of multiple mutually coexisting communication networks. Section 3.2 differentiated these networks based upon their operator: traditional grid operators, telecommunication companies, and finally the LANs belonging to residential, commercial, and industrial customers.

These two large-scale trends are transformative. No longer is the grid composed of thousands of centralized and actively controlled generators supplying billions of passive device loads. Rather, the centralized generation is complemented by distributed renewable energy that is often variable in nature. Furthermore, many of the passive device loads have become active and network enabled [45, 46]. The last step in the activation of the grid periphery is control and decision-making algorithms that serve to coordinate these devices to achieve balancing, mitigate line congestion, and meet voltage control objectives. Given the spatial and functional distribution of these devices, scalable and distributed control techniques that efficiently represent all the interactions are required to control and coordinate them, whether the interactions are collaborative or competitive [473].

In order to meet the challenges presented by the grid’s physical transformation, the structure and behavior of the power system’s operation and control must similarly change. Figure 3.14 shows a generic hierarchical control structure for a typical power system area. Passive loads are aggregated by a distribution system utility and passed to an independent (transmission) system operator (ISO) [20]. The ISO runs a wholesale day-ahead electricity market in the form of a centralized security-constrained unit commitment (SCUC) as well as a finer-grain “real-time” balancing market in the form of a security-constrained economic dispatch (SCED). These two market layers approximate the aggregated load at 1-h and 5-min intervals, respectively.

Fig. 3.14
figure 14

A generic hierarchical control structure for a typical power system area
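
To make the market layers concrete, the following toy sketch solves a single-interval economic dispatch, the core of a SCED with the network security constraints omitted, for three generators with assumed linear costs and limits; it is illustrative only and not any ISO’s actual formulation.

```python
# Toy single-interval economic dispatch (the core of a SCED) for three
# generators with assumed linear costs and limits, serving an assumed
# 5-minute load forecast. Network security constraints are omitted.
import numpy as np
from scipy.optimize import linprog

cost = np.array([20.0, 35.0, 50.0])      # assumed bid prices ($/MWh)
p_min = np.array([100.0, 50.0, 0.0])     # assumed minimum outputs (MW)
p_max = np.array([400.0, 300.0, 250.0])  # assumed maximum outputs (MW)
load = 850.0                             # assumed load forecast for the interval (MW)

# Minimize total cost subject to sum(P) == load and p_min <= P <= p_max.
result = linprog(
    c=cost,
    A_eq=np.ones((1, 3)),
    b_eq=[load],
    bounds=list(zip(p_min, p_max)),
    method="highs",
)
print("dispatch (MW)    :", result.x)
print("total cost ($/h) :", result.fun)
# A real SCED adds transmission (security) constraints and reserves, and the
# dual variable of the power balance would correspond to the system price.
```

At hourly resolution, a SCUC would additionally layer binary commitment decisions and inter-temporal constraints on top of such a dispatch.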

Decentralized automatic generation control (AGC) and automatic voltage regulation (AVR) use feedback control principles to adjust frequency and voltage at finer timescales (on the order of 1 Hz). Typically, each of these control layers is studied independently, often separating technical and economic analyses [15]. More recently, the Laboratory for Intelligent Integrated Networks of Engineering Systems (LIINES) has advanced the concept of “enterprise control” to simulate, design, and assess such a hierarchical control structure holistically [245, 246, 474–478]. An extended rationale for power system enterprise control has been published relative to the methodological limitations of existing renewable integration studies [15, 245, 246].
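
As a hedged and highly simplified illustration of the feedback principle behind AGC, the sketch below closes a proportional-integral loop around a one-area frequency-deviation model at a 1-s update rate; the inertia, damping, and controller gains are arbitrary assumptions.

```python
# Toy one-area AGC loop: a PI controller adjusts generation to drive the
# frequency deviation back to zero after a sudden 0.1 p.u. load increase.
# The model and gains are illustrative assumptions, not tuned values.
H, D = 5.0, 1.0          # assumed inertia and load-damping constants (p.u.)
Kp, Ki = 0.5, 0.2        # assumed PI controller gains
dt, steps = 1.0, 120     # 1-second AGC cycle, 2 minutes of simulation

delta_f = 0.0            # frequency deviation (p.u.)
integral = 0.0           # integral of the area control error
delta_p_load = 0.1       # sudden load increase (p.u.)

for k in range(steps):
    ace = -delta_f                              # single-area ACE reduces to -Δf
    integral += ace * dt
    delta_p_gen = Kp * ace + Ki * integral      # PI control action sent to generators
    # Swing-equation-like update: 2H dΔf/dt = ΔPgen - ΔPload - D Δf
    delta_f += dt * (delta_p_gen - delta_p_load - D * delta_f) / (2 * H)
    if k % 30 == 0:
        print(f"t={k:3d} s  Δf={delta_f:+.4f} p.u.  ΔPgen={delta_p_gen:+.4f} p.u.")
```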

Such an approach must now evolve again to address the grid’s physical transformation. The centralized optimization algorithms found in the market layers of the generic hierarchical control structure (in Fig. 3.14) do not scale and are unable to address the explosion of active demand-side resources at the grid periphery [15, 17]. Furthermore, the decentralized control algorithms found in AGC and AVR lack coordination beyond their local scope of control. For these reasons, effective control algorithms that provide both scalability and wide-area coordination are necessary [479, 480].

Perhaps one of the key research areas in distributed power system control is solving the optimal power flow (OPF) problem in a distributed manner [481–494]. Not only is this problem difficult to solve (by virtue of being non-convex), it also consumes significant computational resources. Solving the problem in a distributed manner allows for faster solutions and larger problem sizes. A common family of techniques is based on augmented Lagrangian decomposition [493, 495, 496], including dual decomposition [482, 497], the alternating direction method of multipliers (ADMM) [483, 484, 492, 494, 496, 498, 499], the augmented Lagrangian alternating direction inexact Newton (ALADIN) method [485], analytical target cascading (ATC), and the auxiliary problem principle (APP) [486, 500]. The other common approach is based on decentralized solution of the Karush–Kuhn–Tucker (KKT) necessary conditions for optimality and gradient dynamics [487]. ADMM is by far the most common of these techniques [488]. Other distributed control study areas include wide-area control, optimal voltage control, and optimal frequency control [501]. Despite extensive publications in this area, guaranteed convergence remains a concern for most of these approaches [501].
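
As a toy illustration of the consensus form of ADMM (not a full distributed OPF), the sketch below lets three agents with private quadratic costs agree on a shared scalar decision variable; the cost parameters and penalty weight are arbitrary assumptions, and a real distributed OPF would replace each local update with an area-level OPF subproblem.

```python
# Toy consensus ADMM: three agents with private quadratic costs
#   f_i(x) = 0.5 * a_i * (x - b_i)**2
# agree on a common decision variable x. This illustrates the splitting idea
# behind many distributed OPF formulations; a real OPF would replace f_i with
# each area's cost function and power-flow constraints.
import numpy as np

a = np.array([2.0, 1.0, 4.0])     # assumed local cost curvatures
b = np.array([10.0, 30.0, 20.0])  # assumed local preferred operating points
rho = 1.0                         # ADMM penalty parameter

x = np.zeros(3)   # each agent's local copy of the decision variable
u = np.zeros(3)   # scaled dual variables (price-like disagreement signals)
z = 0.0           # consensus (coordinating) variable

for k in range(100):
    # Local updates: closed-form minimizer of each agent's augmented Lagrangian.
    x = (a * b + rho * (z - u)) / (a + rho)
    # Coordination step: average the local copies (could itself be done by gossip).
    z = np.mean(x + u)
    # Dual update penalizes disagreement between local copies and the consensus.
    u = u + x - z

print("ADMM consensus value :", z)
print("centralized optimum  :", np.sum(a * b) / np.sum(a))  # should closely match
```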

While the transmission system is likely to remain unchanged, the distribution system can implement two distribution-level energy markets with distributed algorithms. Furthermore, eIoT devices have the potential to provide AGC and AVR ancillary services. In some cases, the communication networks described in Sect. 3.2 will be sufficiently fast to enable the distributed algorithms. In other cases, network latency will limit these implementations to decentralized control [502].

To that effect, the power systems literature has developed a significant body of work on multi-agent system (MAS) distributed control algorithms. In MAS applications, agents simplify decision making by communicating with a few of their immediate neighbors and making local decisions that then inform higher-level decisions [503, 504]. This ensures that individual devices do not carry too much information and allows for better coordination within the system [503]. Key MAS features such as modularity, scalability, reconfigurability, and robustness make them especially well suited to the realization of distributed control [505]. This section seeks to highlight some of the important outcomes of this research.
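
A minimal sketch of this neighbor-to-neighbor pattern is distributed averaging: each agent repeatedly nudges its local value toward those of its immediate neighbors, and, without any central coordinator, all agents converge to the network-wide average. The line-graph topology, initial values, and step size below are assumptions.

```python
# Toy multi-agent consensus: agents on a 5-node line graph repeatedly average
# with their immediate neighbors only and converge to the global mean without
# any central coordinator. Topology, initial values, and step size are assumed.
import numpy as np

# Adjacency matrix of the line graph 0-1-2-3-4 (each agent sees only neighbors).
A = np.array([
    [0, 1, 0, 0, 0],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)
degree = A.sum(axis=1)

x = np.array([3.0, 7.0, 1.0, 9.0, 5.0])  # each agent's local measurement
epsilon = 0.25                           # step size, chosen below 1/max(degree)

for k in range(200):
    # Each agent nudges its value toward the values of its neighbors.
    x = x + epsilon * (A @ x - degree * x)

print("consensus state :", x)                 # all entries approach the mean
print("true global mean:", np.mean([3.0, 7.0, 1.0, 9.0, 5.0]))
```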

Perhaps the earliest works on multi-agent systems in power system research occurred at the turn of the century in the context of market deregulation. At that time, it was recognized that, as power system markets shifted from a single grid operator to multiple competing generation companies, such “GenCos” would deploy new “game-theoretic” bidding strategies to maximize their profits. Therefore, some of the first works on the applications of multi-agent systems to the power industry were focused on modeling electricity markets in a deregulated power industry [506–510].

At the time, most algorithms studied the effect of self-interested agents on auction market equilibrium with a particular focus on the unit commitment problem [511–514]. As such, these MAS frameworks were composed of a few mobile agents, generator agents, and a market facilitator who would oversee the market bidding process [515]. Game-theoretic strategies were also employed to investigate potential coalitions or cooperative strategies among different competing parties [516, 517].

Around the same time, various MAS approaches considered optimal cost allocation techniques to manage cross-border exchanges, be it through tie-lines or cross-jurisdictional transmission lines [518–520]. These earliest MAS efforts set the stage for later applications in electric microgrids, demand response, and smart grids.

MAS applications later diversified to other aspects of power system control and operations such as balancing, scheduling, line control and protection, and frequency regulation [509, 521–525]. As more renewable energy resources have gained prominence in grid operation, MAS frameworks, too, have shifted focus to the provision of ancillary services. A significant number of studies have considered system restoration under vulnerable system conditions, and later these approaches have been applied to microgrids with some penetration of variable energy resources. Usually, these MAS applications study only a single layer of either economic or technical control [32]. In some cases, a MAS economic layer was combined with a single physical layer [32]. Later on, MAS applications came to incorporate demand response at the microgrid and residential levels [526–529].

Agent-based and game-theoretic approaches have also been applied for cooperative and competitive demand-side management and microgrid control [530–537]. Grid-level MAS applications have focused on the provision of ancillary services and, in some cases, the parallelization of grid-level communication and control networks such as SCADA [528, 529]. Game-theoretic approaches such as cooperative and non-cooperative games have shown great promise in the design of distributed control strategies for demand-side management [473, 538]. However, given the dynamic nature of the smart grid, these works showed that a stable equilibrium was not always possible in the presence of faults and slow learning speeds [473].

Multi-agent electric market simulators were also advanced to help in the study of competitive electricity markets. One such simulator is the multi-agent system competitive electricity markets simulator (MASCEM), which combines agent-based modeling and simulation to study the dynamics of competitive electricity markets [539–545]. Continued research is required to design distributed algorithms that use game-theoretic principles and ensure robustness, stability, optimality, and convergence.

Another important application of multi-agent systems in power systems has been the control and energy management of microgrids. There, it was recognized that microgrids are often implemented in remote and potentially harsh environments. Their associated centralized controllers and energy-management software present a single point of failure [503, 546, 547]. MAS, in contrast, are fundamentally more resilient in that they can continue to operate in the face of certain types of disruptions. Such functionality is enabled by a modular decision-making architecture composed of semi-autonomous agents that allows agents to be added and removed without the need to halt the entire system.

A modular architecture is particularly vital as the penetration of variable energy resources (VER) grows because it allows for other energy resources to be easily reconfigured to support microgrid operation [548]. For example, the ability to island part of the microgrid to allow it to heal is of paramount importance in the control of microgrids with a high penetration of VERs [548–550]. As a result, many MAS frameworks have studied self-healing mechanisms of microgrids [548, 551–555] and some have even demonstrated resiliency of such microgrids under several reconfigurations [551].

Recognizing the distributed manner in which microgrids are controlled, distributed MAS-based algorithms have also been proposed for various, usually hierarchical, microgrid control applications. These control applications include economic dispatch [556], load restoration [557], decision making [558, 559], and scheduling [560], to name just a few. There has also been significant research on control strategies for microgrids in islanded operation [549, 561, 562] to ensure reliability within the islanded system. Naturally, a lot of attention has gone into designing and standardizing the informatic interfaces of multi-agent frameworks. These frameworks have been designed to closely follow IEC 61850, IEC 61499 [563], and IEC 60870-5-104 [564] as standard architectures for interoperability.

In the meantime, further research is needed to ensure that agent groups can perform their functions at or near real time. Furthermore, more work is required to assess the performance of distributed algorithms with respect to optimality and their global behavior relative to centralized algorithms [479].

Despite this extensive MAS research in power systems, an important limitation has emerged. Much like traditional hierarchical control structures in transmission systems, these MAS research works generally address only one control layer at a time. Furthermore, there is a significant dichotomy between MAS that control physical variables to secure grid reliability and those that control economic variables to implement distributed versions of traditional market structures. In a recent review, only eight works addressed multiple layers of technical and economic control [32, 565]. The same review assessed these works against 14 design principles that enable resilient eIoT integration. The result of the assessment is shown in Table 3.5. As a technology development roadmap, it identifies the need for further MAS development that:

  1. Implements distributed control algorithms

  2. Addresses both technical and economic control objectives

  3. Addresses the multiple timescales found in the integration of variable energy, energy storage, and demand-side resources

Table 3.5 Adherence of existing MAS implementations to design principles [32]

Finally, it is important to emphasize that the effective implementation of distributed control algorithms requires access to real-time data, data filtering, coordination, and control [575]. Standards and architectures must be put in place as a platform upon which such algorithms can operate. First, individual nodes must be equipped with the necessary memory and computing power for low-level control functions. Second, functional and control standards for devices must be agreed upon to ensure interoperability between platforms. Third, modularity must be applied as an integral design principle that facilitates the integration of ever more sensors and actuators. Fourth, the computing capacity accorded to each node must match its functional requirements. Lastly, in a truly distributed system, each node must have all the information needed to re-initialize new nodes and initiate backup procedures in the case of failure [575]. These provisions facilitate the design and deployment of distributed control strategies.

3.4 Architectures and Standards

Fundamentally speaking, many of the discussions presented in this work thus far can be seen as large-scale architectural changes of the electric power system towards decentralization. In the original discussion on energy-management change drivers presented in Chap. 1, the deregulation of electric power markets was introduced. Figure 1.5 showed the deregulation or unbundling of electric power as a shift from centralized monopolies to multiple, decentralized, and competitive suppliers. Similarly, the integration of renewable energy and active demand response shown in Fig. 1.7 may be viewed as a fundamental change in the architecture of the physical electric power system itself. The role of centralized generation facilities is being eroded by distributed renewable generation. The previous section’s discussion on distributed control algorithms addresses the shift from the more centralized control structure in Fig. 3.14 to a more distributed one. Together, these three separate discussions show that eIoT is entirely consonant with a decentralized architecture in regulation, in decision making across operations timescales, and in the physical power grid.

These three large-scale architectural changes fundamentally alter how power and information are exchanged throughout the electric power system. As has been discussed several times throughout this work, eIoT brings about the need for two-way flows of power and information where one-way flows were once common. The most common examples are at the grid periphery, where distributed generation can cause power to flow back up the radial distribution system and where network-enabled demand-side resources both send and receive information as part of demand-response schemes. Such two-way flows change the way both cyber and physical entities in the grid interact with each other. Physical energy resources must accommodate the two-way power flows. In the meantime, “cyber” entities such as controllers, enterprise information systems, and organizations as a whole will have two-way informatic interactions with each other. For example, utilities of the future [30] may become “distribution system operators” that enable retail electricity markets. Consequently, their historical role as a load-serving entity in wholesale electricity markets is also likely to change. These changing roles of “cyber” entities on the grid further indicate fundamental changes in the electric grid’s architecture.

It is difficult to determine at this time what a future eIoT-enabled electric power system architecture will look like. It is clear that the grid cannot continue to operate in a centralized hierarchical fashion as it has in the past. On the other hand, a full transition to eIoT-enabled heterarchy and decentralization is improbable as well. Much research work still remains in order to achieve the holistic performance properties that centralized algorithms have already demonstrated, and consequently centralized architectures are likely to endure in those settings. The meshed communication networks (such as Z-Wave and Zigbee mentioned in Sect. 3.2.4) suggest distributed control architectures. However, their limited range similarly implies centralized nodes that aggregate peripheral devices and present them to the rest of the electric power system. Overall, the underlying trends that support eIoT remain strong, and so decentralized and distributed control algorithms will take hold where possible. On a spectrum between total centralized hierarchy and complete decentralized heterarchy, the electric power grid’s overall future architecture will fall somewhere in the middle.

In recognition of the electric grid’s evolving architecture, there have been efforts on both sides of the Atlantic to develop open and extensible architectures. Under EU mandate M/490, the Smart Grid Architecture Model (SGAM) was developed [26]. As shown in Fig. 3.15, it is a structured approach to modeling and designing use cases for power and energy systems. The architecture is organized into a three-dimensional framework consisting of domains, zones, and layers. These allow energy practitioners to structure use case design in a clear and concise way.

Fig. 3.15
figure 15

EU mandate M/490 Smart Grid Architecture Model (SGAM) [26]

Meanwhile, on the other side of the Atlantic, the Energy Independence and Security Act (EISA) of 2007 describes several favorable qualities of a future smart grid architecture, including flexibility, uniformity, and technology neutrality [576, 577]. To that effect, the GridWise Architecture Council (GWAC) created its interoperability framework shown in Fig. 3.16 [27, 28, 578]. (This framework has often been nicknamed the “GWAC Stack” for simplicity.) Much like the SGAM, the GWAC Stack recognizes the need for multiple layers of integration in order to ensure interoperability, but it does not add the dimensions of domains and zones. At the bottom, three layers ensure the interoperability of technical connectivity. When these layers are abstracted, they form two informational layers that provide business context and semantic understanding. These layers may be further abstracted to form three organizational layers that address policy, business objectives, and business procedures. Both the SGAM and the GWAC Stack serve as the basis for the future development of an electric power reference architecture that supports standard and interoperable implementations of eIoT.

Fig. 3.16
figure 16

The GridWise Architecture Council interoperability framework [27, 28]

In the meantime, there have been several efforts to develop commercial and quasi-commercial IoT platforms. Specifically, the OpenFog Consortium was launched in 2015 to spearhead the creation of an open architecture for IoT platforms and applications based on the fog computing ecosystem [579, 580]. The aim of the OpenFog Architecture is to accelerate the decision-making process of IoT sensors and actuators by bringing essential computation, networking, and storage closer to devices, thereby reducing the latency brought about by all devices communicating directly with the cloud [579]. This architecture essentially serves as a middleman between the cloud and IoT devices and, thus, is not a replacement for cloud computing but rather complementary to it [581]. The approach of bringing processing (that is, computation, storage, and networking) closer to where the data is gathered is called fog computing, hence the name OpenFog Architecture [580, 581].

The OpenFog Architecture comprises an OpenFog Fabric, OpenFog Services, devices and applications, and cloud services. The OpenFog Fabric is a computation platform on which services are delivered to all the devices [580]. The OpenFog Services interface between the devices and the platform. The services delivered by this platform include content delivery, video encoding, and analytics platforms, to name just a few [580]. The device and application layer includes sensors, actuators, and standalone applications running within or spanning multiple fog deployments [579, 580]. Cloud services are available for larger computational processes that later inform bigger decisions [579, 580]. The entire architecture is built to ensure the security of all communications and data. The OpenFog reference architecture is built upon eight pillars [579, 580]:

  1. Security

  2. Scalability

  3. Openness

  4. Autonomy

  5. Reliability, Availability, and Serviceability (RAS)

  6. Agility

  7. Hierarchy

  8. Programmability

Figure 3.17 illustrates the OpenFog reference architecture [580]. Recently, this reference architecture has been adopted as IEEE fog computing standard 1934 [580].

Fig. 3.17
figure 17

The OpenFog reference architecture [580]

Other architectural standards are also provided by corporations such as Microsoft, Cisco, SAP, and Amazon. Amazon offers the Amazon Web Services (AWS) IoT Core, which is a platform through which one can connect various IoT devices [582]. The AWS IoT platform comprises a device SDK that helps users connect and disconnect devices to and from the platform [582]. It provides broker-based publish/subscribe messaging through the MQTT, HTTP, or WebSockets protocols [582]. The SDK supports the C, Arduino, and JavaScript programming languages in addition to client libraries and a developer’s guide [582]. SigV4 and X.509 certificate-based authentication are also supported by this platform [582]. Further discussion of this platform is beyond the scope of this book; however, more information on third-party IoT platforms can be found for Amazon [582], SAP/INTEL [583, 584], Cisco [585], and Microsoft [586].
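
As a hedged sketch of the certificate-based authentication mentioned above, shown with the generic paho-mqtt client rather than the AWS device SDK itself, connecting to a cloud MQTT broker over TLS typically resembles the following; the endpoint, certificate paths, and topic are placeholder assumptions.

```python
# Hedged sketch of X.509 certificate-based MQTT over TLS, the device
# authentication pattern used by cloud IoT brokers such as AWS IoT Core.
# Shown with the generic paho-mqtt client rather than the AWS device SDK;
# endpoint, file paths, and topic are placeholders.
import paho.mqtt.client as mqtt

ENDPOINT = "example-endpoint.iot.us-east-1.amazonaws.com"  # assumed broker endpoint
TOPIC = "devices/meter1/telemetry"                         # assumed topic

client = mqtt.Client(client_id="meter1")
client.tls_set(
    ca_certs="root-ca.pem",       # broker's root certificate (placeholder path)
    certfile="meter1-cert.pem",   # device certificate (placeholder path)
    keyfile="meter1-key.pem",     # device private key (placeholder path)
)
client.connect(ENDPOINT, 8883)    # MQTT over TLS conventionally uses port 8883
client.loop_start()
info = client.publish(TOPIC, payload='{"watts": 412.7}', qos=1)
info.wait_for_publish()           # block until the QoS 1 handshake completes
client.loop_stop()
client.disconnect()
```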

Consequently, the implementation of eIoT as automated and interoperable solutions rests upon a significant effort to develop effective standards. Beyond the communication standards mentioned in Sect. 3.2, several standards initiatives were launched early on at national and international levels [587–589] including concerted efforts by the IEC [590], IEEE [591], and NIST [577]. The following standards are highlighted as directly relevant [29, 592] (Fig. 3.18):

  • The IEEE 1547 Series provides requirements related to the performance, operation, testing, safety, and maintenance of DERs [593]. The presence of an international standard was seen as a roadblock to the implementation of DG projects. That said, the standard does provide some technological flexibility for regulators at the local, state, and federal levels [593]. The standard is intended to be technology-neutral and covers resources up to 10 MVA.

  • The IEEE 2030 Series establishes interoperability as a basis for extensibility, scalability, and upgradeability [594]. IEEE 2030 defines interoperability as “the capability of two or more networks, systems, devices, applications, or components to externally exchange and readily use information securely & effectively” [593]. The standard is widely accepted as a pioneering document in architecture and interoperability in the electrical industry [414]. The standard uses perspectives from communications, power systems, and information technology platforms in the smart grid to provide design criteria for smart grid interoperability across the generation, transmission, distribution, and customer domains [414, 594]. The standard creates the smart grid interoperability reference (SGIR) model and supplies a premise for interoperability knowledge by presenting terminology, evaluation criteria, functions, applications, and other characteristics [594]. Furthermore, end-to-end solutions and security are addressed by its guidelines for interoperability in functional interface identification, logical connections and data flows, communications, and digital information management [594]. The IEEE 2030 standards help keep pace with progress in communications and information technologies for improved integration of DERs and the evolving loads of the electric power system [593].

  • IEC TR 62357 Seamless Integration Architecture (SIA) aims to provide a framework for energy-related ICT implementations that use the IEC TC 57 family of standards. For this reason, IEC TR 62357 and IEC TC 57 are often combined to create (specific) reference architectures. In such a way, they help to identify and resolve inconsistencies and create seamless frameworks.

  • IEC 61970 Common Information Model (CIM) specifies a domain ontology. In other words, it provides a kind of knowledge base with a special vocabulary for power systems. One goal is to support the integration of new applications in order to save time and costs. Another is to simply facilitate the exchange of messages in multi-vendor systems. The IEC offers an integration framework based on a common architecture and data model. In addition, the architecture is platform independent. The main application of IEC 61970 is the modeling of topologies.

  • IEC 61968 Distribution Management extends IEC 61970 CIM for distribution management systems (DMS). These extensions relate in particular to the data model. The main use case is the exchange of XML-based messages in different DMSs.

  • IEC 62325 Market Communications is also an extension to IEC 61970 CIM where the data model and messages are extended. However, the focus here is on market communication for EU and US-style electricity markets.

  • IEC 62351 Security for Smart Grid Applications addresses ICT security for power system management with the goal of defining a secure communication infrastructure for energy-management systems with end-to-end security. This implies that secure communication protocols are specified in IEC 61970, IEC 61968, and IEC 61850.

  • IEC 61850 Substation Automation and Distributed Energy Resource (DER) Communication focuses on communication and interoperability at the device level. The focal topics are:

    • The exchange of information for protection

    • Monitoring, control, and measurement

    • The provision of a digital interface for primary data

    • A configuration language for systems and devices

    This is implemented by:

    • A hierarchical data model

    • Abstractly defined services

    • Mappings of these services to current technologies

    • An XML-based configuration language for the functional description of devices and systems

  • IEC 62559 Use Case Management deals with the steadily increasing system complexity associated with eIoT. In such a complex system, use cases help to structure and organize all relevant information for a technical solution. Therefore, in IEC 62559, five phases are identified for the development of use cases and the identification of requirements. Furthermore, a description template containing a narrative and visual representation of the use case is also provided.

Fig. 3.18
figure 18

An overview of important eIoT standards (adapted from [29])

Despite these many efforts in the development of eIoT architectures and standards, interoperability remains a formidable technical challenge to widespread eIoT implementation. In that regard, it is clear that the IEC, IEEE, and NIST will need to continue their efforts to enhance eIoT interoperability.

3.5 Socio-Technical Implications of eIoT

The previous sections have described the development of IoT within energy infrastructure in terms of network-enabled physical devices, communication networks, distributed decision-making algorithms, and architectures and standards. When taken together, it is clear that eIoT fundamentally transforms the relationship that “energy things” have with the information that describes them. The proliferation of sensing technology (described in Sect. 3.1) means that the quantity of information available to describe energy infrastructure will reach unprecedented levels. Beyond the quantity of information, the type of data will also diversify. Reconsider Fig. 3.2.

Whereas much of the electric power grid’s data was associated with primary variables in the transmission system, Sect. 3.1.4 showed that this information will grow to include primary variables in the distribution system through smart meters. Furthermore, Sects. 3.1.3 and 3.1.5 showed that this information will grow to include secondary variables on both the supply and demand sides. These large and heterogeneous sources of data are also owned, generated, and transmitted by an unprecedented number of stakeholders. Reconsider Fig. 3.13 on page 69. The simultaneous presence of home area, neighborhood area, and wide-area networks implies that consumers will complement the role of utilities and grid operators as generators of data. As data is generated, natural questions will emerge as to the ownership of these data.

Finally, the extensive discussion on communication networks presented in Sect. 3.2 shows that the transmission of data will come to include telecommunication companies and private owners. Because eIoT fundamentally changes the role of information in energy infrastructure, there are two important socio-technical implications: privacy and cybersecurity. Both of these concerns are complex topics in and of themselves and cannot be treated extensively in the context of this work. Rather, this section seeks to provide an entry point from which interested readers can investigate these topics more deeply.

3.5.1 eIoT Privacy

The proliferation of nearly ubiquitous eIoT data, particularly on the consumer side, raises important concerns about consumer privacy. Reconsider Fig. 3.9 on page 47, which was mentioned in the context of home energy monitors that are able to infer the usage of individual home appliances based upon their electrical “signatures.” While such information is very useful to a homeowner in the context of changing their own electricity consumption behavior, it can easily be used by other parties to infer a detailed picture of the homeowner’s daily life including eating, sleeping, and leisure habits [595].
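
As a hedged toy illustration of how such inference can work, the sketch below flags large step changes in an aggregate power trace and matches them against assumed appliance signatures; real non-intrusive load monitoring uses far richer features and statistical models.

```python
# Toy appliance-event detection from an aggregate power trace: large step
# changes are matched to assumed appliance "signatures" by wattage. Real
# load-disaggregation methods use richer features and probabilistic models.
import numpy as np

# Assumed 1-sample-per-second aggregate household power readings (W).
power = np.array([120, 121, 119, 1620, 1618, 1622, 1621, 130, 128, 2330, 2331, 132])

signatures = {"kettle": 1500, "oven": 2200}   # assumed nameplate step sizes (W)
threshold = 400                               # ignore small fluctuations (W)

steps = np.diff(power)
for t, delta in enumerate(steps, start=1):
    if abs(delta) < threshold:
        continue
    # Match the step magnitude to the closest known appliance signature.
    name = min(signatures, key=lambda k: abs(signatures[k] - abs(delta)))
    state = "ON" if delta > 0 else "OFF"
    print(f"t={t}s: {name} switched {state} (step={int(delta):+d} W)")
```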

Beyond home energy monitors that point “inwards,” smart meters are able to provide similar information (albeit at a lower sampling rate) directly to electric utilities. Naturally, many privacy concerns have arisen over this consistent flow of real-time data back to the utility because it can be mined with sophisticated data analytics algorithms to gain market power and potentially exploit the end-user. While the single example of smart meter real-time data flows is an important privacy concern, similar concerns can be found all over the eIoT landscape. The introduction of telecommunication and energy service companies as additional eIoT stakeholders further complicates privacy concerns and motivates the need for sensible policies that define the rights and responsibilities of data generators, owners, transmitters, and users. The interested reader is referred to further works on eIoT privacy [596–599].

3.5.2 eIoT Cybersecurity

The privacy concerns highlighted above gain further prominence in the context of cybersecurity. Returning to Fig. 3.1, every communication channel described in Sect. 3.2 has the potential to be compromised by an unintended or nefarious party. In some cases, such a party can gather data for potential gain outside of the grid. For example, a hacked smart meter could expose access to pricing information and communication networks in the home [276, 595]. In addition to the harm to end-users, the cost to the utility would be twofold. Not only could the utility be defrauded but it would also have to invest in fixing the problem [595].

In other cases, the unintended party can inject their data “upwards” to the control layer so that the associated algorithms have an incorrect picture of the physical world. For example, significant attention has been given to the impact of cyber-vulnerabilities of SCADA systems on the state estimators in operations control centers [600–602]. Similarly, nefarious parties can inject their data “downward” to the physical layer so that devices behave incorrectly. In both cases, the cybersecurity concerns become cyber-physical ones. For example, the automatic generation control feedback signal shown in Fig. 3.6 can be compromised so that the full control loop is no longer stable, consequently placing the entire power generation facility at risk of failure [245].
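
As a hedged toy illustration of why injected measurements matter, the sketch below runs a least-squares state estimate on redundant measurements and flags bad data by the size of the measurement residual; the measurement model and injected errors are assumptions, and an injection crafted to stay consistent with the model evades this simple residual check.

```python
# Toy bad-data check for a linear state estimator: redundant measurements
# z = H x + noise are fit by least squares, and the residual norm is compared
# to a threshold. The measurement model, state, and injections are assumed.
import numpy as np

rng = np.random.default_rng(0)
H = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0,  1.0],
              [1.0, -1.0]])          # assumed measurement model (4 measurements, 2 states)
x_true = np.array([1.02, 0.98])      # "true" system state (per-unit quantities)
z = H @ x_true + 0.01 * rng.standard_normal(4)

def check(label, z_meas, threshold=0.1):
    x_hat, *_ = np.linalg.lstsq(H, z_meas, rcond=None)
    residual = np.linalg.norm(z_meas - H @ x_hat)
    flag = "BAD DATA" if residual > threshold else "ok"
    print(f"{label:13s} estimate={x_hat.round(3)}  residual={residual:.3f}  [{flag}]")

check("clean data", z)

# A crude injection on one sensor is easily caught by the residual test.
z_crude = z.copy()
z_crude[2] += 0.5
check("crude attack", z_crude)

# A "stealthy" injection of the form z + H @ a stays consistent with the
# measurement model: it shifts the estimate by a but leaves the residual
# unchanged, so it evades this simple check.
a_vec = np.array([0.2, -0.1])
check("stealthy", z + H @ a_vec)
```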

These cybersecurity concerns become even more challenging in the context of the discussion in Sect. 3.2. Not only will eIoT communication networks be owned and operated by grid operators and utilities, but they will also belong to telecommunication companies and private end-users. While telecommunication networks have significant expertise in combating cybersecurity threats, private area networks are significantly more vulnerable. Consequently, significant attention will have to be given to the grid periphery to ensure that end-users are equipped with easy-to-implement cybersecurity solutions. The interested reader is referred to further works on eIoT cybersecurity [603–606].