
The emergence of the Internet of Things (IoT) destroys every precedent and preconceived notion of network architecture. To date, networks have been invented by engineers skilled in protocols and routing theory. But the architecture of the Internet of Things will rely much more upon lessons derived from nature than upon traditional (and ossified, in my opinion) networking schemes. This chapter will consider why the Internet of Things requires a fundamentally different architecture from the traditional Internet, explore the technical and economic foundations of this new architecture, and finally begin to outline a solution to the problem.

Why the Internet of Things Requires a New Solution

The architecture of the original Internet was created long before communicating with billions of very simple devices such as sensors and appliances was ever envisioned. The coming explosion of these much simpler devices creates tremendous challenges for the current networking paradigm: the sheer number of devices, unprecedented demands for low-cost connectivity, and the impossibility of managing far-flung and diverse equipment. Although these challenges are becoming evident now, they will only grow more severe as this revolution accelerates. This book describes a new paradigm for the Internet of Things; but first, the problem.

It’s Networking on the Frontier

The IoT architecture requires a much more organic approach than traditional networking because it represents an extreme frontier in communications. The scope and breadth of the devices to be connected are huge, and the connections to the edges of the network where these devices will be arrayed will be "low fidelity": low-speed, lossy (attenuation and interference may cause data to be lost, but the lost data is generally insignificant, as depicted in Figure 1-1), and intermittent. At the same time, much of the communication will be machine-to-machine and in tiny snatches of data, which is completely the opposite of the traffic on networks such as the traditional Internet.

Figure 1-1. The results of a lossy connection at an end point

Exploring the characteristics of the traditional Internet highlights the very different requirements for the frontier of the emerging Internet of Things. Conventionally, data networks have been over-provisioned; that is, built with more capacity than is typically required for the amount of information to be carried. Even the nominally “best effort” traditional Internet is massively over-provisioned in many aspects. If it weren’t, the Internet couldn’t work: protocols such as TCP/IP are fundamentally based on a mostly reliable connection between sender and receiver.

Because Moore's Law provided a "safety valve" in the form of ever-increasing processor speeds and memory capacities, even the explosive growth of the Internet over the last two decades has not exceeded the capabilities of devices such as routers, switches, and PCs, in part because they are continually replaced at 3- to 5-year intervals by models with more memory and processing power.

These devices are inherently multipurpose: they are designed with software, hardware, and (often) human access and controls. The important point is that adding networking capability, usually in the form of protocol "stacks," is nearly free: the processor power, memory, and so on already exist as byproducts of the devices' prime functions.

But the vast majority of devices to be connected in the coming IoT are very different. They will be moisture sensors, valve controls, “smart dust,” parking meters, home appliances, and so on. These types of end devices almost never contain the processors, memory, hard drives, and other features needed to run a protocol stack. These components are not necessary for the end devices’ prime function, and the costs of provisioning them with these features would be prohibitive, or at least high enough to exclude wide use of many applications that could otherwise be well served. So these simpler devices are very much “on their own” at the frontier of the network.

Today's Internet doesn't reach this frontier; it simply isn't cost-effective to do so, as will be explored later. Thus, it isn't possible to overprovision in the way networks have traditionally been built. On the frontier, devices must therefore be more self-sufficient in every aspect, from naming to protocols to security. There simply isn't the "safety net" of device performance, over-provisioning, defined end-to-end connections, and management infrastructure that traditional networking enjoys.

It Will Be (Even) Bigger than Expected

As a growing number of observers realize, one of the most important aspects of the emerging Internet of Things is its incredible breadth and scope. Within a few years, devices on the IoT will vastly outnumber human beings on the planet—and the number of devices will continue to grow. Billions of devices worldwide will form a network unprecedented in history. Devices as varied as soil moisture sensors, street lights, diesel generators, video surveillance systems—even the legendary Internet-enabled toasters—will all be connected in one fashion or another. See Figure 1-2 for some examples.

Figure 1-2. A wide variety of end devices will be connected to the Internet of Things

Some pundits have focused only on the myriad addresses necessary for the sheer arithmetic count of devices and have pronounced IPv6 sufficient for the IoT. But this mistakes address space for addressability. No central address repository or existing address translation scheme can possibly deal with the frontier aspects of the IoT. Nor can addresses alone create the costly networking "horsepower" needed within the appliances, sensors, and actuators.

Devices from millions of manufacturers based in hundreds of countries will appear on the IoT (and disappear) completely unpredictably. This creates one of the greatest challenges of the IoT: management. This is a matter both of scope and device capabilities.

Consider smartphones, for example, which are expected to become the most common computing and communications platforms in the world. Their number has recently been placed at 1.4 billion, or roughly one for every five persons on the planet. A similar figure has been estimated for PCs, bringing the worldwide total for these two types of devices to about 3 billion.

These devices incorporate the processors and memory necessary for traditional networking protocol stacks (typically IPv6 today), the human interfaces necessary for control, and an infrastructure for management (unique addresses, management servers, and so on). The prices (and profit margins) of these devices mean that it is cost-effective for manufacturers (and governments) to keep track of addresses, feature sets, software revisions, and so on.

But the situation for the actuators, sensors, and appliances of the Internet of Things is vastly different. Considering developed countries alone, the count of appliances per citizen is staggering: each of these individuals probably makes use of dozens of such devices every day. Even residents of developing countries interact with multiple end devices and sensors daily, and those numbers are growing with rising standards of living. Add to that a vast array of traffic-light controls, security devices, and status sensors operated by various levels of government, and the number of potential IoT end devices rapidly grows to a couple of orders of magnitude greater than the world's population (7 billion and counting, as of this writing).
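A quick back-of-the-envelope check of that "couple of orders of magnitude" claim, assuming a round figure of 100 connected devices per person (the per-person count is an assumption for illustration only), lands on the roughly 700 billion devices cited below:

```python
# Rough arithmetic only: ~100 devices per person is an assumed round figure.
people = 7_000_000_000          # world population, "7 billion and counting"
devices_per_person = 100        # two orders of magnitude per person (assumed)
print(f"{people * devices_per_person:,}")   # 700,000,000,000 devices
```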

The estimated 700 billion IoT devices (see Figure 1-3) cannot be individually managed; they can only be accommodated. It will simply not be possible to administer the addressing of this huge population of communicating machines through traditional means such as IPv6, nor will it be necessary to do so. Instead, self-addressing and self-classification will provide the answers, as explained in Chapter 3.

Figure 1-3. The quantity of devices in the Internet of Things will dwarf the traditional Internet and thus cannot be networked with current protocols, tools, and techniques

Terse, Purposeful, and Uncritical

The kinds of information these hundreds of billions of IoT devices exchange will also be very different from the traditional Internet—at least the Internet we’ve known since the 1990s. Much of today’s Internet traffic is primarily human-to-machine oriented. Applications such as e-mail, web browsing, and video streaming consist of relatively large chunks of data generated by machines and consumed by humans. As such, they tend to be asymmetrical and bursty in data flows, with a relatively large amount of data exchanged in each “session” or “conversation.”

But the typical IoT data flow will be nearly diametrically opposed to this model. Machine-to-machine communications require minimal packaging and presentation overhead. For example, a moisture sensor in a farmer's field may have only a single value to send: volumetric water content. It can be communicated in a few characters of data, perhaps with the addition of a location/identification tag. This value might change slowly throughout the day, so the frequency of meaningful updates will be low. Similar terse communication forms can be imagined for millions of other types of IoT sensors and devices. Many of these IoT devices may be simplex or nearly simplex in data flows, simply broadcasting a state or reading over and over while switched on, without even the capacity to "listen" for a reply.
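As a purely illustrative sketch (the field layout, names, and byte widths are assumptions, not the "chirp" format introduced in Chapter 2), such a reading could fit in a handful of bytes:

```python
import struct

# Illustrative only: a 2-byte local identification tag, a 1-byte marker for
# the reading type, and a 1-byte volumetric water content as a percentage.
def pack_reading(device_tag: int, reading_type: int, value_pct: int) -> bytes:
    return struct.pack("!HBB", device_tag, reading_type, value_pct)

message = pack_reading(device_tag=0x4A21, reading_type=0x01, value_pct=37)
print(len(message), message.hex())   # prints: 4 4a210125
```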

This raises another aspect of the typical IoT message: it’s individually unimportant. For simple sensors and state machines, the variations in conditions over time may be small. Thus, any individual transmission from the majority of IoT devices is likely completely uncritical. These messages are being collected and interpreted elsewhere in the network, and a gap in data will simply be ignored or extrapolated (see Figure 1-4).

Figure 1-4. Multiple identical messages may be received; some are discarded

Even more complex devices, such as a remotely monitored diesel generator, will generate only modestly more traffic, again in terse formats unintelligible to humans but gathered and interpreted by other devices in the IoT. Overall, the meaningful amount of data generated by each IoT device is vanishingly small, nearly exactly the opposite of the trend seen in the traditional Internet. For example, a temperature sensor might generate only a few hundred bytes of useful data per day, about the same as a couple of smartphone text messages. Because of this, very low bandwidth connections might be utilized for savings in cost, battery life, and other factors. On the IoT frontier, just as in the mythical "Old West," laconic characters will be appreciated.

Dealing with Loss

Today’s traditional Internet is extremely reliable, even if labeled “best effort.” Over-provisioning of bandwidth (for normal situations) and backbone routing diversity have created an expectation of high service levels among Internet users. “Cloud” architectures and the structure of modern business organizations are built on this expectation of Internet quality and reliability.

But at the extreme edges of the network that will make up the vast statistical majority of the IoT, connections may often be intermittent and inconsistent in quality. Devices may be switched off at times or powered by solar cells with limited battery back-up. Wireless connections may be of low bandwidth or shared among multiple devices.

Traditional protocols such as TCP/IP are designed to deal with lossy and inconsistent connections by resending data. Even though the data flowing to or from any individual IoT device may be exceedingly small, it grows quite large in aggregate across the IoT. Resending vast quantities of mostly unimportant data is clearly an unnecessary redundancy. Again, recall that for the vast majority of IoT devices, a lost message (or even a substantial string of messages) is not meaningful. (For those devices that are sending or receiving timely mission-critical information, traditional Internet protocols are likely a better fit than the emerging IoT architecture.)

The Protocol Trap

It's extremely tempting to suggest existing widely deployed protocols such as TCP/IP for the IoT (see the sidebar "Why not IP for the IoT?" in Chapter 2). After all, they have already been engineered and are widely available in protocol stacks on billions of devices such as PCs and smartphones. But, as briefly noted, most of these protocols are ill-suited for many of the end devices of potential interest for the IoT.

The basic problem is the very robustness of these protocols. They are intrinsically designed for high-duty cycles, large data streams, and reliability. Each of these otherwise desirable characteristics is a poor fit for the IoT, as noted previously. But what’s the harm, one might ask? Isn’t more capability a good thing? Not for the Internet of Things.

Mind the Overhead

A key reason why robust protocols aren't workable (or needed) for the IoT is the mismatch between the overhead they require and the minimal processing, memory, and communications capabilities of many very simple IoT devices. This may come as a shock to some IoT thinkers who envision an IP stack on every light post and refrigerator. But when the IoT is considered from the proper "end of the telescope", that is, from the edge of the network in, this immediately becomes impractical, for all the reasons noted previously. Instead, it makes sense to provide a new solution that can run side by side with existing IP-enabled end devices to efficiently manage the immense amount of data being generated by devices for which IP support is unnecessary and perhaps a liability.

Much of what has been written to date about the IoT assumes a sophisticated networking stack in every refrigerator, parking meter, and fluid valve, so this may be a difficult idea to abandon. But from the foregoing discussion, it's obvious that these devices won't need the decades of built-up network protocol detritus encoded in TCP/IP, for example. One must free one's thinking from personal experience with the networking of computers and smartphones (and, by extension, their human users) to address the much simpler needs of the myriad devices at the edge of the IoT.

Burdening otherwise simple devices such as power line sensors and coffee makers with a full networking protocol stack would serve only to massively increase the cost and complexity of billions of these devices. A traditional networking protocol stack requires a processor, operating system, memory, and other functions. Even if consolidated within a single chip, the complexity, power draw, and cost of this computing power is an unnecessary expense in the IoT. These costs will be considered later in this chapter.

As noted previously, the vast majority of IoT devices have the very basic need to send or receive a minuscule amount of data. The physical requirements may likewise be very simple: an integrated chip containing only the minimal interfaces and a means of transmission or reception.

More Smarts, More Risk

Although it may seem counterintuitive, dumber devices are safer. If every IoT device has some sort of operating system and memory, it becomes a potential subject for hacking or inadvertent misconfiguration. The operating systems and protocol stacks also require updating and management. Providing security and upgrades on the scale of the IoT for a massive number of devices, built and installed by millions of different manufacturers and individuals, is simply an impossible task (see Figure 1-5).

Figure 1-5. Contrasting the processor, OS, memory, and power necessary for traditional protocols vs. the IoT protocol

The Overhead of Overhead

Beyond the physical costs and management requirements, the data overhead of traditional networking is likewise overkill for the majority of the IoT. Traditional protocols are “sender-oriented”; that is, the sender must ensure that its message has been properly transmitted and received. This leads to extensive capabilities in terms of temporary storage of sent data, management of acknowledgments, and resending of lost or corrupted messages. And each of these robust capabilities is reflected in overhead data added to the message payload.

When this data overhead is considered in relation to the tiny snatches of data sent or received by the typical IoT device, the ratio of overhead to payload becomes ridiculous. Moreover, because each individual IoT message is completely uncritical, the check-and-retransmit overhead is an unnecessary expense in bandwidth and end device cost. It makes the most sense, therefore, for the emerging IoT architecture to be engineered for an absolute minimum of data overhead.
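To make that ratio concrete, consider the fixed headers alone (a simplified illustration; real traffic also carries link-layer framing, connection setup, acknowledgments, and retransmissions): an IPv6 header is 40 bytes and a minimal TCP header is 20 bytes, so a 4-byte reading pays at least 60 bytes of overhead.

```python
# Simplified overhead arithmetic: fixed IPv6 and minimal TCP headers only.
IPV6_HEADER = 40    # bytes (fixed IPv6 header size)
TCP_HEADER = 20     # bytes (minimum TCP header, no options)
PAYLOAD = 4         # bytes, e.g., the terse reading sketched earlier

ratio = (IPV6_HEADER + TCP_HEADER) / PAYLOAD
print(f"overhead-to-payload ratio: {ratio:.0f}:1")   # 15:1
```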

Humans Need Not Apply

Perhaps most importantly, traditional networking protocols and applications are almost all designed with the expectation of a human being on one end of the “conversation.” These traditional approaches are inherently designed to communicate concepts and context for humans.

But the networking overhead associated with smooth streaming, echoing of typed characters, and intelligible presentation of data are completely unnecessary at the machine-to-machine device level in the Internet of Things. So a large percentage of the processing and data overhead of traditional protocols is totally redundant for the IoT. An architecture for the Internet of Things should provide only the minimal amount of overhead that is needed—and only at the point that it is needed—to maximize efficiency and minimize costs.

Economics and Technology of the Internet of Things

One of the great promises of bringing IPv6 to the traditional Internet was that it would provide all the address space ever needed to connect every device, forever, including the Internet of Things, no matter how large it grew. And within that narrow definition, the promise is correct. Even allowing for the quirks in the way that only parts of the IPv6 address space have been released, the theoretical number of hosts (communicating devices) on an IPv6 Internet is on the order of 3.4×10^38.

This is indeed a huge number, which even the massive Internet of Things is unlikely to surpass. For this reason, many pundits and manufacturers (particularly those with a vested interest) have sanguinely said that IPv6 is already prepared for the Internet of Things. The world simply needs to keep doing what it has always done to incorporate the new IoT—there are more IP addresses available than grains of sand.

But this “head in the sand” approach ignores the key economic factor that will drive the deployment of the Internet of Things (as it has driven nearly every other networking technology): the cost at the end points. There are three broad areas where these costs accumulate and compel the need for a new approach in the Internet of Things: hardware and software, oversight and management, and security.

Functionality Costs Money

As noted earlier, traditional computing and communications devices such as PCs, tablets, and smartphones already incorporate processors, working memory, and storage in their design. These capabilities are necessary for their primary purpose. Adding IPv6 to these devices requires only the addition of a protocol stack that resides in storage, executes within working memory, and is powered by the processor.

Thus the incremental cost of adding IPv6 to these devices is indeed negligible, in fact barely measurable, when compared with the profit margins these devices generate. But these devices are not a significant portion of the Internet of Things! Numbering in the low billions today, their number will be dwarfed by the hundreds of billions of simple sensors and appliances in the IoT.

The vast majority of these simple end devices contain no processors, memory, or storage, and are not data-connected in any way today. This is a key point: the future of the Internet of Things is networking devices that have never been connected before. These devices are designed to be built and sold, for the most part, at the lowest cost yielding the highest margin. Those sold in developing countries, in particular, must be extremely inexpensive; yet developing countries are some of the very areas in which the IoT will grow most quickly. To capitalize on the enormous potential of the IoT, a standard low-cost solution is needed; it would enable billions of devices that would otherwise remain off the grid, never be developed, or join the massive quantity of one-off solutions being spawned even today.

Inexpensive Devices Can’t Bear Traditional Protocols

With a clearer picture of these cost realities in mind, it is immediately obvious that burdening moisture sensors, light bulbs, and the proverbial toaster with the additional hardware and software (not necessary for the basic functions of these sensors and appliances) needed to run traditional protocols such as IPv6 is a show-stopper. It has been estimated that the incremental cost of adding IPv6 to devices can be as much as $50, even in large quantities. (Note that beyond the processors and memory devices, additional Wi-Fi or Ethernet components are needed, and more power and heat dissipation will also be required).

Fortunately for the expansion of the Internet of Things, these simple devices do not require anything approaching the level of complexity offered by IPv6. Instead, simple modulation, broadcast, and receiving technologies will suffice, including non-radio-frequency solutions such as infrared and power line networking. Assuming integration into silicon packages, the cost of adding simple IoT networking (described in Chapter 2) to sensors and appliances will quickly approach $1 or less. The key is that this is barely "networking" in the traditional sense: broadcasting a state or receiving a simple instruction with no error correction, routing, or any other traditional networking functions. IoT devices are "dumb" in general, but they are exceedingly well suited to a narrow task. At a very basic level, this cost argument alone shows that the effort of creating a new solution for IoT devices is absolutely necessary. Without it, many of these new technologies and innovations would largely not come to pass, and others would be implemented at a cost that limits their usefulness. And at what cost to growth, development, and prosperity?

And as noted previously, traditional one-size-fits-all networking protocols such as IPv6 burden even the smallest payloads with 1,000 bytes of data. In today’s over-provisioned world, these wasted bytes are unnoticed. But when extrapolated to hundreds of billions of simple end devices sending and receiving hundreds of thousands of times each day, the potential for network congestion and huge expenditures by carriers is significant. New carrier build-outs to support the “plain vanilla” data networking of the IoT will be difficult to cost-justify.

Overseeing 700 Billion Devices

The manufacturers building networking equipment today likely number in the millions, but they are relatively easy to find and track because each traditional piece of networking equipment carries a MAC ID (Media Access Control identifier) assigned to its manufacturer, and a central database of those manufacturers is maintained by the IEEE (Institute of Electrical and Electronics Engineers).

For those manufacturers who are today building traditional networking equipment, one may assume a significant amount of networking knowledge. Imagine the impact of a new IoT standard on the number of network-ready manufacturers out there and the boost that would give to the worldwide economy.

Contrast this with the likely millions of firms and individuals worldwide building the kinds of simple sensors, actuators, and appliances which will be connected to the Internet of Things. It is inconceivable that all those makers of simple devices can be expected to queue up for addresses assigned by any centralized authority—or that rogue states, organizations, or individuals wouldn’t attempt to subvert such systems.

Extending this thinking, simply scanning for hundreds of billions of IPv6 addresses would take literally hundreds of years. It is one thing to put addresses on nearly a trillion devices, but quite another to find and manage one device out of that constellation. The human cost of managing an Internet of Things made up solely of sophisticated IPv6 devices would exceed the cost of any networking project on earth to date. These costs would fall hardest on strapped carriers already struggling to wring more revenue from expensive physical plant investments.

Only Where and When Needed

Of necessity, the emerging new architecture of the Internet of Things should take an entirely different approach, as described throughout this book. End devices have only locally meaningful and likely non-unique names. This is not a problem because there is networking intelligence elsewhere in the architecture at a much smaller (and thus more manageable) number of points.

And there is no need to oversee or control every maker of end devices. Because the IoT provides only limited networking capabilities at the end devices, there is little "harm" they can do to the network as a whole, and this is easily controlled through a much smaller number of "smarter" devices.

This approach is totally different from IPv6, which demands that every device have the functionality and management to act as a “peer” on the network. The Internet of Things simply cannot scale if built of peers that all must be managed. Like a massive ant colony, the IoT will scale through specialization, individual autonomy, and localized effect. In this way, costs are reduced by orders of magnitude.

Security Through Simplicity (and Stupidity)

A trite statement, but ultimately true. Because the communications with the end devices in this emerging architecture of the Internet of Things are so basic and so specialized, there are limited back doors and security risks. Again, contrast this with the "peer-to-peer" world of the IPv6 Internet, where many IP devices are exposed to hacking and cracking attempts from anywhere in the world. The global cost of Internet security breaches has been estimated at $115 billion (Symantec, 2012). With roughly 2.4 billion peer-to-peer nodes on the Internet today, this equates to roughly $50 per node (user) per year in losses. Multiplying that figure by the projected hundreds of billions of Internet of Things devices yields an unsustainably high cost for IPv6 in the IoT.

By focusing on limited networking capabilities for the end devices as described in this book, the emerging architecture of the Internet of Things drastically reduces the risks and costs associated with networking the huge population of appliances, actuators, and sensors.

Cost and Connectivity

The key for the expected expansion of the Internet of Things is connecting hundreds of billions more devices at far-reduced costs and risks. Only this emerging IoT architecture can accomplish both in a way that is cost-effective for device manufacturers, Internet carriers, and users.

Solving the IoT Dilemma

With the economic and technology challenges posed by the number and unmanageable nature of the end devices of the Internet of Things well-defined, the next step is to investigate solutions. The balance of this chapter, and indeed this book, is devoted to exploring the concepts which may be used to create an architecture (working side by side with, and enhancing the potential of, the traditional IP network) for the Internet of Things that may practically scale to the size and scope required.

Inspiration for a New Architecture

So if traditional networking architectures are not appropriate for all the potential applications of the Internet of Things, where can solutions be found? In addressing this question, fields as diverse as robotics, embedded systems, big data, and wireless mesh networking contribute concepts and technology, although none of these directly addresses the scale and scope of the Internet of Things, nor the simplicity of the vast majority of IoT end points.

There are no human-produced technology systems that scale to the massive size of the imminent IoT. So when considering techniques and processes, it is necessary to turn to nature, in which systems have evolved that scale to hundreds of billions of individual elements exchanging information (broadly defined) in some fashion. It quickly becomes clear that the only highly optimized systems exhibiting this sort of scope are populations of the natural world: colonies of social insects, the propagation of pollen, the dissemination of larval young, and so on.

Nature: The Original Big Data

The most obvious similarity between the natural systems and the emerging Internet of Things is scale—natural systems are truly massive. Billions and billions of individuals operate and interact as a population (of one species) or an ecosystem (of many species). Visual, aural, and chemical signals are broadcast and interpreted; gametes such as pollen may be distributed over vast areas by wind and currents to interact with other individuals of the same species; and huge groups of similar and dissimilar organisms share information about threats or food sources (intentionally or incidentally).

Obviously, the communication of these natural systems is not centrally controlled, nor are there elaborate protocols or retransmission schemes in place. Instead, species have evolved within the natural world in ways that make this communication possible. What are these characteristics that make this “networking” possible in the massive systems of nature?

Autonomy of Individuals

One of the most striking things about natural systems is the way in which individuals independently send and receive communications and act on the information. Even seemingly highly organized populations such as ant and bee colonies are actually made up of individuals making decisions independently. Because individuals make these choices based on simple algorithms (usually dichotomous decision points) that are shared by all, the actions of the colony as a whole are as efficient as if centrally directed.

Even more remarkably, the actual brain “computing power” available to many species in nature is quite limited. Yet they can act on stimuli, communicate threats, broadcast mating availability, and perform many other tasks vital for survival. In the natural model, the simplicity of the individual is balanced by a narrowly defined purpose to its communications.

In the same way, most individual end devices in the IoT can be (indeed, must be) very simple and autonomous. As noted previously, it will not be economically or architecturally feasible to burden these billions of devices with large amounts of computing power, memory, or protocol sophistication. When powered up, these devices must begin sending or receiving data immediately, with no setup, management, or other interaction. It is interesting to note that many social insects operate in much the same way: immediately upon emerging in adult form, they begin a task such as nurturing nearby young. Without this autonomy of function and independence of individual action, nature would not scale, and neither can the IoT.

Zones and Neighborhoods of Interest

Another aspect of natural systems that allows them to scale is the evolution of "zones" or "neighborhoods" of interest formed by "affinities," which allow individuals to act upon a specific signal among countless other signals. Bird song is an interesting example of this phenomenon. Walking through a field, one may be struck by the songs being sung by several different bird species simultaneously. These songs can have a variety of purposes, such as advertising mating availability and suitability or defining territories.

But each individual takes note only of songs from members of its own species (see Figure 1-6). The zones of interest, or neighborhoods of interest, of various bird species can overlap, and one communications medium (in this case audible frequencies transmitted through the air) is being used for all messages. But each individual bird acts only upon messages within its own group. Similarly, a viable architecture for the IoT must allow interested observers to define a neighborhood of interest (within the much larger Internet) and analyze or send data only from or to that neighborhood.

Figure 1-6. Although many different species of birds may be singing in a field, only members of the same species listen

In the Eyes of the Beholder

Another important aspect of scaling in the natural world is that many communications are receiver-oriented. This is in direct contrast with the sender-oriented nature of many traditional communications protocols, as described previously. Plant pollen represents an interesting example of this highly scalable characteristic of natural systems.

Many of us view pollen as a (literal) irritant during hay fever season. But pollen’s actual role in nature is in plant reproduction. Pollen released by the male plant is carried indiscriminately by the wind. Because pollen is a lightweight (again, literally) signal, it can be distributed hundreds or even thousands of miles by air currents. At some point, pollen falls randomly out of the air, landing on any surface. The vast majority of released pollen falls on bodies of water, bare ground, streets, or plants of another species, where it deteriorates with no effect. But some tiny portion of the total pollen released falls upon the appropriate flowering parts of a female plant of the same species. At this point, pollination takes place and seeds are generated for the next generation (see Figure 1-7).

Figure 1-7. In nature, only the "correct" receivers act on "messages" received, such as pollen. All others discard or ignore the message

The communication of pollen is thus receiver-oriented. The zone or neighborhood of interest is defined by the receiving plant, which ignores all other signals (pollen from other species). The overall network (winds and so on) does not discriminate or actively manage the transmission of pollen in any way; it's merely a transport mechanism. The "intelligence" of nature is applied only at the receiver.

In the same way, a scalable architecture for the Internet of Things out of necessity includes many elements that are receiver-oriented, with zones or neighborhoods of interest being applied at the point of data integration and collection. These integrator functions will build interesting streams of data from “neighborhoods” that are geographical, temporal, or functional.

Another way of expressing these natural-world communications interactions is in terms of publishers and subscribers. Many individuals may "publish" information in the form of calls, visual displays, pollen, and so on. But these are moot unless other individuals "subscribe" to these messages. There is no set relationship between publisher and subscriber, as there would be in the peer-to-peer world of traditional networking; the natural world is simply too large and (obviously) unmanaged. In the IoT, the principle is the same: the only way to fully extract information from the myriad possible sources is through publish/subscribe relationships, which can scale.
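A minimal sketch of this receiver-oriented, publish/subscribe pattern (the class and marker names are invented for illustration and are not the mechanisms defined later in this book): publishers broadcast indiscriminately, the medium merely transports, and only subscribers whose declared neighborhood of interest matches a message's marker act on it.

```python
class Broadcast:
    """An unmanaged medium: every message reaches every listener, unfiltered."""
    def __init__(self):
        self.listeners = []

    def publish(self, marker, payload):
        # The medium does not discriminate, acknowledge, or retransmit.
        for listener in self.listeners:
            listener(marker, payload)

class Subscriber:
    """Acts only on messages whose marker falls in its neighborhood of interest."""
    def __init__(self, name, interests):
        self.name = name
        self.interests = set(interests)

    def __call__(self, marker, payload):
        if marker in self.interests:            # every other "song" is ignored
            print(f"{self.name} acted on {marker}: {payload}")

air = Broadcast()
air.listeners.append(Subscriber("irrigation-integrator", {"soil-moisture"}))
air.listeners.append(Subscriber("lighting-integrator", {"ambient-light"}))

air.publish("soil-moisture", 37)     # only the irrigation integrator acts
air.publish("birdsong", "tweet")     # no subscriber is interested; it falls unheeded
```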

Signal Simplicity

In the preceding examples from nature, most “signals” are simple and have a single purpose. This makes them “lightweight” and easily transported through the environment, even to the fringes or frontiers of a territory. With a single purpose, they are also easily “analyzed” and acted upon at their destination. (Contrast this with the general-purpose nature of traditional networking protocols, designed with overhead sufficient to support transport of a wide variety of payloads).

Similarly, the vast majority of data transported in the Internet of Things will be very simple and single-purposed in function. Many sensor-type end devices will be communicating only simple states or conditions. If they receive any data at all, it will be simple “sets” defining minor configuration changes. Other types of devices may send nothing and receive only simple instructions or settings from a central source or function.

Besides being lightweight, another key element of natural communications, such as the broadcast of pollen, is that the individual messages are self-classified. Pollen particles exhibit a particular size and shape that “key” them to specific receivers. Bacteria and viruses are likewise structured to interact with specific hosts. These natural messages are classified for type and content externally, that is, by their shape or form. Similarly, messages in the emerging IoT will have external markers that will allow action by intermediate network elements.

Leveraging Nature

Bringing all these concepts found in nature into the emerging architecture of the Internet of Things is inherently a more organic approach. The key lesson from nature is that huge scale is possible only with simple building blocks. Rather than building upon already bloated networking protocols, the architecture of the IoT must be based upon the minimum networking requirements—with only the minimal complexity added at the precise points at which it is needed.

Peer-to-Peer Is Not Equal

Because most Internet of Things communications will be machine-to-machine, it can be tempting to consider the IoT a peer-to-peer network; the general concept of peer-to-peer architectures is extremely attractive. The prospect of billions of devices seamlessly interacting with one another would seem to allow the Internet of Things to escape the limitations of centralized command and control, instead taking full advantage of Metcalfe's Law to create more value through more interconnections.

But true peer-to-peer communication isn't perfect democracy; it's senseless cacophony. In the IoT, many devices at the edge of the network have no need to be connected with other devices at the edge of the network; there is zero value in that exchange of information (see Figure 1-8). As described previously, these devices have simple needs to speak and hear: perhaps sharing a few bytes of data per hour on bearing temperature and fuel supply for a diesel generator. Again, burdening them with protocol stacks, processing, and memory to allow true peer-to-peer networking is a complete waste of resources and creates more risk of failures, management and configuration errors, and hacking. More-sophisticated end devices may still require IP; they can exist side by side with simpler devices and be optimally served by the technologies required to maximize the potential of the Internet of Things (as will be discussed in Chapter 7).

Figure 1-8. Machine-to-machine interconnections between devices at the network edge are unnecessary: toaster-to-printer, for example

Transporting IoT Traffic

There is obviously a need to transport the data destined to (or originating from) these edge devices. The desired breakthrough for a truly universal IoT is to use increasing degrees of intelligence and networking capability to manage that transportation of data at various points in the network—but not to burden every device with the same degree of networking capability.

Billions of Devices; Three Functional Levels

To this point, the economic and practical reasons for a new architecture for the Internet of Things have been described. In addition, lessons from massively scaling systems in nature have been explored as possible models for communications in the IoT, along with the arguments for keeping the burden of communications very low on the simple end devices that will form the vast majority of the Internet of Things.

But if the communications intelligence and functionality does not exist within the end devices, other devices to transport data efficiently must be found elsewhere in the network. And if the data being sent and received by end devices is to be of any use, there must be elements of the network outside of the end devices to manage that data flow.

The most powerful concept of the emerging architecture of the Internet of Things is division of the network into three functional classes, allowing deployment of networking functionality (and cost and complexity) only where and when needed. These three classes are:

  • The end devices

  • Propagator nodes providing transport and gateways to the traditional Internet

  • Integrator functions offering analysis, control, and human interfaces to the IoT

At the edge of the network are the simple end devices, which are represented on the left in Figure 1-9. They transmit or receive their small amounts of data in a variety of ways: wirelessly over any number of protocols, via power line networking, or by being directly connected to a higher-level device. These edge devices simply “speak” their small amounts of data or listen for data directed toward them. (The means of handling this addressing will be discussed in detail in Chapter 6.)

Unlike traditional protocols such as IPv6, the IoT architecture involves no error-checking, routing, higher-level addressing, or anything of the sort at the end devices. That’s because none of these is needed. Edge devices (Level I, so to speak) are fairly mindless “worker bees” existing on a minimum of data flow. This will suffice for the overwhelming majority of devices connected to the IoT.

Figure 1-9. The emerging architecture for the Internet of Things includes end devices, propagator nodes, and integrator functions

Propagator Nodes Add Networking Functionality

The protocol intelligence resides elsewhere in the IoT network: within the Level II propagator nodes shown in the mesh in Figure 1-9. They are technologically a bit more like familiar traditional networking equipment such as routers, but they operate in a different way. Propagator nodes listen for data originating from any device. Based on a simple set of rules regarding the “arrow” of transmission (toward devices or away from devices), propagator nodes decide how to broadcast these transmissions to other propagator nodes or to the higher-level integrator devices discussed in the next section.

In order to scale to the immense size of the Internet of Things, these propagator nodes must be capable of a great deal of discovery and self-organization. They will recognize other propagator nodes within range, set up simple routing tables of adjacencies, and discover likely paths to the appropriate integrators. Similar challenges have been solved before with wireless mesh networking technology (among many others), and although the topology algorithms are complex, the amount of data exchange needed is small.

One of the important capabilities of propagator nodes is being able to prune and optimize broadcasts. Data passing from and to end devices may be combined with other traffic and forwarded in the general direction of their transmission “arrow.” Propagator nodes are perhaps the closest functional elements to the traditional idea of peer-to-peer networking, but they provide networking on behalf of end devices and integrator functions at levels “above” and “below” themselves. Any of the standard networking protocols can be used, and propagator nodes will perform important translation functions between different networks (power line or Bluetooth to ZigBee or Wi-Fi, for example).
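A rough sketch of that forwarding behavior, under stated assumptions (the class, field, and method names are invented here; the actual propagator node design is described in Chapter 4): duplicates of individually unimportant messages are pruned, and everything else is forwarded in the direction of its transmission "arrow."

```python
from enum import Enum

class Arrow(Enum):
    TOWARD_INTEGRATORS = "up"     # data flowing away from end devices
    TOWARD_DEVICES = "down"       # data flowing toward end devices

class Sink:
    """Stands in for an integrator function or a terminal end device."""
    def handle(self, message_id, arrow, payload):
        print(f"delivered {message_id}: {payload}")

class PropagatorNode:
    """Illustrative forwarding logic only, not the design from Chapter 4."""
    def __init__(self, uplinks, downlinks):
        self.uplinks = uplinks        # neighbors discovered toward integrators
        self.downlinks = downlinks    # neighbors discovered toward end devices
        self.recently_seen = set()    # used to prune duplicate broadcasts

    def handle(self, message_id, arrow, payload):
        if message_id in self.recently_seen:
            return                    # duplicate: prune rather than re-flood
        self.recently_seen.add(message_id)
        # Forward in the general direction of the message's "arrow."
        neighbors = self.uplinks if arrow is Arrow.TOWARD_INTEGRATORS else self.downlinks
        for neighbor in neighbors:
            neighbor.handle(message_id, arrow, payload)

node = PropagatorNode(uplinks=[Sink()], downlinks=[])
node.handle("m-17", Arrow.TOWARD_INTEGRATORS, b"\x25")   # delivered once
node.handle("m-17", Arrow.TOWARD_INTEGRATORS, b"\x25")   # duplicate: dropped
```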

Although the preceding describes the generic function of the propagator nodes, many will also incorporate an important additional capability: the capacity to be managed and “tuned” by integrator functions across the network. This will take the form of a software publishing agent within fully featured propagator nodes. As more fully described in Chapters 4 and 5, this publishing agent will become part of the information “neighborhood” created by one or more integrator functions. In much the same manner as a Software Defined Network, the integrator function will apply higher-level management to particular propagator nodes, controlling functions such as frequency of data transmission, network topology, and other networking functionality.

Collecting, Integrating, Acting

Integrator functions are where the data streams from hundreds to millions of devices are analyzed and acted upon. Integrator functions also send their own transmissions to get information or set values at devices—of course, the transmission arrow of this data is pointed toward devices. Integrator functions may also incorporate a variety of inputs, from big data to social networking trends, and from Facebook “likes” to weather reports.

In this emerging architecture, integrator functions are the human interface to the IoT. As such, they will be built to reduce the unfathomably large amounts of data collected over a period of time to a simple set of alarms, exceptions, and other reports for consumption by humans. In the other direction, they will be used to manage the IoT by biasing devices to operate within certain desired parameters.

Using simple concepts such as "cluster" and "avoid" (discussed in Chapter 5), integrated scheduling and decision-making processes within the integrator functions allow much of the IoT to operate transparently and without human intervention. One integrator function, running on a smartphone, computer, or home entertainment device, might be all an average household needs. Or the integrator function could be scaled up to a huge global enterprise, tracking and managing energy usage across a corporation, for example. (Integrator functions are fully explored in Chapter 5.)

When the Scope Is Too Massive

An additional device at this third level of the architecture is the filter gateway. Filter gateways are notionally two-armed routers, with a connection to the Internet and a connection to the integrator function. Integrator functions are general purpose processors like PCs and can be overwhelmed by very large amounts of data, denial-of-service attacks, and so on. So the filter gateway is an appliance that ensures that only meaningful data is forwarded to the integrator function. Filter gateways may use a simple set of rules (set by the attached integrator function) to filter the traffic presented to the integrator, restricting it to the “neighborhood of interest” only. These neighborhoods again can be geographic, functional, time-based, or some combination of many other factors.
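A sketch of that filtering idea, with invented rule and message fields (the rule vocabulary an actual filter gateway would use is not specified here): the attached integrator function supplies simple predicates, and only traffic matching the neighborhood of interest is admitted.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Message:
    kind: str        # functional marker, e.g., "soil-moisture"
    region: str      # geographic marker, e.g., "field-7"
    hour: int        # time-of-day marker, 0-23
    payload: bytes

Rule = Callable[[Message], bool]   # predicates supplied by the integrator

class FilterGateway:
    def __init__(self, rules: List[Rule]):
        self.rules = rules

    def admit(self, msg: Message) -> bool:
        # Only traffic matching every rule is forwarded to the integrator.
        return all(rule(msg) for rule in self.rules)

gateway = FilterGateway(rules=[
    lambda m: m.kind == "soil-moisture",        # functional neighborhood
    lambda m: m.region.startswith("field-"),    # geographic neighborhood
    lambda m: 6 <= m.hour <= 20,                # time-based neighborhood
])

print(gateway.admit(Message("soil-moisture", "field-7", 9, b"\x25")))   # True
print(gateway.admit(Message("birdsong", "field-7", 9, b"\x00")))        # False
```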

Functional vs. Physical Packaging

When it comes to actually packaging and delivering products, some physical devices will certainly be combinations of architectural elements. Propagator nodes combined with one or more end devices certainly make sense, as will other combinations (see Figure 1-10). But the important concept here is to replace the idea of peer-to-peer for everything with a graduated amount of networking delivered as needed and where needed. In the Internet of Things, a division of labor is required (as in ant and bee colonies) so that devices with not much to say or hear receive only the amount of networking they need, and no more.

Figure 1-10. Some devices incorporate multiple IoT functions in a single package. Here multiple end devices are combined with a propagator node that may provide networking services for additional nearby end devices

Connecting to the “Big I”

To this point, this chapter has focused on the characteristics and functions that differentiate the Internet of Things from the traditional Internet (or “Big I”).

Despite the clear and compelling reasons for a new architecture and protocol at the very edge of the Internet of Things, it is not possible to escape a fundamental truth: in order to scale to billions of devices worldwide, the traditional Internet is the only viable backbone for transporting IoT traffic. So at some point, the lightweight IoT protocols must be packaged or converted to traditional Internet protocols that may take advantage of the deployed worldwide Internet architecture.

As briefly noted previously and more fully explored in Chapter 6, the architecture of the Internet of Things provides trunking and conversion functionality at richly featured propagator nodes. Less-featured propagator nodes also exist that communicate only with lightweight IoT protocols, depending on other propagator nodes for IP conversion. This is described in detail in Chapter 4.

Thus, connections between propagator nodes may be either traditional protocols such as IPv6 or lightweight IoT protocols. More importantly, richly featured propagator nodes will provide conversion to IPv6 for routing data between end devices and their associated integrator functions. In turn, integrator functions also typically include IPv6 for direct Internet connectivity (or it can be provided by a filter gateway).
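Purely to make the conversion idea concrete (the actual mechanism is described in Chapter 6; the address, port, and use of UDP here are placeholder assumptions), a richly featured propagator node could wrap a terse IoT message in a datagram and hand it to the IPv6 Internet:

```python
import socket

# Placeholder values: a loopback IPv6 address and an arbitrary UDP port stand
# in for an integrator function reachable across the traditional Internet.
terse_message = bytes.fromhex("4a210125")   # the 4-byte reading sketched earlier

sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
sock.sendto(terse_message, ("::1", 50000))  # encapsulate and hand off to IPv6
sock.close()
```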

Smaller Numbers, Bigger Functionality

In addition, there is a relatively small number (still billions) of more-sophisticated end devices connected to the Internet of Things that incorporate mission-critical data, greater data requirements, and/or real-time data needs. These devices can justify the costs and complexity of processing, memory, and a full protocol stack, so they will connect directly via IPv6. An example is a video surveillance camera or complex process controller.

IPv6 data to and from these devices may still be combined with lightweight IoT data streams at the same integrator functions. In addition, interesting hybrid devices can develop that include both a lightweight IoT interface and a traditional IPv6 connection. In these situations, the lightweight IoT protocols might be used for normal or routine communications, with the IPv6 connections becoming active based on a particular event or condition.

Fundamentally, the IoT network protocols must coexist and interoperate with the traditional Internet and other networks such as Cellular 4G and LTE. The key challenge for the emerging Internet of Things architecture is to allow this interoperability without burdening the billions and billions of simpler end devices. The next chapter describes the simple “chirp” structure of IoT data and how it is delivered across the Internet of Things.