
1 Introduction

Digitalization is a key driver of innovation across all industries and is strategically important for providing the flexibility and adaptability needed to succeed in times of highly volatile markets and rapidly changing requirements. Even though increasingly flexible and adaptable systems have been under development for a long time, production systems remain relatively rigid. The IT world, characterized by adaptability, rapid innovation cycles, and a flexible response to customer requirements, serves as a successful counter-model.

1.1 Limitations of State of the Art Manufacturing Systems

Industrial systems combine application-specific hardware and software, compute platforms, and communication infrastructure. Since machines and systems are typically tailored to specific products, hardware and software are tightly coupled, not adaptable, and integrated as a proprietary system. Furthermore, these systems are characterized by heterogeneous technologies, manufacturer-specific ecosystems, and monolithic solutions. Compute platforms are typically highly specialized controllers implemented as dedicated devices, e.g., PLCs. While real-time performance is a key requirement, the reasons for dedicated, decentralized devices are historical and business-model-related. Virtual PLCs are a trend; however, the underlying platforms are typically still dedicated, specialized devices. Connectivity is a key enabler for industrial systems, connecting physical devices and compute platforms. In the context of digitalization, connectivity towards IT has become increasingly important. Today, various fieldbuses are used; they are optimized for their respective domains but lack interoperability and connectivity towards IT. Current trends like Time-Sensitive Networking (TSN) and related technologies like 5G, DetNet, or OPC UA on the higher layers have the potential to replace proprietary systems with standardized and interoperable solutions.

1.2 Software-Defined Manufacturing as a New Paradigm

A significant step toward IT-like flexibility and adaptability in production environments requires a paradigm shift: besides technologies, the methods, processes, business models and, in particular, the mindset must also evolve. This new paradigm is referred to as Software-defined Manufacturing (SDM). Implementing SDM requires a rethink from the system architecture down to the technical implementation. Following the example of IT and the divide-and-conquer approach prevailing there, the system is divided into cooperating subsystems with clearly defined tasks, connected by open interfaces and characterized by interoperability. Another decisive factor is consistent abstraction by means of a layer model, which encapsulates complex technologies and hides them from applications. Based on a requirements analysis in Sect. 2, we propose an architecture that enables the implementation of SDM in Sect. 3. Following the divide-and-conquer approach, our architecture covers real-time network management and orchestration. Section 4 provides concluding remarks.

2 Requirements Analysis Based on Related Work

We identify basic requirements from the automation application development life cycle (AADLC) and emerging ones from Software-defined Manufacturing (SDM). Note that we focus on software, computing and networking infrastructure. We exclude the problem of designing SDM-enabling mechanical systems.

2.1 Requirements from the Automation Application Life Cycle

The AADLC consists of four major phases. Requirements engineering and design address the specification and validation of (non-)functional requirements, which are then iteratively refined between customers and development teams [1]. One major non-functional requirement for automation software systems is adaptability [2]. The development phase focuses on implementing application logic using Programmable Logic Controllers (PLCs). During commissioning, engineering methods are needed that support a quick and error-free set-up of automation systems based on pre-engineered modules [3]. Requirements will evolve, as automation plants have lifetimes of several years. Thus, life cycle management and automated maintenance of existing applications must be employed [4, 5]. A significant problem in changing automation software is the degradation of code quality caused by uncontrolled on-site changes [6]. There is an increasing need to extend functionalities and scale automation systems [7]. Proprietary automation software frameworks make adding functionalities hard. For example, PLCs do not allow simple integration of third-party real-time software, as PLC run-times might not include the required libraries. During operation, reliable vertical and horizontal communication is required [3]. Summing up, we derive the following requirements:

  • R1: Applications’ structure must be flexible, modular and extensible.

  • R2: Virtualization and hardware-independent deployment are needed for easier adaptation of automation systems.

  • R3: Automation system software architectures should not be dependent on vendor-specific frameworks and allow simple third-party software integration.

  • R4: Changes to applications should be conducted via well-defined and quality-preserving processes while keeping track of which software version is deployed on which hardware.

2.2 Requirements Based on Published Approaches to SDM

Based on published work regarding SDM, we extend the requirements. To support SDM, the application layer of manufacturing hardware should be fully adaptable [8, 9]. This exceeds the capabilities of reprogramming within planned functions, such as replacing NC-Code. Based on a minimal platform adaption layer (PAL), functionalities and communication interfaces, i.e., the cyber part, can be defined within cyber-physical systems’ physical constraints. Orchestration, deployment, and configuration of services for basic functionalities can realize such an approach [8]. A common control plane then abstracts these low-level functionalities and provides generic interfaces to upper layers [10]. ICT (information and communications technology) infrastructure has to be reconfigurable as well to achieve the necessary flexibility [11]. Software-defined Control (SDC) [12] is a concept similar to SDM. SDC consolidates information from production and enterprise levels. A central controller conducts configuration decisions leveraging application-level reconfiguration. Software-Defined Cloud Manufacturing is described in [13]. At run-time, applications are dynamically composed to match requests from upper layers. Higher-level systems define the application layer of manufacturing resources [14]. SDCWorks [15] provides formal methods for SDC. It allows analysis and verification of, among other properties, real-time requirements. Summing up, we derive the following additional requirements:

  • R5: Manufacturing systems must be configurable by defining the application software layer.

  • R6: Defined application layers must be integrable in higher-level systems based on generic interfaces.

  • R7: The ICT infrastructure must be configurable and provide QoS concerning networking and computing.

  • R8: Higher-level systems must be able to define the automation application software layer.

3 Architecture Proposal

3.1 Integration in a Conceptual SDM Framework

Figure 1 (left) shows our approach to SDM based on [9], for which we concretize the implementation of the three layers of SDM. The conceptual framework is structured hierarchically. Initially, the steps needed for production are derived from a product description. Based on the manufacturing description and knowledge of available manufacturing hardware, necessary automation applications are defined by combining generic functionalities. These are encapsulated in reusable units of deployment, i.e., services. Now, it is known which high-level functionalities, e.g., milling a part based on G-Code, a composed application provides. Thus, northbound interfaces for integration in higher-level systems are defined. Standardized job interfaces, e.g., OPC UA-based ISA 95 job control [16], are used for this. Composed applications are then tested via virtual commissioning and formal methods, such as SDCWorks [15], which allow the verification of QoS and real-time requirements. Then, the composed application descriptions are annotated with said requirements. The annotated deployment descriptions now have to be deployed on-site on physical infrastructure. Deployment is done using the architecture below, combining network management and orchestration (see Fig. 1, right).

Fig. 1.

Extended approach to SDM based on [9] on the left and Network management and orchestration as integral components of SDM on the right. Here, the SDC controller generates deployment descriptions as described above.

3.2 Virtualization and Orchestration

Services are encapsulated in containers or virtual machines. Linux patched with PREEMPT_RT is used as the operating system (OS), as real-time containers and VMs are only available for this OS. We opt for reservation-based hierarchical real-time scheduling based on the SCHED_DEADLINE policy. Corresponding and compatible scheduling mechanisms are available for containers [17] and virtual machines [18]. A Kubernetes-based real-time orchestrator such as REACT [19] is used to assign real-time services to compute nodes. Network and application configuration are the further steps needed before composed applications are fully operational.
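The reservation-based approach can be illustrated with a minimal admission test: under EDF with implicit deadlines, as used by SCHED_DEADLINE, a set of reservations fits on one core if the summed utilization (runtime divided by period) does not exceed the core's capacity. The sketch below is illustrative; the class and service names are assumptions, not part of the cited systems.

```python
from dataclasses import dataclass

@dataclass
class Reservation:
    """CPU reservation in the style of SCHED_DEADLINE parameters:
    execution budget (runtime) per activation period, in microseconds."""
    name: str
    runtime_us: int
    period_us: int

def admissible(reservations, capacity=1.0):
    """EDF admission test for a single core with implicit deadlines:
    schedulable if the total utilization does not exceed the capacity."""
    utilization = sum(r.runtime_us / r.period_us for r in reservations)
    return utilization <= capacity

# Hypothetical real-time services and their reservations.
rt_services = [
    Reservation("motion-control", runtime_us=200, period_us=1000),  # 20% load
    Reservation("sensor-fusion", runtime_us=300, period_us=2000),   # 15% load
]
assert admissible(rt_services)                                      # 35% total
assert not admissible(rt_services + [Reservation("vision", 900, 1000)])
```

A real orchestrator such as REACT must additionally account for multiple cores and hierarchical groups, but the per-core budget check follows this pattern.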

3.3 Network and Application Configuration

Description of the Physical Infrastructure. Knowledge of the underlying infrastructure is essential for network configuration and application deployment. In addition to general information about the type of connected devices, such as bridges and (bridged) endpoints, the available resources and capabilities of the individual devices are particularly relevant. Examples are the OS, the installed RAM, the number of processor cores, and their clock frequency. The capabilities of the devices can provide information about their real-time characteristics, both in terms of program execution and communication. In addition, the topology of the network must be known. The topology includes a description of which devices are connected in the network and the properties of the individual connections, such as wired or wireless and the maximum transmission rate. Mechanisms like the Link Layer Discovery Protocol (LLDP) enable partially automatic network topology detection. This protocol allows information about the connection layout, the type of connected devices, and other information such as MAC addresses to be read out. However, as things stand, additional mechanisms or manual additions are needed to retrieve device-specific information, such as available resources. To enable the mutability required by SDM through dynamic configurations and deployments, such a holistic mapping is necessary so that deployability can be evaluated while maintaining real-time capability.

Description of Distributed Applications and Service Modularization. To meet the requirements of SDM and a flexible real-time network, the description of the physical infrastructure alone is not sufficient: a description of the distributed real-time application is also required to perform dynamic deployments. In addition to a list of the individual real-time applications involved and their specific (hardware) requirements, the interactions and interrelationships between the services are particularly important. On the one hand, data exchange must be considered to answer the questions of which data (quantities) should be exchanged, how often, and between which services. On the other hand, it must be possible to define dependencies, for example, in the form of fixed sequences and deadlines to be met with regard to communication between the services. This holistic description of the logical flow of the overall application, i.e., the service description, in combination with the available (hardware) resources of the physical infrastructure and the current workloads, enables the deployability of the services on the nodes to be evaluated. In addition, optimizations are possible, e.g., to utilize the nodes consistently or to defragment unused resources left behind by terminated services. To achieve the goals of SDM and thus the interaction of different components, converged communication is required, as well as converged services. In addition to interchangeability, a high combinability of multiple services also increases flexibility and the holistic feasibility of the overall applications. One way to achieve this is a uniform service specification. The specification defines a fixed scheme or structure for the creation and implementation of services. This structure enables a uniform definition of inputs, outputs, and additional meta information.
When designing the overall application, this enables an immediate check of the combinability of services based on the required inputs and the provided outputs. Furthermore, additional encapsulation of complexity can be achieved by abstracting real-time communication: abstracted send and receive methods are provided to the service, generalizing the underlying communication mechanisms (e.g., access to the network interface and time handling) depending on the technologies and operating systems used. The goal of modularizing and abstracting services is to simplify the creation of the actual application logic. The implementation does not necessarily require knowledge of the underlying technologies (e.g., TSN, 5G, WiFi 6, OPC UA, or MQTT). The abstraction also enables a separation of responsibilities: developers define individual applications without knowledge of the infrastructure used, and the integrator, e.g., the SDC controller from Fig. 1, defines the interaction and data exchange of the distributed real-time application by linking multiple individual applications without knowing the concrete implementation details.
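A uniform service specification with such a combinability check could be sketched as follows, with combinability decided by matching each required input against the outputs a preceding service provides. All names, fields, and example services are assumptions for illustration, not a proposed standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Port:
    name: str
    datatype: str      # e.g., "float64", "bool"

@dataclass
class ServiceSpec:
    name: str
    inputs: tuple      # Ports this service consumes
    outputs: tuple     # Ports this service provides
    meta: dict         # additional meta information, e.g., cycle time

def combinable(producer, consumer):
    """A consumer can follow a producer if every required input is
    provided as an output with a matching name and datatype."""
    provided = {(p.name, p.datatype) for p in producer.outputs}
    return all((p.name, p.datatype) in provided for p in consumer.inputs)

sensor = ServiceSpec("sensor-read", inputs=(),
                     outputs=(Port("position", "float64"),),
                     meta={"cycle_us": 1000})
control = ServiceSpec("pid-control", inputs=(Port("position", "float64"),),
                      outputs=(Port("setpoint", "float64"),),
                      meta={"cycle_us": 1000})
logger = ServiceSpec("torque-logger", inputs=(Port("torque", "float64"),),
                     outputs=(), meta={})

assert combinable(sensor, control)
assert not combinable(control, logger)  # "torque" is not provided by pid-control
```

An integrator composing the overall application can run this check before any deployment, failing fast on incompatible service chains.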

Fig. 2.

Overview of the deployment and orchestration workflow.

Deployment and Configuration Workflow. The upper part of Fig. 2 shows the deployment and orchestration workflow described below, while the lower part details the sequence of the individual steps.

The two descriptions, physical infrastructure and logical flow (service composition), are mapped onto each other in the deployment and configuration of the distributed real-time application. On the one hand, the communication relationships between the individual services are defined. As mentioned above, this is done by the SDC controller in Fig. 1 by linking the inputs and outputs of the respective services to define the logical data flow of the overall application. Constraints make it possible to specify this logical flow further using conditions to be met (e.g., a maximum latency between two services) and thus limit the solution space for the subsequent configuration. The communication relationships are detailed by specifying various Quality of Service (QoS) parameters. These include the transmission interval, priority specifications, the required reliability, e.g., through redundant transmission, and the earliest and latest possible transmission times within an interval. Another component of the deployment is the assignment of individual services to physical end devices. This mapping represents an optimization problem that can be solved manually or automatically using suitable algorithms. The solution space depends on the specified communication relationships and the defined constraints, as well as on the resources provided by the end devices and the resources required by the services. A concrete example is compliance with a minimum-throughput constraint between two services, which has to be evaluated against the available bandwidths given by the topology (e.g., link speeds, number of hops). The result is used as input for the configuration. First, the communication is configured. For TSN, the standard provides three different approaches: centralized, decentralized, and hybrid configuration [20].
Only the centralized approach is considered here, assuming the existence of a Central User Controller (CUC) and a Central Network Controller (CNC). The CNC represents a central network management instance, and the CUC can be understood as an intermediary between the user or application and the CNC. The CUC first collects the communication relationships previously defined in the deployment description. In this case, a communication relationship describes a TSN stream consisting of the talker, the listeners, and the specified QoS. From the CUC, all stream requests are then forwarded to the CNC for calculation. The CNC has an overview of the streams already in the network and thus knows the available resources. Accordingly, a schedule, i.e., the specific transmission times, is calculated based on the streams already computed and the requested streams. If no solution can be found, services can be redistributed to other nodes. This process is repeated until an (optimized) solution is found. The computed configurations for the requested streams are then sent back to the CUC. These contain, among other things, the specific send offset relative to the interval start, the calculated accumulated latency, the VLAN ID, the PCP, and the source and destination MAC addresses. The existing information from the deployment description and the mapping is then enriched with the stream configurations and serves as input to the subsequent container orchestration step. Accordingly, the orchestrator can deploy the containers on the nodes assigned to them, along with the computed communication configuration. Alternatively, it is feasible to communicate this information to the containers only after deployment using a provided interface. In this way, subsequent changes regarding communication (e.g., additional listeners) could also be taken into account. However, this requires an additional communication channel and additional effort.
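The CNC's schedule computation can be illustrated with a drastically simplified sketch that packs requested streams into non-overlapping send offsets within a common interval on a single shared link. A real CNC solves a much harder per-link, multi-hop scheduling problem; all names, MAC addresses, and parameter values below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class StreamRequest:
    talker_mac: str
    listener_mac: str
    frame_us: int       # transmission time of one frame on the link
    interval_us: int    # common cycle length

@dataclass
class StreamConfig:
    offset_us: int      # send offset relative to the interval start
    vlan_id: int
    pcp: int            # priority code point

def compute_schedule(requests, interval_us=1000, vlan_id=100, pcp=6):
    """Greedy first-fit on one shared link: place each stream at the next
    free offset; return None if the interval cannot hold all streams
    (the caller may then redistribute services and retry)."""
    configs, cursor = [], 0
    for req in requests:
        if cursor + req.frame_us > interval_us:
            return None
        configs.append(StreamConfig(offset_us=cursor, vlan_id=vlan_id, pcp=pcp))
        cursor += req.frame_us
    return configs

reqs = [
    StreamRequest("aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02", frame_us=120, interval_us=1000),
    StreamRequest("aa:bb:cc:dd:ee:03", "aa:bb:cc:dd:ee:04", frame_us=120, interval_us=1000),
]
cfgs = compute_schedule(reqs)
assert [c.offset_us for c in cfgs] == [0, 120]
```

The `None` result corresponds to the "no solution found" case in the workflow above, which triggers redistribution of services to other nodes before rescheduling.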
The deployment is followed by the initialization of the services based on the mapping and the communication configuration. First, the initialization must set up the real-time communication channels: based on the configuration, the talkers and listeners are created using their MAC addresses. With OPC UA, this can be implemented, e.g., through the publish-subscribe extension (Part 14) [21]. Likewise, the execution of the service must be coordinated, with the actual application logic of the service being encapsulated in a cyclically executed RT thread. The process within a cycle is comparable to the execution logic of a PLC: at the beginning of the cycle, the inputs are read, i.e., data is received; based on these inputs, new data is calculated and finally written to the outputs.
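The PLC-like cyclic execution described above can be sketched as follows. The abstracted `receive` and `send` callables stand in for the generalized communication methods mentioned earlier; they, and the toy doubling logic, are assumptions for illustration, not a concrete API of the proposed architecture.

```python
import time

def run_cyclic(logic, receive, send, cycle_s, cycles):
    """Cyclically execute application logic, PLC-style: read inputs,
    compute, write outputs, then sleep until the next cycle start."""
    next_start = time.monotonic()
    for _ in range(cycles):
        inputs = receive()          # read inputs (abstracted RT communication)
        outputs = logic(inputs)     # execute the encapsulated application logic
        send(outputs)               # write outputs
        next_start += cycle_s
        delay = next_start - time.monotonic()
        if delay > 0:
            time.sleep(delay)       # wait for the next cycle start

# Toy usage: double each received value over three 1 ms cycles.
received, sent = iter([1, 2, 3]), []
run_cyclic(lambda x: 2 * x, lambda: next(received), sent.append,
           cycle_s=0.001, cycles=3)
assert sent == [2, 4, 6]
```

In a real deployment, the loop would run in an RT thread under the SCHED_DEADLINE reservation described in Sect. 3.2, and `receive`/`send` would map to the configured TSN streams, e.g., via OPC UA PubSub.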

Table 1. Comparison of the identified requirements and the proposed solutions through the architecture

4 Conclusion and Future Work

A major goal of SDM is to increase the reconfigurability of automation systems by making the functionalities of manufacturing hardware definable through software. Based on a requirements analysis regarding the automation application life-cycle and SDM, we proposed a conceptual architecture that allows higher-level systems to define the application layer of manufacturing systems while meeting QoS requirements concerning networking and computation. Furthermore, we describe how such software-defined functionalities can be integrated into higher-level systems based on a job order-oriented northbound interface. Table 1 summarizes how the requirements identified in Sect. 2 are addressed by the proposed architecture. As part of future work, we plan to implement the described architecture.