
1 Introduction

The idea of a digital twin (DT) originated among product lifecycle management (PLM) experts. A DT is a virtual representation of a product: a CAD model provides the geometry, and a PLM tool provides functional and architectural information. But what is special about a DT? It brings real-time information into the model. This is why the application of digital twins gains momentum with technical progress in the Internet of Everything (IoE).

IoE provides a way to bring PLM to the next level. Producers are able to observe their product in the field and gain insights into how it is used [1]. A direct feedback loop from the customer to the producer can be established, and producers can make adjustments to the product while the customer is using it. The need for major changes and decisions about the product becomes evident. The DT has the potential to significantly change methods and tools in every phase of PLM [2]. So why does not every company have a digital twin, or is at least working to implement one?

It is easy to grasp the idea of a virtual representation, but it is hard to understand the concept in full, and even harder to convince people of it. Of course, there has to be a use case with a business value, and the technology is far from plug-and-play. In the following paper we discuss these hypotheses and present a proven way to make a DT feasible in a company.

2 Conceptual Foundation

2.1 The Digital Twin in Our View

Within five years, there will be over 21 billion connected machines, cars, buildings and other smart devices in the world [3], all developed to collect, analyze and share all kinds of data. In total, by 2020, our accumulated digital universe of data will grow to around 44 trillion gigabytes [4]. This development is accompanied by the increasing sophistication of data processing technologies; just think of recent advancements in AI, high-performance computing or quantum computing. One of the most recent examples is Google's AI algorithm beating a professional player at the Chinese board game Go, which is considered the world's most complex board game. As a result, the world is currently at the dawn of a technology-driven revolution fueled by a combination of the explosion in data production and data processing technologies. The combined effect of these two forces is changing the way we operate every "object" and manifests itself in the form of DTs.

So what are DTs? And why are companies so excited about them? A DT is a virtual, dynamic model which includes everything that is known about an object. In other words, DTs are exact replicas of their physical counterparts that change with the current environment in real time to help companies (and people, for that matter) monitor, test, treat and maintain any number of systems [5]. They are continuously enriched with data, integrating real-time location, temperature, energy consumption and other relevant measurements.

At a conceptual level, research by the Fraunhofer Institute for Intelligent Analysis and Information Systems [6] has identified several key features of DTs.

Comprehensiveness:

DTs can include multiple features of physical things in order to anticipate broad use cases. These features may be spatial, material, structural and design attributes or include aspects of their close environment such as usage, weather or schedules. Still, the level of granularity with which a physical object is represented by its twin can range based on the requirements of its use: from a mere identification to a representation at an atomic level. Note that DTs can be virtual aggregated views of data and information located at various sources.

Linkage:

DTs can be linked to other DTs in various ways. Relationships between DTs may be described as “part of”, “requires”, “communicates with” etc. A linked data representation of a DT facilitates the linking between two twins through the use of a unique identifier.
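
To make this concrete, the following minimal sketch (our own illustration; identifiers and relationship names are invented, not a standard vocabulary) represents twins by unique identifiers and their links as subject-predicate-object triples:

```python
# Minimal sketch of linking digital twins via unique identifiers.
# Identifiers and relationship names are illustrative, not a standard vocabulary.

# Each twin is known by a unique identifier (here: simple URN-like strings).
twins = {
    "urn:dt:pump-0815": {"type": "Pump"},
    "urn:dt:motor-4711": {"type": "ElectricMotor"},
    "urn:dt:plant-1": {"type": "Factory"},
}

# Relationships are stored as (subject, predicate, object) triples,
# mirroring a linked-data representation.
links = [
    ("urn:dt:motor-4711", "part of", "urn:dt:pump-0815"),
    ("urn:dt:pump-0815", "part of", "urn:dt:plant-1"),
    ("urn:dt:pump-0815", "communicates with", "urn:dt:motor-4711"),
]

def related(twin_id, predicate):
    """Return all twins linked to twin_id via the given predicate."""
    return [obj for subj, pred, obj in links if subj == twin_id and pred == predicate]

print(related("urn:dt:motor-4711", "part of"))  # ['urn:dt:pump-0815']
```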

Interoperability:

Interoperability is an important prerequisite of linkage. It means that DTs can reason and make decisions with each other rather than operating in isolation. Why is this interesting? Think of humans having DTs. To understand how the body works and improve its health, your DT must model more than the physical parts of the body: it must model how your entire body works together. This model includes not just organs, bones and other parts but also describes processes like blood flow, metabolism and the interactions among organs. Leg pain may be caused not by a leg problem but rather by a pinched nerve in the spine. A standalone model of a leg won't help you diagnose that problem, and if you only received medication for the leg pain without treating the root problem, your condition would not improve.

Instantiation:

It is important to differentiate between an instantiated DT and an uninstantiated, abstract DT. An abstract twin may be compared to an "abstract class" in object-oriented programming: it covers features that all DTs have in common, but is in itself not a fully functional DT. An instantiated DT is fully functional; its instance defines specific features not shared with other DTs.
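
The object-oriented analogy can be made explicit. The following sketch is our own illustration (class and attribute names are hypothetical): the abstract twin defines what all DTs share, while the instantiated twin adds features specific to one physical asset:

```python
from abc import ABC, abstractmethod

class AbstractDigitalTwin(ABC):
    """Covers features all DTs share; not a fully functional twin by itself."""
    def __init__(self, twin_id: str):
        self.twin_id = twin_id        # unique identifier (see "Linkage")
        self.state: dict = {}         # latest known state of the physical object

    def update(self, sensor_data: dict) -> None:
        """Common behavior: merge incoming real-time data into the state."""
        self.state.update(sensor_data)

    @abstractmethod
    def health(self) -> str:
        """Each instantiated twin must define its own health assessment."""

class PumpTwin(AbstractDigitalTwin):
    """Instantiated DT: adds features specific to one pump, not shared by all DTs."""
    def health(self) -> str:
        return "critical" if self.state.get("vibration_mm_s", 0) > 7.1 else "ok"

pump = PumpTwin("urn:dt:pump-0815")
pump.update({"vibration_mm_s": 8.3})
print(pump.health())  # critical
```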

Evolution and Traceability:

DTs evolve over time as their physical objects evolve, but they keep track of this evolution. To allow for independent maintenance and version control, the properties from different domains related to the physical object should be represented in mutually distinct submodules of the overall DT.
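
One possible way to realize such mutually distinct, versioned submodules is sketched below; this is an assumption on our part, not a structure prescribed by the literature:

```python
import datetime

class Submodule:
    """One domain-specific part of the twin with its own version history."""
    def __init__(self, domain: str):
        self.domain = domain
        self.versions = []  # list of (timestamp, payload) entries

    def commit(self, payload: dict) -> None:
        self.versions.append((datetime.datetime.now(datetime.timezone.utc), payload))

    def latest(self) -> dict:
        return self.versions[-1][1] if self.versions else {}

# The overall DT aggregates independently maintained submodules.
twin = {
    "geometry": Submodule("geometry"),
    "electronics": Submodule("electronics"),
}
twin["geometry"].commit({"cad_revision": "B"})
twin["geometry"].commit({"cad_revision": "C"})
print(len(twin["geometry"].versions))  # 2 -> the evolution remains traceable
```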

Importantly, DTs are more than just a 3D model of a physical thing. They are not a mere user interface that enables remote control over a physical thing, nor a digital view of a thing that focuses on only one particular aspect.

2.2 Current Players with a Successful Implementation of a DT

DTs are emerging across every industry, in business and in consumer markets. Industrial DT use cases typically target maintenance and equipment health, predictive maintenance, and operations and performance optimization [7]. DTs are already adding tremendous benefits in terms of efficiency gains and innovation. For example, GE is embracing the concept of the "digital twin" – a data model of a specific physical asset – in a bid to eliminate unplanned downtime of aircraft engines and other systems [8].

GE factory twins integrate operational analytics at the edge to ensure the efficient operation of all of the factory's assets, e.g. by predicting failures, and to find ways to optimize against a multiplicity of KPIs, e.g. balancing revenue against remaining life [9].

In a similar vein, Lufthansa Technik is working towards digitally modelling every single customer aircraft throughout the entire lifecycle as part of a digital aircraft twin. That way they can perform predictive maintenance effectively across their entire fleet [10].

Moreover, Siemens has built a fully automated car factory which produces not only physical cars but also their DTs, including all relevant data. These twins are continuously fed with real-time data once the car has left the factory halls, reducing time to market from 30 to 16 months [11].

Dassault's 3DEXPERIENCE platform creates a DT of a city which attempts to capture the complex spatial and temporal implications of life and work, supporting planners as they seek to imagine, develop and experience sustainable urban solutions [12].

3 Research Method

The key goals of this research are to understand how customers across industry verticals see the state of DTs for themselves, how relevant they consider the approach to their business, and what barriers they face in implementing DT solutions in their operations. We analyzed and integrated approaches for easing the transition to DTs and condensed them into a practical guide. This paper gives practitioners a profound understanding of the underlying architecture of DTs and provides practical ways to organically integrate DT models that create undisputed business and customer value. The research was conducted in cooperation with companies of the Deutsche Telekom Group, industrial clients of Detecon across verticals (e.g. automotive), business consultants of Detecon Consulting (member of the Deutsche Telekom Group) and further subject-matter experts. We conducted a series of interviews and surveys, combined with practical learnings derived from DT projects undertaken by Detecon for clients. This form of validation offered the chance to check the developed approach against experiences from client projects.

4 The Benefit of DTs in General

The reason why DTs are becoming more and more widespread is mainly because of two factors: their ability to integrate large amounts of static, real-time, structured and unstructured data and to combine this data with advanced data processing methods such as AI, machine learning or high-performance computing [7].

Let us consider analytics. Importantly, the business value created by means of DTs depends on the analytics proficiency with which they are used. Because DTs serve as a single source of truth for all information related to an asset, they can be used for descriptive analytics. Informing operators about what happened, based on trending information on historical or current events, represents the lowest or basic business value that can be created by employing DTs. At the next higher level, DTs are used for diagnostic analytics, which helps operators understand the reasons for a current situation by leveraging past data to understand why something has taken place. Even higher business value is created if DTs are used for predictive analytics: using current and historical facts to predict future or unknown events can help machine operators perform maintenance before a part breaks down. The highest business value is generated by means of prescriptive analytics. In this case, the DT examines a set of possible actions to recommend actions and support decision-making based on diagnostic and predictive analyses of complex data. More specifically, DTs can not only anticipate and predict problems, they can also issue proactive measures accordingly. This is done by combining real-time data on the asset's precise state with similarity-learning techniques that access the knowledge of thousands of other, similar DTs, running thousands of simulations to optimize individual outcomes and provide mitigation. Additionally, the DT may provide operators with a forecast of the impact of implementing the suggested options, along with a confidence estimate for that impact actually materializing, allowing the operator to make the optimal decision.
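
The four analytics levels can be illustrated with a deliberately simplified sketch; the data, thresholds and action scores are invented for illustration only:

```python
from statistics import mean

history = [61.0, 63.5, 67.2, 72.8, 79.1]  # e.g. hourly bearing temperature in deg C
LIMIT = 90.0                               # hypothetical failure threshold

# Descriptive: what happened?
print(f"mean temperature so far: {mean(history):.1f} C")

# Diagnostic: why did it happen? (here: check for a known failure pattern)
trend = history[-1] - history[0]
if trend > 10:
    print("diagnosis: sustained upward trend, consistent with lubricant degradation")

# Predictive: what will happen? (naive linear extrapolation)
rate = (history[-1] - history[0]) / (len(history) - 1)
hours_left = (LIMIT - history[-1]) / rate
print(f"predicted threshold breach in ~{hours_left:.0f} h")

# Prescriptive: what should be done? (recommend the best of several actions)
actions = {"do nothing": 0.0, "re-lubricate": 0.7, "replace bearing": 0.9}
best = max(actions, key=actions.get)  # pick the action with highest modeled benefit
print(f"recommended action: {best}")
```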

In general, the application of analytics moves from descriptive to prescriptive. Thus the nature of how to apply data and analytics changes from merely collecting information and doing basic trending to instead focusing on how to leverage data and analytics for true optimization. We see a shift in leveraging data to protect physical assets from financial downside (equipment failure leading to unavailability and expensive repairs) to enabling an upside (purposefully timed maintenance that balances operational risk and reward). As the analytics provide more valuable insights on the operational risks and opportunities, they need to be connected to both the people who make operational decisions and to the advanced controls that can adapt and maneuver the machines towards the desired outcomes.

Finally, our experience in client projects shows that the DT is used to create demonstrable business value along several axes:

  • Individual: The DT is applied to individual assets, tracking history and performance over the asset’s lifetime.

  • Adaptable: The DT infrastructure and models are adaptable. For example, they can transfer to another part or asset class, or adapt to new scenarios or new factors.

  • Continuous: The DT models are continuously updated as the physical asset is operated. At any moment the DT is a faithful representation of the current state of the asset; the output of the model changes with every fuel burn hour.

  • Scalable: Benefit is derived when hundreds or thousands of like assets have a DT. A DT tracking a single asset learns from similar assets.

5 Challenges to Face for an Implementation

5.1 Understanding and Communicating the Digital Twin

If the intent to implement a digital twin is there, it is absolutely necessary to understand the concept behind it. However, not only the visionary or evangelist must understand the concept, but also the rest of the company.

Based on our experience, the problem starts with the understanding. As seen in Sects. 2.1 and 2.2, there are several views and implementations of the topic. A general definition of a digital twin is hard to find because it is always customized. Once a common understanding is established, management needs to be confident about the idea. Management will always ask for the business value behind it, so our recommendation is to identify a real business value within the company.

The investment varies between use cases. Especially if the reference point is a big project, it will be hard to state the value, because management is wary of the investment. Indeed, some use cases can be implemented with a lower investment. As practitioners, we recommend developing a prototype, which is covered in detail in Sect. 6. It will also help to capture all technical challenges, which are covered in Sect. 5.3. Once the technology for the use case is clear, it is much easier to calculate the cost, and consequently the business case behind it.

The prototype also helps to communicate the potential of the DT. People must experience the digital twin live. Enthusiasm rises as the idea matures in the heads of management and other employees: they start thinking of different use cases beyond the intended one without any further prompting.

Once the idea is in everybody's head, the structured preparation remains. The following questions become relevant:

  • Which use cases are relevant?

  • Which ones will be prioritized and implemented?

  • Which are the major steps?

  • What efforts are necessary to achieve the desired benefit?

  • What timetable is achievable?

  • Is the organization in a position to implement the project?

Although the potential seems endless, the implementation requires consistent action. There will be a transition phase in which expenses are incurred but the benefit cannot yet be realized. Especially here, it is important to communicate the value of the idea.

5.2 Use Case, Business Value and Business Model – How to Get Started

During our research, we found that many clients face challenges in orienting themselves among the endless use cases and opportunities that digital twins provide [5]. Developing a short-, mid- and long-term strategy that is holistic and consistent at the same time is not easy. Related to this, allocating the right level of internal and external resources (monetary and personnel) is even harder. The most important takeaway from our analysis is to place the customer experience and impact at the center of any digital twin strategy, use case or activity. We have seen the negative side effects of an approach that is technology-focused ("What can I do with the digital twin?") and then looks for a suitable use case. The more effective way is a systematic and relentless focus on where the company can massively enhance customer experience. We have developed a four-step process that helps set up a functional framework for clients who are in the early stages of their digital twin maturity (Fig. 1).

Fig. 1. Four-step process for identifying, selecting and implementing digital twins

Step 1 | External Inspiration: Due to their relatively nascent stage, digital twins are still very new to the majority of clients (especially in the manufacturing sector). Given the vast array of potential use cases and the strong level of abstraction, it is difficult to get a tangible idea of use cases. We therefore strongly recommend developing a detailed understanding of the use cases and activities that clients from other verticals are working on by meeting and engaging with them. On top of that, these conversations have to be supplemented with dialogues with technology suppliers and innovative startups in order to get a holistic picture. External inspiration is very helpful for getting tangible input that fosters ideation and thought processes about relevant use cases for the client's own company. It is recommended to get external help and advice from agencies/consultancies who have proven to have a strong network and access in the field of digital twins across clients, vendors and startups.

Step 2 | Inhouse Impact Analysis: As a next step, the focus has to shift to the internal potential of applying digital twins. Digital twins can have impact across the entire product lifecycle, from early-stage prototyping to the shipping of final products, including aftersales. We recommend conducting the inhouse analysis along the phases of the product lifecycle in order to distinguish between them and keep a clear focus. A very helpful exercise is to jointly work out the top five most important challenges that exist today in each of the phases, and to work out a few use cases that would potentially have a significant impact on customer delight. It is extremely important to maintain a relentless customer focus at this stage. The aim of this step is to develop relevant use cases in each phase of the product lifecycle.

Step 3 | Shortlisting & Assessment: After creating a longlist in each of the phases, it is crucial to conduct a rigorous and systematic assessment of each of these ideas with subject-matter experts (advisors, vendors, internal employees, customers) in order to filter out which topics are worth working on first. The assessment has to be conducted in these four areas:

  • Technical: How complex would a digital twin be for the use case?

  • Customer: How much would customers directly or indirectly benefit?

  • Commercial: What costs occur for prototyping and scaling?

  • Strategic: Is this use case going to exist and be relevant in 5+ years?

In order to assess how customers would benefit from a use case, it is helpful to assess the impact in three dimensions: time (e.g. does it make a process faster?), money (cutting costs with predictive maintenance, or driving revenue by monetizing data or complete digital twins) and quality (e.g. reducing error rates, improving engineering quality). Most clients, especially in the manufacturing sector, care a lot about these dimensions. The use case must have a significant impact on one or more of them.
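
One simple way to operationalize this assessment is a weighted scoring sheet across the four areas. The sketch below is purely illustrative; the weights, use cases and scores are invented:

```python
# Illustrative weighted scoring of longlisted use cases (scores 1-5, higher is better).
# Weights and numbers are invented for demonstration, not empirical values.
weights = {"technical": 0.2, "customer": 0.4, "commercial": 0.2, "strategic": 0.2}

use_cases = {
    "predictive maintenance": {"technical": 3, "customer": 5, "commercial": 4, "strategic": 5},
    "virtual commissioning":  {"technical": 2, "customer": 4, "commercial": 3, "strategic": 4},
    "remote monitoring":      {"technical": 5, "customer": 3, "commercial": 5, "strategic": 3},
}

def score(ratings: dict) -> float:
    """Weighted sum over the four assessment areas."""
    return sum(weights[area] * value for area, value in ratings.items())

# Rank the longlist; the top entries are candidates for prototyping (Step 4).
for name, ratings in sorted(use_cases.items(), key=lambda uc: score(uc[1]), reverse=True):
    print(f"{name}: {score(ratings):.2f}")
```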

Step 4 | Prototyping & Implementation: After the detailed assessment has been conducted, clients should move on to build functional prototypes that are piloted in the field and, after that, implement the final solution at scale [13]. At this stage, it is very important to deeply consider the interdependencies of any solution for the entire product lifecycle, internal operations and processes. Especially for clients with a nascent level of development in the area of digital twins, building prototypes that are not deeply integrated into IT systems and do not require large-scale platforms is a vital way to understand whether the solution is promising and how internal employees and external customers perceive it. Piloting the prototype for a few months provides valuable feedback and prevents a company from making large-scale investments in solutions that are not suitable.

5.3 Technical Challenges that Come with the Architecture

The value chain plays an important role in getting an overview of these challenges, mainly because a DT architecture has a lot in common with, or is even identical to, an Internet of Things (IoT) architecture (Fig. 2):

Fig. 2. Architecture of a digital twin

The architecture consists of physical devices, an optional device management layer, a connectivity solution, a platform, and the actual application. The physical devices capture the data from the real twin. They transport the information to the platform, or it is stored temporarily in the device management layer. The device management layer has the task of controlling all devices. This is sometimes necessary when the devices come from different suppliers or the device itself is complex. It can also be implemented in the device itself, which is why it is not mentioned consistently in the literature. Usually a cellular network establishes the connectivity between the device and the platform. The platform itself is the managing part of the architecture on the software side. At the end, we have the actual application for the end user. While the general architecture is always the same, the characteristics of the whole system differ from case to case. They depend heavily on the technology chosen for connectivity, for the platform and especially for the application.
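
To make the data flow through these layers concrete, the following sketch (a simplification of our own; class names and values are illustrative, and connectivity is abstracted to a method call) passes a reading from the device through device management and the platform to the application:

```python
import random
import time

class Device:
    """Captures data from the physical twin (here: a simulated sensor)."""
    def read(self) -> dict:
        return {"sensor": "temp-1", "value": 20 + random.random() * 5,
                "ts": time.time()}

class DeviceManagement:
    """Optional layer: controls devices and may buffer data temporarily."""
    def __init__(self, devices):
        self.devices, self.buffer = devices, []
    def collect(self):
        self.buffer.extend(d.read() for d in self.devices)

class Platform:
    """Managing part on the software side; stores and processes twin data."""
    def __init__(self):
        self.store = []
    def ingest(self, readings):
        self.store.extend(readings)  # connectivity (e.g. cellular) abstracted away

class Application:
    """The end-user view of the digital twin."""
    def show(self, platform):
        latest = platform.store[-1]
        print(f"{latest['sensor']}: {latest['value']:.1f} C")

mgmt, platform, app = DeviceManagement([Device()]), Platform(), Application()
mgmt.collect()
platform.ingest(mgmt.buffer)
app.show(platform)
```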

In contrast to an IoT platform, the DT carries context information through dependencies and interactions. Its information is not limited to the object depicted but also includes its environment. Furthermore, a DT differs in its application: it always relates to the product lifecycle, and the level of detail depends on the phase.

Based on the high potential we have seen in the preceding chapters, one question arises: why does not every company have a digital twin or start to develop one? Some challenges remain even with great progress in technology. Here is an extract of the most important technical challenges; they do not all apply to every use case, but they may arise:

Device

On the one hand, the processor has to be small so that it can easily be attached to a device. On the other hand, energy consumption has to be low so that it lasts for long periods such as several months or even years. The scope needs to be defined with the right set of measurements: only those that are important for modeling the functionality of the product. Going one step further, it is also possible to collect data from other digital twins or from IoT infrastructure. The DT then needs a connection to those infrastructures; we are not there yet, but as the number of digital twins increases, this option becomes more feasible.

Device Management

First of all, it is a managing component, so it needs to be connected to the digital twin's devices. It controls the information flow and possible signals for actuators. The device management also monitors the batteries of the devices. This has a significant impact on the architecture of the digital twin, depending on the architecture of the physical twin and how energy is provided. At this point, we have to decide where the processing takes place: on the local device, in the device management layer, or entirely on the platform? An additional device management layer increases complexity, but it can make life easier.

Connectivity

Connectivity is crucial: if no data is transferred, there is no digital twin. Connectivity has to be managed within the devices and among the devices. Indoor reception has to be guaranteed, and in the case of an edge application, connectivity under weak signal conditions needs to be established. Sometimes the connection has to span the globe, which is especially important if it is established via cellular networks. Furthermore, the connection needs to be secure and meet high security standards: almost every piece of information is critical because it relates to sensitive product data.

Platform

A powerful cloud platform is the first thing that comes to mind when thinking about a digital twin platform, and indeed it is a very important part. There are many cloud providers on the market, and every provider has its own specialty. At this point, the actual use case of the digital twin plays a significant role: does the digital twin concentrate on controlling and monitoring, or are real-time observation and actuation necessary? The option to get access to, or guarantee access for, other digital twins is another important point. It might not seem important today, but it will have a huge effect as more companies establish digital twins.

A different point is information access within the twin. Mastering the data about components and modules requires a detailed concept. Interpretations are only valid if the specification of the physical twin and its components is available. This information must either be stored in the platform, with the challenge of keeping it up to date, or be available through access to the device or device management, in which case the platform needs an interface for that. This question becomes even more important once information exchange between digital twins is considered.

Application

Challenges with the application mostly concern the relevant systems and tools. The information on the platform is available to everyone, but every user gets access through the PLM system. This might be a central piece of software with interfaces to more specific software such as CAD. An alternative is access through the specific software itself, which is harder to implement. The easy way is to introduce a system used only for the digital twin application.

No matter which way is chosen, the system needs to manage big data volumes, because the amount of data will explode sooner or later. Furthermore, the software needs to be capable of AI and other advanced analytics methods. This is the intelligence of the digital twin, where raw data is transformed into actual information.

6 Prototyping

For clients who are at an early stage of their digital twin maturity and who don't have large internal teams of hardware and software developers, it is recommended to implement use cases in an organic way. The four-step process outlined in Sect. 5.2 is aimed at shortlisting high-impact use cases and prototyping them. The motivation for prototyping, and for piloting those prototypes, is to get early-stage feedback from real users and customers on the usefulness of the solution and the modifications necessary for a full-scale solution. It is a lean way to experiment before scaling up. Implementing digital twin solutions very often requires far-reaching and disruptive resources across hardware, software, connectivity and analytics, resulting in significant change management for the internal teams of a client and for external customers. Before deciding to go "all in" and invest in solutions that run in real operations, it is crucial to build prototypes that are mature enough to be piloted internally [7]. Also, for digital twins to be used across the organization, we see an organic implementation as more successful, especially with regard to internal acceptance, compared to a traditional top-down implementation approach [14]. There are four success factors for the prototyping stage:

Target Picture:

Clearly define what a prototype must be able to achieve so that it can be piloted in a way where users can provide valuable feedback. It should also be laid out how an end product or solution shall be designed.

“Silo-Prototyping”:

The prototype shall ideally not be integrated into the existing IT infrastructure and processes of a client; hence it shall operate in complete autarky. IT integration is very often a complex process, and in many cases prototypes can be built in a way that engineers can still test them and provide relevant feedback.

Off-the-Shelf/Frugal Prototyping:

As outlined in Sect. 5.3, the technical architecture comprises Device, Device Management, Connectivity, Platform and Application. For the prototyping stage, it is recommended to use very simple forms of each of these dimensions in order to keep prototyping costs in balance and to become operational quickly. For example, using Arduino and IoT kits for the device part, 4G or WiFi for connectivity and public cloud infrastructure for the platform part might be good enough for the prototyping stage.
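
A frugal device-side prototype can be as small as the following sketch, which simulates a sensor reading on an IoT kit and posts it to a cloud ingest endpoint over WiFi. The endpoint URL is a placeholder, and HTTP/JSON is just one assumed transport (MQTT would serve equally well):

```python
import json
import random
import time
import urllib.request

# Placeholder endpoint: replace with the ingest URL of your public-cloud platform.
ENDPOINT = "https://example.com/dt/ingest"

def read_sensor() -> dict:
    """Stand-in for a real Arduino/IoT-kit sensor reading."""
    return {"device": "proto-01", "temp_c": round(20 + random.random() * 5, 2),
            "ts": time.time()}

def push(reading: dict) -> None:
    """POST one reading as JSON to the cloud endpoint."""
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(reading).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        print("ingest status:", resp.status)

if __name__ == "__main__":
    push(read_sensor())  # in a pilot, loop and sleep between readings
```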

Development:

The developers needed for building digital twin prototypes are crucial for the development and pilot phase. Experienced teams are important due to the complexity and interdependencies of digital twins. It is therefore recommended to work alongside technology partners or innovation centers who have experienced developers, and to outsource the prototyping phase to a certain extent [15]. Combining internal and external teams helps achieve an early-stage knowledge transfer, which also contributes to the change management process required when implementing digital twins in an organization.

Each prototype shall be piloted for a few months inside the organization, allowing internal and external users to provide qualitative and quantitative feedback [14]. This helps to design and develop the strategy for large-scale implementation. This phase has to be planned in detail with domain experts (advisors, vendors, system integrators etc.).

7 Outlook

Digital twins are undoubtedly at a very early stage of their evolution, with many uncertainties in the marketspace and ecosystem. However, the value they add for manufacturers and customers is undisputed across many areas of the product lifecycle. Digital twins are here to stay – and in the future they will become a standard, similar to how Computer-Aided Design (CAD) models are ubiquitous today. The pace of innovation and the variety of IoT sensors will make digital twins much more heterogeneous, and the multi-layered architecture adds complexity alongside. Over the next five years, we see digital twins being adopted by more and more manufacturers, with the twins they design becoming more sophisticated. The penetration rate will grow and change the PLM landscape significantly. Key barriers to adoption today, such as missing standardization, interoperability issues and vendor lock-ins, will slowly disappear. It is not yet decided who will drive the change – large-scale enterprises (top-down approach) or midsize companies (bottom-up approach). Customers who are at a very early stage of digital twin adoption should start now to design a comprehensive short-, mid- and long-term strategy that is holistic, customer-centric and organic. Implementing digital twins at scale is comparable with a full switch to an Enterprise Resource Planning (ERP) system: it is complex, will take a lot of resources and needs significant change management. With the technology becoming more mature over time, we see internal change management as one of the most important and most underestimated success factors in this transition. Different teams across the organization have to work together and support each other during the implementation phase. This cannot be dictated by a traditional top-down management approach, but must come through an organic and inclusive method based on experimenting, prototyping, piloting and then scaling digital twins in the areas with the most significant customer value.