
Datenbank-Spektrum, Volume 18, Issue 1, pp 15–25

Data Processing in Industrie 4.0

Data Analysis and Knowledge Management in Industrie 4.0
  • Frank Werner
  • Robert Woitsch
Schwerpunktbeitrag (Focus Article)

Abstract

The pressure on companies to increase their flexibility and efficiency in manufacturing is constantly growing. Factory managers therefore need to be able to obtain information in real time across physical production systems for better decision making. Transparency at the production and strategic levels, for example, offers the advantage of being able to respond more quickly to volatile demand (time-to-market) and helps reduce lead times and downtimes. This can lead to a significant production gain and competitive advantage. Current approaches struggle to bring results from the IoT world to decision makers in an appropriate manner. We introduce data models that serve as a mediator to create a better understanding between factory owners and data analysts. Particular challenges lie in the orchestration of complex process steps, the vertical transparency of information, and mutually contradictory optimization calculi (e.g., cost, speed, quality, sustainability). Through better communication between factory managers, data analysts, and people working at the line side, the aforementioned configurations can be implemented more transparently and consequently more efficiently.

Keywords

Smart data management · Complex event processing · Data analytics · Industrial Internet of Things (IIoT)

1 Introduction

Decision makers in industry are nowadays confronted with a flood of information that needs to be collected, monitored, and correlated before it becomes useful. Production processes are often complex, involving thousands of components from a plethora of suppliers. Interruptions related to unforeseen events in manufacturing (e.g., machine breakdown, unscheduled maintenance, or software problems), transportation (e.g., vehicle breakdown, traffic delay, wrong delivery), or even at higher tiers in the supply chain (e.g., component quality, availability of materials) may lead to losses of hundreds of thousands of euros per hour. In addition, there may be a high variation in the orders received from customers: orders may change just a few days before the delivery date, or order quantities may deviate from the numbers actually requested. All of these circumstances require a flexible production environment. Existing work on agility and flexibility in production and manufacturing can be found in [1, 2].

Following the Industrie 4.0 movement, situations like these can be mastered and handled efficiently if current information is collected, placed in the right business context, and presented to decision makers in an appropriate way. Otherwise, the sheer volume of useful information remains in data silos, and consequently tangible business value remains hidden and cannot be leveraged. By combining information sources from production and bringing them into an appropriate context in terms of end users and business aspects, this information may reveal potential production issues, indicate critical situations, and ultimately be exploited to increase business value. In this work, we sketch a solution that will be developed and further investigated within a joint research project called DISRUPT. Following this idea, information is collected from sensor systems at the production level and subsequently aggregated and correlated at the data level using complex event processing (CEP). The ability to give predictions, based on models previously trained through a machine learning process, enhances deterministic CEP capabilities with features that allow actual forecasts. The resulting data-driven Key Performance Indicators (KPIs) can be enriched with available production knowledge (so-called semantic lifting) and presented by means of management dashboards to the responsible decision makers in a suitable manner. Deviations from reference values can therefore not only be detected close to real time in combination with predictions; the proposed approach also allows decision makers to effectively influence production via the modelling perspective and the computation of aggregates and compound figures. Hence, management dashboards can be more easily personalized to the needs of decision makers, in collaboration with data analysts.

Outputs from the sensor management platform are used as input for real-time streaming analytics (aka CEP) in order to compute data-driven KPIs. These are subsequently interlinked (so-called semantic lifting) via applied knowledge management to enable dashboarding, which reveals the business context and allows strategic production monitoring (cf. Fig. 1).

Source events (e.g., rejection ratio, downtime, etc.) are collected from the sensors within the IoT and Sensor Management layer. On the layer above (Streaming Analytics & Prediction layer), base events are aggregated into compound figures using streaming analysis based on EPL (Event Processing Language) rules executed on a CEP engine. Certain events – e.g., the downtime of machinery – may concurrently trigger the PMML (Predictive Model Markup Language) execution engine to predict possible outcomes based on historic behaviour. All of the aforementioned results form compound figures which are passed to the upper level, again as compounds of aggregated events. The Knowledge Management layer aligns compound events with the business layer using a semantic database and semantic lifting. Eventually, the enriched figures are displayed at the Business Context and Strategic Production Monitoring level using dashboards and monitoring applications.
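To make the layering concrete, the following minimal Python sketch traces one event through the stack just described: base events are aggregated into a compound figure, which is then semantically lifted for the dashboard. All names (BaseEvent, aggregate, semantic_lift) and the choice of a mean as the compound figure are illustrative assumptions, not DISRUPT components.

```python
# Minimal sketch of the layered event flow (hypothetical names throughout).
from dataclasses import dataclass
from statistics import mean

@dataclass
class BaseEvent:          # emitted by the IoT and Sensor Management layer
    sensor_id: str
    kind: str             # e.g., "downtime", "rejection_ratio"
    value: float

def aggregate(events, kind, window):
    """Streaming Analytics layer: fold the last `window` base events of one
    kind into a compound figure (here simply a mean)."""
    recent = [e.value for e in events if e.kind == kind][-window:]
    return {"kind": kind, "compound": mean(recent), "n": len(recent)}

def semantic_lift(compound, context):
    """Knowledge Management layer: attach business context so the dashboard
    can display the figure as a KPI rather than a raw number."""
    return {**compound, "goal": context["goal"], "unit": context["unit"]}

events = [BaseEvent("m1", "downtime", v) for v in (3.0, 5.0, 4.0)]
kpi = semantic_lift(aggregate(events, "downtime", window=3),
                    {"goal": "efficient production", "unit": "min/h"})
print(kpi)   # the enriched figure a dashboard would render
```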

2 Data Collection and Event Aggregation Framework

2.1 Real-time Streaming Analytics

The data analytics layer consumes all kinds of relevant information from IoT devices and computes compound figures through data aggregation, correlation, or analytics based on mathematical models. Its design has been inspired by concepts from the \(\lambda\)-architecture [4], which permits the analysis of real-time information while offering means to execute prediction models previously trained using machine-learning algorithms and data science (cf. Sect. 2.2).
Fig. 1

High-level logical architecture of the proposed data- and knowledge management solution

Streaming analytics is based on a commercial solution [5] with an in-memory architecture that enables real-time processing of extremely fast, large data volumes – orders of magnitude larger than traditional database-based IT solutions can handle. The CEP rules are expressed in EPL and need not be specified and defined at design time. These rules could, e.g., calculate spikes to identify whether individual sensor values are within normal operating bounds, or the gradient of recent values to tell how much a sensor's value is changing. Figure 2 illustrates an example of a CEP rule written in EPL.

Following the work of [11] and [10], we allow dynamic adaptation of event patterns within the CEP following a model-based approach: CEP rules are modelled at the service layer (e.g., within the dashboard displaying business context and process overviews) and transformed into executable EPL rules that can be injected into the CEP engine at run time. This approach is feasible because all meta-model information from the sensor devices is available through the attached sensor management system.

The following figure (cf. Fig. 2) gives an example of a monitoring script written in EPL which contains three rules. The first two rules monitor event streams (temperature and pressure) and execute a print statement if values exceed a preset threshold. The third rule fires if the temperature is 2% above the target temperature, followed within 3 seconds by a pressure rise exceeding 5% of the maximum pressure.
Fig. 2

CEP Monitoring Script in EPL
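Since Fig. 2 itself is not reproduced here, the following Python sketch emulates the behaviour of the three described EPL rules on a replayed event stream. The thresholds, the event tuple layout, and the reading of "pressure rise" as a delta between consecutive readings are assumptions; the original script is written in EPL and executed on the CEP engine.

```python
# Emulation of the three rules described for Fig. 2 (values are assumed).
TEMP_LIMIT, PRESSURE_LIMIT = 80.0, 9.0      # rule 1 and 2 thresholds
TARGET_TEMP, MAX_PRESSURE = 75.0, 10.0      # rule 3 reference values

def monitor(stream):
    armed_at, last_pressure = None, None
    for ts, kind, value in stream:          # (seconds, "temp" | "pressure", reading)
        if kind == "temp" and value > TEMP_LIMIT:          # rule 1
            print(f"{ts}: temperature above threshold ({value})")
        if kind == "pressure" and value > PRESSURE_LIMIT:  # rule 2
            print(f"{ts}: pressure above threshold ({value})")
        if kind == "temp" and value > 1.02 * TARGET_TEMP:  # rule 3, trigger part
            armed_at = ts
        if kind == "pressure":
            rise = value - last_pressure if last_pressure is not None else 0.0
            # rule 3, "followed-by" part: pressure rise > 5% of max within 3 s
            if armed_at is not None and ts - armed_at <= 3.0 and rise > 0.05 * MAX_PRESSURE:
                print(f"{ts}: correlated temperature/pressure pattern detected")
                armed_at = None
            last_pressure = value

monitor([(0.0, "pressure", 8.0), (1.0, "temp", 77.0), (2.5, "pressure", 8.7)])
```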

2.2 Predictive Analytics

In parallel to the streaming analytics described before, events are distributed via the message bus to be stored in an Event Store within the batch layer (cf. Fig. 3). The Event Store can be an SQL database (e.g., PostgreSQL), a non-relational database (e.g., Cassandra, Apache CouchDB), or could as well be implemented using Big Data technology (e.g., Apache Hadoop). The sole purpose of the Event Store is to persist data from the IoT event sources and make it available for further processing within the data analytics phase. Here, the events are used for predictive modelling via machine learning. Applications such as KNIME [7], WEKA [8], R [9], and other statistical analysis tools are used. With the help of these programs and procedures (e.g., Bayesian networks, clustering, Gaussian processes, neural networks, support vector machines), models are trained on the collected events to obtain a PMML model. This model can subsequently be executed on the predictive execution engine [12] to hypothesize outcomes (based on the trained models) from triggering events.
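The paper's tool chain uses KNIME, WEKA, or R for this step; as a hedged Python equivalent, the sketch below trains a classifier on hypothetical historic events and exports it as PMML using the sklearn2pmml package (feature names, labels, and file name are assumptions; sklearn2pmml needs a local Java runtime for the export).

```python
# Sketch of the batch-layer training step: fit a model on historic events
# and export it as PMML for the predictive execution engine.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn2pmml import sklearn2pmml
from sklearn2pmml.pipeline import PMMLPipeline

# hypothetical historic events read from the Event Store
events = pd.DataFrame({
    "temperature": [71.0, 75.5, 79.8, 82.1, 73.2, 81.0],
    "pressure":    [8.1, 8.4, 9.2, 9.8, 8.0, 9.6],
    "breakdown":   [0, 0, 1, 1, 0, 1],   # label: machine broke down soon after
})

pipeline = PMMLPipeline([("classifier", DecisionTreeClassifier(max_depth=3))])
pipeline.fit(events[["temperature", "pressure"]], events["breakdown"])

# The resulting PMML file can then be deployed on the execution engine.
sklearn2pmml(pipeline, "breakdown_model.pmml")
```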

2.3 Event Message Bus

Reliable transport is provided by a message bus [6] which offers a publish-subscribe mechanism to allow fast and efficient distribution of relevant information within the platform. The message bus not only handles events coming from the IoT devices and raw sources; it also facilitates reliable transport of the outcomes of the streaming analytics and predictive analytics components.
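The project uses a commercial messaging product [6]; purely to illustrate the publish-subscribe pattern it provides, the sketch below uses an MQTT broker via the paho-mqtt client (broker address and topic names are assumptions, and the 1.x-style client API is assumed).

```python
# Publish-subscribe sketch with MQTT standing in for the commercial bus.
import json
import paho.mqtt.client as mqtt   # paho-mqtt 1.x style API

def on_message(client, userdata, msg):
    event = json.loads(msg.payload)
    print(f"received on {msg.topic}: {event}")

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)            # local broker assumed
client.subscribe("plant/line1/events/#")     # CEP and batch layer subscribe
client.loop_start()

# a sensor gateway publishes a base event; all subscribers receive it
client.publish("plant/line1/events/temperature",
               json.dumps({"sensor_id": "t-17", "value": 79.8}))
```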

2.4 Effective Sensor and IoT Device Management

The lowest layer provides an effective, holistic sensor management that enables connectivity to the platform by providing search functionality for connected devices, collection of status information, device localization, remote control, software and firmware management, and troubleshooting. This layer is cloud-enabled to facilitate seamless M2M communication and essentially defines device parameters such as sensor type, device location, expected reading values, and the like. The proposed approach is not restricted to IoT devices but also allows interfacing with Web services and other sources of information to complement the available information.
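A sketch of the kind of device metadata and search functionality this layer provides follows; the Device fields mirror the parameters named above, but the schema itself is an assumption, not the platform's actual data model.

```python
# Sketch of the device metadata the sensor management layer maintains.
from dataclasses import dataclass

@dataclass
class Device:
    device_id: str
    sensor_type: str                  # e.g., "temperature", "pressure"
    location: str                     # e.g., "line 1 / station 4"
    expected_range: tuple             # plausible reading bounds
    firmware: str = "unknown"

registry: dict[str, Device] = {}

def register(device: Device):
    registry[device.device_id] = device

def find(sensor_type: str):
    """Search functionality: all connected devices of a given type."""
    return [d for d in registry.values() if d.sensor_type == sensor_type]

register(Device("t-17", "temperature", "line 1 / station 4", (0.0, 120.0)))
print(find("temperature"))
```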
Fig. 3

IoT Platform – Streaming Analytics and Data Prediction

Effective sensor and IoT device management, the event message bus, real-time streaming analytics, and predictive analytics are the proposed instruments to collect, process, and partly manipulate the data. An important add-on is the semantic enrichment of this data. The semantics are either given by the sensors (via the sensor management layer), the data schema, or the way the data have been collected (e.g., operational semantics); a semantic enrichment of the data, however, would enable not only a machine-based interpretation of the data but also cross-domain and cross-sensor processing. In the following, we propose a model-driven approach to add these semantics to the data in order to achieve smart data management.

3 Smart Data Management

Data, information, knowledge, and their interpretation are a key challenge in information technology in general and in data management in particular. The aim is not only to collect data, but also to provide the corresponding context to support human and/or machine interpretation of that data. We therefore see the dashboard not only as a collection, abstraction, and visualisation of data, but as a decision support tool that behaves according to the knowledge layer; this layer either supports decision makers or, in well-defined cases, also enables automated machine-based interventions.

The logical architecture in Fig. 1 also introduces the so-called knowledge management layer as an additional layer on top of traditional data management dashboards. We propose to use concept models for the realisation of such a knowledge-based layer, as conceptual models can support the full range of knowledge representation, from semi-formal models that support human interpretation up to strictly formal models that enable machine interpretation [14]. Semi-formal models represent knowledge using graphical models in combination with textual descriptions, whereas strictly formal models partly use the graphical representation but focus more on formal semantics via ontologies and corresponding inference rule definitions. The model-driven, knowledge-based approach we propose here acts as a mediator between domain-specific goals and the monitored data. The model-based approach can map either from top to bottom or from bottom to top.

Top-to-bottom mediation applies when human decision makers design their intentions and then configure corresponding sensors and data formats that fit the purpose of the intention. A typical top-down mapping would be to define parts produced per hour and then configure sensors – e.g., a scanner that counts the produced parts at the end of the line – providing the appropriate figure for the intended purpose. In the architecture depicted in Fig. 3, this is, for example, realised through the model-based definition of event patterns and their translation to executable EPL-based CEP rules (cf. Sect. 2.1). A bottom-to-top mapping, on the other hand, starts from findings that can be derived from the data and then identifies the corresponding business impact to which the data can make a meaningful contribution. An example is the detection of an anomaly in machine condition monitoring data. After analysing the potential impact of this anomaly, the finding may be allocated to a domain-specific intention such as predictive maintenance.
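The top-down example can be made concrete with a few lines of Python: a hypothetical end-of-line scanner emits one timestamped event per finished part, and the intended figure "parts produced per hour" is computed from those events (timestamps in seconds; all names are illustrative).

```python
# Hedged sketch of the top-down mapping: intention -> sensor -> KPI.
def parts_per_hour(scan_timestamps, window_s=3600.0):
    """Data-driven KPI: count scan events inside the last hour."""
    if not scan_timestamps:
        return 0.0
    now = max(scan_timestamps)
    in_window = [t for t in scan_timestamps if now - t <= window_s]
    return len(in_window) * 3600.0 / window_s

scans = [10.0, 130.0, 245.0, 371.0, 502.0]   # five parts scanned
print(parts_per_hour(scans))                  # the figure shown on the dashboard
```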

3.1 Model-Driven Data Management

In this sub-section, we illustrate how intentions can be mapped to sensor data and vice versa. Model-driven support for "smart data management" consists of (a) model-based approaches to support data management and (b) model-based approaches to introduce smartness. Introducing smart monitoring in production, applying business intelligence, and applying data analytics to predict future behaviour are well-researched topics; with respect to smart monitoring (a), this includes, but is not limited to, multi-agent-based systems [23, 24], smart sensors [25, 26], edge computing [27, 28, 29], and large-scale sensor architectures [30].

For model-based approaches that introduce smartness (b), a range of operations research tools exists [31], such as model predictive control designs [32, 33] and their realisation in cyber-physical systems [34], as well as data analytics for manufacturing [35, 36, 37, 38]. The novel characteristic of our approach is the combination of potentially any of the aforementioned approaches with concept modelling [39]. Concept modelling supports the conceptualisation of human knowledge and enables its integration with the formal representations used for smart sensing, business intelligence, or data analytics. Hence, the novelty is the combination of human knowledge and machine-generated knowledge, which supports continuous knowledge extraction and provides an additional semantic layer on top of the current machine-driven meta-data.
  • Model-driven management has been realised with ADOSCORE [15], a realisation of a scorecard approach which has been applied not only to financial strategy management – as originally intended – but also to knowledge and performance monitoring. This knowledge and performance monitoring has been iteratively improved to fit the purpose of real-time and complex data analysis.

  • Smart data management has been realised with PROMOTE [16, 17], a realisation of process-oriented knowledge management which can be extended with machine-interpretable formalisms such as workflows, semantics, rules, cases, or agent configurations.

3.2 Model-Based Data Management

Model-based data management externalises the knowledge on how to map business intentions to collected data [18].
Fig. 4

Critical Success Factor Model

Fig. 5

Cause and Effect Model

Business Intention with Critical Success Factor –

The intention of the dashboard is captured by collecting critical success factors and grouping them into goals that need to be achieved. This is described in Fig. 4, where the goal "efficient production" is described with corresponding success factors dealing with non-production time. This step is often performed in workshops in order to collect the available knowledge of decision makers and establish the Critical Success Factor Model.

Cause and Effect Model –

The Cause and Effect Model transforms the original business intention into a measurable dashboard structure, where each critical success factor becomes a so-called KPI and each group of critical success factors becomes a goal or sub-goal. Such a model is depicted in Fig. 5, where the goal is specified and the success factors are measured with corresponding KPIs. We propose domain-specific KPIs, which differ from the commonly used data KPIs: each KPI is specified with domain knowledge about the specific part of the goal it measures, the type of measure, the definition of an ambitious and a realistic target, and the time interval. Our proposal is thus to define KPIs as domain-specific rather than data-specific.
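A hedged sketch of such a domain-specific KPI record follows; the field names mirror the attributes listed above (goal part, measure type, ambitious and realistic targets, time interval) but are otherwise assumptions.

```python
# Sketch of a domain-specific KPI carrying the knowledge listed above.
from dataclasses import dataclass

@dataclass
class DomainKPI:
    name: str
    goal: str                 # the (sub-)goal this KPI measures
    measure_type: str         # e.g., "ratio", "count", "duration"
    ambition: float           # ambitious target value
    realistic: float          # realistically achievable value
    interval: str             # e.g., "per shift", "per day"

oee = DomainKPI("overall equipment effectiveness", "efficient production",
                "ratio", ambition=0.85, realistic=0.72, interval="per shift")
print(oee)
```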

Data Access with “\(\alpha\)”-Indicator Model –

Each domain-specific KPI is mapped to a so-called data indicator with a corresponding algorithm. In order to simplify the model, this indicator combines (a) the data collection using sensors, (b) the data storage using snapshot databases, and (c) the data access algorithm. The sample provided in Fig. 6 shows the list of all data sensors, where each data sensor has the corresponding technical access information to send a query or invoke an API.
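The following Python sketch illustrates the bundling idea behind such an indicator – sensor access information, snapshot storage, and access algorithm in one record; the endpoint URL and the mean as access algorithm are assumptions.

```python
# Sketch of one data indicator combining (a) collection, (b) storage,
# and (c) the access algorithm.
from dataclasses import dataclass, field
from typing import Callable

def mean(values):
    return sum(values) / len(values)

@dataclass
class AlphaIndicator:
    sensor_id: str
    endpoint: str                       # technical access information (assumed URL)
    access: Callable = mean             # (c) data access algorithm
    snapshots: list = field(default_factory=list)  # (b) snapshot database stand-in

    def collect(self, reading: float):
        self.snapshots.append(reading)  # (a) data collected from the sensor

    def value(self) -> float:
        return self.access(self.snapshots)

ind = AlphaIndicator("t-17", "https://example.org/api/sensors/t-17")
for r in (77.0, 78.5, 79.8):
    ind.collect(r)
print(ind.value())   # the atomic figure consumed by the mapped domain KPI
```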

The construction of the aforementioned models is a knowledge-intensive task performed by a group of experts. The cause and effect model is typically constructed with the help of decision makers, plant engineers, or managers. The \(\alpha\)-indicator model is typically worked out with IT experts and technicians who are aware of the sensors, the data, and their meaning. The model-based approach is used to jointly extract knowledge with the goal of continuous organisational learning and hence a continuous improvement of the data analysis.
Fig. 6

\(\alpha\)-Indicator Model

This combination is seen as a unique way to access relevant data, hence the name \(\alpha\)-indicator: just as the alphabet provides the atomic characters from which words are constructed, the \(\alpha\)-indicators are the atomic data providers from which data management dashboards are constructed. Traditionally, straightforward top-down modelling is applied by analysing the business intentions and the critical success factors in collaborative workshops. The realisation of an appropriate data sensor is then a stepwise transformation into a strict data access format.

The benefit of this approach is that one quickly gains a dashboard streamlined to the well-specified business intentions, with an extensive analysis of the critical success factors, where each success factor is monitored with a corresponding KPI.

3.3 Smart Data Management

Model-based approaches can be enriched with knowledge-based techniques to result in smart solutions [19]. The knowledge-based support targets two challenges of the aforementioned model-based data management:
  1. Qualitative assessments of KPIs are currently often harvested in the form of questionnaires or ratings from working groups or responsible persons. Although this is an excellent way to harvest the opinions, heuristics, and experience of knowledge workers within the organisation, current findings in data analysis, anomaly detection, and visual analytics have proven to massively support such assessments, and show the opportunity to even replace them in particular cases.
  2. Current digitization trends, driven by new technologies such as edge computing, IIoT, smart sensing, and cloud computing, enable a massive increase in sensor data and hence more complete status reports. Managing the different sensor data thus becomes a challenge of its own, which can be supported by knowledge-based techniques.

ADONIS® is a business process management tool based on (a) a meta-model approach that allows quick individualisation by configuring the meta-model and (b) a data repository that stores not only model information but also semantic and technical information. In this context, ADONIS® is proposed, as it provides powerful business process management features to describe, analyse, simulate, and document the production process, and allows integrating the semantic information necessary when describing the data. ADOSCORE was originally developed as a balanced scorecard tool but has evolved into a more generic dashboarding tool. The strength of ADOSCORE is that it supports – as one of very few tools – not only data visualisation but also the design of knowledge, and hence knowledge abstraction, when creating cause-and-effect models.

4 Integration and Realization of Data Collection and Knowledge Management

DISRUPT [3] is a European project that aims to spearhead the transition to next-generation manufacturing by facilitating the vision of a "Smart Factory". In the context of this project, the aforementioned modules and software systems will be integrated to establish (a) the collection and aggregation of data, (b) a prediction capability to forecast events based on historic data, (c) monitoring of the environment status using a dashboard, and (d) a smart decision support system in case of disruptions.
Fig. 7

Data Dashboard Overview

We proposed a decision support system involving continuous automated data analysis and prediction. The message bus provides standardized, reliable, and efficient communication between all components. It acts as a central component that receives information from the sensors and IoT devices on the shop floor and also ensures communication between all other components of the DISRUPT framework. The information exchange between the dashboard and the data providers is supported through this bus following a publish-subscribe paradigm, which is the enabler for real-time streaming analytics [5], event collection [6], and the sourcing of the predictive execution engine [12], ensuring the delivery of data in real time.

This approach particularly fits the requirement of modularity of the whole system: when required, every component can be unplugged from the bus and substituted with a technologically equivalent solution offering the same features. Users are thus no longer tied to a specific product or company but can choose their preferred one and revise that decision over time, thanks to the absence of vendor lock-in.

The whole environment of the smart data management dashboard, consisting of the data management design environment, the data sensors, and the data dashboard, is commercially available as a tool set consisting of ADONIS and ADOSCORE [15], and is also available in the form of research prototypes – results of European projects – in the worldwide open innovation community ADOxx.org.

The realisation of this environment uses and improves the open-source micro-service framework OLIVE provided by OMiLAB. OMiLAB is a worldwide community working on model-based approaches [22] and provides support for model-based realisations. OLIVE is used as the main container for the dashboard, which guarantees the modularity of the environment needed for its continuation over time and, in order to enrich its features, the integration with other micro-service components resulting from other internal projects.

Data analytics, data processing, and data anomaly detection are introduced as part of the IoT and Sensor Management and the Streaming Analytics & Prediction layers in Fig. 1. The current data harvesting interface of ADOSCORE, where a person has to manually enter qualitative data – e.g., a rating between 1 (excellent) and 5 (insufficient) – is now supported, as this person receives the results of a data analysis configured for the particular purpose. The fact that data harvesting is still performed manually is addressed in two ways: first, by minimising the need for qualitative data (entered manually) in favour of quantitative data (imported automatically), and second, by adapting the data analytics results to ease the transfer – in the ideal case, automatic import – from the data analytics systems into ADOSCORE.

Technically, this is realised with so-called data services [20, 21], which are databases – also known as snapshot databases – that store the relevant data streams and provide standardised data access interfaces. These data services are deployed as independent REST services and hence enable an autonomous mediation between the ADOSCORE environment and Software AG's data services (i.e., XML-based event streams) that offer the output of the data analysis, which can be automatically imported into the dashboard.
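As an illustration of such an independent REST data service, the sketch below exposes a snapshot database over HTTP using Flask; Flask stands in for the actual OLIVE-based services, and the routes and payloads are assumptions.

```python
# Minimal sketch of a REST data service over an in-memory snapshot database.
from flask import Flask, jsonify, request

app = Flask(__name__)
snapshots = {"t-17": [77.0, 78.5, 79.8]}     # snapshot database stand-in

@app.route("/indicators/<sensor_id>", methods=["GET"])
def read_indicator(sensor_id):
    data = snapshots.get(sensor_id, [])
    return jsonify({"sensor_id": sensor_id, "values": data,
                    "mean": sum(data) / len(data) if data else None})

@app.route("/indicators/<sensor_id>", methods=["POST"])
def append_snapshot(sensor_id):
    snapshots.setdefault(sensor_id, []).append(float(request.json["value"]))
    return jsonify({"stored": True}), 201

if __name__ == "__main__":
    app.run(port=5000)   # e.g., GET http://localhost:5000/indicators/t-17
```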
Fig. 8

Data Services from OMiLAB

The intention is to demonstrate that a technical integration of all data sensor devices is not necessary as long as a semantic integration can be achieved. This semantic integration can be achieved in several ways, from simple tagging to more advanced techniques. This tagging or, in more advanced cases, semantic lifting is performed by a designer, in our case called the Data Service Designer. The data need to be stored in an accessible way, often in the form of a time-series snapshot database. The component that provides the data in the form of a snapshot database is called the Data Service Engine, indicating the need to store the data streams, enable access to them, and perform calculations on them.

Figure 8 introduces SEOR (Systematic Energy Operational Rating), a result of the European research project ORBEET, where data services are used to access energy monitoring data. First, we use this environment and transfer it to the field of Industrie 4.0 for the continuous monitoring of data streams resulting from Software AG's data management environment. Second, the meaning of the data is described using a semantic description, which is added to the data services. A terminology, taxonomy, or ontology can be used to semantically enrich the data services and hence the meta-data of the data streams. In Fig. 8, this semantic enrichment is provided by the business process management tool ADONIS: the data streams are collected by the data services and semantically enriched with the business process context in which they are relevant. This semantic description can be extended to better manage the mass of available sensor data and hence tackle the second aforementioned challenge of smart data management.
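The simple-tagging end of this spectrum can be sketched in a few lines: technical stream meta-data is merged with taxonomy terms describing the business process context (all terms here are examples, not ORBEET's actual vocabulary).

```python
# Sketch of semantic enrichment by tagging stream meta-data.
stream_meta = {"t-17": {"unit": "°C", "rate_hz": 1.0}}   # technical meta-data

semantic_tags = {
    "t-17": {"process": "final assembly", "activity": "functional test",
             "concept": "machine condition"},   # assumed taxonomy terms
}

def enriched(sensor_id):
    """Merge technical meta-data with its business-process context."""
    return {**stream_meta.get(sensor_id, {}), **semantic_tags.get(sensor_id, {})}

print(enriched("t-17"))   # machine- and cross-domain-interpretable description
```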

5 Application Scenarios in Industrie 4.0

There is a vast number of application scenarios that motivate the use of the proposed framework. However, when looking at different industry domains in more detail, different industry requirements emerge.

5.1 Automobile Industry

The automotive industry is facing a saturated market which today is component-driven. However, digitization is a key factor for further growth, allowing car manufacturers to transform into a software- and solution-focused industry [13]. In this transition process, digitization enables key factors such as connected supply chains and Industrie 4.0. Connected supply chains allow traceability of components and products to support inventory management. Following the proposed approach, the acquisition of real-time information from IoT devices and sensors delivers the data on the basis of which real-time analytics and knowledge management achieve real-time visibility at the level of supply chains, and also sustain collaboration, visibility, business continuity, and dynamic responsiveness to unforeseen situations. According to [13], investments in this area are expected to amount to as much as $1bn per OEM, in order to improve efficiency, reduce costs, and increase overall collaboration and innovation speed.

In addition, the automotive industry is facing an increase in the complexity of connected products, an increased number of models and variants, mass customisation, and reduced lead times, while business-critical KPIs must be met (e.g., maximise the use of resources, minimise environmental impact). The proposed solution framework provides an Industrie 4.0-compliant solution for competing with companies in emerging economies that have a competitive advantage in terms of production costs, labour rates, tax regulations, etc. Especially IIoT technologies will support automotive OEMs in tackling the challenges of increasing the competitiveness of interconnected supply chains and existing production.

5.2 Home Appliance Industry (“White-Goods”)

The rise of transport costs, the need for higher efficiency and productivity, customer and user demand for greener products, the higher volatility of raw material and energy prices, and the shortening of production lead times will push for a more critical assessment of delocalisation strategies towards low-cost countries. In addition, megatrends are shifting European manufacturing towards practices that increase responsiveness to massively customised demand without sacrificing production efficiency with respect to both cost and resource consumption.

Hence, manufacturing companies are adopting more and more digital technologies to network and transform their manufacturing processes. The "factory of the future" can be broadly defined as a future view of an interconnected manufacturing value chain involving information and communications technology (ICT) and automation technologies. To that end, software will holistically interconnect and manage distributed factory assets, while embedded data collectors in processing centers will be linked to cross-functional enterprise systems, enabling real-time two-way data exchange and full production quality control. This development will be further driven by
  (i) the need to optimize the consumption of resources through the use of energy- and material-efficient processes and machinery,
  (ii) the fact that increasing processing power of ICT and more sophisticated analytical software enables real-time performance analysis,
  (iii) the need for a significant boost in efficiency, safety and resource sustainability in production and logistics,
  (iv) the need for a reduction of design errors and time to market, and
  (v) the optimisation of production processes through digital factory modelling.

Following the proposed DISRUPT solution, enterprises are enabled to constantly monitor the continuous changes in manufacturing processes, supplier networks, and other challenges across the value chain, as new opportunities and threats appear constantly in an increasingly interconnected world.

5.3 A Generic Solution to Industrial Challenges

Taking the aforementioned problems into consideration, we may conclude that DISRUPT's vision [3] is relevant to these challenges and thus offers strong research and innovation potential for manufacturers in these sectors. In both application scenarios above (cf. Sects. 5.1 and 5.2), production faces the challenge of optimizing the production of different products in an inter-factory supply chain. Our observation of one production factory that produces several products for subsequent production and assembly is that it needs to improve its resilience against both in-house and external disruptions.

In-house disruptions are mainly caused by machine breakdowns and require re-scheduling that considers the overall production plan and the other production lines still available. The skills and experience of the operators currently available need to be considered when re-scheduling the production. Functional tests are the bottleneck; hence, production needs to be adapted flexibly, with the test line, the operators, and their experience as input and an optimised production schedule for the overall production plan as output. Smart data processing is hence needed not only to monitor the current status of the production line, but also to identify anomalies – e.g., in the condition monitoring of the machines – that predictively indicate a potential machine breakdown. Depending on the expected impact of such a breakdown, the production manager can flexibly adjust the plan and either re-schedule or wait until the machine is fixed.
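As a hedged illustration of the condition-monitoring idea, the sketch below flags readings whose rolling z-score deviates strongly from recent history – the kind of anomaly that could feed a predictive-maintenance decision; window size, threshold, and the vibration signal are assumptions.

```python
# Simple rolling z-score anomaly detector for condition-monitoring data.
from statistics import mean, stdev

def anomalies(readings, window=20, threshold=3.0):
    flagged = []
    for i in range(window, len(readings)):
        hist = readings[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            flagged.append((i, readings[i]))   # candidate for predictive maintenance
    return flagged

vibration = [1.0 + 0.01 * (i % 5) for i in range(60)]   # assumed normal signal
vibration[45] = 2.5                                      # injected anomaly
print(anomalies(vibration))                              # -> [(45, 2.5)]
```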

The data typically come from different sources: not only from sensors attached to the line, but also from operators who either raise an urgent issue or propose an idea for improvement. Data processing is hence not only about reacting to indicators that have passed thresholds, but also about identifying potential improvements suggested by operators. The latter is considered knowledge, which is gathered and processed by smart data processing to enable further processing in knowledge management systems.

6 Conclusion and Outlook

The proposed IoT framework is currently being realised in the EU project DISRUPT [3] by BOC and Software AG to support two distinct Industrie 4.0 application scenarios. Smart data management is applied to improve operations research in logistics and production management by visualizing information and events coming from the shop floor to decision makers in a transparent and suitable way. In particular, the traceability of production and process KPIs from the strategic level down to the individual sensor reading offers the advantage of being able to respond more quickly to unforeseen and swiftly changing situations.

An iterative approach is currently being followed: first, integrating the available tools; second, introducing model-driven semantics for model-based support; and finally, introducing semantic algorithms to realise smart mechanisms on top of the model-based support. Following the presented approach will enable companies to become more competitive. However, the integration between companies and partners' processes and the collaboration of all stakeholders are key to ensuring the effectiveness of the proposed approach.

References

  1. Löffler C, Westkämper E, Unger K (2011) Change drivers and adaptation of automotive manufacturing. International Conference on Manufacturing Systems (ICMS), p 6
  2. Westkämper E, Zahn E, Balve P, Tilebein M (2000) Ansätze zur Wandlungsfähigkeit von Produktionsunternehmen. WT Werkstattstechnik 90:22–26
  3. Eirinakis P, Buenabad-Chavez J, Fornasiero R, Gokmen H, Mascolo J, Mourtos I, Spieckermann S, Tountopoulos V, Werner F, Woitsch R (2017) A proposal of decentralised architecture for optimised operations in manufacturing ecosystem collaboration. Working Conference on Virtual Enterprises PRO-VE
  4. Marz N, Warren J (2015) Big Data: principles and best practices of scalable real-time data systems. Manning
  5. Software AG (2016) Company white paper: the Apama platform, under-the-covers: an in-depth view of Apama
  6. Software AG (2016) Product fact sheet: universal messaging. https://www.softwareag.com/corporate/images/SAG_UniversalMessaging_FS_Sept13_v3.5_WEB_tcm16-111010.pdf. Accessed 13.02.2018
  7. Berthold MR, Cebron N, Dill F, Gabriel TR, Kötter T, Meinl T, Ohl P, Sieb C, Thiel K, Wiswedel B (2007) KNIME: the Konstanz Information Miner. Studies in classification, data analysis, and knowledge organization. Springer, Berlin, Heidelberg
  8. Hall M, Frank E, Holmes G, Pfahringer B, Reutemann P, Witten IH (2009) The WEKA data mining software: an update. SIGKDD Explorations 11(1):10–18
  9. R Core Team (2013) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna
  10. Krumeich J, Zapp M, Mayer D, Werth D, Loos P (2016) Modeling complex event patterns in EPC-models and transforming them into an executable event pattern language. Multikonferenz Wirtschaftsinformatik (MKWI), pp 81–92
  11. Krumeich J, Mehdiyev N, Werth D, Loos P (2015) Towards an extended metamodel of event-driven process chains to model complex event patterns. 2nd International Workshop on Event Modeling and Processing in Business Process Management. Springer, Cham
  12. Software AG (2017) Company white paper: why you need Zementis predictive analytics. https://resources.softwareag.com/products-analytics-decisions/why-zementis-whitepaper. Accessed 13.02.2018
  13. Frost & Sullivan (2016) Automotive industry IT spending, CIO focus, trends, and highest growth areas. Report
  14. Woitsch R, Hrgovcic V (2011) Modelling knowledge: an open model approach. Proceedings of the 11th International Conference on Knowledge Management and Knowledge Technologies
  15. Guschlbauer E, Lichka C (2013) Umsetzung des Prozesscontrollings. In: Prozessmanagement für Experten, Impulse für aktuelle und wiederkehrende Themen. Springer Gabler, Berlin, Heidelberg
  16. Woitsch R (2004) Process-oriented knowledge management: a service-based approach. PhD thesis, Vienna
  17. Woitsch R, Utz W, Hrgovcic V (2013) Integration von Prozess- und Wissensmanagement. In: Prozessmanagement für Experten, Impulse für aktuelle und wiederkehrende Themen. Springer Gabler, Berlin, Heidelberg
  18. Lichka C (2006) Der modellbasierte Business Scorecarding-Ansatz zur Strategieoperationalisierung. PhD thesis, University of Vienna
  19. Karagiannis D, Woitsch R (2010) Knowledge engineering in business process management. In: Business process management 2: strategic alignment, governance, people and culture. Springer, Berlin, Heidelberg
  20. Roussopoulos N, Utz W (2016) Design semantics on accessibility in unstructured data environments. In: Domain-specific conceptual modelling: concepts, methods and tools. Springer, Berlin, Heidelberg
  21. Utz W, Woitsch R (2017) A model-based environment for data services: energy-aware behavioral triggering using ADOxx. In: Collaboration in a data-rich world. PRO-VE 2017, vol 506. Springer, Berlin, Heidelberg
  22. Karagiannis D, Mayr H, Mylopoulos J (2016) Domain-specific conceptual modelling: concepts, methods and tools. Springer, Cham
  23. Wooldridge M (2002) An introduction to multiagent systems. Wiley & Sons, Hoboken
  24. Leitão P (2009) Agent-based distributed manufacturing control: a state-of-the-art survey. Eng Appl Artif Intell 22:979–991
  25. Middelhoek S, Hoogerwerf AC (1985) Smart sensors: when and where? Sens Actuators 8(1):39–48
  26. Montironi MA, Castellini P, Stroppa L, Paone N (2014) Adaptive autonomous positioning of a robot vision system: application to quality control on production lines. Robot Comput Integr Manuf 30:489–498
  27. Davis A, Parikh J, Weihl WE (2004) Edge computing: extending enterprise applications to the edge of the Internet. ACM, New York
  28. Satyanarayanan M, Simoens P, Xiao Y, Pillai P, Chen Z, Ha K, Hu W, Amos B (2015) Edge analytics in the Internet of Things. IEEE Pervasive Comput 14:24–31
  29. Lee EA, Rabaey J, Hartmann B, Kubiatowicz J, Pister K, Sangiovanni-Vincentelli A, Seshia SA, Wawrzynek J, Wessel D, Jafari R, Jones D, Kumar V, Mangharam R, Pappas GJ, Rosing TS (2014) The swarm at the edge of the cloud. IEEE Des Test 31(3):8–20
  30. Kabáč M, Consel C, Volanschi N (2017) Designing parallel data processing for enabling large-scale sensor applications. Personal and Ubiquitous Computing
  31. Rossiter J (2003) Model-based predictive control: a practical approach. CRC Press, Boca Raton
  32. Bemporad A (2006) Model predictive control design: new trends and tools. Proceedings of the 45th IEEE Conference on Decision and Control
  33. Kouvaritakis B, Cannon M (2001) Non-linear predictive control: theory and practice. The Institution of Engineering and Technology, IEE Publishing
  34. Park K, Zheng R, Liu X (2012) Cyber-physical systems: milestones and research challenges. Int J Comput Telecommun Ind 36:1–7
  35. Stojanovic N, Dinic M, Stojanovic L (2015) Big data process analytics for continuous process improvement in manufacturing. Big Data, IEEE Publishing, Santa Clara, pp 1398–1407
  36. Heemels W, De Schutter B, Bemporad A (2001) Equivalence of hybrid dynamical models. Automatica 37:1085–1091
  37. Juloski A, Wieland S, Heemels WPMH (2005) A Bayesian approach to identification of hybrid systems. IEEE Trans Automat Contr 50(10):1520–1533
  38. Ferrari-Trecate G, Muselli M, Liberati D, Morari M (2003) A clustering technique for the identification of piecewise affine systems. Automatica 39:205–217
  39. Woitsch R, Hrgovcic V, Robert B (2012) Knowledge product modelling for industry: the PROMOTE approach. 14th IFAC Symposium on Information Control Problems in Manufacturing, International Federation of Automatic Control

Copyright information

© Springer-Verlag GmbH Deutschland, ein Teil von Springer Nature 2018

Authors and Affiliations

  1. Software AG, Saarbrücken, Germany
  2. BOC Asset Management GmbH, Vienna, Austria
