New directions for applied knowledge-based AI and machine learning

In this article, selected new directions in knowledge-based artificial intelligence (AI) and machine learning (ML) are presented: ontology development methodologies and tools, automated engineering of WordNets, innovations in semantic search, and automated machine learning (AutoML). Knowledge-based AI and ML complement each other well, as the strengths of each compensate for the weaknesses of the other. This is demonstrated via selected corporate use cases: anomaly detection, efficient modeling of supply networks, circular economy, and semantic enrichment of technical information.


Introduction
In 2014 we started a series of annual workshops at the Leibniz Zentrum für Informatik Schloss Dagstuhl, initially focusing on the corporate semantic web and later widening the scope to applied machine intelligence (AMI). In all workshops, we focused on the application of artificial intelligence (AI) technologies in corporate and organizational contexts. A number of books [3][4][5] and journal articles [6][7][8][9][10] resulted from those workshops, which are characterized by an intense atmosphere of interdisciplinarity, collaboration and focus on practical results (see [11]). With the AMI 2022 workshop, we continued our workshop series at Schloss Dagstuhl after a two-year interruption caused by the COVID-19 pandemic, during which we held the AMI workshop online. This year's workshop has again shown that nothing can substitute the interactions of in-person meetings, with their intense discussions during sessions and the social interactions during coffee breaks, meals and common activities.
In this article, we are happy to present selected results and insights that we gained during this year's AMI workshop. We focus on real-world problems and practical issues raised by the application of semantic technologies, data science and machine learning.
This article is structured as follows. In the following section, we present insights and new directions for knowledge-based AI, and in the section thereafter, innovations for machine learning. Then, selected corporate use cases are shown, namely anomaly detection in manufacturing, efficient modeling of supply networks, circular economy, and semantic enrichment of technical documentation.

Knowledge-based AI
Since the 2010s, AI has experienced an enormous boost, mainly due to spectacular successes in machine learning (ML). Because of this, the term AI is nowadays often equated with ML in media coverage. However, this view is clearly too narrow, since knowledge-based AI has also been an essential AI approach for many decades and is used in many real-world applications. We show some current and new developments regarding ontologies, WordNets and semantic search in the following sections.

Ontologies
Ontologies are a major approach of knowledge-based AI and are in everyday use. A commonly cited definition of ontology is "an explicit specification of a conceptualization" [12]. Gruber furthermore explains conceptualization as "the objects, concepts, and other entities that are presumed to exist in some area of interest and the relationships that hold among them". This definition is broad enough to include formal upper-level ontologies but also knowledge graphs, thesauri and other forms of knowledge representation. (There are entirely different opinions on this, as we point out below.) Examples are domain-specific ontologies like Gene Ontology 1 or the Global Biodiversity Information Facility 2 for life sciences, but also Wikidata 3 or the Google Knowledge Graph 4 for general purposes.
Use cases for ontologies are semantic search, decision support systems or interactive systems like knowledge-based chatbots. In addition, conceptualizing a domain of interest is of value per se. Ontology search engines like the Basic Register of Thesauri, Ontologies & Classification (BARTOC) 5 include more than 3000 ontologies, each with thousands or even hundreds of thousands of entities being specified.
When having a closer look at concrete ontology examples (in the broad meaning of Gruber's definition cited above), different flavors can be distinguished; and with those flavors, entirely different communities are associated.
One flavor includes ontologies, for which the term knowledge graph has been used in recent years 6 . For example, for the Google Knowledge Graph, simple standardized ontology schemas like schema.org 7 are used. schema.org specifies a limited number of classes (currently about 800) like persons, organizations, locations, products, events, etc. Simple standardized modeling languages like RDFa, Microdata and JSON-LD may be used to specify concrete entities. Those simple modeling languages provide restricted semantic expressiveness, including specifying classes, instances, attributes, and relationships, but excluding complex reasoning.
The following example 8 shows the JSON-LD specification of an organization with its legal name, founding date and email address:

{
  "@context": "https://schema.org/",
  "@type": "Organization",
  "legalName": "Elite Strategies Llc",
  "foundingDate": "2009",
  "email": "info@elite-strategies.com"
}

Because of the simplicity of the modeling languages and schemas used, we call this flavor of ontologies lightweight. Lightweight ontologies are much used in industrial practice, e.g., by all major internet search engine providers and tech companies, including Google, Amazon, eBay, Facebook, IBM, LinkedIn and Microsoft [14]. They are relatively easy to use. Various tutorials and guidelines can be found to access such ontologies or provide data for them, e.g., jsonld.com 9 . However, their discussion in the academic community is rather sparse.
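A lightweight entry like the one above can be consumed with a few lines of standard-library Python. The sketch below is illustrative only; the helper name extract_entity is ours and not part of any schema.org tooling:

```python
import json

# The lightweight JSON-LD snippet from above, as a Python string.
doc = """
{
  "@context": "https://schema.org/",
  "@type": "Organization",
  "legalName": "Elite Strategies Llc",
  "foundingDate": "2009",
  "email": "info@elite-strategies.com"
}
"""

def extract_entity(jsonld_text):
    """Parse a flat JSON-LD object and separate the JSON-LD keywords
    (@context, @type, ...) from the schema.org properties."""
    data = json.loads(jsonld_text)
    keywords = {k: v for k, v in data.items() if k.startswith("@")}
    properties = {k: v for k, v in data.items() if not k.startswith("@")}
    return keywords, properties

keywords, properties = extract_entity(doc)
print(keywords["@type"])   # Organization
print(sorted(properties))  # ['email', 'foundingDate', 'legalName']
```

The restricted expressiveness of such entries is visible here: the whole specification is a flat set of attribute-value pairs, with no axioms to reason over.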
Another flavor of ontologies uses modeling languages with much higher semantic expressiveness, such as OWL 10 or F-logic [16]. Such ontology modeling languages allow knowledge engineers to express reasoning, logical operators, quantifiers, etc. Ontology examples are the BFO 11 and the Gene Ontology. Consider the following example from [15] formalizing the red color of strawberries using a set relation (subClassOf) and a quantifier (some):

Strawberry subClassOf bearerOf some RedColourQuality
Those ontologies are inherently complex, and, therefore, we call this flavor heavyweight. There is much attention being paid to this flavor in the academic community [13,[17][18][19]. Due to the high expressivity and, thus, inherent complexity of ontology modeling languages, several guidelines for engineering ontologies exist, e.g., [15]. To model the simple statement that strawberries are red in color in the complicated way shown above is, in fact, a recommended modeling pattern from [15].
In those guidelines, lightweight ontologies (including languages for modeling them like JSON-LD) are usually not mentioned at all, or it is stated that artifacts of this flavor cannot be considered ontologies at all (e.g., [15], Section 2.5). Even though this viewpoint is rather common, it is in contrast with Gruber's broad ontology definition from 1993 cited above.
We notice that around those different ontology flavors, there are different communities dealing with them: rather developer-oriented communities for lightweight ontologies and academic communities for heavyweight ontologies. However, there seems to be minimal overlap and little communication between those communities. It seems like they live on different, disconnected islands. Why is this the case? Maybe there are fundamentally differing goals driving ontology engineering in those different communities.
One goal for developing lightweight ontologies is developing knowledge-based applications like semantic search engines. And the software engineering principle KISS (keep it short and simple) also applies to engineering ontologies to be used in such applications. Consequently, modeling languages, guidelines and tutorials are as simple as possible. The simpler, the better; this is the secret to their success.
One goal for developing heavyweight ontologies is formalizing an application domain as accurately as possible in order to draw valid logical conclusions. And, as reality is complex, the modeling languages and resulting ontologies reflect this complexity; the more complex, the more accurate, and therefore, the better.
What can we conclude from this observation? We think that it is most important to be absolutely clear about the goals before starting to develop an ontology. To be aware of the different ontology flavors and corresponding communities may help us choose the appropriate modeling languages and tools suiting those goals.

Automated engineering of WordNets
A WordNet is a lexical database of semantic relationships between words in a specific language. The first WordNet (https://wordnet.princeton.edu) was created for the English language at Princeton University. As the usefulness of WordNets as lexical resources became apparent, the Princeton WordNet was expanded, and WordNets were constructed for other languages.
The Open Multilingual Wordnet (OMW, https://omwn.org) is an open-source project that was launched with the goal of easing the use of WordNets in multiple languages. OMW has the added benefit of connecting equivalent sets of synonyms (called "synsets", the basic structure of a WordNet: a set of synonyms represents a semantic concept) in different languages by means of an interlingual ID called "ILI".
OdeNet (https://github.com/hdaSprachtechnologie/odenet) was constructed from open-source linguistic resources in combination with some manual and semi-automatic corrections. (The work described here was carried out together with Johann Bergh, Lingolutions.) Since OdeNet was constructed independently of existing resources in OMW, it was difficult to connect its synsets to equivalent synsets in OMW via ILIs. As an initial implementation, Google Translate was used in combination with statistical methods, as described by Siegel & Bond (2021) [31]. However, this implementation has some shortcomings:

- incorrect ILI classification of some synsets from a semantic perspective,
- duplicate assignment of some ILIs to multiple synsets, and
- part-of-speech (POS) tags of some ILIs being inconsistent between the English WordNet (EWN) and OdeNet.

A significant problem in using machine translation to connect equivalent synsets in different languages occurs when translating homographs (words with the same spelling but different meanings) and polysemes (words with the same spelling but different although related meanings). This is particularly noticeable when a word translated from a source language is a homograph or polyseme in the target language. As an example, we take the German word "Unterlegscheibe" from OdeNet. The corresponding English translation is "washer". Searching for "washer" in EWN, we find three synsets containing the word:

- ILI i94042: someone who washes things for a living,
- ILI i60971: seal consisting of a flat disk placed to prevent leakage,
- ILI i60970: a home appliance for washing clothes and linens automatically.
All synsets in EWN have a short, concise definition. We propose to use this definition to get more context for the disambiguation. First, we combine the word in the synset with its definition and machine-translate the combination. Then, we extract the translated word from the machine translation and look for a corresponding match in OdeNet.
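The combine-translate-match procedure can be sketched as follows. Since a live machine-translation service cannot be embedded here, translate() is a mock with hard-coded, illustrative German translations (the article used Google Translate); all other names are hypothetical as well:

```python
def translate(text):
    """Stand-in for a machine-translation API call; the German
    renderings below are illustrative."""
    table = {
        "washer: someone who washes things for a living":
            "Wäscher: jemand, der beruflich Dinge wäscht",
        "washer: seal consisting of a flat disk placed to prevent leakage":
            "Unterlegscheibe: Dichtung in Form einer flachen Scheibe",
        "washer: a home appliance for washing clothes and linens automatically":
            "Waschmaschine: Haushaltsgerät zum automatischen Waschen",
    }
    return table[text]

def disambiguate(word, definitions, target_lexicon):
    """For each EWN sense, translate the word together with its
    definition and keep the senses whose translated headword
    appears in the target-language lexicon (a toy OdeNet stand-in)."""
    matches = []
    for ili, definition in definitions.items():
        translated = translate(f"{word}: {definition}")
        headword = translated.split(":")[0]
        if headword in target_lexicon:
            matches.append((ili, headword))
    return matches

odenet_words = {"Unterlegscheibe"}
senses = {
    "i94042": "someone who washes things for a living",
    "i60971": "seal consisting of a flat disk placed to prevent leakage",
    "i60970": "a home appliance for washing clothes and linens automatically",
}
print(disambiguate("washer", senses, odenet_words))
# [('i60971', 'Unterlegscheibe')]
```

Only the sense translated with the mechanical-seal definition maps onto the OdeNet entry, which is exactly the disambiguation effect described above.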
These are the results for the "washer" example: the machine translation of the second item, "Unterlegscheibe", now enables us to make the correct ILI classification (i60971) for the corresponding OdeNet synset.
Since there may still be multiple candidates among OdeNet synsets for an ILI in an EWN synset, it is necessary to write a classification function that assigns weights to the candidates, so that the best assignment can be made. Fortunately, OdeNet is very synonym-rich (much more so than other WordNets), and we can use these synonyms in combination with a German Word2Vec model [32] to do the classification.
The content words in the translation of the definition form one vector; all the synonyms in the candidate synset form another. For each pair of entries from the two vectors, a similarity value is computed. These values are summed and normalized to a value between 0 and 1, yielding the weight of the candidate OdeNet synset competing for the ILI of a specific EWN synset.
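A minimal, library-free sketch of this weighting, with toy three-dimensional vectors in place of a real German Word2Vec model; all names and numbers are illustrative:

```python
from math import sqrt

# Toy "word vectors"; a real implementation would load a pretrained
# German Word2Vec model instead.
vectors = {
    "Dichtung":        (0.9, 0.1, 0.0),
    "Scheibe":         (0.8, 0.2, 0.1),
    "Unterlegscheibe": (0.85, 0.15, 0.05),
    "Waschmaschine":   (0.1, 0.9, 0.2),
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def synset_weight(definition_words, synonyms):
    """Sum pairwise similarities between the translated definition's
    content words and the candidate synset's synonyms, normalized by
    the number of pairs (so the toy result stays within [0, 1])."""
    pairs = [(d, s) for d in definition_words for s in synonyms
             if d in vectors and s in vectors]
    if not pairs:
        return 0.0
    total = sum(cosine(vectors[d], vectors[s]) for d, s in pairs)
    return total / len(pairs)

w1 = synset_weight(["Dichtung", "Scheibe"], ["Unterlegscheibe"])
w2 = synset_weight(["Dichtung", "Scheibe"], ["Waschmaschine"])
assert w1 > w2  # the mechanically related candidate synset wins
```

The candidate with the highest weight receives the ILI, resolving the remaining ambiguity.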
As a result, there are no more duplicate ILIs in OdeNet, and more than 400 wrong POS tags (of about 36,000 synsets) could be corrected.

Innovations in semantic search
Semantic search is well understood and well documented, see [5]. It exploits semantic relationships between concepts such as generic term, subordinate term, opposite, synonym, etc. However, relationships to partial terms and partial meanings of concepts are usually not exploited. How can semantic search be improved further? One idea is to expand the search to include partial terms that are contained in term definitions. Definitions of terms can be created according to the genus-differentia scheme (Gen-Dif) known from Aristotle [18]. In an innovative approach to semantic search [1], we use so-called Basic Linguistic Symbols (BLS) as linguistic identifiers, which can record the nouns of the terms in any language. For example, for >BLS-gauge, in German we have "Meßinstrument", in English "gauge", in French "jauge" and in Spanish "calibre" (see Fig. 1). Now we can compose terms using the Gen-Dif pattern as follows. In >BLS-barometer = (>BLS-gauge, >BLS-air_pressure), the left term gauge is the genus (Gen), and the right term air pressure is the differentia (Dif). According to the same scheme, we have >BLS-air_pressure = (>BLS-pressure, >BLS-air) with the triples (>BLS-air_pressure, Gen, >BLS-pressure) and (>BLS-air_pressure, Dif, >BLS-air). This definition can be inserted into the first: >BLS-barometer = (>BLS-gauge, (>BLS-pressure, >BLS-air)). Since both the left (Gen) and the right (Dif) argument of a definition can be resolved by further compound definitions, a so-called conceptual binary tree (CBT) is created [1]. The leaves of the CBT are always indecomposable, atomic concepts like >BLS-air, >BLS-pressure, >BLS-aid, >BLS-thing, and >BLS-measurement. The methodology presented here can also be used to more precisely define and disambiguate the terms used in ontologies and in WordNets. From (>BLS-barometer, Gen, >BLS-gauge), (>BLS-barometer, SymbolOf, ^Barometer) and (>BLS-gauge, SymbolOf, ^Gauge), for example, (^Barometer, subClassOf, ^Gauge) can be inferred automatically, i.e., that ^Barometer is a subclass of ^Gauge. This allows a deep semantic search (DSS) to be implemented. Query processing then, in addition to the usual semantic relations such as superclass, subclass and synonym, also makes use of the terms that occur at any level of the conceptual binary tree. A search radius can be set to the depth to which the partial terms should be included in the search. To find out whether Google also takes compound terms into account when searching, we evaluated Google search against DSS. As an example, various musical instruments were modeled with so-called word sense definitions (WSD).
A Google search for "wind keyboard instrument" returns 24 million search results, most of which refer to pages with melodicas. The DSS finds, in addition to melodica and pipe organ, the word accordion; the term accordion does not even appear on the first Google results page. Thus, DSS can be used to determine additional suitable search terms, which can then be entered in the respective search engine. Our sample query "wind keyboard instrument" was also evaluated against WordNet Search: WordNet finds no result at all, and only a search for "wind instrument" finds an entry that describes the term itself, but not the concepts that fall under it.
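The Gen-Dif definitions and the subClassOf inference described above can be sketched in a few lines of Python; the helper names are ours and do not come from the implementation in [1]:

```python
# Each compound BLS is defined as a (genus, differentia) pair.
definitions = {
    ">BLS-barometer":    (">BLS-gauge", ">BLS-air_pressure"),
    ">BLS-air_pressure": (">BLS-pressure", ">BLS-air"),
}

def expand(symbol, radius):
    """Resolve a BLS into its conceptual binary tree down to the
    given search radius; undefined symbols are atomic leaves."""
    if radius == 0 or symbol not in definitions:
        return symbol
    gen, dif = definitions[symbol]
    return (expand(gen, radius - 1), expand(dif, radius - 1))

def partial_terms(symbol, radius):
    """The partial terms (leaves of the tree up to the search
    radius) that deep semantic search adds to a query."""
    terms = set()
    stack = [expand(symbol, radius)]
    while stack:
        node = stack.pop()
        if isinstance(node, tuple):
            stack.extend(node)
        else:
            terms.add(node)
    return terms

def infer_subclass(symbol):
    """The genus of a definition yields a subClassOf triple."""
    gen, _ = definitions[symbol]
    return (symbol, "subClassOf", gen)

assert infer_subclass(">BLS-barometer") == (">BLS-barometer", "subClassOf", ">BLS-gauge")
print(sorted(partial_terms(">BLS-barometer", 2)))
# ['>BLS-air', '>BLS-gauge', '>BLS-pressure']
```

Varying the radius argument corresponds to the configurable search radius: with radius 1, only gauge and air pressure would be added as partial terms.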
On the basis of the Longman Defining Vocabulary, we developed a Foundation Core Glossary (FCG) with 450 atomic concepts and approximately 2500 compound terms in eight languages. All new terms can be defined with the concepts of the FCG. This means that queries with any combination of terms from these languages deliver the same results as if they had been made in only one of the languages. A first version of the deep semantic search has been used in the EnArgus research project for search optimization [2]. The multilingual news app rob.by (https://rob.by/en/App/) also uses this technology. Furthermore, applications are being implemented where the technology is to be used in portal search engines for scientific publications. These applications also benefit from the fact that the number of occurrences in documents is noted for each term in the glossary. As a further parameter in the deep semantic search, it can thus be taken into account whether a term is very general or very specific.

Machine learning
Machine learning (ML) is currently the most prominent AI approach, enabling models to make predictions based on previous observations [27]. ML is used to power AI applications, but creating effective ML-based AI applications is complex. It requires high expertise, which experts like computer scientists or data scientists gather over the course of their careers. A great deal of gut feeling is involved. To less experienced scientists, this may look like a "secret art", making experts operate like magicians (we use the magician icon for illustration in Fig. 2). They apply their expertise and knowledge to analyze a dataset and create an efficient ML pipeline, which can be trained with a dataset to allow predictions for new data. Steps to be performed include data preparation, feature engineering, model selection, hyperparameter optimization, validation, and more. See Fig. 2.
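The pipeline steps named above can be illustrated with a deliberately minimal, library-free sketch; real systems would use a framework such as scikit-learn, and every function here is a toy stand-in:

```python
def prepare(rows):
    """Data preparation: drop rows with missing values."""
    return [r for r in rows if None not in r]

def engineer(rows):
    """Feature engineering: add a derived feature x1 * x2."""
    return [(x1, x2, x1 * x2, label) for x1, x2, label in rows]

def train(rows):
    """Toy "model": predict the majority class (a stand-in for model
    selection and hyperparameter optimization)."""
    labels = [label for *_, label in rows]
    majority = max(set(labels), key=labels.count)
    return lambda features: majority

def validate(model, rows):
    """Validation: accuracy of the model on the given rows."""
    hits = sum(model(r[:-1]) == r[-1] for r in rows)
    return hits / len(rows)

data = [(1, 2, "ok"), (2, 3, "ok"), (None, 1, "ok"), (3, 1, "faulty")]
cleaned = engineer(prepare(data))     # data preparation + features
model = train(cleaned)                # model "training"
print(validate(model, cleaned))       # 2 of 3 rows classified correctly
```

Each step feeds the next, which is exactly what makes pipeline construction hard to automate: a poor choice early on degrades everything downstream.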
Automated machine learning (AutoML) [37][38][39][40] emerged as a research field aiming at generating parts of ML pipelines, or entire pipelines, automatically. Presently, a great variety of AutoML solutions is available [29], ranging from open-source to commercial solutions and differing in functionality, maturity and usability.
However, there is one restriction in almost all AutoML solutions: they focus almost exclusively on one major ML library or ML ecosystem. This leads to some form of vendor lock-in when choosing an AutoML solution.
Meta AutoML [28,29] is a novel concept that aims to avoid this vendor lock-in. The idea is to integrate different AutoML solutions into a cloud service. Managed through a metalayer, the various AutoML solutions can compete against each other, allowing users to select the best solution for their use case.
OMA-ML, an open-source implementation of the Meta AutoML concept, uses an ML ontology as its information backbone in various parts of the application. Firstly, a configuration wizard shows only plausible configuration options to users. Secondly, the ontology guides the automated pre-processing of the dataset by determining whether automated pre-processing is required and which steps to apply. Thirdly, the ontology determines the ML training strategy, guiding the metalayer with instructions on how to execute the training within the AutoML solutions.
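How an ontology can drive such a configuration wizard may be sketched as follows; the ontology content and all names are invented for illustration and are not taken from OMA-ML:

```python
# Invented mini-ontology: which AutoML solutions support which
# (dataset type, task) combination.
ontology = {
    ("tabular", "classification"): {"solutionA", "solutionB"},
    ("tabular", "regression"):     {"solutionB"},
}

def plausible_options(dataset_type, task):
    """Configuration-wizard step: offer only those AutoML solutions
    that the ontology declares compatible with the user's selection."""
    return sorted(ontology.get((dataset_type, task), set()))

print(plausible_options("tabular", "classification"))  # ['solutionA', 'solutionB']
print(plausible_options("image", "classification"))    # []
```

Keeping this knowledge in one declarative place means the wizard, the pre-processing and the training strategy all stay consistent when a new AutoML solution is integrated.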
OMA-ML is actively being developed as an open-source project and can be accessed on GitHub (https://github.com/hochschule-darmstadt/MetaAutoML). At the time of writing, a minimum viable product is available with an initial set of AutoML solutions integrated, supporting classification and regression tasks on tabular datasets. See Fig. 4 for a screenshot of the leaderboard displaying various AutoML results for a dataset training.

Corporate use cases
This section shows four partially new areas of applied machine intelligence: (a) a core problem of automated production processes: the detection of anomalies in time series of sensor data helping to avoid disruptions of industrial processes; (b) the determination of cross-organisational process disruptions and their consequences based on efficient semantic modeling of entire supply networks; (c) semantic product passports describing materials and components attached to goods in order to support repair and recycling and (d) semantic enrichment of technical information.

Anomaly detection
With the rising complexity of modern automated manufacturing processes like production, packaging and quality assurance, the importance of each process step realizing the desired outcome is increasing. Even a single fault in one step can influence the complete production line, resulting in a faulty product, a breakdown of the process or a carry-over of the failure through the entire production line. Therefore, it is crucial for modern cyber-physical systems to recognize such failures early, react to them and prevent them. Anomalies can be taken as essential failure indicators: with the help of a timely detected anomaly, fast failure recognition and reaction are possible. Anomaly detection refers to the identification of abnormal system behavior, i.e., behavior that deviates significantly from the regular operation of the system.
Problems with the implementation of anomaly detection in an industrial setup result from the different industrial requirements that must be considered. Besides timing prerequisites enabling a fast reaction to an anomaly, a reasonable prediction quality and a configurable design that can adapt to the dynamics of such a system are essential. Moreover, the communication and processing limitations of those systems must be considered.
In cooperation with Yaskawa (https://www.yaskawa.de), a leading mechatronics and robotics company, a fully automated, data-driven and feasible anomaly detection for cyber-physical systems in manufacturing has been developed [30]. The cyber-physical system, shown in Fig. 5, demonstrates different drive units and their collaboration to realize complex processes. The overall process involves picking small items from a round table in the middle of the unit and placing them into several cups while the cups move on conveyor belts around the machine.
One example of an anomaly that can appear in the described system is an incorrect establishment of the vacuum generation for picking the cups from the conveyor belt to place them in a delivery position. This can be caused by broken cups, an incorrect picking position, or a worn gasket at the intake. As a result, the system can run into an undefined state, and the normal process can no longer be executed.
To realize anomaly detection while fulfilling industrial requirements, several reconstruction-based, one-dimensional convolutional autoencoders were applied to the cyber-physical system. The models are trained without the need for domain knowledge. No anomalous data is provided in the training step, to meet the data-driven requirements. A sliding window approach achieves the timing requirements and the prediction quality. A shallow structure and the decentralized integration of the separate models address the inherent processing limitations.
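The sliding-window idea can be illustrated with a library-free sketch in which the trained autoencoder is replaced by a trivial reconstruction (the window mean), so that only the thresholding logic is shown; all values and names are illustrative:

```python
def reconstruction_error(window):
    """Mean squared error between the window and its trivial
    reconstruction (the window mean); a deployed system would use
    the output of a 1D convolutional autoencoder here."""
    mean = sum(window) / len(window)
    return sum((x - mean) ** 2 for x in window) / len(window)

def detect_anomalies(signal, window_size=4, threshold=1.0):
    """Slide a window over the signal and flag the start indices of
    windows whose reconstruction error exceeds the threshold."""
    flagged = []
    for i in range(len(signal) - window_size + 1):
        if reconstruction_error(signal[i:i + window_size]) > threshold:
            flagged.append(i)
    return flagged

signal = [1.0, 1.1, 0.9, 1.0, 1.0, 8.0, 1.0, 1.1]  # spike at index 5
print(detect_anomalies(signal))  # [2, 3, 4] -- windows overlapping the spike
```

Because every window overlapping the spike is flagged, the anomalous region is localized without any labeled anomaly data, matching the data-driven requirement above.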
An example of a reconstructed data stream and the detection of the abnormal behavior can be seen in Fig. 6; red dots indicate the detected anomalies, and the marked areas display the abnormal sections.
Effective and feasible anomaly detection, considering industrial requirements, can significantly impact automated processes to realize modern and safe cyber-physical systems in production.

Efficient modeling of supply networks
The recent COVID-19 pandemic, the flooding of the German Ahr valley, the accident of the container vessel Ever Given stuck in the Suez Canal, the traffic jams it caused at container terminals in China and Europe, and the current Russian war against Ukraine have shown the vulnerability of our society and economy in the case of unforeseen disruptions. In 2020, the German Federal Ministry for Economic Affairs and Climate Action (Bundesministerium für Wirtschaft und Klimaschutz, BMWK) started funding five projects addressing questions of economic resilience in pandemics and other economic crises. Within these projects, the ResKriVer project (http://www.reskriver.de, funded by the BMWK under research contract 01MK21006A) addresses, besides other problems, issues of modeling, documenting, analyzing and simulating supply networks of crisis-relevant and potentially substitutable goods and resources, in order to strengthen the resilience of critical supply paths.
Fraunhofer FOKUS' (http://fokus.fraunhofer.de) role within this project, besides its role as consortium leader, is the semantic modeling of supply networks as a knowledge graph which, in contrast to simple supply chains, considers entire cross-organizational networks in order to perform preventive as well as reactive analyses. The approach chosen by FOKUS is the adoption of the W3C provenance standard PROV-O (https://www.w3.org/TR/prov-o/) as one of the top-level ontologies of a "ResKriVer common core ontology". This ontology supports the modeling of supply networks at a fine level of granularity and should at the same time allow for (a) the extraction of simulation models for Fraunhofer IML's ODE-Net system for discrete event simulation and (b) approaches for dependency analysis in order to determine critical nodes within a supply network and their associated probabilities.
During the initial modeling of some example supply networks, it soon became clear that this approach would face three major issues. First, the integration of probability information; this can be addressed by partial adoption of PR-OWL, an ontology for modeling probability information. Second, structurally equivalent RDF subgraphs will frequently occur during the modeling of supply networks in order to capture alternative supply chains. Thus, some kind of efficient RDF templating mechanism would be helpful for efficient modeling, and mechanisms to partially instantiate these subgraphs with available information need to be developed. A basis for such templates could be SHACL (https://www.w3.org/TR/shacl/) or, probably better suited, ShEx (https://shex.io/), augmented by some mechanism to generate new unique anonymous resources and instances during template instantiation. We consider this to be a new research topic. As a third issue, the non-availability of supply chain information, which is usually difficult to obtain for second and higher-order tier suppliers, needs to be addressed. Hence, coping with highly incomplete information about supply networks, and acquiring such information both during the pre-crisis documentation phase and within a current crisis, will be a major issue in this applied research project.
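The templating idea for structurally equivalent subgraphs can be sketched as follows; the template, predicates and helper names are invented for illustration, and a production mechanism could build on SHACL or ShEx instead:

```python
import itertools

# A recurring subgraph (one alternative supply chain) stored once as
# a template; variables start with "?".
TEMPLATE = [
    ("?supplier", "ex:performs", "?delivery"),
    ("?delivery", "ex:supplies", "?good"),
]

_fresh = itertools.count()

def fresh_node(name):
    """Generate a new unique anonymous (blank-node-like) resource."""
    return f"_:{name}{next(_fresh)}"

def instantiate(template, bindings):
    """Replace bound variables with known resources and unbound
    variables with freshly generated anonymous resources; the same
    fresh node is reused within one instantiation."""
    local = {}
    def resolve(term):
        if not term.startswith("?"):
            return term
        if term in bindings:
            return bindings[term]
        if term not in local:
            local[term] = fresh_node(term[1:])
        return local[term]
    return [(resolve(s), p, resolve(o)) for s, p, o in template]

g = instantiate(TEMPLATE, {"?supplier": "ex:ACME", "?good": "ex:Steel"})
# Both triples share one freshly minted delivery node:
assert g[0][2] == g[1][0]
```

Each call to instantiate() mints fresh anonymous resources, so repeated alternative supply chains can be stamped out from one template while incomplete information simply stays unbound.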

Circular economy
On 11 December 2019, the European Commission presented the Green Deal (CELEX 52019DC0640). The Communication from the Commission also includes a detailed action plan, including an annex with a table of measures within a timeline. In March 2020, the Communication on "A new Circular Economy Action Plan" followed (CELEX 52020DC0098). The action plan contains an industrial strategy for a clean and circular economy, aiming at sharing, reusing, repairing and recycling existing products for as long as possible. Within the framework of the Multi-Stakeholder Platform for Standardization (MSP), W3C and GS1 gave birth to the idea of combining the Linked Data platform of W3C with the work on GS1 Digital Link. A report was handed to the Commission.
The idea contributed to the considerations around the work on the Digital Product Passport (DPP). Good ideas must be simple, and this is the case here. At a high level, the problem of circularity is seen from an information-system perspective. Today, if a product hits the recycling plant, we do not know much about it, and the recycling plant has to do a lot of guesswork and research to find out about the physical object in their hands. This does not work well and, consequently, recycling can still be improved. But how can we get information to the recycling plant? This question looks simple but involves a variety of complex system-engineering issues.
Who can provide useful information? The producer of the tangible goods has lots of information, but not necessarily everything. First, this producer may only assemble components, so the composition of the components is not always known to them. Second, the producer has commercial secrets around producing certain goods, so it is not simple to just make information public. The same goes for the producer of the components assembled into a consumer product: a component may be produced with secret production techniques not known to the entity assembling the various components. And the producer of the component cannot know about all the products that their component is used for.
Finally, there is the user of the product. Users may be careful with a product or treat it badly. There may be accidents, lightning, repair, etc. in the lifetime of a product that may influence the way the product or its components can be recycled. This information resides with the consumer or end user, but also with their agents and repair services. It is important information that needs to be collected.
While a central database would be easier to establish, the large variety of actors and sources of information, and the heterogeneity of the information collected, quickly expose the limits of monolithic systems.
But what would such a distributed, product-centric system look like? The basic idea is to use GS1's Digital Link specification [20] together with the Linked Data Platform [21] and create a system of annotations to the physical product identified by some IRI. Linked Data annotations [22] have the power to create a knowledge graph around a type of product or an instance of a product. Because the system uses the Linked Data paradigm, data from a variety of sources can be retrieved and easily merged into a full picture. Meanwhile, GS1 Digital Link 1.1 makes sure that the annotations are reliably linked to the product instance at hand. A persistent link between the physical and the virtual world is established.
The overall architecture of the system is shown in Fig. 7. We are only at the beginning of this development.
GS1 Digital Link uses the Global Trade Item Number (GTIN), which is often represented as a barcode, a QR code or an RFID tag [23]. Batch numbers, serial numbers and more can also be included. From the code, a link is created that is an entry point into the web. In the simplest case, a barcode on a product just points to a web page. The GS1 Digital Link specification not only uses the simple web but allows for sophisticated operations using so-called "resolvers": applications identifying that the URI at hand is a GS1 Digital Link URI. These resolvers maintain a set of typed URIs corresponding to the GS1 Digital Link requested, as indicated in [24]. Although designed for GTINs, the system would work with any other identity scheme (e.g., DNS plus URI) with its own resolver. Standardization work is very likely to begin at ISO shortly to expand the GS1 ideas to encompass any recognized identifier system.
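Constructing such a URI from a GTIN can be sketched as follows; the GTIN used is GS1's documentation example, and the default resolver base is illustrative:

```python
def digital_link(gtin, lot=None, serial=None,
                 resolver="https://id.gs1.org"):
    """Build a GS1 Digital Link URI: application identifier 01
    carries the GTIN; 10 (batch/lot) and 21 (serial number) may
    follow to address an individual product instance."""
    path = f"/01/{gtin}"
    if lot is not None:
        path += f"/10/{lot}"
    if serial is not None:
        path += f"/21/{serial}"
    return resolver + path

print(digital_link("09506000134352", lot="AB12", serial="456"))
# https://id.gs1.org/01/09506000134352/10/AB12/21/456
```

With batch and serial number included, the URI identifies one concrete product instance, which is exactly the granularity a Digital Product Passport needs.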
Using the GS1 Digital Link standard, one type of link can now point from the product into an RDF knowledge graph. This graph may receive information from a large variety of sources. With the product instance at hand, all this information is instantly accessible. If transformed back into HTML, it can be used for consumer information. If given in machine-readable form to a gateway in the recycling plant, it can help sort incoming goods. But the knowledge graph can also help with compliance and regulatory constraints. The minimum requirements for the DPP could be expressed in SHACL [25]; it is then easy to match the regulatory requirements against the existing graph. By including information about access, usage limitations and other constraints, the knowledge graph itself already contains the necessary administrative information. Both can also serve as proof of compliance [26].
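At its core, checking minimum DPP requirements against such a graph reduces to a constraint check like the following toy sketch; real deployments would express the requirements in SHACL and use a SHACL validator, and the property names below are invented:

```python
# Minimum DPP requirements as a set of required property IRIs
# (invented names), and a product's knowledge graph as triples.
REQUIRED = {"ex:material", "ex:disassemblyInstructions",
            "ex:hazardousSubstances"}

product_graph = {
    ("ex:battery42", "ex:material", "ex:LithiumIronPhosphate"),
    ("ex:battery42", "ex:disassemblyInstructions", "ex:doc17"),
}

def missing_properties(graph, subject, required):
    """Which required properties are absent for the given subject?"""
    present = {p for s, p, o in graph if s == subject}
    return sorted(required - present)

print(missing_properties(product_graph, "ex:battery42", REQUIRED))
# ['ex:hazardousSubstances']
```

An empty result would indicate that the graph satisfies the (toy) minimum requirements, which is the kind of evidence a proof of compliance builds on.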
Much remains to be done. Vocabularies and ontologies for the industry will have to be created. The first milestone will be to address the new battery regulation (https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52020PC0798, accessed 2022-06-01).

Semantic enrichment of technical information: creating smart technical information
This section deals with the semantic enrichment of technical information that is intended to ensure the safe operation of machinery put into service in the European Union. The scope of this technical information is defined by the European Machinery Directive 2006/42/EC [33]. Technical information becomes smart by being labeled with metadata addressing its issues and concepts as well as the different entities it relates to, for example the machinery and its components (e.g., a battery), the actors and their roles (e.g., a service technician), and the actions it describes regarding use and maintenance (e.g., replacing a component). We call these data context-related metadata. Smart technical information has thus been semantically enriched and put into a comprehensive context.
Making legally compliant technical information smart enables it to be retrieved within a semantics-based information system via a faceted or semantic search, and to be semantically linked to other pieces of information. This development has increasingly taken hold over the last few years, driven by changing information consumption, the demand for context-aware information, and the digitalization of information delivery in the context of Industry 4.0 [34,35].
The semantics-based hybrid information system SOLIS-Doc is intended to provide a technologically and methodologically sound framework for authoring and delivering such smart technical information.
Authoring technical information often occurs within a component content management system (CCMS), which enables technical writers to author and manage technical information in a modular way: each single issue of the technical information becomes a small information unit, a topic, or a fragment. By assembling all the required topics, technical writers can then automatically publish an operating manual from the CCMS, either as a PDF document or as an HTML snippet.
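This modular publishing workflow can be sketched in a few lines: topics are small reusable units, and a publication is an ordered selection of them rendered into one output format. Topic identifiers, titles and contents below are invented for illustration; a real CCMS would of course manage versioning, metadata and multiple output pipelines.

```python
# Illustrative sketch of modular publishing from a CCMS.
# Each topic is a reusable information unit: (title, content).
topics = {
    "t-safety":  ("Safety instructions", "<p>Read before use ...</p>"),
    "t-startup": ("Putting into service", "<p>Connect the machine ...</p>"),
    "t-battery": ("Replacing the battery", "<p>Remove the cover ...</p>"),
}

def publish_html(topic_ids, title="Operating manual"):
    """Assemble selected topics, in order, into one HTML publication."""
    body = "".join(f"<h2>{topics[t][0]}</h2>{topics[t][1]}" for t in topic_ids)
    return (f"<html><head><title>{title}</title></head>"
            f"<body><h1>{title}</h1>{body}</body></html>")

manual = publish_html(["t-safety", "t-startup", "t-battery"])
```

The same topic set could feed a PDF pipeline instead; the point is that topics are authored once and reused across publications.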
To facilitate the modular authoring of legally compliant technical information within a CCMS, SOLIS-Doc provides the semantic model of an operating manual according to the European Directive as an RDF/RDFS ontology. The semantic model builds a minimally valid, and thus transferable, reference template of the entire mandatory content of such a legally compliant operating manual. The SOLIS-Doc Reference Template (see Fig. 8) can be configured and imported into a CCMS and provides technical writers with a reference structure including all the topics and fragments that they can use as a basis within their projects.
Over the SOLIS-Doc reference template, all reference topics have already been pre-labeled with standardized context-related metadata of the new iiRDS RDFS-based metadata model [36] (iiRDS: Intelligent Information Retrieval and Delivery Standard, developed by tekom e.V.; Release 1, Nov. 2020, https://iirds.org/fileadmin/iiRDS_specification/20201103-1.1-release/index.html, accessed 2022-06-21). Consequently, technical writers only need to add project-specific context-related metadata such as product metadata. These proprietary context-related metadata have been modeled within the SOLIS-Doc Twin of Customer (Product) World. Thanks to this pre-labeling, the amount of context-related metadata that needs to be assigned manually by technical writers is considerably reduced. (SOLIS-Doc was developed within project no. 20_0162_2A, funded by the state of Hesse, Germany, in funding line no. 2, Digital Innovation Projects, of the Distr@l program Strengthening Digitalization-Living Transfer; https://www.lidia-hessen.de/projekte/solis-docsmart-information-for-aftersales-services/, accessed 2022-06-21.)

The SOLIS-Doc metadata model not only uses standardized iiRDS vocabularies for labeling information units but also adds semantics to these vocabularies by modeling relations between them. Thus, SOLIS-Doc becomes an RDF knowledge graph, a semantically enriched iiRDS twin. This enriched twin further reduces the number of context-related metadata that technical writers must assign to information units. For example, since the iiRDS twin models that the action replacing (e.g., replacing a battery) is related to the product life cycle phase (PLCP) service (customer:replacing solisdoc:related-PLCP solisdoc:service), technical writers only need to assign the action-related metadata to a topic, while the PLCP-related metadata is automatically assigned to it by the CCMS.
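The automatic assignment just described can be sketched as a simple inference over the modeled relations. Here a plain dictionary stands in for the RDF triples of the iiRDS twin; the `customer:replacing`/`solisdoc:service` pair follows the example in the text, while the second relation is an invented placeholder.

```python
# Sketch of metadata inference in the semantically enriched iiRDS twin:
# solisdoc:related-PLCP relations, here as a dict standing in for RDF triples.
related_plcp = {
    "customer:replacing": "solisdoc:service",    # as in the text
    "customer:operating": "solisdoc:operation",  # invented placeholder
}

def enrich(topic_metadata):
    """Add the PLCP metadata implied by a topic's action metadata."""
    enriched = set(topic_metadata)
    for action in topic_metadata:
        if action in related_plcp:
            enriched.add(related_plcp[action])
    return enriched

# The writer labels a topic only with the action ...
labels = enrich({"customer:replacing"})
# ... and the PLCP label is assigned automatically by the CCMS.
print(sorted(labels))  # ['customer:replacing', 'solisdoc:service']
```

In the real system this inference runs over the RDF graph itself (e.g., via SPARQL or reasoning), but the effect on the writer's workload is the same.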
As shown in Fig. 8, SOLIS-Doc consists of three semantic models, each based on standard frameworks for the development of semantic web applications. SOLIS-Doc aims to be interoperable with different CCMSs as well as different content delivery portals (CDPs). A prerequisite, of course, is that these CCMSs and CDPs are technologically prepared to integrate semantic web-based models, a development that is also taking place within the SOLIS-Doc project.

Conclusions
In this article, we presented selected new directions in knowledge-based AI and ML, together with their corporate use cases. AI is much more than ML; knowledge-based AI still plays an important role in everyday applications. Knowledge-based AI and machine learning complement each other ideally, as their strengths compensate for the weaknesses of the other discipline. Machine learning approaches deliver good results in areas with little prior knowledge. They usually require a large amount of training data, but at the same time can be scaled to big data and handle noisy data well. However, they are also error-prone and lack the means to explain their decisions. This is where the strength of knowledge-based AI methods lies. Since they are based on explicit representations of human expert knowledge, they are applicable in areas where only limited data are available. They can be used for complex logical reasoning, their decisions can be explained to humans, and errors in their knowledge bases can be corrected more easily and quickly. On the other hand, the development of their knowledge bases is costly.
Hybrid AI approaches combine ML and knowledge-based AI. They have been researched for years and are successfully applied in practice today. The OMA-ML system presented above is a good example. To share experience in developing hybrid AI applications within the community, we are planning a new book about hybrid AI with ML and knowledge graphs, to be published by Springer.
Funding Open Access funding enabled and organized by Projekt DEAL.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.