We evaluate the AAS to SAAS mapping by examining the results and the performance of the three use cases (see Table 1). As a reference for estimating the information coverage, the number of XML nodes of the AAS serializations is provided. In addition, the number of unique leaves of the three XML trees is noted, as these numbers better reflect the actual information content of the AAS. Table 1 also presents the number of triples generated by the RMLMapper. The comparison indicates, as already mentioned, that not the entire expressiveness of the AAS can be carried over to the SAAS version. This is because some constructs cannot be represented sufficiently in RDF (for instance the Property class), and also because many original entities contain redundant information. In particular, the ConceptDescriptions repeat many attributes, which are collapsed by the mapping process and added only once.
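The coverage metrics can be reproduced in principle with a few lines of code. The following sketch (our illustration, not the tooling used in the evaluation) counts all XML nodes and the unique leaves of a toy AAS-like document; duplicated leaves, as produced for instance by repeated ConceptDescription attributes, are counted only once:

```python
# Sketch: count XML nodes and unique leaves, mirroring the metrics in Table 1.
# The sample document below is made up for illustration.
import xml.etree.ElementTree as ET

def coverage_metrics(xml_text: str):
    root = ET.fromstring(xml_text)
    nodes = list(root.iter())  # every element node, including the root
    # A "leaf" is an element without children; uniqueness over (tag, text)
    # collapses repeated entries such as duplicated attribute values.
    leaves = {(el.tag, (el.text or "").strip()) for el in nodes if len(el) == 0}
    return len(nodes), len(leaves)

sample = """
<aas><submodel><property><idShort>temp</idShort><value>21</value></property>
<property><idShort>temp</idShort><value>21</value></property></submodel></aas>
"""
print(coverage_metrics(sample))  # the duplicated property collapses to 2 leaves
```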
9.1 Mapping Time
The necessary overhead in terms of computation time, measured in milliseconds, is presented in Fig. 4, in addition to the average mapping times outlined in the last column of Table 1. The time was measured on a regular laptop (Win10, 16 GB, Intel i5-7300 2,60 GHz) using a bash emulation. The different RDF serializations barely influence the execution time, indicating that the writing step is not the bottleneck. While the average mapping times of the Raspberry Pi AAS (2,7 s) and AAS2 (3,1 s) are rather close, the duration for AAS3 (5,7 s) is significantly higher. The variation between the selected use cases reflects the differences in their XML file sizes, which could indicate that the overall behavior is nearly linear. However, each of the 19 TripleMaps leads to a reloading and reiteration of the whole XML file. Avoiding this expensive repetition would speed up the mapping significantly but is out of scope for this paper.
9.2 Data Overhead
RDF is, in general, not an efficient data format in terms of storage. Nevertheless, the syntax requirements of the AAS, and especially its XML schema, already create significant overhead for the original AAS model. As depicted in Table 1, all RDF serializations reduce the necessary storage size. Especially noteworthy is the difference between the original XML file size and the RDF/XML serialization. This is mostly due to the usage of namespaces in the RDF/XML version, which shortens the contained URIs. It should be mentioned that for all serializations the mapping step (m) and the serialization (ser) were executed directly by the mapping engine.
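The effect of namespaces can be demonstrated with a small sketch (the base URI and triples below are made up for illustration): spelling every URI out, as N-Triples does, consumes considerably more bytes than a prefixed serialization:

```python
# Sketch: namespace prefixes shorten RDF serializations by replacing
# repeated full URIs with short prefixed names. Hypothetical namespace.
BASE = "https://admin-shell.io/aas/3/0/"

# Full URIs on every triple, as in N-Triples.
ntriples = "\n".join(
    f'<{BASE}Submodel_{i}> <{BASE}idShort> "sm{i}" .' for i in range(100)
)
# One prefix declaration up front, short names afterwards, as in Turtle/RDF-XML.
prefixed = f"@prefix aas: <{BASE}> .\n" + "\n".join(
    f'aas:Submodel_{i} aas:idShort "sm{i}" .' for i in range(100)
)
print(len(ntriples.encode()), len(prefixed.encode()))  # prefixed is smaller
```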
Nevertheless, the resulting costs in terms of storage requirements and communication bandwidth do not exceed those created by the original Asset Administration Shells. Consequently, all devices and scenarios capable of handling AAS are also sufficient for the operation of SAAS. Furthermore, the possible serialization of SAAS as both XML and JSON should enable AAS implementations to quickly adapt to SAAS objects in their original file format.
9.3 Reasoning
Three different rule sets have been applied to all use cases. All rule sets contain a web request to the ontology source file in order to load the class hierarchy and any other relevant axioms of the data model itself. The first one also adds several rules reflecting the symmetry and transitivity of owl:sameAs, as well as the fact that same instances share all properties and annotations of each other. The second rule set contains subclass statements as encoded by the rules rdfs9 and rdfs11. The third set combines both into the most expressive reasoning set. Table 2 gives an overview of the number of created triples. rdfs:subClassOf, owl:sameAs, and the combination of both entailments are shown with the number of uniquely added triples and the average reasoning time.
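The entailments of these rule sets can be sketched with a minimal forward-chaining loop (our Python illustration, not the engine used in the evaluation; the class and instance names are made up): rdfs11 propagates rdfs:subClassOf transitively, rdfs9 propagates rdf:type along subclass edges, and the owl:sameAs rules add symmetry, transitivity, and property sharing:

```python
# Minimal forward-chaining sketch of the evaluated rule sets.
SUBCLASS, TYPE, SAMEAS = "rdfs:subClassOf", "rdf:type", "owl:sameAs"

def entail(triples: set) -> set:
    g = set(triples)
    changed = True
    while changed:
        changed = False
        new = set()
        for s, p, o in g:
            if p == SAMEAS:
                new.add((o, SAMEAS, s))              # sameAs symmetry
                for s2, p2, o2 in g:
                    if s2 == o and p2 == SAMEAS:
                        new.add((s, SAMEAS, o2))     # sameAs transitivity
                    if s2 == o and p2 != SAMEAS:
                        new.add((s, p2, o2))         # same instances share properties
            if p == SUBCLASS:
                for s2, p2, o2 in g:
                    if s2 == o and p2 == SUBCLASS:
                        new.add((s, SUBCLASS, o2))   # rdfs11
                    if o2 == s and p2 == TYPE:
                        new.add((s2, TYPE, o))       # rdfs9
        if not new <= g:
            g |= new
            changed = True
    return g

# Illustrative facts (hypothetical class hierarchy and instances).
facts = {
    ("aas:Property", SUBCLASS, "aas:SubmodelElement"),
    ("aas:SubmodelElement", SUBCLASS, "aas:Referable"),
    ("ex:temp", TYPE, "aas:Property"),
    ("ex:temp", SAMEAS, "ex:temperature"),
}
inferred = entail(facts) - facts
print(len(inferred), "uniquely added triples")
```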
We use the Linked Data-Fu engine. The preparation of the reasoning engine, involving the parsing of the rule files, takes around 1 s. Afterwards, the web request, the download of the ontology, the evaluation of the rules, and the serialization to an N-Triples file are executed. The duration distribution of ten repetitions is shown in Fig. 5. One can see that the whole process takes between 2,3 and 3,3 s, nearly independently of the size of the input (AAS3 is significantly larger than the graph for the Raspberry Pi) and the expressiveness of the rule sets (the second set yields far fewer results than the others).
As the rule sets only regard the structure of the ontology, the inferencing of context-dependent knowledge is not yet possible. In order to obtain productively usable information, domain-specific axioms tailored to the actually contained or expected data are necessary. However, we can show that the reasoning process with complex rules is applicable in an acceptable amount of time.
9.4 Schema Validation
The evaluation times of the SHACL shapes are shown in Fig. 6. On average, the execution of all shapes takes 46,2 s and the execution of a single shape 1,8 s. All shapes were executed a total of ten times.
About 2 s are required for setting up the validation tool and parsing the data shape (the Asset Administration Shell) and the single class shape. The size of the Asset Administration Shell has no significant impact on the achieved results. Given these conditions, we claim that the necessary effort is acceptable for a typical Industrie 4.0 scenario, as the validation itself is not necessary on every resource-restricted device. Validation of data either takes place at development or deployment time, where time is not critical. In addition, the validation is important for the higher-level data analytics services, which usually run on more powerful machines or are even hosted in the cloud.
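For illustration, a class shape of the kind timed above could look as follows (a sketch only: the namespace, property path, and cardinalities are our assumptions, not the shapes used in the evaluation):

```turtle
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix aas: <https://admin-shell.io/aas/> .   # hypothetical namespace

aas:SubmodelShape a sh:NodeShape ;
    sh:targetClass aas:Submodel ;              # validate every Submodel instance
    sh:property [
        sh:path aas:idShort ;                  # assumed property path
        sh:datatype xsd:string ;
        sh:minCount 1 ;                        # idShort is mandatory
        sh:maxCount 1 ;
    ] .
```

A validator reports a violation for every Submodel instance that lacks exactly one string-valued idShort; the per-shape setup and evaluation cost discussed above applies to each such class shape.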