Securing ubiquitous AR services
This article describes a new approach to creating and securing ubiquitous augmented reality (AR) systems. The creation of AR presentations in distributed environments, where presentations can be dynamically composed at runtime from distributed data sources and the usage context, and where new services can be dynamically added by various service providers, raises security concerns related to both service access control and users' privacy. To address this challenge, we propose a generic architecture for the deployment of ubiquitous AR services together with an application-layer security protocol that enforces usage of AR content according to semantically described usage policies.
Keywords: Access control, Security protocol, User privacy, Augmented reality, Semantic web, Mobile applications, Ubiquitous applications
1 Introduction
Augmented reality (AR) technology enables superimposing computer-generated content, such as interactive 2D and 3D multimedia objects, in real time, on a view of real-world objects. Widespread use of AR technology has been enabled in recent years by remarkable progress in consumer-level hardware, in particular the computational and graphical performance of mobile devices and the quickly growing bandwidth of mobile networks. Augmented reality, with its potential to blend real and virtual objects, creates new opportunities for building immersive and engaging applications. Education [5, 38, 41], entertainment [14, 16, 25], medicine [15, 23, 36], and cultural heritage [10, 19, 37] are examples of application domains in which AR-based systems are increasingly being used.
Existing AR platforms support mainly two forms of augmentation: directional augmentation, based on the relative geographical position and orientation of the user's device and fixed coordinates of specific points of interest, and image-based augmentation, based on image matching and tracking. The advantage of image-based augmentation is that synthetic content is directly aligned with the view of real-world objects. However, due to limitations of the available image matching algorithms, image-based AR applications are built independently for specific purposes: a user has to install a new application to be able to access new content. Yet, taking into account the diversity of application domains and of information that can be presented using AR technology, the most promising direction is ubiquitous environments, in which different kinds of augmenting content can be contributed by different users and providers. In such systems, AR presentations can be created dynamically based on the available data sources and the current context, through selection of data and automatic composition of AR scenes.
To enable the creation of ubiquitous AR environments, the concept of Contextual AR Environments (CARE) has been proposed [32, 33]. CARE enables the creation of AR presentations that combine the advantages of both directional and image-based augmentation. In CARE, AR presentations use image-based augmentation, but they are dynamically composed in real time based on the current context and multiple distributed data sources. Contextual creation of AR presentations enables access to a variety of data sources and guarantees scalability and seamless operation. In CARE, AR presentations are accessed through mobile devices equipped with a camera and are visualized using a dedicated mobile application, called the CARE Browser, which is capable of presenting rich image-based augmentations coming from various specialized services.
The main contributions of this paper are the following:
- an application-layer security protocol based on an architecture for interdependent AR services;
- the design of trusted security middleware;
- a semantic representation of access/usage control policies for AR content/services and of privacy policies for AR end-users;
- the identification and detailed description of a use case scenario for the proposed system.
The protocol assures access control that is fine-grained, since precise, semantically described AR usage policies are employed, and, at the same time, comprehensive, since it takes into account the interests of all stakeholders, namely end-users, scenario providers, trackable object providers, multimedia content providers, and business dataset providers. It allows interoperability of usage policies of AR services, and therefore decentralized deployment of loosely coupled ubiquitous AR systems. Furthermore, it preserves user anonymity and privacy according to users' preferences.
The remainder of this paper is structured as follows. Section 2 presents the current state of the art in approaches to building interoperable AR systems and the challenges related to AR data security. Section 3 introduces SA-CARE, an environment for creating secure ubiquitous AR systems, describes the proposed architecture and access control protocol, discusses their security properties, and provides information about the implementation. Section 4 describes an example application of the proposed approach in a smart city environment. Finally, Section 5 concludes the paper.
2 State of the art
2.1 Context-aware AR systems
In spite of the current success of AR technologies, many practical applications serve a specific purpose and are used in a specific domain. Moreover, the lifetime of such AR systems is relatively short. Applications of this kind do not allow for experiencing AR presentations in a continuous and contextual manner, i.e., regardless of where the user is located (indoors or outdoors) and of what kind of device is used, while taking into account user preferences and needs. Thus, new data models and approaches are required that go beyond the standard techniques of AR application development as we know them today, towards continuous, adaptive, and context-aware AR systems. For instance, Schmalstieg and Reitmayr argued that ubiquitous AR systems require independence of the data model from specific applications, and that, to this end, a semantic model of geo-referenced data can be used . The authors derived a data model that allows a suitable degree of semantic reasoning for mobile AR and described how it can be used in urban navigation. Reynolds et al. discussed future directions for mobile AR applications , in particular how Linked Data can be used in mobile AR browsers for enhancing the reality with information about local points of interest. The authors argue that semantic web technologies can be used for dynamic selection and integration of data from different sources. Furthermore, the use of the cloud of Linked Open Data, such as GeoNames, LinkedGeoData, and DBpedia, can provide a wide range of contextual information for mobile AR browsers. The authors also state that the browsing experience with Linked Data is similar to what we know from surfing the internet with standard web browsers. Another example of using semantic web technologies in an AR system has been presented in . The authors developed a location-based outdoor application that combines Linked Data with domain-specific cultural heritage content.
The mobile application explores and visualizes data provided by a back-end server based on the user's GPS location. Hervás et al. presented a ubiquitous AR information system that describes context information with the semantic web and QR codes . The authors developed a general model for transforming the physical location of objects into a virtual representation. In order to adapt the synthetic content presented in the user interface, the solution requires collecting data from an accelerometer and a digital compass. QR codes are used to provide data corresponding to the user's location. Nixon et al. demonstrated the SmartReality platform, which combines AR and semantic web technologies in the entertainment domain . The goal of this AR system is to use Linked Data to provide information about places and events in the vicinity of the user's location. The system uses metadata to select the most appropriate information.
To date, a number of context-aware AR applications have been developed. For instance, in , the Argon AR browser has been presented, which permits presenting AR content from multiple sources. Argon has been used in various domains, such as cultural heritage and community-based AR applications . A contextual approach to indoor navigation using semantic web and AR technology has been presented by Matuszka et al. . Similarly to the solution presented by Hervás et al., QR codes are used for recognizing the coordinates of indoor locations, which are further processed by a server responsible for providing semantically described location information (passages, corridors, exits, etc.) associated with the QR codes. Additionally, the server computes possible paths between two locations using SPARQL queries. The QR codes are also used for visualization of 3D arrows indicating the direction to a chosen location. In , the authors present the architecture of a semantically enriched location-based AR browser. On the basis of the user's geographical location, the client application retrieves RDF data from DBpedia. The AR interface visualizes 2D annotations representing selected RDF data.
Noteworthy tools to model ubiquitous AR applications have also been built using web interfaces. For instance, one of the crucial parts of the OutdoorAR framework is a web-based authoring application in which a user can browse, modify, and manage geo-located scene information . With this tool, a user is also able to create a new AR scene by specifying the characteristics of the scene, uploading related media assets, and placing them on a map. Finally, these media data can be retrieved by a mobile application that implements the OutdoorAR framework. Another approach, presented in , allows users without programming skills to create interactive AR presentations on mobile devices directly on-site. Last but not least, designers can also use commercial web-based applications, such as Layar Creator, Wikitude Studio, and Aurasma Studio, to rapidly prototype contextual AR experiences. On the other hand, popular low-level libraries such as ARToolKit, Vuforia, ARKit, and ARCore can be used to build AR applications; however, incorporating a contextual approach within the application requires sophisticated programming skills.
2.2 AR data security
Data security risks are much higher in ubiquitous AR than in regular systems because of the continuous mode in which AR systems operate. Complex AR applications require an always-recording feature, which can lead to the data aggregation phenomenon , related mostly to the temporal and spatial accumulation of raw visual data that can be privacy-sensitive. It also raises risks related to user location disclosure (identity privacy, user's position privacy, user's movement path privacy) in the context of user anonymity, unlinkability of user actions, and the strongest requirement of complete unobservability of user actions.
Mobile AR systems employ new input techniques, such as voice or gaze-tracking technologies. Usage of these methods while running multiple applications simultaneously produces new data security threats related to inaccurate identification of the application that is in focus and should receive input . This threat is even more significant when multiple AR applications expose their APIs to each other and users can share multimedia content between these applications.
Generally, in order to preserve data security in AR systems, access to users' data can be limited by different techniques, applied separately or in combination: policy-based techniques, e.g., formalized XACML security policies; privacy-preserving querying, e.g., based on database anonymization techniques; techniques dedicated to mobile solutions, e.g., spatial cloaking, etc. However, constant progress in AR techniques, in conjunction with the development of mobile infrastructures, poses a challenge for existing systems. In particular, content processed by multiple AR services interacting dynamically with each other in a mobile environment and provided by distributed service providers (SPs) cannot be sufficiently protected by current access/usage control approaches.
The most prominent standardization effort in the domain of protecting the usage of multimedia content is MPEG-21 REL, a rule-based access control language . Unfortunately, the digital item representation on which this model is based is not expressive enough to support interactive AR presentations with spatially-sensitive composite content contributed by different SPs. Generic languages developed for modelling attribute-based access control, such as XACML , despite their usefulness in many multimedia protection scenarios, do not support spatial constraints. XACML has a spatial extension called GeoXACML ; however, mainly due to its two-dimensional nature and its lack of AR interaction protection, GeoXACML is not sufficient for AR frameworks. The same applies to GEO-RBAC .
It has to be noted that there exist general-purpose security protocols, such as SAML 2.0  and OAuth 2.0 , that are designed to control access to distributed data or services. They could partially be utilized as basic building blocks in the process of designing and implementing a secure framework for distributed AR services (e.g., SAML for authentication or OAuth for authorization). However, alone they do not constitute the holistic approach that is required for domain-specific service protection taking advantage of semantically represented security policies.
There is also AR-specific research on data security focused on user privacy protection. Due to the novelty of the problem, many works only point to forthcoming research directions, but do not present any solutions. An example is , in which OS-level access control to AR objects, such as a human face or skeleton, is discussed. Other researchers focus on providing AR-specific location privacy ; however, they propose an anonymization-based approach only.
3 Securing contextual AR environments
3.1 SA-CARE architecture
Contextual AR environments consist of AR presentations which are not static, as in typical AR systems, but are dynamically composed based on the usage context, including the user's preferences and privileges, location, time, device capabilities, previous actions, etc. Typically, contextual AR environments are ubiquitous. For example, a user can walk around a city and observe relevant augmentations using a mobile device equipped with a camera and a dedicated browser application. A contextual AR system, called CARE, has been presented in [32, 37].
To enable dynamic composition of AR presentations in CARE, four types of semantically described elements are used. The first type is a trackable object: a visual marker representing a real-world object that can be augmented. The second type is a content object, representing 2D and 3D multimedia content to be presented in relation to the markers. The third type is a dataset related to business services provided in AR environments. The last type is an AR scenario describing the course of presentations, i.e., the objects being presented, the spatio-temporal relationships between the objects, and their behavior. In general, these four types of elements are independent of each other and may be offered by various SPs in a distributed architecture. Discovery of and matching between the particular elements of AR presentations are possible based on semantic web techniques.
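To make the composition mechanism concrete, the following sketch illustrates how the four element types could be matched by shared ontology concepts. This is a simplified Python illustration with hypothetical names and data structures; in CARE, the elements are described with semantic web formalisms (RDF/OWL) and matched through semantic reasoning, and the system itself is implemented in Java.

```python
# Illustrative sketch (not the actual CARE data model): each element is
# tagged with semantic concepts, and a matcher pairs a trackable object
# with the content objects that share at least one concept with it.

from dataclasses import dataclass

@dataclass
class ARElement:
    uri: str
    concepts: frozenset  # concepts from a hypothetical domain ontology

class Trackable(ARElement):
    pass

class ContentObject(ARElement):
    pass

def match_content(trackable, content_objects):
    """Select content objects semantically related to a trackable."""
    return [c for c in content_objects if trackable.concepts & c.concepts]

poster = Trackable("ex:poster42", frozenset({"ex:Movie", "ex:Poster"}))
contents = [
    ContentObject("ex:trailer42", frozenset({"ex:Movie", "ex:Video"})),
    ContentObject("ex:menu7", frozenset({"ex:Restaurant"})),
]
print([c.uri for c in match_content(poster, contents)])  # ['ex:trailer42']
```

In the real system, this matching is not a set intersection but reasoning over ontology-described services, which also allows indirect matches (e.g., via subclass relations).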
Both the real-world objects being tracked and the synthetic content being presented depend on the used AR scenarios. For positioning of synthetic content, the browser captures images from the camera and detects trackable objects. Synthetic content is generated based on content objects and datasets. The content objects are media objects (i.e., 2D, 3D, audio, video), while the business datasets are structured data related to the execution of business processes. The datasets can be visualized in an AR environment, and can also be used to control the flow of presentations.
The dynamism underlying creation of AR presentations in CARE with the use of distributed independent data sources requires comprehensive consideration of security aspects. This includes usage control policies (impacting both users’ access to services and services’ access to users’ data) for trackable objects, content objects, datasets, and scenarios; consistency of presentation data and metadata, as well as privacy of users’ actions. Below, we present a security-aware extension of CARE, SA-CARE.
Trackable AR Services – provide trackable objects, which are binary representations of physical objects. Trackable objects are used for tracking physical objects, which can be augmented with multimedia content. Those services are usually provided by owners of physical objects represented by the trackable objects, e.g., museums may share photos of the cultural objects held in their collections, advertising agencies may publish images of advertising posters.
Content AR Services – provide content objects, which are used for augmenting physical objects and enabling user interaction. Content objects include 3D models, 2D images, video, audio, and text. Providers of those services can be entities which are either the owners of physical objects being augmented or entities independent of the owners of physical objects providing third-party content.
Dataset AR Services – provide datasets consisting of texts and numbers retrieved from IT systems of business entities interested in providing services through AR presentations. Business datasets are structured to enable their automatic processing. In general, data retrieved from dataset AR services are not directly visualized within AR environments, but they can be transformed into a multimedia form (e.g., 2D images, 3D models) for visualization. Also, datasets can be used for modifying visual, spatial, temporal and behavioral features of AR presentations. Dataset AR services can be provided by various entities: the owners of physical objects, multimedia content providers or other independent entities such as educational institutions, tourist agencies, municipalities, etc.
Scenario AR Services – provide AR scenarios, which specify visual, spatial, temporal and behavioral features of AR presentations. Scenario AR services can be offered by different entities, but most often they are business or public entities that are interested in the development of an AR interface to their services.
Semantic AR Service Catalog – stores semantic descriptions of both the available services and the data provided through these services. These data are made available in the user's context, which may consist of time, location, user preferences, the status of business services, etc. Given a query sent by the browser, the service catalog responds with the addresses of the AR services (trackables, content objects, scenarios, and datasets) that satisfy the query conditions.
User Assertion Provider – provides digitally signed assertions proving the authenticity of the values of the user attributes. The User Assertion Provider uses an internal user database, external data sources, or the UC Ontology (Usage Control Ontology) to verify claims, if required.
Ontologies – AR Ontology describes concepts and relations between objects used to model generic AR scenes and interactions. UC Ontology describes security-related concepts and relations. Domain ontologies describe domain-specific concepts and relations that constitute a base for the Domain Knowledgebase. All these ontologies constitute common vocabulary and formalism used for building semantic UC policies for AR scenarios, trackables, content and data objects, for building semantic user privacy policies, and for building Semantic AR Service Catalog. Last but not least, they are used by the Semantic Policy Decision Point in the policy evaluation process.
Domain Knowledgebase – in the Domain Knowledgebase, knowledge consistent with the above-mentioned ontologies is stored, describing particular AR services. The Domain Knowledgebase is a data source for the Semantic AR Service Catalog, and for the Semantic Policy Decision Point.
Semantic Policy Decision Point – evaluates the UC policies and users’ privacy policies taking into account user attributes, ontologies, and the Domain Knowledgebase.
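As an illustration of the decision process, the sketch below shows how a policy decision point could combine user attributes (taken from assertions) with knowledgebase facts. All policy conditions, attribute names, and URIs here are hypothetical; the actual Semantic Policy Decision Point evaluates semantically represented policies against ontologies and the Domain Knowledgebase rather than Python predicates.

```python
# Hypothetical sketch: a UC policy is modeled as a list of conditions
# over (a) user attributes taken from assertions and (b) facts from the
# Domain Knowledgebase, here represented as a set of triples.

def evaluate(policy, assertions, knowledgebase):
    """Return 'allow' iff every policy condition holds."""
    for condition in policy:
        if not condition(assertions, knowledgebase):
            return "deny"
    return "allow"

# Example UC policy: adult users only, and the scenario must be
# registered in the knowledgebase as provided by the municipality.
policy = [
    lambda a, kb: a.get("age", 0) >= 18,
    lambda a, kb: ("ex:cityGuide", "ex:providedBy", "ex:municipality") in kb,
]

assertions = {"age": 27}
kb = {("ex:cityGuide", "ex:providedBy", "ex:municipality")}
print(evaluate(policy, assertions, kb))  # allow
```

The second condition is what distinguishes the SA-CARE setting from plain attribute-based access control: a decision may depend on facts about other services and providers, not only on the requesting user.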
3.2 Access control protocol
For clarity of presentation, the diagrams do not show technical messages that do not influence the logic of the data flow; e.g., digital signature verification steps are intentionally omitted. Also, the procedure of secure scenario authoring is regulated by a separate protocol, which is out of the scope of this work. As a prerequisite, the SA-CARE Browser registers itself at the User Assertion Provider and proves its attributes through a secure channel.
An SA-CARE Browser requests an AR scenario from a Scenario AR Service (this request is denoted as the initial request).
The Scenario AR Service responds with either an AR scenario in the case of a publicly available scenario (and then it ends the message interchange process) or a request for a list of selected user attributes that are necessary to evaluate a usage control policy.
The user, after authentication with a certificate, requests from the User Assertion Provider digitally signed user assertions confirming values of the attributes.
The User Assertion Provider responds with digitally signed user assertions proving authenticity of the values of the requested attributes together with a timestamp.
The SA-CARE Browser requests a decision regarding the initial request from the Semantic Policy Decision Point. In the request message, the SA-CARE Browser also sends the user assertions obtained in the previous step.
The Semantic Policy Decision Point requests the Scenario AR Service for the usage control (UC) policy.
The Semantic Policy Decision Point receives the UC policy for the scenario from the Scenario AR Service.
In some cases, knowing the semantic policy and the user attributes is enough for the Semantic Policy Decision Point to evaluate the policy with respect to the initial request. However, generally it is required to query the trusted Domain Knowledgebase.
The trusted Domain Knowledgebase sends back the facts required by the process of evaluating the policy with respect to the initial request.
The Semantic Policy Decision Point evaluates the UC policy with respect to the initial request. The digitally signed result (“allow” or “deny” assertion with a timestamp) is sent back to the SA-CARE Browser.
Having the “allow” assertion, the SA-CARE Browser passes it to the Scenario AR Service.
In the response message, the Scenario AR Service sends the requested AR scenario back to the SA-CARE Browser.
An AR scenario may contain semantic rules that specify trackable objects, content objects, datasets, and other referenced AR scenarios, which are to be used within the scenario in a specific context. Therefore, the SA-CARE Browser sends these semantic parameters to the Semantic AR Service Catalog.
The Semantic AR Service Catalog performs reasoning based on semantic data from the Domain Knowledgebase and responds with URIs of the required trackable objects, content objects, and datasets.
In the subsequent steps, based on the protocol pattern described above (steps #1-#16), the SA-CARE Browser obtains required data from Trackable AR Services, Content AR Services, and Dataset AR Services.
The most important difference in the data flow of the 2nd course of the protocol concerns usage control of the trackable objects, content objects, and datasets according to their UC policies. Before the evaluation of such a policy in step #7 (in the 2nd or subsequent courses of the protocol), the SA-CARE Browser sends to the Semantic Policy Decision Point not only user assertions, but also the scenario "allow" proofs (obtained in step #12 of the 1st course). Therefore, the Semantic Policy Decision Point "knows" the usage context of the trackable objects, content objects, and datasets (i.e., the scenario in which they are going to be used), and UC policies constraining their usage in specified scenarios can be evaluated.
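The essence of the first protocol course can be sketched as follows. This is a minimal Python illustration under simplifying assumptions: all message fields and keys are invented, and a shared-key HMAC stands in for the asymmetric digital signatures (and secure channels) that the protocol actually relies on.

```python
# Condensed, illustrative sketch of the first protocol course:
# assertion issuance -> policy decision -> scenario release.

import hmac, hashlib, json, time

UAP_KEY = b"user-assertion-provider-key"   # illustrative shared keys;
PDP_KEY = b"policy-decision-point-key"     # real signatures are asymmetric

def sign(key, payload):
    body = json.dumps(payload, sort_keys=True)
    return {"payload": payload,
            "sig": hmac.new(key, body.encode(), hashlib.sha256).hexdigest()}

def verify(key, message):
    body = json.dumps(message["payload"], sort_keys=True)
    expected = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(message["sig"], expected)

# The User Assertion Provider issues timestamped attribute assertions.
assertion = sign(UAP_KEY, {"user": "u1", "age": 27, "ts": time.time()})

# The Semantic Policy Decision Point verifies the assertion, evaluates
# the UC policy, and returns a signed, timestamped decision.
def pdp_decide(assertion, policy):
    if not verify(UAP_KEY, assertion):
        return sign(PDP_KEY, {"decision": "deny"})
    decision = "allow" if policy(assertion["payload"]) else "deny"
    return sign(PDP_KEY, {"decision": decision, "ts": time.time()})

decision = pdp_decide(assertion, lambda attrs: attrs["age"] >= 18)

# The Scenario AR Service releases the scenario only against a valid
# signed "allow" proof.
def scenario_service(proof):
    if verify(PDP_KEY, proof) and proof["payload"]["decision"] == "allow":
        return {"scenario": "ex:cityGuide"}
    return None

print(scenario_service(decision))  # {'scenario': 'ex:cityGuide'}
```

In the subsequent courses, the browser would additionally attach the scenario "allow" proof to the decision request, so the decision point can evaluate scenario-dependent UC policies of trackables, content objects, and datasets.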
3.3 Security analysis
The proposed protocol provides the following security properties:
1. Fine-grained access control. The authorization decisions are based on custom and precise attributes (assertions) taking into account AR specificity.
2. Broad range of access control. SPs can limit the usage of their services and content according to specified attributes of end-users as well as specific attributes of other SPs (e.g., a conflict of interests between a trackable object provider and a scenario provider).
3. Interoperability of the usage control policies. Interoperability is provided due to semantic representations of policy rules, as opposed to low-level technical values or parameters.
4. Applicability for decentralized processing in large-scale and ubiquitous (e.g., city-wide) AR systems:
- AR scenario providers are independent from trackable object providers, data providers, and content object providers. Specialized services can be developed and separation of concerns is maintained.
- SPs can take advantage of a trusted semantic AR catalog created as an element of the trusted infrastructure.
3.4 Implementation
The SA-CARE approach has been developed on the basis of the service-oriented architecture paradigm. SA-CARE enables semantic modeling of AR environments, dividing responsibilities between loosely coupled services distributed on the internet. The services are consumed by software clients running on the mobile devices of users, who can move freely in ubiquitous environments. Computationally intensive semantic processing is executed on the server side, while real-time rendering of AR presentations is done on the client side.
Client side. The SA-CARE Browser is a client application that runs on a user's mobile device and is responsible for communication with AR services and real-time rendering of AR presentations. The SA-CARE Browser is built on top of the OpenGL ES library , which makes it possible to render content objects and data objects provided by diverse vendors. To recognize and track planar images in real time, the SA-CARE Browser uses the Vuforia computer vision library . The application collects data from various sources: Bluetooth (via the AltBeacon library , which provides APIs for receiving notifications when beacon devices appear or disappear in the sensing range), GPS (via the Android Location Manager, to obtain the geographical position of the end-user), and the Android OS and Vuforia, to retrieve knowledge about the type of device used by the end-user.
The application is based on the REST architectural paradigm and can communicate with multiple distributed AR services. The SA-CARE Browser converts arbitrarily complex Java objects into their JSON representation and vice versa with the use of the GSON library  while communicating with AR services. The application has been implemented in Java and runs on the Android platform.
Server side. The architecture of SA-CARE’s server is based on Spring and Apache CXF frameworks. The server side is built on top of the Apache Jena SPARQL library. The system consists of a number of RESTful SOA services. The services essential to creating AR presentations are: Scenario AR Services and Semantic AR Service Catalog, which automatically communicate with each other and with the SA-CARE Browser.
The performance of a system built according to the proposed framework strongly depends on the performance of the reasoning in the Semantic Policy Decision Point, realized in step #12 of the protocol. The reasoning (specifically, inference) performance depends on the ontology size and, even more, on the ontology complexity, as well as on the reasoner itself. For the inference, the Apache Jena OWL reasoner is employed. The Jena OWL reasoner is about 3-4 times slower than the Jena RDFS reasoner . Thus, if performance issues become critical in a given application, one way of improving performance is migration to the Jena RDFS reasoner on pure RDFS data. Another way is to apply the Jena OWL Micro reasoner, which is intended to be close to RDFS performance while also supporting the core OWL constructs. Also, for large-scale deployments, the Apache Jena TDB repository should be used for high-performance storage and querying.
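The cost driver can be seen even in a toy reasoner. The sketch below forward-chains the rdfs:subClassOf transitivity rule over a tiny, invented ontology; real deployments delegate this to the Jena reasoners, where the same repeated-closure behavior explains why inference time grows with ontology size and complexity.

```python
# Toy forward chaining of the rdfs:subClassOf transitivity rule: the
# closure is recomputed until no new triples appear, so cost grows
# quickly with the number of subclass axioms in the ontology.

def rdfs_closure(triples):
    closure = set(triples)
    changed = True
    while changed:
        changed = False
        sub = [(s, o) for (s, p, o) in closure if p == "rdfs:subClassOf"]
        for (a, b) in sub:
            for (c, d) in sub:
                if b == c and (a, "rdfs:subClassOf", d) not in closure:
                    closure.add((a, "rdfs:subClassOf", d))
                    changed = True
    return closure

onto = {
    ("ex:MoviePoster", "rdfs:subClassOf", "ex:Poster"),
    ("ex:Poster", "rdfs:subClassOf", "ex:TrackableObject"),
}
inferred = rdfs_closure(onto)
print(("ex:MoviePoster", "rdfs:subClassOf", "ex:TrackableObject") in inferred)  # True
```

OWL adds many more entailment rules of this shape (property characteristics, restrictions, etc.), which is why the OWL reasoner is noticeably slower than the RDFS one on the same data.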
4 Use case
In this section, an application example demonstrating the practical use of the proposed approach to develop a secure ubiquitous AR environment is described.
4.1 City-wide AR exploration scenario
As an example, consider a bus-stop shelter on which several city light boxes are installed. The boxes are maintained by an advertising agency, which uses them for showing advertising posters of its clients. When a user points a smartphone towards the bus stop sign, a new AR scenario, specific to the advertising agency, is started.
The stakeholders in this scenario are the following:
1. The providers of scenario AR services:
- municipal information services for the city-wide exploration scenario;
- an advertising agency or another entity involved in the development of interactive multimedia presentations.
2. The providers of trackable AR services:
- municipal information services providing first-stage scenario trackable objects;
- a movie distributor, or an advertising agency that can create a trackable object based on a poster image provided by the distributor.
3. The providers of content AR services:
- a digital art agency (a movie distribution company) providing multimedia data related to the movie (e.g., a trailer, photos);
- an advertising agency or other entities providing multimedia content required to build a user interface for AR scenarios.
4. The providers of dataset AR services:
- popular movie websites providing reviews and user opinions on movies;
- cinemas providing information on tickets for movies and services for buying tickets online.
The AR presentation (Fig. 4) is composed of four elements: movie trailer, ticket price, buy ticket button, and movie rating. A user can interact with the AR presentation, e.g., he/she can play the movie trailer by tapping on the play button. An electronic ticket to a cinema can be acquired by tapping on the buy ticket button.
4.2 Data flow in the scenario
The protocol described in Section 3.2 is employed multiple times in the presented use case – it is used as a basic building-block for message interchange. This section illustrates how it proceeds in the case of AR-based exploration of cultural events in a city.
Initially, the SA-CARE Browser, based on recorded user preferences, requests a generic, city-wide exploration AR scenario from a municipal information service. The requested scenario is public, so it is sent back directly to the client (cf. protocol step #2, public case). The scenario requires trackable objects, i.e., images of bus stop signs, and content objects, i.e., interaction elements enabling activation of the second-stage scenario. The required trackable objects and content objects are obtained by the browser from municipal trackable services and content object services, based on public policies (the simple two-step version of the protocol).
The second-stage AR scenario requires trackable objects representing posters and, for each poster, a number of content objects and datasets; in the case of a movie poster, these are trailers, photos, cinema ticket prices, and user reviews. These objects are not hard-coded in the scenario as URIs of pre-selected services, but are selected using semantic rules. These rules are evaluated by the Semantic AR Service Catalog (steps #15-#16), which either has the required knowledge regarding the services and their providers or uses external knowledge sources from the Domain Knowledgebase, e.g., about movies. In the subsequent courses of the protocol, a trackable object of a movie poster is obtained from a service provided by a movie distributor, content objects (trailers, photos) are obtained from services offered by a digital art agency, and datasets (cinema ticket prices and user reviews) are obtained from cinema services and movie critics portals.
In the presented example, a movie distributor, as part of its semantic usage control policy, restricts the use of its trackable objects to scenarios created by advertising agencies bound by a legal agreement (the constraint contains no hard-coded service URIs but is again expressed semantically). Therefore, knowledgebase querying precedes the policy evaluation (steps #10-#11 are not omitted in this protocol course). Similarly, a movie critics portal, in its semantic usage control policy, allows the use of its data (user reviews of a particular movie) only within scenarios that are directly referenced by scenarios provided by municipal information services (an anti data-harvesting policy), which is also expressed semantically. Thus, also for this business dataset, knowledgebase querying (steps #10-#11) precedes the policy evaluation in the corresponding protocol course.
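The distributor's usage-control check can be sketched as a policy evaluated over facts obtained from the knowledgebase. The knowledgebase contents and provider identifiers below are invented for illustration; only the ordering – querying the knowledgebase (steps #10-#11) before evaluating the policy – mirrors the protocol course described above:

```python
# Sketch of the movie distributor's semantic usage-control policy: trackables
# may be used only in scenarios created by advertising agencies bound by a
# legal agreement. The dictionary stands in for the Domain Knowledgebase;
# all facts and identifiers are illustrative.

KNOWLEDGEBASE = {
    "adsmart-agency": {"type": "advertising-agency", "legal_agreement": True},
    "random-blogger": {"type": "individual", "legal_agreement": False},
}

def query_kb(provider_id):
    """Steps #10-#11: obtain facts about the requesting scenario's provider."""
    return KNOWLEDGEBASE.get(provider_id, {})

def evaluate_policy(provider_id):
    """Distributor's rule: advertising agency AND bound by a legal agreement."""
    facts = query_kb(provider_id)          # knowledgebase querying first
    return (facts.get("type") == "advertising-agency"
            and facts.get("legal_agreement", False))

allowed = evaluate_policy("adsmart-agency")
denied = evaluate_policy("random-blogger")
```

Because the rule references semantic properties of the provider rather than a list of approved URIs, new agencies satisfying the constraint are admitted without any change to the policy.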
When a user points the smartphone camera at a movie poster, the scenario running on the smartphone augments the poster with trailers, photos, user reviews, and ticket prices. The user can also tap the augmenting buttons to buy cinema tickets using the appropriate business services.
5 Conclusions
The presented SA-CARE approach enables the development of a new class of augmented reality applications, in which security constraints are applied in the process of dynamic creation of interactive AR presentations. The approach is based on semantically modeled access control policies, combined with a “privacy-by-design” system architecture and a protocol that separates stakeholders’ duties and reduces the attack surface. The implemented and tested prototype system demonstrates the validity of the theoretical model.
This research work has been supported by the Polish National Science Centre (NCN) Grants No. DEC-2012/07/B/ST6/01523 and DEC-2016/20/T/ST6/00590.
- 1.Aart C, Wielinga B, Hage WR (2010) Knowledge Engineering and Management by the Masses: 17th International Conference, EKAW 2010, Lisbon, Portugal, October 11-15, 2010. Proceedings, chap. Mobile Cultural Heritage Guide: Location-Aware Semantic Search, pp. 257–271. Springer Berlin. https://doi.org/10.1007/978-3-642-16438-5_18
- 2.Apache Software Foundation (2017) Apache Jena Documentation. https://jena.apache.org/documentation/
- 3.Aryan A, Singh S (2010) Protecting location privacy in augmented reality using k-anonymization and pseudo-id. In: 2010 international conference on computer and communication technology (ICCCT). IEEE, pp 119–124
- 4.Bertino E, Catania B, Damiani ML, Perlasca P (2005) GeoRBAC: a spatially aware RBAC. In: Proceedings of the tenth ACM symposium on access control models and technologies. ACM, pp 29–37
- 6.Cantor S, Kemp IJ, Philpott NR, Maler E (2005) Assertions and protocols for the OASIS Security Assertion Markup Language. OASIS Standard
- 7.D’Antoni L, Dunn AM, Jana S, Kohno T, Livshits B, Molnar D, Moshchuk A, Ofek E, Roesner F, Saponas TS et al (2013) Operating system support for augmented reality applications. In: HotOS
- 8.Google Inc. Gson – an open source Java library to serialize and deserialize Java objects to (and from) JSON. https://github.com/google/gson
- 9.Hardt D (2012) The OAuth 2.0 authorization framework. RFC 6749. http://tools.ietf.org/html/rfc6749.html
- 10.Haugstvedt AC, Krogstie J (2012) Mobile augmented reality for cultural heritage: a technology acceptance study. In: 2012 IEEE international symposium on mixed and augmented reality (ISMAR). IEEE, pp 247–255
- 11.Hervás R, Garcia-Lillo A, Bravo J (2011) Ambient Assisted Living: Third International Workshop, IWAAL 2011, Held at IWANN 2011, Torremolinos-Málaga, Spain, June 8-10, 2011. Proceedings, chap. Mobile Augmented Reality Based on the Semantic Web Applied to Ambient Assisted Living, pp. 17–24. Springer Berlin. https://doi.org/10.1007/978-3-642-21303-8_3
- 12.Jana S, Narayanan A, Shmatikov V (2013) A scanner darkly: Protecting user privacy from perceptual applications. In: 2013 IEEE symposium on security and privacy, pp 349–363. https://doi.org/10.1109/SP.2013.31
- 13.Khronos Group The standard for embedded accelerated 3d graphics. https://www.khronos.org/opengles/
- 14.Koutromanos G, Styliaras G (2015) The buildings speak about our city: a location based augmented reality game. In: 2015 6th international conference on information, intelligence, systems and applications (IISA). IEEE, pp 1–6
- 15.Kugelmann D, Stratmann L, Nühlen N, Bork F, Hoffmann S, Samarbarksh G, Pferschy A, von der Heide AM, Eimannsberger A, Fallavollita P, Navab N, Waschke J (2017) An augmented reality magic mirror as additive teaching device for gross anatomy. Annals of Anatomy - Anatomischer Anzeiger. https://doi.org/10.1016/j.aanat.2017.09.011
- 17.Lee GA, Billinghurst M (2013) A component based framework for mobile outdoor AR applications. In: Proceedings of the 12th ACM SIGGRAPH international conference on virtual-reality continuum and its applications in industry. ACM, pp 207–210
- 18.MacIntyre B, Hill A, Rouzati H, Gandy M, Davidson B (2011) The Argon AR web browser and standards-based AR application environment. In: 2011 10th IEEE international symposium on mixed and augmented reality (ISMAR). IEEE, pp 65–74
- 20.Matuszka T, Gombos G, Kiss A (2013) Virtual Augmented and Mixed Reality. Designing and Developing Augmented and Virtual Environments: 5th International Conference, VAMR 2013, Held as Part of HCI International 2013, Las Vegas, NV, USA, July 21-26, 2013, Proceedings, Part I, chap. A New Approach for Indoor Navigation Using Semantic Web Technologies and Augmented Reality, pp 202–210. Springer Berlin. https://doi.org/10.1007/978-3-642-39405-8_24
- 21.Matuszka T, Kámán S, Kiss A (2014) Virtual, Augmented and Mixed Reality. Designing and Developing Virtual and Augmented Environments: 6th International Conference, VAMR 2014, Held as Part of HCI International 2014, Heraklion, Crete, Greece, June 22-27, 2014, Proceedings, Part I, chap. A Semantically Enriched Augmented Reality Browser, pp 375–384. Springer International Publishing, Cham. https://doi.org/10.1007/978-3-319-07458-0_35
- 22.Moses T et al (2005) eXtensible Access Control Markup Language (XACML) version 2.0. OASIS Standard. http://docs.oasis-open.org/xacml/2.0/access_control-xacml-2.0-core-spec-os.pdf
- 24.Radius Networks AltBeacon. http://altbeacon.org/
- 25.Nintendo Co. Ltd. Pokémon GO. https://www.pokemon.com/us/pokemon-video-games/pokemon-go/
- 26.Nixon LJ, Grubert J, Reitmayr G, Scicluna J (2012) Smartreality: integrating the web into augmented reality. In: I-SEMANTICS (posters & demos). Citeseer, pp 48–54
- 27.Open Geospatial Consortium (2013) GeoXACML standard
- 28.PTC Inc. (2012) Vuforia Augmented Reality SDK. https://www.qualcomm.com/products/vuforia
- 29.Reynolds V, Hausenblas M, Polleres A, Hauswirth M, Hegde V (2010) Exploiting linked open data for mobile augmented reality. In: W3C workshop: augmented reality on the web, vol 1
- 31.Rumiński D, Walczak K (2013) Creation of interactive AR content on mobile devices. In: Proceedings of international conference on business information systems. Springer, pp 258–269
- 32.Rumiński D, Walczak K (2014) Semantic contextual augmented reality environments. In: The 13th IEEE international symposium on mixed and augmented reality (ISMAR 2014). IEEE, pp 401–404. https://doi.org/10.1109/ISMAR.2014.6948506
- 33.Rumiński D, Walczak K (2017) Semantic model for distributed augmented reality services. In: Proceedings of the 22nd international conference on 3d web technology, web3d ’17. ACM, New York, pp 13:1–13:9. https://doi.org/10.1145/3055624.3077121
- 34.Schmalstieg D, Reitmayr G (2007) The world as a user interface: augmented reality for ubiquitous computing. In: Location based services and telecartography. Springer, pp 369–391
- 35.Speiginer G, MacIntyre B, Bolter J, Rouzati H, Lambeth A, Levy L, Baird L, Gandy M, Sanders M, Davidson B, Engberg M, Clark R, Mynatt E (2015) Human-computer Interaction: Users and Contexts: 17th International Conference, HCI International 2015, Los Angeles, CA, USA, August 2-7, 2015, Proceedings, Part III, chap. The Evolution of the Argon Web Framework Through Its Use Creating Cultural Heritage and Community-Based Augmented Reality Applications, pp 112–124. Springer International Publishing, Cham
- 37.Walczak K, Rumiński D, Flotyński J (2014) Building contextual augmented reality environments with semantics. In: Virtual systems multimedia (VSMM), pp 353–361. https://doi.org/10.1109/VSMM.2014.7136656
- 39.Walczak K, Wojciechowski R, Wójtowicz A (2017) Semantic exploration of distributed ar services. In: De Paolis LT, Bourdot P, Mongelli A (eds) Augmented reality, virtual reality, and computer graphics: 4th international conference, AVR 2017, Ugento, Italy, June 12-15, 2017, Proceedings, Part I. Springer International Publishing, Cham, pp 415–426. https://doi.org/10.1007/978-3-319-60922-5_32
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.