1 Introduction

Nowadays, mobile robots are able to map indoor and outdoor environments efficiently. They experience no fatigue, and many state-of-the-art systems are robust enough to perform this task autonomously. We focus on an intelligent system capable of 3D mapping and object search. The robot compares current 3D information with a reference model to locate new objects. These objects are treated as objects of potential interest (OPI) and are confirmed by a classification procedure. Accurate 3D models are also used in spatial design support. The spatial design process, viewed as an activity, is described as a “Conception → Modeling → Evaluation → Remodeling” cycle in [19]. The work in this paper aims at improving the last three steps. Our goal is to offer end-users a solution that makes it easier to improve and redesign existing areas, especially those with incomplete or obsolete floor plans. We have built an intelligent mobile system capable of performing qualitative reasoning.

We propose a modeling → conceptualization → evaluation process based on 3D scanning and a Qualitative Spatio-Temporal Representation and Reasoning (QSTRR) framework capable of modeling the environment, in the sense of generating a qualitative representation. The qualitative representation used within the context of spatial assistance is connected with a framework of multi-modal data access for spatial assistance systems [27]. The main goal of this work is to develop an assistance system that supports designers in decision making based on Qualitative Spatial Representation and Reasoning (QSRR) [12]. The study of spatial-structural design processes and computational tools described in [14] is an example of design support tools for the early stages of the design process. As pointed out in [13], the main challenge and limitation for QSRR systems is providing an appropriate mathematical tool set that allows qualitative reasoning without traditional quantitative techniques. Bhatt [4] shows the importance of integrating QSRR techniques with artificial intelligence from the application point of view, for example in cognitive robotics or spatial design support. The problem of reasoning about space is discussed in [25], which simplifies an earlier theory presented in [23]; [24] considered the original approach based on a calculus proposed in [11]. The ontological primitives of the theory include physical objects and regions; an additional set of entities is also defined. The Spatial Assistance System (SAS) described in this paper is defined as a computer-implemented decision making system [27] that possesses further analytic abilities requiring training, knowledge and expertise. Many solutions for spatial perception exist, but the integration of various spatial knowledge sources for solving complicated problems remains an important and complex research problem [2].
The integration problem arises from the fact that conducting a spatial task in a real environment involves not one activity but a set of mutually dependent tasks, which requires a minimal level of coordination. This, in turn, gives rise to a rich set of integration problems.

The first step of our spatial design support system consists of acquiring 3D point clouds of the area and merging them into one consistent 3D representation using an improved 6D simultaneous localization and mapping (SLAM) algorithm [20]. Over the last decade, 3D metric information has been widely used in inventory problems [16, 18]. The improvement is related to computational aspects of parallel programming discussed in [7]. In this paper we present current research results of 6D SLAM from the application point of view; the input data for qualitative reasoning are thus accurate 3D models of the environment. We achieve better accuracy of the final 3D metric map compared to our previous work [3], where we designed a ground truth 3D data setup measured with geodetic precision. For spatial design purposes we create a QSTRR model based on this accurate 3D metric map. Finally, the evaluation of the area, in connection with the selected spatial design problem, can be done using the reasoning capabilities of QSTRR. Preliminary results on the QSTRR framework for spatial design were shown in [7]. This paper focuses on integrating intelligent mobile applications into a single mobile system capable of performing qualitative reasoning in the security domain and for spatial design support.

The paper is organized as follows: first we introduce the Qualitative Spatio-Temporal Representation and Reasoning framework, second we describe mobile mapping systems, third we show the search for and recognition of 3D objects, and fourth we discuss access to robotic functionalities and data with a novel data center using NVIDIA GRID technology. Finally, we show experiments demonstrating the functionalities of our system to end-users.

2 Qualitative spatio-temporal representation and reasoning framework

We designed an ontology for the purpose of encoding spatial design domain knowledge, thus allowing an intelligent mobile robot to perform qualitative reasoning. It allows building a semantic model of an environment using a qualitative spatio-temporal representation. An ontology O is composed of the following entities: a set of concepts C, a set of relations R, a set of axioms A (e.g. transitivity, reflexivity, symmetry of relations), a concept hierarchy CH, a relation hierarchy RH, and a set of spatio-temporal events E_st. It is formulated as the following definition:

$$ O=\left \langle C,R,A,CH,RH,E_{st} \right \rangle $$
(1)

An ontology is a core component of a qualitative reasoning mechanism. A concept is defined as a primitive spatial entity described by a shape S composed of points or polygons in 3D space. We associate a semantic label SL with each shape. The ontology distinguishes two different types of attributes that can be assigned to a concept, quantitative A_qn and qualitative A_ql. We extend the set of four qualitative attribute values (real physical object, functional space, operational space and range space) with a new attribute: empty space. This new approach helps in reasoning about trajectories in 3D space using state-of-the-art path planning methods. Functional, operational and range spaces are related to spatial artifacts; detailed information is given in [5, 6, 27]. Quantitative attributes are related to physical properties of spatial entities and are as follows: location, mass, center of mass, moment of inertia (how much resistance there is to changing the orientation about an axis), and material (friction, restitution). Therefore, the definition of the concept C is formulated as:

$$ C=\left \langle S, A_{qn}, A_{ql}, SL \right \rangle. $$
(2)
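The two definitions above can be sketched as plain data structures. A minimal illustrative Python sketch (class and field names are our own, not taken from the framework implementation):

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the structures from Eqs. (1)-(2); the names
# Concept, Ontology and the attribute keys are illustrative assumptions.

QUALITATIVE_VALUES = {
    "real_physical_object", "functional_space",
    "operational_space", "range_space", "empty_space",
}

@dataclass
class Concept:
    shape: list            # S: points or polygons in 3D space
    quant_attrs: dict      # A_qn: location, mass, center of mass, ...
    qual_attrs: set        # A_ql: subset of QUALITATIVE_VALUES
    semantic_label: str    # SL

@dataclass
class Ontology:
    concepts: list = field(default_factory=list)             # C
    relations: list = field(default_factory=list)            # R
    axioms: list = field(default_factory=list)               # A
    concept_hierarchy: dict = field(default_factory=dict)    # CH
    relation_hierarchy: dict = field(default_factory=dict)   # RH
    events: list = field(default_factory=list)               # E_st

# Example instance of a concept and an ontology holding it.
chair = Concept(shape=[(0.0, 0.0, 0.0)],
                quant_attrs={"mass": 5.0},
                qual_attrs={"real_physical_object"},
                semantic_label="chair")
onto = Ontology(concepts=[chair])
```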

The set of relations R is composed of quantitative and qualitative spatial relations. For topological qualitative spatial relations the Region Connection Calculus (RCC-8) [25] is used. The ontology includes eight different topological relations between two shapes: disconnected DC, externally connected EC, partial overlap PO, equal EQ, tangential proper part TPP and its inverse TPPi, non-tangential proper part NTPP and its inverse NTPPi. Quantitative spatial relations are used for constraining the way entities move relative to one another. Our ontology defines the following constraints: origins locked, orientations locked, origins free, orientations free, free rotation around one axis, and sliding. The ontology provides a mechanism for building world models that assume spatio-temporal relations in different time intervals, which captures changes in 3D space. Our temporal representation takes temporal intervals as a primitive [1]. We distinguish the following qualitative spatio-temporal events E_st related to the topological spatial relations of RCC-8:

  • onEnter (DC → EC → PO),

  • onLeave (PO → EC → DC),

  • onStartInside (PO → TPP → NTPP),

  • onStopInside (NTPP → TPP → PO).
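These event definitions amount to fixed sequences of RCC-8 relations over consecutive time intervals. A minimal illustrative sketch (our own formulation, not the framework's code) of detecting them in an observed relation history:

```python
# Map a window of three consecutive RCC-8 relations to one of the four
# qualitative spatio-temporal events (illustrative sketch).

EVENTS = {
    ("DC", "EC", "PO"): "onEnter",
    ("PO", "EC", "DC"): "onLeave",
    ("PO", "TPP", "NTPP"): "onStartInside",
    ("NTPP", "TPP", "PO"): "onStopInside",
}

def detect_events(relation_sequence):
    """Scan a relation history and emit every (start index, event) match."""
    found = []
    for i in range(len(relation_sequence) - 2):
        window = tuple(relation_sequence[i:i + 3])
        if window in EVENTS:
            found.append((i, EVENTS[window]))
    return found

# An object approaching, entering and sinking fully into a region:
history = ["DC", "EC", "PO", "TPP", "NTPP"]
# detect_events(history) → [(0, 'onEnter'), (2, 'onStartInside')]
```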

These four qualitative spatio-temporal events are used to express the most important spatio-temporal relationships that hold between two concepts in different intervals of time within the context of spatial design. In this paper we focus on two events: onEnter and onLeave. To store the instances of ontology-based elements, an instance base IB^O is defined:

$$ IB^{O}=\left \langle {I_{C}^{O}}, {I_{R}^{O}}, I_{E_{st}}^{O} \right \rangle, $$
(3)

where \({I_{C}^{O}}\) contains instances of concepts C, \({I_{R}^{O}}\) contains instances of relations R, and \(I_{E_{st}}^{O}\) contains instances of spatio-temporal events. Finally, the semantic model is defined as a pair:

$$ SM=\left \langle O, IB^{O} \right \rangle, $$
(4)

where O is an ontology and IB^O is an instance base related to ontology O. The ontology is known a priori, but the instance base is updated during the spatial design process. An important tool is the semantic map, defined as a projection of the semantic model onto 3D space. All concepts, relations and events have their own user-friendly 3D representation; this is the core of the proposed intelligent mobile user interface.

3 Mobile mapping systems

Our mobile mapping systems are shown in Fig. 1. The mobile robots acquire 3D data and register them using an improved 6D SLAM algorithm [10]. The lightweight mobile robot PIONEER 3DX is used for 3D mapping of indoor environments and for search missions. The Husky robot can provide accurate 3D data with a range of 170 meters in a single 3D scan. We use this robot to acquire data in indoor and outdoor environments. We improved the 6D SLAM algorithm through semantic classification, loop closing with the Complex Shape Histogram (CSH), and a parallel implementation. This is shown in Fig. 2, i.e. we have extended the work described in [10]. Important aspects concerning our implementation are given in [8]. Green rectangles correspond to novel algorithmic components using high performance computing in the Compute Unified Device Architecture (CUDA). Red rectangles correspond to qualitative reasoning components: semantic classification and semantic 3D scan matching. Semantic classification is used for discriminating points, so that the nearest neighborhood search procedure in semantic 3D scan matching uses this information to find pairs of closest points having the same semantic label. Semantic labels denote shapes around query points, as described in Section 4 within the context of computing CSHs. In the CSH we classify the shape around each query point into two classes: flat and not flat. In semantic 3D scan matching we extend the number of classes by adding ceiling and floor. Ceiling corresponds to points labeled flat above the robot with the normal vector pointing down; floor corresponds to points labeled flat below the robot with the normal vector pointing up. The main part of semantic 3D scan matching is a modified Iterative Closest Point (ICP) algorithm. Our improvements concentrate on a parallel computing implementation and on discriminating points into four classes during the nearest neighborhood search procedure. The key concept of the ICP algorithm can be summarized in two steps:

  1. Compute correspondences between the two scans (Nearest Neighbor Search),

  2. Compute a transformation which minimizes the distance between corresponding points.
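The two steps above can be sketched in a few lines. This is an illustrative point-to-point ICP with a closed-form SVD alignment step (a simplified, brute-force stand-in for the paper's parallel implementation, without semantic labels):

```python
import numpy as np

# Illustrative ICP sketch: step 1 is a brute-force nearest-neighbor
# search, step 2 a closed-form least-squares alignment (Kabsch method).

def icp(model, data, iterations=20):
    """Align `data` (N,3) to `model` (M,3); returns R (3,3) and t (3,)."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        moved = data @ R.T + t
        # Step 1: correspondences via nearest-neighbor search
        d2 = ((moved[:, None, :] - model[None, :, :]) ** 2).sum(-1)
        nn = model[d2.argmin(axis=1)]
        # Step 2: transformation minimizing distance between pairs (SVD)
        mu_d, mu_m = moved.mean(0), nn.mean(0)
        H = (moved - mu_d).T @ (nn - mu_m)
        U, _, Vt = np.linalg.svd(H)
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ S @ U.T
        R, t = dR @ R, dR @ t + (mu_m - dR @ mu_d)
    return R, t
```

Iterating the two steps converges to the desired transformation when the initial misalignment is small relative to the point spacing.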

Fig. 1
figure 1

Intelligent mobile mapping systems. Left: autonomous mobile robot PIONEER 3DX equipped with a 3D laser measurement unit. Right: mobile mapping system based on the Husky robot, additionally equipped with an accurate geodetic 3D laser Z+F IMAGER 5010

Fig. 2
figure 2

Improved 6D SLAM algorithm with semantic classification and parallel implementation. Green rectangles correspond to novel algorithmic components using high performance computing in the Compute Unified Device Architecture (CUDA); red rectangles correspond to qualitative reasoning components: semantic classification and semantic 3D scan matching

Iteratively repeating these two steps results in convergence to the desired transformation. Semantic discrimination of these correspondences improves the convergence. Range images are defined as the model set M, where

$$ |M|=N_{m} $$
(5)

and the data set D, where

$$ |D|=N_{d} $$
(6)

The goal of semantic 3D scan matching is to minimize the following cost function:

$$ E\left (\mathbf{R,t} \right )={\sum}_{i=1}^{N_{m}}{\sum}_{j=1}^{N_{d}}w_{ij}\left \| \mathbf{m}_{i}^{c}-\left (\mathbf{Rd}_{j}^{c}+\mathbf{t} \right ) \right \|^{2} $$
(7)

where w_ij is assigned 1 if the i-th point of M corresponds to the j-th point of D in the sense of minimum distance and the same class; otherwise w_ij = 0. The class c discriminates points into flat, non-flat, ceiling or floor. R is the rotation matrix, t is the translation vector, \(\mathbf{m}_{i}^{c}\) corresponds to points of class c from the model set M, and \(\mathbf{d}_{j}^{c}\) corresponds to points of class c from the data set D. The algorithmic representation of the process is shown in Listing 1. The semantic approach efficiently decreases the time of 6D SLAM convergence by requiring a lower number of iterations, especially in situations where odometry readings are not sufficiently accurate. It is also more reliable than other state-of-the-art approaches for demanding scenarios, such as moving down stairs or mapping harsh environments. We show an example in Section 6 in Fig. 8.
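The semantic weighting w_ij can be illustrated with a short sketch (our own brute-force version of the nearest neighborhood search; the paper's implementation runs in parallel on CUDA):

```python
import numpy as np

# Illustrative version of the semantic correspondence rule behind
# Eq. (7): a pair (i, j) gets weight w_ij = 1 only if model point i is
# the closest point to data point j AMONG points of the same class
# (flat, non-flat, ceiling, floor). The max_dist cutoff is our own
# assumption, used to reject outliers.

def semantic_pairs(model, model_cls, data, data_cls, max_dist=1.0):
    pairs = []
    for j, (p, c) in enumerate(zip(data, data_cls)):
        same = [i for i, mc in enumerate(model_cls) if mc == c]
        if not same:
            continue
        d = np.linalg.norm(model[same] - p, axis=1)
        k = int(d.argmin())
        if d[k] <= max_dist:
            pairs.append((same[k], j))   # (model index, data index)
    return pairs
```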

figure a

4 Recognition of 3D objects

Recognition of 3D objects is composed of OPI detection and identification. The pipeline of the autonomous search is shown in Fig. 6. To detect 3D points of an OPI we use a nearest neighborhood search procedure for each query point in the current 3D measurement. We perform ICP alignment of the current 3D scan to the global reference model. This step is necessary to minimize the number of false detections. The prepared point clouds are processed using Algorithm 2. All query points which do not have neighbors within a certain radius are considered parts of an OPI. Points with an existing nearest neighbor get the label non-OPI. In the graphical interface OPIs are marked in red instead of black. An important issue is the proper initialization of the system by providing a reference 3D model with a minimal number of dynamic obstacles.
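The detection rule can be sketched as follows. A brute-force illustration (not Algorithm 2 verbatim; the real system uses an optimized nearest neighborhood search):

```python
import numpy as np

# Sketch of the OPI detection rule: a query point of the current scan
# with no reference neighbor within `radius` is labeled as part of an
# OPI. The radius value is an illustrative assumption.

def label_opi(reference, scan, radius=0.1):
    """Boolean mask over `scan` rows: True = part of an OPI."""
    d2 = ((scan[:, None, :] - reference[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1) > radius ** 2
```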

figure b

To identify 3D objects we build a knowledge base; that is, we prepare a training data set composed of objects with assigned semantic labels SL. Recognition of 3D objects is performed by classifying the observed object into a semantic label based on the knowledge base. In this application we have two semantic labels: OPI and not OPI. We assign these semantic labels manually, so an advantage of the system is the capability to train it for recognition of different objects. To characterize the 3D objects we use the CSH. The CSH (Algorithm 3) is an extension of the Point Pair Feature (PPF) [15], which is similar to the surflet-pair feature from [26, 28]. The PPF method uses a point pair feature that describes the relative position and orientation of two oriented points. The feature F_PPF for two points m_1 and m_2 with normals n_1 and n_2 and distance vector d = m_2 − m_1 is defined as:

$$ \mathbf{F}_{PPF}(\mathbf{m}_{1}, \mathbf{m}_{2}) = (\left \| \mathbf{d} \right \|_{2}, \angle(\mathbf{n}_{1},\mathbf{d}), \angle(\mathbf{n}_{2},\mathbf{d}), \angle(\mathbf{n}_{1},\mathbf{n}_{2})) $$
(8)

where ∠(a,b) ∈ [0; π] denotes the angle between two vectors. The CSH reduces the dimension of the feature space and uses a characterization of the shape around each query point, assigned by a semantic label, thus

$$ \mathbf{F}_{CSH}(\mathbf{m}_{1}, \mathbf{m}_{2}) = (\left \| \mathbf{d} \right \|_{2}, \angle(\mathbf{n}_{1},\mathbf{n}_{2}), c(\mathbf{m}_{1}, \mathbf{m}_{2})) $$
(9)

where ∠(a,b) ∈ [0; π] denotes the angle between two vectors and c(a,b) ∈ {0,1,2,...,n} denotes the combination of semantic labels SL describing the shapes around query points a and b. In the current implementation of the CSH, 0 corresponds to (flat, flat), 1 corresponds to (flat, non-flat) or (non-flat, flat), and 2 corresponds to (non-flat, non-flat). The classification into flat and not flat labels is performed with Principal Component Analysis (PCA) of the neighborhood of a given query point. Figure 3 depicts the feature in PPF and CSH. In our experiments the Euclidean distance between points is quantized to 20 buckets, the combination of shapes is quantized to 3 buckets, and the angle between the normal vectors computed for a pair of points is quantized to 5 buckets. The advantage of the CSH over the PPF is a reduction of the feature space: assuming the same quantization, the CSH histogram is composed of 300 buckets while the PPF histogram is composed of 2500 buckets. A negligible disadvantage of the CSH is its higher computational complexity compared to the PPF, determined by the classification of the neighborhood of each query point into a certain semantic label SL; therefore a parallel algorithm for fast CSH computation is implemented. On average, on a cloud of 15000 3D data points the CSH is computed within 4 seconds on a GeForce GTX680 and within 1.7 seconds using an approximated method.
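A minimal sketch of the CSH computation under the quantization described above, 20 distance × 5 angle × 3 label-combination buckets = 300 buckets (the bucket edges and the maximum distance are our own assumptions, and the quadratic all-pairs loop stands in for the parallel implementation):

```python
import numpy as np

# Illustrative CSH following Eq. (9). `labels` holds 0 (flat) or
# 1 (non-flat) per point, so labels[i] + labels[j] gives the label
# combination: 0=(flat,flat), 1=mixed, 2=(non-flat,non-flat).

def csh(points, normals, labels, max_dist=1.0):
    hist = np.zeros((20, 5, 3))
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(points[j] - points[i])
            cosang = np.clip(np.dot(normals[i], normals[j]), -1.0, 1.0)
            ang = np.arccos(cosang)                    # angle in [0, pi]
            c = labels[i] + labels[j]                  # label combination
            db = min(int(d / max_dist * 20), 19)       # distance bucket
            ab = min(int(ang / np.pi * 5), 4)          # angle bucket
            hist[db, ab, c] += 1
    return hist.ravel()                                # 300-bucket histogram
```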

figure c
Fig. 3
figure 3

Left: Point Pair Feature F_PPF of two oriented points. The component F_1 is set to the distance between the points, F_2 and F_3 to the angles between the normal vectors and the vector defined by the two points, and F_4 to the angle between the two normal vectors. Right: feature F_CSH in the CSH. The component F_1 is set to the distance between the points, F_2 to the angle between the two normals, and F_3 to the combination of semantic labels SL denoting the shapes around points m_1 and m_2

4.1 Intelligent system for 3D objects identification

Object identification is performed with a Support Vector Machine (SVM). The SVM is a machine learning tool for classification and approximation tasks. SVMs belong to the family of kernel machines with linear or nonlinear kernels, and solve classification and regression tasks as constrained optimization problems. Our system is capable of building SVM models for 3D objects. These models are used for classification; the system is therefore able to perform qualitative reasoning by assigning a semantic label SL to an observed 3D object. The training and classification phases are shown in Figs. 4 and 5. Input is obtained by a 3D scanner. For each 3D point cloud we compute the CSH. The processed data are stored in a database. Each training set is composed of positive examples (CSHs of the query 3D object) and negative examples (CSHs of different 3D objects). Classification is done using the SVM with a linear kernel function. The classification task amounts to searching for the optimal hyperplane that separates the data belonging to the different classes. The training set is defined as (x_i, y_i), 1 ≤ i ≤ N, where each example x_i ∈ R^d, d being the dimension of the input space, belongs to the class labeled by y_i ∈ {−1,1}. The goal of supervised learning is to find a hyperplane which divides the set of examples, such that all points with the same label are classified into the same category. Before classification the SVM has to be trained to obtain a model describing the query 3D object. Classification is performed by the SVM, which responds with a binary label: positive or negative. The positive label corresponds to a potential object of interest (OPI). Qualitative reasoning is performed by assigning a semantic label to the result of the classification (Fig. 6).
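The training and classification phases can be illustrated with a linear-kernel SVM. A sketch with synthetic stand-in feature vectors (the paper does not state which SVM implementation is used; scikit-learn is our choice for illustration only):

```python
import numpy as np
from sklearn.svm import SVC

# Sketch: CSH histograms act as 300-dimensional feature vectors for a
# linear-kernel SVM with labels {-1, +1} (+1 = OPI). The feature values
# below are synthetic stand-ins, not real CSH data.

rng = np.random.default_rng(1)
pos = rng.normal(loc=1.0, size=(50, 300))    # CSHs of the query object
neg = rng.normal(loc=-1.0, size=(50, 300))   # CSHs of other objects
X = np.vstack([pos, neg])
y = np.array([1] * 50 + [-1] * 50)

clf = SVC(kernel="linear").fit(X, y)         # training phase

query = rng.normal(loc=1.0, size=(1, 300))   # CSH of an observed object
is_opi = clf.predict(query)[0] == 1          # qualitative label: OPI?
```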

Fig. 4
figure 4

System structure for 3D objects recognition - SVM training phase

Fig. 5
figure 5

System structure for 3D objects recognition - SVM qualitative reasoning phase

Fig. 6
figure 6

The pipeline of the autonomous search

5 Intelligent mobile user interfaces

The intelligent mobile user interfaces are implemented using a Software as a Service (SaaS) model with the capability to perform High Performance Computing (HPC) in the Cloud. The HPC approach is a relatively new topic, especially in mobile robotics. We use it for the task of 3D mapping and for the detection of OPIs. HPC with Graphics Processing Unit (GPU) virtualization technology allows many users to access computationally demanding applications in the Cloud. We have presented this technology applied to the Search and Rescue ICARUS system in the field [22]. We decided to provide the needed computation power through a mobile data center based on an NVIDIA GRID server (Supermicro RZ-1240i-NVK2 with two VGX K2 cards - in total 4 GPUs with 4GB RAM each) capable of GPU virtualization. In the ICARUS project we used the Citrix XenApp infrastructure for building the SaaS model. In XenApp, many users can share a single GPU through the Microsoft Windows Server 2012 system or a group of published applications. The XenApp model allows sharing a single GPU among numerous applications using CUDA. Published applications in the ICARUS project can be accessed as SaaS after installing the Citrix Receiver - a thin client compatible with all popular operating systems for mobile devices (Mac OS X, Linux, Windows, iOS, Android etc.). The limitation is that the newest GRID-compatible GPU drivers support only the CUDA 5.5 programming framework and the GRID GPU processors offer CUDA 3.0 compute capability. The provided functionalities are sufficient for our application. Besides CUDA, GRID technology supports building applications with 3D rendering (OpenGL 4.4); thus, in our system it is used for the remote graphical user interface for authorized personnel. Our system is able to improve global situational awareness by providing 3D renderings of the mapped environment with marked OPIs over Ethernet in real time.
Our SaaS-based security system enables integration with existing closed-circuit television systems, thus reducing blind spots in existing safety infrastructure. The advantage of this approach is that all information from the robot is transmitted over Ethernet as renders, which is significantly faster than sending raw data. Thus, for example, Crisis Management Centers will be able to use robotic information to coordinate crisis actions. We tested the experimental setup in the Institute of Mathematical Machines, shown in Fig. 7. Our system is able to provide intelligent mobile user interfaces for 20 users simultaneously. This allows quick setup of command and control centers in search and rescue scenarios.

Fig. 7
figure 7

Experimental setup of simulated Crisis Management Center in Institute of Mathematical Machines for testing intelligent mobile user interfaces in a SaaS model

6 Experiments

We performed an end-user case study. It shows the need to provide the following functionalities:

  • accurate 3D mapping,

  • search and detection of changes in the 3D environment,

  • closed-circuit television (CCTV) camera system modeling and evaluation.

Therefore, we performed the following experiments as a contribution of our research. All figures show intelligent mobile interfaces available over Ethernet via a novel SaaS system based on the NVIDIA GRID technology.

6.1 Accurate 3D mapping

The improved 6D SLAM algorithm shown in Fig. 2 is capable of building accurate 3D models of the environment. Figure 8 shows a real-task experiment demonstrating the advantage of the proposed approach compared to the state of the art in a demanding scenario - a robot moving down stairs. Our 3D mapping solution provides more accurate maps in fewer iterations. We observed that map relaxation calculations can be reduced from 100 iterations down to 10 with comparable accuracy, so the computation time is drastically reduced. We also observed that semantic discrimination of points improves scan matching, so that objects such as ceiling and floor are not merged.

Fig. 8
figure 8

Example of a demanding 3D mapping scenario - a robot moving down stairs. Left: 3D map based on odometry readings; middle: 3D map after state-of-the-art data registration based on Iterative Closest Point; right: 3D map obtained with the proposed semantic 3D scan matching. We use the classes flat, not flat, ceiling and floor in semantic 3D scan matching

6.2 Mobile intelligent security application – search and detecting changes in 3D environments

The system is capable of building a 3D model of the environment as reference data. These data are used for detecting geometric changes in the environment and for classification. The search for objects is performed in autonomous mode in a stop-scan fashion. The robot visits goals and at each location performs a 3D scan. This scan is compared with the reference 3D model of the environment to detect geometric changes. These geometric changes are input for a classifier capable of distinguishing objects into predefined classes - object of potential interest or other - provided during supervised learning. The system informs the operator about the detected changes and the classification result. We compared the Complex Shape Histogram, Point Pair Feature and Point Feature Histogram in an experiment where we performed different classification strategies on the 10 objects shown in Fig. 9. Each object was scanned 100 times with different fields of view. Example scans with visualized semantic discrimination of points - different semantic labels SL - are depicted in Fig. 10; in this way we built a representative data set. For each scan we computed the CSH, PPF and PFH. Example histograms are shown in Fig. 11. We compared two classifiers, SVM and kNN; the results are shown in Figs. 12 and 13. We use two quantitative measures:

Fig. 9
figure 9

Objects in classification experiment comparing Complex Shape Histogram, Point Pair Feature and Point Feature Histogram

Fig. 10
figure 10

3D point clouds for each object from Fig. 9. Colors correspond to different semantic labels SL: red - non-flat, green - flat

Fig. 11
figure 11

Example CSH, PPF and PFH histograms computed for the data shown in Fig. 10, obtained by scanning the objects from Fig. 9

Fig. 12
figure 12

Classification results for the training strategy where we assigned the positive label (y=1) only to a single object and the negative label to the other objects. CR (correct rate), CRT (correct rate for positive label), TT (training time), CT (classification time) and SV (percentage of support vectors). In the last rows we calculated how many times CSH was better than PFH and PPF; for example, CSH > PFH: 6/10 means that the CR of CSH was higher than the CR of PFH in 6 of 10 (repeated 100 times) training/classification trials

Fig. 13
figure 13

Classification results for different training strategies where we tested different configurations of assigned labels. CR (correct rate), CRT (correct rate for positive label), TT (training time), CT (classification time) and SV (percentage of support vectors). In the last rows we calculated how many times CSH was better than PFH and PPF; for example, CSH > PFH: 4/4 means that the CR of CSH was higher than the CR of PFH in 4 of 4 (repeated 100 times) training/classification trials

  1. Correct Rate

    $$ CR=\frac{Correctly\,Classified\,Samples}{Classified\,Samples} $$
    (10)

  2. Correct Rate Positive Samples

    $$ CRP=\frac{Correctly\,Classified\,Positive\,Samples}{Classified\,Positive\,Samples} $$
    (11)
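The two measures are straightforward to compute. A direct sketch (note that we read Eq. (11) as the rate over samples whose true label is positive, which is an assumption about the intended denominator):

```python
# Direct implementation of Eqs. (10)-(11) for label sequences.

def correct_rate(predicted, actual):
    """CR: fraction of all classified samples that are correct."""
    return sum(p == a for p, a in zip(predicted, actual)) / len(actual)

def correct_rate_positive(predicted, actual, positive=1):
    """CRP, read as: fraction of true-positive-labeled samples
    classified correctly (assumption on Eq. (11))."""
    idx = [i for i, a in enumerate(actual) if a == positive]
    return sum(predicted[i] == positive for i in idx) / len(idx)
```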

In Fig. 12 we collect classification results for the training strategy where we assigned the positive label (y=1) only to a single object and the negative label to the other objects. In Fig. 13 we collect classification results for different training strategies where we tested different configurations of assigned labels. We performed 100 cross-validation tests for each classifier; for each test, in the training phase we chose 50 random samples and for classification we used the remaining 50 samples of each object. The mean values of the correct rate CR, correct rate for the positive label CRT, training time TT, classification time CT and percentage of support vectors SV are collected in Figs. 12 and 13. In our opinion the SVM is more relevant for our application than kNN. The SVM classifies much faster than kNN and needs less information: a small number of support vectors (typically less than 10 percent of the training set) is enough to reach a satisfactory correct rate. Another advantage of the SVM is the simple implementation of the classification routine on an embedded system with limited computing power. We observed that the CSH performs better than the PFH in most cases. The PPF gives the best classification results, but its computational complexity is the largest. The SVM classifier is faster than kNN, but its correct rate CR is in general slightly lower.

The prototype of the intelligent mobile system was tested in the Museum of the History of Polish Jews and at the Airport in Łódź, Poland, shown in Fig. 14. The system is deployable within 30 minutes. After one hour of 2D mapping we obtained an accurate floor plan. Mission planning takes on average two hours. A typical single 3D measurement takes one minute. The robot reaches its next goal within seconds because the average distance between goals should not exceed 5 meters. The system was able to detect changes in the 3D environment. The first OPI was classified by our system as a backpack; the second OPI was interpreted by end-users as people. This is shown in Fig. 14. Our solution is able to improve the existing security system by eliminating blind spots between installed cameras.

Fig. 14
figure 14

The robot tested in the Museum of the History of Polish Jews (Warsaw, Poland) and at the Airport in Łódź, Poland. The results of inspection; detected changes are marked in red

6.3 Mobile intelligent spatial design support application

The main goal of the experiment was to validate and improve the surveillance system of an underground garage. The questions from end-users were: is the existing CCTV system efficient, and how can the lighting system be improved to increase the awareness of the security staff? In response we prepared an experiment in which we built an accurate 3D model of the underground garage, then built a semantic model of the garage and performed a CCTV camera system evaluation using the QSTRR framework. To increase the awareness of the security staff we propose a lighting simulation system capable of evaluating different light configurations for better CCTV data acquisition. The visualization using WebGL technology is shown in Fig. 15. The starting point is the QSTRR model of the environment with an initial CCTV system. The task is to check the system for blind spots, redesign the system using different capabilities of the framework and validate the solutions. Each camera in the garage was modeled as an entity with a proper range space. The task of the experiment was for a virtual car to drive through the garage along a predefined path. The assumption was that the car should be visible in at least one camera at every moment of the drive for the system to pass the validation. The car was considered visible if any of its parts was in a camera range space. The garage was considered empty for the experiment. The result of the experiment is shown in the Gantt plot in Fig. 17, corresponding to the semantic model of the experiment shown in Fig. 16. Data are available for download and an immersive visualization is available here.Footnote 1 From the Gantt plot we can observe the blind spot between cameras 1,2 and cameras 2,3. This is very useful quantitative information for the end-user, obtained with the proposed qualitative reasoning framework. The subject is closely related to the Art Gallery Problem. Our system can evaluate an existing CCTV system by providing information concerning blind spots (Fig. 17).
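The blind-spot check behind the Gantt plot can be sketched as interval bookkeeping (our own formulation, not the framework's code): each camera contributes time intervals during which the car overlaps its range space, and gaps in their union are blind spots.

```python
# Illustrative blind-spot detection: merge all per-camera visibility
# intervals and report gaps over the duration of the drive.

def blind_spots(camera_intervals, t_start, t_end):
    """camera_intervals: (begin, end) tuples pooled from all cameras."""
    merged = []
    for b, e in sorted(camera_intervals):
        if merged and b <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], e))
        else:
            merged.append((b, e))
    gaps, cursor = [], t_start
    for b, e in merged:
        if b > cursor:
            gaps.append((cursor, b))
        cursor = max(cursor, e)
    if cursor < t_end:
        gaps.append((cursor, t_end))
    return gaps

# e.g. three cameras covering [0,4], [6,9] and [11,15] s of a 15 s
# drive leave two blind spots, between the first/second and
# second/third cameras:
# blind_spots([(0, 4), (6, 9), (11, 15)], 0, 15) → [(4, 6), (9, 11)]
```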

Fig. 15
figure 15

Underground garage model visualized in WebGL

Fig. 16
figure 16

Semantic model of the experiment for CCTV system evaluation

Fig. 17
figure 17

Gantt plot for CCTV system evaluation

The Art Gallery Problem is a classical problem in geometric optimization [21]: given a polygonal region, i.e., the art gallery, place as few stationary guards as possible such that the entire region is guarded. The Art Gallery Problem is NP-hard, while the related watchman problem, where one replaces the stationary guards with a single mobile guard, can be solved efficiently [17, 21]. These geometric optimization approaches focus on theoretical aspects and use simplified models, e.g., all guards have a 360-degree field of view and unlimited range. Nonetheless, recent progress in art gallery research allows solving simplified real-world instances by using an integer programming approach with a finite number of witnesses and a finite number of guard positions [9]. For these algorithms a powerful integer program (IP) solver such as CPLEX (IBM ILOG CPLEX Optimization Studio) is needed. In contrast to this theoretically grounded work, we search for blind spots using the QSTRR model, which includes realistic camera parameters.

The second part of the experiment was a spatial design task related to the improvement of the security system. This experiment shows the possibility of designing lighting systems in existing buildings using an immersive visualization technique. The goal was to increase the awareness of the security personnel. To simulate the light in the model of a building, the camera parameters and locations must be specified. We used the 5 cameras from the previous experiment. Each light is described by its position and shape; the same rectangular shape was used for all lights in this experiment. The lights were simulated using path tracing implemented with the NVIDIA OptiX library: several rays are shot from each pixel and traced. The resulting images have a resolution of 800×600 pixels for all cameras. The visualization uses the accurate 3D model acquired by the mapping system, the camera positions from the previous experiment, and the defined lights. The light locations are based on the 18 existing lights; the proposed change adds 20 more lights, and the presented method shows the result. The results for the 5 cameras and the two lighting system layouts are presented in Figs. 18 and 19. The obtained images show that there are dark spots, especially in cameras 2 and 5. The images generated with more light sources are brighter, and they can be more useful for the security personnel. The implemented framework allows designers to test the proposed light and CCTV systems before any physical change is made. Realistic rendering provides reliable information for the end users.
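The core quantity behind such a lighting evaluation can be sketched as a single-bounce Monte Carlo estimate: the direct irradiance a floor point receives from one rectangular area light. This is a deliberately minimal stand-in for the full path tracing done with the NVIDIA OptiX renderer; all geometry and radiance values below are illustrative assumptions, not the garage model.

```python
import math
import random

# Hedged sketch: Monte Carlo estimate of direct irradiance at a floor
# point from one rectangular area light, i.e. the single-bounce core of
# the path-traced lighting simulation (full renderer uses NVIDIA OptiX).

def rect_light_irradiance(point, corner, edge_u, edge_v,
                          radiance, samples=2000, rng=None):
    """Irradiance at `point` on an upward-facing floor from a rectangular
    light spanned by edge_u and edge_v starting at `corner`."""
    rng = rng or random.Random(0)
    # light area and unit normal from the cross product edge_u x edge_v
    # (order the edges so the normal points toward the floor)
    nx = edge_u[1] * edge_v[2] - edge_u[2] * edge_v[1]
    ny = edge_u[2] * edge_v[0] - edge_u[0] * edge_v[2]
    nz = edge_u[0] * edge_v[1] - edge_u[1] * edge_v[0]
    area = math.sqrt(nx * nx + ny * ny + nz * nz)
    ln = (nx / area, ny / area, nz / area)
    total = 0.0
    for _ in range(samples):
        s, t = rng.random(), rng.random()
        # uniform sample point on the light rectangle
        lx = corner[0] + s * edge_u[0] + t * edge_v[0]
        ly = corner[1] + s * edge_u[1] + t * edge_v[1]
        lz = corner[2] + s * edge_u[2] + t * edge_v[2]
        dx, dy, dz = lx - point[0], ly - point[1], lz - point[2]
        r2 = dx * dx + dy * dy + dz * dz
        r = math.sqrt(r2)
        wx, wy, wz = dx / r, dy / r, dz / r
        cos_floor = max(0.0, wz)  # floor normal is (0, 0, 1)
        cos_light = max(0.0, -(wx * ln[0] + wy * ln[1] + wz * ln[2]))
        total += radiance * cos_floor * cos_light / r2
    return total * area / samples
```

Summing such estimates over all lights, per pixel, is what makes the two layouts (18 vs. 38 lights) directly comparable in the rendered camera views.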

Fig. 18

Simulation of existing CCTV system

Fig. 19

Simulation of improved lighting system of existing CCTV system from Fig. 18

7 Conclusion

In this paper we have shown the advantages of using the QSTRR framework in security and spatial design applications. The intelligent mobile system is designed for the protection of urban environments, including critical infrastructures. The method for searching for objects of potential interest was discussed, and the results in real task scenarios were presented. The prototype of the system was tested in the Museum of the History of Polish Jews in Warsaw and at the airport in Łódź, Poland. The system was able to detect changes in the 3D environment and to improve the existing security system. Our spatial design support system, based on the modeling-validation-remodeling philosophy, allows for the creation of an accurate model of a given environment, its modification using specialized tools, and the validation of the results with simulation. We generated accurate 3D metric models and, by adding the QSTRR layer, modeled spatio-temporal events and relations in the environment, which allows us not only to simulate the view of the area but also to explore the qualitative functionalities of different configurations of spatial entities. The presented experiments show a proof of concept of the system's functionality from the end-user point of view. The use of parallel computation increases the quality of the results and lowers the computation time to reasonable levels. Our intelligent mobile interfaces, implemented using the SaaS model, allow accessing data over the network, so end-users can work simultaneously from different locations.

8 Future work

The proposed classification method uses 3D information about objects, so we will investigate how 2D information from a sensor such as a camera could improve the performance of the system. There is no fundamental limitation to combining 2D and 3D information, since we can represent the objects in a common feature space; thus, in future work we will investigate methods for the characterization of 2D objects. The research goal is a classification system capable of working with both 2D and 3D information. The new system will be dedicated to cases and applications where the user can only provide 2D images as input.
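One possible realization of this common feature space, sketched below purely as an assumption about the future design (the function and the fixed-length descriptors are hypothetical), is to concatenate normalized 2D and 3D descriptors into one vector, zero-filling the 3D part when only a 2D image is available, so a single classifier can handle both input cases.

```python
import math

# Hypothetical sketch of a common 2D/3D feature space: one vector that
# concatenates normalized 2D image and 3D shape descriptors; the 3D part
# is zero-filled when the user can only provide a 2D image.

def combined_feature(vec2d, vec3d=None, dim3d=4):
    def normalize(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v] if n > 0 else list(v)
    part2d = normalize(vec2d)
    part3d = normalize(vec3d) if vec3d is not None else [0.0] * dim3d
    return part2d + part3d
```

Normalizing each modality separately keeps the 2D and 3D parts on a comparable scale, so a missing 3D descriptor degrades gracefully rather than dominating the distance metric.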