
4.1 Surveying and Mapping the Existing

4.1.1 Existing Praxis, State of the Art, Tools and Workflows for Surveying and Mapping

The tools available for building surveys are multiplying, each specialising according to the size of the objects to be surveyed and their visibility inside the building. What one tool can detect precisely on a surface may be ignored by another; conversely, what is embedded inside masonry is invisible, for example, to laser scanning or photogrammetric techniques. Depending on the peculiarities of each technology, a survey of a building may involve:

  • Lidar technologies (Light Detection and Ranging or Laser Imaging Detection and Ranging), which allow three-dimensional models to be reconstructed by recording single or multiple scans. The distance to an object or surface is determined with a laser pulse based on the time of flight, i.e. the time it takes the laser beam to travel to the target and reflect back (Achille et al. 2017); the emitter generates a coded light, known to the electronic sensor, that strikes the object being measured. The working principle of laser-based optical 3D measurement sensors can be briefly described as generating a pulse and analysing the signal reflected from the struck object to derive a distance measurement (see the sketch after this list).

    The sensor is defined as active because the emitter sends out encoded light that illuminates the object to be measured, as opposed to photogrammetry, which is passive: the camera sensor only captures light. A laser emits electromagnetic radiation with temporal coherence, i.e. waves of the same frequency and the same phase.

  • Photogrammetry, a passive detection system, time consuming in terms of post-processing but with undoubted economic advantages; post-processing times are, moreover, increasingly shorter thanks to the development of the latest modelling software algorithms (Grilli et al. 2017).

  • Infrared thermometers as well as thermal cameras, whose thermal mappings give different information about the building: they can reveal objects hidden behind surfaces of uniform temperature, find heat leaks, or detect faulty electric cabling. Their output can rarely be associated with model geometries except through the intervention of the operator (González et al. 2020).

  • Radar, which uses electromagnetic radiation in the microwave band (UHF/VHF frequencies) of the radio spectrum and detects the signals reflected from subsurface structures.

  • Ultrasound, where ultrasonic range sensors emit a beam of ultrasound that reflects from the object, allowing the sensor to measure distances as well as variations in the density of a wall. This is the same technology used in medical applications to create multidimensional images of the human body by detecting different densities.

  • Magnetic sensors, capacitive sensors, voltage detectors and stud detectors, as well as x-rays.
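Both lidar and ultrasonic ranging rest on the same time-of-flight relation mentioned in the first bullet: distance = speed × time / 2. A minimal sketch, with illustrative values only:

```python
# Minimal time-of-flight distance computation (illustrative values only).

SPEED_OF_LIGHT = 299_792_458.0   # m/s, for lidar pulses
SPEED_OF_SOUND = 343.0           # m/s in air at ~20 degrees C, for ultrasound

def tof_distance(round_trip_time_s: float, propagation_speed: float) -> float:
    """Distance to the target: the pulse travels out and back,
    so the one-way distance is half the round trip."""
    return propagation_speed * round_trip_time_s / 2.0

# A lidar echo returning after 66.7 ns corresponds to a target ~10 m away.
print(tof_distance(66.7e-9, SPEED_OF_LIGHT))   # ~10.0 (metres)
# An ultrasonic echo returning after 5.8 ms corresponds to ~1 m.
print(tof_distance(5.8e-3, SPEED_OF_SOUND))    # ~1.0 (metres)
```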

Considering the different structure and nature of the data acquired, it is natural to look to a digital database as the structure that can collect them; in the AEC sector a geometrical model is the single container and the digital replica of what we can investigate. Therefore, the difficulties arising from measuring with different tools are not the only issue; the need to match and position the measurements being performed is the other.

The spatial positioning of the model within a coordinate system, together with an internal positioning system for the surveys carried out, thus becomes a challenge.

In the case of thermal cameras, matching images to the geometric model can be done manually. In the case of some laser scanners, the connected applications automatically record the user's movements from one scan to another, pre-registering the scans in the field without manual intervention, and the acquisition of associated HDR (High Dynamic Range) images is likewise automatic. However, for more precise positioning of detections it may be necessary to bring indoor positioning down to tolerances of just a few millimetres.

Almost all indoor positioning systems also lack external reference systems due to their nature and are anchored to temporary positioning systems that should be linked to more general systems.

The GPS geolocation system is particularly effective in open spaces, but inside buildings or in heavily urbanised areas GPS easily loses its signal, and alternatives must be sought.

Among the most used technologies for indoor positioning, each with different functionalities, we consider:

  • Beacons (using Bluetooth Low Energy), which, like other technologies and apps for mobile devices, have found wide use as tracking tools both in maintenance and in areas closer to Cultural Heritage or proximity marketing. Specifically, beacons are hardware devices that use Bluetooth technology to send and receive signals over short distances. They act as nearby access points used to calculate where the device is located (Pavan et al. 2020).

  • Glasses: HoloLens (Hubner 2020), HTC Vive, Oculus Rift. They differ in the functionality they were designed for, but all three devices include positioning capabilities for indoor environments. The HoloLens, designed for AR viewing of holograms superimposed on the environment, includes several sensors: inertial measurement sensors, a light sensor and four cameras for environmental analysis. The device measures the time of flight of IR light and creates a 3D image of the room, with an accuracy of about 3 cm. The HTC Vive uses an IR flash followed by horizontal and vertical sweeping IR line lasers; small microchips with photocells tuned to infrared light are fitted on the headset, the handles and the trackers, making it a passive sensing system. Positioning is determined by detecting the pulses on the photodiodes: the time between the initial flash and the pulse generated by the line laser yields a positioning accuracy of 2 mm at a 30 Hz update rate. There are some limitations on the size of the spaces, which must remain within the sensors' detection range. The Oculus Rift uses LED markers on the headset, handles and trackers and then uses cameras to track their position; the cameras are fitted both in the headset and on stands in the room. The accuracy of this system is around 3.5–12 mm but, just as with the HTC Vive, errors increase with distance from the cameras (Weinmann 2021).

  • QR codes attached to objects: the most traditional tool we can use, but still one of the most accurate. A simple camera tracks objects when a QR code is attached to a fixed object as a reference point, so that the position becomes geographically known to the camera. The orientation, angle and position of the camera can be calculated by registering the symmetry and size of the QR code in the image. Image recognition of this kind is used in AR tools, allowing a virtual model to be anchored and positioned in the real world so that the two match each other (a minimal pose-estimation sketch follows this list).
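As an illustration of that last point, the following sketch estimates the camera pose from the four detected corners of a QR code of known physical size using OpenCV; the intrinsic matrix, the 40 mm code size and the file name are assumptions, and in practice the intrinsics would come from a camera calibration:

```python
# Sketch: camera pose from a QR code of known size (assumed values marked below).
import cv2
import numpy as np

CODE_SIZE = 0.04  # assumed physical edge length of the QR code in metres
# Assumed pinhole intrinsics; in practice obtain these from cv2.calibrateCamera.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume negligible lens distortion

# 3D corner coordinates of the code in its own plane (z = 0).
object_points = np.array([[0, 0, 0],
                          [CODE_SIZE, 0, 0],
                          [CODE_SIZE, CODE_SIZE, 0],
                          [0, CODE_SIZE, 0]], dtype=np.float32)

image = cv2.imread("frame.png")              # a frame from the tracking camera
found, corners = cv2.QRCodeDetector().detect(image)
if found:
    ok, rvec, tvec = cv2.solvePnP(object_points,
                                  corners.reshape(4, 2).astype(np.float32),
                                  K, dist)
    # tvec places the code's origin in camera coordinates; inverting the
    # transform gives the camera position relative to the fixed code.
    R, _ = cv2.Rodrigues(rvec)
    camera_position = -R.T @ tvec
    print(camera_position.ravel())
```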

4.1.2 The Survey Process in BIM4EEB

The whole process of BIM4EEB (Daniotti 2021) is structured around the idea of a BIM model enriched with data from different areas of data collection; the database collects both data coming from users and data related to energy needs and consumption, and it is stored in an IFC file in the BIM Management System. However, the model used, a digital twin of the existing building, must be reproduced with extreme accuracy in order to intervene in a precise way during the renovation process.

One of the tools developed by the project concerns the ability to systematise a variety of existing survey tools to produce a more complete and rapid mapping of buildings. Coupling different instruments for different purposes has allowed us to realise a new tool that deepens and speeds up the survey.

Installations within walls can be easily detected by the new tool and reconstructed with augmented reality within the realised IFC model.

At this point, the following definitions should be given:

Virtual Reality (VR): VR is an immersive experience based on realistic 3D contents, sounds and other sensations to replicate a real environment or create an imaginary world that you can view through glasses.

Augmented Reality (AR): AR is a live view of a real-world environment with augmented, superimposed contents. The augmentation is achieved using devices like smartphones, tablets or custom headsets and dedicated apps that overlay digital contents onto the real scene (without interaction).

Mixed Reality (MR): MR, or hybrid reality, is the merging of real and virtual worlds to produce a new environment where physical and digital objects interact and cooperate in real time.

In our experiment the AR content overlaid on the digital model, i.e. a point cloud, is modelled and geolocated starting from the survey that the sensors have captured.

The creation of an AR tool for fast mapping was organised around the development of a sensor stick with several functions for detecting installations inside walls, to be coupled with the IFC file derived from the general point clouds previously surveyed.

AR visualisation is used to give detailed information about a placed virtual object, in this case hidden pipes and cables inside the walls, and about materials. The data from the mapping is shaped as an IFC file and transferred to the BIM Management System to be used in a larger workflow for renovation processes.

The AR tool has been developed to collect sensor data and laser scanning data and to mix the data in a 3D model environment in the HoloLens.

The AR tool development has been an integration effort, divided into the development of the sensor stick, testing of measurement results from the sensors in different environments, evaluation of the test data, and decisions on methods and on the type of sensors. The results of this work have confirmed that it is possible to show objects inside walls, such as electricity cables, humidity, metals and studs, and to know and locate their position in the building.

4.2 Functionalities of a Fast-Mapping Toolkit

4.2.1 The Hardware

The end goal of the fast-mapping toolkit is to produce an IFC file that contains information about the building's construction, geometry, and installations. For years now, there has been a consolidated awareness that an accurate and complete 3D digitisation is indispensable for various maintenance activities. In our project the IFC file collects information from a point cloud, a sensor stick used through the HoloLens 2 device (AR glasses) and a laptop (Fig. 4.1).

Fig. 4.1 The IFC files are created out of information from a point cloud, a HoloLens 2 device (AR glasses), a sensor stick and a laptop

Using surveyed 3D data in the field can facilitate the interpretation and use of geometrical shapes in real time; the opportunity to consult survey results overlaid on real scenarios improves the building management process. Maintenance actions could also benefit from VR/AR/MR solutions, and the main idea is to apply concepts taken from Industry 4.0, where AR and MR systems are used to reduce production costs, increase efficiency, and make working processes easier and faster.

In the fast-mapping workflow the point cloud obtained from a TLS campaign is visualised in the HoloLens 2, where the augmented elements surveyed with the sensor stick are created. The HoloLens is a powerful device, but with some computational and battery-life limits: it can handle and render about 600,000 points inside its field of view before a noticeable frame drop, and battery life is around 2–3 h depending on the workload. For this reason it would be impossible to use, visualise and interact with full-resolution 3D datasets composed of hundreds of millions or even billions of points (typical of large surveys), and this is a limitation. The HoloLens 2 can recognise hand gestures, gaze and voice commands. When a user's hand approaches the field of view of the HoloLens 2, one of its depth sensors begins to monitor the user's hand.

Any laser scanner can be used to acquire the scan setups that are then aligned to create the point cloud; the BIM4EEB project tested three demo cases in three different residential buildings, verifying the usability of the system at the scale of an apartment. All the difficulties of a normal survey remain: the presence of visual obstructions to surveying the normal geometry of the walls, the limited size of some rooms, and the presence of reflective surfaces.

When the point cloud is ready, the scan is imported into the toolkit using a laptop running the so-called Companion App. The Companion App has been developed to prepare the point cloud for streaming and to make it available to the point cloud streaming service that also runs on the laptop.

The whole workflow needs to be accompanied by a Wi-Fi connection, and this may be considered another limitation.

The laptop is connected to Wi-Fi to allow other devices to stream and view the point cloud in the same environment.

The point cloud is visualised inside the HoloLens and becomes a reference for creating accurate geometries manually. To do that, the user on the HoloLens essentially creates a specification of the geometry that is sent to the laptop. Another service on the laptop takes this specification and translates it into an IFC file. The results are sent back to the HoloLens device and visualised. Alongside all this there is the sensor stick, which is connected to the same Wi-Fi; the HoloLens connects to the sensor stick and pulls data from it, translated to the correct position relative to where the HoloLens sees the sensor stick, which creates a positioned sensor cloud (Fig. 4.2).
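Neither the message format of this geometry specification nor the translation service is documented here, so the following is only a sketch of the idea: a hypothetical JSON specification of the kind the headset could send is turned into a (geometry-less) IfcBeam entity with the open-source ifcopenshell library; all field names are illustrative.

```python
# Sketch of a specification-to-IFC translation step (hypothetical message format).
import json
import ifcopenshell
import ifcopenshell.api

# A hypothetical geometry specification as the headset might send it.
spec = json.loads("""{
    "ifc_class": "IfcBeam",
    "name": "Hidden beam 01",
    "origin": [1.20, 0.35, 0.00]
}""")

# Build a minimal IFC model and add the specified entity to it.
model = ifcopenshell.api.run("project.create_file", version="IFC4")
ifcopenshell.api.run("root.create_entity", model,
                     ifc_class="IfcProject", name="FastMapping")
ifcopenshell.api.run("unit.assign_unit", model)
element = ifcopenshell.api.run("root.create_entity", model,
                               ifc_class=spec["ifc_class"], name=spec["name"])
# Geometry and placement would be attached here from the spec's coordinates;
# that step is omitted in this sketch.
model.write("fast_mapping.ifc")
```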

Fig. 4.2 The Companion App prepares the point cloud for streaming and makes it available to the point cloud streaming service that also runs on the laptop. The laptop is connected to a Wi-Fi network that makes it possible for other devices to stream and view

4.2.2 The Companion App

The Companion App is developed in Unity (C#) and has the task of communicating with the HoloLens and the sensor stick. To handle large amounts of data on a HoloLens 2 device with limited computing resources, the computational parts of the program were put into the Companion App instead of into the HoloLens 2 glasses. This creates a large stream of data between the HoloLens 2 and the Companion App, so it was decided to use a standalone Wi-Fi network. As already said, the HoloLens 2 is a small device and fails to handle big point clouds; to manage them on such a small device, an algorithm was created that works together with the HoloLens 2: the device reports its position and viewing direction to the Companion App.

The Companion App then streams only the part of the point cloud that the HoloLens 2 device currently needs. Thanks to this algorithm, a large, high-density point cloud of a big building can be used on a small device with limited resources.
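The selection step is not detailed further in the text; as a minimal sketch of the idea, assuming a simple viewing-cone test, streaming only the points in front of the headset could look as follows (the field of view, range and all names are illustrative):

```python
# Sketch: keep only the points inside the headset's viewing cone,
# so that a large point cloud can be streamed piecewise (assumed approach).
import numpy as np

def visible_points(points: np.ndarray,
                   head_pos: np.ndarray,
                   view_dir: np.ndarray,
                   fov_deg: float = 52.0,      # assumed field of view
                   max_range: float = 10.0) -> np.ndarray:
    """Return the subset of `points` (N x 3) inside a cone of half-angle
    fov_deg/2 around `view_dir`, within `max_range` of `head_pos`."""
    offsets = points - head_pos
    dists = np.linalg.norm(offsets, axis=1)
    view_dir = view_dir / np.linalg.norm(view_dir)
    # Cosine of the angle between each point's direction and the gaze.
    cos_angles = (offsets @ view_dir) / np.maximum(dists, 1e-9)
    in_cone = cos_angles >= np.cos(np.radians(fov_deg / 2.0))
    return points[in_cone & (dists <= max_range)]

# Example: a million random points, of which only the visible slice is streamed.
cloud = np.random.uniform(-20, 20, size=(1_000_000, 3))
chunk = visible_points(cloud, head_pos=np.array([0.0, 0.0, 1.6]),
                       view_dir=np.array([1.0, 0.0, 0.0]))
print(len(chunk), "points would be streamed")
```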

The Companion App has three tabs in its menu: Settings, Ifc Files and Point-Clouds. Under Settings, you set the IP addresses and ports for the HoloLens and the sensor stick. You also specify the folder from which point clouds are picked up and the folder where IFC files are stored.
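The storage format of these settings is not described; purely as an illustration, they could be kept in a small JSON file written and read by the services on the laptop (every field name below is an assumption):

```python
# Purely illustrative settings structure for the Companion App (assumed names).
import json

settings = {
    "hololens": {"ip": "192.168.0.10", "port": 5000},
    "sensor_stick": {"ip": "192.168.0.11", "port": 5001},
    "point_cloud_folder": "C:/fastmapping/pointclouds",
    "ifc_output_folder": "C:/fastmapping/ifc",
}

with open("companion_settings.json", "w") as f:
    json.dump(settings, f, indent=2)
```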

The point cloud and IFC files are visualised in the Companion App, where it is also possible to see where the HoloLens is. The figure below shows the Companion App with the point cloud of CGI's office in Sweden visible. The red glasses in the lower left corner show where the HoloLens is and in which direction it is looking (Fig. 4.3).

Fig. 4.3 The Companion App with the point cloud of CGI's office in Sweden visible. The red glasses show where the HoloLens is

4.2.3 The HoloLens 2 Device

The training with the point cloud started with the HoloLens 1 but shifted to the HoloLens 2 as soon as we realised that the new, more powerful device offered more power and features. The new interaction model was also greatly appreciated: the HoloLens 2 can fully track both hands, which means a richer, more life-like experience; interactions with the menu are more natural, and the secondary hand controller we used earlier is no longer needed. However, as the HoloLens 2 was introduced some problems did arise, some expected and some unexpected. To use the glasses for fast mapping they need the app "PolyTech", which is developed in Unity and makes it possible to download the point cloud, run the sensor stick and create IFC files. The menu in the PolyTech app contains: settings, point cloud, IFC files, the sensor stick, workspace (where the IFC objects are created) and home (Fig. 4.4).

Fig. 4.4 The home menu in the HoloLens app PolyTech

When creating an object in the HoloLens it is possible to choose between: 4-point cube, 3-point cube, simple cube and spline. Once objects are created you can change their position, rotation, and size. When creating the objects you also select which type they shall be; there are different options to choose between. In the figure below IFC_BEAM is selected (Figs. 4.5, 4.6 and 4.7).

Fig. 4.5 In the workspace it is possible to create objects. You can choose between: 4-point cube, 3-point cube, simple cube and spline

Fig. 4.6 When creating an IFC object, you need to choose which object type it should be

Fig. 4.7 Beams are created as IFC objects in the HoloLens (orange). Now they need to be tilted into the correct position so that they are behind the wall

4.2.4 The Sensor Stick

The sensor stick is the tool used to survey hidden data; it is part of the fast-mapping toolkit together with a laser scanner and the HoloLens. It is a hardware instrument that detects, through different sensors, what is not visible to the naked eye. The sensor stick contains four different sensors, measuring temperature, voltage, inductance, and capacitance. The temperature sensor detects the temperature at the surface of an object, a wall, a floor or a ceiling. The voltage sensor detects 50 Hz AC voltage at different depths in a construction; it is possible to change the sensitivity of the meter to find what is sought. The inductance sensor measures the relationship between a magnetic flux and the current strength, which indicates where there might be beams in the construction. The capacitance sensor measures the resistance of the material and can thereby, among other things, detect moisture.

All four sensors are active at the same time, and the measurements are both recorded and visualised in real time by the HoloLens. When the camera of the HoloLens sees the sensor stick (which has a QR code on its top), its position is registered. The result from the sensor stick is visualised in the HoloLens both as different colours on the point cloud and as diagrams in the menu.

The results from the sensor stick are then used to identify where beams, water pipes, etc. are located, and the IFC objects are created within the HoloLens with the help of these results. The figure below shows the result from the sensor stick as blue and orange colours; with this colour indication, the beams are then created (Fig. 4.8).
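The exact mapping from readings to colours is not specified; a minimal sketch of the idea, assuming each point is coloured by nearby sensor samples on a simple blue-to-orange scale, could look as follows:

```python
# Sketch: colour point-cloud points by the nearest sensor-stick reading
# (assumed approach; colour scale and radius are illustrative).
import numpy as np

def colour_by_reading(points: np.ndarray,
                      sample_positions: np.ndarray,
                      sample_values: np.ndarray,
                      radius: float = 0.05) -> np.ndarray:
    """Return an N x 3 RGB array: points within `radius` of a sample are
    coloured from blue (low reading) to orange (high); others stay grey."""
    colours = np.full((len(points), 3), 0.5)           # default grey
    lo, hi = sample_values.min(), sample_values.max()
    for pos, val in zip(sample_positions, sample_values):
        near = np.linalg.norm(points - pos, axis=1) <= radius
        t = (val - lo) / max(hi - lo, 1e-9)            # normalise reading
        colours[near] = (1 - t) * np.array([0.0, 0.3, 1.0]) \
                        + t * np.array([1.0, 0.5, 0.0])
    return colours

# Example: colour a small synthetic wall patch by inductance readings.
wall = np.random.uniform(0, 1, size=(10_000, 3)) * [2.0, 0.05, 2.4]
samples = np.array([[0.5, 0.0, 1.2], [1.5, 0.0, 1.2]])
readings = np.array([0.1, 0.9])                        # e.g. inductance values
rgb = colour_by_reading(wall, samples, readings)
```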

Fig. 4.8 The results from the sensor stick are visualised on the point cloud as different colours, with each sensor given a specific colour. Here the green colours indicate where the beams are in the construction

4.2.5 Modelling Objects in a 3D-Environment by a Unity Based AR Visualisation

All interactions between the user and the HoloLens 2 are based on the hand movements that the camera in the HoloLens captures. After scanning with the laser scanner and the sensor stick, you can create the 3D IFC image of a single room and, adding one room to the next, of the flat and of the building. The HoloLens can then place an IFC building element based on the point cloud and on the data from the sensor stick scan. The user chooses which IFC element to use and places it in the area based on that criterion; for example, based on the data from inside the wall, the application could automatically place the beams within a wall. All elements can be moved in all directions and rotated inside the application from the device.

4.2.6 Main and Open Issues

The software works well, but some things could make it more efficient to use after further development. The workflow has shown that interaction between different data is possible through the interfaces we create with IFC data, even when the data comes from different sources, whether laser scan or sensor stick. However, the manual recognition of different objects for the reconstruction of elements in augmented reality can, in the long run, be complex and cumbersome. A possible way to further speed up the survey process would be to develop machine learning processes that automatically recognise the most recurring parts within a building, creating templates of frequently placed doors, windows and openings, so that less of the mapping has to be redone. Further improvements should also be directed at the positioning of the augmented elements in the walls, as it does not yet seem as accurate as it should be.
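Such a template-extraction step is proposed rather than implemented in the project; as one possible illustration, recurring door and window sizes could be derived by clustering the dimensions of already-mapped openings, for example with scikit-learn (the data and cluster count below are invented for the example):

```python
# Sketch: derive templates of recurring openings by clustering their
# dimensions (illustrative only; data and cluster count are invented).
import numpy as np
from sklearn.cluster import KMeans

# Width x height (metres) of openings already mapped manually.
openings = np.array([
    [0.90, 2.10], [0.91, 2.09], [0.89, 2.11],   # a recurring door size
    [1.20, 1.40], [1.21, 1.39], [1.19, 1.41],   # a recurring window size
    [0.70, 2.00],                                # a one-off opening
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(openings)
for label, centre in enumerate(kmeans.cluster_centers_):
    count = int(np.sum(kmeans.labels_ == label))
    print(f"template {label}: {centre[0]:.2f} m x {centre[1]:.2f} m "
          f"({count} occurrences)")
```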