1 The Impact of Augmented Reality Technology on Visualization Requirements

Modern techniques of augmented reality (AR) offer people the opportunity to perceive additional information in their immediate field of view that is not available in real space. Using various methods, augmented reality visually generates a virtual (possibly also three-dimensional) object, creating the impression that this object, regardless of the viewing angle, exists in reality. Augmented reality is commonly described "as an interactive real-time environment in which virtual content is superimposed into the user's real environment in a perspectively correct manner" (Dörner et al. 2016; cf. also Azuma 1997).

A characteristic feature of augmented reality lies in the fact that this form of visualization is not static, but continuously adapts to the current visual angle of the respective viewer (Broll 2019). A precondition, however, is the simultaneous electronic recording of the environment and the matching of the recorded spatial situation with an internally available 3D data model. Methods of image registration make it possible to position virtual 3D objects, such as signs, symbols, texts or even animated objects, relative to objects in the real world. The virtual objects are then displayed in real time, based on the viewer's current perspective on the environment. This creates the impression that a virtual object, e.g. a virtual landmark, is part of the real scene. Interaction with the real environment, i.e. immediate orientation in or navigation through space, can therefore benefit from this visualization technique, which brings advantages over classical media of spatial visualization that require a constant change of gaze and cognitive transformations (Munoz-Montoya et al. 2019).

Compared to virtual reality (VR), the appeal of augmented reality is that it does not place users in a completely alternative virtual environment, but maintains a much stronger connection to reality. Users are hardly decoupled from the current sensory influences of reality; they are merely confronted with additional virtual objects projected into reality that provide them with additional information. Within the reality-virtuality continuum described by Milgram and Kishino (1994), augmented reality thus clearly leans more towards reality than virtuality (for a further terminological discussion, see Çöltekin et al. 2020). In the context of AR experiences, users can primarily act in (their) reality, which significantly increases the possibilities of practical application compared to virtual reality.

The dynamic positioning of the superimposed image in the scene being viewed is crucial for the generation of AR elements displayed correctly in terms of perspective. Understanding these technical basics is an important prerequisite for the cartographic use of augmented reality elements. The different techniques influence the visualization and the perception of AR elements in 3D space. In the following, important visualization properties of current augmented reality techniques are emphasized.

2 Practical Applications of Augmented Reality

Augmented reality is playing an increasingly important role in a wide variety of everyday application scenarios. For example, non-visible item labels can be displayed during warehouse and logistics work, or additional explanatory texts can be shown for exhibits when visiting museums. But even at smaller geographic scales, such as in the field of environmental planning, two- or three-dimensional objects or phenomena can be communicated more directly to users of AR systems (Wang et al. 2019). Examples that can already be considered "classic" include AR experiences for navigation with mobile devices, where arrow symbols superimposed on the display guide the way (Fig. 1; see Liu et al. 2021 for indoor navigation). The gaming and entertainment industry has used AR for quite a long time, e.g., to visualize artificial characters in Pokémon GO (Niantic), to overlay digital masks in Snapchat, Zoom, etc., or to superimpose distance lines in soccer match broadcasts (Fischer-Stabel 2018, p. 150). However, forms of augmented reality are also gaining practical importance for applications outside the world of games and leisure-time activities (Keil et al. 2019, 2020; Schart and Tschanz 2015, 2017; van Krevelen and Poelman 2010). For planning purposes in urban design (architecture, road construction and maintenance), for example, technical infrastructure that is not directly visible in reality can be visualized through a device display, such as the pipeline systems running beneath a road (Stylianidis et al. 2020) or future construction projects (Wolf et al. 2020). Even industrial plants and machines are now designed with the support of such virtual techniques (Kühn-Kauffeld and Böttcher 2020). In vehicles, driving lanes depending on the current steering angle have long been superimposed as lines on the images of rear-view cameras (Dörner et al. 2016). The latter in particular shows how much "people already intuitively rely on additional virtual elements to solve spatial tasks" (Keil et al. 2020). AR applications are also becoming increasingly important in education, for example in geography lessons (Challenor and Ma 2019; Stintzing et al. 2020; Trunfio et al. 2020).

Fig. 1 Utilization principle of an AR element (3D arrow) superimposed into reality for smartphone-supported indoor navigation (photo: J. Keil)

3 Opportunities for Cartography

AR is characterized by the fact that users can recognize information content (e.g., the location, size, shape, and color of an object) at first glance, without having to understand the abstract symbols that would be needed to paraphrase the same information in conventional textual or graphical coding systems (Narzt et al. 2006). However, AR technology also allows abstract symbols and models to be projected into real space to provide more detailed information about the immediately perceptible space and thus to support orientation (roughly comparable with non-digital mid-air displays, see Dickmann 2012). AR-supported head-up displays on car windshields, which have been developed and offered by car manufacturers as navigation aids for many years, already indicate a great potential for cartographic applications (Narzt et al. 2006). This emphasizes that augmented reality can also be used for 'classic' mapping applications. For example, de Almeida Pereira et al. (2017) show that 2D maps can be augmented with three-dimensional information (see also Hugues et al. 2011, on combining AR with GIS). Moreover, an increasing number of empirical studies demonstrate and discuss the benefits of AR applications for spatial memory performance, for example, compared to two-dimensional photography (Munoz-Montoya et al. 2019; Rehman and Cao 2017; Guarese et al. 2019), or for the distance estimation of objects in space (Keil et al. 2020; Hedley 2003).

Like no other medium, AR opens up the possibility of relating additional information directly to the 3D space perceived at a specific moment. This especially applies to map-based interaction with space, as well as to navigation and planning. Typical challenges of cartographic interpretation are effectively addressed, such as the often tedious, geometrically correct assignment of spatial objects represented in maps to the corresponding positions in spatial reality (georeferencing). This has advantages especially for spatial orientation, as augmented reality can make important route cues appear directly in the field of view (Fig. 1; for an example using similar signifiers in immersive virtual reality, see Edler et al. 2018). In particular, spatial target positions, for example ones hidden behind a block of houses, can be visualized to significantly support orientation.

Although, similar to fully virtual realities (VR), there are ongoing debates about whether AR techniques are cartography in the narrower sense (see Hruby et al. 2021), it can be noted that such visualization methods of spaces are coming more and more into the focus of cartographic work and research (Hruby et al. 2021; Kersten and Edler 2020; Keil et al. 2019, 2020; Çöltekin et al. 2020; Edler et al. 2018; Griffin et al. 2017).

The advantages of visually integrating cartographically modeled elements into real space are particularly evident in spatial tasks that are carried out in directly experienced reality, e.g., object search, orientation, and navigation (Pranz et al. 2020; Schmalstieg and Reitmayr 2007). With AR visualization, there is no need for the cognitive transformation processes which occur, for example, when orienting oneself with 2D maps (Bitter 2000; Lloyd 1993). The AR elements intended for visualization, e.g., a destination hidden behind trees during a navigation task, are usually projected into a spatial scene in graphically generalized form. Unlike the superimposed AR elements, however, the environment itself is not modeled. This ensures the detectability and spatial referencing of AR objects even in spatially complex environments. For vehicle navigation, the possibilities of using AR elements to display routes or to indicate potentially dangerous situations in road traffic have been discussed for years (Narzt et al. 2006).

The model character that generally characterizes maps (Dickmann 2018, p. 13) is reduced by the large proportion of non-abstract reality. Although the few abstract elements are not dominant, they are highly relevant for the message. Instead, cartographic information such as immediate navigation instructions (e.g., superimposed direction arrows) or semantic information about spatial objects (e.g., about the use or age of buildings) can be related to real space in a thematically and spatially much more focused way. This representation principle is comparable to analog or GIS-based aerial photo maps, in which aerial photographs of reality are expanded by a few selected topographic-cartographic elements placed in the photograph (orthophoto) to support orientation. The majority of the aerial photo information is retained and forms the spatial basis for the representation, while the added map elements such as margin information, symbols or text decisively supplement the information provided by the aerial photograph. Similarly, AR usually integrates only a few additional (cartographic) elements into the perceived real-world space. However, compared to two-dimensional maps, AR visualization is far more immersive due to the dynamic and three-dimensional representation method. Augmented reality, therefore, has the potential to do particular justice to the genuine task of maps, i.e. to convey spatial information as efficiently as possible (Dickmann 2018; Robinson et al. 2006). In AR, only the elements which are positioned in the field of view and which are directly used to solve spatial tasks need to be cartographically encoded; the vast majority of the field of view can remain cartographically non-encoded. With AR, the classical mapping requirements of cartography such as generalization or encoding (Bollmann 2002, p. 13) therefore apply to quantitatively less content, since only (some) AR elements of a perceived spatial scene are addressed. However, more focus needs to be placed on the representational quality of the AR elements, such as issues of the level of abstraction (generalization, salience) or the possibilities of user-based interaction.

4 The Technical Visualization of AR Elements in Ambient Reality

In general, the visualization principles used so far to generate AR are similar. Nevertheless, there are considerable differences in the actual technical implementation (Table 1), with consequences for practical applications and the visualization purpose. By now, numerous hardware and software solutions are available that allow AR elements to be visualized or projected into an ambient reality. Two technological options stand out: two-dimensional AR visualizations, which can be generated on the basis of smartphone or tablet displays, and the technically more complex AR presentations which create a true stereoscopic 3D image, and thus a significantly more immersive impression, using head-mounted displays (HMDs). Handheld devices are currently the most important output devices for augmented reality due to their widespread use (Fischer-Stabel 2018). Moreover, they are now equipped with suitable AR frameworks, such as ARCore for Android and ARKit for iOS.

Table 1 Visualization of AR elements with the help of handheld devices and head-mounted displays (after Broll 2019, p. 316)

Crucial to the generation of an AR element that is presented correctly in terms of perspective is the dynamic positioning of the inserted image in the observed scene. Using objects recorded in the video stream, the relative position and orientation of these objects to the camera (the extrinsic camera parameters) are determined (Grimm et al. 2019). This ensures that a virtual object is located in the viewer's field of view in such a way that it appears as if it were actually located at this position in reality, regardless of the viewer's location (Dörner et al. 2016). For geometric registration, different tracking methods can be used which continuously record or calculate the location of the viewer in the real environment along with the position of the AR element. In addition to the (position) sensor technology of the devices used (magnetometers, gyroscopes, accelerometers, etc.), computer vision techniques are used intensively. Using these techniques, a sufficiently high spatial precision can be achieved; simple GPS signals are not accurate enough for this purpose (Grimm et al. 2019). Marker-based and feature-based tracking are the two approaches most heavily used in current AR applications (Grimm et al. 2019).
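
To make the underlying computation concrete, the following minimal Python sketch (using OpenCV and NumPy; all intrinsic parameters and point coordinates are hypothetical placeholders, not values from the literature cited here) estimates the extrinsic camera parameters from four known 2D-3D correspondences and then projects a virtual anchor point perspectively correct into the camera image:

```python
import cv2
import numpy as np

# Hypothetical intrinsic camera parameters (focal lengths, principal point),
# normally obtained once through camera calibration.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume no lens distortion for simplicity

# Known 3D points of a reference object (in metres, object coordinate system)
# and their detected 2D pixel positions in the current video frame.
object_pts = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0],
                       [0.2, 0.2, 0.0], [0.0, 0.2, 0.0]])
image_pts = np.array([[310.0, 250.0], [400.0, 255.0],
                      [395.0, 340.0], [305.0, 335.0]])

# Extrinsic camera parameters: rotation (rvec) and translation (tvec) of the
# reference object relative to the camera.
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)

# Project a virtual anchor point (slightly offset from the reference object)
# into the image, so the AR element can be drawn perspectively correct.
anchor_3d = np.array([[0.1, 0.1, -0.1]])
anchor_2d, _ = cv2.projectPoints(anchor_3d, rvec, tvec, K, dist)
print(anchor_2d.ravel())  # pixel position at which to render the AR element
```

In a real AR pipeline, the 2D image points would be redetected in every video frame, so that rvec and tvec, and with them the rendered AR element, continuously follow the movements of the camera.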

4.1 Marker-Based Tracking Methods

In the marker-based tracking method, (often) black and white markers (similar to a QR code) are attached to selected objects (of an environment) in front of which an AR element is meant to be visualized. As soon as the camera detects such a marker, the software examines whether the marker is stored in the database. If so, the position of the camera in relation to the marker is calculated from the image positions of the marker's corner points. In this way, the AR element can be presented in the correct perspective. If the corner points of the marker shift, e.g. due to movement of the viewer, a recalculation (tracking) takes place immediately. However, this procedure requires that such markers are attached to real objects before an AR element can be rendered. Marker-based applications could be used in the tourism sector, for example: during a city tour, historical buildings that are no longer visible today can be visualized on the basis of AR at individual locations (buildings) equipped with small markers.
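
As a rough illustration, the following Python sketch outlines such a marker detection and pose recalculation loop. It assumes OpenCV's aruco module (the API of OpenCV 4.7 or newer; older versions use slightly different function names) and hypothetical calibration values:

```python
import cv2
import numpy as np

# Hypothetical calibration values (see the pose-estimation sketch above).
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)

marker_len = 0.05  # marker side length in metres
# 3D corner coordinates of a marker in its own coordinate system
# (top-left, top-right, bottom-right, bottom-left).
marker_pts = np.array([[-1.0, 1.0, 0.0], [1.0, 1.0, 0.0],
                       [1.0, -1.0, 0.0], [-1.0, -1.0, 0.0]]) * marker_len / 2

# The predefined dictionary acts as the marker database described above.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)
while cap.isOpened():
    grabbed, frame = cap.read()
    if not grabbed:
        break
    corners, ids, _ = detector.detectMarkers(frame)
    if ids is not None:
        # One pose (re)calculation per detected marker and frame: this is
        # the continuous "tracking" triggered by shifting corner points.
        for c in corners:
            _, rvec, tvec = cv2.solvePnP(marker_pts,
                                         c.reshape(4, 2).astype(np.float64),
                                         K, dist)
            # rvec/tvec now anchor the AR element relative to this marker.
cap.release()
```

Only patterns contained in the chosen dictionary are accepted and tracked, which mirrors the database comparison described above.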

4.2 Feature-Based Tracking Methods

Alternatives to this approach are feature-based tracking techniques that do not require markers in the environment. They rely not only on lasers that scan the environment, but also on cameras that capture specific features of image objects (Grimm et al. 2019). In the camera-based technique, instead of markers, characteristic features of real (3D) objects, e.g., the edges and vertices of a statue, are captured directly and matched with models from a database. The tracking procedure captures the displacement values relative to the positions of the corresponding features in the previous images and updates the current position accordingly (Broll 2019; Herling and Broll 2011). However, this more elegant method requires that suitable 3D models of the real objects are available (Dickmann 2021).
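
A minimal sketch of the feature capture and matching step, assuming OpenCV's ORB detector and two hypothetical image files, could look as follows:

```python
import cv2

# Detect and describe characteristic image features (corners, edges) with
# ORB, then match them against a stored reference view of the real object.
orb = cv2.ORB_create(nfeatures=1000)

reference = cv2.imread("statue_reference.png", cv2.IMREAD_GRAYSCALE)  # database view
frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)          # current camera image

kp_ref, des_ref = orb.detectAndCompute(reference, None)
kp_frame, des_frame = orb.detectAndCompute(frame, None)

# Brute-force matching with Hamming distance (suitable for binary ORB
# descriptors); cross-checking filters out unreliable, asymmetric matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_ref, des_frame), key=lambda m: m.distance)

# The best 2D-2D correspondences, combined with the 3D model of the
# reference object, feed the pose estimation (cv2.solvePnP) shown earlier.
good = matches[:50]
print(f"{len(good)} feature correspondences found")
```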

4.3 Video See-Through AR

In handheld devices, also typified as "non-immersive display devices" (Çöltekin et al. 2020, after Holloway 1993), the virtual content is superimposed on a video image of reality captured by a tablet or smartphone camera. In this so-called "video see-through AR" (VST-AR), virtual content of any form is superimposed on the video image in a perspectively correct position with the help of software and then shown on a display (Broll 2019, p. 321).

4.4 Projected Augmentation (HMDs)

In contrast, most HMDs project the virtual content optically onto the surface of a pair of glasses using a semitransparent mirror (prism), e.g., HoloLens (Microsoft), Google Glass (Google), Spectacles (Snap Inc.), or Magic Leap (Magic Leap, Inc.). This allows users to perceive the real environment directly (i.e., not via a video image). The projected AR elements are matched to this view by means of a stereoscopic display: the position of each AR element is calculated separately for each eye and visualized with a corresponding slight shift in perspective.
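
The per-eye calculation can be illustrated with a short NumPy sketch (all values, such as the interpupillary distance, are illustrative assumptions, not specifications of any particular HMD):

```python
import numpy as np

def eye_view(eye_offset_x: float) -> np.ndarray:
    """4x4 view matrix for one eye: shifts the scene horizontally by half
    the interpupillary distance, producing a slight perspective shift."""
    view = np.eye(4)
    view[0, 3] = -eye_offset_x  # move the world opposite to the eye offset
    return view

IPD = 0.063            # illustrative interpupillary distance in metres
head_pose = np.eye(4)  # head pose delivered by the HMD's tracking system

# The position of the AR element is calculated separately for each eye.
left_view = eye_view(-IPD / 2) @ head_pose
right_view = eye_view(+IPD / 2) @ head_pose

ar_element = np.array([0.0, 0.0, -2.0, 1.0])  # 2 m in front of the viewer
print("left eye: ", (left_view @ ar_element)[:3])
print("right eye:", (right_view @ ar_element)[:3])
```

The small horizontal disparity between the two results is exactly what produces the stereoscopic depth impression of the projected AR element.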

However, HMD techniques also exist that use a fully digital visualization approach to augment reality with additional objects. Such HMDs, e.g. the HTC Vive Pro, send camera streams to a pair of displays positioned in front of the eyes. Similar to handheld devices, virtual elements can be added to this camera stream. Strictly speaking, users do not directly perceive the environment with this HMD technology.

5 The Term “Holograms” in the Context of Augmented Reality Techniques

Augmentations of real-world space with virtual elements using handheld devices or HMDs are often called holograms. This is also reflected in the name of the Microsoft HoloLens. Considering the frequent display of holograms in science fiction, this is an easily accessible term to describe the visual effect created by these devices. Terminologically, however, the term is not quite accurate, as classic holograms are based on a completely different technological foundation (Gabor et al. 1971). In contrast to the techniques described above, classic holograms are based on a carrier medium (photographic plate), and the complex construction of modern holograms in the physical sense requires, among other things, specialized equipment such as laser light. Therefore, if the focus is on the technical process used to visualize a virtual 3D element, the term hologram should be avoided. Instead, we propose to simply use the term “AR element”, the term “projected augmentation” when using an HMD with projection lenses, or, in case the handheld device or HMD relies on video streaming, the term “video augmentation”. However, if the focus is on the resulting visual effect, using the term hologram makes it easier for non-physicists to understand how a virtual element can be perceived in real-world space.

6 Creating AR Elements

The specific design of an AR element (object) and its spatial assignment to the surrounding space is based, for example, on typical 3D development software such as the game engine Unity (www.unity.com). Due to the limited 3D modeling capabilities of game engines, complex 3D models are usually created with 3D modeling programs like Blender (www.blender.org). The 3D models can then be imported into Unity, graphically edited (optionally by applying textures), and precisely scaled and positioned within 3D space. The AR elements are usually designed in a prominent color, as a sufficient saliency of the AR elements is necessary for them to be clearly recognizable in the surrounding environment. Meanwhile, numerous freely downloadable digital assets are available (e.g., in Unity's Asset Store) that can be imported directly into the program (Dickmann 2021). This reduces the need to create AR objects from scratch.
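
What "scaled and positioned within 3D space" amounts to internally is a transformation matrix applied to the imported model, as in the following illustrative NumPy sketch (the vertex data and values are hypothetical; a scene graph such as Unity's performs the equivalent operation when the transform of a game object is set):

```python
import numpy as np

def trs_matrix(t, angle_deg, s):
    """Translate-rotate-scale matrix (rotation about the vertical y-axis),
    the transform a scene graph applies when an imported 3D model is
    positioned, oriented and scaled."""
    a = np.radians(angle_deg)
    rot = np.array([[np.cos(a), 0, np.sin(a), 0],
                    [0, 1, 0, 0],
                    [-np.sin(a), 0, np.cos(a), 0],
                    [0, 0, 0, 1]])
    scale = np.diag([s, s, s, 1.0])
    trans = np.eye(4)
    trans[:3, 3] = t
    return trans @ rot @ scale

# Hypothetical vertices of an imported model (e.g., exported from Blender),
# in homogeneous coordinates.
vertices = np.array([[0, 0, 0, 1], [1, 0, 0, 1], [0, 1, 0, 1]], dtype=float)

# Place the AR element 3 m ahead of the origin, rotated 45 deg, at half size.
M = trs_matrix(t=[0.0, 0.0, -3.0], angle_deg=45.0, s=0.5)
placed = (M @ vertices.T).T
print(placed[:, :3])
```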

To link the visualization of artificial elements with ambient reality, further technical steps are required. The methods of displaying AR elements simultaneously with the ambient reality and placing them correctly differ significantly between handheld displays and HMDs (Table 1). This leads to different immersive experiences.

6.1 Visualization of AR Elements for Mobile Devices

In mobile device applications, the AR effect is created by recognizing selected objects in the surrounding reality via image recognition software. Artificial elements can be loaded from a database and superimposed on a previously captured video image. The spatial effect is achieved by detecting the position of a user (or the camera of the smartphone or tablet) with respect to the environment using appropriate (positional) sensing and computer vision techniques. Feature-based approaches of computer vision, a sub-field of artificial intelligence, are used for this purpose (image/object recognition software). To spatially position an AR element relative to a real object (reference object) and to visualize it in a geometrically correct way, extensive information about the shape, position and size of the real object is therefore required. This object information must be kept in the device's database so that the image recognition software can access it for comparison. The virtual position of the AR element to be visualized can only be calculated once the image recognition software identifies the shape of the reference object in the surrounding reality. In this process, characteristic features, such as edges or corners, are extracted by the software from the images captured by the camera of a handheld device. The displacement amounts are recorded by tracking the corresponding features' positions in the previous images (Broll 2019; Herling and Broll 2011). By comparing the positions and shapes of the objects extracted from the camera image with properties already stored in the system as 2D or even 3D data, the perspective of the camera can be calculated. This allows the relative position of the smartphone or tablet to a real object to be determined.
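
The frame-to-frame displacement tracking mentioned above can be sketched with pyramidal Lucas-Kanade optical flow, as in the following illustrative Python example (assuming OpenCV; the camera index and parameter values are placeholders):

```python
import cv2

# Track feature displacements between consecutive camera frames with
# pyramidal Lucas-Kanade optical flow.
cap = cv2.VideoCapture(0)
grabbed, prev = cap.read()
assert grabbed, "no camera frame available"
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Initial set of characteristic corner features in the first frame.
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                              qualityLevel=0.01, minDistance=10)

while cap.isOpened():
    grabbed, frame = cap.read()
    if not grabbed or pts is None or len(pts) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good_new = new_pts[status.ravel() == 1]
    good_old = pts[status.ravel() == 1]
    # Per-feature displacement vectors; their consensus is what updates the
    # estimated camera pose and hence the on-screen position of AR elements.
    displacement = good_new - good_old
    prev_gray, pts = gray, good_new.reshape(-1, 1, 2)
cap.release()
```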

The position and perspective of the content virtually projected onto the display are calculated and visualized, and the representation of the superimposed AR object is dynamically transformed depending on the viewing angle. This is possible in real time, since modern smartphones have been equipped with the necessary technologies and sensors for AR visualizations for years, i.e. camera(s), magnetometers and inertial sensors (gyroscope, accelerometer), as well as the computing and graphics power generally required for visualization. The same applies to current tablets (Dörner et al. 2016). GPS signals are not used for positioning in this process, because their accuracy is not sufficient for the correct geometric placement of AR elements. Despite the progressive technical improvement of smartphones and tablets, feature-based position determination can still lead to inaccuracies in the positioning of AR elements if there are not enough reference objects within the range of the sensors used. In the worst case, such inaccuracies can lead to wrong decisions, e.g. when a navigation instruction is displayed too late or at the wrong intersection. For the visualization of an AR object that refers to a specific three-dimensional object of reality (e.g., a building), the availability of a corresponding 3D model in the memory of the mobile device is a necessary prerequisite (for further information on the creation of such 3D data models, see Dickmann 2021). Not only the AR elements themselves, but also the corresponding reference objects must be available in modeled form. This limits the possibilities of using augmented reality on a larger spatial scale, such as outdoors. Handheld systems also limit the AR experience through the camera perspective used: strictly speaking, only the view of the camera is augmented with AR elements, not the user's actual view. The augmentation is done from the camera's point of view, which has a different position and orientation than the eyes directed at the display (see Figs. 2 and 3; Dörner et al. 2016; Broll 2019, p. 333). Body or hand movements and different viewing positions on the display amplify this offset. Furthermore, the significantly smaller size of the display compared to the user's entire field of view reduces the level of spatial immersion.

Fig. 2 Difference between camera and viewer perspective in AR visualizations with smartphones or tablets (modified after Broll 2019, p. 333)

Fig. 3 Eye-offset due to camera positioning caused by handheld devices. The real-world resolution is distorted through the camera perspective (see van Krevelen and Poelman 2010) (source: Dickmann 2021, A-9)

6.2 HMD-Based Visualization of AR Elements

The described loss of immersion is less pronounced if AR elements are displayed with an HMD. This is due to the higher overlap of the sensor-tracked and augmented spatial area with the spatial perspective of the user (Fig. 4). Cameras, sensors and projection areas or displays of AR HMDs, like the Microsoft HoloLens or the HTC Vive Pro, are located in immediate proximity to the eyes. Furthermore, the use of two lenses instead of one allows stereoscopic images to be displayed (Noor 2016) and thus provides three-dimensional visual information about augmented elements in a way that resembles natural stereoscopic spatial perception. Since the real world can still be perceived through the transparent lenses or on the displays (Gruenefeld et al. 2017), the projected AR elements “merge with the real environment” (Keil et al. 2020).

Fig. 4 Construction of a 3D environment (HoloLens); left: real space (initial situation); right: point cloud model of the real space, which provides the depth information for the geometrically correct positioning (visualization) of an AR element (photo: J. Keil)

This indicates that the quality of the perceived immersion is significantly increased if an HMD is used instead of a handheld device. However, HMD-based AR also suffers from limitations of immersion caused by technical characteristics of the HMDs. The lenses and displays used to visualize AR elements are still very small. With an increasing size of the lenses and displays, more pixels and, consequently, more computing power would be required to achieve a high resolution. This limits the field of view (FOV) available to display AR elements and can cause the virtual images to be cut off at the edges of the projection field (van Krevelen and Poelman 2010; see Figs. 5 and 6).
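
A back-of-the-envelope calculation with purely illustrative numbers shows how quickly the pixel demand grows when the FOV is enlarged at constant angular resolution:

```python
# Back-of-the-envelope estimate (illustrative numbers): the pixel count
# required to keep a constant angular resolution grows with the product
# of the horizontal and vertical field of view.
def pixels_needed(fov_h_deg, fov_v_deg, pixels_per_degree):
    return (fov_h_deg * pixels_per_degree) * (fov_v_deg * pixels_per_degree)

ppd = 45  # pixels per degree; visual acuity is often put at around 60 ppd

small = pixels_needed(30, 17, ppd)    # a narrow HMD projection field
large = pixels_needed(100, 80, ppd)   # approaching the natural FOV

print(f"narrow FOV: {small / 1e6:.1f} MPix per eye")
print(f"wide FOV:   {large / 1e6:.1f} MPix per eye")
print(f"factor:     {large / small:.0f}x more pixels (and compute)")
```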

Fig. 5 The small lenses of AR HMDs limit the field of view in which virtual elements can be displayed. Therefore, these elements are only visible if the eyes are oriented directly towards these elements or their approximate direction

Fig. 6 View through the right lens of the Microsoft HoloLens on an indoor AR element (example of a white geometric grid placed virtually on the floor of a room)

The selection of a specific AR HMD, and whether it uses projection onto transparent lenses (e.g. Microsoft HoloLens or Magic Leap) or camera streams and displays (e.g. HTC Vive Pro), has an impact on the capabilities and limitations of an AR application. The total FOV of HMDs with transparent lenses can be significantly larger than that of handheld devices, as the natural FOV of the eyes is only restricted by the non-transparent elements of the HMD. However, it is not necessarily possible to display AR elements across this whole FOV. For example, the reflective surface of the Microsoft HoloLens covers only a small fraction of the total FOV (see Fig. 6).

Sophisticated hardware and software solutions even enable direct interaction with the AR elements, for example by repositioning them or by triggering an animation (Keil et al. 2019; see Fig. 7). In addition to the use of controllers (HTC Vive), the use of gestures to interact with objects in augmented reality or for system control is also considered a powerful technique (Dörner et al. 2019). Furthermore, there are numerous other forms of interaction, some of which are technically very complex (see the overview in Dörner et al. 2019).

Fig. 7 Using hand gestures, the Microsoft HoloLens allows users to directly interact with AR elements

Given that all visual information is provided indirectly via the output of camera recordings instead of natural scene perception, display-based HMDs are more prone to lag caused by processing operations. Not only the AR elements, but the complete visual scene can be displayed with lag. This can make the visual scene perception less comfortable, especially during fast head movements. An advantage of display-based HMDs is that AR elements appear less transparent when placed in front of a bright background, as the background light cannot shine through the displays in the way it can through a projection on a transparent surface.
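
An illustrative calculation (with assumed, not measured, values) shows how such processing lag translates into visible misregistration during head movements:

```python
# Illustrative estimate: during a head rotation, every millisecond of
# motion-to-photon lag shifts the rendered scene against the real one.
head_speed_deg_s = 100.0   # assumed moderate head rotation speed
latency_s = 0.020          # assumed 20 ms processing/display lag

angular_error_deg = head_speed_deg_s * latency_s
print(f"misregistration: {angular_error_deg:.1f} degrees")  # 2.0 degrees

# At an assumed 45 pixels per degree this corresponds to a ~90-pixel offset,
# which is clearly visible and helps explain the discomfort during fast
# head movements.
print(f"pixel offset: {angular_error_deg * 45:.0f} px")
```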

7 Current Potentials and Restrictions for Cartographic Use

The potential application of AR techniques can have a positive impact on classic fields of cartographic visualization. By retaining the ability to interact with real space, AR elements have the potential to improve, for example, orientation and navigation, but also the perception of space and the construction of cognitive representations of space. The limitations of two-dimensional and pseudo-3D representations, which have so far hampered the perception and cognitive processing of spatial and especially 3D information (objects), e.g. due to the need for cognitive transformation, can be overcome by augmented reality. AR elements placed in space can provide visual information about spatial objects that are otherwise not visible. They can also represent cartographic signs in generalized or abstracted form, which can be used, for example, as additional directional cues for orientation and navigation. This also applies to augmenting an environment with landmarks that may not be in the viewer's field of vision at the time of orientation because they are covered by other topographic objects.

Beyond that, the ability to interact with AR elements individually can address another dimension of conveying spatial information. Using controllers or even hand gestures, users can directly interact with AR elements. Moreover, any number of database-driven spatial or factual queries can be made. Thus, geographical analyses can be carried out in the directly perceived (experienced) space or, if required, abstract topics can be displayed cartographically.

However, the meaningful implementation of spatial visualizations for a specific task depends on consideration of the AR technology and the form of data provision used. Although handheld devices and HMDs can be provided with very detailed spatial information in combination with GIS data (Wang et al. 2019), limitations of the field of view can affect how this spatial information is perceived. As mentioned above, the user's field of view differs from the camera perspective of handheld devices used to visualize AR elements, and the field of view covered by the reflective lenses used to display AR elements with an HMD is usually smaller than the natural field of view. These discrepancies need to be considered by developers of AR applications, as AR elements are only perceived if the handheld device or HMD is oriented in their general direction. Consequently, if users do not systematically scan their surroundings, potentially important AR elements might be missed. AR creators must therefore accurately predict where users will look and find ways to draw visual attention to important AR elements (through additional AR elements such as arrows, icons, or text), as illustrated by the sketch below. Otherwise, they have to accept that important AR elements needed for successful task completion might be missed.
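
As a sketch of how such a check might look, the following illustrative Python snippet (all positions, directions and the FOV threshold are hypothetical) tests whether an AR element lies within the device's field of view and triggers a directional hint if it does not:

```python
import numpy as np

def in_fov(view_dir, element_pos, device_pos, fov_deg):
    """Check whether an AR element falls inside the device's field of view;
    if not, a hint (arrow, icon, sound) should point the user towards it."""
    to_element = element_pos - device_pos
    to_element = to_element / np.linalg.norm(to_element)
    view_dir = view_dir / np.linalg.norm(view_dir)
    angle = np.degrees(np.arccos(np.clip(np.dot(view_dir, to_element), -1, 1)))
    return angle <= fov_deg / 2

device = np.array([0.0, 1.6, 0.0])     # device position (eye height)
gaze = np.array([0.0, 0.0, -1.0])      # current viewing direction
waypoint = np.array([4.0, 1.0, -2.0])  # important AR navigation cue

if not in_fov(gaze, waypoint, device, fov_deg=35.0):  # narrow HMD FOV
    print("cue outside FOV: render an edge-of-screen arrow towards it")
```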

Another limitation for the use of AR as a means to communicate spatial information is caused by the tracking system used by an AR-capable device. Spatial tracking is essential for the accurate positioning of AR elements: if the spatial orientation of the device changes, the visualization of the AR elements needs to be adjusted, which gives the user the feeling that the AR elements are fixed at a specific location in space. To accurately register changes in the spatial position and orientation of the AR-capable device, one or several spatial elements acting as reference points need to be registered and tracked. Consequently, if no suitable reference points are available within the range of the device's sensors, tracking the device's movements will fail and the displayed AR elements cannot be visualized in their correct position and orientation. This limits the application of AR in large spaces with few fixed spatial elements, especially in outdoor scenarios. Therefore, such applications will initially be limited to small-scale (indoor) environments.

However, the described issues, such as the limited field of view and the restricted tracking range, could become less relevant with the ongoing improvement of AR hardware. Advantages for efficient information transfer in cartographic tasks can then be expected.