Exploring Map Orientation with Interactive Audio-Tactile Maps
Multi-modal interactive maps can provide a useful aid to navigation for blind people. We have been experimenting with such maps that present information in a tactile and auditory (speech) form, but with the novel feature that the map’s orientation is tracked. This means that the map can be explored in a more ego-centric manner, as favoured by blind people. Results are encouraging, in that scores in an orientation task are better with the use of map rotation.
Keywords: Multi-modal maps · Blind people · Tactile · Speech · Rotation
When travelling to an unfamiliar location people may prepare in a number of ways. Sighted people will often rely on their vision – supported by visual cues such as signs – to navigate on arrival. They may also prepare in advance by consulting a map (increasingly this may be on-line). Maps designed for sighted users are overloaded with information, combining a range of visual encodings: geometrical, symbolic, textual, colour, etc. The important point is that users can cope with this complexity because of the power of the visual channel. They can – literally – focus on the information that is relevant to their current task and ignore that which is not.
Blind people can also benefit from the use of (non-visual) maps. Indeed, they may have more of a need to do so, since it is not possible for them to use vision to navigate in situ. In creating maps for blind users one encounters what is known as the ‘bandwidth problem’: the non-visual senses simply do not have the same capacity to carry so much information that can be filtered as necessary. In this situation it is common to combine as many of the non-visual senses as possible in the form of multimodal interaction, with the objective that the whole may be greater than the sum of the parts. Thus the concept of the multimodal map has been developed (e.g. [1, 6, 10]). This paper presents the results of experiments with multimodal maps that combine haptic interaction with auditory output, introducing the novel aspect of using the map’s orientation as an additional interactive element.
The primary objective of the research is to find out whether blind people can be better prepared to navigate in unfamiliar environments, given this form of multimodal map. A secondary aspiration is to find out more about the kinds of internal representations such people use.
As argued above, maps can be very powerful tools. One role for (non-visual) maps is for planning. That is to say that before arriving at a new location the person may prepare, becoming familiar with the area by exploring a map of it. Increasingly it is the case that when in the location other technology (usually GPS-based) will be used to assist guidance, but there is still a role for the map to be used as a means of learning about the layout of the place in planning a visit.
It is relatively easy to manufacture tactile maps, using technology such as swell-paper, although designing such maps to be usable is quite difficult, given the limitations of the haptic senses. One approach to overcoming some of those limitations is to make the maps interactive and to provide other non-visual cues. Thus, a tactile map can be mounted on a touch-sensitive pad. This can track the location of the user’s finger as well as haptic gestures that they may make, such as pressing on the map. Appropriate auditory feedback can then be provided, such as speech or non-speech sounds.
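The core of such an interactive map can be thought of as a lookup from touch coordinates to spoken labels. The following is a minimal sketch of that idea; the region names, coordinates, and function names are hypothetical and not those of the actual system described in this paper:

```python
# Sketch of an audio-tactile map's interaction loop: a press on the
# touch pad is matched against labelled map regions, and the matching
# label is spoken. All map content here is illustrative only.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Region:
    label: str
    x: float   # left edge of the bounding box (map units)
    y: float   # bottom edge
    w: float   # width
    h: float   # height

    def contains(self, px: float, py: float) -> bool:
        return (self.x <= px <= self.x + self.w and
                self.y <= py <= self.y + self.h)

# Hypothetical map content
REGIONS = [
    Region("point A", 10, 10, 5, 5),
    Region("point B", 40, 10, 5, 5),
    Region("grey building", 20, 30, 15, 10),
]

def label_at(px: float, py: float) -> Optional[str]:
    """Return the label under the finger, or None for empty space."""
    for region in REGIONS:
        if region.contains(px, py):
            return region.label
    return None

def on_press(px: float, py: float, speak=print) -> None:
    """Handle a press gesture by speaking the label at that point."""
    label = label_at(px, py)
    if label is not None:
        speak(label)  # a real system would call a TTS engine here
```

In a rotatable map of the kind studied here, the tracked map orientation would additionally be applied to the touch coordinates before the lookup, so that labels remain attached to the correct physical locations as the map turns.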
It seems likely that such a map will generate a more useful representation in the user’s mind. For instance, the building represented by the grey rectangle in Figs. 1 and 2 will be to the left of the finger following the turn (2.4) – and will be to the user’s left if they walk that way in the real world. Moreover, there is evidence that this kind of approach is particularly appropriate for blind people, because they tend to use an ego-centric spatial frame of reference, as opposed to the external, allocentric one which sighted people tend to use.
The long-term aim of this work is to develop and test a fully-functional multimodal map system, incorporating tactile interaction with speech and non-speech audio output. However, the main focus of the experiments reported herein is the specific facility of map rotation. This builds on previous work on the role of map orientation for both sighted and blind people, as explained in the following section.
3 Experimental Precursors
Rossano and Warren found that blind participants were much more accurate in the aligned condition, achieving a correctness score of 85%.
Giudice et al. also carried out work based on these experiments. They found evidence to support a hypothesis that spatial images are stored in an amodal representation, in both sighted and blind people. Of interest in the context of this work, though, was the fact that their experiments included elements of rotation. However, it is not possible to make any direct comparison between their results and ours. Pielot et al. have also conducted experiments on auditory tangible maps with rotation. In their case they spatialized the audio feedback to match the orientation of a tangible avatar. In the experiments reported herein we did not spatialize the sounds, though we plan to do so in future developments.
4 Experimental Outline
The experiment was designed to address the following questions:
1. Are blind participants more accurate in pointing to a landmark when they have used a rotatable audio-tactile map than when they have used a static audio-tactile map?
2. Does their performance on this task correlate with their sense of direction?
3. What is their subjective preference between the Static and Rotation conditions?
Further data was collected which might shed some light on the results. For instance, is it possible to identify why people are particularly good or bad at the task and can we find something about the cognitive representations that people use in the task? This paper reports on the investigation of objective 1. Further data was collected addressing 2 and 3 which will be reported in future publications.
The method was based on that of Rossano and Warren. Tactile maps were produced, based on the guidelines in . Alphabetical labels were provided through synthetic speech: if the participant pressed on point B (Fig. 3), they would hear the label ‘B’. There were two conditions in the experiment: Static, in which it was not possible to rotate the map, and Rotation, in which it was. All participants experienced both conditions, but the order of presentation was counter-balanced (i.e. 50% of the participants undertook the Static condition followed by the Rotation condition, and vice versa).
Note that in the Rotation condition participants were not obliged to use rotation. Whether they did or not was noted and their results analyzed accordingly. One suggestion was that in the Rotation condition most participants would rotate the map into alignment, corresponding to the aligned condition in Rossano and Warren’s experiments, so that we might expect results similar to those of their aligned condition.
Each participant was briefed and gave consent for the experiment. They then verbally completed a pre-test questionnaire. The questionnaire was constructed in different sections to obtain information on their mobility experience, experience in using tactile maps, experience in using audio-tactile maps and demographics.
They were then introduced to the T4. They sat by the device and were able to explore a practice map. They were then given the experimental map in one of four orientations, followed by the task (e.g. ‘Imagine that you are standing on point B of the path, with point A directly in front of you. What is the direction to point C?’). In the Rotation condition they were then allowed to rotate the map as they wished.
When they felt they were ready they stood up and, guided by a handrail, walked to the centre of the room. They were given a beanbag in their hand, asked to point in the required direction and to drop the beanbag to mark the direction. The position of the beanbag was recorded by an overhead camera.
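Given the beanbag's floor position from the overhead camera, the pointing response can be scored as an angular error: the bearing from the participant's standing position to the beanbag, compared with the true bearing of the target landmark. The following is a sketch under the assumption that the camera yields 2-D floor coordinates; the function names are ours, not the actual analysis code:

```python
import math

def bearing(origin, point):
    """Bearing in degrees from origin to point, measured anticlockwise
    from the positive x-axis of the camera's floor coordinate frame."""
    return math.degrees(math.atan2(point[1] - origin[1],
                                   point[0] - origin[0]))

def pointing_error(origin, beanbag, target):
    """Signed angular error in degrees between the indicated and true
    directions, wrapped into the range [-180, 180)."""
    err = bearing(origin, beanbag) - bearing(origin, target)
    return (err + 180.0) % 360.0 - 180.0
```

The wrap-around step matters here: without it, a response a few degrees anticlockwise of a target behind the participant would be reported as an error of nearly 360° rather than a small negative one.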
Each participant completed the test under the current condition (Static or Rotation) for the same map presented in each of five orientations. They would then complete the tests under the other condition.
On completion of the tests, the participant was given a post-test interview to ascertain their preferences. Twelve blind or visually impaired people undertook the experiment. They were volunteers but received an Amazon voucher for £25 in compensation for their time.
The distribution of the pointing accuracy measures was inspected for all conditions. The distribution was approximately normal with a mean accuracy error of 12.5° and a standard deviation of 40.6°. However, there were three outliers in which the pointing accuracy error was close to 180°, which suggested that the participants were making reflection errors, confusing a point in front of them with a point behind. The three instances came from three different participants, two in the Static condition and one in the Rotation condition, so there was no particular pattern to these errors. Including these measures in the analyses would have skewed the results substantially, so they were removed from the dataset. In order to use the rest of the data from the three participants, these measures were replaced by the mean pointing accuracy for the condition (Static or Rotation) in which the error occurred. This resulted in an overall distribution of pointing accuracy measures with a mean of 9.2° and a standard deviation of 32.2°.
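The outlier treatment described above amounts to mean substitution within the affected condition. As a sketch (the threshold of 150° and all data values here are illustrative, not the experimental figures):

```python
# Sketch of the reflection-error treatment: absolute pointing errors
# near 180 degrees are treated as reflections and replaced by the mean
# of the remaining measures in the same condition.

def replace_reflection_errors(errors, threshold=150.0):
    """errors: absolute pointing errors (degrees) for one condition.
    Returns a cleaned list in which near-180-degree outliers are
    replaced by the mean of the non-outlying measures."""
    kept = [e for e in errors if abs(e) < threshold]
    if not kept:            # nothing to substitute from
        return list(errors)
    mean_kept = sum(kept) / len(kept)
    return [e if abs(e) < threshold else mean_kept for e in errors]
```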
In the Rotation condition participants were not required to rotate the map. In practice 3 participants did not always use rotation when they might have, in a total of 11 instances. In those instances their data were treated with the Static data.
Participants in the Rotation condition were more accurate in the pointing task than in the Static condition. However, the difference fails to reach significance at the 0.05 level. It is planned to continue the experiments with a larger number of participants to see whether a significant result will be achieved.
This experiment was based on those reported in . Rossano and Warren did not allow rotation as such, but it was assumed that when in our experiments participants rotated the map into alignment, their results would be similar to those in Rossano and Warren’s aligned condition. However, it is difficult to make any comparison because of the way Rossano and Warren treated their data. They classified responses as either correct or not, depending on whether the response was within ±30° of the target. A superficial examination does appear to show differences: participants in our experiment scored more highly in all conditions. This warrants further investigation.
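For such a comparison, our continuous angular errors must first be collapsed into Rossano and Warren's binary measure. A sketch of that conversion, using the ±30° window described above (the function name is ours):

```python
def correctness_score(errors, window=30.0):
    """Proportion of responses within +/-window degrees of the target,
    following Rossano and Warren's correct/incorrect classification."""
    if not errors:
        return 0.0
    correct = sum(1 for e in errors if abs(e) <= window)
    return correct / len(errors)
```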
8 Further Work
We also collected further data in this experiment, relating to participants’ sense of direction, their ability to recreate the maps they used in tactile form and their subjective preferences. These have yet to be analyzed. Specifically, where participants have made errors, it may be possible to use their map reconstructions to identify the nature of the error, and thereby to see whether it is possible to promote less error-prone cognitive maps. Furthermore, more reliable results will be obtained if we can run more participants.
It has also been mentioned that this is only a small part of the development of the T4. In the long run we plan to develop it into a multimodal tool. We plan to experiment with the use of auditory environmental cues to help users in navigation tasks. Given that the results in these simple experiments on rotation are encouraging, we would experiment with spatialized sounds that are linked to the map orientation.
It is envisaged that a map of this form would be used in preparation for visiting a new, unfamiliar location. Clearly experiments to see whether the learning from such a map transfers to the real world would be called for.
We have experimented with a multimodal, non-visual map which has the novel facility that it can be rotated to match orientation in the real world. Our results (while not achieving statistical significance) suggest that errors in an orientation/pointing task are smaller when map rotation is used than in a static condition. This gives encouragement to further develop the system.
1. Brock, A., et al.: Usage of multimodal maps for blind people: why and how. In: ACM International Conference on Interactive Tabletops and Surfaces (2010)
2. Hamid, N.N.A., Edwards, A.D.N.: Facilitating route learning using interactive audio-tactile maps for blind and visually impaired people. In: Extended Abstracts CHI 2013, pp. 37–42. ACM Press (2013)
6. Paladugu, D.A., Wang, Z., Li, B.: On presenting audio-tactile maps to visually impaired users for getting directions. In: Extended Abstracts CHI 2010, pp. 3955–3960. ACM (2010)
10. Wang, W., Li, B., Hedgpeth, T., Haven, T.: Instant tactile-audio map: enabling access to digital maps for people with visual impairment. In: ACM SIGACCESS Conference on Computers and Accessibility (ASSETS) (2009)
13. Eriksson, Y., Strucel, M.: A Guide to the Production of Tactile Graphics on Swellpaper. The Swedish Library of Talking Books and Braille, Stockholm (1995)