
1 Introduction

The use of mobile applications (apps) to improve community collaboration is increasing. Collective intelligence and crowdsourcing have become a common approach to problems that cannot be easily solved or that explicitly require human feedback [1]. Building on this concept, such apps can offer services, ideas or content.

In the context of Smart Cities, many citizen apps have been using collective intelligence to address urban problems. These applications come in multiple flavors and differ in their data sources, goals and the types of urban problem they tackle [2]. Thus, in order to contextualize community information, many of them are built on maps [3, 4], as is the case of Colab.re.

On Colab.re, citizens can identify and report urban problems, suggest improvements and rate public services using a map view. The usability of mobile map applications depends not only on data visualization, but also on the interaction with the map and the device [5]. Because mobile devices generally have small screens, gestures such as pinch to zoom and drag to pan are frequently used. In addition, interacting with maps usually requires exploring large spaces, which increases the use of these gestures [6].

Focusing on map interactions and their data visualization, the present research intends to better understand the user’s interaction with maps in this class of apps. To perform this user study, we selected a sample divided into users with and without previous knowledge of map-based applications. The main goal of this separation was to examine whether users could perform the gestures necessary to navigate the map and comprehend the information presented.

The remainder of this paper is organized as follows. In the next section, we discuss background on map-based interfaces and their interactions, such as zooming and panning. Following that, we describe our method, divided into application selection, planning, pilot test, observation, and data analysis. Thereafter we report the results of the user study. Finally, we draw conclusions from this work and point out further research on this topic.

2 Background

The idea of data visualization is to transform complex data sets into visual representations, presenting data in various forms with different interactions. Thus, the user can spot patterns, structures and features to understand the underlying data [7]. For mobile applications on multi-touch devices, data visualization also needs to cope with the available screen space and user interactions [8].

One of the most common multi-touch interactions is the pinch gesture. The pinch gesture dates from the early 1980s [9] and can be defined as a lateral motion expanding or contracting the finger spread [10]. This continuous two-finger action is called zooming, and it enables users to zoom in or out of an image or a map, for example.
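To make the gesture concrete, the following is a minimal Kotlin sketch of how a pinch can be recognized on Android with the platform's ScaleGestureDetector; the custom ZoomableView class and the scale limits are illustrative assumptions, not part of any app discussed here.

```kotlin
import android.content.Context
import android.view.MotionEvent
import android.view.ScaleGestureDetector
import android.view.View

// Sketch: recognizing the pinch (scale) gesture on a custom Android view.
class ZoomableView(context: Context) : View(context) {

    private var scale = 1.0f

    private val scaleDetector = ScaleGestureDetector(
        context,
        object : ScaleGestureDetector.SimpleOnScaleGestureListener() {
            override fun onScale(detector: ScaleGestureDetector): Boolean {
                // scaleFactor > 1 while the fingers spread apart (zoom in),
                // < 1 while they move together (zoom out).
                scale = (scale * detector.scaleFactor).coerceIn(0.5f, 5.0f)
                invalidate() // redraw the view at the new scale
                return true
            }
        }
    )

    override fun onTouchEvent(event: MotionEvent): Boolean {
        scaleDetector.onTouchEvent(event)
        return true
    }
}
```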

In the map context, the zooming functionality allows users to change the map scale. Along with zooming there is panning, the ability to reposition the map on the screen [11]. Panning is performed with hold-and-drag gestures in any direction, generally with one finger. Zooming and panning are used to navigate 2D spaces that are too large to be displayed within a single window.
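As an illustration of what zooming and panning mean programmatically, the sketch below uses the camera API of the Google Maps SDK for Android (assumed here because map apps such as Colab.re build their map views on Google Maps, as noted in the application selection section); the zoom level and pixel offsets are arbitrary example values.

```kotlin
import com.google.android.gms.maps.CameraUpdateFactory
import com.google.android.gms.maps.GoogleMap

// Sketch: zooming changes the map scale, panning repositions the visible area.
// `map` is an already-initialized GoogleMap instance (obtained via getMapAsync).
fun zoomAndPan(map: GoogleMap) {
    // Zoom in by one level, then jump to an explicit scale (~street level).
    map.animateCamera(CameraUpdateFactory.zoomIn())
    map.animateCamera(CameraUpdateFactory.zoomTo(15f))

    // Pan: shift the camera 200 px to the right and 100 px down on the screen.
    map.animateCamera(CameraUpdateFactory.scrollBy(200f, 100f))
}
```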

Beyond the pinch and drag interactions, there are some other available gestures that can be performed in map applications to accomplish zooming and panning. One of them is the Double-Tap gesture, which allows zooming in with only one hand by using a double tap with one finger. Another one, introduced by Google Maps, is Double-Tap-and-Hold + Drag. This gesture allows zoom adjustments with one finger, tapping twice, holding the screen the second time and then moving the finger upwards to zoom out, or downwards to zoom in. Furthermore, there is Two-Finger-Tap, used by Google Maps and Apple Maps, that allows zooming out by tapping the screen with two fingers [6].
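These gestures are normally provided by the map SDK itself. Purely as an illustration, the sketch below shows how a Double-Tap could be detected with Android's GestureDetector and translated into a zoom-in step; the helper function is hypothetical.

```kotlin
import android.content.Context
import android.view.GestureDetector
import android.view.MotionEvent
import com.google.android.gms.maps.CameraUpdateFactory
import com.google.android.gms.maps.GoogleMap

// Sketch: recognizing Double-Tap and mapping it to a one-level zoom-in.
// In practice the Maps SDK already implements Double-Tap,
// Double-Tap-and-Hold + Drag and Two-Finger-Tap internally.
fun buildDoubleTapDetector(context: Context, map: GoogleMap): GestureDetector =
    GestureDetector(context, object : GestureDetector.SimpleOnGestureListener() {
        override fun onDoubleTap(e: MotionEvent): Boolean {
            map.animateCamera(CameraUpdateFactory.zoomIn())
            return true
        }
    })
```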

Regarding the zooming and panning functionalities, Jokisch et al. [12] performed an empirical study whose main goal was to examine whether users could find and use gestures on a multi-touch device. They examined three gestures used to interact with a virtual globe: drag to pan, pinch to zoom, and tilt to change the inclination angle of the camera. To carry out the study they used the multi-touch version of Google Earth. The first main test was performed with adults aged between 20 and 30 with no background in computer science, and the second with children from a primary school. They found that many, but not all, adults knew intuitively how to operate the multi-touch device, even without experience with this technology. Children interacted faster and more frequently with the device, but had more difficulty finding the right gestures. Although this study focused on users with no technical background, it did not study older users.

3 Method

The user study was composed of the following phases: application selection, planning, pilot test, observation, and data analysis [13].

In the first phase, we investigated different apps related to spatial data visualization that allow the observation of functionalities such as zooming and panning. With respect to planning the study, we decided to investigate the zooming and panning functionalities in a map-based application and to observe users with or without previous knowledge of this type of system. We also defined the observation scenario, the tasks and a questionnaire to collect data about the users’ profiles and their impressions of the app. After planning, a pilot test was run with a 31-year-old female user, and the results of this observation were used to refine the questionnaires. In the observation phase, there were 24 participants. Of these, 50 % were familiar with map systems and the other 50 % had never used them.

3.1 Application Selection

In this stage, we explored different mobile applications that fit the Smart City concept and allowed spatial data visualization. The criteria used to select the app were popularity and location. Considering both criteria and the background discussed above, we chose the Colab.re application. Popularity was relevant because it relates to the amount of data available in the app; Colab.re is used by several city halls and is available on the Android and Apple platforms [14]. Regarding location, the app had to be usable in the city of Porto Alegre, where the study was performed, which is the case for Colab.re.

The purpose of Colab.re is to support city management through cooperation between citizens and city halls. Using it, citizens can identify and pinpoint urban problems, suggest improvements and evaluate solutions. The main advantage of problem identification is that it accelerates problem resolution by making the city hall aware of an issue [2]. These data form a collaborative map, containing pictures and the location of each issue, which supports the concerned authorities in taking appropriate action.

Citizens can use a smartphone to take a picture of a problem related to infrastructure, street cleaning, unlit lampposts, broken roads, litter, etc. The application automatically detects the location using the phone’s GPS and allows the user to categorize the problem and add comments. In addition, the user can track the progress of all reported issues through a map view and follow the issues of interest (Fig. 1). The city hall, in turn, can handle the problem by assigning it to the specific public agency, and can also inform citizens about the status of a reported problem. Once a problem is solved, it disappears from the map.

Fig. 1. Colab.re screenshots. From left to right: (1) main map view pointing out city problems; (2) a popup showing details about a selected problem; (3) details about a problem, containing a picture, category and street address
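The paragraph above describes how the app reads the phone’s GPS position and pins a reported problem on the map. The following is a minimal sketch of that flow, assuming the Google Play Services location API and the Google Maps SDK for Android; it is not Colab.re’s actual implementation, and the function name and example category are hypothetical.

```kotlin
import android.annotation.SuppressLint
import android.app.Activity
import com.google.android.gms.location.LocationServices
import com.google.android.gms.maps.GoogleMap
import com.google.android.gms.maps.model.LatLng
import com.google.android.gms.maps.model.MarkerOptions

// Sketch: read the device's last known GPS position and pin a reported
// problem on the map, roughly what a citizen-reporting app does when the
// user photographs an issue. Assumes location permission was already granted.
@SuppressLint("MissingPermission")
fun reportProblemAtCurrentLocation(activity: Activity, map: GoogleMap, category: String) {
    val locationClient = LocationServices.getFusedLocationProviderClient(activity)
    locationClient.lastLocation.addOnSuccessListener { location ->
        if (location != null) {
            val position = LatLng(location.latitude, location.longitude)
            map.addMarker(
                MarkerOptions()
                    .position(position)
                    .title(category)            // e.g. "Unlit lamppost" (illustrative)
                    .snippet("Reported by citizen")
            )
        }
    }
}
```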

According to Desouza and Bhagwatwar’s [2] classification scheme, Colab.re can be classified as follows: Problem Identification regarding its goal, User Feed related to its data source, Mobile Device and Web Based App related to its platform deployment, and National regarding its range. Furthermore, Colab.re uses Google Maps as a basis for its map view.

3.2 Planning

In order to observe the zooming and panning functionalities and the data visualization in the map-based application, we created a data collection tool consisting of a questionnaire to collect data about the participants’ profiles, a user scenario, and a questionnaire to collect their impressions of the app after use. In addition, an ethical protocol was presented to the participants, together with an informed consent form.

The sample was divided into people with previous knowledge of map-based applications, hereafter called experienced, and people with no prior knowledge of map-based apps, called beginners. The goal of this separation was to observe whether potential users could perform the gestures required to navigate the map and to identify data visualization issues. To guide the observation, a scenario and two main tasks were defined:

Scenario: You are in the city of Porto Alegre and someone tells you there is an app, called Colab.re, to help you check urban problems.

Task 1: You decide to go through the system to verify whether there is any problem near the place where you are.

Task 2: After checking the first problem, you decide to find another one near your house/work. Choosing any problem that catches your attention, you decide to verify all details about the issue, including its resolution/status/progress.

3.3 Pilot Test

In this stage, we validated the scenario by performing a pilot test. This preliminary study was conducted with a 31-year-old female user who had previous knowledge of map-based applications. The test was not recorded; however, it allowed us to evaluate the study’s feasibility and estimate the time needed for each test. The results of this observation were used to refine the scenario and the questionnaires.

3.4 Observation

The observations took place on the university campus, with a sample of 24 participants: 12 experienced and 12 beginners. Experienced participants’ ages ranged from 24 to 44 years old, and beginners’ ages ranged from 30 to 78. Seven experienced participants were male and five were female; among beginners, five were male and seven were female.

The participants were asked whether they wanted to take part in a short user study focused on evaluating a mobile application. They were then invited to answer a few questions about their profile and their experience with map-based applications. Afterwards, they were given a short overview of Colab.re and the defined scenario with its tasks, and were observed while using the app. After the observation, the interviewer asked some questions about the participants’ experience and feelings during the session. All observations were recorded for further analysis; one interviewer monitored the sessions and another recorded the videos. By analyzing the recorded videos and the notes made during the study, it was possible to identify several issues.

3.5 Data Analysis

Charts were generated from the questionnaire results and used for quantitative data analysis. These data were used to compare the level of difficulty between the first and second tasks of the scenario and to relate it to the users’ profiles. In addition, all participant comments were reviewed to find similar answers. Finally, a detailed video analysis enabled us to refine the results.

4 Results

After the data and video analysis, we could compare the information from people with and without prior knowledge of map-based applications. Moreover, we highlight some issues that came up during the observations and became clear during the video analysis.

All experienced participants could use the zooming and panning functionalities without great effort. However, not all beginners discovered the gestures needed to zoom and pan. Two participants could not find the drag gesture to pan the map and instead tried to navigate by tapping the problem pins, which centers the map on the tapped pin. The same two participants also could not find the pinch gesture and could not zoom, completing the tasks with the map at the default zoom level. In total, four participants could not find the pinch gesture or any other gesture to zoom. One participant found only the pinch to zoom in and, before performing the gesture, asked whether he/she could “enlarge the map”, referring to zooming in. As an alternative to pinching to zoom out, this participant used the Zoom Control (minus button) located in the bottom right corner of the map (Fig. 2-a).

Fig. 2. Observation: (a) use of zoom control; (b) use of index and middle fingers to zoom

The Zoom Control, a default control from Google Maps, was used by only three participants: two from the experienced and one from the beginners’ group. Experienced participants performed pinch in various ways: besides the most commonly used fingers, thumb and index, three participants used two thumbs and two participants used the index and middle fingers to zoom (Fig. 2-b). However, experienced participants did not use any other gestures to zoom, while two beginners performed Double-Tap to zoom in while trying to find possible gestures. No participant from either group used other gestures such as Double-Tap-and-Hold + Drag or Two-Finger-Tap.
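For reference, the Zoom Control is a built-in widget of the Google Maps SDK rather than a Colab.re-specific element. The snippet below is a minimal sketch, assuming the Maps SDK for Android, of how this control is typically enabled; it illustrates the SDK configuration and is not taken from Colab.re’s source code.

```kotlin
import com.google.android.gms.maps.GoogleMap

// Sketch: the on-screen +/- Zoom Control is a built-in Maps SDK widget that
// only needs to be enabled; pinch and double-tap gestures are on by default.
fun enableZoomControls(map: GoogleMap) {
    map.uiSettings.isZoomControlsEnabled = true
    map.uiSettings.isZoomGesturesEnabled = true
}
```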

During the execution of the first task, three main issues stood out. The most cited was the icons’ size, followed by the lack of an auto-locate button and the pin color, respectively. Regarding icon size, ten beginners and four experienced participants reported that the icons inside the pins were very small. Comments such as “I cannot see the icon very well even when there is 100 % of zoom in” were corroborated during the video analysis, when participants kept zooming in, trying to magnify the icons without success. In addition, six beginner participants reported problems understanding what some icons represented. One participant said the lamppost icon, represented by a bulb, symbolized an idea to him.

Another issue was the lack of an auto-locate button, reported by five participants right after they started the task. One participant stated that he was not good with maps and that the application could do the “difficult work” for him; nevertheless, he was able to pan and zoom to complete the task. Finally, as mentioned before, the third highlighted issue was the pin color. The application uses a green pin to mark all reported problems. Four participants said that green does not remind them of a problem and that perhaps the pin should be red to better symbolize it.

As shown in Fig. 3, two experienced participants considered it hard or very hard to find their location in the first task. However, both mentioned the auto-locate button and said they were not good with geolocation. Meanwhile, most experienced participants considered it easy or very easy to find their location. All beginners considered it easy or moderate to find their location on the map, despite the fact that some of them could not find some gestures.

Fig. 3. Comparison between beginners and experienced participants related to the first task

In the second task, participants had to locate another problem, tap on the overview popup, and navigate to a different screen to see its details (Fig. 1). Three beginner participants did not notice the popup even after more than three attempts, and the interviewer then indicated where they should tap to proceed with the task. Nevertheless, in this task participants were able to locate themselves on the map more easily than in the first one. Most participants considered it easy or very easy to use the map in the second task (Fig. 4).

Fig. 4. Comparison between beginners and experienced participants related to the second task

The oldest beginner participant, aged 78, considered it easy to interact with the map in both tasks. He said that he did not know how to handle the smartphone properly and that whenever he needed anything, he asked his grandson. However, when he started interacting with the map, his first question was “Can I enlarge that?”, and soon after he was performing pinch to zoom, asking “Is it that way?” while already zooming in. After the observation, when asked why he performed that gesture, he stated that he had seen his grandson interacting with the device that way.

5 Conclusions and Future Work

The user study shows that not all beginners could easily discover the gestures needed to interact with the map. However, beginners used a greater variety of gestures, performing Double-Tap in addition to pinch to zoom. Some of them stated that they had learned those gestures by watching other people interact with their multi-touch devices. Pinch seems to be the most common gesture to zoom among experienced participants, and none of them used other gestures during the observations.

The lack of features such as an auto-locate button was mentioned especially by experienced participants. They also reported issues concerning the icons, although most icon-related reports came from beginner participants. The relation between participants’ age and icon size, as well as icon comprehension, can be further investigated.

The present study focused on data visualization in map-based applications, investigating the gestures used to interact with a map. The main idea was to observe whether potential users could perform the gestures needed to navigate the map and understand the information presented. As future work, other map-based applications can be evaluated in user observations, also covering different age ranges.