Smartphones allow users to easily access and share valuable and sensitive data digitally (e.g., banking, intellectual property). This access is supported by a plethora of mobile applications (apps) available for download from several official app stores (e.g., Apple's App Store, Android's Google Play). Apps are popular because they are perceived as useful (Mylonas et al. 2013). Unfortunately, approximately one-third of Android apps are over-privileged (Felt et al. 2011), and in some cases over-privileged apps threaten the security of sensitive data. When making a selection, users rely on the information that is readily available on the description page of the mobile app: ratings, reviews, cost, and number of downloads become the main criteria for an install decision (Felt et al. 2012; Kelley et al. 2012; Kelley et al. 2013). Rarely do users consider company and personal privacy violations when installing an app; Mylonas et al. (2013) found that privacy was ranked near the bottom of the app decision criteria.

Most smartphone users are unaware of the severity of the risks associated with an app installation because they rely on an external entity for protection (i.e., the app store) (Mylonas et al. 2013). Part of the reason is that the majority of smartphone users are not security experts (Mylonas et al. 2013), so they are not equipped with the right mental models to understand how their actions affect their privacy. Privacy self-management is also not considered to be their primary task (Pfleeger and Caputo, 2012). Nevertheless, an important part of the app store experience requires users to consent to a level of data access that may carry detrimental consequences, such as identity theft. Several studies have demonstrated that consent-based permission systems contain inherent vulnerabilities unbeknownst to smartphone users (Balebako et al. 2014; Barrera et al. 2010; Felt et al. 2011). Attempts to incorporate warning messages have failed because users will act on privacy-related information even when they do not fully comprehend its meaning (Felt et al. 2012).
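To make "permission requirements" concrete, the minimal Kotlin sketch below is our own illustration, not part of any cited study; the helper name is an assumption. It uses Android's PackageManager to list the permissions an installed app requests and flag those the platform classifies as dangerous, the protection level that guards sensitive user data.

```kotlin
import android.content.Context
import android.content.pm.PackageManager
import android.content.pm.PermissionInfo

// Return the subset of an installed app's requested permissions that Android
// classifies as "dangerous" (the protection level guarding sensitive data).
// Hypothetical helper for illustration only.
fun flagDangerousPermissions(context: Context, packageName: String): List<String> {
    val pm = context.packageManager
    val requested = pm.getPackageInfo(packageName, PackageManager.GET_PERMISSIONS)
        .requestedPermissions ?: return emptyList()
    return requested.filter { name ->
        try {
            val info = pm.getPermissionInfo(name, 0)
            // The base protection level sits in the low bits of protectionLevel.
            (info.protectionLevel and PermissionInfo.PROTECTION_MASK_BASE) ==
                PermissionInfo.PROTECTION_DANGEROUS
        } catch (e: PackageManager.NameNotFoundException) {
            false // permission not defined on this device; do not flag it
        }
    }
}
```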
A growing concern in the field of mobile security within the enterprise space is that the majority of smartphone users do not exhibit the ability to maintain their privacy, creating increased risk for themselves and their associated organizations (Solove, 2013). Several factors come into play when examining the behaviors behind a smartphone user's lack of privacy self-management. Users dismiss or overlook privacy-related information because of technical jargon or because they have become habituated to its prevalence (Felt et al. 2012). Over time, smartphone users have been trained to ignore privacy policies, warning messages, consent dialogs, and permission request screens (Böhme and Köpsell, 2010; Chia et al. 2012; Kelley et al. 2012). Although risk communication could help raise awareness of the potential dangers of installing mobile apps, consent-based permission systems ought to be improved in a way that naturally encourages users to make informed decisions through more direct communication. Altering risky user behavior can be accomplished by communicating how the harm relates personally to users and to their organizations' Bring Your Own Device (BYOD) policies (Pfleeger and Caputo, 2012).
Users are simply not provided with the information needed to flag privacy concerns, and the task of maintaining awareness of personal and professional risk on a smartphone is becoming increasingly difficult. Contextual warning messages may therefore help convey relevant privacy and security information that transparently and effectively connects risk with permission requirements. As a caveat, attention to and comprehension of privacy information on a smartphone differ significantly from desktop use: 50% of users take no more than 8 seconds to read consent dialogs on websites (Böhme, 2010). The mobility and form factor of a smartphone therefore demand privacy-relevant information that is immediately recognizable and prompts users at the appropriate time, as they decide whether to install an app. Users cannot be overloaded with information that distracts them from moving forward, or the information will be ignored. Several studies have experimented with the timing and presentation of privacy information to motivate more secure behavior and prevent risk in other contexts (Akhawe and Felt, 2013; Egelman et al. 2009; Kelley et al. 2013). In this study, we explore the impact that warning messages, and the temporal location at which permission requirements are presented, have on users' ability to discern risky apps from safe ones.
1.1 Relevant Warning Messages
Routinely experiencing the same standard warning messages may, over time, lead users to misinterpret them as trustworthy because of their sheer familiarity (Böhme and Köpsell, 2010). Similarly, default settings or Calls to Action (CTAs) have an underlying influence on users' privacy decisions without their being aware of it (Solove, 2013). Current defaults do not provide the framing necessary for users to proceed with caution. According to Jou, Shanteau and Harris (1996), "framing is a form of manipulating the salience or accessibility of different aspects of information" (p. 9). We propose that warning messages should be dynamic to the security needs of a particular context and recognizable by the user. Figure 1 provides an example of an Android mobile app screen. As displayed in screen B, the interface provides a list of permission requirements without any visual indication that communicates risk or warns that these items could potentially violate the user's privacy. Users must read through the information, interpret the risk, and make their decisions accordingly, without any visual support. Choe et al. (2013) found that representing privacy-related information visually can influence decision making, specifically through color and symbols that resonate with common cultural experiences. Red has been used in privacy contexts to indicate conflicts between current settings and previous selections (Egelman et al. 2009). We explored the addition of warnings by highlighting risky permission requirements in red text and by placing a red stop sign to increase the likelihood that users will stop and attend to the permission requirements. We did not delineate the level or severity of risk per permission item; the level of risk has to be interpreted by the user. The warning message denotes permission requirements that violate the user's BYOD policy or personal privacy.
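As a rough illustration of this warning treatment, consider the Kotlin sketch below. It is our own sketch, not the study's implementation; the predicate, the blacklist source, and the use of a text glyph in place of a proper stop-sign drawable are all assumptions. It renders risky permission rows in red with a warning symbol and leaves safe rows in the default style.

```kotlin
import android.graphics.Color
import android.widget.TextView

// Hypothetical predicate: whether a permission violates the user's BYOD policy
// or personal privacy preferences. The blacklist's origin is assumed here.
fun isRiskyPermission(permission: String, byodBlacklist: Set<String>): Boolean =
    permission in byodBlacklist

// Style one permission row: risky items are drawn in red with a warning glyph
// so the user can spot them without reading the full list; safe items keep
// the default style. A production version would use a stop-sign drawable.
fun bindPermissionRow(row: TextView, permission: String, byodBlacklist: Set<String>) {
    if (isRiskyPermission(permission, byodBlacklist)) {
        row.text = "\u26D4 $permission" // red "no entry" glyph stands in for the stop sign
        row.setTextColor(Color.RED)
    } else {
        row.text = permission
        row.setTextColor(Color.BLACK)
    }
}
```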
1.2 Temporal Location of Permission Requirements
The timing of when privacy information is disclosed can nudge users towards installing a trustworthy or a compromising app (Kelley et al. 2013). If users are presented with indicators of increased risk after a decision has been made, they are more likely to disregard the new information (Egelman et al. 2009). Egelman et al. (2009) found that presenting privacy indicators on the search results page, before a user decides to proceed to a website, achieves higher levels of privacy in a shorter amount of time. Critically, once users make a decision, they are unlikely to reverse it or to spend extra time looking for alternatives (Akhawe and Felt, 2013). In effect, app stores are using a popular selling technique called low-balling to encourage the acceptance of uncomfortable risk: the persuasive method of offering a great deal (e.g., a useful app) and asking for explicit agreement (e.g., to install) without presenting the unpleasant costs until later (cf. Cialdini et al. 1978). The current Android app installation process, shown in Fig. 1, presents permission requirements only after a user has decided to install the app (see screen B). Prior to the install decision, the user is given only non-privacy-related criteria (see screen A). Once the user has made the install decision by tapping the Install button, screen B prompts them to "Accept" the required permissions. Note that screen C is hidden until the user taps an individual permission item on screen B for more details. The main CTAs on the first two screens (A and B) encourage users to install and then accept; nothing on the interface explicitly presents the binary choices to "Install" or "Not Install" on screen A and to "Accept" or "Not Accept" on screen B. In other words, users see the permission requirements on screen B only after deciding to install the app on screen A, and Kelley et al. (2013) found that users practically glossed over permission requirements presented after the install decision in the context of new apps. We propose to move the permission requirements from screen B to screen A to test which location facilitates safer choices and less risky behavior.
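The manipulation can be summarized schematically. The Kotlin sketch below is a hypothetical model of the two-screen flow in Fig. 1 (the type names, screen titles, and CTA labels are our assumptions); it shows the permission list moving from screen B to screen A under the before-install condition.

```kotlin
// Experimental manipulation: where in the install flow the permission list appears.
enum class TemporalLocation { BEFORE_INSTALL_DECISION, AFTER_INSTALL_DECISION }

// Hypothetical screen model for the two-screen flow in Fig. 1.
data class AppScreen(val title: String, val permissions: List<String>?, val cta: String)

// Build screens A and B for a given condition: in the BEFORE condition the
// permissions move onto screen A, next to ratings and reviews, so the user
// sees them prior to tapping Install; in the AFTER condition they remain on
// screen B, shown only once the install decision has been made.
fun buildInstallFlow(permissions: List<String>, condition: TemporalLocation): List<AppScreen> =
    when (condition) {
        TemporalLocation.BEFORE_INSTALL_DECISION -> listOf(
            AppScreen("App description", permissions, cta = "Install"),
            AppScreen("Confirm", permissions = null, cta = "Accept")
        )
        TemporalLocation.AFTER_INSTALL_DECISION -> listOf(
            AppScreen("App description", permissions = null, cta = "Install"),
            AppScreen("Permissions", permissions, cta = "Accept")
        )
    }
```

In the before-install condition the permission requirements sit alongside the ratings and reviews, so the unpleasant cost of the app is visible before the explicit agreement that low-balling exploits.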
1.3 Experiment Overview
Distinguishing malicious apps from safe ones is a difficult task, especially when there is no visual or contextual distinction between them. In this experiment, we explore the use of warning messages and the temporal location of permission requirements relative to the install decision. By presenting warnings prior to the install decision, we hope to encourage more secure decision making and increase the number of safe app installations. This should help users recall their BYOD policy and personal privacy preferences, minimizing risk and increasing attention to the consent-based permission system.