
Big hover or big brother? Public attitudes about drone usage in domestic policing activities

Abstract

Unmanned aerial systems (that is, UAS or drones) have been increasingly proposed and used by federal and state law enforcement agencies as an evolving technology for general surveillance, crime detection and criminal investigations. However, the use of UAS technology in general, and within the context of domestic policing activities in particular, raises serious concerns about personal privacy and the greater intrusion of new forms of ‘big brother’ surveillance in people’s daily lives. On the basis of a national survey, the current study provides empirical evidence on public attitudes about UAS usage in various policing activities. Socio-demographic differences in public support for drone usage in this context are also examined. Our general findings of context-specific variability in public support for UAS usage in policing operations are discussed in terms of their implications for developing public policy.


Notes

  1. The poll data reported are based on studies that used methodological approaches designed to enhance the reliability and validity of the results. Brown and Newport (2013) report the results of a Gallup poll conducted in March 2013, in which telephone interviews (50 per cent landline, 50 per cent cell phone) were conducted with a random sample (using random selection of listed landline numbers and random-digit dialing of cell phone numbers) of 1020 adults living in all 50 US states and the District of Columbia. The sample was weighted to match national demographic patterns. The margin of sampling error was ±3 per cent. The Fox News (2013) poll is based on interviews with 1010 registered voters conducted by telephone (702 landline, 308 cell phone). Respondents were randomly selected using a probability-proportionate-to-size method, which involves selecting a number of phone numbers for each state that is proportionate to the number of voters in that state. The margin of sampling error was ±3 per cent. The Pew Research Center (2014) poll involved multi-stage cluster sampling stratified by region and urbanity within each of the 44 nations examined in the poll. The sample size in each nation was typically 1000, though higher in several nations (Pakistan – 1203; Ukraine – 1659; India – 2464; China – 2190). Face-to-face interviews of adults were conducted in all but seven nations, where telephone interviews were used. The margin of error ranged from ±3.5 to ±4.5 per cent (the highest error rates were obtained in El Salvador, Jordan, Turkey and Vietnam).
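     For reference, the reported ±3 per cent margins are consistent with the standard large-sample approximation for a simple random sample at the 95 per cent confidence level, assuming the most conservative proportion p = 0.5 (the polling organizations’ exact figures also reflect weighting and design effects):

     \[
     \text{MoE} = z\sqrt{\frac{p(1-p)}{n}} = 1.96\sqrt{\frac{0.5 \times 0.5}{1020}} \approx 0.031 \approx \pm 3\ \text{per cent}.
     \]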

  2. As was the case with the other poll studies previously discussed in this article, the polls in this section used commonly practiced methodological techniques to enhance the reliability and validity of results. The Monmouth University (2013) poll was based on a national sample of 1012 respondents contacted via telephone (708 landline, 304 cell phone), using data weighting and producing a sampling error rate of ±3.1 per cent. Less information was available to determine the methodology of the Reuters poll; however, the sample for that poll consisted of 2405 adults and had a sampling error of ±2.3 per cent.

  3. The firms used various recruitment strategies to develop sampling frames. For example, Survey Monkey contacts individuals who have previously completed a web-based survey on its site to create a panel of respondents, whereas Qualtrics outsources participant recruitment to other firms. Mechanical Turk (owned by Amazon.com) creates a sampling frame through its labor workforce, which is composed of more than 500 000 individuals (Paolacci and Chandler, 2014). Systematic research evaluating the sampling frame panels provided by Survey Monkey and Qualtrics is not available. However, published studies examining the representativeness of samples generated through Mechanical Turk indicate that the demographic profile of Mechanical Turk’s samples is ‘at least as representative of the US population’ and ‘at least as diverse and more representative of non-college populations’ than those of typical Internet and traditional samples (Paolacci et al, 2010, p. 414; Buhrmester et al, 2011, p. 5).

  4. There were some basic socio-demographic differences in respondents across the sampling platforms. For example, the Survey Monkey sample included participants with higher education and income levels, and Mechanical Turk yielded a sample that was considerably younger than both the Survey Monkey and Qualtrics samples. Combining the results from the different sampling platforms produces a composite sample that aligns more closely with estimates of the characteristics of the adult population provided by US census data.
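     To illustrate how a composite sample can be benchmarked against census estimates, the following is a minimal sketch in Python using pandas. All variable names, category labels and target proportions here are hypothetical placeholders, not the study’s data; the sketch simply pools the platform samples and compares each sample’s demographic distribution with census-style targets.

     ```python
     import pandas as pd

     # Illustrative respondent-level data from each sampling platform
     # (columns and values are hypothetical, not the study's data).
     survey_monkey = pd.DataFrame({"age_group": ["18-34", "35-54", "55+"] * 10})
     qualtrics     = pd.DataFrame({"age_group": ["18-34", "35-54", "55+"] * 10})
     mturk         = pd.DataFrame({"age_group": ["18-34"] * 20 + ["35-54"] * 8 + ["55+"] * 2})

     # Pool the platform samples into a single composite sample.
     composite = pd.concat(
         [survey_monkey.assign(platform="SurveyMonkey"),
          qualtrics.assign(platform="Qualtrics"),
          mturk.assign(platform="MTurk")],
         ignore_index=True,
     )

     # Hypothetical census benchmarks for the adult population (proportions).
     census_targets = pd.Series({"18-34": 0.30, "35-54": 0.34, "55+": 0.36})

     # Compare each platform sample and the composite against the benchmarks.
     for name, frame in [("SurveyMonkey", survey_monkey), ("Qualtrics", qualtrics),
                         ("MTurk", mturk), ("Composite", composite)]:
         observed = frame["age_group"].value_counts(normalize=True)
         gap = (observed - census_targets).abs().sum()  # total absolute deviation
         print(f"{name:12s} deviation from census targets: {gap:.3f}")
     ```

     In practice, weights derived from such comparisons (for example, through poststratification) could also be applied, although this note describes only the pooling of the platform samples.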

  5. Ordinary least squares (OLS) regression was used to examine participant responses collapsed across all police drone use areas, using the composite index, which was a continuous variable ranging from 0 to 5. Table 2 reports the unstandardized coefficients produced by these regression analyses. Coefficients indicate the size and direction (positive or negative) of a relationship; in addition, a coefficient indicates the extent to which the dependent variable is expected to change when the independent variable increases by one unit. Additional regression analyses were used to examine support for specific areas of police drone use. As previously mentioned, variables indicating support for these activities were created through dummy coding (1=support, 0=oppose/unsure). Because these dependent variables were dichotomous, logistic regression analyses were used with these data. Logistic regression produces odds ratios rather than unstandardized coefficients, and these are interpreted differently. Odds ratios indicate how changes in categories of the independent variable affect the likelihood of being in one category of the dependent variable rather than the other. For example, the odds ratio for the impact of a conservative political view on support for drone use in international border patrol (Model 1) is 2.70, which means that the odds of supporting drone use in this context are about 2.7 times higher for conservatives than for liberals.
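     As an illustration of the two modelling strategies described above, the following is a minimal sketch using Python’s statsmodels formula interface. The data, variable names and predictors are hypothetical placeholders (the study’s actual models include the full set of socio-demographic predictors reported in Table 2); the sketch fits an OLS model to a 0–5 composite index and a logistic model to a dichotomous support indicator, exponentiating the logit coefficients to obtain odds ratios.

     ```python
     import numpy as np
     import pandas as pd
     import statsmodels.formula.api as smf

     rng = np.random.default_rng(0)
     n = 500

     # Hypothetical respondent data: a 0-5 composite support index, a dichotomous
     # indicator of support for one specific drone-use area, and two predictors.
     df = pd.DataFrame({
         "composite_support": rng.integers(0, 6, n),   # 0-5 composite index
         "support_border":    rng.integers(0, 2, n),   # 1 = support, 0 = oppose/unsure
         "conservative":      rng.integers(0, 2, n),   # 1 = conservative, 0 = otherwise
         "age":               rng.integers(18, 80, n),
     })

     # OLS for the continuous composite index: each coefficient gives the expected
     # change in the index for a one-unit increase in that predictor.
     ols_model = smf.ols("composite_support ~ conservative + age", data=df).fit()
     print(ols_model.params)

     # Logistic regression for the dichotomous outcome: exponentiated coefficients
     # are odds ratios, e.g. an odds ratio of 2.70 for 'conservative' would mean the
     # odds of support are 2.7 times higher for conservatives than for others.
     logit_model = smf.logit("support_border ~ conservative + age", data=df).fit(disp=False)
     print(np.exp(logit_model.params))  # odds ratios
     ```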

References


Author information


Corresponding author

Correspondence to Mari Sakiyama.


About this article


Cite this article

Sakiyama, M., Miethe, T., Lieberman, J. et al. Big hover or big brother? Public attitudes about drone usage in domestic policing activities. Secur J 30, 1027–1044 (2017). https://doi.org/10.1057/sj.2016.3


  • DOI: https://doi.org/10.1057/sj.2016.3

Keywords

  • drones
  • UAS
  • policing
  • surveillance
  • privacy