In the near future, humans and robots will be interacting and working together. Yet it is not clear how humans en masse will welcome these new companions, nor what kinds of personality traits, socioeconomic factors, or experiences influence these attitudes. Social scientists wishing to understand the general socioeconomic and cultural changes currently taking place need a way of measuring attitudes towards robots and robotization. In this paper, we present a new instrument for social survey research with which to study individual differences in attitudes towards robots. Our scale differentiates between personal-level and societal-level attitudes and has separate subscales for positive and negative attitudes, thus consisting of four dimensions.
An attitude is a psychological tendency that is expressed by evaluating a particular entity with some degree of favour or disfavour . Ajzen and Fishbein  described attitude as a predisposition to respond favourably or unfavourably to objects in the world. Implicit in this viewpoint is the notion of evaluation, where individuals (perhaps unconsciously) rate their feelings toward an object or event on a number of dimensions such as good–bad, harmful–beneficial, pleasant–unpleasant, and likeable–dislikeable, and together these evaluations drive behaviour .
Attitudes Towards Robots
Recently, a systematic review by Naneva et al.  showed that between the years 2005 and 2019, at least 97 papers measured attitudes towards robots. They categorized the measurements as Affective attitudes (feelings towards robots), Cognitive attitudes (thoughts about robots), General attitudes (a mix of cognitive and affective attitudes), and, as specific cases, Acceptance (intention to use), Anxiety, or Trust towards robots. Most of the studies examined how people felt about interacting with one specific robot in a real-world situation, using tools like the Almere Model of robot acceptance , the Godspeed Questionnaire  and the Unified Theory of Acceptance and Use of Technology (UTAUT; ). These types of studies can be very useful for design and engineering approaches, marketing, and for predicting the effects of implementing a specific robot in a specific work environment, but the results from such specialized studies cannot be extrapolated into wider societal impacts and expectations. Also, as these measures ask subjects to evaluate their experience with a physical robot, they cannot be used in online large-scale general surveys. Of the studies measuring attitudes towards robots in general, Anxiety was predominantly assessed via the Robot Anxiety Scale (RAS; ), General attitudes were almost exclusively measured via non-validated self-report measures, and Cognitive and Affective attitudes via either non-validated self-report measures or the Negative Attitudes towards Robots Scale (NARS, [9, 10]): NARS subscales 1 (interaction with robots) and 3 (emotions in interaction with robots) for Affective attitudes, and subscale 2 (social influence of robots) for Cognitive attitudes.
Similarly, Krägeloh et al.  found only six validated questionnaires measuring the acceptability of social robots, three of which can be used to measure attitudes towards robots in general: NARS; the Frankenstein Syndrome Questionnaire , which measures acceptance of humanoid robots; and the Multi-dimensional Robot Attitude Scale , which has 12 facets (Familiarity, Interest, Negative attitude, Self-efficacy, Appearance, Utility, Cost, Variety, Control, Social support, Operation and Environmental fit) and is mainly intended for gauging the needs and wishes of potential users of domestic robots (such as robotic vacuum cleaners). There is also the Robot Perception Scale , which consists of two subscales (general attitudes toward robots and attitudes toward human–robot similarity and attractiveness).
In summary, the NARS measures only negative attitudes, the Frankenstein Syndrome Questionnaire is only applicable to humanoid robots, the Multi-dimensional Robot Attitude Scale is focused on surveying the needs of buyers, and the Robot Perception Scale compresses positive and negative attitudes towards robots onto a single dimension. Thus, we can see that while tools to measure fears and anxieties about robots exist, there is a definite lack of compact tools to measure positive attitudes like hopes and expectations. It is then no wonder that the Special Eurobarometer  in 2012 ended up basing most of its socioeconomic and demographic analyses on the single question: “Generally speaking, do you have a very positive, fairly positive, fairly negative or very negative view of robots?”.
General Attitudes Towards Technologies
Previous research on attitudes towards technology in general—or computers and information technology specifically—shows that attitudes often are multidimensional. At least general positive (hopes, expected benefits) and negative (fears, expected risks) attitudes can be measured independently . For example, Edison and Geissler  showed that an interest in playing with technology (positive attitude on a personal level) does not necessarily imply low fears towards how technologies can impact societies (negative attitude on a societal level).
Fears and hopes are not two ends of a single continuum, but rather two separate constructs. It is entirely possible for someone to have both high hopes and high fears of something (for example, many people accept that cars make moving around much faster and easier and simultaneously know that car accidents are often lethal and exhaust gases contribute to global warming). Similarly, someone can simply have no interest in something, thus having both low hopes and low fears on the subject. A scale that has fear at one end and hope at the other fails to differentiate between subjects who have both high hopes and fears and those who have both low hopes and fears. Hopes and fears are not a zero-sum game; some behaviours are predicted solely by fears, some by hopes, and some by both.
There is also a distinction between personal and societal levels of attitudes towards robots. At a personal level, hopes and fears are felt as innate, visceral reactions. One simply likes playing with a robot or shudders at the thought of touching one, without a need to rationalize the feeling [3, 17]. At a societal level, we can worry about robots replacing humans at the workplace, thus creating unemployment, or hope that increasingly smart automatic driving systems will result in fewer traffic accidents, thus easing the burden on health care systems. Societal-level hopes and fears are based at least partly on information received from outside the self, i.e., learned [1, 18]. Thus, it is reasonable to think that societal and personal attitudes should be measured as separate constructs, although the distinction is by no means clear-cut. In addition to attitudes, experience with a subject influences the way one sees it, typically easing the worst fears but also dampening the highest hopes (for an example on robots in elderly care, see ).
To date, no instrument has been properly validated for studying multidimensional attitudes towards robots in general. Here, we attempt to fill this gap by designing and validating a scale that allows social scientists to study attitudes towards robots. Specifically, we aim to design a scale with four attitude factors:
Personal level positive (P+): comfort and enjoyment around robots
Personal level negative (P−): unease and anxiety around robots
Societal level positive (S+): rational hopes about robots in general
Societal level negative (S−): rational worries about robots in general
and a few criterion items measuring real life experience with and knowledge about robots.
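To make the four-dimensional structure concrete, subscale scores can be computed as per-subscale means of Likert responses. This is an illustrative sketch only: the item-to-subscale mapping below is a hypothetical placeholder, and the actual assignment of items to the P+, P−, S+ and S− subscales is the one given in Appendix A.

```python
from statistics import mean

# Hypothetical item-to-subscale mapping (0-based indices into a
# 20-item response vector); the real assignment is in Appendix A.
SUBSCALES = {
    "P+": [0, 1, 2, 3, 4],       # personal-level positive
    "P-": [5, 6, 7, 8, 9],       # personal-level negative
    "S+": [10, 11, 12, 13, 14],  # societal-level positive
    "S-": [15, 16, 17, 18, 19],  # societal-level negative
}

def subscale_scores(responses):
    """Mean Likert response per subscale for a single participant."""
    return {name: mean(responses[i] for i in items)
            for name, items in SUBSCALES.items()}
```

Scoring each dimension separately (rather than summing into one total) is what lets high-hope/high-fear respondents be distinguished from low-hope/low-fear ones.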
Sections 2–4 of this paper describe the iterative design process in detail. In short, we started with 23 items (V.1), dropped items based on exploratory factor analysis and content-related issues, added new items and repeated the factor analysis process (V.2), added more items and repeated the analysis again (V.3), and finally picked the strongest items of the last iteration and validated the scale. The final version of our scale (Appendix A) is described and validated in Sect. 5.
Overview of Data Analysis
There were no missing responses in any of the data sets (we used forced responses in both the laboratory and online studies, and screened the data afterwards to ensure no missing data). We also screened the data for straight-line responders (i.e., participants who did not give the survey any real thought but merely chose the same response option for every item). No data imputation methods were used, since all data were cross-sectional. In all studies, participants were over 18 years old and were only allowed to participate if they were fluent in the language in which the study was run. In conjunction with the exploratory factor analyses (pilot, Study 1, Study 2), satisfactory internal consistency of the subscales was confirmed with Cronbach’s alphas and Tarkkonen’s rhos. We used SPSS, versions 22–24 ; R: packages psych , lavaan ; and JASP, versions 0.7–0.11  for our analyses.
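The screening and reliability computations above can be sketched as follows. This is a minimal stdlib-only illustration, not the original analysis code (the actual analyses were run in SPSS, R, and JASP), and Tarkkonen's rho is omitted.

```python
from statistics import variance

def straight_liners(data):
    """Flag respondents who chose the same response option for every item.

    `data` is a (respondents x items) matrix of Likert responses.
    """
    return [all(x == row[0] for x in row) for row in data]

def cronbach_alpha(data):
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances)/total variance)."""
    k = len(data[0])
    item_vars = [variance(col) for col in zip(*data)]
    total_var = variance([sum(row) for row in data])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

A set of perfectly correlated items yields an alpha of 1, and straight-line rows are flagged for exclusion before any reliability estimates are computed.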
We decided on the number of factors to retain based on our pilot study: four, per the Kaiser criterion (for a non-Heywood-case result of a maximum-likelihood factor analysis). However, parallel analysis and optimal coordinates suggested two factors, and the acceleration factor suggested unidimensionality. We approached Studies 1 and 2 with a four-factor solution, which retroactively also validated the results of the pilot study. In our exploratory factor analysis studies, we iteratively selected items that had a factor loading of at least 0.35 on one factor and no cross-factor loadings, excluding ill-fitting items one by one. The results of these exploratory factor analyses were finally confirmed in Study 3 with a confirmatory factor analysis (CFA). As a safeguard against violations of multivariate normality, we used the Satorra–Bentler correction in our CFA.
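The item-retention criterion can be sketched as a simple filter over a loading matrix. One simplification is assumed here: "no cross-factor loadings" is operationalized as "no second factor reaching the 0.35 cutoff", which may not match the exact rule applied in the original item-by-item analyses.

```python
def retain_items(loadings, cutoff=0.35):
    """Indices of items whose absolute loading reaches `cutoff` on
    exactly one factor.

    An item reaching the cutoff on two or more factors is treated as
    cross-loading and dropped, as is an item loading on no factor.
    `loadings` is a (items x factors) matrix from an exploratory
    factor analysis.
    """
    keep = []
    for i, row in enumerate(loadings):
        if sum(abs(loading) >= cutoff for loading in row) == 1:
            keep.append(i)
    return keep
```

In the iterative procedure described above, the worst-fitting item outside this set would be removed and the factor analysis rerun, rather than dropping all failing items at once.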