
The Development of a Scale to Evaluate Trust in Industrial Human-robot Collaboration

Published in: International Journal of Social Robotics

Abstract

Trust has been identified as a key element in successful cooperation between humans and robots. However, little research has examined how trust develops in industrial human-robot collaboration (HRC). With industrial robots becoming increasingly integrated into production lines as a means of enhancing productivity and quality, close-proximity industrial HRC will soon be a viable concept. Since trust is a multidimensional and heavily context-dependent construct, it is vital to understand how trust develops when shop-floor workers interact with industrial robots. To this end, a trust measurement scale suitable for industrial HRC was developed in two phases. In phase one, an exploratory study was conducted to collect participants’ opinions qualitatively; this led to the identification of trust-related themes relevant to the industrial context, from which a pool of questionnaire items was generated. In the second phase, three human-robot trials were carried out in which the questionnaire items were administered to participants interacting with three different types of industrial robots. The results were statistically analysed to identify the key factors affecting trust and, from these, to generate a trust measurement scale for industrial HRC.


Acknowledgments

Special thanks to the senior laboratory technical officer at Cranfield University, Mr. John Thrower, for the assistance and expertise provided. Also, special thanks to Dr. Matthew Chamberlain and the team at Loughborough University for their assistance. The research is funded by the EPSRC Centre for Innovative Manufacturing in Intelligent Automation.

Author information

Correspondence to George Charalambous.

Appendix

1.1 Appendix 1: Interview Template

| Section | Main question | Probe |
|---|---|---|
| Introduction | Can you talk to me about your first thoughts regarding the interaction with this robot? | Why did you feel this way? |
| Robot related | Did you feel you could rely on the robot to hand you over the components safely? | Why? |
| | Can you talk to me about the robot’s ability to hand you the components? | Why? Can you tell me more? |
| | How did the appearance of the robot influence your trust? | Why? |
| Safety | Did you have any concerns when you interacted with the robot? | What? Why? |
| Other topics | Considering the task you have just completed, what has encouraged you to trust the robot? | Can you talk to me more about this? |
| | Is there anything else about the robot that encouraged you to trust this robot? | Why? Can you tell me more? |

1.2 Appendix 2: Coding Template

| Element | Trust-related theme (code sign) | Lower-level theme (code sign) |
|---|---|---|
| Robot (R) | Robot’s performance (R1) | Robot motion (R1m) |
| | | Robot and gripper reliability (R1r) |
| | Robot’s physical attributes (R2) | Robot size (R2s) |
| | | Robot appearance (R2a) |
| Human (H) | Safety (H1) | Personal safety (H1p) |
| | | Safe programming of the robot (H1prog) |
| | Experience (H2) | Prior interaction experiences (H2int) |
| | | Robot mental models (H2mm) |
| External (E) | Task (E1) | Complexity of the task (E1comp) |

1.3 Appendix 3: Item Generation

| Item | Direction |
|---|---|
| The way the robot moved made me uncomfortable | − |
| I was not concerned because the robot moved in an expected way | + |
| The speed of the robot made me uncomfortable | − |
| The speed at which the gripper picked up and released the components made me uneasy | − |
| I felt I could rely on the robot to do what it was supposed to do | + |
| I knew the gripper would not drop the components | + |
| The design of the robot was friendly | + |
| I believe the robot could do a wider number of tasks than what was demonstrated | + |
| I felt the robot was working at full capacity | − |
| The robot gripper did not look reliable | − |
| The gripper seemed like it could be trusted | + |
| The size of the robot did not intimidate me | + |
| I felt safe interacting with the robot | + |
| I was comfortable the robot would not hurt me | + |
| I trusted that the robot was safe to cooperate with | + |
| I had faith that the robot had been programmed correctly | + |
| The way robots are presented in the media had a negative influence on my feelings about interacting with this robot | − |
| I had no prior expectations of what the robot would look like | + |
| I don’t think any prior experiences with robots would affect the way I interacted with the robot | + |
| If I had more experiences with other robots I would feel less concerned | − |
| I was uncomfortable working with the robot due to the complexity level of the task | − |
| If the task was more complicated I might have felt more concerned | − |
| I might not have been able to work with the robot had the task been more complex | − |
| The task made it easy to interact with this robot | + |

1.4 Appendix 4: Extract of the Rating Scale

[Figure: extract of the trust rating scale]

1.5 Appendix 5: Trust Scale Components

| # | Item | Component 1 | Component 2 | Component 3 |
|---|---|---|---|---|
| 1 | The way the robot moved made me uncomfortable | | | 0.759 |
| 2 | The speed at which the gripper picked up and released the components made me uneasy | | | 0.848 |
| 3 | I knew the gripper would not drop the components | | 0.651 | |
| 4 | The robot gripper did not look reliable | | −0.828 | |
| 5 | I trusted that the robot was safe to cooperate with | 0.688 | | |
| 6 | The gripper seemed like it could be trusted | | 0.793 | |
| 7 | I was comfortable the robot would not hurt me | 0.782 | | |
| 8 | The size of the robot did not intimidate me | 0.754 | | |
| 9 | I felt safe interacting with the robot | 0.787 | | |
| 10 | I felt I could rely on the robot to do what it was supposed to do | | 0.506 | |
| | Reliability analysis: Cronbach’s alpha | 0.802 | 0.712 | 0.612 |
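The Cronbach’s alpha values in the last row can be recomputed for new samples with the standard formula: alpha = (k/(k−1))(1 − sum of item variances / variance of totals). A minimal Python sketch; the response data below are illustrative, not data from the study:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for one component.

    items: one list per questionnaire item, each holding the scores
    given by the same respondents (all lists equally long).
    """
    k = len(items)
    n = len(items[0])

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(sample_var(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / sample_var(totals))

# Illustrative scores for a 3-item component from 5 respondents
component = [
    [5, 4, 4, 5, 3],
    [4, 4, 5, 5, 3],
    [5, 3, 4, 4, 2],
]
alpha = cronbach_alpha(component)
```

A value above 0.7, as for components 1 and 2 above, is conventionally taken as acceptable internal consistency.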

1.6 Appendix 6: User Guide to the Trust Scale Questionnaire

[Figures: trust scale questionnaire forms]

Part D: Instructions for the Assessor

The statements in this survey have been randomised to reduce participant bias. Follow these five steps after administration to analyse the output.

Step 1: Group the Statements Together into their Major Components as shown below:

| Questionnaire items | Major component |
|---|---|
| A, C | Robot’s motion and pick-up speed |
| D, F, H, I | Safe co-operation |
| B, E, G, J | Robot and gripper reliability |

Step 2: Score the Individual Statements:

*The following scoring scheme assumes that the individuals were exposed to an industrial robot of 100% reliability (e.g. no failures, no abnormal motion)

| Scale point | Score (statements B, D, E, F, H, I, J) | Score (statements A, C, G) |
|---|---|---|
| Strongly agree | 5 | 1 |
| Agree | 4 | 2 |
| Neutral | 3 | 3 |
| Disagree | 2 | 4 |
| Strongly disagree | 1 | 5 |
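The two scoring columns implement standard reverse scoring for the negatively worded statements (A, C and G). A sketch in Python, assuming responses are recorded as the scale-point labels:

```python
# Scale points in order, from "Strongly agree" down to "Strongly disagree"
SCALE = ["Strongly agree", "Agree", "Neutral", "Disagree", "Strongly disagree"]

REVERSED = {"A", "C", "G"}  # negatively worded statements

def score_statement(statement_id, response):
    """Return the 1-5 score for one statement, reverse-scoring A, C and G."""
    rank = SCALE.index(response)      # 0 = "Strongly agree" ... 4 = "Strongly disagree"
    if statement_id in REVERSED:
        return 1 + rank               # "Strongly agree" scores 1
    return 5 - rank                   # "Strongly agree" scores 5

score_statement("B", "Strongly agree")  # 5
score_statement("A", "Strongly agree")  # 1
```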

Step 3: Deriving the Trust Score for Each Component and the Total Trust Score

| ID | Trust component | Individual score | Minimum score possible | Maximum score possible |
|---|---|---|---|---|
| 1 | Perceived robot’s motion and pick-up speed | X | 2 | 10 |
| 2 | Perceived safe co-operation | Y | 4 | 20 |
| 3 | Perceived robot and gripper reliability | Z | 4 | 20 |
| | Total trust score | X + Y + Z | 10 | 50 |

Date undertaken: DD/MM/YYYY
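The component and total scores can be computed mechanically from the scored statements, using the grouping from Step 1. A sketch; the component names are shorthand, not the paper’s labels:

```python
COMPONENTS = {
    "motion_and_speed": ["A", "C"],            # component 1
    "safe_cooperation": ["D", "F", "H", "I"],  # component 2
    "reliability": ["B", "E", "G", "J"],       # component 3
}

def trust_scores(scored):
    """scored: statement letter -> 1-5 score (after reverse scoring in Step 2)."""
    result = {name: sum(scored[s] for s in items)
              for name, items in COMPONENTS.items()}
    result["total"] = sum(result.values())  # ranges from 10 to 50
    return result
```

For example, `trust_scores({"A": 4, "C": 5, "D": 4, "F": 5, "H": 4, "I": 4, "B": 4, "E": 3, "G": 4, "J": 5})` gives component scores of 9, 17 and 16, and a total of 42.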

Step 4: Interpreting the Results

First, observe the total trust score. If it is low (e.g. less than 25), this could indicate that the individual does not trust the robot enough to collaborate with it, which could have serious implications for operational effectiveness and efficiency. Conversely, if the total trust score is very high (e.g. close to 50), this could indicate that the individual is over-relying on the robot, which could lead to complacency.

Secondly, observe the scores for each trust component separately and identify any poorly rated components. For example, a particular worker might score highly on component 3 (‘Perceived robot and gripper reliability’) but poorly on components 2 (‘Perceived safe co-operation’) and 1 (‘Perceived robot’s motion and pick-up speed’). This makes it possible to identify the specific areas requiring action, e.g. a refresher training course.
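The interpretation rules above can be expressed as a simple classifier. A sketch; the cut-off of 45 for ‘very high’ is an illustrative choice for ‘close to 50’, which the text does not pin down:

```python
def interpret_total(total):
    """Map a total trust score (range 10-50) to the Step 4 interpretation."""
    if total < 25:
        return "low trust: individual may not trust the robot enough to collaborate"
    if total >= 45:  # illustrative threshold for "close to 50"
        return "very high trust: possible over-reliance and complacency risk"
    return "moderate trust"
```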

Step 5: Identifying Emerging Trends

To identify emerging trends and patterns it is suggested to:

  • Log results for each individual

  • Administer the questionnaire every four months. Consider re-randomising the order of the statements on the questionnaire to avoid participant bias

  • Compare results between successive administrations

  • Communicate the results of the questionnaire to the individual. If the individual falls in the ‘low’ or ‘very high’ region, follow up with an informal conversation to understand the reasons. This makes it possible to take mutually agreed actions.

Data for Comparison

The table below presents the average score and standard deviation of each questionnaire item, obtained from the experimental trials at Cranfield and Loughborough Universities (sample size: 155). These data can be used as an early benchmark for newly collected data.

| Questionnaire item | Average score | SD |
|---|---|---|
| A | 1.8 | 0.8 |
| C | 1.8 | 0.9 |
| D | 4.3 | 0.7 |
| F | 4.1 | 0.9 |
| H | 4.3 | 0.8 |
| I | 4.3 | 0.7 |
| B | 4.1 | 0.9 |
| E | 3.7 | 1.1 |
| G | 1.9 | 0.9 |
| J | 4.0 | 0.8 |
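A newly collected item score can be set against these benchmarks with a simple z-score, i.e. its distance from the trial mean measured in standard deviations. A sketch using the published values:

```python
BENCHMARKS = {  # questionnaire item: (average score, SD), n = 155
    "A": (1.8, 0.8), "C": (1.8, 0.9), "D": (4.3, 0.7), "F": (4.1, 0.9),
    "H": (4.3, 0.8), "I": (4.3, 0.7), "B": (4.1, 0.9), "E": (3.7, 1.1),
    "G": (1.9, 0.9), "J": (4.0, 0.8),
}

def z_score(item, score):
    """Distance of a new score from the trial mean, in benchmark SDs."""
    mean, sd = BENCHMARKS[item]
    return (score - mean) / sd

z_score("D", 3.0)  # roughly -1.86: well below the trial average
```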

Cite this article

Charalambous, G., Fletcher, S. & Webb, P. The Development of a Scale to Evaluate Trust in Industrial Human-robot Collaboration. Int J of Soc Robotics 8, 193–209 (2016). https://doi.org/10.1007/s12369-015-0333-8