Who Should I Trust (Human vs. Automation)? The Effects of Pedigree in a Dual Advisor Context

  • Carl J. Pearson
  • Christopher B. Mayhorn
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 822)


Source-type bias (human vs. automation) may influence how trust in decision aids develops. When two decision aids are present, the pedigree (perceived expertise) of each may shape decision making and reliance behavior. In this study, the Convoy Leader decision-making paradigm developed by Lyons and Stokes (2012) was adapted so that the human and automated information sources could each be of high or low pedigree. Two hundred participants made eight decisions about the route a military convoy should take, based on intelligence (e.g., past insurgent attacks, detected Improvised Explosive Devices (IEDs)) provided by two information sources (one human, one automated) of varying pedigree. In two of the eight decisions, the aids provided conflicting information. Results indicated a bias toward the human advisor: participants were more likely to trust the human's information regardless of pedigree. This bias reversed only when the automated decision aid was presented as having far greater pedigree. Measures of trust attitudes strongly predicted decision-making behavior. The findings are discussed in terms of design for dual-advisor contexts in which human operators may receive conflicting information from advisors of different source types.


Keywords: Decision-making · Trust · Dual-advisor systems


References

  1. Lyons JB, Stokes CK (2012) Human-human reliance in the context of automation. Hum Factors 54(1):112–121
  2. Parasuraman R, Riley V (1997) Humans and automation: use, misuse, disuse, abuse. Hum Factors 39(2):230–253
  3. Sage AP (1991) Decision support systems engineering. Wiley, New York
  4. Madhavan P, Wiegmann DA (2007a) Similarities and differences between human-human and human-automation trust: an integrative review. Theor Issues Ergon Sci 8(4):277–301
  5. Madhavan P, Wiegmann DA (2007b) Effects of information source, pedigree, and reliability on operator interaction with decision support systems. Hum Factors 49(5):773–785
  6. Pearson CJ, Welk AK, Boettcher WA, Mayer RC, Streck S, Simons-Rudolph JM, Mayhorn CB (2016) Differences in trust between human and automated decision aids. In: Proceedings of the 2016 symposium and bootcamp on the science of security. ACM, p 95
  7. Ohanian R (1990) Construction and validation of a scale to measure celebrity endorsers’ perceived expertise, trustworthiness, and attractiveness. J Advert 19(3):39–52
  8. Merritt SM (2011) Affective processes in human-automation interactions. Hum Factors 53(4):356–370
  9. Merritt SM, Sinha R, Curran PG (2015) Attitudinal predictors of relative reliance on human vs automated advisors. Int J Hum Factors Ergon 3(3–4):327–345
  10. Dzindolet MT, Petersen SA, Pomranky RA, Pierce LG, Beck HP (2003) The role of trust in automation reliance. Int J Hum Comput Stud 58(6):697–718

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. North Carolina State University, Raleigh, USA
