Cognition, Technology & Work

Volume 21, Issue 4, pp 643–656

Principles of transparency for autonomous vehicles: first results of an experiment with an augmented reality human–machine interface

  • Raissa Pokam
  • Serge Debernard
  • Christine Chauvin
  • Sabine Langlois
Original Article


Highly automated driving allows the driver to temporarily delegate the driving task to the autonomous vehicle. The challenge in this mode is to define the information that must be displayed to the driver so that they can take over properly. This study investigates automation transparency as a means of ensuring meta-cooperation between the driver and the automation, and examines how to convey information to the driver through augmented reality according to a set of transparency principles. To this end, we evaluated, with 45 participants, five human–machine interfaces (HMIs) integrating some or all of the following functions: information acquisition, information analysis, decision-making, and action execution. To validate our transparency principles, we assessed situation awareness, feelings of discomfort, and the participants’ preferences. Although the first results do not fully converge, it appears clearly that an HMI without transparency does not help the driver understand the environment. The “information acquisition” and “action execution” functions appear to be necessary, and the participants preferred the HMI with the highest level of transparency. However, further analysis is required to obtain final results.


Driverless vehicles · Human–machine interface · Cognitive work analysis · Human–machine cooperation · Transparency · Augmented reality · Simulation



This work received support from the French government, in accordance with the PIA (French acronym for Program of Future Investments), within the IRT (French acronym for Technology Research Institute) SystemX.



Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2019

Authors and Affiliations

  • Raissa Pokam (1, 2), corresponding author
  • Serge Debernard (2)
  • Christine Chauvin (3)
  • Sabine Langlois (1, 4)

  1. IRT SystemX, Centre d’Intégration nano-INNOV, Palaiseau, France
  2. Université Polytechnique Hauts-de-France, CNRS, UMR 8201 - LAMIH, Valenciennes, France
  3. Lab-STICC, IHSEV Team, UMR CNRS 6285, Université Bretagne Sud, Lorient, France
  4. Renault, Technocentre-Human Factor, Guyancourt, France
