Augmented Human and Human-Machine Co-evolution: Efficiency and Ethics

Chapter in: Reflections on Artificial Intelligence for Humanity

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12600)

Abstract

The capability and expected potential of AI-based computer solutions have increased significantly in recent years, mainly due to progress in machine learning technologies and in the availability of data. Growing effectiveness in reasoning, knowledge representation, automated training via machine learning, and especially in computer vision and speech technology means that AI systems are becoming ever-better communication and work partners for their human counterparts. Deeply embedded in the everyday context of work and leisure, AI systems can act as competent dialog partners and powerful work assistants. Furthermore, they increasingly help humans to acquire new insights, process and apply situation-specific instructions, receive improved training, and learn new knowledge more effectively. Ultimately, intelligent systems have the potential to become inseparable partners of humans or, in the case of prosthesis solutions and innovative sensor technology, even to become part of the human body. Such close mental and physical interconnection between humans and AI systems raises new concerns and ethical questions that need to be considered not only by computer scientists but also through interdisciplinary work and social discourse. This paper outlines the different levels of human-computer integration, gives examples of the innovative potential in work assistance and learning support, and sketches the ethical and moral issues that accompany such progress.

With contributions from:

Vincent Aleven, Human-Computer Interaction Institute, Carnegie Mellon University.

Kenneth Holstein, Human-Computer Interaction Institute, Carnegie Mellon University.

Elisabeth André, Computer Science Department, University of Augsburg.

Justine Cassell, Human-Computer Interaction Institute, Carnegie Mellon University.

Gordon Cheng, Institute for Cognitive Systems (ICS), Technical University of Munich.

Dimosthenis Karatzas, Computer Vision Centre, Universitat Autònoma de Barcelona.

Koichi Kise, Department of Computer Science and Intelligent Systems, Osaka Prefecture University.

Kenji Mase, Graduate School of Informatics, Nagoya University.

Raul Rojas, Department of Mathematics and Computer Science, Freie Universität Berlin.


Notes

  1. See also chapter 8 of this book.

  2. Human-Machine Co-Creation, Co-Learning and Co-Adaptation.

  3. https://www.iftf.org/humanmachinepartnerships/.

  4. See also chapters 10 and 11.

  5. A contribution based on the GFAIH speech and research work of Dimosthenis Karatzas.

  6. As Hans Moravec's paradox describes, aspects of intelligence involving perceptual and motor skills appear easy to humans, who can draw on a long evolutionary process of mastering these skills, but they tend to be much more challenging to engineer than higher-level aspects of intelligence such as reasoning, which may appear perplexing but are not intrinsically so difficult to achieve.

  7. See also chapter 6.

  8. A contribution based on the GFAIH speech and research work of Gordon Cheng.

  9. A contribution based on the GFAIH speech and research work of Elisabeth André.

  10. A contribution based on the GFAIH speech and research work of Kenji Mase.

  11. A contribution based on the GFAIH speech and research work of Justine Cassell.

  12. See also chapter 8.

  13. A contribution based on the GFAIH speech and research work of Raúl Rojas.

  14. A contribution based on the GFAIH speech and research work of Vincent Aleven and Kenneth Holstein.

  15. In a third condition, teachers used an ablated version of Lumilo to evaluate the added value of the analytics within Lumilo over and above other elements of its design. For brevity, this comparison is omitted here.

  16. A contribution based on the GFAIH speech and research work of Koichi Kise.


Author information

Correspondence to Andreas Dengel.


Copyright information

© 2021 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Dengel, A., Devillers, L., Schaal, L.M. (2021). Augmented Human and Human-Machine Co-evolution: Efficiency and Ethics. In: Braunschweig, B., Ghallab, M. (eds.) Reflections on Artificial Intelligence for Humanity. Lecture Notes in Computer Science, vol. 12600. Springer, Cham. https://doi.org/10.1007/978-3-030-69128-8_13


  • DOI: https://doi.org/10.1007/978-3-030-69128-8_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-69127-1

  • Online ISBN: 978-3-030-69128-8

  • eBook Packages: Computer Science; Computer Science (R0)
