
The Route to Artificial Phenomenology; ‘Attunement to the World’ and Representationalism of Affective States

Emotional Machines

Abstract

According to dominant views in affective computing, artificial systems such as robots and algorithms cannot experience emotion because they lack the phenomenological aspect associated with emotional experience. In this paper I suggest that if we wish to design artificial systems that are able to experience emotion states with phenomenal properties, we should approach artificial phenomenology by borrowing insights from the concept of ‘attunement to the world’ introduced by the early phenomenologists. This concept refers to an openness to the world, a connection with the world which rejects the distinction between an internal mind and the external world. Early phenomenologists such as Heidegger consider this ‘attunement’ necessary for the experience of affective states. I argue that, if one accepts that the phenomenological aspect is part of the emotion state and that ‘attunement to the world’ is necessary for experiencing emotion, affective computing should focus on designing artificial systems which are ‘attuned to the world’ in the phenomenological sense, so as to enable them to experience emotion. Current accounts of the phenomenal properties of affective states analyse them in terms of specific types of representations. Since artificial systems lack the capacity for such representation, mainly because of an inability to determine relevance in changing contexts (‘the frame problem’), it follows on these accounts that artificial phenomenology is impossible. I argue that some affective states, such as ‘attunement’, are not necessarily representational, and so a lack of capacity for representation does not imply that artificial phenomenology is impossible. At the same time, ‘attunement’ helps restrict some aspects of the ‘frame problem’ and, as such, goes some way towards enabling representational states such as emotion.


Notes

  1.

    In a similar way, Elpidorou and Freeman suggest that the function of emotion is to reveal features of the situations we find ourselves in and to motivate us towards action (Elpidorou & Freeman, 2015, p. 661).

  2.

    For the purposes of this paper I use Kim’s definition of functional states as combinations of inputs and outputs, where these inputs and outputs may include other mental states as well as sensory stimuli and physical behaviours (Kim, 2011, p. 169). As I discuss in Sect. 2, these inputs and outputs do not presuppose a distinction between an internal subject and external objects.

  3.

    See for example Ford and Pylyshyn (1996), Dennett (1998), Wheeler (2008), Ransom (2013) and Boden (2016, chapters 2 and 3).

  4.

    Here by representation I do not refer to the simple simulation of representational states which has been achieved by some embodied AI (see Di Paolo, 2003). Instead, I refer to a stronger sense which takes representation to be intrinsically meaningful rather than a state with externally imposed meaning or significance.

  5.

    It remains difficult to give an accurate description of the ‘frame problem’. Here I borrow from Ransom (2013, p. 2), who takes it to refer to a cluster of problems having at their core the problem of determining relevance in changing contexts.

  6.

    As I discuss in the following section, one area of AI can deal with some aspects of ‘the frame problem’ by isolating the features of the environment the AS operate in. However, such attempts cannot accommodate changing contexts, as the context must always remain constant. As Froese and Ziemke argue, the existence of closed feedback loops or unchanging contexts is not a sufficient condition for the attribution of an intrinsically meaningful perspective to AS (Froese & Ziemke, 2009, p. 472).

  7.

    Here I focus on the intrinsic view of affective states, according to which affective states are intrinsically phenomenal.

  8.

    A notable exception in the affective computing literature is Parisi and Petrosino (2010), who claim that they have created simulated robots which ‘can be said to have emotions’. They claim that ‘adding a special emotional circuit to the neural network controlling the robots’ behaviour leads to better motivational decisions’ (Parisi & Petrosino, 2010, p. 453). However, Parisi and Petrosino are not interested in replicating the phenomenological features of emotion. Instead they focus on the functional role that emotions play in motivating behaviour. A more recent paper by Hickton et al. (2017) focuses on designing an architecture which links affect and behaviour through the use of artificial hormones. According to Hickton et al., this architecture can be applied to different types of robot with a view to facilitating ‘subtle forms of expression and adaptive behaviour’ (Hickton et al., 2017, p. 472). However, these authors, like Parisi and Petrosino, do not discuss the phenomenological features of such affective behaviour.

  9.

    A functional account does not preclude the possibility that some type of physical body may be important for experiencing affective states; e.g. humans and octopuses can both be in fear states, and both have physical bodies. However, functional accounts relax the conditions under which something would be considered a ‘physical body’, because that too would be determined on the basis of the function it performed and not on whether it was similar or different to a human-like or octopus-like body. Here I do not focus on ‘embodied’ accounts of affective states; however, the insights provided by such accounts are important for the general project of designing artificial phenomenology because they provide details on how some systems realise affective states.

  10.

    For a good discussion of objections to functional accounts see Boden (2016). For a discussion of objections to representational theories of mental states see Kim (2011).

  11.

    See Boden (2016) for a good summary of this debate and its current status. Other researchers suggest hybrid accounts according to which situated behaviour and mental representations can be replicated in AS once they reach a certain level of organisational complexity (Sloman & Chrisley, 2003). In addition, connectionist or artificial neural network accounts allow for distributed representation, which is different from the kind of representation postulated by symbolic AI (Baars, 1988).

  12.

    See for example Zimbardo’s (1972) Stanford Prison Experiment, in which the majority of participants behaved in accordance with the roles they were assigned within the given context rather than deliberating about which action they should perform.

  13.

    Here by ‘representation’ I mean representation of the specific type I have been discussing in the first part of the paper, i.e. a representational state that is intrinsically phenomenal.

  14.

    One could make a stronger claim to the effect that the mood I am in determines which emotions I feel. Here I focus on the weaker claim that the mood makes it more likely that I will feel certain emotions but does not guarantee it. I am grateful to an anonymous reviewer for bringing this to my attention.

  15.

    According to Heidegger, this attunement is necessary not only for the experience of emotion but for the experience of all mental states, because it enables us to interact with the environment. In this paper I focus on the importance of attunement for affective states.

  16.

    For example, Heidegger seems to use the same term to refer to both ‘attunement’ and ‘mood’. The debate on explicating these terms and providing an ontological account of mood is ongoing. For a useful collection of papers on the different interpretations of ‘mood’ see the special issue published in Philosophia, 2017, Vol. 45.

  17.

    Following Kim (2011, pp. 24–25), I use representationalism and intentionalism interchangeably to refer to the view that mental states have the capacity and function of representing things and states of affairs in the world from a certain perspective.

  18.

    Saying that mood is pre-intentional entails that it is not representational. Ratcliffe’s view is similar to the view I endorse in this paper as I consider mood to be non-representational whilst at the same time enabling emotion states which are representational.

  19.

    Critics suggest that it is not clear that the phenomenological properties associated with phenomenal states can be adequately described on the basis of representational content. For example, spectrum-inversion cases are possible in which one person sees green where another sees red when both look at a red tomato, yet both call that colour ‘red’. In such cases, although the content of the representation seems identical, e.g. ‘a red tomato’, the colour experienced is different (Shoemaker, 1982). According to this objection, the content of a representation on its own cannot adequately account for the phenomenological aspect of experience. Any representational account of the phenomenal properties of affective states would need to address these concerns. See Kriegel (2017) for a recent attempt to give a representational account of emotional phenomenology by suggesting that the representational content is not exhausted by the object being represented but also includes an attitude of representing the object as x. Here I suggest that ‘the attitude towards x’ should be based on the idea of ‘attunement to the world’. As such, my account could be considered compatible with, and an addendum to, Kriegel’s.

  20.

    This rejection of the internal/external distinction is a recurring theme in phenomenological accounts, e.g. Merleau-Ponty’s or Heidegger’s. However, an account providing an adequate explanation of this phenomenon is still missing.

  21.

    As Dreyfus argues, Heidegger attacks this internal/external distinction as a misunderstanding based on Descartes’ distinction between the internal mind and the external world (Dreyfus, 2007). According to Heidegger (1927), Descartes’ distinction was accepted as a dogma and created several epistemological issues, such as the problem of knowledge of the external world, which remains unwarranted unless one presupposes the truth of the distinction.

  22.

    Ratcliffe connects this feeling of disconnectedness with the experience of being in a depressive mood (see Ratcliffe, 2014, for more detail).

  23.

    For a stronger view which takes mood to be the background condition for most other representational states see Ratcliffe (2013) and Slaby (2014).

  24.

    This interaction is called ‘sense making’ in enactive AI accounts (Froese & Ziemke, 2009, p. 495).

  25.

    Ratcliffe calls these ‘existential feelings’; however, here I interpret moods as modes of being rather than feelings (see Ratcliffe, 2005, p. 56).

  26.

    See the entries for ‘depression’ and ‘anxiety’ disorders in the Diagnostic and Statistical Manual of Mental Disorders (2013), published by the American Psychiatric Association, for more details.

  27.

    This concept of mood has similarities with the concept of disposition, and it would be beneficial for the project of designing artificial phenomenology to provide an account of the relation between moods and dispositions.

  28.

    Some roboticists integrate moods into their models; however, these moods are typically limited to three modes, namely negative, positive and neutral (see for example Kirby et al., 2010). As such, these attempts reflect a different approach to the concept of mood than the one I discuss in this paper.
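The ‘three modes’ approach mentioned in the last note can be made concrete with a minimal sketch. The following Python fragment is purely illustrative and rests on my own assumptions: the valence thresholds, the `approach_distance` behaviour parameter and all function names are hypothetical, and the code does not reproduce Kirby et al.’s actual model. It shows what a three-mode mood typically amounts to computationally, namely a single discretised valence signal modulating a behaviour parameter within a fixed, designer-specified context.

```python
from enum import Enum

class Mood(Enum):
    """Three-mode mood of the kind typical in current robot models."""
    NEGATIVE = -1
    NEUTRAL = 0
    POSITIVE = 1

def update_mood(valence_history):
    """Map a running average of event valences (each in [-1, 1]) to a
    mood mode. The 0.3 thresholds are arbitrary illustration values."""
    avg = sum(valence_history) / len(valence_history)
    if avg > 0.3:
        return Mood.POSITIVE
    if avg < -0.3:
        return Mood.NEGATIVE
    return Mood.NEUTRAL

def approach_distance(mood, base=1.0):
    """Modulate a single behaviour parameter (how closely the robot
    approaches a person, in metres) by the current mood mode."""
    return base - 0.3 * mood.value  # positive mood -> closer approach

# A run of positively valenced interactions shifts mood and behaviour:
mood = update_mood([0.6, 0.8, 0.4])
print(mood, approach_distance(mood))  # Mood.POSITIVE 0.7
```

Because the mood here is just a threshold over a scalar signal, whatever counts as ‘positive’ is fixed in advance by the designer; such a model cannot accommodate changing contexts, which is one way of seeing how far it falls short of ‘attunement to the world’ in the phenomenological sense.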

References

  • American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (DSM-5). American Psychiatric Publishing.

  • Armstrong, D. (1980). The nature of mind. University of Queensland Press.

  • Baars, B. J. (1988). A cognitive theory of consciousness. Cambridge University Press.

  • Beer, R. D. (1990). Intelligence as adaptive behaviour: An experiment in computational neuroethology. Academic Press.

  • Block, N. (1995). On a confusion about a function of consciousness. Behavioral and Brain Sciences, 18, 227–247.

  • Block, N. (1996). Mental paint and mental latex. Philosophical Issues, 7, 19–49.

  • Boden, M. A. (2016). AI: Its nature and future. Oxford University Press.

  • Brooks, R. A. (1991). Intelligence without representation. Artificial Intelligence, 47, 139–159.

  • Chalmers, D. (2004). The representational character of experience. In B. Leiter (Ed.), The future for philosophy (pp. 153–181). Oxford University Press.

  • Colombetti, G. (2013). The feeling body: Affective science meets the enactive mind. MIT Press.

  • Colombetti, G. (2017). The embodied and situated nature of moods. Philosophia, 45, 1437–1451.

  • de Sousa, R. (1987). The rationality of emotion. MIT Press.

  • Dennett, D. (1998). Cognitive wheels: The frame problem of AI. In Brainchildren (pp. 181–205). Penguin Books.

  • Deonna, J., & Teroni, F. (2012). The emotions: A philosophical introduction. Routledge.

  • Di Paolo, E. (2003). Organismically-inspired robotics: Homeostatic adaptation and teleology beyond the closed sensorimotor loop. In K. Murase & T. Asakura (Eds.), Dynamical systems approach to embodiment and sociality (pp. 19–42). Advanced Knowledge International.

  • Dretske, F. (1995). Naturalizing the mind. MIT Press.

  • Dreyfus, H. L. (1992). What computers still can’t do: A critique of artificial reason. MIT Press.

  • Dreyfus, H. L. (2007). Why Heideggerian AI failed and how fixing it would require making it more Heideggerian. Artificial Intelligence, 171, 1137–1160.

  • Elpidorou, A., & Freeman, L. (2015). Affectivity in Heidegger I: Moods and emotions in Being and Time. Philosophy Compass, 10(10), 661–671.

  • Ford, K. M., & Pylyshyn, Z. W. (1996). The robot’s dilemma revisited: The frame problem in artificial intelligence. Ablex.

  • Freeman, L. (2014). Toward a phenomenology of mood. The Southern Journal of Philosophy, 52(4), 445–476.

  • Frijda, N. H. (1986). The emotions. Cambridge University Press.

  • Froese, T., & Ziemke, T. (2009). Enactive artificial intelligence: Investigating the systemic organization of life and mind. Artificial Intelligence, 173, 466–500.

  • Gallagher, S. (2014). Phenomenology and embodied cognition. In L. Shapiro (Ed.), The Routledge handbook of embodied cognition (pp. 9–18). Routledge.

  • Goldie, P. (2000). The emotions: A philosophical exploration. Oxford University Press.

  • Götz, K. G. (1972). Principles of optomotor reactions in insects. Bibliotheca Ophthalmologica: Supplementa ad Ophthalmologica, 82, 251–259.

  • Heidegger, M. (1927). Being and time (J. Stambaugh, Trans.). State University of New York Press.

  • Hickton, L., Lewis, M., & Cañamero, L. (2017). A flexible component-based robot control architecture for hormonal modulation of behaviour and affect. In Y. Gao et al. (Eds.), TAROS 2017, LNAI 10454 (pp. 464–474). Springer.

  • Ketelaar, T., & Todd, P. M. (2001). Framing our thoughts: Ecological rationality as evolutionary psychology’s answer to the frame problem. In H. R. Holcomb III (Ed.), Conceptual challenges in evolutionary psychology: Innovative research strategies (pp. 179–211). Kluwer Academic Publishers.

  • Kim, J. (2011). Philosophy of mind. Westview Press.

  • Kirby, R., Forlizzi, J., & Simmons, R. (2010). Affective social robots. Robotics and Autonomous Systems, 58(3), 322–332.

  • Kriegel, U. (2012). Towards a new feeling theory of emotion. European Journal of Philosophy, 22(3), 420–442.

  • Kriegel, U. (2017). Reductive representationalism and emotional phenomenology. Midwest Studies in Philosophy, 41(1), 41–59.

  • Lazarus, R. S. (1991). Emotion and adaptation. Oxford University Press.

  • Merleau-Ponty, M. (1945). Phenomenology of perception (D. Landes, Trans.). Routledge.

  • Moss, H. (2016). Genes, affect, and reason: Why autonomous robot intelligence will be nothing like human intelligence. Techné: Research in Philosophy and Technology, 20(1), 1–15.

  • Parisi, D., & Petrosino, G. (2010). Robots that have emotions. Adaptive Behaviour, 18(6), 453–469.

  • Picard, R. W. (2003). Affective computing: Challenges. MIT Press.

  • Price, C. (2006). Affect without object: Moods and objectless emotions. European Journal of Analytic Philosophy, 2(1), 49–68.

  • Ransom, M. (2013). Why emotions do not solve the frame problem. In V. C. Müller (Ed.), Fundamental issues of artificial intelligence (pp. 353–365). Springer.

  • Ratcliffe, M. (2005). The feeling of being. Journal of Consciousness Studies, 12(8–10), 43–60.

  • Ratcliffe, M. (2010). Depression, guilt and emotional depth. Inquiry: An Interdisciplinary Journal of Philosophy, 53(6), 602–626.

  • Ratcliffe, M. (2013). What is it to lose hope? Phenomenology and the Cognitive Sciences, 12(4), 597–614.

  • Ratcliffe, M. (2014). The phenomenology of depression and the nature of empathy. Medicine, Health Care and Philosophy, 17(2), 269–280.

  • Rosenthal, D. M. (1991). The independence of consciousness and sensory quality. Philosophical Issues, 1, 15–36.

  • Scherer, K. R., Bänziger, T., & Roesch, E. B. (2010). Blueprint for affective computing. Oxford University Press.

  • Shanahan, M. P. (1997). Solving the frame problem: A mathematical investigation of the common sense law of inertia. MIT Press.

  • Shoemaker, S. (1982). The inverted spectrum. Journal of Philosophy, 79, 357–382.

  • Slaby, J. (2014). The other side of existence: Heidegger on boredom. In S. Flach & J. Söffner (Eds.), Habitus in habitat II: Other sides of cognition (pp. 101–120). Lang.

  • Sloman, A., & Chrisley, R. (2003). Virtual machines and consciousness. Journal of Consciousness Studies, 10(4–5), 133–172.

  • Smith, J. (2016). Experiencing phenomenology: An introduction. Routledge.

  • Stephan, A. (2009). On the nature of artificial feelings. In B. Röttger-Rössler & H. J. Markowitsch (Eds.), Emotions as bio-cultural processes (pp. 216–225). Springer.

  • Stephan, A. (2017). Moods in layers. Philosophia, 45, 1481–1495.

  • Tye, M. (1995). Ten problems of consciousness: A representational theory of the phenomenal mind. MIT Press.

  • Webb, B. (1996). A cricket robot. Scientific American, 275(6), 94–99.

  • Welton, D. (2012). Bodily intentionality, affectivity and basic affects. In D. Zahavi (Ed.), The Oxford handbook of contemporary phenomenology (pp. 177–198). Oxford University Press.

  • Wheeler, M. (2008). Cognition in context: Phenomenology, situated robotics and the frame problem. International Journal of Philosophical Studies, 16(3), 323–349.

  • Zimbardo, P. G. (1972). Stanford prison experiment: A simulation study of the psychology of imprisonment. Zimbardo.


Author information

Correspondence to Lydia Farina.


Copyright information

© 2023 Springer Fachmedien Wiesbaden GmbH, part of Springer Nature


Cite this chapter

Farina, L. (2023). The Route to Artificial Phenomenology; ‘Attunement to the World’ and Representationalism of Affective States. In C. Misselhorn, T. Poljanšek, T. Störzinger, & M. Klein (Eds.), Emotional Machines (Technikzukünfte, Wissenschaft und Gesellschaft / Futures of Technology, Science and Society). Springer VS, Wiesbaden. https://doi.org/10.1007/978-3-658-37641-3_5

