Psychonomic Bulletin & Review, Volume 24, Issue 6, pp 2031–2036

Revisiting the role of language in spatial cognition: Categorical perception of spatial relations in English and Korean speakers

  • Kevin J. Holmes
  • Kelsey Moty
  • Terry Regier
Brief Report


The spatial relation of support has been regarded as universally privileged in nonlinguistic cognition and immune to the influence of language. English, but not Korean, obligatorily distinguishes support from nonsupport via basic spatial terms. Despite this linguistic difference, previous research suggests that English and Korean speakers show comparable nonlinguistic sensitivity to the support/nonsupport distinction. Here, using a paradigm previously found to elicit cross-language differences in color discrimination, we provide evidence for a difference in sensitivity to support/nonsupport between native English speakers and native Korean speakers who were late English learners and tested in a context that privileged Korean. Whereas the former group showed categorical perception (CP) when discriminating spatial scenes capturing the support/nonsupport distinction, the latter did not. An additional group of native Korean speakers—relatively early English learners tested in an English-salient context—patterned with the native English speakers in showing CP for support/nonsupport. These findings suggest that obligatory marking of support/nonsupport in one’s native language can affect nonlinguistic sensitivity to this distinction, contra earlier findings, but that such sensitivity may also depend on aspects of language background and the immediate linguistic context.


Keywords: Spatial cognition · Categorical perception · Language and thought · Lateralization · Bilinguals

Languages differ in how they partition the world through their words and grammatical devices (Malt & Majid, 2013). Such linguistic differences suggest the possibility of corresponding differences in nonlinguistic cognition, consistent with the classic hypothesis that language shapes thought (Whorf, 1956). Support for this possibility comes from studies showing that speakers of different languages perform differently—in a manner predicted by the semantic differences among the languages—on tasks that do not require language (see Wolff & Holmes, 2011). Other research suggests, however, that such cognitive diversity is constrained. Structural regularities—for example, biomechanical discontinuities in human locomotion (Malt et al., 2008) and the irregular shape of perceptual color space (Regier, Kay, & Khetarpal, 2007)—appear to limit the scope of semantic variation across languages and hence the potential for cognitive differences across language groups. Moreover, it has been suggested that certain structural distinctions (e.g., among objects or spatial relations) are so foundational to human experience as to resist the influence of semantic variation—and, indeed, research investigating cross-language cognitive differences for such distinctions has failed to find them (e.g., Munnich, Landau, & Dosher, 2001). Here we suggest that it may be premature to conclude that there are no such cross-language cognitive differences. Using a paradigm previously found to elicit cross-language differences in color discrimination, we show that sensitivity to a structural distinction proposed to be cognitively universal—the spatial distinction between support and nonsupport—can differ in speakers of languages that encode the distinction differently. We also show that such sensitivity depends on aspects of language background and context, suggesting a more complex picture of the relation between spatial language and spatial cognition than previously acknowledged.

The spatial relation of support provides an interesting test case for exploring possible effects of language on spatial cognition because, despite being regarded as a cognitive universal (Munnich et al., 2001), this relation is encoded differently across languages. In English, the distinction between immediate support (i.e., one object resting on or attached to another) and nonsupport is marked obligatorily by basic spatial terms such as on versus above; the term on cannot be used when the figure object is even slightly out of contact with the reference object. In Korean, however, a single term (위; wi) can be used to describe the relation of two objects regardless of the presence of support; other terms for expressing this relation, while available, are optional and may be used infrequently even for scenes depicting clear support relationships (Munnich et al., 2001).

Munnich et al. (2001) investigated whether this difference in spatial language correlates with differences in nonlinguistic spatial cognition. Native speakers of English and Korean were asked to name spatial scenes showing a figure object in or out of contact with a reference object, and other speakers of the two languages were tested on their memory for the same locations. Whereas the naming task produced the expected cross-language difference in encoding of the support/nonsupport distinction, the memory task yielded similar performance in the two language groups, providing no evidence of linguistic influence.1 Speakers of both languages remembered locations in which the figure object was adjacent to the reference object better than those not adjacent, suggesting comparable sensitivity to the support/nonsupport distinction. Munnich et al. concluded that contact/support may be universally privileged in nonlinguistic cognition and immune to any effects of cross-language semantic variation.

These findings stand in contrast to evidence from other domains, such as color, in which cross-language semantic differences are mirrored by differences in perceptual discrimination. Winawer et al. (2007), for example, found that Russian speakers, but not English speakers, showed enhanced discrimination of colors that straddled the boundary between light blue and dark blue—a basic lexical contrast in Russian (goluboy vs. siniy) but not in English. Such categorical perception (CP)2 has also been observed within a single language group when comparing discrimination across the two halves of the visual field. Several studies have found stronger CP in the right visual field (RVF) than in the left (LVF)—possibly reflecting the language dominance of the left hemisphere, to which the RVF projects (e.g., Gilbert, Regier, Kay, & Ivry, 2006, 2008; Holmes & Regier, 2016; but see Brown, Lindsey, & Guckes, 2011; Witzel & Gegenfurtner, 2011, for failures to replicate this pattern). Using a visual search task that combined cross-language and cross-hemisphere comparisons, Roberson, Pak, and Hanley (2008) found CP for a Korean-specific distinction between yellow–green (yeondu) and green (chorok) colors in Korean speakers but not in English speakers, and this effect was lateralized to the RVF in fast-responding Korean participants.

Although these studies provide evidence that speakers of different languages differ in their sensitivity to certain categorical distinctions when making perceptual judgments, the distinctions investigated (e.g., light vs. dark blue; yellow–green vs. green) are not widely claimed to be resistant to the influence of language. We asked whether such a cross-language difference might be observed for a distinction that has been proposed to be immune to linguistic influence, and cognitively universal: spatial support/nonsupport (Munnich et al., 2001). That is, we used the cognitive distinction probed by Munnich et al. (spatial support/nonsupport) and the task of Roberson et al. (2008) and others (visual search) to investigate whether speakers of languages that encode support/nonsupport differently would differ in their perceptual discrimination of spatial scenes capturing this distinction.

Using a visual search task testing for CP for support/nonsupport, we compared the performance of the same two language groups tested by Munnich et al. (2001): monolingual English speakers, for whom support/nonsupport is marked by basic spatial terms, and bilingual Korean–English speakers, whose native language (Korean) does not mark this distinction obligatorily. Both groups were tested in their native language. If support/nonsupport is cognitively universal as previously claimed, CP for this distinction should be observed in both groups, and to comparable degrees. However, if language affects sensitivity to support/nonsupport such that obligatory marking of the distinction in one’s native language is necessary for (or enhances) sensitivity, CP should be observed in the native English speakers but not (or not as strongly) in the native Korean speakers. We also tested an additional group of native Korean speakers who had learned English relatively early in life compared to the first Korean group and were tested in a setting that rendered English relatively more salient. Given evidence that bilinguals’ categorization patterns differ depending on the age of L2 immersion (e.g., Munnich & Landau, 2010) and the ambient language (e.g., Athanasopoulos et al., 2015; Fuhrman et al., 2011), we expected that this Korean-with-English-reinforcement group (hereafter, +E Korean) would be more likely to show CP for support/nonsupport than would the Korean-without-English-reinforcement group (described above; hereafter, -E Korean), for whom English was less supported by the experimental context and by the participants’ own language background.


In addition to evidence of RVF-lateralized CP for color (e.g., Gilbert et al., 2006) and other perceptual stimuli (e.g., Gilbert et al., 2008; Holmes & Wolff, 2012), this pattern of lateralization might be especially likely to occur in the spatial domain, given the left hemisphere’s specialization for the processing of categorical spatial relations (e.g., Kosslyn et al., 1989). Therefore, following Roberson et al. (2008), we used a method that allowed us to test for differences in CP across visual fields as well as across language groups. The stimuli were four scenes showing a figure object supported or unsupported by a reference object (see Fig. 1), with each stimulus display consisting of three identical distractor scenes and one different target scene (see Fig. 2). The target and distractor scenes were either from the same English-marked category (within-category trials; i.e., both support or both nonsupport) or from different categories (between-category trials; i.e., support vs. nonsupport). On each trial, participants were asked to indicate the side on which the target appeared by making speeded keyboard responses. In this task, CP for support/nonsupport would be indicated by faster responses on between-category than within-category trials, and lateralized CP would be indicated by a larger such between-category response time (RT) advantage in the RVF than in the LVF. Our method was similar to that of Munnich et al. (2001) in using scenes that either crossed the support/nonsupport boundary (between-category trials) or did not do so (within-category trials). However, whereas Munnich et al.’s task involved judging whether two scenes separated by a delay displayed the same spatial relation, ours required discrimination of a target scene from multiple distractor scenes, all presented simultaneously.
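The predicted pattern can be expressed as a simple difference score. The sketch below is ours, for illustration only: the reported analyses are ANOVAs on raw response times, and the function name and data layout are assumptions, not the authors' code.

```python
def cp_index(mean_rts):
    """Between-category RT advantage (ms) per visual field.

    mean_rts maps (visual_field, trial_type) to a mean response time,
    e.g. {("RVF", "within"): 620, ("RVF", "between"): 590, ...}.
    A positive advantage indicates CP (faster between-category search);
    lateralized CP means a larger advantage in the RVF than in the LVF.
    """
    adv = {vf: mean_rts[(vf, "within")] - mean_rts[(vf, "between")]
           for vf in ("LVF", "RVF")}
    adv["lateralization"] = adv["RVF"] - adv["LVF"]
    return adv
```

For example, hypothetical mean RTs of 620/590 ms (RVF within/between) and 610/608 ms (LVF within/between) would yield advantages of 30 and 2 ms, and a lateralization score of 28 ms.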
Fig. 1

Support (a, b) and nonsupport (c, d) scenes used in the visual search task

Fig. 2

Sample visual search displays, with the target in the lower left position of each: a within-category trial, b between-category trial

Method

Participants

Nineteen native English speakers and 37 native Korean speakers, recruited from the University of California, Berkeley, community, participated for course credit or payment. None of the English speakers reported familiarity with Korean. The Korean speakers were divided into two groups: the -E Korean group (n = 18), who had not been exposed to English regularly before age 12 (as in Munnich et al., 2001; mean age of English immersion: 16.7 years; range: 13–22) and who completed the experiment entirely in Korean, including reading a Korean consent form and interacting with a Korean-speaking experimenter, and the +E Korean group (n = 19), for whom no English age-of-immersion cutoff was imposed (M = 8.9 years; range: 0–14) and who read an English consent form and interacted with an English-speaking experimenter. All participants were right-handed and reported normal or corrected-to-normal vision. Two participants (one English speaker and one +E Korean speaker) were excluded for chance-level accuracy on the visual search task, leaving 18 participants in each group (comparable to the group sizes in Munnich et al.’s memory task).

Stimuli

The stimuli were four scenes of a figure object (ball) and a reference object (table), similar to those in Munnich et al.’s (2001) Experiment 2. Two of the scenes displayed a support relation between the figure and reference objects (see Figs. 1a–b), and the other two displayed a nonsupport relation (see Figs. 1c–d). The two scenes displaying each relation were selected based on pilot work to be perceptually discriminable (i.e., the figure object occupied sufficiently different positions in the two scenes) given the brief presentation interval of our task.

Procedure

For the English and -E Korean groups, the experiment was conducted entirely in participants’ native language. For the +E Korean group, task instructions were presented in Korean, but all other aspects (including the written consent form and spoken interactions with the experimenter) were in English. Each participant sat in a darkened room with her head positioned in a chin rest such that the center of the computer screen was at eye level, at a viewing distance of 60 cm. On each trial of the visual search task, a fixation marker appeared centrally for 1,000 ms, followed by a stimulus display for 200 ms (an interval that discouraged eye movements). The display consisted of four scenes surrounding the fixation marker, each 7.7° × 7.7° in size (see Fig. 2). Three of the scenes were identical (distractors) and differed from the fourth (target). The target and distractor scenes displayed spatial relations from either the same English-marked category (within-category trials; e.g., both labeled above/over in English and wi in Korean) or different categories (between-category trials; e.g., target labeled on and distractor labeled above/over in English, but both labeled wi in Korean). Participants were asked to indicate, as quickly as possible, the side of the screen containing the target (“odd one out”) by pressing the left (Q) or right (P) computer key with the corresponding index finger. The next trial began 250 ms after participants made a response.

There were 32 practice trials and 224 test trials, half within-category and half between-category trials, presented in random order. On within-category trials, both the target and distractor scenes displayed support or nonsupport relations. In pilot work, participants took considerably longer to discriminate the nonsupport scene in which the ball was near the table (see Fig. 1d) from either of the two support scenes, compared to all other pairwise combinations of the four scenes. Therefore, to minimize differences in perceptual similarity across trial types, on between-category trials one of the two support scenes was always paired with the nonsupport scene in which the ball was relatively far from the table (see Fig. 1c).3 Across trials, each scene served as target and distractor equally frequently at all four display positions.
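The trial structure just described can be sketched as follows. This is an illustrative reconstruction, not the authors' code: scene labels a–d follow Fig. 1 (with c the "far" and d the "near" nonsupport scene), and any counterbalancing details beyond the stated constraints are assumptions.

```python
import random

# Four scenes: two support (a, b) and two nonsupport (c, d).
SUPPORT = ["a", "b"]
NONSUPPORT_FAR = "c"   # ball relatively far from the table
NONSUPPORT_NEAR = "d"  # ball near the table; excluded from between-category pairs
POSITIONS = ["upper_left", "lower_left", "upper_right", "lower_right"]

def make_trials(n_test=224):
    """Build a trial list: half within-category, half between-category.

    Between-category trials always pair a support scene with the far
    nonsupport scene, per the constraint described in the text. Each
    scene serves as target and as distractor equally often, at each of
    the four display positions.
    """
    within_pairs = [("a", "b"), ("b", "a"), ("c", "d"), ("d", "c")]
    between_pairs = [("a", "c"), ("c", "a"), ("b", "c"), ("c", "b")]
    per_cell = n_test // 2 // len(within_pairs) // len(POSITIONS)  # 224/2/4/4 = 7
    trials = []
    for pairs, kind in [(within_pairs, "within"), (between_pairs, "between")]:
        for target, distractor in pairs:
            for pos in POSITIONS:
                trials += [{"target": target, "distractor": distractor,
                            "position": pos, "type": kind}] * per_cell
    random.shuffle(trials)
    return trials
```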

After the visual search task, both groups of Korean speakers were shown the four support/nonsupport scenes individually in random order and were asked to describe the relation between the figure and reference objects by filling in the blank in the following sentence (translated by a native Korean speaker): “공은 테이블의 ___________에 있습니다.” (“The ball is __________ the table”). As in Munnich et al.’s (2001) naming task, participants were instructed to use a simple word or phrase, and to avoid using compass, clock-face, or degree-of-angle terms. A separate group of 14 native English speakers completed the sentence “The ball is __________ the table” for the same stimuli. This task served as a manipulation check to ensure that the stimuli captured the key difference between English and Korean in the encoding of support/nonsupport.

Results

Mean accuracy on the visual search task was 81.9% (SD = 10.4). Trials in which participants responded incorrectly or RT was greater than 2.5 standard deviations from individual means (3.2% of trials) were excluded from RT analyses. To test for lateralized CP, we conducted a mixed ANOVA on the remaining RTs, with visual field (LVF/RVF) and categorical relationship (within/between category) as within-participant factors and group (English/+E Korean/-E Korean) as a between-participant factor. This analysis yielded a main effect of categorical relationship, F(1, 51) = 4.14, p = .05, η2 = .08, indicating faster responses on between-category trials than within-category trials overall, and a three-way interaction, F(2, 51) = 4.11, p = .02, η2 = .14, with no other significant effects (ps > .1).
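The trial-exclusion step can be sketched as follows. This is a minimal illustration under stated assumptions: the paper does not specify implementation details, so the per-participant trimming shown here (one mean and SD over each participant's correct trials, rather than per condition) is our own reading.

```python
from statistics import mean, stdev

def trim_rts(trials, n_sd=2.5):
    """Exclude error trials, then RTs more than n_sd SDs from each
    participant's own mean (computed over that participant's correct trials).

    `trials`: list of dicts with keys 'participant', 'rt' (ms), 'correct'.
    """
    correct = [t for t in trials if t["correct"]]
    by_participant = {}
    for t in correct:
        by_participant.setdefault(t["participant"], []).append(t["rt"])
    kept = []
    for t in correct:
        rts = by_participant[t["participant"]]
        m, s = mean(rts), stdev(rts)
        if abs(t["rt"] - m) <= n_sd * s:
            kept.append(t)
    return kept
```

For instance, a participant with ten correct 500-ms trials, one correct 5000-ms trial, and one error trial would retain only the ten 500-ms trials: the error trial is dropped first, and the 5000-ms RT falls outside the 2.5-SD band.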

As shown in Fig. 3, the English and -E Korean groups exhibited distinctly different RT profiles, and the +E Korean group patterned with the English group. To confirm these observations and further analyze the three-way interaction, we conducted separate ANOVAs on each pair of the three groups. In both the English/-E Korean and +E Korean/-E Korean analyses, the three-way interaction remained significant, English/-E Korean: F(1, 34) = 6.47, p = .02, η2 = .16; +E Korean/-E Korean: F(1, 34) = 4.95, p = .03, η2 = .13, and no other effects were observed (ps > .1). In contrast, in the English/+E Korean analysis, there was an interaction between visual field and categorical relationship, F(1, 34) = 6.63, p = .01, η2 = .16, but no three-way interaction (p > .9, η2 < .001) and no other effects (ps > .05). The English and +E Korean groups showed lateralized CP: responses were faster on between-category than within-category trials when the target appeared in the RVF, English: t(17) = 2.40, p = .03, d = .58; +E Korean: t(17) = 2.18, p = .04, d = .53, but no such between-category advantage occurred when the target appeared in the LVF (ps > .6). The -E Korean group, unlike the other two, showed no between-category advantage in either visual field (ps > .1).
Fig. 3

Mean reaction time by visual field and categorical relationship for each language group. Whereas the English and +E Korean groups showed CP in the RVF but not in the LVF (i.e., lateralized CP), the -E Korean group showed no CP in either visual field. LVF = left visual field; RVF = right visual field. *p < .05; ns = nonsignificant. Error bars represent 95% within-participant confidence intervals

An analogous ANOVA on the accuracy data yielded a main effect of group, F(2, 51) = 4.08, p = .02, η2 = .14, with both groups of Korean speakers (-E: M = 83.0%; +E: M = 85.9%) showing higher accuracy overall than the English group (M = 76.7%). This factor did not interact with visual field or categorical relationship, and no other significant effects were observed (ps > .06), suggesting that lateralized CP for RTs was not due to a speed–accuracy trade-off.

The results of the naming task confirmed that the stimuli were readily distinguished by the use of support/nonsupport terms in English but not in Korean. As judged independently by two native Korean speakers blind to the experimental hypotheses, only 12 of the 144 responses (8.3%) given by the two groups of Korean speakers included terms that explicitly encoded contact/support or a lack thereof between the figure and reference objects. In contrast, all 56 responses given by the English speakers included such terms (e.g., on/touching vs. above/over).

In summary, lateralized CP for support/nonsupport was observed in native English speakers and in native Korean speakers who were relatively early English learners and tested in an English-salient context but not in native Korean speakers who were late English learners and tested in a Korean-salient context.

Discussion

Our results provide evidence of a cross-language difference in spatial cognition for a structural distinction proposed to be immune to linguistic influence. English, but not Korean, obligatorily marks the putatively cognitively universal support/nonsupport distinction with basic spatial terms, and lateralized CP for this distinction was observed in native English speakers but not in Korean–English bilinguals whose language background and testing context privileged Korean (the -E Korean group). This finding builds on previous evidence of cross-language differences in CP (e.g., Winawer et al., 2007), showing that such effects extend to the domain of spatial relations. Although lateralized CP is not always observed, at least for color (e.g., Witzel & Gegenfurtner, 2011), cases in which it is observed have been interpreted as reflecting an influence of categories on nonlinguistic perceptual processing (Holmes & Wolff, 2012; Roberson et al., 2008)—possibly mediated by linguistic representations in the left hemisphere (Gilbert et al., 2006, 2008). Therefore, the cross-language difference in lateralized CP observed here for the support/nonsupport distinction suggests—contrary to previous claims (Munnich et al., 2001)—that support/nonsupport is not universally available in nonlinguistic cognitive tasks like our visual search task. Rather, its availability appears to depend in part on whether this distinction is marked obligatorily by the basic spatial terms of one’s native language.

The apparent discrepancy between our findings and those of Munnich et al. (2001) may be driven largely by task differences: our task required perceptual discrimination of lateralized stimuli presented simultaneously, rather than same/different judgments for nonlateralized stimuli presented across a delay. Future research might directly compare the two tasks within the same language groups to determine which procedural aspects are critical for yielding cross-language differences. Despite the contrasting findings, our results nonetheless converge with those of Munnich et al. (2001) in showing that support/nonsupport is cognitively accessible under certain conditions even when one’s native language does not mark the distinction obligatorily—at least in bilinguals whose second language does so.4 Our third group of participants (the +E Korean group)—native Korean speakers who were relatively early English learners and tested in an English-salient context—patterned with the native English speakers, and with the Korean speakers in Munnich et al.’s study, in showing sensitivity to support/nonsupport. Munnich et al.’s Korean participants were late English learners and viewed task instructions in Korean (like our -E Korean group) but were tested by an English-speaking experimenter (like our +E Korean group; E. Munnich, personal communication, August 13, 2016). This raises the possibility that an English-salient testing context may be sufficient to yield sensitivity to support/nonsupport in Korean–English bilinguals—consistent with other work showing effects of the ambient language on cognitive processes (Athanasopoulos et al., 2015; Fuhrman et al., 2011).

In conclusion, our findings together with those of Munnich et al. (2001) suggest that the salience of categories in nonlinguistic cognition is not dictated by one’s native language in an all-or-none fashion but instead can depend on aspects of language background and the immediate linguistic context. At the same time, importantly, our findings are unique in demonstrating that the psychological salience of even a seemingly foundational property of the spatial world can be susceptible to the influence of one’s native language, resulting in differences in spatial cognition across language groups.


  1.

    Japanese, like Korean, does not encode support/nonsupport obligatorily. In a separate experiment, Munnich et al. tested English and Japanese speakers on the same tasks, but the stimuli apparently did not depict unambiguous support relationships; some participants regarded the scenes as two-dimensional, and no cross-language difference in naming of support/nonsupport locations was found. Thus, Munnich et al.’s Japanese–English experiment cannot speak to whether such differences, where observed, correlate with nonlinguistic cognitive differences. Their Korean–English experiment, in contrast, does speak to this issue, which is why we focus on it.

  2.

    Following the general convention of the language-and-thought literature, we use the term categorical perception in a broad sense. The findings discussed here, and our own, indicate categorical influences on cognitive processes associated with perceptual discrimination, not necessarily on perception per se.

  3.

    We did not attempt to fully equate perceptual similarity between scenes because the stimuli differ on multiple dimensions for which there is no common metric (i.e., contact, vertical distance, horizontal distance). However, we did not see this as essential because, given previous evidence of lateralized CP, we predicted an interaction, namely that visual search performance in the RVF would reflect the influence of support/nonsupport categories only in the English group (and perhaps the +E Korean group), over and above any effect of perceptual similarity, but that performance in the LVF would largely reflect perceptual similarity in all participants.

  4.

    Our -E Korean group showed no CP for support/nonsupport despite knowledge of English, perhaps in part because late English learners do not reliably distinguish support from nonsupport in semantic production and comprehension tasks (Munnich & Landau, 2010).


Author note

This research was supported by the National Science Foundation under Grants SBE-1206361 (KJH) and SBE-1041707 (TR). We thank Edward Munnich for helpful discussion and comments on the manuscript; Mark Saviano for statistical consultation; Caitlyn Brady, Alex Carstensen, and Seul Lee for assistance with data collection; and Geun Ho Ahn, Jongmin Jerome Baek, David Hong, Jaeho Kim, and Sang Hoon Maeng for translation and coding assistance.

References

  1. Athanasopoulos, P., Bylund, E., Montero-Melis, G., Damjanovic, L., Schartner, A., Kibbe, A., ... Thierry, G. (2015). Two languages, two minds: Flexible cognitive processing driven by language of operation. Psychological Science, 26, 518–526.
  2. Brown, A. M., Lindsey, D. T., & Guckes, K. M. (2011). Color names, color categories, and color-cued visual search: Sometimes, color perception is not categorical. Journal of Vision, 11. doi:10.1167/11.12.2
  3. Fuhrman, O., McCormick, K., Chen, E., Jiang, H., Shu, D., Mao, S., & Boroditsky, L. (2011). How linguistic and cultural forces shape conceptions of time: English and Mandarin time in 3D. Cognitive Science, 35, 1305–1328.
  4. Gilbert, A. L., Regier, T., Kay, P., & Ivry, R. B. (2006). Whorf hypothesis is supported in the right visual field but not the left. Proceedings of the National Academy of Sciences, 103, 489–494.
  5. Gilbert, A. L., Regier, T., Kay, P., & Ivry, R. B. (2008). Support for lateralization of the Whorf effect beyond the realm of color discrimination. Brain and Language, 105, 91–98.
  6. Holmes, K. J., & Regier, T. (2016). Categorical perception beyond the basic level: The case of warm and cool colors. Cognitive Science. doi:10.1111/cogs.12393
  7. Holmes, K. J., & Wolff, P. (2012). Does categorical perception in the left hemisphere depend on language? Journal of Experimental Psychology: General, 141, 439–443.
  8. Kosslyn, S. M., Koenig, O., Barrett, A., Cave, C. B., Tang, J., & Gabrieli, J. D. E. (1989). Evidence for two types of spatial representations: Hemispheric specialization for categorical and coordinate relations. Journal of Experimental Psychology: Human Perception and Performance, 15, 723–735.
  9. Malt, B. C., Gennari, S., Imai, M., Ameel, E., Tsuda, N., & Majid, A. (2008). Talking about walking: Biomechanics and the language of locomotion. Psychological Science, 19, 232–240.
  10. Malt, B. C., & Majid, A. (2013). How thought is mapped into words. Wiley Interdisciplinary Reviews: Cognitive Science, 4, 583–597.
  11. Munnich, E., & Landau, B. (2010). Developmental decline in the acquisition of spatial language. Language Learning and Development, 6, 32–59.
  12. Munnich, E., Landau, B., & Dosher, B. A. (2001). Spatial language and spatial representation: A cross-linguistic comparison. Cognition, 81, 171–208.
  13. Regier, T., Kay, P., & Khetarpal, N. (2007). Color naming reflects optimal partitions of color space. Proceedings of the National Academy of Sciences, 104, 1436–1441.
  14. Roberson, D., Pak, H., & Hanley, J. R. (2008). Categorical perception of colour in the left and right visual field is verbally mediated: Evidence from Korean. Cognition, 107, 752–762.
  15. Whorf, B. L. (1956). Language, thought, and reality. Cambridge: MIT Press.
  16. Winawer, J., Witthoft, N., Frank, M. C., Wu, L., Wade, A. R., & Boroditsky, L. (2007). Russian blues reveal effects of language on color discrimination. Proceedings of the National Academy of Sciences, 104, 7780–7785.
  17. Witzel, C., & Gegenfurtner, K. R. (2011). Is there a lateralized category effect for color? Journal of Vision, 11. doi:10.1167/11.12.16
  18. Wolff, P., & Holmes, K. J. (2011). Linguistic relativity. Wiley Interdisciplinary Reviews: Cognitive Science, 2, 253–265.

Copyright information

© Psychonomic Society, Inc. 2017

Authors and Affiliations

  1. Colorado College, Colorado Springs, USA
  2. Department of Psychology, Lehigh University, Bethlehem, PA, USA
  3. Department of Linguistics and Cognitive Science Program, University of California, Berkeley, Berkeley, USA
