Abstract
Spatial orientation is a complex ability that emerges from the interaction of several systems in a way that is still unclear. One of the reasons limiting research on the topic is the lack of methodologies for studying multimodal psychophysics in an ecological manner and with affordable setups. Virtual reality can provide a workaround to this impasse by using virtual stimuli rather than real ones. However, the available virtual reality development platforms are not meant for psychophysical testing; therefore, using them as such can be very difficult for newcomers, especially those new to coding. For this reason, we developed SALLO, the Suite for the Assessment of Low-Level cues on Orientation, a suite of utilities that simplifies assessing the psychophysics of multimodal spatial orientation in virtual reality. Its tools cover all the fundamental steps of designing a psychophysical experiment. In addition, dedicated tracks guide users in extending the suite's components to simplify developing new experiments. An experimental use case employed SALLO and virtual reality to show that head posture affects both the egocentric and the allocentric mental representations of spatial orientation. This use case demonstrates how SALLO and virtual reality can accelerate hypothesis testing in the psychophysics of spatial orientation and, more broadly, how the community of researchers in the field may benefit from such a tool in carrying out their investigations.
Introduction
The ability to accurately and precisely understand the orientation of surrounding items with respect to oneself and to the other items in space is fundamental. For example, a monkey escaping from a tiger, in order to maximize its survival chances, needs to understand the direction from which the tiger is approaching and identify the tree that is closest to itself and, at the same time, farthest from the predator. This fundamental ability is computationally intensive. As a matter of fact, it remains controversial how the sense of spatial orientation depends on the multiple low-level, sensori-motor cues that compose it (Berthoz & Viaud-Delmon, 1999; Epstein et al., 2017; Grieves & Jeffery, 2017). One of the obstacles to comprehending such links is the lack of methodologies to investigate complex percepts emerging from the interaction of multiple sensory and motor cues. Indeed, in psychophysics, the branch of psychology that aims to map physical stimuli to percepts (Gescheider, 2013), the effect under investigation is typically isolated to limit confounding factors. In a typical psychophysical experiment, participants must maintain a specific body position, and their movements are often constrained. The stimuli used are simple and delivered using precise and accurate devices that give experimenters control over the physical dimension under investigation (Kingdom & Prins, 2016). This approach is powerful because it clearly outlines a mathematical relationship between stimulus and percept; indeed, it was born to map the perception of simple features, such as light intensity, mechanical pressure, or sound frequency (Stevens, 1960).
However, the experimental settings used in psychophysics are oversimplified compared to real-life conditions (De Gelder & Bertelson, 2003): as the percepts under investigation depart from the primary sensations, the stimulus-perception maps found in the lab may differ from those of everyday life's dynamic and multisensory world; that is, control over the stimulus properties takes precedence over the results' ecological validity (Holleman et al., 2020; Loomis et al., 1999). To overcome the limitations of classical, unimodal psychophysical paradigms, researchers in the field of spatial orientation have proposed experimental paradigms based on increasingly complex technological solutions. Some examples of experimental settings and devices used in this research domain are roto-translational chairs (Butler et al., 2010; Zanchi et al., 2022) and treadmills (Frissen et al., 2011), optical motion tracking systems (Kolarik et al., 2016), anechoic chambers (Zahorik et al., 1995), wall-sized speaker arrays (Populin, 2008), and robotic manipulanda (Volpe et al., 2009). For example, Barnett-Cowan and colleagues developed a 3D motion simulator consisting of a seat mounted to the flange of a modified KUKA anthropomorphic robot arm to study the contribution of visual and vestibulo-kinaesthetic cues to the perception of self-motion along different axes (Barnett-Cowan et al., 2012). Whereas these complex tools provide researchers with precise control over multiple cues and return accurate measures, they are expensive and bulky, and some require dedicated rooms; as a result, only a few institutions can afford them. Virtual reality (VR) offers an alternative route to increase ecological validity without losing control over the stimulation delivered: it uses known psychophysical laws to simulate the presence of items in space.
For example, to acoustically simulate the presence of a virtual object at a given angle, a VR application would deliver the sound via headphones with specific time and intensity differences between the two channels: the same interaural time difference (ITD) and interaural intensity, or level, difference (ILD) that a physical object placed at that angle would have elicited (Middlebrooks & Green, 1991). This way, the complexity of controlling the physical stimuli disappears, and the cost decreases. VR has been employed in behavioral experiments for the last two decades (Cogné et al., 2017; Loomis et al., 1999), and because of its versatility, it has been recognized as a valuable tool to study the sensori-motor system at potentially any processing level: from early-stage sensory processing (Cogné et al., 2017; Parseihian & Katz, 2012; Zanchi et al., 2022) to sensori-motor interactions (Esposito et al., 2021a, b, 2023), up to high-level representations such as memory and emotions (Cogné et al., 2017; Rus-Calafell et al., 2018). However, so far, VR has scarcely been used to extend the range of possible psychophysics-based experimental designs; it has been used much more to simulate realistic scenarios and study high-level cognitive abilities such as spatial memory and spatial navigation strategies (see Cogné et al., 2017, and Montana et al., 2019, for recent reviews). As these reviews report, the virtual stimuli commonly used are virtual environments such as mazes, streets, or rooms, which the user can freely navigate. Although more straightforward and controlled than natural environments, these virtual environments are still topographically and geometrically rich. Therefore, it is difficult to assess the contribution of single cues to the spatial ability under investigation, as a psychophysical assessment would require.
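The azimuth-to-ITD mapping mentioned above can be sketched with Woodworth's classic spherical-head approximation. Note that this formula, the head radius, and the function name below are illustrative assumptions for exposition, not part of any VR engine's actual spatializer (real spatializers typically use measured HRTFs):

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Far-field interaural time difference (in seconds) for a source at
    the given azimuth (0 deg = straight ahead, positive = to the right),
    using Woodworth's spherical-head approximation:
        ITD = (a / c) * (sin(theta) + theta)
    with a = head radius and c = speed of sound."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (math.sin(theta) + theta)

# A source 45 deg to the right arrives roughly 0.4 ms earlier at the
# nearer ear; the sign flips for sources on the left.
itd_right = woodworth_itd(45.0)
```

The ILD, by contrast, is strongly frequency-dependent (head shadowing mainly attenuates high frequencies), which is why binaural spatializers implement it via filtering rather than a single closed-form offset.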
One reason why VR has not been broadly adopted to study the psychophysics of spatial orientation may be that the graphics engines used to develop VR applications, such as Unity (Unity Technologies, 2019e) and Unreal (Epic Games, 2022), are not built to develop scientific experiments but rather to be general-purpose tools (Unity Technologies, 2019h). This makes the development of scientific experiments with stock graphics engines somewhat inefficient (de la Rosa & Breidt, 2018). Some plugins have been developed to simplify designing experiments with graphics engines. One example is the "Unity Experiment Framework" (UXF) (Brookes et al., 2020), a Unity plugin that modifies Unity's native life cycle to make it an iteration of blocks and trials (Fig. 1) and provides other handy tools for scientific investigation, such as tools to track items and save data. Another example is the "BiomotionLab Toolkit for Unity Experiments" (bmlTUX) (Bebko & Troje, 2020), which provides a simple graphical user interface to design experiments in terms of variable entry, trial order, counterbalancing, randomization, and blocking. Such plugins try to reshape Unity to simplify the development of generic behavioral experiments, but they do not focus on psychophysics, let alone the psychophysics of spatial orientation. To date, no packages provide tools focused on psychophysics (e.g., template stimuli, tasks, psychophysical methods) and on the study of spatial orientation from a low-level perspective. For this reason, we developed a Unity package tailored to the psychophysical assessment of sensori-motor cues' effects on spatial orientation, called the "Suite for the Assessment of Low-Level cues on Orientation" (SALLO). SALLO is a suite of utilities that gives experimenters control over participant positioning, audio-visual stimuli selection, stimuli delivery, and response methods (forced choices, pointing, and so on).
It aims at simplifying psychophysical testing and results' replicability by employing VR. Figure 2 describes the added value of SALLO with respect to the existing tools for behavioral testing in VR. In the following, the methodological aspects of the suite will be discussed, and a validation experiment will be presented to demonstrate the utility and versatility of SALLO.
Methodology
SALLO is a suite of tools that simplifies designing psychophysical assessments about spatial orientation in the virtual space of Unity. It was developed following the guidelines Kingdom and Prins drew to design a psychophysical experiment (Kingdom & Prins, 2016). In their book, the authors suggest that a psychophysical experiment comprises five separate elements: stimulus, task, method, analysis, and measure. Taking inspiration from Kingdom and Prins' guidelines, the tools in SALLO focus independently on the stimulus, the task, and the psychophysical method. Moreover, since SALLO focuses specifically on spatial orientation, it also includes tools for the spatial positioning of stimuli and participants. Finally, since SALLO focuses on the psychophysical experiment execution rather than the performance evaluation, it does not contain tools for analysis or measurement.
The SALLO back-end
SALLO contains two types of tools: Unity Components (Unity—Manual: Introduction to Components, 2019b) and GameObject prefabs (Unity—Manual: Prefabs, 2019c). In Unity, GameObjects are the basic entities populating the virtual space (Unity—Manual: GameObjects, 2019a). They must have a position and orientation in the virtual space and can contain other GameObjects, thus working as local reference frames. A GameObject that contains other GameObjects is called "Parent" of the latter; in turn, the contained GameObjects are called "Children" of the former. Components are the entities that add functionality to the GameObjects (Unity—Manual: Introduction to Components, 2019b). Some examples of Components are rendering meshes, materials, audio players, or even custom C# classes. GameObject prefabs are preformatted GameObjects, each with their pool of Components and children GameObjects (Unity—Manual: Prefabs, 2019c). They are stored in dedicated files and can be instantiated in the virtual space whenever needed. The following sections will introduce the tools in SALLO, with sub-sections dedicated to each element of the psychophysical experiment.
Stimulus
Although Unity offers all the tools to create any stimulus, those tools are not developed for psychophysics; therefore, they can be hard to use for researchers who are new to Unity. Let us take as examples two very common stimuli employed in psychophysics: a Gaussian blob and a stream of white noise. One way to create a Gaussian blob in Unity's 3D environment from scratch is to: create a GameObject hierarchy with an empty GameObject as root and a sphere GameObject and a quad GameObject as children; set the sphere's "material" Component properties (color, light emission, light reflection, texture, etc.) according to one's needs; place the quad between the sphere and the observer's point of view; and change the quad's shader to a custom one that implements the Gaussian blur via code. To create a stream of white noise from scratch, instead, one needs to: create a GameObject with an "AudioSource" Component; and create a custom Component that fills the audio buffer with random numbers every time the system requests access to it. Self-implementing both these examples requires a good understanding of Unity's audio-visual rendering workflow and a basic understanding of object-oriented programming. SALLO aims to reduce these minimum skill requirements by providing a sample audio-visual stimulus with desirable features for studying audio-visual cross-modal effects in space. In fact, the basic stimulus that SALLO provides is a GameObject rendered as a light grey, blurred sphere emitting spatialized white noise generated in real time (Fig. 3A). The stimulus dimension is arbitrarily set at 1 Unity unit, the basic measurement unit for distance in the virtual space. This value is arbitrary because the actual stimulus dimension depends on the distance from the observer's virtual point of view, given the visual field size.
The visual stimulus is blurred through a virtual surface placed between the stimulus and the observer, whose material was rendered with a custom shader (Unity Technologies, 2019f) that does not support single-pass stereo rendering (Unity Technologies, 2019g); therefore, multi-pass stereo rendering must be used (Unity Technologies, 2019g). The acoustic stimulus can be spatialized with any audio spatializer plugin. Two examples are the "Resonance Audio" plugin (Google, 2018) and the Unity wrapper for the "3D Tune-In" Toolkit (Cuevas-Rodríguez et al., 2019), an open-source library for real-time binaural spatialization.
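The real-time white-noise generation step described above (which in Unity would live in a custom C# Component's OnAudioFilterRead audio callback) can be reduced to filling an interleaved sample buffer with random values. The following is a minimal Python sketch under that assumption; the function name and gain are illustrative, not SALLO's actual code:

```python
import random

def fill_white_noise(buffer_size, channels=2, gain=0.25):
    """Fill an interleaved audio buffer with uniform white noise,
    analogous to what a custom Unity Component would do inside
    OnAudioFilterRead. Samples stay within [-1, 1], the range
    Unity's audio engine expects; `gain` scales the loudness."""
    return [gain * random.uniform(-1.0, 1.0)
            for _ in range(buffer_size * channels)]

# One stereo buffer of 1024 frames (2048 interleaved samples).
buffer = fill_white_noise(1024)
```

Regenerating the buffer on every callback is what makes the noise a continuous stream rather than a looped clip.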
Task
The next challenge in the design of a psychophysical experiment is deciding what task to use. To the best of our knowledge, no packages provide tools to simplify, or templates and guidelines to standardize, the development of a psychophysical task in Unity. SALLO fills this gap by providing those tools. Their design revolves around the argument that most psychophysical tasks for assessing spatial orientation share a common structure. We propose three features common to any generic psychophysical task for spatial orientation assessment: (i) the task delivers a set of stimuli (one or more) with a specific spatio-temporal structure; (ii) the task requires a trigger for the stimuli delivery (e.g., the previous trial's end or the participant reaching a specific orientation); (iii) the task requires an answer from the participant (e.g., head-pointing or 2AFC). Following these points, we defined a C# abstract class (i.e., a class that cannot be instantiated per se) called "Task", from which any other class must inherit to implement specific behaviors. In the following, the name "Task X" will identify a generic class derived from the "Task" base class. The "Task" class implements the methods for perceptual channel selection and forces every "Task X" class to implement the three common points mentioned above. Each "Task X" class then implements its specific behavior.
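The base-class contract described above, three abstract members that every concrete task must provide, can be sketched as follows. SALLO's actual base class is written in C#, so this Python sketch only illustrates the inheritance scheme, and all names other than "Task" are hypothetical:

```python
from abc import ABC, abstractmethod

class Task(ABC):
    """Sketch of the "Task" base-class contract: every concrete task
    must define how stimuli are delivered, what triggers the delivery,
    and how the participant's answer is collected."""

    def __init__(self, channel="audio-visual"):
        # Perceptual channel selection is handled by the base class.
        self.channel = channel

    @abstractmethod
    def deliver_stimuli(self):
        """Present the stimulus set with its spatio-temporal structure."""

    @abstractmethod
    def on_trigger(self):
        """React to the delivery trigger (e.g., head at the target angle)."""

    @abstractmethod
    def collect_answer(self):
        """Gather the participant's response (e.g., a 2AFC choice)."""

class LeftRightDiscrimination(Task):
    """Hypothetical "Task X": one stimulus, one left/right answer."""
    def deliver_stimuli(self):
        return ["test stimulus"]
    def on_trigger(self):
        return True
    def collect_answer(self):
        return "right"
```

Attempting to instantiate the abstract base directly fails, which is the mechanism forcing each "Task X" class to implement the three common points.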
The "Task X" class defines the task logic, but it also needs a body to exist in the virtual environment. Therefore, SALLO requires the creation of a "Task X" GameObject prefab for each "Task X" class, with the corresponding "Task X" class assigned as a Component (Fig. 3B). SALLO already includes GameObject prefabs for some psychophysical tasks: localization (Zimmermann, 2021), repositioning (Roren et al., 2009), left–right discrimination (Lewald et al., 2000), and space bisection (Gori et al., 2014). We designed the SALLO tools presented in this subsection with the aim of standardizing the development of psychophysical tasks. Indeed, following the scheme summarized in Fig. 3C, experimenters can create their custom tasks and eventually share them with the community for reuse.
Psychophysical method
The third aspect SALLO handles is the psychophysical method. In the last century and a half, many methods have been developed, each with peculiar pros, cons, and best use cases (Kingdom & Prins, 2016). Typically, those methods are divided into non-adaptive and adaptive (Aleci, 2021). As the name suggests, non-adaptive methods choose the value of the feature of interest from a predefined pool unaffected by the participant's previous answers. Adaptive methods, instead, select the value of the feature of interest according to the participant's previous responses. In principle, adaptive methods require fewer trials than non-adaptive methods but make more assumptions about the percept's psychometric properties (Aleci, 2021). SALLO includes one method per category: the method of constant stimuli among the non-adaptive procedures (Kingdom & Prins, 2016) and the QUEST method among the adaptive ones (Watson & Pelli, 1983). The method of constant stimuli was chosen because it is the most accurate non-adaptive method (Gescheider, 2013); the QUEST method was chosen because it is efficient (Watson & Fitzhugh, 1990). SALLO implements the desired psychophysical methods as classes derived from the abstract C# base class "PsyMethod". This class contains the list of desired values to test, the list of repetitions for each value, and a method to extract a randomized sequence of trials based on the desired values and repetitions. The constant stimuli method's implementation is the "ConstantStimuli" class, derived from the "PsyMethod" class without any addition. The QUEST method's implementation that SALLO includes relies on a set of dedicated classes involving multiple programming languages. The QUEST algorithm comes from the "VisionEgg" Python 2.7 package (Straw, 2008) and runs in Unity thanks to the Unity extension "Python for Unity" 2.1.1 (Unity Technologies, 2019d).
The session-specific instance of the QUEST algorithm runs in a separate Python thread, and the C# class "pyQuest" works as a Unity-Python interface: it queries the Python thread and translates the obtained values from Python to C#. We used the Python code for the QUEST algorithm and implemented the Python-Unity C# interface because we could not find any C# open-source implementation.
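The non-adaptive half of this design is simple enough to sketch: pair each desired value with its repetition count and shuffle the resulting list into a trial sequence. The class and method names below are illustrative Python, not SALLO's actual C# API:

```python
import random

class ConstantStimuli:
    """Sketch of a constant-stimuli method: the values to test and
    their per-value repetition counts are fixed in advance, and the
    trial sequence is a random permutation of all of them."""

    def __init__(self, values, repetitions):
        self.values = values            # e.g., stimulus angles in degrees
        self.repetitions = repetitions  # repetitions per value

    def trial_sequence(self, seed=None):
        """Return a randomized list of trial values; a seed makes the
        shuffle reproducible across sessions."""
        trials = [v for v, n in zip(self.values, self.repetitions)
                  for _ in range(n)]
        random.Random(seed).shuffle(trials)
        return trials

method = ConstantStimuli(values=[-20, -10, 0, 10, 20],
                         repetitions=[10, 10, 10, 10, 10])
sequence = method.trial_sequence(seed=1)  # 50 shuffled trials
```

An adaptive method such as QUEST would instead expose a query/update pair: query the next most informative value, then update the posterior with the participant's response.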
Positioning
The features described previously are minimal for a generic task to work. However, since the SALLO suite focuses on spatial orientation, it must also consider the spatial properties in the experimental design. SALLO includes object-related and observer-related spatial properties that let the experimenters track and guide the position of the entities in the virtual environment without additional code, with specific features tailored to the entities’ types.
Observer-related spatial properties
SALLO includes tools to track, guide, and react to the observer's movements. The tool to track the observer's movements is the "PositionWatcher" Component. It keeps track of the observer's orientation and signals if the observer exits a given range. Moreover, "PositionWatcher" partners with another Component, "PitchController", to provide acoustic pitch-based feedback when the experimental design requires the participant to hold a specific orientation. The visual feedback for participant orientation guidance, in turn, is a GameObject pointer in the form of a dark grey rectangle, slightly larger than the field of view, with a red dot in the middle. Every "Task X" GameObject contains a GameObject pointer and has the "PositionWatcher" and the "PitchController" Components attached (Fig. 3B).
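One plausible way a "PitchController"-style Component could sonify head orientation is a linear ramp that peaks when the angular error is zero. The mapping and constants below are illustrative assumptions, not SALLO's actual implementation:

```python
def feedback_pitch(angular_error_deg, base_hz=220.0, peak_hz=880.0,
                   max_error_deg=90.0):
    """Map the absolute angular distance from the target orientation to
    a feedback pitch: highest when on target, falling linearly to a base
    pitch at (or beyond) max_error_deg. All constants are illustrative."""
    error = min(abs(angular_error_deg), max_error_deg)
    return peak_hz - (peak_hz - base_hz) * (error / max_error_deg)

# On target the tone is highest; far off target it sits at the base pitch.
on_target = feedback_pitch(0.0)
far_off = feedback_pitch(90.0)
```

A monotonic error-to-pitch mapping like this lets participants self-correct their head orientation without any visual input, which matters for the purely acoustic conditions.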
Object-related spatial properties
SALLO offers several tools for GameObject placement in the horizontal plane. Those GameObjects can be stimuli, "Task X" GameObjects, or other GameObjects used as reference frames. To cope with these three GameObject types' different requirements, SALLO treats them hierarchically according to their level of aggregation, that is, the number of different GameObject types (i.e., stimuli, tasks, reference frames) the GameObject of interest can contain. The simplest GameObject is the stimulus, which should not contain any other GameObject type. The "Task X" GameObject follows, since it can contain several stimuli. The reference frames close the hierarchy, since they can contain a set of tasks or even a set of other reference frames. SALLO includes three Components, each dedicated to a specific level of this hierarchy.
The first Component is the class "CylindricalCoordinates"; it defines a GameObject's position in terms of cylindrical coordinates (radius, angle, and elevation) instead of the Cartesian coordinates (length, width, and elevation) that Unity uses by default. "CylindricalCoordinates" encodes spatial orientation directly as an angle. Moreover, it contains a method to compute the stimulus radius in the virtual space according to the desired stimulus visual angle. Since every virtual item can benefit from using cylindrical coordinates instead of Cartesian ones, "CylindricalCoordinates" is useful at every aggregation level.
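The two computations described above, the cylindrical-to-Cartesian conversion and the visual-angle-based radius, can be sketched as follows. The axis convention is an assumption based on Unity's left-handed frame (x right, y up, z forward), and the function names are illustrative:

```python
import math

def cylindrical_to_cartesian(radius, angle_deg, elevation):
    """Convert cylindrical (radius, angle, elevation) to Unity-style
    Cartesian coordinates, with the angle measured in the horizontal
    plane from straight ahead (0 deg = forward, positive = rightward)."""
    theta = math.radians(angle_deg)
    x = radius * math.sin(theta)  # Unity's x axis: right
    y = elevation                 # Unity's y axis: up
    z = radius * math.cos(theta)  # Unity's z axis: forward
    return x, y, z

def radius_for_visual_angle(stimulus_diameter, visual_angle_deg):
    """Distance at which a stimulus of the given diameter subtends the
    desired visual angle: r = d / (2 * tan(alpha / 2))."""
    half_angle = math.radians(visual_angle_deg) / 2.0
    return stimulus_diameter / (2.0 * math.tan(half_angle))
```

For instance, the 1-Unity-unit default stimulus would need to sit roughly 1.87 units from the observer's point of view to subtend 30° of visual angle.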
The second component is the class "ArrayPlacer"; it handles the spatial relationships among the items of a GameObject array, such as the angular distance among the array items or a common offset. With this tool, developers can control the relative position of, for example, the stimuli within a “Task X” GameObject or the multiple reference frames that the “Task X” GameObject can have.
The third component is the class "Houser"; it simplifies changing the GameObjects’ Parent GameObject among a set of available ones. It is useful in experiments with multiple reference frames to switch among them easily.
Figure 4 schematically highlights the positioning system's hierarchical structure and the dedicated tools.
The SALLO front-end
The current section shows how to program an experiment with SALLO effectively and what it looks like. Notice that SALLO is a suite of tools that helps design psychophysical experiments, not a standalone tool to run them; therefore, the final interface depends on the experiment-running tool used. SALLO uses an event-related paradigm; therefore, it is virtually compatible with every experiment-running tool for Unity that employs events. However, SALLO was developed using UXF as the experiment-running tool; therefore, this section shows how to use SALLO with UXF.
The experimental design is defined entirely in the UXF experiment settings file. There are specific settings that every SALLO experiment requires, and they rule the experimental task in use, the sensory channel stimulated, the stimuli's temporal and spatial properties, the psychophysical method in use, and so on. The comprehensive settings list is reported in Table 1, with the required data type and an explanation for each setting variable. After choosing the proper values for the experiment settings, the experiment session follows the UXF session structure, which divides the session into blocks and each block into trials. The SALLO suite shapes the trials and blocks, introducing critical steps related to the stimuli delivery. The complete flowchart of a SALLO-UXF experiment is reported in Fig. 5.
The SALLO package contains the source code to run a sample experiment based on SALLO and UXF. The files provided with the sample experiment are the packages used (SALLO and UXF), the experiment settings files, the additional scripts used to set up and control the experiment flow, all the additional Unity files used in the Unity scene, and the Unity scene itself. This set of files is what is needed to ensure the study's reproducibility. In Unity, it can easily be exported as a unitypackage file and shared together with the article.
Experimental use-case
The next section aims to showcase how researchers in the field of psychophysics may benefit from SALLO and VR to investigate the interactions among the multiple cues that shape the sense of spatial orientation. To do so, we describe an experimental use case that employs audiovisual stimuli whose position spans the whole frontal hemifield and depends on the head orientation. Performing such an experiment with physical stimuli would require a large screen, multiple screens, or a small movable one, paired with a large speaker array or a small movable one. Such an apparatus is hardly portable; in fact, it may even require a dedicated room (Lewald et al., 2009; Populin, 2008). As this section will show, using VR made the whole setup much more portable, simpler in terms of hardware, and likely cheaper. The code used to run this experimental use case is in the unitypackage file provided with this article's supplementary materials for reproducibility.
Background
The human brain represents information concerning spatial orientation in multiple ways, typically divided into egocentric and allocentric representations (Klatzky, 1998). Egocentric representations encode spatial information with respect to the observer's point of view, e.g., "the car is on my right". Allocentric representations encode spatial information regardless of the observer's point of view, e.g., "the car is between the bicycle and the bus stop". Despite the allocentric representations' ideal independence from the observer, it has been shown that bodily inputs such as vestibulo-proprioceptive cues and motor efferent copies are involved in their neural computation (Ferrè et al., 2021; Lackner & DiZio, 2005; Laurens & Angelaki, 2018; Roncesvalles et al., 2005; Winter & Taube, 2014); therefore, the body posture, an inherently egocentric cue, may affect the estimation of allocentric representations as well. At the same time, it is unclear if the interaction between spatial reasoning and body posture differs when the spatial information is perceived via different sensory channels, that is, via vision or hearing (Cui et al., 2010; Lewald et al., 2009). The present study aimed to address both these open questions, focusing specifically on one effect: the distortion in the sense of spatial orientation arising from the orientation of the head on the trunk (Garcia et al., 2017; Lackner, 1974; Lewald et al., 1998, 2000; Odegaard et al., 2015; Schicke et al., 2002). This effect was chosen because it is consistent, and it has been replicated with a multitude of psychophysical tasks, such as head-pointing, pointer alignment, verbal reports, and so on (Garcia et al., 2017; Lackner, 1974; Lewald et al., 1998, 2000; Odegaard et al., 2015; Schicke et al., 2002). 
One psychophysical task that has been used to expose this effect is left–right discrimination, where a stimulus appears at a given angle with respect to the physical median plane of the participant's head, and participants report whether they perceive the stimulus to the right or the left of their nose using a two-alternative forced-choice (2AFC) response pattern (Gescheider, 2013). This task has been used to investigate the head-on-trunk orientation-related distortion of spatial orientation in the auditory domain, using both dichotic (Lewald et al., 2000) and free-field (Lackner, 1974) listening. Doing the task with the head turned with respect to the trunk has been shown to shift the psychometric curve's point of subjective equality (PSE), which corresponds to the perceived head auditory median plane (HAMP) (Lackner, 1974; Lewald et al., 2000). The left–right discrimination task has never been employed in the visual domain to investigate the head-on-trunk orientation-related distortion of the perceived head visual median plane (HVMP). However, as there is evidence that the egocentric coordinates of hearing and vision can differ (Cui et al., 2010; Lewald et al., 2009), comparing how the head-on-trunk orientation affects the perceived median plane of the head (H*MP) depending on the perceptual modality involved can provide useful evidence to better understand the nature of such egocentric biases. In addition, the results obtained in the left–right discrimination task can be compared directly to those obtained in another task, the spatial bisection (Aggius-Vella et al., 2020; Amadeo et al., 2019; Gori et al., 2014; Rabini et al., 2019; Bertonati et al., 2023), which can be intended as its allocentric counterpart. The space bisection task consists of presenting three stimuli in sequence at monotonically increasing or decreasing angles and asking the participant which anchor stimulus, the rightmost or the leftmost, the second one was closer to.
It can be intended as the allocentric counterpart of the left–right discrimination task because it differs from the latter only in the reference defining left and right: oneself (egocentric reference) for the left–right discrimination, or the anchor stimuli (allocentric reference) for the space bisection. By testing participants on both the left–right discrimination and the space bisection tasks with different head-on-trunk orientations, using visual or acoustic stimuli, we investigated to what extent the head-on-trunk orientation effect depends on the spatial representation or perceptual modality involved. Specifically, in all the above-mentioned conditions, we compared the difference in PSE obtained when the participants' head was turned to the right or to the left. If the head-on-trunk orientation effect were perceptual modality-dependent, the differences in PSE obtained in the conditions with visual stimuli and those obtained with acoustic stimuli should differ. If the head-on-trunk orientation effect were spatial representation-dependent, the differences in PSE obtained in the left–right discrimination tasks and those obtained in the space bisection tasks should differ.
Materials and methods
Participants
In total, ten sighted individuals took part in the study (five males, five females; mean age 34.7 ± 1.79 years). Participants were enrolled through local contacts in Genova. Informed consent was obtained from all of them. The study followed the tenets of the Declaration of Helsinki and was approved by the ethics committee of the local health service (Comitato Etico, ASL 3, Genova).
Apparatus and setting
The study used Unity 2019 LTS on an Alienware 13 R3 laptop to run the experiment. An HTC Vive Pro head-mounted display (HMD) tracked the participant's head orientation and displacement in 3D and delivered the spatialized audio via integrated headphones. An XSENS MTw Awinda inertial measurement unit (IMU) (Paulich et al., 2018) tracked the participant's trunk orientation and linear acceleration in 3D. A backpack-like harness kept the IMU on the backbone at shoulder level. The experiment was performed in a dimly lit and silent room. Participants sat on a chair for the whole experiment and were instructed to keep their torsos away from the chair's backrest during experimental blocks. They held the HTC Vive Pro controllers, one per hand.
The study used UXF and SALLO together to run the experiment in Unity. The virtual stimulus was SALLO's default: a light grey blurred sphere emitting intermittent white noise generated at runtime. The sphere was blurred using a quad GameObject placed in front of the stimulus, whose shader implemented a Gaussian blur. The stimulus had the following physical properties: a diameter of 30°, an illuminance of 1.58 lx at eye level, and a volume of 70 dBSPL. A further audio-visual stimulus was used to guide the participants in self-adjusting their head orientation. The visual guide was a dark grey rectangle, slightly larger than the VR HMD's field of view (FoV), with a circular pointer in the center. It was placed in the virtual space such that the pointer matched the desired angle and the rectangle matched the head orientation in 3D: the orientation was correct if the dark grey rectangle covered the whole FoV. The background rectangle's illuminance was 1.05 lx at eye level. As the stimulus was displayed with the rectangle covering the FoV completely, the Weber contrast was computed as the contrast between the background rectangle illuminance and the stimulus illuminance, and it was 0.5. The acoustic guide was a metronome-like sound whose pitch varied according to the angular distance between the instantaneous head orientation and the desired one, peaking at 0° distance. The guidance sound was 6 dBSPL quieter than the stimulus. The visual rendering was performed using Unity's built-in rendering pipeline (Unity Technologies, 2019g). The "Resonance Audio" plugin rendered the audio spatialization via a non-individualized head-related transfer function (HRTF). See Appendix A for a more comprehensive description of the virtual stimuli's physical properties characterization.
The experimental design consisted of four tasks: left–right discrimination and space bisection, each in a visual and an acoustic version. They were implemented by parameterizing the UXF experiment settings file accordingly. The settings file content for the four tasks is reported in Fig. 6.
Experimental procedure
Apart from the stimulus modality, the stimulus sequence, and the question asked, the procedure used for the four SALLO-based experimental sessions was the same. Therefore, the common procedure is described first, and the task-specific aspects are described in separate paragraphs.
Common procedure
The experimenter explained the task, verbally described the sounds at play, and prepared the participant. Then, the participant completed a few familiarization trials to understand the trial structure; fewer than ten trials were enough in all cases. Each trial was structured as follows. A guidance stimulus helped the participants orient their heads towards the reference angle for that experimental block, θ. θ was the same for the whole experimental block; therefore, participants were instructed to try and keep their heads at that orientation for the whole block duration. The guidance stimulus disappeared 1 to 3 s after the participant's head entered the acceptability range θ ± 3°; the stimuli delivery sequence started 1 s later. The stimuli were in head-centered coordinates, and their position depended upon the QUEST psychophysical adaptive procedure (Watson & Pelli, 1983), which indicated an orientation for the test stimulus close to the most informative orientation. Meanwhile, a parallel thread checked that the head orientation stayed in the acceptability range θ ± 3° throughout the whole stimuli sequence delivery. The participant answered the task-related question by clicking the controller's trigger in the appropriate hand: left hand to answer left, right hand to answer right. No time constraints were placed upon the answer. After the participant answered, if they had kept their head in place during the stimuli sequence delivery, the trial parameters and results were saved, and the next trial started; otherwise, the current trial's results were discarded, and the trial was added back to the current block's trial queue. The inter-trial interval was 1 s. Figure 7 illustrates the common trial structure. The experimental session for each task consisted of two blocks: one with θ at −45° and one with θ at +45°. The blocks were counterbalanced among participants using partial randomization. A short break of no more than 5 min interspersed the experimental blocks and tasks.
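The trial-acceptance rule described above (discard and re-queue trials in which the head left the θ ± 3° range during stimulus delivery) can be sketched as follows; the function name and the per-sample yaw representation are illustrative, not SALLO's actual API:

```python
def trial_valid(head_yaw_samples, theta, tolerance=3.0):
    """A trial is kept only if every head-yaw sample (degrees) recorded
    during the stimuli sequence stayed within theta ± tolerance."""
    return all(abs(yaw - theta) <= tolerance for yaw in head_yaw_samples)

# Example: block reference angle θ = +45°, tolerance ±3°
assert trial_valid([44.1, 45.7, 46.9], theta=45.0)       # kept
assert not trial_valid([44.1, 48.5, 45.2], theta=45.0)   # 48.5° > 48° → re-queued
```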
The execution order of the four tasks was counterbalanced using partial randomization. The whole SALLO-based experiment lasted around 60 min, breaks included.
Left–right discrimination task
The left–right discrimination task consists of estimating on which side a virtual stimulus is perceived with respect to the head's median plane. Specifically, participants were instructed to answer "right" if they perceived the stimulus to the right of their nose and "left" otherwise. The virtual stimulus lasted 300 ms. This value was chosen because it is a good trade-off between duration and spatial precision for the acoustic modality (Middlebrooks & Green, 1991) that ensures a balance between vision and hearing. The stimulus position was chosen using the QUEST algorithm, which could sample from the range ±30° with a minimum step of 0.25°. Each block consisted of 40 trials. Of these, 4 were "catch trials" placed at −25° and +25° to ensure the task was performed correctly. The QUEST parameters were chosen based on results from pilot participants. The whole session lasted, on average, 10 min, familiarization and breaks included.
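The constraint that stimulus positions stay within ±30° at a 0.25° minimum step can be illustrated with a hypothetical post-processing of the adaptive procedure's suggestion (a sketch only; SALLO's actual QUEST implementation may handle this differently):

```python
def quantize_suggestion(angle, limit=30.0, step=0.25):
    """Clamp an adaptive-procedure suggestion (degrees) to the sampling
    range ±limit and snap it to the minimum step size."""
    clamped = max(-limit, min(limit, angle))
    return round(clamped / step) * step

print(quantize_suggestion(12.37))   # → 12.25
print(quantize_suggestion(-41.0))   # → -30.0
```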
Space bisection task
The space bisection task estimates the spatial relationship among three virtual stimuli appearing sequentially, each lying at a monotonically increasing or decreasing angle from the previous one. The first and the last stimuli, at the extreme positions, are the anchors; the second stimulus, lying between them, is the test one. Participants were instructed to answer "right" if they perceived the test stimulus closer to the right-most anchor and "left" otherwise. The virtual stimuli lasted 300 ms each, separated by 500-ms inter-stimulus intervals; the duration was chosen for consistency with the left–right discrimination task. The anchor stimuli were placed 50° from each other, the same value used in previous studies (Aggius-Vella et al., 2020; Amadeo et al., 2019; Gori et al., 2014; Rabini et al., 2019), and the pair could appear with the anchors' midpoint in the range ±30°, to make sure the three stimuli could all appear in the same hemispace and therefore counterbalance differences in left- and right-space perception (Jewell & McCourt, 2000). The test stimulus position was chosen using the QUEST algorithm, which could sample values in the range ±25° from the anchors' midpoint with a minimum step of 1.5°. Each block consisted of 55 trials. Of these, 4 were "catch trials" placed at −20° and +20° to ensure the task was feasible and correctly performed. The QUEST parameters were chosen based on results from pilot participants. The whole session lasted, on average, 20 min, familiarization and breaks included.
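One plausible reading of the trial geometry above, assuming the anchors' midpoint is drawn within ±30° of straight ahead, can be sketched as follows (function and parameter names are illustrative, not SALLO's):

```python
import random

def bisection_trial(test_offset, midpoint_range=30.0, anchor_span=50.0):
    """Place the two anchors 50° apart around a randomly drawn midpoint
    (within ±midpoint_range of straight ahead); the test stimulus lies at
    test_offset degrees from the anchors' midpoint (in the study, QUEST
    sampled this offset within ±25° at a 1.5° minimum step)."""
    midpoint = random.uniform(-midpoint_range, midpoint_range)
    left_anchor = midpoint - anchor_span / 2
    right_anchor = midpoint + anchor_span / 2
    return left_anchor, midpoint + test_offset, right_anchor

left, test, right = bisection_trial(test_offset=5.0)
assert abs(right - left - 50.0) < 1e-9 and left < test < right
```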
Data analysis
As stated above, the study aimed to investigate how the head-on-trunk orientation affects the PSEs of spatial tasks that use similar stimuli to probe different spatial representations through different perceptual modalities. The PSEs were computed as the medians of the psychometric curves fitted on the raw data for each task, perceptual modality, experimental block, and participant. The psychometric functions were fitted using a cumulative Gaussian (Kingdom & Prins, 2016), with the guess and lapse rates constrained to the [0, 0.1] interval and the PSE and JND as free parameters. The bootstrap method was used to estimate the PSEs' confidence intervals (Kingdom & Prins, 2016). The study compared the differences between the PSE obtained with the head turned to the right and the PSE obtained with the head turned to the left (∆PSE) in a given condition. The ∆PSE distributions from the different perceptual modalities and tasks were tested for normality with the Shapiro–Wilk test. Since the normality tests on the ∆PSE samples did not reach significance (i.e., normality could not be rejected), the ∆PSE comparisons were conducted using a repeated-measures ANOVA, accompanied by the partial eta squared (ηp2) as an estimate of standardized effect size. The post hoc analyses used the paired t test (t), accompanied by Cohen's d (d) as an estimate of standardized effect size. The psychometric curve fitting was performed in MATLAB r2020a (MATLAB, 2020) using a custom MATLAB function. The statistical analysis was performed in R (R Core Team, 2020).
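The PSE extraction can be illustrated with a self-contained sketch: fit a cumulative Gaussian to proportion-"right" responses by least-squares grid search and take the fitted mean μ (the curve's median) as the PSE. This is a Python illustration under simplifying assumptions, not the custom MATLAB routine used in the study; it omits the guess/lapse parameters and the bootstrap:

```python
from math import erf, sqrt

def cum_gauss(x, mu, sigma):
    """Cumulative Gaussian psychometric function."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

def fit_pse(angles, p_right):
    """Least-squares grid search over (mu, sigma); the PSE is the fitted
    mu, i.e. the median of the cumulative Gaussian."""
    best = None
    for mu10 in range(-100, 101):       # mu in [-10, 10] deg, 0.1° grid
        for sig10 in range(5, 151):     # sigma in [0.5, 15] deg, 0.1° grid
            mu, sigma = mu10 / 10, sig10 / 10
            err = sum((cum_gauss(x, mu, sigma) - p) ** 2
                      for x, p in zip(angles, p_right))
            if best is None or err < best[0]:
                best = (err, mu, sigma)
    return best[1], best[2]  # (PSE, spread parameter related to the JND)

# Synthetic data generated from a curve with PSE = 2° and sigma = 4°
xs = [-12, -8, -4, 0, 4, 8, 12]
ps = [cum_gauss(x, 2.0, 4.0) for x in xs]
pse, sigma = fit_pse(xs, ps)
print(pse, sigma)  # → 2.0 4.0
```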
Results
The study analyzed the ∆PSEs, that is, the difference between the PSEs obtained with the head turned 45° rightward (PSE+45) and those obtained with the head turned 45° leftward (PSE-45). Table 2 shows the individual PSE+45 and PSE-45 estimates and their standard error for each task and perceptual modality. Positive PSE+45 and PSE-45 values indicate rightward shifts with respect to the real median plane of the head; negative values indicate leftward shifts. ∆PSE values larger than zero indicate that the PSEs shifted away from each other. ∆PSE values smaller than zero indicate that the PSE shifted toward each other.
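The sign conventions stated above can be made explicit with a minimal helper (illustrative only, using hypothetical example values):

```python
def delta_pse(pse_plus45, pse_minus45):
    """ΔPSE = PSE(+45°) − PSE(−45°). Positive values: the two PSEs
    shifted away from each other; negative values: toward each other."""
    return pse_plus45 - pse_minus45

assert delta_pse(0.9, -0.6) > 0   # PSEs shifted away from each other
assert delta_pse(-0.8, 0.7) < 0   # PSEs shifted toward each other
```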
The ∆PSEs (Fig. 8) were analyzed via a repeated measures ANOVA test with “sense” and “task” as within-subject effects. The repeated measures ANOVA test was significant for the main effect “task”, F(1,9) = 9.108, ηp2 = 0.216, p = 0.015. It did not reach significance for the main effect “sense”, F(1,9) = 0.044, ηp2 = 0.002, p = 0.838, nor for the interaction effect “task:sense”, F(1,9) = 0.021, ηp2 = 0.001, p = 0.887.
The post hoc analysis was conducted on the tasks' "grand" ∆PSEs, computed as the average of the ∆PSEs that every individual obtained in each task's visual and acoustic modalities. It revealed that the ∆PSE distribution obtained in the left–right discrimination task (M = 0.898°, 95% CI [0.120, 1.677]) was significantly larger (t(9) = 3.018, d = 0.95, 95% CI [0.19, 1.79], p = 0.015) than the one obtained in the space bisection task (M = −1.51°, 95% CI [−3.010, −0.010]); that is, the PSEs shifted away from each other in the left–right discrimination and toward each other in the space bisection.
Discussion
The present study aimed to showcase the utility of the SALLO suite for designing and developing psychophysical experiments focused on spatial orientation in VR. To do so, the study used SALLO, UXF, Unity, a commercial VR headset, and an inertial sensor to implement the visual and the acoustic versions of the left–right discrimination and of the space bisection tasks. The left–right discrimination task aimed to replicate the head-on-trunk rotation-related distortion of the egocentric space in both the acoustic and the visual domains; the space bisection task aimed to extend the literature by investigating whether the head-on-trunk rotation could affect allocentric reasoning as well.
The study found a significant positive ∆PSE in the left–right discrimination task, thus replicating the head-on-trunk rotation-related distortion of the egocentric space previously reported (Lackner, 1974; Lackner & DiZio, 2005; Lewald, 2002; Lewald et al., 1998, 2000), without significant differences between perceptual modalities, suggesting that the process causing the egocentric space distortion takes place similarly for the acoustic and the visual modalities. The effect replication showed SALLO's reliability for conducting studies on the influence of low-level bodily cues on spatial encoding. That being said, in the left–right discrimination, the effect direction is not very informative per se, since the literature has reported distortion effects with different directions depending on the specificities of the paradigm in use, such as the acoustic stimulation method (free-field vs. dichotic listening; Lackner, 1974) or the kinematics of the head rotations (Lackner, 1974; Lackner & DiZio, 2005; Lewald et al., 2000). Such variability in the literature may depend on the different parameterizations of the same task affecting the bodily and the spatial cues differently. Unfortunately, the left–right discrimination alone does not reveal whether the effect found reflects a distortion of body perception or of space perception. Taking the positive ∆PSE observed here as an example, it can be explained by an underestimation of the stimuli's eccentricity with respect to the head, an overestimation of the perceived head-on-trunk orientation with respect to the stimuli, or both (Fig. 9). Further dedicated investigations are required to address how bodily and spatial cues contribute to the outcome of the left–right discrimination task.
The study found a significant negative ∆PSE in the space bisection task. This result confirmed our hypothesis that the allocentric representation is not "pure", that is, completely observer-independent (Filimon, 2015), but rather that it can be affected by body posture, as the neuroanatomical and neurofunctional evidence from the literature suggested (Ferrè et al., 2021; Lackner & DiZio, 2005; Laurens & Angelaki, 2018; Roncesvalles et al., 2005; Winter & Taube, 2014). In particular, the presence of an effect in the space bisection task supports the idea that the effect has an external origin (external space compression), as a somatosensory origin (head-on-trunk rotation overestimation) would have affected the external stimuli equally and therefore would not have affected their perceived relative distance. Moreover, the effect's negative direction in the space bisection is consistent with the positive direction in the left–right discrimination, as it can be attributed to a compression of the distance between the anchors driven by the underestimation of the peripheral anchor's eccentricity. While we could not find studies addressing similar effects concerning allocentric reasoning, the literature provides relevant evidence about the egocentric processing of peripheral stimuli: it has been shown that the eccentricity of both acoustic and visual stimuli can be underestimated, as long as the eccentricity is computed with respect to the body midline (Becker & Saglam, 2001; Esposito et al., 2023; Occhigrossi et al., 2021). In light of such literature, we propose that the effect we found in the space bisection reflects a two-stage process in which the external stimuli were first encoded in an external egocentric space, and the allocentric judgments were then performed based on the stimuli's positions in those egocentric coordinates.
This interpretation is in line with the research of Aggius-Vella and colleagues, who found that performance in the acoustic space bisection task changes depending on whether the stimuli are presented in the participants' front or back space (Aggius-Vella et al., 2018, 2020), suggesting that the allocentric space representation is in fact anisotropic in egocentric coordinates.
In conclusion, the study replicated the previous literature about the effect of head-on-trunk orientation on egocentric reasoning and showed that allocentric reasoning is also affected. Moreover, the effect directions suggested that the allocentric estimates may rely on an intermediate processing stage that encodes the objects’ position in space in egocentric coordinates. Other dedicated experiments are required to test the latter hypothesis, and in this regard, SALLO provides an optimal framework since it makes it easy for experimenters to change stimuli and tasks independently, thereby simplifying the decoupling of the somatosensory, external egocentric, and allocentric processing of low-level cues.
Conclusion
The study introduced SALLO, a suite of tools for the (multi-modal) psychophysical assessment of the effect of bodily cues on spatial orientation in auditory, visual, and audio-visual VR. It guides and simplifies the experimental paradigm design, providing utilities that, altogether, take care of all the necessary steps: stimulus choice, stimulus delivery, answer collection, and the management of spatial properties. SALLO guides experimenters in their VR-based psychophysical experimental design and saves them from implementing every step from scratch with general-purpose tools. An experimental use case demonstrated the reliability of SALLO-based experiments in probing the contribution of low-level bodily cues to spatial orientation and the utility of SALLO in this regard. It did so by replicating an effect well established in the literature: the distortion of the egocentric space due to the head-on-trunk orientation (Lackner, 1974; Lackner & DiZio, 2005; Lewald, 2002; Lewald et al., 2000). The experimental use case also contributed to the literature by showing that rotating the head affects the allocentric space as well. These results were obtained by performing different tasks with the same stimuli and setting, with only minimal differences in the software parameterization. By simplifying psychophysical testing in VR, SALLO proved itself a useful asset to rapidly prototype and run low-cost experiments that would otherwise require complex and expensive hardware. For this reason, it has the potential to speed up research on the contribution of low-level cues to spatial orientation. SALLO is an open-source project aiming to provide researchers with the tools needed to conduct their research more simply. To that aim, we plan to improve SALLO's core and extend the set of stimuli, tasks, and psychophysical methods, hopefully according to the needs of the future community of SALLO users and with their help.
Data availability
The raw and processed data sets and the statistical analysis code are openly available in the “Zenodo” repository at the URL https://doi.org/10.5281/zenodo.7152647. The experiment was not pre-registered.
Code availability
The code used to run the experimental use-case is openly available in the “Zenodo” repository at the URL https://doi.org/10.5281/zenodo.8060398. SALLO is openly available and maintained in the GitHub repository at the URL https://github.com/DavideSpot/SALLO.git.
References
Aggius-Vella, E., Campus, C., & Gori, M. (2018). Different audio spatial metric representation around the body. Scientific Reports, 8(1), 1–9. https://doi.org/10.1038/s41598-018-27370-9
Aggius-Vella, E., Kolarik, A. J., Gori, M., Cirstea, S., Campus, C., Moore, B. C. J., & Pardhan, S. (2020). Comparison of auditory spatial bisection and minimum audible angle in front, lateral, and back space. Scientific Reports, 10(1), 1–9. https://doi.org/10.1038/s41598-020-62983-z
Aleci, C. (2021). Chapter 7 Psychophysical procedures. In Measuring the soul (pp. 35–40). EDP Sciences. https://doi.org/10.1051/978-2-7598-2518-9.c009
Amadeo, M. B., Campus, C., & Gori, M. (2019). Impact of years of blindness on neural circuits underlying auditory spatial representation. NeuroImage, 191, 140–149. https://doi.org/10.1016/j.neuroimage.2019.01.073
Barnett-Cowan, M., Meilinger, T., Vidal, M., Teufel, H., & Bülthoff, H. H. (2012). MPI cybermotion simulator: Implementation of a novel motion simulator to investigate multisensory path integration in three dimensions. Journal of Visualized Experiments, 63. https://doi.org/10.3791/3436
Bebko, A. O., & Troje, N. F. (2020). bmlTUX: Design and control of experiments in virtual reality and beyond. I-Perception, 11(4). https://doi.org/10.1177/2041669520938400
Becker, W., & Saglam, H. (2001). Perception of angular head position during attempted alignment with eccentric visual objects. Experimental Brain Research, 138(2), 185–192. https://doi.org/10.1007/s002210100703
Berthoz, A., & Viaud-Delmon, I. (1999). Multisensory integration in spatial orientation. Current Opinion in Neurobiology, 9(6), 708–712. https://doi.org/10.1016/S0959-4388(99)00041-0
Bertonati, G., Amadeo, M. B., Campus, C., & Gori, M. (2023). Task-dependent spatial processing in the visual cortex. Human Brain Mapping, 1–10. https://doi.org/10.1002/hbm.26489
Brookes, J., Warburton, M., Alghadier, M., Mon-Williams, M., & Mushtaq, F. (2020). Studying human behavior with virtual reality: The Unity Experiment Framework. Behavior Research Methods, 52(2), 455–463. https://doi.org/10.3758/s13428-019-01242-0
Butler, J. S., Smith, S. T., Campos, J. L., & Bulthoff, H. H. (2010). Bayesian integration of visual and vestibular signals for heading. Journal of Vision, 10(11), 23–23. https://doi.org/10.1167/10.11.23
Cogné, M., Taillade, M., N’Kaoua, B., Tarruella, A., Klinger, E., Larrue, F., Sauzéon, H., Joseph, P. A., & Sorita, E. (2017). The contribution of virtual reality to the diagnosis of spatial navigation disorders and to the study of the role of navigational aids: A systematic literature review. Annals of Physical and Rehabilitation Medicine, 60(3), 164–176. https://doi.org/10.1016/J.REHAB.2015.12.004
Cuevas-Rodríguez, M., Picinali, L., González-Toledo, D., Garre, C., de la Rubia-Cuestas, E., Molina-Tanco, L., & Reyes-Lecuona, A. (2019). 3D Tune-In Toolkit: An open-source library for real-time binaural spatialisation. PLoS ONE, 14(3), e0211899. https://doi.org/10.1371/JOURNAL.PONE.0211899
Cui, Q. N., Razavi, B., O’Neill, W. E., & Paige, G. D. (2010). Perception of auditory, visual, and egocentric spatial alignment adapts differently to changes in eye position. Journal of Neurophysiology, 103(2), 1020–1035. https://doi.org/10.1152/jn.00500.2009
De Gelder, B., & Bertelson, P. (2003). Multisensory integration, perception and ecological validity. Trends in Cognitive Sciences, 7(10), 460–467. https://doi.org/10.1016/J.TICS.2003.08.014
de la Rosa, S., & Breidt, M. (2018). Virtual reality: A new track in psychological research. British Journal of Psychology, 109(3), 427. https://doi.org/10.1111/BJOP.12302
Epic Games. (2022). Unreal engine: Virtual reality best practices. Retrieved May 4, 2022, from https://docs.unrealengine.com/4.26/en-US/SharingAndReleasing/XRDevelopment/VR/DevelopVR/ContentSetup/
Epstein, R. A., Patai, E. Z., Julian, J. B., & Spiers, H. J. (2017). The cognitive map in humans: Spatial navigation and beyond. Nature Neuroscience, 20(11), 1504–1513. https://doi.org/10.1038/NN.4656
Esposito, D., Bollini, A., & Gori, M. (2021a). The link between blindness onset and audiospatial processing: Testing audiomotor cues in acoustic virtual reality. In Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society, pp. 5880–5884. https://doi.org/10.1109/EMBC46164.2021.9629699
Esposito, D., Bollini, A., & Gori, M. (2021b). Virtual Reality Archery to quantify the development of Head-Trunk Coordination, Visuomotor transformation And Egocentric Spatial Representation. 2021 IEEE International Symposium on Medical Measurements and Applications (MeMeA), 1–6. https://doi.org/10.1109/MeMeA52024.2021.9478772
Esposito, D., Miehlbradt, J., Tonelli, A., Alberto, M., & Gori, M. (2023). Young children can use their subjective straight-ahead to remap visuo-motor alterations. Scientific Reports, 13(1), 1–11. https://doi.org/10.1038/s41598-023-33127-w
Ferrè, E. R., Alsmith, A. J. T., Haggard, P., & Longo, M. R. (2021). The vestibular system modulates the contributions of head and torso to egocentric spatial judgements. Experimental Brain Research, 239(7), 2295–2302. https://doi.org/10.1007/s00221-021-06119-3
Filimon, F. (2015). Are all spatial reference frames egocentric? Reinterpreting evidence for allocentric, object-centered, or world-centered reference frames. Frontiers in Human Neuroscience, 9, 648. https://doi.org/10.3389/fnhum.2015.00648
Frissen, I., Campos, J. L., Souman, J. L., & Ernst, M. O. (2011). Integration of vestibular and proprioceptive signals for spatial updating. Experimental Brain Research, 212(2), 163–176. https://doi.org/10.1007/s00221-011-2717-9
Garcia, S. E., Jones, P. R., Rubin, G. S., & Nardini, M. (2017). Auditory localisation biases increase with sensory uncertainty. Scientific Reports, 7(1), 1–10. https://doi.org/10.1038/srep40567
Gescheider, G. A. (2013). Psychophysics. Psychology Press. https://doi.org/10.4324/9780203774458
Google. (2018). Resonance audio SDK for unity (version 1.2.1) [computer software]. https://resonance-audio.github.io/resonance-audio/develop/unity/getting-started.html. Accessed 8 Apr 2022.
Gori, M., Sandini, G., Martinoli, C., & Burr, D. C. (2014). Impairment of auditory spatial localization in congenitally blind human subjects. Brain, 137(1), 288–293. https://doi.org/10.1093/brain/awt311
Grieves, R. M., & Jeffery, K. J. (2017). The representation of space in the brain. Behavioural Processes, 135, 113–131. https://doi.org/10.1016/J.BEPROC.2016.12.012
Holleman, G. A., Hooge, I. T. C., Kemner, C., & Hessels, R. S. (2020). The ‘real-world approach’ and its problems: A critique of the term ecological validity. Frontiers in Psychology, 11, 721. https://doi.org/10.3389/fpsyg.2020.00721
Jewell, G., & McCourt, M. E. (2000). Pseudoneglect: A review and meta-analysis of performance factors in line bisection tasks. Neuropsychologia, 38(1), 93–110. https://doi.org/10.1016/S0028-3932(99)00045-7
Kingdom, F. A. A., & Prins, N. (2016). Psychophysics: A practical introduction: Second edition. Elsevier Academic Press. https://doi.org/10.1016/C2012-0-01278-1
Klatzky, R. L. (1998). Allocentric and egocentric spatial representations: Definitions, distinctions, and interconnections. In Spatial cognition (Lecture Notes in Computer Science, Vol. 1404, pp. 1–17). https://doi.org/10.1007/3-540-69342-4_1
Kolarik, A. J., Scarfe, A. C., Moore, B. C. J., & Pardhan, S. (2016). An assessment of auditory-guided locomotion in an obstacle circumvention task. Experimental Brain Research, 234(6), 1725–1735. https://doi.org/10.1007/s00221-016-4567-y
Lackner, J. R. (1974). The role of posture in sound localization. The Quarterly Journal of Experimental Psychology, 26(2), 235–251. https://doi.org/10.1080/14640747408400409
Lackner, J. R., & DiZio, P. (2005). Vestibular, proprioceptive, and haptic contributions to spatial orientation. Annual Review of Psychology, 56(1), 115–147. https://doi.org/10.1146/annurev.psych.55.090902.142023
Laurens, J., & Angelaki, D. E. (2018). The brain compass: A perspective on how self-motion updates the head direction cell attractor. Neuron, 97(2), 275–289. https://doi.org/10.1016/j.neuron.2017.12.020
Lewald, J. (2002). Opposing effects of head position on sound localization in blind and sighted human subjects. European Journal of Neuroscience, 15(7), 1219–1224. https://doi.org/10.1046/j.1460-9568.2002.01949.x
Lewald, J., Dörrscheidt, G. J., & Ehrenstein, W. H. (2000). Sound localization with eccentric head position. Behavioural Brain Research, 108(2), 105–125. https://doi.org/10.1016/S0166-4328(99)00141-2
Lewald, J., & Ehrenstein, W. H. (1998). Influence of head-to-trunk position on sound lateralization. Experimental Brain Research, 121(3), 230–238. https://doi.org/10.1007/s002210050456
Lewald, J., Peters, S., Tegenthoff, M., & Hausmann, M. (2009). Dissociation of auditory and visual straight ahead in hemianopia. Brain Research, 1287, 111–117. https://doi.org/10.1016/J.BRAINRES.2009.06.085
Loomis, J. M., Blascovich, J. J., & Beall, A. C. (1999). Immersive virtual environment technology as a basic research tool in psychology. Behavior Research Methods, Instruments, & Computers, 31(4), 557–564. https://doi.org/10.3758/BF03200735
Middlebrooks, J. C., & Green, D. M. (1991). Sound localization by human listeners. Annual Review of Psychology, 42(1), 135–159. https://doi.org/10.1146/annurev.ps.42.020191.001031
Montana, J. I., Tuena, C., Serino, S., Cipresso, P., & Riva, G. (2019). Neurorehabilitation of spatial memory using virtual environments: A systematic review. Journal of Clinical Medicine, 8(10). https://doi.org/10.3390/JCM8101516
Murray, R. F., Patel, K. Y., & Wiedenmann, E. S. (2022). Luminance calibration of virtual reality displays in Unity. Journal of Vision, 22(1), 1.
Occhigrossi, C., Brosch, M., Giommetti, G., Panichi, R., Ricci, G., Ferraresi, A., Roscini, M., Pettorossi, V. E., & Faralli, M. (2021). Auditory perception is influenced by the orientation of the trunk relative to a sound source. Experimental Brain Research, 239(4), 1223–1234. https://doi.org/10.1007/s00221-021-06047-2
Odegaard, B., Wozny, D. R., & Shams, L. (2015). Biases in visual, auditory, and audiovisual perception of space. PLOS Computational Biology, 11(12), e1004649. https://doi.org/10.1371/journal.pcbi.1004649
Parseihian, G., & Katz, B. F. G. (2012). Rapid head-related transfer function adaptation using a virtual auditory environment. The Journal of the Acoustical Society of America, 131(4), 2948–2957. https://doi.org/10.1121/1.3687448
Paulich, M., Schepers, M., Rudigkeit, N., & Bellusci, G. (2018). Xsens MTw Awinda: Miniature wireless inertial-magnetic motion tracker for highly accurate 3D kinematic applications [white paper]. Retrieved March 24, 2021, from https://www.xsens.com/hubfs/3446270/Downloads/Manuals/MTwAwinda_WhitePaper.pdf
Populin, L. C. (2008). Human sound localization: Measurements in untrained, head-unrestrained subjects using gaze as a pointer. Experimental Brain Research, 190(1), 11–30. https://doi.org/10.1007/s00221-008-1445-2
R Core Team. (2020). R: A language and environment for statistical computing (version 4.0.3) [computer software]. https://www.r-project.org/. Accessed 8 Apr 2022.
Rabini, G., Altobelli, E., & Pavani, F. (2019). Interactions between egocentric and allocentric spatial coding of sounds revealed by a multisensory learning paradigm. Scientific Reports, 9(1), 7892. https://doi.org/10.1038/s41598-019-44267-3
Roncesvalles, M. N., Schmitz, C., Zedka, M., Assaiante, C., & Woollacott, M. (2005). From egocentric to exocentric spatial orientation: Development of posture control in bimanual and trunk inclination tasks. Journal of Motor Behavior, 37(5), 404–416. https://doi.org/10.3200/JMBR.37.5.404-416
Roren, A., Mayoux-Benhamou, M. A., Fayad, F., Poiraudeau, S., Lantz, D., & Revel, M. (2009). Comparison of visual and ultrasound based techniques to measure head repositioning in healthy and neck-pain subjects. Manual Therapy, 14(3), 270–277. https://doi.org/10.1016/J.MATH.2008.03.002
Rus-Calafell, M., Garety, P., Sason, E., Craig, T. J. K., & Valmaggia, L. R. (2018). Virtual reality in the assessment and treatment of psychosis: A systematic review of its utility, acceptability and effectiveness. Psychological Medicine, 48(3), 362–391. https://doi.org/10.1017/S0033291717001945
Schicke, T., Demuth, L., & Röder, B. (2002). Influence of visual information on the auditory median plane of the head. NeuroReport, 13(13), 1627–1629. https://doi.org/10.1097/00001756-200209160-00011
Stevens, S. S. (1960). The psychophysics of sensory function. American Scientist, 48, 226–253.
Straw, A. D. (2008). Vision egg: An open-source library for realtime visual stimulus generation. Frontiers in Neuroinformatics, 2, 4. https://doi.org/10.3389/neuro.11.004.2008
Unity Technologies. (2019a). Unity - Manual: GameObjects. Retrieved April 3, 2022, from https://docs.unity3d.com/Manual/GameObjects.html
Unity Technologies (2019b). Unity - Manual: Introduction to components. Retrieved April 3, 2022, from https://docs.unity3d.com/Manual/Components.html
Unity Technologies (2019c). Unity - Manual: Prefabs. Retrieved April 3, 2022, from https://docs.unity3d.com/Manual/Prefabs.html
Unity Technologies. (2019d). Unity - manual: Python for unity 2.0.1-preview.2. Retrieved July 21, 2021, from https://docs.unity3d.com/Packages/com.unity.scripting.python@2.0/manual/index.html
Unity Technologies. (2019e). Unity (version 2019.4 LTS) [computer software]. https://unity.com/
Unity Technologies. (2019f). Unity - Manual: Shaders. Retrieved April 3, 2022, from https://docs.unity3d.com/2019.4/Documentation/Manual/Shaders.html
Unity Technologies. (2019g). Unity - manual: Single pass stereo rendering. Retrieved June 21, 2021, from https://docs.unity3d.com/2019.4/Documentation/Manual/SinglePassStereoRendering.html
Unity Technologies. (2019h). Unity - manual. Retrieved May 04, 2020, from https://docs.unity3d.com/2019.4/Documentation/Manual/index.html
Volpe, B. T., Huerta, P. T., Zipse, J. L., Rykman, A., Edwards, D., Dipietro, L., Hogan, N., & Krebs, H. I. (2009). Robotic devices as therapeutic and diagnostic tools for stroke recovery. Archives of Neurology, 66(9), 1086–1090. https://doi.org/10.1001/ARCHNEUROL.2009.182
Watson, A. B., & Fitzhugh, A. (1990). The method of constant stimuli is inefficient. Perception & Psychophysics, 47(1), 87–91. https://doi.org/10.3758/BF03208169
Watson, A. B., & Pelli, D. G. (1983). Quest: A Bayesian adaptive psychometric method. Perception & Psychophysics, 33(2), 113–120. https://doi.org/10.3758/BF03202828
Winter, S. S., & Taube, J. S. (2014). Head direction cells: From generation to integration. Space, Time and Memory in the Hippocampal Formation, 83–106. https://doi.org/10.1007/978-3-7091-1292-2_4
Zahorik, P., Wightman, F., & Kistler, D. (1995). On the discriminability of virtual and real sound sources. Proceedings of 1995 Workshop on Applications of Signal Processing to Audio and Acoustics, 76–79. https://doi.org/10.1109/ASPAA.1995.482951
Zanchi, S., Cuturi, L. F., Sandini, G., & Gori, M. (2022). How much I moved: Robust biases in self-rotation perception. Attention, Perception, & Psychophysics, 84, 2670–2683. https://doi.org/10.3758/s13414-022-02589-x
Zimmermann, E. (2021). Sensorimotor serial dependencies in head movements. Journal of Neurophysiology, 126(3), 913–923. https://doi.org/10.1152/jn.00231.2021
Funding
Open access funding provided by Istituto Italiano di Tecnologia within the CRUI-CARE Agreement. The research is partially supported by the MYSpace project awarded to Monica Gori, which has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 948349). This work was also carried out within the framework of the project "RAISE - Robotics and AI for Socio-economic Empowerment" and has been supported by the European Union - NextGenerationEU. However, the views and opinions expressed are those of the authors alone and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them.
Author information
Authors and Affiliations
Contributions
Conceptualization, Data curation, Formal analysis, Investigation, Software, Writing—original draft preparation: E.D.; Methodology, Visualization: E.D., A.B.; Writing—review and editing: E.D., A.B., M.G.; Supervision: A.B., M.G.; Funding acquisition, Resources: M.G.
Corresponding author
Ethics declarations
Conflicts of interest/Competing interests
The authors declare no conflicts of interest.
Ethics approval
The study followed the tenets of the Declaration of Helsinki and was approved by the ethics committee of the local health service (Comitato Etico, ASL 3, Genova).
Consent to participate
Informed consent was obtained from all individual participants included in the study.
Consent for publication
The participants signed informed consent regarding publishing their data and photographs.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix A: characterization of the stimulus’ audio-visual properties
To ensure the reproducibility of psychophysical studies, it is important to characterize the physical properties of the stimuli delivered. Typically, the physical properties are measured at the source. This is impossible in extended reality (XR), where the source is simulated, and, to date, no standard procedures for such measurements have been established. In this study, we therefore measured the audio-visual properties of the stimuli at the receptor level rather than at the source. We followed a procedure similar to one that Murray, Patel, and Wiedenmann suggested in their work on calibrating the luminance of HMDs (Murray et al., 2022): they made a pinhole camera out of a dummy head, embedding the necessary sensors in it. We readapted their approach to measure both acoustic and visual characteristics, namely the Weber illuminance contrast and the sound intensity. The following sections describe the procedure used to measure them.
Apparatus
The sound intensity measurement used a pair of SP-TFB-2 low-noise in-ear binaural microphones and an M-AUDIO Mobile Pre mk2 USB audio interface; the measurements were visualized on a DELL Latitude 3380 laptop running the software Audacity. The illuminance measurement used a Vishay BPW34 PIN photodiode, a 1 MΩ resistor, and an Arduino UNO microcontroller; the readings were visualized on the same DELL Latitude 3380 laptop running the Arduino IDE (Fig. 10).
The stimuli under measurement were presented using the same apparatus used in the experiment.
Procedure
Sound intensity
The in-ear microphones were placed in the dummy head's ears using tape. Then, the HMD was placed on the dummy head with its earphones covering the microphones, the acoustic stimulus was delivered, and the measured sound intensity was visualized on the laptop running Audacity.
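Converting a waveform recorded in this way into a sound intensity figure reduces to computing the RMS amplitude and expressing it as a level in decibels. A minimal sketch in Python, assuming a hypothetical calibration factor (`mic_sensitivity_pa_per_unit`, not specified in the original) that maps recorder sample units to pascals:

```python
import math

def rms(samples):
    """Root-mean-square amplitude of a recorded waveform."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def spl_db(samples, mic_sensitivity_pa_per_unit):
    """Sound pressure level in dB re 20 uPa.

    mic_sensitivity_pa_per_unit is an assumed calibration factor
    converting sample units to pascals; it must be obtained by
    calibrating the microphone against a known reference tone.
    """
    pressure_pa = rms(samples) * mic_sensitivity_pa_per_unit
    return 20.0 * math.log10(pressure_pa / 20e-6)
```

With an uncalibrated setup, the same `rms` value can still be reported relative to full scale (dBFS), which suffices for comparing stimulus intensities within one apparatus.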
Weber illuminance contrast
The photodiode was embedded in the dummy head's left eye. The surface surrounding the "sensor eye" was covered in black tape to reduce stray light. Then, the HMD was placed on the dummy head, the visual stimulus was delivered, and the measured illuminance (lx) was visualized in the Arduino IDE serial port console. The illuminance was measured for the background and stimulus colors, each viewed at full screen. The Weber contrast was computed using the following formula: \(WC = \frac{I_s - I_b}{I_b}\), where \(I_s\) is the stimulus-related illuminance and \(I_b\) is the background-related illuminance.
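The contrast computation above is a one-liner; a minimal sketch in Python, with the two illuminance readings (in lux) as inputs:

```python
def weber_contrast(stimulus_lux, background_lux):
    """Weber illuminance contrast: WC = (I_s - I_b) / I_b.

    Positive values indicate a stimulus brighter than the background,
    negative values a darker one. Requires background_lux > 0.
    """
    return (stimulus_lux - background_lux) / background_lux
```

For example, a stimulus reading of 150 lx against a 100 lx background yields a Weber contrast of 0.5.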
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Esposito, D., Bollini, A. & Gori, M. The Suite for the Assessment of Low-Level cues on Orientation (SALLO): The psychophysics of spatial orientation in virtual reality. Behav Res (2023). https://doi.org/10.3758/s13428-023-02265-4
DOI: https://doi.org/10.3758/s13428-023-02265-4