of Wind by Visuo-Audio-Haptic Cross-Modal Effects

Wind displays, which simulate the sensation of wind, are known to enhance the immersion of virtual reality content. However, certain wind displays require an excessive number of wind sources to simulate wind from various directions. To realize wind displays with fewer wind sources, our previous study proposed a method to manipulate the perceived direction of wind through audio-haptic cross-modal effects. Because the visuo-haptic cross-modal effect on perceived wind direction has not yet been quantitatively investigated, this study focuses on the effect of visual stimuli on the perception of wind direction. We present virtual images of flowing particles and three-dimensional sounds of wind as information to indicate wind directions and induce cross-modal effects in users. A user study demonstrated that adding visual stimuli effectively improved the manipulation of perceived direction for certain virtual wind directions. Our results suggest that perceived wind directions can be manipulated by both visuo-haptic and audio-haptic cross-modal effects.


Introduction
Improving immersion is necessary for most virtual reality (VR) content. An approach to ensure an immersive VR experience involves multisensory presentation. In this context, "wind displays" that simulate the sensation of wind for their users have become a popular topic of study. Heilig used wind along with odors and vibrations in Sensorama [1]. Moon et al. proposed WindCube [9], which simulates wind from several directions using 20 fans.
As many people experience the sensation of wind on their entire body every day, wind displays allow us to immerse ourselves easily in VR presentations. They can improve immersion by faithfully reproducing the motion of objects in VR environments [5], self-motion [6,12], and climates [11]. Reproducing wind blowing from various directions is often crucial for simulating realistic wind. An array of fans [9] is the easiest and most widely applicable approach to this problem. However, the number of wind sources must match the number of desired directions of simulated wind, so wind display devices built this way tend to be complicated and large. VaiR [12] addresses this problem with two rotatable bow-shaped frames that enable continuous change in wind direction. This approach reduces the required number of wind sources but requires actuators and mechanisms to move the device. Instead of fully reproducing the physical wind, we propose presenting wind directions by altering human perception.
Human perception of wind direction has been investigated to inform the design of effective wind displays. Nakano et al. [10] demonstrated that the just noticeable difference (JND) in wind direction with respect to the human head is approximately 4° in the front and rear regions and approximately 11° in the lateral region. Saito et al. [13] investigated the JND angles of wind while presenting users with audio-visual stimuli. They reported JND angles much larger than those of Nakano et al. and suggested that the accuracy of wind perception is lowered by multisensory stimuli.
When we receive multisensory stimuli, different sensations are sometimes integrated with each other, altering our perception. These phenomena are called cross-modal effects, and they can alter the perception of physical stimuli. It has already been established that haptic sensations are altered by visuo-haptic [7] and audio-haptic [4] integration. Through cross-modal effects, we can provide rich tactile experiences without completely faithful reproduction of the stimulating physical phenomena.
We previously proposed a method to manipulate perceived wind directions through the audio-haptic cross-modal effect [2] in order to simulate directional wind with simple hardware. In experiments that simultaneously presented wind from two fans and three-dimensional (3D) wind sounds, we found that perceived wind directions could be shifted by up to 67.12° by this effect.
In tri-modal perception, congruent stimuli from two modalities have been suggested to strengthen the cross-modal illusion exerted on an incongruent stimulus from the third modality [14]. Therefore, we designed AlteredWind [3], which combines congruent visual and audio information about the wind direction to manipulate the perceived wind more effectively. We presented the audio-visual information through a head-mounted display (HMD) and headphones, as depicted in Fig. 1. Although we evaluated perceived wind directions in a user study, the analysis was only qualitative and the sample size was small. In this study, we redesigned the experiment to quantitatively verify the visuo-audio-haptic cross-modal effects on wind direction perception. Our experiment compares perceived wind directions across combinations of sensory modalities and placements of wind sources. Our results suggest new designs of wind displays that utilize cross-modal effects.

Implementing Visual, Audio, and Wind Presentation
In this paper, we define the "virtual wind direction" as the direction of wind presented by multimodal stimuli, which differs from the physical one. We implemented software for the visual and audio presentation of virtual wind directions and hardware for the actual wind presentation.

Visual Presentation of Virtual Wind Directions
The virtual wind direction can be visually conveyed to a user by two primary methods: by suggesting the generation of wind or by suggesting the existence of wind. The former can be realized by displaying a virtual image of a rotating fan. However, wind approaching from behind the user cannot be expressed by this technique because the image would lie outside the field of vision. The latter technique uses images of particles blown by the wind, or of flags or plants swaying in the wind, and can convey wind originating from any direction. In this study, images of particles moving horizontally were used to verify the effects of visual information in a simple environment. We used an HMD (HTC Vive Pro) to display the image three-dimensionally and immersively. We programmed 1000 particles to emerge per second so that they were visible in the HMD along any flow direction, as depicted in Fig. 2.
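As a minimal sketch of the particle motion described above, the horizontal velocity of each particle can be derived from the virtual wind direction. The coordinate convention here (+z toward the user's front, +x to the right) and the particle speed are illustrative assumptions, not details from our implementation:

```python
import math

def particle_velocity(theta_virtual_deg, speed=2.0):
    """Horizontal velocity vector for wind particles.

    theta_virtual_deg: virtual wind *source* direction, measured clockwise
    from the user's front (matching the paper's angle convention).
    Particles travel away from that source, so their motion vector is the
    opposite of the source direction. Returns (vx, vz), where +z = front
    and +x = right (an assumed convention for illustration).
    """
    theta = math.radians(theta_virtual_deg)
    # Unit vector pointing from the user toward the wind source.
    src_x, src_z = math.sin(theta), math.cos(theta)
    # Particles move in the direction the wind blows: source -> user.
    return (-speed * src_x, -speed * src_z)
```

For example, a virtual wind from the front (0°) yields particles flowing toward the back of the scene, and a wind from the right (90°) yields particles flowing leftward.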

Audio Presentation of Virtual Wind Directions
We manipulated the perceived directions of wind using 3D sounds recorded with a dummy head [2], a life-sized model of the human head with ears. When such a recording is played back binaurally, the sound source is localized around the listener. We installed a dummy head and a fan in an anechoic room and recorded the sound of the fan blowing wind against the dummy head from directions 30° apart [2]. The sound was presented to users through noise-cancelling headphones (SONY WH-1000XM2). The A-weighted sound pressure level (L_A) near the headphones, measured with a sound level meter (ONO SOKKI LA-4350), ranged from 46.8 dB to 58.1 dB depending on the direction of the sound image. The direction of the audio information was always congruent with that of the visual information.

Proposed Device for Wind Presentation
In our previous study [2], we designed a wind display device with one fan in front of the user and another behind. In this experiment, we placed four fans (SANYODENKI San ACE 172) around the users' heads at 90° intervals. Each fan was independently controlled by an Arduino UNO connected to a computer. During the experiment, wind was generated either from the fans at the front and back or from those to the left and right. The four fans were affixed to camera monopods on a circular rail 800 mm in diameter and directed at the users' heads from approximately 300 mm away. In our previous study, the results may have been affected by the participants' prior knowledge of the fan positions. To ensure that participants were unaware of the fan placement, we made the monopods detachable from the circular rail: the fans and monopods were removed from the rail and hidden before the experiment, and attached to the rail only after the participants had put on the HMD.
In our previous research [2], we continuously controlled the wind velocities of the two fans facing each other and confirmed that this was effective for manipulating the perceived wind direction. "Continuously" here means that the wind from a particular fan was weakened as the virtual wind direction moved further away from it. We applied the same method in this study, varying the wind velocities from 0.8 m/s to 2.0 m/s as shown in Fig. 3.
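The exact velocity profile is given in Fig. 3 and is not reproduced here; the following sketch therefore assumes a cosine falloff with angular distance, mapped onto the reported 0.8-2.0 m/s range, purely for illustration:

```python
import math

V_MIN, V_MAX = 0.8, 2.0  # m/s, velocity range reported in the text

def fan_speed(fan_angle_deg, theta_virtual_deg):
    """Wind speed for one fan given the virtual wind direction.

    Sketch only: the text states that speeds varied between 0.8 and
    2.0 m/s and weakened with angular distance from the virtual wind
    direction (Fig. 3), but the exact curve is not given here, so a
    cosine falloff is assumed.
    """
    diff = math.radians(fan_angle_deg - theta_virtual_deg)
    w = math.cos(diff)   # 1 when aligned, 0 at 90 deg, negative when opposed
    if w <= 0:
        return 0.0       # a fan facing away from the virtual wind stays off
    return V_MIN + (V_MAX - V_MIN) * w
```

Under this assumed profile, a fan aligned with the virtual wind direction blows at 2.0 m/s, one 60° away blows at 1.4 m/s, and the opposing fan is off.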

Experiment Design
We conducted an experiment to verify the visuo-audio-haptic cross-modal effects on perceived wind directions. In this experiment, we used visual and audio information to present virtual wind directions that were incongruent with the actual wind directions. Six conditions were prepared by varying the available multimodal information (visual and audio, visual only, and audio only) and the placement of the fans (front-behind and left-right). Since we had confirmed that the actual wind directions are perceived correctly without multimodal information [2], we omitted that condition from this user study. The experiment had a within-subjects design. For each condition, we presented wind from 12 virtual wind directions at intervals of 30°, and each direction was repeated twice. Thus, there were a total of 144 presentations per participant.
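The condition structure above (3 modality conditions x 2 fan placements x 12 directions x 2 repeats = 144 presentations) can be sketched as a trial generator. The blocking scheme actually used in the study is not specified here, so full randomization is assumed:

```python
import random

MODALITIES = ["visual+audio", "visual", "audio"]
PLACEMENTS = ["front-behind", "left-right"]
DIRECTIONS = list(range(0, 360, 30))  # 12 virtual wind directions, 30 deg apart
REPEATS = 2

def make_trials(seed=None):
    """Build a randomized trial list for one participant.

    Sketch under the assumption of full randomization; the actual
    study may have blocked trials by condition.
    """
    trials = [(m, p, d)
              for m in MODALITIES
              for p in PLACEMENTS
              for d in DIRECTIONS
              for _ in range(REPEATS)]
    random.Random(seed).shuffle(trials)
    return trials
```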
The Ethics Committee of the University of Tokyo approved the experiment (No. 19-170). Written informed consent was obtained from every participant. Each participant was instructed to sit in a chair and wear the HMD. As mentioned in Subsect. 2.3, the fans were attached after the participants put on the HMD, preventing them from being aware of the exact locations (Fig. 4). To make the wind clearly perceptible on the head and neck, the lower ends of the fans were adjusted to the height of the participants' shoulders. We asked the participants to gaze toward the frontal direction marked in the VR view.
Each presentation consisted of a 12 s stimulation period followed by a few seconds for answering. To compensate for the delay in starting and stopping the fans, the wind presentation was advanced by 2 s relative to the other stimuli. The order of presentations was randomized for each participant, and neither the participants nor the experimental staff knew the order beforehand. The participants indicated the perceived wind direction on the trackpad of a controller, as shown in Fig. 5. Finally, they answered questionnaires about their experiences during the experiment. As the participants wore the HMD and noise-cancelling headphones, they could neither see nor hear the fan operations. All windows of the experiment room were closed and the air conditioner was turned off to ensure that there was no wind other than that from the apparatus. Instead of air conditioning, two oil heaters, which produce no airflow, were used for warming. The apparatus was kept at least 1 m away from the walls so that they would not affect the airflow.

Results
Twelve people aged 22-49 participated in the experiment (7 men, 23.9 ± 1.2 years old, and 5 women, 30.0 ± 11.0 years old). Two of them had prior research experience in haptics. Ten participants felt that their perceived wind directions were affected by the visual and audio stimuli. Six answered that the effect of the auditory stimuli was stronger, and four answered that the effect of the visual stimuli was stronger.
We calculated the average perceived wind direction (θ_perceived) for every virtual wind direction (θ_virtual). The directions represent clockwise angles from the front. If θ_perceived was close to the corresponding θ_virtual, we can judge that the perceived wind direction was manipulated effectively toward θ_virtual. We define such conditions as "good performance" of the manipulation. Figure 6 shows the correspondence between θ_perceived and θ_virtual. Data points near the diagonal lines indicate good performance; points near the horizontal lines indicate bad performance, because they mean that the actual wind directions were perceived instead of θ_virtual.
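Averaging directions requires the circular mean from directional statistics rather than the arithmetic mean, since angles wrap around at 360°. A minimal sketch of how θ_perceived can be averaged:

```python
import math

def circular_mean_deg(angles_deg):
    """Circular (directional) mean of angles in degrees.

    Arithmetic averaging fails for directions (e.g. the mean of 350 deg
    and 10 deg should be 0 deg, not 180 deg); averaging the unit vectors
    and taking the resultant's angle handles the wrap-around correctly.
    """
    s = sum(math.sin(math.radians(a)) for a in angles_deg)
    c = sum(math.cos(math.radians(a)) for a in angles_deg)
    return math.degrees(math.atan2(s, c)) % 360.0
```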
We performed statistical tests following the methods of directional statistics [8] on θ_perceived under the three conditions corresponding to each fan placement. The Mardia-Watson-Wheeler test, a non-parametric test for two or more samples, was applied because Watson's U² test showed that some of the perceived directions did not arise from a von Mises distribution. For directions with significant differences, post-hoc Mardia-Watson-Wheeler tests with Hommel's improved Bonferroni procedure were performed. The results are tabulated in Table 1.

Discussion
In the following discussion, we abbreviate the visual-and-audio condition as VA, visual only as V, and audio only as A. With the front-behind fans, condition VA yielded better manipulation performance than condition V when θ_virtual was 90°, and was marginally better than condition A when θ_virtual = 150° and 240°. On the other hand, performance was better under condition A than under conditions VA or V for several values of θ_virtual. With the left-right fans, condition VA performed significantly (θ_virtual = 90°) or marginally (θ_virtual = 120°) better than condition A, and condition V performed significantly (θ_virtual = 30°) or marginally (θ_virtual = 0°) better than condition A. These results show that combining visual and auditory stimuli with the fans in the front-behind placement yields the best overall performance in manipulating perceived wind directions.
Still, the manipulation performance under condition V or A was better than under condition VA for several virtual wind directions. In those directions, performance was already sufficient (the differences between θ_perceived and θ_virtual were at most 12°) under condition V or A. Therefore, we conclude that visuo-audio-haptic cross-modal effects improve the manipulation of perceived wind directions when the perception cannot be sufficiently changed by visual or auditory stimuli alone.
For the left-right fan placement, performance was worse than for the front-behind placement under conditions VA and A, and there was little difference between conditions VA and V. The reason may be that the participants could not localize the sound images correctly owing to front-back confusion. Improving the 3D sounds may enhance the cross-modal effects under left-right fan placements.
Using more realistic images than simple flowing particles could increase the effectiveness of the visual stimuli. Moreover, the particle images may disturb the content in practical applications. If the perceived wind can be manipulated with more diegetic visual information, such as flags or swaying trees, the technique could be used effectively in practical applications of wind displays.

Conclusion
In this study, we proposed a method to manipulate the perceived directions of wind through visuo-audio-haptic cross-modal effects. We used virtual images of flowing particles as visual stimuli and 3D wind sounds as audio stimuli to make users perceive virtual wind directions. The user study demonstrated that combining visual and audio stimuli exerted significantly stronger effects than either stimulus alone for certain virtual directions. Combining the two modalities is considered effective when the perception cannot be sufficiently manipulated by visual or auditory stimuli alone.
These results suggest that perceived wind directions can be altered by both visuo-haptic and audio-haptic cross-modal effects. Further improvements to the visual and audio stimuli used for the manipulation should be considered in future work. The findings of this study can be applied to wind display technology to present various wind directions with limited equipment.