In the current study, we explored the potential use of 360° video technology for memory assessment through a preliminary evaluation of a 360° adaptation of the Picture Recognition sub-test included in the RBMT-III. We named this adaptation ObReco-360° (Object Recognition-360°).
The participants of the present study were recruited among the outpatients of the Department of Medical Rehabilitation of Istituto Auxologico Italiano in Milan. The resulting sample of twenty-four people included nine females and fifteen males, with a mean age of 70.4 years (SD = 8.5) and a mean of 9.7 (SD = 3.7) years of education. The exclusion criteria included the presence of severe internal, psychiatric, or neurological impairments. Regarding cognitive status, only participants who obtained a corrected score above 18 points on the Mini Mental State Examination (MMSE) (Folstein et al. 1975), Italian Version (Measso et al. 1993), were considered for recruitment.
The study was conducted in compliance with the Helsinki Declaration of 1975 (as revised in 2008) and received ethical approval from the Ethical Committee of the Istituto Auxologico Italiano. The demographic and neuropsychological data of the final sample are shown in Table 1.
This pilot study involved randomized within-subject data collection. For this reason, participants were examined twice within one week to avoid learning effects and interference between materials. Two different assessment protocols were administered: the standard one included only classic paper-and-pencil tests, while the 360° one also included the administration of the ObReco-360° and two user-experience (UX) rating scales. During the 360° session all participants sat on a swivel chair, so that they could freely explore the virtual environments using an Oculus Go© HMD.
The neuropsychological tests administered were the MMSE, the Frontal Assessment Battery (FAB) (Dubois et al. 2000) Italian Version (Appollonio et al. 2005), the Picture Recognition sub-test included in the RBMT-III Italian Version (Beschin et al. 2013) and the Babcock Story Recall Test (BSRT) Italian Version (Spinnler and Tognoni 1987).
The MMSE is a brief screening test including thirty simple tasks (e.g., repeating and remembering words, copying a figure) oriented to a first-level assessment of different cognitive functions.
The FAB is a rapid screening test for executive functions which includes six tasks linked to frontal lobe activity, such as conceptualization (e.g., finding common characteristics between objects) and inhibitory control (e.g., following rules given by the examiner).
The BSRT aims to assess short-term and long-term memory abilities, respectively, through the immediate and delayed recall of all the details contained in a brief tale.
RBMT-III Picture Recognition sub-test
The Picture Recognition sub-test of the RBMT-III is divided into two phases. During the first phase (Encoding Phase), a set of 15 pictures representing common animate and inanimate objects (e.g., a clock, a chicken) is shown separately to the participants, who must recognize and name each of them. In the second phase (Recognition Phase), the participants observe a total of 30 pictures including target items (i.e., the 15 pictures presented in the Encoding Phase) and distractors (i.e., 15 pictures not presented in the Encoding Phase): for each of these, they must answer yes if the picture was presented previously or no if it was not. The raw score obtained in the sub-test is the number of pictures correctly recognized. In addition, before the Recognition Phase we included a Free Recall task, which required the participants to report as many of the objects presented in the Encoding Phase as they could remember. The raw score is the number of objects correctly reported. The flowchart of the procedure is presented in Fig. 1a.
The UX assessment procedure included two questionnaires. The first instrument was the Independent Television Commission-Sense of Presence Inventory (ITC-SOPI) (Lessiter et al. 2001), a questionnaire including forty-four items, each a statement addressing the individual’s feelings after the VR experience. Participants are asked to rate their degree of agreement with each of these statements on a five-point Likert scale ranging from “Strongly Agree” to “Strongly Disagree”. The ITC-SOPI is divided into four subscales: Sense of Physical Space (19 items), Engagement (13 items), Ecological Validity (5 items) and Negative Effects (6 items), each yielding a separate score.
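As a minimal illustration of how the four subscale scores can be derived, the sketch below assumes (as is usual for this questionnaire) that each subscale score is the mean of its item ratings on the 1–5 Likert scale; the function name and data layout are ours, not part of the published scoring key.

```python
# Item counts per ITC-SOPI subscale, as listed in the text above.
SUBSCALE_ITEMS = {
    "Sense of Physical Space": 19,
    "Engagement": 13,
    "Ecological Validity": 5,
    "Negative Effects": 6,
}

def itc_sopi_scores(ratings_by_subscale):
    """Map each subscale to the mean of its item ratings (1-5 Likert)."""
    scores = {}
    for name, ratings in ratings_by_subscale.items():
        if len(ratings) != SUBSCALE_ITEMS[name]:
            raise ValueError(f"{name} expects {SUBSCALE_ITEMS[name]} items")
        scores[name] = sum(ratings) / len(ratings)
    return scores
```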
The second instrument was the System Usability Scale (SUS) (Brooke et al. 1996), a questionnaire composed of ten statements describing the user’s feelings concerning the interaction with the product under evaluation. For each statement, participants rate their degree of agreement on a five-point Likert scale ranging from “Strongly Agree” to “Strongly Disagree”. The computed score ranges from 0 to 100.
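The 0–100 SUS score follows the standard scoring rule: odd-numbered items contribute the rating minus 1, even-numbered items contribute 5 minus the rating, and the sum is multiplied by 2.5. The sketch below illustrates this rule; the function name is ours.

```python
def sus_score(ratings):
    """Compute the SUS score (0-100) from ten item ratings on a 1-5 scale.

    Standard SUS scoring: odd-numbered items contribute (rating - 1),
    even-numbered items contribute (5 - rating); the sum is scaled by 2.5.
    """
    if len(ratings) != 10 or any(not 1 <= r <= 5 for r in ratings):
        raise ValueError("SUS requires ten ratings between 1 and 5")
    # enumerate() index 0 corresponds to item 1 (odd-numbered).
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(ratings))
    return total * 2.5
```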
The ObReco-360° is a novel neuropsychological assessment tool, derived from the Picture Recognition sub-test of the RBMT-III and developed using immersive 360° photos and videos as virtual environments (VEs). The VEs were recorded using an omnidirectional video camera, the Ricoh Theta S©, which can record spherical photos with a resolution of 5376 × 2688 pixels and spherical videos with a resolution of 1920 × 1080 pixels. The final version of the ObReco-360° consists of a custom Android application which can be sideloaded onto an Oculus Go© headset. The application was developed using the InstaVR© software, which allowed us to organize the virtual environments into a single experience. The ObReco-360° test includes four different phases: the Familiarization Phase, the Encoding Phase, the Free Recall Phase and the Recognition Phase.
The Familiarization Phase aims to make the participants comfortable with the experience and to detect possible side effects linked to VR exposure (e.g., dizziness, nausea). Here, the participants find themselves in a black room with a floating icon showing the number one at its center. They are asked to point at and select the icon with the Oculus Go© controller, which reveals a text message, “search for the number 2”, containing the instruction for the task. The procedure is the same for the numbers from 2 to 4, which are positioned at the four cardinal points around the participants: when they finally find and select the number 4 (Fig. 2a), the second virtual environment is loaded.
The Encoding Phase represents the starting point of the test proper: the first scenario includes a 3D wall showing the instructions for the task (Fig. 2b), which are also presented in auditory modality. The participants can then choose whether to replay the instructions or proceed to the test phase. In the test phase, participants must pay attention to the different objects presented by a virtual clinician (Fig. 3a). The objects are randomly placed in an office room: 10 target objects mixed with 17 non-target ones. During the video, the clinician moves around the room and presents each target object close to the camera for 5 s; meanwhile, the participants must name the object shown. At the end of this task, the participants are invited to take off the headset and join the “real” clinician for a 10-min session of non-interfering tests.
The next step is the Free Recall Phase. The task simply requires the participants to recall the 10 objects presented 10 min earlier in the Encoding Phase. The raw score is the number of objects correctly reported.
After putting the headset back on, the Recognition Phase begins. Again, the scenario includes a visual and auditory presentation of the instructions, which ask the participants to explore an immersive 360° photo of the same room included in the first task, in order to find and name all ten of the objects previously shown, located among 17 other non-target objects (Fig. 3b).
We organized all the collected data in a Microsoft Excel sheet and computed different indexes for both the standard and the VR assessment protocol. For the Free Recall tasks, we computed the accuracy percentages of the performances. For the Recognition tasks, we computed three different scores: the Hit Rate (HR, the proportion of yes responses to old items), the False Alarm Rate (FAR, the proportion of yes responses to new items) and the discrimination score PR (i.e., PR = HR − FAR; Snodgrass and Corwin 1988). All these scores are reported as percentages.
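As an illustration, the three recognition indexes above can be computed from per-item responses as in the sketch below (the function and variable names are ours, not part of the original analysis pipeline):

```python
def recognition_scores(said_yes, is_old):
    """Compute HR, FAR and PR (in percent) from recognition responses.

    said_yes: list of bools, True if the participant answered "yes".
    is_old:   list of bools, True if the item was a target (old item).
    PR = HR - FAR is the discrimination index of Snodgrass & Corwin (1988).
    """
    old = [r for r, o in zip(said_yes, is_old) if o]
    new = [r for r, o in zip(said_yes, is_old) if not o]
    hr = 100.0 * sum(old) / len(old)    # hits among old items
    far = 100.0 * sum(new) / len(new)   # false alarms among new items
    return hr, far, hr - far
```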
We then performed four Wilcoxon signed-rank tests to compare the Free Recall and Recognition scores between the classic and 360° modes, assessing the statistical significance of the observed differences in performance. Finally, we performed a second statistical analysis to explore the presence of significant correlations between the computed scores in the two conditions. All the analyses were performed using JASP (Version 0.14.1.0).
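The analyses were run in JASP; purely as an illustration, an equivalent computation in Python could look as follows, using SciPy's Wilcoxon signed-rank test and, as one possible rank-based choice for the correlation analysis, Spearman's rho. The score values are made-up placeholders, not study data.

```python
from scipy.stats import wilcoxon, spearmanr

# Hypothetical per-participant accuracy percentages for one of the
# four compared measures (e.g., Free Recall) in the two conditions.
scores_classic = [80.0, 73.3, 86.7, 66.7, 93.3, 80.0, 73.3, 60.0]
scores_360     = [70.0, 80.0, 90.0, 70.0, 90.0, 73.3, 60.0, 70.0]

# Paired non-parametric comparison between the two assessment modes.
stat, p = wilcoxon(scores_classic, scores_360)

# Rank-based correlation between the scores in the two conditions.
rho, p_corr = spearmanr(scores_classic, scores_360)

print(f"Wilcoxon W = {stat:.2f}, p = {p:.3f}; Spearman rho = {rho:.2f}")
```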