Abstract
Executive functions include planning, working memory, inhibition, mental flexibility, and action monitoring and initiation, and are essential for an independent everyday life. Individuals suffering from a brain injury, such as a stroke, very commonly experience executive deficits that reduce their capacity to regain functional independence. In recent years, there has been growing interest in developing tablet computer-based cognitive training programs for stroke patients and healthy aging adults, since such programs can be used in non-supervised environments. Here, we describe and evaluate the usability of a novel tablet application (app) for executive function training, developed in the context of the MEMORI-net project, a cross-border Italy-Slovenia program for the rehabilitation of stroke patients. We conducted a pilot study with a non-clinical sample of 16 participants to obtain information about the usability of the sFEra APP. Our descriptive analyses suggest that most users were satisfied with the overall experience, that the app was highly usable, and that the instructions were clear even to users with little previous experience with tablet applications. Acceptability and effectiveness will need to be evaluated in a clinical randomized controlled study.
Introduction
Executive functions (EFs) refer to those processes that are crucial for goal-oriented behaviors and allow us to interact successfully with the world (Gazzaniga et al., 2006). This overarching definition includes a heterogeneous group of mental capacities, such as planning, working memory, inhibition, mental flexibility, and action monitoring and initiation, that are fundamental for most daily actions (Burgess et al., 2000; Chan et al., 2008; Damasio, 1995 for a review) (see Table 1).
Indeed, the substantial increase in interest in EFs over the years is likely due to their pivotal role in everyday life (Vaughan & Giovanello, 2010). EFs are critical for organizing our day (e.g., what has to be done, in which order, how long it takes) or for modifying our plans when changing contingencies or unexpected events call for unforeseen actions. We routinely use EFs to learn new actions, make decisions and correct mistakes, engage in non-automatic behaviors that require constant monitoring, or complete complex or even dangerous behaviors. EFs are also essential to accurately evaluate and predict the success or failure of our behaviors: they are crucial for analyzing the causes of mistakes and reviewing the sequence of actions to plan a better strategy for the next occasion. A significant amount of flexibility is necessary to generate an alternative plan and switch from one coping behavior to another.
An impairment of such functions necessarily leads to a dramatically impoverished everyday life by limiting the ability to adjust to environmental demands or changes. Disorders of attention-executive functioning are among the most frequent cognitive deficits in adults, and they can be observed in aging as well as following numerous neurological diseases, such as traumatic brain injury, stroke, multiple sclerosis, dementia, and Parkinson's disease. In particular, executive deficits occur in as many as 75% of stroke patients (Povroznik et al., 2018) and result in a worse functional outcome, since they interfere with the process of rehabilitation and recovery (Lesniak et al., 2008; Jankowska et al., 2017).
Given the considerable impact of EF impairment on functional outcomes, it is of primary importance to identify rehabilitation strategies that are effective in improving these fundamental capacities. Historically, restoration and compensation are the two distinct approaches that have been proposed to rehabilitate cognitive functions in brain-damaged patients (Mateer, 2005). Restorative interventions have usually been offered in paper-and-pencil mode but, with the rapid growth and spread of computer devices, computer-based training has gained popularity and nowadays represents an appealing alternative (Sigmundsdottir et al., 2016). Indeed, it has several advantages over conventional training practices. Traditional cognitive programs usually require face-to-face contact, reaching the hospital or the therapist's practice, arranging schedules, and travel time; furthermore, as they are carried out in one-to-one mode, they can be very expensive. Computer-assisted training, instead, can be carried out at any time and is cost-effective, since people can do it at home; it can be self-paced and tailored to particular impairments, and can provide immediate feedback on the success or failure of the exercises. It also allows recording patients' sessions and storing their results, as well as graphing rehabilitation progress. With recent technological innovations, applications (apps) for mobile phones and tablets have developed quickly, making cognitive training more attractive, stimulating, and fun, all aspects that are important for promoting neuroplasticity (Tacchino et al., 2015). Apps also make training sessions more feasible and flexible because they are user-friendly, and people can exercise wherever they want and at the most convenient time of the day.
Indeed, patients' withdrawal and boredom are quite common in conventional treatment programs, especially after hospital discharge; as a result, many patients do not achieve the recommended intensity and duration of rehabilitation, thus reducing the clinical efficacy of the protocols. Several factors may affect subjects' commitment to therapy sessions: sessions are time-consuming, therapist-dependent, and imply travel costs. A growing body of research suggests that information and communication technologies (such as mobile apps) have an increasingly important role in the neuropsychological rehabilitation of patients with acquired brain injury (Gamito et al., 2015) and may help overcome such challenges. Indeed, they have proven to be associated with high adherence to treatment in different contexts (see, for example, Arean et al., 2016), and individuals can potentially benefit from a longer duration of rehabilitation through the extension of therapeutic processes beyond the hospital, such as into the patient's home.
Currently, there are multiple apps for cognitive training that have been used in individuals with brain damage, e.g., Cogmed (a working memory training program, www.cogmed.uk.com), Lumosity (a games-based brain training program, www.lumosity.com), and CogniFit (neuropsychological tests and a brain training program, www.cognifit.com/it). These app-based games closely mimic traditional cognitive tasks, such as digit span and dual tasks. Evidence for transfer effects on working memory, processing speed, and attention is mixed (Harris et al., 2018; Melby-Lervåg & Hulme, 2013). However, to our knowledge, only a few of them have specifically focused on executive functions and have been developed as a clinical tool for stroke patients, hence the need to develop the present tablet application.
Here, we describe the initial usability testing of the sFEra APP, a novel tablet app-based cognitive training program focused on attention-executive functioning. The rationale for designing a new app was to develop a cognitive rehabilitation tool suitable for stroke patients, and in particular one tapping the major cognitive processes of executive functions (attention, working memory, inhibition, planning, and flexibility), since executive disorders are increasingly acknowledged as a recurring consequence of anterior, posterior, and subcortical stroke (Jankowska et al., 2017) and play a critical role in predicting stroke functional recovery. The abilities targeted by the sFEra APP include attention, characterized by visual attention and executive-oriented shifts; working memory, in which information is updated and monitored; inhibition, which allows one to deliberately suppress automatic responses when needed; shifting between tasks or mental states; planning, which involves mapping sequences of actions or moves in preparation for a task or an action; and flexibility, the ability to change and adapt behavior to different contexts or demands (Posner & Petersen, 1990; Sohlberg & Mateer, 1987; Lezak, 1993; for a review, Diamond, 2013; for the neural substrates of each ability, Miyake et al., 2000; see also Table 1). Unlike common treatment paradigms in which only impaired sub-functions are predominantly treated, the sFEra APP program is designed to improve the activity of fronto-parietal circuits and the Central Executive Network (see Shallice & Burgess, 1991).
The app has been developed in the context of the MEMORI-net program, a cross-border Italy-Slovenia project that aimed to delineate new common clinical standardized protocols for the rehabilitation of stroke patients. Testing the usability of the sFEra APP was necessary before its possible deployment in the rehabilitation of stroke patients involved in the program. In this pilot study, we tested the app's usability in untrained healthy individuals and, in particular, paid great attention to the clarity of the exercises' instructions. As mentioned above, tablet app-based training has the advantage that it can be carried out at any time at the patient's home. However, to be adopted in a non-supervised environment, the app must offer a reliable administration of each exercise. For this reason, the digitized exercises were performed by a group of 16 healthy adults who also completed a usability questionnaire (see Tacchino et al., 2015).
Methods
Participants
This study involved adults without cognitive impairment. For this reason, the following inclusion criteria were defined: (1) age ≥ 45 years and (2) a Montreal Cognitive Assessment (MoCA) score ≥ 21 (Conti et al., 2015). In addition, participants completed the Frontal Assessment Battery (FAB; Appollonio et al., 2005; Dubois et al., 2000). All participants provided informed consent to take part in the study, which was approved by SISSA's Ethics Committee. The study conformed to the Declaration of Helsinki.
sFEra APP
Overview
sFEra is an Android-based app. The opening screen is shown in Fig. 1. It includes five areas of exercises: (a) Attention, (b) Control and inhibition, (c) Working memory, (d) Planning, (e) Flexibility.
Each area includes two exercises, and each exercise has 10 levels of incremental difficulty (except one exercise, see below). The app records each task session, provides real-time feedback, and keeps a record of subjects' performance.
(a) Attention This section includes two exercises: 1- “Chi cerca trova” (a barrage task) and 2- “L’imprevisto” (an oddball task).
1- Barrage task
This exercise was designed to evaluate and train participants' ability to direct their attention to the target items while tuning out non-relevant information (distractors).
The participants are instructed to search for the target items in a panel of distractors (see Fig. 2). Distractors can differ from the target in both color and size. Participants have to respond as quickly as possible. The following measures are recorded: total completion time of each level (in milliseconds), number of correct targets selected (hits), number of targets selected more than once, and number of distractors erroneously selected (false alarms). This exercise has 10 levels of incremental difficulty. Each level has three different scenarios (fruits, fishes, and cups) and three different backgrounds (easy, medium, hard). Each level has its own instructions, and the participant is asked to select the target items based on color, shape, or size. Once participants select an item, either a green tick (correct answer) or a red cross (incorrect answer) appears on it. Once the participant presses the end button on the screen, the level is considered completed and the participant receives feedback with the level score. Participants advance to the next level once 75% correct answers are reached.
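As a sketch of how a barrage level could be scored against the 75% progression criterion described above (the scoring formula and function name are illustrative assumptions, not the app's actual implementation):

```python
def score_barrage_level(hits, n_targets, threshold=0.75):
    """Compute level accuracy from correctly selected targets and decide
    whether the participant advances. The app additionally logs repeats,
    false alarms, and completion time; this sketch keeps only accuracy."""
    accuracy = hits / n_targets if n_targets else 0.0
    return {"accuracy": accuracy, "advance": accuracy >= threshold}
```

For example, finding 8 of 10 targets yields 80% accuracy and meets the assumed 75% criterion, while 5 of 10 does not.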
2- Oddball task
This exercise was designed to evaluate and train participants’ sustained attention and vigilance.
The participants are instructed to avoid the target item (a hole in the street, see Fig. 2) by pressing the (x) button positioned below the screen while an avatar rides a bicycle through a city street. Other, non-target distractors appear on the screen (puddle, manhole, a stack of leaves, wastepaper). The proportions of targets and distractors vary at each level, but the target is always infrequent (at most 15% of total trials), since the exercise is aimed at training sustained attention and vigilance; the total duration (in minutes) of each level also increases (range = 3–12 min). The following measures are recorded: average response time (in milliseconds) to target stimuli, accuracy (number of target stimuli detected), missed targets (omissions), and total number of errors (responses to non-target stimuli).
At the end of each level, the participants receive feedback on their performance based on their accuracy (total number of correct answers). Participants advance to the next level once 75% correct answers are reached.
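The constraint that targets remain infrequent (at most 15% of trials) can be illustrated with a small sequence generator; the distractor labels and function name are assumptions introduced here for illustration only:

```python
import random

def make_oddball_sequence(n_trials, target_rate=0.15, seed=None):
    """Build a trial sequence in which the target ('hole') stays infrequent.
    Distractor identities match those listed in the text; the exact
    generation procedure is an illustrative assumption."""
    rng = random.Random(seed)
    n_targets = int(n_trials * target_rate)
    trials = ["hole"] * n_targets + [
        rng.choice(["puddle", "manhole", "leaves", "wastepaper"])
        for _ in range(n_trials - n_targets)
    ]
    rng.shuffle(trials)  # randomize target positions across the level
    return trials
```

A 100-trial level built this way contains exactly 15 targets, matching the 15% ceiling stated above.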
(b) Control and inhibition This section includes two exercises: 1- “Alto là” (a go/no go task) and 2- “Non farti distrarre” (a Stroop-like task).
1- Go/no go task
This exercise was designed to evaluate and train participants' inhibitory control, since the task forces them to withhold their responses. The participants are instructed to press the button positioned below each item (see Fig. 2) as fast as they can, only when "go" stimuli are presented in the center of the screen, and not to press for "no-go" stimuli. Both "go" and "no-go" stimuli change across levels (i.e., simple geometric shapes, outlined simple shapes, colored pictures). The chosen stimuli differed across the 10 levels and included colored geometric shapes (i.e., square, triangle, circle, pentagon, see Fig. 2), colored outlined object figures (i.e., flower, tree, bell, book), colored outlined animal figures (i.e., sheep, duck, pig, tortoise), and colored images depicting objects, animals, flowers and plants, or foods. Throughout the levels, the participants had to respond to the color, shape, or identity of the stimuli. Each stimulus was presented at the center of the screen for 1000 ms, with an inter-trial interval (ITI) randomly drawn between 500 and 1000 ms. The following measures are recorded: time (in milliseconds) to respond to each element of the go/no-go task (reaction times, RTs) and total time for each level (in seconds); RTs to each "go" item and the average across all "go" stimuli; number of correct responses to "go" stimuli (hits); number of incorrect responses to "no-go" stimuli (false alarms); and number of missed responses to "go" stimuli (misses). After each level, the participant receives feedback on the total number of correct answers. Participants advance to the next level once 75% correct answers are reached.
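The trial timing described above (1000 ms stimulus presentation followed by a 500–1000 ms random ITI) could be scheduled as in the following sketch; returning onsets relative to level start is an illustrative choice, not the app's documented behavior:

```python
import random

def gonogo_schedule(n_trials, stim_ms=1000, iti_range=(500, 1000), seed=None):
    """Return (onset_ms, iti_ms) pairs for each trial: a fixed-duration
    stimulus followed by a uniformly random inter-trial interval."""
    rng = random.Random(seed)
    onsets, t = [], 0
    for _ in range(n_trials):
        iti = rng.randint(*iti_range)  # ITI drawn anew on every trial
        onsets.append((t, iti))
        t += stim_ms + iti  # next stimulus starts after stimulus + ITI
    return onsets
```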
2- Stroop-like task
This exercise was designed to evaluate and train participants' ability to stay focused and inhibit distractions and interference, since participants had to actively ignore aspects of the stimuli that could lead to mistakes. This exercise had two versions, each with 5 levels of incremental difficulty: a numerical one (2a. "Occhio ai numeri") and a verbal one (2b. "Occhio alle parole").
2a. Numerical Stroop-like task
Participants had to respond to the number written in the larger (or smaller, depending on the block) font, regardless of its numerical value; they therefore had to ignore the actual numerical magnitude, since it could distract them and lead to errors (see Fig. 2). The following measures are recorded: accuracy (% of correct answers) and reaction times (RTs in milliseconds to each stimulus and average RTs in milliseconds for each block).
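A minimal sketch of how a numerical Stroop response could be checked, assuming each on-screen item is represented as a (value, font size) pair, a representation we introduce here purely for illustration:

```python
def stroop_correct(pair, chosen_index, rule="larger_font"):
    """Check a numerical Stroop response: each item is (value, font_px).
    The correct answer depends only on the font size, never on the
    numeric value, which acts as the to-be-ignored dimension."""
    fonts = [item[1] for item in pair]
    target = (fonts.index(max(fonts)) if rule == "larger_font"
              else fonts.index(min(fonts)))
    return chosen_index == target
```

An incongruent trial such as ((9, 24), (2, 48)) is the interesting case: the correct choice is the 2, because it is printed in the larger font even though 9 is numerically bigger.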
2b. Verbal Stroop-like task
Participants had to respond with the button “big” to a word written in upper case and “small” to a word written in lower case (regardless of the meaning of the word, i.e., giant, dwarf, big, SMALL). Participants had to ignore the meaning of the word since it could distract them and lead to errors. The following measures are recorded: accuracy (% of correct answers) and reaction times (RTs in milliseconds to each stimulus and average RTs in milliseconds for each block).
(c) Working memory This section includes two exercises: 1- "Tienimi a mente" (a running span task) and 2- "Occhio alla regola" (an information manipulation task).
1- Running span
This exercise was designed to evaluate and train participants' working memory span, i.e., the number of items (numbers or letters) they can keep in mind. Strings of elements (numbers or letters) appear one item at a time at the center of the screen (see Fig. 2); at the end of each string, a keyboard appears and participants have to select the elements from memory according to the instructions. The length of the string and the number of elements to recall increase across levels: initially, participants are presented with strings of 2 and 3 elements and have to recall the last element; the strings then increase to 4 elements, with the last 2 to recall; in the final level, strings are up to 10 elements long and participants have to recall the last 3 elements. Presentation time (initially 2 s) decreases to 1 s at higher levels. Each level lasts 120 s, and after each level, the participant receives feedback on the performance. The following measures are recorded: accuracy (number of recalled elements, both raw score and percentage of correct answers), total time (in milliseconds) to recall each sequence, and total time (in seconds) to complete the level. Participants advance to the next level once 75% correct answers are reached.
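The recall rule, keeping only the last n items of a growing stream, can be sketched with a fixed-size buffer; this illustrates the task logic, not the app's code:

```python
from collections import deque

def running_span_targets(stream, n_recall):
    """Return the elements a participant should recall: the last n_recall
    items of the presented stream. A bounded deque keeps the memory load
    constant no matter how long the string grows."""
    buf = deque(maxlen=n_recall)
    for item in stream:
        buf.append(item)  # oldest item is silently dropped when full
    return list(buf)
```

For a string "7 3 9 1" with 2 items to recall, the correct answer is "9 1".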
2- Information manipulation task
This exercise was designed to evaluate and train participants' working memory, in particular the ability to update the different rules they had to keep in mind. In this exercise, stimuli were presented in the auditory modality. The participant listens to a sequence of numbers (or letters) and has to reorder them on a keyboard according to the rule stated in the instructions. Example rules include: reporting first the odd and then the even numbers of a number sequence; reporting first the consonants and then the vowels of a letter sequence; ordering randomly presented numbers from the smallest to the largest; and ordering randomly presented letters alphabetically. The following measures are recorded: accuracy (number of recalled elements, both raw score and percentage of correct answers), total time (in milliseconds) to recall each sequence, and total time (in seconds) to complete the level. Participants advance to the next level once 75% correct answers are reached.
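The example reordering rules could be expressed as in the following sketch; the rule labels are hypothetical names introduced here, not the app's internal identifiers:

```python
def reorder(sequence, rule):
    """Apply one of the example reordering rules described in the text."""
    if rule == "odd_then_even":
        return ([n for n in sequence if n % 2]
                + [n for n in sequence if n % 2 == 0])
    if rule == "consonants_then_vowels":
        vowels = set("AEIOU")
        return ([c for c in sequence if c not in vowels]
                + [c for c in sequence if c in vowels])
    if rule in ("ascending", "alphabetical"):
        return sorted(sequence)
    raise ValueError(f"unknown rule: {rule}")
```

For instance, the sequence 4 7 2 9 under the odd-then-even rule becomes 7 9 4 2, preserving the presentation order within each category.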
(d) Planning This section includes two exercises: 1- "Passo dopo passo" (an action sequences task) and 2- "Pianifica le tue mosse" (a planning task).
1- Action sequences
This task was designed to evaluate and train participants' ability to order action sequences, starting from simple everyday actions requiring few steps up to more complicated and elaborate ones. Stimuli were presented as boxes on the left side of the screen containing sentences that the participant had to order, by moving them to the boxes on the right side of the screen, according to the order of actions required to complete the sequence (see Fig. 2). Simple 3-step actions included, for example, "Drink: 1) take the bottle, 2) open the bottle, and 3) drink from the bottle" and "Brush your teeth: 1) take the toothbrush, 2) apply the toothpaste, and 3) brush your teeth"; more complex 5-step sequences included, for example, "Planting a plant: 1) take a vase, 2) put the soil in the vase, 3) plant the seed, 4) water the soil, and 5) put the vase in the sun"; sequences at level 10 reached 10 steps of unusual actions, such as "changing the tire of a bicycle." The following measure is recorded: accuracy (number of sentences in the correct order). Participants received feedback after each sequence with the number of correct answers. Participants advance to the next level once all sentences describing the action are correctly ordered.
2- Planning task
This task was designed to evaluate and train participants' ability to strategically plan their actions before solving the task since, once begun, a move could not be undone. The exercise required the participant to collect shells presented on a grid representing a net (see Fig. 2). Movement along the grid followed specific rules: at each intersection, the subject can move left or right but cannot go backward, and must follow the lines of the grid. Additional rules based on the color of the shells were added at harder levels, and from level 4 onward a 1-min limit to complete the task applied. The following measures are recorded: total time (in milliseconds) to complete each exercise and total time for each level (in seconds). Participants advance to the next level once 100% correct answers are reached in the two exercises of the level.
(e) Flexibility This section includes two exercises: 1- "Pronti a cambiare" (a switching task) and 2- "2 in 1" (a dual task).
1- Number/letter switching task
This task was adapted from Rogers and Monsell (1995) and was designed to evaluate and train participants' ability to change their responses based on quickly varying instructions. The participants were presented with a rectangle divided into 4 quadrants (top left, top right, bottom left, bottom right); a pair of elements composed of a letter and a number (e.g., N2) could appear in any of the 4 quadrants. When the pair was presented in the top quadrants (either left or right), the subject was instructed to respond only to the number element; when it was presented in the bottom quadrants, the subject was instructed to respond only to the letter element. Depending on the level, the subject had to answer based on properties of the elements (e.g., vowel or consonant for letters, odd or even for numbers), keeping in mind the quadrant rule, which also remained on the screen for the whole duration of the exercise; the subject provided yes/no answers through a keypad on the screen.
The following measures are recorded: time (in milliseconds) to respond to each element, total time for each level (in seconds), and accuracy (number of correct answers, raw value and percentage). Participants advance to the next level once 75% correct answers are reached in the two exercises of the level.
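The quadrant rule can be sketched as follows; mapping "yes" to even numbers and vowels is an illustrative assumption about one possible level, not the app's fixed response mapping:

```python
def switching_answer(pair, quadrant):
    """Decide the yes/no answer for one trial of the number/letter
    switching task: top quadrants cue a judgment on the number element,
    bottom quadrants cue a judgment on the letter element."""
    letter = next(ch for ch in pair if ch.isalpha())
    digit = next(ch for ch in pair if ch.isdigit())
    if quadrant in ("top_left", "top_right"):
        return "yes" if int(digit) % 2 == 0 else "no"  # even number?
    return "yes" if letter.upper() in "AEIOU" else "no"  # vowel?
```

The same pair thus demands different answers depending on where it appears: "N2" in a top quadrant is "yes" (2 is even), but in a bottom quadrant it is "no" (N is a consonant), which is what makes the switch costly.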
2- Dual task
This task was designed to evaluate and train participants' ability to perform two tasks at the same time. The first task was the visual go/no-go task (Exercise b1) described above, with the same characteristics and stimuli; the second was an auditory go/no-go task with sounds as stimuli instead of figures/pictures. The instructions for the visual go/no-go could include rules based on the color, shape, or identity of the stimuli; for the auditory go/no-go, the subjects had to respond to the sound of a certain pitch (low or high, depending on the level) and refrain from answering to the other pitch. Both sounds were presented at the beginning so that the subject could familiarize with them.
The following measures are recorded: time (in milliseconds) to respond to each element (reaction times, RTs, separately for the visual and auditory go/no-go tasks), total time for each level (in seconds), and accuracy (number of correct answers, raw value and percentage). RTs to each "go" item and the average across all "go" stimuli are registered, together with the number of correct responses to "go" stimuli (hits), incorrect responses to "no-go" stimuli (false alarms), and missed responses to "go" stimuli (misses). After each level, the subject receives feedback on the total number of correct answers. Participants advance to the next level once 75% correct answers are reached.
As for the training program, in each session participants are supposed to complete one exercise from each of the 5 executive sub-components (attention, working memory, inhibition, planning, flexibility), possibly varying from one session to another. Overall, the duration of each cognitive session varies between 30 and 50 min. The order of presentation of the exercises is at the discretion of the therapist. Training schedule and dosage are supposed to depend on the clinical condition of the targeted population: limited endurance can be expected in acute post-stroke patients, with sessions intensifying as rehabilitation progresses. Since we were interested in evaluating the usability of the app mainly from a qualitative point of view, participants in the pilot study performed only two levels per exercise: level 1 of each exercise of each area, to familiarize themselves with the task and assess the clarity of the instructions, and a more challenging level, which varied among exercises, to sample the effort required (e.g., level 7 of area 1 exercise 1, level 5 of area 3 exercises 1 and 2, level 6 of area 5 exercise 1). We did not include the easier levels (i.e., levels 2 or 3) nor the hardest ones (i.e., levels 8 or 9) to avoid irritability and stress during app execution. At the end of the session, participants completed the Usability Questionnaire and a Questionnaire on experience and use of technology items (see also Tacchino et al., 2015).
Usability Questionnaire
At the end of each exercise, participants had to respond to the following questions on a 5-point Likert scale (ranging from 1 to 5; in brackets are the extremes of the scale).
- QA: “How would you judge the clarity of the instructions?” ([1] “very poor”–[5] “very good”).
- QB: “How would you judge the difficulty of the exercise?” ([1] “very low”–[5] “very high”).
- QC: “How would you judge the degree of satisfaction during the execution of the exercise?” ([1] “very poor”–[5] “very good”).
Moreover, at the end of all of the exercises, the participant had to respond to the following questions referring to the whole sFEra APP on a 5-point Likert scale (ranging from 1 to 5; in brackets are the extremes of the scale).
- Q1: “Overall, how would you judge the clarity of the instructions of the app?” ([1] “very poor”–[5] “very good”).
- Q2: “How would you judge your interest in the app exercises?” ([1] “very poor”–[5] “very good”).
- Q3: “How would you judge your motivation to use again the app?” ([1] “very poor”–[5] “very good”).
- Q4: “How would you judge the graphics of the app?” ([1] “very poor”–[5] “very good”).
- Q5: “How would you judge your motivation while you executed the exercises?” ([1] “very poor”–[5] “very good”).
- Q6: “How would you judge your stress level while you executed the exercises?” ([1] “very low”–[5] “very high”).
- Q7: “How would you judge your boredom level while you executed the exercises?” ([1] “very low”–[5] “very high”).
- Q8: “How would you judge your entertainment level while you executed the exercises?” ([1] “very poor”–[5] “very good”).
- Q9: “How useful would you judge the execution of the exercises?” ([1] “very poor”–[5] “very good”).
Questionnaire on Experience and Use of Technology Items
Participants also had to respond to the following questions assessing their experience with, and weekly hours of use of, 3 different technology items (personal computer, tablet, and smartphone). Responses were provided on a 5-point Likert scale (ranging from 1 to 5; in brackets are the extremes of the scale).
- Q1: “How would you judge your experience with … (personal computer/tablet/smartphone)?” ([1] “very poor”–[5] “very good”).
- Q2: “On average, how many hours per week do you use … (personal computer/tablet/smartphone)?” ([1] “less than 1 h”–[5] “more than 15 h”).
- Q3: “Did you have previous experience with cognitive training applications/exercises in the past?” ([1] “never”–[5] “very often”).
Results
Participants
Sixteen subjects (11 females) took part in the pilot study; their mean age was 59 years (SD = 8.04, range = 48–76) and their mean education was 13.12 years (SD = 3.70, range = 5–19).
Prior studies have reported that five participants can identify about two-thirds of all usability problems of an application (Lewis, 1994; Virzi, 1992), whereas ten participants can identify at least 80% of the problems (Faulkner, 2003).
All participants were Italian native speakers and did not have any neurodegenerative diseases or peripheral motor deficits. Participants' mean raw score on the MoCA was 28.12 (SD = 2.03, range = 23–30); their mean raw score on the FAB was 17.87 (SD = 0.34, range = 17–18).
Usability Questionnaire
The questionnaire showed positive results. Most users were pleased with the overall experience. Specifically, 68.8% judged the experience as entertaining (Q8 of the usability questionnaire; percentages calculated considering scores 4 and 5; overall mean = 3.6) and 68.8% as useful (Q9, overall mean = 3.7). Moreover, 81.3% found the graphics of the app to be of high quality (Q4, overall mean = 4.3), and 56.3% found the exercises of the app interesting (Q2, overall mean = 3.8). Also, 81.25% of them felt highly motivated during the execution of the task (Q5, overall mean = 4.3). All experienced low levels of stress (Q6, overall mean = 2.5) and most were not bored (87.5%, Q7, overall mean = 2). Importantly, the instructions of the app were easy to understand for 81.25% of the users (Q1, overall mean = 4). Finally, users felt on average moderately motivated to use the app again (37.5%, Q3, overall mean = 2.93) (see Fig. 3 for the complete distribution of the responses for each question).
As for the single exercises, the average ratings of the clarity of the instructions ranged from 2.9 to 4.6, which corresponds to a moderate to very good rating. The exercise receiving the lowest mean rating was the information manipulation task ("Occhio alla regola"), which was designed to train participants' ability to update the different rules they had to keep in mind. As for the difficulty of the exercises, the average ratings ranged from 1.5 to 3.3, indicating that they were perceived as low to moderately difficult. The exercise with the highest mean rating was the planning task ("Pianifica le tue mosse"), which was designed to train participants' ability to strategically plan their actions to solve the task. Finally, the average satisfaction ratings for each exercise ranged from 3 to 3.8, which corresponds to a moderate to good rating. The exercise with the highest satisfaction score was the barrage task ("Chi cerca trova"), while the exercise with the lowest score was the oddball task ("L'imprevisto") (see Fig. 4 for the complete scores for each exercise).
Questionnaire on Experience and Use of Technology Items
Although most of the participants were familiar with computers (mean = 3.19, SD = 1.16), tablets (mean = 2.56, SD = 1.31), and smartphones (mean = 3.44, SD = 1.03), their previous experience with tablet applications was low (mean = 1.12, SD = 0.34) and they reported spending relatively little time using them. On the weekly-hours scale, participants reported average scores of 2.75 for the personal computer (SD = 1.57), 1.87 for the tablet (SD = 1.26), and 2.65 for the smartphone (SD = 1.50).
We computed Pearson’s correlations between previous experience with tablets and the usability questionnaire (see Table 2), which revealed a positive correlation between previous experience with tablets and the overall entertainment of the app (r(14) = 0.53, p = 0.04). Interestingly, previous experience with tablets did not correlate with the clarity of the instructions, the motivation to perform the exercises, or the stress level. A second set of Pearson’s correlations between previous experience with brain training applications and the usability questionnaire (see Table 3) revealed a positive correlation between previous experience with brain training applications and the overall willingness to use the application again (r(14) = 0.52, p = 0.04). Again, previous experience with brain training apps did not correlate with any other question of the usability questionnaire.
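The correlation analysis above can be sketched as follows. With n = 16 participants, the degrees of freedom for Pearson’s r are n − 2 = 14, matching the r(14) values reported. The data below are invented for illustration, not the study’s raw scores.

```python
import math

# Hypothetical sketch of the correlation analysis: Pearson's r between
# prior tablet experience and an entertainment rating (both 1-5 scales).

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

tablet_exp = [1, 1, 2, 1, 3, 2, 1, 4, 1, 2, 3, 1, 2, 1, 1, 2]  # made up
q8_rating  = [3, 2, 4, 3, 5, 4, 3, 5, 2, 4, 4, 3, 3, 3, 4, 4]  # made up

r = pearson_r(tablet_exp, q8_rating)
# With SciPy available, the associated p-value would come from
# scipy.stats.pearsonr(tablet_exp, q8_rating).
print(f"r({len(tablet_exp) - 2}) = {r:.2f}")
```

In practice the study's p-values would be obtained from the t-distribution with 14 degrees of freedom, as SciPy's `pearsonr` does internally.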
Discussion
In this study, we assessed the usability of a new tablet app, the sFEra APP, developed as a tool for the rehabilitation of executive functioning in stroke patients within the MEMORI-net program, a cross-border Italy-Slovenia project that aims to delineate new common standardized clinical protocols for stroke rehabilitation. According to the international standard ISO 9241-11, usability can be defined as “The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use.” Results show that participants judged the use of the app as a positive experience, and three themes emerged from usability testing:
(i) Clarity of the instructions, graphics, and content of the exercises obtained good ratings. Instructions appeared to be clear to the participants, and overall the app was scored 4 on a 5-point scale. Considering the single exercises, the clarity-of-instruction ratings ranged from 2.9 to 4.6, corresponding to a moderate to very good rating. Clarity is one of the most critical components of computerized task performance (Reppa & McDougall, 2015). The construct of clarity pertains to the transmission (or communication) of information and to the quantity of information itself; it therefore also encompasses simplicity, referring to how accessible and comprehensible the informational load of a given exercise is. Not only the clarity of instructions but also visual clarity plays an essential role in the user experience, as its impact is determined in the very first seconds of the interaction and shaped by the sensory system of the individual (Bolte et al., 2015). Results from recent studies (see Lindgaard et al., 2006) suggest that aesthetic judgment influences the totality of the subsequent experience; the same work also suggests that first impressions can form in as little as 50 ms. Although they occupy a small fraction of the total interaction, these immediate cognitive responses to visual stimulation are essential to the evaluation of the experience.
(ii) Compliance was high, as participants were interested and motivated during task completion. This positive adherence seems to be fostered by the gamification process, in which the user is rewarded upon achieving the goals of the app (Deterding et al., 2011). Several studies suggest that participants are pleased to be able to keep track of their progress and adherence to the treatment (Cheng et al., 2019, for a review). In this pilot test, a similar process is suggested by the positive correlation between experience with brain training applications and the willingness to use the application again, a promising aspect also for the continuation of training after rehabilitation programs.
(iii) Importantly, previous experience with tablets or brain training apps was largely uncorrelated with the usability questions, meaning that individuals without such experience could nonetheless clearly understand the tasks and perform them with enjoyment and motivation, without stress or boredom.
Nonadherence to clinical treatments is an important issue that needs to be addressed in rehabilitation programs, as adherence is partially responsible for a good rehabilitation outcome (Agency for Healthcare Research and Quality, 2016). Nonadherence to protocols affects not only patients’ clinical health outcomes but also their quality of life and health care costs (World Health Organization, 2003). Mobile apps have the potential to improve adherence and the management of treatments (Kaushal & Bates, 2002). The Unified Theory of Acceptance and Use of Technology (Venkatesh et al., 2003, 2012) holds that behavioral intention is the strongest predictor of technology use.
Altogether, results suggest that the sFEra APP is highly usable and motivating and, as such, can be proposed for cognitive rehabilitation interventions. Nevertheless, future investigations should test the acceptability and effectiveness of the app and its functional outcomes through a randomized controlled study on a cohort of stroke patients, as well as measure the cost-effectiveness of the app and its potential use in clinical practice. A future study involving stroke patients will allow us to assess the effectiveness of the application and any potential impact on everyday life; since the sFEra APP is equipped with a sophisticated performance monitoring system, it will be possible to measure the effectiveness of the training for each patient by tracking progress across the different levels. Several qualitative studies (see, e.g., Carabeo et al., 2014; White et al., 2015) support the implementation of computer devices during stroke recovery and demonstrate high acceptance of, and satisfaction with, iPad-based rehabilitation programs. Tablets are considered easy to use, engaging, and beneficial (White et al., 2015); in addition, tablet-based rehabilitation seems even to be preferred over conventional therapy (Carabeo et al., 2014). However, it is not yet clear whether tablet-based computerized brain training programs can improve attention and executive functions. While many studies have shown positive results in healthy adults (e.g., Anguera et al., 2013; Corbett et al., 2015), there is concern that improvements could reflect enhanced skill in using the apps rather than an actual improvement in cognition, since only marginal transfer effects to daily activities have been found (Gajewski et al., 2020; Owen et al., 2010). The feasibility of the app should also be explored with stroke survivors to identify motor or speech/language impairments that might prevent its adoption and utilization.
While our current results represent an important step in the development of a novel tablet-based app program and support the usability of the sFEra APP, we acknowledge that the present study suffers from some limitations. First, we did not test the whole training program: participants performed only two levels per exercise in each area, in order to address the usability of the sFEra APP and the clarity of its instructions. Nevertheless, they did not identify major problems with the application and reported a pleasant experience, even without previous experience with tablet apps or brain training programs, which allows us to predict positive adherence to the sFEra APP program.
Second, the sample was not balanced for gender, and, third, it included mostly participants with medium/high levels of education. Although these two variables are expected to have little or no impact on the usability questionnaire, they can certainly affect participants’ performance and will need to be taken into account when investigating the app’s effectiveness in the target clinical population (i.e., stroke patients).
Conclusion
Individuals with executive deficits are those who find it hardest to follow and benefit from cognitive rehabilitation, given their difficulty in initiating activities, maintaining a response, inhibiting actions, and generalizing instructions to other tasks (Park et al., 2017). It is therefore essential to develop cognitive training specifically focused on these disorders — training that may prove effective, ensure high adherence to the treatment, and allow patients to perform everyday tasks involving planning, working memory, and more. The results of this pilot study indicate that the sFEra APP is a usable app and pave the way for future investigations to confirm its clinical validity. Research on mobile apps is necessary since this technology can augment the effects of face-to-face therapy and provide the opportunity for individuals to engage in homework tasks. This aspect is even more important in light of the recent COVID-19 pandemic, which has heightened the need for virtual cognitive assessment and training. We interpret the positive feedback on user experience as evidence that the sFEra APP is highly functional, motivating, and readily accepted and, as such, has the potential to represent an attractive tool for cognitive improvement and to enhance therapy adherence. We also expect that the multidomain cognitive training of the app will enhance executive functioning to a greater extent, and promote wider generalization to real life, than single cognitive process protocols.
Data Accessibility
The data of the present study can be requested from the corresponding author.
References
Agency for Healthcare Research and Quality (2016). Patient safety in ambulatory settings. Rockville, MD: Agency for Healthcare Research and Quality.
Anderson, V., Levin, H. S., & Jacobs, R. (2002). Executive functions after frontal lobe injury: A developmental perspective.
Anguera, J., Boccanfuso, J., Rintoul, J., Al-Hashimi, O., Faraji, F., Janowich, J., Kong, E., Larraburo, Y., Rolle, C., Johnston, E., & Gazzaley, A. (2013). Video game training enhances cognitive control in older adults. Nature, 501, 97–101.
Appollonio, I., Leone, M., Isella, V., Piamarta, F., Consoli, T., Villa, M. L., Forapani, E., Russo, A., & Nichelli, P. (2005). The Frontal Assessment Battery (FAB): Normative values in an Italian population sample. Neurological Sciences, 26(2), 108–116.
Arean, P. A., Hallgren, K. A., Jordan, J. T., Gazzaley, A., Atkins, D. C., Heagerty, P. J., & Anguera, J. A. (2016). The use and effectiveness of mobile apps for depression: Results from a fully remote clinical trial. Journal of Medical Internet Research, 18(12), e330.
Bölte, J., Hösker, T. M., Hirschfeld, G., & Thielsch, M. T. (2017). Electrophysiological correlates of aesthetic processing of webpages: A comparison of experts and laypersons. PeerJ, 5.
Burgess, P. W., Veitch, E., de Lacy Costello, A., & Shallice, T. (2000). The cognitive and neuroanatomical correlates of multi-tasking. Neuropsychologia, 38, 848–863.
Carabeo, C. G. G., Dalida, C. M. M., Padilla, E. M. Z., & Rodrigo, M. M. T. (2014). Stroke patient rehabilitation: A pilot study of an android-based game. Simulation & Gaming, 45, 151–166.
Chan, R. C., Shum, D., Toulopoulou, T., & Chen, E. Y. (2008). Assessment of executive functions: Review of instruments and identification of critical issues. Archives of Clinical Neuropsychology, 23(2), 201–216.
Cheng, V. W. S., Davenport, T., Johnson, D., Vella, K., & Hickie, I. B. (2019). Gamification in apps and technologies for improving mental health and well-being: Systematic review. JMIR Mental Health, 6(6), e13717.
Conti, S., Bonazzi, S., Laiacona, M., Masina, M., & Coralli, M. V. (2015). Montreal Cognitive Assessment (MoCA)-Italian version: Regression-based norms and equivalent scores. Neurological Sciences, 36(2), 209–214.
Corbett, A., Owen, A., Hampshire, A., Grahn, J., Stenton, R., Dajani, S., Burns, A., Howard, R., Williams, N., Williams, G., & Ballard, C. (2015). The effect of an online cognitive training package in healthy older adults: An online randomized controlled trial. Journal of the American Medical Directors Association, 16(11), 990–997.
Damasio, A. R. (1995). Toward a neurobiology of emotion and feeling: Operational concepts and hypotheses. The Neuroscientist, 1(1), 19–25.
Deterding, S., Dixon, D., Khaled, R., & Nacke, L. (2011). From game design elements to gamefulness: Defining “gamification.” In Proceedings of the 15th International Academic MindTrek Conference: Envisioning Future Media Environments. Tampere, Finland.
Dubois, B., Slachevsky, A., Litvan, I., & Pillon, B. (2000). The FAB: A frontal assessment battery at bedside. Neurology, 55(11), 1621–1626.
Faulkner, L. (2003). Beyond the five-user assumption: Benefits of increased sample sizes in usability testing. Behavior Research Methods, Instruments, & Computers, 35(3), 379–383.
Gajewski, P. D., Thönes, S., Falkenstein, M., Wascher, E., & Getzmann, S. (2020). Multidomain cognitive training transfers to attentional and executive functions in healthy older adults. Frontiers in Human Neuroscience, 14, 586963. https://doi.org/10.3389/fnhum.2020.586963
Gamito, P., Oliveira, J., Coelho, C., Morais, D., Lopes, P., Pacheco, J., Brito, R., Soares, F., Santos, N., & Barata, A. F. (2015). Cognitive training on stroke patients via virtual reality-based serious games. Disability and Rehabilitation, 39(4), 385–388.
Gazzaniga, M. S., Ivry, R. B., & Mangun, G. R. (2006). Cognitive neuroscience: The biology of the mind. New York: W. W. Norton.
Harris, D. J., Wilson, M. R., & Vine, S. J. (2018). A systematic review of commercial cognitive training devices: Implications for use in sport. Frontiers in Psychology, 9.
ISO 9241-11:1998. Ergonomic requirements for office work with visual display terminals (VDTs) — Part 11: Guidance on usability. Geneva: International Organization for Standardization.
Jankowska, A. M., Klimkiewicz, R., Kubsik, A., Klimkiewicz, P., Śmigielski, J., & Woldańska-Okońska, M. (2017). Location of the ischemic focus in rehabilitated stroke patients with impairment of executive functions. Advances in Clinical and Experimental Medicine: Official Organ Wroclaw Medical University, 26(5), 767–776.
Kaushal, R., & Bates, D. (2002). Information technology and medication safety: What is the benefit? Quality & Safety in Health Care, 11(3), 261–265.
Klimova, B., & Valis, M. (2018). Smartphone applications can serve as effective cognitive training tools in healthy aging. Frontiers in Aging Neuroscience, 9, 436.
Leśniak, M., Bak, T., Czepiel, W., Seniów, J., & Członkowska, A. (2008). Frequency and prognostic value of cognitive disorders in stroke patients. Dementia and Geriatric Cognitive Disorders, 26(4), 356–363.
Lewis, J. R. (1994). Sample sizes for usability studies: Additional considerations. Human Factors, 36(2), 368–378.
Lezak, M. D. (1993). Newer contributions to the neuropsychological assessment of executive functions. The Journal of Head Trauma Rehabilitation.
Lindgaard, G., Fernandes, G., Dudek, C., & Brown, J. (2006). Attention web designers: You have 50 milliseconds to make a good first impression! Behaviour & Information Technology, 25(2), 115–126.
Mateer, C. A. (2005). Fundamentals of cognitive rehabilitation. In P. W. Halligan & D. T. Wade (Eds.), Effectiveness of rehabilitation for cognitive deficits (pp. 21–30). Oxford University Press.
Melby-Lervåg, M., & Hulme, C. (2013). Is working memory training effective? A meta- analytic review. Developmental Psychology, 49, 270–291.
Owen, A., Hampshire, A., Grahn, J., Stenton, R., Dajani, S., Burns, A. S., Howard, R. J., & Ballard, C. G. (2010). Putting brain training to the test. Nature, 465, 775–778.
Park, S. H., Sohn, M. K., Jee, S., & Yang, S. S. (2017). The characteristics of cognitive impairment and their effects on functional outcome after inpatient rehabilitation in subacute stroke patients. Annals of Rehabilitation Medicine, 41(5), 734.
Patel, M., Coshall, C., Rudd, A. G., & Wolfe, C. D. (2003). Natural history of cognitive impairment after stroke and factors associated with its recovery. Clinical Rehabilitation, 17(2), 158–166.
Posner, M. I., & Petersen, S. E. (1990). The attention system of the human brain. Annual Review of Neuroscience, 13(1), 25–42.
Povroznik, J. M., Ozga, J. E., Haar, C. V., & Engler-Chiurazzi, E. B. (2018). Executive (dys) function after stroke: Special considerations for behavioral pharmacology. Behavioural Pharmacology, 29(7), 638.
Reppa, I., & McDougall, S. (2015). When the going gets tough the beautiful get going: Aesthetic appeal facilitates task performance. Psychonomic Bulletin & Review, 22(5), 1243–1254.
Rogers, R. D., & Monsell, S. (1995). Costs of a predictable switch between simple cognitive tasks. Journal of Experimental Psychology: General, 124, 207–231.
Seshadri, S., & Wolf, P. A. (2007). Lifetime risk of stroke and dementia: Current concepts, and estimates from the Framingham Study. The Lancet Neurology, 6(12), 1106–1114.
Shallice, T. I. M., & Burgess, P. W. (1991). Deficits in strategy application following frontal lobe damage in man. Brain, 114(2), 727–741.
Sigmundsdottir, L., Longley, W. A., & Tate, R. L. (2016). Computerised cognitive training in acquired brain injury: A systematic review of outcomes using the international classification of functioning (ICF). Neuropsychological Rehabilitation, 26(5–6), 673–741.
Sohlberg, M. M., & Mateer, C. A. (1987). Effectiveness of an attention-training program. Journal of Clinical and Experimental Neuropsychology, 9(2), 117–130.
Tacchino, A., Pedullà, L., Bonzano, L., Vassallo, C., Battaglia, M. A., Mancardi, G., & Brichetto, G. (2015). A new app for at-home cognitive training: Description and pilot testing on patients with multiple sclerosis. JMIR mHealth and uHealth, 3(3), e85.
Vaughan, L., & Giovanello, K. (2010). Executive function in daily life: Age-related influences of executive processes on instrumental activities of daily living. Psychology and Aging, 25(2), 343.
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478.
Venkatesh, V., Thong, J. Y. L., & Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36(1), 157–178.
Virzi, R. A. (1992). Refining the test phase of usability evaluation: How many subjects is enough? Human Factors, 34(4), 457–468.
White, J., Janssen, H., Jordan, L., & Pollack, M. (2015). Tablet technology during stroke recovery: A survivor’s perspective. Disability and Rehabilitation, 37, 1186–1192.
World Health Organization (2003). Adherence to long-term therapies: Evidence for action. Geneva: World Health Organization.
Acknowledgements
We thank Enrico Tongiorgi for coordinating the program MEMORI-net and Francesco Darek Costa for helping with data collection.
Funding
The present study was financed by the Interreg V-A Italia-Slovenia 2014–2020 program MEMORI-net.
Author information
Contributions
CC, MA, AL, and GG conceived the original idea of the present study and developed the exercises of the application, which was realized by PIKKART Srl. CC and FDC collected the data for the present study under the supervision of MA. CC and MA performed the statistical analysis. CC, MA, AL, GG, and RIR contributed to interpreting the results and wrote, reviewed, and finalized the manuscript.
Ethics declarations
Ethical Approval
The study was carried out in accordance with the international ethical guidelines for research involving humans included in the Declaration of Helsinki. The study protocol was approved by the Ethics Committee of the International School for Advanced Studies (SISSA) of Trieste. All participants were informed of their rights as voluntary participants.
Conflict of Interest
The authors declare no competing interests.
Informed Consent
Written informed consent was obtained from all participants prior to participation in the study; participants approved the publication of anonymized data in a journal article.
Cite this article
Coricelli, C., Aiello, M., Lunardelli, A. et al. sFEra APP: Description and Usability of a Novel Tablet Application for Executive Functions Training. J Cogn Enhanc 6, 389–401 (2022). https://doi.org/10.1007/s41465-022-00245-8