1 Introduction

It is important for game designers to consider the accessibility of their products. According to census data, almost 11 % of the population in the United States [1] and 15 % of the population in the European Union [2] have some form of cognitive, motor, or sensory impairment. Many such individuals want to play games, but cannot [1]. Yet, enabling them to do so would likely improve their quality of life [3] and need not be difficult to achieve [4, 5].

However, designers can encounter several challenges, such as understanding the constraints associated with specific impairments, evaluating designs in terms of those constraints, and selecting among candidate designs [6–9]. The Virtual and Augmented Environments and Realistic User Interactions to Achieve Embedded Accessibility Designs (VERITAS) project helps to overcome these challenges by simulating impairments and using data to help designers assess their designs. A previous study demonstrated adequate acceptance and usability [10], but its participants were recruited from many different industries, so the specific challenges encountered by game designers were not explored. A particular concern is the diversity of employees in the games industry: as no domain-specific qualification is needed for a career in games design, designers come from diverse backgrounds and bring a wide range of skills, so the technical competence of any individual designer cannot be assumed. Thus, this article aims to address the following question:

  • What are the key challenges that game designers encounter while using the VERITAS framework to design GUI-based games?

In answer, this article will illustrate four themes and their associated implications for the design of future accessibility testing tools that are better suited to the serious games industry.

2 The VERITAS Simulation Framework

The VERITAS approach to accessibility is driven by simulations and metrics. The process comprises three phases: (i) virtual user modeling; (ii) simulation scenario definition; and (iii) simulation of the impaired virtual user. Three tools are used to achieve this: the User Model Generator (VerGen), to specify the nature of the virtual user’s impairments; the Simulation Editor (VerSEd-GUI), to define the actions to test; and the Simulation Viewer (VerSim-GUI), to simulate the experience of the impaired virtual user. Together, these form a workflow comprising the tasks listed in Table 1 (a conceptual sketch of this pipeline follows the table):

Table 1. Tasks involved in the VERITAS assessment workflow
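To make this pipeline concrete, the minimal Python sketch below models the artifact each phase produces as a simple data structure. VERITAS itself is a suite of GUI tools rather than a scripting API, so every type, field, and function name here is a hypothetical illustration of the workflow, not real VERITAS code.

```python
# Hypothetical sketch of the three-phase VERITAS workflow; VERITAS is a
# suite of GUI tools, so these types and names are illustrative only.
from dataclasses import dataclass

@dataclass
class VirtualUserModel:                 # phase (i), produced with VerGen
    impairments: dict[str, float]       # e.g. {"tremor_amplitude_mm": 2.5}

@dataclass
class SimulationScenario:               # phase (ii), produced with VerSEd-GUI
    tasks: list[str]                    # ordered GUI actions to test
    hot_areas: list[tuple[int, int, int, int]]  # clickable regions (x, y, w, h)

@dataclass
class SimulationResult:                 # phase (iii), produced with VerSim-GUI
    metrics: dict[str, float]           # e.g. per-task durations, error counts

def run_assessment(model: VirtualUserModel,
                   scenario: SimulationScenario) -> SimulationResult:
    """Placeholder for running the simulation of the impaired virtual user."""
    ...
```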

Figure 1 illustrates this process. It is important to note that designers have two sources of information on the accessibility of their design: the post-simulation metrics, which support criterion-based assessment and comparison (a simple illustration of such a check follows Fig. 1); and the experience within the simulation itself, which gives the designer insight into the impact of a proposed design on a particular user.

Fig. 1. Workflow of the VERITAS framework for evaluating the accessibility of GUI-based digital games (from [10])
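As a simple illustration of criterion-based assessment, the hypothetical check below compares post-simulation metrics against fixed accessibility criteria. The metric names and threshold values are invented for the example; the source does not specify VERITAS's actual metrics.

```python
# Hypothetical criterion-based check on post-simulation metrics; the metric
# names and threshold values below are invented for illustration.
CRITERIA = {"mean_task_duration_s": 30.0, "missed_clicks": 3.0}

def meets_criteria(metrics: dict[str, float]) -> bool:
    """A design passes if every reported metric stays within its criterion."""
    return all(metrics.get(name, 0.0) <= limit
               for name, limit in CRITERIA.items())

print(meets_criteria({"mean_task_duration_s": 24.0, "missed_clicks": 1.0}))  # True
```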

3 Methodology

To assess the usability of the framework, each tool was evaluated using an empirical user-testing approach in which expert users operate the tool under observation in a lab setting (based on [11, 12]). A group of 31 evaluators from the serious games community, all with a high level of task-related design expertise, was recruited. Each used the tools to assess the accessibility of a sample game provided by the research team. A mixed-methods approach to data collection was adopted in order to identify areas of key concern while providing rich insights. While the participants used each tool, log files were generated; descriptive statistics, such as the number of click events and the total duration required to complete each task, were compared to a benchmark set by an experienced user in order to identify problematic tasks, as sketched below. The designers then commented on each tool using an open-ended questionnaire (derived from well-known heuristics, e.g., [13]), enabling a thematic analysis to elicit insight into each challenge that emerged.
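The following sketch shows how such per-task statistics could be derived from session logs and compared against the expert benchmark. The (task_id, event_type, timestamp_s) CSV layout is an assumed, hypothetical format; the actual VERITAS log schema is not described in the source.

```python
# Minimal sketch of the log-file benchmark comparison described above.
import csv
from collections import defaultdict

def task_stats(log_path: str) -> dict[str, tuple[int, float]]:
    """Return {task_id: (click_count, duration_s)} for one session.

    Assumes rows within a task appear in chronological order.
    """
    clicks: dict[str, int] = defaultdict(int)
    first: dict[str, float] = {}
    last: dict[str, float] = {}
    with open(log_path, newline="") as f:
        for task_id, event_type, ts in csv.reader(f):
            t = float(ts)
            if event_type == "click":
                clicks[task_id] += 1
            first.setdefault(task_id, t)   # first event starts the task timer
            last[task_id] = t              # latest event ends it
    return {tid: (clicks[tid], last[tid] - first[tid]) for tid in first}

def flag_slow_tasks(evaluator: dict[str, tuple[int, float]],
                    benchmark: dict[str, tuple[int, float]],
                    slowdown_s: float = 15.0) -> list[str]:
    """Flag tasks where the evaluator exceeds the expert by > slowdown_s."""
    return [tid for tid, (_, dur) in evaluator.items()
            if tid in benchmark and dur - benchmark[tid][1] > slowdown_s]
```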

4 Findings

4.1 Log File Analysis

Figure 2 shows the mean duration, in seconds, that the evaluators needed to perform each of the tasks listed in Table 1, compared to the experienced user:

Fig. 2. Mean total duration required to complete each task (N = 10)

As expected, the evaluators were consistently slower than the experienced user, although those who successfully completed the tasks were only marginally slower in eight (~62 %) of them. One area of concern is the definition of the user model (1.3–1.5), where the evaluators typically required more than 15 s longer than the experienced user for each task. They were also approximately 20 s slower at setting the hot areas (2.3).

Figure 3 shows the mean count of click events the evaluators needed to complete each task, compared to the experienced user:

Fig. 3. Mean count of click events to complete each task (N = 10)

In many cases, the evaluators required more clicks than the experienced user to complete a task, suggesting lower efficiency. It is interesting to note, however, that fewer clicks were made during the initialization of the user model (1.2–1.3), hinting that less sophisticated models were being defined. The evaluators were also less efficient at setting hot areas (2.3), and there was notably less interaction during the analysis of the simulation results (3.3), suggesting challenges in setting up the test scenario and then reviewing the results.

4.2 Thematic Analysis

Qualitative data procured through the questionnaires were analyzed using two types of thematic analysis, following the initial stages proposed in [14] and [15]. Figure 4 shows an example of a VOSviewer visualization of the frequency and relatedness of terms used by evaluators; a sketch of the term counting underlying such a map follows Fig. 5. Figure 5 shows an example of a thematic map constructed through inductive and deductive coding (see [16] for more details) of the questionnaire responses in NVivo and Microsoft Excel.

Fig. 4. An example of a VOSviewer heat map visualization showing the frequency and relatedness of words in questionnaire responses

Fig. 5. An example of a thematic map illustrating common challenges associated with the VerGen tool
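To indicate the kind of processing behind a map like Fig. 4, the sketch below counts term frequency and within-response co-occurrence using a naive regex tokenizer. VOSviewer's own term extraction, normalization, and clustering are considerably more sophisticated, so this is only a conceptual illustration with made-up example responses.

```python
# Conceptual sketch of term frequency and co-occurrence counting, the raw
# inputs to a VOSviewer-style map; VOSviewer's own extraction is far richer.
import itertools
import re
from collections import Counter

def term_counts(responses: list[str], min_len: int = 4):
    """Return (term frequency, within-response co-occurrence) counters."""
    freq: Counter = Counter()
    cooc: Counter = Counter()
    for text in responses:
        terms = {t for t in re.findall(r"[a-z]+", text.lower())
                 if len(t) >= min_len}
        freq.update(terms)
        # count each unordered pair of terms appearing in the same response
        cooc.update(itertools.combinations(sorted(terms), 2))
    return freq, cooc

freq, cooc = term_counts([
    "unclear units for the user model parameters",
    "assigning hot areas to images was slow",
])
print(freq.most_common(3))
```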

As can be seen in Fig. 4, the meanings of the values required to set up the virtual user’s parameters appear quite prominently on the left. Likewise, on the right, the setup of events and the assignment of hot areas to images were raised frequently. To a lesser extent, the workflow of the tools and the difficulty of finding buttons also appeared.

Figure 5 expands on these issues, providing greater insight into specific problems such as unclear units of measurement. Here, potential reasons behind some of these issues, such as unclear feedback on whether a model parameter’s value has changed, begin to appear more prominently.

5 Conclusion

In general, the feedback provided by the users was positive. Furthermore, evaluator performance was comparable to that of the experienced user for most of the tasks. However, based on a triangulation of the analyses presented in the previous sections, four themes are proposed as key challenges that game designers can encounter while using the VERITAS framework:

  • Comprehending Model Parameters and Interface Features;

  • Understanding the Workflow of the Simulation Tools;

  • Efficiently Setting Up The Simulation Scenario;

  • Responding to Feedback Provided by the Tools.

In order to overcome these challenges, it is recommended that additional features be incorporated to better match the background knowledge of designers and the demands of their work environment. In particular, support features are needed that: guide designers through the terminology and interface of each tool to improve comprehension; address low familiarity with simulation tools to improve ease of use; streamline the workflow with as much automation as possible to reduce the complexity and time required to complete tasks; and present clearer feedback to facilitate the setup of realistic virtual users while better supporting decision making between different design features.

Nevertheless, the VERITAS framework has received an encouraging evaluation, paving the way for a radical change in how accessibility concerns are addressed in serious games. With further improvements, in line with these recommendations, it is hoped that adoption of the framework will increase and subsequently enable improved access to games, thereby enhancing the quality of life of those with impairments.