1 Introduction

At the 2016 Spanish Grand Prix in Barcelona, the Mercedes team started first and second on the grid. Their previous performance was such that they could reasonably be expected to finish in those positions, yet the two cars collided with each other on the first lap and both retired. The root cause of this costly and dangerous outcome was a steering wheel rotary switch in the incorrect position [1]. The complexity of steering wheels in Formula One has increased significantly over the last 25 years, driven partly by the requirement for drivers to dynamically optimize the performance of new technologies fitted to the vehicles and partly by regulatory requirements [2]. Due to the integrated technologies allowed by the regulations, the 2009 Ferrari F1 car featured approximately 30 separate controls on the front of the steering wheel. In the past 5 years, regulations have simplified some car systems, reducing the mean number of controls on Ferrari F1 cars to between 20 and 25, which may also suggest that a practical upper bound had been reached [2]. For the 2016 season, the use of team radio coaching was outlawed, making the driver solely responsible for understanding the interface and deciding on adjustments. This led both World Champion drivers Lewis Hamilton and Fernando Alonso to complain that the complexity of the cars is too high for the drivers to bear this responsibility alone [3]. There are multiple reports of accidents and incidents attributable to drivers being distracted by their interfaces, and of costly performance losses through interface mode errors [4]. These interfaces therefore clearly require improvements in usability in order to reduce both distraction and errors.

In order to carry out a comprehensive analysis of the usability of interfaces in motorsport, it is necessary to employ a range of methods. Harvey and Stanton [5] constructed a development framework and subsequent toolset for the usability analysis of in-vehicle information systems (IVIS). This development framework was employed to formalize the identification of suitable methods for motorsport-based interface analysis. The first stage was to define a set of usability criteria from Nielsen [6], Shackel [7], Stanton and Baber [8], ISO 9241 [9], Bevan [10] and McGrenere and Ho [11]. These were refined based on the specific context of use within motorsport. Driving is a complex activity involving a range of tasks [5]. Motor racing adds further demands that require drivers to carry out compensatory tasks, such as balancing their vehicles on the limit of adhesion, and pursuit tasks, such as placing their vehicle on the racing line [12]. They are subject to multiple stresses: high temperatures, high g-forces, vibration, emotional stress, and the requirement for significant muscular effort [13]. All of these stresses affect cognitive performance in some way [14]. A set of motorsport-specific usability criteria was derived, enabling the generation of a set of five key performance indicators (KPIs). The objective is ultimately to identify the most appropriate methods based on the KPIs, in order to create a set of tools to assess the usability of steering wheel-based interfaces in motorsport. These should allow designers to improve usability, focusing primarily on the documented issues of workload and error reduction. The tools should provide both qualitative and quantitative data, enabling cross-validation. The toolkit framework in Fig. 1 is designed to provide metrics at all stages of development, allowing iterative improvements.
The toolkit should be applicable to all forms of motorsport, and the experimental stages should be suitable for both simulator and real track-based testing. The aims of the toolkit are to improve safety and competitiveness, and to reduce costs, both monetary and in terms of lost points and reputation.

Fig. 1

Toolkit development process

2 Defining the Methodology

Harvey and Stanton [5] defined three areas requiring definition when preparing usability evaluations:

  1. Usability criteria.

  2. Context of use.

  3. Task/user/system interaction.

These areas for definition share considerable overlap. The inter-relation between task, user and system is similar to Shackel’s [7] framework, which outlines how usability is defined through the dynamic interaction of environment, tool, task and user. The context of use is encompassed within the user/environment interaction. Usability criteria are established through the interaction of the four components. In order to identify the appropriate human factors methods to assess the steering wheel control interface designs, it was necessary to carry out a multi-stage process, as specified by Harvey and Stanton [5]. A literature review highlighted the applicable generic usability criteria; extensive research was then carried out into the motorsport domain to identify the primary contextual factors via a thematic analysis. These were revealed to be:

  1. High cognitive task load/dual task performance.

  2. Human, regulatory and performance constraints.

  3. Frequency of use.

  4. Control criticality.

These four contextual factors acted as a lens, focusing the generic usability factors into sixteen motorsport-specific usability criteria. Harvey and Stanton [5] applied a direct mapping of domain-specific criteria to KPIs; in this case, however, the resultant set of human factors methods required to thoroughly assess each criterion would be considerably large. The motorsport-specific usability criteria were therefore thematically grouped into five categories, which, through a direct mapping, result in the derivation of the five corresponding KPIs. More complex relationships nevertheless remain between individual domain-specific usability criteria and the KPIs, and these are illustrated in Fig. 2. Employing the five distinct resultant KPIs, a set of human factors method categories is defined. Within each category, a set of appropriate methods is selected for the quantification of each KPI. An application process is also designed to allow an iterative and agile approach to interface analysis.

Fig. 2

Relationships among the categories that dictate the KPIs

3 Usability Criteria

A literature review identified the primary generic usability criteria for mode error and distraction scenarios, drawn from Nielsen [6], Shackel [7], Stanton and Baber [8], ISO 9241 [9], Bevan [10] and McGrenere and Ho [11]. These are the same sources identified by Harvey and Stanton [5], as they are highly cited and represent the major authors across the aspects of usability. They provided the basis for the identification of KPIs. The usability criteria are listed below:

  1. Learnability

  2. Memorability

  3. Error resistance

  4. Effectiveness

  5. Attitude

  6. Task match

  7. Task characteristics

  8. User characteristics

  9. Understandability

  10. Operability

  11. Clear affordances

  12. Consistency

  13. Feedback

  14. Efficiency
There exist multiple documented cases of both driver distraction and mode error occurring, resulting in a loss of performance [4, 15] or a loss of control [16, 17]. According to multiple resource theory, drivers have finite cognitive resources available [18, 19]. Tasks are divided into those that are primary and secondary: in this context, driving is the primary task, and interface usage is the secondary task [20]. Drivers who spend significant time looking at their interface, deciding how to interact with it, or operating it may not devote enough attention to their primary task, resulting in driving errors such as Kubica colliding with his pit crew while distracted by his interface [17]. Crundall et al. [21] state that it is widely accepted that 90% of driving information is visual; the interface should therefore minimize the requirement for the driver to look at the controls. Conversely, not applying the requisite cognitive effort to an interface interaction may result in interface-based errors, such as the 2016 Rosberg incident [1].

4 Racing Driver Context of Use

In order to define the context of use, it is necessary to examine a wide range of literature and additional sources. A thematic analysis of the information gathered from these sources resulted in four distinct categories, all of which can be highly influential with regard to usability. The steering wheel-based control context of use for racing drivers is a complex field, to which a combination of environmental, regulatory and performance constraints applies [14]. The constraints imposed by the drivers’ environment, vehicle performance requirements and the sport’s regulations are outlined in Sect. 4.2. Driving a racing car generates a high cognitive workload [22], and the risks associated with driving or interface errors can be significant due to the high speeds and competitive nature of the sport. The driving task, along with the nature of the secondary task, is explored in more detail in Sect. 4.1. Frequency of use (Sect. 4.3) and control criticality (Sect. 4.4) also represent important factors that may significantly influence interface designs.

4.1 High Cognitive Workload/Dual Task Performance

Henderson [12] described the complexity of the dynamic tracking task involved in driving a racing car at competitive speeds. The dynamic tracking is a combination of “pursuit” and “compensatory” tasks. Guiding the car on the racing line constitutes the “pursuit” aspect, and maintaining control of the car through the management of slip angles represents the “compensatory” aspect. There will be an optimum steering angle and throttle opening/brake pressure at any given moment to ensure that the car is travelling as fast as possible within the bounds of available grip and traction [12]. Drivers must continually adjust the primary controls, at high frequency and with high precision, to remain at the non-constant limit of adhesion. In addition, drivers are required to carry out secondary tasks using their steering wheel-based controls at regular intervals to maintain performance. Baldisserri et al. [23] noted high variation in lap time and reported workload in a simulator-based study examining the effects of secondary tasks on racing drivers.

4.2 Environmental, Regulatory and Performance Constraints

In environmental terms, there are five main stresses: vibration, emotional stress, muscular effort, g-forces and high temperatures [13, 24]. The majority of these stresses affect the driver’s cognitive abilities in some way, and all have physiological effects [14]. Regulatory constraints include the requirement for the interface to feature specific functionality, such as displays indicating flag status [25]. Performance constraints require the steering wheel to feature a large range of functionality: over 50% of the controls on a generic F1 steering wheel, which comprises 20–30 elements, are dedicated to performance optimization. The overall cockpit design is optimized from an external performance perspective, dictating the diminutive dimensions of the steering wheel and the driving position. Control density is therefore high, due to the number of controls required and the small area available. These influences on the interface design play an important role in defining the KPIs.

4.3 Frequency of Use

Usage frequencies vary between controls: some, such as those used to control brake bias, might be used multiple times a lap throughout a race lasting 60–70 laps; others, such as the pit lane speed limiter, are used only two or three times during a race. The controls’ properties, such as position, size and type, may therefore benefit from optimizations based on usage frequency. Frequency of use can also vary among drivers, depending upon their cognitive abilities or the demands made by their current primary tasks. A driver with a highly demanding task, such as chasing a competitor in wet conditions, will likely be under a considerable cognitive load, leaving little in reserve for secondary tasks [18].

4.4 Control Criticality

Controls vary in the criticality of their role, in terms of the effect of incorrect usage; this includes accidental operation and failure to operate. Some teams fit guards or raised sections between controls in order to prevent accidental usage; for example, the 2017 Red Bull RB13 has a raised edge surrounding the neutral and pit lane speed limiter buttons. In the 2019 Bathurst 12 Hour GT race, one driver accidentally operated the engine stop button instead of the pit lane speed limiter on two separate occasions when re-joining the track [26]. This may have been due to a number of factors; however, the buttons were located in the same quadrant of the wheel, were similarly coloured, and the engine stop button had recently been introduced into that position. This highlights the importance of the placement of critical controls. Criticality can be defined in terms of the effect that incorrect usage, or a lack of usage, would have on the outcome of a race, or in terms of safety. Failure to activate the pit lane speed limiter in time would result in a time penalty for the team, whereas failure to place the car in neutral during a pitstop might have safety consequences. Controls might therefore benefit from being ranked using an error-risk scale, such as that described by Jordan [27], to provide feedback into the design process.

5 Motorsport-Specific Usability Criteria

Any potential changes to current interface designs will be limited by the constraints dictated by the context of use; however, within those constraints, multiple optimizations may be possible through the amendment of aspects such as control layouts, sizes, types, torques, shapes, detents and other parameters. Error resistance [6] and efficiency [6, 9] could be considered the most important generic usability criteria, as they map directly to the error and distraction scenarios respectively; they also apply to multiple contextual factors. The interface should remain highly error resistant when the primary task presents the driver with a high cognitive workload, and under changing environmental constraints. Drivers faced with high levels of workload will have fewer cognitive resources available and less time to operate their interface. This can potentially result in performance breakdowns [18, 28], which could take the form of driving errors or interface-based errors. Controls of critical importance should receive special attention in terms of error resistance, to reduce the probability of critical errors. Gkikas [4] suggested the use of an objective primacy principle to ensure that trajectory and control tasks are prioritized over tactical and strategic tasks. Equally, if an interface element has the capacity to significantly affect vehicle control, such as the pit lane speed limiter or neutral gear buttons, its design and placement should come under special scrutiny to minimize the potentially more harmful effects of accidental usage. Efficiency is important in order to reduce distraction by minimizing the time it takes for the driver to use a control. This applies both to the time required to traverse to a control [29] and to the time taken to operate it. It is of greatest importance for controls used at high frequencies, as repetition compounds any inefficiencies.
In addition, an inefficient control activation that requires a driver to glance at the wheel, move their hand and make an adjustment would cause distraction through multiple demands: cognitive, visual and motoric [30]. Both learnability [6, 7, 10] and memorability [6] play a role in reducing distraction by lowering the secondary task demand: removing the need for the driver to search for a control minimizes thinking time and improves efficiency. Use of a consistent design philosophy [11] enables the driver to use intuition. Miniukovich and De Angeli [31] describe gestalt principles such as prototypicality, symmetry and grouping, suggesting that interfaces designed with these in mind may place controls in locations where they would be expected. Effective feedback to the driver of control settings might reduce the potential for mode error [32], which may be exacerbated by primary tasks whose high cognitive load reduces the available cognitive resources [18, 31]. Extra feedback could also be provided in proportion to operation criticality. Clear affordances could provide the same benefits by minimizing the cognitive effort required to gain mode confirmation [11]. Maximizing understandability [10] may help reduce errors and distraction: the driver will not have to spend time and cognitive resources attempting to understand the location or function of a control prior to using it [33]. User characteristics and attitude may play a role in error and distraction reduction by ensuring that the driver is content with the specification and layout of their controls. Drivers in the highest echelons of motorsport often have bespoke steering wheel control layouts [34]; these involve the reassignment of control functionality rather than changes to control positions.
Care should, however, be taken to ensure that drivers do not dictate control parameters that are non-optimal solely on the basis of familiarity. Effective controls [7, 9], optimized in terms of operability [10], retain a high level of usability throughout a range of environmental conditions; these relate in particular to the driver’s physiological and psychological state [14]. The interface should remain usable even when the driver is under considerable duress, for example, when dehydrated and under extreme pressure from a competitor. Stanton and Baber [8] also define task match, which, within the motorsport context, covers the mapping between a driver’s needs and the functions presented on the wheel. Approximately half of steering wheel control functionality is concerned with vehicle performance optimization. To avoid overloading the driver, the optimal number of these controls should be identified, weighing the performance benefit they provide against the heightened risk of error that may accompany additional driver workload [18]. The area available for positioning controls is limited, with some controls stipulated by regulations [25]; control density and its effects should therefore also be considered, as, according to classical information theory, the time required to process information correlates with its complexity [35]. Drivers’ use of gloves must also be considered, as it may affect usability [36]. Task characteristics, as specified by Stanton and Baber [8], are highly relevant, as drivers face changing conditions and scenarios during races. Specific sequences, such as starting a race or entering the pits, involve multiple controls in a set order. In addition, certain situations might require some controls to be used with greater or lesser frequency. Tasks should be examined for criticality or priority, as this may also affect design decisions [4].

Figure 2 illustrates the complex relationships among the categories that dictate the KPIs.

6 Derivation of Key Performance Indicators

KPIs were derived based on Harvey and Stanton’s [5] usability translation method. Harvey and Stanton mapped KPIs directly to context-based usability criteria; however, there are significant crossovers between these criteria in motorsport, so the KPIs were defined on a thematic basis. The motorsport-specific criteria fell within five clear themes, each of which maps broadly to a corresponding KPI; hence, the five KPIs align with the five sets of motorsport-specific criteria. The underlying relationships are more complex, however, as illustrated by the connecting lines in Fig. 2, which shows the relationships between general usability criteria, motorsport-based contextual factors, motorsport-specific usability criteria and the resultant KPIs.

Five KPIs were identified:

  1. Interface error rates should be minimized, particularly when the primary task results in a high cognitive load.

  2. Task times should be minimized to reduce distraction and improve competitive performance.

  3. Usability should be optimized to minimize visual distraction.

  4. Interface task load should be minimized to reduce the effect on the primary task.

  5. Interface functionality should be easy to learn and recall.

These represent the first iteration of KPI identification. Motorsport is a significantly different discipline from non-competitive driving, and whilst the context has been defined from multiple angles, validation is necessary to refine the KPIs in terms of context aspect weightings. For example, frequency of use and control criticality may conflict in some circumstances. It might be useful to locate a critical control close to the driver’s hand to reduce the probability of error; however, if it is used infrequently, it might appear more appropriate to move it further from the driver’s reach, allowing a more frequently used control to be closer and reducing task times. There is a balance to be found in the design, and this can only be identified through iterative testing. Employing a set of human factors methods to quantify each KPI will allow a validation of the KPIs and a calibration of the context aspect weightings. Future work will carry out this validation using a high-fidelity motorsport simulator developed specifically for the analysis of steering wheel-based controls. Four differing GT steering wheels from 2016 have been selected for replication.

These replica wheels will then be tested by a set of racing drivers from various disciplines. Both quantitative and qualitative data will be collected, based on the human factors methods selected for each KPI. These data will then inform on the weightings and effectiveness of the KPIs and how they can be further improved. A photograph of one of the replica wheels to be used in the simulator validation experiment is shown in Fig. 3. It illustrates the complexity of one of the simpler GT steering wheel configurations.

Fig. 3

A replica of a Porsche 911 GT3 R steering wheel

7 Selection of Method Categories

The first stage of method selection was to identify the applicable categories. Stanton et al. [37] grouped human factors methods into the following: data collection, mental workload assessment, charting, interface analysis, human error identification/human reliability analysis, team performance analysis, cognitive task analysis, situation awareness assessment and task analysis. Data collection techniques were described by Stanton et al. [37] as the first step in the development of a new system, as it is necessary to model and understand the current system. Task analyses such as hierarchical task analysis (HTA) form the basis of multiple additional methods [37]; their usage would provide a clear understanding of the interface-based tasks drivers face by breaking them down to their lowest levels [38]. Employing metrics that provide data specifically on operation times, such as critical path analysis (CPA) and the goals, operators, methods and selection rules (GOMS) keystroke-level model (KLM), allows the attainable performance of the interface to be quantified [39, 40]. This is particularly important due to the time-critical nature of motorsport. Identifying functions that are often repeated, or that require considerable durations, allows the interface to be modified to reduce operation times [40] and therefore reduce cognitive and physical operation demands, which may lead to improved primary task performance. One of the fundamental issues to be resolved is the improvement of the drivers’ interface to prevent erroneous control usage. The human error identification (HEI) and human reliability analysis (HRA) techniques HET and HEART could provide insight into the errors most likely to occur and their potential consequences [41, 42]. Mental workload (MWL) assessment techniques are particularly appropriate due to the necessity of understanding the effects of MWL on the interaction between the drivers’ primary and secondary tasks [18]. Mehler et al. [43] stressed the importance of workload measurement in the design of interfaces related to automobile safety. They stated that, ideally, MWL should be assessed using a range of methods, such as performance-based measures, physiological measures, self-reporting and behavioural observation, and that in dynamic conditions such as driving, combining MWL measurements has the advantage of being fairly continuous and objective. Interface analysis techniques are also likely to provide essential information on the efficiency of the interface and its potential effects on task times and associated distraction [37]. Team performance assessment techniques are unlikely to be of major benefit unless the engineer’s communications with the driver are being considered: during periods of duress, when the driver is most likely to experience cognitive overload (representing the precise conditions that warrant interface optimizations), engineers will deliberately avoid communicating with the driver to prevent causing distraction. Data on individual driver situation awareness (SA) can be gathered through the use of eye tracking and questionnaires. Racing drivers have to maintain high levels of SA for their primary task [4]; metrics that can identify when this is being compromised will provide insight into potential design changes that might be required.

Harvey and Stanton [5] described four general principles requiring consideration when selecting usability evaluation methods. The form of information produced by the methods should be matched to that required according to the KPIs. A combination of objective and subjective methods was advised by Harvey and Stanton [5] to provide both qualitative and quantitative data. Analytical methods should be applied first, to provide as much design insight as possible, prior to more resource-intensive empirical methods. A range of methods were considered within each applicable category, and selections were made based upon the principles mentioned above and the documented levels of efficacy and suitability. The complete set of selected methods is listed in Table 1. KPIs that are specifically assessed by the methodologies are marked as “primary”. If a method provides some assessment of a KPI, it is marked as “secondary”.

Table 1 Methods matched to KPIs

8 Method Application Process and Sequence

The methods are applied in a logical sequence, as shown in Fig. 4; this provides an iterative and agile approach to the development and improvement of interface designs. The initial pre-simulation stages 2 and 3 allow for high-frequency iterations of paper-based design tests, allowing the most improved designs to be fed into the simulation and post-simulation stages of development. These provide further data on the designs, which can then be refined via further cycles of simulation testing. If post-simulation results suggest that a significantly improved set of designs has been developed, prototype wheels can be constructed for on-track testing.

Fig. 4

Flow diagram illustrating the methodology application sequence

User feedback can also be gained at all stages where participants are involved. This subjective information can be utilized to further refine designs.

8.1 Stage 1—Task Analysis (Pre-simulator)

The first stage of an interface assessment will involve carrying out a hierarchical task analysis of a set of discrete operations, as recommended by Stanton et al. [37]. This can be carried out as soon as information on required operations has been gathered, and it forms the foundation of the analyses that follow. Stanton et al. [37] described the following method of generating an HTA. The experimenter first identifies and defines the individual tasks involved, then collects data on user/machine interactions, the technology employed, required steps, task constraints and decision-making constraints. An overall goal is placed at the top of the HTA; approximately five sub-goals are then defined. Sub-goals are further decomposed into operations and another layer of sub-goals. All sub-goals require decomposing to operations at the lowest level. With operations defined, plans are generated that identify how goals are met. The resultant HTA can provide a detailed overview of procedures within the modelled system. An effective method for gathering data to create an HTA is the use of verbal protocol analysis of video footage: transcriptions of racing drivers’ vocalizations of their task when reviewing pre-recorded footage enable a rich picture to be gained, as the cognitive demand and fast pace of motor racing make it both difficult and potentially dangerous to collect the verbalizations in real time. Both the GOMS-KLM and CPA analyses can be carried out at this early stage, provided task time data are available from a pilot study or similar source. The KLM method involves constructing a sequential list of the driver’s tasks and assigning them operators based on the task type, including thinking time, as specified by the associated KLM rule set. Each task is assigned a duration; summing the tasks then yields the total predicted time for a specific sequence [40]. Stanton et al.
[37] described the process of carrying out a CPA analysis as initially requiring an HTA to have been generated to provide the lowest level of individual tasks. These are then arranged in order, using a flow chart, based on temporal dependencies. Tasks are then assigned to a modality table, the modalities of which are defined by the context; within motorsport these might be visual, auditory, cognitive, manual or equilibrioceptive, as drivers gain information from a range of sources, including feeling the car’s adhesion levels through its movement. Tasks are then placed on a multimodal CPA diagram that separates them by modality and time. Each task is represented by a box, called a node. Within each node is the task name, duration, earliest start time (EST), latest finish time (LFT) and float, which represents the “free time” available to that task without delaying the next task. The duration of the task sequence can then be calculated from the node values of the tasks on the critical path [37].
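The forward pass of a CPA calculation can be sketched as below. The task names, durations and dependencies are hypothetical examples of a pit-entry sequence, not values from the paper; a real analysis would take them from the HTA. The backward pass that yields LFT and float is omitted for brevity.

```python
# Sketch of a forward-pass critical path calculation for a multimodal task
# network. Each task has a duration and a list of predecessor tasks; the
# sequence duration equals the length of the critical path.

TASKS = {
    # name: (duration_s, [predecessors]) -- hypothetical pit-entry tasks
    "glance_at_wheel": (0.3, []),                               # visual
    "decide_control":  (0.5, ["glance_at_wheel"]),              # cognition
    "move_hand":       (0.4, ["glance_at_wheel"]),              # manual
    "press_limiter":   (0.2, ["decide_control", "move_hand"]),  # manual
}

def earliest_times(tasks):
    """Return earliest start (EST) and finish (EFT) times via a forward pass."""
    est, eft = {}, {}
    remaining = dict(tasks)
    while remaining:
        for name, (dur, preds) in list(remaining.items()):
            if all(p in eft for p in preds):  # all predecessors resolved
                est[name] = max((eft[p] for p in preds), default=0.0)
                eft[name] = est[name] + dur
                del remaining[name]
    return est, eft

est, eft = earliest_times(TASKS)
print(f"Sequence duration: {max(eft.values()):.2f} s")  # critical path length
```

Here the cognitive step, not the hand movement, lies on the critical path, so shortening traversal distance alone would not reduce the sequence duration.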

8.2 Stage 2—Interface Analysis (Pre-simulator)

The second stage, interface analysis, can also be carried out without the need for participants, although participant feedback, especially from SMEs, may be useful. The requirements are the interface designs and the scenarios of usage or HTA results. This allows prototype designs to be assessed prior to the physical creation of prototype steering wheels, which is highly beneficial in that a large set of initial paper-based prototypes can be designed, analysed using the combined set of interface analysis methodologies, and reduced to an improved subset for further analysis. This stage involves the application of link analysis, layout analysis, Fitts’s law and the Hick–Hyman law. Link analysis requires a graphical visualization of the interface and a sequential list of the operations being used for analysis. Links are drawn on the graphic showing the traversals from control to control, in sequence; these links are then recorded in table form and illustrate how interfaces can be redesigned logically [37]. Layout analysis involves the iterative grouping, and resultant spatial diagrams, of control layouts based on function, importance of use, sequence and frequency. The control layout is then redesigned based on the prototype layout spatial diagrams, resulting in a logical and improved design [37]. Applying Fitts’s law requires an accurate graphical image of the interface, with coordinates and dimensions for each control. For a given set of sequential operations, an index of difficulty and an index of performance can be derived for each traversal between controls, using formulae based on their spacing and dimensions. The Hick–Hyman law provides a formula for predicting choice-reaction times based on the number of stimuli (controls) [35]. In a motorsport context, this can provide insight into the potential delay, and subsequent distraction, caused by drivers making selections from a large number of steering wheel-based controls.
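The two laws above can be sketched numerically as follows. This uses the common Shannon formulation of Fitts’s index of difficulty and a simple Hick–Hyman form; the coefficients `a` and `b` are device- and population-specific and would have to be fitted from pilot data, so the values below are purely illustrative, as are the control dimensions.

```python
import math

# Fitts's law (Shannon formulation): movement time grows with the log of
# the distance-to-width ratio of the target control. Hick-Hyman law:
# choice-reaction time grows with the log of the number of alternatives.

def fitts_id(distance_mm, width_mm):
    """Index of difficulty (bits) for a traversal to a control."""
    return math.log2(distance_mm / width_mm + 1)

def fitts_mt(distance_mm, width_mm, a=0.1, b=0.15):
    """Predicted movement time (s): MT = a + b * ID. a, b are illustrative."""
    return a + b * fitts_id(distance_mm, width_mm)

def hick_hyman_rt(n_controls, b=0.2):
    """Choice-reaction time (s): RT = b * log2(n + 1). b is illustrative."""
    return b * math.log2(n_controls + 1)

# Hypothetical traversal: 120 mm to a 15 mm-wide rotary switch on a wheel
# carrying 25 controls.
print(f"ID = {fitts_id(120, 15):.2f} bits")
print(f"MT = {fitts_mt(120, 15):.2f} s")
print(f"RT = {hick_hyman_rt(25):.2f} s")
```

Within the stated constraints, this makes the design trade-offs explicit: enlarging or repositioning a frequently used control lowers its index of difficulty, while reducing the number of candidate controls lowers the predicted choice-reaction time.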

8.3 Stage 3—Error Identification (Pre-simulator)

Stanton et al. [37] described the initial requirement for an HTA prior to carrying out a human error template (HET) analysis. The analyst(s) apply the HET to every bottom-level step of the HTA, identifying and recording which of the 12 external error modes (EEMs) are applicable, together with a description of the potential outcome. EEMs represent the types of error that occur, such as executing the wrong task, failing to execute a task, or executing a task too late. The probability and the criticality of each potential error are each estimated as low/medium/high. Potential errors that score high on both probability and criticality are classed as “fails”, whilst all other combinations are considered “passes” [37].
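The HET pass/fail rule is simple enough to state exactly. The sketch below is purely illustrative and enumerates all nine probability/criticality combinations, of which only one fails:

```python
from itertools import product

def het_outcome(probability: str, criticality: str) -> str:
    """HET rule: only the high/high combination is classed a fail [37]."""
    return "fail" if (probability, criticality) == ("high", "high") else "pass"

levels = ["low", "medium", "high"]
# Full outcome table for every probability/criticality pairing.
table = {(p, c): het_outcome(p, c) for p, c in product(levels, levels)}
```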

Kirwan [44] and Stanton et al. [37] described the process involved in a HEART analysis. The tasks to be assessed are first identified and an HTA carried out. The set of representative tasks is then classified into one of eight generic categories representing task complexity/familiarity, each with an associated numeric value reflecting the “proposed nominal human unreliability”. Relevant error-producing conditions (EPCs) are then identified using the HEART EPC set, each of which has an associated numeric “multiplier”. For every relevant EPC, the analyst provides a “proportion of effect” as a value between 0 and 1. Using the HEART equation, a “nominal likelihood of failure” can then be calculated. The HEART process includes the proposal of remedial measures depending upon the analysis results. The combination of HET and HEART should provide a comprehensive understanding of the errors that may occur, based on frequency, criticality and conditions. Based on this information, the interface may be modified to reduce the likelihood of errors occurring, especially those with high criticality.
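The HEART equation itself multiplies the nominal unreliability by a factor of ((multiplier − 1) × proportion of effect + 1) for each relevant EPC. A minimal sketch follows; the nominal value and EPC figures in the example are illustrative placeholders, not taken from a specific HEART table entry:

```python
def heart_likelihood(nominal_unreliability: float,
                     epcs: list[tuple[float, float]]) -> float:
    """HEART nominal likelihood of failure.

    epcs: (multiplier, proportion_of_effect) pairs; each EPC scales the
    nominal unreliability by ((multiplier - 1) * proportion + 1).
    """
    likelihood = nominal_unreliability
    for multiplier, proportion in epcs:
        likelihood *= (multiplier - 1.0) * proportion + 1.0
    return likelihood

# Illustrative only: a routine task (nominal value 0.02) with two EPCs,
# e.g. shortage of time (x11, proportion 0.4) and distraction (x3, 0.3):
likelihood = heart_likelihood(0.02, [(11, 0.4), (3, 0.3)])  # 0.02 * 5.0 * 1.6
```

The multiplicative structure makes clear why removing even one strong EPC (for example, by simplifying a mode-change sequence so time pressure matters less) can reduce the predicted failure likelihood substantially.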

8.4 Stage 4—Performance Measures (Intra-simulator)

The next stage of analysis involves participants testing the interface designs through the use of prototype steering wheels within a motorsport simulator. In order to carry out investigations into primary and secondary task performance, it is likely that participants will drive for specific durations and either be subjected to varying track conditions, be instructed to drive at various levels of their ability, or be required to maintain specific lap times. During experiments, the simulator software will log a wide range of variables in real time, including control inputs, vehicle behaviour, and sector and lap times. These provide direct insight into driver performance, which constitutes the primary task. By examining these variables, it may be possible to determine how driver behaviour changes as primary task difficulty increases [23, 45, 46]. Similarly, steering wheel control usage metrics can be recorded, including task times and error rates. During a set of constant, fixed-demand driving tasks, steering wheel usage scenarios can be varied in difficulty, revealing the independent effect of secondary task load on primary task performance. Baldisserri et al. [23] also measured response times to stimuli; for example, a radio message instructing a mode change might take longer for a driver to attend to when under high primary task load.

8.5 Stage 5—Physiological Measures (Intra-simulator)

By fitting participants with an eye-tracking system whilst they are using the simulator, a range of variables can be recorded, such as pupil diameter, areas of interest, fixations and blinks. It is also possible to estimate workload via various means, such as the index of cognitive activity (ICA) [47]. The system logs the data from the eye-tracker’s cameras and processes it using the supplied software; the outputs then provide quantitative data on the two categories of greatest interest: interface distraction and cognitive workload. These data can then be fed forward into subsequent interface designs to focus on improving usability specifically where workload and/or distraction levels are high. HRV would be extracted from electrocardiographic (ECG) data: electrodes would be placed on the participants and connected to a monitoring device, and data recorded as the participants used the simulator. HRV would be derived by locating the R waves and calculating inter-beat intervals (IBIs) [43, 48]. Mehler et al. [43] state that the majority of HRV analyses utilize the 0–1 Hz range, and Wilson [48] specifically examined the middle- and high-frequency HRV bands (0.06–0.14 Hz and 0.15–0.40 Hz, respectively).
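The IBI extraction step can be sketched simply. This is an illustrative fragment only: it assumes R-wave timestamps have already been detected from the ECG trace, and it shows RMSSD (a common time-domain HRV summary) rather than the frequency-band analysis described above, which would additionally require resampling the IBI series onto an even time grid and spectral estimation:

```python
import math

def inter_beat_intervals(r_wave_times_s):
    """Inter-beat intervals (s) from successive R-wave timestamps."""
    return [t2 - t1 for t1, t2 in zip(r_wave_times_s, r_wave_times_s[1:])]

def rmssd(ibi_s):
    """Root mean square of successive IBI differences, a standard
    time-domain HRV metric, in the same units as the input."""
    diffs = [b - a for a, b in zip(ibi_s, ibi_s[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# R waves detected at these times (s) give three ~0.8 s beats:
ibis = inter_beat_intervals([0.0, 0.8, 1.55, 2.35])
variability = rmssd(ibis)
```

Lower HRV under a demanding interface condition, relative to a baseline drive, would be the signature of increased cognitive workload that this stage looks for.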

8.6 Stage 6—Questionnaires (Post-simulator)

On completion of an experimental condition, drivers would immediately be presented with questionnaires. The NASA-TLX provides a subjective indication of the cognitive workload experienced in the study via six subscales, each rated on a scale; the raw results are then combined using a formula [49]. The system usability scale (SUS) provides subjective data on drivers’ opinions of various aspects of usability via a ten-item questionnaire. Scores are calculated using a formula, and the resultant value, between 0 and 100, represents the overall system usability [50]. The driving activity load index (DALI) consists of six factors, each representing a different aspect of driver workload [51]. A scale is provided for each factor, and drivers select a point on each scale. Results are calculated using the same method as the NASA-TLX metric.
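The SUS scoring formula [50] is compact enough to show directly: odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is scaled by 2.5 onto a 0–100 range. A minimal sketch:

```python
def sus_score(responses):
    """System Usability Scale score from ten item responses (each 1-5).

    Odd-numbered items contribute (response - 1); even-numbered items,
    which are negatively worded, contribute (5 - response). The sum is
    multiplied by 2.5 to yield a value between 0 and 100 [50].
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# An all-neutral response set (all 3s) scores exactly 50:
score = sus_score([3] * 10)
```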

9 Conclusion

A clear set of rationales exists for the improvement and optimization of steering wheel-based controls in high-level motorsport. Usability criteria were refined into those that are motorsport-specific, based upon the available literature, and a clear context of use by racing drivers was established using real-world examples of the effects of usability issues. Key performance indicators were then derived using a proven framework. These KPIs provided a set of focused goals to guide the identification of the most appropriate usability assessment categories and methods. An application process was derived to provide an agile and iterative technique through which the selected human factors methods can be applied to quantify the KPIs. The ultimate aim is to develop a toolset that can provide an in-depth analysis of steering wheel-based interfaces in motorsport, enabling potential improvements to be made, thus improving safety and competitiveness. A quantitative follow-up study is planned that will employ a motorsport simulator: the proposed toolset will be applied to the analysis of four existing interface designs, and the quantitative and qualitative outputs from the methods will be analysed to establish usability levels. These data will also form the first validation of the KPIs and the resultant toolset, providing insight into potential improvements.