Abstract
Task analysis identifies user goals and tasks when using an interactive system. When users perform real-life work, task analysis can be a cumbersome process that gathers a huge amount of unorganized information. Task models provide a means for analysts to organize the information gathered during task analysis in an abstract way and to detail it further if needed. This chapter presents the benefits of using task models for task analysis, with a practical view of the process for building task models. As task models can be large, it is important to provide the analyst with computer-based tools for editing and simulating them. In this chapter, we illustrate the presented concepts with the HAMSTERS notation and its eponymous tool.
Introduction
Task analysis is a cornerstone of User Centered Design (UCD) approaches (Diaper 2004), aiming to collect information from users about the work they do and the way they perform it. According to (Johnson 1992), “any Task Analysis is comprised of three major activities; first, the collection of data; second, the analysis of that data; and third, the modeling of the task domain” (p. 165). When users perform real-life work, task analysis can be a cumbersome process that gathers a huge amount of unorganized information stored in different formats such as paper documentation, text and video (from interviews), transcripts from scenarios... The means chosen for representing the outcomes of task analysis has important implications for the value and insight gained from the process, not least because anything omitted from the representation cannot be discussed (among the stakeholders) or taken into account in later design phases. Task models are an accurate means to represent the outcome of task analysis: they consist of a graphical representation of the work users perform with an interactive application or system.
Several task analysis methods, each with an associated task modeling notation, have been introduced over the last fifty years. Each of them supports user task analysis for a particular application domain, or for a particular stage of the design, development, or usage of an interactive system. The first task analysis method and associated notation, HTA (Hierarchical Task Analysis), introduced in the late 1960’s (Anett 2004; Meyer et al. 1967) for the steel production industry, helps in understanding the skills required in complex non-repetitive operator tasks. Since this seminal work, additional methods and notations have been developed for various purposes such as supporting task-centered system design (Greenberg 2004; Mori et al. 2002), estimating human performance (Kieras 2004), automatically generating interactive applications (Paternò 2002), or taking into account potential human errors at design time (Paterno and Santoro 2002). For each of these contributions, the notation elements match the objective of the proposed method. For example, in the TKS method and its eponymous notation (Johnson et al. 2000), the notation element “type of knowledge” (which may be declarative or procedural) matches the objective of identifying the knowledge prerequisites for a user to be able to use the system. Moreover, contributions on task analysis and modeling go beyond the scope of the design and development of an interactive application. For example, they include training program implementation (Martinie et al. 2011a) and contextual help at runtime (Palanque and Martinie 2011) (Sect. 2 details these benefits). In order to take advantage of all of these benefits, a task modeling notation needs a minimum set of common ground concepts.
This set includes the hierarchy between user goals, sub-goals, and tasks (Anett 2004; Paternò 2002); the task types (Paternò 2002) and their temporal ordering (Anett 2004; Paternò 2002); the objects (Paternò 2002) and the knowledge (Johnson et al. 2000) required to perform the tasks; and finally, the collaborative tasks between different users (Pinelle et al. 2003; van der Veer et al. 1996).
This chapter highlights the benefits of using task models and presents the HAMSTERS notation for task modeling, which embeds the common ground concepts for task modeling, as well as additional elements of notation to refine task descriptions and to handle the representation of large amounts of data. This chapter also presents a stepwise process to describe users’ tasks in a systematic and structured way. The chapter is composed of six main sections. In the next section, we highlight the benefits and the scope of task modeling for task analysis. Section 3 presents the main elements of the HAMSTERS notation. Section 4 describes structuring mechanisms for task models, which enable the scalability of task modeling for the description of multiple and refined tasks, as well as the reusability of the models. Section 5 presents the elements of the HAMSTERS notation for the description of collaborative activities. Section 6 provides a practical view of task analysis and modeling with HAMSTERS, including the eponymous software tool. Section 7 discusses the scope of task models compared to system models and scenarios, which are two artefacts widely used in UCD approaches. Section 8 concludes the chapter.
Purpose and Characteristics of Task Analysis and Task Modeling
This section highlights the main objectives and characteristics of task analysis, as well as the relevance of task modeling to support task analysis, and the main limitation of existing notations.
Benefits of Task Analysis and Modeling in UCD Approaches
Task analysis is the principal technique for ensuring the effectiveness of an interactive system, i.e., for guaranteeing that users can perform their work and reach their goals. Many task analysis and modeling techniques exist to support the design and evaluation of interactive systems and of user performance while interacting with a system. The following non-exhaustive list of objectives highlights the breadth of their benefits:
- Identification and description of the required functions for an interactive system (Greenberg 2004; Paternò 2002)
- Identification and description of the knowledge required to perform a task (Johnson et al. 2000)
- Identification and description of the temporal ordering of the user actions with the system (Paternò 1999)
- Design of new applications consistent with the user’s conceptual model (Paternò 2002)
- Identification and description of the different user roles and actors for groupware systems (Pinelle et al. 2003; van der Veer et al. 1996)
- Identification and description of the workflow between users for collaborative activities (Pinelle et al. 2003; van der Veer et al. 1996)
- Understanding of an application domain (Paternò 1999)
- Recording of the results of interdisciplinary discussions (Paternò 1999)
- Production of scenarios for user evaluation (Winckler et al. 2004), as well as identification and generation of relevant test cases (Campos et al. 2017)
- Heuristic evaluation of the usability of interactive applications (Cockton and Woolrych 2001; Pinelle et al. 2003)
- Predictive assessment of task complexity and workload (motor, cognitive, perceptive) (O’Donnell and Eggemeier 1986)
- Predictive assessment of user performance when interacting with the system (John and Kieras 1996)
- Exploration of the range of ways in which the system may be used (Pinelle et al. 2003)
- Analysis of usability and user experience evaluation data (Bernhaupt et al. 2018)
- Preparation and implementation of training programs (Anett 2004; Annett and Duncan 1967; Martinie et al. 2011a)
- Production of user manuals (Gong and Elkerton 1990; Paternò 1999)
- Contextual help at runtime (Gribova 2008; Palanque et al. 1993; Pangoli and Paternò 1995; Palanque and Martinie 2011)
- Identification and description of possible allocations of functions and tasks between the system and the user (Martinie et al. 2011b; Bouzekri et al. 2021)
- Identification and description of possible allocations of authority, responsibility, and control transitions between the system and the user (Bouzekri et al. 2021)
- Identification and description of potential user errors (Fahssi et al. 2015)
- Identification of possible cybersecurity threats on user tasks (Broders et al. 2020)
Task Analysis, Task Description, and Task Modeling
As stated by Annett (Anett 2004), “Analysis is not just a matter of listing the actions or the physical and cognitive processes involved in carrying out a task, although it is likely to refer to either or both. Analysis, as opposed to description, is a procedure aimed at identifying performance problems (i.e., sources of error) and proposing solutions” (page 69). This sentence clearly defines the border between so-called description activities and analysis activities. In that sentence, however, there is a confusion between the analysis of the work of users (“standard” task analysis) and the analysis of the descriptions/models. These descriptions/models are the result of organizing the information gathered while performing task analysis. Such descriptions/models can, in turn, be analyzed in order to identify missing, redundant, or inconsistent information, or in order to identify a better organization of work, i.e., to redefine users’ goals and tasks and allocate tasks differently between users. Another confusion can be found in (Diaper 2004) in the sentence “Task analyses produce one or more models of the world and such models describe the world and how work is performed in it” (page 6). Of course, in a UCD approach, task analysis and task modeling are activities that should be intertwined, and task models grow as task analysis progresses.
Main Limitations of Existing Task Modeling Notations
Each of the existing task modeling notations matches a specific objective of the task analysis method to which it originally belongs. The elements of a specific task modeling notation thus make it possible to represent and describe the aspects of users’ work required for the corresponding specific analysis, but they may not support a different type of analysis (Martinie et al. 2019): each task modeling notation is usually limited to its original purpose. In the introduction, we presented the example of the TKS notation (Johnson et al. 2000) with the notation element “type of knowledge.” The TKS notation makes it possible to identify and systematically represent the declarative and procedural knowledge required by a user to use a system, which is important for preparing the user manual or training before the system is in use. However, the TKS notation does not support describing the types of task that the user may perform (e.g., interactive task, cognitive task…), which is also important at design time to analyze task complexity.
In order to take advantage of all of the benefits of the task modeling contributions and to provide a wide range of benefits of task analysis, a task modeling notation must therefore embed as many relevant elements of notation as possible.
Foundations of the HAMSTERS Notation
The main philosophy of the HAMSTERS notation is to provide as many relevant elements of notation as possible in order to obtain as many benefits of task analysis as possible. The HAMSTERS notation thus embeds the common ground elements of task modeling notations, and we regularly extend the notation to adapt to the constant evolution of application domains, technologies, and the nature of users’ work.
In this section, we present the main elements of the HAMSTERS notation, which match the common ground elements of the state of the art on task modeling. To illustrate these elements, we use the example of user tasks with an ATM (Automated Teller Machine). An ATM is an interactive system, usually located in a public space, that many people use on a regular basis to withdraw money from their bank account. Figure 1 presents a picture of an instance of such a system. We chose this illustrative example because it is simple and makes it possible to present the use of the main HAMSTERS notation elements.
Figure 2 presents the task model that describes the user tasks to reach the goal “Withdraw money”. It embeds several elements of the HAMSTERS notation. We present these elements one by one in the following sections.
Main Goal of the User and Abstract Tasks
Users have specific goals when interacting with the system. Analyzing user tasks requires identifying the user’s main goal. The element of notation “Main goal” (depicted in Fig. 3a) describes such a main goal. The user may have to reach several intermediate sub-goals to be able to reach the main goal. The element of notation “Abstract task” (depicted in Fig. 3b) describes such a sub-goal. In the task model “Withdraw money” in Fig. 2, the main goal is “Withdraw money,” and the sub-goals (abstract tasks) are “Identify,” “Select amount,” “Wait during ATM processing request,” and “Finalize withdrawal”.
The element of notation “Abstract task” may also describe a task that has not yet been refined or broken down, when the process of modeling is incomplete.
Hierarchical Structuring of Sub-Goals and Tasks
Task models are structured representations of user tasks. Representing the hierarchy between user tasks helps identify the different abstraction levels of the user activities. At the highest abstraction level, the top node represents the main goal of the user. This goal breaks down into sub-goals, and each sub-goal breaks down into sub-sub-goals or user tasks. For example, in the task model “Withdraw money” in Fig. 2, the main goal labelled “Withdraw money” breaks down into the sub-goals “Identify,” “Select amount,” “Wait during ATM processing request,” and “Finalize withdrawal”. The sub-goal “Select amount” breaks down into the tasks “Display possible amounts,” “Recognize needed amount,” and “Select amount”. The sub-goal “Identify” breaks down into sub-sub-goals, and each of these sub-sub-goals decomposes into tasks. There is no restriction on the number of intermediate sub-goals. The tasks that a user will concretely perform are the leaves of the task model, i.e., the tasks at the lowest level of the task model. Sub-goals and tasks in the intermediate levels between top and bottom are abstract groupings.
To break down a goal into sub-goals or tasks, one must answer the question “How is this goal reached?”. Conversely, a task belongs to a sub-goal, or a sub-goal belongs to a goal, by answering the question “Why is this task performed?” or “Why is this sub-goal reached?”. The side arrows “Why?” and “How?” in the right part of Fig. 4 depict these reading directions. In addition, the arrow “Time” at the bottom of Fig. 4 indicates that the tasks execute temporally from left to right. Applying these reading directions, if we ask “How is money withdrawn?,” the answer is that to withdraw money the bank client has to “Identify,” then “Select amount,” then “Wait during ATM processing request,” and finally “Finalize withdrawal”. Conversely, if we ask “Why does the user select an amount?,” the answer is that the user selects an amount in order to withdraw money.
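The goal/sub-goal/task hierarchy described above can be pictured as a simple tree data structure. The sketch below is only an illustration in Python: HAMSTERS itself is a graphical notation, and the `Node` class and the unrefined “Identify” sub-goal are assumptions made for brevity.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A goal, sub-goal, or task in a hierarchical task model (illustrative)."""
    name: str
    children: list["Node"] = field(default_factory=list)

    def leaves(self):
        # The tasks a user concretely performs are the leaves of the model.
        if not self.children:
            return [self.name]
        return [leaf for child in self.children for leaf in child.leaves()]

# The "Withdraw money" hierarchy of Fig. 2, with "Identify" left unrefined here.
withdraw = Node("Withdraw money", [
    Node("Identify"),
    Node("Select amount", [
        Node("Display possible amounts"),
        Node("Recognize needed amount"),
        Node("Select amount"),
    ]),
    Node("Wait during ATM processing request"),
    Node("Finalize withdrawal"),
])
```

Reading the tree top-down answers the “How?” question, while walking from a leaf back up to the root answers the “Why?” question.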
Temporal Ordering of Tasks
A user performs a set of tasks according to a particular temporal ordering, which may be guided by the interactive system. For example, an ATM user cannot achieve the goal “Withdraw the notes” before completing the sub-goal “Select amount”. Temporal ordering operators describe temporal relationships between tasks. The HAMSTERS notation proposes several temporal ordering operators for the description of the possible temporal orderings between tasks. Table 1 presents these operators.
Table 2 and Table 3 present how to use the temporal ordering operators for the description of temporal constraints between tasks, as well as how to read the temporal constraints for each type of operator.
The task model “Withdraw money” in Fig. 2 also provides examples of the usage of three of these temporal ordering operators. The user may reach the “Withdraw money” main goal by performing the sub-goals “Identify,” “Select amount,” “Wait during ATM processing request,” and “Finalize withdrawal” in sequential order. The user has to “Wait” concurrently with the ATM processing the request. The user may choose “Get money and receipt” or “Get money” only to reach the sub-goal “Withdraw the notes”.
The task model “Withdraw money” in Fig. 2 also provides an example of the usage of the temporal ordering operator “Order independent”. The user may either “Take money” first and then “Take receipt,” or “Take receipt” first and then “Take money”.
In addition to temporal ordering operators, temporal properties describe additional temporal possibilities for tasks: the optional property and the iterative property (depicted in Fig. 5). An optional task (depicted in Fig. 5a) is a task that the user may choose to perform or not. An iterative task (depicted in Fig. 5b) is a task that the user performs several times. When the user performs the task an undefined number of times, another task may stop and disable this iterative task. In that case, the temporal ordering operator “Disable” on top of the two tasks describes such a stop (the iterative task appears at the bottom left of the “Disable” operator and the other task at the bottom right). Furthermore, a task may be both optional and iterative (Fig. 5c).
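The semantics of the temporal ordering operators can be made concrete by enumerating the execution sequences they allow. The Python sketch below is an assumption for illustration; the operator names `SEQ`, `CHOICE`, and `ORDER_INDEP` are ours, not HAMSTERS symbols.

```python
from itertools import permutations

def sequences(node):
    """Enumerate every valid ordering of leaf tasks for a small operator tree."""
    if isinstance(node, str):                 # a leaf task
        return [[node]]
    op, children = node[0], node[1:]
    if op == "SEQ":                           # strict left-to-right sequence
        result = [[]]
        for child in children:
            result = [prefix + tail
                      for prefix in result for tail in sequences(child)]
        return result
    if op == "CHOICE":                        # exactly one child is performed
        return [seq for child in children for seq in sequences(child)]
    if op == "ORDER_INDEP":                   # all children, in any order
        result = []
        for order in permutations(children):
            partial = [[]]
            for child in order:
                partial = [p + t for p in partial for t in sequences(child)]
            result.extend(partial)
        return result
    raise ValueError(f"unknown operator: {op}")

# "Take money" and "Take receipt" may be performed in either order (Fig. 2):
model = ("ORDER_INDEP", "Take money", "Take receipt")
```

For this example, `sequences(model)` yields exactly the two orderings allowed by the order independence operator.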
There are three ways to detail an optional task for a user. It is possible to indicate that:
- The user should perform the task (Fig. 6a).
- The user could perform the task (Fig. 6b).
- The user won’t be able to perform the task within the specified context (Fig. 6c).
These possible refinements of an optional task make it possible to describe priorities between the tasks to perform. “Should,” “Could,” and “Won’t” are categories belonging to the MoSCoW method (Stapleton 2003), which supports the prioritization of requirements in project management. The method aims to prioritize the achievement of tasks in cases where there are not enough resources to perform all of them. Concerning the “Won’t” optional task, the use of this notation element points out a potential issue with the interactive system, for which a re-design may provide a solution.
The MoSCoW method also includes the “Must” category, which means that the user must achieve the task. This category corresponds to a non-optional task in the HAMSTERS notation .
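As a rough illustration, the mapping between MoSCoW categories and (non-)optional tasks sketched above can be written down as follows; the enumeration and its wording are our own, not part of the HAMSTERS notation.

```python
from enum import Enum

class Priority(Enum):
    """MoSCoW categories attached to tasks (illustrative mapping)."""
    MUST = "non-optional task"
    SHOULD = "optional task the user should perform"
    COULD = "optional task the user could perform"
    WONT = "optional task the user won't be able to perform in this context"

def is_optional(priority: Priority) -> bool:
    # "Must" corresponds to a non-optional task; the other three refine
    # optional tasks.
    return priority is not Priority.MUST
```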
Task Types
Task types are elements of notation that refine sub-goals or abstract tasks and represent the nature of the task. Task types also indicate whether the user or the system performs the task. Table 4 presents the main types of task.
The main goal task type (row 1 in Table 4) describes a main goal for the user.
An abstract task type (row 2 in Table 4) describes a sub-goal in the task model, or a task that has not yet been refined, which may happen at the beginning of the analysis process.
A user task type (row 3 in Table 4) describes a refined task of human information processing. In particular, a user task may be refined into one of the three types of information processing defined by (Card et al. 1983): motor, cognitive, and perceptive. For example, the user may perform a motor task (such as grabbing a card), a cognitive task (such as recalling a PIN code), or a perceptive task (such as seeing a message displayed on a screen). Such refinement supports the analysis of task complexity as well as of cognitive load, motor load, or required perceptive capabilities. Furthermore, according to the Parasuraman model of human information processing (Parasuraman et al. 2000), a cognitive user task may be refined into a cognitive analysis task or a cognitive decision task. This refinement helps to understand the types of cognitive task carried out by the user and, in particular, what kind of cognitive information processing the user has to perform. It also supports the analysis of the allocation of tasks and functions between the user and the system.
An interactive task (row 4 in Table 4) describes an interaction between the user and the interactive system. An interactive task may be an action performed by the user to input information to the system (interactive input task). It may also be an action performed by the system to bring information to the attention of the user (interactive output task). Interactive input/output tasks support the description of tasks that combine both.
System tasks (row 5 in Table 4) describe the tasks that the system executes. The system may execute a processing task (such as checking the card and account numbers). The system may execute an input task, i.e., the production and processing of an event triggered by an action performed by the user on an input device. It may also execute an output task, i.e., a rendering on an output device (such as displaying a new frame on a screen).
Figure 7 presents the extract of the task model “Withdraw money” (presented in Fig. 2) that describes tasks of different refined types to reach the sub-goal “Insert code”.
The extract of the task model “Insert code” depicted in Fig. 7 reads as follows. In order to reach the sub-goal “Insert code,” the user first accomplishes the cognitive task “Recall PIN code”. Then, the user performs the interactive input task “Enter digit 1,” and the ATM displays a star character on the screen (interactive output task “Display “*””). The user then enters the second digit (interactive input task “Enter digit 2”), the ATM displays the two star characters on the screen (interactive output task “Display “**””), and the sequence goes on.
Figure 8 presents another possible decomposition of the sub-goal “Insert code,” including the detailed representation of user motor and perceptive tasks.
The extract of the task model “Insert code” presented in Fig. 8 reads as follows. In order to reach the sub-goal “Insert code,” the user has to accomplish the cognitive task “Recall PIN code,” then decide to enter the first digit (cognitive decision task “Decide to enter 1st digit”). The user then pushes the button corresponding to the first digit (user motor task “Push button corresponding to 1st digit”), and the system creates an input event (system input task “Input digit 1”). The system then displays the character “*” (system output task “Display “*””), the user perceives the rendering of the character “*” (perceptive task “See “*””), and s/he analyzes that the first digit has been entered (cognitive analysis task “Analyze that 1st digit has been entered”). The user then decides to enter the second digit (cognitive decision task “Decide to enter 2nd digit”), and the sequence goes on.
In this second type of decomposition, an interactive input task breaks down into a sequence of a user motor task followed by a system input task, and an interactive output task breaks down into a sequence of a system output task followed by a user perceptive task. This second type of decomposition is a fine-grained specification of user tasks, which enables the analysis of task complexity (including the amount of cognitive, motor, and perceptive tasks required to use an interactive system), whereas the first type of decomposition focuses on user interactions with the interactive system.
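The refinement rule just stated can be sketched as a small rewriting function. The function below is illustrative only; the task labels it generates are hypothetical, not taken from the chapter's figures.

```python
def refine(task_type: str, label: str) -> list[tuple[str, str]]:
    """Expand an interactive task into its fine-grained sub-tasks
    (illustrative sketch of the decomposition rule)."""
    if task_type == "interactive input":
        # user motor task followed by system input task
        return [("user motor", f"Perform action for {label!r}"),
                ("system input", f"Process input event for {label!r}")]
    if task_type == "interactive output":
        # system output task followed by user perceptive task
        return [("system output", f"Render {label!r}"),
                ("user perceptive", f"Perceive {label!r}")]
    return [(task_type, label)]           # other task types stay as they are

fine_grained = (refine("interactive input", "Enter digit 1")
                + refine("interactive output", 'Display "*"'))
```

Applying the rule to the “Enter digit 1” / “Display “*”” pair of Fig. 7 yields the motor, system input, system output, and perceptive sequence of Fig. 8.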
Systems and Their Elements
Users may need several systems, devices, and software applications to perform their tasks. These devices and systems may belong to the following types: hardware elements, input and output devices, and software applications. The elements of notation presented in Table 5 make it possible to describe precisely whether the use of a particular system is required and/or which part of the system each task requires.
Figure 9 presents the task model of the sub-goal “Insert code,” which includes the representation of the input and output devices that the user manipulates to accomplish her/his tasks.
The task model presented in Fig. 9 reads as follows. The user uses the keypad (input device “In D: Keypad”) to enter the digits of her/his PIN code (interactive tasks connected to the input device by a simple line), and the screen (output device “Out D: Screen”) supports the performance of the interactive output tasks of displaying “*” characters (output tasks connected to the output device by a simple line).
The manipulation of the elements of the system by the user is interactive. Consequently, the systems and their elements support the execution of interactive tasks, system tasks, as well as user perceptive and motor tasks, depending on the type of element (e.g., a display supports perceptive tasks and a keyboard supports motor tasks).
Objects, Information, and Knowledge
The execution of a task may require the use of objects, information or knowledge. Table 6 presents the representation of these HAMSTERS notation elements , as well as the category of data to which they belong.
A software object is data that a system needs, creates, or manipulates.
Information and declarative knowledge are data that the user needs, creates, or manipulates to accomplish a task. Declarative knowledge may be refined into strategic knowledge or situational knowledge (Martinie et al. 2013). These elements describe the information and knowledge the user needs to perform tasks. Identifying them supports the preparation of user manuals or training programs.
Figure 10 presents an extract of the task model “Insert code” including information, knowledge and software object needed to perform the tasks.
The extract of the task model “Insert code” presented in Fig. 10 reads as follows. The user knows that a bank card PIN code is 4 digits long and recalls the PIN code (incoming arrow from the declarative knowledge rectangle “DK: A card bank account PIN code is 4 digits long” to the cognitive task “Recall PIN code”). Recalling the PIN code produces a piece of information containing the PIN code in the user’s mind (outgoing arrow from the cognitive task “Recall PIN code” to the information rectangle “Inf: PIN code”). This information is then used to enter the digits of the PIN code (incoming arrows from the information rectangle “Inf: PIN code” to the interactive input tasks). The interactive input tasks modify the software object containing the PIN code in the ATM (outgoing arrows from the interactive input tasks to the rectangle “SW Obj: PIN code”). The system later uses this software object to authenticate the user.
Table 7 details the possible types of relationships between the task types and the data types.
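One way to picture these relationships is to treat each task as declaring the data it needs and the data it produces, and to check that the data flow of Fig. 10 is consistent. The sketch below is our own illustration in Python, not a facility of the HAMSTERS notation; the label strings are shortened from the figure.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """A task together with its incoming and outgoing data arrows."""
    name: str
    needs: set = field(default_factory=set)
    produces: set = field(default_factory=set)

# Shortened version of the data flow of Fig. 10.
tasks = [
    Task("Recall PIN code",
         needs={"DK: PIN code is 4 digits long"},
         produces={"Inf: PIN code"}),
    Task("Enter digit 1",
         needs={"Inf: PIN code"},
         produces={"SW Obj: PIN code"}),
]

def available_after(tasks, initial):
    """Data available once the tasks have run in sequence; fails if a task
    needs data that no earlier task produced."""
    data = set(initial)
    for task in tasks:
        assert task.needs <= data, f"{task.name} misses {task.needs - data}"
        data |= task.produces
    return data
```

Running the check with only the declarative knowledge as initial data confirms that the information and the software object are produced along the way.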
Duration and Triggering of a Task
The execution of a task may have a specific duration and/or the execution of a task may be performed at a specific moment in time. The HAMSTERS notation supports the description of such quantitative temporal aspects of the execution of a task. The “Duration” notation element, presented in Fig. 11, describes the length of time a task can last.
Figure 12 presents an example of how to use the “Duration” element of notation in a task model. Figure 12 describes that the user task labelled “Wait” lasts 10 seconds.
The “Timestamp” notation element, presented in Fig. 13, describes a moment in time. The completion of a task may produce this type of data, which can then in turn be needed for the execution of another task.
Table 8 presents the main types of relationship between quantitative temporal elements and tasks.
Although the description of the temporal ordering of tasks is possible thanks to the temporal ordering operators, a task may also trigger at a particular moment in time or upon a specific event. The elements of notation “Event” and “Calendar event,” presented in Fig. 14, support the description of such specific task triggers.
Table 9 presents the main types of relationship between events and tasks.
More generally, the temporal ordering operators along with the elements duration, timestamp, event, and calendar event support the description of the five types of task trigger identified by Dix et al. (2004), which are immediate, sporadic, temporal, external event, and environmental cue:
- The task trigger “Immediate” describes that, once the previous task is complete, the following task starts immediately. All the tasks in the “Withdraw money” task model in Fig. 2 are immediate.
- The task trigger “Sporadic” describes that the individual responsible for a task performs it when s/he remembers to do so. Modeling a “Sporadic” trigger requires the use of a cognitive task before the recalled task starts.
- The task trigger “Environmental cue” describes that something in the environment reminds the user to perform a task. Modeling an “Environmental cue” trigger requires the use of a perceptive task followed by a cognitive task before the recalled task starts.
- The task triggers “Temporal” and “External event” require the use of the HAMSTERS data types “Time” and “Event” for their description, because these triggers relate to an event.
Table 10 illustrates how the task triggers “Temporal” and “External event” are described in a task model using HAMSTERS. For these illustrations, the extracts belong to the model of a technician’s tasks whose main goal is to perform the maintenance of the ATM.
These duration and triggering notation elements make it possible to describe temporal constraints about the minimum and/or maximum amount of time required to perform a task. The duration notation element makes it possible to describe that a task shall last a specific amount of time (in that case, the minimum and maximum duration values are the same, as in example b) in the first row of Table 10). The duration notation element associated with the timestamp or calendar event notation element makes it possible to describe that a particular task will happen after a given time, or not before a given time. Figure 15 presents an example of the usage of these notation elements to describe that the ATM swallows the card if the user exceeds a certain duration to retrieve it.
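The card-retrieval timeout of Fig. 15 can be sketched as a simple check on elapsed time. The 10-second limit below is an illustrative value of our own, not one taken from the chapter.

```python
# Hypothetical maximum duration for the "Take card" task (illustrative value).
MAX_RETRIEVE_SECONDS = 10

def card_outcome(elapsed_seconds: float) -> str:
    """Outcome of the 'Take card' task given how long the user took:
    within the maximum duration the card is retrieved, beyond it the
    ATM swallows the card."""
    if elapsed_seconds <= MAX_RETRIEVE_SECONDS:
        return "card retrieved"
    return "card swallowed by ATM"
```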
Mechanisms for Structuring Task Models
The design and development of large real-world applications, such as aircraft cockpits, space systems, and air traffic control, requires the description of hundreds of user tasks. Although these tasks may fit in one task model with tool support, such as visualization mechanisms for large amounts of data (Paternò and Zini 2004), the editing and reuse of task models require additional mechanisms to increase the efficiency of task modeling. The HAMSTERS notation provides three mechanisms to deal with large numbers of tasks in task models. The sub-routine mechanism makes it possible to structure large task models and to reuse parts of them, while allowing parametrization of behavior (Martinie et al. 2011c). The sub-model and component mechanisms support rapid task-model development by increasing the reuse possibilities of existing task models (Forbrig et al. 2014).
Sub-Models
Sub-models are elementary reusable parts of task models. A sub-model may be an elementary task, or a sub-tree composed of several tasks. A large task model may be decomposed into several occurrences of sub-models. A sub-model may appear in several places of a task model, and even in several task models.
While task notations propose reuse at the task type level (an example of such a task type being an “interactive input task”), the sub-model mechanism proposes reuse at the instance level. To illustrate this mechanism, we again use an extract of the model of a technician’s tasks whose main goal is to perform the maintenance of the ATM.
For example, in Fig. 16, the interactive input task instance “Press security button” appears twice, because the user (i.e., the maintenance technician of the ATM) has to perform the same task of pressing the security button twice during the sequence of tasks to open the ATM door. The symbol “COPY” is displayed on the interactive input task icon to indicate that these two tasks are two instances of the same task.
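The instance-level reuse of sub-models can be pictured by letting the same task object appear at several places of a model, which is what the “COPY” marker conveys. Representing the duplicate as one shared Python object is our own illustration, not HAMSTERS machinery; the “Turn key” step is hypothetical.

```python
# One task *instance*, referenced twice in the same sequence.
press_security_button = {"type": "interactive input",
                         "name": "Press security button"}

open_door_sequence = [
    press_security_button,                        # first press
    {"type": "user motor", "name": "Turn key"},   # hypothetical middle step
    press_security_button,                        # second press: a "COPY"
]

# Identity (not equality) distinguishes a shared instance from a lookalike.
copies = [t for t in open_door_sequence if t is press_security_button]
```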
Sub-Routines
Sub-routines are reusable task models. They make it possible to structure large task models into several simpler ones, and to define information passing between these simpler task models. This mechanism is similar to procedure calls in programming languages, and parameterization of the behavior is possible via input and output parameters.
The sub-routine makes it possible to describe recurring behaviors and to describe explicitly the input and output parameters, as well as how they influence the task model behavior. Figure 17 presents the four different visual representations of the main goal of a sub-routine, according to the presence or absence of input and/or output parameters. The concave pentagon on the top left of the abstract goal icon represents the input parameters. It is empty when there is no input parameter (see a and c in Fig. 17) and filled in orange when there is an input parameter (see b and d in Fig. 17). The convex pentagon on the top right of the abstract goal icon represents the output parameters. It is empty when there is no output parameter (see a and b in Fig. 17) and filled in orange when there is an output parameter (see c and d in Fig. 17).
Figure 18 presents the task model “Withdraw money” fully structured using sub-routines. This model is the transformation of the initial task model presented in Fig. 2. Each high-level sub-goal in the task model of Fig. 2 becomes a sub-routine in the task model of Fig. 18.
The task model in Fig. 18 uses the sub-routine “Finalize withdrawal” described in Fig. 19. The sub-routine “Finalize withdrawal” has no input parameter and three output parameters (i.e., the physical objects “Card,” “Receipt,” and “Notes”). This sub-routine describes the hierarchical and temporally ordered set of tasks to finalize the withdrawal of money.
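The analogy between sub-routines and procedure calls can be sketched as follows. The function mirrors the "Finalize withdrawal" example (no input parameter, three output parameters), but the function name and returned values are illustrative, not the HAMSTERS semantics.

```python
def finalize_withdrawal():
    """Sub-routine with no input parameter and three output parameters:
    the physical objects 'Card', 'Receipt' and 'Notes' handed to the user."""
    card = "Card"
    receipt = "Receipt"
    notes = "Notes"
    return card, receipt, notes

# The calling task model receives the three output parameters,
# just as the sub-routine call passes objects back to the caller model.
card, receipt, notes = finalize_withdrawal()
assert (card, receipt, notes) == ("Card", "Receipt", "Notes")
```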
Generic Components
Generic components are a specific type of reusable task models. The goal of using generic components is to allow the reuse of modeling efforts at a refined level of interaction, including perceptive, cognitive and motor tasks. For example, entering a digit using a keypad decomposes into several tasks that are very similar whatever the button pressed. A generic component makes it possible to describe a set of tasks once and to reuse this description by providing tunable parameters. Figure 20 presents the main visual representation of the main goal of a generic component. This visual representation may differ according to the presence or absence of input and/or output parameters, in the same way as the visual representation of sub-routines (represented in Fig. 17).
Figure 21 presents how the sub-goal "Insert code" may be decomposed into four instances of the generic component "Enter digit". The usage of the generic component makes it possible to describe the tasks to enter a digit only once, and to instantiate this description for each of the four digits.
Figure 22 presents the task model of the generic component "Enter digit [number]," where "number" is the main parameter of the generic component. The set of tasks described in the generic component "Enter digit," as well as the data manipulated to perform these tasks, are the same for each instance; only the position of the digit in the PIN code differs.
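A rough sketch of the generic-component idea, with "number" as the tunable parameter. The decomposition into cognitive, motor and perceptive tasks below is illustrative, not the exact content of Fig. 22.

```python
def enter_digit(number):
    """Generic component 'Enter digit [number]': the same set of tasks,
    parameterized by the position of the digit in the PIN code."""
    return [
        f"Recall digit {number} of PIN",           # cognitive task
        f"Press key for digit {number}",           # motor/interactive input task
        f"Perceive feedback for digit {number}",   # perceptive task
    ]

# "Insert code" decomposes into four instances of the generic component:
# the description is written once and instantiated for each digit position.
insert_code = [task for n in range(1, 5) for task in enter_digit(n)]
assert len(insert_code) == 12
```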
Collaborative Tasks
The achievement of a goal may require collaboration between several users, and may require the use of interactive systems, which are named groupware in that particular case and support collaborative work. We extended the HAMSTERS notation to support the representation of computer-supported collaborative work (Martinie et al. 2014). We first based our extension on the common ground concepts for collaborative task modeling introduced by Paternò (1999) (e.g., cooperative tasks) and by van Welie and van der Veer (2003) (e.g., roles). We then refined the notation using the literature on collaborative tasks within groups of people.
Role
Collaborative work involves several persons, each one potentially contributing to common group goals. A role gathers a set of tasks and relationships among them (Mori et al. 2002). A role thus encompasses a set of goals. For example, "bank client" is a role. Withdrawing money using an ATM belongs to the role of the bank client, but other goals may belong to this role, such as withdrawing money in the bank office at a cashier's desk, or verifying a bank statement upon reception. The description of these goals and tasks may thus require the use of several task models.
Collaborative Task Types
Collaborative work is made of tasks at different abstraction levels: at the group level and at the individual level. A group task is a set of tasks that a group carries out in order to achieve a common goal (McGrath 1984), whereas a cooperative task is an individual task a person performs in order to contribute to the achievement of the common goal (Roschelle and Teasley 1995). The CTT task modeling notation embeds the generic cooperative task type (Paternò 1999), but enables the description of neither refined cooperative task types (e.g., user, interactive) nor group task types. Table 11 presents the main types of tasks in the HAMSTERS notation for describing collaborative tasks in task models.
An abstract cooperative task (first row in Table 11) describes a sub-goal performed in a collaborative way, or a collaborative task that has not yet been refined, which may happen at the beginning of the analysis process.
A group task (second row in Table 11) describes high-level tasks that a group of persons (user group task), a group composed of person(s) and system(s) (hybrid group task), or a group of systems (system group task) have to accomplish.
A cooperative task (third and fourth rows in Table 11) describes the refinement of a group task into tasks performed by an individual (user cooperative task in the third row and interactive cooperative task in the fourth row). A cooperative task may be refined into the types belonging to the main categories of user tasks (cognitive, motor, perceptive) and interactive tasks (interactive input, interactive output and interactive input/output).
Specific Properties of Collaborative Tasks
A cooperative task may execute within various space-time constraints (local/distant, synchronous/asynchronous) (Ellis et al. 1991). Table 12 presents the notation elements in HAMSTERS to describe these constraints.
The time and space properties support the description of the distribution of cooperative activities according to time and space. This description then supports analyzing, at design time, whether the tasks are compliant with groupware guidelines. For example, time and space constraints may have an impact on common ground and awareness (Heer and Agrawala 2008). Identifying and representing these constraints at design time thus supports the design of usable groupware.
Furthermore, a cooperative task may have the property of contributing to one or several of the following collaboration objectives: production, coordination, communication (Calvary et al. 1997). Depending on the collaboration objective(s), again, design guidelines may apply (Heer and Agrawala 2008). The identification of the associated collaboration objective(s) thus enables the selection of appropriate design solutions. Figure 23 presents an example of the description of the collaboration objective(s) associated with a cooperative task. It presents three round-shaped forms entitled "Production," "Coordination," and "Communication". In Fig. 23a, the filled-in "Coordination" form means that the cooperative task contributes to a coordination objective. In Fig. 23b, the filled-in "Coordination" and "Communication" circles mean that the cooperative task contributes to both coordination and communication objectives.
Relationships Between Tasks of Different Roles
A cooperative task belongs to a role and is associated with at least one other cooperative task belonging to another role. For example, a bank client withdrawing money at a cashier's desk in a bank office cooperates with the cashier to get money. Figures 24 and 25 present examples of task models describing cooperative tasks between the two roles "bank client" and "cashier," in the case where the bank client withdraws money in a bank office and asks for a specific type of notes. Figure 24 presents the task model "Get notes of a specific type," which belongs to the role "bank client". Figure 25 presents the task model "Serve notes," which belongs to the role "cashier".
The task model in Fig. 24 reads in the following way: the bank client first asks for a specific type of notes, and is then asked the needed number of specific notes. The bank client then answers with the number of needed notes and finally takes the notes. All the tasks in this model are cooperative user tasks as the bank client is cooperating with the cashier. Furthermore, these tasks are all synchronous and local.
The task model in Fig. 25 reads in the following way: the cashier first listens to the question and then asks for the number of needed notes (user cooperative tasks). The cashier then listens and understands the number of needed notes (user cooperative task). Then, the cashier prepares the notes (user individual task) and then gives the notes to the client (user cooperative task).
Each cooperative task in the task model "Get notes of a specific type" (Fig. 24) is associated with or constrained by a cooperative task in the task model "Serve notes" (Fig. 25). For example, the cooperative task "Ask for a specific type of notes" represented in the task model "Get notes of a specific type" (Fig. 24) executes in cooperation with the task "Listen to question" represented in the task model "Serve notes" (Fig. 25). The cooperative task "Ask for a specific type of notes" always starts before the cooperative task "Listen to question". Table 13 describes the associations and constraints for each cooperative task in the task models of both the bank client role and the cashier role. The identification of the order of execution of individual cooperative tasks, for each set of associated cooperative tasks, makes it possible to describe the blocking aspects and constraints in the execution of the collaborative tasks.
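Cross-role ordering constraints of this kind can be sketched as precedence pairs checked against a candidate interleaving of the two roles' tasks. The pairs and the scenario below are illustrative, not a transcription of Table 13.

```python
# Illustrative precedence pairs (before, after) across the two roles.
precedence = [
    ("Ask for a specific type of notes", "Listen to question"),
    ("Ask for number of needed notes", "Answer with number of needed notes"),
    ("Give notes", "Take notes"),
]

def respects_constraints(scenario, constraints):
    """True if, in the scenario, every 'before' task occurs before its 'after' task."""
    pos = {task: i for i, task in enumerate(scenario)}
    return all(pos[a] < pos[b] for a, b in constraints if a in pos and b in pos)

scenario = [
    "Ask for a specific type of notes",    # bank client
    "Listen to question",                  # cashier
    "Ask for number of needed notes",      # cashier
    "Answer with number of needed notes",  # bank client
    "Prepare notes",                       # cashier (individual task)
    "Give notes",                          # cashier
    "Take notes",                          # bank client
]
assert respects_constraints(scenario, precedence)
```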
It is interesting to note that previous work on collaborative task modeling associated a workflow representation with task models in order to facilitate the description of the relationships between tasks of different roles. This is the case for the GTA notation (van der Veer et al. 1996) and FlowiXML (Guerrero et al. 2008), where swimlane representations (one lane per role) enable the visualization of the sequence of cooperative tasks between users having different roles.
Practical View on Task Analysis and Modeling with HAMSTERS
This section presents the main steps for building task models using HAMSTERS and its associated eponym tool, as well as the integration of these steps with task analysis.
Process for Performing a Task Analysis That Relies on Task Models
Task analysis starts with the identification of the objectives of the analysis, as well as of the required output format for the task analysis (Anett 2004; Diaper 2004). The objectives of the task analysis drive the selection of the tasks described in the models, as well as their levels of refinement. Anett (2004) proposes to identify these objectives using the following questions: What is the purpose of the description? What will it be used for? For example, we may design and develop a new version of the ATM. We have to ensure that the new version of the ATM will be at least as usable as the current one. We can formulate the main objective of the task analysis for this redesign as "Foresee if the envisioned version of the ATM could be at least as usable as the existing version". The produced task models will enable the comparison of factors contributing to usability for the two versions of the ATM: predicted effectiveness and predicted efficiency (through task complexity). Once the scope of the task analysis is clear and the need for models established, the main steps of the task analysis consist of collecting information about user tasks. This is the first step presented in Fig. 26, which summarizes the main steps (depicted as rectangles) of a task analysis that relies on task models, as well as their outcomes (depicted as "document" shapes). Collecting information about the ATM and its users consists of observing users interacting with the ATM, interviewing them about how they use the ATM, reading documentation about the ATM… For the envisioned version of the ATM, collecting information about tasks consists of analyzing the prototype for this new version, in order to identify the possible tasks and interactions.
The second step of the process is the production of task models using the collected informal descriptions. Section 6.3 details this step.
The third step is the validation of the task models (possibly mending them according to the results of the validation). Validating the models consists of identifying potential discrepancies between the collected information and the description presented in the models, as well as discrepancies between the ATM (current version or prototype) and the models.
The last step is the processing of the models for the analysis. According to our example objective "Foresee if the envisioned version of the ATM could be at least as usable as the existing version," and using the two versions of the task models (task models for the current ATM and task models for the envisioned ATM), we compare:
- To what extent the user goals are reachable with both versions of the ATM, as well as their reachability according to the planned temporal ordering. This comparison enables the analysis of whether the predicted effectiveness, a contributing factor of usability (International Standard Organization 2018), is at least as good with the envisioned version of the ATM as with the current version.
- The number of tasks and the amount of information that a user has to recall to be able to reach the goals. This comparison enables the analysis of the task complexity of both versions, and of whether the predicted efficiency, a contributing factor of usability (International Standard Organization 2018), is at least as good with the envisioned version of the ATM as with the current version.
The task analysis results (outcome of the last step in Fig. 26) are the results of these comparisons.
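The comparison performed in this last step can be sketched with crude proxies for task complexity. The figures below are invented for illustration, not measured values.

```python
def task_complexity(model):
    """Crude proxies for predicted efficiency: number of tasks to perform
    and number of information items the user has to recall."""
    return model["task_count"], model["items_to_recall"]

# Illustrative figures for the two versions of the ATM task models.
current = {"task_count": 24, "items_to_recall": 3}
envisioned = {"task_count": 19, "items_to_recall": 2}

cur_tasks, cur_recall = task_complexity(current)
env_tasks, env_recall = task_complexity(envisioned)

# The envisioned ATM should be at least as efficient as the current one.
at_least_as_usable = env_tasks <= cur_tasks and env_recall <= cur_recall
assert at_least_as_usable
```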
Building a Task Model
In this section, the focus is set on the step "Production of the task models" of the process for performing a task analysis that relies on task models (presented in the previous section and in Fig. 26). The following process, inspired by HTA (Anett 2004) and specialized according to the elements of the HAMSTERS notation, applies to the production of task models:
1. Gather information about main goals, sub-goals and their ordering.

2. Format this gathered information into an initial version of the task models.

3. Refine the task models by describing in detail:

   (a) Concrete tasks that have to be accomplished.

   (b) Data, systems and devices required to perform these actions.

4. Implement appropriate structuring mechanisms:

   (a) Use sub-routines to avoid duplication of sets of tasks and increase legibility.

   (b) Use generic components to abstract sets of tasks that can be performed with a particular part of the user interface (independently from the related system function).

Repeat this sequence of modeling steps until the task models are suitable for the purpose of the task analysis.
Overview of the HAMSTERS Software Tool
As user tasks may be numerous and complicated, it is important to provide the analyst with computer-based tools for editing task models, refining and structuring tasks, and analyzing them. To this end, the eponym HAMSTERS software tool supports the editing of task models and the simulation of their execution.
Editing Mode of the HAMSTERS Tool
Figure 27 presents a screenshot of the HAMSTERS software tool for editing task models. The editing view is composed of five main areas, each of them supporting specific tasks for the production of task models:
- The panel labelled "1. Project structure," in the top left of Fig. 27, is a project explorer panel to browse the different models of the project (stored in a dedicated directory). This panel presents the models gathered by role and by type of model (task models, sub-routines, generic components, scenarios…), as well as alphabetically ordered, in order to facilitate the search for a specific model.
- The panel labelled "2. Task model editing," in the center of Fig. 27, is the editing area to add, modify, or remove elements of the task model.
- The panel labelled "3. Palette," in the top right of Fig. 27, contains all of the elements of the HAMSTERS notation. It enables dragging and dropping elements of notation into the task model editing area, which speeds up the construction of the task model.
- The panel labelled "4. Task properties," in the bottom right of Fig. 27, contains the set of properties associated with the element selected in the editing panel. It provides support for modifying the properties of an element of the task model.
- The panel labelled "5. Task model tree view," in the bottom left of Fig. 27, is a navigator panel. This panel presents a simplified and synthesized hierarchy of the different elements of the task model, as well as the list of data, objects, and devices described in the task model. It facilitates the search for a specific element of the task model.
Simulation Mode of the HAMSTERS Tool
Figure 28 presents a screenshot of the HAMSTERS software tool while simulating task models. The simulation view is composed of two main areas of interest:
- The central area of the HAMSTERS software tool, labelled "1. Panel containing the instance of task model under simulation," presents the executing instance of the task model. The tasks that are available for execution are displayed with a light green background, and the already executed tasks are displayed with a light green tick on their upper right side.
- The simulation panel on the right of the HAMSTERS software tool, labelled "2. Simulation control panel," is composed of three areas:
  - The tasks execution control area, at the top of the panel, presents the tasks that are available for execution. If the simulation involves several roles, the set of available tasks is displayed grouped by role (a tab per role).
  - The selected task area, in the middle of the panel, presents the task currently selected for execution and, if any, displays an area to input data required for the execution of the task.
  - The on-going executed scenario area, at the bottom of the simulation control panel, labelled "Scenario under completion," presents the sequence of tasks already executed.
It is important to note that the editing view is different from the simulation view. Both views co-exist in the tool, but the simulation view is read-only. We implemented the concept of an instance of a simulated task model because the simulation may require the execution of several instances of a task model at the same time. Such parallel execution of several instances of a task model can occur when two user roles require the execution of the same sub-routine. For example, a sub-routine describing the tasks to select a graphical item in a collaborative user interface may execute for several roles using this interface.
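The simulation of an instance of a task model, restricted to a pure sequence of tasks, can be sketched as follows. This is a toy model of a simulator (available tasks, execution, scenario under completion, independent parallel instances), not the actual HAMSTERS implementation; task names are illustrative.

```python
class Simulation:
    """Toy simulator for a task model reduced to a sequence of tasks."""

    def __init__(self, tasks):
        self.tasks = list(tasks)  # temporally ordered tasks
        self.executed = []        # the "scenario under completion"

    def available(self):
        """In a pure sequence, only the next task is available for execution."""
        return self.tasks[len(self.executed):][:1]

    def execute(self, task):
        assert task in self.available(), f"{task} is not enabled"
        self.executed.append(task)

model = ["Insert card", "Enter PIN", "Select amount", "Take notes"]

# Two independent instances of the same model simulated in parallel.
sim1, sim2 = Simulation(model), Simulation(model)
sim1.execute("Insert card")
sim1.execute("Enter PIN")
assert sim1.available() == ["Select amount"]
assert sim2.available() == ["Insert card"]  # the other instance is unaffected
```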
Task Models Versus System Models and Scenarios
When describing users' tasks and interactive tasks, it is possible to integrate the description of the allocation of tasks between the user and the system. The refinement of the description of such allocation of tasks can lead to the description of the system behavior. Indeed, one can be tempted to integrate state representations, detailed behavior, events… that should remain in a model of the computing device and not in the task models. This aspect is addressed in Sect. 6.3.1, which aims at defining which aspects of the computing device have to be represented in the task model and which should remain outside. As scenarios are widely used in UCD approaches, as for example in the Scenario-Based Design approach proposed in (Rosson and Carroll 2002), and as they are also descriptions of users' tasks, Sect. 6.3.2 defines the difference between a task model and a scenario, as well as how these two artefacts relate to each other.
Task Models Versus Computing System Models
Modeling is one of the cornerstones of computer science, as it is the only way to handle the complexity of computing systems. Many modeling techniques and notations have been proposed in that domain, reaching a climax with the proposal of UML (Rumbaugh et al. 2004) and, more recently, SysML (Friedenthal et al. 2011), where 11 different notations are introduced for describing computer systems. Among all of these notations, the "use cases" notation aims at describing user activity. Those "use cases" are very different from task models even though, at a high level of abstraction, they target the same objective. Describing the commonalities and differences between task models and use cases is beyond the scope of this chapter, but the interested reader can find detailed information in Sinnig et al. (2013).
One of the pitfalls of task modeling is representing in the task models information that should appear in one of the computing system models. Avoiding this is not an easy job, as the task model needs to represent information about interaction, i.e., the activities of the user that trigger commands in the underlying system. As far as the interactive aspects of the computing system are concerned, some of the information, such as internal behavior (e.g., system internal state changes), should remain in the system model. However, the triggering of user interface components (e.g., pressing a UI button or entering a value) should explicitly appear in the task models, as it is part of the user activity. Task models may represent information processing by the computing system if it results in providing feedback to the user, or if it takes time, imposing waiting time on the user side.
Task Models Versus Scenarios
As defined by Rosson and Carroll (2002), scenarios "consist of a setting, or situation state, one or more actors with personal motivations, knowledge, and capabilities, and various tools and objects that the actors encounter and manipulate. The scenario describes a sequence of actions and events that lead to an outcome. These actions and events are related in a usage context that includes the goals, plans, and reactions of the people taking part in the episode" (p. 1). A scenario is thus a sequence of execution of tasks to reach a subset of goals in a particular situation. Task models mainly differ from scenarios because task models consist of an abstract description of the whole set of user tasks, structured in terms of goals, sub-goals and concrete tasks (Anderson et al. 1990). There are additional differences between scenarios and task models. Table 14 summarizes these differences side by side.
Scenarios and task models are different but complementary. Scenarios may support the construction of task models: their analysis supports the identification of verbs (that will correspond to tasks), roles (for each of which a set of task models will be built), and nouns (that will correspond to objects and devices in the task model) (Paternò and Mancini 1999). The other way round, the extraction of scenarios from task models enables the systematic generation of a set of scenarios that cover the whole set of user tasks or that focus on a specific group of tasks. Such extraction is useful to prepare usability evaluations (Mori et al. 2002; Winckler et al. 2004).
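The systematic extraction of scenarios from a task model can be sketched for a toy model with sequence and choice operators: enumerating every sequence of leaf tasks yields a set of scenarios covering the whole set of user tasks. The tree encoding and the model content are illustrative, not the HAMSTERS representation.

```python
def scenarios(node):
    """Return all scenarios (lists of leaf tasks) of a task-model tree.
    A node is ('task', name), ('choice', *children) or ('sequence', *children)."""
    kind = node[0]
    if kind == "task":
        return [[node[1]]]
    if kind == "choice":   # one family of scenarios per alternative
        return [s for child in node[1:] for s in scenarios(child)]
    if kind == "sequence": # concatenate the scenarios of successive children
        result = [[]]
        for child in node[1:]:
            result = [prefix + s for prefix in result for s in scenarios(child)]
        return result

model = ("sequence",
         ("task", "Insert card"),
         ("choice", ("task", "Withdraw money"), ("task", "Check balance")),
         ("task", "Take card"))

assert scenarios(model) == [
    ["Insert card", "Withdraw money", "Take card"],
    ["Insert card", "Check balance", "Take card"],
]
```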
Conclusion
Models are representations of the real world. They are tools for conceptual thinking that enable, at the same time, focusing on details of importance and abstracting away irrelevant information. Task models provide precise and unambiguous information about user tasks. They support almost all the phases of a user-centered design process, provided that their level of detail matches the task analysis objectives. This level of detail relies on the expressive power of the notation. The version of the HAMSTERS notation presented in this chapter embeds several extensions integrated since its creation. These extensions testify to the research effort to increase the precision of the descriptions, along with the broadness of the scope of task analyses that rely on HAMSTERS task models.
In this chapter, we introduced the main elements of the HAMSTERS notation, including task types, temporal ordering operators and temporal constraints, data, objects, devices, collaborative tasks, as well as structuring mechanisms to deal with large numbers of user tasks. We exemplified these elements and mechanisms with the illustrative example of the Automated Teller Machine, which is easy to grasp and remains of a reasonable size for such a presentation. However, we also used HAMSTERS in several projects in various industrial domains, such as aeronautics (Lallai et al. 2021), space (Martinie et al. 2014), and entertainment (Bernhaupt et al. 2018).
The current state of the HAMSTERS notation is quite advanced compared to other task modeling notations, because its development relies on the continuous integration of relevant contributions, as well as on the increase of its expressive power. However, we plan to keep mending and refining it so that it remains relevant whatever the evolution of technologies, application domains, and nature of operators' work. This work is very valuable to us, as it helps to analyze and understand the main concepts associated with these evolutions.
Lastly, designing a notation and its associated tool is a complex and time-consuming process. Given the large spectrum of possible technologies and contexts, extending the notation to cover all description needs is very difficult, and could be counterproductive, as the number of possible elements and associated tool features could decrease the overall usability of the notation and of the tool. To overcome these issues, the HAMSTERS-XL evolution of HAMSTERS, and its supporting tool HAMSTERS-XLE, enable customizing the main task types and data types of the HAMSTERS notation, as well as its supporting tool (Martinie et al. 2019).
The HAMSTERS-XLE tool is publicly available for download.
References
Anderson R, Carroll J, Grudin J, McGrew L, Scapin D (1990) Task analysis: the oft missing step in the development of computer-human interfaces; its desirable nature, value, and role. INTERACT:1051–1054
Anett J (2004) Hierarchical task analysis. In: Diaper D, Stanton N (eds) The handbook of task analysis for human-computer interaction. Lawrence Erlbaum Associates, pp 67–82
Annett J, Duncan K (1967) Task analysis and training design. Occup Psychol 41:211–221
Bernhaupt R, Palanque P, Drouet D, Martinie C (2018) Enriching task models with usability and user experience evaluation data. In: Bogdan C, Kuusinen K, Lárusdóttir M, Palanque P, Winckler M (eds) Human-centered software engineering. HCSE 2018, Lecture Notes in Computer Science, vol 11262. Springer, Cham
Bouzekri E, Martinie C, Palanque P, Atwood K, Gris C (2021) Should I add recommendations to my warning system? The RCRAFT framework can answer this and other questions about supporting the assessment of automation designs. In: Ardito C et al (eds) Human-computer interaction – INTERACT 2021. INTERACT 2021, Lecture Notes in Computer Science, vol 12935. Springer, Cham. https://doi.org/10.1007/978-3-030-85610-6_24
Broders N, Martinie C, Palanque P, Winckler M, Halunen K (2020) A generic multimodels-based approach for the analysis of usability and security of authentication mechanisms. In: Bernhaupt R, Ardito C, Sauer S (eds) Human-centered software engineering. HCSE 2020, Lecture Notes in Computer Science, vol 12481. Springer, Cham. https://doi.org/10.1007/978-3-030-64266-2_4
Calvary G, Coutaz J, Nigay L (1997) From single-user architectural design to PAC*: a generic software architecture model for CSCW. In Proc. of CHI '97. ACM, 242–249
Campos JC, Fayollas C, Gonçalves M, Martinie C, Navarre D, Palanque P, Pinto M (2017) A more intelligent test case generation approach through task models manipulation. Proc ACM Hum-Comput Interact. 1, EICS, Article 9, 20 p
Card S, Moran T, Newell A (1983) The psychology of human-computer interaction. Erlbaum, ISBN 0898598591, pp. I-XIII, 1–469
Cockton G, Woolrych A (2001) Understanding inspection methods: lessons from an assessment of heuristic evaluation. Springer, People and Computers, pp 171–192
Diaper D (2004) Understanding task analysis for human-computer interaction. Lawrence Erlbaum Associates, The handbook of task analysis for human-computer interaction
Dix A, Ramduny-Ellis D, Wilkinson J (2004) Chapter 19:Trigger analysis - understanding broken tasks. In: Diaper D, Stanton N (eds) The handbook of task analysis for human-computer interaction. Lawrence Erlbaum Associates, pp 381–400
Ellis CA, Gibbs SJ, Rein G (1991) Groupware: some issues and experiences. Comm ACM 34(1):39–58
Fahssi R, Martinie C, Palanque P (2015) Enhanced task modelling for systematic identification and explicit representation of human errors. IFIP TC 13 INTERACT conference, LNCS 9299, part IV, Springer Verlag
Forbrig P, Martinie C, Palanque P, Winckler M, Fahssi R (2014) Rapid task-models development using sub-models, sub-routines and generic components. IFIP conf. on Human-Centric Software Eng., HCSE, pp 144–163
Friedenthal S, Moore A, Steiner R (2011) A practical guide to SysML: the systems modeling language, 2nd edn. The MK/OMG Press
Gong R, Elkerton J (1990) Designing minimal documentation using the GOMS model: a usability evaluation of an engineering approach. CHI 90 Proc ACM DL
Greenberg S (2004) Working through task-centered system design. In: Diaper D, Stanton N (eds) The handbook of task analysis for human-computer interaction. Lawrence Erlbaum Associates, pp 49–66
Gribova V (2008) A method of context-sensitive help generation using a task project. Int J Info Theories Appl 15:391–395
Guerrero J, Vanderdonckt J, Gonzalez Calleros J (2008) FlowiXML: a step towards designing workflow management systems. J Web Eng:163–182
Heer J, Agrawala M (2008) Design considerations for collaborative visual analytics. Info Visualiz 7(1):49–62
International Standard Organization (2018). ISO 9241-11:2018 ergonomics of human-system interaction part 11: usability: Definitions and concepts, 2018, ISO
John B, Kieras DE (1996) The GOMS family of user interface analysis techniques: comparison and contrast. ACM Trans Comput-Hum Interact 3(4):320–351
Johnson P (1992) Human-computer interaction: psychology, task analysis and software engineering. McGraw Hill, Maidenhead
Johnson P, Johnson H, Hamilton F (2000) Getting the knowledge into HCI: theoretical and practical aspects of task knowledge structures. In: Schraagen J, Chipman S, Shalin V (eds) Cognitive task analysis. LEA
Kieras D (2004) GOMS models for task analysis. The handbook of task analysis for human-computer interaction, Lawrence Erlbaum Associates, pp 83–116
Lallai G, Loi ZG, Martinie C, Palanque P, Pisano M, Spano LD (2021) Engineering task-based augmented reality guidance: application to the training of aircraft flight procedures. Interact Comput 33(1):17–39
Martinie C, Palanque P, Navarre D, Winckler M, Poupart E (2011a) Model-based training: an approach supporting operability of critical interactive systems: application to satellite ground segments, EICS 2011, ACM DL. pp. 141–151
Martinie C, Palanque P, Barboni E, Ragosta M (2011b) Task-model based assessment of automation levels: application to space ground segments. Proc of the IEEE SMC, Anchorage
Martinie C, Palanque P, Winckler M (2011c) Structuring and composition mechanisms to address scalability issues in task models. In: IFIP TC 13 INTERACT conference. Springer Verlag, pp 589–609
Martinie C, Palanque P, Ragosta M, Fahssi R (2013) Extending procedural task models by systematic explicit integration of objects, knowledge and information. In: Proc European Conference on Cognitive Ergonomics (ECCE). ACM, pp 23–34
Martinie C, Barboni E, Navarre D, Palanque P, Fahssi R, Poupart E, Cubero-Castan E (2014) Multi-models-based engineering of collaborative systems: application to collision avoidance operations for spacecraft. In: Proc of the 2014 ACM SIGCHI Symposium on Engineering Interactive Computing Systems (EICS '14). ACM, New York, pp 85–94
Martinie C, Palanque P, Bouzekri E, Cockburn A, Canny A, Barboni E (2019) Analysing and demonstrating tool-supported customizable task notations. Proc ACM Hum-Comput Interact 3(EICS), Article 12, 26 pp
McGrath JE (1984) Groups: interaction and performance. Prentice Hall, Inc., Englewood Cliffs
Meyer DE, Annett J, Duncan KD (1967) Task analysis and training design. J Occup Psychol 41
Mori G, Paternò F, Santoro C (2002) CTTE: support for developing and analyzing task models for interactive system design. IEEE Trans Softw Eng 28(8):797–813
Navarre D, Palanque P, Bastide R, Paternò F, Santoro C (2001) A tool suite for integrating task and system models through scenarios. In: Proc DSV-IS 2001, LNCS 2220. Springer
O'Donnell RD, Eggemeier FT (1986) Workload assessment methodology. In: Handbook of perception and human performance, vol II: cognitive processes and performance. Wiley, pp 42–49
Palanque P, Martinie C (2011) Contextual help for supporting critical systems' operators: application to space ground segments. In: Activity in Context Workshop, AAAI Conference on Artificial Intelligence
Palanque P, Bastide R, Dourte L (1993) Contextual help for free with formal dialogue design. In: Proc HCI International 1993, pp 615–620
Pangoli S, Paternò F (1995) Automatic generation of task-oriented help. In: Proc ACM Symposium on User Interface Software and Technology (UIST), pp 181–187
Parasuraman R, Sheridan TB, Wickens CD (2000) A model for types and levels of human interaction with automation. IEEE Trans Syst Man Cybern Part A: Syst Humans 30(3):286–297
Paternò F (1999) Model-based design and evaluation of interactive applications. Springer. ISBN 1-85233-155-0
Paternò F (2002) Task models in interactive software systems. In: Handbook of software engineering and knowledge engineering, vol 1. World Scientific, pp 1–19
Paternò F, Mancini C (1999) Developing task models from informal scenarios. In: CHI '99 Extended Abstracts. ACM, pp 228–229
Paternò F, Santoro C (2002) Preventing user errors by systematic analysis of deviations from the system task model. Int J Hum-Comput Stud 56(2):225–245
Paternò F, Zini E (2004) Applying information visualization techniques to visual representations of task models. In: Proc of the 3rd annual conference on Task Models and Diagrams (TAMODIA '04). ACM, New York, pp 105–111
Pinelle D, Gutwin C, Greenberg S (2003) Task analysis for groupware usability evaluation: modeling shared-workspace tasks with the mechanics of collaboration. ACM Trans Comput-Hum Interact 10(4):281–311
Roschelle J, Teasley SD (1995) The construction of shared knowledge in collaborative problem solving. In: O'Malley CE (ed) Computer-supported collaborative learning. pp 69–197
Rosson MB, Carroll JM (2002) Chapter 53: Scenario-based design. In: Jacko J, Sears A (eds) The human-computer interaction handbook: fundamentals, evolving technologies and emerging applications. Lawrence Erlbaum Associates, pp 1032–1050
Rumbaugh J, Jacobson I, Booch G (2004) Unified modeling language reference manual. Pearson Higher Education
Sinnig D, Chalin P, Khendek F (2013) Use case and task models: an integrated development methodology and its formal foundation. ACM Trans Softw Eng Methodol 22(3), Article 27
Stapleton J (ed) (2003) DSDM: business focused development. Pearson Education
van der Veer GC, Lenting VF, Bergevoet BA (1996) GTA: groupware task analysis - modeling complexity. Acta Psychol 91:297–322
van Welie M, van der Veer GC (2003) Groupware task analysis. In: Handbook of cognitive task design. LEA, NJ, pp 447–476
Winckler M, Palanque P, Freitas C (2004) Tasks and scenario-based evaluation of information visualization techniques. In: Proc of the 3rd annual conference on Task Models and Diagrams (TAMODIA '04). ACM, New York, pp 165–172
© 2022 Springer Nature Switzerland AG
Martinie, C., Palanque, P., Barboni, E. (2022). Principles of Task Analysis and Modeling: Understanding Activity, Modeling Tasks, and Analyzing Models. In: Vanderdonckt, J., Palanque, P., Winckler, M. (eds) Handbook of Human Computer Interaction. Springer, Cham. https://doi.org/10.1007/978-3-319-27648-9_57-1
Print ISBN: 978-3-319-27648-9