
1 Introduction

Nowadays, the success of organizations depends on the analysis of large amounts of data [4] and on solid decision making based on them. By evaluating different possibilities and scenarios, one can lay the foundations for consistent decision making. Analyzing information in light of facts and data increases the chances of making successful decisions.

The data analysis process requires human judgment to make the best possible evaluation of incomplete, inconsistent, and potentially deceptive information in the face of rapidly changing situations. Visual analytics (VA), the science of analytical reasoning facilitated by interactive visual interfaces, aims to extract and identify useful information and knowledge from large volumes of data [31]. Unfortunately, there is a natural correlation between the complexity of the data and the complexity of the tools used to study them [8]. This complexity requires that the design of VA tools address the challenge of facilitating interaction to support the understanding, manipulation, and analysis of large amounts of information.

Recently, the Human-Data Interaction (HDI) area has investigated how people interact with data, in a manner analogous to the research conducted in the Human-Computer Interaction (HCI) area on the relationship between people and computers [12, 15]. HDI has been described as the “human manipulation, analysis and meaning creation from bulky, unstructured and complex datasets” [11]. We consider HDI aspects related to VA tools, and we adopt an approach to HDI that highlights the importance of taking the various stakeholders into account throughout the entire development cycle.

Designing VA tools based on guidelines is an important approach to help materialize the knowledge and experience acquired by various experts in the field [11, 26]. However, this approach does not favor the engagement of people in a process of shared construction of the software design. In turn, participatory approaches [16, 21] allow obtaining diverse knowledge to improve products through the vision of the people who are potentially affected by their construction. Nevertheless, the participation of people with diverse profiles does not favor the use of technical inputs, e.g., design guidelines. We claim that opting for a guideline-based design approach should not necessarily exclude the possibility of taking advantage of the participation of people with different profiles.

In this study, we propose evaluation techniques that allow design decisions about VA tools, based on HDI guidelines, to be taken in a participatory way. The challenges brought by our approach involve the research and exploration of methods to select and clarify the guidelines relevant to a given decision, as well as the definition of appropriate practices to ensure the adequate participation of key stakeholders in project decisions on the use of guidelines. The contribution of this investigation consists of new practices that create the necessary conditions for participants of all profiles to collaborate in the design process.

Our methodology started with research on participatory practices that can be used in the evaluation phase and with a study of alternatives for adapting these practices to the context of HDI design guidelines. Then, we investigated and selected design guidelines related to the design problem and looked for examples and systems to facilitate the understanding of the guidelines by the involved stakeholders. We created tasks for the participants to carry out and elaborated questions to guide decision making.

This investigation was conducted and applied in a case study related to a data-intensive environment. The evaluation of the proposed practices was carried out in the context of the UNISIM laboratory of the Center for Petroleum Studies (CEPETRO) at the University of Campinas, which develops methodologies and tools to assist in the decision analysis process [30]. In this context, one of the research lines emphasizes the study of the selection of production strategies in oil fields based on a 12-step methodology. In this research, the practices were conducted in workshops involving design decisions about SEPIA, a VA tool developed by the UNISIM laboratory. This tool supports VA tasks commonly performed by domain engineers and researchers. In particular, we addressed how to design coordinated visualizations [2]. The results obtained from the conducted workshops indicate that the participatory practices for the evaluation of the visualization guidelines were relevant for the design decisions in SEPIA.

The remainder of this article is organized as follows: Sect. 2 presents the background with the fundamental concepts and related work. Section 3 reports on the proposed practices. Section 4 describes the conducted case study. Whereas Sect. 5 discusses the findings and lessons learned, Sect. 6 presents our final considerations and directions for future research.

2 Background and Related Work

The proposal for design practices presented by Churchill [9] attempted to demystify the “genius designer” whose instincts and intuition lead to great design decisions. Her work states that a proactive and critical stance is needed to design, develop, or evaluate products that incorporate data capture, storage, and analysis. In this sense, we address the design of VA tools based on guidelines in a participatory design approach.

Participatory Design. The field of Participatory Design spans a rich diversity of theories, practices, analyses and actions, with the goal of working directly with users and other stakeholders in the design of social systems that are part of human work [16]. This approach considers that everyone involved in a design situation is capable of contributing to it [21].

The area is rich in terms of practices and in the extent of its theoretical development. There is a large number of practices that vary in relation to the phase of the development life cycle and address who participates with whom, in what, the appropriate group size, and the type of project in which they have been used [16, 21].

In our proposal, workshops with a participatory approach are used to take advantage of people’s participation in activities throughout the design cycle. We use some practices described in the literature and propose new participatory practices with the goal of conducting guideline evaluation.

Design Guidelines. From their experience in various projects, design specialists can compile recommendations and provide designers with the ability to determine the consequences of their design decisions. The use of these recommendations allows less experienced designers to benefit from the knowledge of more experienced ones. Design rules in the form of standards and guidelines provide direction for design. They are recommendations a designer can follow to enhance the interactive properties of the system [10]. Design guidelines vary in their level of abstraction, generality, and authority.

We use the term guideline in a broad sense to refer to design recommendations made by experts that can be used in the design of other systems in a comprehensive way, without distinguishing the level of generality, abstraction, or authority. One example is the information density guideline, which suggests “to provide only necessary and immediately usable data; do not overload your views with irrelevant data” [29]. We use HDI guidelines in the evaluation phase as an approach to bring specialists’ knowledge to the identification of points for redesign that favor HDI.

Coordinated Visualizations Guidelines. A multiple view system uses two or more distinct views to support the investigation of a single conceptual entity [2]. The advantages of coordinating multiple visualizations are improved user performance, discovery of unforeseen relationships, and unification of views [24]. The design of this kind of system involves decisions ranging from determining the layout to constructing sophisticated coordination mechanisms and interactions between the various dimensions of the space. Guidelines have been provided to support the decisions involved in this context [2]. There are two sets of guidelines. The first supports the decision of whether or not to use coordinated visualizations, e.g., the rule of complementarity: “Use multiple views when different views bring out correlations or disparities” [2]. The other set helps in deciding how to design coordinated visualizations, e.g., the rule of self-evidence: “Use perceptual cues to make relationships among multiple views more apparent to the user” [2]. In this study, we explore both sets of coordinated visualization design guidelines together with other, more generic interaction guidelines.
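To make the coordination behind these guidelines concrete, the sketch below shows one common way such a mechanism can be structured: views subscribe to a shared selection so that an interaction in one view is reflected in the others, which also supports the rule of self-evidence. This is our own minimal illustration in TypeScript, not taken from [2] or [24]; the names (ViewSelection, View, Coordinator) are assumptions.

```typescript
// Minimal sketch of coordinated multiple views: each view shows the same
// conceptual entity and reacts when another view changes the shared selection.
// All names (ViewSelection, View, Coordinator) are illustrative, not from [2] or [24].

interface ViewSelection {
  itemIds: string[]; // entities currently highlighted across views
}

interface View {
  name: string;
  render(selection: ViewSelection): void; // redraw, using perceptual cues (self-evidence)
}

class Coordinator {
  private views: View[] = [];
  private selection: ViewSelection = { itemIds: [] };

  register(view: View): void {
    this.views.push(view);
    view.render(this.selection); // keep states consistent from the start
  }

  // Called by any view when the user interacts with it (e.g., brushing).
  updateSelection(source: View, selection: ViewSelection): void {
    this.selection = selection;
    // Propagate to the other views so relationships stay apparent to the user.
    this.views.filter(v => v !== source).forEach(v => v.render(this.selection));
  }
}

// Usage: a time-curve view and a bar-chart view showing the same dataset.
const coordinator = new Coordinator();
const curve: View = { name: "curve", render: s => console.log("curve", s.itemIds) };
const bars: View = { name: "bars", render: s => console.log("bars", s.itemIds) };
coordinator.register(curve);
coordinator.register(bars);
coordinator.updateSelection(curve, { itemIds: ["item-12", "item-07"] });
```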

Evaluation Techniques. Evaluation tests are used to assess the usability, functionality and acceptability of an interactive system. Some approaches are based on experts’ evaluation whereas others involve users. Evaluation seeks to assess the quality of an interface design, both during the development process and when the software is almost ready. An evaluation method must be chosen carefully and must be suitable for the system under analysis [10].

Among several methods, we highlight Heuristic Evaluation [23], which consists of inspecting the interface based on a list of heuristics. Heuristic evaluation is a method for finding usability problems in a user interface design. Muller et al. [22] proposed a revision of Nielsen’s heuristic evaluation method based on participatory approaches, including users (work-domain experts) as inspectors. They extended Nielsen’s original heuristic set with several process-oriented heuristics. The evaluation method guided an iterative design process.

This technique is similar to one of the practices proposed in our work, but it focuses on a specific extension of Nielsen’s heuristic set. Our initial set of guidelines includes Nielsen’s heuristics and further covers a larger set of heuristics in the context of HDI and of specific issues, e.g., coordinated visualization guidelines. This type of assessment is generally adopted in the design phase. However, it is not appropriate for our participatory approach, because it is important to have input from stakeholders at all stages of the project. We propose adaptations to the heuristic evaluation method for engaging stakeholders while aggregating the knowledge and experience provided by the experts through the use of guidelines.

The online community domain was the target of a study that combined participatory design and development methods with heuristic evaluation [27]. A specific set of heuristics was developed, extending Nielsen’s heuristics and adding a specific set of sociability heuristics. The set of heuristics was then turned into a questionnaire that was iteratively tested with online communities. Refining the set of sociability heuristics was one of the goals of the study.

The studies conducted by Muller et al. [22] and Preece et al. [27] conceived practices of evaluation by guidelines combined with participatory methods. However, these studies neither involved VA nor emphasized HDI. These facts highlight the novelty of our proposed practices, which combine stakeholder participation and HDI guidelines for a VA tool in a complex domain.

Leman et al. [17] studied typical data visualizations that result from linear pipelines that start by characterizing data and end by displaying them. The goal of the proposal was to provide natural means of adjusting the displays to support good HDI. This method supports a dynamic process for defining visualizations in which users learn from visualizations and the visualizations adjust to the expert’s judgment. This proposal differs from ours mainly because it is a method applied at execution time and not a process for designing VA tools in the HDI context.

3 Participatory Practices for HDI Guidelines Evaluation

Our proposal supports guideline-based design decisions being taken in a participatory way. Subsect. 3.1 presents an overview of a design process for HDI [32], within which our evaluation techniques are developed. The following sections detail the evaluation tasks defined to support the design decisions. These activities involve the selection of HDI design guidelines (cf. Subsect. 3.2); the preparation of the workshops (cf. Subsect. 3.3); a procedure to assist participants in understanding the HDI design guidelines (cf. Subsect. 3.4); and a technique for conducting the participatory evaluation with HDI guidelines (cf. Subsect. 3.5).

3.1 Overview of the Process for HDI Design

We summarize the entire design process for HDI, which combines guidelines with participatory practices [32]. It includes several activities that are orchestrated by the flow illustrated in Fig. 1. It starts with problem clarification activities. Initially, it is necessary to know the stakeholders and to understand the concepts and values of each one involved in the design problem (cf. item A of Fig. 1). All stakeholders and their interests have to be identified [19]. We explore supporting artifacts that help thinking beyond the traditional participants [3]. It is possible to discover people who are not directly involved with the tasks of the selected design scope but who are affected by the results produced by these tasks. For the identified stakeholders, it is important to know the problems and issues as well as the ideas and solutions related to each of them [3]. They can have different perspectives on the subject. In this context, well-known elicitation techniques from requirements engineering are used to understand the subject.

Fig. 1. Process for HDI design by combining guidelines with participatory design approaches (adapted from [32])

Design activities should focus on meeting the needs of stakeholders by providing solutions to the problems and issues reported by them. Participants of all hierarchical levels should give their contributions during the participatory design workshops (cf. item B of Fig. 1). We use Storyboarding and BrainDraw [21] as the main techniques to support design activities.

In our proposal, during the design activities the group creates a low-fidelity prototype without guidance from design guidelines. The participants should propose alternatives freely in the design phase, without worrying about guidelines, so that they do not stop thinking of creative ways to solve the problem for fear of not complying with a guideline. The guidelines are introduced afterwards.

Before beginning the evaluation activities, there is a pre-evaluation phase (cf. item C of Fig. 1). It includes the selection of guidelines (cf. item E of Fig. 2) and the preparation of the workshops for evaluation (cf. item F of Fig. 2), detailed in Subsects. 3.2 and 3.3, respectively. In this phase, a non-functional navigable prototype must be constructed so that participants can interact with it and better understand how their suggestions given in previous activities would be mapped to the software.

We need to verify to what extent users think the prototype might help them accomplish their tasks, and how the prototype can be improved. All the activities of the evaluation phase help in making decisions about the prototype refinement (cf. item D of Fig. 1). To learn the impression that a prototype causes, we use techniques such as the Participatory Thinking-aloud Evaluation [18] and User Evaluation. In the first, a participant is invited to interact with the navigable prototype to complete a use case, conducting a pre-defined task. In our proposal, all participants in a workshop speak aloud while one participant interacts with the prototype. When the design of the prototype is mature, a user evaluation based on the Query Technique [10] is conducted, asking the participants directly about the results. It is applied by interview or questionnaire.

The navigable prototype and the selected guidelines support the evaluation activities. We recommend the use of guidelines in more advanced phases of the design. They are introduced to the participants only in the evaluation phase. This phase includes the understanding of guidelines for all participants (cf. item G of Fig. 2) and the participatory evaluation with the guidelines (cf. item H of Fig. 2), detailed in Subsects. 3.4 and 3.5, respectively.

After the evaluation activities, the team has the opportunity to decide whether to carry out a redesign activity to adjust the prototype to the identified issues. The decision to be taken, as a result of the evaluation activities, is whether the guidelines will be adopted in the prototype solution. The activities may also result in suggestions of how to use them. If the group decision is to change the prototype, it may be necessary to return to the design activities. This cycle can be repeated more than once, until participants feel that the prototype design is appropriate for their needs.

Fig. 2. Pre-evaluation and evaluation activities for the evaluation of HDI design guidelines

3.2 Selection of HDI Design Guidelines

At the beginning of the evaluation process, before the evaluation workshops, it is necessary to conduct tasks with the objective of defining which guidelines are relevant for supporting decisions in a specific context (cf. item E of Fig. 2). The designer should select guidelines considering the specific problems being addressed in the design, the characteristics of the solutions being analyzed, and the design phase.

The selection activity should be based on a large set of guidelines related to the subject. Given the complexity of facilitating HDI for VA applications, we aggregate guidelines from the VA, HCI, and HDI areas. We draw upon a previously compiled set of guidelines and heuristics [26], which brings together guidelines found in influential contributions in the VA and HCI areas. We add HDI guidelines found in the literature [5,6,7, 13,14,15, 20, 28].

We also need to include in the set guidelines for the application domain or standards used by the target organization, if they exist. For example, if the design problem involves a particular type of interaction or visualization, the designer should look for specific guidelines in that context.

At this stage, you group the guidelines by subject to facilitate the selection of the recommendations that matter to one specific scope. One group of guidelines could aggregate, e.g., guidelines related to the decision of whether to use a specific type of resource, while another group joins all guidelines about how to use this resource.
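As an illustration of this grouping, the sketch below organizes a small catalog by the decision each guideline supports. The data structure and identifiers are our own assumptions, not part of the proposed method; the entries paraphrase guidelines cited elsewhere in this article ([2], [29]).

```typescript
// Illustrative grouping of guidelines by the decision they support.
// The data structure is an assumption, not part of the proposed method;
// the entries paraphrase guidelines cited in this article ([2], [29]).

interface Guideline {
  id: string;
  text: string;
  source: string; // citation key
}

type DecisionGroup =
  | "whether to use coordinated views"
  | "how to coordinate views"
  | "information density";

const catalog: Record<DecisionGroup, Guideline[]> = {
  "whether to use coordinated views": [
    { id: "complementarity", text: "Use multiple views when different views bring out correlations or disparities.", source: "[2]" },
  ],
  "how to coordinate views": [
    { id: "self-evidence", text: "Use perceptual cues to make relationships among multiple views more apparent to the user.", source: "[2]" },
  ],
  "information density": [
    { id: "density", text: "Provide only necessary and immediately usable data; do not overload views with irrelevant data.", source: "[29]" },
  ],
};

// Selecting the guidelines for a workshop that focuses on one decision:
const workshopGuidelines = catalog["how to coordinate views"];
console.log(workshopGuidelines.map(g => g.id));
```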

This large set is a facilitator for the next task of deciding which guidelines will be useful in the workshops. However, it is unfeasible to work with so many guidelines at the same time. You therefore need to analyze which design decisions must be made given the design situation at that moment and the subjects involved, and select the specific guidelines most closely related to the prototype issues and interactions discussed in a given design phase.

With the progress of the design and the refinement of the prototype, new features may be introduced or new types of interaction may become necessary. At each cycle of the process, it is important to re-evaluate the design context and select guidelines regarding the new issues being discussed. The result of this activity consists of a set of guidelines that will be used during the evaluation workshops.

3.3 Preparation of the Workshops

Besides the guideline selection, more preparation is needed before conducting the workshops. The objective of this activity (cf. item F of Fig. 2) is to create the conditions for all those involved to understand the guidelines and take advantage of them, making their participation in design decisions possible.

Knowing which guidelines will be used, it is necessary to prepare the materials that facilitate their understanding. The guidelines are resources for specific design contexts; we assume that the workshop participants need to learn about the content of the guidelines involved. Once the designer has chosen the set of guidelines, it is necessary to find ways to unravel, explain, and facilitate their understanding, allowing participants to comprehend them and to decide on their use. The designer must prepare a form to support the understanding of the guidelines by following the steps below.

1. Evaluate the best ways of organizing and explaining the selected guidelines. It should be clear what decisions the participants have to make and what they need to be concerned about in each evaluation workshop. In this regard, the participants’ background should be considered. If participants are not familiar with the use of design guidelines, you need to prepare explanations that relate the concepts involved to things they know.

2. For each guideline, choose simple examples to explain the involved concepts to participants without the specific background. It is convenient to use examples from different contexts that involve interactions similar to those being analyzed in order to avoid biases. During the analysis of the literature on the guidelines, it is possible to find illustrated examples that can be used.

3. To complement the explanation, look for websites or VA tools that allow the practical exercise of the interaction. Use references from the literature and consult other designers and users to find websites or VA tools that have prototype-like interactions. The focus of the choice should be HDI. Then, choose the visualizations and scenarios for analysis that involve interactions and data types similar to the design context. For example, if the design problem involves interacting with the location of some elements, look for systems exploring interactions with maps.

4. At this stage, you can navigate and explore the interaction alternatives and information presented in the visualizations of the chosen tools. List points that users should observe focusing on the interactions being studied, e.g., if the study is about coordinated visualizations, it is important to highlight interactions in one visualization that impact other visualizations.

5. Select different alternatives available for the same type of interaction, e.g., if the application involves maps, choose websites that react in different ways to similar interactions, such as zoom operations. Some websites may automatically update the contents of coordinated visualizations while others do not.

6. Find alternatives related to the moment when actions are triggered due to user interaction, e.g., if the user changes the position of a slider altering the value of some visualization parameter, some websites regenerate and update other visualizations while the user moves the mouse, whereas others do this only when the mouse is released (a sketch of these two policies follows this list).

7. Prepare some tasks for the participants to perform using the websites or VA tools to exercise the content of the guidelines. For example, if a real estate website is used for the exploration, it may be interesting for participants to choose a property to rent in a particular region of the city.
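A minimal sketch of the two update policies mentioned in step 6 is given below, written in TypeScript against the browser DOM API; the updateLinkedViews function is a hypothetical stand-in for whatever regenerates the coordinated visualizations.

```typescript
// Two policies for triggering consistency updates from a slider (cf. step 6).
// `updateLinkedViews` is hypothetical: it stands for whatever regenerates the
// coordinated visualizations with the new parameter value.

declare function updateLinkedViews(value: number): void;

const slider = document.querySelector<HTMLInputElement>("#time-slider")!;

// Policy A: update continuously while the user drags the slider.
slider.addEventListener("input", () => {
  updateLinkedViews(Number(slider.value));
});

// Policy B: update only when the new value is committed (mouse released).
slider.addEventListener("change", () => {
  updateLinkedViews(Number(slider.value));
});
```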

The result is an activity guide to support the participants in the understanding of guidelines. For each guideline, list all the relevant points for participants to explore and tasks to be executed. A form should be defined and should clearly state the decisions that must be made and ask questions that drive the decisions.

3.4 Participants’ Understanding of HDI Design Guidelines

During the workshops, initial activities aim to ensure that all participants understand the guidelines and are prepared to participate in the evaluation. The materials generated in the preparation step (cf. item F of Fig. 2) are used as support in the understanding phase (cf. item G of Fig. 2). In this phase, participants should be guided through the following steps:

1. Guide the participants in the understanding of the guidelines so they can make sense of them. Introduce each guideline or group of guidelines to the participants with explanations and examples of applications.

2. The participants must explore by themselves the interactions in websites or VA tools in a context different from the design under evaluation. The exploration should be conducted in small groups of 2 or 3 people. They should answer the questions set out in the form.

3. We understand that the analysis of HDI guidelines should be done in a practical way. The participants should perform tasks involving the interactions to understand the aspects related to their accomplishment. The exploration of the selected websites helps the understanding of the interaction aspects, since the participants have the opportunity to exercise the interactions.

4. After exploring the websites, all the participants are gathered to discuss their understanding of the guidelines. At the end of this activity, participants must understand the guidelines well and know how they apply to the explored websites or VA tools.

3.5 Participatory Evaluation of HDI Design Guidelines

The last activity is held in a participatory workshop (cf. item H of Fig. 2). The guidelines are evaluated by the participants, led by a designer. The participants should map the guidelines to the prototype interactions and make the relevant design decisions. This is conducted through the following steps:

1. You should clarify which design questions need to be answered in the workshop. It may be convenient to discuss the related guidelines in a grouped way. Each evaluation workshop should focus on a well-defined decision and discuss a set of related guidelines that can support that decision.

2. Participants explore the navigable prototype. It may be convenient to explore the prototype in a thinking-aloud activity (cf. Subsect. 3.1).

3. Highlight and clarify the relationship between the guidelines and the prototype. Show participants possible alternatives for mapping the guidelines onto the prototype under analysis.

4. Make sure that the participants are aware that the suggested design mappings are a starting point for the discussions and that there are alternatives that need to be sought and discussed. The suggestions are used to help participants collaborate in the design process.

5. In light of the previous steps, participants should discuss the application of the guidelines to the interactions covered by the prototype. Participants should discuss the impacts as well as the advantages and disadvantages of adopting each guideline. The outcome of the discussion is the participants’ decision on the adoption of the guidelines. The activity generates a subset of the discussed guidelines considered potentially useful and the associated ideas for redesign.

4 Case Study

One of the challenges being addressed by UNISIM consists of the investigation of technologies for optimal production strategy selection in oil fields [30]. This process involves a lot of effort in the analysis of voluminous data. SEPIA is a VA software tool developed to facilitate this process. The results of this study consist of requirements for the tool that will, in the future, be incorporated into the production version of SEPIA.

One step of the production strategy optimization requires, among other activities, the execution of many simulations, with some variations among them. After some simulations, it is necessary to make comparisons to verify the impact on the results of the changes from one simulation to another. SEPIA supports several types of VA activities, but it does not have specific functionalities to support this scenario. We addressed how to enhance SEPIA with HDI design for comparisons among different oil production strategies.

The project was conducted in 2 cycles. The first cycle of the process resulted in the prototype for the comparison interface screens to be supported by SEPIA. In the second cycle, we dealt with design aspects related to the specific type of interaction with coordinated visualizations.

In this study, we emphasized the evaluation activities based on design guidelines conducted in the second cycle. The application of the process in the second cycle required 2 workshops lasting approximately 3 h each. Thus, the whole process was conducted in approximately 6 h of meetings with 6 participants on average. In addition, 12 h of effort were required for the designers to prepare the presentations and practices for each workshop.

The activities of this study involved 2 Computer Science researchers and 4 participants from UNISIM playing different roles and with different reasons for engagement. One of the Computer Science researchers (one of the authors) played the role of designer throughout the process. We present the results for the evaluation activities carried out in the second cycle.

4.1 Results of the Design Activities

In the first cycle, the process began with stakeholder identification, followed by the elicitation of issues and requirements. The identified stakeholders included developers, designers, a development project manager, researchers, and users of the VA tool. The UNISIM team involves engineers and researchers who are very important stakeholders. Some of them are actual users of the SEPIA tool, and others develop their own visualizations due to specific demands of their research.

In the elicitation phase, there were many meetings and presentations about the complex domain of petroleum exploration strategies. To deepen the understanding of the domain, we conducted individual interviews with the purpose of clarifying the problem and eliciting the requirements. The results of the interviews revealed that one of the important issues was the comparison among the results of several optimization attempts. This subject was chosen as the central requirement to be addressed in the prototype (cf. Subsect. 4.2).

The issues, problems, ideas, and solutions identified during the problem clarification activities were the source for the initial design activities conducted with the storyboard and braindraw techniques. After reaching consensus on the first version of the storyboard, it was possible to identify the goal of each interface involved in the process. We conducted one separate braindraw for each state identified in the storyboard.

We selected the guidelines most closely related to the prototype scope, and they supported a participatory evaluation. One example of an included VA guideline was about information density: “Provide only immediately usable data; do not overload visualizations with irrelevant data” [29]. A preliminary participatory thinking-aloud evaluation of the prototype was undertaken.

4.2 Prototype

The time, volume of data, and number of files involved in these attempts pose difficulties for the execution of comparisons among the results obtained from simulations. Participants suggested alternatives for the design of the comparison functionality so that users could quickly focus on the points being compared without being distracted by other data or by visualizations not involved. The decision made by the group adhered to the previously mentioned guideline (cf. Subsect. 4.1), and the new prototype reduced the density of information by providing only immediately usable data and not overloading views with irrelevant data.

In the first cycle, the results of the braindraw workshops, the low fidelity prototypes, were transformed into a navigable prototype. This was very useful during evaluation activities and helped to raise issues, questions and suggestions. Figure 3 presents two screens of the SEPIA prototype defining the design for the visualization functionality of results comparison.

Fig. 3. Screens of the SEPIA prototype for oil production simulation results comparisons

According to the new design, after the user chooses the comparison feature and selects the parameters for comparison, the system shows the comparison dashboard. To format this dashboard, the system allows the user to opt for a previously constructed layout (cf. item I of Fig. 3) or to manually choose the position and format of the frames (cf. item J of Fig. 3). The user must define the visualizations to be shown in each frame by defining the types of graphics, e.g., a production time curve or a bar chart (cf. item K of Fig. 3); the sites, e.g., a specific producer well or reservoir (cf. item L of Fig. 3); and the data series, e.g., oil or water production (cf. item M of Fig. 3).

To facilitate the task of constructing views, the system must allow the user to copy the visualization definitions from one frame to another and modify them later. The user can ask the system to link two or more frames, indicating which parameter, e.g., the site, should be linked. When the user changes a piece of information, e.g., the site in one of the linked frames (cf. item N of Fig. 3), the system must automatically change the related information in the other linked frames as well (cf. item O of Fig. 3). In this sense, when the user requests a change of information in a given frame, the system should verify whether the information in other frames needs to change, maintaining consistency between them and facilitating the user's data analysis. Therefore, linked frames in SEPIA must be treated as coordinated views. In the second cycle, we dealt with design challenges related to various aspects of coordination between visualizations.
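A minimal sketch of how such frame linking could behave is shown below. It is our illustration of the requirement, not SEPIA's actual implementation; Frame, FrameLink, and the parameter names (site, timeHorizon) are hypothetical, chosen only to show the propagation rule.

```typescript
// Illustrative sketch of linked frames in the comparison dashboard.
// Not SEPIA's implementation: Frame, FrameLink and the parameter names
// (site, timeHorizon) are assumptions used only to show the propagation rule.

type LinkableParam = "site" | "timeHorizon";

interface Frame {
  id: string;
  chartType: string; // e.g., "production time curve", "bar chart"
  params: Record<LinkableParam, string>;
}

class FrameLink {
  constructor(private frames: Frame[], private linkedParams: LinkableParam[]) {}

  // When the user changes a parameter in one frame, propagate it to the other
  // linked frames so that the coordinated views stay consistent.
  setParam(sourceId: string, param: LinkableParam, value: string): void {
    for (const frame of this.frames) {
      if (frame.id === sourceId || this.linkedParams.includes(param)) {
        frame.params[param] = value;
        // a real system would re-render this frame's visualization here
      }
    }
  }
}

// Usage: two frames linked on "site"; changing the site in frame A also updates frame B.
const a: Frame = { id: "A", chartType: "production time curve", params: { site: "well-1", timeHorizon: "10y" } };
const b: Frame = { id: "B", chartType: "bar chart", params: { site: "well-1", timeHorizon: "20y" } };
const link = new FrameLink([a, b], ["site"]);
link.setParam("A", "site", "well-2");
console.log(a.params.site, b.params.site); // both "well-2"; timeHorizon remains unlinked
```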

4.3 Results of the Selection of HDI Design Guidelines

Before the evaluation phase, we selected the guidelines to be explored in each specific workshop. The guidelines were chosen based on the large set of guidelines we had assembled and on the issues being addressed in each workshop.

The design decisions taken for the prototype required a deeper understanding of the guidelines for coordinated visualizations. These recommendations were incorporated into the set of guidelines. One guideline included to deal with coordinated visualizations was: “Use perceptual cues to make relationships among multiple views more apparent to the user” [2].

In this study, it was necessary to decide whether coordinated visualizations would be used (decision I). If the answer was positive, it was necessary to decide how they would be used (decision II). In this context, two separate groups of guidelines were organized.

First, we decided whether to use coordinated visualizations. In this context, we selected guidelines that helped identify which features of the prototype required the use of coordinated visualizations. We used guidelines regarding: (1) the diversity of information; (2) correlations and disparities revealed by visualizations; (3) the partitioning of information into manageable parts; and (4) parsimony in the use of complex resources [1, 2, 20].

The second decision was about how to support coordinated visualizations. The selected guidelines addressed very specific aspects of coordinated visualizations such as: (1) the identification of the visualizations to be used in a coordinated way; (2) the presentation of coordinated visualizations; (3) the actions in each visualization that affect other visualizations; (4) the techniques of interaction; and (5) how to make the coordination clear to the user [2, 25].

4.4 Results of the Preparation for the Workshops

Preparation activities were necessary to arrange the materials used during the explanation of the guidelines and the workshops. We organized the materials for two workshops, one for each decision (I and II). For each workshop, we organized the materials related to the specific decision, which included: (1) the explanation of the specific set of guidelines; (2) the definition of examples; (3) ways for participants to explore similar interactions; and (4) forms with questions to guide the activities and design decisions.

We prepared a presentation to show the guidelines. We looked for simple examples to explain the concepts involved in the guidelines, avoiding examples from the oil production area. First, we searched for examples of similar interactions adopted in tools used by all the participants in their daily activities, e.g., file manager applications and spreadsheets. Afterwards, we used more complex and complete examples.

We looked for websites that allow a practical exploration of similar interactions. When it was not possible to find the websites pointed out in the literature, we looked for websites dealing with subjects similar to those mentioned in the literature examples, e.g., restaurant selection based on quality and location using a map, or a site about a stock price index.

We prepared a list of questions included in the form to guide the workshops and design decisions, as follows.

1. Exploration of the websites:

    • Do the guidelines apply to the context of the website?

    • How are the interactions handled by the guidelines supported?

    • What are the positive and negative points? Justify.

2. Mapping to the prototype:

    • Do the guidelines apply to the prototype interactions? In what contexts?

    • How could they be supported? Could they be supported in the same way as in the exploration examples, with some adaptation, or with another approach?

    • What are the design alternatives for the prototype?

    • What are the positive and negative points? Justify.

3. Open questions:

    • Are there other similar design situations that could be addressed?

    • What other questions should we ask ourselves about the guidelines?

4.5 Results of the Participants’ Understanding of the Guidelines

We held two workshops on separate days, one for each decision. The first part of each workshop comprised activities to ensure understanding and enable the participation of all. The understanding activities took 2 h on average in each workshop. All those who participated in the decision-making activities also participated in the understanding activity. In this stage, 3 groups of 2 participants were organized to carry out the exploration, which was directed by the activity guide form.

Participants used the QuintoAndar website for the explorations, with the aim of becoming familiar with the interaction techniques used in coordinated visualizations. We asked participants to search for apartments with 3 bedrooms in a specific neighborhood and then compare them with the offers in another neighborhood. As another exercise, participants were invited to compare the interaction techniques in a website regarding crimes in Seattle.

We suggested that participants use the Yahoo Finance website to explore ways of making relationships between visualizations more evident. We asked participants about the coordination between the closing price and the traded volume. In addition, we proposed explorations to be carried out with the Kekanto website. The goal was to enable participants to compare different ways of coordinating textual information and maps, as well as the moment at which consistency maintenance actions are triggered.

Participants with a user profile reported a great knowledge gain from the proposed activities. Knowledge among the various participants was leveled, and everyone was able to participate in the next activity, in which decision making was easily carried out.

4.6 Results of the Participatory Evaluation of Guidelines

In each workshop, the selected guidelines were explained to the participants and they explored the different websites. Afterwards, the participants were able to decide whether the orientation provided by the guidelines could benefit the prototype. After the activities, we consolidated the answers to the questionnaires.

In the first workshop, the set of guidelines evaluated concerned the benefits, costs, advantages, and disadvantages involved in the use of coordinated views. The guideline that oriented the whole evaluation and summarizes the work is: “Participants should balance the benefits of having multiple coordinated visualizations and the complexity that comes with their introduction” [2]. Regarding decision I, the group indicated that it was convenient to maintain the coordinated visualizations in the solution of the comparison feature.

The second workshop reviewed the set of guidelines on how to support interactions between coordinated views. The most discussed guideline in this workshop was: “Make the interfaces for multiple views consistent, and make the states of multiple views consistent” [2].

The discussion of this guideline began by dealing with the comparison dashboard in a general way. However, the participants were unable to make progress due to the large number of possible situations requiring consistency between visualizations. The strategy adopted was then to choose more specific scenarios for discussion. Once all the scenarios had been discussed, a detailed analysis identified the possible points of generalization.

We discussed a specific scenario about analyzing the oil and water production curves and the net present value curve for a given period and specific wells. It also included a bar chart with the accumulated production of all the wells. The participants subdivided decision II into 5 more specific design decisions. The issues and decisions taken for the specific scenario discussed are listed below (a consolidated sketch of these rules follows the list).

1. Issue: How should coupling be done between views? What are the mapping functions? Decision: Coupling must be done when the user includes or changes a simulation model, and when the user changes the production time.

2. Issue: What perceptual cues are used to make relationships self-evident? Decision: The same color is used for the same simulation model in different visualizations. There should be a bar to indicate that the time of the production curve has changed.

3. Issue: What information should be kept consistent across multiple views? What attributes need to be kept consistent and what consistency rules should be used? Decision: The value of the total time used to calculate the accumulated production value should be kept consistent with the position in the time bar.

4. Issue: What situations require the consistency rule to be triggered? Decision: When the user changes the considered time.

5. Issue: At what specific moment should consistency updates be triggered? Decision: When the user releases the mouse button on the time bar.

4.7 Participants’ Assessment of the Activities

The participants were invited to evaluate the activities via a questionnaire and open questions. We used a Likert scale to understand the participants’ assessment of the activities in which they were engaged. We considered the following grades: 1 for poor; 2 for bad; 3 for indifferent; 4 for good; and 5 for very good. Table 1 presents the obtained results. Evaluations were considered positive when the grades were equal to or greater than 4.

Five participants answered the questionnaire. All of them participated in the design and evaluation activities. The overall evaluation of the process, the time involved, the practices, the achievement of the goal, and the quality of the results were considered positive by 100.0% of the participants and presented an average grade of 4.0. The items with the best grades concerned the understanding of the guidelines and the support to think about problems and solutions, followed by the help given by the examples in this understanding. The aspect considered worst by the participants was the value of their own participation in the workshop, which was considered positive by only 60% of them.
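As an illustration of how these figures are obtained, the snippet below computes the average grade and the share of positive answers (grade equal to or greater than 4) for one questionnaire item; the grades shown are hypothetical, not the actual responses behind Table 1.

```typescript
// Hypothetical example of how an item in Table 1 would be summarized:
// mean grade and percentage of positive answers (grade >= 4 on the 1-5 scale).

const grades = [4, 4, 5, 3, 4]; // illustrative grades from 5 participants, not the real data

const mean = grades.reduce((sum, g) => sum + g, 0) / grades.length;
const positiveShare = grades.filter(g => g >= 4).length / grades.length;

console.log(`average grade: ${mean.toFixed(1)}`);              // 4.0 for these grades
console.log(`positive: ${(positiveShare * 100).toFixed(1)}%`); // 80.0% for these grades
```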

Table 1. Participants’ assessment of the evaluation activities.

5 Discussion

We proposed a methodology for evaluating guidelines through participatory practices, focusing on HDI design for decisions supported by VA. We faced the challenges of identifying suitable methods for selecting and clarifying the relevant guidelines, and of defining appropriate practices for the activities that ensure the effective participation of key stakeholders.

Our results indicated that applying the methodology produces good effects. The participatory practices allowed us to observe how a shared understanding of the problem domain can be obtained and different viewpoints reconciled.

Our findings suggest that, for the guideline evaluation practices to be truly participatory, it is necessary to create conditions for all those involved to understand the guidelines and take advantage of them. To facilitate the understanding of the concepts, it is relevant to use simple examples and explorations from different scenarios. Afterwards, when the participants are clear about the guidelines, it is important to help them map the guidelines to the context under evaluation.

The use of examples from other contexts familiar to the users avoids the bias that using examples from the domain itself can bring. We merged very simple examples that illustrated the concepts with more sophisticated examples to give new ideas of application. The examples were well evaluated and appeared to be useful to all participants.

We introduced guided explorations of websites that support interactions similar to the ones under analysis. The participants (users and domain experts) considered it very relevant, mainly for understanding the details related to the interactions. It helped all participants to learn about several alternative types of interaction and stimulated the generation of new ideas on how to apply them in SEPIA in a way that is useful for their daily work. However, the exploration practice did not receive a uniform assessment. The developers did not consider the practice very helpful for themselves and reported that they already knew the possibilities of interaction in coordinated visualizations. Our understanding is that exploration is very interesting for people who are unaware of the interaction possibilities and unproductive for those who already master the subject.

We found that it was not easy to analyze the use of guidelines based on generic scenarios. The use of specific scenarios facilitates discussion and understanding. Even for a general-purpose tool such as a dashboard, we noticed that the discussion flowed better when it concerned simple, concrete scenarios based on a day-to-day project task. In this way, everyone can follow the discussion, think of possibilities, and contribute suggestions.

During the preparation and conduction of the workshops, we found limitations in constructing the navigable prototype to support a better understanding of multiple coordinated visualizations. In general, prototyping tools do not have sufficient resources to explore the effects of user interaction across multiple areas, with multiple possible paths and different results in each area.

As future steps, we plan to identify scopes in the domain of oil production strategy that can be easily explained to design specialists and to conduct a guideline evaluation with a classical approach. This would make it possible to compare the results obtained by a guideline evaluation involving several design specialists with those obtained with a participatory approach.

6 Conclusion

The adequate use of design guidelines requires the definition of participatory practices to improve design decisions. This article presented the feasibility of including diversified profiles in participatory evaluation practices based on HDI design guidelines. Our methodology defined practices to create the necessary conditions for the understanding of the guidelines using examples and explorations of similar interactions. We applied the evaluation activities to a VA tool created to support optimal production strategy selection in oil fields, involving complex decisions and making good use of the participants’ knowledge. Our results lead us to believe that the participatory evaluation of complex guidelines is favored by the use of examples and the exploration of interactions in other contexts, followed by the mapping to the domain under evaluation. We plan to evaluate the guidelines for coordinated visualizations in other scenarios of the petroleum production strategy optimization domain.