1 Introduction

One of the activities in a requirements engineering (RE) process is setting requirement weights: deciding how important each requirement is relative to the other requirements, or deciding whether a requirement is mandatory, that is, whether an option must satisfy it to be considered at all. The mental process that actors involved in RE processes go through while assessing requirement weights is called the importance assessment process. In this article, we propose a course that makes stakeholders in RE processes aware of the relevance of importance assessment and gives them some experience with specific aspects of the importance assessment process. This experience is provided by executing a number of assignments. These assignments are designed so that they can later be developed into tools for facilitating importance assessment processes in organizations, as part of RE processes. Although the development of the assignments into tools is not the focus of this article, we address this issue briefly in Sect. 6. The aims of this contribution are:

  1. To present the course, including the rationale for setting it up the way we did;

  2. To describe some of the experiences of the participants during pilot sessions of the course.

This paper is not a formal evaluation. It relates experiences and opinions, not actual effects.

The relevance of the course is threefold. First, it can make stakeholders in RE processes aware of the relevance of importance assessment (see Sect. 2). Second, the course is aimed at introducing some notions to its participants that can be useful when weights are to be given to requirements. This is done by means of the assignments described in Sect. 3.4. Finally, arousing interest in importance assessment processes can be a starting point for actions to improve these processes (discussed in Sect. 6).

Importance assessment takes place during requirements definition and weighing, activities that can have a profound impact on organizational performance. According to a large survey by Ellis and Berry [18], an organization’s requirements definition and management are highly correlated with the success of the large commercial applications for which these requirements were defined. As we shall see, the definition or description of requirements is tightly connected with importance assessment: for weighing, it must be absolutely clear what constitutes the content, the meaning, of the requirement to be weighed. According to Brace and Ekman [12], ‘inadequate development of requirements can affect subsequent development activities’. Hoffman and Lehner [34] state that ranking the priority of requirements caused the requirements engineering teams in their study the most difficulties. The course that we have developed may help to cope with these difficulties.

The experience with importance assessment that actors gain through our course can also help them to explain and motivate their priorities, and to communicate about them with others involved in the RE process. These deliberations may also lead to identifying and tackling ambiguities and interdependencies in requirements, which can, according to the literature, greatly complicate a requirements engineering process [15, 60]. Encouraging discussion in an RE process seems a good way to increase the quality and acceptance of the results. Clarity and discussion can contribute to the management of power and politics in requirements engineering processes as analyzed by Milne and Maiden [46]. They also seem to be appreciated by those involved in contract selection processes (as exemplified by Belton [8]). Tools for eliciting weights, like the analytic hierarchy process (AHP) and conjoint measurement [42, 55], do not preclude discussion and may sometimes encourage it, but their focus is on individual weight elicitation, not on communication or discussion.

All in all, previous literature shows that paying attention to defining and prioritizing requirements, and communicating about them, is relevant for the success of RE processes. The course we have developed addresses the process of defining and prioritizing requirements and, by the nature of its assignments, stimulates communication about the importance assessment process.

1.1 Differences with other training instruments

The focus of the course discussed in this paper differs in several ways from that of training programs or instruments featured in contributions about prioritization found in this journal. First, the course is not focused on elicitation, but on the generation of weights. Many tools used for assessing the relative importance of requirements are aimed primarily at eliciting what stakeholders (actors) already know, be it implicitly or explicitly. An example is the swing weights method used in the CORAMOD requirements analysis method [12]. However, a large body of research shows that requirement weights are subject to a wide range of distortions, biases, and systematic and unsystematic variations over time [see, e.g., 45]. Moreover, defining or weighing requirements is not a straightforward process. Milne and Maiden [46] do not see requirements as ‘objective facts waiting to be discovered’, but ‘as being subject to negotiations, contestable, moldable, and therefore open to the machinations of power and politics’. And, in our experience, requirement weights are open to genuine differences of perception and opinion.

The second difference is that, according to our literature study and to the best of our knowledge, our course is the only one based on empirical research on the mental processes people go through when weighing requirements. As will be discussed in Sect. 2.1, importance assessment is in most research treated as a ‘black box’. The course that we present here is new, as is our description of the experiences of participants with it. The various assignments that form the core of the course are mostly not very innovative. Their content is specific to importance assessment, but their general structure can be found elsewhere as well. For example, some assignments are akin to certain creativity-enhancement instruments. Having said this, we want to stress that the focus of this article is on importance assessment, not on the art of instrument design as such.

1.2 Limitations of this research

Our work has several limitations. We limit ourselves to strategic, non-routine, organizational decisions. In this context, ‘strategic’ means vital for the long-term future of an organization [28]. An example is the acquisition of a fleet of minibuses for a transport company. The chosen minibuses determine the kind of services the company can offer (i.e., the size of groups of vacationers that can be accommodated) for a number of years. ‘Non-routine’ means that the participants in an RE process have not been involved in similar decisions before, at least not in the same context, and thus, the importance assessment to be made cannot be readily derived from previous decisions [28]. For example, even if the management board of a transport company was involved in buying the present fleet of minibuses, new competitors may have entered the market since then, new government regulations may have been put in place, and customers may have altered their preferences. ‘Organizational’ means that the course is not meant for purely private decisions, like deciding where to spend one’s holiday [28]. Explicit weighing is highly relevant in these circumstances, and it is often an element of formal decision procedures. Besides, the research on which the course is based was done on strategic, non-routine, organizational decisions. The course may be suitable for other types of decisions, like routine decisions, but we do not know whether this is the case since we did not include other types of decisions in our research.

We did not distinguish between functional and non-functional requirements. However, we believe the course can be used for both (although this has yet to be empirically proven). When running the courses, we took care to work with requirements that were basically compensatory (more, or less, is better). We left it to the participants to convert compensatory requirements into non-compensatory ones (when an option scores below or above a cutoff point for a certain requirement, it is no longer considered, however well it may score on other requirements). Some participants may convert a compensatory requirement like ‘safety’ (the more safety the better) into something like ‘the option should satisfy the minimum legal safety requirements’. We did point out the difference between compensatory and non-compensatory requirements during the course, as sketched below.
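
The following minimal sketch (ours, not part of the course material) contrasts the two evaluation styles: a compensatory weighted sum, where a good score on one requirement can offset a bad score on another, and a non-compensatory cutoff, which discards options outright. All options, weights, scores, and cutoff values are hypothetical.

```python
# Our illustration of compensatory vs. non-compensatory handling of
# requirements; the options, weights, scores, and cutoff are hypothetical.
weights = {"safety": 0.6, "comfort": 0.4}
options = {
    "minibus A": {"safety": 0.4, "comfort": 0.9},
    "minibus B": {"safety": 0.7, "comfort": 0.6},
}

# Compensatory: a weighted sum, so high comfort can offset low safety.
for name, scores in options.items():
    print(name, sum(weights[r] * scores[r] for r in weights))

# Non-compensatory: 'safety' converted into a hard cutoff (say, the
# minimum legal safety level, here 0.5); options below it are dropped,
# however well they score on the other requirements.
admissible = [n for n, s in options.items() if s["safety"] >= 0.5]
print("still considered:", admissible)  # minibus A is eliminated
```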

Another limitation is that we do not assess whether our course actually improves knowledge or skills. We assess what participants thought they learned from the course, what they thought of the relevance of the assignments, etc., but this does not necessarily concern actual changes in behavior or skills.

This article is relevant for those concerned with improving the quality of RE processes and organizational decision processes in general. Readers may reflect on how consciously they make importance assessments and whether they encounter some of the issues and pitfalls described in this article. If so, the course we present may provide inspiration for remedies.

This contribution starts with addressing the theory of importance assessment and identifying some issues and pitfalls that actors may encounter while assessing the importance of requirements (Sect. 2). The course, based on this theory, is described in Sect. 3. In Sect. 4, we address the methodology for assessing the experiences of the participants in five pilot sessions of the course as we applied it in a number of aerospace organizations. The results are presented in Sect. 5, followed by a discussion in Sect. 6 on possible improvements of the course and on the embedding of importance assessment processes in organizational RE and decision processes.

2 Importance assessment: theoretical background

In Sect. 2.1, we briefly review research on requirement weights. We show that little research has been done on importance assessment and, consequently, on how to introduce the importance assessment process to actors involved in requirements engineering. In Sect. 2.2, we describe a generic importance assessment model. In Sect. 2.3, we use this model to identify issues that deserve special attention or pitfalls that actors may encounter while assessing the importance of requirements. These issues and pitfalls form the starting points for identifying the subjects addressed in the course. The reason for this is that these issues and pitfalls show that importance assessment is not straightforward and that it is a worthy subject to learn about. Getting acquainted with them during the course will hopefully give the participants something specific to learn, something that they can immediately use in their work if they want to.

2.1 Previous research concerning requirement weights

Extensive research concerning requirement weights has been done in previous scholarly work. The main topics from this research that are relevant in light of this article are:

  1. Measuring weights. There are a number of methods for measuring requirement weights [24, 36, 42, 56], like simply asking actors to verbalize them, the AHP and other methods of pairwise comparison [55], and the structural method in which weights are derived from a series of hypothetical choices presented to actors [e.g., 27]. There are also methods that, although sometimes primarily aimed at weight elicitation, help actors to derive weights from higher-level goals, like value-focused thinking [40, 41, 44]. However, as was already mentioned in the introduction, measuring weights gives few clues about the thinking processes by which these weights were arrived at. Hence, knowledge about the measuring of weights is not likely to help actors very much with assessing the importance of requirements. In the field of requirements engineering, there are many studies in which weights are, explicitly or implicitly, elicited [2, 16, 18, 49, 53]. In RE, it is sometimes possible to use the extent to which a certain attribute contributes to the achievement of a goal as a measure of its importance. This has been explored by, for example, Ali et al. [1], Jureta et al. [38], Prakash and Gosain [52], and Yoo et al. [62]. This subject is addressed in our course in the form of the common denominator in RE processes. (A small illustrative sketch of pairwise-comparison weighting follows this list.)

  2. Factors that influence the weights given. Examples are: the way in which the decision context is framed [6, 13, 57], the range of requirement values of the options to be chosen from [7, 20, 23], the role of proxy requirements [21], the number of sub-requirements [10], the consequences of the need to justify decisions or avoid regret [4, 26, 58, 59], and political processes [46]. This type of research provides insights into general characteristics of the thinking process of actors, but it does not elaborate on specific mental actions. For example, we can associate certain behaviors with the desire to avoid possible future regret, but this is merely a general motive. How this motive is converted into requirement weights does not become clear, only that it influences the weights given. Consequently, this area of research treats the importance assessment process as a ‘black box’ and is of little help for developing training instruments addressing this process.

  3. The way in which weights, once established, are used in decision processes. Much attention has been devoted to group choices on the basis of group members’ judgments [11, 19, 22, 24, 25, 35, 37, 43, 47]. Examples in requirements engineering research are Barragáns Martinez et al. [5], Kaiya et al. [39], and Richards [54]. However, we are interested in the mental processes before weights are established.

To conclude, the importance assessment process is still largely a ‘black box’. The lack of emphasis on importance assessment processes in organizational decision making was acknowledged in organizations where we tested the course.

This is not surprising. Many scientists and practitioners with whom we had discussions about RE and decision making believe that importance assessment is such a personal affair that hardly any generalizable knowledge can be obtained about it. But as we have discussed elsewhere [28] and will elaborate on in the next section, the general structure of importance assessment processes can be described and analyzed. People’s thoughts may be very personal, but the structure of their thinking often is not, as far as importance assessment processes are concerned. The next section gives a summary of what is already known about the importance assessment process.

2.2 Describing the importance assessment process: the weight assessment model (WAM)

In this section, we address the core characteristics of a descriptive model of the importance assessment process that we used to identify relevant issues and pitfalls that actors can encounter and that were addressed in our course. This model is called the weight assessment model (WAM). We just present the elements of the model that are essential for understanding the issues and pitfalls we identified. We do not discuss the research that formed the basis of the model, or analyze its merits, but we take the model as a frame of reference for developing our course. The variables we address in this paper pertain to the course we present, not to the elements of the model. The model is based on previous research; see [28, 29, 31] for details of the research approach. We use the requirements ‘safety’ and ‘passenger comfort’ of a minibus (which we also used in the assignments of the course) to illustrate the phases of the WAM described below.

The weight assessment model consists of seven phases. These phases are:

Phase 1: Problem identification

activities like elaborating on the task at hand (understanding, concretizing) and re-formulating it in one’s own words. This may occur if, for example, actors did not formulate the assignment themselves, but it was given to them by another stakeholder.

Phase 2: (Sub-)requirement processing

giving the requirements a more precise, or different, meaning. Requirement properties like measurement level, measurement unit, level of abstractness, and precision can change as a result of processing. Several forms of processing were identified [28], but the only one relevant here is splitting a requirement into sub-requirements. For example, one can split ‘safety of a car’ into sub-requirements like ‘quality of the brakes’ and ‘strength of the bodywork’. This gives the actor a more detailed idea of the meaning of a requirement and makes it possible to give the sub-requirements separate weights. One can think of several reasons for giving weights to sub-requirements. For example, actors may feel that sub-requirements are more concrete, more tangible, than the main requirements they are derived from and hence easier to assign weights to. As we shall see, most of the issues and pitfalls that the subjects in our research encountered during the importance assessment process pertain in one way or another to phase 2, so this is the most important phase for understanding the rationale behind the course we developed.

Phase 3: Absolute (sub-)requirement weighing

making a statement about the importance of a (sub-)requirement without making any reference to the importance of other (sub-)requirements (‘safety is important’).

Phase 4: Homogeneous sub-requirement weighing

weighing one sub-requirement against another one of the same main requirement (‘good brakes are more important than a strong bodywork’).

Phase 5: Heterogeneous sub-requirement weighing

weighing sub-requirements that belong to different main requirements against each other [good brakes (sub-requirement of ‘safety’) are more important than comfortable seats (sub-requirement of ‘comfort’)].

Phase 6: Requirement weighing

weighing of the main requirements (‘safety is more important than comfort’).

Phase 7: Evaluation

reflections on activities and on the results.
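
To fix ideas, the following minimal sketch (our illustration, not part of the WAM itself) shows the kind of structure the phases operate on: main requirements split into sub-requirements (phase 2), with weights to be attached at either level (phases 3–6). The names are the running minibus examples.

```python
# Our illustration of the objects the WAM phases act on; not part of the model.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Requirement:
    name: str
    weight: Optional[float] = None  # attached during phases 3-6
    subs: List["Requirement"] = field(default_factory=list)  # split in phase 2

safety = Requirement("safety", subs=[
    Requirement("quality of the brakes"),
    Requirement("strength of the bodywork"),
])
comfort = Requirement("comfort", subs=[Requirement("comfortable seats")])

# Phase 4 compares sub-requirements of one main requirement (homogeneous),
# phase 5 compares sub-requirements across main requirements (heterogeneous),
# and phase 6 compares the main requirements themselves.
print("homogeneous:", safety.subs[0].name, "vs.", safety.subs[1].name)
print("heterogeneous:", safety.subs[0].name, "vs.", comfort.subs[0].name)
print("main:", safety.name, "vs.", comfort.name)
```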

In the next section, and based on the WAM, we describe some of the main characteristics of the ways in which the subjects in our previous research performed their importance assessment tasks, and some issues and pitfalls they encountered.

2.3 Issues and pitfalls in importance assessment

The design of the course was not based on sophisticated theoretical principles, but on the desire to address specific issues and pitfalls that we identified while observing people performing importance assessments. By focusing on specific issues and pitfalls in importance assessment processes, participants may realize that importance assessment is not a straightforward affair and may become motivated to devote attention to it. This philosophy is reflected in this section and in the next one. In this section, we summarize the main conclusions we have drawn from our previous research concerning importance assessment and their implications for the design of the importance assessment course. See [28, 29, 32] for more details. Then, in Sect. 3, we describe the course, and we link the issues and pitfalls identified in this section to the various assignments in the course.

The issues and pitfalls that actors in our research came across in importance assessment processes are:

  1. Much attention (more than 30 % of the total effort) was devoted to phase 2 of the WAM [(sub-)requirement processing]. Phase 2 probably was an important building block for the rest of the process. If mistakes were made in giving meaning to requirements, actors may not have weighed the requirements they thought they were weighing in subsequent phases. In previous research, we observed that requirements sometimes turned out to have many more sub-requirements than the subjects originally thought they had. So, if they did not devote effort to processing the requirements, they may have weighed only part of the requirements they thought they were weighing. Therefore, we devoted special attention to this phase in our course. Three of the six assignments of the course (2, 3, and 5, see below) were devoted to phase 2, while two assignments (1 and 6) partly pertained to this phase.

  2. In phase 2, subjects did not define requirements, but they appeared to split them into a large number of sub-requirements. The lowest average number of sub-requirements generated for either ‘safety’ or ‘comfort’, across all the groups we analyzed in the research that formed the basis for this article, was 18.3. This indicates that splitting is a significant activity for giving meaning to requirements. Therefore, in our course, some practice in splitting was offered (Assignment 2). Also, we chose to draw the participants’ attention to the benefits of another way of processing requirements: defining them (see also Sect. 3.2).

  3. Subjects appeared to conduct the assessment process in a rather unstructured way. For example, none of the subjects seriously tried to adhere to completeness, independence, and non-redundancy, as is required for proper weighing [60]. Moreover, no one made a causal scheme, cognitive map, or other representation of the relationships between (sub-)requirements. Therefore, we devoted attention to the issue of structuring the assessment process, especially phase 2, in the course (Assignment 3).

  4. None of the subjects appeared to formulate a common denominator for the two requirements (safety and comfort), such as ‘cost’. Such a denominator can play the role of ‘utility’ or ‘attractiveness’ in methods for assessing options found in the literature, like the linear additive method [42]. It is, essentially, a good way to ‘compare apples with oranges’. Therefore, we explained the relevance of the common denominator in the course and showed how to look for it in a causal scheme (Assignment 3). (A small worked sketch of the linear additive method follows this list.)

  5. The subjects appeared to deliberate about the meaning of ‘importance’, yet only in a very indirect sense. We do not know whether this influenced the importance assessment process, but we decided to address the question of the meaning of ‘importance’ in the course (Assignment 4).
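
To illustrate the linear additive method referred to in point 4, the sketch below computes a single ‘attractiveness’ value per option from weighted requirement scores expressed on a common scale. It is our illustration; the options, weights, and scores are hypothetical.

```python
# Our illustration of the linear additive method: a weighted sum over
# requirement scores on a common 0..1 scale. All numbers are hypothetical.
weights = {"safety": 0.5, "comfort": 0.3, "cost": 0.2}
options = {
    "minibus A": {"safety": 0.9, "comfort": 0.6, "cost": 0.4},
    "minibus B": {"safety": 0.7, "comfort": 0.8, "cost": 0.7},
}

for name, scores in options.items():
    attractiveness = sum(weights[r] * scores[r] for r in weights)
    print(f"{name}: {attractiveness:.2f}")
# minibus A: 0.5*0.9 + 0.3*0.6 + 0.2*0.4 = 0.71
# minibus B: 0.5*0.7 + 0.3*0.8 + 0.2*0.7 = 0.73
```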

Now that a number of issues and pitfalls concerning importance assessment processes have been identified, we will, in the next section, address the course that we have developed to deal with some of them.

3 The course

In this section, we first outline the goals of the course, followed by a discussion of some of its limitations (Sect. 3.2). After a few remarks about the design process in Sect. 3.3, the course program is described in Sect. 3.4.

3.1 Goals of the course

Based on our practical experience, we assumed that organizations may see the potential benefits of paying attention to importance assessment, but are not ready to make substantial investments in it. After all, the benefits of such investments have yet to be proven. So we limited the ambitions for our course to the following goals: (1) to make participants aware of the relevance of the importance assessment process; (2) to make participants aware of the various activities that may take place during importance assessment (splitting vs. defining, checking for completeness, etc.); (3) to make participants aware of relevant issues and pitfalls in the importance assessment process; and (4) to provide practical experience with some activities (in the shape of assignments) that may be helpful in the importance assessment process.

These are short-term goals: At the end of the course, the participants should have made progress with respect to all these goals. But a course of—at most—one day (in the end, we settled for half a day; see Sect. 3.2) will likely yield no lasting changes in the participants’ attitudes or behavior if no follow-up is given. Participants may get insufficient training during the course to be able to use what they learned in practice afterward. And even if they are able to use what they learned within the context of the course, they may fall back to their normal routine under the daily pressures of work or they will simply forget what they learned if they are not reminded of it. So we also had a long-term goal: (5) Elements of the course (i.e., the assignments) should be suitable to be used (perhaps in modified form) in tools designed for structurally improving importance assessment processes in organizations. In this way, what is set in motion during the course can be followed up later in additional sessions. For example, elements of the course can be repeated so as to give actors more practical experience with them, they can be expanded (giving more difficult cases than during the course), or they can be referred to in further theoretical coverage of importance assessment processes.

3.2 Limitations in the design of the course

There were some limitations we had to, or chose to, take into account when designing the course. These limitations are covered in this section.

  1. We believed the greatest impact could be achieved by showing actors that importance assessment does not need to be an implicit, fuzzy web of thoughts (as many people with experience in decision processes seemed to think when informally interviewed by us), but something that can actually be described and analyzed. We therefore concentrated on a limited number of issues (packaged in assignments) that are:

    • Quantitatively important in the importance assessment process (e.g., splitting requirements).

    • Relevant for the quality of the importance assessment process (e.g., the need to be comprehensive when splitting requirements).

    • Suitable for giving participants in the course experience with some aspects of importance assessment that are easy to put into practice and simple to learn and remember, so that the attendees of the course get the feeling that they learned something specific, not just heard an exposition of theory.

  2. We could not, and did not want to, prescribe how importance assessments should be conducted. The outcomes of our research [28–32], and that of others, did not allow this. We did not try to teach people to do the right things, but tried to help them in doing the things that they do, right. For example, splitting requirements may not be the best way to start an importance assessment process; defining requirements and finding a common denominator seem better suited to ensure completeness, to avoid redundancy [60], and to check whether sub-requirements fall completely under the main requirements. After requirements are defined, they can be split into sub-requirements that may be more concrete and thus easier to measure. But because splitting requirements is what people do, our course aimed to give them experience in how to do it right. Another consequence was that we took formal RE and decision theory as described in, for example, Keeney [40], not as a norm but as a starting point, using its concepts (requirement scores, weights, etc.) for describing, analyzing, and giving meaning to people’s behavior during importance assessment processes. We took a naturalistic rather than a formal approach, and the course was an application of various elements of RE and decision theory, in particular importance assessment theory, not a validation of it. This approach gave us the freedom to pick those elements from available theory that contributed to the efficiency and effectiveness of our course, without having to be complete in the coverage of theory in the course.

  3. We wanted to make participants conscious of what they were doing, and what they could do differently, during the importance assessment process, rather than have them develop a ‘one-size-fits-all’ importance assessment method. That is to say, instilling the desire and the ability to develop importance assessment skills was more important than teaching the skills themselves.

  4. The course should not last longer than one day, so as not to discourage people from taking part in it. This limitation was chosen intuitively but proved valid when we sought opportunities to test the course, which in the end lasted half a day.

  5. Because of the limited duration of the course, we did not aim for completeness. It was enough if we could show that importance assessment can be relevant in requirements engineering and that there are things to learn about it. We left out, for example, the handling of uncertainty, and the conversion of absolute weights into relative weights.

  6. Lastly, the course was not meant to cover a particular discipline. Although it was tested within aerospace organizations only, the content is meant to be suitable for all kinds of organizations. The content of the assignments was chosen so that participants from various disciplines could readily identify with it. This ‘generality’ is not a drawback. On the contrary, we believe it is better to use content that the participants are not too familiar with, for then they cannot rely on past experience to ‘take shortcuts’.

3.3 Some remarks on the course design process

The foundation of the course, as far as education theory is concerned, lies in the work of Earl [17]. Hicks [33] was used to aid in choosing the form of the assignments and the way they were presented to the participants. The content of the assignments was, of course, based on the issues and pitfalls identified in Sect. 2.3. As this article focuses on the content of the course, and not on education theory, we will not elaborate on the way education theory was used in the design of the course.

During the design, we consulted a methodologist specializing in research designs, and a researcher experienced in designing decision experiments. We pre-tested the course among 11 students of the University of Twente in the Netherlands.

3.4 The course program

The course program is given in Table 1.

Table 1 Descriptions of the assignments of the course

The course presented in this article started with a brief introduction of the main elements of a decision (options, requirement scores, requirement weights) and the role of importance assessment, followed by Assignment 1. In this assignment, the participants were asked to weigh ‘safety’ against ‘passenger comfort’ in the case of a transport company involved in the acquisition of a fleet of minibuses. This was followed by a feedback session in which the participants were asked how they conducted the assignment. Issues raised were, for example: Did you define requirements or did you split them? If you split them, did you check for completeness? Did you, looking back, consider only a limited number of weights? In this way, the participants were introduced to possible courses of action that could be chosen during the importance assessment process, and, to a certain extent, may have become conscious of the way they worked.

Assignment 2 comprised an exercise in splitting a requirement. The participants went through the following cycle:

  • Formulate a global description of ‘safety’ (not necessarily an exact definition).

  • Formulate splitting criteria (like active vs. passive safety features).

  • Split ‘safety’ into as many sub-requirements as you can think of, using the splitting criteria as inspiration.

  • Try to come to a more formal definition of ‘safety’.

  • Go through this cycle until no new knowledge is gained.

In this way, participants practiced with both splitting and defining (as a possible basis for a common denominator), using one as inspiration for the other. In a short feedback session, the various splitting criteria and definitions, and the way the participants reached them, were discussed. This assignment was designed to address points 2 and 3 in Sect. 2.3: Subjects in our research devoted much effort to splitting but did not do this systematically (Table 2).

Table 2 Relevance, difficulty, and clarity of the assignments (plus number of respondents N)

In Assignment 3, the sub-requirements of Assignment 2 were put into a causal scheme, a so-called cognitive map [9]. Simply put, the sub-requirements were connected by arrows going from cause to effect. Even more important than establishing ‘cause-and-effect’ relationships was the elimination of overlapping or redundant sub-requirements. For example, some participants may have taken ‘vehicle weight’, ‘strength of the chassis’, and ‘braking distance’ as sub-requirements of safety. Obviously, a strong chassis may weigh more and a heavier vehicle is likely to have a longer braking distance. All other things being equal, weight has no direct influence on safety, and as such is not important in itself. So it can be left out. In Assignment 3, participants learned to bring ‘method in the madness’ of sub-requirements as a prelude to weighing them. This relates to points 3 and 4 in Sect. 2.3: We aimed to make the processing of (sub-)requirements more systematic so that superfluous requirements can be eliminated and a common denominator may be found by establishing empirical relationships between (sub-)requirements. For example, it may become clear that there are relationships between cost and some other requirements. Cost may then serve as a common denominator. This is similar to what is done when making goal models [1]. The reason why we worked with sub-requirements in this assignment was that these had been generated in Assignment 2. A cognitive map can just as well be made from the ‘main’ requirements.
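
As a small illustration of the pruning idea in this assignment (our sketch, not course material), the sub-requirements can be stored as a directed cause-to-effect graph; a node that influences ‘safety’ only through other listed sub-requirements is a candidate for elimination.

```python
# Our sketch of the cognitive-map pruning step; edges run from cause to effect.
edges = {
    "strength of the chassis": ["vehicle weight", "safety"],
    "vehicle weight": ["braking distance"],
    "braking distance": ["safety"],
}

# Keep sub-requirements with a direct arrow into 'safety'; those that reach
# it only via other listed sub-requirements (here: vehicle weight) can be
# dropped without losing information.
direct = [n for n, effects in edges.items() if "safety" in effects]
indirect_only = [n for n in edges if "safety" not in edges[n]]

print("keep:", direct)                       # chassis strength, braking distance
print("candidate to drop:", indirect_only)   # vehicle weight
```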

Assignment 4 comprised a plenary discussion about the meaning of ‘importance’. It was not clear whether this knowledge actually contributes to a better importance assessment process, but we took it into account as we thought it might contribute to more awareness about the process. In Sect. 2.3, we saw that actors often do not deliberate systematically about the meaning of ‘importance’.

Assignment 5 was similar to Assignment 1 (weighing two requirements against each other), but with requirements that pertained to the working environment of the participants. For example, during a course for aircraft and maintenance services acquisition experts from KLM Royal Dutch Airlines, we used characteristics of airplanes as the requirements to be weighed. Owing to time constraints, this was the only session in which Assignment 5 was executed. The assignment gave the participants the opportunity to practice what they had learned in Assignments 2, 3 and 4 and hence addressed points 2 to 4 in Sect. 2.3. In the feedback session held afterward, they reflected on the practicality and usefulness of what they had learned.

Assignment 6 was not directly based on our earlier research, but on feedback and ideas developed during a pilot session of the course, and on interviews with decision makers about possible desirable content of the course. It concerned the handling of requirements that participants felt to be important without initially being able to give rational arguments for that sentiment. The assignment started with finding any and all arguments (including irrational ones) for the importance of a requirement (e.g., the maximum speed of a car is important to me, because I like the sporty image of fast cars). Subsequently, the participants derived new requirements from these arguments (in this case: ‘image’). Then, they assessed what other requirements determined the image of a car (e.g., the price) and what desirable consequences a good image of a car could have (like being attributed enhanced status by business partners). After a number of questions like these, the requirements generated were represented in a cognitive map. In this way, irrational or intuitive arguments were made explicit and, if desired, could be taken into account when weighing requirements.

The program ended with a discussion session in which remaining issues brought up by the participants were addressed.

In this and the previous sections, we described some issues and pitfalls that may be encountered during importance assessment processes, and the way in which they are addressed in the course we designed. The remainder of this article is devoted to analyzing the experiences of participants in the course as we taught it at a number of Dutch aerospace organizations.

4 Assessing participants’ experiences: methodology

4.1 Procedure

The course was given once for employees of Amsterdam Schiphol Airport, twice for groups from KLM Royal Dutch Airlines, and twice for the Royal Netherlands Air Force. In total, 57 employees followed the course. At the end of each course, the participants filled out a questionnaire. All in all, 55 persons filled out the questionnaire, although some of them omitted a few questions. The choice of aerospace organizations was driven not by methodological but by practical motives: We had good contacts in the aerospace industry. Although our results are not per se generalizable to other industries, the organizations in our population were so diverse that we see no reason why our findings should be specific to the aerospace industry.

Since the course was confined to merely introducing the importance assessment process and giving the participants some experience with it, we could not expect the participants to use whatever they learned during the course in their daily work to any significant degree. For that, a more elaborate program would be needed (see Sect. 6). Therefore, we could not assess the effects of the course by measuring or observing changes in subsequent behavior. Instead, we wanted to assess what the participants felt they had learned from the course and to what extent they had come to realize the relevance of paying attention to importance assessment in RE processes. We did not want to burden the participants with too many measurements during the course, for which there was little time available as it was. In the end, we decided to use a questionnaire administered immediately after the conclusion of the course.

The questionnaire consisted of eight multiple-choice questions (with sub-questions) using a four- or five-point rating scale, and three open questions (further elaborated on in Sect. 5). When designing the questionnaire, we used Cooper and Schindler [14] and Patton [48] as a starting point. As for interpreting the scores: when using a five-point scale, we set the threshold for a ‘positive’ score at 3, equivalent to a score of 6 (‘sufficient’) on a 1–10 scale. The 1–10 scale is widely used in Dutch questionnaires and by Dutch educational institutes, so the participants in the course were assumed to be familiar with it. Consequently, the threshold for a sufficient score is a bit above the midpoint of the best and worst possible scores, (1 + 10)/2 = 5.5. When a four-point scale was used, we also took a score of 3 as the threshold, this being the first integer value denoting a positive qualification (see Sect. 5).

4.2 Validity

The method that we used for assessing participants’ experiences presents several validity issues. First, we did not conduct measurements before the start of the course, so we cannot compare pre- and post-course measurements. We did consider a pre-course questionnaire, but since the course was about something that, according to our previous experience, people are hardly even aware of, we doubted that a pre-course questionnaire would yield information that would enable valid comparisons to be made with post-course measurements. Also, we did not want participants to develop ideas about importance assessment as an effect of a pre-course questionnaire. No doubt, we could have solved these problems, but letting participants fill in two sizable questionnaires during a course lasting only half a day did not seem very motivating for them. We think that, despite its limitations, our post-course questionnaire, with its direct and clear questions, was adequate in the light of our research objectives.

It was made clear to the participants that management of their organizations had agreed to conduct the course so as to assess its usefulness and to give us the opportunity to try it out, not because they were convinced outright of the usefulness of the course. So, there was no great pressure on the participants to exaggerate the usefulness of the course as they perceived it; they could afford to be honest. We believe that the participants will have distinguished ideas that they already had before the course from those obtained during the course. In the questionnaire, we explicitly asked about effects of the course, not merely about general attitudes, skills, etc., as such. Participants indicated in the post-course questionnaire that they had learned something from the course (see Sect. 5), so, assuming of course that these answers are valid, we conclude that there is indeed a difference between pre-course and post-course experience.

We primarily wanted to measure attitudes, awareness, and the way participants experienced the course, and we believe a questionnaire is an adequate and efficient tool for this. In a laboratory context, some sort of cognitive test could have worked well; for example, letting participants design a way to set requirements and observing whether they introduce importance assessment notions. But in a practical context, with participants under pressure from daily work, questionnaires seemed more appropriate. Of course, we only measured short-term effects, but, then, the course is expected to have mainly short-term effects and should be followed up by other measures if effects are to last, as is discussed in Sect. 6. Our approach is not ideal. However, we think it is adequate for our purpose.

A second validity issue was already briefly addressed above: the validity of the answers to the questionnaire was not checked against test results or actual behavior. However, the goals of the course concerned appreciation of the relevance of importance assessment processes, not examining behavior. If participants said that the course had made them appreciate the relevance of importance assessment processes, this may manifest itself in behavior during subsequent RE processes in which they might participate. But since the course was expected to have only short-term effects if not followed up, enduring changes in behavior were not to be expected.

All in all, we do not see the above issues as major problems. The awareness of the relevance of aspects of importance assessment and the way the participants experienced the course were, in our view, adequately measured. The results of the questionnaire show neither a lack of interest in the course or the questionnaire (which would have been expressed, e.g., by an excessive number of ‘don’t know’ or ‘no opinion’ answers) nor a mindless acceptance of all that the course offered (e.g., by uniform or excessively high scores on questions about the quality or relevance of the course). In our view, the results are valid enough to answer the question whether people can be made aware of the relevance of importance assessment processes and whether they experienced their exposure to elements of importance assessment as relevant. More elaborate measurement would likely not have altered the answers to these questions. Of course, we aim to continue adding to the knowledge in this field of research and to test whether the course can contribute to improving the quality of importance assessment processes (that is, importance assessment behavior), but that is an issue for future research.

We took care to evaluate general threats to validity as discussed by, for example, Wohlin et al. [61]. As far as conclusion validity is concerned, we do not claim to have investigated a statistically representative population, but merely an example of a population for which our course is relevant. We took care to design a valid questionnaire using the methods described by Cooper and Schindler [14] and Patton [48]. Internal validity threats like mortality were not relevant because the course lasted only half a day. Groups were not compared, so social threats were not present. The mono-method bias (construct validity) was certainly an issue; however, other methods seemed impractical (see above). We did not have the ambition of external validity beyond our research population: actors involved in strategic, non-routine, organizational decision processes. So, we were not interested in, for example, private decision processes.

Whatever judgments on validity can be made, the results of our study should give anyone interested in the use of the course enough information to judge whether it could be useful to him or her. No doubt, each training program, RE process or organizational environment requires its own adaptations to the course, and we encourage readers of this article to develop their own versions of the course or its assignments.

5 Results

For assessing participants’ experiences with the course, we took the course goals given in Sect. 3.1 as a frame of reference. First, we examined the short-term goals (1–4).

Goal 1

Make the participants aware of the relevance of the importance assessment process in decisions.

The extent to which the participants learned about the relevance of importance assessment scored, on average, 2.89 on a scale from 1 to 4 (1 was ‘nothing’, 2 was ‘little’, 3 was ‘reasonably’, and 4 was ‘much’; N = 55). This is somewhat below our (intuitive) target score of 3.0. From the open questions, we found that participants considered the subject rather abstract and felt a need for more explanation, clearer examples, and more specific feedback on the assignments. The last point has since been addressed by giving written feedback on the assignments that the participants could study afterward; however, this does not show in the questionnaires, which were filled out immediately after the course. It is likely that the participants, who had a practical and not a scientific background, needed more time and opportunity to grasp the essentials of the importance assessment process, at least in the way they were presented in the course. Moreover, the course might have been too much of a series of assignments, without sufficient attention to the theoretical framework that integrated the assignments. Finally, importance assessment did not seem to be an issue in the organizations concerned, at least not until we shared with their representatives our suggestion that it should be. This is not surprising, given the lack of research in this area (see Sect. 2.1). Thus, it is logical that participants in the course needed some time to grasp the theoretical framework of the course.

Goal 2

Make the participants aware of the various activities that can be considered when performing an importance assessment.

Regarding the extent to which the course helped participants get a clear idea of how to assess the importance of requirements, the average score was 2.72 (N = 50), again somewhat below the target of 3.0 on the same four-point rating scale as used for Goal 1. The main cause might have been the same as for Goal 1: Given the practical background of the participants, it may have been difficult for them to put the assignments into perspective. This is all the more likely because the separate assignments were judged favorably (see the next paragraph).

Goals 3 and 4

Make participants aware of relevant issues and pitfalls in the importance assessment process and let them practice some activities that can be of help in the importance assessment process.

As indicators of these goals, we took the participants’ judgment about the assignments. The assignments were designed to address specific issues and pitfalls that we wanted the participants to gain experience with. So, if the participants judged the assignments to be relevant, clear and of the right level (neither too easy nor too difficult), we would conclude that the assignments served their purpose.

The relevance of the assignments scored between 3.74 on a five-point rating scale (Assignment 4; discussing the meaning of ‘importance’) and 4.00 (Assignment 1; making an unassisted importance assessment). The scale ran from 1 (very irrelevant) to 5 (very relevant). The number of participants that answered this question about relevance was 47, except for Assignment 5, which was omitted, due to time constraints, in all but one of the five courses. The clarity of the assignments scored between 3.47 (Assignment 3; making a cognitive map) and 3.90 (Assignment 1; N = 47) on a five-point rating scale. These figures are well above the target minimum of 3.0. The level of difficulty scored between 2.66 (Assignment 1) and 3.55 (Assignment 6; handling non-explicit arguments) on a scale of 1–5 from ‘very easy’ to ‘very difficult’, with 3 being ‘neither easy nor difficult’ (N = 47). So, the individual assignments were all scored rather favorably. The suggested improvements comprised: more explanation beforehand, more elaborate feedback, more time for discussion, and more ‘depth’ in the course, even at the expense of the number of assignments.

These relatively high scores, especially for the relevance of the assignments, may imply that the quality of importance assessment processes, and the skills needed for them, are indeed an issue within the organizations that took the course, even if the participants may not have realized this before starting the course. This underlines the relevance of what we are trying to achieve with the course.

Now we turn to the long-term Goal (5): Elements of the course should be suitable to be used (perhaps in modified form) in tools designed for structurally improving importance assessment processes in organizations.

The feedback discussions after each assignment were, as far as we could assess, of good quality. Relevant questions were asked, problems encountered while executing the assignments were properly identified, and the answers the teachers gave were generally understood, as far as could be judged from subsequent discussions. Given the above, and given the scores on relevance, clarity, and difficulty of the assignments (see Goals 3 and 4), we believe that the quality of the assignments was good. The participants understood them, could fulfill them, and could reflect properly on them afterward. So we see no reason why they cannot serve as a basis for more elaborate tools. However, how and to what extent this is to be done falls outside the scope of this paper.

Our conclusion is that although Goals 1 and 2 were not met to the extent that we would have liked, this is made up for by the level of achievement of Goals 3 (touching the core of our research) and 4. However, as noted in Sect. 3.1, even if the goals of the course are fulfilled, lasting effects are not to be expected of any intervention that lasts only half a day, no matter how successful. That is to say, we believe that more is needed than just a further improvement of the course; the context in which the course is given should be improved as well.

As far as improvements of the course are concerned, the most important one is the provision of elaborate written feedback on the assignments. Other improvements, like a more in-depth explanation of the theory, simply cannot be realized within the duration of the course, but they can be realized in a more elaborate training program aimed at embedding importance assessment skills and procedures in an organization. Elements of the course can be used for this (Goal 5).

All in all, we think that the course in its present form provides a good way of introducing the concept of importance assessment in organizations, making actors aware of its relevance, and letting them experience some issues and pitfalls of importance assessment processes. The various assignments of the course are useful in getting actors acquainted with the importance assessment process.

6 Discussion

The aims of this article were (1) to present, and (2) to assess participants’ experiences with, a course that makes actors aware of the relevance of importance assessment processes and lets them experience some issues and pitfalls concerning these processes. A course like the one described in this contribution did not yet exist, and it may be useful to make stakeholders in RE processes aware of the relevance of importance assessment and to provide a basis for improving the importance assessment activities within RE processes.

We should stress that the conclusions in this article are based on questionnaires answered by course participants immediately after the course. Hence, as was discussed in Sect. 4.2, nothing can be said about long-term effects on opinions or about effects on the actual behavior of the participants. So, even though we noted in Sect. 5 that the goals of the course had largely been achieved, it is impossible to say whether this will encourage participants to put into practice what they learned or whether they will remember it for any length of time. But since we assume that the course should be followed up to have any lasting effects, the results make sufficiently clear that the course provides a basis for introducing importance assessment in RE processes.

According to the data obtained by the questionnaires, the strengths of the course we have developed are that participants felt they had a better idea of the relevance of the importance assessment process than before the course and that they gained some experience with importance assessment. Overall, we consider the course to be a success, but the degree to which two of the five goals were achieved was not as high as we had wanted, and the long-term effects of the course are probably limited without further follow-up.

However, this is not a weakness of the course; it is an inherent limitation of any introductory tool. Improving organizational importance assessment processes is a major operation. Such organizational change operations often need a distinct starting point. The course can fulfill this function effectively and efficiently; it only takes a small amount of time from the participants and it is a good introduction to the importance assessment processes that are the subject of the organizational change. It also shows decision makers that there is such a thing as importance assessment and that the quality of importance assessment processes may indeed be improved by means of paying attention to it. These notions may generate support for efforts to improve importance assessment processes. For requirements engineering activities, mobilizing support for paying attention to importance assessment should be somewhat easier than for decision processes in general, since in many methods and techniques used in RE, attention is already paid to explicitly eliciting scores on and weights of requirements (see Sect. 1). The assignments of the course discussed in this article could well be developed to prepare stakeholders for the formulation and elicitation of requirements with RE methods like those proposed by, for example, Andreou [2], Pitula and Radhakrishnan [49], and Prakash and Gosain [52].

As was stated in Sect. 1, the course was directed at RE processes of strategic, non-routine, organizational decisions. However, most organizational decisions are non-strategic and routine. Would the course be suitable for such decisions? We cannot tell from our research, but even with (largely) routine decisions, some requirements may have to be weighed explicitly, either because the weights are not known or because actors feel the weights used hitherto should be re-evaluated. Possibly, the course can be of use then, but the importance assessment processes of routine decisions should be studied and the insights thus gained could lead to redesign of the course. For non-strategic non-routine decisions, we think that the course is potentially as applicable as for strategic decisions. But in this case, much less is often at stake than with strategic decisions, so the investment in the course may be deemed not worthwhile.

An important lesson that we learned from designing and teaching the course is that it is almost impossible to have participants internalize a concept as abstract as ‘importance assessment’ in a period of a few hours. In a lecture, the concept can be explained, and in assignments, it can be practiced, but the need for more explanation, examples, and feedback, as expressed by the participants, suggests that either the course should be expanded or the ambition should not be to generate lasting effects. Eliminating assignments to make space for more theory and feedback does not seem to be a good option, given the perceived quality and relevance of the assignments. The limitation mentioned in the introduction, that we did not measure changes in behavior, was therefore prudent; such changes were not to be expected anyway.

If an organization has the ambition to devote attention to importance assessment processes in a structured way, this can be done by giving explicit attention to importance assessment during requirements engineering processes (as opposed to just eliciting requirement weights). If an organization wants to do this, a trajectory can be developed to educate employees in importance assessment. Based on the experience in giving the course and in dealing with importance assessment issues in various contexts, we suggest that organizations wanting to devote systematic attention to importance assessment take the following steps:

  1. Make actors involved in requirements engineering aware that importance assessment is not the same as eliciting weights. Although some weight-eliciting methods like AHP [55] may induce thinking about weights, this is by no means a certainty. How this awareness is to be achieved depends on the situation. It is probably best to address the most senior management level, where non-routine decisions with strategic consequences take place and where new insights about how to make good decisions may be welcomed.

  2. Make actors aware of which parts of the deliberations taking place during RE processes involve requirement weights (as opposed to scores). In formal RE methods, these moments should not be too difficult to identify, and they can be the starting points for importance assessment.

  3. Train a limited number of key actors involved in high-pay-off non-routine decisions in the issues addressed in our course and help them to implement the acquired skills in RE processes in which they are involved. Requirements engineers would be an obvious choice, both because of the expertise they already possess and because their work involves explicitly handling requirements. The assignments in our course may be used as a basis for this training.

  4. Identify moments within ongoing (in)formal RE processes where importance assessment is relevant and point these out to the participants. These moments would likely occur during and after the definition of requirements and before eliciting their weights.

  5. Insert importance assessment support tools into (in)formal requirements engineering procedures within the organization. This could be done together with some key actors involved in RE processes.

  6. Finally, set up a program to train and maintain importance assessment skills for all relevant actors in the company and to instill an ‘importance-assessment-awareness’ culture in the organization.

The development of our course did lead to follow-ups, as we intended. We managed to secure two PhD project positions with KLM Royal Dutch Airlines with the aim of developing tools for improving the quality of importance assessment processes. The course that was presented in this contribution may form a basis for this. This opportunity was not a direct consequence of having given the course; however, being able to explain to the CEO of KLM the experiences with the course in his organization was very helpful in gaining the opportunity to ask him for support of our research in the first place.

For Amsterdam Schiphol Airport, we are developing a short training that should provide members of crisis management teams (a crisis being, e.g., a fire or an aircraft accident) with a common importance assessment heuristic. This heuristic can be practiced during regular training sessions, so that, when needed, crisis management team members can co-ordinate their decisions more efficiently. The opportunity to develop this training was given to us through the mediation of one of the attendees of the course described in this article.

We conclude with some tips for researchers who want to develop practical applications based on their work, and gain support for this from practitioners. These tips are partly based on our own experience with introducing our course and partly on Antonacopoulou [3], Polzer [50], and Posner [51].

  • Develop a teaser. It does not need to cover all the knowledge that you can offer the practitioner, as long as it is appealing enough to get practitioners interested.

  • Look for problems that you can help to solve. No definite proof is needed that the problems exist; clear indications are enough for developing the teaser.

  • Do not strive for the optimal application. The aim is to have practitioners develop interest in an application based on the teaser, not to make them see the teaser as the perfect solution to their problems.

  • Be patient. Listen to what practitioners want and develop the teaser. It may take many encounters, and many modifications to the teaser, before somebody gets interested.

  • When a practitioner indicates he or she is interested, that is just the beginning. Even after administering the teaser, it may take some time before a follow-up is realized.

  • But if it is, our experience is that there probably is no need to worry about competitors for some time. The teaser may have given you an exposure that others will not quickly eclipse.

More information on the course, including an outline of the course materials, can be obtained upon request from the corresponding author.