1 Introduction

Engineering Design is an activity in which what you want to create and how you want to create it are fundamental to the underlying process. Both aspects of this activity require selection among alternatives, and these selections are intertwined. In engineering, how a problem is defined and how alternatives are created and selected are the focus of methods performed in a team. How do we carry out this selection? Are we content with the outcome? Can we do it better? These are important questions to address. They have been studied to varying depths in many disciplines, including psychology,Footnote 1 neurosciences,Footnote 2 economics,Footnote 3 management,Footnote 4 non-fiction literature,Footnote 5 and engineering.Footnote 6 While we are interested in engineering design contexts, which might differ from non-engineering contexts, these questions transcend most disciplinary boundaries. In design, there are always many alternative solutions, some of which await discovery and some of which may require additional knowledge to work. The choice of multiple criteria for evaluating each alternative, and the selection itself, are decided in a team process comprising participants from diverse disciplines, including non-technical ones.

Over the last 20 years, many studies have been conducted on selection among alternatives in design. These studies have been reported in a variety of journals, including Research in Engineering Design; together they form a web, part of which is depicted in Fig. 1. The web reflects the relations between them: some studies rely on the results of other studies ("positive reference" in the legend); some criticize previous studies ("negative reference"); some propose improvements to other proposals, thereby describing their limitations (also considered negative); and some merely report what another study stated ("indifferent reference").Footnote 7

Fig. 1 Web of relations between studies related to team-based selection among alternatives. The node labels appear in square brackets at the end of entries in the reference list. The displayed web is incomplete

It is difficult to go through these studies and synthesize a coherent position, as they have been conducted by authors from different disciplines, holding different beliefs, and often critical of each other. The studies at the top of the figure represent the pool of methods and theoretical work that forms the basis of subsequent work. Some of these studies are based on theoretical frameworks, e.g., Sen and Arrow (both from public choice theory) and Keeney and Raiffa (from multi-attribute decision theory); some are based on a theoretical framework with strong influence from practice, e.g., Saaty and Suh; and others originated in practice, e.g., QFD and Pugh. These origins are important because they form a context that permeates the discussion of the questions regarding methods that I raised earlier.

For example, consider a subset of these studies (Fig. 2). Arrow's impossibility theorem (AIT) [Ar63],Footnote 8 constructed in the context of public choice by voting, forms the basis of Hazelrigg's (1996a, b, 1997, 1998, 1999) contention that AIT renders many selection methods in design irrational. Scott and Antonsson (1999) argued against the perceived importance of AIT and claimed that, by and large, design situations fall outside its scope. Franssen (2005) claimed that Scott and Antonsson's interpretation of AIT is incorrect and that AIT also applies to multi-criteria evaluation methods. Nevertheless, Franssen pointed out that for particular situations there are decision procedures that work well and satisfy particular requirements. This calls for a suite of methods that could be used in different situations.
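To make the underlying issue concrete, the following is a minimal sketch in Python, with invented concept names and rankings, of the kind of aggregation paradox that AIT formalizes: three internally consistent rankings, whether coming from three team members or three evaluation criteria, produce an intransitive group preference under pairwise majority aggregation. It is offered only as an illustration and is not drawn from any of the cited papers or methods.

# Hypothetical illustration: three consistent rankings of design concepts A, B, C
# that produce a cyclic (intransitive) group preference under pairwise majority rule.
from itertools import combinations

# Each ranking lists concepts from best to worst (invented for illustration).
rankings = {
    "evaluator_1": ["A", "B", "C"],
    "evaluator_2": ["B", "C", "A"],
    "evaluator_3": ["C", "A", "B"],
}

def prefers(ranking, x, y):
    """True if this ranking places concept x above concept y."""
    return ranking.index(x) < ranking.index(y)

def majority_prefers(x, y):
    """True if a majority of the rankings place x above y."""
    votes = sum(prefers(r, x, y) for r in rankings.values())
    return votes > len(rankings) / 2

for x, y in combinations(["A", "B", "C"], 2):
    winner, loser = (x, y) if majority_prefers(x, y) else (y, x)
    print(f"majority prefers {winner} over {loser}")

# Printed result: A beats B, B beats C, yet C beats A, so pairwise majority
# aggregation alone cannot name a single "best" concept in this case.

Whether such cycles actually arise in a given design team's evaluations, and whether they matter in practice, is precisely what the studies cited above dispute.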

Fig. 2 Contradictory web of relations

Keeney (2009) [Ke09] argued that Hazelrigg (1996a, b, 1997, 1998, 1999) and Franssen (2005) misinterpreted AIT, that AIT applies only in a restricted and special case, and that there are alternative formulations of selection under a decision-theoretic framework that are more applicable to collaborative decision contexts and that lead to perfectly logical (i.e., rational) decisions. Finally, Franssen and Bucciarelli (2004) took a path similar to Keeney's, limiting the relevance of AIT in engineering design, while proposing game theory as a decision-making formalism that accounts for the negotiation and communication between participants manifested in design settings. This proposal is not new (e.g., Vincent 1983). However, even game theory as a basis for decision-making has been criticized (Barzilai 2006), and that criticism has, in turn, been criticized as being totally wrong (Krantz 2005).
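For contrast, the following is a minimal sketch, with invented criteria, weights, and scores, of one generic decision-theoretic style of selection: all alternatives are evaluated against a single agreed value function instead of aggregating separate rankings. It is only an illustration of the style being referred to, not Keeney's or any other cited author's specific formulation, and it glosses over how the team agrees on the scores and weights in the first place.

# Hypothetical illustration: a simple weighted additive value model over invented data.

# Normalized scores in [0, 1] of three design concepts on three criteria (invented).
scores = {
    "A": {"cost": 0.9, "performance": 0.4, "reliability": 0.6},
    "B": {"cost": 0.5, "performance": 0.8, "reliability": 0.7},
    "C": {"cost": 0.3, "performance": 0.9, "reliability": 0.8},
}

# Criterion weights the team is assumed to have agreed on (they sum to 1 here).
weights = {"cost": 0.5, "performance": 0.3, "reliability": 0.2}

def overall_value(concept_scores):
    """Weighted additive value of a single concept."""
    return sum(weights[criterion] * score for criterion, score in concept_scores.items())

values = {name: round(overall_value(s), 3) for name, s in scores.items()}
print(values)                                    # {'A': 0.69, 'B': 0.63, 'C': 0.58}
print("selected:", max(values, key=values.get))  # selected: A

The disagreement does not disappear in such a model; it moves from aggregating rankings to negotiating scores and weights, which is itself the kind of team process discussed in this editorial.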

It requires significant effort to digest these and related papers in order to form one's own position about who is right or wrong, or whether it all depends on the assumptions underlying the different models and their relevance to particular engineering design contexts. One interesting point shared by some of these opposing views is that they appreciate the need for a multiplicity of decision-making methods for different problems or contexts.

More specifically, all of the above are centered on either a theoretical or a pragmatic stance. One underlying controversy is how successful the pragmatic methods are compared to the methods claimed to be based on the theoretical stance. Because none of these methods is easily testable in controlled situations, each side may claim that the value of its approach is superior using completely different criteria. Hence, the proponents and opponents of the different positions seem to be at a standstill. No side really seems to have convinced the other. It seems as though they speak different languages. Katsikopoulos (2009) [Ka09] explained this state by proposing that the debate might reflect differences between coherence and correspondence. This is not a new position. It has been expressed in a different way before as a contrast between two worldviews: scientism and praxis (Reich 1994).

2 ‘We’ as design researchers: 1st point of contrast—worldview

2.1 Praxis: correspondence

There are researchers whose goal is to impact and improve design practice. This goal translates into taking practice seriously as the ultimate test of research. Therefore, the "truth value" of any hypothesis or claim regarding design methods is determined solely by empirical tests in practice. This approach strives for correspondence. There are clear difficulties in this approach. There are no known best results for such tests. Tests cannot be replicated. Method use is context dependent and therefore does not transfer easily to new situations. Experiments in practice involve many parameters that cannot be controlled or measured easily. The failure of an experiment could easily be attributed to the context instead of to the hypothesis about the method.Footnote 9 These difficulties notwithstanding, it is clear that no practical impact can be claimed without tests in practice.

The acceptance and outcomes of design methods over time determine their proliferation. Take, for example, QFD or Six-Sigma: they are popular methods, but they did not work as well for some companies as for others (Griffin 1992; Coronado and Anthony 2002). Therefore, some companies may have abandoned them. The advocates of these methods use their proliferation to claim practical success, even though proliferation does not necessarily imply success.

2.2 Scientism: coherence

There are researchers whose goal is to attain some level of theoretical rigor. They tend to pick an extant theory from another domain and use it as the basis for deriving normative and prescriptive methods. The theoretical rigor allows them to verify by derivation that certain desirable properties of the solution are assured when the method is applied. This constitutes coherence,Footnote 10 but coherence and correspondence are still determined independently.

2.3 Mixing worldviews

With these interpretations, the two seemingly contrasting views coexist in harmony because their positions do not really contradict each other; they simply address different things. Therefore, researchers working within, or adopting, either of these views conduct their research and report their findings without difficulty and without the need to address the other view.

Conflicts arise when assumptions are misinterpreted and worldviews become blurred. From such a state, unsupported claims are foreseeable. Without proper analysis, such confusion can propagate and diffuse into other studies. Such states should be discouraged.

3 ‘We’ as the community of designers: 2nd point of contrast—designers versus design researchers

3.1 Designers

So far, the analysis has centered on design research, or on "we" as the community of design researchers. However, we are also designers, as design is a fundamental human activity and selection among alternatives is pervasive in our daily lives. We all face many decision points in our personal and professional lives. We all draw on whatever we want and can, whether information, knowledge, or methods, to make these choices. Some choices end up being favorable or even exceptional, and some unfortunate or even devastating.

In light of such varied consequences, we expect design practitioners to use methods that lead to favorable consequences. Indeed, the two most critical properties of methods for most of us are that the methods are usable by us and that we believe they lead us to good practical results. Cost-effectiveness is the driving force for method utilization, but estimating cost-effectiveness is non-trivial, subjective, and context dependent. Consequently, we would empirically find different practitioners using different methods, each justified by a phrase similar to "it is the most cost-effective method for my present need".

The methods used by practitioners are those they acquired through education, training by consultants or colleagues, or study from books. When confronted with a new method, most design practitioners would be reluctant to use it in their practice in place of their favorite method. Justifying a new method is very difficult because, even if we assume it has been proven cost-effective in some practices, its successful transfer to the particular new context is difficult to establish. This is because successful transfer requires not just throwing the method over the wall, but transforming it so that it becomes embedded in the practices of the firm.

3.2 Design researchers

While design practitioners are pragmatic in their use of methods, we design researchers find ourselves in an asymmetric position. First, we are developers of methods and carry the responsibility of improving the methods available to practitioners. In this capacity, we develop and defend particular methods and advocate for their use in practice. This is our theoretical stance.

But second, as everybody designs, we are also designers and, therefore, users of design methods. It would be expected of us, design researchers studying or developing decision-making methods, that we specifically use our methods in our own practice; elsewhere, I refer to this as the principle of reflexive practice (Reich 2009). Yet, there is hardly any documentation of such decision processes.Footnote 11 If we use design methods in our practice, we do so as other practitioners do, pragmatically. We normally do not connect our practical and theoretical stances, thereby potentially casting doubt on the true value of our work or claims.

3.3 Declaring conflict of interests

One criticism might be raised that I have not made clear my position with respect to particular references, e.g., in Fig. 1, and that consequently this discussion suffers from the aforementioned "indifferent referencing" shortcoming. The reason is that this is an editorial meant to stir discussion, not to present an opinion. Nevertheless, I cannot continue the analysis without mentioning my own personal position, which is not hard to discover from my past publications (Reich 1992, 1994, 1995; Reich et al. 1996, 1999; Subrahmanian et al. 1993). I view practice as the ultimate test of research. This does not mean that rigor is surrendered, but that it is used when necessary and when possible to help us gain confidence in different methods. Rigor, however, does not determine practical success; it is valuable and desirable, but it does not replace practical testing, and the omission of rigor does not necessarily prevent useful practical utilization (Subrahmanian et al. 1993).

4 Resolution: the challenge

The aforementioned conflict notwithstanding, as an editor, but more importantly, as a researcher interested in improving the practice of design and of research, I have to be impartial, allowing all viewpoints to be presented and defended. When Hazelrigg's (2010) letter to the editor was received, it was clear that the conflict needed to be laid out in the open, with each side's presuppositions and arguments explicitly stated, leading to a serious reflective debate in the community; Hazelrigg should be thanked for persistently raising the controversy and for clearly demonstrating the limitations of some design methods, leading to the proposed open debate. This debate should expose the assumptions, the interpretation of results, and the potential limitations of the different positions. The journal neither has a position nor takes a stand; it becomes a platform for serious debate that could last as long as the design community wishes and as long as contesting positions are laid out.

Hazelrigg’s letter to the editor and the reply letter by Frey et al. (2010), which appear following this editorial, are just opening remarks. Design researchers and others are invited to submit papers on related subjects, including:

1. How can we determine the goodness of decision-making methods in design?

2. Design and validation of benchmark design scenarios that could serve to test selection methods in different contexts.

3. Application of selection methods on benchmark problems and ways to interpret the resulting data.

4. Theoretically proven limitations of methods.

5. Demonstrated limitations of methods in real practice or on benchmark problems.

6. New issues/criteria that must be addressed when dealing with selection methods.

7. Reviews identifying similar controversial situations related to other topics in design.

Each submission will undergo the usual review process. Given the debate between different positions, reviews will be requested from researchers with diverse and conflicting orientations. Reviewers who strongly object to a paper that is to be published following an editorial decision might be invited to publish their comments in the spirit of a true scholarly debate. Accepted papers will appear in a special section of the journal that could span multiple issues. As the purpose of this exercise is to foster discussion, papers are expected to explain the existing debate and comment critically on papers previously published in relation to this challenge.

In addition to papers that directly address the challenge, occasional papers that deal with decision-making in design will also not be able to avoid commenting on the controversy once it is laid out in the open.

Let the challenge begin!