“How should I evaluate my research?” is an eternal question we think every researcher should ask. Why? Not because researchers would like to apply an evaluation that matches their research objectives; many are not aware of, or do not worry about, such issues. Rather, it is because they ultimately want to publish their research, and to do so they need to pass a review process. The reason for asking the question and seeking solutions is purely pragmatic.

For some disciplines, e.g., psychology, there are established research methods and evaluation protocols. The American Psychological Association (APA) even publishes a manual that lists types of papers and their structure and style. For other disciplines, like engineering, research evaluation is neither well studied nor agreed upon. Twenty years after Reich (1995), there is still a great deal of work to be done to advance design research methodology.

What really counts, especially in engineering, is what works, not what we think should be done; pragmatics is the driving force in engineering. Bridges were erected in ancient times without a theory, through trial and error and accumulated experience. Similarly, inventors like Watt developed products such as the steam engine as practical improvements over their predecessors, and only later came supporting or explanatory theories such as thermodynamics.

In a recent workshop on publishing design research papers (PUBLISH-ED 2014), the advice of a group of editors of major design-related journals was to study the papers published in each journal to form an understanding of that journal's particular style and culture. Such a study might shed light on the kinds of evaluations that pass the review process in that journal; these are the evaluations viewed by reviewers and editors as sufficient to merit publication. It is true that such a study lacks the richness of information embedded in the actual reviews and their synthesis into a single editorial decision; readers of papers cannot know which evaluation barely passed review and which was praised. Nevertheless, such a study is a good starting point for every researcher.

In this short editorial, I cannot provide an exhaustive review of the subject, although this would be a worthwhile endeavor. What I can do is use the papers in this issue to demonstrate the diversity of evaluations and their relation to paper topics in one issue of one journal. In addition, I wish to point out the relation of research evaluation, or research methodology, to designing, a subject we all wish to advance.

Moon et al. propose a method for designing product families. They use a case study research methodology, in which a single case with a few variations is used to establish the validity of the approach. The case is one that was solved previously by the authors, so that they could compare their present results to the former ones and claim improvement, an important ingredient in evaluation; nevertheless, the case is not a real design. The selected research methodology is a common approach in optimization, but it comes with a host of further research issues that could be explored (Reich 1995).

Kelly and Gero present a general, elaborate model of how designers move between situations during design interpretations. They demonstrate this idea with a computational exercise that is quite specific and not necessarily representative, but that helps to illustrate the model.

Tamaskar et al. develop a measure for assessing aerospace system complexity and apply it to three existing, albeit small, satellite missions for which data were largely available. To demonstrate relevance to real and larger design problems, synthetic examples were generated. This is an important way to test the properties of a method and determine its sources of strength and its limitations, because the data can be designed and generated to serve different purposes.

Morkos et al. compare three models for change prediction and test them on two projects. The topic requires that an elaborate set of changes be developed for each project. This is done artificially, and it is unclear whether it is representative of real changes; but how can one ever foresee truly unknown events? Creating a quality artificial test bed is a research topic in its own right. The three methods are run on two cases with simulated change data. This work demonstrates that a study needs to be well designed in order to yield good data for evaluating research results.

Kolberg et al. present an approach to designing development processes for particular contexts. More specifically, they demonstrate that good-quality robots can be designed using a structured set of design tools. The set is evaluated through a multi-year, multi-site research process that allows for conducting statistical analyses and using multiple methods of assessment. Such research is quite difficult to execute: it involves several years of engagement, data collection, analysis, and refinement of the research hypotheses and evaluations. This research demonstrates that even when dealing with processes, one need not settle for a single case study; sometimes extensive studies can be mounted, leading to conclusive results.

In view of this diverse set of five studies and their evaluations, which worked despite not being perfect, which method should you use in your next research? The answer this editorial suggests is in line with two previous editorials on the role of designing in design science: it is through designing that we plan our research, and the outcome of this designing process, the research plan or design, is the research methodology of the study.

If the research methodology of a study is its design, it is no wonder that it is so important: without proper design we do not have quality products, and without proper research methodology we do not have quality research. This observation has another consequence. Rather than searching for the best or single research methodology, a futile endeavor (Reich 2010), it is more useful to be educated in the basics that will allow us to design an appropriate research methodology for particular research objectives, and to defend it so that readers can appreciate it.