
1 Background – Problem Space and Method

In recent years, the Design-Based Research (DBR) approach has increased in popularity within the field of education research. Though the roots of this approach are mature, the actual term (i.e., DBR) only came into use in the year 2001. Between 2001 and 2010, a total of 1,940 papers using this term were published [2]. DBR is recognized as an intervention method that researches educational designs (products or processes) in real-life settings to generate theories in the domain and to further develop the specific design through iterative processes. Researchers like me find it useful when investigating technological developments that support learning and learning processes. I research digital learning processes, and came approximately 8 years ago from the human-computer interaction and information systems sciences to the educational sciences. Though I had projects within the teaching and learning domain, they were often carried out as action research and interaction design studies, not as DBR projects. In my current research at faculties of education and humanities I find that there are elements from the interaction design and action research approaches that the educational design-based research approaches could benefit from (and probably vice versa, though that is not the scope of this paper). In this paper I give a brief (historical) introduction to Design-Based Research, drawing among other sources on a couple of the good reviews written in the last five years. These reviews encapsulate some of the key characteristics, and I use them to reflect on the activities and actors involved in DBR research, and to derive the critical perspectives raised. I do this in relation to what I have experienced when discussing with peers and conducting DBR research projects. These projects use technological developments in educational settings, where the users are primarily online and distributed in space and time. Also, the learning process takes place during, and transfers into, a daily work practice, which means it cannot be observed directly, as one can observe a classroom activity.

This paper is not a traditional literature review going through the full body of literature, though it does rely on a process of identifying key terms, locating literature, critically evaluating and selecting the literature, and writing a literature review [9]. However, this review also includes experiences from existing empirical research and combines these with the literature, similar to an integrative review [34], adding inspiration from a narrative ethnographic approach [9, 32]. Through this I aim to structure my reflections, which are situated in cross-disciplinary experiences, and to make the more subtle factors and findings explicit (even for myself).

An integrative review can contain theoretical papers, case studies, etc. that apply different methodologies (quantitative and qualitative, experimental and non-experimental research) [35]. Whittemore and Knafl describe how this multifaceted approach provides a richer picture of the topic being reviewed, but also how it raises the complexity and brings challenges: “The integrative review method can summarize past empirical and theoretical literature on a topic of interest….Incorporate diverse methodologies in order to capture the context, processes and subjective elements of the topic. The integrative review method has been critiqued for its potential for bias and lack of rigour” [35, p.552]. Whittemore and Knafl suggest bringing rigour into this process by, among other things, applying Miles and Huberman's [24] processes of data reduction and data display in the qualitative analysis. I take this a step further by applying an interpretative layer through personal experiences in my own research projects, in a reflective narrative [24], thus making the personal elements explicit. This does not remove bias, but it may provide insight and clarity to the interpretations made.

Thus, this paper is primarily a discussion/reflection paper on a methodological level, and it is not a rejection of DBR. The aim is to illustrate that DBR has a lot to offer, and I for one have research projects where the method makes great sense to apply and to continue applying. Empirical projects are included in the paper at a vignette and reference level, representing the potentials and the critical points raised. However, rather than seeing the critique as a rejection of an approach, it is an attempt to show where some of the critical incidents are hidden, leading to the identification of possible elements for future action. The paper argues that there is a risk of avoiding real-life factors by isolating the real-life intervention to the actors and actions in the classroom, and thus mirroring some of the drawbacks in laboratory experimental research that DBR wanted to distance itself from. The discussion raises issues such as users’ needs, resistance, organizational relations, and alternative design solutions. Also, these types of online and competence development processes need new empirical methods, and an argument for rigour in the DBR analysis and theory generation phases is presented.

2 Design-Based Research

Though design-based research as a term for an established approach in the educational sciences is new, with the first formal use in 2001 (as noted above), design science as such is of course far from new. Therefore, when I search for “design based research” OR “design-based research” in Web of Science, Scopus and Google Scholar, the first appearance of either term is within engineering, in a talk from 1973 on production technology [17]. Earlier appearances may exist, as the databases may not contain digital versions of the papers, or papers from before this period may only be available as scanned versions where the body text is not searchable.

No doubt the discussion of design science appears much earlier; Cross, for example, provides an introduction to it in Designerly Ways of Knowing: Design Discipline versus Design Science [8]. Cross also shows that within the technological domain, design science has primarily been about how to increase the knowledge pool on design methods, and less about how design processes used in research can improve theory generation in any domain [8].

Action research, an intervention research approach, was primarily coined by Kurt Lewin in the 1940s-50s. Lewin made his famous argument that one cannot understand something unless one tries to change it. He formulated the unfreeze, change and freeze phases of action research, relying among other things on group dynamics and a democratic research process, which today has evolved into more continuous action research change models [18].

Design-Based Research in educational research primarily emerged, similar to action research, as a response to the need for more usable theories and models. Juuti and Lavonen [20] say that DBR bridges the gap between educational research and practice. The two papers that have later been named the classics or first movers are Collins in 1990, who called for a design science of education [7], and Brown in 1992, who talks about design experiments [6]. One of the first papers to use the design-based research term is by the Design-Based Research Collective, with Design-based research: An emerging paradigm for educational inquiry from 2003 [12]. Many of these people came from a psychological or teacher education research background, where experiments were applied in lab-like settings that tested hypotheses. To put it a bit squarely: real life was for observations, and the laboratory was for experiments. The objective was to move to real contexts, to develop and work with practically usable methods and theories [6, 7, 12, 14]. DBR is for some a hypothesis-driven approach to theory development: “Through a parallel and retrospective process of reflection upon the design and its outcomes, the design researchers elaborate upon their initial hypotheses and principles, refining, adding, and discarding - gradually knitting together a coherent theory that reflects their understanding of the design experience.” [14, p. 106].

According to Wang and Hannafin in 2005, DBR is “a systematic but flexible methodology aimed to improve educational practices through iterative analysis, design, development, and implementation, based on collaboration among researchers and practitioners in real-world settings, and leading to contextually sensitive design principles and theories” [33, p.6–7]. This is not much different from Anderson and Shattuck [2], who derived the key characteristics through a review of the five most cited papers from each year. Their paper is structured with a heading for each key characteristic, which is shown in the list below (and I will return to this list at the end of the paper). DBR is [2]:

  • Being situated in a real educational context

  • Focusing on the design and testing of a significant intervention

  • Using mixed methods

  • Involving multiple iterations

  • Involving a collaborative partnership between researchers and practitioners

  • Evolution of design principles

  • Comparison to action research [which the authors describe as different from DBR]

  • Practical impact on practice

DBR in education primarily focuses on an already designed product/process (perhaps a software prototype or an educational plan for use of a specific, already developed technology) and its application in an everyday context, with all its messy, chaotic and divergent nature. This design is then improved in an iterative manner, through several interventions [e.g. 15, 22], which gives knowledge about how the design works and informs the educational domain about how similar designs and situations would work. The design being used in the intervention can be a new technological product [22], or a technologically enhanced learning process [34].

The DBR mind-set rests on an assumption that we as researchers can learn from the participants’ (teachers’ and learners’) take on the design and the experienced learning process. Amiel and Reeves call it a democratic research practice for researchers who believe in research as value-added, and see DBR as a possibility to engage in socially responsible research [1]. Through this they distance themselves a little from the more researcher-defined, hypothesis-driven approaches to DBR, and in the paper they illustrate the difference between a more typical/traditional predictive research approach and a more inclusive DBR approach where teachers are included in the formulation of the problems: “In contrast, we suggest that design-based research begin with the negotiation of research goals between practitioners and researchers…. The practitioner is seen as a valuable partner in establishing research questions and identifying problems that merit investigation” [1, p. 35].

Learning processes are complex in nature. The DBR researcher Sasha Barab argues from the perspective that cognition is not a disembodied activity of the mind, and that the whole person, the environment and the activity are part of a learning process [3]. However, this also makes it difficult to understand, measure, and differentiate between the dependent and independent variables, as many factors have an influence. Juuti and Lavonen mention: classroom settings, social and psychological atmosphere, pupils’ motivation, affection and conceptions toward a topic to be learned or toward schooling as such, and moreover, students’ experiences outside the school [20, p.55].

DBR research results in understandings and knowledge that are intended to be useful for, and often to change, practice. This duality, and the fact that both are equally important, is seen in two sentences in the paper by Barab and Squire: (1) Design-based research requires more than simply showing a particular design works but demands that the researcher (move beyond a particular design exemplar to) generate evidence-based claims about learning that address contemporary theoretical issues and further the theoretical knowledge of the field [4, p.5–6]; (2) Design-based research that advances theory but does not demonstrate the value of the design in creating an impact on learning in the local context of study has not adequately justified the value of the theory [4, p.6]. Pragmatism is by many authors seen as the underlying paradigm [2, 4, 20, 33]. This entails an ontological perspective of the world as complex and chaotic, where people with ideas and solutions change the context and the reality through interaction; an epistemology that we need to try out our ideas and solutions in real-world settings in order to gain knowledge of the world, and that the theories we generate need to be practical solutions to real-world problems; and the methodological validation that we can know something substantial about this world through repeated interventions. This is not the same as saying that a solution or a theory is final and will always work.

There are, however, inherent challenges on a methodological level, which have also been discussed and raised by several researchers. I have in particular learnt from the work of Yrjö Engeström [16] and Chris Dede [10, 11]. Not everyone who criticizes DBR rejects it; the point is rather to be aware of and work with these factors as the DBR method matures. An often discussed issue is over-methodologized studies: applying mixed-methods strategies often means using many and varied methods in the same DBR study, and the extremely large data-sets which these methods lead to make alignment and analysis difficult [10]. Another criticism is that it can be difficult for a researcher to stay trustworthy and unbiased when he/she is involved with the design and the intervention (designing, planning, conducting and evaluating it) [4], and at the same time is the one interpreting the quality of, and lessons learned from, the research practice [2], which, incidentally, is comparable to the epistemology of a constructivist and interpretivist viewpoint. A third issue is that the design evolves over time, and with this the methods applied may shift as well [10]. As such, DBR can lack rigour in the research process, which means we need robust evaluations, as well as ways to determine what a successful design is [10, 20].

Lyon and Moats point out in a paper on intervention research in general (i.e. on reading interventions, not specifically DBR) that it may be difficult to replicate interventions because we do not have enough insight into a number of factors [21]. They mention: sample heterogeneity and definition; poorly defined interventions; inadequate control groups; inadequate intervention time and transfer effects; effects of past and concurrent instruction; method or teacher effects; consistency across teachers; and generalization and maintenance issues [21, p.580]. Though some of these factors show a desire to aim at a more positivistic paradigm of wanting to find the rules that govern the world (such as the desire to replicate), they raise interesting issues relevant for research interventions. These are issues that I find are seldom discussed explicitly in DBR (or in interaction design, for that matter), such as what effect the teacher has on the intervention, and consistency across teachers. Another discussion is that there are many projects that have very well-defined interventions, but because of their evolving nature it is difficult in papers to disseminate knowledge about them precisely enough to document what took place.

Engeström criticizes that design experiments have what he calls a linear view: “In discourse on ‘design experiments’, it seems to be tacitly assumed that researchers make the grand design, teachers implement it (and contribute to its modification), and students learn better as a result. Scholars do not usually ask: Who does the design and why? This linear view is associated with notions of perfection, completeness and finality.” [16, p.3] A point Dede also raises when stating that: “People fascinated by artifacts also are often tempted to start with a predetermined “solution” and seek educational problems to which it can be applied, a strategy that frequently leads to under-conceptualized research” [10, p.107].

Engeström shows how DBR seldom discusses that the linear view makes some research studies blind to how interventions also bring about resistance to change from participants, and to how people reinvent a strategy and perhaps change it while it is being implemented. He sees resistance as a natural force (as in action research), discards design experiments and argues instead for formative interventions, where he, among other things, presents a model to analyze and understand the interventions, namely his renowned model of activity theory [16]. He argues that all actors thereby get a language to talk about what is happening and has happened in the process. He also argues that the formative intervention, unlike DBR, has an open starting point, and that the intervention is subject to negotiation, with the aim of focusing more on a localized solution than on generally applicable solutions, and thus a research role that aims to foster expansive transformation owned by the participants, rather than a process where the researcher tries to control all variables [16].

Majgaard, Misfeldt and Nielsen [22] drew on inspiration from DBR, interaction design (ID) and action research (AR) in their case study, which focuses on a specific design for children. It shows how even children can aid in the design process. Though it does not raise the issues of how to align variables and findings, or how to work with resistance or alternative designs as I do here, it does show an interesting example of how the children pointed to theory-generation issues within the factor of motivation for learning, which the researchers would not have found had they relied only on the teachers’ input [22]; I will return to this later. The three approaches, DBR, ID and AR, all have a starting point in pragmatism, and it is possible to draw inspiration from the ID and AR perspectives at an overall level, which I will do in the following sections. I look to ID and AR for inspiration on some of these issues of resistance, linearity and difficulties in alignment, as well as on working with alternative designs, user involvement, and methods for knowing about your online and time/space-distributed users.

3 Online and Pervasive Settings

Where DBR projects were relatively small to begin with, many projects, like the ones I work with today, are large in scale, are longitudinal studies over several years, and involve many participants and/or several research partners [e.g. 23, 25]. Barab writes in his introduction: The goal of DBR is to use the close study of a single learning environment, usually as it passes through multiple iterations and as it occurs in naturalistic contexts, to develop new theories, artifacts, and practices that can be generalized to other schools and classrooms. [3, p.153]. There is a lot of technology-enhanced education that involves in-class designs (using smartboards, Mindstorms, programming computers, using iPads, etc.), but what if the single learning environment is not confined to a single physical location?

The projects I work with have an extra dimension of participants working distributed in either time or place or both, and in settings which, physically or mentally, are not strictly classroom-like [24, 34]. This means the use situation, the intervention, is not always easily identified, but permeates into other everyday situations, and the question becomes: how do we as researchers deal with a design and an intervention which we cannot follow directly due to its pervasive nature?

Anderson and Shattuck [2] reviewed approximately 50 DBR studies, and none of these were explicitly in the competence development domain. They did categorise five studies as teacher training, but teacher training does not necessarily entail competence development, and the citations they use refer to results that are presented as useful in pre-service teacher training [2]. However, as teachers often play a vital role in the studies, a competence development perspective, in the sense of training experienced teachers, could certainly be part of some of these studies, just not an explicitly mentioned objective.

Even though many of my projects are situated in a formal educational system, they often have competence development for teachers as one of the objectives, and I have also worked with knowledge workers in consultancy firms and with health care professionals. All of these situations differ from the traditional classroom setting, not only because of the online time and space distribution, but also because the primary learning objective is different. Learning objectives in school contexts (regardless of whether this is primary school or higher education) are often related to learning outcomes and retention. Of course engagement and satisfaction are important factors, but in the end students are assessed on their knowledge and ability to utilise their domain knowledge, also in more constructivist approaches such as project work with empirical data, problem-based learning approaches, etc. Nearly everything is measured at an exam. However, in competence development, transfer from the learning context to working contexts is the key factor. And if users are online, how can we gather information about the users’ interaction with the solution and the intervention, about how they communicate and reflect with peers, and about the effects that the intervention afterwards has on their everyday practice?

In the IFIP working group 13.6 on Human Work Interaction Design, a number of tools and techniques for exploring the relationship between extensive empirical work-domain studies and interaction design have been presented. The working group encourages empirical studies and conceptualizations of the interaction among humans, their variegated social contexts and the technology they use both within and across these contexts (see the proceedings and activities at http://blog.cbs.dk/hwid_cbsdk/). The methods of sketching and mobile probing/probes are relevant in this context and are methods that I have worked with in the HWID group. Sketching can work as a way of getting at user needs and requirements, as well as an affordable way of trying out alternative designs [37]. Mobile probes and probing is a method in between cultural probes and interviews, where the unknown is explored through questions and assignments sent via SMS: questions about what people are doing here and now, what they have done in a particular area that day, which challenges they have met, etc.; and assignments such as encouraging participants to use a specific technique the following day, or to interview one of their students/colleagues. We find it a fruitful method when the users are distributed in time and space away from the research team, because we gain knowledge about the person while they are “doing” [13].
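To make the structure of a mobile probe day concrete, the sketch below shows one hypothetical way a day's questions and assignments could be represented and dispatched. The probe texts, times, phone number and the send_sms placeholder are illustrative assumptions only and are not taken from the cited studies [13].

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class Probe:
    send_at: time   # time of day the probe is pushed to the participant
    kind: str       # "question" or "assignment"
    text: str

# A hypothetical one-day probe schedule for a distributed teacher-participant.
DAY_SCHEDULE = [
    Probe(time(8, 30), "question", "What are you working on right now?"),
    Probe(time(11, 0), "question", "Which challenge have you met so far today?"),
    Probe(time(13, 30), "assignment", "Try the inquiry technique from module B in your next lesson."),
    Probe(time(16, 0), "question", "How did your students respond? Reply in a few sentences or a photo."),
]

def send_sms(phone_number: str, message: str) -> None:
    """Placeholder for an SMS gateway call; any real provider API would go here."""
    print(f"SMS to {phone_number}: {message}")

def run_day(phone_number: str, schedule=DAY_SCHEDULE) -> None:
    # In a real deployment each probe would be dispatched at its send_at time;
    # here we simply iterate in time order to show the structure of the probe day.
    for probe in sorted(schedule, key=lambda p: p.send_at):
        send_sms(phone_number, f"[{probe.kind}] {probe.text}")

if __name__ == "__main__":
    run_day("+45 0000 0000")
```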

Methods like these work by uncovering the unknown and serve as a catalyst in the daily practices. They open up areas that we as researchers did not know we could or should ask about, and that participants had not verbalized as interesting issues [13, 37]. Other methods that can yield results in these pervasive settings are auto-ethnographic methods of a digital nature, such as self-reporting on use via log-books, rich qualitative questionnaires, and digital storytelling/narratives. Interestingly, this relates to the use of digital narratives in DBR. Here, digital narratives are reported to be used as a reflective tool between researchers when analysing and discussing project findings [see 20, which references Bell, Hoadley, and Linn (2004)]. Finally, many traditional mixed-methods strategies are of course applicable in online environments, such as online interviews and focus groups using video conferencing, online surveys, etc.

In conclusion, I argue that as DBR expands to educational settings that exceed the traditional formal classroom setting, the methods applied must embrace this expansion.

4 The Participants and the Organisation

DBR emphasizes interventions in a representative real-world setting understood as the classroom setting; investigating learning, learning strategies, perhaps teacher-student relations or even political agendas [2, 33]. Juuti and Lavonen [20] say that design research has three parties: (a) a designer (e.g. researcher), (b) a practitioner (e.g. teacher), and (c) an artefact (e.g. a web-based learning environment for science education), but they do not mention any other roles in the organization. However, there are many more roles, structures and activities which could be considered than those present in the classroom. For example, the team of teachers with whom the teacher in the intervention collaborates on a daily or almost daily basis, the IT people and administration, the management, or more intangible artefacts such as the culture at the school, the voice of the municipality, perhaps even national or international strategies, etc. The objective here is not to make educational research into grand-scale organizational, social or financial studies, but to illustrate that if real-world settings are important, then the organization as a whole is important, and we need to understand or at least reflect upon its role.

Action research has its roots in organisational studies and tends to be more sensitive to the systemic nature of organisations, where many aspects of the organisation and its network relations need to be taken into account in change and intervention studies. There are many action research methods, but one of the common denominators is that researchers co-construct knowledge together with the practitioners (of course to varying degrees in the various methods) [18]. Though there is some similarity between AR and DBR here, AR often provides the opportunity for participants to take ownership of the design and the interventions to a larger degree - sometimes even to a degree where the participants find that the process the organization has been through would have happened anyhow, i.e. without the researchers being present, which is in a way a positive thing. I have also seen how too much ownership from management means that teachers then almost tacitly agree to think less constructively and engage less in the DBR study. This is in line with the previously mentioned thinking of Engeström, who works with resistance as a natural force [16], and in much organizational development literature resistance to change is seen as an inherent human trait.

In learning processes that expand beyond the classical classroom, I as a researcher know less about how the users interact with the solution and about how it affects their everyday practice; only the users themselves can provide this input. If we at the same time believe in a contextual setting, where interaction among peers in and around the organisation affects their learning and how their learning transfers to practice, then by only investigating the three aforementioned parties (the designer, the teacher and the artefact) we may create yet another closed lab-like setting. It may happen in real life, but it will be an artificial real-life research setting, where we omit too many factors of influence. The problem with such a statement is of course that we, on the other hand, open up the vast myriad of variables to take into account, and risk yet again over-methodologizing our studies in order to “capture” the effects of these factors. But if we could, early in the process and in a truly messy real-life context, explore the different possibilities and influencing factors, perhaps this could aid in a better alignment. I will discuss one possible way of dealing with this issue in the following section.

There are certainly some DBR researchers who have become aware of this, and Barab and Squire mention the boundaries of the naturalistic context, and provide a rare and much appreciated example of a design in a singular place that did not scale well, because aspects of the surroundings regarding usability were not adjusted for [4]. So, we need to begin contemplating how to see and investigate such factors.

One challenge is that the parties involved in an intervention may have different interests - not necessarily opposing interests, but with variation in what they prioritize. One example is the difference between focusing on a micro or macro pedagogical level, or differences in time scale. The learners may be interested in learning and motivation with respect to their own learning process (here and now), whereas the organization is also interested in changes over time (next year's students, other classes, etc.), and the researchers may be interested in what can be learned from the intervention, which can inform theories and practices in general (meaning even bad examples can be learnt from). Also, who is concerned with the afterlife of the project in the organisation, after the researchers have left? Therefore it is pivotal to start from understanding and working with participants’ needs, and perhaps even to clearly identify the success criteria for all parties/stakeholders.

5 Problems and Potentials, Solutions and Suggestions

“The idea that DBR is initiated to address problems that are both scientifically and practically significant has been repeatedly addressed in the literature” [23, p. 98], and this objective of producing practically useful research results is also present in AR and ID. AR has a similar starting point of addressing problems, whereas in ID one can also work with potentials (such as developing design innovations for which there is no observed need yet).

In both ID and AR the underlying belief is to work from a starting point that is explorative in nature, identifying needs and requirements of users in the context, before settling on the design specifications. This initial starting point is somewhat different in DBR, which is in some cases hypothesis driven (in particular in the first papers of Collins [7] and Brown [6]), and often starts with a technological design, a fully functioning solution or a working prototype (as shown earlier). Ejersbo et al. present two types of DBR studies, which had different starting points and different iterations: one where the design came first, and another where a more ethnographic process of understanding the context was applied first [15]. They do not claim one is better than the other, but argue for what they call the “osmotic mode” of balancing the development of an artefact and the theory generation, and claim that as such DBR is not linear (which can be related to Engeström's [16] critique of DBR as linear, discussed earlier).

In AR and ID a distinction is made between user-centred design and participatory design. The first is an approach that values users, but where users are not directly involved in making the actual design or change process; whereas in participatory approaches, users are co-designers and not only co-creators of the knowledge, but also make co-interpretations [18, 28]. Educational research could certainly work with both user-centred and participatory aspects, and just needs to be explicit about the choices made. What is interesting is that being 100 percent participatory may not always be an adequate solution in educational arenas, when for example the participants are on new ground. This is perhaps best highlighted in the classic Spinuzzi paper [30], where the argument raised is that users do not always know how to think creatively about their own situation and hence cannot be as innovative as experts are. My experience is that when participants are at the same time learning about an area that they know little about, this may very much be the case. It is not only difficult to be creative; for some it is also difficult to leave the comfort zone of “what I usually do”.

It is noteworthy that even though ID and AR researchers start with explorations of user needs and have users participate in the development of the change process and the design of a product/process, the researchers always come with their expertise in a certain domain, and so the area of research is bound to lie within the researchers' practice. For example, I seldom see empirical studies where the solution is abandoned (it happens, but it is rare). In a worst-case scenario, intervention research of any kind may end up investigating large-scale technological eLearning solutions for problems and opportunities where a simple paper poster could have done the job. My point is that this form of bias is seldom discussed in any of the three approaches - DBR, ID or AR.

6 Working with Alternative Designs

When working with people in educational research, whether in small design experiments or larger DBR projects, I have often asked colleagues, professional IT and learning designers, as well as students, whether the project they are presenting is iterating on the best way forward or on the first vision. This question deals with the notion that we as DBR researchers often have a vast knowledge of new technological innovations and their possible impact on learning. We are therefore often quite innovative and come up with interesting suggestions for new pedagogical designs. The field of online learning is, for example, currently exploding due to the ease of making one's own digital productions, whether as instructional material in a flipped-classroom-like setting; as students’ own video productions reflecting on an interesting topic, to be shared with peers; or as synchronous video conferencing for teaching or informal talks. All of these enable online and distributed learning settings that flow into our everyday practice. When we as researchers suggest learning designs that involve these, we change both the learning process and everyday work life. Sometimes these learning designs are great suggestions, which show that there is something important to be done in this area. However, the first vision of something often needs to be reworked into sustainable ways forward. But how do we know if we are working on a vision, or on one of the best ways forward out of the many possible ways to reach that vision? That is, the best way equals the currently best sustainable, scalable and usable design.

One of the suggested criteria for determining whether a design is successful is that there are comparable experiences across participant roles (students and teachers, boys and girls, etc.) and across contexts, and that an exhaustion level has been reached (e.g. [25]). This is, however, only possible with smaller incremental changes to the design, and if what we are comparing is whether version 2 works better than version 1. So how do we define criteria, and find a process for when to abandon a design in favour of a different design, rather than seeking to improve a design (a learning solution or process) which may be better off discarded?

Perhaps researchers are in fact already applying alternative designs, but are not doing so explicitly. It is unclear when reading the many studies (more than the reference list of this paper can cope with, but for examples see [2, 33]). If a design or intervention has changed significantly over time, how many changes can one make before it is no longer the same design? My point here is not that designs cannot change over time (they will), but rather that there seems to be no work on alternative designs early in the DBR process that act out the first vision, and few studies that explicitly deal with the fluctuating design trajectories.

Working incrementally with prototypes in a real context serves great purposes - it was and still is a well-renowned ID and systems development approach. In 2005-8 Bill Buxton gave a series of talks with a clear distinction between sketching and prototyping: where prototyping leads to refining the same idea, sketching was seen as a way of quickly and affordably trying out various ideas. (This discussion, with reference to his talks and book, is also presented in [37].) Trying out various ideas for the original vision has shown me how the vision in projects may be a fair and reasonable response to an opportunity or problem, but that there are sometimes better ways of realizing that vision in concrete designs.

This and similar arguments have found their way into ID models. For example, in the period between two editions of the renowned interaction design book by Preece, Rogers and Sharp, the simple interaction design cycle changed from having its second phase called (Re)Design (in Fig. 6.7 in the 2002 and 2007 editions) to it being named Designing Alternatives (in the 2011 edition, and as Fig. 9.3 in the 2015 edition) [28].

I believe that working with alternative designs, and getting users’ views on these, is one of two suggested mechanisms for helping those of us working in educational contexts and with DBR to get past the desire to, or risk of, confirming existing assumptions. The challenge is to implement this in larger DBR projects with external funding that demands relatively fixed project timelines and milestones. The other mechanism is about rigor in the analysis, which I will discuss in the next section.

7 Theory Generation and Rigor in the Analysis

Many DBR studies give rich accounts of the research methods and tools applied when creating and gathering empirical material (observations, interviews, questionnaires, log-files, etc.). The process of analysis, on the other hand, seems less in focus. Publications include discussions of theories that speak to the same phenomena as seen in the research results, with quotes from students or teachers, but with no indication of how the researchers chose these citations over others, how the various data were compared and worked through, etc. [15, 22]. Of course journals have a maximum paper length, which means that not all processes can be documented. Nevertheless, DBR creates huge amounts of data, and as in any qualitative study, the need to perform and document meaningful data reduction and data displays exists [24].

Like Baskerville and Pries-Heje [5], I have found great use in grounded theory as a means of bringing rigour into the analysis of data in AR projects and as a mechanism for theory generation [36]. Though criticized for being a-theoretical, this is far from the situation today (if it ever was, depending on which strand one follows). In, for example, informed grounded theory, the literature and the knowledge we had prior to commencing the study do not leave us, but the approach does take a deliberate starting point in the data, from which open and axial coding begins [31].

While discussing an educational research study, DBR lifecycles and video analysis, Mike Rook wrote in his blog (quoting Doris Ash) that dialogue progresses discontinuously, and that we need tools to scientifically make sense over time and make connections [29]. Discontinuous discussions and learning processes are certainly part of online distributed educational and competence development projects, and digital analysis software has enabled me to analyse multimodal material that is dispersed and disjoint. The analytical software available today, such as Atlas.ti and NVivo, provides the possibility of doing open and axial coding on the recordings rather than on transcriptions. This allows for mapping concepts, working with displays, and applying theories, without losing the link to the original empirical material. This supports the validity and verification process, bringing visibility for myself and others, who can follow the arguments made in the studies.
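As a minimal illustration of what it can mean to keep codes anchored to the original empirical material, the sketch below represents open codes as time-stamped segments of a recording and axial coding as a grouping of codes into categories. The class names, codes and file names are hypothetical and do not reflect the internal data models of Atlas.ti or NVivo.

```python
from dataclasses import dataclass, field

@dataclass
class CodedSegment:
    recording: str   # file name of the original audio/video recording
    start_s: float   # segment start, in seconds
    end_s: float     # segment end, in seconds
    code: str        # open code assigned to this segment

@dataclass
class Codebook:
    segments: list = field(default_factory=list)
    categories: dict = field(default_factory=dict)  # axial coding: category -> set of codes

    def add_code(self, recording, start_s, end_s, code):
        # Open coding: every code stays anchored to a span of the recording.
        self.segments.append(CodedSegment(recording, start_s, end_s, code))

    def group(self, category, codes):
        # Axial coding: relate open codes under a broader category.
        self.categories[category] = set(codes)

    def evidence_for(self, category):
        # Retrieve the original segments behind a category, preserving the
        # link back to the empirical material.
        codes = self.categories.get(category, set())
        return [s for s in self.segments if s.code in codes]

cb = Codebook()
cb.add_code("team_meeting.mp4", 132.0, 158.5, "resistance to new material")
cb.add_code("team_meeting.mp4", 402.0, 419.0, "reflection on own practice")
cb.group("engagement with the design", ["reflection on own practice"])
print(cb.evidence_for("engagement with the design"))
```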

Nortvig presents a project on video conferencing where the DBR process did not evolve as planned, and where she used grounded theory to align the varied input into categories of mutual and conflicting factors [27]. In this perspective it is the participants in the DBR study who talk about the findings, and they point to theory-generating subjects via their utterances about what works, about experiences, about what motivates and engages, and about what does not work or engage, etc. That is, the participants point to events of interest, and the researcher(s) have the right and responsibility to interpret how these utterances interrelate, and to relate them to the theories that say something relevant about the phenomenon.

Another aspect that is seldom visible in publications on larger DBR projects is how research collaboration, and the alignment of findings between researchers, takes place. It is difficult to see how researchers agree on the aforementioned input from the participants. In ID, the evaluator effect in usability studies has been discussed for almost 20 years. The evaluator is the person who investigates a number of use situations and who, on the basis of that investigation, determines whether there are critical issues in the design. Those issues that are very critical are called major incidents. The evaluator effect refers to the phenomenon that if two or more evaluators investigate the same use situations (often via recorded sessions), they will not identify the same issues as critical or major incidents; even those that they do agree on may not be rated at the same severity level. A large and systematic study published in 2014 walked through previous studies and conducted a major study confirming the evaluator effect [19]. Here it was found that nearly one third of the incidents reported by 19 experienced expert evaluators, and found to be major incidents of high importance by one evaluator, were at the same time reported as minor incidents by another evaluator. The authors found that it is important to have several evaluators on a design project; that evaluators can benefit from consulting local or domain knowledge; that evaluators can consolidate and gain further insights through group processes; that unmoderated (and thus also remote) evaluations resulted in the same evaluator effect (and can be a cost-effective way of gaining insights); and that reliability, in the sense of perfectly reliably reported incidents, is not the objective (but that the process converges through iteration and re-design) [19].
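To illustrate the kind of severity disagreement reported in [19], a small sketch is shown below: given each evaluator's severity rating per incident, it lists the incidents rated as major by at least one evaluator and as minor by another. The evaluators, incidents and ratings are invented for illustration and are not data from the cited study.

```python
# Hypothetical severity ratings (incident -> rating) from three evaluators.
ratings = {
    "evaluator_A": {"login flow": "major", "video playback": "minor", "navigation": "major"},
    "evaluator_B": {"login flow": "minor", "video playback": "minor", "navigation": "major"},
    "evaluator_C": {"login flow": "major", "navigation": "minor"},
}

def severity_disagreements(ratings):
    """Return incidents rated 'major' by at least one evaluator and 'minor' by another."""
    incidents = {i for per_eval in ratings.values() for i in per_eval}
    disagreements = []
    for incident in sorted(incidents):
        levels = {per_eval[incident] for per_eval in ratings.values() if incident in per_eval}
        if {"major", "minor"} <= levels:
            disagreements.append(incident)
    return disagreements

conflicting = severity_disagreements(ratings)
print(f"{len(conflicting)} incident(s) with major/minor disagreement: {conflicting}")
```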

The big issue in this DBR context is not so much that experts within a design science find and prioritize differently; the issue is how we match these findings. Though a group process may be used in DBR, it is not clear how this matching occurs today, neither from the literature nor from the discussions I have with my peers. This entails two perspectives. First, having clear objectives and criteria for what we are valuing in the specific DBR project is pivotal. (For example, in a study of what authors deem effective eLearning when doing empirical development studies (in general, not just DBR), we found that 10 % did not say what effective learning meant to them [26].) Secondly, if we as researchers want to make our arguments robust by combining and doing collaborative analysis, how can we ensure that a group process does not enlarge rather than diminish our blind spots? For example, if we in this process omit the less critical incidents, or if we agree to focus on those that we agree are important, could it be that we are omitting those rare incidents that actually change learning or are vital symptoms of something more crucial? I do not have a clear-cut answer, but just as being aware of the evaluator effect and discussing the reported incidents seems to be a way forward in itself [19], being aware of and explicit about DBR-researcher effects can similarly be important.

8 Unlearning, Capacity Building and Dealing with the Obvious

Part of a well-conducted research process is to make “assumptions and theoretical bases that underlie the work explicit. At times, this has meant defining assumptions and theory before the design work and other times these have evolved out of the work. However, as theoretical claims became apparent, we discussed them as a group and wrote them down on paper even if they were only naïve conjectures.” [3, p.167] There could be a risk that DBR with ID and AR perspectives results in solely localised knowledge that is tied to the intervention or the design. However, results can also be general insights, and sometimes even these naïve conjectures turn out to be important inherent naiveties, which need a push.

Majgaard et al. illustrate this in their ID- and AR-inspired intervention in the domain of mathematics, which led to insights about how children enjoyed and engaged more in the formal learning process when they could experiment with huge numbers with many digits, rather than with smaller and, in the children's eyes, uninteresting numbers [22]. The paradox, in this specific case, was that teachers found that children should not “play around” with such large numbers, as they did not yet grasp their meaning. However, many of us can probably relate to this state from when we were children, or if we have children now. I remember playing with my grandparents’ calculators, making the most outrageous numbers, and trying to get my grandparents to pronounce them for me. And I saw how my children, when they were younger, went through the same phase with much joy, fun and laughter, but also with good conversations about which digit represented the hundreds, the thousands, etc. The type of knowledge that Majgaard et al. extracted could therefore also be criticized for concluding the obvious, common-sense knowledge for people with educational experience, as Dede claims many DBR studies do [10]. Though I understand the reasoning, I also reason that if no one makes these observations explicit, then common practical phenomena may not be translated into what they mean for future learning designs and learning materials. In this case, teachers, developers and publishers of learning materials claim that children are not ready for large numbers and need to learn more about the smaller ones and their structures first, before large numbers can be used in a school context. But in fact the opposite seemed to be the case in this situation. Perhaps children need a dose of both, and the teachers, developers and publishers need to change their practice. As is often the case, I find that the research findings are of course linked to the possibilities that technologies bring to learning, but that they also open up blind spots or difficult issues where we need to “unlearn”.

In competence development projects, where the aim is changed practice in adults’ work life, unlearning and capacity building are very much at play. For example, in one of our larger projects, where science teachers are in focus, the first iterations show that the facilitation or scaffolding of the learning process is vital. We find that some sort of “voluntary but pushed” interaction with the teachers is useful. It makes them explicitly reflect on their own teaching processes, which is necessary if the online learning is to transfer into changed behaviour/changed teaching habits. The online design consists of materials that illustrate and discuss an approach to science teaching based on inquiry and problem-based learning, and the suggested learning model is to work in teams. The material is structured into modules (but can be used in any sequence), and for each module a suggested route is laid out. This route is based on a mix of getting input from the material online, getting input from colleagues, and trying things out in one's own teaching and reflecting on the result. For example, one route could be to: first discuss issue A with a colleague, using pre-defined questions (such as: how do you normally deal with this issue in your classes?); view video B together with your colleague and discuss it; try the approach that video B presents in your next session; and finally, reflect on the results and discuss them with your colleague.

Now, even though the online design suggests that people “walk through this route”, unless one of the researchers is sitting observing the teachers, the teachers will seldom work through the material as suggested. Only a few would remember to talk about their current practice, and some would skip through large parts of the video, text, etc. Even when the researcher was present, as in an early pilot, some teachers would in their discussions with colleagues come up with various strategies for why they should not adopt the material [see 26]. However, we also saw in this early pilot, and in another iteration in spring 2015, that others, because they took the time, reflected on current practice and what could be changed, clearly became inspired by the online material, and this was reflected in their teaching practice. For example, in the spring 2015 iteration, mobile probes gave us insight into what the teachers were doing in their practice without us as researchers actually being there. A side effect of this became clear in the subsequent focus group interviews, where teachers said that getting a question or assignment was like a gentle but also disciplining reminder to act and reflect.

Now, stating that scaffolding and facilitating the learning process is vital in online distributed education may, like the previous example, seem to state the obvious and naïve. The argument is that the DBR process, with an included explorative angle, has aided us in trying out various alternative design solutions for this vision, and has enabled our partners to see the necessity of a scaffold that provides a subtle “voluntary but pushed” interaction to support the unlearning and capacity building process.

9 Framing Findings

This integrative review with a personal narrative element is an argument for an approach to DBR that stays true to the underlying ontologies and epistemologies, and that opens up for being explicit about the factors that influence research results in all phases.

As a reflection on Anderson and Shattuck's headings [2] (shown earlier in this paper), the discussion in the sections above is about getting inspiration from ID and AR. I argue that we could perhaps mature the design-based research approach by reflecting on the consequences, barriers and potentials of working with alternative designs, of focusing on the whole organization, of considering how to gain knowledge about the users and their work context, of considering how to align our empirical data, and even of deciding when to reject designs or theories.

10 Conclusions

This research discussion is situated in online educational projects, where participants are distributed in time and space, and where the learning process expands from a traditional classroom and a single physical context to everyday work and life practices, as in competence development projects. The paper argues that there is a risk of avoiding real-life factors by isolating the real-life intervention to the classroom, and thus mirroring some of the drawbacks in laboratory experimental research that DBR wanted to distance itself from. We may work with these issues by investigating factors such as users’ needs, resistance, organizational relations, and alternative design solutions. Another issue is that as the educational processes are distributed in space and time, and involve many researchers, DBR needs new empirical methods, rigour in the analysis and theory generation phases, and ways to reconcile several researchers’ evaluations. On the other hand, by opening up for the steps that I outline in this paper, there is a risk of adding to the volume of techniques, tools and factors involved, leaving the research vulnerable to even more over-methodologizing and making alignment difficult.

However, even with this risk of over-methodologising and adding to the number of factors involved, I argue that as DBR expands to educational settings that exceed the traditional formal classroom setting, the methods applied must embrace this expansion. I suggest methods such as mobile probes: a method that mixes interviews and cultural probes over a distance, using tasks and questions received at intervals during a full day via the mobile phone. This and similar methods, such as digital narratives and other auto-ethnographic productions made by the users themselves, may represent a way to gain knowledge about what we as researchers do not know about the work context. They also represent an opportunity for the users to reflect on their own learning process and its relation to their practice, and to give insights into the organizational factors as a whole. These methods therefore also scaffold the participants’ learning process, which can of course, from one perspective, be a bias to the result, but can on the other hand also be viewed as an excellent tool for learning, not only a technique for gathering empirical data.

I conclude that the objective is not to make educational research into grand-scale organizational, social or financial studies, nor to make it into full-blown grounded theory or usability studies; rather, the objective is to illustrate that if real-world settings are important, then the organization as a whole is important, and we need to understand or at least reflect upon its role. Such a perspective is also important if the DBR project is not only interested in the project results, but also in how to anchor results and create sustainable theories and solutions. Therefore it is pivotal to start from understanding and working with participants’ needs, and perhaps to clearly identify the success criteria for all parties/stakeholders.

I have presented an argument for working with alternative designs as a way to get past the desire to, or risk of, confirming existing assumptions. Another mechanism concerns rigor in the analysis, and how to align findings between researchers, also when many researchers are participating in the large DBR projects that are emerging today. Here I think an interesting point is to find ways of not omitting those rare incidents that actually change learning or are symptoms of something more crucial.