The Art of Research: A Divergent/Convergent Thinking Framework and Opportunities for Science-Based Approaches
Applying science to the current art of producing engineering and research knowledge has proven difficult, in large part because of its seeming complexity. We posit that the microscopic processes underlying research are not so complex, but are instead iterative and interacting cycles of divergent (generation of ideas) and convergent (testing and selection of ideas) thinking. This reductionist framework coherently organizes a wide range of previously disparate microscopic mechanisms that inhibit these processes. We give examples of such inhibitory mechanisms and discuss how deeper scientific understanding of them might lead to dis-inhibitory interventions at the individual, network and institutional levels.
Keywords: Divergent thinking · Convergent thinking · Science of science · Research teams · Research ecosystem
Classification Codes: O310, O320, O330
Research, the production of new and useful knowledge and products, is an estimated $1.6T/year world enterprise (Grueber and Studt 2014), supporting a community of approximately 11 million active researchers (Haak 2014), and, most importantly, fuelling a large fraction of wealth creation in our modern economy (Jones 1995). As a consequence, facilitating research is an essential function of research and engineering management, with a growing literature on how it can best be organized and facilitated (Damanpour 1991; Damanpour and Gopalakrishnan 2001; Drucker 2011; Hueske et al. 2015). From our perspective ‘in the trenches’ of research in perhaps the most mature and quantitative of the sciences, the physical sciences and engineering, research is still largely practiced as an ‘art’: it is passed down from the experience of one generation to the next and, as described in Beveridge (1957), rests on the intuition of the scientist. We learn how to do research from our professors, managers, mentors and fellow researchers, just as they did from theirs. To be sure, this art is highly evolved, and successful researchers and research managers have developed a deep understanding of how best to practice it, won from hard-earned experience. However, this understanding is intuitive, without the reductionist framework necessary to analyse and decode best practices, improve and replicate those practices and then systematically raise the bar everywhere else.
Paralleling this community of research practice, another community has been growing around a field that might broadly be called the ‘science’ of research (Börner et al. 2010; Fealing 2011; Feist 2008; Sawyer 2011; Sismondo 2011)—the understanding of the human and intellectual processes associated with research. Until now, the two communities (the practitioners or ‘artists’ of research and the ‘scientists’ of research) have advanced with minimal interaction. In principle, however, they can benefit each other enormously. Practitioners of research care deeply about how effective they are, and what better way to improve their effectiveness than to apply scientific principles; while scientists of research care deeply about their scientific understanding of research, and what better way to test that understanding than to try to apply it to improving how research is actually done.
Indeed, in the larger domain of research policy, the National Science Foundation’s Science of Science and Innovation Policy (SciSIP) programme aims to bring together scientists of science policy with artists of science policy (Fealing 2011). But that intersection leaves much on the table. Harnessing the science of science and innovation to improve not just science and innovation policy (the top-down organization of the research enterprise) but the research itself (the bottom-up practice of research) is lower hanging fruit, and more amenable to shorter cycle times on experimentation and learning and hence to faster progress. We believe the opportunity to discover and apply the principles governing the effective practice of research is large, and we call here for policy and funding support for such experimentation and learning. In fact, one piece of the intersection between the bottom-up practice of research and its scientific study is already being aggressively explored in the life sciences, through the National Institutes of Health’s Science of Team Science (SciTS) programme (Börner et al. 2010).
We believe a similar opportunity exists in engineering and physical sciences (EPS), and that the physical sciences and engineering bring unique advantages both as an object of study and as an object of self-improvement. Physical science is the ‘exemplar’ science, arguably the deepest and most advanced; hence we have much to learn from how its research practice enabled it to become so. EPS is the most aggressively mathematized, reductionist and data-driven area of research, a perspective which, if reflexively applied to itself, could be uniquely productive. Though direct usefulness to society is not always the goal of research, when it is, the path to such usefulness is cleaner in EPS, e.g. not confounded by the regulatory processes associated with the life sciences. As the most mature of the sciences, EPS today spans a huge range of research, from small to large scale and from disciplinary to massively interdisciplinary, and thus would provide a severe test of any proposed framework for understanding and improving research. Finally, the physical sciences are hardly finished—many of our most pressing planetary-scale problems—from moving the world towards a sustainable energy diet to interconnecting the world’s people and resources—require solutions rooted in the physical sciences and engineering. We argue for an urgency associated with improving how we do EPS research in service of these problems.
As our own initial step in this direction, Sandia National Laboratories hosted a Forum and Roundtable, which brought together distinguished EPS practitioners of the art of research and experts in the emerging social science of research. The two communities engaged in a broad-based discussion and concluded that there are indeed opportunities for reciprocal benefit. In this chapter, we build on that discussion and outline some of those opportunities. Some of the opportunities overlap with those already identified in the science and engineering management communities. However, because the opportunities emerged from two different and newly interacting communities, some will be new to the traditional science and engineering management communities. Perhaps most importantly, as we organized the opportunities for collaboration between the two communities, divergent/convergent thinking emerged from the Forum and Roundtable as a critical framework.
14.2 Divergent/Convergent Thinking Framework
The framework rests on three overarching foundational assumptions, or hypotheses, that emerged from our Forum and Roundtable.
First: interactive divergent (idea generation) and convergent (idea test and selection) thinking are the fundamental processes underlying research (Cropley 2006). Here, we think of iterative and closely interacting cycles of idea generation followed by idea filtering, refining and retention (Toulmin 1961). Note there is no strict process ordering from divergent to convergent. Divergent and convergent thinking continue to apply as early theories or plans become increasingly more detailed and elaborate because details can be found to be problematic and in need of replacement with alternatives. Further, some gaps only become apparent as theories are elaborated or applied to new situations. For simplicity, we use for these complementary cycles the common terms divergent and convergent thinking, with the understanding that they are related (but not identical) to other terms used in the cognitive, social and computational sciences: idea generation/test, blind variation and selective retention (BVSR) (Campbell 1960; Simonton 2013), abductive versus deductive reasoning, generative versus analytic thinking, discovery versus hypothesis-driven science (Medawar 1963), creativity versus intelligence, thinking fast versus thinking slow (Kahneman 2011), foraging versus sense-making (Pirolli and Card 2005), exploration versus exploitation (March 1991) and learning/growth versus performance/fixed mindsets (Dweck and Leggett 1988).
Second: the quality, quantity and interactivity of divergent and convergent thinking are directly correlated with research impact. Here, we think of divergent and convergent thinking as coupled (interactive) processes, and that the quality and quantity of the individual process as well as the interactivity between the processes determine the quality and quantity of the end products: ideas (new knowledge). The higher the quality, quantity and interactivity of the underlying processes, the higher the quality and quantity of the new knowledge and the higher the quality and quantity of the research impact.
Third: divergent and convergent thinking, like other aspects of research (Narayanamurti et al. 2009), occur throughout the research ecosystem (Hueske et al. 2015). In using the phrase ‘research ecosystem’, we deliberately make the metaphor to ‘biological ecosystem’ and hence to the importance of the multiple levels of such an ecosystem: individuals; groups of individuals; and the environment which sets the reward/cultural boundary/interaction conditions for the individuals and groups of individuals. These three levels also map to the micro-, meso- and macro-scales identified by the Science of Team Science (SciTS) community (Börner et al. 2010). Moreover, divergent and convergent thinking can be inhibited at all of these levels by various ‘inhibitory mechanisms’. The mechanisms can be: cognitive and operating at the level of the individual researcher; social and operating at the level of the research team; or cultural/organizational and operating at the level of the research institution.
Complementary to these three overarching hypotheses are three overarching science opportunities. First, can we measure divergent and convergent thinking? Are there signatures in the thought and communication patterns—of individuals, of teams and of individuals and teams across an institution—that can be associated with the two kinds of thinking? Second, how can divergent and convergent thinking be correlated with research impact? Third, what are the most important mechanisms by which divergent and convergent thinking are inhibited, and are there dis-inhibitory interventions?
The remainder of the chapter is an enumeration of examples of (a) inhibitory mechanisms, which in our experience as physical science and engineering researchers commonly inhibit divergent and convergent thinking at the three levels of the research ecosystem (individuals, teams, institution), along with a brief description of the foundational social science principles underlying those mechanisms and (b) dis-inhibitory interventions (frequently based in new technologies) that might help neutralize those inhibitory mechanisms. Our first goal with these examples is to clarify, and to make plausible, the centrality of divergent and convergent thinking processes to research. Our second goal is to illustrate how common in current practice inhibitory mechanisms are in decreasing the quality and quantity of divergent and convergent thinking. Our third goal is to catalyse new work on these inhibitory mechanisms—to understand how they operate, how they might be measured scientifically and how they might be neutralized through various dis-inhibitory interventions, especially new approaches that are at least partially based in emerging technologies.
14.3 Individual Researchers: Human Cognitive Constraints and Biases
At the individual researcher level, let us consider first divergent thinking, second convergent thinking and third balancing the two kinds of thinking.
14.3.1 Divergent Thinking: Overcoming Idea Fixation Through Engineered Exposure to New Ideas
Divergent thinking, in essence, is the creation of new ideas, mostly, perhaps always (Arthur 2009), through the recombination of pre-existing ideas. However, humans have cognitive constraints and biases (Kahneman 2011), which can make divergent thinking difficult. Prominent among these is idea fixation, an inability to break free from ideas that preoccupy the mind and hold attention (Linsey et al. 2010). In our experience, such idea fixation is a common inhibitory mechanism to effective divergent thinking.
Nonetheless, productive researchers must and do de-fixate themselves at key stages of their research process. In many cases, the de-fixation takes place through serendipitous exposure to new ideas. A famous example is Charles Darwin’s exposure to the ideas of Thomas Malthus (1826), of which he writes in his autobiography ‘In October 1838, fifteen months after I had begun my systematic inquiry, I happened to read for amusement Malthus on Population, and being prepared to appreciate the struggle for existence which everywhere goes on, from long-continued observation of the habits of animals and plants, it at once struck me that under these circumstances favourable variations would tend to be preserved, and unfavourable ones to be destroyed. The result would be the formation of a new species’. (Darwin 1887). Indeed, highly creative institutions value serendipitous exposure to new ideas so highly that they sometimes try to probabilistically enhance informal interactions through engineered physical spaces, e.g.: MIT’s Building 20 (Lehrer 2012), Bell Labs’ ‘Infinite Corridor’ (Tsao et al. 2013), the Janelia Farm Research Campus (2003), Pixar’s Emeryville campus (Catmull and Wallace 2014) and Las Vegas’ Downtown Project (Singer 2014). Break activities such as food in shared spaces can also enhance the cross section for such informal interactions (Emmanuel and Silva 2014).
The engineering of physical spaces to enhance serendipitous exposure to new ideas is thus a possible dis-inhibitory intervention that can help neutralize idea fixation. Here, we propose that engineered exposure to new ideas might be another and more direct possibility. The trick is that the new ideas should be far enough away in analogical space (Chan et al. 2015; Fu et al. 2013) to catalyse shifts in perspective—either because they come from different problem areas of the same discipline, from different disciplines or from different ‘translational’ (science, technology, applications) communities. The ideas should not be so far away in analogical space, however, that conceptual and language gaps are too difficult to bridge.
Such an ‘engineering exposure to optimal-analogic-distance ideas’ dis-inhibitory intervention is certainly practiced in a qualitative way by experienced research managers when they see their staff ‘stuck’. However, advances in modern data analytics, combined with the sheer quantity of digitized knowledge, open up new opportunities for making this practice more quantitative. One opportunity might be scientometric clustering analyses of publications based on bibliographic connectivity. Another opportunity might be lexical clustering analyses based on syntactic/semantic regularities (Mikolov et al. 2013), word-order-based discovery of underlying (‘latent’) constituent topic areas (Blei et al. 2003) and mutual compressibility (Cilibrasi and Vitányi 2005). These analyses could lead to algorithms that go beyond those that power today’s search (Salton 1975) and recommendation (Bennett et al. 2007) engines, by feeding researchers and engineers ideas not just within their comfort zone, but optimally distant from their comfort zone (Fu et al. 2013).
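As a toy sketch of what such an ‘optimal-analogic-distance’ recommender might look like (the function names, the bag-of-words representation and the [0.3, 0.7] distance band are our own illustrative assumptions, not an algorithm from the cited literature), one could filter candidate documents to those at intermediate lexical distance from a researcher’s current focus:

```python
from collections import Counter
from math import sqrt

def cosine_distance(a, b):
    """Cosine distance between two bag-of-words Counters (0 = identical, 1 = disjoint)."""
    dot = sum(count * b[word] for word, count in a.items())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return 1.0 - (dot / norm if norm else 0.0)

def optimally_distant(anchor_text, candidate_texts, lo=0.3, hi=0.7):
    """Keep candidates neither too close to nor too far from the anchor.

    The [lo, hi] band stands in for the hypothesized 'optimal analogic
    distance'; a real system would tune it empirically.
    """
    anchor = Counter(anchor_text.lower().split())
    scored = [(cosine_distance(anchor, Counter(t.lower().split())), t)
              for t in candidate_texts]
    return [t for d, t in sorted(scored) if lo <= d <= hi]
```

A real system would, of course, replace raw token overlap with the richer representations cited above—semantic word embeddings (Mikolov et al. 2013), latent topic models (Blei et al. 2003) or compression-based distances (Cilibrasi and Vitányi 2005).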
14.3.2 Convergent Thinking: Overcoming Sloppy Thinking Through Disciplined Use of Research Narratives
Convergent thinking, in essence, is the testing of newly generated ideas and the selection of those worth pursuing, ideally through logic and analysis. As Linus Pauling once said (Mulgan 2006): ‘the way to get good ideas is to get lots of ideas and throw the bad ones away’. This is, of course, easier said than done, because human cognition is subject to sloppy thinking and errors of logic and analysis. Logic and analysis require hard work, and with finite time and resources, humans have evolved to use heuristics to make decisions which ‘satisfice’ (Simon 1956). In research, however, knowledge builds on knowledge; incorrect knowledge can misguide and negate a pyramid of subsequent research; and the balance between heuristics and logic/analysis must therefore be shifted away from heuristics and towards logic/analysis.
There is thus opportunity for understanding the cognitive science basis for the many heuristics that are essentially inhibitory mechanisms for effective convergent thinking, and for developing dis-inhibitory interventions that enable or force those heuristics to be side-stepped. Of particular interest is an intervention that might be called the ‘research narrative’ intervention. Research narratives—storylines which knit together background, hypothesis, methodology, analysis, findings and implications—are essentially tools for logical thinking. Mathematics is perhaps the highest form of such logical thinking, and in our experience as physical scientists, it can be applied at various research stages to overcome sloppy thinking.
Research narratives are obviously important at the end of a research project, when a paper is being written for the scientific community and posterity. It is at this stage that many loose ends are discovered that cause a revisiting of the work itself, or at least of its interpretation. Antoine Lavoisier, for example, famously did not ‘discover’ the role of oxygen in combustion until he began to piece together the research narrative associated with his experiments (Cole 1992). And Paul Dirac did not ‘discover’ antiparticles until he derived, and then followed to its logical conclusion, his equation combining quantum theory and special relativity to describe the behaviour of an electron moving at relativistic speeds (Dirac 1928).
But research narratives are just as important at the beginning of a project. Emerging cognitive science suggests that narrative and stories are the evolutionary optimal tools for communicating not only with others but even with ourselves (Gottschall 2012). A coarse storyboard of the title, abstract, figures and key references of the anticipated outcome of a project forces clarification of many of its aspects—including those that have been hypothesized (Simonton 2013) to be critical sub-components of creative ideas, such as originality, perceived utility and ‘surprisingness’. A project whose narrative does not hang together is in danger of Wolfgang Pauli’s famously blistering criticism: ‘What you said was so confused that one could not tell whether it was nonsense or not’ (Peierls 1960).
Not everyone is ‘good’ at crafting or even self-assessing their own research narratives, though. There is thus potential for the research narrative intervention to be augmented by modern data analytics. For example, computer algorithms could dispassionately evaluate research narratives just as they are now dispassionately evaluating essays in academic writing courses (Foltz et al. 1999). Or, perhaps more likely, a combination of machines and humans might someday efficiently and accurately evaluate research narratives via machine curation of Yelp-like peer reviews.
Note that this research narrative dis-inhibitory intervention makes visible the close interplay between divergent and convergent thinking. As one creates a research narrative, one often finds that the conclusion one anticipates is not supported by the narrative; instead, as in the Lavoisier example above, another conclusion is. When the research narrative makes heavy use of mathematics, this happens often, as in the Dirac example above: mathematics or simulations, even when guided by a conclusion that intuition has presaged, often lead instead to a different and surprising conclusion. The research narrative has eliminated one conclusion, thus enhancing the effectiveness of convergent thinking, but it has also unearthed a possible new conclusion, thus enhancing the effectiveness of divergent thinking. Many researchers are effective at one precisely because they are so effective at the other: those who can, in their heads, analyse and eliminate bad ideas (i.e. who are good at convergent thinking) can free themselves from keeping those ideas in mind, and can spend more time thinking up new ideas (divergent thinking).
14.3.3 Balancing Divergent and Convergent Thinking
Divergent and convergent thinking are by themselves difficult, but perhaps even more difficult is our ability to know when to switch between the two.
On a large scale, the history of science and innovation is replete with scientists and engineers who were on the wrong track and would have been more productive switching from convergent to divergent thinking (Isaacson 2007), at least in terms of the problem spaces they were considering. Albert Einstein famously spent the last 30 years of his life on a fruitless quest for a way to combine gravity and electromagnetism into a single elegant ‘unified field’ theory. But the history of science and innovation also has its share of scientists and engineers who abandoned or postponed attacking certain problem spaces, which later they thought they should have attacked. As Edwin Jaynes, a physicist who made many fundamental contributions to statistical mechanics and Bayesian/information theory, poignantly expressed it: ‘Looking back over the past forty years, I can see that the greatest mistake I made was to listen to the advice of people who were opposed to my efforts. Just at the peak of my powers I lost several irreplaceable years because I allowed myself to become discouraged by the constant stream of criticism from the Establishment, that descended upon everything I did […]. The result was that my contributions to probability theory were delayed by about a decade, and my potential contributions to electrodynamics—whatever they might have been—are probably lost forever’ (Jaynes 1993). Thus there is a critical dilemma: when researchers are confronted with something unexpected, they must choose whether to stay the course (convergent thinking) or to treat the unexpected as an opportunity to reconsider possibilities (divergent thinking).
To some extent, people gravitate towards thinking styles with which they are most comfortable, and researchers are no different. Those who are more comfortable thinking divergently will tend to reconsider too soon; those more comfortable thinking convergently will tend to stay the course too long; and perhaps a rare few will be comfortable doing neither [as in F. Scott Fitzgerald’s famous quote ‘The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function’ (Fitzgerald 2009)]. In fact, because of our modern education system’s emphasis on deducing single answers using logical thinking, modern researchers might be biased towards convergent thinking. To avoid this bias, some institutions that value creativity now deliberately hire on the basis not of grade point average (GPA) and scholastic aptitude test (SAT) scores, but of more balanced thinking styles (D’Onfro 2014).
There is thus opportunity to understand and engineer strategies to compensate for intrinsic biases towards either divergent or convergent thinking. For example, at a qualitative level, the research narratives discussed earlier might not just be powerful tools for logical, convergent thinking, but might also be powerful tools for understanding when to cycle between divergent and convergent thinking. If the train of thought that follows from one or more ideas does not hold up to the cold logic (or mathematics) of the research narrative, then very likely new ideas and divergent thinking are needed.
Or, for example, at a quantitative level, some of the lexical analytical techniques mentioned earlier, applied in real time to evolving research narratives and other generated knowledge trails, might be able to discover not only whether divergent or convergent thinking is happening, but whether divergent or convergent thinking is appropriate for the stage of the problem at hand. In the language of chemical engineering, one would like to understand which kind of thinking is rate limiting, and therefore which to focus attention on, at a given instant in time.
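As a minimal illustration of such a lexical signal (a deliberately crude proxy of our own devising, not a validated instrument), one could track the fraction of previously unseen vocabulary across successive documents in a knowledge trail; bursts of new vocabulary would crudely suggest divergent phases, while sustained low novelty would suggest convergent refinement:

```python
def novelty_trace(documents):
    """Fraction of previously unseen vocabulary in each successive document.

    A high novelty fraction is a crude proxy for divergent phases (new
    concepts entering the narrative); a low fraction suggests convergent
    refinement of existing ones. Real analyses would use richer models
    (topic models, embeddings) than raw token overlap.
    """
    seen = set()
    trace = []
    for doc in documents:
        words = set(doc.lower().split())
        new = words - seen
        trace.append(len(new) / len(words) if words else 0.0)
        seen |= words
    return trace
```

Correlating such traces with stage-appropriate thinking, and ultimately with research outcomes, is exactly the kind of open empirical question the framework poses.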
14.4 Research Teams: Social Constraints and Biases
At the research team level, let us similarly consider first divergent thinking, second convergent thinking and third balancing the two.
14.4.1 Divergent Thinking: Overcoming Over-Reliance on Strong Links by Exploiting Weak Links
First, we state the obvious: research teams have the potential to think much more divergently than individuals can. Groups can draw upon the diverse ideas of individuals to create new ideas. And, because much of the knowledge of individuals is tacit (Polanyi 1967) and not accessible in formal codified form, closely interacting groups which can share this tacit knowledge informally can be yet more productive.
However, research teams also bring inefficiencies to divergent thinking. When individuals on a team become too strongly socially linked to each other and have become familiar with each other’s knowledge domains and ways of thinking, they no longer serve as sources of new ideas to each other. Moreover, homophily is common in social networks: we seek those who think as we do and avoid those who do not (McPherson et al. 2001). For divergent thinking, exposure to the less familiar is important, and thus weak links (Granovetter 1973) in one’s social network can be more powerful than strong links. Creativity has been found to be greatest when the relationship distance between collaborators is intermediate—neither too close nor too far. For example, using statistical methods to examine the success of Broadway musical artists, Uzzi and Spiro (2005) found that creativity suffered when artists worked in a social network that was either overly familiar or overly unfamiliar to them.
Thus, an over-reliance on strong links in one’s social network, or an over-reliance on the members of one’s research project team, can be thought of as an inhibitory mechanism to effective divergent thinking at the research team level. The obvious dis-inhibitory intervention is to exploit weak links in the team’s social network, and to expose members of the research project team to weakly linked new people who bring new ideas. This exposure to new people can of course be done serendipitously, as discussed above at the individual researcher level. More interestingly, it can also be self-engineered. A famous example is the Wright Brothers, who formed their own nuclear team, but were also in constant contact with like-minded experts from all over the world whose ideas they incorporated into their own airplane designs (Sawyer 2011).
But now the exploitation of weak links can also be externally engineered, through the use of data analytics to identify not just ideas that are an optimal analogic distance away from the current team’s ideas, but people who are an optimal analogic distance away from the people on the current team. Because social trust helps foster communication, one might even imagine biasing the matchmaking towards people who are not just an optimal analogic distance away, but who are also socially linked in some fashion to the existing team. This would be similar to the matchmaking of people for romantic purposes, but here for the purpose of optimal idea cross-fertilization. The world’s pool of people and ideas is vast, which makes it both rich in opportunity and difficult to exploit; engineered links are therefore key.
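A sketch of what such weak-link matchmaking might look like (the data structures and scoring rule here are our own illustrative assumptions): given a topical-distance function and a co-authorship graph, one could suggest outsiders at intermediate topical distance who have no direct tie to the team but do share an acquaintance with it:

```python
def suggest_collaborators(team, topic_dist, coauthors, lo=0.3, hi=0.7):
    """Suggest researchers at intermediate topical distance who are weakly
    linked to the team: no direct co-authorship tie, but a shared co-author
    bridges the gap.

    `topic_dist(person)` returns a 0-1 topical distance from the team's
    work, and `coauthors` maps each person to the set of their co-authors;
    both are assumed inputs that a real system would derive from
    publication data.
    """
    team_set = set(team)
    suggestions = []
    for person, neighbors in coauthors.items():
        if person in team_set:
            continue
        d = topic_dist(person)
        directly_linked = bool(neighbors & team_set)
        bridged = any(neighbors & coauthors.get(m, set()) for m in team_set)
        # weak link: no direct tie, but a mutual acquaintance exists
        if lo <= d <= hi and not directly_linked and bridged:
            suggestions.append((d, person))
    return [p for _, p in sorted(suggestions)]
```

The `bridged` condition encodes the trust-preserving bias described above: candidates are reachable through the team’s existing network rather than being complete strangers.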
14.4.2 Convergent Thinking: Balancing Impermeable Teams with Permeable Collaborations
Just as for divergent thinking, research teams can be much more effective at convergent thinking than individuals can be. Convergent thinking requires logical deductive thinking, and the deeper and more first-principled that thinking, the more accurate and often the more surprising its conclusions (Lucibella and Blewett 2013). In studies of lab groups, other researchers in the lab group regularly found and fixed reasoning errors made by individual researchers (Dunbar 1997). Further, multiple minds can span the expertises required to rigorously test ideas. Consider, for example, the capabilities contained in the Joint Center for Energy Storage Research, a $120M (over 5 years) effort to achieve revolutionary advances in battery performance. Because the work spans chemistry, materials science, physics, computational theory and nanoscience, it would be impossible for an individual researcher to span these areas.
There is overhead, however, associated with research teams that contain multiple capabilities and expertises: the teams must contain the right capabilities and expertises. If they do not, then the effectiveness of idea testing and selection (convergent thinking) will be severely compromised. It is not uncommon in our experience for a team to be addressing a problem or testing an idea by very cleverly using the capabilities and expertises that exist within the team, while not using a capability or expertise that could be extremely helpful but does not exist within the team. The team may of course be unaware of the helpful outside capability or expertise (you do not know what you do not know), but just as often it is aware and simply does not have the flexibility to reconfigure itself so as to add it. The team might have been formed at a time when the problem required a certain set of capabilities and expertises, but the problem has evolved and the necessary capabilities and expertises are now different. Because of funding constraints and/or social glue/loyalty, it proves difficult to add people to and subtract people from the team as needed. This impermeability of teams to compositional reconfiguration is in our experience a common inhibitory mechanism to effective convergent thinking: teams that are highly impermeable pay a non-agility cost and are less able to accommodate, in real time, quickly evolving goals and approaches.
The logical dis-inhibitory intervention is to rebalance away from impermeable teams and towards permeable collaborations. Self-organized collaborations, in which researchers choose on their own with whom to collaborate for the purpose of solving a particular problem of interest at a particular instant in time, can evolve quickly as new problems arise. Paul Erdös, the most prolific mathematician in history, was famous for his peripatetic always-on-the-lookout-for-the-next-problem-and-the-next-collaborator style: ‘Erdös structured his life to maximize the amount of time he had for mathematics […]. In a never-ending search for good mathematical problems and fresh mathematical talent, Erdös crisscrossed four continents at a frenzied pace, moving from one university or research center to the next. His modus operandi was to show up on the doorstep of a fellow mathematician, declare, “My brain is open”, work with his host for a day or two, until he was bored or his host was run down, and then move on to another home’. (Hoffman 1998).
Understanding how to balance impermeable teams and permeable collaborations is non-trivial, however. The question is analogous to Ronald Coase’s famous economics puzzle—when do we need firms as opposed to free agents interacting in a marketplace? Coase’s answer had to do with transaction costs: firms (and, for us, research teams) reduce certain kinds of transaction costs amongst the employees of the firm that are difficult to reduce when people just interact with each other as completely free agents. Perhaps the most important transaction cost is the building of trust—trust that your firm-mates (and for us, research teammates) will get done what they promise to get done, trust that you have a job and retirement security without having to always be fending for yourself, trust in a basic social safety net. If we take this point of view, then we need teams when impermeability and hierarchy are a net plus—when you need a division of labour to execute efficiently and when the problem/solution spaces are relatively mature hence enabling teams to move forward productively without a need for continual reconfiguration. And you do not want fixed teams when permeability and self-organization are a net plus—when you need agility in idea generation/test and when the problem/solution spaces are relatively immature for the problems at hand and still evolving rapidly.
A major challenge, and opportunity, is thus to learn how to match the right degree of impermeability/permeability with the maturity/immaturity of the research problem space being attacked. This will require learning how to measure the degree of impermeability or permeability that characterizes research teams and collaborations, learning how to measure the degree of maturity or immaturity of the research problem space, and then learning how the match or mismatch between the two correlates with research success. Emerging sociometric tools based on physical and electronic communications (Waber 2013) will likely play a role in effective measures of permeability, and data harvested from project management software could also be used to look across teams to understand the relative maturity of the research.
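As a concrete, if deliberately simplified, illustration of what such a permeability measure might look like, the sketch below computes, for each researcher, the fraction of their communication ties that cross team boundaries. The names, team labels and edge-list format are hypothetical; real sociometric data (Waber 2013) would require weighting by communication frequency and time.

```python
from collections import defaultdict

def permeability(edges, team_of):
    """For each person, the fraction of their communication edges that
    cross team boundaries: 0.0 = fully impermeable, 1.0 = fully permeable."""
    cross = defaultdict(int)
    total = defaultdict(int)
    for a, b in edges:
        for person, other in ((a, b), (b, a)):
            total[person] += 1
            if team_of[person] != team_of[other]:
                cross[person] += 1
    return {p: cross[p] / total[p] for p in total}

# Toy data: two teams, with one boundary-spanning tie between them.
team_of = {"ana": "T1", "bo": "T1", "cy": "T2", "di": "T2"}
edges = [("ana", "bo"), ("bo", "cy"), ("cy", "di")]
scores = permeability(edges, team_of)
```

A score near 0 marks a researcher embedded in an impermeable team; a score near 1 marks a boundary-spanning collaborator of the self-organized Erdös kind.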
14.4.3 Distributing Divergent and Convergent Thinking Between Individuals and Teams
Most importantly, research teams have more options for accomplishing divergent and convergent thinking than do individuals. Teams are composed of individuals. Hence, if some aspect of thinking is best done by a team or by individuals, teams can in principle assign it to the appropriate level. For example, if individuals are relatively stronger at convergent thinking while teams are relatively stronger at divergent thinking, it could be optimal for divergent thinking to be performed more at the team level, but for convergent thinking to be performed more at the individual level (Shore et al. 2014). To take advantage of this strategy, however, it will be necessary to first understand more deeply the relative strengths and weaknesses of individuals and teams at convergent and divergent thinking, for what types of problems, in what situations and environments and using what interaction tools.
Teams also have more options in how their individual members are rewarded. An individual researcher working alone bears the full consequences of risky, too-divergent thinking, but within a team could actually be rewarded for taking on such risk. An example from nature is the scout ant: individual scouts suffer a high rate of demise, but their ‘divergent thinking’ is vital to the colony because it supports the survival of current and emerging colonies (Wilson 2012).
Countering the above, research teams have fewer options for oscillating back and forth between divergent and convergent thinking during the life cycle of a research project. They inherently have more inertia, and thus the decision of what kind of thinking to emphasize and at what level, individual or team, is more serious.
For all the above reasons, team leadership is crucial. Throughout the life cycle of a project, a team will move through various quadrants of individual/team divergent/convergent thinking, with opportunity for the team and its leader to optimally allocate resources across those quadrants. As mentioned in the Introduction, successful leaders of research teams have developed a deep understanding of how to do this, won from hard-earned experience. However, this understanding is intuitive, without the tools necessary for quantitative analysis. Thus, we call here for more measurements and greater use of modern data analytics to analyse those measurements. For example, can we quantify: where in its life cycle a research project is; the degree to which divergent or convergent thinking is needed; and how well the team’s current composition and cognitive constructs (Dong 2005) match the desired degree of divergent or convergent thinking? Just as the ‘quantified self’ movement (Swan 2013) seeks to use physical technology to monitor the manifold pulses of a person’s daily life to optimize health and productivity, a ‘quantified research team’ movement might seek to use data analytics technology to monitor the manifold pulses of a research team’s daily life (Waber 2013), to better match the team’s composition and organization to the research challenge at hand, and ultimately to optimize the team’s health and productivity.
Understanding how to optimize the balance between individual and team, and between divergent and convergent thinking, might also borrow from advances in emerging models for information foraging (Pirolli and Card 2005). For example, if useful information is ‘patchy’, a forager might first seek to look broadly for useful patches, and then focus in on a few of the most useful patches. Or, for example, the risk associated with not finding a patch in a particular time horizon, or the amount of resources allocated for the foraging, might determine which stage of the foraging is best done by individuals or by a team.
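The patch-foraging intuition can be made concrete with a minimal marginal-value-style calculation. The gain function and its parameters below are illustrative assumptions, not fitted to any data; the point is only that, with diminishing returns inside a patch, the residence time that maximizes the long-run gain rate grows as the travel (search) time between patches grows.

```python
def gains(t, g0=10.0, decay=0.7):
    """Cumulative gain after t time steps in one patch (diminishing returns)."""
    return sum(g0 * decay**k for k in range(t))

def best_residence(travel_time, horizon=20):
    """Residence time maximizing long-run rate: gain / (travel + residence)."""
    rates = {t: gains(t) / (travel_time + t) for t in range(1, horizon)}
    return max(rates, key=rates.get)

# Cheap travel between patches favours skimming; costly travel favours
# staying longer and exploiting each patch more deeply.
short = best_residence(travel_time=1)
long_ = best_residence(travel_time=10)
assert short <= long_
```

In research terms: when promising problem 'patches' are easy to find, rapid sampling (more divergent search) pays; when they are costly to find, deeper exploitation of a found patch (more convergent focus) pays.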
14.5 Research Institutions: Cultural Constraints and Biases
At the research institution level, there are outsized opportunities for optimization, because it is this level that defines the culture and reward system from which individual researchers and research teams are drawn and within which they engage in divergent and convergent thinking.
14.5.1 Divergent Thinking: Balancing a Culture of Performance with a Culture of Learning
Divergent thinking, whether by individual researchers or research teams, takes place within an institutional culture. Institutional culture, in turn, evolves so that the institution self-preserves—those institutions whose cultures do not evolve in this manner go extinct, leaving behind those whose cultures did evolve in this manner. Since the quickest way for an institution to not self-preserve is to not meet its existing commitments, performance to those commitments almost always becomes a central part of its culture. A culture of conservatism—making commitments which are conservative, then meeting those commitments through approaches which are conservative—is natural. This can be the case even for research institutions that aim to be at the forefront of knowledge production, hence ostensibly tackling tough and perhaps intractable problems that require new ideas and divergent thinking.
Using the language popularized by Dweck and Leggett (1988), the culture rewards a ‘performance’ rather than a ‘learning’ mindset. By a performance mindset we mean a mindset that focuses on the immediate outcome or achievement as an external judgment on abilities which are perceived to be ‘fixed’. By a learning mindset we mean a mindset that focuses not so much on success or failure but on the opportunity to learn and to enhance abilities which are perceived to be capable of ‘growth’. As Mervin Kelly, former Director of Research at Bell Labs who hired Shockley, Bardeen, Baker and many other great scientists and engineers, would say to new recruits: ‘Ask not what you know, but what you don’t know’ (Narayanamurti 2016).
A culture of conservatism and performance is thus a strong mechanism for inhibiting divergent thinking, for avoiding the risks associated with such thinking, and most importantly for avoiding those risks throughout the sequence of problem and solution spaces that are typically explored in research. First, there is the initial problem space: what is the most interesting, challenging yet potentially solvable problem to propose to tackle in the first place? Conservatism dictates proposing too-safe problems whose solutions are already in hand. Second, there is the solution space: what is the deepest and most elegant solution with the widest fan-out even beyond the immediate problem at hand? Conservatism dictates proposing too-safe solutions that solve the immediate problem but little else. Third, there is the revisit of the problem space: as the work proceeds, are there even more important alternative problems that intermediate results promise to solve, possibly with some deviation from the original work plan? Conservatism dictates not deviating from the original work plan and blinding oneself to alternative problems. In other words, the inhibitory mechanisms of a performance culture act insidiously on divergent thinking in both the problem and solution spaces.
A well-known example of how valuable it can be to expand the problem space to accommodate solutions that do not match the original problem space is 3M’s discovery of its Post-It Notes. The origin was a desire to create super-strong adhesives for use in the aerospace industry. A super-weak but pressure-sensitive adhesive was created en route and was initially dismissed. But by an expansion of the problem space beyond the aerospace industry into the consumer office-supply market, the new adhesive, albeit with some deviation from the original work plan, became the foundation for one of the best-selling office-supply products ever. Even in a culture as biased towards learning as 3M’s, however, the twists and turns of this expansion of the problem space were considerable, and Post-It Notes barely made it to market. In a culture biased towards performance, it would have been that much less likely that Post-It Notes would have come to market.
The obvious dis-inhibitory intervention to a culture that overemphasizes performance is to rebalance towards a culture of learning. As mentioned above, this rebalancing is in a direction opposite to that which enables the institution as a whole to preserve itself in the short term. However, it is in a direction consistent with that which would enable the institution to preserve itself in the long run, and thus a direction which enlightened management might pursue. The institution could do this by rewarding reasonable risk-taking and dis-rewarding unreasonable risk aversion, so as to reward the choosing of optimally challenging problems and solutions. Tools can be created for assessing and managing risk portfolios in research, just as they exist in large numbers in financial planning. An institution that focuses on standard academic metrics such as GPA and SAT scores when hiring is selecting for strength in convergent rather than divergent thinking (Chamorro‐Premuzic 2006). New measures should be added that also assess excellence in divergent thinking (D’Onfro 2014). The history of science and innovation is replete with extremely productive researchers who did not do well in school, and with less productive researchers who did extremely well in school: in research, divergent thinking is just as necessary as convergent thinking.
14.5.2 Convergent Thinking: Balancing a Culture of Consensus with a Culture of Truth
Convergent thinking also takes place within an institutional culture. As discussed just above, though institutional culture tends to value convergent over divergent thinking, there are nonetheless many ways in which institutional culture can inhibit effective convergent thinking as well. One that we see often in our personal experience is what might be called a culture of consensus: a culture that values social and intellectual harmony and punishes social and intellectual disharmony. Such biases towards harmony are easy to understand and probably evolved in humanity’s prehistory for good reason: there are many situations in which quick consensus, conflict avoidance and social cohesion matter more than accuracy. Research, with its premium on accuracy, is likely not among those situations. Consensus that is socially driven, rather than the product of deep intellectual debate and intermediate stages of disharmony, achieves not truth but groupthink, in which groups converge prematurely and inaccurately on false or less-good ideas (De Dreu et al. 1999).
An institutional culture of consensus is thus a strong inhibitory mechanism to effective convergent thinking. The obvious dis-inhibitory intervention is to rebalance towards what might be called a culture of truth. Such a culture is non-trivial to achieve, of course, because truth and the path to truth are uncomfortable. But a great research culture is probably not one in which individuals are ‘comfortable’. Truth requires individuals and teams to go beyond their intellectual comfort zones into Kuhn’s ‘essential’ (Kuhn 2012) or Senge’s ‘creative’ (Senge and Suzuki 1994) tension; divergent and convergent thinking do not so much cause comfort as they cause stress. Indeed, just as one would expect elite athletes and athletic teams to experience significant levels of stress in the game, individuals and research teams are also strained by the demands of innovative research solutions.
Not all stress is created equal, however. Cognitive and intellectual stress is essential, and one might argue the more the better. Existential stress (the threat of losing one’s job or funding) and social stress (the threat of losing one’s social standing) are not directly essential, and one might argue the less the better. The opportunity here is thus twofold. A first opportunity is to understand how to measure these two types of stress. For example, could text analytics (Pennebaker 2014) extract the magnitudes of the two types of stress either from single emails or from massive numbers of internal emails within an institution, thereby ‘taking the pulse’ of the institution? A second opportunity is to engineer changes in the research environment that maximize the first type of stress and minimize the second. Here, though, one might imagine two possibilities.
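As a deliberately crude sketch of the first opportunity, the toy scorer below counts category keywords per 100 words of an email. The word lists are invented purely for illustration and are no substitute for the validated dictionaries of the kind Pennebaker (2014) describes:

```python
# Hypothetical keyword lists; a real system would use validated
# psycholinguistic dictionaries, not hand-picked words.
EXISTENTIAL_SOCIAL = {"funding", "layoff", "deadline", "blame", "reputation"}
COGNITIVE = {"hypothesis", "inconsistent", "puzzle", "rethink", "why"}

def stress_profile(text):
    """Keyword rate per 100 words for each stress category (toy measure)."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    n = max(len(words), 1)
    return {
        "existential_social": 100 * sum(w in EXISTENTIAL_SOCIAL for w in words) / n,
        "cognitive": 100 * sum(w in COGNITIVE for w in words) / n,
    }

p = stress_profile("Why is the data inconsistent with our hypothesis?")
```

Aggregated over many internal emails, even such a crude signal could in principle 'take the pulse' of an institution's balance between the two stress types.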
On the one hand, if existential/social stress were independent of cognitive/intellectual stress, one might seek to simply ‘zero out’ the first, e.g. by creating ‘secure bases’ (Bowlby 2005) for research funding and social rewards oriented towards teams and not individuals. One might make more disciplined use of strategies [including the Delphi method or its variants (Okoli and Pawlowski 2004) and certain kinds of analogy use (Paletz et al. 2013)] for enhancing the task or technical conflict necessary for unveiling logical inconsistencies, while minimizing the social conflict whose avoidance would otherwise foster groupthink. If, as has been speculated (Buchen 2011), those with some degree of Asperger’s, who are less aware of and less sensitive to social cues, are also least susceptible to groupthink and thus often the deepest and most first-principles thinkers, one might imagine a more disciplined harnessing of these thinkers (Silberman 2015).
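A numerical caricature can illustrate why statistically mediated, anonymous feedback of the Delphi kind (Okoli and Pawlowski 2004) dampens spread without face-to-face social pressure. The update rule below, in which each panellist revises part-way towards the group median, is a toy assumption rather than the actual protocol, which also feeds back anonymized reasoning, not just statistics:

```python
from statistics import median

def delphi_round(estimates, weight=0.5):
    """One anonymous round: each panellist sees the group median and
    revises their estimate part-way towards it."""
    m = median(estimates)
    return [e + weight * (m - e) for e in estimates]

def delphi(estimates, rounds=3):
    for _ in range(rounds):
        estimates = delphi_round(estimates)
    return estimates

# An outlying estimate is drawn in over successive rounds, while the
# median itself (here 20.0) is left undistorted by social dominance.
final = delphi([10.0, 20.0, 80.0])
```

The spread shrinks each round, yet no individual is ever exposed to who held which view, which is precisely the social-conflict-minimizing property the text describes.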
On the other hand, if existential/social stress is not independent of, but is an inseparable motivator of, cognitive/intellectual stress, then one might devise strategies which harness existential/social stress while controlling its degree. For example, just as some aspects of war can be gamified (Macedonia 2002), some aspects of research might also be gamified (Deterding et al. 2011; McGonigal 2011)—thus harnessing, but not letting get out of hand, existential/social stress.
14.6 A Vision for the Future
Research, the manufacture of knowledge and products, is complex and practiced largely as an art. As science (understanding) and technology (tools) continue to be developed and applied to the manufacture of other goods and services, it is natural, perhaps inevitable, that they will also be applied to research. From our perspective as physical scientists, we call here for this application to begin seriously in the physical sciences, just as it has already begun in the life sciences.
Drawing on a recent workshop of EPS researchers/managers and social scientists of science, we have posited in this chapter a framework in which the microscopic processes underlying research are iterative and closely interacting cycles of divergent (generation of ideas) and convergent (testing and selecting of ideas) thinking processes. We anticipate that an improved understanding of these microscopic processes, and of how various ‘inhibitory mechanisms’ can prevent them from being executed effectively, can ultimately help us design and engineer appropriate ‘dis-inhibitory interventions’. Among the commonplace inhibitory mechanisms we identified, along with corresponding potential dis-inhibitory interventions, were: overcoming idea fixation through engineered exposure to new ideas; overcoming sloppy thinking through disciplined use of research narratives; overcoming over-reliance on strong links by exploiting weak links; balancing impermeable teams with permeable collaborations; balancing a culture of performance with a culture of learning and balancing a culture of consensus with a culture of truth.
Finally, we have focused in this chapter only on the direction from ‘science to art’, in which the emerging science of research is harnessed to improve the art of research. Ultimately, even greater opportunity will be unleashed when the other direction from ‘art to science’ is also exercised simultaneously and synergistically—when improvements in how research is actually done are used to test our understanding of how research is done. Within the physical sciences, such a close and bidirectional interaction between science (understanding) and art (technology and tools) has resulted in well-documented spirals of mutual benefit (Brooks 1994; Casimir 2010; Narayanamurti et al. 2013; Tsao et al. 2008). One can anticipate close bidirectional interaction between the science and art of research to result in similar spirals of mutual benefit. If we narrow the type of research to be examined and improved to the physical sciences and engineering, the analogy would be to Asimov’s Foundations 2 and 1 (Asimov 2012), except, rather than warring, engaging in a complementary partnership.
GEA, ARS and JYT acknowledge support from Sandia National Laboratories. Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA0003525.
- Arthur, W. B. (2009). The nature of technology: What it is and how it evolves. New York, N.Y.: Simon and Schuster.Google Scholar
- Asimov, I. (2012). Foundation’s edge. New York, N.Y.: Random House LLC.Google Scholar
- Bennett, J., Lanning, S., & Netflix, N. (2007). The Netflix prize. Paper presented at the In KDD Cup and Workshop in conjunction with KDD.Google Scholar
- Beveridge, W. I. B. (1957). The art of scientific investigation. New York, N.Y.: WW Norton & Company.Google Scholar
- Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent dirichlet allocation. The Journal of Machine Learning Research, 3, 993–1022.Google Scholar
- Bowlby, J. (2005). A secure base: Clinical applications of attachment theory (Vol. 393). UK: Taylor & Francis.Google Scholar
- Casimir, H. B. G. (2010). Haphazard reality: Half a century of science. Netherlands: Amsterdam University Press.Google Scholar
- Catmull, E., & Wallace, A. (2014). Creativity, Inc.: Overcoming the unseen forces that stand in the way of true inspiration. New York, N.Y.: Random House LLC.Google Scholar
- Cole, S. (1992). Making science: Between nature and society. USA: Harvard University Press.Google Scholar
- D’Onfro, J. (2014, July 12). Here’s why Google stopped asking bizarre, crazy-hard interview questions. Retrieved from http://www.businessinsider.com/google-hiring-practices-interviews-2014-7.
- Damanpour, F. (1991). Organizational innovation: A meta-analysis of effects of determinants and moderators. Academy of Management Journal, 34(3), 555–590.Google Scholar
- Darwin, C. (1887). The Autobiography of Charles Darwin. Barnes & Noble Publishing.Google Scholar
- Deterding, S., Dixon, D., Khaled, R., & Nacke, L. (2011). From game design elements to gamefulness: Defining gamification. Paper presented at the Proceedings of the 15th International Academic MindTrek Conference. Envisioning Future Media Environments.Google Scholar
- Drucker, P. F. (2011). Technology, management, and society. USA: Harvard Business Press.Google Scholar
- Emmanuel, G., & Silva, A. (2014). Connecting the physical and psychosocial space to Sandia’s mission. Sandia Report, SAND2014-16421.Google Scholar
- Fealing, K. (2011). The science of science policy: A handbook. USA: Stanford University Press.Google Scholar
- Feist, G. J. (2008). The psychology of science and the origins of the scientific mind. USA: Yale University Press.Google Scholar
- Fitzgerald, F. S. (2009). The crack-up. USA: New Directions Publishing.Google Scholar
- Foltz, P. W., Laham, D., & Landauer, T. K. (1999). Automated essay scoring: Applications to educational technology. Paper presented at the World Conference on Educational Multimedia, Hypermedia and Telecommunications.Google Scholar
- Gottschall, J. (2012). The storytelling animal: How stories make us human. USA: Houghton Mifflin Harcourt.Google Scholar
- Grueber, M., & Studt, T. (2014). 2014 global R&D funding forecast. R&D Magazine, 16, 1–35.Google Scholar
- Haak, L. L. (2014, May 24). A vision to transform the research ecosystem. Retrieved from http://www.editage.com/insights/a-vision-to-transform-the-research-ecosystem.
- Hoffman, P. (1998). The man who loved only numbers. New York, N.Y.: Hyperion.Google Scholar
- Isaacson, W. (2007). Einstein: His life and universe. New York, N.Y.: Simon and Schuster.Google Scholar
- Janelia Farm Research Campus: Report on Program Development. (2003). Retrieved from http://www.janelia.org/sites/default/files/JFRC.pdf.
- Jaynes, E. T. (1993). A backward look to the future. Physics and probability, 261–275.Google Scholar
- Kahneman, D. (2011). Thinking, fast and slow. UK: Macmillan.Google Scholar
- Kuhn, T. S. (2012). The structure of scientific revolutions. USA: University of Chicago Press.Google Scholar
- Lehrer, J. (2012, January 30). Groupthink: The brainstorming myth. The New Yorker.Google Scholar
- Lucibella, M., & Blewett, H. (2013, October). Profiles in versatility: Part 1 of two-part interview: Entrepreneur Elon Musk talks about his background in physics. APS News, 22.Google Scholar
- Malthus, T. R. (1826). An Essay on the Principle of Population; Or, A View of Its Past and Present Effects on Human Happiness: With an Enquiry Into Our Prospects Respecting the Future Removal or Mitigation of the Evils which it Occasions. John Murray.Google Scholar
- McGonigal, J. (2011). Reality is broken: Why games make us better and how they can change the world. Penguin.Google Scholar
- Medawar, P. B. (1963). Is the scientific paper a fraud. The Listener, 70(12), 377–378.Google Scholar
- Mikolov, T., Yih, W.-T., & Zweig, G. (2013). Linguistic regularities in continuous space word representations. Paper presented at the HLT-NAACL.Google Scholar
- Narayanamurti, V., Anadon, L. D., & Sagar, A. D. (2009). Transforming energy innovation. Issues in Science and Technology, National Academies, 26(1), 57–64.Google Scholar
- Narayanamurti, V., Odumosu, T., & Vinsel, L. (2013). RIP: The basic/applied research dichotomy. Issues in Science and Technology, 29(2).Google Scholar
- Narayanamurti, V. (2016). Personal communication.Google Scholar
- Pennebaker, J. W. (2014). The secret life of pronouns: What our words say about us. New York, N.Y.: Bloomsbury Press.Google Scholar
- Pirolli, P., & Card, S. (2005). The sensemaking process and leverage points for analyst technology as identified through cognitive task analysis. Paper presented at the Proceedings of International Conference on Intelligence Analysis.Google Scholar
- Polanyi, M. (1967). The tacit dimension.Google Scholar
- Salton, G. (1975). A theory of indexing (Vol. 18). SIAM.Google Scholar
- Sawyer, R. K. (2011). Explaining creativity: The science of human innovation. UK: Oxford University Press.Google Scholar
- Senge, P. M., & Suzuki, J. (1994). The fifth discipline: The art and practice of the learning organization. New York, N.Y.: Currency Doubleday.Google Scholar
- Shore, J., Bernstein, E. S., & Lazer, D. (2014). Facts and figuring: An experimental investigation of network structure and performance in information and solution spaces.Google Scholar
- Silberman, S. (2015). Neurotribes: The legacy of autism and the future of neurodiversity. Penguin.Google Scholar
- Singer, M. (2014). Tony Hsieh: Building his company, and his city, with urbanism. AIArchitect, 21.Google Scholar
- Sismondo, S. (2011). An introduction to science and technology studies. NJ: Wiley.Google Scholar
- Toulmin, S. E. (1961). Foresight and understanding: An enquiry into the aims of science. USA: Greenwood Press.Google Scholar
- Tsao, J. Y., Emmanuel, G. R., Odumosu, T., Silva, A. R., Narayanamurti, V., Feist, G. J. … Sun, R. (2013, December). Art and science of science and technology: Proceedings of the forum and roundtable (June 5–7, 2013). Albuquerque, New Mexico: Sandia National Laboratories.Google Scholar
- Waber, B. (2013). People analytics: How Social sensing technology will transform business and what it tells us about the future of work. NJ: FT Press.Google Scholar
- Wilson, E. O. (2012). The social conquest of earth. New York, N.Y.: WW Norton & Company.Google Scholar
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.