In order to formulate specific research objectives, this chapter condenses insights from the theoretical background into key objectives that are to be analyzed empirically. Following the “Engaged Scholarship Diamond Model”, which was designed to close the theory-practice gap (Ntounis & Parker, 2017), the four domains “Theory Building”, “Problem Formulation”, “Problem Solving” and “Research Design” are to be addressed by the researcher in any preferred order (Van de Ven, 2007). This thesis’ conclusion serves as “Problem Solving” and bridges real-world problems (“Reality”) and the results of empirical research (“Solution”). Moving from reality toward theory, problem formulation covered the potential rise in complexity caused by globalization and the limitations of humans performing complex problem-solving. Information and expert knowledge were then identified by theory as critical influences on individual and group decision-making. The first sub-chapter sums up the key findings of the theoretical background. The resulting model, which is linked to a solution via the experimental research design, is explained in the second sub-chapter. A brief framework for a suitable experiment is provided in the third sub-chapter (Figure 3.1).

Figure 3.1
The Engaged Scholarship Diamond Model by Van de Ven (2007).

Source: Ntounis & Parker, 2017, p. 353

3.1 Summary of Key Findings

Besides limited financial resources, change and human resources were described as the fundamental problems for interconnected institutions engaged in complex problems of global proportions. In order to cope better with unpredictable change, expert knowledge is increasingly embedded in decision-making processes. Routine strength can inhibit decision-makers from adapting to change effectively, while knowledge and feedback interpretation influence success in overcoming routine. Expertise is formed through many iterations of acting in a certain domain, with heterogeneous feedback coming from that domain; it is therefore a learning process, as all learning is a feedback process. The environment of such a domain is a predictor for maximization and learning itself, and can also lead to bias and self-deception built upon illogical or even logical mental models. This can make adaptation to a novel, more efficient and effective strategy either harder or easier. Experiments have shown that environmental conditions only trigger a change in an agent’s strategy when feedback, or the agent’s interpretation of feedback, confirms that the new environmental conditions lead to a performance downswing if the routine strategy is not altered. Environmental change generally leads to different behavior depending on whether it is framed as man-made or as stochastic in origin: social or man-made change leads agents to optimize via pattern recognition, whereas stochastic change leads agents to maximize via logical rationale. Risk, expressed as either verbal or numerical probabilities, is interpreted differently depending on the agent’s knowledge; however, humans tend not to behave optimally when probabilities are provided. Groups and individuals behave differently when facing problems under uncertainty, also depending on whether or not groups are able to communicate internally.
Group performance is further influenced by its members’ expertise and performance, while individual expertise is hard to predict reliably via knowledge span, e.g. years of experience.

Two major aspects are thus to be researched: the impact of public information and of expertise on group decision-making when facing a problem under uncertainty. Public information will be communicated either actively, via text messages announced through pop-up notifications, or passively, via visual cues. In both cases, public information is therefore considered change. There will also be a condition in which change is announced neither actively nor passively, and agents will have to detect the change themselves via feedback interpretation. Change will either impede strategy performance or not. The dependent variables will focus not only on decision-making performance but also on behavior, i.e. strategy change or, accordingly, strategy persistence. In no case will an agent be deceived by public information, which distinguishes this model for empirical research from psychological approaches that include deception.

3.2 Model for Empirical Research

In order to test the influence of expert knowledge or expertise, the experiment has to be designed so that participants are able to maximize in a domain where feedback is part of a stable, well-defined problem. Participants can then use their optimal strategy in a second well-defined domain that includes a small change, and subsequently apply their strategy in an ill-defined but metastable domain with substantial change hidden from them, where the strategy from the well-defined domain still leads to maximization. During the well-defined stages, all agents act alone in isolation; in the ill-defined stages, agents act as a group. The experiment will be based on the thoroughly researched puzzle game “Tower of Hanoi”. The multiplayer version of “Tower of Hanoi” is driven by a deterministic 64-state algorithm, which ensures that every agent of a group has influence over the outcome, but does not necessarily determine the outcome. The algorithm does not change during the ill-defined stages. Also, without communication, no participant can gain full control over the outcome. Therefore, even if the true rules governing the multiplayer version are known, the outcome of any single action remains unknown, making these stages ill-defined. However, a group can outperform randomness by sticking to the ideal strategy from the well-defined stages. Finally, the metastable, ill-defined domain will exhibit a small change at some point, which vastly alters the inner dynamics, and feedback becomes “chaotic” with high certainty. In theory, however, all stages, well-defined and ill-defined alike, can be solved in the same number of moves. Feedback itself will remain stable, i.e. logical from some strategic perspective, during the well-defined domain. If the strategy is not altered in the well-defined domain after the small change has been introduced, performance will be worse, yet feedback will remain logical from some strategic perspective.
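The claim that all stages can, in theory, be solved in the same number of moves rests on a well-known property of the classic Tower of Hanoi: the optimal solution for n disks always takes 2^n − 1 moves, regardless of which rod is designated as the goal rod. The following sketch is a standard textbook recursion for illustration only, not the implementation used in the experiment:

```python
def hanoi(n, source, target, spare, moves=None):
    """Collect the optimal move sequence for an n-disk Tower of Hanoi."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, source, spare, target, moves)   # clear the n-1 smaller disks
    moves.append((source, target))               # move the largest disk
    hanoi(n - 1, spare, target, source, moves)   # rebuild the tower on the target
    return moves

# Optimal length is 2**n - 1, independent of the chosen goal rod.
print(len(hanoi(3, "A", "C", "B")))  # → 7
```

Because the move count depends only on the number of disks, switching the goal rod (the passive change discussed below) leaves the theoretical optimum unchanged.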
Feedback will remain seemingly logical from some strategic perspective during the metastable and ill-defined domain, but can also become chaotic from a strategic perspective if some agents behave “less than wise”. Feedback will be chaotic with high certainty during the instable and ill-defined stages. This might lead participants to interpret chaotic feedback as purely random, with any action seeming equally bad, resulting in an “indifferent” state of mind. This can lead to agents acting blindly in accordance with their routine strategy, or seemingly at random. Feedback, and therefore its interpretation, is then used as the defining “atom” of the system, in accordance with a system being described by its system states as “instable”, “indifferent”, “stable” and “metastable” (Jeschke & Mahnke, 2013). Figure 3.2 pictures these system states with intuitive diagrams.

Figure 3.2
Considered system states: instable, indifferent, stable, metastable.

Source: Jeschke & Mahnke, 2013, p. 17

Passive change is implemented via visual cues, namely the “goal rod” of the “Tower of Hanoi” stages. Agents will have to solve Tower of Hanoi three times in a row with the rightmost rod as the goal rod, and then three times in a row with the center rod as the goal rod. The change will not be communicated “actively” but “passively”, via visual cues that are corrected for color-blind people, i.e. conveyed not only by color but also by non-announced text. During the well-defined stages, expertise in solving Tower of Hanoi will be measured. The passive change is considered non-social environmental change. Participants who are not immediately aware of the goal-rod change will perform worse. After a certain number of well-defined stages, participants will face ill-defined stages, in which the “Tower of Hanoi” game is in fact a multiplayer version. Different types of public information, including no public information, are tested in various information conditions. Again, the goal rod will be changed after the same number of stages as in the well-defined stages. Throughout the ill-defined stages, the same hidden rules apply. This experimental setup is further specified in Table 3.1.

Table 3.1 Model for empirical research: system conditions of the online experiment. Source: own illustration
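The stage sequence described above can also be sketched as data. The sketch below assumes three stages per goal rod in both the solo and multiplayer blocks, as described; the field names and labels are illustrative assumptions, not the experiment’s actual configuration:

```python
def build_schedule(stages_per_goal=3):
    """Sketch the stage order: solo well-defined block, then multiplayer
    ill-defined block, each with a passive goal-rod switch halfway."""
    schedule = []
    for phase, multiplayer in (("well-defined", False), ("ill-defined", True)):
        for goal_rod in ("right", "center"):  # passive change: goal rod switches
            schedule += [
                {"phase": phase, "multiplayer": multiplayer, "goal_rod": goal_rod}
                for _ in range(stages_per_goal)
            ]
    return schedule

print(len(build_schedule()))  # → 12
```

The goal-rod switch occurs after the same number of stages in both blocks, which is the property the sketch is meant to make explicit.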

3.3 Experimental Framework for Research Objectives

In order to measure the impact of public information and expertise, the various information conditions and the various forms of logic models have to be categorized. In other words, various information conditions and strategies have to be considered. Even well-defined problems of reality can usually be solved in more than one way. In order to allow two forms of logic to be valid during the well-defined Tower of Hanoi game, the disks can “jump edges”, just like in the “Flag Run” experiments. Therefore, even the well-defined stages can be solved in more than one way of thinking. Also, during the ill-defined stages of the multiplayer version of Tower of Hanoi, the direction cannot be influenced by the direction buttons. As assumed by Strunz & Chlupsa (2019), direction buttons appeal to a deep intrinsic motive to become part of an agent’s strategy, making them ideal for testing non-routine problem-solving performance. In addition, the disks also jump edges during the ill-defined stages and are collectively controlled by all agents of one group. Three agents per group were chosen; however, the number of agents per group can be chosen arbitrarily in accordance with the algorithm.
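The “jumping edges” mechanic can be read as circular rod movement: assuming the direction buttons move the selected disk one rod to the left or right with wrap-around at the edges (an assumption about the control scheme, which the text does not fully specify), a minimal sketch is:

```python
def next_rod(rod, direction, n_rods=3):
    """Rods indexed 0..n_rods-1; a move wraps around the edges,
    so moving 'right' from the rightmost rod lands on the leftmost."""
    step = 1 if direction == "right" else -1
    return (rod + step) % n_rods  # modular arithmetic realizes the edge jump

print(next_rod(2, "right"))  # → 0  (rightmost rod wraps to leftmost)
print(next_rod(0, "left"))   # → 2  (leftmost rod wraps to rightmost)
```

Under this reading, a purely “clockwise” strategy and a purely “counterclockwise” strategy are both viable, which is what permits two valid forms of logic in the well-defined stages.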

In summary, the experiment will address the following general research objectives:

i) the impact of active public information about change on group problem-solving behavior, when such change does not influence strategy performance,
ii) the impact of passive public information about change on group problem-solving behavior, when such change does influence strategy performance,
iii) the impact of various forms of active public information, e.g. announcing social change or stochastic change, on agents changing their routine strategy,
iv) the impact of active public information about hidden rules on agents changing their routine strategy,
v) the influence of individual expertise stemming from well-defined learning environments on overcoming routine strategy and on overall performance in ill-defined problem-solving domains.

The experiment addressing these general research objectives is explained in the following chapter, after which the specific research questions and hypotheses are listed.