Abstract
The hidden-action model captures a fundamental problem of principal-agent theory and provides an optimal sharing rule when only the outcome but not the effort can be observed (Holmström in Bell J Econ 10(1):74, 1979). However, the hidden-action model builds on various explicit and also implicit assumptions about the information of the contracting parties. This paper relaxes key assumptions regarding the availability of information included in the hidden-action model in order to study whether and, if so, how fast the optimal sharing rule is achieved, and how this is affected by the various types of information employed in the principal-agent relation. Our analysis particularly focuses on information about the environment and about feasible actions for the agent. We follow an approach to transfer closed-form mathematical models into agent-based computational models and show that the extent of information about feasible options to carry out a task only has an impact on performance if decision makers are well informed about the environment, and that the decision whether to perform exploration or exploitation when searching for new feasible options only affects performance in specific situations. Having good information about the environment, on the contrary, appears to be crucial in almost all situations.
1 Introduction
This paper focuses on the hidden-action model introduced in Holmström (1979) (henceforth referred to as standard hidden-action model): An individual (the principal) delegates some authority to act in her name (i.e., to exert productive effort) to another individual (the agent). This relation is specified in a contract which defines what the agent has to do, in terms of a task which is delegated from the principal to the agent, and how the resulting outcome is shared between the principal and the agent (Lambert 2001; Eisenhardt 1989). The key issue in the standard hidden-action model is that the agent’s productive effort cannot be observed by the principal. In order to cope with this problem, a sharing rule is proposed which is based on the outcome, which might be measured by a particular performance metric. As a consequence of the proposed sharing rule, research related to this model often puts particular emphasis on information systems which the principal employs for measuring the outcome achieved by the agent in fulfilling the task (for an overview see, for example, Baiman 1982, 1990; Hesford et al. 2007; Lambert 2001, 2006).
While the standard hidden-action model puts strong emphasis on information to control the behavior of the agent through decision-influencing information, the role of information to reduce pre-decision uncertainty in order to increase the probability of better decisions by means of decision-facilitating information remains widely unconsidered (Demski and Feltham 1976): By giving the principal and the agent all the information required to make optimal decisions and by making rather ‘heroic’ assumptions about the principal’s and the agent’s information-processing capabilities (Müller 1995; Axtell 2007; Simon 1959, 1979), most of the problems related to the availability of decision-facilitating information are ‘assumed away’. For example, both the principal and the agent are assumed to have knowledge about the distribution of the states of the relevant environment which, together with the productive effort exerted by the agent, shapes performance. Moreover, the principal is assumed to be perfectly informed about the agent’s utility function and to have knowledge about the entire set of feasible actions from which the agent selects the level of productive effort which he exerts, which also means that the principal is able to immediately find the optimal sharing rule. There is, however, empirical evidence that the assumptions incorporated in the standard hidden-action model are not perfectly in line with human capabilities and human behavior, particularly when studied in the context of organizations (e.g., Perrow 1986; Eisenhardt 1989; Hendry 2002).
In this paper, we shift the attention from the decision-influencing perspective to the decision-facilitating role of information. We focus on the assumptions related to decision-facilitating information and analyze whether a relaxation of the aforementioned assumptions results in modified requirements for the information which is relevant in the context of the standard hidden-action model. In particular, employing more realistic assumptions regarding the availability of decision-facilitating information could shift the attention from ex post performance information to information systems which provide information about the environment or the set of feasible actions. Limiting the principal’s and agent’s availability of decision-facilitating information will inevitably require endowing them with additional capabilities, such as learning mechanisms or the ability to search for information following different strategies, such as exploration or exploitation (March 1991).
There are some empirical findings supporting the conjecture that managerial decision makers have particular requirements regarding decision-facilitating information: For example, Vandenbosch and Huff (1997) report on the use of executive information systems, indicating that managers employ the systems to challenge general managerial assumptions and preconditions, e.g., related to the environment. Based on an empirical study, Schäffer and Steiners (2004) distinguish different forms of information usage, indicating that ex post performance evaluation is just one of several relevant forms, while, according to Schäffer and Steiners (2005), managers are rather satisfied with accounting-based information but see some deficiencies with respect to information related to the environment including, for example, competitors or probabilities of external events. In a similar vein, Hall (2010) argues that future developments in accounting should be directed to provide relevant information for a general understanding of the related field and for strategizing.
Against this background, we employ an approach which allows for a relaxation (Guerrero and Axtell 2011; Leitner and Behrens 2015) of key assumptions regarding the availability of decision-facilitating information of the contracting parties in the standard hidden-action model: We put a particular focus on (i) the assumption related to the availability of information about the environment and (ii) the assumption that both the principal and the agent are fully informed about the set of actions feasible to carry out the task which the principal delegates to the agent. The relaxation of assumptions reduces the model’s mathematical tractability dramatically. We therefore set up an agent-based variant of the hidden-action problem which includes the relaxed assumptions (for simulation-based approaches in managerial science see, for example, Davis et al. 2007; Leitner and Wall 2015; Wall 2016).
The remainder of this paper is organized as follows: Sect. 2 introduces the main features of the hidden-action model introduced in Holmström (1979) and discusses the assumptions incorporated into the standard hidden-action model. In Sect. 3, we introduce the relaxed assumptions, discuss their operationalization, and formalize the agent-based model. Section 4 presents and discusses the results of the simulation study. Section 5 summarizes and concludes the paper, and discusses limitations and avenues for future research on this topic.
2 Information and information systems in the standard hidden-action model
2.1 The hidden-action model in a nutshell
The standard hidden-action model is a static model that captures a single-period situation in which a principal hires an agent for carrying out a task.^{Footnote 1} The principal’s main role is to supply capital and to construct incentives, while the agent’s main role is to act on behalf of the principal. The hidden-action model can be applied to a multiplicity of situations, such as economic relationships between employer and employee, homeowner and contractor, or buyer and supplier (Caillaud and Hermalin 2000; Eisenhardt 1989). The concept of the task delegated from the principal to the agent is, therefore, rather abstract and universal.
Main characteristics of the standard hidden-action model. The main features that describe the economic relationship between the principal and the agent are the following (Holmström 1979; Caillaud and Hermalin 2000):

The principal delegates a task to the agent.

The agent exerts effort (also referred to as action) to carry out the task, which affects the principal’s payoff. We denote the set of all feasible effort levels by \({\mathbf {A}} \subseteq {\mathbb {R}}\) and an individual effort level by \(a\in {\mathbf {A}}\). The agent has full power to choose an effort level from the set of feasible effort levels \({\mathbf {A}}\).

The agent’s effort \(a\) is hidden. This means that only the agent knows about the effort exerted; the principal cannot observe it (at least not at reasonable costs). Information about the agent’s effort is, thus, distributed asymmetrically between the principal and the agent.

The effort \(a\) together with a random state of nature \(\theta \) determines the task’s outcome \(x=x(a,\theta )\). The outcome \(x\) can be observed by both the principal and the agent. The state of nature \(\theta \) is a random variable that describes the economic environment in which the task is carried out. It cannot be observed by the principal. The agent, however, can observe (or deduce) the state of nature after the outcome \(x\) has taken effect.

Due to a lack of observability of the effort \(a\) to the principal, incentives for the agent have to be functions of the outcome \(x\) alone. The principal and the agent agree ex ante on a rule (an incentive scheme) that defines how the task’s outcome \(x\) is shared between the two parties. We denote the sharing rule by \(s(\cdot )\) and the agent’s share of outcome \(x\) by \(s(x)=x\cdot p\), whereby \(p\) stands for the premium parameter.

The principal is assumed to be risk-neutral. Her utility is defined over payoff and the agent’s compensation. We denote her utility function by \(U_P(x-s(x))=x-s(x)\).

The agent is assumed to be risk-averse. His utility is defined over compensation and disutility for the effort exerted. We denote his utility function by \(U_A(s(x),a)=V(s(x))-G(a)\), with \(V^\prime >0\) and \(x_a \ge 0\).^{Footnote 2}
Figure 1 represents the sequence of events within the standard hiddenaction model. In \(\tau =1\), the principal designs a contract and offers it to the agent, who in \(\tau =2\) decides whether to accept the contract or not. The contract specifies the task and the sharing rule and is valid for one period. Once the contract is accepted, it is binding for both parties. If the agent accepts the contract, he exerts (and completes) productive effort in \(\tau =3\). The model is strictly sequential, which prevents the agent from adjusting his effort after \(\tau =3\). The random state of nature \(\theta \) realizes in \(\tau =4\). Finally, in \(\tau =5\), the outcome and the principal’s and the agent’s utilities take effect.
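The five-step timeline can be sketched in code. This is a minimal illustration, not the paper's implementation: the additive outcome form anticipates Eq. (3) of the agent-based variant introduced later, and all names and default parameter values (`run_period`, `productivity_rho`, the seed) are assumptions for illustration only.

```python
import random

def run_period(premium_p, effort_a, productivity_rho=1.0,
               theta_mu=0.0, theta_sigma=1.0, seed=42):
    """One period of the hidden-action game, tau = 1..5 (simplified sketch).

    tau=1: the principal offers the linear sharing rule s(x) = x * p.
    tau=2: the agent accepts (acceptance is simply assumed here).
    tau=3: the agent exerts effort a.
    tau=4: the state of nature theta realizes.
    tau=5: the outcome and both parties' shares take effect.
    """
    theta = random.Random(seed).gauss(theta_mu, theta_sigma)  # tau=4
    outcome_x = effort_a * productivity_rho + theta           # tau=5
    agent_share = outcome_x * premium_p                       # s(x) = x * p
    principal_share = outcome_x - agent_share                 # x - s(x)
    return outcome_x, principal_share, agent_share

x, principal_share, agent_share = run_period(premium_p=0.3, effort_a=2.0)
```

Because the model is strictly sequential, the effort argument is fixed before the state of nature is drawn, which mirrors the fact that the agent cannot adjust his effort after \(\tau =3\).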
Constraints to be considered from the principal’s point of view. The principal’s decision problem is to find a sharing rule \(s(\cdot )\)

that meets a minimum level of utility \({\underline{U}}\) for the agent. \({\underline{U}}\) is referred to as reservation utility and represents the expected utility from pursuing his next best (outside) alternative. This requirement for the sharing rule assures that the agent accepts the contract in \(\tau =2\). In the hidden-action model, this condition is referred to as the participation constraint (Holmström 1979; Caillaud and Hermalin 2000).

that motivates the agent to exert a level of effort that maximizes both the principal’s and the agent’s utilities. In the design of the sharing rule, the principal has to take into account that the rule affects the agent’s choice of effort \(a\) via outcome \(x\), and that effort \(a\) leads to disutility for the agent. This condition is referred to as the incentive compatibility constraint (Holmström 1979; Caillaud and Hermalin 2000).
The optimization program. Given the main characteristics of the standard hidden-action model and the constraints to be considered as described above, the optimization program to generate Pareto-optimal sharing rules can be formalized by

\( \max _{s(\cdot ),\,a} \; {E}\left( U_P(x-s(x))\right) \)  (1a)

\( \text {s.t.}\quad {E}\left( U_A(s(x),a)\right) \ge {\underline{U}} \)  (1b)

\( a \in {{\,\mathrm{arg\,max}\,}}_{a^\prime \in {\mathbf {A}}} {E}\left( U_A(s(x),a^\prime )\right) \)  (1c)
where Eqs. (1b) and (1c) represent the participation and the incentive compatibility constraints, respectively. The notation ‘\({{\,\mathrm{arg\,max}\,}}\)’ denotes the set of arguments that maximize the objective function that follows, and \({E}\) denotes the expectation operator (Holmström 1979). The solution to the standard hidden-action model, as formalized in Eqs. (1a)–(1c), is presented in “Appendix C”. Table 1 summarizes the notation used in this section.
2.2 Concepts of information and information asymmetry
The concept of information is central to the hidden-action model. In contrast to the competitive general equilibrium view that assumes that information is given and perfectly known (e.g., Arrow 1964), principal-agent theory follows the stream of information economics and regards information as being imperfect and costly (Stiglitz 2003; Frieden and Hawkins 2010; Hawkins et al. 2010). Consequently, in the latter stream of research asymmetries of information play a central role.
The concept of information has a long history, is often domain-dependent, and is applied in multiple ways in the research literature (for an overview see, for example, Capurro 2009; Hofkirchner 2009; Madden 2000; McCreadie and Rice 1999). At a very general level, information can, for example, be defined as a fact which one is told, systematic data that convey a message, or a numerical quantity that measures the uncertainty in the outcome of an experiment (Madden 2000; Khan 2018; Soofi 1994). Notions of information cover a wide spectrum ranging from semantic to technical definitions: The former use information in a rather intuitive sense, while in the latter, information is usually a well-defined function that captures the extent of uncertainty (Soofi 1994). McCreadie and Rice (1999) focus on semantic notions, provide a more detailed overview of conceptualizations, and distinguish between information as

1.
A resource or commodity: McCreadie and Rice (1999) argue that information can be produced, purchased, replicated, distributed, manipulated, passed or not passed along, controlled, traded, and sold. This concept of information is consistent with the well-known sender–receiver model of communication (Shannon 1948, 2001) and, therefore, might also include assumptions about the interpretation and understanding of information by the receiver. It is, however, also recognized that information is different from other commodities, as it possesses properties of public goods: Its consumption is, for example, non-rivalrous, and from the macro-perspective, it is not efficient to exclude others from its use (Stiglitz 2000). According to McCreadie and Rice (1999), this conceptualization might be used as a basis for strategies for generating and consuming information (e.g., in the context of innovation and problem-solving) and for establishing the value of information. Even though it is inefficient from a macro-perspective, this conceptualization constitutes the basis for individual benefits resulting from the availability of private information.

2.
Data in the environment: According to this concept, information is available in the environment (e.g., in terms of smells, sounds, artifacts, objects) and interacts with human information-processing capabilities. This view on information includes unintentional communication that is driven by perception (Buckland 1990) and is also related to the understanding that the value of information is judged on the basis of the environment in which it is used: Relevant environments can, for example, be characterized along the dimensions of people (e.g., roles, cognitive abilities), problems (e.g., degree of structuring, experienced problem dimensions), settings (e.g., organization, means of communication), and problem resolutions (e.g., the way information is used, rules for solving problems) (Taylor 1991; Katzer and Fletcher 1992; Khan 2018).

3.
Representation of knowledge: This concept views information as a representation of (or a pointer to) knowledge (McCreadie and Rice 1999). A recent example of this concept of information is persistent digital identifiers for scholarly publications, such as ORCID or DOI.

4.
Part of the process of communication: Here, information is seen as part of human behavior, and meanings are embodied in people rather than in words (Madden 2000). This concept shifts the focus from data to the way users handle data (Budd 1987) and also includes temporal, social, and personal factors as parts of work practices into the process of information acquisition and communication (McCreadie and Rice 1999). This view on information is, for example, reflected by the well-known distinction between implicit and explicit knowledge (Dienes and Perner 1999).
Asymmetric information in the hidden-action setup. The standard hidden-action model uses the concept of information as a good or commodity (McCreadie and Rice 1999) and particularly focuses on the decision-influencing role of information (Demski and Feltham 1976). The principal is unable to observe, or to verify at reasonable costs, the effort \(a\) made by the agent. The agent, of course, knows the effort level \(a\) (Spremann 1987). In addition, only the agent but not the principal can observe the exogenous variable \(\theta \) that realizes in \(\tau =4\) (see Fig. 1). The latter assumption assures that there is, indeed, asymmetry in information about effort \(a\), as the principal cannot perfectly infer \(a\) from outcome \(x\) without knowing \(\theta \) (Caillaud and Hermalin 2000). Please note that with respect to all other elements of the standard hidden-action model introduced in Sect. 2.1, the principal and the agent share the same state of information, i.e., there is only information asymmetry with respect to the effort \(a\) and the realized exogenous factor \(\theta \). The fact that (i) the outcome \(x\) is correlated with \(a\) and (ii) that it is observable for both the principal and the agent without costs opens up a way for the principal to overcome the asymmetry in information: She designs the reward scheme \(s(\cdot )\), which provides incentives for the agent to exert a level of effort that maximizes not only his own but also the principal’s utility. This notion of information asymmetry is used in both the static hidden-action model (see Sect. 2.1) and the dynamic agent-based representation of the hidden-action model (see Sect. 3).
Information in the agent-based model. The agent-based model introduced in this paper also follows the concept of information as a good or commodity. Recall that the standard hidden-action model assumes information other than the effort level \(a\) and the exogenous variable \(\theta \) to be given and available for both the principal and the agent; the transfer to the agent-based model allows for limiting the availability of these pieces of information and, therefore, for shifting the focus from the decision-influencing to the decision-facilitating role of information. At the same time, the dynamic features of the agent-based model allow for endowing the principal and the agent with learning capabilities, so that they can assemble the missing pieces of information over time. Thus, we extend the notion of asymmetric information (as introduced above) by the difference between information \(J\), which is intrinsic to a system, and information \(I\), which represents information about a system (Frieden and Hawkins 2010; Hawkins et al. 2010): \(J\) stands for the most complete and perfectly knowledgeable information intrinsic to a system, while \(I\) can be interpreted as information about a system based on a number of observations. In the agent-based model variant, we particularly consider that information about the environment and the feasible actions to carry out the delegated task are not given but are to be learned or discovered by the principal and the agent over time. Information \(J\) could, for example, represent the distribution of exogenous variables, and \(I\) could stand for the realizations of the exogenous variable observed by the agent.
An observation can, therefore, be modeled as an information-flow process \(J\rightarrow I\).^{Footnote 3} Increasing (decreasing) the number of observations decreases (increases) the distance \(J-I\) and therefore allows agents to be better (worse) informed about \(J\), which reflects some of the results presented in Kreps (1979), Puppe (1996), and Billot (1999).^{Footnote 4} Both the principal and the agent store their observations in information systems. The different types of information systems (implicitly) included in the standard hidden-action model are discussed in the next section.
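The information-flow process \(J\rightarrow I\) can be illustrated with a short sketch, under the assumption that \(J\) is the true (here: normal) distribution of \(\theta \) and \(I\) is its estimate from a growing sample; the function name and parameters are hypothetical.

```python
import random

def observe_environment(n_observations, true_mu=0.0, true_sigma=1.0, seed=7):
    """Sketch of the information flow J -> I: the intrinsic information J
    (here: the true parameters of theta) is approached through a record of
    observations, from which the estimate I (sample mean and standard
    deviation) is computed."""
    rng = random.Random(seed)
    sample = [rng.gauss(true_mu, true_sigma) for _ in range(n_observations)]
    est_mu = sum(sample) / len(sample)
    var = sum((s - est_mu) ** 2 for s in sample) / len(sample)
    return est_mu, var ** 0.5

# With more observations, the estimate I moves closer to J.
few_mu, _ = observe_environment(10)
many_mu, _ = observe_environment(10_000)
```

The shrinking estimation error with a growing number of observations is what the text describes as a decreasing distance \(J-I\).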
2.3 Information systems in the hidden-action model
The standard hidden-action model in total captures (in parts implicitly) three types of information systems (ISs), which provide the principal and the agent with relevant pieces of information to make optimal decisions (cf. Fig. 2). We distinguish between internal and external information. Internal information is related to the business of an organization and captures, for example, information coming from an organization’s planning systems or information about an organization’s performance. External information refers to information about the economic environment in which the task is carried out. The following ISs are considered:

1.
Information about the environment (provided by IS 1): The principal and the agent are modeled to be informed about the distribution of the exogenous factor \({\theta }\). This type of information covers all relevant issues outside the organization, such as information about competitors, resources, technology, or economic conditions (Dumond 1994): The principal uses this information in \(\tau =1\), when she fixes the incentive scheme \(s(\cdot )\), while the agent uses this information in \(\tau =2\) and \(\tau =3\), when he decides whether to accept the contract or not and selects an effort level \(a\), respectively.

2.
Information about the action space (provided by IS 2): In order to derive the optimal sharing rule \(s(\cdot )\) instantaneously in \(\tau =1\), and in order to exert the utility-maximizing effort level in \(\tau =3\), both the principal and the agent must have information about the entire action space \(\mathbf{A}\), which is provided by IS 2. In order to find the optimal compensation scheme \(s(\cdot )\) and the optimal effort level \(a\), the principal and the agent are required to consider all relevant alternatives within the action space. This assumption is rather ‘heroic’, as the specification of all feasible effort levels might be extremely difficult (Feltham 1968).

3.
Information about the outcome (provided by IS 3): In \(\tau =5\) the standard hidden-action model assumes the principal and the agent to observe outcome \(x\). In order to do so, they use an IS which also provides internal information.
Figure 2 schematically represents the ISs included in the hidden-action context. This paper focuses on the decision-facilitating role of information in the context of hidden-action problems. For the remainder of the paper, we particularly stress the assumptions regarding the ISs which provide external information (IS 1) and internal information about the action space (IS 2). We do not focus on IS 3, as the type of information provided by this information system is mainly used for decision-influencing purposes. It is also important to note that the standard hidden-action model assumes that the different types of ISs contain all the necessary pieces of information. For the standard hidden-action model described above, this means that all required information is entered into ISs 1 and 2 before \(\tau =1\).
3 The agent-based model variant
3.1 Relaxing assumptions regarding information systems
3.1.1 Assumptions regarding the IS which provides external information (IS 1)
Relaxed Assumptions. Recall that the standard hidden-action model (introduced in Holmström (1979)) assumes that both the principal and the agent instantaneously have information about the distribution of the exogenous factor capturing the environment. From a decision-facilitating perspective, this means that the principal and the agent can immediately make the best possible decision. We adapt this assumption in the following way:

1.
The principal and the agent no longer instantaneously have information about all possible realizations of environmental variables as assumed by the standard hidden-action model.

2.
The principal and the agent are endowed with the individual capability to learn about the environment (i.e., about the distribution of environmental variables) over time. We refer to the implemented learning model as simultaneous sequential learning.

3.
The principal and the agent no longer share one IS but store their individually acquired pieces of information in IS 1P and IS 1A, respectively (cf. Fig. 3).
These are feasible adaptations of the standard model, as learning about the environment is a common feature of organizations: Epstein (2003), for example, refers to it as a prerequisite for organizational well-being and survival (see also Daft and Weick 1984; Guo and Reithel 2018).
Operationalization of relaxed assumptions. In order to operationalize these adaptations, we enrich the standard hidden-action model by a simultaneous and sequential learning model: We endow the principal with the ability to estimate the realizations of \(\theta \), which she stores in her private IS 1P. The principal’s learning mechanism is formalized in Sect. 3.2. For the agent, we transfer the assumption of the standard hidden-action model, so that he is able to (ex post, in \(\tau =5\)) observe the realizations of \(\theta \), which he stores in his private IS 1A. For a schematic representation of the related ISs included in the agent-based model variant cf. Fig. 3.
Sophistication of IS 1P and IS 1A in the agent-based model variant. Our model considers different sophistication levels of the ISs which provide external information. The concept of sophistication of IS 1P and IS 1A can be related to prior research in two ways:

1.
Guo and Reithel (2018) divide information-processing in organizations into two main categories, namely information inflow and information outflow. We capture the inflow of information about the environment into the IS by the principal and the agent entering their learnings (according to the simultaneous sequential learning model) into their ISs. As a prerequisite for information outflow, the collected information, which was previously entered into the ISs, needs to be processed so that it can be used for decision-making purposes. IS sophistication is referred to as the organization’s capability to process information.

2.
The concept of IS sophistication can also be related to the fit between the individual, the task to be carried out, and the IS providing information: Liu et al. (2011) break down the task-technology fit model (which is originally introduced in Goodhue and Thompson (1995)) and argue that there are three two-way fits, namely the task-technology fit, the individual-technology fit, and the task-individual fit. The latter fit refers to the fit between individual capabilities and decision-making requirements imposed by certain tasks. The individual-technology fit refers to the fit between characteristics of technologies and needs of individuals who are responsible for solving tasks, e.g., in terms of how information is provided. The task-technology fit refers to the fit between characteristics of technologies and the tasks to be carried out, for example in terms of providing good and appropriate information. Information system sophistication is conceptually related to the task-technology fit.
We operationalize the sophistication of the ISs which provide external information as follows: More (less) sophisticated ISs comprise better (poorer) information about the environment, whereby the lowest (highest) level of sophistication indicates that only very recent (all historical) data stored in the IS are processed and provided for decision-making purposes. Recall the concept of information used in the agent-based model introduced in Sect. 2.2: If an IS is characterized by a high level of sophistication, this means that it can provide more reliable information, in terms of information \(I\), about the information \(J\) that is intrinsic to a system. A higher (lower) level of sophistication, thus, decreases (increases) the distance \(J-I\).
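This operationalization can be sketched as follows, under the assumption that sophistication is expressed as the number of most recent stored observations that are processed; the window mechanism and function name are illustrative, not the paper's formal specification.

```python
def process_is(stored_observations, sophistication):
    """Sketch of IS sophistication: a low level processes only very recent
    data, while the highest level processes the entire history.  Here this
    is operationalized (as an illustrative assumption) as a moving window
    over the last `sophistication` entries; None means the full history."""
    if sophistication is None:
        window = stored_observations        # most sophisticated IS
    else:
        window = stored_observations[-sophistication:]  # recent data only
    # The processed result is the information I provided to the decision maker.
    return sum(window) / len(window)

history = [0.9, 1.1, 1.0, 0.2]
recent_only = process_is(history, sophistication=1)      # least sophisticated
full_history = process_is(history, sophistication=None)  # most sophisticated
```

A larger window averages out noise in the stored observations, which is the sense in which higher sophistication shrinks the distance \(J-I\).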
3.1.2 Assumptions regarding the IS which provides internal information (IS 2)
Relaxed assumptions. The assumptions of the standard hidden-action model imply that there is one IS which comprises information about the set of feasible actions \(\mathbf{A}\), and that both the principal and the agent have the same information about the action space available (cf. Fig. 2). Prior research, however, argues that information asymmetry (following the concept of information asymmetry introduced in Sect. 2.2) lies at the heart of decentralization (Akerlof 1970), as decentralized decision makers (i.e., the agent) are usually better informed than central managers or the owner of an organization (i.e., the principal) (Rajan and Saouma 2006). In line with the argumentation in Akerlof (1970), we enrich the standard hidden-action model with information asymmetry about the set of feasible actions between the principal and the agent, so that we allow for the agent to be better informed about the action space than the principal.
Operationalization of relaxed assumptions. In order to introduce information asymmetry regarding \(\mathbf{A}\), we make the following adaptations of the standard hidden-action model:

1.
The principal and the agent no longer share one IS which contains information about the set of feasible actions, but we model them to have separate ISs (IS 2P and IS 2A in Fig. 3). The fact that the principal and the agent no longer share the same information system allows for the agent to be better informed about the set of feasible actions \(\mathbf{A}\) than the principal.

2.
As the principal no longer has full information about the set of feasible actions, we endow her with the ability to either search locally (exploitation of the known action space) or globally (exploration outside of the known action space) for actions which she wants the agent to carry out (March 1991).
Sophistication of IS 2P and IS 2A in the agent-based model variant. Our operationalization of information asymmetry is in line with previous research: Rajan and Saouma (2006), for example, argue that the extent of information asymmetry is influenced by the choice of the internal accounting system. We take up this argumentation and set up the model in the following way: While the agent has access to the entire action space \({\mathbf {A}}\) using his private IS 2A, we model the principal to be able to see only a fraction of \({\mathbf {A}}\) as a consequence of the sophistication of her private IS 2P. A low (high) level of sophistication of IS 2P, thus, indicates that the principal has information about a relatively small (large) fraction of \(\mathbf{A}\).^{Footnote 5}
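The asymmetric visibility of the action space can be sketched as follows; which sub-range of \(\mathbf{A}\) is visible to the principal is an illustrative assumption (here: the lowest effort levels first), as are the function name and the discretization of the action space.

```python
def visible_actions(full_action_space, sophistication):
    """Sketch of IS 2P/IS 2A: the principal's IS 2P exposes only a fraction
    of the feasible action space A, growing with sophistication in (0, 1];
    the agent's IS 2A (sophistication = 1.0) returns A in full.  Which
    sub-range is visible is a hypothetical choice made for illustration."""
    if not 0.0 < sophistication <= 1.0:
        raise ValueError("sophistication must be in (0, 1]")
    n_visible = max(1, round(len(full_action_space) * sophistication))
    return full_action_space[:n_visible]

full_a = [round(0.1 * i, 1) for i in range(1, 11)]  # discretized A = {0.1, ..., 1.0}
principal_view = visible_actions(full_a, sophistication=0.3)  # IS 2P, low level
agent_view = visible_actions(full_a, sophistication=1.0)      # IS 2A, full A
```

The gap between `agent_view` and `principal_view` is the information asymmetry about \(\mathbf{A}\) that the relaxed model introduces.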
3.2 Formalization of the agent-based model variant
In the agent-based representation^{Footnote 6} of the hidden-action model, we indicate time periods by subscript \(t = 1,2,\ldots ,T\) and the sequence of events within one time step \(t\) by subscript \(\tau =0,1,\ldots ,7 \). Figure 4 schematically represents the sequence of events within one time step in the agent-based model and also indicates the types of information systems which the principal and the agent use during the different steps \(\tau \).
Characteristics of the principal, the agent, and the environment. We characterize the (risk-neutral) principal by the utility function
\(U_P\left( x_t, s(x_t)\right) = x_t - s(x_t),\)
where \(x_t\) denotes the outcome and \(s(x_t)=x_t\cdot p_t\), with \(p_t \in [0,1]\), stands for the agent’s compensation in \(t\). We model the agent as being characterized by the productivity \(\rho \) (which is constant throughout the simulations), denote the agent’s exerted effort in \(t\) by \(a_t\), and formalize the outcome \(x_t\) in period \(t\) by
\(x_t = a_t \cdot \rho + \theta _t.\)
We model the exogenous factor \(\theta _t\) to follow a normal distribution, \(\theta _t\sim N(\mu ,\sigma )\). As assumed in the standard hidden-action model (see Holmström 1979), the principal aims at maximizing her utility subject to the participation constraint and the incentive compatibility constraint (see Eqs. 1a–1c): In order to do so, we allow the principal to adapt the parameterization of the sharing rule, i.e., the premium parameter \(p_t\), over time.
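For illustration, the outcome realization described above (Eq. 3 with \(\theta _t\sim N(\mu ,\sigma )\)) can be sketched as follows; the function name and parameter values are illustrative and not taken from the original simulation code:

```python
import random

def realize_outcome(effort, rho, mu, sigma, rng):
    """Outcome x_t = a_t * rho + theta_t with theta_t ~ N(mu, sigma) (cf. Eq. 3)."""
    theta = rng.gauss(mu, sigma)
    return effort * rho + theta

# Illustrative values: effort 0.8, productivity rho = 50, stable environment.
rng = random.Random(42)
x = realize_outcome(effort=0.8, rho=50, mu=0.0, sigma=2.0, rng=rng)
```

The sharing rule then pays the agent the fraction \(p_t\) of this realized outcome, which is what the principal adapts over time.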
The (risk-averse) agent is characterized by the CARA utility function:
where \(\eta \) represents the agent’s Arrow–Pratt measure of risk-aversion (Pratt 1975).
Information about the action space provided by IS 2P and IS 2A. In Sect. 3.1.2 we introduce information asymmetry regarding the set of feasible actions. In the agent-based model variant, we denote the set of feasible actions in timestep \(t\) by \({\mathbf {A}}_t\). Limited information about \({\mathbf {A}}_t\) hinders the principal from recognizing the optimal value for the premium parameter, \(p_t\), instantaneously. This is in contrast to the standard hidden-action model, which suggests that the principal always has the necessary information to set the premium parameter \(p_t\) so that the agent has incentives to exert the optimal effort level (cf. Sect. 2.1 and Eq. 7 below). As a consequence, the principal has to search for the value of the premium parameter which induces the optimal effort level: In order to do so, the principal has the option to either exploit the known fraction of the action space (local search) or to explore the action space outside of the known area (global search) (cf. March 1991). In periods \(t=2,\ldots ,T\) the principal selects her search strategy in \(\tau =0\) and, according to this strategy, uses her IS 2P to perform either a global or a local search for effort levels on which she will base the sharing rule \(s(\cdot )\) (cf. \(\tau =0\) in Fig. 4).^{Footnote 7} In order to assure the existence of a solution to the principal’s decision problem, it is necessary to set boundaries for \({\mathbf {A}}_t\). We set the lower boundary by the participation constraint, \(E(U_A(s(x_t),a_t))\ge {\underline{U}}\), and the upper boundary by means of the incentive compatibility constraint, \(a_t \in {{\,\mathrm{arg\,max}\,}}_{a'_t \in {\mathbf{A}_\mathbf{t}}} E(U_A(s(x_t),a'_t))\) (cf. Eqs. 1b and 1c). It is important to note that both boundaries are endogenous: They include an expectation concerning the environmental variable, as \(s(x_t)\) is based on the outcome \(x_t=a_t\cdot \rho + \theta _t\) (cf. Eq. 3).
The agent’s reservation utility \({\underline{U}}\), however, is an exogenous variable. Changes in the state of information about the environment (via learning in \(\tau =7\), see below) might lead to changes in the boundaries of \({\mathbf {A}}_t\). The principal uses her IS 1P to compute her expectation as to the environment.
The principal’s IS for external information (IS 1P). We denote the information which the principal retrieves from IS 1P by the vector \(\varvec{{\tilde{\varTheta }}}_t = [{\tilde{\theta }}_{t-1}\; {\tilde{\theta }}_{t-2}\; \ldots\; {\tilde{\theta }}_{t-m} ]\). The elements of \(\varvec{{\tilde{\varTheta }}}_t\) stand for the principal’s estimations of the exogenous factor in previous periods; for their computation see Eq. (9) below. The length of \(\varvec{{\tilde{\varTheta }}}_{t}\) is defined by parameter \(m\in \{1,2,\ldots ,t \}\), which also stands for the sophistication of IS 1P. A low level of sophistication indicates that only very recent information can be retrieved from IS 1P, while a high level of sophistication means that information from a larger number of past periods is available for the principal.^{Footnote 8} A higher value of \(m\), thus, indicates that the principal is better informed about the environment, as more historical information enables the principal to compute a more reliable expectation as to the environment.
Endogenous threshold to trigger the principal’s search strategy. As the information sources for the principal’s decision for a search strategy are now defined (IS 1P and IS 2P), we can focus on her decision rule: The principal’s decision to perform either a local or a global search (cf. March 1991) is based on (i) the principal’s estimated exogenous factor in \(t-1\), \({\tilde{\theta }}_{t-1}\), and (ii) her propensity to innovate \(\delta \in [0,1]\).^{Footnote 9} Recall that in periods \(t=2,3,\ldots ,T\) the principal retrieves the (i) estimation of the exogenous factor from IS 1P in \(\tau =0\). The (ii) propensity to innovate represents the principal’s tendency to search either locally or globally for effort levels which can be used as the basis for the sharing rule \(s(\cdot )\). A lower (higher) value of \(\delta \) decreases (increases) the principal’s tendency to search globally. Based on the principal’s exploration propensity \(\delta \), we compute an exploration threshold \(\kappa _t\) for timestep \(t\) which is implicitly defined by
where \(\sigma (\cdot )\) and \(\mu (\cdot )\) represent the standard deviation and the mean, respectively. If \({\tilde{\theta }}_{t-1} > \kappa _t\) (\({\tilde{\theta }}_{t-1} < \kappa _t\)) the principal performs a global (local) search.
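Since Eq. (5) defines \(\kappa _t\) only implicitly via \(\sigma (\cdot )\) and \(\mu (\cdot )\), the following sketch assumes one possible reading: \(\kappa _t\) as the \((1-\delta )\)-quantile of a normal distribution fitted to the stored estimates, so that a higher propensity to innovate lowers the threshold and makes global search more likely. All function names are hypothetical:

```python
import statistics
from statistics import NormalDist

def exploration_threshold(estimates, delta):
    """Hypothetical reading of Eq. 5: kappa_t as the (1 - delta)-quantile of a
    normal distribution fitted to the stored estimates via mu(.) and sigma(.)."""
    mu = statistics.mean(estimates)
    sigma = statistics.stdev(estimates)
    return NormalDist(mu, sigma).inv_cdf(1.0 - delta)

def search_strategy(theta_prev, estimates, delta):
    """Global search if the last estimated exogenous factor exceeds kappa_t."""
    kappa = exploration_threshold(estimates, delta)
    return "global" if theta_prev > kappa else "local"

history = [0.4, -0.2, 0.1, 0.3, -0.1]  # illustrative past estimates from IS 1P
strategy = search_strategy(theta_prev=0.9, estimates=history, delta=0.5)
```

Under this reading, an indifferent principal (\(\delta =0.5\)) explores whenever the last estimate exceeds the historical mean.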
Endogenous boundaries of the principal’s exploration and exploitation spaces. The principal has now selected her search strategy for period \(t\). In order to carry out this search, the spaces in which the principal looks for candidate effort levels need to be defined. The search space for a local (global) search is referred to as exploitation (exploration) space (see also Fig. 5). Please recall that the set of feasible actions \({\mathbf {A}}_t\) is bounded by the incentive compatibility and the participation constraints. The search spaces can be defined as follows:

The exploitation space is defined as a fraction of the entire search space in \(t\) and is a consequence of the sophistication of P’s IS 2P. We denote the sophistication of IS 2P by parameter \(q\), which defines the exploitation space in \(t\) as the fraction \(1/q\) of the entire action space \({\mathbf{A}_\mathbf{t}}\). We model the exploitation space to be distributed symmetrically around the effort level on which the principal has based her computation of the premium parameter in the previous period, \({\tilde{a}}_{t-1}\), and refer to \({\tilde{a}}_{t-1}\) as the ‘status-quo effort level’.

The exploration space is the area outside of the exploitation space but inside the boundaries of \({\mathbf {A}}_t\).
The search spaces are schematically illustrated in Fig. 5. Once the search strategy is settled, the principal randomly discovers two alternative effort levels in the respective search space (with uniformly distributed probabilities of discovery). The discovered effort levels are evaluated in the next step.
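A minimal sketch of the two search spaces and the discovery of the two candidate effort levels might look as follows; the rejection-sampling step and all names are illustrative assumptions, since the sampling mechanics are not spelled out above:

```python
import random

def exploitation_space(a_lo, a_hi, status_quo, q):
    """Exploitation space: fraction 1/q of the action space [a_lo, a_hi],
    centered symmetrically on the status-quo effort level (clipped to bounds)."""
    half = (a_hi - a_lo) / q / 2.0
    return max(a_lo, status_quo - half), min(a_hi, status_quo + half)

def discover_candidates(a_lo, a_hi, status_quo, q, strategy, rng, n=2):
    """Uniformly draw n effort levels from the exploitation space ('local')
    or from its complement within A_t ('global') via rejection sampling."""
    lo, hi = exploitation_space(a_lo, a_hi, status_quo, q)
    candidates = []
    while len(candidates) < n:
        a = rng.uniform(a_lo, a_hi)
        inside = lo <= a <= hi
        if (strategy == "local" and inside) or (strategy == "global" and not inside):
            candidates.append(a)
    return candidates

rng = random.Random(1)
local_cands = discover_candidates(0.0, 1.0, 0.5, 10, "local", rng)
global_cands = discover_candidates(0.0, 1.0, 0.5, 10, "global", rng)
```

With \(q=10\) and a status-quo effort of 0.5 on a unit action space, the exploitation space is the interval [0.45, 0.55] and the exploration space is everything else in \({\mathbf {A}}_t\).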
The principal’s evaluation of effort levels. The principal evaluates the newly found effort levels together with the status-quo effort level with respect to increases in her expected utility (based on the utility function in Eq. 2) in timestep \(\tau =1\) (cf. Fig. 4).^{Footnote 10} We denote the value-maximizing effort level in \(t\) from the principal’s point of view by \({\tilde{a}}_t\).^{Footnote 11} Please note that evaluating all candidates for the optimal effort in \(t\) on the basis of Eq. (2) requires the principal to build an expectation as to the environment. She retrieves this expectation from her IS 1P in \(\tau =1\) (cf. Fig. 4):
Recall that parameter \(m\) indicates the sophistication of IS 1P. The expected outcome in timestep \(t\) from the principal’s point of view, \({\tilde{x}}_{Pt}\), using the value-maximizing effort level \({\tilde{a}}_t\) can, thus, be formalized by \({\tilde{x}}_{Pt}={\tilde{a}}_{t}\cdot \rho +E_P(\theta _t)\).
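Assuming that Eq. (6) averages the (up to) \(m\) most recent estimates stored in IS 1P, the principal’s expectation and expected outcome can be sketched as follows (illustrative names and values):

```python
def expected_theta(estimates, m):
    """Assumed reading of Eq. 6: the expectation is the mean of the (up to) m
    most recent estimates retrievable from IS 1P; m = float('inf') means the
    entire history is used."""
    window = estimates if m == float("inf") else estimates[-int(m):]
    return sum(window) / len(window)

def expected_outcome(effort, rho, estimates, m):
    """Expected outcome from the principal's view: x~_Pt = a~_t * rho + E_P(theta_t)."""
    return effort * rho + expected_theta(estimates, m)

est = [0.2, -0.4, 0.6]               # illustrative estimates theta~ from IS 1P
e_low = expected_theta(est, 1)       # poor IS 1P: only the most recent estimate
x_hat = expected_outcome(0.8, 50, est, 3)
```

The window length directly implements the sophistication parameter \(m\): a larger window smooths out single-period noise in the estimates.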
The contract. Now that the principal has decided on a desired effort level \({\tilde{a}}_t\) for period \(t\), she can move on to fixing the incentive scheme \(s(\cdot )\). In order to do so, the principal computes the optimal premium level in \(t\) in step \(\tau =2\) according to
She then designs the contract and offers it to the agent, who decides whether or not to accept it in \(\tau =3\). In order to compute the premium level \(p_t\), the principal uses information about the environment provided by IS 1P and information about feasible actions provided by IS 2P (cf. \(\tau =2\) in Fig. 4). The agent uses IS 1A and IS 2A for his decision whether or not to accept the contract (cf. \(\tau =3\) in Fig. 4).
The agent’s IS for external information (IS 1A) and his decision for an effort level. In case the agent accepts the contract, he exerts effort \({a}_t = {{\,\mathrm{arg\,max}\,}}_{a'_t \in {\mathbf{A}_\mathbf{t}}} U_A\left( s({\tilde{x}}_{At}), a'_t\right) \) in \(\tau =4\), where \({\tilde{x}}_{At}=a_t \cdot \rho + E_A(\theta _t)\) includes the agent’s expectation as to the exogenous factor in \(t\), \(E_A(\theta _t)\). In order to compute \(E_A(\theta _t)\), he retrieves his observations of realized exogenous factors from IS 1A (cf. \(\tau =4\) in Fig. 4). Recall that the agent can observe exogenous factors after their realization. We denote the agent’s observations derived from IS 1A by \(\varvec{{\varTheta }}_t = [{\theta }_{t-1}\; {\theta }_{t-2}\; \ldots\; {\theta }_{t-m} ]\). The agent computes his expectation as to the exogenous factor in \(t\) according to
As for IS 1P, parameter \(m\) indicates the sophistication level of IS 1A. The agent’s rule for choosing an effort level, introduced above, also reflects that the agent knows the entire action space \({\mathbf {A}}_t\). This piece of information is provided by IS 2A (cf. \(\tau =4\) in Fig. 4).
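The agent’s best response can be illustrated with a grid search over a discretized action space. The CARA utility below uses an assumed quadratic effort cost, since the exact cost term of Eq. (4) is not reproduced here; all names and parameter values are illustrative:

```python
import math

def agent_utility(compensation, effort, eta=0.5, cost=1.0):
    """Illustrative CARA utility with an assumed quadratic effort cost; the
    paper's exact cost term may differ."""
    return -math.exp(-eta * (compensation - cost * effort ** 2))

def best_response(p, rho, e_theta, actions, eta=0.5):
    """Agent picks a_t maximizing U_A(s(x~_At), a_t), where the expected
    compensation is s(x~_At) = (a * rho + E_A(theta_t)) * p_t."""
    def utility_of(a):
        compensation = (a * rho + e_theta) * p
        return agent_utility(compensation, a, eta)
    return max(actions, key=utility_of)

grid = [i / 10 for i in range(11)]   # discretized stand-in for A_t
a_star = best_response(p=0.4, rho=1.0, e_theta=0.0, actions=grid)
```

With these illustrative values, the agent trades off the premium share against the effort cost and settles on an interior effort level of the grid.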
Realization of the outcome and of the principal’s and the agent’s utilities. After productive effort has been exerted in \(\tau =4\) and the exogenous factor has realized in \(\tau =5\), the outcome, \(x_t = {a}_t \cdot \rho + \theta _t\), materializes in \(\tau =6\), and the utilities for the principal and the agent realize (cf. Eqs. 2 and 4).
The principal’s estimation and agent’s observation of the environment. In \(\tau =7\), the agent observes the realization of the exogenous factor in \(t\), \(\theta _t\), and stores this observation in his IS 1A. The principal can only observe \(x_t\) using IS 3. Based on this information she estimates the exogenous factor in \(t\) according to
and stores \({\tilde{\theta }}_{t} \) in her IS 1P.^{Footnote 12} Finally, \({\tilde{a}}_t\) is carried over to period \(t+1\) as ‘status-quo effort level’.^{Footnote 13} The sequence explained above and depicted in Fig. 4 is repeated \(T\) times. The notation used for the formal representation of the agent-based variant of the hidden-action model is summarized in Table 2.
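Assuming that Eq. (9) backs the exogenous factor out of the observed outcome, the principal’s estimation step in \(\tau =7\) reduces to a one-liner (illustrative names):

```python
def estimate_theta(x_observed, effort_assumed, rho):
    """Assumed reading of Eq. 9: since x_t = a_t * rho + theta_t and the
    principal observes only x_t via IS 3, she backs out theta~_t using the
    effort level a~_t she based the sharing rule on. Because the agent's
    actual effort may differ from a~_t, the estimate is generally noisy."""
    return x_observed - effort_assumed * rho

theta_hat = estimate_theta(x_observed=41.5, effort_assumed=0.8, rho=50)
```

This noisiness of \({\tilde{\theta }}_t\) relative to the agent’s exact observations is precisely what makes the sophistication of IS 1P matter.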
3.3 Key parameters in the agent-based model
Parameters related to the principal. In addition to the utility function given in Eq. (2), the principal is characterized by a propensity to innovate (\(\delta \)) which represents her tendency to perform a local or global search, respectively (see Eq. 5 and subsequent paragraphs). With respect to the propensity to innovate, our analysis includes three different types of principals:

1.
Exploitation-prone principals, who are characterized by a tendency toward local search (\(\delta =0.25\)).

2.
Indifferent principals, who assign equal probabilities to local and global search (\(\delta =0.5\)).

3.
Exploration-prone principals, who are characterized by a tendency toward global search (\(\delta =0.75\)).
This parameter and the selected parameterization reflect current research on organizational ambidexterity, which focuses on the ability of organizations to simultaneously pursue incremental and discontinuous innovation (Tushman and O’Reilly 1996; O’Reilly and Tushman 2013). March (1991), among others, also notes that organizations need to find an appropriate balance between exploration and exploitation in order to adapt continuously and not become irrelevant in the market. The principal’s propensity to innovate reflects this balance between exploration and exploitation.
Parameters related to the agent. The agent is risk-averse and characterized by a CARA utility function (cf. Eq. 4). The assumption of risk-aversion for the agent is transferred from the standard hidden-action model (Holmström 1979). We set the agent’s Arrow–Pratt measure, \(\eta \), equal to \(0.5\). In addition, the agent is characterized by a measure for productivity, which we set to \(\rho =50\).
Parameters related to the environment. We model environmental variables to follow a normal distribution. In our analysis, we set the mean of the distribution of environmental variables equal to zero and consider four levels of environmental turbulence, which we operationalize by altering the distribution’s standard deviation, \(\sigma \): We set the standard deviation relative to the optimal outcome, \(x^*\), computed by using actual parameter settings and the second-best solution of the standard hidden-action model (cf. “Appendix C”), and consider a total of four cases ranging from relatively stable environments (\(\sigma =0.05x^*\)) to relatively turbulent environments (\(\sigma =0.65x^*\)).^{Footnote 14}
Parameters related to ISs. For the principal’s and the agent’s external information systems (IS 1P and IS 1A, respectively), our analysis covers three levels of sophistication. Recall that the sophistication of these ISs is formalized by parameter \(m\) (see Eq. 6 for IS 1P and Eq. 8 for IS 1A). We set \(m=1\) and \(m=3\) for a low and medium level of sophistication, respectively. For highly sophisticated ISs, we set \(m=\infty \). The sophistication of P’s internal information system IS 2P is captured by parameter \(q\), which identifies the fraction \(1/q\) of the set of feasible actions which is available for P. As discussed in Sect. 3.1.2, \(q\) is also a proxy for the extent of information asymmetry (regarding the action space) between the principal and the agent. Our analysis includes three sophistication levels of IS 2P: We set \(q\in \{3,5,10\}\) for a high, medium, and low level of sophistication, respectively. The choice of parameters reflects the concept of information introduced in Sect. 2.2: Let \(J\) stand for the most complete information intrinsic to a system, let \(I\) stand for information about a system, and let observations about information intrinsic to a system be an information flow process \(J \rightarrow I\) (Frieden and Hawkins 2010; Hawkins et al. 2010). Then, the sophistication parameters \(m\) and \(q\) shape the extent of information (intrinsic to a system and observed or observable) which is available to the contracting parties. Higher values for \(m\) (lower values for \(q\)) indicate that more information is available, which we interpret as being better informed.
Global parameters. The possible combinations of the parameters which are subject to variation lead to a total number of \(3\times 4\times 3\times 3 = 108\) scenarios. For each scenario, \(R=700\) simulation runs are performed, whereby the analysis focuses on the first \(T=20\) time periods.
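The scenario count can be checked by enumerating the varied parameter levels; the tuples below restate the levels from Sect. 3.3 (the \(\sigma \) levels are written as labels relative to \(x^*\)):

```python
from itertools import product

deltas = (0.25, 0.5, 0.75)                          # propensity to innovate
sigmas = ("0.05x*", "0.25x*", "0.45x*", "0.65x*")   # environmental turbulence
ms = (1, 3, float("inf"))                           # sophistication of IS 1P / IS 1A
qs = (3, 5, 10)                                     # sophistication of IS 2P

# Full factorial design over all varied parameters.
scenarios = list(product(deltas, sigmas, ms, qs))
n_scenarios = len(scenarios)  # 3 * 4 * 3 * 3
```

Each of these scenarios is then simulated \(R=700\) times over \(T=20\) periods.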
4 Results of the simulation study
4.1 Effects of the level of IS sophistication on the shape of performance over time
Scenarios. The first part of the analysis provides insights into the effects of the sophistication of the information systems employed within the contractual relationship between the principal and the agent on the level of performance obtained. In order to do so, this section presents results for the following parameter settings:

1.
The sophistication levels \(m\) of the external information systems IS 1P and IS 1A are varied. We present results for 3 sophistication levels: ISs which provide poor (\(m=1\)), medium (\(m=3\)), and good information (\(m=\infty \)). Please note that \(m\) does not vary between the principal and the agent.

2.
The sophistication level \(1/q\) of the principal’s internal information system IS 2P is varied. The scenarios cover cases in which the IS 2P provides poor (\(1/q=1/10\)), medium (\(1/q=1/5\)), and good information (\(1/q=1/3\)).

3.
The principal’s propensity to innovate is set to a medium level, \(\delta =0.5\). This parameter setting results in the principal assigning equal probabilities to exploration and exploitation, while searching for candidates for the optimal effort level in \(t\).

4.
We present results for the two extreme cases of low (\(\sigma = 0.05x^{*}\)) and high (\(\sigma = 0.65x^{*}\)) environmental uncertainty.
This section particularly discusses cases in which \(m\in \{1,3,\infty \}\) and \(1/q=1/10\) (cf. Fig. 6), and \(1/q\in \{1/10,1/5,1/3\}\) and \(m=1\) (cf. Fig. 7). The results for all parameter combinations are presented in Fig. 10 in “Appendix A”.
Performance indicator. For each timestep \(t\), we report the averaged normalized effort level exerted by the agent as the performance measure. For each timestep \(t =1,\ldots ,T\) and each simulation run \(r =1,\ldots ,R\), we track the level of effort \(a_{tr}\) exerted by the agent and normalize it by the optimal level of effort \(a^*\). The optimal effort level results from the second-best solution suggested by the standard hidden-action model (see Holmström 1979 and “Appendix C”). The reported performance indicator is formalized by:
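The performance indicator of Eq. (10) can be sketched for one timestep \(t\) as follows (illustrative names):

```python
def normalized_performance(efforts_by_run, a_opt):
    """Averaged normalized effort level p~_t (cf. Eq. 10): the mean over all
    simulation runs r of a_{t,r} / a*, for a single timestep t."""
    return sum(a / a_opt for a in efforts_by_run) / len(efforts_by_run)

# Illustrative effort levels from three runs, with an optimal effort of 1.0.
p_t = normalized_performance([0.6, 0.8, 0.7], a_opt=1.0)
```

A value of 1 would mean that, on average, the agent exerts exactly the second-best effort level.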
Results on the sophistication of IS 1P and IS 1A. The results on the sophistication of the principal’s and the agent’s systems for information about the environment are presented in Fig. 6. For this analysis, we investigate variations of IS 1P and IS 1A as described above, and keep the sophistication of the principal’s IS 2P constant at \(1/q=1/10\). Each subplot in Fig. 6 presents results for one investigated sophistication level of IS 1P and IS 1A. For each scenario, the subplots report the averaged normalized effort level introduced in Eq. (10). For scenarios with low environmental uncertainty (represented by triangles in Fig. 6), the results indicate that increasing the sophistication of IS 1P and IS 1A—and thereby increasing information about the organization’s environment—significantly increases the slope of the performance curves. Thus, effort for better information about the environment appears to pay off almost immediately in such scenarios. In addition to the slopes of the performance curves, the performances at the end of the observation period (i.e., after 20 periods) increase with the level of IS sophistication, too: While for the case of poor information about the environment, around \(0.74\) of the performance suggested by the standard hidden-action model can be achieved after 20 periods (see the top subplot in Fig. 6), increasing the sophistication so that good information is provided leads to a final performance of almost \(0.95\) (see the bottom subplot in Fig. 6). The described patterns can also be observed for situations in which the principal’s internal IS 2P has higher sophistication levels; however, the pattern is less pronounced the higher the sophistication of IS 2P (see Fig. 10 in “Appendix A”).
As soon as we switch to scenarios with high environmental uncertainty (represented by black diamonds in Fig. 6), similar shapes of the performance curves emerge: For the case of poor external information, for example, performance increases only in the first 2 periods to around \(0.68\), before it remains on this level until the end of the observation period. In scenarios with good information about the environment, performance increases in the first 11 periods to around \(0.87\) and then remains stable until period 20. From the results, we can draw the conclusion that higher sophistication levels of the IS for external information lead to (i) longer time spans in which performance increases, but (ii) the slopes of the performance curves in the first few periods are not affected. Consequently, final performances increase with the quality of the provided information. The same pattern emerges for situations with higher sophistication levels of P’s IS 2P (cf. Fig. 10 in “Appendix A”).
Results on the sophistication of IS 2P. We depict the performance curves for variations in the sophistication level of the principal’s IS for internal information IS 2P in Fig. 7, whereby each of the 3 subplots presents results for one of the investigated sophistication levels. We keep the sophistication levels of IS 1P and IS 1A constant at \(m=1\). For scenarios in which environmental uncertainty is low (\(\sigma =0.05x^*\), indicated by triangles in Fig. 7), our results suggest that increasing the sophistication of the principal’s internal IS 2P significantly increases the slope of the performance curve. Performance, thus, increases much faster in the first few periods. The final performance achieved after 20 periods is, however, only marginally affected: The final performance for the case of poor internal information (\(1/q=1/10\)) amounts to around \(0.71\) (see the right subplot in Fig. 7). Increasing the sophistication of IS 2P to \(1/q=1/3\) only leads to a marginal increase, so that a final performance of around \(0.82\) can be achieved. For scenarios with a high sophistication of the principal’s and the agent’s information system for environmental information, the pattern is similar, but the increase in the slopes of the performance curves is less pronounced (cf. Fig. 10 in “Appendix A”).
A completely different pattern emerges in situations in which environmental uncertainty is high (\(\sigma =0.65x^*\), indicated by black triangles in Fig. 7): Irrespective of the sophistication level of IS 2P, performance immediately reaches a level of around \(0.71\). From period \(t=2\) onward, there are generally no significant changes in this performance level. A similar behavior can be observed for situations in which the principal and the agent are better informed about the environment (see Fig. 10 in “Appendix A”). This is a remarkable and counterintuitive result: The results suggest that—at least for the first few periods—a significantly higher level of performance can be achieved in turbulent environments, as compared to stable environments. This finding might be explained by the pressure to be innovative, which turbulent environments impose on the principal (Mendes et al. 2016). The time span in which this result can be observed is critically shaped by the sophistication of the principal’s internal information system: For \(1/q=1/10\) (\(1/q=1/3\)), the performance observed in stable environments exceeds the performance in turbulent environments after 10 (4) periods.
Discussion and policy reflection. The results provide support for intuition, i.e., that coping with environmental turbulence is more successful when information about the environment is improved (Raghunathan 1999). In this sense, the results are in line with prior research in the tradition of the task-technology fit model (Goodhue and Thompson 1995): This line of research indicates that the relation between environmental uncertainty and task characteristics affects the satisfaction of the user with the data provided by ISs. This level of satisfaction, then, critically shapes the success of organizational ISs (Karimi et al. 2004; Schäffer and Steiners 2005; Petter et al. 2013). In the model used in this paper, a higher sophistication of the ISs for environmental information (IS 1P and IS 1A) and the internal IS (IS 2P) provides more complete information, which results in ‘better’ decisions.^{Footnote 15} A higher sophistication of our modeled ISs can, thus, be interpreted as a higher task-technology fit.
The results suggest that increasing the performance of the internal IS 2P does not pay off in turbulent environments: This is an unexpected finding which puts calls for more sophisticated IS designs (e.g., Karimi et al. 2004) into perspective. Our results suggest that turbulent environments put a certain pressure to be innovative on decision makers, which is why performance increases immediately in the first few periods. Further performance increases can, then, only be achieved by improving the quality of information about the environment (IS 1P and IS 1A), which is why efforts to increase the sophistication of the internal IS 2P prove to be ineffective.
The results presented so far suggest the necessity to differentiate the investment decisions regarding the sophistication of organizational ISs according to the degree of environmental uncertainty. If environmental uncertainty is low, enhancing the sophistication of external ISs (IS 1P and IS 1A) as well as internal IS 2P increases the performance obtained: Investments in either direction appear to be beneficial. In contrast, when environmental uncertainty is high, investing into an improved internal IS 2P appears to be ineffective: In such situations, the priority should rather be given to improving the quality of external information (provided by IS 1P and IS 1A).
4.2 Effectiveness of search strategies for different levels of IS sophistication
Scenarios. This part of the analysis focuses not only on variations in the sophistication of the ISs employed during the principal’s and the agent’s decision-making processes, but also takes the principal’s innovation propensity into account. The considered parameter setting is the following:

1.
As we did in Sect. 4.1, we vary the sophistication level \(m\) of the external information systems IS 1P and IS 1A, and analyze scenarios in which these ISs provide poor (\(m=1\)), medium (\(m=3\)), and good information (\(m=\infty \)).

2.
The sophistication level (\(1/q\)) of the principal’s internal information system IS 2P is also varied, so that the cases of poor (\(1/q=1/10\)), medium (\(1/q=1/5\)), and good information (\(1/q=1/3\)) are covered.

3.
We vary the principal’s propensity to innovate: As in Sect. 4.1, we investigate situations in which the principal is indifferent with respect to her search strategy (\(\delta =0.5\)). In addition, the results presented in this section cover situations in which the principal has a tendency for either exploration (\(\delta =0.75\)) or exploitation (\(\delta =0.25\)).

4.
We present results for 4 levels of environmental uncertainty: In addition to the two ‘extreme’ cases of low (\(\sigma = 0.05x^{*}\)) and high (\(\sigma = 0.65x^{*}\)) environmental uncertainty which were also included in the analysis in Sect. 4.1, we add two cases with intermediate environmental turbulence, in which we set \(\sigma = 0.25x^{*}\) and \(\sigma = 0.45x^{*}\).
The discussion in this section focuses on cases in which \(m\in \{1,3,\infty \}\) and \(1/q=1/10\) (cf. Fig. 8), and \(1/q\in \{1/10,1/5,1/3\}\) and \(m=1\) (cf. Fig. 9). The results for the remaining parameter combinations are presented in Fig. 11 in “Appendix B”.
Performance indicator. The performance measure used for the analysis in this section is based on the level of effort exerted by the agent. We, however, no longer present the averaged normalized effort level \({\tilde{p}}_{t}\) and the performance curves over time (as done in Figs. 6 and 7), but condense the performance curves into the Manhattan distance \(d\), which, in this case, represents the distance between the averaged normalized effort level \({\tilde{p}}_t\) and the optimal (normalized) effort level over time. This allows us to present one performance measure per scenario, which can be formalized by
where \(t=1,\ldots ,T\) represents timesteps and \({\tilde{p}}_{t}\) stands for the averaged normalized effort level in period \(t\) (cf. Eq. 10).
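Since the effort levels are normalized, the optimal level corresponds to 1, and Eq. (11) can be sketched as follows (illustrative names):

```python
def manhattan_distance(performance_curve, p_opt=1.0):
    """Condensed performance measure d (cf. Eq. 11): the Manhattan distance
    between the curve of averaged normalized effort levels p~_t and the
    optimal (normalized) level, summed over all T periods. A smaller d means
    the curve tracks the optimum more closely over the observation period."""
    return sum(abs(p_opt - p) for p in performance_curve)

# Illustrative four-period performance curve approaching the optimum.
d = manhattan_distance([0.5, 0.7, 0.9, 1.0])
```

Note that \(d\) rewards both a fast climb and a high final level, which is why it condenses an entire performance curve into one number per scenario.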
Results on the sophistication of IS 1P and IS 1A. Results for scenarios in which the sophistication levels of IS 1P and IS 1A for external information are varied are presented in Fig. 8. Each subplot presents the results for one of the investigated sophistication levels and depicts contours resulting from the condensed performance measure introduced in Eq. (11). We keep the sophistication level of the principal’s IS for internal information IS 2P constant at \(1/q=1/10\) for this part of the analysis.
As soon as we switch to scenarios with medium (\(m=3\)) and high sophistication levels (\(m=\infty \)) of the ISs for external information (IS 1P and IS 1A), we can observe that a different pattern emerges. First, the largest distance between the achieved and the optimal performances can no longer be observed in stable but in turbulent environments. This finding is in line with the intuition that there is a negative relation between environmental turbulence and performance. Second, we can see that the contours are no longer nearly horizontal, but their slope increases with the quality of the information provided by the principal’s and the agent’s information systems for external information. This change in the pattern of contours indicates that the decision whether to perform exploitation or exploration becomes particularly critical when the involved parties are well informed about the environment. In addition to the changes in the pattern, it can be observed that the distance between the achieved and the optimal performance decreases significantly with increases in the sophistication of the ISs for external information (IS 1P and IS 1A): While for stable environments (\(\sigma =0.05x^*\)), indifferent principals (\(\delta =0.5\)), and poor external information (\(m=1\)), the Manhattan distance is around \(7\), it reduces to around \(3.15\) and \(2.95\) for medium (\(m=3\)) and good (\(m=\infty \)) information, respectively. The marginal performance gain, thus, diminishes with higher levels of IS sophistication. The same observations can be made for cases with a medium (\(1/q=1/5\)) and high (\(1/q=1/3\)) level of sophistication for IS 2P (cf. Fig. 11 in “Appendix B”).
Results on the sophistication of IS 2P. Figure 9 presents results from scenarios with variations in the sophistication level of the principal’s internal IS 2P. For the ISs for external information, we fix \(m=1\) for this part of the analysis. As discussed above, for the scenarios with a low sophistication level of IS 2P (\(1/q=1/10\)), the largest distance between the achieved performances and the optimal performance can be observed in stable environments, and the horizontal contours indicate that the choice of strategy does not affect performance.^{Footnote 16} An increase in the sophistication level of the principal’s internal IS 2P so that medium (\(1/q=1/5\)) and good information (\(1/q=1/3\)) is provided for decision-making purposes only leads to slight changes in the observed pattern: First, the largest values for the distance from the achieved to the optimal performance shift into the direction of more turbulent environments. The largest distances can, however, be observed for intermediate levels of environmental turbulence. This is surprising, as one would expect the largest Manhattan distances in scenarios with the highest level of environmental turbulence. Second, for all sophistication levels of IS 2P, we can observe nearly horizontal contours: This indicates that, irrespective of the quality of internal information, the principal’s search strategy does not affect performance as long as the sophistication of the IS for external information is \(m=1\). For higher sophistication levels of the ISs for external information (IS 1P and IS 1A), it can, however, be observed that a higher tendency for exploration leads to slightly better performances (cf. Fig. 11 in “Appendix B”).
Discussion and policy reflection. Rather than taking a snapshot of one time period or analyzing performance curves over time, this section provides a condensed measure of the efficiency of organizational search strategies and information system sophistication levels, aggregated over all time periods. From that perspective, the results presented here indicate that the intuition that it is harder for organizations to achieve high performance in turbulent environments only holds if decision makers are well informed about the environment (i.e., in situations in which IS 1P and IS 1A provide medium or good information). In situations in which the principal and the agent only have poor information about the environment, there appears to be pressure to be innovative in terms of carrying out tasks. This leads to an immediate boost in performance, so that, over the entire observation period, the distance between the achieved and the optimal performance decreases. This finding is in line with previous research: Eisenhardt (1989) and Alexiev et al. (2016), for example, argue that increased innovativeness is a common response of organizations to turbulent environments when information quality is poor. This is exactly what we observe for situations in which the principal and the agent have only limited information about the environment. In addition, our results show that this pressure does not exist in stable environments, which is why performance increases at a much slower pace. We also analyze the search strategy's impact on performance: Auh and Menguc (2005) argue that whether exploration or exploitation is the superior search strategy depends on the type of organization, which, in their case, is either defender or prospector. We show that as long as information about the environment is poor, the choice of the search strategy in fact does not matter.
With an increase in the quality of information about the environment, exploration becomes significantly superior to exploitation. For the sophistication of the principal's information system for information about the action space, the results indicate that investments into highly sophisticated ISs only lead to very marginal increases in performance and no changes in the patterns discussed above. Thus, from a policy perspective, the findings presented here suggest a prioritization of ways to spend an organization's resources: Every effort should be made to build an information system which provides good information about the environment before tackling the question of whether to develop new ways to carry out specific tasks.
5 Summary and conclusive remarks
The standard hidden-action model (see Holmström 1979) comprises some rather ‘heroic’ assumptions about the availability of information and individual behavior. In this paper, we put a particular focus on the assumptions regarding the information which is accessible to both the principal and the agent. We relax selected assumptions and, by doing so, shift the focus from the decision-influencing role to the decision-facilitating role of information. For this purpose, we employ an approach for transferring closed-form mathematical models into agent-based models (Guerrero and Axtell 2011; Leitner and Behrens 2015), which allows us to make less restrictive assumptions.
We limit the principal's and the agent's information about the environment in which the organization operates but endow them with the ability to learn about the environment over time. Both the principal and the agent store the acquired information in an information system. In addition, we add information asymmetry regarding the options available to the agent for carrying out the task which the principal delegates to him: The principal is no longer fully informed about all feasible options but is endowed with search strategies to discover new options (March 1991). This information asymmetry between the principal and the agent is operationalized by granting them access to two different information systems for this type of information. We also investigate different levels of information system sophistication and analyze the impact on performance.
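The exploration/exploitation mechanism described above can be sketched as follows. The interval bounds, the step size, and the way the parameter `delta` (corresponding to \(\delta\) in the result figures) switches between the two modes are illustrative assumptions, not the paper's exact operationalization:

```python
import random

def search(status_quo, delta, lo=0.0, hi=1.0, step=0.05, rng=random):
    """One search step for a new candidate effort level:
    with probability delta explore (global draw over the feasible interval),
    otherwise exploit (small local move around the status-quo effort level)."""
    if rng.random() < delta:  # exploration
        return rng.uniform(lo, hi)
    # exploitation: perturb the status quo and clip to the feasible interval
    return min(max(status_quo + rng.uniform(-step, step), lo), hi)

rng = random.Random(42)
candidates = [search(0.5, delta=0.3, rng=rng) for _ in range(5)]
print(all(0.0 <= c <= 1.0 for c in candidates))  # True
```

With `delta` close to 1 the searcher mostly samples the whole feasible interval (a prospector-like strategy), while `delta` close to 0 keeps it near the status quo (a defender-like strategy).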
The results of our simulation study generate some key insights into the dynamics of delegation relationships with hidden action:

First, our results support the intuition that coping with environmental turbulence is more successful when the quality of information about the nature of the environment is improved. We also observe that turbulent environments appear to put pressure on decision makers to be innovative, which results in almost ‘immediate’ performance increases. Marginal changes in performance, however, decrease quickly, so that no further performance increases can be observed after only a few periods.

Second, the results indicate that increasing the quality of internal information about feasible ways to carry out tasks only marginally affects the level of achieved performance.

Third, the results show that the choice of organizational search strategy (exploration or exploitation) affects performances only if decision makers are well informed about the environment. For the case of a poor quality of information about the environment, the employed search strategy does not significantly affect the level of achieved performance.
Our research is, of course, not without its limitations. First, we adopt some assumptions regarding the principal and the agent from principal-agent theory. These assumptions cover, for example, individual utility-maximizing behavior and the availability of information about the agent's characteristics. Future research might address these assumptions and assess their impact on the applicability of incentive mechanisms provided by principal-agent theory. Second, the agent is modeled to carry out the same task repeatedly; we, however, assume that there are no learning-curve effects. Making the agent's productivity an endogenous variable might add further dynamics to the model. Third, we model situations in which no search costs occur for the exploration of the search space. Future research might consider adding search costs. In addition, coming up with alternative incentive schemes is a promising line for future research.
Notes
Subscript \(a\) denotes the partial derivative with respect to \(a\).
Please note that we focus on the concept of information as resource or commodity and, therefore, exclude effects of cognitive abilities at the agent level, which would alter the informationflow process.
Please note that the notion of asymmetric information used in the standard hidden-action model is a binary modeling choice, i.e., information about the effort level \(a\) is either available or not available. The extended notion of information introduced here allows for being better (or worse) informed (cf. also the concept of information entropy, Shannon 1948). Please also note that if the number of observations gets sufficiently large, there is a chance that additional observations do not contain any additional information (Billot 1999).
Please be aware that our adaptation blurs the line between two types of principal-agent models: As outlined above, in the standard hidden-action model all information except the effort \(a\) made by the agent is available to both the principal and the agent. In the hidden-information scenario, the principal and the agent share all information except some observations which only the agent has made. Arrow (1985) argues that the observations (which lead to private information for the agent), for example, relate to possibilities of production which are not available to the principal. This argumentation can be directly related to our operationalization of the sophistication of the IS which provides information about the set of feasible actions: An increase (decrease) in the amount of the agent's private information (as a consequence of observations) leads to the agent being better informed about the set of feasible actions \(\mathbf{A}\), which can be translated into a low (high) level of sophistication of the internal information system.
In \(t=1\), the principal randomly selects one candidate for the optimal effort level and bases the further procedure on this candidate.
Please note that the principal learns about the environment and stores the information acquired in her IS 1P in \(\tau =7\) in each \(t\) (see Fig. 4). This means that the principal only retrieves information from her IS that she entered into the system in previous periods \(r\), where \(0<r<t\), and also that the principal's information about the environment changes in each \(t\).
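The storage and retrieval mechanics described in this note can be sketched as follows. Treating the IS as a buffer that keeps the last \(m\) observations and returns their mean as the estimate is an illustrative assumption; the class and method names are hypothetical:

```python
from collections import deque

class ExternalIS:
    """Information system for external information with sophistication m:
    stores observations of the exogenous factor and bases the estimate
    on the most recent m entries only."""
    def __init__(self, m):
        maxlen = None if m == float("inf") else m
        self.memory = deque(maxlen=maxlen)

    def store(self, observation):
        self.memory.append(observation)

    def estimate(self):
        return sum(self.memory) / len(self.memory) if self.memory else 0.0

is1p = ExternalIS(m=3)
for obs in [0.1, 0.2, 0.6, 0.7, 0.8]:  # one observation per period
    is1p.store(obs)
print(round(is1p.estimate(), 3))  # 0.7 (mean of the last three observations)
```

Under this reading, a higher sophistication level \(m\) simply means a longer memory for observations of the environment, with \(m=\infty\) retaining everything.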
For \(t=1\) the effort level is a uniformly distributed random variable, see Fig. 4.
As first-order stochastic dominance is assumed for effort levels (Jost 2001), the principal will always opt for the highest discovered effort level.
In period \(t+1\) the effort level \({\tilde{a}}_t\) will be referred to as the status-quo effort level.
Please note that, as long as only one piece of information is unavailable, \({\tilde{a}}_t\) and \({a}_t\) perfectly coincide and, thus, the principal can estimate the realization of the exogenous factor without error.
If the status-quo effort level is located outside the feasible region, the principal is forced to carry out a global search for alternative effort levels.
In order to compute the optimal outcome \(x^{*}\), we plug the principal's and the agent's utility functions (see Eqs. 2 and 4, respectively) into the principal's optimization problem (see Eqs. 1a–1c): We use the parameter settings of the actual scenario and compute the optimal sharing rule \(s(\cdot )\) and the corresponding outcome using MathWorks® Matlab.
Recall, if the sophistication of the information systems for external information increases to \(m=3\) and higher, this pattern can no longer be observed.
References
Akerlof GA (1970) The market for ’lemons’: quality uncertainty and the market mechanism. Quart J Econ 84(August):488
Alexiev AS, Volberda HW, Van den Bosch FA (2016) Interorganizational collaboration and firm innovativeness: unpacking the role of the organizational environment. J Bus Res 69(2):974
Arrow KJ (1964) The role of securities in the optimal allocation of risk-bearing. Rev Econ Stud 31(2):91
Arrow KJ (1985) The economics of agency. In: Pratt J, Zeckhauser R (eds) Principals and agents: the structure of business. Harvard Business School Press, Boston, pp 37–51
Auh S, Menguc B (2005) Balancing exploration and exploitation: the moderating role of competitive intensity. J Bus Res 58(12):1652
Axtell RL (2007) What economic agents do: how cognition and interaction lead to emergence and complexity. Rev Austrian Econ 20(2–3):105
Baiman S (1982) Agency theory in managerial accounting: a survey. J Account Lit 1:154
Baiman S (1990) Agency research in managerial accounting: a second look. Account Organ Soc 15(4):341
Billot A (1999) Do we really need numerous observations to select candidates? (The d-Day Theorem). In: Beliefs, interactions and preferences in decision making. Springer, pp 121–134
Buckland MK (1990) Information retrieval and the knowledgeable society. In: Proceedings of the ASIS annual meeting, vol 27. Information Today Inc., Medford, pp 239–246
Budd RW (1987) Limiting access to information: a view from the leeward side. Inf Soc 5(1):41
Caillaud B, Hermalin BE (2000) Hidden action and incentives, Technical Notes. Ecole des Ponts et Chaussées, Paris, and University of California, Berkeley
Capurro R (2009) Past, present, and future of the concept of information. tripleC Commun Capital Critique Open Access J Global Sustain Inf Soc 7(2):125
Daft RL, Weick KE (1984) Toward a model of organizations as interpretation systems. Acad Manag Rev 9(2):284
Davis JP, Eisenhardt KM, Bingham CB (2007) Developing theory through simulation methods. Acad Manag Rev 32(2):480
Demski JS, Feltham GA (1976) Cost determination: a conceptual approach. Iowa State University Press, Ames
Dienes Z, Perner J (1999) A theory of implicit and explicit knowledge. Behav Brain Sci 22(5):735
Dumond EJ (1994) Making best use of performance measures and information. Int J Oper Prod Manag 14(9):16
Eisenhardt KM (1989) Agency theory: an assessment and review. Acad Manag Rev 14(1):57
Eisenhardt KM (1989) Making fast strategic decisions in high-velocity environments. Acad Manag J 32(3):543
Epstein EM (2003) How to learn from the environment about the environment? A prerequisite for organizational well-being. J Gen Manag 29(1):68
Feltham GA (1968) The value of information. Account Rev 43(4):684
Frieden BR, Hawkins RJ (2010) Asymmetric information and economics. Phys A 389(2):287
Goodhue DL, Thompson RL (1995) Task-technology fit and individual performance. MIS Q 1995:213–236
Guerrero OA, Axtell R (2011) Using agentization for exploring firm and labor dynamics: a methodological tool for theory exploration and validation. In: Osinga S, Hofstede GJ, Verwaart T (eds) Emergent results of artificial economics. Lecture Notes in Economics and Mathematical Systems, vol 652. Springer, Berlin, Heidelberg, pp 139–150
Guo X, Reithel B (2018) Information-processing support index: a new perspective on IT usage. J Comput Inf Syst 2018:1–14
Hall M (2010) Accounting information and managerial work. Account Organ Soc 35(3):301
Harris M, Raviv A (1979) Optimal incentive contracts with imperfect information. J Econ Theory 20(2):231
Hawkins RJ, Aoki M, Frieden BR (2010) Asymmetric information and macroeconomic dynamics. Phys A 389(17):3565
Hendry J (2002) The principal’s other problems: honest incompetence and the specification of objectives. Acad Manag Rev 27(1):98
Hesford JW, Lee SHS, Van der Stede WA, Young SM (2007) Management accounting: a bibliographic study. In: Chapman CS, Hopwood AG, Shields MD (eds) Handbook of management accounting research, vol 1. Elsevier, Amsterdam, pp 3–26
Hofkirchner W (2009) How to achieve a unified theory of information. tripleC Commun Capital Critique Open Access J Global Sustain Inf Soc 7(2):357
Holmström B (1979) Moral hazard and observability. Bell J Econ 10(1):74
Jost PJ (2001) Die Prinzipal-Agenten-Theorie in der Betriebswirtschaftslehre. Schäffer-Poeschel, Stuttgart
Karimi J, Somers TM, Gupta YP (2004) Impact of environmental uncertainty and task characteristics on user satisfaction with data. Inf Syst Res 15(2):175
Katzer J, Fletcher PT (1992) The information environment of managers. Annu Rev Inf Sci Technol 27:227
Khan MT (2018) Taylor’s information use environments (IUEs): an assessment. Pak Libr Inf Sci J 49(3):13
Kreps DM (1979) A representation theorem for “preference for flexibility”. Econometrica 1979:565–577
Lambert RA (2001) Contracting theory and accounting. J Account Econ 32(1–3):3
Lambert RA (2006) Agency theory and management accounting. In: Chapman CS, Hopwood AG, Shields MD (eds) Handbook of management accounting research. Elsevier, Amsterdam, pp 247–268
Leitner S, Behrens DA (2015) On the efficiency of hurdle rate-based coordination mechanisms. Math Comput Model Dyn Syst 21(5):413
Leitner S, Wall F (2015) Simulation-based research in management accounting and control: an illustrative overview. J Manag Control 2015:1–25
Liu Y, Lee Y, Chen AN (2011) Evaluating the effects of task-individual-technology fit in multi-DSS models context: a two-phase view. Decis Support Syst 51(3):688
Madden AD (2000) A definition of information. In: Aslib proceedings: new information perspectives, vol 52. Emerald Group Publishing Limited, pp 343–349
March JG (1991) Exploration and exploitation in organizational learning. Organ Sci 2(1):71
McCreadie M, Rice RE (1999) Trends in analyzing access to information. Part I: Cross-disciplinary conceptualizations of access. Inf Process Manag 35(1):45
Mendes M, Gomes C, MarquesQuinteiro P, Lind P, Curral L (2016) Promoting learning and innovation in organizations through complexity leadership theory. Team Perform Manag 22(5/6):301
Mirrlees J (1974) Notes on welfare economics, information and uncertainty. In: Balch M, McFadden D, Wu S (eds) Essays on economic behavior under uncertainty. North Holland, Amsterdam, pp 243–261
Mirrlees JA (1976) The optimal structure of incentives and authority within an organization. Bell J Econ 7(1):105
Müller C (1995) Agency-Theorie und Informationsgehalt. Die Betriebswirtschaft 1995(1):61
O’Reilly CA, Tushman ML (2013) Organizational ambidexterity: past, present, and future. Acad Manag Perspect 27(4):324
Perrow C (1986) Economic theories of organization. Theory Soc 15(1–2):11
Petter S, DeLone W, McLean ER (2013) Information systems success: the quest for the independent variables. J Manag Inf Syst 29(4):7
Pratt JW (1964) Risk aversion in the small and in the large. Econometrica 32(1/2):122
Puppe C (1996) An axiomatic approach to “Preference for freedom of choice”. J Econ Theory 68(1):174
Raghunathan S (1999) Impact of information quality and decisionmaker quality on decision quality: a theoretical model and simulation analysis. Decis Support Syst 26(4):275
Rajan MV, Saouma RE (2006) Optimal information asymmetry. Account Rev 81(3):677
Rogerson WP (1985) The first-order approach to principal-agent problems. Econometrica 53(6):1357
Schäffer U, Steiners D (2004) Zur Nutzung von Controllinginformationen. Zeitschrift für Planung & Unternehmenssteuerung 15(4):377
Schäffer U, Steiners D (2005) Controllinginformationen für das TopManagement deutscher Industrieunternehmen. Angebot und Nutzung im Spiegel einer empirischen Erhebung, Controlling & Management Review 49(3):209
Shannon CE (1948) A mathematical theory of communication. Bell Syst Tech J 27(3):379
Shannon CE (2001) A mathematical theory of communication. ACM SIGMOBILE Mobile Comput Commun Rev 5(1):3
Simon HA (1957) Models of man. Wiley, New York
Simon HA (1959) Theories of decisionmaking in economics and behavioral science. Am Econ Rev 49(3):253
Simon HA (1979) Rational decision making in business organizations. Am Econ Rev 69(4):493
Soofi ES (1994) Capturing the intangible concept of information. J Am Stat Assoc 89(428):1243
Spence M, Zeckhauser R (1971) Insurance, information, and individual action. Am Econ Rev 61:380
Spremann K (1987) Agent and principal. In: Bamberg G, Spremann K (eds) Agency theory, information, and incentives, corrected edn. Springer, Berlin, Heidelberg, New York, London, Paris, Tokyo, Hong Kong, 1987, Chap. 1, pp 3–37
Stiglitz JE (2000) The contributions of the economics of information to twentieth century economics. Q J Econ 115(4):1441
Stiglitz JE (2003) Information and the change in the paradigm in economics, part 1. Am Econ 47(2):6
Taylor RS (1991) Information use environments. In: Dervin B, Voigt M (eds) Progress in communication sciences, 10th edn. Ablex, Norwood, pp 217–255
Tushman ML, O’Reilly CA III (1996) Ambidextrous organizations: managing evolutionary and revolutionary change. Calif Manag Rev 38(4):8
Tversky A, Kahneman D (1974) Judgment under uncertainty: heuristics and biases. Science 185(4157):1124
Vandenbosch B, Huff SL (1997) Searching and scanning: how executives obtain information from executive information systems. MIS Q 21(1):81
Wall F (2016) Agentbased modeling in managerial science: an illustrative survey and study. RMS 10(1):135
Funding
This work was funded by the Anniversary Fund of the Oesterreichische Nationalbank under grant no. 17930. Open access funding provided by University of Klagenfurt.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Appendix A: IS sophistication and performance over time
See Fig. 10.
Appendix B: IS sophistication and the effectiveness of search strategies
See Fig. 11.
Appendix C: Solution to the standard hiddenaction model
For the solution to the optimization program introduced in Eqs. (1a)–(1c), Holmström (1979) suppresses the random state of nature \(\theta \) and views the outcome \(x\) as a random variable with distribution \(F(x,a)\), which is parameterized by the agent’s effort \(a\) (see also Mirrlees 1974, 1976). Given a distribution of the random state of nature, \(F(x,a)\) is the distribution induced on the outcome \(x\) via the function \(x=x(a,\theta )\). Holmström (1979) further assumes that an increase in the effort level shifts the distribution of the outcome to the right in the sense of first-order stochastic dominance, i.e., \(F_a(x,a)<0\). Moreover, \(F(x,a)\) has a density function for which \(f_a(x,a)\) and \(f_{aa}(x,a)\) are well defined for all \((x,a)\). Following the first-order approach (see, for example, Rogerson 1985), the optimization program introduced in Eqs. (1a)–(1c) can be reformulated as
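In Holmström's (1979) notation (principal's utility \(G\), agent's utility \(H\), effort cost \(c(a)\), reservation utility \(\bar{H}\); the paper's own symbols from Eqs. (1a)–(1c) may differ), the reformulated program can be written as:

```latex
\begin{align}
\max_{s(\cdot),\,a}\quad & \int G\bigl(x - s(x)\bigr)\, f(x,a)\,\mathrm{d}x \tag{12a}\\
\text{s.t.}\quad & \int H\bigl(s(x)\bigr)\, f(x,a)\,\mathrm{d}x - c(a) \ge \bar{H} \tag{12b}\\
& \int H\bigl(s(x)\bigr)\, f_a(x,a)\,\mathrm{d}x - c'(a) = 0 \tag{12c}
\end{align}
```

Here, Eq. (12b) is the agent's participation constraint and Eq. (12c) is the first-order condition replacing the agent's incentive compatibility constraint.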
We denote the multipliers for Eqs. (12b) and (12c) by \(\lambda \) and \(\mu \), respectively. Pointwise optimization leads to the following characterization for the optimal sharing rule:
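In Holmström's (1979) notation, with \(G\) and \(H\) denoting the principal's and the agent's utility functions (the paper's own symbols may differ), Eq. (13) takes the well-known likelihood-ratio form:

```latex
\begin{equation}
\frac{G'\bigl(x - s(x)\bigr)}{H'\bigl(s(x)\bigr)} = \lambda + \mu\,\frac{f_a(x,a)}{f(x,a)} \tag{13}
\end{equation}
```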
For risk-neutral principals, Eq. (13) reduces to
In situations in which a first-best solution can be achieved, i.e., in situations in which the principal can observe the agent’s effort, no incentive problem exists, which is why the incentive compatibility constraint given in Eq. (12c) is not binding and \(\mu =0\) in Eqs. (13) and (14). In such situations, a fixed compensation for the agent is optimal. If there is an incentive problem and the principal wants the agent to increase his effort, Eq. (12c) must be binding and \(\mu >0\) in Eqs. (13) and (14).
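Since a risk-neutral principal has \(G'(\cdot)\equiv 1\) (with \(H\) denoting the agent's utility function, following Holmström 1979; the paper's own symbols may differ), Eq. (14) reads:

```latex
\begin{equation}
\frac{1}{H'\bigl(s(x)\bigr)} = \lambda + \mu\,\frac{f_a(x,a)}{f(x,a)} \tag{14}
\end{equation}
```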
For further details, the reader is referred to Holmström (1979). A further discussion of the standard hidden-action model and its solution is provided, for example, in Lambert (2001) and Caillaud and Hermalin (2000).
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Leitner, S., Wall, F. Decision-facilitating information in hidden-action setups: an agent-based approach. J Econ Interact Coord 16, 323–358 (2021). https://doi.org/10.1007/s11403-020-00297-z
Keywords
 Management control
 Complexity economics
 Agentbased simulation
 Information systems
 Information system sophistication
 Search strategy
JEL Classification
 C63
 D83
 D86