
Process-Tracing as a Tool to Analyse Discretion

  • Yf Reykers
  • Derek Beach
Open Access
Chapter
Part of the Palgrave Studies in European Union Politics book series (PSEUP)

Abstract

Despite decades of theorization, the causal processes that lie between acts of delegation and agency discretion and autonomy remain theoretically underdeveloped, with much ambiguity about how the model’s elements are causally connected. This chapter shows that process-tracing is a useful methodological tool for improving our theoretical and empirical understanding of the causal processes underlying the PA model. Process-tracing, as a case-study method, requires explicitly theorizing the causal mechanism that connects delegation to agency costs and forces the analyst to unpack the process empirically. The added value of process-tracing is illustrated with the example of the Council Secretariat’s facilitating leadership in intergovernmental negotiations. It is claimed that process-tracing subjects the principal–agent model to closer logical scrutiny, ultimately leading to stronger causal claims and better theorization.

1 Introduction

As a heuristic device, the principal–agent model’s main contribution lies in its ability to offer tools to simplify complex realities relating to the reasons for and consequences of the delegation of powers to agents. At least, that is how the model has traditionally been introduced. However, theoretical parsimony has been bought at a relatively high theoretical and empirical cost.

Although principal–agent models have been theorized for decades, the causal processes in between acts of delegation and agency discretion and autonomy remain theoretically underdeveloped, with a large degree of opacity about how the model’s elements are causally connected. 1 Instead, most uses of the principal–agent model focus on comparing input (preferences of principals and agents) with realized policy outcomes. However, by doing so, they black-box the actual process whereby agents potentially exploit their delegated powers for private gains. Yet just because there is a correlation between what agents want and achieved outcomes does not mean that delegation resulted in agency costs. When a principal does not complain about its agent, existing approaches are unable to uncover whether this is because the principal is unable to sanction the shirking agent effectively or because the agent has rationally anticipated the preferences of the principal.

It is here that process-tracing as a method is a useful methodological tool for improving our theoretical and empirical understandings of the causal processes underlying principal–agent modelling. The core of process-tracing as a case study method is the focus on tracing causal mechanisms, i.e. the causal process that links causes and outcomes together.

Applying process-tracing when using principal–agent models provides three analytical benefits. First, on the theoretical side, in order to trace causal processes, they first need to be fleshed out as theories of causal mechanisms. By unpacking mechanisms, we expose the underlying causal logics of the theory to closer logical scrutiny than if they are black-boxed. The language of mechanisms utilized in recent work on process-tracing provides a useful framework for developing better theories of the consequences of delegation in terms of agency costs.

Second, because we explicitly unpack causal mechanisms linking causes and outcomes in process-tracing, we are forced in our subsequent empirical analysis to actually study the causal process instead of treating it as an analytical black box. By doing so, we can better control for problems of observational equivalence related to the anticipatory adaptive behaviour of actors and other empirical challenges noted by principal–agent scholars (Pollack 2002; Damro 2007). The reason for this is that we would be going beyond an input–output empirical analysis of correlations, investigating what actors actually do in the process. Moreover, tracing an explicitly theorized mechanism enables us to make stronger claims about a causal relationship between X and Y than we can with correlational-type data.

Finally, theories that detail causes, outcomes and the causal mechanisms linking them are typically quite contextually sensitive. This means that different mechanisms can link the same cause and outcome in different contexts (Falleti and Lynch 2009; Beach and Pedersen 2016b). While this might at first glance unnecessarily increase the complexity of our theories, it ultimately strengthens our understanding of the politics of delegation and discretion because we learn more about context. In some types of cases, a particular control mechanism might be very effective, whereas in other contexts it has little if any effect because the mechanism linking it with the outcome does not work in the same fashion.

Although offering a strong claim for the introduction of process-tracing methodology to the principal–agent literature, process-tracing is by no means proposed as a panacea for all ills. For instance, the focus on single cases makes it difficult to systematically evaluate agency across a range of cases. However, we believe that the strategic deployment of process-tracing nested in a broader comparative case study design will both strengthen the underlying theoretical logic of the principal–agent model and provide stronger empirical evidence of how and under which conditions delegation can result in agency costs.

The chapter proceeds in four steps. First, we engage in a short discussion of the principal–agent literature on the politics of discretion, illustrating the dearth of theorization on the causal antecedents of agency costs. We subsequently present recent developments in process-tracing methods, focusing on how causal mechanisms can be unpacked theoretically and highlighting both the analytical and theoretical benefits of this approach. This includes an illustration of what a theorized causal mechanism can look like in practice, using the example of the Council Secretariat in decision-making within the Council of the EU. The next section elaborates on the actual empirical testing of a theorized causal mechanism. We discuss the added value of Bayesian updating for making causal inferences and provide a detailed description of how to translate each of the steps of a causal mechanism into expected empirical fingerprints. The example of the Council Secretariat is again used to show what such a test might look like in practice. Before concluding, the final section briefly touches upon the limitations of process-tracing methods that one needs to be aware of when applying them in future principal–agent analysis.

2 Why Theorizing Causal Mechanisms Results in Stronger Theories and Empirical Evidence

Two research questions are central in principal–agent models: the first is why a principal delegates powers to an agent; the second is why delegation might result in agency costs. These two questions (or phases) have in the introductory chapter of this edited volume respectively been labelled “the politics of delegation” and “the politics of discretion” (Delreux and Adriaensen this volume). We focus in this chapter on the latter research question, concentrating in particular on the distributional consequences of delegation instead of the efficiency gains produced. The following engages in a short review of theoretical work on the consequences of delegation, primarily drawing attention to applications to the study of the EU. It is illustrated that existing work black-boxes the causal process whereby agents actually exploit their delegated powers for private gains.

The utility of the principal–agent model for analysing questions of delegation and autonomy has frequently been recognized. Stone Sweet and Thatcher (2002: 3) note that the principal–agent model is a popular framework among political scientists “not least because it offers ready-made, appropriate concepts”. Referring to its applications in the EU context, Kassim and Menon (2003: 125) furthermore stress that the model allows for “more nuanced hypotheses”. Yet, while offering useful concepts, most theorization is in the form of linking cause with effect without making explicit how delegation produces agency costs.

Much of the literature’s attention has been devoted to identifying the determinants of agency costs (i.e. the chances for opportunistic agency behaviour). Principal–agent scholars studying the EU often focus on determining the degree of potential autonomy as a function of the size of informational asymmetries and the strength of the control functions (Pollack 1999; Franchino 2000a, 2000b; Tallberg 2002; Dijkstra 2014; Elsig 2007; Furness 2013). While there is a long list of potential theoretical determinants of high agency costs, the most often discussed are: the delegation of strong powers (Pollack 2003), the presence of only a limited pool of potential agents (Hawkins and Jacoby 2006), weak sanction mechanisms (Pollack 1999, 2003; Kerremans 2006; Delreux and Kerremans 2010), high informational asymmetries (Epstein and O’Halloran 1999; Dijkstra 2010) and strong preference differences between principal(s) and agent (Waterman and Meier 1998; Epstein and O’Halloran 2008; Damro 2007) or among principals (Elsig 2007, 2010; Delreux and Kerremans 2010; Niemann and Huigens 2011). More recently, EU scholars have also indicated the potential effect of incomplete contracting (Conceição-Heldt 2013) or, when taking into account increasingly complex decision-making contexts such as the EU’s presence in international institutions, the informality of the institutional environment (Niemann and Huigens 2011) and the compellingness of the international negotiation context (Delreux 2011).

Yet, principal–agent scholars tend to only scratch the surface when discussing the actual causal process between delegation and agency costs. Tallberg, for example, theorizes that agents exploit delegated powers in unwanted ways when the preferences of principal(s) and agent differ and the principal(s) are unable to sanction the agent through different control mechanisms, so that decisions are skewed towards the agent’s own preferences (Tallberg 2002, 2006). However, when it comes to the actual process, more could be done to make the mechanism explicit. He does not move beyond stating that “[e]xploiting its asymmetrical advantages in information and procedural control, the chair [agent] selects the equilibrium closest to its own ideal point” (2006: 38). Remarkably, awareness of this theoretical underdevelopment has existed for over a decade. Stone Sweet and Thatcher indicated in that regard that the principal–agent model as a causal theory “remains incomplete” (Stone Sweet and Thatcher 2002: 5), while Tallberg himself also stressed this concern by saying that “the ambition is to explore dynamic linkages between stages of the delegation process that are too often studied in splendid isolation”. Yet, as argued above, little has been done to turn this ambition into reality. The reason is probably that existing theorization operates within a co-variational understanding, where it is variation across cases that counts as a “cause” rather than actual causal processes operative within a case.

Because of the lack of explicit causal mechanisms in theoretical models, most principal–agent theorization is in the form of probabilistic claims about causes whose presence increases the likelihood of an effect occurring. 2 For instance, Bradley and Kelley (2008) argue that “the more precise a delegation contract is, the less room there is for agency slack”, a claim used by, among others, Conceição-Heldt (2013) in her study of the European Commission’s behaviour in EU trade policy. Elsig, similarly, states that “the autonomy of the agent increases with the degree of divergence of interests among collective principals” (Elsig 2007: 931–932). But again, little light is shed on how the cause (or causes) produce(s) the output, with the result that the theoretical claims lack a strong causal logic, as they are framed in correlational terms. Analysis mostly takes the form of checking whether factors are present, instead of analysing how they contribute to autonomous agency behaviour. Put differently, the preferences of principals and agents (input) are compared with realized policies (output), without unpacking theoretically or empirically the actual causal process that lies in between.

Framing theories in correlational terms has serious implications for research design. The first major problem is that if hypotheses are framed in correlational terms, studying individual cases makes little sense because correlations will only manifest themselves as trends across a range of cases. Pollack argued in 2002 that “hypotheses about agency are notoriously difficult to test” and that “single case studies fail to produce definitive results” (Pollack 2002: 216). Not surprisingly, when it comes to the testing of hypotheses on agency costs, many applications of the principal–agent model utilize quantitative analysis (Pollack 2002; Epstein and O’Halloran 1999; Franchino 2000a, 2005). Although there are also more qualitative designs, those also tend to assess general trends across cases (Pollack 2003; Beach 2005; Dijkstra 2010).

Yet causes are operative within cases, not across cases (Mahoney 2008; Beach and Pedersen 2016a). This means that if we really want to uncover empirically how a cause produces an outcome, we need to engage in an in-depth within-case analysis instead of merely investigating trends across cases. To do so requires that we theorize in a more deterministic fashion. Instead of talking about trends and correlations, we should rather focus on the causal mechanisms that will produce an outcome when the cause(s) and scope conditions required for its operation are present (Beach and Pedersen 2016a).

The second major problem is that, when framing theories merely in input–output terms instead of explicitly theorizing the causal mechanisms that lie in between, we run the risk that empirical challenges wreak havoc on our findings. Pollack (2002) for example discusses the methodological challenge of observational equivalence, which follows from potential anticipatory adaptive behaviour by the agent (Kroll this volume). Multiple authors have in that regard already highlighted the problem of differentiating between autonomous agency behaviour and effective control by the principal (Weingast and Moran 1983; Pollack 2002; Damro 2007; Delreux 2013), yet only a few have offered solid solutions. In order to overcome these difficulties, Pollack called for a comparative case study approach that would allow the analyst to “trace the hypothesised causal mechanism at work” (Pollack 2002: 202). In the same contribution, he stressed the limitations of single case studies, arguing that “they mask the ‘invisible’ effects of rational anticipation by agents” and that “large-n [comparative] studies […] have the advantage of demonstrating the existence and strength of correlations between principals’ preferences and agency behaviour over a large number of cases and over time” (Pollack 2002: 206, bracketed text inserted).

We claim that empirically tracing an explicitly theorized causal mechanism offers a better methodological solution to this type of challenge. By assessing the actual activities of principals and agents in a causal process, we are able to get closer to showing whether anticipatory adaptation is actually occurring. As further discussed in the following sections, this is not only possible because tracing causal mechanisms requires an explicit sequential understanding of the principal–agent model. Another reason is that it obliges the analyst to a priori specify the expected empirical manifestations and subsequently raises the standards that are set for the empirical material.

In conclusion, existing principal–agent theorization relating to the consequences of delegation, in the EU and more broadly, tends to relegate the causal processes whereby delegation produces unwanted distributional consequences to a theoretical black box. We argue that principal–agent scholars could benefit greatly from applying process-tracing insights by first theorizing mechanisms more explicitly and then studying them empirically.

3 Process-Tracing of Causal Mechanisms

This section develops two aspects of process-tracing that can help improve principal–agent theorization and research in EU studies: (1) the theoretical language of causal mechanisms as systems; and (2) the benefits of translating the principal–agent model into such a mechanistic understanding.

3.1 Theories of Causal Mechanisms

Process-tracing is a research method for tracing causal mechanisms using detailed, within-case empirical analysis of how a causal process plays out in an actual case. In theory-guided social science research, the ambition is to use causal theories to explain why something occurs either in a particular case or more broadly across a population of causally similar cases. To do so, process-tracing moves beyond the production of detailed, descriptive narratives of the events between the occurrence of a purported cause and an outcome. Process-tracing research probes the theoretical causal mechanisms linking causes and outcomes together, enabling us to get somewhat closer to actual causal processes operating in cases (Beach and Pedersen 2016a, b).

Yet causal mechanisms are one of the most widely used but least understood types of causal claims in the social sciences (Brady 2008; Gerring 2010; Hedström and Ylikoski 2010; Waldner 2012). The essence of making a mechanism-based claim is that we shift the analytical focus from causes and outcomes to the hypothesized causal process in between them. However, what this actually means is disputed, and we can identify (at least) three distinct takes on the nature of mechanisms, each of which implies different research designs: an understanding of mechanisms as intervening variables, a minimalist understanding and a systems-understanding. The first take applies to cross-case comparisons, and the second and third to within-case analysis. The result of this ambiguity is that there is considerable confusion in the methodological literature about what process-tracing methods are actually tracing and how we know good process-tracing when we see it in practice.

Some scholars view mechanisms as a form of intervening variable (Gerring 2007; Weller and Barnes 2015; King et al. 1994). However, making inferences about intervening variables requires empirical evidence in the form of variation across cases, measuring to what extent different values of X are linked with different values of Y, thereby enabling the assessment of the mean causal effects of X. Therefore, in this understanding there is no such thing as “within-case” analysis; instead single cases are disaggregated into multiple “cases” (either spatially, temporally or substantively) to assess whether there is evidence of difference-making (King et al. 1994: 219–228). Yet this “one-into-many” strategy transforms the within-case tracing of a causal process into a co-variational analysis of patterns of difference-making across sub-cases. By doing so, it shifts the unit-of-analysis to a different level from that in which the causal relationship was originally theorized to play out. As the goal of process-tracing is to trace the workings of causal mechanisms within a case, shifting the analysis to another level basically means that one is studying something different from what was intended. Indeed, one can argue that assessing mean causal effects of X transforms the analysis into a form of variance-based comparative case study. For instance, Dijkstra’s excellent principal–agent-based analysis of Commission and Council Secretariat agency assesses trends qualitatively across cases. While definitely a worthwhile endeavour for highlighting particular tendencies or trends across cases, such an approach does not allow us to assess how agency actually plays out in the negotiation of a particular case, which is the level at which the theorized principal–agent model operates (Dijkstra 2010).

Among case study scholars who attempt to trace within-case causal processes, two distinct takes on mechanisms can be identified in the methods literature: minimalist and systems-understandings. In minimalist understandings, the causal arrow between a cause and outcome is not unpacked in any detail, either empirically or theoretically. Instead, within-case evidence, also sometimes called “diagnostic evidence”, is produced by asking “if causal mechanism M exists, what observables would it leave in a case?” (Bennett and Checkel 2014). However, the mechanism is not unpacked theoretically in any detail, meaning that our mechanistic evidence is somewhat superficial because we have not traced empirically the workings of each step of the mechanism. Interestingly, traces of minimal process-tracing can also be found in some (American) principal–agent studies. One example is the account by Ferejohn and Shipan (1989, 1990) on congressional influence on bureaucracy, where they apply an explicit sequential understanding of the principal–agent model.

In contrast, in a systems-understanding the ambition is to unpack explicitly the causal process that occurs in between a cause (or set of causes) and an outcome and subsequently to trace each of its constituent steps empirically. In other words, the goal is to dig deeper into how things work in practice. Empirical evidence that assesses the workings of each step of the hypothesized mechanism enables stronger causal inferences to be made about the actual process. Such a systems-understanding of causal mechanisms has, however, not yet been applied in the principal–agent literature.

In the systems-understanding of mechanisms, a causal mechanism is unpacked into its constituent steps. They are theorized as systems of interlocking steps that transmit causal powers or forces between a cause (or a set of causes) and an outcome (examples of this understanding include Bhaskar 1978; Glennan 1996, 2002; Bunge 1997, 2004; Machamer et al. 2000; Machamer 2004; Mayntz 2004; Waldner 2012; Rohlfing 2012; Beach and Pedersen 2013, 2016b). Each of the steps of the mechanism can be described in terms of entities that engage in activities (Machamer et al. 2000; Machamer 2004). Entities are the factors (actors, organizations or structures) engaging in activities, whereas the activities are the producers of change or what transmits causal forces or powers through a mechanism. In practical terms, the entities can be defined as nouns, whereas the activities can be depicted as verbs. What the entities and activities more precisely are in conceptual terms depends on the type of causal explanation, along with the level at which the mechanism works and the time span of its operation. The activities that entities engage in move the mechanism from an initial or start condition through different steps to an outcome. Crucially, the theorized steps of a causal mechanism should exhibit some form of productive continuity, meaning that each of the steps logically leads to the next step, with no large logical holes in the causal story linking X and Y together (Machamer et al. 2000: 3).
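To make this vocabulary concrete, the systems-understanding can be sketched as a small data structure. The fragment below is purely our illustrative shorthand, not part of the theory itself: each step pairs an entity (noun) with an activity (verb), and productive continuity holds when what one step produces is exactly what the next step requires.

```python
from dataclasses import dataclass

@dataclass
class MechanismStep:
    entity: str    # the actor, organization or structure that acts (a noun)
    activity: str  # what it does, the producer of change (a verb)
    requires: str  # the condition this step starts from
    product: str   # what this step hands on to the next one

def productive_continuity(steps):
    """True when each step's product matches what the next step requires,
    i.e. there are no large logical holes in the story linking X and Y."""
    return all(a.product == b.requires for a, b in zip(steps, steps[1:]))

# A generic two-step mechanism between a cause and an outcome.
steps = [
    MechanismStep("agent", "gathers information",
                  "delegated powers", "informational advantage"),
    MechanismStep("agent", "exploits information",
                  "informational advantage", "outcome near agent ideal point"),
]
print(productive_continuity(steps))
```

Writing a mechanism out this way makes logical holes visible: a step whose input is never produced by a preceding step shows up immediately as a break in the chain.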

3.2 Benefits of Applying Process-Tracing for Principal–Agent Scholars

The principal–agent model has not yet been theorized as a causal mechanism that leads to agency costs, in a way that exhibits productive continuity between each of its steps. Interestingly, however, the model itself is well suited for explication in mechanistic terms. First, the principal–agent model is a mid-level theory which can easily be translated into empirical observations. Second, rational-choice-inspired theories in general mostly apply a sequential logic for explaining a particular outcome (Beach and Pedersen 2013: 33), which tends to be adopted by some scholars when conceptualizing the principal–agent model. Moreover, Tallberg (2002: 24–25) explicitly stressed the need for principal–agent analysts to focus on “the causal chain” and “the functional logic that links the different stages of the delegation process to each other”, thereby opening the door for a mechanism-based approach. Some principal–agent scholars even use “mechanism” terminology (Kassim and Menon 2003), but do not unpack the mechanisms in their analysis.

In general, applying process-tracing methods has both analytical and theoretical benefits. The analytical value for principal–agent scholars of unpacking causal mechanisms in detail is threefold. First, unpacking mechanisms exposes the causal claim to more logical scrutiny because one cannot just postulate that the outcome is the product of an agent following its own preferences. Second, explicitly theorizing each step of the mechanism obliges the subsequent analysis to study the workings of each step empirically. If evidence is found that each step worked as theorized, then a strong causal inference about the relationship is made possible. If evidence for one or more steps is not found, this should result in a theoretical revision of the mechanism, thereby producing more accurate theories of causal processes. The theoretical value-added of the language of mechanisms for principal–agent theorization is then that by unpacking a causal process, we are better able to identify logical shortcomings in our theories. Such critical links in causal stories are especially valuable to elaborate on, as more logical scrutiny results in better theories, other things equal. This benefit is even more pronounced given the increasingly complex institutional context to which the principal–agent model is applied, as highlighted in the introductory chapter of this edited volume. Put differently, by unpacking the mechanisms linking delegation with agency costs, we develop a better theoretical understanding of how and when agency matters.

Finally, theories at the level of causal mechanisms are typically sensitive to context. The same cause might link to the same outcome through different mechanisms in different contexts, or the same cause might produce a different outcome through different mechanisms in different contexts (Falleti and Lynch 2009; Beach and Pedersen 2016a; Beach and Rohlfing 2016). By tracing mechanisms in particular cases, we can figure out which mechanism was operating in the case and then compare our findings with results from other cases in different contexts to learn more about the impact of scope conditions on the politics of delegation and discretion.

3.3 An Illustration of a Theorized Causal Mechanism

What do theories of causal mechanisms actually look like? Here we illustrate how principal–agent analysts can theorize a causal mechanism. Building upon the above insights and earlier research on the role and influence of the Council Secretariat (Beach 2004, 2009), it is possible to delineate a causal mechanism linking informal delegation of powers relating to drafting proposals (i.e. the causal condition) to agency costs (i.e. the outcome) through the facilitating leadership activities of the Council Secretariat in intergovernmental negotiations within the Council of the EU. Informal delegation occurs when principals do not grant a formal mandate to the agent but instead informally invite it to perform certain tasks.

In the model, the main principals are governments acting in the Council and their agent is the Council Secretariat (See Fig. 1). 3 The theorized mechanism is expected to function in the context of negotiations on Treaty reform taking place in Intergovernmental Conferences (IGCs), but the mechanism might also apply to other (more normal) intergovernmental-style Council meetings such as on CFSP issues. Drafting powers relate to the drafting of treaty texts of particular articles or sets of articles.
Fig. 1  Visualization of the principal–agent relationship

It is expected that this mechanism can be generalized to other secretariats with slight modifications regarding the types of leadership they supply. The mechanism is portrayed in Table 1. The causal mechanism is treated as a middle-range theory, and is expected to be present in the population of cases of EU intergovernmental negotiations when the scope conditions relating to demand for leadership and ability to supply leadership are present.
Table 1  Causal mechanism of agency costs through drafting

Causal condition: Delegation of drafting powers to agent
Step 1, “identifying problems”: Agent identifies zone-of-possible agreement by collecting information on the “problem” and principals’ preferences
Step 2, “crafting proposals”: Using the information gained, agent crafts a proposal (or set of proposals)
Step 3, “tabling proposals”: Agent tables a proposal that is as close as possible to agent preferences, either directly or through a proxy
Step 4, “building support”: Agent manages the process by building support for the proposal, brokering compromises, etc.
Outcome: Agent proposal(s) accepted, yielding increased efficiency and agency costs

While a traditional principal–agent analysis would limit itself to claiming that (and testing whether) delegation of informal drafting powers leads to agency costs through the agent’s information advantages, Table 1 depicts how this can be translated into a linear and sequential causal mechanism composed of four steps. In step 1, the agent identifies a zone-of-possible agreement by gathering information on governmental preferences and their distribution, along with detailed information about the problem under discussion. This information is, in step 2, used by the agent to craft a proposal, or a set of proposals, which is (are) then, in step 3, tabled either by the agent or by a proxy (another actor). In parallel with tabling the proposal, in step 4, we would expect the agent to engage in a range of different activities to secure adoption of the proposal, such as building coalitions supporting it and brokering compromises. Finally, we should expect that due to these actions, the final outcome is skewed closer to what the agent wants. By unpacking the causal process in more detail, we can probe the causal logic, for example asking whether there might be other more plausible causal pathways between informal delegation of drafting powers and agency costs. For example, does the agent have to both draft and build support for its proposals to gain influence? Or might there be other actors that could intervene to build support behind the agent’s draft proposal?

4 Empirical Testing of Mechanisms in Process-Tracing

Once we have theorized a causal mechanism that links a cause (or set of causes) with an outcome in a particular context, how can we study it empirically? Given that we have theorized the steps of the mechanism, we should then empirically assess whether each step operated as we theorized.

4.1 Empirical Fingerprints and the Bayesian Logic

The actual testing of whether the mechanism is present and operated as theorized in a particular case requires translating each of the steps of the theorized causal mechanism into case-specific empirical predictions of what evidence we should find in a case if the mechanism is operating. In other words, after having translated the theorized principal–agent model into a causal mechanism, the analyst should clearly define what he or she expects to find as confirming and/or disconfirming evidence that each part of the relationship (i.e. the outcome, the causal condition(s) and each of the steps of the mechanism) is present in the case under investigation and works as hypothesized. Here it is important to be creative, gaming through a wide range of potential empirical fingerprints that steps of a mechanism might leave in a particular case.
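This a priori specification can be written out step by step. The sketch below is hypothetical: the predicted observables are invented for illustration only and are not drawn from the chapter's empirical material. The point is simply that each theorized step is paired, in advance, with the fingerprints it would leave in the case.

```python
# Hypothetical sketch: pairing each theorized step of the drafting mechanism
# with the confirming evidence ("fingerprints") we would expect to find.
# The observables listed here are invented for illustration only.
fingerprints = {
    "step 1: identifying problems": [
        "internal notes mapping governmental preferences",
        "interviews reporting Secretariat soundings of delegations",
    ],
    "step 2: crafting proposals": [
        "successive Secretariat drafts showing textual choices",
    ],
    "step 3: tabling proposals": [
        "records of who formally tabled the text (agent or proxy)",
    ],
    "step 4: building support": [
        "accounts of brokered compromises around the draft",
    ],
}

# Specifying expectations up front disciplines the empirical analysis:
# for each step we know in advance what would count as confirmation.
for step, expected in fingerprints.items():
    print(step, "->", len(expected), "predicted observable(s)")
```

Disconfirming fingerprints can be listed in exactly the same way, so that the analyst is committed before entering the empirical material to what would count against each step.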

But how can we make causal inferences about mechanisms when we only possess within-case evidence provided by tracing causal processes in a case? In process-tracing, we are not assessing the difference that changes in values of the cause make for values of the outcome across cases. Instead, inferences are made using the correspondence between hypothetical and actual observable manifestations of the operation of mechanisms within a selected case, what can be termed mechanistic, within-case evidence (Beach and Pedersen 2016a; Bennett 2014).

Increasingly, scholars argue that Bayesian logic provides a good logical framework of epistemological tools that enable scholars to ask the right questions when evaluating what collected empirical material can act as evidence of (Bennett 2008, 2014; Beach and Rohlfing 2016; Humphreys and Jacobs 2015; Beach and Pedersen 2013, 2016a). Instead of postulating that we have collected "causal process observations", the core of the Bayesian approach is about providing justifications for why empirical material can act as evidence that can update our confidence in the existence of a causal relationship (here a mechanism binding a cause and an outcome). As such, the amount of updating that new evidence enables is determined by both our prior confidence in the principal–agent model and the evidential weight of the new evidence. 4
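This updating logic can be written compactly in standard Bayesian notation (our formulation; the chapter presents the logic verbally). Let h be the hypothesis that the theorized mechanism is present in the case and e a piece of within-case evidence:

```latex
p(h \mid e) = \frac{p(e \mid h)\, p(h)}{p(e \mid h)\, p(h) + p(e \mid \neg h)\, p(\neg h)}
```

Here p(h) is our prior confidence in the mechanism, and the likelihood ratio p(e|h)/p(e|¬h) captures the evidential weight of e: the larger the ratio, the more actually finding e increases our posterior confidence p(h|e).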

Our prior confidence in a causal hypothesis matters because if we already have a large amount of theoretical and empirical knowledge suggesting that the principal–agent model is valid in a case, only very strong new empirical evidence can further increase our confidence in the theoretical model. However, given that we are making within-case claims in process-tracing, we typically do not have very strong existing research on whether a causal mechanism is operative in a particular case. This means that we usually employ process-tracing case studies as plausibility probes, where even relatively weak evidence of the operation of a mechanism tells us something new.

Central to Bayesian logic is the intense evaluation of what diverse types of empirical material (e.g. expert interviews, meeting records, archival material, speeches) can tell us about the veracity of causal theories. In the application of Bayesian logic to process-tracing, the literature tells us that we should focus on justifying why empirical material can act as evidence by answering two questions: whether we have to find a given piece of empirical material (certainty of evidence), and, if found, whether there are any plausible alternative explanations for finding it (uniqueness of evidence) (Van Evera 1997; Bennett 2014; Rohlfing 2012; Beach and Pedersen 2013, 2016a). It is not enough simply to postulate that we have to find evidence (certainty); instead, we have to provide detailed reasoning for why a given piece of empirical material has to be found in the case; otherwise, the step of the mechanism is disconfirmed. Similarly, uniqueness has to be evaluated rigorously, asking whether finding the evidence could be accounted for by any plausible alternative explanation. 5
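As a rough illustration of how these two questions feed into updating, the sketch below (our own; the function name and probability values are hypothetical, not from the chapter) treats certainty as the probability of finding the evidence if the mechanism step is present, p(e|h), and uniqueness as the improbability of finding it under plausible alternative explanations, p(e|¬h):

```python
# Hypothetical sketch of Bayesian updating with within-case evidence.
# certainty  ~ p(e | h): how likely we are to find the evidence if the
#              step of the mechanism is present in the case.
# uniqueness ~ a low p(e | not-h): how unlikely the evidence is under
#              plausible alternative explanations for finding it.

def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Confidence in the hypothesis after the evidence is actually found."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

# Unique but not certain evidence (a 'smoking gun'): we did not have to
# find it (p(e|h) only 0.3), but alternatives rarely produce it (0.05).
# Finding it therefore updates our confidence strongly.
print(posterior(prior=0.5, p_e_given_h=0.3, p_e_given_not_h=0.05))  # ≈ 0.86
```

Conversely, failing to find highly certain evidence (p(e|h) close to 1) would strongly disconfirm the step, which mirrors the verbal logic of a "hoop test".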

Specifying the principal–agent-based causal mechanism in terms of concrete, observable empirical fingerprints means that data are not gathered randomly in a case study (Beach and Pedersen 2013: 132–143). Instead, potential sources of information have to be selected in a theory-informed manner. While the strongest evidence for the existence of one step of a mechanism might be found by conducting elite interviews, this is not necessarily the case for other steps, whose existence and functioning might be more convincingly confirmed by data from, for instance, meeting records or agendas. A clear, a priori definition of the expected empirical manifestations consequently also implies specifying which sources will be selected and assessing their reliability.

4.2 An Illustration of Empirical Testing

We will now illustrate what empirical testing of a mechanism might look like using the example of the theorized informal delegation of powers to the Council Secretariat that was already elaborated above.

When investigating cases of informal delegation of drafting powers to the Council Secretariat in intergovernmental negotiations on treaty change, what observable implications should we expect to find? We will detail this for the causal condition starting the mechanism (i.e. the actual delegation of drafting powers), for each step of the mechanism, and for the outcome (i.e. a more efficient deal, but one with measurable agency costs). Regarding the causal condition, the delegation of drafting functions is defined as informal, meaning that delegation takes place on the basis of an unwritten contract (i.e. member states, including the Council Presidency, informally invite the Council Secretariat to prepare proposals for the negotiations). This informality implies that we have to rely primarily on participant accounts to determine the functions actually delegated to the Council Secretariat. Evidence here can come in the form of insiders from the Council Presidency or Council Secretariat telling us in interviews that the Secretariat was invited to draft proposals for revisions of treaty texts. However, this evidence would not be certain, because officials from both the Presidency and the Secretariat might have motives for not telling us about informal delegation, with, for example, Presidency officials trying to downplay the degree to which they were dependent on assistance from the Secretariat. We could strengthen our confidence in the findings by drawing on other sources, such as national officials who relate that the Secretariat told them it was drafting a particular text.

If step 1 of the theorized causal mechanism (problem identification, see Table 1) exists in a case, we should expect to find fingerprints in the empirical record of Council Secretariat activities relating to collecting information about what governments wanted, along with activities relating to understanding the problems that the principals wanted solved. Evidence for this step can be found by asking participants whether the Council Secretariat sat at the centre of a hub of informal communication between the policy-makers involved. Yet there could be many reasons why participants distort the truth. Stronger evidence could come in the form of a systematic mapping of channels of communication through interviews or questionnaires, asking a large set of participants a question like: "In your communication with other delegations and institutions, who were the three contacts you communicated with most?". If participants from governments systematically identified the Secretariat as a top contact, this would be quite unique evidence of the hub pattern of communication, because these officials would have few incentives to overplay the Secretariat's role. However, they might also have incentives to downplay their own reliance on the Secretariat, meaning that the evidence would not be very certain. We might expect to find other fingerprints, such as notes produced by the Secretariat that detail member state preferences, for example in "Notes for the Presidency" type documents. This would again be unique (although such a document might use information collected by the rotating Presidency), but it would not be very certain, because the Secretariat might be collecting information without producing documents. Additionally, we might not have access to the empirical record that would enable us to assess whether these notes were actually produced.

Step 2 (crafting proposals) should leave more concrete fingerprints, such as preparatory documents and actual proposals drafted by the Council Secretariat. These can take the form of "non-papers" and other documents intended for internal use in the negotiations, or publicly available proposals that were drafted by the Council Secretariat. Who actually drafted a proposal can nonetheless be difficult to determine, given that the rotating Presidency will usually have formal responsibility for the task and will often have incentives to claim that it drafted all of the text. There have been a few notable exceptions, where the Secretariat tabled a proposal under its own name, but this is very rare.

Increasing the difficulty for principal–agent analysts, the Council Secretariat will often have incentives to downplay its own role in drafting texts. Yet there are also physical forms of evidence that can be found, e.g. draft "non-papers" produced by the Secretariat, or the internal files of the Council Secretariat that show the progressive drafts of a proposal, although even with this evidence one cannot rule out that the content was significantly influenced by external input from the Presidency or other actors. We can gradually update our confidence in the existence of this step of the mechanism in the case at hand by combining insights from various sources, thereby excluding alternative explanations for finding this evidence. Finding physical records in the form of progressive drafts of proposals can document the drafting efforts of the Secretariat in a way that would be quite unique evidence. But this evidence is not very certain, given that the Secretariat might be drafting a text that the Presidency claims as its own. Additionally, there can be access issues that prevent us from assessing whether the predicted evidence even exists.

Step 3 (tabling proposals) can be easily observed if the Council Secretariat tables the proposal itself, but this is relatively rare. In most instances, the Council Secretariat either lets the Presidency formally table the proposal or uses a like-minded government to table it. To establish "paternity", the information about drafting (step 2) can be coupled with a detailed analysis of the sequence of events leading up to the proposal being tabled. Here evidence could be a detailed chronology tracing the progress of revisions in a particular issue area. What draft was tabled when, and by whom? Can we identify, through interviews with participants, junctures where the proposed text on a particular issue changed significantly after the Secretariat drafted a text (step 2)? While this is not very conclusive or unique evidence, it can at least make it plausible that the step existed. In addition, if a government tabled a proposal, evidence might again be gathered by coupling interviews with Council Secretariat officials with interviews with cabinet officials from the relevant national ministry, checking whether there was any intentional influence from the Council Secretariat in the government's decision to table the proposal.

Finally, step 4 (building support) investigates the activities of the Council Secretariat in managing the negotiating process in the dossier. This can involve agenda management, attempting to ensure that the issue is negotiated in a forum that is conducive to ensuring both efficiency and what the Council Secretariat wants. Finding in interviews that the Secretariat asked the Presidency to change negotiating forum (for example by moving it to lower, more technical levels) would be strong confirming evidence, whereas it is not certain that we have to find this because actors would have incentives to downplay Secretariat influence.

Further activities involve building support for the proposal through lobbying and coalition-building, and brokering compromises. Measuring these activities requires detailed tracing of what took place in the negotiations, usually relying on participant accounts and tracing whether, and with whom, meetings took place to discuss the draft proposal. Individually, each recollection in an interview would be quite weak evidence, but if we find similar recollections across a range of actors with differing motivations, we can treat them as stronger confirming evidence. Finally, other evidence might include confirmation that Council Secretariat officials tried to persuade ("lobby") national officials behind the scenes or even distributed briefing notes to create a common standpoint. This would not be very certain, because these types of overt lobbying activities are relatively rare, and even if they took place, we typically would not have access to the empirical record that would enable us to assess whether they were present.

The outcome is the proposal as accepted by governments. What we are interested in measuring is how efficient the outcome is (Did leadership result in fewer gains left on the table?) and the magnitude of agency costs. Given that we cannot rerun a negotiation without agent leadership in order to measure the difference that it made, it can be very difficult to disentangle agency costs from increased efficiency, and more generally from what would have happened regardless of the intervention by the agent. However, by tracing the activities of different actors in the process, and comparing the state-of-play prior to and after the agent’s interventions, along with asking participants whether there were any noticeable gains left on the table, we can make cautious claims about the impact that agent interventions had. Moreover, as the hypothesized causal mechanism has attempted to capture in theoretical terms the productive continuity between each of the steps, we can be relatively confident in the outcome being the result of the agent’s actions, or in more cautious terms, at least more confident that our empirical evidence tells us more about causal relationships than when building on correlational research.

Crucially, while each individual piece of evidence for the operation of each step of the theorized mechanism might be quite weak, if they are independent of each other, their disconfirming or confirming power can be summed together, enabling us to make overall conclusions about the direction in which the weight of evidence points (for more, see Beach and Pedersen 2016b).
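The aggregation logic described here can be sketched as follows (our illustration; the function name and numbers are hypothetical, not from the chapter): for independent pieces of evidence, the likelihood ratios multiply, which is equivalent to summing their confirming or disconfirming weights on a log-odds scale:

```python
import math

def combined_posterior(prior: float, likelihood_ratios: list[float]) -> float:
    """Aggregate independent pieces of evidence for a hypothesis.

    Each likelihood ratio is p(e|h) / p(e|not-h) for one piece of
    evidence; ratios above 1 confirm, ratios below 1 disconfirm.
    """
    log_odds = math.log(prior / (1.0 - prior))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)  # independent weights sum in log-odds
    odds = math.exp(log_odds)
    return odds / (1.0 + odds)

# Four individually weak pieces of confirming evidence (each only 2:1)
# jointly yield 16:1 odds, i.e. fairly strong overall confirmation.
print(combined_posterior(0.5, [2.0, 2.0, 2.0, 2.0]))  # ≈ 0.94
```

The design point the chapter makes is visible here: no single fingerprint needs to be decisive, so long as the pieces are genuinely independent, since their weights accumulate in one direction or the other.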

5 Limitations of Process-Tracing

We do not claim that process-tracing is a panacea to all ills. Although we have provided a strong claim for the use of process-tracing in future principal–agent analysis, principal–agent analysts should also be aware of the method’s limitations (and how to overcome them).

Process-tracing is first and foremost a within-case methodology. As it is mostly used to study single cases, it does not tell us whether the found (or not found) causal mechanism is present in other cases (Beach and Pedersen 2016b). Indeed, process-tracing is a method where “a lot can be said about a little”. This can make it difficult to make evidence-based claims about broader trends of delegation and agency across cases, especially across different contexts because of the sensitivity of mechanisms to context. However, the (positive) flip-side of the coin is that process-tracing does allow more elaborate conclusions on causality in the case at hand.

Additionally, it can be theoretically very difficult and time-consuming to theorize causal mechanisms between each particular type of delegated power and potential delegation costs. Process-tracing has for that reason most often been applied to explain remarkable or historical outcomes. Yet this does not mean that it is only suitable for that kind of analysis. Ideally, principal–agent scholars should nest their process-tracing study within a broader comparative case study design. Combination with ontologically similar methods, such as qualitative comparative analysis (QCA), which likewise applies a deterministic view of causality, is one way to do so (Beach and Rohlfing 2016). A preceding medium-N QCA study, such as the one conducted by Delreux (2011), would allow the analyst to order cases based upon the presence or absence of the hypothesized causal conditions and outcomes, and to subsequently select cases to which process-tracing analysis can be applied to gain deeper insights into the actual causal process.

Moreover, by using process-tracing designs, principal–agent scholars also expose themselves to two types of critique. On the theoretical level, a decent process-tracing account compels the analyst to theorize the causal mechanism linking causes and outcomes in more detail, which opens one's claims to closer logical scrutiny. Empirically, the requirement of translating the mechanism into expected empirical manifestations raises the standard for good empirical evidence. This more often brings us into a situation where we do not have access to the type of good empirical evidence that would enable us to update our confidence in a relationship actually being present in a given case.

6 Conclusion

We started this chapter with the observation that although a considerable amount of scholarship has been devoted to identifying the determinants of agency costs, the actual causal process that leads to agents exploiting their delegated powers remains a theoretical and analytical black box. As a result, the current state of the art on the consequences of delegation is strongly skewed towards probabilistic accounts that make co-variational types of claims. Empirically, this means that principal–agent analysts tend to check if the model’s core factors are present instead of analysing how they actually contribute to autonomous agency behaviour. Moreover, principal–agent analyses often take the form of comparing preferences of principals and agents with realized policy outcomes, a tendency particularly present among EU scholars. While definitely worthwhile for identifying potential determinants of agency costs, such an approach does not allow making strong evidence-based causal claims.

A more explicit focus on tracing causal mechanisms would allow principal–agent analysts to open up this black box of causality. In particular, making use of process-tracing as a methodological tool offers a chance to improve both the theoretical and empirical state of the art. Throughout this chapter, we have identified the core aspects of process-tracing methodology and illustrated how it can be applied to principal–agent analysis. This proposed ideal type of process-tracing can easily be translated into a set of concrete guidelines or best practices for principal–agent scholars interested in making stronger causal claims about the nature of agency costs and in minimizing the risk of methodological challenges such as observational equivalence.

First and foremost, principal–agent analysis should step away from theorizing the model in probabilistic terms. In order to really uncover the causal relationship between delegation and agency costs, principal–agent scholars should instead apply deterministic reasoning, which is more compatible with designs that use case studies as the analytical point of departure, because this is the level at which causal relationships actually play out (Mahoney 2008). Second, this implies moving away from an input–output focus by translating the existing principal–agent model into a causal mechanism. In practice, that takes the form of unpacking the model into a system of interlocking steps that transmit causal powers from the act of delegation to the occurrence of agency costs in a way that ensures as much productive continuity in our causal "story" as possible. Third, to test the theorized causal mechanism, the analyst should specify, for each of the mechanism's constituent steps as well as for the causal condition and outcome, what the expected empirical fingerprints are. That also includes indicating through which means the required data can best be gathered. Fourth, relying upon within-case evidence comes with inferential limitations. The principal–agent analyst should therefore ideally build upon existing knowledge when specifying the prior confidence in the existence of the hypothesized mechanism. Fifth, the strength of the collected evidence can be evaluated using Bayesian reasoning. Put differently, the principal–agent analyst should consider whether the collected evidence updates the existing (prior) confidence in, on the one hand, the presence of each step of the mechanism and, on the other hand, the mechanism working as hypothesized.

We have argued that process-tracing, with all its limitations, nevertheless allows principal–agent scholars to not only draw better conclusions on the underlying theoretical logic of the model, but also to provide stronger evidence of how delegation can result in agency costs. In that sense, it offers an opportunity for updating the principal–agent model as it currently stands, as better analysis is logically connected to better theorization. Opening up the black box of causality in other words benefits both principal–agent scholars’ empirical and theoretical claims.

Notes

  1. In this chapter, we focus on the later phase of the model, which Delreux and Adriaensen (this volume) call "the politics of discretion", but our claims can be extended to the analysis of the causes of delegation (i.e. "the politics of delegation") as well.

  2. There are some exceptions. For instance, Cortell and Peterson (2006: 258) assert that "discretion is a necessary but not sufficient condition for slack". However, the causal mechanism linking cause (agency discretion) and outcome (agency slack) still remains in the theoretical black box.

  3. In general terms, we treat the member states in the Council of the EU as the main principal. It should be noted, however, that in concrete acts of delegation the Council Presidency becomes an intermediate principal to the agent (i.e. the Council Secretariat).

  4. For a longer, more technical introduction to Bayesian logic, see Bennett (2014).

  5. A common misunderstanding of uniqueness is that we are talking about alternative explanations of the outcome. However, any non-trivial social science outcome always has multiple causes (Rohlfing 2014). What we want to evaluate in Bayesian logic as applied to process-tracing is whether there are alternative explanations for finding the particular piece of evidence itself.

References

  1. Beach, D. (2004). The dynamics of European integration: Why and when EU institutions matter. Houndsmills: Palgrave Macmillan.
  2. Beach, D. (2005). Why governments comply: An integrative compliance model that bridges the gap between instrumental and normative models of compliance. Journal of European Public Policy, 12(1), 113–142.
  3. Beach, D. (2009). Leadership and intergovernmental negotiations in the European Union. In M. Egan, N. Nugent, & W. Paterson (Eds.), Research agendas in EU studies: Stalking the elephant (pp. 92–116). Houndsmills: Palgrave Macmillan.
  4. Beach, D., & Pedersen, R. (2013). Process-tracing methods: Foundations and guidelines. Ann Arbor: The University of Michigan Press.
  5. Beach, D., & Pedersen, R. (2016a). Selecting appropriate cases when tracing causal mechanisms. Sociological Methods & Research. doi:10.1177/0049124115622510.
  6. Beach, D., & Pedersen, R. (2016b). Causal case study methods: Foundations and guidelines for comparing, matching and tracing. Ann Arbor: The University of Michigan Press.
  7. Beach, D., & Rohlfing, I. (2016). Integrating cross-case analyses and process tracing in set-theoretic research: Strategies and parameters of debate. Sociological Methods & Research. doi:10.1177/0049124115613780.
  8. Bennett, A. (2008). Process tracing: A Bayesian perspective. In J. Box-Steffensmeier, H. Brady, & D. Collier (Eds.), The Oxford handbook of political methodology (pp. 702–721). Oxford: Oxford University Press.
  9. Bennett, A. (2014). Appendix: Disciplining our conjectures: Systematizing process tracing with Bayesian analysis. In A. Bennett & J. Checkel (Eds.), Process tracing: From metaphor to analytic tool (pp. 276–298). Cambridge: Cambridge University Press.
  10. Bennett, A., & Checkel, J. (Eds.). (2014). Process tracing: From metaphor to analytic tool. Cambridge: Cambridge University Press.
  11. Bhaskar, R. (1978). A realist theory of science. Brighton: Harvester.
  12. Brady, H. (2008). Causation and explanation in social science. In J. Box-Steffensmeier, H. Brady, & D. Collier (Eds.), The Oxford handbook of political methodology (pp. 217–270). Oxford: Oxford University Press.
  13. Bradley, C., & Kelley, J. (2008). The concept of international delegation. Law and Contemporary Problems, 71(1), 1–36.
  14. Bunge, M. (1997). Mechanism and explanation. Philosophy of the Social Sciences, 27(4), 410–465.
  15. Bunge, M. (2004). How does it work? The search for explanatory mechanisms. Philosophy of the Social Sciences, 34(2), 182–210.
  16. Conceição-Heldt, E. (2013). Do agents 'run amok'? A comparison of agency slack in the EU and US trade policy in the Doha Round. Journal of Comparative Policy Analysis, 15(1), 21–36.
  17. Damro, C. (2007). EU delegation and agency in international trade negotiations: A cautionary comparison. Journal of Common Market Studies, 45(4), 883–903.
  18. Delreux, T. (2011). The EU as an environmental negotiator. Surrey: Ashgate.
  19. Delreux, T. (2013). Examining EU external action through the lens of principal–agent theory. Paper presented at the 8th Pan-European Conference on International Relations, Warsaw.
  20. Delreux, T., & Kerremans, B. (2010). How agents weaken their principals' incentives to control: The case of EU negotiators and EU member states in multilateral negotiations. Journal of European Integration, 32(4), 357–374.
  21. Delreux, T., & Adriaensen, J. (2017). Introduction. Use and limitations of the principal–agent model in studying the European Union. In T. Delreux & J. Adriaensen (Eds.), The principal–agent model and the European Union (pp. 1–34). London: Palgrave Macmillan.
  22. Dijkstra, H. (2010). Explaining variation in the role of the EU Council Secretariat in first and second pillar policy-making. Journal of European Public Policy, 17(4), 527–554.
  23. Dijkstra, H. (2014). Information in EU security and defence. In T. Blom & S. Vanhoonacker (Eds.), The politics of information: The case of the European Union (pp. 229–241). Basingstoke: Palgrave.
  24. Elsig, M. (2007). The EU's choice of regulatory venues for trade negotiations: A tale of agency power? Journal of Common Market Studies, 45(4), 927–948.
  25. Elsig, M. (2010). Principal–agent theory and the World Trade Organization: Complex agency and 'missing delegation'. European Journal of International Relations, 17(3), 495–517.
  26. Epstein, D., & O'Halloran, S. (1999). Delegating powers: A transaction cost politics approach to policy making under separate powers. New York: Cambridge University Press.
  27. Epstein, D., & O'Halloran, S. (2008). Sovereignty and delegation in international organizations. Law and Contemporary Problems, 71(1), 77–92.
  28. Falleti, T., & Lynch, J. (2009). Context and causal mechanisms in political analysis. Comparative Political Studies, 42(9), 1143–1166.
  29. Ferejohn, J., & Shipan, C. (1989). Congressional influence on administrative agencies: A case study of telecommunications policy. In L. Dodd & B. Oppenheimer (Eds.), Congress reconsidered (4th ed., pp. 393–410). Washington, DC: Congressional Quarterly Press.
  30. Ferejohn, J., & Shipan, C. (1990). Congressional influence on bureaucracy. Journal of Law, Economics, & Organization, 6(special issue), 1–20.
  31. Franchino, F. (2000a). Control of the Commission's executive functions: Uncertainty, conflict and decision rules. European Union Politics, 1(1), 63–92.
  32. Franchino, F. (2000b). The Commission's executive discretion, information and comitology. Journal of Theoretical Politics, 12(2), 155–181.
  33. Franchino, F. (2005). The powers of the union: Delegation in the EU. Cambridge: Cambridge University Press.
  34. Furness, M. (2013). Who controls the European External Action Service? Agent autonomy in EU external policy. European Foreign Affairs Review, 18(1), 103–126.
  35. Gerring, J. (2007). The mechanismic worldview: Thinking inside the box. British Journal of Political Science, 38(1), 161–179.
  36. Gerring, J. (2010). Causal mechanisms: Yes, but…. Comparative Political Studies, 43(2), 1499–1526.
  37. Glennan, S. (1996). Mechanisms and the nature of causation. Erkenntnis, 44(1), 49–71.
  38. Glennan, S. (2002). Rethinking mechanistic explanation. Philosophy of Science, 69(S3), 342–353.
  39. Hawkins, D., & Jacoby, W. (2006). How agents matter. In D. Hawkins, D. Lake, D. Nielson, & M. Tierney (Eds.), Delegation and agency in international organizations (pp. 199–228). Cambridge: Cambridge University Press.
  40. Hedström, P., & Ylikoski, P. (2010). Causal mechanisms in the social sciences. Annual Review of Sociology, 36, 49–67.
  41. Humphreys, M., & Jacobs, A. (2015). Mixing methods: A Bayesian approach. American Political Science Review, 109(4), 653–673.
  42. Kassim, H., & Menon, A. (2003). The principal–agent approach and the study of the European Union: Promise unfulfilled? Journal of European Public Policy, 10(1), 121–139.
  43. Kerremans, B. (2006). Pro-active policy entrepreneur or risk minimizer? A principal–agent interpretation of the EU's role in the WTO. In O. Elgström & M. Smith (Eds.), The European Union's roles in international politics (pp. 172–188). Oxford: Routledge.
  44. King, G., Keohane, R., & Verba, S. (1994). Designing social inquiry: Scientific inference in qualitative research. Princeton: Princeton University Press.
  45. Kroll, D. (2017). Manifest and latent control on the Council by the European Council. In T. Delreux & J. Adriaensen (Eds.), The principal–agent model and the European Union (pp. 157–180). London: Palgrave Macmillan.
  46. Machamer, P. (2004). Activities and causation: The metaphysics and epistemology of mechanisms. International Studies in the Philosophy of Science, 18(1), 27–39.
  47. Machamer, P., Darden, L., & Craver, C. (2000). Thinking about mechanisms. Philosophy of Science, 67(1), 1–25.
  48. Mahoney, J. (2008). Toward a unified theory of causality. Comparative Political Studies, 41(4–5), 412–436.
  49. Mayntz, R. (2004). Mechanisms in the analysis of social macro-phenomena. Philosophy of the Social Sciences, 34(2), 237–259.
  50. Niemann, A., & Huigens, J. (2011). The European Union's role in the G8: A principal–agent perspective. Journal of European Public Policy, 18(3), 420–442.
  51. Pollack, M. (1999). Delegation, agency and agenda setting in the Treaty of Amsterdam. European Integration online Papers, 3(6).
  52. Pollack, M. (2002). Learning from the Americanists (again): Theory and method in the study of delegation. West European Politics, 25(1), 200–219.
  53. Pollack, M. (2003). The engines of European integration: Delegation, agency and agenda-setting in the EU. New York: Oxford University Press.
  54. Rohlfing, I. (2012). Case studies and causal inference: An integrative framework. Houndsmills: Palgrave Macmillan.
  55. Rohlfing, I. (2014). Comparative hypothesis testing via process tracing. Sociological Methods & Research, 43(4), 606–642.
  56. Stone Sweet, A., & Thatcher, M. (2002). Theory and practice of delegation to non-majoritarian institutions. West European Politics, 25(1), 1–22.
  57. Tallberg, J. (2002). Delegation to supranational institutions: Why, how and with what consequences? West European Politics, 25(1), 23–46.
  58. Tallberg, J. (2006). Leadership and negotiation in the European Union. Cambridge: Cambridge University Press.
  59. Van Evera, S. (1997). Guide to methods for students of political science. Ithaca: Cornell University Press.
  60. Waldner, D. (2012). Process tracing and causal mechanisms. In H. Kincaid (Ed.), The Oxford handbook of the philosophy of social science (pp. 65–84). Oxford: Oxford University Press.
  61. Waterman, R., & Meier, K. (1998). Principal–agent models: An expansion? Journal of Public Administration Research and Theory, 8(2), 173–202.
  62. Weller, N., & Barnes, J. (2015). Finding pathways: Mixed-method research for studying causal mechanisms. Cambridge: Cambridge University Press.
  63. Weingast, B., & Moran, M. (1983). Bureaucratic discretion or congressional control? Regulatory policymaking by the Federal Trade Commission. Journal of Political Economy, 91(5), 765–800.

Copyright information

© The Author(s) 2017

Authors and Affiliations

  • Yf Reykers, KU Leuven, Leuven, Belgium
  • Derek Beach, University of Aarhus, Aarhus, Denmark
