The better toolbox: experimental methodology in economics and psychology

In experimental economics one confronts a "don't!", as in "do not deceive your participants!", as well as a "do!", as in "incentivize choice making!". Neither exists in experimental psychology. Further controversies concern data collection methods, e.g., the strategy (vector) method in game experiments, and how to guarantee external and internal validity by describing experimental scenarios either through field-related vignettes or through abstract, often formal, rules as used in decision and game theory. We emphasize that the differences between the experimental methodologies of the two disciplines are minor rather than substantial and suggest that such differences should be resolved, as much as possible, through empirical research. Rather than focusing on familiar debates, we suggest substituting the revealed-motive approach in experimental economics with designs whose data inform not only about choices but also about the dynamics of reasoning.


Introduction
Experimental research in economics differs from experimental research in cognitive, social and economic psychology, in part because psychology studies human behavior in its entirety, while economics concentrates on decision making. More specifically, economics concentrates on conscious decision making that takes place in economic scenarios such as markets, industrial firms and so on, and that focuses on money-related concerns. We understand conscious decision making as consequentialist choice made via forward-looking deliberation. In turn, by forward-looking deliberation we mean the process by which one tries to predict both the consequences of one's own choice options and the circumstances beyond one's own control (other agents' relevant choices, random events and so on), with the aim of selecting an option with desirable consequences.
The difference in focus leads to distinctive characteristics of the experimental methodology. For instance, due to their focus on consequentialist forward-looking deliberation, experimental economists usually disregard unconscious or purely emotional behavior,1 in which psychologists are, on the other hand, much interested. Moreover, as conscious deliberation is conceived of as an individually based mental activity (even in brainstorming circumstances), forward-looking deliberation falls into the scope of methodological individualism. Thus, in experimental economics, even when the topic of investigation is group behavior (typically, the same as in social psychology), economists attempt to explain group behavior in terms of the results of individual decisions. Despite such differences, however, there is a profound commonality of interests, objectives and views between the two disciplines. Thus, rather than discussing why experimental psychology and economics do not fully overlap, we limit ourselves to analyzing experimental methodology concerning the common interests of the two disciplines, namely decision making by forward-looking deliberation, and how to go about closing the gap in the type of experimental data collected for such research.
The methodological differences between experimental psychology and economics may be the result of vagaries in the history of both disciplines. In fact, experimental psychology rests on a much longer tradition than experimental economics does.2 This might have induced experimental economists to claim "But we do it differently!" in order to differentiate their research from merely importing experimental psychology into economics. Whether that was in fact the case (after all, it is not bad to learn from other disciplines) or whether instead a familiar empirical methodology such as experimental research was re-evaluated, remarkable differences between the experimental methodology in economics and psychology have been claimed (see Camerer 1996; Hertwig and Ortmann 2001, 2008, and, for an encompassing philosophical perspective on methodological issues, Guala 2005). However, as when comparing religions, such appraisals have often neglected the huge overlap in research topics and empirical methods3 while emphatically pointing out minor differences. We take the opposite stance, looking at thematic overlaps of experimental economics and psychology while addressing in Sect. 2 the dilemma of whether economists wanted to experiment differently or were attempting to reinvent the wheel.

1 So-called System 1, in the distinction popularized by Kahneman (2011).
2 For a review of the early results in the field of experimental economics, see Roth (1995). This claim is true also for other subdisciplines of psychology, e.g., economic psychology (the International Association for Research in Economic Psychology and its flagship Journal of Economic Psychology were founded only in 1982).
3 By learning from all experimental research, partly elicited by more or less field-specific elicitation methods (mouse lab, eye tracking, brain scanning, physiological measures, etc.), and by using all paradigms, such as portfolio choice and intertemporal allocation tasks, as well as social and strategic interaction tasks such as social dilemmas, (reward allocation) dictator games, market games, etc.
In the humanities, including in social science, one can distinguish description and prescription. Regarding the latter, for instance, philosophers often discuss ideals (ethical or even epistemic principles) to which we might want to adhere irrespective of whether they are in line with our cognitive limitations and emotional system. So, it is assumed that one can be guided by ideals even when one will hardly ever be able to comply with them.4 However, at least in practical philosophy, what "ought to be" should be feasible.5 This implies that one has to understand human behavior, even when trying to redirect it, e.g., when assuming a behaviorally informed prescriptive attitude. Proper scientific understanding of human behavior is guided by empirical observations confirming its underlying hypotheses. Sound descriptions thus presuppose empirically confirmed theoretical hypotheses, namely nomological knowledge. Yet, all too often nomological knowledge is neglected. For example, it is a questionable attitude of economic advisors and advisory boards alike to offer predictions about future events even when such predictions are not based on any available empirically validated hypotheses. Too often such predictions rest merely on personal viewpoints which, however convincing they might be, do not rely on validated theoretical background hypotheses.6 On the other hand, advice based on models presents the advantage of allowing for more systematic speculations (for instance, if based on available resources), especially in relation to market models. Model-theory-based advice is less biased by idiosyncratic opinions.
Thus, collecting nomological knowledge is a crucial step for properly studying human behavior. One possible way to accumulate nomological knowledge empirically is by relying on field data. That is, one could try to confirm hypotheses in light of more or less suitable field observations, especially when external validity is an issue. Notice, however, that data based on simple observations regarding which choices were made may not be sufficient for researchers who are attempting not only to observe and study choice behavior but also to assess the reasoning and the emotional aspects that are often effective drivers of choice behavior. Investigating actual reasoning and emotional aspects, thus, may involve, besides observing which decisions were made, also interviewing the decision maker. Think, for instance, of committees needing to come to a majority or even to a unanimous decision.
A second possible avenue to acquire nomological knowledge is to design stylized experimental scenarios generating data suitable for rejecting or confirming the hypotheses in which one is interested. Such designs allow for high levels of abstraction and need not be conceived with a specific field situation in mind. An example of such designs are experimental scenarios through which one purports to test some fundamental aspect of human decision making, abstracted from specific contingencies that might arise in the field.
In the humanities, the tradition of experimentally accumulating nomological knowledge was initiated in (social and cognitive) psychology. Only later did the experimental approach become quite popular in economics as well (see Roth 1995). At present, there exists an experimental tradition in nearly all humanities, and it is worth noting how, in recent years, experimental designs have become more and more often terminologically and conceptually based on decision and game theory. Despite the close connection between game theory and economics (see von Neumann and Morgenstern 1947), the rigorously defined modelling tools and solution concepts offered by decision and game theory were often viewed as applicable beyond economic scenarios. Taking advantage of this wide applicability has made possible or enhanced a lively interdisciplinary exchange among all humanities. This mutual exchange has, however, overlooked that several crucial rationality requirements of game theory are not in line with human cognition and psychology, such as interpersonal payoff aggregation (that is, ascribing numerical probabilities to all possible constellations of circumstances beyond one's control, originated, for instance, by other agents' choices, and then, in the case of individual decision makers, aggregating interpersonally the probability-weighted implications of choice options in order to select one among them), iterated elimination of weakly dominated strategies, common knowledge of rationality, etc. Although new traditions of experimental research are prospering in the different social sciences, so far the methodology of experimental research has been mainly developed in psychology and largely accepted, but also sometimes challenged, by experimental economists.
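One of these requirements, iterated elimination of weakly dominated strategies, can be made concrete with a short sketch. The code below is our own illustration (the bimatrix representation and function name are assumptions, not taken from the text); note that the set of survivors can depend on the order of elimination, one reason the concept sits uneasily with actual human reasoning.

```python
def iterated_elimination(A, B):
    """Iteratively remove weakly dominated pure strategies in a
    two-player game. A[r][c] is the row player's payoff, B[r][c]
    the column player's. Returns the surviving row and column
    strategies; the result can depend on the elimination order."""
    rows = list(range(len(A)))
    cols = list(range(len(A[0])))
    changed = True
    while changed:
        changed = False
        for r in list(rows):  # a row is weakly dominated if another row
            for r2 in rows:   # is never worse and sometimes strictly better
                if r2 != r and \
                        all(A[r2][c] >= A[r][c] for c in cols) and \
                        any(A[r2][c] > A[r][c] for c in cols):
                    rows.remove(r)
                    changed = True
                    break
        for c in list(cols):  # symmetrically for columns
            for c2 in cols:
                if c2 != c and \
                        all(B[r][c2] >= B[r][c] for r in rows) and \
                        any(B[r][c2] > B[r][c] for r in rows):
                    cols.remove(c)
                    changed = True
                    break
    return rows, cols
```

For instance, with A = [[1, 1], [0, 0]] and B = [[1, 0], [1, 0]], the second row and then the second column are eliminated, leaving only the first strategy for each player.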
Describing the huge overlap existing in the experimental research of economists and psychologists would require a history of laboratory research. Rather, we are interested in why the rare differences between experimental economists and psychologists get so often stressed and controversially discussed, and why such a debate too often remains a conceptual one rather than one discussed and settled on the basis of nomological knowledge. Especially testing empirical claims about field-like complex decision making could be done more easily with field data. Capturing the richness of field environments in the lab would likely cognitively overburden the usually inexperienced lab participants, whereas experts are more or less fully aware of it. All one can try to induce experimentally are approximations of field environments. We appeal to this debate in Sect. 3, where we discuss when and whether experimenters should deceive participants and how this practice can be circumvented. Section 4 returns to the issue of theoretical "unity" as opposed to "too much variety" by analyzing the issue of risk aversion. After arguing in Sect. 5 that there cannot be too much rigor, we indicate in Sect. 6 how the dynamics of decision making, which should be interesting to economists as well, can be experimentally explored. Section 7 concludes.

2 Have economists wanted to reinvent the wheel?
During the initial phase of experimental economics, one often encountered psychologists offering a large variety of concepts such as equity, cognitive dissonance, prospect theory, and so on, whereas economists, actually neoclassical economists, put forth just one: rationality. The issue is that what (neoclassical) economics brings to the table is a language built in terms of preference relations, probabilistic beliefs, their Bayesian updating, and its notion of rationality. A language can be a very apt framework for making predictions but, per se, does not allow one to predict much. In fact, one can apply neoclassical jargon to tautologically explain nearly all reasonably consistent choice behavior. On the other hand, behavioral economics mainly attempts to fit preferences and beliefs, and possibly their dynamics, in ways which render observed choices consistent with the theory (see, for instance, prospect theory in the tradition of Kahneman and Tversky 1979). While offering a language is a great achievement in terms of expressivity and flexibility, it is much less fruitful, in and of itself, in terms of offering predictive content.7
Another innovation brought about by experimental economics is its initial requirement to provide a solution benchmark, usually based only on the experimentally induced monetary incentives.8 For scholars open to this kind of methodological duality, this is a good way to offer a focal prediction to which actual behavior can be compared without claiming any empirical validity for it and possibly even dismissing the need to test it. Instead, for those who still accept the rationality assumption, the revealed-motive approach allows one to vary the decision task or game model in ways that align benchmark solutions and actually observed behavior. Nonetheless, it is now more broadly understood that what gets experimentally induced in the lab by describing choice sets, (monetary) payoff functions, random events, decision process and information conditions is indeed a game form,9 while nothing is observed about other motives that might be quite relevant to participants' decision making, such as shame, guilt, honor, reciprocity obligations and so on. Such motives can be imported into the decision-making process in uncontrolled ways, and participants may even misperceive or neglect other motives. In this vein, Pull (2003), but see also Selten (1960), has argued that the modal 50:50 sharing in many ultimatum experiments is due to participants neglecting the sequentiality of ultimatum bargaining. Inducing just game forms rather than full-fledged games also renders solution benchmarks problematic, as it requires, in order for participants to reason about the game, strong common-knowledge assumptions that are questionable if not plainly impossible to realize. Given these considerations, it is probably a good thing that experiments appear in the literature that receive quite a lot of attention (see, e.g., Dana et al. 2007) despite not inducing a well-defined game.10 In addition to introducing a precise, regimented language for reasoning about decision making, and solution benchmarks along with the issues they raise in terms of analyzing game behavior, economists have introduced another element of novelty in their experimental methodology, i.e., ruling out the deception of participants.

7 We acknowledge the considerable influence of prospect theory but also maintain that weakening or even substituting some rationality requirements while sticking to others can lead us astray. Especially when experimentally inducing commonly known individual risk attitudes via binary-lottery incentives, prospect theory still seems to be in a premature stage. Would one rely only on idiosyncratic probability transformation when only the likelihood of the experimentally induced possible loss is rather continuously affected by one's own and others' choice making?
8 There are limits to inducing monetary incentives, not only by crowding out or in intrinsic motivations in social and strategic decision tasks. We definitely favor Simon's approach to procedural, bounded-rationality theory.
9 If we abuse terminology slightly, allowing also for one-person games.
Has this brought about a new way of experimental research? This is what we try to answer next.

The taboo of deceiving participants
The issue of experimenter deception (see Ortmann and Hertwig 2002) is due to psychologists (in particular, social psychologists) allowing deception and requiring only debriefing: after the experiment, participants are informed about how they have been deceived. Experimental economists ban deception tout court on the basis of the argument that a reputation of never deceiving participants is a very worthy public good in the experimental lab. Progress, here, could mean that the controversy can now be discussed not only in the abstract but also in light of experimental evidence, which, after all, experimentalists should agree to view as decisive. As a matter of fact, Jamison, Karlan and Schechter (2008) support the hypothesis that participants do behave differently after being deceived by experimenters. Such evidence may not suffice to persuade (social) psychologists to sacrifice the convenience of using deception and debriefing, but it should at least alert them that there is a price to be paid for deception, namely lower-quality or even flawed data. Now, even experimental economists do not shy away from studying deception (see Gneezy 2005; Gneezy et al. 2013), for instance by implementing deception games in which participants can deceive others or the experimenter (see Fischbacher and Föllmi-Heusi 2013). In fact, studying the "dark side" of human nature in the form of lying, deceiving, bribing, stealing, and so on, has become rather fashionable. There is no controversy about deception experiments as perfectly admissible experimental workhorses. This paves the way to a more subtle kind of experimenter deception: experimenters can avoid deceiving their participants themselves by outsourcing deception to participants assigned to the role of "subcontractors." In this kind of experiment, there are two types of participants: "participant participants" (normal participants acting in the experimental situation under scrutiny) and "experimenter participants" (participants incentivized to deceive "participant participants," as is typically done in deception game experiments). Thus, like employers who do not dare to violate labor protection law themselves but hire subcontractors who very likely will engage in such violations, experimenters can avoid deceiving their participants directly by "subcontracting" deception to appropriately incentivized "experimenter participants".11 In our view, and in line with Gneezy (2005), there should be more experimental tests of the subtle issue of whether participants would mind, and clearly distinguish between, deception by "subcontractors" ("experimenter participants") and by experimenters themselves.

10 An example is the rich setup, described in Güth (2021), in which each of six sellers chooses a price, unaware of whom they are competing with. All that is commonly known is that one's own demand increases when one's own price decreases and that there are three random events such that one seller is a monopolist, whose demand is affected by all random events, two sellers are duopolists, whose demands depend on only two random events and on the other's price, and three are triopolists, whose demand levels depend on the two others' prices and one random event. So, each seller's demand depends on three circumstances beyond own control. But sellers do not know that, nor which of the eight circumstances beyond their control (five prices and three random events) actually matter for their own demand.

Too many theories, or too many aversions?
One of the earliest "aversion" concepts, later joined by too many other "aversions," and the one on which we focus in this section, is aversion to risk.12 Risk aversion is customarily induced and controlled in experiments by using binary lottery incentives: a participant can win either a low or a high monetary prize, and the number of tokens earned in the experiment monotonically determines the probability of winning the larger monetary prize.13 If the function linking the probability of winning the larger prize to the number of tokens earned is made public, as often induced in experiments by reading instructions aloud, the idiosyncratic risk attitude of a participant could further be assumed to be commonly known.14 Yet, although the method we have just described serves the purpose of inducing commonly known risk attitudes, it is rarely used, even when risk attitudes are invoked to account for experimental findings (e.g., Kagel 1995).
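The logic of binary lottery incentives can be sketched in a few lines (the linear link between tokens and the win probability is our simplifying assumption; any strictly increasing, publicly known function works). Because expected utility is linear in the probability of winning the high prize, and hence in tokens, every expected-utility maximizer ranks token lotteries by expected tokens, whatever the curvature of his or her utility of money:

```python
def win_probability(tokens, max_tokens):
    """Probability of the high prize as a function of tokens earned
    (linear form assumed here for illustration)."""
    return tokens / max_tokens

def expected_utility(tokens, max_tokens, u_low, u_high):
    """Expected utility of the binary lottery over the two money prizes.
    u_low and u_high are the (arbitrary) utilities of the two prizes;
    the result is linear in tokens regardless of their values."""
    p = win_probability(tokens, max_tokens)
    return p * u_high + (1 - p) * u_low
```

This linearity in tokens is what allows the experimenter, once the token-to-probability function is made common knowledge, to treat risk attitudes over tokens as induced and commonly known.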
Arguing that this method does not achieve what it is supposed to (see Selten et al. 1999) discredits the economic concept of risk attitude, measured by the curvature of the utility of money, but not the binary lottery method, whose rationality requirements (preferring more money to less and computing probabilities correctly) need not always hold but are at least not outrageously unrealistic, at least in easy choice tasks. One should therefore either reject the binary lottery method (due, for instance, to the evidence put forth by Selten et al. 1999, 2003), and thereby expected utility of money, or accept expected utility theory along with the obligation to use binary lottery incentives for inducing commonly known risk attitudes in the case of equilibrium benchmark predictions. Furthermore, experimental economists, by accepting that "incentivizing" may fail, should also accept that "not incentivizing" may work, as shown by many psychological studies.

Too much or too little rigor?
Psychologists often and reasonably claim that presenting choice tasks through "vignettes" relating them to actual field situations enhances participants' intuitive understanding of what matters and is at stake. However, when one wants to compare different institutional rules, this may confound which aspects of the verbal vignettes are triggering possible effects. Instead, a formal representation might allow one to identify basic institutional differences more clearly and less ambiguously, and actually allow one to connect different institutions almost continuously through hybrid, that is, intermediate ones. One advantage of this approach is to let treatments differ only in numerical parameters even when the institutions under consideration might be quite different. In our view, this could reduce and even avoid explicit and implicit demand effects of verbal instructions.
One example (see Fischer et al. 2006) is to compare the demand game, as formulated and analyzed by Nash (1950), and the ultimatum game (see the survey by Güth and Kocher 2014). Whereas the former features independent demands and allows for both conflict (parties lose what they can share) and anticonflict (parties share less than what is available), the latter features sequential bargaining and rules out anticonflict by making one party, the responder, the residual claimant. However, when imposing monotonic response strategies,15 both can be formally represented rather similarly. Denote the two players by X and Y, the monetary reward to be shared by p > 0, and the respective demands of X and Y by x and y with 0 < x, y < p. With the help of this notation we can define the rules of

• the demand game via (u_X(x, y), u_Y(x, y)) = (x, y) if x + y ≤ p and (0, 0) otherwise, and of
• the ultimatum game via (u_X(x, y), u_Y(x, y)) = (x, p − x) if x + y ≤ p and (0, 0) otherwise, with responder Y as residual claimant.

Actually, the experimental study of Fischer et al. (2006) compares discrete treatments w with 0 < w < 1 where, in case of x + y ≤ p, it is randomly decided according to probability w whether the payoffs are those of the demand or, respectively, the ultimatum game, with the treatment-specific instructions differing in one single numerical parameter, namely w. Furthermore, the border cases w = 0 and w = 1 could also be approximated by w↘0 and w↗1, respectively, rather than studied directly (see Brennan et al. 2004, for the principle of approximate truth).
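The hybrid rule can be sketched as follows (a minimal illustration under our own labeling assumption that w is the probability of the demand-game payoffs; which side of w is which is fixed by the original instructions):

```python
import random

def hybrid_payoffs(x, y, p, w, rng=random.random):
    """Payoffs in the hybrid of demand and ultimatum game.
    x, y: demands of X and Y; p: the pie; w: treatment parameter.
    With probability w the demand-game rule applies, otherwise the
    ultimatum rule (this labeling is assumed for illustration)."""
    if x + y > p:        # incompatible demands: conflict, both get nothing
        return (0.0, 0.0)
    if rng() < w:        # demand game: each party receives the own demand,
        return (x, y)    # so anticonflict (x + y < p) is possible
    return (x, p - x)    # ultimatum game: responder Y is residual claimant
```

Treatments then differ only in the single numerical parameter w, with the two border cases approximated by w↘0 and w↗1.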
One can apply the same idea by connecting independent (as pioneered by Cournot 1838) and sequential (cf. von Stackelberg 1952) selling in duopoly. Di Cagno et al. (2016), in their analysis of conditional cooperation, vary a probability and thereby implement a transition from linear public good games to trust games, always based on the same verbal instructions. In the latter case, one essentially connects a social dilemma (simultaneous contributions) with a trust game (sequential contributions). In a similar way, Kübler et al. (2008) have experimentally compared "screening" and "signaling" by using verbally different instructions, whereas Güth and Winter (2013) compared screening and signaling in an experiment whose instructions differ in just one numerical aspect. Altogether, this illustrates that mathematical rigor can be useful in suppressing unwarranted demand effects and in inspiring easier comparisons of different institutional rules and experimental paradigms. Of course, this does not deny that vignettes may enhance comprehension, especially when they allow participants to recall field situations with which they are quite experienced.

Deliberation dynamics
It is easy to criticize and illustrate limitations, but it remains more difficult to provide an alternative approach. Rational choice and satisficing theory offer convenient terminology and methodology (it is possible to derive optimal or satisficing choices when preferences, beliefs, and aspirations are given) but no readily available choice algorithms, suggesting that we do not know enough about the dynamics of human decision-making deliberation. The dynamics of such deliberations can still be reasonable (that is, consistent with forward-looking consequentialism), but they remain, at best, boundedly rational. Like rationality, bounded rationality (for example satisficing, cf. Simon 1956) is a consequentialist notion: one chooses among options by anticipating their consequences. Yet, boundedly rational agents engaging in forward-looking deliberation are cognitively constrained and not optimizing. Instead, boundedly rational agents try to find some satisficing choice by engaging in a dynamic deliberation process in which choice options are tested successively, one after another, rather than by analytically deriving the satisficing ones (or the optimal ones, in the case of rational agents).
Like most scholars in cognitive psychology, as we stated in Sect. 1, we view forward-looking decision making as a deliberation process. In particular, we can think of it as one based on a dynamic process involving several steps with a variable number of feedback loops (see Güth and Ploner 2017, for more details). The process applies to individual decision makers (but possibly also to teams of agents) following these steps:

1. Cognitively perceive the decision task (mental modeling) by considering causal links to predict how one's own choice, paired with circumstances outside of one's control (i.e., choices by others and random events), affects the achievement of the desired goal;
2. Generate a few scenarios of circumstances outside of one's control which one does not dare to neglect, regardless of whether one is in a position to specify their numerical probabilities (rather than considering the Bayesian universe of all uncontrollable circumstances);
3. Form aspiration levels for each action goal and each self-generated scenario;
4. Try to satisfice, typically by considering one choice option after another, and stopping the search in light of feedback information on satisficing success.16

This process structure illustrates how experimentalists need not be restricted to collecting choice data but can also elicit data about deliberation dynamics, for instance by imposing such deliberation dynamics via suitable software. By observing these reasoning steps and feedback loops, one hopefully collects additional data which, together with choice data, help to infer more clearly not only which choice was made, but also how and why that choice was selected. Of course, cognitive data on mental modeling, scenario generation, aspiration formation and search steps are more reliable when not only choice behavior, but also proper reasoning, is incentivized.
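The search in step 4 can be sketched as a simple loop (a minimal illustration under our own simplifying assumptions: each option is evaluated by its worst payoff across the self-generated scenarios, and the aspiration level is fixed rather than adapting):

```python
def satisfice(options, payoff, scenarios, aspiration):
    """Sequentially test choice options and return the first one whose
    worst-case payoff over the self-generated scenarios (step 2) meets
    the aspiration level (step 3); return None if the search fails.
    payoff(option, scenario) encodes the mental model of step 1."""
    for option in options:       # step 4: one option after another
        worst = min(payoff(option, s) for s in scenarios)
        if worst >= aspiration:  # satisficing success: stop the search
            return option
    return None                  # failure would trigger aspiration
                                 # adaptation or scenario updating
```

Recording which options were inspected, in which order, and when the search stopped is exactly the kind of deliberation data that such software-imposed dynamics can deliver.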
This could go hand in hand with using modern techniques like mouse lab to record which parameters characterizing a choice task are actually retrieved, when and in which order. Similarly, payoff calculators and other software devices can indicate which causal relationships have been considered. Eye tracking, controlling decision times and physiological reactions, and brain scanning can provide further evidence of cognitive aspects. Direct insights into reasoning processes, at least into their conscious aspects, can be obtained by asking individual participants to reason aloud or by video- and audio-recording discussions of groups of individuals who, as a group, have to jointly reason and decide what to do.
Of course, imposing these reasoning dynamics might affect the way of forward-looking deliberation. Obtaining deliberation data in addition to choice data will always command a price17 and can turn out to be cumbersome, even when no longer requiring transcripts and content analysis. Altogether, all such methods would help to finally provide qualitative and quantitative evidence enhancing our understanding of what influences the ways in which we cognitively perceive more and less demanding decision tasks and generate choices in the light of their anticipated consequences.

16 After having searched long before finding a satisficing action, one will likely stop the search, whereas finding a satisficing action immediately might induce aspiration adaptation (e.g., Sauermann and Selten 1962), scenario updating (e.g., in the form of adding new scenarios), or mental updating.
17 Actually, journals seem more open to fancy methods like brain scanning, medical priming and measurements, which discourages attempts to use loud reasoning or to deliver the outcome of reasoning steps by imposing some reasoning dynamics, as indicated above.


Conclusion
We have discussed above methodological aspects that are particularly relevant when trying to capture our cognitive perception of more or less cognitively demanding decision tasks in which we compare choices by anticipating their consequences. This research could, and perhaps should, suggest behaviorally valid hypotheses not only about human decision making, but also about human reasoning. Only recently, and in economics only quite recently, has such wider data collection been complemented and inspired by experimental research, due to its specific advantages for testing competing hypotheses, for limiting confounding effects, for running experiments to evaluate the possible implementation of institutional changes, etc.
The opportunities and advantages of experimental research are a boon for all humanities, and especially so for psychology and economics, which accounts for the extensive mutual exchange of paradigms, concepts, and methods in data collection and analysis. While it would be quite demanding to describe the common grounds of experimental research in psychology and economics, we believe that the longer tradition of experimental research in psychology has been enormously inspired and enriched by the impressive boom of experimental economics. On the other hand, psychology may have understood earlier the limitations inherent in using experiments as a method of empirical research. Running field experiments may help but may yield only field-specific, confounded results. Statistical data, questionnaire data, panel data, etc. are often of superior quality, for instance due to their external validity. We may specialize in experimental research, but experimental psychologists and experimental economists would be better off learning from all kinds of empirical research in order to offer more reliable predictions about how people reason and about what they eventually choose.
Therefore, we should intensify the mutual exchange and inspiration by reducing claims of different methodology to their core elements (such as incentivizing and deceiving participants) and by analyzing their (dis)advantages empirically, e.g., experimentally rather than philosophically. Our main focus has been to point out that the revealed-motive approach is still dominant in behavioral economics although it reverses the true causality, namely that motives guide choice behavior. Instead, we propose that collecting deliberation data in addition to choice data, in order to learn more about the dynamics of human choice deliberations, should become a major focus in empirical research for both economics and psychology. In this paper we have indicated one way in which this can be done. Hopefully other researchers will develop further ideas on how we all can learn more about consequentialist choice dynamics. Only by joining forces can we hope to improve experimental methodology and enhance our understanding of how and why participants behave in certain ways. As Gneezy (2005) points out: "people are not indifferent to the process leading up to the outcome." Let us add that they are also not indifferent to the reasoning process by which they finally choose how to decide.
Comparisons of the experimental traditions and methods in economics and psychology mainly discuss and exacerbate minor differences while neglecting their huge overlap. This of course does not mean that both disciplines are necessarily convincing and beyond doubt,18 as the overlapping practices might themselves contain questionable elements. Indeed, let us close by describing one example of a common problematic feature of experimental research in economics and psychology, namely the extensive use of degenerate paradigms rather than generic ones.19 When, for example, reporting on Prisoner's Dilemma experiments, public good games and, in general, experiments about problems of cooperation,20 scholars usually do so without considering the reasons why they are focusing on highly special parameter constellations (except by alluding that this may be less cognitively demanding for participants), whereas researchers who employ asymmetric games with non-degenerate parameter constellations to test whether findings are robust to differences in roles are usually expected and asked to justify why they avoid the special symmetric cases. But if degenerate or border-case conditions trigger peculiar reasoning (for instance, that participants with the same role behave in the same way), this practice is quite risky: conclusions would be very special and may not generalize to the field, where degenerate cases almost never occur. In fact, even if one is particularly interested in behavior as produced by degenerate parameter constellations, it would be good practice to vary minor asymmetries and explore their effects as they diminish or disappear.