How to Reveal People's Preferences: Comparing Time Consistency and Predictive Power of Multiple Price List Risk Elicitation Methods

The question of how to measure and classify people's risk preferences is of substantial importance in the field of economics. Inspired by the multitude of ways used to elicit them, we conduct a holistic investigation of the most prevalent method, the multiple price list (MPL), and its derivations. In our experiment, we find that revealed preferences differ under various versions of MPLs and yield unstable results even within a 30-minute time frame. We determine the most stable elicitation method with the highest forecast accuracy by using multiple measures of within-method consistency and by using behavior in two economically relevant games as benchmarks. A derivation of the well-known method by Holt and Laury (2002), in which the highest payoff is varied instead of the probabilities, emerges as the best MPL method in both dimensions. As we pinpoint each MPL characteristic's effect on the revealed preference and its consistency, our results have implications for preference elicitation procedures in general.


Introduction
Risk is a fundamental concept that affects human behavior and decisions in many real-life situations. Whether a person wants to invest in the stock market, tries to select the best health insurance or just wants to cross the street, he or she faces risky decisions every day. Risk attitudes are therefore of high importance for decisions in many economics-related contexts. A multitude of studies elicit risk preferences in order to control for risk attitudes, as it is clear that they might play a relevant role in explaining results, e.g. de Véricourt et al. (2013) in the newsvendor setting, Murnighan et al. (1988) in bargaining, Beck (1994) in redistribution or Tanaka et al. (2010) in linking experimental data to household income, to name just a few. Moreover, several papers try to shed light on the causes of risk-seeking and risk-averse behavior in the general population with laboratory (Harrison and Rutström, 2008), internet (von Gaudecker et al., 2011) and field experiments (Andersson et al., 2015; Harrison et al., 2007). Since the seminal papers by Holt and Laury (2002, 2005), approximately 20 methods have been published which provide alternatives for eliciting risk preferences. They differ from each other in terms of the varied parameters, representation and framing. Many of these risk elicitation methods have the same theoretical foundation and therefore claim to measure the same parameter: a subject's "true" risk preference. However, as a growing body of evidence suggests, there are significant differences in results depending on the method used. It follows that if a person's revealed preference depends on the measurement method, scientific results and real-world conclusions might be biased and misleading.
Existing comparison studies usually compare two methods with each other and often use different stakes, parameters, framing, representation, etc., which makes their results hardly comparable. Our paper complements the existing experimental literature by making the following contribution: Taking the method by Holt and Laury (2002) as a basis, we conduct a comprehensive comparison of the multiple price list (MPL) versions of risk elicitation methods by classifying all methods into nine categories. To the best of our knowledge, no investigation incorporating such a high number of methods, including various measures of between- and within-method consistency, has ever been conducted in the literature. To isolate the effect of different methods, we consistently use the MPL representation and calibrate the risk intervals to be the same for each method assuming expected utility theory (EUT) and constant relative risk aversion (CRRA), while also keeping the risk-neutral expected payoff of each method constant and employing a within-subject design. Moreover, our design allows us to investigate whether differences across methods can be reconciled by assuming different functional forms documented in the literature such as constant absolute risk aversion (CARA), decreasing relative risk aversion (DRRA), decreasing absolute risk aversion (DARA), increasing relative risk aversion (IRRA) and increasing absolute risk aversion (IARA). Additionally, we extend our analysis to incorporate EUT with probability weighting as well as prospect theory (PT) and cumulative prospect theory (CPT).
We investigate the within-method consistency of each method by comparing the differences in subjects' initial and repeated decisions within the same MPL method. Moreover, we assess methods' self-perceived complexity and shed light on differences and similarities between methods. Finally, we provide suggestions as to which specific MPL representation to use by comparing our results to decisions in two benchmark games that resemble real-life settings: investments in capital markets and auctions. We therefore analyze the methods along two dimensions, robustness and predictive power, and determine which properties of particular methods drive risk attitude and its consistency.
We find that a particular modification of the method by Holt and Laury (2002) derived by Drichoutis and Lusk (2012) has the highest predictive power in investment settings according to both the OLS regression and the Spearman rank correlation. In addition, specific methods devised by Bruner (2009) also perform relatively well in these analyses. However, the method by Drichoutis and Lusk (2012) clearly outperforms the other methods in terms of within-method consistency and is perceived as relatively simple. In the end, our study recommends that researchers implement this method when measuring risk attitudes in an MPL framework. Moreover, our results remain qualitatively the same if we relax our assumption on the risk aversion function, or if we take probability weighting or alternative theories such as prospect theory or cumulative prospect theory into account.

Multiple Price Lists Explained
Incentivized risk preference elicitation methods aim to quantify subjects' risk perceptions based on their revealed preferences. We present nine methods in a unified structure, the commonly used MPL format, to our subjects, taking one of the most cited methods as a basis: Holt and Laury (2002). The MPL table structure is as follows: Each table has multiple rows, and in each row all subjects face a lottery (two columns) on one side of the table, and a lottery or a certain payoff (one or two columns) on the other side, depending on the particular method. Then, from row to row, one or more of the parameters change. The methods differ from each other by the parameter which is changing. As the options on the right side become strictly more attractive from row to row, a subject indicates the row where he or she wants to switch from the left option to the right option. This switching point then gives us an interval for a subject's risk preference parameter according to Table 1, 1 assuming EUT and CRRA. 2
1 To ease comparison to existing studies, we used exactly the same coefficient intervals as Holt and Laury (2002).
2 u(c) = c^(1−ρ)/(1−ρ)
Notes: This table indicates the mapping from a subject's chosen switching point into the resulting risk parameter intervals in each method; the leftmost column contains the interpretation of the risk intervals; "Never" means a subject prefers the option "Left" in each row.
Note that several other representations of risk elicitation methods exist besides the MPL, such as the bisection method (Andersen et al., 2006), the trade-off method (Wakker and Deneffe, 1996), questionnaire-based methods (Weber et al., 2002) and willingness-to-pay (Hey et al., 2009), but the MPL is favored because of its common usage. Andersen et al. (2006) consider the main advantage of the MPL format to be that it is transparent to subjects and provides simple incentives for truthful preference revelation. They additionally list its simplicity and the little time it takes as further benefits. As far as the specific risk elicitation method in the MPL framework designed by Holt and Laury (2002) is concerned, it has proven itself numerous times in providing explanations for several phenomena such as behavior in 2x2 games (Goeree et al., 2003), market settings (Fellner and Maciejovsky, 2007), smoking, heavy drinking, being overweight or obese (Anderson and Mellor, 2008), consumption practices (Lusk and Coble, 2005) and many others.
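As an illustration of how a switching point maps into a CRRA interval, the following sketch recomputes the cutoff values for the original Holt and Laury (2002) lottery pairs (Option A pays 2.00 or 1.60, Option B pays 3.85 or 0.10, and the probability of the high payoff rises by 0.1 per row). The bisection routine is our own illustrative implementation, not code from any of the cited studies:

```python
import math

def crra_u(c, rho):
    """CRRA utility u(c) = c^(1-rho)/(1-rho); log utility at rho = 1."""
    if abs(rho - 1.0) < 1e-9:
        return math.log(c)
    return c ** (1.0 - rho) / (1.0 - rho)

def eu_diff(rho, p, safe=(2.00, 1.60), risky=(3.85, 0.10)):
    """EU(safer lottery A) - EU(riskier lottery B) at high-payoff probability p."""
    eu_a = p * crra_u(safe[0], rho) + (1 - p) * crra_u(safe[1], rho)
    eu_b = p * crra_u(risky[0], rho) + (1 - p) * crra_u(risky[1], rho)
    return eu_a - eu_b

def indifference_rho(p, lo=-2.0, hi=3.0):
    """Bisection for the rho that makes a subject indifferent in the row
    with high-payoff probability p (a sign change on [lo, hi] is assumed)."""
    f_lo = eu_diff(lo, p)
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if eu_diff(mid, p) * f_lo > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Cutoffs for rows p = 0.1 ... 0.9: switching to the right option in row k
# places rho between the cutoffs of rows k-1 and k.
cutoffs = [indifference_rho(k / 10.0) for k in range(1, 10)]
```

A switch in row 6 (after five safe choices), for instance, brackets ρ between roughly 0.15 and 0.41, matching the familiar Holt and Laury intervals.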
Early studies document violations of EUT under risky decision making and provide suggestions for how these differences can be reconciled (Bleichrodt et al., 2001). In addition, recent studies (Tanaka et al., 2010; Bocqueho et al., 2014) document potential empirical support for prospect theory (PT, Kahneman and Tversky, 1979) 3 when it comes to risk attitudes: Harrison et al. (2010) find that PT describes the behavior of half of their sample best. There is also evidence that subjective probability weighting (PW) (Quiggin, 1982) should be taken into account. In addition, cumulative prospect theory (CPT), i.e. PT combined with PW (Tversky and Kahneman, 1992), might also be a candidate that can explain the documented anomalies under EUT. Wakker (2010) provides an extensive review of risk under PT.
We justify using CRRA as Wakker (2008) claims that it is the most commonly postulated assumption among economists. Most recently, Chiappori and Paiella (2011) provide evidence on the validity of this assumption in economic-financial decisions. 4 Nevertheless, alternative functional forms have been introduced, e.g. CARA 5 (Pratt, 1964). It has also been questioned whether social status, and mostly the role of wealth or income, might shape risk attitude, which would lead to functions that are increasing or decreasing in these factors such as IRRA and DRRA (Andersen et al., 2012) 6 or IARA and DARA (Saha, 1993). 7 A review of these functions is provided by Levy (1994). In our robustness analysis, we relax our original assumptions on EUT and CRRA and incorporate all of the above-mentioned alternative theories and functional forms. Note that even though we calibrated our parameters to accommodate EUT and CRRA, one is still able to calculate the risk parameter ρ using the aforementioned alternative specifications. 8 We group our aforementioned nine risk elicitation methods into two categories: 1. The standard gamble methods (SG methods), where on one side of the table there is always a 100% chance of getting a particular certain payoff and on the other side there is a lottery. 2. The paired gamble methods (PG methods), with lotteries on both sides.
We therefore primarily conduct a comparison of different MPL risk elicitation methods. What we do not claim, however, is that the method devised by Holt and Laury (2002) (or the MPL in general) is the most fitting way to measure people's risk preferences; rather, we strive to give a recommendation to researchers who already intend to use Holt and Laury (2002) in their studies, and provide a better alternative that shares its attributes with the original MPL design.
It should be mentioned that there is an alternative interpretation of our study: The different MPL methods can also be conceived as a mapping of existing risk elicitation methods (from other frameworks) to the MPL space. Several methods exist where the risk elicitation task is provided in a framed context, such as pumping a balloon until it blows (Lejuez et al., 2002) or avoiding a bomb explosion (Crosetto and Filippin, 2013). Similarly, some methods differ in the representation of probabilities, using percentages (Holt and Laury, 2002) or charts (Andreoni and Harbaugh, 2010). All these methods can be displayed with different MPLs by showing the probabilities and the corresponding payoffs in a table format. We provide a complete classification of these methods in the Literature Review section. Up to now, different risk elicitation methods have been compared by keeping the original designs, but this approach comes at a price: As the methods differ in many dimensions, any differences found can be attributed to any of those particular characteristics. Our approach can be understood as a way to make all risk elicitation methods as similar as possible, with the drawback of losing the direct connection to the original representation. This paper should therefore primarily be seen as a comparison of different MPL risk elicitation methods, and the resulting comparison of existing risk elicitation methods by mapping them into the same space is only reported for the sake of completeness.
4 Note that this approach is also popular among economists due to its computational ease: The vast majority of economic experiments assumes CRRA as well, which makes our results comparable to theirs.
8 This implies that the same switching point in two methods does not yield the same risk parameter estimate under different specifications, but these estimates are still directly comparable according to theory, as they claim to measure a subject's underlying risk attitudes ceteris paribus.
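The point of footnote 8 can be made concrete: under an alternative specification such as CARA (Pratt, 1964), with u(c) = (1 − e^(−αc))/α, the same row solves a different indifference condition and therefore yields a different parameter estimate. A minimal sketch, again using the Holt and Laury (2002) payoffs purely for illustration:

```python
import math

def cara_u(c, alpha):
    """CARA utility u(c) = (1 - exp(-alpha*c)) / alpha; linear at alpha = 0."""
    if abs(alpha) < 1e-9:
        return c
    return (1.0 - math.exp(-alpha * c)) / alpha

def eu_diff(alpha, p, safe=(2.00, 1.60), risky=(3.85, 0.10)):
    """EU(safer lottery A) - EU(riskier lottery B) under CARA."""
    eu_a = p * cara_u(safe[0], alpha) + (1 - p) * cara_u(safe[1], alpha)
    eu_b = p * cara_u(risky[0], alpha) + (1 - p) * cara_u(risky[1], alpha)
    return eu_a - eu_b

def indifference_alpha(p, lo=0.001, hi=5.0):
    """Bisection for the CARA alpha at which row p is an indifference point
    (only rows p >= 0.5 have a positive-alpha solution for these payoffs)."""
    f_lo = eu_diff(lo, p)
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if eu_diff(mid, p) * f_lo > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

alpha_cutoffs = {p / 10.0: indifference_alpha(p / 10.0) for p in range(5, 10)}
```

For instance, indifference in the row with p = 0.5 corresponds to α of roughly 0.1 here, while it corresponds to ρ of roughly 0.15 under CRRA; what remains comparable across specifications is the ordering of subjects, not the numerical estimate.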

Literature Review
We will now discuss the different methods in greater detail and how they are embedded in the literature, if at all. Table 2 provides a summary of the exact parameter that is changing across methods. 9

Standard Gamble Methods
Among the SG methods, there are four parameters that can be changed: the sure payoff (sure), the high payoff of the lottery (high), the low payoff of the lottery (low) or the probability of getting the high payoff (p) (or, equivalently, the probability of getting the low payoff (1 − p)). The parameters must of course be chosen in such a way that high > sure > low always holds. For instance, we denote the SG method where the low payoff is changing by "SGlow", the SG method with the varying certainty equivalent by "SGsure" and the SG method where the probabilities are changing by "SGp". Binswanger (1980) introduced a method (SGall) where only one of the options has a certainty equivalent. The other options consist of lotteries where the probabilities are fixed at 50-50, but both the high and the low payoff are changing. Cohen et al. (1987) used risk elicitation tasks in which probabilities and lottery outcomes were held constant and only the certainty equivalent was varied. These methods have later been redesigned and presented in a more sophisticated format as a single choice task by Eckel and Grossman (2002, 2008).
A recent investigation by Abdellaoui et al. (2011) presents a similar method (SGsure method) in an MPL format with 50-50 probabilities. Bruner (2009) presents a particular certainty equivalent method, where the certainty equivalent and the lottery outcomes are held constant, but the corresponding probabilities of the lotteries are changing (SGp method). Additionally, Bruner (2009) introduces another method where only the potential high outcomes of lotteries vary (SGhigh method). Although not present in the literature, we chose to include a method where the potential low outcome varies for reasons of completeness (SGlow method). 10
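The calibration logic for the SG methods can be sketched as follows: to make the SGsure risk intervals identical across methods, each row's certain payoff can be set to the certainty equivalent of the fixed lottery evaluated at the CRRA cutoff that row should represent. The lottery parameters below are placeholders for illustration, not the values used in the experiment:

```python
import math

def certainty_equivalent(high, low, p, rho):
    """CE of the lottery (high w.p. p, low w.p. 1-p) under CRRA."""
    if abs(rho - 1.0) < 1e-9:  # log-utility special case
        return math.exp(p * math.log(high) + (1 - p) * math.log(low))
    eu = p * high ** (1 - rho) + (1 - p) * low ** (1 - rho)
    return eu ** (1.0 / (1 - rho))

# Target CRRA cutoffs per row, matching the Holt and Laury (2002) intervals.
cutoffs = [-0.95, -0.49, -0.15, 0.15, 0.41, 0.68, 0.97, 1.37]

# Hypothetical fixed lottery (placeholder values): 10.00 or 2.00, 50-50.
sure_payoffs = [certainty_equivalent(10.0, 2.0, 0.5, r) for r in cutoffs]
```

Listing the resulting sure payoffs from top to bottom makes the lottery side strictly more attractive from row to row, so a switch in row k again brackets ρ by the neighboring cutoffs.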

Paired-Gamble Methods
Holt and Laury (2002, 2005) introduced the most-cited elicitation method under EUT up to now, where potential outcomes are held constant and the respective probabilities change (PGp). Drichoutis and Lusk (2012) suggest a similar framework where the outcomes of different lotteries change while the probabilities are held constant. We differentiate these methods further into PGhigh and PGlow depending on whether the high or the low outcome is varied in the MPL. Additionally, the PGall method varies both the probabilities and the potential earnings at the same time.
Many risk elicitation tasks used in the literature fit into the framework of choosing between different lotteries. Sabater-Grande and Georgantzis (2002) provide ten discrete options with different probabilities and outcomes to choose from. Lejuez et al. (2002) introduce the Balloon Analogue Risk Task where subjects could pump up a balloon, and their earnings depend on the final size of the balloon. The larger the balloon gets, the more likely it will explode, in which case the subject earns nothing. Visschers et al. (2009) and Andreoni and Harbaugh (2010) use a pie chart for probabilities and a slider for outcomes to visualize a similar trade-off effect in their experiment. Crosetto and Filippin (2013) present their Bomb Risk Elicitation Task with an interesting framing which quantifies the aforementioned tradeoff between probability and potential earnings with the help of a bomb explosion. 11

Questionnaire Methods
In addition to the MPL methods, we chose to also incorporate questionnaire risk elicitation methods into our study. Several methods have been introduced that evaluate risk preferences with non-incentivized survey-based methods, and the questions and the methodology they use are very similar. The most recently published ones include the question from the German Socio-Economic Panel Study (Dohmen et al., 2011) or the Domain-Specific Risk-Taking Scale (DOSPERT) by Blais and Weber (2006). For a more detailed description, see the last paragraph of Section 2.

Comparison Studies
The question arises which method to use given such a large variety of risk elicitation methods, and whether they lead to the same results. Comparison studies exist, but most compare only two methods with each other, so their scope is limited. The question of within-method consistency has been addressed by some papers: Harrison et al. (2005) document high re-test stability of the method introduced by Holt and Laury (2002, PGp). Andersen et al. (2008b) test the consistency of the PGp method within a 17-month time frame. They find some variation in risk attitudes over time, but do not detect a general tendency for risk attitudes to increase or decrease. This result was confirmed in Andersen et al. (2008a). Yet there is a gap in the academic literature on the time stability of different methods and their representations that we are eager to fill.
Interestingly, more work has been done in the field of between-method consistency. Fausti and Gillespie (2000) compare risk preference elicitation methods with hypothetical questions using results from a mail survey. Isaac and James (2000) conclude that risk attitudes and the relative ranking of subjects differ between the Becker-DeGroot-Marschak procedure and the first-price sealed-bid auction setting. Berg et al. (2005) confirm that the assessment of risk preferences generally varies across institutions in auction settings. In another comparison study, Bruner (2009) shows that changing the probabilities versus varying the payoffs leads to different levels of risk aversion in the PG tasks. Moreover, Dave et al. (2010) conclude that subjects show different degrees of risk aversion in the Holt and Laury (2002, PGp) and in the Eckel and Grossman (2008, SGall) task. Their results were confirmed by Reynaud and Couture (2012), who used farmers as the subject pool in a field experiment. Bleichrodt (2002) argues that a potential reason for these differences might be that the original method by Eckel and Grossman (2008) does not cover the risk-seeking domain, which can be included with the slight modification we made when incorporating this method. Dulleck et al. (2015) test the method devised by Andreoni and Harbaugh (2010) using a graphical representation against the PGp and describe a surprisingly high level of both within- and inter-method inconsistency. Drichoutis and Lusk (2012) compare the PGp method to a modified version of it where probabilities are held constant. Their analysis reveals that the elicited risk preferences differ from each other both at the individual and at the aggregate level. Most recently, Crosetto and Filippin (2015) compare four risk preference elicitation methods with their original representation and framing and confirm the relatively high instability across methods.
In parallel, a debate between survey-based and incentivized preference elicitation methods has been ongoing since the survey on questionnaire-based risk elicitation methods by Farquhar (1984). Eckel and Grossman (2002) conclude that non-incentivized survey-based methods provide misleading conclusions for incentivized real-world settings. In line with this finding, Anderson and Mellor (2009) claim that non-salient survey-based elicitation methods and the PGp method yield different results. On the contrary, Lönnqvist et al. (2015) provide evidence that the survey-based measure, which Dohmen et al. (2011) had implemented, explains decisions in the trust game better than the SGsure task. Charness and Viceisza (2015) provide evidence from developing countries that hypothetical willingness-to-risk questions and the PGp task deliver deviating results.

Further Considerations
A recent stream of literature broadens the horizon of investigation to theoretical aspects of elicitation methods: Weber et al. (2002) show that people have different risk attitudes in various fields of life, thus risk preferences seem to be domain-specific. Lönnqvist et al. (2015) document no significant connection between the PGp task and personality traits. Dohmen et al. (2010) document a connection between risk preferences and cognitive ability, which was questioned by Andersson et al. (2015). Hey et al. (2009) investigate noise and bias under four different elicitation procedures and emphasize that elicitation methods should be regarded as strongly context-specific measures. Harrison and Rutström (2008) provide an overview and a broader summary of elicitation methods under laboratory conditions, whereas Charness et al. (2013) survey several risk preference elicitation methods based on their advantages and disadvantages.
In addition, there is evidence that framing and representation matter. Wilkinson and Wills (2005) advise against using pie charts to show probabilities and payoffs, as human beings are not good at estimating angles. Hershey et al. (1982) identify important sources of bias to be taken into account and pitfalls to avoid when designing elicitation tasks. Most importantly, these include task framing, differences between the gain and loss domains and the variation of outcome and probability levels. Von Gaudecker et al. (2008) show that the same risk elicitation methods for the same subjects deliver different results when using different frameworks, e.g. multiple price list, trade-off method, ordered lotteries or graphical chart representation. This procedural dependence was confirmed by Attema and Brouwer (2013) as well, which implies that risk preferences on an individual level are susceptible to the representation and framing used.
The previous paragraphs lead us to the conclusion that methods should be compared to each other using the same representation and format. This justifies our decision to compare them using the standard MPL framework, which guarantees that the differences cannot be attributed to the different framing and representation of elicitation tasks. However, this comes at the price that we had to change some of the methods slightly, which implies that they are not exactly the same as their originally published versions. We certainly do not claim that the MPL is the only valid framework, but our choice seems justified by its common usage and relative simplicity. We consider a future investigation using a different representation technique a potentially interesting addition. Also, we emphasize that the differences in our results exist among the MPL representations of the methods and can only be generalized to the original methods to a very limited extent. See Table 3 for an overview of the link between each MPL representation and the particular method that was published originally, and Table 12 in Appendix section A.2, where we compare the results from our MPL methods to the results in previous studies; most of the studies deliver significantly different results from the risk parameters measured in our study. This is not surprising given the considerations in Sections 1.2.4 and 1.2.5, as we map all methods to the MPL space. Furthermore, risk elicitation methods are very noisy in general; for example, the same method with the same representation delivers significantly different results in Crosetto and Filippin (2013) and Crosetto and Filippin (2015).

Design
We conducted a laboratory experiment to compare different MPL risk elicitation methods. Subjects answered the risk elicitation questions first. Then, benchmark games were presented to them to gauge predictive power, followed by a non-incentivized questionnaire. We provide a detailed description of the exact procedures of each part in the following paragraphs.
We conducted ten sessions at the Vienna Center for Experimental Economics (VCEE) with 96 subjects. 12 Sessions lasted about 2 hours, with earnings ranging between €3 and €50 and an average payment of €20.78 with a standard deviation of €10.10. We calibrated these payments similarly to previous studies (e.g. Bruner, 2009 or Abdellaoui et al., 2011). Average earnings were about €9.50 in the risk task and about €8.30 in the benchmark games, plus a €3.00 show-up fee. There is evidence that the existence of a show-up fee could lead to an elevated level of risk aversion in the subject pool. In our experiment, this moderate show-up fee was only pointed out to the subjects after they had made their decisions in the risk elicitation methods and the benchmark games. Thus, it could not have distorted their preferences. The experiment was programmed and conducted with the software z-Tree (Fischbacher, 2007), and ORSEE (Greiner, 2015) was used for recruiting subjects.
We employed a within-subject design, meaning that each subject made decisions in each and every task, as in other comparison studies (Eckel and Grossman, 2008; Crosetto and Filippin, 2015). This property rules out that the methods differ due to heterogeneity between subjects, but it comes with the drawback that methods encountered later might deliver noisier or different results due to fatigue or other factors, as the answer to a particular method could also be a function of previously seen MPLs. Consequently, we included the order in which a method appeared as a control in all regressions wherever possible, compared the variance in earlier and later methods and tested for order effects; no significant effects were found. 13 To avoid biases, a random number generator determined the order of methods for each subject separately at the beginning of each session. 14 After receiving instructions on screen and in written form, subjects went through the nine incentivized risk elicitation methods. In order to avoid potential incentive effects mentioned by Holt and Laury (2002), the expected earnings for a risk-neutral individual were equal in every method. Furthermore, to avoid potential biases due to different reactions to gains and losses (Hershey et al., 1982), each of our lotteries is set in the gains domain. Andersen et al. (2006) confirmed previous evidence (Poulton, 1989) that there is a slight tendency of anchoring and choosing a switching point around the middle in risk elicitation tasks. In order to counteract anchoring and a one-directional distortion of preferences as a consequence of this unavoidable pull-to-center effect, each risk elicitation task appeared randomly either top-down or bottom-up. Depending on the randomization, out of nine potential switching opportunities either the fourth or the sixth option was the risk-neutral switching point. 15
Subjects also had the opportunity to look at their given answer and modify it right after each decision if they wished to do so. After making a decision in each method, we asked subjects the following question: "On a scale from 1 to 10, how difficult was it for you to make a decision in the previous setting?". With this question we assessed the self-perceived complexity of the tasks, since there is evidence in the literature (Mador et al., 2000) that subjects make noisier decisions if the complexity of a lottery increases, and therefore a less complex method is preferred. Moreover, Dave et al. (2010) outline the trade-offs between noise, accuracy and subjects' mathematics skills. They suggest that it is a good strategy to make MPL tasks simpler for subjects. In this spirit, we asked our subjects to indicate the row in which they switched from the "LEFT" column to the "RIGHT" column, thereby enforcing a single switching point (SSP). Using this framework, subjects were not required to make a decision for each and every row in every method, which would have meant more than 100 monotonous, repetitive binary choices throughout the experiment. Additionally, this approach ensures that subjects give answers without preference reversals. We consider this option more viable than accepting multiple switching points, thus allowing inconsistent choices, and using the total number of "safe" choices to determine a subject's risk coefficient interval. The SSP has been used several times, e.g. by Gonzalez and Wu (1999) or Jacobson and Petrie (2009). By enforcing an SSP, we faced a trade-off between potential boredom and the non-detection of people with inconsistent preferences. Furthermore, some of the reported within-method instability might stem from "fat preferences" or indifference between two or more options. However, the SSP can further be justified in that only a small proportion of subjects expressed multiple switching points in earlier studies, 16 so this design choice is highly unlikely to drive our results.
13 See Table 11 in appendix section A.1.
14 Each subject encountered the methods in a unique random order and each order was used only once in the entire experiment.
15 An example of the difference between the top-down and bottom-up representation can be found in Table 22 in the Online Resource.
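The practical difference between enforcing an SSP and accepting row-by-row choices can be illustrated with two small helpers (the function names are ours, purely illustrative): under the SSP, only choice vectors of the form L…LR…R are representable, while free row-by-row choice also admits vectors with preference reversals:

```python
def switch_count(choices):
    """Number of adjacent Left/Right switches in a row-by-row choice vector."""
    return sum(1 for a, b in zip(choices, choices[1:]) if a != b)

def to_switching_row(choices):
    """Map a choice vector to its single 1-indexed switching row.
    Returns None for vectors with a preference reversal (an 'R' followed
    by an 'L') and for subjects who prefer 'L' in every row ("Never")."""
    if any(a == "R" and b == "L" for a, b in zip(choices, choices[1:])):
        return None
    for i, c in enumerate(choices):
        if c == "R":
            return i + 1
    return None
```

Counting "safe" choices in a vector like L, R, L, R would force an arbitrary tie-breaking rule; the SSP design simply makes such vectors impossible to submit.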
In order to test within-method consistency, three of the nine methods were randomly chosen and presented to subjects again, without telling them that they had already encountered that particular method. 17 Repeating all methods was not feasible due to fatigue concerns, as the experiment was already quite long. This approach allows us to test both within-method and inter-method consistency. Subjects were again allowed to modify their answers once, and the perceived complexity of the tasks was elicited again as well.
Control questions were used for the preference elicitation methods and for each benchmark game in order to verify that subjects understood the task they were about to perform. 18 Subjects had to answer them correctly in order to participate in the experiment.
We incorporated the random lottery incentive system emphasized by Cubitt et al. (1998). Thus, the computer randomly chose one of the twelve risk preference methods and one of the eight rows within that particular method to be payoff-relevant. Additionally, one of the three benchmark games was chosen to be payoff-relevant as well. This random lottery incentive system helps keep the costs at a reasonable level while providing similarly sized (e.g. Bruner, 2009) or even larger (e.g. Holt and Laury, 2002 or Harrison et al., 2007) stakes for the elicitation tasks compared to previous studies, and it mitigates potential income effects. Nevertheless, we note that the random lottery incentive system might be a potential caveat in our study, since Cox et al. (2015) document somewhat different behavior under various payment mechanisms.
As far as hedging behavior is concerned, Blanco et al. (2010) provide evidence that hedging and the corresponding biased beliefs and actions can only be problematic if the hedging opportunities are highly transparent. Taking this consideration into account, we provided feedback on the outcome of the risk elicitation tasks only at the end of the experiment. Thus, it was not possible for subjects to create a portfolio and use hedging behavior over different parts of the experiment.
On top of the risk elicitation tasks, we used three benchmark games resembling real-life situations as well as situations relevant to economists. As behavior in these settings should only depend on risk attitudes, they serve as benchmarks to contribute to the debate about which risk elicitation methods are appropriate for predicting behavior in these games. The benchmark games appeared in a randomized order. First, we used the same investment task as Charness and Gneezy (2010). Here, subjects could decide how much of an endowment of 10€ they wanted to invest in stocks and bonds. Subjects knew that any investment in bonds is a safe investment, and therefore they received the same amount they had invested in bonds as income. Additionally, the amount they invested in stocks was multiplied by 2.5 or lost completely, with equal chance. Under EUT, this setting implies that both risk-neutral and risk-seeking decision makers should invest the entire amount. Thus, in order to be able to differentiate between them, we introduced another investment setting where the potential payment for stocks was 1.5 times the invested amount.
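To illustrate why the 1.5x setting separates risk-neutral from risk-seeking subjects, the following sketch grid-searches the EUT-optimal investment under CRRA utility (an illustration under our assumptions, not the code used in the experiment):

```python
import numpy as np

def optimal_investment(multiplier, rho, endowment=10.0):
    """Grid-search the EUT-optimal stock investment under CRRA utility
    u(c) = c**(1 - rho) / (1 - rho), with rho = 1 treated as log utility.
    Illustrative sketch only."""
    def u(c):
        c = np.maximum(c, 1e-9)  # guard against zero consumption
        return np.log(c) if abs(rho - 1) < 1e-9 else c**(1 - rho) / (1 - rho)
    x = np.linspace(0.0, endowment, 1001)
    eu = 0.5 * u(endowment - x + multiplier * x) + 0.5 * u(endowment - x)
    return x[np.argmax(eu)]

# A risk-neutral subject (rho = 0) invests everything at the 2.5 multiplier
# but nothing at the 1.5 multiplier, since expected value falls with x there.
assert optimal_investment(2.5, rho=0.0) == 10.0
assert optimal_investment(1.5, rho=0.0) == 0.0
```

A sufficiently risk-seeking subject (e.g. rho = -1) still invests the entire endowment in the 1.5x setting, which is what lets the second setting distinguish the two types.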
The third benchmark game was a first-price sealed-bid auction against a computerized opponent in line with Walker et al. (1987). Subjects could bid between 0.00€ and 20.00€ of their endowment, and they knew that the computer bid any amount between 0.00€ and 20.00€ with equal chance. The potential earnings (E_1 for subject 1) according to the bids (x_1, x_2) follow the first-price rule: E_1 = 20€ − x_1 if x_1 > x_2, and E_1 = 0 otherwise.

Our benchmark games are deliberately chosen in such a way that risk is clearly relevant in the games, while being one step away from the artificial risk elicitation mechanisms. Therefore, all benchmark games are framed heavily, while still ensuring that risk attitudes should be the only factor driving a subject's decisions. The investment settings are very similar to the risk elicitation mechanisms described above in the sense that they resemble an SG method (with the difference that you choose your sure payoff and your lottery at the same time). The auction is more complex, as the optimal risk-neutral solution is harder to compute, but here you basically choose your own lottery, too. We therefore expect stronger correlation with the MPL methods for the investment games.
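Assuming the standard first-price payoff rule (the winner earns the 20€ endowment minus his/her own bid, losing earns nothing), the risk-neutral benchmark in this auction can be checked numerically (an illustrative sketch, not the experimental software):

```python
import numpy as np

# Against a computer bid drawn uniformly from [0, 20], the probability of
# winning with a bid x is x/20, so the risk-neutral expected payoff is
# (20 - x) * (x / 20).
bids = np.linspace(0.0, 20.0, 2001)
expected = (20.0 - bids) * (bids / 20.0)
best_bid = bids[np.argmax(expected)]
print(best_bid)  # the risk-neutral optimum is a bid of 10
```

Risk-averse bidders shade upward toward 20 to raise their winning probability, which is why the auction can itself be read as a (heavily framed) risk elicitation task.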
The experiment concluded with an extensive questionnaire. In order to incorporate survey-based measures, we asked subjects to provide an answer on a ten-point Likert scale to the following two questions in line with Dohmen et al. (2011): "In general, are you a person who is fully prepared to take risks or do you try to avoid taking risks?" and "In financial situations, are you a person who is fully prepared to take risks or do you try to avoid taking risks?". The perceived complexity of these questions was elicited as well. In the questionnaire, we elicited the following socioeconomic factors: age, gender, field of study, years of university education, nationality, high school grades in mathematics, monthly income and monthly expenditure. Furthermore, we elicited cognitive ability by conducting a cognitive reflection test (Frederick, 2005). Lastly, we assessed subjects' personalities in line with Rammstedt and John (2007), who provide a short measure of personality traits according to the BIG5 methodology introduced by Costa and McCrae (1992).

Results
We first establish in Sections 3.1 and 3.2 that the elicited risk parameter is highly dependent on the particular MPL variant used, because the overall distributions of switching points are very diverse and the rank correlations between the different methods are low in most circumstances. Section 3.3 analyzes features that are common across methods. In Section 3.4 we apply multiple measures to determine method quality. To this end, we first use the benchmark games to let the data show which risk elicitation methods best predict behavior in these games. In Section 3.4.2, we show which method produces the most stable results overall. Section 3.5 concludes with the result that the PGhigh method is the most stable method and that it has the highest predictive power.

Overall Distributions are Different
According to EUT, a subject's behavior does not depend on which parameters are changed from row to row, as his underlying risk parameter value is constant. As the different versions of the MPL are calculated in such a way that the same switching point implies the same risk parameter interval, a consistent individual should have the same switching point in all versions of the MPL. This implies that the distributions of switching points should be the same across methods, barring some noise.
First, see Figure 1 for a graphical representation of the distributions. It is visible at first glance that the distributions are not the same across all methods. For example, in the SGp method, most subjects would be classified as highly risk loving, whereas in the PGhigh method the majority of subjects would be classified as risk averse. To verify whether the distributions across methods are the same, we conduct two tests: a Friedman test, which shows that the means are not the same across methods (p < 0.0001), and a Kruskal-Wallis test, which shows that the distribution of answers is not the same across methods (p < 0.0001). We conclude that the switching points are, contrary to standard theory but in accordance with the literature, dependent upon the particular MPL variant used.
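The two distribution tests can be reproduced with scipy as follows (a sketch with simulated switching points standing in for our data; the method names and the shifts between them are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated switching points (1-9) of 96 subjects in three methods; the
# shifted supports mimic methods classifying subjects differently.
sgp = rng.integers(1, 6, size=96)      # skewed toward risk-loving choices
pghigh = rng.integers(4, 10, size=96)  # skewed toward risk-averse choices
pgp = rng.integers(1, 10, size=96)

# Friedman test: repeated measures, the same subjects across methods
print(stats.friedmanchisquare(sgp, pghigh, pgp).pvalue)
# Kruskal-Wallis test: compares the distributions of answers across methods
print(stats.kruskal(sgp, pghigh, pgp).pvalue)
```

With data shifted this strongly, both tests reject equality, mirroring the p < 0.0001 results reported above.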
To see which specific versions are significantly different from each other, we conduct a series of Wilcoxon tests, the natural pairwise analogue to the Kruskal-Wallis test. We use the Wilcoxon test to compare the distributions, as a difference in distributions is a more meaningful statistic here than a comparison of means. The p-values of the pairwise tests can be found in Table 4. Out of 55 pairwise comparisons, 28 indicate that methods are different at p < 0.001; 34 (43) instances suggest that methods are different at the p < 0.01 (0.05) significance level. 20 To make sure that the differing results are not a product of fatigue or order effects, we also test whether CRRA coefficients depend on the order in which the methods were encountered (see Table 11 in Appendix A.1); no significant order effects were found.

Fig. 1 Distributions of risk preferences; a low value indicates risk-loving and a high value risk-averse behavior; x-axis: switching points (i.e. risk preferences) of subjects, where 1 means a subject switches from left to right in the first row and 9 means a subject never switches; y-axis: frequency of switching points

20 Note that one should be careful while reading this table and the ones following because of the presence of the multiple-testing problem; we therefore introduce a new notation in the tables: p-values lower than 0.001 are denoted by three stars, p-values lower than 0.01 by two stars and p-values lower than 0.05 by one star. p < 0.001 can be interpreted as significant even when using the conservative Bonferroni correction (see Abdi, 2007).
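A minimal sketch of such pairwise comparisons with a Bonferroni-corrected threshold (simulated data; we use the signed-rank variant of the Wilcoxon test here, since the same subjects answer every method):

```python
from itertools import combinations

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Four illustrative "methods": switching points of 96 subjects, with a
# level shift of 0, 1 or 2 depending on the method.
methods = {f"m{i}": rng.integers(1, 10, size=96) + (i % 3) for i in range(4)}

# Pairwise Wilcoxon signed-rank tests with a Bonferroni correction for the
# multiple-testing problem discussed in the text.
pairs = list(combinations(methods, 2))
alpha = 0.05 / len(pairs)  # Bonferroni-adjusted significance threshold
for a, b in pairs:
    p = stats.wilcoxon(methods[a], methods[b]).pvalue
    print(a, b, round(p, 4), "significant" if p < alpha else "n.s.")
```

Dividing the significance level by the number of comparisons is the conservative correction alluded to in footnote 20.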
To provide an answer to this puzzle, see Section 3.4.2, where we conduct a quality assessment of the different methods.

Rank Correlations are Low
In this section we look at the rank correlation coefficients between the different methods and the questionnaire answers. If there were high rank correlations between the risk elicitation methods, one might argue that it is irrelevant which one is used if one intends to control for risk attitudes under any given circumstance. Rank correlations between the MPL methods and the questionnaire measures can be found in Table 5. We see that some of the correlations are significant, but only 11% of all pairwise comparisons in total if we test conservatively at p < 0.001 because of the multiple-testing problem. Pay special attention to the fact that PGp, the most widely used method today, has no significant rank correlations with any of the other methods. See also Table 10 in Appendix A.1 for standard correlations, which yield essentially the same results as Table 5. These findings provide further evidence that the elicitation procedure should be chosen with care, as both the elicited risk aversion coefficient and the relative ranking of subjects according to each method vary within broad boundaries.
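Rank correlations of the kind reported in Table 5 can be computed as follows (a sketch on simulated data, not our dataset):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 96
# Simulated switching points of one method and a noisy relative of it.
pgp = rng.integers(1, 10, size=n).astype(float)
pghigh = np.clip(pgp + rng.normal(0, 3, size=n), 1, 9)

# Spearman's rho compares the rank ordering of subjects, which matters
# when a method is used to control for risk attitudes in other settings.
rho, p = stats.spearmanr(pgp, pghigh)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
```

Because Spearman's rho is invariant to monotone transformations, a method whose distribution merely shifts while preserving the ordering of subjects would still show a high rank correlation.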

Method Similarities
We have established in Sections 3.1 and 3.2 that there are significant differences in the distributions of the risk elicitation methods. There are, however, some similarities that can be observed across methods. In Table 6, we classify MPLs according to whether the high payoff, the low payoff, the probabilities or the certainty equivalents change across the rows of the MPL table, whether the method has a certainty equivalent, and whether the table was presented in a top-down or bottom-up format. Furthermore, we control for age, gender, cognitive reflection scores and the order in which the tables were presented. Column 1 shows the results for the first time a method was encountered, and column 2 for the repeated measurements. We see that, generally, methods that change the probabilities or that have a certainty equivalent classify subjects as more risk-loving, while methods that change the low payoffs classify subjects as more risk-averse. 23 When a method is presented to subjects for the first time, changing the high payoff also classifies them as more risk-loving, while presenting the table with ascending numbers seems to classify subjects as more risk-averse, although these two effects seem to vanish when subjects are presented with the same tables again.
Note that we do not observe order effects or significant effects of the control variables.

Notes: OLS regressions clustered by individual subject, with one observation being one answer of one subject in one method; the dependent variable is the resulting CRRA coefficient, with low scores indicating risk-loving behavior; the independent variables on the left are dummies; nonsignificant controls for age, gender, order, BIG5 scores, income and CRT scores are included in the regressions but omitted from the table; the first column gives results for the first time subjects encountered one of the nine methods, the second column for the repeated measurements; stars are given as follows (differently than in the other tables, due to the absence of a multiple-testing problem): *: p<0.10; **: p<0.05; ***: p<0.01

Method Quality Indicators
We use two avenues to measure a method's quality: its predictive power (section 3.4.1) and its stability (section 3.4.2).

Predictive Power
In order to see which method predicts behavior best in our benchmark games, we look at three statistics: the predictive power by simple OLS regression, the predictive power by Spearman rank correlation, and the absolute average deviation from the prediction.
In Table 7, the upper part shows the outcome of OLS regressions, controlling for personality measures and socioeconomic variables. The lower part of Table 7 shows Spearman rank correlation coefficients, which we include because, besides the absolute size of the elicited coefficients, the correct rank ordering of subjects is essential, since these methods are often used to control for the role of risk attitudes in various settings. The OLS regression can be understood as follows: the dependent variable is the outcome of a particular benchmark game, and the independent variables include the elicited risk aversion parameter ρ from one of the risk elicitation methods 24 plus all controls mentioned above. 25 The resulting coefficients in the investment games are negative because a higher ρ implies risk-averse behavior, and therefore lower investments in the benchmark games; the reverse is true for the auction. The corresponding adjusted R² values can be found in parentheses below the coefficients.
The OLS regression equation is then given by

BG_i = β_0 + β_1 MPL_j + β_2 BIG5 + β_3 SE + β_4 CRT + ε,

where i denotes the index of benchmark games (BG), j denotes the index of risk measures, MPL denotes the outcome of a risk elicitation method, BIG5 denotes personality measures according to the BIG5, SE denotes the socioeconomic variables and CRT denotes the number of correct answers in the cognitive reflection test.
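A sketch of such a regression in Python (simulated data; the variable names and the reduced control set are ours for illustration):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 96
df = pd.DataFrame({
    "investment": rng.uniform(0, 10, n),  # benchmark-game outcome (BG)
    "rho_mpl": rng.normal(0, 1, n),       # elicited CRRA coefficient (MPL)
    "crt": rng.integers(0, 4, n),         # cognitive reflection score
    "age": rng.integers(18, 35, n),       # one socioeconomic control
})

# BG = beta_0 + beta_1 * MPL + controls + error, as in the text
model = smf.ols("investment ~ rho_mpl + crt + age", data=df).fit()
print(model.params["rho_mpl"], model.rsquared_adj)
```

With real data, the sign of the `rho_mpl` coefficient in an investment game should be negative, since higher risk aversion implies lower investment.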
Additionally, we can calculate a point prediction in each of the benchmark games for each risk elicitation method (but not for the questionnaires). In Table 8 we report the absolute average deviations from these predictions, averaged over all three benchmark games according to the formula

AAD = (1 / 3N) Σ_i ( |H_i − H_i^*| + |L_i − L_i^*| + |A_i − A_i^*| / 2 ),

where H_i denotes high investment game outcomes and H_i^* high investment game predictions (L stands for the low investment game and A for the auction). 26

In the auction, none of the methods produce statistically significant results in the OLS regression. This is puzzling, as the auction can in itself be seen as a risk elicitation procedure, albeit with heavy framing. Recent literature, however, provides evidence that not only risk attitudes but also other factors like regret aversion (Engelbrecht-Wiggans and Katok, 2008) could drive behavior in auctions. As far as the Spearman rank correlation is concerned, the SGp method is the only one that is rank-correlated (p < 0.05) with auction behavior. In the investment games, the methods produce much better results. In the low investment setting, PGhigh has the biggest explanatory power, with SGhigh being a close follower.

24 The results do not change qualitatively if we use the switching points as independent variables instead of ρ. This data is available upon request. 25 It is not possible to add controls in the Spearman rank correlation. 26 Note that we divide the deviation in the auction game by 2 because the choice range in the auction game is twice as large.

Notes: In the OLS regression, the dependent variable is the outcome in one of the four benchmark games, the independent variables are the outcome in terms of ρ from one method plus controls (age, gender, BIG5, CRT test, income, years of university education); the adjusted R² value for the regression can be found below each coefficient; stars are given as follows: *: p<0.05; **: p<0.01; ***: p<0.001
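The deviation measure just defined can be sketched as follows (hypothetical numbers; `aad` is our illustrative name):

```python
import numpy as np

def aad(H, Hs, L, Ls, A, As):
    """Absolute average deviation from the point predictions, averaged over
    the three benchmark games; the auction deviation is halved because its
    choice range (0-20) is twice that of the investment games (0-10)."""
    H, Hs, L, Ls, A, As = map(np.asarray, (H, Hs, L, Ls, A, As))
    per_subject = (np.abs(H - Hs) + np.abs(L - Ls) + np.abs(A - As) / 2) / 3
    return per_subject.mean()

# Two hypothetical subjects: (outcome, prediction) per benchmark game
print(aad([10, 8], [9, 8], [5, 6], [5, 4], [12, 10], [10, 10]))
```

Halving the auction term keeps all three games on a comparable scale before averaging, matching footnote 26.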
Note that it is surprising that PGhigh is the best predictor both in the regression and the rank correlation, as the investment games can themselves be interpreted as standard gamble methods, so one would expect one of the SG methods to perform best.
In the high investment setting, many methods (PGhigh, PGlow, SGhigh, SGlow, SGp, and the questionnaires) are able to explain a part of the variance, with SGp giving the best results (p < 0.01). Note that in this setting, survey-based measures perform very well, so questionnaire measures seem to serve as good proxies for subjects' risk preferences in some circumstances. Note that the adjusted R² values are relatively low in general; we added the above-mentioned controls to our regressions, which are not able to pick up much of the variation. 27 As far as the deviations from the predictions are concerned, PGhigh performs best, with an average deviation of 1.75 across all benchmark games, with SGp and SGlow also having low deviations. 28 In conclusion, PGhigh and SGp yield the best results in explaining behavior, with PGhigh having the lowest deviation from the prediction of behavior in the benchmark games. We conclude that PGhigh has the highest predictive power, with SGp being a close runner-up. Additionally, we relax our assumption of CRRA and perform robustness checks taking CARA, DRRA, DARA, IRRA and IARA into account in Tables 23-34 in the Online Resource. 29 Furthermore, due to the ample evidence on violations of EUT, we provide the same regressions taking probability weighting 30, prospect theory 31 and cumulative prospect theory 32 into consideration. The results show that our findings remain quantitatively and qualitatively the same under different specifications. In general, we see similar explanatory power and, in the vast majority of cases, the same significance levels for the PGhigh and SGp methods, which confirms our findings. In some specifications, we even see that the coefficient for PGhigh becomes significant also for the investment games with low stakes, for example under DRRA. A further justification is that some of the other methods (SGhigh, PGlow and questionnaire methods) lose significance under some of the above-mentioned alternative theoretical foundations and functional forms (for example under PT, IRRA or IARA).

27 In all regressions, none of the controls were significant at p < 0.05. This implies that behavior seems to be primarily driven by risk attitudes.
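As an illustration of the probability-weighting robustness check, a two-outcome lottery in the gains domain can be evaluated with the Tversky-Kahneman (1992) weighting function and a power value function (the parameter values gamma = 0.61 and alpha = 0.88 are the published estimates for gains; the lottery itself is hypothetical):

```python
import numpy as np

def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman (1992) probability weighting function for gains."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def pt_value(c, alpha=0.88):
    """Power value function u(c) = c**alpha in the gains domain."""
    return np.asarray(c, dtype=float) ** alpha

def pt_lottery_value(p_high, high, low):
    """Value of a two-outcome gains lottery: the decision weight is applied
    to the high outcome, the remainder to the low outcome."""
    w = tk_weight(p_high)
    return w * pt_value(high) + (1 - w) * pt_value(low)

print(pt_lottery_value(0.5, 14.0, 5.0))
```

Replacing the CRRA utility with this valuation and re-deriving the implied switching points is the kind of re-estimation behind the robustness tables in the Online Resource.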

Stability Measures
In this section, we evaluate the stability of the different MPL representations. Remember that after our subjects had gone through all nine MPL methods, three of them were randomly chosen and presented to them again. A method can be described as stable if the answers given the first and the second time a method was encountered are very similar. To analyze this similarity, we use three criteria: equality of the overall distribution, equality of the rank ordering, and the absolute average deviation between the first and second answers. For completeness, we also report the perceived complexity of each method. 33 Table 9 reports these measures. The first column gives p-values from a Kolmogorov-Smirnov test that evaluates whether the distributions of the first and the second time a method is encountered are the same. A significant p-value means that the distributions are significantly different from each other, indicating low stability of the overall distribution across a 30-minute time period.

28 Note that one might be concerned with this analysis because if a method generally classifies subjects as risk-averse, it is not surprising that it explains behavior well in the low investment setting, as subjects naturally behave risk-averse in this setting due to the parameters. However, this critique is not valid for any method that provides good predictions across multiple benchmark games (e.g. PGhigh). 29 We used subjects' self-reported monthly income and expenditure as a proxy for their wealth. This was necessary since our subject pool consisted of students, who are not expected to have any wealth; their monthly income or expenditure can serve as a good proxy for this purpose, as these are expected to be highly correlated with wealth (Persson and Tabellini, 1994) and social class. 30 In the Online Resource, we report the regressions using the probability weighting functions. 31 As our lotteries are in the gains domain, prospect theory amounts to u(c) = c^α. 32 For prospect theory and cumulative prospect theory as well as probability weighting functions, we used the functional forms and parameters provided in Tversky and Kahneman (1992) and Quiggin (1982) to create the tables in the Online Resource; different specifications for PT provided in Camerer and Ho (1994) and in Wu and Gonzalez (1996), as well as for PW in Prelec (1998), do not change the results qualitatively. Specifications and parameters for DRRA and IRRA can be found in Andersen et al. (2012) and Saha (1993), respectively, and for IARA and DARA in Saha (1993).
The second column gives the rank correlation between the first and second time a method was encountered. This measure is important because if a method's overall distribution merely shifted up or down without changing the rank ordering of subjects, the method can still be described as stable, since the ordering of subjects remains the same. The third column reports the absolute average deviation (AAD) of subjects' answers when a particular method is presented to them again, compared to the first time; a lower value is therefore better. The last column gives the means of the perceived complexity of a method on a 1 to 10 Likert scale.
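The three stability criteria can be computed as follows (a sketch on simulated first-round and second-round answers, not our data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Simulated switching points of 30 subjects, first and second encounter;
# the second answers wobble by at most one step.
first = rng.integers(1, 10, size=30).astype(float)
second = np.clip(first + rng.integers(-1, 2, size=30), 1, 9).astype(float)

ks_p = stats.ks_2samp(first, second).pvalue            # equality of distributions
rank_rho = stats.spearmanr(first, second).correlation  # stability of rank ordering
aad = np.abs(first - second).mean()                    # absolute average deviation
print(ks_p, rank_rho, aad)
```

A stable method shows a nonsignificant KS p-value, a high rank correlation, and a low AAD, exactly the three columns of Table 9.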
To visualize these results, we also report the distributions of the differences in switching points between the first and the second time a method is encountered in Figure 2.
A method that does not yield stable results even over a 30-minute time period cannot be described as stable, and stability is a highly desirable characteristic of a risk measure. For the KS test (column 1 in Table 9), stability relative to the other methods is indicated by a nonsignificant result: for any method with a significant result, the overall distributions of answers differ between the first and the second time the method was encountered. Four methods have a nonsignificant p-value: SGp, SGhigh, PGp and PGhigh. A significant rank correlation (column 2 in Table 9) also indicates a stable risk measure, as it allows for a shift in the distribution but no change in the rank ordering. Three methods have significant rank correlations at p < 0.01: SGp, SGsure and PGhigh. A low absolute average deviation in answers is also an indicator of a stable risk measure, and the method with the lowest deviation is PGhigh, followed by PGp and SGsure.
Concerning complexity, we see that a method that is perceived as less complex does not necessarily yield more stable answers: SGlow has the lowest complexity rating, yet it is classified as unstable in all three categories. However, a general tendency of low complexity indicating more stability can be observed.

Fig. 2 Distributions of absolute differences in switching points between the first and the second time a method is encountered
As far as a possible relationship between stability and the control variables (CRT and BIG5 scores, age, gender, income, years of university education) is concerned, no significant effects were found, so the results are omitted here. 34 Finally, we note that the slight differences in the number of observations between the repetitions of particular methods (caused by the pseudo-random number generator) do not drive within-method consistency.
We conclude that PGhigh is the most stable method, as it is the only method that performs well in all three categories, with the overall distributions of switching points not significantly different, high rank correlations and low average deviation. SGp, SGsure and PGp perform well in two of the three categories.

Results Conclusion
In the benchmark games, as far as predictive power is concerned, we conclude that PGhigh has the highest predictive power, with SGp being a close second, irrespective of the assumed functional form or theoretical framework. 35 We conclude that only the PGhigh (Drichoutis and Lusk, 2012), PGp (Holt and Laury, 2002), SGsure (Cohen et al., 1987; Abdellaoui et al., 2011) and SGp (Bruner, 2009) methods lead to consistent results within a 30-minute time frame, with the PGhigh method being by far the most consistent: its performance is superior to the other methods in terms of deviations from normative predictions as well as overall and relative stability across time. Our findings are further supported by the fact that we controlled for personality traits, order effects, various socioeconomic factors and cognitive reflection in our analyses.
Therefore, we conclude that while SGp also has high predictive power and good stability in answers, the most stable MPL method with the highest predictive power is PGhigh, which in our interpretation corresponds to the method derived by Drichoutis and Lusk (2012).

Conclusion
We conducted a holistic assessment and analysis of the MPL risk elicitation methods present in the economics literature with a sophisticated experimental design using a unified framework and representation method. Previous findings in the literature (Dave et al., 2010; Crosetto and Filippin, 2015; etc.) indicate that the between-method consistency of particular methods is low. We confirm this finding by extending our analysis to all popular methods using the same representation. Furthermore, we show that distributional differences among methods are far from negligible. In addition, we investigate the time consistency of all these methods and document substantial differences within a 30-minute time period for most of them. All this implies that an arbitrary selection of a particular risk assessment method can lead to differing results and misleading revealed preferences. Thus, it matters which elicitation method researchers use in order to control for risk and other preferences.

34 On the subject level, we tested whether the average deviation between the first and repeated decisions is related to any of the control variables, whether the standard deviation of CRRA scores across all methods is related to the control variables, and whether the average differences between the predicted and the actual outcomes in the benchmark games are related, both per method and overall. At most weakly significant results were found; the results are available from the authors upon request. 35 For non-incentivized surveys, our data show that eliciting preferences with general and financial questions is a relatively good predictor compared to several incentivized elicitation methods.
Our main takeaway is a recommendation, based on objective criteria that assess within-method as well as between-method consistency and validity in real-world settings such as investments and auctions, of which elicitation method to use: the PGhigh method by Drichoutis and Lusk (2012). This particular method performs best if we look at the absolute deviations from the normative predictions in the benchmark games and also in terms of rank correlations. Furthermore, it yields highly correlated results within a 30-minute time frame in terms of individual deviations and the overall distribution. These findings remain robust (in some cases they are even more pronounced) if we relax our assumption of CRRA in favor of alternative functional forms such as CARA, DRRA, DARA, IRRA and IARA. Moreover, our conclusions remain the same if we allow subjective probability weighting or if we estimate risk attitude parameters in line with prospect theory or cumulative prospect theory.
In a broader context, researchers should take care when choosing a risk elicitation method, especially if they aim to control for risk attitudes in real-world contexts such as investment in assets. The nature of the task one intends to control for, as well as the trade-offs between noise, exactness and simplicity, should be taken into consideration. Moreover, we find that changing both the potential rewards and the probabilities is perceived as relatively complex by subjects and yields inconsistent results. A further point to consider is that varying the potentially achievable minimum payoff seems to induce more risk-averse behavior, while the presence of a certainty equivalent fosters risk taking. Cognitive ability, personality traits and other socioeconomic factors seem to be related neither to risk aversion nor to the extent of consistency we measured.
The debate between changing the probabilities or rewards (Bruner, 2009) seems to be far from settled as one of the methods in each context (PGhigh and SGp) delivers promising results. In addition, our findings might provide guidance in implementing other elicitation methods in the MPL format -e.g. loss aversion (Gächter et al., 2010), willingness to pay (Kahneman et al., 1990), individual discount rates (Harrison et al., 2002) -in terms of whether to vary probabilities, rewards or to use a certainty equivalent. On a final note we suggest that the relatively high variation in risk preferences across and within particular methods might not be mere artifacts -especially in light of other recent evidence (Andreoni et al., 2015). We encourage further research to shed light on the consistency of other preference elicitation mechanisms such as social preferences or overconfidence.

A Appendix
A.1 Robustness Checks

Table 10 shows standard correlation coefficients between methods; the results are qualitatively the same as in Table 5. Table 11 tests for linear order effects in the data: as the nine methods were presented to subjects in a randomized order, risk aversion or noise might increase or decrease as a function of previously seen MPLs. Coefficients in Table 11 can be interpreted as a change in the standard deviation (i.e. noise) of CRRA coefficients, or as changes in the CRRA coefficients themselves, as a function of order, i.e. the number of previously encountered MPLs. No significant effects were found. For robustness checks on functional form (PT, CARA, DRRA, IRRA, IARA, DARA and probability weighting), please refer to the Online Resource, Tables 23-34.

Notes to Table 10: Standard correlations between methods, analogous to Table 5. Our conclusions remain qualitatively the same. Stars are given as follows: *: p<0.05; **: p<0.01; ***: p<0.001

Notes to Table 11: Reported coefficients for OLS regressions; the dependent variable is the standard deviation (column 1) of the CRRA coefficient or the CRRA score (column 2) across all subjects in one particular order; the independent variable is the order in which a method appeared, i.e. the number of previously encountered MPLs; stars are given as follows: *: p<0.1; **: p<0.05; ***: p<0.01

A.2 Comparison of our Results to the Results in previous studies
In Table 12 we see the differences in the mean CRRA risk coefficients between previous studies and our results for each method, where we find that several studies deliver significantly different results from ours. This is not surprising for two reasons: first, risk elicitation methods are very noisy in general; for example, the same method with the same representation delivers significantly different results in Crosetto and Filippin (2013) and Crosetto and Filippin (2015), and the task by Holt and Laury (2002) has delivered highly heterogeneous results in past studies, as Table 12 shows. Second, framing and representation are vastly different in most studies when compared to our study. Furthermore, in the studies by Eckel and Grossman (2008), Reynaud and Couture (2012) and Crosetto and Filippin (2015), the risk-loving domain is not covered in the SGall task; that and the pull-to-center effect drive the risk estimates higher, which is also suggested by Bleichrodt (2002) and Andersen et al. (2006).

Table 12 (recoverable excerpt): Eckel and Grossman (2008): p = .001, 0.73, 0.9, N = 30; Reynaud and Couture (2012): p = .001, 0.694, 0.33, N = 444; Crosetto and Filippin (2015): p = .001. Notes: mean and standard deviation in terms of CRRA coefficients; N = 96 in our study; PGp by Crosetto and Filippin (2015) follows the method in Lejuez et al. (2002); in Lejuez et al. (2002) no standard deviation was reported

General Instructions
Welcome to this experiment in decision making.
You will be asked to make a series of choices that will affect your payment at the end of the experiment. Please pay close attention to the instructions, and if you have any questions raise your hand and an employee of the lab will help you with any questions you might have.
Also, you will be asked to fill out a short questionnaire, where your answers are not relevant for your payoff. All your decisions and answers during the experiment will stay completely anonymous to everyone.
In 'Part 1', you will make decisions in twelve different situations, where in each situation you will make one choice.
A sample decision screen is provided above and on the piece of paper in front of you. Please feel free to take notes on this paper.

Fig. 3 This is the first sample screen subjects saw; note that it does not match any of the nine methods.
On each screen you see two columns ('LEFT' and 'RIGHT'). In each column you have eight rows with different payment possibilities. You have to decide which option you prefer in each row.
For example, on the screen provided above, in the first row you will have to choose between:
'LEFT': receive 8€ with 100% probability
'RIGHT': receive 14€ with 36.0% probability or 5€ with 64.0% probability
In the second row, you would make a choice between:
'LEFT': receive 8€ with 100% probability
'RIGHT': receive 14€ with 41.6% probability or 5€ with 58.4% probability
In the center of the screen you will find a number of radio buttons. You can only click one of those buttons on each screen. This button indicates the row at which you want to switch from 'LEFT' to 'RIGHT'.
If you choose row 3, that means you prefer 'LEFT' for the first 2 rows and 'RIGHT' from row 3 onwards. When you are finished making your decision for a screen, click 'OK' and you will get to the next screen, where you will see your choice again. In case you are not satisfied with your choice, you can change it once if you wish. Your second choice is final and cannot be changed afterwards.
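The single radio button thus encodes a whole column of row-by-row choices. A minimal sketch of this mapping (the helper function is ours, not part of the experiment software):

```python
def choices_from_switch_row(switch_row: int, n_rows: int = 8) -> list:
    """Map a single switch point to per-row LEFT/RIGHT choices.

    switch_row is the first row in which the subject prefers RIGHT;
    rows are numbered from 1. switch_row = n_rows + 1 means LEFT everywhere.
    """
    return ["LEFT" if row < switch_row else "RIGHT" for row in range(1, n_rows + 1)]

# Choosing row 3 means LEFT in rows 1-2 and RIGHT from row 3 on:
print(choices_from_switch_row(3))
```

This enforces a single switching point by construction, which is why subjects cannot produce inconsistent (multiple-switching) choice patterns on these screens.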
After each decision screen, we will ask you how difficult it was for you to make a decision on the previous screen. These questions do not affect your payment. Still, we ask you to answer truthfully.
For your payoff in 'Part 1', one screen of the twelve will be randomly selected by the computer. The computer will also select one of the rows at random. In this row, you have chosen 'LEFT' or 'RIGHT'. Finally, the computer will randomize between the two possible outcomes based on the given probabilities.
For example, the computer chooses at random the following row to be relevant for your payoff:
'LEFT': receive 20€ with 70% probability or 5€ with 30% probability
'RIGHT': receive 15€ with 60% probability or 10€ with 40% probability
You have chosen 'LEFT' in that particular row. Therefore, you either get 20€ or 5€, but the probability of getting 20€ is higher.
Every screen and every row on each screen has an equal chance of being chosen by the computer to be relevant for your payoff. Since every decision matters, we advise you to think carefully about each one.
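The random-incentive mechanism described above can be sketched in a few lines (the data layout and function name are illustrative, not the actual experiment code):

```python
import random

def draw_payoff(screens, rng=random):
    """Pay one randomly chosen row on one randomly chosen screen.

    screens: list of screens; each screen is a list of rows, and each row
    holds the lottery the subject chose there as (payoff, probability) pairs.
    """
    screen = rng.choice(screens)       # every screen is equally likely
    row = rng.choice(screen)           # every row is equally likely
    outcomes, probs = zip(*row)
    return rng.choices(outcomes, weights=probs, k=1)[0]

# One screen, one row, where the subject chose 20€ w.p. 0.7 / 5€ w.p. 0.3:
print(draw_payoff([[[(20.0, 0.7), (5.0, 0.3)]]]))
```

Because every row can become payoff-relevant with equal probability, each choice is incentivized in isolation.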
After you have made these twelve decisions, 'Part 2' will start. Instructions for 'Part 2' will be given on the screens themselves. One situation in 'Part 2' will be randomly selected to affect your payment. 'Part 1' and 'Part 2' are completely independent of each other.
Your final payoff will then be the sum of your payoffs from 'Part 1' and 'Part 2'.
If you have no questions, please click 'OK' to answer a couple of control questions. These questions make sure that you have understood the setup. You cannot proceed to the experiment unless you answer the control questions correctly.
The experiment will start afterwards! If you have questions, please raise your hand at any time and an experimenter will provide assistance.

Auction Instructions
You will now participate in an auction against a computer opponent over a good that has a value of 20€. You can bid any amount from 0€ to 20€, and you can specify your bid down to the exact cent.
The computer will bid a random number from 0€ to 20€, down to the exact cent, and each number has an equal probability of being chosen.
If your bid is higher than the computer's, you will get 20€ minus your bid as your payoff. If your bid is lower than the computer's, you will get 0€.
If your bids are tied, the winner of the auction is selected randomly and you will receive the payoff of 20€ minus your bid with 50% probability.
Example 1: If you bid 12.41€ and the computer bids 16.53€, your payoff is 0€, as the computer's bid is higher than yours.
Example 2: If you bid 18.80€ and the computer bids 0.17€, your payoff is 1.20€ (=20-18.80) from this auction, as your bid is higher than the computer's.
Remember, you can bid any amount from 0€ to 20€. If you win the auction, your payoff is 20€ minus your bid. Now, please type in how much you want to bid for the good.
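Since the computer's bid is uniform on [0, 20], a bid b wins with probability b/20, so the expected payoff is (b/20)(20 − b), which a risk-neutral bidder maximizes at b = 10. A quick numerical check (illustrative code by the editor, not part of the instructions):

```python
def expected_auction_payoff(bid: float, top: float = 20.0) -> float:
    """Expected payoff against a uniform [0, top] computer bid.

    Win probability is bid/top (ties have probability zero against a
    continuous opponent); the prize net of the bid is top - bid.
    """
    return (bid / top) * (top - bid)

# Search over all feasible cent-level bids:
best = max((b / 100 for b in range(0, 2001)), key=expected_auction_payoff)
print(best, expected_auction_payoff(best))  # risk-neutral optimum: 10.0 5.0
```

Bids above 10€ therefore trade expected value for a higher winning probability, which is why behavior in this game serves as a benchmark for risk attitudes.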

Investment Game Instructions
You will now have the opportunity to invest an endowment of 10.00€.
There are two assets you can invest in: STOCKS and BONDS.
BONDS: The amount you invest in BONDS does not give returns. You will get the amount you invested as your payoff for sure.
STOCKS: STOCKS can have higher gains than BONDS, but are more risky. The amount you invest in STOCKS has a 50% chance to be multiplied by 1.5, and a 50% chance to be lost.
You can freely allocate your endowment of 10.00€ between the two assets, down to the exact cent.
Example 1: You invest 10€ in BONDS and 0€ in STOCKS. Your payoff will be 10€.
Example 2: You invest 2.58€ in BONDS and 7.42€ in STOCKS. Your payoff will be either 13.71€ (=7.42*1.5+2.58) with 50% probability, or 2.58€ with 50% probability.
Example 3: You invest all 10€ in STOCKS. Your payoff will be either 15€ (=10*1.5) with 50% probability, or 0€ with 50% probability.
Remember: You will receive the amount you invest in BONDS as your payoff.
You will receive the amount you invest in STOCKS times 1.5 with 50% probability, and 0€ with 50% probability.
Please choose how much you want to invest in STOCKS (the rest of your endowment will be invested in BONDS):
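For a stock investment s out of the 10€ endowment, the two possible payoffs are (10 − s) + 1.5s and 10 − s, each with probability 0.5. A short sketch reproducing the examples above (the function name is ours):

```python
def investment_payoffs(stocks: float, endowment: float = 10.0,
                       multiplier: float = 1.5):
    """Return the (good, bad) payoffs of investing `stocks` in STOCKS.

    BONDS (endowment - stocks) are paid back for sure; STOCKS are
    multiplied by 1.5 or lost, each with 50% probability.
    """
    bonds = endowment - stocks
    return bonds + multiplier * stocks, bonds

print(investment_payoffs(7.42))  # Example 2: approximately (13.71, 2.58)
```

Note that with a multiplier of 1.5 the stock's expected return (0.5 × 1.5 = 0.75 per euro) is below the bond's sure return, so any positive stock investment signals risk-seeking rather than mere risk tolerance.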

Text of Control Questions
Risk Preference Elicitation

Risk Preference Elicitation Methods
In this section we report the parameters for the gambles as they were presented to subjects. Note that all the preference elicitation methods described here are represented top-down for simplicity. An example of a bottom-up representation of a particular method is provided afterwards.

Robustness Checks on Functional Form
In this section we report the regressions from Table 7 under the alternative assumption that a subject's utility function does not follow CRRA but either PT, CARA, DRRA, IRRA, IARA or DARA. For a discussion of these different assumptions, see the end of Section 3.4.1 and Section 1.1. As far as significance is concerned, the results remain unchanged or provide an even more favorable picture under these assumptions: PGhigh becomes a significant explanatory factor under DRRA, and several other methods lose their significance under the other specifications. We conclude that our results are robust to the assumption about the underlying utility function.
Notes: Compared to Table 7 in the main text, significance remains the same, and R² values are lower. Spearman rank correlation coefficients are unchanged due to the nature of the functional form. Stars are given as follows: *: p<0.05; **: p<0.01; ***: p<0.001
Notes: Spearman rank correlations and adjusted R² values are unchanged compared to Table 7 in the main text due to the similarity of the functional form to CRRA; expected signs are the opposite of those for the other functional forms; stars are given as follows: *: p<0.05; **: p<0.01; ***: p<0.001
Notes: Coefficients, p-values and R² values are slightly better compared to Table 7 in the main text; stars are given as follows: *: p<0.05; **: p<0.01; ***: p<0.001
Notes: Generally lower R² than DARA; stars are given as follows: *: p<0.05; **: p<0.01; ***: p<0.001
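For reference, the baseline CRRA specification against which these robustness checks are run is u(x) = x^(1−r)/(1−r), with u(x) = ln x in the limit r = 1. A minimal sketch of how a lottery is evaluated under CRRA (illustrative only; the example lottery is the one from the sample screen, not from the regressions):

```python
import math

def crra_utility(x: float, r: float) -> float:
    """CRRA utility: x**(1 - r) / (1 - r), with the log limit at r = 1."""
    return math.log(x) if abs(r - 1.0) < 1e-12 else x ** (1.0 - r) / (1.0 - r)

def expected_utility(lottery, r: float) -> float:
    """Expected CRRA utility of a list of (payoff, probability) pairs."""
    return sum(p * crra_utility(x, r) for x, p in lottery)

# A risk-averse subject (r = 0.5) prefers the safe 8€ to the risky option:
safe = [(8.0, 1.0)]
risky = [(14.0, 0.36), (5.0, 0.64)]
print(expected_utility(safe, 0.5) > expected_utility(risky, 0.5))  # True
```

The alternative functional forms (CARA, DARA, etc.) replace `crra_utility` with the corresponding utility function; the rest of the evaluation is unchanged.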

Probability Weighting
In this section we report regressions that take probability weighting (PW) into account. For a discussion of these different assumptions, see the end of Section 3.4.1 and Section 1.1. Note that the only methods that change compared to the tables in the section above are SGp, PGp and PGall, as these have changing probabilities across rows. The conclusions we drew for Table 7 in the main text remain qualitatively the same if we assume probability weighting.
Notes: Stars are given as follows: *: p<0.05; **: p<0.01; ***: p<0.001
Notes: Expected signs are the opposite of those for the other functional forms; stars are given as follows: *: p<0.05; **: p<0.01; ***: p<0.001
Notes: Stars are given as follows: *: p<0.05; **: p<0.01; ***: p<0.001
Notes: Stars are given as follows: *: p<0.05; **: p<0.01; ***: p<0.001
Notes: Stars are given as follows: *: p<0.05; **: p<0.01; ***: p<0.001
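A widely used one-parameter specification for such exercises is the Tversky and Kahneman (1992) weighting function w(p) = p^γ / (p^γ + (1−p)^γ)^(1/γ); whether it matches the exact specification used in these regressions is an assumption on our part:

```python
def tk_weight(p: float, gamma: float = 0.61) -> float:
    """Tversky-Kahneman (1992) probability weighting function.

    gamma = 0.61 is their median estimate for gains; gamma = 1
    recovers linear (unweighted) probabilities.
    """
    num = p ** gamma
    return num / (num + (1.0 - p) ** gamma) ** (1.0 / gamma)

# Small probabilities are overweighted, large ones underweighted:
print(tk_weight(0.05) > 0.05, tk_weight(0.95) < 0.95)  # True True
```

Under probability weighting, the expected-utility sums are simply computed with w(p) in place of p, which is why only the methods with varying probabilities (SGp, PGp, PGall) are affected.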

Sample Screenshots
Fig. 5 Decision-making screen for lotteries; subjects indicated in which row they wanted to switch from the left to the right option by clicking one of the radio buttons in the middle
Fig. 6 Revision screen for lotteries; subjects indicated whether they wanted to revise their first decision