# An Interdisciplinary Review of Research in Conjoint Analysis: Recent Developments and Directions for Future Research

## Abstract

This review article provides reflections on the state of the art of research in conjoint analysis—where we came from, where we are, and some directions as to where we might go. We review key articles, mostly contemporary published since 2000, but some classic, drawn from the major marketing as well as various interdisciplinary academic journals to highlight important areas related to conjoint analysis research and identify more recent developments in this area. We develop an organizing framework that attempts to integrate various threads of research in conjoint methods and models. Our goal is to (a) emphasize the major developments in recent years, (b) evaluate these developments, and (c) to identify several potential directions for future research.

## Keywords

Conjoint analysis, Measurement, Preference, Utility functions

## 1 Introduction

Conjoint analysis is one of the most celebrated research tools in marketing and consumer research. This methodology which enables understanding consumer preferences^{1} has been applied to help solve a wide variety of marketing problems including estimating product demand, designing a new product line and calibrating price sensitivity/elasticity. The method involves presenting respondent customers with a carefully designed set of hypothetical product profiles (defined by the specified levels of the relevant attributes), and collecting their preferences in the form of ratings, rankings, or choices for those profiles.

Since the introduction of conjoint analysis in marketing research over four decades ago, a remarkable variety of new models and parameter estimation procedures have been developed. Some of these include the move from nonmetric to metric orientation and orthogonal experimental designs in the 1970s, developments in choice-based and hybrid conjoint including adaptive conjoint model in the 1980s, the growing popularity of hierarchical Bayesian and latent class models in the 1990s, and the adaptability of conjoint models to online choice tasks, incentivized contexts, group dynamics, and social influences in the past decade. Several earlier review articles in marketing and consumer academic research have documented the evolution of conjoint analysis.^{2} This manuscript provides an organizing framework for this vast literature and reviews key articles, critically discusses several advanced issues and developments, and identifies directions for future research. Cognizant of the fact that conjoint analysis has matured, this review is selective in the choice of articles, some classic but mostly contemporary focusing on the developments during the period post-2000; that have made or have the potential for having maximal impact in the field. Hopefully, this interdisciplinary review will encourage conjoint scholars to evolve beyond existing conjoint models and explore new problems and applications of consumer preference measurement, develop new forms of data collection, devise new estimation procedures, and tap into the dynamic nature of this methodology.

## 2 An Organizing Framework for Conjoint Analysis

The framework comprises five components: (*A*) *Behavioral and Theoretical Underpinnings*, (*B*) *Researcher Issues for Research Design*, (*C*) *Respondent Issues for Data Collection*, (*D*) *Researcher Issues for Data Analysis*, and (*E*) *Managerial Issues Concerning Implementation*. This framework considers all three relevant stakeholders: the researcher, the respondent, and the manager.

## 3 (A) Behavioral and Theoretical Underpinnings

### 3.1 A1. Behavioral Processes in Judgment, Preference, and Choice

Developments in judgment and decision-making research offer great potential for conjoint analysis to better understand the behavioral processes in judgment, preference, and choice. We now know how, and increasingly why, characteristics of the task and the choice options guide attention, and how internal memory and external information search affect choice in path-dependent ways.^{3} Recent research illustrates that preferences are typically constructed rather than stored and retrieved [111].

Judgments and choices typically engage multiple psychological processes, from attention-guided encoding and evaluation to retrieval of task-relevant information from memory or external sources, including prediction, response, and post-decision evaluation and updating. Attention is more important in decisions from description (e.g., the full-profile approach of conjoint analysis), whereas memory and learning are more relevant in decisions from experience, i.e., trial-and-error sampling of choice options [77].^{4} In decisions from experience, recent outcomes are given more weight and rare events are underweighted. In a similar vein, the insight from prospect theory that evaluation is relative continues to gain support [184]. Because neurons encode changes in stimulation rather than absolute levels, absolute judgments are much more difficult than relative judgments. Relative evaluation includes other observed or counterfactual outcomes from the same or different choice alternatives, as well as expectations.

Also relevant to conjoint analysis are recent extensions of decision field theory (DFT) and models of value judgment in multiattribute choice [94]. In these models, attributes of choice alternatives are repeatedly and randomly sampled, and each additional acquisition of information increases or decreases the valuation of an alternative in a choice set; the process ends when the first option reaches a certain threshold. DFT, as a multilayer connectionist network, has also been applied to explain context effects such as similarity, attraction, and compromise effects [143]. For instance, conjoint models that capture the compromise effect yield better prediction and fit than traditional value maximization models [102].
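The sample-and-accumulate-to-threshold idea behind these models can be sketched in a few lines. This is a toy illustration of the mechanism, not the published DFT model; the function name `dft_choice` and the valence lists are our own assumptions:

```python
import random

def dft_choice(valences, threshold=5.0, max_steps=10_000, rng=None):
    """Toy DFT-style accumulator: repeatedly sample a momentary valence
    for each alternative; the first alternative whose accumulated
    preference crosses `threshold` is chosen (illustrative only)."""
    rng = rng or random.Random(0)
    pref = [0.0] * len(valences)
    for _ in range(max_steps):
        for i, vals in enumerate(valences):
            pref[i] += rng.choice(vals)   # one more information acquisition
            if pref[i] >= threshold:
                return i                  # first to reach threshold wins
    return max(range(len(pref)), key=pref.__getitem__)
```

Under this sketch, an alternative whose sampled valences are larger on average tends to reach the threshold first, which is the qualitative prediction the text describes.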

While stimulus sampling models typically assume path-independence, choices are often biased toward early-emerging favorites, resulting in reference-dependent subsequent evaluations [97] and distortion of the value of options, i.e., the decision-by-distortion effect [16, 158]. Also, studies of anchoring suggest that the priming of memory accessibility (and hence preference) can be changed by asking a prior question, an effect that remains strong in the presence of incentives, experience, and market feedback [9]. Not only are there short-term changes, but long-term effects on memory have also been shown; for example, purchase-intention measurement has long-term effects on memory and subsequent purchases [28].

The recently developed query theory (QT) [95] on preference construction is a process model of valuation describing how the order of retrievals from memory plays a role in judging the value of objects, emphasizing output interference. Weber et al. [185] show that the queries about reasons supporting immediate versus delayed consumption are issued in reverse order thus making the endowment effect disappear. Similar to Luce’s choice axiom, the support theory (ST) [176] is a model of inference about the probability of an event that uses the relative weight of what we know and can generate about the event (its support) and compares it to what we know and can generate about all other possible events [184]. Since competing hypotheses are often generated by associative memory processes from long-term memory, irrelevant alternative hypotheses may well be generated and occupy limited-capacity working memory [52]. This has implications for conjoint research as consumers with greater working memory capacity can include more alternative hypotheses (i.e., explicit disjunctions) and thus greater discrimination and lower judged probability of the focal brand being chosen.

Recently, the dual-process models of System 1 and System 2 proposed by Kahneman [97] have gained popularity. Psychological models distinguish between a rapid, automatic, effortless, associative, intuitive process (System 1) and a slower, rule-governed, analytic, deliberate, and effortful process (System 2). The extent to which, and the process by which, the two systems interact [57] is a topic of debate. Both cognitive and affective mechanisms have been shown to give rise to the discounting of future events, as in delayed versus immediate consumption [185, 178]. These theories have implications for conjoint data collection for technology products and durable goods.

#### 3.1.1 Suggested Directions for Future Research

- 1.
While time-discounted utility models are useful in inter-temporal choice, there is also a need to incorporate various behavioral effects in conjoint models. Conjoint modelers can extend and augment inter-temporal utility specifications using temporal inflation parameters that represent differences in the "internal noise" invoked by behavioral researchers. An example is the recent critique by Hutchinson et al. [86] of the theoretical assumptions made in Salisbury and Feinberg's [150] stochastic modeling of experimental data, in which they empirically tested alternative models of choice and judgment with respect to assumptions about "internal noise" and "uncertainty about anticipated utility," as well as the stochastic versus deterministic nature of the scale parameters.

- 2.
Conjoint analysts can develop utility models that extend prospect theory and neuron-encoded relative judgments to better understand how consumers select reference points and how multiple reference points might be used in relative evaluation [184]. For instance, individual heterogeneity can be captured through a distribution of reference points rather than a single reference point such as reference price [50].

- 3.
Decision-makers may pay more equal attention to all possible outcomes than is warranted by their probabilities, and may linger on extreme outcomes to assess the best and worst choices in choice-based conjoint studies. Cumulative prospect theory, which explains how an outcome's probability is evaluated relative to its position in the configuration of outcomes [175], provides a useful avenue of research for this problem.

- 4.
The power of affect, feelings, and emotions in consumer judgment, preference, and choice is now well established [118]. Future conjoint research should incorporate the mechanisms of the dual process model, i.e. System 1 and System 2 models [97]. Also, decision affect theory provides a framework that incorporates emotional reactions to counterfactual outcome comparisons such as regret or loss aversion [34]. In a risky choice situation, the fit with self-regulatory orientation can also transfer as affective information into the choice task which could be modeled [78].

### 3.2 A2. Compensatory Versus Noncompensatory Processes

Much of the conjoint research assumes that the utility function for a choice alternative is additive and linear in parameters.^{5} The implied decision rules are compensatory. Generally speaking, linear compensatory choice models do not accommodate simplifying choice heuristics, such as truncation and level focus, that can produce an abrupt change in choice probability. Yet noncompensatory simple heuristics are often more (or at least equally) accurate in predicting new data than linear models, which are criticized for over-fitting the data [67, 89, 103]. While the linear utility model has been the mainstay of conjoint research, Bayesian methods, including data augmentation, can easily accommodate nonlinear models and can deal with irregularities in the likelihood surface [6]. Recently, Kohli and Jedidi [103] and Yee et al. [190] proposed dynamic programming methods (using the greedoid algorithm) to estimate lexicographic preference structures.
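To make the compensatory/noncompensatory contrast concrete, here is a minimal sketch (the function names, attributes, and part-worth values are our own illustrative assumptions, not the greedoid estimation method itself) in which an additive rule and a lexicographic rule rank the same two profiles differently:

```python
def additive_utility(profile, partworths):
    """Compensatory rule: sum of part-worths; strength on one
    attribute can offset weakness on another."""
    return sum(partworths[a][lvl] for a, lvl in profile.items())

def lexicographic_choice(profiles, attr_order, level_order):
    """Noncompensatory rule: compare profiles on the most important
    attribute first (levels listed best-first); later attributes
    only break ties."""
    def rank(p):
        return tuple(level_order[a].index(p[a]) for a in attr_order)
    return min(profiles, key=rank)

# Hypothetical two-attribute example where the rules disagree:
partworths = {"brand": {"X": 1.0, "Y": 0.0},
              "price": {"low": 3.0, "high": 0.0}}
profile_a = {"brand": "Y", "price": "low"}    # additive utility 3.0
profile_b = {"brand": "X", "price": "high"}   # additive utility 1.0
```

A brand-first lexicographic chooser picks `profile_b` despite its lower additive utility, which is exactly the kind of behavior a linear compensatory model cannot reproduce.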

Noncompensatory processes are particularly relevant in the context of consideration sets, an issue typically ignored by traditional conjoint research (e.g., [67, 91]). Many advocate a noncompensatory rule at the consideration stage and a compensatory model at the choice stage (the consider-then-choose rule), although some critics question the existence and parsimony of a formal consideration set (see Horowitz and Louviere [81], who find the same utility function in two-stage and one-stage models). For instance, in a study estimating consideration and choice probabilities simultaneously, Jedidi et al. [91] find that both segment-level and individual-level Tobit models outperform the traditional conjoint model, which ignores both consideration and the error component in preference. Similarly, Gilbride and Allenby [67] estimate a two-stage model using hierarchical Bayes methods, augmenting the latent consideration sets within their MCMC approach. Recently, Hauser et al. [76] proposed two machine-learning algorithms to estimate cognitively simple generalized disjunctions-of-conjunctions (DOC) decision rules, and Liu and Arora [114] developed a method to construct efficient designs for a two-stage, consider-then-choose model.^{6} Stuttgen et al. [164] continue the line of research started by Gilbride and Allenby [67] and Jedidi and Kohli [89] with a model that does not rely on compensatory trade-offs at all. These findings are consistent with economic theories of consideration sets wherein consumers balance search costs against the option value of utility maximization to achieve cognitive simplicity.
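The consider-then-choose idea can be sketched as a conjunctive screen followed by compensatory maximization. This is a hedged illustration under our own assumed profiles and screening rule, not any specific published estimator:

```python
def consider_then_choose(profiles, screens, utility):
    """Two-stage rule: a conjunctive screen forms the consideration
    set; a compensatory utility then picks from the survivors."""
    considered = [p for p in profiles if all(s(p) for s in screens)]
    return max(considered, key=utility) if considered else None

# Hypothetical profiles and rules (names and numbers are ours):
profiles = [{"price": 800, "quality": 9},
            {"price": 450, "quality": 5}]
screens = [lambda p: p["price"] <= 500]               # noncompensatory cutoff
utility = lambda p: p["quality"] - 0.01 * p["price"]  # compensatory stage
```

With an empty screen list the rule reduces to one-stage compensatory choice; with the price cutoff, the high-utility expensive profile is never even evaluated, which is the behavioral point of the two-stage models discussed above.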

#### 3.2.1 Suggested Directions for Future Research

- 1.
It seems that combining lexicographic and compensatory processes in a two-stage model using the greedoid algorithm in the first stage is a promising research route to follow as it enhances the ecological rationality of preference models (see [67, 103], and [89]).

- 2.
Several interesting behavioral processes, such as the formation and dynamics of the consideration set, still need to be understood. Given the technological advances (e.g., eye-tracking technology) in dealing with noncompensatory processes and satisficing rules, it behooves conjoint researchers to adopt such methods in the future (see [164, 173], and [157]).

- 3.
The take-the-best (TTB) strategy, which exploits knowledge about cue diagnosticity,^{7} performs particularly well when the distribution of cue validities is highly skewed. Several other heuristics have also performed well, including models that integrate TTB and full information [80, 109]. We encourage conjoint researchers to incorporate cue diagnosticity in estimating noncompensatory models.

- 4.
While the recognition heuristic (RH), which applies to inferences in which only one of two comparison alternatives is recognized, is a useful tool, the debate is whether recognition is always used as a first stage in inference or is simply one cue among others that can be integrated (see [132, 142]) without any special status. For future research, RH can potentially be applied in conjoint-choice contexts characterized by rapid, automatic, and effortless processing (i.e., the System 1 process [97]) typical of low-involvement routine products.

### 3.3 A3. Integrating Behavioral Learning and Context Effects

Conjoint analysis has made significant gains in incorporating behavioral theory into preference measurement. Recently, Bradlow et al. [20] investigated how subjects impute missing attribute levels when evaluating partial conjoint profiles, using a Bayesian "pattern-matching" learning model. Respondents impute values for missing attributes based on several factors, including their priors over the set of attribute levels, a given attribute's previously shown values, the previously shown values of other attributes, and the covariation among attributes. Alba and Cooke [3] caution that not all attributes are spontaneously inferred, and that even when inference is natural, symmetry may be violated such that the probability of imputing cause (e.g., quality) from effect (e.g., price) may deviate from the probability of imputing effect from cause. When information is intentionally retrieved, the weighting function may reflect uncertainty about the accuracy of the profiles or the ability to retrieve them.

There is substantial research in conjoint analysis to demonstrate context effects. Conjoint models in marketing research have assumed stable preference structures in that preferences at the time of measurement are the same as at the time of trial or purchase. However, context effects produce instability when the context at measurement does not match the context at decision time [15, 110]. DeSarbo et al. [45] introduced a Bayesian dynamic linear model (DLM)-based methodology that permits the detection and modeling of the dynamic evolution of individual preferences in conjoint analysis that occur during the task due to learning, exposure to additional information, fatigue, cognitive storage limitations, etc. (see [113]). Also, see Rutz and Sonnier [149] for Bayesian modeling (i.e., DLM method) of dynamic attribute evolution due to market structural changes for more details.

Kivetz et al. [102] find that incorporating the “compromise effect” leads to superior predictions and fit compared with the traditional value maximization model. Recently, Levav et al. [110] demonstrated using experimental studies that normatively equivalent decision contexts can yield different decisions, which challenges the assumption that people maximize utility and possess a complete preference ordering. This type of research attempts to bridge consumer psychology with marketing science. Other related work involving dynamic preference structures include Netzer et al. [129], Evgeniou et al. [59], Bradlow and Park [19], Fong et al.[64], Ruan et al. [148], Rooderkerk et al. [145], De Jong et al. [38], and Elrod et al. [56].

#### 3.3.1 Suggested Directions for Future Research

- 1.
There is clearly a need for more rigorous work to incorporate behavioral effects in preference measurement. While this may create a conflict between isomorphic goal of fit and paramorphic goal of predictive validity [130], a greater dialogue and collaboration between the two research camps is essential for improved quality of conjoint research.

- 2.
Future research in this conjoint arena should examine other documented behavioral effects such as asymmetric dominance, asymmetric advantage, enhancement, and detraction effects (see [2]).

- 3.
Since preference formation is a dynamic process dependent on learning and context effects, future researchers should attempt to further develop and use flexible models and dynamic random-effects models such as those used by Liechty et al. [113], and Bradlow et al. [20]. Many of the flexible models developed to capture dynamics in repeated choice (e.g., [107]) could be adapted to conjoint preference measurement.

- 4.
It would be worthwhile to investigate how choice probabilities change in choice-based conjoint and choice simulators when context effects and consumer expertise are built directly into the model as these may affect the likelihood and form of missing attribute inference [3].

### 3.4 A4. Group Dynamics and Social Interactions

A vast majority of choice models assume that a consumer's latent utility is a function of brand attributes, not the preferences of referent others. However, some scholars have examined the influence of referent others in dyadic and network contexts. For instance, Arora and Allenby [10] develop a Bayesian model to estimate the attribute-specific influence of spouses in a decision-making context and discuss how, and to whom, marketers can target communication messages effectively. Using a Bayesian autoregressive mixture model, Yang and Allenby [189] demonstrate that preference interdependence due to geographically defined networks is more important than demographic networks in explaining behavior. Ding and Eliashberg [47] proposed a new model that explicitly considers dyadic decision-making in ethical drug prescriptions in the context of physicians and patients. The issue of reducing hypothetical biases (e.g., socially desirable responses [48]) in group dynamics through innovative methodology, such as incentive-aligned conjoint studies, is critical.

Some exciting work has started using conjoint models in the domain of group dynamics and social interactions [129, 29, 68, 88, 127, 159].^{8} With the availability of "sentiment analysis" tools, firms are now able to extend beyond ratings data and capture a torrent of online textual communications from a variety of social media, including blogs, chat rooms, news sites, YouTube, and Twitter.^{9} Recently, Sonnier et al. [159], using web crawler technology and automated classification of sentiments, demonstrated that positive comments increased the dynamic stock while negative comments decreased it, and that such effects are masked when comment volume is aggregated across valence. Based on the theory of social contagion [88], Narayan et al. [127] study the behavioral mechanisms underlying peer influence on choice decisions and find that consumers update their inherent attribute preferences in a Bayesian manner, weighting the relative uncertainty of their own attribute preferences and those of their peers, and use peer choices as an additional attribute. This study is significant because the authors mitigate problems of endogeneity, correlated unobservable variables, and simultaneity by setting up a preinfluence and post-influence conjoint experimental design. Most recently, Kim et al. [101] introduced a holistic preference and concept measurement model for conjoint analysis called *PIE*, a new incentive-aligned data collection method that allows a consumer to obtain individualized shopping advice through other people.

#### 3.4.1 Suggested Directions for Future Research

- 1.
In the promising domain of group dynamics and social interactions for technology-based products, one important research question is what role internal versus external motivations of online information disseminators play in changing the posterior beliefs and preference structures of consumers [69]. For example, very little is known about what motivates opinion leaders and early adopters not just to possess information but to share it with others.

- 2.
There is a vast potential for conjoint models to draw from consumer research on reference group formation and social influences on buyer choice behavior such as internalization, identification, and compliance [141, 156]. In this area, barter conjoint offers a promising potential to model the effects of information diffusion among subjects and how endowment and loss-aversion effects [101, 22, 49] induce individuals to behave differently than conventional choice behavior.

- 3.
We issue a call for scholars to explore further developments in conjoint models that capture online recommender systems and social interactions given the rising importance of social media [32]. Existing algorithms using Classification and Regression Trees, Bayesian Tree Regression, and Stepwise Componential Regression can be further combined to develop an optimal sequence of questions to predict online visitor’s preference [37]. Additional research into problems involving multiple decision makers with multiple utility functions (e.g., in business-to-business applications) would prove valuable.

## 4 (B) Researcher Issues for Research Design

Conjoint researchers have long dealt with the problem of large numbers of attributes and levels with the help of experimental designs. The specific choice of design depends on a variety of factors, including the objectives of the research, cost, time, statistical sophistication, and the need to develop individual-level estimates. We focus on the research designs related to the more popular conjoint approaches: choice-based conjoint analysis, menu-based experimental choice, and the maximum difference (best/worst) conjoint method. We also briefly discuss some recent developments in experimental design and the handling of large numbers of attributes.

### 4.1 B1. Choice-Based Conjoint Analysis

Choice-based conjoint (CBC) analysis describes a class of hybrid techniques that are among the most widely adopted market research methods for conjoint analysis (see [137]).^{10} The early choice-based hybrid models used stage-wise regression, compositional models to fit self-explicated data, and the decompositional model at the segment level. However, hybrid models were later extended to allow for parameter estimation at the individual level using self-explicated data for within-attribute part-worth estimation, and using the full-profile approach for improving estimates of attribute importance.

Recent developments have allowed for estimation at the individual level through Bayesian estimation [71, 167], even though a respondent provides only a small amount of information within CBC. In the same vein, it is not clear whether segments obtained from CBC are similar to those found from post hoc clustering of part-worths [25]. One limitation of choice-based models, particularly with the development of multinomial logit estimation procedures, is the property of independence of irrelevant alternatives (IIA), which forces all cross-elasticities to be equal. However, researchers have developed ways to relax the IIA assumption by employing mixed-logit (random-parameters logit) models that allow for flexible variance-covariance structures. Building on recent work by Louviere and Meyer [116] and Louviere et al. [117], Fiebig et al. [61] argue that much of the heterogeneity in attribute weights is accounted for by a pure scale effect (i.e., holding attribute coefficients fixed, the scale of the error term is greater), leading to the scale-heterogeneity MNL model. Also noteworthy are recent developments in detecting and statistically handling attribute nonattendance, in which respondents focus on only a subset of attributes in choice-based conjoint. Scarpa and colleagues use two different panel mixed-logit models to account for response patterns of repeated exclusion that influence model estimation (see [154], [155], and [24]).
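The IIA property is easy to verify numerically for the plain multinomial logit: the ratio of two alternatives' choice probabilities does not depend on what else is in the choice set, because the shared denominator cancels. A minimal sketch (the utility values are arbitrary illustrations):

```python
import math

def mnl_probs(utilities):
    """Multinomial logit choice probabilities: softmax of utilities."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# IIA: P(A)/P(B) is the same with or without a third alternative.
pair = mnl_probs([1.0, 0.5])
trio = mnl_probs([1.0, 0.5, 2.0])
```

Mixed logit breaks this property by integrating the logit probabilities over a distribution of coefficients, which is why it can produce the flexible substitution patterns described above.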

#### 4.1.1 Suggested Directions for Future Research

- 1.
Several marketing scholars (see [130], [70], and [83]) identified the importance of advanced research into the direct modeling of behavioral effects on decision-making and choice (e.g., in choice-based conjoint analysis). The research issues include understanding of such behavioral phenomena as self-control, context effects, inattention, or reference dependence. The embedding of meta-attributes such as expectations, goals, motivations, reference groups, and social networks might also prove gainful in conjoint analysis.

- 2.
Another potential area of study is the modeling of individual-level structural heterogeneity. More specifically, are there some combination of attribute levels that create a change in the structure of the utility function utilized by a specific consumer? While conjoint scholars have explored compensatory vs. noncompensatory models for a given choice-based conjoint task, work involving potential regime shifts during the task by consumer would prove insightful (see [63]).

### 4.2 B2. Menu-Based Experimental Choice

In menu-based conjoint analysis, customers are asked to pick several features from a menu of features or products that are individually priced. A feature is chosen if its utility exceeds a certain threshold, and the utilities of all chosen features are maximized simultaneously, resulting in multiple chosen alternatives [112]. The responses therefore entail a binary vector of choices for each respondent for each menu scenario in the experiment. This is quite akin to choosing a bundle of items [31] from a larger set or designing a product using a product configurator, as when buying, for instance, a Dell laptop. Configurators represent a promising form of conjoint data collection in which the respondent self-designs the best product configuration [112]. Recently, Levav et al. [110] argue that in a mass customization decision (such as using a configurator), consumers can often lose the self-control needed to assess utility correctly in repeated choice situations, due to bounded rationality and the depletion of their mental resources [181]. Dellaert and Stremersch [40], borrowing from choice theory and task complexity theory, also demonstrated that consumers' product utility had a positive effect on mass customization utility while task complexity had a negative effect, albeit a weaker one for experts.
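The threshold rule described above maps naturally to a binary choice vector over the menu. A minimal sketch, with an assumed linear price term and illustrative numbers of our own:

```python
def menu_choice(utilities, prices, price_sensitivity, threshold=0.0):
    """Menu-based choice sketch: each feature is taken iff its net
    utility (utility minus the price term) clears the threshold,
    yielding one binary pick-vector per menu scenario."""
    return [1 if u - price_sensitivity * p > threshold else 0
            for u, p in zip(utilities, prices)]
```

For example, with feature utilities `[2.0, 0.5, 1.0]`, prices `[1.0, 1.0, 0.2]`, and price sensitivity 1.0, the first and third features clear the threshold while the second does not; the full statistical model [112] then treats such vectors as correlated multivariate responses rather than a single choice.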

In addition to the many menu choices that it generates, menu-based choice represents a modeling challenge that is distinct from the traditional single-choice analysis of data from choice-based conjoint experiments—e.g., using multinomial logit models or multinomial probit models. The Bayesian modeling approach in this context, entailing a constrained random-effects multinomial probit model [112], incorporates constraints in menu choices (e.g., firm-level design or production constraints) as well as heterogeneity in customers’ price sensitivities and preferences for the variety of customized options a firm can offer. In this multiple choice modeling scenario, researchers can assess the intrinsic worth of each feature, their price sensitivities, and model correlations among them for each individual. Web-based menus would allow firms to offer mass-customized services with every potential customer visiting their web site.

#### 4.2.1 Suggested Directions for Future Research

- 1.
Given the ability of menu-based conjoint to provide individual-level information and the growing reality of web-based mass customization, we encourage researchers to further study customer heterogeneity in demand and new channels of information exchange to maximize customer value.

- 2.
Conjoint scholars can add to our understanding of mass-customized choice processes by explicating individual traits, task factors, and decision strategies that influence customization complexity. To further refine the model, future conjoint scholars can incorporate a more general distance model that can explicitly account for the relative differences between attribute levels, unlike the 0–1 pattern-matching model (see [20]). This can be accomplished by combining conjoint analysis and MDS to impute missing attribute levels. When the number of attributes is large, mapping between attributes and some higher-order dimensions can be developed (i.e., conjoint utility functions) a la MDS methods. Methods of reverse mapping can yield part-worth values for the original attributes. But, this approach needs to be developed and validated.

- 3.
One other promising line of research here would be to study whether consumers enjoy mass customizing a product or service, and at what levels of complexity will they make suboptimal choices. It is possible that consumers also overspend their mental capacity early in the configuration sequence triggering a tendency to accept the default alternative in subsequent decisions, even when such decisions involve few options that would require less capacity to evaluate. A related issue in need of further investigation is minimizing the dysfunctional consequences of information overload in conjoint studies.

### 4.3 B3. Maximum Difference Scaling—Best/Worst Conjoint

Based on a multinomial extension of Thurstone's model for paired comparisons, Finn and Louviere [62] developed a univariate scaling model (MaxDiff) that can be used to measure brand-by-attribute positions, develop univariate scales from multiple measures, etc. Swait et al. [166] describe how to generalize MaxDiff to conjoint applications, which they call Best/Worst (B/W) conjoint analysis. In the B/W method, respondents choose the two attribute levels that are, respectively, "best" and "worst" in each product profile. With such data, the method enables the estimation of a separate attribute effect for each attribute, independently of its part-worths; this is an important advantage over traditional additive conjoint and choice models, which do not allow for such separation [166]. B/W experiments have also been found to contain less respondent error than choice-based conjoint models with the same attributes and levels [166]. Other advantages include allowing for ties in evaluations, unlike ranking tasks, and a more discriminating measure of attribute importance, with greater predictive validity, than either rating scales or the method of paired comparisons. B/W measurements are scale-free and thus ideal for comparisons across cultural groups that use scales quite differently [33], without any need for prior assumptions regarding the scaling of evaluation and choice. Consequently, maximum difference scaling has been used extensively in Best/Worst conjoint analysis. However, limitations include the need to evaluate both positive and negative attributes, the effects of having only best or worst features versus both, collinearity, and sequence effects, among others. For example, MaxDiff results are shown to be less accurate at the "best" end, but augmentation (e.g., Q Sort) improves MaxDiff results on "best" items [53].
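In practice, a simple best-minus-worst count per item is often used as a first summary of MaxDiff data before any formal logit modeling. The counting scheme below is that common aggregate score, not Swait et al.'s model; item labels are illustrative:

```python
from collections import Counter

def bw_scores(tasks):
    """Aggregate best-minus-worst counts per item.

    tasks: iterable of (best_pick, worst_pick) pairs, one per
    best/worst question a respondent answered."""
    score = Counter()
    for best, worst in tasks:
        score[best] += 1   # picked as "best"
        score[worst] -= 1  # picked as "worst"
    return dict(score)
```

Over three hypothetical tasks where item "a" is twice best, "b" twice worst, and "c" once each, the scores come out 2, -2, and 0, a scale-free ordering of the items consistent with the comparability advantage noted above.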

#### 4.3.1 Suggested Directions for Future Research

- 1.
Best/Worst allows for ties in evaluations and for skewed preference functions, unlike ranking tasks. Whether B/W and choice-based conjoint produce equivalent part-worth utilities, after adjusting for differences in respondent error, is currently unknown, as results have been mixed [166]. More research is needed to further validate the B/W method.

- 2.
More recently, Marley and Louviere [121] have developed several probabilistic B/W choice models: the consistent random utility B/W choice model, the MaxDiff model, the biased MaxDiff model, and the concordant B/W choice model (see also [122]). However, questions remain about whether the B/W method is consistent with random utility theory. A related question is whether the judgments respondents make in a B/W task can be treated as though they had been made in an alternative-based choice, ranking, or rating task using compensatory rules.

### 4.4 B4. Developments in Experimental Design

Rating-based methods in marketing conjoint studies have frequently utilized resolution III designs (orthogonal arrays), which confound some main effects with two-factor interactions and therefore assume those interactions are negligible. In general, orthogonal designs for linear models are efficient as measured by A-, D-, and G-efficiency, computed from the eigenvalues of the \( {\left(X\hbox{'}X\right)}^{-1} \) matrix (recently, Toubia and Hauser [169] proposed a managerial-efficiency criterion, M-efficiency, as well). Kuhfeld [106] showed that the OPTEX procedure produces more efficient designs; however, it fails to achieve the perfect level balance or the proportionality criteria of orthogonal arrays. For choice-based conjoint methods, Huber and Zwerina [85] show that achieving utility balance increases efficiency. Building on their work, Sandor and Wedel [151] develop Bayesian efficient designs (through relabeling, swapping, and cycling) that minimize standard errors and yield higher predictive validity. Subsequently, Sandor and Wedel [152] develop efficient designs that are optimal for mixed-logit models by evaluating the dimension-scaled determinant of the information matrix of the mixed multinomial logit model. Because the choice-based conjoint model is nonlinear, both minimal attribute-level overlap and utility balance within choice sets are desirable. Rose et al. [146] extend the Sandor and Wedel approach to construct statistically (S-) efficient Bayesian designs for a given sample size, based on parameter values, random-parameters logit mixing distributions, and model specifications [146, 99]. The trade-off is that greater choice task difficulty is typically accompanied by greater response error, and thus lower response (R-) efficiency.
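The efficiency criteria above can be made concrete with a short sketch. The function below computes the standard normalized D-efficiency of a coded design matrix; the example design is illustrative:

```python
import numpy as np

def d_efficiency(X):
    """D-efficiency (in %) of a coded design matrix X (n runs x p parameters).

    D-eff = 100 * |X'X|^(1/p) / n; it equals 100 for an orthogonal design
    whose coded columns have unit variance, and is lower otherwise.
    """
    n, p = X.shape
    return 100.0 * np.linalg.det(X.T @ X) ** (1.0 / p) / n

# A 2^2 full factorial with intercept: orthogonal, hence 100% D-efficient
X = np.array([
    [1, -1, -1],
    [1, -1,  1],
    [1,  1, -1],
    [1,  1,  1],
], dtype=float)
```

Dropping a run from this design breaks orthogonality and lowers its D-efficiency, which is exactly the kind of degradation that design-search procedures such as OPTEX try to minimize.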

Despite these developments, some limitations remained, such as the need to obtain repeated observations from each respondent, the use of aggregate-customization designs that are optimal only for the average respondent, and the challenge of computing the ordinary Fisher information matrix. These were later partly addressed by Sandor and Wedel [153], who used a small set of different designs for different consumers to capture respondent heterogeneity. Recently, Yu et al. [191], using the generalized Fisher information matrix, proposed an individually adapted sequential Bayesian approach that generates a conjoint-choice design tailor-made for each respondent. The method is superior to benchmarks such as aggregate customization and orthogonal designs, both in the estimation of individual-level part-worths (and population-level estimates) and in choice prediction. Further, this method is less sensitive to low response accuracy than the polyhedral method proposed by Toubia et al. [171] and their subsequently adapted method [172]. New developments are also emerging in the area of choice set design for forced choice experiments. For example, Burgess and Street [21] developed procedures to construct near-optimal designs for estimating main effects and two-factor interactions with a smaller number of choice sets, and they derive the relevant mathematical theory for such designs; see [21, 163, 161, 162] for detailed descriptions.

#### 4.4.1 Suggested Directions for Future Research

- 1.
Newer adaptive-question methods based on active machine learning are proving very successful relative to market-based, random, and orthogonal-design questions when consumers use noncompensatory heuristics; see Abernethy et al. [1] and Dzyabura and Hauser [54]. We encourage more research along this direction.

- 2.
The trade-off between S- and R-efficiency is an interesting issue to resolve going forward. While greater S-efficiency yields smaller variance, increasing R-efficiency by reducing task complexity through attribute overlap reduces S-efficiency. The evidence remains inconclusive, and more research is needed on whether efficient experimental designs improve the precision of choice model estimates once task complexity is taken into account (see [99]).

### 4.5 B5. Handling a Large Number of Attributes

A comprehensive review of various methods for dealing with a large number of attributes is available in Rao et al. [140]. Several scholars are currently working on this issue [35, 128]. For instance, Dahan [35] simplified the conjoint task (using the Conjoint Adaptive Ranking Database System) by asking respondents to choose only among a very limited number of sets that are perfectly mapped to specific utility functions proposed in advance by the researcher. Park et al. [134] proposed a new incentive-aligned, web-based upgrading method for eliciting attribute preferences for complex products (e.g., cameras); this method enables participants to upgrade one attribute at any level, from among a large number of attributes, allowing for dynamic customization of the product. Their empirical application shows that the upgrading method is comparable to the benchmark self-explicated approach, takes less time, and has higher external validity.

Recently, Netzer and Srinivasan [128] proposed a web-based adaptive self-explicated (ASE) approach to address the problems with self-explicated constant-sum questions when the number of product attributes becomes large. The ASE method breaks the attribute importance question into a ranking of the attributes followed by a sequence of constant-sum paired comparison questions, two attributes at a time, thus replacing the importance measurement stage of the traditional self-explicated model. Attribute importances are estimated using a log-linear regression model (with OLS estimation), which offers the added benefit of standard error estimates. The ASE method significantly and substantially improved predictive validity compared with the self-explicated model, adaptive conjoint analysis, and the fast polyhedral method.
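The log-linear OLS idea behind ASE-style importance estimation can be sketched as follows, under the simplifying assumption that each 100-point constant-sum answer estimates the ratio of two attribute importances; the function, data, and base-attribute normalization below are illustrative, not the published implementation:

```python
import numpy as np

def ase_importances(n_attrs, pairs):
    """OLS log-ratio estimate of attribute importances (ASE-style sketch).

    pairs: list of (i, j, points_to_i) from 100-point constant-sum questions;
    each answer gives log(w_i / w_j) ~= log(points_i / (100 - points_i)).
    Attribute 0 is the base (log w_0 := 0); importances sum to 1.
    """
    rows, y = [], []
    for i, j, pts in pairs:
        r = np.zeros(n_attrs)
        r[i], r[j] = 1.0, -1.0
        rows.append(r)
        y.append(np.log(pts / (100.0 - pts)))
    X = np.array(rows)[:, 1:]                 # drop base column: log w_0 = 0
    beta, *_ = np.linalg.lstsq(X, np.array(y), rcond=None)
    w = np.exp(np.concatenate(([0.0], beta)))
    return w / w.sum()

# Hypothetical constant-sum comparisons among three attributes
w = ase_importances(3, [(1, 0, 60), (2, 0, 75), (2, 1, 66.6667)])
```

Because the model is estimated by OLS on log ratios, standard errors for the importances follow directly from the regression, one of the benefits noted above.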

As with the large-number-of-attributes problem, researchers should also consider the number-of-levels effect: as the number of intervening attribute levels increases, the derived importance of the attribute also increases. Prior studies have linked this phenomenon to the data collection methodology, the measurement scale of the dependent variable, and parameter estimation procedures [179], but results are somewhat inconclusive. More recently, De Wilde et al. [39] explain this phenomenon in terms of selective attention, arguing that attentional contrast directs attention away from redundant attribute levels and toward novel ones in sequential evaluation procedures (e.g., traditional full-profile conjoint analysis and choice-based conjoint).

#### 4.5.1 Suggested Directions for Future Research

- 1.
The search for methods of coping with a large number of attributes has been identified as one of the key areas for future research [18]. An approach that holds promise is to have subsamples of respondents provide data on subsets of attributes, with some linkages among the sets, as in bridging conjoint analysis. Hierarchical Bayesian methods can then be applied to such data to estimate part-worths at the individual level. We encourage conjoint scholars to further advance this line of research.

- 2.
Given scant research, there is a need for studies, using simulations as well as empirical data, to compare the relative efficacy of the different methods of handling a large number of attributes. Future research should assess how measurement technique, attribute representation, and experimental design influence the relative novelty of an attribute's levels at the time of measurement. Further, conjoint scholars should develop algorithms that are sensitive to level balance across attributes, especially for unbalanced designs.

## 5 (C) Respondent Issues for Data Collection

Over the years, conjoint research has focused either on preference ratings (or rankings) of a number (between a dozen and thirty) of carefully designed product profiles (ratings-based methods) or on stated choices from several choice sets of product profiles, including a no-choice option. When the number of attributes becomes large (i.e., more than about six), adaptive or partial-profile methods have been employed. These approaches have matured, and few open research issues remain in this arena. We therefore focus on newer developments with greater future research potential: incentive alignment and willingness to pay, barter conjoint and conjoint poker, meta-attributes and complexity of stimuli, and the role of the no-choice option.

### 5.1 C1. Incentive Compatibility and Willingness to Pay

Ding et al. [48] found strong evidence in favor of incentive-aligned choice conjoint: better out-of-sample predictions and a more realistic preference structure exhibiting higher price sensitivity, lower risk-seeking behavior, and lower susceptibility to socially desirable responding. This development has cast doubt on the assumption, common in stated preference data, that purchase intent and choice are closely related. A real challenge, however, is to implement incentive alignment for really new or complex products, when it is not cost-effective to offer a real product to each participant or to generate all product variations.

Dzyabura and Hauser [54] addressed the cost issue by implementing an active machine-learning algorithm that approximates the posterior with a variational distribution and uses belief propagation to update it. Questions are selected sequentially to minimize the expected posterior entropy by anticipating the potential responses (i.e., to consider or not to consider). Their study confirms that consumers use cognitively simple heuristics with relatively few aspects and that adaptive questions search the space of decision rules efficiently. Ding [46] addressed the issue of “all product variations” by developing a truth-telling mechanism for conjoint participants under which truthful responding is a Bayesian Nash equilibrium. The BDM (Becker-DeGroot-Marschak) procedure ensures that it is in a participant's best interest to have his or her *inferred* willingness to pay equal his or her *true* willingness to pay.
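The incentive property of the BDM procedure is easy to verify by simulation: because the purchase price is drawn independently of the stated WTP, any misstatement can only lose surplus. The values and the uniform price distribution below are illustrative:

```python
import random

def bdm_trial(stated_wtp, true_wtp, price_draw):
    """One BDM round: the respondent buys at the random price iff it is
    at or below the stated WTP; returns the realized surplus."""
    return true_wtp - price_draw if price_draw <= stated_wtp else 0.0

def expected_surplus(stated_wtp, true_wtp, n=100_000, seed=7):
    """Monte Carlo expected surplus with prices drawn uniformly on [0, 100]."""
    rng = random.Random(seed)
    return sum(bdm_trial(stated_wtp, true_wtp, rng.uniform(0, 100))
               for _ in range(n)) / n

truthful = expected_surplus(40.0, 40.0)
over = expected_surplus(60.0, 40.0)   # overstating risks buying above value
under = expected_surplus(20.0, 40.0)  # understating forgoes good deals
```

With the same price draws, truthful reporting weakly dominates both misstatements draw by draw, which is the sense in which the mechanism makes truth telling optimal.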

Conjoint methods are typically used to measure willingness to pay (WTP). WTP becomes even more relevant in the context of incentive-aligned upgrading of attributes [134]. Wathieu and Bertini [183] used categorization theory to argue that a moderately incongruent price differential is more likely to induce deliberation when a new benefit is added or augmented beyond consumer expectations. Dong et al. [51] proposed a Rank Order mechanism that elicits preferences over a list of reward products, instead of an individual's monetary value for one product, and gives or sells the top-rated product to the respondent. They recommend the WTP mechanism when there is only one real product and price can be estimated from the preference measurement task, and the Rank Order mechanism when two or more real versions of the product are available, regardless of whether WTP can be estimated.

The contingent valuation method, typically used to determine the WTP for a nonmarket good, is subject to exaggeration bias stemming from factors such as new product enthusiasm, attempts to influence the decision to market the product, or a tendency to be less sensitive to total costs [93, 180]. One approach calibrates the responses into quasi-real ones based on self-assessed certainty; however, the certainty measure can itself be fraught with survey bias. A second approach transforms hypothetical WTP into real WTP by assuming a functional relationship between the two. Park and MacLachlan [133] propose an exaggeration-bias-corrected contingent valuation method in which the individual compares the real WTP with an independent, randomly drawn spurious WTP and reports the larger of the two as his or her hypothetical WTP. The real WTP is assumed to be only stochastically related to the hypothetical WTP, rather than functionally related.

Voelckner [180] found significant and substantial differences between the WTP reported when payment of the stated price is real versus hypothetical, comparing hypothetical and real WTPs across and within four methods of measuring WTP (first-price sealed-bid auction, the Vickrey auction, contingent valuation, and conjoint analysis). There was evidence of overbidding as a result of perceived competitive pressure, resulting in higher WTPs for auctions than for methods based on stated preference data. Recently, Miller et al. [125] compared the performance of four approaches to measuring WTP (direct versus indirect assessment, and hypothetical versus actual WTP) against real purchase data. Their findings show that respondents are more price-sensitive in incentive-aligned than in nonincentive-aligned settings, behaving more as they do in real purchase settings, and that incentive-aligned approaches are better suited to assessing WTP for product prototypes. Overall, recent developments in this domain have been very significant, with a promising outlook.

#### 5.1.1 Suggested Directions for Future Research

- 1.
While the Rank Order method of incentive compatibility has proven very valuable in motivating truthful responses, a host of issues still need to be sorted out, such as which desired versus undesired products to include in the list, the incentive value of the products, and whether the incentive list should be revealed before or after the conjoint task.

- 2.
Given that WTP is a latent construct, research for its validation should be undertaken employing SEM methodology; for instance employing an induced value experiment that provides incentive-compatible estimates of WTP may come closest to mapping the true representation of WTP as a latent construct [134, 46].

- 3.
Giving respondents time to think (TTT) in a contingent valuation study, by designing a quasi-experimental study that mimics realistic decision contexts, may alter the WTP. How do information and time affect responses to contingent valuation conjoint studies? This is an excellent opportunity for bridging research in consumer psychology, marketing science, and environmental and information economics [27].

- 4.
While WTP research typically focuses on estimating marginal rates of substitution (i.e., WTP for marginal changes in product attributes), there is potential scope for data enrichment by combining stated preference and revealed preference; the former providing robust estimates for substitutability and the latter providing robust estimates for predicting uptake behavior (see [126] for associated statistical estimation methodologies).

### 5.2 C2. Barter Conjoint and Conjoint Poker

The barter conjoint approach collects a substantially larger amount of pairwise data (offers submitted or not, and the responses to offers received) without demanding much additional effort, and it potentially improves data quality by allowing information diffusion among participants during preference measurement. Ding et al. [49], using two studies and two holdout tasks, found that barter conjoint significantly outperformed both incentive-aligned and hypothetical CBC in out-of-sample prediction. Toubia et al. [173] recently developed and tested an incentive-compatible conjoint poker game and compared it with incentive-compatible choice-based conjoint in a series of experiments. Their findings indicate that conjoint poker induces respondents to consider more of the profile-related information presented to them (i.e., greater involvement and motivation) than choice-based conjoint does. Like other incentive-compatible mechanisms that motivate respondents [48], conjoint poker motivates respondents toward truth telling.

#### 5.2.1 Suggested Directions for Future Research

- 1.
Future research on these relatively new approaches could move in a number of directions. For example, barter and poker methods could be tested for products that are less desirable, with increases or decreases in group assignments, and/or with multiple trades allowed.

- 2.
Barter also requires synchronized implementation and simultaneous bartering, which makes online conversion somewhat cumbersome. Future barter research should examine procedures that do not promote possible endowment and loss-aversion effects. Finally, the current estimation method does not model any dynamic effects in preference formation despite the various stages of the barter.

### 5.3 C3. Meta-Attributes and Complexity of Stimuli

Conjoint researchers need to recognize that consumers often think of products in terms of “meta-attributes” (needs, motivations, and goals) that may correspond to bundles of attributes [130]. Research in judgment and decision-making has examined the role of multiple goals and how situational and task factors, including goal-framing effects [123], activate goals and chronically elevate their accessibility, which in turn determines decision rules, e.g., the deontological goal of “what is right”, the consequentialist goal of “what has the best outcomes”, versus the affective goal of “what feels right” [13]. Also, consequences associated with an attribute that is central in consumers’ hierarchy of goals are likely to generate primary appraisals [118]. These meta-level preferences can affect decision-making, and they tend to be more stable than context-specific preferences. In short, customers think of products in terms of meta-attributes and hierarchies of goals, and attributes that serve a consequentialist goal are more likely to be accessible and appraised [118, 130].

In the context of complex stimuli (i.e., really new products), the role of uncertainty and of consumer learning through mental simulation and analogies is critical. Some advances have been made in this domain (see [73, 79]), but the results are still preliminary. In a related vein, there is evidence of inconsistency between the importance of attributes estimated in value-elicitation surveys (stated preferences) and those implied by actual choices (revealed preferences). Horsky et al. [82] empirically demonstrate that attributes may be weighted differently in stated preference versus actual choice as a function of their tangibility: tangible, concrete attributes are weighted more heavily in choice, since consumers are under pressure to justify their decisions. Going forward, we offer the following issues for future research.

#### 5.3.1 Suggested Directions for Future Research

- 1.
One big challenge is to conceptually map the relationship between physical (i.e., concrete) attributes and meta-attributes in a way that can be translated into product design specifications. Some concrete attributes may lose their meaning when interpreted at a higher level of abstraction and generality, thus undermining the validity of responses [31].

- 2.
The other challenge is methodological, although some work in this domain has started using factor analysis, text mining, and tree-based methods (e.g., Classification and Regression Trees, Bayesian Tree Regression) as valuable tools in this respect [37, 66]. While factor analysis is feasible, it lacks the ability to create maps between physical attributes and meta-attributes. We encourage continued research in this area.

### 5.4 C4. The Role of the No Choice Option

Parker and Schrift [135] argued that the mere addition of a no-choice option to a set changes consumers’ judgment criteria from comparative judgment (attribute-based processing) to evaluative judgment (alternative-based processing). Through a series of studies, the authors demonstrate that the mere addition of a no-choice option (i.e., a rejectable choice set) leads to alternative-based encoding, retrieval, and information processing; to greater weight being given to attributes that are enriched (more meaningful when evaluated alone) and to those that meet consumers’ minimum requirements; and ultimately to a change in preference. The perceived difference between alternatives becomes smaller the further the attribute values lie from consumers’ thresholds. Consistent with the literature on context effects [15], this study confirms that consumers shift their preference structure, and ultimately choice shares, between a forced-choice and a rejectable-choice context. It is conceivable that every decision a consumer makes includes a no-choice option, and conjoint scholars should design studies that add the no-choice option whenever it is feasible and salient for consumers. Relatedly, Botti et al. [17] suggest that nearly all choices consumers make are restricted or constrained in some manner.

#### 5.4.1 Suggested Directions for Future Research

- 1.
Potential distortions arising from variations in choice sets need to be examined by product/service class, type of experimental design, method of administration, etc., to fully understand the impact of the specific methodology selected to perform conjoint analysis.

- 2.
A number of interesting subareas concerning the impact of including a “no choice” option need further investigation. These include the frequency with which the “no choice” option is selected, the impact of “no choice” selections on estimated attribute importance, and whether the choices are sequenced or staged (i.e., first consider, then decide to choose) [114].

## 6 (D) Researcher Issues for Data Analysis

Major developments in the estimation procedures relevant for the conjoint researcher include Hierarchical Bayesian, Latent Class, and Polyhedral Estimation approaches. Further, opportunities exist in integrating multiple sources of data to obtain robust conjoint results.

### 6.1 D1. The Hierarchical Bayesian (HB) Approach

The HB method of estimation helps address a central challenge in conjoint analysis: estimating accurate part-worths at the individual level without imposing excessive response burden on respondents. HB methods are known to improve on finite mixture-based individual-level estimates, which in turn tend to be more stable than estimates based on individual data alone [4]. Following earlier pioneering work,^{11} Allenby et al. [5] used Bayesian methods and the Gibbs sampler to incorporate prior ordinal constraints on conjoint part-worths, reflecting the logical or practical ordering of attribute levels that often exists in the real world, and found better internal cross-validation on their data.

Subsequently, Srinivasan and Park [160] proposed a new method to optimize the full-profile design for a large number of attributes and provided a heuristic procedure to combine the part-worth estimates from self-stated and full-profile data on a smaller number of core attributes. By differentiating between core and noncore attributes, they predicted preference for a new stimulus using the optimal weight and conjoint part-worths for the core attributes and the self-explicated part-worths for the noncore attributes. Andrews et al. [8] showed that HB models perform well even when the part-worths come from a mixture of distributions and are robust to violations of the underlying assumptions. In almost all instances, the Bayesian method has been found to be comparable or superior to traditional methods in both part-worth estimation and predictive validity. Sandor and Wedel [153] demonstrated that heterogeneous designs, which apply the Bayesian design principles of prior uncertainty and respondent heterogeneity, show substantial gains in efficiency compared with homogeneous designs. Heterogeneous designs consist of several subdesigns offered to different consumers and can be constructed with relative ease for a wide range of conjoint-choice models.^{12}
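The core HB intuition, partial pooling of noisy individual-level estimates toward the population distribution, can be sketched with a simple empirical-Bayes shrinkage step; this is a stand-in for the full Gibbs sampler, and all data below are simulated:

```python
import numpy as np

def eb_shrink(individual_est, n_obs, noise_var):
    """Empirical-Bayes shrinkage of individual part-worth estimates toward
    the population mean, illustrating the pooling behind HB conjoint.

    individual_est: (R, P) array of per-respondent estimates.
    Shrinkage weight = between-respondent var / (between + sampling var),
    so noisier estimates are pulled harder toward the population mean.
    """
    mu = individual_est.mean(axis=0)                     # population mean
    sampling_var = noise_var / n_obs                     # var of each estimate
    total_var = individual_est.var(axis=0, ddof=1)
    between = np.maximum(total_var - sampling_var, 0.0)  # method of moments
    w = between / (between + sampling_var)               # 0 = pool, 1 = none
    return mu + w * (individual_est - mu)

rng = np.random.default_rng(0)
true_pw = rng.normal(0.0, 1.0, size=(50, 3))             # 50 respondents, 3 part-worths
noisy = true_pw + rng.normal(0.0, 0.8, size=(50, 3))     # noisy individual estimates
shrunk = eb_shrink(noisy, n_obs=1, noise_var=0.64)
```

The shrunken estimates have a lower mean squared error than the raw individual estimates, which is the practical payoff of borrowing strength across respondents.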

Ter Hofstede et al. [167] proposed a general (finite mixture regression) model that includes the effects of discrete and continuous heterogeneity as well as self-stated and derived attribute importances in hybrid conjoint studies. As a departure from earlier studies, they treat self-stated importances as data rather than as prior information and include them in the likelihood, which lets them investigate the relationship between self-stated and derived importance at the individual level. By contrast, order constraints derived from self-stated importances are “hard” constraints that ignore the relative distances between importances and the measurement error in the self-stated part-worths, which may cause the stated order to differ stochastically from the “true” underlying order. Their study shows that including self-stated importances in the likelihood leads to much better predictions than treating them as prior information. An excellent resource on HB methods in marketing and conjoint analysis is Rossi et al. [147].

#### 6.1.1 Suggested Directions for Future Research

- 1.
It has not been conclusively demonstrated in which contexts consumer heterogeneity is better described by a continuous [4] or a discrete [44] distribution, pointing to a need for further research to resolve this issue (see also Ebbes et al. [55]). Still, we believe the HB method is preferable when a large number of part-worths must be estimated, compared with more classical estimation methods that can use up many degrees of freedom and whose likelihood functions may have multiple maxima [84, 138].

- 2.
More research is required to examine the potential effects of distributional misspecification concerning the likelihood, prior, and hyper prior distributions in HB conjoint analyses (not just prior sensitivity).

### 6.2 D2. The Latent Class Approach

Market segmentation remains one of the most important uses of conjoint analysis, based on the estimated attribute part-worths [31, 105, 168, 186]. Historically, segments were developed in a disjointed two-step fashion (clustering after estimating individual-level conjoint part-worths). This created various problems: in highly fractionated designs, the estimated individual-level part-worths are often unstable and stochastic, and the two steps optimize quite different loss functions. In this light, research has been dedicated to performing the two steps *simultaneously* and more parsimoniously; an early example is the Q-factor analytic procedure that maximizes the predictive power of the derived segment-level utility function. DeSarbo and colleagues provide alternative clusterwise regression formulations for such benefit segmentation approaches utilizing conjoint analysis [42].

Following these deterministic clusterwise approaches, a number of latent class or finite mixture-based procedures for simultaneously performing conjoint and market segmentation analysis have been developed. These simultaneous procedures employ stochastic frameworks involving mixtures of conditional distributions, which allow heuristic tests for the optimal number of segments (via AIC, BIC, CAIC, ICOMP, etc.),^{13} fuzzy posterior probabilities of membership that permit fractional membership in more than one market segment, and computation of the standard errors of the estimated part-worths. Many such latent class conjoint procedures also allow for heteroscedasticity among groups of consumers as well as for variation within each group's responses. Interested readers are referred to several early articles by DeSarbo and colleagues (cited in DeSarbo and DeSarbo [42]). In the last decade, these authors have developed a host of latent class models applicable to conjoint analysis that address the issue of segment identification. Chung and Rao [31] develop a comparability-based balance (COBA) model that accommodates bundle choices with any degree of heterogeneity among components (products) and incorporates consumer preference heterogeneity, which can be used for segmentation and optimal bundle pricing.
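A minimal sketch of simultaneous segmentation and part-worth estimation is an EM algorithm for a finite mixture of regressions. The toy version below assumes a known unit noise scale and a fixed number of segments K; in practice K would be chosen via AIC/BIC and the error variance estimated alongside the other parameters:

```python
import numpy as np

def lc_regression_em(X, y, K=2, iters=200, seed=1):
    """Toy EM for a K-segment latent class (mixture) regression:
    segment-level part-worths plus fuzzy posterior memberships."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = rng.normal(size=(K, p))          # segment-level part-worths
    pi = np.full(K, 1.0 / K)                # segment sizes
    for _ in range(iters):
        # E-step: posterior membership probabilities (unit noise sd assumed)
        resid = y[None, :] - beta @ X.T                 # (K, n)
        logp = np.log(pi)[:, None] - 0.5 * resid ** 2
        logp -= logp.max(axis=0)                        # numerical stability
        post = np.exp(logp)
        post /= post.sum(axis=0)                        # (K, n), columns sum to 1
        # M-step: weighted least squares per segment
        for k in range(K):
            w = post[k]
            beta[k] = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        pi = post.mean(axis=1)
    return beta, pi, post

# Two hypothetical segments with opposite price sensitivity (+2 vs -2)
rng = np.random.default_rng(42)
X = np.column_stack([np.ones(200), rng.uniform(-1, 1, 200)])
seg = rng.integers(0, 2, 200)
true_b = np.array([[1.0, 2.0], [1.0, -2.0]])
y = (true_b[seg] * X).sum(axis=1) + 0.3 * rng.normal(size=200)
beta, pi, post = lc_regression_em(X, y)
```

The columns of `post` are the fuzzy posterior memberships noted above: each respondent (here, each observation) receives a probability of belonging to each segment rather than a hard assignment.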

Much of the early literature modeled heterogeneity through individual-level traditional conjoint analysis, and Bayesian and latent class conjoint analysis initially focused on metric data. More recently, effort has been devoted to conjoint-choice experiments, motivated by the fact that conventional rating-based (metric) conjoint analysis depends on a rating task that does not link directly to any behavioral theory. We feel that employing actual choices between alternatives is more realistic than the conventional approach of using artificial rankings and ratings, and we applaud the development of latent class conjoint procedures for the analysis of choice data.

#### 6.2.1 Suggested Directions for Future Research

- 1.
Latent class models typically assume that each respondent belongs to one and only one underlying segment, and the posterior probabilities of membership, which by definition sum to one, yield a convex combination of segment-level estimates for each respondent. The individual-level predictions obtained from such finite mixture-based models tend to be rather poor, depending upon the degree of separation of the centroids of the conditional segment-level support distributions and the within-segment variation, thus limiting the range of the predictions. We encourage the development of new methods for improved prediction.

- 2.
Segment identifiability remains a problem with latent class segmentation procedures in conjoint analysis, since individual differences in the estimated individual-level parameters are rarely well predicted by demographics, psychographics, etc. The same problem arises with respect to the estimated segment-level parameters. Even with explicit reparameterization of the mixing proportions via the concomitant-variable approach, it is uncommon to be able to shed sufficient light on the derived market segments vis-à-vis traditional individual difference measurements. We encourage the development of new methods for improving segment identifiability.

- 3.
Using the ideas of Hidden Markov Models [129, 65, 144], additional research is required to investigate the dynamic nature of such derived market segments including switching segment memberships over time, the evolution of different market segments over context or consumptive situations, and the time path of changing parameters.

### 6.3 D3. The Polyhedral Estimation Approach

Toubia et al. [171] proposed and tested a new “polyhedral” choice-based question-design method that adapts each respondent’s choice sets on the basis of previous answers by that respondent.^{14} The simulations conducted suggest that polyhedral question design does well in many domains, particularly those in which heterogeneity and part-worth magnitudes are relatively large. In particular, the polyhedral algorithms hold potential when profile comparisons are more accurate than self-explicated importance measures and when respondent fatigue is a concern due to a large number of features. For example, in product development scenarios, managers may want to learn the incremental utility of a large number of features allowing them to screen several features quickly [138].

Toubia et al. [170] validated the polyhedral approach and found that it was superior to a fixed efficient design in both internal and external validity, and slightly better than the adaptive conjoint method. However, the polyhedral approach is highly sensitive to errors in the early choices. Given these mixed results, especially when response error is high, Toubia et al. [172] subsequently proposed and tested a probabilistic polyhedral method that recasts the polyhedral heuristic into a Bayesian framework and incorporates prior information in a natural, conjugate manner. This method shows potential to improve accuracy in high response-error domains by minimizing the expected size of the polyhedron (i.e., choice balance) and by minimizing the maximum uncertainty on any combination of part-worths (i.e., post-choice symmetry). Evgeniou et al. [58] introduce methods from statistical learning theory to conjoint analysis that compare favorably to the polyhedral heuristic.

While Toubia et al. [172] demonstrated improved accuracy with the probabilistic polyhedral method, analytic-center estimation does not yet perform as well as the HB method. Abernethy et al. [1], using complexity-control machine learning, demonstrate robustness to the response errors inherent in adaptive choice and outperform the polyhedral estimation proposed by Toubia et al. [170]. More recently, Dzyabura and Hauser [54] developed and tested an active machine-learning algorithm that identifies noncompensatory heuristic decision rules based on prior beliefs and a respondent's answers to previous questions. Research that frames the fast polyhedral method in an HB specification (GENPACE) has shown it to outperform FastPACE under certain conditions [177].

#### 6.3.1 Suggested Directions for Future Research

- 1.
We suggest that conjoint scholars working with the polyhedral algorithm combine self-explicated data with stated choice data to improve estimation, as shown by Toubia et al. [171] and Ter Hofstede et al. [167] in traditional conjoint analysis. Such self-explicated data can help constrain the rank order of part-worths and thereby shrink the polyhedral confidence region for estimated part-worths.

- 2.
Combining analytic-center (AC) estimation with Bayesian methods may broaden the scope and applicability of the polyhedral algorithm when respondent heterogeneity and response accuracy in stated choice are both low. Also, the polyhedral ellipsoid algorithm can perhaps be further broadened to newer domains of application including situations marked by a lack of nondominance, choice balance, and symmetry—criteria that are presupposed in the current algorithm.

### 6.4 D4. Integrating Multiple Sources of Data

Based on existing research, conjoint analysis could also benefit substantially from combining multiple sources of data. Traditionally, preference measurement studies have relied on data provided explicitly by consumers during the preference measurement task. Both stated and revealed preference data provide information on the utility of offerings, and thus one source of data can be integrated as a covariate in a model of the other [82]. Further, Allenby et al. [6] recommend combining information across datasets by forming a joint likelihood function with common parameters, which yields more precise estimates. For example, stated preference data may require corrections for various response biases, while revealed preference data may require information controlling for contextual effects.

An interesting development by Ashok et al. [11] is a structural equation modeling (SEM) approach that integrates *softer* variables (e.g., attitudes) into binary and multinomial choice models to explain choice decisions. They compare a limited-information model (without latent variables), in which factor scores for the exogenous latent variables enter the utility function as error-free variables, with a full-information model with latent variables. In general, full-information estimation yields structural parameter estimates that are significantly more precise than those obtained from the two-stage limited-information approach, in which latent constructs are treated as error free rather than as random variables.

Furthermore, there is potential for combining stated preference data with auxiliary revealed preference data. For instance, researchers could use qualitative and observational research techniques to capture response latencies, eye movements, and other physiological patterns. Haaijer et al. [74] demonstrated that response time is related to preference and choice uncertainty, such that shorter response times represent more certain choices. In a recent study, Toubia et al. [173] conduct two eye-tracking studies (using a Tobii 2150 tracker) to compare incentive-compatible conjoint poker with incentive-compatible choice-based conjoint. The premise is that choice-based conjoint participants make choices based on a smaller subset of attributes, resulting in decreased visual attention to a large proportion of attributes and levels.

The different approaches to modeling consumer preference (e.g., compositional models, decompositional models, subjective expected utility models) rest on the assumptions of traditional utility theory and attribute-based processing. However, consumer researchers have long established the power of affect, feelings, and emotions in consumer judgment, preference, and choice [136]. Unfortunately, not much research has been done to integrate the traditional utility-based paradigm with such affective responses in conjoint experiments. The concept of “attribute prominence,” combining attribute importance and emotionality, may capture choice better than purely cognitive importance measures, as suggested earlier by Luce et al. [118].

#### 6.4.1 Suggested Directions for Future Research

- 1.
A promising area in need of more work is the marriage of discrete choice models with latent variables such as attitudes and perceptions. Following Ashok et al. [11], we encourage more researchers to integrate latent constructs in discrete choice models such as attitude, satisfaction, service quality perception, and other widely used marketing-based perceptual constructs. A related area is the marriage of scanner-panel data with multinomial choice, where nonproduct attributes such as consumer attitudes and motivations and store level data may drive brand purchase along with product attributes [60, 165].

- 2.
Additional research should aim at understanding the underlying mechanisms (rules and heuristics) that determine consumers’ decisions and at developing measures of decision-process variables: decision problems, decision contexts, social situations, and individual factors.

- 3.
We believe that integrating multiple sources of data in innovative ways can add to the reliability, validity, and generalizability of conjoint studies in the future. The integration of qualitative aspects and consumers’ emotional reactions with stated preference data in forming preferences and choices is an important research avenue [43]. While aesthetic stimuli pose a special challenge for factorial designs, because such stimuli are essentially unitary or holistic and thus difficult to decompose, researchers are encouraged to work creatively in harnessing the benefits of such auxiliary data.

- 4.
Conjoint analysis provides an exacting measurement of consumer preferences, but to design a product or set marketing variables a firm must often do so in light of the actions and potential actions of its competitors. We are now beginning to see equilibrium (or nonequilibrium) models, which include the reactions of firms, competitors, and customers, coupled to conjoint analyses. One example is Kadiyali et al. [96]. More work needs to be done in this promising line of research.

## 7 (E) Managerial Issues Concerning Implementation

We now discuss selected implementation issues relevant for the manager including product optimization, market value of attribute improvement, optimal pricing, and product line decisions.

### 7.1 E1. Product Optimization

The primary goal of traditional conjoint analysis was to find a parsimonious manner of estimating consumer utility functions and deriving attribute (level) importances. In this effort, one could design a product with maximum utility whose attribute levels correspond to the highest estimated utility values. While the problem was first formulated as a zero–one integer programming model, a more general and thorough approach to product design optimization was developed by Green and colleagues with their Product Optimization and Selected Segment Evaluation (POSSE) procedures. Soon thereafter, efforts were directed to extend single-product design optimization heuristics to entire product lines introducing two objective functions (the buyer’s and seller’s welfare problem). This marked a critical development in product optimization research that triggered a flurry of research.
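Under the additive part-worth model, the unconstrained single-product optimum described above reduces to picking the highest-valued level of each attribute. A minimal sketch, with invented part-worth values:

```python
# Single-product utility maximization under an additive part-worth model:
# with no constraints, the max-utility profile takes the best level of each
# attribute. All part-worth values are hypothetical.
partworths = {
    "brand":  {"A": 0.8, "B": 0.3, "C": 0.1},
    "price":  {"$99": 1.0, "$149": 0.5, "$199": 0.0},
    "weight": {"light": 0.6, "heavy": 0.2},
}

# For each attribute, pick the level with the largest part-worth
optimal_profile = {attr: max(levels, key=levels.get)
                   for attr, levels in partworths.items()}
optimal_utility = sum(levels[optimal_profile[attr]]
                      for attr, levels in partworths.items())

print(optimal_profile)  # {'brand': 'A', 'price': '$99', 'weight': 'light'}
```

Once cost or feasibility constraints link attribute choices, this separability breaks down and the problem becomes the zero–one integer program noted above.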

Another major advance in this field was the idea that consumers’ preference structures are dynamic rather than static (due to variety seeking, learning, and fatigue), which calls for models that can capture these dynamics and respondent heterogeneity (for a review, see Wittink and Keil [188]). More recent artificial intelligence and engineering optimization approaches to product line optimization using conjoint analysis include Belloni et al. [14], Wang et al. [182], Luo [119], and Michalek et al. [124]. Recently, some progress has been made by Luo et al. [120], who propose a hierarchical Bayesian structural equation model that incorporates subjective characteristics along with objective attributes in new product design. Their results indicate that by collecting additional information about consumers’ perceptions of the subjective characteristics, the proposed model provides the product designer with a better understanding and a more accurate prediction of consumers’ product preferences than traditional *conjoint* models. We encourage more research in this area, such as testing virtual-reality prototypes [36] instead of physical prototypes when attributes are large in number and physical prototyping is therefore expensive.

#### 7.1.1 Suggested Directions for Future Research

- 1.
A line of research with promising potential is improving preference measurement for really new products as opposed to incrementally new products. In attempts to improve preference measurement by building consumer knowledge, more research needs to be conducted to fully understand consumer inferential techniques for reducing uncertainty (i.e., consumer-initiated analogy generation and marketer-supported analogy). More needs to be done on how consumers think and learn about really new products at the pre-adoption, adoption, and post-adoption stages, and on how we can modify measurement techniques to maximize the predictive accuracy of preference measurement.

- 2.
We believe that attribute-based conjoint models are potentially limited and that further investigation should proceed at least as far as customer-ready prototypes for a spectrum of design concepts. Prototypes are likely to provide more accurate information on customer reactions and costs, and on the attribute levels achieved (rather than expected) with particular designs. One possible direction for future extension is to combine this with related methods such as neural network approaches and genetic algorithms to gain better prediction. See Chung and Rao [32] for modeling of unobserved attributes in experiential products using a virtual-expert model.

### 7.2 E2. Market Value of Attribute Improvement

Predicting performance in the marketplace and gaining insight into the value of design features are important goals of market research. One question of managerial relevance is whether attribute improvement can be assessed in cost-benefit terms. In other words, given that an attribute improvement (a positive change) typically comes with a price increase (a negative change), there is a trade-off in its impact on market share. Ofek and Srinivasan [131] show that the market value of an attribute improvement (MVAI) can be expressed as the ratio of the change in market share due to an improvement in the attribute to the decrease in market share due to an increase in price. These authors tested this approach using five portable camera-mount products described on five attributes, each varied at three levels. They estimate the MVAI for each attribute and show that these estimates are less biased than the commonly used attribute valuations computed by averaging the ratio of attribute and price weights across individuals. They also demonstrated that the profitability of attribute improvements decreases when competitive reactions are factored in. The firm should undertake an attribute improvement if its MVAI exceeds its cost. To mimic a real-world situation, MVAI can incorporate the choice set, competitive reactions, and respondent heterogeneity, and translate utilities into choice probabilities [138].
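The MVAI ratio and the accompanying decision rule can be illustrated numerically; all figures below are invented for the sketch:

```python
# Hedged numerical illustration of MVAI (all figures invented): the share
# gain per unit of attribute improvement divided by the share loss per
# dollar of price increase gives the dollar value the market places on a
# unit improvement.
share_gain_per_attr_unit = 0.020    # +2.0 share points per unit improvement
share_loss_per_price_unit = 0.004   # -0.4 share points per $1 price increase

mvai = share_gain_per_attr_unit / share_loss_per_price_unit  # ~$5 per unit

cost_of_improvement = 3.0           # hypothetical $ cost per unit improvement
undertake = mvai > cost_of_improvement
print(mvai, undertake)
```

In a full application the two sensitivities would themselves come from a choice model with heterogeneous respondents rather than from aggregate constants as here.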

#### 7.2.1 Suggested Directions for Future Research

- 1.
Future research should pay more attention to the dynamic nature of consumer choice or preference (both before product design and before product launch), which means that studies should extend over multiple periods and respondents should be able to upgrade [138, 100]. Research should also continue after product diffusion (i.e., multiperiod analysis), since attribute importances, and with them the market value of an attribute, will change as consumers gain experience with the product.

- 2.
Meta-analyses in this area would be particularly desirable. More specifically, published research tracking the monetary implications of pursuing optimal conjoint design implementations in different commercial scenarios would greatly aid in advancing further applications of conjoint analysis.

### 7.3 E3. Optimal Pricing

Kohli and Mahajan [104] propose a model for determining the price that maximizes the profit of a product that has been screened on a share criterion. They do so by incorporating the effect of measurement and estimation error in demand estimates, which in turn affects the profit-maximizing price. They model heterogeneity in individual reservation prices by assuming that the variance of the distribution is constant while the mean is normally distributed. Jedidi and Zhang [90] further develop this method to allow for the effect of a new product introduction on category-level demand, and Jedidi et al. [92] describe a method for estimating consumer reservation prices for product bundles. Chung and Rao [31] extend optimal pricing to bundle choice models, which use the attribute-based components of a bundle as the ultimate unit of analysis in estimating the utility of the bundle. Reservation prices for bundles are higher when attributes are regarded as desirable or complementary.
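The reservation-price logic can be sketched as follows, assuming (purely for illustration) normally distributed reservation prices and a simple grid search for the profit-maximizing price. This is a stylized sketch with invented parameters, not the Kohli–Mahajan estimator:

```python
import math

# Stylized sketch: reservation prices assumed Normal(MU, SIGMA); a consumer
# buys when price <= reservation price, so expected demand at a price is the
# normal survival function evaluated there. All numbers are hypothetical.
MU, SIGMA, UNIT_COST = 100.0, 15.0, 60.0

def demand(price):
    """P(reservation price >= price) under the Normal(MU, SIGMA) model."""
    z = (price - MU) / (SIGMA * math.sqrt(2.0))
    return 0.5 * math.erfc(z)

def profit(price):
    return (price - UNIT_COST) * demand(price)

# Grid search for the profit-maximizing price over $60-$140
prices = [60.0 + 0.5 * i for i in range(161)]
best_price = max(prices, key=profit)
print(best_price, round(profit(best_price), 2))
```

Note the trade-off the search resolves: raising the price increases margin but shrinks the fraction of consumers whose reservation price still clears it, so the optimum sits below the mean reservation price here.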

More recently, Iyengar et al. [87] describe a conjoint model for the multipart pricing of products and services. Given that for many product and service categories there is a two-way dependence of price and consumption (fixed fee and usage-based fee), Iyengar et al. [87] incorporate the effects of consumption on consumer choice and the uncertainty of service usage (by using a quadratic utility function). A benefit of their model is its ability to infer consumption at different prices from choice data which can aid marketers in their market share maximization objectives.

Ding et al. [50] show that consumers exhibit two behavioral regularities in how their utility functions depend on price: consumers infer quality information from a product’s price, and they hold a reference price for a given product. Consumer heterogeneity is captured through an individual-specific reference point and an individual-specific information coefficient. They demonstrate that the classic economic model, in which price serves an allocative purpose, is more relevant for inexperienced or uninvolved customers; for the most involved customers, price instead serves an informational role, cuing quality. This research is among the first steps in integrating behavioral regularities into classic utility models in pricing research. Kannan et al. [98], through an online choice experiment on digital versus print products, propose a model that accounts for customers’ perceptions of the substitutability or complementarity of content forms in developing pricing policies for digital products. Research on product line extensions has traditionally treated such products as substitutes, although customers may perceive digital products as imperfect substitutes for, or even complements to, printed products. Bundling and pricing strategies are determined by capturing customers’ heterogeneity in these perceptions, with model parameters estimated using a finite mixture (FM) model.

#### 7.3.1 Suggested Directions for Future Research

- 1.
Along the lines of Iyengar et al. [87], future research can examine computationally efficient methods for optimal selection of product features and prices. There is also potential for factoring in the effect of competitive actions and reactions on multipart pricing.

- 2.
Future researchers can build additional behavioral regularities into the utility model, such as a value function reflected around the reference point and the effect of dynamic competition. This would be a useful area for the application of game-theoretic models employing alternative strategies and competitive scenarios.

### 7.4 E4. Product Line Decisions

The optimal product line design problem belongs to the class of NP-hard combinatorial optimization problems. A number of optimization algorithms have been applied to such problems, including dynamic programming, beam search, genetic algorithms, and Lagrangian relaxation with branch and bound [12, 23]. More recently, alternative heuristics have been devised employing conjoint and choice models. Michalek et al. [124] presented a unified methodology for product line optimization that coordinates positioning and design models to achieve realizable firm-level optima. Their procedure incorporates a general Bayesian representation of consumer preference heterogeneity and manages attributes over a continuous domain to alleviate combinatorial complexity using conjoint-based consumer choice data. Tsafarakis et al. [174] devise a particle swarm optimization algorithm for optimal product line design and use a Monte Carlo simulation to compare its performance favorably to genetic algorithms. In addition, these authors use concepts from game theory to illustrate how the proposed algorithm can be extended to incorporate retaliatory actions from competitors using Nash equilibrium concepts.
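As a simple baseline among the heuristics mentioned, a greedy sketch of the share-of-choice product line problem follows; the utilities and status-quo values are invented, whereas real applications would use estimated part-worths:

```python
# Greedy heuristic for the (NP-hard) share-of-choice product line problem:
# repeatedly add the candidate profile that most increases the fraction of
# respondents who prefer some offered profile to their status quo.
# Utilities and status-quo values are invented for illustration.

# utilities[r][p]: respondent r's utility for candidate profile p
utilities = [
    [5.0, 1.0, 0.5],
    [0.5, 4.0, 1.0],
    [1.0, 0.5, 3.0],
    [2.0, 2.5, 0.5],
]
status_quo = [3.0, 2.0, 2.0, 2.0]  # utility of each respondent's current choice
LINE_SIZE = 2

def share_of_choice(line):
    """Fraction of respondents whose best offered profile beats status quo."""
    wins = sum(any(utilities[r][p] > status_quo[r] for p in line)
               for r in range(len(utilities)))
    return wins / len(utilities)

line = []
for _ in range(LINE_SIZE):
    remaining = [p for p in range(len(utilities[0])) if p not in line]
    line.append(max(remaining, key=lambda p: share_of_choice(line + [p])))

print(line, share_of_choice(line))  # [1, 0] 0.75
```

Greedy selection gives no optimality guarantee; the exact formulations cited above (e.g., branch and bound [23]) solve the same objective at much greater computational cost.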

#### 7.4.1 Suggested Directions for Future Research

- 1.
Future research in this area should extend such models beyond linear and continuous cost functions, accommodate mixed-level product attributes (discrete and continuous), handle category expansion and pioneering advantages, and allow for the enactment of various designated offensive and defensive strategies.

- 2.
It would also be useful to extend such computer science-based procedures to accommodate multiple objectives for optimization in conjoint applications.

## 8 Conclusion

From the rigorous psychometric tradition from which conjoint analysis has evolved, a plethora of advances have been made. In this manuscript, we have attempted to integrate several substantive issues of interest in conjoint analysis within an organizing framework that impacts major stakeholders (i.e., researcher, respondent, and manager). For each of the five categories in our framework, we summarize recent developments in the field, provide some critical insights, and present suggested directions for future research. We hope that conjoint scholars will gainfully employ this organizing framework as a repository for drawing additional new insights and conducting future research. We believe that research in conjoint continues to be vibrant and the recent advances, developments, and directions discussed in this paper will contribute to the realization of the tremendous potential of conjoint analysis.

In conclusion, our paper makes several contributions to the literature (including the recent book by Rao [139]). First, our review incorporates an organizing framework based on the behavioral and theoretical processes underlying several issues related to the researcher, the respondent, and the manager in conjoint analysis. We have expanded and updated coverage of the behavioral and theoretical underpinnings (see section A), which sets the tone for the rest of the review. Second, our framework allocates adequate attention to critical issues surrounding the three major stakeholders: the researcher, the respondent, and the manager. Third, we cite publications from major marketing and nonmarketing journals across disciplines. Fourth, our paper sets a comprehensive research agenda going forward, 55 research directions in total, which can be leveraged for the future development of conjoint analysis methodology. Finally, we believe that a review paper on conjoint analysis will draw wide readership and citation by scholars in the future, thereby enhancing the impact factor of this journal.

## Footnotes

- 1.
The part-worth conjoint analysis model is basic and may be represented by the following formula: \( {U}_x=\sum_{i=1}^m\sum_{j=1}^{k_i}{\alpha}_{ij}{x}_{ij} \), where \( {U}_x \) = overall utility of an alternative; \( {\alpha}_{ij} \) = the part-worth contribution or utility associated with the *j*th level (*j* = 1, 2, …, \( {k}_i \)) of the *i*th attribute (*i* = 1, 2, …, \( m \)); \( {k}_i \) = number of levels of attribute *i*; \( m \) = number of attributes; and \( {x}_{ij} \) = 1 if the *j*th level of the *i*th attribute is present and 0 otherwise.

- 2.
- 3.
Recent developments in tools in psychology including functional imaging and neural recordings, process tracing tools, and modeling tools such as mediation and multilevel analysis have benefitted this research stream. For instance, process models consider intervening variables and intermediate stages between the start and end of the decision by incorporating additional external search information and internal memory-based information.

- 4.
The beta-delta model explains greater discounting of future outcomes when immediate rewards are available than when all rewards are in the future, by an exponential delta process that always operates and an additional exponential beta process that only operates when immediate rewards are present. For instance, in decisions from descriptions, certainty in the probability dimension and immediacy on the delay dimension are given extra attention, and consequently decision weight, as captured by prospect theory and Laibson’s beta-delta model of time discounting [108].

- 5.
The function is typically \( U=\sum X\beta \), where the *X*’s are attributes or functions of attributes such as \( {X}_1{X}_2 \).

- 6.
Liu and Arora [115] found asymmetric effects in design efficiency loss. When the true model is conjunctive, compensatory designs have significant loss of design efficiency. However, when the true model is compensatory, the efficiency loss from using a conjunctive design is significantly lower.

- 7.
Cue diagnosticity is an information-processing technique based on metacognitive insights about past inferential accuracy that helps in distinguishing between two alternatives. TTB (take-the-best) is an inferential strategy based on memory retrieval that mimics a lexicographic decision rule in choice by using the most diagnostic cue.

- 8.
For instance, Chevalier and Mayzlin [29] find that differences in the number of ratings (volume) and the average rating (valence) across online book retailers (Amazon and Barnes and Noble.com) affected relative sales. Similar results have been found wherein online customer movie ratings are related to future box office revenues [41].

- 9.
For an interesting online study of user engagement conducted by Yahoo using web-based user and content data (click through rate) and tensor segmentation technique, see [30].

- 10.
- 11.
In one of the earlier studies, Cattin et al. [26] employed a Bayesian procedure to improve the prediction of holdout profiles by using self-stated utilities to derive a prior distribution. This prior distribution was used in the estimation of the individual-level part-worths from the full-profile evaluations. Subsequent to their pioneering work, several noteworthy studies have estimated the importances from full-profile data under various real-world constraints derived from the order information in the self-stated data thereby improving the predictive performance of the model (see reviews by Hauser and Rao [75] and Rao [138]).

- 12.
In practical HB applications, there usually is no attempt to optimize the design blocking, so there is no reason to expect the particular trade-offs individual subjects see to provide a meaningful basis for a Bayesian update of the priors provided by the population means. However, Sawtooth Software assigns blocks by drawing randomly from the full design for each subject, resulting in better HB estimates. For asymmetric designs, random designs can be more efficient overall than purely orthogonal designs.

- 13.
AIC is Akaike’s Information Criterion, BIC is Bayesian Information Criterion, CAIC is Consistent Akaike’s Information Criterion, and ICOMP is Information Complexity. For technical details of segment retention criteria, the reader is referred to Andrews and Currim [7].

- 14.
Polyhedral “interior-point” algorithms (Fast Polyhedral Adaptive Conjoint Estimation, or FastPACE) design questions that quickly reduce the range of feasible part-worths consistent with the respondent’s choices. The estimation methods employed are hierarchical Bayes and the “analytic center”, a new estimation procedure that is a by-product of polyhedral question design. The analytic center is the point that maximizes the geometric mean of the distances to the faces of the polyhedron, thereby yielding a close approximation to the center of the polyhedron.

## References

- 1.Abernethy J, Evgeniou T, Toubia O, Vert J-P (2008) Eliciting consumer preferences using robust adaptive choice questionnaires. IEEE Trans Knowl Data Eng 20(2):145–155
- 2.Adamowicz W, Bunch D, Cameron TA, Dellaert BGC, Hanneman M, Keane M et al (2008) Behavioral frontiers in choice modeling. Mark Lett 19(3–4):215–228
- 3.Alba JW, Cooke ADJ (2004) When absence begets inference in conjoint analysis. J Mark Res 41(4):382–387
- 4.Allenby GM, Ginter JL (1995) Using extremes to design products and segment markets. J Mark Res 32(4):392–403
- 5.Allenby GM, Neeraj A, Ginter JL (1995) Incorporating prior knowledge into the analysis of conjoint studies. J Mark Res 35(2):152–162
- 6.Allenby GM, Fennell G, Huber J, Eagle T, Gilbride T, Horsky D, Kim J, Lenk PJ, Johnson R, Ofek E, Orme B, Otter T, Walker J (2005) Adjusting choice models to better predict market behavior. Mark Lett 16(3–4):197–208
- 7.Andrews RL, Currim IS (2003) A comparison of segment retention criteria for finite mixture logit models. J Mark Res 40(2):235–243
- 8.Andrews RL, Ansari A, Currim IS (2002) Hierarchical Bayes versus finite mixture conjoint analysis models: a comparison of fit, prediction, and partworth recovery. J Mark Res 39(1):87–98
- 9.Ariely D, Loewenstein G, Prelec D (2003) Coherent arbitrariness: stable demand curves without stable preferences. Q J Econ 118:73–105
- 10.Arora N, Allenby GM (1999) Measuring the influence of individual preference structures in group decision making. J Mark Res 36(4):476–487
- 11.Ashok K, Dillon WR, Yuan S (2002) Extending discrete choice models to incorporate attitudinal and other latent variables. J Mark Res 39(1):31–46
- 12.Balakrishnan PVS, Gupta R, Jacob VS (2006) An investigation of mating and population maintenance strategies in hybrid genetic heuristics for product line designs. Comput Oper Res 33(3):639–659
- 13.Bartels DM, Medin DL (2007) Are morally motivated decision makers insensitive to the consequences of their choices? Psych Sci 18:24–28
- 14.Belloni A, Freund R, Selove M, Simester D (2008) Optimizing product line designs: efficient methods and comparisons. Manag Sci 54(9):1544–1552
- 15.Bettman JP, Luce MF, Payne JW (1998) Constructive consumer choice processes. J Consum Res 25(3):187–217
- 16.Bond SD, Carlson KA, Meloy MG, Russo JE, Tanner RJ (2007) Information distortion in the evaluation of a single option. Organ Behav Hum Decis Process 102:240–254
- 17.Botti S, Broniarczyk S, Häubl G, Hill R, Huang Y, Kahn B, Kopalle P, Lehmann D, Urbany J, Wansink B (2008) Choice under restrictions. Mark Lett 19(3–4):183–199
- 18.Bradlow ET (2005) Current issues and a wish list for conjoint analysis. Appl Stochast Models Bus Ind 21(4–5):319–323
- 19.Bradlow ET, Park Y-H (2007) Bayesian estimation of bid sequences in Internet auctions using a generalized record-breaking model. Mark Sci 26(2):218–229
- 20.Bradlow ET, Hu Y, Ho T-H (2004) A learning-based model for imputing missing levels in partial conjoint profiles. J Mark Res 41(4):369–381
- 21.Burgess L, Street DJ (2005) Optimal designs for choice experiments with asymmetric attributes. J Stat Plan Infer 134:288–301
- 22.Camerer CF (2005) Three cheers—psychological, theoretical, empirical—for loss aversion. J Mark Res 42(2):129–133
- 23.Camm JD, Cochran JJ, Curry DJ (2006) Conjoint optimization: an exact branch-and-bound algorithm for the share-of-choice problem. Manag Sci 52(3):435–447
- 24.Campbell D, Hensher DA, Scarpa R (2011) Non-attendance to attributes in environmental choice analysis: a latent class specification. J Environ Plan Manag 54:2061–1076
- 25.Carroll DJ, Green PE (1995) Psychometric methods in marketing research: part I, conjoint analysis. J Mark Res 32(4):385–391
- 26.Cattin P, Gelfand AE, Danes J (1983) A simple Bayesian procedure for estimation in a conjoint model. J Mark Res 20(1):29–35
- 27.Cawley J (2008) Contingent valuation analysis of willingness to pay to reduce childhood obesity. Econ Hum Biol 6:281–292
- 28.Chandon P, Morwitz VG, Reinartz WJ (2004) The short- and long-term effects of measuring intent to repurchase. J Consum Res 31:566–572
- 29.Chevalier J, Mayzlin D (2006) The effect of word of mouth on sales: online book reviews. J Mark Res 43(3):345–354
- 30.Chu W, Park S, Beaupre T, Motgi N, Phadke A, Chakraborty S, Zachariah J (2009) A case study of behavior-driven conjoint analysis on Yahoo! front page today module. Proceedings of KDD
- 31.Chung J, Rao VR (2003) A general choice model for bundles with multiple-category products: application to market segmentation and optimal pricing for bundles. J Mark Res 40(2):115–130
- 32.Chung J, Rao VR (2012) A general consumer preference model for experience products: application to internet recommendation services. J Mark Res 49(2):289–305
- 33.Cohen S, Neira L (2003) Measuring preference for product benefits across countries: overcoming scale usage bias with maximum difference scaling. ESOMAR 2003 Latin American Conference Proceedings, Amsterdam
- 34.Connolly T, Zeelenberg M (2002) Regret in decision making. Psych Sci 11:212–216
- 35.Dahan E (2007) Conjoint adaptive ranking database system. Working paper. University of California at Los Angeles, Los Angeles
- 36.Dahan E, Srinivasan V (2000) The predictive power of internet-based product concept testing using visual depiction and animation. J Prod Innov Manag 17(2):99–109
- 37.De Bruyn A, Liechty JC, Huizingh EKRE, Lilien GL (2008) Offering online recommendations with minimum customer input through conjoint-based decision aids. Mark Sci 27(3):443–460
- 38.de Jong MG, Lehmann DR, Netzer O (2012) State-dependence effects in surveys. Mark Sci 31(5):838–854
- 39.De Wilde E, Cooke ADJ, Janiszewski C (2008) Attentional contrast during sequential judgments: a source of the number-of-levels effect. J Mark Res 45(4):437–449
- 40.Dellaert BGC, Stremersch S (2005) Marketing mass-customized products: striking a balance between utility and complexity. J Mark Res 42(2):219–227
- 41.Dellarocas C, Zhang M, Awad NF (2007) Exploring the value of online product reviews in forecasting sales: the case of motion pictures. J Interact Mark 21(4):23–45
- 42.DeSarbo WS, DeSarbo C (2007) A generalized normative segmentation methodology employing conjoint analysis. In: Gustafsson A, Herrmann A, Huber F (eds) Conjoint measurement: methods and applications, 4th edn. Springer, New York, pp 321–346
- 43.DeSarbo WS, Wu J (2001) The joint spatial representation of multiple variable batteries collected in marketing research. J Mark Res 38(2):244–253
- 44.DeSarbo WS, Ansari A, Chintagunta P, Himmelberg C, Jedidi K, Johnson R, Kamakura W, Lenk P, Srinivasan K, Wedel M (1997) Representing heterogeneity in consumer response models: 1996 choice conference participants. Mark Lett 8(3):335–348
- 45.DeSarbo WS, Fong DKH, Liechty J, Chang-Coupland J (2005) Evolutionary preference/utility functions: a dynamic perspective. Psychometrika 70(1):179–202
- 46.Ding M (2007) An incentive-aligned mechanism for conjoint analysis. J Mark Res 44(2):214–223Google Scholar
- 47.Ding M, Eliashberg J (2008) A dynamic competitive forecasting model incorporating dyadic decision making. Manag Sci 54(4):820–834Google Scholar
- 48.Ding M, Grewal R, Liechty J (2005) Incentive-aligned conjoint analysis. J Mark Res 42(1):67–82Google Scholar
- 49.Ding M, Park Y-H, Bradlow ET (2009) Barter markets for conjoint analysis. Manag Sci 55(6):1003–1017Google Scholar
- 50.Ding M, Ross WT Jr, Rao VR (2010) Price as an indicator of quality: implications for utility and demand functions. J Retail 86(1):69–84Google Scholar
- 51.Dong S, Ding M, Huber J (2010) A simple mechanism to incentive-align conjoint experiments. Int J Res Mark 27:25–32Google Scholar
- 52.Dougherty MRP, Sprenger A (2006) The influence of improper sets of information on judgment: how irrelevant information can bias judged probability. J Exp Psych Gen 135:262–281Google Scholar
- 53.Dyachenko T, Naylor R, Allenby G (2013) Models of sequential evaluation in best-worst choice tasks, Proceedings of the Sawtooth Software Conference, Sawtooth Software IncGoogle Scholar
- 54.Dzyabura D, Hauser JR (2011) Active machine learning for consideration heuristics. Mark Sci 30(5):801–819Google Scholar
- 55.Ebbes P, Liechty J, Grewal R (2014) Attribute-level heterogeneity. Manag Sci. doi: 10.1287/mnsc.2014.1898 Google Scholar
- 56.Elrod T, Häubl G, Tipps SW (2012) Parsimonious structural equation models for repeated measures data, with application to the study of consumer preferences. Psychometrika 77(2):358–387Google Scholar
- 57.Evans JSBT (2008) Dual-processing accounts of reasoning, judgment, and social cognition. Annual Rev Psych 59:255–278Google Scholar
- 58.Evgeniou T, Boussios C, Zacharia G (2005) Generalized robust conjoint estimation. Mark Sci 24(3):415–429Google Scholar
- 59.Evgeniou T, Pontil M, Toubia O (2007) A convex optimization approach to modeling consumer heterogeneity in conjoint estimation. Mark Sci 26(6):805–818Google Scholar
- 60.Feit ED, Beltramo MA, Feinberg FM (2010) Reality check: combining choice experiments with market data to estimate the importance of product attributes. Manag Sci 56(5):785–800Google Scholar
- 61.Fiebig DG, Keane MP, Louviere J, Wasi N (2010) The generalized multinomial logit model: accounting for scale and coefficient heterogeneity. Mark Sci 29(3):393–421Google Scholar
- 62.Finn A, Louviere JJ (1992) Determining the appropriate response to evidence of public concern: the case of food safety. J Public Policy Mark 11(2):12–25Google Scholar
- 63.Fong DKH, DeSarbo WS (2012) A Bayesian methodology for simultaneously detecting and estimating regime change points and variable selection in multiple regression models for marketing research. Quant Mark Econ 5(4):293–314Google Scholar
- 64.Fong DKH, Ebbes P, DeSarbo WS (2012) A heterogeneous Bayesian regression model for cross-sectional data involving a single observation per response unit. Psychometrika 77(2):293–314Google Scholar
- 65.Frühwirth-Schnatter S (2006) Finite mixtures of regression models. Finite mixture and Markov switching models. Springer, New York, pp 241–275Google Scholar
- 66.Ghose S, Rao VR (2007) A choice model of bundles features and meta-attributes: an application to product design. Working Paper, Cornell UniversityGoogle Scholar
- 67.Gilbride TJ, Allenby GM (2004) A choice model with conjunctive, disjunctive, and compensatory screening rules. Mark Sci 23(3):391–406Google Scholar
- 68.Godes D, Mayzlin D (2004) Using online conversations to measure word of mouth communication. Mark Sci 23(4):545–560Google Scholar
- 69.Godes D, Mayzlin D, Chen Y, Das S, Dellarocas C, Pfeiffer B, Libai B, Sen S, Shi M, Verlegh P (2005) The firm’s management of social interactions. Mark Lett 16(3–4):415–428Google Scholar
- 70.Goldfarb A, Ho T-H, Amaldoss W, Brown AL, Chen Y, Cui TH, Galasso A, Hossain T, Hsu M, Lim N, Xiao M, Yang B (2012) Behavioral models of managerial decision-making. Mark Lett 23(2):405–442Google Scholar
- 71.Green PE, Srinivasan V (1978) Conjoint analysis in consumer research: issues and outlook. J Consum Res 5(2):103–123Google Scholar
- 72.Green PE, Srinivasan V (1990) Conjoint analysis in marketing: new developments with implications for research and practice. J Mark 54(4):3–19Google Scholar
- 73.Gregan-Paxton J, John DR (1997) Consumer learning by analogy: a model of internal knowledge transfer. J Consum Res 24(3):266–284Google Scholar
- 74.Haaijer R, Kamakura WA, Wedel M (2000) Response latencies in the analysis of conjoint choice experiments. J Mark Res 37(3):376–382Google Scholar
- 75.Hauser JR, Rao VR (2003) Conjoint analysis, related modeling, and applications. In: Wind Y, Green PE (eds) Marketing research and modeling: progress and prospects: a tribute to Paul E. Green. Kluwer, NorwellGoogle Scholar
- 76.Hauser JR, Toubia O, Evgeniou T, Befurt R, Dzyabura D (2010) Disjunctions of conjunctions, cognitive simplicity, and consideration sets. J Mark Res 47(3):485–496Google Scholar
- 77.Hertwig R, Barron G, Weber EU, Erev I (2004) Decisions from experience and the effect of rare events in risky choice. Psych Sci 15:534–539Google Scholar
- 78.Higgins ET (2005) Value from regulatory fit. Psych Sci 14:209–213Google Scholar
- 79.Hoeffler S (2003) Measuring preferences for really new products. J Mark Res 40(4):406–420Google Scholar
- 80.Hogarth RM, Karelaia N (2007) Heuristic and linear models of judgment: matching rules and environments. Psych Rev 114:733–758Google Scholar
- 81.Horowitz JL, Louviere JJ (1995) What is the role of consideration sets in choice modeling? Int J Res Mark 12(May):39–54Google Scholar
- 82.Horsky D, Nelson P, Posavac SS (2004) Stating preference for the ethereal but choosing the concrete: how the tangibility of attributes affects attribute weighting in value elicitation and choice. J Consum Psych 14(1–2):132–140Google Scholar
- 83.Hsu DK, Haynie JM, Simmons SA, McKelvie A (2014) What matters, matters differently: a conjoint analysis of the decision policies of angel and venture capital investors. Ventur Cap An Int J Entrep Finan 16(1):1–25Google Scholar
- 84.Huber J, Train K (2001) On the similarity of classical and Bayesian estimates of individual mean part-worths. Mark Lett 12(3):259–269Google Scholar
- 85.Huber J, Zwerina K (1996) The importance of utility balance in efficient choice designs. J Mark Res 33(3):307–317Google Scholar
- 86.Hutchinson JW, Zauberman G, Meyer R (2010) On the interpretation of temporal inflation parameters in stochastic models of judgment and choice. Mark Sci 29(1):23–31Google Scholar
- 87.Iyengar R, Jedidi K, Kohli R (2008) A conjoint approach to multipart pricing. J Mark Res 45(2):195–210Google Scholar
- 88.Iyengar R, Van den Bulte C, Valente TW (2011) Opinion leadership and social contagion in new product diffusion. Mark Sci 30(2):195–212Google Scholar
- 89.Jedidi K, Kohli R (2005) Probabilistic subset—conjunctive models for heterogeneous consumers. J Mark Res 42(4):483–494Google Scholar
- 90.Jedidi K, Zhang JZ (2002) Augmenting conjoint analysis to estimate consumer reservation price. Manag Sci 48(10):1350–1368Google Scholar
- 91.Jedidi K, Kohli R, DeSarbo WS (1996) Consideration sets in conjoint analysis. J Mark Res 33(3):364–372Google Scholar
- 92.Jedidi K, Jagpal S, Manchanda P (2003) Measuring heterogeneous reservation prices for product bundles. Mark Sci 22(1):107–130Google Scholar
- 93.Johannesson M, Blomquist GC, Blumenschein K, Johansson PO, Liljas B, O’Connor RM (1999) Calibrating hypothetical willingness to pay responses. J Risk Uncertain 18(1):21–32Google Scholar
- 94.Johnson JG, Busemeyer JR (2005) A dynamic, stochastic, computational model of preference reversal phenomena. Psych Rev 112:841–861Google Scholar
- 95.Johnson EJ, Haubl G, Keinan A (2007) Aspect of endowment: a query theory of value construction. J Exp Psych Learn Mem Cogn 33:461–474Google Scholar
- 96.Kadiyali V, Sudhir K, Rao VR (2001) Structural analysis of competitive behavior: new empirical industrial organizational methods in marketing. Int J Res Mark 18(1, 2):161–186Google Scholar
- 97.Kahneman D (2003) Maps of bounded rationality: a perspective on intuitive judgment and choice. In: Frangsmyr T (ed) Les Prix Nobel: The Nobel Prizes 2002. Nobel Foundation, Stockholm, pp 449–489Google Scholar
- 98.Kannan PK, Pope BK, Jain S (2009) Pricing digital content product lines: a model and application for the national academies press. Mark Sci 28(4):620–636Google Scholar
- 99.Kanninen B (2002) Optimal design for multinomial choice experiments. J Mark Res 39:214–227Google Scholar
- 100.Kim SS, Malhotra NK (2005) A longitudinal model of continued IS use: an integrative view of four mechanisms underlying postadoption phenomena. Manag Sci 51(5):741–755Google Scholar
- 101.Kim H-J, Park Y-H, Bradlow ET, Ding M (2014) PIE: a holistic preference concept and measurement model. J Mark Res 51(1):335–351Google Scholar
- 102.Kivetz R, Netzer O, Srinivasan V (2004) Alternative models for capturing the compromise effect. J Mark Res 41(3):237–257Google Scholar
- 103.Kohli R, Jedidi K (2007) Representation and inference of lexicographic preference models and their variants. Mark Sci 26(3):380–399Google Scholar
- 104.Kohli R, Mahajan V (1991) A reservation-price model for optimal pricing of multiattribute products in conjoint analysis. J Mark Res 28(3):347–354Google Scholar
- 105.Krieger AM, Green PE (1996) Modifying cluster-based segments to enhance agreement with an exogenous response variable. J Mark Res 33(3):351–363Google Scholar
- 106.Kuhfeld WF (2005) Marketing research methods in SAS: experimental design, choice, conjoint, and graphical techniques. SAS Institute, CaryGoogle Scholar
- 107.Lachaab M, Ansari A, Jedidi K, Trabelsi A (2006) Modeling preference evolution in discrete choice models: a Bayesian state-space approach. Quant Mark Econ 4(1):57–81Google Scholar
- 108.Laibson D (1997) Golden eggs and hyperbolic discounting. Quant J Econ 112:443–477Google Scholar
- 109.Lee MD, Cummins TDR (2004) Evidence accumulation in decision making: unifying the “Take The Best” and the “Rational” models. Psych Bull Rev 11:343–352Google Scholar
- 110.Levav J, Heitmann M, Herrmann A, Iyengar SS (2010) Order in product customization decisions: evidence from field experiments. J Polit Econ 118(2):274–299Google Scholar
- 111.Lichtenstein S, Slovic P (2006) The construction of preference. Cambridge University Press, LondonGoogle Scholar
- 112.Liechty JC, Ramaswamy V, Cohen SH (2001) Choice menus for mass customization: an experimental approach for analyzing customer demand with an application to a web-based information service. J Mark Res 38(2):183–196Google Scholar
- 113.Liechty JC, Fong DKH, DeSarbo WS (2005) Dynamic models incorporating individual heterogeneity: utility evolution in conjoint analysis. Mark Sci 24(2):285–293Google Scholar
- 114.Liu Q, Arora N (2011) Efficient choice designs for a consider-then-choose model. Mark Sci 30(2):321–338Google Scholar
- 115.Loewenstein GF, Weber EU, Hsee CK, Welch N (2001) Risk as feelings. Psych Bull 127:267–286Google Scholar
- 116.Louviere JJ, Meyer RJ (2007) Formal choice models of informal choices: what choice modeling research can (and can’t) learn from behavioral theory. In: Malhotra NK (ed) Review of marketing research. Sharpe, New York, pp 3–32Google Scholar
- 117.Louviere JJ, Street D, Burgess L, Wasi N, Islam T, Marley AAJ (2008) Modeling the choices of individual decision makers by combining efficient choice experiment designs with extra preference information. J Choice Model 1(1):128–163Google Scholar
- 118.Luce MF, Payne JW, Bettman JR (1999) Emotional trade-off difficulty and choice. J Mark Res 36(2):143–159Google Scholar
- 119.Luo L (2011) Product line design for consumer durables: an integrated marketing and engineering approach. J Mark Res 48(1):128–139Google Scholar
- 120.Luo L, Kannan PK, Ratchford BT (2008) Incorporating subjective characteristics in product design and evaluations. J Mark Res 45(2):182–194Google Scholar
- 121.Marley AAJ, Louviere JJ (2005) Some probabilistic models of best, worst, and best-worst choices. J Math Psych 49(6):464–480Google Scholar
- 122.Marley AAJ, Flynn TN, Louviere JJ (2008) Probabilistic models of set-dependent and attribute-level best–worst choice. J Math Psych 52(5):281–296Google Scholar
- 123.McKenzie CRM, Nelson JD (2003) What a speaker’s choice of frame reveals: reference points, frame selection, and framing effects. Psychon Bull Rev 10(3):596–602Google Scholar
- 124.Michalek JJ, Ebbes P, Adigüzel F, Feinberg FM, Papalambros PY (2011) Enhancing marketing with engineering: optimal product line design for heterogeneous markets. Int J Res Mark 28(1):1–12Google Scholar
- 125.Miller KM, Hofstetter R, Krohmer H, Zhang ZJ (2011) How should consumers’ willingness to pay be measured? an empirical comparison of state-of-the-art approaches. J Mark Res 48(1):172–184Google Scholar
- 126.Morikawa T, Ben-Akiva M, McFadden D (2002) Discrete choice models incorporating revealed preferences and psychometric data. Adv Econ 16:29–55Google Scholar
- 127.Narayan V, Rao VR, Saunders C (2011) How peer influence affects attribute preferences: a Bayesian updating mechanism. Mark Sci 30(2):368–384Google Scholar
- 128.Netzer O, Srinivasan V (2011) Adaptive self-explication of multi-attribute preferences. J Mark Res 48(1):140–156Google Scholar
- 129.Netzer O, Lattin J, Srinivasan V (2008) A hidden Markov model of customer relationship dynamics. Mark Sci 27(2):185–204Google Scholar
- 130.Netzer O, Toubia O, Bradlow ET, Dahan E, Evgeniou T, Feinberg FM, Feit EM, Hui SK, Johnson J, Liechty JC, Orlin JB, Rao VR (2008) Beyond conjoint analysis: advances in preference measurement. Mark Lett 19(3–4):337–354Google Scholar
- 131.Ofek E, Srinivasan V (2002) How much does the market value an improvement in a product attribute? Mark Sci 21(4):398–411Google Scholar
- 132.Pachur T, Hertwig R (2006) On the psychology of the recognition heuristic: retrieval primacy as a key determinant of its use. J Exp Psych Learn Mem Cogn 32:983–1002Google Scholar
- 133.Park JH, MacLachlan DL (2008) Estimating willingness to pay with exaggeration bias-corrected contingent valuation method. Mark Sci 27(4):691–698Google Scholar
- 134.Park Y-H, Ding M, Rao VR (2008) Eliciting preference for complex products: a web-based upgrading method. J Mark Res 45(5):562–574Google Scholar
- 135.Parker RJ, Schrift RY (2011) Rejectable choice sets: how seemingly irrelevant no-choice options affect consumer decision processes. J Mark Res 48(4):840–854Google Scholar
- 136.Pham MT, Cohen JB, Pracejus JW, Hughes GD (2001) Affect monitoring and the primacy of feelings in judgment. J Consum Res 28(2):167–188Google Scholar
- 137.Raghavarao D, Wiley JB, Chitturi P (2010) Choice-based conjoint analysis: models and designs. Chapman and Hall/CRC, New YorkGoogle Scholar
- 138.Rao VR (2008) Developments in conjoint analysis. In: Wierenga B (ed) Handbook of marketing decision models. Springer, New York, pp 23–55Google Scholar
- 139.Rao VR (2014) Applied conjoint analysis. Springer, New YorkGoogle Scholar
- 140.Rao VR, Kartono B, Su M (2008) Methods for handling massive numbers of attributes in conjoint analysis. Rev Mark Res 5:104–129Google Scholar
- 141.Reed A II (2004) Activating the self-importance of consumer selves: exploring identity salience effects on judgments. J Consum Res 31(2):286–295Google Scholar
- 142.Richter T, Spath P (2006) Recognition is used as one cue among others in judgment and decision making. J Exp Psych Learn Mem Cogn 32:150–162Google Scholar
- 143.Roe RM, Busemeyer JR, Townsend JT (2001) Multiattribute decision field theory: a dynamic connectionist model of decision making. Psych Rev 108:370–392Google Scholar
- 144.Romero J, van der Lans R, Wierenga B (2013) A partially hidden Markov model of customer dynamics for CLV measurement. J Interact Mark 27(3):185–208Google Scholar
- 145.Rooderkerk RP, Van Heerde HJ, Bijmolt THA (2011) Incorporating context effects into a choice model. J Mark Res 48(4):767–780Google Scholar
- 146.Rose JM, Bliemer MCJ, Hensher DA et al (2008) Designing efficient stated choice experiments in the presence of reference alternatives. Transp Res Part B Methodol 42(4):395–406Google Scholar
- 147.Rossi PE, Allenby GM, McCulloch R (2005) Bayesian statistics and marketing. Wiley, West Sussex. doi: 10.1002/0470863692 Google Scholar
- 148.Ruan S, MacEachern SN, Otter T, Dean AM (2008) The dependent Poisson race model and modeling dependence in conjoint choice experiments. Psychometrika 73(2):261–288Google Scholar
- 149.Rutz OJ, Sonnier GP (2011) The evolution of internal market structure. Mark Sci 30(2):274–289Google Scholar
- 150.Salisbury LC, Feinberg FM (2010) Alleviating the constant stochastic variance assumption in decision research: theory, measurement, and experimental test. Mark Sci 29(1):1–17Google Scholar
- 151.Sandor Z, Wedel M (2001) Designing conjoint choice experiments using managers’ prior beliefs. J Mark Res 38(4):430–444Google Scholar
- 152.Sandor Z, Wedel M (2002) Profile construction in experimental choice designs for mixed logit models. Mark Sci 21(4):455–475Google Scholar
- 153.Sandor Z, Wedel M (2005) Heterogeneous conjoint choice designs. J Mark Res 42(2):210–218Google Scholar
- 154.Scarpa R, Thiene M, Train KE (2008) Utility in WTP space: a tool to address confounding random scale effects in destination choice to the Alps. Am J Agric Econ 90:994–1010Google Scholar
- 155.Scarpa R, Gilbride TJ, Campbell D, Hensher DA (2009) Modelling attribute non-attendance in choice experiments for rural landscape valuation. Eur Rev Agric Econ 36(2):151–174Google Scholar
- 156.Schlosser AE, Shavitt S (2002) Anticipating discussion about a product: rehearsing what to say can affect your judgments. J Consum Res 29(1):101–115Google Scholar
- 157.Shi SW, Wedel M, Pieters R (2013) Information acquisition during online decision making: a model-based exploration using eye-tracking data. Manag Sci 59(5):1009–1026Google Scholar
- 158.Simon D, Krawczyk DC, Holyoak KJ (2004) Construction of preferences by constraint satisfaction. Psych Sci 15:331–336Google Scholar
- 159.Sonnier GP, McAlister L, Rutz OJ (2011) A dynamic model of the effect of online communications on firm sales. Mark Sci 30(4):702–716Google Scholar
- 160.Srinivasan S, Park CS (1997) Surprising robustness of the self-explicated approach to customer preference structure measurement. J Mark Res 34(2):286–291Google Scholar
- 161.Street DJ, Burgess L (2004) Optimal and near-optimal pairs for the estimation of effects in 2-level choice experiments. J Stat Plan Inf 118:185–199Google Scholar
- 162.Street DJ, Burgess L (2007) The construction of optimal stated choice experiments: theory and methods. Wiley-Interscience, HobokenGoogle Scholar
- 163.Street DJ, Bunch DS, Moore BJ (2001) Optimal designs for 2^{k} paired comparison experiments. Commun Stat Theory Methods 30(10):2149–2171Google Scholar
- 164.Stuttgen P, Boatwright P, Monroe RT (2012) A satisficing choice model. Mark Sci 31(6):878–899Google Scholar
- 165.Swait J, Andrews RL (2003) Enriching scanner panel models with choice experiments. Mark Sci 22(4):442–460Google Scholar
- 166.Swait J, Louviere JJ, Anderson D (1995) Best-worst conjoint: a new preference elicitation method to simultaneously identify overall attribute importance and attribute level partworths. Working Paper, University of FloridaGoogle Scholar
- 167.Ter Hofstede FK, Kim Y, Wedel M (2002) Bayesian prediction in hybrid conjoint analysis. J Mark Res 39(2):253–261Google Scholar
- 168.Ter Hofstede FK, Wedel M, Steenkamp J-BEM (2002) Identifying spatial segments in international markets. Mark Sci 21(2):160–177Google Scholar
- 169.Toubia O, Hauser JR (2007) On managerially efficient experimental designs. Mark Sci 26(6):850–858Google Scholar
- 170.Toubia O, Simester DJ, Hauser JR, Dahan E (2003) Fast polyhedral adaptive conjoint estimation. Mark Sci 22(3):273–303Google Scholar
- 171.Toubia O, Hauser JR, Simester DI (2004) Polyhedral methods for adaptive choice-based conjoint analysis. J Mark Res 41(1):116–131Google Scholar
- 172.Toubia O, Hauser JR, Garcia R (2007) Probabilistic polyhedral methods for adaptive choice-based conjoint analysis: theory and application. Mark Sci 26(5):596–610Google Scholar
- 173.Toubia O, de Jong MG, Steiger D, Fuller J (2012) Measuring consumer preferences using conjoint poker. Mark Sci 31(1):138–156Google Scholar
- 174.Tsafarakis S, Marinakis Y, Matsatsinis N (2010) Particle swarm optimization for optimal product line design. Int J Res Mark 27(4):13–32Google Scholar
- 175.Tversky A, Kahneman D (1992) Advances in prospect theory: cumulative representation of uncertainty. J Risk Uncertain 5:297–323Google Scholar
- 176.Tversky A, Koehler DJ (1994) Support theory: a non-extensional representation of subjective probability. Psych Rev 101:547–567Google Scholar
- 177.Vadali S, Liechty JC, Rangaswamy A (2006) Generalized hierarchical Bayes estimation for polyhedral conjoint analysis. Working Paper, Pennsylvania State UniversityGoogle Scholar
- 178.Venkatraman S, Aloysius JA, Davis FD (2006) Multiple prospect framing and decision behavior: the mediational roles of perceived riskiness and perceived ambiguity. Organ Behav Hum Decis Process 101:59–73Google Scholar
- 179.Verlegh PWJ, Schifferstein HNJ, Wittink DR (2002) Range and number-of levels effects in derived and stated measures of attribute importance. Mark Lett 13(1):41–52Google Scholar
- 180.Voelckner F (2006) An empirical comparison of methods for measuring consumers’ willingness to pay. Mark Lett 17:137–149Google Scholar
- 181.Vohs KD, Baumeister RF, Schmeichel BJ, Twenge JM, Nelson NM, Tice DM (2008) Making choices impairs subsequent self-control: a limited resource account of decision-making, self-regulation, and active initiative. J Pers Social Psych 94:883–898Google Scholar
- 182.Wang X, Camm JD, Curry DJ (2009) A branch-and-price approach to the share-of-choice product line design problem. Manag Sci 55(10):1718–1728Google Scholar
- 183.Wathieu L, Bertini M (2007) Price as a stimulus to think: the case for willful overpricing. Mark Sci 26(1):118–129Google Scholar
- 184.Weber EU, Johnson EJ (2009) Mindful judgment and decision making. Annual Rev Psych 60:53–85Google Scholar
- 185.Weber EU, Johnson EJ, Milch KF, Chang H, Brodscholl JC, Goldstein DG (2007) Asymmetric discounting in intertemporal choice—a query-theory account. Psych Sci 18:516–523Google Scholar
- 186.Wedel M, Kamakura W (2000) Market segmentation: conceptual and methodological foundations, 2nd edn. Kluwer, NorwellGoogle Scholar
- 187.Wittink DR, Cattin P (1989) Commercial use of conjoint analysis: an update. J Mark 53(3):91–96Google Scholar
- 188.Wittink DR, Keil SK (2003) Continuous conjoint analysis. In: Gustafsson A, Herrmann A, Huber F (eds) Conjoint Measurement: methods and applications, 3rd edn. Springer, New YorkGoogle Scholar
- 189.Yang S, Allenby GM (2003) Modeling interdependent consumer preferences. J Mark Res 40(3):282–294Google Scholar
- 190.Yee M, Dahan E, Hauser JR, Orlin JB (2007) Greedoid-based noncompensatory inference. Mark Sci 26(4):532–549Google Scholar
- 191.Yu J, Goos P, Vandebroek M (2011) Individually adapted sequential Bayesian conjoint-choice designs in the presence of consumer heterogeneity. Int J Res Mark 28:378–388Google Scholar