1 Introduction

Conjoint analysis is one of the most celebrated research tools in marketing and consumer research. This methodology, which enables the measurement of consumer preferences (Footnote 1), has been applied to help solve a wide variety of marketing problems, including estimating product demand, designing a new product line, and calibrating price sensitivity and elasticity. The method involves presenting respondents with a carefully designed set of hypothetical product profiles (defined by specified levels of the relevant attributes) and collecting their preferences in the form of ratings, rankings, or choices for those profiles.

Since the introduction of conjoint analysis in marketing research over four decades ago, a remarkable variety of new models and parameter estimation procedures have been developed. These include the move from nonmetric to metric orientation and orthogonal experimental designs in the 1970s, developments in choice-based and hybrid conjoint, including the adaptive conjoint model, in the 1980s, the growing popularity of hierarchical Bayesian and latent class models in the 1990s, and the adaptation of conjoint models to online choice tasks, incentivized contexts, group dynamics, and social influences in the past decade. Several earlier review articles in marketing and consumer academic research have documented the evolution of conjoint analysis (Footnote 2). This manuscript provides an organizing framework for this vast literature, reviews key articles, critically discusses several advanced issues and developments, and identifies directions for future research. Cognizant of the fact that conjoint analysis has matured, this review is selective in its choice of articles (some classic but mostly contemporary, focusing on developments since 2000) that have had, or have the potential to have, maximal impact on the field. We hope this interdisciplinary review will encourage conjoint scholars to evolve beyond existing conjoint models and explore new problems and applications of consumer preference measurement, develop new forms of data collection, devise new estimation procedures, and tap into the dynamic nature of this methodology.

2 An Organizing Framework for Conjoint Analysis

The developments in conjoint research have naturally drawn from a variety of disciplines (notably choice behavior and statistical theory). The conceptual framework shown in Fig. 1 attempts to integrate various threads of research across five major categories: (A) Behavioral and Theoretical Underpinnings, (B) Researcher Issues for Research Design, (C) Respondent Issues for Data Collection, (D) Researcher Issues for Data Analysis, and (E) Managerial Issues Concerning Implementation. This framework considers all three relevant stakeholders: the researcher, the respondent, and the manager.

Fig. 1 A framework for organizing contemporary research in conjoint analysis

3 (A) Behavioral and Theoretical Underpinnings

3.1 A1. Behavioral Processes in Judgment, Preference, and Choice

Developments in judgment and decision-making research offer great potential for conjoint analysis to better understand the behavioral processes in judgment, preference, and choice. We now know how, and increasingly why, characteristics of the task and the choice options guide attention, and how internal memory and external information search affect choice in path-dependent ways (Footnote 3). Recent research illustrates that preferences are typically constructed rather than stored and retrieved [111].

Judgments and choices typically engage multiple psychological processes, from attention-guided encoding and evaluation to retrieval of task-relevant information from memory or external sources, including prediction, response, and post-decision evaluation and updating. Attention is more important in decisions from descriptions (e.g., the full-profile approach of conjoint analysis), whereas memory and learning are more relevant in decisions from experience, made through trial-and-error sampling of choice options [77] (Footnote 4). In decisions from experience, recent outcomes are given more weight and rare events are underweighted. In a similar vein, the insight from prospect theory that evaluation is relative continues to gain support [184]. Since neurons encode changes in stimulation rather than absolute levels, absolute judgments are much more difficult than relative judgments. Relative evaluation includes other observed or counterfactual outcomes from the same or different choice alternatives, as well as expectations.

Also relevant to conjoint analysis are the recent extensions of decision field theory (DFT) and models of value judgment in multiattribute choice [94]. In these models, attributes of choice alternatives are repeatedly and randomly sampled, and each additional acquisition of information increases or decreases the valuation of an alternative in a choice set, with deliberation ending when the first option reaches a certain threshold. DFT as a multilayer connectionist network has also been applied to explain context effects such as the similarity, attraction, and compromise effects [143]. For instance, conjoint models that capture the compromise effect result in better prediction and fit compared to traditional value maximization models [102].
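To make the sequential-sampling intuition concrete, the following minimal Python sketch (with hypothetical function names, attribute values, and noise settings chosen only for illustration) mimics the core mechanism described above: attributes are sampled stochastically, momentary valences are computed relative to the other alternatives, preference states accumulate, and deliberation stops when the first alternative crosses a threshold. It illustrates the general idea, not the full connectionist DFT model of [143].

```python
import numpy as np

def dft_choice(attr_values, attention_probs, threshold=3.0, max_steps=10_000, rng=None):
    """Toy sequential-sampling choice in the spirit of decision field theory (DFT).

    attr_values:     (n_alternatives, n_attributes) subjective attribute values
    attention_probs: probability that each attribute is attended to on a given step
    Preference states accumulate attribute-wise relative valences until one
    alternative's state first crosses the threshold."""
    rng = rng or np.random.default_rng(0)
    n_alt, n_attr = attr_values.shape
    state = np.zeros(n_alt)
    for _ in range(max_steps):
        k = rng.choice(n_attr, p=attention_probs)               # attention shifts stochastically
        valence = attr_values[:, k] - attr_values[:, k].mean()  # relative, not absolute, evaluation
        state += valence + rng.normal(0.0, 0.1, size=n_alt)     # momentary preference plus noise
        if state.max() >= threshold:
            break
    return int(state.argmax())

# two alternatives trading off attribute 1 against attribute 2
values = np.array([[0.8, 0.2],
                   [0.3, 0.9]])
print(dft_choice(values, attention_probs=[0.5, 0.5]))
```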

While stimulus sampling models typically assume path independence, choice processes are often biased toward early-emerging favorites, resulting in reference-dependent subsequent evaluations [97] and distortion of the value of options, i.e., the decision-by-distortion effect [16, 158]. Also, studies of anchoring suggest that the priming of memory accessibility (and hence preference) can be changed by asking a prior question and remains strong in the presence of incentives, experience, and market feedback [9]. Not only are there short-term changes, but long-term effects on memory have also been shown; for example, studies have measured the long-term effects of purchase-intention questions on memory and subsequent purchases [28].

The recently developed query theory (QT) [95] of preference construction is a process model of valuation describing how the order of retrievals from memory plays a role in judging the value of objects, with an emphasis on output interference. Weber et al. [185] show that the queries about reasons supporting immediate versus delayed consumption are issued in reverse order, thus making the endowment effect disappear. Similar to Luce's choice axiom, support theory (ST) [176] is a model of inference about the probability of an event that uses the relative weight of what we know and can generate about the event (its support) and compares it to what we know and can generate about all other possible events [184]. Since competing hypotheses are often generated by associative memory processes from long-term memory, irrelevant alternative hypotheses may well be generated and occupy limited-capacity working memory [52]. This has implications for conjoint research, as consumers with greater working memory capacity can include more alternative hypotheses (i.e., explicit disjunctions) and thus exhibit greater discrimination and a lower judged probability of the focal brand being chosen.

Recently, the dual process models of System 1 and System 2 processes proposed by Kahneman [97] have gained popularity. Psychological models have distinguished between a rapid, automatic and effortless, associative, intuitive process (System 1) and a slower, rule-governed, analytic, deliberate, and effortful process (System 2). The extent to which, and the process by which, the two systems interact [57] remains a topic of debate. Both cognitive and affective mechanisms have been shown to give rise to the discounting of future events, such as in delayed versus immediate consumption [185, 178]. These theories have implications for conjoint data collection for technology products and durable goods.

3.1.1 Suggested Directions for Future Research

1. While time-discounted utility models are useful in inter-temporal choice, there is also a need to incorporate various behavioral effects in conjoint models. Conjoint modelers can extend and augment inter-temporal utility specifications by using temporal inflation parameters representing differences in “internal noise” used by behavioral researchers. An example is the recent critique by Hutchinson et al. [86] of the theoretical assumptions made in Salisbury and Feinberg's [150] stochastic modeling of experimental data, in which they empirically tested alternative models of choice and judgment with respect to assumptions about “internal noise” and “uncertainty about anticipated utility” as well as the stochastic versus deterministic nature of the scale parameters.

2. Conjoint analysts can develop utility models that extend prospect theory and neuron-encoded relative judgments to better understand how consumers select reference points and how multiple reference points might be used in relative evaluation [184]. For instance, individual heterogeneity can be captured through a distribution of reference points rather than a single reference point such as a reference price [50].

3. Decision-makers may pay more equal attention to all possible outcomes than is warranted by their probabilities and may linger on extreme outcomes to assess best and worst choices in choice-based conjoint studies. Cumulative prospect theory, which explains the evaluation of outcome probabilities relative to their position in the configuration of outcomes [175], can provide a useful avenue of research for this problem.

4. The power of affect, feelings, and emotions in consumer judgment, preference, and choice is now well established [118]. Future conjoint research should incorporate the mechanisms of the dual process model, i.e., the System 1 and System 2 models [97]. Also, decision affect theory provides a framework that incorporates emotional reactions to counterfactual outcome comparisons such as regret or loss aversion [34]. In a risky choice situation, the fit with self-regulatory orientation can also transfer as affective information into the choice task, which could be modeled [78].

3.2 A2. Compensatory Versus Noncompensatory Processes

Much of the conjoint research assumes that the utility function for a choice alternative is additive and linear in parameters (Footnote 5). The implied decision rules are compensatory. Generally speaking, linear compensatory choice models do not address simplifying choice heuristics, such as truncation and level focus, that can result in an abrupt change in choice probability. Yet noncompensatory simple heuristics are often at least as accurate in predicting new data as linear models, which are criticized for over-fitting the data [67, 89, 103]. While the linear utility model has been the mainstay in conjoint research, Bayesian methods, including data augmentation, can easily accommodate nonlinear models and can deal with irregularities in the likelihood surface [6]. Recently, Kohli and Jedidi [103] and Yee et al. [190] proposed dynamic programming methods (using greedoid algorithms) to estimate lexicographic preference structures.
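As a concrete illustration of this contrast, the short Python sketch below (with hypothetical part-worths and a hypothetical attribute priority order, chosen only for illustration) scores the same two profiles with an additive compensatory rule and with a noncompensatory lexicographic rule. The two rules can disagree because the lexicographic rule never lets a weakness on the top-priority attribute be offset elsewhere.

```python
import numpy as np

profiles = np.array([[1.0, 0.1, 0.9],      # profile 0: strongest on attribute 3
                     [0.6, 0.9, 0.8]])     # profile 1: balanced across attributes
weights = np.array([0.4, 0.4, 0.2])        # hypothetical compensatory importances

def compensatory_choice(profiles, weights):
    # additive linear utility: a weakness on one attribute can be offset by strengths on others
    return int(np.argmax(profiles @ weights))

def lexicographic_choice(profiles, attribute_order=(2, 0, 1), tol=1e-9):
    # noncompensatory: screen on the most important attribute first,
    # moving to the next attribute only to break (near-)ties
    candidates = np.arange(len(profiles))
    for a in attribute_order:
        best = profiles[candidates, a].max()
        candidates = candidates[profiles[candidates, a] >= best - tol]
        if len(candidates) == 1:
            break
    return int(candidates[0])

print(compensatory_choice(profiles, weights))   # -> 1 (the balanced profile wins on total utility)
print(lexicographic_choice(profiles))           # -> 0 (best on the top-priority attribute 3)
```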

Noncompensatory processes are particularly relevant in the context of consideration sets, an issue typically ignored by traditional conjoint research (e.g., [67, 91]). Many advocate a noncompensatory rule for consideration and a compensatory model at the choice stage (the consider-then-choose rule), although some critics question the existence and parsimony of a formal consideration set (see Horowitz and Louviere [81], who find the same utility function whether two stages or only one stage are modeled). For instance, in a study estimating consideration and choice probabilities simultaneously, Jedidi et al. [91] find that both segment-level and individual Tobit models perform better than the traditional conjoint model, which ignores both consideration and the error component in preference. Similarly, Gilbride and Allenby [67] estimate a two-stage model using hierarchical Bayes methods, augmenting the latent consideration sets within their MCMC approach. Recently, Hauser et al. [76] proposed two machine-learning algorithms to estimate cognitively simple generalized disjunctions-of-conjunctions (DOC) decision rules, and Liu and Arora [114] developed a method to construct efficient designs for a two-stage, consider-then-choose model (Footnote 6). Stuttgen et al. [164] propose a continuation of the line of research started by Gilbride and Allenby [67] and Jedidi and Kohli [89] that does not rely on compensatory trade-offs at all. These findings are consistent with economic theories of consideration sets wherein consumers balance search costs with the option value of utility maximization to achieve cognitive simplicity.

3.2.1 Suggested Directions for Future Research

1. It seems that combining lexicographic and compensatory processes in a two-stage model, using the greedoid algorithm in the first stage, is a promising research route to follow, as it enhances the ecological rationality of preference models (see [67, 103], and [89]).

2. Several interesting behavioral processes, such as the formation and dynamics of the consideration set, still need to be understood. Given the technological advances (e.g., eye-tracking technology) in dealing with noncompensatory processes and satisficing rules, it behooves conjoint researchers to adapt such methods in the future (see [164, 173], and [157]).

3. The take-the-best (TTB) strategy, which relies on knowledge about cue diagnosticity (Footnote 7), performs well when the distribution of cue validities is highly skewed. Several other heuristics have also performed well, including models that integrate TTB with full information [80, 109]. We encourage conjoint researchers to incorporate cue diagnosticity in estimating noncompensatory models.

4. While the recognition heuristic (RH) is a useful tool for inference in cases in which only one of two comparison alternatives is recognized, the debate is whether recognition is always used as a first stage in inference or whether recognition is simply one cue that can be integrated with others (see [132, 142]) without any special status. For future research, RH can potentially be applied in conjoint-choice contexts characterized by rapid, automatic, and effortless processes (i.e., System 1 processes [97]) typical of low-involvement routine products.

3.3 A3. Integrating Behavioral Learning and Context Effects

Conjoint analysis has made some significant gains in incorporating behavioral theory into preference measurement. Recently, Bradlow et al. [20] investigated how subjects impute missing attribute levels when they evaluate partial conjoint profiles, using a Bayesian “pattern-matching” learning model. Respondents impute values for missing attributes based on several factors, including their priors over the set of attribute levels, a given attribute's previously shown values, the previously shown values of other attributes, and the covariation among attributes. Alba and Cooke [3] caution that not all attributes are spontaneously inferred and that, even when inference is natural, symmetry may be violated such that the probability of imputing cause (e.g., quality) from effect (e.g., price) may deviate from the probability of imputing effect from cause. When information is intentionally retrieved, the weighting function may reflect uncertainty about the accuracy of the profiles or the ability to retrieve them.

There is substantial research in conjoint analysis demonstrating context effects. Conjoint models in marketing research have assumed stable preference structures, in that preferences at the time of measurement are the same as at the time of trial or purchase. However, context effects produce instability when the context at measurement does not match the context at decision time [15, 110]. DeSarbo et al. [45] introduced a Bayesian dynamic linear model (DLM)-based methodology that permits the detection and modeling of the dynamic evolution of individual preferences that occurs during the conjoint task due to learning, exposure to additional information, fatigue, cognitive storage limitations, etc. (see [113]). See also Rutz and Sonnier [149] for DLM-based Bayesian modeling of dynamic attribute evolution due to structural changes in the market.

Kivetz et al. [102] find that incorporating the “compromise effect” leads to superior predictions and fit compared with the traditional value maximization model. Recently, Levav et al. [110] demonstrated, using experimental studies, that normatively equivalent decision contexts can yield different decisions, which challenges the assumption that people maximize utility and possess a complete preference ordering. This type of research attempts to bridge consumer psychology with marketing science. Other related work involving dynamic preference structures includes Netzer et al. [129], Evgeniou et al. [59], Bradlow and Park [19], Fong et al. [64], Ruan et al. [148], Rooderkerk et al. [145], De Jong et al. [38], and Elrod et al. [56].

3.3.1 Suggested Directions for Future Research

1. There is clearly a need for more rigorous work to incorporate behavioral effects in preference measurement. While this may create a conflict between the isomorphic goal of fit and the paramorphic goal of predictive validity [130], greater dialogue and collaboration between the two research camps is essential for improving the quality of conjoint research.

2. Future research in this conjoint arena should examine other documented behavioral effects such as asymmetric dominance, asymmetric advantage, enhancement, and detraction effects (see [2]).

3. Since preference formation is a dynamic process dependent on learning and context effects, future researchers should attempt to further develop and use flexible models and dynamic random-effects models such as those used by Liechty et al. [113] and Bradlow et al. [20]. Many of the flexible models developed to capture dynamics in repeated choice (e.g., [107]) could be adapted to conjoint preference measurement.

4. It would be worthwhile to investigate how choice probabilities change in choice-based conjoint and choice simulators when context effects and consumer expertise are built directly into the model, as these may affect the likelihood and form of missing attribute inference [3].

3.4 A4. Group Dynamics and Social Interactions

A vast majority of choice models assume that a consumer's latent utility is a function of brand attributes and not of the preferences of referent others. However, some scholars have examined the influence of referent others in dyadic and network contexts. For instance, Arora and Allenby [10] develop a Bayesian model to estimate the attribute-specific influence of spouses in a decision-making context and discuss how, and to whom, marketers can effectively target communication messages. Using a Bayesian autoregressive mixture model, Yang and Allenby [189] demonstrate that preference interdependence due to geographically defined networks is more important than demographic networks in explaining behavior. Ding and Eliashberg [47] proposed a new model that explicitly considers dyadic decision-making in ethical drug prescriptions in the context of physicians and patients. The issue of reducing hypothetical biases (e.g., socially desirable responses [48]) in group dynamics through innovative methodology, such as incentive-aligned conjoint studies, is critical.

Some exciting work has started using conjoint models in the domain of group dynamics and social interactions [129, 29, 68, 88, 127, 159] (Footnote 8). With the availability of “sentiment analysis” tools, firms are now able to extend beyond ratings data and capture a torrent of online textual communications from a variety of social media, including blogs, chat rooms, news sites, YouTube, and Twitter (Footnote 9). Recently, Sonnier et al. [159], using web crawler technology and automated classification of sentiments, demonstrated that positive comments increased, and negative comments decreased, the modeled outcome, and that such effects are masked when comment volume is aggregated across valence. Based on the theory of social contagion [88], Narayan et al. [127] study the behavioral mechanisms underlying peer influence on choice decisions and find that consumers update their inherent attribute preferences in a Bayesian manner, weighing the relative uncertainty of their own attribute preferences against that of their peers, and also use peer choices as an additional attribute. This particular study is significant because the authors mitigate problems of endogeneity, correlated unobservable variables, and simultaneity by setting up a preinfluence and postinfluence conjoint experimental design. Most recently, Kim et al. [101] introduced a holistic preference and concept measurement model for conjoint analysis called PIE, a new incentive-aligned data collection method that allows a consumer to obtain individualized shopping advice through other people.

3.4.1 Suggested Directions for Future Research

1. In the promising domain of group dynamics and social interactions for technology-based products, one important research question is what role internal versus external motivations of online information disseminators play in changing the posterior beliefs and preference structures of consumers [69]. For example, very little is known about what motivates opinion leaders and early adopters to not just possess but share information with others.

2. There is vast potential for conjoint models to draw from consumer research on reference group formation and social influences on buyer choice behavior, such as internalization, identification, and compliance [141, 156]. In this area, barter conjoint offers promising potential to model the effects of information diffusion among subjects and how endowment and loss-aversion effects [101, 22, 49] induce individuals to deviate from conventional choice behavior.

3. We issue a call for scholars to explore further developments in conjoint models that capture online recommender systems and social interactions, given the rising importance of social media [32]. Existing algorithms using Classification and Regression Trees, Bayesian Tree Regression, and Stepwise Componential Regression can be further combined to develop an optimal sequence of questions to predict an online visitor's preferences [37]. Additional research into problems involving multiple decision makers with multiple utility functions (e.g., in business-to-business applications) would prove valuable.

4 (B) Researcher Issues for Research Design

Conjoint researchers have long dealt with the problem of large numbers of attributes and levels with the help of experimental designs. The specific choice of design depends on a variety of factors, including the objectives of the research, cost, time, statistical sophistication, and the need to develop individual-level estimates. We focus on the research designs related to the more popular conjoint approaches: choice-based conjoint analysis, menu-based experimental choice, and the maximum difference best/worst conjoint method. We also briefly discuss some recent developments in experimental design and in handling a large number of attributes.

4.1 B1. Choice-Based Conjoint Analysis

Choice-based conjoint (CBC) analysis describes a class of hybrid techniques that are among the most widely adopted market research methods for conjoint analysis (see [137]; Footnote 10). The early choice-based hybrid models used stage-wise regression, compositional models to fit self-explicated data, and the decompositional model at the segment level. However, hybrid models were later extended to allow for parameter estimation at the individual level, using self-explicated data for within-attribute part-worth estimation and the full-profile approach for improving estimates of attribute importance.

Recent developments have allowed for estimation at the individual level through Bayesian estimation [71, 167], even though a respondent provides only a small amount of information within CBC. At the same time, it is not clear whether segments obtained from CBC are similar to those found from post hoc clustering of part-worths [25]. One aspect of choice-based models, particularly with the development of multinomial logit estimation procedures, is the property of independence of irrelevant alternatives (IIA), which forces all cross-elasticities to be equal. However, researchers have developed ways to deal with the IIA assumption by employing mixed-logit or random-parameters logit models that allow for flexible variance-covariance structures. Building on recent work by Louviere and Meyer [116] and Louviere et al. [117], Fiebig et al. [61] argue that much of the heterogeneity in attribute weights is accounted for by a pure scale effect (i.e., holding attribute coefficients fixed, the scale of the error term differs across respondents), leading to the scale-heterogeneity MNL model. Also noteworthy are recent developments in detecting and statistically handling attribute nonattendance, in which respondents focus on only a subset of attributes in choice-based conjoint. Scarpa and colleagues use two different panel mixed-logit models to account for response patterns of repeated exclusion that influence model estimation (see [154], [155], and [24]).
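The IIA property mentioned above, and the way mixed (random-parameters) logit relaxes it, can be illustrated with a few lines of Python. The utilities and the normal mixing distribution below are hypothetical; the point is only that plain MNL preserves probability ratios when an alternative is dropped, whereas averaging MNL probabilities over a distribution of part-worths does not.

```python
import numpy as np

def mnl_probs(utilities):
    """Plain multinomial logit (MNL) choice probabilities: softmax of deterministic utilities."""
    expu = np.exp(np.asarray(utilities) - np.max(utilities))
    return expu / expu.sum()

v = np.array([1.0, 0.5, 0.2])               # deterministic utilities of three alternatives
p_full, p_drop = mnl_probs(v), mnl_probs(v[:2])
# IIA: dropping the third alternative leaves the ratio of the remaining probabilities unchanged
print(p_full[0] / p_full[1], p_drop[0] / p_drop[1])      # both equal exp(1.0 - 0.5)

# Mixed (random-parameters) logit relaxes IIA by averaging MNL probabilities over a
# distribution of respondent part-worths; the aggregate ratios change when an alternative is removed.
rng = np.random.default_rng(0)
betas = rng.normal(loc=1.0, scale=1.0, size=5000)         # hypothetical heterogeneous coefficient
x_attr = np.array([1.0, 0.5, 0.2])                        # a single attribute of each alternative
p_mix_full = np.mean([mnl_probs(b * x_attr) for b in betas], axis=0)
p_mix_drop = np.mean([mnl_probs(b * x_attr[:2]) for b in betas], axis=0)
print(p_mix_full[0] / p_mix_full[1], p_mix_drop[0] / p_mix_drop[1])   # ratios now differ
```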

4.1.1 Suggested Directions for Future Research

1. Several marketing scholars (see [130], [70], and [83]) have identified the importance of advanced research into the direct modeling of behavioral effects on decision-making and choice (e.g., in choice-based conjoint analysis). The research issues include understanding such behavioral phenomena as self-control, context effects, inattention, and reference dependence. The embedding of meta-attributes such as expectations, goals, motivations, reference groups, and social networks might also prove gainful in conjoint analysis.

2. Another potential area of study is the modeling of individual-level structural heterogeneity. More specifically, are there combinations of attribute levels that create a change in the structure of the utility function utilized by a specific consumer? While conjoint scholars have explored compensatory vs. noncompensatory models for a given choice-based conjoint task, work involving potential regime shifts by the consumer during the task would prove insightful (see [63]).

4.2 B2. Menu-Based Experimental Choice

In menu-based conjoint analysis, customers are asked to pick several features from a menu of features or products that are individually priced. If the utility of a feature is above a certain threshold, it is chosen, and the utilities of all chosen features are maximized simultaneously, resulting in multiple chosen alternatives [112]. The responses therefore entail a binary vector of choices for each respondent for each of the menu scenarios in the experiment. This is quite akin to choosing a bundle of items [31] from a larger set or designing a product using a product configurator, as when buying, for instance, a Dell laptop. Configurators represent a promising form of conjoint data collection in which the respondent self-designs the best product configuration [112]. Recently, Levav et al. [110] argued that in a mass customization decision (such as using a configurator), consumers can often lose their self-control in assessing utility correctly in repeated choice situations due to bounded rationality and the depletion of their mental resources [181]. Dellaert and Stremersch [40], borrowing from choice theory and task complexity theory, also demonstrated that consumers' product utility had a positive effect on mass customization utility while task complexity had a negative effect, albeit a smaller one for experts.

In addition to the many menu choices that it generates, menu-based choice represents a modeling challenge that is distinct from the traditional single-choice analysis of data from choice-based conjoint experiments, e.g., using multinomial logit or multinomial probit models. The Bayesian modeling approach in this context, entailing a constrained random-effects multinomial probit model [112], incorporates constraints on menu choices (e.g., firm-level design or production constraints) as well as heterogeneity in customers' price sensitivities and preferences for the variety of customized options a firm can offer. In this multiple-choice modeling scenario, researchers can assess the intrinsic worth of each feature and each individual's price sensitivities, and model the correlations among features. Web-based menus would allow firms to offer mass-customized services to every potential customer visiting their web site.
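A toy sketch of the menu-choice data-generating idea described above is given below (Python, with hypothetical worths, prices, and noise settings). It is not the constrained random-effects multinomial probit of [112]; it only shows how a binary pick/no-pick vector arises when each feature's net utility clears a threshold.

```python
import numpy as np

def simulate_menu_choices(worth, prices, price_sensitivity, threshold=0.0, noise_sd=0.3, rng=None):
    """Toy menu-choice generator: a feature is picked when its net utility
    (intrinsic worth minus price disutility plus noise) clears a threshold.
    Returns a binary pick/no-pick vector, one entry per menu item."""
    rng = rng or np.random.default_rng(1)
    net_utility = worth - price_sensitivity * prices + rng.normal(0.0, noise_sd, size=len(worth))
    return (net_utility > threshold).astype(int)

worth = np.array([1.2, 0.4, 0.9, 0.1])     # hypothetical intrinsic worths of four priced features
prices = np.array([0.5, 0.3, 0.8, 0.2])
print(simulate_menu_choices(worth, prices, price_sensitivity=1.0))   # binary vector of picks
```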

4.2.1 Suggested Directions for Future Research

1. Given the ability of menu-based conjoint to provide individual-level information and the growing reality of web-based mass customization, we encourage researchers to further study customer heterogeneity in demand and new channels of information exchange to maximize customer value.

2. Conjoint scholars can add to our understanding of mass-customized choice processes by explicating the individual traits, task factors, and decision strategies that influence customization complexity. To further refine the model, future conjoint scholars can incorporate a more general distance model that explicitly accounts for the relative differences between attribute levels, unlike the 0–1 pattern-matching model (see [20]). This can be accomplished by combining conjoint analysis and MDS to impute missing attribute levels. When the number of attributes is large, a mapping between attributes and some higher-order dimensions (i.e., conjoint utility functions) can be developed a la MDS methods. Methods of reverse mapping can then yield part-worth values for the original attributes. However, this approach still needs to be developed and validated.

3. One other promising line of research here would be to study whether consumers enjoy mass customizing a product or service, and at what levels of complexity they will make suboptimal choices. It is possible that consumers overspend their mental capacity early in the configuration sequence, triggering a tendency to accept the default alternative in subsequent decisions, even when such decisions involve few options that would require less capacity to evaluate. A related issue in need of further investigation is minimizing the dysfunctional consequences of information overload in conjoint studies.

4.3 B3. Maximum Difference Scaling—Best/Worst Conjoint

Based on a multinomial extension of Thurstone's model for paired comparisons, Finn and Louviere [62] developed a univariate scaling model (MaxDiff) that can be utilized to measure brand-by-attribute positions, develop univariate scales from multiple measures, etc. Swait et al. [166] describe how to generalize or extend MaxDiff to conjoint applications, which they call Best/Worst (B/W) conjoint analysis. In the B/W method, respondents choose the two attribute levels that are, respectively, “best” and “worst” for each product profile. With such data, the method enables attribute impact to be estimated separately from the level part-worths, an important advantage over traditional additive conjoint and choice models that do not allow such separation [166]. B/W experiments have also been found to contain less respondent error than choice-based conjoint models containing the same attributes and levels [166]. Other advantages include allowing for ties in evaluations (unlike ranking tasks) and providing a more discriminating measure of attribute importance, with greater predictive validity, than either rating scales or the method of paired comparisons. B/W measurements are scale-free and thus ideal for comparison across cultural groups that use scales quite differently [33], without any need to make prior assumptions regarding the scaling of evaluation and choice. Consequently, maximum difference scaling has been used extensively in Best/Worst conjoint analysis. However, some limitations remain, including difficulties in evaluating both positive and negative attributes, the effects of having only best or only worst features versus both, collinearity, and sequence effects, among others. For example, MaxDiff results are shown to be less accurate at the “best” end, but augmentation (e.g., Q Sort) improves MaxDiff results on “best” items [53].
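For readers unfamiliar with MaxDiff likelihoods, the following Python fragment sketches one commonly used sequential formulation (hypothetical utilities; this is an illustration, not the specific models of [121] or [166]): the “best” pick is treated as a logit choice over the item utilities, and the “worst” pick as a logit choice over the negated utilities of the remaining items.

```python
import numpy as np

def maxdiff_pair_prob(utilities, best, worst):
    """Probability of a (best, worst) pair under a common sequential MaxDiff formulation:
    'best' is an MNL choice over the item utilities; 'worst' is an MNL choice over the
    negated utilities of the items that remain after the best pick."""
    u = np.asarray(utilities, dtype=float)
    p_best = np.exp(u[best]) / np.exp(u).sum()
    rest = np.delete(np.arange(len(u)), best)
    p_worst = np.exp(-u[worst]) / np.exp(-u[rest]).sum()
    return p_best * p_worst

u = [0.9, 0.1, -0.4, -1.2]                 # hypothetical utilities of four items shown together
print(maxdiff_pair_prob(u, best=0, worst=3))
```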

4.3.1 Suggested Directions for Future Research

1. Best/Worst allows for ties in evaluations and for skewed preference functions, unlike ranking tasks. Whether or not B/W and choice-based conjoint produce equivalent part-worth utilities, after adjusting for the difference in respondent error, is currently unknown, as the results have been mixed [166]. More research is needed to further validate the B/W method.

2. More recently, Marley and Louviere [121] have developed several different probabilistic B/W choice models: the Consistent Random Utility B/W choice model, the MaxDiff model, the biased MaxDiff model, and the concordant B/W choice model (see also [122]). However, questions remain about whether the B/W method can be used in accordance with random utility theory. A related question is whether the judgments respondents make in a B/W task could be used as though they had been made in an alternative-based choice, ranking, or rating task using compensatory rules.

4.4 B4. Developments in Experimental Design

Rating-based methods in marketing conjoint studies have frequently utilized resolution III designs (or orthogonal arrays), which confound some main effects with two-factor interactions and therefore assume those interactions are negligible. In general, orthogonal designs for linear models are efficient as measured by A-, D-, and G-efficiency computed from the eigenvalues of the \((X'X)^{-1}\) matrix (recently, Toubia and Hauser [169] proposed the criterion of managerial efficiency, M-efficiency, as well). Kuhfeld [106] showed that the OPTEX procedure produces more efficient designs; however, it fails to achieve the perfect level balance or the proportionality criteria of orthogonal arrays. In the case of choice-based conjoint methods, Huber and Zwerina [85] show that achieving utility balance increases efficiency. Building on their work, Sandor and Wedel [151] develop Bayesian efficient designs (through relabeling, swapping, and cycling) that minimize the standard errors and yield higher predictive validity. Subsequently, Sandor and Wedel [152] develop efficient designs that are optimal for mixed-logit models by evaluating the dimension-scaled determinant of the information matrix of the mixed multinomial logit model. Because the choice-based conjoint model is nonlinear, both minimal overlap and utility balance in the choice set are desirable. Rose et al. [146] extend the Sandor and Wedel work to statistical S-efficiency, which optimizes Bayesian designs for a given sample size based on parameter values, random-parameters logit mixing distributions, and model specifications [146, 99]. However, the trade-off is that choice task difficulty is typically accompanied by greater measurement (response) error, and thus lower response R-efficiency.
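As an illustration of the efficiency measures mentioned above, the following Python sketch computes a per-run D-efficiency from the information matrix \(X'X\) of a linear-model design (the particular design, coding, and normalization are illustrative assumptions, not a prescription): an orthogonal, balanced two-level design scores 100, and an arbitrary subset of its runs scores lower.

```python
import numpy as np

def d_efficiency(X):
    """Per-run D-efficiency of a design matrix X for a linear model:
    100 * |X'X|^(1/p) / n, with p parameters and n runs (100 = orthogonal, balanced design)."""
    n, p = X.shape
    return 100.0 * np.linalg.det(X.T @ X) ** (1.0 / p) / n

# 2^3 full factorial in -1/+1 coding (intercept plus three two-level attributes)
full_factorial = np.array([[1, a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)], dtype=float)
print(d_efficiency(full_factorial))       # 100.0: orthogonal array
print(d_efficiency(full_factorial[:6]))   # an arbitrary 6-run subset is less efficient
```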

Despite these developments, some limitations remained, such as the need to obtain repeated observations from each respondent, the use of an aggregate-customization design that was optimal for the average respondent only, and the challenge of computing the ordinary Fisher information matrix. This was later partly addressed by Sandor and Wedel [153], who used a small set of different designs for different consumers to capture respondent heterogeneity. Recently, Yu et al. [191], using the generalized Fisher information matrix, proposed an individually adapted sequential Bayesian approach that generates a conjoint-choice design tailor-made for each respondent. The method is superior both in the estimation of individual-level part-worths (and population-level estimates) and in choice prediction compared to benchmarks such as aggregate-customization and orthogonal design approaches. Further, this method is less sensitive to low response accuracy than the polyhedral method proposed by Toubia et al. [171] and their subsequent adapted method [172]. New developments are also emerging in the area of choice set designs for forced choice experiments. For example, Burgess and Street [21] developed procedures to construct near-optimal designs to estimate main effects and two-factor interactions with a smaller number of choice sets, and they derive the relevant mathematical theory for such designs; see [21, 163, 161, 162] for detailed descriptions.

4.4.1 Suggested Directions for Future Research

1. Newer methods of adaptive questioning based on active machine learning are proving more successful than market-based, random, and orthogonal-design questions when consumers use noncompensatory heuristics; see Abernethy et al. [1] and Dzyabura and Hauser [54]. We encourage more research along this direction.

2. The trade-off between S- and R-efficiency is an interesting issue to resolve going forward. While greater S-efficiency yields smaller variance, increasing R-efficiency by reducing task complexity with attribute overlap reduces S-efficiency. The evidence so far is inconclusive, and more research is needed on whether efficient experimental designs contribute more to the precision of choice model estimates in light of task complexity (see [99]).

4.5 B5. Handling a Large Number of Attributes

A comprehensive review of various methods for dealing with large numbers of attributes is available in Rao et al. [140]. Several scholars are currently working on the issue of handling large numbers of attributes [35, 128]. For instance, Dahan [35] simplified the conjoint task (using the Conjoint Adaptive Ranking Database System) by asking respondents to choose only among a very limited number of sets that are perfectly mapped to specific utility functions proposed in advance by the researcher. Park et al. [134] proposed a new incentive-aligned, web-based upgrading method for eliciting attribute preferences for complex products (e.g., cameras); this method enables participants to upgrade one attribute level at a time from a large number of attributes, allowing for dynamic customization of the product. Their empirical application shows that the upgrading method is comparable to the benchmarked self-explicated approach, takes less time, and has higher external validity.

Recently, Netzer and Srinivasan [128] proposed a web-based adaptive self-explicated (ASE) approach to address the difficulty of the self-explicated constant-sum question when the number of product attributes becomes large. The ASE method breaks down the attribute importance question into a ranking of the attributes followed by a sequence of constant-sum paired comparison questions for two attributes at a time, thus replacing the importance measurement stage of the traditional self-explicated model. Attribute importances are then estimated using a log-linear regression model (with OLS estimation), which also yields standard errors. The ASE method significantly and substantially improved predictive validity compared to the self-explicated model, adaptive conjoint analysis, and the fast polyhedral method.
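The log-linear estimation step can be sketched in a few lines of Python on hypothetical constant-sum data (the pairs, point allocations, and attribute count below are invented for illustration; the actual ASE questionnaire logic is described in Netzer and Srinivasan [128]). Each paired comparison contributes one observation of the log ratio of two attributes' importances, and OLS on the stacked differences recovers the importances up to normalization.

```python
import numpy as np

# Hypothetical constant-sum paired comparisons: (attribute j, attribute k, points given to j out of 100)
pairs = [(0, 1, 70), (1, 2, 60), (0, 2, 80), (2, 3, 55), (1, 3, 65)]
n_attr = 4

# Each comparison contributes log(c_j / c_k) = log(w_j) - log(w_k) + error, so stack a
# difference-design matrix and estimate the log-importances by OLS (attribute 0 is the base).
rows, y = [], []
for j, k, points_j in pairs:
    x = np.zeros(n_attr)
    x[j], x[k] = 1.0, -1.0
    rows.append(x)
    y.append(np.log(points_j / (100.0 - points_j)))
X = np.array(rows)[:, 1:]                      # drop the base attribute's column
log_w, *_ = np.linalg.lstsq(X, np.array(y), rcond=None)
w = np.exp(np.concatenate(([0.0], log_w)))
print(w / w.sum())                             # normalized attribute importances
```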

As with the large number of attributes, researchers should also consider the number-of-levels effect: as the number of intervening attribute levels increases, the derived importance of an attribute also increases. Prior studies have linked this phenomenon to data collection methodology, the measurement scale of the dependent variable, and parameter estimation procedures [179], but results are somewhat inconclusive. More recently, De Wilde et al. [39] explain this phenomenon by focusing on selective attention, and argue that attentional contrast directs attention away from redundant attribute levels and toward novel attributes in sequential evaluation procedures (e.g., in traditional full-profile conjoint analysis and choice-based conjoint).

4.5.1 Suggested Directions for Future Research

1. The search for methods of coping with large numbers of attributes has been identified as one of the key areas for future research [18]. An approach that holds promise is to have subsamples of respondents provide data on a subset of attributes with some linkages among the sets, as in bridging conjoint analysis. Hierarchical Bayesian methods can then be applied to such data to estimate part-worths at the individual level. We encourage conjoint scholars to further advance this line of research.

2. Given scant research, there is a need for studies, using simulations as well as empirical data, to compare the relative efficacy of the different methods of handling large numbers of attributes. Future research should assess how measurement technique, attribute representation, and experimental design influence the relative novelty of an attribute's levels at the time of measurement. Further, conjoint scholars should engage in developing algorithms that are sensitive to level balance across attributes, especially for unbalanced designs.

5 (C) Respondent Issues for Data Collection

Over the years, conjoint research has focused either on preference ratings (or rankings) of a number (between a dozen and thirty) of carefully designed product profiles (a la ratings-based methods) or on stated choices for each of several choice sets of product profiles, including a no-choice option. When the number of attributes becomes large (e.g., over six), adaptive or partial-profile methods have been employed. These approaches have matured, and relatively few open research issues remain in this arena. We therefore focus on newer methods, namely incentive alignment and willingness to pay, barter conjoint and conjoint poker, meta-attributes and complexity of stimuli, and the role of the no-choice option, given their recent development and future research potential.

5.1 C1. Incentive Compatibility and Willingness to Pay

Ding et al. [48] found strong evidence in favor of incentive-aligned choice conjoint in out-of-sample predictions, along with a more realistic preference structure that exhibited higher price sensitivity, lower risk-seeking behavior, and lower susceptibility to socially desirable responding. This development has cast doubt on the assumption that purchase intent and choice are closely related in stated preference data. However, a real challenge for researchers is to implement incentive alignment for really new or complex products, when it is not cost effective to offer a real product to each participant or to generate all product variations.

Dzyabura and Hauser [54] addressed the cost issue by implementing an active machine-learning algorithm that approximates the posterior with a variational distribution and uses belief propagation to update the posterior. Questions are selected sequentially to minimize the expected posterior entropy by anticipating the potential responses, i.e., to consider or not to consider. Their study confirms that consumers use cognitively simple heuristics with relatively few aspects and that the adaptive questions search the space of decision rules efficiently. Ding [46] addressed the issue of “all product variations” by developing a truth-telling mechanism that incentivizes conjoint participants so that truthful responding becomes a Bayesian Nash equilibrium. The BDM procedure ensures that it is in the best interest of a participant to have his or her inferred willingness to pay equal his or her true willingness to pay.
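The incentive-alignment logic behind the BDM procedure mentioned above can be sketched as follows (Python; the price range and stated WTP are hypothetical). Because the transaction price is drawn independently of the respondent's statement, overstating or understating WTP can only lead to unwanted purchases or missed bargains, so truthful reporting is the optimal strategy.

```python
import numpy as np

def bdm_transaction(stated_wtp, price_low, price_high, rng=None):
    """One round of a BDM-style elicitation: a sale price is drawn at random, independently of
    the statement; the respondent buys at the drawn price only if stated WTP >= drawn price.
    Overstating WTP risks buying above one's true value; understating risks missing a bargain,
    so reporting the true WTP is optimal."""
    rng = rng or np.random.default_rng(42)
    price = rng.uniform(price_low, price_high)
    return {"drawn_price": round(float(price), 2), "buys": bool(stated_wtp >= price)}

print(bdm_transaction(stated_wtp=12.50, price_low=5.0, price_high=20.0))
```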

Conjoint methods are typically used for measuring willingness to pay (WTP). WTP becomes even more relevant in the context of incentive-aligned upgrading of attributes [134]. Wathieu and Bertini [183] used categorization theory to argue that a moderately incongruent price differential is more likely to induce deliberation when a new benefit is added or augmented beyond consumer expectations. Dong et al. [51] proposed a Rank Order mechanism that predicts preferences for a list of reward products, instead of an individual's monetary value for one product, and gives or sells the top-rated one to the respondent. They recommend the WTP mechanism when there is only one real product and its price can be estimated from the preference measurement task, and the Rank Order method when two or more real versions of the product are available, regardless of whether WTP can be estimated.

The contingent valuation method, typically used to determine the WTP for a nonmarket good, is subject to exaggeration bias, which stems from factors such as new-product enthusiasm, an attempt to influence the decision to market the product, or a tendency to be less sensitive to total costs [93, 180]. One approach is to calibrate the responses into quasi-real ones based on self-assessed certainty; however, the latter measure can itself be fraught with survey bias. A second approach transforms the hypothetical WTP into real WTP by assuming a functional relationship between the two. Park and MacLachlan [133] propose an exaggeration bias-corrected contingent valuation method in which the individual is assumed to compare the real WTP with an independent, randomly drawn spurious WTP and then take the larger of the two as his or her hypothetical WTP. The real WTP is assumed to be related only stochastically to the hypothetical WTP, rather than through a functional relation.

Voelckner [180] found significant and substantial differences between the WTPs reported by subjects when payment of the stated price was real versus hypothetical. The author compared hypothetical and real WTPs across and within four methods of measuring WTP (i.e., first-price sealed-bid auction, the Vickrey auction, contingent valuation, and conjoint analysis). There was evidence of an overbidding bias resulting from perceived competitive pressure, which produced higher WTPs in auctions than in methods based on stated preference data. Recently, Miller et al. [125] compared the performance of four approaches to measuring WTP, crossing direct versus indirect assessment with hypothetical versus actual WTP, against real purchase data. Their findings show that respondents are more price-sensitive in incentive-aligned settings than in nonincentive-aligned settings, closer to what is observed in a real purchase setting, and that incentive-aligned approaches are better suited to assessing WTP for product prototypes. Overall, recent developments in this domain have been very significant, with a promising future outlook.

5.1.1 Suggested Directions for Future Research

1. While the Rank Order method of incentive compatibility has proven very valuable in motivating truthful responses, there is still a need to sort out a host of issues such as desired versus undesired products to be included in the list, the incentive value of products, and whether the incentive list should be revealed before or after the conjoint task.

2. Given that WTP is a latent construct, research for its validation should be undertaken employing SEM methodology; for instance, an induced-value experiment that provides incentive-compatible estimates of WTP may come closest to capturing the true representation of WTP as a latent construct [134, 46].

3. Giving respondents time to think (TTT) in a contingent valuation study, by designing a quasi-experimental study that mimics realistic decision contexts, may alter the WTP. How do information and time affect responses to contingent valuation conjoint studies? This is an excellent opportunity for bridging research in consumer psychology, marketing science, and environmental and information economics [27].

4. While WTP research typically focuses on estimating marginal rates of substitution (i.e., WTP for marginal changes in product attributes), there is potential scope for data enrichment by combining stated preference and revealed preference; the former provides robust estimates of substitutability and the latter provides robust estimates for predicting uptake behavior (see [126] for associated statistical estimation methodologies).

5.2 C2. Barter Conjoint and Conjoint Poker

The barter conjoint approach collects a substantially larger amount of pairwise data (offers submitted or not, and the responses to offers received) without demanding much additional effort, while potentially improving the quality of the data by allowing information diffusion among participants during preference measurement. Ding et al. [49], using two studies and two holdout tasks, found that barter conjoint significantly outperformed both incentive-aligned and hypothetical CBC in out-of-sample prediction. Toubia et al. [173] recently developed and tested an incentive-compatible conjoint poker game and compared it with incentive-compatible choice-based conjoint using a series of experiments. Their findings indicate that conjoint poker induces respondents to consider more of the profile-related information presented to them (i.e., greater involvement and motivation) than choice-based conjoint does. Similar to the incentive-compatible mechanisms that add motivation for respondents [48], conjoint poker motivates respondents toward truth telling.

5.2.1 Suggested Directions for Future Research

1. Future research in these relatively new approaches could be developed in a number of different directions. For example, applications of barter and poker methods could be tested for products that are less desirable, allowing for increases or decreases in group assignments, and/or allowing for multiple trades.

2. One restriction is that barter requires synchronized implementation and simultaneous bartering, which makes online conversion somewhat cumbersome. Future barter research should examine newer procedures that do not tend to promote possible endowment and loss-aversion effects. Finally, the current estimation method does not model any dynamic effects in preference formation despite the various stages of the barter.

5.3 C3. Meta-Attributes and Complexity of Stimuli

Conjoint researchers need to recognize that consumers often think of products in terms of “meta-attributes”, including needs, motivations, and goals, which may correspond to bundles of attributes [130]. Research in judgment and decision-making has incorporated the role of multiple goals and how situational and task factors, including goal-framing effects [123], activate and chronically elevate their accessibility, which in turn determines decision rules, e.g., the deontological goal of “what is right”, the consequentialist goal of “what has the best outcomes”, versus the affective goal of “what feels right” [13]. Also, consequences associated with an attribute that is central in consumers' hierarchy of goals are likely to generate primary appraisals [118]. These meta-level preferences can impact decision-making, and they tend to be more stable than context-specific preferences. In short, customers think of products in terms of meta-attributes and hierarchies of goals, and attributes that serve a consequentialist goal are more likely to be accessible and appraised [118, 130].

In the context of complex stimuli, i.e., really new products, the role of uncertainty and consumer learning mechanisms through mental simulation and analogies is critical. Some advances have been made in this domain (see [73, 79]), but the results are still preliminary. In a related vein, there is also evidence of inconsistency between the importance of attributes as estimated in value-elicitation surveys (i.e., stated preferences) and those implied by actual choices (i.e., revealed preferences). Horsky et al. [82] empirically demonstrate that attributes may be differentially weighted in stated preference versus actual choice as a function of their tangibility, such that tangible and concrete attributes are weighted more heavily in choice since consumers are under pressure to justify their decisions. Going forward, we offer the following issues for future research.

5.3.1 Suggested Directions for Future Research

1. One big challenge is to conceptually map the relationship between physical (i.e., concrete) attributes and meta-attributes in a way that can be translated into product design specifications. Some concrete attributes may lose their meaning when interpreted at a higher level of abstraction and generality, thus undermining the validity of responses [31].

2. The other challenge is methodological, although some work in this domain has started using factor analysis, text mining, and tree-based methods (e.g., Classification and Regression Trees, Bayesian Tree Regression) as valuable tools in this respect [37, 66]. While factor analysis is feasible, it lacks the ability to create maps between physical attributes and meta-attributes. We encourage continued research in this area.

5.4 C4. The Role of the No Choice Option

Parker and Schrift [135] argued that the mere addition of a no-choice option to a set changes consumers' judgment criteria from comparative judgment (i.e., attribute-based processing) to evaluative judgment (i.e., alternative-based processing). Through a series of studies, the authors demonstrate that the mere addition of a no-choice option (i.e., a rejectable choice set) leads to alternative-based recall (encoding and retrieval) and information processing, greater weight being given to attributes that are enriched (more meaningful when evaluated alone) and to those that meet consumers' minimum needs, and ultimately a change in preference. The perceived difference between alternatives becomes increasingly smaller the further the attributes are from the consumers' thresholds. Consistent with the literature on context effects [15], this study confirms that consumers shift their preference structure, and ultimately choice shares, between a forced-choice context and a rejectable-choice context. It is conceivable that every decision a consumer makes has a no-choice option, and conjoint scholars should design studies that add the no-choice option when it is feasible and salient for consumers. Further, Botti et al. [17] suggest that nearly all choices consumers make are restricted or constrained in some manner.

5.4.1 Suggested Directions for Future Research

1. Potential distortions arising from variations in choice sets need to be examined by product/service class, type of experimental design, method of administration, etc., to fully understand the impact of the specific methodology selected to perform conjoint analysis.

2. A number of interesting subareas on the impact of choices made when a “no choice” option is included need further investigation. These include the frequency with which the “no choice” option is selected, the impact of “no choice” selection on estimated importance, and whether the choices are sequenced or staged (i.e., first consider, then decide to choose) [114].

6 (D) Researcher Issues for Data Analysis

Major developments in the estimation procedures relevant for the conjoint researcher include Hierarchical Bayesian, Latent Class, and Polyhedral Estimation approaches. Further, opportunities exist in integrating multiple sources of data to obtain robust conjoint results.

6.1 D1. The Hierarchical Bayesian (HB) Approach

The HB method of estimation is helpful in tackling the challenge of estimating accurate individual-level part-worths without imposing an excessive response burden on respondents. HB methods have been shown to improve on finite mixture-based individual-level estimates, which in turn tend to be more stable than estimates based purely on individual data [4]. Following earlier pioneering work (Footnote 11), Allenby et al. [5] utilized the Bayesian method and the Gibbs sampler to incorporate prior ordinal constraints on conjoint part-worths and found better internal cross-validation on their data; often, a logical or practical ordering of the attribute levels exists in the real world.
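A stripped-down illustration of the HB machinery is sketched below in Python: a two-level Gibbs sampler for ratings-based conjoint with a normal population distribution over individual part-worths. For brevity, the error variance and population covariance are held fixed and no ordinal constraints are imposed, whereas full HB implementations (e.g., Rossi et al. [147]; Allenby et al. [5]) sample those quantities and can enforce such constraints; all data in the usage lines are simulated.

```python
import numpy as np

def hb_gibbs(X, Y, n_iter=1000, sigma2=1.0, tau2=1.0, rng=None):
    """Stripped-down hierarchical Bayes for ratings-based conjoint.

    X: (n_resp, n_profiles, n_params) design matrices;  Y: (n_resp, n_profiles) ratings.
    Model: y_ij ~ N(x_ij' beta_i, sigma2) and beta_i ~ N(mu, tau2 * I).
    sigma2 and tau2 are held fixed here; full HB samplers also draw them."""
    rng = rng or np.random.default_rng(0)
    n_resp, _, p = X.shape
    betas, mu, draws = np.zeros((n_resp, p)), np.zeros(p), []
    for it in range(n_iter):
        # (1) individual part-worths given the population mean: conjugate normal update
        for i in range(n_resp):
            prec = X[i].T @ X[i] / sigma2 + np.eye(p) / tau2
            cov = np.linalg.inv(prec)
            mean = cov @ (X[i].T @ Y[i] / sigma2 + mu / tau2)
            betas[i] = rng.multivariate_normal(mean, cov)
        # (2) population mean given all individual part-worths (flat prior on mu)
        mu = rng.multivariate_normal(betas.mean(axis=0), np.eye(p) * tau2 / n_resp)
        if it >= n_iter // 2:                 # discard the first half as burn-in
            draws.append(betas.copy())
    return np.mean(draws, axis=0)             # posterior-mean part-worths, shrunk toward mu

# simulated check: 30 respondents, 12 profiles, 4 part-worths
rng = np.random.default_rng(1)
X = rng.choice([-1.0, 1.0], size=(30, 12, 4))
true_b = rng.normal([1.0, 0.5, -0.5, 0.2], 0.3, size=(30, 4))
Y = np.einsum("ijk,ik->ij", X, true_b) + rng.normal(0, 0.5, size=(30, 12))
print(hb_gibbs(X, Y, n_iter=500).round(2)[:3])   # estimates for the first three respondents
```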

Subsequently, Srinivasan and Park [160] proposed a new method to optimize the full-profile design for a large number of attributes and provided a heuristic procedure to weight together the part-worth estimates from the self-stated and full-profile data on a smaller number of core attributes. By differentiating between core and noncore attributes, they predicted preference for a new stimulus by using the optimal weight and conjoint part-worths for the core attributes and the self-explicated part-worths for the noncore attributes. Andrews et al. [8] showed that HB models performed well even when the part-worths came from a mixture of distributions and were robust to violations of the underlying assumptions. In almost all instances, the Bayesian method has been found to be comparable or even superior to traditional methods in both part-worth estimation and predictive validity. Sandor and Wedel [153] demonstrated that heterogeneous designs, which take into account the Bayesian design principles of prior uncertainty and respondent heterogeneity, showed substantial gains in efficiency compared with homogeneous designs. Heterogeneous designs consist of several subdesigns that are offered to different consumers and can be constructed with relative ease for a wide range of conjoint-choice models (Footnote 12).

Ter Hofstede et al. [167] proposed a general model (a finite mixture regression model) that includes the effects of discrete and continuous heterogeneity as well as self-stated and derived attribute importances in hybrid conjoint studies. As a departure from earlier studies, they treat self-stated importances as data rather than as prior information and include them in the formulation of the likelihood, which lets them investigate the relationship between self-stated and derived importance at the individual level. By contrast, treating the order constraints derived from self-stated importances as “hard” constraints ignores the relative distances between importances and the measurement error in the self-stated part-worths, so the stated order may differ stochastically from the “true” underlying order. Their study shows that including self-stated importances in the likelihood leads to much better predictions than treating them as prior information. An excellent resource on HB methods in marketing and conjoint analysis is Rossi et al. [147].

6.1.1 Suggested Directions for Future Research

1. It has not been conclusively demonstrated in what contexts consumer heterogeneity is better described by a continuous [4] or by a discrete distribution [44], pointing to a need for further research to resolve this issue (see also Ebbes et al. [55]). Still, we believe that the HB method is a preferred approach when a large number of part-worths need to be estimated, compared with more classical estimation methods that can use up a large number of degrees of freedom and whose likelihood functions may have multiple maxima [84, 138].

2. More research is required to examine the potential effects of distributional misspecification concerning the likelihood, prior, and hyperprior distributions in HB conjoint analyses (not just prior sensitivity).

6.2 D2. The Latent Class Approach

Market segmentation remains one of the most important uses of conjoint analysis, based on the estimated attribute part-worths [31, 105, 168, 186]. Historically, segments were developed in a rather disjointed two-step fashion (clustering after estimating individual-level conjoint part-worths). This created various problems: in highly fractionated designs, the estimated individual-level part-worths are often unstable and stochastic, and quite different loss functions are optimized across the two disjointed steps. In this light, research has been dedicated to performing these two steps simultaneously and more parsimoniously; an early example is the Q-factor analytic procedure that maximizes the predictive power of the derived segment-level utility functions. DeSarbo and colleagues provide alternative cluster-wise regression-based formulations for such benefit segmentation approaches utilizing conjoint analysis [42].

Following these deterministic cluster-wise approaches, a number of latent class or finite mixture-based solutions that simultaneously perform conjoint and market segmentation analysis have been developed. The advantages of these simultaneous procedures are that they employ stochastic frameworks involving mixtures of conditional distributions, which allow for heuristic tests of the optimal number of segments (via information heuristics such as AIC, BIC, CAIC, and ICOMP),Footnote 13 fuzzy posterior membership probabilities that permit fractional membership in more than one market segment, and computation of the standard errors of the estimated part-worths. Many such latent class conjoint procedures also allow for heteroscedasticity among groups of consumers as well as for variation within these groups’ responses. Interested readers are referred to several early articles by DeSarbo and colleagues (cited in DeSarbo and DeSarbo [42]). In the last decade, these authors have developed a host of latent class models that can be applied to conjoint analysis, addressing the issue of segment identification. Chung and Rao [31] develop a comparability-based balance (COBA) model that accommodates bundle choices with any degree of heterogeneity among components (products) and incorporates consumer preference heterogeneity that can be used for segmentation and optimal bundle pricing.
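To fix notation, a generic S-segment finite mixture conjoint model (our notation) has a likelihood, posterior membership probabilities, and an information heuristic of the form

$$
L(\Theta) = \prod_{i=1}^{N} \sum_{s=1}^{S} \pi_s\, f_s(\mathbf{y}_i \mid \boldsymbol{\beta}_s), \qquad
\hat{P}(s \mid \mathbf{y}_i) = \frac{\pi_s\, f_s(\mathbf{y}_i \mid \boldsymbol{\beta}_s)}{\sum_{s'} \pi_{s'}\, f_{s'}(\mathbf{y}_i \mid \boldsymbol{\beta}_{s'})}, \qquad
\mathrm{BIC} = -2 \ln L(\hat{\Theta}) + p \ln N,
$$

where the $\pi_s$ are mixing proportions, the $\boldsymbol{\beta}_s$ are segment-level part-worths, and $p$ is the number of free parameters. The number of segments is typically chosen by minimizing such a heuristic, and the posterior probabilities $\hat{P}(s \mid \mathbf{y}_i)$ provide the fuzzy memberships noted above.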

Much of the early literature modeled heterogeneity through individual-level traditional conjoint analysis, and both Bayesian and latent class conjoint analysis initially focused on metric data. More recently, effort has been devoted to conjoint-choice experiments, motivated by the fact that conventional rating-based (metric) conjoint analysis depends on a rating task that does not link directly to any behavioral theory. We feel that employing actual choices between alternatives is more realistic than the conventional approach of using rather artificial rankings and ratings. As such, we applaud the development of such latent class conjoint procedures for the analysis of choice data.

6.2.1 Suggested Directions for Future Research

1. Latent class models typically assume that each respondent belongs to one and only one underlying segment, allowing for the calculation of posterior membership probabilities. By definition, these posterior probabilities sum to one for each respondent, yielding a convex combination of segment memberships. Individual-level predictions obtained from such finite mixture-based models can nevertheless be rather poor, depending on the degree of separation of the centroids of the conditional segment-level distributions and on the within-segment variation, which limits the range of the predictions. We encourage the development of new methods for improved prediction.

2. Segment identifiability remains a problem with such latent class segmentation procedures in conjoint analysis, since individual differences in the estimated individual-level parameters are rarely well predicted by demographics, psychographics, etc. The same problem applies to the estimated segment-level parameters. Even with explicit reparameterization of the mixing proportions via the concomitant-variable approach, it is uncommon to be able to shed sufficient light on the derived market segments vis-à-vis traditional individual-difference measurements. We encourage the development of new methods for improving segment identifiability.

3. Using the ideas of hidden Markov models [129, 65, 144], additional research is required to investigate the dynamic nature of such derived market segments, including switching of segment memberships over time, the evolution of different market segments over contexts or consumption situations, and the time path of changing parameters.

6.3 D3. The Polyhedral Estimation Approach

Toubia et al. [171] proposed and tested a new “polyhedral” choice-based question-design method that adapts each respondent’s choice sets on the basis of that respondent’s previous answers.Footnote 14 The simulations conducted suggest that polyhedral question design does well in many domains, particularly those in which heterogeneity and part-worth magnitudes are relatively large. In particular, the polyhedral algorithms hold potential when profile comparisons are more accurate than self-explicated importance measures and when respondent fatigue is a concern due to a large number of features. For example, in product development scenarios, managers may want to learn the incremental utility of a large number of features, allowing them to screen several features quickly [138].

Toubia et al. [170] validated the polyhedral approach and found that it was superior to a fixed efficient design in both internal and external validity, and slightly better than the adaptive conjoint method. However, the polyhedral approach is highly sensitive to errors in the early choices. Despite the mixed results for polyhedral questions, especially when response error is high, Toubia et al. [172] subsequently proposed and tested a probabilistic polyhedral method that recasts the polyhedral heuristic in a Bayesian framework and includes prior information in a natural, conjugate manner. This method shows potential to improve accuracy in high response-error domains by minimizing the expected size of the polyhedron (i.e., choice balance) and by minimizing the maximum uncertainty on any combination of part-worths (i.e., post-choice symmetry). Evgeniou et al. [58] introduce methods from statistical learning theory to conjoint analysis that compare favorably to the polyhedral heuristic.

While Toubia et al. [172] demonstrated improved accuracy with the probabilistic polyhedral method, analytic-center estimation does not yet perform as well as the HB method. Abernethy et al. [1], using complexity-control machine learning, demonstrate robustness to the response errors inherent in adaptive choice and outperform the polyhedral estimation proposed by Toubia et al. [170]. More recently, Dzyabura and Hauser [54] developed and tested an active machine-learning algorithm that identifies noncompensatory heuristic decision rules based on prior beliefs and a respondent’s answers to previous questions. Research that frames the fast polyhedral method within an HB specification (GENPACE) has been shown to outperform FastPACE under certain conditions [177].
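The geometric idea behind these methods can be illustrated with a deliberately simplified sketch (this is not the analytic-center machinery of the cited papers): each observed choice adds a linear inequality on the part-worth vector, and a central point of the remaining feasible region serves as the estimate. Here the center is crudely approximated by averaging feasible draws, and the profile difference vectors are hypothetical.

```python
import numpy as np

# Hypothetical illustration with 4 part-worths: each observed choice of profile a
# over profile b implies the inequality (x_a - x_b) @ beta >= 0, so the set of
# part-worth vectors consistent with all answers forms a polyhedral cone.
rng = np.random.default_rng(0)

A = np.array([                      # rows are (x_chosen - x_rejected) differences
    [1.0, -1.0, 0.0, 0.0],
    [0.0, 1.0, -1.0, 0.0],
    [0.5, 0.0, 0.0, -1.0],
])

# Crude Monte Carlo stand-in for a polyhedral "center": draw candidate part-worth
# vectors on the unit sphere, keep those consistent with every choice, and average.
candidates = rng.normal(size=(200_000, 4))
candidates /= np.linalg.norm(candidates, axis=1, keepdims=True)
mask = np.all(candidates @ A.T >= 0.0, axis=1)

beta_hat = candidates[mask].mean(axis=0)
print("share of draws consistent with all choices:", round(float(mask.mean()), 4))
print("approximate central part-worth vector:", beta_hat.round(3))
```

In the adaptive versions, the next question is chosen to cut the remaining feasible region as evenly as possible (choice balance), and the probabilistic variants relax the hard inequalities to accommodate response error.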

6.3.1 Suggested Directions for Future Research

1. We suggest that future conjoint scholars working with the polyhedral algorithm combine self-explicated data with stated choice data to improve estimation, as shown by Toubia et al. [171] and Ter Hofstede et al. [167] for traditional conjoint analysis. Such self-explicated data can help constrain the rank order of part-worths and thereby shrink the polyhedral confidence region for the estimated part-worths.

2. Combining analytic-center (AC) estimation with Bayesian methods may broaden the scope and applicability of the polyhedral algorithm when respondent heterogeneity and response accuracy in stated choice are both low. The polyhedral ellipsoid algorithm could also be broadened to newer domains of application, including situations that lack the nondominance, choice balance, and symmetry presupposed by the current algorithm.

6.4 D4. Integrating Multiple Sources of Data

Based on existing research, conjoint analysis could also benefit substantially from combining multiple sources of data. Traditionally, preference measurement studies have relied on data provided explicitly by consumers during the preference measurement task. Both stated and revealed preference data provide information on the utility of offerings, and thus one source of data can be integrated as a covariate in a model of the other [82]. Further, Allenby et al. [6] recommend that information across datasets be combined by forming a joint likelihood function with common parameters, which results in greater precision. For example, stated preference data may require corrections for various response biases, while revealed preference data may require controls for contextual effects.
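In its simplest form (our notation), such data fusion keeps the part-worths common across sources while letting each source carry its own nuisance parameters:

$$
L(\boldsymbol{\beta}, \boldsymbol{\tau}_S, \boldsymbol{\tau}_R)
= L_{\mathrm{stated}}(\boldsymbol{\beta}, \boldsymbol{\tau}_S \mid \mathbf{y}_S)
\times L_{\mathrm{revealed}}(\boldsymbol{\beta}, \boldsymbol{\tau}_R \mid \mathbf{y}_R),
$$

where $\boldsymbol{\tau}_S$ could absorb response-bias corrections in the stated-preference task and $\boldsymbol{\tau}_R$ contextual controls in the revealed-preference data, so that both sources inform the common part-worths $\boldsymbol{\beta}$ with appropriate adjustments.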

An interesting development by Ashok et al. [11] is a structural equation modeling (SEM) approach that integrates softer variables (e.g., attitudes) into binary and multinomial choice models to explain choice decisions. They compare a limited-information model (without latent variables), in which factor scores for the exogenous latent variables enter the utility function as error-free variables, with a full-information model that retains the latent variables. In general, full-information estimation yields structural parameter estimates that are significantly more precise than those obtained with the two-stage limited-information approach, where latent constructs are treated as error-free rather than as random variables.

Furthermore, there is potential for combining stated preference data with auxiliary observational and process data. For instance, researchers could use qualitative and observational research techniques to capture response latencies, eye movements, and other psychophysiological patterns. Haaijer et al. [74] demonstrated that response time is related to preference and choice uncertainty, such that shorter response times represent more certain choices. In a recent study, Toubia et al. [173] conduct two eye-tracking studies (using a Tobii 2150 tracker) to compare incentive-compatible conjoint poker with incentive-compatible choice-based conjoint. The premise is that choice-based conjoint participants make choices based on a smaller subset of attributes, resulting in decreased visual attention to a large proportion of attributes and levels.

The different approaches to modeling consumer preference (e.g., compositional models, decompositional models, subjective expected utility models) rest on the assumptions of traditional utility theory and attribute-based processing. However, consumer researchers have for some time established the power of affect, feelings, and emotions in consumer judgment, preference, and choice [136]. Unfortunately, not much research has been done to integrate the traditional utility-based paradigm with such affective responses in conjoint experiments. The concept of “attribute prominence,” which combines attribute importance and emotionality, may capture choice better than purely cognition-based importance measures, as suggested earlier by Luce et al. [118].

6.4.1 Suggested Directions for Future Research

1. A promising area in need of more work is the marriage of discrete choice models with latent variables such as attitudes and perceptions. Following Ashok et al. [11], we encourage researchers to integrate into discrete choice models latent constructs such as attitudes, satisfaction, service quality perceptions, and other widely used marketing perceptual constructs. A related area is the marriage of scanner-panel data with multinomial choice models, where nonproduct attributes such as consumer attitudes and motivations, along with store-level data, may drive brand purchase in addition to product attributes [60, 165].

2. Additional research should aim at understanding the underlying mechanisms (rules and heuristics) that determine consumers’ decisions and at developing measures of the decision-process variables: decision problems, decision contexts, social situations, and individual factors.

3. We believe that integrating multiple sources of data in innovative ways can add to the reliability, validity, and generalizability of conjoint studies in the future. The integration of consumers’ qualitative responses and emotional reactions with stated preference data in forming preferences and choices is an important research avenue [43]. While aesthetic stimuli pose a special challenge for factorial designs, because such stimuli are essentially unitary or holistic and difficult to decompose, researchers are encouraged to work creatively in harnessing the benefits of such auxiliary data.

4. Conjoint analysis provides an exacting measurement of consumer preferences, but a firm must often design a product or set marketing variables in light of the actions and potential actions of its competitors. We are now beginning to see equilibrium (or nonequilibrium) models, which include the reactions of firms, competitors, and customers, coupled with conjoint analyses; one example is Kadiyali et al. [96]. More work needs to be done in this promising line of research.

7 (E) Managerial Issues Concerning Implementation

We now discuss selected implementation issues relevant for the manager, including product optimization, the market value of attribute improvement, optimal pricing, and product line decisions.

7.1 E1. Product Optimization

The primary goal of traditional conjoint analysis was to find a parsimonious way of estimating consumer utility functions and deriving attribute (level) importances. With these estimates in hand, one can design a maximum-utility product whose attribute levels correspond to the highest estimated part-worths. While the problem was first formulated as a zero-one integer programming model, a more general and thorough approach to product design optimization was developed by Green and colleagues with their Product Optimization and Selected Segment Evaluation (POSSE) procedures. Soon thereafter, efforts were directed at extending single-product design optimization heuristics to entire product lines, introducing two objective functions (the buyer’s welfare problem and the seller’s welfare problem). This marked a critical development in product optimization that triggered a flurry of subsequent research.
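As a toy illustration of the underlying optimization problem (not the POSSE procedures themselves), the sketch below enumerates candidate attribute-level combinations and picks the one maximizing predicted first-choice share against a fixed competing profile; the part-worths are simulated stand-ins for estimates from a conjoint study.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 3 attributes with 3 levels each; part-worths for 200
# respondents are simulated here in place of estimates from a conjoint study.
n_resp, levels = 200, [3, 3, 3]
partworths = [rng.normal(size=(n_resp, L)) for L in levels]
competitor = (0, 0, 0)                      # fixed competing profile (level indices)

def utilities(profile):
    """Total utility of a profile (tuple of level indices) for every respondent."""
    return sum(pw[:, lvl] for pw, lvl in zip(partworths, profile))

def first_choice_share(profile):
    """Share of respondents who strictly prefer `profile` to the competitor."""
    return float(np.mean(utilities(profile) > utilities(competitor)))

candidates = itertools.product(*[range(L) for L in levels])
best = max(candidates, key=first_choice_share)
print("best profile:", best, "predicted share:", round(first_choice_share(best), 3))
```

Exhaustive enumeration is feasible only for small designs; the zero-one programming formulations, heuristics, and evolutionary approaches discussed in this section exist precisely because the problem grows combinatorially with attributes, levels, and line length.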

Another major advance in this field was the recognition that consumers’ preference structures are dynamic rather than static (due to variety seeking, learning, and fatigue), which calls for models that can capture these dynamics and respondent heterogeneity (for a review, see Wittink and Keil [188]). More recent artificial intelligence and engineering optimization approaches to product line optimization using conjoint analysis include Belloni et al. [14], Wang et al. [182], Luo [119], and Michalek et al. [124]. Recently, Luo et al. [120] made progress by proposing a hierarchical Bayesian structural equation model that incorporates subjective characteristics along with objective attributes in new product design. Their results indicate that, by collecting additional information about consumers’ perceptions of the subjective characteristics, the proposed model provides the product designer with a better understanding and a more accurate prediction of consumers’ product preferences than traditional conjoint models. We encourage more research in this area, such as testing virtual-reality prototypes [36] instead of physical prototypes when attributes are numerous and physical prototyping is therefore expensive.

7.1.1 Suggested Directions for Future Research

1. A line of research with promising potential is improving preference measurement for really new products, as opposed to incrementally new products. In attempts to improve preference measurement by building consumer knowledge, more research is needed to fully understand the inferential techniques consumers use to reduce uncertainty (i.e., consumer-initiated analogy generation and marketer-supported analogies). More also needs to be done on how consumers think and learn about really new products at the pre-, during-, and post-adoption stages, and on how measurement techniques can be modified to maximize the predictive accuracy of preference measurement.

2. We believe that attribute-based conjoint models are potentially limited and that further investigation should proceed at least as far as customer-ready prototypes for a spectrum of design concepts. Prototypes are likely to provide more accurate information on customer reactions and costs, and on the attribute levels actually achieved (rather than expected) with particular designs. One possible direction for future extension is to combine this with related methods, such as neural network approaches and genetic algorithms, to improve prediction. See Chung and Rao [32] for the modeling of unobserved attributes in experiential products using a virtual-expert model.

7.2 E2. Market Value of Attribute Improvement

Predicting performance in the marketplace and gaining insight into the value of design features are important goals of market research. One question of managerial relevance is whether attribute improvement can be evaluated in cost-benefit terms. In other words, given that an attribute improvement (a positive change) typically comes with a price increase (a negative change), there is a trade-off in its impact on market share. Ofek and Srinivasan [131] show that the market value of an attribute improvement (MVAI) can be expressed as the ratio of the change in market share due to an improvement in the attribute to the decrease in market share due to an increase in price. The authors tested this approach using five portable camera mount products described on five attributes, each varied at three levels. They estimate the MVAI for each attribute and show that these estimates are less biased than the commonly used attribute valuations computed by averaging the ratio of attribute and price weights across individuals. They also demonstrate that the profitability of attribute improvements decreases when competitive reactions are factored in. A firm should undertake an attribute improvement only if its MVAI exceeds the cost of the improvement. To mimic real-world situations, MVAI can incorporate the choice set, competitive reactions, and respondent heterogeneity, and translate utilities into choice probabilities [138].
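In stylized form (our notation, abstracting from the authors’ aggregation across consumers), the MVAI of attribute $k$ is a marginal rate of substitution between that attribute and price at the market-share level:

$$
\mathrm{MVAI}_k = \frac{\partial m / \partial x_k}{-\,\partial m / \partial p},
$$

where $m$ is the product’s market share, $x_k$ the level of attribute $k$, and $p$ its price; the improvement is worthwhile only if $\mathrm{MVAI}_k$ exceeds its per-unit cost.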

7.2.1 Suggested Directions for Future Research

1. Future research should pay more attention to the dynamics of consumer choice and preference (both before product design and before product launch), which means that studies should extend over multiple periods and respondents should be able to upgrade [138, 100]. Research should also continue after product diffusion (i.e., multiperiod analysis), since attribute importances, and hence the market value of attributes, will change as consumers gain experience with the products.

2. Meta-analyses in this area would be particularly desirable. More specifically, published research tracking the monetary implications of implementing optimal conjoint designs in different commercial settings would greatly aid in advancing further applications of conjoint analysis.

7.3 E3. Optimal Pricing

Kohli and Mahajan [104] propose a model for determining the price that maximizes the profit of a product that has been screened on a share criterion. They do so by incorporating the effect of measurement and estimation error in demand estimates, which in turn affects the profit-maximizing price. They model heterogeneity in individual reservation prices by assuming that the variance of the distribution is constant while the mean is normally distributed across consumers. Jedidi and Zhang [90] further develop this method to allow for the effect of new product introduction on category-level demand, and Jedidi et al. [92] describe a method for estimating consumer reservation prices for product bundles. Chung and Rao [31] extend optimal pricing to bundle choice models, which employ the attribute-based products (i.e., components) of a bundle as the unit of analysis in estimating the bundle’s utility. Reservation prices for bundles are higher for attributes regarded as desirable or complementary.
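Returning to the Kohli and Mahajan [104] setup, a stylized reconstruction (our notation, not the authors’ exact formulation) runs as follows: if consumer $i$ buys whenever her reservation price $r_i = \mu_i + \varepsilon_i$ exceeds the price $p$, with $\mu_i \sim N(\mu, \tau^2)$ across consumers and $\varepsilon_i \sim N(0, \sigma^2)$, then expected profit over $N$ potential buyers at unit cost $c$ is

$$
\pi(p) = (p - c)\, N \left[ 1 - \Phi\!\left( \frac{p - \mu}{\sqrt{\sigma^{2} + \tau^{2}}} \right) \right],
$$

and the profit-maximizing price solves the corresponding first-order condition. Measurement and estimation error in the demand parameters ($\mu$, $\sigma$, $\tau$) therefore propagates directly into the chosen price.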

More recently, Iyengar et al. [87] describe a conjoint model for the multipart pricing of products and services. Given that for many product and service categories there is a two-way dependence between price and consumption (e.g., a fixed fee plus a usage-based fee), Iyengar et al. [87] incorporate the effects of consumption on consumer choice and the uncertainty of service usage by using a quadratic utility function. A benefit of their model is its ability to infer consumption at different prices from choice data, which can aid marketers in their market-share maximization objectives.

Ding et al. [50] show that consumers exhibit two behavioral regularities in how price enters their utility functions: they infer quality from a product’s price, and they hold a reference price for a given product. Consumer heterogeneity is captured through an individual-specific reference point and an individual-specific information coefficient. They demonstrate that the classic economic model, in which price serves an allocative purpose, is more relevant for inexperienced or uninvolved customers, whereas price serves maximally as an informational cue for quality when customers are most involved. This research is a pioneering step toward integrating behavioral regularities into classic utility models in pricing research. Kannan et al. [98], through an online choice experiment on digital versus print products, propose a model that accounts for customers’ perceptions of the substitutability or complementarity of content forms in developing pricing policies for digital products. Research on product line extensions has traditionally treated content forms as substitutes, although customers may perceive digital products as imperfect substitutes for, or even complements to, printed products. Bundling and pricing strategies are then derived by capturing customer heterogeneity in perceived substitutability and complementarity, with the model’s parameters estimated using a finite mixture (FM) approach.

7.3.1 Suggested Directions for Future Research

1. Along the lines of Iyengar et al. [87], future research can examine computationally efficient methods for the optimal selection of product features and prices. There is also potential for factoring the effects of competitive actions and reactions into multipart pricing.

2. Future researchers can look into additional behavioral regularities built into the utility model, such as a reflected shape of the value function around the reference point and the effect of dynamic competition. This would be a useful area for the application of game-theoretic models employing alternative strategies and competitive scenarios.

7.4 E4. Product Line Decisions

The optimal product line design problem belongs to the class of NP-hard combinatorial optimization problems. A number of optimization algorithms have been applied to such problems, including dynamic programming, beam search, genetic algorithms, and Lagrangian relaxation with branch and bound [12, 23]. More recently, alternative heuristics have been devised employing conjoint and choice models. Michalek et al. [124] presented a unified methodology for product line optimization that coordinates positioning and design models to achieve realizable firm-level optima. Their procedure incorporates a general Bayesian representation of consumer preference heterogeneity and manages attributes over a continuous domain to alleviate combinatorial complexity, using conjoint-based consumer choice data. Tsafarakis et al. [174] apply particle swarm optimization to the optimal product line design problem and use Monte Carlo simulation to compare its performance favorably with genetic algorithms. In addition, these authors use game-theoretic concepts to illustrate how the proposed algorithm can be extended to incorporate competitors’ retaliatory actions via Nash equilibria.
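As a minimal sketch of the heuristic approach (a generic genetic-style algorithm over share of choice, not any of the cited authors’ implementations), a candidate product line can be encoded as a matrix of attribute levels and evolved toward higher predicted first-choice share; the part-worths below are simulated stand-ins for conjoint estimates.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical problem: design a line of 2 products, each described by 4 attributes
# with 4 levels; simulated individual part-worths stand in for conjoint estimates.
n_resp, n_attr, n_lvl, line_size = 300, 4, 4, 2
partworths = rng.normal(size=(n_resp, n_attr, n_lvl))
status_quo = np.zeros(n_resp)              # utility of the outside/competing option

def line_share(line):
    """Predicted share of respondents whose best product in `line` beats the status quo."""
    utils = np.stack(
        [partworths[np.arange(n_resp)[:, None], np.arange(n_attr), prod].sum(axis=1)
         for prod in line],
        axis=1,
    )                                      # shape (n_resp, line_size)
    return float(np.mean(utils.max(axis=1) > status_quo))

def mutate(line, rate=0.2):
    """Randomly reassign some attribute levels of a candidate line."""
    new = line.copy()
    mask = rng.random(new.shape) < rate
    new[mask] = rng.integers(0, n_lvl, size=int(mask.sum()))
    return new

# Simple (mu + lambda)-style evolutionary loop: mutate, pool, keep the fittest lines.
population = [rng.integers(0, n_lvl, size=(line_size, n_attr)) for _ in range(30)]
for _ in range(50):
    offspring = [mutate(p) for p in population]
    population = sorted(population + offspring, key=line_share, reverse=True)[:30]

best = population[0]
print("best product line (rows = products, entries = attribute levels):\n", best)
print("predicted first-choice share:", round(line_share(best), 3))
```

Particle swarm optimization replaces the mutation-and-selection loop with velocity-based updates of a swarm of candidate lines, but the encoding and the share-of-choice objective remain essentially the same.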

7.4.1 Suggested Directions for Future Research

1. Future research in this area should extend such models beyond linear and continuous cost functions, accommodate mixed-level product attributes (discrete and continuous), handle category expansion and pioneering advantages, and allow for the enactment of various offensive and defensive strategies.

2. It would also be useful to extend such computer-science-based procedures to accommodate multiple optimization objectives in conjoint applications.

8 Conclusion

Conjoint analysis has evolved from a rigorous psychometric tradition, and a plethora of advances have been made. In this manuscript, we have attempted to integrate several substantive issues of interest in conjoint analysis within an organizing framework that spans the major stakeholders (i.e., the researcher, the respondent, and the manager). For each of the five categories in our framework, we summarize recent developments in the field, provide critical insights, and present suggested directions for future research. We hope that conjoint scholars will gainfully employ this organizing framework as a repository for drawing new insights and conducting future research. We believe that conjoint research continues to be vibrant and that the advances, developments, and directions discussed in this paper will contribute to realizing the tremendous potential of conjoint analysis.

In conclusion, our paper makes several contributions relative to the existing literature (including the recent book by Rao [139]). First, our review is organized around a framework based on the behavioral and theoretical processes underlying issues related to the researcher, the respondent, and the manager in conjoint analysis; we have expanded and updated the coverage of the behavioral and theoretical underpinnings (see Section A), which sets the tone for the rest of the review. Second, our framework allocates adequate attention to critical issues surrounding the three major stakeholders: the researcher, the respondent, and the manager. Third, we cite publications from major marketing and nonmarketing journals across disciplines. Fourth, our paper sets out a comprehensive research agenda, 55 research directions in total, which can be leveraged for the future development of conjoint analysis methodology. Finally, we believe that a review paper on conjoint analysis will draw wide readership and citations from scholars in the future, thereby enhancing the impact factor of this journal.