History of conjoint analysis
The earliest forms of conjoint analysis can be traced back to the 1970s, having developed from the psychology of decision making and econometric choice theory.
Key developers have been Paul Green (marketing use of decompositional models), Jordan Louviere (choice-based conjoint) and Rich Johnson (Sawtooth Software and adaptive conjoint methods); more recently Sawtooth has pioneered a number of new approaches.
It is sometimes remarkable that there are still market research professionals who have no experience of trade-off techniques such as conjoint analysis, or who still rely on Likert scales and self-explication to try to predict consumer behaviour. The first academic papers describing conjoint appeared in 1971 (Paul Green), and by 1975 Harvard Business Review was describing the technique to the wider business audience. The advent of computer-based personal interviewing in the 1980s saw major strategy consultants such as McKinsey, Bain and PwC start to use conjoint, while from the economics field choice theory and the ability to estimate parameters using logit-based maximum likelihood estimation led to the development of choice-based methods.
Early years - full profile and part-worths
Conjoint analysis has its earliest roots in psychology and in testing the theory that when people make decisions, the result is the 'sum' of all the bits of value for each part of that decision. So when you buy a computer, there is a difference in value between a computer with a small screen and one with a large screen, for instance. That difference in value to the customer can be linked to this one feature, and by adding up the different values in a kind of 'configurator' style you can come to a conclusion about the overall value of the product or service.
This seemed sensible in theory, but testing it required a method for breaking a product down into constituent parts, building profiles from these parts, gathering preference data, and finally testing untried combinations to see whether customer preference was as expected. This is the root of conjoint design.
The first such tests used full-profile designs (the respondent saw profiles containing one level of each attribute), typically administered on cards using some form of ranking exercise. A major hurdle was reducing the number of profiles to a manageable number. This drew on work from statistics on experimental design: how do you minimise the number of experiments required to test a number of combinations of properties, for instance in drug development? The result was the use of 'fractional factorial orthogonal designs' to build the profiles to test (the same type of approach is used in statistical process control and in Taguchi methods in operations management). Since the objective was to validate the decision-making process, tests were carried out and analysed and then compared against 'hold-out' profiles: profiles that did not count towards statistical accuracy but were required to validate the model.
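The idea behind a fractional factorial design can be sketched in a few lines. Assuming a hypothetical study with three two-level attributes, a full factorial needs 2³ = 8 profiles; the half-fraction defined by the generator C = A xor B needs only 4 while keeping every attribute balanced and every pair of attributes orthogonal:

```python
# Minimal sketch of a fractional factorial design (attribute names are
# hypothetical). The half-fraction keeps each level appearing equally
# often, so main effects can still be estimated from only 4 profiles.
from itertools import product

attributes = ["screen", "brand", "price"]  # hypothetical two-level attributes

full = list(product([0, 1], repeat=3))            # all 8 profiles
half = [p for p in full if p[2] == p[0] ^ p[1]]   # 4-run half-fraction

print(len(full), len(half))  # 8 4
for profile in half:
    print(dict(zip(attributes, profile)))
```

Each attribute appears at each level exactly twice in the fraction, which is what makes the design 'orthogonal' for main effects; real studies with more attributes and levels use published orthogonal arrays rather than a hand-built generator like this.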
The initial method of analysis was ANOVA (analysis of variance), used to produce a statistical model of the preference drivers. Early studies evaluated attributes as continuous (e.g. price) or discrete (e.g. colour), but it soon became apparent that it was more effective to treat all variables as discrete, since even 'linear' attributes such as price were often non-linear in practice. The result of the analysis was a set of 'part-worths': the model betas (see our worked Excel demonstration to see how the calculations are done) that describe how much each variable contributes to the final model, a higher beta indicating greater importance. Since these part-worths have no units, and because they predict an abstract entity such as preference, the parameters can be scaled and adjusted without affecting the underlying model outcomes. This meant many of the strategy consultancies turned these part-worths into the more user-friendly 'utilities' and worked on the communication of the methods.
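The part-worth calculation itself is just dummy-coded regression. As a minimal sketch (the ratings and attributes here are hypothetical, not from any real study), take two two-level attributes and four full-profile ratings from one respondent; least squares recovers the betas directly:

```python
# Hypothetical part-worth estimation: dummy-code each discrete level
# and fit preference ratings by ordinary least squares. The fitted
# betas are the 'part-worths'.
import numpy as np

# Columns: intercept, screen=large, price=high
X = np.array([
    [1, 0, 0],   # small screen, low price
    [1, 1, 0],   # large screen, low price
    [1, 0, 1],   # small screen, high price
    [1, 1, 1],   # large screen, high price
], dtype=float)
ratings = np.array([5.0, 8.0, 3.0, 6.0])  # hypothetical ratings

betas, *_ = np.linalg.lstsq(X, ratings, rcond=None)
print(betas)  # [5. 3. -2.]: base level, +3 for large screen, -2 for high price
```

Because the scale is arbitrary, these betas could be shifted or rescaled into friendlier 'utilities' (for instance, zero-centred within each attribute) without changing any prediction the model makes.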
To demonstrate the potential power of conjoint analysis, a famous case study for Marriott Hotels was carried out by Green in the 1980s.
Hybrid designs and adaptive conjoint
One of the most fundamental problems in conjoint design is reducing the number of profiles that respondents need to evaluate. Richard Johnson developed a range of techniques, including a paper-based 'pairwise grid' approach, before going on to develop Sawtooth Software's Adaptive Conjoint Analysis (ACA): a computer-based approach that relied on initial self-explicated exercises in which respondents pre-rank and pre-value items before undertaking the preference task (see an example on SurveyGarden). These techniques are described as 'hybrid', containing as they do a combination of trade-off and non-trade-off elements. ACA was taken up by management consultancy firms as a very practical method for estimating and valuing customer demands, although academics tended to disfavour it because of the lack of a firm underlying theoretical model and the somewhat arbitrary nature of the design. That is not to say it didn't work, just that it was more a practical tool than a theoretical one. At the end of the 1990s ACA was the most common form of conjoint analysis in use, but the demand for shorter internet surveys and the arrival of techniques such as Hierarchical Bayes mean that choice-based conjoint is now much more common. Sawtooth's interest in adaptive methods has continued, leading to the creation of Adaptive CBC approaches.
Choice based designs
From the point of view of development, conjoint analysis had essentially grown up from custom, practice and practical experimentation. However, it lacked a firm theoretical basis, and it still used tasks that were research-like (ranking and rating) rather than reality-like. The next developments came from the econometric world, and in particular from researchers looking at 'revealed preference' in behavioural economics: that is, what individuals' choices say about the elements underpinning those choices. For instance, in choosing to take the train rather than the car, individuals reveal a preference. Looking across a market as a whole, it becomes possible to identify the underlying factors behind these revealed preferences in terms of probabilities. Doing so requires a model describing how people make choices and a method for properly analysing the behaviours in order to value the underlying preferences.
The first step was describing the choice model in terms of a utility function with appropriate error terms to account for unobserved factors. The second was developing a method of estimation (maximum likelihood estimation, MLE) that allowed the factors to be estimated from a solvable set of equations. Jordan Louviere then tied these together, applying the same types of techniques to market research data, or 'stated preferences'. As in other formats, profiles were generated for testing, but in this case respondents indicated preference not with an artificial ranking or rating but by choosing from a number of alternatives: a much more realistic task.
As choosing provides less information than either ranking or rating, this was combined with an approach that aggregated the analysis over the market as a whole, in a similar way to the method for estimating revealed preference by looking at market-level behaviour. Finally, the method of analysis was full MLE. These three elements gave this particular form of conjoint a stronger theoretical background, and since the 1990s more studies have been choice-based than adaptive. (Louviere disputes that his approach of discrete choice modelling is really a form of conjoint.)
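The utility-plus-error model and MLE step can be sketched concretely. Assuming the standard multinomial logit (utility is a linear function of attributes plus a Gumbel error term), the code below simulates hypothetical choice tasks and then recovers the betas by maximising the likelihood of the observed choices:

```python
# Sketch of choice-based estimation under a multinomial logit model.
# The data, attribute count and 'true' betas are all simulated
# assumptions for illustration only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# 500 choice tasks, 3 alternatives per task, 2 attributes per alternative
n_tasks, n_alts, n_attrs = 500, 3, 2
X = rng.normal(size=(n_tasks, n_alts, n_attrs))
true_beta = np.array([1.0, -2.0])  # hypothetical preference weights

# Simulate choices from the logit probabilities implied by true_beta
util = X @ true_beta
probs = np.exp(util) / np.exp(util).sum(axis=1, keepdims=True)
choices = np.array([rng.choice(n_alts, p=p) for p in probs])

def neg_log_likelihood(beta):
    u = X @ beta
    u -= u.max(axis=1, keepdims=True)  # numerical stability
    log_p = u - np.log(np.exp(u).sum(axis=1, keepdims=True))
    return -log_p[np.arange(n_tasks), choices].sum()

result = minimize(neg_log_likelihood, np.zeros(n_attrs), method="BFGS")
print(result.x)  # estimates close to the true betas [1.0, -2.0]
```

Note that the estimation here is aggregate, exactly as the text describes: one set of betas is fitted across all tasks, pooling everyone's choices into a single market-level model.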
However, not all problems were solved. A full-profile research technique still limits the number of attributes that can be used, and aggregating the analysis across the sample as a whole assumes market homogeneity, which many marketers would question. What this means is that an attribute such as 'welcomes children' for a restaurant can be extremely important to two groups: parents saying yes, singles saying no. In aggregate these cancel out and the attribute would appear unimportant, the average lying in the middle. For a restaurant chain, however, the real finding may be that it needs two types of restaurant. The ability to look at individual-level data can therefore be vitally important if you are looking to carry out segmentation research.
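The masking effect is simple arithmetic, shown here with made-up numbers: a strongly positive part-worth for one segment and a strongly negative one for another pool to an average near zero, making the attribute look irrelevant in aggregate.

```python
# Toy illustration (hypothetical numbers): aggregation masks
# heterogeneity in the 'welcomes children' part-worth.
parents = [+2.0] * 50   # parents value the attribute highly
singles = [-2.0] * 50   # singles actively dislike it

pooled = parents + singles
average = sum(pooled) / len(pooled)

print(average)                                            # 0.0 in aggregate
print(sum(parents) / len(parents), sum(singles) / len(singles))  # +2.0 -2.0
```

The segment means tell the real story; this is exactly why individual-level or subgroup analysis matters for segmentation work.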
An updated version of CBC is Adaptive Choice-Based Conjoint, also from Sawtooth (not to be confused with ACA). Using a longer interview, it allows respondents to give more detail about their preferences before undertaking a form of choice-based exercise. This eases the task for respondents and makes the interview feel more responsive to their answers.
The problem of obtaining individual-level utilities from choice-based data gave rise to a number of approaches to make choice-based data easier to analyse at a subgroup level and to use for segmentation, including latent-class segmentation based on the raw data. The breakthrough from Sawtooth, however, was the use of Hierarchical Bayesian analysis to impute individual-level data from aggregate choice-based information, producing, in theory at least, the best of both worlds: choice-based data with individual-level estimates.
This is not to say that this is the only approach and certainly there are still plenty of studies being carried out using full-profile and ACA.
Research into conjoint-type techniques continues to develop. The advent of online research means that computer-based techniques such as adaptive conjoint are much lower cost than when computer interviewing was carried out face-to-face. Consequently there is ongoing development of machine learning techniques, using elements such as genetic algorithms and evolutionary learning to 'search' the decision-making space with the consumer and so make judgements about how people would make decisions. In addition, the development of product configurators (such as those Dell uses to sell computers) means there are other ways of getting at choice data.
The same design principles can also be applied to live market situations to optimise adverts and landing pages, where real response data (Big Data) can be used to assess real choices and real decisions across different types of design.
There is still controversy at the academic level, since some very fundamental assumptions are being made about the types of models people use: we blithely treat everything as linear functions and often disregard non-linear factors such as diminishing returns. And although conjoint is a behaviour-based technique (results come from watching the choices made), it is less useful in markets with high emotional content, and behavioural economics shows that we need to test for factors such as anchoring, or even the way the choice is framed, which can affect outcomes.
To begin with, conjoint analysis was mainly the preserve of management consultancy firms and was used as the basis of fundamental strategic reviews of the business. The arrival of Sawtooth's easy-to-use software and the ease of online and computer-based interviewing mean that many larger companies have now had experience of conjoint analysis. Unfortunately this has also led to a number of bad experiences where conjoint has been sold or offered by agencies inexperienced in its use (e.g. not knowing how to present utilities, or how to create or use the modelling). As a result, opinions can be polarised, with both very strong advocates and strong detractors. The main contention remains the nature of the choice tasks, which can be seen as repetitive. There are ways around this, and other methods exist, but businesses with a poor experience often won't know there are better or newer solutions.
For help and advice on carrying out conjoint analysis research contact our experts on firstname.lastname@example.org or phone +44 (0)20 7193 6640 or +1 713-983-8700.