Conjoint analysis alternatives
Conjoint analysis is a widely established market research technique for understanding how people value the component elements that make up a product or service - the attributes and levels. However, in certain circumstances, for instance where there are lots of attributes to consider, or where bundles are being built, it may be better to consider alternatives such as MaxDiff, Configurators, Simalto or a range of other more bespoke designs and choice tasks.
Note that we would consider options such as Discrete Choice Modelling (DCM), stated preference research and elements like shop-display tests for pricing to be 'flavours' of conjoint analysis rather than entirely distinct techniques.
MaxDiff (best-worst analysis)
The most common adjunct to conjoint analysis is the use of MaxDiff (also known as best-worst analysis). Respondents are asked to pick the best and worst items from a short list. They are then shown another list and the process is repeated. This enables the elements of the list to be allocated a utility score (and rank) in a similar way to conjoint analysis. One difference is that the items are not shown in combination as part of a product profile but are valued individually relative to each other, so the exercise is more like a plain ranking than a full product trade-off. MaxDiff is typically analysed using a hierarchical Bayes (HB) method, with the effect that the data is scaled like that of a conjoint study.
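Before fitting an HB model, a simple counting analysis gives a quick first read of MaxDiff data. A minimal sketch, using hypothetical items and responses, scores each item by best-minus-worst counts normalised by how often it was shown:

```python
from collections import Counter

# Hypothetical MaxDiff responses: each task shows a subset of items and
# records which item was picked as best and which as worst.
tasks = [
    {"shown": ["price", "battery", "camera", "screen"], "best": "battery", "worst": "screen"},
    {"shown": ["price", "camera", "weight", "screen"], "best": "price", "worst": "weight"},
    {"shown": ["battery", "camera", "weight", "price"], "best": "battery", "worst": "weight"},
]

best = Counter(t["best"] for t in tasks)
worst = Counter(t["worst"] for t in tasks)
shown = Counter(item for t in tasks for item in t["shown"])

# Best-minus-worst count, normalised by how often each item was shown.
scores = {item: (best[item] - worst[item]) / shown[item] for item in shown}
ranking = sorted(scores, key=scores.get, reverse=True)
```

Counts are only a rough proxy for HB utilities, but they expose the same best-to-worst ordering and are easy to sanity-check against the raw responses.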
Menu-based conjoint (MBC or configurator)
Menu-based conjoint comes in two forms. It is the formal name of an approach and software from Sawtooth, but it can also be used to describe more bespoke configurator-type approaches. Respondents are given the task of choosing or creating products via a configuration menu where each individual item might have its own price, or the respondent can choose from bundles or menus (a hamburger meal is typically given as an example). In other examples respondents have to choose items to build up preferred products, similar to, say, Dell's PC configurator. Careful design and analysis allows both the valuation of and price sensitivity around individual items to be assessed.
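The pricing mechanics behind a configurator menu can be sketched simply. The example below is hypothetical (items, prices and the bundle are illustrative, not from any particular study) and echoes the hamburger-meal case: items are priced individually, with a fixed-price bundle applied when the selection covers it.

```python
# Hypothetical menu: items priced individually, plus a fixed-price "meal" bundle.
item_prices = {"burger": 5.00, "fries": 2.50, "drink": 1.50}
meal_bundle = {"items": {"burger", "fries", "drink"}, "price": 7.00}

def price_selection(selection):
    """Price a configured selection, applying the bundle price whenever
    the selection covers the whole bundle (extras stay a la carte)."""
    selection = set(selection)
    if meal_bundle["items"] <= selection:
        extras = selection - meal_bundle["items"]
        return meal_bundle["price"] + sum(item_prices[i] for i in extras)
    return sum(item_prices[i] for i in selection)
```

Logging which selections respondents make at which price points is the raw material from which item-level valuations and price sensitivity are later estimated.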
Simalto (simultaneous multi-attribute level trade-offs)
Simalto is originally a paper-based method of getting individuals to make trade-offs on a trade-off grid where there are lots of attributes and levels, such as in a service review. A trade-off grid lays out the attributes and levels on a single grid, and respondents are asked to use the grid to rate performance and to indicate which areas are priorities for change. Being detailed, the process of completing a Simalto service grid often generates a wealth of additional comment and detail, making Simalto an effective prompting tool for depth interviews. Simalto can also be combined with point-allocation tasks to indicate the value of improvements. Where these points are allocated dynamically by the computer, the result can be similar to the configurator model. These types of approach can be more rewarding for the respondent as they are less repetitive and more information is given about the reasons why detailed decisions are being taken.
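A Simalto grid combined with point allocation can be represented very simply. The sketch below is hypothetical (the attributes, levels and point split are invented for illustration): each attribute has ordered service levels, the respondent marks the current and preferred levels, and a 100-point allocation indicates how much each improvement matters.

```python
# Hypothetical Simalto-style grid: each attribute has ordered service levels;
# the respondent marks the current level and the level they would prioritise.
grid = {
    "response time": {"levels": ["48h", "24h", "4h", "1h"], "current": 1, "preferred": 3},
    "opening hours": {"levels": ["9-5", "8-8", "24/7"], "current": 0, "preferred": 1},
}
points = {"response time": 70, "opening hours": 30}  # a 100-point allocation

# A crude priority index: points spread over the number of level steps requested.
priorities = {attr: points[attr] / (g["preferred"] - g["current"])
              for attr, g in grid.items()}
```

Dividing the points by the number of level steps requested gives a per-step value, which is the kind of figure a dynamic, computer-driven version would re-price on each round.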
Ranking
Ranking questions also force individuals to trade off between alternatives, and ranking is one method of collecting data for a simple full-profile conjoint analysis. Ranking is traditionally cumbersome with more than about 8 or 10 items, so tasks can be split or rotated to simplify the exercise. Online surveys allow ranking to be done more easily - eg via click-to-rank, drag-and-drop, or by asking respondents to make selections in roughly rank order and monitoring mouse clicks.
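When a long list is split into rotated subsets, the subset rankings still need to be combined into one overall ordering. One simple approach, sketched here with hypothetical items, normalises each rank position within its subset and averages across the subsets an item appeared in:

```python
from collections import defaultdict

# Hypothetical split-ranking data: each list is one rotated subset ranked
# best-first; normalised positions are averaged to score the full item set.
rankings = [
    ["a", "c", "b", "e"],
    ["d", "a", "e", "f"],
    ["c", "d", "b", "f"],
]

positions = defaultdict(list)
for r in rankings:
    for pos, item in enumerate(r):
        positions[item].append(1 - pos / (len(r) - 1))  # 1 = best, 0 = worst

scores = {item: sum(v) / len(v) for item, v in positions.items()}
```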
Advanced ranking (eg anchored ranks, forced difference ratings)
One problem with plain ranking is that the step-size between items is assumed equal. An alternative is an anchored ranking - that is, to take a ranking, anchor a point (eg the top is worth 100 points), and get the respondent to allocate points to the items ranked below, so indicating the relative step-size. Another version, made possible by sliders with long scales (eg to 100), is to disallow ratings at the same point - no two features can score the same. This forces small differences between ratings (eg 92 versus 93), which gives both a rating and a unique rank.
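The anchored version can be illustrated with a small sketch (the features and point allocations below are hypothetical): the top item is fixed at 100 and the gaps between successive items reveal the step sizes a plain rank hides.

```python
# Hypothetical anchored ranking: the top item is fixed at 100 points and the
# respondent allocates points to the rest, revealing uneven step sizes.
allocation = {"reliability": 100, "price": 85, "speed": 40, "design": 35}

ranked = sorted(allocation, key=allocation.get, reverse=True)
steps = {ranked[i + 1]: allocation[ranked[i]] - allocation[ranked[i + 1]]
         for i in range(len(ranked) - 1)}
# steps records the drop down to each item: a large drop marks a genuine
# gap that a plain 1-2-3-4 rank would miss.
```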
Dynamic budget allocation
In some ways conjoint provides too much information, in that its objective is to provide a valuation for all the levels in all of the attributes. In general we are most interested in the items that are most valuable. One way of narrowing the focus is to offer the levels at fixed 'point' values and give respondents a point budget, then get them to optimise the product within that budget. The point values are then adjusted and the respondent repeats the task, essentially discarding the items not chosen while focusing on the items that are of most value.
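One round of the budget task can be simulated with a sketch. The items, point costs and per-item values below are hypothetical, and the simulated respondent simply picks greedily by value per point until the budget runs out:

```python
# Hypothetical point-priced levels and a simulated respondent with known
# per-item values, picking greedily by value per point within the budget.
levels = {"leather seats": (30, 9), "sunroof": (20, 5),
          "sat nav": (25, 9), "alloy wheels": (15, 4)}  # item: (cost, value)
budget = 60

chosen, spent = [], 0
for item, (cost, value) in sorted(levels.items(),
                                  key=lambda kv: kv[1][1] / kv[1][0],
                                  reverse=True):
    if spent + cost <= budget:
        chosen.append(item)
        spent += cost
# Items left unchosen are the natural candidates to re-price or discard
# on the next round of the task.
```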
Search and filter tools
An extension to conjoint is to consider not a smaller set of items but a bigger one, and to provide search and filter tools to allow the respondent to find their optimum points. Combined with ranking and experimental design principles, this process can mirror classic online product aggregators, so it looks like a process respondents are familiar with while allowing them to explore the choice set in a broader and more natural way. The use of searches and filters is in itself a choice, and a richer, more natural interface makes it possible to mimic typical online search-and-choose behaviour.
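Because each filter a respondent applies is itself a choice, logging the filters alongside the resulting shortlist captures both signals. A minimal sketch, with a hypothetical product set and filter labels:

```python
# Hypothetical product set behind an aggregator-style interface: each filter
# the respondent applies is logged, since the filtering is itself a choice.
products = [
    {"name": "A", "price": 299, "brand": "Acme", "rating": 4.1},
    {"name": "B", "price": 449, "brand": "Bolt", "rating": 4.7},
    {"name": "C", "price": 349, "brand": "Acme", "rating": 4.4},
]
filters_applied = []  # the filter log is revealed-preference data in itself

def apply_filter(items, label, predicate):
    filters_applied.append(label)
    return [p for p in items if predicate(p)]

shortlist = apply_filter(products, "price<=400", lambda p: p["price"] <= 400)
shortlist = apply_filter(shortlist, "rating>=4.2", lambda p: p["rating"] >= 4.2)
```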
For help and advice on carrying out conjoint research contact our research specialists - email firstname.lastname@example.org or call +44(0)20 7193 6440 or +1 713-983-8700.