
MaxDiff Analysis

MaxDiff (also known as Maximum Difference Scaling or Best–Worst Scaling) is a statistical technique that creates a robust ranking of different items, such as product features. MaxDiff is an alternative to conjoint analysis in which the respondent indicates which feature is most important or desirable and which is least important or desirable. Conjointly’s novel robust approach to MaxDiff allows for:

  • Testing of multiple attributes in the same survey
  • Brand-Specific combinations of attributes for when each brand is substantially different (to enable that, first create a Brand-Specific Conjoint and then convert it into the MaxDiff variety)
  • Simulation of preference shares, at a highly indicative level

Traditionally, MaxDiff treats each product as an individual item, whilst conjoint treats products as a combination of attributes and levels. As such, the conjoint analysis produces rankings for particular products by summing the preference scores for each attribute level of a product, whilst MaxDiff produces rankings by polling the respondents directly.

Main outputs of MaxDiff Analysis

Relative importance of levels

Relative value by levels

How do customers rank potential phone colour options?

Each level of each attribute is scored for its performance in customers’ decision-making. In our example, navy is the most favourable colour and is displayed as positive. Yellow is the least preferred colour and therefore displayed as negative. It's important to remember that the performance score of each attribute is relative to the other levels shown to respondents. For instance, the colour red will only be shown as negative when compared against a specific set of colours (levels) — testing red against a different range of colours could yield a positive result.

Ranked list of product constructs


List all possible level combinations and rank them by customers' preferences.

Conjointly forms the complete list of product constructs using all possible combinations of levels. They are then ranked based on the relative performance of the levels combined. This module allows you to find the best product construct that your customers will prefer over others.
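The ranking described above can be sketched in a few lines: enumerate every combination of levels and sort the combinations by the sum of their level scores. The attributes, levels, and scores below are hypothetical, purely for illustration.

```python
# Sketch: rank all possible product constructs by summed level scores.
# Attribute names, levels, and scores are illustrative assumptions.
from itertools import product

level_scores = {
    "Colour": {"Navy": 0.8, "Red": -0.1, "Yellow": -0.7},
    "Size":   {"Small": -0.2, "Large": 0.3},
}

constructs = []
for combo in product(*level_scores.values()):
    # Total score of a construct = sum of its level scores
    score = sum(level_scores[attr][lvl]
                for attr, lvl in zip(level_scores, combo))
    constructs.append((combo, score))

# Highest total score first: the construct customers are predicted to prefer
constructs.sort(key=lambda c: c[1], reverse=True)
```

With these example scores, the Navy + Large construct ranks first and Yellow + Small last, mirroring how the module surfaces the best-performing combination.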

Market segmentation

Segmentation of the market

Find out how preferences differ between segments.

With Conjointly, you can split your reports into various segments using the information our system collects: respondents' answers to additional questions (for example, multiple-choice), simulation findings, or GET variables. For each segment, we provide the same detailed analytics as described above.

Analyse with TURF Simulator


Conduct TURF analysis on MaxDiff data using the TURF Analysis Simulator.

TURF analysis aims to find the combination of items that appeals to the largest proportion of consumers.

Preference share simulations


View simulations of preference shares for your product with the Preference Share Simulator.

With Conjointly, you can simulate shares of preference and volume projections for different product offerings, including those that are available in the market. Learn more about using the simulator for MaxDiff.


How it works

For each MaxDiff question, Conjointly asks respondents to select the best and worst options from a random selection of options. Each respondent is asked to complete 12-16 of these questions.
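The question flow above can be sketched as follows: each respondent receives 12–16 questions, each built from a random subset of the items under test. The item names, subset size, and question count are illustrative assumptions, not Conjointly's actual sampling design.

```python
# Sketch of a MaxDiff question flow: each question shows a random subset of
# items, from which the respondent picks the best and the worst.
# Items, subset size, and question count are illustrative assumptions.
import random

items = ["Cola", "Kiwi", "Lemon", "Orange", "Grape", "Cherry", "Peach"]

def build_questions(items, n_questions=13, per_question=4, seed=0):
    rng = random.Random(seed)
    # Each question is a random selection of distinct items
    return [rng.sample(items, per_question) for _ in range(n_questions)]

questions = build_questions(items)
```

A production design would also balance how often each item appears and which items appear together, rather than sampling purely at random.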

MaxDiff survey flow

The main output of the MaxDiff survey is a bar chart displaying the average preference scores, which represent the relative preference for each item. An alternative output is a bar chart showing the best and worst percentages and the net percentage. The best (or worst) percentage is the number of times an attribute level was selected as part of the best (or worst) option, divided by the number of times it was presented to respondents, expressed as a percentage. The net percentage is simply the best percentage minus the worst percentage, and is another way of measuring respondents’ preferences for the features.

For example, let’s say we performed a MaxDiff survey on soda flavours:

Cola was presented to respondents in 100 trials. It was selected as the best option 76 times and the worst option 6 times. Then to calculate the outputs:

  • Best per cent: 76 / 100 = 76%
  • Worst per cent: 6 / 100 = 6%
  • Net per cent: 76% - 6% = 70%

Now compared to another flavour, Kiwi, which has a best per cent of 11%, a worst per cent of 70%, and a net per cent of -59%, we can infer that respondents prefer Cola to Kiwi.
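The arithmetic in this example can be sketched directly, using the trial counts given above for Cola and Kiwi:

```python
# Best/worst/net percentages for MaxDiff items, using the counts from the
# soda-flavour example above.
counts = {
    "Cola": {"shown": 100, "best": 76, "worst": 6},
    "Kiwi": {"shown": 100, "best": 11, "worst": 70},
}

def maxdiff_percentages(c):
    best_pct = 100 * c["best"] / c["shown"]    # times chosen as best / times shown
    worst_pct = 100 * c["worst"] / c["shown"]  # times chosen as worst / times shown
    net_pct = best_pct - worst_pct             # net per cent
    return best_pct, worst_pct, net_pct

for item, c in counts.items():
    best, worst, net = maxdiff_percentages(c)
    print(f"{item}: best {best:.0f}%, worst {worst:.0f}%, net {net:+.0f}%")
```

Cola's net per cent of +70% against Kiwi's -59% reproduces the comparison above.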

With Conjointly, you can perform a MaxDiff survey on flavour, pack size, format, and any other attribute you may be looking to test in the same experiment. The preference scores and percentage chosen outputs are presented separately for each attribute.

Setting up on Conjointly

To set up a MaxDiff survey on Conjointly, you will need to prepare a list of attributes and levels you wish to test. Then, insert these attributes and levels in the experiment setup screen.

MaxDiff setup

Conjointly also allows you to present respondents with additional questions.

Only one MaxDiff block may be presented to respondents, but any number of additional questions may be added.

MaxDiff additional questions

Differences between MaxDiff and Conjoint Analysis

Both techniques present respondents with a set of options and ask them to choose; this approach is built on consumer trade-off decisions that realistically mimic the choices respondents would make in real life. However, there are some core differences in approach and usage between MaxDiff and conjoint experiments:

Respondent view

  • MaxDiff: Respondent view of a MaxDiff
  • Conjoint: Respondent view of a Generic Conjoint

When do we use it?

  • MaxDiff: to create a ranking of different alternatives, such as features of a product by importance, aspects of brands by customer satisfaction, flavours or variants of products by consumer preferences, or usage occasions by frequency.
  • Conjoint: for feature selection for new or revamped products, estimating marginal willingness to pay for specific features, testing branding, packaging, and advertising claims, or finding the optimal pricing of products while considering competitor offerings.

Similarities

  • Both techniques are advanced analytical tools constructed based on consumer trade-off decisions.
  • Both techniques result in interval-scaled utility scores, which can easily be transformed into ratio-scaled probabilities so that items can be ranked.
  • Both techniques are discrete-choice experiments.

Differences

  • MaxDiff: respondents are asked to choose both their favourite and least favourite alternatives; results are scored by directly polling the respondents.
  • Conjoint: respondents are prompted to make a single choice across a set of alternatives for their most preferred option; results are calculated by summing individual-level scores.

Example

  • MaxDiff: a car manufacturer wants to discover which car colour is the most preferred among consumers. MaxDiff analysis provides a robust ranking of the colours according to consumer preferences.
  • Conjoint: a car manufacturer wants to discover how much each attribute of a car contributes to a consumer's buying decision, and seeks the optimal combination of these components to increase its market share.

Using MaxDiff Analysis in conjunction with other research tools

TURF Analysis based on MaxDiff results

TURF Analysis is a natural extension of MaxDiff, as it allows you to identify which combination of attributes will “reach” the most consumers, where reach is defined as the percentage of respondents for whom at least one of the attributes in a particular combination is their most preferred.

When considering launching multiple products/features, the powerful TURF Analysis Simulator lets you use the results of your MaxDiff experiment to identify the combination of items that appeals to the largest proportion of consumers with a single click.
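The reach definition above can be sketched as a direct computation: count the respondents whose single most-preferred item appears in a candidate combination. The respondent data and item names below are illustrative assumptions.

```python
# Sketch of TURF "reach": the share of respondents for whom at least one item
# in a candidate combination is their most preferred.
# Respondents' top picks are illustrative assumptions.
from itertools import combinations

top_picks = ["Cola", "Cola", "Kiwi", "Lemon", "Cola", "Lemon"]

def reach(combo, top_picks):
    # Fraction of respondents "reached" by at least one item in the combo
    covered = sum(1 for pick in top_picks if pick in combo)
    return covered / len(top_picks)

# Pick the pair of items with the highest reach
best_pair = max(combinations(["Cola", "Kiwi", "Lemon"], 2),
                key=lambda c: reach(c, top_picks))
```

In this toy data, Cola plus Lemon reaches 5 of 6 respondents, so that pair wins even though Kiwi is some respondents' favourite.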

Preference Share Simulator based on MaxDiff results

The Preference Share Simulator allows you to simulate preference shares for different product offerings, including those that are available in the market.

Caution must be exercised when using preference share simulations with MaxDiff results: the simulator aims to predict which products people will choose, whereas MaxDiff scores also take into account the options they want the least. Therefore, the simulator may underestimate the preference shares of products that some people want the least.

MaxDiff Analysis and Van Westendorp

By combining both MaxDiff and Van Westendorp, you are able to create a Feature Placement Matrix that allows you to classify features by both importance and willingness to pay.

Brand-Specific MaxDiff Analysis

To create brand-specific combinations of attributes, first create a Brand-Specific Conjoint and assign each attribute being tested to the appropriate brand. After the experiment is saved, it can be converted to MaxDiff under Advanced Survey Options.
