Frequently Asked Questions (2): How Does The Voting Intention Question Work?
Recent correspondence has reminded me that not everybody knows exactly how we set about asking respondents their voting intentions, or how we calculate the figures from their responses.
MORI's measurement of voting intention uses two questions. These are usually the first questions asked on the questionnaire, and the procedure is the same whether the interview is being conducted face-to-face or by telephone.
The first question is:
Q How would you vote if there were a General Election tomorrow?
Our interviewers record responses to this question as one of the following:
- Conservative
- Labour
- Liberal Democrats
- Scottish National Party/Plaid Cymru
- Green Party
- Democratic/UKIP/Referendum Party
- Other
- Would not vote
- Undecided
- Refused
Note that we don't prompt our respondents by suggesting the parties that they might pick, or by reading them a list. Although we have a single category for "other", in our regular face-to-face Omnibus polls we record in every case which particular party this response represents - there are no parties missed out or excluded from our polls, and if any supposedly minor party made a sudden breakthrough in support, our polls would report it. But in practice the "other" category rarely accounts for even 1% of responses, and consequently we don't usually feel the need to distinguish between the very minor parties in our telephone polls.
All the respondents who say they are undecided how they will vote, or refuse to answer (although not those who directly say that they will not vote) are then asked the second question:
Q Which party are you most inclined to support?
This follow-up question, often called the "squeeze" question, tends to find that many of those whose initial reaction is not to plump for any party have, nevertheless, a clear inclination or party preference. The voting figures that are reported are, therefore, the "combined voting" figures calculated by taking together the answers from Q1 and Q2. (Past experience at elections has shown that the combined figures are a better guide to the electorate's voting behaviour than the figures from Q1 alone - at most general elections most of the "incliners" still vote. The separate breakdown of figures for Q1 and Q2 is not always reported on this website, but is always available if required by contacting MORI.)
For the combined voting figures we simply add together for each party those who named it in response to either question. Those who said they "would not vote" at either question are similarly added together, while the residual "undecided" and "refused" respondents are those who still gave this response having been asked both Q1 and Q2.
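The combining step can be sketched with the unweighted counts from the January 2001 poll quoted in the table in this article. This is purely an illustrative sketch, not MORI's actual processing code:

```python
from collections import Counter

# Unweighted Q1 and Q2 counts from the MORI/Times January 2001 poll.
q1 = Counter({"Conservative": 422, "Labour": 713, "Liberal Democrat": 185,
              "SNP/PC": 62, "Green": 9, "Ref/UKIP/Dem": 7, "Other": 4,
              "Would not vote": 234, "Undecided": 410, "Refused": 36})
q2 = Counter({"Conservative": 74, "Labour": 99, "Liberal Democrat": 35,
              "SNP/PC": 9, "Green": 4, "Ref/UKIP/Dem": 2,
              "Would not vote": 10, "Undecided": 179, "Refused": 34})

combined = Counter(q1)
for category, n in q2.items():
    if category in ("Undecided", "Refused"):
        # The residual undecided/refused are those who still gave that answer
        # after both questions, so Q2's count replaces Q1's rather than adding.
        combined[category] = n
    else:
        # Party supporters and non-voters from Q1 and Q2 are added together.
        combined[category] += n

print(combined["Labour"])          # 713 + 99 = 812
print(combined["Would not vote"])  # 234 + 10 = 244
```

Note that the overall total is unchanged (2,082 respondents): everyone asked Q2 was already counted as undecided or refused at Q1.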
The next stage is weighting the data. Weighting is a statistical procedure, carried out by computer, which simply compensates for any known unrepresentativeness in the sample. For example, women make up 52% of the British adult population; if we were to find that in one poll they made up 55% of the sample, the computer would downweight their answers slightly to ensure that in the final figures the opinions of women contributed the correct 52% of the numbers and men the other 48%. In practice the effect of weighting is almost always very small, and represents fine-tuning of the data rather than anything more drastic.
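The arithmetic behind that example can be sketched in a few lines. Each group's weight is its known population share divided by its share of the sample (the 55%/45% sample split is the hypothetical figure from the paragraph above):

```python
# Target shares are the known population proportions; observed shares are
# what the (hypothetically unrepresentative) sample happened to contain.
targets  = {"women": 0.52, "men": 0.48}
observed = {"women": 0.55, "men": 0.45}

# Weight for each group = target share / observed share.
weights = {g: targets[g] / observed[g] for g in targets}
# Women's answers are downweighted slightly (weight below 1),
# men's are upweighted slightly (weight above 1).

# After weighting, each group contributes exactly its target share:
weighted_share = {g: observed[g] * weights[g] for g in targets}
print(weighted_share)  # women contribute 0.52, men 0.48
```

As the small size of the correction suggests (weights close to 1), this is fine-tuning rather than a drastic transformation of the data.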
The final stage is to "repercentage" the data to exclude the "don't knows", so that the final "headline figures" that we report are percentages of those who have a party preference, rather than of the whole population. This is for comparability with election results (which, of course, measure only those who have voted for a party or candidate, not those who have abstained) and with other voting intention polls. This practice has been standard in Britain for many years; in the USA, by contrast, there is no generally accepted convention and each company reports its figures in different ways - some exclude don't knows and some do not, some exclude minor candidates and some do not, and the base may be all adults, registered voters or only those certain to vote. The result is confusion and a situation in which no two poll results are comparable with each other.
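Repercentaging can be illustrated with the weighted totals from the January 2001 table in this article. Again, this is only a sketch of the calculation, not MORI's production code:

```python
# Weighted Q1+Q2 totals from the MORI/Times January 2001 poll.
weighted = {"Conservative": 500, "Labour": 795, "Liberal Democrat": 218,
            "SNP/PC": 57, "Green": 15, "Ref/UKIP/Dem": 7, "Other": 3,
            "Would not vote": 257, "Undecided": 177, "Refused": 33}

# Headline figures are percentages of those naming a party, so non-voters
# and the residual undecided/refused are dropped from the base.
excluded = {"Would not vote", "Undecided", "Refused"}
base = sum(n for cat, n in weighted.items() if cat not in excluded)

headline = {cat: round(100 * n / base)
            for cat, n in weighted.items() if cat not in excluded}
print(base)                  # 1595
print(headline["Labour"])    # 50
```

This reproduces the headline column of the table: Labour 50%, Conservative 31%, Liberal Democrat 14%.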
The omission of don't knows reflects the experience of more than sixty years of polling that the vast majority of this group will not eventually vote, and those that do generally split between the parties in similar proportions to those that give a voting preference. Our aim is clarity: we believe that presenting the data in this way makes its meaning and implications most easily comprehensible. Nevertheless, the percentages of don't knows are always published. In the reports of our polls for The Times, these figures are invariably given in the small print technical note at the end of the article; some other papers don't always report these figures (although we try to enforce it as far as we can, at least with our clients), but the full figures will always be available on this website.
The table below illustrates the entire process of calculating the figures, taken from the MORI/Times January 2001 poll.
| | Q1 (unweighted) | Q2 (unweighted) | Q1+Q2 (unweighted) | Q1+Q2 (weighted) | Whole sample % | Repercentaged (headline) % |
| --- | --- | --- | --- | --- | --- | --- |
| Total | 2082 | 446 | 2082 | 2062 | 100% | 100% |
| Conservative | 422 | 74 | 422+74=496 | 500 | 24% | 31% |
| Labour | 713 | 99 | 713+99=812 | 795 | 39% | 50% |
| Liberal Democrat | 185 | 35 | 185+35=220 | 218 | 11% | 14% |
| SNP/PC | 62 | 9 | 62+9=71 | 57 | 3% | 4% |
| Green | 9 | 4 | 9+4=13 | 15 | 1% | 1% |
| Ref/UKIP/Dem | 7 | 2 | 7+2=9 | 7 | 0% | 0% |
| Other | 4 | 0 | 4+0=4 | 3 | 0% | 0% |
| Would not vote | 234 | 10 | 234+10=244 | 257 | 12% | – |
| Undecided | 410 | 179 | 179 | 177 | 9% | – |
| Refused | 36 | 34 | 34 | 33 | 2% | – |
Note: The 446 who were asked Q2 were the 410+36 who were either undecided or refused at Q1.