Election Polls: What Does MORI Do And Why?

This short note tries to explain in simple terms the main elements of the way MORI conducts its election polling and what those polls mean.

What does MORI do?

MORI's chief principle in the way we publish our political polling data is transparency. As far as possible we aim to report directly the questions that we put to the public and the proportions of our sample that have given us each possible answer in reply. Where it is necessary to explain the implications of the data or to further manipulate it so that its meaning is clear, we try to explain at every step what we have done and why, but also to ensure that the raw data remains available so that anybody who doubts our analysis can see for themselves and know what the answer would have been if different assumptions were made. We use no "black box" methods or trade secrets.

Our voting intention polls have three simple elements -- sampling, demographic weighting and filtering.

Sampling: MORI's voting intention polls are conducted either by telephone or face-to-face in respondents' homes. The exact details of how we select the people we invite to take part in our polls differ somewhat between the two interviewing methods, but the principle in both cases is "quota sampling". Each poll uses a fresh sample of c. 1,000 or c. 2,000 respondents, and these are chosen to ensure that they broadly match the adult population in their distribution of the sexes, of age, of social class or housing tenure, and of working status (that is, whether the respondent works full-time, part-time or not at all), as well as in geographical spread. The responses we receive from these respondents form our unweighted or raw data.

Demographic weighting: Because the quotas will never achieve a quite perfect distribution of our sample between the different demographic groups, and because we prefer to ensure that the sample is representative in more respects than it is practical to include in the quota system, we then weight the data. [See note 1] In fact the effect of demographic weighting is normally very small, since our sampling method ensures that each sample rarely diverges much from the ideal, and it is really more a fine-tuning process than anything else. (In fact, there has not been a single MORI voting intention poll published so far this year in which the weighting made a difference of more than a single percentage point for any of the three major parties' overall unfiltered voting intention share.) Unweighted figures from any published MORI poll are always available on request.
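To illustrate the arithmetic of weighting (explained more fully in note 1), here is a minimal sketch in Python. The 52%/48% split between women and men follows the example given in note 1; the achieved sample figures are hypothetical.

    # Illustrative sketch of simple demographic weighting, as described in note 1.
    # The 52% population share for women comes from the note; the achieved sample
    # figures are invented for illustration.
    population_targets = {"women": 0.52, "men": 0.48}  # known population shares
    achieved_sample = {"women": 0.55, "men": 0.45}     # shares actually interviewed

    # Each respondent in a group counts as (target share / achieved share), so an
    # over-represented group is scaled down and an under-represented one scaled up.
    weights = {group: population_targets[group] / achieved_sample[group]
               for group in population_targets}

    print(weights)  # women: 52/55 = 0.945..., men: 48/45 = 1.066...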

Filtering: The final voting intention figures are subject to one further process, since our "headline" or "topline" percentages do not include the whole sample. First we exclude those who are undecided how they would vote, those who say they would not vote at all and those (usually only a small number) who won't say. This leaves us with those who have named one of the parties, and voting intention percentages are always presented on this basis, since it is directly comparable with the way in which election results are normally published. The numbers undecided, saying they would not vote and refusing to answer are always given in the full report of the results of a poll on this website, and normally also in the technical note that accompanies the reports of MORI's polls in our client newspapers.

In recent years we have adopted a second stage of filtering, to cope with the problem of low election turnouts. In the past, when the vast majority of British adults could be relied upon to vote, at least in general elections, we could be reasonably confident that a poll that accurately measured the voting intentions of the electorate would also accurately predict how an election held at that moment would turn out. (As recently as 1992, remember, 78% of the electorate voted.) These days, however, many of the public are less sure that they will vote, and supporters of the Labour Party are considerably less likely to say they are certain they will vote than are Conservatives; consequently, there is generally a substantial difference between the party vote shares if you consider the responses of everyone who names a party for which they would vote and if you consider only the people who say they are certain to vote. Our "headline" voting intention figure, which we consider to be the most useful indicator of the political climate, has since 2002 been calculated by excluding all those who are not "absolutely certain to vote". We measure this by asking our respondents to rate their certainty to vote on a scale from 1 to 10, where "1" means absolutely certain not to vote and "10" means absolutely certain to vote, and only those rating their likelihood of voting at "10" are included. At the 2001 election, the two surveys we know of that used this 10-point measure, one by ICM and the other for the academic British Election Study, both found that the number of "tens" was the closest predictor of the eventual turnout.
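The two filtering stages can be illustrated with a minimal Python sketch; the responses below are invented purely to show the arithmetic and do not come from any MORI poll.

    # Illustrative sketch of the two filtering stages: (1) keep only respondents
    # rating their certainty of voting 10 out of 10, then (2) exclude those who
    # name no party and repercentage among those who do. All data is invented.
    respondents = [
        {"party": "Labour", "certainty": 10},
        {"party": "Conservative", "certainty": 10},
        {"party": "Liberal Democrat", "certainty": 7},  # dropped by the turnout filter
        {"party": None, "certainty": 10},               # undecided/refused: dropped next
        {"party": "Labour", "certainty": 10},
        {"party": "Conservative", "certainty": 10},
    ]

    # Stage 1: the turnout filter -- "absolutely certain to vote" only.
    certain = [r for r in respondents if r["certainty"] == 10]

    # Stage 2: repercentage among those naming a party.
    named = [r for r in certain if r["party"] is not None]
    parties = {r["party"] for r in named}
    shares = {p: round(100 * sum(r["party"] == p for r in named) / len(named), 1)
              for p in parties}

    print(shares)  # on this toy data: Labour 50.0, Conservative 50.0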

Again, although our headline figure uses the filtered data, the unfiltered data is also published -- the full trends are on the website here.

Why do the various voting intention polls from the different companies polling in Britain sometimes find different results?

There can be several reasons. Unless the polls were conducted over exactly the same period, it may be simply that public opinion has changed -- the public is volatile, especially during election campaigns, and remember we are NOT predicting what will happen at the election in the future, we are only measuring what the public thinks and says AT THE MOMENT, a "snapshot" measurement of a point in time.

Then, even if the polls really are measuring that same point in time, we cannot avoid the possibility of sampling error. Although it is a simplification of the true position, we normally state this in terms of a "margin of error"; for a poll with a sample size of 1,000, the margin of error is plus-or-minus three percentage points on the measurement of each party's share, so that a poll that records Labour as having 40% of the vote is really only stating that Labour's share is between 37% and 43%. In fact many polls published these days have an effective sample size of less than 1,000, because they are only concerned with the views of that part of the sample who are likely to vote, and so the margin of error is correspondingly larger. And even beyond that, we cannot prevent the occasional "rogue poll" -- sampling theory dictates that we must expect one poll in every twenty to be outside the margins of error.
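As a rough illustration of where the "plus-or-minus three points" figure comes from: it is the conventional 95% confidence interval for a simple random sample, which (as noted above) is itself a simplification of how real polls behave. A short sketch:

    import math

    def margin_of_error(share_percent: float, n: int) -> float:
        """Approximate 95% margin of error, in percentage points, for a party
        measured at share_percent in a simple random sample of size n."""
        p = share_percent / 100
        return 100 * 1.96 * math.sqrt(p * (1 - p) / n)

    # A party on 40% in a sample of 1,000: roughly plus-or-minus 3 points.
    print(round(margin_of_error(40, 1000), 1))  # ~3.0

    # With an effective sample of, say, 600 likely voters the margin widens.
    print(round(margin_of_error(40, 600), 1))   # ~3.9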

But another possibility is differences in methodology between the various polling companies. The companies often take a slightly different view of the best way to achieve an accurate result, and produce their figures in different ways; and, indeed, the published figures are not always measuring the same thing. To understand what a poll means, it is essential to understand what it is measuring.

What We Did Differently In The Final "Prediction" Poll

For the final poll, two extra adjustments were added [See note 2]. We began, as usual, with demographic weighting, using the same variables and targets as for our other telephone polls throughout the election; and the voting intention table was filtered based on 10/10 "absolutely certain to vote" plus "already voted by post". Except for including those who said they had already voted, this was again exactly as on the previous polls.

Then a voting intention was imputed for those who said they were certain to vote or had already voted but refused to say how. We did not do this in the previous campaign polls, but did so in the final poll in 2001 (and signalled in Explaining Labour's Landslide that we would do so [See note 3], as it would have improved our final poll in 1997). Generally, we consider that such an adjustment is not necessary except in the final poll, as the number of refusals in the "peacetime" face-to-face polls is small, usually 1% to 2%, but it is clearly justified and necessary when refusals reach 9% of those certain to vote. Those who said they would not vote or did not know how they would vote were, however, excluded as usual from the percentages.

The imputation of refusers' voting intentions was calculated on the basis of newspaper readership, i.e. refusers were assumed to split their votes in the same proportions as non-refusers who read the same newspapers, this being the most complete relevant data available. In the event the imputed votes split almost evenly between Tories and Labour, and the adjustment moved the overall voting intentions by only one percentage point.
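A minimal sketch of this kind of imputation follows; the readership groups, splits and counts are hypothetical, and only the principle (refusers assumed to split as non-refusers with the same readership do) comes from the text above.

    # Hypothetical sketch: refusers who are certain to vote are assumed to split
    # their votes in the same proportions as non-refusers who read the same paper.
    # The papers, splits and counts below are invented for illustration.
    non_refuser_splits = {
        "Paper A": {"Conservative": 0.55, "Labour": 0.30, "Liberal Democrat": 0.15},
        "Paper B": {"Conservative": 0.25, "Labour": 0.55, "Liberal Democrat": 0.20},
    }
    refusers_by_paper = {"Paper A": 40, "Paper B": 50}  # refusers certain to vote

    imputed = {"Conservative": 0.0, "Labour": 0.0, "Liberal Democrat": 0.0}
    for paper, count in refusers_by_paper.items():
        for party, share in non_refuser_splits[paper].items():
            imputed[party] += count * share

    # These imputed votes are then added to each party's filtered totals
    # before the final percentages are calculated.
    print(imputed)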

A final additional adjustment was made for differential turnout. Because the projected turnout from the final poll seemed unrealistically high (70%), and because this increase had partly arisen from the closing of the projected turnout gap between the parties since the start of the election campaign, we judged that the predicted turnout was likely to include a degree of exaggeration, and that this exaggeration was likely to be greatest among the groups whose claimed certainty of voting had increased most sharply (i.e. Labour and Lib Dem voters). The adjustment therefore aimed at reducing the extent to which the turnout gap between the parties had been closed since the start of the campaign. The data from the final poll was compared with that from the March MORI Political Monitor (the last before the election campaign), and the proportional increase in projected turnout was measured for each of the three main parties; for Labour and the Liberal Democrats a new turnout figure was calculated by reducing the excess proportional increase in turnout (i.e. over and above that found among Tories) by a third. The final voting intention figures were then down-weighted on this basis. Again the effect was very modest: in round figures, projected turnout for Labour was reduced from 76% to 73% and for the Liberal Democrats from 75% to 73%. In retrospect a bigger adjustment on this basis might have been justified. (Discounting the whole narrowing of the turnout gap instead of one-third of it would have given us final figures of 34:36:22.)
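A rough sketch of that calculation follows. Only the "reduce the excess increase by a third" rule and the final-poll turnout figures come from the text above; the March baseline figures used here are hypothetical, chosen simply to illustrate the arithmetic.

    # Hypothetical sketch of the turnout-gap adjustment. The final-poll turnout
    # figures and the one-third rule come from the text; the March baseline
    # figures below are invented for illustration.
    march_turnout = {"Conservative": 0.72, "Labour": 0.62, "Liberal Democrat": 0.63}
    final_turnout = {"Conservative": 0.78, "Labour": 0.76, "Liberal Democrat": 0.75}

    # Proportional increase in projected turnout since the pre-campaign baseline.
    increase = {p: final_turnout[p] / march_turnout[p] - 1 for p in march_turnout}

    adjusted = dict(final_turnout)
    for party in ("Labour", "Liberal Democrat"):
        # Cut the increase in excess of the Conservatives' increase by one third.
        excess = increase[party] - increase["Conservative"]
        adjusted[party] = march_turnout[party] * (1 + increase[party] - excess / 3)

    # The adjusted turnout figures are then used to down-weight each party's share.
    print({p: round(t, 2) for p, t in adjusted.items()})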

We had made allowance for one further adjustment by conducting a call-back on the Wednesday of some respondents we had spoken to on the Tuesday, to guard against the possibility of a last-minute swing. In the event, though, no adjustment was necessary on the basis of the call-back data, as it seemed clear that no significant last-minute movement was taking place.

Notes

  1. Weighting. This means, quite simply, that in adding up the figures we treat each different group as if we had the number of responses that we would have if the sample were perfectly representative. For example, if we found that 55% of our sample were women, since we know in fact that women make up only 52% of the adult population, in adding up the figures we would count each woman as 52/55 of a response, and vice versa for the men, ensuring that in the final results women account for their correct 52% share of influence.
  2. Separate tables showing the data after each stage are included in the final set of computer tables, which can be downloaded from MORI's website.
  3. Robert Worcester and Roger Mortimore, Explaining Labour's Landslide (London: Politico's Publishing, 1999), p 172.
