The Polls In 2000

An editorial in the Daily Telegraph last month (5 December) suggested that MORI's polls in The Times systematically under-represent Conservative strength, and further that this is because they are conducted face-to-face rather than by telephone. The article cited several arguments in support of its case which were based on factual errors. We wrote to the paper correcting these errors, but it failed to publish our letter. It is not true as they alleged that face-to-face polls tend to find lower Conservative support than telephone polls. Nor is it true that MORI's polls find systematically lower Conservative support than those of the other polling companies. But since some of these misconceptions seem to be widespread, and the Telegraph was only echoing the wishful thinking which seems to be still entrenched in some corners of Conservative Central Office, it is perhaps time for a systematic review of the evidence, taking the whole of the year 2000 as our basis.

Throughout last year there were three regular monthly poll series measuring voting intention in Great Britain: MORI for The Times, with samples of c. 2,000, conducted face-to-face, in-home; Gallup for The Daily Telegraph; and ICM for The Guardian. Both Gallup and ICM interviewed telephone samples of c. 1,000. The table shows the average results over the year for each of the three main monthly poll series. There are twelve polls in each series (the extra poll Gallup published in the Telegraph in September has been excluded, to keep the time spread as even as possible).

Poll averages (monthly series), 2000

                              Con   Lab   LDem  Other
                               %     %     %     %
MORI/Times (face-to-face)    31.3  47.7  14.9   6.1
Gallup/Telegraph (telephone) 31.9  47.3  14.3   6.3
ICM/Guardian (telephone)     33.3  42.3  17.7   7.2

It is plain that there is no systematic difference between face-to-face and telephone poll results; indeed, there is virtually nothing to choose between MORI and Gallup. A glance at the detailed table of all the polls [voting intentions (Westminster) - all companies' polls] shows that in many months all three polls were in close agreement; and, as they are conducted at different times of the month, where there were differences they may well simply have reflected real changes in public opinion between the fieldwork periods. Furthermore, of course, all polls are subject to sampling variation ('margin of error'), so some differences between any group of polls are only to be expected. While most of the effect of sampling variation ought to cancel out over the course of the year by the law of averages, the effect of differences in timing might remain.
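To put a rough figure on that sampling variation, here is a minimal sketch in Python, assuming a simple random sample (real quota and telephone samples carry design effects that this ignores), of the conventional 95 per cent margin of error on a single party share:

    import math

    def margin_of_error(share_pct, n, z=1.96):
        # 95% margin of error, in percentage points, for one party's share
        # measured from a simple random sample of n respondents
        p = share_pct / 100.0
        return z * math.sqrt(p * (1.0 - p) / n) * 100.0

    # Labour at 47.7% in a MORI/Times sample of c. 2,000:
    print(round(margin_of_error(47.7, 2000), 1))   # about 2.2 points
    # A similar share in a telephone sample of c. 1,000:
    print(round(margin_of_error(47.3, 1000), 1))   # about 3.1 points

On these assumptions a single poll of 1,000 can stray some three points either side of the true figure by chance alone, so differences of that size between individual polls prove very little.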

In so far as there is a discrepancy, the odd man out is not MORI (as the Telegraph suggests) but ICM, and the biggest difference is not in the standing of the Tories but of Labour. ICM's Labour share tends to be several points lower, with the Liberal Democrats usually correspondingly higher - hence the Labour lead over the Tories is lower in ICM's polls, even though its measurement of the Conservative share is not far out of line. Why is this?

There are two factors. The first, and smaller, is the question of whether the polls need to compensate for the alleged existence of "shy Tories". ICM "adjusts" its data by weighting on the basis of respondents' reported vote at the last election, to correct for a supposed "spiral of silence". Before the last election this adjustment was often of the order of several percentage points added to the Tory and subtracted from the Labour share. We have always believed that, even if the shy Tory phenomenon was real, the ICM adjustment was an overcompensation. Since the 1997 election, ICM has stopped publishing its unadjusted data, so we can only guess how much difference the adjustment now makes; but since their figures for Conservative share are now only a little higher than MORI's unadjusted data (and Gallup's), it seems likely that the effect is now much smaller than it used to be. The 'spiral of silence' has unwound.
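For illustration, here is a minimal sketch of how weighting by reported past vote works. The recalled-vote figures below are invented, and the simple one-step cell weighting is an assumption for clarity, not ICM's actual (unpublished) scheme; the principle is that each respondent is weighted so that the sample's recalled 1997 vote matches the real 1997 result:

    # Actual 1997 UK vote shares, per cent (approximate)
    actual_1997 = {"Con": 30.7, "Lab": 43.2, "LDem": 16.8, "Other": 9.3}

    # Hypothetical recalled 1997 vote in a current sample, per cent
    recalled_1997 = {"Con": 27.0, "Lab": 47.0, "LDem": 15.5, "Other": 10.5}

    # Weight for each past-vote group: actual share / recalled share
    weights = {party: actual_1997[party] / recalled_1997[party]
               for party in actual_1997}

    for party, w in weights.items():
        print(f"{party}: weight {w:.2f}")
    # Recalled 1997 Tories (under-represented in this invented sample) are
    # weighted up (~1.14) and recalled Labour voters down (~0.92), which
    # pulls current Conservative intention up and Labour intention down.

If the shortfall in recalled Tory vote reflects genuine shyness, such an adjustment corrects a real bias; if it merely reflects faulty memory, it overcompensates.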

But, as already noted, it is in the Labour rather than the Conservative share that there is a persistent discrepancy between ICM on the one hand and MORI and Gallup on the other. This is probably an effect of question wording. MORI and Gallup (as far as we know) continue to use the traditional wording of the voting intention question as it has been used for most of the sixty-plus years that there have been opinion polls in Britain: "Q. How would you vote if there was a General Election tomorrow?". ICM, by contrast, have used a different question for several years, which names the parties: "Q. The Conservatives, Labour, the Liberal Democrats and other parties would fight an immediate general election in your constituency. Please tell me which party you think you would actually vote for in the polling station." Unlike the other pollsters, ICM are reminding respondents of the possibility of voting Lib Dem, and it has long been recognised that using such a preamble is liable to push up the Lib Dem share in any poll in mid-Parliament - the result is a switch to the Lib Dems from Labour as compared with other companies' polls.

Why? Pollsters have acknowledged for many years that between elections many potential supporters of the centre party (Liberal Democrats now, Alliance or Liberal as they once were) seem to forget its existence. Questions about voting at the previous election, asked once a couple of years have passed, invariably find a much lower reported Liberal vote than was really the case. (And nobody has ever seriously suggested that there is a spiral of silence against the Liberals.) The same effect depresses Liberal Democrat voting intention - as can often be seen when a by-election raises their profile and causes a sudden upward blip in their national ratings. As the General Election approaches, however, the Liberal Democrat share in MORI's and Gallup's polls will almost certainly rise; since ICM has already injected this element into their figures, the chances are that the series will converge by election day (as they did in 1997). The average finding of the final election day polls, by the way, has not been out by more than one percentage point in 'predicting' the Liberal/Alliance/Lib Dem share at any of the last six elections - even though the vast majority of those polls used the traditional question wording rather than the ICM prompted version.

Why MORI's telephone polls do not match the Times series

One thing that is true, however, and is perhaps partly to blame for the misconceptions peddled by the Telegraph, is that MORI's telephone polls for various clients over the last year have on average found a lower Labour share than the average of our face-to-face Times series. But the reason for this is not how the polls were conducted but when they were conducted.

                           Con   Lab   LDem  Other
                            %     %     %     %
MORI/Times (face-to-face) 31.3  47.7  14.9   6.1
MORI telephone polls      34.4  44.4  15.6   5.7

How can this be? At first glance it might seem a highly unlikely coincidence that two sets of polls over the same year, differing only in their fieldwork dates, could both be accurately measuring public opinion and yet be this far apart. But this ignores the entirely different circumstances in which the polls are commissioned. Our Times polls are a fixed monthly series taking measurements at broadly regular intervals. The fieldwork dates are determined months in advance, and the poll goes ahead come rain or shine.

Our telephone polls, by contrast, are ad hoc affairs, usually for one of our Sunday newspaper clients. Naturally enough, such polls tend to be commissioned when our clients think that they will give rise to a good story; and, equally naturally, given the course of politics over the last few years, a government in trouble is usually a more interesting news story than a government sailing serenely along. Consequently, our telephone polls have been disproportionately likely to be taken at periods of particular government unpopularity. (It is for this reason, among others, that we consider the trends from our regular monthly Times series the 'gold standard' among the polls; it is not that there is anything wrong with our other polls as snapshots of the public mood at the moment they are taken - which is what our clients want them for - but taken collectively they could present a selective and possibly distorted view of the course of events.)

The polls in September illustrate this perfectly. At the end of August, when our Times poll was taken, the government was riding high on William Hague's 14-pint interview and other matters. Then came the controversy over Dome funding, followed by the fuel crisis, and we, in common with the other polling companies, found a brief Tory lead as the government's popularity plummeted. By the time of our next Times poll the government recovery had begun - again, the other pollsters' data agree - so in that period our telephone polls on average found a much rosier picture for the Tories. But it certainly wasn't because telephone polls are methodologically more friendly to the Tories!

In fact, on the few occasions this year when it has been possible directly to test the two methods against each other - when we were commissioned to conduct a telephone poll over the same weekend when our face-to-face interviewers were in the field on our poll for The Times, or immediately after they finished - the results have usually been very similar, as the table shows.

                             Con  Lab  LDem  Other  Lead
                              %    %    %     %      %
13-15 Dec  MORI/NotW*        32   47   16    5     -15
 7-12 Dec  MORI/Times        34   46   14    6     -12
           Difference        -2   +1   +2   -1     -3

24-25 Nov  MORI/MoS*         34   47   13    6     -13
23-28 Nov  MORI/Times        33   48   13    6     -15
           Difference        +1   -1    0    0     +2

21-22 Sep  MORI/MoS*         39   35   21    5     +4
21-26 Sep  MORI/Times        35   37   21    7     -2
           Difference        +4   -2    0   -2     +6

17-18 Aug  MORI/NotW*        32   51   12    5     -19
17-21 Aug  MORI/Times        29   51   15    5     -22
           Difference        +3    0   -3    0     +3

20-22 Jul  MORI/MoS*         32   51   11    6     -19
20-24 Jul  MORI/Times        33   49   12    6     -16
           Difference        -1   +2   -1    0     -3

22-23 Jun  MORI/NotW*        34   47   14    5     -13
22-27 Jun  MORI/Times        33   47   13    7     -14
           Difference        +1    0   +1   -2     +1

17-19 May  MORI/Mail*        33   46   14    7     -13
18-23 May  MORI/Times        32   48   15    5     -16
           Difference        +1   -2   -1   +2     +3

25-27 Jan  MORI/Mail*        29   49   15    7     -20
20-25 Jan  MORI/Times        30   50   15    5     -20
           Difference        -1   -1    0   +2      0

Average telephone*           33   47   15    5     -14
Average Times (face-to-face) 32   47   15    6     -15
Average difference           +1    0    0   -1     +1

Telephone (exc. September)   32   48   14    6     -16
Times (exc. September)       32   48   14    6     -16
Difference                    0    0    0    0      0

* = telephone poll   MoS = Mail on Sunday   NotW = News of the World

Almost all the differences are within the normal limits of sampling error, and are not systematic - sometimes the telephone poll finds a bigger Labour lead, sometimes the Times poll does. The only big difference is in September, which was at a period when other polls were also finding sharp changes over short periods, and even the difference between a Thursday-Friday poll and a Thursday-Tuesday poll, with the bulk of interviews on the Saturday and Sunday, could perfectly plausibly account for the change. Excluding September, the figures are identical over the year to the nearest whole number, as the last line of the table shows.
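To put a number on those 'normal limits': a back-of-envelope check, assuming simple random samples of c. 1,000 by telephone and c. 2,000 face-to-face (the telephone sample size is an assumption here), of whether the September gap of four points in the Conservative share could be sampling noise alone:

    import math

    def se_of_difference(p1_pct, n1, p2_pct, n2):
        # standard error, in points, of the gap between two independent
        # poll percentages, under simple random sampling
        v1 = (p1_pct / 100) * (1 - p1_pct / 100) / n1
        v2 = (p2_pct / 100) * (1 - p2_pct / 100) / n2
        return 100 * math.sqrt(v1 + v2)

    # September Conservative shares: 39 (telephone, MoS) v 35 (Times)
    se = se_of_difference(39, 1000, 35, 2000)
    print(f"observed gap: 4 points; 95% sampling limit: {1.96 * se:.1f} points")
    # -> 95% sampling limit about 3.7 points

On these assumptions the four-point September gap sits right at the edge of what sampling noise alone could produce, entirely consistent with the rapid real movement in opinion described above.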

We are firmly convinced that both our polling methods are reliable and accurate, and that the differences between the two sets of polls simply reflect real differences in public opinion at the time they were conducted.
