Polling On The Internet

Since the late 1980s, telephone interviewing has become an increasingly effective way of conducting research among the general public. With telephone ownership rising above 90%, it became possible to interview representative samples of the public; the fast turnaround of results and the opportunity for tight control over the interview process have provided researchers with some real advantages over face-to-face methods.

Over the last few months, several British newspapers have been using YouGov, which polls a panel of internet users, as their measure of political opinion in this country. Is this the start of a shift away from face-to-face and telephone towards an on-line future?

There is no doubt that using the internet to administer questionnaires presents some real possibilities. There is no longer a need to pay interviewers. Like the telephone, it offers fast turnaround of results. Respondents can be recruited from around the globe. For some audiences that are very familiar with the web, it represents a sensible option, and we can achieve very respectable response rates. However, when it comes to surveys of the general public, we are more cautious about using the internet as a vehicle for research.

MORI, along with other market research agencies, attempts to interview representative samples of the general public. In order to do this, we need to take the whole of the adult population - men and women, rich and poor, young and old, north and south - as our sampling "universe".

Internet polling companies, by definition, start by excluding all those who do not have access to the web - still more than half the UK population. Furthermore, respondents are self-selecting, consisting of those who volunteered to join in, usually after visiting certain websites. There is a real danger, therefore, of results being biased towards those with more interest in political issues than average. (Incentive payments may alleviate this problem but can't solve it - especially if panel members have to take the time to complete tens of surveys to get any payment.)

Internet polling is usually based on members of a panel who are repeatedly interviewed. But, as researchers have known for decades, panels can quickly become unrepresentative, because the interviewing process itself conditions respondents' thinking in a way that the rest of the public has not experienced.

We are also cautious about how far it is possible to verify who is actually in the sample. The internet is notorious for people lying about their age and sex. Net pollsters may think that they are interviewing a 65-year-old woman from Birmingham, but who is she - or he - really?

Polling organisations like YouGov do not claim that their samples are representative, but believe that by applying suitable weighting they can achieve representative results. But this brings us to our second concern, which centres on what opinion polls are trying to achieve in the first place. A conventional political poll, for example, might measure party support by using a representative sample; because that sample is representative, it can also be used to measure any other characteristic in the population.

Of course it is possible to predict an election result without using a representative sample. If at the last election we had interviewed a sample consisting solely of white men it would have given us the correct result, since the two biases such a sample contains cancel each other out. But it wouldn't be much use if we wanted to measure how many people in Britain have suffered racial prejudice, or whether people prefer watching snooker to football on TV, or even which party young people voted for.
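
To make the arithmetic of that cancellation concrete, here is a minimal sketch in Python. All the group shares and support figures are invented for illustration; they are not real election data.

```python
# Hypothetical figures, invented purely to illustrate how two sampling
# biases can cancel out. Not real election data.

groups = {
    # group: (share of electorate, Labour support)
    "white men":       (0.45, 0.40),
    "white women":     (0.45, 0.36),  # women a little less Labour here
    "minority voters": (0.10, 0.58),  # minorities much more Labour here
}

# True national support is the weighted average across all groups.
national = sum(share * support for share, support in groups.values())
print(round(national, 3))      # 0.4

# A sample of white men alone happens to read the same 40%: omitting
# women biases it upwards, omitting minority voters biases it downwards,
# and the two errors cancel. The same sample would badly misread any
# question on which the omitted groups differ from white men.
print(groups["white men"][1])  # 0.4
```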

This is where we believe the internet polls need to be read with caution. Certainly the record of an organisation like YouGov suggests that it may be possible, through weighting and interpretation of the data, to come out with the right overall result for voting. But that is no guarantee that it is any good at measuring anything else - like the voting intentions of, say, 17-22 year olds - which, according to YouGov in the Daily Telegraph this month, are currently Liberal Democrat 30%, Conservative 28% and Labour 25%! (By way of context, the June MORI political monitor puts the Lib Dems on 22% among 18-24 year olds and the Tories on 18%; ICM for the Guardian puts the Lib Dems on 24% and the Tories on 19% for the same age group.)

Readers will probably be aware of the apparent difference between MORI's voting intention results and ICM's, discussed in this column on 28 June [Teflon Tony Rides Again]. This is because ICM's published figures are "adjusted" and MORI's are not. Over the years, MORI's approach has been to report voting intention results that give a snapshot of what people are saying at a given point in time, rather than (for example) a projection of what voting in a hypothetical future general election would actually be. Nevertheless, it is still worth bearing in mind that, when we look at the trends, both MORI and ICM are finding consistent results. Both companies take great care to make their samples representative of the whole public. And both have measured the same ups and downs of opinion over the last six months.

When we are looking to measure how opinion is changing, there is one cardinal rule that needs to be followed: compare like with like. Some recent newspaper reports, launching a new internet poll, have claimed to show new and important trends in public opinion, pointing to how the findings compare with previous polls conducted by Gallup, NOP, ICM or MORI. This is not good practice, given that we have no evidence as to whether the changes are due to real movements in public opinion or simply the result of two very different methodologies. Earlier this year, the Daily Telegraph reported that "don't knows" on the Israel/Palestinian issue had fallen from 26% in 2000 to just 7%. But the former figure was a Gallup poll of a representative sample, the latter a YouGov internet poll. A separate ICM poll in April 2002 on a similar question found 23% don't knows - little different from the Gallup poll of two years earlier.

In an article on YouGov's website, Peter Kellner argues that where YouGov's figures and conventional polls disagree, YouGov's should be believed, on the basis that "common sense suggests that...". But what "common sense suggests" about public opinion, even to the most experienced of political commentators, doesn't always turn out to be the case - that is why we conduct representative surveys of public opinion. Every now and then the results surprise us, because the people surprise us; history and experience are not always a good guide to the future.

YouGov has shown that it can present plausible figures for voting intention at the moment because its weighting and adjustments compensate for any biases caused by its unrepresentative sample. But any poll that relies heavily upon its weighting is at the mercy of any shifts in the underlying pattern of opinion; if some group of the population suddenly takes a distinctive view, and the weighting scheme doesn't take account of their existence, the measurement may no longer be accurate. Data modelling is not the same as replicable and thorough survey research using a scientifically selected representative sample.
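
For readers curious about the mechanics, here is a minimal sketch in Python of the kind of cell-based weighting at issue. The panel composition, population targets and support figures are all hypothetical, and real weighting schemes use many more variables than age alone.

```python
# A hypothetical sketch of weighting a skewed panel back to population
# targets, using two age cells. All figures are invented.

population = {"under_45": 0.45, "over_45": 0.55}   # census-style targets
panel      = {"under_45": 0.70, "over_45": 0.30}   # a young-skewed panel

# Each cell's weight scales its panel share up or down to the target.
weights = {cell: population[cell] / panel[cell] for cell in population}

def weighted_support(support_by_cell):
    """Weighted estimate of party support across the panel."""
    return sum(panel[cell] * weights[cell] * support_by_cell[cell]
               for cell in panel)

# If panel members in each cell hold the same views as that cell in the
# wider population, weighting recovers the true figure:
true_views = {"under_45": 0.30, "over_45": 0.40}
print(round(weighted_support(true_views), 3))   # 0.355, the true value

# But if the under-45s who join the panel differ from other under-45s
# (say, more politically engaged), the cell figure itself is biased and
# re-weighting by age cannot correct it:
panel_views = {"under_45": 0.40, "over_45": 0.40}
print(round(weighted_support(panel_views), 3))  # 0.4, overstated
```

The sketch shows the fragility described above: the correction operates only on the cells the scheme knows about, so any skew within a cell - or in a group the scheme ignores - passes straight through to the published figure.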

Simon Atkinson and Roger Mortimore
