Who, What, Where and Why
The election is approaching, and we are frequently being asked a similar series of questions -- or, occasionally, discovering that some have an alarming misunderstanding about what we do. So, with apologies to those of our readers to whom this is already obvious, let us begin at the beginning.
Who do we interview?
Anyone over the age of 15 living in Great Britain is eligible to be interviewed in our regular Omnibus surveys (although in our political polls we normally only report the views of those aged 18+, and during the election we shall also filter out those not registered to vote). To choose who we interview in our political polls, we use a two-stage sampling method. First, we choose a number of randomly selected "sampling points" (150 in our regular Omnibus, soon to be increased to 174), strictly defined geographical areas (such as local government wards), chosen so that together their residents make up an accurate sample of Great Britain. To each of those sampling points we send one of our trained interviewers, with a "quota" -- instructions about what sort of people to interview: how many men and how many women, how many in each age group, how many who own their homes and how many in council houses, and so on, to ensure that their group of respondents -- 14 in each quota under the current design -- is representative of all the people who live in that locality. (Each interviewer has a different quota, individually tailored to the population characteristics of that particular area, which ensures not only that we interview the right sort of people, but that we do it in the right places.) The interviewer then finds the appropriate number of adults, as dictated by his or her quota, and interviews them -- in their own homes.
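For readers who want to see the mechanics, here is a minimal sketch of that two-stage logic in Python. The ward names, quota targets and acceptance rule are invented for illustration -- they are not our actual sampling specification.

```python
import random

# Stage 1: choose sampling points (small areas such as local government
# wards) whose residents together mirror Great Britain. Names invented.
wards = ["Ward A", "Ward B", "Ward C", "Ward D", "Ward E"]
sampling_points = random.sample(wards, k=2)

# Stage 2: each interviewer works to a quota tailored to their area.
# Targets are invented; one respondent counts towards several cells,
# and the full design takes 14 respondents per point.
quota = {"male": 7, "female": 7, "aged 18-34": 5, "owner-occupier": 9}
counts = {cell: 0 for cell in quota}

def cells(person):
    """The quota cells a given respondent would fill."""
    filled = [person["sex"]]
    if 18 <= person["age"] <= 34:
        filled.append("aged 18-34")
    if person["owns_home"]:
        filled.append("owner-occupier")
    return filled

def try_interview(person):
    """Accept a respondent only while every cell they fill is still
    open, so that no quota cell is overshot."""
    needed = cells(person)
    if any(counts[c] >= quota[c] for c in needed):
        return False  # would overshoot a cell -- the interviewer passes by
    for c in needed:
        counts[c] += 1
    return True

# e.g. try_interview({"sex": "female", "age": 29, "owns_home": True})
```

The essential point is that the interviewer has no free choice: whom they may still interview is dictated by which quota cells remain open.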
What are the questions?
The political questions that we ask are exactly as they are published here in BPO, and those are the first questions that our "respondents" answer after the interviewer has ensured that they fit into the quota -- voting intention, satisfaction with the government and party leaders, issues facing the country, economic optimism, and whatever other topical questions we have included. All these questions, of course, have been discussed and agreed with our client -- The Times. Then, since this is an Omnibus survey, there will follow other questions on a multitude of topics for other clients -- some of which may later be published, and perhaps find their way into BPO, others of which will remain confidential for the private information of the clients who commissioned them. (During the election, the surveys will run ad hoc rather than on the Omnibus, carrying only political questions and examining the issues in greater depth.) Finally, the interviewer will ask the detailed demographic questions -- everything from age to marital and work status, the number of cars in the household and which newspapers the respondent regularly reads. These have two purposes: they enable us to carry out detailed analysis of the answers to the other questions on the survey, so we can isolate or compare the opinions of, say, women with children or of Mail on Sunday readers; but they also give us information about the overall composition of our sample, so that we can correct any imbalances and ensure that, as far as is practically possible, our survey really is representative of the entire British adult population. (This is achieved by "weighting" -- computer adjustment of the figures so that any given group of respondents contributes its proper share to the overall figures, reflecting its size in the whole population. We do not, incidentally, unlike some polling companies, weight by declared past vote to the result of the last election -- we have found that respondents' reports of their past voting behaviour are unreliable, and we consequently consider weighting to the 1992 result to be potentially dangerous.)
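As a rough illustration of what weighting involves, the sketch below (with invented shares, not our actual targets) scales each demographic group so that it contributes its proper share of the published figures.

```python
# Invented shares: how the achieved sample splits on one demographic,
# against each group's share of the adult population.
sample_share = {"men": 0.44, "women": 0.56}
population_share = {"men": 0.48, "women": 0.52}

# Each group is scaled so it contributes its proper share of the total:
weights = {g: population_share[g] / sample_share[g] for g in sample_share}
# men are weighted up (0.48 / 0.44 = 1.09), women down (0.52 / 0.56 = 0.93)

def weighted_shares(answers, group_of):
    """Weighted percentage giving each answer (e.g. voting intention).
    `answers` maps respondent -> answer; `group_of` maps respondent -> group."""
    total = sum(weights[group_of[r]] for r in answers)
    tally = {}
    for r, a in answers.items():
        tally[a] = tally.get(a, 0.0) + weights[group_of[r]]
    return {a: 100 * t / total for a, t in tally.items()}
```

In practice the adjustment runs across several variables at once, but the principle is the same: groups under-represented in the sample are weighted up, and over-represented groups are weighted down.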
Why do the polls always get it wrong?
They don't -- the polls usually get it right. MORI's average error on share of the two main parties in the three elections 1979-87 was one-third of a percentage point. Of course 1992 (when our average error was 4½%) was a disappointment, but we are confident that we know what went wrong and we expect to be able to prevent it happening again.
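"Average error on share" is simply the mean absolute gap, in percentage points, between each main party's share in the final polls and its share of the actual vote. A minimal sketch of the arithmetic, with invented figures:

```python
def average_error(poll, result):
    """Mean absolute error, in percentage points, across the parties compared."""
    return sum(abs(poll[p] - result[p]) for p in poll) / len(poll)

# Invented figures, purely to show the arithmetic:
print(average_error({"Con": 38.0, "Lab": 42.0},
                    {"Con": 42.0, "Lab": 35.0}))  # -> 5.5 points
```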
So what will MORI be doing differently from 1992?
We have entirely overhauled our quotas and weighting matrices, using the most accurate and up-to-date data sources available, and introducing new variables (notably car ownership) which will give us a tighter control over the character of the sample. Furthermore, experimentation has convinced us that reducing interviewer discretion in the selection of respondents gives us more representative samples, and consequently we expect to use census enumeration district (ED) sampling points for our election polling, rather than constituencies as at the last election. Interviews will be conducted in respondents' homes (sampling points as small as EDs are not practical with in-street interviewing, but this also enables us to monitor contact and refusal rates more closely). And, finally, we will be looking carefully at the results of all the other questions on our surveys, to try to avoid being caught out by a late swing or by Tories being more reluctant to talk to us. These may sound only minor changes, but we have found that they have had a considerable effect in improving our samples, and we are confident that with these improvements the tried-and-tested methods, properly implemented, will give us accurate poll results in the future.
What should you look for when reading the polls?
A few of the questions you might not be asking yourself, but ought to be, as you read the polls during the election, or at any other time:
- When was the fieldwork conducted? Is the data likely to be up-to-date? Did anything happen at the time of survey that might have affected the results, or has anything happened since that might have changed things? Has the press report you are reading got the polls in the right order -- by dates of fieldwork, not publication?
- Is it a panel study? Most surveys interview a new sample of electors every time, but a panel goes back to the same people as before, to find out if they have changed their minds, and why. Panels are vital for understanding the dynamics of change, but they may be less accurate at overall measurement because they consist only of those electors the pollsters are able to contact repeatedly (and who may become atypical of the public). None of the final "prediction" polls in the election is likely to be based on a panel, but there will probably be panels for other purposes.
- Is it a face-to-face or a telephone poll? We used to be more suspicious of telephone polls, but their recent record in Britain has been comparable with that of the face-to-face polls, and at least one main pollster (ICM) has now started using them for its routine monthly polls. Nevertheless, because telephone polls rely on weighting to compensate for those households without a telephone, the method is a little riskier than conventional polling; and if the sample is based on numbers listed in telephone directories, a bias may be created by the now quite substantial number of subscribers who choose to be ex-directory. On the other hand, telephone polls have the advantage over face-to-face polls that they need not be geographically clustered and do not face the same problems of accessibility in remote areas.
- How many sampling points were used? In general, the more the better in face-to-face surveys. (Telephone surveys don't need to use sampling points as such, which is one of their advantages: there is no need to cluster the interviews geographically, and clustering can increase the chance of sampling error.)
- Where was the sample taken? Is it a sample of the whole UK? (Rare -- if the report claims it is, that may be sloppy journalism rather than accuracy.) Of Great Britain, i.e. excluding Northern Ireland? (The normal design for a voting poll.) Or just in marginals, or certain regions, or some other unrepresentative sub-section?
- How big was the sample size? As a guide, a sample size of 1,000 is normally thought to have a "margin of error" of about ±3% (see the first sketch after this list). (That means that 95% of the time each party's share will be within these limits. Occasional "rogue" polls are inevitable -- we are bound by the law of averages. A rogue poll is one which, through the vagaries of sampling or occasionally through some uncharacteristic methodological hiccup, has thrown up a result out of line with what that poll would normally be expected to find -- what it is not is a poll that is wrong because it is conducted with inadequate methodology or which for any other reason could not reasonably be expected to get the right answer in the first place.) If you double the sample size, the margin of error shrinks only by a factor of about 1.4 -- roughly half as good again, not twice as good. During the election, ignore any poll with a sample much smaller than 1,000. Only once you know the sample size and the sizes of any sub-groups in the sample can you judge whether differences are statistically significant.
- Are the figures "adjusted" or unadjusted? Both ways of reporting polls have their proponents, but of course they don't mix. And if the figures have been adjusted, what method has been used? If the methods in two polls are very different, the figures may not be directly comparable. This seems likely to be the biggest stumbling block for the media in reporting the next election's polls -- if the reporter or pundit is not making the distinction clear, he probably does not know what he is talking about. (Also check that "Don't knows" have been reallocated, so that party shares add to 100% and are comparable with the last election's results -- although it is rare not to find this done in Britain; the second sketch after this list shows the arithmetic.)
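Here is the sketch promised in the sample-size item above: the textbook margin-of-error arithmetic for a simple random sample, which is where the ±3% rule of thumb comes from. This is standard statistics, not a description of any particular pollster's internal calculation (real quota samples behave somewhat differently).

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error, in percentage points, for a share p
    estimated from a simple random sample of size n (p = 0.5 is
    the worst case, so it serves as the rule of thumb)."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(1000), 1))  # -> 3.1, i.e. about +/-3 points
print(round(margin_of_error(2000), 1))  # -> 2.2: doubling n helps by ~1.4x
```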
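And the "Don't know" point in the adjustment item amounts to a simple renormalisation: exclude the undecided and rescale the party shares to 100%, so the poll can be set against an election result. A sketch with invented figures:

```python
# Invented raw percentages, including the undecided:
raw = {"Con": 30, "Lab": 40, "Lib Dem": 15, "Don't know": 15}

# Drop the "Don't know"s and rescale so the party shares sum to 100%.
decided = {party: v for party, v in raw.items() if party != "Don't know"}
total = sum(decided.values())
shares = {party: round(100 * v / total, 1) for party, v in decided.items()}
print(shares)  # {'Con': 35.3, 'Lab': 47.1, 'Lib Dem': 17.6}
```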
All simple and straightforward, isn't it? You could now go out and run your own poll? Unfortunately, there may be other people who think the same, so the final tip is: check the brand name. A poll by one of the APOPO agencies (MORI, Gallup, NOP, ICM or Harris) will have been conducted by experienced researchers and interviewers in conformity with the necessary professional standards. Polls from other companies with a political polling pedigree (for example, Scottish polls from System Three) can be accepted without misgivings. But be wary of polls which are not organised by established market research agencies and/or which are not conducted by a professional fieldforce: polls conducted by academics or journalists using students as interviewers -- especially constituency polls for local media -- tend to proliferate at election time and are rarely reliable. And especially beware of phone-in polls ("voodoo polls"), which can too easily be confused through sloppy terminology with perfectly reputable telephone surveys. Phone-in polls measure the opinions of self-selecting samples (of the audience which saw the invitation to phone in the first place), and normally have no controls to prevent "respondents" voting more than once. They are totally open to cynical manipulation -- one former chief whip recently admitted that he had on occasion organised colleagues to rig the results of such polls (Lord Cocks, quoted in the Independent, 18 March 1996) -- and should not be given any credence whatsoever.
Dr Roger Mortimore is political assistant to the Chairman at MORI