With polling day in the EU referendum fast approaching, and with our polls showing more voters making up their minds, the time has come for us to introduce some changes in the analysis and reporting of our polls. In order to be as transparent as possible, we are announcing these changes before fieldwork begins for our next poll, which will be published next week. We have also tested the impact of different questions relating to potential turnout, and are publishing the results here for wider interest.
The first change is one that we have made as standard practice in advance of other elections and referenda for many years. Now that registration to vote in the referendum has closed, we will exclude from the voting intention figures in the remaining polls those who tell us that they are not registered, or who are not sure whether they are.
Secondly, we will now add educational attainment to the list of demographic factors for which we control in the composition of our weighted samples. We believe that this will increase the representativeness of our surveys so that they reflect the national population more accurately. This follows on from our deliberations after the unsatisfactory performance of our polls at the 2015 general election. Our examination of our surveys, backed up by the industry-wide inquiry led by Professor Patrick Sturgis which reported earlier this year, led us to conclude that our samples had been under-representing politically disengaged members of the public. Last year, we began to weight our samples by newspaper readership, which has already gone some way towards offsetting this bias, and would have made us much more accurate in the general election. Nevertheless, this did not completely overcome it, and further investigation showed that we were still interviewing too many graduates, and not enough people with no formal qualifications.
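The mechanics of demographic weighting can be sketched with a toy cell-weighting example. All figures below are invented for illustration, and this is a generic sketch rather than a description of the weighting scheme actually used in our polls:

```python
# Hypothetical illustration of simple cell weighting by educational
# attainment; the population targets and sample counts are invented.
population_share = {"degree": 0.27, "other_qual": 0.50, "no_qual": 0.23}
sample_counts = {"degree": 400, "other_qual": 450, "no_qual": 150}  # too many graduates

n = sum(sample_counts.values())  # 1,000 respondents in total

# Each respondent in group g receives weight population_share[g] * n / count[g],
# so groups under-represented in the raw sample (here "no_qual") are weighted up.
weights = {g: population_share[g] * n / sample_counts[g] for g in sample_counts}

for g, w in weights.items():
    print(g, round(w, 3))
```

After weighting, each group's weighted count matches its population target exactly; the catch, discussed next, is the precision lost when some weights become large.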
After experimenting with this, we concluded that we should include a control for educational attainment in our quotas in future polls, as well as adding education to our weighting scheme to fine-tune the results. This will ensure that approximately the right number of people at each attainment level are interviewed in the “raw” sample, instead of only relying on weighting, since this results in a heavily weighted sample and an unnecessarily high “margin of error” due to a reduced effective sample size.
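The "effective sample size" point can be made concrete with Kish's standard approximation from survey statistics. This is a generic sketch with invented weights, not our production code:

```python
# Kish's approximation for the effective sample size of a weighted sample:
# n_eff = (sum of weights)^2 / (sum of squared weights).
# When all weights are equal, n_eff equals the raw sample size n;
# the more variable the weights, the smaller n_eff becomes.
def effective_sample_size(weights):
    total = sum(weights)
    return total * total / sum(w * w for w in weights)

# A lightly weighted sample of 1,000 loses almost no precision...
light = [1.0] * 500 + [1.1] * 500
# ...while a heavily weighted one can lose a great deal.
heavy = [0.4] * 500 + [2.5] * 500

print(round(effective_sample_size(light)))  # 998
print(round(effective_sample_size(heavy)))  # 656
```

The heavily weighted sample behaves, for margin-of-error purposes, like a poll of only around two-thirds its nominal size, which is why controlling quotas at the sampling stage is preferable to relying on weighting alone.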
Together this means that we will control for education at both the sampling and analysis stages of each poll. We believe that the result of this change is to make our samples more representative.
Both of these changes (the registration and education controls) were tested in our most recent poll (conducted 14-16 May), and each has a small impact on voting intentions, reducing the advantage Remain has over Leave. The combined effect of the two changes would be to move our headline figure of Remain 55%, Leave 37% to Remain 54%, Leave 39% - or, excluding don’t knows, from Remain 60%, Leave 40% to Remain 58%, Leave 42%. (The May poll did not include an education quota, so it is only possible to demonstrate the effect of the weight, but the quota and the weight should have a similar effect, subject to sampling error.) We will also ask the referendum voting questions first, before the party voting questions, in line with the approach we will take in our final poll (and with our approach in our Scottish independence referendum polling).
Finally, as we and many others have been pointing out, turnout will be a crucial factor in the upcoming referendum. We have been publishing figures showing the impact of turnout since March, and consistently those who are most likely to vote have been more favourable towards Britain leaving the EU than the public as a whole (over the last three months, the average effect of our standard turnout filter has been to reduce the Remain share by 2.67 percentage points and increase the Leave share by 2.67 points, when excluding don’t knows).
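To illustrate the mechanics of a turnout filter, the sketch below uses entirely invented respondent data (not our actual sample) and a 0-10 scale in the style of a standard likelihood-to-vote question:

```python
# Hypothetical illustration of a turnout filter; the respondents and
# their likelihood-to-vote scores (0-10) are invented.
respondents = (
    [("Remain", 10)] * 40 + [("Remain", 5)] * 20   # Remain voters, mixed certainty
    + [("Leave", 10)] * 35 + [("Leave", 5)] * 5    # Leave voters, mostly certain
)

def shares(sample):
    """Return (Remain %, Leave %) rounded to whole points."""
    remain = sum(1 for vote, _ in sample if vote == "Remain")
    n = len(sample)
    return round(100 * remain / n), round(100 * (n - remain) / n)

all_respondents = shares(respondents)                            # everyone
certain_to_vote = shares([r for r in respondents if r[1] == 10])  # 10/10 only

print(all_respondents)   # (60, 40)
print(certain_to_vote)   # (53, 47) - the lead narrows among likely voters
```

Because the less certain voters here lean one way, filtering them out shifts the published shares, which is the pattern we have been observing with Remain and Leave.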
At the same time, we are also aware that our new standard turnout filter, which would have given us a more accurate result at the General Election, is not guaranteed to work in the same way in a referendum – referendums are not common events in Britain, and simply assuming that people’s impressions of how likely they are to vote would behave as they did at the election could easily be misleading, especially as so many were still to make up their minds.
However, as polling day approaches it makes increasing sense to concentrate primarily on those who are likely to turn out. Again, in our May poll we tested a number of different ways of accounting for turnout, based on our standard turnout questions and also on extra questions about how important the result is to people and about their voting behaviour in previous elections. The results from some of these are given below; as can be seen, at this stage, while they all show the same pattern of reducing Remain’s lead, there is little significant difference between them, even between filters that select different proportions of the total sample. Nevertheless, we will test these different methods of looking at turnout again in our next poll, and will continue to publish figures based both on all respondents and on the turnout-adjusted sample, with the main focus of our reporting on the turnout-adjusted figures.