Survey Methods At Ipsos

Ipsos uses all major survey methods across our huge range of work. This runs from face-to-face in-home interviewer surveys using probability samples, through telephone surveys using quotas that are designed to reflect the population, to online studies using panels of people who have signed up to take part in surveys.

Choosing The Most Appropriate Approach

Our aim when designing studies and responding to briefs is to put forward the approach that is most fit for purpose, taking account of clients' objectives and budget. Sometimes this will be a face-to-face or telephone methodology, sometimes internet or postal.

Where clients need a very high degree of accuracy or the assurance that all possible steps have been taken to remove any biases or distortions, only the purest forms of random probability sampling with face-to-face interviewing may be suitable, as is the case with many of the major government surveys that Ipsos conducts. Less rigorous methodologies would simply not be fit for purpose.

For many other purposes, however, clients can often gain sufficient insight into their question of interest using an approach that, to a greater or lesser extent, has limitations.

  • Quota sampling can be a more appropriate choice: it does not offer the same degree of statistical purity as random probability sampling, but it has a good record in practice of producing reliable results and is generally significantly quicker and less expensive. Depending on the circumstances this might involve face-to-face or telephone interviewing (a sketch of the quota-control mechanics follows this list).
  • Where appropriate, particularly when the target group is a nationally-representative sample of adults and the number of questions in the survey is limited, one of the Ipsos Omnibus surveys (using quota sampling) may be the most cost-effective solution.
  • For other purposes a postal survey may be most appropriate: it can easily and inexpensively reach a large random sample, but it sometimes suffers from a lower response rate and greater bias than other methodologies.
  • Internet surveys of the general public can be cheaper still, though they suffer from known biases arising from differences between the parts of the population that are online and those that are not. Moreover, unlike the other available methods, internet approaches to general public surveys often also rely on a volunteer panel, which can introduce further distortion. For some purposes, however, internet approaches with the general public will be acceptable, and there are many studies where online surveys may have advantages over other methods, both among particular groups within the general public (for example, young people) and specialist audiences (for example, GPs).
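
To illustrate the quota-control mechanics mentioned in the first bullet above, here is a minimal sketch in Python. The quota cells and targets are purely hypothetical, not actual Ipsos quotas; the point is simply that recruitment continues until each demographic cell is full, rather than drawing names from a random sample frame.

    from collections import Counter

    # Hypothetical quota cells (sex x age band) with illustrative targets.
    TARGETS = {
        ("male", "18-34"): 120, ("male", "35-54"): 130, ("male", "55+"): 100,
        ("female", "18-34"): 125, ("female", "35-54"): 135, ("female", "55+"): 110,
    }

    achieved = Counter()

    def screen(sex, age_band):
        """Accept a respondent only if their quota cell is still open."""
        cell = (sex, age_band)
        if cell not in TARGETS:
            return False                  # outside the target population
        if achieved[cell] >= TARGETS[cell]:
            return False                  # quota full: close the interview
        achieved[cell] += 1
        return True

    print(screen("female", "18-34"))      # True: first respondent in an open cell

Who ends up in each cell depends on who is available and willing when approached, which is why quota samples lack the statistical guarantees of random probability samples even when the cell counts match the population exactly.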

Our view is that, as long as our clients are aware of the drawbacks of whatever method they choose and use the data appropriately, they will be making good use of research — and doing this is certainly better than basing decisions on no information.

Understanding Public Opinion via Internet Surveys

Of course we recognise that there is still much to be learned about which methods are most accurate and representative in different circumstances. Indeed, we put a lot of time and effort into methodological studies exploring these issues, through the work of our Research Methods Centre. For example, we have published papers that have helped progress the most contentious debate in recent years — on the relative accuracy of internet panel surveys in representing the public (Linked Article 1), and we are continuing to conduct major experiments on this (papers will be published in early 2007).

On the face of it, the simple fact has to be that online surveys cannot be fully representative of the public at large. Given that only around 60% of households in Britain have access to the internet, we will of necessity exclude a large swathe of the population before we start — and, more to the point, we know that those who do have access to the internet are very different from those who don't in a great many respects.
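
To put a rough number on this (the figures below are illustrative assumptions, not survey findings), the coverage bias in a simple estimate is the excluded share of the population multiplied by the difference between the online and offline groups:

    # Illustrative arithmetic only: assumed figures, not survey findings.
    coverage = 0.60     # c.60% of British households online, as noted above
    p_online = 0.50     # assumed proportion holding some view among those online
    p_offline = 0.40    # assumed proportion holding it among those offline

    p_true = coverage * p_online + (1 - coverage) * p_offline   # 0.46
    bias = p_online - p_true                                    # +0.04

    print(f"population value {p_true:.2f}, online-only estimate {p_online:.2f}, "
          f"bias {bias:+.2f}")

On these assumptions, a ten-point difference between the online and offline populations shifts the headline figure by four percentage points before any panel-selection effects are even considered.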

While it is true that there can be problems with sample frames and non-response bias for in-home or telephone surveys, the scale of the exclusions and biases is still much larger for online studies. Not only do online surveys omit the non-internet-using c.40% of the population, but they also usually depend on respondents' willingness to commit themselves to being part of an online panel. Given that there are no reliable or comprehensive sample frames of email addresses, this is the only real option available, and it is certainly preferable to the uncontrolled, open-invitation surveys sometimes run on websites.

It is true that internet panel surveys are being used successfully to measure narrowly defined issues like voting behaviour, but the adjustments and weighting involved mean that question areas that are not individually calibrated are likely to be much less accurate than the headline measure (e.g. voting intention) that is the subject of the targeted adjustment. Where no calibration is possible, because a new area or issue is being researched, there is a real risk that measures from an internet panel will be less reliable than those from an equivalent representative survey of the population as a whole.
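
As a sketch of the kind of weighting adjustment referred to above, the following Python function implements basic rim weighting (iterative proportional fitting): respondent weights are adjusted until the weighted sample matches known population margins on the calibration variables. The function and the margins in the example are illustrative, not Ipsos's production weighting scheme.

    import numpy as np
    import pandas as pd

    def rake(df, margins, max_iter=100, tol=1e-8):
        """Rim weighting: df has one row per respondent; margins maps
        {column: {category: population proportion}}. Assumes every
        category in the margins appears at least once in the sample."""
        w = np.ones(len(df))
        for _ in range(max_iter):
            worst = 0.0
            for col, targets in margins.items():
                masks = {c: (df[col] == c).to_numpy() for c in targets}
                total = sum(w[m].sum() for m in masks.values())
                for cat, target in targets.items():
                    factor = target * total / w[masks[cat]].sum()
                    w[masks[cat]] *= factor
                    worst = max(worst, abs(factor - 1.0))
            if worst < tol:
                break
        return w

    # Hypothetical toy sample, skewed young and male relative to the targets.
    sample = pd.DataFrame({
        "sex": ["male"] * 6 + ["female"] * 4,
        "age": ["18-34"] * 7 + ["55+"] * 3,
    })
    weights = rake(sample, {
        "sex": {"male": 0.49, "female": 0.51},
        "age": {"18-34": 0.30, "55+": 0.70},
    })

By construction, the weighted sample now matches the calibration margins exactly; any variable not in the margins is corrected only to the extent that it correlates with them, which is exactly why uncalibrated question areas can remain much less accurate than the headline measure.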

Whatever the issue being researched, the quality of online panels is going to be more of a focus in the future, and will be a key factor in distinguishing high quality research. As price pressures grow, it is likely that some panel providers will compromise the recruitment and management of panellists in order to save on costs, and this will have a direct impact on the accuracy and reliability of results. Making sure the source of panellists is balanced (including a mix of offline and online recruitment, as with the Ipsos online panel), alongside rigorous management, monitoring and renewal, will be vital.

But it should be noted that even higher-quality online research has limitations that may not be immediately obvious. For example, while quality online research is generally conducted in such a way as to achieve the "right" number of interviews with groups who are severely under-represented on the internet (for example, those aged 65+ in the lowest social classes), analysis of sub-group data suggests that even in the best-quality panel the respondents achieved in these groups are, not surprisingly, very different from the norm for their group. This no doubt helps to explain some of the strange sub-group data seen in some online surveys.

Of course, if you are not concerned about sub-groups and just want an overall indication of opinion on subjects unlikely to be highly correlated with internet usage, an online survey can provide it. In short, the usefulness of an online survey will depend entirely on who and what is being measured, the use to which the results are being put, and what is being claimed for their accuracy and representativeness. It is as wrong to say that online approaches to general public surveys are never appropriate as it is to say that they are always appropriate. The question the researcher should ask is: is an online survey fit for my purposes? The answer is often yes, as demonstrated by the fact that Ipsos will conduct over 1,000,000 interviews over the internet in 2006 (Ipsos Online).

But some online research providers refuse to admit the limitations of online approaches and make blanket statements about general public online samples being "representative" or sometimes even that online general public data are of "higher quality" than data collected in other ways. Such statements do not stand up to scrutiny and are in direct conflict with both the survey methods literature and the judgements of acknowledged independent experts (see Articles below).

Ensuring clients get the greatest value from research will not be achieved by making unsupported claims for a single approach and then promoting it in all circumstances. It is only if an informed judgement is made on the basis of a proper understanding of the relative strengths and weaknesses of all methods that we can be sure that a proposed study methodology is genuinely fit for purpose.

Articles on Online Research Methods for General Public Research

A large number of papers have been published on the reliability of online methods for researching the general public. We have outlined just a few of the key ones below; this is not a comprehensive list, and we do not endorse the views in any particular examples (except our own), but it should give those interested a good start in understanding the issues. The first section includes articles with links to full papers; the second lists articles and books available from a range of individual journals and publishers.

1. Linked Articles

2. Other Articles And Books

  • Propensity Score Adjustment As A Weighting Scheme For Volunteer Panel Web Surveys: Lee, S. (2006) Journal of Official Statistics, Vol. 22, No. 2, pp. 329-349
  • Mail And Internet Surveys: The Tailored Design Method: Dillman, D. (2006) John Wiley & Sons
  • An Experimental Comparison Of Web And Telephone Surveys: Fricker, S. et al. (2005) Public Opinion Quarterly, Vol. 69, No. 3, pp. 370-392
  • Virtual Methods: Issues In Social Research On The Internet: Hine, C. (ed.) (2005) Oxford: Berg
  • Should We Trust Web-Based Studies?: Gosling, S. D., Vazire, S., Srivastava, S. and John, O. P. (2004) American Psychologist, Vol. 59, pp. 93-104
  • Online Social Research: Methods, Issues, And Ethics: Johns, M. D., Chen, S. S. and Hall, G. J. (eds.) (2004) New York: Peter Lang
  • Online Social Sciences: Batinic, B., Reips, U.-D. and Bosnjak, M. (eds.) (2002) Cambridge: Hogrefe & Huber
  • An Assessment Of The Generalizability Of Internet Surveys: Best, S. J., Krueger, B., Hubbard, C. and Smith, A. (2001) Social Science Computer Review, Vol. 19, pp. 131-145
  • Web Survey Design And Administration: Couper, M. et al. (2001) Public Opinion Quarterly, Vol. 65, pp. 230-253
  • Can We Trust The Data Of Online Research?: Miller, T. (2001) Marketing Research, Vol. 13, No. 2, pp. 26-32
  • Web Surveys: A Review Of Issues And Approaches: Couper, M. (2000) Public Opinion Quarterly, Vol. 64, pp. 464-494
