Audience Measurement in the Data Age
Audience measurement is an important area of expertise for Ipsos: we measure audiences for media content, accessed via both traditional and digital means, in more than 70 countries.
Media audience measurement is in robust health. Worth $7 billion in 2014, according to ESOMAR, it represented 16% of market research spending.
In this paper, we outline ten predictions for the future of audience measurement. These predictions are informed by our ongoing conversations with audience measurement stakeholders and users around the world.
We see both continuity and change ahead. The baby will not – and should not in our view – be thrown out with the bathwater. Random samples will still have value, but they will be supplemented with much larger databases. Hybrid approaches will become the norm. The use of advanced statistical techniques to integrate multiple data sources will accelerate.
Audience measurement will not be displaced by the programmatic targeting of digital users, as some have predicted. Such targeting will, of course, assume much greater importance than it has to date, driven by the growth in digital consumption and the search for media buying efficiencies. But people will continue to lead off-line as well as on-line lives for the foreseeable future. Advertisers will still need to target new customers, as well as those who have already bought from them or are researching their products online.
In many ways, the data sets behind such targeting are less robust than the traditional measures used to count, say, TV viewers. But they will certainly develop and improve over time. At the end of the day, audience measurement is about measuring people, not machines. We do not see that changing anytime soon.
All change
Over the last ten years, the digital revolution has hit the media industry hard. Audiences via ‘legacy’ platforms like print have declined, while digital audiences have grown.
This has changed the economics of the business and the affordability of the audience measurement underpinning it; for traditional media content creators, digital revenues have rarely kept pace with digital audiences. So profitability has been undermined and belts necessarily tightened.
All media are now cross-platform, so they need cross-platform measurement. Growth in programmatic buying, together with ‘digital-first’ and ‘mobile-first’ initiatives, demands faster and more granular data than has hitherto been available.
Old ‘Opportunities to See’ (OTS) measures of audience are being joined by new OTS measures such as ‘Page Views’. Reach & Frequency evaluation is now considered less important than the promise of pushing messages to people deemed to be ‘in the market’ by virtue of their web activity.
‘Data’ is the new God of marketing. As a result, audience measurement studies must link with other relevant information through data partnerships, statistical integration and so on.
High quality surveys, in which random probability sampling and face-to-face data collection were the norm, are becoming prohibitively expensive to execute in many countries. And many users are increasingly prepared to trade quality off against other goals such as price, speed, granularity and comprehensiveness. Choices must be made.
A good proportion of the Audience Measurement business is based on long-term contracts, which means that radical change is unlikely to occur swiftly.
Where contracts do come up for renewal, recurring requirements include greater cross-platform coverage, faster reporting speeds and more granularity.
But the forces against change are often as strong as those in favour. Individual publishers and broadcasters will not easily accept spending more on a new service that puts them at a short-term competitive disadvantage versus where they stood on a legacy measurement service.
Cross-platform measurement is not the same as cross-media measurement (e.g. across TV, radio, newspapers and so on), which is rarely a priority for media suppliers. While there has been some progress with initiatives like Touchpoints, there is little agreement, for example, on what the definition of media exposure should be across the various media (video, audio and text).
The digital revolution will not stop and wait for industry stakeholders to decide what they want from audience measurement services. Methods and practices are already available for addressing most of the technical challenges that have been thrown up by the changes. It is now a matter of implementing them.
Forces for change
Users of audience measurement services have been heavily challenged by the digital revolution. These users – primarily newspaper and magazine publishers, television and radio broadcasters, and media agencies – are not the only ones affected by digitisation. The market research industry has also had to re-engineer itself to cope with change.
- Newspaper publishers
- Television broadcasters
- Market research
We could extend this analysis to magazine publishing, radio broadcasting and the Out of Home industry – but the conclusions are similar.
More interesting, perhaps, is to look briefly at the industry that actually operates audience measurement services, because changes in this business are having an important influence on what is technically and economically possible in audience measurement.
Since the 1940s, when audience measurement started to take off, the primary forms of measurement have been surveys and panels. Television measurement typically involved randomly recruited panels of households first filling out diaries and then, from the late 1980s, allowing their TV sets to be metered.
Readership measurement, first carried out in a systematic fashion in the 1930s, has developed quite slowly since then. It is still based largely on random face-to-face recruitment and interviewing of a sample, although change has accelerated in recent years.
Radio still uses mainly the telephone or diaries to capture information on what people remember listening to, although passive measurement technologies have been trialled and implemented in some markets.
Survey challenges
It has become harder and more expensive to carry out traditional random probability surveys. A high quality survey depends on everybody in a target population having a reasonably similar probability of being invited to take part. This helps minimise bias from recruiting only those who are easy to reach.
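To make the bias point concrete, here is a minimal, purely hypothetical sketch (the population sizes, viewing minutes and contact probabilities are all invented): if easy-to-reach people are over-sampled and also behave differently, a raw sample average is skewed, while weighting each respondent by the inverse of their selection probability pulls the estimate back towards the truth.

```python
# Hypothetical illustration: unequal contact probabilities bias a raw survey
# estimate; inverse-probability (design) weights correct for it.
import random

random.seed(1)

# Invented population: 60% "easy to reach" (about 200 daily TV minutes),
# 40% "hard to reach" (about 120 daily TV minutes).
population = ([("easy", random.gauss(200, 30)) for _ in range(60_000)]
              + [("hard", random.gauss(120, 30)) for _ in range(40_000)])
true_mean = sum(m for _, m in population) / len(population)

# Assume easy-to-reach people are three times as likely to end up in the sample.
prob = {"easy": 0.03, "hard": 0.01}
sample = [(g, m) for g, m in population if random.random() < prob[g]]

naive = sum(m for _, m in sample) / len(sample)
weighted = (sum(m / prob[g] for g, m in sample)
            / sum(1 / prob[g] for g, _ in sample))

print(f"true mean        : {true_mean:6.1f}")
print(f"unweighted sample: {naive:6.1f}  (skewed towards the easy-to-reach)")
print(f"design-weighted  : {weighted:6.1f}  (weight = 1 / selection probability)")
```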
For various reasons – including the growing popularity of apartment living and heightened security concerns, which have raised both the physical barriers to entry and people’s reluctance to allow strangers into their homes – face-to-face interviewing has become increasingly scarce. In some countries it has virtually ceased to exist.
Telephone research has been blighted by the rise of mobile-only (or mobile-mainly) households, by widespread call screening and by the growth in unsolicited sales (and market research) calls. ESOMAR data on the global market research industry show that the proportion of survey-based turnover derived from face-to-face, postal and telephone interviewing – the ‘traditional’ quantitative methods – fell from 61% in 2005 to just 26% in 2014.
Online interviewing, meanwhile, has grown from 13% to 23% of the total, while the share generated by automated digital and electronic survey methods (including TV panel research and retail audits) stood at 23% in 2014.
Digital data challenges
Critics cast doubt on whether survey research is fit-for-purpose today, let alone tomorrow. Instead, they argue, we should be planning and trading media space using the large data sets generated by digital behaviour. Even printed newspapers and magazines have access to Big Data sets in the form of daily circulation or sales information.
The many millions of digital set-top boxes used to carry TV signals into households can, for example, be used to help track viewing behaviour. Samples move from panels numbering thousands to groups of households in the millions, and no action is demanded of those living in the set-top box homes (apart from permission to monitor them). But it is not, of course, quite that simple. First, not every household watches television through a set-top box: there are multiple set-top box technologies and many other forms of digital delivery. Second, not every TV set in a household with a box operates through that box.
Set-top boxes can tell us when a set is on or off (though they can be confused by sets left on standby) and which channel it is tuned to. But that is where it ends. They do not indicate whether anybody is watching and, if anybody is, they cannot tell us how many people there are, who they are or whether they are paying attention to what is on the screen. Set-top box (or Return Path) data are therefore not a panacea. They cannot replace high quality panels, but they can certainly add value to panel data.
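As a minimal illustration of that last point, here is a purely hypothetical sketch of one way a people-meter panel can add the ‘who’ to return-path tuning data: the panel supplies persons-per-tuned-set factors, the boxes supply tuning at scale. Real hybrid methods are considerably more sophisticated, and every figure below is invented.

```python
# Hypothetical hybrid sketch: estimate audiences by combining return-path tuning
# volumes with viewers-per-tuned-set factors taken from a people-meter panel.

# From the panel: average persons viewing per tuned set, by channel and daypart.
viewers_per_tuned_set = {
    ("Channel A", "prime"):   1.8,
    ("Channel A", "daytime"): 1.1,
    ("Channel B", "prime"):   1.4,
}

# From return-path data: tuned boxes, already projected to the total TV population.
tuned_sets = {
    ("Channel A", "prime"):   2_400_000,
    ("Channel A", "daytime"):   600_000,
    ("Channel B", "prime"):   1_100_000,
}

def estimated_audience(channel: str, daypart: str) -> int:
    """Tuning volume from the boxes times the persons-per-set factor from the panel."""
    return round(tuned_sets[channel, daypart] * viewers_per_tuned_set[channel, daypart])

for channel, daypart in tuned_sets:
    print(channel, daypart, f"{estimated_audience(channel, daypart):,} estimated viewers")
```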
What about internet measurement itself? Many have hailed the internet (and mobile) as the most targetable of media, powered by the most accurate data – in complete contrast to the limited samples, broad demographic targeting and infrequent reports that characterise ‘traditional’ audience measurement. But digital audience data has its own set of challenges. There are usually two sources of audience data: measurement of site traffic and measurement of individuals. Site traffic measurement is built into the medium. When somebody visits a website on any device, their device ID will be logged by the site and a cookie typically dropped onto it (so it can be recognised the next time it visits).
Sites can report on the number of ‘unique’ device IDs which have opened any page over a given period. They can also filter out non-human visits from known bots and spiders, as well as any visits from overseas. But they cannot distinguish between people and devices: somebody who visits on multiple devices will be counted separately for each one.
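The counting itself is simple, as this minimal sketch shows (the log records and bot list are hypothetical, and real services use far larger bot and spider lists): page views are the retained log lines, while ‘uniques’ are distinct device IDs.

```python
# Hypothetical sketch of site-centric counting: page views and 'unique' device
# IDs over a period, with known bots and overseas visits filtered out.
from datetime import date

KNOWN_BOTS = {"Googlebot", "Bingbot", "AhrefsBot"}   # tiny, illustrative subset

# (device_id, user_agent, country, date) - invented raw log records
log = [
    ("dev-001", "Safari",    "GB", date(2016, 3, 1)),
    ("dev-001", "Safari",    "GB", date(2016, 3, 2)),   # same device, second visit
    ("dev-002", "Chrome",    "GB", date(2016, 3, 1)),
    ("dev-003", "Googlebot", "US", date(2016, 3, 1)),   # known bot - excluded
    ("dev-004", "Chrome",    "FR", date(2016, 3, 1)),   # overseas - excluded
]

def traffic_counts(log, home_market="GB"):
    human = [r for r in log if r[1] not in KNOWN_BOTS and r[2] == home_market]
    page_views = len(human)
    unique_devices = len({r[0] for r in human})
    return page_views, unique_devices

views, uniques = traffic_counts(log)
print(f"{views} page views from {uniques} 'unique' devices")
# Note: these are devices, not people - one person using two devices counts twice.
```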
The anti-virus software on most people’s devices automatically deletes most cookies after a period. This means that audience inflation will occur when ‘new’ users are in fact the same users, but with their cookies deleted. On the other hand, in many households, desktop PCs, laptops and tablets are shared devices, meaning that multiple individuals will be counted only once – as a single device ID.
All this is before the well-known challenges of viewability and digital fraud. Just as a GRP (gross rating point) in traditional media is really an ‘opportunity’ to see an ad message, in digital media a Page ‘View’ simply records the initiation of a page load. The load may not complete, or the ad may appear ‘below the fold’ on the user’s screen – i.e. they would have to scroll down to see it. In short, pure site-centric audience data, such as the number of Page Views or Unique Users for a website, are fraught with limitations.
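The ‘below the fold’ problem comes down to simple geometry, as this hypothetical sketch shows: given where the ad sits on the page and how far the user has scrolled, what share of its pixels is actually on screen? (Industry viewability guidelines, such as the MRC/IAB standard for display ads, require roughly 50% of pixels in view for at least one second.)

```python
# Hypothetical viewability geometry: what fraction of an ad's vertical extent
# falls inside the visible viewport, given the current scroll position?

def visible_share(ad_top, ad_height, scroll_top, viewport_height):
    """Fraction of the ad's height currently inside the visible viewport."""
    ad_bottom = ad_top + ad_height
    view_bottom = scroll_top + viewport_height
    overlap = max(0, min(ad_bottom, view_bottom) - max(ad_top, scroll_top))
    return overlap / ad_height

# An ad 1,200px down the page, viewed on a 900px-tall screen before scrolling:
print(visible_share(ad_top=1200, ad_height=250, scroll_top=0, viewport_height=900))     # 0.0 - below the fold
# After the user scrolls down 1,100px:
print(visible_share(ad_top=1200, ad_height=250, scroll_top=1100, viewport_height=900))  # 1.0 - fully in view
```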
In many countries, site-centric data are supplemented with data from panels of internet users. Devices used by these panellists are metered so that they can log all visits to websites that have been ‘tagged’ by the audience measurement service.
The panels are not usually recruited in a strictly random manner like TV peoplemeter panels or readership survey samples, so data weighting, data ascription and other forms of modelling are generally employed to adjust the data.

The most common use of digital audience data is for targeted buying. Internet users are identified via their (device) browsing behaviour or the search terms they enter, rather than by the broad demographic descriptors common in traditional audience measurement. They are then ‘followed’ wherever they go online.
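To show what ‘data weighting’ can mean in practice, here is a deliberately tiny, hypothetical sketch of rim weighting (raking), one common form of the adjustment mentioned above: weights are nudged iteratively until the weighted panel matches known population targets on each dimension. The panellists and targets below are invented.

```python
# Hypothetical rim weighting (raking) on a five-person toy panel.

panel = [
    {"age": "16-34", "gender": "F"},
    {"age": "16-34", "gender": "M"},
    {"age": "16-34", "gender": "M"},
    {"age": "35+",   "gender": "F"},
    {"age": "35+",   "gender": "M"},
]

# Invented population targets (shares) for each rim variable.
targets = {
    "age":    {"16-34": 0.35, "35+": 0.65},
    "gender": {"F": 0.51, "M": 0.49},
}

weights = [1.0] * len(panel)

for _ in range(50):                              # iterate until the rims settle
    for var, shares in targets.items():
        total = sum(weights)
        for category, share in shares.items():
            members = [i for i, p in enumerate(panel) if p[var] == category]
            current = sum(weights[i] for i in members)
            factor = (share * total) / current   # scale this category to its target
            for i in members:
                weights[i] *= factor

for person, w in zip(panel, weights):
    print(person, round(w, 3))
```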
Barriers to change
The companies that fund audience measurement services know they need change: they need to track their audiences across all the platforms used to access their content.
They know data must be delivered faster and more frequently to feed the programmatic machines (although programmatically traded media space represented less than 3% of total advertising spend in 2015 and will barely reach 5% by 2018, according to Magna Global). But they are also faced in many cases with falling margins and growing budgetary pressures. Meeting all the new needs will cost money, so choices and trade-offs have to be made.
A second challenge is that, when a method is changed, reported audience levels will change too. Shares and rankings will also be disrupted.
Some will benefit and some will lose from this; the losers will be far less enthusiastic about moving ahead than those doing well.
A third area of contention is the disconnect between advertising and content measurement in traditional media. This is less of a concern for television, which can at least measure how many people are present in a room at the exact time an advertisement is airing on their TV set. If addressable TV moves off the starting block, this too will be affected.
Most radio advertisers have long had to rely on people remembering at what times of day they were listening to various stations, and on the assumption that, if someone listened for even a small part of a given quarter hour, they will have heard any ads running in that period.
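That quarter-hour convention is easy to make concrete. In the hypothetical sketch below (the times, listening spells and ad break are all invented), a listener is credited with a whole quarter hour if their listening overlaps it at all, and is therefore assumed to have heard any ad airing within it.

```python
# Hypothetical quarter-hour crediting: overlap any part of a quarter hour and
# you are assumed to have heard every ad broadcast within it.

QH = 15  # quarter-hour length in minutes

def credited_quarter_hours(spells):
    """Quarter-hour indices credited to a listener; spells are (start, end) minutes past midnight."""
    credited = set()
    for start, end in spells:                      # end is exclusive
        credited.update(range(start // QH, (end - 1) // QH + 1))
    return credited

# Listened 07:05-07:20 and 07:50-07:55 (minutes 425-440 and 470-475).
listener = credited_quarter_hours([(425, 440), (470, 475)])

ad_minute = 435                                    # an ad airing at 07:15
print("Assumed to have heard the 07:15 ad:", ad_minute // QH in listener)   # True
```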
Newspaper and magazine advertisers assume that if you read or look through any part of a publication (however long), you will see their ad. Poster advertisers assume you will see the poster if you walk, cycle or drive past it.
Each medium, in other words, defines its ‘audience’ differently. Advertising ‘exposure’ is computed in multiple ways. But these traditional methods suit the media they measure. In today’s and tomorrow’s cross-platform world, this will need to be re-assessed.
Our predictions
Much is uncertain, but below are ten predictions on how we see the evolution of audience measurement over the coming years:
- Panels will remain paramount. Video content producers and distributors need visibility into where their audiences are coming from and how they behave across devices and services. A high quality panel brings diverse streams of TV Big Data together and helps to make sense of it.
- Mixed methods will mature. Single source methods can no longer capture the totality of audience behaviour. Cross-platform audience fragmentation demands a hybrid measurement approach, which is being trialled in several countries. More countries will move in this direction.
- Meters will be modified. Heavily engineered, high cost TV people meters will be displaced by more passive, more affordable fixed and portable meters, based on existing consumer devices.
- Richer radio markets will adopt meters. Concerns that metered measurement produces different audience figures from recall-based methods have delayed this. But it is the only way of showing radio at its most potent.
- Out of Home measurement will be enriched by mobile phone ‘Big Data’. Mobile phone companies know a lot about where and when people travel. Technical, regulatory and budgetary barriers will be overcome as they play a more important role in mapping travel behaviour.
- Readership measurement will become multiplatform, faster and more granular. Print represents 92% of newspaper revenues globally. But growth is on the digital side. All studies will have to measure cross-platform reading. Faster and more detailed reports will become the norm.
- Digital audiences will be reported more frequently and will encompass all devices. New approaches will overcome the limitations of current panel methods.
- Data partnerships will develop. We cannot collect all the data we need from a single source. Audience measurement data will increasingly be integrated with other relevant data sets.
- Maths Men will multiply. The power of modelling, ascription and data integration has been proven. As hybrid methods increase in popularity, the importance of employing sound statistical techniques will only grow (a small illustrative sketch of one such technique follows this list).
- New cross-platform metrics will emerge. Every medium has its own definition of ‘exposure’. Advertisers still need cross-media, as well as cross-platform insight. A cross-platform, multi-media ‘Time Spent’ metric will compete for attention.
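As flagged in the ‘Maths Men’ prediction above, here is a deliberately small, hypothetical sketch of one data-integration technique, statistical matching (a form of ascription): each recipient record – here a TV panellist – is ascribed behaviour from the most similar donor record – here an online panellist – based on variables the two sources share. The records, hook variables and similarity rule are all invented.

```python
# Hypothetical statistical matching (ascription): give each TV panellist the
# web behaviour of the most similar online-panel donor.

recipients = [  # TV panel: common 'hook' variables only
    {"id": "tv-01", "age": 28, "gender": "F", "region": "North"},
    {"id": "tv-02", "age": 61, "gender": "M", "region": "South"},
]

donors = [      # online panel: hook variables plus measured web behaviour
    {"age": 25, "gender": "F", "region": "North", "daily_web_mins": 210},
    {"age": 58, "gender": "M", "region": "South", "daily_web_mins": 95},
    {"age": 34, "gender": "M", "region": "North", "daily_web_mins": 160},
]

def distance(r, d):
    """Smaller is more similar: age gap plus penalties for mismatched hooks."""
    return (abs(r["age"] - d["age"])
            + (0 if r["gender"] == d["gender"] else 50)
            + (0 if r["region"] == d["region"] else 25))

for r in recipients:
    best = min(donors, key=lambda d: distance(r, d))
    r["daily_web_mins"] = best["daily_web_mins"]   # ascribed value
    print(r["id"], "ascribed", r["daily_web_mins"], "web minutes per day")
```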