ARF: Cultural Effectiveness Council

On March 16, the ARF Cultural Effectiveness Council hosted a discussion moderated by Ipsos' Janelle James, exploring bias in the algorithms and models that organizations, particularly those in advertising and marketing, use to make selection or recommendation decisions. Speakers from Publicis Media, Twitter, Wunderman Thompson, Cassandra, and the University of Southern California shed light on why this issue arises, what its effects can be, and how to contend with it.

Algorithmic bias refers to situations in which statistical and econometric models, or a programmed set of instructions, systematically (though usually unintentionally) treat members of some groups differently than others. Such models and algorithms can end up favoring majority populations and ignoring, or even discriminating against, minority segments. As described by Kalinda Ukanwa of USC, this bias has two sources: the data side and the design side. The data on which a model is trained could be unrepresentative, or the way the algorithm is designed could produce biased outcomes of which the algorithm's builders might not be aware.
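The data-side source Ukanwa describes can be illustrated with a minimal sketch. The data, groups, and single-threshold "model" below are all hypothetical and chosen only to show the mechanism: when one group is underrepresented in training data, a model can systematically underserve that group even though nothing in the code mentions groups at all.

```python
# Minimal sketch (hypothetical data) of data-side algorithmic bias:
# a model trained on unrepresentative data underserves the
# underrepresented group, without any group label in the code.
import statistics

# Hypothetical engagement scores. Suppose group_b genuinely engages
# at lower score levels, but appears in only 2 of 10 training rows.
train_group_a = [70, 75, 80, 85, 90, 72, 78, 88]  # well represented
train_group_b = [40, 45]                          # underrepresented

# The "trained" model: a single cutoff at the mean of all training data.
threshold = statistics.mean(train_group_a + train_group_b)

def recommend(score):
    """Recommend (True/False) based on the learned threshold."""
    return score >= threshold

# At decision time, scores typical of group_b are mostly rejected,
# because the threshold mainly reflects group_a's data.
print(threshold)                                   # 72.3
print([recommend(s) for s in [42, 48, 50]])        # all False
```

The point of the sketch is that the bias enters through the composition of the training sample, not through any explicit rule, which is why the panel's emphasis on awareness and measurement matters.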

The panelists agreed that the keys to overcoming algorithmic bias are:

  • Awareness: Wang stressed that everyone needs to understand the effects of algorithms; she has conducted trainings on data ethics at her agency.
  • Measuring outcomes: Ukanwa pointed out, however, that there is no broad agreement on how to measure bias from algorithms.
  • Monitoring: Amanda Bower of Twitter noted that algorithms used by many companies in the media and advertising industries are “living systems” that need to be checked every few months in the same way that people need to see doctors periodically.
  • Updating algorithms: redressing biases that monitoring has brought to light.
  • Reframing an algorithm's objectives: Wang pointed out that algorithms optimized for short-term activation tend to base decisions on customers who look like a company's current customers, and so run the risk of biased outcomes. Algorithms optimized for long-term brand growth are less likely to be subject to such risks.

Watch the full recording now.

Speakers:

  • Janelle James, Senior Vice President, Ipsos UU
