Listen-through Windows and 100% Audibility: Creating a Language for Programmatic Audio
In recent years, music streaming and podcast services have made waves in the digital audio space. Freemium streaming services have introduced a large volume of new audio inventory into the market, allowing advertisers to branch out from radio buys when they want their message to be communicated exclusively via sound.
It is now easier than ever to buy audio ad spots programmatically, whether through Spotify’s new self-serve platform or the podcast aggregator Acast. The ad spots seem familiar because we’re accustomed to linear radio advertising; however, because this is a digital buy, we can hold the media to a higher standard and better measure the effect it has on the consumer.
As audio advertising still plays second fiddle to video and display, it is naturally discussed in the language of those mediums. We use impression trackers even though our primary concern is a sound being played to the consumer, and we attach significance to the CTR on companion banners even though those banners are what their name suggests: merely add-ons. On reflection, the only piece of terminology which holds truer in audio advertising – when compared to display or video – is share of voice.
To demonstrate the unique role programmatic audio can play on our media plans, we need to proactively adopt a language specific to it. First and foremost, we need to recognise the critical importance of the actions users take after listening to an audio ad. This would be audio’s version of view-through attribution – Listen-through Attribution. Audio creatives are often scripted with a call to action, but they rarely ask the user to click through. Our focus should be to isolate the online or offline actions taken by all users exposed to the audio spot, not just those who click on a companion banner.
There are also more ways that we can describe the context of our audio placements. Where possible, it would be great to see performance broken out by Captive Listens. These three scenarios on a streaming app can have very different implications for how receptive the user might be to our audio ad spot:
- Audio is playing. User’s phone screen is off (Assumption: Fully immersed)
- Audio is playing. The streaming app is the user’s active window (Assumption: Highly immersed)
- Audio is playing. The user is browsing other windows with the streaming app running in the background (Assumption: Potentially passive)
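If a platform exposed the underlying device signals, this taxonomy could be captured as a simple reporting dimension. The sketch below is purely illustrative – the signal names (`screen_on`, `app_in_foreground`) are my own assumptions, not any streaming platform’s actual API:

```python
from enum import Enum

class CaptiveListen(Enum):
    """Hypothetical taxonomy for how captive a listener is during an ad spot."""
    FULLY_IMMERSED = "screen_off"         # audio playing, phone screen off
    HIGHLY_IMMERSED = "app_in_focus"      # streaming app is the active window
    POTENTIALLY_PASSIVE = "backgrounded"  # app running behind other windows

def classify_listen(screen_on: bool, app_in_foreground: bool) -> CaptiveListen:
    """Map two assumed device signals onto the three scenarios above."""
    if not screen_on:
        return CaptiveListen.FULLY_IMMERSED
    if app_in_foreground:
        return CaptiveListen.HIGHLY_IMMERSED
    return CaptiveListen.POTENTIALLY_PASSIVE

# e.g. screen is on, but the user is browsing another app
print(classify_listen(screen_on=True, app_in_foreground=False).name)
```

Breaking campaign reporting out along a dimension like this would let buyers compare performance across the three listening contexts rather than treating every play as equal.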
According to the IAB, 74% of US audio ad spend in 2017 was on mobile, making it increasingly important to distinguish between the above scenarios and shed extra light on how our media is being received by the end user.
The language of audio also needs to encompass the negative aspects of digital media. Viewability is an issue that has affected display and video advertising since their inception and, whilst audio spots can’t be ‘viewed’ as such, they are affected by similar issues. As an unashamed Spotify freemium user, I am well practised at muting any audio adverts I find too loud or irritating. Spotify must be able to detect when users toggle their volume to mute, and, in the case of audio advertising, this should count as low Audibility.
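A crude version of that audibility check could look like the following. The event structure and the 50% threshold are my own assumptions for the sake of the sketch – there is no published measurement standard being implemented here:

```python
def audible_share(ad_duration_s: float,
                  mute_intervals: list[tuple[float, float]]) -> float:
    """Fraction of the ad spot during which the device was NOT muted.

    mute_intervals: (start, end) offsets in seconds, assumed to be
    non-overlapping and already clipped to the ad's duration.
    """
    muted = sum(end - start for start, end in mute_intervals)
    return max(0.0, 1.0 - muted / ad_duration_s)

def is_audible(ad_duration_s: float,
               mute_intervals: list[tuple[float, float]]) -> bool:
    # Hypothetical threshold: count the listen as audible if at least
    # half of the spot played while unmuted.
    return audible_share(ad_duration_s, mute_intervals) >= 0.5

# A 30-second spot muted from second 5 to second 25 is only 1/3 audible
print(is_audible(30.0, [(5.0, 25.0)]))  # False
```

Counting muted plays against an Audibility metric, rather than folding them into raw impressions, would give audio its own analogue of the viewability conversation that display and video have already had.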
I was once sold video on the merits of its ‘sound and motion’; however, with the increased penetration of out-stream and Facebook video, a lot of video advertising is in fact consumed with the sound off. This development plays into the hands of audio platforms, as sound becomes the commodity which only these partners can guarantee to the advertiser. Equipping ourselves with the right language to describe programmatic audio will make it easier to justify its importance, leading to multi-sensory media plans which combine a powerful mix of impressions, views and listens.