ADOTAS — Online advertising has faced many challenges in its brief history. Lately a new phenomenon has made success hard to measure: competitive bidding among an advertiser’s own partners. The remedy seems to run against the grain of what advertisers are trying to accomplish by working with multiple programmatic partners. The solution that currently exists to mitigate the problem while still meeting the objective is siloing; it sounds counterintuitive given advertisers’ priorities, but here’s why it works.
It is logical to use test campaigns to identify the best partners for your programmatic buys and audience targeting. The best performers should be part of the mix, and the champion/challenger methodology has always been part of digital media buying. Yet there are cases where the testing method can actually work against the budget and the campaign’s success. How so?
Most campaigns administered on a customer’s behalf have test (or “competitive”) periods during which performance is actively measured across programmatic buying and audience targeting vendors. At the end of the test period, the champion, or a handful of the top-performing solutions, grabs the majority of the ad budget.
In all digital campaigns, and especially in programmatic campaigns, each of the competing vendors can and frequently will reach the same consumer. When partners target the same audience repeatedly, consumers tune out the ad message and may even be turned off to the brand. Overall response drops, and attribution of response is diluted, obscuring the real contributors to performance. As if decreased response and misleading outcomes weren’t enough, the many partners also bid against each other for the same impressions, driving CPMs higher.
For example:
- A large advertiser launched a retention campaign retargeting a list of 1MM consumers.
- The campaign launched with a limited number of partners and achieved moderate success at a CPM of $1.00.
- The advertiser added partners for more reach and competition.
- Performance dropped significantly.
- eCPM surged from $1.00 to $1.60 (the quick cost check below shows what that jump means in dollars).
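As a sanity check on that surge, the short sketch below computes total media cost before and after the eCPM jump. Only the $1.00 and $1.60 figures come from the example above; the impression volume is an assumption chosen purely for illustration (roughly ten impressions per consumer on the 1MM list).

```python
# Hypothetical cost check for the example above. The impression volume is
# an assumed figure; only the $1.00 and $1.60 eCPMs come from the campaign.

def campaign_cost(impressions, ecpm):
    """Total media cost: eCPM is the effective cost per 1,000 impressions."""
    return impressions / 1000 * ecpm

impressions = 10_000_000  # assumption: ~10 impressions per consumer on a 1MM-person list

before = campaign_cost(impressions, 1.00)
after = campaign_cost(impressions, 1.60)

print(f"Cost at $1.00 eCPM: ${before:,.0f}")   # $10,000
print(f"Cost at $1.60 eCPM: ${after:,.0f}")    # $16,000
print(f"Overlap premium:    ${after - before:,.0f} (+{after / before - 1:.0%})")  # $6,000 (+60%)
```

Put another way, a fixed budget buys 37.5 percent fewer impressions at $1.60 than at $1.00, a steep price for having partners compete against one another for the same users.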
All advertisers are anxious about performance, but anxiety can lead the best of us to overreact, with negative consequences. In this case, the client’s desire to test thoroughly led to too many programmatic buying and targeting vendors on the plan, and the impact was diminished campaign results. How could the advertiser have met the goals of competitive selection and campaign performance at the same time?
One tactic that can increase results while decreasing eCPM is “siloing.” The advertiser sets mutually exclusive parameters that reduce or eliminate overlap between partners on the plan, assigning each partner an exclusive subset of geography, audience, inventory, or any number of other targeting factors. When using this tactic, the advertiser must take care not to introduce bias by inadvertently creating segments with different response propensities. Alternatively, the advertiser can place “exclusion pixels” on ads to ensure that the partners on the plan don’t show ads to the same audience members.
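A minimal sketch of the audience-based flavor of siloing follows, assuming user IDs are available for the retargeting list. The partner names and the hash-based assignment are illustrative choices, not anything prescribed here; hashing is used because it splits the list effectively at random, which avoids the response-propensity bias that a geography- or behavior-based split could introduce.

```python
# Illustrative "siloing": deterministically assign each consumer on the
# retargeting list to exactly one partner, so partners never bid against
# each other for the same impression. Partner names are hypothetical.
import hashlib

PARTNERS = ["partner_a", "partner_b", "partner_c"]

def assign_partner(user_id: str) -> str:
    """Map a user ID to a single partner's silo via a stable hash."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return PARTNERS[int(digest, 16) % len(PARTNERS)]

# Split a (tiny, illustrative) retargeting list into mutually exclusive silos.
audience = ["user_001", "user_002", "user_003", "user_004", "user_005"]
silos = {partner: [] for partner in PARTNERS}
for user in audience:
    silos[assign_partner(user)].append(user)

for partner, users in silos.items():
    print(partner, users)
```

Because assignment depends only on the hashed ID, each partner receives a statistically comparable slice of the audience, which keeps the test-period comparison fair.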
Both “siloing” and “exclusion pixels” are especially useful early in the lifecycle of a campaign. Combined, they reach their fullest potential during the test period, helping the advertiser and agency make an informed decision about which partners are best, especially when the target audience is finite and inventory is limited.
By using these techniques, and by taking care not to introduce bias, advertisers can improve the performance of programmatic ad buying and audience targeting throughout the life of the campaign, not just in its early stages.