Last week, I had the opportunity to discuss the critical topic of assessing vendor performance with an audience of marketing leaders at Argyle’s Retail CMO Forum in New York. Given the rapid pace of change in marketing technology, and a landscape already crowded with more than 2,000 solution providers, the need to quickly and confidently assess what’s working has never been greater.

Building a Testing Infrastructure

Fortunately, Performance Advertising is tied closely to the conversion, providing a clear opportunity to rely on scientific testing to understand what’s working and what’s not. When it comes to comparing two Performance Advertising partners (often a challenger and an incumbent), we always encourage our clients to use a Head-to-Head (H2H) test. In fact, we often encourage marketers to treat scientific testing as an internal competency. This helps to ensure an apples-to-apples comparison based on key business metrics and allows marketers to stay flexible in the face of continued industry innovation.

At a basic level, Head-to-Head testing is a relatively simple way to compare the performance of “Vendor A” to “Vendor B” against specific online sales metrics such as total conversions and return on ad spend (ROAS). But, just like today’s vendor landscape itself, not all approaches to testing are created equal. Here’s a quick run-down of the common approaches:

  • Sequential Test (Not recommended) - Vendors are compared via back-to-back campaigns. Results can be distorted by multiple timing-related variables such as promotions, seasonality, and competitor and marketplace dynamics.
  • Simultaneous Test (Not recommended) - Vendors are tested simultaneously but the cookie pool is not split. This avoids the pitfalls of a Sequential Test, but allowing multiple vendors to compete for the same cookies can cause user fatigue, price inflation, and difficulty with sales attribution. This approach also potentially allows for a vendor to optimize to your attribution approach instead of actual sales.
  • Split Cookie Pool Test (Recommended) - The marketer’s cookie pool is subject to a random 50/50 split, and vendors run their (equally funded) campaigns simultaneously. This avoids the pitfalls posed by the Sequential and Simultaneous testing approaches and provides a clear path to evaluating how much sales revenue each vendor drives for its cookies.
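The mechanics of a Split Cookie Pool Test can be sketched in a few lines of code. This is a minimal illustration, not a production implementation: the function names, the hash-based assignment, and the revenue/spend figures are all assumptions chosen for the example. Hashing the cookie ID (rather than flipping a coin per request) keeps each user in the same arm for the life of the test, and per-arm ROAS is then a straightforward ratio.

```python
import hashlib


def assign_vendor(cookie_id: str, salt: str = "h2h-test-1") -> str:
    """Deterministically assign a cookie to Vendor A or Vendor B.

    Hashing the (salted) cookie ID guarantees the same user always
    lands in the same arm, preserving the 50/50 split over time.
    """
    digest = hashlib.sha256((salt + cookie_id).encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"


def roas(revenue: float, spend: float) -> float:
    """Return on ad spend: sales revenue attributed to an arm / its media spend."""
    return revenue / spend


# Illustrative end-of-test comparison for two equally funded arms.
arm_a = {"revenue": 120_000.0, "spend": 50_000.0}
arm_b = {"revenue": 135_000.0, "spend": 50_000.0}
print(roas(**arm_a))  # 2.4
print(roas(**arm_b))  # 2.7
```

Because both arms spend the same budget over the same window against disjoint cookie pools, the ROAS difference can be attributed to the vendors themselves rather than to timing or auction interference.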

I should note that, although the Split Cookie Pool Test is preferable for accuracy, it’s not necessarily the simplest. Therefore, we typically recommend adopting a Tag Management Solution (TMS) as an important step towards developing an internal testing infrastructure.


Cracking the Code

During my talk at Argyle, I also took some time to address the current industry debate surrounding a focus on algorithms and managed services versus self-service platforms. Again, for Performance Advertising, it all boils down to being close to the end conversion and being able to best capitalize on that powerful, real-time feedback loop.

The reality is that every consumer and every ad impression targeted to that consumer has a unique value. Past purchase history, timing, publisher, ad position, and hundreds of other variables all play a role in determining the likelihood and value of a future purchase, and thus the real-time value of an ad. That’s not even taking into account the additional optimization levers posed by things like dynamic ad creative and product recommendation. When you consider all this, it’s clear that manual segmentation is more of a “feel good” solution than a genuine opportunity to gain true visibility and control over a one-to-one marketing platform.
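The point above can be made concrete with a back-of-the-envelope expected-value calculation. Everything here is hypothetical: the function, the probabilities, and the basket values are illustrative assumptions, not real model outputs. The sketch simply shows why two users who fall into the same manual segment can be worth very different bids in real time.

```python
def impression_value(p_conversion: float, expected_basket: float) -> float:
    """Expected value of serving one impression: the probability this
    user converts times the basket value that conversion would generate."""
    return p_conversion * expected_basket


# Two users a manual segment would treat identically (same category interest),
# but whose real-time signals imply very different values:
frequent_buyer = impression_value(p_conversion=0.04, expected_basket=80.0)
window_shopper = impression_value(p_conversion=0.002, expected_basket=80.0)
print(frequent_buyer)   # 3.2
print(window_shopper)   # 0.16
```

An algorithm re-scoring every impression on hundreds of such signals can price these two opportunities twenty-fold apart; a hand-built segment cannot, which is the core of the managed-algorithm argument.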

So how do you gain confidence that your chosen vendor is the best at driving business results? If you’ve read the entirety of this post, you already know the answer. Test regularly and test accurately against the metrics that matter most to your business, not a set of reports driven by someone else’s agenda.


Categories: Performance Marketing