What Did We Miss?
If you are a digital marketer, you may have noticed something over the last 18 months: a consistent, slow drip of audience signal loss in digital campaigns. Increased CPAs and decreased engagement metrics are both symptoms of signal loss. It could be that your campaign needs optimization (think better first-party data use), but the dynamic could also be endemic to the loss of audience identifiers, e.g. Apple's iOS tracking restrictions and third-party cookie deprecation.
That dynamic will continue, and grow, as more of big tech (notably Google) defaults to consumer opt-in tracking and privacy legislation makes tracking even more challenging.
With that, it is no wonder that search interest in probabilistic modeling is up double digits year over year. These are models that attribute media investments without relying on cookies or audience identifiers, then prescribe what to spend on each channel.
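To make that concrete, here is a minimal sketch of the kind of regression that sits underneath many of these models: aggregate weekly spend per channel, an adstock transform to capture carryover effects, and fitted coefficients that feed the spend prescription. The data, column names, and decay rate are all illustrative assumptions, not any vendor's actual implementation.

```python
# Minimal media mix model sketch: regress weekly conversions on channel
# spend, with a geometric adstock transform to capture carryover effects.
# All data, column names, and parameters here are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

def adstock(spend: np.ndarray, decay: float = 0.5) -> np.ndarray:
    """Geometric adstock: this week's effect = spend + decay * last week's effect."""
    out = np.zeros_like(spend, dtype=float)
    for t, s in enumerate(spend):
        out[t] = s + (decay * out[t - 1] if t > 0 else 0.0)
    return out

# Hypothetical weekly aggregates: no cookies or user identifiers required.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "search_spend": rng.uniform(5_000, 20_000, 52),
    "social_spend": rng.uniform(2_000, 10_000, 52),
})
df["conversions"] = (
    0.04 * adstock(df["search_spend"].to_numpy())
    + 0.02 * adstock(df["social_spend"].to_numpy())
    + rng.normal(0, 100, 52)
)

X = np.column_stack([adstock(df[c].to_numpy()) for c in ("search_spend", "social_spend")])
model = LinearRegression().fit(X, df["conversions"])

# The fitted coefficients estimate incremental conversions per adstocked dollar,
# which is what drives the "what to spend on each channel" prescription.
print(dict(zip(("search", "social"), model.coef_.round(4))))
```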
And there are some surprising players getting into the non-cookie, non-identifier modeling space. Namely, Meta and Google.
But here’s the thing – those brands are built for scale, and scale doesn’t come with bespoke customer service. So while you can pick a model “off the shelf,” different models will deliver wildly different results.
Why?
Garbage in = garbage out.
One-size-fits-all modeling is not so reliable. And it makes sense – a model built for a regional auto dealer might not work for an e-retailer. The tough part: if the model is tested and doesn’t correctly prescribe where to invest ad dollars, is the model broken or is the advertising broken?
We believe in a more bespoke approach. Models are only as smart as the data they ingest – and that data can include exogenous variables like weather, distribution, or interest rates.
Bespoke = custom-fit model + exogenous variables
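As a rough illustration, exogenous variables simply enter the same regression as additional columns alongside adstocked spend. Everything below – the variables chosen, the synthetic data, the coefficient scales – is a hypothetical sketch, not a production model.

```python
# Extending the earlier sketch: exogenous variables (weather, macro
# conditions) enter the model as extra regressors. Data here is made up.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
weeks = 52
adstocked_spend = rng.uniform(5_000, 20_000, weeks)  # from the adstock step above
avg_temperature = rng.uniform(30, 90, weeks)         # exogenous: weather
interest_rate = np.linspace(4.5, 5.5, weeks)         # exogenous: macro conditions

X = np.column_stack([adstocked_spend, avg_temperature, interest_rate])
y = (0.03 * adstocked_spend - 20 * avg_temperature - 500 * interest_rate
     + rng.normal(0, 100, weeks))

model = LinearRegression().fit(X, y)
# Separating these effects keeps the spend coefficient from absorbing
# demand swings that advertising didn't actually cause.
print(dict(zip(("spend", "temperature", "rate"), model.coef_.round(3))))
```

This is the practical payoff of a custom-fit model: when demand moves with the weather or the rate environment, a model that ingests those variables won't misattribute the swing to your media spend.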
So beware of the DIY media mix model. Ask a ton of questions and think carefully about what you’re feeding your model.