Ad Tech is starting to evolve again, so it is time to take a look at it, poke at it, and offer a viewpoint on where it is headed. To do that, it helps to sound a touch clinical in framing what’s happening before venturing an opinion.
Performance is about effectiveness and efficiency. Effectiveness is “what works”; efficiency is “doing it for the least amount of money”.
Effectiveness entails identifying, targeting, and converting prospects into sales. To do this efficiently, you need to unify the measurements of these activities and then forecast, optimize plans (for optimal return on investment), and activate opportunities.
The key data are prospects, exposures, duplications, presale steps, and sales.
For Connected TV, we need exposure patterns for prospects to forecast where we are likely to find them. Prospects are population segments. Exposures are tracked in panels and census samples. Since panels can only report broad segments like gender and age, it is necessary to combine information from panels and censuses to discern exposure patterns for prospect segments.
Census & Panel Data
Census samples measure all activities of specific apps or devices. Panel samples relate data across different census samples and, in the best cases, project them to universes.
Panels are used as general estimation tools, or as calibrators for census data. In the latter case, narrow segments of census data can be projected to national estimates.
Often overlapping samples work better together. As an example, a smart TV census sample can gauge the duplication between linear and streaming TV, while a “projectable” panel sample can be used to calibrate that smart TV data using geography and devices to create a converged measurement. Even without granular data from one or both of the sources, you can still leverage the aggregated patterns to inform both duplication and projections.
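As a toy illustration of that calibration step, the sketch below scales exposed-household counts from a smart TV census sample up to universe estimates using panel-derived coverage rates. All regions, counts, and coverage figures are hypothetical; real calibration also adjusts for geography and device mix.

```python
# Census sample: exposed households observed per region (smart TV data).
census_exposed = {"region_a": 12_000, "region_b": 8_500}

# Panel-derived calibration: the estimated share of all TV households
# in each region that the census sample captures.
panel_coverage = {"region_a": 0.25, "region_b": 0.50}

def project_to_universe(exposed, coverage):
    """Scale census counts to universe estimates using panel coverage."""
    return {region: exposed[region] / coverage[region] for region in exposed}

estimates = project_to_universe(census_exposed, panel_coverage)
# region_a: 12,000 / 0.25 = 48,000; region_b: 8,500 / 0.50 = 17,000
```

The same pattern extends to duplication: a panel that sees both linear and streaming for the same homes supplies the overlap rates used to de-duplicate the combined census counts.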
Advertisers can implant pixels on their digital assets (ads, websites, apps, and presale steps) to track and connect exposures to outcomes.
Sellers often offer to implant pixels on a buyer’s digital assets to provide an audit of their performance. If the seller also supplies which programs the ads ran in and when they were viewed, this can be combined with viewing and planning data sources to evaluate which opportunities have better engagement.
To connect pixels deterministically, it is necessary to have a device graph. When an activity happens like “the ad has been successfully served”, the pixel “fires”, announcing the date, time, device, and sometimes location. The device graph says which devices have the same owners. The graph connects multiple activities to specific devices which can then be mapped through similar graphs to specific homes and people.
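A minimal sketch of that deterministic resolution, with hypothetical device and home identifiers, might look like this: pixel fires are keyed by device, and the device graph collapses them into per-home journeys.

```python
# Each pixel fire announces the device it came from and what happened.
pixel_fires = [
    {"device": "ctv-123", "event": "ad_served"},
    {"device": "phone-456", "event": "site_visit"},
    {"device": "phone-789", "event": "purchase"},
]

# The device graph says which devices share the same home.
device_to_home = {"ctv-123": "home-1", "phone-456": "home-1", "phone-789": "home-2"}

def journeys_by_home(fires, graph):
    """Group pixel activity into journeys using the device graph."""
    journeys = {}
    for fire in fires:
        home = graph.get(fire["device"])
        if home is not None:
            journeys.setdefault(home, []).append(fire["event"])
    return journeys

# home-1 shows the path from ad exposure to site visit;
# home-2 shows a purchase with no observed exposure.
```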
If privacy rules do not allow deterministic connections, then proximity matching (or more nuanced probability scores used to create synthetic agents) based on date, time, and generalized location data such as post codes and post code profiles can be an excellent near-term substitute and, since emerging privacy laws will require it, a long-term solution.
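A toy version of such proximity matching is sketched below. The linear decay and the 24-hour window are arbitrary assumptions for illustration; a production model would fit its scoring function from data.

```python
from datetime import datetime

def proximity_score(exposure, outcome, max_gap_hours=24):
    """Score how plausibly two events belong to the same household,
    using only time proximity and generalized location (post code)."""
    if exposure["postcode"] != outcome["postcode"]:
        return 0.0
    gap_hours = abs((outcome["time"] - exposure["time"]).total_seconds()) / 3600
    if gap_hours > max_gap_hours:
        return 0.0
    return 1.0 - gap_hours / max_gap_hours  # linear decay toward the window edge

exposure = {"postcode": "10001", "time": datetime(2024, 1, 1, 20, 0)}
outcome = {"postcode": "10001", "time": datetime(2024, 1, 2, 2, 0)}
# Same post code, 6-hour gap -> score 0.75
```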
If Advertisers are managing their own pixels, they can often connect them all the way to sales. Brand direct advertisers generally know the people who they are selling to. Brand advertisers that sell through intermediary retailers need to leverage other census and panel data sources focused on sales to make this connection. This can likewise be done either deterministically or probabilistically.
A segment represents a specific slice of the universe. In the case of prospects, they are potential purchasers. Segments are generally used to target potential behaviors with the purpose of converting potentials into action.
Advertisers often know past purchasers. They leverage this list of people to find similar people, through “look-alike” models, to target a larger list of potential purchasers. Sometimes advertisers buy characteristics or a list of people who purchase competitive or similar products to expand their scope of potential purchasers.
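A minimal look-alike sketch (the feature vectors and prospect names are hypothetical): score each prospect by cosine similarity to the centroid of known purchasers, then target the highest scorers. Real look-alike models use far richer features and fitted classifiers; this only shows the shape of the idea.

```python
import math

def centroid(vectors):
    """Average the purchaser feature vectors into a single seed profile."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical feature vectors (e.g. normalized age, income, category spend).
purchasers = [[0.9, 0.8, 0.7], [0.8, 0.9, 0.6]]
seed = centroid(purchasers)

prospects = {"p1": [0.85, 0.85, 0.65], "p2": [0.1, 0.2, 0.9]}
scores = {pid: cosine(seed, v) for pid, v in prospects.items()}
# p1 resembles past purchasers far more than p2,
# so p1 would enter the target segment first.
```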
These segments become their targets, and the advertiser then plans how to convert potentials into action through various communication investments.
The original focus of measuring direct response to digital ads conflated media planning and buying. The immediacy of direct response to gauge performance has led media investment decisions to become tactical and in some cases real-time. This makes tremendous sense for sales of known products that have no brand differentiation or message.
For advertisers that want to create brands to enhance pricing and sales opportunities, a messaging plan prior to activating messages is necessary. Branding objectives and hence their effectiveness measures vary.
What’s the best way to plan for effectiveness? First, we need ways to measure it. Exposure: how many times, and how often, has your target segment seen the ad? Content performance: “Was the context right?” (Some messages work better than others in a video about airplane crashes!) And most importantly, does the message resonate with the people you’re trying to reach? Resonance is typically measured with branding surveys and A/B testing of messages.
These effectiveness measures then become inputs to planning future campaigns. For newer brands, these can be based on look-alike brand benchmarks or market test results.
Wielding these inputs to plan forward involves mountains of data processing to figure out a brand’s potential and the least expensive way of getting there. This is where planning pivots from focusing on effectiveness measures to efficiency.
Adtech is evolving out of two connected quagmires: privacy and operations. In “The Rise and Fall of Deterministic Analytics”, we addressed the impact of privacy on data ownership, management, and attribution modeling.
As privacy challenges arise, Advertisers are seeing their response data becoming more integral to internal strategies and operations, with analytics and workflows becoming more automated. This inward pull and customization of adtech is disrupting large vertically integrated, external adtech solutions. Friction is increasing. Large tech companies want to control the adtech ecosystem while advertisers want to integrate the data and tech into their own enterprise resource planning (ERP) systems.
The new players angling for a slice of the adtech pie will be the ERPs and the modular players, who can plug and play and reside in anyone’s cloud.
Oracle exemplifies ERP companies trying to incorporate adtech.
MediaBrain is a modular player that enhances ERPs. Its OptiBrain module ingests effectiveness measures and reports out optimal plans that deliver the most effective potential for a given price or guide decisions by framing the best price solutions for different levels of effectiveness.
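MediaBrain’s actual methods are not described here, so the following is only a generic sketch of the kind of optimization such a module performs: greedily allocating a budget across channels by marginal effectiveness per dollar, under made-up diminishing-returns response curves. Every channel name and curve parameter is hypothetical.

```python
import math

# Diminishing-returns response curves: reach = scale * log(1 + spend / cost).
# (scale, cost) pairs per channel are invented for illustration.
channels = {"ctv": (100.0, 50.0), "social": (80.0, 30.0), "search": (60.0, 20.0)}

def allocate(budget, step=1.0):
    """Greedily give each budget increment to the channel with the
    highest marginal effectiveness for that next dollar."""
    spend = {name: 0.0 for name in channels}
    remaining = budget
    while remaining >= step:
        def marginal(name):
            scale, cost = channels[name]
            s = spend[name]
            return scale * (math.log1p((s + step) / cost) - math.log1p(s / cost))
        best = max(spend, key=marginal)
        spend[best] += step
        remaining -= step
    return spend

plan = allocate(100.0)
# Cheaper channels saturate first; every channel ends up with some budget.
```

Sweeping the budget argument traces out the other framing in the text: the best-price solution for each level of effectiveness.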
Companies that provide modular solutions are natural partners for the ERPs. Modular components enable innovation through enhancements and replacements without disrupting mature ecosystems. Adtech modules allow ERP platforms to evolve with the digital transformation of advertising and communications.
Until recently, deterministic analytics was on the ascendency with its claim of following the journey of actual consumers from their first exposure to the product, in ads and displays, to every exposure along the way to the purchase. Tracking every touchpoint and attributing its value towards getting the sale is the objective of multi-touch attribution (MTA).
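To make the contrast concrete, here is a toy comparison of last-touch attribution against the simplest multi-touch rule (equal credit per touchpoint) over one hypothetical journey. Real MTA models fit the credit weights rather than splitting evenly.

```python
journey = ["ctv_ad", "search_ad", "display_ad"]  # touchpoints before the sale
sale_value = 90.0

def last_touch(touchpoints, value):
    """All credit goes to the final touchpoint before the sale."""
    return {touchpoints[-1]: value}

def linear_mta(touchpoints, value):
    """Equal-credit multi-touch attribution across the whole journey."""
    share = value / len(touchpoints)
    credit = {}
    for t in touchpoints:
        credit[t] = credit.get(t, 0.0) + share
    return credit

# last_touch -> display_ad gets all 90.0
# linear_mta -> each touchpoint gets 30.0
```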
From the digital certainty that someone clicked my ad and bought my product came the religion of determinism. The digerati of the ad world, led by the goliaths Google and Facebook who controlled large swaths of the data, argued that only events that could be directly traced counted. This posed a huge challenge to linear media, whose messages were broadcast to and received by unknown persons. The consequence was that television got squeezed and the rest of linear media (newspapers, magazines, etc.) came tumbling down as ad dollars went online.
Directly tracing what people want is a fast and powerful tool for deciding what to create and sell. Google and Facebook taught this lesson to advertisers, and direct-to-consumer sales took off. Netflix taught the TV industry this lesson, and television is now transforming.
Tracking digital media and messages is well understood and practiced with cookies, pixels, and device IDs. Since advertisers buy exposures or clicks, they only need to track their advertising message. Complexities arise when the advertiser wants to govern the acceptability of the content its messages appear in, but that is another discussion entirely. Determining the journey of exposures of their advertising message requires the advertiser to connect all the touchpoint identifiers—the cookies, pixels, and device IDs—to persons. The challenge is connecting the myriad of cookies and pixels to device IDs and then to persons and transactions.
Ecosystems, such as data management platforms (DMPs) and later customer data platforms (CDPs), rose to connect these data. Google and Facebook built tagging and tracing systems - through their activation platforms - to enable smaller advertisers to gain journey insights without having to finance a DMP or CDP. Google, with its Urchin Tracking Module (UTM) tags, takes this a step further with Google Analytics, letting advertisers track their website traffic and non-Google digital ads too. Advertisers get the journey analytics leading up to the transactions, but not who the people are, making the final connection to the transaction probabilistic.
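UTM tagging itself is simple: standard query parameters appended to a landing-page URL so analytics tools can attribute the visit to a campaign. The URL and campaign names below are made up; the parameter names (utm_source, utm_medium, utm_campaign) are the standard ones.

```python
from urllib.parse import urlencode

def tag_url(base_url, source, medium, campaign):
    """Append standard UTM parameters to a landing-page URL."""
    params = {"utm_source": source, "utm_medium": medium, "utm_campaign": campaign}
    return f"{base_url}?{urlencode(params)}"

url = tag_url("https://example.com/landing", "newsletter", "email", "spring_launch")
# -> https://example.com/landing?utm_source=newsletter&utm_medium=email&utm_campaign=spring_launch
```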
Linear is more complicated. Linear is one-way communication. In some cases there are devices, such as cable boxes or smart TVs, that can report that a message was received; in these cases there is the possibility of connecting linear exposures to journeys. Where no such device exists, it is impossible to connect the exposure to a journey. Beyond this leaking of touchpoints, the challenge of corralling the devices that do report receiving linear is immense, as they are owned and controlled by a myriad of competing cable and smart device companies. To get complete coverage on linear, you have to get all this data and then distill it down to the time and channel your advertisement aired to see who was exposed. Alternatively, you can get a large sample of this exposure data, connect it to your journeys, and then model the touchpoint gaps from the data you could not get. Getting complete coverage is practically impossible. No one gets it. Consequently, everyone models the gaps.
Then there is the challenge of objective. Do you want to measure Direct Response or Brand Resonance? Last touch attributions are always focused on direct response.
Since the determinist puritans came from digital direct marketing, these challenges are waved aside as yet-to-be-transformed parts of media and marketing.
Privacy is starting to take root and is likely to kill determinism for brand marketing. Europe’s General Data Protection Regulation (GDPR) is moving through court cases that are defining its scope. The California Consumer Privacy Act (CCPA) is just starting its court cases.
The basic idea of both laws is the same: personal information is any information, including patterns, that can be identified back to a person or household.
Google and Apple are now starting to champion privacy to their advantage. In Google’s case, they are no longer allowing third-party cookies, making tracing journeys beyond their walls probabilistic. In Apple’s case, they are requiring users to opt in to tracking for each app, making journeys that include Apple devices probabilistic.
Probabilistic methods for associating data and drawing performance insights are being tapped to deal with the reality that not all data can be connected deterministically anymore.
Determinism will continue with last touch attribution. Brands will need to find probabilistic paths to handle the increasing data gaps that privacy brings if they want to maintain or grow their resonance.
MediaBrain expects ad-tech and mar-tech startups to focus on the implications of privacy as they transform the ecosystem infrastructure. We expect artificial intelligence to play a role in filling these gaps. Soon we will hear about “machine learning” all over again. This time it will come back as “privacy preserving machine learning”. Think of learning bots that go from one private data source to another to develop an aggregate view of how these sources look and behave according to different characteristics. The learnings are encoded with one-way math, so that others cannot reverse engineer them to identify any of the private information.
The first generation of learning bots will focus on predicted behaviors of common characteristics, such as age, gender, wealth, and location. The second generation will move to deep learning methods that do not presuppose which characteristics are predictive and instead look to discover which data are explanatory, and of what. These multilayer techniques are currently being used to recognize things in photos. Of course, the sequencing that privacy requires complicates the math. Google is starting to deploy some first-generation privacy preserving learning bots across the Android ecosystem with a technique called Federated Learning. The second generation of privacy preserving deep learning is still in development.
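As a loose illustration of the federated pattern, the toy loop below has each private data source compute a model update locally; only parameters, never raw records, reach the central averager. The two-parameter model, gradients, and learning rate are all hypothetical, and this is a sketch of the general federated-averaging idea, not Google’s implementation.

```python
def local_update(weights, local_gradient, lr=0.1):
    """One gradient step computed inside a private data source.
    Raw records never leave; only the updated parameters do."""
    return [w - lr * g for w, g in zip(weights, local_gradient)]

def federated_average(updates):
    """Central server averages the parameter updates it receives."""
    n = len(updates)
    return [sum(ws) / n for ws in zip(*updates)]

global_weights = [0.5, -0.2]

# Each source computes its own gradient on its private data (values invented).
updates = [
    local_update(global_weights, [0.1, -0.3]),
    local_update(global_weights, [0.3, 0.1]),
]
new_global = federated_average(updates)
# new_global is approximately [0.48, -0.19]; another round would
# redistribute it to the sources and repeat.
```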
Look for these methods to start transforming the ad-tech and mar-tech ecosystems over the next five years.