First, let’s be honest: measuring TV activity and measuring people activity are not one thing. They are two independent measurement systems, although some measurement companies like to marry them to make claims about people watching TVs. The quality of these claims varies; more on that later.
TV activity is measured by monitoring tuning, monitoring digital packets sent to the TV, matching audio coming through the TV, matching images on the TV screen, or some combination of these. A few more weedy sentences, then we will exit this paragraph and get to the crux of the matter. All these methods fall into two types: native measurements, intrinsic to the devices being measured, and invasive measurements, added to the devices for the purpose of measuring. Examples of native measurement sources are cable boxes, app hosting devices, smart TVs, and packet distributors (aka publishers and streamers). The essential point is that native measurements cover all devices while invasive measurements are limited to samples of devices. Invasive measurement entails soliciting cooperation to deploy and operate the incremental measurement tech in homes. While invasive measurements gather more information, they are much more expensive to set up and operate. It costs less to set up and operate native measurements on 50,000,000 devices than invasive measurements on 10,000 devices.
With native measurements, one can glean what is happening on devices like cable boxes, Roku players, and Fire TV sticks, but still not know if the TV, a separate device, is switched on unless it is being monitored too. Beyond the devices, one does not know if anyone is nearby when this device activity is taking place, let alone whether anyone is actually paying attention. Finally, one does not have a holistic view of a person’s attention; we may know what is happening with some of a person’s devices but not all of them. Invasive studies and panels help answer these unknowns. One can then rely on the invasive measurements alone or use them to impute probable unmeasured behavior from the native measurements.
When the world of TV had many people watching the same programs at the same time, that world could be measured with a single invasive sample. Since invasive measurements are expensive, the industry typically supported a single measurement system in each country and deemed it the currency on which audiences are negotiated. In the U.S., Nielsen had the lead position for national measurement.
As the world of TV fragmented, with people watching different programs at different times, the existing invasive samples could no longer reliably report this activity without the help of native measurements. So the Nielsens of the world started coupling their invasive samples with native measurements to report the fragmenting ratings.
The rise of fragmentation has led to the rise of competitive measurement systems using native measurements with modeling to impute viewing. Smaller, less expensive invasive studies and panels allow for experimentation in improving the imputation models, instead of requiring these panels to report the measurements themselves. This in turn will lead to competition on how to put together the native measurements, and hence more competitive pricing. Finally, native measurements of global products, like smart TVs, provide pathways to international measurement and to holistic TV-to-all-things-digital measurement systems too.
All this transformation has led to the breakdown of traditional currencies and is leading towards a fluid future of combining native and invasive measurements. Changing measurement systems will soon burst the next bubble of the planning and buying systems that are integrated with specific measurement systems. A fluid future requires systems that can work with multiple, continuously evolving measurement sources. Best start getting ready today, because these cycles of change are speeding up.
All marketing starts with attention. Certainly marketing effectiveness is much more than just attention. It has soft sells for branding and hard sells for calls to action, along with many other factors such as availability, convenience, and pricing. However, none of this happens without getting attention first. And the most common way to get attention is to buy it. But where?
Attention is available on television, streaming services, search engines, social media, and on assorted websites, apps, and games.
To evaluate attention, one needs to equivalize the attention from the various sources. Since sources gain attention in different ways, the key to equivalizing is to identify the attention components and grade them. This idea of grading was invented in a mid-size, midwestern US city in 1854 to equivalize grain from different sources. Today, the now-merged Chicago Board of Trade and Chicago Mercantile Exchange dominates global commodities trading in more than just grain.
For marketers, attention is about seeing and hearing. So let’s not make this complicated. Let’s start with the five quantifiable components of attention: seconds visible, percent visible, seconds hearable, seconds eyes-on, seconds ears-on. For grading purposes, much like the different qualities of grain, we might want to score the quality of production too. We do not consider context factors, as they are inherently relative to specific brands and their brand strategies.
These components give us the framework for evaluating attention across sources.
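As a sketch of how the framework might work, the five components could be combined into a single grade per exposure. The weights, the production-quality multiplier, and the sample numbers below are all illustrative assumptions; the point is only that once components are quantified, different sources become comparable.

```python
# The five quantifiable components of attention named above.
COMPONENTS = ("seconds_visible", "percent_visible", "seconds_hearable",
              "seconds_eyes_on", "seconds_ears_on")

def attention_grade(measurement, weights, production_quality=1.0):
    """Weighted sum of the attention components, scaled by a
    production-quality score (an assumed 0-to-1-ish multiplier)."""
    base = sum(weights[c] * measurement[c] for c in COMPONENTS)
    return base * production_quality

# One illustrative weighting, valuing eyes-on most heavily.
weights = {"seconds_visible": 1.0, "percent_visible": 0.1,
           "seconds_hearable": 0.5, "seconds_eyes_on": 2.0,
           "seconds_ears_on": 1.0}

# Hypothetical measurements for a TV spot vs. a social video.
tv_spot = {"seconds_visible": 30, "percent_visible": 100,
           "seconds_hearable": 30, "seconds_eyes_on": 12, "seconds_ears_on": 20}
social = {"seconds_visible": 6, "percent_visible": 50,
          "seconds_hearable": 0, "seconds_eyes_on": 2, "seconds_ears_on": 0}

print(attention_grade(tv_spot, weights), attention_grade(social, weights))
```

Like grain grading, the value is not in any one weighting but in the shared, auditable components underneath it; buyers and sellers can argue about weights while agreeing on the measurements.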
Next, to equivalize attention for planning and buy optimization, the next generation of systems needs to handle today's classic measurements, because change always takes time, in addition to these attention components. They need the ability to incorporate context for brand-specific evaluations as well. If you are interested in how planning and buy optimization systems can handle classic measurements in tandem with components to evaluate attention across sources, drop me a note.
Future musings will delve into the gnarly world of measuring attention, classically and for its gradable components.