While a growing number of competitive TV ratings companies are combining native measurements from cable boxes and smart TVs, another group of companies is innovating invasive measurements that enhance the informative value of those native measurements.
TVision is a great example of such an innovator.
Sensing that button pushing is not the best way to measure who is watching the television, TVision decided to swap out the button for a camera, replacing the active requirement of self-identifying button pushing with passive, camera-based recognition of who is in the room and, incrementally, who is looking at the television with their eyes open (what I call watching the television).
TVision started with specialized commercial technologies that required a team of installers in cooperating households to deploy the gear, collect the feeds, and process them into second-by-second reports of both in-the-room exposure and eyes-on TV viewing. The purpose of these operations was twofold: they offered proof of concept and data to start understanding television watching better. Upon achieving these milestones, TVision went on to innovate by rewriting all aspects of its systems, migrating from specialized, hard-to-install gear to generic, easy-to-install gear.
Today, the now inexpensive gear is so easy to self-install that TVision simply mails it to cooperating households. Since the camera imagery never leaves the household, cooperators are comfortable deploying the gear: yes, even in bedrooms! Because setting up and maintaining panels is expensive, TVision sends only one kit to each cooperating household and asks them to install it on their primary television.
TVision’s panel proves that its technology is both useful, with its new ability to passively report second by second whose eyes are on the television, and usable, with its real-life, in-the-field deployment and operations.
Strategically, TVision has accomplished two things of note.
Well, well, well, my “Losing Relevance” post certainly caused a stir.
Yes, measuring who is paying attention to TV has many dimensions, many of which are independent measurements (orthogonal for the math crowd).
The key measurements are: what is playing on the television, who is nearby, and whose eyes are on the screen.
In a perfect world, we would measure everything everywhere. In a second-best world, we would measure everything in a single random sample big enough to report everything. My viewpoints on these: good luck! Measuring a random sample over time in the form of a panel involves extensive upkeep to maintain its randomness relative to the universe it purports to represent. And even that does not convey the whole of this complexity: you need to keep not only your cooperation frame random relative to the universe, but also the participation (for non-passive measurements) random relative to the universe. Many tricks can be used to lower the immense cost of randomly selecting people, such as focusing on households: enumerate them, stratify them, randomly select within strata, then weight for individuals. This is much less expensive than randomly selecting enough people to report out basic demographic behaviors. OK, real random samples are unaffordable, but this solution looks pretty random and representative, so let’s do it! Now how do we get a panel big enough to measure advanced targets, low-rated programs, etc., etc.?
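The household trick above can be sketched in a few lines of code. The frame, strata, and counts below are purely illustrative, not real panel parameters:

```python
import random

# Hypothetical enumerated household frame with two strata for brevity.
# Every name and count here is illustrative, not a real panel parameter.
frame = [
    {"hh_id": i, "stratum": "urban" if i % 3 else "rural"}
    for i in range(10_000)
]

def stratified_sample(frame, per_stratum):
    """Randomly select the same number of households within each stratum."""
    by_stratum = {}
    for hh in frame:
        by_stratum.setdefault(hh["stratum"], []).append(hh)
    sample = []
    for hhs in by_stratum.values():
        sample.extend(random.sample(hhs, per_stratum))
    return sample

def stratum_weights(frame, sample):
    """Weight sampled households back up to the universe, per stratum."""
    universe, sampled = {}, {}
    for hh in frame:
        universe[hh["stratum"]] = universe.get(hh["stratum"], 0) + 1
    for hh in sample:
        sampled[hh["stratum"]] = sampled.get(hh["stratum"], 0) + 1
    return {s: universe[s] / sampled[s] for s in sampled}

sample = stratified_sample(frame, per_stratum=100)
weights = stratum_weights(frame, sample)
```

The stratum weights blow each sampled household back up to the universe, so weighted totals reproduce the frame by construction; the hard, expensive part the paragraph describes is keeping the frame and the cooperation random over time, which no code snippet solves.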
My viewpoint: tackle these key measurements in parts and then stitch them together to report the whole. Yes, there are many challenges; understanding what is on the television is one example. Native measurements (cable boxes, smart TVs, etc.) have their assorted shortcomings. Invasive measurements to determine “nearby” and “eyes on” do too, in addition to the sampling challenges. But if we pivot our thinking to leveraging invasive measurements to calibrate and impute behaviors on native measurements, then we solve for panel sizes and the disadvantages of the various native sources while creating an easier ecosystem to innovate in. Innovating specific measures is much less challenging than innovating the ecosystem as a whole. See “Attention is Everything”, “Into the Weeds”, and “Losing Relevance” for more on the arc of this viewpoint.
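As a minimal sketch of that pivot, with made-up demos and numbers: a small invasive panel yields eyes-on-per-tuned-second calibration factors, which are then imputed onto a much larger native tuning feed.

```python
# Hypothetical panel rows: native tuning plus camera-verified eyes-on time.
panel = [
    {"demo": "18-34", "tuned_sec": 600, "eyes_on_sec": 240},
    {"demo": "18-34", "tuned_sec": 300, "eyes_on_sec": 180},
    {"demo": "35-54", "tuned_sec": 900, "eyes_on_sec": 270},
    {"demo": "35-54", "tuned_sec": 600, "eyes_on_sec": 300},
]

def calibration_rates(panel):
    """Eyes-on seconds per tuned second, by demo: the calibration factors."""
    tuned, eyes = {}, {}
    for row in panel:
        tuned[row["demo"]] = tuned.get(row["demo"], 0) + row["tuned_sec"]
        eyes[row["demo"]] = eyes.get(row["demo"], 0) + row["eyes_on_sec"]
    return {demo: eyes[demo] / tuned[demo] for demo in tuned}

# A huge native feed (cable boxes, smart TVs): tuning only, no people data.
native_feed = [
    {"demo": "18-34", "tuned_sec": 1_000_000},
    {"demo": "35-54", "tuned_sec": 2_000_000},
]

rates = calibration_rates(panel)
imputed_eyes_on = {
    row["demo"]: row["tuned_sec"] * rates[row["demo"]] for row in native_feed
}
```

The panel only has to be large enough to pin down the rates, while the census-scale native feed supplies the reach, which is exactly why this stitching sidesteps the panel-size problem.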
Originally formed at the prompting of the US Congress to review and accredit the Nielsen Company’s ratings service, the Media Rating Council (MRC) needs to evolve on TV ratings or lose its relevance.
For the longest time, the MRC has stipulated that TV ratings have to be measured from either census or random samples. This stipulation is the root of Nielsen’s TV ratings problems.
Big picture: affordable random samples are not large enough to successfully report ratings of advanced targets and/or low-viewing programs. This became apparent three decades ago, and the evolving use of first cable box feeds, and later ACR (automatic content recognition) feeds from smart TVs, has now grown into robust competitive TV ratings sources and services from not only Comscore but also iSpot, VideoAmp, Samba, and Inscape, among others. In all these new cases (originally innovated by Comscore), competitors are weighting collections of native measurements from cable box and smart TV feeds to project local and national TV ratings.
These very large samples of native measurements allow for easy reporting of advanced targets and/or low-viewing programs.
During this period of evolution, Nielsen has stuck to its knitting of reporting from random samples. Even when Nielsen incorporates native measurements from cable boxes or smart TVs, it only includes them as direct measurements for those households and does not leverage them to project those viewing patterns to local or national TV ratings. Consequently, Nielsen’s very large native measurements enhance but do not yet solve its reporting challenges.
Shortly after leaving Nielsen many years ago, I started to publicly advise investors that Nielsen’s coveted random samples for measuring TV Ratings would devolve into a calibrating panel to be applied to native measurements. If you are interested in more on this, please read “Into the Weeds of TV Currencies”.
I wonder if Nielsen will finally make this pivot away from the MRC stipulations (of reporting from either census or random samples only) to the lower-cost solution of using small invasive measurement panels, like its random samples, to calibrate much larger native measurements and compete with the new generation of competitors.
First, let’s be honest: measuring TV activity and measuring people activity are not one thing. They are two independent measurement systems, although some measurement companies like to marry them to make claims about people watching TVs. The quality of these claims varies; more on that later.
TV activity is measured by monitoring tuning, monitoring digital packets sent to the TV, matching audio coming through the TV, matching images on the TV screen, or some combination of these. A few more weedy sentences, then we will exit this paragraph and get to the crux of the matter. All these methods fall into two types: native measurements, intrinsic to the devices being measured, and invasive measurements, added to the devices being measured for the purpose of measuring. Examples of native measurements are cable boxes, app hosting devices, smart TVs, and packet distributors (aka publishers and streamers). The essential point is that native measurements are of all devices while invasive measurements are limited to samplings of devices. Invasive measurement entails soliciting cooperation to deploy in homes and operate the incremental tech to take the measurements. While invasive measurements gather more information, they are much more expensive to set up and operate. It costs less to set up and operate native measurements on 50,000,000 devices than invasive measurements on 10,000 devices.
With native measurements, one can glean what is happening on devices like cable boxes, Roku players, and Fire TV sticks, but still not know if the TV, a separate device, is switched on unless it is being monitored too. Beyond the devices, one does not know if anyone is nearby when this device activity is taking place, let alone know if anyone is actually paying attention. Finally, one does not have the holistic view of that person’s attention; we may know what is happening with some of a person’s devices but not all of their devices. Invasive studies and panels help answer the unknowns. One can then rely on the invasive measurements alone or use them to impute probable unmeasured behavior from the native measurements.
When the world of TV had many people watching the same programs at the same time, that world could be measured with a single invasive sample. Since invasive measurements are expensive, the industry typically supported a single measurement system in each country and deemed it the currency on which audiences are negotiated. In the U.S., Nielsen had the lead position for national measurement.
As the world of TV fragmented, with people watching different programs at different times, the existing invasive samples could not reliably report this activity without the help of native measurements. So the Nielsens of the world started coupling their invasive samples with native measurements to report the fragmenting ratings.
The rise of fragmentation has led to the rise of competitive measurement systems using native measurements with modeling to impute viewing. Smaller, less expensive invasive studies and panels allow for experimentation in improving the imputation models, instead of requiring those panels to report the measurements. This in turn will lead to competition on how to put together the native measurements, and hence more competitive pricing. Finally, native measurements of global products, like smart TVs, provide pathways to international measurement systems and to holistic TV-to-all-things-digital measurement systems too.
All this transformation has led to the breakdown of traditional currencies and is leading towards a fluid future of combining native and invasive measurements. Changing measurement systems will soon burst the next bubble of the planning and buying systems that are integrated with specific measurement systems. A fluid future requires systems that can work with multiple, continuously evolving measurement sources. Best start getting ready today, because these cycles of change are speeding up.
All marketing starts with attention. Certainly marketing effectiveness is much more than just attention. It has soft sells for branding and hard sells for calls to action, along with many other factors such as availability, convenience, and pricing. However, none of this happens without getting attention first. And the most common way to get attention is to buy it. But where?
Attention is available on television, streaming services, search engines, social media, and on assorted websites, apps, and games.
To evaluate attention, one needs to equivalize the attention from the various sources. Since sources gain attention in different ways, the key to equivalizing is to identify the attention components and grade them. This idea of grading was invented in a mid-size, midwestern US city in 1854 to equivalize grain from different sources. Today, the now-merged Chicago Board of Trade and Chicago Mercantile Exchange (CME Group) dominates global commodities trading in more than just grain.
For marketers, attention is about seeing and hearing. So let’s not make this complicated. Let’s start with the five quantifiable components of attention: seconds visible, percent visible, seconds hearable, seconds eyes-on, and seconds ears-on. For grading purposes, much like the different qualities of grain, we might want to score the quality of production too. We do not consider context factors, as they are inherently relative to specific brands and their brand strategies.
These components give us the framework for evaluating attention across sources.
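As a toy illustration of that framework, the five components could be normalized and blended into a single grade. The weights below are hypothetical assumptions of mine, not an industry standard:

```python
# The five quantifiable components; the grade weights are illustrative
# assumptions, not an established grading scheme.
WEIGHTS = {
    "seconds_visible": 0.1,
    "percent_visible": 0.1,
    "seconds_hearable": 0.1,
    "seconds_eyes_on": 0.5,
    "seconds_ears_on": 0.2,
}

def attention_grade(ad):
    """Blend the five components into one comparable grade in [0, 1].
    Time-based components are normalized by the ad's full duration."""
    dur = ad["duration_sec"]
    normalized = {
        "seconds_visible": ad["seconds_visible"] / dur,
        "percent_visible": ad["percent_visible"],
        "seconds_hearable": ad["seconds_hearable"] / dur,
        "seconds_eyes_on": ad["seconds_eyes_on"] / dur,
        "seconds_ears_on": ad["seconds_ears_on"] / dur,
    }
    return sum(WEIGHTS[k] * normalized[k] for k in WEIGHTS)

# A hypothetical 30-second TV spot: fully visible and hearable,
# with 18 seconds of eyes-on and 24 seconds of ears-on attention.
tv_spot = {
    "duration_sec": 30,
    "seconds_visible": 30,
    "percent_visible": 1.0,
    "seconds_hearable": 30,
    "seconds_eyes_on": 18,
    "seconds_ears_on": 24,
}
grade = attention_grade(tv_spot)
```

A muted social video and a full-screen TV spot then land on one scale, which is the whole point of equivalizing: the grade travels across sources the way a grain grade travels across elevators.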
Next, to equivalize attention for planning and buy optimization, the next generation of systems needs to handle today’s classic measurements (because change always takes time) in addition to these attention components. These systems need the ability to incorporate context for brand-specific evaluations as well. If you are interested in how planning and buy optimization systems can handle classic measurements in tandem with attention components to evaluate attention across sources, drop me a note.
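For instance, a planning system might carry a classic measurement (impressions and cost) alongside an attention grade and rank inventory by attention-adjusted cost. Every source and number below is a made-up illustration:

```python
# Made-up inventory lines carrying a classic measurement (impressions, cost)
# next to a hypothetical attention grade per source.
inventory = [
    {"source": "linear TV", "impressions": 1_000_000, "cost": 20_000, "grade": 0.76},
    {"source": "streaming", "impressions": 400_000, "cost": 12_000, "grade": 0.55},
    {"source": "social video", "impressions": 2_000_000, "cost": 15_000, "grade": 0.20},
]

def attention_adjusted_cpm(line):
    """Cost per 1,000 attention-weighted impressions (lower is better)."""
    weighted_impressions = line["impressions"] * line["grade"]
    return line["cost"] / (weighted_impressions / 1_000)

# Rank buys by attention-adjusted cost while the classic impressions and
# cost figures remain available for legacy reporting alongside.
ranked = sorted(inventory, key=attention_adjusted_cpm)
```

Keeping the raw impressions and cost in each line is what lets the same system report the classic currency and the attention view side by side during the transition.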
Future musings will delve into the gnarly world of measuring attention, classically and for its gradable components.