Nielsen and Paramount Global finally reached an agreement on a new multi-year deal, which, not surprisingly, coincided with CBS’s Grammy Awards telecast. So this seems like a good time to provide a brief history of TV audience measurement, and discuss why it is so difficult for anyone to compete with Nielsen to become marketplace currency (although there might now be minor – and I stress minor – cracks in Nielsen’s armor).
This report will cover several topics, including:
• How and why Nielsen is an industry-mandated monopoly.
• Are legacy research companies nimble enough to keep up with rapid and continual change?
• Why virtually no one really wants more accurate TV/video audience measurement.
• Can samples still measure the TV universe?
• When Nielsen was forced to improve its measurement.
• When research started taking a back seat to marketing.
• Shifting alliances: why TV research is not what it used to be.
• Measuring live vs. DVR viewing – implications for streaming ads.
• Why alternatives to Nielsen have traditionally failed to sustain industry support.
• Why industry committees traditionally don’t accomplish much.
• Does big data = more accurate data?
• How to determine the accuracy of Nielsen’s (or anyone else’s) ratings and actually improve audience measurement (and why it won’t happen).
Yes, Nielsen is a Monopoly, Albeit an Industry-Mandated One
When it comes to national television measurement used as currency for the buying and selling of commercial time, Nielsen is an industry-mandated monopoly. From the 1980s through the early 2020s, potential competitors periodically emerged, but most failed to sustain enough media industry support to survive – not necessarily because Nielsen did anything specific to impede them, but because advertisers and networks have been unwilling to pay for multiple sources providing similar national TV audience data.
While there have been many calls for change over the years, the industry’s unwillingness to support any real competition to Nielsen, along with media conglomerates and media agencies continuing to sign long-term Nielsen contracts, makes the status quo hard to budge.
I’ve dealt with Nielsen contracts on both the buy and sell sides, and we always opted for the longest-term contract Nielsen offered. The simple reason was that a five- or seven-year contract came with a significantly lower annual cost than a three-year or one-year deal. And since no one expected any real competition to emerge, why not take the least expensive option? Until the industry decides that it will only sign one- or two-year deals, any thought of a competitor actually replacing Nielsen as industry currency is imaginary.
After Years of Stagnation, Continual and Rapid Change to the Video Landscape
Until the early 2000s, the pace of change in how and where viewers received content was relatively slow, enabling Nielsen to trudge along with serviceable audience measurement. Most people across the country had access to the same programming at the same time on roughly the same number of channels on the same platform. Then, after years of the status quo, technological advancement, multiple viewing devices, multiple program distribution platforms, and multiple on-demand viewing options resulted in quick and continual change to consumer viewing habits – but how those viewing habits are measured has not nearly kept pace.
Here’s a brief look at how the video media landscape has evolved over the past four decades:
During the late 1980s, VCRs started to become widely available, and watching pre-recorded videocassettes became the first major new use of the television set since videogames (the Magnavox Odyssey, the first home videogame to plug into a TV, debuted in 1972). This was the first time a new device impacted network programming decisions. The broadcast networks gave up on original scripted series on Saturday nights (which had become the biggest movie rental night of the week), and eventually cut back on airing made-for-TV and theatrical movies as well.
The home VCR became the fastest growing electronic device since television itself, going from 1% penetration in 1980, to 66% in 1990, to 90% in 2000. It then started to decline when DVDs and Blu-ray became available. In 2001 DVD players outsold VCRs for the first time, and by 2007 they were in 80% of American households.
VCRs promised but never realized the idea of people programming their own viewing schedules. Their major use continued to be renting or buying movies. The average amount of actual time-shifted viewing in primetime never rose above 5%.
VCRs caused real problems for audience measurement, however, marking the first time Nielsen had to admit it couldn’t measure an element of television viewing. Never able to measure VCR playback among people (although it could measure recording and playback among households), Nielsen made the controversial decision to ascribe household playback to demos. While this became a big deal among advertisers and agencies, the networks didn’t see it as a problem (since it artificially inflated their ratings). Nielsen eventually just said sorry, nothing we can do, and the industry moved on.
In 1990, the average home could still only receive 33 channels. Viewing habits remained relatively stable, and aside from VCR usage, there weren’t many major challenges to audience measurement. During the early-to-mid 1990s, as cable and satellite television expanded, and phone companies were allowed into the TV business, the number of channels available to the average home began to rise sharply.
In the late 1990s, DVRs became a reality, but growth was considerably slower than it had been for VCRs. It would take another decade before DVRs were even in 20 percent of the country and needed to be addressed in developing television samples for audience measurement. DVR penetration started to grow more quickly when cable and satellite systems started integrating them into their set-top boxes, and people no longer needed to buy separate devices. But even then, DVRs topped out at slightly under 55% of U.S. TV households.
Throughout the 1990s, cable continued to siphon off broadcast viewers, and the number of channels available to the average home continued to rise. But the TV set was still the only viewing platform. People couldn’t watch reruns of TV shows or series they hadn’t seen unless a network aired the show again or they bought previous seasons on DVD.
Prior to 2005, changes to the media landscape – what was available to view and how it could be viewed – were gradual. Cable had just recently started airing original scripted series. Less than 20% of the country had HDTV. The slow pace of change made predicting consumer television viewing habits relatively simple, and slow-to-change research companies didn’t have to pivot too quickly, nor innovate too often.
Since then, rapid change has been the norm – from the founding of YouTube and the introduction of Apple’s first video iPod in 2005 (and the revolutionary step of ABC making a deal with Apple allowing people to access five TV shows from iTunes, including the popular Desperate Housewives and Lost, for $1.99 each), to social media led by Facebook and Twitter in 2006, to the first iPhone in 2007, to the launch of Netflix and Hulu in 2008/2009, to the first iPad in 2010. HDTV became the standard in 2009, rising from 23% of U.S. homes in 2008 to 75% by 2013. Facebook (now Meta) launched Instagram in 2010. TikTok was released internationally in 2017. By 2005, the average home could receive roughly 100 channels, a number that has continued to rise, hitting more than 200 today.
Netflix’s first original scripted hit (House of Cards) debuted in 2013. Prime Video released its first original scripted series the same year (Alpha House and Betas). Hulu debuted its first successful original scripted series in 2013 as well, with the teen drama East Los High. Other subscription video on demand services followed. CBS All Access (now Paramount+) launched in 2014, with its first original scripted programming, Star Trek: Discovery and The Good Fight, debuting in 2017. In late 2019, Disney+ and Apple TV+ joined the mix, followed in mid-2020 by HBO Max (now Max) and Peacock.
Despite recently cutting back on spending, the eight major streaming services are now spending at least as much on original content each year as the five broadcast networks combined. All of them now have advertising tiers. There are also numerous smaller and niche streaming services available to viewers. Today the average home subscribes to four streaming platforms. Three-quarters of Americans have at least one, with almost 10% having six.
Can Samples Even Measure the TV Universe Anymore?
It’s hard to imagine now, but before people meters debuted in 1987, the national Nielsen television sample was only 1,700 homes (which had grown from just 1,200 a decade earlier), and demographic data was only available 36 weeks of the year. In a three-network, one-screen, single-platform, 15-channel, one-or-two-TV-home, live-viewing world, this was acceptable. People generally had the same access to the same channels and devices, and as cohesive demographic groups with similar viewing patterns aged, their media habits were largely predictable and relatively easy to measure.
The whole purpose of a TV sample is to be representative of the total U.S., with the idea that you and your demographic cohorts have similar viewing habits. Demographic cohorts are based on characteristics that Nielsen has determined impact viewing behavior – such as age, sex, race, ethnicity, presence of children, household income, education, language spoken at home, etc. Also important is whether or not you have cable/satellite/telco (i.e., pay TV services) or DVRs.
In today’s video environment, with hundreds of channels available to the average home on multiple and mobile devices, with numerous streaming platforms and on-demand viewing sources, people in the same demographic segments don’t watch the same things at the same time anymore.
In the past, the broad platform mattered more than the individual provider. In other words, whether or not you have cable affects your viewing choices, but which cable system you have does not – so the sample does not need to be representative of every cable system. Nor is it important to know what type of DVR you have, since that will not substantially impact your viewing behavior or how much you fast-forward through commercials.
Whether or not you subscribe to streaming services, however, is not at all the same. If you only have Netflix, the programming available to you, and your actual viewing habits, will be dramatically different than if you only subscribe to Paramount+. Likewise, if you bundle Disney+, Hulu, and ESPN+, and add Max, your viewing habits will be different than if you have Netflix, Prime Video, Peacock, and Apple TV+. And there won’t just be differences in what you stream, there will be significant differences in your linear TV viewing as well. The days when all Nielsen needed to worry about were broadcast vs. cable and DVR penetration are gone forever.
But even if you are able to put together a sample that keeps up with the shifting combinations of streaming service usage, it won’t necessarily provide significantly more accurate data. This is primarily because as more and more programming becomes available on more and more platforms, demographic cohorts no longer have similar viewing patterns. That means samples can no longer accurately measure TV viewing across the country.
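To put rough numbers on the problem, here’s a quick back-of-the-envelope sketch in Python. The counts are invented for illustration (they are not Nielsen’s actual sample parameters), but they show why stratifying a panel by streaming-service combinations gets out of hand so quickly:

```python
# Illustrative only: the counts below are assumptions, not Nielsen parameters.
services = 8                 # the eight major streaming services noted above
combos = 2 ** services       # every subscribe/don't-subscribe combination
print(combos)                # 256 streaming "strata" before anything else

# Cross those with a few traditional sample controls...
demo_cells = 2 * 6 * 4       # e.g., sex x age bracket x income bracket
print(combos * demo_cells)   # 12,288 cells

# ...and even a hypothetical 40,000-home panel averages ~3 homes per cell --
# far too few to produce stable ratings for any one combination.
print(40_000 / (combos * demo_cells))  # ~3.3
```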
Does Anyone Really Want More Accurate TV Audience Measurement?
The short answer is no. The long answer is more complicated and has more words, but still winds up being no. Claims to the contrary notwithstanding, neither buyers nor sellers have traditionally wanted more accurate audience measurement (although advertisers certainly do). Sellers want higher ratings so they can charge more for ads; buyers want stability so they can project future performance. More accurate audience data is conducive to neither.
Whenever television audience measurement is actually improved, the higher rated networks tend to decline and the lower rated tend to increase. So unless marketplace forces compel changes to how audiences are counted, inertia sets in and the biggest players resist any real improvements.
Sellers want higher ratings – you never hear any network complain that Nielsen is overcounting its audience (as Nielsen did for most of its history). But every time reported television usage or network ratings have declined suddenly, or for no reason anyone could pinpoint, the networks have blamed Nielsen and called for change. There’s a reason some networks pushed so hard for viewing in bars and restaurants to be included in currency measurement. Anyone who knows anything about viewing habits or audience measurement knows how absurd it is to think more than a small fraction of people in bars or restaurants watch (or can even hear) commercials. It’s all about higher reported ratings. When Nielsen was unable to report VCR playback, on the other hand, the networks were happy to continue using the artificially inflated numbers Nielsen reported (for about 20 years) until DVRs overrode the need to properly measure VCRs.
Any new measurement system that did not provide higher broadcast network ratings than Nielsen was unable to sustain major media support. Today, however, with the major media companies owning linear broadcast and cable networks as well as streaming platforms, losing broadcast viewers but gaining more viewers elsewhere may be more palatable.
Buyers want stability – accuracy is fine, but stability is required to be able to project future performance. Anyone who was in a media agency research group when the industry switched over to buying and selling based on C3 remembers the chaos and exponential workload increase that ensued during the upfront and for much of that year. There’s a reason that what was supposed to be a “one-year band-aid” is still being used 15 years later. Moving to more accurate commercial measurement would not only require additional staff, it would also result in significantly less stable and projectable ratings.
Back in the days when there were relatively few viewing options, and most networks started and ended their seasons at the same time, there actually was a degree of stability to people’s viewing habits. Not so today. Traditional TV seasons, with new series debuting in the fall and spring, might still exist for the broadcast networks, but not for other platforms (or for viewers). Streaming services drop new seasons and new shows throughout the year (as do cable networks), whenever they are ready, and not always at the same time as previous seasons.
Some streamers drop entire seasons at once, some release two or three episodes before switching to weekly, and others use the broadcast network once-a-week model. When a new season of Stranger Things or Squid Game, or a new show like The Agency or Landman premieres, for example, my viewing habits immediately change. I’ll binge a few episodes a night, and DVR my regular linear shows (which I’ll binge two or three weeks later). My viewing is not nearly as stable or predictable as it was 10 years ago, and much of it would not be captured by Nielsen’s C3 (or C7) measurement.
The main problem, of course, is that neither sellers nor buyers have any incentive to actually figure out the real shortcomings of video audience measurement. Sellers certainly don’t want to risk finding out how much smaller the actual audience for their commercials is than what Nielsen currently reports. Buyers don’t want to have to tell their advertising clients that the data they’ve been using for decades is fundamentally flawed, and that they waited until now to do anything about something that could have been handled years ago.
And both buyers and sellers realize that once you pinpoint specific weaknesses in research methodology, you have to actually fix them – which will be time-consuming and expensive. Media agencies do not want to incur dramatically increased costs for improved ratings systems or multiple currencies that require significant spending for new or custom databases (and staff) but do nothing to increase their revenue.
When Nielsen Has Been Forced to Improve TV Audience Measurement
Not counting increasing sample sizes, there have been only two instances, more than 20 years apart, when Nielsen made real improvements to its audience measurement and how it reported television viewing – in 1987, when the switch was made from the household meter and persons diary system to national people meters, and in 2009, when DVRs and time-shifted viewing led to the shift from program ratings to average commercial-minute ratings (C3).
In 1985, when AGB, which had been using people meters in the UK, tried to introduce them into the U.S. market, it represented the first real threat to Nielsen’s dominance. Even Nielsen had to admit that this new electronic measurement was far superior to its own meter/diary system, which contained considerable human bias.
The highest-rated networks and programs, as well as shows airing multiple times per week (daytime, early morning, evening news, late night, syndication), benefited when people manually filled out diaries at the end of each week. Recall favors the most popular and longer-duration programming. For example, people often didn’t remember that they watched MTV for 20 minutes three or four days ago, and they also tended to think they watched their favorite program this week, even if they missed it. Facing the prospect of losing clients for the first time, Nielsen was forced to develop its own people meter.
Nielsen started including DVR households in its national people meter sample in January 2006, when the devices were in nearly 20% of homes, but did not report DVR usage as part of its regular ratings. By 2007, commercial avoidance through DVR fast-forwarding had become one of the biggest and most urgent concerns among advertisers. Unlike VCRs, the DVR allowed viewers to record television shows without having to use videotape, and both recording and playback were significantly more convenient and user-friendly. As a result, there was a substantial amount of time-shifted viewing. Also unlike VCRs, Nielsen was able to measure playback activity among people.
Measuring DVR playback and commercial ratings became an industry priority. Virtually every study and survey ever done indicated that anywhere between 70% and 80% of VCR or DVR playback involved fast-forwarding through commercials. But there was no actual Nielsen data to prove it. Nielsen was forced to report DVR playback, which led to C3 (average commercial minute ratings up to three days after the original broadcast).
Commercial Pod studies paved the way for C3. As the industry started to publicly grapple with how to deal with delayed DVR viewing and fast-forwarding, some media agency analyses were already underway. My Audience Analysis group at MAGNA Global conducted extensive commercial pod studies over four consecutive years (2005-2008), examining the dynamics of commercial pod versus program performance for broadcast, cable, and syndication.
We recorded hundreds of television shows across seasons, dayparts, and genres. We separated program segments from commercial pods for each telecast, and matched them up with Nielsen’s NPower minute-by-minute ratings. We shared our reports with advertisers, networks, Nielsen, and the press. These studies gave the industry confidence that commercial-minute ratings were consistent enough from year to year to use as marketplace currency – which led to Nielsen implementing C3 in 2009.
When Research Started to Take a Back Seat to Marketing
How these two fundamental changes (people meters and C3) were handled shows how much the industry’s priorities shifted over time – the rapid change in the volume of and access to video content and multi-media devices, as well as delayed viewing options and the resulting changes in consumer behavior, caused good research to take a back seat to rushing new products to market.
There was a time when any changes to audience measurement needed to undergo extensive vetting by both buyers and sellers before the industry would consider implementing them. People meters were evaluated by committees of researchers from agencies and broadcast and cable networks for nearly three years – including a full year of side-by-side data to compare it with the old meter/diary method – before it was deemed ready to use as marketplace currency.
Nielsen switched to C3, on the other hand, with virtually no research into the methodology or whether it actually accounted for all fast-forwarding. While an extensive amount of analysis went into determining whether “commercial-minute” ratings were viable to use as marketplace currency, virtually no research was done by Nielsen to determine the proper methodology for calculating them. Speed to market was considered more important than evaluating the measurement methodology. The MAGNA Global Commercial Pod Studies defined commercial minutes as minutes containing at least 30 seconds of commercials, and recommended measuring commercial pods. Nielsen decided to define commercial minutes as those containing even one second of commercial time – and then weight averaging those minutes based on their commercial duration to arrive at the “average commercial-minute” rating.
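To make the difference between the two definitions concrete, here’s a minimal sketch of the MAGNA-style pod average versus the duration-weighted “average commercial-minute” approach described above. The ratings and ad durations are invented, and this is an illustration of the weighting logic as described, not Nielsen’s actual code:

```python
# A hypothetical sketch of the two "commercial minute" definitions described
# above. Ratings and ad durations are invented; this is not Nielsen's code.

def pod_study_average(minutes, min_ad_seconds=30):
    """MAGNA-style: only minutes with >= 30s of commercials qualify; take a
    simple average of their ratings."""
    qualifying = [m for m in minutes if m["ad_seconds"] >= min_ad_seconds]
    return sum(m["rating"] for m in qualifying) / len(qualifying)

def c3_style_average(minutes):
    """C3-style: any minute with >= 1s of commercials qualifies, and each
    minute's rating is weighted by its commercial duration."""
    qualifying = [m for m in minutes if m["ad_seconds"] >= 1]
    total_ad_seconds = sum(m["ad_seconds"] for m in qualifying)
    weighted = sum(m["rating"] * m["ad_seconds"] for m in qualifying)
    return weighted / total_ad_seconds

# Toy telecast: three clock minutes with per-minute ratings and ad seconds.
telecast = [
    {"rating": 2.0, "ad_seconds": 60},  # a full commercial minute
    {"rating": 2.4, "ad_seconds": 30},  # half commercials, half program
    {"rating": 2.8, "ad_seconds": 5},   # mostly program, 5s of ads
]

print(pod_study_average(telecast))  # 2.2   (averages the first two minutes)
print(c3_style_average(telecast))   # ~2.17 (all three, weighted by ad time)
```

Note how the one-second threshold pulls mostly-program minutes into the calculation, which the duration weighting then has to compensate for.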
In 2006 and 2007, Nielsen held a series of industry-wide meetings to figure out how to report DVR playback and commercial-minute ratings. The biggest debate at the time was not on methodology, but rather on what to report. Buyers (advertisers and their agencies) wanted only live viewing to be used as marketplace currency. Sellers (broadcast, cable, syndication) wanted a full seven days of delayed DVR viewing included in the reported ratings.
The final meeting on the subject, in which the C3 compromise was established, was the first time that Nielsen had invited top research, buying, and sales executives in the same room to discuss audience measurement methodology and related business issues. This was also the first time that business concerns overrode the necessity for validating research methodology. And most significantly, it was the moment that Nielsen shifted away from being a pure research company – at the following year’s national client meeting, Nielsen actually proclaimed that getting a product to market was now more important than validating the research beforehand.
Those of us who were involved in the discussions on how best to measure commercial-minute ratings recall that C3 was designed as a one- or two-year band-aid. The industry’s major computerized pre- and post-buy analysis systems, Donovan Data and Media Bank (which have since merged to create MediaOcean), could not yet handle individual-minute measurement. They still needed audience data in the format of Nielsen’s MIT tapes, which used average ratings data. Advertisers wanted individual commercial ratings as soon as they were feasible. I had been working with Donovan and Media Bank, and both had the capability of providing pre- and post-buys based on individual minutes by the following year.
Even though research was starting to clearly demonstrate fundamental flaws in C3 measurement, we were told it didn’t really matter because individual commercial measurement would soon be the marketplace currency. I recall telling whoever would listen at the time that given the disruption during the upfront with the switch to C3, and the additional expense and staffing required to handle moving to actual commercial measurement, that C3 would likely be the end point. That was 15 years ago.
It should also be noted that the vast majority of Nielsen executives did not (and probably still do not) know exactly how C3 is calculated. Many have probably forgotten that when the “C” streams of ratings data were first made available, Nielsen sent an advisory to clients saying that you couldn’t simply subtract Live viewers from Live + 7 Days to calculate the amount of time-shifted viewing. Since that’s what everyone was doing, and it made logical sense to do so, many of us research types started to wonder why. It raised some uncomfortable questions about exactly how these calculations were done, and Nielsen quickly stopped saying it. If you specifically asked, they would tell you, but otherwise they never brought it up again.
Nielsen’s rules for allocating minute-by-minute viewing were designed for a time when virtually no one had a remote control and it actually took a minute or more to get up and change the channel. Basically, Nielsen looks for the plurality of viewing within a clock minute. For example, if you watch ABC for 20 seconds of a clock minute and four other networks for 10 seconds each, you are counted as watching ABC for that minute. If there’s no plurality, Nielsen goes back a minute at a time until there is one, then allocates that viewing to the current minute.
In the 1960s and 70s, before remote control, plurality of viewing to a given minute was pretty much the same as viewing to the entire minute. Individual minutes were not reported, so it didn’t matter how Nielsen measured them. They were simply aggregated up to average program ratings. No one ever thought about how viewing to a single minute was derived.
When it comes to time-shifted viewing, however, the flaws in C3 methodology become apparent. With live viewing, there are multiple channels. During a commercial break, you can stay on that channel for 20 seconds and then scan several other channels during a given clock minute. When you DVR a program and play it back, however, there is only one channel. The majority and plurality of viewing are therefore the same.
If you are playing back a program, watch 20 seconds of a commercial minute, and fast-forward through the remaining 40 seconds, your viewing falls into something Nielsen calls AOT (“all other tuning”), and the commercial minute receives no credit for your viewing (because neither the majority nor the plurality of tuning is to the commercial minute). You would need to be tuned for the majority of a clock minute to be counted in delayed DVR viewing. So, Nielsen’s methodologies for counting live commercial minutes and delayed commercial minutes are not only different, they are both fundamentally flawed.
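Here’s a toy sketch of the plurality rule as laid out above – an illustration of the logic, not Nielsen’s actual algorithm – including the playback case where fast-forwarding swamps the seconds actually watched:

```python
# A toy illustration of the plurality rule described above -- not Nielsen's
# actual algorithm, just the logic as the article lays it out.
from collections import Counter

def credit_minute(tuning_by_minute, idx):
    """Credit a clock minute to the source with a plurality of its seconds.
    If the minute has no unique plurality, walk back one minute at a time
    until one is found, and credit that source to the current minute."""
    for back in range(idx, -1, -1):
        top_two = Counter(tuning_by_minute[back]).most_common(2)
        if len(top_two) == 1 or top_two[0][1] > top_two[1][1]:
            return top_two[0][0]
    return None  # no plurality found anywhere (edge case)

# Live example: ABC for 20s, four other networks for 10s each -> ABC wins.
minute0 = ["ABC"] * 20 + ["CBS"] * 10 + ["NBC"] * 10 + ["FOX"] * 10 + ["CW"] * 10
# A 30/30 tie -> no plurality, so the rule reaches back to the prior minute.
minute1 = ["CBS"] * 30 + ["NBC"] * 30
print(credit_minute([minute0, minute1], 1))  # ABC (carried back from minute0)

# Playback example from the text: 20s viewed, 40s fast-forwarded ("AOT"),
# so the commercial minute gets no credit for the 20s actually watched.
minute2 = ["ION"] * 20 + ["AOT"] * 40
print(credit_minute([minute2], 0))  # AOT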
Shifting Alliances: Why TV Research is Not What it Used to Be
When broadcast and cable networks became siblings under the same corporate umbrella, it started to affect the type of research these companies were willing to underwrite. The broadcast networks and the network television and cable television trade organizations used to do a fair amount of research highlighting their strengths and their competitors’ weaknesses (who remembers CBS’s The Cable Fable?). The CAB (Cabletelevision Advertising Bureau) is now the VAB (Video Advertising Bureau), with membership including both broadcast and cable networks.
Not only do giant media conglomerates now own broadcast and cable outlets, they all now have their own streaming platforms as well.
The upfront sales pitch is no longer “Broadcast is bigger and better, and the only place to reach your customers all in one place” or “Our cable network has the most engaged audiences and is the only one that can effectively compete with the broadcast networks.” In essence, they are all telling us that they – Disney (ABC, ESPN, Hulu, Disney+…), Paramount Global (CBS, MTV Networks, Showtime, Paramount+ …), Comcast (NBCUniversal, USA, Syfy, Bravo, Peacock...), Warner Bros. Discovery (Turner cable networks, HBO, Max...) – are basically all the same: self-contained, massive communication hubs and unrivaled unified companies, armed with multiple networks, platforms, and genres, that can reach everyone, everywhere, all at once.
As a result, there is seldom any good research that highlights the weaknesses of any part of these entities (which also results in little research showing the strengths of other parts). Advertisers (remember them?) are often left with meaningless studies proclaiming “advertising works,” along with assurances that if they just add this, optimize that, or use this new and improved planning or buying tool, everything will be better.
Measuring Live vs. DVR Viewing and the Implications for Streaming Ads
Ten years ago, when I was head of audience analysis at ION Media, my group conducted a study that I believe is still the best (only?) analysis comparing engagement and ad recall between Live and DVR playback viewing. At the time, ION (now owned by Scripps) was an independent network, whose programming consisted almost exclusively of off-network repeats, which are largely watched live (more than 95% of ION’s adult 25-54 average audience at the time watched its shows live). Highlighting the advertising strength of live viewing would not hurt any other part of the company.
Trying to measure ad recall a day or more after the commercials aired is not particularly meaningful unless it’s at the start of a new ad campaign with new creative that is airing for the first time. Otherwise, you don’t really know where the respondents actually saw the commercial. Much better is measuring ad recall right after the program airs.
I got the idea for this analysis from a CAB study that was conducted roughly 30 years ago, which remains the best I’ve ever seen. It was an unaided ad recall study, where adults 18+ were surveyed by phone at home during primetime (I believe they got permission from respondents to call them after 10pm). It demonstrated that there was no statistical difference between broadcast and cable based on verified recall of commercials and attentiveness to the programming.
There were also a lot of other interesting findings about the impact of commercial pod positioning and length. The main purpose of the study was to show that cable should have the same value per rating point as broadcast. At the time, ad-supported cable networks were not yet producing original scripted programming, and they were still considered lower quality and less valuable than the much higher rated broadcast series.
We used a similar approach, although the times had changed – we conducted our survey online.
Here’s what we did, along with the results.
• We maintained a panel of television viewers through Vision Critical. We recruited members to participate in a special research study (there were 476 respondents).
• Respondents were allowed to watch any program they wanted, and could view the program live (at the time it originally aired) or via DVR playback. They only had to tell us the day and time they would be watching (it had to be between 7pm and 10pm).
• They were instructed to watch TV as they normally would. The only stipulation was that it had to be a regular program that contained commercials.
• Each participant received a 9-question survey at 10pm the night they watched the program. If they submitted the completed survey by 11pm that night, they were placed in a drawing to receive a $500 Amazon gift card. We made sure they understood that correct answers were not necessary to win the prize – so if they recorded a show they wouldn’t go back and try to find the right answers.
• Questions included asking respondents to list up to three plot points of the program, list all the brands they could recall in the commercials, and to list any specific messages they remembered from each brand they listed.
The results were eye opening but not really surprising.
• Attentiveness to and recall of program content were virtually identical for live viewing and DVR playback.
• The percentage of respondents who recalled one brand during commercials was 2.3 times greater among live viewers than among DVR viewers (the percentage who recalled three brands was lower overall, but was more than 3 times greater among live viewers).
• Brand message recall was roughly 3 times greater among live viewers.
• There was no advertising on the few streaming platforms at the time, so there was no reason to include them. But now, this type of analysis can easily include streaming.
When I left ION, I recommended that they replicate this analysis every year, so chances are there’s more recent data out there. But despite changes in the media landscape over the past 10 years, there’s no reason to think the difference in ad recall between live and DVR viewers is much different.
So, how does any of this apply to advertising on streaming platforms? Well, we know that DVR penetration has remained steady over the past 10 years at about 50% of the U.S. In some cases, streaming has taken the place of DVRs – ABC and FX series come to Hulu soon after airing, as do CBS shows on Paramount+ and NBC shows on Peacock. So, if you missed the original broadcast, you can catch up if you subscribe to the right streaming service. People can’t pause or fast-forward through the commercials, and virtually no one switches channels when streaming a series or movie. There are also shorter and fewer commercial pods on streaming platforms, both of which we know (from numerous research studies) positively impact ad recall.
So, while the advantage of streaming over linear TV ads might not be quite as extreme as that of live versus DVR viewing, it is certainly substantial. I’d love to see more research done on this. It seems like a prime project for Nielsen or ComScore, or one of the other emerging potential audience measurement competitors to tackle (or Netflix). I think if positioned properly, it would enhance efforts to sell streaming ads without necessarily hurting anyone’s linear TV efforts.
Why Alternatives to Nielsen Have Failed to Sustain Support
Over the past 40 years, there have been several failed attempts by various companies to compete with Nielsen. Except for ComScore (and more recently, VideoAmp and iSpot), none have managed to pose any real threat to Nielsen’s dominance in the U.S. Even these more recent companies have been seen as complements to Nielsen, not replacements (at least on a national level). While some of these companies served to pressure Nielsen to make improvements to its own service and increase its sample size, the industry (both buyers and sellers) has balked at funding multiple audience measurement companies that provide essentially the same thing.
Here’s a brief look at the most notable attempts to compete with Nielsen.
• 1985: After more than 30 years of being essentially unchallenged in the national TV audience measurement arena, British company AGB (Audits of Great Britain) entered the game and posed the first real threat to Nielsen’s dominance. It had been using something called people meters in Europe, and saw an opportunity to expand into the U.S. market. AGB had the advantage of actually using a better methodology than Nielsen’s antiquated meter/diary approach.
AGB spent nearly two years testing its people meter in Boston before trying to roll it out nationally. Nielsen responded by launching its own people meter, and increased its sample to 2,000 homes (to match the size of AGB’s sample). The following year Nielsen increased its sample to 4,000. As a result, AGB failed to sustain industry support and ceased its U.S. operation in 1988. The main problem was that ad agencies were not willing to pay for two identical national measurement services, and most of the networks were not willing to support a new ratings system that did not report higher numbers than Nielsen (I believe at the time only CBS, MTV, and a few other cable networks were willing to go forward, but agency support virtually disappeared).
• 1987: R.D. Percy & Co. developed a passive meter, which used a heat-sensitive, infrared device to detect when someone was in front of the television set and when someone left the room. But when people found out that the heat sensors could be thrown off by the presence of a large dog, jokes ensued, and industry support eventually fizzled. Nielsen announced its own plans to develop a passive meter, but it never went beyond the testing phase.
• 1991: Arbitron’s ScanAmerica tried to marry TV viewing to product usage. Sample participants were asked to run a scanner wand over the product code on each store-bought item. Single-source data was considered the holy grail at the time, and this approach seemed to be what advertisers had been seeking. It consisted of 1,000 households in five markets – New York, Los Angeles, Chicago, Atlanta, and Dallas. But a lack of belief in the methodology, the limited number of scannable items, and an unwillingness to invest in an unproven system in a faltering economy, resulted in ScanAmerica shutting down only a year after being introduced. Had ScanAmerica come along a few years later, when the economy was back on track and other viewing sources were cutting into network ratings, it probably would have received more interest and support.
• 1993: Arbitron, which had been competing with Nielsen for four decades in collecting local TV ratings (in more than 200 markets), canceled the service. Local stations – which had been forced to subscribe to both Nielsen and Arbitron to accommodate advertisers and their agencies, who were more or less equally divided between the two services – were under increasing economic pressure from cable and other competition. It no longer made financial sense for them to support two essentially identical ratings systems. Nielsen, which already had a lock on national TV measurement, won the day. Nielsen acquired Arbitron (renamed Nielsen Audio) in 2013.
• 1993: The broadcast networks tried to develop a rival rating service, SMART (System for Measuring and Reporting Television), with a new people meter provided by Statistical Research, Inc. (SRI). But they couldn’t get the necessary ad-agency support and it never got beyond the “media lab” stage (I thought at the time SRI did great work and provided valuable insights). It folded in 1997.
• 1999: IAG Research, founded to address the growing concerns about ad avoidance via DVRs, started conducting research to measure ad effectiveness and viewer engagement across television and the internet. In 2004, it introduced product-placement and branded-content measurement. While there was a lot of skepticism about IAG’s methodology, it was the only company providing this type of data, leading all the broadcast networks and an increasing number of advertisers to subscribe. Advertisers wanted it so agencies were forced to subscribe (I didn’t know a single agency research executive who believed in the data). IAG was largely seen as a supplement to Nielsen, rather than a replacement. Instead of trying to develop its own version, Nielsen acquired the company in 2008.
• 2007: erinMedia’s plan to launch a new national ratings system to collect and process viewing data from set-top boxes was derailed when Nielsen announced it was forming a new unit called DigitalPlus to essentially do the same thing (which caused erinMedia’s funding to disappear). erinMedia had filed an anti-trust lawsuit against Nielsen Media, which was confidentially settled in 2008.
• 2008: TRA (Television ROI Audit), founded by Mark Lieberman and Bill Harvey, was the first company to merge single-source and big data. TRA’s Media TRAnalytics combined second-by-second live and time-shifted viewing in 1.5 million homes with information from 55 million frequent-shopping-card users to create a single-source database of 370,000 anonymous households where TV viewing and purchase behavior could be compared and tracked. CBS and MTV were early subscribers (both networks tended to subscribe to the most promising new research companies). Several media agencies and advertisers also used TRA data. The company was sold to TiVo in 2012.
• 2008: Rentrak started tracking viewing behavior from 35 million televisions across all 210 markets, and provided video on demand measurement from set-top box data, as well as metrics on TV engagement. By 2015, more than 440 local stations across 68 station groups were using Rentrak for daily measurement. It was never seen as competition for Nielsen on a national level.
• ComScore, which merged with Rentrak in 2016 to form a new cross-platform measurement company, has been the only real competition to Nielsen – primarily with local TV stations. Most major media agencies, networks, station groups, and streaming services subscribe to some level of ComScore data.
• Other companies have emerged over the past few years, most notably VideoAmp and iSpot. While both provide important insights, including cross-platform and co-viewing measurement, and are becoming widely used by advertisers and media agencies, they are currently complements or supplements to Nielsen, not replacements. Until Paramount Global finally signed a new multi-year Nielsen contract, it had briefly been using VideoAmp data as its sole marketplace currency.
Industry Committees Traditionally Don’t Accomplish Much
There’s an ongoing truism in this business that if you want to appear to be doing something but don’t really want to accomplish anything, form a committee. I’ve been in this industry long enough, and sat on enough committees, to see the humor, frustration, and reality in that statement.
There are just a few times I can recall that advertisers, media companies (sellers) and media agencies (buyers) were in the same room discussing how to improve TV/video audience measurement (not including industry trade organizations). Ordinarily, buyers and sellers were separated. Even prior to the switchover to people meters, Nielsen held separate meetings among researchers from the networks and ad agencies – and neither the network salespeople nor agency buyers were present.
• In 2005, the Council For Research Excellence (CRE) was formed. The CRE initially consisted of 40 top industry researchers, representing advertisers, networks, local station groups, syndicators, and media agencies (I was privileged to be one of its founding members, representing Interpublic). While the CRE did conduct some landmark research, it didn’t lead to any changes in Nielsen’s measurement methodology – not really surprising, since the “independent” Council’s research was funded by Nielsen (roughly $3.5 million per year).
For those who don’t remember or weren’t around at the time, the CRE was formed following Senate hearings on Nielsen’s purported measurement bias, so Nielsen could avoid the federal government becoming involved in overseeing national TV audience measurement. Lawyers were present at many of our early meetings because the potential for collusion between buyers and sellers was a major concern. Nielsen pulled its funding in 2017.
• As already mentioned, in 2009, buyers, sellers, and researchers convened for a meeting with top Nielsen executives to discuss how to measure television viewing via DVRs. There was urgency to arrive at a solution before the 2009 upfront. Advertisers and agencies wanted only live viewing reported as marketplace currency, while the networks wanted a full week of delayed viewing (C7) reported. After much discussion, we arrived at the C3 compromise.
• A few years later, the Association of National Advertisers (ANA) convened a select group of advertisers, sellers, buyers, and researchers to discuss replacing Nielsen’s C3 measurement with true commercial ratings. This meeting ended with Nielsen, in its typical arrogance, basically telling us they heard our concerns, but had no intention of shifting away from C3 anytime soon (that was 12 years ago).
• In 2021, a number of advertisers, media agencies, and industry trade organizations joined NBCUniversal’s Measurement Innovation Forum, which was designed to evaluate current audience measurement methodology and come up with alternatives to Nielsen. NBCU sent out a request for proposal (RFP) to more than 50 companies (including Nielsen) to help “build a new measurement ecosystem for us that reflects the future.” Ironically, this was not in response to the basic flaws in current C3 measurement, but rather was spurred by the problems Nielsen encountered maintaining its national people meter sample during the pandemic – which resulted in reported network audience declines that seemed illogical to say the least. Again, media companies only seek action when they believe reported ratings are too low, not when they believe they are inaccurate.
Tests of potential alternatives to Nielsen (such as ComScore, iSpot.tv, Samba TV, VideoAmp, Xandr, and LiveRamp) reportedly did not yield the results the industry was looking for. Different companies provided substantially different numbers from Nielsen and from one another. And even the same suppliers sometimes produced widely different numbers from one week to the next. As I’ve already mentioned, in today’s media world, currently structured samples (and data modeling) are woefully inadequate in measuring the universe of TV viewers, so significant disparity in data produced by different samples should not surprise anyone – in fact, it proves my point.
In 2023, five major media companies (NBCUniversal, Paramount Global, Fox, TelevisaUnivision, and Warner Bros. Discovery), along with the Video Advertising Bureau (VAB), formed a “joint industry committee” (JIC) to create standards for audience measurement. Notably missing from this alliance (at the moment) is Disney (which owns ABC and ESPN as well as Disney+ and Hulu). Other members include A&E Networks, AMC Networks, Discovery, Hallmark Media, Scripps, Samsung, OpenAP, and Roku (the only streaming platform to join), as well as the agencies Butler/Till, Dentsu, GroupM, Havas Media, Horizon Media, IPG Mediabrands, Omnicom Media Group, Publicis Media, and RPA.
When the JIC was formed, Disney’s stated reason for not joining was: “We own our own tech and we own our own audience graph (a proprietary tool which helps advertisers identify distinct segments of the audience across Disney’s media properties).” It is “anchored in the industry’s only scaled audience graph for streaming, with 250 million identifiers that represent 112 million households, across hundreds of thousands of audience attributes that paint a picture of your consumer.” Nevertheless, they said, “We are working with everyone, VideoAmp, Samba TV, and iSpot.” “We’re talking about measurement expansion…it’s important to differentiate between measurement and currency. Nielsen will be the currency in this upfront.” “We are going to be as flexible as possible on the measurement side, but from a currency perspective, no one has been able to scale currency to take the place of Nielsen today.”
The main purpose of the JIC is to audit and certify the transactional readiness of new potential marketplace currencies. In its own words, “The U.S. Joint Industry Committee (JIC) was created in January 2023 as a collaborative forum for both media suppliers and video suppliers to work together to define a more sustainable model for long-form video measurement. By driving consensus on common standards and requirements for cross-platform measurement, the organization seeks to provide transparency into transactional readiness of cross-platform currencies while enabling more accurate measurement of today’s modern consumer.” JIC initiatives include creating standards for cross-platform measurement, certifying currencies for transaction, and building a publisher first-party streaming dataset.
JIC has certified ComScore, VideoAmp, and iSpot to be transactable as national TV currency. This basically means that all three have accurate and census-representative data sets, which the JIC considers the baseline requirements for buyers and sellers to transact. This, of course, says nothing about who will actually use these data and how they will be used. The JIC published a POV on the state of TV currencies in its 2024 Guidelines for Transactability of National Cross-Platform Solutions.
Lots of New Cross-Platform Measurement
ComScore, which has effectively taken on Nielsen on the local television measurement currency front, recently announced a new cross-platform solution, ComScore Content Measurement (CCM), which it calls “a major step in unifying ComScore’s device and platform-specific measurement capabilities…provides a deduplicated view of audience reach across linear TV, CTV/Streaming, PC, Mobile, and social, providing unmatched insights into viewer engagement…”
All the major media companies are using both VideoAmp and iSpot data to varying degrees, and several cable network groups are using one or both, including A&E Networks, AMC Networks, Hallmark Media, and Scripps Networks. Media agencies, including Dentsu, IPG Mediabrands, OMG, and RPA, are using them as well.
VideoAmp is currently powering the advanced data platforms Paramount’s Vantage and Warner Bros. Discovery’s Olli, and is available to advertisers via NBCUniversal’s One Platform. VideoAmp recently announced its VideoAmp Cross Screen Planner (VXP™), which is geared toward helping agencies and brands reach target audiences. The company is expanding its integrations to include census-level streaming data from publishers, including Disney, Paramount, and Fox.
Byron Allen’s Allen Media Group reportedly struck a 10-year deal with VideoAmp. iSpot has an ongoing partnership with Roku, the first of its kind with a streaming platform.
Both VideoAmp and iSpot have deals to use TVision’s 5,000-household panel, whose TVs are outfitted with equipment to monitor who is in the room and whether they are actually looking at the screen. VideoAmp has also used HyphaMetrics to help shape its big data TV measurement. HyphaMetrics collects viewing data for its panelists passively and measures viewing across a variety of devices.
In June 2024, the Association of National Advertisers (ANA) announced “significant progress and new milestones reached in its desire for a Cross-Media Measurement (CMM) solution. Improving advertiser decision-making for its members by enabling unduplicated reach and frequency at the campaign level is a strategic priority for the ANA. A new entity, Aquila, has been established by the ANA to govern, operationalize, and execute a CMM system in the U.S., which will support a broad range of use cases for planning, optimizing, post campaign reporting and outcome measurement. Aquila’s leadership includes a founder’s coalition of ANA member advertiser companies, and platforms including Google, Meta, Amazon, and TikTok.”
“After sufficient testing and validation, Aquila is moving the CMM initiative to the next phase of bringing a scaled CMM solution to the U.S…. Aquila has contracted with Kantar Media to build a single source cross-media calibration audience panel in the United States. The panel will be used for calibration purposes and as a core component of the service. Aquila is working with Accenture on an upfront phase of planning for the rollout of the CMM solution and the definition of its technology requirements.”
Aquila is reportedly expanding its research panel from 1,000 to 5,000 households and preparing to sign an as-yet-unnamed TV measurement partner, as it plans a beta launch later this year (and a full rollout next year). This will, of course, be competing with other companies’ cross-media measurement efforts, including those of Nielsen, ComScore, VideoAmp, and iSpot.
The Big Data (+ Panel) Solution
Nielsen understands the flaws in audience measurement as well as anyone, and now has a system that is a radical change from the way it did things in the past. Heading into the 2025 upfront, Nielsen will be combining viewership data from 45 million households across the country, covering more than 75 million set-top boxes and smart TVs (from partnerships with Comcast, DirecTV, Dish, Roku, and Vizio), as well as some first-party streaming data with a panel of more than 100,000 people. The Media Rating Council (MRC) has recently accredited this process (which basically means Nielsen is doing what it claims to be doing – it doesn’t attest to the accuracy of the product).
Obviously, a giant set-top sample in the tens of millions is a step in the right direction, but it still needs to be representative of the country at large based on age, income, ethnicity, media device ownership, streaming subscriptions, etc. Even then, set-top box data would significantly under-report viewing sources with a large share of over-the-air-only homes. But it will provide audience data for vehicles that are currently too small for Nielsen to measure. If I had faith that the household sample was indeed representative of the country at large, I’d have no problem with a large panel for demos (there’s really no other solution).
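For illustration, here’s a highly simplified sketch of the general “big data + panel” technique – household-level tuning at scale, with a people panel supplying the demographic splits. All numbers and categories are invented; this is not Nielsen’s actual methodology:

```python
# A highly simplified sketch of the general "big data + panel" technique:
# set-top/smart-TV data supplies household tuning at scale, and a people
# panel supplies the demographic composition used to turn household tuning
# into persons viewing. Invented numbers; not Nielsen's actual methodology.

# Big data: households tuned to a telecast, by household type.
tuned_households = {"1_adult": 400_000, "2+_adults_kids": 250_000}

# Panel: average persons viewing per tuning household, by demo (invented).
viewers_per_tuning_hh = {
    "1_adult":        {"A18-49": 0.6, "A50+": 0.4, "kids": 0.0},
    "2+_adults_kids": {"A18-49": 1.1, "A50+": 0.3, "kids": 0.7},
}

# Project persons viewing: big-data household counts x panel demo factors.
persons = {}
for hh_type, tuned in tuned_households.items():
    for demo, factor in viewers_per_tuning_hh[hh_type].items():
        persons[demo] = persons.get(demo, 0) + tuned * factor

print(persons)  # {'A18-49': 515000.0, 'A50+': 235000.0, 'kids': 175000.0}
```

The obvious catch, as noted above, is that the projection is only as good as the representativeness of both the household data and the panel supplying the factors.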
When You Can’t (or Won’t) Improve, Distract and Change the Subject
Whenever ratings substantially decline, sellers become focused on adding something and pointing out that the actual number of viewers doesn’t matter as much as (insert subject here).
In the early 1990s, as “network erosion” entered the advertising lexicon, declining broadcast ratings started to become a thing. Linking TV viewing to product usage was in vogue, with survey-based companies like MRI and Simmons (which combined in 2019) providing cross-tabs to link various aspects of media consumption with product usage. Since this data was not really projectable from one year to the next, and the TV data was as soft as can be, it was used more as a looking-back tool than a looking-forward one.
Then it was viewer engagement. In the early 2000s, advertisers were understandably concerned about ad-skipping via DVRs and of high levels of commercial clutter. IAG Research (founded in 1999), purported to be able to measure advertising effectiveness, ad likeability and recall, and program engagement across television and the internet. In 2004, the company introduced a measurement for product placement and branded content. I remember discussing IAG with Nielsen executives, who were quick to point out what they believed was IAG’s flawed methodology, insisting they would never do that kind of shoddy research. In 2008, Nielsen acquired IAG, and suddenly the methodology was sound (although Nielsen did make some notable changes).
Over the years, I can’t count the number of times the broadcast networks and Nielsen pushed agencies to accept out-of-home viewing being included in Nielsen’s currency ratings. We always laughed them out of the room. It was OK in a supplemental report, but no one who knew anything about research or viewing behavior would allow viewing in bars and restaurants to be given the same weight as in-home viewing. Unfortunately, research directors at media agencies don’t have nearly the clout they once did, and despite most not wanting OOH included in Nielsen ratings, they couldn’t prevent it as they had so many times before.
Now, as multiple channels on multiple platforms and multiple devices saturate the video landscape, attentiveness is the big thing (as it periodically is), even though you can’t really measure it on any cost-effective, ongoing basis. Most attempts at measuring attentiveness I’ve seen over the years have been gibberish. But as is always the case, someone will proclaim to have unlocked the secret, someone will start using it, and a lot of other folks will fall in line. I learned long ago that if you tell a prospective client something can’t be done, and someone else tells them they have the secret sauce, they will go with the someone else.
How to Actually Improve TV Audience Measurement (and why it won’t happen)
Even with all the criticism leveled at Nielsen, the company is still used as the base against which any new potential vendor is compared to gauge the validity of its reported ratings. So, if another company reports data that is 30% higher or lower than Nielsen’s, it is immediately thought to be “wrong.” But we really have no idea if this is the case.
In addition to re-thinking sample construction, the first thing that needs to be done is to get a handle on what current rating services can and cannot accurately measure. While this might sound logical and simple, I can’t remember this ever being done (outside of the old telephone coincidental days). Even when the industry took the major step of moving to average commercial minute measurement (C3) in 2009, there was precious little analysis done into the methodology Nielsen uses to calculate these ratings.
The only good attempt I recall to get some of these answers was in the CRE’s landmark Video Consumer Mapping Study in 2007-08. I was co-Chair of the CRE’s Media Consumption and Engagement Committee, which commissioned the study. It was a massive (and expensive) undertaking, where trained observers shadowed a few hundred families and recorded their real-time television exposure and viewing over a full day (down to 10-second increments). The study was designed and executed by Ball State University and Sequent Partners. It took place in Atlanta, Chicago, Dallas, Philadelphia, and Seattle, with 376 participants, in spring and fall 2008. The study looked at live versus DVR viewing, and included viewing at home, in other people’s homes, at work, at other out-of-home locations such as bars and restaurants, schools, stores, and basically anywhere people are exposed to video and ads.
One of the lesser-known findings of the study was that Nielsen’s overall usage levels for households and broad demographic segments, such as total viewers and adults 18-49, were remarkably similar to the observed viewing behavior in our sample. Once you started to look at narrower age groups, however, the reported Nielsen data strayed significantly from the observed viewing data.
And this was just overall TV usage levels, not individual program ratings. That was 17 years ago, when video viewing was not nearly as splintered as it is today – DVR penetration was barely 20% of U.S. households, and streaming services were barely a factor. At the time, I suggested we replicate the study every five years, but that never happened – primarily because it was so expensive, and the CRE (and Nielsen) had other priorities.
It is actually not that difficult to figure out how accurate current TV measurement is. I bring this up every time I write on this topic. When I was on the CRE, I proposed a methodology for gauging how accurately Nielsen was measuring an increasingly fragmented video viewing environment. Nielsen wanted nothing to do with it then, but the industry should consider it now.
Here’s all you need to do:
• Develop a sample of 100+ of the top researchers in the industry (buyers, sellers, advertisers) who are adept at detailed work and understand how audience measurement works. Nielsen, ComScore (and others) could meter their TVs, DVRs, smartphones, computers (desktops, laptops, tablets) and any other video-capable devices they own. They would also be given portable meters to measure their exposure to out-of-home video and audio.
• Set up several different viewing scenarios for what and how each participant will view video content and commercials during a single day, which would include linear TV (both live viewing and viewing via DVRs) and streaming.
• Participants log their video and commercial exposure and viewing (down to the second).
• Compare their actual viewing to what the measurement companies report (a sketch of this comparison follows the list). We would then be able to see and address weaknesses in audience measurement, see how much real-world viewing is not being captured by current methodology, and gauge the differences between such things as exposure (the opportunity to view) and actual viewing, and between individual commercial viewing and average commercial minutes. We’d also be able to see how many people are actually watching the commercials in bars and restaurants compared to what portable devices pick up as “exposure.”
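To make that final comparison step concrete, here is a minimal sketch, using hypothetical numbers for a single participant and a single telecast, of the second-by-second reconciliation the study would run at scale across every participant, device, and measurement service.

```python
# A minimal sketch, with hypothetical data, of the core reconciliation in the
# proposed study: a participant's logged viewing (ground truth) vs. the
# intervals a measurement service credited to them, second by second.

def to_seconds(intervals):
    """Expand (start, end) second intervals into the set of seconds viewed."""
    viewed = set()
    for start, end in intervals:
        viewed.update(range(start, end))
    return viewed

# Hypothetical: one participant, one telecast; times are seconds from airtime.
logged = to_seconds([(0, 600), (720, 1800)])    # what the participant actually watched
reported = to_seconds([(0, 540), (700, 1860)])  # what the service credited

matched = logged & reported        # viewing correctly credited
missed = logged - reported         # real viewing the service failed to capture
false_credit = reported - logged   # credited "viewing" that never happened

print(f"Correctly credited: {len(matched):,} seconds")
print(f"Missed viewing:     {len(missed):,} seconds")
print(f"False credit:       {len(false_credit):,} seconds")
print(f"Capture rate:       {len(matched) / len(logged):.1%}")
```

Aggregated across participants, devices, and viewing scenarios, the “missed” and “false credit” buckets are exactly where the weaknesses in current methodology would show up.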
Not only would this demonstrate, for the first time, the accuracy of reported ratings, it would also tell us exactly where improvements to audience measurement need to be made (which was the original but unfulfilled purpose of the CRE). The project could be overseen by a group of retired research executives or some well-respected company with no skin in the game.
So What’s Next?
That’s a good question. I doubt the industry will follow my proposal to discover the actual weaknesses in current audience measurement techniques. Doing so would require the expensive, time-consuming process of fixing them, and would not necessarily supply answers anyone wants to deal with.
So, if past is prologue, we’ll probably see more talk and several new proposals on how to “better” measure video and cross-platform viewing. Nielsen will be forced to make some changes to how it operates. When the next upfront rolls around, virtually everyone will still be using Nielsen as their primary (but not necessarily only) national marketplace currency.
If the media conglomerates continue to sign long-term contracts with Nielsen, we’ll know most of the rumblings about finding an alternative currency were just talk and that they have no genuine interest in improving audience measurement. If they sign one-year deals, making Nielsen (and others) continually prove themselves, then maybe real change will happen.
In today’s (and tomorrow’s) increasingly splintered media environment, there is room for additional players, which may not have been the case 10 years ago. ComScore is not going anywhere, and I don’t see VideoAmp or iSpot disappearing any time soon. And while Nielsen has long dominated partly because it has decades of trendable linear TV viewing data on conventional demographics, when it comes to more advanced audience data, the newer players are on a more even playing field.
But as has always been the case, when Nielsen starts perceiving another company as a real threat, it has the deepest pockets to develop something similar. And no matter what other services anyone uses, their multi-year Nielsen contracts mean they are still paying Nielsen millions of dollars each year. So for any alternative to sustain long-term support, it really has to produce data Nielsen doesn’t provide that media companies see as profitable. Should VideoAmp, which specializes in targeting advanced audiences, online behavior, and cross-screen measurement, merge with iSpot, which is widely used for planning and evaluating the impact of ad campaigns, the combination could be a real challenger to Nielsen’s dominance. And let’s not forget that Kantar Media, which competes head-to-head with Nielsen outside the U.S., is being acquired by the private-equity firm H.I.G. Capital and seems poised to enter the U.S. market. Will it acquire or partner with either VideoAmp or iSpot (or ComScore)? Who knows?
Here are the LinkedIn overviews of these and a few other measurement companies, in their own words…
ComScore is a global, trusted partner for planning, transacting and evaluating media across platforms. With a data footprint that combines digital, linear TV, over-the-top and theatrical viewership intelligence with advanced audience insights, ComScore empowers media buyers and sellers to quantify their multiscreen behavior and make meaningful business decisions with confidence. A proven leader in measuring digital and TV audiences and advertising at scale, ComScore is the industry's emerging, third-party source for reliable and comprehensive cross-platform measurement.
VideoAmp is a software and data company creating a more sophisticated data-driven advertising ecosystem that redefines how media is valued, bought and sold. The VideoAmp platform provides measurement and optimization tools that unify audiences across the disparate systems of traditional TV, streaming video and digital media. Unlocking new value for those currently operating within a siloed view of their audiences, VideoAmp creates efficiencies for the entire industry. VideoAmp is transforming a 100-year old industry by powering a more effective three-way value exchange that results in advertisers increasing their return on investment, publishers increasing their revenues and improving the viewing experience for consumers.
iSpot helps advertisers measure the brand and business impact of TV and streaming advertising, from concept to airing to conversion. Fast, accurate and actionable measurement and attribution solutions enable advertisers to assess creative effectiveness, enhance media plans and attribute advertising results for cross-platform campaigns, all while benchmarking against competitors and historical norms. Unlike legacy and ad hoc solutions, iSpot is purpose-built to measure the performance of every ad on television with digital-like precision and granularity in real time. With always-on performance insights unified across linear and streaming TV, advertisers can take quick and confident action to consistently drive business results.
Television remains a vibrant cultural influence and an essential source of entertainment and information worldwide. Tremendous growth in content choices, and viewing platforms that allow us to watch anything, anytime, on any screen, has actually made it harder for viewers to discover and keep up with all the great programming available. It’s also more competitive for content providers to keep your attention, and for marketers to make strong, measurable connections with their target consumers. Technology that improves the viewing experience, enables content discovery, and addresses audience fragmentation across screens will strengthen television’s business model and relevance to consumers. Data is at the center of any solution to make TV better. Samba TV's technology is built into Smart TVs and easily maps to smart phones and tablets. By recognizing what's on screen, Samba TV learns what viewers like and using machine learning algorithms, enables discovery of shows and actors in a whole new way. Likewise, our data and measurement products are transforming the way stakeholders across the media landscape are thinking about their business. Given the dramatic growth in streaming services, connected devices, time-shifting, and multi-screen viewership, our data products solve real problems and create a meaningful competitive advantage for our clients.
When we look at the way TV has always been measured - a lot is missing from the equation: Who’s actually in the room, and are their eyes even on the screen? TVision is the leader in TV performance metrics - an advertising technology firm that delivers real-time performance metrics for marketing effectiveness and efficiency. We use cutting edge technology to measure what was once unmeasurable - how people really watch TV. We’re a venture-backed company, deeply immersed in the intersection of data, media, and advertising. TVision enables the media industry - brands, networks, and data partners alike - to reduce waste and drive greater marketing results.
HyphaMetrics provides a unified understanding of media behavior that evolves at the speed of culture. We serve as the objective technology standard globally for the precise measurement of media at the individual level. Using Artificial Intelligence and Machine Learning to analyze and optimize advertising and video content for media executives overseeing content, ads, brand sponsorships, and product placements, we cost-efficiently support interoperability across the entire media ecosystem serving measurement companies, brands, agencies, and media publishers.
As people increasingly move across channels and platforms, Kantar Media’s data and audience measurement, targeting, analytics and advertising intelligence services unlock insights to inform powerful decision-making. Working with panel and first-party data in over 80 countries, we have the world's fastest growing cross-media audience measurement capabilities, underpinned by versatility, scale, technology and expertise, to drive long-term business growth for our clients and partners.