Media Intelligence

Download our one-page MediaSignal overview to learn how we do it!

As an issues management agency, we run a sophisticated data operation to monitor relevant news and stakeholder activity and derive intelligence for strategy development. Our clients look to us to stay ahead of news cycles and provide 360-degree situational awareness. To do this, we’ve built a toolchain that combines some of the industry’s most familiar media monitoring tools, our own proprietary technology and Science4Data for additional insights. Over the years, we’ve tried all sorts of media analytics tools and developed and re-developed many of our own. That experience has afforded insights I’m happy to share.

Original-source, unique content is the foundation we consider most valuable for evaluation. Clients often want big reach figures, and marketers make a big deal of them to rationalize investments, but we treat quality over quantity as the first step and the higher value in analysis. Comparing unique, original content against other unique, original content, and only then weighing article counts and site traffic, tells a more relevant story for decision-making and for evaluating real value and impact. Indeed, when measuring media reach and share of voice, the typical data points many tools provide are not terribly useful for issues management and influence work, and they carry significant flaws that can distort analyses intended to support strategic decisions. Here are the main problems we see with common media metrics:

  1. Article count doesn’t discern or remove simple online syndication, which distorts measures of influence and visibility. In the old days, when an article appeared in print, you had some sense of its potential audience and impact. Today, being one of thousands of headline links scattered across indiscernible websites is not a measurement I’d want to use for any real decision-making. To be valuable, article count must separate original sources from the myriad content replicas hidden deep in the crevices of distributor sites.
  2. Reach by monthly traffic is distorted by the same syndication issue. Count and reach might tell us a little about comparative buzz, but these data points don’t give enough information to support a strategic recommendation. A paid release distributed through Cision, Business Wire or another service might be syndicated on multiple major news websites with lots of traffic, but it doesn’t actually appear in the main news sections where a typical visitor would find it, so it’s not the same as an article written for and promoted by that news site. The same can be true of actual newswire syndication from Reuters, AP and AFP.
  3. Density is interesting if you want to understand temporal shifts, peaks or valleys that might offer insights into recurring opportunities and/or one-time triggers for increased interest, but this too can be clouded by syndication. Density of unique, original source content extracted from the clutter of syndication gives more accurate insights.
  4. Sentiment tracking is highly subjective, and automated sentiment analysis is often erroneous; its poor conclusions are exacerbated when drawn from the traffic volume of syndicated content noted above. Moreover, sentiment is only meaningful from the perspective of the relevant audience (i.e., the client or their customers and key stakeholders) whose perception matters. Automated sentiment tools cannot yet make the distinctions about audience perspectives or publisher intent needed to produce useful evaluations of favorability trends.
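To make the first point concrete, here is a minimal sketch of how syndicated replicas might be filtered out before counting coverage. This is an illustrative approach using word-shingle similarity, not our production tooling; the 5-word shingles, the 0.8 threshold and the example articles are all assumptions made for demonstration.

```python
def shingles(text, n=5):
    """Break lowercased text into overlapping word n-grams ("shingles")."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(1, len(words) - n + 1))}

def jaccard(a, b):
    """Jaccard similarity of two shingle sets: 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def keep_originals(articles, threshold=0.8):
    """Keep one representative per near-duplicate cluster, in arrival order."""
    kept = []  # pairs of (article, shingle set)
    for art in articles:
        s = shingles(art["text"])
        if any(jaccard(s, ks) >= threshold for _, ks in kept):
            continue  # near-duplicate of content already counted
        kept.append((art, s))
    return [a for a, _ in kept]

# Three hits: an original, a verbatim syndicated copy, and a distinct story.
articles = [
    {"site": "citydaily.example", "text": "Acme Corp announced a recycling program today that it says will cut plastic waste in half across its supply chain by next year."},
    {"site": "aggregator.example", "text": "Acme Corp announced a recycling program today that it says will cut plastic waste in half across its supply chain by next year."},
    {"site": "tradejournal.example", "text": "Industry analysts question whether recycling pledges from large manufacturers can realistically be met on such short timelines, citing past shortfalls."},
]

unique = keep_originals(articles)
print(len(unique))  # the syndicated copy is filtered out, leaving 2 unique stories
```

Counting `unique` rather than raw hits is what turns "thousands of links" into a figure you could actually base a decision on.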

Two things to consider, which we also track to assess influence/impact:

  1. Frequency of social sharing of an article – tracking the number of link-backs on Twitter is a relatively strong measure of pick-up and buzz by identifiable influencers (not bots), and those links carry added SEO value for more lasting influence.
  2. The lasting influence of placed content as found in search – how high the content ranks (i.e., which of the 1-10 slots on page one of Google) for a relevant, measurably used search term is a strong indicator of that content’s influence.
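These two signals can be blended into a rough composite for comparing placements. The sketch below is purely illustrative: the `influence_score` function, its 50/50 weighting and its share-count saturation point are hypothetical assumptions, not the algorithm we actually use.

```python
import math

def influence_score(share_count, search_rank, max_rank=10):
    """Blend social pick-up and search placement into a rough 0-100 score.

    Hypothetical weighting: shares are log-scaled and saturate around 500,
    so a handful of extra retweets on an already-viral piece barely moves
    the needle. search_rank is the slot in results (1 = top of page one);
    anything past page one scores zero on that axis.
    """
    share_part = min(math.log1p(share_count) / math.log1p(500), 1.0)
    rank_part = (max_rank + 1 - search_rank) / max_rank if 1 <= search_rank <= max_rank else 0.0
    return round(100 * (0.5 * share_part + 0.5 * rank_part), 1)

# A widely shared piece sitting at slot 3 vs. a little-shared piece off page one.
print(influence_score(240, 3))
print(influence_score(5, 40))
```

The point of a composite like this is not the specific numbers but that it rewards exactly the attributes the raw reach metrics above ignore.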

The first is relatively easy to capture in near-real time. The second typically requires evaluation after a few months (we have developed tools and algorithms for this), repeated over time to demonstrate the influence value of tracked content placement campaigns.

In conclusion, syndication introduces a host of issues that can mislead strategic decision-making and misinform return on investment (ROI) evaluations. Comprehensive monitoring with sound evaluation is critical for any organization that wants to participate effectively in its markets and value chains. Sound evaluation requires understanding the importance of tracking original content and filtering irrelevant syndicated “coverage” out of your media monitoring. While measurement concepts like unique monthly visitors can serve as a proxy for influence, we can get closer to the truth by looking at other attributes, including social sharing and search performance, and by giving less weight to the chaff too often used to inflate ROI claims.

About our Guest Blogger

Jay Byrne’s career spans more than 25 years with experience in public relations, campaign communications and government affairs. He has held senior communications positions at the White House, U.S. State Department, Monsanto Company and for the City of Boston. Jay has directed communications and media relations for various U.S. political campaigns, global activist campaign responses, complex litigation challenges and other international public relations initiatives.

More about v-Fluence