
Observations on The Chart by Law Professor Maxwell Stearns of U. Maryland

Law professor Maxwell Stearns, who blogs about law, politics, and culture, recently published this post about the chart. It offers several useful insights on 1) distilling the ranking criteria into sub-categories, 2) why the sources on the chart form a bell curve, and 3) how the rankings might be made more scientific. Give it a read!


Everybody has an Opinion on CNN

I get the most feedback by far on CNN, and compared with the feedback on other sources on the chart, CNN is unusual because people tell me it should be moved in every direction (up, down, left, and right). Further, most people who give me feedback on other sources suggest that I just nudge a source one way or another a bit. In contrast, many people feel very strongly that CNN should be moved significantly in whichever direction they favor.

I believe there are a few main reasons I am getting this kind of feedback.

  • CNN is the source most people are most familiar with. It was the first, and is the longest-running, 24-hour cable news channel. It’s on at hotels, airports, gyms, and your parents’ house. Even if people are critics of no other news source, they will be critics of CNN, because it is the one they know best.
  • CNN is widely talked about by other media outlets, and conservative media outlets in particular, who often describe it as crazy-far left. Usually those who tell me it needs to go far left are the ones reading conservative media—no surprise there.
  • People tend to base their opinions of CNN on what leaves the biggest impression on them, and there are a lot of aspects that can leave an impression:
    1. For some people, the biggest impression comes from having CNN on in the background during the day. They see a large sampling of CNN’s news coverage, find that the programming is mostly accurate, and feel it informs them of a lot of US news they are interested in. These individuals tend to think that CNN should be ranked higher, perhaps all the way up in “fact-reporting” and in “mainstream.”
    2. For others, the biggest impression is that they can tune into CNN for breathless, non-stop coverage of an impending disaster, like a hurricane, or a breaking tragedy, such as a mass shooting. People can take a few different kinds of impressions from this. First, that they can count on the fact that everything known will be repeated to them within 10 minutes of tuning in. That’s another reason to put them up in “fact-reporting.” Second, more savvy observers know that CNN makes not-infrequent mistakes and often jumps the gun in these situations. They usually qualify their statements properly, but they will still blurt out facts about a suspect, the number of shooters, or fatalities that are not yet verified. That causes some people to rank them lower on the fact-reporting scale. Third, people know that once CNN runs out of current material to talk about, they will bring on analysts on all manner of related (or unrelated) subjects (e.g., lawyers, criminologists, climate change scientists, etc.), often for several days following the story. This tends to leave people with the impression that CNN provides a lot of analysis and opinion (including much that is valid and important) in addition to fact reporting. So a ranking somewhere along the analysis/opinion spectrum (a little above where I have it) seems appropriate.
    3. For yet others, the kind of coverage that leaves the biggest impression is the kind that includes interviews and panels of political commentators. The contributors and guests CNN has on for political commentary range widely in quality, from “voter who knows absolutely nothing about what he is talking about” to “extremely partisan, unreliable political surrogate” to “experienced expert who provides good insight.” People who pay attention to this kind of coverage note that CNN does a few crazy things.
      1. First, they run a chyron (the big banner at the bottom of the screen) that says “Breaking News:…” followed by something that is clearly not breaking news. For example: “Breaking: Debate starts in one hour.” Eye roll. That debate has been planned for months and is not breaking. Further, they run a chyron for almost everything, which seems unnecessary and sensationalist, a practice that has since been adopted by MSNBC, FOX, and others. Often, the chyron’s content itself is sensationalist.
      2. Second, in the supposed interest of being “balanced” and “showing both sides,” they often have extreme representatives from each side of the political spectrum debating each other. This practice airs and lends credibility to some extreme, highly disputed positions. Balance, I think, would be better represented by having guests with more moderate positions. Interviews with Kellyanne Conway, who often says things that are untrue or misleading and makes highly disputed opinion statements, are something else entirely. Even though the hosts challenge her, it often appears that the whole point of having her as a guest is to showcase how incredulous the anchors are at her statements. This seems to fall outside the purpose of news reporting. What’s worse, though (to me, anyway), is that they hire partisan representatives as actual contributors and commentators, which gives them even more credibility as sources one should listen to about the news, even though they have a clear partisan, non-news agenda. They hired Jeffrey Lord, who routinely made the most outlandish statements in support of Trump, and Trump’s ACTUAL former campaign manager, Corey Lewandowski. That was mind-boggling in terms of its lack of journalistic precedent (and ethics) and seemed to be done for sensationalism and ratings, rather than for the purpose of news reporting, which is to deliver facts. Those hires were a big investment in providing opinion. I think it was extremely indicative of CNN’s reputation for political sensationalism when The Hill ran two headlines within a few weeks of each other saying something like “CNN confirms it will not be hiring Sean Spicer as a contributor” and “CNN confirms it will not be hiring Anthony Scaramucci as a contributor” shortly after each of their firings.
      3. Third, their coverage is heavily focused on American political drama. I’ll elaborate on this in a moment.

Personally, the kind of coverage described in point 3 above (the interviews and panels of political commentators) left the biggest impression on me. That is why I have CNN ranked on the line between “opinion, fair persuasion” and “selective or incomplete story, unfair persuasion.” The impact of the guests and contributors who present unfair and misleading statements and arguments really drives down CNN’s ranking in my view. I have them slightly to the left of center, though, because they tend to have a higher number of guests with left-leaning positions.

 

I have just laid out that my ranking is driven in large part by a subjective measure rather than an objective, quantitative one. An objective, quantitative measure would take all the shows, stories, segments, and guests, analyze all the statements made, and say, on a percentage basis, how many of those statements were facts, opinions, analysis, fair or unfair, misleading, untrue, etc. I have not done this analysis, but I would guess that a large majority of the statements made in a 24-hour period on CNN would fall into reputable categories (fair, factual, impartial). Perhaps even 80% or more would fall into those categories. So one could reasonably argue that CNN deserves to be higher; say, 80% of the way up (or whatever the actual number is), if that is how you wanted to rank it.

However, I argue for the inclusion of a subjective assessment that comes from the question “what impression does this source leave?” Related questions are “what do people rely on this source for,” “what do they watch it for,” and “what is its impact on other media?” I submit that the opinion and analysis panels and interviews, with their often-unreliable guests, leave the biggest impression and make up a large portion of what people rely on and watch CNN for. I also submit that these segments make the biggest impact on the rest of media and society. For example, other news outlets will run news stories whose content is “Here’s the latest crazy thing Kellyanne said on CNN.” These stories make a significant number of impressions on social media, thereby amplifying what these guests say.

I also include a subjective measure that pushes it into the “selective or incomplete story” category, which comes from looking at what’s not there: what’s missing. In the case of CNN, given their resources as a 24-hour news network, I feel like a lot is missing. They focus on American political drama and the latest domestic disaster at the expense of everything else. With those resources and time, they could inform Americans about the famine in South Sudan, the war in Yemen, and the refugees fleeing Myanmar, along with so many other important stories around the world. They could do a lot more storytelling about how current legislation and policies impact the lives of people here and around the world. Their focus on White House palace intrigue inaccurately, and subliminally, conveys that those are the most important stories, and that, I admit, just makes me mad.

Many reasonable arguments can be made for the placement of CNN as a whole, but a far more accurate way to rank the news on CNN is to rank an individual show or story. People can arrive at a consensus ranking much more easily when doing that. I will be doing that for individual news outlets on future charts (I know you can’t wait for a whole chart just on CNN, and neither can I!).

 


The Chart, Version 3.0: What, Exactly, Are We Reading?

 

TL;DR: Here’s what’s new in this chart:

  • I edited the categories on the vertical axis to more accurately describe the contents of the news sources ranked therein (long discussion below).
  • I stuffed as many sources (from both versions 1.0 and 2.0, plus some new ones) on here as I could, in response to all the “what about ______?” questions I got. Now the logos are pretty tiny. If you have a request for a ranking of a particular source, let me know in the comments.
  • I changed the subheading under “Hyper-Partisan” from “questionable journalistic value” to “expressly promotes views.” This is because “hyper-partisan” does not always mean that the facts reported in the stories are necessarily “questionable.” Some analysis sources in these columns do good fact-finding in support of their expressly partisan stances. I didn’t want anyone to think those sources were necessarily “bad” just because they are hyper-partisan (though they could be “bad” for other reasons).
  • I added a key that indicates what the circles and ellipses mean. They mean that a source within a particular circle or ellipse can often have stories that fall within that circle/ellipse’s range. This is, of course, not true for all sources.
  • Green/Yellow/Orange/Red Key. Within each square: Green is news, yellow is fair interpretations of the news, orange is unfair interpretations of the news, and red is nonsense damaging to public discourse.

Just read this one more thing: It’s best to think of the position of a source as a weighted average position of the stories within each source. That is, I rank some sources in a particular spot because most of their stories fall in that spot. However, I weight the ranking downward if a source has a significant number of stories (even if they are a minority) that fall in the orange or red areas. For example, if Daily Kos has 75% of its stories fall under yellow (e.g., “analysis” and “opinion, fair”), but 25% fall under orange (selective, unfair, hyper-partisan), it is rated overall in the orange. I rank them like this because, in my view, orange and red-type content is damaging to the overall media landscape, and if a significant enough number of stories fall in those categories, readers should rely on the source less. This is a subjective judgment on my part, but I think it is defensible.
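To make that weighting idea concrete, here is a minimal sketch in Python. It only illustrates the rule described above; the 10% and 20% cutoffs are my own assumed thresholds, not numbers from the chart’s methodology.

```python
# Illustrative sketch of the "weighted average with a downweight" rule.
# The color bands follow the Green/Yellow/Orange/Red key above; the
# cutoff percentages are assumptions for illustration only.

BANDS = ["green", "yellow", "orange", "red"]

def overall_band(story_bands):
    """Place a source based on the distribution of its stories' bands."""
    total = len(story_bands)
    shares = {band: story_bands.count(band) / total for band in BANDS}

    # Downweight rule: a significant minority of orange/red stories pulls
    # the whole source down, even if most stories are green or yellow.
    if shares["red"] >= 0.10:
        return "red"
    if shares["orange"] + shares["red"] >= 0.20:
        return "orange"

    # Otherwise, place the source where most of its stories fall.
    return max(BANDS, key=lambda band: shares[band])

# The Daily Kos example from the post: 75% yellow, 25% orange -> orange overall.
stories = ["yellow"] * 75 + ["orange"] * 25
print(overall_band(stories))  # -> orange
```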

OK, you can go now unless you just really love reading about this media analysis stuff. News nerds, proceed for more discussion about ranking the news.

As I discussed in my post entitled “The Chart, Second Edition: What Makes a News Source Good?” the most accurate and helpful way to analyze a news source is to analyze its individual stories, and the most accurate way to analyze an individual story is to analyze its individual sentences. I recently started a blog series where I rank individual stories on this chart and provide a written analysis that scores the article itself on a sentence-by-sentence basis, and separately scores the title, graphics, lede, and other visual elements. See a couple of examples here. Categorizing and ranking the news is hard to do because there are so very many factors. But I’m convinced that the most accurate way to analyze and categorize news is to look as closely at it as possible, and measure everything about it that is measurable. I think we can improve our media landscape by doing this and coming up with novel and accurate ways to rank and score the news, and then teaching others how to do the same. If you like how I analyze articles in my blog series, and have a request for a particular article, let me know in the comments. I’m interested in talking about individual articles, and what makes them good and bad, with you.

As I’ve been analyzing articles on an element-by-element, sentence-by-sentence basis, it became apparent to me that individual elements and sentences can be ranked or categorized in several ways, and that my chart needed some revisions for accuracy.

So far I have settled on at least three different dimensions, or metrics, upon which an individual sentence can be ranked. These are 1) the Veracity metric, 2) the Expression metric, and 3) the Fairness metric.

The primary way statements in the news are currently evaluated is on the basis of truthfulness, which is arguably the most important ranking metric. Several existing fact-checking sites, such as Politifact and the Washington Post Fact Checker, use a scale to rate the veracity of statements; Politifact has six levels and the Washington Post Fact Checker has four, reflecting that many statements are not entirely true or entirely false. I score each sentence on a similar “Veracity” metric, as follows:

  • True and Complete
  • Mostly True/ True but Incomplete
  • Mixed True and False
  • Mostly False or Misleading
  • False

Since there are many reputable organizations that do this type of fact-checking work according to well-established industry standards (see, e.g., the Poynter International Fact Checking Network), I do not replicate this work myself but rather rely on these sources for fact-checking.

It is valid and important to rate articles and statements for truthfulness. But it is apparent that sentences can vary in quality in other ways. One way, which I discussed in my previous post (“The Chart, Second Edition: What Makes a News Source Good?”), is on what I call an “Expression” scale of fact-to-opinion. The Expression scale I use goes like this:

  • (Presented as) Fact
  • (Presented as) Fact/Analysis (or persuasively-worded fact)
  • (Presented as) Analysis (well-supported by fact, reasonable)
  • (Presented as) Analysis/Opinion (somewhat supported by fact)
  • (Presented as) Opinion (unsupported by facts or by highly disputed facts)

In ranking stories and sentences, I believe it is important to distinguish between fact, analysis, and opinion, and to value fact-reporting as more essential to news than either analysis or opinion. Opinion isn’t necessarily bad, but it’s important to distinguish that it is not news, which is why I rank it lower on the chart than analysis or fact reporting.

Note that the ranking here includes whether something is “presented as” fact, analysis, etc. This Expression scale focuses on the syntax and intent of the sentence, but not necessarily the absolute veracity. For example, a sentence could be presented as a fact but may be completely false or completely true. It wouldn’t be accurate to characterize a false statement, presented as fact, as an “opinion.” A sentence presented as opinion is one that provides a strong conclusion, but can’t truly be verified or debunked, because it is a conclusion based on too many individual things. I’ll write more on this metric separately, but for now, I submit that it is an important one because it is a second dimension of ranking that can be applied consistently to any sentence. Also, I submit that a false or misleading statement that is presented as a fact is more damaging to a sentence’s credibility than a false or misleading statement presented as mere opinion.

The need for another metric became apparent when asking the question “what is this sentence for?” of each and every sentence. Sometimes, a sentence that is completely true and presented as fact can strike a reader as biased for some reason. There are several ways in which a sentence can be “biased,” even if true. For example, sentences that are not relevant to the current story, or not timely, or that provide a quote out of context, can strike a reader as unfair because they appear to be inserted merely for the purpose of persuasion. It is true that readers can be persuaded by any kind of fact or opinion, but it seems “fair” to use certain facts and opinions to persuade while unfair to use other kinds.

I submit that the following characteristics of sentences can make them seem unfair:

-Not relevant to present story

-Not timely

-Ad hominem (personal) attacks

-Name-calling

-Other character attacks

-Quotes inserted to prove the truth of what the speaker is saying

-Sentences including persuasive facts but which omit facts that would tend to prove the opposite point

-Emotionally-charged adjectives

-Any fact, analysis, or opinion statement that is based on false, misleading, or highly disputed premises

This is not an exhaustive list of what makes a sentence unfair, and I suspect that the more articles I analyze, the more accurate and comprehensive I can make this list over time. I welcome feedback on what other characteristics make a sentence unfair, and I’ll write more on this metric in the future. Admittedly, many of these factors have a subjective component. Some of the standards I used to make a call on whether a sentence was “fair” or “unfair” are the same ones in the Federal Rules of Evidence (i.e., the ones judges use to rule on objections in court). These rules define complex concepts such as relevance and permissible character evidence, and determine what is fair for a jury to consider in court. I have a sense that a set of comprehensive rules, similar to those for legal evidence, could be developed for journalistic fairness. For now, these initial identifiers of unfairness helped me spot unfair sentences in articles. I now use a “Fairness” metric in addition to the Veracity scale and the Expression scale. This metric has only two measures, and therefore requires a call to be made between:

  • Fair
  • Unfair
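As a way of summarizing the three metrics, here is a small sketch of how a single sentence’s scores could be recorded. The enum and field names are my own shorthand for the scales listed above, not an official schema.

```python
# Hypothetical per-sentence record combining the three metrics described
# above (Veracity, Expression, Fairness). Names are illustrative shorthand.
from dataclasses import dataclass
from enum import Enum

class Veracity(Enum):
    TRUE_AND_COMPLETE = 1
    MOSTLY_TRUE_OR_INCOMPLETE = 2
    MIXED_TRUE_AND_FALSE = 3
    MOSTLY_FALSE_OR_MISLEADING = 4
    FALSE = 5

class Expression(Enum):
    FACT = 1
    FACT_ANALYSIS = 2        # persuasively-worded fact
    ANALYSIS = 3             # well-supported by fact, reasonable
    ANALYSIS_OPINION = 4     # somewhat supported by fact
    OPINION = 5              # unsupported or based on highly disputed facts

class Fairness(Enum):
    FAIR = 1
    UNFAIR = 2

@dataclass
class SentenceScore:
    text: str
    veracity: Veracity
    expression: Expression
    fairness: Fairness

# Example: a sentence that is true and presented as fact, but judged unfair
# (say, a quote inserted out of context purely to persuade).
example = SentenceScore(
    text="...",
    veracity=Veracity.TRUE_AND_COMPLETE,
    expression=Expression.FACT,
    fairness=Fairness.UNFAIR,
)
```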

By identifying the percentage of sentences that were unfair, I was able to gain an additional perspective on what an overall article was doing, which helped me create more accurate descriptions of the types of articles on the vertical quality axis. In my previous chart (second edition), the fact-to-opinion metric was the primary basis for the vertical ranking descriptions, so it looked like this:

In using all three metrics, 1) the Veracity scale, 2) the fact-to-opinion Expression scale, and 3) the Fairness scale, I came up with what I believe are more accurate descriptions of article types, which look like this:

As shown, the top three categories are the same, but the lower ranked categories are more specifically described than in the previous version. The new categories are “Opinion; Fair Persuasion,” “Selective or Incomplete Story; Unfair Persuasion,” “Propaganda/Contains Misleading Facts,” and “Contains Inaccurate/ Fabricated Info.” If you look at the news sources that fall into these categories, I think you’ll find that these descriptions more accurately describe many of the stories within the sources.
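As a thought experiment, here is one way per-sentence scores might roll up into these new categories. The cutoffs and the mapping below are entirely my own illustrative assumptions; the post does not specify a formula.

```python
# Hypothetical roll-up from per-sentence labels to an article-level category.
# Thresholds and mapping are illustrative assumptions, not the chart's method.

def article_category(sentences):
    """sentences: list of dicts with 'veracity', 'expression', and 'fairness' labels."""
    n = len(sentences)

    def share(predicate):
        return sum(1 for s in sentences if predicate(s)) / n

    unfair = share(lambda s: s["fairness"] == "unfair")
    false_ish = share(lambda s: s["veracity"] in ("mostly false/misleading", "false"))
    opinion = share(lambda s: s["expression"] in ("analysis/opinion", "opinion"))

    if false_ish > 0.05:
        return "Contains Inaccurate/Fabricated Info"
    if unfair > 0.30:
        if false_ish > 0:
            return "Propaganda/Contains Misleading Facts"
        return "Selective or Incomplete Story; Unfair Persuasion"
    if opinion > 0.50:
        return "Opinion; Fair Persuasion"
    return "Fact Reporting or Analysis"

# Example: an article that is entirely true but leans on unfair framing.
example = (
    [{"veracity": "true", "expression": "fact", "fairness": "fair"}] * 6
    + [{"veracity": "true", "expression": "fact/analysis", "fairness": "unfair"}] * 4
)
print(article_category(example))  # -> Selective or Incomplete Story; Unfair Persuasion
```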

Thanks for reading about my media categorizing endeavors. I believe it is possible (though difficult) to categorize the news, and that doing so accurately is a worthy endeavor. In future posts and chart editions I’ll dive into other metrics I’ve been using and refining, such as those pertaining to partisanship, topic focus (e.g., story selection bias), and news source ownership.

If you would like a blank version for education purposes, here you go:

Third Edition Blank

And here is a lower-resolution version for download on mobile devices: