TL;DR: What’s new in this chart is:
- I edited the categories on the vertical axis to more accurately describe the contents of the news sources ranked therein (long discussion below).
- I stuffed as many sources (from both version 1.0 and 2.0, plus some new ones) on here as I could, in response to all the “what about ______ source” questions I got. Now the logos are pretty tiny. If you have a request for a ranking of a particular source, let me know in the comments.
- I changed the subheading under “Hyper-Partisan” from “questionable journalistic value” to “expressly promotes views.” This is because “hyper-partisan” does not always mean that the facts reported in the stories are necessarily “questionable.” Some analysis sources in these columns do good fact-finding in support of their expressly partisan stances. I didn’t want anyone to think those sources were necessarily “bad” just because they are hyper-partisan (though they could be “bad” for other reasons).
- I added a key that indicates what the circles and ellipses mean. They mean that a source within a particular circle or ellipse can often have stories that fall within that circle/ellipse’s range. This is, of course, not true for all sources.
- Green/Yellow/Orange/Red Key. Within each square: Green is news, yellow is fair interpretations of the news, orange is unfair interpretations of the news, and red is nonsense damaging to public discourse.
Just read this one more thing: It’s best to think of the position of a source as a weighted average of the positions of the stories within it. That is, I rank a source in a particular spot because most of its stories fall in that spot. However, I weight the ranking downward if the source has a significant number of stories (even if they are a minority) that fall in the orange or red areas. For example, if Daily Kos has 75% of its stories fall under yellow (e.g., “analysis” and “opinion, fair”), but 25% fall under orange (selective, unfair, hyper-partisan), it is rated overall in the orange. I rank sources like this because, in my view, orange and red-type content is damaging to the overall media landscape, and if a significant enough number of stories fall in those categories, readers should rely on the source less. This is a subjective judgment on my part, but I think it is defensible.
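The downgrade rule described above can be sketched in code. This is a minimal illustration only: the 25% threshold matches the Daily Kos example, but the function, category names, and tie-breaking are my own illustrative assumptions, not a published algorithm.

```python
# Illustrative sketch of the "weighted average, then downgrade" rule.
# The threshold and category shares are hypothetical assumptions.

ORDER = ["green", "yellow", "orange", "red"]  # best to worst

def overall_rating(shares, threshold=0.25):
    """shares: dict mapping category -> fraction of a source's stories."""
    # Start from the category holding the largest share of stories...
    rating = max(shares, key=shares.get)
    # ...then weight downward if orange or red content crosses the threshold.
    for worse in ("red", "orange"):
        if shares.get(worse, 0) >= threshold and ORDER.index(worse) > ORDER.index(rating):
            rating = worse
            break
    return rating

# The example from the text: 75% yellow, 25% orange -> rated orange overall.
print(overall_rating({"yellow": 0.75, "orange": 0.25}))  # -> orange
```

The point of the sketch is that the majority category alone does not determine the rating; a large enough minority of low-quality stories pulls the whole source down.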
OK, you can go now unless you just really love reading about this media analysis stuff. News nerds, proceed for more discussion about ranking the news.
As I discussed in my post entitled “The Chart, Second Edition: What Makes a News Source Good?” the most accurate and helpful way to analyze a news source is to analyze its individual stories, and the most accurate way to analyze an individual story is to analyze its individual sentences. I recently started a blog series where I rank individual stories on this chart and provide a written analysis that scores the article itself on a sentence-by-sentence basis, and separately scores the title, graphics, lede, and other visual elements. See a couple of examples here. Categorizing and ranking the news is hard to do because there are so very many factors. But I’m convinced that the most accurate way to analyze and categorize news is to look as closely at it as possible, and measure everything about it that is measurable. I think we can improve our media landscape by doing this and coming up with novel and accurate ways to rank and score the news, and then teaching others how to do the same. If you like how I analyze articles in my blog series, and have a request for a particular article, let me know in the comments. I’m interested in talking about individual articles, and what makes them good and bad, with you.
As I’ve been analyzing articles on an element-by-element, sentence-by-sentence basis, it became apparent to me that individual elements and sentences can be ranked or categorized in several ways, and that my chart needed some revisions for accuracy.
So far I have settled on at least three different dimensions, or metrics, upon which an individual sentence can be ranked. These are 1) the Veracity metric, 2) the Expression metric, and 3) the Fairness metric.
The primary way statements are currently evaluated in the news is on the basis of truthfulness, which is arguably the most important ranking metric. Several existing fact-checking sites, such as Politifact and Washington Post Fact Checker, use a scale to rate the veracity of statements; Politifact has six levels and Washington Post Fact Checker has four, reflecting that many statements are not entirely true or entirely false. I score each sentence on a similar “Veracity” metric, as follows:
- True and Complete
- Mostly True/ True but Incomplete
- Mixed True and False
- Mostly False or Misleading
Since many reputable organizations do this type of fact-checking work according to well-established industry standards (see, e.g., the Poynter International Fact-Checking Network), I do not replicate this work myself but rather rely on these sources for fact checking.
It is valid and important to rate articles and statements for truthfulness. But it is apparent that sentences can vary in quality in other ways. One way, which I discussed in my previous post (The Chart, Second Edition: What makes a News Source ‘Good’) is on what I call an “Expression” scale of fact-to-opinion. The Expression scale I use goes like this:
- (Presented as) Fact
- (Presented as) Fact/Analysis (or persuasively-worded fact)
- (Presented as) Analysis (well-supported by fact, reasonable)
- (Presented as) Analysis/Opinion (somewhat supported by fact)
- (Presented as) Opinion (unsupported by facts or by highly disputed facts)
In ranking stories and sentences, I believe it is important to distinguish between fact, analysis, and opinion, and to value fact-reporting as more essential to news than either analysis or opinion. Opinion isn’t necessarily bad, but it’s important to distinguish that it is not news, which is why I rank it lower on the chart than analysis or fact reporting.
Note that the ranking here includes whether something is “presented as” fact, analysis, etc. This Expression scale focuses on the syntax and intent of the sentence, but not necessarily its absolute veracity. For example, a sentence could be presented as a fact but be completely false or completely true. It wouldn’t be accurate to characterize a false statement, presented as fact, as an “opinion.” A sentence presented as opinion is one that provides a strong conclusion but can’t truly be verified or debunked, because it is a conclusion drawn from too many underlying considerations to check individually. I’ll write more on this metric separately, but for now, I submit that it is an important one because it is a second dimension of ranking that can be applied consistently to any sentence. I also submit that a false or misleading statement presented as fact is more damaging to a sentence’s credibility than a false or misleading statement presented as mere opinion.
The need for another metric became apparent when asking the question “what is this sentence for?” of each and every sentence. Sometimes, a sentence that is completely true and presented as fact can strike a reader as biased for some reason. There are several ways in which a sentence can be “biased,” even if true. For example, sentences that are not relevant to the current story, or not timely, or that provide a quote out of context, can strike a reader as unfair because they appear to be inserted merely for the purpose of persuasion. It is true that readers can be persuaded by any kind of fact or opinion, but it seems “fair” to use certain facts and opinions to persuade while unfair to use other kinds.
I submit that the following characteristics of sentences can make them seem unfair:
- Not relevant to the present story
- Ad hominem (personal) attacks
- Other character attacks
- Quotes inserted to prove the truth of what the speaker is saying
- Sentences including persuasive facts but which omit facts that would tend to prove the opposite point
- Any fact, analysis, or opinion statement that is based on false, misleading, or highly disputed premises
This is not an exhaustive list of what makes a sentence unfair, and I suspect that the more articles I analyze, the more accurate and comprehensive I can make this list over time. I welcome feedback on what other characteristics make a sentence unfair, and I’ll write more on this metric in the future. Admittedly, many of these factors have a subjective component. Some of the standards I used to make a call on whether a sentence was “fair” or “unfair” are the same ones in the Federal Rules of Evidence (i.e., the ones that judges use to rule on objections in court). These rules define complex concepts such as relevance and permissible character evidence, and determine what is fair for a jury to consider in court. I have a sense that a comprehensive set of rules for journalism fairness, similar to the rules for legal evidence, could be developed. For now, these initial identifiers of unfairness helped me detect the presence of unfair sentences in articles. I now use a “Fairness” metric in addition to the Veracity scale and the Expression scale. This metric only has two measures, and therefore requires a call to be made between:
- Fair
- Unfair
By identifying a percentage of sentences that were unfair, I was able to gain an additional perspective on what an overall article was doing, which helped me create some more accurate descriptions of types of articles on the vertical quality axis. In my previous chart (second edition), the fact-to-opinion metric was the primary basis for the vertical ranking descriptions, so it looked like this:
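The three per-sentence metrics, and the percent-unfair calculation described above, can be sketched as follows. The field names and example scores here are illustrative assumptions of mine, not the author’s actual scoring schema.

```python
# Hypothetical sketch: scoring each sentence on the Veracity, Expression,
# and Fairness metrics, then computing the share of unfair sentences.
from dataclasses import dataclass

@dataclass
class Sentence:
    veracity: str    # e.g. "true_complete" ... "mostly_false"  (4-level scale)
    expression: str  # e.g. "fact" ... "opinion"                (5-level scale)
    fair: bool       # the two-valued Fairness call

def percent_unfair(sentences):
    """Percentage of sentences judged unfair."""
    return 100 * sum(not s.fair for s in sentences) / len(sentences)

# A toy four-sentence "article": two fair sentences, two unfair ones.
article = [
    Sentence("true_complete", "fact", True),
    Sentence("true_complete", "fact_analysis", True),
    Sentence("mixed", "analysis_opinion", False),
    Sentence("mostly_false", "opinion", False),
]
print(percent_unfair(article))  # -> 50.0
```

Note that Fairness is independent of the other two scales: a sentence can be entirely true, presented as fact, and still be flagged unfair (e.g., an out-of-context quote).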
Using all three metrics, 1) the Veracity scale, 2) the fact-to-opinion Expression scale, and 3) the Fairness scale, I came up with what I believe are more accurate descriptions of article types, which look like this:
As shown, the top three categories are the same, but the lower ranked categories are more specifically described than in the previous version. The new categories are “Opinion; Fair Persuasion,” “Selective or Incomplete Story; Unfair Persuasion,” “Propaganda/Contains Misleading Facts,” and “Contains Inaccurate/ Fabricated Info.” If you look at the news sources that fall into these categories, I think you’ll find that these descriptions more accurately describe many of the stories within the sources.
Thanks for reading about my media categorizing endeavors. I believe it is possible (though difficult) to categorize the news, and that doing so accurately is a worthy endeavor. In future posts and chart editions I’ll dive into other metrics I’ve been using and refining, such as those pertaining to partisanship, topic focus (e.g., story selection bias), and news source ownership.
If you would like a blank version for education purposes, here you go:
And here is a lower-resolution version for download on mobile devices: