
The Chart, Version 3.0: What, Exactly, Are We Reading?

 

Summary: What’s new in this chart:

  • I edited the categories on the vertical axis to more accurately describe the contents of the news sources ranked therein (long discussion below).
  • I stuffed as many sources (from both versions 1.0 and 2.0, plus some new ones) on here as I could, in response to all the “what about ______ source” questions I got. Now the logos are pretty tiny. If you have a request for a ranking of a particular source, let me know in the comments.
  • I changed the subheading under “Hyper-Partisan” from “questionable journalistic value” to “expressly promotes views.” This is because “hyper-partisan” does not always mean that the facts reported in the stories are necessarily “questionable.” Some analysis sources in these columns do good fact-finding in support of their expressly partisan stances. I didn’t want anyone to think those sources were necessarily “bad” just because they are hyper-partisan (though they could be “bad” for other reasons).
  • I added a key that indicates what the circles and ellipses mean. They mean that a source within a particular circle or ellipse can often have stories that fall within that circle/ellipse’s range. This is, of course, not true for all sources.
  • Green/Yellow/Orange/Red Key. Within each square: Green is news, yellow is fair interpretations of the news, orange is unfair interpretations of the news, and red is nonsense damaging to public discourse.

Just read this one more thing: It’s best to think of the position of a source as a weighted average position of the stories within it. That is, I rank a source in a particular spot because most of its stories fall in that spot. However, I weight the ranking downward if the source has a significant number of stories (even if they are a minority) that fall in the orange or red areas. For example, if Daily Kos has 75% of its stories fall under yellow (e.g., “analysis” and “opinion, fair”), but 25% fall under orange (selective, unfair, hyper-partisan), it is rated overall in the orange. I rank sources like this because, in my view, orange and red-type content is damaging to the overall media landscape, and if a significant enough number of stories fall in those categories, readers should rely on the source less. This is a subjective judgment on my part, but I think it is defensible.
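To make that weighting concrete, here is a minimal sketch in Python. The category shares, the 20% penalty threshold, and the function name are my own illustrative assumptions, not exact parameters of the chart:

```python
# A minimal sketch of the weighted placement with a downward penalty.
# The 20% threshold and the category shares are illustrative assumptions.

QUALITY_ORDER = ["green", "yellow", "orange", "red"]  # best to worst

def overall_rating(category_shares: dict[str, float]) -> str:
    """Place a source at its dominant category, then weight it downward if a
    significant minority of its stories fall in the orange or red areas."""
    dominant = max(category_shares, key=category_shares.get)
    damaging = category_shares.get("orange", 0.0) + category_shares.get("red", 0.0)
    if damaging >= 0.20 and QUALITY_ORDER.index(dominant) < QUALITY_ORDER.index("orange"):
        return "orange"  # enough damaging stories pulls the overall rating down
    return dominant

# The Daily Kos example from the text: 75% yellow, 25% orange -> "orange"
print(overall_rating({"yellow": 0.75, "orange": 0.25}))
```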

OK, you can go now unless you just really love reading about this media analysis stuff. News nerds, proceed for more discussion about ranking the news.

As I discussed in my post entitled “The Chart, Second Edition: What Makes a News Source Good?” the most accurate and helpful way to analyze a news source is to analyze its individual stories, and the most accurate way to analyze an individual story is to analyze its individual sentences. I recently started a blog series where I rank individual stories on this chart and provide a written analysis that scores the article itself on a sentence-by-sentence basis, and separately scores the title, graphics, lede, and other visual elements. See a couple of examples here. Categorizing and ranking the news is hard to do because there are so very many factors. But I’m convinced that the most accurate way to analyze and categorize news is to look as closely at it as possible, and measure everything about it that is measurable. I think we can improve our media landscape by doing this and coming up with novel and accurate ways to rank and score the news, and then teaching others how to do the same. If you like how I analyze articles in my blog series, and have a request for a particular article, let me know in the comments. I’m interested in talking about individual articles, and what makes them good and bad, with you.

As I’ve been analyzing articles on an element-by-element, sentence-by-sentence basis, it became apparent to me that individual elements and sentences can be ranked or categorized in several ways, and that my chart needed some revisions for accuracy.

So far I have settled on at least three different dimensions, or metrics, upon which an individual sentence can be ranked. These are 1) the Veracity metric, 2) the Expression metric, and 3) the Fairness metric.

The primary way statements are currently evaluated in the news is on the basis of truthfulness, which is arguably the most important ranking metric. Several existing fact-checking sites, such as Politifact and the Washington Post Fact Checker, use a scale to rate the veracity of statements; Politifact has six levels and the Washington Post Fact Checker has four, reflecting that many statements are not entirely true or entirely false. I score each sentence on a similar “Veracity” metric, as follows:

  • True and Complete
  • Mostly True/ True but Incomplete
  • Mixed True and False
  • Mostly False or Misleading
  • False

Since there are many reputable organizations that do this type of fact-checking work according to well-established industry standards (see, e.g., the Poynter International Fact-Checking Network), I do not replicate this work myself but rather rely on these sources for fact checking.

It is valid and important to rate articles and statements for truthfulness. But it is apparent that sentences can vary in quality in other ways. One way, which I discussed in my previous post (The Chart, Second Edition: What Makes a News Source “Good”?), is on what I call an “Expression” scale of fact-to-opinion. The Expression scale I use goes like this:

  • (Presented as) Fact
  • (Presented as) Fact/Analysis (or persuasively-worded fact)
  • (Presented as) Analysis (well-supported by fact, reasonable)
  • (Presented as) Analysis/Opinion (somewhat supported by fact)
  • (Presented as) Opinion (unsupported by facts or by highly disputed facts)

In ranking stories and sentences, I believe it is important to distinguish between fact, analysis, and opinion, and to value fact-reporting as more essential to news than either analysis or opinion. Opinion isn’t necessarily bad, but it’s important to distinguish that it is not news, which is why I rank it lower on the chart than analysis or fact reporting.

Note that the ranking here includes whether something is “presented as” fact, analysis, etc. This Expression scale focuses on the syntax and intent of the sentence, not necessarily its absolute veracity. For example, a sentence could be presented as a fact but be completely false or completely true. It wouldn’t be accurate to characterize a false statement, presented as fact, as an “opinion.” A sentence presented as opinion is one that provides a strong conclusion but can’t truly be verified or debunked, because it is a conclusion drawn from too many individual premises. I’ll write more on this metric separately, but for now, I submit that it is an important one because it is a second dimension of ranking that can be applied consistently to any sentence. Also, I submit that a false or misleading statement presented as a fact is more damaging to a sentence’s credibility than a false or misleading statement presented as mere opinion.

The need for another metric became apparent when asking the question “what is this sentence for?” of each and every sentence. Sometimes, a sentence that is completely true and presented as fact can strike a reader as biased for some reason. There are several ways in which a sentence can be “biased,” even if true. For example, sentences that are not relevant to the current story, or not timely, or that provide a quote out of context, can strike a reader as unfair because they appear to be inserted merely for the purpose of persuasion. It is true that readers can be persuaded by any kind of fact or opinion, but it seems “fair” to use certain facts and opinions to persuade while unfair to use other kinds.

I submit that the following characteristics of sentences can make them seem unfair:

-Not relevant to present story

-Not timely

-Ad hominem (personal) attacks

-Name-calling

-Other character attacks

-Quotes inserted to prove the truth of what the speaker is saying

-Sentences including persuasive facts but which omit facts that would tend to prove the opposite point

-Emotionally-charged adjectives

-Any fact, analysis, or opinion statement that is based on false, misleading, or highly disputed premises

This is not an exhaustive list of what makes a sentence unfair, and I suspect that the more articles I analyze, the more accurate and comprehensive I can make this list over time. I welcome feedback on what other characteristics make a sentence unfair, and I’ll write more on this metric in the future. Admittedly, many of these factors have a subjective component. Some of the standards I use to make a call on whether a sentence is “fair” or “unfair” are the same ones found in the Federal Rules of Evidence (i.e., the ones that judges use to rule on objections in court). These rules define complex concepts such as relevance and permissible character evidence, and determine what is fair for a jury to consider in court. I have a sense that a comprehensive set of rules, similar to those for legal evidence, could be developed for journalism fairness. For now, these initial identifiers of unfairness helped me spot unfair sentences in articles. I now use a “Fairness” metric in addition to the Veracity scale and the Expression scale. This metric has only two measures (a sketch of how all three metrics fit together follows this list), and therefore requires a call to be made between:

  • Fair
  • Unfair
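For concreteness, here is a minimal sketch of how the three per-sentence metrics might be represented and tallied. The enum names, the dataclass layout, and the helper function are my own illustrative choices, not a published schema:

```python
# Illustrative data model for scoring a sentence on the three metrics.
# Names and structure are assumptions for the sketch.
from dataclasses import dataclass
from enum import Enum

class Veracity(Enum):
    TRUE_AND_COMPLETE = 1
    MOSTLY_TRUE = 2
    MIXED = 3
    MOSTLY_FALSE_OR_MISLEADING = 4
    FALSE = 5

class Expression(Enum):
    FACT = 1
    FACT_ANALYSIS = 2
    ANALYSIS = 3
    ANALYSIS_OPINION = 4
    OPINION = 5

class Fairness(Enum):
    FAIR = 1
    UNFAIR = 2

@dataclass
class SentenceScore:
    text: str
    veracity: Veracity
    expression: Expression
    fairness: Fairness

def percent_unfair(sentences: list[SentenceScore]) -> float:
    """Share of sentences rated unfair, used for the article-level view below."""
    unfair = sum(1 for s in sentences if s.fairness is Fairness.UNFAIR)
    return unfair / len(sentences)
```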

By identifying a percentage of sentences that were unfair, I was able to gain an additional perspective on what an overall article was doing, which helped me create some more accurate descriptions of types of articles on the vertical quality axis. In my previous chart (second edition), the fact-to-opinion metric was the primary basis for the vertical ranking descriptions, so it looked like this:

In using all three metrics, 1) the Veracity scale, 2) the fact-to-opinion Expression scale, and 3) the Fairness scale, I came up with what I believe are more accurate descriptions of article types, which look like this:

As shown, the top three categories are the same, but the lower ranked categories are more specifically described than in the previous version. The new categories are “Opinion; Fair Persuasion,” “Selective or Incomplete Story; Unfair Persuasion,” “Propaganda/Contains Misleading Facts,” and “Contains Inaccurate/ Fabricated Info.” If you look at the news sources that fall into these categories, I think you’ll find that these descriptions more accurately describe many of the stories within the sources.

Thanks for reading about my media categorizing endeavors. I believe it is possible (though difficult) to categorize the news, and that doing so accurately is a worthy endeavor. In future posts and chart editions I’ll dive into other metrics I’ve been using and refining, such as those pertaining to partisanship, topic focus (e.g., story selection bias), and news source ownership.

If you would like a blank version for education purposes, here you go:

Third Edition Blank

And here is a lower-resolution version for download on mobile devices:


Observations on The Chart by Law Professor Maxwell Stearns of U. Maryland

Law professor Maxwell Stearns, who blogs about law, politics, and culture, recently published this post about the chart, which has several useful insights about 1) distilling the ranking criteria into sub-categories, 2) why the sources on the chart form a bell curve, and 3) how the rankings might be made more scientific. Give it a read!


Everybody has an Opinion on CNN

I get the most feedback by far on CNN, and, in comparison to feedback on other sources on the chart, CNN is unusual because I get feedback that it should be moved in all the different directions (up, down, left, and right). Further, most people who give me feedback on other sources suggest that I should just nudge a source one way or another a bit. In contrast, many people feel very strongly that CNN should be moved significantly in the direction they believe is right.

I believe there are a few main reasons I am getting this kind of feedback.

  • CNN is the source most people are most familiar with. It was the first, and is the longest-running, 24-hour cable news channel. It’s on at hotels, airports, gyms, and your parents’ house. Even if people are critics of nothing else, they will be critics of CNN, because they are most familiar with it.
  • CNN is widely talked about by other media outlets, and by conservative media outlets in particular, which often describe it as crazy-far left. Usually those who tell me it needs to move far to the left are the ones reading conservative media—no surprise there.
  • People tend to base their opinions of CNN on what leaves the biggest impression on them, and there are a lot of aspects that can leave an impression:
    a. For some people, who can have CNN on in the background during the day and thereby see a large sampling of its news coverage, the lasting impression is that the programming is mostly accurate and informs them of a lot of US news they are interested in. These individuals tend to think that CNN should be ranked higher, perhaps all the way up in “fact-reporting” and “mainstream.”
    b. For others, what leaves an impression is that they can tune into CNN for breathless, non-stop coverage of an impending disaster, like a hurricane, or a breaking tragedy, such as a mass shooting. People can take a few different kinds of impressions from this. First, that they can count on the fact that all the known facts will be repeated to them within 10 minutes of tuning in. That’s another reason to put CNN up in “fact-reporting.” Second, more savvy observers know that CNN makes not-infrequent mistakes and often jumps the gun in these situations. Its anchors usually qualify their statements properly, but they will still blurt out details about a suspect, the number of shooters, or fatalities that are not yet verified. That causes some people to rank CNN lower on the fact-reporting scale. Third, people know that once CNN runs out of current material to talk about, it will bring on analysts on all related (or unrelated) subjects (e.g., lawyers, criminologists, climate change scientists), often for several days following the story. This tends to leave people with the impression that CNN provides a lot of analysis and opinion (including much that is valid and important) in addition to fact reporting. So a ranking somewhere along the analysis/opinion spectrum (a little above where I have it) seems appropriate.
    c. For yet others, the kind of coverage that leaves the biggest impression is the kind that includes interviews and panels of political commentators. The contributors and guests CNN has on for political commentary range widely in quality, from “voter who knows absolutely nothing about what he is talking about” to “extremely partisan, unreliable political surrogate” to “experienced expert who provides good insight.” People who pay attention to this kind of coverage note that CNN does a few crazy things.
      1. First, they run a chyron (the big banner on the bottom of the screen) that says “Breaking News:…” followed by something that is clearly not breaking news. For example: “Breaking: Debate starts in one hour.” Eye roll. That debate has been planned for months and is not breaking. Further, they run a chyron for almost everything, which seems unnecessary and sensationalist, and the practice has been adopted by MSNBC, FOX, and others. Often, the chyron’s content itself is sensationalist.
      2. Second, in the supposed interest of being “balanced” and “showing both sides,” they often have extreme representatives from each side of the political spectrum debating each other. This practice airs and lends credibility to some extreme, highly disputed positions. Balance, I think, would be better represented by having guests with more moderate positions. Interviews with Kellyanne Conway, who often says things that are untrue or misleading and makes highly disputed opinion statements, are something else entirely. Even though the hosts challenge her, it often appears that the whole point of having her as a guest is to showcase how incredulous the anchors are at her statements. This seems to fall outside the purpose of news reporting. What’s worse, though (to me, anyway), is that they will hire partisan representatives as actual contributors and commentators, which gives them even more credibility as sources one should listen to about the news, even though they have a clear partisan, non-news agenda. They hired Jeffrey Lord, who routinely made the most outlandish statements in support of Trump, and Trump’s ACTUAL former campaign manager, Corey Lewandowski. That was mind-boggling in terms of its lack of journalistic precedent (and ethics), and seemed to be done for sensationalism and ratings rather than for the purpose of news reporting, which is to deliver facts. Those hires were a big investment in providing opinion. I think it was extremely indicative of CNN’s reputation for political sensationalism when the Hill ran two headlines within a few weeks of each other saying something like “CNN confirms it will not be hiring Sean Spicer as a contributor” and “CNN confirms it will not be hiring Anthony Scaramucci as a contributor” shortly after each of their firings.
      3. Third, their coverage is heavily focused on American political drama. I’ll elaborate on this in a moment.

Personally, the topics discussed in (c) left the biggest impression on me. That is why I have them ranked on the line between “opinion, fair persuasion” and “selective or incomplete story, unfair persuasion.” The impact of the guests and contributors who present unfair and misleading statements and arguments really drives down CNN’s ranking in my view. I have them slightly to the left of center, though, because they tend to have a higher quantity of guests with left-leaning positions.

 

I have just laid out that my ranking is driven in large part by a subjective measure rather than an objective, quantitative one. An objective, quantitative approach would take all the shows, stories, segments, and guests, analyze all the statements made, and say, on a percentage basis, how many of those statements were facts, opinions, analysis, fair or unfair, misleading, untrue, etc. I have not done this analysis, but I would guess that a large majority of the statements made in a 24-hour period on CNN would fall into reputable categories (fair, factual, impartial). Perhaps even 80% or more would fall into those categories. So one could reasonably argue that CNN deserves to be higher; say, 80% of the way up (or whatever the actual number is), if that is how you wanted to rank it.

However, I argue for the inclusion of a subjective assessment that comes from the question “what impression does this source leave?” Related questions are “what do people rely on this source for,” “what do they watch it for,” and “what is the impact on other media?” I submit that the opinion and analysis panels and interviews, with their often-unreliable guests, leave the biggest impression and make up a large portion of what people rely on and watch CNN for. I also submit that these segments make the biggest impact in the rest of media and society. For example, other news outlets will run news stories, the content of which are “Here’s the latest crazy thing KellyAnne said on CNN.” These stories make a significant number of impressions on social media, therefore amplifying what these guests say.

I also include a subjective measure that pushes CNN into the “selective or incomplete story” category, which comes from trying to look at what’s not there: what’s missing. In the case of CNN, given its resources as a 24-hour news network, I feel like a lot is missing. It focuses on American political drama and the latest domestic disaster at the expense of everything else. With those resources and time, it could inform Americans about the famine in South Sudan, the war in Yemen, and the refugees fleeing Myanmar, along with so many other important stories around the world. It could do a lot more storytelling about how current legislation and policies impact the lives of people here and around the world. Its focus on White House palace intrigue inaccurately, and subliminally, conveys that those are the most important stories, and that, I admit, just makes me mad.

Many reasonable arguments can be made for the placement of CNN as a whole, but a far more accurate way to rank the news on CNN is to rank an individual show or story. People can arrive at a consensus ranking much more easily when doing that. I will be doing that on future graphs (I know you can’t wait for a whole graph just on CNN, and I can’t either!) for individual news outlets.

 


Not “Fake News,” But Still Awful for Other Reasons: Analysis of Two Examples from The Echo Chambers This Week

The term “fake news” is problematic for a number of reasons, one of which is that it is widely used to mean anything from “outright hoax” to “some information I do not like.” Therefore, I refrain from using the term to describe media sources at all.

Besides that, I refrain from discussing the term because I submit that the biggest problem in our current media landscape is not “hoax” stories that could legitimately be called “fake news.” What is far more damaging to our civic discourse are articles and stories that are mostly, or even completely, based on the truth, but which are of poor quality for other reasons.

The ways in which articles can be awful are many. Further, not all awful articles are awful in the same way. For these reasons, it is difficult to point out to most casual news readers how an article that is 90% true, or even 100% true, can be biased, unfair, or otherwise deviate from respectable journalistic practices.

This post is the first in a series I plan to do in which I visually rank one or more recent articles on my chart and provide an in-depth analysis of why each particular article is ranked in that spot.  My analysis includes discussions of the headlines, graphics, other visual elements, and the article itself. I analyze each element and each sentence by asking “what is this element/sentence doing?”

This week, I break down one article from the right (from the Daily Wire, entitled “TRUMP WAS RIGHT: Gold Star Widow Releases Trump’s Call After Husband Was Killed in Afghanistan”) and one from the left (from Pink News, entitled “Bill O’Reilly caught in $32 million Fox News gay adult films scandal”).

 

  • From the Left: Article Ranking and Analysis of:

http://www.pinknews.co.uk/2017/10/24/bill-oreilly-caught-in-32-million-fox-news-gay-adult-films-scandal/

Source: Pink News

Author: Benjamin Butterworth

Date: October 24, 2017

Total Word Count: 706

I. Title: Bill O’Reilly caught in $32 million Fox News gay adult films scandal

Title Issues:

Misleading about underlying facts

There is no current, known scandal involving Fox News and gay adult films. Bill O’Reilly settled a $32 million sexual harassment lawsuit while employed by Fox, and one of the allegations was that he sent a woman gay porn. However, the title suggests some sort of major financial involvement by Fox News in gay adult films. The title also makes no mention of the lawsuit settlement.

                        Misleading about content of article

The article is actually about the sexual harassment settlement, with one mention of the allegation of sending gay porn, the actions of Fox News in relation to O’Reilly’s employment after the settlement, and a listing of O’Reilly’s past anti-gay statements.

                        Misleading content is sensationalist/clickbait

II. Graphics: The lead image, linked in social media postings, is this:

 

          Graphics Issues:

                        Misleading regarding content of article:

The image is half a gay porn scene and half Bill O’Reilly, which would lead a reader to expect that the topic of gay porn makes up a significant portion of the article—perhaps up to half.

                        Misleading content is sensationalist/clickbait

The image is salacious and relies on people’s interest in what they perceive as sexual misbehavior and/or hypocrisy of others.

                        Image is a stock photo not related to a particular fact in the article

III. Other Elements (Lead Quote): “Anti-gay former Fox News host Bill O’Reilly is caught up in a $32 million gay porn lawsuit.”

Element Issues:

Inaccurate regarding underlying facts

The $32 million lawsuit cannot be accurately characterized as being “about” gay porn. It is most accurately characterized as a sexual harassment (or related tort) lawsuit.

                        Inaccurate in relation to facts stated in article

The article itself states: “Now the New York Times (NYT) has claimed that, in January, O’Reilly agreed to pay $32 million to settle a sexual harassment lawsuit filed against him.”

Adjective describing subject of article selected for partisan effect

“Anti-gay” is used to describe Bill O’Reilly in order to make a point, to the site’s pro-LGBT audience, that O’Reilly is especially despicable beyond the transgressions that are the subject of the lawsuit being reported upon.

IV. Article:

            Genres:

  1. Embellished Reporting (i.e., reporting the timely story, plus other stuff)

Reports the current sexual harassment settlement story, relevant related timeline of events, plus extraneous information about how O’Reilly is anti-gay

  2. Promotion of Idea

                                    Idea that Bill O’Reilly is a bad person particularly because he is anti-gay

Sentence Breakdown:

                        706 total words, 28 sentences/quotes

Factual Accuracy:

% Inaccurate sentences: 0 out of 28 sentences (0%) inaccurate

% Misleading Sentences: 0 out of 28 sentences (0%) are misleading

 

                        Sentence Type by Fact, Analysis, and Opinion:

% Fact/ Quoted Statements: 24/28 (86%)

% Fact/Quoted Statements with adjectives: 2/28 (7%)

% Analysis Statements: 1/28 (3.5%)

% Analysis/Opinion Statements: 1/28 (3.5%)

% Opinion Statements: 0

 

Sentence Type by Fair/Unfair Influence:

                        % Fair: 20/28 (71%)

Sentences 1-7, and 9-20 rated as “fair” because they are factual, relevant to the current story, and timely.

% Unfair: 8/28 (29%)

Sentences 8, 20-28 rated as “unfair” because they are untimely, unrelated to title, and used for idea promotion

Overall Article Quality Rating: Selective Story; Unfair Influence

Main reasons:

-29% of sentences included for unfair purpose

-Anything with over 10% unfair-influence sentences can fairly be rated in this category

-Title, Graphics, Lead element all extremely misleading

Overall Partisan Bias Rating: HYPER-PARTISAN (Liberal)

Main reasons:

  • Focus on pro-LGBT message even though underlying story is very loosely related to LGBT issues

 

  • From the Right: Article Ranking and Analysis of:

http://www.dailywire.com/news/22540/trump-was-right-gold-star-widow-releases-trumps-ryan-saavedra

Source: The Daily Wire

Author: Ryan Saavedra

Date: October 20, 2017

Total Word Count: 257

 

 

I. Title: TRUMP WAS RIGHT: Gold Star Widow Releases Trump’s Call After Husband Was Killed in Afghanistan

Title Issues:

Contains all caps statement of “TRUMP WAS RIGHT”

-Capitalization is sensationalist

Contains conclusory opinion statement of “TRUMP WAS RIGHT”

Directly appeals to confirmation bias with “TRUMP WAS RIGHT”

People likely to believe Trump is right in general are the most likely to click on, read, and/or share this, and are most likely to believe the contents of the article at face value

Misleading regarding the context of current events in the portion that says “Gold Star Widow Releases Trump’s Call After Husband Was Killed in Afghanistan” (see explanation after the next issue)

Omits relevant context of current events occurring between approximately Oct 16 and Oct 20, 2017, the four days preceding this article’s publication

-In the context of a controversy over a disputed phone call between Trump and a different black Gold Star widow than the one this article is about, in which the existence of a recording of the call was also disputed, the omission of the fact that this is a different black Gold Star widow who received a call from Trump is misleading. It is misleading because it is likely to confuse readers who are unfamiliar with the specific facts of the current controversy, such as 1) the names of the widow and soldier, Myeshia Johnson and Sgt. La David Johnson, 2) what they look like, and 3) where he was killed.

 

II. Graphic Elements: An accurate photo of the widow who is the subject of the story (Natasha DeAlencar) and her fallen soldier husband (Staff Sgt. Mark DeAlencar)

Graphics Issue:

Accurate photo juxtaposed with other problematic elements

-Though the photo is accurate, its position next to the title may lead readers who are uninformed as to the underlying facts to believe that this call is the one at issue in the current controversy between Myeshia Johnson and President Trump

III. Other Elements (Lead Quote): “Say hello to your children, and tell them your father, he was a great hero that I respected.”

Element Issue:

Accurate quote juxtaposed with other problematic elements

-Similar to the photo, though the quote is accurate, its position next to the title and photo may lead readers who are uninformed as to the underlying facts to believe that this particular quote came from the controversial call between Myeshia Johnson and President Trump

IV. Article:

Genres:

-Storytelling

Here, the story of one widow’s experience

-Promotion of ideas

Here, the promotion of the idea that Trump is respectful and kind, and of the idea that the media is deceitful

Sentence Breakdown:

257 words; 12 sentences

Factual Accuracy:

% Inaccurate sentences: 1 out of 12 sentences (8%) inaccurate

Quote from article: “In response to a claim by a Florida congresswoman this week claiming that President Donald Trump is disrespectful to the loved ones of fallen American soldiers, an African-American Gold Star widow released a video of a phone conversation she had with the President in April about the death of her husband who was killed in Afghanistan.”

  • The widow did not release the video “in response to a claim by a Florida congresswoman.” She released it in response to inquiries from reporters in the wake of the controversy between Myeshia Johnson and Trump.[1]

 

  • The congresswoman, Frederica Wilson, did not say generally that Trump is disrespectful to the loved ones of fallen American soldiers. She said to a local Miami news station, about Trump’s particular comments to Myeshia Johnson, “Yeah, he said that. So insensitive. He should not have said that.”[2] All recent instances of her talking about the President’s conduct are in the context of this incident.[3]

% Misleading Sentences: 1 out of 12 sentences (8%) are misleading

Quote from article: “The video comes a day after White House Chief of Staff John Kelly gave an emotional speech during the White House press briefing on how disgusting it was that the media would intentionally distort the words of the President to attack him over the death of a fallen American hero.”

 

This quote makes it sound like the media took the words from this call in the video and distorted them to attack the President. The words that are the subject of the controversy in the current Johnson call are not quoted in this article at all. This sentence uses a strong adjective—“disgusting”—to describe an action, and the context of this sentence may lead readers to think the “disgusting” action was the media taking these kind words and reporting different, false, insensitive words.

Sentence Type by Fact, Analysis, and Opinion:

% Fact/ Quoted Statements: 9/12 (75%)

% Fact/Quoted Statements with adjectives: 3/12 (25%)

% Analysis Statements: 0

% Analysis/Opinion Statements: 0

% Opinion Statements: 0

 

Sentence Type by Fair/Unfair Influence:

% Fair: 10/12 (83%)

Sentences 2-11 rated as “fair” because they are factual and relevant to the underlying story

% Unfair: 2/12 (17%)

Sentence 1 rated as “unfair” because inaccurate statements are generally used unfairly for persuasion

Sentence 12 rated as “unfair” because misleading statements are generally used unfairly for persuasion

Overall Article Quality Rating: Propaganda/Contains Misleading Facts

Main reasons:

-Anything over 0% inaccurate is automatically rated at least this low

-Anything over 2% misleading is automatically rated at least this low (a sketch of these threshold rules follows this analysis)

-Title, Graphics, Lead element all misleading

Overall Partisan Bias Rating: HYPER-PARTISAN

Main reasons:

  • Opinion statement in title
  • Misleading and inaccurate statements used for purpose of promoting partisan ideas
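To make the rating floors used in these two analyses concrete, here is a minimal sketch in Python. The thresholds (over 0% inaccurate, over 2% misleading, over 10% unfair) are the ones stated above; the function name, the rating order, and treating the rules as a simple “worst floor wins” computation are my own illustrative assumptions:

```python
# Sketch of the automatic rating floors described in these analyses.
# Rating names follow the chart's vertical categories; the ordering and
# the "worst floor wins" logic are illustrative assumptions.

RATINGS = [  # best to worst
    "Fact Reporting",
    "Analysis",
    "Opinion; Fair Persuasion",
    "Selective or Incomplete Story; Unfair Persuasion",
    "Propaganda/Contains Misleading Facts",
    "Contains Inaccurate/Fabricated Info",
]

def quality_floor(pct_inaccurate: float, pct_misleading: float,
                  pct_unfair: float) -> str:
    """Return the lowest (worst) rating floor triggered by the thresholds."""
    floor = 0  # index into RATINGS; 0 is the best rating
    if pct_unfair > 0.10:
        floor = max(floor, RATINGS.index("Selective or Incomplete Story; Unfair Persuasion"))
    if pct_inaccurate > 0.0 or pct_misleading > 0.02:
        floor = max(floor, RATINGS.index("Propaganda/Contains Misleading Facts"))
    return RATINGS[floor]

# The Pink News example: 0% inaccurate, 0% misleading, 29% unfair
print(quality_floor(0.00, 0.00, 0.29))  # Selective or Incomplete Story; Unfair Persuasion
# The Daily Wire example: 8% inaccurate, 8% misleading, 17% unfair
print(quality_floor(0.08, 0.08, 0.17))  # Propaganda/Contains Misleading Facts
```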

 

[1] https://www.washingtonpost.com/news/checkpoint/wp/2017/10/19/listen-soldiers-widow-shares-her-call-with-trump/?utm_term=.f615c80f2bd7

[2] https://www.local10.com/news/politics/trump-speaks-to-widow-of-sgt-la-david-johnson

[3] The author of this analysis is unaware of any general statement by Rep. Wilson that “Trump is disrespectful to the loved ones of fallen American soldiers,” but will revise this analysis if such quotes are brought to the author’s attention.


Top Six Red Flags that Identify a Conspiracy Theory Article

It can be tough to see your Facebook friends sharing conspiracy theory stories, and tough to respond to them effectively. Pointing it out and saying “that’s a conspiracy theory” doesn’t seem to be effective. But there are certain writing patterns and tropes that are common within such articles that make them compelling to some people. Sometimes, just pointing out patterns and tropes helps people see them for what they are.


The Chart, Second Edition: What Makes A News Source “Good?”

In my original news chart, I wrestled with the question of what makes news sources “good” and came up with some categories that generally resonated with people. I ranked sources on a vertical axis, with those at the top ranked as “high quality” and those at the bottom as “low quality.” I characterized the sources, from top to bottom, in this order: Complex, Analytical, Meets High Standards, Basic, and Sensational/Clickbait. This mostly works, because it results in sources regarded as high-brow or classy (e.g., The Atlantic, The Economist) being ranked high on the axis and trashy sources (e.g., Addicting Info, Conservative Tribune) being ranked low, and most sophisticated news consumers agree with that. However, the vertical placements ended up causing me and others some consternation, because some of the placements relative to other outlets didn’t make sense. The most common questions I got were along these lines:

“Does FOX News really “meet high standards,” on par with something like the New York Times?” (I think no.)

“Is USA Today really that bad?” (I think no.)

“Is Slate really “better” or “higher quality” than, say, AP or Reuters just because it is analytical?” (I think no.)

“Is CNN really that bad?” (I think yes.)

These questions and my instinctive responses to them made me want to reevaluate what makes news sources high or low quality.

I believe the answer to that question lies in what makes an individual article (or show/story/broadcast) high or low quality. Article quality can vary greatly even within the same news source. One should be able to rank an individual article on the chart in the same way one ranks a whole news source. So, what makes an article or story high or low quality? It’s hard to completely eliminate one’s own bias on that issue, but one way to try to do it consistently is to categorize and rate the actual sentences and words that make up the headline and the article itself. In order to rank any article on the chart in a consistent, objective-as-possible manner, I started doing sentence-by-sentence analyses of different types of articles.

In analyzing what kind of sentences make up articles, it became apparent that most sentences fall into (or in-between) the categories of 1) fact, 2) analysis, or 3) opinion. Based on the percentages of these kinds of sentences in an article, articles themselves can be classified in categories of fact, analysis, and opinion as well. Helpfully, some print newspapers actually label articles as “analysis” or “opinion.” However, most news sources, especially on TV or the internet, do not. I set about analyzing stories that were not pre-labeled as “analysis” or “opinion” on a sentence-by-sentence basis. I discovered that my overall impression of the quality of an article was largely a function of the proportion of fact sentences to analysis sentences to opinion sentences. As a result, I classified stories into “fact-reporting,” “analysis,” and “opinion” stories. Ones with high proportions of “fact” sentences (e.g., 90% + fact statements) were what I refer to here as traditional “fact-reporting” news pieces. These are the kinds of stories that have historically been the basis of late 20th century-to-early-21st century journalism, and what people used to refer to exclusively as “news.” They are the “who,” “what,” “when,” and “where” pieces (not necessarily “why”). I classified ones with high proportions of “analysis” sentences (e.g., 30%-50% analytical statements) as “analysis” stories, which are the types of stories commonly found in publications like The Economist or websites such as Vox. I classified stories with high proportions of opinion sentences (e.g., 30%-50% opinion statements) as “opinion,” which are typically the types of stories found on websites such as Breitbart or Occupy Democrats.
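As a rough sketch of this classification, assuming per-sentence labels like those described above (the cutoff values follow the example ranges in the text, but treating them as hard decision rules is my own simplification):

```python
# Illustrative classifier for article type from sentence-type proportions.
# Cutoffs follow the example ranges in the text (90%+ fact statements;
# 30%-50% analysis statements; 30%-50% opinion statements); using them as
# hard decision rules is an assumption of this sketch.

def classify_article(pct_fact: float, pct_analysis: float,
                     pct_opinion: float) -> str:
    if pct_fact >= 0.90:
        return "fact-reporting"
    if pct_opinion >= 0.30:
        return "opinion"
    if pct_analysis >= 0.30:
        return "analysis"
    return "mixed"

print(classify_article(0.92, 0.05, 0.03))  # fact-reporting (a traditional news piece)
print(classify_article(0.55, 0.40, 0.05))  # analysis (e.g., an Economist-style piece)
print(classify_article(0.50, 0.10, 0.40))  # opinion
```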

(If you’ve made it this far, bless your heart for caring so much about the news you read.)

In the past, national evening news programs, local evening news programs, and the front pages of print newspapers were dominated by fact-reporting stories. Now, however, many sources people consider to be “news sources” are actually dominated by analysis and opinion pieces. This chart ranks media outlets that people consider to be, at some level “news sources,” even though many of them are comprised entirely of analysis and opinion pieces.

In my previous version of the chart, I had regarded analysis pieces as “higher quality” than the fact-reporting pieces because they took the facts and applied them to form well-supported conclusions. I like analytical writing, which is essentially critical thinking. However, analysis has a lot in common with opinion, and writing that is intended to be analytical often strays into opinion territory. (Note—I’m defining “analysis” as conclusions well supported by facts and “opinion” as conclusions poorly supported or unsupported by facts). Fact-reporting articles—true “scoops”—typically have the intent of just reporting the facts and typically have a very high percentage (e.g., 90%+) of fact-statement sentences, whereas both analysis and opinion articles have the intent of persuading an audience and often have a comparatively high percentage of analysis and opinion statement sentences (30%-50%). So, although I initially had the quality axis of “news” laid out top to bottom as:

That ranking is more reflective of the quality of writing than of the quality of news sources. Good analysis is often written persuasively and well, fact-reporting is often written directly but well, and opinion writing is often (but not always) written poorly or is most easily discredited. I submit that given the confusion caused by the overwhelming number of organizations proclaiming to be (or which are commonly confused with) “news sources,” it is more important to rank the quality of news sources than the quality of writing. I further submit, for reasons outlined below, that the percentage of fact-reporting articles and stories should be used as the most determinative factor by which a news source is ranked in quality on this chart.

Therefore, I believe a more relevant ranking of the quality of news sources would be:

 

 

I assert that one of the biggest problems with our current news media landscape is that there is too much analysis and opinion available in relation to factual reporting. New technologies have given more people more platforms to contribute analysis and opinion pieces, so many “news sources” have popped up to compete for readers’ attention. Unfortunately, news consumers often do not recognize the difference between actual fact-reporting news and the analysis and opinion writing about that news. This increase in “news sources” has not corresponded with an increase in actual journalists or news reporting, though. Many local and national print news organizations have reduced their numbers of journalists, while many of the biggest ones have merely maintained similar numbers of journalists over the past 10 years or so.
For example, AP and Reuters have maintained around 2,000-2,500 journalists each over that time, while the New York Times and Washington Post have fluctuated in the 500-1,000 range over the same period. The value of these organizations with large staffs of journalists, editors, and other newsroom employees is hard to overstate; not only do they provide a majority of the fact-reporting stories everyone else relies on, but they have the capacity to provide high-quality editorial review that stands up to industry scrutiny. In contrast, even some of the most popular analysis and opinion sites can be run with just a few dozen writers and staff, and the number of these “news” websites, news aggregator websites, blogs, and podcasts has seemingly grown exponentially.

Furthermore, primarily analytical news sources have several downsides. One downside is that they can alienate news consumers by making what people consider “news sources” so complex or partisan that it is tiring to consume any “news.” For example, CNN, MSNBC, and FOX News, which are primarily analysis- and opinion-driven, can make news consumers too weary to pay attention to fact-based reporting from, say, AP or Reuters. Another problem with analysis- and opinion-driven news sources is that it can be difficult for casual readers to differentiate between good analysis and pure opinion.

There are several good reasons why we should place fact-reporting sentences, fact-reporting articles, and fact-reporting news sources high on the quality scale of news, at least on this chart. For one, reported facts take a lot of work to obtain. They require journalists on the ground investigating and interviewing. Once a story is reported, dozens, hundreds, or thousands of other writers can chime in with their analysis or opinions of it. This is not to say analysis and opinion writing isn’t important. The critical thinking presented in analytical writing—especially good, complex analysis—is essential to public discourse. Our society’s best ideas are advanced by analytical articles. This piece you are reading now is analytical. But analysis in the news wouldn’t even exist without the underlying factual reporting.

I believe improvements to our media landscape can be made if two things happen: 1) if news consumers start valuing factual reporting much more and analysis/opinion articles much less, and 2) if news consumers become accustomed to differentiating articles in those categories. Regarding point #2, I think it would be helpful if we narrowed the definition of “news” to refer only to fact reporting, and referred to everything else as “analysis” or “opinion.” It would be helpful if people could recognize the relative contributions of fact-reporting news organizations versus analysis and opinion sources. If people recognize just how much of what they read and watch is intended to persuade them, they may become more conscious and thoughtful about how much they allow themselves to be persuaded. One can hope.

To contribute to those goals, I’ve reordered the chart to value fact-reporting articles as the highest quality and everything else lower, even though there is some really excellent analysis out there. As a baseline, news consumers should understand when something is news (fact reporting) and when it is not. On the new chart, the sources with the best analysis but little reporting are near the top, right under the sources comprised of high percentages of fact-reporting articles. The most opinion-driven sources are at the bottom. There’s room for other things at the bottom below pure opinion, which can include sources that are sensationalist, clickbait, frequently factually incorrect, or which otherwise don’t meet recognized journalism standards.

On this version, I’ve included a number of different sources, mostly in the analysis and opinion categories, and kept the most popular mainstream sources from the original chart, though I have reordered some of them. Now the rankings are more consistent with my initial answers to the example questions at the beginning of this post. Fox News is now ranked far lower than the New York Times for two main reasons: one, Fox News is dominated by opinion and analysis, and two, it has gotten precipitously worse in other measures (sensational chyrons, loss of experienced journalists, hyperbolic analysis by contributors, etc.) within the last six months. USA Today, despite its basic nature, has been elevated because of its high percentage of fact-reporting stories. Slate, though it provides thoughtful, well-written analysis, is ranked lower than AP and Reuters, which better reflects their relative contributions to the news ecosystem. CNN still sucks, but it is clearer why now: CNN has the resources to provide twenty-four hours of news—it could provide Americans with a detailed global-to-local synopsis of the world—but instead it chooses to spend 5% of its time fact-reporting a handful of stories, comprising mostly American political drama and maybe one violent leading world news story, and 95% on analysis and opinion ranging from the competent to the inane.

My analysis of news sources in the manner I’ve described herein has revealed that individual stories can and should be ranked on the chart in the same manner, and that individual stories can be placed in different places than the news sources in which they are published. I’ll be putting out individual story rankings and reasoning for those rankings from time to time for those that are interested. I’ll also take requests for rankings of sources and individual stories in the comments and on twitter. Thanks for reading and thinking.

*Update: a high-resolution PDF version is available here: Second Edition News Chart.V2

And a blank version is here: Second Edition Blank

 


What is the difference between the statues of George Washington and Robert E. Lee?

The pro-Confederate-statue side asks this question, likely in earnest, and it is worth grappling with the distinction. Indeed, since slavery is evil and horrible, as generally agreed by liberals and conservatives alike, and both men owned slaves, why is it preferable to take down the Confederate statue and not the Washington statue?

This is not cut and dried, or “obvious” to everyone, and we shouldn’t treat it as such. It is a difficult task to distinguish between two things that are alike in some ways and different in others, so let’s look at the details and facts of these cases in order to distinguish them, as courts do.

It is a general rule that we put up statues of good people and not bad ones, but this in itself is a hard rule to follow because no one person is all good or all bad. It’s a bit easier to distinguish with some people than others. MLK=almost all good and Hitler=almost all bad is not hard. I think it is legitimately closer with both George Washington and Robert E. Lee. I think the reason the argument comes down to GW=mostly good (despite slaves!) is that he is most known and respected for 1) fighting in the Revolutionary War for American independence, which modern Americans view as a righteous cause, and 2) being our first President. The argument comes down to Lee=mostly bad (plus slaves!) because he is most known for fighting in the Civil War for the cause of keeping slaves, which most modern Americans view as a morally wrong cause.

The question of what they are most known for is an important one, because that is usually the same reason their statue was put up in the first place. When it comes to the question of whether to take one down, people tend to base their opinion on two questions: 1) what it meant when it was put up in the first place and 2) what it means now, in the context of history. With GW, it was put up because of his role in the Revolution and as President. With Lee, it was put up during an era of brutal reinforcement of white supremacy (see comments for a link discussing this history) with a purpose of intimidating recently freed slaves. Today, in the context of history, GW’s statues are widely seen as a reflection of his leadership and role as a founder, not his role as a slave owner. Most people don’t go to a GW monument for the purpose of celebrating his slave ownership. Today, though, in the context of history, Lee’s statues are commonly given two negative meanings: First, they serve as a reminder of white supremacy to black people, and second, they serve as a rallying point for actual white supremacists. Yes, to many people, it may mean a “commemoration of Southern history” too, but if it’s 50% a brutal white supremacist reminder/rallying point and 50% Southern history commemoration, that’s enough to justify it being removed. We have made a moral decision as a society that its (even partial) role as a white supremacy beacon is not acceptable, in response to a particular flash point of a white supremacist resurgence. We have not made a similar decision about the Washington statues, because there has been no recent flash point around those.

However, I can’t actually morally justify Washington owning slaves, and that practice is indeed so reprehensible that it is valid to argue that if slavery is that wrong, then we should take down the statues of any slave holder, no matter how “good” they were otherwise. Joe Paterno’s statue was taken down because his biggest moral failing—protecting a child predator—outweighed the other good he had done. Perhaps the removal of Washington (slave owner) and Jefferson (slave owner and likely slave rapist) is the morally correct thing to do. We would likely remove the statues of contemporary heroes (say, MLK or Wayne Gretzky) if we suddenly found out they were rapists or owned slaves.

But there is a factor that distinguishes how we judge the actions of contemporaries from how we judge those of historical figures, and that is the relative morality of a time in history compared to the present. Those who argue “slave owners weren’t all bad people” are inherently taking this factor into account. Yes, we all view slavery as evil now, but when it was a somewhat normalized aspect of society, it is plausible and even likely that many slave owners tried to live what they thought were upstanding moral lives in many ways. They may even have had moral dilemmas about slavery but felt that it was an intractable problem for them to solve, let alone forgo participation in. “Slave owners were not all bad people” (a typically conservative argument) is a very similar argument to “George Washington’s statue should remain up because he did other good things, even though he owned slaves” (an argument liberals are currently making in relation to the Confederate statue issue). “George Washington was not all bad,” essentially.

It seems that the right thing to do is to take down the Confederate statues because of the bad things they were best known for (explicitly fighting for slavery), plus the reasons they were put up, plus the reasons they cause people pain now. But we must also admit that it would be logically consistent to remove the statues of other slave owners, even our founding fathers, if some contemporary flash point were to bring the issue of how bad slavery really is to the forefront. Perhaps it is a moral failing of our current time that we have not come to this realization yet. Perhaps future generations will come to the consensus that the founding fathers’ statues should be removed and hold it against our generation that we did not. Perhaps they will judge us harshly for tolerating other injustices, like unequal women’s rights and queer rights, for so long. Societal morals evolve over time. In the near term, though, it is likely that the “contemporary, widely-held perception of the statues” factor and the “relative morality of the time of the person” factor save the Washington and Jefferson statues now but not the Confederate statues. So down with the Confederate statues. And shame, at least, on the moral failings of those whose statues we leave in place.


High Resolution File Formats for Full Chart and Blank Versions of News Quality Chart

A few people have asked me to post links to various file formats of this chart for their own use. Feel free to download and use them. There is a Creative Commons license on them which requests attribution and non-commercial use. They contain minor updates from recent versions. Most notably, The Economist has been moved to the left. I agree with commentators who pointed out that was an erroneous initial placement. Also, I changed the snarky designation “Basic AF” to “Basic” so that the chart’s use would be more appropriate in middle school and/or high school settings.  (Note: the abbreviation “AF” stands for “as fuck,” which is text/internet slang for “very,” or “quite.” Sorry for any classroom snickers this may have caused for unsuspecting teachers.)

 

News Quality.Blank.V2

News Quality.V5

 


The Reasoning and Methodology Behind The Chart

 

tl;dr: There are lots of reasons. Many are subjective. More data would make it better. I am not a media expert.

Since my News Quality graphic got widely shared, I have been asked what my inspiration, methodology, and process was for creating it. I note that I have been asked this question by academics, journalists, and laypersons that care about accuracy and quality. Unfortunately, a lot of people don’t care about accuracy and quality. And a lot of those same people don’t like to read.

Why I Created It

I am frustrated by the reality that people don’t like to read. I LOVE to read and write. I have an English degree and a law degree, and I read and write every day for work. As a hobby, I read the great articles that are out there on the topic of media bias and accuracy. All of you who are reading this know that there is an abundance of great journalism out there—truly more than ever. I have the pleasure and privilege of reading a lot of this stuff, as do you.

But I know that the medium of a well-written article just doesn’t reach people who don’t read long things. In this post, I refer to such people as “non-readers” or “infrequent readers.” I am fully aware that the website MediaBiasFactCheck, the organization Pew Research Center, and media research departments at many universities have large sets of empirical data available to review, and that those sources are more reputable than *just me*. But non/infrequent readers don’t read those sources. What do they read? Memes, which are often just two juxtaposed pictures with a pithy, terrible, one-sentence argument placed on top in large white letters. Tweets in which arguments are limited to 140 characters. They also prefer to watch videos, like YouTube “documentaries,” no matter how deceptively edited or spun.

Memes and tweets and YouTube videos spread quickly. They don’t take any effort to read, and people are convinced by them. They base their viewpoints upon them IN PLACE of basing their opinions upon long written pieces. To the extent that infrequent readers read, they prefer short articles that confirm their biases. Because they read very little, their comprehension skills and ability to distinguish good writing from bad writing is low. This is true for infrequent readers across the political spectrum. All of this is extremely disturbing to me.

Many non/infrequent readers prefer easily digestible, visual information. I wanted to take the landscape of news sources that I was highly familiar with and put it into an easily digestible, visual format. I wanted it to be easily shareable, and more substantive than a meme, but less substantive than an article. I cite the fact that it has been shared over 20,000 times on Facebook (that I know of) and viewed 3 million times on Imgur as evidence that I accomplished the goal of it being shareable. In contrast, maybe one one-millionth as many people will read this boring-ass article about my methodology behind it.

Many non/infrequent readers are quite bad at distinguishing between decent news sources and terrible news sources. I wanted to make this chart in the hopes that if non/infrequent readers saw it, they could use it to avoid trash. For those of you who can discern between the partisan leanings of The Economist and the Wall Street Journal, I have to say this chart was not primarily made for your benefit. You are already good at reading and distinguishing news sources.

The fact that the chart is shareable does not necessarily make it TRUE. Having heard feedback from all corners of the internet, I know that many people disagree with my placements of news sources upon it. However, even people who disagree with the placements find the taxonomy helpful, because it provides a baseline for a discussion about media sources, which are inherently difficult to classify. Often, verbal and written discussions about news sources are limited to describing sources as “good” or “bad” and “biased” or “unbiased.” This chart allows for a few more dimensions to the conversation. However, as discussed below, there are many metrics on which to evaluate and classify media, and this chart doesn’t include them all.

In creating the chart, I had to make (mostly) subjective decisions regarding four particular aspects, explained below.

Choosing the Vertical Categories

First, I considered what makes a news source generally “high quality” or “low quality.” “Quality” itself is an incredibly subjective metric. I figured a good middle category to start with would be journalism that regularly meets the recognized ethics standards of the profession, such as those set by the Society of Professional Journalists (http://www.spj.org/ethicscode.asp). Above and beyond that, I determined that the factors that can make a particular article or broadcast “higher quality” include 1) a high level of detail, 2) the presence of analysis, and 3) a discussion of implications and/or complexity. So I created the categories of “Analytical” for sources that regularly have 1) detail and 2) analysis, and “Complex” for sources that regularly discuss 3) implications and/or complexity. To read the “Complex” and “Analytical” sources, you often have to be familiar with facts learned from sources ranked lower on the vertical axis. However, complexity is not always a good thing. Sometimes, real issues get obscured by complex writing.

Then, I considered what makes a news source “lower quality.” One of the factors is simplicity. Simplicity CAN lead to “low quality” if a deep issue is only covered at a very surface level. Simplicity is fine for stories like “a man robbed a liquor store,” but it’s often bad for, say, coverage of a complex bill being considered by your state legislature. There are sources that cover complex stories (e.g., Hillary e-mail stories, Trump Foundation stories, and really, most political stories) in a VERY simple format, and I think that decreases civic literacy. Therefore, I created a below-average quality category called “Basic AF.” However, simplicity is not necessarily a bad thing. Sometimes you need “just the story.”

I have strong feelings about what factors really lower the quality of a source, and those are 1) sensationalism and 2) self-promotion in the form of “clickbait” headlines. Sources that engage in these practices are often geared toward attracting the attention of the non/infrequent reader. Sensationalism plays upon the worst emotions in us, such as fear and anger. Clickbait articles have headlines rife with hyperbole, and the content of the articles themselves is loaded with intensifying adjectives and adverbs (e.g., “clearly,” “obviously,” “desperately,” “amazing,” “terrific”) that are hallmarks of poor persuasive writing. That category definitely went at the bottom.

Few people quibble with the vertical categories as I have selected them, but as stated above, “complex” is not necessarily good and “basic” is not necessarily bad. Therefore, the “journalistic quality” arrow does not correlate perfectly with the vertical categories, and as a result, I myself find it an imperfect way to rank journalistic quality. However, they correlate well enough that the ranking still makes sense, minus a few outliers. In particular, USA Today and CNN get pretty harsh vertical rankings under my categories. I think USA Today is a pretty high-quality publication, even though most of its stories are basic.

Note that the vertical categories do not take into consideration the presence of “truth” in a source. For example, the Wall Street Journal near the top, and CNN near the bottom, both generally report on things that are “true.” The vertical categories also do not differentiate between whether sources are more fact or opinion based. For example, both The National Review (near the top) and The Blaze (at the bottom) write very opinionated pieces.

 

Choosing the Horizontal Categories

Sorting sources based on partisan bias was a bit more straightforward, but I wanted to differentiate between levels of partisan bias. The categories are fairly self-explanatory. They are also the most highly debatable. Good arguments can be made as to whether a source is minimally partisan, “skews” partisan, or is “hyper” partisan. The “Utter Garbage/Conspiracy Theories” category is for those sources that “report” things that are demonstrably false and for which no apology or retraction is issued in the wake of publishing such a false story. These stories may include, for example, that the Obamas’ children were stolen from another family (on the right), or that the government is purposely poisoning us and changing the weather with chemtrails from airplanes (on the left). For the most part, even the “hyper-partisan” sites (e.g., Occupy Democrats, Red State) try to base their stories on truth, and are held to account if they publish something demonstrably false. Generally, the closer a source is to the middle of this chart, the more it is taken to task by its peers for publishing or reporting something false.

The categorization of a source in the hyper-partisan or even utter garbage category does not mean that every story published there is false. Many articles may just be very opinionated versions of the truth, or half-truths. And occasionally, a hyper-partisan or garbage site will stumble upon an actual scoop, due to its willingness to publish stories that haven’t been sourced or verified. Their classification in these categories is mainly because they are widely recognized by other journalists as regularly falling short of standard journalism ethics and practices.

Lots of people have a problem with the category of “mainstream/minimally partisan.” To clarify, the category is called “minimally partisan,” not “non-partisan.” Because journalists are human beings, they have opinions, and those opinions can make their way into their reporting. However, they also have professional standards and are held to account by their peers. Further, one can police one’s own biases to a certain extent if one is cognizant of them. The difference between “minimally partisan” and “skews partisan” comes down to the intent of the organization. If it means to be objective, that counts as minimally partisan here. If it means to present a progressive point of view (MSNBC) or a conservative point of view (FOX News), that’s at least skewing partisan.

Choosing the News Sources to Include

The sources I initially chose include those I read most often and those I am exposed to most often through aggregators or other sources. They also include sources which I have reason to believe many others are exposed to most often. For people who get their news on the internet, their default browser home page is often a starting point for where to find news, and these home pages are often news aggregators. Yahoo, MSN, and the Microsoft Edge browser home page all present particular news sources. Many people also get their news from Facebook and Twitter (an alarming number, 40% according to one recent survey, ONLY get their news from Facebook). Another aggregator is the Apple News app. From among these sources, I selected some of the most popular, making sure to include some in each category, and an approximately equal number of left- and right-partisan sources.

Note that I did not quantitatively determine how many sites are out there on each partisan side. Some people object to this and believe there are far more trash websites on one side or the other. I do not have the time or resources to conduct such a quantitative measure, so I did not conduct one. Some believe that because this measure is omitted, I am promoting a false equivalency between the sides. This may be true, if there is truly one partisan side that has significantly more garbage news sources. However, I believe there is value in presenting partisan balance within the chart so that more people across the spectrum are willing to take it seriously.

Many sources are not on here. That’s because there are hundreds of them. I could add twice as many easily, but then it would lose its readability. Remember, some people don’t like to read. For many, the words on the chart were too much.

 

Factors for Placing the News Sources on the Chart

I could have taken a number of empirical and quantitative approaches, but as stated earlier, I did not set out to first conduct such a wide-ranging study and then publish the results thereof. I just wanted to visually present a concept that many of us already hold in our heads. I am not affiliated with any research organizations that do this kind of work. I was actually very surprised that this chart was so widely shared, because I am not an authority on this subject, and literally nothing I have ever written or drawn has attracted so much attention and scrutiny.

I am, however, experienced in defending my positions with facts and arguments, and I place value on the notion that assertions must be supported. I have outlined my support for these placements below.

One way to analyze sets of complex facts is the approach used in our courts. There are some legal questions that our courts have determined are best answered through a multi-factor test. These multi-factor tests are appropriate for factual scenarios where there are many considerations to weigh. For example, in trademark law, to determine whether consumers are likely to be confused by competing trademarks, there is a 13-factor test. In patent law, to determine a reasonable amount of royalties to be paid for patent infringement, there is a 15-factor test. As a lawyer, I am comfortable with this multi-factor test approach, so I created one and applied it.
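
For the code-inclined, here is a rough sketch of what a multi-factor test looks like in practice: factors scored individually, then combined according to weights. The factor names, weights, and scores below are hypothetical placeholders for illustration, not the chart’s actual numbers.

def weighted_score(factor_scores, weights):
    """Combine per-factor scores (each from -1.0 to 1.0) into one number."""
    total_weight = sum(weights[f] for f in factor_scores)
    return sum(score * weights[f] for f, score in factor_scores.items()) / total_weight

# Hypothetical illustration: three factors feeding the vertical (quality) axis.
quality_weights = {"detail": 3, "analysis": 3, "sensationalism": 5}
quality_scores = {"detail": 0.6, "analysis": 0.4, "sensationalism": -0.8}
print(weighted_score(quality_scores, quality_weights))  # about -0.09 for this source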

Given the popularity of this chart, though, I think it would be valuable to take my taxonomy and multi-factor test for placement and use it as a starting point for an actual study. A good empirical, data-driven study would probably look like a large panel of well-regarded journalists, writers, academics, and media observers poring over voluminous amounts of writing, spanning tens of thousands of articles and at least thousands of individual news sources, with the help of research assistants. It would probably use software to count and categorize words used in these articles and require cross-checking for verification of facts. As noted below in my list of factors, some just require a yes or no answer, but some are truly measurable and quantifiable. For each of the factors that are quantifiable here, I note that in my own evaluation, I only quantified these factors very generally, based on my observation and reading of headlines and articles. That is, I did not precisely count everything that could be measured. A real study could precisely quantify each of these factors, which would result in more precise placement of news sources. However, even in a quantitative study, certain aspects of placement will still be subjective; namely, the weight given to a particular factor in determining the ultimate ranking. It appears that any high-quality study of media sources requires both subjective and objective aspects, given that it is an analysis of written and spoken words.

Here are the factors I considered for each source, in no particular order. Below each factor is a note regarding which categories the factor weighted a source toward, and why. The notes also indicate whether a factor is quantifiable and could be more precisely measured in a future study for a future version of the chart.

1. Whether it exists in print

A “yes” answer weighted sources heavily toward “mainstream/minimal partisan bias” for several reasons. A print publication costs much more money, time, and effort to build than an internet one. Most print publications have significant numbers of staff members, including professional journalists. In order to have built a successful print publication, an organization will have had to spend time and effort building credibility among a significant audience, because reputation is what gets people to buy a newspaper for news. As a result, most print publications have longevity.

2. Whether it exists on TV, and if so, whether it existed before cable

A “yes” answer weighted sources heavily toward “mainstream/minimal partisan bias” for reasons similar to factor 1 (print). Cable lowered barriers to entry for TV broadcast news.

3. Whether it exists on radio, and if so, whether it existed before satellite radio

A “yes” answer weighted sources heavily toward “mainstream/minimal partisan bias” for reasons similar to factors 1 (print) and 2 (TV). Satellite radio lowered barriers to entry for radio broadcast news.

4. Length of time established

Greater longevity weighted sources somewhat toward “mainstream/minimal partisan bias.” Longevity allows for the establishment of reputation (even a changing one) over time. However, newer sources can still be reputable and high-quality.

5. Readership/Viewership

This is a quantifiable factor. Greater readership and viewership weighted sources heavily toward “mainstream/minimal partisan bias” and somewhat toward the middle category of “meets high standards.”

6. Reputation for a partisan point of view among other news sources

“Reputation” is a highly subjective term, just like “quality.” Reputation varies and is fuzzy, but no one denies that it exists. Reputation testimony is admissible in court as evidence, so I included a few specific kinds of reputation as valid factors here. Other news sources talk about each other. If a large, established newspaper calls an internet website “left-wing,” or “right-wing,” and if these same internet websites call the large, established newspaper “the mainstream media,” they are in agreement as to each other’s partisan point of view.

7. Whether the source actively differentiates between opinion and reporting pieces

A “yes” answer weighted sources heavily toward “mainstream/minimal partisan bias” and was a determinative factor in whether the source was categorized at least in part as “mainstream” or fell completely into “skews partisan.” For example, the Washington Post, New York Times, and Wall Street Journal all have labeled opinion sections, while MSNBC, FOX, and Vox do not.

8. Proportion of opinion pieces to reporting pieces

This measure is also quantifiable. Greater percentages of reporting pieces weighted heavily toward “mainstream” and somewhat toward the middle category of “meets high standards.”

9. Proportion of world news coverage to American political coverage

This measure is also quantifiable. Greater international news coverage weighted sources heavily upward. However, the reasoning behind this measure is subjective: I am of the opinion that if a source spends more time on world news, it views itself as responsible for delivering all major news, rather than just focusing on stories that drive website traffic, like political gossip.
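
Factors 8 and 9 are both simple proportions, so once each piece has been labeled, the math is trivial. A minimal sketch, assuming hypothetical hand-applied labels (a real study would first need a defensible labeling process):

pieces = [
    {"kind": "reporting", "scope": "world"},
    {"kind": "reporting", "scope": "domestic"},
    {"kind": "opinion", "scope": "domestic"},
]

def proportion(items, key, value):
    """Fraction of items whose `key` field equals `value`."""
    return sum(1 for p in items if p[key] == value) / len(items)

print(proportion(pieces, "kind", "reporting"))  # factor 8: reporting share (0.67)
print(proportion(pieces, "scope", "world"))     # factor 9: world-news share (0.33)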

10. Repetition of same news stories

High repetition, in view of the medium, weighted sources heavily into the lowest vertical category for sensationalism. This was a main reason for CNN’s ranking toward sensationalism.
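
Repetition is quantifiable, too. One rough way to measure it, assuming repeated coverage shows up as near-duplicate headlines, is to count headline pairs that are almost identical. The headlines and similarity threshold below are hypothetical; the sketch uses Python’s standard difflib module.

import difflib

headlines = [
    "Senate passes budget bill after marathon session",
    "Senate passes budget bill after a marathon session",
    "Local team wins championship",
]

def repetition_rate(titles, threshold=0.9):
    """Fraction of title pairs that are near-duplicates of each other."""
    pairs = [(a, b) for i, a in enumerate(titles) for b in titles[i + 1:]]
    dupes = sum(1 for a, b in pairs
                if difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold)
    return dupes / len(pairs)

print(repetition_rate(headlines))  # 1 of 3 pairs is a near-duplicate (about 0.33)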

11. Reputation for a partisan point of view among my peers on social media

This factor sounds the most biased and subjective of all the factors, and it probably is. It is also typically the MAIN criterion upon which most people would rank these sources on such a chart. There is some validity to using this measure: if your known conservative friend likes a source, it likely has a conservative point of view, and if your known liberal friend likes a source, it likely has a liberal point of view. There are obvious drawbacks, though, given the “echo chamber” nature of our social media feeds. If most of your friends share your viewpoint, and you are all ideologically very partisan, then their calling a particular partisan source credible can distort your own assessment of it.

This factor was somewhat determinative of the placement of sources along the partisan spectrum, and hardly determinative of placement vertically.

12. Party affiliation of regular contributors/interviewees

This factor is also quantifiable. A balance of party affiliations weighted somewhat toward mainstream, and imbalance weighted toward the partisan sides proportionally.
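
A sketch of how that balance might be scored, with hypothetical affiliation labels for a source’s regular contributors and interviewees:

contributors = ["D", "R", "R", "I", "D", "R"]  # hypothetical affiliations

def affiliation_balance(affiliations):
    """Return -1.0 (all Democratic) to 1.0 (all Republican); 0 is balanced."""
    d = affiliations.count("D")
    r = affiliations.count("R")
    return (r - d) / max(d + r, 1)  # independents ("I") are ignored

print(affiliation_balance(contributors))  # 0.2, a slight rightward imbalance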

13. Presence of hyperbole in titles of articles

This factor is also quantifiable. The presence of hyperbole weighted heavily away from the center for partisanship, and weighted heavily downward for quality. I correlated more hyperbole with more partisanship and less quality.
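
Here is a minimal sketch of how headline hyperbole might be counted. The marker list is a hypothetical starting point, not a vetted lexicon; a real study would build and validate a much larger one.

HYPERBOLE_MARKERS = {"destroys", "shocking", "epic", "meltdown", "eviscerates"}

def hyperbole_score(title):
    """Count hyperbole markers, plus one point for an all-caps shouted word."""
    words = [w.strip("!?.,").lower() for w in title.split()]
    score = sum(1 for w in words if w in HYPERBOLE_MARKERS)
    if any(w.isupper() and len(w) > 3 for w in title.split()):
        score += 1
    return score

print(hyperbole_score("Senator DESTROYS rival in shocking exchange"))  # 3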

14. Presence of adjectives in persuasive writing

This factor is also quantifiable. The presence of many adjectives weighted heavily away from the center for partisanship, and weighted heavily downward for quality. I correlated more adjectives with more partisanship and less quality.
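
One plausible way to quantify this is part-of-speech tagging. The sketch below uses the NLTK library (my choice for illustration; any tagger would do) and counts adverbs as well, since words like “clearly” and “desperately” are technically adverbs. It assumes NLTK’s tokenizer and tagger data packages have already been downloaded.

import nltk  # assumes the punkt tokenizer and perceptron tagger data are installed

def modifier_density(text):
    """Fraction of tokens tagged as adjectives (JJ*) or adverbs (RB*)."""
    tags = nltk.pos_tag(nltk.word_tokenize(text))
    modifiers = [word for word, tag in tags if tag.startswith(("JJ", "RB"))]
    return len(modifiers) / len(tags)

sentence = "The desperately biased report was clearly and obviously a terrific disaster."
print(modifier_density(sentence))  # higher values suggest more loaded writing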

15. Quality of grammar, spelling, punctuation, capitalization, and font size

Mistakes in grammar, spelling, and punctuation weighted sources heavily downward for quality, as did improper capitalization. Excessive capitalization (e.g., all caps) and excessive font size weighted sources heavily toward the horizontal edges for partisanship and somewhat downward for quality. For example, the enormous, daily, all-caps top headline on HuffPo pushed it well into the hyper-partisan category, but only down a little for quality.
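
The all-caps portion of this factor is straightforward to quantify. A minimal sketch, with a hypothetical headline:

def shouting_fraction(title):
    """Fraction of alphabetic words rendered entirely in capitals."""
    words = [w for w in title.split() if any(c.isalpha() for c in w)]
    return sum(1 for w in words if w.isupper() and len(w) > 1) / len(words)

print(shouting_fraction("BOMBSHELL REPORT stuns Washington"))  # 0.5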

16. Presence of an ideological reference or party affiliation in the title of the publication

Presence of reference or affiliation weighted sources heavily to the edges for partisanship and downward for quality (e.g., Occupy Democrats, Red State).

17. Effects of trying to actively control for my own known bias

I tried to evaluate my own bias and take it into account by first defining what my bias is and then making adjustments to correct for it. This exercise is difficult but crucial. It is imprecise and highly subjective. However, anyone who tries to make placements on this chart should engage in it.

I submit that a first way to evaluate your partisan bias is to categorize yourself on a number of political issues upon which there is consensus as to what constitutes left, right, and center. Therefore, I started by evaluating my own views on what I think is “correct” and “true” on the issues of civil rights, taxes, business regulation, and the role of government in general. I am pretty adamant about civil rights and equality for all, especially for people of color, women, immigrants, and the LGBTQ community. I believe that places me in a somewhat left-of-center category. On taxes and business regulation, I believe that neither “the government” nor “corporations” are all good or all bad. On the whole, I believe government does good things about 70-90% of the time and messes things up 10-30% of the time. I believe corporations do good things about 70-90% of the time and mess things up 10-30% of the time. As a result, I fall quite squarely in the middle, ideologically, on issues of taxes, business regulation, and the role of government.

In view of these evaluations, it would be fair to call me a left-leaning moderate.

To correct for this bias, I had to consider that there is a decent chance I am just wrong about what “the truth” or “the correct answer” is on one or more (or all) political issues. The likelihood that any one of us is completely right on all the issues is quite low. I have to acknowledge that there exists a consensus on certain issues to the right of where I stand on them. That is, because approximately 46% of voters consider FOX News reputable and conservative principles acceptable, I cannot simply dismiss the likelihood that they are right on the bet that I am right and they are wrong. As a result, I ranked FOX News higher on quality and less extreme on partisanship than I probably would have otherwise. I also ranked hyper-partisan left-wing sites lower on the quality scale than I would have otherwise, and ranked complex/analytical conservative sources more centrally and higher than I would have otherwise.

Questions of bias, truth, and whether there is a center get philosophical and existential very quickly. All any of us can do is try to recognize and control for our biases.

Overall, this factor pushed conservative sources up and to the center, and liberal sources down and to the left in relation to where I might have ranked them purely on my ideological stances. It also pushed the sources into a relative balance that some argue does not exist.

A future study would benefit from having a roughly equal number of left-leaning and right-leaning moderates arriving at placements by consensus, to control for bias.
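
For instance, a balanced panel’s placements could be combined by simple averaging, so that individual raters’ leanings tend to cancel out. A minimal sketch, with hypothetical raters and scales:

# Hypothetical placements of one source: (partisanship, quality),
# partisanship from -10 (left) to +10 (right), quality from 0 to 10.
placements = {
    "left_leaning_rater": (-2.0, 7.0),
    "right_leaning_rater": (1.0, 8.0),
    "centrist_rater": (-0.5, 7.5),
}

def consensus(ratings):
    """Average each coordinate across all raters."""
    xs, ys = zip(*ratings.values())
    return (sum(xs) / len(xs), sum(ys) / len(ys))

print(consensus(placements))  # (-0.5, 7.5)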

Factors Not Considered

I did not weigh the role of money from advertisers, ownership of sources, or corporate structure as factors in any meaningful way. I believe those factors are more closely related to the issue of media focus as opposed to media partisanship and journalistic quality. This chart was about partisanship and quality. It intersects with the topic of media focus only tangentially. I think the factors of money from advertisers, ownership of sources, and corporate structure can and do influence the topics that media sources focus upon.

Complaints about mainstream media focus are valid, but this is a whole complex topic in and of itself. Examples of these complaints include “Why did it take so long to get mainstream coverage of the Standing Rock/Dakota Access Pipeline protests?” “Why did it take so long to get mainstream coverage about Bernie Sanders?” “Why all the obsession with Hillary’s e-mails?” “Why the all-consuming coverage of all things Trump?” People point to money from advertisers, ownership of sources, and corporate structure as the root of these problems of misplaced focus, but I think it is more complex than that. Factors related to human psychology and attention, as well as modern technology, likely play a role. Therefore, I left out the factors of money and corporations because they are an altogether different inquiry, and not necessary to resolve now in order to rank sources according to partisanship and quality. I believe factors 1-17 are sufficient to meaningfully place news sources along the continuum of this particular chart.

Edits, Arguments, and Future Versions

Based on thoughtful and legitimate feedback, I would likely make some edits on placement in my original chart. These include moving The Economist to the left of the midline, and splitting CNN into TV and internet versions, ranking the CNN internet version in the middle circle while leaving the CNN TV version where it is. I would consider moving the Washington Post A LITTLE to the left, but I’d like to engage in a discussion about that.

I would be happy to have arguments about each of the listed factors above, and would entertain suggestions for other factors. I am also considering suggestions for future versions.

If others are inclined to take on the work of gathering data for the factors identified as quantifiable, I would be interested in supporting such work in some way.

Thanks for reading and thinking.

 

Posted on

News Quality

 

[Image file: News Quality chart (V4)]

We are living in a time when we have more information available to each of us than ever before in history. However, we are not all proficient at distinguishing between good information and bad information. This is true for liberal, moderate, and conservative people alike. I submit that these two circumstances are highly related to why our country is so politically polarized at the moment.

Why is it that I can have such different views on the same subject or topic as someone else who lives in the same country? Take the polarizing example of people’s opinions on Hillary. Why do I think she is qualified and inspiring but others think she is literally evil incarnate? I don’t know her personally. And neither do you. We must both admit that our opinions of her are informed by the news sources we read and believe. And news sources vary widely in what they report.

Which news sources should we believe, when there are so many to choose from, and each one is telling you not to believe another one? I put together this chart of which news sources I think you should use and which ones you should not. If you value my opinion as someone who both is reasonable and well-informed, you may find it helpful. If you don’t really care what I think, it will be useless to you. These are my subjective opinions based on having read many news stories from each of the listed sites. The only credibility or authority I can claim in this regard is that I read and write analytically for a living.

Before you look at the chart, I’d like to address the fact that many people object to media sources on the basis that they are “mainstream.” They say “I don’t believe the mainstream media! They are owned by big corporations and do things for money!” But where did they get that idea? From another media source. Remember that each media source has their own incentives (like monetary ones) to get people to listen to them and not to someone else. You have to evaluate media based on something other than the fact that one source told you not to listen to another source.

Remember that journalism is a professional and academic field with a set of agreed-upon standards. People get degrees in it and people who are really good at it get jobs in it at good organizations. Peer review helps ensure mainstream sources adhere to standards; if a story doesn’t meet those standards, other news outlets report on that. Not believing the mainstream media just because it is mainstream is like not believing a mainstream doctor or a mainstream lawyer. Sure, you should question and rate the quality of what the newspaper, doctor, or lawyer says, but you shouldn’t dismiss them out of hand because the paper is big, the doctor works at a hospital, or the lawyer works at a firm.

The chart is pretty self-explanatory. Here are some caveats and reasons for my rankings:

-I am operating on the assumption that the less blatantly partisan a source is, the more accurate it is.
-I understand that individual reporters, even at the most reputable news sources, have their own personal biases and opinions. The rankings are an overall ranking of each site.
-“Sensational” means the articles have titles like “So and so DESTROYS so and so with THIS response!”
-“Clickbait” means the articles have titles like “She walked into a meeting. What happened next will shock you!”
-“Conspiracy theories” means shit that is just made up. Like National Enquirer type stories.
-I’m sure this will offend some people that typically agree with me politically. Sorry.

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.