
Observations on The Chart by Law Professor Maxwell Stearns of U. Maryland

Law professor Maxwell Stearns, who blogs about law, politics, and culture, recently published this post about the chart. It offers several useful insights on 1) distilling the ranking criteria into sub-categories, 2) why the sources on the chart form a bell curve, and 3) how the rankings might be made more scientific. Give it a read!

https://www.blindspotblog.us/single-post/2017/11/18/The-Viral-Media-Graphic-with-special-thanks-to-Vanessa-Otero


Everybody has an Opinion on CNN

I get the most feedback by far on CNN, and the feedback is unusual: people tell me it should be moved in every direction (up, down, left, and right). Further, most people who give me feedback on other sources suggest that I just nudge a source one way or another a bit. In contrast, many people feel very strongly that CNN should be moved significantly in whichever direction they favor.

I believe there are a few main reasons I am getting this kind of feedback.

  • CNN is the source most people are most familiar with. It was the first, and is the longest-running, 24-hour cable news channel. It’s on at hotels, airports, gyms, and your parents’ house. Even people who are critics of no other news source will be critics of CNN, because they are most familiar with it.
  • CNN is widely talked about by other media outlets, conservative media outlets in particular, which often describe it as crazy-far left. Usually those who tell me it needs to go far to the left are the ones reading conservative media—no surprise there.
  • People tend to base their opinions of CNN on what leaves the biggest impression on them, and there are a lot of aspects that can leave an impression:
    1. For some people, the biggest impression comes from having CNN on in the background during the day, which exposes them to a large sampling of its news coverage. They see that the programming is mostly accurate and informs them of a lot of US news they are interested in. These individuals tend to think that CNN should be ranked higher, perhaps all the way up in “fact-reporting” and mainstream.
    2. For others, the biggest impression is that they can tune into CNN for breathless, non-stop coverage of an impending disaster, like a hurricane, or a breaking tragedy, such as a mass shooting. People can take a few different impressions away from this. First, they can count on the fact that all the known facts will be repeated to them within 10 minutes of tuning in. That’s another reason to put CNN up in “fact-reporting.” Second, more savvy observers know that CNN makes not-infrequent mistakes and often jumps the gun in these situations. The anchors usually qualify their statements properly, but they will still blurt out details about a suspect, the number of shooters, or fatalities that are not yet verified. That causes some people to rank CNN lower on the fact-reporting scale. Third, people know that once CNN runs out of current material, it will bring on analysts covering all related (or unrelated) subjects (e.g., lawyers, criminologists, climate change scientists, etc.), often for several days following the story. This tends to leave the impression that CNN provides a lot of analysis and opinion (much of it valid and important) in addition to fact reporting. So a ranking somewhere along the analysis/opinion spectrum (a little above where I have it) seems appropriate.
    3. For yet others, the kind of coverage that leaves the biggest impression is the kind that includes interviews and panels of political commentators. The contributors and guests CNN has on for political commentary range widely in quality, from “voter who knows absolutely nothing about what he is talking about” to “extremely partisan, unreliable political surrogate” to “experienced expert who provides good insight.” People who pay attention to this kind of coverage note that CNN does a few crazy things.
      1. First, they run a chyron (the big banner at the bottom of the screen) that says “Breaking News:…” followed by something that is clearly not breaking news. For example: “Breaking: Debate starts in one hour.” Eye roll. That debate has been planned for months and is not breaking. Further, they run a chyron for almost everything, which seems unnecessary and sensationalist but has been adopted by MSNBC, FOX, and others. Often, the chyron’s content itself is sensationalist.
      2. Second, in the supposed interest of being “balanced” and “showing both sides,” they often have extreme representatives from each side of the political spectrum debating each other. This practice airs, and lends credibility to, some extreme, highly disputed positions. Balance, I think, would be better represented by guests with more moderate positions. Interviews with Kellyanne Conway, who often says things that are untrue or misleading and makes highly disputed opinion statements, are something else. Even though the hosts challenge her, it often appears that the whole point of having her as a guest is to showcase how incredulous the anchors are at her statements. This seems to fall outside the purpose of news reporting. What’s worse, though (to me, anyway), is that they hire partisan representatives as actual contributors and commentators, which gives them even more credibility as sources one should listen to about the news, even though they have a clear partisan, non-news agenda. They hired Jeffrey Lord, who routinely made the most outlandish statements in support of Trump, and Trump’s ACTUAL former campaign manager, Corey Lewandowski. That was mind-boggling in terms of lack of journalistic precedent (and ethics) and seemed to be done for sensationalism (and ratings) rather than for the purpose of news reporting, which is to deliver facts. Those hires were a big investment in providing opinion. I think it was extremely indicative of CNN’s reputation for political sensationalism when The Hill ran two headlines within a few weeks of each other saying something like “CNN confirms it will not be hiring Sean Spicer as a contributor” and “CNN confirms it will not be hiring Anthony Scaramucci as a contributor” shortly after each of their firings.
      3. Third, their coverage is heavily focused on American political drama. I’ll elaborate on this in a moment.

Personally, the topics discussed in the third category above (the political interviews and panels) left the biggest impression on me. That is why I have CNN ranked on the line between “opinion, fair persuasion” and “selective or incomplete story, unfair persuasion.” The impact of the guests and contributors who present unfair and misleading statements and arguments really drives down CNN’s ranking in my view. I have them slightly to the left of center, though, because they tend to have a higher quantity of guests with left-leaning positions.

 

I have just laid out that my ranking is driven in large part by a subjective measure rather than an objective, quantitative one. An objective, quantitative approach would take all the shows, stories, segments, and guests, analyze all the statements made, and say, on a percentage basis, how many of those statements were facts, opinions, analysis, fair or unfair, misleading, untrue, etc. I have not done this analysis, but I would guess that a large majority of the statements made in a 24-hour period on CNN would fall into reputable categories (fair, factual, impartial). Perhaps even 80% or more would fall into that category. So one could reasonably argue that CNN deserves to be higher; say, 80% of the way up (or whatever the actual number is), if that is how you wanted to rank it.

However, I argue for the inclusion of a subjective assessment that comes from the question “what impression does this source leave?” Related questions are “what do people rely on this source for,” “what do they watch it for,” and “what is the impact on other media?” I submit that the opinion and analysis panels and interviews, with their often-unreliable guests, leave the biggest impression and make up a large portion of what people rely on and watch CNN for. I also submit that these segments make the biggest impact on the rest of media and society. For example, other news outlets will run news stories whose content is “Here’s the latest crazy thing Kellyanne said on CNN.” These stories make a significant number of impressions on social media, thereby amplifying what these guests say.

I also include a subjective measure that pushes CNN into the “selective or incomplete story” category, which comes from trying to look at what’s not there; what’s missing. In the case of CNN, given their resources as a 24-hour news network, I feel like a lot is missing. They focus on American political drama and the latest domestic disaster at the expense of everything else. With those resources and time, they could inform Americans about the famine in South Sudan, the war in Yemen, and the refugees fleeing Myanmar, along with so many other important stories around the world. They could do a lot more storytelling about how current legislation and policies impact the lives of people here and around the world. Their focus on White House palace intrigue inaccurately, and subliminally, conveys that those are the most important stories, and that, I admit, just makes me mad.

Many reasonable arguments can be made for the placement of CNN as a whole, but a far more accurate way to rank the news on CNN is to rank an individual show or story. People can arrive at a consensus ranking much more easily when doing that. I will be doing that on future graphs for individual news outlets (I know you can’t wait for a whole graph just on CNN, and I can’t either!).

 


The Chart, Version 3.0: What, Exactly, Are We Reading?

 

Summary: What’s new in this chart:

  • I edited the categories on the vertical axis to more accurately describe the contents of the news sources ranked therein (long discussion below).
  • I stuffed as many sources (from both version 1.0 and 2.0, plus some new ones) on here as I could, in response to all the “what about ______ source” questions I got. Now the logos are pretty tiny. If you have a request for a ranking of a particular source, let me know in the comments.
  • I changed the subheading under “Hyper-Partisan” from “questionable journalistic value” to “expressly promotes views.” This is because “hyper-partisan” does not always mean that the facts reported in the stories are necessarily “questionable.” Some analysis sources in these columns do good fact-finding in support of their expressly partisan stances. I didn’t want anyone to think those sources were necessarily “bad” just because they are hyper-partisan (though they could be “bad” for other reasons).
  • I added a key that indicates what the circles and ellipses mean. They mean that a source within a particular circle or ellipse can often have stories that fall within that circle/ellipse’s range. This is, of course, not true for all sources.
  • Green/Yellow/Orange/Red Key. Within each square: Green is news, yellow is fair interpretations of the news, orange is unfair interpretations of the news, and red is nonsense damaging to public discourse.

Just read this one more thing: It’s best to think of the position of a source as a weighted average position of the stories within each source. That is, I rank some sources in a particular spot because most of their stories fall in that spot. However, I weight the ranking downward if a source has a significant number of stories (even if they are a minority) that fall in the orange or red areas. For example, if Daily Kos has 75% of its stories fall under yellow (e.g., “analysis” and “opinion, fair”), but 25% fall under orange (selective, unfair, hyper-partisan), it is rated overall in the orange. I rank them like this because, in my view, orange and red-type content is damaging to the overall media landscape, and if a significant enough number of stories fall in those categories, readers should rely on the source less. This is a subjective judgment on my part, but I think it is defensible. A minimal sketch of this rule in code appears below.
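Here is that weighting rule as a minimal Python sketch. The 25% cutoff comes from the Daily Kos example above; the function and its names are illustrative, not a published formula.

```python
# Minimal sketch of the "weighted downward" rule described above.
# The 25% cutoff is illustrative, drawn from the Daily Kos example.

def overall_color(shares: dict) -> str:
    """shares maps each color band to the fraction of a source's stories
    in it, e.g. {"yellow": 0.75, "orange": 0.25}."""
    damaging = shares.get("orange", 0.0) + shares.get("red", 0.0)
    # A significant minority of orange/red stories drags the whole source
    # down, even when most of its stories are green or yellow.
    if damaging >= 0.25:
        return "red" if shares.get("red", 0.0) >= 0.25 else "orange"
    # Otherwise, place the source where the plurality of its stories fall.
    return max(shares, key=shares.get)

print(overall_color({"yellow": 0.75, "orange": 0.25}))  # -> "orange"
```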

OK, you can go now unless you just really love reading about this media analysis stuff. News nerds, proceed for more discussion about ranking the news.

As I discussed in my post entitled “The Chart, Second Edition: What Makes a News Source Good?” the most accurate and helpful way to analyze a news source is to analyze its individual stories, and the most accurate way to analyze an individual story is to analyze its individual sentences. I recently started a blog series where I rank individual stories on this chart and provide a written analysis that scores the article itself on a sentence-by-sentence basis, and separately scores the title, graphics, lede, and other visual elements. See a couple of examples here. Categorizing and ranking the news is hard to do because there are so very many factors. But I’m convinced that the most accurate way to analyze and categorize news is to look as closely at it as possible, and measure everything about it that is measurable. I think we can improve our media landscape by doing this and coming up with novel and accurate ways to rank and score the news, and then teaching others how to do the same. If you like how I analyze articles in my blog series, and have a request for a particular article, let me know in the comments. I’m interested in talking about individual articles, and what makes them good and bad, with you.

As I’ve been analyzing articles on an element-by-element, sentence-by-sentence basis, it became apparent to me that individual elements and sentences can be ranked or categorized in several ways, and that my chart needed some revisions for accuracy.

So far I have settled on at least three different dimensions, or metrics, on which an individual sentence can be ranked. These are 1) the Veracity metric, 2) the Expression metric, and 3) the Fairness metric.

The primary way statements in the news are currently evaluated is on the basis of truthfulness, which is arguably the most important ranking metric. Several existing fact-checking sites, such as PolitiFact and the Washington Post Fact Checker, use a scale to rate the veracity of statements; PolitiFact has six levels and the Washington Post Fact Checker has four, reflecting that many statements are not entirely true or false. I score each sentence on a similar “Veracity” metric, as follows:

  • True and Complete
  • Mostly True/ True but Incomplete
  • Mixed True and False
  • Mostly False or Misleading
  • False

Since many reputable organizations do this type of fact-checking work according to well-established industry standards (see, e.g., the Poynter International Fact-Checking Network), I do not replicate this work myself but rather rely on these sources for fact-checking. For illustration, the scale can be encoded as shown below.
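Here is the Veracity scale written as an ordered encoding, with one plausible (unofficial) mapping from PolitiFact’s six ratings onto its five levels. Both the encoding and the mapping are illustrative assumptions, not part of any published tooling.

```python
from enum import IntEnum

class Veracity(IntEnum):
    """The five-level Veracity scale, ordered from least to most true."""
    FALSE = 1
    MOSTLY_FALSE = 2       # mostly false or misleading
    MIXED = 3              # mixed true and false
    MOSTLY_TRUE = 4        # mostly true / true but incomplete
    TRUE_AND_COMPLETE = 5

# One plausible (unofficial) way to map PolitiFact's six ratings
# onto the five Veracity levels.
POLITIFACT_TO_VERACITY = {
    "Pants on Fire": Veracity.FALSE,
    "False": Veracity.FALSE,
    "Mostly False": Veracity.MOSTLY_FALSE,
    "Half True": Veracity.MIXED,
    "Mostly True": Veracity.MOSTLY_TRUE,
    "True": Veracity.TRUE_AND_COMPLETE,
}
```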

It is valid and important to rate articles and statements for truthfulness. But it is apparent that sentences can vary in quality in other ways. One way, which I discussed in my previous post (The Chart, Second Edition: What Makes a News Source “Good”?), is on what I call an “Expression” scale of fact-to-opinion. The Expression scale I use goes like this:

  • (Presented as) Fact
  • (Presented as) Fact/Analysis (or persuasively-worded fact)
  • (Presented as) Analysis (well-supported by fact, reasonable)
  • (Presented as) Analysis/Opinion (somewhat supported by fact)
  • (Presented as) Opinion (unsupported by facts or by highly disputed facts)

In ranking stories and sentences, I believe it is important to distinguish between fact, analysis, and opinion, and to value fact-reporting as more essential to news than either analysis or opinion. Opinion isn’t necessarily bad, but it’s important to distinguish that it is not news, which is why I rank it lower on the chart than analysis or fact reporting.

Note that the ranking here includes whether something is “presented as” fact, analysis, etc. This Expression scale focuses on the syntax and intent of the sentence, but not necessarily the absolute veracity. For example, a sentence could be presented as a fact but may be completely false or completely true. It wouldn’t be accurate to characterize a false statement, presented as fact, as an “opinion.” A sentence presented as opinion is one that provides a strong conclusion, but can’t truly be verified or debunked, because it is a conclusion based on too many individual things. I’ll write more on this metric separately, but for now, I submit that it is an important one because it is a second dimension of ranking that can be applied consistently to any sentence. Also, I submit that a false or misleading statement that is presented as a fact is more damaging to a sentence’s credibility than a false or misleading statement presented as mere opinion.
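The Expression scale can be sketched the same way (again, a hypothetical encoding for illustration):

```python
from enum import IntEnum

class Expression(IntEnum):
    """The fact-to-opinion Expression scale; higher means more factual."""
    OPINION = 1            # unsupported, or based on highly disputed facts
    ANALYSIS_OPINION = 2   # somewhat supported by fact
    ANALYSIS = 3           # well supported by fact, reasonable
    FACT_ANALYSIS = 4      # persuasively-worded fact
    FACT = 5               # presented as fact
```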

The need for another metric became apparent when asking the question “what is this sentence for?” of each and every sentence. Sometimes, a sentence that is completely true and presented as fact can strike a reader as biased for some reason. There are several ways in which a sentence can be “biased,” even if true. For example, sentences that are not relevant to the current story, or not timely, or that provide a quote out of context, can strike a reader as unfair because they appear to be inserted merely for the purpose of persuasion. It is true that readers can be persuaded by any kind of fact or opinion, but it seems “fair” to use certain facts and opinions to persuade while unfair to use other kinds.

I submit that the following characteristics of sentences can make them seem unfair:

-Not relevant to present story

-Not timely

-Ad hominem (personal) attacks

-Name-calling

-Other character attacks

-Quotes inserted to prove the truth of what the speaker is saying

-Sentences including persuasive facts but which omit facts that would tend to prove the opposite point

-Emotionally-charged adjectives

-Any fact, analysis, or opinion statement that is based on false, misleading, or highly disputed premises

This is not an exhaustive list of what makes a sentence unfair, and I suspect that the more articles I analyze, the more accurate and comprehensive I can make this list over time. I welcome feedback on what other characteristics make a sentence unfair, and I’ll write more on this metric in the future. Admittedly, many of these factors have a subjective component. Some of the standards I used to make a call on whether a sentence was “fair” or “unfair” are the same ones in the Federal Rules of Evidence (i.e., the ones that judges use to rule on objections in court). These rules define complex concepts such as relevance and permissible character evidence, and determine what is fair for a jury to consider in court. I have a sense that a set of comprehensive rules for journalism fairness, similar to the rules of legal evidence, could be developed. For now, these initial identifiers of unfairness helped me detect unfair sentences in articles. I now use a “Fairness” metric in addition to the Veracity scale and the Expression scale. This metric has only two measures, and therefore requires a call to be made between:

  • Fair
  • Unfair
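Putting the three metrics together, each sentence gets one score on each scale. Here is a sketch reusing the Veracity and Expression encodings above; the record layout and helper function are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class SentenceScore:
    """One sentence of an article, scored on all three metrics."""
    veracity: Veracity      # five-level truthfulness scale
    expression: Expression  # five-level fact-to-opinion scale
    fair: bool              # the binary Fairness call

def percent_unfair(scores: list) -> float:
    """Share of an article's sentences flagged as unfair."""
    return sum(1 for s in scores if not s.fair) / len(scores)
```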

By identifying a percentage of sentences that were unfair, I was able to gain an additional perspective on what an overall article was doing, which helped me create some more accurate descriptions of types of articles on the vertical quality axis. In my previous chart (second edition), the fact-to-opinion metric was the primary basis for the vertical ranking descriptions, so it looked like this:

In using all three metrics, 1) the Veracity scale, 2) the fact-to-opinion Expression scale, and 3) the Fairness scale, I came up with what I believe are more accurate descriptions of article types, which looks like this:

As shown, the top three categories are the same, but the lower-ranked categories are more specifically described than in the previous version. The new categories are “Opinion; Fair Persuasion,” “Selective or Incomplete Story; Unfair Persuasion,” “Propaganda/Contains Misleading Facts,” and “Contains Inaccurate/Fabricated Info.” If you look at the news sources that fall into these categories, I think you’ll find that these descriptions more accurately describe many of the stories within the sources.

Thanks for reading about my media categorizing endeavors. I believe it is possible (though difficult) to categorize the news, and that doing so accurately is a worthy endeavor. In future posts and chart editions I’ll dive into other metrics I’ve been using and refining, such as those pertaining to partisanship, topic focus (e.g., story selection bias), and news source ownership.

If you would like a blank version for education purposes, here you go:

Third Edition Blank

And here is a lower-resolution version for download on mobile devices:


Not “Fake News,” But Still Awful for Other Reasons: Analysis of Two Examples from The Echo Chambers This Week

The term “fake news” is problematic for a number of reasons, one of which is that it is widely used to mean anything from “outright hoax” to “some information I do not like.” Therefore, I refrain from using the term to describe media sources at all.

Besides that, I refrain from discussing the term because I submit that the biggest problem in our current media landscape is not “hoax” stories that could legitimately be called “fake news.” What is far more damaging to our civic discourse are articles and stories that are mostly, or even completely, based on the truth, but which are of poor quality for other reasons.

The ways in which articles can be awful are many. Further, not all awful articles are awful in the same way. For these reasons, it is difficult to show casual news readers how an article that is 90% true, or even 100% true, is biased, unfair, or deviates from respectable journalistic practices.

This post is the first in a series I plan to do in which I visually rank one or more recent articles on my chart and provide an in-depth analysis of why each particular article is ranked in that spot.  My analysis includes discussions of the headlines, graphics, other visual elements, and the article itself. I analyze each element and each sentence by asking “what is this element/sentence doing?”

This week, I break down one article from the right (from the Daily Wire, entitled “TRUMP WAS RIGHT: Gold Star Widow Releases Trump’s Call After Husband Was Killed in Afghanistan”) and one from the left (from Pink News, entitled “Bill O’Reilly caught in $32 million Fox News gay adult films scandal”).

 

  • From the Left: Article Ranking and Analysis of:

http://www.pinknews.co.uk/2017/10/24/bill-oreilly-caught-in-32-million-fox-news-gay-adult-films-scandal/

Source: Pink News

Author: Benjamin Butterworth

Date: October 24, 2017

Total Word Count: 706

I. Title: Bill O’Reilly caught in $32 million Fox News gay adult films scandal

Title Issues:

Misleading about underlying facts

There is no current, known scandal involving Fox News and gay adult films. Bill O’Reilly settled a $32 million sexual harassment lawsuit while employed by Fox, and one of the allegations was that he sent a woman gay porn. However, the title suggests some sort of major financial involvement by Fox News in gay adult films. The title makes no mention of the lawsuit settlement.

                        Misleading about content of article

The article is actually about the sexual harassment settlement, with one mention of the allegation of sending gay porn, the actions of Fox News in relation to O’Reilly’s employment after the settlement, and a listing of O’Reilly’s past anti-gay statements.

                        Misleading content is sensationalist/clickbait

II. Graphics: Lead image linked to social media postings is this:

 

          Graphics Issues:

                        Misleading regarding content of article:

The image is half a gay porn scene and half Bill O’Reilly, which would lead a reader to expect that the topic of gay porn makes up a significant portion of the article—perhaps up to half.

                        Misleading content is sensationalist/clickbait

The image is salacious and relies on people’s interest in what they perceive as sexual misbehavior and/or hypocrisy of others.

                        Image is a stock photo not related to a particular fact in the article

III. Other Elements (Lead Quote): “Anti-gay former Fox News host Bill O’Reilly is caught up in a $32 million gay porn lawsuit.”

Element Issues:

Inaccurate regarding underlying facts

The $32 million lawsuit cannot be accurately characterized as being “about” gay porn. It is most accurately characterized as a sexual harassment (or related tort) lawsuit.

                        Inaccurate in relation to facts stated in article

The article itself states: “Now the New York Times (NYT) has claimed that, in January, O’Reilly agreed to pay $32 million to settle a sexual harassment lawsuit filed against him.”

Adjective describing subject of article selected for partisan effect

“Anti-gay” is used to describe Bill O’Reilly to signal, to the site’s pro-LGBT audience, that O’Reilly is especially despicable beyond the transgressions that are the subject of the lawsuit being reported on.

IV. Article:

            Genres:

  1. Embellished Reporting (i.e., reporting the timely story, plus other stuff)

Reports the current sexual harassment settlement story, relevant related timeline of events, plus extraneous information about how O’Reilly is anti-gay

  2. Promotion of Idea

                                    Idea that Bill O’Reilly is a bad person particularly because he is anti-gay

Sentence Breakdown:

                        706 total words, 28 sentences/quotes

Factual Accuracy:

% Inaccurate sentences: 0 out of 28 sentences (0%) inaccurate

% Misleading Sentences: 0 out of 28 sentences (0%) are misleading

 

                        Sentence Type by Fact, Analysis, and Opinion:

% Fact/ Quoted Statements: 24/28 (86%)

% Fact/Quoted Statements with adjectives: 2/28 (7%)

% Analysis Statements: 1/28 (3.5%)

% Analysis/Opinion Statements: 1/28 (3.5%)

% Opinion Statements: 0

 

Sentence Type by Fair/Unfair Influence:

                        % Fair: 20/28 (71%)

Sentences 1-7 and 9-21 rated as “fair” because they are factual, relevant to the current story, and timely.

% Unfair: 8/28 (29%)

Sentences 8 and 22-28 rated as “unfair” because they are untimely, unrelated to the title, and used for idea promotion

Overall Article Quality Rating: Selective Story; Unfair Influence

Main reasons:

-29% of sentences included for unfair purpose

-Anything with over 10% unfair-influence sentences can fairly be rated in this category

-Title, Graphics, Lead element all extremely misleading

Overall Partisan Bias Rating: HYPER-PARTISAN (Liberal)

Main reasons:

  • Focus on pro-LGBT message even though the underlying story is only loosely related to LGBT issues

 

  • From the Right: Article Ranking and Analysis of:

http://www.dailywire.com/news/22540/trump-was-right-gold-star-widow-releases-trumps-ryan-saavedra

Source: The Daily Wire

Author: Ryan Saavedra

Date: October 20, 2017

Total Word Count: 257

 

 

I. Title: TRUMP WAS RIGHT: Gold Star Widow Releases Trump’s Call After Husband Was Killed in Afghanistan

Title Issues:

Contains all-caps statement of “TRUMP WAS RIGHT”

-Capitalization is sensationalist

Contains conclusory opinion statement of “TRUMP WAS RIGHT”

Directly appeals to confirmation bias with “TRUMP WAS RIGHT”

People likely to believe Trump is right in general are the most likely to click on, read, and/or share this, and are most likely to believe the contents of the article at face value

Misleading regarding the context of current events; the title says “Gold Star Widow Releases Trump’s Call After Husband Was Killed in Afghanistan” (see explanation under the next issue)

Omits relevant context of current events occurring between approximately Oct 16 and Oct 20, 2017, the four days preceding this article’s publication

-In the context of a controversy over a disputed phone call between Trump and a different black Gold Star widow than the one this article is about (a controversy in which the existence of a recording of the call was also disputed), omitting the fact that this is a different black Gold Star widow who received a call from Trump is misleading. It is likely to confuse readers who are unfamiliar with the specific facts of the current controversy, such as 1) the names of the widow and soldier, Myeshia Johnson and Sgt. La David Johnson, 2) what they look like, and 3) where he was killed.

 

II. Graphic Elements: An accurate photo of the widow who is the subject of the story (Natasha DeAlencar) and her fallen soldier husband (Staff Sgt. Mark DeAlencar)

Graphics Issue:

Accurate photo juxtaposed with other problematic elements

-Though the photo is accurate, its position next to the title may lead readers who are uninformed as to the underlying facts to believe that this call is the one at issue in the current controversy between Myeshia Johnson and President Trump

III. Other Elements (Lead Quote): “Say hello to your children, and tell them your father, he was a great hero that I respected.”

Element Issue:

Accurate quote juxtaposed with other problematic elements

-Similar to the photo, though the quote is accurate, its position next to the title and photo may lead readers who are uninformed as to the underlying facts to believe that this particular quote came from the current controversial call between Myeshia Johnson and President Trump

IV. Article:

Genres:

-Storytelling

Here, the story of one widow’s experience

Promotion of ideas

Here, the promotion of the idea that Trump is respectful and kind, and of the idea that the media is deceitful

Sentence Breakdown:

257 words; 12 sentences

Factual Accuracy:

% Inaccurate sentences: 1 out of 12 sentences (8%) inaccurate

Quote from article: “In response to a claim by a Florida congresswoman this week claiming that President Donald Trump is disrespectful to the loved ones of fallen American soldiers, an African-American Gold Star widow released a video of a phone conversation she had with the President in April about the death of her husband who was killed in Afghanistan.”

  • The widow did not release the video “in response to a claim by a Florida congresswoman.” She released it in response to inquiries from reporters in the wake of the controversy between Myeshia Johnson and Trump.[1]

 

  • The congresswoman, Frederica Wilson, did not say generally that Trump is disrespectful to the loved ones of fallen American soldiers. She said to a local Miami news station, about Trump’s particular comments to Myeshia Johnson, “Yeah, he said that. So insensitive. He should not have said that.”[2] All recent instances of her talking about the President’s conduct are in the context of this incident.[3]

% Misleading Sentences: 1 out of 12 sentences (8%) are misleading

Quote from article: “The video comes a day after White House Chief of Staff John Kelly gave an emotional speech during the White House press briefing on how disgusting it was that the media would intentionally distort the words of the President to attack him over the death of a fallen American hero.”

 

This quote makes it sound like the media took the words from this call in the video and distorted them to attack the President. The words that are the subject of the controversy in the current Johnson call are not quoted in this article at all. This sentence uses a strong adjective—“disgusting”—to describe an action, and the context of this sentence may lead readers to think the “disgusting” action was the media taking these kind words and reporting different, false, insensitive words.

Sentence Type by Fact, Analysis, and Opinion:

% Fact/ Quoted Statements: 9/12 (75%)

% Fact/Quoted Statements with adjectives: 3/12 (25%)

% Analysis Statements: 0

% Analysis/Opinion Statements: 0

% Opinion Statements: 0

 

Sentence Type by Fair/Unfair Influence:

% Fair: 10/12 (83%)

Sentences 2-11 rated as “fair” because they are factual and relevant to the underlying story

% Unfair: 2/12 (17%)

Sentence 1 rated as “unfair” because inaccurate statements are generally used unfairly for persuasion

Sentence 12 rated as “unfair” because misleading statements are generally used unfairly for persuasion

Overall Article Quality Rating: Propaganda/Contains Misleading Facts

Main reasons:

-Anything over 0% inaccurate automatically rated at least this low

-Anything over 2% misleading automatically rated at least this low

-Title, Graphics, Lead element all misleading

Overall Partisan Bias Rating: HYPER-PARTISAN

Main reasons:

  • Opinion statement in title
  • Misleading and inaccurate statements used for purpose of promoting partisan ideas
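The rating floors stated in these two analyses can be collected into one small decision rule. In this sketch, the thresholds are the ones stated above, while the function itself and its handling of the no-rule-fired case are illustrative:

```python
from typing import Optional

def quality_floor(pct_inaccurate: float, pct_misleading: float,
                  pct_unfair: float) -> Optional[str]:
    """Lowest category the stated rules force an article into, or None
    if no rule fires and the article can sit higher on the axis."""
    floor = None
    if pct_unfair > 0.10:          # >10% unfair-influence sentences
        floor = "Selective or Incomplete Story; Unfair Persuasion"
    if pct_inaccurate > 0.0 or pct_misleading > 0.02:
        floor = "Propaganda/Contains Misleading Facts"  # a lower category
    return floor

print(quality_floor(0.0, 0.0, 0.29))    # Pink News example -> Selective...
print(quality_floor(0.08, 0.08, 0.16))  # Daily Wire example -> Propaganda...
```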

 

[1] https://www.washingtonpost.com/news/checkpoint/wp/2017/10/19/listen-soldiers-widow-shares-her-call-with-trump/?utm_term=.f615c80f2bd7

[2] https://www.local10.com/news/politics/trump-speaks-to-widow-of-sgt-la-david-johnson

[3] The author of this analysis is unaware of any general statement by Rep. Wilson that “Trump is disrespectful to the loved ones of fallen American soldiers,” but will revise this analysis if such quotes are brought to the author’s attention


Top Six Red Flags that Identify a Conspiracy Theory Article

It can be tough to see your Facebook friends sharing conspiracy theory stories, and tough to respond to them effectively. Pointing it out and saying “that’s a conspiracy theory” doesn’t seem to be effective. But there are certain writing patterns and tropes that are common within such articles that make them compelling to some people. Sometimes, just pointing out patterns and tropes helps people see them for what they are.


The Chart, Version 2.0: What Makes A News Source “Good?”

In my original news chart, I wrestled with the question of what makes news sources “good” and came up with some categories that generally resonated with people. I ranked sources on a vertical axis, with those at the top ranked as “high quality” and those at the bottom as “low quality.” I characterized the sources, from top to bottom, in this order: Complex, Analytical, Meets High Standards, Basic, and Sensational/Clickbait. This mostly works, because it results in sources regarded as high-brow or classy (e.g., The Atlantic, The Economist) being ranked high on the axis and trashy sources (e.g., Addicting Info, Conservative Tribune) being ranked low, and most sophisticated news consumers agree with that. However, the vertical placements ended up causing me and others some consternation, because some of the placements relative to other outlets didn’t make sense. The most common questions I got were along these lines:

“Does FOX News really “meet high standards,” on par with something like the New York Times?” (I think no.)

“Is USA today really that bad?” (I think no.)

“Is Slate really “better” or “higher quality” than, say, AP or Reuters just because it is analytical?” (I think no.)

“Is CNN really that bad?” (I think yes.)

These questions and my instinctive responses to them made me want to reevaluate what makes news sources high or low quality.

I believe the answer to that question lies in what makes an individual article (or show/story/broadcast) high or low quality. Article quality can vary greatly even within the same news source. One should be able to rank an individual article on the chart in the same way one ranks a whole news source. So, what makes an article or story high or low quality? It’s hard to completely eliminate one’s own bias on that issue, but one way to try to do it consistently is to categorize and rate the actual sentences and words that make up its headline and body. In order to try to rank any article on the chart in a consistent, objective-as-possible manner, I started doing sentence-by-sentence analyses of different types of articles.

In analyzing what kinds of sentences make up articles, it became apparent that most sentences fall into (or in between) the categories of 1) fact, 2) analysis, or 3) opinion. Based on the percentages of these kinds of sentences in an article, articles themselves can be classified into categories of fact, analysis, and opinion as well. Helpfully, some print newspapers actually label articles as “analysis” or “opinion.” However, most news sources, especially on TV or the internet, do not. I set about analyzing stories that were not pre-labeled as “analysis” or “opinion” on a sentence-by-sentence basis. I discovered that my overall impression of the quality of an article was largely a function of the proportion of fact sentences to analysis sentences to opinion sentences. As a result, I classified stories into “fact-reporting,” “analysis,” and “opinion” stories. Ones with high proportions of “fact” sentences (e.g., 90%+ fact statements) were what I refer to here as traditional “fact-reporting” news pieces. These are the kinds of stories that have historically been the basis of late-20th-to-early-21st-century journalism, and what people used to refer to exclusively as “news.” They are the “who,” “what,” “when,” and “where” pieces (not necessarily “why”). I classified ones with high proportions of “analysis” sentences (e.g., 30%-50% analytical statements) as “analysis” stories, which are the types of stories commonly found in publications like The Economist or websites such as Vox. I classified stories with high proportions of opinion sentences (e.g., 30%-50% opinion statements) as “opinion,” which are typically the types of stories found on websites such as Breitbart or Occupy Democrats.
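That classification can be written as a simple rule over the sentence-type proportions. The cutoffs are the rough ranges given above; the ordering of the checks and the “mixed” fallback for borderline cases are illustrative assumptions:

```python
def classify_story(pct_fact: float, pct_analysis: float,
                   pct_opinion: float) -> str:
    """Classify a story from its proportions of sentence types,
    using the rough cutoffs described in the text."""
    if pct_fact >= 0.90:
        return "fact-reporting"
    if pct_opinion >= 0.30:
        return "opinion"
    if pct_analysis >= 0.30:
        return "analysis"
    return "mixed"  # borderline cases are left to judgment in the text

print(classify_story(0.95, 0.04, 0.01))  # -> "fact-reporting"
print(classify_story(0.55, 0.40, 0.05))  # -> "analysis"
```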

(If you’ve made it this far, bless your heart for caring so much about the news you read.)

In the past, national evening news programs, local evening news programs, and the front pages of print newspapers were dominated by fact-reporting stories. Now, however, many sources people consider to be “news sources” are actually dominated by analysis and opinion pieces. This chart ranks media outlets that people consider to be, at some level, “news sources,” even though many of them are comprised entirely of analysis and opinion pieces.

In my previous version of the chart, I had regarded analysis pieces as “higher quality” than the fact-reporting pieces because they took the facts and applied them to form well-supported conclusions. I like analytical writing, which is essentially critical thinking. However, analysis has a lot in common with opinion, and writing that is intended to be analytical often strays into opinion territory. (Note—I’m defining “analysis” as conclusions well supported by facts and “opinion” as conclusions poorly supported or unsupported by facts). Fact-reporting articles—true “scoops”—typically have the intent of just reporting the facts and typically have a very high percentage (e.g., 90%+) of fact-statement sentences, whereas both analysis and opinion articles have the intent of persuading an audience and often have a comparatively high percentage of analysis and opinion statement sentences (30%-50%). So, although I initially had the quality axis of “news” laid out top to bottom as:

That ranking is more reflective of the quality of writing than of the quality of news sources. Good analysis is often written persuasively and well; fact-reporting is often written plainly but well; and opinion writing is often (but not always) written poorly or is easily discredited. I submit that given the confusion caused by the overwhelming number of organizations proclaiming to be (or which are commonly confused with) “news sources,” it is more important to rank the quality of news sources than the quality of writing. I further submit, for reasons outlined below, that the percentage of fact-reporting articles and stories should be the most determinative factor by which a news source is ranked in quality on this chart.

Therefore, I believe a more relevant ranking of the quality of news sources would be:

 

 

I assert that one of the biggest problems with our current news media landscape is that there is too much analysis and opinion available relative to factual reporting. New technologies have given more people more platforms to contribute analysis and opinion pieces, so many “news sources” have popped up to compete for readers’ attention. Unfortunately, news consumers often do not recognize the difference between actual fact-reporting news and the analysis and opinion writing about that news. This increase in “news sources” has not corresponded with an increase in actual journalists or news reporting, though. Many local and national print news organizations have reduced their numbers of journalists, while many of the biggest ones have merely maintained similar numbers of journalists over the past 10 years or so.
For example, AP and Reuters have maintained around 2,000-2,500 journalists each over that time, while the New York Times and Washington Post have fluctuated in the 500-1,000 range. The value of these organizations with large staffs of journalists, editors, and other newsroom employees is hard to overstate; not only do they provide a majority of the fact-reporting stories everyone else relies on, but they have the capacity to provide high-quality editorial review that stands up to industry scrutiny. In contrast, even some of the most popular analysis and opinion sites can be run with just a few dozen writers and staff, and the number of these “news” websites, news aggregator websites, blogs, and podcasts has seemingly grown exponentially.

Furthermore, primarily analytical news sources have several downsides. One is that they can alienate news consumers by making what people consider “news sources” so complex or partisan that it is tiring to consume any “news.” For example, CNN, MSNBC, and FOX News, which are primarily analysis- and opinion-driven, can make news consumers too weary to pay attention to fact-based reporting from, say, AP or Reuters. Another is that it can be difficult for casual readers to differentiate between good analysis and pure opinion.

There are several good reasons why we should place fact-reporting sentences, fact-reporting articles, and fact-reporting news sources high on the quality scale of news, at least on this chart. For one, reported facts take a lot of work to obtain. They require journalists on the ground investigating and interviewing. Once a story is reported, dozens, hundreds, or thousands of other writers can chime in with their analysis or opinions of it. This is not to say analysis and opinion writing isn’t important. The critical thinking presented in analytical writing—especially good, complex analysis—is essential to public discourse. Our society’s best ideas are advanced by analytical articles. This piece you are reading now is analytical. But analysis in the news wouldn’t even exist without the underlying factual reporting.

I believe improvements to our media landscape can be made if two things happen: 1) if news consumers start valuing factual reporting much more and analysis/opinion articles much less, and 2) if news consumers become accustomed to differentiating between articles in those categories. Regarding point #2, I think it would be helpful if we narrowed the definition of “news” to refer only to fact reporting, and referred to everything else as “analysis” or “opinion.” It would be helpful if people could recognize the relative contributions of fact-reporting news organizations versus analysis and opinion sources. If people recognize just how much of what they read and watch is intended to persuade them, they may become more conscious and thoughtful about how much they allow themselves to be persuaded. One can hope.

To contribute to those goals, I’ve reordered the chart to value fact-reporting articles as the highest quality and everything else lower, even though there is some really excellent analysis out there. As a baseline, news consumers should understand when something is news (fact-reporting) and when it is not. On the new chart, the sources with the best analysis but little reporting are near the top, right under the sources that are comprised of high percentages of fact-reporting articles. The most opinion-driven sources are at the bottom. There’s room for other things at the bottom below pure opinion, which can include sources that are sensationalist, clickbait, frequently factually incorrect, or which otherwise don’t meet recognized journalism standards.

On this version, I’ve included a number of different sources, mostly in the analysis and opinion categories, and kept the most popular mainstream sources from the original chart, but have reordered some of them. Now, the rankings are more consistent with my initial answers to the example questions at the beginning of this post. Fox News is now ranked far lower than the New York Times for two main reasons: one, Fox News is dominated by opinion and analysis, and two, it has gotten precipitously worse in other measures (sensational chyrons, loss of experienced journalists, hyperbolic analysis by contributors, etc.) within the last six months. USA Today, despite its basic nature, has been elevated because of its high percentage of fact-reporting stories. Slate, though it provides thoughtful, well-written analysis, is ranked lower than AP and Reuters, which better reflects their relative contributions to the news ecosystem. CNN still sucks, but it is clearer why now: CNN has the resources to provide twenty-four hours of news—it could provide Americans with a detailed global-to-local synopsis of the world—but instead it chooses to spend 5% of its time fact-reporting a handful of stories, comprising mostly American political drama and maybe one violent leading world news story, and 95% on analysis and opinion ranging from the competent to the inane.

My analysis of news sources in the manner I’ve described here has revealed that individual stories can and should be ranked on the chart in the same manner, and that individual stories can be placed in different spots than the news sources in which they are published. I’ll be putting out individual story rankings, and the reasoning behind them, from time to time for those who are interested. I’ll also take requests for rankings of sources and individual stories in the comments and on Twitter. Thanks for reading and thinking.

*Update: a high-resolution PDF version is available here: Second Edition News Chart.V2

And a blank version is here: Second Edition Blank

 


What is the difference between the statues of George Washington and Robert E. Lee?

The pro-Confederate-statue side asks this question, likely in earnest, and it is worth grappling with the distinction. Indeed, since slavery is evil and horrible, as liberals and conservatives generally agree, and both men owned slaves, why is it preferable to take down a Confederate statue and not a Washington statue?

This is not cut and dried, or “obvious” to everyone, and we shouldn’t treat it as such. It is a difficult task to distinguish between two things that are alike in some ways and different in others, so let’s look at the details and facts of these cases in order to distinguish them, like courts do.

It is a general rule that we put up statues of good people and not bad ones, but this in itself is a hard rule to follow because no one person is all good or all bad. It’s a bit easier to distinguish with some people than others. MLK=almost all good and Hitler=almost all bad is not hard. I think it is legitimately closer with both George Washington and Robert E. Lee. I think the reason the argument comes down to GW=mostly good (despite slaves!) is that he is most known and respected for 1) fighting in the Revolutionary War for American independence, which modern Americans view as a righteous cause, and 2) being our first President. The argument comes down to Lee=mostly bad (plus slaves!) because he is most known for fighting in the Civil War for the cause of keeping slaves, which most modern Americans view as a morally wrong cause.

The question of what they are most known for is an important one, because that is usually the same reason their statue was put up in the first place. When it comes to the question of whether to take one down, people tend to base their opinion on two questions: 1) what the statue meant when it was put up in the first place and 2) what it means now, in the context of history. With GW, the statues were put up because of his role in the Revolution and as President. With Lee, they were put up during an era of brutal reinforcement of white supremacy (see comments for a link discussing this history), with the purpose of intimidating recently freed slaves. Today, in the context of history, GW’s statues are widely seen as a reflection of his leadership and role as a founder, not his role as a slave owner. Most people don’t go to a GW monument for the purpose of celebrating his slave ownership. Today, though, in the context of history, Lee’s statues are commonly given two negative meanings: first, they serve as a reminder of white supremacy to black people, and second, they serve as a rallying point for actual white supremacists. Yes, to many people, a Lee statue may mean a “commemoration of Southern history” too, but if it’s 50% a brutal white supremacist reminder/rallying point and 50% Southern history commemoration, that’s enough to justify its removal. We have made a moral decision as a society that its (even partial) role as a white supremacy beacon is not acceptable, in response to a particular flash point of a white supremacist resurgence. We have not made a similar decision about the Washington statues, because there has been no recent flash point around those.

However, I can’t actually morally justify Washington owning slaves, and that practice is indeed so reprehensible that it is valid to argue that if slavery is that wrong, then we should take down the statues of any slave holder, no matter how “good” they were otherwise. Joe Paterno’s statue was taken down because his biggest moral failing—protecting a child predator—outweighed the other good he had done. Perhaps the removal of Washington (slave owner) and Jefferson (slave owner and likely slave rapist) is the morally correct thing to do. We would likely remove the statues of contemporary heroes (say, MLK or Wayne Gretzky) if we suddenly found out they were rapists or owned slaves.

But there is a distinguishing factor between how we judge the actions of contemporaries and how we judge those of historical figures, and that is the relative morality of a time in history compared to the present. Those who argue “slave owners weren’t all bad people” are inherently taking this factor into account. Yes, we all view slavery as evil now, but when it was a somewhat normalized aspect of society, it is plausible and even likely that many slave owners tried to live what they thought were upstanding moral lives in many ways. They may even have had moral dilemmas about slavery but felt that it was an intractable problem for them to solve, let alone forgo participation in. “Slave owners were not all bad people” (a typically conservative argument) is a very similar argument to “George Washington’s statue should remain up because he did other good things, even though he owned slaves” (an argument liberals are currently making in relation to the Confederate statue issue). “George Washington was not all bad,” essentially.

It seems that the right thing to do is to take down the Confederate statues because of the bad things their subjects were best known for (explicitly fighting for slavery), plus the reasons they were put up, plus the pain they cause people now. But we must also admit that it would be logically consistent to remove statues of other slave owners, even our founding fathers, if some contemporary flash point were to bring the issue of how bad slavery really is to the forefront. Perhaps it is a moral failing of our current time that we have not come to this realization yet. Perhaps future generations will come to the consensus that the founding fathers’ statues should be removed and hold it against our generations that we did not. Perhaps they will judge us harshly for tolerating other injustices, like unequal rights for women and queer people, for so long. Societal morals evolve over time. In the near term, though, it is likely that the “contemporary, widely held perception of the statues” factor and the “relative morality of the person’s time” factor save the Washington and Jefferson statues now but not the Confederate statues. So down with the Confederate statues. And shame, at least, on the moral failings of those whose statues we leave in place.


High Resolution File Formats for Full Chart and Blank Versions of News Quality Chart

A few people have asked me to post links to various file formats of this chart for their own use. Feel free to download and use them. There is a Creative Commons license on them which requests attribution and non-commercial use. They contain minor updates from recent versions. Most notably, The Economist has been moved to the left. I agree with commentators who pointed out that was an erroneous initial placement. Also, I changed the snarky designation “Basic AF” to “Basic” so that the chart’s use would be more appropriate in middle school and/or high school settings.  (Note: the abbreviation “AF” stands for “as fuck,” which is text/internet slang for “very,” or “quite.” Sorry for any classroom snickers this may have caused for unsuspecting teachers.)

 

News Quality.Blank.V2

News Quality.V5