
Why Measuring Political Bias is So Hard, and How We Can Do It Anyway: The Media Bias Chart Horizontal Axis

Post One of a Four-Part Series

Part 1: Measuring Political Bias–Challenges to Existing Approaches and an Overview of a New Approach

Many commentators on the Media Bias Chart have asked me about (or argued with me about) why I placed a particular source in a particular spot on the horizontal axis. Some more astute observers have asked about (and argued with me about) the underlying questions of “what do the categories mean?” and “what makes a source more or less politically biased?” In this series of posts, I will answer these questions.

In previous posts I have discussed how I analyze and rate the quality of news sources and individual articles for placement on the vertical axis of the Media Bias Chart. Here, I tackle the more controversial dimension: rating sources and articles for partisan bias on the horizontal axis. In my post on Media Bias Chart 3.0, I discussed rating each article on the vertical axis by taking each of its aspects, including the headline, the graphic(s), the lede, AND each individual sentence, and ranking it. In that post, I proposed that when it comes to sentences, there are at least three different ways to score them for quality: on a Veracity scale, an Expression scale, and a Fairness scale. However, the ranking system I’ve outlined for vertical quality ratings doesn’t address everything that is required to rank partisan bias. Vertical quality ratings don’t necessarily correlate with horizontal partisan bias ratings (though they often do, hence the somewhat bell-curved distribution of sources along the chart).

Rating partisan bias requires different measures, and it is more controversial because disagreements about it inflame the passions of those arguing about it. It’s also very difficult, for reasons I will discuss in this series. However, I think it’s worth trying to 1) create a taxonomy with a defined scope for ranking bias and 2) define a methodology for ranking sources within that taxonomy.

In this series, I will do both. I’ve created the taxonomy already—the chart itself—and in these posts I’ll explain how I’ve defined its horizontal dimension. The scope of this horizontal axis has some arbitrary limits and definitions. For example, it is limited to US political issues as they have existed over roughly the last year, and it uses the positions of various elected officials as proxies for the categories. You can feel free to disagree with each of these choices. However, the taxonomy has to start and end somewhere in order to create a systematic, repeatable way of ranking sources within it. I’ll discuss how I define each of the horizontal categories (most extreme/hyper-partisan/skews/neutral). Then, I’ll discuss a formal, quantitative, and objective-as-possible methodology for systematically rating partisan bias, which has evolved from the informal and somewhat subjective processes I had been using in the past. This methodology comprises:

  1. An initial placement of left, right, or neutral for the story topic selection itself
  2. Three measurements of partisanship on quantifiable scales:
    • a “Promotion” scale
    • a “Characterization” scale, and
    • a “Terminology” scale
  3. A systematic process for measuring what is NOT in an article, the absence of which results in partisan bias
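For readers who like to see things concretely, here is a minimal sketch of how I might record these components for a single article in code. The class, field names, numeric ranges, and the toy way of combining them are placeholders of my own for illustration, not a finished formula.

```python
from dataclasses import dataclass, field
from typing import List

# Placeholder convention: negative numbers lean left, 0 is neutral, positive lean right.
LEFT, NEUTRAL, RIGHT = -1, 0, 1

@dataclass
class ArticleBiasRating:
    # 1) Initial placement based on the story topic selection itself
    topic_placement: int = NEUTRAL           # LEFT, NEUTRAL, or RIGHT

    # 2) Three measurements of partisanship on quantifiable scales
    promotion_score: float = 0.0             # "Promotion" scale
    characterization_score: float = 0.0      # "Characterization" scale
    terminology_score: float = 0.0           # "Terminology" scale

    # 3) What is NOT in the article: omitted facts or context that create bias
    omissions: List[str] = field(default_factory=list)

    def combined_score(self) -> float:
        """Toy combination: average the three scales, nudge by topic placement,
        and push further in the same direction for each noted omission."""
        scales = (self.promotion_score + self.characterization_score +
                  self.terminology_score) / 3
        direction = 1 if scales >= 0 else -1
        return self.topic_placement * 0.5 + scales + 0.25 * len(self.omissions) * direction

# Example with made-up numbers for a hypothetical left-leaning article
article = ArticleBiasRating(topic_placement=LEFT, promotion_score=-1.5,
                            characterization_score=-0.5, terminology_score=-1.0,
                            omissions=["no quote from the other side"])
print(article.combined_score())  # -1.75
```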

  1. Problems with existing bias rating systems

To the extent that organizations try to measure news media stories and sources, they often do so only by judging or rating partisan bias (rather than quality). Because it is difficult to define standards and metrics by which partisan bias can be measured, such ratings are often made through admittedly subjective assessments by the raters (see here, for example), or are made by polling the public or a subset thereof (see here, for example). High levels of subjectivity can cause the public to be skeptical of ratings results (see, e.g., all the comments on my blog complaining about my bias), and polling subsets of the public can skew results in a number of directions.

Polling the public, or otherwise asking the public to rate the “trustworthiness” or bias of news sources, has proven problematic in a number of ways. For one, people’s subjective ratings of the trustworthiness of particular sources tend to correlate very highly with their own political leanings: liberal people will tend to rate MSNBC as highly trustworthy and FOX as not trustworthy, while conservative people will do the opposite, which says very little about the actual trustworthiness of either source. Further, current events have revealed that certain segments of the population are extremely susceptible to influence by low-quality, highly biased, and even fake news, and those segments have proven unable to reliably discern quality and bias, making them unhelpful to poll.

Another way individuals and organizations have attempted to rate partisan bias is through software-enabled text analysis. The idea of text analysis software is appealing to researchers because the sheer volume of news text is enormous. Social media companies, advertisers, and other organizations have recently used such software to perform “sentiment analysis” of content such as social media posts in order to identify how individuals and groups feel about particular topics, with the hope that knowing such information can help influence purchasing behavior. Some have endeavored to measure partisan bias in this way, by programming software to count certain words that could be categorized as “liberal” or “conservative.” A study conducted by researchers at UCLA tried to measure such bias by counting media figures’ references to conservative and liberal think tanks. However, such attempts to rate partisan bias have had mixed results, at best, because of the variation in the context in which these words are presented. For example, if a word is used sarcastically, or in a quote by someone on the opposite side of the political spectrum from the side that typically uses that word, then the use of the word is not necessarily indicative of partisan bias. In the UCLA study, references to political think tanks were too infrequent to generate a meaningful sample. I submit that other factors within an article or story are far more indicative of bias.
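To illustrate why the word-counting approach breaks down, here is a bare-bones version of it. The word lists are placeholders I made up, not the ones used in any actual study; the point is that a counter like this scores a sincere use of a term and a sarcastic or quoted use of the same term identically.

```python
import re

# Toy word lists; real attempts used much longer lists (or think tank names).
LIBERAL_TERMS = {"undocumented immigrant", "climate crisis", "gun safety"}
CONSERVATIVE_TERMS = {"illegal alien", "death tax", "pro-life"}

def naive_bias_score(text: str) -> int:
    """Count 'conservative' terms minus 'liberal' terms.
    Positive = leans right, negative = leans left, by this crude measure."""
    lowered = text.lower()
    right = sum(len(re.findall(re.escape(term), lowered)) for term in CONSERVATIVE_TERMS)
    left = sum(len(re.findall(re.escape(term), lowered)) for term in LIBERAL_TERMS)
    return right - left

# Both sentences get the same score of +1, even though the second one merely
# quotes (and distances itself from) the counted term.
print(naive_bias_score("The death tax punishes family farms."))
print(naive_bias_score('Critics mocked the senator for calling it a "death tax".'))
```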

I also submit that large-scale, software-enabled bias ratings are not useful if the results do not align well with the subjective bias ratings gathered from a group of knowledgeable media observers. That is, if we took a poll of an equal number of knowledgeable left-leaning and right-leaning media observers, we could come to some kind of reasonable average for bias ratings. To the extent software-generated results disagree, that suggests the software model is wrong. I stated earlier my dissatisfaction with consumer polls as the sole indicator of bias ratings because such polling is consumer-focused and not content-focused. I think there is a way to develop a content-based approach to ranking bias that aligns with our human perceptions of bias, and that once that is developed, it is possible to automate portions of that content-based approach. That is, we can get computers to help us rate bias, but we first have to create a very thorough bias-rating model.

  2. Finding a better way to rank bias

When I started doing ratings of partisanship, I, like all others before me, rated sources subjectively and instinctively from my own point of view. However, knowing that I, like every other human, have my own bias, I tried to control for it (as referenced in my original methodology post), possibly resulting in overcorrection. I wanted a more measurable and repeatable way to evaluate the bias of both entire news sources and individual news stories.

I have created a formal framework for measuring political bias in news sources within the defined taxonomy of the chart. I have started implementing this formal framework when analyzing individual articles and sources for ranking on the chart. This framework is a work in progress, and the sample size upon which I have tested it is not yet large enough to conclude that it is truly accurate and repeatable. However, I am putting it out here for comments and suggestions, and to let you know that I am designing a study for the dual purposes of 1) rating a large data set of articles for political bias and 2) refining the framework itself. Therefore, I will refer to some of these measurements in the present tense and others in the future tense. My overall goal is to create a methodology by which other knowledgeable media observers, including left-leaning and right-leaning ones, can reliably and repeatably rate bias of individual stories and not deviate too far from each other in their ratings.

My existing methodology for ranking an overall source on the chart takes into account certain factors related to the overall source as a first step, but is primarily based on rankings of individual articles within the source. Therefore, I have an “Entire Source” bias rating methodology and an “Individual Article” bias rating methodology.

  3. “Entire Source” Bias Rating Methodology

I discussed ranking partisan bias of overall sources in my original methodology post, which involves accounting for each of the following factors:

  1. Percentage of news media stories falling within each partisanship category (according to the “Individual Story” bias ranking methodology detailed below)
  2. Reputation for a partisan point of view among other news sources
  3. Reputation for a partisan point of view among the public
  4. Party affiliation of regular journalists, contributors, and interviewees
  5. Presence of an ideological reference or party affiliation in the title of the source

In my original methodology post, I identified a number of other factors for ranking sources on both the quality and partisanship scales that I am not necessarily including here. These are 1) number of journalists, 2) time in existence, and 3) readership/viewership. I am starting with the assumption that the five factors listed above are more precise indicators of partisanship and would line up with polling results of journalists and knowledgeable media consumers. In other words, my starting assumption is that if you used those five factors to rate the partisanship of a set of sources, and then also polled significant samples of journalists and consumers, you would get similar results. I believe that over time, some of the excluded factors (number of journalists, time in existence, and readership/viewership) may be shown to correlate strongly with partisanship or non-partisanship. For example, I suspect that a high number of journalists may be found to correlate with low partisanship, because it is expensive to have a lot of journalists on staff, and running a profitable news enterprise with a large staff would require broad readership across party lines. I suspect that time in existence may not necessarily correlate with partisanship, because several new sources that strive to provide unbiased news have come into existence within just the last few years. I suspect that readership/viewership will not correlate much with partisanship, for the simple reason that as many people seem to like extremely partisan junk as like unbiased news. Implementation of a study based on the factors listed above should verify or disprove these assumptions.

I have “percentage of news media stories falling within each partisanship category” listed as the first factor for ranking sources, and I believe it is the most important metric. Whenever someone disagrees with a particular ranking of an overall source on the chart, they usually cite their perceived partisan bias of a particular story that they believe does not align with my ranking of the overall source. What should be apparent to all thoughtful media observers, though, is that individual articles can themselves be more liberal or conservative than the mean or median partisan bias of their overall source. In order to accurately rank a source, you have to accurately rank the stories in it.
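As a rough sketch of how that first factor could roll up into a source-level placement, the snippet below takes a list of per-article category labels and computes both the distribution and a simple average position. The seven category labels and the numbers I attach to them are my own shorthand for the chart’s columns, and the plain averaging rule is deliberately simplistic.

```python
from collections import Counter
from statistics import mean

# Shorthand for the chart's horizontal columns, mapped to numbers (illustrative only).
CATEGORIES = {
    "most extreme liberal": -3, "hyper-partisan liberal": -2, "skews liberal": -1,
    "neutral": 0,
    "skews conservative": 1, "hyper-partisan conservative": 2,
    "most extreme conservative": 3,
}

def source_bias_summary(article_categories):
    """Given per-article category labels for one source, return the percentage
    of stories in each category and a naive average position."""
    counts = Counter(article_categories)
    total = len(article_categories)
    percentages = {cat: 100 * counts.get(cat, 0) / total for cat in CATEGORIES}
    average_position = mean(CATEGORIES[cat] for cat in article_categories)
    return percentages, average_position

# Example: a hypothetical source with ten rated stories
ratings = ["neutral"] * 6 + ["skews liberal"] * 3 + ["hyper-partisan liberal"]
percentages, average = source_bias_summary(ratings)
print(percentages["skews liberal"], average)  # 30.0 -0.5
```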

  4. “Individual Story” Bias Rating Methodology

As previously discussed, I propose evaluating partisanship of an individual article by: 1) creating an initial placement of left, right, or neutral based on the topic of the article itself, 2) measuring certain factors that exist within the article and then 3) accounting for context by counting and evaluating factors that exist outside of the article. I’ll discuss this fully in Posts 3 and 4 of this series.

In my next post (#2 in this series), I will discuss the taxonomy of the horizontal dimension. I’ll cover many reasons why it is so hard to quantify bias in the first place. Then I’ll define what I mean by “partisanship,” the very concepts of “liberal,” “mainstream/center,” and “conservative,” and what each of the categories (most extreme/hyper-partisan/skews/neutral or balance of biases) means within the scope of the chart.

 Until then, thanks for reading and thinking!


An Exercise for Bias Detection

A great exercise to train your bias-detecting skills is to check a high volume of outlets (say, eight to ten) across the political spectrum in the 6-12 hours right after a big political story breaks. I did this right after the release of the Nunes memo on Friday, Feb 2. This particular story provided an especially good occasion for comparison across sites for several reasons, including:

-It was a big political story, so nearly everyone covered it. It’s easier to compare bias when each source is covering the same story.

-The underlying story is fact-dense, meaning that a lot of stories about it are long:

-As a result, it is easier to tell when an article is omitting facts.

-It is also easier to compare how even highly factual stories (i.e., scores of “1” and “2” on the Veracity and Expression scales) characterize particular facts to create a slight partisan lean.

-There are both long and short stories on the subject. Comparison between longer and shorter stories lets you more easily find facts that are omitted in order to frame the issues one way or another.

-News outlets had quite a while to prepare for this story before it broke, so those inclined to spin it one way or the other had time to develop their spin. Several outlets had multiple fact, analysis, and opinion stories within the 12 hours following the story breaking. You could count the number of stories on each site, rate their bias, and get a more complete view of the source’s overall bias.

I grabbed screenshots of several sources across the spectrum from the evening of Feb. 2 and the morning of Feb. 3. These are from the Internet Archive’s Wayback Machine, https://web.archive.org (if you haven’t used it before, it’s a great tool that allows you to see what websites looked like at previous dates and times). Screenshots from the following sites are below:

FoxNews.com

Breitbart.com

NationalReview.com

RedState.com

WashingtonPost.com

NYTimes.com

Huffpost.com

TheDailyBeast.com

Slate.com

BipartisanReport.com

 

You can get a good sense of bias from taking a look at the headlines, but you can get deeper insight from reading the articles themselves. For some sources, the headlines are a lot more dramatic than the articles themselves; for others, the articles are equally or more biased.

If you want to rank these articles (based on the articles themselves, or just on the headlines and pages below) on a blank version of the chart, I recommend placing the ones that seem most extremely biased first, then placing the ones that seem less biased. It’s easiest to identify the most extreme of a set, and then place the rest in relative positions. There’s not always a source or story that will land in whatever you consider “the middle,” but you can find some that are closer than others.

Going through this exercise is especially beneficial when big stories like this break. I know it is time-consuming to read so many sources and stories, so most people don’t read laterally like this very often, if ever (if you do, nice work!). Doing so from time to time can help you remember that people are reading very different things than you are, and it can increase your awareness of the range of biases across the spectrum. It can also help you detect more subtle bias in the sources you regularly rely on.

Happy bias-detecting!

FOX NEWS

BREITBART

NATIONAL REVIEW

REDSTATE

 

WASHINGTON POST

NYTIMES

HUFFPOST

THE DAILY BEAST

SLATE

BIPARTISAN REPORT

 


Media Bias Chart 3.1: Minor Updates Based on Constructive Feedback

So why is it time for another update to the Media Bias Chart? I’m a strong believer in changing one’s mind based on new information. That’s how we learn anyway, and I wish people would do it more often. I think it would lead to nicer online discussions and less polarization in our politics. Perhaps people don’t “change their minds based on new information” as much as they should because it is often framed more negatively as “admitting you are wrong.” I don’t particularly mind admitting I’m wrong.

In any event, I’m making some minor updates, corrections, and improvements to the Media Bias Chart based on feedback I’ve gotten. I’ve been fortunate to hear from many of you thoughtful observers out there, and I’m so grateful that so many of you care about the subject of ranking quality and bias.

The Media Bias Chart Updates

Here are the changes for version 3.1. I’m calling it 3.1 because they are mostly minor changes. I got quite a bit of feedback on these topics in particular.

  • The middle column now says “Neutral: Minimal Partisan Bias OR Balance of Biases.” I moved away from the term “Mainstream” because that term is so loaded as to be useless to some audiences. Also, there are some sources that are not really minimally biased or truly neutral; some have extreme stuff from both political sides.

 

  • The horizontal categories have been updated slightly in our Media Bias Chart. The “skew conservative” and “skew liberal” categories no longer have the parenthetical comment “(but still reputable),” mostly because the term “reputable” has more to do with quality on the vertical axis, and I’m doing my best not to conflate the two. The “hyper-partisan conservative” and “hyper-partisan liberal” categories no longer have the parenthetical comment “(expressly promotes views),” mostly because “promoting views” is not the only characteristic that makes something hyper-partisan. Finally, the outermost liberal and conservative “utter garbage/conspiracy theories” categories are now re-labeled “most extreme liberal/conservative.” This is, again, because the terms “utter garbage” and “conspiracy theories,” though often accurate for sources in those columns, have more to do with quality than partisanship.

 

What has moved?

I am writing a separate post that more specifically defines the horizontal axis and the criteria for ranking sources within its categories. It’s a pretty complex topic, and I’ll discuss many additional points frequently raised by those of you who have commented. I will likely have more revisions accompanying that post.

 

  • I have moved Natural News from the extreme left to slightly right. I know this may still cause some consternation among commentators who correctly note that it has a lot of extreme right-wing political content. However, after categorizing dozens of articles over several sample days and counting how many fell in each category, the breakdown looked like this: about a third fell in the range of “skew liberal” to “extreme liberal” (in terms of promoting anti-corporate and popular liberal pseudo-science positions), another third were relatively politically neutral “health news,” and about a third fell into the extreme conservative bucket. There wasn’t much that fell into the “skew conservative” or “hyper-partisan conservative” categories. So even though the balance was 1/3, 1/3, 1/3 left, center, right, the 1/3 on the right was almost all “most extreme conservative,” which pushed the overall source rank to the right. For those who are still unhappy and think it should be moved further right, take consolation in the fact that it is still at the bottom vertically, and to an extent, it doesn’t matter how partisan the junk news is as long as you still know it’s junk.

 

  • I removed US Uncut, because as some of you correctly pointed out, that site is now defunct.

 

  • I removed Al-Jazeera from the top middle, but not because I don’t think it’s a mostly reputable news source. I removed it for two reasons.

Al-Jazeera Explained

  1. First, many people are unclear on what I am referring to as Al-Jazeera. It is a very large international media organization based in Qatar (see https://en.wikipedia.org/wiki/Al_Jazeera), but it is not a very popular news source among Americans. Americans who are familiar with it could assume that I am referring to Al-Jazeera English (a sister channel), or Al-Jazeera America (a short-lived US organization (2013-2016) which arguably leaned left), or AJ+ (a channel that provides explanatory videos on Facebook and also arguably leans left). I do think these are worth including in the Media Bias Chart, but I will differentiate them before including them in future versions. What I meant originally was the main Al-Jazeera site that is in English, which covers mostly international news, and which I consider a generally high-quality and reputable source.
  2. Second, it is somewhat controversial because it is funded by the government of Qatar, and it has been accused of bias as it pertains to Middle East politics. This doesn’t necessarily mean that it is disreputable, or that its ownership results in stories that are biased to the left or right on the US political spectrum. However, I have only two other non-US sources on the Media Bias Chart—the BBC and the Daily Mail—which both have significant enough coverage of US politics that you can discern bias on the US spectrum. I don’t have any other international sources on the Media Bias Chart, and none that are primarily funded by a non-democratic government (the BBC is funded by the British public; NPR is publicly and privately funded in the US). Until I can specify which articles I have rated to form the basis for Al-Jazeera’s placement, I’m going to leave it off.

Thanks for the comments so far, and please keep them coming. I appreciate your suggestions for how to make this work better and your requests for what you want to see in the future.


Observations on The Chart by Law Professor Maxwell Stearns of U. Maryland

Law professor Maxwell Stearns, who blogs about law, politics, and culture, recently published this post about the chart, which has several useful insights about 1) distilling the ranking criteria into sub-categories, 2) why the sources on the chart form a bell curve, and 3) how the rankings might be made more scientific. Give it a read!

https://www.blindspotblog.us/single-post/2017/11/18/The-Viral-Media-Graphic-with-special-thanks-to-Vanessa-Otero


Everybody has an Opinion on CNN

I get the most feedback by far on CNN, and, in comparison to feedback on other sources on the chart, CNN is unusual because I get feedback that it should be moved in all the different directions (up, down, left, and right). Further, most people who give me feedback on other sources suggest that I should just nudge a source one way or another a bit. In contrast, many people feel very strongly that CNN should be moved significantly in the direction they think it should go.

I believe there are a couple of main reasons I am getting this kind of feedback.

  • CNN is the source most people are most familiar with. It was the first, and is the longest-running, 24-hour cable news channel. It’s on at hotels, airports, gyms, and your parents’ house. Even if people are critics of no other news source, they will be critics of CNN, because it is the one they are most familiar with.
  • CNN is widely talked about by other media outlets, and by conservative media outlets in particular, which often describe it as crazy-far left. Usually those who tell me it needs to be moved far to the left are the ones reading conservative media—no surprise there.
  • People tend to base their opinions of CNN on what leaves the biggest impression on them, and there are a lot of aspects that can leave an impression:
    1. For some people, having CNN on in the background during the day exposes them to a large sampling of its news coverage, and they see that the programming is mostly accurate and informs them of a lot of US news they are interested in. These individuals tend to think that CNN should be ranked higher, perhaps all the way up in “fact-reporting” and “mainstream.”
    2. For others, they know they can tune into CNN for breathless, non-stop coverage of an impending disaster, like a hurricane, or a breaking tragedy, such as a mass shooting. People can have a few different kinds of impressions from this. First, they can count on the fact that they will get all the known facts repeated to them within 10 minutes of tuning in. That’s another reason to put them up in “fact-reporting.” Second, more savvy observers know that CNN makes not-infrequent mistakes and often jumps the gun in these situations. They usually qualify their statements properly, but they will still blurt out facts about a suspect, the number of shooters, or fatalities that are not quite yet verified. That causes some people to rank them lower on the fact-reporting scale. Third, people know that once CNN runs out of current material to talk about, they will bring on analysts covering all related (or unrelated) subjects (e.g., lawyers, criminologists, climate change scientists), often for several days following the story. This tends to leave people with the impression that CNN provides a lot of analysis and opinion (including lots of it that is valid and important) in addition to fact reporting. So a ranking somewhere along the analysis/opinion spectrum (a little above where I have it) seems appropriate.
    3. For yet others, the kind of coverage that leaves the biggest impression is the kind that includes interviews and panels of political commentators. The contributors and guests CNN has on for political commentary range widely in quality, from “voter who knows absolutely nothing about what he is talking about” to “extremely partisan, unreliable political surrogate” to “experienced expert who provides good insight.” People who pay attention to this kind of coverage note that CNN does a few crazy things.
      1. First, they have a chyron (the big banner on the bottom of the screen) that says “Breaking News:…” followed by something that is clearly not breaking news. For example: “Breaking: Debate starts in one hour.” Eye roll. This debate has been planned for months and is not breaking. Further, they have a chyron for almost everything, which seems unnecessary and sensationalist, but has been adopted by MSNBC, FOX, and others. Often, the chyron’s content is sensationalist.
      2. Second, in the supposed interest of being “balanced” and “showing both sides,” they often have extreme representatives from each side of the political spectrum debating each other. This practice airs and lends credibility to some extreme, highly disputed positions. Balance, I think, would be better represented by having guests with more moderate positions. Interviews with Kellyanne Conway, who often says things that are untrue or misleading and makes highly disputed opinion statements, are something else. Even though the hosts challenge her, it often appears that the whole point of having her as a guest is to showcase how incredulous the anchors are at her statements. This seems to fall outside the purpose of news reporting. What’s worse, though (to me, anyway), is that they will hire partisan representatives as actual contributors and commentators, which gives them even more credibility as sources one should listen to about the news, even though they have a clear partisan, non-news agenda. They hired Jeffrey Lord, who routinely made the most outlandish statements in support of Trump, and Trump’s ACTUAL former campaign manager, Corey Lewandowski. That was mind-boggling in terms of lack of journalistic precedent (and ethics), and it seemed to be done for sensationalism and ratings rather than for the purpose of news reporting, which is to deliver facts. Those hires were a big investment in providing opinion. I think it was extremely indicative of CNN’s reputation for political sensationalism when The Hill ran two headlines within a few weeks of each other saying something like “CNN confirms it will not be hiring Sean Spicer as a contributor” and “CNN confirms it will not be hiring Anthony Scaramucci as a contributor” shortly after each of their firings.
      3. Third, their coverage is heavily focused on American political drama. I’ll elaborate on this in a moment.

Personally, the topics discussed in the last point above (the interviews and panels of political commentators) left the biggest impression on me. That is why I have CNN ranked on the line between “opinion, fair persuasion” and “selective or incomplete story, unfair persuasion.” The impact of the guests and contributors who present unfair and misleading statements and arguments really drives down CNN’s ranking in my view. I have them slightly to the left of center, though, because they tend to have a higher quantity of guests with left-leaning positions.

 

I have just laid out that my ranking is driven in large part by a subjective measure rather than an objective, quantitative one. An objective, quantitative approach would take all the shows, stories, segments, and guests, analyze all the statements made, and say, on a percentage basis, how many of those statements were facts, opinions, analysis, fair or unfair, misleading, untrue, etc. I have not done this analysis, but I would guess that a large majority of the statements made in a 24-hour period on CNN would fall into reputable categories (fair, factual, impartial). Perhaps even 80% or more would fall into those categories. So one could reasonably argue that CNN deserves to be higher; say, 80% of the way up (or whatever the actual number is), if that is how you wanted to rank it.

However, I argue for the inclusion of a subjective assessment that comes from the question “what impression does this source leave?” Related questions are “what do people rely on this source for,” “what do they watch it for,” and “what is the impact on other media?” I submit that the opinion and analysis panels and interviews, with their often-unreliable guests, leave the biggest impression and make up a large portion of what people rely on and watch CNN for. I also submit that these segments make the biggest impact on the rest of media and society. For example, other news outlets will run news stories the content of which is “Here’s the latest crazy thing Kellyanne said on CNN.” These stories make a significant number of impressions on social media, thereby amplifying what these guests say.

I also include a subjective measure that pushes it into the “selective or incomplete story” category, which comes from trying to look at what’s not there; what’s missing. In the case of CNN, given their resources as a 24-hour news network, I feel like a lot is missing. They focus on American political drama and the latest domestic disaster at the expense of everything else. With those resources and time, they could inform Americans about the famine in South Sudan, the war in Yemen, and the refugees fleeing Myanmar, along with so many other important stories around the world. They could do a lot more storytelling about how current legislation and policies impact the lives of people here and around the world. Their focus on White House palace intrigue inaccurately, and subliminally, conveys that those are the most important stories, and that, I admit, just makes me mad.

Many reasonable arguments can be made for the placement of CNN as a whole, but a far more accurate way to rank the news on CNN is to rank an individual show or story. People can arrive at a consensus ranking much more easily when doing that. I will be doing that on future graphs for individual news outlets (I know you can’t wait for a whole graph just on CNN, and neither can I!).

 


The Chart, Version 3.0: What, Exactly, Are We Reading?

Note: this is actually version 3.1 of The Chart. I made some minor changes from version 3.0, explained here: http://www.allgeneralizationsarefalse.com/chart-3-1-minor-updates-based-constructive-feedback/

Summary: What’s new in this chart:

  • I edited the categories on the vertical axis to more accurately describe the contents of the news sources ranked therein (long discussion below).
  • I stuffed as many sources (from both version 1.0 and 2.0, plus some new ones) on here as I could, in response to all the “what about ______ source” questions I got. Now the logos are pretty tiny. If you have a request for a ranking of a particular source, let me know in the comments.
  • I changed the subheading under “Hyper-Partisan” from “questionable journalistic value” to “expressly promotes views.” This is because “hyper-partisan” does not always mean that the facts reported in the stories are necessarily “questionable.” Some analysis sources in these columns do good fact-finding in support of their expressly partisan stances. I didn’t want anyone to think those sources were necessarily “bad” just because they are hyper-partisan (though they could be “bad” for other reasons).
  • I added a key that indicates what the circles and ellipses mean. They mean that a source within a particular circle or ellipse can often have stories that fall within that circle/ellipse’s range. This is, of course, not true for all sources.
  • Green/Yellow/Orange/Red Key. Within each square: Green is news, yellow is fair interpretations of the news, orange is unfair interpretations of the news, and red is nonsense damaging to public discourse.

Just read this one more thing: It’s best to think of the position of a source as a weighted average position of the stories within that source. That is, I rank some sources in a particular spot because most of their stories fall in that spot. However, I weight the ranking downward if a source has a significant number of stories (even if they are a minority) that fall in the orange or red areas. For example, if Daily Kos has 75% of its stories fall under yellow (e.g., “analysis” and “opinion, fair”), but 25% fall under orange (selective, unfair, hyper-partisan), it is rated overall in the orange. I rank them like this because, in my view, orange and red-type content is damaging to the overall media landscape, and if a significant enough number of stories fall in those categories, readers should rely on the source less. This is a subjective judgment on my part, but I think it is defensible.
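Here is roughly how I think about that weighted-average-with-a-penalty rule in code. The 20% cutoff for a “significant number” of orange or red stories is an illustrative number I picked for this sketch; the real call is still a judgment.

```python
# Color bands from best to worst; the index doubles as a crude rank.
BANDS = ["green", "yellow", "orange", "red"]

def overall_band(story_bands, penalty_threshold=0.20):
    """Start at the most common band, but if a significant minority of stories
    (here, 20% or more) fall in orange or red, drop the source into the worst
    band that crosses that threshold."""
    total = len(story_bands)
    shares = {band: story_bands.count(band) / total for band in BANDS}
    placement = max(shares, key=shares.get)       # most common band
    for band in ("red", "orange"):                # check worst band first
        if shares[band] >= penalty_threshold:
            if BANDS.index(band) > BANDS.index(placement):
                placement = band
            break
    return placement

# The Daily Kos example above: 75% yellow, 25% orange -> rated "orange"
print(overall_band(["yellow"] * 75 + ["orange"] * 25))
```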

OK, you can go now unless you just really love reading about this media analysis stuff. News nerds, proceed for more discussion about ranking the news.

As I discussed in my post entitled “The Chart, Second Edition: What Makes a News Source Good?” the most accurate and helpful way to analyze a news source is to analyze its individual stories, and the most accurate way to analyze an individual story is to analyze its individual sentences. I recently started a blog series where I rank individual stories on this chart and provide a written analysis that scores the article itself on a sentence-by-sentence basis, and separately scores the title, graphics, lede, and other visual elements. See a couple of examples here. Categorizing and ranking the news is hard to do because there are so very many factors. But I’m convinced that the most accurate way to analyze and categorize news is to look as closely at it as possible, and measure everything about it that is measurable. I think we can improve our media landscape by doing this and coming up with novel and accurate ways to rank and score the news, and then teaching others how to do the same. If you like how I analyze articles in my blog series, and have a request for a particular article, let me know in the comments. I’m interested in talking about individual articles, and what makes them good and bad, with you.

As I’ve been analyzing articles on an element-by-element, sentence-by-sentence basis, it became apparent to me that individual elements and sentences can be ranked or categorized in several ways, and that my chart needed some revisions for accuracy.

So far I have settled on at least three different dimensions, or metrics, upon which an individual sentence can be ranked. These are 1) the Veracity metric, 2) the Expression metric, and 3) the Fairness metric.

The primary way statements are currently evaluated in the news is on the basis of truthfulness, which is arguably the most important ranking metric. Several existing fact-checking sites, such as Politifact and the Washington Post Fact Checker, use a scale to rate the veracity of statements; Politifact has six levels and the Washington Post Fact Checker has four, reflecting that many statements are not entirely true or false. I score each sentence on a similar “Veracity” metric, as follows:

  • True and Complete
  • Mostly True/ True but Incomplete
  • Mixed True and False
  • Mostly False or Misleading
  • False

Since there are many reputable organizations that do this type of fact-checking work, according to well-established industry standards, (see, e.g., Poynter International Fact Checking Network), I do not replicate this work myself but rather rely on these sources for fact checking.

It is valid and important to rate articles and statements for truthfulness. But it is apparent that sentences can vary in quality in other ways. One way, which I discussed in my previous post (The Chart, Second Edition: What Makes a News Source “Good?”), is on what I call an “Expression” scale of fact-to-opinion. The Expression scale I use goes like this:

  • (Presented as) Fact
  • (Presented as) Fact/Analysis (or persuasively-worded fact)
  • (Presented as) Analysis (well-supported by fact, reasonable)
  • (Presented as) Analysis/Opinion (somewhat supported by fact)
  • (Presented as) Opinion (unsupported by facts or by highly disputed facts)

In ranking stories and sentences, I believe it is important to distinguish between fact, analysis, and opinion, and to value fact-reporting as more essential to news than either analysis or opinion. Opinion isn’t necessarily bad, but it’s important to distinguish that it is not news, which is why I rank it lower on the chart than analysis or fact reporting.

Note that the ranking here includes whether something is “presented as” fact, analysis, etc. This Expression scale focuses on the syntax and intent of the sentence, but not necessarily the absolute veracity. For example, a sentence could be presented as a fact but may be completely false or completely true. It wouldn’t be accurate to characterize a false statement, presented as fact, as an “opinion.” A sentence presented as opinion is one that provides a strong conclusion, but can’t truly be verified or debunked, because it is a conclusion based on too many individual things. I’ll write more on this metric separately, but for now, I submit that it is an important one because it is a second dimension of ranking that can be applied consistently to any sentence. Also, I submit that a false or misleading statement that is presented as a fact is more damaging to a sentence’s credibility than a false or misleading statement presented as mere opinion.

The need for another metric became apparent when asking the question “what is this sentence for?” of each and every sentence. Sometimes, a sentence that is completely true and presented as fact can strike a reader as biased for some reason. There are several ways in which a sentence can be “biased,” even if true. For example, sentences that are not relevant to the current story, or not timely, or that provide a quote out of context, can strike a reader as unfair because they appear to be inserted merely for the purpose of persuasion. It is true that readers can be persuaded by any kind of fact or opinion, but it seems “fair” to use certain facts and opinions to persuade while unfair to use other kinds.

I submit that the following characteristics of sentences can make them seem unfair:

-Not relevant to present story

-Not timely

-Ad hominem (personal) attacks

-Name-calling

-Other character attacks

-Quotes inserted to prove the truth of what the speaker is saying

-Sentences including persuasive facts but which omit facts that would tend to prove the opposite point

-Emotionally-charged adjectives

-Any fact, analysis, or opinion statement that is based on false, misleading, or highly disputed premises

This is not an exhaustive list of what makes a sentence unfair, and I suspect that the more articles I analyze, the more accurate and comprehensive I can make this list over time. I welcome feedback on what other characteristics make a sentence unfair, and I’ll write more on this metric in the future. Admittedly, many of these factors have a subjective component. Some of the standards I used to make a call on whether a sentence was “fair” or “unfair” are the same ones in the Federal Rules of Evidence (i.e., the ones that judges use to rule on objections in court). These rules define complex concepts such as relevance and permissible character evidence, and they determine what is fair for a jury to consider in court. I have a sense that a similar set of comprehensive rules could be developed for journalistic fairness. For now, these initial identifiers of unfairness helped me detect the presence of unfair sentences in articles. I now use a “Fairness” metric in addition to the Veracity scale and the Expression scale. This metric has only two measures, and therefore requires a call to be made between:

  • Fair
  • Unfair

By identifying a percentage of sentences that were unfair, I was able to gain an additional perspective on what an overall article was doing, which helped me create some more accurate descriptions of types of articles on the vertical quality axis. In my previous chart (second edition), the fact-to-opinion metric was the primary basis for the vertical ranking descriptions, so it looked like this:

In using all three metrics, 1) the Veracity scale, 2) the fact-to-opinion Expression scale, and 3) the Fairness scale, I came up with what I believe are more accurate descriptions of article types, which look like this:

As shown, the top three categories are the same, but the lower ranked categories are more specifically described than in the previous version. The new categories are “Opinion; Fair Persuasion,” “Selective or Incomplete Story; Unfair Persuasion,” “Propaganda/Contains Misleading Facts,” and “Contains Inaccurate/ Fabricated Info.” If you look at the news sources that fall into these categories, I think you’ll find that these descriptions more accurately describe many of the stories within the sources.
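For those keeping track of the moving parts, here is a compact sketch of how the three sentence-level metrics can be recorded together and rolled up into article-level percentages. The enum values mirror the scales above; the summary function is just the kind of percentage tally I describe, not a finished scoring formula.

```python
from dataclasses import dataclass
from enum import Enum

class Veracity(Enum):
    TRUE_COMPLETE = 1
    MOSTLY_TRUE = 2
    MIXED = 3
    MOSTLY_FALSE = 4
    FALSE = 5

class Expression(Enum):
    FACT = 1
    FACT_ANALYSIS = 2
    ANALYSIS = 3
    ANALYSIS_OPINION = 4
    OPINION = 5

class Fairness(Enum):
    FAIR = 1
    UNFAIR = 2

@dataclass
class SentenceScore:
    veracity: Veracity
    expression: Expression
    fairness: Fairness

def article_summary(sentences):
    """Percentage roll-up of the three metrics across an article's sentences."""
    n = len(sentences)
    pct = lambda test: round(100 * sum(test(s) for s in sentences) / n)
    return {
        "% presented as fact": pct(lambda s: s.expression == Expression.FACT),
        "% unfair": pct(lambda s: s.fairness == Fairness.UNFAIR),
        "% false or misleading": pct(lambda s: s.veracity in
                                     (Veracity.MOSTLY_FALSE, Veracity.FALSE)),
    }

# Tiny example: three scored sentences from a hypothetical article
scores = [
    SentenceScore(Veracity.TRUE_COMPLETE, Expression.FACT, Fairness.FAIR),
    SentenceScore(Veracity.TRUE_COMPLETE, Expression.ANALYSIS, Fairness.FAIR),
    SentenceScore(Veracity.MOSTLY_FALSE, Expression.FACT, Fairness.UNFAIR),
]
print(article_summary(scores))  # {'% presented as fact': 67, '% unfair': 33, '% false or misleading': 33}
```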

Thanks for reading about my media categorizing endeavors. I believe it is possible (though difficult) to categorize the news, and that doing so accurately is a worthy endeavor. In future posts and chart editions I’ll dive into other metrics I’ve been using and refining, such as those pertaining to partisanship, topic focus (e.g., story selection bias), and news source ownership.

If you would like a blank version for education purposes, here you go:

Third Edition Blank

And here is a lower-resolution version for download on mobile devices:


Not “Fake News,” But Still Awful for Other Reasons:  Analysis of Two Examples from The Echo Chambers This Week

The term “fake news” is problematic for a number of reasons, one of which is that it is widely used to mean anything from “outright hoax” to “some information I do not like.” Therefore, I refrain from using the term to describe media sources at all.

Besides that, I refrain from discussing the term because I submit that the biggest problem in our current media landscape is not “hoax” stories that could legitimately be called “fake news.” What is far more damaging to our civic discourse are articles and stories that are mostly, or even completely, based on the truth, but which are of poor quality for other reasons.

The ways in which articles can be awful are many. Further, not all awful articles are awful in the same way. For these reasons, it is difficult to point out to most casual news readers how an article that is 90% true, or even 100% true, is biased, unfair, or deviant from respectable journalistic practices.

This post is the first in a series I plan to do in which I visually rank one or more recent articles on my chart and provide an in-depth analysis of why each particular article is ranked in that spot.  My analysis includes discussions of the headlines, graphics, other visual elements, and the article itself. I analyze each element and each sentence by asking “what is this element/sentence doing?”

This week, I break down one article from the right (from the Daily Wire, entitled “TRUMP WAS RIGHT: Gold Star Widow Releases Trump’s Call After Husband Was Killed in Afghanistan”) and one from the left (from Pink News, entitled “Bill O’Reilly caught in $32 million Fox News gay adult films scandal”).

 

  • From the Left: Article Ranking and Analysis of:

http://www.pinknews.co.uk/2017/10/24/bill-oreilly-caught-in-32-million-fox-news-gay-adult-films-scandal/

Source: Pink News

Author: Benjamin Butterworth

Date: October 24, 2017

Total Word Count: 706

  I. Title: Bill O’Reilly caught in $32 million Fox News gay adult films scandal

Title Issues:

Misleading about underlying facts

There is no current, known scandal involving Fox News and gay adult films. Bill O’Reilly settled a $32 million sexual harassment lawsuit while employed by Fox, and one of the allegations was that he sent a woman gay porn. However, the title suggests some sort of major financial involvement of Fox News in particular gay porn films. No mention of lawsuit settlement in title.

                        Misleading about content of article

The article is actually about the sexual harassment settlement, with one mention of the allegation of sending gay porn, the actions of Fox News in relation to O’Reilly’s employment after the settlement, and a listing of O’Reilly’s past anti-gay statements.

                        Misleading content is sensationalist/clickbait

  II. Graphics: Lead image linked to social media postings is this:

 

          Graphics Issues:

                        Misleading regarding content of article:

The image is half a gay porn scene and half Bill O’Reilly, which would lead a reader to expect that the topic of gay porn makes up a significant portion of the article—perhaps up to half.

                        Misleading content is sensationalist/clickbait

The image is salacious and relies on people’s interest in what they perceive as the sexual misbehavior and/or hypocrisy of others.

                        Image is a stock photo not related to a particular fact in the article

  III. Other Elements (Lead Quote): Anti-gay former Fox News host Bill O’Reilly is caught up in a $32 million gay porn lawsuit.

Element Issues:

Inaccurate regarding underlying facts

The $32 million lawsuit cannot be accurately characterized as being “about” gay porn. It is most accurately characterized as a sexual harassment (or related tort) lawsuit.

                        Inaccurate in relation to facts stated in article

The article itself states: “Now the New York Times (NYT) has claimed that, in January, O’Reilly agreed to pay $32 million to settle a sexual harassment lawsuit filed against him.”

Adjective describing subject of article selected for partisan effect

“Anti-gay” is used to describe Bill O’Reilly in order to make a point, to the site’s pro-LGBT audience, that O’Reilly is especially despicable beyond the transgressions that are the subject of the lawsuit being reported on.

  IV. Article:

            Genres:

  1. Embellished Reporting (i.e., reporting the timely story, plus other stuff)

Reports the current sexual harassment settlement story, relevant related timeline of events, plus extraneous information about how O’Reilly is anti-gay

  2. Promotion of Idea

                                    Idea that Bill O’Reilly is a bad person particularly because he is anti-gay

Sentence Breakdown:

                        706 total words, 28 sentences/quotes

Factual Accuracy:

% Inaccurate sentences: 0 out of 28 sentences (0%) inaccurate

% Misleading Sentences: 0 out of 28 sentences (0%) are misleading

 

                        Sentence Type by Fact, Analysis, and Opinion:

% Fact/ Quoted Statements: 24/28 (86%)

% Fact/Quoted Statements with adjectives: 2/28 (7%)

% Analysis Statements: 1/28 (3.5%)

% Analysis/Opinion Statements: 1/28 (3.5%)

% Opinion Statements: 0

 

Sentence Type by Fair/Unfair Influence:

                        % Fair: 20/28 (71%)

Sentences 1-7, and 9-20 rated as “fair” because they are factual, relevant to the current story, and timely.

% Unfair: 8/28 (29%)

Sentences 8, 20-28 rated as “unfair” because they are untimely, unrelated to title, and used for idea promotion

Overall Article Quality Rating: Selective Story; Unfair Influence

Main reasons:

-29% of sentences included for unfair purpose

-Anything with over 10% unfair-influence sentences can be fairly rated in this category

-Title, Graphics, Lead element all extremely misleading

Overall Partisan Bias Rating: HYPER-PARTISAN (Liberal)

Main reasons:

  • Focus on pro-LGBT message even though underlying story is very loosely related to LGBT issues

 

  • From the Right: Article Ranking and Analysis of:

http://www.dailywire.com/news/22540/trump-was-right-gold-star-widow-releases-trumps-ryan-saavedra

Source: The Daily Wire

Author: Ryan Saavedra

Date: October 20, 2017

Total Word Count: 257

 

 

  I. Title: TRUMP WAS RIGHT: Gold Star Widow Releases Trump’s Call After Husband Was Killed in Afghanistan

Title Issues:

Contains all caps statement of “TRUMP WAS RIGHT”

-Capitalization is sensationalist

Contains conclusory opinion statement of “TRUMP WAS RIGHT”

Directly appeals to confirmation bias with “TRUMP WAS RIGHT”

People likely to believe Trump is right in general are the most likely to click on, read, and/or share this, and are most likely to believe the contents of the article at face value

Misleading regarding the context of current events: the title says “Gold Star Widow Releases Trump’s Call After Husband Was Killed in Afghanistan” (see explanation after the next issue)

Omitting relevant context of current events occurring between approximately Oct 16 and Oct 20, 2017, the four days preceding this article’s publication

-In the context of a controversy over a disputed phone call between Trump and a different black Gold Star widow than the one this article is about, in which the existence of a recording of the call was also disputed, the omission of the fact that this is a different black Gold Star widow who received a call from Trump is misleading. It is misleading because it is likely to confuse readers who are unfamiliar with specific facts of the current controversy, such as 1) the names of the widow and soldier, Myeshia Johnson and Sgt. La David Johnson, 2) what they look like, and 3) where he was killed.

 

  II. Graphic Elements: An accurate photo of the widow who is the subject of the story (Natasha DeAlencar) and her fallen soldier husband (Staff Sgt. Mark DeAlencar)

Graphics Issue:

Accurate photo juxtaposed with other problematic elements

-Though the photo is accurate, its position next to the title may lead readers who are uninformed about the underlying facts to believe that this call is the one at issue in the current controversy between Myeshia Johnson and President Trump

  III. Other Elements (Lead Quote): “Say hello to your children, and tell them your father, he was a great hero that I respected.”

Element Issue:

Accurate quote juxtaposed with other problematic elements

-Similar to the photo, though the quote is accurate, its position next to the title and photo may lead readers who are uninformed about the underlying facts to believe that this particular quote came from the controversial call between Myeshia Johnson and President Trump

  IV. Article:

Genres:

-Storytelling

Here, the story of one widow’s experience

-Promotion of ideas

Here, the promotion of the idea that Trump is respectful and kind, and of the idea that the media is deceitful

Sentence Breakdown:

257 words; 12 sentences

Factual Accuracy:

% Inaccurate sentences: 1 out of 12 sentences (8%) inaccurate

Quote from article: “In response to a claim by a Florida congresswoman this week claiming that President Donald Trump is disrespectful to the loved ones of fallen American soldiers, an African-American Gold Star widow released a video of a phone conversation she had with the President in April about the death of her husband who was killed in Afghanistan.”

  • The widow did not release the video “in response to a claim by a Florida congresswoman.” She released it in response to inquiries from reporters in the wake of the controversy between Myeshia Johnson and Trump[1]

 

  • The congresswoman, Frederica Wilson, did not say generally that Trump is disrespectful to the loved ones of fallen American soldiers. She said to a local Miami news station, about Trump’s particular comments to Myeshia Johnson, “Yeah, he said that. So insensitive. He should not have said that.”[2] All recent instances of her talking about the President’s conduct are in the context of this incident.[3]

% Misleading Sentences: 1 out of 12 sentences (8%) are misleading

Quote from article: “The video comes a day after White House Chief of Staff John Kelly gave an emotional speech during the White House press briefing on how disgusting it was that the media would intentionally distort the words of the President to attack him over the death of a fallen American hero.”

 

This quote makes it sound like the media took the words from this call in the video and distorted them to attack the President. The words that are the subject of the controversy in the current Johnson call are not quoted in this article at all. This sentence uses a strong adjective—“disgusting”—to describe an action, and the context of this sentence may lead readers to think the “disgusting” action was the media taking these kind words and reporting different, false, insensitive words.

Sentence Type by Fact, Analysis, and Opinion:

% Fact/ Quoted Statements: 9/12 (75%)

% Fact/Quoted Statements with adjectives: 3/12 (25%)

% Analysis Statements: 0

% Analysis/Opinion Statements: 0

% Opinion Statements: 0

 

Sentence Type by Fair/Unfair Influence:

% Fair: 10/12 (83%)

Sentences 2-11 rated as “fair” because they are factual and relevant to the underlying story

% Unfair: 2/12 (17%)

Sentence 1 rated as “unfair” because inaccurate statements are generally used unfairly for persuasion

Sentence 12 rated as “unfair” because misleading statements are generally used unfairly for persuasion

Overall Article Quality Rating: Propaganda/Contains Misleading Facts

Main reasons:

-Anything over 0% inaccurate automatically rated at least this low

-Anything over 2% misleading automatically rated at least this low

-Title, Graphics, Lead element all misleading

Overall Partisan Bias Rating: HYPER-PARTISAN

Main reasons:

  • Opinion statement in title
  • Misleading and inaccurate statements used for purpose of promoting partisan ideas
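To make the automatic “at least this low” rules from these two analyses explicit, here is how the quality floors could be written down. The category names are abbreviated from the chart, and the cutoffs (any inaccuracy, over 2% misleading, over 10% unfair influence) are the ones noted above; treating them as hard rules in code is a simplification of what is still partly a judgment call.

```python
# Vertical quality categories, best to worst (abbreviated from the chart).
QUALITY = [
    "Fact Reporting",
    "Analysis",
    "Opinion; Fair Persuasion",
    "Selective or Incomplete Story; Unfair Persuasion",
    "Propaganda / Contains Misleading Facts",
    "Contains Inaccurate / Fabricated Info",
]

def quality_floor(pct_inaccurate, pct_misleading, pct_unfair):
    """Return the best rating an article can still receive after applying the
    automatic floors: over 10% unfair-influence sentences -> at least
    Selective/Unfair; any inaccurate sentences, or over 2% misleading
    sentences -> at least Propaganda/Misleading."""
    floor = 0  # index into QUALITY; a larger index means lower quality
    if pct_unfair > 10:
        floor = max(floor, QUALITY.index("Selective or Incomplete Story; Unfair Persuasion"))
    if pct_inaccurate > 0 or pct_misleading > 2:
        floor = max(floor, QUALITY.index("Propaganda / Contains Misleading Facts"))
    return QUALITY[floor]

# Pink News example: 0% inaccurate, 0% misleading, 29% unfair
print(quality_floor(0, 0, 29))   # Selective or Incomplete Story; Unfair Persuasion
# Daily Wire example: 8% inaccurate, 8% misleading, 17% unfair
print(quality_floor(8, 8, 17))   # Propaganda / Contains Misleading Facts
```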

 

[1] https://www.washingtonpost.com/news/checkpoint/wp/2017/10/19/listen-soldiers-widow-shares-her-call-with-trump/?utm_term=.f615c80f2bd7

[2] https://www.local10.com/news/politics/trump-speaks-to-widow-of-sgt-la-david-johnson

[3] The author of this analysis is unaware of any general statement by Rep. Wilson that “Trump is disrespectful to the loved ones of fallen American soldiers,” but will revise this analysis if such quotes are brought to the author’s attention.


Top Six Red Flags that Identify a Conspiracy Theory Article

It can be tough to see your Facebook friends sharing conspiracy theory stories, and tough to respond to them effectively. Pointing it out and saying “that’s a conspiracy theory” doesn’t seem to be effective. But there are certain writing patterns and tropes that are common within such articles that make them compelling to some people. Sometimes, just pointing out patterns and tropes helps people see them for what they are.

Posted on

The Chart, Version 2.0: What Makes A News Source “Good?”

In my original news chart, I wrestled with the question of what makes news sources "good" and came up with some categories that generally resonated with people. I ranked sources on a vertical axis, with those at the top ranked as "high quality" and those at the bottom as "low quality." I characterized the sources, from top to bottom, in this order: Complex, Analytical, Meets High Standards, Basic, and Sensational/Clickbait. This mostly works, because it results in sources regarded as high-brow or classy (e.g., The Atlantic, The Economist) being ranked high on the axis and trashy sources (e.g., Addicting Info, Conservative Tribune) being ranked low, and most sophisticated news consumers agree with that. However, the vertical placements ended up causing me and others some consternation, because some of the placements relative to other outlets didn't make sense. The most common questions I got were along these lines:

“Does FOX News really “meet high standards,” on par with something like the New York Times?” (I think no.)

"Is USA Today really that bad?" (I think no.)

“Is Slate really “better” or “higher quality” than, say, AP or Reuters just because it is analytical?” (I think no.)

“Is CNN really that bad?” (I think yes.)

These questions and my instinctive responses to them made me want to reevaluate what makes news sources high or low quality.

I believe the answer to that question lies in what makes an individual article (or show/story/broadcast) high or low quality. Article quality can vary greatly even within the same news source. One should be able to rank an individual article on the chart in the same way one ranks a whole news source. So what makes an article or story high or low quality? It's hard to completely eliminate one's own bias on that question, but one way to try to answer it consistently is to categorize and rate the actual sentences and words that make up the headline and the article itself. In order to rank any article on the chart in a consistent, objective-as-possible manner, I started doing sentence-by-sentence analyses of different types of articles.

In analyzing what kinds of sentences make up articles, it became apparent that most sentences fall into (or in between) the categories of 1) fact, 2) analysis, or 3) opinion. Based on the percentages of these kinds of sentences in an article, articles themselves can be classified into categories of fact, analysis, and opinion as well. Helpfully, some print newspapers actually label articles as "analysis" or "opinion." However, most news sources, especially on TV or the internet, do not. I set about analyzing stories that were not pre-labeled as "analysis" or "opinion" on a sentence-by-sentence basis. I discovered that my overall impression of the quality of an article was largely a function of the proportion of fact sentences to analysis sentences to opinion sentences. As a result, I classified stories into "fact-reporting," "analysis," and "opinion" stories. Ones with high proportions of "fact" sentences (e.g., 90%+ fact statements) were what I refer to here as traditional "fact-reporting" news pieces. These are the kinds of stories that have historically been the basis of late-20th- to early-21st-century journalism, and what people used to refer to exclusively as "news." They are the "who," "what," "when," and "where" pieces (not necessarily the "why"). I classified ones with high proportions of "analysis" sentences (e.g., 30%-50% analytical statements) as "analysis" stories, which are the types of stories commonly found in publications like The Economist or on websites such as Vox. I classified stories with high proportions of opinion sentences (e.g., 30%-50% opinion statements) as "opinion" stories, which are typically the types of stories found on websites such as Breitbart or Occupy Democrats.
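
To make those proportions concrete, here is a minimal sketch, under the simplifying assumption that every sentence has already been hand-labeled as fact, analysis, or opinion. The thresholds mirror the rough ranges above (90%+ fact, 30%-50% analysis, 30%-50% opinion), and classify_story is a hypothetical helper rather than anything actually used to build the chart.

```python
# A rough sketch of the sentence-proportion idea described above, not a
# finished classifier. Thresholds mirror the rough ranges in the text.

def classify_story(sentence_types):
    """Classify a story from a list of per-sentence labels.

    sentence_types: list of strings, each "fact", "analysis", or "opinion",
    assigned by a human reader going sentence by sentence.
    """
    total = len(sentence_types)
    pct = {label: sentence_types.count(label) / total
           for label in ("fact", "analysis", "opinion")}

    if pct["fact"] >= 0.9:
        return "fact-reporting"
    if pct["opinion"] >= 0.3:
        return "opinion"
    if pct["analysis"] >= 0.3:
        return "analysis"
    return "mixed"  # falls between the rough ranges above

# Example: a 20-sentence story with 19 fact sentences and 1 analysis sentence.
example = ["fact"] * 19 + ["analysis"]
print(classify_story(example))  # -> fact-reporting
```

In practice the hard part is the hand-labeling itself; the arithmetic afterward is trivial.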

(If you’ve made it this far, bless your heart for caring so much about the news you read.)

In the past, national evening news programs, local evening news programs, and the front pages of print newspapers were dominated by fact-reporting stories. Now, however, many sources people consider to be "news sources" are actually dominated by analysis and opinion pieces. This chart ranks media outlets that people consider to be, at some level, "news sources," even though many of them consist entirely of analysis and opinion pieces.

In my previous version of the chart, I had regarded analysis pieces as “higher quality” than the fact-reporting pieces because they took the facts and applied them to form well-supported conclusions. I like analytical writing, which is essentially critical thinking. However, analysis has a lot in common with opinion, and writing that is intended to be analytical often strays into opinion territory. (Note—I’m defining “analysis” as conclusions well supported by facts and “opinion” as conclusions poorly supported or unsupported by facts). Fact-reporting articles—true “scoops”—typically have the intent of just reporting the facts and typically have a very high percentage (e.g., 90%+) of fact-statement sentences, whereas both analysis and opinion articles have the intent of persuading an audience and often have a comparatively high percentage of analysis and opinion statement sentences (30%-50%). So, although I initially had the quality axis of “news” laid out top to bottom as:

analysis at the top, fact reporting in the middle, and opinion at the bottom, that ranking is more reflective of the quality of writing than of the quality of news sources. Good analysis is often written persuasively and well, fact reporting is often written plainly but well, and opinion writing is often (but not always) written poorly or is the most easily discredited. I submit that, given the confusion caused by the overwhelming number of organizations proclaiming to be (or which are commonly confused with) "news sources," it is more important to rank the quality of news sources than the quality of writing. I further submit, for reasons outlined below, that the percentage of fact-reporting articles and stories should be the most determinative factor in ranking a news source's quality on this chart.

Therefore, I believe a more relevant ranking of the quality of news sources, from top to bottom, is: fact reporting, then analysis, then opinion, with sensationalist and clickbait sources at the very bottom.

 

 

I assert that one of the biggest problems with our current news media landscape is that there is too much analysis and opinion available relative to factual reporting. New technologies have given more people more platforms to contribute analysis and opinion pieces, so many "news sources" have popped up to compete for readers' attention. Unfortunately, news consumers often do not recognize the difference between actual fact-reporting news and the analysis and opinion writing about that news. This increase in "news sources" has not corresponded with an increase in actual journalists or news reporting, though. Many local and national print news organizations have reduced their numbers of journalists, while many of the biggest ones have merely maintained similar numbers of journalists over the past 10 years or so.
Furthermore, primarily analytical news sources have several downsides of their own. One is that they can alienate news consumers by making what people consider "news sources" so complex or partisan that it becomes tiring to consume any "news." For example, CNN, MSNBC, and FOX News, which are primarily analysis- and opinion-driven, can make news consumers too weary to pay attention to fact-based reporting from, say, AP or Reuters. Another problem with analysis- and opinion-driven news sources is that it can be difficult for casual readers to differentiate between good analysis and pure opinion.
There are several good reasons why we should rank fact-reporting sentences, fact-reporting articles, and fact-reporting news sources high on the quality scale of news, at least on this chart. For one, reported facts take a lot of work to obtain. They require journalists on the ground investigating and interviewing. Once a story is reported, dozens, hundreds, or thousands of other writers can chime in with their analysis or opinions of it. This is not to say analysis and opinion writing isn't important. The critical thinking presented in analytical writing—especially good, complex analysis—is essential to public discourse. Our society's best ideas are advanced by analytical articles. This piece you are reading now is analytical. But analysis in the news wouldn't even exist without the underlying factual reporting.

For example, AP and Reuters have each maintained around 2,000-2,500 journalists over the past decade, while the New York Times and the Washington Post have fluctuated in the 500-1,000 range over the same period. The value of these organizations, with their large staffs of journalists, editors, and other newsroom employees, is hard to overstate; not only do they provide a majority of the fact-reporting stories everyone else relies on, but they have the capacity to provide high-quality editorial review that stands up to industry scrutiny. In contrast, even some of the most popular analysis and opinion sites can be run with just a few dozen writers and staff, and the number of these "news" websites, news aggregator websites, blogs, and podcasts has seemingly grown exponentially.

I believe improvements to our media landscape can be made if two things happen: 1) news consumers start valuing factual reporting much more and analysis/opinion articles much less, and 2) news consumers become accustomed to differentiating between articles in those categories. Regarding point #2, I think it would be helpful if we narrowed the definition of "news" to refer only to fact reporting, and referred to everything else as "analysis" or "opinion." It would also be helpful if people could recognize the relative contributions of fact-reporting news organizations versus analysis and opinion sources. If people recognize just how much of what they read and watch is intended to persuade them, they may become more conscious and thoughtful about how much they allow themselves to be persuaded. One can hope.

To contribute to those goals, I've reordered the chart to rank fact-reporting articles as the highest quality and everything else lower, even though there is some really excellent analysis out there. As a baseline, news consumers should understand when something is news (fact reporting) and when it is not. On the new chart, the sources with the best analysis but little reporting are near the top, just under the sources composed of high percentages of fact-reporting articles. The most opinion-driven sources are at the bottom. There's room for other things at the bottom, below pure opinion, which can include sources that are sensationalist, clickbait, frequently factually incorrect, or that otherwise don't meet recognized journalism standards.

On this version, I've included a number of different sources, mostly in the analysis and opinion categories, and kept the most popular mainstream sources from the original chart, though I have reordered some of them. Now the rankings are more consistent with my initial answers to the example questions at the beginning of this post. Fox News is now ranked far lower than the New York Times for two main reasons: one, Fox News is dominated by opinion and analysis, and two, it has gotten precipitously worse by other measures (sensational chyrons, loss of experienced journalists, hyperbolic analysis by contributors, etc.) within the last six months. USA Today, despite its basic nature, has been elevated because of its high percentage of fact-reporting stories. Slate, though it provides thoughtful, well-written analysis, is ranked lower than AP and Reuters, which better reflects their relative contributions to the news ecosystem. CNN still sucks, but it is clearer why now: CNN has the resources to provide twenty-four hours of news—it could provide Americans with a detailed global-to-local synopsis of the world—but instead it chooses to spend 5% of its time fact-reporting a handful of stories, mostly American political drama and maybe one violent leading world news story, and 95% on analysis and opinion ranging from the competent to the inane.

My analysis of news sources in the manner I've described here has revealed that individual stories can and should be ranked on the chart in the same manner, and that individual stories can be placed in different spots than the news sources in which they are published. I'll be putting out individual story rankings, and the reasoning behind them, from time to time for those who are interested. I'll also take requests for rankings of sources and individual stories in the comments and on Twitter. Thanks for reading and thinking.

*Update: a high-resolution PDF version is available here: Second Edition News Chart.V2

And a blank version is here: Second Edition Blank

 

Posted on

What is the difference between the statues of George Washington and Robert E. Lee?

The pro-Confederate-statue side asks this question, likely in earnest, and it is worth grappling with the distinction. Indeed, since slavery is evil and horrible, as liberals and conservatives alike generally agree, and both men owned slaves, why is it preferable to take down the Confederate statue and not the Washington statue?

This is not cut and dried, or "obvious" to everyone, and we shouldn't treat it as such. It is a difficult task to distinguish between two things that are alike in some ways and different in others, so let's look at the details and facts of these cases in order to distinguish them, as courts do.

It is a general rule that we put up statues of good people and not bad ones, but this in itself is a hard rule to follow because no one person is all good or all bad. It's a bit easier to distinguish with some people than with others. MLK=almost all good and Hitler=almost all bad is not hard. I think it is legitimately closer with both George Washington and Robert E. Lee. I think the argument comes down to GW=mostly good (despite slaves!) because he is most known and respected for 1) fighting in the Revolutionary War for American independence, which modern Americans view as a righteous cause, and 2) being our first President. The argument comes down to Lee=mostly bad (plus slaves!) because he is most known for fighting in the Civil War for the cause of keeping slaves, which most modern Americans view as a morally wrong cause.

The question of what they are most known for is an important one, because that is usually the same reason their statue was put up in the first place. When it comes to the question of whether to take a statue down, people tend to base their opinion on two things: 1) what it meant when it was put up in the first place and 2) what it means now, in the context of history. With GW, the statues were put up because of his role in the Revolution and as President. With Lee, they were put up during an era of brutal reinforcement of white supremacy (see comments for a link discussing this history), with a purpose of intimidating recently freed slaves. Today, in the context of history, GW's statues are widely seen as a reflection of his leadership and role as a founder, not his role as a slave owner. Most people don't go to a GW monument for the purpose of celebrating his slave ownership. Today, though, in the context of history, Lee's statues are commonly given two negative meanings: first, they serve as a reminder of white supremacy to black people, and second, they serve as a rallying point for actual white supremacists. Yes, to many people, they may mean a "commemoration of Southern history" too, but if a statue is 50% a brutal white-supremacist reminder and rallying point and 50% a commemoration of Southern history, that's enough to justify its removal. We have made a moral decision as a society that its (even partial) role as a white supremacy beacon is not acceptable, in response to a particular flash point of a white supremacist resurgence. We have not made a similar decision about the Washington statues, because there has been no recent flash point around those.

However, I can’t actually morally justify Washington owning slaves, and that practice is indeed so reprehensible that it is valid to argue that if slavery is that wrong, then we should take down the statues of any slave holder, no matter how “good” they were otherwise. Joe Paterno’s statue was taken down because his biggest moral failing—protecting a child predator—outweighed the other good he had done. Perhaps the removal of Washington (slave owner) and Jefferson (slave owner and likely slave rapist) is the morally correct thing to do. We would likely remove the statues of contemporary heroes (say, MLK or Wayne Gretzky) if we suddenly found out they were rapists or owned slaves.

But there is a distinguishing factor between how we judge the actions of contemporaries and how we judge those of historical figures, and that is the relative morality of a time in history compared to the present. Those who argue "slave owners weren't all bad people" are inherently taking this factor into account. Yes, we all view slavery as evil now, but when it was a somewhat normalized aspect of society, it is plausible and even likely that many slave owners tried to live what they thought were upstanding moral lives in many ways. They may even have had moral dilemmas about slavery but felt it was an intractable problem for them to solve, let alone forgo participating in. "Slave owners were not all bad people" (a typically conservative argument) is a very similar argument to "George Washington's statue should remain up because he did other good things, even though he owned slaves" (an argument liberals are currently making in relation to the Confederate statue issue). "George Washington was not all bad," essentially.

It seems that the right thing to do is to take down the Confederate statues because of the bad things the men they depict were best known for (explicitly fighting for slavery), plus the reasons the statues were put up, plus the pain they cause people now. But we must also admit that it would be logically consistent to remove statues of other slave owners, even our founding fathers, if some contemporary flash point were to bring the issue of how bad slavery really is to the forefront. Perhaps it is a moral failing of our current time that we have not come to this realization yet. Perhaps future generations will come to the consensus that the founding fathers' statues should be removed and hold it against our generations that we did not. Perhaps they will judge us harshly for tolerating other injustices, like unequal rights for women and queer people, for so long. Societal morals evolve over time. In the near term, though, it is likely that the "contemporary, widely held perception of the statues" factor and the "relative morality of the person's time" factor save the Washington and Jefferson statues now but not the Confederate statues. So down with the Confederate statues. And shame, at least, on the moral failings of those whose statues we leave in place.