
Why Patribotics is all the way left (even though its author is a self-described conservative) and National Enquirer only skews right (even though its owner squashed a damaging Trump story)

Among the most fascinating comments I get are the ones insisting that one of the sources I have ranked in the bottom left of my chart, the Patribotics blog, should actually be ranked to the right. More fascinating still are the comments defending the credibility of the site. This post explores why I have this source ranked where it is, and discusses similar reasons why National Enquirer is ranked where it is.

It all has to do with the fact that the rankings on this chart are based primarily on content analysis. Remember, “Ad Fontes” (the name of my company) means “to the source.” When ranking quality and bias, we look at the content of the story itself as the primary basis. We do this because we believe it is the most fair, transparent, and repeatable way to rank information sources, all of which are created by human beings with their own biases.

Some content on the extreme right and left, including the blog Patribotics, contains outright purposeful falsehoods, conspiracy theories, and hoaxes. Most left and right political topics are categorized as such on the chart because of their associations with the political parties. These conspiracy theories, however, cannot necessarily be tied to a particular political party, because they are usually dismissed out of hand as insane by most reasonable people of either party. I nonetheless place these stories on the extreme ends of the horizontal axis when they tend to be shared overwhelmingly by fringe members of either the right or the left. For example, InfoWars has run “stories” claiming that the shooting at Sandy Hook was a hoax, and while reasonable conservatives rightly dismiss that nonsense, those who pick up, share, and believe such hoaxes are overwhelmingly right-wing extremists. This nonsense extends from an underlying pro-gun, right-leaning ideology, but goes off the deep end in the rightward direction.

The blogger Louise Mensch, who writes the blog Patribotics, has done “stories” claiming that “the Supreme Court notified Mr. Trump that the formal process of a case of impeachment against him was begun,” which did not happen, and moreover, does not accurately reflect how impeachment actually works. While reasonable liberals dismiss this nonsense, those who pick up, share, and believe such hoaxes are overwhelmingly left-wing extremists. The nonsense itself extends from an underlying Trump-is-bad, left-leaning ideology, but goes off the deep end in the leftward direction.

In these cases, when a particular story is nonsense and its content cannot accurately be ascribed to a political party, the clue to look for in the content is the underlying, reasonable ideology. The audience that picks the story up and shares it is also an appropriate proxy for deciding whether it should be classified as right-wing or left-wing garbage. I submit that the partisanship of the author or publisher who writes or produces it is NOT, by itself, the most important proxy.

I believe it is important to categorize content as extreme left-wing or right-wing in order to distinguish which kinds of readers and parties are most damaged by it. One of the main reasons I created this chart is that I believe misinformation and disinformation harm certain individual readers by compromising their reason and logic and manipulating their emotions. I believe misinformation and disinformation also harm each of the political parties by harming their members who are most susceptible to it.

Again, the political affiliations of the authors/publishers themselves are not a good proxy for how extremely right- or left-wing a story is, because the motives of various authors or publishers of nonsense can vary widely. Nearly all are profiteers with high awareness that they are exploiting gullible extremists, but some do it to their own “side.” For example, some (like Alex Jones of InfoWars) are known to be extreme conservatives themselves and publish such content to gin up rage among other extreme conservatives, for the dual purposes of profit and advancing causes they truly promote. Others do it to the “other side” for the dual purposes of profit and sowing confusion and discord among the opposing political party. Mensch, for example, is believed to be very conservative politically, having been a Conservative member of the British Parliament in the past. This fact is often cited by her extreme left-wing followers as evidence of her credibility (e.g., “if a conservative person, who would ordinarily take all conservative positions, is saying things that I, a liberal person, would like to be true, then it must be true”). I’ll call this the “My Extreme Opposite Agrees With Me Extremely” trope. It is a logical fallacy which is, unfortunately, appealing to many people. The fact that Ms. Mensch is, herself, conservative means nothing in our content analysis ranking, because the content itself is left-wing biased. The fact that she is conservative but is publishing things that are popular with liberals does not make her falsehoods any more true.

Many people are deceived into falling for Mensch’s and similar authors’ garbage based on this specific trope. “My Extreme Opposite Agrees With Me Extremely” is actually quite a common trope in low-quality, highly biased media. An example is Fox News’ use of internet celebrities Diamond and Silk as political analysts. I’ll explore this more in a subsequent post, because there are many more examples. One should be highly skeptical of outlets that employ this trope. Note that this is different and highly distinguishable from “My Moderate Opposite Agrees With Me Moderately.” When historically moderate people on opposite sides of an issue find agreement, that is more often a sign that there is truth in that agreement (all generalizations are false).

Yet other authors and publishers are mere profiteers who do not have identifiable extremist positions themselves, but know that there are many people to be fooled on both sides. For example, one prolific, admitted “fake news” publisher, the late Paul Horner, was not an extremist himself, but made plenty of money putting out content that was shared by both right- and left-wing extremists. Therefore, I submit that the partisan position of an author or publisher should not be a primary consideration in characterizing extreme partisan stories. Rather, content itself, then the underlying ideology, then audience (as a proxy), should be taken into consideration first.

If you look at The National Enquirer’s content under the Ad Fontes content analysis model, then, a “skews right” designation makes sense. Yes, the owner of the outlet is a friend of Trump’s and reportedly paid to “catch and kill” a damaging story about a Trump affair, which is certainly a “pro-Trump” move. This incident, however, is more indicative of why The National Enquirer is in the garbage bin on the quality scale. It is there for other reasons too, such as the fact that it often does not adhere to journalism ethics standards on sourcing, which results in its publishing rumors. Though the outlet defends this practice because it sometimes results in reporting something true, in which event it can claim it “broke the story” (see, e.g., John Edwards’ affair), it broke the story in the same way a broken clock is right twice a day.

When looking at The National Enquirer’s content for political bias, it is hard to detect because most of its stories are not about political topics. One doesn’t turn to The National Enquirer to find out about immigration or business regulations. It primarily focuses on salacious or highly personal topics about its subjects. The “skews right” designation stems mostly from its propensity to run unflattering stories more often about Democrats, not from stances on policy positions. If you are upset that The National Enquirer isn’t further right, take solace in the fact that it is in the lowest-quality section.

In sum, don’t be influenced by who the author/publisher says they are. Look at what they SAY. In media ranking world, content should be king/queen.




Junk Food and Junk News: The Case for “Information Fitness”

We, as humans, have basic needs for several things. One of them is food. Another is information. We are always, by necessity and want, taking in both. It’s fair to say we even love both food and information. But we also, as humans, have a propensity for indulging in too much of a good thing to the point that we turn it into a bad thing. It’s easy for us to develop bad habits around any of our basic needs. This is especially true when indulging provides some kind of instant gratification but long-term damage.

I submit that generally, our American habits around food consumption are highly analogous to the habits we have around news and information consumption. Similarly, the resulting problems we have because of those habits are highly analogous. [1] We love junk food and we love junk news. And they are both wreaking havoc on our individual and collective physical and mental health, and having detrimental effects on our whole society.

It first occurred to me how analogous food consumption and information consumption habits are quite recently, as I have been learning about the few other burgeoning attempts to rate the news. Several of these endeavors say (as do I) that we are trying to create a “nutrition label” for what is in your news. That just makes sense. In many instances, we are simply unaware of what kind of content we are consuming in our news. Is it good, true, biased, opinion-based, analysis-based, or reliable? And how much of it is those things? Right now, there is no standard nutrition label that tells us what is in our news before we consume it, and it used to be that way for our food. I assert that we should at least have some idea of what we are getting into before putting it into our brains.

Availability and Proliferation

Our current media landscape is analogous to the American food landscape during the proliferation of fast-food restaurants and highly processed foods. Note that I am distinguishing between “food” generally and certain subsets of food—unhealthy fast food and processed food, which are characterized by poor nutritional content and low cost. Though food, and even certain types of fast food and processed food, were around before the 1950s and 1960s, fast-food restaurants exploded in popularity and availability during those years, and have continued to grow since. Advances in food science and technology have led to an abundance of available varieties of processed food in grocery stores. There are certainly benefits to fast food; it’s convenient, inexpensive, and tastes good, which allows people to spend less time and money feeding themselves and their families. There are certainly benefits to processed food too; it lasts a long time on shelves, is less prone to food-borne contaminants, is inexpensive, comes in all different kinds, and tastes good. However, since the 1950s, we have become acutely aware of the drawbacks of both unhealthy fast food and processed food; namely, that a lot of it has too much stuff in it that is just bad for you, like fat, sugar, and salt.

The last 20 years (when cable news began), the last 10 years (when smartphones became available), and the last 8 or so years (when social media really proliferated) have marked a similar explosion in the availability of all types of news and news-like information. Here, I am distinguishing between “news” and “news-like information,” the latter being characterized by high levels of opinion and analysis and low levels of editorial review. In the realm of “news-like information,” we now have choices of multiple 24-hour cable news channels, thousands of online news sites and blogs, thousands of YouTube channels, and the constant promotion of all of these to us through social media. There are certainly many benefits to this new era of information availability. Personally, I’m optimistic that more information availability to more people can and will lead to increased peace and prosperity in the world eventually, as it has time and time again throughout history. On a basic level, more people are able to know more things than ever before. However, we are now becoming acutely aware of the drawbacks of too much news-like information; namely, that a lot of it has too much stuff that is just bad for you, like misinformation and bias.

Bad Habits and Monetization

The reasons we are drawn to fast/processed (and generally “unhealthy”) food and opinionated and biased news-like (and generally “unhealthy”) information are similar.

We like fat, sugar, and salt because they taste good and because parts of our brains derive pleasure and reward from eating them. This is a feature, and not a bug, of how our brains work; we’re naturally drawn to eat good-tasting, high calorie food for sustenance and survival. We also consciously know that we need to eat these foods in moderation, and that we need to eat stuff that doesn’t taste as good, like vegetables, because we have this intelligence and capacity to learn this information from our own and others’ experiences.

We like opinionated and biased news-like information because being right feels good. Our brains are wired for confirmation bias, the tendency to be more open to receiving information that comports with what you already believe. This is, again, a feature, not a bug, of how our brains work; it makes it easier for us to make sense of the world around us. We also consciously know that we should regularly seek out new information, including information that challenges our existing beliefs.

However, it’s easy to over-consume unhealthy food and unhealthy news in part because each provides instant gratification, and the drawbacks are not immediately evident. The drawbacks, if any, come from long-term, sustained unhealthy consumption, not from one-time, or infrequent unhealthy consumption.

It’s even easier because those who produce food and information are well aware of our desires and are monetarily incentivized to exploit them.

Food companies, of course, make more money when people buy more food, especially when that food is cheap to make. Unfortunately, it is easy to make food that is cheap, delicious, and terrible for you. Even worse, making it that way often increases both its addictive qualities and the maker’s profit margin. Though segments of the food industry have embraced healthy food and built successful businesses around it in the last couple of decades, many other segments have not. These segments, namely the fast-food and processed-food industries, have created, and continue to create and aggressively market, unhealthy food, especially to the people most vulnerable to such marketing.

Media companies, of course, make more money when they attract a larger audience. Media companies have always relied on both subscription and ad revenue, but news production and distribution used to be limited to large organizations who had invested significant resources in journalists and print, TV, or radio distribution. But now, because of technology, there are thousands more sources available, and each is incentivized to monetize their source by driving clicks and views. Unfortunately, highly biased, opinionated, low-quality, and “clickbait” headlines and content drive revenue even more easily than high-quality, least-biased headlines and content. Not only have newer sites of questionable reputability supplied plenty of low-quality, highly-biased headlines and content, but their proliferation has, unfortunately, caused historically reputable outlets to start providing some lower-quality, highly-biased content just to compete for audience share.

The combination of 1) our predispositions to unhealthy food/info consumption and 2) the monetary incentives for food/info companies to exploit them is a vicious cycle in which many well-meaning consumers fall into patterns of more and more unhealthy consumption.

We’ve come to the collective realization as a society that the consequence of unhealthy food consumption is an obesity epidemic. I submit that the consequence of unhealthy information consumption is an extreme polarization epidemic. We are polarized because so many of us are consuming such high quantities of low-quality, highly-biased information.

One main difference between food and information in this analogy is that the causal links between unhealthy food consumption and poor health effects have been studied and are now somewhat well known. For example, we know that diets too high in fat, sugar, and/or salt, are linked with heart disease, diabetes, high blood pressure, and a host of other illnesses.

In contrast, we are only now starting to study and realize the detrimental effects of overconsumption of low-quality and highly-biased information. Many of us intuitively attribute our increased political polarization to this cause, but it is also highly possible that other detrimental effects are due to this overconsumption. For example, people’s personal levels of anger, isolation, radicalization, or bad decision-making, I suspect, may be attributable to overconsumption of unhealthy information. Many people are blissfully unaware that they are suffering any ill effects from what they read and watch, even though they are consuming intellectual equivalents of donuts and fries at every sitting.

What To Do About It

I don’t mean to blame or shame anyone who struggles with unhealthy food habits and the resulting health outcomes. So many aspects of our society are set up to have people fail—everything from work schedules, to cost, to availability of options, makes having healthy eating habits hard.

I also don’t mean to blame or shame anyone who only consumes low-quality, highly-biased information and as a result, lives in a polarized, ideological silo. Social media has amplified the reach of unhealthy information, made it easy to only consume those articles and shows, and exacerbated our polarization problem.

The most important thing to focus on is not what or whom to blame for these causes and epidemics, but what we can do to try to address and fix some of them. The consequences are extremely damaging. It is imperative that we try to find solutions.

The bright side of the food/information analogy is that for the unhealthy food problem, we have created and implemented some effective solutions. Our existing solutions are by no means complete—we still have a lot more work to do—but we’ve made progress.

One of the first steps in addressing unhealthy food consumption problems was making knowledge available to consumers. Nutrition fact labels, as we know them today, were only mandated as recently as 1994, and they continue to be refined as we learn more about nutrition.

Currently, there is no equivalent “nutrition label” for information, though I and others are attempting to create this equivalent—something reliable and widely-recognized as reputable that tells people what is in their media content. There are a number of challenges to doing this, including the fact that trying to tell people what is “good” or “unbiased” is controversial. See more about that here and here.

Another challenge is that knowledge alone (i.e., nutrition labels alone) doesn’t solve the unhealthy food problem and obesity epidemic completely. So will media rating labels alone solve the unhealthy info problem and polarization epidemic completely? Almost certainly not. But can they make a difference, and should we try? Almost certainly yes.

We have seen a model for addressing our problems with unhealthy food arise in the form of increased knowledge and awareness campaigns and the rise of an entire health and fitness industry. Public, private, and non-profit organizations have implemented studies, projects, and other efforts to let people know that they should eat their vegetables and lean proteins, eat in moderate quantities, and limit fast, processed, and other foods with excess fat, sugar, and salt. Nutrition labels were an important part of spreading new knowledge about healthy eating. Many, many food companies and restaurants created and responded to new consumer demand for healthier options: whole grain, whole wheat, low-fat, low-cholesterol, low-salt, low-sugar, low-calorie, organic, high-protein, and many other types of improved food choices are now available, and they continue to become more accessible and popular. Certainly, there are still challenges to healthy eating, and many people fail at it. But now, people are armed with more knowledge and choices, and entire segments of the population use those to eat in perfectly healthy ways. More people than ever before can now realistically choose to eat healthy as a long-term, sustainable lifestyle, even in the face of unhealthy food availability.

We are seeing the beginning of a similar concerted, societal response to unhealthy information diets. There are academic institutions studying the issue and journalism organizations trying to both create and respond to consumer demand for healthy (good quality, minimally-biased) information. You can see these efforts springing forth in the form of fact-checking sites and segments and new independent, subscription-driven journalism outlets.

However, our efforts lag far behind the need to solve the unhealthy information problem. I assert that the problem isn’t even fully realized and identified yet. Namely, we have largely identified the fact that outright “fake” news (lies, deliberate misinformation) is a problem, but do not realize the damage that is being done by news that isn’t completely false, but is highly opinionated and biased. I assert that it is a lurking, unidentified, unrealized problem, like the role of carbohydrates and sugars as dietary culprits used to be. Experts thought that just eating fat was bad—that fat made you fat. Low-fat food became all the rage, but food makers added sugar, which turned out to be worse.

We have seen internet giants Facebook, Twitter, and YouTube try to crack down on fake news (i.e., “fat”), without giving a thought to the role that highly opinionated and biased news plays in polarization. That stuff continues to be so widely distributed, I think, in part because it is highly profitable, but also because these companies don’t think it is unhealthy in the first place.

Exercise and Information Fitness

There is another dimension to this analogy. One part of the solution to unhealthy food is to not do something; namely, to not eat unhealthy food and choose healthy food instead. Essentially, you can be healthy just by controlling your diet.

But there is another positive, related thing you can do to combat unhealthy eating problems—exercise! Exercise actively combats the adverse effects of any past or current unhealthy food habits, and has a whole host of other health benefits. There are lots of ways to exercise—you can play sports, run, do yoga, weight train, join group classes—and all of these things help overall health.

The analogous “exercise” people can do to combat a bad information diet is, I submit, participate in civic engagement, which we can call “civic exercise.” There are many different types of civic exercise you can engage in that provide psychic and emotional benefits. I assert that these include having face-to-face conversations with your political opponents, working on your own business or projects that you are passionate about, learning facts about law and government, voting, volunteering for political causes and campaigns, attending town halls, donating to causes you care about, and calling your elected representatives. Civic exercise can take the form of things that actually make a difference in democracy or relationships with your fellow citizens, whether those things are big or small. While consuming lots of unhealthy information can make you feel angry, sad, and powerless, engaging in civic exercise can make you feel powerful, give you a sense of purpose and meaning, and create the feeling that you are making a difference, because you actually are.

The big question facing our society right now is how to address this polarization epidemic caused by our unhealthy information diet. I propose that we do everything we can to promote lifestyles of “information fitness.” Information fitness should be a thing. The fields of “media literacy” and “information literacy”—also known as “InfoLit”—have existed for years, but I believe we need to transform this concept so that people are not just competent to manage the information landscape (i.e., not just “literate”), but can actually thrive in it. That is, we need to create opportunities for people to be Info Fit.

Info Fitness doesn’t really exist as a concept or industry right now, but the fitness industry didn’t used to exist either. For the longest time, we knew very little about diet and exercise, but as our unhealthy diet and exercise problems came into view, we started to figure out what to do about it.

Back in the day, fitness pioneers such as Jack LaLanne and Jane Fonda introduced millions of Americans to new, structured forms of exercise. Today, there are millions of resources for helping people eat well, and millions of new opportunities for people to get fit through exercise. Even as recently as a generation ago, people didn’t have nearly as many healthy food or exercise options and resources. Today, people can choose to eat gluten-free, dairy-free, organic, high-protein, vegetarian or vegan, paleo, macronutrient-balanced, and other health-focused diets with the help of the internet, books, plans, and grocery stores. They can choose to run, take Zumba® classes, do CrossFit®, join OrangeTheory®, do yoga, do Pilates, play team sports in adult and senior leagues, swim, bike, endurance race, or do home workout videos like P90X® or Insanity®. This entire health and fitness industry sprang up in the last couple of decades and is making a difference in the face of a health epidemic caused by too much unhealthy food consumption.

We can and should create opportunities for people to live a lifestyle of Information Fitness if they so choose, to combat this polarization epidemic caused by too much unhealthy information consumption.

What would Information Fitness look like? It would start with information nutrition labels, as we discussed before, so people could be aware of what they were consuming before deciding whether they should spend their precious attention on it. If you eat healthy, you know you only have a limited number of calories to eat every day, and you must make choices about whether a particular food item is worth it. Similarly, we have limited time and attention we can dedicate to consuming news. We should actively decide whether what we are about to read or watch is worth it.

I don’t think we need to define a “perfect” model for information diet and civic exercise, but I think we can identify some of the big things that are way out of balance in many people’s information diets. If you look back at the chart, you’ll see it goes from fact-based reporting at the top, to analysis in the middle, to opinion below that, and outright misinformation below that. In this analogy, I submit that analysis content is like carbs in many ways. The vast majority of what is available for us to consume is various sources of carbs (analysis). Now, you definitely need carbs (analysis) in your diet—they are (it is) important. But what you need is moderate portions of high-quality carbs (analysis), like whole grain bread (or an article from the Economist). However, most of us are consuming vast quantities of white bread, mashed potatoes, and cereal (like watching a ton of CNN and reading all your favorite partisan online sites every day). Too much of this, I assert, is unhealthy.

What should we probably consume more of instead? I submit your least-biased, most fact-based articles are like your vegetables and lean proteins, so we should probably focus on getting more of those, with, as I mentioned, a healthy, high quality, moderate portion of analysis (carbs). Most of MSNBC and FOX News are donuts and fries. They are ok in small quantities every once in a while, but for the love of God don’t sit and consume those all day.

Twitter is candy. Each take is little, gratifying, and addictive. And if you have too much of it you feel sick. We should probably limit our Twitter to small doses.

In addition to cleaning up our information diets, info fitness as a lifestyle will require structures, tools, and resources for people to engage in civic exercise. I see these things sprouting up everywhere in the form of community projects, companies, and education initiatives, and they are inspiring. I believe the next generation will have the capability to engage in civic exercise in ways we currently don’t.

With a combination of 1) knowledge of what we are consuming, 2) choosing to consume mostly healthy information, and 3) engaging in civic exercise, people can become info fit, and fight against this extreme polarization epidemic. I submit that if more people choose to become info fit, we can make a difference in our politics and in our personal relationships with our fellow citizens. I’m working on doing what I can to help people become more info fit, and I hope you’ll do the same.

[1] If you have followed my writing for a while, you know I try not to generalize (because all generalizations are false, including this one). You know I am also highly critical of analogies, because you can always find differences that disprove your analogy. However, generalizations and analogies are useful and necessary rhetorical tools, and this one was compelling to me. It was compelling because we can use the models and solutions we have found, so far, to unhealthy eating problems to create models and solutions for unhealthy information consumption problems.


Part 2 of 4: Why Measuring Political Bias is So Hard, and How We Can Do It Anyway: The Media Bias Chart Horizontal Axis

Post Two of a Four-Part Series


The Media Bias Chart Horizontal Axis: How to Define Political Bias in a Meaningful, Useful Way

In part one of this series I laid out some problems with existing ways of measuring bias and outlined a proposed new methodology for rating such bias in news sources within a defined taxonomy (the horizontal axis of the Media Bias Chart).

In this post, I’ll first define what the terms “partisanship” and “political bias” mean in this taxonomy (“partisanship” and “political bias” are used somewhat interchangeably here, though they are distinguishable in some respects). More specifically, I’ll define what the concepts of “liberal,” “neutral/center,” and “conservative” mean within the scope of this chart, and the reasoning behind these definitions. Then, I’ll discuss what the horizontal categories on the chart represent.

For clarity, let’s go one step further back and specify what “political/partisan” bias even means. Here, I refer to the preference for policy positions that are available for individual people to hold on particular topics that are subject to legislation by government. I am not referring to individual people themselves as left- or right-biased. In other words, the definitions are topic-focused, not people-focused. For example, I will define policy positions, such as “taxes should be higher/the same/lower on wealthy people,” as liberal/centrist/conservative, rather than define individual people, like journalists or politicians, as themselves being liberal/centrist/conservative.

Regarding the question of what “liberal,” “centrist,” and “conservative” (hereinafter referred to simply as “liberal/conservative” or “left/right”) policy positions are, the answer is difficult because 1) what is considered liberal or conservative is a moving target over time, 2) there isn’t necessarily a “center” on each topic, and 3) some people will always disagree with the definitions I or anyone else may come up with.

As an initial matter, many people object to trying to confine partisanship to a left-right axis, arguing that there are other dimensions, such as establishment-populist or freedom-regulation. Those who insist that these dimensions exist and should be accounted for tend to be libertarians and/or people who feel their political positions are too nuanced to be captured by a simple right/left dimension. However, several forces, including our country’s two-party system, tend to flatten those other dimensions into the liberal-conservative dimension that most Americans easily recognize. As Steven Pinker states in his book The Blank Slate, “while many things in life are arranged along a continuum, decisions must often be binary.” For more on this concept, see Pinker’s book or Maxwell Stearns’s writing on political dimensionality here. Therefore, I will stick with the liberal/conservative dimension, because it covers most bias issues, and because this is a visual two-dimensional chart. A visual chart cannot show analysis like written words can, so if you find yourself getting upset about a nuanced idea that is not depicted on the chart, try to remember that this is a chart, and one of the reasons it has reached so many people is that it is a picture that necessarily simplifies some concepts. Don’t worry, someone has probably already written an excellent and nuanced article about your point.

Regarding the question of “what locations on the chart correspond to particular liberal and conservative positions,” the answer is tricky because the horizontal dimension actually represents two distinct bias concepts: 1) political position bias and 2) linguistic expression bias. Political position bias refers to the “rightness” or “leftness”—the extremism—of a particular political position itself. For example, an article that portrays an extreme right-wing position, such as white nationalism, favorably (even if the portrayal is only mildly favorable) would be ranked far to the right. Linguistic expression bias represents the degree to which an article or source promotes a political position through linguistic rhetoric, even if the political position itself is not extreme. For example, if an article uses extreme language and hyperbole to promote the concept that climate change is caused by humans, which is not an extreme position in and of itself, the article would be ranked far to the left.

Although the questions of what constitutes bias are hard, I believe it is worthwhile and possible to come up with definitions for the horizontal categories, thereby creating a taxonomy on which reasonable people of differing political beliefs can find agreement. I think it is also worthwhile to create a methodology for ranking within the taxonomy, so reasonable people of differing political beliefs can rank the same sources and come up with similar results.

Several commentators on this blog have brought up a concept called the Overton Window, which refers to what constitutes acceptable political discourse during a particular time, and which inherently recognizes that the window shifts over time. The left-right dimension of this chart attempts to capture the range of political discourse (not just the acceptable portion) in our media at the present time. This is hard to capture, but I believe we can do this if we account for enough inputs. What I specifically refer to as “inputs” are the communications that exist throughout our political system from three groups of people, namely, elected officials, journalists, and citizens.

The communications that emanate from each of these groups are important and influential in different ways. The communications of elected officials are of course important because they have the actual power to change laws. The communications of journalists are important because their platforms give them influence over how citizens see political events. The communications of citizens are important because they are the most numerous, and because they have collective power over what the elected officials (by voting) and journalists (by reading and watching) say.

It is not always obvious whose political views (of these three groups) influence the politics of our overall society. For example, many wonder, is it the media that influences citizens and elected officials? Or do citizens influence the media and elected officials? Or do elected officials influence citizens and the media? I submit that each of these groups (which overlap, of course) influences the others in varying degrees at different times, and that push and pull of influence is what causes definitions of “left” and “right” to evolve over time. For example, one can argue that the civil rights movement was caused first by citizens who influenced the media and politicians. One can argue that Fox News influenced some citizens and politicians to become more conservative on a range of issues over the last 20 years. One can argue that Obama’s endorsement of same-sex marriage influenced many citizens and media to accept it. One can argue that Trump influenced citizens and media to tolerate lower standards of behavior and experience. Each of these instances is an example of how the influence of various actors results in movement of the political spectrum over time.

The Media Bias Chart takes into account the communications from each of these groups in different ways. As I’ll describe here, the categories themselves are largely defined by the communications of elected officials themselves. The placement of the media sources within the categories reflects the communications of the media about the elected officials and the citizens.

There are seven columns on the chart.

Before defining what these seven columns represent, I’ll start with defining the scope of the chart. The versions I am creating now strictly refer to United States political partisanship. Maybe eventually this project can include international versions, but that’s beyond the scope right now. To the extent sources from other countries are included on this chart (e.g., BBC, The Guardian, The Economist, Daily Mail, etc.), their political bias rating is only with respect to their treatment of US political stories. That is, a BBC story about a British political issue would not be included in the evaluation of BBC on this chart. This is because a large basis for the evaluation of political bias is “comparison between sources.”

In order to compare the bias of sources, one has to look at multiple sources writing about the same or similar topics. Further, one has to know what comprises the political spectrum from the extremes to neutral, and the linguistic rhetoric commonly used in those countries to categorize those topics, in order to then categorize those stories.  It is highly inaccurate, if not impossible, to rate the political bias of, for example, a single story from the BBC about British politics against the US political and media landscape.

To illustrate how our impression of bias is largely dependent on comparison, I invite you to do an exercise: look up an article on Al-Jazeera, RT, BBC, or CBC—one about a topic local to the region or country from which they report. Be sure to select an article about a news topic that you are totally unfamiliar with. Chances are the article will strike you as politically neutral because you are unable to place the political issue within a spectrum of political positions on it. To the extent you are able to detect bias, it is likely due to linguistic indicators, or some reliance on comparison to your knowledge of related US political issues. It is likely your assessment of an article as unbiased would be at odds with a politically astute resident of the country about which you are reading.

Turning back to defining the categories on this chart, precisely defining what comprises the center (of our political system and of the chart) is the hardest, but I think that something approximating a consensus “center” can best be found by defining the easier-to-recognize outer left and right and then working inward towards the middle. I assert that it is easier to define the sides of an issue rather than the center of it for several reasons. One of those reasons is that some issues do not have a center. Another is because politicians tend to advocate for positions of a particular side, not for positions that are in the center. The “center” or a “compromise” is typically a result of a negotiation between two sides, and is not what politicians typically run on.

Similarly, it is easiest to detect bias in a media source the more extreme or egregious the bias is, and harder the more nuanced or unintentional it is. When it comes to articles and sources listed toward the center of the chart, or which just slightly “skew liberal” or “skew conservative,” there is room for reasonable minds to disagree as to exactly how biased these are. Whether a particular observer views these articles as skewing slightly one way or another is largely dependent on the observer’s own political leaning. These articles and sources may contain only nuanced bias, reflected in the choice of one particular term over another, or in the emphasis of certain facts at the beginning of an article and others at the end.

Referring briefly back to the vertical columns of the chart:

Most journalists at reputable sources, when writing fact-based stories (i.e., those ranked in the top two rows of the chart, as opposed to purposeful analysis or opinion listed in rows below), attempt to write stories that are as unbiased as possible. But since they are human, they inherently have some political bias, which may manifest subconsciously in their writing.

I submit that the absolute placement of an article or source in the middle three columns of the chart (skews liberal/neutral-balance/skews conservative) is not as important as whether an article or source falls within those middle columns or within the outer columns (hyper-partisan and most extreme). In other words, it is important for all media consumers to recognize when a source is egregiously biased. Though I stated earlier that it is easier to detect egregious bias, the state of our politics indicates that too many people are still unable to detect it. If a moderately conservative person finds a particular article in Time Magazine skews liberal, and a moderately liberal person thinks the same article skews conservative, that disagreement can generate interesting and healthy debate, but those debates are not the dangerous ones creating extremism and damaging polarization. Those two people are reading something fundamentally reputable in the first place and engaging in productive political discourse. The worst problems with our media environment arise when people follow the hyper-partisan and most extremely biased sources and don’t even recognize that they are biased.

What Comprises the “Politics” that we are discussing, anyway?

Again, we need more baselines and definitions, because the term “politics” is broad. For our purposes here, we can define U.S. politics generally as things our elected officials have the ability to influence. And a good way to determine what our politicians think they can influence is by looking at the topics they solicit feedback on and discuss when they run for office. These often represent topics they take up in legislation when in office.

To generate a useful list of these topics, I took a look at the contact forms for each of my Senators from the state of Colorado, Michael Bennet (D) and Cory Gardner (R). Fortunately (I think), our state happens to have two fairly moderate members of their respective parties representing us in the Senate. On each of their e-mail contact forms, there is a long drop-down list of political issues about which you can contact them. The lists are long, with over 35 issues on each, and negligible differences in how they categorize them.

On some political issues, there is a fairly wide left-right political divide, and on others, there tends to be more consensus. Because this chart measures political bias, it is most helpful to identify the topics on those lists for which a discernible political divide exists. Some topics are so widely agreed upon that their political significance is negligible, and their inclusion in an article would not tell you much about the source’s political bias; for example, an article about a discovery on Jupiter would be fairly non-political. Those tend to go in the middle column absent other factors (e.g., if the article was about how cuts in NASA’s budget are limiting discoveries on Jupiter and that is a bad thing, that would be more political because it includes another political topic: budgets). For the purposes of ranking bias on this chart, it is most useful to identify what political positions have a wide left-to-right spectrum of positions.

From the political topics on my Senators’ lists, I consolidated and selected the ones with the most discernible political spectrums. These are:


  • Campaign Finance
  • Civil Rights
  • Higher Education
  • K-12 Education
  • Food Stamps/Welfare
  • Gun Control
  • Health Care
  • Social Security
  • Foreign Policy

When I refer to parties’ and politicians’ “positions,” I generally mean positions about these topics listed. Some of these are more polarizing than others, meaning the existing “extremes” are further apart than on other issues. For example, I submit that abortion and gun control are more polarizing than higher education.

In order to categorize how left, right, or center a position is, I used the proxies of the positions of current elected officials, as explained in further detail below. I then created a table of positions for each topic that fall into each of the categories. Because this is an evolving project and these positions change over time (sometimes significantly within a short period), I currently have this table only in paper-and-pencil form, but I plan to convert it to an electronic format eventually. Here are some hard-to-see pictures of it:

Because the political spectrum changes over time, these positions should be reevaluated and updated fairly frequently: every six months, for example.

Having defined various political positions as falling within particular categories, one can then methodically use the advocacy or favorable treatment of these positions in an article to rank them in those corresponding categories.
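To illustrate how such a positions table could work once it is in electronic form, here is a minimal sketch. The topic, the example positions, and the numeric scale (-3 for most extreme left through +3 for most extreme right, mirroring the seven chart columns) are hypothetical placeholders of my own, not the author’s actual table.

```python
# Hypothetical sketch of the positions table in electronic form.
# Scale assumption: -3 = most extreme left .. +3 = most extreme right,
# mirroring the seven horizontal columns of the chart.

# topic -> {position description: left-right category score}
POSITIONS = {
    "taxes": {
        "lower taxes on wealthy people": 1,
        "keep tax rates roughly as they are": 0,
        "raise taxes on wealthy people": -1,
    },
}

def score_position(topic: str, position: str) -> int:
    """Look up the left-right category score for a policy position."""
    return POSITIONS[topic][position]

liberal_score = score_position("taxes", "raise taxes on wealthy people")  # -1
```

An article that portrays one of these positions favorably would then inherit the corresponding horizontal category as a starting point for its rating.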

I submit it is possible to separate the concepts of political extremism (as measured horizontally on the chart) from quality (as measured vertically) to a certain extent, and for certain political issues. In other words, more extreme political positions do not always have to correlate with low quality. The distribution of sources on the chart appears to indicate that the more extreme a position is, the lower quality the article or source is, but that is not necessarily due to the extremism of the position itself. Consider the columns of “hyper-partisan” liberal and conservative. Notice that many sources fall completely or partially within these columns, all the way from “fact reporting” down to “contains inaccurate/fabricated info.”

The reporting of certain facts themselves can create a compelling case for an idea that may be considered politically extreme at the time of its reporting; for example, at a time where adoption of children by gay couples was largely banned by law, an article reporting a study which finds that children raised by gay couples turn out to be just as happy and well-adjusted as those raised by straight couples would appear to take a very liberal policy position. Therefore, it is possible for “fact reporting” articles to fall in the “hyper-partisan” category.

Similarly, even strongly hyper-partisan positions can be supported with analysis and opinion arguments of varying quality. For example, arguments that are strong, compelling, made in good faith, based on valid moral concerns, and which do not omit relevant facts from the other side can be made for even somewhat radical economic/social concepts, like libertarianism and socialism. However, worse arguments can be made for these things as well, and those quality-lowering factors are what drag articles or sources down the chart. I submit it is even possible (but rare) to write high-quality stories and arguments about some even more highly-polarized topics, such as abortion. That is, you could have a high-quality complex analysis article that advocates for an extreme position on abortion (e.g., no abortion or birth control on the right, or publicly funded abortion and birth control on the left) that would fall in the top of the “complex analysis” row and right on the right-most or left-most “hyper-partisan” line. However, the nature of very extreme positions is that they tend to be extreme precisely because they ignore some realities and/or valid concerns of the other side. The more extreme the position, the more likely it is to rely on ignored or omitted facts, and the more untenable it is for an elected official to hold. Therefore, those positions that are too extreme for any politician to hold (in the “most extreme” liberal/conservative columns) are all in the lowest quality categories due to their misleading and inaccurate natures.


What Comprises Linguistic Expression Bias?

As previously discussed, the horizontal categories also represent levels of linguistic partisanship; that is, I propose that the use of certain words in certain contexts can indicate levels of bias. I refer to these as simply “biased words” herein. They comprise words in the following four categories: 1) words with political connotations connecting them to certain parties or positions, 2) adjectives that don’t necessarily have a political connotation themselves, but when used to describe a political actor, party, or position, indicate political bias, 3) insults and pejoratives commonly used to describe certain political opponents, and 4) bogeymen.

The first category of biased words refers to the preferred terminology about a political position or political topic by one side or the other. These include characterizations of positions like being for/against abortion as “pro-life” or “pro-choice,” or referring to certain immigrants as “illegal aliens” or “undocumented immigrants.”  These kinds of words can correlate with quality as well, because certain ones are used as insults or in a derogatory manner, which necessarily fall into the category of “unfair persuasion” on the quality scale.

The second category of biased words refers to adjectives used for ad hominem (personal) attacks on politicians. For example, if an article applies the words “ugly” or “stupid” to politicians, those words are biased words. Such words also correlate with low quality because they are unnecessarily mean, and therefore fall into the category of “unfair persuasion.”

The third category of words includes specific insults and pejorative terms that have inherent contemporary political connotations. Examples include “deplorables,” “snowflakes,” “leftists,” and “the mainstream media.”

The fourth category of words—bogeymen—refers to people or groups that may or may not exist, but whose names are invoked by politicians or media figures to incite fear, anger, or loathing among their constituents or audience. These may be real people or groups that have committed bad acts, or acts perceived as bad by their political opponents. However, they evolve into “bogeymen” terms when they become used as abstractions of these acts, thereby transforming into a sort of common enemy. Examples include “the Muslim Brotherhood,” “the 1%,” “the Deep State,” and “Big Pharma.”

In addition to the table that I created for mapping political positions to categories on the chart, I made another table that lists biased words from the four categories above. I placed these words and phrases into the horizontal categories, categorizing the biased words themselves based on degree of bias. Again, since this is a work in progress, I only have this in pencil-and-paper format, but I plan to put it in electronic form soon. Here are some more hard-to-see pictures of that table:
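As a rough illustration of what that biased-words table might look like electronically, here is a minimal sketch. The example words and phrases come from the four categories described above, but the numeric degree-of-bias values (again on an assumed -3 to +3 scale matching the seven columns) are illustrative guesses of mine, not the author’s actual assignments.

```python
# Hypothetical sketch of the biased-words table. Degree values are
# assumed: negative = left-leaning usage, positive = right-leaning usage,
# magnitude = degree of bias (1 = skews, 2 = hyper-partisan, 3 = most extreme).

BIASED_WORDS = {
    "undocumented immigrants": -1,   # category 1: partisan preferred terminology
    "illegal aliens": 1,             # category 1
    "deplorables": -2,               # category 3: pejorative with political connotation
    "snowflakes": 2,                 # category 3
    "the mainstream media": 2,       # category 3
    "the 1%": -2,                    # category 4: bogeyman
    "the Deep State": 2,             # category 4
}

def find_biased_words(text: str) -> list[tuple[str, int]]:
    """Return the biased words/phrases found in text with their degree of bias."""
    lowered = text.lower()
    return [(w, d) for w, d in BIASED_WORDS.items() if w.lower() in lowered]

hits = find_biased_words("Pundits blamed the Deep State and the mainstream media.")
```

Context still matters, of course: as discussed later in this series, a word used sarcastically or inside a quotation is not necessarily indicative of the writer’s own bias, which is why a simple lookup like this can only be one input among several.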

I’d be grateful for commentators to help me supplement this list and bring new words that should be included to my attention. I submit that, like the table of political positions, these lists of words should be updated frequently as well, because certain terms gain and fade in popularity fairly frequently. For example, “The Koch Brothers” are much more in vogue as a bogeyman than “Karl Rove” nowadays, though that was different just a few years ago.

In the next post (Part 3), I’ll go through each of the seven columns in more detail and list more examples of what political positions and biased words correspond with each. Finally, in Part 4, I’ll lay out how I take an article or story, and, using the criteria I’ve laid out here, go through the steps of ranking it as discussed in Part 1, which are 1) creating an initial placement of left, right, or neutral based on the topic of the article itself, and 2) measuring certain factors that exist within the article. I’ll also discuss step 3, which is accounting for context by counting and evaluating factors that exist outside of the article.


Part 1 of 4: Why Measuring Political Bias is So Hard, and How We Can Do It Anyway: The Media Bias Chart Horizontal Axis

Post One of a Four-Part Series

The Media Bias Chart Horizontal Axis:


Part 1:

Measuring Political Bias: Challenges to Existing Approaches and an Overview of a New Approach

Many commentators on the Media Bias Chart have asked me (or argued with me about) why I placed a particular source in a particular spot on the horizontal axis. Some more astute observers have asked (and argued with me about) the underlying questions of “what do the categories mean?” and “what makes a source more or less politically biased?” In this series of posts I will answer these questions.

In previous posts I have discussed how I analyze and rate quality of news sources and individual articles for placement on the vertical axis of the Media Bias Chart. Here, I tackle the more controversial dimension of rating sources and articles for partisan bias on the horizontal axis. In my post on Media Bias Chart 3.0, I discussed rating each article on the vertical axis by taking each aspect, including the headline, the graphic(s), the lede, AND each individual sentence and ranking it. In that post, I proposed that when it comes to sentences, there are at least three different ways to score them for quality on a Veracity scale, an Expression scale, and a Fairness scale. However, the ranking system I’ve outlined for vertical quality ratings doesn’t address everything that is required to rank partisan bias. Vertical quality ratings don’t necessarily correlate with horizontal partisan bias ratings (though they often do, hence the somewhat bell-curved distribution of sources along the chart).

Rating partisan bias requires different measures, and is more controversial because disagreements about it inflame the passions of those arguing about it. It’s also very difficult, for reasons I will discuss in this series. However, I think it’s worth trying to 1) create a taxonomy with a defined scope for ranking bias and 2) define a methodology for ranking sources within that taxonomy.

In this series, I will do both things. I’ve created the taxonomy already—the chart itself—and in these posts I’ll explain how I’ve defined its horizontal dimension. The scope of this horizontal axis has some arbitrary limits and definitions. For example, it is limited in scope to US political issues as they have existed within roughly the last year, and it uses the positions of various elected officials as proxies for the categories. You can feel free to disagree with each of these choices. However, the taxonomy has to start and end somewhere in order to create a systematic, repeatable way of ranking sources within it. I’ll discuss how I define each of the horizontal categories (most extreme/hyper-partisan/skews/neutral). Then, I’ll discuss a formal, quantitative, and objective-as-possible methodology for systematically rating partisan bias, which has evolved from the informal and somewhat subjective processes I had been using to rate it in the past. This methodology comprises:

  1. An initial placement of left, right, or neutral for the story topic selection itself
  2. Three measurements of partisanship on quantifiable scales, which include
     • a “Promotion” scale
     • a “Characterization” scale, and
     • a “Terminology” scale
  3. A systematic process for measuring what is NOT in an article, the absence of which results in partisan bias
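The three steps above can be sketched as a simple scoring function. This is a minimal illustration only: the scale names come from the post, but the numeric ranges (-3 left to +3 right) and the plain-average combination formula are my own placeholders, not the author’s actual method.

```python
# Illustrative sketch of combining the proposed article-level measurements.
# Assumptions: each score is on a -3 (left) .. +3 (right) scale, and the
# steps are combined by simple averaging.

from dataclasses import dataclass

@dataclass
class ArticleBiasScores:
    topic_placement: float      # step 1: initial left/right/neutral placement
    promotion: float            # step 2a: "Promotion" scale
    characterization: float     # step 2b: "Characterization" scale
    terminology: float          # step 2c: "Terminology" scale
    omission_adjustment: float  # step 3: bias from what is absent

def combined_bias(s: ArticleBiasScores) -> float:
    """Average the in-article scales, then average with placement and omission."""
    in_article = (s.promotion + s.characterization + s.terminology) / 3
    return (s.topic_placement + in_article + s.omission_adjustment) / 3

score = combined_bias(ArticleBiasScores(-1, -2, -1, -3, -1))
```

Whatever the actual weighting turns out to be, the point of structuring it this way is that two raters who disagree can see exactly which measurement drives their disagreement.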

  1. Problems with existing bias rating systems

To the extent that organizations try to measure news media stories and sources, they often do so only by judging or rating partisan bias (rather than quality). Because it is difficult to define standards and metrics by which partisan bias can be measured, such ratings are often made through admittedly subjective assessments by the raters (see here, for example), or are made by polling the public or a subset thereof (see here, for example). High levels of subjectivity can cause the public to be skeptical of ratings results (see, e.g., all the comments on my blog complaining about my bias), and polling subsets of the public can skew results in a number of directions.

Polling the public, or otherwise asking the public to rate “trustworthiness” or bias of news sources has proven problematic in a number of ways. For one, people’s subjective ratings of trustworthiness of particular sources tend to correlate very highly with their own political leanings, so while liberal people will tend to rate MSNBC as highly trustworthy and FOX as not trustworthy, conservative people will do the opposite, which says very little about an objective level of actual trustworthiness of each of those sources. Further, current events have revealed that certain segments of the population are extremely susceptible to influence by low-quality, highly biased, and even fake news, and those segments have proven themselves unable to reliably discern measures of quality and bias, making them unhelpful to poll.

Another way individuals and organizations have attempted to rate partisan bias is through software-enabled text analysis. The idea of text analysis software is appealing to researchers because the sheer volume of text of news sources is enormous. Social media companies, advertisers, and other organizations have recently used such software to perform “sentiment analysis” of content such as social media posts in order to identify how individuals and groups feel about particular topics, with the hope that knowing such information can influence purchasing behavior. Some have endeavored to measure partisan bias in this way, by programming software to count certain words that could be categorized as “liberal” or “conservative.” A study conducted by researchers at UCLA tried to measure such bias by counting media figures’ references to conservative and liberal think tanks. However, such attempts to rate partisan bias have had mixed results at best, because of the variation in the contexts in which these words are presented. For example, if a word is used sarcastically, or in a quote by someone on the opposite side of the political spectrum from the side that uses that word, then the use of the word is not necessarily indicative of partisan bias. In the UCLA study, references to political think tanks were too infrequent to generate a meaningful sample. I submit that other factors within an article or story are far more indicative of bias.

I also submit that large-scale, software-enabled bias ratings are not useful if the results do not align well with the subjective bias ratings gathered from a group of knowledgeable media observers. That is, if we took a poll of an equal number of knowledgeable left-leaning and right-leaning media observers, we could come to some kind of reasonable average for bias ratings. To the extent the software-generated results disagree, that suggests that the software model is wrong. I earlier stated my dissatisfaction with consumer polls as the sole indicator of bias ratings because they are consumer-focused and not content-focused. I think there is a way to develop a content-based approach to ranking bias that aligns with our human perceptions of bias, and that once that is developed, it is possible to automate portions of that content-based approach. That is, we can get computers to help us rate bias, but we have to first create a very thorough bias-rating model.

  2. Finding a better way to rank bias

When I started doing ratings of partisanship, I, like all others before me, rated them subjectively and instinctively from my point of view. However, knowing that I, like every other human person, have my own bias, I tried to control for my own bias (as referenced in my original methodology post), possibly resulting in overcorrection. I wanted a more measurable and repeatable way to evaluate bias of both entire news sources and individual news stories.

I have created a formal framework for measuring political bias in news sources within the defined taxonomy of the chart. I have started implementing this formal framework when analyzing individual articles and sources for ranking on the chart. This framework is a work in progress, and the sample size upon which I have tested it is not yet large enough to conclude that it is truly accurate and repeatable. However, I am putting it out here for comments and suggestions, and to let you know that I am designing a study for the dual purposes of 1) rating a large data set of articles for political bias and 2) refining the framework itself. Therefore, I will refer to some of these measurements in the present tense and others in the future tense. My overall goal is to create a methodology by which other knowledgeable media observers, including left-leaning and right-leaning ones, can reliably and repeatably rate bias of individual stories and not deviate too far from each other in their ratings.

My existing methodology for ranking an overall source on the chart takes into account certain factors related to the overall source as a first step, but is primarily based on rankings of individual articles within the source. Therefore, I have an “Entire Source” bias rating methodology and an “Individual Article” bias rating methodology.

  1. “Entire Source” Bias Rating Methodology

I discussed ranking partisan bias of overall sources in my original methodology post, which involves accounting for each of the following factors:

  a. Percentage of news media stories falling within each partisanship category (according to the “Individual Story” bias ranking methodology detailed below)
  b. Reputation for a partisan point of view among other news sources
  c. Reputation for a partisan point of view among the public
  d. Party affiliation of regular journalists, contributors, and interviewees
  e. Presence of an ideological reference or party affiliation in the title of the source

In my original methodology post, I identified a number of other factors for ranking sources on both the quality and partisanship scales that I am not necessarily including here. These include the factors of 1) number of journalists, 2) time in existence, and 3) readership/viewership. This is because I am starting with an assumption that the factors (a-e) listed above are more precise indicators of partisanship that would line up with polling results of journalists and knowledgeable media consumers. In other words, my starting assumption is that if you used factors (a-e) to rate partisanship of a set of sources, and then also polled significant samples of journalists and consumers, you would get similar results. I believe that over time, some of the factors 1-3 (number of journalists, time in existence, and readership/viewership) may be shown to correlate strongly with partisanship or non-partisanship. For example, I suspect that a high number of journalists may turn out to correlate with low partisanship, for the reason that it is expensive to have a lot of journalists on staff, and running a profitable news enterprise with a large staff would require broad readership across party lines. I suspect that “time in existence” may not necessarily correlate with partisanship, because several new sources that strive to provide unbiased news have come into existence within just the last few years. I suspect that readership/viewership will not correlate much with partisanship, for the simple reason that as many people seem to like extremely partisan junk as like unbiased news. Implementation of a study based on the factors listed above should verify or disprove these assumptions.

I have "percentage of news media stories falling within each partisanship category" listed as the first factor for ranking sources, and I believe it is the most important metric. Whenever someone disagrees with a particular ranking of an overall source on the chart, they usually cite the perceived partisan bias of a particular story that they believe does not align with my ranking of the overall source. What should be apparent to all thoughtful media observers, though, is that individual articles can be more liberal or conservative than the mean or median partisan bias of the source overall. In order to accurately rank a source, you have to accurately rank the stories in it.
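As a toy illustration of ranking a source from its story-level distribution, here is a sketch on a made-up numeric bias scale (-1.0 for most extreme liberal through +1.0 for most extreme conservative). The category positions and the example distribution are my own placeholders, not the chart's actual numbers.

```python
# Hypothetical mapping of partisanship categories to numeric positions
# (-1.0 = most extreme liberal ... +1.0 = most extreme conservative).
CATEGORY_POSITION = {
    "most extreme liberal": -1.0, "hyper-partisan liberal": -0.6,
    "skews liberal": -0.3, "neutral": 0.0, "skews conservative": 0.3,
    "hyper-partisan conservative": 0.6, "most extreme conservative": 1.0,
}

def source_bias(category_shares):
    """category_shares: {category: fraction of the source's stories}.
    Returns the share-weighted mean position of the source's stories."""
    return sum(CATEGORY_POSITION[c] * share for c, share in category_shares.items())

# A source with a third left-leaning, a third neutral, and a third
# most-extreme-conservative stories still averages out right of center:
example = {"skews liberal": 1 / 3, "neutral": 1 / 3, "most extreme conservative": 1 / 3}
print(round(source_bias(example), 2))  # 0.23
```

This also shows why an even-looking left/center/right split can still land a source on one side: the extremity of the stories matters, not just their count.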

2. "Individual Story" Bias Rating Methodology

As previously discussed, I propose evaluating the partisanship of an individual article by: 1) creating an initial placement of left, right, or neutral based on the topic of the article itself, 2) measuring certain factors that exist within the article, and then 3) accounting for context by counting and evaluating factors that exist outside of the article. I'll discuss this fully in Posts 3 and 4 of this series.

In my next post (#2 in this series) I will discuss the taxonomy of the horizontal dimension. I'll cover many reasons why bias is so hard to quantify in the first place. Then I'll define what I mean by "partisanship," the very concepts of "liberal," "mainstream/center," and "conservative," and what each of the categories (most extreme / hyper-partisan / skews / neutral or balanced) means within the scope of the chart.

 Until then, thanks for reading and thinking!

Posted on

An Exercise for Bias Detection

A great exercise for training your bias-detecting skills is to check a high volume of outlets (say, eight to ten) across the political spectrum in the 6-12 hours right after a big political story breaks. I did this right after the release of the Nunes memo on Friday, Feb 2. This particular story provided an especially good occasion for comparison across sites for several reasons, including:

-It was a big political story, so nearly everyone covered it. It’s easier to compare bias when each source is covering the same story.

-The underlying story is fact-dense, meaning that a lot of stories about it are long:

-As a result, it is easier to tell when an article is omitting facts.

-It is also easier to compare how even highly factual stories (i.e., scores of “1” and “2” on the Veracity and Expression scales) characterize particular facts to create a slight partisan lean.

-There are both long and short stories on the subject. Comparison between longer and shorter stories lets you more easily find facts that are omitted in order to frame the issues one way or another.

-News outlets had quite a while to prepare for this story, so those inclined to spin it one way or the other had time to develop the spin. Several outlets ran multiple fact, analysis, and opinion stories within the 12 hours after the story broke. You could count the number of stories on each site and rate their bias to get a more complex view of the source's bias.

I grabbed screenshots of several sources across the spectrum from the evening of Feb. 2 and morning of Feb. 3. These are from the Internet Wayback Machine (if you haven't used it before, it's a great tool that lets you see what websites looked like at previous dates and times). Screenshots from the following sources are below:


You can get a good sense of bias from taking a look at the headlines, but you can get deeper insight from reading the articles themselves. For some sources, the headlines are a lot more dramatic than the articles themselves; for others, the articles are equally or more biased.

If you want to rank these articles (based on the articles themselves, or just on the headlines and pages below) on a blank version of the chart, I recommend placing the ones that seem most extremely biased first, then placing the ones that seem less biased. It’s easiest to identify the most extreme of a set, and then place the rest in relative positions. There’s not always a source or story that will land in whatever you consider “the middle,” but you can find some that are closer than others.

Going through this exercise is especially beneficial when big stories like this break. I know it is time-consuming to read so many sources and stories, so most people don't read laterally like this very often, if ever (if you do, nice work!). Doing so from time to time can help you remember that other people are reading very different things than you are, and increase your awareness of the range of biases across the spectrum. It can also help you detect more subtle bias in the sources you regularly rely on.

Happy bias-detecting!

Posted on

Media Bias Chart, 3.1 Minor Updates Based on Constructive Feedback

So why is it time for another update to the Media Bias Chart? I’m a strong believer in changing one’s mind based on new information. That’s how we learn anyway, and I wish people would do it more often. I think it would lead to nicer online discussions and less polarization in our politics. Perhaps people don’t “change their minds based on new information” as much as they should because it is often framed more negatively as “admitting you are wrong.” I don’t particularly mind admitting I’m wrong.

In any event, I’m making some minor updates to the Media Bias Chart, corrections, and improvements based on feedback I’ve gotten. I’ve been fortunate to hear from many of you thoughtful observers out there, and I’m so grateful that so many of you care about the subject of ranking quality and bias.

The Media Bias Chart Updates

Here are the changes for version 3.1. I’m calling it 3.1 because they are mostly minor changes. I got quite a bit of feedback on these topics in particular.

  • The middle column now says “Neutral: Minimal Partisan Bias OR Balance of Biases.” I moved away from the term “Mainstream” because that term is so loaded as to be useless to some audiences. Also, there are some sources that are not really minimally biased or truly neutral; some have extreme stuff from both political sides.


  • The horizontal categories have been updated slightly in our Media Bias Chart. The "skew conservative" and "skew liberal" categories no longer have the parenthetical comment "(but still reputable)," mostly because the term "reputable" has more to do with quality on the vertical axis, and I'm doing my best not to conflate the two. The "hyper-partisan conservative" and "hyper-partisan liberal" categories no longer have the parenthetical comment "(expressly promotes views)," mostly because "promoting views" is not the only characteristic that makes something hyper-partisan. Finally, the outermost liberal and conservative "utter garbage/conspiracy theories" categories are now re-labeled "most extreme liberal/conservative." This is, again, because the terms "utter garbage" and "conspiracy theories," though often accurate for sources in those columns, have more to do with quality than partisanship.


What has moved?

I am writing a separate post that more specifically defines the horizontal axis and the criteria for ranking sources within them. It’s a pretty complex topic, and I’ll discuss many additional points frequently raised by those of you who have commented. I will likely have more revisions accompanying that post.


  • I have moved Natural News from the extreme left to slightly right. I know this may still cause some consternation among commentators who correctly note that it has a lot of extreme right-wing political content. However, after categorizing dozens of articles over several sample days and counting how many fell in each category, the breakdown looked like this: About a third fell in the range of "skew liberal" to "extreme liberal" (in terms of promoting anti-corporate and popular liberal pseudo-science positions), another third were relatively politically neutral "health news," and about a third fell into the extreme conservative bucket. There wasn't much that fell into the "skew conservative" or "hyper-partisan conservative" categories. So even though the balance was 1/3, 1/3, 1/3 left, center, right, the 1/3 on the right was almost all "most extreme conservative," which pushed the overall source rank to the right. For those who are still unhappy and think it should be moved further right, take consolation in the fact that it is still at the bottom vertically, and to an extent, it doesn't matter how partisan the junk news is as long as you still know it's junk.


  • I removed US Uncut, because as some of you correctly pointed out, that site is now defunct.


  • I removed Al-Jazeera from the top middle, but not because I don’t think it’s a mostly reputable news source. I removed it for two reasons.

Al-Jazeera Explained

  1. First, many people are unclear on what I am referring to as Al-Jazeera. It is a very large international media organization based in Qatar, but it is not a very popular news source among Americans. Americans who are familiar with it could assume that I am referring to Al-Jazeera English (a sister channel), Al-Jazeera America (a short-lived US organization (2013-2016) that arguably leaned left), or AJ+ (a channel that provides explanatory videos on Facebook and also arguably leans left). I do think these are worth including in the Media Bias Chart, but I will differentiate them before including them in future versions. What I meant originally was the main English-language Al-Jazeera site, which covers mostly international news, and which I consider a generally high-quality and reputable source.
  2. Second, it is somewhat controversial because it is funded by the government of Qatar, and it has been accused of bias as it pertains to Middle East politics. This doesn't necessarily mean that it is disreputable, or that its ownership results in stories that are biased to the left or right on the US political spectrum. However, I have only two other non-US sources on the Media Bias Chart, the BBC and the Daily Mail, both of which have significant enough coverage of US politics that you can discern bias on the US spectrum. I don't have any other international sources on the Media Bias Chart, and none that are primarily funded by a non-democratic government (the BBC is funded by the British public; NPR is publicly and privately funded in the US). Until I can specify which articles I have rated to form the basis for Al-Jazeera's placement, I'm going to leave it off.

Thanks for the comments so far, and please keep them coming. I appreciate your suggestions for how to make this work better and your requests for what you want to see in the future.

Posted on

Observations on The Chart by Law Professor Maxwell Stearns of U. Maryland

Law professor Maxwell Stearns, who blogs about law, politics, and culture, recently published this post about the chart, which has several useful insights about 1) distilling the ranking criteria into sub-categories, 2) why the sources on the chart form a bell curve, and 3) how the rankings might be made more scientific. Give it a read!

Posted on

Everybody has an Opinion on CNN

I get the most feedback by far on CNN, and, in comparison to feedback on other sources on the chart, CNN is unusual because I get feedback that it should be moved in all the different directions (up, down, left, and right). Further, most people who give me feedback on other sources suggest that I should just nudge a source one way or another a bit. In contrast, many people feel very strongly that CNN should be moved significantly in the direction they favor.

I believe there are a couple of main reasons I am getting this kind of feedback.

  • CNN is the source most people are most familiar with. It was the first, and is the longest-running, 24-hour cable news channel. It's on at hotels, airports, gyms, and your parents' house. Even if people are critics of no other news source, they will be critics of CNN, because they are most familiar with it.
  • CNN is widely talked about by other media outlets, and conservative media outlets in particular, who often describe it as crazy-far left. Usually those who tell me it needs to go far left are the ones reading conservative media—no surprise there.
  • People tend to base their opinions of CNN on what leaves the biggest impression on them, and there are a lot of aspects that can leave an impression:
    1. For some people, the impression comes from having CNN on in the background during the day, during which they see a large sampling of CNN's news coverage and find that the programming is mostly accurate and informs them of a lot of the US news they are interested in. These individuals tend to think that CNN should be ranked higher, perhaps all the way up in "fact-reporting" and mainstream.
    2. For others, the impression comes from knowing they can tune into CNN for breathless, non-stop coverage of an impending disaster, like a hurricane, or a breaking tragedy, such as a mass shooting. People can take a few different kinds of impressions from this. First, they can count on the fact that all the known facts will be repeated to them within 10 minutes of tuning in. That's another reason to put CNN up in "fact-reporting." Second, more savvy observers know that CNN makes not-infrequent mistakes and often jumps the gun in these situations. The anchors usually qualify their statements properly, but they will still blurt out facts about a suspect, the number of shooters, or fatalities that are not quite verified yet. That causes some people to rank CNN lower on the fact-reporting scale. Third, people know that once CNN runs out of current material, it will bring on analysts about all related (or unrelated) subjects (e.g., lawyers, criminologists, climate change scientists, etc.), often for several days following the story. This tends to leave people with the impression that CNN provides a lot of analysis and opinion (including much that is valid and important) in addition to fact reporting. So a ranking somewhere along the analysis/opinion spectrum (a little above where I have it) seems appropriate.
    3. For yet others, the kind of coverage that leaves the biggest impression is the kind that includes interviews and panels of political commentators. The contributors and guests CNN has on for political commentary range widely in quality, from “voter who knows absolutely nothing about what he is talking about” to “extremely partisan, unreliable political surrogate” to “experienced expert who provides good insight.” People who pay attention to this kind of coverage note that CNN does a few crazy things.
      1. First, they run a chyron (the big banner at the bottom of the screen) that says "Breaking News:" followed by something that is clearly not breaking news. For example: "Breaking: Debate starts in one hour." Eye roll. That debate has been planned for months and is not breaking. Further, they run a chyron for almost everything, which seems unnecessary and sensationalist, but the practice has been adopted by MSNBC, Fox, and others. Often, the chyron's content is sensationalist.
      2. Second, in the supposed interest of being "balanced" and "showing both sides," they often have extreme representatives from each side of the political spectrum debating each other. This practice airs and lends credibility to some extreme, highly disputed positions. Balance, I think, would be better represented by having guests with more moderate positions. Interviews with Kellyanne Conway, who often says things that are untrue or misleading and makes highly disputed opinion statements, are something else. Even though the hosts challenge her, it often appears that the whole point of having her as a guest is to showcase how incredulous the anchors are at her statements. This seems to fall outside the purpose of news reporting. What's worse, though (to me, anyway), is that they hire partisan representatives as actual contributors and commentators, which gives them even more credibility as sources one should listen to about the news, even though they have a clear partisan, non-news agenda. They hired Jeffrey Lord, who routinely made the most outlandish statements in support of Trump, and Trump's ACTUAL former campaign manager, Corey Lewandowski. That was mind-boggling in terms of lack of journalistic precedent (and ethics) and seemed to be done for sensationalism and ratings, rather than for the purpose of news reporting, which is to deliver facts. Those hires were a big investment in providing opinion. I think it was extremely indicative of CNN's reputation for political sensationalism when The Hill ran two headlines within a few weeks of each other saying something like "CNN confirms it will not be hiring Sean Spicer as a contributor" and "CNN confirms it will not be hiring Anthony Scaramucci as a contributor" shortly after each of their firings.
      3. Third, their coverage is heavily focused on American political drama. I'll elaborate on this in a moment.

Personally, the topics discussed in the third bullet above (the political panels, guests, and contributors) left the biggest impression on me. That is why I have CNN ranked on the line between "opinion, fair persuasion" and "selective or incomplete story, unfair persuasion." The impact of the guests and contributors who present unfair and misleading statements and arguments really drives down CNN's ranking in my view. I have them slightly to the left of center, though, because they tend to have a higher quantity of guests with left-leaning positions.


I have just laid out that my ranking is driven in large part by a subjective measure rather than an objective, quantitative one. An objective, quantitative measure would take all the shows, stories, segments, and guests, analyze all the statements made, and say, on a percentage basis, how many of those statements were facts, opinions, analysis, fair or unfair, misleading, untrue, etc. I have not done this analysis, but I would guess that a large majority of the statements made in a 24-hour period on CNN would fall into reputable categories (fair, factual, impartial). Perhaps even 80% or more would fall into those categories. So one could reasonably argue that CNN deserves to be higher; say, 80% of the way up (or whatever the actual number is), if that is how you wanted to rank it.

However, I argue for the inclusion of a subjective assessment that comes from the question "what impression does this source leave?" Related questions are "what do people rely on this source for," "what do they watch it for," and "what is the impact on other media?" I submit that the opinion and analysis panels and interviews, with their often-unreliable guests, leave the biggest impression and make up a large portion of what people rely on and watch CNN for. I also submit that these segments make the biggest impact on the rest of media and society. For example, other news outlets will run news stories, the content of which is "Here's the latest crazy thing Kellyanne said on CNN." These stories make a significant number of impressions on social media, therefore amplifying what these guests say.

I also include a subjective measure that pushes CNN into the "selective or incomplete story" category, which comes from trying to look at what's not there; what's missing. In the case of CNN, given its resources as a 24-hour news network, I feel like a lot is missing. They focus on American political drama and the latest domestic disaster at the expense of everything else. With those resources and time, they could inform Americans about the famine in South Sudan, the war in Yemen, and the refugees fleeing Myanmar, along with so many other important stories around the world. They could do a lot more storytelling about how current legislation and policies impact the lives of people here and around the world. Their focus on White House palace intrigue inaccurately, and subliminally, conveys that those are the most important stories, and that, I admit, just makes me mad.

Many reasonable arguments can be made for the placement of CNN as a whole, but a far more accurate way to rank the news on CNN is to rank an individual show or story. People can arrive at a consensus ranking much more easily when doing that. I will be doing that on future graphs (I know you can’t wait for a whole graph just on CNN, and I can’t either!) for individual news outlets.


Posted on

The Chart, Version 3.0: What, Exactly, Are We Reading?

Note: this is actually version 3.1 of The Chart. I made some minor changes from version 3.0, explained here:

Summary: What’s new in this chart:

  • I edited the categories on the vertical axis to more accurately describe the contents of the news sources ranked therein (long discussion below).
  • I stuffed as many sources (from both version 1.0 and 2.0, plus some new ones) on here as I could, in response to all the “what about ______ source” questions I got. Now the logos are pretty tiny. If you have a request for a ranking of a particular source, let me know in the comments.
  • I changed the subheading under "Hyper-Partisan" from "questionable journalistic value" to "expressly promotes views." This is because "hyper-partisan" does not always mean that the facts reported in the stories are necessarily "questionable." Some analysis sources in these columns do good fact-finding in support of their expressly partisan stances. I didn't want anyone to think those sources were necessarily "bad" just because they are hyper-partisan (though they could be "bad" for other reasons).
  • I added a key that indicates what the circles and ellipses mean. They mean that a source within a particular circle or ellipse can often have stories that fall within that circle/ellipse's range. This is, of course, not true for all sources.
  • Green/Yellow/Orange/Red Key. Within each square: Green is news, yellow is fair interpretations of the news, orange is unfair interpretations of the news, and red is nonsense damaging to public discourse.

Just read this one more thing: It's best to think of the position of a source as a weighted average position of the stories within it. That is, I rank some sources in a particular spot because most of their stories fall in that spot. However, I weight the ranking downward if a source has a significant number of stories (even if they are a minority) that fall in the orange or red areas. For example, if Daily Kos has 75% of its stories fall under yellow (e.g., "analysis" and "opinion, fair"), but 25% fall under orange (selective, unfair, hyper-partisan), it is rated overall in the orange. I rank them like this because, in my view, orange- and red-type content is damaging to the overall media landscape, and if a significant enough number of stories fall in those categories, readers should rely on the source less. This is a subjective judgment on my part, but I think it is defensible.
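The "weighted downward" idea can be sketched roughly as follows. The numeric quality scale, the 20% threshold, and the cap value are all placeholders of my own choosing, not the chart's actual formula; the point is only that a minority of orange/red stories can override a decent average.

```python
# Sketch of the "weighted downward" idea: average the story quality scores,
# then cap the overall rank when the share of orange/red ("unfair"/
# "misleading") stories crosses a threshold. Numbers are illustrative.
def overall_quality(stories, penalty_threshold=0.20):
    """stories: list of (quality_score, tier) where tier is one of
    'green', 'yellow', 'orange', 'red'. Higher score = higher quality."""
    mean = sum(q for q, _ in stories) / len(stories)
    bad_share = sum(1 for _, tier in stories if tier in ("orange", "red")) / len(stories)
    if bad_share >= penalty_threshold:
        # a significant minority of unfair/misleading stories drags the
        # whole source down into the orange range (score <= 40 here)
        return min(mean, 40)
    return mean

# The Daily Kos example from the text: 75% yellow stories, 25% orange
sample = [(55, "yellow")] * 75 + [(30, "orange")] * 25
print(overall_quality(sample))  # capped at 40: rated overall in the orange
```

Without the cap, the plain average of this sample would be 48.75, i.e. still in the yellow range; the penalty is what moves the source into the orange.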

OK, you can go now unless you just really love reading about this media analysis stuff. News nerds, proceed for more discussion about ranking the news.

As I discussed in my post entitled “The Chart, Second Edition: What Makes a News Source Good?” the most accurate and helpful way to analyze a news source is to analyze its individual stories, and the most accurate way to analyze an individual story is to analyze its individual sentences. I recently started a blog series where I rank individual stories on this chart and provide a written analysis that scores the article itself on a sentence-by-sentence basis, and separately scores the title, graphics, lede, and other visual elements. See a couple of examples here. Categorizing and ranking the news is hard to do because there are so very many factors. But I’m convinced that the most accurate way to analyze and categorize news is to look as closely at it as possible, and measure everything about it that is measurable. I think we can improve our media landscape by doing this and coming up with novel and accurate ways to rank and score the news, and then teaching others how to do the same. If you like how I analyze articles in my blog series, and have a request for a particular article, let me know in the comments. I’m interested in talking about individual articles, and what makes them good and bad, with you.

As I've been analyzing articles on an element-by-element, sentence-by-sentence basis, it became apparent to me that individual elements and sentences can be ranked or categorized in several ways, and that my chart needed some revisions for accuracy.

So far I have settled on at least three different dimensions, or metrics, upon which an individual sentence can be ranked. These are 1) the Veracity metric, 2) the Expression metric, and 3) the Fairness metric.

The primary way statements are currently evaluated in the news is on the basis of truthfulness, which is arguably the most important ranking metric. Several existing fact-checking sites, such as Politifact and Washington Post Fact Checker, use a scale to rate the veracity of statements; Politifact has six levels and Washington Post Fact Checker has four, reflecting that many statements are not entirely true or false. I score each sentence on a similar "Veracity" metric, as follows:

  • True and Complete
  • Mostly True/ True but Incomplete
  • Mixed True and False
  • Mostly False or Misleading
  • False

Since there are many reputable organizations that do this type of fact-checking work, according to well-established industry standards, (see, e.g., Poynter International Fact Checking Network), I do not replicate this work myself but rather rely on these sources for fact checking.

It is valid and important to rate articles and statements for truthfulness. But it is apparent that sentences can vary in quality in other ways. One way, which I discussed in my previous post ("The Chart, Second Edition: What Makes a News Source Good?"), is what I call an "Expression" scale of fact-to-opinion. The Expression scale I use goes like this:

  • (Presented as) Fact
  • (Presented as) Fact/Analysis (or persuasively-worded fact)
  • (Presented as) Analysis (well-supported by fact, reasonable)
  • (Presented as) Analysis/Opinion (somewhat supported by fact)
  • (Presented as) Opinion (unsupported by facts or by highly disputed facts)

In ranking stories and sentences, I believe it is important to distinguish between fact, analysis, and opinion, and to value fact-reporting as more essential to news than either analysis or opinion. Opinion isn’t necessarily bad, but it’s important to distinguish that it is not news, which is why I rank it lower on the chart than analysis or fact reporting.

Note that the ranking here includes whether something is “presented as” fact, analysis, etc. This Expression scale focuses on the syntax and intent of the sentence, but not necessarily the absolute veracity. For example, a sentence could be presented as a fact but may be completely false or completely true. It wouldn’t be accurate to characterize a false statement, presented as fact, as an “opinion.” A sentence presented as opinion is one that provides a strong conclusion, but can’t truly be verified or debunked, because it is a conclusion based on too many individual things. I’ll write more on this metric separately, but for now, I submit that it is an important one because it is a second dimension of ranking that can be applied consistently to any sentence. Also, I submit that a false or misleading statement that is presented as a fact is more damaging to a sentence’s credibility than a false or misleading statement presented as mere opinion.

The need for another metric became apparent when asking the question “what is this sentence for?” of each and every sentence. Sometimes, a sentence that is completely true and presented as fact can strike a reader as biased for some reason. There are several ways in which a sentence can be “biased,” even if true. For example, sentences that are not relevant to the current story, or not timely, or that provide a quote out of context, can strike a reader as unfair because they appear to be inserted merely for the purpose of persuasion. It is true that readers can be persuaded by any kind of fact or opinion, but it seems “fair” to use certain facts and opinions to persuade while unfair to use other kinds.

I submit that the following characteristics of sentences can make them seem unfair:

-Not relevant to present story

-Not timely

-Ad hominem (personal) attacks
-Other character attacks

-Quotes inserted to prove the truth of what the speaker is saying

-Sentences including persuasive facts but which omit facts that would tend to prove the opposite point

-Emotionally-charged adjectives

-Any fact, analysis, or opinion statement that is based on false, misleading, or highly disputed premises

This is not an exhaustive list of what makes a sentence unfair, and I suspect that the more articles I analyze, the more accurate and comprehensive I can make this list over time. I welcome feedback on what other characteristics make a sentence unfair, and I'll write more on this metric in the future. Admittedly, many of these factors have a subjective component. Some of the standards I used to make a call on whether a sentence was "fair" or "unfair" are the same ones in the Federal Rules of Evidence (i.e., the ones judges use to rule on objections in court). These rules define complex concepts such as relevance and permissible character evidence, and determine what is fair for a jury to consider in court. I have a sense that a comprehensive set of rules, similar to those for legal evidence, could be developed for journalistic fairness. For now, these initial fairness identifiers helped me detect the presence of unfair sentences in articles. I now use a "Fairness" metric in addition to the Veracity scale and the Expression scale. This metric has only two measures, and therefore requires a call to be made between:

  • Fair
  • Unfair
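One possible way to represent these three per-sentence metrics in code is sketched below. The enum names, values, and the simple aggregation function are my own illustration of the idea, not a fixed standard or the author's actual tooling.

```python
# A possible data model for sentence-level scoring on the three metrics
# described above (Veracity, Expression, Fairness).
from dataclasses import dataclass
from enum import Enum

class Veracity(Enum):
    TRUE_COMPLETE = 1   # True and Complete
    MOSTLY_TRUE = 2     # Mostly True / True but Incomplete
    MIXED = 3           # Mixed True and False
    MOSTLY_FALSE = 4    # Mostly False or Misleading
    FALSE = 5           # False

class Expression(Enum):
    FACT = 1            # (Presented as) Fact
    FACT_ANALYSIS = 2   # Fact/Analysis, or persuasively-worded fact
    ANALYSIS = 3        # Analysis, well-supported by fact
    ANALYSIS_OPINION = 4  # Analysis/Opinion, somewhat supported by fact
    OPINION = 5         # Opinion, unsupported or based on disputed facts

class Fairness(Enum):
    FAIR = 1
    UNFAIR = 2

@dataclass
class SentenceScore:
    veracity: Veracity
    expression: Expression
    fairness: Fairness

def unfair_share(scores):
    """Fraction of sentences rated unfair -- one input to an article's
    placement on the vertical quality axis."""
    return sum(1 for s in scores if s.fairness is Fairness.UNFAIR) / len(scores)

article = [
    SentenceScore(Veracity.TRUE_COMPLETE, Expression.FACT, Fairness.FAIR),
    SentenceScore(Veracity.MOSTLY_TRUE, Expression.FACT_ANALYSIS, Fairness.FAIR),
    SentenceScore(Veracity.MIXED, Expression.OPINION, Fairness.UNFAIR),
    SentenceScore(Veracity.TRUE_COMPLETE, Expression.ANALYSIS, Fairness.FAIR),
]
print(unfair_share(article))  # 0.25
```

Scoring each sentence on all three axes independently is what makes it possible to describe an article as, say, "highly factual but unfair," which a single truthfulness scale cannot express.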

By identifying the percentage of sentences that were unfair, I was able to gain an additional perspective on what an overall article was doing, which helped me create more accurate descriptions of article types on the vertical quality axis. In my previous chart (second edition), the fact-to-opinion metric was the primary basis for the vertical ranking descriptions, so it looked like this:

In using all three metrics, 1) the Veracity scale, 2) the fact-to-opinion Expression scale, and 3) the Fairness scale, I came up with what I believe are more accurate descriptions of article types, which looks like this:

As shown, the top three categories are the same, but the lower ranked categories are more specifically described than in the previous version. The new categories are “Opinion; Fair Persuasion,” “Selective or Incomplete Story; Unfair Persuasion,” “Propaganda/Contains Misleading Facts,” and “Contains Inaccurate/ Fabricated Info.” If you look at the news sources that fall into these categories, I think you’ll find that these descriptions more accurately describe many of the stories within the sources.

Thanks for reading about my media categorizing endeavors. I believe it is possible (though difficult) to categorize the news, and that doing so accurately is a worthy endeavor. In future posts and chart editions I’ll dive into other metrics I’ve been using and refining, such as those pertaining to partisanship, topic focus (e.g., story selection bias), and news source ownership.

If you would like a blank version for education purposes, here you go:

Third Edition Blank

And here is a lower-resolution version for download on mobile devices:

Posted on

Not “Fake News,” But Still Awful for Other Reasons:  Analysis of Two Examples from The Echo Chambers This Week

The term “fake news” is problematic for a number of reasons, one of which is that it is widely used to mean anything from “outright hoax” to “some information I do not like.” Therefore, I refrain from using the term to describe media sources at all.

Besides that, I refrain from discussing the term because I submit that the biggest problem in our current media landscape is not “hoax” stories that could legitimately be called “fake news.” What is far more damaging to our civic discourse are articles and stories that are mostly, or even completely, based on the truth, but which are of poor quality for other reasons.

The ways in which articles can be awful are many. Further, not all awful articles are awful in the same way. For these reasons, it is difficult to show most casual news readers how an article that is 90% true, or even 100% true, can still be biased, unfair, or deviate from respectable journalistic practice.

This post is the first in a series I plan to do in which I visually rank one or more recent articles on my chart and provide an in-depth analysis of why each particular article is ranked in that spot.  My analysis includes discussions of the headlines, graphics, other visual elements, and the article itself. I analyze each element and each sentence by asking “what is this element/sentence doing?”

This week, I break down one article from the right (from the Daily Wire, entitled “TRUMP WAS RIGHT: Gold Star Widow Releases Trump’s Call After Husband Was Killed in Afghanistan”) and one from the left (from Pink News, entitled “Bill O’Reilly caught in $32 million Fox News gay adult films scandal”).


  • From the Left: Article Ranking and Analysis of:

Source: Pink News

Author: Benjamin Butterworth

Date: October 24, 2017

Total Word Count: 706

I. Title: Bill O’Reilly caught in $32 million Fox News gay adult films scandal

Title Issues:

Misleading about underlying facts

There is no current, known scandal involving Fox News and gay adult films. Bill O’Reilly settled a $32 million sexual harassment lawsuit while employed by Fox, and one of the allegations was that he sent a woman gay porn. The title, however, suggests some major financial involvement by Fox News in gay adult films, and makes no mention of the lawsuit settlement.

                        Misleading about content of article

The article is actually about the sexual harassment settlement, with one mention of the allegation of sending gay porn, the actions of Fox News in relation to O’Reilly’s employment after the settlement, and a listing of O’Reilly’s past anti-gay statements.

                        Misleading content is sensationalist/clickbait

II. Graphics: The lead image, attached to social media postings, is this:


          Graphics Issues:

                        Misleading regarding content of article:

The image is half a gay porn scene and half Bill O’Reilly, which would lead a reader to expect that the topic of gay porn makes up a significant portion of the article, perhaps up to half.

                        Misleading content is sensationalist/clickbait

The image is salacious and relies on people’s interest in what they perceive as sexual misbehavior and/or hypocrisy of others.

                        Image is a stock photo not related to a particular fact in the article

III. Other Elements (Lead Quote): “Anti-gay former Fox News host Bill O’Reilly is caught up in a $32 million gay porn lawsuit.”

Element Issues:

Inaccurate regarding underlying facts

The $32 million lawsuit cannot be accurately characterized as being “about” gay porn. It is most accurately characterized as a sexual harassment (or related tort) lawsuit.

                        Inaccurate in relation to facts stated in article

The article itself states: “Now the New York Times (NYT) has claimed that, in January, O’Reilly agreed to pay $32 million to settle a sexual harassment lawsuit filed against him.”

Adjective describing subject of article selected for partisan effect

“Anti-gay” is used to describe Bill O’Reilly in order to signal, to the site’s pro-LGBT audience, that O’Reilly is especially despicable beyond the transgressions that are the subject of the lawsuit being reported upon.

IV. Article:


  1. Embellished Reporting (i.e., reporting the timely story, plus other stuff)

Reports the current sexual harassment settlement story, relevant related timeline of events, plus extraneous information about how O’Reilly is anti-gay

2. Promotion of Idea

                                    Idea that Bill O’Reilly is a bad person particularly because he is anti-gay

Sentence Breakdown:

                        706 total words, 28 sentences/quotes

Factual Accuracy:

% Inaccurate sentences: 0 out of 28 sentences (0%) inaccurate

% Misleading Sentences: 0 out of 28 sentences (0%) are misleading


                        Sentence Type by Fact, Analysis, and Opinion:

% Fact/ Quoted Statements: 24/28 (86%)

% Fact/Quoted Statements with adjectives: 2/28 (7%)

% Analysis Statements: 1/28 (3.6%)

% Analysis/Opinion Statements: 1/28 (3.6%)

% Opinion Statements: 0


Sentence Type by Fair/Unfair Influence:

                        % Fair: 20/28 (71%)

Sentences 1-7 and 9-21 rated as “fair” because they are factual, relevant to the current story, and timely.

% Unfair: 8/28 (29%)

Sentences 8 and 22-28 rated as “unfair” because they are untimely, unrelated to the title, and used for idea promotion
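The per-sentence bookkeeping above can be sketched as a short tally. This is an illustrative reconstruction under my own assumptions: only the counts (20 fair, 8 unfair, out of 28) come from the analysis, and rounding to whole percentages is assumed.

```python
# Per-sentence fairness labels for the 28-sentence article.
# Only the counts (20 fair, 8 unfair) come from the analysis above;
# which specific sentences carry which label is not modeled here.
labels = ["fair"] * 20 + ["unfair"] * 8

n = len(labels)
fair = labels.count("fair")
unfair = labels.count("unfair")

print(f"% Fair: {fair}/{n} ({round(100 * fair / n)}%)")        # % Fair: 20/28 (71%)
print(f"% Unfair: {unfair}/{n} ({round(100 * unfair / n)}%)")  # % Unfair: 8/28 (29%)
```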

Overall Article Quality Rating: Selective or Incomplete Story; Unfair Persuasion

Main reasons:

-29% of sentences included for an unfair purpose

-Anything with over 10% unfair persuasion sentences can be fairly rated in this category

-Title, Graphics, Lead element all extremely misleading

Overall Partisan Bias Rating: HYPER-PARTISAN (Liberal)

Main reasons:

  • Focus on pro-LGBT message even though the underlying story is only loosely related to LGBT issues


  • From the Right: Article Ranking and Analysis of:

Source: The Daily Wire

Author: Ryan Saavedra

Date: October 20, 2017

Total Word Count: 257



I. Title: TRUMP WAS RIGHT: Gold Star Widow Releases Trump’s Call After Husband Was Killed in Afghanistan

Title Issues:

Contains all caps statement of “TRUMP WAS RIGHT”

-Capitalization is sensationalist

Contains conclusory opinion statement of “TRUMP WAS RIGHT”

Directly appeals to confirmation bias with “TRUMP WAS RIGHT”

People likely to believe Trump is right in general are the most likely to click on, read, and/or share this, and are most likely to believe the contents of the article at face value

Misleading regarding the context of current events in the portion that says “Gold Star Widow Releases Trump’s Call After Husband Was Killed in Afghanistan” (see explanation under the next issue)

Omits relevant context of current events occurring between approximately Oct 16 and Oct 20, 2017, the four days preceding this article’s publication

-In the context of the controversy over a disputed phone call between Trump and a different black Gold Star widow than the one this article is about, in which the existence of a recording of the call was also disputed, the omission of the fact that this is a different black Gold Star widow who received a call from Trump is misleading. It is likely to confuse readers who are unfamiliar with the specific facts of the current controversy, such as 1) the names of the widow and soldier, Myeshia Johnson and Sgt. La David Johnson, 2) what they look like, and 3) where he was killed.


II. Graphic Elements: An accurate photo of the widow who is the subject of the story (Natasha DeAlencar) and her fallen soldier husband (Staff Sgt. Mark DeAlencar)

Graphics Issue:

Accurate photo juxtaposed with other problematic elements

-Though the photo is accurate, its position next to the title may lead readers who are uninformed as to the underlying facts to believe that this call is the one at issue in the current controversy between Myeshia Johnson and President Trump

III. Other Elements (Lead Quote): “Say hello to your children, and tell them your father, he was a great hero that I respected.”

Element Issue:

Accurate quote juxtaposed with other problematic elements

-Similar to the photo, though the quote is accurate, its position next to the title and photo may lead readers who are uninformed as to the underlying facts to believe that this particular quote was from the current controversial call between Myeshia Johnson and President Trump

IV. Article:



Here, the story of one widow’s experience

Promotion of ideas

Here, the promotion of idea that Trump is respectful and kind; promotion of idea that media is deceitful

Sentence Breakdown:

257 words; 12 sentences

Factual Accuracy:

% Inaccurate sentences: 1 out of 12 sentences (8%) inaccurate

Quote from article: “In response to a claim by a Florida congresswoman this week claiming that President Donald Trump is disrespectful to the loved ones of fallen American soldiers, an African-American Gold Star widow released a video of a phone conversation she had with the President in April about the death of her husband who was killed in Afghanistan.”

  • The widow did not release the video “in response to a claim by a Florida congresswoman.” She released it in response to inquiries from reporters in the wake of the controversy between Myeshia Johnson and Trump.[1]


  • The congresswoman, Frederica Wilson, did not say generally that Trump is disrespectful to the loved ones of fallen American soldiers. She said to a local Miami news station, about Trump’s particular comments to Myeshia Johnson, “Yeah, he said that. So insensitive. He should not have said that.”[2] All recent instances of her talking about the President’s conduct are in the context of this incident.[3]

% Misleading Sentences: 1 out of 12 sentences (8%) are misleading

Quote from article: “The video comes a day after White House Chief of Staff John Kelly gave an emotional speech during the White House press briefing on how disgusting it was that the media would intentionally distort the words of the President to attack him over the death of a fallen American hero.”


This quote makes it sound like the media took the words from this call in the video and distorted them to attack the President. The words that are the subject of the controversy in the current Johnson call are not quoted in this article at all. This sentence uses a strong adjective—“disgusting”—to describe an action, and the context of this sentence may lead readers to think the “disgusting” action was the media taking these kind words and reporting different, false, insensitive words.

Sentence Type by Fact, Analysis, and Opinion:

% Fact/ Quoted Statements: 9/12 (75%)

% Fact/Quoted Statements with adjectives: 3/12 (25%)

% Analysis Statements: 0

% Analysis/Opinion Statements: 0

% Opinion Statements: 0


Sentence Type by Fair/Unfair Influence:

% Fair: 10/12 (83%)

Sentences 2-11 rated as “fair” because they are factual and relevant to the underlying story

% Unfair: 2/12 (17%)

Sentence 1 rated as “unfair” because inaccurate statements are generally used unfairly for persuasion

Sentence 12 rated as “unfair” because misleading statements are generally used unfairly for persuasion

Overall Article Quality Rating: Propaganda/Contains Misleading Facts

Main reasons:

-Anything with over 0% inaccurate sentences is automatically rated at least this low

-Anything with over 2% misleading sentences is automatically rated at least this low

-Title, Graphics, Lead element all misleading

Overall Partisan Bias Rating: HYPER-PARTISAN (Conservative)

Main reasons:

  • Opinion statement in title
  • Misleading and inaccurate statements used for purpose of promoting partisan ideas
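The rating floors applied in these two analyses (any inaccurate sentences or over 2% misleading sentences forces a rating at least as low as “Propaganda/Contains Misleading Facts”; over 10% unfair sentences forces a rating at least as low as “Selective or Incomplete Story; Unfair Persuasion”) can be sketched as a simple decision rule. This is my own illustrative reconstruction, not the actual Ad Fontes scoring procedure; the function name and the ordering of the checks are assumptions.

```python
def quality_floor(pct_inaccurate, pct_misleading, pct_unfair):
    """Illustrative sketch of the category floors applied in the two
    analyses above; not the author's actual scoring procedure."""
    # Any inaccurate sentence, or >2% misleading sentences, caps the
    # article at the lowest rungs of the quality axis.
    if pct_inaccurate > 0 or pct_misleading > 2:
        return "Propaganda/Contains Misleading Facts (or lower)"
    # More than 10% unfair-persuasion sentences caps it one rung higher.
    if pct_unfair > 10:
        return "Selective or Incomplete Story; Unfair Persuasion"
    # Otherwise the article is eligible for the higher categories,
    # which this sketch does not distinguish.
    return "Opinion; Fair Persuasion or higher"

print(quality_floor(0, 0, 29))  # Pink News article
print(quality_floor(8, 8, 16))  # Daily Wire article
```

Applied to the two articles above, the Pink News piece (0% inaccurate, 0% misleading, 29% unfair) lands at “Selective or Incomplete Story; Unfair Persuasion,” while the Daily Wire piece (8% inaccurate, 8% misleading) lands at the “Propaganda/Contains Misleading Facts” floor.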




[3] The author of this analysis is unaware of any general statement by Rep. Wilson that “Trump is disrespectful to the loved ones of fallen American soldiers,” but will revise this analysis if such quotes are brought to the author’s attention.