
Junk Food and Junk News: The Case for “Information Fitness”

We, as humans, have basic needs for several things. One of them is food. Another is information. We are always, by necessity and want, taking in both. It’s fair to say we even love both food and information. But we also, as humans, have a propensity for indulging in too much of a good thing to the point that we turn it into a bad thing. It’s easy for us to develop bad habits around any of our basic needs. This is especially true when indulging provides some kind of instant gratification but long-term damage.

I submit that generally, our American habits around food consumption are highly analogous to the habits we have around news and information consumption. Similarly, the resulting problems we have because of those habits are highly analogous. [1] We love junk food and we love junk news. And they are both wreaking havoc on our individual and collective physical and mental health, and having detrimental effects on our whole society.

It first occurred to me how analogous food consumption and information consumption habits are quite recently, as I have been learning about the few other burgeoning attempts to rate the news. Several of these endeavors say (as do I) that they are trying to create a “nutrition label” for what is in your news. That just makes sense. In many instances, we are simply unaware of what kind of content we are consuming in our news. Is it good, true, biased, opinion-based, analysis-based, or reliable? And how much of each of those things is it? Right now, there is no standard nutrition label that tells us what is in our news before we consume it, just as there once wasn’t one for our food. I assert that we should at least have some idea of what we are getting into before putting it into our brains.

Availability and Proliferation

Our current media landscape is analogous to the American food landscape during the proliferation of fast-food restaurants and highly processed foods. Note that I am distinguishing between “food” generally and certain subsets of food—unhealthy fast food and processed food, which are characterized by poor nutritional content and low cost. Though food, and even certain types of fast food and processed food, were around before the 1950s and 1960s, fast-food restaurants exploded in popularity and availability during those years, and have continued to grow since. Advances in food science and technology have led to an abundance of available varieties of processed food in grocery stores. There are certainly benefits to fast food; it’s convenient, inexpensive, and tastes good, which lets people spend less time and money feeding themselves and their families. There are certainly benefits to processed food too; it lasts a long time on shelves, is less prone to food-borne contaminants, is inexpensive, comes in all different varieties, and tastes good. However, since the 1950s, we have become acutely aware of the drawbacks of both unhealthy fast food and processed food; namely, that a lot of it has too much stuff in it that is just bad for you, like fat, sugar, and salt.

The last 20 years (when 24-hour cable news took off), the last 10 years (when smartphones became widely available), and the last 8 or so years (when social media really proliferated) have marked a similar explosion in the availability of all types of news and news-like information. Here, I am distinguishing between “news” and “news-like information,” the latter being characterized by high levels of opinion and analysis and low levels of editorial review. In the realm of “news-like information,” we now have choices of multiple 24-hour cable news channels, thousands of online news sites and blogs, thousands of YouTube channels, and the constant promotion of all of it to us through social media. There are certainly many benefits to this new era of information availability. Personally, I’m optimistic that more information availability to more people can and will lead to increased peace and prosperity in the world eventually, as it has time and time again throughout history. On a basic level, more people are able to know more things than ever before. However, we are now becoming acutely aware of the drawbacks of too much news-like information; namely, that a lot of it has too much stuff that is just bad for you, like misinformation and bias.

Bad Habits and Monetization

The reasons we are drawn to fast/processed (and generally “unhealthy”) food and opinionated and biased news-like (and generally “unhealthy”) information are similar.

We like fat, sugar, and salt because they taste good and because parts of our brains derive pleasure and reward from eating them. This is a feature, not a bug, of how our brains work; we’re naturally drawn to eat good-tasting, high-calorie food for sustenance and survival. We also consciously know that we need to eat these foods in moderation, and that we need to eat stuff that doesn’t taste as good, like vegetables, because we have the intelligence and capacity to learn this from our own and others’ experiences.

We like opinionated and biased news-like information because being right feels good. Our brains are wired for confirmation bias, which is being more open to receiving information that comports with what you already believe. This is, again, a feature, not a bug, of how our brains work; it makes it easier for us to make sense of the world around us. We also consciously know that we should regularly seek out new information, including information that challenges our existing beliefs.

However, it’s easy to over-consume unhealthy food and unhealthy news in part because each provides instant gratification, and the drawbacks are not immediately evident. The drawbacks, if any, come from long-term, sustained unhealthy consumption, not from one-time, or infrequent unhealthy consumption.

It’s even easier because those who produce food and information are well aware of our desires and are monetarily incentivized to exploit them.

Food companies, of course, make more money when people buy more food, especially when that food is cheap to make. Unfortunately, it is easy to make food that is cheap, delicious, and terrible for you. Even worse, making it that way often increases both its addictive qualities and the maker’s profit margin. Though segments of the food industry have embraced healthy food and built successful businesses around it in the last couple of decades, many other segments have not. These segments, namely the fast-food and processed-food industries, have created, and continue to create and aggressively market, unhealthy food, especially to the people most vulnerable to such marketing.

Media companies, of course, make more money when they attract a larger audience. Media companies have always relied on both subscription and ad revenue, but news production and distribution used to be limited to large organizations who had invested significant resources in journalists and print, TV, or radio distribution. But now, because of technology, there are thousands more sources available, and each is incentivized to monetize their source by driving clicks and views. Unfortunately, highly biased, opinionated, low-quality, and “clickbait” headlines and content drive revenue even more easily than high-quality, least-biased headlines and content. Not only have newer sites of questionable reputability supplied plenty of low-quality, highly-biased headlines and content, but their proliferation has, unfortunately, caused historically reputable outlets to start providing some lower-quality, highly-biased content just to compete for audience share.

The result of combining 1) our predispositions toward unhealthy food/info consumption and 2) the monetary incentives for food/info companies to exploit them is a vicious cycle in which many well-meaning consumers fall into patterns of more and more unhealthy consumption.

We’ve come to the collective realization as a society that the consequence of unhealthy food consumption is an obesity epidemic. I submit that the consequence of unhealthy information consumption is an extreme polarization epidemic. We are polarized because so many of us are consuming such high quantities of low-quality, highly-biased information.

One main difference between food and information in this analogy is that the causal links between unhealthy food consumption and poor health effects have been studied and are now somewhat well known. For example, we know that diets too high in fat, sugar, and/or salt, are linked with heart disease, diabetes, high blood pressure, and a host of other illnesses.

In contrast, we are only now starting to study and realize the detrimental effects of overconsumption of low-quality and highly-biased information. Many of us intuitively attribute our increased political polarization to this cause, but it is also highly possible that other detrimental effects are due to this overconsumption. For example, people’s personal levels of anger, isolation, radicalization, or bad decision-making, I suspect, may be attributable to overconsumption of unhealthy information. Many people are blissfully unaware that they are suffering any ill effects from what they read and watch, even though they are consuming intellectual equivalents of donuts and fries at every sitting.

What To Do About It

I don’t mean to blame or shame anyone who struggles with unhealthy food habits and the resulting health outcomes. So many aspects of our society are set up to have people fail—everything from work schedules, to cost, to availability of options, makes having healthy eating habits hard.

I also don’t mean to blame or shame anyone who only consumes low-quality, highly-biased information and as a result, lives in a polarized, ideological silo. Social media has amplified the reach of unhealthy information, made it easy to only consume those articles and shows, and exacerbated our polarization problem.

The most important thing to focus on is not what or whom to blame for these causes and epidemics, but what we can do to try to address and fix some of them. The consequences are extremely damaging. It is imperative that we try to find solutions.

The bright side of the food/information analogy is that for the unhealthy food problem, we have created and implemented some effective solutions. Our existing solutions are by no means complete—we still have a lot more work to do—but we’ve made progress.

One of the first steps in addressing unhealthy food consumption problems was making knowledge available to consumers. Nutrition fact labels, as we know them today, were only mandated as recently as 1994, and they continue to be refined as we learn more about nutrition.

Currently, there is no equivalent “nutrition label” for information, though I and others are attempting to create this equivalent—something reliable and widely recognized as reputable that tells people what is in their media content. There are a number of challenges to doing this, including the fact that trying to tell people what is “good” or “unbiased” is controversial; I discuss those challenges in other posts.

Another challenge is that knowledge alone (i.e., nutrition labels alone) doesn’t completely solve the unhealthy food problem and obesity epidemic. So, will media rating labels alone solve the unhealthy info problem and polarization epidemic completely? Almost certainly not. But can they make a difference, and should we try? Almost certainly yes.

We have seen a model for addressing our problems with unhealthy food arise in the form of increased knowledge and awareness campaigns and the rise of an entire health and fitness industry. Public, private, and non-profit organizations have implemented studies, projects, and other efforts to let people know that they should eat their vegetables and lean proteins, eat in moderate quantities, and limit fast, processed, and other foods with excess fat, sugar, and salt. Nutrition labels were an important part of spreading new knowledge about healthy eating. Many, many food companies and restaurants created and responded to new consumer demand for healthier options: whole grain, whole wheat, low-fat, low-cholesterol, low-salt, low-sugar, low-calorie, organic, high-protein, and many other types of improved food choices are now available, and they continue to become more accessible and popular. Certainly, there are still challenges to healthy eating, and many people fail to eat healthily. But now, people are armed with more knowledge and choices, and entire segments of the population use those to eat in perfectly healthy ways. More people than ever before can now realistically choose to eat healthy as a long-term, sustainable lifestyle, even in the face of unhealthy food availability.

We are seeing the beginning of a similar concerted, societal response to unhealthy information diets. There are academic institutions studying the issue and journalism organizations trying to both create and respond to consumer demand for healthy (good quality, minimally-biased) information. You can see these efforts springing forth in the form of fact-checking sites and segments and new independent, subscription-driven journalism outlets.

However, our efforts lag far behind the need to solve the unhealthy information problem. I assert that the problem isn’t even fully realized and identified yet. Namely, we have largely identified the fact that outright “fake” news (lies, deliberate misinformation) is a problem, but do not realize the damage that is being done by news that isn’t completely false, but is highly opinionated and biased. I assert that it is a lurking, unidentified, unrealized problem, like the role of carbohydrates and sugars as dietary culprits used to be. Experts thought that just eating fat was bad—that fat made you fat. Low-fat food became all the rage, but food makers added sugar, which turned out to be worse.

We have seen internet giants Facebook, Twitter, and YouTube try to crack down on fake news (i.e., “fat”), without giving a thought to the role that highly opinionated and biased news plays in polarization. That stuff continues to be so widely distributed, I think, in part because it is highly profitable, but also because these companies don’t think it is unhealthy in the first place.

Exercise and Information Fitness

There is another dimension to this analogy. One part of the solution to unhealthy food is to not do something; namely, to not eat unhealthy food and choose healthy food instead. Essentially, you can be healthy just by controlling your diet.

But there is another positive, related thing you can do to combat unhealthy eating problems—exercise! Exercise actively combats the adverse effects of any past or current unhealthy food habits, and has a whole host of other health benefits. There are lots of ways to exercise—you can play sports, run, do yoga, weight train, join group classes—and all of these things help overall health.

The analogous “exercise” people can do to combat a bad information diet is, I submit, participate in civic engagement, which we can call “civic exercise.” There are many different types of civic exercise you can engage in that provide psychic and emotional benefits. I assert that these include having face-to-face conversations with your political opponents, working on your own business or projects that you are passionate about, learning facts about law and government, voting, volunteering for political causes and campaigns, attending town halls, donating to causes you care about, and calling your elected representatives. Civic exercise can take the form of things that actually make a difference in democracy or relationships with your fellow citizens, whether those things are big or small. While consuming lots of unhealthy information can make you feel angry, sad, and powerless, engaging in civic exercise can make you feel powerful, give you a sense of purpose and meaning, and create the feeling that you are making a difference, because you actually are.

The big question facing our society right now is how to address this polarization epidemic caused by our unhealthy information diet. I propose that we do everything we can to promote lifestyles of “information fitness.” Information fitness should be a thing. The fields of “media literacy” and “information literacy”—also known as “InfoLit”—have existed for years, but I believe we need to transform this concept so that people are not just competent to navigate the information landscape (i.e., not just “literate”), but can actually thrive in it. That is, we need to create opportunities for people to be Info Fit.

Info Fitness doesn’t really exist as a concept or industry right now, but the fitness industry didn’t always exist either. For the longest time, we knew very little about diet and exercise, but as our unhealthy diet and exercise problems came into view, we started to figure out what to do about them.

Back in the day, fitness pioneers such as Jack LaLanne and Jane Fonda introduced millions of Americans to new, structured forms of exercise. Today, there are millions of resources for helping people eat well, and millions of new opportunities for people to get fit through exercise. Even as recently as a generation ago, people didn’t have nearly as many healthy food or exercise options and resources. Today, people can choose to eat gluten-free, dairy-free, organic, high-protein, vegetarian or vegan, paleo, macronutrient-balanced, and other health-focused diets with the help of the internet, books, plans, and grocery stores. They can choose to run, take Zumba® classes, do CrossFit®, join Orangetheory®, do yoga, do pilates, play team sports in adult and senior leagues, swim, bike, race in endurance events, or do home workout videos like P90X® or Insanity®. This entire health and fitness industry sprang up in the last couple of decades and is making a difference in the face of a health epidemic caused by too much unhealthy food consumption.

We can and should create opportunities for people to live a lifestyle of Information Fitness if they so choose, to combat this polarization epidemic caused by too much unhealthy information consumption.

What would Information Fitness look like? It would start with information nutrition labels, as we discussed before, so people could be aware of what they were consuming before deciding whether they should spend their precious attention on it. If you eat healthy, you know you only have a limited number of calories to eat every day, and you must make choices about whether a particular food item is worth it. Similarly, we have limited time and attention we can dedicate to consuming news. We should actively decide whether what we are about to read or watch is worth it.

I don’t think we need to define a “perfect” model for information diet and civic exercise, but I think we can identify some of the big things that are way out of balance in many people’s information diets. If you look back at the chart, you’ll see it goes from fact-based reporting at the top, to analysis in the middle, to opinion below that, and outright misinformation below that. In this analogy, I submit that analysis content is like carbs in many ways. The vast majority of what is available for us to consume consists of various sources of carbs (analysis). Now, you definitely need carbs (analysis) in your diet—they are important. But what you need is moderate portions of high-quality carbs (analysis), like whole grain bread (or an article from the Economist). However, most of us are consuming vast quantities of white bread, mashed potatoes, and cereal (like watching a ton of CNN and reading all your favorite partisan online sites every day). Too much of this, I assert, is unhealthy.

What should we probably consume more of instead? I submit that your least-biased, most fact-based articles are like your vegetables and lean proteins, so we should probably focus on getting more of those, with, as I mentioned, a moderate portion of high-quality analysis (carbs). Most of MSNBC and Fox News is donuts and fries. They are OK in small quantities every once in a while, but for the love of God, don’t sit and consume them all day.

Twitter is candy. Each take is little, gratifying, and addictive. And if you have too much of it you feel sick. We should probably limit our Twitter to small doses.

In addition to cleaning up our information diets, info fitness as a lifestyle will require structures, tools, and resources for people to engage in civic exercise. I see these things sprouting up everywhere in the form of community projects, companies, and education initiatives, and they are inspiring. I believe the next generation will have the capability to engage in civic exercise in ways we currently don’t.

With a combination of 1) knowledge of what we are consuming, 2) choosing to consume mostly healthy information, and 3) engaging in civic exercise, people can become info fit, and fight against this extreme polarization epidemic. I submit that if more people choose to become info fit, we can make a difference in our politics and in our personal relationships with our fellow citizens. I’m working on doing what I can to help people become more info fit, and I hope you’ll do the same.

[1] If you have followed my writing for a while, you know I try not to generalize (because all generalizations are false, including this one). You know I am also highly critical of analogies, because you can always find differences that disprove your analogy. However, generalizations and analogies are useful and necessary rhetorical tools, and this one was compelling to me. It was compelling because we can use the models and solutions we have found, so far, to unhealthy eating problems to create models and solutions for unhealthy information consumption problems.


Part 2 of 4: Why Measuring Political Bias is So Hard, and How We Can Do It Anyway: The Media Bias Chart Horizontal Axis

How to Define Political Bias in a Meaningful, Useful Way

In part one of this series I laid out some problems with existing ways of measuring bias and outlined a proposed new methodology for rating such bias in news sources within a defined taxonomy (the horizontal axis of the Media Bias Chart).

In this post, I’ll first define what the terms “partisanship” and “political bias” mean in this taxonomy (the two are used somewhat interchangeably here, though they are distinguishable in some respects). More specifically, I’ll define what the concepts of “liberal,” “neutral/center,” and “conservative” mean within the scope of this chart, and the reasoning behind those definitions. Then, I’ll discuss what the horizontal categories on the chart represent.

For clarity, let’s go one step further back and specify what “political/partisan” bias even means. Here, I refer to the preference for policy positions that individual people can hold on particular topics that are subject to legislation by government. I am not referring to individual people themselves as left- or right-biased. In other words, the definitions are topic-focused, not people-focused. For example, I will define policy positions, such as “taxes should be higher/the same/lower on wealthy people,” as liberal/centrist/conservative, rather than define individual people, like journalists or politicians, as themselves being liberal/centrist/conservative.

The question of what “liberal,” “centrist,” and “conservative” (hereinafter simply “liberal/conservative” or “left/right”) policy positions are is difficult to answer because 1) what is considered liberal or conservative is a moving target over time, 2) there isn’t necessarily a “center” on each topic, and 3) some people will always disagree with the definitions I or anyone else may come up with.

As an initial matter, many people object to trying to confine partisanship to a left-right axis, arguing that there are other dimensions, such as establishment-populist or freedom-regulation. Those who insist that these dimensions exist and should be accounted for tend to be libertarians and/or people who feel their political positions are too nuanced to be captured by a simple left/right dimension. However, several forces, including our country’s two-party system, tend to flatten those other dimensions into the liberal-conservative dimension that most Americans easily recognize. As Steven Pinker states in his book The Blank Slate, “while many things in life are arranged along a continuum, decisions must often be binary.” For more on this concept, see Pinker’s book or Maxwell Stearns’s writing on political dimensionality. Therefore, I will stick with the liberal/conservative dimension, because it covers most bias issues, and because this is a visual two-dimensional chart. A visual chart cannot convey analysis the way written words can, so if you find yourself getting upset about a nuanced idea that is not depicted on the chart, try to remember that this is a chart, and one of the reasons it has reached so many people is that it is a picture that necessarily simplifies some concepts. Don’t worry, someone has probably already written an excellent and nuanced article about your point.

Regarding the question of which locations on the chart correspond to particular liberal and conservative positions, the answer is tricky because the horizontal dimension actually represents two distinct bias concepts: 1) political position bias and 2) linguistic expression bias. Political position bias refers to the “rightness” or “leftness”—the extremism—of a particular political position itself. For example, an article that portrays an extreme right-wing position, such as white nationalism, favorably, even if the portrayal is only mildly favorable, would be ranked far to the right. Linguistic expression bias represents the degree to which an article or source promotes a political position through linguistic rhetoric, even if the political position itself is not extreme. For example, if an article uses extreme language and hyperbole to promote the idea that climate change is caused by humans, which is not an extreme position in and of itself, the article would be ranked far to the left.
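Since these two components can point in different directions for the same article, it may help to see how they could combine in code. The following Python sketch is purely illustrative: the -10 to +10 scale, the rule that the more extreme component drives placement, and the exact column boundaries are all my own assumptions for the example, not the chart’s actual methodology.

```python
# Purely illustrative sketch: the scale, the "more extreme component wins"
# rule, and the column boundaries are hypothetical assumptions, not the
# Media Bias Chart's real methodology.

def horizontal_score(position_bias: float, expression_bias: float) -> float:
    """Combine the two bias components into one left-right score.

    Both inputs range from -10 (extreme left) to +10 (extreme right).
    Whichever component is more extreme determines placement, so an
    article with a moderate position but hyperbolic rhetoric (or a
    mildly favorable take on an extreme position) still lands far
    from center.
    """
    if abs(position_bias) >= abs(expression_bias):
        return position_bias
    return expression_bias

def column(score: float) -> str:
    """Map a combined score onto seven horizontal columns."""
    bounds = [
        (-10, -7, "most extreme left"),
        (-7, -4, "hyper-partisan left"),
        (-4, -1, "skews liberal"),
        (-1, 1, "neutral/balance"),
        (1, 4, "skews conservative"),
        (4, 7, "hyper-partisan right"),
        (7, 10, "most extreme right"),
    ]
    for low, high, name in bounds:
        if low <= score <= high:
            return name  # first matching band wins at shared edges
    raise ValueError("score out of range")

# A mildly favorable take on an extreme right-wing position ranks far right:
print(column(horizontal_score(9.0, 2.0)))    # most extreme right
# Hyperbolic rhetoric around a non-extreme position still ranks well left:
print(column(horizontal_score(-2.0, -6.0)))  # hyper-partisan left
```

The design point the sketch captures is that either component alone can push an article outward; averaging the two would understate an extreme component, which is exactly what the two-concept definition above is meant to avoid.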

Although the questions of what constitutes bias are hard, I believe it is worthwhile and possible to come up with definitions for the horizontal categories, thereby creating a taxonomy on which reasonable people of differing political beliefs can find agreement. I think it is also worthwhile to create a methodology for ranking within the taxonomy, so reasonable people of differing political beliefs can rank the same sources and come up with similar results.

Several commentators on this blog have brought up a concept called the Overton Window, which refers to what constitutes acceptable political discourse during a particular time, and which inherently recognizes that the window shifts over time. The left-right dimension of this chart attempts to capture the range of political discourse (not just the acceptable portion) in our media at the present time. This is hard to capture, but I believe we can do this if we account for enough inputs. What I specifically refer to as “inputs” are the communications that exist throughout our political system from three groups of people, namely, elected officials, journalists, and citizens.

The communications that emanate from each of these groups are important and influential in different ways. The communications of elected officials are of course important because they have the actual power to change laws. The communications of journalists are important because their platforms give them influence over how citizens see political events. The communications of citizens are important because they are the most numerous, and because they have collective power over what the elected officials (by voting) and journalists (by reading and watching) say.

It is not always obvious whose political views (of these three groups) influence the politics of our overall society. For example, many wonder: is it the media that influences citizens and elected officials? Or do citizens influence the media and elected officials? Or do elected officials influence citizens and the media? I submit that each of these groups (which overlap, of course) influences the others in varying degrees at different times, and that push and pull of influence is what causes definitions of “left” and “right” to evolve over time. For example, one can argue that the civil rights movement was driven first by citizens who influenced the media and politicians. One can argue that Fox News influenced some citizens and politicians to become more conservative on a range of issues over the last 20 years. One can argue that Obama’s endorsement of same-sex marriage influenced many citizens and media outlets to accept it. One can argue that Trump influenced citizens and media to tolerate lower standards of behavior and experience. Each of these instances is an example of how the influence of various actors moves the political spectrum over time.

The Media Bias Chart takes into account the communications from each of these groups in different ways. As I’ll describe here, the categories themselves are largely defined by the communications of elected officials themselves. The placement of the media sources within the categories reflects the communications of the media about the elected officials and the citizens.

There are seven columns on the chart: most extreme (left), hyper-partisan (left), skews liberal, neutral/balance, skews conservative, hyper-partisan (right), and most extreme (right).

Before defining what these seven columns represent, I’ll start by defining the scope of the chart. The versions I am creating now strictly refer to United States political partisanship. Maybe eventually this project can include international versions, but that’s beyond the scope right now. To the extent sources from other countries are included on this chart (e.g., BBC, The Guardian, The Economist, Daily Mail, etc.), their political bias rating is only with respect to their treatment of US political stories. That is, a BBC story about a British political issue would not be included in the evaluation of the BBC on this chart. This is because a large basis for the evaluation of political bias is comparison between sources.

In order to compare the bias of sources, one has to look at multiple sources writing about the same or similar topics. Further, one has to know what comprises the political spectrum from the extremes to neutral, and the linguistic rhetoric commonly used in those countries to categorize those topics, in order to then categorize those stories.  It is highly inaccurate, if not impossible, to rate the political bias of, for example, a single story from the BBC about British politics against the US political and media landscape.

To illustrate how our impression of bias is largely dependent on comparison, I invite you to do an exercise: look up an article on Al-Jazeera, RT, BBC, or CBC—one about a topic local to the region or country from which they report. Be sure to select an article about a news topic that you are totally unfamiliar with. Chances are the article will strike you as politically neutral because you are unable to place the political issue within a spectrum of political positions on it. To the extent you are able to detect bias, it is likely due to linguistic indicators, or some reliance on comparison to your knowledge of related US political issues. It is likely your assessment of an article as unbiased would be at odds with a politically astute resident of the country about which you are reading.

Turning back to defining the categories on this chart, precisely defining what comprises the center (of our political system and of the chart) is the hardest, but I think that something approximating a consensus “center” can best be found by defining the easier-to-recognize outer left and right and then working inward towards the middle. I assert that it is easier to define the sides of an issue rather than the center of it for several reasons. One of those reasons is that some issues do not have a center. Another is because politicians tend to advocate for positions of a particular side, not for positions that are in the center. The “center” or a “compromise” is typically a result of a negotiation between two sides, and is not what politicians typically run on.

Similarly, it is easiest to detect bias in a media source the more extreme or egregious the bias is, and harder the more nuanced or unintentional it is. When it comes to articles and sources listed toward the center of the chart, or which just slightly “skew liberal” or “skew conservative,” there is room for reasonable minds to disagree as to exactly how biased these are. Whether a particular observer views these articles as skewing slightly one way or another is largely dependent on the observer’s own political leaning. These articles and sources may contain only nuanced bias, reflected in the choice of one particular term over another, or in the emphasis of certain facts at the beginning of an article and others at the end.

Referring briefly back to the vertical columns of the chart:

Most journalists at reputable sources, when writing fact-based stories (i.e., those ranked in the top two vertical rows of the chart, as opposed to purposeful analysis or opinion listed in the rows below), attempt to write stories that are as unbiased as possible. But since they are human, they inherently have some political bias, which may manifest subconsciously in their writing.

I submit that the absolute placement of an article or source in the middle three columns of the chart (skews liberal/neutral-balance/skews conservative) is not as important as whether an article or source falls within those middle columns or within the outer columns (hyper-partisan and most extreme). In other words, it is important for all media consumers to recognize when a source is egregiously biased. Though I stated earlier that it is easier to detect egregious bias, the state of our politics indicates that too many people are still unable to detect it. If a moderately conservative person finds a particular article in Time Magazine skews liberal, and a moderately liberal person thinks the same article skews conservative, that disagreement can generate interesting and healthy debate, but those debates are not the dangerous ones creating extremism and damaging polarization. Those two people are reading something fundamentally reputable in the first place and engaging in productive political discourse. The worst problems with our media environment arise when people follow the hyper-partisan and most extremely biased sources and don’t even recognize that they are biased.

What Comprises the “Politics” that we are discussing, anyway?

Again, we need more baselines and definitions, because the term “politics” is broad. For our purposes here, we can define U.S. politics generally as things our elected officials have the ability to influence. And a good way to determine what our politicians think they can influence is by looking at the topics they solicit feedback on and discuss when they run for office. These often represent topics they take up in legislation when in office.

To generate a useful list of these topics, I took a look at the contact forms for each of my Senators from the state of Colorado, Michael Bennet (D) and Cory Gardner (R). Fortunately (I think), our state happens to have two fairly moderate members of their respective parties representing us in the Senate. On each of their e-mail contact forms, there is a long drop-down list of political issues about which you can contact them. The lists are long, with over 35 issues on each, and there are negligible differences in how the two categorize them.

On some political issues, there is a fairly wide left-right political divide, and on others, there tends to be more consensus. Because this chart measures political bias, it is most helpful to identify the topics on those lists for which a discernible political divide exists. Some topics are so widely agreed upon that their political significance is negligible, and their inclusion in an article would not tell you much about the source’s political bias; for example, an article about a discovery on Jupiter would be fairly non-political. Those tend to go in the middle column absent other factors (e.g., if the article were about how cuts in NASA’s budget are limiting discoveries on Jupiter, and that this is a bad thing, it would be more political because it touches another political topic: budgets). For the purposes of ranking bias on this chart, it is most useful to identify which political topics have a wide left-to-right spectrum of positions.

From the political topics on my Senators’ lists, I consolidated and selected the ones with the most discernible political spectrums. These are:


Campaign Finance

Civil Rights

Higher Education

K-12 Education

Food stamps/Welfare

Gun Control

Health Care

Social Security

Foreign Policy

When I refer to parties’ and politicians’ “positions,” I generally mean positions about these topics listed. Some of these are more polarizing than others, meaning the existing “extremes” are further apart than on other issues. For example, I submit that abortion and gun control are more polarizing than higher education.

In order to categorize how left, right, or center a position is, I used the positions of current elected officials as proxies, as explained in further detail below. I then created a table of positions for each topic that fall into each of the categories. Because this is an evolving project and these positions change over time (sometimes significantly within a short period), I currently have this table only in paper-and-pencil form, but I plan on converting it to an electronic format eventually. Here are some hard-to-see pictures of it:

Because the political spectrum changes over time, these positions should be re-evaluated and updated fairly frequently: every six months, for example.

Having defined various political positions as falling within particular categories, one can then methodically use the advocacy or favorable treatment of these positions in an article to rank them in those corresponding categories.

I submit it is possible to separate the concepts of political extremism (as measured horizontally on the chart) from quality (as measured vertically) to a certain extent, and for certain political issues. In other words, more extreme political positions do not always have to correlate with low quality. The distribution of sources on the chart appears to indicate that the more extreme a position is, the lower the quality of the article or source, but that is not necessarily due to the extremism of the position itself. Consider the columns of “hyper-partisan” liberal and conservative. Notice that many sources fall completely or partially within these columns, all the way from “fact reporting” down to “contains inaccurate/fabricated info.”

The reporting of certain facts can itself create a compelling case for an idea that may be considered politically extreme at the time of its reporting. For example, at a time when adoption of children by gay couples was largely banned by law, an article reporting a study finding that children raised by gay couples turn out just as happy and well-adjusted as those raised by straight couples would appear to take a very liberal policy position. Therefore, it is possible for “fact reporting” articles to fall in the “hyper-partisan” category.

Similarly, even strongly hyper-partisan positions can be supported with analysis and opinion arguments of varying quality. For example, arguments that are strong, compelling, made in good faith, based on valid moral concerns, and which do not omit relevant facts from the other side can be made for even somewhat radical economic and social concepts, like libertarianism and socialism. However, worse arguments can be made for these things as well, and those quality-lowering factors are what drag articles or sources down the chart. I submit it is even possible (but rare) to write high-quality stories and arguments about even more highly polarized topics, such as abortion. That is, you could have a high-quality complex analysis article that advocates for an extreme position on abortion (e.g., no abortion or birth control on the right, or publicly funded abortion and birth control on the left) that would fall at the top of the “complex analysis” row and right on the right-most or left-most “hyper-partisan” line. However, the nature of very extreme positions is that they tend to be extreme precisely because they ignore some realities and/or valid concerns of the other side. The more extreme the position, the more likely it is to rely on ignored or omitted facts, and the more untenable it is for an elected official to hold. Therefore, the positions that are too extreme for any politician to hold (in the “most extreme” liberal/conservative columns) all fall in the lowest quality categories due to their misleading and inaccurate natures.


What Comprises Linguistic Expression Bias?

As previously discussed, the horizontal categories also represent levels of linguistic partisanship; that is, I propose that the use of certain words in certain contexts can indicate levels of bias. I refer to these as simply “biased words” herein. They comprise words in the following four categories: 1) words with political connotations connecting them to certain parties or positions, 2) adjectives that don’t necessarily have a political connotation themselves, but when used to describe a political actor, party, or position, indicate political bias, 3) insults and pejoratives commonly used to describe certain political opponents, and 4) bogeymen.

The first category of biased words refers to the preferred terminology about a political position or political topic by one side or the other. These include characterizations of positions like being for/against abortion as “pro-life” or “pro-choice,” or referring to certain immigrants as “illegal aliens” or “undocumented immigrants.”  These kinds of words can correlate with quality as well, because certain ones are used as insults or in a derogatory manner, which necessarily fall into the category of “unfair persuasion” on the quality scale.

The second category of biased words refers to adjectives used for ad hominem (personal) attacks on politicians. For example, if an article applies the words “ugly” or “stupid” to politicians, those words are biased words. Such words also correlate with low quality because they are unnecessarily mean, and therefore fall into the category of “unfair persuasion.”

The third category of words includes specific insults and pejorative terms that have inherent contemporary political connotations. Examples include “deplorables,” “snowflakes,” “leftists,” and “the mainstream media.”

The fourth category of words—bogeymen—refers to people or groups that may or may not exist, but whose names are invoked by politicians or media figures to incite fear, anger, or loathing among their constituents or audience. These may be real people or groups that have committed bad acts, or acts perceived as bad by their political opponents. However, they evolve into “bogeymen” terms when they become used as abstractions of these acts, thereby transforming into a sort of common enemy. Examples include “the Muslim Brotherhood,” “the 1%,” “the Deep State,” and “Big Pharma.”
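To make the four categories concrete, here is a minimal Python sketch of how such a biased-word table might be represented and scanned. The entries are a small hypothetical sample drawn from the examples above; a real table would be far larger, and real rating would also need the context handling discussed elsewhere in this series:

```python
# Hypothetical biased-word table: category -> example terms, drawn
# from the examples in the text. A real table would be far larger.
BIASED_WORDS = {
    "political_connotation": {"pro-life", "pro-choice", "illegal aliens",
                              "undocumented immigrants"},
    "ad_hominem_adjective": {"ugly", "stupid"},
    "pejorative": {"deplorables", "snowflakes", "leftists"},
    "bogeyman": {"big pharma", "the deep state", "the 1%"},
}

def find_biased_terms(text):
    """Return biased terms found in text, grouped by category.

    Naive substring matching: it cannot tell a quotation or a
    sarcastic use from an endorsement.
    """
    lowered = text.lower()
    return {category: sorted(term for term in terms if term in lowered)
            for category, terms in BIASED_WORDS.items()}

hits = find_biased_terms(
    "Critics say Big Pharma and the Deep State back the bill; "
    "snowflakes disagree.")
```

Even this toy version makes the maintenance problem obvious: as noted below, the word lists themselves go in and out of fashion and must be updated frequently.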

In addition to the table that I created for mapping political positions to categories on the chart, I made another table that lists biased words from the four categories above. I placed these words and phrases into the horizontal categories, categorizing the biased words themselves based on degree of bias. Again, since this is a work in progress, I only have this in pencil-and-paper format, but I plan to put it in electronic form soon. Here are some more hard-to-see pictures of that table:

I’d be grateful for commentators to help me supplement this list and bring new words that should be included to my attention. I submit that, like the table of political positions, these lists of words should be updated frequently as well, because certain terms gain and fade in popularity fairly frequently. For example, “The Koch Brothers” are much more en vogue as a bogeyman than “Karl Rove” nowadays, though that was different just a few years ago.

In the next post (Part 3), I’ll go through each of the seven columns in more detail and list more examples of what political positions and biased words correspond with each. Finally, in Part 4, I’ll lay out how I take an article or story, and, using the criteria I’ve laid out here, go through the steps of ranking it as discussed in Part 1, which are 1) creating an initial placement of left, right, or neutral based on the topic of the article itself, and 2) measuring certain factors that exist within the article. I’ll also discuss step 3, which is accounting for context by counting and evaluating factors that exist outside of the article.

Posted on

Part 1 of 4: Why Measuring Political Bias is So Hard, and How We Can Do It Anyway: The Media Bias Chart Horizontal Axis

Post One of a Four-Part Series

Part 1: Measuring Political Bias–Challenges to Existing Approaches and an Overview of a New Approach

Many commentators on the Media Bias Chart have asked me (or argued with me about) why I placed a particular source in a particular spot on the horizontal axis. Some more astute observers have asked (and argued with me about) the underlying questions of “what do the categories mean?” and “what makes a source more or less politically biased?” In this series of posts I will answer these questions.

In previous posts I have discussed how I analyze and rate the quality of news sources and individual articles for placement on the vertical axis of the Media Bias Chart. Here, I tackle the more controversial dimension: rating sources and articles for partisan bias on the horizontal axis. In my post on Media Bias Chart 3.0, I discussed rating each article on the vertical axis by taking each aspect, including the headline, the graphic(s), the lede, and each individual sentence, and ranking it. In that post, I proposed that when it comes to sentences, there are at least three different ways to score them for quality: on a Veracity scale, an Expression scale, and a Fairness scale. However, the ranking system I’ve outlined for vertical quality ratings doesn’t address everything that is required to rank partisan bias. Vertical quality ratings don’t necessarily correlate with horizontal partisan bias ratings (though they often do, hence the somewhat bell-curved distribution of sources along the chart).

Rating partisan bias requires different measures, and is more controversial because disagreements about it inflame the passions of those arguing about it. It’s also very difficult, for reasons I will discuss in this series. However, I think it’s worth trying to 1) create a taxonomy with a defined scope for ranking bias and 2) define a methodology for ranking sources within that taxonomy.

In this series, I will do both things. I’ve created the taxonomy already—the chart itself—and in these posts I’ll explain how I’ve defined its horizontal dimension. The scope of this horizontal axis has some arbitrary limits and definitions. For example, it is limited to US political issues as they have existed within the last year or so, and it uses the positions of various elected officials as proxies for the categories. You can feel free to disagree with each of these choices. However, the scope has to start and end somewhere in order to create a systematic, repeatable way of ranking sources within it. I’ll discuss how I define each of the horizontal categories (most extreme/hyper-partisan/skews/neutral). Then, I’ll discuss a formal, quantitative, and objective-as-possible methodology for systematically rating partisan bias, which has evolved from the informal and somewhat subjective processes I had been using in the past. This methodology comprises:

  1. An initial placement of left, right, or neutral for the story topic selection itself
  2. Three measurements of partisanship on quantifiable scales, which include:
    • a “Promotion” scale
    • a “Characterization” scale, and
    • a “Terminology” scale
  3. A systematic process for measuring what is NOT in an article, the absence of which results in partisan bias.
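The measurements above invite a simple combination step. The sketch below assumes a shared -3 to +3 scale mirroring the chart's seven columns and an equal weighting of the four inputs; both are my illustrative assumptions for this sketch, not the chart's published formula, and the function name is hypothetical:

```python
# Sketch: combine the topic placement with the three in-article
# scales into one horizontal score (negative = left, positive =
# right). The -3..+3 range and equal weighting are illustrative
# assumptions, not the chart's published formula.
def horizontal_score(topic_placement, promotion, characterization,
                     terminology):
    """Average four partisanship measurements on a -3..+3 scale,
    where -3/+3 correspond to the "most extreme" columns and 0 to
    neutral/balance."""
    scores = [topic_placement, promotion, characterization, terminology]
    for s in scores:
        if not -3 <= s <= 3:
            raise ValueError("each score must lie within -3..+3")
    return sum(scores) / len(scores)

# A left-leaning topic written in fairly neutral language averages
# out to a mild overall lean:
score = horizontal_score(-2, 0, -1, 0)
```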

  1. Problems with existing bias rating systems

To the extent that organizations try to measure news media stories and sources, they often do so only by judging or rating partisan bias (rather than quality). Because it is difficult to define standards and metrics by which partisan bias can be measured, such ratings are often made through admittedly subjective assessments by the raters (see here, for example), or are made by polling the public or a subset thereof (see here, for example). High levels of subjectivity can cause the public to be skeptical of ratings results (see, e.g., all the comments on my blog complaining about my bias), and polling subsets of the public can skew results in a number of directions.

Polling the public, or otherwise asking the public to rate “trustworthiness” or bias of news sources has proven problematic in a number of ways. For one, people’s subjective ratings of trustworthiness of particular sources tend to correlate very highly with their own political leanings, so while liberal people will tend to rate MSNBC as highly trustworthy and FOX as not trustworthy, conservative people will do the opposite, which says very little about an objective level of actual trustworthiness of each of those sources. Further, current events have revealed that certain segments of the population are extremely susceptible to influence by low-quality, highly biased, and even fake news, and those segments have proven themselves unable to reliably discern measures of quality and bias, making them unhelpful to poll.

Another way individuals and organizations have attempted to rate partisan bias is through software-enabled text analysis. The idea of text analysis software is appealing to researchers because the sheer volume of news text is enormous. Social media companies, advertisers, and other organizations have recently used such software to perform “sentiment analysis” of content such as social media posts in order to identify how individuals and groups feel about particular topics, in the hope that knowing such information can influence purchasing behavior. Some have endeavored to measure partisan bias in this way, by programming software to count certain words that could be categorized as “liberal” or “conservative.” A study conducted by researchers at UCLA tried to measure such bias by counting references by media figures to conservative and liberal think tanks. However, such attempts to rate partisan bias have had mixed results, at best, because of the variation in the context in which these words are presented. For example, if a word is used sarcastically, or in a quote by someone on the opposite side of the political spectrum from the side that typically uses that word, then its use is not necessarily indicative of partisan bias. In the UCLA study, references to political think tanks were too infrequent to generate a meaningful sample. I submit that other factors within an article or story are far more indicative of bias.
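To see why such word-count approaches struggle, here is a deliberately naive sketch in Python. The word lists are invented for illustration; the point is that a quotation and an endorsement score identically:

```python
# Deliberately naive word-count bias score, in the spirit of
# counting "liberal" vs. "conservative" terms. The word lists are
# invented for illustration.
LIBERAL_TERMS = {"undocumented immigrants", "gun safety"}
CONSERVATIVE_TERMS = {"illegal aliens", "death tax"}

def naive_bias_score(text):
    """Positive = conservative term hits outnumber liberal ones."""
    lowered = text.lower()
    right = sum(term in lowered for term in CONSERVATIVE_TERMS)
    left = sum(term in lowered for term in LIBERAL_TERMS)
    return right - left

endorsement = "The death tax is strangling family farms."
quotation = 'The senator mocked those who call it a "death tax."'
# Both strings score the same, though only the first endorses
# the conservative framing; the method is blind to context.
```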

I also submit that large-scale, software-enabled bias ratings are not useful if the results do not align well with the subjective bias ratings gathered from a group of knowledgeable media observers. That is, if we took a poll of an equal number of knowledgeable left-leaning and right-leaning media observers, we could come to some kind of reasonable average for bias ratings. To the extent software-generated results disagree with that average, it suggests the software model is wrong. I earlier stated my dissatisfaction with consumer polls as the sole indicator of bias ratings because such polls are consumer-focused rather than content-focused. I think there is a way to develop a content-based approach to ranking bias that aligns with our human perceptions of bias, and that once it is developed, it is possible to automate portions of that content-based approach. That is, we can get computers to help us rate bias, but we first have to create a very thorough bias-rating model.

  2. Finding a better way to rank bias

When I started doing ratings of partisanship, I, like all others before me, rated sources subjectively and instinctively from my own point of view. However, knowing that I, like every other person, have my own bias, I tried to control for it (as referenced in my original methodology post), possibly resulting in overcorrection. I wanted a more measurable and repeatable way to evaluate the bias of both entire news sources and individual news stories.

I have created a formal framework for measuring political bias in news sources within the defined taxonomy of the chart. I have started implementing this formal framework when analyzing individual articles and sources for ranking on the chart. This framework is a work in progress, and the sample size upon which I have tested it is not yet large enough to conclude that it is truly accurate and repeatable. However, I am putting it out here for comments and suggestions, and to let you know that I am designing a study for the dual purposes of 1) rating a large data set of articles for political bias and 2) refining the framework itself. Therefore, I will refer to some of these measurements in the present tense and others in the future tense. My overall goal is to create a methodology by which other knowledgeable media observers, including left-leaning and right-leaning ones, can reliably and repeatably rate bias of individual stories and not deviate too far from each other in their ratings.

My existing methodology for ranking an overall source on the chart takes into account certain factors related to the overall source as a first step, but is primarily based on rankings of individual articles within the source. Therefore, I have an “Entire Source” bias rating methodology and an “Individual Article” bias rating methodology.

  1. “Entire Source” Bias Rating Methodology

I discussed ranking partisan bias of overall sources in my original methodology post, which involves accounting for each of the following factors:

  a. Percentage of news media stories falling within each partisanship category (according to the “Individual Story” bias ranking methodology detailed below)
  b. Reputation for a partisan point of view among other news sources
  c. Reputation for a partisan point of view among the public
  d. Party affiliation of regular journalists, contributors, and interviewees
  e. Presence of an ideological reference or party affiliation in the title of the source

In my original methodology post, I identified a number of other factors for ranking sources on both the quality and partisanship scales that I am not necessarily including here. These are 1) number of journalists, 2) time in existence, and 3) readership/viewership. This is because I am starting with the assumption that factors (a-e) listed above are more precise indicators of partisanship that would line up with polling results of journalists and knowledgeable media consumers. In other words, my starting assumption is that if you used factors (a-e) to rate the partisanship of a set of sources, and then also polled significant samples of journalists and consumers, you would get similar results. I believe that over time, some of the factors 1-3 (number of journalists, time in existence, and readership/viewership) may be shown to correlate strongly with partisanship or non-partisanship. For example, I suspect that high numbers of journalists may be found to correlate with low partisanship, for the reason that it is expensive to have a lot of journalists on staff, and running a profitable news enterprise with a large staff would require broad readership across party lines. I suspect that time in existence may not necessarily correlate with partisanship, because several new sources that strive to provide unbiased news have come into existence within just the last few years. I suspect that readership/viewership will not correlate much with partisanship, for the simple reason that as many people seem to like extremely partisan junk as like unbiased news. Implementation of a study based on the above factors should verify or disprove these assumptions.

I have listed “percentage of news media stories falling within each partisanship category” as the first factor for ranking sources, and I believe it is the most important metric. Whenever someone disagrees with a particular ranking of an overall source on the chart, they usually cite their perceived partisan bias of a particular story that they believe does not align with my ranking of the overall source. What should be apparent to all thoughtful media observers, though, is that individual articles can themselves be more liberal or conservative than the mean or median partisan bias of their overall source. In order to accurately rank a source, you have to accurately rank the stories in it.
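That aggregation step can be sketched as follows. The -3 to +3 article scale, the rounding of scores into seven columns, and the use of the median are illustrative assumptions of mine for this sketch, not the chart's published formula:

```python
# Sketch: summarize an entire source from its individual article
# scores. The -3..+3 scale, the rounding into seven columns, and
# the use of the median are illustrative assumptions, not the
# chart's published formula.
from statistics import median

COLUMNS = ["most extreme left", "hyper-partisan left", "skews liberal",
           "neutral/balance", "skews conservative",
           "hyper-partisan right", "most extreme right"]

def column_for(score):
    """Map a -3..+3 article score to one of the seven columns."""
    index = min(6, max(0, round(score) + 3))
    return COLUMNS[index]

def source_summary(article_scores):
    """Report the median score and the percentage of articles
    falling in each partisanship column."""
    counts = {column: 0 for column in COLUMNS}
    for score in article_scores:
        counts[column_for(score)] += 1
    total = len(article_scores)
    percentages = {column: 100 * n / total
                   for column, n in counts.items()}
    return {"median_score": median(article_scores),
            "percent_by_column": percentages}

summary = source_summary([-0.75, 0.0, -1.5, 0.5, -2.2])
```

The per-column percentages correspond to factor (a) above, while the median gives a single placement that outlier articles cannot drag around as easily as a mean would.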

  2. “Individual Story” Bias Rating Methodology

As previously discussed, I propose evaluating partisanship of an individual article by: 1) creating an initial placement of left, right, or neutral based on the topic of the article itself, 2) measuring certain factors that exist within the article and then 3) accounting for context by counting and evaluating factors that exist outside of the article. I’ll discuss this fully in Posts 3 and 4 of this series.

In my next post (#2 in this series) I will discuss the taxonomy of the horizontal dimension. I’ll cover many reasons why it is so hard to quantify bias in the first place. Then I’ll define what I mean by “partisanship,” the very concepts of “liberal,” “mainstream/center,” and “conservative,” and what each of the categories (most extreme/hyper-partisan/skews/neutral or balance) mean within the scope of the chart.

 Until then, thanks for reading and thinking!