Volume 2011, Number July (2011), Pages 1-8
Untrustworthy information is an increasing threat to decision making in information environments. Rick Hayes-Roth has been studying how to detect and filter away untrustworthy information and base decisions on well-grounded claims that can improve outcomes. We interviewed him to find out more about this problem and get advice for our readers. Although there are many subtleties in the shades of truth and the intentions of speakers and listeners, Hayes-Roth finds the essential core of what you can do to ward off untrustworthy information.
Rick Hayes-Roth is a Professor of Information Sciences at the Naval Postgraduate School, where he teaches the capstone course on strategic uses of information technology. He was formerly the CTO for Software at Hewlett-Packard. His entire career has focused on how to improve decisions and outcomes based on knowledge and information processing. He has started several companies. In 2011, he co-founded Truth Seal Corp., which aims to use market forces to reward truth telling. Truth Seal provides a method for people and organizations to guarantee the truth of vetted claims, and pays bounties for valid challenges that falsify guaranteed claims.
We will offer the interview in two installments. Part 1 will focus on the problem and the principles that help ameliorate it. Part 2 will focus on the means to implement the principles in our information environments.
Peter J. Denning
Ubiquity: You recently published a book called Truthiness Fever. What is your premise in that book?
Rick Hayes-Roth: The book focuses on the increasing prevalence of untrustworthy information in our work and home information environments. My experience is typical of what I hear from others. When I make Google searches, I often have to sift through many exaggerated and unsubstantiated claims to get to something useful. I often wonder how much of what I read in Wikipedia is trustworthy. When I see an item of news and try to find out more, I discover that what appear to be a multitude of stories all derive from a single source, such as a press release, and I can't tell if that source is correct. I work with people who are trying to lead, manage, and perform effectively in information environments, and I often encounter people overwhelmed with the task of finding reliable information so that they can do their jobs. The complaint keeps growing year after year. I have become alarmed about the situation. I see bad information as a toxin in our information environment. Some misinformation is created inadvertently when uninformed people confuse their opinions and beliefs with facts. Other misinformation is created purposely by people trying to make things go their way. I wrote the book to call attention to a practice of "truthiness" that generates untrustworthy information, whether inadvertently or purposely. I'd like to see more people engage in "truthful" practices and I propose some market mechanisms to foster truthfulness and discourage truthiness.
Ubiquity: What is truthiness?
Hayes-Roth: Truthiness is a practice of claiming statements are true simply because the speaker has strong feelings, strong beliefs, or strong emotions about them. The speaker provides little or no supporting evidence of the claim. The speaker often states the claim in such a way that it cannot be falsified by contrary evidence. This practice contrasts with truthfulness, which means to make claims that are verifiably true, supported by evidence, and open to being falsified.
Ubiquity: Where did this term come from?
Hayes-Roth: The term "truthiness" was introduced by Stephen Colbert, the well-known TV political satirist. He referred to the practice of selling ideas as credible because the promoter "felt" them to be true. The promoter's personal feeling that something is true overrides, ignores, or is apathetic toward anything that might discredit the idea. Colbert illustrated that term with ideas that the Bush administration was using to justify its intention to invade Iraq after the 9/11 attacks. For that invasion to make sense, it had to rest on some kind of logic. Instead of offering fact-based logic, the administration promoted various "truthy" ideas including "Saddam Hussein was linked to Al-Qaida" and "Iraq was actively engaged in developing WMDs." Years later, we learned that Saddam had no ties to Al-Qaida and had no WMDs.
Ubiquity: What are some other examples of truthy claims?
Hayes-Roth: I'll give two more from the political domain. One is global warming. While the scientific community has established an overwhelming consensus on two points, that the world is warming and that humans are partly responsible (Union of Concerned Scientists), there is still some controversy around the influence of human activity on increasing temperatures. Some believe other causes, such as solar sunspots, have a greater effect. Yet the political arena seems devoid of honest discussions of the science. People opposed to regulating carbon emissions make truthy claims such as "The cold snowy winter shows the fallacy of global warming," or "Many reputable climate scientists dispute that human activity is a major contributor to global warming." Traditional energy suppliers, who want to sell more carbon-rich fuels, adopt terms like "clean coal," "safe nuclear energy," and "environmentally safe natural gas fracking" to convey an impression that these technologies are all safe and low-risk. The claims are truthy because they appeal to emotions rather than facts.
Another issue is the amount of debt being accumulated by governments. Some speakers blame public employees and their unions for state and municipal government budget woes. Others blame seniors, the uninsured, and the unemployed for unsustainable "health care entitlements." Still others blame excess taxation feeding excess government appetites for spending. Instead of promoting an honest discussion of the problem and the reasons we disagree, politicians and interest groups stake out particular interpretations and label them as the "truth." It is hard for most people to decide which of the many conflicting "truths" they are offered are really true, and therefore what action they should support. Again, we see "truths" proclaimed for emotional and ideological reasons, rather than based on facts.
Ubiquity: Say more about the distinction between "truthiness" and "truthfulness."
Hayes-Roth: I see two main dimensions to truthfulness. One concerns how we assess whether a statement is a fact. The other concerns the speaker's sincerity. Let me start with sincerity, which is the more troublesome aspect.
Let's say I assert something to be true. I usually situate my claimed fact in the context of a purpose because I want to get people to take an action I favor. I would be sincere if I told you honestly what my purpose is and if I believed my claim to be a fact. I would be insincere if I had a different purpose from what I told you or, worse, I knew that my claimed facts were false. The dark end of truthiness is outright lying, where I purposely deceive you about my purpose and offer false information as "facts." Insincere speakers won't earn the trust of their listeners.
But what about the other end, where I'm honest about my purpose and honestly believe my claims are facts? I can still be truthy, because I have relied on emotion to determine what I accept as facts. Sincere speakers who cannot back up their claims with facts won't be trusted for long. Thus, we need to consider the other dimension.
A fact is a statement that is either true (it holds) or false (it does not hold). Facts can be verified independently of the speaker: the listener can check independent sources, go look, or perform the experiment. Someone finding contrary evidence may refute a claimed fact.
Ubiquity: Say more about what you mean by "claims" and "facts."
Hayes-Roth: A claim is a statement asserting something to be true. People accept claims when they are well grounded, that is, when the speaker provides good explanations with objective data. Claims are vulnerable to possible disconfirmation because new, refuting facts may be discovered. Notice that, strictly speaking, claims cannot be either true or false; they are grounded or ungrounded.
In all communities, a claim can evolve into a fact over time by winning allies. Allies fall into three categories, which I'll call primary, secondary, and general. Primary allies contribute facts about a claim, established by their own independent investigations. Scientists doing independent experiments to replicate the claimed result illustrate this. Secondary allies evaluate claims by examining other sources, reaching their own conclusions based on others' data. Government regulators doing fact checking on the Internet illustrate this. Finally, general allies accept claims based on word of mouth or trusted sources.
Different communities move claims from hypothesis to fact depending on the number and preponderance of allies in various categories. Scientists in a specialized arena care most about agreement of primary investigators. Government regulators pay most attention to primary and secondary investigators. Politicians and media companies speak to and often pander to the larger category of general allies. If almost everyone in a community is an ally, that community will say the claim is true and will call it a fact or a truth.
I give all this detail because there is work to be done to convert a claim to a fact. Just because the speaker says it does not mean it is verified or verifiable, or that it has sufficient allies. And just because it has sufficient allies does not guarantee it is verifiable.
Many professional communities understand and emphasize these distinctions to improve their reliability and utility over time. For example, practicing MDs consider it a fact that certain diseases are strongly correlated with certain risk factors. The doctor therefore advises a patient to reduce such risk factors. The FDA or the FTC would consider the correlation to be a "credible hypothesis" but not a fact, because it is not always true that the disease follows if you have the risk factors. Thus, different communities can rate a claim statement on a spectrum from "credible hypothesis" to "fact supported by confirming data."
While no amount of confirming data ever proves a claim true, our everyday lives, indeed our entire civilization, depend critically upon treating credible and confirmed beliefs as facts. Once a fact has been established, it is provisionally true until proven otherwise. We don't allow anybody to drive when intoxicated, because we accept the correlation between intoxicated driving and accidents. The claim "Intoxicated drivers cause accidents" is a credible hypothesis, provisionally accepted as true. In many areas of human interest, we can't even do relevant experiments, but that doesn't keep us from provisionally accepting empirically supported hypotheses, vulnerable to disconfirmation.
Ubiquity: What about a person who honestly believes a claim is true, and continues to hold that opinion after others refute the claim?
Hayes-Roth: I'd say that person is sincere, but ungrounded. Believing that a claim is true does not necessarily make it true. If that person took the trouble to find out what evidence exists to support the claim, or whether the claim has allies, the person could ground the claim. Without the grounding, the claim will seem truthy to listeners who are not allies.
Ubiquity: How would you categorize people with religious beliefs? For example, they can claim God exists, but they can't prove it.
Hayes-Roth: All they need to do is be honest. They can realize there is no proof and accept that.
Ubiquity: And what about scientists who believe something is true and do not immediately accept contrary evidence? A famous example was in 1887, when the Michelson-Morley experiment falsified the widely held belief "ether exists." Ether was the supposed medium through which light waves propagate. Many physicists clung to that belief until Einstein dislodged it in 1905 with the theory of relativity.
Hayes-Roth: This is a fine example of the role of allies in establishing scientific facts. Prior to that experiment there were no instruments that could definitely confirm or refute the ether claim, which had many allies. When the instrument falsified the claim "ether exists," the allies of the old claim were not ready to give it up. Einstein showed them a new interpretation of physics in which ether did not need to exist. Then it started to seem rational to accept the new claim, which was "the speed of light is the same in every frame of reference."
In a very real sense, all human knowledge is socially constructed. Our language, training, and culture shape the questions we ask, how we perceive reality, and how plausible or elegant we find models, explanations, and theories. When Commodore Perry arrived in Japan in a Western effort to force the country to "open," news of the Americans was drawn on scrolls by Japanese observers and sent to the Emperor. The scrolls portrayed the Americans consistently and mistakenly with Asian-shaped eyes, presumably because that's what the artists thought they observed.
Our culture accumulates knowledge through a process of accepting new explanations when needed. People hold tenaciously to ideas that worked for them in the past and which seem still effective. Only when ideas fail, repeatedly, to accord with data do enough people accept the necessity to change. Change is not easy for individuals and notoriously difficult for organizations. In the scientific arena, Thomas Kuhn famously explained how a scientific community ultimately makes the jump from yesterday's "paradigm" to tomorrow's. Imre Lakatos showed that the same was true for mathematics, a field some might think of as purely logical rather than empirical. He explained how resistant mathematicians are to "monsters," cognitively constructed examples that disconfirm previously accepted truths.
Throughout experimental psychology, the phenomenon of "superstitious learning" is common and instructive. All animals seek behaviors that reliably produce rewards in various contexts. The learning animal can often achieve success despite anchoring its behavior to an irrelevant and spurious contextual stimulus. When you get the right result with an accidental but irrelevant precondition, you exhibit superstitious learning. It works, and it's highly resistant to change. Of course, people often acquire superstitions that work for them. Whatever they have done for millennia to keep the devil away, for example, has worked, regardless of the lack of causal validity.
Even though science strives to establish what are the basic facts about nature, we can never be absolutely sure about the truth of claims. All facts, scientific or otherwise, are subject to falsification if things change or surprising data appear. The scientific enterprise makes progress by continually comparing alternative hypotheses and choosing those that give best summaries or explanations of data. Scientists reject hypotheses that are inconsistent with credible data. They wind up accepting as facts those that are repeatedly confirmed when subjected to experiments that could reasonably be expected to disconfirm them.
In this context, we ascribe truthfulness to people who claim only facts and intend to tell the truth. Truthiness, on the other hand, means accepting things as true because they feel good or you want them to be true. In most cases, the truthy claims are not actually true. You can be sincere in your beliefs that things are true and still be truthy.
Ubiquity: Is everyone who is "truthy" a liar?
Hayes-Roth: No. As I said, truthy claims are often made sincerely. You can be sincere and truthy as a missionary, a true believer, or merely a gossip.
I believe that the primary threat to our information environment comes from intentional pollution, where liars of means and motive spread falsehoods to advance their interests.
Ubiquity: What is different about today, compared to a generation ago, that makes truth-telling more urgent? People have been lying and otherwise skirting the truth since the beginning. There is even an Old Testament commandment about it.
Hayes-Roth: I think this is the key insight. Do you know what a "meme" is? A meme is an attractive idea that spreads from one person to another, much like a biological virus. As our civilization becomes more connected and shaped by the Information Age, we are eliminating the friction that previously slowed or limited the impact of "memes" on the distributed population. Slower rates of spread contributed to mental health in two ways. First, when it was expensive or difficult to convey information, fewer people encountered new memes. Second, often those who did encounter a new meme had specialized interests and resources; the specialists evaluated memes more critically. Both factors meant that in former days fewer ungrounded memes were in play.
Today, we can reach hundreds of millions of people with expensively crafted, polished, and test-marketed sales pitches for each promoted idea. The political action committee that wanted people to think that Sen. Kerry was a coward in the Vietnam War made movies and web sites, peppering the media with the story that his heroic deeds were fictitious. With adequate budget, we can buy the best talent, the best media channels, and the best social buzz programmers to make a decorated war hero seem untrustworthy. Such expensive marketing campaigns have become commonplace in modern politics.
Many powerful forces are converging to make this kind of packaging and selling of ideas so routine that it becomes part of the very fabric of Internet-based communications. Google now displays information that their programs have calculated is most likely to match your beliefs or interests. Since the search engine does not distinguish true from false, you can be presented with a diet that contains no information contrary to your beliefs and preferences. Can you be truthful if the facts do not flow to you because the search engine calculates you would not be interested in them?
But this is not just about Google. All information service providers today are beholden to the advertisers who pay them for users who click through and become purchasers. The advertisers' invisible hands shape each person's interactions closer and closer to the perfect reinforcing experience. Decades ago we marveled that laboratory rats would push a bar to stimulate their pleasure centers without stopping, even to the point of starvation. What makes us think we can't succumb to the same desire for pleasant experience over truth? Through the ages, humans have learned that truth isn't usually pleasurable.
©2011 ACM $10.00
The Digital Library is published by the Association for Computing Machinery. Copyright © 2011 ACM, Inc.