Sources and Experts
Mark Battersby
1. Background[1]
Assessing appeals to authority, including the evaluation of sources of information and of the opinions of experts, should be a central concern in critical thinking courses. Thanks to the Web and Google, practically everyone will spend their lives supplied with an abundance of putative information. Separating the credible from the non-credible will be crucial to leading a well-informed life. Students should be clear that the authority in question is not political, moral or administrative authority, but rather epistemic authority.
Since the Enlightenment[2], the appeal to authority (ad verecundiam) has often been treated as a fallacy. Locke was the first to identify and name the ad verecundiam fallacy, noting that it involved taking advantage of the common habit of deference to authority to drive acceptance of a conclusion. Here is his argument for the classic Enlightenment negative view of appeals to authority.
The floating of other men’s opinions in our brains, makes us not one jot the more knowing, though they happen to be true. What in them was science, is in us but opiniatrety; whilst we give up our assent only to reverend names, and do not, as they did, employ our own reason to understand those truths which gave them reputation. Such borrowed wealth, like fairy money, though it were gold in the hand from which he received it, will be but leaves and dust when it comes to use. (Locke, 1948 Bk I, Ch 3, Section 24)
Locke (like other Enlightenment philosophers), inspired by the success of science, was concerned to liberate people from accepting hand-me-down claims that were untested and unquestioned by the recipient. Intellectual liberation meant the rejection of such claims and the move to establish independently and personally the truth of claims. While such advice was undoubtedly salutary in its day, when priestly authority tended to claim epistemic authority, the situation today is very different.
Despite Locke’s claim, a little reflection on what we know reveals that most of what we know (or claim to know) is based on claims received from credible sources (i.e., belief-worthy sources). Even autobiographical knowledge, such as our birth date, is based on a trusted source—our parents or a birth certificate.
More generally, none of us are equipped to establish independently most of the claims that we depend on. In our own areas of expertise we may be able to verify claims, but as Steven Pinker has observed, “Nowadays we specialists cannot be more than laypeople in most of our own disciplines, let alone neighboring ones” (Pinker, 1997). Outside of our own lives and our areas of specialty we are in a state of “epistemic dependency”—a lovely turn of phrase from John Hardwig (1985). This dependency is not necessarily bad; it means that we can know many more things than we could if left to our own investigations. It is part of the great power of society and language that such knowledge can be passed on.
Take the obvious example of the view that the sun is the centre of our solar system and the far from obvious claim that the earth is turning and the sun is not, as we still say, “going down.” Few of us could prove either of these well-established claims that we may have learned even before going to school. Clearly we know these facts about the solar system and the basis of this knowledge is primarily the credible sources (teachers, astronomers) who supplied us with the information. While knowledge transmission provides undoubted benefits, the problem is that erroneous beliefs can be passed on using the same powerful vehicles.
2. Evaluating credible sources
To start with, we need to make a distinction between what I will call “credible claims” and “expert opinions.” Credible claims are backed by disciplinary authority, not primarily the expertise of the informant, whereas expert opinions get their credibility primarily from the competence and presumed independent knowledge of the expert. Our belief in the heliocentric account of the solar system as explained by our first grade teacher does not get its credibility from the teacher, but from the scientific understanding and consensus which is the basis of the claim. The question about informants is not whether they have expert knowledge of the claim, but whether they are reliable transmitters of this type of information.
On the other hand, an expert’s claim, such as a pathologist’s claim that a biopsy reveals cancer, receives much of its credibility from the expertise of the pathologist. There is of course a scientific basis to determining whether a cell is cancerous, but the application of this knowledge requires a competency beyond simply knowing the science. Applied expertise is a combination of “know-how” and factual and theoretical knowledge. The credibility of the claim depends on the know-how of the expert as much as or even more than on the credibility of the background scientific understanding.
To be clear: I am using the term ‘credible’ to mean “rationally worthy of belief.” That does not of course mean certain. Rather, it means that the support for the claim meets the relevant criteria for treating the claim as worthy of belief. In addition, like most rationally based beliefs, expert-based beliefs come with degrees of reasonable confidence.
2.1 General argument model of Appeals to Authority
The logic of appeals to credible sources or expert opinions can be schematized as:
- The source or expert states “P”.
- The source or expert meets the relevant criteria of credibility (to be discussed below).
Conclusion: P is a credible belief, or it is reasonable to believe that P.
The premises of this type of argument do not entail the conclusion, nor is this argument a case of statistical induction. The premises, if true, provide reasonable support for the conclusion—the inference is best described as “probative” (Scriven, 2009).
2.2 Evaluating sources independently
A useful way to apply the relevant criteria of credibility to claims based on putatively credible sources is to subject them to the following critical questions (schematized in the sketch after the list):
Q1. Is the claim from an appropriate and reliable domain of knowledge?
Q2. Is there consensus among the relevant experts supporting the claim?
Q3. Has the claim in question been subject to peer review or is it from a peer reviewed source?
Q4. Is the claim supported by evidence?
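Since the argument model of 2.1 plus these critical questions amount to a checklist, they can be rendered as a minimal sketch in Python (the class and field names here are hypothetical illustrations, not anything this chapter prescribes). What the sketch encodes is the probative character of the inference: the more criteria a claim satisfies, the more confidence it warrants, but satisfying them all never amounts to proof.

```python
from dataclasses import dataclass

@dataclass
class SourceAppeal:
    """A claim P relayed by a putatively credible source."""
    claim: str              # the proposition P
    reliable_domain: bool   # Q1: appropriate, reliable domain of knowledge?
    expert_consensus: bool  # Q2: consensus among the relevant experts?
    peer_reviewed: bool     # Q3: peer reviewed, or from a peer-reviewed source?
    evidence_backed: bool   # Q4: supported by evidence?

    def credibility(self) -> str:
        """Probative, not deductive: satisfied criteria raise reasonable
        confidence in the claim; they never entail it."""
        if not self.reliable_domain:  # a failing domain undercuts the whole appeal
            return "weak: the domain itself is not a reliable source"
        met = sum([self.expert_consensus, self.peer_reviewed, self.evidence_backed])
        if met == 3:
            return "highly credible (still fallible)"
        if met >= 1:
            return "moderately credible: seek further support"
        return "weak: little basis for belief"

# Example: the heliocentric claim relayed by a teacher.
appeal = SourceAppeal("The sun is the centre of our solar system",
                      reliable_domain=True, expert_consensus=True,
                      peer_reviewed=True, evidence_backed=True)
print(appeal.credibility())  # -> highly credible (still fallible)
```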
The characteristics of a domain of inquiry in which credible appeals to the source can be made are:
1.1 The domain is not an area characterized by moral autonomy.
1.2 The domain has widely agreed-upon procedures and criteria for establishing claims.
1.3 The domain is generally characterized by a significant degree of expert consensus.
The issue of which domains of inquiry (e.g., history, psychology, economics, physics, newspapers, theology) are credible sources of authority is not without controversy. For the purposes of critical thinking, however, the domains to which appropriate appeals to authority can be made are those that have well-established, consensually agreed-upon proof procedures and reliably yield consensus-based knowledge claims. The clear paradigm is the natural sciences, such as physics and chemistry, which have a long history of being a reliable (though far from infallible) source of knowledge.
On the other hand, certain kinds of claims are not appropriately settled or even well supported by appeals to authority.[3] One of the most significant of these types of claims is moral claims. When people make moral decisions, they cannot say, as they often do about scientific claims, “I believe it because experts claim it.” There are three reasons for this. First, as we will see below, a primary reason to trust expert judgment is that there is consensus among the experts. On many moral issues there is controversy among candidate experts (moral philosophers, theologians). They disagree not only about particular moral judgments but also about what the justifiable criteria for making such judgments are. Arguably there is not even agreement about who the experts on moral judgment are. These problems undermine any appeal to moral authorities to reliably adjudicate moral issues.
Second, in a secular society such as ours, the responsibility for a moral judgment or decision stops with the individual. Any of us can decide to obey the edicts of our church or the moral advice of a friend, but it is still our decision and belief, and the responsibility rests with us. This is not to say that we should not listen to people who study ethical questions. They may well have insights to share. The crucial point is that we cannot appeal to their expertise to claim moral knowledge (Hooker, 1998).
Third, there is no well-established epistemology for moral judgments. People not only disagree about moral judgments, they even disagree about how such judgments can be verified. There is even widespread disagreement about whether there is moral knowledge. This is in stark contrast to the physical sciences, where no one doubts that there is objective knowledge and it is agreed that observation provides a crucial basis for verification. The uncertainty about criteria and objectivity also undermines appeals to authority about aesthetic judgments, though again this is not to claim that one should ignore the arguments and opinions of experts in this domain.
The physical sciences provide a broad basis for appeal to authority because they have well-established procedures for verification and as a result have a large degree of consensus on most claims. If we want to know the speed of light or the molecular weight of oxygen, we can look them up in any textbook. There is virtual unanimity in these disciplines on these topics. But even these fields have their zones of dissensus, or disagreement. Any astrophysicist can tell us the speed of light and the approximate number of stars in our galaxy; however, only recently have astrophysicists reached consensus on whether the universe’s expansion is accelerating, and at this point there is no consensus on how to account for the rate of that expansion. As a result, appeals to the claim that dark energy is causing the acceleration of the expansion of the universe are necessarily low in credibility.
To take other areas of science, we know how old the earth is, although this question was settled only relatively recently (EarthSky, n.d.), and still more recently a general consensus has emerged that Homo sapiens emerged out of Africa (Boyd & Silk, 2014). To the extent that experts in these fields (geology and paleoanthropology) agree, we, as laypeople, have grounds to believe their claims. We should also recognize, however, that because of the challenges of doing research in disciplines such as paleoanthropology—even biological paleoanthropology, the discipline that studies human origins using physical analysis of human remains—such disciplines do not yield the same degree of certainty that a discipline like physics does. In general, non-experimental sciences will always be subject to more uncertainty than experimental ones, as will disciplines that study human behavior. Sciences such as paleoanthropology that are also vulnerable to the “luck” of finding fossils, tools, and other artifacts are unlikely to achieve the level of certainty that other observational sciences such as geology or ecology might achieve. Different disciplines or areas of inquiry have demonstrated different degrees of epistemic reliability. As a result, an important consideration when evaluating sources is the reliability of the discipline, based on its particular subject matter, methodology, and history.
While reliable consensus and agreed-upon procedures for assessing claims are the best guide to establishing the credibility of discipline-based claims, we should not ignore the fact that even the domains of inquiry which generally meet these criteria have often had consensus claims that later turned out to be false. Credible sources can be wrong. The history of science, especially of some sciences, is littered with error and revision. But how do we know that? We know it because later science disproved the earlier claims and theories. To take two examples, plate tectonics refuted the stable-earth theory, and the Michelson-Morley experiment disproved the existence of “the ether”—the supposed medium of light waves. The history of self-correction in science is part of the basis for having reasonable confidence in current scientific claims.
Even though science is self-correcting, the emergence of a consensus based on peer evaluation of a claim or theory takes time. New claims do not warrant the same credibility as those that have stood the test of time. “If it is in the news, it is probably not established science” is a good rule of thumb when evaluating claims reported in the media. On the other hand, that a theory or claim has been accepted for some time and has presumably survived ongoing scrutiny contributes to its credibility. In contrast, the notorious lack of long-term consensus for many claims in the area of nutrition research weakens any appeal to authority in this area—even for those claims supported by a current consensus (Freedman, 2010).
Given the importance of expert consensus in providing a basis for credibility, it is important to establish a claim’s current status. Because many claims do get revised or rejected over time, the most credible belief is one supported by the current consensus.
If, on the other hand, there is no current consensus on a particular claim, then any appeal to disciplinary authority is weak.
By consensus, we do not mean complete agreement by all scientists. We cannot and should not expect this on many matters. But the degree of disagreement is relevant. If the number of dissenting voices is relatively small, this is not a reason for laypeople to give up relying on the general consensus (although understanding the basis of the disagreement can be relevant to evaluation). If the domain of knowledge is a relatively reliable one, then it is completely reasonable to trust the dominant view. If there is no such consensus, then one’s belief cannot be reasonably based on an appeal to disciplinary credibility (Bailin & Battersby, 2016).
Another source of uncertainty when appealing to disciplinary sources is the fact that certain disciplines are divided into “schools of thought.” Within these schools there may be a consensus, but across the discipline as a whole there may be none. Psychology and economics are notable examples of disciplines with competing schools of thought. So, for example, rather than saying “economists believe,” it is more accurate to say “supply-side economists believe” or “Keynesian economists hold that”; such qualified references acknowledge the limits of these appeals.
Even within a relatively reliable discipline or profession, claims clearly come with degrees of credibility or belief-worthiness. While consensus is the primary criterion for assessing the appeal to credible disciplinary sources, there are other criteria that can be used to assess a scientific claim. It is beyond the limits of this chapter to explain these criteria in detail. They involve such factors as the size and type of studies (e.g., experimental vs. observational), the extent of replication, whether claims about humans are in fact based on animal studies, and whether studies were financed by interested parties. (For more details see Battersby, 2016.) As an example of the application of this type of criteria, see the table below, based on a table used by the United States Surgeon General in assessing the evidence for the carcinogenic effects of second-hand smoke (Surgeon General of the United States, 2006).
Domains involving the study of human behavior are particularly prone to dissensus and a shifting consensus, not only because of the great difficulty of studying humans but also because of the intrusion of ideological and political bias. Experiments are limited, and they are difficult to generalize from because of cultural and other differences between the studied subjects and the target population. In addition, most such studies are observational, making it difficult to identify and so to eliminate confounding factors. Predictions about future human behavior are especially uncertain, and appeals to such claims have low epistemic worth.[4]
We can roughly think of four different levels of belief-worthiness of scientific claims concerning a causal relationship (as shown in Table 1).
Given disciplinary uncertainty, it is fortunate that on many matters we do not need to make up our minds. For example, we can leave the question of the existence of “dark energy” (which may explain why the expansion of the universe is not slowing but accelerating) to the experts to work out.
But as citizens we need to make up our minds about appropriate economic policies. As parents we will have to make a decision about the dangers of violent video games. As individuals, we have to decide what to eat even if nutritionists disagree or exhibit a shifting consensus. Such decisions should not ignore expert views; rather the critical thinker’s judgment should be based on an assessment of the competing arguments. Expert summaries written for laypeople are often available as “executive summaries” and can inform one of the various arguments on an issue, providing the possibility of a rational decision based on an assessment of the best evidence available.
| | Judgment | Evidence Quality |
| --- | --- | --- |
| Strong | Highly credible. | Extensive studies with appropriate size, duration and variety, including large prospective studies and, if possible, experimental studies. A convergence of study results. A credible causal explanation. Large effects or extensive evidence for a reported small effect. |
| Weak | Evidence is suggestive but not sufficient; still uncertain. | Significant correlation evidence from credible studies with at least some prospective studies. Some experimental evidence if possible. Smaller size and fewer studies than above. Credible causal explanation. |
| Inconclusive | Evidence is inadequate to support the claim. | Evidence is sparse, of poor quality, or conflicting. Credible causal explanation may be lacking. |
| Negative | Evidence is suggestive of no causal relationship. | Extensive studies have failed to reveal a consistent correlation. The causal relationship seems implausible in view of current understanding. |

Table 1: Hierarchy for classifying the belief-worthiness of scientific claims
2.3 Is the claim in question from a peer-reviewed source?
Peer review is the gold standard for evaluating claims and arguments made in academic journals. The idea is that peers, people working in the same field as the article, are best positioned to assess its credibility. The process usually involves an editor sending an article submitted to a journal to two or three experts in the relevant field, who review the article and assess whether it is worthy of publication. The process is not without its weaknesses. Some concerns: (1) reviewers do not usually see the original data or primary material, just the summary in the article; (2) while reviews are done anonymously, reviewers can often tell who has written the article and be unduly influenced by such factors as the eminence of the author; (3) reviewers, being experts in the relevant field, may also reject articles to protect their own theories (Freedman, 2010). But this process is still the best method we have for short-term evaluation of a claim or theory. Longer-term evaluation can be based on such things as successful or failed replication, scrutiny by other experts after publication, and results from application (e.g., results from drug usage that provide information on long-term effects).
Fallible as this review process is, it is still the main quality control on claims, and publication in a peer-reviewed journal, especially one of the more eminent ones, is a source of credibility. While the peer review process is a reasonable basis for tentative acceptance of a claim, a claim is more credible if it has widespread peer acceptance and has survived years of peer scrutiny and further research.
3. Assessing expert opinions
As with disciplines, not all experts are equally credible as sources of knowledge or practical advice.
In assessing expert opinions, we use critical questions to evaluate both the claim itself and the expertise of the expert.[5]
Regarding the claim
C1. Is the claim based on an appropriate and reliable domain of knowledge?
C2. Is there disciplinary/professional consensus that is the basis of the expert’s opinion?
C3. Is the claim supported by evidence and cogent argument?
Regarding the expert
E1. Does the expert have expertise in the relevant domain of knowledge / expertise?
E2. Is the expert trustworthy?
E3. Has the expert properly attended to the question at hand?
E4. Is the expert’s opinion backed by plausible arguments and evidence?
3.1 Assessing the claim
Turning first to evaluating the claim:
C1. Is the claim based on an appropriate and reliable domain of knowledge?
This question points to a key idea about expert credibility: an expert’s claim is grounded first in the reliability of the source discipline. Given that experts are usually called upon to give opinions involving the practical application of disciplinary or professional knowledge, the reliability of the discipline or profession as a source of practical knowledge must be considered. Advice from fields whose recommendations notoriously change has much weaker credibility than advice based on more stable and reliable fields. For example, the ever-changing advice that emerges from management theorists reflects the weakness of research in that field and weakens the credibility of expert advice based on this research. Similar cautionary advice applies to nutritionists (Freedman, 2010).
We have addressed the issue of moral expertise above, but leaving aside the notorious difficulties in the field of moral philosophy, it is widely recognized that some people are more thoughtful and wise about moral decisions than others. This is probably a case where the “expertise” of the individual transcends the credibility of the disciplinary basis.[6]
C2. Is there disciplinary/professional consensus that is the basis of the expert’s opinion?
When there is no disciplinary or professional consensus, a significant part of our basis for believing an expert is undercut.
A classic example is the widespread theoretical disagreement in psychology. Despite these disagreements, psychologists are often called on to make important judgments about such issues as criminal culpability, incarceration for mental illness, and appropriate therapies. In such areas, the expertise (experience, reputation) of the expert may provide a more significant basis of credibility than the disciplinary theories.
The problem of the lack of disciplinary consensus is compounded by the fact that areas of applied expertise are almost always less certain than more theoretical areas. Applying expertise in a real-world situation requires considering a wide range of factors, knowing which are relevant, and even knowing the limits of one’s own expertise—all of which seriously complicates expert judgment.
Take the area of law. An experienced lawyer will know, or know how to find out, the relevant precedents and statutes for a case. But this knowledge does not guarantee that the lawyer will recommend or adopt the best legal strategy. For example, there is almost always a question of whether going to court or settling is the better strategy. The choice of strategy goes beyond the legal facts and is a complex judgment which depends on the “expertise” of the lawyer, not just her knowledge of the law.
Given the challenges involved in such practical judgments it is also very likely that experts will disagree—even if there is disciplinary consensus on the underlying theory. This is why we are advised to seek a second opinion when getting medical advice.
Ideally, in a situation of expert disagreement (as with lack of disciplinary consensus), the most reasonable position for a layperson is simply to admit, “I don’t know because they don’t know.” But most cases where we seek expert opinion are driven by a need to make a decision on the best information we can get. On these occasions, we must take a (fallibilist) position in areas where there is no reassuring expert consensus. Dealing with such a situation requires assessing both the trustworthiness of the consulted expert(s) and the reasonableness of the arguments for their differing positions.
C3. Is the claim supported by evidence and cogent argument?
When seeking expert advice we do not want simply to be told “In my expert opinion you should do this” or “this is the case.” We have a legitimate interest in knowing the basis of the expert’s opinion, even while recognizing that an expert may not be able to fully articulate the entire basis of her judgment. It is legitimate for an established expert to make claims such as “in my experience such an approach is most effective,” though the trustworthiness of such claims depends heavily on the track record of the expert—which is one reason why it is crucial to assess not only the claim of the expert, but the expert herself.
3.2 Assessing the expert
E1. Does the expert have expertise in the relevant domain of knowledge / expertise?
When we give epistemic weight to an expert opinion, we do so not only on the basis of the knowledge contained in the discipline or profession, but also on an assessment of the expert’s relevant expertise. Because of the specialization common in many professions, the specificity of the expert’s knowledge and experience is relevant. Being a general medical practitioner makes someone an expert in the area of medicine, but being an oncologist makes one more expert in the area of cancer, and being an oncologist specializing in prostate cancer makes one even more expert on questions of how to treat prostate cancer—though even in this narrower category there are differences in expertise and treatment strategies between radiologists and surgeons. When seeking expert opinion, the more specialized the expert’s relevant knowledge, the more epistemic weight her opinion carries.
The willingness to trust the judgment of people who may have expertise, but not the relevant expertise in a particular area, is one source of the fallacious acceptance of authoritative pronouncements. Living in a culture of media personalities, we are subject to the views of both Nobel Prize winners and movie stars, who might offer all sorts of advice outside their domain of expertise. Being influenced by people making claims outside their area of expertise is to succumb to a fallacious appeal to authority. Given the willingness of advertisers to pay large amounts of money for product endorsements by entertainment and sports personalities, it would appear that this type of fallacious appeal continues to have significant influence in our culture.
E2. Is the expert trustworthy?
Everyone understands that expert bias can be a problem in fields where financial incentives are influential. But even in such domains as basic research, the drive for personal success can lead individuals to fabricate data and mislead the public (see Freedman, 2010). The personal integrity of the expert is therefore relevant and can be evaluated in part by seeing whether the expert demonstrates appropriate humility in areas of controversy, clarity in the presentation of reasons, a personal history of integrity, and a reputation for integrity among peers. Unfortunately, pharmaceutical companies exploit these markers of expert respect by seeking to fund well-respected leaders in medical research, and such funding must call those experts’ trustworthiness into question (Goldacre, 2014).
E3. Has the expert properly attended to the question at hand?
The media is understandably fond of calling up experts to comment on newsworthy events. But such invitations often don’t allow the expert time to properly consider the matter. Opinions offered in this situation should be appropriately discounted. The same goes for a doctor who has not reviewed your file and for off-the-cuff comments by a professional at a social gathering.
E4. Is the expert’s opinion backed by intelligible arguments and evidence?
As argued above, for the recipient of an opinion to make any kind of minimally autonomous judgment, the recipient needs the expert to provide an intelligible case for her judgment. The haughty appeal to “I know and you should do or believe what I say,” or an explanation shrouded in unintelligible jargon, undermines not only the autonomy of the recipient but also the reasonable confidence that one can have in the expert. While acknowledging that expertise involves tacit knowledge, the capacity to explain and justify is a key basis for trusting an expert’s opinion.
To some extent such explanations may be generated after an opinion has been offered or a decision made. Chess masters often know what move to make based on just looking at the board, but nonetheless they can also give a rationale if pressed. If they could not, we would be dealing with a kind of idiot savant and not a true expert.[7]
3.3 The fallacy of Inappropriate Appeal to Authority
As can be seen from the discussion above, appeals to authority come in degrees of credibility, but an appeal exhibiting any of the following failings can be treated as simply fallacious (a schematic sketch follows the list).
- The appeal is not in an appropriate domain of knowledge.
- There is no expert consensus in the relevant area.
- The expert/source appealed to does not have the appropriate expertise.
- The expert has not had the appropriate opportunity to review relevant information.
- The expert is not trustworthy or has obvious biases.
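As a minimal sketch (again in Python, with hypothetical names; this is an illustration, not a procedure the chapter prescribes), these criteria can be read as a screening test: a single violation is enough to treat the appeal as fallacious.

```python
def is_fallacious_appeal(appropriate_domain: bool,
                         expert_consensus: bool,
                         relevant_expertise: bool,
                         reviewed_the_matter: bool,
                         trustworthy: bool) -> bool:
    """Treat an appeal to authority as fallacious if it violates any of
    the key criteria listed above. Note that passing all of them does not
    prove the claim; it merely leaves the appeal non-fallacious."""
    return not all([appropriate_domain, expert_consensus,
                    relevant_expertise, reviewed_the_matter, trustworthy])

# A celebrity endorsing a medical product: the domain (medicine) is fine,
# but the endorser lacks relevant expertise and has an obvious financial bias.
print(is_fallacious_appeal(appropriate_domain=True, expert_consensus=True,
                           relevant_expertise=False, reviewed_the_matter=False,
                           trustworthy=False))  # -> True (fallacious)
```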
4. Finding credible sources
Students need guidance not only on how to assess sources but also on what kind of information they need. Research is best started with overview articles that provide history and context. Credible popular media are usually an excellent place to begin. I suggest a tripartite categorization of credible print sources.
Level 1 is the popular and credible media that is indexed in various “periodical” indexes. Credible media such as Scientific American, Atlantic Monthly, The New York Times, and The Globe and Mail can be excellent sources of overview articles written by knowledgeable journalists. But this is still the tip of the information iceberg: the information presented by these sources is usually relayed by reporters who are not scientists and are far removed from the fundamental information.
These media usually have websites with varying degrees of free availability. There are also websites in this category, often sponsored by government or health agencies such as cancer agencies, that supply responsibly digested information for the layperson. Unfortunately many of these are “dumbed down,” assuming that the reader wants to know, e.g., “how to reduce the risk of colon cancer,” not what the latest research is showing. They sometimes supply links to research, but that is not their primary focus.
Level 1 material seldom provides enough evidence to support a well-reasoned and confident belief. Personally significant problems (e.g., medical decisions, lifestyle health choices) or significant public policy issues should be researched more deeply.
Level 2 sites and material are written for sophisticated non-experts such as government policy makers. A classic example is the website of the Intergovernmental Panel on Climate Change (http://www.ipcc.ch). The site provides a range of information, from fairly simple accounts of global warming, to extensive literature reviews written for policy makers, and finally to primary research written by and for researchers in the field. Many UN sites have a similar range of information, as does the World Bank site.
Level 3 is primary research published in peer-reviewed journals in all fields. While access to many peer-reviewed journals is not free, students can gain access to many of them through public and post-secondary libraries. Google Scholar can usually provide access at least to the abstracts of articles. Articles in peer-reviewed journals are often technical and filled with jargon, making them difficult for a layperson to follow. But the articles usually come with an abstract and/or summary at the beginning and a concluding discussion at the end. The abstracts often state the basics of the research and the conclusion in understandable terms. For most purposes, what a student needs, even from peer-reviewed journals, are reliable overview summaries that can give them an understanding of the direction of the research and the extent to which researchers feel confident about this direction. Literature reviews and meta-analyses, while not written for the non-expert, often provide overview discussions that are comprehensible to the non-expert. Most importantly, these reviews provide crucial information about the direction of the research (consensus or dissensus) and how reliable it is.
4.1 Footnotes
Publishers generally exclude footnotes from popular journals, even credible ones, while peer-reviewed journals often contain an overwhelming number of footnotes. Hostility to the tedium of footnoting when writing academic essays often leads students to “footnote phobia”: footnotes seem too complicated, too tedious, and too academic.
But when researching, students need to realize that footnotes are their friend. When looking for information, a well-footnoted article is a kind of open-sesame into the literature. Quickly perusing the footnotes of a recent specialized journal article will often reveal crucial bits of information such as a key summary or a seminal article.
4.2 Using the Web
How to use the Web is discussed in more detail in Chapter 15. Here is some additional advice.
Specialized Web sites from reputable institutions can also be a good initial source of information. With respect to health issues, for example, the Web sites of such well-known research centers as the Mayo Clinic and Harvard University are highly credible sources.
Another preliminary source of information is Web sites devoted to particular topics. Recognizing the bias of a site is important, but bias alone is not a basis for ignoring a site that contains useful and credible information. Such sites may also supply useful footnotes to articles that one could then examine further.
How do we evaluate information from a Web site? We evaluate it using questions that are very similar to those we use to assess claims from supposedly expert sources:
- Is the claim from an appropriate domain of knowledge?
- Are the site authors or sources competent in the domain of the claim?
- Is the site relatively free from bias? What effect does the site’s bias have on the credibility of its arguments and information?
- Does the site refer to peer-reviewed sources for its claims?
- Does the site supply plausible arguments or explanations for the point of view being presented?
- What is the date of the information? Is it current and timely?
The Web presents more of an assessment challenge than the library, but also considerably more convenience and frequently more timeliness. Books in libraries have the advantage that publishers usually make some effort to ensure the accuracy of the claims in them. The mere fact that a book is published, though, does not provide adequate reason to trust its information. The evaluation of claims in books should be based on the criteria for evaluating any source.
Satirical websites present a particular challenge to users of the Web. Some of the so-called fake news spread during recent American elections was the result of people treating claims from satirical websites as credible. The intrinsic outrageousness of the claims did not seem to stop people from believing them. Slickness of appearance and political prejudice trumped common sense (Murdock, 2017).
Wikipedia. While most academics frown on students citing Wikipedia, most also use it, though not for academic citation. The key to making intelligent use of Wikipedia is to treat it as a portal. The articles are often intelligible and provide credible overviews, but the reliability of an article is a function of the references that support it. The footnotes not only provide a warrant for the claims but, more importantly, serve as an entrance to more credible sources. A lack of footnotes is a giveaway that the claims are not properly supported. In addition, Wikipedia can hardly be expected to provide reliable information on politically controversial issues.
5. Pedagogical suggestions
A useful exercise is to divide the class into two groups to research the information pro and con on a topic such as vaccination, and have them apply the criteria of assessment to the various sources. It is also useful to ask students where they go to find out information and how they tell whether it is credible.
Another useful assignment is to have students view the 20-minute YouTube version of “The Secret”[8] and identify the variety of fallacious appeals to authority it contains.
References
Bailin, S., & Battersby, M. (2016). Reason in the balance: An inquiry approach to critical thinking. Hackett Publishing.
Battersby, M. (2016). Is That a Fact?: A Field Guide to Statistical and Scientific Information. Broadview Press.
Begley, C. G., & Ellis, L. M. (2012). Drug development: Raise standards for preclinical cancer research. Nature, 483(7391), 531–533.
Boyd, R., & Silk, J. B. (2014). How humans evolved. WW Norton & Company.
Dancy, J. (2013). Moral particularism. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2013 ed.). https://plato.stanford.edu/archives/fall2013/entries/moral-particularism/
Freedman, D. H. (2010). Wrong: Why experts keep failing us–and how to know when not to trust them. Little, Brown.
Goldacre, B. (2014). Bad pharma: How drug companies mislead doctors and harm patients. Macmillan.
Goldman, A. I. (2001). Experts: Which ones should you trust? Philosophy and Phenomenological Research, 63(1), 85–110.
Gorman, S. E., & Gorman, J. M. (2016). Denying to the grave: Why we ignore the facts that will save us. Oxford University Press.
Hardwig, J. (1985). Epistemic dependence. The Journal of Philosophy, 335–349.
Harris, R. (2017). Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions. Hachette UK.
Hooker, B. (1998). Moral expertise. In E. Craig (Ed.), Routledge Encyclopedia of Philosophy. Routledge.
Ioannidis, J. (2013). Implausible results in human nutrition research. BMJ: British Medical Journal, 347.
Ioannidis, J. P. (2005a). Contradicted and initially stronger effects in highly cited clinical research. JAMA, 294(2), 218–228.
Ioannidis, J. P. (2008). Why most discovered true associations are inflated. Epidemiology, 19(5), 640–648.
Ioannidis, J. P. (2012). Why science is not necessarily self-correcting. Perspectives on Psychological Science, 7(6), 645–654.
Ioannidis, J. P. A. (2005b). Why most published research findings are false. PLoS Medicine, 2(8), e124.
Ioannidis, J. P., Ntzani, E. E., Trikalinos, T. A., & Contopoulos-Ioannidis, D. G. (2001). Replication validity of genetic association studies. Nature Genetics, 29(3), 306–309.
Ioannidis, J. P., Tarone, R., & McLaughlin, J. K. (2011). The false-positive to false-negative ratio in epidemiologic studies. Epidemiology, 22(4), 450–456.
IPCC – Intergovernmental Panel on Climate Change. (n.d.). Retrieved June 25, 2009, from http://www.ipcc.ch/about/index.htm
Kolbert, E. (2017). Why facts don’t change our minds. The New Yorker, 27(2017), 47.
Locke, J. (1948). An essay concerning human understanding, 1690.
Murdock, S. (2017, March 11). A Satire Website Posted Fake News To Trump Supporters. Many Believed It. Huffington Post. Retrieved from:
http://www.huffingtonpost.com/entry/a-million-trump-supporters-fell-for-this-absurd-fake-news-site_us_58c42653e4b0d1078ca7222e
Nichols, T. (2017). The death of expertise: The campaign against established knowledge and why it matters. Oxford University Press.
Pinker, S. (1997). How the mind works. New York: Norton.
Surgeon General of the United States. (2006). The health consequences of involuntary exposure to tobacco smoke. Atlanta, GA: US Department of Health and Human Services, Centers for Disease Control and Prevention, Coordinating Center for Health Promotion, National Center for Chronic Disease Prevention and Health Promotion, Office on Smoking and Health, 709.
Tetlock, P. E. (2017). Expert political judgment: How good is it? How can we know? Princeton University Press.
Walton, D. (2010). Appeal to expert opinion: Arguments from authority. Pennsylvania State University Press.
Walton, D. (2014). On a razor’s edge: evaluating arguments from expert opinion. Argument & Computation, 5(2–3), 139–159.
Walton, D. (2016). Evaluating expert opinion evidence. In Argument Evaluation and Evidence (pp. 117–144). Springer.
Walton, D. N. (1989). Reasoned use of expertise in argumentation. Argumentation, 3(1), 59–73.
EarthSky. (n.d.). What is the Earth’s age and how do we know? Retrieved August 26, 2017, from http://earthsky.org/earth/how-old-is-the-earth
- © Mark Battersby
- For a comprehensive scholarly review of the fallacy and its history, see Walton (2010).
- This position is of course that of the secular, post-Enlightenment view, which is the basis of critical thinking instruction. In many traditions, appeal to tradition, sacred texts, or the pronouncements of religious authorities is taken as “authoritatively” determining moral truth.
- Tetlock’s study of the ineptitude of “pundits” at predicting world affairs provides vivid examples (Tetlock, 2017).
- In his most recent article on appeals to authority, Douglas Walton recommends a similar list of critical questions (Walton, 2014): Field Question: Is E an expert in the field F that A is in? Opinion Question: What did E assert that implies A? Trustworthiness Question: Is E personally reliable as a source? Consistency Question: Is A consistent with what other experts assert? Backup Evidence Question: Is E’s assertion based on evidence?
- This touches not only on the debate about objectivity but also on the particularism vs. generalism debate about moral judgment. Again, a critical thinking course is not the place to address these philosophical issues, and any “appeal” to a morally wise person should be treated as “advice,” not expert opinion (Dancy, 2013).
- Interestingly, AI programs that learn to play games (as opposed to those that are programmed to play) are just such idiot savants—even the programmers often do not know what considerations caused the program’s decision.
- https://www.youtube.com/watch?v=zdtqLNeK6Ww