
3.2: Locating Credible Sources

ENL1004 Course Learning Outcomes

  • Read analytically to interpret a variety of message types accurately (2).
  • Analyze information to determine purpose and meaning (2.1).
  • Use reading and other comprehension strategies (2.2).
  • Locate and evaluate information from a variety of sources (4.3).
  • Identify the value, limitations, and hazards of Generative AI and other transformative technologies (4.4).
  • Develop critical media literacy strategies to support research (4.6).

Once you’ve selected the appropriate research methodology, your next task is to search for sources that can be taken seriously by your audiences and, in so doing, narrow down your topic. Research is largely a process similar to sorting out wheat from chaff, then processing that wheat into a wholesome product people will consume. Appropriately using credible sources reflects well on your own credibility, whereas using suspicious sources undermines your own authority. Simply googling sources means that the results are algorithmically served up on the basis of factors such as your search history and location, not necessarily what’s best for your research purposes. Likewise, it’s impossible to know how AI chooses sources you ask it to find because you don’t know what training data it had access to. AI may also hallucinate by generating sources that don’t exist. In any case, it’s easier than ever to find suspect sources that have no place in the work you produce.

Graphic design of a four-stage writing process arranged like a clock with Preparing as the first 15-minute segment, Researching as the second 15 minutes, Drafting as the third 15 minutes, and Editing as the fourth 15 minutes. The second segment is blown up to show four sub-stages: 2.1 Selecting a Methodology, 2.2 Collecting Sources (circled), 2.3 Using Sources, and 2.4 Crediting Sources.
Figure 3.2: Our focus in this section is sub-stage 2.2 of the four-stage writing process: Collecting Sources.

Drawing on dubious sources for a research assignment makes you look uneducated, lazy, flaky, or gullible at best, or at worst conniving and deceptive. Though it’s important to bolster your work with quality research sources, figuring out what those are in the modern research environment can prove challenging. We’re in an age that some have dubbed the “post-truth era” where “fake news” churned out by clickbait-driven edutainment outlets can be a major determining factor in the course of history (White, 2017). Though advanced reasoning AI models hallucinate sources less than they did in the early days circa 2023-2024, you cannot yet completely rely on them not to make up nonexistent citations in their effort to give you something—anything—rather than admit ignorance or failure (Jarry, 2025). They aim to please. Building the critical-thinking skills to distinguish truth from lies, good ideas from bad, facts from propaganda, objective viewpoints from spin, reality from fantasy, and credible sources from dubious ones is not only an academic or civic duty but also key to our collective survival. Learning how to navigate these perilous waters is one of the most important skills we can learn in school.

College or public libraries and their online databases are excellent places to find quality sources, and you should familiarize yourself with their features such as subject guides and advanced-search filters. Even libraries are populated by sources outside the realm of respectability, however, because they cater to diverse stakeholders and interests by being comprehensive, including entertainment materials in their collections. They also have holdings that are laughably out of date and only of historical interest. Whether found via the library or on the open internet, the only real way to ensure that a source is worth using is to develop the critical-thinking skills to know what to look for when sorting the wheat from the chaff.

Video: Searching the Library with Page 1+ (Algonquin College Library, 2023)

Developing a good intuitive sense of what sources are trustworthy takes time, often through seeing patterns of approval in how diligent professionals rely on certain sources for credible information. If you continue to see respected professionals cite articles in Scientific American and The Economist, for instance, you can be reasonably assured of those sources’ credibility. If you see few or no professionals cite Popular Mechanics, Infowars, or Reddit but you do see non-professionals cite fantastic, sensational, or outrageous stories or information from them in social media, you have good reason to suspect their reliability. The same goes for sources regarding certain issues; if 97% of relevant scientists confirm that global climate change results from human activity (Cook et al., 2016), for instance, sources representing or championing the 3% opposition will be seen as lacking credibility. Patterns of source approval take time to track, but rest assured you can count on more immediate reliable methods for assessing credibility in the meantime.

The best shorthand for assessing any research source is the CRAAP Test. In the original conception of librarian Sarah Blakeslee at the Meriam Library, California State University, Chico (Blakeslee, 2010), “CRAAP” stands for the source-evaluation criteria of Currency, Relevance, Authority, Accuracy, and Purpose. If a source you find in the course of your research passes the CRAAP Test, you can be reasonably sure that its appearance in your work won’t undermine your credibility if you use it appropriately (see §3.4). The test works by having you systematically assess a source according to the criteria and considerations explained below.
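To see how systematic the assessment can be, here is a minimal scorecard sketch in Python. The 0-10 scale and equal weighting are illustrative assumptions only; Blakeslee's original test is a set of guiding questions rather than a numeric formula, and rubrics like Carleton's may weight criteria differently.

```python
# A minimal CRAAP Test scorecard. The 0-10 scale and the equal weighting
# of the five criteria are illustrative assumptions, not part of the test.
CRITERIA = ("Currency", "Relevance", "Authority", "Accuracy", "Purpose")

def craap_score(ratings: dict) -> float:
    """Average the five criterion ratings; a higher score suggests more credibility."""
    missing = [c for c in CRITERIA if c not in ratings]
    if missing:
        raise ValueError(f"Rate every criterion; missing: {missing}")
    return sum(ratings[c] for c in CRITERIA) / len(CRITERIA)

ratings = {"Currency": 9, "Relevance": 8, "Authority": 10,
           "Accuracy": 9, "Purpose": 7}
print(craap_score(ratings))  # → 8.6
```

A low average, or a very low rating on any single criterion, is your cue to leave the source alone.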

3.2.1: Currency

Depending on the topic, how recently the source was published can be a key indicator of credibility. A book on communications technology from 1959, for instance, is no longer a relevant authority on communications because technology has changed so much since then. A 1959 writing guide such as Strunk and White’s Elements of Style, however, is mostly still relevant because we still value its advice on writing concisely and because language hasn’t changed drastically since then. (More recent editions have dispensed with outdated advice like using masculine pronouns exclusively when referring to writers, however, since we now value writing that’s not gender-exclusive, as discussed in §2.2.1 above).

In technology fields generally, a source may be considered current if it was published in the past 5-10 years. In some sub-disciplines, especially in computing, currency may be compressed to more like 1-2 years depending on how fast the technology is advancing; in frontier AI, for instance, currency may be measured in months, weeks, and even days. Disciplines that advance more slowly may have major sources still current even after 15-20 years if nothing has since replaced them. Knowing this, you can adjust the year range of the search results you would like to see when using the library’s advanced-search filters so that it excludes sources too old for your purposes.

For webpages, look for the publication date and whether there’s a recent update. If you can find a date neither on the page itself nor in the URL (web address), try typing Ctrl + U and, when a page of HTML source code opens, search it (Ctrl + F) for “datepublished” or just “date” and the exact date published may appear there. Linkrot is also a decent test of currency. If a webpage contains broken links that lead to error pages when you click on them, this means no one has bothered to maintain or even delete it. Having lost their currency, such webpages are like the forgotten or unnoticed dusty corner cobwebs of the internet and should be left alone if other, more current sources are available.
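The date-hunting trick above can also be automated. The sketch below searches a page's HTML source for the same “datePublished” field you would otherwise hunt for with Ctrl + F; the patterns are illustrative guesses at common metadata formats (schema.org JSON-LD and meta tags), not a complete parser.

```python
import re

def find_publication_date(html: str):
    """Search page source for common machine-readable publication-date fields."""
    # schema.org JSON-LD metadata often includes "datePublished": "YYYY-MM-DD"
    match = re.search(r'"datePublished"\s*:\s*"([^"]+)"', html)
    if match:
        return match.group(1)
    # <meta> tags with a date-like property/name are another common location
    match = re.search(
        r'<meta[^>]+(?:property|name)="[^"]*date[^"]*"[^>]+content="([^"]+)"',
        html, re.IGNORECASE)
    return match.group(1) if match else None

sample = '<script type="application/ld+json">{"datePublished": "2023-08-14"}</script>'
print(find_publication_date(sample))  # → 2023-08-14
```

If neither pattern turns anything up, fall back on the manual Ctrl + U inspection described above, or treat the page's currency as unknown.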

A final indicator of currency for online sources is the overall design quality of the website. The attractiveness of a site may be subjective, but a user-friendly and modern design with contemporary features suggests that money was spent relatively recently on improving its quality. If the site looks like it was designed 10-15 years ago and hasn’t had a facelift since, you can suspect that it’s lost its currency. Some websites look dated despite their content still being relevant, however, because that content doesn’t change drastically over time. Like Strunk and White’s Elements of Style mentioned above, sites such as The Mayfield Handbook of Technical & Scientific Writing (Perelman et al., 1998), can still prove useful as free writing guides despite being published in the Web 1.0 era before most of their current student users were born.

3.2.2: Relevance

If the research source you find is written by and for people related to the academic or industry discipline associated with your research topic, it is more likely to be appropriate for your needs. Assess how closely the book or article pertains to your research topic, first by checking its title, then by looking for more clues throughout. If you are writing a report on architectural trends in the twentieth century, for instance, an article you find on the topic in the academic Journal of the Society of Architectural Historians would be highly relevant. However, an engineering journal article that mentions in passing structural defects in a building in the brutalist style would be less useful to you despite being highly relevant to architectural engineers.

Consider also various demographic factors related to the apparent intended audience such as age and educational level. If you are doing an assignment on best practices for dealing with problematic behaviour in the kindergarten classroom for an Early Childhood Education course, for instance, books and articles aimed at teaching professionals would be right on the mark. Drawing classroom-applicable lessons from a Robert Munsch story book about Halloween that takes a lighthearted approach to misbehaviour (e.g., Boo!), however, would undermine your credibility because it’s intended only to delight early elementary school-age children.

3.2.3: Authority

If the author is identified by name and credentials, you can verify whether they are expert enough on your research topic to be a credible source. Generally, the higher the credential or industry position an author holds, the more credible you can expect them to be. An author with a PhD (doctoral credential) in psychology will be a credible authority on matters of psychology because they have legitimate expertise. A talk-show host, on the other hand, lacks credibility and expertise on such topics since they don’t have the same years of focused study, training, and clinical practice in the field. The PhD is a more advanced credential than a master’s degree, which is more advanced than an undergrad (four-year bachelor’s) degree, which is more advanced than a college diploma or certificate, which is more advanced than a high school diploma. In the absence of more detailed information, you can roughly gauge how credible an authority someone is on a topic based on where they fall on that educational spectrum.

Years of successful industry experience can also be equivalent to a trustworthy credential. If the author of a trade journal article has 35 years of experience in the industry, 20 of those as an owner of a thriving business, you can expect expert knowledge from them if the topic they publish on is directly related to their profession. This is not the case if a writer ventures outside their area of expertise, such as a professor of the philosophy of religion and theology publishing a study of the “inside job” conspiracy theory of the 9/11 terrorist attacks. However, an author who conducts a sustained investigation into a topic with multiple articles or books that appeal to academics or other reputable audiences over several years increases the likelihood that they can be trusted as an authority on it.

What about blogs? A blogger can only be taken seriously if they are a working professional writing about their work and shouldn’t be relied on outside of their area of expertise. A journalist conducting a rigorous, balanced study of an emerging issue and publishing it in a blog rather than in a reputable publication, for instance, might turn out to be trustworthy on that topic. However, you would have to apply the other CRAAP criteria explained above and below to be sure. A blogging hobbyist might have some interesting things to say, but without expert training and credentials, their word doesn’t carry much weight. If a backyard astronomer discovers something major in the night sky, for instance, it takes verification and systematic cataloguing from credentialed astronomers employed by renowned institutions before the discovery is considered real.

Sometimes the author isn’t revealed on a webpage, perhaps because it’s a company or organization’s website, in which case your scrutiny shifts to the organization, its potential biases, and its agenda (see Purpose below). A research project on electronic surveillance, for instance, might turn up the websites of companies selling monitoring systems, in which case you must be wary of any facts or statistics (especially uncited ones, but even cited sources) they use. Distrust any sources whose supporting information looks like it was cherry-picked to help sell products and services.

Instead of checking the publisher as you would for a print source (see Accuracy below), you could consider the domain name. Websites with .edu or .gov URL endings usually have higher standards of credibility for the information they publish than sites ending with .com or .org, which are typically the province of commercial enterprises (as in the monitoring systems example above) and special interest groups with unique agendas. There are exceptions to this rule, so you cannot rely on it exclusively, but you can use it as one of many potential indicators of credibility or lack thereof.
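As a quick illustration of the domain-name heuristic, the sketch below maps a URL's suffix to a credibility hint. The hint wording is a paraphrase of the guidance above; as the text stresses, this is only one indicator among many and has exceptions.

```python
from urllib.parse import urlparse

# Rough credibility hints keyed by domain suffix. One signal among many,
# with exceptions; never a substitute for the full CRAAP Test.
SUFFIX_HINTS = {
    ".edu": "educational institution: usually higher publication standards",
    ".gov": "government body: usually higher publication standards",
    ".org": "organization: check for special-interest agendas",
    ".com": "commercial site: check for sales motives",
}

def domain_hint(url: str) -> str:
    """Return a rough credibility hint based on the URL's domain suffix."""
    host = urlparse(url).hostname or ""
    for suffix, hint in SUFFIX_HINTS.items():
        if host.endswith(suffix):
            return hint
    return "unknown suffix: apply the full CRAAP Test"

print(domain_hint("https://ocw.mit.edu/course"))
```

Running this on a commercial monitoring-system vendor's site would flag the sales motive discussed above, while a .gov crime-statistics page would earn a more favourable starting assumption.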

Although successful in being a comprehensive repository of knowledge, Wikipedia.org, for instance, is not generally considered credible and should therefore not appear as a source in a research document unless it’s for a topic so new or niche that no other credible sources for it exist. By the organization’s own admission, “Wikipedia cannot guarantee the validity of the information found [on their site].” The Web 2.0, user-generated nature of Wikipedia means that its articles are susceptible to vandalism or content changes inconsistent with expert opinion, and they aren’t improved by any formal peer-review process (“General Disclaimer”). Wikipedia sacrifices credibility for comprehensiveness. For these reasons, a Wikipedia article in a research report is a little laughable; few professionals will take you seriously if they see it there because you will look lazy for stopping at the first available source and picking the lowest-hanging fruit.

A Wikipedia article, or an AI prompted to fetch some good sources on your research topic, can nonetheless be a good place to start a research task. If you’re approaching a topic for the first time, use Wikipedia for a general introduction and a sense of the topic’s scope and key subtopics, and ask a capable AI to explain the major issues and point you in the direction of reputable authorities. (Wikimedia Commons is also a reliable source of images provided you credit them properly.) If you’re going to cite any sources, however, don’t stop at the article or AI summary; use the credible sources they draw on by scrolling down to the Wikipedia article’s References section and following the AI summary’s links, checking them out, and assessing them for credibility using the criteria explained throughout this section.

3.2.4: Accuracy

Factual accuracy, along with quality of writing and presentation, is an excellent guarantor of credibility. As you read through a research source that looks like it passes the other CRAAP criteria, do quick, informal online research (see §3.1) to verify the accuracy of the information being presented. If other sources that also look like they pass the CRAAP Test confirm such details, you can be more assured that your source represents a scholarly or industry consensus—that is, widescale agreement. You can also use AI as a fact-checker as long as you avoid models with a reputation for being especially biased, such as Grok (Booth, 2025). Information that appears to be corroborated only by fringe or disreputable sources (see Authority above) is dubious and should be left alone.

If a source identifies its own research sources through citations and references (see §3.5.2) and all of them meet the credibility standards outlined throughout this section, then you can be reasonably certain that the effort the source author made towards formal secondary research ensures their credibility. If the source doesn’t identify sources, however, or is vague about them (e.g., with expressions like “studies show that … ,” “according to experts, …” or “researchers have proven that …”), then you should question why the author hasn’t bothered to cite those research studies or name those experts. Of course, it may be because they don’t have the time and space to cite sources properly in the platform they’re writing for. However, it may also be because they’re lazy in their research, they’re embarrassed by sources they know wouldn’t pass the CRAAP Test, they’re making up claims of authority for self-serving purposes, or the piece itself is AI slop. Indeed, as the titles of Atlantic and Rolling Stone articles attest, “Science Is Drowning in AI Slop” (Anderson, 2026) and “AI Is Inventing Academic Papers That Don’t Exist—and They’re Being Cited in Real Journals” (Klee, 2025). With this in mind, it’s important that you be on the lookout for the telltale signs of AI slop articles (see Wikipedia’s “Signs of AI Writing” guide), especially fake citations, and keep such suspect sources out of your own work.

The quality of the writing is another good indicator of credibility because an error-free publication suggests that the source underwent an editorial process to ensure quality and respectability. A poorly written document, on the other hand, suggests that the author worked alone (perhaps because they submitted their work for publication but it was rejected) and wasn’t a strong enough writer to proofread without help. Worse, a poorly written document that passed through multiple hands suggests that no one involved in its publication was educated enough or cared enough about details to bother correcting writing errors. Because there’s a connection between the quality of one’s writing and the quality of their thinking, a source that’s rife with errors often betrays the scattered, confused, or careless mind of its author. Leave such sources alone.

Another good barometer of whether a research source is credible is its publisher. If the publisher is an established, long-running, big-city (e.g., New York or Toronto) or university press with a large catalogue, you can be reasonably assured that the source underwent an editorial process that helped improve its validity. An editorial process means that more people besides the author reviewed the work for quality assurance prior to publishing. When a source undergoes the peer-review process as conducted by reputable academic journals and publishers, the author is required to make changes suggested by credentialed experts in the field coordinated by the publisher. This process ensures that author errors are corrected before the text is published and hence improves both its quality and credibility. A self-published (“vanity press”) book lacking that constructive criticism, however, wouldn’t necessarily have had the benefit of other people moderating the author’s ideas and pushing them towards expert consensus. This is why you must steer clear of the AI-written slop overrunning online booksellers (Donaldson, 2025). When using advanced-search filters in your library database, click the box that says you want only peer-reviewed journal articles to narrow your search results down to a manageable number of credible sources.

A publisher that isn’t a university press or that operates outside of the expensive New York City zip code does not necessarily lack credibility. However, you may want to do a background check to ensure that it’s not a publisher with a catalogue rife with AI slop or more traditionally toxic output such as white-supremacist, conspiracy theorist, climate change-denying, or extremely partisan literature. Likewise, if you see that the source is sponsored and/or promoted by special interests like Big Oil, Big Pharma, or a far-left extremist group, for instance, your suspicions should be raised about the validity of the content. Run a quick, informal background check on the publisher by looking up their website and some other sources on them such as Wikipedia articles via its “List of English-language Book Publishing Companies” and “List of University Presses” articles. See also the Cornell University Library’s “Distinguishing Scholarly from Non-Scholarly Periodicals: A Checklist of Criteria.” Again, you can also use an unbiased AI appropriately and responsibly as a publisher reputation checker.

3.2.5: Purpose

If the author or authors of a source you find through research clearly and accurately state their purpose, goal, or agenda, and this aligns with your needs (see Relevance above), then the source is more likely appropriate. Recall that an important preliminary step in the writing process is to know your own purpose—whether to inform, persuade, pitch or sell, motivate, amuse, etc. (see §2.1). When your purpose is to both inform and persuade your reader, for instance, you will be most convincing if you are objective, logical, and use information from authoritative secondary sources appropriately. Likewise, if a research source you find is upfront and transparent about identifying the other sides of a debate and convincingly challenges them with strong evidence and sound reasoning, their work is worth considering for inclusion in your work. A source being mysterious about its purpose or an author taking extreme positions and arguing entirely on one side of a debate on which experts disagree, however, raises a red flag. If the author obscures the fact that they are weighing in on a controversy, dismisses alternative points of view out of hand, offers dubious arguments driven by logical fallacies, simplifies complex issues by washing out any nuance, or appears to be driven more by profit motive than dedication to the truth, then “buyer beware.” Using such an extremely slanted source will undermine your own credibility.

Company websites, especially for smaller businesses, are generally suspect because their main goal is to attract customers and ultimately profit. They’re not going to focus too much on information that may give potential customers reason to think twice no matter how legitimate it is. A home security alarm company, for instance, is probably not going to post crime statistics in a market that has record-low criminal activity because people will conclude that home security is a non-issue and therefore not worth spending money on. The company is more likely to sidestep rational appeal and prey instead on fears and anxieties by dramatizing scenarios in which your home and loved ones are violated by criminals. If the company website focuses on education, however, by explaining what to look for to assess the credibility of the professional you’re seeking, then you are probably looking at a successful operation that does quality work and doesn’t need to fleece you in order to survive.

Using AI As a CRAAP Test Assistant

Before you throw your hands up in frustration, thinking, “How can I possibly do all of the above? It all sounds like too much work!” just remember that seasoned professionals develop an intuition for sniffing out quality source material after years of systematically applying the CRAAP Test throughout the research process. Eventually, they incorporate the criteria into their research method so seamlessly that a quick glance at a research source is all they need to assess whether it’s credible or raises enough red flags that it should be left alone. Until you get there yourself, you have a wonderful tool in AI to expedite the learning process. If your library research serves up several potential research sources based on titles that look relevant enough to your topic, you can run the titles, URLs, article PDFs, or whatever other materials are associated with each source through a reasoning AI model. By prompting it to give you an evaluative breakdown explaining how the source fares against the CRAAP criteria and paying close attention to its reasoning for each assessment criterion, you’ll soon get the hang of doing the same yourself without needing to prompt AI to do it for you.

Key Takeaway

Investigating and narrowing down a research topic involves using databases to locate reputable sources and applying the CRAAP Test to assess them for currency, relevance, author credibility, accuracy, and purpose.

Exercises

1. Choose a research topic based on an aspect of your professional field that piqued your attention in your other courses in the program. Use your college library’s search website to locate several sources that may inform an assignment on that topic based on how closely their titles align with your topic. Fill out Carleton University’s CRAAP Test rubric (Bufton, 2021) for each source. How do their final scores compare?

2. Consider a recent controversy in the news that most media outlets have covered. Assemble articles from a variety of outlets throughout Canada, the United States, and even internationally, including those with major audience share like the CBC, CNN, Fox News, and the Guardian, as well as some on the fringe. First compare the articles to identify the information that’s common to them all, then contrast them to identify the information and analysis that distinguishes them from one another. What details or context do they leave out? What spin do they throw on agreed-upon facts? Ultimately, what conclusions can you draw about how bias factors into the reportage of world events?

References

Algonquin College Library. (2023, August 14). Searching the Library with Page 1+ [Video]. https://www.youtube.com/watch?v=oZql-BJTZf0

Anderson, S. (2026, January 22). Science is drowning in AI slop. The Atlantic. https://www.theatlantic.com/science/2026/01/ai-slop-science-publishing/685704/

Blakeslee, S. (2010, September 17). Evaluating information—Applying the CRAAP Test. https://library.csuchico.edu/sites/default/files/craap-test.pdf

Booth, R. (2025, November 3). In Grok we don’t trust: Academics assess Elon Musk’s AI-powered encyclopedia. The Guardian. https://www.theguardian.com/technology/2025/nov/03/grokipedia-academics-assess-elon-musk-ai-powered-encyclopedia

Bufton, M. A. (2021). Keeping score: Do your online sources pass the CRAAP test? https://library.carleton.ca/sites/default/files/2021-10/2021%20CRAAP%20test%20rubric%20final.pdf

Cook, J., et al. (2016, April 13). Consensus on consensus: A synthesis of consensus estimates on human-caused global warming. Environmental Research Letters 11, 1-7. http://iopscience.iop.org/article/10.1088/1748-9326/11/4/048002/pdf

Cornell University Library. (2025, June 5). Distinguishing scholarly from non-scholarly periodicals: A checklist of criteria. http://guides.library.cornell.edu/scholarlyjournals

Donaldson, K. (2025, July 10). Generative AI is turning publishing into a swamp of slop. Paste. https://www.pastemagazine.com/books/publishing/generative-ai-is-turning-publishing-into-a-swamp-of-slop

General disclaimer. (2026, January 9). In Wikipedia. https://en.wikipedia.org/wiki/Wikipedia:General_disclaimer

Jarry, J. (2025, September 19). AI comes for academics. Can we rely on it? McGill Office for Science and Society. https://www.mcgill.ca/oss/article/critical-thinking-technology/ai-comes-academics-can-we-rely-it

Klee, M. (2025, December 17). AI is inventing academic papers that don’t exist—and they’re being cited in real journals. Rolling Stone. https://www.rollingstone.com/culture/culture-features/ai-chatbot-journal-research-fake-citations-1235485484/

List of English-language book publishing companies. (2026, February 5). In Wikipedia. https://en.wikipedia.org/wiki/List_of_English-language_book_publishing_companies

List of university presses. (2026, January 6). In Wikipedia. https://en.wikipedia.org/wiki/List_of_university_presses

Perelman, L. C., Paradis, J., & Barrett, E. (1998). The Mayfield handbook of technical & scientific writing. https://www.mit.edu/course/21/21.guide/home.htm

Signs of AI writing. (2026, February 9). In Wikipedia. https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing

White, A. (2017, January 10). Fake news: Facebook and matters of fact in the post-truth era. Ethics in the News: EJN Report on Challenges for Journalism in the Post-truth Era. https://ethicaljournalismnetwork.org/ethics-in-the-news-introduction

License


Communication at Work Copyright © 2019-2026 by Jordan Smith, PhD is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.