
We have diagnosed the disease and autopsied the digital corpse. We know that the databases are compromised by much more than just fake AI citations, and that the “publish or perish” economy in academia has created an industrial-scale market for hijacked journals and fabricated references. But theoretical outrage does not help me on a Monday morning when twenty undergraduates are staring at me, waiting to learn how to write a research paper. They need to know how to use citations correctly or they will fail the course (my rules!). And I am not going to ‘teach’ them the basic mechanics of citation formatting, because the formats change so often; I need to teach them the system, not the steps.
We cannot simply tell students to “use good sources” when the very architecture of a “good source” has been indexjacked. The old “Academic Journal Good/Blog Bad” dichotomy (which was problematic anyway) is utterly broken now. Academic journals can no longer be trusted once they have been hijacked, and we have known the rot runs deeper for a long time, from the Sokal Squared experiments back to John Ioannidis’s 2005 paper, “Why Most Published Research Findings Are False”. The Replication Crisis has been a crack in the dam of scientific knowledge for twenty years, but because most academic publishing is privately owned and profit-driven, the general mindset is to simply keep asking academics to stand in line, plugging the leak with more flimsy papers.
If we want to encourage future generations to think critically and respect academic sources, we have to change the pedagogy. We have to execute what I call the Forensic Flip: shifting the classroom focus from constructing arguments to actively deconstructing sources.
Here is exactly how I run the live citation audit workshop across my first-year cohorts to condition them early.
Step 1: Gamifying the Metric
Before we can teach them to spot a fake AI citation, they need to care about real ones. I open the workshop with a deceptively simple question: What is a citation, and why does it actually matter? Usually, this is met with silence or vague, bored mumblings about “not plagiarising.” So, we make it fun. We gamify it.
I project Google Scholar onto the screen and have the students look up the faculty members in our own department. We turn it into a live race to see who has the highest h-index and the most citations. (I won’t lie, it is particularly fun because I almost always win this race). Suddenly, citations stop being abstract formatting rules and become tangible currency. They see that real people—the ones grading their papers—rely on these metrics for their careers, their authority, and their livelihood. It personalises the stakes of academic publishing. It also highlights the absurdity and primes them for why this is an industry with a rotten core.
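If you want to show the class the mechanics behind the leaderboard rather than just the number, the h-index is simple enough to compute live. Here is a minimal Python sketch; the citation counts are invented purely for illustration:

```python
def h_index(citation_counts: list[int]) -> int:
    """The h-index: the largest h such that at least h papers
    have been cited at least h times each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Invented citation counts for six papers, purely for illustration:
print(h_index([48, 20, 11, 6, 3, 1]))  # -> 4: four papers with at least 4 citations each
```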
Step 2: The Source Hierarchy
Once they understand the economy of citations, I ask them to rank what they consider “good” and “less good” sources. Predictably, the initial hierarchy they build is terrifying. They genuinely believe a newspaper article, a dictionary definition, or a well-formatted Wikipedia page represents the pinnacle of research. I explain that I love Wikipedia, and regularly pay money to the foundation, even without those emails from lovely Jimmy Wales. I think Wikipedia is one of humanity’s crowning achievements. But it is merely a starting page; it is the doormat of the rabbit hole they are about to enter.
This is where I break their hearts and explain the mechanics of peer review. We discuss why a dictionary is just a record of usage, not a theoretical framework, and why a newspaper reports on an event rather than providing a rigorous analysis of it.
Step 3: The Misinformation Anchor
To ground this practically, I send them away to find three academic sources relevant to their current topic. In Semester 1, the overarching theme of our writing classes is Mis- and Dis-information, anchored by a close reading of Bente Kalsnes’s (2018) work on “Fake News.”
By forcing them to grapple with Kalsnes’s definitions of systemic disinformation early on, they are being actively conditioned to distrust the digital ecosystem. We spend the first semester studying how information is weaponised and manipulated online. By the time they reach Semester 2—where they are writing the literary essays on George Orwell that ultimately spawned this entire “Propulsion” debacle—they are primed to understand that deception is everywhere, and it happens in the library, too.
But knowing that fake news exists is very different from holding a hijacked journal in your hands. That is where the real bloodshed begins.
Step 4: The Autopsy (Ripping Open the Carcass)
You cannot teach students to spot fraud in the abstract; you have to show them the crime scene. So, I project the Journal of Propulsion Technology paper onto the main screen. I do not tell them what is wrong with it. I just let them read the title, the journal name, and the abstract, and wait for the cognitive dissonance to set in.
If they look sheepish, it is because they already know they have fished up some rotten boots and not a nice marlin. They thought they were dragging a prize catch out of the academic ocean, but the smell is undeniable.
This is when we rip open the rotten carcass of the PDF together. We walk through the exact forensic teardown we did in Part II of the Propulsion Papers. I show them the semicolon catastrophe in the very first sentence. I point to the bibliography and highlight the screaming ALL CAPS and the dangling “JSTOR” metadata hanging off the end of the citations. I ask them to explain how an analysis of Animal Farm logically belongs next to an article with a schematic for a solid-fuel booster.
The goal here is not to humiliate the student who found it; it is to shatter the cohort’s blind faith in the authority of the PDF format. Once they see that the institutional databases they inherently trust are compromised—that the gatekeepers are asleep—they finally understand why I am being so pedantic about citations. I explain to them what a Hijacked Journal is, what indexjacking is, and I also tell them that to be able to Find something, you need to be able to Filter.
Step 5: The Smell Test (The Heuristics)
Once they realise the library is currently on fire, you have to hand them the fire extinguisher. You cannot expect first-year undergraduates to memorise Beall’s List or read Anna Abalkina’s latest research on indexjacking. You have to give them practical heuristics.
I give them a four-point checklist for authenticity, The Smell Test (a rough sketch of how its mechanical checks might be automated follows the list):
- The Source Match: Does this journal look right? Do the pieces match up: does the article’s topic actually fit the journal’s title and stated scope?
- The Metadata Sweep: Is it easy to cite? Quite often students ask me how to cite something when they don’t know the author or date. Sorry to be a snob, but the best course of action there is to bin it and find something that gives you the information needed to verify it.
- The Formatting Maven: Look closely at the bibliography before you even read the introduction. Are there hanging artefacts from a “Cite This” button? Are the fonts inconsistent? If the journal editors did not care enough to format the references, they certainly did not care enough to peer-review the science.
- The Bot-Speak Radar: Does the abstract actually say anything? We look for the hollow, algorithmic filler that plagues the content-mill ecosystem. If the paper relies entirely on sweeping generalisations—like “vividly illustrates the cyclical nature of political revolutions”—without engaging in specific, rigorous textual analysis, it is likely automated slop.
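None of this needs software, and I run it entirely by eye in class, but the mechanical parts of the checklist can be roughly automated. A sketch of what a pre-screen over bibliography entries might look like, assuming the references have already been extracted as plain strings; the red-flag patterns are illustrative guesses on my part, not a validated ruleset:

```python
import re

# Illustrative red-flag patterns only; a real screen would need a much richer ruleset.
DANGLING_ARTEFACT = re.compile(r"\b(JSTOR|Cite this|Retrieved from)\s*$", re.IGNORECASE)
SHOUTED_TITLE = re.compile(r"\b[A-Z]{2,}(?:\s+[A-Z]{2,}){2,}\b")  # three or more ALL CAPS words in a row
HAS_YEAR = re.compile(r"\b(19|20)\d{2}\b")

def smell_test(reference: str) -> list[str]:
    """Return the red flags raised by a single bibliography entry."""
    flags = []
    if DANGLING_ARTEFACT.search(reference):
        flags.append("dangling database metadata (copy-paste artefact)")
    if SHOUTED_TITLE.search(reference):
        flags.append("screaming ALL CAPS in the citation")
    if not HAS_YEAR.search(reference):
        flags.append("no publication year, so hard to verify")
    return flags

# An invented entry modelled on the teardown in Step 4:
entry = "ORWELL AND THE CYCLICAL NATURE OF REVOLUTION. Journal of Propulsion Technology. JSTOR"
print(smell_test(entry))
```

The Bot-Speak Radar is deliberately missing from the sketch: hollow prose is exactly the part of the checklist that still needs a human nose.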
They need to understand that reading an academic paper is no longer just about extracting quotes to support their thesis; it is about verifying the structural integrity of the vessel holding those quotes.
Step 6: The Peer Defence (The Traffic Light Audit)
Once they have the heuristic tools, we move to the main event. Students must bring the raw PDFs of the three sources they plan to use for their essay into the classroom. Hopefully, after the autopsy, they have all gone away to find fresher catches; we learn best from mistakes, so the intention here is that they DO need to go away and re-do their search.
Back in the class, they swap papers with a partner. Their job is not to read their partner’s sources for content or synthesis. Their job is to act as hostile forensic auditors. They actively try to debunk their partner’s sources using the Smell Test. They have to verify the structural authenticity of the text before anyone is allowed to actually use the knowledge inside it.
To make this tangible, we quantify the audit using a traffic light system (a small sketch for logging the verdicts follows the list):
- Red (The Rotten Boots): The source fails the URL check, has mangled metadata, or reeks of algorithmic bot-speak. It is a hijacked journal, a predatory scam, or a hallucinated document built on fake AI citations. The auditor flags it, and the student must throw it back into the digital ocean.
- Yellow (The Question Mark): The source is borderline. Perhaps it is a non-peer-reviewed pre-print on ResearchGate, or an obscure open-access repository that hasn’t been indexed yet. It isn’t necessarily slop, but it requires a secondary, manual verification from me before it can be used. I am a little conflicted here, though, because many universities now publish undergraduate theses in internal repositories on their library websites. This means more information in circulation, and an undergraduate paper is not automatically a bad one, but it has not gone through peer review, so these papers get the same scrutiny: checking that the supervising professor is a real person, checking the university itself. At the same time, I do not want to undermine the undergraduate essay; why would my students bother if their own output is to be rejected out of hand? So I accept undergraduate papers being cited in other undergraduate papers, just as I would accept an MA thesis being cited in another student’s MA. To deny the validity of the work they are doing would be a mistake, but it must be handled with care.
- Green (The Marlin): The source passes the audit. It has a clean DOI, professional formatting, and human-authored prose that engages in actual analysis. It is an authentic piece of academic literature.
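For colleagues who want a paper trail of the audit, the verdicts are easy to log, and the one mechanically checkable item on a Green, the DOI, can be tested against the public Crossref API. The endpoint below is real; the rest is a hypothetical sketch of how the check might be wired up, not a tool I actually hand to students:

```python
from enum import Enum
import urllib.error
import urllib.request

class Verdict(Enum):
    RED = "rotten boots: throw it back"
    YELLOW = "question mark: manual verification needed"
    GREEN = "marlin: cleared for use"

def doi_is_registered(doi: str) -> bool:
    """Ask the public Crossref API whether a DOI record exists.
    NOTE: a 200 response means Crossref knows the record; it does NOT
    prove the PDF in your hand is actually that record."""
    url = f"https://api.crossref.org/works/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.URLError:
        # An unknown DOI raises HTTPError (404), a subclass of URLError.
        return False

# Ioannidis (2005), from the reference list below: a real, registered DOI.
# (In class the DOI check is one input among several, never the whole verdict.)
verdict = Verdict.GREEN if doi_is_registered("10.1371/journal.pmed.0020124") else Verdict.RED
print(verdict.value)  # -> marlin: cleared for use
```

Note the caveat in the docstring: a resolving DOI proves a record exists, not that the PDF in your hand is that record. Hijacked journals trade on exactly that gap.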
We gamify the survival rate. Anyone who walks out of the workshop with three verified Greens is a winner for the day. But I remind them of the ultimate stakes: you need five verified Greens to get an ‘A’ on the final research paper.
Step 7: The Comprehension Clause (AI as a Tool, Not an Endpoint)
Finding a Green source is only half the battle; the final hurdle is proving they actually understand it.
Once a source survives the peer audit, the student must verbally explain the core argument of the paper to their partner. This is my firewall against the mindless “copy-paste” culture. They have to show they actually read the paper and didn’t just get an AI to summarise it for them at the last minute.
I am very clear with them about the role of technology here: using an AI like ChatGPT to summarise a dense academic paper to aid your understanding is highly encouraged. That is what these tools are good for. If a 19th-century theoretical framework is giving them a headache, they absolutely should ask an LLM to explain it to them like a five-year-old. But that AI summary is a stepping stone; it is not the endpoint. The endpoint is the student synthesising that understanding and defending it in their own human voice. They must also link the argument to their own personal opinion, heavily utilising the first person. Many of them have been (inexplicably) taught (by dinosaurs) that they should not use the first person in an academic essay.
The Aftermath
By the end of the workshop, the classroom floor is littered with discarded Red sources. They have thrown the rotten fish (red herrings) and old boots back into the algorithmic sea.
The goal of this entire exercise—from the Kalsnes disinformation readings to the autopsy of the hijacked Propulsion paper, right down to this gamified peer defence—is to build a perimeter of authenticity around their writing process, arming them against the flood of hijacked journals and fake AI citations. Hopefully, they emerge from the workshop a little more cynical about the Ouroboros of modern content production, a little more protective of their own intellectual integrity, and, quite frankly, a little less idiotic.
They finally understand that in the current academic landscape, you cannot just be a reader. You have to be a detective.
Please let me know in the comments below if you have any thoughts or reflections. I would love to hear how you are handling these issues in your own classes, or what your thoughts are on the whole messy situation.
References
Ioannidis, J. P. A. (2005). Why Most Published Research Findings Are False. PLOS Medicine, 2(8), e124. https://doi.org/10.1371/journal.pmed.0020124
Kalsnes, B. (2018). Fake News. In M. Powers (Ed.), Oxford Research Encyclopedia of Communication. Oxford University Press.
