Obviously written collaboratively with an LLM. — dml
Known Knowns and Unknown Unknowns: Epistemic Blind Spots in AI
In the early 2000s, U.S. Defense Secretary Donald Rumsfeld famously distinguished known knowns, known unknowns, and unknown unknowns; completing the 2x2 matrix adds a fourth category, unknown knowns.
This “Rumsfeld quadrant” offers a useful lens for understanding the epistemic limits of both human and artificial intelligence.
LLMs think and pattern-match much as people do; they are not reasoning in any formal sense. One day soon they will be systematically fused with other AI models to improve reasoning.
Large language models (LLMs) today excel at regurgitating known knowns – facts and patterns explicitly present in large quantities in their training data. They can even acknowledge known unknowns, for example by stating when they lack information (though too often they fail to do so). Where current AI appears promising is in surfacing unknown knowns – subtle patterns or latent correlations in data that humans may not have noticed. An LLM can trawl through millions of documents and highlight implicit trends or biases that no individual scientist was consciously aware of.
However, unknown unknowns – the paradigm-shifters of science: genuinely novel hypotheses and discoveries that nobody has conceived of yet – elude LLMs. LLMs recombine existing knowledge, operating well ‘inside the box’. True leaps into the terra incognita of science remain rare. They presently (as of mid-2025) lack the generative imagination or experimental intuition to navigate the realm of unknown unknowns.
A hybrid AI approach could shed fresh light on these blind spots. By combining an LLM’s vast recall of prior knowledge with other AI approaches and automated sensing and experimental tools, researchers might better probe the edges of the unknown. Such systems could surface latent assumptions (unknown knowns) and suggest experiments in unexplored directions to chip away at known unknowns. In this way, AI might help researchers ask better questions, not just find better answers.
The Problem of Dimensionalisation in Science
My view of science is that progress occurs when we identify new dimensions for problems. Dimensionalisation reduces complex phenomena to measurable variables. This process is powerful, but it can constrain inquiry. In practice, many researchers find dimensionalisation difficult and ad hoc.
This presents an epistemic trap: we keep studying what is easily measured, making progress by measuring better and adding digits of precision within fixed frameworks, while more fruitful dimensions go unexplored. AIs are often limited to narrow tasks defined by training benchmarks – a limitation partly shaped by how problems are dimensionalised. If we always train humans or AIs on well-defined axes, we risk producing narrow specialists rather than interdisciplinary thinkers.
Hybrid AIs could help here by supporting flexibility in representation. A large language model might interface with symbolic logic or knowledge graphs to reframe a problem in multiple ways. This ability to shift the coordinate system of thinking could break rigid dimensional mindsets. By prompting alternative metrics or perspectives, such systems could help science evolve beyond its current cognitive constraints.
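As a minimal sketch of what such reframing might look like in code (the toy knowledge graph and its concepts below are purely illustrative, and a real system would put an LLM in the loop rather than merely composing a prompt):

```python
# A toy "reframing" step: use a knowledge graph to propose alternative
# dimensions for a research question. NetworkX is a real library; the
# graph contents and the prompt-composition step are illustrative.
import networkx as nx

kg = nx.Graph()
kg.add_edges_from([
    ("congestion", "travel time"), ("congestion", "emissions"),
    ("travel time", "accessibility"), ("emissions", "public health"),
    ("accessibility", "land value"),
])

def alternative_dimensions(concept: str, hops: int = 2) -> list[str]:
    """Concepts within `hops` of the start: candidate axes for
    re-dimensionalising the problem."""
    lengths = nx.single_source_shortest_path_length(kg, concept, cutoff=hops)
    return [c for c, d in lengths.items() if d > 0]

def reframe(question: str, concept: str) -> str:
    """Compose the prompt an LLM would receive, asking it to restate
    the question along each neighbouring dimension."""
    dims = ", ".join(alternative_dimensions(concept))
    return f"Restate the question '{question}' along each of: {dims}"

print(reframe("How do we reduce congestion?", "congestion"))
```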
Pattern Mimicry vs. Reasoning in Scientific Publishing
Scientific writing has increasingly become performative. Introductions follow expected forms; methods and discussions often mimic precedent. There is a running critique that publishing rewards pattern-matching over deep reasoning. Peer review and institutional incentives can create human instantiations of large language models – researchers who learn to produce papers that sound right without being right, or even close to right, often without truly innovative content.
Evidence backs this critique. Novel papers tend to experience longer peer-review times, and reviewers and editors often favour work that extends familiar paradigms over work that challenges them. This produces a conservative tilt – a proliferation of “safe” papers over field-changing work. Authors operate in an environment where editors and reviewers tolerate many false negatives in order to prevent any false positives, and, to succeed, authors adapt themselves to that frame.
This mimicry extends to style and reasoning. Authors may add complexity (and jargon, and needlessly long words) to signal novelty, as Michael Black has noted, rather than for substantive gains. The result is often papers that pass for innovation based on form rather than substance.
AI now contributes to this glut. There is growing concern about AI-generated ‘slop’ cluttering the literature. The worry is that LLMs make it too easy to produce something that looks good enough – aping the form of a scientific paper without the substance. If the current ecosystem already rewards mimicry, a tool that automates mimicry could turbocharge mediocrity. It’s pattern-matching all the way down, a hall of mirrors in which genuine reasoning struggles to find a foothold.
The glut has real costs. Finding human reviewers is increasingly difficult. I probably get review requests daily at this point – far more than my capacity allows.
Findings, which I edit, instituted an AI-use policy after I and others observed an overall rise (in other journals) in submissions that appeared to be partially or wholly machine-written. But with human-AI collaboration, it will be difficult to determine what the author conceived and what the AI produced. Nevertheless, the human is still held accountable – until the AIs set up their own journals (or we humans help them).
Toward Hybrid Intelligence: LLMs Meet Solvers, Knowledge Bases, and Logic
AI may be far from optimal today, but it is not going away. Can it be made better? What could hybrid AI offer as a counterweight? By combining LLMs with solvers, symbolic logic, knowledge bases, and causal models, we can build systems that reason, not just predict. Such systems can verify equations, query databases, and test logical consistency.
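To make this concrete, here is a minimal sketch of the ‘LLM proposes, solver verifies’ pattern. The claimed identities stand in for something an LLM might draft; SymPy, a real computer-algebra library, does the checking:

```python
# "LLM proposes, solver verifies": check a claimed algebraic identity.
# SymPy is a real library; the example claims are illustrative.
import sympy as sp

x = sp.symbols("x")

def verify_identity(lhs: sp.Expr, rhs: sp.Expr) -> bool:
    """True iff lhs - rhs simplifies symbolically to zero."""
    return sp.simplify(lhs - rhs) == 0

# A correct claim passes...
print(verify_identity(sp.sin(x)**2 + sp.cos(x)**2, sp.Integer(1)))  # True
# ...and a plausible-sounding but wrong one is caught.
print(verify_identity((x + 1)**2, x**2 + 1))                        # False
```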
Imagine writing a paper with a competent hybrid AI co-author. The AI checks statistics, cross-references databases, and flags assumptions. It suggests alternative explanations using known causal models. It acts not just as a writer, but a partner in reasoning.
Melanie Mitchell¹ notes that intelligence includes analogy-making – mapping one situation onto another. Hybrid AI could excel here by drawing cross-domain parallels, for example relating ecological networks to communication systems. This goes beyond what LLMs typically do.
Science Writing Reimagined
Hybrid AI could change the very form of scientific writing. Instead of static IMRaD (Introduction, Methods, Results, and Discussion) papers, we might see dynamic documents with embedded AI-driven notebooks. Readers could ask questions or run alternate analyses.
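A minimal sketch of one such ‘living’ result: instead of a static figure, the paper embeds a function readers can re-run under their own assumptions (the constant-elasticity form and the elasticity value below are illustrative, not from any actual study):

```python
# A "living" result: a published analysis readers can re-run inline.
# The demand model and the elasticity of -0.3 are illustrative assumptions.
def travel_demand(base_trips: float, price: float,
                  base_price: float, elasticity: float = -0.3) -> float:
    """Constant-elasticity demand: trips scale with (price/base_price)^elasticity."""
    return base_trips * (price / base_price) ** elasticity

# The analysis as published...
print(travel_demand(1000, price=2.0, base_price=2.0))  # 1000.0
# ...and a reader's alternate scenario, run in the same document.
print(travel_demand(1000, price=3.0, base_price=2.0))  # ~885.6
```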
Writing could become a dialogue with AI: proposing hypotheses, suggesting tests, identifying missing dimensions. Argumentation could become more rigorous. Instead of post-hoc justifications, authors would engage in reasoning, prompted by AI oversight.
Peer review could evolve as well. Instead of human reviewers alone, a hybrid system could check for logical consistency, data integrity, and citation coverage. Reviewers could focus on judgment and novelty. The human-AI partnership would improve quality and speed.
Human LLMs and AI Critics: The Peer Review Paradigm
Just like authors, reviewers are often human instantiations of LLMs – relying on pattern recognition and heuristics under time pressure. We skim, match to known templates, and often react without deep reasoning. The result is shallow reviews, vulnerable to bias.
Hybrid AI offers both critique and remedy. It can show how easily shallow reviews can be generated – and how a structured, reasoning-based system could do better. It can assist reviewers by flagging statistical issues, logical gaps, or missing literature.
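As one concrete example of such flagging, here is a minimal sketch of an automated statistical consistency check (SciPy is a real library; the tolerance threshold is an illustrative assumption):

```python
# One automated reviewer check: does a reported t statistic agree
# with its degrees of freedom and reported p-value?
from scipy import stats

def flag_inconsistent_t(t: float, df: int, reported_p: float,
                        tol: float = 0.005) -> bool:
    """True if the two-sided p implied by (t, df) differs from the
    reported p by more than `tol` - i.e., worth a human look."""
    implied_p = 2 * stats.t.sf(abs(t), df)
    return abs(implied_p - reported_p) > tol

# t = 2.10 on 28 df implies p ~ 0.045, so a reported p = 0.01 gets flagged.
print(flag_inconsistent_t(2.10, 28, 0.01))   # True  (flag for review)
print(flag_inconsistent_t(2.10, 28, 0.045))  # False (consistent)
```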
With the available and near-term technology, the ideal reviewer might be a human supported by a reasoning AI. The AI handles verification and breadth; the human exercises insight and judgment. This could raise standards, reduce delays, and focus peer review on substance.
At Findings, we are testing a (clearly labelled and openly documented) AI reviewer for format and style, though it also offers other advice. For now, authors are free to use or discard that advice.
From Pattern Mimicry to Paradigm Shift
Hybrid AI could catalyse a shift from pattern mimicry to genuine scientific reasoning. It can expose biases, suggest new dimensions, and assist in rigorous exploration. It can help make writing a tool for thinking, not just communication.
Rather than diminish the role of scientists, hybrid systems can augment it. By handling repetition and verification, they create space for insight. By prompting better questions, they guide discovery.
The promise is not just better papers, but better science. If we embrace hybrid AI carefully, we may move from a literature of patterns to a literature of ideas – from mimicry to meaning. The current frontier is not AI replacing scientists, but scientists equipped with tools that reason, reframe, and refine. If the near future of science is machine-assisted, let’s ensure the machines help us think better, not just faster.
And Beyond
Some argue we must avoid over-reliance. Carl Bergstrom warns that if we delegate to LLMs, the result is shallow analysis: “Writing is thinking”, and outsourcing the writing may mean outsourcing the thinking. I agree that until AI is superior to human intellect, it should remain subservient – a supplement, not a substitute.
Yet we all sort of expect that hybrid AIs will eventually come to outperform most humans consistently, including scientist-humans, in key aspects of scientific reasoning (they won’t outperform us personally, of course, just everyone else). I believe this will come sooner than many people expect. However, once AI is viewed as superior – not just at regurgitating the known knowns but across the whole scientific chain, from asking questions through answering them – the role of the human scientist (and reviewer, and editor, and so on) will steadily diminish, no matter how many people fight a rearguard action. We have yet to reach Epistemic Perdition, but we are well on our way.
FIN
¹ Melanie Mitchell co-authored the first paper in the Journal of Transport and Land Use: “Cities as Organisms: Allometric Scaling of Urban Road Networks”.