This recent post by Mel Andrews got me thinking this morning about AI in academia, and the risk of diminishing human thought.
The vision of human-out-of-the-loop science fundamentally mistakes what knowledge production is. It aims at the production of, wait for it, knowledge: a thing that definitionally involves an epistemic agent or agents, unless you are theistic about it.
The growing fascination with 'human-out-of-the-loop' science — the idea that machines could autonomously generate new knowledge without human intervention — rests on a fundamental category error: confusing the production of content with the production of knowledge and understanding.
The difference is not semantic; it’s ontological.
Without a knower, there is no knowledge.
After receiving the Nobel Prize, Max Planck toured Germany giving the same lecture over and over. Eventually his chauffeur, who had heard it delivered many times, joked that he could give the talk himself.
Amused, Planck agreed, and they swapped places. The chauffeur delivered the lecture flawlessly, until a professor in the audience asked a challenging question and Planck had to step back in.
Imitations work fine until reality intervenes. When understanding and wisdom are required, so is a human expert. The chauffeur could repeat the knowledge, but he couldn't apply it.
This feels very much like where we are today.
We live in a world obsessed with producing and automating knowledge — or rather content that looks like knowledge.
We’ve built systems that can copy and simulate expertise so convincingly that it’s easy to forget there’s no real understanding or wisdom underneath. Machines can generate data and information, even texts that sound genuinely knowledgeable. But regurgitating content and actual understanding are simply not the same thing.
As Albert Einstein put it, 'Any fool can know. The point is to understand.'
Data, Information, Knowledge & Wisdom
The DIKW pyramid (Data, Information, Knowledge, Wisdom) is a simple way to see how understanding develops from facts.
[Image: the DIKW pyramid. Credit: Matthew.viel, CC BY-SA 4.0]
Data is a raw, unrefined signal: the number 37.2°C on its own is just a value. Turning data into information means giving it context: 37.2°C is my current body temperature here on this cold autumnal morning. Knowledge adds relevance: this is slightly above normal, and it may indicate I'm getting a cold. Wisdom guides how I might act: I should probably stay in bed today.
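To make the layering concrete, here is a minimal sketch in Python of the same temperature example. It is purely illustrative: the class, thresholds, and advice strings are my own assumptions, not a real diagnostic rule, and every "knowledge" or "wisdom" step in it is a human judgement hard-coded in advance that the machine merely applies.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    value_celsius: float       # data: a raw number, meaningless on its own
    subject: str = ""          # information: context (whose temperature, and when)
    measured_at: str = ""

def interpret(reading: Reading) -> str:
    """Knowledge layer: relate the contextualised value to what counts as normal.
    The thresholds here are illustrative assumptions chosen by a person."""
    if reading.value_celsius >= 38.0:
        return "fever"
    if reading.value_celsius > 37.0:
        return "slightly elevated"
    return "normal"

def advise(assessment: str) -> str:
    """Wisdom layer: turn the assessment into a course of action.
    Again, the advice is a human value judgement baked in beforehand."""
    return {
        "fever": "Stay in bed and see a doctor if it persists.",
        "slightly elevated": "Take it easy today and re-check later.",
        "normal": "Carry on as usual.",
    }[assessment]

if __name__ == "__main__":
    this_morning = Reading(37.2, subject="me", measured_at="cold autumnal morning")
    print(advise(interpret(this_morning)))  # -> "Take it easy today and re-check later."
```

The point of the sketch is how little of the pyramid it actually climbs: the code applies rules, but deciding what counts as "normal", or that rest is the right response, was a judgement a person made before the first line was written.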
Climbing to the peak of this pyramid is a uniquely human act. Only people can make that leap from information to understanding: from knowing what, to knowing how, to asking and answering the essential why.
Machines can easily process data and organise information, but they don't, and can't, know what's worth understanding, why something matters, or what ‘better’ even means. They can simulate all of this, of course, but they have no stakes, goals, values, or experiences, simply because they are not alive.
And that’s why the upper layers of this pyramid — knowledge and wisdom — belong to us. They are where meaning lives, and meaning is why we move forward.
The point is to understand.
For people, understanding has a purpose. We learn so that we can survive and grow, together.
It’s how and why we evolve as individuals, as societies, and as a species. To move from our current state to a better one. To imagine and build the next more-desirable reality.
Confusing data with knowledge means mistaking accumulation for progress, and so we stop climbing the pyramid. Progress only happens when meaning enters the equation. When someone asks 'why?' and 'what next?'.
The goal of knowing is not to store facts but to create change. It’s how we transcend what we are — through understanding, curiosity, and intention.
Machines might be able to mimic understanding, just as Planck’s chauffeur mimicked his lecture. But if we ever forget the difference, we risk becoming chauffeurs ourselves: performing wisdom without truly possessing it.