In the world of home food delivery, 'ghost kitchens' have mastered the art of culinary deception: A single warehouse operation can present itself as many different restaurants on popular delivery apps. Tony's Italian Kitchen. Seoul Street Tacos. Brooklyn Burger Bar. Each with its own unique branding, product shots, and carefully crafted identity.
Order from any of them and the same industrial kitchen prepares your meal, the same riders deliver it, and only the cuisine-appropriate packaging changes to match your expectations.
A strikingly similar phenomenon is quietly taking over academic publishing, except instead of rebranding the same kitchen, entrepreneurs are rebranding the same AI.
What most don't realise is that many of these "breakthrough" tools are nothing more than expensive facades wrapped around the same foundation models (ChatGPT, Claude, etc.) that everyone else is using.
Introduction to Wrappers
These products are called 'wrappers', and understanding what they are and why they can be so problematic is crucial for anyone serious about research integrity.
Think of a wrapper as a fancy storefront for someone else's products. When you visit a tool that promises to 'revolutionise academic writing' or 'analyse research data with advanced AI,' you're often not encountering new artificial intelligence at all.
The confidence trick
Imagine we use 'Super Science Reviewer AI Plus', a new product which claims to use unique cutting-edge AI technology to review our manuscript before submission, give expert tailored advice, and get us published in our journal of choice. All for just $25 per review.
That's quite a bargain, even if it takes 2-3 turns to get it right.
So we upload our manuscript, which is quietly combined with hidden custom instructions and sent to OpenAI. A response bounces back via 'Super Science Reviewer AI Plus', presented as its own advice and insights.
This is nothing but sleight of hand.
'Super Science Reviewer AI Plus' contributes zero value or intelligence to this exchange. It's a middleman transaction with some pretty UI and clever marketing copy, charging premium rates to access the same AI system we could use directly for a fraction of the price.
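In code, the entire 'product' can be this thin. What follows is a minimal sketch, not any real company's implementation: it assumes the standard OpenAI Python SDK, and the product name, hidden prompt, and function are all invented for illustration.

```python
# Hypothetical wrapper backend. Only the OpenAI SDK calls are real;
# every name here is invented for illustration.
from openai import OpenAI

client = OpenAI()  # authenticated with the wrapper operator's own API key

# The 'proprietary technology': a hidden system prompt the user never sees.
HIDDEN_INSTRUCTIONS = (
    "You are an expert peer reviewer. Assess the manuscript's methods, "
    "novelty, and suitability for the target journal, and suggest revisions."
)

def review_manuscript(manuscript_text: str) -> str:
    """Forward the user's manuscript to OpenAI and relay the answer back."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": HIDDEN_INSTRUCTIONS},
            {"role": "user", "content": manuscript_text},
        ],
    )
    # Returned to the user rebranded as 'Super Science Reviewer AI Plus'.
    return response.choices[0].message.content
```

Everything the user pays for is in those few lines: a pass-through call plus a prompt.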
The gold rush: why AI wrappers are suddenly appearing everywhere
Today OpenAI charge ~$1.25 to process 1M tokens: roughly 750,000 words. That's about $0.0000017 per word.
Our 5,000-word manuscript costs less than a cent to process (about $0.008), yet we paid $25 per check. A markup of roughly 3,000x seems outrageous, bordering on immoral to me.
Figure: Understanding LLM Token Counts
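To spell out the arithmetic, here is the back-of-the-envelope sum, assuming the common rule of thumb that one token is roughly 0.75 words:

```python
# Back-of-the-envelope cost of a single 'review', using the figures above.
PRICE_PER_MILLION_TOKENS = 1.25   # USD, OpenAI's advertised input price
WORDS_PER_TOKEN = 0.75            # rough rule of thumb

manuscript_words = 5_000
manuscript_tokens = manuscript_words / WORDS_PER_TOKEN              # ~6,667
api_cost = manuscript_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

print(f"API cost per review: ${api_cost:.4f}")       # ~$0.0083, under a cent
print(f"Markup at $25:       {25 / api_cost:,.0f}x")  # ~3,000x
```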
Unlike developing genuine AI systems—which demands years of research and deep expertise—a wrapper can go from napkin sketch to market-ready product in a matter of days.
The business model writes itself: subscribe to OpenAI's API for cents per transaction, build a simple web interface, craft domain-specific custom instructions, add branding, deploy on cheap hosting.
Creating a wrapper requires no real expertise, no breakthrough research, no specialised knowledge beyond basic web development.
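The 'simple web interface' step is barely an exaggeration either. Here is a hypothetical sketch of the entire product layer as a single FastAPI endpoint, reusing the pass-through function from the earlier snippet:

```python
# Hypothetical product layer: one endpoint fronting the pass-through call.
from fastapi import FastAPI
from pydantic import BaseModel

from reviewer import review_manuscript  # the earlier sketch, as a hypothetical module

app = FastAPI(title="Super Science Reviewer AI Plus")

class ReviewRequest(BaseModel):
    manuscript: str

@app.post("/review")
def review(req: ReviewRequest) -> dict:
    # Charge $25; spend under a cent; return someone else's intelligence.
    return {"review": review_manuscript(req.manuscript)}
```

Add a payment form, a logo, and some marketing copy, and the 'revolutionary' tool is ready to ship.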
AI-powered academic tools are appearing daily, each claiming revolutionary capabilities. The dirty secret is that they're all using the same AI models.
The Academic Publishing Problem
For academic publishing and research integrity, wrappers create problems extending far beyond simple economics. Entrepreneurs have minimal control over third-party foundation models, yet researchers risk unknowingly surrendering control of their work:
Research theft risk
Unpublished findings shared with companies lacking strong intellectual property protections create scooping opportunities.
Black box methodology
Hidden prompts are proprietary, making results difficult to verify or replicate and undermining reproducibility, a cornerstone of academic research.
Disclosure violations
Researchers may unknowingly violate journals' AI-disclosure requirements when wrappers disguise ChatGPT as specialised academic tools.
Data security gaps
Sensitive research data gets processed by general-purpose AI systems under standard commercial terms that offer no academic-specific protections.
False authority
Tools masquerading as domain experts encourage unwarranted trust in the outputs. Tools like our 'Super Science Reviewer AI Plus' carry the exact same limitations, biases, and potential for error as their underlying AI models—they're just packaged to appear more authoritative.
A Path Forward
Wrappers aren’t just overpriced middlemen. They’re a transparency crisis for academia — distorting trust, obscuring methods, and encouraging researchers to outsource critical judgement to tools that add nothing of their own.
They launder generic AI outputs into something that looks specialised, credible, and safe. But behind the glossy branding, they're still the same unaccountable, biased systems with no real understanding of science.
The real danger is how easily wrappers pass as legitimate. Once they slip into research workflows, they can normalise bad practice: extortionate markups, black-box methods treated as credible, and the handing over of sensitive work to untrustworthy companies.
In science, the cost of this deception isn't just wasted money: it's compromised knowledge and integrity.