What a Cambridge researcher learned about AI that every business should hear
Most of the AI conversation aimed at business owners focuses on the same handful of tools and use cases: chatbots, writing assistants, image generators. That’s useful, but it tells only part of the story. To understand where AI is genuinely powerful, where it still falls apart, and what that means for the rest of us, sometimes you need to look at the places where the stakes are highest.
So we spoke to someone working with cutting-edge AI systems that could one day change how we diagnose dementia.

Dr Henry Musto is a Postdoctoral Research Associate at the University of Cambridge, splitting his time between two departments. In Clinical Neuroscience, he works on applying AI to predict dementia progression, using brain imaging data, clinical test results, and patient demographics. His focus isn’t just on whether someone might develop dementia, but when, giving clinicians and clinical trial designers a timeline of risk rather than a binary yes or no.
In Psychiatry, he uses AI to track resilience among frontline health and social care workers, analysing social media data to understand how they cope with adversity. He also advises a Cambridge-founded startup working to bring AI diagnostic tools into the NHS.
Before academia, Henry spent 11 years as a professional data scientist working for tech startups, and for companies like Visa, BBC, WPP, and Omnicom. That combination of commercial experience and hands-on academic research gives him an unusually rounded perspective on the technology, one that cuts through both the hype and the cynicism.
How AI is actually used in cutting-edge research
When most of us think about AI, we think about the tools we can access: ChatGPT, Claude, Gemini. In academic research, the picture is often very different.
Henry’s day-to-day involves reviewing published research papers, understanding their methodology, and then recreating models from scratch. Sometimes there’s a GitHub repository he can adapt, but mostly he’s building custom AI systems designed for very specific datasets that no commercial product can handle.
“Academia tends to develop a lot of things in house,” he says. “The work I do is very much frontier research.”
Why does this matter if you’re running a small business? Because the tools you see advertised, the ones with slick landing pages and free trials, are the consumer layer on top of a technology stack that’s still being built. The gap between what AI marketing promises and what the technology can actually deliver reliably is wider than most people realise. Knowing that gap exists is the first step toward using AI tools with the right expectations.
But even at Cambridge, everyday AI tools have changed the workflow. For literature review (the process of finding, reading, and synthesising published papers), tools like Connected Papers, Google’s NotebookLM, Gemini, and ChatGPT have made things considerably faster.
Henry is not alone in this. A 2025 study by Wiley surveying over 2,400 researchers worldwide found that AI tool usage among academics jumped from 57% to 84% in a single year, with 85% reporting that AI improved their efficiency. But the same study found something else: as researchers gained hands-on experience, they significantly scaled back their expectations of what AI could actually do. Last year, researchers believed AI already outperformed humans in over half of use cases. This year, that dropped to less than a third.
Henry’s experience reflects exactly this pattern. These tools are genuinely useful as a starting point, he says, but “there is still significant work required to verify the output and make sure you end up with a reliable body of knowledge.”
In healthcare research specifically, this isn’t just about efficiency. A hallucinated reference or an inaccurate summary of a clinical study could derail months of work or inform a flawed clinical decision.
For business owners, the principle is the same even if the stakes feel different. AI is strongest in the middle of a workflow: drafting, summarising, generating options. It remains weakest at two ends: framing the right question, and checking whether the answer is actually correct.
Why some smart people don’t adopt AI
A common assumption about AI in more traditional institutions is that people resist it. They’re conservative. They don’t trust technology. They’re set in their ways.
Henry sees something different.
“I’m not seeing much of an aversion,” he says. “The challenge is that serious academics are intensely focused on their own specialism. They simply don’t have the bandwidth to learn an entirely new discipline on top of the one they’ve already spent decades mastering.”
He’s describing world-class scientists who struggle with screen-sharing on a Teams video call. Not because they’re technophobic, but because every hour they spend on technology is an hour not spent on their research.
This will sound familiar to a lot of business owners. The barrier to AI adoption is rarely attitude; it’s bandwidth. When you’re already stretched thin running your business, “I should look into AI tools” sits permanently on next month’s to-do list. The Wiley study backs this up: 57% of researchers cited lack of guidelines and training as their primary barrier to using AI more. It’s not resistance. It’s a guidance gap.
The trust problem: why AI errors are treated differently
We asked Henry about a counterargument we hear often: humans make mistakes too. Doctors sometimes misdiagnose, lawyers overlook clauses. If AI is at least as good as a human, shouldn’t that be enough?
He pointed to a telling example. When a Tesla self-driving car was involved in a fatality a couple of years ago, it received enormous global attention. Meanwhile, tens of thousands of people die in conventional car accidents every year with a fraction of the coverage.
“We are far behind where we should be in terms of building confidence and public understanding of AI,” Henry says. “An AI error will always be perceived as a bigger deal than a human one.”
This resonates far beyond healthcare. Research published in BJR Open has explored whether the public will accept AI making errors at the same rate as humans, and the answer is consistently no: people expect technology to be better, not just equivalent. If you use AI in any customer-facing part of your business, this matters. The bar your customers hold AI output to isn’t “as good as a person.” It’s higher.
Henry raises another dimension that’s worth thinking about. AI systems deliver every answer with the same level of confidence, regardless of whether they’re right. “A good junior colleague will tell you when they’re uncertain,” he says. “They’ll qualify their answer or acknowledge the limits of their knowledge. Current AI systems don’t do that.”
If you’re using AI for customer communications, proposals, or content, this is something to watch carefully. Confident delivery doesn’t mean accurate delivery. And your customer probably can’t tell the difference until something has gone wrong.

The UK is betting big on AI in healthcare. The reality is messier.
Henry’s work sits within a broader push to bring AI into the NHS, the UK’s public health service. The UK government’s 10-year health plan, published in 2025, explicitly aims to make the NHS “the most AI-enabled care system in the world.” There are AI institutes launching across Cambridge and beyond, public-private partnerships multiplying, and a growing sense of urgency around diseases like dementia where current treatments remain limited.
Henry has seen this up close. He attended an event at the House of Lords for the launch of a synthetic data report, one of several government-backed initiatives to accelerate AI adoption in healthcare. “There’s been quite a concerted effort by colleagues of mine to really bring the UK up to speed in terms of generating synthetic data using AI methods,” he says.
But ambition and implementation are different things. A UCL-led study published in The Lancet found that an NHS programme to introduce AI-assisted diagnostics across 66 hospital trusts took 4 to 10 months longer than expected, with a third of trusts still not using the tools 18 months after the target date. The main problems weren’t technical. They were practical: overstretched clinical staff, ageing IT systems, and a general lack of understanding about how to actually work with AI.
This is worth paying attention to even if you have nothing to do with healthcare. If the NHS, with government funding and dedicated programme leadership, struggles to implement AI on time, it tells you something honest about how hard adoption really is. For a small business without a dedicated tech team, the challenge is even steeper, which is exactly why clear, practical guidance matters so much.
The language gap: a structural problem the market won’t solve alone
At AgentAya, we focus specifically on businesses in non-English-speaking markets. So we asked Henry about the language dimension.
“English is the language of science,” he says. “The vast majority of papers and conferences are in English.”
The same applies to the large language models that power most AI tools today. When you interact with ChatGPT or Claude in Spanish or Arabic, you are working with a model whose training data and internal representations are heavily skewed towards English. A Stanford HAI white paper describes this as a “digital divide” in LLM development, where most major models underperform for non-English languages, are not attuned to relevant cultural contexts, and are not accessible in parts of the Global South.
The research data is stark. One study benchmarking GPT-4o across African languages found an absolute performance gap of 12% to 20% between English and the average of 11 African languages tested, with the gap reaching over 50% for individual languages like Bambara. Even for a relatively high-resource language like Chinese, research published in JMIR found that ChatGPT underperformed compared to models specifically trained on Chinese data, not because of translation barriers, but because of limited representation in training datasets.
Henry encountered this directly through his psychiatry work, where many frontline care workers come from linguistically diverse backgrounds. Africa alone is home to well over a thousand languages, he points out, many spoken by populations of a few hundred thousand people. “The commercial incentive to build AI models for those languages simply isn’t there.”
He’s right that the private sector won’t solve this alone. Universities and governments will need to step in, and in some cases already are. But Henry also notes that the economics shift substantially for larger populations: “India has hundreds of languages, and the return on investment looks very different when you’re talking about populations of 50 million.”
Useful technology, real limits
We asked Henry where he thinks all this is heading. His answer drew on his years in the data science industry.
“When data science first peaked as a discipline, there were three types of employer,” he says. “Those who believed data scientists could solve any problem. Those who dismissed the entire field as hype. And those who just saw us as another colleague with a specific skill set, with strengths and limitations like any other.”
The first group, he adds, tended to become the second group fairly quickly when they failed to provide the right data, tools, or expectations for the work to succeed.
He sees the same pattern with AI now, playing out on a much larger scale. “We are likely approaching the limit of what current large language models can offer,” Henry says. “That doesn’t mean they aren’t valuable. They are undoubtedly valuable. But I think the pace of dramatic improvement will slow, and the next major leap will require a fundamentally different approach.”
For business owners, this is actually reassuring. The tools available today are worth investing time in. They’re not going to be obsolete in six months. But it also means that waiting for AI to become perfect before you start using it is a strategy that will leave you behind.
What this means for businesses
Towards the end of our conversation, we described a scenario that sits at the heart of what AgentAya does: a small travel agency in Peru competing against a company like Expedia, which has already used AI to cut costs and improve margins. What happens to the smaller player who doesn’t adapt?
“Companies need to get on board,” Henry says. “There is a genuine risk that small businesses get outcompeted by larger players who implement AI more efficiently. I think that is extremely likely, and in many sectors it is probably already happening.”
The numbers support this. According to a report from the St. Louis Federal Reserve, generative AI adoption in the US reached 54.6% of adults by mid-2025, surpassing the adoption rate of both personal computers and the internet at the same point in their development. Meanwhile, Microsoft’s global adoption data shows that adoption rates average 24.7% in the Global North but just 14.1% in the Global South, a gap that risks widening as AI becomes more central to how businesses compete.
AI tools are imperfect. They still hallucinate, and they work better in English than in other languages. And yet the cost of not using them, of standing still while larger competitors automate, is almost certainly higher than the cost of adopting them imperfectly.
The answer isn’t blind adoption. It’s informed adoption: knowing what the tools can and can’t do, which ones are worth your time, and which ones are just marketing. And it means having access to that information in your own language, written for businesses your size.
That’s what we’re building at AgentAya.

Dr Henry Musto is a Postdoctoral Research Associate at the University of Cambridge, working across the Department of Clinical Neuroscience (Rittman Lab) and the Department of Psychiatry (PHAB Lab). His research focuses on translational AI for dementia prediction and tracking resilience in frontline health and social care workers. Before academia, he spent 11 years as a data scientist working with companies including Visa, BBC, WPP, and Omnicom.
Browse our AI-powered research tool reviews to find tools that can support your own work.

