AI in health 🧬 What’s really next
Why the future belongs to those who solve data, trust, and adoption
I was at The Economist’s AI in Health Summit in London this week, where there were no clinical demos or shiny prototypes. The conversations focused instead on the structural barriers that will decide whether health AI can scale: data quality, regulation, workflows, and trust.
The strongest signal was how often these themes repeated across pharma, hospitals, startups, and regulators. Everyone is chasing breakthroughs, but the snags are the same. Innovation stalls when records can’t be linked, or when doctors and patients don’t trust the outputs.

For builders, the real opportunity lies in the scaffolding. The companies that break through will be the ones that prove their tools are usable, safe, and reliable inside the messy reality of healthcare. Those that put clinicians at the centre of their teams will move fastest, because adoption starts with the people delivering care.
The question now is where the next wave of health AI companies will come from. The patterns emerge once you look at the obstacles they all face: data, trust, integration, and regulation.
This special dispatch from the second annual Future of Health Europe event breaks down how those barriers are shaping the real opportunities. 👇🏻
If you take only three things away, they’re these: data is the bottleneck, regulation isn’t optional, and adoption lives or dies on clinician trust.
— Millennial Masters is sponsored by Jolt ⚡️ reliable hosting for modern builders
This deep dive is 4,400+ words, so you can use the table of contents on the left to jump between topics (if you’re reading online). If you have the Substack app, you can also listen to a narrated version of this feature.
Healthcare AI starts with data
The most advanced models in the world are useless without the right data.
Thomas Renner, who leads digitalisation and innovation at Germany’s Federal Ministry of Health, explained the infrastructure simply isn’t ready. “Availability of data in all its forms, from electronic health records to genomic datasets, is the real bottleneck,” he said. Unless records can be linked across care systems with common standards, innovation stalls before it begins.
Pamela Kearns, emeritus professor of paediatric oncology at the University of Birmingham, underlined that not all data is equal. “It’s not about all data, it’s about the right data,” she argued, stressing that AI models can only be trusted if they are trained on datasets designed to answer specific clinical questions. Poor quality inputs only add noise.
Kerrie Woods, director of R&D clinical informatics at Oxford University Hospitals, pointed to the vast data buried in electronic patient records. Most of it, she said, remains trapped in fragmented silos. “There is deep, detailed data inside hospitals that’s rarely used. The value is in making it real-time, multimodal, and linked across primary and secondary care.”
From the US, Sandra Barbosu of the Center for Life Sciences Innovation said sharing data securely across borders is the only way to move past national silos without eroding public trust.
Fred Manby, CTO of Iambic Therapeutics, showed how the ground is already shifting. Where machine learning once worked with “tables of numbers,” new multimodal models now combine genomics, imaging, clinical records, and patient histories. That opens extraordinary opportunities, but only if questions of data ownership and regulatory red tape are resolved.
Again and again, the theme was trust. Patients are often willing to share data if it clearly benefits care. Yet frameworks like GDPR and national consent rules still clash and confuse.
With data as the bottleneck, the real fight is over the rules that govern it. Every dataset in health exists inside legal and ethical guardrails, and regulation is where the limits and opportunities get defined.
Regulating healthcare AI without losing trust
Getting regulation right for AI in healthcare is a balancing act with no safety net.
Sandra L. J. Johnson, consultant paediatrician and academic at the University of Sydney, grounded the conversation in the basics: “the reason we regulate is patient care.” She warned that systems risk becoming top-heavy and disconnected from the people they’re meant to protect unless patient feedback is baked into regulation.
Hugh Harvey, managing director at Hardian Health, pointed to a glaring blind spot: ignorance. “There already are regulations. Go and read them.” Most builders don’t even realise the frameworks exist; an AI product can be coded in a week, but approval can take a year or more.
Jamie Cox, co-founder and CTO of Scarlet, struck a more optimistic note. For him, regulations aren’t shackles but scaffolding: “there has never been a better time to build.” He argued that the real opportunity lies in shortening cycles: launching smaller, lower-risk products, testing them fast, and iterating safely under regulatory oversight.
David Novillo Ortiz, unit head for data and digital health at the WHO’s Regional Office for Europe, reminded the room that many countries lack basic regulatory infrastructure, and while device governance is advancing, the workforce is still unprepared. His push: update health education now, so clinicians and patients alike understand the tools before they hit the frontlines.
The trickiest problem remains post-market surveillance, monitoring what happens once AI systems are deployed. Sandra Johnson noted the risk of automation bias, where clinicians over-trust outputs from flawed models, especially models trained on populations that don’t match local realities.
Cox argued that most responsible developers already obsess over system performance in the wild, but regulators lack standardised methods to capture that data at scale.
Liability is another unresolved frontier. Johnson explained that today, accountability sits squarely with doctors. Yet she acknowledged this won’t hold forever: as systems grow more autonomous, responsibility must be shared between clinicians, hospitals, manufacturers, and regulators.
Transparency could be the key to rebuilding trust. Harvey pushed for AI devices to publish documentation similar to drug leaflets rather than hiding behind glossy marketing. Patients, clinicians, and regulators deserve more than “doctor in your pocket” slogans.
Even when the rules are navigated, hospitals remain the hardest test ground. Culture, legacy IT, and trust slow everything down, and breakthroughs stall not in theory but in practice.
Why hospital AI breakthroughs stall

The hard question is how to make AI fit into hospitals already groaning under legacy systems, cautious cultures, and endless compliance hurdles. The conversation around integration exposed just how wide the gap remains between research pilots and daily clinical practice.
Max Gordon, Head of Trauma Surgery and Clinical AI Research Lab at Karolinska Institutet, pointed to mammography as proof that some AI is mature enough to scale now. A trial of 45,000 women showed that using one radiologist plus AI not only matched the standard two-doctor review, but actually detected more cancers with fewer false positives. For Gordon, this is a clear case of efficiency and accuracy converging.
Others stressed how fragile integration can be. Manish Chand, Robotic Surgeon and Health Technology Researcher at University College London, described how decision-support AI for endoscopy still meets cultural pushback. “You present this to 10 clinicians and the first answer is: I don’t need this. I can tell the difference without a box on the screen.” Even when the data proves value, adoption runs into pride, habit, and scepticism.
For Bruno Botelho, Director of Digital Operations and Innovation at Chelsea and Westminster Hospital, the bottleneck is regulation. His hospital is piloting AI to summarise patient records for junior doctors, potentially saving hours of repetitive work. But classification rules around medical devices and safety slow progress. “The regulation may be holding us back, but it’s also protecting patients,” he said, walking the fine line between innovation and liability.
Parkinson’s UK has taken a different approach. David Dexter, Director of Research, explained how they fund digital devices that track patients at home before clinical visits. Algorithms then guide neurologists on how medication is working, freeing consultation time for what really matters: the patient’s concerns in the room.
From Evis Sala, Albania’s Minister of Health, came a reminder that national health systems rarely move fast: bureaucracy adds “stones” to every new layer of innovation, and pilots often get trapped before real integration.
AI’s fate in healthcare depends less on breakthroughs than on whether hospitals can move ideas from lab to ward. And some hospitals are leading the way.
How AI is transforming hospitals
Björn Zoéga, chief executive of King Faisal Specialist Hospital & Research Centre in Riyadh, made a strong point: healthcare has waited too long for radical change. AI, he argued, must not become another technology that the sector delays by decades.
Zoéga described how his hospital is using AI across the board, from ambient listening tools that cut down paperwork to robotic systems that assist in heart transplants. “The AI systems that we have in robotics… it’s really a steadier, extended arm of the surgeon,” he explained. In 2023, his team completed the world’s first robotic heart transplant, an example of how algorithms can guide surgeons away from “danger zones” and help reduce complications.
Yet for Zoéga, the biggest breakthrough is in administration. “It’s a no-brainer,” he said, pointing to studies that show AI tools can give doctors 20% more time with patients. By handling coding, forms, and compliance burdens, AI makes room for human connection. Patients, he added, see the difference immediately.
On adoption, Zoéga believes that the key is clinician involvement. The best startups, he argued, have “60–70% clinical people” on their teams. Otherwise, promising pilots collapse when they can’t integrate into fragmented hospital systems. Convincing doctors, he added, requires leadership at the floor level, not from “innovation centres upstairs.”
His closing message was urgent: “We just have to start doing now. It’s too late. Don’t let AI become another digitalisation delay. Of course it has to be safe, but nothing is ever 100% safe, whether machine or human. We have to have the courage to do these changes.” But the promise in hospitals meets a harder reality in diagnostics.
Why AI diagnostics divide investors and doctors
Diagnostics are the frontline of health AI, but the real test is results in the clinic.
Madhu Narasimhan, Chief Information Officer at DaVita, offered a glimpse of AI’s practical value. His team built a model to spot dialysis patients likely to abandon treatment before the patients themselves realised it. A flag from the system triggered a physician intervention, which kept the patient on track. “We believe our clinicians are irreplaceable. We use AI to augment them, not to substitute,” Narasimhan said. The lesson: early detection is only useful if it can be acted on by human care teams.
For Anmol Kapoor, Chief Executive and Founder of BioAro, the frontier is genomic diagnostics delivered instantly. He described his platform compressing what used to be months of sequencing analysis into minutes, and even reading cardiac biomarkers from a drop of blood via smartphone. He called this “healthcare 3.0,” a future where tests can be done anywhere, even on planes or space stations.
But cost and adoption are the hard stops. “I haven’t heard cost yet, what does it save?” asked Gabriele Papievyte, Head of Ventures at XTX. Her investor lens cut through the optimism: the NHS and private buyers won’t roll out technology without a business case, and AI that fails to reduce spend or prove demand dies in pilot limbo.
Parth Patel, Medical Doctor and Implementation Scientist at OrthoGlobe, warned that brilliant products often fail in translation. “A 95-year-old patient at home may simply not accept a self-testing device,” he said. Behaviour, infrastructure, and trust derail as many AI pilots as technology does.
From academia, Anke Diehl, Chief Transformation Officer at University Medicine Essen, pointed to the need for interoperability. Her hospital has built dozens of AI tools on curated datasets, but she admitted they remain siloed: “There is a huge lack of cross-sector datasets.” Without wider data-sharing, progress risks getting stuck in isolation.
But between venture demands for cost-efficiency, patient realities, and the bureaucracy of public health systems, the real challenge is less about breakthroughs and more about whether they can ever spread fast enough.
Nowhere is the gap starker than in pharma, where AI is no longer just a clinical tool but a driver of research pipelines, and where speed, precision, and data ownership are reshaping entire strategies.
AI drug discovery: Data over algorithms
Drug development is a race against time, and AI is becoming the backbone of how new treatments are conceived, tested, and brought forward. Campaigns that once took months are being compressed into weeks.
Danielle Belgrave, Vice-president of AI/ML at GSK, said the biggest shift is precision. “With the scale and volume of data that we have, [AI is] really honing in on getting the right treatment to patients based on biological profiles and looking at progression of disease over time.” She added that GSK is building an internal “AI scientist” powered by large language models, trained on multimodal and proprietary datasets across oncology, respiratory, and hepatology.
Jim Weatherall, Vice-president and Chief Data Scientist for Biopharmaceuticals R&D at AstraZeneca, held up small-molecule design as a proving ground. “We’re now applying [AI] across probably about 90% of our portfolio… campaigns that used to take 10 weeks now take five.” He argued that the real unlock will come from treating AI as a “colleague, not a tool,” with humans and machines narrowing the discovery funnel together.
Selim Aydin, Global Head of Business Operations and Innovation, Patient Safety and Pharmacovigilance at Novartis, pointed to clinical trial acceleration. Using Novartis’ ClipAI tool for site selection, “we were able to hire the right patients in 40% less time.”
From the investor lens, Pierre Socha, Partner at Amadeus Capital, said time is the enemy. “When we invest we’ve got windows of five to ten years, so we need to be extremely dramatic as to where the dollars will go.” For him, AI’s immediate impact lies in diagnostics and patient finding, while long-term bets sit in biology itself.
Sean McClain, Founder and CEO of Absci, showed what “unlocking undruggable targets” looks like. His company used AI to design an antibody against a dermatological ion channel unsolved for 30 years. “We were able to get a drug into the clinic in roughly 24 months versus three to five years,” he said, at one-third the usual cost.
Finally, Ittai Dayan, Co-founder and CEO of Rhino Federated Computing, underlined infrastructure. His company helps pharma deploy AI models across their partners. “What underscores the maturity of AI in early development is not only that it’s being used by pharma, but also being built by pharma for themselves.”
AI won’t deliver hundreds of new molecules at once. What it can do is tilt the odds: fewer failures, faster timelines. As Weatherall put it, the future is a “fully agentic, scientific framework,” but with humans still steering the work.
If everyone can access foundation models, what sets companies apart? For Belgrave and Weatherall, the edge lies in proprietary data and in-house expertise. “The differentiator is the data,” Belgrave said. Weatherall agreed: “Over time AI will commoditise. What matters is your pipeline, your data, and how you use them.”
Discovery leads inevitably to trials, where timelines and failures define value. AI is now being applied to selection, monitoring, and adaptive designs that could change the economics of drug development.
AI and the future of clinical trials
Clinical trials are the backbone of evidence-based medicine, but they’re slow, narrow, and prone to failure. Mihaela van der Schaar, director of the Cambridge Centre for AI in Medicine, made the case that AI can cut through these bottlenecks, not by automating everything, but by reshaping how trials recruit, monitor, and measure.
Recruitment is the first choke point. Trials usually treat “disease present” as sufficient for inclusion, ignoring the fact that patients are at very different stages and trajectories. “AI can help you identify not only whether this person has the disease, but also how far they are progressing this disease, and what is the likely outcome if they will not be treated,” van der Schaar said. That means selecting participants based on timing, progression, and counterfactuals, not just diagnosis. It also means avoiding the trap of cherry-picking “good” patients who boost results but leave real-world populations underrepresented.
AI’s role doesn’t end once a trial starts. Van der Schaar also pointed to digital twins, generative models that create possible futures for an individual patient, with and without interventions. These can predict dropouts, and even inform when to stop or pivot a trial.
But innovation here is constrained by regulation. “AI for clinical trials is the single most difficult problem to solve,” she said, calling it harder than protein folding because of both complexity and oversight. Safety demands that systems be interpretable, with confidence estimates and transparent mechanisms, not black-box predictions.
Regulators are cautious, for good reason, but van der Schaar argued that interpretable, tested AI could actually reduce risk by avoiding approvals that send drugs into the market without clarity on who benefits.
The payoff is potentially huge. She estimated AI could cut timelines by as much as 50% while reducing failure rates. But the conversation always circles back to the patient. Whether in oncology, rare disease, or mental health, AI must translate into tangible improvements in the journey from diagnosis to treatment.
Building patient-centred healthcare AI
AI in healthcare carries weight only when it makes the patient’s experience better. That means earlier diagnoses, faster access to treatment, and care that adapts to the individual rather than the average.
Laura Brady, Chief Executive of the Irish Platform for Patient Organisations, Science and Industry (IPPOSI), believes that patients and the public need to be involved in the AI cycle from design to deployment, with a focus on ensuring marginalised voices are heard.
Javier Castro, patient and AI senior advisor, drew on both his lived experience and industry perspective. He argued that AI’s greatest promise lies in early detection and integrating scattered patient data to uncover hidden patterns. This, he said, could “increase the number of patients dramatically” who receive timely and effective care.
Ofer Waks, Global Medical Partnerships Lead at Pfizer, noted that AI must adapt to each individual’s health profile and context. Personalisation is the pathway to relevance and real impact.
Hernan Lew, Deputy Director of Data and Digital Strategy at Sant Joan de Déu Hospital in Spain, described the challenge from the frontline of care: thousands of children with rare or ultra-rare diseases face years-long diagnostic journeys, and only a small percentage have access to specific treatments. For him, the priority is for AI to cut diagnostic time and connect patients to effective treatments sooner.
Alex Frenkel, CEO and Co-founder of Kai.ai, detailed the role of AI in mental health support. With clinicians overstretched and waitlists growing, he sees large language models as a “force multiplier” that can provide safe, ethical and scalable emotional support. “The hope is that everyone will be able to receive mental health support immediately and at scale,” he said, with AI escalating to clinicians when needed.
Yet patients don’t care about the AI model, they care about the outcome. If AI helps a rare disease patient get diagnosed years sooner, or gives someone instant access to mental health support, then it delivers real value.
Focusing on patients also drags bias to the surface. Algorithms reflect the values and trade-offs of their creators, and ignoring that reality risks embedding inequality into care.
Bias in healthcare AI is unavoidable
Nicola Byrne, National Data Guardian for health and social care in the UK, cut straight to the paradox: AI in healthcare can never be free of bias, because people aren’t. The question is whether we’re willing to face it, define it, and decide whose values shape the system.
Patients expect two things: that their privacy will be respected, and that their data will be used for public benefit. Without those guarantees, the promise of AI collapses.
Her core message was that we are emotional, inconsistent, and easily swayed by idols, whether that’s a charismatic leader or a shiny new technology. That makes us vulnerable to misplaced confidence in tools that present themselves as objective.
Even the language we use can distort reality: call an NHS app “a doctor in your pocket” and you risk eroding the distinction between what technology does and what clinicians actually provide.
Byrne argued that building trustworthy AI in healthcare requires clarity on what we value. Safety, fairness, and privacy matter, but so does recognising that illness is more than a disease state and care is more than a transaction. AI can optimise logistics and evidence, but it cannot hold meaning for patients facing pain or death, only humans can.
She warned against narrow perspectives. Bias doesn’t just appear in skewed datasets. It also lives in the priorities we choose, the trade-offs we tolerate, and the voices we ignore. Public judgement and collective wisdom must be counted in from the start, not retrofitted as an afterthought.
Her final provocation: the real risk is not that AI is biased, but that we stop thinking for ourselves. Responsible adoption means keeping language honest, values explicit, and decisions grounded in both the science and the art of medicine.
The ethics of AI in healthcare
AI in healthcare is often framed in terms of efficiency and diagnostics, but Julia Manning, dean of education at the Royal Society of Medicine, urged the audience to step back and confront a harder question: what do we lose when the machine takes centre stage?
She drew on Robert F. Kennedy’s reflections on dignity and purpose, asking whether the drive for AI productivity risks hollowing out the very humanity that medicine is meant to serve. “Some of the drivers behind this are the attention economy, surveillance capitalism, the tech bros. A lot of it’s about money,” Manning warned, noting that patient outcomes are too often treated as a secondary consideration.
For Manning, trust and education are the real pressure points. The promise of AI lies in its ability to accelerate analysis and detect conditions faster than any clinician could.
But if patients believe their care is shaped by commercial interests or algorithms designed to cut costs, the trust that underpins every clinical relationship begins to fray. She pointed to recent deployments of AI “agents” in the US, marketed as more compassionate than humans yet prescribed without any human in the loop, as a stark example of what’s at stake.
Clinicians, she argued, must also protect their own competencies. “Doctors who don’t actually retain their clinical analytical skills…understand cultural nuance, holistic mindset” risk becoming sidelined by automation, she said. Manning cited the danger of “skills fade” and automation bias, where over-reliance on AI diminishes critical thinking. “We train and we’re still here because we want to serve patients and we want the best outcomes,” she said. “But we need to think about the best way to do that.”
Why healthcare AI can’t move fast and break things
The promise of AI is real, but so are the roadblocks. From costs and privacy concerns to culture and trust, barriers continue to stall adoption. The closing panel of the summit asked what it will take to move faster without losing credibility.
Anca del Rio, data, AI and digital health consultant at the World Health Organisation, argued that healthcare still lags behind industries like finance in anticipating people’s needs. “We’re still providing sick care, we’re still being reactive,” she said, pointing out that banks and fintech players already use data to anticipate customer behaviour, while healthcare rarely takes the same proactive approach. The challenge, she added, is cultural: shifting from a mindset of treatment to one of anticipation.
Lucy Orr-Ewing, chief of staff and head of policy at the Coalition for Health AI, called for quality assurance and monitoring frameworks to sustain trust. Drawing comparisons with how cars are tested before sale, she outlined her organisation’s work in building a “quality assurance ecosystem” for AI in healthcare. The goal, she said, is to track models at deployment and over time, catching when accuracy drops or hallucinations creep in. “You wouldn’t buy a car that hasn’t been through crash testing,” she said. “We should hold AI in healthcare to the same standard.”
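For builders wondering what that kind of post-deployment tracking looks like in practice, here is a minimal sketch, assuming a rolling window of confirmed outcomes compared against a baseline accuracy fixed at validation. The class name, thresholds, and window size are illustrative assumptions, not CHAI’s actual framework.

```python
"""Minimal sketch of post-deployment model monitoring (illustrative only).

Tracks a rolling window of confirmed outcomes and alerts when accuracy
drifts below the baseline established at validation, the kind of check a
"quality assurance ecosystem" would run continuously.
"""
from collections import deque
import random

class DeploymentMonitor:
    def __init__(self, baseline: float, tolerance: float = 0.05, window: int = 500):
        self.baseline = baseline              # accuracy demonstrated at validation
        self.tolerance = tolerance            # acceptable drop before alerting
        self.results = deque(maxlen=window)   # rolling record of correct/incorrect calls

    def record(self, prediction, confirmed_outcome):
        """Log one prediction once the real-world outcome is confirmed."""
        self.results.append(prediction == confirmed_outcome)

    def healthy(self) -> bool:
        """True while rolling accuracy stays within tolerance of the baseline."""
        if len(self.results) < self.results.maxlen:
            return True  # not enough post-deployment evidence yet
        return sum(self.results) / len(self.results) >= self.baseline - self.tolerance

# Simulated use: a model validated at 94% accuracy quietly degrades to ~85%.
monitor = DeploymentMonitor(baseline=0.94)
for _ in range(500):
    monitor.record(prediction=1, confirmed_outcome=1 if random.random() < 0.85 else 0)
if not monitor.healthy():
    print("Alert: rolling accuracy has drifted below the validated baseline")
```

The point isn’t the code; it’s that “crash testing” for health AI implies continuous measurement after deployment, not a one-off approval.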
Scott Eggertsen, strategy principal at ustwo, pointed to lessons from fintech’s regulatory sandboxes. He suggested healthcare could benefit from adopting a similar “patient duty” approach, embedding responsibility for outcomes into every product design and regulatory test. The fintech journey, he noted, shows how unbundled niche products eventually need rebundling into complex systems, a stage health tech is now approaching.
From a global perspective, David Novillo Ortiz of the WHO reminded the audience that healthcare operates under very different constraints from private tech. Unlike space rockets that can afford to fail in private hands, he said, public healthcare systems face unique accountability pressures, life-and-death outcomes, taxpayer funding, and political scrutiny. That makes innovation harder, but also increases the responsibility to get it right.
Which is why the real opportunity doesn’t sit in the algorithms alone. It sits in the scaffolding: the infrastructure, assurance, and usability layers that turn models into tools people trust and adopt. That is where lasting companies and impact will be built.
The real value of healthcare AI
The AI in Health Summit made one thing clear: building in healthcare is different. The rules of the game go beyond the technical: they’re cultural, ethical and regulatory all at once.
For founders, the first lesson is that regulation isn’t an afterthought. It sets the terms of the market. Products that meet the standard from day one will survive. Products that try to work around it won’t.
Trust came up in every session. Patients and doctors need more than performance claims. They want to know who is accountable, how decisions are made, and what happens when the system fails. Startups that publish evidence, show their limits, and invite scrutiny will gain ground.
Ease of use is another barrier. Doctors don’t have time for clunky systems, and patients don’t want to be left guessing. Builders who design for workflow, integration, and clarity will see adoption.
And the real growth lies in infrastructure. Healthcare is shifting responsibility from individuals to the vendors and institutions behind these tools. That creates demand for monitoring, oversight, and quality assurance. A whole new category of companies will grow from that.
The opportunity isn’t just in the AI models themselves, but in the trust, the compliance, and the usability layers that make them real. For entrepreneurs and investors, that’s where the lasting value will be built.