Move over Dr Google: the inevitable rise of Dr ChatGPT
- 26 January 2026
GenAI tools for health are set to influence not only patient knowledge but also clinician behaviour, writes Pritesh Mistry, fellow in digital technologies at The King’s Fund
There’s no doubt AI is going to impact healthcare; we are already seeing the beginnings of this with AI scribes reducing the amount of time clinicians spend on admin, dermatology tools that speed up diagnosis, and many other applications.
However, the impact of AI will not come down simply to how healthcare systems like the NHS choose to implement these technologies. Some AI tools are being made available directly to the public.
Take, for example, the recently announced GenAI tools for health built into applications such as ChatGPT and Claude, which are currently available for health use in the US but likely to expand to include the UK.
These AI tools can take in large amounts of information and combine it with information from the internet and the model’s own training data to respond to requests from the user, who could be any member of the public.
Patient empowerment?
User requests are highly varied, ranging from explaining medical jargon, to suggesting questions to ask a clinician, to seeking a diagnosis.
Making these available direct to consumers could be an incredible step forward in patient empowerment and experience but also poses a potentially significant risk of patient harm and unknown consequences for the NHS.
We shouldn’t kid ourselves: despite these tools not being promoted for wellbeing, we know the UK public are already using them for health queries to improve their knowledge of services, conditions and care options.
The latest announcement, while light on detail, indicates a positive step forward: these new tools are being made available to the public to provide a protected space for health-related queries, but not diagnosis.
They also aim to make it easier for health information to be entered into the GenAI tool. Assuming this protection of information remains the case in future, it’s good to see data and privacy protection incorporated, with data not being used to train models.
The healthcare product is currently not available in the UK which means the UK public are still using these tools for health purposes without data and privacy protection.
However, GenAI models can, and do, get things wrong and make errors, to the point where Google is actively removing AI information in some cases.
They can give information from the wrong country, give incorrect information and make things up. This could lead to patients making poorly informed choices, which could worsen rather than improve their health.
Consequences for the NHS
Not only are there negative implications for individuals but also for the NHS.
Insufficient and incorrect information has the potential to drive more people to seek NHS services, increasing demand in an already highly pressured system. Clinicians will also have more consultations where the patient comes equipped with AI opinions, which brings its own challenges.
AI will influence patient knowledge and behaviour as well as clinician behaviour. If, or when, these tools become more reliable and evidence-based, the healthcare system may need to work out how the use and outputs of a GenAI tool should be incorporated into medical records and the clinical process.
While these new tools are stated as being unsuitable for diagnosis, they will inevitably be used in that way by some people.
Do the public know the difference between a query and a diagnosis? The line between the two is blurry, and this creates a shared responsibility for how these tools are used. It’s not sufficient to place disclaimers above a text box.
Putting safeguards in place
In many ways the horse has already bolted: people are already using GenAI tools without an understanding of their limitations. But it’s never too late to act to make improvements.
There’s an urgent need to respond to publicly available GenAI tools in three ways.
Firstly, by engaging with the public to understand how they use the tools, and then supporting the public to build the skills and confidence to navigate and use these tools appropriately.
Secondly, and at the same time, it is essential that NHS staff are supported to understand how these tools are affecting patient expectations, service demand and patient-clinician interactions. The NHS then needs the resources and capabilities to adjust and respond appropriately and consistently.
Thirdly, the NHS and those offering GenAI for health share the goal of improving patient care and outcomes. But more can be done in collaboration, and we’d urge closer working between AI companies and the NHS to maximise the benefits for patients while mitigating the risk of harm.
This should include educating the public, engaging with patients and professionals, building evidence, making usable data accessible and putting agreed safeguards in place.
Dr Google is making way for Dr ChatGPT. The health-focused applications of public GenAI tools are likely to be available in the UK soon; this seems inevitable, and it will likely increase their usability for health purposes.
There is much that can be done to maximise the benefits while mitigating the harms. Now would be a good time to start working on just that.