Anthropic and Google follow OpenAI to launch healthcare AI
- 19 January 2026
- Anthropic and Google have launched AI tools specifically for healthcare
- They follow the launch of ChatGPT Health in the US earlier this month, which allows patients to connect medical records and data from health apps
- Google removed some of its AI health summaries after an investigation found that people were being put at risk of harm by misleading information
Anthropic and Google are the latest major players to announce AI tools designed for healthcare, following the launch of ChatGPT Health in the US.
Claude for Healthcare is a set of tools and resources that allow healthcare providers, payers, and consumers to use Anthropic's Claude AI for medical purposes.
According to a blog post by Anthropic, published on 11 January, Claude is introducing integrations “designed to make it easier for individuals to understand their health information and prepare for important medical conversations with clinicians”.
When connected to patients' lab results and health records, Claude can summarise users' medical history, explain test results in plain language, detect patterns across fitness and health metrics, and prepare questions for appointments.
Google also released MedGemma 1.5, expanding its open medical AI model to interpret three-dimensional CT and MRI scans alongside whole-slide histopathology images.
They follow OpenAI’s launch of ChatGPT Health earlier this month, which can analyse people’s medical records and data from health apps to give them personalised healthcare advice.
OpenAI said on its website: “Health is designed to support, not replace, medical care. It is not intended for diagnosis or treatment.
“Instead, it helps you navigate everyday questions and understand patterns over time, not just moments of illness, so you can feel more informed and prepared for important medical conversations.”
ChatGPT Health is currently only available in the US, but a spokesperson for OpenAI told Digital Health News that the firm is working through local regulations which require additional compliance measures before launching in the UK.
They added that OpenAI often engages in advance consultations with certain regulators in the UK and the EU before launching new products or services in those regions.
Last week the co-founder of health data startup Torch announced that the company had been acquired by OpenAI for more than $100 million (£75m). Torch is focused on connecting health data from a wide range of sources to provide answers to common health questions.
Meanwhile, Google removed some of its AI health summaries after an investigation by The Guardian found that people were being put at risk of harm by misleading information, which sometimes omitted key safety details like side effects and allergy warnings.
Commenting, Euan McComiskie, health informatics lead at the Chartered Society of Physiotherapy, said: “These platforms are also not yet governed by any regulatory, strategic nor policy authority as is the case with our existing healthcare provider organisations.
“Until those issues are resolved, it is unlikely that generative AI platforms will entirely replace human-led healthcare interactions.
“An AI-supported, human-led healthcare organisation can use multiple tools and platforms to operate efficiently and deliver high-quality healthcare, whilst also enhancing the trusting and caring relationships that registered healthcare professionals have with the people we work with.”
In November the Medicines and Healthcare products Regulatory Agency advised that AI chatbots should not replace advice from healthcare professionals after research found that one in four UK patients are turning to AI and social media for health guidance.