I’m sure other healthcare journalists, like myself, have received pitches from public relations professionals since OpenAI introduced a new tool called ChatGPT Health a few days ago. Many of those pitches essentially said the generative AI company missed the mark with the announcement.
For instance, one pitch touted the opinion of the CEO of an AI-driven health navigation company. The pitch stated: People don’t need AI to help them burn more calories on their Peleton (sic). They need AI to help them find care at the lowest cost.
The truth is that both statements are true. People are worried about rising healthcare costs, and they also want to be healthy and understand their own health data. The latter reality is undoubtedly the catalyst that led to the genesis of ChatGPT Health.
But the question is: why didn’t traditional EHR companies, or companies that have been innovating with AI in healthcare, do this themselves?
I’m actually not just thinking of companies like Epic Systems, but also companies like Microsoft. Why Microsoft? Because a few years ago Microsoft demoed a tool that appeared to get to the heart of what ChatGPT Health is aiming to do: provide insights about an individual’s own medical record.
This was the fall of 2023, and Microsoft was touting a number of AI capabilities to a group of journalists, including me, at an industry event. I volunteered for a live demo of the company’s ambient AI capabilities that came courtesy of its Nuance acquisition. But the most notable moment of that afternoon was a less flashy capability highlighted by Linishya Vaz, principal program manager, Health and Life Sciences at Microsoft. I’m pasting verbatim what I wrote in that story below:
Imagine you’ve had some bloodwork or radiological images done. The language of the radiologist’s report is replete with clinical terminology; it’s almost deliberately designed to keep patients in the dark or guessing about what the reports say. I’ve taken to Google search to divine the meaning of imaging and other lab reports, as I’m sure countless people have over the years.
Microsoft is aiming to make this easier for patients, assuming of course that their imaging provider/health system is a Microsoft customer and has deployed these capabilities.
In the hypothetical example that Vaz shared, a patient with chest pain has undergone a chest X-ray. The report reads like this:
“Left anterior chest wall dual-lead pacer stable from prior examination. Lungs hyperinflated but clear. No pneumothorax or pleural effusion. Pulmonary vasculature normal. Heart size normal. Osseous structures demineralized, however intact.
Impression: Hyperinflated lungs consistent with emphysema. Osteopenia.”
Now Microsoft’s generative AI capability can simplify the clinical jargon into plain English. And, per Vaz’s presentation, create a report that the patient can view in the patient portal.
* A device with two leads (dual-lead pacer) is stable in the left front part of your chest, as seen in the previous examination.
* Your lungs are overinflated, which is consistent with a condition called emphysema.
* There is no air (pneumothorax) or fluid (pleural effusion) around your lungs.
* The blood vessels in your lungs appear normal.
* The size of your heart is normal.
* Your bones show a decrease in density (demineralization), but they are intact. This is called osteopenia.

In conclusion: Your lungs are overinflated, which is consistent with emphysema, and you have osteopenia (decreased bone density).
The patient would also see this all-important disclaimer at the bottom of the report: This simplified version was generated by an AI assistant.
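To make the pattern concrete, here is a minimal sketch of how a developer might wire up this kind of simplification with a general-purpose LLM API. This is not Microsoft's implementation (which ran on its own stack via the Nuance acquisition); the model name, prompt, and use of the OpenAI Python SDK are my own illustrative assumptions, and in a real patient portal the output would need clinical review before patients ever saw it.

```python
# Illustrative sketch only -- not Microsoft's or OpenAI's actual product code.
# It shows the general pattern described above: send a radiology report to a
# generative AI model with instructions to rewrite it in plain English, then
# append the patient-facing disclaimer in code so it always appears.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RADIOLOGY_REPORT = """\
Left anterior chest wall dual-lead pacer stable from prior examination.
Lungs hyperinflated but clear. No pneumothorax or pleural effusion.
Pulmonary vasculature normal. Heart size normal. Osseous structures
demineralized, however intact.
Impression: Hyperinflated lungs consistent with emphysema. Osteopenia."""

PROMPT = (
    "Rewrite the following radiology report for a patient with no medical "
    "training. Use short plain-English bullet points, keep every finding, "
    "and define each clinical term in parentheses the first time it appears.\n\n"
    + RADIOLOGY_REPORT
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice; any capable model would do
    messages=[{"role": "user", "content": PROMPT}],
)

simplified = response.choices[0].message.content
print(simplified)
print("\nThis simplified version was generated by an AI assistant.")
```

Note that in this sketch the disclaimer is appended by the calling code rather than requested from the model, so it cannot be dropped or reworded by the AI itself.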
The power of this technology in helping patients decipher a radiology or lab report, assuming of course that the generative AI is simplifying accurately, cannot be overestimated. It would completely eliminate the need for patients to conduct time-consuming web searches to understand what is going on with them. And make them feel empowered.
That’s what I wrote back in October 2023.
Now, let’s turn to the recent OpenAI announcement and specifically a couple of sentences from it: People have shared countless stories of turning to ChatGPT to help make sense of it all. In fact, health is one of the most common ways people use ChatGPT today: based on our de-identified analysis of conversations, over 230 million people globally ask health and wellness related questions on ChatGPT every week. (emphasis added)
“… help make sense of it all.” Hence, the genesis of ChatGPT Health, which is currently open to only some users.
Now, how will ChatGPT Health make OpenAI money? I don’t know. Can you trust the fact that ChatGPT Health is HIPAA compliant? They’re saying so. But do regular ChatGPT users worried about their health care about HIPAA, assuming they even know about this standard? I’m willing to bet “no.” And while that’s no excuse for having lax privacy standards, the takeaway is simple: there is a hunger for a well-designed, consumer-facing tool that sheds more light on a person’s medical record and health trends over time. There is a demand for a centralized location of health data, including medical records and wearables, and the ability to query that collective and disparate data and get easily understandable, actionable responses in plain English. OpenAI saw the need and jumped at the opportunity. Microsoft saw it but never scaled it enough, at least that’s how it appears.
When asked if this was a missed opportunity, an external Microsoft spokeswoman forwarded me a blog post that Microsoft AI CEO Mustafa Suleyman wrote in the fall of 2025, which included this paragraph on the Washington company’s capabilities in consumer-facing health.
Copilot for health addresses one of the most common user needs: health-related questions. We’ve improved how we ground responses in credible sources like Harvard Health to empower users with reliable information. Copilot also helps you find the right doctors quickly and confidently, matching based on specialty, location, language, and other preferences. The goal is simple: to help you take control of your health by empowering you with high-quality information and connecting you to the right care fast.
Except Copilot for health is not really a dedicated health tool at all. It’s simply Copilot, and you can ask the generative AI tool health-related questions. It doesn’t have the functionality or the privacy standards to handle an individual’s medical records or data from wearables.
In other words, a big missed opportunity for Microsoft, as well as traditional health companies that claim to be patient-centric.
Two Epic executives, including the EHR firm’s chief medical officer, counter this conclusion. Stay tuned for that story and what they think of ChatGPT Health.
Photo: Wong Yu Liang, Getty Images
