Friday, February 20, 2026

Can We Ride the GenAI Wave Without Getting Subsumed by It? – The Health Care Blog

By DAVID SHAYWITZ

“There are decades where nothing happens; and there are weeks where decades happen,” said Lenin, probably never. It’s also a remarkably apt characterization of the past year in generative AI (genAI) — the past week especially — which has seen the AI landscape shift so dramatically that even skeptics are now updating their priors in a more bullish direction.

In September 2025, Anthropic, the AI company behind Claude, released what it described as its most capable model yet, saying it could stay on complex coding tasks for about 30 hours continuously. Reported examples included building a web app from scratch, with some runs described as producing roughly 11,000 lines of code. In January 2026, two Wall Street Journal reporters who said they had no programming background used Claude Code to build and publish a Journal project, and described the capability as “a breakout moment for Anthropic’s coding tool” and for “vibe coding” — the idea of creating software simply by describing it.

Around the same time, OpenClaw went viral as an open-source assistant that runs locally and works through everyday apps like WhatsApp, Telegram, and Slack to execute multi-step tasks. The deeper shift, though, is architectural: the ecosystem is converging on open standards for AI integration. One such standard, known as MCP — the “USB-C of AI” — is now being downloaded nearly 100 million times a month, suggesting that AI integration has moved from exploratory to operational.

Markets are watching the evolution of AI agents into potentially useful economic actors and reacting accordingly. When Anthropic announced plans to move into high-revenue verticals — including financial services, law, and life sciences — the Journal headline read: “Threat of New AI Tools Wipes $300B Off Software and Data Stocks.”

Economist Tyler Cowen observed that this moment will “go down as some kind of turning point.” Derek Thompson, long concerned about an AI bubble, said his worries “declined significantly” in recent weeks. Heeding Wharton’s Ethan Mollick — “remember, today’s AI is the worst AI you’ll ever use” — investors and entrepreneurs are busily searching for opportunities to ride this wave.

Some founders are taking their ambition to healthcare and life science, where they see a slew of problems for which (they expect) genAI might be the solution, or at least part of it. The approach one AI-driven startup is taking toward primary care offers a glimpse into what such a future might hold (or perhaps what fresh hell awaits us).

Two Visions of Primary Care

There is genuine crisis in primary care. Absurdly overburdened and comically underpaid, primary care physicians have fled the profession in droves — some to concierge practices where (they say) they can provide the quality of care that originally attracted them to medicine, many out of clinical practice entirely. Recruiting new trainees grows harder every year.

What’s being lost is captured with extraordinary power by Dr. Lisa Rosenbaum in her NEJM podcast series on the subject.

In a companion essay, Rosenbaum documents the measurable consequences when patients lose a primary care physician: a rise in mortality, emergency room visits, and hospitalizations, all in proportion to the relationship’s duration — suggesting, as she writes, “that the relationship itself conferred health benefits.” Worse, more than three quarters of patients never form a new PCP relationship after losing one.

But Rosenbaum’s deepest concern isn’t statistical. It’s about what she calls the “good doctor” phenotype — not a skill set but an approach. She describes a physician whose hallmark was assuming responsibility for the totality of his patients’ problems. When Rosenbaum was caring for one of his hospitalized patients, the patient insisted she update the doctor, explaining simply: “He’ll want to know.” For Rosenbaum, having your patients intuit that you’d want to know — far more than any quality metric — constitutes the essence of being a good doctor. A “culture with no vision of the good doctor,” she warns, “is a profession with no soul.”

Her darkest worry: the system could morph into “some artificial-intelligence-enhanced triage system devoid of a relational core.”

Which is almost exactly what physician-entrepreneur Muthu Alagappan, co-founder of Counsel Health, aspires to deliver — for the sake of patients. His starting point: 100 million Americans don’t have a relationship with a doctor, good or otherwise. The relational ideal Rosenbaum celebrates is already inaccessible to vast swaths of the population.

At Counsel Health — recently backed by a $25M Series A from GV and Andreessen Horowitz — AI handles the upfront information gathering and preliminary clinical reasoning, functioning, as Alagappan puts it, like “an extremely smart medical resident that’s reasoning alongside them, serving up the plan and allowing them to approve or deny in a single click.” Doctors see 15 to 20-plus patients per hour. The vision: primary care visits costing less than a dollar.

As Alagappan sees it: “It’s hard to fathom a cognitive aspect of the practice of medicine in primary care that a technology system is just not better suited to do than the human brain.”

He acknowledges that humans may remain necessary for pesky, hands-on tasks like wrapping an ankle or administering a vaccine, but beyond these, he seems to believe, the future belongs to the machines. He anticipates “regulation will ease and improve so that the AI can do more and more.”

In Utah, the approach pursued by a startup called Doctronic suggests such regulatory change may be closer than we think. The company’s AI prescribes renewals with no physician in the loop for 190 routine medications, at $4 per script — with a malpractice insurance policy covering the AI system itself, and escalation and oversight safeguards. Expansion is already contemplated to states like Texas, Arizona, and Missouri, with a national rollout under consideration as well.

Who’s in Charge?

As AI capabilities compound rapidly, there’s tremendous temptation to apply them wherever they fit most naturally. Without intentionality, this approach risks quietly redefining disciplines by the tasks the technology performs well. Because AI can efficiently process symptoms, match protocols, and renew prescriptions, we might start to define medicine as those specific tasks — in much the same way that because we can measure steps, sleep scores, and VO2 max, we’re tempted to define health as the optimization of dashboard metrics. As Kate Crawford astutely warned, we must not let the “affordances of the tools become the horizon of truth.”

This tension extends to biopharma R&D as well. Here, efforts to leverage AI have succeeded in limited domains with dense data and established benchmarks, but have struggled where the essential data are scarce, highly conditional, or both — as Andreas Bender, in particular, has eloquently discussed.

We’re always tempted to look where the light is. But difficult as it can be to maintain focus on what truly matters, rather than what technology most readily delivers, it can be done.

A Company Built on What Matters

For some time now, I’ve argued — in this space, at KindWellHealth, and elsewhere — that genuinely enhancing human flourishing requires attention to three broad dimensions: physiology (movement, nutrition, recovery, preventive screening), agency (your belief in your ability to shape a better future), and connection (the value of meaningful relationships and purposeful pursuits).

The news that caught my attention recently was that someone independently built a business around exactly this framework. Unbound, a UK-based preventive health company operating from a single just-opened location in London’s Shoreditch, describes itself as “built on the belief that physical, mental and social health are inseparable.”

Several design choices distinguish Unbound from the optimization-culture norm. They measure connectedness alongside biomarkers — literally assessing social connection as a clinical input. Their medical director, Dr. Elliott Roy-Highley, frames health as “not merely the result of internal cellular mechanics, but an emergent property of social integration, purpose, and communal regulation.” A coffee shop replaces the waiting room; group circles, run clubs, and art exhibitions aren’t wellness window-dressing but structural commitments — the social environment is treated as a meaningful part of the intervention.

Perhaps most distinctive is a post-assessment “future self” exercise — an evidence-backed positive psychology intervention that asks participants to envision their optimal future self and identify personal barriers to achieving that vision. By strengthening the psychological connection between present and future selves, the exercise enhances goal clarity, self-efficacy, and motivation for behavior change. This process works through narrative mechanisms — imagining, evaluating, and orienting toward personally meaningful goals — that translate assessment insights into actionable health strategies.

Crucially, Unbound doesn’t reject measurement and technology. They offer a companion app for extending connection and tracking recommendations beyond the clinic; their assessments integrate blood work and physical performance testing alongside the emotional and social components. As Unbound puts it: “Yes, we use tools like clinical testing — but not as a way to measure your worth or push you to chase perfection. We use them to guide and support a much bigger goal: helping you live the life you want, with clarity and confidence.” The intent: leverage science and technology with intentionality, pointing them where they should be aimed, rather than where they’re most inclined to go.

Of course, there is a large gap between a compelling concept and improved health. It’s possible Unbound will prove to be savvy wellness marketing aimed at motivated, affluent urbanites. The people who walk into a trendy Shoreditch health studio are already relatively motivated and likely already drawn to purposeful engagement. The evidence that the program actually improves health, while theoretically grounded, remains to be seen.

But the interest Unbound has attracted reveals a substantial appetite for something beyond relentless metric optimization — and there’s little in their approach that seems especially proprietary. The same foundational principles — deepen connection, develop agency, attend (with compassion) to physiology — could all be applied at scale by incumbents and digital platforms. Peloton, for instance, has the community infrastructure and the user engagement; what it lacks is a framework that extends beyond leaderboards and performance dashboards toward something that can help users not just perform but flourish.

Bottom Line

GenAI is advancing at a pace that would have seemed fantastical even a year ago; the developments of the past few weeks have forced even seasoned skeptics to recalibrate. There is tremendous incentive — and good reason — to ride this technology wave toward compelling opportunities like the crisis in primary care. But as these capabilities compound, the central challenge will be ensuring the technology serves what patients and people truly need, rather than allowing those needs to be defined by what the technology most readily delivers. The risk of essentially reducing health to what can be optimized by technology is real, as so many tech-powered companies in healthcare, biotech, and fitness demonstrate. But it is also possible to leverage technology in service of a more complete and less reductive vision — attending to physiology, agency, and genuine human connection — as Unbound suggests, and, hopefully, many others pursue.

Dr. David Shaywitz, a physician-scientist, is a lecturer at Harvard Medical School, an adjunct fellow at the American Enterprise Institute, and founder of KindWellHealth, an initiative focused on advancing health through the science of agency. This piece was previously published on the Timmerman Report.
