Monday, March 9, 2026

Digital Health Delivery Company Solera Tackles AI Governance Issues

Solera Health has created a digital platform that matches health plan members to more than 20 curated digital health solutions. Two of the company's executives recently sat down with Healthcare Innovation to discuss the company's business model and growth, as well as its approach to AI governance across its digital health partner network.



Glenn Alphen, Solera's chief commercial officer, spoke about the company's founding and growth, and Mike Levin, the company's general counsel and chief information security officer, described the complexity of developing an AI governance framework across its ecosystem of digital health solution partners.



As an example of the type of partnership it develops with payers, Blue Cross and Blue Shield of Texas recently announced its Unity Health Hub, powered by Solera Health, which will link to customer service and condition management resources to give members a coordinated experience.

Healthcare Innovation: Could you give us an overview of the company's business model and talk about some of the digital health partners it works with?

Alphen: The company was founded under the Affordable Care Act to serve Medicare Advantage members, to drive them to diabetes prevention programs locally and potentially digitally, and then to turn their progress into claims. We started to build a front end, using interviewing techniques to understand the individuals using it. Over time, our commercial customers who also had Medicare Advantage said that there were some digital programs that would be great for their commercial population in weight management and diabetes prevention. Could we do that as well? So we began to build out a model that steered people to those kinds of programs and figured out ways to build those as claims.

We began gathering information on engagement and outcomes. Are you actually losing weight? Are you actually doing the program? We built what is essentially our own EMR, where we keep track of all that data coming in through these partners over time. Now we're at eight conditions.

We have a number of large health plan customers that use all of our condition categories, primarily in commercial markets, whether it's fully insured or ASO [Administrative Services Only] sell-through.

When I'm at a conference and people ask what we do, I say, 'See everything in this room? We're trying to make it easy for an individual to navigate, and to take the point-solution fatigue away from the health plan or the employer by being the place where a network for digital and virtual care exists, so we're really creating a network approach.'

HCI: Does Solera vet the digital health solutions in terms of their efficacy or trustworthiness? Or do the health plans say to you that they work with a particular company and would like you to make it part of your network?

Alphen: We do have plans say, 'Hey, we love these guys. We want to make them part of the network.' But because of our vetting process, it doesn't always happen. We start with clinical vetting. Then there's business alignment. Do they serve a care path that we already serve, or do they serve a new care path? Because that's how we think about it: what's the appropriate care path? There's a very clinical lens. The trick is that they have to agree to more of a pay-for-performance model, which is that matching up of engagement with clinical outcomes. Can they share the data so that we can build a value-based framework around billing? There are different billing methodologies. They're often per member, per month, and that's where a lot of that point-solution fatigue comes from. The employers or the health plans are always having to adapt to somebody's new methodology. We clean that up for them, generally speaking.

HCI: Solera recently announced a new behavioral health network with the companies Calm and Lyra Health. Could you talk about that?

Alphen: Yes. We've been very successful in the mental health space with some prior partners. We thought we needed a little bit more of an expansive category, to really meet our customers where they are. Calm grabs a lot of attention because of their deep consumer background, but they've launched Calm Health for Employers, which also asks questions about other conditions that we serve. We'll be able to map some of that data into other offerings that we have. Behavioral health gives us some flexibility to do some more specific offerings. I don't really want to get into what those are yet, but there are other areas that we can go into in behavioral health.

HCI: Let me turn to Mike. I saw some information about Solera unveiling a framework for responsible and transparent use of AI in digital health, to be used across your partner ecosystem. Could you first talk about where governance most often collapses once AI goes operational, and what effective, enforceable AI oversight needs to look like now in this space?

Levin: You're asking: how does AI governance break down? Often, it's the same things that you see in security. First and foremost, it's inventory drift. A lot of organizations don't even realize that they're using AI, particularly in production, or that their network partners are using it, so they don't even have a proper inventory of where the AI is actually embedded.

Monitoring atrophy happens quite a bit, particularly when you're building out a governance program. The monitoring cadence starts to drift, and the people who are monitoring may not be monitoring continuously, and that becomes a huge risk. The third thing is incident response gaps. When we engage with our payers, that's the one that they're continually asking us about. A pilot doesn't really surface real incidents because it's very limited in scope. But once you're actually out in the real world, production is very different. When an AI makes a problematic recommendation, how do you respond to it? In a live clinical context, you need an escalation path. You need to be pulling in the subject matter expertise. These have very limited 24- to 72-hour reporting windows as well. More than anything else, the incident response is just not really thought through. It has to mirror what you do from a cyber perspective. If there are pre-existing models that exist in security, you can basically copy them over to the AI side.
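Levin's point about copying security incident-response models over to the AI side can be sketched in code. This is a purely illustrative example, not Solera's actual tooling: the class names, fields, and the use of the 72-hour upper bound from the "24- to 72-hour reporting windows" he mentions are all assumptions.

```python
from datetime import datetime, timedelta, timezone

# Assumed upper bound of the 24- to 72-hour reporting windows mentioned above
REPORTING_WINDOW_HOURS = 72


class AIIncident:
    """Minimal AI incident record, mirroring a security IR ticket (illustrative)."""

    def __init__(self, summary: str, detected_at: datetime):
        self.summary = summary
        self.detected_at = detected_at
        self.escalated_to: list[str] = []  # subject-matter experts pulled in

    def escalate(self, expert: str) -> None:
        # The escalation path Levin describes: pull in the right expertise
        self.escalated_to.append(expert)

    def reporting_deadline(self) -> datetime:
        return self.detected_at + timedelta(hours=REPORTING_WINDOW_HOURS)

    def overdue(self, now: datetime) -> bool:
        return now > self.reporting_deadline()


# Example: a problematic recommendation detected in production
detected = datetime(2026, 3, 1, 9, 0, tzinfo=timezone.utc)
incident = AIIncident("model surfaced a problematic recommendation", detected)
incident.escalate("clinical SME")
# 96 hours later, the 72-hour window has lapsed
print(incident.overdue(datetime(2026, 3, 5, 9, 0, tzinfo=timezone.utc)))  # → True
```

The design choice here follows the interview directly: rather than inventing a new workflow for AI, the record reuses the shape of an existing security incident ticket (detection timestamp, escalation list, reporting deadline).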

HCI: Solera is sitting in kind of a unique position at the center of a digital health ecosystem of separate companies. Is this governance framework one you're building to help all those companies, as a baseline you expect them to reach in terms of things like transparency?

Levin: We have a fairly expansive AI governance program for our digital health providers. That's something that we keep being asked about by our payers. There's a lot of anxiety around this, because it's an unknown, and there is a lot of overlapping and sometimes contradictory guidance around this. We see dual risks. There's the clinical and there's the compliance, and they don't always align. Clinical risk is about patient safety and care quality. Does the AI surface accurate recommendations? Does it hallucinate? Does it perform equitably? If the data that's coming in has bias, the outcomes that come out also have bias. Could it lead to harm if the output is wrong?

Then there's the compliance risk, which is the one that you hear more about from the legal side, and that's regulatory exposure. Everybody's familiar with HIPAA, but there are all these new laws, particularly in California and Colorado. Washington state has one, too. The FTC looks like it is going to start enforcing this as well. So there's a lot of concern about the legal risk perspective as well.

We have a cross-functional oversight committee for our AI governance, which has engineering, legal, security, and compliance. Each of them has a unique perspective on the AI problem, if you will. Those perspectives have to work together, because the risks that I identify will not be the same risks that the engineering team or the clinical team will see. That's how you have to manage it. The practical reality is that good clinical governance often satisfies the compliance requirements. So if you do one right, it will often lead to the other. You have to document everything. You need a big paper trail.

HCI: Are these digital health companies in your network appreciative that you guys are doing this? Is it like you're helping them, or is it like you guys are the taskmasters who are making them do this stuff?

Levin: Well, some of them are less happy than others. We have a wide range of digital health partners because we have a fairly large portfolio, and some of them are much more mature, and they're able to provide model cards. They're able to explain risk, to explain bias and other things. We have to walk them through this, but by doing that, they actually build out better practices internally.

The part that surprised me more than anything else was that you might think AI is everywhere, but it's really not always being applied directly in the delivery of care. It's in the back end. It's mostly being used for coding or as a copilot in the office, but it's not actually built into a lot of these healthcare apps, because there's so much anxiety around it from a compliance perspective.

HCI: I read that full implementation across the partner network was expected by the end of the third quarter of 2025. Did that stay on schedule?

Levin: There have been some changes in our network, so since that statement we've had some folks join and others leave. But we do have visibility into the AI status across all of our partners. We know the posture of all of them, and we're helping those that need the help.

HCI: And Solera is developing an AI maturity scoring capability with interactive dashboards for security and compliance, expected to roll out this year?

Levin: We're working on that as part of our larger Halo platform. It's one of the product features. Think of it as a scoring mechanism for the digital health providers, from a security perspective as well as from an AI risk perspective. Think of it almost like a credit score.
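Solera hasn't published how the Halo maturity score is computed, so as a purely illustrative sketch, here is one way a "credit-score-style" rating could combine security and AI-risk sub-scores. Every field name, weight, and range below is an assumption for illustration, not Solera's actual design.

```python
from dataclasses import dataclass


@dataclass
class PartnerPosture:
    # Hypothetical 0-100 sub-scores a governance team might track per partner
    inventory_coverage: float  # share of AI uses that are actually catalogued
    monitoring_cadence: float  # how current the monitoring reviews are
    incident_readiness: float  # escalation paths and reporting windows tested
    security_controls: float   # baseline cyber posture


def maturity_score(p: PartnerPosture) -> int:
    """Map sub-scores onto a 300-850 credit-score-like range (illustrative weights)."""
    weights = {
        "inventory_coverage": 0.25,
        "monitoring_cadence": 0.25,
        "incident_readiness": 0.30,
        "security_controls": 0.20,
    }
    composite = sum(getattr(p, k) * w for k, w in weights.items())  # 0-100
    return round(300 + composite / 100 * 550)


# Example: a mid-maturity partner with strong security but weak incident readiness
print(maturity_score(PartnerPosture(80, 60, 50, 90)))  # → 674
```

The 300 to 850 range simply borrows the familiar consumer credit-score scale that Levin's analogy invokes; the weighting toward incident readiness reflects his earlier comment that incident response is the gap payers ask about most.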

HCI: That already sounds like a lot, but are there any other big tasks on your to-do list for 2026?

Levin: That is a lot. I'd say that AI probably consumes about 50% of my team's time from a governance and oversight perspective, because there's so much unknown about it right now, and it's so dynamic. But we're not alone. I've seen this across the payer ecosystem as well. A lot of the payers have invested fairly heavily in building AI governance teams, and no two of them are the same. They all respond differently. They're all interpreting the regulations differently. When you've seen one AI governance program, you've seen one AI governance program.
