“The bottom line is that AI regulation is really about transparency. We don’t know everything.” 

Those words were spoken by Luke Ralston — who has been a biomedical engineer and scientific reviewer at the FDA for nearly two decades — during a presentation last week at the Heart Rhythm Society’s HRX conference in Atlanta. The FDA still views AI regulation as an evolving science that is highly specific to each device and its intended use, he said. 

As the FDA continues its work to ensure that AI is deployed safely and ethically within healthcare, there are two prevalent issues that reviewers frequently run into, Ralston stated.

The first has to do with performance drift. As a reviewer, Ralston said he would like to see data from companies showing how their products perform in real-life clinical situations.

“We all have datasets that we collect, we adjudicate, and we train the models on, and then we deploy them. That training is all well and good, but how does it function when you deploy? Does the intended user population change in such a way that the metrics start to deteriorate? That’s a real problem. And that’s one that we’ve seen,” he declared.

Companies may want to think more about conducting post-market monitoring so they can track how well their models are performing in the real world, Ralston added.
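Ralston did not describe any specific monitoring setup, but the basic idea behind post-market performance tracking can be sketched simply: compare the model's accuracy on adjudicated post-deployment cases against the baseline it achieved during pre-market validation, and flag when the gap exceeds a tolerance. The class, field names and threshold below are hypothetical, chosen purely for illustration.

```python
from dataclasses import dataclass


@dataclass
class DriftMonitor:
    """Compares post-deployment accuracy against a pre-market baseline.

    All names and thresholds here are illustrative; they are not part of
    any FDA guidance or of the tools Ralston described.
    """
    baseline_accuracy: float  # accuracy measured on the validation set
    tolerance: float = 0.05   # allowed drop before raising a flag

    def check_batch(self, predictions: list[int], labels: list[int]) -> bool:
        """Return True if performance on this batch has drifted below tolerance."""
        if not predictions or len(predictions) != len(labels):
            raise ValueError("predictions and labels must be non-empty and equal length")
        accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(predictions)
        return (self.baseline_accuracy - accuracy) > self.tolerance


# Example: a model validated at 92% accuracy scores 70% on a small post-market batch.
monitor = DriftMonitor(baseline_accuracy=0.92)
post_market_preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
adjudicated_labels = [1, 0, 0, 1, 0, 1, 1, 0, 0, 1]
if monitor.check_batch(post_market_preds, adjudicated_labels):
    print("Performance drift detected: investigate the deployed patient population.")
```

In practice such checks would run on rolling batches of real-world cases rather than a single toy list, but the principle is the same: the training metrics are only a starting point, and the deployed population decides whether they hold.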

Data generalization is the second major issue with healthcare AI. Ralston stated that datasets need to be large, clean and representative.

To effectively train a healthcare AI model, developers need thousands — ideally tens of thousands — of data points, he noted. 

“Right now, retrospective data is kind of the best we have in a lot of areas, and it’s not perfect. There’s a lot of missing data, and there’s a lot of unrepresentative data, but if you put in the work, I think that we can get to those datasets that are large enough for training and then for testing,” Ralston said.

He also pointed out that the healthcare world needs to expand its idea of what representative data is. 

To him, it is obvious that “you can’t just have 60-year-old white men in every atrial fibrillation trial and say this is going to generalize to the entire population.” However, healthcare leaders don’t always recognize how important it is to gather data that is diverse in more ways than just demographically, Ralston declared.

“What’s the hardware used to acquire [the data]? What are the hospital systems that are being used to acquire these? Every hospital system has slightly different workflows,” he remarked. “What are we doing to look at those workflows — to make sure that they’re truly representative of the intended patient population and the intended use of the device?”
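Ralston did not prescribe a particular analysis, but one way to make that broader notion of representativeness concrete is to report a model's performance stratified by acquisition site and hardware rather than as a single aggregate number. The field names below are assumptions made for the sake of the example.

```python
from collections import defaultdict

# Each record carries metadata about where and how the data was acquired.
# Field names ("site", "device_model") are hypothetical, for illustration only.
records = [
    {"site": "Hospital A", "device_model": "ECG-X1", "correct": True},
    {"site": "Hospital A", "device_model": "ECG-X1", "correct": True},
    {"site": "Hospital B", "device_model": "ECG-X1", "correct": False},
    {"site": "Hospital B", "device_model": "ECG-Z9", "correct": True},
    {"site": "Hospital B", "device_model": "ECG-Z9", "correct": False},
]

# Group accuracy by (site, device) instead of reporting one aggregate number,
# so a model that only works well on one hospital's hardware is easy to spot.
buckets: dict[tuple[str, str], list[bool]] = defaultdict(list)
for r in records:
    buckets[(r["site"], r["device_model"])].append(r["correct"])

for (site, device), outcomes in sorted(buckets.items()):
    accuracy = sum(outcomes) / len(outcomes)
    print(f"{site} / {device}: accuracy {accuracy:.0%} over {len(outcomes)} cases")
```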

As AI continues to evolve in the healthcare world, companies are going to have to start coming up with good answers to these questions, Ralston noted.

Editor’s note: This story is based on discussions at HRX, a conference in Atlanta that was hosted by the Heart Rhythm Society. MedCity News Senior Reporter Katie Adams was invited to attend and speak at the conference, and all her travel and related expenses were covered by the Heart Rhythm Society. However, company officials had no input in editorial coverage.

Photo: Gerd Altmann, Pixabay
