The Food and Drug Administration (FDA) has announced it will develop a new regulatory framework for AI medical devices, aimed at reviewing “safe and effective medical devices”.
In a statement released earlier this month, its former commissioner Dr Scott Gottlieb acknowledged the power of AI and machine-learning software, which can learn from and respond to real-world feedback, to drive the development of innovative medical devices and technologies.
He said: “AI has helped transform industries like finance and manufacturing, and I’m confident that these technologies will have a profound and positive impact on health care.
“I can envision a world where, one day, AI can help detect and treat challenging health problems, for example by recognising the signs of disease well in advance of what we can do today.”
What are the current challenges in controlling AI-powered medical devices?
Alongside the statement, the agency also released a 20-page discussion paper titled Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning.
The document explores the challenges of regulating AI-powered technology, also known as Software as a Medical Device (SaMD), whose algorithms can evolve over time into smarter versions that differ from the original product that was approved or cleared by FDA.
So far, the agency has only granted marketing authorisation or approval to medical devices built on “locked” algorithms.
Last year, FDA authorised two AI-based products. The first, IDx-DR from Coralville, Iowa-based IDx, is designed to detect diabetic retinopathy, an eye disease that can cause vision loss.
The agency also authorised a product from San Francisco, California-based Viz.ai: software designed to notify providers of a potential stroke in their patients.
Dr Gottlieb described the authorisation as a “harbinger of progress” that FDA is expecting to see as more medical devices incorporate advanced AI algorithms to improve their performance and safety.
Dr Jabe Wilson, consulting director, text and data analytics at Elsevier, explained that the new AI framework from FDA has come at a time of growing concern over the ethical use of AI.
He said: “It’s important we are able to understand how training data is generated and gathered; when AI is used for medical devices, and more widely in the pharma industry, it’s critical to provide a transparent and understandable rationale for its ‘decision’ or output.
“Work is being done on creating artificial training data and looking at how to make decisions transparent, but it’s important FDA takes this into account and puts in place systems capable of gathering and normalising data to ensure analysis can be conducted accurately.”
What are “locked” and “adaptive” algorithms?
Two types of algorithms were outlined in the report: Locked algorithms and adaptive algorithms.
Locked algorithms do not continually adapt or learn with use: they return the same result each time for a given input, and can only be modified and validated manually by the manufacturer.
An adaptive algorithm does the very opposite.
Dr Gottlieb explained: “The power of these AI/machine learning-based SaMD lies within the ability to continuously learn, where the adaptation or change to the algorithm is realised after the SaMD is distributed for use and has ‘learned’ from real-world experience.
“Following distribution, these types of continuously learning and adaptive AI/machine learning algorithms may provide a different output in comparison to the output initially cleared for a given set of inputs.”
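The distinction can be illustrated with a minimal, hypothetical sketch (the class names, risk scores and threshold-update rule below are illustrative assumptions, not taken from any FDA document): a locked model’s behaviour is frozen at clearance, while an adaptive model updates itself from real-world feedback, so the same input can later produce a different output.

```python
class LockedClassifier:
    """Locked algorithm: the same input always yields the same output;
    only the manufacturer can modify and re-validate the model."""
    def __init__(self, threshold: float):
        self.threshold = threshold  # fixed at the time of clearance

    def predict(self, risk_score: float) -> str:
        return "refer" if risk_score >= self.threshold else "no-refer"


class AdaptiveClassifier(LockedClassifier):
    """Adaptive algorithm: keeps learning after deployment, so the
    threshold drifts away from the version originally cleared."""
    def update(self, risk_score: float, disease_confirmed: bool,
               learning_rate: float = 0.05) -> None:
        predicted_refer = risk_score >= self.threshold
        if predicted_refer and not disease_confirmed:
            self.threshold += learning_rate   # false positive: be stricter
        elif not predicted_refer and disease_confirmed:
            self.threshold -= learning_rate   # missed case: be more sensitive


locked = LockedClassifier(threshold=0.5)
adaptive = AdaptiveClassifier(threshold=0.5)
adaptive.update(risk_score=0.6, disease_confirmed=False)  # real-world feedback
# Same input, diverging behaviour after learning:
print(locked.predict(0.52))    # "refer"
print(adaptive.predict(0.52))  # threshold is now 0.55, so "no-refer"
```

This drift is precisely what the discussion paper grapples with: after the update, the adaptive model no longer gives the output that was originally cleared for that input.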
Therefore, the agency is proposing a “predetermined change control plan” to provide information to FDA on anticipated changes an algorithm may undergo, along with an explanation of the method used to implement those changes.
FDA wants the public’s input on the regulation of AI medical devices
The framework is currently open for public comment from experts and stakeholders in the medical space on how to better incorporate AI tools for better patient care.
According to Dr Gottlieb, one of the key elements in building a gold-standard framework is collaboration, which encourages feedback.
He stated: “We have more work to do to build out this initial set of ideas and we’ll rely on comments and feedback from experts and stakeholders in this space to help inform the agency as we continue to think about how we’ll regulate AI technologies to improve patient care.”
“As algorithms evolve, FDA must also modernise our approach to regulating these products.
“We must ensure that we can continue to provide a gold standard of safety and effectiveness.
“We believe that guidance from the agency will help advance the development of these innovative products.”
The FDA intends to publish draft guidance based on the feedback received on the discussion paper.