In this special guest feature, Bradley Merrill Thompson, a partner in the Washington DC office of law firm Epstein Becker & Green and Chairman of the Board of the firm’s consulting affiliate EBG Advisors, starts a compelling and timely conversation on the FDA’s approach to regulating machine learning. Bradley leads a broad coalition of technology companies seeking reform of FDA’s approach to regulating clinical decision support software. To learn more about the CDS Coalition, visit www.cdscoalition.org. He would like to thank Mike Robkin, Robert Schulz and Mike Willingham for their insightful comments on this article, but he is solely responsible for any mistakes.
Many at FDA are big fans of using machine learning in healthcare. As a research tool, machine learning offers great promise in the discovery of new drugs and other treatments. But when machine learning algorithms become part of a regulated medical device, the unique nature of that technology creates challenges for the agency.
Machine learning is not a new subject for FDA. For years, the agency has been regulating software that uses machine learning algorithms to analyze medical images such as mammograms for potential areas of concern. For these products, there is now a relatively well-traveled path to market: developers give FDA specific information about the algorithm and its training, the features analyzed, and the models and classifiers used. Further, in their submissions, developers commonly compare the software’s results against the readings of three human experts to see whether there is sufficient agreement.
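To make that expert-comparison step concrete, here is a minimal sketch in Python of checking an algorithm’s reads against a three-expert consensus. The case labels are invented for illustration, and real submissions rely on much larger reader studies and formal statistics such as Fleiss’ kappa rather than raw percent agreement.

```python
# A minimal sketch of the reader-agreement check described above.
# All labels are hypothetical: 1 = "area of concern flagged", 0 = "no finding".
from collections import Counter

algorithm = [1, 0, 1, 1, 0, 1, 1, 0]
expert_1  = [1, 0, 1, 0, 0, 1, 0, 0]
expert_2  = [1, 0, 1, 1, 0, 1, 1, 0]
expert_3  = [1, 1, 1, 1, 0, 1, 0, 0]

def majority(labels):
    """Consensus read: the label given by at least two of the three experts."""
    return Counter(labels).most_common(1)[0][0]

consensus = [majority(case) for case in zip(expert_1, expert_2, expert_3)]
agreement = sum(a == c for a, c in zip(algorithm, consensus)) / len(algorithm)
print(f"Agreement with expert consensus: {agreement:.0%}")
```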
Now developers are moving beyond image analysis to clinical decision support software, or CDS, where clinical validation is not so easily done. Historically, most CDS has been designed as expert systems, essentially tracking human logic through if-then statements. But developers now are using machine learning to spot associations in data sets that have so far eluded the human eye. That’s exciting, but it has some at FDA wondering how to regulate such software.
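The contrast is easy to see in code. The sketch below pairs a hypothetical if-then rule of the traditional expert-system kind with a classifier that learns its decision boundary from toy data; the thresholds, feature names, and labels are invented for illustration and carry no clinical meaning.

```python
# Hedged illustration of the two CDS styles described above.
from sklearn.linear_model import LogisticRegression

def expert_system_cds(creatinine, age):
    """Traditional CDS: explicit if-then rules a clinician can read directly."""
    if creatinine > 1.3 and age > 65:
        return "flag: possible renal impairment"
    return "no flag"

# Machine-learning CDS: the decision boundary is learned from data rather
# than written down, so the "rule" lives in fitted coefficients.
X = [[0.9, 40], [1.1, 55], [1.6, 70], [2.0, 80], [0.8, 30], [1.5, 72]]
y = [0, 0, 1, 1, 0, 1]  # hypothetical outcomes: 1 = impairment observed
model = LogisticRegression().fit(X, y)

print(expert_system_cds(creatinine=1.6, age=70))
print("learned prediction:", model.predict([[1.6, 70]])[0])
```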
Congress, for its part, stepped in during the lame-duck session last year and enacted the 21st Century Cures Act. With regard to CDS, the Act exempts from FDA regulation any software intended to allow healthcare professionals to “independently review” the basis for CDS recommendations, so that the professional does not need to “rely primarily” on the recommendation in making a decision. CDS powered by machine learning algorithms may have a difficult time meeting that standard if developers cannot design the software to let users review how it arrived at a given conclusion. If the software acts like a black box and is advising doctors in a high-risk situation, FDA may have to regulate the software, including its machine learning algorithms.
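The “independent review” standard suggests a design question: can the software surface the basis for its advice? Below is a minimal sketch of one such design, returning the recommendation together with the inputs and logic behind it. The function name, inputs, and threshold are hypothetical, an illustration of explainable output rather than actual clinical logic or FDA guidance.

```python
# One possible design sketch: a CDS function that explains itself, so a
# clinician can independently review the basis rather than trust a black box.
def cds_with_rationale(inr, on_warfarin):
    """Hypothetical anticoagulation check that returns its own rationale."""
    flagged = on_warfarin and inr > 3.5  # threshold is illustrative only
    recommendation = "consider dose reduction" if flagged else "no change indicated"
    rationale = {
        "inputs": {"INR": inr, "on_warfarin": on_warfarin},
        "logic": "flag when the patient is on warfarin and INR exceeds 3.5",
    }
    return recommendation, rationale

rec, why = cds_with_rationale(inr=4.1, on_warfarin=True)
print(rec)
print("basis:", why["logic"])
```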
That will challenge FDA, because CDS is likely to go beyond the functionality of the image analysis software. Historically, developers of image analysis software have presented FDA with constrained machine learning software that does not itself change over time. That allows the developer to present FDA with data validating the performance of the software at the time the agency clears it for sale.
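To make that “does not change over time” property concrete, here is a minimal sketch of how a developer might lock a trained model: serialize it once, and verify a checksum at load time so the deployed artifact is byte-identical to the version whose validation data the agency reviewed. The file name and hashing scheme are assumptions for illustration.

```python
# Sketch of "locking" a model so the marketed version cannot drift from the
# version that was cleared. Paths and the hash scheme are placeholders.
import hashlib
import pickle

def freeze(model, path="cleared_model.pkl"):
    """Serialize the trained model and return a fingerprint of the artifact."""
    with open(path, "wb") as f:
        pickle.dump(model, f)
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def load_verified(path, cleared_hash):
    """Load the model only if it exactly matches the cleared version."""
    with open(path, "rb") as f:
        blob = f.read()
    if hashlib.sha256(blob).hexdigest() != cleared_hash:
        raise RuntimeError("deployed model does not match the cleared version")
    return pickle.loads(blob)
```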
Some CDS developers may want to go beyond that and present FDA with algorithms that continue to learn in the hands of the user. On the one hand, that sounds great, because it means the software may perform better over time. To a regulator, though, this presents a challenge, because the agency needs assurance that performance will trend consistently upward. Over short periods, performance could actually decline, especially if the learning parameters are poorly designed and the software gets hold of a bad batch of data. Machine learning is what it eats.
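One safeguard that follows from this concern is to gate every self-taught update behind a fixed, curated validation set, so a bad batch of field data cannot silently degrade the deployed model. The sketch below assumes models are simple callables and invents the accuracy check and tolerance; it is illustrative, not a stated FDA requirement.

```python
# Sketch: accept a retrained model only if it does not regress on held-out data.
def evaluate(model, validation_set):
    """Accuracy of `model` on fixed (input, expected_label) pairs."""
    correct = sum(model(x) == y for x, y in validation_set)
    return correct / len(validation_set)

def maybe_promote(current, candidate, validation_set, tolerance=0.0):
    """Promote the retrained candidate only if it is at least as good."""
    if evaluate(candidate, validation_set) >= evaluate(current, validation_set) - tolerance:
        return candidate   # promote the retrained model
    return current         # reject the update; performance would regress

# Trivial stand-in "models" to show the gate in action:
current   = lambda x: x > 0.5
candidate = lambda x: x > 0.7
val_set = [(0.2, False), (0.6, True), (0.8, True), (0.4, False)]
chosen = maybe_promote(current, candidate, val_set)  # keeps `current` here
```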
Further, medical device manufacturers that modify their products are supposed to seek new approvals if the products are changed significantly. Machine learning software, by definition, changes itself. That raises a very important question: when exactly does machine learning software need to be re-reviewed by FDA? One solution may be a stronger system for monitoring the performance of such software after it is introduced.
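A post-market monitoring system of that kind might look something like the sketch below: track live performance against the baseline demonstrated at clearance and flag any drop large enough to count as a significant change worth re-review. The baseline figure, alert margin, and weekly cadence are all invented for illustration.

```python
# Sketch of post-market performance monitoring against a cleared baseline.
CLEARED_BASELINE = 0.92   # e.g., sensitivity demonstrated in the submission
ALERT_MARGIN = 0.05       # drop that should trigger human/regulatory review

def monitor(rolling_performance):
    """Yield an alert whenever observed performance falls past the margin."""
    for week, perf in enumerate(rolling_performance, start=1):
        if perf < CLEARED_BASELINE - ALERT_MARGIN:
            yield f"week {week}: performance {perf:.2f} warrants re-review"

for alert in monitor([0.93, 0.91, 0.88, 0.86]):
    print(alert)
```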
FDA has recognized its need for more internal expertise available for reviews. However, since this is a new field and experts are expensive, government salaries will have a difficult time competing. A more practical route for FDA may be an advisory committee of experts willing to give the agency high-level guidance on the risks and benefits associated with machine learning. The topic is also probably ripe for a public-private partnership that draws expertise from industry in fashioning appropriate regulatory pathways.
Fortunately, FDA is taking encouraging steps with regard to machine learning. In fact, the agency’s FY 2017 Regulatory Science Priorities make it clear that FDA hopes to use machine learning itself in analyzing data for insights that will help protect the public health. Let’s hope FDA carries that enthusiasm forward into finding an appropriate risk-based approach to regulating machine learning in CDS.