AI at the FDA: help or hindrance?

The FDA rolled out its generative AI tool Elsa in June 2025, giving the technology a greater role in the medical device review process and reflecting an ongoing trend towards automation at the agency.

The advent of AI at the US Food and Drug Administration (FDA) is changing the complexion of the regulatory approval process for medical devices.

In May, FDA commissioner Dr Martin Makary announced that the agency would be implementing Elsa, a generative AI (genAI) tool intended to drive efficiencies. Elsa joins a growing range of templates and other automated tools adopted by the agency in recent years, with a particular focus on the review process for medical devices and drugs.

Elsa’s rollout, which arrived ahead of schedule last month, followed mass layoffs at the agency ordered by US health secretary Robert F Kennedy, Jr (RFK Jr), cuts that some observers believed would cause medical device approval bottlenecks and stifle innovation.

A number of observers perceive Elsa to be a hastily rolled out solution designed primarily to paper over staff cuts at the FDA. In any case, regulatory submissions at the FDA are undoubtedly changing. But Elsa is simply the latest in a series of efforts to streamline the agency’s processes, following the introduction of the electronic Submission Template and Resource (eSTAR) programme, an interactive PDF form launched in 2020 that guides applicants through preparing medical device submissions.

Initial reactions to Elsa have been mixed, with some observers within the FDA stating that the AI tool “confidently hallucinates” and has “severe limitations” because it is built on Anthropic’s Claude and its training data only extends to April 2024. Nevertheless, Dr Acacia Parks, a strategic adviser to contract research organisation (CRO) Lindus Health, points out that the majority of reviewers at the FDA were not part of the mass layoffs, meaning there is still very much a human-in-the-loop element when AI reviews device regulatory submissions. “Ultimately, a reviewer is the one who’s got to sign off on it,” says Parks.

Dr William Soliman, CEO of the Accreditation Council for Medical Affairs (ACMA), says that while AI at the FDA may streamline how the content of submissions is developed, managed, and organised, there will need to be “lots of quality checks” along the way.

Soliman says: “We have seen in other fields, like law, for example, where attorneys have lost their privilege to practise because they used AI to prepare legal briefs without checking the accuracy of the information. In some cases, the AI was making up reference cases outright.”

Overreliance on AI also carries the risk that it could miss critical information a human reviewer would identify, Soliman says, raising the possibility that less rigorously assessed medical devices could make it onto the market.

Soliman likens the situation to the “debacle” at UnitedHealthcare, in which the health insurer used an AI system that automatically denied claims from sick elderly customers, ultimately leading to a class-action lawsuit and, later, the murder of its CEO Brian Thompson.

Soliman says: “This is a definite risk, especially given that there is currently no regulation on this AI.”

While there is a greater need for streamlined, more concise applications to the FDA, Parks does not view this as likely to reduce the rigour of the review process or to invite exploitation and ‘corner-cutting’ by those filing submissions.
