A challenge confronting the Food and Drug Administration, and regulators around the world, is how to regulate generative AI. The approval process the agency uses for new drugs and devices isn't appropriate for these tools. Instead, the FDA should conceive of large language models (LLMs) as novel forms of intelligence and regulate them with approaches similar to those it applies to clinicians.
Generative AI has arrived in medicine. Normally, when a new device or drug enters the U.S. market, the Food and Drug Administration (FDA) reviews it for safety and efficacy before it becomes widely available. This process not only protects the public from unsafe and ineffective tests and treatments but also helps health professionals decide whether and how to apply them in their practices. Unfortunately, the usual approach to protecting the public and helping doctors and hospitals manage new health care technologies won't work for generative AI. To realize the full clinical benefits of this technology while minimizing its risks, we will need a regulatory approach as innovative as generative AI itself.