
Doctors Grapple with A.I. Implementation in Healthcare, Highlighting Loosened Regulations


In medicine, cautionary tales about the unintended consequences of artificial intelligence (AI) are well known: a program designed to predict sepsis in patients produced false alarms, and another intended to improve follow-up care for the sickest patients deepened health disparities. Wary of such failures, physicians have hesitated to give AI a central role in their practices, confining it to limited tasks such as scribing, offering a second opinion, or organizing back-office work. Even so, investment in and momentum behind the use of AI in medicine continue to grow.

The Food and Drug Administration (FDA), which plays a vital role in approving new medical products, is actively debating how to handle AI. The technology is already being used to discover new drugs, identify unexpected side effects, and relieve overwhelmed staff of repetitive tasks. But the agency has faced criticism over how it vets and describes the AI programs it clears for doctors to use in detecting various conditions. Physicians need confidence that these tools are effective before they will build them into their workflows and payment systems.

To address these concerns, President Biden issued an executive order calling for regulations to manage the security and privacy risks of AI in healthcare, increased funding for AI research in medicine, and a safety program to collect reports of harm or unsafe practices. A meeting with world leaders to discuss the topic is also planned.

No single agency governs the entire AI landscape in the United States. Senator Chuck Schumer has summoned tech executives to discuss how to nurture the field while identifying its pitfalls, and Google has already drawn attention, along with privacy and consent concerns, for a chatbot aimed at health workers.

The FDA has lagged behind rapidly evolving AI, particularly in its oversight of "large language models," and is only beginning to discuss how to review technology that keeps learning as it processes diagnostic scans. Unlike AI tools in Europe that scan for a range of problems, those cleared under the FDA's existing rules tend to focus on one problem at a time, an approach the rules encourage. And because the agency's authority extends only to products approved for sale, health systems can build and use their own AI tools with little government oversight.

Doctors attempting to deploy FDA-cleared AI tools for detecting various conditions face a shortage of publicly available information about them: basic questions about how a program was developed and tested, and how well it performs, go unanswered. That opacity has made physicians cautious, since an exciting technology could drive unnecessary procedures, higher medical costs, and potentially harmful treatments without meaningfully improving care.

Large studies are beginning to illuminate both the benefits and the flaws of AI in medicine: some have shown gains from using AI to detect breast cancer, while others have exposed weaknesses in an app designed to identify skin cancer. Dr. Eric Topol, an expert on AI in medicine, criticizes the FDA for letting AI developers keep their methodologies secret and for failing to require rigorous studies that demonstrate meaningful benefit.

Dr. Jeffrey Shuren, the chief of the FDA's medical device division, acknowledges that more work is needed to ensure AI programs deliver on their promises. While drugs and certain devices are tested on patients before approval, the same is not typically required of AI software. Dr. Shuren has suggested building labs where developers could access vast amounts of data and test their programs, striking a balance between innovation and regulation.

Other hurdles to adopting AI across major hospital and health networks include the lack of interoperability between software systems and the question of who should pay for the technology. About 30 percent of radiologists already use AI, with simple tools winning acceptance more readily than higher-risk ones, and physicians remain wary of prioritization algorithms trained on narrow patient populations.

To address these concerns, Dr. Nina Kottler is leading an effort to vet the AI programs used at Radiology Partners, a practice that reads millions of scans annually. She evaluates cleared programs by questioning their developers and testing them for accuracy, work that has flagged some with serious limitations and identified others that show promise. In one case, an AI program for detecting brain clots alerted a radiologist to a patient's condition immediately, leading to prompt treatment and a positive outcome.

While there are challenges in integrating AI into healthcare, efforts are being made to address the criticisms and ensure the safe and effective use of these technologies. Increased transparency, rigorous testing, and updated regulations are needed to build confidence in AI tools among physicians and patients.
