The FDA has published a list of over 500 AI algorithms cleared for use, and imaging applications make up the majority of that list. Adoption of AI for medical imaging and disease-detection use cases is increasing. In the UK, Europe, and other regions, health systems are evaluating the use of AI in clinical practice.
AI has the potential to transform the field of radiology by improving the accuracy and efficiency of medical image analysis, diagnosis, and treatment planning.
Regulatory bodies in the US, Canada, the UK, the EU, and other countries such as Australia play an important role in assessing and regulating the use of AI in clinical practice.
What type of radiology software requires regulatory clearances?
The US Food and Drug Administration (FDA) regulates medical devices, including artificial intelligence (AI) apps that are intended for medical use. The types of AI apps that require FDA clearance depend on their intended use and level of risk. Here are some examples of AI apps that may require FDA clearance:
- Diagnostic AI apps: AI apps that analyze medical images, such as X-rays, CT scans, or MRIs, to provide diagnostic recommendations or help identify abnormalities.
- Therapeutic AI apps: AI apps that provide treatment recommendations based on patient-specific data, such as medication dosages or treatment plans.
- Prognostic AI apps: AI apps that analyze patient data, such as medical history or biomarkers, to predict the likelihood of developing a certain condition or disease.
- Monitoring AI apps: AI apps that continuously monitor patient data, such as vital signs or glucose levels, and provide alerts when certain thresholds are reached.
- Decision support AI apps: AI apps that provide clinical decision support to healthcare providers, such as recommending appropriate tests or treatments based on patient data.
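To make the monitoring category concrete, here is a minimal sketch of the kind of threshold-based alert logic such an app might contain. The thresholds, units, and function names below are illustrative assumptions for this article, not values from any cleared device.

```python
from typing import Optional

# Hypothetical alert thresholds, chosen for illustration only.
GLUCOSE_LOW_MG_DL = 70
GLUCOSE_HIGH_MG_DL = 180

def check_glucose(reading_mg_dl: float) -> Optional[str]:
    """Return an alert string if the reading crosses a threshold, else None."""
    if reading_mg_dl < GLUCOSE_LOW_MG_DL:
        return f"ALERT: low glucose ({reading_mg_dl} mg/dL)"
    if reading_mg_dl > GLUCOSE_HIGH_MG_DL:
        return f"ALERT: high glucose ({reading_mg_dl} mg/dL)"
    return None

# A stream of readings; only out-of-range values generate alerts.
readings = [95, 110, 65, 200]
alerts = [a for a in (check_glucose(r) for r in readings) if a]
```

Even logic this simple can fall under FDA oversight when it is intended to drive clinical action, which is why intended use, not code complexity, is the deciding factor.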
It’s important to note that not all AI apps require FDA clearance. The FDA generally regulates medical devices based on their level of risk to patients, and some AI apps may be considered low risk and exempt from clearance. However, it’s always best to consult with the FDA or a regulatory consultant to determine whether an AI app requires clearance before it can be marketed and sold for medical use.
What are examples of low-risk AI apps that may be exempt from FDA clearance?
There are some AI apps that are considered low risk and may be exempt from FDA clearance. These low-risk AI apps generally do not pose a significant risk to patient safety and are intended to support clinical decision-making rather than make a diagnosis or provide treatment recommendations.
Here are some examples of low-risk AI apps that may not require FDA clearance:
- Apps that use simple algorithms or rules-based systems to make recommendations based on patient data, such as medication reminder apps.
- Apps that provide general wellness advice or guidance, such as fitness trackers or diet planners.
- Apps that provide access to medical reference materials, such as drug formularies or medical dictionaries.
- Apps that offer educational information on health and medical topics, such as anatomy or disease management.
- Apps that provide patient-specific coaching or feedback on healthy habits, such as smoking cessation or stress management.
Keep in mind that even if an AI app is considered low risk and may be exempt from FDA clearance, it must still comply with other FDA regulations, such as good manufacturing practices and labeling requirements. Additionally, the FDA may still review and take action against AI apps that pose a risk to patient safety or violate other regulations. It’s always best to consult with the FDA or a regulatory consultant to determine whether an AI app is exempt from clearance and to ensure compliance with all relevant regulations.
What about AI apps used for education and research?
There are several examples of radiology image interpretation AI algorithms that may not require FDA clearance, including:
- Research-oriented algorithms: AI algorithms that are developed for research purposes, such as academic studies or clinical trials, may not require FDA clearance if they are not intended for clinical use.
- Educational algorithms: AI algorithms that are used for educational purposes, such as training radiology residents or medical students, may not require FDA clearance if they are not intended for clinical use.
- Non-diagnostic algorithms: Some AI algorithms are designed to provide additional information to the radiologist, such as image segmentation or annotation tools, and are not intended to provide a diagnostic result. These types of algorithms may not require FDA clearance.
How is radiology AI software classified?
The FDA regulates radiology AI software as medical devices and has developed a regulatory framework to ensure the safety and effectiveness of these products. It classifies medical devices, including radiological image software, into one of three classes (Class I, II, or III) based on the level of risk they pose to patient safety. The classification process evaluates the device’s intended use, technological characteristics, and the level of control necessary to ensure its safety and effectiveness. Here are the classifications for radiological image software:
- Class I: Radiological image software that poses minimal risk to patient safety and requires only general controls to ensure its safety and effectiveness. Examples include image storage and display software, and image enhancement software.
- Class II: Radiological image software that poses moderate risk to patient safety and requires special controls to ensure its safety and effectiveness. Examples include diagnostic software that analyzes medical images for abnormalities and provides diagnostic recommendations.
- Class III: Radiological image software that poses high risk to patient safety and requires premarket approval to ensure its safety and effectiveness. Examples include therapeutic software that provides treatment recommendations based on medical images.
It’s important to note that these classifications are subject to change and may vary depending on the specific radiological image software and its intended use. It’s always best to consult with the FDA or a regulatory consultant to determine the appropriate classification for a specific radiological image software and the regulatory requirements that apply.
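The three-class scheme above can be summarized as a simple lookup keyed on intended use. This is a teaching sketch under simplifying assumptions: the category names and the mapping are illustrative, and actual classification is determined by the FDA based on intended use, risk, and existing regulations, not a table like this.

```python
# Simplified illustration of the Class I/II/III scheme described above.
# The keys and mapping are hypothetical examples, not FDA product codes.
DEVICE_CLASS_BY_USE = {
    "image_storage_display": "Class I",          # minimal risk, general controls
    "image_enhancement": "Class I",
    "diagnostic_analysis": "Class II",           # moderate risk, special controls
    "therapeutic_recommendation": "Class III",   # high risk, premarket approval
}

def classify(intended_use: str) -> str:
    """Look up the illustrative device class for an intended use."""
    return DEVICE_CLASS_BY_USE.get(intended_use, "unknown (consult FDA)")
```

The fallback for an unrecognized use mirrors the article’s advice: when the classification is not obvious, consult the FDA or a regulatory expert rather than assume.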
The FDA Clearances and AI Apps
The US Food and Drug Administration (FDA) provides different types of clearances for radiology AI apps, depending on their intended use and level of risk. Here are the main types of FDA clearances for radiology AI apps:
- 510(k) Clearance: This clearance is required for radiology AI apps that are substantially equivalent to an existing legally marketed device. To obtain a 510(k) clearance, the AI app developer must demonstrate that the new device has the same intended use and technological characteristics as the existing device, and that it is at least as safe and effective.
- De Novo Clearance: This clearance is required for radiology AI apps that are not substantially equivalent to an existing legally marketed device, and for which there is no predicate device. To obtain a De Novo clearance, the AI app developer must demonstrate that the new device is safe and effective for its intended use.
- Premarket Approval (PMA): This clearance is required for radiology AI apps that are high-risk or novel, and for which general or special controls alone are not sufficient to ensure their safety and effectiveness. To obtain a PMA, the AI app developer must provide extensive clinical data demonstrating the safety and effectiveness of the device for its intended use.
- Breakthrough Device Designation: This designation is intended to speed up the development, assessment, and review of radiology AI apps that offer a breakthrough in medical technology. AI app developers must apply for this designation, and they must meet certain criteria, such as providing evidence that the device offers a more effective treatment or diagnosis for a life-threatening or irreversibly debilitating condition.
It’s important to note that these clearance processes are subject to change and may vary depending on the specific AI app and its intended use. It’s always best to consult with the FDA or a regulatory consultant for guidance on the appropriate clearance pathway for a specific radiology AI app.
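The pathway descriptions above reduce, very roughly, to a two-question decision: is the device high risk, and does a predicate device exist? The function below sketches that logic. It is a deliberate simplification: real pathway determination depends on FDA guidance, pre-submission meetings, and device-specific facts, and the Breakthrough Device Designation is a separate, optional program layered on top of these pathways.

```python
def clearance_pathway(high_risk: bool, has_predicate: bool) -> str:
    """Pick a clearance pathway from the simplified decision logic above.

    Illustrative only: two boolean flags cannot capture an actual
    FDA submission strategy.
    """
    if high_risk:
        return "PMA"        # high-risk/novel: premarket approval with clinical data
    if has_predicate:
        return "510(k)"     # substantially equivalent to a legally marketed device
    return "De Novo"        # lower risk, but no predicate exists
```

For example, a typical radiology AI app with an existing cleared predicate would head toward `clearance_pathway(False, True)`, i.e. a 510(k) submission.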
Where does radiology AI software fall?
Most AI software used in radiology falls under the Class II category, which includes devices that are higher risk than Class I devices but lower risk than Class III devices. The FDA requires Class II medical devices to undergo premarket review and clearance before they can be marketed in the United States. This review typically involves submitting a 510(k) premarket notification to the FDA, which includes information about the device’s intended use, design, performance, and safety.
The FDA’s review of radiology AI software focuses on several key areas, including the algorithm’s performance characteristics, clinical validation data, and user instructions. The agency also considers factors such as the intended use of the software, the imaging modality being used, and the patient population for whom the software is intended.
In addition to premarket review, the FDA conducts post-market surveillance to monitor the safety and effectiveness of radiology AI software after it has been cleared for marketing. This includes monitoring adverse events, inspecting manufacturing facilities, and reviewing post-market study data.
Overall, the FDA plays an important role in ensuring the safety and effectiveness of radiology AI software and other medical devices, and developers of these products must adhere to strict regulatory requirements to bring their products to market.
What about continuously learning AI apps?
The FDA’s approach to continuously learning AI apps is evolving as the field of AI in healthcare continues to advance. Currently, the FDA is taking a risk-based approach to regulating continuously learning AI apps, which takes into account the potential risks and benefits of the technology.
The FDA recognizes that continuously learning AI apps have the potential to improve patient outcomes by continuously refining their algorithms and improving their diagnostic accuracy. However, these apps also present unique regulatory challenges, as their performance may change over time as they are trained on new data.
To address these challenges, the FDA has released a discussion paper outlining its proposed framework for regulating continuously learning AI apps. The framework emphasizes the importance of transparency and real-world performance monitoring, and outlines a risk-based approach to premarket review and post-market surveillance.
Under the proposed framework, continuously learning AI apps would be subject to premarket review and clearance or approval by the FDA, and would be required to provide ongoing performance monitoring data to the FDA to ensure their continued safety and effectiveness. The FDA would also prioritize post-market surveillance of continuously learning AI apps, and would require manufacturers to report any changes to their algorithm or training data that could affect the app’s performance.
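The ongoing performance monitoring described above can be illustrated with a minimal drift check: compare recent real-world accuracy against a locked baseline and flag degradation. The window size, tolerance, and baseline below are illustrative assumptions, not values from any FDA framework.

```python
from collections import deque

class PerformanceMonitor:
    """Sketch of real-world performance monitoring for a learning model.

    Flags drift when rolling accuracy falls more than `tolerance`
    below the locked baseline. All parameters are hypothetical.
    """

    def __init__(self, baseline_accuracy: float, tolerance: float = 0.05,
                 window: int = 100):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, correct: bool) -> None:
        self.outcomes.append(1 if correct else 0)

    def drifted(self) -> bool:
        if not self.outcomes:
            return False
        current = sum(self.outcomes) / len(self.outcomes)
        return current < self.baseline - self.tolerance

monitor = PerformanceMonitor(baseline_accuracy=0.90)
for _ in range(50):
    monitor.record(True)
for _ in range(50):
    monitor.record(False)  # simulated performance degradation
```

In a real deployment, a drift signal like this would feed the kind of reporting to the manufacturer and the FDA that the proposed framework envisions.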
Overall, the FDA’s approach to continuously learning AI apps is still evolving, and will likely continue to be refined as the technology advances and new regulatory challenges emerge. However, the FDA’s proposed framework represents an important step forward in regulating these innovative technologies in a way that promotes patient safety and improves healthcare outcomes.
What should physicians and radiologists keep in mind when considering AI?
- AI is not infallible: While AI has the potential to improve diagnostic accuracy and reduce errors, it is not perfect. AI algorithms may have limitations or biases that can affect their performance, and clinicians should be aware of these limitations and interpret AI results accordingly.
- AI is a tool, not a replacement for clinical judgement: AI can provide valuable insights and help clinicians make more informed decisions, but it should not be relied on as a substitute for clinical judgement. Physicians and radiologists should always use their own expertise and judgement when making clinical decisions, and should interpret AI results in the context of the patient’s specific clinical situation.
- AI requires high-quality data: AI algorithms rely on high-quality data to provide accurate results. Clinicians should ensure that the data used to train and validate AI algorithms is representative and free from bias, and should be cautious when using AI algorithms that have not been validated on their specific patient population.
- AI is not a substitute for patient interaction: AI can provide valuable insights into patient data, but it cannot replace the importance of patient interaction and communication. Clinicians should always consider the patient’s individual needs and preferences when making clinical decisions, and should use AI as a tool to enhance, rather than replace, the patient-clinician relationship.
- Ethical and legal considerations: Clinicians should be aware of the ethical and legal considerations involved in using AI, including issues related to privacy, informed consent, and liability. They should ensure that they are using AI in accordance with relevant regulations and guidelines, and should seek legal advice if necessary.
Decision criteria to consider before implementing AI:
Implementing AI in medical imaging requires careful consideration of several decision criteria. Some of the key criteria to consider are:
- Regulatory compliance: The use of AI in medical imaging must comply with regulatory requirements such as HIPAA, FDA, and GDPR, and compliance must be verified before implementation. Other regional regulatory frameworks, such as Health Canada, CE marking in the EU, and UK MHRA requirements, should be considered depending on the country in which the AI is being deployed.
- Data quality and training: The accuracy and reliability of AI algorithms depend on the quality and quantity of data used to train them. Before implementing AI in medical imaging, it is essential to confirm that sufficient high-quality data was used to train and validate the AI app.
- Clinical utility: The AI algorithm should provide clinically relevant information that may help improve patient care. The value of the AI algorithm should be weighed against the cost and effort required to implement it.
- Ethics and transparency: AI algorithms should be designed with ethics and transparency in mind. This includes ensuring that the algorithm is unbiased, transparent, and explainable.
- Technical requirements: Implementing AI in medical imaging requires significant technical expertise and infrastructure. The organization should assess whether it has the technical capabilities required to implement AI successfully.
- Risk management: AI algorithms must be designed to manage risks associated with misdiagnosis or misinterpretation of medical images. The organization must have a risk management plan in place to mitigate such risks. It is also important to evaluate technical risks associated with the platform on which AI apps are hosted.
- Cost-effectiveness: The cost of implementing AI in medical imaging must be justified by the benefits it provides. This includes considering the cost of acquiring and maintaining the infrastructure, training the staff, and ongoing support and maintenance costs.
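The criteria above can be operationalized as a simple go/no-go checklist. The criterion names mirror the list, but the all-must-pass rule is a simplifying assumption: real procurement decisions weigh and trade off criteria rather than treating each as a hard gate.

```python
# Go/no-go sketch of the decision criteria listed above.
# Treating every criterion as a hard requirement is an assumption
# made for illustration.
CRITERIA = [
    "regulatory_compliance",
    "data_quality_and_training",
    "clinical_utility",
    "ethics_and_transparency",
    "technical_requirements",
    "risk_management",
    "cost_effectiveness",
]

def ready_to_implement(assessment: dict):
    """Return (go, failed_criteria) for a dict of criterion -> bool."""
    failed = [c for c in CRITERIA if not assessment.get(c, False)]
    return (not failed, failed)

# Example: everything passes except cost-effectiveness.
example = {c: True for c in CRITERIA}
example["cost_effectiveness"] = False
go, gaps = ready_to_implement(example)
```

Listing the specific failed criteria, rather than returning a bare yes/no, makes the checklist useful for remediation planning.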
Considering these decision criteria before implementing AI in medical imaging can help ensure that the implementation is successful, beneficial, and compliant with relevant regulations.
This article provides a summary based on publicly available information and data. It is important to consult with qualified professionals and experts in the relevant fields for advice specific to your situation. Always verify the accuracy and applicability of any information before making decisions or taking actions based on it.
References:
- FDA Radiological Health: Digital Health Software Precertification (Pre-Cert) Program: https://www.fda.gov/medical-devices/digital-health-center-excellence/digital-health-software-precertification-pre-cert-program
- FDA Radiological Health: Computer-Assisted Detection Devices Applied to Radiology Images and Radiology Device Data – Premarket Notification [510(k)] Submissions: https://www.fda.gov/regulatory-information/search-fda-guidance-documents/computer-assisted-detection-devices-applied-radiology-images-and-radiology-device-data-premarket
- FDA Radiological Health: Classify Your Medical Device: https://www.fda.gov/radiation-emitting-products/radiation-emitting-products-and-procedures/medical-imaging#classify
- FDA Radiological Health: Medical Device Reporting (MDR): https://www.fda.gov/medical-devices/medical-device-reporting-mdr/how-report-medical-device-problems
- FDA Radiological Health: Postmarket Surveillance: https://www.fda.gov/radiation-emitting-products/radiation-emitting-products-and-procedures/medical-imaging#postmarket
- Radiology: Computer-aided detection and diagnosis in radiology: https://pubs.rsna.org/doi/full/10.1148/radiol.2015151169
- Journal of Digital Imaging: Artificial Intelligence in Medical Imaging: A Guide to FDA Approval Process for AI-Based CADe/CDSS Tools: https://link.springer.com/article/10.1007/s10278-018-0075-5