
Both regulatory agencies recognize the potential of this technology to transform biomedical innovation and hope to help lay the groundwork for joint work among stakeholders in this field.
The pharmaceutical industry sees artificial intelligence as a major opportunity to transform healthcare, one that requires clear regulatory frameworks aligned with today's digital reality.
Image: Decalogue of good AI practices from the FDA and EMA. Source: Farmaindustria.es
Artificial intelligence (AI) is already revolutionizing biomedical research, and its potential to transform healthcare and improve people’s health is enormous. The pharmaceutical industry is one of the sectors at the forefront of the use of these technologies, always within the framework of ethical principles and ensuring that the technology is used for the benefit of society and healthcare at all stages of the pharmaceutical sector’s value chain.
That is why the industry values the document published jointly by the European Medicines Agency (EMA) and its US counterpart, the Food and Drug Administration (FDA), which recognizes the potential of AI and establishes ten principles that the agencies hope will lay the foundations for good practices and encourage growth in this field.
The document, Principles of Good Practice with AI in Drug Development, defines AI as the set of technologies used to generate or analyze evidence throughout the entire drug life cycle, from the preclinical and clinical phases to manufacturing, marketing, and beyond.
The agencies explain that these technologies must uphold the quality, efficacy, and safety requirements demanded of medicines and that the use of AI is expected to reduce time to market, strengthen regulatory excellence and pharmacovigilance, and even decrease reliance on animal testing by improving predictions of efficacy and toxicity.
The document establishes 10 principles that the various stakeholders involved can work on, from those who develop medicines to applicants or holders of marketing authorizations. It also emphasizes the importance of strong international partnerships to promote responsible innovation.
The 10 principles:
1. Human-centered design
The development and use of AI technologies must be aligned with ethical and human-centered values.
2. Risk-based approach
The level of validation and oversight of an AI system must be proportional to the risk posed by the specific model in its context of use.
3. Adherence to standards
AI technologies must comply with relevant legal, ethical, technical, scientific, cybersecurity, and regulatory standards, including good practices (GxP).
4. Clear context of use
AI systems must have a well-established context of use, function, and scope.
5. Multidisciplinary approach
The development of all AI systems must be approached from a multidisciplinary perspective.
6. Data governance and documentation
The origin of data sources, their processing, and analytical decisions must be documented in a detailed, traceable, and verifiable manner, in accordance with best practices. This ensures proper data governance, protection, and confidentiality throughout the technology’s life cycle.
7. Model design and development
The development of AI technologies must follow best practices in model and system design and software engineering to leverage data in a way that is interpretable, understandable, and predictive. Good model and system development promotes transparency, reliability, generalization, and robustness of AI technologies, contributing to patient safety.
8. Risk-based performance evaluation
Risk-based performance evaluations analyze the entire system, including human-AI interactions, using data and metrics appropriate to the intended use context and supported by predictive performance validation through appropriately designed testing and evaluation methods.
9. Lifecycle management
Risk-based quality management systems are implemented throughout the entire life cycle of AI technologies, including support for incident detection, assessment, and resolution. AI technologies undergo scheduled monitoring and periodic reassessments to ensure adequate performance and to detect issues such as data drift (a minimal illustration of such a drift check follows the list).
10. Clear and essential information
Plain language should be used to present clear, accessible, and contextually relevant information to the target audience, including users and patients, about the context of use, performance, limitations, underlying data, and updates, so that the AI technology is interpretable and understandable.
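To make principle 9's mention of data drift more concrete, the following is a minimal sketch, not part of the EMA/FDA document, of the kind of scheduled check a monitoring process might run. It assumes tabular model inputs and uses a per-feature two-sample Kolmogorov-Smirnov test; the alert threshold and the synthetic data are illustrative assumptions only.

```python
# Illustrative sketch only: a simple data-drift check of the kind principle 9
# alludes to. Feature layout, test choice, and threshold are assumptions made
# for illustration, not requirements from the EMA/FDA document.
import numpy as np
from scipy.stats import ks_2samp

DRIFT_P_VALUE = 0.01  # assumed alert threshold


def check_drift(reference: np.ndarray, current: np.ndarray) -> dict:
    """Compare each feature column of newly collected data against the
    reference (validation-time) distribution and flag apparent drift."""
    flags = {}
    for col in range(reference.shape[1]):
        stat, p_value = ks_2samp(reference[:, col], current[:, col])
        flags[col] = {
            "statistic": stat,
            "p_value": p_value,
            "drifted": p_value < DRIFT_P_VALUE,
        }
    return flags


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.normal(0.0, 1.0, size=(5000, 3))  # data seen at validation time
    current = rng.normal(0.0, 1.0, size=(2000, 3))    # newly collected data
    current[:, 2] += 0.5                              # simulate drift in one feature
    for col, result in check_drift(reference, current).items():
        print(col, result)
```

In practice, the choice of drift metric, alert thresholds, and reassessment cadence would be set within the risk-based quality management system that the principle describes.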