
The integration of AI into healthcare and medical research brings several ethical considerations that need to be addressed. These considerations revolve around four primary challenges: informed consent, safety and transparency, algorithmic fairness and biases, and data privacy.
Informed Consent: As AI becomes more prevalent in clinical practice, it is crucial to examine the circumstances under which the principles of informed consent should be applied. Clinicians must consider their responsibility to educate patients about the AI systems being used, including the kind of data inputs and the possibility of biases or shortcomings in the data. Questions also arise around the level of transparency required and whether clinicians need to disclose that they cannot fully interpret the AI's recommendations.
Safety and Transparency: AI systems, especially those using "black-box" algorithms, can be difficult for clinicians to understand fully. It is essential to determine how transparency can be achieved in such cases. While positive results from randomized trials may serve as sufficient demonstrations of safety and effectiveness, striking a balance between the product's safety and the level of transparency required is a challenge that must be addressed.
Algorithmic Fairness and Biases: AI has the potential to democratize healthcare by bringing expertise to remote areas. However, any ML system will only be as trustworthy, effective, and fair as the data it is trained on. AI therefore carries a risk of bias and discrimination, making it vital for AI makers to recognize this risk and minimize potential biases at every stage of the product development process.
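One concrete way to check for such biases is to break model performance down by patient subgroup rather than reporting a single aggregate metric. The sketch below is purely illustrative: the predictions, labels, and group labels are synthetic, and a real audit would use clinically meaningful cohorts and multiple metrics beyond accuracy.

```python
# Minimal sketch of a per-group fairness audit, assuming we already have
# model predictions, true labels, and a sensitive attribute per patient.
# All data below is synthetic and purely illustrative.

def group_accuracy(y_true, y_pred, groups):
    """Return accuracy computed separately for each group."""
    totals, correct = {}, {}
    for t, p, g in zip(y_true, y_pred, groups):
        totals[g] = totals.get(g, 0) + 1
        correct[g] = correct.get(g, 0) + (t == p)
    return {g: correct[g] / totals[g] for g in totals}

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

acc = group_accuracy(y_true, y_pred, groups)
disparity = max(acc.values()) - min(acc.values())
print(acc, disparity)
```

A large disparity between the best- and worst-served group is a signal to revisit the training data or the model before deployment, not a complete fairness analysis in itself.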
Data Privacy: With the increasing use of AI in healthcare, protecting patient data and ensuring privacy become paramount. As AI systems rely on large amounts of data to learn and perform tasks, it is crucial to address issues related to data quality, availability, and potential biases. Healthcare providers need to ensure that all data used by AI systems is representative, accurate, and secure.
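A common first step toward that goal is pseudonymizing direct identifiers before records ever reach an ML pipeline. The sketch below uses a keyed hash (HMAC-SHA256) for this; the record fields and identifier format are invented for illustration, and real deployments would also address re-identification risk in the remaining fields.

```python
import hashlib
import hmac
import secrets

# Minimal sketch of pseudonymizing a patient identifier with a keyed hash.
# In practice the key would be loaded from a secure store held by the data
# custodian and never shipped alongside the dataset.
key = secrets.token_bytes(32)

def pseudonymize(patient_id: str) -> str:
    """Map an identifier to a stable pseudonym; irreversible without the key."""
    return hmac.new(key, patient_id.encode(), hashlib.sha256).hexdigest()

# Hypothetical record: replace the identifier, keep the clinical fields.
record = {"patient_id": "MRN-00421", "age": 57, "glucose": 6.1}
safe_record = {**record, "patient_id": pseudonymize(record["patient_id"])}
print(safe_record)
```

Using a keyed hash rather than a plain hash matters: the same identifier always maps to the same pseudonym (so records can still be linked), but without the key an attacker cannot confirm guesses by hashing candidate identifiers.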
Addressing these ethical considerations is essential for the successful and responsible integration of AI into healthcare and medical research. Collaboration among stakeholders, including AI makers, clinicians, patients, ethicists, and legislators, is crucial to ensuring that AI is implemented in a manner that prioritizes patient well-being and upholds ethical principles.

ML and AI are being integrated into biomedicine, particularly in digital health, by leveraging the vast and complex biomedical data generated through high-throughput technologies such as genome-wide sequencing, medical imaging, and drug perturbation screens. Researchers apply advanced ML techniques, including deep neural networks, to these datasets to perform tasks like automated disease classification, digital image recognition, and virtual drug screening with unprecedented accuracy. This integration enhances our understanding of disease signatures and healthy baselines, paving the way for innovative treatments and personalized healthcare approaches. AI-driven methods are proving transformative in infectious diseases and other complex conditions where traditional single-gene or protein biomarkers are insufficient. By processing and interpreting large, diverse datasets, AI can provide precise diagnostics, optimize treatment strategies, and predict disease progression, ultimately leading to more personalized and proactive healthcare.
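At its core, automated disease classification of the kind described above is supervised learning on patient features. The sketch below trains a logistic-regression classifier by gradient descent on synthetic "biomarker" measurements; real pipelines use deep networks and far richer data, and the feature values here are invented for illustration.

```python
import math
import random

random.seed(0)

def make_patient(diseased):
    # Two hypothetical biomarker levels; diseased patients trend higher.
    base = 1.0 if diseased else 0.0
    return [base + random.gauss(0, 0.3), base + random.gauss(0, 0.3)], diseased

data = [make_patient(i % 2 == 1) for i in range(200)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Train weights with plain stochastic gradient descent on log loss.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(50):
    for x, y in data:
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = p - (1 if y else 0)
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

def predict(x):
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b) >= 0.5

accuracy = sum(predict(x) == y for x, y in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

The deep neural networks mentioned above generalize this same recipe: more layers, learned feature representations, and vastly larger inputs such as genome-wide sequences or medical images, but still parameters fitted to labeled examples.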

The collaboration between AI and systems biology is advancing precision medicine by enabling medical interventions customized to each patient's genetic composition, environmental influences, and lifestyle factors. By processing and interpreting large, diverse datasets, AI can provide precise diagnostics, optimize treatment strategies, and predict disease progression. This multidisciplinary approach fosters collaboration among experts in fields such as genomics, proteomics, and clinical data, helping ensure that AI models are robust, reliable, and ethically sound.