TrustworthyAI4Health 2026: Toward Trustworthy AI Modeling for Computational Healthcare
EMBL Heidelberg, Germany, March 9, 2026
| Submission link | https://easychair.org/conferences/?conf=trustworthyai4health |
| Abstract registration deadline | January 26, 2026 |
| Submission deadline | January 26, 2026 |
The TrustworthyAI4Health Workshop, co-located with the AI and Biology Conference, aims to advance private, fair, and reliable AI models for healthcare by emphasizing trustworthiness in model design, evaluation, and deployment. It unites theoretical, clinical, and benchmarking perspectives to promote responsible and transparent AI in medicine. Visit the workshop website for registration and updates.
We envision this workshop as a collaborative forum for advancing trustworthy AI in healthcare, uniting machine learning researchers, privacy and fairness specialists, experts in data governance and ethics, computational health and biology researchers, bioinformaticians, and other interdisciplinary practitioners to generate new insights and practical pathways forward.
Submission Guidelines
Poster Track: Workshops are ideal venues for presenting work in progress. To encourage evolving research, we invite submissions:
- in the form of a 1-page extended abstract (excluding references) describing ongoing work.
List of Topics
We invite submissions on any of the topics listed below, as well as related areas, that advance reliable, clinically aligned AI systems across diverse data modalities and healthcare environments.
- Privacy & Security: Differential privacy, private synthetic data generation, and healthcare-specific adversarial threat modeling.
- Fairness: Techniques and applications that identify and mitigate disparities across demographic groups, clinical subpopulations, and comorbidity profiles.
- Interpretability & Explainability: Methods and applications grounded in biological or biomedical knowledge and model logic.
- Robustness & Uncertainty: Solutions for noisy, sparse, high-dimensional, or distributionally shifting clinical datasets, with calibrated uncertainty estimation.
- Human-AI Trust: Trust calibration, error communication, and evaluations of AI decision support in high-stakes settings such as intensive care or oncology.
- Benchmarks & Evaluation: Model-agnostic trustworthiness metrics, regulatory-informed benchmarks, and frameworks that articulate trade-offs among privacy, fairness, interpretability, uncertainty, and utility.
