FUTURE-AI: A global framework to build trust in healthcare AI

AI adoption in healthcare is steadily increasing, but trust remains a persistent challenge. Concerns over bias, safety, and transparency are frequently raised in this regard.

A recent paper from an international team of 117 experts across 50 countries introduces the FUTURE-AI framework, designed to make AI in healthcare fairer, more reliable, and more usable. Built on six core principles (fairness, universality, traceability, usability, robustness, and explainability), the FUTURE-AI framework outlines 30 best practices for developing and deploying AI that works across diverse populations and clinical settings.

The goal? Safer, more transparent AI that clinicians and patients can trust.

The framework appears to combine requirements from the ALTAI guidelines, the AI Act, the MDR, and the FAIR principles, but is now internationally backed by authors from 50 countries. Could this be the start of AI legislation outside Europe as well?

Read the full paper to explore the implications of this framework for AI adoption in radiology.

FUTURE-AI: international consensus guideline for trustworthy and deployable artificial intelligence in healthcare

BMJ, 2025

Abstract

Despite major advances in artificial intelligence (AI) research for healthcare, the deployment and adoption of AI technologies remain limited in clinical practice. This paper describes the FUTURE-AI framework, which provides guidance for the development and deployment of trustworthy AI tools in healthcare. The FUTURE-AI Consortium was founded in 2021 and comprises 117 interdisciplinary experts from 50 countries representing all continents, including AI scientists, clinical researchers, biomedical ethicists, and social scientists. Over a two-year period, the FUTURE-AI guideline was established through consensus based on six guiding principles: fairness, universality, traceability, usability, robustness, and explainability. To operationalise trustworthy AI in healthcare, a set of 30 best practices was defined, addressing technical, clinical, socioethical, and legal dimensions. The recommendations cover the entire lifecycle of healthcare AI, from design, development, and validation to regulation, deployment, and monitoring.