Amir Aminifar

Gender bias and fairness

Q: How do you understand and approach the issue of gender bias in predictive models or AI systems — and what do you see as the main challenges in addressing it?

A: Fairness issues and bias frequently arise in AI-assisted decision making. The simplest example is an AI model that works more accurately for one gender or group than for others. There are several challenges in this domain, e.g., identifying the source of bias, which may be inherent in society or in the collected dataset.
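
One simple way to make this kind of disparity concrete is to compare accuracy across groups. The sketch below is a minimal illustration, not a full fairness audit; the function name and the toy labels, predictions, and gender attribute are invented for the example.

```python
import numpy as np

def group_accuracy_gap(y_true, y_pred, group):
    """Per-group accuracy and the largest gap between any two groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    accuracies = {
        str(g): float((y_pred[group == g] == y_true[group == g]).mean())
        for g in np.unique(group)
    }
    gap = max(accuracies.values()) - min(accuracies.values())
    return accuracies, gap

# Invented toy data: the model is perfect for group "F" but only 50%
# accurate for group "M" -- the kind of disparity described above.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 0, 1, 1, 1]
group  = ["F", "F", "F", "F", "M", "M", "M", "M"]
print(group_accuracy_gap(y_true, y_pred, group))
# -> ({'F': 1.0, 'M': 0.5}, 0.5)
```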

Data and representation

Q: To what extent do you think current datasets and modelling practices adequately represent gender and other social differences — and what are the implications when they don’t?

A: This is pathology-dependent, but many datasets and modelling practices fall short of adequately representing gender and other social differences. This can lead to fairness and bias issues and undermine trust more generally.

Robustness & Generalizability

Q: What does robustness mean in your field, and how do you ensure that models remain valid and trustworthy across different populations or contexts?

A: Robustness means that a small change in an attribute (e.g., a small change in a patient's weight) should not change the AI-predicted health outcome. To ensure robustness and generalizability, AI models must take such dimensions into consideration throughout their entire lifetime, e.g., during the training process.
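
One informal way to probe this property is to perturb a single attribute and check that the prediction does not flip. The sketch below is a heuristic empirical probe, not a formal robustness certificate; it assumes a classifier with a scikit-learn-style `predict` function, and all names and parameters are illustrative.

```python
import numpy as np

def is_locally_robust(predict, x, feature_idx, epsilon, n_trials=100, seed=0):
    """Empirically check that small perturbations of one attribute
    (e.g., a patient's weight) never flip the predicted outcome for x.

    `predict` is assumed to map a 2-D array of inputs to class labels.
    """
    rng = np.random.default_rng(seed)
    baseline = predict(x.reshape(1, -1))[0]
    for _ in range(n_trials):
        x_pert = x.astype(float).copy()
        x_pert[feature_idx] += rng.uniform(-epsilon, epsilon)
        if predict(x_pert.reshape(1, -1))[0] != baseline:
            return False  # a small change flipped the prediction
    return True
```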

Trust & Interpretability

Q: Which factors do you think build (or undermine) trust in AI models — both among researchers and end-users — and how can interpretability play a role in this?

A: Interpretability may help increase our confidence in the decisions made by AI models, yet it may also create only a perception of trust. As such, caution should be exercised in drawing conclusions when it comes to interpretability in particular and trust in AI decisions in general. This is one of the main challenges for the medical community to overcome.
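
To illustrate the kind of explanation at stake, the sketch below computes a simple permutation-style feature importance; this is one common interpretability technique chosen for illustration, not a method endorsed in the interview. The caveat in the answer applies directly: a plausible-looking importance score is not evidence that the decision is correct.

```python
import numpy as np

def permutation_importance(predict, X, y, seed=0):
    """Accuracy drop when each feature column is shuffled in turn.

    Larger drops suggest the model relies more on that feature. Such
    scores can make a model feel interpretable, but they do not by
    themselves validate the underlying decision logic.
    """
    rng = np.random.default_rng(seed)
    base = (predict(X) == y).mean()
    drops = []
    for j in range(X.shape[1]):
        X_shuf = X.copy()
        X_shuf[:, j] = rng.permutation(X_shuf[:, j])
        drops.append(base - (predict(X_shuf) == y).mean())
    return np.array(drops)
```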

Ethics & Responsibility

Q: Who should hold responsibility for ensuring fairness, robustness, and transparency in predictive modelling — and what mechanisms or practices would strengthen that accountability?

A: Ensuring fairness, robustness, and transparency in predictive AI is a responsibility shared by model developers and data scientists, institutions and organizations, and regulators and policymakers. A systemic approach to accountability in predictive AI is essential, ensuring alignment with regulatory and legal frameworks such as the GDPR and the EU AI Act.