{"id":1167,"date":"2025-12-05T21:22:20","date_gmt":"2025-12-05T19:22:20","guid":{"rendered":"https:\/\/florence.forskning.eu\/q-a-about-gender-bias-robustness-and-interpretability\/"},"modified":"2025-12-17T13:36:44","modified_gmt":"2025-12-17T11:36:44","slug":"q-a-about-gender-bias-robustness-and-interpretability","status":"publish","type":"post","link":"https:\/\/florence.forskning.eu\/en\/q-a-about-gender-bias-robustness-and-interpretability\/","title":{"rendered":"Q and A about gender bias, robustness and interpretability in AI-models"},"content":{"rendered":"<link rel=\"stylesheet\" type=\"text\/css\" href=\"https:\/\/florence.forskning.eu\/wp-content\/themes\/oak-theme\/dist\/template-parts\/blocks\/hero-project\/hero-project.css\">\n\t<section class=\"block block--hero-project spacing--none\">\n\t\t<div  class=\"hero-project\">\n\t\t\t\t<div class=\"hero-project__wrapper no-logo \">\n\t\t\t\t<div class=\"grid\">\n\t\t\t<div class=\"col-lg-8 offset-lg-2\">\n\t\t\t\t\t\t\t\t<h1 class=\"hero-project__title\">\n\t\t\t\t\tQ&amp;A about gender bias, robustness and interpretability in AI-models\n\t\t\t\t<\/h1>\n\t\t\t\t\t\t\t\t\t<p class=\"hero-project__description\">\n\t\t\t\t\t\tQ&amp;A with associate senior lecturer Amir Aminifar from Lund University, project partner.\n\t\t\t\t\t<\/p>\n\t\t\t\t\t\t\t\t<div class=\"hero-project__container\">\n\t\t\t\t\t<div class=\"hero-project__manager\">\n\t\t\t\t\t\t<div class=\"manager__avatar\">\n\t\t\t\t\t\t\t<img decoding=\"async\" src=\"https:\/\/florence.forskning.eu\/wp-content\/themes\/oak-theme\/assets\/images\/placeholder.png\" data-src=\"https:\/\/florence.forskning.eu\/wp-content\/uploads\/sites\/6\/2024\/11\/FLORENCE_GA_nov24_6-150x150.jpg\" data-srcset=\"\" sizes=\"(max-width: 100vw) 200px, 200px\" data-loading=\"lazy\" alt=\"\" \/>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t\t\t<span class=\"manager__name\">\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t<\/span>\n\t\t\t\t\t\t<span class=\"manager__position\">\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t<\/span>\n\t\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/div>\n\t\t\t<div class=\"grid\">\n\t\t\t<div class=\"col-lg-8 offset-lg-2\">\n\t\t\t\t<div class=\"hero-project__image\">\n\t\t\t\t\t<img decoding=\"async\" src=\"https:\/\/florence.forskning.eu\/wp-content\/themes\/oak-theme\/assets\/images\/placeholder.png\" data-src=\"https:\/\/florence.forskning.eu\/wp-content\/uploads\/sites\/6\/2024\/11\/FLORENCE_GA_nov24_6-1024x682.jpg\" data-srcset=\"https:\/\/florence.forskning.eu\/wp-content\/uploads\/sites\/6\/2024\/11\/FLORENCE_GA_nov24_6-1024x682.jpg 1024w, https:\/\/florence.forskning.eu\/wp-content\/uploads\/sites\/6\/2024\/11\/FLORENCE_GA_nov24_6-300x200.jpg 300w, https:\/\/florence.forskning.eu\/wp-content\/uploads\/sites\/6\/2024\/11\/FLORENCE_GA_nov24_6-768x512.jpg 768w, https:\/\/florence.forskning.eu\/wp-content\/uploads\/sites\/6\/2024\/11\/FLORENCE_GA_nov24_6.jpg 1303w\" sizes=\"(max-width: 100vw) 1600px, 1600px\" data-loading=\"lazy\" alt=\"\"\n\t\t\t\t\t\talt=\"\" \/>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t<\/div>\n\t<\/section>\n\n\n<link rel=\"stylesheet\" type=\"text\/css\" href=\"https:\/\/florence.forskning.eu\/wp-content\/themes\/oak-theme\/dist\/template-parts\/blocks\/quote\/quote.css\">\n\t<section class=\"block block--quote spacing--none\">\n\t\t<div  class=\"quote\">\n\t\t\t\t\t\t\t<div class=\"grid \">\n\t\t\t\t\t\t\t\t<div class=\"col-lg-10 col-xl-8 offset-lg-1 offset-xl-2\">\n\t\t\t<div class=\"quote__detailed\">\n\t\t\t\t<i class=\"quote__icon\"><svg 
width=\"42\" height=\"32\" viewBox=\"0 0 42 32\" fill=\"none\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\">\n<path d=\"M23.991 32L23.0465 30.5948C25.3973 21.4398 29.1334 11.2415 34.2549 0H42C40.7406 4.89687 39.4813 10.4325 38.2219 16.6068C37.0045 22.7385 36.1229 27.8696 35.5772 32H23.991ZM0.755622 32L0 30.5948C0.923538 26.7625 2.41379 21.9933 4.47076 16.2874C6.52774 10.5815 8.73163 5.15236 11.0825 0H18.8276C16.057 10.8157 13.8951 21.4824 12.3418 32H0.755622Z\" fill=\"currentColor\"\/>\n<\/svg>\n<\/i>\n\t\t\t\t<p class=\"quote__quote\">\n\t\t\t\t\tFairness issue and bias can frequently arise in AI-assisted decision making. The simplest example is that an AI model works more accurately for a certain gender\/group than others. There are several challenges in this domain, e.g., identifying the source of bias, which could be inherent in the society or the dataset collected.\u00a0\n\t\t\t\t<\/p>\n\t\t\t\t<span class=\"quote__author\">Amir Aminifar<\/span>\n\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t<\/section>\n\n\n<link rel=\"stylesheet\" type=\"text\/css\" href=\"https:\/\/florence.forskning.eu\/wp-content\/themes\/oak-theme\/dist\/template-parts\/blocks\/paragraph\/paragraph.css\">\n\t<section class=\"block block--paragraph spacing--none\">\n\t\t<div  class=\"paragraph\">\n\t\t\t\t\t\t\t<div class=\"grid \">\n\t\t\t\t\t\t<div class=\"col-lg-8 offset-lg-2\">\n\t\t<p data-start=\"55\" data-end=\"79\"><strong data-start=\"55\" data-end=\"79\">Gender bias og fairness<\/strong><\/p>\n<p data-start=\"81\" data-end=\"270\"><strong data-start=\"81\" data-end=\"95\">Q:<\/strong> How do you understand and approach the issue of gender bias in predictive models or AI systems \u2014 and what do you see as the main challenges in addressing it?<\/p>\n<p data-start=\"272\" data-end=\"621\"><strong data-start=\"272\" data-end=\"281\">A:<\/strong> Fairness issue and bias can frequently arise in AI-assisted decision making. The simplest example is that an AI model works more accurately for a certain gender\/group than others. There are several challenges in this domain, e.g., identifying the source of bias, which could be inherent in the society or the dataset collected.  <\/p>\n<p data-start=\"628\" data-end=\"654\"><strong data-start=\"628\" data-end=\"654\">Data and representation<\/strong><\/p>\n<p data-start=\"656\" data-end=\"852\"><strong data-start=\"656\" data-end=\"670\">Q:<\/strong> To what extent do you think current datasets and modelling practices adequately represent gender and other social differences \u2014 and what are the implications when they don\u2019t?<\/p>\n<p data-start=\"854\" data-end=\"1111\"><strong data-start=\"854\" data-end=\"863\">A: This is pathology dependent, but generally many datasets and modeling practices often fall short of adequately representing gender and other social differences. This can lead to fairness and bias issues and hinder trust in general. 
Robustness & Generalizability

Q: What does robustness mean in your field, and how do you ensure that models remain valid and trustworthy across different populations or contexts?

A: Robustness means that a small change in an attribute (for example, a small change in a patient's weight) should not change the AI-predicted health outcome. To ensure robustness and generalizability, AI models need to take such dimensions into consideration throughout their entire lifetime, for example during the training process.
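The robustness notion above, that a small change in one attribute such as a patient's weight should not flip the predicted outcome, can be probed with a simple perturbation test. The sketch below is a minimal illustration under assumed names: `predict` stands for any black-box classifier, and the toy threshold model, feature layout (age, weight, blood pressure) and epsilon values are invented for the example rather than taken from the interview.

```python
import numpy as np

def is_locally_robust(predict, x, feature_idx, epsilon):
    """Check that nudging one attribute by +/- epsilon leaves the predicted label unchanged."""
    x = np.asarray(x, dtype=float)
    baseline = predict(x)
    for delta in (-epsilon, epsilon):
        x_perturbed = x.copy()
        x_perturbed[feature_idx] += delta
        if predict(x_perturbed) != baseline:
            return False
    return True

def toy_model(v):
    # Toy stand-in classifier: "high risk" (1) whenever the weight attribute exceeds 80 kg.
    return int(v[1] > 80.0)

patient = [45.0, 80.5, 120.0]  # age, weight in kg, systolic blood pressure

print(is_locally_robust(toy_model, patient, feature_idx=1, epsilon=1.0))  # False: 79.5 kg flips the label
print(is_locally_robust(toy_model, patient, feature_idx=1, epsilon=0.1))  # True: the prediction is stable
```

In practice such checks would be run over many records and attributes, and keeping the property over the model's whole lifetime, as noted above, means repeating them whenever the model is retrained or updated.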
Trust & Interpretability

Q: Which factors do you think build (or undermine) trust in AI models, both among researchers and end-users, and how can interpretability play a role in this?

A: Interpretability may help increase our confidence in the decisions made by AI models, yet it may also create only a perception of trust. Caution should therefore be exercised when drawing conclusions about interpretability in particular and about trust in AI decisions in general. This is one of the main challenges for the medical community to overcome.

Ethics & Responsibility

Q: Who should hold responsibility for ensuring fairness, robustness, and transparency in predictive modelling, and what mechanisms or practices would strengthen that accountability?

A: Ensuring fairness, robustness, and transparency in predictive AI is a shared responsibility of model developers and data scientists, institutions and organizations, as well as regulators and policymakers. A systemic approach to accountability in predictive AI is essential, ensuring alignment with regulatory and legal frameworks such as the GDPR and the EU AI Act.