With the widespread adoption of machine learning in the real world, the impact of discriminatory bias has attracted attention. In recent years, various methods for mitigating bias have been proposed. However, most of them do not consider intersectional bias, which produces unfair situations where people belonging to specific subgroups of a protected group are treated worse once multiple sensitive attributes are taken into account.
To mitigate this kind of bias, an Intersectional Fairness method has been proposed, published at https://doi.org/10.1007/978-3-030-87687-6_5 (see also https://doi.org/10.48550/arXiv.2010.13494 for the arXiv version). Intersectional Fairness extends fairness-aware machine learning for binary classification by comparing every pair of subgroups defined by the sensitive attributes.
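To make the pairwise-subgroup idea concrete, here is a minimal sketch (not the authors' implementation and not an existing AIF360 API) of measuring the worst-case disparity across intersectional subgroups. It assumes a pandas DataFrame with hypothetical sensitive columns `race` and `sex` and a binary prediction column `y_pred`:

```python
import itertools
import pandas as pd

def worst_case_subgroup_ratio(df, sensitive_cols, pred_col):
    """Worst-case ratio of positive-prediction rates over every pair
    of intersectional subgroups (1.0 means all subgroups are equal)."""
    # Subgroups are the cross-product of the sensitive attribute values.
    rates = df.groupby(sensitive_cols)[pred_col].mean()
    worst = 1.0
    for (_, r1), (_, r2) in itertools.combinations(rates.items(), 2):
        if r1 == 0 or r2 == 0:
            continue  # skip degenerate subgroups in this sketch
        ratio = min(r1 / r2, r2 / r1)  # symmetric, disparate-impact-style
        worst = min(worst, ratio)
    return worst

# Hypothetical usage with made-up data:
df = pd.DataFrame({
    "race":   ["A", "A", "B", "B", "A", "B"],
    "sex":    ["F", "M", "F", "M", "M", "F"],
    "y_pred": [1, 0, 1, 1, 1, 0],
})
print(worst_case_subgroup_ratio(df, ["race", "sex"], "y_pred"))
```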
Currently AIF360 has no support for intersectional bias mitigation, and it would be great to see such a feature included in a future version.