
Adding support for intersectional bias mitigation #537

Open
ckalousi opened this issue Aug 26, 2024 · 0 comments
With the widespread adoption of machine learning in the real world, the impact of discriminatory bias has attracted attention. In recent years, various methods to mitigate bias have been proposed. However, most of them do not consider intersectional bias, which creates unfair situations in which people belonging to specific subgroups of a protected group are treated worse once multiple sensitive attributes are taken into account. For example, a model may appear fair with respect to gender and race considered separately, yet still disadvantage the subgroup defined by a particular combination of the two.

To mitigate this kind of bias, an Intersectional Fairness method has been proposed in https://doi.org/10.1007/978-3-030-87687-6_5 (see also https://doi.org/10.48550/arXiv.2010.13494 for the arXiv version). Intersectional Fairness extends fairness-aware machine learning for binary classification by comparing every pair of subgroups defined by the sensitive attributes.
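As a rough illustration of the pairwise-comparison idea only (not the paper's actual algorithm), the sketch below enumerates intersectional subgroups with plain pandas and reports a disparate-impact ratio for every pair of subgroups. The column names and data are made up for the example.

```python
import itertools
import pandas as pd

# Hypothetical data: two sensitive attributes and a binary outcome.
# The column names ("sex", "race", "label") are illustrative only.
df = pd.DataFrame({
    "sex":   ["M", "M", "F", "F", "M", "F", "F", "M"],
    "race":  ["A", "B", "A", "B", "A", "A", "B", "B"],
    "label": [1, 1, 1, 0, 1, 0, 0, 1],  # 1 = favorable outcome
})

sensitive = ["sex", "race"]

# 1. Enumerate intersectional subgroups (every observed combination of
#    sensitive-attribute values) and compute each one's favorable rate.
rates = df.groupby(sensitive)["label"].mean()

# 2. Compare every pair of subgroups, in the spirit of the pairwise scheme
#    the paper describes: report the disparate-impact ratio (favorable rate
#    of the worse-off subgroup divided by that of the better-off one).
for (g1, r1), (g2, r2) in itertools.combinations(rates.items(), 2):
    lo, hi = sorted((r1, r2))
    ratio = lo / hi if hi > 0 else float("nan")
    print(f"{g1} vs {g2}: disparate impact = {ratio:.2f}")
```

A full mitigation method would go further than measurement, but even this kind of pairwise report shows how subgroup disparities can stay hidden when each sensitive attribute is audited on its own.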

Currently there is no support for intersectional bias mitigation in AIF360, and it would be great to see such a feature included in a future version.
