This is the code accompanying the paper:

On (assessing) the fairness of risk score models
Petersen, Ganz, Holm, Feragen
ACM Conference on Fairness, Accountability, and Transparency (FAccT ’23)

Main scripts (a typical run order is sketched after this list):

  • model_fitting.py fits a logistic regression model and an XGBoost classification model to the dataset.
  • analyze_models.py performs the fairness analysis described in the paper and produces Figure 3.
  • auc_ranking_paper_examples.py implements the example shown in Figure 1 of the paper and produces that figure.
  • calibration_bias_analysis.py implements the calibration bias analyses described in the paper and produces Figure 2.
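
A typical end-to-end run, assuming the dataset has already been obtained and preprocessed as described below, might look as follows (check each script for any required paths or arguments):

    python model_fitting.py
    python analyze_models.py
    python auc_ranking_paper_examples.py
    python calibration_bias_analysis.py

Only the first two scripts depend on the dataset, and model_fitting.py should be run before analyze_models.py, since the latter presumably analyzes the models fitted by the former.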

The first two scripts require the Catalan juvenile recidivism dataset provided by the Centre for Legal Studies and Specialised Training (CEJFE) within the Department of Justice of the Government of Catalonia, first analyzed by Tolan et al. (2019). The dataset can be downloaded here; use the preprocessing provided and described by Fuglsang-Damgaard and Zink (2022).

To set up the required packages, run conda env create -f environment.yml (if using Anaconda) or pip install -r requirements.txt (if using pip).

Implemented general functionality (besides the above main scripts; a usage sketch follows this list):

  • Implementation of the debiased calibration error metric we describe in the paper, which is based on the method by Kumar et al. (2019): get_unbiased_calibration_rmse in calibration.py.
  • Unified implementations of the standard (sample-size-biased, as we discuss in the paper) expected calibration error (ECE) and adaptive calibration error (ACE) metrics, both with fixed and automatically determined bin counts: ece in metrics.py.
  • LOESS-based calibration diagrams (with bootstrap-based uncertainty quantification), as proposed by Austin and Steyerberg (2013) and as shown in Figures 3 and 4 of our paper: rel_diag in analyze_models.py.
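
A minimal usage sketch of the metric functions on synthetic data. The import paths follow the file names above, but the argument order and keyword names (e.g., n_bins) are assumptions, not guaranteed by the code; check the actual function signatures:

    import numpy as np

    from calibration import get_unbiased_calibration_rmse
    from metrics import ece

    rng = np.random.default_rng(0)
    y_prob = rng.uniform(size=2000)   # predicted risk scores in [0, 1]
    y_true = rng.binomial(1, y_prob)  # outcomes sampled to match the scores, i.e.,
                                      # perfectly calibrated in expectation

    # Debiased calibration RMSE (Kumar et al., 2019); should be close to zero
    # here, up to sampling noise.
    print(get_unbiased_calibration_rmse(y_true, y_prob))

    # Standard ECE with a fixed bin count (keyword name assumed).
    print(ece(y_true, y_prob, n_bins=15))

    # The LOESS-based calibration diagram could be produced analogously, e.g.
    # rel_diag(y_true, y_prob), but rel_diag lives in the main script
    # analyze_models.py, so importing it may require the dataset to be present.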

Eike Petersen, Technical University of Denmark, DTU Compute, Section Visual Computing, 2023.
Created as part of the project Bias and fairness in medicine, funded by the Independent Research Fund Denmark (DFF).