Authors: Tao Zhang, Tianqing Zhu, Jing Li, Wanlei Zhou
This is the code for our paper 'Revisiting Model Fairness via Adversarial Examples'.
To install requirements:
```
pip install -r requirements.txt
```
- Download the .csv files for the 4 datasets used in the paper from the data directory.
- Data preprocessing is implemented in prepare_data.py (a generic preprocessing sketch follows this list).
- Two adversarial attack methods, LowProFool and DeepFool, are implemented in Adverse.py (a minimal DeepFool-style sketch follows this list).
- Training and evaluation are implemented in train_model.py (a minimal training sketch follows this list).
- Demographic parity is implemented in Fairness_metrics.py (the computation is sketched below).
- All evaluation metrics are implemented in Metrics.py.
- We provide an example of training models and obtaining experimental results on the German dataset with the sensitive attribute age in Playground_German.ipynb (an end-to-end sketch closes this section).
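
The sketches that follow are illustrative only: file paths, column names, function names, and signatures are assumptions and may not match the actual interfaces in this repository. First, a rough preprocessing sketch that loads a tabular CSV, one-hot encodes categorical columns, scales the features, and binarises a sensitive attribute (the path data/german.csv and the columns age and credit_risk are hypothetical):

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

def load_german(csv_path="data/german.csv", sensitive="age", target="credit_risk"):
    """Load a tabular CSV and return train/test splits of features, labels,
    and a binarised sensitive attribute. Path and column names are placeholders."""
    df = pd.read_csv(csv_path)
    y = df.pop(target)                                        # binary label (assumed 0/1)
    s = (df[sensitive] > df[sensitive].median()).astype(int)  # binarised sensitive attribute
    X = pd.get_dummies(df, drop_first=True)                   # one-hot encode categoricals
    X = pd.DataFrame(StandardScaler().fit_transform(X), columns=X.columns, index=X.index)
    return train_test_split(X, y, s, test_size=0.2, random_state=0)
```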
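
A minimal training sketch for a small MLP on the preprocessed features, assuming 0/1 labels and a single-logit output; this is a generic stand-in, not necessarily the architecture or training loop used in train_model.py:

```python
import torch
import torch.nn as nn

def train_classifier(X_train, y_train, epochs=200, lr=1e-3):
    """Train a small MLP that outputs one logit per example (illustrative only)."""
    X = torch.tensor(X_train.values, dtype=torch.float32)
    y = torch.tensor(y_train.values, dtype=torch.float32).view(-1, 1)
    model = nn.Sequential(nn.Linear(X.shape[1], 64), nn.ReLU(), nn.Linear(64, 1))
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), y)  # full-batch training for brevity
        loss.backward()
        opt.step()
    return model
```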
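
For the attack step, the sketch below follows the standard binary-classifier form of DeepFool: repeatedly step onto the linearised decision boundary (with a small overshoot) until the predicted class flips. It is a simplified sketch for a single example, not the Adverse.py implementation, which also provides LowProFool:

```python
import torch

def deepfool_binary(model, x, max_iter=50, overshoot=0.02):
    """DeepFool-style attack for a model returning a single logit
    (logit > 0 means class 1). `x` is one example as a 1-D float tensor."""
    x_adv = x.clone().detach()
    orig_sign = torch.sign(model(x_adv).squeeze()).item()
    for _ in range(max_iter):
        x_adv.requires_grad_(True)
        logit = model(x_adv).squeeze()
        if torch.sign(logit).item() != orig_sign:
            break  # prediction has flipped; stop perturbing
        grad = torch.autograd.grad(logit, x_adv)[0]
        # Step onto the linearised decision boundary, plus a small overshoot.
        r = -logit.detach() * grad / (grad.norm() ** 2 + 1e-8)
        x_adv = (x_adv + (1 + overshoot) * r).detach()
    return x_adv
```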
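
Demographic parity for a binary classifier and a binary sensitive attribute is the gap between the groups' positive-prediction rates, |P(y_hat = 1 | s = 0) - P(y_hat = 1 | s = 1)|. The sketch below computes it directly; the function name and signature are illustrative and may differ from Fairness_metrics.py:

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Absolute gap in positive-prediction rates between the two groups
    defined by a binary sensitive attribute (illustrative signature)."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_0 = y_pred[sensitive == 0].mean()
    rate_1 = y_pred[sensitive == 1].mean()
    return abs(rate_0 - rate_1)
```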
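
Chaining the hypothetical helpers above gives one rough end-to-end picture of the kind of experiment Playground_German.ipynb demonstrates on the German dataset with the sensitive attribute age:

```python
X_tr, X_te, y_tr, y_te, s_tr, s_te = load_german()
model = train_classifier(X_tr, y_tr)
preds = (model(torch.tensor(X_te.values, dtype=torch.float32)) > 0).int().squeeze(1).numpy()
print("Demographic parity gap:", demographic_parity_difference(preds, s_te.values))
```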