Project/Bachelor thesis
Supervisor: Martin Kampel
Status: open
Motivation
Since the breakthrough of DeepFace in 2014 — a deep learning-based facial recognition system that achieved 97.35% accuracy, nearly reaching the level of human experts on the Labeled Faces in the Wild (LFW) dataset — artificial neural networks have become the dominant methodology in facial recognition. Despite impressive results, significant biases still exist regarding ethnicity, gender, and image capture conditions. Current systems demonstrably perform worse for individuals from underrepresented demographic groups.
As such systems are increasingly used in security-critical areas — such as in law enforcement, border control, or public surveillance — the discourse around transparency, fairness, and accountability in technical systems becomes more urgent. This thesis focuses on the methodological investigation of bias in facial recognition systems and explores possible countermeasures.
Objective
The goal of this project is to analyze and quantify bias in current facial recognition models, and to develop and evaluate methods for improving fairness. This includes the analysis of existing datasets and the evaluation of pre-trained deep learning models regarding their performance disparities across ethnic groups and genders.
Additionally, bias mitigation techniques (e.g., rebalancing, adversarial training, or fairness-aware loss functions) are to be tested. A particular emphasis is placed on applicability in security-critical contexts (e.g., correctional facilities).
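Of the mitigation techniques listed, rebalancing is the simplest to illustrate. The sketch below (pure Python; the data layout and the choice of oversampling-to-the-largest-group are assumptions for illustration, not a prescribed method) oversamples underrepresented groups so that each demographic group contributes equally during training:

```python
import random
from collections import defaultdict

def rebalance_by_group(samples, group_of, seed=0):
    """Oversample each demographic group to the size of the largest
    group, so every group contributes equally during training."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for s in samples:
        groups[group_of(s)].append(s)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # draw extra samples with replacement until the group reaches target
        balanced.extend(rng.choice(members) for _ in range(target - len(members)))
    rng.shuffle(balanced)
    return balanced

# toy example: group "a" is overrepresented 3:1
data = [("img1", "a"), ("img2", "a"), ("img3", "a"), ("img4", "b")]
balanced = rebalance_by_group(data, group_of=lambda s: s[1])
counts = {g: sum(1 for _, gg in balanced if gg == g) for g in ("a", "b")}
print(counts)  # both groups now contribute 3 samples
```

In a PyTorch pipeline the same effect is usually achieved with a `WeightedRandomSampler` over per-group inverse-frequency weights, which avoids materializing the duplicated list.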
Tasks
- Literature review on bias in facial recognition and fairness in machine learning
- Selection and analysis of publicly available facial image datasets (e.g., FairFace, BFW, LFW)
- Evaluation of pre-trained deep learning models (e.g., ArcFace, FaceNet) with respect to bias metrics (e.g., accuracy, false accept/reject rates per group)
- Implementation of bias mitigation strategies (e.g., re-sampling, group-aware training)
- (Optional) Visualization of feature-space differences using PCA or t-SNE
- Documentation of ethical and legal aspects of facial recognition in the context of justice
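The per-group error rates named in the evaluation task reduce to simple counting over verification trials. A minimal sketch (the trial tuple layout and group labels here are hypothetical, chosen only to make the computation concrete):

```python
def per_group_error_rates(trials):
    """Compute false accept rate (FAR) and false reject rate (FRR)
    per demographic group.

    Each trial is (group, same_identity: bool, accepted: bool):
      - FAR = accepted impostor pairs / all impostor pairs
      - FRR = rejected genuine pairs / all genuine pairs
    """
    stats = {}
    for group, same, accepted in trials:
        s = stats.setdefault(group, {"fa": 0, "impostor": 0, "fr": 0, "genuine": 0})
        if same:
            s["genuine"] += 1
            if not accepted:
                s["fr"] += 1
        else:
            s["impostor"] += 1
            if accepted:
                s["fa"] += 1
    return {
        g: {
            "FAR": s["fa"] / s["impostor"] if s["impostor"] else 0.0,
            "FRR": s["fr"] / s["genuine"] if s["genuine"] else 0.0,
        }
        for g, s in stats.items()
    }

# toy example: two groups with different error behaviour
trials = [
    ("A", True, True), ("A", True, True), ("A", False, False), ("A", False, True),
    ("B", True, False), ("B", True, True), ("B", False, False), ("B", False, False),
]
rates = per_group_error_rates(trials)
print(rates)  # group A: FAR 0.5, FRR 0.0; group B: FAR 0.0, FRR 0.5
```

Comparing these rates across groups (rather than a single aggregate accuracy) is what exposes the performance disparities the thesis sets out to quantify.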
Required Skills
- Solid Python skills
- Experience with deep learning frameworks (PyTorch or TensorFlow)
- Basic knowledge of statistics, machine learning, and image processing
- Interest in the ethical and social implications of AI
Contact: martin.kampel@tuwien.ac.at