Higher-dimensional bias in a BERT-based disinformation classifier
Algoprudence identification code
ALGO:AA:2023:01
Summary advice
The advice commission believes there is a low risk of (higher-dimensional) proxy discrimination by the BERT-based disinformation classifier, and that the particular difference in treatment identified by the quantitative bias scan can be justified if certain conditions apply.
Source of case
We applied our self-built unsupervised bias detection tool to a self-trained BERT-based disinformation classifier on the Twitter1516 dataset. Learn more on GitHub.
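To illustrate the general idea behind an unsupervised bias scan, the sketch below clusters samples on their feature representations and flags the cluster whose classifier error rate deviates most from the overall error rate. This is a minimal illustration only: the function name `bias_scan`, the choice of k-means clustering, and all parameters are assumptions for exposition, not the actual API of the bias detection tool (see the GitHub repository for the real implementation).

```python
# Illustrative sketch of an unsupervised bias scan -- NOT the actual API of
# the bias detection tool; see the GitHub repository for the real tool.
# Idea: cluster samples on their features, then flag the cluster whose
# classifier error rate deviates most from the overall error rate.
import numpy as np
from sklearn.cluster import KMeans

def bias_scan(features: np.ndarray, y_true: np.ndarray, y_pred: np.ndarray,
              n_clusters: int = 10, random_state: int = 0):
    """Return per-cluster error rates, the most deviating cluster, and the
    overall error rate. `features` could be, e.g., BERT embeddings of tweets."""
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=random_state).fit_predict(features)
    overall_error = float(np.mean(y_true != y_pred))
    cluster_errors = {
        c: float(np.mean(y_true[labels == c] != y_pred[labels == c]))
        for c in range(n_clusters)
    }
    # The cluster that deviates most from the overall error rate is the
    # first candidate for closer (qualitative) inspection.
    worst = max(cluster_errors,
                key=lambda c: abs(cluster_errors[c] - overall_error))
    return cluster_errors, worst, overall_error
```

Clusters with markedly higher error rates indicate groups of tweets the classifier treats differently; inspecting what those tweets have in common is how higher-dimensional proxies for protected attributes can surface, even when no protected attribute appears in the data.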
Stanford’s AI Audit Challenge 2023
This case study, in combination with our bias detection tool, has been selected as a finalist for Stanford’s AI Audit Challenge 2023.
Presentation
A visual presentation of this case study can be found in this slide deck.
Report
Download the full report and problem statement here.
Normative advice commission
- Anne Meuwese, Professor in Public Law & AI at Leiden University
- Hinda Haned, Professor in Responsible Data Science at University of Amsterdam
- Raphaële Xenidis, Associate Professor in EU law at Sciences Po Paris
- Aileen Nielsen, Fellow Law&Tech at ETH Zürich
- Carlos Hernández-Echevarría, Assistant Director and Head of Public Policy at the anti-disinformation nonprofit fact-checker Maldita.es
- Ellen Judson, Head of CASM, and Sophia Knight, Researcher at CASM, at Britain's leading cross-party think tank Demos