Higher-dimensional bias in a BERT-based disinformation classifier

Algoprudence identification code

ALGO:AA:2023:01

Summary advice

The advice commission believes there is a low risk of (higher-dimensional) proxy discrimination by the BERT-based disinformation classifier, and that the particular difference in treatment identified by the quantitative bias scan can be justified, provided certain conditions are met.

Source of case

This case results from applying our self-built unsupervised bias detection tool to a self-trained BERT-based disinformation classifier on the Twitter1516 dataset. Learn more on GitHub.
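To make the method concrete: the sketch below illustrates, under stated assumptions, how an unsupervised bias scan of this kind can work. The idea is to cluster samples on their features alone (no protected attributes) and test whether the classifier's error rate in any cluster deviates from the rest of the data. It uses plain k-means rather than the tool's own clustering method, and all names (`bias_scan`, `embeddings`, `errors`) are illustrative assumptions, not the repository's actual API.

```python
# Illustrative sketch of an unsupervised bias scan (not the tool's actual API).
import numpy as np
from scipy.stats import ttest_ind
from sklearn.cluster import KMeans


def bias_scan(embeddings: np.ndarray, errors: np.ndarray, n_clusters: int = 8):
    """embeddings: (n_samples, dim) feature vectors, e.g. BERT [CLS] embeddings.
    errors: (n_samples,) array with 1 where the classifier was wrong, else 0.
    Returns per-cluster error statistics, most significant deviation first."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(embeddings)
    results = []
    for c in range(n_clusters):
        in_c = labels == c
        # Welch's t-test: does this cluster's error rate differ from the rest?
        _, p_value = ttest_ind(errors[in_c], errors[~in_c], equal_var=False)
        results.append({
            "cluster": c,
            "size": int(in_c.sum()),
            "error_rate": float(errors[in_c].mean()),
            "error_rate_rest": float(errors[~in_c].mean()),
            "p_value": float(p_value),
        })
    return sorted(results, key=lambda r: r["p_value"])
```

A cluster with a significantly higher error rate is then handed over for qualitative review; the commission's normative judgement concerns precisely whether such a quantitatively detected difference in treatment can be justified.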

Stanford’s AI Audit Challenge 2023

This case study, in combination with our bias detection tool, has been selected as a finalist for Stanford’s AI Audit Challenge 2023.


Presentation

A visual presentation of this case study can be found in this slide deck.

Report

Download the full report and problem statement here.


Normative advice commission

  • Anne Meuwese, Professor in Public Law & AI at Leiden University
  • Hinda Haned, Professor in Responsible Data Science at University of Amsterdam
  • Raphaële Xenidis, Associate Professor in EU law at Sciences Po Paris
  • Aileen Nielsen, Fellow Law & Tech at ETH Zürich
  • Carlos Hernández-Echevarría, Assistant Director and Head of Public Policy at the anti-disinformation nonprofit fact-checker Maldita.es
  • Ellen Judson, Head of CASM, and Sophia Knight, Researcher at CASM, Britain’s leading cross-party think tank Demos

Funded by

European Artificial Intelligence & Society Fund

Funding for further development

01-12-2023: funding for open source AI auditing tool

SIDN Fund is supporting Algorithm Audit for the further development of the bias detection tool. On 01-01-2024, a team started further developing and testing the tool.

Finalist selection for Stanford’s AI Audit Challenge 2023

28-04-2023: finalist

Our bias detection tool and this case study have been selected as a finalist for Stanford’s AI Audit Challenge 2023.

React to this normative judgement

Your reaction will be sent to the team maintaining algoprudence. The team will review your response and, if it complies with the guidelines, publish it in the Discussion & debate section above.



