Authors: Janneke Gerards and Raphaële Xenidis

Executive summary

Six major challenges that algorithms pose to gender equality and non-discrimination law:

1. The human factor and the stereotyping and cognitive bias challenge: describes how implicit biases, harmful stereotypes and discriminatory prejudices held by humans risk infecting the algorithms humans create, and how automation and anchoring biases reinforce these risks;

2. The data challenge: describes how data embodies the historically consolidated patterns of discrimination that structure society, and how training algorithms on such biased data, or on incorrect, unrepresentative or unbalanced data, leads these algorithms to reproduce structural inequalities;

3. The correlation and proxies challenge: the correlation challenge explains how algorithms might reify and further enact discriminatory correlations by treating them as causal relationships and using them as the basis for further decisions, recommendations or predictions (e.g. gender might correlate negatively with recorded work performance, not because of any causal link, but because women have historically been evaluated more negatively than men for the same performance); the proxies challenge outlines how removing protected characteristics from the pool of available input variables is insufficient, because learning algorithms can detect proxies for those characteristics (see the sketch after this list);

4. The transparency and explainability challenge: refers to the difficulty of monitoring and proving algorithmic discrimination given the opacity of certain types of algorithms (even to computer scientists) and the lack of information about their inner workings (especially when code and data are proprietary);

5. The scale and speed challenge: describes how algorithmic discrimination can ‘spread’ on a wider scale and at a much quicker pace than ‘human’ discrimination, since algorithms both speed up and scale up decision-making; and

6. The responsibility, liability and accountability challenge: refers to the difficulty of identifying whom to hold responsible, liable and/or accountable for a discriminatory outcome in the context of complex human-machine relationships, given that so many different parties are involved in the design, commercialisation and use of algorithms.
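To make the proxies challenge concrete, here is a minimal Python sketch on synthetic data (not from the report): the protected attribute `gender` is never given to the model, yet a correlated feature lets the model reproduce the bias encoded in the historical labels. All variable names, distributions and coefficients are assumptions made up for this illustration.

```python
# Illustrative sketch of the proxies challenge on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (0 = men, 1 = women); excluded from the model's inputs.
gender = rng.integers(0, 2, n)

# A facially neutral feature that correlates with gender by construction,
# e.g. contractual hours per week.
hours = np.where(gender == 1, rng.normal(28, 6, n), rng.normal(38, 6, n))

# Skill is distributed identically across groups ...
skill = rng.normal(0, 1, n)

# ... but the historical 'hired' labels are biased against women (the data
# challenge): equally skilled women were rated lower in the past.
hired = (skill - 0.8 * gender + rng.normal(0, 0.5, n)) > 0

# Train WITHOUT the protected attribute: the inputs are hours and skill only.
X = np.column_stack([hours, skill])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

# The disparity resurfaces through the proxy feature anyway.
for g, label in ((0, "men"), (1, "women")):
    print(f"predicted hire rate for {label}: {pred[gender == g].mean():.2f}")
```

On this synthetic data, the predicted hire rate for women comes out clearly below that for men even though gender never appears among the inputs, which is why merely removing protected characteristics from the data does not prevent discrimination.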

Proxy discrimination calls into question the boundaries of the exhaustive list of protected grounds defined in Article 19 TFEU, and it sheds new light on the role and place of the non-exhaustive list of protected grounds found in Article 21 of the EU Charter of Fundamental Rights (https://fra.europa.eu/en/eu-charter/article/21-non-discrimination).

In light of the difficulties of tracking differential treatment based on protected grounds in ‘black box’ algorithms, the notion of indirect discrimination might become a conceptual ‘refuge’ for capturing the discriminatory wrongs of algorithms. This development might reduce legal certainty if it leads, by default, to the generalisation of the open-ended objective justification test applicable in indirect discrimination cases, at the expense of the narrower pool of justifications available in direct discrimination cases.

PROTECT

The seven recommendations below together form the acronym PROTECT:

- PREVENT: through diverse and well-trained IT teams, equality impact assessments, and ex ante ‘equality by design’ or ‘legality by design’ strategies.

- REDRESS: combine different legal tools from non-discrimination law, data protection law and other fields to foster clear attribution of legal responsibility, clear remedies, fair rules of evidence, and flexible and responsive interpretation and application of non-discrimination concepts.

- OPEN: foster transparency, e.g. through open data requirements for monitoring purposes (such as access to source code).

- TRAIN: create and disseminate knowledge about non-discrimination and equality among IT specialists, and raise awareness of algorithmic discrimination among regulators, judges, recruiters, officials and society at large.

- EXPLAIN: establish explainability, accountability and information requirements.

- CONTROL: ensure active human involvement (human-centred AI), e.g. in the form of human-in-the-loop (HITL) systems designed to avoid rubber-stamping, complemented by supervision and consultation mechanisms (a chain of control and consultation with users).

- TEST: continuously monitor high-risk algorithms and their output, and set up auditing, labelling and certification mechanisms (a minimal auditing sketch follows this list).
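As one concrete illustration of the TEST recommendation, the sketch below implements a simple continuous-monitoring check in Python: it computes the demographic-parity gap in a system's logged decisions and flags the system when the gap exceeds a tolerance. Everything in it, from the field names and synthetic data to the 0.05 threshold, is an assumption made for illustration, not a metric or standard prescribed by the report or by EU law.

```python
# Illustrative sketch of one auditing step: measuring a demographic-parity
# gap on logged decisions. A real audit would combine several metrics,
# uncertainty estimates and qualitative review.
import numpy as np

def demographic_parity_gap(decisions: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-decision rates between two groups."""
    return abs(decisions[group == 0].mean() - decisions[group == 1].mean())

# Stand-in for logged decisions of a deployed system, together with the
# protected attribute (collected under appropriate data protection safeguards).
rng = np.random.default_rng(1)
group = rng.integers(0, 2, 1_000)
decisions = rng.random(1_000) < np.where(group == 1, 0.35, 0.50)

gap = demographic_parity_gap(decisions, group)
print(f"demographic parity gap: {gap:.2f}")
if gap > 0.05:  # illustrative tolerance, not a legal threshold
    print("gap exceeds tolerance: flag the system for human review")
```

Demographic parity is only one of several possible fairness metrics; which metric is appropriate depends on the context and on the legal questions raised above.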

Update 14-07-2022

We are happy to announce that Raphaële Xenidis is now part of Algorithm Audit’s Board of Advice.

