Op-ed, as published in Parool on 14-02-2024, arguing that:
- Not only can algorithm-driven processes have discriminatory effects, but human-driven processes can be severely biased too;
- Therefore, following the bias test performed by the City of Amsterdam, not only should the explainable boosting ML model be abandoned, but the human biases allegedly detected within the City of Amsterdam's processes should also be investigated further;
- More open and transparent research is needed to strengthen human-machine interplay and prevent systemic biases in the digital future.