Distinctive in
Normative advice
Mindful of societal impact, our audit commissions provide normative advice on ethical issues that arise in algorithmic use cases.
Independence
By working on a nonprofit basis and under explicit terms and conditions, we ensure the independence, quality and diversity of our audit commissions.
Ethics beyond compliance
We help organizations committed to ethical algorithms make judgments about fairness and open legal norms.
Public knowledge
All our cases and corresponding advice (algoprudence) are made publicly available, increasing collective knowledge of how to deploy and use algorithms in an ethical way.
Algoprudence
Stakeholders learn from our techno-ethical jurisprudence, can help to improve it, and can use it as a best practice in similar cases.
Joint effort
Let’s remove boundaries between public and private organizations that face similar AI quandaries. We offer a collaborative platform for academics, activists, developers and policy makers to define normative standards for AI.
How we work
Algorithms we have reviewed
Who we work with
We work together with international experts from various backgrounds, e.g. ethicists, legal professionals, data scientists. The composition of audit commissions varies per case. Most of the experts are affiliated with academic institutions.
Why we exist
AI ethics urgently needs case-based experience and a bottom-up approach. We believe existing and proposed legislation does not and will not suffice to realize ethical algorithms. Why not?
AI policy observatory
- AI Act
The AI Act imposes broad new responsibilities to control risks from AI systems without at the same time laying down specific standards they are expected to meet. For instance:
> Conformity assessment (Art. 43): The proposed route for internal control relies too much on the self-reflective capacities of producers to assess AI quality management, risk management and bias, resulting in subjective best practices;
> Risk and quality management systems (Art. 9 and 17): The requirements set out for risk management systems and quality management systems remain too generic. For example, they do not provide precise guidelines on how to identify and mitigate ethical issues such as algorithmic discrimination;
> Normative standards: To realize AI harmonization across the EU, publicly available technical and normative best practices for fair AI are urgently needed.
- Digital Services Act (DSA)
The DSA lacks provisions to disclose normative methodological choices that underlie general purpose AI systems. For instance:
> Risk definitions: Article 9 of the Delegated Regulation (DR) for independent third-party auditing (as mandated under DSA Art. 37) specifies that “audit risk analysis shall consider inherent risk, control risk and detection risk”. More specific guidance should be provided in Art. 2 of the DR on how risks relating to subjective concepts, such as “…the nature, the activity and the use of the audited service”, can be assessed;
> Audit template: Pursuant to Article 5(1)(a) of the DR, Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) shall transmit to third-party auditing organisations “benchmarks used […] to assert or monitor compliance […], as well as supporting documentation”. We argue that the normative considerations underlying the selection of these benchmarks should be probed more explicitly in this phase of the audit. Therefore, we asked the European Commission (EC) to add this dimension to Question 3(a) of Section D.1 Audit conclusion for obligation, Subsection II. Audit procedures and their results;
> Feedback submitted to the European Commission (EC) on the DSA Art. 37 DR shows that:
– Private auditors (such as PwC and Deloitte) warn that the lack of guidance on criteria against which to audit poses a risk of subjective audits;
– Tech companies (such as Snap and Wikipedia) raise concerns about the industry’s lack of expertise to audit specific AI products, such as company-tailored timeline recommender systems.
- General Data Protection Regulation (GDPR)
> Participatory DPIA (art. 35 sub 9): This provision mandates that in cases where a Data Protection Impact Assessment (DPIA) is obligatory, the opinions of data subjects regarding the planned data processing shall be sought. This is a powerful legal mechanism to foster collaborative algorithm development. Nevertheless, the inclusion of data subjects in this manner is scarcely observed in practice;
> Profiling (recital 71) is broadly defined as: “to analyse or predict aspects concerning the data subject’s performance at work, economic situation, health, personal preferences or interests, reliability or behaviour, location or movements”. However, the approval of profiling, particularly when “authorised by Union or Member State law to which the controller is subject, including fraud monitoring”, grants public and private entities significant flexibility to integrate algorithmic decision-making derived from diverse types of profiling. This wide latitude raises concerns about the potential for excessive consolidation of personal data and the consequences of algorithmic determinations;
> Automated decision-making (art. 22 sub 2): Allowing wide-ranging automated decision-making (ADM) and profiling under the sole condition of contractual agreement opens the door to large-scale unethical algorithmic practices without accountability and public awareness.
- Fundamental Rights and Human Rights Impact Assessments
The Impact Assessment Human Rights and Algorithms (IAMA) and the Handbook for Non-Discrimination, both developed by the Dutch government, assess discriminatory practice mainly by asking questions that are meant to stimulate self-reflection. They do not provide answers or concrete guidelines on how to realise ethical algorithms.
- Administrative law
Unifying principles of sound administration with (semi-) automated decision-making is challenging. For instance:
> Obligation to state reasons: Governmental institutions must always provide clear explanations for their decisions. However, when machine learning is employed, such as in variable selection for risk profiling, this transparency may be obscured. This leads to the question of how far arguments based on probability distributions are acceptable as explanations for why certain citizens are chosen for a particular profile.
- AI registers
..
Get in touch
Do you have an ethical issue for review? Or want to share ideas? Let us know!