AI Act

The AI Act imposes broad new responsibilities for controlling risks from AI systems, but does not at the same time lay down the specific standards those systems are expected to meet. For instance:

  • Risk- and quality management systems (Art. 9 and 17) – The requirements set out for risk management systems and quality management systems remain too generic. For example, the Act does not provide precise guidelines on how to identify and mitigate ethical issues such as algorithmic discrimination (a minimal sketch of such a concrete test follows this list);
  • Conformity assessment (Art. 43) – The proposed route of internal control relies too heavily on the self-reflective capacities of producers to assess AI quality management, risk management and bias, resulting in subjective best practices;
  • Technical standards – Technical standards alone, as requested by the European Commission from the standardization bodies CEN-CENELEC, are not enough to realize AI harmonization across the EU. Publicly available technical and normative best practices for fair AI are urgently needed.
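
A minimal sketch of what such a concrete, publicly available guideline could look like: a measurable fairness metric with an explicit tolerance. The metric (demographic parity difference), the 0.05 threshold and all names below are our own illustrative assumptions, not requirements of the AI Act.

    import numpy as np

    def demographic_parity_gap(decisions: np.ndarray, group: np.ndarray) -> float:
        """Absolute difference in favourable-decision rates between two groups."""
        rate_a = decisions[group == 0].mean()  # favourable-decision rate, group 0
        rate_b = decisions[group == 1].mean()  # favourable-decision rate, group 1
        return abs(rate_a - rate_b)

    # Hypothetical audit data: binary decisions of an AI system (1 = favourable)
    # and a binary protected attribute indicating group membership.
    decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0])
    group = np.array([0, 0, 0, 0, 1, 1, 1, 1])

    gap = demographic_parity_gap(decisions, group)
    print(f"demographic parity gap: {gap:.2f}")
    # The 0.05 tolerance is an assumption of this sketch; the AI Act itself
    # prescribes no such number, which is exactly the gap critiqued above.
    if gap > 0.05:
        print("potential algorithmic discrimination: investigate further")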

As a member of the Dutch standardization body NEN, Algorithm Audit contributes to the European debate on how fundamental rights should be co-regulated alongside product safety.

Digital Services Act (DSA)

The Digital Services Act (DSA) lacks provisions requiring disclosure of the normative methodological choices that underlie the AI systems it seeks to regulate. For instance:

  • Risk definitions – Article 9 of the Delegated Regulation (DR) for independent third-party auditing (as mandated under DSA Art. 37) specifies that “audit risk analysis shall consider inherent risk, control risk and detection risk”. More specific guidance should be provided in Art. 2 of the DR on how risks relating to subjective concepts, such as “…the nature, the activity and the use of the audited service”, can be assessed (the classical model behind these three risk terms is sketched after this list);
  • Audit templates – Pursuant to Article 5(1)(a) of the DR, Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) shall transmit to third-party auditing organisations “benchmarks used […] to assert or monitor compliance […], as well as supporting documentation”. We argue that the normative considerations underlying the selection of these benchmarks should be probed more decisively in this phase of the audit. We have therefore asked the European Commission (EC) to add this dimension to Question 3(a) of Section D.1 Audit conclusion for obligation, Subsection II. Audit procedures and their results;
  • Insufficient knowledge of how to audit AI – Feedback submitted to the EC on the DSA Art. 37 DR reveals that:
    • Private auditors (like PwC and Deloitte) warn that the lack of guidance on criteria against which to audit poses a risk of subjective audits;
    • Tech companies (like Snap and Wikipedia) raise concerns about the industry’s lack of expertise to audit specific AI products, like company-tailored timeline recommender systems.
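
For context on the first bullet: the triad named in Art. 9 of the DR mirrors the classical audit risk model from financial auditing, in which overall audit risk is commonly expressed as the product of its components:

    audit risk = inherent risk × control risk × detection risk

The DR only names the three components; it does not prescribe this multiplicative model, nor how to quantify the components for subjective concepts such as the nature, activity and use of an audited service, which is precisely where more guidance is needed.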

Read Algorithm Audit’s feedback to the European Commission on DSA Art. 37 Delegated Regulation [pdf]

General Data Protection Regulation (GDPR)

The GDPR has strengths regarding participatory decision-making, but it also has weaknesses in regulating profiling algorithms, owing to its focus on fully automated decision-making.

  • Participatory Data Protection Impact Assessment (DPIA) (Art. 35(9)) – This provision mandates that, in cases where a DPIA is obligatory, the opinions of data subjects regarding the planned data processing shall be sought. This is a powerful legal mechanism for fostering collaborative algorithm development. Nevertheless, the inclusion of data subjects in this manner is scarcely observed in practice;
  • Automated decision-making (Art. 22(2)) – There is ongoing legal uncertainty about what exactly constitutes ‘automated decision-making’ and ‘meaningful human intervention’, given the SCHUFA ruling by the Court of Justice of the European Union (CJEU).

Article summarizing the interaction between the GDPR and the AI Act regarding data collection for debiasing [pdf]

Administrative law

Administrative law provides a normative framework for algorithm-driven decision-making processes: in the Netherlands, for instance, through the codification of the general principles of good administration (gpga). We argue that these principles are relevant to algorithmic practice but require contextualisation, which is often lacking. Consider, for instance:

  • Duty to give reasons: It must be sufficiently clear on what grounds and why an administrative body takes a decision. When an algorithm is used for decision support, it should be explained how the output of the algorithm contributed to the decision-making process (a minimal illustration follows this list);
  • Duty of care: The duty of care requires, among other things, that a situation be created in which all interests can be weighed and in which a suitable ML method is used;
  • Fair play principle: The principle of fair play, or proper treatment, which is partly codified as a prohibition of bias in Section 2:4 of the Dutch General Administrative Law Act, concerns the impartial execution of tasks by an administrative body. We argue that ‘contextualising’ the gpga in the case of this principle should focus on new, digital manifestations of bias. A subsequent best-efforts obligation could then be applied to prevent bias and guarantee fairness in algorithmic applications.
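
As a minimal illustration of the duty to give reasons for a simple linear decision-support model (all feature names, weights and values below are hypothetical assumptions of ours), the per-feature contributions to the score can be logged alongside the decision:

    # Sketch: decompose a linear decision-support score into per-feature
    # contributions, so the administrative body can state how the algorithm's
    # output fed into its decision. All numbers are illustrative assumptions.
    weights = {"income": -0.2, "arrears": 1.5, "age": 0.1}
    applicant = {"income": 3.0, "arrears": 2.0, "age": 0.5}

    contributions = {f: weights[f] * applicant[f] for f in weights}
    score = sum(contributions.values())

    # List the features in order of influence on this applicant's score.
    for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"{feature}: contribution {c:+.2f}")
    print(f"total decision-support score: {score:+.2f}")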

Read the article How ‘algoprudence’ can contribute to responsible use of ML-algorithms and its interplay with the Dutch General Administrative Law Act [pdf]

Fundamental Rights Impact Assessment (FRIA)

Over the years, many Fundamental Rights Impact Assessments (FRIAs) have been developed. FRIAs typically assess responsible deployment of algorithms and AI by asking questions that are meant to stimulate self-reflection. They do not, however, provide answers or concrete guidelines on how to realise ethical algorithms.

Read Algorithm Audit’s comparative analysis of 10 FRIAs [pdf]
