AI Act

The AI Act imposes broad new responsibilities to manage the risks of AI systems, but specific norms for the responsible deployment of algorithms are still lacking. For example:

  • Risk and quality management systems (Art. 9 and 17) – The requirements for risk and quality management systems remain too generic. They state, for instance, that AI systems must not discriminate and that ethical risks must be identified, but they do not explain how discrimination can be established or how tensions between values can be resolved (a minimal sketch of what such a check could involve follows this list);
  • Conformity assessment (Art. 43) – The AI Act relies heavily on internal controls and mechanisms that are meant to foster self-reflection so that AI systems are deployed responsibly. This, however, leads to subjective choices. More institutional guidance on normative questions is needed;
  • Normative standards – Technical standards for AI systems alone, such as those the European Commission has asked the standardisation organisations CEN-CENELEC to develop, are insufficient to ensure the responsible deployment of AI systems. Public knowledge about both technical and normative judgement on responsible AI systems is urgently needed, yet precisely this knowledge is lacking.
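The requirements leave open how discrimination could be established in practice. As a purely illustrative, minimal sketch (not a method prescribed by the AI Act or the draft harmonised standards), the example below computes a demographic parity difference on hypothetical decision data; the grouping, data and interpretation are assumptions.

```python
# Minimal sketch (hypothetical data): one way to quantify differences in
# decision rates between groups, as a starting point for assessing possible
# indirect discrimination by an AI system.
import pandas as pd

# Hypothetical audit sample: one row per decision, with a protected attribute
# ("group") and the outcome of the AI system ("selected").
df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,    0,   1,   1,   0,   0,   1,   0],
})

# Selection rate per group and the demographic parity difference between them.
rates = df.groupby("group")["selected"].mean()
dp_difference = rates.max() - rates.min()

print(rates)
print(f"Demographic parity difference: {dp_difference:.2f}")
```

Even with such a metric in hand, deciding whether an observed gap is justified remains a normative judgement, which is precisely the kind of question the current requirements do not answer.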

As a member of the Dutch standardisation institute NEN (Nederlands Normalisatie Instituut), Stichting Algorithm Audit contributes to the European debate on how fundamental rights can be co-regulated through product safety regulation such as the AI Act.

Presentation by Algorithm Audit at the plenary meeting of the European standardisation organisations CEN-CENELEC on diverse and inclusive advisory committees in Dublin, February 2024

Algorithm Audit’s technical and normative algorithm validations anticipate the forthcoming harmonised standards under the AI Act (not the cybersecurity specifications). For every report published in our case repository, we explain how the audit criteria relate to the current status of the harmonised standards being developed for the AI Act.

Digital Services Act (DSA)

The Digital Services Act (DSA) lacks provisions to disclose normative methodological choices that underlie the AI systems the DSA tries to regulate. For instance:

  • Risk definitions – Article 9 of the Delegated Regulation (DR) for independent third-party auditing (as mandated under DSA Art. 37) specifies that “audit risk analysis shall consider inherent risk, control risk and detection risk”. More specific guidance should be provided in Art. 2 of the DR on how risks relating to subjective concepts, such as “…the nature, the activity and the use of the audited service”, can be assessed;
  • Audit templates – Pursuant to Article 5(1)(a) of the DR, Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) shall transmit to third-party auditing organisations “benchmarks used […] to assert or monitor compliance […], as well as supporting documentation”. We argue that the normative considerations underlying the selection of these benchmarks should be requested more explicitly in this phase of the audit. Therefore, we asked the European Commission (EC) to add this dimension to Question 3(a) of Section D.1 Audit conclusion for obligation, Subsection II. Audit procedures and their results;
  • Insufficient knowledge of how to audit AI – Feedback submitted to the European Commission on DSA Art. 37 DR reveals that:
    • Private auditors (like PwC and Deloitte) warn that the lack of guidance on criteria against which to audit poses a risk of subjective audits;
    • Tech companies (like Snap and Wikipedia) raise concerns about the industry’s lack of expertise to audit specific AI products, like company-tailored timeline recommender systems.

Read our feedback to the European Commission on DSA Art. 37 Delegated Regulation

General Data Protection Regulation (GDPR)

The GDPR has strengths regarding participatory decision-making, but it also has weaknesses in how it regulates profiling algorithms and in its focus on fully automated decision-making.

  • Participatory DPIA (art. 35 sub 9) – This provision mandates that, in cases where a Data Protection Impact Assessment (DPIA) is obligatory, the opinions of data subjects regarding the planned data processing shall be sought. This is a powerful legal mechanism to foster collaborative algorithm development. Nevertheless, the inclusion of data subjects in this manner is scarcely observed in practice;
  • Profiling (recital 71) – Profiling is defined as: “to analyse or predict aspects concerning the data subject’s performance at work, economic situation, health, personal preferences or interests, reliability or behaviour, location or movements”. However, the approval of profiling, particularly when “authorised by Union or Member State law to which the controller is subject, including fraud monitoring”, grants public and private entities significant flexibility to integrate algorithmic decision-making derived from diverse types of profiling. This wide latitude raises concerns about the potential for excessive consolidation of personal data and the consequences of algorithmic determinations, as illustrated by simple, rule-based but harmful profiling algorithms in The Netherlands (a hypothetical sketch of such a profile follows this list);
  • Automated decision-making (art. 22 sub 2) – Allowing wide-ranging automated decision-making (ADM) and profiling under the sole condition of contractual agreement opens the door to large-scale unethical algorithmic practices without accountability or public awareness.
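To make concrete what “simple, rule-based but harmful” profiling can look like, the sketch below combines a few hand-written rules into a risk score; all field names, rules and thresholds are invented for illustration and do not reproduce any actual Dutch system.

```python
# Hypothetical sketch of a rule-based risk profile: a handful of fixed rules
# add up to a score, and applicants above a cut-off are selected for control.
def risk_score(applicant: dict) -> int:
    score = 0
    if applicant.get("income", 0) < 20_000:
        score += 1
    if applicant.get("distance_to_institution_km", 0) > 50:
        score += 1
    if not applicant.get("national_of_member_state", True):
        score += 1  # a seemingly neutral rule that can act as a proxy for origin
    return score

applicant = {"income": 18_000, "distance_to_institution_km": 60, "national_of_member_state": False}
selected_for_control = risk_score(applicant) >= 2
print(selected_for_control)  # True: flagged for manual control
```

Because such rules can be authorised under broad legal grounds, their cumulative effect on specific groups can go unnoticed without explicit review.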

Read Algorithm Audit’s technical audit of a risk profiling-based control process of a Dutch public sector organisation

Administrative law

Administrative law provides a normative framework for algorithm-driven decision-making processes, in The Netherlands, for instance, through the codification of the general principles of good administration (gpga). We argue that these principles are relevant to algorithmic practice but require contextualisation, which is often lacking. Take a closer look, for instance, at:

  • Principle of reasoning: On the basis of the principle of reasoning, it must be sufficiently clear on what grounds and why an administrative body takes a decision. What can and cannot be categorised as ‘explainable’ as a non-legal part of the legal norm of the principle of reasoning is still undergoing extensive development, and concrete norms are therefore still lacking.
  • Principle of due diligence: This principle relates to the formation of a decision, and ML-driven risk profiling is used precisely in this phase of the decision-making process. The principle of due diligence can be jeopardized when ML-driven risk profiling is applied if the input data are incomplete or incorrect, or if the risk profile does not include all the relevant facts. This principle suffers from a lack of interpretation, resulting in a lack of clear guidance.
  • Fair play principle: The principle of fair play, or proper treatment, which is partly codified as a prohibition of bias in Section 2:4 of the Dutch General Administrative Law Act, concerns the impartial execution of tasks by an administrative body. We argue that ‘contextualising’ the gpga in the case of this principle should focus on new, digital manifestations of bias. A best-efforts obligation could then be applied to prevent bias and guarantee fairness in algorithmic applications.

Read Algorithm Audit’s article How ‘algoprudence’ can contribute to responsible use of ML-algorithms and its interplay with the Dutch General Administrative Law Act

FRIA

The Impact Assessment Human Rights and Algorithms (IAMA) and the Handbook for Non-Discrimination, both developed by the Dutch government, assess discriminatory practice mainly by asking questions that are meant to stimulate self-reflection. They do not provide answers or concrete guidelines on how to realise ethical algorithms.

Registers

Unifying principles of sound administration with (semi-)automated decision-making is challenging. For instance:

Obligation to state reasons: Governmental institutions must always provide clear explanations for their decisions. However, when machine learning is employed, for instance for variable selection in risk profiling, this transparency may be obscured. This raises the question of to what extent arguments based on probability distributions are acceptable as explanations for why certain citizens are selected for a particular profile.
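As an illustration of how ML-based variable selection can obscure the reasoning behind a selection, the sketch below fits a gradient boosting classifier (xgboost, as in the case referenced below) on synthetic data and ranks variables by feature importance; the data, feature names and cut-off are assumptions, not a reconstruction of the audited system.

```python
# Minimal sketch of ML-based variable selection for a risk profile:
# fit a gradient boosting model on historical control outcomes and keep the
# variables with the highest feature importance. All data are synthetic.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
feature_names = ["income", "age", "distance_km", "household_size"]
X = rng.normal(size=(500, len(feature_names)))       # hypothetical applicant features
y = (X[:, 0] + rng.normal(scale=0.5, size=500) > 0)  # hypothetical control outcomes

model = XGBClassifier(n_estimators=50, max_depth=3, eval_metric="logloss")
model.fit(X, y.astype(int))

# Rank variables by importance and keep the top two for the risk profile.
importances = dict(zip(feature_names, model.feature_importances_))
selected = sorted(importances, key=importances.get, reverse=True)[:2]

print(importances)
print("Selected variables:", selected)
# The resulting explanation ("these variables scored highest") is statistical
# rather than substantive, which is why the obligation to state reasons is
# hard to satisfy with this kind of selection alone.
```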

Read Algoprudence AA:2023:02 for a review of xgboost machine learning used for risk profiling variable selection 

Newsletter

Stay up to date with our work by subscribing to our newsletter

Public knowledge building for ethical algorithms