What sets us apart

Normative advice

Audit commissions provide normative advice on concrete ethical issues that arise in the use of algorithms

Independent

We work on a not-for-profit basis. Our working agreements guarantee the independence, quality and diversity of our audit commissions and the normative advice they produce

Ethics beyond compliance

We help organisations that commit to the responsible use of algorithms with bias testing and with giving substance to open legal norms

Public knowledge building

All our case reviews and the accompanying advice (algoprudence) are public. In this way we contribute to public knowledge building on the responsible use of algorithms

Techno-ethical jurisprudence

Stakeholders can learn from our algoprudence, help improve it, and use it as reference material for similar issues

Joining forces

Public and private organisations face similar challenges. We drive the collective learning process on the responsible use of algorithms by connecting government, industry, NGOs and academia

Partners

Who we work with

We work with international experts from a variety of backgrounds, including ethicists, legal scholars and data scientists. The composition of an audit commission differs per case. Commission members are affiliated with academic institutions, are domain experts, or are themselves subject to the algorithm under review.

Why we exist

There is an urgent need to share experiences on how algorithms can be developed responsibly. Existing and upcoming legislation does not resolve all of these questions. Why not?

National and European AI policy developments

  • AI Act

    The AI Act imposes broad new responsibilities to control risks from AI systems without at the same time laying down specific standards they are expected to meet. For instance:

    > Conformity assessment (Art. 43): The proposed route for internal control relies too much on the self-reflective capacities of producers to assess AI quality management, risk management and bias, resulting in subjective best practices;
    > Risk and quality management systems (Art. 9 and 17): The requirements set out for risk management systems and quality management systems remain too generic. For example, the Act does not provide precise guidelines on how to identify and mitigate ethical issues such as algorithmic discrimination;
    > Normative standards: To realize AI harmonization across the EU, publicly available technical and normative best practices for fair AI are urgently needed.
  • Digital Services Act

    The Digital Services Act (DSA) lacks provisions requiring disclosure of the normative methodological choices that underlie general-purpose AI systems. For instance:

    > Risk definitions: Article 9 of the Delegated Regulation (DR) for independent third-party auditing (as mandated under DSA Art. 37) specifies that “audit risk analysis shall consider inherent risk, control risk and detection risk”. More specific guidance should be provided in Art. 2 of the DR on how risks relating to subjective concepts, such as “…the nature, the activity and the use of the audited service”, can be assessed;
    > Audit template: Pursuant to Article 5(1)(a) of the DR, Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) shall transmit to third-party auditing organisations “benchmarks used […] to assert or monitor compliance […], as well as supporting documentation”. We argue that the normative considerations underlying the selection of these benchmarks should be requested more explicitly in this phase of the audit. We therefore asked the European Commission (EC) to add this dimension to Question 3(a) of Section D.1 (Audit conclusion for obligation), Subsection II (Audit procedures and their results);
    > Feedback submitted to the EC on the DSA Art. 37 DR shows that:
    – Private auditors (like PwC and Deloitte) warn that the lack of guidance on criteria against which to audit poses a risk of subjective audits;
    – Tech companies (like Snap and Wikipedia) raise concerns about the industry’s lack of expertise to audit specific AI products, like company-tailored timeline recommender systems.
  • General Data Protection Regulation (GDPR)

    Organizations that develop algorithms often do not comply with GDPR provisions that foster participatory algorithm development. For example:

    > Participatory DPIA (Art. 35 sub 9): This provision mandates that, in cases where a Data Protection Impact Assessment (DPIA) is obligatory, the opinions of data subjects regarding the planned data processing shall be sought. This is a powerful legal mechanism to foster collaborative algorithm development. Nevertheless, the inclusion of data subjects in this manner is scarcely observed in practice;

    In addition, the current regulation only partially specifies measures to safeguard algorithmic decision-making. For instance:

    > Profiling (recital 71) is broadly defined as: “to analyse or predict aspects concerning the data subject’s performance at work, economic situation, health, personal preferences or interests, reliability or behaviour, location or movements”. However, the approval of profiling, particularly when “authorised by Union or Member State law to which the controller is subject, including fraud monitoring”, grants public and private entities significant flexibility to integrate algorithmic decision-making derived from diverse types of profiling. This wide latitude raises concerns about the potential for excessive consolidation of personal data and the consequences of algorithmic determinations;
    > Automated decision-making (Art. 22 sub 2): Allowing wide-ranging automated decision-making (ADM) and profiling under the sole condition of a contractual agreement opens the door to large-scale unethical algorithmic practices without accountability or public awareness.
  • General Administrative Law Act (Algemene wet bestuursrecht)

    Reconciling the principles of sound administration with algorithmic methods is challenging. For instance:

    > Motivation principle: Governmental institutions must always provide clear explanations for their decisions. However, when machine learning is employed, such as in variable selection for risk profiling, this transparency may be obscured. This leads to the question of how far arguments based on probability distributions are acceptable as explanations for why certain citizens are chosen for a particular profile.
  • Impact Assessment Human Rights and Algorithms (IAMA)

    The Impact Assessment Human Rights and Algorithms (IAMA) and the Handbook for Non-Discrimination, both developed by the Dutch government, assess discriminatory practice mainly by asking questions that are meant to stimulate self-reflection. They do not provide answers or concrete guidelines on how to realise ethical algorithms.

  • Algorithm registers

    ..

  • Supervisory landscape

    Perspective 3.1.1 in the Guidelines for Algorithms of the Dutch Court of Auditors argues that ethical algorithms are not allowed to “discriminate and that bias should be minimised”. Missing from this judgment is a discussion of what precisely constitutes bias in the context of algorithms and what would be appropriate methods to ascertain and mitigate algorithmic discrimination (a minimal sketch of one such quantitative bias test follows below this list). In the absence of a clear ethical framework, it is up to organizations to formulate context-sensitive approaches to combat discrimination.
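
As an illustration of what a concrete, publicly documented bias test could look like, below is a minimal sketch of one commonly used quantitative check: the demographic parity gap between groups in a binary decision. The dataset and column names are illustrative assumptions, not a prescribed method; in practice, the choice of fairness metric and of the groups to compare is itself a normative decision that an audit commission would weigh per case.

```python
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, decision_col: str, group_col: str) -> float:
    """Largest absolute difference in positive-decision rates between groups.

    A gap of 0 means all groups receive positive decisions at the same rate;
    larger values indicate a stronger statistical association between group
    membership and the outcome, which may warrant further normative review.
    """
    rates = df.groupby(group_col)[decision_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical risk-profiling decisions per applicant (illustrative data only).
decisions = pd.DataFrame({
    "selected_for_review": [1, 0, 1, 1, 0, 0, 1, 0],
    "group":               ["A", "A", "A", "A", "B", "B", "B", "B"],
})

gap = demographic_parity_gap(decisions, "selected_for_review", "group")
print(f"Demographic parity gap: {gap:.2f}")  # 0.50 for this toy sample
```

A single number like this does not settle whether a system discriminates; it only flags where a normative judgement about justification, proxies and context still has to be made.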

Think along with us

Would you like advice on an algorithm? Or would you like to develop new ideas with us? Get in touch.