Introduction - a solution for the AI Act and beyond

Ensuring legal compliance when using algorithmic systems requires correct qualification of in-scope systems and risk categories. AI AQT consists of dynamic questionnaires that help with:

1A – Identification of AI systems and high-impact algorithms;

1B – [BETA] Identification of AI systems and assessment of high-impact algorithms and solely automated decision-making (sADM);

2 – Risk classification of AI systems.

The tool is developed as open source in collaboration with the Municipality of Amsterdam and can be used free of charge within your organization to manage algorithms. All potential outcomes of the questionnaires are shown in the figure below. The flowcharts of the questionnaires can also be found below.

Why correct qualification of algorithmic systems matters

Applying legal definitions in practice raises difficult questions. For example: what features distinguish AI from other algorithmic systems? And which criteria determine the risk category of an AI system? Standardized questionnaires can help govern algorithms efficiently: they can be shared across the entire organization and processed centrally. AI AQT provides a clear and user-friendly approach and serves as a building block for compliance. Identification and risk classification are relevant not only in the context of the AI Act, but also in relation to the GDPR and additional policy instruments, such as the Algorithm Register Guidelines of the Dutch government.

Flow of the AI AQT questionnaires

[Figure: flow of the AI AQT questionnaires]

Outcomes of the tool

The outcomes of the tool are displayed in the figure below. The following categories are distinguished:

Questionnaire 1A:

  • AI systems: Fall inside the scope of the AI Act. Additional control measures are legally required, depending on the risk category. Continue with Questionnaire 2.
  • High-impact algorithms: Fall outside the scope of the AI Act but inside the scope of the Dutch Algorithm Register. Additional control measures are needed.

Questionnaire 1B [BETA]:

  • AI systems: See outcome Questionnaire 1A.
  • High-impact algorithms: See outcome Questionnaire 1A.
  • Solely automated decision-making (sADM): Falls inside the scope of Article 22 of the GDPR. Additional control measures are needed.
  • Other systems: Fall outside the scopes of the AI Act, Dutch Algorithm Register and Article 22 of the GDPR.

Questionnaire 2:

  • Prohibited AI systems: Usage of this type of AI system is prohibited in the European Union. More information about this category is provided by the Dutch government.
  • High-risk AI systems: Additional control measures are required; harmonized standards describe how to implement them.
  • Additional transparency requirements: This type of AI system is subject to additional transparency requirements, but no further control measures.
  • General Purpose AI (GPAI): Additional requirements apply to the provider.
[Figure: outcomes of the AI AQT questionnaires]
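The routing between the questionnaires can be summarized in a small sketch. This is a hypothetical illustration of the outcome categories listed above, not the tool's actual implementation; the enum members and follow-up texts are invented for the example:

```python
from enum import Enum, auto

class Identification(Enum):
    """Possible outcomes of Questionnaires 1A/1B (names are illustrative)."""
    AI_SYSTEM = auto()              # in scope of the AI Act
    HIGH_IMPACT_ALGORITHM = auto()  # in scope of the Dutch Algorithm Register
    SADM = auto()                   # Article 22 GDPR (Questionnaire 1B only)
    OTHER = auto()                  # outside all three scopes (1B only)

def next_step(identification: Identification) -> str:
    """Return the follow-up action for an identification outcome."""
    if identification is Identification.AI_SYSTEM:
        return "continue with Questionnaire 2 (risk classification)"
    if identification is Identification.HIGH_IMPACT_ALGORITHM:
        return "register in the Dutch Algorithm Register; apply control measures"
    if identification is Identification.SADM:
        return "apply Article 22 GDPR control measures"
    return "no AI Act / Algorithm Register / Article 22 GDPR obligations"

print(next_step(Identification.AI_SYSTEM))
```

The key point the sketch captures is that only the "AI system" outcome leads into Questionnaire 2; the other outcomes end the flow with their own obligations.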

Development, usage and source code

The AI and Algorithms Qualification Toolkit (AI AQT) is developed in collaboration with the Municipality of Amsterdam. The source code of the tool can be found on GitHub and can be (re)used under the EUPL-1.2 license. Among others, the AI AQT is used by:

Gemeente Amsterdam
De Nederlandsche Bank
Gemeente Den Haag

Documentation of the AI AQT questionnaires

Considerations and choices made during development of the questionnaires, relating to the AI Act, the European Commission's guidelines on the definition of an AI system, Article 22 GDPR with the accompanying guidelines of the EDPB and the Dutch DPA (AP), and the Dutch Algorithm Register Guidelines, are described in ‘Implementation of the AI Act – Definition of an AI System’. This policy briefing explains why certain elements of the European Commission's guidelines on the definition of an AI system contradict the legislative text of the AI Act.

Questionnaire 1B (in beta) will be covered soon in the documentation.


Examples and explainers

Using examples, we explain how legal definitions apply to common algorithmic systems.

Risk classification


10 examples of (non) AI systems


Rule-based algorithms under the AI Act


Definition of an AI system under the AI Act


Preventing prohibited automated decision-making


Flowchart Questionnaire 1A - Identification of AI systems and high-impact algorithms


Flowchart Questionnaire 1B - Identification of AI systems, high-impact algorithms and solely automated decision-making


The above flowcharts for Questionnaire 1B are simplified representations, each showing only the logic needed to assess whether a system falls under one of the legal definitions considered in isolation (AI system, high-impact algorithm or sADM). In an actual run of the questionnaire the questions overlap, and users encounter each question only once. A complete flowchart of Questionnaire 1B with all paths and outcomes can be found here.
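The deduplication described above can be sketched in a few lines. This is a minimal, hypothetical illustration of the mechanism, assuming each legal definition is a checklist of yes/no questions; the question names and definition criteria are invented, not the tool's actual question set:

```python
# Each definition lists the questions it needs; names are invented examples.
DEFINITIONS = {
    "AI system": ["infers_from_input", "generates_output", "autonomy"],
    "high-impact algorithm": ["automated", "affects_citizens"],
    "sADM": ["automated", "solely_automated", "legal_or_similar_effect"],
}

def run_questionnaire(ask):
    """Ask every distinct question exactly once, then score each definition."""
    answers = {}
    for questions in DEFINITIONS.values():
        for q in questions:
            if q not in answers:  # overlapping questions are asked only once
                answers[q] = ask(q)
    # In this toy model, a definition applies only if all its answers are "yes".
    return {name: all(answers[q] for q in qs) for name, qs in DEFINITIONS.items()}

# Example: a fully automated decision with legal effect, but no inference.
sample = {"automated": True, "solely_automated": True,
          "legal_or_similar_effect": True, "affects_citizens": True,
          "infers_from_input": False, "generates_output": True, "autonomy": False}
result = run_questionnaire(lambda q: sample[q])
print(result)  # sADM and high-impact algorithm apply; "AI system" does not
```

Note that the question "automated" appears in two checklists but is asked only once, mirroring how the combined Questionnaire 1B merges the three isolated flowcharts.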

Flowchart Questionnaire 2 – Risk classification of AI systems


Newsletter

Stay up to date about our work by signing up for our newsletter.


Building public knowledge for ethical algorithms