Ensuring legal compliance when using algorithmic systems requires correctly qualifying in-scope systems and correctly categorizing their risks. AI AQT consists of dynamic questionnaires that help with:
1 – Identification: AI systems, solely automated decision-making and high-impact algorithms.
2 – Risk classification: Prohibited AI systems, high-risk AI systems, additional transparency requirements and General Purpose AI (GPAI) models.
The tool is developed open-source and can be used free of charge within your organization for AI governance. All outcomes of the questionnaires are shown in the figure below. The flowcharts of the questionnaires can also be found below.
Applying legal definitions in practice raises difficult questions. For example: What features distinguish AI from other algorithmic systems? And what criteria determine the risk category of an AI system? Standardized questionnaires can help govern algorithms efficiently: they can be shared across the entire organization and processed centrally. AI AQT provides a clear and user-friendly approach and serves as a building block for compliance. Identification and risk classification are relevant not only in the context of the AI Act, but also in relation to the GDPR and additional policy instruments, such as the Algorithm Register Guidelines of the Dutch government.
Infographic – Flow of the AI AQT questionnaires

The outcomes of the tool are displayed in the figure below. The following categories are distinguished:
Questionnaire 1:
- AI systems: Fall within the scope of the AI Act. Additional control measures are legally required, depending on the risk category. Continue with Questionnaire 2.
- High-impact algorithms: Fall outside the scope of the AI Act but inside the scope of the Dutch Algorithm Register. Additional control measures are needed.
- Solely automated decision-making (sADM): Falls within the scope of Article 22 of the GDPR. Additional control measures are needed.
- Other systems: Fall outside the scope of the AI Act, the Dutch Algorithm Register and Article 22 of the GDPR.
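The four outcome categories of Questionnaire 1 are mutually exclusive in the order listed above. As a rough illustration, the mapping from answers to outcomes can be sketched in Python. This is a simplified sketch with hypothetical names, not the tool's actual logic: the real questionnaire derives each property from many individual questions.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Simplified, hypothetical summary of Questionnaire 1 answers."""
    is_ai_system: bool            # meets the AI Act definition of an AI system
    is_high_impact: bool          # meets the Dutch Algorithm Register criteria
    is_solely_automated_dm: bool  # decisions without meaningful human involvement (Art. 22 GDPR)

def classify_q1(profile: SystemProfile) -> str:
    """Map Questionnaire 1 answers to one of the four outcome categories."""
    if profile.is_ai_system:
        return "AI system"  # in scope of the AI Act; continue with Questionnaire 2
    if profile.is_high_impact:
        return "High-impact algorithm"  # in scope of the Dutch Algorithm Register
    if profile.is_solely_automated_dm:
        return "Solely automated decision-making"  # Article 22 GDPR applies
    return "Other system"  # outside all three scopes
```

For example, a system that is not AI but does meet the Algorithm Register criteria lands in the "High-impact algorithm" category.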
Questionnaire 2:
- Prohibited AI systems: Usage of this type of AI system is prohibited in the European Union. More information about this category is provided by the Dutch government.
- High-risk AI systems: Additional control measures, specified through harmonized standards, are required.
- Additional transparency requirements: This type of AI system is subject to additional transparency obligations, but no further control measures are required.
- General Purpose AI (GPAI): Additional requirements apply to the provider.
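Unlike Questionnaire 1, the Questionnaire 2 categories are not fully mutually exclusive: the GPAI category concerns the model and its provider, and can apply alongside a system-level risk category. A hedged sketch, again with hypothetical inputs rather than the tool's real question set:

```python
def classify_q2(practice_prohibited: bool,
                annex_iii_high_risk: bool,
                transparency_only: bool,
                is_gpai_model: bool) -> list[str]:
    """Map simplified Questionnaire 2 answers to AI Act risk categories.

    Returns a list because GPAI provider obligations can apply in
    addition to a system-level risk category.
    """
    outcomes = []
    if practice_prohibited:
        outcomes.append("Prohibited AI system")
    elif annex_iii_high_risk:
        outcomes.append("High-risk AI system")
    elif transparency_only:
        outcomes.append("Additional transparency requirements")
    if is_gpai_model:
        outcomes.append("GPAI model")
    return outcomes or ["No additional AI Act requirements"]
```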

The first version of AI AQT was developed in collaboration with the Municipality of Amsterdam and has since been further developed by Algorithm Audit. The source code of the tool can be found on GitHub and can be (re)used under the EUPL-1.2 license. AI AQT is used by the following organizations:
The 'AI AQT Documentation' describes the considerations and choices made during development of the questionnaires, relating to the AI Act, the European Commission's guidelines on the definition of an AI system, Article 22 GDPR together with the guidelines of the EDPB and the Dutch DPA (AP), and the Dutch Algorithm Register Guidelines. The policy briefing elaborates on why the European Commission's guidelines blur the interpretation of the AI system definition.
Using examples, we explain how the AI system definition and risk categories from the AI Act apply to real-world algorithmic applications.
Preventing prohibited automated decision-making
The above flowcharts for Questionnaire 1 are simplified representations of the logic needed to assess whether an algorithmic system falls under one of the legal definitions: AI system, solely automated decision-making or high-impact algorithm. When using the questionnaire, users encounter each question only once. A complete flowchart of Questionnaire 1 with all paths and outcomes can be found here.
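The property that each question is asked at most once, even when flowchart paths revisit a node, can be illustrated by caching answers during the traversal. The data structure and function below are illustrative assumptions, not the tool's implementation:

```python
def run_questionnaire(flowchart, start, ask):
    """Walk a question flowchart, caching answers so each question
    is asked at most once.

    `flowchart` maps (question, answer) -> next node; any node that
    never appears as a question is treated as a final outcome.
    Hypothetical structure; the real tool's data model may differ.
    """
    questions = {q for q, _ in flowchart}
    answers = {}
    node = start
    while node in questions:
        if node not in answers:          # ask only on first encounter
            answers[node] = ask(node)
        node = flowchart[(node, answers[node])]
    return node
```

A usage sketch: with a two-question flowchart and an `ask` callback that records calls, the callback fires once per distinct question regardless of how the paths are wired.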




