AI-system risk classification

Organization-wide algorithm management policy requires pragmatic frameworks, and a risk-oriented approach often plays a leading role in them. Such an approach is in line with national and international legislation for algorithms, such as the AI Act. Below you will find an example of an algorithm scorecard for public sector organizations. Based on this scorecard, algorithms can be classified as high, medium, or low risk. The results of the scorecard can be downloaded as a PDF.

1. Data

Are there any agreements regarding data delivery?

2. Personal data

To what extent are personal data used in the algorithm?

3. Consequences for citizens I

Is the algorithm part of a decision-making process that has legal consequences for citizens?

4. Consequences for citizens II

Does the algorithm affect (or threaten to affect) a fundamental right? If so, how severely is this fundamental right affected?

5. Population size

What is the size of the population to which the algorithm is applied?

6. Third parties

Are the outcomes of the algorithm shared with third parties (e.g. citizens, other departments or supervisory parties)?

7. Financial/reputational damage

What is the (estimated) financial or reputational damage if the algorithm contains errors that lead to incorrect outcomes?

8. Autonomy

Are decisions made based on the outcomes of the AI system? If so, how?

9. Validation

Are the results of the algorithm validated?
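To illustrate how answers to questions like these could be aggregated into a high/medium/low classification, here is a minimal sketch. The question keys, 0–2 scoring scale, weights, and thresholds below are all hypothetical assumptions for illustration; they are not the scorecard's actual scoring rules.

```python
def classify_risk(answers: dict[str, int]) -> str:
    """Classify an algorithm as low, medium, or high risk.

    `answers` maps each scorecard question to a score from
    0 (no risk indication) to 2 (strong risk indication).
    """
    total = sum(answers.values())
    max_total = 2 * len(answers)  # highest possible total score
    ratio = total / max_total
    if ratio >= 0.6:   # hypothetical high-risk threshold
        return "high"
    if ratio >= 0.3:   # hypothetical medium-risk threshold
        return "medium"
    return "low"


# Example: one hypothetical answer per scorecard question.
example = {
    "personal_data": 2,       # special-category personal data are used
    "legal_consequences": 1,  # part of decision-making, with human review
    "fundamental_rights": 1,
    "population_size": 2,     # applied to a large population
    "third_parties": 0,
    "damage": 1,
    "autonomy": 1,
    "validation": 0,          # results are validated
}
print(classify_risk(example))  # → medium
```

A weighted sum could replace the plain sum if some questions (e.g. fundamental rights) should dominate the classification.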

