Public standard profiling algorithms
Based on our case-based experience with auditing risk profiling systems, Algorithm Audit publishes a public standard providing qualitative and quantitative safeguards for the responsible use of this type of algorithm.
This standard provides a concise step-by-step guide for the responsible use of profiling algorithms in the public sector, but can also be applied in the private sector.
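The public standard itself defines the applicable safeguards. Purely as an illustration of what a quantitative safeguard can look like in practice, the sketch below compares a profiling system's selection rates across demographic groups; the field names, the 80% rule of thumb, and the data are assumptions for this example and are not prescribed by the standard.

```python
# Illustrative sketch (not taken from the published standard): check whether
# a profiling system's selection rates differ disproportionately across groups.
from collections import defaultdict


def selection_rate_ratio(records, group_key="group", selected_key="selected"):
    """Return per-group selection rates and the min/max rate ratio.

    `records` is assumed to be a list of dicts such as
    {"group": "A", "selected": True}; the field names are placeholders.
    """
    totals, selected = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        selected[r[group_key]] += int(bool(r[selected_key]))

    rates = {g: selected[g] / totals[g] for g in totals}
    max_rate = max(rates.values(), default=0.0)
    ratio = min(rates.values()) / max_rate if max_rate else float("nan")
    return rates, ratio


# Hypothetical selection outcomes of a rule-based profiling system.
records = [
    {"group": "A", "selected": True}, {"group": "A", "selected": False},
    {"group": "A", "selected": True}, {"group": "B", "selected": False},
    {"group": "B", "selected": False}, {"group": "B", "selected": True},
]

rates, ratio = selection_rate_ratio(records)
print(rates, ratio)

# Flag for review if the least-selected group's rate is below 80% of the
# most-selected group's rate (a common rule of thumb, not a standard threshold).
if ratio < 0.8:
    print("Selection rates differ substantially across groups; review the profiling rules.")
```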
Deployers of profiling systems should not wait for the European harmonised standards being developed in support of the AI Act to regulate these systems, because:
- AI Act standards have a broad scope (namely: all types of AI systems) and are therefore ineffective at regulating specific algorithmic applications, such as profiling;
- Classic rule-based profiling systems, which are based on rules defined solely by natural persons (see recital 12 of the AI Act), do not fall under the definition of an AI system and are therefore not in scope of the AI Act;
- AI Act standards will only be available behind the paywalls of standardisation organisations, which impedes public knowledge building about responsible algorithms.
In the Netherlands, ‘simple’ rule-based profiling systems have often been deployed irresponsibly over the last 15 years. This public standard aims to provide concise risk management measures that can be implemented today to deploy profiling algorithms more responsibly.