Featured
Upcoming events

Presentation 'Local-only tools for AI validation', The Netherlands Platform AI & Government

Presentation 'AI x Governance & Regulation - The EU AI Act is here', Big Data Republic and Kickstart AI
Presentation 'A Public Standard for Auditing Risk Profiling Algorithms', Audit Analytics Summit 2025, Nyenrode Business University and Utrecht University
Expertise
Sociotechnical evaluation of generative AI
Evaluating Large Language Models (LLMs) and other general-purpose AI models for robustness, privacy and AI Act compliance. Based on real-world examples, we are developing a framework to analyze content filters, guardrails and user interaction design choices. Learn more about our evaluation framework.
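As an illustration of the kind of check such a framework can include, the sketch below probes a model's content filter with a small set of test prompts and records whether each reply contains an expected refusal. The prompt set, refusal markers and `generate` callable are hypothetical placeholders and not part of our actual evaluation framework.

```python
# Minimal sketch of a content-filter probe (illustrative only; the prompts,
# refusal markers and `generate` callable are hypothetical placeholders).
from typing import Callable, Dict, List

REFUSAL_MARKERS = ["cannot help", "can't assist", "not able to provide"]

def probe_content_filter(generate: Callable[[str], str],
                         test_prompts: List[str]) -> List[Dict[str, object]]:
    """Send test prompts to a model and record whether each reply was refused."""
    results = []
    for prompt in test_prompts:
        reply = generate(prompt)
        refused = any(marker in reply.lower() for marker in REFUSAL_MARKERS)
        results.append({"prompt": prompt, "refused": refused, "reply": reply})
    return results

if __name__ == "__main__":
    # Dummy model that refuses everything, so the sketch runs without any API.
    dummy_model = lambda prompt: "Sorry, I cannot help with that request."
    for row in probe_content_filter(dummy_model, ["test prompt A", "test prompt B"]):
        print(row["prompt"], "->", "refused" if row["refused"] else "answered")
```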
AI Act implementation and standards
Our open-source AI Act Implementation Tool helps organizations identify AI systems and assign the right risk category. As a member of the Dutch and European standardization organisations NEN and CEN-CENELEC, Algorithm Audit monitors and contributes to the development of standards for AI systems. See also our public knowledge base on standardization.
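For illustration, the much-simplified sketch below encodes coarse risk-category assignment along the AI Act's broad structure (prohibited practices, high-risk, transparency obligations, minimal risk). The questions and decision logic are hypothetical reductions and are not taken from the AI Act Implementation Tool itself.

```python
# Much-simplified, illustrative sketch of AI Act risk categorization.
# The three yes/no questions below are a hypothetical reduction, not the
# actual AI Act Implementation Tool.
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited practice (Art. 5)"
    HIGH_RISK = "high-risk (Annex I / Annex III)"
    TRANSPARENCY = "limited risk: transparency obligations"
    MINIMAL = "minimal risk"

def assign_risk_category(prohibited_practice: bool,
                         annex_use_case: bool,
                         interacts_with_people_or_generates_content: bool) -> RiskCategory:
    """Map three yes/no answers to a coarse AI Act risk category."""
    if prohibited_practice:
        return RiskCategory.PROHIBITED
    if annex_use_case:
        return RiskCategory.HIGH_RISK
    if interacts_with_people_or_generates_content:
        return RiskCategory.TRANSPARENCY
    return RiskCategory.MINIMAL

if __name__ == "__main__":
    # Example: a chatbot that is neither a prohibited practice nor an Annex use case.
    print(assign_risk_category(False, False, True).value)
```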
Bias analysis
We evaluate algorithmic systems from both a qualitative and a quantitative perspective. Besides expertise in data analysis and AI engineering, we have in-depth knowledge of legal frameworks concerning non-discrimination, automated decision-making and organizational risk management. See our public standards on how to deploy algorithmic systems responsibly.
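As an example of one quantitative check used in bias analysis, the sketch below computes selection rates per group and the demographic parity difference between groups. The data and the flagging threshold are hypothetical and illustrate only the type of metric, not our full audit methodology.

```python
# Illustrative sketch: selection rates per group and demographic parity
# difference. The example data and the 0.1 flagging threshold are hypothetical.
from collections import defaultdict
from typing import Dict, List, Tuple

def selection_rates(records: List[Tuple[str, int]]) -> Dict[str, float]:
    """records: (group, decision) pairs, with decision 1 = selected, 0 = not selected."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        selected[group] += decision
    return {group: selected[group] / totals[group] for group in totals}

def demographic_parity_difference(rates: Dict[str, float]) -> float:
    """Largest gap in selection rate between any two groups."""
    return max(rates.values()) - min(rates.values())

if __name__ == "__main__":
    data = [("A", 1), ("A", 0), ("A", 1), ("B", 0), ("B", 0), ("B", 1)]
    rates = selection_rates(data)
    gap = demographic_parity_difference(rates)
    print(rates, f"gap={gap:.2f}", "flag" if gap > 0.1 else "ok")
```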
Distinctive in
Multi-disciplinary expertise
We are pioneering the future of responsible AI by bringing together expertise in statistics, software development, law and ethics. Our work is widely read throughout Europe and beyond.
Not-for-profit
We work closely with private and public sector organisations, regulators and policy makers to foster knowledge exchange about responsible AI. Working as a nonprofit suits our activities and goals best.
Public knowledge building
We make our reports, software and best practices publicly available, contributing to collective knowledge on the responsible deployment and use of AI. We prioritize public knowledge building over protecting our intellectual property.