Comparative review of 10 FRIAs

We have conducted a comparative review of 10 existing FRIA frameworks, evaluating them against 12 requirements across legal, organizational, technical, and social dimensions.

Our assessment reveals a sharp divide in the length and completeness of existing FRIAs. For instance:

🩺 Many FRIAs do not incorporate legal instruments that address the core of normative decision-making, such as the objective justification test, which is particularly important when an AI system segments users.

🔢 None of the FRIAs connect accuracy metrics to an assessment of the conceptual soundness of an AI system's statistical methodology, such as (hyper)parameter sensitivity testing for ML and DL methods, or statistical hypothesis testing for risk assessment methods (see the sketch of such a sensitivity test below this list).

🫴🏽 Moreover, the technocratic approach taken by most FRIAs does not empower citizens to meaningfully participate in shaping the technologies that govern them. Stakeholder groups should be more closely involved in the normative decisions that underpin data modelling.
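
To illustrate what such a soundness check could look like in practice, below is a minimal Python sketch of (hyper)parameter sensitivity testing. The dataset, model, and parameter range are illustrative assumptions, not drawn from any of the reviewed frameworks: the idea is simply that if reported accuracy swings widely across plausible hyperparameter settings, the metric depends heavily on tuning choices and deserves scrutiny in a FRIA.

```python
# Minimal sketch of (hyper)parameter sensitivity testing for an ML model.
# Everything here (synthetic data, random forest, max_depth sweep) is an
# illustrative assumption, not part of any reviewed FRIA framework.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the audited model's training data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Sweep one hyperparameter and record cross-validated accuracy.
# Large variation across plausible settings signals that the reported
# accuracy metric is sensitive to tuning choices.
for max_depth in [2, 4, 8, 16, None]:
    model = RandomForestClassifier(max_depth=max_depth, random_state=0)
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"max_depth={max_depth}: mean={scores.mean():.3f}, std={scores.std():.3f}")
```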

Are you a frequent user or a developer of a FRIA? Please reach out to info@algorithmaudit.eu; we are happy to share insights based on our case-based AI auditing experience.

Newsletter

Stay up to date about our work by signing up for our newsletter
