LLM validator – Responsible AI

Part-time, 2-4 hours per week, voluntary

Summary

Are you interested in contributing to responsible use of Large Language Models (LLMs) for public information services? Join Algorithm Audit’s 6-month part-time LLM validation cohort, consisting of 3-5 AI experts. In this project, you will help validate the infrastructure of an LLM pilot developed by the Dutch judiciary. The system is designed to inform citizens about their legal rights and options when facing disputes. This project offers a unique opportunity to assess an existing codebase and apply your expertise to ensure robust deployment of LLMs. The outcome of this project will be a practical and publicly available validation framework for LLM systems used for public information services.

What is Algorithm Audit?

Algorithm Audit is a European knowledge platform for responsible AI. We are a young, tech-savvy NGO working on ethical issues that arise in real-world algorithms. We bring together experts from various professional backgrounds to build bottom-up public knowledge on how to use AI in a responsible manner.

Project activities

You will work remotely on assigned tasks and share your insights with Algorithm Audit’s executive team managing the project. Depending on your skill set, your responsibilities may include evaluating an existing codebase, analyzing documentation and performing robustness testing on guardrail methodologies. Familiarity with cloud-based LLM solutions and widely used libraries, such as ChromaDB for storing and retrieving vector representations and LangChain for orchestrating retrieval and chunking strategies, is highly advantageous. Additionally, the project involves gathering insights from other public sector LLM pilots to promote the broader adoption of best practices.
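To give a flavour of the concepts involved: the sketch below is a deliberately simplified, hypothetical illustration of the retrieval step in a retrieval-augmented pipeline, not the pilot's actual code. It chunks a document, embeds chunks as toy bag-of-words vectors and retrieves the chunk closest to a query by cosine similarity; a production system would instead use a vector store such as ChromaDB with learned embeddings.

```python
from collections import Counter
import math

def chunk(text: str, size: int = 8) -> list[str]:
    """Naive fixed-size chunking strategy: split into windows of `size` words."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    """Toy 'embedding': lowercase bag-of-words counts, punctuation stripped."""
    return Counter(w.lower().strip(".,?!") for w in text.split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str]) -> str:
    """Return the chunk most similar to the query."""
    q = embed(query)
    return max(chunks, key=lambda c: cosine(q, embed(c)))

# Hypothetical document in the spirit of the pilot (invented for illustration).
doc = ("Citizens can appeal a decision within six weeks. "
       "Legal aid is available for low-income households. "
       "Small claims under 25000 euros go to the subdistrict court.")
chunks = chunk(doc)
print(retrieve("Where do small claims go?", chunks))
```

Robustness testing of such a pipeline would probe exactly the weak points this toy version makes visible: how the chunking strategy splits sentences, and how retrieval behaves when a query matches several chunks weakly.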

What will you do as project team member?

  • Dedicate 2-4 hours per week from 01-07-2025 to 31-01-2026;
  • Execute work items as discussed with Algorithm Audit’s team coordinating this project;
  • Present your work, first internally within the project team and, in a later phase of the project, to the international AI auditing community during (online) events, workshops and presentations.

Candidate profile

  • Professional experience with LLM applications, software engineering, data science, machine learning, or similar; OR
  • PhD candidate in one of the following fields: computer science, engineering, machine learning, statistics, mathematics, economics, or similar, with a proven track record; AND
  • 3+ years of development experience in Python, JavaScript or similar;
  • Methodological expertise relating to LLMs and/or NLP;
  • Familiarity with packages commonly used for LLM applications, e.g., LangChain and ChromaDB.

Practicalities

  • No reimbursement available;
  • Apply before Wednesday June 11th, 23:59 CET.

Our approach to diversity, equity and inclusion

Algorithm Audit’s commitment is reflected in its core mission to strengthen the fair and non-discriminatory deployment of AI in all parts of society. We build and share public knowledge about discriminatory bias and foster equitable algorithms and methods for data analysis. In all our work, special attention is paid to the inclusion of people from various cultural and gender backgrounds.

Application form

CV*

Note about processing personal data
Submitted data will only be processed as part of the application process. Your data will be securely stored and deleted after the procedure is completed.*

* required

Newsletter

Stay up to date about our work by signing up for our newsletter

Building public knowledge for ethical algorithms