Predicting irresponsible driving

Algoprudence identification code

ALGO:AA:2025:01

Key takeaways normative advice commission

  • Model validity is fundamental
    The algorithm must be altered to specifically predict driving behavior that causes damage, not general platform misuse. As with any risk prediction model, alignment between the training data and the intended purpose is a critical prerequisite.
  • Balance monitoring with user autonomy
    Monitoring irresponsible driving to reduce damage costs is a legitimate business interest but must not become excessive surveillance or veer into paternalistic advice about general driving habits.
  • Meaningful transparency required
    Users need specific explanations about what driving behavior triggered the warning and clear guidance for improvement, not generic warnings or confusing technical jargon that means nothing to the average driver.
  • Careful variable selection
    Speeding has obvious safety implications, but acceleration and similar variables are trickier. They depend on context and may just reflect personal driving preferences. Before including them, there must be solid evidence linking them to actual damage risk, not just different driving styles or environments.
  • Human oversight essential
    Human analysts currently override 50-60% of the model’s recommendations, demonstrating real discretion rather than rubber-stamping. This meaningful human oversight must continue.

Summary advice

The commission judges that algorithmic risk prediction for identifying irresponsible driving behavior should take place only under strict conditions and should be weighed against alternative methods of reducing damage. The validity of the prediction model is a critical prerequisite, and hence the current mismatch between the stated objective (predicting irresponsible driving) and the target variable used in training (user bans for a wide variety of misuse) must first be resolved. The commission emphasizes that while monitoring to reduce damage costs may be a legitimate business interest, it must not become excessive surveillance or be used for paternalistic feedback on users’ general driving style. Users should receive specific, meaningful explanations about which driving behaviors triggered warnings, not generic notifications or lists of technical variables that users cannot comprehend. Variable selection must be carefully justified: speeding is the most defensible variable, while contextual behaviors such as fast acceleration or hard braking require attention to driving context and solid evidence of how they relate to damage risk. The commission recommends maintaining substantial human review of algorithmic recommendations, both to mitigate the risk that warnings are sent unduly and to facilitate appeal and redress by users.

Source of case

The case originates from an (anonymized) car sharing platform, which has cooperated with Algorithm Audit to provide details about the case. Both the commission and Algorithm Audit have conducted this study independently of the car sharing platform. Neither the investigation nor the advice has been commissioned or funded by the platform.

Presentation

This case study was published during UNESCO’s Expert roundtable II: Capacity building for AI supervisory authorities in Paris on September 30, 2025.

Problem statement and advice document


Normative advice commission

  • Cynthia Liem, Associate Professor at the Multimedia Computing Group, TU Delft
  • Hilde Weerts, Assistant Professor Fair and Explainable Machine Learning, TU Eindhoven
  • Joris Krijger, AI & Ethics Officer, De Volksbank
  • Maaike Harbers, Professor of Applied Sciences (lector) Artificial Intelligence & Society, Rotterdam University of Applied Sciences
  • Monique Steijns, Founder The People’s AI agency
  • Anne Rijlaarsdam, user of the car sharing platform

React to this normative judgement

Your reaction will be sent to the team maintaining algoprudence. The team will review your response and, if it complies with the guidelines, it will be placed in the Discussion & debate section above.

Newsletter

Stay up to date about our work by signing up for our newsletter


Building public knowledge for ethical algorithms