Universiteit Leiden



SAILS Lunch Time Seminar: Andrei Poama

Monday 5 February 2024
Online only

AI-Assisted Penal Sentencing: The Epistemic Free-Riding Objection

Several scholars (Laqueur & Copus 2017; Leibovitch 2017; Chiao 2018) argue that machine-learning algorithms can and ought to be used by judges at the sentencing stage to predict the statistically typical (i.e., modal) sentencing decision taken by other (actual or counterfactual) judges in relevantly similar cases, and to adjust their individual sentences to cohere with the latter. Unlike currently deployed AI-informed tools, the proposal here is to use algorithms to predict judicial, not offender, behavior. Furthermore, unlike existing static actuarial tables or sentencing guidelines and grids, such algorithms proceed dynamically – viz., by updating sentence predictions based on the decisions taken by individual judges. The contention is that these algorithmic tools can secure more consistency among sentencing decisions while preserving judges’ substantive commitment to reasonably defensible penal principles.

The argument of this paper is threefold. First, it argues that the decision-making situation that such proposals would instantiate is one that descriptively satisfies the conditions of the Condorcet Jury Theorem (CJT) – viz., one where the average competence of decision-makers is better than random, where decision-makers share the same goal, and where their judgments are independent. Because of this, the proposal seems epistemically desirable. Second, I draw on List & Pettit (2004) and Dunn (2018) to further argue that, insofar as they believe that sentencing algorithms create situations that satisfy the CJT, judges are justified in adjusting their sentencing decisions to statistically typical ones. Insofar as this happens, and because sentencing is a temporally deployed process, judges’ beliefs that the CJT is satisfied would also rationally motivate them to free-ride on other judges’ decisions, and thereby eventually prompt a situation that violates the independent-judgment condition posited by the CJT. Thus, envisaged diachronically, the proposed algorithms are an epistemic liability. Epistemic free-riding, I note, is both different from and rationally more difficult to address than algorithmic complacency (Zerilli 2021) or automation bias (Kazim & Tomlinson 2023). Third, I examine three institutional set-ups that could contain the epistemic free-riding, and conclude that none of them is simultaneously feasible and desirable at the level of penal sentencing practice.
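The abstract's core mechanism, that majority accuracy under the CJT rises with the number of independent decision-makers but falls back toward individual competence once decision-makers copy one another, can be sketched in a small simulation. All parameters here (101 judges, 0.6 individual competence, 60 free-riders) are illustrative assumptions, not figures from the paper:

```python
import random

def majority_correct(n_judges, competence, trials=20_000, copiers=0, seed=0):
    """Estimate the probability that a majority vote is correct.

    Each independent judge votes correctly with probability `competence`;
    `copiers` judges free-ride by copying the first judge's decision
    instead of judging independently.
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        # Independent judgments: True means a correct decision.
        votes = [rng.random() < competence for _ in range(n_judges - copiers)]
        # Free-riders duplicate one judge's decision, breaking independence.
        votes += [votes[0]] * copiers
        if sum(votes) > n_judges / 2:
            correct += 1
    return correct / trials

# Fully independent judges: majority accuracy far exceeds 0.6.
independent = majority_correct(101, 0.6)
# Heavy free-riding: accuracy collapses toward a single judge's 0.6.
free_riding = majority_correct(101, 0.6, copiers=60)
```

Under independence the majority is almost always right; with 60 of the 101 "judges" copying one decision, the group is effectively no more reliable than that single judge, which is the independence violation the abstract warns about.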


  1. Boland, P. J. (1989). Majority systems and the Condorcet jury theorem. Journal of the Royal Statistical Society: Series D (The Statistician), 38(3), 181-189
  2. Chiao, V. (2018). Predicting proportionality: The case for algorithmic sentencing. Criminal Justice Ethics, 37(3), 238-261
  3. Dunn, J. (2018). Epistemic free riding. In H. K. Ahlstrom-Vij & J. Dunn (Eds.), Epistemic Consequentialism. Oxford University Press
  4. Kazim, T., & Tomlinson, J. (2023). Automation bias and the principles of judicial review. Judicial Review, 28(1), 9-16
  5. List, C., & Goodin, R. E. (2001). Epistemic democracy: Generalizing the Condorcet jury theorem. Journal of Political Philosophy, 9(3), 277-306
  6. List, C., & Pettit, P. (2004). An epistemic free-riding problem? In P. Catton & G. Macdonald (Eds.), Karl Popper: Critical Appraisals (pp. 138-168). Routledge
  7. Zerilli, J. (2021). Algorithmic sentencing: Drawing lessons from human factors research. In J. Ryberg & J. V. Roberts (Eds.), Sentencing and Artificial Intelligence (pp. 165-183). (Studies in Penal Theory and Philosophy). Oxford University Press

Join us!

The SAILS Lunch Time Seminar is an online event, but it is not publicly accessible in real time. Please click the link below to register for our mailing list and receive participation links for our Lunch Time Seminars.

Click here to register!