International Journal of Advanced Artificial Intelligence Research


ALGORITHMIC INEQUITY IN JUSTICE: UNPACKING THE SOCIETAL IMPACT OF AI IN JUDICIAL DECISION-MAKING

Authors

  • Dr. Jakob Schneider, Institute for Ethics in Artificial Intelligence, Technical University of Munich, Munich, Germany

DOI:

https://doi.org/10.55640/ijaair-v02i01-02

Keywords:

Algorithmic Bias, Judicial Decision-Making, AI Ethics, Algorithmic Accountability

Abstract

The integration of Artificial Intelligence (AI) in judicial decision-making processes has introduced both opportunities and significant concerns, particularly regarding fairness and transparency. This paper critically examines the phenomenon of algorithmic inequity within legal systems, focusing on how biased data, opaque algorithms, and lack of accountability can perpetuate or even amplify existing social injustices. Through interdisciplinary analysis, the study explores the structural factors contributing to algorithmic bias, its implications for marginalized communities, and the ethical dilemmas facing policymakers and technologists. Case studies of real-world AI applications in sentencing, parole, and risk assessment highlight the societal consequences of uncritical AI adoption in the justice system. The paper concludes with recommendations for fostering algorithmic accountability, inclusive data governance, and human oversight to ensure equitable and trustworthy judicial outcomes.

References

Cofone, I. (2020). AI and judicial decision-making. SSRN.

Medvedeva, M., Wieling, M., & Vols, M. (2020). The danger of reverse-engineering of automated judicial decision-making systems. arXiv.

Kleinberg, J., Ludwig, J., Mullainathan, S., & Sunstein, C. R. (2019). Discrimination in the age of algorithms. arXiv.

Alon-Barkat, S., & Busuioc, M. (2021). Human–AI interactions in public sector decision-making: “Automation bias” and “selective adherence” to algorithmic advice. arXiv.

Ferrer, X., van Nuenen, T., Such, J. M., Coté, M., & Criado, N. (2020). Bias and discrimination in AI: A cross-disciplinary perspective. arXiv.

“Bias in AI-supported decision making: Old problems, new challenges.” (2025). International Journal for Court Administration.

Ho, A., et al. (2025). Public perceptions of judges’ use of AI tools in courtroom decision-making. Behavioral Sciences, 15(4), 476.

“Bias in adjudication: Investigating the impact of artificial intelligence.” (2025). Journal of Global Justice Studies.

“Artificial intelligence in judicial adjudication: Semantic biasness in legal judgements.” (2024). ScienceDirect.

“Artificial intelligence and judicial decision-making: Evaluating the role of AI in debiasing.” (2023). ResearchGate.

“Artificial intelligence at the bench: Legal and ethical challenges of generative AI.” (2025). Data & Policy.

“The risk of discrimination in AI-powered judicial decision.” (2025). TheLegalWire.ai.

“The digital ‘To Kill a Mockingbird’: AI biases in predictive judicial support.” (2024). CWSL Law Review.

“Content analysis of judges’ sentiments toward AI risk-assessment tools.” (2023). Criminology, Criminal Justice, Law & Society (CCJLS).

Ferrara, E. (2024). Fairness and bias in artificial intelligence: A brief survey of sources, impacts, and mitigation strategies. Sci, 6(1), 3.

Esthappan, S. (2024). Judges using algorithms to justify decisions: Study on pretrial risk assessment. Social Problems.

Proudman, C., & herEthical AI. (2024). Victim-blaming language in family court judges. The Guardian.

Reform, J. D. (2023). AI tells lawyers how judges are likely to rule: Pre/Dicta analysis. Axios.

Strang, D., & Buting, J. (2025). Risks of jurors using ChatGPT in trials. The Sun.

Kahneman, D., Sibony, O., & Sunstein, C. R. (2021). Noise: A flaw in human judgment. Little, Brown Spark.

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. ProPublica.

Dressel, J., & Farid, H. (2018). The accuracy, fairness, and limits of predicting recidivism. Science Advances, 4(1), eaao5580.

Diakopoulos, N. (2016). Make algorithms accountable. The New York Times.

Kroll, J. A., Huey, J., Barocas, S., Felten, E. W., Reidenberg, J. R., Robinson, D. G., & Yu, H. (2017). Accountable algorithms. University of Pennsylvania Law Review, 165, 633.

Mosier, K., Roudsari, A., & Wyatt, J. C. (2012). Automation bias and errors: Are teams better than individuals? Journal of the American Medical Informatics Association.

Published

2025-01-17

How to Cite

ALGORITHMIC INEQUITY IN JUSTICE: UNPACKING THE SOCIETAL IMPACT OF AI IN JUDICIAL DECISION-MAKING. (2025). International Journal of Advanced Artificial Intelligence Research, 2(01), 7-12. https://doi.org/10.55640/ijaair-v02i01-02
