Books

book (Featured)
2016

Weapons of Math Destruction

Cathy O’Neil

The book that brought algorithmic bias to a mainstream audience, and certainly to my attention. O’Neil’s background as a mathematician-turned-data-scientist gives her unique authority to explain how models that claim to be objective can encode and amplify discrimination. Essential reading for anyone entering the AI ethics space. If I had to recommend one book to start with, this would be it.

bias, algorithms, inequality, accessibility
book (Featured)
2019

Race After Technology

Ruha Benjamin

Benjamin’s concept of the “New Jim Code” — technology that reinforces racial hierarchies under the guise of neutrality — is one of the most important conceptual contributions to AI ethics in recent years. She draws on sociology, STS, and critical race theory to show how default settings, automated systems, and “colorblind” design choices can produce discriminatory outcomes. Required reading for understanding how technical systems encode social inequality, even when no individual designer intends harm.

race, bias, technology, justice
book
2020

The Oxford Handbook of Ethics of AI

Markus D. Dubber, Frank Pasquale & Sunit Das (eds.)

A sprawling but essential reference. The handbook brings together philosophers, legal scholars, and computer scientists across 40+ chapters covering everything from algorithmic fairness to autonomous weapons. Not a cover-to-cover read, but invaluable for finding rigorous treatments of specific sub-topics. I keep returning to the chapters on moral agency and on justice in automated decision-making.

philosophy, governance, handbook, academic

Papers

paper
2021

On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?

Emily M. Bender, Timnit Gebru, Angelina McMillan-Major & Shmargaret Shmitchell

The paper that launched a thousand debates — and cost Timnit Gebru her position at Google. Bender et al. argue that the rush to build ever-larger language models carries serious risks: environmental costs, encoded biases in training data, and the illusion of understanding where none exists. The “stochastic parrot” metaphor — a system that produces plausible-sounding text without comprehension — has become one of the most cited framings in AI ethics discourse. Required reading for anyone working with or writing about LLMs.

language-models, bias, environmental-cost, ethics
paper
2018

Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification

Joy Buolamwini & Timnit Gebru

A landmark audit study that exposed dramatic accuracy disparities in commercial facial recognition systems. Buolamwini and Gebru found that these systems performed worst on darker-skinned women — an intersectional failure that single-axis analysis would miss entirely. The paper didn’t just document a problem; it established a methodology for algorithmic auditing that has since been widely adopted. It’s a model of how rigorous empirical work can drive both academic discourse and real-world policy change.

facial-recognition, bias, intersectionality, audit
paper
2019

Model Cards for Model Reporting

Margaret Mitchell et al.

Mitchell et al. propose a standardised framework for documenting machine learning models — their intended use, performance across different demographic groups, and known limitations. Think of model cards as nutrition labels for AI: they don’t solve the problem, but they make the problem visible. The paper has had enormous practical influence; major tech companies now publish model cards, and the concept has been integrated into regulatory frameworks like the EU AI Act. A rare example of an academic paper that directly shaped industry practice.

transparency, documentation, accountability, standards
paper
2021

Datasheets for Datasets

Timnit Gebru et al.

The companion piece to Model Cards, but focused on the data side. Gebru et al. argue that every dataset used in machine learning should come with a “datasheet” documenting its motivation, composition, collection process, intended uses, and ethical considerations. The parallel to datasheets in electronics is apt — you wouldn’t use a component without knowing its specifications, so why would you train a model on data you haven’t characterised? This paper has become foundational in responsible AI practice and data governance curricula.

data, documentation, transparency, methodology
paper
2020

The Ethics of AI Ethics: An Evaluation of Guidelines

Thilo Hagendorff

A sharp meta-analysis of AI ethics guidelines that asks an uncomfortable question: are these documents actually effective, or are they performative gestures? Hagendorff evaluates over 20 prominent AI ethics frameworks and finds significant gaps — particularly around issues of surveillance, labour rights, and environmental impact. The paper is a useful corrective for anyone tempted to assume that publishing principles is the same as practising them. I’ve returned to this paper repeatedly when thinking about the gap between ethics as aspiration and ethics as action.

meta-ethics, guidelines, governance, critique

Tools

tool
2020

AI Ethics Guidelines Global Inventory

AlgorithmWatch

A comprehensive database of AI ethics guidelines from around the world. Invaluable for understanding the landscape of soft governance — who’s saying what, and where the consensus lies (and doesn’t).

governance, guidelines, global, database
tool
2021

AI Incident Database

Partnership on AI

A searchable repository of real-world AI failures and harms. Every time someone argues that AI risks are theoretical, I point them here. The database documents hundreds of incidents — from facial recognition misidentifications to autonomous vehicle accidents to algorithmic discrimination in hiring. It’s a critical resource for grounding ethics discussions in reality rather than hypotheticals.

incidents, database, accountability, documentation

Organizations

organization
2018

Ada Lovelace Institute

An independent research institute based in London, focused on ensuring data and AI work for people and society. Their reports on facial recognition, biometric data, and public attitudes toward AI are consistently rigorous and policy-relevant. I find their work bridges the gap between academic research and practical governance better than most, and it pairs well with the AI Now Institute’s reports from the US.

governance, research, policy, UK

Courses

course
2022

Ethics of Artificial Intelligence

University of Helsinki

A free, accessible introduction to AI ethics from the University of Helsinki — the same team behind the popular Elements of AI course. Covers key concepts like fairness, transparency, and accountability without requiring a technical background. An excellent starting point for anyone entering the field.

ethics, introductory, free, MOOC