The Four Modes of the AI Ethicist Practitioner
When people hear the title ‘AI Ethicist’, it’s understandable to question what it means. After all, the day-to-day functions of the role aren’t exactly intuitive. Do these people lie under a tree all day contemplating moral dilemmas while stroking an imaginary professorial beard? Are they really just high-minded whistleblowers who add bureaucratic bloat to already process-heavy organizations? The reality is more nuanced and, frankly, more complicated, depending on the specific role and job title. Below is a summary of the functions I have most commonly performed in my own experience, each paired with the top five skills I’ve relied upon most frequently. My aim is to shed some light on what it means to be a practitioner of AI ethics, not merely a theorist, or worse, a “thought leader.”
Mode #1: The Translator
The Translator converts external ethical and legal requirements into concrete technical specifications and measurable outcomes that development teams can implement. This role bridges the gap between regulatory language and practical engineering, developing verification methodologies that demonstrate compliance. For example, when facing requirements in the spirit of the EU AI Act that mandate “AI systems shall not disproportionately impact protected groups,” The Translator might define specific fairness metrics, testing protocols, and documentation standards that satisfy both the legal requirement and technical feasibility constraints.
Top Five Skills:
- Regulatory interpretation - Ability to decode complex regulatory requirements into clear operational directives
- Technical specification development - Converting abstract principles into concrete implementation criteria (e.g., Jira tickets)
- Compliance verification design - Creating measurable tests that demonstrate adherence to regulations
- Cross-disciplinary communication - Bridging legal, ethical, and technical languages effectively, including written and presentation skills
- Documentation expertise - Producing artifacts and evidence that satisfy both technical teams and external auditors (e.g., Technical documentation on Confluence, Wiki pages, etc.)
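To make the Translator’s work tangible, here is a minimal sketch of how a “no disproportionate impact” requirement might be turned into a measurable acceptance criterion using a demographic parity check. The metric choice, the 0.10 threshold, and the toy loan-approval data are all illustrative assumptions, not anything prescribed by the EU AI Act:

```python
# Hypothetical sketch: operationalizing "no disproportionate impact"
# as a demographic parity gap with a pass/fail threshold.

def demographic_parity_difference(outcomes, groups, positive=1):
    """Largest difference in positive-outcome rates across groups."""
    rates = {}
    for g in set(groups):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(1 for o in selected if o == positive) / len(selected)
    return max(rates.values()) - min(rates.values())

# Toy data: loan approvals (1 = approved) for two demographic groups
outcomes = [1, 1, 0, 1, 0, 1, 0, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(outcomes, groups)
THRESHOLD = 0.10  # illustrative acceptance criterion for a Jira ticket
print(f"parity gap = {gap:.2f}; pass = {gap <= THRESHOLD}")
```

A check like this is exactly the kind of artifact that can live in a ticket and a test suite, giving auditors measurable evidence rather than a restatement of the principle.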
Mode #2: The Futurist
The Futurist explores the full range of potential long-term consequences of AI systems and policies, extending beyond immediate or intended outcomes. Using approaches ranging from quantitative forecasting to speculative fiction, this role identifies possibilities that might otherwise remain invisible. For example, when evaluating the adoption of generative AI in healthcare, The Futurist might explore not just efficiency improvements but also potential impacts on medical education, doctor-patient relationships, insurance models, and healthcare accessibility across different populations and timeframes.
Top Five Skills:
- Scenario development - Creating comprehensive alternative futures to explore potential outcomes, which may even be in the form of creative writing
- Interdisciplinary pattern recognition - Identifying connections across technological, social, and economic domains
- Weak signal detection - Spotting early indicators of significant emerging trends or changes (Note: Some refer to this as “reading the tea leaves”)
- Temporal perspective-taking - Imagining how values, needs, and contexts might shift in the future (e.g., Would people in the future think poorly of those who drove their own cars instead of relying on autonomous vehicles, if AVs were perceived as “safer”?)
- Uncertainty strategizing - Distinguishing reducible knowledge gaps from irreducible ones, and recommending adaptive versus fixed strategies depending on the type of uncertainty involved
Mode #3: The Arbitrator
The Arbitrator makes context-specific judgments when faced with competing values, unclear priorities, or risk-benefit tradeoffs that cannot be fully resolved through rules alone. This role evaluates the relative importance of different ethical considerations in specific situations, determining acceptable risk thresholds that balance innovation with safety. For example, when deciding on deployment parameters for a medical AI system, The Arbitrator would weigh the potential harm from occasional system errors against the harm of withholding beneficial technology, considering factors like alternative options, affected populations, and risk mitigation capabilities.
Top Five Skills:
- Value-system analysis - Identifying and navigating competing ethical frameworks and priorities (e.g., different opinions on how prisoners should be treated)
- First-principles interrogation - Systematically questioning assumptions and received wisdom to uncover foundational truths, similar to Cartesian doubt
- Proportionality assessment - Evaluating relative weights of different benefits and harms
- Contextual judgment - Making well-reasoned decisions under uncertainty in specific situations
- Stakeholder impact evaluation - Understanding how different groups are affected by various choices
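The medical-deployment tradeoff above can be partially quantified. The sketch below compares expected harm from deploying an imperfect system against the harm of withholding it; every number (error rates, per-case harm weights, population size) is an illustrative assumption, and as the Arbitrator role implies, the arithmetic informs rather than replaces contextual judgment:

```python
# Hypothetical proportionality assessment: expected harm of deploying an
# imperfect medical AI vs. the harm of withholding it. All figures are
# invented for illustration.

def expected_harm(incident_rate, harm_per_incident, population):
    """Simple expected-harm estimate: rate x severity x people affected."""
    return incident_rate * harm_per_incident * population

population = 100_000  # patients per year (assumed)

# Deploying: rare but serious system errors
deploy_harm = expected_harm(0.002, 1.0, population)
# Withholding: more frequent but milder harm from missed benefit
withhold_harm = expected_harm(0.015, 0.3, population)

print(f"deploy: {deploy_harm:.0f}, withhold: {withhold_harm:.0f}")
```

Note that the comparison flips entirely if the harm weights change, which is precisely why this remains a judgment call rather than a formula: the Arbitrator’s real work is defending those weights to stakeholders.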
Mode #4: The Logician
The Logician creates systematic governance frameworks and classification systems that enable consistent handling of AI inputs, outputs, and behaviors across similar scenarios. This role develops taxonomies, decision trees, and logical frameworks that determine how systems should respond to different types of content or requests. For example, when a company’s AI chatbot needs to handle customer complaints that might involve harassment or threats, The Logician creates clear rules for those cases: IF a message contains profanity directed at a person, THEN block it; IF it contains a threat of violence, THEN flag it for the security team and terminate the conversation. These aren’t just guidelines; they’re actionable protocols that the AI follows.
Top Five Skills:
- Taxonomy development - Creating comprehensive classification systems for content and behaviors
- Logical framework construction - Building consistent decision and ruleset architectures
- Boundary definition - Establishing clear criteria for category distinctions and classifications
- Governance operationalization - Translating high-level principles into executable rule systems
- Policy consistency enforcement - Ensuring similar cases are treated similarly across the system
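The IF/THEN protocol described for the chatbot example can be sketched as a tiny rule function. The keyword lists, action names, and rule ordering here are illustrative assumptions, not a production safety system:

```python
# Minimal sketch of the Logician's moderation protocol. Placeholder term
# lists; a real system would use trained classifiers, not keywords.

PROFANITY = {"idiot", "moron"}       # placeholder profanity terms
THREATS = {"hurt you", "kill"}       # placeholder threat phrases

def moderate(message: str) -> str:
    text = message.lower()
    # Rule 1 (checked first so threats take precedence over profanity):
    # IF the message contains a threat of violence,
    # THEN flag for the security team and terminate.
    if any(phrase in text for phrase in THREATS):
        return "flag_security_and_terminate"
    # Rule 2: IF it contains profanity directed at a person, THEN block.
    # (Whole-word match; naive substring checks would flag e.g. "skill".)
    if any(word in text.split() for word in PROFANITY):
        return "block"
    return "allow"

print(moderate("I will hurt you"))   # flag_security_and_terminate
print(moderate("you are an idiot"))  # block
print(moderate("my order is late"))  # allow
```

The design point is the one the section makes: explicit, ordered rules mean similar cases are handled identically every time, and the taxonomy of actions ("block", "flag", "allow") is auditable in a way that ad hoc judgment is not.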