Intensive Day 1: Algorithmic Bias, Fairness, and Justice

Today’s eight-hour intensive started at 2am PDT. Not as bad as the usual 1am. Overall, the material was highly engaging, even though we’re just scratching the surface of some massive topics. Here are some of the main takeaways from the lecture.

  1. We looked at a basic framework for dealing with machine learning bias: the “black-chain” model. The black chain is a spin on the “black box” analogy that many people use when describing opaque machine learning systems; the “chain” refers to a supply chain, which is a more accurate analogy because bias can enter at any stage of the pipeline, not just inside one opaque component. The framework is composed of four main parts and could be particularly useful as an organizational tool when conducting an algorithmic audit (a checklist sketch follows this list). The four main components are as follows:
    • Problem Specification
    • Data Collection & Pre-processing
    • Modelling & Validation
    • Deployment
  2. There are several ways of describing bias, all of which are valid depending on the context (the first is sketched numerically after this list):
    • Bias is when the underlying population isn’t accurately or fairly represented
    • Bias is when there is a disposition towards a particular outcome
    • Bias is when decisions favor a particular group without justification
    • …and many more
  3. We discussed different approaches to “fairness” (see the toy allocation sketch after this list), which include:
    • Fairness as an equal distribution of resources
    • Fairness as meeting individual needs or preferences
    • Fairness as random allocation (a lottery), on the grounds that every other method is unfair to someone
  4. A hunch I’ve had for some time was validated: regardless of which version of fairness you choose, it must ultimately be represented computationally or mathematically (see the demographic-parity sketch after this list).
  5. The reason we pay attention to AI ethics or machine learning ethics at all is to avoid harm, of which there are two main kinds:
    • Harms of allocation = when a system unfairly withholds or distributes some benefit or opportunity (e.g., a screening model denying loans or interviews)
    • Harms of representation = when a system reinforces a stereotype or demeans how a group is depicted (e.g., a search engine returning stereotyped images)
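
To make takeaway 1 concrete, here is a minimal sketch of how the four black-chain stages might organize an audit checklist. The stage names come straight from the lecture; the example questions under each stage are hypothetical additions of my own, not anything we were given.

```python
# A sketch of the black-chain stages as an audit checklist.
# Stage names are from the lecture; the questions are invented examples.
AUDIT_CHECKLIST = {
    "Problem Specification": [
        "Who defined the target variable, and what proxy does it rely on?",
    ],
    "Data Collection & Pre-processing": [
        "Which populations are under-represented in the training data?",
    ],
    "Modelling & Validation": [
        "Are error rates reported per demographic group?",
    ],
    "Deployment": [
        "Is there a feedback loop between predictions and future data?",
    ],
}

# Walk the chain stage by stage, as an auditor might.
for stage, questions in AUDIT_CHECKLIST.items():
    print(f"== {stage} ==")
    for q in questions:
        print(f" - {q}")
```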
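For takeaway 2, the first definition of bias can actually be checked numerically: compare each group’s share of a sample against its share of the underlying population. All of the figures below are invented purely for illustration.

```python
# Toy check for representation bias: how far does each group's share of
# the sample diverge from its share of the population? (Made-up numbers.)
population = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}
sample_counts = {"group_a": 700, "group_b": 250, "group_c": 50}

total = sum(sample_counts.values())
for group, pop_share in population.items():
    sample_share = sample_counts[group] / total
    print(f"{group}: population {pop_share:.0%}, sample {sample_share:.0%}, "
          f"gap {sample_share - pop_share:+.0%}")
```

Here group_c makes up 20% of the population but only 5% of the sample, which is exactly the kind of misrepresentation the first definition is pointing at.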
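For takeaway 3, a toy sketch of the three fairness framings as allocation rules for a fixed budget. The recipients, need scores, and budget are all hypothetical; the point is just that each framing yields a different, concrete allocation.

```python
import random

# Three fairness framings as rules for splitting a fixed budget of 100
# units across four hypothetical recipients with invented need scores.
recipients = ["p1", "p2", "p3", "p4"]
needs = {"p1": 10, "p2": 40, "p3": 30, "p4": 20}
BUDGET = 100

# 1. Equal distribution: everyone receives the same share.
equal = {r: BUDGET / len(recipients) for r in recipients}

# 2. Need-based: shares proportional to each recipient's stated need.
total_need = sum(needs.values())
need_based = {r: BUDGET * needs[r] / total_need for r in recipients}

# 3. Random allocation (lottery): one recipient, chosen at random, gets all.
winner = random.choice(recipients)
lottery = {r: (BUDGET if r == winner else 0) for r in recipients}

print("equal:     ", equal)
print("need-based:", need_based)
print("lottery:   ", lottery)
```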
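And for takeaway 4, one concrete way a fairness definition becomes mathematical: demographic parity, which asks that P(decision = 1 | group = a) ≈ P(decision = 1 | group = b). A minimal sketch, with invented predictions and group labels:

```python
# Demographic parity as a computable quantity: the gap in positive-outcome
# rates between two groups. Predictions and labels are made up.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # model decisions (1 = approve)
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

def positive_rate(group):
    """Fraction of approvals among members of the given group."""
    decisions = [p for p, g in zip(preds, groups) if g == group]
    return sum(decisions) / len(decisions)

gap = positive_rate("a") - positive_rate("b")
print(f"P(approve | a) = {positive_rate('a'):.2f}")
print(f"P(approve | b) = {positive_rate('b'):.2f}")
print(f"demographic parity gap = {gap:+.2f}")
```

A gap of zero would satisfy this particular definition; whether demographic parity is the *right* definition for a given system is exactly the kind of question the lecture left open.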