I am about halfway through the 20-something required reading assignments for the course. It is a longer reading list than I have encountered in other courses, but so far every assignment has been relevant. The topics and subtopics vary widely and surface some of the most difficult questions in AI ethics, drawing on two specific skills learned during Year 1: Ethical Analysis, the (inevitably subjective, as ethics are) identification and interpretation of the issues at play, and Ethical Deliberation, where you come down on an issue and support your position with argument and critical thinking.
Topics include, but are not limited to, the following:
- Do autonomous systems deteriorate work ethic over time?
- Should robots be treated as slaves, or should they be granted rights?
- Do we deliberately or inadvertently outsource the moral responsibility for our decisions by granting AI systems decision-making capabilities?
I find the second question, regarding the rights of robots, fascinating, and I am sure I will be asked to formalize an opinion on it at some point during this course. Three sub-questions in particular strike me as worth meditating on.
- Are robots just technological surrogates that lack any agency for moral reasoning? After all, their decisions are based on values encoded by humans, values they could not generate themselves.
- Are robots merely extension tools for humans, analogous to a blind person’s cane? We do not grant the cane rights despite its importance and contribution to leading a good life; therefore, we shouldn’t grant a robot rights either.
- Is the argument about robots’ rights born of human self-importance and hubris? Do we believe the conversation is worth having only because we have fallen for the idea that our technological accomplishments are more profound than they really are? If so, the conversation about robot rights becomes one about human achievement: we believe we have created something that warrants the discussion in the first place.