I met Shannon Vallor today, Director of the Data & Artificial Intelligence Ethics program and a driving force within the Centre for Technomoral Futures. Launched in 2020 as an integral part of the Edinburgh Futures Institute (EFI) at the University of Edinburgh, the Centre supports EFI’s larger aim: to pursue and promote the participatory knowledge and critical understanding needed to support society’s navigation of complex futures.
I knew a bit about her: she started in California, did a stint in Boston, moved back to California, and then came to Edinburgh. In some ways, her life parallels mine, though hers includes prestige, accolades, industry credibility, writing credentials, and experience. What I didn’t know was her focus on what is called Virtue Ethics. According to the BBC, Virtue Ethics not only deals with the rightness or wrongness of individual actions; it also provides guidance as to the sort of characteristics and behaviors a good person will seek to achieve. In that way, Virtue Ethics is concerned with the whole of a person’s life rather than particular episodes or actions.
After meeting her, I felt as though my reason for going back to school had been immediately validated—to learn the things I wouldn’t know to learn. Virtue Ethics isn’t something I would have naturally stumbled upon through self-education. It has me thinking about ethics in terms of an entire person (or entity) rather than a series of individual acts, which, over time, may become fragmented or incoherent. That may seem obvious. Of course, you say, ethics must be something a person exhibits. However, when we think about AI Ethics, we tend to think of a series of individual acts rather than the unified output of an entity. The shift is meaningful because it gets us out of the trap of asking whether this action or that action is ethical. Instead, we can view our decisions as part of a larger whole, with each individual decision simply a derivative of that whole.
Other things Shannon talked about include:
- Without ethics, the decisions we make around technology are “hollow.” I love the word “hollow.”
- AI Ethicists will be in demand in the future. However, EFI is not preparing students for a specific track the way medical school trains a radiologist. It will be much more fluid.
- She recommended that the approach to education over the next two years be somewhat “T-shaped”: go broad and wide for the first year, learning what you can and absorbing all the electives you can, but at some point pick a focus for your final project. The project seems to be centered on intervention, a theme I’ve encountered here before. Students should therefore simultaneously explore the landscape of AI ethics and consider where they might want to dig.