One of the second-semester courses that I’m in is called “Translational Data and AI Ethics.” From what I can tell, the point of the class is to think more broadly about how we communicate ethical values across the various people involved with technology, not only those whom we perceive as end users. The course touches on themes including ethnography, applied science, and new collaborative practices in AI, or what is sometimes called “Team Science.”
In my readings, I came across an interesting example. The people building a technology may be more enthusiastic about it than the people actually using it. There is a risk that specialists will build systems that suit their own interests and ideas while neglecting those of the market. For example, those who design music recommendation algorithms may be music enthusiasts, while the people they design for may mostly be casual listeners. Ethnography, then, can be used to understand how the technology is made, rather than relegating that research to end users alone (or, in this case, end-listeners).
This video featuring David Danks was definitely interesting. In it, he addresses the Design School at UCSD on how we can practically (with an emphasis on doing) implement ethical AI practices. He also argues for the need for what he calls “Translational AI Ethics,” although I’m not sure I entirely agree. I would instead argue that AI, ideally, should simply be ethical. We shouldn’t qualify it or create sub-branches of it, because we risk it becoming too fragmented and perhaps too complicated for the general public to embrace.