11 White Lies on Silver Tongues: Why Robots Need to Deceive (and How)

Here is a brief reflection on "White Lies on Silver Tongues: Why Robots Need to Deceive (and How)" by Alistair M. C. Isaac and Will Bridewell, a chapter in Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence (Oxford University Press) and one of the selected readings for the course Ethics of Robotics and Autonomous Systems.

The chapter argues that robots should, at times, lie to us. The foundation of that argument is a careful breakdown of lies and deceptions into individual components that can be analyzed and interpreted clearly. For example, "bullshitting" can be considered a form of deception, but it is also something that would allow a robot to assimilate into human society more quickly. One might ask, "Why would we want a robot to be able to do that?" The answer comes down to the robot's intended purpose. What if the robot's aim is to distribute vaccines during a malaria outbreak? The faster it assimilates into a culture, the faster it can save lives. On that view, it should be able to lie, because the goal of the lie is neither malicious nor self-serving.

It is also worth noting that in deciding whether to allow robots to lie (or otherwise deceive), we are inherently programming ethics into the robot's directive. Programming a robot this way would defy deontology, which judges the morality of the action itself, independent of its consequences. Instead, our robot would be operating on some form of utilitarianism, guided by the principle of protecting as many people as possible from contracting malaria. And that may be entirely appropriate for this particular use case, which in turn brings in particularism, the view that what is morally right depends on the details of the situation rather than on any single fixed principle.
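
To make that last point concrete, here is a minimal sketch in Python of what "programming ethics into the robot's directive" might look like. Everything in it is hypothetical and illustrative, not drawn from the chapter: a deontological policy vetoes any deceptive utterance outright, while a utilitarian policy permits deception when the expected benefit outweighs the expected harm.

```python
from dataclasses import dataclass
from typing import Optional, Protocol


@dataclass
class Utterance:
    text: str
    is_deceptive: bool
    expected_benefit: float  # e.g., faster vaccine uptake from quicker assimilation
    expected_harm: float     # e.g., trust lost if the deception is discovered


class EthicalPolicy(Protocol):
    def permits(self, utterance: Utterance) -> bool: ...


class DeontologicalPolicy:
    """Judges the act itself: deception is never permitted, whatever the outcome."""

    def permits(self, utterance: Utterance) -> bool:
        return not utterance.is_deceptive


class UtilitarianPolicy:
    """Judges outcomes: deception is permitted when expected benefit exceeds expected harm."""

    def permits(self, utterance: Utterance) -> bool:
        return utterance.expected_benefit > utterance.expected_harm


def choose_utterance(candidates: list[Utterance], policy: EthicalPolicy) -> Optional[Utterance]:
    """Return the highest-benefit utterance the active policy allows, or None."""
    allowed = [u for u in candidates if policy.permits(u)]
    return max(allowed, key=lambda u: u.expected_benefit, default=None)


# Hypothetical scenario: a small social lie that builds rapport and speeds vaccine
# distribution, versus a blunt disclosure that slows the robot's assimilation.
white_lie = Utterance("I grew up in a village much like this one.", True, 5.0, 0.5)
disclosure = Utterance("I am a machine operated by a foreign agency.", False, 1.0, 0.0)

print(choose_utterance([white_lie, disclosure], UtilitarianPolicy()).text)    # picks the white lie
print(choose_utterance([white_lie, disclosure], DeontologicalPolicy()).text)  # picks the disclosure
```

The point is not the specific numbers, which would be hard to estimate in practice, but that permitting or forbidding deception is already a choice among ethical theories, and a particularist would presumably insist that no single policy object is right for every deployment.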