Optimizing for Human Rights Over Innovation

A major theme in the media is that governing or regulating AI stifles innovation. Eric Schmidt, the former Google CEO, has been consistently vocal about what I’ll call an “Innovation First” approach, and he has been critical of the EU AI Act. Schmidt said the EU should be an “innovation partner to the U.S.” to be able to compete with China, and he claimed, “the EU did regulation first, and I think that’s a mistake.” He has also said that the role of government is not simply to regulate AI; it must simultaneously promote the technology. Alongside a regulatory plan, Schmidt suggested, every country should have a “how-do-we-win-AI” plan.

Innovation for the sake of innovation doesn’t necessarily improve the lives of humans. Historically, we’ve developed many innovations that led to human harm, most visibly in the area of chemical and nuclear weaponry. Similarly, DDT (dichloro-diphenyl-trichloroethane) was developed in the 1940s to combat malaria, typhus, and other insect-borne human diseases among military and civilian populations. In 1972, the EPA issued a cancellation order for DDT based on its adverse environmental effects on wildlife and its potential human health risks. Was it an innovation? Yes. Did the innovation end up hurting people? Without question.

An Innovation First approach assumes that innovation is what we optimize our decisions for. If innovation comes first, then human rights must be subordinated to it. If we believe that technology exists to better the lives of humans, then an Innovation First approach puts innovation in the wrong place in the hierarchy.