In Defense of Ethical AI

By Justin Sherman

The desire to “win” the “race” in artificial intelligence—and fear that China has an early lead—has some analysts ready to take a dramatic step: skip the ethics. This is seemingly based on the view that because China’s government has access to copious amounts of data via pervasive surveillance, democracies that are focused on AI ethics issues like data privacy are losing an edge. But abandoning democratic principles around artificial intelligence would set the world on a course we would all soon regret.

There is clearly ongoing AI competition between China and the United States. China’s government is investing far more in AI than its democratic counterparts, and that should be worrisome; AI will greatly influence the future world order, particularly by bolstering state power through stronger economies and military capabilities. Further, the uses of AI that China’s government is championing—surveillance, oppression, and tight social control—destroy human rights and endanger democratic norms in general. But the conclusion some analysts draw from these facts—that in response to global AI competition, democracies should give up on AI ethics because it will slow us down—is highly problematic, and I strongly reject it.

One, this reflects an oversimplified line of thinking about China’s AI development. Chinese governmental and nongovernmental entities alike have expressed concern about issues of AI safety and AI ethics, contrary to what many might assume. In the words of the executive director of the Partnership on AI, a consortium of stakeholders focused on safe and ethical AI, “We cannot have a comprehensive and global conversation on AI development unless China has a seat at the table.” This is not to say that norms around “tech ethics” do not differ between certain democratic countries and China; in many respects they do. But vague generalizations such as “China cares nothing about AI ethics” are counterproductive, not to mention nonspecific about what “China” refers to: companies, government agencies, or the public, to name a few possibilities.

Two, the notion of abandoning AI ethics often stems from the belief that China as a whole has a significant edge in AI competition because of limitless data collection by public and private entities. This is itself a problematic line of thinking, as it risks leading democratic policymakers to let big tech continue operating without any privacy regulation. It is also unclear to what extent data access alone will provide strategic advantages. Furthermore, this view narrows the scope of AI ethics to data privacy alone, which is far too limited. Even companies or governments that are selfishly motivated, i.e., those that care little about fairness for its own sake, would do well to care about AI bias that could make technology inaccessible to a prospective customer base or lead to decision errors in, say, a lethal autonomous weapon.

Three, and finally, abandoning AI ethics in the service of AI development sends a dangerous message. The world’s democracies, including the United States, India, Japan, and the E.U. bloc, need to promote global democratic norms around AI to counter the Chinese government’s model of digital authoritarianism. If we say “abandon AI ethics” on the off chance that doing so provides a tiny edge in some AI application areas, democracies will pay a far greater price in the world order: human rights will suffer around the globe as others follow suit in abandoning AI ethics, and power will consolidate in the hands of authoritarians as oppressive uses of AI become global norms. That’s something none of us should want.


Justin Sherman is the Co-Founder and Vice President of Ethical Tech.