Reprinted from the East Bay Times
Hundreds of technology leaders and researchers, including Steve Wozniak and Elon Musk, recently released a letter calling for a moratorium on artificial intelligence development, citing the risk to human society. There are legitimate concerns about AI's impact on humanity, but a moratorium is unrealistic, especially while there is a reasonable alternative.
First, let’s look at those concerns. There’s worry that AI will dehumanize society by making us too soft or overly dependent on technology. AI could benefit a select few while eliminating jobs or replacing them with menial, low-paying ones. AI applications could be biased, harming specific ethnic groups.
AI could be used in fraud and other crimes. AI could manipulate political discourse, causing social instability. AI could be used in autonomous weapons on and off the battlefield. And worst of all, AI might develop an intelligence or consciousness that would enable it to actively eliminate humans and other life forms.
A moratorium on AI development might be possible in a totalitarian society in which neighbors, friends and even family members would be compelled to report anyone involved in work on the technology — but not in a free country.
Scientists and engineers develop technology for a variety of reasons. Money is just one of the motivators. Scientists also yearn to explain and manipulate nature. For many technologists, being the first one to make a disruptive discovery is the ultimate motivator. In today’s environment, you must always assume there will be competent competition.
Placing a moratorium on AI would slow its benefits, such as advances in health care, more efficient use of Earth’s limited resources in the fight against climate change, fewer traffic and transportation injuries, better language translation and increased human productivity. Stopping AI would ask our descendants to live lives similar to ours when much better may be possible.
The most obvious alternative to a moratorium is regulation. It is possible to create laws and regulations that would guide AI development, but that doesn’t seem to be in the cards. The United States has not even created a privacy law, a necessary precursor to an AI regulatory law. The European Union has a privacy law, and so does California. But the United States has not been able to pass a privacy law because of disputes between and within the political parties, disputes between states and the federal government, and resistance from stakeholders.
But there are ways to erect guardrails that reduce AI risks while allowing beneficial development. The answer lies in the creation of a coalition or an association that can bring stakeholders in industry, government and academia together to create standards and a legislative plan for AI. Standards should apply to the algorithms and data that are used to develop and train AI applications, making the applications more predictable and less biased. Standards could also be used to weed out products that are not up to par.
Stakeholders should agree to participate in and support this effort for the same reasons many technologists created the moratorium letter in the first place. The public is already concerned enough about what AI might do that a moratorium is being seriously discussed even though there haven’t been any massive AI-caused layoffs or other catastrophes. If something bad does happen that can be blamed on AI, the public reaction could be severe, possibly resulting in draconian measures.
In their own self-interest, AI developers don’t want bad actors, rogue players or incompetent developers to introduce applications that cause major backlash. They should be willing to come together and develop standards along with realistic legislation that can be the basis for federal regulations.
Copyright © 2023 Elan Barenholtz for the Center for the Future Mind - All Rights Reserved.