Malicious AI? Deal With It (2023 May)

by Barry A. Liebling

While various versions of Artificial Intelligence (AI) have been around for a long time, its capabilities have expanded tremendously in the last few years. Some of the newer AI systems can carry on detailed human-like conversations with users, write essays in response to general questions, act as advisors to those seeking help, produce code to create or modify systems applications, and generate convincing imitations of specific human beings (for example, pretend to be a well-known celebrity by using words the celebrity uses and replicating the person’s voice).

So, are spectacular improvements in AI a good thing, or should people regard the new technology with dread? Pull back and consider what is occurring. AI is a tool, a tool that has the potential to do things that have not been possible previously. Advanced technology tools always have major impacts on the world. Think of how many things have changed radically with the introduction of telephones, automobiles, airplanes, radios, personal computers, and the internet. In every case the tools were used to create genuine value – making life better. And also, in every case there were (and are) nefarious villains who use the tools to commit terrible crimes.

Recently a number of AI experts, AI users, and outside observers have become alarmed by the rapid development of cutting-edge AI systems. They have circulated and publicized a petition that is calling for a six-month pause in AI development. They are concerned that AI applications might get out of hand, might cause harm to human beings, and that nobody can tell what the ramifications of new versions might be.

Peggy Noonan writes in The Wall Street Journal that six months might not be enough of a pause. She is concerned that it is not possible to predict or control exactly what boosted incarnations of AI might do. And she is anticipating bad consequences for humanity.

Let’s unpack some of the concerns of AI critics.

First, consider the feasibility of stopping the development of advanced AI. Alarmists are calling for the government (or a world government) to put a temporary stop on AI production. Whether or not governments act, frightened observers are urging the biggest players (think Google, Microsoft, OpenAI which created ChatGPT, several others) to agree among themselves to put a moratorium on their work.

Here is a certainty. Pandora’s Box is already wide open. Once a new technology has been discovered there is no going back. No matter what any government “commands” there will be people who will work on AI systems. If the major AI tech players agree to a moratorium, it is inevitable that some of them will break the truce and work in secret. There are experts who are not on the major players’ payrolls who will surely be interested in improving the capabilities of AI. And speaking of governments, what do you suppose the ruling elite in China or Russia will do if they see that the United States has (officially, but not really) discontinued AI research? Will the foreign powers also stop, or will they accelerate their efforts?

Peggy Noonan is concerned that in the United States members of the “Tech Elite” are using their own judgment and standards to decide the direction and details of AI. She writes that they “are now solely in charge of erecting the moral and ethical guardrails for AI. This is because they are the ones creating AI.” Her strong suggestion is to get the government involved in directing and controlling AI. This is a classic example of jumping from the frying pan into the fire. I may not like to see AI dominated by a half dozen private actors (who, by the way, are probably woke). But a few private developers who are competing with one another is far preferable to a single government power using coercive force to get its way (and the government officials are almost exclusively woke).

Some terrified citizens point to the high probability that AI will “learn to persuade” human beings. Furthermore, AI might provide false information labeled as fact and might either accidentally or even deliberately spread misinformation. Horrors! Notice that these are things that have occurred via the internet as soon as internet systems became a reality. Also notice that human beings have been influencing one another for good or for ill for thousands of years. Some people succumb to propaganda; others recognize and reject it. In the past, in the present, and in the future everyone is supposed to critically evaluate advice and directives. Those who are determined to be fools always accomplish their goal.

OK, AI is a reality, and along with the good stuff there will be malicious AI. What is the best course of action? Go back to the bromide that free speech advocates endlessly invoke. The solution to evil or false speech is not censorship. Instead good, true, convincing speech is the effective antidote. Let everyone have their say, and the best arguments (in the long run) will prevail.

This is the time for smart developers who value liberty and individualism to get into the act. If an AI system can be “trained” (insiders say “trained” rather than “programmed”) to be a leftist progressive, a different AI system can be trained to emphasize liberty, explain the virtues of freedom, and make appropriate recommendations.

Good AI systems do not have to stand alone. It is feasible to produce special-purpose AI systems that function to identify and debunk material that is coming from illegitimate sources (this could be woke enthusiasts or other menacing adversaries). The idea of an arms race is ancient. Recall that fighter airplanes were developed in response to the threat of bombers, and attack submarines came into existence to counter submarines that were sinking ships.

AI is powerful, is getting stronger, and is here to stay. People who understand and value human flourishing have no time to waste and must immediately get into the game.

*** See other entries in “Monthly Columns.” ***