by Barry A. Liebling
Human beings have always tried to influence one another. Before the advent of literacy, overt behavior, gestures, drawings, and the spoken word were the available methods. The proliferation of reading and writing opened new ways of shaping beliefs and attitudes.
In the late 1800s, telegraphy and telephony made it easy to communicate at a distance rapidly and accurately. And in the twentieth century, the technology of radio and television expanded the power of nearly everyone who yearns to persuade others.
The internet exploded the available opportunities to affect how people see and interpret the world. In the past decade the development of artificial intelligence (AI) – a tool with tremendous potential that can be used to obtain valuable and benign results or can be abused by scoundrels – has further increased the ability of its users to affect how others recognize and evaluate events.
Note that regardless of what particular technology is employed people have always been capable of communicating honestly and truthfully. And liars and bad actors have always found ways of sending false messages to their intended victims.
Fortunately, the person sending the message does not have all the power. Everyone can (and should) check the veracity of important messages. Alert people cultivate the habit of sorting out what is genuine from what is counterfeit. Some people are very good at this, and they deserve credit for exercising their intelligence. Of course, there are people who rarely succeed in differentiating the real thing from nonsense (lies, misinformation, disinformation, fake news, fantasy, …) because they do not attempt to do so, or because they lack the skill, or both.
Recently a team of Wall Street Journal investigators reported that they discovered a substantial amount of dishonest political material on TikTok. Apparently, people from outside the United States are posting videos about former President Trump that are lies and defamatory. The bad actors are using AI to generate fake videos that smear the Republican presidential candidate.
The Wall Street Journal team has determined that many of these videos are originating from “China, Nigeria, Iran and Viet Nam.” The ultimate intention of the videos is not obvious, but the reporters are distressed that “the divisive narratives corrode the country’s already acrimonious discourse at a time when about a third of young Americans turn to TikTok for news.” https://www.wsj.com/tech/tiktok-political-misinformation-trump-election-2024-bd0019d8
The reporters assert that “TikTok’s rules forbid misinformation about elections that the company considers harmful.” (Note that every outlet is in the habit of removing content that its owners consider harmful.) TikTok has responded to the Journal’s queries and is supposedly taking down clips that violate its terms of service. But as some accounts and videos are removed, others pop up with similar material filled with “misinformation.”
How should we evaluate the significance of the Journal’s article? Has something nefarious and novel been uncovered which should be alarming, or is this just another typical example of liars doing their thing? Here are some thoughts.
I assume that the team accurately identified accounts that are disseminating fake news.
The article stresses that the content is coming from foreign locations. Does this mean that there is an absence of similar videos that originate from within the United States? Is domestic disinformation less troubling and less common than mischief from international sources? I am not convinced that one is worse than the other.
Any videos, audio, or written material produced by AI is in some sense fake. A lot of it is easy to recognize, and intelligent people should be suspicious of AI-generated content. Of course, as technology advances AI is getting more capable of mimicking the real thing. And the good news is that the same people who are working on AI have developed detectors that reliably recognize AI content. Apparently OpenAI already has a tool for identifying products of ChatGPT. I expect there will soon be multiple detectors coming from various sources for identifying AI-generated material. And the detecting programs will surely be widely available. https://www.wsj.com/tech/ai/openai-tool-chatgpt-cheating-writing-135b755a?mod=Searchresults_pos1&page=1
What should the management of TikTok do? They are already removing videos that they find objectionable. And they will always have the editorial power to decide what is allowed on the site. But it would be helpful if TikTok explicitly announced to its users that anything they encounter on TikTok may or may not be true. Furthermore, it would be a step in the right direction if TikTok used an AI detector to explicitly label all content as AI or not AI.
As long as humans communicate with one another there will be lies, fake news, misinformation, and disinformation. There is no alternative scenario. If free speech is respected (as it should be), some people will maliciously disseminate falsehoods, and some people will erroneously say things that are not true. The cliché that the cure for false speech is lots of true speech has merit. Everyone should do their own research to sort out what makes sense and what does not.
Note that there are influential critics (both in the United States and worldwide) who yearn to have government agencies supervise communication and forbid what their “experts” regard as incorrect, misleading, or hateful. Note also that government “experts” (just like everyone else) lack the ability to errorlessly sort out what is true from what is false. However, the “experts” are very skillful at identifying things they do not like. And they never hesitate to squash their adversaries when they can.
An abundance of false and deceptive information is here to stay. Get over it.
*** See other entries at AlertMindPublishing.com in “Monthly Columns.” ***