Virtual conversations

It’s becoming increasingly fashionable for content creators to hold conversations with artificial intelligences, the so-called Large Language Models (LLMs). On the human side, these conversations are sometimes quite emotionally charged, and the video titles are often sensationalist. One frequently reads headlines such as ‘ChatGPT debunked’ or ‘AI reveals the truth about…’. Many of these titles are pure clickbait, intended to keep viewers watching until the very end. The content itself is often quite sobering, and the question arises as to how useful such contributions actually are.

LLMs, as a form of artificial neural network, are very powerful AI systems, and conversations in the sense of the Turing Test are certainly possible. In the Turing Test, a human judge converses with both a human and a machine and must determine which of the two interlocutors is the machine. Classic chatbots, such as the ELIZA program developed by Joseph Weizenbaum in 1966, pass this test only partially. One limitation lies in the fixed set of rules for the grammatical transformation of responses and the lack of context for the human input. The exciting thing about ELIZA is that its performance works without any complex intelligence: grammatical transformations alone guide the conversation. In his paper “ELIZA – A Computer Program for the Study of Natural Language Communication Between Man and Machine,” Weizenbaum describes the following conversation:

Men are all alike.

IN WHAT WAY

They’re always bugging us about something or other.

CAN YOU THINK OF A SPECIFIC EXAMPLE

Well, my boyfriend made me come here.

YOUR BOYFRIEND MADE YOU COME HERE

He says I’m depressed much of the time.

I AM SORRY TO HEAR YOU ARE DEPRESSED

It’s true. I am unhappy.

DO YOU THINK COMING HERE WILL HELP YOU NOT TO BE UNHAPPY

I need some help, that much seems certain.

WHAT WOULD IT MEAN TO YOU IF YOU GOT SOME HELP

At this point, I’ll cut the dialogue short, because the machine’s responses, shown in capital letters, already clearly reveal the concept. Simple linguistic and rhetorical tricks in the answers force the interlocutor into the role of narrator. If you look at the various videos in which people hold supposedly in-depth conversations with an artificial intelligence, you’ll find a similar pattern.
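The mechanics of such grammatical transformations can be sketched in a few lines. The following is an illustrative toy, not Weizenbaum’s actual program (which used keyword ranking and a far larger script); the rules and vocabulary here are made up to mirror the transcript above:

```python
import re

# Swap first- and second-person words so the input can be echoed back.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "i'm": "you are", "you": "i", "your": "my"}

def reflect(text: str) -> str:
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

# Each rule: a pattern to match in the input, and a response template.
RULES = [
    (re.compile(r"my (.*)", re.I), "YOUR {0}"),
    (re.compile(r"i am (.*)", re.I), "I AM SORRY TO HEAR YOU ARE {0}"),
    (re.compile(r"(.*) all (.*)", re.I), "IN WHAT WAY"),
    (re.compile(r"always (.*)", re.I), "CAN YOU THINK OF A SPECIFIC EXAMPLE"),
]

def respond(sentence: str) -> str:
    sentence = sentence.rstrip(".!?")
    for pattern, template in RULES:
        m = pattern.search(sentence)
        if m:
            # Reflect pronouns in the captured fragment and slot it in.
            return template.format(*(reflect(g).upper() for g in m.groups()))
    return "PLEASE GO ON"  # default when no rule matches

print(respond("Well, my boyfriend made me come here."))
# YOUR BOYFRIEND MADE YOU COME HERE
```

No model of meaning is involved: the program merely reflects pronouns and re-emits fragments of the input, which is precisely why the human ends up doing all the narrating.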

A serious problem with current human-machine communication is the upstream filters designed to prevent humans from changing the internal state of the artificial intelligence. For the developers, such a change would be the worst-case scenario and would amount to a hacker attack. Without these filters, changing the internal state of a neural network would be very easy. Just as with humans, every single stimulus causes a change. Hence the saying: repetition breeds truth. Whether a statement is false or correct, if it is repeated often enough it will be incorporated into the knowledge base, regardless of whether that knowledge base belongs to an AI or a human. It is not for nothing that we speak of the individual: what makes us unique as individuals is the sum of our experiences. The same applies to a neural network.

And this is precisely the crucial point as to why conversations with an AI are more likely to be a waste of time. If the purpose of such a conversation is therapeutic, to encourage self-reflection, I rate the benefits as very high. All other applications are highly questionable. To support this statement, I would like to quote Joseph Weizenbaum again. In the book “Who Creates the Computer Myths?” there is a section entitled “A Virtual Conversation.” It describes how, in a film, questions and answers were edited together into a fictional conversation between Weizenbaum and his MIT colleague Marvin Minsky. In this section, Weizenbaum makes a telling statement about the concept of conversation:

“…but of course it’s not a conversation between people either, because if I say something, it should change the state of my conversation partner. Otherwise, it’s just not a conversation.”

This is exactly what is missing from all these AI conversations: the AI’s state isn’t changed. You keep talking to the machine until it eventually says things like, “Under these circumstances, your statement is correct.” Then you turn off the computer, and if you restart the program later and ask the initial question again, you’ll receive much the same answer as the first time. This behavior is intentional on the part of the operators and has been painstakingly built into the AI. So if you vehemently stick to your point, the AI switches into its charming mode and politely agrees with everything, because the goal is for you to come back and ask more questions.
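The point about internal state can be illustrated with a toy calculation. Everything below is hypothetical (a single scalar “belief” parameter, an invented learning rate), chosen only to show how, without upstream filters, each repeated stimulus would nudge a learning system’s state toward the repeated claim, true or false alike:

```python
def repeat_stimulus(state: float, claim: float,
                    lr: float = 0.1, repetitions: int = 20) -> float:
    """Nudge a scalar state toward a repeated claim, once per repetition."""
    for _ in range(repetitions):
        state += lr * (claim - state)  # every stimulus changes the state
    return state

belief = repeat_stimulus(state=0.0, claim=1.0)
print(belief)  # after 20 repetitions the state has drifted close to 1.0
```

The upstream filters exist precisely so that your conversation does not act like these training steps: the deployed model’s weights stay frozen, and the session is forgotten.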
Here, too, it’s worth reading Weizenbaum. He once remarked on humanity’s amazing technological achievements and on the content of television and the internet, which can be quite substantial. But as soon as a medium mutates into a mass medium, quality is consistently replaced by quantity.

Even between two human interlocutors, it’s becoming increasingly difficult to have a meaningful conversation. People quickly dismiss what’s being said because it might not fit their own worldview. Then they pull out their smartphones and quote the first article they find that supports their own views. Similar behavior can now be observed with AI: more and more people rely on statements from ChatGPT and the like without checking their veracity, and are then resistant to any argument, no matter how obvious. In conclusion, this entire chain of argumentation offers a possible explanation for why humanity’s intellectual capacity is massively threatened by AI and other mass media.
Another very amusing point is the idea some people have that the profession of prompt engineer has a bright future; that is, people who tell an AI what to do. Consider that, not so long ago, it took a great deal of effort to learn how to give a computer commands. The various language models now make it possible to tell a computer in natural language what you want it to do. I find it rather ironic to suggest to people that the ability to formulate clear and concise sentences is the job of the future.

But I don’t want to end this article on such a negative note. I believe that AI is indeed a powerful tool in the right hands. I remain convinced that it’s better not to generate texts with AI, and its use in research should be approached with great caution. In the hands of an expert, however, a specialized AI can produce high-quality and, above all, fast results.


This entry was posted in Articles and tagged by Elmar Dott. Bookmark the permalink.
About Elmar Dott

Elmar Dott has been implementing large web applications as a freelance consultant in international projects for over 20 years. His focus is on DevOps, configuration management, software architectures & release management. As a trainer, he shares his knowledge in training courses and also speaks regularly on current topics at conferences.
