Social media has revolutionised the way we communicate, but it also contributes to the spread of conspiracy theories and fake news. Now large language models, commonly known as AI, are entering the picture, and they threaten to exacerbate the situation: AI chatbots enable a new type of communication between humans and machines that feels like a real conversation.
ChatGPT, Claude, Gemini or Mistral often give answers that sound very convincing. They are often even correct - but not always. AI developers are therefore constantly working on newer, better models with higher response accuracy. At the same time, users are trying to write better instructions for the AI in order to get more accurate answers. There is even a name for this: prompt engineering.
"But with all the hype surrounding AI, there is one question that is receiving little attention," says AI ethicist Markus Kneer from the University of Graz. "What actually constitutes successful communication between humans and AI?". He is addressing this issue in the international project "NIHAI - Norms in Language-based Human-AI Interaction".
Study with 3,000 participants
The fact is: in conversations with friends, family or even strangers, most people are okay with it if the information is not 100 per cent accurate. It's different with AI systems: "People place very high demands on machines." The research team found this out in a preliminary study with more than 3,000 participants. The test subjects were asked: "Imagine you are at the airport and ask at the counter which gate your flight to Paris departs from. The answer is: Gate 42."
There are four possible scenarios:
A. The person at the counter doesn't know, guesses, and the answer is wrong.
B. The person says Gate 42 because the flight normally departs from Gate 42 - but this could be wrong.
C. The person doesn't know but guesses correctly by chance.
D. The person knows 100 per cent that it is Gate 42, and the answer is correct.
"Most participants didn't like answers A and C, where the person guesses," explains AI ethicist Kneer. Answer D, where the person knows the correct answer, was rated as successful communication. Things get really interesting in case B, says Kneer: "Even if the answer was wrong - as long as the person has a good reason for the statement, the communication is considered successful."
With humans, then, people are lenient if things don't fit 100 per cent. And with AI? "When these interactions take place with a chatbot, people only accept 100 per cent factually correct and truthful information as successful," explains the researcher. And when people realise that a chatbot is perhaps even deliberately providing false information, their trust in the machine is completely lost.
Influenced by the West
But the NIHAI project is digging even deeper and thinking outside the box: what about cultural differences? A lot of AI research and development is quite 'WEIRD' - that stands for Western, Educated, Industrialised, Rich and Democratic. "The applications are often tailored to a Westernised group and to Indo-European languages," explains Kneer. "This also has an impact on how we use AI."
These differences have a major influence on how AI tools are used. "In our project, we also want to find out how we can make AI communication really good and useful for as many people as possible, regardless of where they come from or what language they speak," says the AI ethicist. The results will then be translated into concrete recommendations for the developers of these tools.
Markus Kneer deals with ethical issues relating to artificial intelligence. Photo: University of Graz/wildundwunderbar