Fast Facts
- Using mental verbs like “know” or “understand” to describe AI can make machines seem more human than they really are.
- Such language can create a false impression of AI’s independence and intentions, which can lead to unrealistic expectations.
- News writers rarely use anthropomorphic language about AI, and when they do, whether it reads as anthropomorphic often depends on context rather than word choice alone.
- The way we describe AI shapes public perception, so careful language is essential to represent accurately what AI can and cannot do.
Are AI Systems Truly “Thinking”?
Many people use words like “think,” “know,” and “understand” when talking about artificial intelligence. These words are common when describing how the human mind works. However, applying these terms to AI may give a false impression. AI systems analyze data and recognize patterns. They do not have thoughts, feelings, or awareness, despite the human-like language often used.
How Language Can Be Misleading
Researchers warn that describing AI with mental verbs can make machines seem more alive than they really are. When phrases such as “AI decided” or “ChatGPT knows” appear, they can create the idea that AI is independent and intelligent. This may cause people to overestimate what AI can do and expect more from it than is realistic. It might also distract from the humans who built and control these systems.
Common Usage in News Reporting
A large analysis of more than 20 billion words from news articles across 20 countries shows that journalists rarely pair mental verbs with AI. Words like “learns” or “knows” appear less often than expected, and when they do, they are usually used in practical ways, such as describing basic needs (“AI needs data”) or tasks (“AI needs to be trained”). These uses do not necessarily suggest AI has human-like qualities.
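For readers curious how this kind of count might be carried out in practice, here is a minimal, hypothetical sketch in Python. It scans a tiny made-up set of sentences for mental verbs that directly follow AI-referring subjects. The sentence list, verb list, and subject patterns are illustrative assumptions only, not the corpus or the method used in the study described above.

```python
import re
from collections import Counter

# Hypothetical mini-corpus standing in for the news articles described above.
sentences = [
    "AI needs data to improve its predictions.",
    "ChatGPT knows the answer, according to the headline.",
    "The model was trained on billions of words.",
    "AI needs to be trained before deployment.",
    "Researchers say the chatbot understands context.",
]

# Mental verbs of interest (an illustrative, not exhaustive, list).
mental_verbs = {"knows", "thinks", "understands", "believes", "wants", "needs", "learns"}

# AI-referring subjects treated as potential agents of a mental verb (assumed list).
ai_subjects = r"(?:AI|ChatGPT|the model|the chatbot)"
pattern = re.compile(rf"\b{ai_subjects}\s+(\w+)", re.IGNORECASE)

# Count how often a mental verb immediately follows an AI-referring subject.
counts = Counter()
for sentence in sentences:
    for verb in pattern.findall(sentence):
        if verb.lower() in mental_verbs:
            counts[verb.lower()] += 1

print(counts)  # e.g. Counter({'needs': 2, 'knows': 1, 'understands': 1})
```

A real corpus study would work at a far larger scale and with proper linguistic parsing, but the basic idea is the same: find where an AI term is the grammatical subject of a mental verb, then examine how often, and in what contexts, that happens.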
The Context Matters
Even when mental verbs are present, their meaning is often straightforward. For instance, saying “AI needs data” simply explains a technical requirement, not a desire or thought. Sometimes, passive voice phrases like “AI needs to be trained” focus on human responsibility rather than suggesting AI has intentions. This demonstrates that context influences how we interpret language related to AI.
The Spectrum of Human-Like Language
Not all descriptions of AI fall into simple or extreme categories. Some phrases suggest understanding or reasoning, implying more human qualities. These instances point to a spectrum in which language can subtly hint at AI having intentions or awareness, even though such phrasing often reflects expectations rather than actual capabilities.
Why Words Matter
Language shapes how people view AI. Using overly human-like descriptions can create misconceptions about what these systems can do and overlook the humans responsible for them. Recognizing this helps creators, journalists, and readers develop a more accurate understanding of AI technologies. As AI evolves, careful word choice remains essential.
Researchers suggest that more studies are needed to see how specific words influence perceptions of AI. Overall, the way we describe these systems reflects our understanding and expectations. Choosing words thoughtfully can promote clearer communication and help set accurate standards for AI use.
