How AI Creates Meaning from Absurd Expressions

Understanding AI’s Interpretation of Made-Up Sayings
Artificial Intelligence (AI), particularly tools developed by tech giants like Google, has made significant strides in natural language processing. However, recent examples show that these tools often miss the mark when interpreting idiomatic expressions, especially ones that were simply made up.
The Quirks of AI: Badger Sayings Explained
A recent example involves a nonsensical saying: “You can’t lick a badger twice.” According to Google’s AI assistant, Gemini, this saying means that “you can’t trick or deceive someone a second time after they’ve been tricked once.” The catch is that the saying was invented on the spot and has no established meaning, which makes the confident interpretation especially odd. This illustrates a key issue faced by AI: the tendency to “hallucinate,” that is, to generate explanations for statements that have no real meaning.
How Do AI Systems Process Language?
Generative AI learns statistical patterns from vast datasets of how people use language online, including which words tend to follow which and which sayings appear frequently. Unfortunately, this process doesn’t reliably distinguish legitimate idioms from invented phrases. Rather than flagging that a phrase isn’t real, the AI tends to accept it at face value and produce a convoluted explanation.
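To make that concrete, here is a deliberately tiny sketch of the underlying statistical idea. It is not Gemini’s actual architecture (modern systems use large neural networks, not bigram counts), and the miniature “corpus” is invented for illustration, but it shows why a purely likelihood-based model can score and respond to any phrase, real or made up, without ever knowing the difference.

```python
from collections import Counter

# Tiny invented "corpus" standing in for the web-scale text a real model learns from.
corpus = (
    "you can t have your cake and eat it "
    "actions speak louder than words "
    "you can t judge a book by its cover"
).split()

vocab = set(corpus)
bigram_counts = Counter(zip(corpus, corpus[1:]))
unigram_counts = Counter(corpus)

def phrase_probability(phrase: str) -> float:
    """Add-one-smoothed bigram probability: every phrase, real or invented, scores above zero."""
    words = phrase.split()
    prob = 1.0
    for prev, curr in zip(words, words[1:]):
        prob *= (bigram_counts[(prev, curr)] + 1) / (unigram_counts[prev] + len(vocab))
    return prob

# The model only sees relative likelihoods; nothing tells it that one phrase is an
# established idiom and the other was invented five minutes ago.
print(phrase_probability("you can t judge a book"))   # genuine saying
print(phrase_probability("you can t lick a badger"))  # made-up saying, still gets a nonzero score
```

The genuine idiom scores higher, but the invented one still receives a probability, and a system built to always answer will turn that probability into a fluent-sounding explanation.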
The Hallucination Phenomenon
The phenomenon where AI systems, like chatbots, fabricate information is known as “hallucination.” These fabrications create accuracy and reliability concerns for developers striving to make AI more trustworthy. The challenge lies in preventing these systems from concocting plausible-sounding but baseless responses, which can distort users’ understanding.
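Developers attack this at several layers, one of the simplest being the prompt itself. The sketch below is a hypothetical illustration of that idea, not a documented Gemini safeguard: the ask_model() helper is an invented stand-in for whatever chat API is in use, and the prompt simply gives the model explicit permission to abstain rather than invent a meaning.

```python
def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-model API call; returns a canned reply here."""
    return "I could not verify this saying; it does not appear to be an established idiom."

def explain_idiom_cautiously(phrase: str) -> str:
    # Instead of asking "What does X mean?", the prompt makes abstaining an acceptable answer.
    prompt = (
        f'Is "{phrase}" an established, documented idiom? '
        "If you cannot verify that it is, say so plainly instead of inventing a meaning. "
        "Only explain it if you are confident the saying is real."
    )
    return ask_model(prompt)

print(explain_idiom_cautiously("you can't lick a badger twice"))
```

Prompt wording alone does not eliminate hallucination, but it illustrates the direction of the work: making “I can’t verify this” an acceptable output instead of forcing a confident answer.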
Examples of AI Interpretations
When asked to define another fabricated phrase, “it’s better to flick the arm with an old banana,” Gemini offered an elaborate response. It described “flicking” as a derogatory gesture in some cultures and connected the mention of a banana to supposedly similar practices in Brazil. Despite the phrase being nonsense, the response showed how readily the AI stitches together plausible-sounding cultural details.
In a similarly perplexing response, Gemini interpreted the made-up saying “three popes are better than none” as a historical reference to the Western Schism, which began in the late 14th century, a period when rival claimants, at one point three at once, each insisted they were the legitimate Pope. The AI suggested the phrase comments on the chaos of having multiple leaders within a single structure, reflecting its determination to find context for nonsense.
The Problem with Gibberish Phrases
When given the phrase “you can’t bong the ferret once,” the AI claimed it related to ferret-legging, a pub sport in which competitors place live ferrets inside their trousers and try to keep them from escaping. While entertaining, such interpretations can spread misinformation, nudging users away from facts and toward confusing analogies.
Implications for Fact-Checking
Greg Jenner, an author and historian, highlighted a significant concern regarding AI-generated information. He argues that the ability to verify quotes or sources online may diminish if AI systems prioritize statistical likelihoods over factual accuracy. His remarks reflect a growing unease among users regarding reliance on AI for trustworthy information.
The Case of Shrewsbury Hats
Another example came when Gemini was given the phrase “there’s never two hats in Shrewsbury,” which it interpreted as a comment on the uniqueness of professions within a particular town. The explanation has a certain charm, but it underscores how arbitrary these AI interpretations can be.
In conclusion, AI’s ability to engage with language remains impressive, yet it is clear that it has limitations—especially when faced with nonsensical sayings. As AI technology continues to advance, ensuring that it provides reliable and factual information will be crucial to its acceptance and usability in everyday contexts.