Generative AI systems like large language models and text-to-image generators can pass rigorous exams that are required of anyone seeking to become a doctor or a lawyer. They can perform better than most people in Mathematical Olympiads. They can write halfway decent poetry, generate aesthetically pleasing paintings and compose original music.
These remarkable capabilities may make it seem like generative artificial intelligence systems are poised to take over human jobs and have a major impact on almost all aspects of society. Yet while the quality of their output sometimes rivals work done by humans, they are also prone to confidently churning out factually incorrect information. Skeptics have also called into question their ability to reason.
Large language models have been built to mimic human language and thinking, but they are far from human. From infancy, human beings learn through countless sensory experiences and interactions with the world around them. Large language models do not learn as humans do – they are instead trained on vast troves of data, most of which is drawn from the internet.
The capabilities of these models are very impressive, and there are AI agents that can attend meetings for you, shop for you or handle insurance claims. But before handing a large language model the keys to any important task, it is worth assessing how its understanding of the world compares with that of humans.
I’m a researcher who studies language and meaning. My research group developed a novel benchmark that can help people understand the limitations of large language models in understanding meaning.
Making sense of simple word combinations
So what “makes sense” to large language models? Our test involves judging the meaningfulness of two-word noun-noun phrases. For most people who speak fluent English, noun-noun word pairs like “beach ball” and “apple cake” are meaningful, but “ball beach” and “cake apple” have no commonly understood meaning. The reasons for this have nothing to do with grammar. These are phrases that people have come to learn and commonly accept as meaningful, by speaking and interacting with one another over time.
We wanted to see if a large language model had the same sense of the meaning of word combinations, so we built a test that measured this ability using noun-noun pairs, for which grammar rules are useless in determining whether a phrase has a recognizable meaning. By contrast, with an adjective-noun pair such as “red ball,” grammar alone does the work: the phrase is meaningful, while its reversal, “ball red,” is a meaningless word combination.
The benchmark does not ask the large language model what the words mean. Rather, it tests the large language model’s ability to glean meaning from word pairs, without relying on the crutch of simple grammatical logic. The test does not evaluate an objective right answer per se, but judges whether large language models have a similar sense of meaningfulness as people.
We used a collection of 1,789 noun-noun pairs that had been previously evaluated by human raters on a scale of 1 (does not make sense at all) to 5 (makes complete sense). We eliminated pairs with intermediate ratings so that there would be a clear separation between pairs with high and low levels of meaningfulness.
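For readers who want a concrete picture of that filtering step, here is a minimal Python sketch. The file name, column names and cutoff values are illustrative assumptions, not the ones used in our study.

```python
import csv

# Hypothetical input: one row per noun-noun pair with its mean human rating
# on the 1 (does not make sense at all) to 5 (makes complete sense) scale.
LOW_CUTOFF = 2.0   # assumed threshold for "clearly not meaningful"
HIGH_CUTOFF = 4.0  # assumed threshold for "clearly meaningful"

low_pairs, high_pairs = [], []
with open("noun_noun_ratings.csv", newline="") as f:   # hypothetical file name
    for row in csv.DictReader(f):
        rating = float(row["mean_human_rating"])        # hypothetical column name
        if rating <= LOW_CUTOFF:
            low_pairs.append(row["pair"])
        elif rating >= HIGH_CUTOFF:
            high_pairs.append(row["pair"])
        # Pairs with intermediate ratings are discarded, leaving only items
        # on which human judgments clearly separate.

print(len(low_pairs), "low-meaningfulness pairs,",
      len(high_pairs), "high-meaningfulness pairs")
```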
We then asked state-of-the-art large language models to rate these word pairs in the same way that the human participants in the previous study had, using identical instructions. The large language models performed poorly. For example, humans rated “cake apple” as having low meaningfulness, with an average rating of around 1 on a scale of 0 to 4. But all of the large language models rated it as more meaningful than 95% of the human raters did, giving it between 2 and 4. The gap wasn’t as wide for meaningful phrases such as “dog sled,” though there were also cases of a large language model rating such phrases lower than 95% of humans.
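To make the procedure concrete, here is a minimal Python sketch of how a single pair could be scored by a model and compared with the human ratings. The prompt wording and the ask_model helper are placeholders, not the actual instructions or API used in the study.

```python
import re

def ask_model(prompt: str) -> str:
    """Placeholder for whatever large language model API is being evaluated."""
    raise NotImplementedError

def rate_pair(pair: str) -> int | None:
    # Illustrative prompt only; the study gave the models the same
    # instructions that the human raters had received.
    prompt = (
        "On a scale of 0 (does not make sense at all) to 4 (makes complete sense), "
        f"how meaningful is the phrase '{pair}'? Answer with a single number."
    )
    reply = ask_model(prompt)
    match = re.search(r"[0-4]", reply)
    return int(match.group()) if match else None

def rates_higher_than_95_percent(model_rating: float, human_ratings: list[float]) -> bool:
    """True if the model rated the pair as more meaningful than 95% of human raters did."""
    return sum(model_rating > h for h in human_ratings) / len(human_ratings) >= 0.95
```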
To aid the large language models, we added more examples to the instructions to see if they would benefit from more context about what counts as a highly meaningful versus a not-meaningful word pair. Their performance improved slightly, but it was still far poorer than that of humans. To make the task easier still, we asked the large language models to make a binary judgment – say yes or no to whether the phrase makes sense – instead of rating meaningfulness on a scale of 0 to 4. Here the performance improved further, with GPT-4 and Claude 3 Opus doing better than the others – but they were still well below human performance.
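The binary version of the task can be sketched the same way, reusing the placeholder ask_model from the sketch above; the prompt text and the human consensus labels are again assumptions for illustration.

```python
def judge_pair_binary(pair: str) -> bool:
    # Illustrative yes/no prompt, not the exact wording used in the study.
    prompt = (
        f"Does the two-word phrase '{pair}' make sense as a commonly understood "
        "English expression? Answer only 'yes' or 'no'."
    )
    return ask_model(prompt).strip().lower().startswith("yes")

def agreement_with_humans(pairs: list[str], human_says_meaningful: dict[str, bool]) -> float:
    """Fraction of pairs on which the model's yes/no call matches the human consensus."""
    hits = sum(judge_pair_binary(p) == human_says_meaningful[p] for p in pairs)
    return hits / len(pairs)
```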
Creative to a fault
The results suggest that large language models do not have the same sense-making capabilities as human beings. It is worth noting that our test relies on a subjective task, where the gold standard is ratings given by people. There is no objectively right answer, unlike typical large language model evaluation benchmarks involving reasoning, planning or code generation.
The low performance was largely driven by the fact that large language models tended to overestimate the degree to which a noun-noun pair qualified as meaningful. They made sense of things that should not make much sense. In a manner of speaking, the models were being too creative. One possible explanation is that the low-meaningfulness word pairs could make sense in some context. A beach covered with balls could be called a “ball beach.” But there is no common usage of this noun-noun combination among English speakers.
If large language models are to partially or completely replace humans in some tasks, they’ll need to be further developed so that they can get better at making sense of the world, in closer alignment with the ways that humans do. When things are unclear, confusing or just plain nonsense – whether due to a mistake or a malicious attack – it’s important for the models to flag that instead of creatively trying to make sense of almost everything.
If an AI agent that automatically responds to emails receives a message intended for another user by mistake, an appropriate response may be “Sorry, this does not make sense,” rather than a creative interpretation. If someone in a meeting made incomprehensible remarks, we want an agent that attended the meeting to report that the comments did not make sense. And if the details of an insurance claim don’t make sense, the agent should say, “This seems to be talking about a different claim,” rather than simply “claim denied.”
In other words, it’s more important for an AI agent to have a similar sense of meaning and behave like a human would when uncertain, rather than always providing creative interpretations.

The post “AIs flunk language test that takes grammar out of the equation” by Rutvik Desai, Professor of Psychology, University of South Carolina, was published on 02/26/2025 by theconversation.com