Anyone can sit down with an artificial intelligence (AI) program, such as ChatGPT, to write a poem, a children's story, or a screenplay. It's uncanny: the results can seem quite "human" at first glance. But don't expect anything with much depth or sensory "richness", as researchers explain in a new study.
They found that the Large Language Models (LLMs) that currently power generative AI tools are unable to represent the concept of a flower in the same way that humans do.
In fact, the researchers suggest that LLMs aren't very good at representing any 'thing' that has a sensory or motor component — because they lack a body and any organic human experience.
"A large language model can't smell a rose, touch the petals of a daisy or walk through a field of wildflowers. Without those sensory and motor experiences, it can't truly represent what a flower is in all its richness. The same is true of some other human concepts," said Qihui Xu, lead author of the study at Ohio State University, US.
The study suggests that AI's poor ability to represent sensory concepts like flowers might also explain why it lacks human-style creativity.
"AI doesn't have rich sensory experiences, which is why AI frequently produces things that satisfy a kind of minimal definition of creativity, but it's hollow and shallow," said Mark Runco, a cognitive scientist at Southern Oregon University, US, who was not involved in the study.
The study was published in the journal Nature Human Behaviour, June 4, 2025.
AI poor at representing sensory concepts
The more scientists probe the inner workings of AI models, the more they are finding just how different their 'thinking' is compared to that of humans. Some say AIs are so different that they are more like alien forms of intelligence.
Yet objectively testing the conceptual understanding of AI is tricky. If computer scientists open up an LLM and look inside, they won't necessarily understand what the millions of numbers changing every second really mean.
Xu and colleagues aimed to test how well LLMs can 'understand' things based on sensory characteristics. They did this by testing how well LLMs represent words with complex sensory meanings, measuring factors such as how emotionally arousing a thing is, whether you can mentally visualize it, and how strongly it is tied to movement or bodily action.
For example, they analyzed the extent to which humans experience flowers by smelling, or experience them using actions from the torso, such as reaching out to touch a petal. These ideas are easy for us to grasp, since we have intimate knowledge of our noses and bodies, but it's harder for LLMs, which lack a body.
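The comparison the researchers describe can be approximated in a few lines of code: ask an LLM to rate words on a sensory dimension, then check how closely its ratings track ratings gathered from people. The sketch below is only an illustration of that idea, not the study's actual method or data; the model name, the prompt wording and the human rating numbers are all assumptions.

```python
# Hypothetical sketch: ask an LLM to rate words on one sensory dimension
# and compare its ratings with human ratings. Not the study's actual code.
from openai import OpenAI          # assumes the OpenAI Python client is installed
from scipy.stats import spearmanr  # rank correlation between the two rating lists

client = OpenAI()  # reads OPENAI_API_KEY from the environment

WORDS = ["flower", "rose", "justice", "thunder", "velvet"]

# Human ratings on a 0-5 "experienced by smelling" scale -- placeholder numbers
HUMAN_SMELL_RATINGS = {"flower": 4.6, "rose": 4.8, "justice": 0.2,
                       "thunder": 0.5, "velvet": 1.1}

def llm_rating(word: str, dimension: str = "smelling") -> float:
    """Ask the model for a 0-5 rating of how much a word is experienced via a sense."""
    prompt = (f"On a scale from 0 to 5, to what extent do you experience "
              f"'{word}' by {dimension}? Answer with a single number.")
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    return float(reply.choices[0].message.content.strip())

llm_scores = [llm_rating(w) for w in WORDS]
human_scores = [HUMAN_SMELL_RATINGS[w] for w in WORDS]

# A high rank correlation would mean the model orders words by "smellability"
# roughly the way people do; the study reports that this alignment weakens
# for words tied to the senses and the body.
rho, p_value = spearmanr(llm_scores, human_scores)
print(f"Spearman correlation with human ratings: {rho:.2f} (p = {p_value:.3f})")
```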
Overall, the researchers found that LLMs represent words well, as long as those words have no strong connection to the senses or to the motor actions we perform with our bodies.
But when it comes to words that have connections to things we see, taste or interact with using our body, that's where AI fails to convincingly capture human concepts.
What's meant by 'AI art is hollow'
AI creates representations of concepts and words by analyzing patterns in the dataset used to train it. This idea underlies every algorithm or task, from writing a poem to predicting whether an image of a face shows you or your neighbor.
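One way to picture what "analyzing patterns" yields is a word embedding: a text-trained model maps each word to a vector, and words that appear in similar contexts end up close together. The sketch below, which assumes the sentence-transformers library and a small pretrained model, shows the idea; the closeness it measures comes from how words co-occur in text, not from any sensory experience of the things they name.

```python
# Minimal sketch of pattern-based word representations, assuming the
# sentence-transformers library and the "all-MiniLM-L6-v2" model.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

words = ["rose", "daisy", "wildflower", "spreadsheet"]
vectors = model.encode(words)  # one embedding vector per word

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: how closely two word vectors point in the same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Flower words land near each other because they occur in similar text contexts --
# the model has never smelled, touched or seen any of them.
for word, vec in zip(words[1:], vectors[1:]):
    print(f"similarity(rose, {word}) = {cosine(vectors[0], vec):.2f}")
```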
Most LLMs are trained on text data scraped from the internet, but some are also trained on visual data, such as still images and videos.
Xu and colleagues found that LLMs trained with visual data showed some similarity to human representations in vision-related dimensions, and these models beat LLMs trained only on text. But the advantage was limited to visual learning; the comparison left out other human sensations, like touch or hearing.
This suggests that the more sensory information an AI model receives as training data, the better it can represent sensory aspects.
AI keeps learning and improving
The authors noted that LLMs are continually improving and said it was likely that AI will get better at capturing human concepts in the future.
Xu said that when future LLMs are augmented with sensor data and robotics, they may be able to actively make inferences about and act upon the physical world.
But independent experts DW spoke to suggested the future of sensory AI remained unclear.
"It's possible an AI trained on multisensory information could deal with multimodal sensory aspects without any problem," said Mirco Musolesi, a computer scientist at University College London, UK, who was not involved in the study.
However, Runco said even with more advanced sensory capabilities, AI will still understand things like flowers completely differently from humans.
Our human experience and memory are tightly linked with our senses — it's a brain-body interaction that stretches beyond the moment. The smell of a rose or the silky feel of its petals, for example, can trigger joyous memories of your childhood or lustful excitement in adulthood.
AI programs do not have a body, memories or a 'self'. They lack the ability to experience the world or interact with it the way animals, human animals included, do. That, said Runco, means "the creative output of AI will still be hollow and shallow."
Edited by: Zulfikar Abbany