Cultural stagnation caused by AI is already happening

Generative AI was trained on centuries of human-produced art and writing.

But scientists and critics have wondered what would happen once AI became widely adopted and began training on its own results.

A new study offers some answers.

In January 2026, artificial intelligence researchers Arend Hintze, Frida Proschinger Åström, and Jory Schossau published a study showing what happens when generative AI systems are left to operate autonomously, generating and interpreting their own results without human intervention.

The researchers linked a text-to-image system with an image-to-text system and let them iterate—image, description, image, description—over and over again.
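
In code, the setup amounts to a simple feedback loop. The sketch below (Python, not the authors' code) illustrates the idea; `generate_image` and `caption_image` are hypothetical stand-ins for whatever text-to-image and image-captioning models one plugs in.

```python
def run_captioning_loop(initial_prompt, generate_image, caption_image, steps=20):
    """Alternate text-to-image and image-to-text, feeding each caption back in.

    `generate_image` and `caption_image` are any callables wrapping a
    text-to-image model and an image captioner, respectively. The study's
    observation is that, whatever the starting prompt, the captions drift
    toward a small set of generic themes.
    """
    prompt = initial_prompt
    captions = [prompt]
    for _ in range(steps):
        image = generate_image(prompt)   # text -> image
        prompt = caption_image(image)    # image -> text, becomes the next prompt
        captions.append(prompt)
    return captions
```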

Regardless of the diversity of the initial prompts and the degree of randomness allowed, the results quickly converged toward a limited set of generic and familiar visual themes: atmospheric cityscapes, grandiose buildings, and pastoral landscapes. Even more surprising, the system quickly “forgot” the initial prompt.

The researchers called the results “visual elevator music”: nice and polished, but with no real meaning.

For example, they began with the prompt: “The Prime Minister was reviewing strategic documents, trying to sell the public on a fragile peace agreement while dealing with the pressure of his office in the face of imminent military action.” The resulting image was then captioned by the AI, and that caption was used as a prompt to generate the next image.

After repeating this cycle several times, the researchers ended up with a bland image of a formal interior space: no people, no drama, no real sense of time or place.

As a computer scientist who studies generative models and creativity, I consider the findings of this study to be an important contribution to the debate over whether AI will lead to cultural stagnation.

The results show that generative AI systems tend to homogenize when used autonomously and repeatedly. They even suggest that AI systems already operate this way by default.

The familiar is the default

This experiment might seem irrelevant: most people don’t ask AI systems to describe and regenerate their own images indefinitely. Importantly, though, the convergence toward a set of generic images occurred without retraining. No new data was added, and nothing was learned. The collapse arose purely from repeated use.

But the design of the experiment can be seen as a diagnostic tool: it reveals what generative systems preserve when no one intervenes.

This has broader implications, because modern culture is increasingly influenced by these types of workflows: images are summarized into text, text is converted to images, content is classified, filtered and regenerated as it moves between words, images and videos. Articles on the web are now more likely to be written by AI than humans. Even when humans remain in the process, they often choose from AI-generated options rather than creating from scratch.

The findings show that the default behavior of these systems is to compress meaning toward what is most familiar, recognizable, and easy to regenerate.

Cultural stagnation or acceleration?

In recent years, skeptics have warned that generative AI could cause cultural stagnation by flooding the web with synthetic content that will then train future AI systems. Over time, they argue, this recursive cycle would reduce diversity and innovation.

Technology advocates counter that fears of cultural decline accompany every new technology. Humans, they maintain, will always be the final arbiter of creative decisions.

What has been missing in this debate is empirical evidence showing where homogenization really begins.

The new study does not test retraining on AI-generated data. It shows something more fundamental: homogenization occurs even before retraining comes into play. The content that generative AI systems produce autonomously and repeatedly is already compressed and generic.

This reframes the stagnation argument: the risk is not just that future models will train with AI-generated content, but that AI-mediated culture is already filtered in ways that favor the familiar, the describable, and the conventional.

Retraining would amplify this effect, but it is not its origin.

This is not a moral panic

Technology advocates are right about one thing: culture has always adapted to new technologies. Photography did not kill painting; cinema did not kill theater. Digital tools have enabled new forms of expression.

But those earlier technologies never forced culture to be constantly reconfigured on a global scale through different media. They were not summarizing, regenerating, and classifying cultural products—news, songs, memes, academic articles, photos, or social media posts—millions of times a day, guided by the same assumptions about what is “typical.”

The study shows that when meaning is forced repeatedly through these pipelines, diversity collapses, not because of bad intentions, malicious design, or corporate negligence, but because only certain types of meaning survive repeated conversions from text to image and back to text.

This does not mean that cultural stagnation is inevitable. Human creativity is resilient. Institutions, subcultures and artists have always found ways to resist homogenization. But, in my opinion, the findings show that stagnation is a real risk—not speculation—if generative systems are left to operate as they currently are.

They also help clarify a common misconception about AI creativity: producing infinite variations is not the same as producing innovation. A system can generate millions of images by exploring just a small corner of cultural space.

In my own research on creative AI, I have found that novelty requires designing AI systems with incentives to deviate from the norm. Without that, systems optimize toward familiarity because that’s what they’ve learned best. The study reinforces this point empirically: autonomy alone does not guarantee exploration; in some cases, it accelerates convergence.

This pattern has already been observed in the real world: one study found that AI-generated lesson plans showed the same drift toward conventional and uninspiring content, underscoring that the systems tend toward the typical rather than the unique or creative.

Loss in translation

Every time you write a caption for an image, details are lost. The same thing happens when generating an image from text. And this happens whether it is done by a human or a machine.

In that sense, the observed convergence is not an exclusive failure of AI; it reflects a deeper property of moving from one medium to another. When meaning passes repeatedly between two formats, only the most stable elements persist.

But by highlighting what survives during repeated translations between text and images, the authors show that meaning within generative systems is processed with a silent tilt toward the generic.

The implication is serious: even with human guidance—whether writing prompts, selecting outputs, or refining results—these systems still eliminate some details and amplify others in ways that skew toward the “average.”

If generative AI is to enrich culture rather than flatten it, systems must be designed to resist convergence toward statistically average outcomes. There must be incentives to deviate and support for less common and less conventional forms of expression.

The study makes one thing clear: without these interventions, generative AI will continue to drift toward mediocre and uninspiring content.

Cultural stagnation is no longer speculation. It’s already happening.

*Ahmed Elgammal is a Professor of Computer Science and Director of the Art and AI Laboratory at Rutgers University.

This article was originally published by The Conversation
