It is one of the many questions that have arisen since generative artificial intelligence (GenAI) fully entered our daily routine two years ago: who owns the huge amount of content that millions of people create every minute on ChatGPT, Gemini, Copilot, Perplexity, Claude, Meta AI, DeepSeek and so many other platforms that generate texts, images, videos and audio from a simple prompt?
Copyright and intellectual property are perhaps the main challenge facing the thriving and fast-growing artificial intelligence industry. Training a generative platform requires huge amounts of content: millions of texts, images, songs, sounds, videos, human voices, news stories, articles, works of art and more. Without that input, generative AI cannot work. It is crucial. Digital data is to AI what gasoline is to a car: the fuel that makes it run.
Now, much of this content is already published on the web: in digital libraries, news portals, websites, blogs, databases, image banks, social networks, Wikipedia and so on. And most of that material is protected by copyright. Yet the large technology companies used it, and continue to use it, to train their algorithms without paying a single dollar. That is one of the most contentious points of the current debate.
And so, in a very short time, the first copyright disputes arose. One of the most significant cases came at the end of 2023, when The New York Times became the first news outlet in the world to sue OpenAI and Microsoft (owners of ChatGPT and Copilot, respectively) for using millions of its articles, without permission, to train their language models. The Times argued that those articles belong to it and are protected by copyright. "The defendants seek to take advantage of the enormous investment The Times has made in its journalism," says the lawsuit, which is already shaping up to be a historic one.
Some compare the case with the one Napster faced in the early 2000s for enabling the mass downloading of copyrighted MP3 files. Here, the newspaper is demanding that OpenAI, valued at about 300 billion dollars, pay for the use of that content.
But there was a response. The artificial intelligence company argued that its language models need to learn from the text available on the web. It maintains that they do not copy or repeat articles, but simply "read" them and then generate new content without reproducing them verbatim. Just as a person can write about something they have read without violating copyright, the company argues, AI does the same.
According to this position, AI needs free access to all the information available online in order to work; otherwise, it could not exist. OpenAI frames its actions within what the legal field calls "fair use," a doctrine that allows protected material to be reused without prior authorization in certain contexts, such as teaching, criticism or research.
But there are other disputes underway. At the end of February, various players in the United Kingdom's creative and technology industries (media outlets, publishers, advertising and public relations agencies, illustrators, designers and several professional associations) formed the Creative Rights in AI Coalition, the group behind the "Make It Fair" campaign. What do they demand of the British government? That it reconsider the proposed changes to copyright law and protect the sector's work from the risk of being handed over, free of charge, to artificial intelligence companies to feed their models.
As part of the launch, the United Kingdom's main newspapers published, on the same day, an identical front page bearing the slogan "Make It Fair". In the campaign text they warned: "The creative industry contributes 120 billion pounds per year to the economy, resources that would be at risk if the government legitimizes content theft."
When AI copies your style (and doesn't pay for it): who owns what is created artificially?
Last April, social networks were flooded with thousands of AI-generated images that perfectly imitated the style of Studio Ghibli, the famous Japanese animation studio founded by Hayao Miyazaki and Isao Takahata, creator of classics such as My Neighbor Totoro, Princess Mononoke and Spirited Away. These Ghibli-style images were created by millions of users thanks to ChatGPT's new image generation engine.
It all started with a nudge from Sam Altman himself, CEO of OpenAI, who on March 22 posted on X (where he has 3.5 million followers) a Ghibli-style image with the provocative caption "feel the AGI". His post went viral immediately and, as part of a strategic move, OpenAI opened up free access to its image generator. From there, millions of people began creating "Ghiblified" versions of portraits, memes and famous photos. Altman even claimed the platform was adding a million new users per hour.
Miyazaki's fans raised their voices. So had he, although almost a decade earlier, in 2016. Many media outlets and users shared a video in which the Japanese filmmaker harshly criticized artificial intelligence. "It's an insult to life itself," he said. Although at the time he was not referring specifically to what ChatGPT can now do with his characteristic style, there is no sign his opinion has changed. Perhaps it has only hardened. We don't know.
What is clear is that Miyazaki's authorial signature and Studio Ghibli's legacy are, in theory, the parties most harmed by OpenAI's new image generation feature: millions of images created with their distinctive line, without anyone having paid a single dollar for it.
What began as a viral game ended up exposing the limits, or the total absence of them, in the use of generative artificial intelligence. Behind every charming image circulating on the web, there are technological, legal and ethical decisions that affect millions of people.
This, however, opens up another debate: is it possible to legally protect a form of human expression? How far does the law reach, and where does creativity begin?
Because a style, in itself, cannot be protected. This point is key: a "style" cannot be legally registered or claimed, whether it is an illustrator's line, a work's color palette or a photographer's characteristic framing. The law only covers the specific work, not the "how" of its making. In the digital world, that leaves many artists exposed: their styles can be imitated without legal consequences. How many videos have we seen in "Wes Anderson mode"?
The crux of the legal problem is clear: if OpenAI's algorithms, or those of any other artificial intelligence platform, can now replicate almost any style, that implies they were trained on hundreds or thousands of original works protected by copyright law. And, so far, AI companies have not paid to use that material to feed their models: photos, books, songs, voices, paintings and more.
Then again, inspiration cannot be regulated or confined by law. When someone creates an original work, whether they write, paint, compose a song or shoot a film, they are evidently influenced by everything they have consumed throughout their life: works that, for the most part, are protected by copyright. But that does not entail a payment for every influence. Inspiration is intangible. How do you quantify or monetize the imprint left by great works? How many films were made under the influence of Spielberg, Tarantino or Scorsese? How many books were born from reading Stephen King, Borges or Asimov?
Still, the underlying question persists: is it fair for a company to appropriate artists' work without paying them or even notifying them? Where does authorship stand in a world where technology already makes it possible to copy everything, instantly and at no cost?
The legal case made by those who defend intellectual property rests on three fundamental pillars:
The first is the value of creative work. An artistic work involves craft, original ideas, a personal perspective, subjectivity. It is not just an enumeration of facts or objects. It must therefore be legally protected, and its use to train neural networks, like those behind ChatGPT, should be paid for.
The second argument is economic: companies such as OpenAI and Microsoft charge users for access to their platforms. In other words, they derive direct profit from models trained on content that does not belong to them.
And the third pillar is reputational: AI-generated output can damage an author's brand, image or credibility by replicating their style, voice or ideas without authorization. This happens, for example, when a user requests a "Ghibli-style" image, or when the AI quotes a news outlet's text or attributes statements to it erroneously.
On the other side of the counter, technology companies respond: "If we cannot use the works the world has produced, there is no way to train AI and move it forward." The answers, for now, are not definitive, but the debate is well underway.
What seems beyond doubt is that the world urgently needs to rethink copyright law to address the challenges artificial intelligence has brought. Many of the legal frameworks that organized the twentieth century are already obsolete. And technology, as is well known, runs much faster than the pace of justice.
But the case of AI is even more complex: there is still no unified global regulation. Europe's AI law took effect in August 2024, but it does not yet apply in full. The United States and much of the world are still watching and waiting. The big companies (and the Trump administration) know that if they regulate too much, they risk falling behind China's progress.
The ending, for now, remains open, and what happens in court with these cases will lay the foundations of an inevitable future relationship: the one binding the generative AI industry to content creators, a relationship that is still searching for its own rules.
The author is a journalist and consultant specializing in technology and communication. He has been analyzing new media and social networks for 25 years and has written five books on media and technology, the most recent being "Machines Cannot Dream," on AI (2018). He was editor-in-chief of Forbes Argentina. He is a regular columnist for Clarín, Viva magazine, Newsweek, Infobae and Advertising Report, among other outlets, and a speaker at local and international events.