In the tattoo case, tattooing Pikachu on a person does not harm Nintendo's business, but copying another tattoo artist's work or style directly takes away their business. Tattoo art is an industry where your art style largely defines your career.
I can see the argument that LLMs are transformative, but when you specifically evaluate your model on copying a company/artist and then advertise that you can clone that specific studio's work, that's harming them and, in my opinion, crossing a line.
This isn't an individual vs. corporation thing (though people are very selfish).
There's so much more here than just corporate vs. individual. There's the sheer scale of it, the enforcement double standards, questions of consent, taking advantage of the commons (artists' public work), etc. To characterize it as people simply not liking business is plain wrong.
But the AI companies are making billions off of this (and a lot of other) IP, so it totally makes sense for the IP holders to care about copyright protection.
That's the problem here, there's no creative input apart from the prompt, so obviously the source is blatant (and often in the prompt).
Technically, you can't, but there's no practical way to enforce copyright against private work.
You can paint a Studio Ghibli-style painting -- the style isn't protected.
These rules assume that copying a style is labor intensive, and rightly reward the worker.
When an LLM can reproduce thousands upon thousands of Ghibli-style paintings effortlessly, not protecting the style seems less fair, because the work of establishing the Ghibli style was much harder than copying it at scale.
I'm in the "don't fight a roaring ocean, go with the flow" boat:
If your entire livelihood depends on having the right to distribute something anyone can copy, get a stronger business.
or even better: if you make art, get a stronger business
or maybe simply: stop making art
That’s not really what I’m saying.
Large brand copyright holders gonna sue. It is part of their core business.
If you’re a musician, you gotta tour to make money. If you’re a painter, you need patrons.
There already were a ton of ways organised IP theft would make money on your creative force.
AI training seems different because it can gobble up anything out of order and somehow digest it into something valuable without human intervention.
The training is absurdly expensive, but since the AI companies capture the profit on the value created while the owners of the training inputs capture nothing, this will just mark the end of the open internet for non-FOSS normies.
In the poster shop you can choose between a bunch of classics, or you can upload your own AI-generated picture and have that printed as a poster.
Art was always expensive, and posters as an alternative to paintings existed way before AI. Same with copying all kinds of art.
The main difference seems to be that we can’t clearly pay royalties to anyone for AI artwork, because it’s not obvious exactly where it came from.
There was a YouTube channel dedicated to Warhammer lore narrated by an AI David Attenborough. It got taken down for infringing on his voice, but a replacement sprang up, starting out with a generic old man's voice and gradually becoming more Attenborough-like over time. When should the Attenborough estate start to get royalties? At 60% Attenborough? Or at 80% Attenborough?
I'll answer your question, but my question for you is: why were you buying wall decorations in the first place? To me, it sounds like you were searching for a product category, and not specifically for art.
Regarding your example, if the AI is capable of imitating David Attenborough by including his name in the prompt, then it was probably trained on his data. If he didn't consent, then I might argue that is ethically wrong and, in my view, theft. If the channel was not monetized and done without his consent, I might argue that is just an ethical failing. In using his voice, the channel betrays the fact that it has value, otherwise they would continue to use the random old man voice.
No dollars burned, just huge huge amounts of cultural capital. Just a reduction to a more primitive, less cultured/developed version of what it means to be human. Less thinking out loud, sharing of thoughts, exposure to new thoughts. More retreating into (a now lesser developed, now culturally atrophied) self.
Two of the four core tests for fair use hinge on this.
1. Purpose and character of the use. With emphasis on whether the copy was made for commercial use.
4. Effect on the work's value, and the creator's ability to exploit their work.
---
Both can be dramatically impacted by the intent of the copy, usually with enforcement and punishment also being considerably stronger if the copy is being made for commercial gain and not private use.
If I was the first person to invent a car, for example, and I named its method of locomotion "walking", would you treat it the same as a human and let it "walk" in all the same places humans walk? After all, it's simply using kinetic energy and friction to propel itself along the ground, as we do.
Because a car is so obviously different to a human, we intuitively understand it requires an alteration to our rules in order for us to coexist peacefully. Since LLMs are so abstract, we don't intuitively understand this distinction, and so continue to treat them as if they should be bound by the same rules and laws as us.
I rarely use these tools (I'm not in marketing, game design, or any related field), but I can see the problem these tools are causing to artists, etc.
Any LLM company offering these services needs to pay the piper.
We can argue if that should be the case or not, which is a different issue.
However, it should not be legal to automate something at scale that is illegal when done by an individual human. Allowing it just tips the scale against labor even more.
If you paint a Studio Ghibli totoro on your cup, pillow, PC, T-shirt, nobody is going to care. If you do this a thousand times it obviously is an issue. And if you charge people to access you to do this, it is also obviously an issue.
If I make a for-profit AI, that AI is a product. And if that product required others' copyrighted works to derive its deliverable, it is by definition creating derivative works. Again: creating a small human is not creating a product; creating a for-profit AI is creating a product.
If my product couldn't produce the same output without having at some point consumed the other works, I've triggered copyright concerns.
If I make and train a small human, I am not creating a for-profit product, so copyright doesn't come into play at all. The two are not similar in any way. The human is not the product. If THEY create a product later (a drawing, in this case), then THAT is where copyright comes in.
None of the training data was originally drawn by OpenAI. OpenAI also actively monetizes that work.
Fair use is a defense to infringement, like self defense is a defense to homicide. If you infringe but are noncommercial, it is more likely to be ruled fair use. If Disney did a Ghibli style ripoff for their next movie, that is clearly not fair use.
OpenAI is clearly gaining significant material benefits from their models being able to infringe Ghibli style.
Only if you copy their characters. If you make your own character and story and are replicating the Ghibli style, it is OK. Style is not copyrightable.
Of course not, because by this twisted logic every piece of art is inspired by what came before; you could claim Ghibli is just a derivative of its predecessors, and then nobody has any copyright...
That's why Robin Hood-style piracy is fine, but corporate piracy is not. I downloaded a Ghibli movie because I couldn't afford the DVD. I didn't copy it to VHS and then sell it via e-commerce to 1000 people.
AI companies grabbed IP and pulled in hundreds of thousands of customers with it, then collect those customers' interactions either way and profit exponentially, while Ghibli, Square Enix et al. don't profit from users using more and more AI...
And most people are not "training" ML models... people are using copy machines that have already learned, to compensate for their own unwillingness to put effort into things.
A lot of us have been there, and enough of us decided to move beyond it and become better at being human, i.e. evolve, rather than getting cozy in some sub-singularity. Some didn't and won't, and they are the easiest to profit from.
In this case, producing anything commercial, and anything made with AI, period, should always be disclosed.
Since at this stage we can often tell when something is AI (though not everyone, and not always), especially food images at a restaurant, for me that immediately downgrades the quality or value of a product. That's going to be the natural human response. And users of the tech will likely be lumped in with the very poor attempts, downgrading the value of anyone who uses it. That is natural payback for trying to go commercial.
However, in the hobbyist space (the space where humans learn), what AI will also do is expand the creative space massively: people will get to iterate much faster with their own styles, and new styles will emerge. Just like the invention of writing and publishing: the original writers were people with tremendous time and resource privilege on their hands, but the art of writing would never have bloomed if it hadn't become available to everyone over time. Humans then draw higher-order conclusions and insights from the abundance, even if it takes energy to filter it.
That said, abuse in the form of pretending something generated is real, or taking credit for generated work as one's own, should be illegal. If you teach the moral compass along with the tool, or build identification in along with the work, you will get a lot more authentic novelty even with AI tools.
Copyright: attaches to a work as soon as it is fixed, not just on publication. Registering it allows you to seek statutory damages.
Trademark: covers defining characteristics. This can be muddy since defining characteristics are not necessarily the same as style.
It can be especially confusing when certain things become characteristic of a genre. This is mostly what transformer and diffusion models capture: the strongest weights will reflect what's most common in the training data. You get a lot of em dashes and heroes in colorful outfits, but those don't constitute a violation on their own in any modern model, unless the operator of the model goes out of their way to infringe.
You wouldn't steal a handbag.
You wouldn't steal a television.
You wouldn't steal a DVD.
Downloading someone's content for AI training is stealing.
Stealing is a crime.
Really, really happy that someone is calling out data-harvesting for what it really is. We've always known we should harvest the internet for absolutely everything. If we don't, fine: we can squabble about our IP, and China will just ingest the entire Internet, make a model out of it, then release the Ghiblifier, and we'll all download it or run it on OpenRouter. You can already download Hunyuan Image 3.0, and it's just as good as OpenAI's image generation, if not better. What's Japan going to do about that?
Also, for the record, I would absolutely download a handbag, or a television, or a DVD, or a car, if I could. I'd be pumping out Louis Vuittons and iPhones for my kids all day long, and driving a Lambo because why not?
Probably not as much a generational thing ("old people", versus "Millennials" or really, Gen X at that time), as just a tone-deaf shaming attempt by our corporate overlords.
Copyright/patent/intellectual property protection for more than 25 years is illogical, in my personal weird opinion. I think it grinds innovation to a halt and only serves to generate money. In other news, I would gladly receive a lot of money forever just because I invented some thing ages ago. I'm only a humble capitalist human. :)
OpenAI will just fork out a bunch of money and settle everything, because that is how money works.
I was once accused of being a pirate back then because I was talking about downloading in a chatroom.
What was I downloading? Linux. Probably Debian. It's the same kind of nonsense as people going around accusing everyone whose process they don't understand of using AI. I'm surprised no one has come after me for making flame fractals.
You wouldn't download a car.
Ironically, that whole anti-piracy campaign used a pirated font: https://arstechnica.com/gadgets/2025/04/you-wouldnt-steal-a-...
The difference is that you're making copies of something:
Scenario 1: I take your baguette. Your hand is empty. You starve.
Scenario 2: I take your baguette recipe. You still have a baguette recipe. You continue to live.
Scenario 3: I take your baguette recipe and publish it. Your customers leave you. You starve.
Copying someone's IP can also impact you economically if your financial model depends on you being the only distributor of copies of something. Should we enforce people's right to a monopoly on the distribution of intellectual property?
Or should we accept that in reality, copies are free and distribution monopolies only exist in inefficient markets?
It seems totally right to protect people's intellectual property.
But information wants to be free.
It's a dilemma. Do we side with what feels right, or with what's real?
It is accepted, within limits, for humans to do transformative work, but it has not yet been established what the limits for AIs are, primarily (IMO): 1. whether the work is transformative or not; 2. whether the scale of transformation/distribution changes the nature of the work.
Because of the scaling limits of the human brain, you cannot plug more brains into a building to pump out massive amounts of transformative work; it takes a lot for humans to do it, which creates a natural limit on the scale that's possible.
Scale and degree matter even if the process is 100% analogous to how humans do it. The only natural limitation on computers doing it is compute, which requires some physical server space and electricity, both of which can be minimized with further technological advances. This completely changes the foundation of the concept of "transformative work," which previously required a human being.
We know that humans don't work this way, or at least aren't limited to this approach. A quick perusal of an Organization for Transformative Works project (e.g. AO3, Fanlore) will reveal a lot of ideas which are novel. See the current featured article on Fanlore (https://fanlore.org/wiki/Stormtrooper_Rebellion), or (after heavy filtering) the Crack Treated Seriously tag on AO3 (https://archiveofourown.org/works?work_search[sort_column]=k...). You can't get stuff like this from a large language model.
Doesn't this apply to the printing press?
For me the core issue is not that OpenAI can generate some copies of the art; the issue is that some artists cannot earn an honest living and that people do not care about artists generally. I wonder how many of the people commenting here have ever bought art from an artist.
I personally doubt that AI can make a movie similar to Studio Ghibli's (of which I've seen a lot, and which I love and paid for), and I also wonder how much of the issue here is corporate profit rather than poor artists (do you know who owns Studio Ghibli without looking it up?).
It's fine to boycott AI content, but you could also decide to boycott content produced by large corporations for profit.
It is not that difficult to understand.
Also counts as "downloading someone's content" - at least partially
We will eventually need that policeman's helmet as a retaliation means /s
From its description:
"In 1710, the British Parliament passed a piece of legislation entitled An Act for the Encouragement of Learning. It became known as the Statute of Anne, and it was the world’s first copyright law. Copyright protects and regulates a piece of work - whether that's a book, a painting, a piece of music or a software programme. It emerged as a way of balancing the interests of authors, artists, publishers, and the public in the context of evolving technologies and the rise of mechanical reproduction. Writers and artists such as Alexander Pope, William Hogarth and Charles Dickens became involved in heated debates about ownership and originality that continue to this day - especially with the emergence of artificial intelligence. With:
Lionel Bently, Herchel Smith Professor of Intellectual Property Law at the University of Cambridge
Will Slauter, Professor of History at Sorbonne University, Paris
Katie McGettigan, Senior Lecturer in American Literature at Royal Holloway, University of London.
These models critically depend on many hours of artistic effort during training/prompting, but don’t require any additional effort by the new distributors/consumers. On top of that, proper attribution is often impossible.
1) Download Ghibli film material from an arrrr site. Extract frames using ffmpeg. Pay someone $1 per week in Nigeria to add metadata for each (or some) frame(s).
2) ???
3) Profit
It's also possible that models' entire understanding of the aesthetic comes from screenshots of the movie. Even if OpenAI didn't feed in each frame of the movie, they definitely fed in lots of images from the web, some of which were almost certainly movie screenshots.
So many people giving their feelings about laws, about 1s and 0s. -_- What is this, Stack Overflow?
It seems very reasonable to me that we as a society might stop and reassess. It is not a foregone conclusion that all the treasure of humanity be handed over to @sama for a tuppence.
1. Ask ChatGPT to generate some Ghibli-likes for you.
2. Find a local photo printing store prepared to print and blockmount them for you (this is actually the hardest part - most will politely tell you to piss off).
3. List them on Mercari or Rakuma.
4. Regularly relist them as they're periodically removed for violating local counterfeiting laws.
5. Eventually explain yourself to a judge and maybe go to prison for a year.
Technically you only need to do 1 and some of 2 to be committing a crime in Japan.
So by that benchmark Japanese companies have a case.
Try generating a 19th-century-style illustration of Snow White. You can't, at least not on OpenAI's platform.
Try generating a picture "of a flying boy fighting a pirate on a ship".
I for one am interested to see how courts globally figure out what is acceptable or not. The problem, IMO, is that the cat is already out of the bag. No government is going to stop progress on LLMs. I also have to wonder whether we are better off as a society allowing this type of IP infringement, not just for copying art styles but for being able to ingest research and other important pieces of information.
[1] https://en.wikipedia.org/wiki/Sony_Corp._of_America_v._Unive....
We're developing a game AI can't use. We've invented several firewalls.
This is the future.
https://youtu.be/X9RYuvPCQUA?si=XXJ9l7O4Y3lxfEci
[video]
Of course, Microsoft tried to limit the use of OpenOffice/LibreOffice with file-format quirks and other strategies, so they are no angel, but I still don't know of a clear parallel.