I don't know if this is a recent policy change, but per their website[1] it is not the complete amount, only 50% of the remaining annual amount.
If it were something involving physical goods or services I could understand, but a 50% penalty is still a crazy amount for a hosted software service.
Billing your credit card for the 50% is a "well, we tried" type thing. They're happy if it works out, but not unhappy if it doesn't.
True, and it sucks, but you can also keep contesting it. I got a few random things off my credit by using the tools provided by the credit agencies to contest them.
We are reaching a critical mass of people who have no buy-in to these structures because they've been previously cut out.
It was an error on their part so take that as you will, but... scary letter != inability to borrow money.
(And just for the record, I no longer subscribe to that rag.)
A prepaid Visa/MC/Amex gift card might work, but those are easily blockable. I’d expect Adobe to do so.
Furthermore, it's going to cost Adobe a minimum of $1500 to even bring the case to arbitration, and probably $15k more in legal fees to actually win.
So yes, it's actually a difficult battle for Adobe to win and the costs will be much higher than the payout.
Adobe knows this. It’s a numbers game; if they have an honest monthly subscription and someone cancels, they get nothing.
If they have this scammy subscription and they collect 50% of the remainder for 50% of people, it’s like a free 25% (of the remaining “annual” term).
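To make that arithmetic explicit, here's a rough back-of-the-envelope sketch (the prices and the 50% collection rate are illustrative assumptions from this comment, not Adobe figures):

```python
# Back-of-the-envelope expected value of the cancellation fee,
# using illustrative numbers (collection rate is an assumption, not Adobe data).
monthly_price = 59.99      # annual-billed-monthly price, US$/mo
months_remaining = 6       # months left on the "annual" term when cancelling
fee_rate = 0.50            # fee is 50% of the remaining term
collection_rate = 0.50     # assumed share of cancellers who actually pay

remaining_term_value = monthly_price * months_remaining
expected_fee_revenue = remaining_term_value * fee_rate * collection_rate

print(f"Remaining term value: ${remaining_term_value:.2f}")
print(f"Expected fee revenue: ${expected_fee_revenue:.2f} "
      f"({fee_rate * collection_rate:.0%} of the remaining term)")
```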
Neither does Netflix. It also doesn't mention that Photoshop doesn't run on Linux. Are you going to complain about that as well?
>or that if their servers are ever down or inaccessible for any reason you may not be able to use the software you paid for on your own machine
Again, Netflix. Also, isn't there usually enough of a grace window that unless you're working off a cruise ship for months at a time, you'll be fine? This feels like an edge case that gets trotted out in comments more often than it happens in reality.
An image editor is not an inherently online service.
Is it the most manipulative dark pattern in e-commerce? Hardly--there are plenty that are far more vicious--but it's still an attempt to prime a would-be subscriber to focus on the annual, billed monthly option and play on their understanding of the word "monthly" by using it in both options.
"Annual, billed monthly" is set in smaller italicized type right under the actual price of US$59.99/mo on the main pricing page[0]. You've now been primed to focus on the $59.99 price. Only when you select a plan and a modal pops up do you see that there's a separate monthly option available from the annual, billed monthly option that's been helpfully pre-selected or a third annual, prepaid option.
The point is to quickly shepherd subscribers through the payment process. The user sees the $59.99 option they expected is pre-selected, so most hit continue and move on. If they look beyond the price in bold to the plan descriptions in smaller italics, well, there are literally decades of eye tracking studies showing users skim websites rather than carefully reading every single word. The price in bold draws in the eye, the word "monthly" is present so the user catches the word, and then they move on to the continue button.
Adobe could have easily labeled the plan Annual, billed in 12 installments or even Annual, billed in monthly installments to better differentiate the two options. They didn't for a reason. The word "monthly" comes with certain expectations. Using it for both the actual monthly plan and the default annual, billed monthly plan allows those expectations to bleed over to both.
While it mentions a fee for cancelling after 14 days, you'll find nary a mention of what that fee actually is until you track down a legal page[1] that isn't linked to any point during the payment process up until the sign-in prompt (I didn't bother creating a new account to look beyond that). At the very least, it's not present during the stage when you're still relatively uncommitted and somewhat more likely to notice any more onerous terms were they present.
Finally, there's an option for a 30-day free trial of Adobe Stock. I'd have sworn it was pre-selected a few years ago, but I may be mistaken on that. If it was, then at least that's a change for the better. Anyhow, did you notice how it's on a 30 day trial period whereas the normal plan has a 14 day cancellation window? Let those deadlines fall to the back of your mind for a week or two, and will you remember which is 14 days and which is 30? There was no reason why Adobe had to use 30 days for Stock or only 14 days for their other offerings. But it adds to the confusion, and that's the entire purpose of a dark pattern. Stock is also an "annual, billed monthly plan," but nowhere in the checkout process is it mentioned that Stock also has a large cancellation fee. That's hidden in a separate part of the Subscription Terms page.[1]
Adobe could easily just choose to settle for a straight-up monthly payment plan with no bullshit and completely sidestep recurring--but largely toothless, given the state of most alternatives to their software--criticism over their billing practices. They could eliminate the dark patterns and make their plan selection and payment process more transparent. They don't, presumably because those patterns generate more revenue than the lost goodwill they create is worth. That goodwill is diffused, and even if people grumble about it online, it generally doesn't rise to the level of leaving.
Do you think "$500 biweekly" car ads, or "$2000/month" apartment rentals are the same?
>"Annual, billed monthly" is set in smaller italicized type right under the actual price of US$59.99/mo on the main pricing page[0].
I might be sympathetic to this reasoning if this was a $2 coffee or something, but $60/month is nothing to be sneezed at, and I'd expect buyers to read the very legible text under the price tag. Otherwise, this makes as much sense as complaining about supermarket price tags that show "$4" in huge font, and "/lb" in small font, claiming that it misled buyers into thinking an entire package of ground beef costs $4, because the $4 price tag "primed" them or whatever.
>While it mentions a fee for cancelling after 14 days, you'll find nary a mention of what that fee actually is until you track down a legal page[1] that isn't linked to any point during the payment process up until the sign-in prompt (I didn't bother creating a new account to look beyond that). At the very least, it's not present during the stage when you're still relatively uncommitted and somewhat more likely to notice any more onerous terms were they present.
Okay, but if you read most complaints, it's clear that they weren't even aware that such an early termination fee existed. There are approximately zero people who were aware the termination fee existed, found it too hard to figure out what it actually was, but somehow still went with the "Annual, billed monthly" option.
>Finally, there's an option for a 30-day free trial of Adobe Stock. I'd have sworn it was pre-selected a few years ago, but I may be mistaken on that. If it was, then at least that's a change for the better. Anyhow, did you notice how it's on a 30 day trial period whereas the normal plan has a 14 day cancellation window? Let those deadlines fall to the back of your mind for a week or two, and will you remember which is 14 days and which is 30? There was no reason why Adobe had to use 30 days for Stock or only 14 days for their other offerings. But it adds to the confusion, and that's the entire purpose of a dark pattern. Stock is also an "annual, billed monthly plan," but nowhere in the checkout process is it mentioned that Stock also has a large cancellation fee. That's hidden in a separate part of the Subscription Terms page.[1]
This feels like grasping at straws. If we're going to invoke "people might get two numbers confused with each other", we might as well also invoke "people can't calculate dates properly, and therefore a 14 day cancellation window is misleading because they think 14 days = 2 weeks, and set up a cancellation reminder for the same day of the week 2 weeks afterwards, not realizing that would be just over 14 days and thus outside the window".
> Do you think "$500 biweekly" car ads, or "$2000/month" apartment rentals are the same?
The rentals make it very clear what the contract period is and what the penalty for breaking early is. Those terms are also tightly regulated in most jurisdictions for exactly the reason that they are prone to abuse.
> I'd expect buyers to read the very legible text under the price tag.
Given that the text fails to provide details about the fee, is this even a valid contract to begin with? On multiple levels there's clearly been no meeting of the minds.
> if you read most complaints, it's clear that they weren't even aware that such an early termination fee existed.
Isn't that a strong case that it's an unfair practice?
On the billboard or in the multi-page rental agreement that they send for you to sign? How is this different from the ToS/fine print on Adobe's site?
>Given that the text fails to provide details about the fee, is this even a valid contract to begin with?
It's probably buried in the fine print somewhere, which courts have generally held to be enforceable.
>Isn't that a strong case that it's an unfair practice?
No, the legal standard is "reasonable person", not whether there's enough people bamboozled by it to raise a ruckus on reddit or whatever.
I have had plenty of other issues with borderline dishonest landlords but mutually understanding what was being agreed to up front was never one of them. The issues generally came later when they tried to get out of or add additional things without my consent.
> It's probably buried in the fine print somewhere, which courts have generally held to be enforceable.
People elsewhere in this comment section reported that they checked and claimed that it is not found anywhere directly linked from the sales page. You generally have to specify the terms of a contract up front, before it is signed.
> No, the legal standard is "reasonable person"
It isn't conclusive, but I think it makes for a strong case. The more people who are confused by it the stronger your argument that it is confusing to a "reasonable person" becomes.
In some things, expectations are made to be disappointed. This is one of those.
We know that people use all sorts of cognitive shortcuts to make processing their environments easier. It doesn't matter if you're smart, dumb, foolish, or perfectly average. It's just how our brains have evolved to function, and companies have been consulting with industrial and organizational psychologists for decades to help them optimize their marketing and business strategies to maximize the chances that those shortcuts play out in a way that breaks in their favor. Before I/O psychologists, companies tried to do the same by guess and trial and error...and they stumbled upon lots of strategies that were later confirmed by psychological experiments.
Cereal boxes marketed to children have cartoon characters whose eyes are drawn looking down so as to appear as if they're making eye contact with kids walking down the cereal aisle.[0] There are all sorts of "tricks" commonly used by salespeople selling things to sophisticated buyers who are capable of recognizing them for what they are. Why did pharma reps take doctors to dinner and give them cheap pens and swag? Or consider the success of psychological pricing[1] and how those strategies somehow manage to be successful despite it being commonly accepted wisdom that odd prices (e.g., $1.99 instead of $2) are a marketing gimmick. We know it's a gimmick, and yet, it still has an impact on our buying behavior.
Yes, the text is there below it, but the whole point of a dark pattern is to manipulate a large enough percentage of buyers/users in a way that generates more revenue than is lost due to any frustration or annoyance created by the same patterns. Most people skim through websites, pluck out key words, and continue on. We can bemoan people for not reading the fine print, but that's not going to change the behavior.
As for the beef metaphor, per unit pricing can absolutely be used to trip up would-be buyers into buying a bit more than they planned. Not because the foolish shoppers don't know any better, but because mixed units usually require a bit more cognitive engagement. Grocery stores absolutely recognize that and benefit from it. On the other hand, you can't really sell beef in a way other than by weight, so the opportunity for abuse is much more limited.
> Okay, but if you read most complaints, it's clear that they weren't even aware that such an early termination fee existed. There are approximately zero people who were aware the termination fee existed, found it too hard to figure out what it actually was, but somehow still went with the "Annual, billed monthly" option.
Sure, because Adobe purposely hides information about the fee. That's one of the dark patterns at play. In the absence of that information, users will insert their own expectations to create meaning. If there's a fee, we'd expect it's probably a reasonable one (even if we have countless examples in our lives of how fees can be anything but reasonable). Does half the annual cost of a subscription seem reasonable to most people? Would that be most people's first guess? Probably not. I might not have been clear about this in my original comment, but there are multiple dark patterns at work here.
> This feels like grasping at straws. If we're going to invoke "people might get two numbers confused with each other",[...]
That particular dark pattern is less about people confusing two different numbers with each other when they're directly in front of them, so much as it is about giving you two different numbers to remember two weeks after you've made your decision and gone on with your life. Literally nobody on the planet is going to keep the free trial or cancellation period as a mental priority over the course of two weeks, so it becomes little more than a random thought at the back of your mind. At best, you might jot it down or set aside the receipt until closer to the deadline. The pattern's purpose is that, if you think of the cancellation/trial periods at all, the numbers will be easily conflated. Think about the times in your life when you've asked yourself something like did I see/do/hear [insert thing] last Monday or was it Tuesday? and weren't quite confident in your answer.
Dark patterns don't have to trip up all subscribers or even most of them. But if they trip up some of them, well, Adobe isn't going to complain about the opportunity. Multiple, more subtle dark patterns together can work just as effectively as one particularly vicious one. They can even be preferable, in that they won't piss off your customers nearly as much, either on their own or as a whole.
0. https://news.cornell.edu/stories/2014/04/food-psychologists-...
Just because it’s written doesn’t make it legal
Ask the FTC what they think or at least thought before Trump
Sounds like a pretty good deal given how much money you'd save and how drawing modest amounts of blood has basically zero downsides.
>Just because it’s written doesn’t make it legal
And just because you invoke "Just because it’s written doesn’t make it legal", doesn't make it invalid.
I see it a bit differently:
A solid, high value contract should make sense. And guess what? When contracts do make sense, most people have no reason not to pay and they will, barring emergencies and the usual risks that play out in all business. Most people, myself included, would side with Adobe. The peeps need to pay up.
However, when the contract is shady, abusive, just dripping with greed? A much higher percentage of people are gonna say, "fuck 'em!" Plenty will find reasons, too. And there is a higher inherent risk associated with all new accounts, potentially going as far as to raise it, while value dilution happens across the board to software subscriptions as a whole.
Who wants all that noise?
I am not sure whether the piece mentioned this or not (skimmed, Ok? LOL), but there are fairly strong second and third order effects playing out that are likely to persist for a very long time:
Network effects: A pretty healthy slice of Adobe users, or forced users I could say, reach their hating peak every year. When I was skill building for creative work, Adobe hate was modest. Adobe love was higher than average too. So far, so good, right?
Just half a decade later, I revisited this work around the time people could no longer buy the suite on physical media with a perpetual license. Hmmm... the haters were right! That is exactly what they said Adobe was going to do. Some time after this change, I was also watching how Autodesk handled the users of one of its more hated acquisitions, namely Alias and Maya, which came from an industry culture that believed Autodesk could quite possibly be one of the worst companies to end up owning what many observers called "elite" or "career" type software packages, with costs starting in the mid to high 4 figures and ending up a solid 5 figure purchase ... (Alias 10 forever, hoo rah!) ... um, yeah, where was I?
Yes, Hating Adobe solid now. Not ever going to be a potential customer.
You are reading third order effects. People like me, and the very aggressive first order people, are hard at work figuring out just how much we can do with the alternatives. We are realizing everything we can do with the OSS options, publishing our work, and sharing successes. And when we are teachers, consultants, or department heads, we de-recommend Adobe on sight, while at the same time being very forgiving as people ramp up on the other options.
That catches the attention of many who would never have a clue if it were not for social media bringing us the very best drama like this.
Takes years and real talent to grow a software business while also so damn consistently earning the hate. Amazing!
Think about it: you're in control. Not being at the mercy of... whoever is great. You said it yourself: attempt.
Why play with your money? The toys/experiences it can afford are way more fun.
Chargebacks are more effort, and IIRC, weigh negatively on you as well. Can only do so many. I expect your bank would take issue if you really relied on this strategy.
Painful to unsub? How terrible for them. I can be painful to bill. PLONK says the pause button.
Learned everything I needed to know from gyms. If they don't take a virtual card, but want bank details/etc... they're on some bullshit. Pass.
Last I used Revolut 2 years ago, they even had a "disposable" virtual card, meaning after 1 charge it's automatically deleted.
> Hi, Firstname
> I've been reviewing your dispute and wanted to touch base with you to explain what happened.
> It appears that the disputed charge is a "force post" by the merchant. This happens when a merchant cannot collect funds for a transaction after repeated attempts and completes the transaction without an authorization — it's literally an unauthorized transaction that's against payment card network rules. It's a pretty sneaky move used by some merchants, and unfortunately, it's not something Privacy can block.
It's also very obviously not against the payment network rules, otherwise privacy.com wouldn't be actively participating.
Note, their name isn't SpendingLimit.com.
This shook me plenty and I no longer use them for anything I actually need a spending limit on. They're still good for their namesake privacy, with a very limited scope (i.e. scummy merchants), but it's a very thin veil and easy to pierce.
It's a little counter-intuitive to introduce another party to improve privacy. I find it worthwhile for the pausable and vendor-locked cards.
https://www.adobe.com/products/photoshop/plans.html
I am not sure why this should face the FTC or any similar mechanism to prevent "deception".
It's written right there:
US$22.99/mo Annual, billed monthly
And if you scroll down slightly, the very first question is how much it costs:
> There are several Creative Cloud plans that include Photoshop. You can purchase it as a standalone app for US$22.99/mo. for the annual billed monthly plan or opt for annual billing at US$263.88/yr.
Buying it with the annual billing would save you $1 per month.
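A quick sanity check of that math, using the prices quoted above (a rough illustration of the two options, ignoring taxes and promos):

```python
# Compare the two "annual" billing options using the prices quoted above.
annual_billed_monthly = 22.99     # US$/mo across a 12-month commitment
annual_prepaid = 263.88           # US$/yr, paid up front

print(f"Annual, billed monthly: ${annual_billed_monthly * 12:.2f}/yr")
print(f"Annual, prepaid:        ${annual_prepaid:.2f}/yr (${annual_prepaid / 12:.2f}/mo)")
print(f"Prepaying saves:        ${annual_billed_monthly - annual_prepaid / 12:.2f}/mo")
```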
I have seen this model used elsewhere: if you opt in for the yearly subscription, you still pay per month but you save X% over the monthly subscription.
Not sure what they could do to make it more obvious, besides writing in big letters: we only offer yearly subscriptions, although you can pay monthly.
Edit: if you click on buy, it leads to another option too, the monthly one. Is this the scam one? Because it says you can cancel any time...
Edit again: it seems that they did quite some nasty stuff in the past and the US sued them, so now they are more transparent about their subscriptions.
God bless such organizations that sue the hell out of such bad actors until they behave well.
"Annual, billed monthly" cannot be a deception the way it is phrased. Lots of contracts work like that, even my phone/electricity bills, and they have been like that forever.
The issue, if you look in one of the links posted in the comments, is that some years ago they didn't mention this specifically. They made you believe it was a monthly subscription and when you canceled it, the termination fees were really high. You know, like those old contracts using 4pt fonts for the important stuff :)
I’m no Adobe supporter generally, and sure they could do more, but they take an awful lot of flak from people who won’t read two lines of text and then scream bloody murder.
https://www.geeky-gadgets.com/adobe-sued-over-subscription-f...
I’m not suggesting we just forgive and forget, but warning people against abusive billing practices that aren’t in place any more is a bit silly. If your argument is that we shouldn’t support a corporation that requires being taken to court to treat its users fairly, then there’s probably a very long list of companies that fail that test much harder than Adobe does, especially now.
That seems to be exactly what your posts amount to though?
I disagree. Abusive relationships need constant call-out and their BlueSky post was exactly that, a reminder.
Just because you're fed up with hearing it doesn't mean I am. There is a real history to how they acted and got away with it, and it demonstrates that they would happily screw you again.
They are just another $corp who show no respect to their users, they've done it once, they will do it again. Let it stand as a permanent mark of how they treat their user base.
> They are just another $corp who show no respect to their users,
Great, so talk about the ways they're actually doing that, not just getting mad about something that's no longer an issue.
Dead horses decay, which is where the comparison would apply: if Adobe were being dissolved, then this would have no relevance. But Adobe isn't defunct, so I don't agree. Adobe is far from dead, so while they are still operating it's worth calling out their previous scummy behavior. It was a recent event.
> just getting mad about something that's no longer an issue.
I'm not mad. I don't use paid software where I don't need to. When a corporation screws up, I'm going to call them out on it. It sounds like you have more of an issue with that than with just skipping past. "Sssh, let's not mention that part because I'm tired of hearing it."
If you want to hear another grudge I have with Adobe: my mother forked out ££ for the whole CS2 Suite on DVD, and Adobe has now made it impossible to use without a hack. Why should my mother not be allowed to use her own copy of CS2?
She doesn't require the latest version, nor can she afford the subscription in her elderly age with other life admin costs. Another sign that Adobe doesn't care for its users. They extort for money. Not new, as history dictates.
This is moot, as I'm not going to change your mind, nor will you change mine. The pricing scandal was recent, and this HN topic about Adobe trying to act cute makes it relevant to the whole conversation of "oh, by the way, Adobe xyz".
Shall we start ignoring how Nazi Germany and Adolf Hitler set up concentration camps? Because that would be beating a dead horse, yet it's still taught in schools.
Adobe isn't comparable to a mass genocide of innocent people, but that was an important event in history. By not mentioning it you are letting it be forgotten, which is bad. History is being rewritten; you can see it in action with AI censorship.
The next generation of children will have no clue of such history and that's sad.
I hate annual billed monthly but the wording isn't hidden.
> Adobe knowingly "trapped" customers into annual subscriptions, the FTC alleged.
> Adobe prioritized profits while spending years ignoring numerous complaints from users struggling to cancel costly subscriptions without incurring hefty hidden fees, the US Federal Trade Commission (FTC) alleged in a lawsuit Monday.
> According to the FTC, Adobe knew that canceling subscriptions was hard but determined that it would hurt revenue to make canceling any easier, so Adobe never changed the "convoluted" process. Even when the FTC launched a probe in 2022 specifically indicating that Adobe's practices may be illegal, Adobe did nothing to address the alleged harm to consumers, the FTC complaint noted. Adobe also "provides no refunds or only partial refunds to some subscribers who incur charges after an attempted, unsuccessful cancellation."
https://arstechnica.com/tech-policy/2024/06/ftc-sues-adobe-o...
>Annual, billed monthly
>US$22.99/mo
>Fee applies if you cancel after 14 days
There's a popup you can open with more information, but that just says:
>If you cancel after 14 days, your service will continue until the end of that month's billing period, and you will be charged an early termination fee.
It doesn't tell you anywhere what that fee is, and I can't find any link to a page with more information.
Sounds like you're less against the concept of "annual, billed monthly" or even the "dark patterns" that Adobe is using, and more against the fact that Photoshop is now behind a $30/month subscription rather than a one-time purchase price like in the Good Old Days™.
"enshitified" is so vague that the statement almost a tautology. "Bad things are bad". Moreover the original claim was not that, but "unfair business practices". Uber cutting back on their generous coupons is arguably "enshittification" or whatever, but as much as I miss those discounted rides/takeouts, it'd be totally ludicrous to complain that yanking those coupons was some sort of "unfair business practice", as if uber had some sort of obligation to offer such coupons in perpetuity.
I just think it’s insane to attack a company for something they’re not doing, with the implication they are still doing it.
Edit: I just clicked on buy, and it leads to what you said. Apparently the monthly one is not mentioned in the front page. Weird.
I fell for it once. But I’m in India, so I just cancelled my debit card and that was that. Good luck to them chasing me through legal means in India. It was still a bit of a hassle, though.
i also use separate cards for everything, just through privacy.com, so i also can just cancel things. services have started falsely blocking it for abuse though which is really sad :/
Now it’s much easier to deal with the subscription problems due to the new RBI norms.
Earlier the vendor would just take your money and you’d have to fight a long battle to get it back.
But you know what? Karma’s a bitch. I think I am likely not alone in having used a cracked version of Photoshop for far, far more time than I ever did an actual paid-up copy.
I’m not unaware that piracy was part of their strategy for market penetration, and I guess it’s now a case of “we have the market cornered, let’s monetise”.
FTC Takes Action Against Adobe and Executives for Hiding Fees, Preventing Consumers from Easily Cancelling Software Subscriptions
June 17, 2024
https://www.ftc.gov/news-events/news/press-releases/2024/06/...
I do not like Adobe in the slightest, but it's not because of their billing practices.
https://www.ftc.gov/news-events/news/press-releases/2024/06/...
Interestingly, just fyi, they do a reasonable-person test when trying these cases. That means they literally pull 100 people off the street and ask each one to go through the funnel and then give them a quiz with questions like "How much am I going to be billed?"
So if people are confused, it's basically on you, regardless of whether you think you were being clear about the terms.
But the contract plan is not aimed at them; it's aimed at literate computer users, most of them working as freelancers (so with at least some financial knowledge).
The same way a Pilot Operating Handbook cannot be judged by the understanding of random 100 people off the street.
No one needs a pilot's license to read a PDF.
† I.e. the type of deal where the individual is being asked to trade away something they cannot reasonably evaluate the net present value of (their own future optionality in a future they can't predict) — which will inevitably be presented by the company offering the deal, in a way that minimizes/obscures this loss of optionality. In other words, it's a deal that, in being able to make it, has the same inherent flaws as indentured servitude does — just with money instead of labor.
My natural instinct was to be ropable. But then I realised that I had actually been paying an annual insurance policy, monthly. I wasn't paying a monthly insurance policy.
Presumably when we signed up, there was a monthly option. Presumably it cost more. And so I can hardly be annoyed that they're essentially making up that difference now that I've chosen to terminate that contract early.
That being said, maybe we're talking past one-another here.
Where I come from (Canada), even if you prepay for a service that charges annually (no "annual charged monthly" language needed), as long as that service can be common-sense-construed as delivering value on a finer granularity (by the month, by the second, etc.), then if you only use that service for some fraction of the plan length and then cancel it, you are legally entitled to a pro-rated refund of the remaining plan length. So if you cancel an annual-billed service after a month? You get 11/12 of your payment back. If you subscribe to a monthly-billed service on January 1 and cancel on January 2? You get 30/31 of your payment back. Etc.
Under such a legal doctrine, there is no difference in the total amount owed between "billed monthly" when subscribed for one month, vs "billed annually" when subscribed for one month and then cancelled, vs "annual, billed monthly" when subscribed for one month and then cancelled.
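For concreteness, here's a minimal sketch of that pro-rating rule as described above (my own illustration, not any statute's exact formula; actual rules and rounding vary by jurisdiction):

```python
def prorated_refund(amount_paid: float, units_total: int, units_used: int) -> float:
    """Refund of the unused portion of a prepaid plan, per the doctrine described above."""
    return amount_paid * (units_total - units_used) / units_total

# Cancel an annual, prepaid plan after 1 of 12 months: 11/12 of the payment back.
print(round(prorated_refund(263.88, 12, 1), 2))   # 241.89
# Cancel a monthly plan on day 1 of a 31-day month: 30/31 of the payment back.
print(round(prorated_refund(22.99, 31, 1), 2))    # 22.25
```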
If you're curious about the set of countries where this doctrine applies, here's a page from the Microsoft Store support outlining the set of countries where they will give out pro-rated refunds for subscriptions: https://support.microsoft.com/en-us/account-billing/countrie...
(And if it isn't sickening to you that in general, corporations will write logic into their billing systems to support this, and then only activate that logic for countries where they're legally obligated to do so, while — now with intentionality — continuing to squeeze everyone else for services they've knowingly already cut off... then I don't know what to tell you.)
---
And yes, if you're wondering, there are a few exceptions to this pro-rated refund doctrine.
One is real-estate leasing — because chancery courts are weird and make their own rules; but also because a lot of the "work" of being a landlord is up-front/annual. (Though, admittedly, we also have laws here that force real-estate annual leasing contracts to revert to month-to-month after a low set number of years — usually 1 or 2 — with the month-to-month lease rate carried over from the "annual, paid monthly" rate.)
The other is for commercial leasing of assets like vehicles, construction equipment, servers, etc. This is because corporations have much more predictable optionality, sure — but it's also because corporations don't "deserve" protections in the same way individuals do. (Same reason investment banks don't get the protections of savings banks.)
Anyway I don't really think they deserve a lot of the hate they get, but I do hope this encourages development of viable alternatives to their products. Photoshop is still pretty much peerless. Illustrator has a ton of competitors catching up. After Effects and Premiere for video editing are getting overtaken by Davinci Resolve -- though for motion graphics it is still hard to beat After Effects. Though I do love that Adobe simply uses JavaScript for its expression and scripting language.
It's because nobody actually wants that.
Artists don't like AI image generators because they have to compete with them, not because of how they were trained. How they were trained is just the most plausible claim they can make against them if they want to sue OpenAI et al over it, or to make a moral argument that some kind of misappropriation is occurring.
From the perspective of an artist, a corporation training an AI image generator in a way that isn't susceptible to moral or legal assault is worse, because then it exists and they have to compete with it and there is no visible path for them to make it go away.
The ones using the name of the artist/studio (e.g. Ghiblification) also seem more common than they are because they're the ones that garner negative attention. Then the media attention a) causes people to perceive it as being more common than it is and b) causes people to do it more for a short period of time, making it temporarily more common even though the long-term economic relevance is still negligible.
Also, I'm curious when they'll start censoring exports from their software. They already do that for scans of money.
I'm not worried about image generators. They'll never generate art, by definition. AI tools are the same as the camera back then: a new tool that still requires human skill and purpose to accomplish specific tasks.
From what I've seen from artists, they hate Adobe for both reasons, and the AI thing is often more of a dogmatic, uncompromising hate (and is not based on any of the various rationalizations used to persuade others to act in accord with it) and less of the kind of hate that is nevertheless willing to accept products for utility.
I always wonder why people make statements like this. Anyone who knows more than one artist knows that artists use these tools for a variety of reasons and aren't nearly as scared as random internet concern trolls make them out to be.
But we decided to drop Adobe after some of their recent shenanigans and moved to a set of tools that didn't have this ability and, frankly, we didn't really miss it that much. Certainly not enough to ever give Adobe another cent.
They can also make a legal argument that the model will fully reproduce copyrighted work from its training set. Which is just an actual crime, as well as being completely immoral.
> because then it exists and they have to compete with it
The entire point of copyright law is: "To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries."
Individual artists should not have to "compete" against a billion dollar corporation which freely engages in copyright violations that these same artists have to abide by.
That's ignoring the fact that an AI image generator trained without infringing on existing works would have way worse quality, because of the reduced amount and quality of the training set.
Agreed, the ones I know in real life are excited by these tools and have been using them.
(/s)
You cannot find any group where "all" is true in such a context. There's always an outlier element.
That said, you're not really an artist if you direct someone else to paint. Imagine a scenario where you sit back, and ask someone to paint an oil painting for you. During the event, you sit in an easy chair, watch them with easel and brush, and provide direction "I want clouds", "I want a dark background". The person does so.
You're not the artist.
All this AI blather is the same. At best, you're a fashion designer. Arranging things in a pleasant way.
Photographers do manipulate cameras, and afterwards rework the images they develop.
Digital artists do manipulate digital tools.
Their output is a large function of their informed input, experience, taste, knowledge, practice and intention, using their own specific tools in their own way.
Same with developers: the result is a function of their input (architecture, code, etc.). Garbage in, garbage out.
With AI prompters, the output is part function of the (very small) prompt, part function of the (huuuuuuuge) training set, part randomness.
If you're the director of a movie, or of a photo shoot, you're the director. Not the photographer, not the set painter, not the carpenter, not the light, etc.
If you're the producer, you're not the artist (unless you _also_ act as an artist in the production).
Do you feel the difference?
That comparison would be fair if the generative AI you use is trained exclusively on your own (rightfully acquired) data and work.
Existing generative AIs are feeding on the work of millions of people who did not consent.
That’s a violation of their work and of their rights.
And that should also alert those who expect to use or benefit from their own production out of these generators: why would it be 1/ protectable, or 2/ protected at all?
It is no coincidence that these generator makers’ philosophy aligns with an autocratic political project, and with some inhuman « masculinity » promoters. It’s all about power and nothing about playing by the rules of a society.
The other objections, in the economic range (replacing/displacing artists work for financial gain, from the producers point of view) are totally valid too, but don't rely on the same argument.
And my point above is not really an objection, it's a reminder: of what are AI generators, and what they are not (and that AI generators promoters pretend they are, without any piece of evidence or real argument).
Of what their output is (a rough, industrial barely specified and mastered product), and what it is not (art).
And this is why I've stopped arguing with people from this crowd. Beyond the classic gatekeeping of what art is, I'm sick of the constant moving of the goalposts. Even if a company provides proof, I'm sure you'd find another issue with them
Underlying all of it is a fundamental misunderstanding of how AI tools are used for art, and a subtle implication that it's really the amount of effort that defines what "art" really is.
And what crowd? I am stating my viewpoint, from an education in humanities AND tech, and from 25 years of career in software tech, and 30 years of musician and painter practice.
Sorry but who is moving the goalpost here? Who is coming with their tech saying « hi, but we don’t care about how your laws make sense and we don’t care that we don’t know what art is because we never studied about it, neither do we have any artistic practice, we just want to have what you guys do by pressing a button. Oh and all of your stuff is free for us to forage thru, don’t care about what you say about your own work. »
Typical entitled behavior. Don’t act surprised that this is met with counter arguments and reality.
Artistic expression is people in motion, alone or in groups.
You’re talking about the economics of performances and artefacts, which are _something else_ out of artistic expression.
EDIT to clarify/reinforce:
Elvis without Elvis isn’t Elvis. Discs, movies, books are captures of Elvis. Not the same thing.
Miyazaki without Miyazaki isn’t Miyazaki. It may look like it, but it is not it.
Artistic expression is someone’s expression, practice (yours, mine, theirs). It’s the definition of the originality of it (who it comes from, who it is actually made by).
A machine, a software may produce (raw) materials for artistic expression, whatever it is, but it is not artistic expression by itself.
Bowie using the Verbasizer is using a tool for artistic expression. The Verbasizer output isn’t art by itself. Bowie made Bowie stuff.
What would be gatekeeping is if someone prevented you to pick a pencil, paper, a guitar, a brush, to make something out of your own.
You’re the only one gatekeeping yourself here.
Looks like it’s the same pattern as with blockchains, NFTs, Web3 stuff, and the move fast/break things mantra: you cannot argue for and demonstrate what your « solutions » actually solve, so you need brute force to break things and impose them.
> Existing generative AIs are feeding on the work of millions of people who did not consent.
There are LLMs that are trained on non-copyrighted work, but apparently that's irrelevant according to the comment I replied to.
With photographers, the output is part function of the (very small) orientation of the camera and pressing the button, part function of the (huuuuuuuge) technical marvel that are modern cameras, part randomness.
Let's be realistic here. Without the manufactured cameras, 99.9% of photographers wouldn't be photographers, only the 10 people who'd want it enough to build their own cameras, and they wouldn't have much appeal beyond a curiosity because their cameras would suck.
Reducing this to "orientation of the camera" is such a dismissive take on the eye and focus of the person who decides to take a picture, and on where/when they are; this really reveals that you do not practice it.
And... before cameras were even digital, back in the early 2000s, there were already thousands upon thousands of extremely gifted photographers.
Yes, cameras are marvellous tools. But they are _static_. They don't dynamically, randomly change the input.
Generative AI are not _static_. They require training sets to be anywhere near useful.
Cameras _do not_ feed on all the previous photographies taken by others.
What's more important: the person behind the camera or the camera? Show me the photos taken without the camera and then look at all the great photos taken by amateurs.
> They require training sets to be anywhere near useful.
And the camera needs assembly and R&D. But when either arrives at your door, it's "ready to go".
> Cameras _do not_ feed on all the previous photographies taken by others.
Cameras do feed on all the research of previous cameras though. The photos don't matter to the Camera. The Camera manufacturers are geniuses, the photographers are users.
It's really not far off from AI, especially when the cameras do so much, and then there's the software-tools afterwards etc etc.
Yeah, yeah, everybody wants to feel special and artsy and all that and looks down on the new people who aren't even real artists. But most people really shouldn't.
However good or not the camera is, it’s not the camera that dictates the inner qualities of a photograph; there is _something else_ that evades the technicalities of the tools and comes from the context and the choices of the photographer (and from accident, too, because that’s the nature of photography: capturing an accident of light).
The same camera in the hands of two persons will give two totally different sets of pictures, if only because, their sight, their looking at the world is different; and because one knows how to use the tools, and the other, not in the same way, or not at all.
It’s not a matter of « feeling artsy » or special, it’s a matter of « doing art ».
Everyone is an artist, if they want to: it’s a matter of practicing and intent, not a matter of outputting.
Art is in the process (of making, and of receiving), not in the output (which is the artefact of art and which has its own set of controversial and confusing economics and markets).
Generative AI, contrary to tools that stay in their specific place, steals the insight of previous artists (from the training set) and strips the prompter of their own insight, personality, and imprint (because those are not employed, only a limited text prompt at an interface is).
Generative AI enthusiasts may be so. They have every right to be. But not by ignoring and denying the fundamental theft that ingesting training sets without approval is, and the fundamental difference there is between _doing art_ and asking a computer to produce art.
Ignoring those two is a red flag of people having no idea what art, and practice is.
I am not even speaking of « do the users feel what it is ». Here it is:
If some people are so enthusiastic and ruthless defenders of AI generators that were trained/fed from the work of millions on unconsenting artists…
1/ what do they expect will happen to their own generated production?
2/ what do they expect will happen to their own consent, in that particular matter, or in others matters (as this will have been an additional precedent, a de facto)?
Again, as I said elsewhere, there is a power play behind this that is very related to the broligarchy pushing for some kind of twisted, « red pilled » (lol) masculinity, and that is related to rape as a culture, not only in sexual matters but in all of them.
The casual dismissal of artists' fundamental rights to control their work and how they are used is a part of a larger cultural problem, where might would rule over law, power would rule over justice, lies over truth.
That may seem a charged argument, and it is, because it hits right and it is particularly uncomfortable to acknowledge.
The same tech leaders that push for this move over IP law are the tech leaders that fund(ed) the current dismantling of US democracy and that have chosen their political team because it aligns precisely (up to the man that got the presidential seat, the man that has (had?) quite problematic issues towards women) with their values.
This is too obvious to be an accident.
And this is also a stern warning. Because the ideology behind power does not stop at anything. It goes on until it eats itself.
2/ it has been discussed for, like, decades, in academic and social contexts, how attitudes in one domain reflect and reinforce those in others.
3/ Your « actual » makes an assumption about my experience that you have no basis for.
Point remains that non-consensual use of artists’ work reflects the same fundamental disregard for autonomy that characterizes other consent violations.
Because every piece of generative AI looks identical, right? I mean, if the prompt had an impact, and two people using some ML-model would create different results based on what they choose to input, it sounds suspiciously like your "the same camera in two different hands", doesn't it?
> the fundamental difference there is between _doing art_ and asking a computer to produce art.
You mean doing art by asking a computer to produce a dump of sensor data by pressing a button?
You appear to be completely blind to the similarities and just retreat towards "I draw the lines around art, and this is inside, and that's outside of it" without being able to explain how the AI-tool is fundamentally different from the camera-tool, but obviously one negates all possibility to create art, while the other totally is art, because that's what people say!
Needless to say that the people making those distinctions can't even tell apart a photo from an AI-generated picture.
I feel there's something interesting to discuss here but I'm still not convinced: a camera captures light from the physical reality. AI generators "capture" something from a model trained on existing artworks from other people (most likely not consenting). There's a superficial similarity in the push of the button, but that's it. Each does not operate the same way, on the same domain.
> You appear to be completely blind to the similarities [...] without being able to explain how the AI-tool is fundamentally different from the camera-tool, but obviously one negates all possibility to create art, while the other totally is art, because that's what people say!
There's a vocabulary issue here. Art is a practice, not a thing, not a product. You can create a picture, however you like it.
What makes a picture cool to look at is how it looks. And that is very subjective and contextual. No issue with that. What makes it _interesting_ and catchy is not so much what it _is_ but what it says, what it means, what it triggers, from the intent of the artist (if one gets to have the info about it), to its techniques[1] all the way to the inspiration it creates in the onlookers (which is also a function of a lot of things).
Anything machine-produced can be cool/beautiful/whatever.
Machines also reproduce/reprint original works. And while there are common qualities, it is not the same to look at a copy, at a reproduction of a thing, and to look at the original thing, that was made by the original artist. If you haven't experienced that, please try to (going to a museum for instance, or a gallery, anywhere).
[1] and there, using AI stuff as anything else as a _tool_ to practice/make art? of course. But to say that what this tool makes _is_ art or a work of art? Basic no for me.
> Needless to say that the people making those distinctions can't even tell apart a photo from an AI-generated picture.
1/ It does get better and better, but it still looks AI-generated (as of April 2025).
2/ Human-wise/feeling-wise/intellectual-wise, anything that I know has been generated by AI will be a. interesting perhaps, for ideas, for randomness, b. but soulless. And that is connection, relief, soul (mine, and those of others) I am looking for in art (as a practice, an artefact or a performance); I'm pretty sure that's what connects us humans.
3/ Market-wise, I predict that any renowned artwork will lose of its value as soon as its origin being AI-made will be known; for the very reason 2/ above.
Oh, the irony...
Adobe AI tools are pretty shit though if you want to use them to do something creative. Shockingly bad really.
They are probably good if you want to add a few elements to an instagram photo but terrible for actual digital art.
If I were an artist, and I made a painting and published it to a site which was then used to train an LLM, I would feel as though the AI company treated me disingenuously, regardless of competition or not. Intellectual property laws aside, I think there is a social contract being broken when a publicly shared work is then used without the artist's direct, explicit permission.
The rights artists have over their work are economic rights. The most important fair use factor is how the use affects the market for the original work. If Disney is lobbying for copyright term extensions and you want to make art showing Mickey Mouse in a cage with the CEO of Disney as the jailer, that's allowed even though you're not allowed to open a movie theater and show Fantasia without paying for it, and even though (even because!) Disney would not approve of you using Mickey to oppose their lobbying position. And once the copyright expires you can do as you like.
So the ethical argument against AI training is that the AI is going to compete with them and make it harder for them to make a living. But substantially the same thing happens if the AI is trained on some other artist's work instead. Whose work it was has minimal impact on the economic consequences for artists in general. And being one of the artists who got a pittance for the training data is little consolation either.
The real ethical question is whether it's okay to put artists out of business by providing AI-generated images at negligible cost. If the answer is no, it doesn't really matter which artists were in the training data. If the answer is yes, it doesn't really matter which artists were in the training data.
Further, making a variant of a famous art piece under copyright might very well be a derivative. There are court cases here from just a few years before the AI boom where a format shift from photo to painting was deemed to be a derivative. A picture generated with "Painting of an archaeologist with a whip" would almost certainly be deemed a derivative if it went through the same court.
The US doesn't really have moral rights and it's not clear they're even constitutional in the US, since the copyright clause explicitly requires "promote the progress" and "limited times" and many aspects of "moral rights" would be violations of the First Amendment. Whether they exist in some other country doesn't really help you when it's US companies doing it in the US.
> Further, making a variant of a famous art piece under copyright might very well be a derivative.
Well of course it is. That's what derivative works are. You can also produce derivative works with Photoshop or MS Paint, but that doesn't mean the purpose of MS Paint is to produce derivative works or that it's Microsoft rather than the user purposely creating a derivative work who should be responsible for that.
Personally I'm inclined to liken ML tools to backhoes. I don't want the law to force ditches to be dug by hand. I'm not a fan of busywork.
You could take that further and say that "substantially the same thing" happens if the AI is trained on music instead. It's just another kind of artwork, right? Somebody who was going to have an illustration by [illustrator with distinctive style] might choose to have music instead, so the music is in competition, so all that illustrator's art might as well be in the training data, and that doesn't matter because the artist would get competed with either way. Says you.
You still get images in a particular style by specifying the name of the style instead of the name of the artist. Do you really think this is no different than being able to produce only music when you want an image?
My counter-argument is "no". Ideally I'd elaborate on that. So ummm ... no, that's not the way things are. Is it?
Which is equally illegal.
> disregarding copyright law because that is circular reasoning
This is not circular, copyright is non-negotiable.
"Nothing was stolen from the artists but instead used without their permission"
Yes and no. Sure, the artist didn't lose anything physical, but neither did music or movie producers when people downloaded and shared MP3s and videos. They still won in court based on the profits they determined the "theft" cost them, and the settlements were absurdly high. How is this different? An artist's work is essentially their resume. AI companies use their work without permission to create programs specifically intended to generate similar work in seconds; this substantially impacts an artist's ability to profit from their work. You seem to be suggesting that artists have no right to control the profits their work can generate - an argument I can't imagine you would extend to corporations.
"The thing being used is an idea"
This is profoundly absurd. AI companies aren't taking ideas directly from artists' heads... yet. They're not training their models on ideas. They're training them on the actual images artists create with skills honed over decades of work.
"not anything the artist loses access to when someone else has it"
Again, see point #1. The courts have long established that what's lost in IP theft is the potential for future profits, not something directly physical. By your reasoning here, there should be no such thing as patents. I should be able to take anyone or any corporation's "ideas" and use them to produce my own products to sell. And this is a perfect analogy - why would any corporation invest millions or billions of dollars developing a product if anyone could just take the "ideas" they came up with and immediately undercut the corporation with clones or variants of their products? In exactly the same way, why would an artist invest years or decades of time honing the skills needed to create imagery if massive corporations can just take that work, feed it into their programs and generate similar work in seconds for pennies?
"What is there to complain about"
The loss of income potential, which is precisely what courts have agreed with when corporations are on the receiving end of IP theft.
"Why should others listen to the complaints"
Because what's happening is objectively wrong. You are exactly the kind of person the corporatocracy wants - someone who just says "Ehhh, I wasn't personally impacted, so I don't care". And not only don't you care, you actively argue in favor of the corporations. Is it any wonder society is what it is today?
> They still won in court based on the profits they determined the "theft" cost them, and the settlements were absurdly high.
Such court determinations are wrong. At least hopefully you can see how perhaps there is not so much wrong with the reasoning, even if you ultimately disagree.
> They're training them on the actual images artists create with skills honed over decades of work.
This is very similar to a human studying different artists and practicing; it’s pretty inarguable that art generated by such humans is not the product of copyright infringement, unless the image copies an artist’s style. Studio Ghibli-style AI images come to mind, to be fair, which should be a liability to whoever is running the AI because they’re distributing the image after producing it.
If one doesn’t think that it’s wrong for, e.g., Meta to torrent everything they can, as I do not, then it is not inconsistent to think their ML training and LLM deployment is simply something that happened and changed market conditions.
A machine, software, hardware, whatever, as much as a corporation, _is not a human person_.
The person you replied to derailed the conversation by misconstruing an analogy.
> what's happening is objectively wrong.
Doesn't seem like a defensible claim to me. Clearly plenty of people don't feel that way, myself included.
Aside, you appear to be banned. Just in case you aren't aware.
Curious why you say this. They seem to have made the copyright infringement analogous to theft and I addressed that directly in the comment.
The discussion is about whether or not ignoring something that is of little consequence to you diminishes a later case you might bring when something substantially similar causes you noticeable problems. The question at hand had nothing to do with damages due to piracy (direct, perceived, hypothetical, legal fiction, or otherwise).
It's confusing because the basis for the legal claim is damages due to piracy and the size of that claim probably hasn't shifted all that much. But the motivating interest is not the damages. It is the impact of the thing on their employment. That impact was not present before so no one was inclined to pursue a protracted uphill battle.
I believe it is completely reasonable for an artist to want to share their work publicly on the Internet without fear of it being appropriated, and I wish there was a pragmatic way they could achieve this.
I hate Adobe's subscription model as much as the next guy and that's a good reason to get annoyed at them. Adobe building AI features is not.
It isn't, but it doesn't stop people from trying and hoping for a miracle. That's pretty much all there is to the arguments of image models, as well as LLMs, being trained in violation of copyright - it's distaste and greed[0], with a slice of basic legalese on top to confuse people into believing the law says what it doesn't (at least yet).
> If Adobe properly licenses all their training data artists don't have a right to say "well i think this is bad for creativity and puts my job at risk, ban it!!!" Or more precisely, they have a right to say that, but no moral justification for trying to ban/regulate/sue over it.
I'd say they have plenty of moral / ethical justification for trying to ban/regulate/sue over it, they just don't have much of a legal one at this point. But that's why they should be trying[1] - they have a legitimate argument that this is an unexpected, undeserved, unfair calamity for them, threatening to derail their lives, and lives of their dependents, across the entire sector - and therefore that laws should be changed to shield them, or compensate them for the loss. After all, that's what laws are for.
(Let's not forget that the entire legal edifice around recognizing and protecting "intellectual property" is an entirely artificial construct that goes against the nature of information and knowledge, forcing information to behave like physical goods, so it's not unfair to the creators in an economy that's built around trading physical goods. IP laws were built on moral arguments, so it's only fair to change them on moral grounds too.)
--
[0] - Greed is more visible in the LLM theatre of this conflict, because with textual content there's vastly more people who believe that they're entitled to compensation just because some comments they wrote on the Internet may have been part of the training dataset, and are appalled to see LLM providers get paid for the service while they are not. This Dog in the Manger mentality is distinct from that of people whose output was used in training a model that now directly competes with them for their job; the latter have legitimate ethical reasons to complain.
[1] - Even though I am myself in favor of treating training datasets for generative AI as exempt from copyright. I think it'll be better for society in general - but I recognize it's easy for me to say, because I'm not the one being rugpulled out of a career path by GenAI, watching it go from zero to halfway towards automating away visual arts in just ~5 years.
Lots of people have had their lives disrupted by technological and economic changes before - entire careers which existed a century ago are now gone. Given society provided little or no compensation for prior such cases of disruption, what’s the argument for doing differently here?
Wouldn’t a better alternative be to work on improving social safety nets for everybody, as opposed to providing a bespoke one for a single industry?
Yes, but:
1) It's not really an exclusive choice; different people can pursue different angles, including all of them - one can both seek immediate support/compensation for the specific case they're the victim of and seek a longer-term solution for everyone who'd face the same problem in the future.
2) A bespoke solution is much more likely to be achievable than a general one.
3) I don't believe it would be good for society for artists to succeed in curtailing generative AI! But, should they succeed, I imagine the consequences will encourage people to seek the more general solution that mitigates occupational damage of GenAI while preserving its availability, instead of having to deal with a series of bespoke stopgaps that also kills GenAI entirely.
4) Not that banning GenAI has any chance of succeeding - the most we'd get is it being unavailable in some countries, who'd then be at a disadvantage in competition with countries that embraced it.
Again, I'm not in favor of banning GenAI - on the contrary, I'm in favor of giving a blanket exception from copyright laws for purposes of training generative models. However, I recognize the plight of artists and other people who are feeling the negative economic impact on their jobs right now (and hell, my own line of work - software development - is still one of the most at risk in the near to mid-term, too); I wish for a solution that will help them (and others about to be in this situation), but in the meantime, I don't begrudge them for trying to fight it - I think they have full right to. I only have problems with people who oppose AI because they feel that Big AI is depriving them of opportunity to seek rent from society for the value AI models are creating.
That's going to be hard for you to justify in the long run, I think. Virtually everybody who ever lost a job to technology ended up better off for it.
That's plain wrong, and quite obviously so. You're demonstrating here a very common misunderstanding of the arguments people affected by (or worried about) automation taking their jobs make. In a very concise form:
- It's true that society and humanity so far always benefited from eliminating jobs through technology, in the long term.
- It's not true that society and humanity benefited in the immediate term, due to the economic and social disruption. And, most importantly:
- It's not true that people who lost jobs to technology were better off for it - those people, those specific individuals, as well as their families and local communities, were all screwed over by progress, having their lives permanently disrupted, and in many cases being thrown into poverty for generations.
(Hint: yes, there may be new jobs to replace old ones, but those jobs are there for the next generation of people, not for those who just lost theirs.)
Understanding that distinction - society vs. individual victims - will help make sense of e.g. why Luddites destroyed the new mechanized looms and weaving frames. It was not about technology, it was about capital owners pulling the rug from under them, and leaving them and their children to starve.
this feels like a much stronger claim than is typically made about the benefits of technological progress
That's quite a bold assumption. Betting that logic and reasoning ability plateaus prior to "full stack developer" seems like a very risky gamble.
The "natural" vs. "artificial" framing is a placeholder for nonexistent substantiation. It never does anything but add a disguise of objectivity to some wild opinion.
Artificial as opposed to what? Do you consider what humans do is “unnatural” because humans are somehow not part of nature?
If some humans (in case of big tech abusing copyright, vast majority, once the realization reaches the masses) want something and other humans don’t, what exactly makes one natural and another unnatural other than your own belonging to one group or the other?
> that goes against the nature of information and knowledge
What is that nature of information and knowledge that you speak about?
> forcing information to behave like physical goods, so it's not unfair to the creators in an economy that's built around trading physical goods
Its point has been to encourage innovation, creativity, and open information sharing—exactly those things that gave us ML and LLMs. We would have none of these in the rosy land of IP communism you envision, where no idea or original work belongs to its author.
Recognition of intellectual ownership of original work (coming in many shapes, including control over how it is distributed, the ability to monetize it, and simply being able to say you made it) is the primary incentive for people to do truly original work. You know, the work that gave us GNU/Linux et al., the kind of true innovation that tends to come when people are not handing their work to an employer in return for a paycheck.
> IP laws were built on moral arguments, so it's only fair to change them on moral grounds too.
That is, perhaps, the exact point of people who argue that copyright law should be changed or at least clarified as new technology appears.
So there’s no reason why “distaste” about AI abuse of human artists’ work shouldn’t be a valid reason to regulate or ban it. If society values the creation of new art and inventions, then it will create artificial barriers to encourage their creation.
I think bad AI makes bad output and so a few people are worried it will replace good human art with bad AI art. Realistically, the stuff it's replacing now is bad human art: stock photos and clipart stuff that weren't really creative expression to start with. As it improves, we'll be increasingly able to go do a targeted inpaint to create images that more closely match our creative vision. There's a path here that lowers the barriers for someone getting his ideas into a visual form and that's an unambiguous good, unless you're one of the "craftsmen" who invested time to learn the old way.
It's almost exactly the same as AI development. As an experienced dev who knows the ins and outs really well I look at AI code and say, "wow, that's garbage." But people are using it to make unimportant webshit frontends, not do "serious work". Once it can do "serious work" that will decrease the number of jobs in the field but be good for software development as a whole.
I disagree. There are many laws on the books codifying social distastes. They keep your local vice squad busy.
I don't think y'all really want to go down this road; it leads straight back to the nineties republicans holding senate hearings on what's acceptable content for a music album.
For but a few examples consider laws regarding gambling, many aspects of zoning, or deceptive marketing.
What's the purpose of the law if not providing stability? Why should social issues be exempted from that?
Quite an assertion. Why exactly would this be true?
Is the implication of this statement that using AI for image editing and creation is inherently unethical?
Is that really how people feel?
and unfortunately for adobe: these people are its customers
I watch this all quite closely, and it's the chronically online, anime/fursona-profile-picture artists.
Exact same thing happened when that ‘open’ trust and safety platform was announced a few months ago, which used “AI” in its marketing material. This exact same group of people—not even remotely the target audience for this B2B T&S product—absolutely lost it on Bluesky. “We don’t want AI everywhere!” “You’re taking the humanity out of everything!” “This is so unethical!” When you tell them that machine learning has been used in content moderation for decades, they won’t have a bar of it. Nor when you explain that T&S AI isn’t generative and almost certainly isn’t using “stolen” data. I had countless people legitimately say that having humans have to sift through gore and CSAM is a Good Thing because it gives them jobs, which AI is taking away.
It’s all the same sort of online presence. Anime profile picture, Ko-fi in bio, “minors dni”, talking about not getting “commissions” anymore. It genuinely feels like a psy-op / false flag operation or something.
Link even a single example of someone explicitly saying this and I would be astounded
We obviously can never unscramble that egg, which is sad because it probably means there will never be a way to make such people feel OK about AI.
Ethics (as opposed to morals) is about codified rules.
The law is a set of codified rules.
So are these really that different (beyond how the law is a hodge-podge and usually a minimum requirement rather than an ideal to reach for)?
In that case only large companies that can afford to license training data will be dominant.
Care to elaborate?
Also, saying artists only concern themselves with the legality of art used in AI because of distaste when there are legal cases where their art has been appropriated seems like a bold position to take.
It’s a practice founded on scooping everything up without care for origin or attribution and it’s not like it’s a transparent process. There are people that literally go out of their way to let artists know they’re training on their art and taunt them about it online. Is it unusual they would assume bad faith from those purporting to train their AI legally when participation up till now has either been involuntary or opt out? Rolling out AI features when your customers are artists is tone deaf at best and trolling at worst.
Showing the model a picture doesn't create a copy of that picture in its "brain". It moves a bunch of vectors around that capture an "essence" of what the image is. The next image shown, from a totally different artist with a totally different style, may well move many of those same vectors again. But suffice it to say, there is no copy of the picture anywhere inside it.
This is also why these models hallucinate so much: they are not drawing from a bank of copies, they are working off a fuzzy memory.
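To make that concrete, here's a toy sketch in Python/PyTorch (purely illustrative, nothing like a real diffusion model): each training image contributes one small gradient nudge to weights that are shared across every image the model ever sees, and then the image itself is thrown away.

    import torch
    import torch.nn as nn

    # Toy "model": one shared weight matrix, not a lookup table of images.
    encoder = nn.Linear(3 * 64 * 64, 512)
    opt = torch.optim.SGD(encoder.parameters(), lr=1e-4)

    image = torch.rand(1, 3 * 64 * 64)   # a stand-in "training image", flattened
    target = torch.rand(1, 512)          # stand-in training signal

    loss = ((encoder(image) - target) ** 2).mean()
    loss.backward()
    opt.step()  # the image is discarded; only the shared weights shift slightly,
                # and the next image (any artist, any style) nudges those same weights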
Not only that, they also assume or pretend that this is obviously violating copyright, when in fact this is a) not clear, and b) pending determination by courts and legislators around the world.
FWIW, I agree with your perspective on training, but I also accept that artists have legitimate moral grounds to complain and try to fight it - so I don't really like to argue about this with them; my pet peeve is on the LLM side of things, where the loudest arguments come from people who are envious and feel entitled, even though they have no personal stake in this.
Legislation always takes time to catch up with tech, that's not new.
The question I see being put forth from those with legal and IP backgrounds is about inputs vs. outputs, as in "if you didn't have access to X (which has some form of legal IP protection) as an input, would you be able to get the output of a working model?" The comparison here is with manufacturing, where you assemble parts made by others into some final product, and you would be buying those inputs to create your product output.
The cost of purchasing the required inputs is not being done for AI, which pretty solidly puts AI trained on copyrighted materials in hot water. The fact that it's an imperfect analogy and doesn't really capture the way software development works is irrelevant if the courts end up agreeing with something they can understand as a comparison.
All that being said I don't think the legality is under consideration for any companies building a model - the profit margins are too high to care for now, and catching them at it is potentially difficult.
There's also a tendency for AI advocates to try to say that AI/LLMs are "special" in some way, and to compare their development process to someone "learning" the style of art (or whatever input) that they then internalize and develop into their own style. Personally I think that argument gives a lot of assumed agency to these models that they don't actually have, and weakens the overall legal case.
Uh huh, so much worse than the people that assume or pretend that it’s obviously not infringing and legal. Fortunately I don’t need to wait for a lawyer to form an opinion and neither do those in favor of AI as you might’ve noticed.
You see any of them backing down and waiting for answer from a higher authority?
Should they? That's generally not how things work in most places. Normally, if something isn't clearly illegal, especially when it's something too new and different for laws to clearly cover, you're free to go ahead and try it; you're not expected to first seek a go-ahead from a court.
Pro AI people need to stop behaving like it’s a foregone conclusion that anything they do is right and protected from criticism because, as was pointed out, the legality of what is being done with unlicensed inputs, which is the majority of inputs, is still up for debate.
I’m just calling attention to the double standard being applied in who is allowed to have an opinion on what the legal outcome should be prior to that verdict. Temporal said people shouldn’t “pretend or assume” that lots of AI infringes on other people’s work because the law hasn’t caught up but the same argument applies equally to them (AI proponents) and they have already made up their mind, independent of any legal authority, that using unlicensed inputs is legal.
The difference in our opinions is that if I’m wrong, no harm done, if they’re wrong, lots of harm has already been done.
I’m trying to have a nuanced conversation but this has devolved into some pro/anti AI, all or nothing thing. If you still think I want to ban AI after this wall of text I don’t know what to tell you dude. If I’ve been unclear it’s not for lack of trying.
Copyright is full of grey areas and disagreement over its rules happen all the time. AI is not particularly special in that regard, except perhaps in scale.
Generally the way stuff moves forward is somebody tries something, gets sued and either they win or lose and we move forward from that point.
Ultimately "harm" and "legality" are very different things. Something could be legal and harmful - many things are. In this debate i think different groups are harmed depending on which side that "wins".
If you want to have a nuanced debate, the relevant issue is not whether the input works are licensed - they obviously are not - but the following principles:
- de minimis - is the amount of each individual copyrighted work too small to matter.
- is the AI just extracting "factual" information from the works separate from their presentation. After all each individual work only adjusts the model by a couple bytes. Is it less like copying the work or more like writing a book about the artwork that someone could later use to make a similar work (which would not be copyright infringement if a human did it)
- fair use - complicated, but generally the more "transformative" a work is, the more fair use it would be, and AI is extremely transformative. On the other hand it potentially competes commercially with the original work, which usually means less likely to be fair use (and maybe you could have a mixed outcome here, where the AI generators are fine, but using them to sell competing artwork is not, but other uses are ok).
[IANAL]
The flip-side to that is the truly "original" images where no overt references are present all look kinda similar. If you run vague enough prompts to get something new that won't land you in hot water, you end up with a sort of stock-photo adjacent looking image where the lighting doesn't make sense and is completely unmotivated, the framing is strange, and everything has this over-smoothed, over-tuned "magazine copy editor doesn't understand the concept of restraint" look.
If you ask a human artist for an image of "an archeologist who wears a hat and uses a whip" you're also going to get something extremely similar to Indiana Jones unless you explicitly ask for something else. Let's imagine we go to deviantart and ask some folks to draw us some drawing from these prompts:
A blond haired fighter from a fantasy world that wears a green tunic and green pointy cap and used a sword and shield.
A foreboding space villain with all black armor, a cape and full face breathing apparatus that uses a laser sword.
A pudgy plumber in blue overalls and a red cap of Italian descent
I don't know about you but I would expect with nothing more than that, most of the time you're going to get something very close to Link, Darth Vader and Mario. Link might be the one with the best chance to get something different just because the number of publicly known images of "fantasy world heroes" is much more diverse than the set of "black armored space samurai" and "Italian plumbers"
> Disintegrating IP into trillions of pieces and then responding to an instruction to create it with something so close to the IP as to barely be distinguishable is still infringement.
But it's the person that causes the creation of the infringing material that is responsible for the infringement, not the machine or device itself. A xerox machine is a machine that disintegrates IP into trillions of pieces and then responds to instructions to duplicate that IP almost exactly (or to the best of its abilities). And when that functionality was challenged, the courts rightfully found that a xerox machine in and of itself, regardless of its capability to be used for infringement is not in and of itself infringing.
That's simply not good enough. This is not merely a machine that can be misused if desired by a bad actor; this is a machine that specializes in infringement. It's a machine which is internally biased, by the nature of how it works, towards infringement, because it is inherently "copying": it is copying the weighted averages of millions, perhaps billions, of training images, many of which depict similar things. No, it doesn't explicitly copy one Indiana Jones image or another: it copies a shit ton of Indiana Jones images, mushed together into a "new" image from a technical perspective, but it will inherit all the most prominent features from all of those images, and thus: it remains a copy.
And if you want to disagree with this point, it'd be most persuasive then to explain why, if this is not the case, AI images regularly end up infringing on various aspects of various popular artworks, like characters, styles, intellectual properties, when those things are not being requested by the prompt.
> If you ask a human artist for an image of "an archeologist who wears a hat and uses a whip" you're also going to get something extremely similar to Indiana Jones unless you explicitly ask for something else.
No, you aren't, because an artist is a person who doesn't want to suffer legal consequences for drawing something owned by someone else. Unless you specifically commission "Indiana Jones fanart", I highly doubt you'll get something like him, because an artist will want to use this work to promote their work to others, and unless you are driven to exist in the copyright gray area of fan-created works, which is inherently legally dicey, you wouldn't do that.
So is a xerox machine. Its whole purpose is to make copies of whatever you put into it, with no regard to whether you have a license to make that copy. Likewise with the record capability on your VCR. Sure, you could hook it up to a camcorder and transfer your home movie from Super-8 to VHS with your VCR (or, like one I used to own, it might even have a camera accessory and port that you could hook a camera up to directly), and yet I would wager most recordings on most VCRs were made to commit copyright infringement. BitTorrent specializes in facilitating copyright infringement, no matter how many Linux ISOs you download with it. CD ripping software and DeCSS are explicitly about copyright infringement. And let's be real, while MAME is a phenomenal piece of software that has done an amazing job of documenting legacy hardware and its quirks, the entire emulation scene as a whole is built on copyright infringement, and I would wager that, to a rounding error, none of the folks who write MAME emulators have a license to copy the ROMs that they use to do so.
But in all of these cases, the fact that it can (and even usually is) used for copyright infringement is not in and of itself a reason to restrict or ban the technology.
> And if you want to disagree with this point, it'd be most persuasive then to explain why, if this is not the case, AI images regularly end up infringing on various aspects of various popular artworks, like characters, styles, intellectual properties, when those things are not being requested by the prompt.
Well, for starters, I'd like to clarify two axioms:
1) "characters" are a subset of "intellectual properties"
2) "style" is not something you can copyright or infringe under US law. It can be part of a trademark or a design patent, and certainly you can commit fraud if you represent something in someone else's style as being a genuine item from that person, but style itself is not protected and I don't think it should be.
So then to answer the question, I would argue that AI images don't "regularly end up infringing on ... intellectual properties, when those things are not being requested by the prompt". I've generated quite a few AI images myself in exploring the various products out there and not a one of them has generated an infringing work, because none of my prompts have asked it to generate an infringing work. It is certainly possible that a given model with a sufficiently limited training set for a given set of words might be likely to generate an infringing image on a prompt, and that's because with a limited set of options to draw from, the prompt is inherently asking for an infringing image no matter how much you try to scrape the serial numbers off. That is, if I ask for an image of "two Italian plumbers who are brothers and battle turtles", everyone knows what that prompt is asking for. There's not a lot of reference options for that particular set of requirements and so it is more likely to generate an infringing image. It's also partly a function of the current goals of the models. As it stands, for the most part we want a model that takes a vague description and gives us something that matches our imagined output. Give that description to most people and they're going to envision the Mario Brothers, so a "good" image generation model is one that will generate a "Mario Brothers" inspired (or infringing) image.
As the technology improves and we get better about producing models that can take new paths without also generating body horror results, and as the users start wanting models that are more creative, we'll begin to see models that can respond to even that limited training set and generate something more unique and less likely to be infringing.
> No, you aren't, because an artist is a person that doesn't want to suffer legal consequences for drawing something owned by someone else.
Sorry, I think you're wrong. If you commission it for money from someone with enough potential visibility, you might encounter people who go out of their way to avoid anything that could be construed as Indiana Jones, but I bet even then you'd get more "Indiana Jones with the serial numbers filed off" images than not.
But if you just asked random artists to draw that prompt, you're going to get an artist's rendition of Indiana Jones. It's clear that's what you want from the prompt, and that's the single and sole cultural creative reference for that prompt. Though I suppose you and I are going to have to agree to disagree on what people will do, unless you feel like actually asking a bunch of artists on Fiverr to draw the prompt for you.
And realistically what do you expect them to draw when you make that request? When that article showed up with the headline, EVERYONE reading the headline knew the article was talking about an AI generating Indiana Jones. Why did everyone know that? Because of the limited reference for that prompt that exists. "Archeologist that wears a hat and uses a whip" describes very uniquely a single character to almost every single person.
There's a reason no one is writing articles about AIs ripping off Studio Ghibli by showing the output from the prompt "raccoon with giant testicles." No one writes articles talking about how the AI spontaneously generated Garfield knockoffs when prompted to draw an "orange striped cat". There are no articles about AIs churning out truckloads of Superman images when someone asks for "super hero". And those articles don't exist because there are enough variations on those themes out there, enough different combinations of those words describing enough different images and things, that those words don't instantly conjure the same image and character for everyone. And so it goes for the AI too. Those prompts don't ask specifically for infringing art, so they don't generally generate infringing art.
Also, the model isn’t a human brain. Nobody has invented a human brain.
And the model might not infringe if its inputs are licensed, but that doesn't seem to be the case for most models, and it's not transparent whether they are. If the inputs are bad, the intent of the user is meaningless. I can ask for a generic super hero and not mean to get Superman, but if I do, I can't blame that on myself - I had no role in it; heck, even the model doesn't know what it's doing, it's just a function. If I Xerox Superman, my intent is clear.
I would hope we put up with it because "copyright" is only useful to us insofar as it advances good things that we want in our society. I certainly don't want to live in a world where if we could forcibly remove copyrighted information from human brains as soon as the "license" expired that we would do so. That seems like a dystopian hell worse than even the worst possible predictions of AI's detractors.
> I can ask for a generic super hero and not mean to get superman but if I do I can’t blame that on myself, I had no role in it, heck even the model doesn’t know what it’s doing, it’s just a function.
And if you turn around and discard that output and ask for something else, then no harm has been caused. Just like when artists trace other artists' work for practice: no harm is caused, and while it might be copyright infringement in a "literal meaning of the words" sense, it's also not something that we as a society consider meaningfully infringing. If, on the other hand, said budding artist started selling copies of those traces, or making video games using assets scanned from those traces, then we do consider it infringement worth worrying about.
> If I Xerox Superman my intent is clear.
Is it? If you have a broken xerox machine and you think you have it fixed, grab the nearest papers you can find and as a result of testing the machine xerox Superman, what is your intent? I don't think it was to commit copyright infringement, even if again in the "literal meaning of the words" sense you absolutely did.
Or consider the YouTube video "Fan.tasia"[1]. That is a collection of unlicensed video clips, combined with another individual's work which is itself a collection of unlicensed audio clips, mashed together into an amalgamation of sight and sound to produce something new and, I would argue, original, but very clearly also full of copyright infringement and facilitated by a bunch of technologies that enable doing infringement at scale. It is (IMO) far more obviously copyright infringement than anything an AI model is. Yet I would argue a world in which that media and the technologies that enable it were made illegal, or heavily restricted to only the people who could afford to license all of the things that went into it from the people who created all the original works, would be a worse world for us all. The ability to easily commit copyright infringement at scale enabled the production of new and interesting art that would not have existed otherwise, and almost certainly built skills (like editing and mixing) for the people involved. That, to me, is more valuable to society than ensuring that all the artists and studios whose work went into that media got whatever fractions of a penny they lost from having their works infringed.
[1]: https://www.youtube.com/watch?v=E-6xk4W6N20&pp=ygUJZmFuLnRhc...
If you want to ingest unlicensed input and produce copyright-infringing stuff for no profit, just for the love of the source material, well, that's complicated. I'm not saying no good ever came of it, and the tolerance for infringement comes from it happening on a relatively small scale. If I take an artist's work with a very unique style, feed it into a machine, and then mass-produce art for people based on that style, and the artist is someone who makes a living off commissions, I'm obviously doing harm to their business model. Fanfics/fanart of Nintendo characters are probably not hurting Nintendo. It's not black or white. It's about striking a balance, which is hard to do. I can't just give it a pass because large corporations will weather it fine.
That Fan.tasia video was good. You ever see Pogo's Disney remixes? Incredible musical creativity, but also infringing. I don't doubt the time and effort needed to produce these works; they couldn't just write a prompt and hit a button. I respect that. At the same time, this stuff is special partly because there aren't a lot of things like it. If you made an AI to spit out stuff like this, it would be just another video on the internet. Stepping outside copyright, I would prefer not to see a flood of low-effort work drown out everything that feels unique, whimsical, and personal, but I can understand those who would prefer the opposite. Disney hasn't taken it down in the last 17 years, and god I'm old. https://youtu.be/pAwR6w2TgxY?si=K8vN2epX4CyDsC96
The training of unlicensed inputs is the ultimate issue and we can just agree to disagree on how that should be handled. I think
https://www.logicallyfallacious.com/logicalfallacies/Appeal-...
These tools are optional whether people like to hear it or not. I’m not even against them ideologically, I just don’t think they’re being integrated into society in anything resembling a well thought out way.
It’s a philosophical concept not a trap card.
As for the model, it's still creating deterministic, derivative works based off its inputs; the only thing that makes the output "random" is the seed, so whether or not it's a database of vectors is irrelevant.
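The determinism half of that is easy to demonstrate with the open-source diffusers library (a rough sketch, not anything Adobe ships - the model name and prompt are just examples): the same prompt with the same seed produces a pixel-identical image, because the whole sampling process is a fixed function of the prompt, the seed, and the trained weights.

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

    def render(seed):
        gen = torch.Generator("cpu").manual_seed(seed)
        return pipe("a lighthouse at dusk, oil painting", generator=gen).images[0]

    a = render(1234)
    b = render(1234)
    # a and b come out pixel-identical (on the same hardware/software stack):
    # fix the seed and there is nothing "random" left to vary.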
Okay, so if the inputs to the model are my artwork to replicate my style, is the output copyrightable by you? You just said deterministic works aren’t derivative, they’re considered the same as the original. That’s not anything I’ve heard AI proponents claim and the outputs are more original than a 1 to 1 photocopy but I assume like the case you linked to that the answer will be, no, you can’t copyright.
I believe that is the conclusion the US copyright office came to as well: https://www.copyright.gov/ai/ (I didn't actually read their report, but I think that's what it says)
Fair Use 4th Factor: This factor considers whether the use could harm the copyright holder's market for the original work.
If the use is research it’s fine. If the use is providing a public non-commercial model then it is somewhat harmful as their work is devalued. If the goal is to compete with them it is very harmful. Therefore, since we’re talking about the last two use cases, I argue fair use does not apply. Others maintain it does as maybe you do.
If it’s not fair use then it would be infringing on that particular copyright holder.
As you know, anime art is a spectrum with “How to Draw Manga for Kids” at the bottom and studio quality at the top. People pick and choose the art to train on not just because of the style but also the quality and consistency of their work. That’s why you might choose a specific artist to base a model on even though their style is just “anime”.
It's funny, in a sad way, that you think Adobe is motivated by ethical considerations.
"hostage"
They annually harass me with licensing checks and questionnaires because they really hate you if you run Photoshop inside a VM (my daily driver is Linux), although it is explicitly allowed. Luckily, I don't need the Adobe software that often. But they hold a lot of important old company documents hostage in their proprietary file formats. So I can't cancel the subscription, no matter how much I'd like to.
Gimp can't handle them?
Basically any given PSD will certainly load correctly in Photoshop, but you're rolling the dice if you want to load it into anything else. More so if you are using more modern features.
0 - https://forum.affinity.serif.com/index.php?/topic/225143-wha...
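If anyone wants to check how well a given PSD survives outside Photoshop before cancelling, a quick sketch with the third-party psd-tools package (the filename here is just a placeholder) will at least show the layer structure and a flattened render:

    from psd_tools import PSDImage  # pip install psd-tools

    psd = PSDImage.open("artwork.psd")
    for layer in psd:
        print(layer.name, layer.kind)  # layer structure usually survives

    # Flattened render; newer features (smart objects, some adjustment layers,
    # live text) may not reproduce exactly outside Photoshop.
    psd.composite().save("artwork_flat.png")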
Performance is obviously going to take a hit though. Depending on the machines in question one would probably get better results from a current gen x86 box running that same Windows version of CS1/CS2/CS3 running through WINE (or of course Windows 11, but then you’re stuck with Windows 11).
If I saw news of a huge purge of stolen content on their stock image service with continued periodic purges afterwards (and subsequent retraining of their models to exclude said content), I might take the claim more seriously.
Much better than the transparently vapid marketing-speak
Adobe is cannibalizing their paid C-Suite artists by pumping out image generators to their enterprise customers. How is that ethical? They are double dipping and screwing over their longtime paying artists
AI tech and tools aren't just going to go away, and people aren't going to just not make a tool you don't like, so sticking your head in the sand and pretending it will stop if you scream loud enough is not going to help. You should instead be encouraging efforts like Adobe's to make these tools ethically.
Without that, it's only as good as a human artist in the way a picture of a work of art is.
Actual AI art would first require an AI that wants to express something, and then it would have to be trying to express something about the life of an AI, which could really only be understood by another AI.
The most we could get out of it is that maybe, by chance, it might be appealing like a flower or a rock. That is, an actual flower, not an artist's depiction of a flower, or even an actual flower that someone pointed out to you.
An actual flower that wasn't presented but that you just found growing might be pretty, but it isn't a message, has no meaning or intent, and isn't art. We like them as irrelevant bystanders observing something going on between plants and pollinators. Any meaning we perceive is actually only our own meaning, applied to something that was not created for that purpose.
And I don't think you get to say the hate is misdirected. What an amazing statement. These are the paying users saying what they don't like directly. They are the final authority on that.
There is always an actual human who has actual human experience in the loop, the AI doesn't need to have it. AI doesn't intend to draw anything on its own, and can't enjoy the process, there has to be a human to make it work on either intent (input) or value (output) side.
I pay for photoshop along with the rest of the adobe suite myself, so you cannot write off my comment either while saying the rest of the paying users are “the final authority” when I am in fact a paying user.
My point is simply that with or without everyone’s consent and moral feel-goods these tools are going to exist and sticking your head in the sand pretending like that isn’t true is silly. So you may as well pick the lesser evil and back the company who at least seems to give the slightest bit of a damn of the morals involved, I certainly will.
It has nothing to do with you. You are free not to have the same priorities as them, but that's all the difference indicates: that your priorities are different.
The "what is art?" stuff is saying why I think that "get as good as a human artist" is a fundamentally invalid concept.
Not that humans are the mostest bestest blessed by god chosen whatever. Just that it's a fundamentally meaningless sequence of words.
My own position is that "art" can only be created by a human. AI can produce text, images, and sounds, and perhaps someday soon they can even create content that is practically indistinguishable from Picasso or Mozart, but they would still fail to be "art."
So sure, an AI can create assets to pad out commercials for trucks or sugary cereal, and they will more than suffice. Commercials and other similar content can be made more cheaply. Maybe that's good?
But I would never willingly spend my time or money engaging with AI "art." By that, I mean I would never attend a concert, watch a film, visit a museum, read a book, or even scroll through an Instagram profile if what I'm viewing is largely the output of AI. What would the point be?
I'll admit that there is some middle ground, where a large project may have some smaller pieces touched by AI (say, art assets in the background of a movie scene, or certain pieces of code in a video game). I personally err on the side of avoiding that when it is known, but I currently don't have as strong of an opinion on that.
Also, realistically, most people want entertainment, not art (by your definition). They want to consume experiences that are very minor variations on experiences they've already had, using familiar and unsurprising tropes/characters/imagery/twists/etc.
The idea that only humans can make that kind of work has already been disproven. I know a number of authors who are doing very well mass-producing various kinds of trashy genre fiction. Their readers not only don't care, they love the books.
I suspect future generations of AI will be better at creating compelling original art because the AI will have a more complete model of our emotional triggers - including novelty and surprise triggers - than we do ourselves.
So the work will be experienced as more emotional, soulful, insightful, deep, and so on than even the best human creators.
This may or may not be a good thing, but it seems as inevitable as machine superiority in chess and basic arithmetic.
"The idea that only humans can make that kind of work has already been disproven." That I disagree with, and it ultimately is a matter of "what is art." I won't pretend to offer a full, complete definition of what is art, but at least one aspect of defining what is and is not art is, in my opinion, whether is was created by a human or not. There is at least some legal precedent that in order for a copyright to be granted, the work has to be created by a human being: https://en.wikipedia.org/wiki/Monkey_selfie_copyright_disput...
"I suspect future generations of AI will be better at creating compelling original art because the AI will have a more complete model of our emotional triggers - including novelty and surprise triggers - than we do ourselves."
Again, by my definition at least, AI cannot create "original art." But I'll concede that it is conceivable that AI will generate entertainment that is more popular and arousing than the entertainment of today. That is a rather bleak future to imagine, though, isn't it? It seems reminiscent of the "versificator" of 1984.
Why not? The output of AI is usually produced at the request of a human. So if the human will then alter the request such that the result suits whatever the human's goal is, why would there be no point?
This, to me, sounds like the debate over whether just pressing a button on a box to produce a photograph is actually art, compared to a painting. I wonder whether painters felt "threatened" when cameras became commonplace. AI seems just like a new, different way of producing images. Sure, it's based on prior forms of art, just like photography was heavily inspired by painting.
And just because most images are weird or soulless or whatever doesn't disqualify the whole approach. Are most photographs works of art? I don't think so. Ditto for paintings.
To your point about Instagram profiles, I actually do follow some dude who creates "AI art" and I find the images do have "soul" and I very much enjoy looking at them.
I agree with the sentiment, however..
Good luck to all of us at holding to that philosophy as AI & Non-AI become indistinguishable. You can tell now. I don't think you'll be able to tell much longer. If for no other reason than the improvements in the last 3 years alone. You'll literally have to research the production process of a painting before you can decide if you should feel bad for liking it.
But if I see something that I think is cool and interesting, and then I discover that it was mostly the result of a few AI prompts, then I just don't care about it anymore. I don't "feel bad" that I thought it interesting, rather, I just completely lose interest.
I do fear that it will be increasingly difficult to tell what is generated by AI and what is created by humans. Just examining myself, I think that would mean I would retreat from mainstream pop-culture stuff, and it would be with sadness. It's a bleak future to imagine. It seems reminiscent of the "versificator" in George Orwell's 1984.
> AI tech and tools aren’t just going to go away, and people aren’t going to just not make a tool you don’t like
It could. Film photography effectively went away, dragging street snaps along with it. If it continues to not make artistic sense, people will eventually move on.
Their sales went crazy because everyone was relentlessly pirating their software.
https://www.adobe.com/fireflyapproach/
(I work for Adobe)
They have burned so much of goodwill that the community is not willing to engage even with positive things now.
This broadly is happening to tech as well.
I was actually contacted by someone at Adobe for a chat about disability representation and sensitivity in Japan because they were doing research to gauge the atmosphere here and ensure that people with disabilities were represented, and how those representations would be appropriate for Japanese culture. It really blew my mind.
https://adobe.design/stories/leading-design/reducing-biased-...
(I work for Adobe)
Law is agreeable hate, in a way. Things that get enough hate will get regulated out, sooner or later.
People hate bad AI images, because they hate bad images, period. They don't hate good AI images, and when they see great AI images, they don't even realize they are made by AI.
It's true, there's a deluge of bad art now, and it's almost entirely AI art. But it's not because AI models exist or how they're trained - it's because marketers[0] don't give a fuck about how people feel. AI art is cheap and takes little effort to get - it's so cheap and low-effort, that on the lower end of quality scale, there is no human competition. It makes no economic sense to commission human labor to make art this bad. But with AI, you can get it for free - and marketing loves this, because, again, they don't care about people or the commons[1], they just see an ability to get ahead by trading away quality for greater volume at lower costs.
In short: don't blame bad AI art on AI, blame it on people who spam us with it.
--
[0] - I don't mean here just marketing agencies and people with marketing-related job titles, but also generally people engaging in excessive promotion of their services, content, or themselves.
[1] - Such as population-level aesthetic sensibilities, or sanity.
There's a decent size group of people who have a knee-jerk negative response toward AI regardless of quality. They'd see that image, like it, and then when told it's AI, turn on it and decide it was obviously flawed from the beginning. Is there a version of "sour grapes" where the fox did eat the grapes, they were delicious, but he declared they were sour after the fact to claim moral superiority?
Art as nice things, vs. art as a peacock's tail where the effort is the point.
Fast fashion vs. Ned Ludd.
Queen Elizabeth I saying to William Lee, "Thou aimest high, Master Lee. Consider thou what the invention could do to my poor subjects. It would assuredly bring to them ruin by depriving them of employment, thus making them beggars."
> and then when told it's AI, turn on it and decide it was obviously flawed from the beginning.
Have you seen any experimental results from research in which participants were _falsely_ told something was AI-made, to prove and gauge that "moral superiority" effect? I'm not aware of any. There have to be many, because it would be easy to run. No?
To be completely honest, I can't always tell, but when I come across images that give me an inexplicable gastric discomfort, and it is later revealed that they were AI generated, that explains it all (it doesn't remove the discomfort, just explains it).
I don't have reasons to believe that I have above-average eyes on art among HNers, but it'll be funny and painful if so. I mean, I'm no Hayao "I sense insult to life itself" Miyazaki...
He was saying that in response to a computer-animated zombie that dragged itself along in a grotesque manner. It wasn't that it was animated by a computer, it was that he found it offensive in that it felt like it was making light of the struggles of people with disabilities. You definitely would also find it disgusting.
* not only this contextually misleading quote; I've also parroted things
I don't know for sure about the common usage, but personally my use of AI in Photoshop are things like replacing a telephone pole with a tree, or extending a photo outside of frame, which is much different than just generating entire images. It is unfortunate that this usage of generative AI is lumped in with everything else.
It's fine as a way of making shitposts, but I don't know if it's a professional-grade graphics editor - but I'm not a professional myself, so what do I know.
In Photoshop, likely because it's been used by pros for decades, little conveniences are all over the place, like the ability to press 'd' for 'Don't Save' in a save dialog box.
That said, the past few versions of Photoshop, which moved away from fully-native apps to some sort of web UI engine... they are getting worse and worse. On one of my Macs, every few weeks it gets stuck on the 'Hand' tool, no matter what (even surviving a preferences nuke + restart), until I reboot the entire computer.
"Get artists to use it" is the free square :)
To people who are on board with the "AI" hype train, there is no ethical problem to be solved wrt. "AI".
Neither side cares.
So while I think we're all pretty aware of both sides of the image-gen discussion and may have differing opinions about that - I think we can all agree that the genie can't be put back in the bottle. This will naturally lead those who do take advantage of the technology to outpace those who do not.
Also, I applaud Adobe's approach to building their models "ethically"; yes, they are inferior to many competitors, but they work well enough to save significant time and money. They have been very good at honing in on what AI is genuinely useful for instead of bolting a chatbot onto every app like clock radios in the 1980s.
The dark lesson here is that you avoid hate and bad PR by cutting artists out of the loop entirely and just shipping whatever slop the AI puts out. Maybe you lose 20% of the quality but you don't have to deal with the screaming and dogpiles.
Even if you believe everything they say, they are lying by omission. For example, for their text to image technology, they never specify what their text language model is trained on - it’s almost certainly CLIP or T5, which is trained on plenty of not-expressly-licensed data. If they trained such a model from scratch - they don’t have enough image bureau data to make their own CLIP, even at 400m images, CLIP only performs well at the 4-7b image-caption pair scale - where’s the paper? It’s smoke and mirrors dude.
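To make the text-encoder point concrete: the CLIP checkpoints everyone actually uses are trained on web-scale scrapes like LAION, not licensed stock libraries. A quick illustration with the open_clip package (I'm not claiming this is the exact encoder Firefly uses - that's precisely what Adobe doesn't disclose):

    import open_clip

    # The pretrained tag says it all: these ViT-B/32 weights were trained on
    # LAION-2B, a two-billion-pair web scrape, not an expressly licensed dataset.
    model, _, preprocess = open_clip.create_model_and_transforms(
        "ViT-B-32", pretrained="laion2b_s34b_b79k")
    tokenizer = open_clip.get_tokenizer("ViT-B-32")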
There's a certain personality type that is getting co-opted on social media like Hacker News to "mook" for Adobe. Something at the intersection of a certain obsessive personality and Dunning-Kruger.
Many would argue, myself included, that the most ethical approach towards AI is to not use it. Procreate is a popular digital art program that is loudly taking that position: https://procreate.com/ai
> Hey, we're Adobe! We're here to connect with the artists, designers, and storytellers who bring ideas to life. What's fueling your creativity right now?
> Drop a reply, tag a creator, or share your latest work—we'd love to see what inspires you!
That's such a bland, corporate message. It feels totally inauthentic. Do Adobe (a corporation) really "love to see what inspires you" or do they just want engagement for their new account?
I'm not surprised in the slightest that it triggered a pile-on.
To your point of useful info, I’m sure Adobe would get there. They just joined the site and got bullied off. I doubt they’re going to care about the site now, but it’d be funny if they tried a second post and just trudged through it.
Adobe were really clumsy here, and that's why they got burned.
The problem with this sentence is that words mean things... I don't use social media, so take this with some salt, but I do write things I hope people will find useful. I could just as easily share them on social media and still wouldn't be looking for 'engagement'. It would still be in that same hope that someone finds it useful. While I wouldn't object if someone defined or described reading it as engagement, I wouldn't. Engagement is what you chase if you're looking to sell ads, because engaged people interact with ads too.
Saying everyone wants engagement as if that's the means and the ends is oblivious to the fact that people, humans, don't organically give a fuck about engagement. Attention, and therefore belonging, or appreciation. Yes, absolutely. You could also describe that goal as seeking engagement, but again because words mean things, attention, or belonging are both better words for the desire the human has.
Influencers arguably want engagement, but I'd also describe them as companies in addition to being people. Truth be told, I'm only convinced they're the former.
> So I find it silly that people are upset at Adobe for having the most generic “hey we joined, show us what you’re working on” versus the useless engagement posts that are templates of “most people can’t figure out what the answer is” when the image is “two plus two equals ?”.
I don't find it silly at all. A company that has earned its reputation for taking from people shows up and asks for more. Predictably, people said no! If Adobe wanted attention and belonging, and came bearing gifts, like photos, artist resources, what have you, I suspect the vitriol wouldn't have been so bad. (They've earned their reputation.) But at least they would be able to represent the idea that they are seeking belonging, paying in with the hope of getting something back. Instead they couldn't read the room, and demanded attention and engagement.
From my own experience, when moving to Bluesky, the absence of engagement posters felt like a breath of fresh air. Meanwhile, with the broader influx from X/Twitter, there are some posts which are more in this style (e.g., "what was your favorite xy" nostalgia posts, or, slightly more adapted to the platform, "this was my favorite xy (image), what was yours?"), but I usually see these going unanswered. It's just not the style of the platform, which is probably more about letting people know and/or about actual conversations, or just doing your thing. So, this gambit is more likely to be received as "oh no" and "corporate communications, of course", maybe as "yet another lack of commitment." So don't expect congratulations on this; rather, it may even unlock the wrath of some… The post may have done much better without this call for engagement. Just say "hi", if this is what it's about. (Actually, this is kind of a custom, new accounts just saying hi.)
Most importantly, if you're doing public relations or marketing, it's still your job to meet your audiences, not theirs to adapt to you. And for this lack of understanding of the basics, the gambit may have come across as passive-aggressive.
X, for example, doesn't have much of that. It has its own flavor of toxicity, which is in many ways worse, but not that particular flavor of toxic.
I also see it on Reddit in certain subreddits but not in others.
A profile of an up and coming artist doing cool stuff with Adobe software.
A video interview with an interesting team lead at Adobe.
Or just stick to product announcements like various other brand accounts do.
Pretty much anything that doesn't come across as fake engagement bait would probably have been fine.
I am so over pile-ons by people who see themselves as being SO important.
Also: it feels really weird to defend Adobe.
This sounds like a bigger indictment of the platform than anything to do with Adobe.
Adobe could try to offer virtual "office hours" with employees helping people learn to use the software, give something back to their users. Instead they immediately treated it like another marketing channel with a formulaic and lazy engagement bait question that I'm sure they thought would work the same way it does on Twitter and Instagram.
If Bluesky don't find a way to escape this spiral of driving away normal people and attracting toxic people, it's going to become a sort of left-wing 4chan.
That would be like Hacker News but without all the shills using it. Practically unrecognizable, all the "normal people" would be gone.
The same way a photo sharing app is going to become dominated by attention-starved narcissists posting sexy photos.
Normal users just don't have the same motivation to post.
It is like complaining rotting meat is attracting flies.
It's interesting that you see this as a moderation issue for Bluesky rather than an opportunity for a billion dollar brand to rethink the way they communicate online.
I think it’s more the fact that bluesky’s core demographic are angry political obsessives (who are angry enough about politics to join a new social network over said politics). I can’t think of a worse way to create a community of people than filtering by “I’m angry about political stuff.”
Turns out the old social norm of "don't talk politics with neighbors" was an example of a good Chesterton's fence.
My comment wasn't just about Adobe.
There must be a name for the phenomenon when a minority escapes persecution and hate, and upon reaching their promised land become intolerant and hateful of any outside group.
This is IMO the problem. I don't use these sites to follow "content creators". For the most part I'm following normal people who happen to say things I find interesting.
If you want to say you don't care about having content creators on your platform, that's at least a coherent take. But you still have to think about the business models of the platforms that keep them around-- short of collecting payments from every ordinary user, there needs to be buy-in from someone wanting reach, whether that's corporate accounts, individual content creators, or someone else. And do you actually know all of those "normal people who happen to say things you find interesting" in real life, or did you find some of them online, i.e. they're basically influencers/content creators with you as an audience member?
This particular situation is why the only thing I miss from Twitter at this point is the ability to mute an account's reposts rather than the full account.
Social media was a catastrophic mistake.
Twitter has the advantage of a broader range so you can escape that, while Bluesky is almost exclusively used based on strong ideological motivation. Its raison d'être at this point is basically and highly political, so this was bound to happen.
I have at least 100 words on my X muted word list and it's just about usable.
So you followed a bunch of people you didn't like? That says more about you than the platform...
YouTube did this for a while, up until a few months ago if you weren't logged in you'd literally just get an empty page and a search bar at the top as it wouldn't recommend any videos at all. That was temporary for a reason.
Bluesky seems to focus on curating your own feed, to the point where mass blocklists will block hundreds or thousands of accounts, and not every blocklist is reliable. The "block first, ask questions later" approach is very freeing and I've been practicing it on social media long before it gained traction on Bluesky.
I expect the platform will be very painful for people who believe everyone should be subjected to their opinion (the people who will cry censorship because Reddit shadow-banned them). Good riddance, I'd say; they can be happy on Twitter with the rest of their kind.
On average, my experience has been a lot better. I'm guessing that's mostly because I had to fight and subdue Twitter to exclusively show me content from the people I follow, combined with social media's general attraction to alt-right nutjobs (and of course, Twitter's owner being an alt-right nutjob doesn't help either).
The experience of a person following fantasy football stuff, and another person following politics, will be totally different, regardless of website.
Bsky doesn’t have blue check replies which is a major point in its favor too. I don’t think I’ve ever seen a worthwhile blue check reply, it’s like if one purposefully dredged up the worst YouTube video comments they could find and pinned them at the top.
What is your "track"? Bluesky seemed to be behaving exactly like you described Twitter, and the only explanation I could come up with was that the process of clicking on a post to block/mute the account (which is what I was told to do to curate my feed) was considered enough engagement that my feed should be more and more of what I don't want any of.
For Xitter it didn’t matter how much I trained it, eventually it’d insert something I didn’t want to see and even the slightest hint of engagement would push my feed that direction. This could happen even after multiple weeks of training.
It's obnoxious, and if the service truly offers a real alternative to Twitter it needs to squash these brigading groups. I get that people don't want to see the posts of brands...so don't follow them. It's incredibly simple. I don't want furry content but I don't run around the platform complaining that some do.
It’s a community of unhealthy social media addicts
I mean, yeah, the place is a kind of minefield these days, but I don't blame people. It just happens.
X is much more of an ideological mix.
I have a very hard time believing that Bluesky is more hostile than Twitter.
Do you want updates? You want new versions? New features? Support?
A single purchase is like buying an iPhone once and then expecting to get a new one for free each year.
No. This was a solved problem decades ago. Purchase includes minor version updates, then you keep it for life without updates. Upgrading to the next version is a choice.
Why did we collectively agree that customer choice does not matter?
Single bill makes a lot of sense for many users.
Why do you think the company is automatically entitled to rent-seeking and the removal of user choice just because they tweaked the UI?
On the one hand, I don’t have much sympathy for Adobe. On the other hand, this whole situation is why I am not on social media these days with the exception of HN and niche subreddits.
Even if much of the criticism they receive is warranted, the social media climate is just so incredibly toxic that I want no part of it.
Feels like there has to be a better way to be social on the Internet, but as time goes on I’m increasingly not sure if humans can handle it once a certain scale is reached.
But I just can't go back to their predatory pricing practices, and the absolute malware of a programme that Creative Cloud is.
These people drive away normal folks creating an ever more distilled community of unpleasant folks.
How many normal people are going to hang around places like reddit and bluesky that are seemingly now filled with hate and conspiracy theories.
It's a fool's errand to go on a "free" platform and complain about corporate presence. If you are not paying, then those corporate bodies are.
I have (and I imagine most people over 25 have) used plenty of forums, wikis, and other social media sites that are free as in beer, hosted by some guy with a computer in his garage, with technology from decades ago.
The better ones of them asked you to pay if you wanted to be able to post video/large images. In most of those spaces, corporate was nowhere to be seen. Sometimes they used banner ads, but often, nothing at all but a single person's internet bill was the entire cost of the site. Such places still exist, and are good.
The internet is getting worse by the day. It's been getting worse for so long, that people are starting to wax lyrical about how it can't possibly work any other way, this is just the natural state of things.
Of course, if you absolutely must mindlessly go to the dopamine trough and get your fix of algorithmic profit engagement, then yes, you will end up in places that relentlessly seek profit in one form or another. But if you filter even a little bit for quality, you'll end up somewhere else.
Was it worth it? Was it really free? Or would we have done it knowing we would all eventually pay a terrible price?
Oh, yes, that artisanal internet. So nice, too bad it serves only a minuscule fraction of the people of the internet.
Everyone else just goes to Reddit and Discord.
The world is not better when everyone is exactly the same, it's better when everyone has a place they feel welcome. For some people they enjoy reddit or discord, others don't. There's nothing wrong with someone preferring something made out of passion, rather than something made to make more money.
Yes, the problem is that the overwhelming majority of people using sites like Reddit or Discord are not choosing it. They are there because it has become their only alternative.
And it has become their only alternative because all these hobbyist forums can only exist when they are serving some tiny, exclusive, privileged few. If they grow too much, they will either crumble or find themselves becoming a "professional" service with people on payroll and revenue targets.
I'm not sure I agree with this, but it does fit the pattern. Auto forums are an example of this working. But I wouldn't call that a privileged few, would you?
All a business cares about is maximum reach, so they will ignore the small sites in favour of the biggest aggregator for the lowest cost.
If somebody on a smaller site behaved in the disingenuous and spammy way brands do on social, they'd be banned. Bluesky is not doing that, so this should be an opportunity to genuinely engage with the audience instead of copy/pasting the cynical tactics they apply everywhere else.
I actually enjoy Bsky as a replacement for Twitter mostly to keep on top of news (tech and otherwise, the tech often coming from the source), along with a small selection of high profile figures. So I follow those sources and venues.
It is absolutely pathetic that a small mob attacked Adobe -- primarily a super aggressive anti-AI contingent that runs around like a sad torch mob on bsky -- and I hope Adobe return to the platform. It would be nice for people like me, who chose to follow these brands, to see the news from Adobe, OpenAI, Microsoft, etc, and my choice shouldn't be limited by those people.
And you can always subscribe to Adobe's email list.
It wasn't "their customers" that brigaded. It is the clowns who have decided that Bluesky is their own. They are the ones that will keep it from hitting mainstream, and hopefully the service crushes their obnoxious activism.
Adobe could have sincerely communicated while blocking any abusive stuff or, if they couldn't be arsed, turned off comments. They have PR people to handle this stuff, or at least they did until it was probably left up to some underpaid intern who doesn't give a shit.
I'm not crying crocodile tears for Adobe. They shouldn't have deleted their post, and ultimately they just shrugged and decided that bsky didn't matter yet and just abandoned it for now.
Which serves no one, but it's what you get when a small number of twats who think they're the bully squad ruin a platform.
This is a silly idea. Who else would care enough or know about it?
Outrage is a performance these days.
But that's because they've chosen something else for their personal use and only make Adobe part of their workflow when required to by their workplace.
The point I was raising here specifically was the people who are feigning outrage to Adobe's benign Bluesky post are unlikely to be Adobe customers, and unlikely even creative professionals at all.
Outrage and hate is a sport to these people.
It's delusional.
Post on an open forum, get open forum results.
They could host a web page. That's a thing still. What's that? They want an audience? A megaphone into someone else's auditorium?
There's a cost to that.
Pretty sure they trashed their own brand with their subscription model. They're finding that out now.
I jumped to Affinity apps years ago when Adobe required a subscription — never looked back.
Bluesky _is_ less tolerant than Twitter of “hello, we’re a brand, aren’t we wonderful/funny”, but I think this particular reaction is more about it being Adobe than anything else.
More adroit PR, perhaps.
It also helps that when Procreate adds features, it’s always stuff that’s desired by a large chunk of their users and is broadly useful. Contrast this to e.g. Photoshop, where for many of us eliminating 98% of the new features added since CS2 would make no material difference in day to day usage.
Adobe would be well served by building “heirloom” versions of their tools that are single-purchase, affordable, and have a fixed CS1/CS2-ish feature set with all development thereafter being put into optimization, stability, etc. That’d be plenty for even many commercial artists, let alone “prosumers” and more casual users.
I love when people use this to mean "more white and conservative."
Bluesky users lean toward hating corporate greed. Adobe is greedy as fuck. Simple as. They and companies like them can stay off.
People have always tried to use social pressure to strike at people they didn't like. But there really has been a marked increase in occurrences in the last ten or so years.
We're starting to see the legal effects of people being fired for holding legal views.
"Red scare" is just a term we started using to cope with seeing people we're sympathetic to being judged for their words or actions.
Not to benefit society, but to make one feel good about themselves about the victory they achieved in ruining someones life.
The people trotting out the phrase "cancel culture" as a boogeyman also tend to run around being apologists for racism, sexism, assault, or criminal behavior. Regardless of if you're actually upset about legitimate instances of people overreacting, the fact that the term "cancel culture" is used to complain about pedophiles or sexual predators actually suffering consequences makes it difficult to take any complaints seriously.
Everyone wins and the world is a slightly nicer place.
Rather than hounding people's employers etc. The world is already divided to extremes, best not to make it worse.
Fire them, debank them, humiliate them, destroy their life.
> someone commits petty crime for the 13th time.
Meh
I just don’t post anything publicly anymore because the EV is clearly negative now. Luckily the people I meet in the real world are not the thought police.
you make that sound like a bad thing
Crocodile tears for the poor company that got drunk on enshittifying its own brand and now has to sleep in it. Adobe's takeover is like it freebased Private Equity and now complains that it has no friends. The TOS change to have AI train on all your art is really what broke people.
It may never be a Twitter alternative in the sense of making anyone a billionaire, but I'm okay with that.
Hilarious thin marketspeak. But sure, blame the social platform.
I think this is one of the most profound statements I've read all year. Perfectly sums up all the quiet backlash by middle America against the trolls that have pulled the party into extremes.
It's not that they're bad people, they just get over excited and nobody wants to deal with the headache right now.
I see it at work in the lunch room conversations where someone starts spewing passive aggressive hate and it really kills the vibe.
Qoppa PDF Studio is a great alternative to Adobe Acrobat.
Both offer perpetual licences.
Like you I rarely open Photoshop, maybe once or twice a month.
After using and promoting their products for years to create our work, they switch off access to view or print any of it unless we keep paying blackmail monthly fees.
I don't want to edit my old work, but to lock me out of viewing it is nothing short of BLACKMAIL. As people change jobs or retire, they lose all access to their work. Sick.
Ogh, nvm, lol this platform has real users that actually engage about their opinions?
dips out
“Whoever, with the intention of obtaining financial gain, causes another person to enter into a financially disadvantageous arrangement, or otherwise dispose of their own or someone else’s assets, by means of deception, or by exploiting a mistake or their inability to understand the nature of the action undertaken, shall be liable to imprisonment for a period of 6 months to 8 years”
I think I would have had more respect for Adobe if they had left the posts up.
Adobe ought to be glad anyone still cares about them.
Sadly, what I know them mostly for now is their vermin web services that major eCommerce companies seem to love to use (sad for the consumers stuck using this garbage). I see the "adobedtm.com" domain show up constantly in the NoScript plugin, and I know nothing good can come from it, but NOT allowing it usually breaks the websites. I really, REALLY try not to do business with companies using Adobe in their web services for this reason.
When you are an adult not in school you probably don't need "all apps" and it is relatively inexpensive to get just the product you use.
Anyway, they are still around because they still have some of the best set of features, and are industry standards, though this may change in the future and in some areas is already in progress (and I welcome that! They need competition to push them)
The single time purchase also has the added benefit of letting me use that version however long I like. Personally I don’t need much of anything that’s been added since CS2, and as such a user I’d normally only be buying new versions of Photoshop when the one I own stops running on modern operating systems. It also means you’re not bombarded with UI shifting around for no good reason, some feature getting pushed in your face for the sake of some PM’s metrics, etc.
The only reason I even have a CC sub right now is because a credit card benefit essentially pays for it. If/when that benefit disappears so does my sub.
A big part of how they keep their relevance is people using those 'educational discounts' so that they are the tools that everyone learns to use in school, building up a moat against any alternative.
And $20/month is not what I would call "very expensive" in the context of a professional product used by people and companies who make a profit from it. By comparison, AutoCAD and Revit are $350/month each.
Blender is slowly taking over 3D, why can't 2D be disrupted similarly?
They also have the whole ecosystem lock-in model that also worked for Apple: their products work together, so if you try to replace Photoshop, you're probably still using Illustrator, and After Effects, etc. except your workflow isn't as smooth anymore, because there's one tool in the chain that works differently than the rest.
Back in the Creative Suite days, my parents (small graphic design studio) upgraded largely because a client upgraded and they needed to be compatible with the newer version of the file format. Creative Cloud "fixed" that, I guess.
Do instructors really require people to submit PSDs, or do students export their stuff to jpg/png/whatever and submit the export?
If you do not like a product, you switch to a competitor. That is the fundamental assumption on which the system is built.
Is it a failure of Bluesky to never become the global town square, if it means being a place where a brand can't find it a safe space to promote itself?
Can a social network thread the needle of having enough traffic to be worthwhile but not so much as to attract the Eternal September?
No, because that's an oxymoron. There is no such thing, because a precondition for a town square (which in reality is a community of people, not a place) is that there exists a shared set of values, context, and conduct among its members. The state of nature on a global platform, just like in a megacity, is to be an anonymous, atomized individual to whom ideas or products can be sold.
Every "AI" artist right now - https://m.youtube.com/watch?v=Sjnr_tLLKQ0
The people who decide to write a scathing reply to Adobe's Bluesky account are irrelevant outside of their Bluesky bubble.
Now at least they can know why their sales are plateauing and why people churn as much as they can.
What are they migrating to?
With Krita and Photopea, my need for photoshop, previously paid by my employer, is gone.
Offering a discount to new customers while no discounts for existing, loyal customers always seemed backwards to me. Back in the Zortech days, we'd offer upgrades to existing customers at a steep discount.
That's part of the difference. With a subscription model, you don't need customers to want to buy your upgrades (they're forced to pay for them), you benefit the most from locking them into your ecosystem as best you can. Adobe doesn't want to make existing customers happy, they want to make it difficult for unhappy ones to stop paying every month. At that point, discounts to new customers makes sense, since it traps new people into paying you.
I tried to unsubscribe from Disney+. Their website says all over the place "cancel at any time", but I could not find any place where I could cancel it.
I wound up canceling it by getting my credit card company to block their charges.
It left a pretty sour taste, and I'd be very reluctant to ever sign up for it again.
But mostly, when they first started the subscription model I wasn't furious about it - until I realised I was stuck on a yearly plan, and I'm usually pretty good at detecting when I'm being tricked into that. A part of me doubts I was ever asked correctly.
I understand that you can charge monthly if you provide a service. You can charge monthly for SaaS or PaaS, but charging monthly for a desktop application can't win the goodwill of the people.
> has left Adobe’s standing with many photographers in shambles
What does this mean? Do normie photographers have any realistic choice except Adobe products? Are their sales falling? I doubt it. This quote reads like sour grapes.
HOWEVER, $60 a month is too high for a product whose quality is tanking. I was okay with it the first few years, but PS and Illustrator's performance has noticeably gone straight to shit for absolutely no benefit except a few stupid gimmicks that offer zero productivity boosts. InDesign they've mostly left alone, which I'm happy about because it's like Oreos. Stop fucking with the recipe, you made the perfect cookie. There are no more kingdoms to conquer. Simply find performance boosts, that's it. The reliability of my files and getting work done is more important than anything else. Truly. That's what Adobe USED to stand for. Pure raw UI intuitive productivity and getting shit done. Now, it's a fucking clown show that cares about their social media and evangelism.
I hear on the video side they've super dropped the ball, but I'm not much for motion graphics outside of Blender.
Stop with the bullshit "telemetry" garbage that bogs down my computer and the AI scraping of our data. Old files that used to run fine on my older computers run like shit on my new one. I know damn well there's bullshit going on in the background. That's 80% of the issue. The other 20% of problems are run-of-the-mill stuff.
I am perfectly happy paying for functional, productive software. 60 bucks a month for that is fine as a freelance graphic designer and marketer. However creative cloud is quickly becoming dysfunctional and unproductive. That's the problem.
Renting software is just plain a raw deal for the users. It's more expensive, plus you don't get to keep it after you stop paying. The only one who wins is the vendor.
However, something to understand: most professional graphic design does not happen in Photoshop. It happens more in InDesign and Illustrator. Once you go design firm, print house or corporate, PS is... there... but not like... "gee I need this every single day". One of the key features of InDesign is that printing to literally any commercial or industrial printer works perfectly. I used to work in a medium-sized print shop (digital and offset presses). You used InDesign to send to the RIPs (software that converts the color data properly) and got your intended result the first time about 95% of the time (ICC color management is a whole different topic). If you try Photoshop, ha ha. No. Most normies need to stop subscribing to CC and just get the PS sub. Seriously, you're wasting your money.
That's what I pay for in InDesign. Pure fucking consistency and less me screaming with difficulties. Quark and MS Publisher are great examples of competitors that thought it's all about design and not about output. Pure fucking trash, because nothing ever printed or exported to PDF consistently. You know how MS Word formatting is a nightmare? Yeah, you don't get that in InDesign, ever. InDesign does nearly pure raw output to a RIP with lots of controls. Now, if you have zero idea what you're doing, it's a nightmare. Kind of like the Manual setting on a prosumer DSLR camera. Once you learn how to use f-stop, shutter, ISO, etc., you refuse to use a camera without manual control. If you don't understand, you think it's stupid to not have the camera (or in this case the software) think for you.
Plus, InDesign has variable data and other features that make booklet layouts a breeze. Hard to wrap your head around at first, but once you understand how the tools work, making print and digital PDFs, maintaining those files, reusing those layouts effectively, and a whole mess of other time-saving features, you'll very, very, very quickly understand why someone would be okay with paying 60 or 100 bucks a month for it... as long as there are regular improvements. Blender has more regular, substantial improvements and it's free. Part of me thinks if they did a $600 one-time license, then like a $10 a month "update subscription", that might be a better compromise. Not sure on the exact figures, but you get the point.
Also, from the perspective of a pro graphic designer/print designer who's been doing this since 2006: Adobe is a fuckton more than Photoshop, and these anti-Adobe conversations treat Photoshop as if it's the part that matters. PS is more like the jingling keys for the normie/public to be distracted by. Like, PS is important... like how backseats are important to a car (unless you're more of a photographer... and you don't like Lightroom...). If I lost access to PS, I'd shrug and be slightly bummed out. But not by much. Illustrator and InDesign? Might as well change careers at that point. Effectively nerfed and nuked as a designer.
Yes? It's pretty normal to take out a loan or use a credit card to purchase tools to set up your career for years to come. That budding graphic designer probably spent $2000+ on a new Mac. Honestly, though, subscriptions only make sense for business customers; they really fuck over the home users who would like to buy the software once and use it for several years. Hobby photographers and such are either priced out of the market or stuck with old computers running older versions from before the subscription push.
Taking out a loan to start a career? I guess I was born to the wrong parents lol.
Not everyone starts out on great footing in their careers. To this day, I still don't buy "new" computer parts to upgrade my computer. It's a waste of money to me because I grew up only being able to afford used or, best case, clearance.
Also, no Mac. Macs are for rich people with zero taste and sense and too much money to burn. Regardless of what anyone says, dollar for dollar compared to a Windows machine, Adobe doesn't perform better on a Mac. I've tested it against computers wherever I've worked, my older laptop versus their newer Macs. Side by side, like 90% of functions are faster on Windows. Plus there's this weird-ass memory issue where every PS file has an extra ~500MB of bloat on a Mac. No clue why.
But yes, subscriptions do make sense for business customers, and a lot of graphic designers do freelance on the side. Again, exactly why Adobe SHOULD be a subscription. Adobe isn't a hobbyist toolset and they need to stop treating it as such. When home users "discovered" Adobe and Adobe started pandering to them, that's when it went south. If they bumped up the price to $100 bucks a month and obliterated the "I'm just a quirky creative home user who likes to dabble" pandering, GOOD. I'd keep my subscription. Instead, I'm actively building up my experience in alternative tools so I can get away from Adobe. Not every piece of software should be "Karen" easy, especially when it's designed for a professional market. I want my software to be brutally efficient and productive. Not "a vibe". My "vibe" is getting away from the computer. Software should help me annihilate my workload as quickly as possible so I can go live a real life more.
You're telling them they'll lose you, but if they did what you recommend, they'd have lost both you and the "quirky creative home user who likes to dabble."
The amateur market creates the professional market 10 years from now. They should make sure quirky home users are using their product, even if they have to pay them to use it. If the quirky instead choose any other tool that is capable enough for professional work, they'll grow into the tool and never leave it. The more that do that, the more the tool will improve to conform to their expectations.
If the quirky start buying Affinity instead of learning Photoshop, Photoshop will be gone. In a hypothetical universe where the choices that were available when you first became professional were either an (even more, by your suggestion) expensive Adobe subscription and buying Affinity, you may never have used Photoshop at all.
Hobbyists can and should use pro tools, of course. There should always be a good opening as many next gen professionals come from that route, and bring outside, lateral knowledge to grow that tool in novel ways.
When you focus on lobotomizing a pro tool, that's when you actively lose market share. Affinity, or someone else, just needs one or two banger spotlights and then Adobe will start seeing real problems. Right now the loss is minor, but it's a crack in the wall. Remember Skype? I sure as fuck don't. They played the same fucky fuck game. One situation is all it took.
Yes!
> I don't like subscriptions but that's not the biggest problem. The biggest issue is Adobe's software has been getting worse as the years have passed. It's slow, incredibly buggy, their new features are often an embarrassment, and Adobe seems to do nothing other than increasing prices. And therein lies the issue with subscriptions - the user keeps paying higher prices and the company has zero motivation to fix bugs
I wonder how hard it would be to recreate the core functionality of Adobe Photoshop. Maybe people have different definitions of what the core functionality is, which would make building a replacement very tough.
If open source projects and other companies had gathered around an open file format, maybe there would be some leverage, but they all use their own formats.
Maybe shouldn't have listened to asshat MBAs and overpaid management consultants that infiltrate your boardroom with their "haha number go up" bullshit
You have to be pretty bad at your job to misread the room so terribly. Just taking a casual look at Clearsky's block rankings would show how many lists specifically block and target brands, grifters, fascists, and bigots of various stripes, and would likely dissuade you from approaching the community without some form of battle plan.
Treating BlueSky like a “new Twitter” is a dangerous mistake to make, something Adobe learned the hard way. To make matters worse, they also poisoned the community well to the point there’s a fresh witch hunt out for brands and companies to add to block lists, thus harming everyone else’s “engagement”.
Companies like Adobe and other major tech players have enabled the hostile environment we see growing every day. It’s no wonder that disingenuous posts like this from predatory companies receive such a backlash.
The reality is that Adobe has a large team of engineers to create and maintain several high end professional digital art creation tools. They also frequently add new and excellent features to those tools. That costs money. This money has to come from somewhere.
With the old model Creative Suite 6 Master Collection cost over $2600. They updated that software every two years. The maximum Creative Cloud subscription today costs $1440 for two years. They even have a cheap Photography plan for $20 a month with Photoshop and Lightroom. That’s $480 for two years. Photoshop 6 cost $700+ alone all by itself with no Lightroom.
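As a rough back-of-envelope check on those figures, here's a tiny sketch (assuming one CS6 purchase covers a two-year release cycle, and ignoring inflation, upgrade pricing, and feature differences):

    # Two-year cost comparison using the prices quoted above (assumptions as noted).
    cs6_master_collection = 2600        # one-time purchase, per two-year cycle
    cc_all_apps_monthly = 60            # maximum Creative Cloud plan, per month
    photography_plan_monthly = 20       # Photoshop + Lightroom plan, per month

    print("CS6 Master Collection, two years:", cs6_master_collection)     # 2600
    print("CC All Apps, two years:", cc_all_apps_monthly * 24)            # 1440
    print("Photography plan, two years:", photography_plan_monthly * 24)  # 480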
Why would Adobe allow for much lower prices, even considering inflation? Because they get reliable cash flow. Money keeps coming in regularly. That’s much easier for keeping people employed and paid than a huge cash infusion every other year and a trickle until your next release. It’s just not feasible to sell software that way anymore.
Of course the argument is that with the old model you didn't need to update. You could just pay for CS5 or 6 and use it forever without ever paying again. That's true. And I guess that's viable if you want software that is never updated, never gets new features, and never gets bugfixes and support. I would argue that a user who can get by without updating their tools, and has no use for new features, is not a professional. They can get by with free or cheap competitors, and they should.
Professional digital artists do need and want those updates. They are the kind of people that were buying every version of Creative Suite in the old model. For those users, paying a subscription is a huge improvement. It keeps the updates and bugfixes coming regularly instead of rarely. It funds development of new and powerful features. It keeps Adobe solvent, so the software doesn’t die. It lowers the overall price paid by the user significantly.
Plenty of things we can criticize with Adobe. Bugs they haven’t fixed. Crashy software sometimes. Products they come out with and then give up on. Doing dark patterns and fees to prevent people from unsubscribing. But the subscription model itself is a net positive compared to the old way.
It’s a very good incentive to keep the entire company on their toes. Adobe will have to keep making new features for people to justify paying for a new version, instead of rehashing the same software, and then rent-seek with a subscription.
Also, several of their products face stiff competition. They have to keep pushing Premiere to fend off Davinci and Final Cut.
There's a bit of maintenance even if you just stand still. On the photo side, I notice them updating distortion correction for new lenses that come out, new camera body support, etc -- that's just a few examples of maintaining existing features, separate from the new features they rolled out. Whoever does that has bills to pay, and I think that's just a fact across the industry.
Someone has to get paid to build, maintain, and extend these things, and I don't know if that classifies as rent-seeking.
Switching to a different creative software solution is a much bigger task than just buying the new license and installing the program. You have to relearn basic tasks that are second nature in the other thing, change workflows due to different file formats or you might just not have the option to because the rest of the industry depends on the competitors software. This is true for individual professionals as well as big companies, where switching to a different software package also means dropping efficiency for a while and hiring people to teach your employees your new software. This is a step that no company will ever take and Adobe has recognized that and taken away the only opt-out of paying them assloads forever, which was buying a perpetual license and staying on that version.
> only opt-out of paying them assloads forever, which was buying a perpetual license and staying on that version.
This I struggle with, though: financially there's no real difference between a perpetual license and a subscription once you work out the time value of money, etc. For any arbitrary subscription price, you could make a perpetual license more expensive, or vice versa. Ergo, the complaints here aren't really about the license type; at their root they're simply pricing complaints.
“Monthly pricing for Photoshop 2024 is too high at $x” is fundamentally the same problem (with the same solutions) as “our perpetual license for Photoshop 5.5 is becoming unusable for both technical and HR reasons and the perpetual license (which hypothetically exists) for Photoshop 2024 is too high at $x*500”.
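A minimal sketch of that time-value-of-money point, assuming a made-up 5% annual discount rate and purely hypothetical prices (not Adobe's actual numbers): the present value of a subscription stream can always be matched by some one-time perpetual-license price.

    # Present value of a monthly subscription stream under an assumed discount rate.
    # For some equivalent up-front price, the two are financially interchangeable.
    def present_value(monthly_price, months, annual_rate=0.05):
        r = annual_rate / 12  # monthly discount rate
        return sum(monthly_price / (1 + r) ** m for m in range(1, months + 1))

    # e.g. a hypothetical $60/month paid for five years is worth roughly this
    # much as a single up-front payment today:
    print(round(present_value(60, 60), 2))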
As a "professional" I have zero interest in renting the tools of my trade.
While time goes on, any software toolchain needs maintenance, too. What's the ideal model for sustaining that?
Is renting a problem in principle or financially or something else?
My understanding is that Adobe deliberately made it difficult to cancel. I don't even use Adobe, yet I am well aware of their antics, which shows how far bad behaviour spreads via word of mouth.
Honestly, i wish Adobe would still offer the conventional license, but with an additional hosting option that consumers can *choose* to activate and pay more for, or not...so that, basically:
* I pay a one-time license to use photoshop offline - and for however long i wish (understanding that after its end of life i may not be eligible for security updates, but that's fair)
* Now, for storing of files, i would need to of course store them locally on my machine.
* But, if i *chose* to pay an ongoing subscription, that is when Adobe would host files for me....so i can still use their product offline, and they only charge me for use of online file storage...and i wouldn't mind if there were a premium on that charge, since i get that i would be paying for an ongoing storage service.
That gives me choice, it gives them money (both for licensing and ongoing hosting subscription), and i would figure everyone would be content....
...but, i guess the current world does not work that way, eh? So, i guess i will continue to avoid their products, heading towards alternatives like photopea, Gimp, etc.
Yeah this is why Bluesky will never be a serious and widely used social platform. It's the same sort of cesspool as the right-wing alternatives that popped up a few years back, just more self-righteous.
You can also run Blockenheimer on likes and reposts for any especially toxic anti-AI takes to catch huge chunks of them: https://blockenheimer.click
They really didn’t think this one through.
Seriously, people on Twitter demand I debate them about the validity of my life. That has yet to happen on BlueSky.
Undoubtedly Adobe has legions of users who take a more nuanced view of Adobe and who would also be the type to use Bluesky. And yet.
I'd consider the complaints to be essentially the emotions that come up when people covet unaffordable privately created property.