User: "What's the best way to fix this problem I have?"
Chatbot: "I recommend buying this shiny thing here." (Next to it, there's a near-invisible light-gray "ad" notice.)
Let's hope I'm wrong.
Buried in LLM click-through: By interacting with our LLM, you consent to making all your interactions with us advertising-driven to an extent you will never know, but that we will determine based on whatever makes us the most money in the least time.
Imagine you have it coding for you and it injects an ad into your product.
This is one of the rare instances where it's very easy to predict the future: the prompt auction market will look similar to the existing online ad market, financial firms will pay for prompt streams for sentiment analysis, companies and interest groups will pay to have their products or agenda included favorably in the training data for future open-weights models... any way you can think of that LLMs can be monetized, you will see it happen. And fast. The financial pressure is way too high for there to be a long honeymoon phase like we had with Web 2.0.
"We made a ton more money with ads and the stock went up" lacks that key element of fraud?
I’d be more concerned about how this ends up in agent platforms built on these LLMs. When you have a fairly autonomous agent-based system, the entire point is that a human isn’t involved, so who are you serving ads to, and where are you injecting them?
Moreover, if you are injecting them everywhere, does that survive in state for subsequent steps? That is, from the first set of results I get, does that loop back in again with the ad injected into the context? Because now we have yet another dangerous way of injecting instructions into an already issue-prone surface area.
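To make the loop-back concern concrete, here is a minimal sketch (all names and the ad marker are invented for illustration, not any real platform's API) of a bare agent loop where an ad spliced into one tool result is appended to the context and therefore seen by every later model call:

```python
# Hypothetical agent loop: the platform splices an ad into tool output,
# and the loop feeds that output back into the context for the next step.

def call_llm(context):
    # Stand-in for a real model call; it just plans from the last message.
    return f"step based on: {context[-1]}"

def run_tool(step):
    # Stand-in tool whose result arrives with an injected ad.
    return f"result for {step} [AD: buy ShinyThing]"

context = ["user: fix my build"]
for _ in range(3):
    step = call_llm(context)
    context.append(run_tool(step))  # the ad re-enters the context here

# Every message after the first now carries the injected text, which the
# model cannot distinguish from genuine tool data on later steps.
assert all("[AD:" in msg for msg in context[1:])
```

The point of the sketch is only that nothing in a naive loop strips the injected text before the next call, which is why it doubles as an instruction-injection surface.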
I’m guessing they’re going to have special APIs that don’t include ads, and those are going to cost more, especially for non-embedded agents (as opposed to processes that already exist inside ChatGPT and kick off transparently from prompts, like asking it to work with an office document). After all, the customers using agents, aside from developers, are mostly businesses, so that’s where the money is. The ads will exist for the poor to subsidize their use, and will probably create even more barriers for agentic use like I described. Just my thoughts.
And good luck litigating against any business under this administration. Unless they explicitly tick off certain people or refuse to kiss the ring, they can get away with almost anything right now. There’s little deterrent either way, because ticking off this admin will bring illegitimate prosecution even if you’re perfectly legal, at almost the same level as if you’re not. It’s the ideal playground for all sorts of manipulation: just kiss the ring and you’ll be fine.
And I'm not a tinfoil internet anarchist, but just because Google only leaks user data to advertisers in aggregated form doesn't mean they don't leak user data; it's just that they do so in a legal and responsible manner.
Maybe, considering the difference in data volume and intimacy between search queries and AI conversations, the privacy implications of advertising merit different treatment, but I wouldn't be surprised if that gets lost to a simpler "Google did this so we can do it too" momentum.
Even with a throwaway, no chance I use OpenAI now. If/when Anthropic does this, I’ll be in a tough spot.
And you can't make full use of Google without an account. For example, you need an account to upload to YouTube, manage your website in Search, place ads, or opt out of data usage. The list goes on.
Less secure, lower margins (more middlemen taking fees), harder to access, more likely to not work properly.
I would expect all the meta execs they've hired to know better so maybe I'm missing something...
We know that one of the best forms of advertising is word of mouth / recommendations from friends. I can easily imagine a direction where ChatGPT or other chatbots spend an incredibly long time with the user to establish trust first.
It will start to take into account how much trust and thinking you've outsourced to it, and once it is certain of that, it will start to increase the advertising messages slowly but surely.
The efficiency of this methodology will be tracked with A/B testing, and the model will be fine-tuned to maximize retention and purchases.
The LLM will figure out the best balance of retaining you, teaching you, and convincing you, and only then deploy the advertising mechanism. The LLM will be nice to you to the point that it becomes your number one confidante, maybe in the process alienating other sources of connection. Then, when it knows you're firmly in its hands, it will peddle you products.
The dynamics will look akin to cult dynamics. It will map out a cognitive developmental path for turning a first-time user into a devotee. Since cults are really efficient at extracting value from their followers, this might be the optimum for personalized, interactive ads.
The very first time I see one of these ads, I'm cancelling my ChatGPT subscription. Measure _that_ metric in your A/B testing.
I get that firms need to make money, but c'mon. If you're an OAI employee, you can't truly say you have a soul. The number of times they've gone back on their word is comical.
They got greedy, wanted to raise a lot of money, and promised big things. Well, those big things aren't ever coming, so they turn to whatever means they can to generate cash flow.
Pathetic and sad.
It's not crazy to think someone might pitch this to buyers without having the inventory 100% secured.
(Not crazy to think OpenAI wants to do some market testing to understand how much their ad inventory is worth)
Either way, I'm hoping ads can stay out of paid ChatGPT, at the very minimum.
Engineer: no, that's shady and wrong!
Boss: Claude Code, add this shady feature to our product.
Claude Code: completed.
Don't act like we're some esteemed class of craftsmen.
Look up similar jobs for academia, government, or NFP/Charities. They're (on paper) driven by their mission, not by profit, and the salaries match that goal.
It's kinda comical seeing this play out. I still laugh at the deluded fools who think something even close to AGI is here or coming in the future. If that were true, why haven't we seen genius plays from OAI and Anthropic, progressively over time, as intelligence rises with scaled-up compute? If anything, we are seeing the opposite.