https://blog.brianbalfour.com/p/the-next-great-distribution-...
The problem comes when there is no way for you to own the distribution, pay nothing to the platform, and still be able to build on top of it. That’s the closed portion we should rally (legislate?) against.
There is an argument, similar to mine on distribution, that there is no inherent right that a platform should be open. That the extra utility that comes from being open should make the platform more competitive in the market vs. closed platforms.
The challenge is that dominant platforms are monopolistic. There is no chance for competitive forces to reward openness.
These two parts of the debate are often conflated, which hides what is truly troubling: dominant platforms controlling both distribution and access.
As 'amelius said below, there used to be more platforms. This matters, because it made for a different balance of power. Especially with retailers - the producers typically had leverage over distributors, not the other way around.
Have to collect them all. :)
Most users aren’t going to manage API keys, know what that even means, or accept the friction.
The problem with the current model is that it's hard to justify asking the user to pay what is essentially a 2nd/3rd subscription for ultimately the same AI intelligence layer. So you cannot currently make an economically successful small-use-case app based on AI without somehow restricting users' use of AI. I don't think AI companies are incentivized to fix this.
The perfect "killer app" for AI would kill most software products and SaaS as we know them. The code doing the useful part would still be there, but stripped off branding, customer funnels and other traps, upsell channels, etc. As a user, I'd be more than happy to see it (at least as long as the AI frontend part was well-developed for power users); obviously, product owners hate this.
Due to technical and social limitations, most apps are also limited in what they can do; this naturally shapes and bounds them and their UIs, forming user-facing software products.
Intelligence of the kind supplied by SOTA LLMs is able to both subsume the UI, by taking much broader context of the user and the use case into account and distilling it down to minimal interaction patterns for a specific user and situation, and blur the boundaries of products, by connecting and chaining them on the fly. This kills the marketing channel (the UI) and trims the organizational structure itself (the product), by turning a large SaaS into a bunch of API endpoints for an AI runtime to call.
Of course, this is the ideal. I doubt it'll materialize, or if it does, that it'll survive for long, because there's half a software industry's worth of middlemen at risk of being cut out, and thus with a reason to fight it.
A capital intensive, low margin business. The dream of every company.
I assume the fall off there will be 99% of users though, the way it works today.
But this theoretically allows multiple applications to plug into ChatGPT/Claude/Gemini and work together.
If someone adds Zillow and… Vanguard, your LLM can call both through MCP and help you plan a home purchase.
Maybe a 'connect with OpenAI' button so the service can charge a fee, while allowing a bring-your-own-token hybrid.
In effect this means user input is easily disbelieved, and the model can accidentally output itself into a state of uncorrectable wrongness. By invoking the image tool, you managed to get your information into the context as “high veracity”.
Note: This info is the result of experimentation, not confirmed by anyone at OpenAI.
I might misunderstand you but it seems like you think there are multiple models with one dispatching to others? I’m not sure what that sort of multi-agent architecture is called, but I think those would be modeled as tool calls (and I do believe that the image related stuff is certainly specialized models).
In any case, I am saying that GPT5 (or whichever) is the one actually refusing the request. It is making that decision, and only updating its behavior after getting higher trust data confirming the user’s words in its context.
I asked the GitHub app to review my repository, and the app told me to click the GitHub icon and select the repository from the menu to grant it access. I did just that and then resent the existing message (which is to be expected from a user). After testing a bit more, from what I understand, the updated setting is applied only to new messages, not to existing ones. The instructions didn't mention that I needed to repeat my question as a separate message again.
Granting access to only a specific repo never works, so I have to allow access to all repos and then manually change it back to the specific repo inside GitHub after connecting.
There have been instances of endless loops after OAuth sign-in; my most recent experience was in Claude Code Web[1].
Poor GitHub folks, if only someone could donate time/money to this struggling small company, these critical issues could be addressed /s
A new UI framework/protocol will come, maybe something over HTML/CSS/JS that works within a chat UI context for such ChatGPT (or other LLM) integrations.
For example, if you have an ecommerce app or website and want to integrate it with ChatGPT, then you will have to develop on the new UI primitives. The primitives might include carousels, lists, tables, media embeds. Crucially, natural language will be used to pick and choose these primitives and combine them in the UI (with ChatGPT deciding how).
Thinking backwards, I want my app to be displayed in ChatGPT with maximum flexibility for the user (meaning components can be rearranged according to context) but also enough constraint that I can keep some control over the layout. That's the problem I think will be solved.
I swear I had made this prediction quite a while back but thanks for pointing it out :D
Mostly, though, because it seems like we’re mere minutes away from having Star Trek style LCARS adaptable GUIs managed by an AI computer system simultaneously so smart it runs mission critical operations yet so dumb we have to remind it that we want our tea “hot” five times a day.
It’s happening. We’re gonna be living in the future!
They really want your ID
I don’t have the ability to pull your personal top songs directly from Spotify because that requires accessing your authenticated listening data. You can view them in Spotify by going to “Your Library” → “Made For You” → “Your Top Songs”.
@Figma design simple hello world poster
I don’t have the ability to create designs directly in Figma, but I can guide you to quickly create a simple “Hello World” poster there.
---
Am I using it wrong?
I wonder if we'll have a situation where, out of two competing organizations, only one elects to use this and the other staunchly opposes it. That will be telling.
Every one I can think of since then gets a bit of initial interest from people hoping to relive the mobile app store days, but interest wanes quickly when they realize that nobody wants to buy fart apps anymore. That ship sailed a long time ago.
And ChatGPT apps are in a worse position, as they don't even have a direct monetization strategy. OpenAI suggests that maybe you can send users to your website to buy products, but they admit that they don't really know what monetization should look like...
Maybe an ad based system coming soon?
Between this description and their guidelines these don't really sound like "apps", but a way to integrate an existing app with ChatGPT sessions.
I'm trying to figure out what's in it for the developer other than ultimately taking users away from ChatGPT. And just like what happened with Alexa skills, these "apps" will become useless when they are unmaintained.
Since then, I’ve seen some very impressive demos and I’m excited to see what developers create on the platform as that’s always the coolest part.
I expect there's a pretty wide divide between what people who write local MCP servers want, versus what people who write cloud webstack MCP webapps want.
Personally, I've been adding local native UI to my MCP servers, but I realize that's probably a losing battle, and if I want to integrate with newer tooling, I'm going to be stuck in web hell.
Between long COVID and ai, nobody will be able to make fizzbuzz in Java, let alone code a frontend by hand.
It may not seem like it now, but that's because a big chunk of the software industry makes money by introducing friction and preventing automation, because the user interface that sits between a person and some outcome they desire makes for a perfect marketing channel.
Adobe Photoshop, AllTrails, Booking.com, Expedia, Instacart, OpenTable, Spotify, Tripadvisor, Airtable, Apple Music, Canva, Figma, Lovable, Replit, Target, Zillow
- Adobe Photoshop, Canva, Figma, Replit, Lovable - are all kinds of Computer Aided creation tools, and once converted into tool calls, can be gradually reproduced and replaced feature by feature.
- The rest, they're just fancy (and user disempowering) wrappers around proprietary databases and/or API calls to humans. Those cannot be trivially reproduced, because code is neither their secret sauce, nor their source of value. But they can still be pressured into becoming tool calls along with competition, and subsequently commoditized.
I've canceled my subscription, and I don't plan on releasing an app on their platform.
To me, that is a tell that they are basically cooked, because having to catch Google on actual model performance is not a position anyone would want to be in, in this horse race.
Interesting times we live in.
Stop with the MBA playbook, he said.
> just make the...
Just make a superior product, he said.
In this early phase, developers can link out from their ChatGPT apps to their own websites or native apps to complete transactions for physical goods. We’re exploring additional monetization options over time, including digital goods, and will share more as we learn from how developers and users build and engage.