It simplifies the product, reducing the number of hurdles the user has to jump through. "Hmm, which gpt should I use for this task?" That should be OpenAI's problem, not mine!
Custom GPTs are not abandoned; they see heavy usage, and choosing one is not a dichotomy between the user's problem and OpenAI's problem. Custom GPTs exist so users can benefit from custom prompts, which are highly relevant. Unless you're asserting that custom prompts are useless, which would be an absurd assertion to make, you can't assert that Custom GPTs are useless. And no, this is not something OpenAI is going to select for you, because the customization is a personal one.
The two features, Custom GPTs and Projects, are orthogonal: a Project is for related explorations of a theme, whereas a Custom GPT is for unrelated explorations that share a theme.
> chat history
What chat history? Each chat is in the user's history by default, which is how it's supposed to work for Custom GPTs. I don't need a filtered chat history for a Custom GPT like I do for a Project.
> It really seems like they stopped working on custom GPTs and just expect users to use projects instead.
That's more a personal belief than a conclusion; it's not even a formal declaration by OpenAI.
I turned off artifacts months ago because it would:
- frequently update code incorrectly / bad edit diff
- act like it updated / created an artifact when it just did nothing
- slowly / painfully delete every single line one by one before rewriting
- use artifacts for things that shouldn't have had any code written at all
It just wasn't worth the value it provided. This was before Claude Code.
I think this one is a visual artifact: it always rewrites artifacts from scratch internally, but the UI tries to make it look like it isn't doing that. The result is that it looks like it's deleting lines from inside the text instead of just rewriting it.
That said, I’ve also experienced all of your complaints, except the bad edit one (but I see that all the time in Claude Code/Aider).
---
Erroring in that way specifically makes me think the "deleting every line" issue isn't just a visual artifact but ¯\_(ツ)_/¯
I find it useless most of the time and wish I could disable it.
I had an [agent evolution framework](danieltan.weblog.lol/2025/06/agent-lineage-evolution-a-novel-framework-for-managing-llm-agent-degradation) that previously dumped the output analysis into chat. It often timed out before the 10th conversation. After dumping the analysis into an artifact instead, and having the LLM edit it only as required, I can go 15 or more rounds without hitting the context limit (see the sketch below). While it seems to re-output the entire artifact each time, it doesn't actually consume the tokens for the entire artifact.
This also greatly reduces the tendency toward HALO-style rampancy, or AI psychosis, which is also what the recent paper on context rot/poisoning (https://research.trychroma.com/context-rot) is about.
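To make the pattern concrete, here's a minimal sketch of keeping one editable "artifact" document rather than appending every round's analysis to the chat transcript. `call_llm` and the round structure are assumptions, not the framework's actual code; the point is that each request carries only the current document, not the accumulated history.

```python
# Sketch of the "edit the artifact, don't re-dump the analysis" pattern.
# `call_llm` is a hypothetical stand-in for whatever chat-completion client you use.

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with your provider's client."""
    raise NotImplementedError


def run_rounds(task: str, rounds: int) -> str:
    analysis = ""  # the single, persistent "artifact"
    for i in range(rounds):
        # Send only the current document plus the round instruction,
        # instead of the full conversation so far.
        prompt = (
            f"Task: {task}\n\n"
            f"Current analysis document:\n{analysis or '(empty)'}\n\n"
            f"Round {i + 1}: revise the document in place. "
            "Return the full updated document and nothing else."
        )
        analysis = call_llm(prompt)  # context grows with the doc, not the transcript
    return analysis
```

The context per round is bounded by the size of the document rather than by the number of rounds, which is why more rounds fit before hitting the limit.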
My guess is it was always meant as a marketing scheme: having others promote OpenAI for free, while at the same time letting them see which topics got traction and which didn't, providing potentially valuable business development clues.
I think both of these turned out not to be as valuable as they hoped.
If they were fine-tuned models then it would be interesting.
I'm guessing the OP is looking at this through the lens of mom-and-pop users on the chat interface rather than the broader product line of these companies with all their dev tooling.
Here are all his posts tagged with claude-artifacts: https://simonwillison.net/tags/claude-artifacts/
[0] I ended up making it a browser extension instead: https://mattsayar.com/simple-wikiclaudia/
I like the use case for mini design exploration tools