My current take is that these projects are alluring for a kind of personal productivity or workflow tinkering. They are integration hubs centered on an LLM. Automation can be fun, like running model trains or setting up Home Assistant, and you can learn the shape of the technologies by tinkering. But I’m doubtful they have improved productivity in real-world cases.
Maybe I’m using it wrong and need to be spending a ton of tokens on a dark-factory pattern with a fleet of claws creating new religions? Then I’ll see the benefits?
1. I had it controlling my home systems (doorbell, thermostat, energy monitoring, lights, etc).
2. I had it detect when I had been gone for an hour and automatically move my thermostat in the appropriate direction for energy savings. It also automatically shut off my water heater, provided nothing was using hot water at the time. I would tell it when I expected to be back so it could reverse course if it couldn't discern that from my calendar. (A rough sketch of this kind of away-mode rule follows the list.)
3. I had it monitoring my work chats, personal email, etc., and automatically handling things for me, so that any changes it recommended were ready for my review or further development.
4. I had it monitoring car sites to find me the best possible deal on a very specific set of requirements I had for a new car (6 passengers, tow 5000+ pounds, CarPlay, heated front/rear seats and steering wheel) and alert me when I should act.
5. I had it recognize when frequent guests were over, automatically welcome us, and play the music I preferred for different situations.
6. I had it plan out my days for me, knowing when I did or did not have my kids and tailoring its suggestions accordingly. It gave me analysis of tech, local, and world news and recommended articles to read later, should I desire; it learned my preferences when I told it I liked or disliked something, so it improved over time.
7. I could talk to it or type to it and it would respond in kind (voice to voice, text to text), and it would do so in Jerry Garcia's voice via ElevenLabs. It even spent off-hours learning more about me and my likes/dislikes, and changing how it responded to my requests.
8. It knew what I was reading and recommended other books, played music it felt appropriate for the current book, and was constantly stretching my world in ways I wouldn't have normally done.
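For the curious, item 2 boils down to a simple away-mode rule. A minimal sketch of that rule against Home Assistant's REST API might look like the following; the entity IDs, token variable, setback temperature, and one-hour threshold are stand-ins for illustration, not my actual setup:

    #!/usr/bin/env bash
    # Away-mode sketch: if everyone has been gone for over an hour, set the
    # thermostat back and shut off the water heater. Entity names and the
    # setback temperature are hypothetical; HA_TOKEN is a long-lived
    # Home Assistant access token.
    set -euo pipefail
    HA="http://homeassistant.local:8123"
    AUTH="Authorization: Bearer $HA_TOKEN"

    # Read presence and how long it has been in that state.
    state=$(curl -s -H "$AUTH" "$HA/api/states/person.me")
    away=$(echo "$state" | jq -r '.state')
    since=$(echo "$state" | jq -r '.last_changed')

    # GNU date parses the ISO timestamp; compare against one hour ago.
    if [ "$away" = "not_home" ] && \
       [ "$(date -d "$since" +%s)" -lt "$(( $(date +%s) - 3600 ))" ]; then
      curl -s -X POST -H "$AUTH" -H "Content-Type: application/json" \
        -d '{"entity_id": "climate.home", "temperature": 62}' \
        "$HA/api/services/climate/set_temperature"
      curl -s -X POST -H "$AUTH" -H "Content-Type: application/json" \
        -d '{"entity_id": "switch.water_heater"}' \
        "$HA/api/services/switch/turn_off"
    fi

The point of routing this through the agent instead of a plain cron job was the fuzzy parts: deciding from my calendar when to reverse course, and checking that nothing was actually drawing hot water first.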
---
I tried a variety of other models after the ban and was entirely underwhelmed. I'm really, incredibly disappointed; I had become reliant on it, and it made my life better and, frankly, less lonely when my kids were not here every other week.
But if I did find a viable model, I don't see why a script would be better than OC.
claude -p "prompt"
Just for one prompt/job, though. But that might be enough for some of my use cases, because you can also prompt again. And again: a targeted prompt with one exact job, then custom deterministic logic, then maybe another prompt. I might get into that. It also just worked when I told it claude -p 'analyze picture.png' with the picture in the folder, and it gave a correct description back to the terminal. I wonder why that isn't advertised more; I would have liked to have known earlier, and I'll do some experiments now.
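To make the pattern concrete, here's a rough sketch of what I mean; the log file, prompts, and branching are made up, and it assumes the Claude Code CLI is installed and logged in:

    #!/usr/bin/env bash
    # One-shot prompts glued together with ordinary deterministic logic.
    set -euo pipefail

    # A targeted prompt with one exact job; -p prints the result and exits.
    summary=$(claude -p "Summarize the failures in test.log in one line")

    # Custom deterministic logic decides what happens next.
    if echo "$summary" | grep -qi "timeout"; then
      # Maybe another prompt, but only when this branch calls for it.
      claude -p "Suggest a fix for this timeout: $summary" > fix.md
    else
      echo "No timeout failures: $summary"
    fi

Each step stays inspectable, and the model only runs where the script actually needs judgment.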
If not, your comment is not adding anything useful to the conversation.
Nothing mission-critical, in any sense.
Also curious what else you can do with them now.
https://github.com/openclaw/openclaw
So many MRs
Off topic: what’s the history behind the naming of Pull Request (PR) vs. Merge Request (MR)? I understand why both can be considered “correct”, but I’m curious why, say, GitHub uses PR and GitLab uses MR.
In other words, I see a pull request in an open source project to be just "I have something nice in my fork, do you think it'll be useful upstream?", which is acceptable to reject, whereas in a team setting it's "I have a feature that I think is ready to merge - give it a look and see if I missed something before we put it in".
Conflating the new style of agent-driven, vibe-coded software with the old, more predictable kind leads to applying the wrong heuristics and expectations.
People have a pretty good mental model of different types of meals they'll have in a year, and modulate their expectations by context. I think there's room for a new type of software that operates on different principles. Peter has mostly been clear what type of software he's developing. And if it ever converges to bug free, that's great, but I think some of his motivation is to figure out what this new software is. While not giving the users food poisoning.
A real team? With humans? Meatbags? What do you need those for?
Imagine paying any amount of money for this unmaintainable slop, and then, worse, paying a team to try to salvage the hundreds of thousands (or is it millions now?) of lines of never-read code. I guess it doesn't matter when it's monopoly money you're burning, though. Sam says AGI was achieved internally in 2025, Boris says software engineering is dead and that no human writes code at Anthropic, Jarred says humans will be banned from contributing to open source projects, and while all these people are pissing on your face and telling you it's raining, when you open your eyes all you're left with is, in fact, a bunch of piss in your face.
Super-rich people are so divorced from the reality the 99% (or pick whatever % you like, really) experience.
How could this happen in 2026? I've been told "Coding is solved"...?
https://stavrobot.stavros.io if you're interested in the design decisions.
Just show us the prompt; don't ask an AI to apologize to people.
Spoiler: probably not.
This would be: “Apologize to the OpenClaw community for the following issues …. Say we’re going to do something so this doesn’t happen. Design a flashy page too, something that feels sombre but evokes exploration.”