I’m genuinely curious where you draw the line. Because in practice, many of us are already surrounded by passive listening systems, even if we don’t actively use them ourselves.
I keep all of that stuff disabled.
> Or when you're visiting friends who have an Alexa device at home. Would that already be a problem for you?
Yes, it is, although I only have one friend who has such a device. I tend not to spend much time at their place. If someone I knew had a wearable that was analyzing/recording everything, and they refused to remove it, I'd minimize the amount of time I'd spend with them.
> I’m genuinely curious where you draw the line.
"Drawing the line" isn't really the way I'd put it. As you say, we're surrounded by surveillance devices and there's little I can do about it. All I do is avoid them wherever it's possible for me to do so and be sparing/cautious about what I say and do when I'm near surveillance that I can't avoid.
Whereas my life is mundane shit that, most of the time, I don't even need the current generation of tech anywhere near. Walking the dog. Playing with and looking after my kids. Everyday conversations and intimacy with my wife. Barbecues with friends. Work.
And these guys' lives are just working out, coding, and cooking on-trend dishes with expensive cookware, all to be relentlessly optimised.
For instance I've never brought my camera to a funeral. Most daily life deserves the right to be forgotten.
Then there are privacy laws, etc.
You're going to capture hours of walking and/or seemingly doing nothing, exchanging pleasantries/small-talk/banter. Without access to my thoughts, this is stuck in some superficial layer -- useless other than to maybe surface a reminder of something trivial that I forgot (and that's not worth it). Life happens in the brain, and you won't have access to that (yet).
Curious though: if there were a way for an AI to understand your thoughts, would that even be something you’d want? Or is the whole concept off-limits for you?
It's an interesting question -- I've thought about it a lot in the context of some hypothetical brain interface. There are a lot of unknowns but I personally would go for it with the very hard constraint that it be the equivalent of read-only (no agents here) and local (no cloud).
As potentially scary as it seems, I would not be able to fight the temptation to participate under those conditions. It would make elusive thought a thing of the past.
If it was networked, it would need to have much tighter security than the current internet.
If it was just a terminal to some corporate server running unknown software for purposes I wouldn't necessarily agree to, nope, nope, nopity-nope. Even if it didn't start off as a device for pushing propaganda and advertising, there's no realistic expectation that it wouldn't evolve into that over time.
Personally I would consider it a moral imperative to refuse to use such a device and to avoid anyone who does otherwise.
So no, please don't create such a thing. Stop now.
That said, I often think about how this tension applies to nearly every new technology. Most tools can be used for good or bad, and history shows that progress tends to happen either way. If we had refused to develop technologies simply because they could be misused, we might not have any at all.
I do believe it’s possible to build responsibly through transparency, local-first design, and strong legal safeguards. The EU’s data protection laws, for example, give me some hope that we’re not entirely defenseless.
Do you see this kind of outcome as something we’re tangibly heading toward, or more as a warning of what could happen if we’re not careful?
BigTech has burned so much goodwill at this point that every new venture just feels like a timer ticking down to a bait and switch for ad revenue, subscriptions, pro features, or just selling our data to the highest bidder.
And what happens to 'local' data when the three-letter agencies want access? No thanks, sounds completely dystopian. If the data is there, someone will find a way to abuse it.
That's why my entire architecture is being designed differently, based on two principles:
- Fully Functional Offline: 3-letter agencies can't access data that isn't on a server in the first place. The core AI runs on-device.
- Open Core: You're right to expect a "bait & switch." That's why the code that guarantees privacy (the OS, the data pipeline) must be open-source, so the trust is verifiable.
My business model is not to sell ads or data. I'm trying to design a system where trust is verifiable through the architecture, not just promised.
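To make "fully functional offline" a bit more concrete, here's a minimal sketch of what the storage side could look like. The paths and names (events.db, store_event) are hypothetical and only illustrate the point that nothing in the capture-to-storage path imports a network client:

```python
# Hypothetical sketch of an offline-first event store: no network module is
# imported anywhere, and all artifacts stay on the device's own filesystem.
import json
import sqlite3
from datetime import datetime, timezone
from pathlib import Path

DB_PATH = Path.home() / ".assistant" / "events.db"  # local storage only


def init_store() -> sqlite3.Connection:
    """Create (or open) the on-device event store."""
    DB_PATH.parent.mkdir(parents=True, exist_ok=True)
    conn = sqlite3.connect(DB_PATH)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS events (ts TEXT, kind TEXT, payload TEXT)"
    )
    return conn


def store_event(conn: sqlite3.Connection, kind: str, payload: dict) -> None:
    """Persist an event locally; there is deliberately no upload path."""
    conn.execute(
        "INSERT INTO events VALUES (?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), kind, json.dumps(payload)),
    )
    conn.commit()


if __name__ == "__main__":
    conn = init_store()
    # In a real system this payload would come from an on-device model,
    # e.g. a local speech-to-text pass over the last audio buffer.
    store_event(conn, "note", {"text": "example transcript snippet"})
```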
Definitely won't trust AI shackled to other humans.
There's one thing AI can't do, and that's actually care about anyone or anything. It's the rough equivalent of a psychopath. It would push you to a psychotic break with reality with its sycophancy just as happily as it would, say, murder you if given motive, means, and opportunity.
It could also help me use my time better. If it knows what I’ve been doing lately, it might give me useful tips.
So overall, more like a coach or assistant for everyday life.
But I think I know what you mean: the human aspect should not be lost in the process. Do you see a way forward that could unite the two, i.e., supportive AI without losing the human touch? How could this be ensured?
If you want the human touch, make unique individual entities which experience life and death. Brb, gotta go play with my cat.
I’m building something that tries to stay on the right side of that line: not replacing human touch, but amplifying it. Curious how you'd draw that boundary.
Now nobody wants to do that anymore; they want to do AI.
Have you read about procrastination / resistance? The issue is not an absence of nagging but unresolved emotions / burnout etc.
But I think for the V1, I'll stick to positive reinforcement. Not 100% sure "aversive conditioning" builds the long-term trust we're aiming for. ;)
Cheers!
Regarding anonymization --> do you mean, what if I pointed the camera at someone else? That would be filtered out.
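For a rough idea of what frame-level filtering could mean, here's a sketch using OpenCV's bundled Haar-cascade face detector; it simply blurs every detected face, so treat it as an illustration of the mechanism rather than the actual design (a real system would also need a way to distinguish consenting users from bystanders):

```python
# Rough sketch of frame-level anonymization with OpenCV's stock face
# detector: any detected face region is blurred before the frame is kept.
import cv2

_face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)


def anonymize_frame(frame):
    """Blur every detected face in a BGR frame and return the result."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = _face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame
```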
I'm no expert at this but that sounds a lot harder to implement than you're implying, especially if it's all locally stored and not checked over by a 3rd party. What's to stop me from just doing it anyway?
I’ve also thought a lot about trust. Would you feel differently if the system were open source, with the critical parts auditable by the community?
I mean, generally speaking, yes, open source, but the issue is that if it's open source then people can easily disable the safeguards with a fork, so idk, I feel mixed on it. I'm still leaning towards yes because in general I am for open source. But I'd have to think about it and hear other people's takes.
http://www.duntemann.com/End14.htm
Elon Musk's portable-Grok-thing is a long step toward the jiminy idea.
Notwithstanding that most mobile OSes are locked down more than some would prefer for a "general-purpose computer" (but less than a porta-Grok likely would be), that most devices are bigger than a matchbook in order to support UI that wouldn't be available in that form factor (though matchbook-sized devices with more limited UI do exist), and that they mostly use RF like Bluetooth instead of IR for peripherals because IR works poorly in most circumstances: isn't that what a smartphone is?
Just curious: what would have to change for you to even consider it? Is it more about the concept itself, or the way it's implemented?
I think that there is some limit to how much additional information is useful to the AI tools that I use. I don’t know where that limit is and I also think that models are getting better all the time, so storing the data now for later use might be useful.
I have no idea how much it would cost to store/analyze 14-18 hours of data a day. I'm assuming it could be post-processed to delete the useless stuff?
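For a rough sense of scale, here's a back-of-envelope calculation under assumed bitrates (16 kHz 16-bit mono audio, ~2 Mbit/s compressed video); the real numbers depend entirely on codecs and resolution:

```python
# Back-of-envelope storage estimate; the constants are assumptions.
HOURS_PER_DAY = 18

audio_bytes_per_sec = 16_000 * 2            # 32 KB/s uncompressed PCM
audio_per_day_gb = audio_bytes_per_sec * 3600 * HOURS_PER_DAY / 1e9

video_bits_per_sec = 2_000_000              # modest compressed 1080p bitrate
video_per_day_gb = video_bits_per_sec / 8 * 3600 * HOURS_PER_DAY / 1e9

print(f"audio: ~{audio_per_day_gb:.1f} GB/day")   # ~2.1 GB/day
print(f"video: ~{video_per_day_gb:.1f} GB/day")   # ~16.2 GB/day
```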
Obviously I understand the privacy zealots' issues with this technology. But I'm going to be dead in a couple of decades and this idea sounds interesting to me. To me, whatever risk there is would be worth the unknown reward.
I've also considered the idea of collecting data now that might be useful later. However, at least within the EU, data storage must be purpose-specific.
That's why I believe everything should be discarded immediately by default (similar to how wake-word detection works), and only the data the user explicitly enables the AI to store should be retained. That already reduces the required storage. As for post-processing: agreed, I can also imagine something like "compress the information of one hour" --> "compress the day" ...
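A sketch of how that two-level compression might look, with summarize() standing in for whatever local model would do the actual work (the names and interface are made up for illustration):

```python
# Hypothetical sketch of the "compress an hour, then compress the day" idea.
from typing import List


def summarize(texts: List[str], max_words: int) -> str:
    """Placeholder: a local summarization model would be called here."""
    joined = " ".join(texts)
    return " ".join(joined.split()[:max_words])


def compress_day(hourly_transcripts: List[List[str]]) -> str:
    # First pass: reduce each hour of retained snippets to a short summary.
    hour_summaries = [summarize(hour, max_words=50) for hour in hourly_transcripts]
    # Second pass: reduce the hour summaries to a single daily digest.
    return summarize(hour_summaries, max_words=100)
```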