Submitted a PR to prevent installation on macOS versions older than Tahoe (26), since I was able to install it on my older macOS 15, but it aborted on execution.
You can trip the service's ToS enforcement or, worse, get reported to law enforcement for something you didn't even write.
So the full solution would be models trained in an open, verifiable way and running locally.
cryptographic confirmation of zero knowledge: yes.
the latter, based on trust in the hardware manufacturer and their root CA. so, encrypted if you trust intel/nvidia to sign it.
there are a few services: phala, tinfoil, near ai; redpill is an aggregator of those.
anthropic, google, openai, etc. decided that their consumer ai plans would not be private: partly to collect training data, partly to let moderators review user activity for safety.
we trust that human moderators will not review and flag our icloud docs, onedrive or gmail, or aggregate such documents into training data for llms. it became the norm that an llm is somehow not private. it became a norm that you can't opt out of training, even on paid plans (see meta and google); or if you can opt out of training, you can't opt out of moderation.
cloud models with a zero-retention privacy policy are private enough for almost everyone. the subscriptions, google search, and ai search engines are either 'buying' your digital life or covering themselves for legal reasons.
you can and should have private cloud services, and if a legal agreement is not enough, cryptographic attestation is already used in compute, with AWS Nitro Enclaves and other providers.
I personally think everyone should default to using local resources. Cloud resources should only be used for expansion and be relatively bursty rather than the default.
I saw a service named Phala, which claims to be genuinely zero-knowledge on the server side (I think). It was significantly more expensive, but interesting to see it's out there. My thought was that escaping the data-collection-hungry consumer models was a big win.
As an enthusiastic reader of books like Privacy is Power and Surveillance Capitalism, it feels good to have a private tool that is ready at hand.
if you are happy with off-prem, then a cloud llm is ok too; if you need on-prem, that's when you will need local.
The private thing is the prompt.
But also, a local LLM opens up the possibility of agentic workflows that don't have to touch the Internet.
With the Claude bug, as it's being called, burning through tokens at record speed, I gave alternative models a try and they're mostly ... interchangeable. I don't know how easy switching, low brand loyalty, and fast-moving markets will play out. I hope that local LLMs become very viable very soon.
Some such projects use CORS to allow read back as well. I haven’t read Apfel’s code yet, but I’m registering the experiment before performing it.
This is partially in response to https://localmess.github.io/ where Meta and Yandex pixel JS in websites would ping a localhost server run by their Android apps as a workaround to third-party cookie limits.
Chrome 142 launched a permission dialog: https://developer.chrome.com/blog/local-network-access
Edge 140 followed suit: https://support.microsoft.com/en-us/topic/control-a-website-...
And Firefox is in progress as well, though I couldn't find a clear announcement about rollout status: https://fosdem.org/2026/schedule/event/QCSKWL-firefox-local-...
So things are getting better! But there was a scarily long time where a rogue JS script could try to blindly poke at localhost servers with crafty payloads, hoping to find a common vulnerability and gain RCE or trigger exfiltration of data via other channels. I wouldn't be surprised if this had been used in the wild.
The default scenario should be secure. If the local server sends permissive CORS headers, bets may be off. I would need to check, but https->http may be a blocker too even in that case, unless the attacking site is itself http.
The task is basically predicting pricing and costs.
Apple’s model came out on top—best accuracy in 6 out of 10 cases in the backtest. That surprised me.
It also looks like it might be fast enough to take over the whole job. If I ran this on Sonnet, we’re talking thousands per month. With DeepSeek, it’s more like hundreds.
So far, the other local models I’ve tried on my 64GB M4 Max Studio haven’t been viable - either far too slow or not accurate enough. That said, I haven’t tested a huge range yet.
Unfortunately, I found the small context window makes the utility pretty limited.
Then save the heavy lifting for the big boys.
This doesn't feel truthful; it sounds like this tool is a hack that unlocks something. If I understand it correctly, it's using the same FoundationModels framework that powers Apple Intelligence, but exposed as a CLI and an OpenAI-compatible REST endpoint. Which is fine, the marketing just goes a bit hard.
> Runs on Neural Engine
Also unsure if this runs on the ANE; when I tried Apple Intelligence, I saw that it ran on the GPU (Metal).
> Also unsure…
Thank you for sharing your feelings and uncertainty.
Perhaps resist the urge to post until you have something to contribute.
https://github.com/ehamiter/afm
It's really handy for quick things like "what's the capital of country x" but for coding, I feel that it is severely limited. With such a small context it's (currently) not great for complicated things.
The mic button requires a click to transcribe and start listening again, and the default voice is low-quality (I assume it can be configured).
In general I'm looking for a way to try the on-device hands-free voice mode.
apfel -o json "Translate to German: apple" | jq .content
Already in Chrome as an origin trial: https://developer.chrome.com/docs/ai/prompt-api
apple does have an on-device rag pipeline, called the semantic index, that feeds personal data like contacts, emails, calendar, and photos into the model context, but this is only available to apple's own first-party features like siri and system summaries.
it is not exposed through the foundationmodels api.
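
for reference, a minimal sketch of what the public api does give you, as i read the docs (the prompt and the error choice are just illustration):

    import Foundation
    import FoundationModels

    // Sketch of the public FoundationModels surface (macOS 26+).
    // The session only sees the prompt you pass in; there is no
    // parameter for reaching the semantic index (contacts, mail,
    // calendar, photos).
    func ask() async throws -> String {
        guard case .available = SystemLanguageModel.default.availability else {
            throw CocoaError(.featureUnsupported)  // model missing or disabled
        }
        let session = LanguageModelSession()
        let response = try await session.respond(to: "Translate to German: apple")
        return response.content
    }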
Hard to know what to do with this. I'm interested in the project and know others who would be, but I feel like shit after being slopped on by a landing page and I don't wish to slop on my friends by sharing it with them. I suppose the github link is indeed significantly better; I'll share that.
Can you share a working example?
tried to run openclaw with it in ultra token-saving mode; it totally did not work.
great for shell scripts though (my major use case now)
dyld[71398]: Library not loaded: /System/Library/Frameworks/FoundationModels.framework/Versions/A/FoundationModels
Referenced from: <32818E2F-CB45-3506-A35B-AAF8BDDFFFCE> /opt/homebrew/Cellar/apfel/0.6.25/bin/apfel (built for macOS 26.0 which is newer than running OS)
Reason: tried: '/System/Library/Frameworks/FoundationModels.framework/Versions/A/FoundationModels' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/System/Library/Frameworks/FoundationModels.framework/Versions/A/FoundationModels' (no such file), '/System/Library/Frameworks/FoundationModels.framework/Versions/A/FoundationModels' (no such file, not in dyld cache)

Imagine they baked Qwen 3.5 level stuff into the OS. Wow that'd be cool.
https://www.linkedin.com/posts/nathangathright_marco-arment-...
parsing logfiles line by line, sure
parsing a whole logfile, well, it must be tiny, and logfiles hardly ever are
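
a hedged sketch of the line-by-line version via FoundationModels (the function name and chunk size are my own illustration); a fresh session per chunk keeps each request inside the small context window:

    import FoundationModels

    // Sketch: push a big log through the small on-device model in
    // fixed-size chunks, one fresh session per chunk, so no single
    // request outgrows the limited context window.
    func summarize(logLines: [String], chunkSize: Int = 50) async throws -> [String] {
        var summaries: [String] = []
        for start in stride(from: 0, to: logLines.count, by: chunkSize) {
            let slice = logLines[start..<min(start + chunkSize, logLines.count)]
            let session = LanguageModelSession()  // fresh context per chunk
            let prompt = "List any errors in these log lines:\n" + slice.joined(separator: "\n")
            summaries.append(try await session.respond(to: prompt).content)
        }
        return summaries
    }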
If all LLMs did this, people would trust them more.
https://developer.apple.com/documentation/Updates/Foundation...
They released an official Python SDK in March 2026:
It’s a nice LLM because it seems fairly decent, loads instantly, and uses the Neural Engine. The GPU is faster, but when I run bigger LLMs on the GPU the normally very cool M-series Mac becomes a lap roaster.
It’s a small LLM though. Seems decent, but it’s also been safety-trained to a somewhat comical degree. It will balk on safety grounds at requests that are in fact quite banal.
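
For what it's worth, the balking at least surfaces as a typed error rather than refusal prose, so a script can detect it and fall back. A rough sketch, assuming my reading of the error API is right:

    import FoundationModels

    // Sketch: guardrail refusals throw LanguageModelSession.GenerationError
    // (e.g. a guardrailViolation case), so a caller can tell "the model
    // balked" apart from an ordinary failure.
    func askOrFallBack(_ prompt: String) async -> String? {
        let session = LanguageModelSession()
        do {
            return try await session.respond(to: prompt).content
        } catch let error as LanguageModelSession.GenerationError {
            print("on-device model refused or failed: \(error)")
            return nil
        } catch {
            print("unexpected error: \(error)")
            return nil
        }
    }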
> $0 cost
No kidding.
Why not just link the GitHub repo: https://github.com/Arthur-Ficial/apfel
So you have to put up with the low-contrast, buggy UI to use that.