12 points by vednig 6 hours ago | 5 comments
  • jeroenhd 4 hours ago
    An interesting take on the AI model. I'm not sure what their business model is like, as collecting training data is the one thing that free AI users "pay" in return for services, but at least this chat model seems honest.

    Using remote attestation in the browser to attest the server rather than the client is refreshing.

    Using passkeys to encrypt data does limit browser/hardware combinations, though. My Firefox+Bitwarden setup doesn't work with this, unfortunately. Firefox on Android also seems to be broken, but Chrome on Android works well at least.
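For readers unfamiliar with the mechanism being discussed: the WebAuthn PRF extension lets a page obtain a pseudo-random secret tied to a passkey and a caller-chosen salt, which the app can stretch into an encryption key. The sketch below shows one plausible way the key-derivation step could look, using a minimal HKDF (RFC 5869) over Python's stdlib. The salt, info label, and function names are illustrative assumptions, not Confer's actual scheme; in the browser the `prf_output` would come from `navigator.credentials.get()`, stubbed here with random bytes.

```python
import hashlib
import hmac
import os

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869): HMAC-based extract-then-expand."""
    # Extract: mix the input keying material with the salt into a PRK.
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    # Expand: iterate HMAC over the info label until enough output exists.
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Stand-in for the 32-byte secret the WebAuthn PRF extension would return
# for this passkey + salt (only available on supporting browser/authenticator
# combinations, per the error message quoted elsewhere in the thread):
prf_output = os.urandom(32)

# Derive a stable key for wrapping the user's encrypted data
# (salt and info strings are hypothetical labels for this sketch):
wrapping_key = hkdf_sha256(prf_output, salt=b"demo-salt",
                           info=b"data-encryption-key-v1")
assert len(wrapping_key) == 32
```

Because the PRF output is deterministic for a given passkey and salt, the same wrapping key can be re-derived on each login without the server ever seeing it, which is what makes the scheme depend so tightly on browser/authenticator support.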

  • datadrivenangel 4 hours ago
    Got a fun error message on Debian 13 with Firefox v140:

    "This application requires passkey with PRF extension support for secure encryption key storage. Your browser or device doesn't support these advanced features. Please use Chrome 116+, Firefox 139+, or Edge 141+ on a device with platform authentication (Face ID, Touch ID, Windows Hello, etc.)."

  • f_allwein 5 hours ago
    Interesting! I wonder a) how much of an issue this addresses, i.e. how worried are people about privacy when they use other LLMs? and b) how much of a disadvantage it is for Confer not to be able to read or train on user data.
  • AdmiralAsshat 6 hours ago
    Well, if anyone could do it properly, Moxie certainly has the track record.
  • JohnFen 5 hours ago
    Unless I misunderstand, this doesn't seem to address what I consider to be the largest privacy risk: the information you're providing to the LLM itself. Is there even a solution to that problem?

    I mean, e2ee is great and welcome, of course. That's a wonderful thing. But I need more.

    • roughly 2 hours ago
      Looks like Confer is hosting its own inference: https://confer.to/blog/2026/01/private-inference/

      > LLMs are fundamentally stateless—input in, output out—which makes them ideal for this environment. For Confer, we run inference inside a confidential VM. Your prompts are encrypted from your device directly into the TEE using Noise Pipes, processed there, and responses are encrypted back. The host never sees plaintext.

      I don’t know what model they’re using, but it looks like everything should be staying on their servers, not going back to, e.g., OpenAI or Anthropic.