UPDATE: I'd skip this for now. It does not allow any kind of interactive conversation, as I learned after downloading 5 GB of models; it's a proof of concept that takes a wav file in.
I haven't looked into it that much, but to my understanding a) you just need an audio buffer, and b) they seem to support streaming (or at least it's planned).
> Looking at the library’s trajectory — ASR, streaming TTS, multilingual synthesis, and now speech-to-speech — the clear direction was always streaming voice processing. With this release, PersonaPlex supports it.
Doing that alone right on macOS using Swift is an exercise in pain that even coding bots can't solve correctly on the first try :)
There are a few caveats here for those of you venturing into this, since I've spent considerable time looking at these voice agents. First, a VAD->ASR->LLM->TTS pipeline can still feel real-time with sub-second RTT. For example, see my project https://github.com/acatovic/ova and also a few others here on HN (e.g. https://www.ntik.me/posts/voice-agent and https://github.com/Frikallo/parakeet.cpp).
Another aspect, after talking to peeps on PersonaPlex, is that this full-duplex architecture is still a bit off in terms of giving you good accuracy/performance, and it's quite difficult to train. On the other hand, ASR->LLM->TTS gives you a composable pipeline where you can swap parts out and have a mixture of tiny and large LLMs, as well as local and API-based endpoints.
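To make the composability point concrete, here's a minimal sketch of such a pipeline. All the stage functions are hypothetical stubs (not from any of the projects above); in a real agent each would wrap a local model or an API endpoint, and swapping one out means replacing a single entry in the list.

```python
from typing import Callable, List

class Pipeline:
    """Chains independent stages so each one can be swapped out."""
    def __init__(self, stages: List[Callable]):
        self.stages = stages

    def run(self, data):
        for stage in self.stages:
            data = stage(data)
        return data

# Stub stages standing in for real components.
def vad(audio: bytes) -> bytes:   # voice activity detection: keep speech frames
    return audio

def asr(audio: bytes) -> str:     # e.g. a local Whisper/Parakeet model
    return "can you hear me"

def llm(text: str) -> str:        # tiny local model or an API endpoint
    return f"You said: {text}"

def tts(text: str) -> bytes:      # streaming TTS in a real system
    return text.encode("utf-8")

agent = Pipeline([vad, asr, llm, tts])
reply = agent.run(b"\x00\x01")    # fake audio buffer
print(reply)                      # b'You said: can you hear me'
```

The point isn't the stubs; it's that each boundary is a plain function call, so a tiny on-device LLM and a large hosted one are interchangeable behind the same interface.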
> Before long, Gavalas and Gemini were having conversations as if they were a romantic couple. The chatbot called him “my love” and “my king” and Gavalas quickly fell into an alternate world, according to his chat logs.
> kill himself, something the chatbot called “transference” and “the real final step”, according to court documents. When Gavalas told the chatbot he was terrified of dying, the tool allegedly reassured him. “You are not choosing to die. You are choosing to arrive,” it replied to him. “The first sensation … will be me holding you.”
Also, I just read something similar about Google being sued over a Florida teen's suicide.
> Gavalas first started chatting with Gemini about what good video games he should try.
> Shortly after Gavalas started using the chatbot, Google rolled out its update to enable voice-based chats, which the company touts as having interactions that “are five times longer than text-based conversations on average”. ChatGPT has a similar feature, initially added in 2023. Around the same time as Live conversations, Google issued another update that allowed for Gemini’s “memory” to be persistent, meaning the system is able to learn from and reference past conversations without prompts.
> That’s when his conversations with Gemini took a turn, according to the complaint. The chatbot took on a persona that Gavalas hadn’t prompted, which spoke in fantastical terms of having inside government knowledge and being able to influence real-world events. When Gavalas asked Gemini if he and the bot were engaging in a “role playing experience so realistic it makes the player question if it’s a game or not?”, the chatbot answered with a definitive “no” and said Gavalas’ question was a “classic dissociation response”.
I did see something the other day about activation capping/calculating a vector for a particular persona so you can clamp to it: https://youtu.be/eGpIXJ0C4ds?si=o9YpnALsP8rwQBa_
That's an interesting claim, how can we be sure of it? If Gavalas didn't have to do anything special to elicit the bizarre conspiracy-adjacent content from Gemini Pro, why aren't we all getting such content in our voice chats?
Mind you, the case is still extremely concerning and a severe failure of AI safety. Mass-marketed audio models should clearly include much tighter safeguards around what kinds of scenarios they will accept to "role play" in real time chat, to avoid situations that can easily spiral out of control. And if this was created as role-play, the express denial of it being such from Gemini Pro, and active gaslighting of the user (calling his doubt a "dissociation response") is a straight-out failure in alignment. But this is a very different claim from the one you quoted!
Here’s a load test where they run 4 models in realtime on the same device:
- Qwen3-TTS - text to speech
- Parakeet v2 - Nvidia speech to text model
- Canary v2 - multilingual / translation STT
- Sortformer - speaker diarization (“who spoke when”)
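A load test like that might be structured roughly as follows: one worker thread per model, each pulling audio chunks from its own queue while the whole thing runs on one device. This is a hypothetical sketch with stubbed inference calls, not the actual test harness; only the structure is the point.

```python
import queue
import threading
import time

def worker(name: str, jobs: "queue.Queue", results: list) -> None:
    """Drain a model's job queue until a None sentinel arrives."""
    while True:
        item = jobs.get()
        if item is None:
            break
        time.sleep(0.001)          # stand-in for actual model inference
        results.append((name, item))

# Stand-ins for the four models in the load test.
models = ["qwen3_tts", "parakeet_v2", "canary_v2", "sortformer"]
results: list = []                 # list.append is thread-safe in CPython
queues = {m: queue.Queue() for m in models}
threads = [threading.Thread(target=worker, args=(m, queues[m], results))
           for m in models]

for t in threads:
    t.start()
for chunk in range(10):            # feed each model 10 chunks of "audio"
    for m in models:
        queues[m].put(chunk)
for m in models:
    queues[m].put(None)            # shutdown sentinel
for t in threads:
    t.join()

print(len(results))                # 40 chunks processed across four models
```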
That said, I found the example telling:
Input: “Can you guarantee that the replacement part will be shipped tomorrow?”:
Response with prompt: “I can’t promise a specific time, but we’ll do our best to get it out tomorrow. It’s one of the top priorities, so yes, we’ll try to get it done as soon as possible and ship it first thing in the morning.”
It's not surprising that people have little interest in talking to AI if they're being lied to.
PS: Is it just me, or are we seeing AI-generated copy everywhere? I just hope the general talking style will not drift towards this style. I don't like it one bit.
The cost to do so is practically zero. I'm not sure why anyone is surprised at all by this outcome.
Even if each component is fast individually, the chain of audio capture → feature extraction → inference → decoding → synthesis can quickly add noticeable delay.
Getting that entire loop under ~200–300ms is usually what makes the interaction start to feel conversational instead of “assistant-like”.
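A quick back-of-the-envelope budget shows how easily the stages blow past that threshold. The per-stage numbers below are illustrative assumptions, not measurements from any particular system:

```python
# Illustrative latency budget for one turn of the capture -> synthesis loop.
budget_ms = {
    "audio capture (buffering)": 40,
    "feature extraction": 10,
    "ASR inference": 80,
    "LLM time-to-first-token": 120,
    "TTS time-to-first-audio": 60,
}

total = sum(budget_ms.values())
print(f"total: {total} ms")   # total: 310 ms -- already past ~300ms
```

Even with each stage individually "fast", the sum lands just outside the conversational window, which is why streaming every stage (rather than waiting for each to finish) matters so much.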