52 points by lcolucci 5 hours ago | 21 comments
  • pickleballcourt 2 hours ago
    One thing I've learned from movie production is that what actually separates professional from amateur quality is the audio itself. Have you thought about implementing PersonaPlex from NVIDIA or other voice models that can both talk and listen at the same time?

    Currently the conversation still feels too STT-LLM-TTS, which I think a lot of voice agents suffer from (it seems like only Sesame and NVIDIA have nailed natural conversation flow so far). Still, crazy good work training your own diffusion models. I remember taking a look at the latest diffusion literature and was blown away by the advances in the last year or so since the U-Net architecture days.

    EDIT: I see that the primary focus is on video generation, not audio.

    • lcolucci 2 hours ago
      This is a good point on audio. Our main priority so far has been reducing latency. In service of that, we were deep in the process of integrating Hume's two-way S2S voice model instead of ElevenLabs. But then we realized that ElevenLabs had made their STT-LLM-TTS pipeline way faster in the past month, so we left it at that. See our measurements here (they're super interesting): https://docs.google.com/presentation/d/18kq2JKAsSahJ6yn5IJ9g...

      But, to your point, there are many benefits of two-way S2S voice beyond just speed.

      Using our LiveKit integration you can use LemonSlice with any voice provider you like. The current S2S providers LiveKit offers include OpenAI, Gemini, and Grok, and I'm sure they'll add PersonaPlex soon.
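
      For concreteness, here's a minimal sketch of what the LiveKit side might look like. The avatar wiring is an assumption for illustration; check the LemonSlice docs for the real plugin name and API:

        # Minimal LiveKit Agents sketch: swap in any S2S provider.
        # The LemonSlice avatar hookup noted below is hypothetical.
        from livekit import agents
        from livekit.agents import Agent, AgentSession
        from livekit.plugins import openai  # or google, etc.

        async def entrypoint(ctx: agents.JobContext):
            session = AgentSession(
                # Two-way S2S model; Gemini or Grok would slot in here too
                llm=openai.realtime.RealtimeModel(),
            )
            await session.start(
                room=ctx.room,
                agent=Agent(instructions="You are a friendly video avatar."),
                # Hypothetical: LemonSlice attaches here as the avatar layer,
                # rendering video of the agent speaking.
            )

        if __name__ == "__main__":
            agents.cli.run_app(agents.WorkerOptions(entrypoint_fnc=entrypoint))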

      • pickleballcourt an hour ago
        Thanks for sharing! Makes sense to go with latency first.
  • convivialdingo 3 hours ago
    That's super impressive! Definitely one of the best conversational agents I've tried in terms of A/V sync and response times.

    The text processing is running Qwen / Alibaba?

    • lcolucci 3 hours ago
      Qwen is the default, but you can pick any LLM in the web app (though not in the HN playground).
    • sid-the-kid 3 hours ago
      Thank you! Yes, right now we are using Qwen for the LLM. They also released a TTS model that is supposed to be super fast, but we have not tried it yet.
  • skandan 4 hours ago
    Wow, this team is non-stop!!! Wild that this small crew is dropping hit after hit. Is there an open Polymarket on who acquires them?
    • lcolucci 4 hours ago
      haha thank you so much! The team is incredible - small but mighty
  • r0fl 4 hours ago
    Pricing is confusing

    Video Agents: unlimited agents, up to 3 concurrent calls. Creative Studio: 1-min-long videos, up to 3 concurrent generations.

    Does that mean I can have a total of 1 minute of video calls? Or video calls can only be 1 minute long? Or does it mean I can have unlimited calls, 3 calls at a time all month long?

    Can I have different avatars or only the same avatar x 3?

    Can I record the avatar and make videos and post on social media?

    • lcolucci 4 hours ago
      Sorry about the confusion. Video Agents and Creative Studio are two entirely different products. Video Agents = interactive video. Creative Studio = make a video and download it. If you're interested in real-time video calls, then Video Agents is the only pricing and feature set you should look at.
  • wumms 4 hours ago
    You could add a Max Headroom avatar to the HN link. You might reach real time by interspersing freeze frames, duplicates, or static.
    • sid-the-kid 4 hours ago
      And, just like that, Max Headroom is back: https://lemonslice.com/try/agent_ccb102bdfc1fcb30
    • sid-the-kid 4 hours ago
      1) Yes on Max Headroom, we are on it. 2) It already is real time...?
      • wumms 3 hours ago
        Whoops! I mistook the "You're about to speak with an AI" progress bar for processing delay.
        • lcolucci 23 minutes ago
          I wonder if we should make the UI a more common interface (e.g. "the call is ringing") to avoid this confusion?

          It's a normal mp4 video that's looping initially (the "welcome message") and then as soon as you send the bot a message, we connect you to a GPU and the call becomes interactive. Connecting to the GPU takes about 10s.

        • sid-the-kid 3 hours ago
          Makes sense. The init should be about 10s, but after that it should be real time. TBH, this is probably a common confusion, so thanks for calling it out.
  • r0fl 4 hours ago
    Wow I can’t get enough of this site! This is literally all I’ve been playing with for like half an hour. Even moved a meeting!

    My mind is blown! It feels like the first time I used my microphone to chat with AI.

    • lcolucci 4 hours ago
      This comment made my day! So happy you're liking it
    • sid-the-kid 4 hours ago
      Glad we found somebody who likes it as much as we do! BTW, the biggest thing we are working to improve is response speed. I think we can make that much faster.
  • zvonimirs 5 hours ago
    We're launching a new AI assistant and I wanted to make it feel alive, so I started playing around with LemonSlice and I loved it!! I wanted to make our assistant like a coworker by giving it the ability to create Loom-style videos. Here's what I created - https://drive.google.com/file/d/1nIpEvNkuXA0jeZVjHC8OjuJlT-3...

    Anyway, big thumbs up for the LemonSlice team, I'm excited to see it progress. I can definitely see products starting to come alive with tools like this.

    • bzmrgonz an hour ago
      How did your token spend add up? I'm wary of malicious customers racking up AI charges just for shits and giggles. Even competitors might sponsor some runaway charges.
    • sid-the-kid 5 hours ago
      Very cool! Thanks for sharing. I love your use-case of turning an AI coding agent into more of an AI employee. Will be interesting to see if users can connect better with the product this way.
  • dreamdeadline 5 hours ago
    Cool! Do you plan to expose controls over the avatar’s movement, facial expressions, or emotional reactions so users can fine-tune interactions?
    • lcolucci 4 hours ago
      Yes we do! Within the web app, there's an "action text prompt" section that allows you to control the overall actions of the character (e.g. "a fox talking with lots of arm motions"). We'll soon expose this in the API so you can control the character's movements dynamically (e.g. "now wave your hand").
    • sid-the-kid 4 hours ago
      Our text control is good, especially for emotions. For example, you can add the text prompt "a person talking. they are angry", and the agent will have an angry expression.

      You can also control background motions (like ocean waves, a waterfall, or a car driving).

      We are actively training a model that has better text control over hand motions.
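
      To make the dynamic control concrete, here is a hypothetical sketch of what such an API call might look like once it's exposed (the endpoint and field names are guesses, not our documented API):

        # Hypothetical sketch only -- endpoint and fields are guesses,
        # not the documented LemonSlice API.
        import requests

        API_KEY = "your-api-key"    # placeholder
        AGENT_ID = "your-agent-id"  # placeholder

        resp = requests.post(
            f"https://api.lemonslice.com/v1/agents/{AGENT_ID}/actions",
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={
                # Static prompt shaping overall behavior and emotion
                "action_prompt": "a person talking. they are angry",
                # Dynamic cue sent mid-call, per the comment above
                "live_action": "now wave your hand",
            },
            timeout=10,
        )
        resp.raise_for_status()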

  • bennyp101 4 hours ago
    Heads up, your privacy policy[0] does not work in dark mode - I was going to comment saying it made no sense, then I highlighted the page and more text appeared :)

    [0] https://lemonslice.com/privacy

    • sid-the-kid 4 hours ago
      Fix deployed! This is why it's good to launch on Hacker News. Thanks for the tip.
    • sid-the-kid 4 hours ago
      Good catch! Working on a fix now.
  • koakuma-chan 4 hours ago
    > You're probably thinking, how is this useful

    I was thinking why the quality is so poor.

    • sid-the-kid 4 hours ago
      Curious which avatar you think is poor quality? Or what specifically you think is poor quality. I want to know :)
      • koakuma-chan 4 hours ago
        Low res and low fps. Not sure if lipsync is poor, or if low fps makes it look poor. Voice sounds low quality, as if recorded on a bad mic, and doesn't feel like it matches the avatar.
        • sid-the-kid 3 hours ago
          Thanks for the feedback, that's helpful. Ya, some avatars have worse lip sync than others. It depends a little on how zoomed in you are.

          I am double checking now to make 100% sure we return the original audio (and not the encoded/decoded audio).

          We are working on high-res.

  • jedwhite an hour ago
    That's an interesting insight about "stacking tricks" together. I'm curious where you found that approach hit limits, and what gives you an advantage, if anything, against others copying it. Getting real-time streaming with a 20B-parameter diffusion model at 20fps on a single GPU seems objectively impressive. It's hard to resist just saying "wow" looking at the demo, but I know that's not helpful here. It is clearly a substantial technical achievement, and I'm sure lots of other folks here would be interested in the limits of the approach and how generalizable it is.
    • sid-the-kid an hour ago
      Good question! Software gets democratized so fast that I am sure others will implement similar approaches soon. And, to be clear, some of our "speed upgrades" are pieced together from recent DiT papers. I do think getting everything running on a single GPU at this resolution and speed is totally new (as far as I have seen).

      I think people will just copy it, and we just need to continue moving as fast as we can. I do think a bit of a revolution is happening right now in real-time video diffusion models. There have been so many great papers published in that area in the last 6 months. My guess is that many DiT models will be real time within a year.

      • storystarling 38 minutes ago
        Curious about the memory bandwidth constraints here. 20B parameters at 20fps seems like it would saturate the bandwidth of a single GPU unless you are running int4. I assume this requires an H100?
        • andrew-w 29 minutes ago
          Yep, the model is running on Hopper architecture. Anything less was not sufficient in our experiments.
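
          As a rough back-of-envelope supporting that (precision and step count below are assumptions, not confirmed numbers from the team):

            # Weight traffic if every denoising step reads all 20B params
            # from HBM once; precision and step count are assumptions.
            params = 20e9
            bytes_per_param = 2        # bf16/fp16
            fps = 20
            steps_per_frame = 4        # few-step distilled diffusion (guess)

            traffic = params * bytes_per_param * fps * steps_per_frame
            print(f"{traffic / 1e12:.1f} TB/s")          # 3.2 TB/s

            h100_hbm3 = 3.35e12        # H100 SXM peak HBM bandwidth (B/s)
            print(f"{traffic / h100_hbm3:.0%} of peak")  # ~96%
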
      • sid-the-kid an hour ago
        One thing that is interesting: LLM pipelines have been highly optimized for speed (since speed is directly related to cost for companies). That is just not true for real-time DiTs. So there is still lots of low-hanging fruit for how we (and others) can make things faster and better.
  • sid-the-kid 5 hours ago
    Hey HN! One of the founders here. As of today, we are seeing informational avatars + roleplaying for training as the most common use cases. The roleplaying use case was surprising to us. Think a nurse training to triage with AI patients, or SDRs practicing lead qualification with different kinds of clients.
  • r0fl 4 hours ago
    Where's the HN playground to grab a free month?

    I have so many websites that would do well with this!

    • dang 3 hours ago
      (We've replaced the link to their homepage (https://lemonslice.com/) with the HN playground at the start of the text above)
      • lcolucci 3 hours ago
        Thanks Dan! The HN playground lets anyone try it out for free without logging in.
    • lcolucci 4 hours ago
      https://lemonslice.com/hn - There's a button for "Get 1st month free" in the Developer Quickstart
  • korneelf1 2 hours ago
    Wow this is really cool, haven't seen real-time video generation that is this impressive yet!
    • lcolucci 2 hours ago
      Thank you so much! It's been a lot of fun to build
  • r0fl 4 hours ago
    Wow this is the most impressive thing I’ve seen on hacker news in years!!!!!

    Take my money!!!!!!

    • lcolucci 4 hours ago
      Wow thank you so much :) We're so proud of it!!
  • buddycorp 5 hours ago
    I'm curious if I can plug in my own OpenAI realtime voice agents into this.
    • lcolucci 4 hours ago
      Good question! Yes, and to do this you'd want to use our "Self-Managed Pipeline": https://lemonslice.com/docs/self-managed/overview. You can combine any STT, LLM, and TTS combination with LemonSlice as the avatar layer.
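
      As a minimal sketch of the shape of that loop (every function below is a stand-in for your chosen vendor; the LemonSlice call is illustrative, not the documented API):

        # Self-managed STT -> LLM -> TTS -> avatar turn. All functions
        # are stand-ins; swap in real vendor SDK calls.

        def stt(audio: bytes) -> str:
            """Stand-in for your STT vendor (Deepgram, Whisper, ...)."""
            return "transcribed user speech"

        def llm(prompt: str) -> str:
            """Stand-in for any LLM (Qwen is our default)."""
            return "agent reply text"

        def tts(text: str) -> bytes:
            """Stand-in for your TTS vendor (ElevenLabs, ...)."""
            return b"synthesized speech audio"

        def lemonslice_avatar(audio: bytes) -> bytes:
            """Stand-in for the LemonSlice call that streams avatar video."""
            return b"video frames of the avatar speaking"

        def handle_turn(user_audio: bytes) -> bytes:
            text = stt(user_audio)            # speech -> text
            reply = llm(text)                 # text -> reply
            speech = tts(reply)               # reply -> speech
            return lemonslice_avatar(speech)  # speech -> talking video
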
    • jfaat 4 hours ago
      I'm using an OpenAI realtime voice with LiveKit, and they said they have a LiveKit integration, so it would probably be doable that way. I haven't used video in LiveKit though, and I don't know how the plugins are set up for it.
    • sid-the-kid 4 hours ago
      Good question. When using the API, you can bring any voice agent (or LLM). Our API takes in what the agent will say, and then streams back the video of the agent saying it.

      For the fully hosted version, we are currently partnered with ElevenLabs.

  • benswerd 4 hours ago
    The last year vs this year is crazy
    • lcolucci 4 hours ago
      Agreed. We were so excited about the results last year and they are SO BAD now by comparison. Hopefully we'll say the same thing again in a couple of months.
    • sid-the-kid 4 hours ago
      Thanks! It just barely worked last year, but not much else. This year it's actually good. We got lucky: it's both new tech and turned out to be good quality.
  • shj2105 3 hours ago
    Not working on mobile iOS
    • lcolucci 3 hours ago
      What's not working for you?
  • ed_mercer 5 hours ago
    This looks super awesome!
    • sid-the-kid 5 hours ago
      Thank you! It's by far the thing I've worked on that I'm most proud of.
  • marieschneegans 4 hours ago
    This is next-level!
    • lcolucci 4 hours ago
      Thanks so much! We're super proud of it
  • ProjectBarks 2 hours ago
    Removing - Realized I made a mistake
    • dang 2 hours ago
      I don't see any evidence that r0fl's comments are astroturfing. Sometimes people are just enthusiastic.

      I appreciate your concern for the quality of the site - the fact that the community here cares so much about protecting it is the main reason why it continues to survive. Still, it's against HN's rules to post like you did here. Could you please review https://news.ycombinator.com/newsguidelines.html? Note this part:

      "Please don't post insinuations about astroturfing, shilling, bots, brigading, foreign agents and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data."

    • sid-the-kid 2 hours ago
      It's a fair concern, but we don't know r0fl, and we are not astroturfing.

      Even I am surprised by how many openly positive comments we are getting. It's not been our experience in the past.