20 points by bakigul 5 hours ago | 2 comments
  • tefkah 4 hours ago
    I struggle to find non-evil applications of voice-cloning. Maybe listening to your dead relative's voice one more time? But those use cases seem so niche compared to the overwhelming uses this will likely have: misinformation, scamming, putting voice actors out of work.
    • c0balt 2 hours ago
      Selling a voice profile of a well-known person or a pleasant-sounding voice for procedural/generated voice acting (similar to elevenlabs "voices") could be a legitimate use case. But only if actual consent is acquired first.

      Given that rights to one's likeness (personality rights) are somewhat defined, there might be a legitimate use case here. For example, a user might prefer a TTS with the voice of a familiar TV presenter over a generic voice.

      But it sounds exceedingly easy to abuse (similar to other generative AI applications) in order to exploit end-users (social engineering) and voice "providers" (exploitation of personality rights).

    • apwheele 2 hours ago
      I would clone my own voice and use it for things like scripted tutorials/presentations and audiobooks; a rough sketch of that pipeline is below.

      I do not personally prefer it, but a non-trivial number of people prefer video/audio presentations over writing.
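
      Not the Qwen model's API, but as a sketch of what that workflow can look like today with an open model (this assumes Coqui's XTTS v2 for the cloning; the file names are just placeholders):

        # Hypothetical sketch using Coqui TTS's XTTS v2, which clones a voice
        # from a short reference recording; not the Qwen model's interface.
        from TTS.api import TTS

        tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

        # Placeholder inputs: a script to narrate and a sample of your own voice.
        with open("tutorial_script.txt") as f:
            script = f.read()

        tts.tts_to_file(
            text=script,
            speaker_wav="my_voice_sample.wav",  # a short recording of your own speech
            language="en",
            file_path="tutorial_narration.wav",
        )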

    • schlupfknoten 3 hours ago
      Voice acting for procedurally generated games?
    • chistev 3 hours ago
      Black Mirror episode
  • pogue 4 hours ago
    They couldn't already do that? Or is this new Qwen model just that much better?
    • tefkah 4 hours ago
      It is significantly better.
      • pogue 4 hours ago
        Is there a demo or some free way to try it out there, without having to pay even for a single test?