Does Cuckoo adapt dynamically to new terms within a conversation, or does it require preloading domain knowledge beforehand? Also, how do you ensure accuracy in cases where direct translation doesn’t capture the intended meaning (e.g., idiomatic phrases or cultural context differences)?
Excited to see how this evolves!
Right now, we have a set of “industry presets” with preloaded keywords and context for different industries (e.g., GPU, LLM, and GPT for the AI preset).
Over time, we want our users to build on these preset terms, for example by automatically adding terms mentioned across different meetings. There is a challenge here: how do we add terms that may be mispronounced or that the LLM may have mixed up? I think having the context of their conversation, plus their base documents for these conversations, could definitely help.
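To make that concrete, here's a minimal sketch of the kind of glossary folding we have in mind. This is not our actual implementation, and all names in it are hypothetical; it just shows how simple fuzzy matching could fold a mis-transcribed term into an existing preset instead of adding it as a new (wrong) entry:

    from difflib import get_close_matches

    # Hypothetical preset glossary for the AI industry preset.
    AI_PRESET = {"GPU", "LLM", "GPT", "transformer", "inference"}

    def normalize_term(candidate, glossary):
        """Map a possibly mis-transcribed term onto a known glossary entry,
        falling back to the raw candidate if nothing is close enough."""
        matches = get_close_matches(candidate, list(glossary), n=1, cutoff=0.6)
        return matches[0] if matches else candidate

    def collect_meeting_terms(transcript_terms, glossary):
        """Fold new terms from a meeting transcript into the glossary,
        collapsing near-duplicates (e.g., ASR writing "LMM" for "LLM")."""
        updated = set(glossary)
        for term in transcript_terms:
            updated.add(normalize_term(term, updated))
        return updated

    # "LMM" folds into the existing "LLM"; "RAG" is genuinely new and gets added.
    print(collect_meeting_terms(["LMM", "RAG"], AI_PRESET))

The cutoff is the hard part: too loose and distinct terms like "GPU"/"GPT" merge; too strict and real mispronunciations slip through. That's exactly why conversation context and base documents matter.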
We do our best to handle language switching. For example, when talking bio, almost half of the sentence is English terms, and Cuckoo does pretty well in that context as well!
I remember 10+ years ago there were a few Kickstarters promising this kind of product, but with a hardware device. Obviously they were all fraudulent back then, but it is definitely in the realm of possibility today.
Also, a question: your writing makes you seem quite bilingual and fluent in English. Given this, would you consider yourself a user of your own product? Do you often find yourself needing to use it? It strikes me that the main users would be people who struggle with English specifically. Though I guess with recent innovations in China, potentially more English speakers will start needing to translate from Chinese.
Yes, I am bilingual. I was fortunate enough to study both in Korea and Canada.
I use our product every day when I’m meeting with customers in Japan and China. We joke that we are our very first customers. Personally, it’s best when I get to meet them in person and use our in-person meeting feature since I get to see their reactions.
I would say half of our users are fluent in English since they mostly work for U.S. companies. The other half would be people in Korea, Japan, China, and more who need the language support.
Is there a consumer version available?
Or is there a company focused on that side of the business?
Email me at yonghee@cuckoo.so so I can help you out with the first few months!
From your demo, I gather you are a translator, which is a big letdown for me. Reading and understanding text is much slower than just listening. Also, spoken words are only around 30% of overall communication. I'm afraid that while your users are busy reading translated text, they'll lose out on other vital communication cues like hand gestures, facial expressions, etc.
Is real-time audio interpretation in the pipeline?
[0] - https://www.google.com/search?q=translator+vs+interpreter&oq...
When we first started this project, we referred to it as an "interpreter." However, after speaking with human interpreters and considering their feedback, we settled on "real-time translation." We may have left some of that past on the internet, though.
As with everything, there are both advantages and limitations to text-based translations. Here are a few:
Limitations:
- Some people may find it challenging to follow gestures and expressions while reading.
- In more one-way scenarios, such as presentations and webinars, hearing the speaker’s voice often feels more natural.
Pros:
- Many users actually prefer text because it allows them to hear the speaker’s original voice and pick up on nuances.
- Having a written record enables post-meeting summaries and the opportunity to repurpose transcripts into other materials, such as blog posts, custom user manuals, JIRA notes, and more using AI.
- There are also technical constraints with voice-to-voice translation, which currently tends to be turn-based rather than real-time (streaming), so it's not ideal for a fluid exchange of ideas (see the sketch below).
That said, we are excited to see how the TTS and STT technologies evolve and are looking forward to experimenting with “interpretation” in the future!
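To illustrate the turn-based vs. streaming difference, here's a toy sketch (hypothetical, not Cuckoo's pipeline): the turn-based path has to wait for the whole utterance before it can translate and speak, while the streaming path emits a partial translation as each transcript chunk arrives:

    import time

    def asr_chunks():
        """Simulated ASR output: partial transcript chunks over ~2s of speech."""
        for chunk in ["I think", "we should", "ship the beta", "next week."]:
            time.sleep(0.5)  # pretend each chunk takes 0.5s to be spoken
            yield chunk

    def translate(text):
        """Stand-in for a real translation call."""
        return f"<translated: {text}>"

    def streaming():
        # Streaming subtitles: show a partial translation per chunk,
        # so the reader follows along in near real time.
        for chunk in asr_chunks():
            print("streaming :", translate(chunk))

    def turn_based():
        # Voice-to-voice today: wait for the full turn, then translate
        # (and synthesize speech); the listener hears nothing for ~2s.
        full = " ".join(asr_chunks())
        print("turn-based:", translate(full))

    streaming()
    turn_based()

With real models the gap is typically bigger, since TTS usually needs a reasonably complete translated sentence before it can start speaking naturally.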
Speak for yourself! I read _much_ faster than listening to someone saying the same thing. This is why I can't stand subtitles on videos, movies, and tv shows. Because of how my brain works, I can't help but read the text. And when it's there, I'm done reading the person's line when they are only 25-50% through speaking it. So it "feels" like I'm watching a show where everyone repeats the last half of every sentence.
> Is real-time audio interpretation in the pipeline?
When I saw the headline, I assumed the product was doing real-time translation and voice cloning in one. Now _that_ would be an interesting use of AI. (Google and others have been doing real-time voice recognition and text translation for years.)
Ha! Translation is done in real time, but movie subtitles are not! Were you thinking they are processed the same way? That's your confusion.
> I'm done reading the person's line when they are only 25-50% through speaking it.
How can an AI system translate someone when they haven't even spoken those words yet? Please check the title of the post: it's a real-time system.
> I read _much_ faster than listening to someone saying the same thing.
Everyone reads and/or speaks at a different speed. You can pause a movie, but not a live meeting. You don't have to make any critical decisions while consuming entertainment; at work, on the contrary, you might have to listen, process, understand, connect the dots to various other subsystems, and work out how they may or may not affect your standing. At the end, you might have to challenge the speaker or add to what they're saying. A lot of variables.
Maybe we'll have this for Cuckoo 2.0!
While watching their demo video, I had no trouble reading and interpreting the translated English at the speed the conversations were going. There's a chance that some speakers would speak much quicker, but I think this software covers the vast majority of use cases.
Real-time translation is a great start. I'm sure these models can be tweaked over time for better interpretation, especially given that they learn based on context.
There is another aspect to this: the small pauses forced by technology will give people just enough time to think, which is welcome in a business meeting.
Full disclosure: I am not a user or a customer, but this looks like it is something I would one day want to use if the opportunity presents itself.
I also believe that as the meeting progresses, it feels more natural, and participants become aware of the translator. (Interestingly, they often start speaking more clearly and using fuller sentences, just as they would with human interpreters!)
Thanks for your comment. I hope you give it a try!
(Of course, some users may prefer to remove the conversation entirely for data security and privacy reasons.)
Legal, huh? How many indictments have you seen come out of a business meeting? Facial expressions are very much part of a work environment. Someone may not say a word while hearing a crazy idea, but they'll certainly roll their eyes.