Hacker News
TADA: Fast, Reliable Speech Generation Through Text-Acoustic Synchronization (www.hume.ai)
40 points by smusamashah 4 hours ago | 5 comments
qinqiang201
an hour ago
Could it run on a MacBook, or only on a GPU device?
OutOfHere
2 hours ago
Will this run on a CPU (as opposed to a GPU)?
boxed
an hour ago
Why would you want to? It's like using a hammer for screws.
g-mork
an hour ago
CPU compute is infinitely less expensive and much easier to work with in general.
boxed
36 minutes ago
Less expensive how? GPUs are used precisely because they are more efficient. You CAN run matmul on CPUs, sure, but it will be much slower and draw far more electricity, so calling it "less expensive" is odd.
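(A quick way to ground this debate: you can measure CPU matmul throughput yourself. The sketch below, which assumes NumPy is installed and is not from the thread, times a single-precision matrix multiply and converts it to a rough GFLOP/s figure; comparing that number against a GPU's rated throughput shows the gap boxed is describing.)

```python
# Hedged sketch: timing a BLAS-backed matmul on CPU with NumPy.
# The matrix size (512) is illustrative, chosen only to keep the run short.
import time
import numpy as np

n = 512
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

t0 = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - t0

# A dense n x n matmul costs roughly 2 * n^3 floating-point operations.
gflops = 2 * n**3 / elapsed / 1e9
print(f"{n}x{n} matmul: {elapsed * 1000:.2f} ms ({gflops:.1f} GFLOP/s)")
```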
regularfry
an hour ago
To maximise the VRAM available for an LLM on the same machine. That's why I found myself asking the same question, anyway.