Show HN: Vram.run – Compare API providers, local GPUs, and cloud for any model (vram.run)
1 point by jad-nohra 3 hours ago | 1 comment
jad-nohra 3 hours ago [dead]
ranger_danger an hour ago
The data for Llama-4-Scout seem quite wrong to me... it says a 17B model is only 7B, and that Q4 is 4GB when it's more like 60GB.
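For context, a rough back-of-envelope sketch of where the ~60GB figure comes from, assuming Llama 4 Scout's published shape (~17B active parameters, ~109B total as a mixture-of-experts) and ~4.5 bits per weight for a typical Q4 GGUF; the exact bits-per-weight varies by quant variant, so treat the numbers as estimates:

    # Rough size estimate for a quantized model (sketch; figures are assumptions).
    def quantized_size_gb(total_params_billions: float, bits_per_weight: float) -> float:
        total_bytes = total_params_billions * 1e9 * bits_per_weight / 8
        return total_bytes / 1e9

    # Counting all ~109B parameters (what actually has to fit in memory for an MoE):
    print(quantized_size_gb(109, 4.5))  # ~61 GB, consistent with "more like 60GB"

    # Counting only the ~17B active parameters would give a much smaller number,
    # which may explain how a site could arrive at a single-digit-GB figure:
    print(quantized_size_gb(17, 4.5))   # ~9.6 GB

The key point: for a mixture-of-experts model, all experts must be resident in memory even though only a subset is active per token, so sizing from active parameters alone badly underestimates VRAM.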