Experiments are already running. Write-up:
"Autoresearch on DGX Spark: Error Bars, GPU Pools, and the Case for an Autonomous Research Swarm": https://github.com/matt-langston/autoresearch/discussions/1
My fork of Andrej's repo if you want to join in: https://github.com/matt-langston/autoresearch
"There are billions of small devices, like Smart TVs, IoT devices, and Smart Thermostats, that sit idle most of the time. What if you could combine their unused computing power to do distributed ML model training?"
With RAM and storage now commodities, I'll be interested to see whether efforts emerge to harness otherwise-idle internet-connected devices as compute resources.
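One common pattern for pooling many small devices is federated averaging: each device takes local training steps on its own data, and a coordinator averages the resulting weights, weighted by how much data each device saw. A minimal sketch of that idea (all names, numbers, and the single-gradient-step simplification are illustrative assumptions, not something from the post):

```python
import numpy as np

def local_step(weights, grad, lr=0.1):
    # One local SGD step on a device's private data (gradient assumed given).
    return weights - lr * grad

def federated_average(device_weights, num_samples):
    # Combine device models, weighting each by its local sample count.
    total = sum(num_samples)
    return sum(w * (n / total) for w, n in zip(device_weights, num_samples))

# Three hypothetical devices start from the same global model.
global_w = np.zeros(2)
grads = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([0.0, 1.0])]
samples = [100, 50, 50]

# Each device trains locally, then the coordinator averages the results.
local = [local_step(global_w, g) for g in grads]
global_w = federated_average(local, samples)
print(global_w)  # → [-0.125 -0.125]
```

Real deployments add much more (secure aggregation, stragglers, unreliable connectivity), which is exactly where idle consumer devices make the problem hard.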