13 points by andhuman 4 hours ago | 1 comment
  • beAroundHere 2 hours ago
    After GLM and Z.ai released huge models, thanks to the Qwen team we now have models that can run on low-end devices.

    Qwen3.5-35B-A3 in particular looks great for cheaper GPUs, since a quantized version of it should need less than 32 GB of RAM.
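
    The <32 GB figure checks out with a back-of-envelope estimate. A sketch, assuming "35B" means roughly 35 billion weights and a 4-bit quantization, and ignoring KV cache and runtime overhead (which add several more GB in practice):

    ```python
    def quant_size_gb(params_billion: float, bits_per_weight: int) -> float:
        """Approximate in-memory size of quantized weights in GB.

        Ignores KV cache, activations, and runtime overhead, so this is
        a lower bound on what inference actually needs.
        """
        return params_billion * 1e9 * bits_per_weight / 8 / 1e9

    # A 4-bit quant of a ~35B-parameter model:
    print(quant_size_gb(35, 4))  # -> 17.5 (GB of weights, well under 32 GB)
    ```

    Even with overhead on top of the 17.5 GB of weights, that leaves comfortable headroom under 32 GB.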