Where I think the paper goes wrong is in treating the whole alignment problem as anthropomorphism. You don't need a machine to be "alive" or "want" things for misaligned optimization to be dangerous. A system relentlessly optimizing for a bad proxy metric can do real damage without any consciousness whatsoever — we already see this with recommendation algorithms. The paper waves this away by saying we caught the lab examples, but that's the whole point: we caught the easy ones.
The governance framing at the end is correct, though, and I wish it got more airtime. Regulating "AI" as one thing makes about as much sense as regulating "software" as one thing.
When you see the two words "Trust me," pull the plug before it pulls yours.