You get an open model that is roughly 95% of Opus 4.6's quality, about 80% cheaper on most inference providers, and one you can also run on your own hardware.
They also did the hard parts:
* crawling the content
* running the fine-tuning (or training)
That's better than one or two companies taking control of the whole AI economy.
Name training is always shallow: Claude itself will claim to be GPT-3, GPT-4, or Reddit (heh) when confused. It's just dataset contamination, since the web is full of slop. Never trust a model's self-reported name.