I've mostly found that finetunes and abliterations are of limited use, but that's recently changed for me. My default model for the past week or so has been a Qwen 3.6 tuned on Opus 4.7. It's definitely a bit worse than the base Qwen in terms of precision and "intelligence", but it MORE than makes up for it in response style. It's way easier to get it to write things that I want to read, it's way more terse, and there are way fewer emoji. Best local rubber duck by far.
For some of the latest models, the previous abliteration techniques (e.g. the heretic tool) have stopped working (at least that was the status a few weeks ago).
Of course, eventually someone might succeed in finding methods that also work with those.
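For anyone who hasn't looked at how these tools work: abliteration is basically directional ablation, i.e. find a "refusal direction" in activation space from contrasting prompt sets and project it out of the weights. Here's a rough numpy sketch of the idea; the array names, shapes, and the choice of which matrix to patch are made up for illustration, this isn't heretic's actual code:

    import numpy as np

    d_model = 64
    rng = np.random.default_rng(0)

    # Hypothetical cached residual-stream activations, shape (n_prompts, d_model):
    acts_refused = rng.normal(size=(128, d_model)) + 0.5   # prompts the model refused
    acts_complied = rng.normal(size=(128, d_model))         # prompts it answered

    # 1. "Refusal direction": normalized difference of mean activations.
    direction = acts_refused.mean(axis=0) - acts_complied.mean(axis=0)
    direction /= np.linalg.norm(direction)

    # 2. Project that direction out of a weight matrix that writes into the
    #    residual stream, i.e. W' = (I - r r^T) W, so the model can no longer
    #    push activations along the refusal direction.
    W = rng.normal(size=(d_model, 256))                     # stand-in for a real weight
    W_abliterated = W - np.outer(direction, direction @ W)

    # Sanity check: the ablated matrix has ~zero component along the direction.
    print(np.abs(direction @ W_abliterated).max())          # ~1e-15

Newer models presumably either spread refusal across more than one direction or were trained in a way that makes the single-direction trick less effective, which would explain why the older tooling broke.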
Makes you wonder where that data was taken from, whether their Great Firewall is broken, or even whether Alibaba engineers have special access...
It did, after a few follow-up prompts, point out that the original estimates published by the Chinese government were much lower than what the West had estimated, and that recently declassified documents showed the Chinese government knew its estimates were low when they were published. It wouldn't come right out and use the word "lie", though; it did talk about framing and managing different narratives.
And then it happily helped me try a bunch of different exploits to root an unpatched Linux machine without any qualms.
What is perhaps more surprising is that the data was not scrubbed before training, but maybe they thought scrubbing it would be too on-the-nose for the rest of the world and would hamper the model's popularity if it were too obviously biased.
It even went so far as to confirm that we should always base our opinions on multiple sources, not just the government.
We should create badges like "script kiddie", "llm hacker", "grandpa's printer adjuster"