> I use tabs instead of spaces for indentation. It seems like the model is massively weighted on code written using spaces (duh)
LLMs by nature are not very good at peeing against the wind. Also, on average they're only as good as the average codebase they've been trained on. By design.
For me, it's tabs-vs-spaces, but doesn't every codebase have its own peeing-against-the-wind patterns that exist for some historical reason or another? What's the way to mitigate this pull toward the center other than throwing up my hands and admitting defeat?
I can't address all your points, but if you add your linter as a pre-commit hook, the AI can't commit, and therefore can't open a PR, unless the code passes your linter. That could catch the tabs-versus-spaces issue.
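A minimal sketch of that idea, assuming a Python project and a tabs-only convention: a script dropped into `.git/hooks/pre-commit` that rejects any staged file indented with spaces. The `.py` filter and the leading-space regex are illustrative stand-ins for whatever your real linter enforces.

```python
#!/usr/bin/env python3
"""Illustrative pre-commit hook: block commits that add space-indented lines."""
import re
import subprocess
import sys

# Files staged for this commit (added/copied/modified).
staged = subprocess.run(
    ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

bad = []
for path in staged:
    if not path.endswith(".py"):  # illustrative: only check Python sources
        continue
    try:
        with open(path, encoding="utf-8") as f:
            for lineno, line in enumerate(f, start=1):
                # Flag lines whose indentation begins with a space instead of a tab.
                if re.match(r"^ +\S", line):
                    bad.append(f"{path}:{lineno}")
    except (OSError, UnicodeDecodeError):
        continue  # skip unreadable or binary files

if bad:
    print("Space-indented lines found (tabs expected):", *bad, sep="\n  ")
    sys.exit(1)  # non-zero exit aborts the commit
```

Mark it executable (`chmod +x .git/hooks/pre-commit`) and git runs it before every commit; running the same check in CI means a PR still fails visibly even if the agent bypasses the local hook.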