As long as you don't need actual CSAM in the training data and the generated images are different enough from any real person (both of which seem technologically feasible), that seems like a good thing.
Or is there any indication that the availability of CSAM actually increases the likelihood that people act on it later?
Given that, I don't see how you can allow AI-generated CSAM without effectively making "real" CSAM images unprosecutable.
The standard is beyond a reasonable doubt, and I think that's going to become an increasingly difficult bar to clear if AI-generated versions (whether made for their own sake or as decoys) are allowed to remain legal.
(You need to sign both the models and the programs to make sure there's no img2img.)
That being said I don’t know if the availability of CSAM would increase or decrease real world abuse.
The bigger issue is that these types of bans feel a lot more like banning speech than banning a real crime, and the precedent it sets can end up being used in far-reaching ways. That’s how it always is.
Everything else I do agree with you on, though.
The problem is, prosecutors are just looking for easier ways to jail people for things they might do, based on what they personally believe (e.g., "manga causes child abuse").
Already illegal in Australia: https://www.independent.co.uk/news/world/australasia/sydney-... (don't hold your breath on it making any "banned books" lists)
People laughed at Indians believing photos stole one's soul, and now we have legislated even stupider behavior, without the excuse of ignorance.