(1.) The overall level of complexity. For example, I think (today) you would need to do an extreme amount of hand-holding to get an AI to write a browser, an OS, or anything of similarly high complexity.
(2.) The representation in the AI's training set. Anything that is common in the training set (e.g., a to-do app in JavaScript) will be trivial for an AI to one-shot. Anything that is rare or absent in the training set is going to be much, much harder for the AI. If I ask an AI tool to develop an automated, ultra-low-latency high-frequency trading system, it's probably going to struggle, because those kinds of applications aren't in the open-source domain. The same is true for completely novel algorithms. So maybe the hard cases are things that are essentially new science/engineering.
I'm very curious to hear others' thoughts on this, as I've been wondering about this, too.
Any niche area - once you venture outside topics/tasks well represented on the Internet and GitHub, even frontier LLMs quickly take a nosedive.