Do we?
At which point, if the evidence turns out to be negative, it will be dismissed as invalid because no model older than November 2027 is worth using for anything. If the evidence turns out to be even slightly positive, it will be hailed as the next educational paradigm shift, and AI training will become part of unemployment settlements.
> is likely to improve at what they do
personally, my skills are not improving.
professionally, my output has increased
- how many data sources it has access to
- the quality of your prompts
So, if prompting quality decreases, so does model performance.
Claude is meant to be so clever it can replace all white-collar work within n years, but also "you're not using it right"? Which one is it?