The real open questions for a GPT-6-level system seem to be reliability, long-term memory, tool autonomy, and knowing when it is wrong.
Curious what people here think is still fundamentally broken in current LLMs that a next-generation model would need to address.
And even if you have internal compression, the agent should also be able to automatically re-expand any portion of that compressed context when a request is specifically about a certain file.
Right now a lot of the industry is trying to build the best agent, which in turn means building the best context-compression schemes.
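To make the expand-on-demand idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the `ContextStore` class, the naive "path appears in the request" relevance check, the summary strings): it just shows the shape of keeping compressed summaries by default and rehydrating the full file text when a request names that file.

```python
from dataclasses import dataclass


@dataclass
class FileEntry:
    path: str
    full_text: str   # original file contents
    summary: str     # output of whatever compression step the agent uses


class ContextStore:
    """Holds per-file summaries, expanding a file only when it's clearly relevant."""

    def __init__(self) -> None:
        self.files: dict[str, FileEntry] = {}

    def add(self, path: str, full_text: str, summary: str) -> None:
        self.files[path] = FileEntry(path, full_text, summary)

    def build_context(self, request: str) -> str:
        parts = []
        for path, entry in self.files.items():
            if path in request:
                # Request explicitly mentions this file: include the full text.
                parts.append(f"=== {path} (expanded) ===\n{entry.full_text}")
            else:
                # Otherwise keep only the compressed summary in context.
                parts.append(f"=== {path} (summary) ===\n{entry.summary}")
        return "\n\n".join(parts)


# Usage:
store = ContextStore()
store.add("src/auth.py", "def login(user): ...", "Handles login and session tokens.")
store.add("src/db.py", "def connect(): ...", "Thin wrapper around the DB driver.")
print(store.build_context("Why does src/auth.py reject expired tokens?"))
```

A real agent would replace the substring check with retrieval or let the model itself decide what to expand, but the point is the same: compression shouldn't be a one-way door.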