Really excited to see what people create here. In particular, I'm curious how optimizing for a parameter-constrained setting pushes people toward weirder, more unique architectures (test-time compute, aggressive parameter tying, depth recurrence, low-rank training, ...), compression schemes (low precision, QAT, bitnets, novel tokenizers, ...), and other creative submissions (test-time training, long context, megakernels, ...).
This is motivated by the fact that OpenAI is full of exceptional researchers who have come from every imaginable academic, regional, and career background. We've seen breakthroughs led by researchers ranging from those with PhDs in physics to those without an undergraduate degree.
For frontier research, traditional signals like general coding interviews are useful but incomplete. We want to try new approaches to identifying great people who can reason under constraints, generate original ideas, and build.