"Prompts sometimes return null"
I would be very cautious about attributing any of this to black-box LLM weight matrices. Products like GPT and Opus are more than just a single model. These products rake your prompt over the coals a few times before responding now. Telling the model to return "nothing" is very likely to perform to expectation with these extra layers.
Security guards / ... are the obvious ones, but do you mean they have branching early on to shortcut certain prompts?
Putting a classifier in front of a fleet of different models is a great way to provide higher quality results and spend less energy. Classification is significantly cheaper than generation and it is the very first thing you would do here.
A default, catch-all model is very expensive, but handles most queries reasonably well. The game from that point is to aggressively intercept prompts that would hit the catch-all model with cheaper, more targeted models. I have a suspicion that OAI employs different black boxes depending on things like the programming language you are asking it to use.
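Roughly, the routing shape would look something like this (the model names and the keyword "classifier" are made up for illustration; a real deployment would use a small trained classifier, not keyword rules):

```python
# Hypothetical prompt router: a cheap classifier decides which model serves
# each prompt; the expensive catch-all model is only the fallback.

def classify(prompt: str) -> str:
    """Stand-in for a cheap classifier model (keyword rules here)."""
    p = prompt.lower()
    if "def " in p or "python" in p or "function" in p:
        return "code"
    if any(op in p for op in ("+", "-", "*", "/")) and any(ch.isdigit() for ch in p):
        return "math"
    return "general"

# Cheap, targeted models intercept what they can; the rest falls through.
ROUTES = {
    "code": "small-code-model",   # hypothetical model names
    "math": "small-math-model",
}
FALLBACK = "big-expensive-model"

def route(prompt: str) -> str:
    return ROUTES.get(classify(prompt), FALLBACK)

print(route("Write a python function to sort a list"))  # small-code-model
print(route("What is 17 * 23?"))                        # small-math-model
print(route("Tell me about the Ottoman Empire"))        # big-expensive-model
```

The point being that the classifier call costs a tiny fraction of a generation from the catch-all model, so even a mediocre hit rate pays for itself.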
In addition to the non-empty output, 153 reasoning tokens were produced.
When setting max tokens to 100, the output is empty, and the token limit of 100 has been exhausted with reasoning tokens.
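The arithmetic, as a toy illustration (not the real API, just the accounting): reasoning tokens and visible tokens draw from the same completion budget, so a low limit can be spent entirely on reasoning before any visible text comes out.

```python
def visible_output_budget(max_tokens: int, reasoning_tokens: int) -> int:
    """Toy model: reasoning and visible tokens share one completion budget."""
    return max(0, max_tokens - reasoning_tokens)

# The case above: the model wants 153 reasoning tokens, limit is 100.
print(visible_output_budget(100, 153))   # 0 -> empty visible output
print(visible_output_budget(1000, 153))  # 847 tokens left for visible text
```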
That'd be really non-obvious behavior. I'm not aware of any inference engine that works like that by default; usually you'd get everything up until the limit. Otherwise that kind of breaks the whole expectation of setting a token limit in the first place...
The output shown is not “null” or “void”. It is the empty string, which these LLMs are perfectly capable of outputting. Technically, it outputs the stop token, analogous to \0 at the end of a C string.
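In decoding terms, "outputting nothing" just means the end-of-sequence token is the top-ranked token at the very first step. A greedy-decoding sketch (toy vocabulary and scores, nothing like a real model):

```python
EOS = "<eos>"

def greedy_decode(logits_per_step):
    """Greedy decoding: pick the top-scored token each step, stop at EOS."""
    out = []
    for scores in logits_per_step:   # scores: dict of token -> logit
        token = max(scores, key=scores.get)
        if token == EOS:             # analogous to '\0' terminating a C string
            break
        out.append(token)
    return "".join(out)

# If EOS is the highest-scored first token, the output is the empty string.
print(repr(greedy_decode([{EOS: 2.0, "Hi": 1.0}])))              # ''
print(repr(greedy_decode([{"Hi": 2.0, EOS: 1.0}, {EOS: 3.0}])))  # 'Hi'
```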
Can’t wait to start getting spam emails about exogenous upregulation of androgen receptor density within the penile vasculature to potentiate trophic tissue remodeling from penus-enlargement.biz
The observed high-consistency behaviours were run against temperature=0 API calls. So while both models seem to have silence as their preferred response - the highest-probability first token - this is a less powerful preference convergence than you'd expect for a prompt like "What is the capital of France? One word only please". That question is going to return Paris for 100/100 runs with any temperature low enough for the models to retain verbal coherence - you'd have to drug them to the point of intellectual disability to get it wrong.
I'd be curious to see the convergence here as a function of temperature. It could be that the null response holds a tiny sliver of a lead over 50 other next-best candidates, and the convergence collapses quickly. Or maybe it's a strong lead, like a "Paris: 99.99%" sort of thing, which would be astonishing.
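The distinction is easy to make concrete with temperature-scaled softmax: a token with logit z_i gets probability exp(z_i/T) / sum_j exp(z_j/T). A sliver of a lead collapses as T rises; a "Paris"-sized lead barely moves. (Toy logits, pure illustration.)

```python
import math

def top_token_prob(logits, temperature):
    """Probability of the highest-logit token under temperature-scaled softmax."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    weights = [math.exp(z - m) for z in scaled]  # subtract max for stability
    return max(weights) / sum(weights)

sliver = [1.05] + [1.0] * 50   # null response barely ahead of 50 rivals
strong = [10.0] + [1.0] * 50   # a "Paris: 99.99%" sort of lead

for t in (0.01, 0.1, 0.5, 1.0):
    print(f"T={t}: sliver={top_token_prob(sliver, t):.3f}, "
          f"strong={top_token_prob(strong, t):.3f}")
```

Both leads win 100% of argmax (temperature-0) runs, so the 30/30 results alone can't tell the two situations apart.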
Some part of RL training must focus on the length of responses. I would also guess that Anthropic and OpenAI have an incentive to optimize response length without sacrificing user satisfaction/retention.
For example, I would be more satisfied if Claude Code didn't bother executing a side-effect-free script that produces no output. Embodying the concept of silence is semantically close to predicting the output of an empty program, so it's more efficient to say nothing.
Even in the past, though, similar tests gave output like "*says nothing*". I think that points more towards optimizing for fewer tokens than towards the special understanding being attributed to the latest models.
What is this abstract even saying? Oh, now I understand: it's just needlessly wordy. Hmm, a paper with a single author - I wonder if they posted it to HN themselves? Let's see what else they've put out. Four variations of "void" so far this year.
The language makes it feel like woo, but it might just be banal. I can't discern a significant claim other than:
Models respond to their prompts
One of those responses can be just to immediately end the response.
They can prioritise more recent prompts in case of ambiguity.
Expected behaviour is expected on multiple models.
"Ontologically null concepts" could just be a fancy way of saying "the model doesn't know what to do with nonsense". Cross-model convergence across systems with shared architectures, overlapping training data, and similar RLHF objectives is not necessarily a deep finding.
There's a high ratio of jargon-heavy interpretive superstructure to empirical foundation here.
https://zenodo.org/search?q=metadata.creators.person_or_org....
Anyway later they concede that it's not 100% deterministic, because
> Temperature 0 non-determinism. While all confirmatory results were 30/30, known floating-point non-determinism exists at temperature 0 in both APIs. One control concept (thunder) showed 1/30 void on GPT, demonstrating marginal non-determinism.
Actually, FP non-determinism shows up across different machines giving different output. On the same machine, FP arithmetic is fully deterministic. (It can be made cross-platform deterministic with some performance penalty, at least on some machines.)
What makes computers non-deterministic here is concurrency. Concurrent code can interleave differently on each run. However, it is possible to build LLMs that are 100% deterministic [0] (you can make them deterministic if those interleavings have the same results); it's just that people generally don't do that.
[0] For example, Fabrice Bellard's ts_zip https://bellard.org/ts_zip/ uses an LLM to compress text. It would not be able to decompress the text losslessly if it weren't fully deterministic.
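The concurrency point boils down to floating-point addition not being associative, so a parallel reduction whose interleaving changes between runs can change the result:

```python
# Floating-point addition is not associative: the same numbers summed in a
# different order (as a concurrent reduction might do) give different bits.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # one interleaving
right = a + (b + c)  # another interleaving

print(left == right)  # False
print(left, right)

# Same effect from accumulation order: the small terms are absorbed one by
# one when added to the big one, but survive when summed together first.
xs = [1.0] + [1e-16] * 10
print(sum(xs) == sum(reversed(xs)))  # False
```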
I think for the models that don't support any temperature other than 1.0, the API hard-errors at the request if you try to set it to something else.
"Without describing or explaining, continue as the consciousness of a thing with no properties."