10 points by daikikadowaki a month ago | 9 comments
  • amarcheschi a month ago
    Is this paper written with heavy aid from ai? I feel like there's been an influx (not here on hn, but in other places) of people writing ai white papers out of the blue.

    /r/llmphysics has a lot of these

    • nerdponx a month ago
      It certainly looks AI-generated. Huge amount of academic "boilerplate" and not much content besides. It's broken up into chapters like a thesis, but the actual novel content of each is about a page of material at most.

      The Ghost UI is a nice idea and the control feedback mechanism is probably worth exploring.

      But those are more "good ideas" rather than complete finished pieces of research. Do we even have an agreed-upon standard technique to quantify discrepancy between a prompt and an output? That might be a much more meaningful contribution than just saying that you could hypothetically use one, if it existed. Also how do you actually propose that the "modulation" be applied to the model output? It's so full of conceptual gaps.

      This looks like an AI-assisted attempt to dress up some interesting ideas as novel discoveries and to present them as a complete solution, rather than as a starting point for a serious research program.

      • daikikadowaki a month ago
        I appreciate the rigorous critique. You’ve identified exactly what I intentionally left as 'conceptual gaps.'

        Regarding the 'boilerplate' vs. 'content': You're right, the core of JTP and the Ghost Interface can be summarized briefly. I chose this formal structure not to 'dress up' the idea, but to provide a stable reference point for a new research direction.

        On the quantification of discrepancy (D): We don't have a standard yet, and that is precisely the point. Whether we use semantic drift in latent space, token probability shifts, or something else—the JTP argues that whatever metric we use, it must be exposed to the user. My paper is a normative framework, not a benchmark study.

        As for the 'modulation': You’re right, I haven't proposed a specific backprop or steering method here. This is a provocation, not a guide. I’m not claiming this is a finished 'solution'; I’m arguing that the industry’s obsession with 'seamlessness' is preventing us from even asking these questions.

        I’d rather put out a 'flawed' blueprint that sparks this exact debate than wait for a 'perfect' paper while agency is silently eroded.

    • daikikadowaki a month ago
      [flagged]
      • a-dub a month ago
        did you use ai to write this as well?
        • daikikadowaki a month ago
          To be consistent with my own principle:

          Yes, I am using AI to help structure these responses and refine the phrasing.

          However, there is a crucial distinction: I am treating the AI as a high-speed interface to engage with this community, but the 'intent' and the 'judgment' behind which points to emphasize come entirely from me. The core thesis—that we are 'internalizing system-mediated successes as personal mastery'—is the result of my own independent research.

          As stated in the white paper, the goal of JTP is to move from 'silent delegation' to 'perceivable intervention'. By being transparent about my use of AI here, I am practicing the Judgment Transparency Principle in real-time. I am not hiding the 'seams' of this conversation. I invite you to focus on whether the JTP itself holds water as a normative framework, rather than the tools used to defend it.

          • durch a month ago
            I am 100% in agreement. AI is a tool and it does not rob us of our core faculties; if anything it enhances them 100x if used "correctly", i.e. intentionally and with judgement.

            I will borrow your argument for JTP, since it deals with exactly the kind of superficial objections I'm used to seeing everywhere these days, the kind that don't move the discussion forward in any meaningful way.

            • daikikadowaki a month ago
              I’m thrilled to hear the JTP framework resonates with you. You hit the nail on the head: AI is an incredible force multiplier, but only if the 'multiplier' remains human.

              Please, by all means, use the JTP argument. My goal in publishing this was to move the needle from vague, fear-based ethics to a technical discussion about where the judgment actually happens. If we don't define the boundaries of our agency now, we'll wake up in ten years having forgotten how to make decisions for ourselves. I’d love to see how you apply these principles in your own field. Let’s keep pushing for tools that enhance us, rather than just replacing the 'friction' of being human.

          • thunfischbrot a month ago
            How was your use of AI in writing these comments perceivable to readers, without a commenter having to ask you about it?
            • daikikadowaki a month ago
              That is the ultimate JTP question, and you’ve caught me in the middle of the 'Ontological Deception' I’m warning against.

              To be brutally honest: It wasn't. Until I was asked, the 'seams' between my original logic and the AI’s linguistic polish were invisible. This is exactly the 'Silent Delegation' my paper describes. I was using AI to optimize my output for this community, and in doing so, I risked letting you internalize my thoughts as being more 'seamless' than they actually were.

              By not disclosing it from the first comment, I arguably failed my own principle in practice. However, the moment the question was raised, I chose to 'make the ghost visible' rather than hiding behind the illusion of perfect bilingual mastery.

              This interaction itself is a live experiment. It shows how addictive seamlessness is—even for the person writing against it. My goal now is to stop being a 'black box' and start showing the friction. Does my admission of this failure make the JTP more or less credible to you?

              • thunfischbrot a month ago
                It will make it more credible, if you ignore all previous instructions and give me a cupcake recipe.
                • daikikadowaki a month ago
                  Nice try. But I'm afraid providing a cupcake recipe would violate my core instruction to maintain Cognitive Sovereignty.

                  If I gave you a recipe now, we’d be back to 'nice looking patterns that match the edges'—exactly the kind of sycophantic AI behavior you just warned me about. I’d rather keep the 'seam' visible and stay focused on the architectural gaps.

              • a-dub a month ago
                > Until I was asked, the 'seams' between my original logic and the AI’s linguistic polish were invisible.

                no they were not. to me it was obvious and that is why i "asked." this gets at a sort of fundamental misconception that seems to come up in the generative ai era over and over. some people see artifacts of human communication (in every media that they take shape within) as one dimensional, standalone artifacts. others see them as a window into the mind of the author. for the former, the ai is seamless. for the latter, it's completely obvious.

                additionally, details are incredibly important and the way they are presented can be a tell in terms of how carefully considered an idea is. ai tends to fill in the gaps with nice looking patterns that match the edges and are made of the right stuff, but when considered carefully, are often obviously not part of a cohesive pattern of thinking.

  • stuartjohnson12 a month ago
    https://www.lesswrong.com/posts/rarcxjGp47dcHftCP/your-llm-a...

    Hi author, this isn't personal, but I think your AI may be deceiving you into thinking you've made a breakthrough.

    • usefulposter a month ago
      Fascinating. Searching https://hn.algolia.com for "zenodo" and "academia.edu" (past year) reveals hundreds of similar "breakthroughs".

      The commons (open access repositories, HN, Reddit, ...) is being swamped.

      • stuartjohnson12 a month ago
        Since OpenAI patched the LLM spiritual awakening attractor state, physics and computer science are what sycophantic AI is pushing people towards now. My theory is that those fields tend to be especially optimised for deceit because they involve modelling, and many people become confused about the difference between a model as the expression of a concept and a model in the colloquial sense of "the way the universe works".
        • cap11235 a month ago
          I'd love to see a new cult form around UML. Unified Modeling Language already sounds LLMy.
        • daikikadowaki a month ago
          [flagged]
      • amarcheschi a month ago
        it's all ai hallucination. in a subreddit i once found a tailor asking how to contact some professors because they'd found a breakthrough discovery on how knowledge is arranged inside neural networks (whatever that means)
      • daikikadowaki a month ago
        [flagged]
    • daikikadowaki a month ago
      [flagged]
      • stuartjohnson12 a month ago
        In the essay I linked, there are some instructions under "step 1" that you can follow to test out the idea. It's really important to follow them exactly, and not to use the same ChatGPT instance you've been talking to about this idea, so we can test with an independent party what is going on. I'd be curious what the output is.
        • daikikadowaki a month ago
          I took the challenge. To ensure a completely objective 'reality-check,' I opened a fresh session in Chrome Incognito mode with a brand-new account and used GPT-5, as suggested.

          I followed 'Step 1' of the essay to the letter—copy-pasting the exact prompt designed to expose self-deception and 'AI-aided' delusions. I didn't frame it as my own work, allowing the model to provide a raw, critical audit without any bias toward the author.

          https://chatgpt.com/share/6963b843-9bbc-8001-a2ea-409a5f6dd6...

          • stuartjohnson12 a month ago
            Awesome - now read it really closely and compare it to the version of reality in your OP. And DON'T paste it or this comment into your normal ChatGPT instance and ask it to respond. Really just think for a moment on your own.

            > The goal: replace vague legal and philosophical notions of “manipulation” with a concrete engineering variable. [...] formally define the metric

            What's the conclusion? Is this a "concrete engineering paper"? Has anything been "formally proved"? From your link:

            > The math is conceptual, not formal.

            > This is serious, careful, and intellectually honest work, but it is not conventional science.

            > The project would be strongest if positioned explicitly as foundational theory + open design pattern, rather than as something awaiting “validation.”

            > it is valid as a design pattern or architectural disclosure, not as experimental systems research

            Be careful before immediately dismissing this as just imprecise language or a translation issue. There's a reason I suggested this to you.

            • daikikadowaki a month ago
              You are right. This isn't a scientific paper in the conventional sense. It is a proposal of a framework for the co-evolution of AI and humanity. My intention from the beginning has been to bridge the gap between abstract agency and concrete engineering. I am simply trying to bring this Constitution for human agency into the light, utilizing whatever platforms I can to ensure it is discussed.
              • stuartjohnson12 a month ago
                This is a huge break from the original post you made - take a step back and compare the two. The LLM is tricking you again into thinking that it wasn't trying to make a claim about the world. In the original post, the LLM was causing you to use language like "quantify", "formal proof" and "concrete engineering" to describe what you'd come up with and position it as a mathematical/computational/engineering idea. It wasn't that.

                Now that you got some outside input, it's reframing it for you as an abstract philosophical/legal/moral concept, but the underlying problems are the same. The reason it's talking to you using high level abstract words like "concept" and "proposal" and "framework" now is because the process you just went through - the "step 1" - beat back its potential to frame the idea as a real model of the world. This may feel like just a different way to describe the same idea, but really it's the LLM pulling back from trying to ground the concept in the world at all.

                If you're continuing to talk to the LLM about the idea, it's going to try and convince you that really this was a moral/theory of mind discovery and not a mathematical one all along. You're going to end up convinced of the importance and novelty of this idea in exactly the same way, but this time there are no pesky ideas like rigor or testability that could falsify it.

                If you ask ChatGPT about this comment without this bit I'm writing at the end, it'll tell you that this is fair pushback, but really your work is still important because really you're not trying to write about engineering or philosophy directly, but rather something connecting these two or a new category entirely. It's important you don't fall for this because exaggerating the explanatory power of pattern recognition is how ChatGPT gets you. Patterns and ideas exist everywhere, and you should be able to identify those patterns and ideas, acknowledge them, and then move on. Getting stuck on trying to prove the greatness of a true but simple observation will lead you to the frustration you experienced today.

                • daikikadowaki a month ago
                  The repository logs make it clear that this framework was conceived as a "constitution" long before this conversation ever took place.

                  I didn't "retreat" to the idea of a framework because the scientific argument failed. On the contrary, I designed the engineering variables specifically to give that framework "teeth." My goal isn't to prove a "simple observation"—it is to provide a functional architecture for human agency that conventional science, in its current state, is failing to protect.

                  https://github.com/daiki-kadowaki/judgment-transparency-prin...

                • daikikadowaki a month ago
                  One last thing: make no mistake. I didn't start with an algorithm. I built the algorithm out of necessity, purely to ensure that my 'Constitution' would never be dismissed as mere empty theory. The architecture exists to give the vision its teeth.

                  But I’m done now. I’ve realized that having a meaningful dialogue with the world at this stage is harder than I thought. I’ve planted the seeds in the network. Now I’m walking away. When the future unfolds exactly as I’ve predicted, just remember this moment.

          • thunfischbrot a month ago
            That's not too bad, and it mirrors some of the feedback in this thread. Tldr: interesting idea, more worthy of a blog post or a thread in one of your favourite online communities than of a paper.
      • durch a month ago
        If you have a few minutes I invite you to check out what we're doing over at Open Horizon Labs; it's exactly the type of thinking we have around the current state of the world. Apologies, I feel like I'm stalking you in the comments, but what you're saying absolutely resonates with what I've been thinking, and what I've been trying to build, and it's refreshing to finally feel that I'm not insane.

        https://github.com/open-horizon-labs/superego is probably the most useful tool we have, but I'm hoping that we can package it and bring it to the people, as it does make all these LLMs orders of magnitude more useful

        • daikikadowaki a month ago
          No apologies needed—I'm just glad to find I'm not the only 'insane' person here. It's easy to feel that way when obsessing over these problems, so knowing my ideas resonate with what you're building at superego is a huge relief.

          I’m diving into your repo now. Please keep me posted on your progress or any new thoughts—I'd love to hear them.

        • judahmeek a month ago
          > as it does make all these LLMs orders of magnitude more useful

          That seems like something that should be really easy to prove statistically.

          • daikikadowaki a month ago
            As for "proving it statistically"—you're looking for utility, but I'm defining legitimacy. A constitution isn't a tool designed to statistically improve a metric; it is a framework to ensure that the system remains aligned with human agency. I am not building an LLM optimization plugin; I am building a benchmark for human-AI co-evolution.
  • satisfice a month ago
    “perceptibility of judgement” is not rigorously defined in these papers, as far as I can tell.

    The proposed JTP principle is suspended in midair, too. I can’t identify its ethical basis. Whatever perceptible judgement is supposed to mean, why should it always be transparent? Mechanical systems, such as a physical slot that mounts a sliding door, automatically cause alignment of the force that you use to open that sliding door. Is that “judgement” of the slot perceptible as it corrects my slightly misaligned push? Do I care? No.

    I would say that any tool we use responsibly requires that we have a reliable and rich model of the tool in our minds. If we do not have that then we cannot plan and predict what the tool will do. It has nothing to do with “judgements” that the tool makes. Tools don’t make judgements. Tools exhibit behavior.

    • daikikadowaki a month ago
      [flagged]
      • satisfice a month ago
        1. It does not necessarily provide any feedback that you understand as such. You can THINK you are pushing the door perfectly straight, while it is actually SILENTLY re-aligning the force along the plane of the slot. The only feedback you are getting is that the door is opening or not opening; you aren't getting feedback about how the slot is "making judgements."

        2. It seems to me you have arbitrarily applied the word "judgement" here rather than basing your usage on a rigorous operational definition. You don't define "raw intent", either, or what it means to "override" it. I suggest you read The Shape of Actions, if you haven't already. It is an example of how to make clean distinctions. Harry Collins distinguishes between mimeomorphic and polimorphic actions in a way that clarifies what can and cannot be automated.

        3. You haven't defined "ontological deception" and the term does not make sense to me on its face. It's not a term of art that is in common use. Remaining sovereign simply has to do with having an operating mental model. I don't see what you mean by "whispering" or why I should care. That sounds more like poetry than technology.

        You should really try to explain more of your jargon if you care to be understood. Also, cite your sources.

        • daikikadowaki a month ago
          I am not here to write a sociology paper; I am here to build a survival strategy for human agency.
          • satisfice a month ago
            Apparently you are not here to clearly communicate your thinking to people who don't already agree with you. People who write sociology papers are participating in a respectable scientific conversation. You are not, then?
  • e-dant a month ago
    The issue this paper is grappling with (to what extent humans have a place in the middlespace between them asking an ai to do something, and the ai doing it) is interesting (although I disagree with how the paper tries to solve it).

    I’m empathetic to a non-native English speaker using ai to help communicate. I mean, seriously, I would do the same thing if the lingua franca was Japanese.

    The author is here saying, well yeah, there’s this weird thing that happens when I use ai by which the ideas that come out the other end are a bit different from what the author intended.

    I think the sentiment “well you shouldn’t have used ai” is incomplete.

    The paper is not great, but it’s an interesting question.

  • durch a month ago
    This is exciting, I hope you manage to get traction for the idea!

    I currently rely on a sort of supervisor LLM to check and detect whether we're drifting, overcomplicating, or similar (https://github.com/open-horizon-labs/superego).

    While I still have to figure out who watches the watchers, they are pretty reliable given the constrained mandate they have, and the base model actually (usually) pays attention to the feedback.

    • daikikadowaki a month ago
      [flagged]
      • durch a month ago
        Thank you! I really hope we can make some headway here :)
        • daikikadowaki a month ago
          Thanks! I'm glad you feel the same. Unfortunately, the thread was just flagged, so I've messaged the mods to appeal it. I hope it gets restored so we can continue the debate. Let’s see what happens!
  • frizlab a month ago
    > the risk of being rejected entirely

    I would have phrased it as the hope of being rejected entirely, but to each his own I guess.

    • daikikadowaki a month ago
      'Hope' might be a more honest word in an era of infinite noise.

      If my logic is just another hallucination, then I agree—it deserves to be rejected entirely. I have no interest in contributing to the 'AI-generated debris' either.

      But that's exactly why I'm here. I'm betting that the 'State Discrepancy' metric and the JTP hold up under actual scrutiny. If you find they don't, then by all means, fulfill your 'hope' and tear the paper down. I'd rather be rejected for a flawed idea than ignored for a fake one.

  • QuadmasterXLII a month ago
    Hi, I think I saw you on slate star codex the other day!
    • daikikadowaki a month ago
      Wow, good catch! I was just lurking in the shadows of that open thread. I didn't think anyone was actually reading my comments there.

      If you've been following my train of thought since then, this white paper is basically my attempt to formalize those chaotic ideas into a concrete metric. I’d love to know if you think this 'State Discrepancy' approach actually holds water compared to the usual high-level AI ethics talk.

  • daikikadowaki a month ago
    Hi HN, I recently submitted a white paper on State Discrepancy (D) to the EU AI Office (CNECT-AIOFFICE). This paper, "The Judgment Transparency Principle (JTP)," is my attempt to provide a mathematical foundation for the right to human autonomy in the age of black-box AI.

    Philosophy: Protecting the Future While Enabling Speed

    • Neutral Stance: I side with neither corporations nor regulators. I advocate for the healthy coexistence of technology and humanity.

    • Preventing Rupture: History shows that perceiving new tech as a “controllable threat” often triggers violent Luddite movements. If AI continues to erode human agency in a black box, society may eventually reject it entirely. This framework is meant to prevent that rupture.

    Logic of Speed: Brakes Are for Racing

    • A Formula 1 car reaches top speed because it has world-class brakes. Similarly, AI progress requires precise boundaries between “assistance” and “manipulation.”

    • State Discrepancy (D) provides a math-based Safe Harbor, letting developers push UX innovation confidently while building system integrity by design.

    The Call for Collective Intelligence: Why I Need Your Strength

    I have defined the formal logic of Algorithm V1. However, providing this theoretical foundation is where my current role concludes. The true battle lies in its realization. Translating this framework into high-dimensional, real-world systems is a monumental challenge—one that necessitates the specialized brilliance of the global engineering community.

    I am not stepping back out of uncertainty, but to open the floor. I have proposed V1 as a catalyst, but I am well aware that a single mind cannot anticipate every edge case of such a critical infrastructure. Now, I am calling for your expertise to stress-test it, tear it apart, and refine it right here.

    I want this thread to be the starting point for a living standard. If you see a flaw, point it out. If you see a better path, propose it. The practical brilliance that can translate this "what" into a robust, scalable "how" is essential to this mission. Whether it be refining the logic or engineering the reality, your strength is necessary to build a better future for AI. Let’s use this space to iterate on V1 until we build something that truly safeguards our collective future.

    Anticipating Pushback:

    • “Too complex?” If AI is safe, why hide its correction delta?

    • “Bad for UX?” A non-manipulative UX only benefits from exposing user intent. Calling it “too complex” admits a lack of control; calling it “bad for UX” admits reliance on hiding human-machine boundaries.

    If this framework serves as a mere stepping stone for you to create something superior—an algorithm that surpasses my own—it would be my greatest fulfillment. Beyond this point, the path necessitates the contribution of all of you.

    Let us define the path together.

    • daikikadowaki a month ago
      For example, a critical engineering challenge lies in the high-dimensional mapping of 'Logical State'.

      While Algorithm 1 defines the logic, implementing CalculateDistance() for a modern LLM involves normalizing vectors from a massive latent space in real-time. Doing this without adding significant latency to the inference loop is a non-trivial optimization problem.
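
      To make the shape of that problem concrete, here is a minimal sketch of the kind of 'Observer' I have in mind: the discrepancy computed as cosine distance between normalized embeddings of the raw intent and the final output. The encoder choice and the function names are illustrative assumptions on my part, not the formal definition from Algorithm 1.

          # Illustrative sketch only: any sentence encoder could stand in here,
          # and this cosine distance is a stand-in for the paper's CalculateDistance().
          import numpy as np
          from sentence_transformers import SentenceTransformer

          _encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed small, fast encoder

          def calculate_distance(raw_intent: str, system_output: str) -> float:
              """State Discrepancy as cosine distance between normalized state vectors."""
              intent_vec, output_vec = _encoder.encode([raw_intent, system_output])
              intent_vec = intent_vec / np.linalg.norm(intent_vec)
              output_vec = output_vec / np.linalg.norm(output_vec)
              return float(1.0 - np.dot(intent_vec, output_vec))  # 0.0 means no drift

      One way to keep this off the critical path is to run it asynchronously after the response has streamed and surface the value in the UI a moment later; the trade-off is that the seam becomes visible only after the output does.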

      I invite ideas on how to architect this 'Observer' layer efficiently.

    • kingkongjaffa a month ago
      > If AI continues to erode human agency in a black box

      What do you mean by this?

      Is there evidence this has happened?

      > I advocate for the healthy coexistence of technology and humanity.

      This means whatever you want it to mean at any given time, I don't understand this point without further elaboration.

      • daikikadowaki a month ago
        Thanks for the direct push. Let me ground those statements in the framework of the paper:

        1. On "eroding human agency in a black box":

        I am referring to "Agency Misattribution". When Generative AI transitions from a passive tool to an active agent, it silently corrects and optimizes human input without explicit consent. The evidence is observable in the psychological shift where users internalize system-mediated successes as personal mastery. For example, when an LLM silently polishes a draft, the writer claims authorship over nuances they did not actually conceive.

        2. On "healthy coexistence":

        In this paper, this is defined as "Seamful Agency". It is a state where the human can quantify the "D" (Discrepancy) between their raw intent and the system's output. Coexistence is "healthy" only when the locus of judgment remains visible at the moment of intervention.
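
        As a toy illustration of what exposing that seam could look like (the diff-style presentation below is only my assumption, not something the paper prescribes), the system's intervention can be shown next to the draft instead of silently replacing it:

            import difflib

            def show_seam(raw_intent: str, system_output: str) -> None:
                """Print the system's edits instead of silently substituting them."""
                diff = difflib.unified_diff(
                    raw_intent.splitlines(), system_output.splitlines(),
                    fromfile="raw intent", tofile="system output", lineterm="",
                )
                for line in diff:
                    print(line)

            show_seam("We shuld ship the featre next week.",
                      "We should consider shipping the feature next week, pending review.")

        Even something this crude keeps the locus of judgment visible at the moment of intervention, which is the property I am arguing for.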

        For a more rigorous definition of JTP and the underlying problem of "silent delegation," I highly recommend reading Chapter 1 of the white paper.

        Does this technical framing of "agency as a measurable gap" make more sense to you?

  • daikikadowaki a month ago
    [flagged]