255 points by ColinWright 5 days ago | 30 comments
  • zozbot2345 days ago
    I'm pretty sure that the semantics of natural language are a lot more complex than can be accounted for by these seemingly very ad-hoc translations into comparatively straightforward FOL formulas, as are given in this paper. A common approach for the understanding of NL semantics from a strictly formal POV is Montague semantics https://en.wikipedia.org/wiki/Montague_grammar https://plato.stanford.edu/entries/montague-semantics/ - even a cursory look at these references is enough to clarify the level of complexity that's involved. Very loosely speaking, one generally has to work with multiple "modalities" at the same time, each of which, when understood from the POV of ordinary FOL, introduces its own separate notion of abstract "possible worlds" (representing, e.g., an agent's set of beliefs) and ways in which these "worlds" can relate to one another. More complex cases will usually degenerate into some sort of very generic "game semantics" https://en.wikipedia.org/wiki/Game_semantics https://plato.stanford.edu/entries/logic-games/ where any given use of natural language is merely seen as a "game" (in the abstract strategic, game-theoretical sense) with its own set of possibly very ad-hoc 'rules'. The philosopher Ludwig Wittgenstein https://en.wikipedia.org/wiki/Ludwig_Wittgenstein https://plato.stanford.edu/entries/wittgenstein/ gave quite a good description of both of these approaches (from a very naïve approach based on a supposedly straightforward translation to some kind of abstract logic, to a far richer one based on notions of strategies and games) to a "formal" understanding of natural language, throughout his extensive philosophical inquiry.
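
    To make the "possible worlds" point slightly more concrete (my own illustration, not taken from the paper): a belief report like "John believes that Mary is happy" already breaks a flat FOL translation, because FOL cannot embed a proposition under a predicate without either asserting it or flattening it to an opaque term. Montague-style intensional logic instead applies the belief predicate to the intension of the complement, i.e. a function from possible worlds to truth values:

      % naive attempt: not well-formed FOL, and asserting Happy(mary) outright would be wrong
      Believes(john, Happy(mary))
      % Montague-style: believe' applied to the intension (^) of the embedded clause
      believe'(j, ^happy'(m))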

    Which is to say, I'm not sure how this paper's results are generally expected to be all that useful in practice.

    • avodonosov5 days ago
      Your arguments and links are interesting; I hope to study these materials some day.

      But.

      To be useful in practice, the approach does not need to work in all cases of natural language usage. Even if it works only in some limited cases, there may be useful applications.

      The authors evaluate their approach on two datasets. One is LOGIC, consisting of learning examples of logical fallacies. The other is LOGICCLIMATE, consisting of logical fallacies collected from real-world news articles about climate change.

      The datasets are here, if anyone is interested in seeing the type of natural language they currently try to address: https://github.com/causalNLP/logical-fallacy

      I guess this csv contains the LOGICCLIMATE: https://github.com/causalNLP/logical-fallacy/blob/main/data/...

      So a possible practical utility for the approach - spot individual wrong sentences in a long article and highlight them.

      Another real-world example. I propose a solution at work, based on some statistics. And a colleague dismisses it by saying that there is a book "6 Ways to Lie with Statistics". If there was a smart assistant in the room who gently explained his logical fallacy to the colleague, it would save me a lot of effort and make the discussion more productive. I doubt the difficulties you mention apply to this simple case.

      • nickpsecurity4 days ago
        "And a colleague dismisses it by saying that there is a book "6 Ways to Lie with Statistics"."

        Except, that's going in the right direction towards a better argument: empiricism requires your statistics to be peer reviewed for errors or deception before being believed. That takes a skilled individual.

        So, you either think they're very good at statistics or you want them to put faith in your work. Otherwise, they need a smart assistant they trust to review the statistics. Then, they have increased confidence in your solution but it still might be wrong.

        • avodonosov4 days ago
          He was not calling for better statistics; he suggested ignoring statistics.

          It was a simple case, and actually I was not presenting statistics I had collected; I just suggested trying to use some numerical evidence to make a decision.

          On another occasion I mentioned to somebody that it's necessary to choose drugs or medical approaches verified with medical trials and the double-blind method. And they replied that there is a book about how to lie with statistics and continued to consider unverified methods.

          I mean that in real life sometimes very simple fallacies happen.

          Some statistics-based decisions may be wrong => the right decision must avoid statistics.

          These cases could probably be addressed with automated tools in the near future.
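
          As a rough illustration of what such a tool might check (my sketch, not the paper's code): reading the pattern above as "some statistics-based decisions were wrong, therefore all statistics-based decisions are wrong", an off-the-shelf solver such as z3-solver can confirm that the conclusion does not follow once both sides are in FOL.

            # Sketch: check whether a premise entails a conclusion, once both are in FOL.
            # Uses the z3-solver package; the sort and predicate names are illustrative only.
            from z3 import (Solver, DeclareSort, Function, BoolSort, Const,
                            Exists, ForAll, Implies, And, Not, unsat)

            Decision  = DeclareSort('Decision')
            StatBased = Function('StatBased', Decision, BoolSort())  # "decision d is statistics-based"
            Wrong     = Function('Wrong', Decision, BoolSort())      # "decision d turned out wrong"

            d = Const('d', Decision)
            premise    = Exists([d], And(StatBased(d), Wrong(d)))      # some stat-based decisions were wrong
            conclusion = ForAll([d], Implies(StatBased(d), Wrong(d)))  # all stat-based decisions are wrong

            s = Solver()
            s.add(premise, Not(conclusion))  # the argument is valid iff this is unsatisfiable
            if s.check() == unsat:
                print("conclusion follows from the premise")
            else:
                print("non sequitur: the premise does not entail the conclusion")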

          • randomNumber74 days ago
            Idk. When I see something like that in real life my conclusion is that people have different levels of intelligence.

            Also I don't know what or how to teach someone who falls into these pitfalls.

            • GTP4 days ago
              I think it is easy to highlight the mistake by rephrasing the argument as OP did. By rewriting it that way, I think it is easy to see that the fallacy lies in taking some specific observations and saying that those are valid in general, without providing a solid explanation as to why that should be the case. Another way of rephrasing it could be, "I have evidence of some statistics-based decisions that were wrong, therefore all statistics-based decisions are wrong". If a person still doesn't get it, put it as "I have evidence of you being wrong once, therefore you must be always wrong (including in this specific case :D )".
              • randomNumber73 days ago
                But what could I do to help people get better at the __act__ of spotting this, or doing what you describe, themselves?

                Idk, for me it is subconscious, I just "feel" it or know that it is logically faulty.

                Also the rephrasing doesn't always work imo: you could have a logical statement that is totally valid in some contexts and not valid in others. And you also need to think about the validity of the premise and whether it is legitimate to draw the conclusion in natural language.

              • fc417fc8024 days ago
                I believe that is a misunderstanding of what the other party is expressing in this case. It's not "wrong once, so always wrong" but rather "intentional deception in the past, therefore might intentionally deceive again, and I don't know how to verify, therefore I shouldn't trust it".
                • GTP4 days ago
                  Uhm, but then, it would work only if applied to the same entity that was deceptive in the past. Applying this to some other party would still make little to no sense, unless you've got some reason to believe this other party wants to deceive you as well.
                  • fc417fc8023 days ago
                    Unless someone obviously shares common goals with you they are a potential adversary. When faced with a tool that you are confident can be used to deceive you, and a potential adversary who you are confident is aware of this fact, you should then clearly distrust that tool in that context.
          • fc417fc8024 days ago
            I'm not sure that "distrust of thing I don't understand" can really be considered a fallacy. Certainly it sounds like the other party's tone wasn't constructive in this case. It also sounds like they are fairly ignorant.

            Still, the underlying sense that you shouldn't trust people making claims based on things that you don't understand is probably a fairly solid survival strategy in general. Better to miss out than get scammed.

            To put it another way, a call to "trust the science" in the absence of further elaboration is itself an appeal to authority. Despite that, it's not actually wrong - you generally should trust openly published science that has been reproduced by at least one unrelated party. Which serves to illustrate the rather glaring issue with the premise of the linked article, at least for practical everyday use.

            • avodonosov4 days ago
              Distrust would be right.

              The fallacy was that people consider the presence of statistical evidence a negative sign, not realizing it's possible to lie without statistics as well.

              Let's imagine a book "100 ways to harm your health with medicine", and a sick person choosing between magic and medicine: "Aha, the book has proven that medicine is harmful, so of course magic".

              • fc417fc8023 days ago
                Indeed that would be the wrong conclusion to jump to.

                However it isn't how I read the original example. I saw it more as "A is backed by evidence B" rebutted with "I don't trust evidence B because ...". Despite the described tone being poor and the individual obviously horribly ignorant, when assessed from their (apparent) point of view instead of my own that position seems fairly reasonable to me.

                In other words, not so much "magic instead of medicine" as rejecting the claim that medicine is superior to magic while also declining to hold the view that magic is superior to medicine.

      • dullcrisp4 days ago
        Maybe I’m missing something, but how does calling out every time a news article cites a government agency as an "appeal to authority" amount to a list of logical fallacies?
      • fmbb4 days ago
        What were the alternative solutions you discussed?

        Did a worse one get picked?

        Did you already have a solution in place, and you were actually suggesting a change?

    • tgv5 days ago
      I've worked on classical NLP models for quite some time, and this indeed looks way too simple to be of any practical use. If you mention Montague, I'm going to refer you to "Pedro owns a donkey," the poster child sentence for Discourse Representation Theory [1]. That's 1980s work, and for simple sentences it's already complicated beyond what the OP article suggests, and fails on anything remotely complex. I think it goes 2nd order the moment a complement is introduced (I think that ...).
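
      (For readers who haven't seen it, the classic discourse is "Pedro owns a donkey. He beats it." A rough sketch of why the naive FOL translation breaks down - my illustration, not from the article:)

        % sentence 1 on its own is fine
        \exists y\, (\mathrm{Donkey}(y) \land \mathrm{Owns}(pedro, y))
        % sentence 2: "it" should pick up y, but y is no longer in scope
        \mathrm{Beats}(pedro, y)
        % DRT introduces discourse referents precisely so the pronoun can be resolved across sentences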

      And even if you can translate a sentence into a predicate, you haven't begun understanding what lies behind all those predicates. E.g., "Zelensky is ready to work under Trump's 'strong leadership' after 'regrettable' showdown." What good does it do to have that in FOL?

      [1] https://plato.stanford.edu/archIves/sum2011/entries/discours...

      • zozbot2345 days ago
        It looks like classic models of NLP semantics mostly punt on the "logical" point of view precisely due to these difficulties, and focus mostly on the more surface level problem of describing how each word of the source text correlates with a deeper description of the "meaning" of the text as a whole. So it is simply assumed that the meaning of the text as a whole must be derived compositionally from the meaning of each part (usually described by a somewhat ad-hoc "frame" structure), but exactly what that entails in a "logical" sense is left unspecified. UMR (Uniform Meaning Representation) seems to be a typical example of such a system https://github.com/umr4nlp/umr-guidelines/blob/master/guidel... The expected use case seems to be something like building a common intermediate language for an automated translation system; individual meaning elements can then be "mapped" in a useful way, even across different languages, but there's not much interest apparently in "inferring" further knowledge from what one already has, or even on verifying that any given inference is valid (as proposed by OP).
      • WaxProlix5 days ago
        Even beyond that, you have a ton of pragmatics post-Grice to deal with. Computing implicatures is complex and requires a lot of knowledge about context etc. The truth value of a statement and the 'truth value' of a speech act are pretty different things - not sure it's really feasible to convert between them.
    • da_chicken4 days ago
      I don't think that's the reason it won't be very useful. I think there are two reasons it won't be very useful:

      1. Most natural language arguments are not sound because the argument is not deductive logic. Most natural language arguments are persuasive, not formal reasoning.

      2. Formal logic is a method of preserving truth. It doesn't really create truth. That makes it a lot less useful. Critically, while a deductively valid argument has a true conclusion if all the premises are true, an invalid argument can still have a true conclusion. Formal logic, then, is very narrow.

      This is why finding a logical fallacy in an argument is often not convincing by itself. It doesn't say "your logic is flawed therefore I am right". It says "your logic is flawed and therefore should be revised and improved."

      • bwfan1234 days ago
        > Most natural language arguments are not sound because the argument is not deductive logic. Most natural language arguments are persuasive, not formal reasoning

        Related note: there is some evidence that "Language is primarily a tool for communication rather than thought" [1], i.e., that language is neither necessary nor sufficient for the so-called psychic thinking process. It serves as a communication mechanism. Meanwhile, there is a hypothesis that the psychic thinking process lies beyond computation as we know it [2] in the form of Turing machines etc.

        [1] https://www.nature.com/articles/s41586-024-07522-w

        [2] https://www.amazon.com/Emperors-New-Mind-Concerning-Computer...

    • thomastjeffery5 days ago
      Text that is written in Natural Language is open to interpretation. There are many formal statements that can be said to interpret a given Natural Language text. Can we determine which formal representation is correct? What about most useful?

      The obvious answer to these questions is, "no". There is no such thing as a conclusive interpretation. If there was, then Natural Language wouldn't be ambiguous in the first place!

      So we're all doomed to constantly misinterpret each other forever, right? No? We humans use Natural Language all the time, and usually figure out what the other person actually means!? How do we do it? Are we all just really good at guessing?

      No, we have something better: context.

      Context exists both in and around Natural Language text. Context determines which formal meaning is used to interpret the text. If we don't know which context is appropriate, there may be clues in the text itself that help us construct one that is useful or correct.

      ---

      I've been trying to work out an approach to language processing that interprets text into logical formalisms (arbitrary meaning). I call them "Stories". A Story is an arbitrary interpretation of text. A Story is never conclusive: instead it is used as arbitrary context to interpret the next text. I call this process "Backstory".

      We could even do the process backwards, and "write" an arbitrary formalism (meaning) in the same language/style/voice as a previously interpreted Story.

      Given enough example instances of Story, we should be able to read and write to each other through explicitly shared context. I call this process "Empathizing". I call my idea the Story Empathizer.

      I'm definitely out of my depth when it comes to the details, though...

      • pylotlight4 days ago
        I find humans have variation in ability for this as well though. Like some people need waaay more context, and need everything spelled out in granular detail to understand a topic, vs others who can more easily adapt, pick up clues and other relevant context information.
        • thomastjeffery4 days ago
          That's definitely true. I also suspect that holding too much potential context can be counterproductive, because then you have too many options to choose from. This happens a lot with jokes: there are a lot of unique backstories offered by different pop culture references, and pop culture is quickly diversifying to an overwhelming size. There is a lot of entropy in human expression.

          The good news is that context can sometimes merge stories together. When we do explicitly find shared context, we tend to leverage that knowledge.

          My idea is about offloading as much of this process as possible to a computer. We would still need to choose backstories, but the rest could be done in plain view, leveraging the incredible speed and memory size computers have.

    • andrewdb4 days ago
      One way to slightly mitigate the difficulties of nuance in language when translating to formal arguments is to attempt to always steelman the argument. Afford it all the guarded language and nuance you can, and then formalize it in premises and conclusion.

      This would also make interaction much more civil, given so much proclivity to do the opposite (straw man).

      It's not a perfect approach, but it helps. LLMs are quite decent at steelmanning as well, because they can easily pivot language to caveat and decorate with nuance.

    • lapcat5 days ago
      See also for example V.H. Dudman on the interpretation of "If" sentences: https://www.scribd.com/document/478756656/Dudman-1984-Condit...
    • cs7025 days ago
      It could be useful for domains in which all or at least many problems are solvable (i.e., they can be stated and satisfied) with first-order logic.

      It could also be useful as a lower-level component of general-purpose systems that internally rely on chains of thought computed by sub-component LLMs.

      • xhevahir5 days ago
        It wouldn't be useful if, as the parent comment is saying, it won't do a decent job of translating natural language.
        • cs7025 days ago
          Ah, got it. Thanks!
    • a-dub5 days ago
      would be interesting if they had adversarial/null llms attempting the noisy nlp reductions as well. then one could make arguments about the sturdiness of the noisy bit.
  • ColinWright5 days ago
    It was Gottfried Leibniz who envisaged the end of philosophic disputes, replacing argument with calculation.

    "if controversies were to arise, there would be no more need of disputation between two philosophers than between two calculators. For it would suffice for them to take their pencils in their hands and to sit down at the abacus, and say to each other (and if they so wish also to a friend called to help): Let us calculate."

    • dmos625 days ago
      I wonder if anyone else thought that that's how most of the world worked when they were a kid. I thought that most people would reason through everything, and if they couldn't, they would take it home as sort of homework and finish it there.
    • gnatman5 days ago
      “That’s a nice little idea you have there. Be a shame if it turned out to be incomplete…”

      - Kurt Gödel

      • strogonoff4 days ago
        I wonder if Gödel’s incompleteness can somehow map to the map vs. territory distinction.

        The impossibility of exhaustively and precisely putting humanity into words, like the impossibility of having a provably correct and complete model of reality, is like the impossibility of having a fully precise map.

        The biggest danger is elevating the newly created map to the position of your new, much more simplistic, territory that supersedes the original one, with all of its quirks and fidelity.

    • franktankbank5 days ago
      Although, demanding even a shred of self-consistency goes a long way in short circuiting bad argumentation.
      • dmos625 days ago
        I have a pet theory that most inefficiency is about self-consistency (or lack thereof), whether that's in human-human or human-machine communications (e.g. program code).
    • soulofmischief5 days ago
      If only. Ethics are reached via consensus. Two calculators can indeed produce different results if the axioms supporting them differ.

      And good luck calculating some of these axioms, such as "Why is it my duty not to kill someone?" You could argue, "Well in the end, a society enabling such behavior at scale would be no society at all," to which one might reply, "I have no interest in letting others do as I do.", and you can't calculate away violent sociopaths. The rest of us derive our principles from functioning mammalian emotional circuits, but at some level we rest our case on subjective axioms.

      • kennysoona5 days ago
        Those axioms can still be evaluated, quantified and compared, and eventually calculated.
        • yifanl5 days ago
          Based on what criteria? A set of meta-axioms?
          • harperlee4 days ago
            Based on whether they take you to places you don’t want to end up at, which is an incomplete measure but quite a pragmatic one. E.g. if your set of axioms ends at “erase half of the population by force”, then perhaps revisit your axioms.
            • mike_hearn4 days ago
              That's what soulofmischief is saying. If your reasoning ends somewhere you don't like emotionally, then your axioms are bad, i.e. your actual axioms are emotional. Which is fine!
          • kennysoona5 days ago
            No, no meta axioms. Just by quantifying whatever we can as much as possible to be as objective as possible.
            • soulofmischief5 days ago
              "as possible" means an incomplete system that still relies on assumed axioms.
              • kennysoona5 days ago
                Maybe. I don't think so though. I think everything can be quantified and qualified.
                • Unless you can provide a proof, it remains conjecture.
            • numpad04 days ago
              I think what this branch of comments is trying to do is to reinvent `ideology` as a word.
              • kennysoona4 days ago
                I have no interest in ideology and don't see even the concept as relevant to any point I've made.
                • numpad04 days ago
                  I'm not trying to call anyone out, just thought that what is being discussed are concepts laid out in a doughnut shape around that word.
                  • kennysoona4 days ago
                    That's fair enough. I guess you could say my position is perhaps an ideology, but I do believe it can be defended as objectively as possible.
            • Mr-Frog5 days ago
              My gut still tells me this relies on a human-defined optimization metric.
      • glenstein5 days ago
        >Ethics are reached via consensus

        This is probably too big a topic for a whole side-branch on this, but modern meta-ethics teaches a range of possible approaches. Some notions of ethics are relativist, and are about the fact that moral norms are produced by some given society. But under some constructions that's just a procedural truism rather than a position on the content or the nature of morality itself.

        Then you have moral realism, a perfectly respected position, which can encompass things like utilitarianism and other isms. And this might seem like a silly derail, and I'm trying not to, but this is important at the end of the day, because "ethics is reached via consensus" can mean a lot of things that cash out with completely different practical implications. It's the difference between, for instance, deciding we need to be consensus oriented and vote, or be research oriented and concerned with deepening our scientific understanding of things like insect consciousness and whether the physical effects of sleep deprivation fall under the traditional definition of torture.

        >And good luck calculating some of these axioms

        Not wrong, they can easily get computationally intractable. So I think one has to account to some degree for uncertainty. Here again, I worry that the intended upshot is supposed to be that we simply give up or treat the project of moral understanding like a cosmically impossible non-starter. I like to think there's a middle ground between where we presently stand and the hypothetical future where we've got perfect knowledge.

      • lo_zamoyski5 days ago
        > Ethics are reached via consensus.

        Absolutely not! This is cultural relativism, and frankly, it would be circular: how exactly are we converging on a consensus if not from some preexisting sense of the good?

        The only defensible objective basis for the good is the nature of a thing and what actualizes the potentials determined by that nature, thus actualizing the thing as the kind of thing it is. Morality, only possible for things that have the capacity to comprehend their options for action (intellect) and choose freely among them (will) on the basis of that understanding, therefore concerns the question of whether an act performed by a thing furthers or frustrates the actualization of that thing.

        By cutting off my arm for no proportionate reason, I do an immoral thing, because it is my nature to have that arm, but if I have gangrene in that arm that threatens my life, then removing the gangrene with the undesirable side effect of losing an arm is morally justifiable, even if the loss of the arm is not good per se.

        Murdering a human being is gravely immoral, because it directly contradicts my nature as a social human being in a very profound and profoundly self-destructive way. However, killing a would-be murderer in defense of my life or that of another is a morally very good deed; it is in accord with my social nature, and indeed can be said to actualize it more fully in some respect.

        > The rest of us derive our principles from functioning mammalian emotional circuits

        Please refrain from making such silly pseudoscientific and pseudophilosophical statements.

        That being said, calculation is insufficient, because such calculation is formal: it explicitly excludes the conceptual content of propositions. But concepts are the material "carriers" of comprehension of what things are. We can also analyze concepts. Now, we can say that we can calculate a formal deduction according to formal rules, but we cannot calculate a concept or its analytical products. This is the product of abstraction from concreta. Formal systems abstract from these. They are blind to conceptual content, on purpose. And having used a formalism to derive a conclusion, we must interpret the result, that is, we must reassign concepts to symbols that stand in for them. So formal systems are useful tools, but they are tools.

        • Joker_vD4 days ago
          > how exactly are we converging on a consensus if not from some preexisting sense of the good?

          Well, there is this mechanism of imprinting our current moral settings (both declared and actually demonstrated) onto mostly blank-slate minds of the children, so that the next generation has mostly the same morals as the current one but with minor differences: so the ethics can "evolve" in time but that doesn't mean there is any end-state "consensus" they're trying to reach.

        • numpad04 days ago
          I've never thought that cultural relativism is supposed to be bad/wrong - I thought that kind of thinking is superstitious, a bit racist, and such an undesirable basis for many kinds of hostilities in the world that it shouldn't be a formal majority point of view.

          One cannot realistically construct ethics procedurally and reproducibly from a blank slate, so holding the false belief that one can, or does, have such a set of "scientific" ethical standards only justifies genociding the opposition.

          Ethics is just a half-broken, loose set of heuristics developed and optimized evolutionarily. It probably can't even be properly quantized into text. It's nothing that stands up to scientific computational scrutiny. And there we step into cultural relativism as a principle; there are lots of behaviors we humans show as "ethical" acts that sometimes seem random and not universal, that also seem to work where they are done, and maybe not work where they are not, so you can't say which one is the right one.

        • fc417fc8024 days ago
          > > The rest of us derive our principles from functioning mammalian emotional circuits

          > Please refrain from making such silly pseudoscientific and pseudophilosophical statements.

          Yet you use terms such as "nature". How is that not silly and pseudoscientific?

          You are ascribing traits to things in a fundamentally immeasurable manner. At least in GP's case we are left with a root that we can quantify.

        • kazinator5 days ago
          "reached via" is not the same thing as "derived from".
    • bloomingkales5 days ago
      Well, we can have AI do what we do but it will never be tied to an emotion. You can feel a lot just adding 2+2 (maybe someone held a gun to your head once). What does philosophy say about philosophy without emotion? What use is it to us without our human context? The philosophy of a tiger is not relevant to me mostly because I don't feel most of the things a tiger feels.
  • giardini5 days ago
    Prolog has always had DCGs (Definite Clause Grammars) that allow you to write rules that resemble natural language grammar structures to parse and generate English sentences:

    https://www.metalevel.at/prolog/dcg

  • tiberius_p5 days ago
    First-order logic can only detect formal logic fallacies. Informal logic fallacies like ad hominem, strawman, red herring, etc. are cast in language. They can't be defined and resolved mathematically. The model should be fine-tuned with examples of these informal fallacies and counter-arguments to them. Even so it won't be able to detect them in all cases, but it will at least have some knowledge about them and how to reply to them. This knowledge could be further refined with in-context learning and other prompt engineering strategies.
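
    A rough sketch of the in-context route (my sketch, not the paper's; it assumes the openai Python SDK, and the model name and few-shot examples are placeholders):

      # Few-shot, in-context labelling of informal fallacies via a chat model.
      # Assumes the openai package and OPENAI_API_KEY; the model name is a placeholder.
      from openai import OpenAI

      FEW_SHOT = """Label the informal fallacy in each statement, or answer 'none'.

      Statement: "Don't listen to his climate argument, he failed high school physics."
      Fallacy: ad hominem

      Statement: "So you're saying we should shut down every power plant tomorrow?"
      Fallacy: strawman

      Statement: "CO2 levels have risen about 50% since pre-industrial times."
      Fallacy: none
      """

      def classify_fallacy(statement: str) -> str:
          client = OpenAI()
          resp = client.chat.completions.create(
              model="gpt-4o-mini",  # placeholder model name
              messages=[
                  {"role": "system", "content": "You label informal logical fallacies."},
                  {"role": "user", "content": f'{FEW_SHOT}\nStatement: "{statement}"\nFallacy:'},
              ],
          )
          return resp.choices[0].message.content.strip()

      print(classify_fallacy("There is a book about lying with statistics, so ignore the statistics."))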
    • jfengel5 days ago
      I would expect a true logical fallacy detector to take any natural text and spit out "unsupported assumption, unsupported assumption" over and over and over.
    • grandempire5 days ago
      > ad hominem, strawman, red herring

      These aren’t logically incorrect, people who study rhetoric have just identified these as common patterns of poor persuasion.

      • Quarondeau4 days ago
        Couldn't they be classified as non-sequiturs, given that the conclusion doesn't follow from the premises?
        • grandempire4 days ago
          Take ad hominem. It’s true that there is no logical connection between who is saying something and whether it’s true.

          But in practice, that’s one of the most relevant factors of whether you should be listening to someone. Does this person have a solid track record? Do they have your interest in mind?

          So it is relevant information. It’s just that, “well once this guy kicked a dog” is usually done in bad faith.

          So I wouldn’t consider it a non sequitur, except in its most crude forms.

          • taeric4 days ago
            In this vein, one of the more insidious traps of these fallacies is that they do not lead to a conclusion on their own.

            Ad hominem continues to be a good example. If you know that someone is a liar, you don't know that everything they say is false. You just know that they lie and are likely saying something to affect listeners. Could be based on some truth. Could not.

  • languagehacker5 days ago
    It sounds like the data set they use is designed to teach what logical fallacies are, so it makes sense that it would do fine with it. I doubt this would do well against real-world language with things like structural ambiguity, anaphora resolution, and dubious intent.
  • EigenLord5 days ago
    This is very cool and definitely a step in the right direction; however, the question remains where exactly this formalizing module should be placed in the stack. As an external API, it's clear that the model is not "thinking" in these logical terms; it just provides a translation step. I'd argue it would be better placed during inference test-time compute (as seen in these so-called reasoning models). Better yet, this formalizing step would happen at a lower level entirely, internal to the model, but that would probably require totally new architectures.
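
    As a rough sketch of where that external translation step could sit at test time (the helper names here are hypothetical, not the paper's API):

      # Hypothetical test-time loop: draft an answer, formalize it, and keep it only if
      # the formalized reasoning checks out. llm, translate_to_fol and is_valid stand in
      # for a chat model, the NL->FOL translator and a solver respectively.
      def generate_with_logic_check(prompt, llm, translate_to_fol, is_valid, max_tries=3):
          draft = ""
          for _ in range(max_tries):
              draft = llm(prompt)                 # ordinary free-form generation
              formulas = translate_to_fol(draft)  # premises and conclusion as FOL
              if is_valid(formulas):              # e.g. the premises entail the conclusion
                  return draft
              prompt += "\nYour previous answer contained a logical gap; try again."
          return draft  # fall back to the last draft if none verified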
  • rahimnathwani5 days ago
    The paper links the code repo: https://github.com/lovishchopra/NL2FOL

    But I don't see a pretrained model in there, so I'm not sure what to pass as `your_nli_model_name`:

      python3 src/nl_to_fol.py --model_name <your_model_name> --nli_model_name <your_nli_model_name> --run_name <run_name> --dataset --length
  • CJefferson5 days ago
    Turning English into logic basically requires understanding the language and context.

    If you are told “we will go to the zoo or swimming pool tomorrow, if it is windy or rainy”, most readers would know the first "or" is exclusive (we aren’t going to both), while the second is inclusive (we will go if it is windy, rainy, or both).
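
    Roughly, with Z = zoo, S = swimming pool, W = windy, R = rainy, the intended reading is (my sketch):

      (W \lor R) \rightarrow (Z \oplus S)   % inclusive "or" in the condition, exclusive "or" in the destination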

    This is annoying when teaching logic, from experience.

    • someothherguyy4 days ago
      No it doesn't. It just requires producing many possible interpretations and resolving more probable ones.
      • procaryote4 days ago
        The most probable logical interpretation of a phrase, not looking at context, might not be correct.

        Even something as simple as sarcasm breaks this idea, and you can have full books of metaphor that only make sense if you understand the cultural context in which they were written.

  • anentropic4 days ago
    Apart from the fact it's focused on logical fallacies, this is reminiscent of AWS Bedrock Automated Reasoning, which also appears to involve some kind of LLM-guided translation of natural language into logical rules ... which are then used to validate the output of the LLM application

    https://aws.amazon.com/blogs/aws/prevent-factual-errors-from...

  • FloorEgg5 days ago
    Not familiar with FOL as a formalism, and would love to see this in action. I feel like it's a big part of the solution to propaganda.

    The other part seems to be values obfuscation, and I wonder if this would help with that too.

    If Joe says that nails are bad, it can mean very different things if Joe builds houses for a living and prefers screws, or if Joe is anti development and thinks everyone should live in mud huts.

    Propaganda will often cast a whole narrative that can be logically consistent, but entirely misrepresents a person or people's values (their motivations and the patterns that explain their actions), and there will be logical fallacies at the boundaries of the narrative.

    We need systems that can detect logical fallacies, as well as value system inconsistencies.

    • andrewdb5 days ago
      A prompt that I like to use for this:

      ---

      Intake the following block of text and then formulate it as a steelmanned deductive argument. Use the format of premises and conclusion. After the argument, list possible fallacies in the argument. DO NOT fact check - simply analyze the logic. do not search.

      After the fallacies list, show the following:

      1. Evaluate Argument Strength: Assess the strength of each premise and the overall argument.

      2. Provide Counterarguments: Suggest possible counterarguments to the premises and conclusion.

      3. Highlight Assumptions: Identify any underlying assumptions that need examination.

      4. Suggest Improvements: Recommend ways to strengthen the argument's logical structure.

      5. Test with Scenarios: Apply the argument to various scenarios to see how it holds up.

      6. Analyze Relevance: Check the relevance and connection between each premise and the conclusion.

      Format the argument in the following manner:

      Premise N: Premise N Text

      ETC

      Conclusion:

      Conclusion text

      [The block of text to evaluate]

      • FloorEgg5 days ago
        Nice prompt, I've been doing something similar but not this robust. I'll give this a spin.

        Thanks again!

    • janalsncm5 days ago
      Maybe. One problem we have now is that fact checking is a lot more expensive than bullshitting. If we had a program that could bring things closer to parity it would be nice.

      But also, a lot of propaganda isn’t false per se but simply blown out of proportion, or underproportioned in cases of inconvenient truths. The truth is a distribution of events, and editors continuously choose how to skew that distribution.

      (One of my very interesting possessions is an old Chinese state-owned newspaper. As far as I could tell, their main tool wasn’t lying, but simply omission.)

      For example, if you wanted to push a narrative that e.g. pit bulls are the most dangerous problem in America, you would just post a nonstop stream of pit bull attack videos. It taps into cognitive biases people have which aren’t propositional logic statements.

      More broadly, the world is stochastic, at least in the way we experience it. So our brains have to make sense of that, which is an opportunity for narratives to creep in.

      • FloorEgg5 days ago
        So maybe the solution is to have these FOL capabilities close to the user and far from the information source.

        FOL-based values analysis of information streams, manifesting as a user interface for configuring the algorithms that decide what information is surfaced to you in media.

        This is why I said this sort of thing might be part of a solution. The whole solution would involve other significant parts.

      • mirekrusin5 days ago
        You can require that factual statements include a source reference.

        A statement that "pit bulls are the most dangerous problem in America" requires source data (i.e. causes of death or serious injury in 2024 in the USA).

        Publications can be signed by authorities (i.e. a university or government body).

        IMHO sooner or later we will (have to) end up with a system like that.

        Every piece of information will be signed, and a level of trust will be automatically established based on your preferences for whom you trust.

        • janalsncm5 days ago
          Such a publication would not explicitly come out and say “pit bulls are the most dangerous problem in America”. That’s something that can be easily falsified.

          They would say something like “learn the truth about pit bulls” and then feed you an endless barrage of attack footage and anecdotes and emotionally charged information.

          The purpose is to shape your priors. If all you see is pit bulls attacking people, your subconscious will rate them more risky. You may not even be able to verbalize why you changed your opinion.

          • mirekrusin4 days ago
            People say that in the future all information will not be directly ingested by people – instead everybody will have a "filter", similar to how we use spam filters, but it'll rewrite information (removing misinformation, adjusting bias, adding references, summarizing and/or expanding <<probably more rarely>>, etc).

            I believe this future (all information being like this) is not far off, and it already has a decent usage share, judging from the decline in direct traffic on some well-known information source websites.

            Perplexity, phind (as well as upstream chat interfaces now) already support internet searching (exploring?), which does this.

            When reading (news and other) articles I find myself more and more often reading them through LLMs to perform the above steps. If somebody has never tried it, it's really worth it, especially for politically biased news articles.

            I believe this shift in information consumption is happening more and more for everybody.

            Everything will become indirect, likely with multiple layers (i.e. an extra layer at the OS level is likely – this is frankly perfect for use cases like protecting minors: it would be great if you could safely give a laptop to your kid knowing that there is an AI-based content filter you've set up for their age group).

      • drdeca5 days ago
        You mention the world being (at least subjectively) stochastic. This brings to mind the idea that a model of probability, rather than just logic, might be more beneficial?

        The example you gave of focusing excessively on some topic in order to make it seem like a bigger deal…

        hm, is there a way we could formalize such things in a way like how formal fallacies are formalized?

        It seems more difficult than classifying common formal fallacies.

        But, might it be possible?

        • janalsncm5 days ago
          The problem is that “news” is not a random sampling of all events. It is biased by the very fact that someone has to decide the event is notable.

          And even if you were to witness a random sampling of all events via some kind of clockwork orange mechanism, your brain has saliency biases as well.

          You might find the wiki page on cognitive bias interesting https://en.m.wikipedia.org/wiki/Cognitive_bias

    • heyitsguay5 days ago
      Humans aren't rational actors who get tricked into embracing propaganda by subtle logical fallacies. This will be of no more help than fact checking.

      It's a neat project on its own, tbc, I just have very low expectations of broader impact.

      • FloorEgg5 days ago
        I disagree with your first point. People are far more rational than you are making them out to be, it's just that they are rational within their own value system, not yours.

        Also today's propaganda is capable of adapting itself to each audience member's value system to make it more palatable, and then gradually nudge the audience towards the desired narrative/beliefs/values. The systems that distribute the propaganda are already analyzing people's values and using that information to manipulate people. I think that information asymmetry is part of the problem. I could be wrong, but I think flipping that dynamic around so the public can see the true values of the subjects of propaganda may help neutralize a lot of propaganda.

        As far as what impact this specific project will have, I have no idea. You may be right. I'm curious about its limitations and how it can be applied.

        • kubb5 days ago
          I thought so too, but recently so many people dropped or adapted their core beliefs to be able to support and defend people in power that they really love that it made me change my mind. Now I think that value systems are malleable and are formed by whatever makes us feel good. And the logical consistency on top is very optional.
          • FloorEgg5 days ago
              Or maybe these people you speak of assume that the people in power have values aligned with their own, and if there was an unbiased system that highlights value discrepancies using formal logic, they might not "love" those people as much anymore.

            What I assume you might be missing is that you are looking at the world through a different lens than these other people. Both you and they are consuming propaganda and can't detect it as propaganda because it aligns with your values. However it subtly nudges your values in a direction over time.

            I agree that people's values and core beliefs are malleable, but in the same way a tree trunk is. It may seem like these people have changed a lot and you haven't, but I think it's more likely that you've changed too, and that they've changed less than you think.

            No one is immune to propaganda, which is why anything that can help disarm it interests me.

            • kubb5 days ago
              You touched on many points, but one thing to consider: people seek out information that confirms their worldview and actively protect themselves against anything that can harm their feelings for their idols.

              John Doe isn’t trained in logic and can adjust any of his premises if it means he can continue to admire his favorite celebrity. It’s a combination of flawed reasoning and premise flexibility.

              Not to mention, any fact can be endlessly challenged and questioned even if it’s agreed upon and largely incontestable.

      • naasking5 days ago
        Humans are not fully rational, but they're more rational than many assume. For instance, many thought the illusory truth effect showed that people are biased towards believing things they hear many times over, which is great for propagandists, but it turns out this is only true when they are in a "high quality information" environment. This is quite rational! They should update towards believing repeated statements when the environment they're in has shown itself to be reliable. When the environment they're in has shown itself to be unreliable, the illusory truth effect basically disappears.

        [1] https://x.com/ROrchinik/status/1885820697160859951

        • kennysoona5 days ago
          How does that explain conservatives doubling down on whatever they hear even if it's obviously false? I guess because they wrongly consider some "low quality information" environments "high quality information" environments?
          • naasking5 days ago
            Not everything can be reduced to this one cognitive phenomenon. The behaviour you describe stems from: confirmation bias, and the backfire effect/identity-protective cognition. Also this isn't exclusive to conservatives:

            Science Denial Across the Political Divide: Liberals and Conservatives Are Similarly Motivated to Deny Attitude-Inconsistent Science, https://journals.sagepub.com/doi/abs/10.1177/194855061773150...

            • kennysoona5 days ago
              > Also this isn't exclusive to conservatives:

                Your link aside, I think the obvious evidence is that that behavior is significantly more common in conservatives. Literally the most basic of facts get denied in bulk. I don't understand how you could make any argument that any other major political affiliation engages in the same behavior to a comparable extent.

              • naasking5 days ago
                > You link aside, I think the obvious evidence is that that behavior is significantly more common in conservatives. Literally the most basic of facts get denied in bulk.

                This is modulated by who is currently in power. Conservatives were worse when they lost and Biden was in power. Democrats are ramping up the crazy now that they're the underdogs.

                > I don't understand how you could make any argument that any other major political affiliation engages in the same behavior to a comparable extent.

                Go check out X and Bluesky and how many people are denying Trump was legitimately elected, and how they are convinced Musk tampered with the voting machines.

                As for denying basic facts, there's a whole host of basic scientific facts that people who lean left deny wholesale, eg. heritability of behaviours, personality and other characteristics, differences between groups, denying certain features of sex and the sexes, etc.

                I won't claim that the problem is equal on both sides, for many reasons I won't belabour here, but it's not nearly as wide a margin as you're implying. Part of the reason it seems so one-sided to you is my-side bias + the biased coverage the other side gets.

                • kennysoona5 days ago
                  > This is modulated by who is currently in power. Conservatives were worse when they lost and Biden was in power. Democrats are ramping up the crazy now that they're the underdogs.

                  That isn't remotely true. Conservatives have been consistently in the lead, and there are studies showing how much more prone to believing misinformation they are.

                  > Go check out X and Bluesky and how many people are denying Trump was legitimately elected, and how they are convinced Musk tampered with the voting machines.

                  There are at least reasoned arguments for that. That isn't the same thing as rejecting using masks during a pandemic.

                  > it's not nearly as wide a margin as you're implying.

                  It really is, but we clearly disagree.

                  > Part of the reason it seems so one-sided to you is my-side bias + the biased coverage the other side gets.

                  You shouldn't make assumptions about how or where I get my news. I don't think coverage bias applies at all in influencing my conclusion based on how I get my news.

                  • naasking4 days ago
                    > Conservatives have been consistently in the lead, and there are studies showing how much more prone to believing misinformation they are.

                    No, that's misleading. Conservatives have also been consistently in the lead on "authoritarianism" to the point that it was considered a purely conservative phenomenon, until someone actually thought to ask questions like "what would left wing authoritarianism look like?" and suddenly they found it everywhere.

                    You seem not to realize how unreliable the data is on these questions. Not only is the replication rate of psychology and sociology ~35%, but the demographics of those fields yields a clear bias on exactly these questions. You simply cannot draw such sweeping conclusions from the unreliable data we have.

                    When conspiracy and biased thinking are tested directly, as with the study I linked, there is no difference in how the biases impact their thinking. Both sides are extra harsh on their enemies, are overly forgiving of their allies, etc. Confirmation bias and motivated reasoning all around.

                    > There's at least reasoned arguments for that.

                    Do you think that there were reasoned arguments for Trump having won in 2020?

                    > That isn't the same thing as rejecting useing masks during a pandemic.

                    They could cite reasons for that too, you just don't believe they are valid reasons. It's the same confirmation bias in all cases though.

      • aeturnum5 days ago
        I think you're envisioning this in a pessimistic way.

        I totally agree that the end conclusion "this statement is fallacious" is pretty useless. But I assume that a working process would also yield the chain of judgements (A is right, B is right, C is wrong, etc). I think that would be VERY useful.

        People who become captured by propaganda and lies generally are not sold on 100% of the propaganda. There are certain elements they care more about and others they can ignore. A way to deprogram people through conversation is to just ask them to explain things about their views and ask them to reconcile them with reality. The reconciliation is painful for them and that pain keeps people "in" irrational beliefs - but it's also how people find their way out. Once they no longer associate themselves with the conspiracy, they can discard beliefs associated with it...provided they can think through them.

        I think being able to automatically decompose a fact check into the elements of what "is true" and "is false" in a statement would be HUGE. An essential tool in helping people escape from information swamps.

        • RGamma5 days ago
          I vaguely remember a post I read on reddit [1] around the beginning of COVID by a nurse who dealt with an anti-vax patient. It went along the lines of "Big pharma wants to poison me", "Maybe you're being played and Chinese propaganda wants you to believe that to hurt the US". Apparently induced quite a lot of dissonance.

          Fighting fire with fire.

          [1] Impossible to find of course. And with all the LARPing going on on there, take this with two grains of salt. Given all the crazy shit going on in the US, I find it totally believable though.

    • jfengel5 days ago
      You know First Order Logic. It's just ordinary logic; it's the default thing people think of when they say "logic".

      But it's also not very useful for human reasoning. It's good for math and logic puzzles and bad at anything else. It's bad at time, at belief, at negation. None of those things act like you expect them to.

    • naasking5 days ago
      This won't "solve" propaganda or misinfo IMO. Checking logical consistency and eliminating fallacies still wouldn't address the selective presentation or omission of facts, for instance, and the notion that it could avoid misrepresenting a person or their values assumes that someone has already accurately and fully captured a detailed description of a person's values. But that's the whole problem!

      This is just the formal specification problem all over again. Verifying software against a spec is good and useful, but verification doesn't tell you whether the spec itself correctly captured the desired objective, it can only tell you whether the spec is logically consistent and that you implemented it faithfully.

    • drdeca5 days ago
      I don’t think it would take you long to learn FOL, and I think it is a good formalism to have some familiarity with.

      It’s pretty much the default modern formulation of general-purpose formal logic.

    • RGamma5 days ago
      It would work as well as the internet bringing us more enlightenment. Besides, points of contention tend to form around ethics, whose axioms are unprovable if non-cognitivism is true (and we have no reason to believe it isn't).
    • thomastjeffery5 days ago
      The problem isn't values obfuscation. The problem is that many people, especially conservatives, do not care about values. Instead, they care about virtues.

      People who approach politics from a virtue ethics perspective are vulnerable to propaganda because logic and value have no bearing whatsoever on their decision to accept or reject a narrative.

      You can't think critically for someone else. They must do it on their own.

      • drdeca5 days ago
        How are you differentiating between caring about values and caring about virtues?
        • thomastjeffery4 days ago
          A virtue exists at the beginning of a narrative. A value is a judgment of the narrative after the fact.

          One virtue common in conservative politics is competition. A healthy instance of capitalism is expected to benefit all participants by virtue of competitive markets. The value of our current instance of capitalism is that very large corporations make a lot of cool tech and sell it at low prices.

          But what about homelessness? Isn't that a real tangible negative value? Yes. What should we do about it? Well, a conservative will probably tell you that we should help homeless people by making housing (and homeless people) more competitive.

          But that's clearly not working! The system does not provide a value that we very seriously need! These arguments don't matter to conservatives, because to them, it's all about the virtues.

          • FloorEgg4 days ago
            You and I are using completely different definitions of "values".

            The definition my comment depended on was one where values act as a filter for actions (or patterns of actions).

            Drug addicts only value short-term highs (next fix). Someone else may value being a musician, being reliable, or being honest. In 2018 maybe someone would have bought a Tesla because they value being seen as progressive and value experiencing modern technology. Notice that all my examples start with a verb, which can often manifest as a way of being.

              I didn't bring up virtues, but my understanding of virtues is that they are values deemed by at least some to be objectively "good", such as the cardinal virtues. Whereas values can be either good or bad, such as sadists who value watching others suffer.

            • thomastjeffery3 days ago
              You're missing the distinction.

              A value is something that you value after evaluating it.

              A virtue is presumed to be good. If it were presumed to be bad, it would be a vice. People commit to virtuous behaviors because they expect valuable consequences.

              For example, someone who considers honesty a virtue might implement that by choosing to tell the whole truth; or they might implement it by choosing not to tell lies; or even by punishing others who they believe to be dishonest. It is assumed that there is no need to evaluate their behavior, because it was guided by virtue.

                Propaganda purports to be virtuous. This is important, because a target audience who relies on virtue ethics will not evaluate the narrative.

              For example, when conservatives in the US argue against single-payer healthcare, they do not evaluate its merits against the merits of the current insurance system. Instead, they declare its foundational vice: "socialism". Opposite of the ultimate conservative virtue: "capitalism".

              It doesn't matter how incoherent this argument is: it isn't an argument at all. It's a claim to virtue.

              This is the core principle of conservative politics, and the primary reason conservatives are so vulnerable to fascist narratives coming out of the alt-right.

              • FloorEgg3 days ago
                Thanks for expanding. I think we are in agreement about values and virtues, and I appreciate your perspective on the nuance of presumed virtues as they relate to propaganda, it sounds right to me.

                Where you lose me is in the generalization and singling out of conservatives. It sounds like you're saying they are uniquely susceptible to propaganda, yet all my anecdotal experience adds up to it being fairly equal on both sides.

                I haven't dug into any formal study so I could be wrong, but I am close to lots of people who are politically left who seem to follow that "presumed virtue" -> reaction (skip critical thinking) pattern. To be clear, my guess is it's a very common and natural pattern, like cognitive biases and optical illusions. It's a consequence/bug of collective cognition.

                • thomastjeffery3 days ago
                  To be clear, I do not think conservatives are the only ones using virtue ethics. Much of how the social justice movement plays out is a good example of this dynamic.

                  What makes conservatism unique is that the entire movement is centered on virtue ethics. There is nothing new about this: just look at Reaganomics, the wars on drugs and terror, abortion bans, gay marriage bans, etc. Practically everything about conservative politics is expressed and defended as a virtue.

                  The next unique thing is that the alt-right has taken over the conservative narrative. There are groups of people that literally call themselves fascist, and they aren't just getting attention from conservative politicians: they are writing talking points that are echoed over and over again by the House, the Senate, and even the president.

                  The overwhelming majority of conservatives are not fascists, yet most are evidently happy to work with them. Podcasters and news entertainers are constantly beating the drum of alt-right rhetoric, because it's engaging, and engagement gets them paid. Conservative voters are happy because their team is winning. Fascists are happy because their virtues go mainstream. There is no infighting, because there is no criticism, because there is no evaluation to begin with.

                  • FloorEgg3 days ago
                    We have ventured far enough outside my sphere of competence that I'm running out of ways to constructively engage, but I'm sure some of your points will linger in my mind.

                    I have had no direct exposure to what you're describing in your third and fourth paragraphs, and so I am not in a position to agree or disagree. All I can say is that I haven't seen it yet. What I have seen is misrepresentation (from both sides) and a pattern of media of all types stoking division.

                    A few years ago I learned about the concept of "most respectful interpretation" as a tool for conflict resolution and establishing trust in teams. So much of media these days feels like the opposite.

                    I'm trying my best to understand what's true, while accepting my own limitations and the reality that I may never be able to tell what's really going on at the global power level. At the very least it seems to require a lot of reserving judgement.

                    If the media is a stained glass window, looking through the blue glass and then the red glass is not the same thing as looking through clear glass.

  • raffraffraff4 days ago
    I'm in the process of reading the PDF but if anyone has finished it, is there an implementation of this running somewhere? Is it testable now?
  • qgin5 days ago
    I don't know how much potential this has to solve propaganda / bad faith arguments because you can just say "that logic program is biased" and handwave the entire thing away.

    But you could imagine a role for this in arbitration or legal settings.

  • analog315 days ago
    Doesn't Goedel's Theorem forbid building a logic checker?
    • drdeca5 days ago
      No.

      Gödel’s theorem forbids something that in general tells you whether a statement is true or not (as in, a method which would work for every possible statement within a system). It certainly doesn’t preclude something that checks whether a proof is correct, any more than it precludes checking whether some calculation is done correctly (i.e. it doesn’t preclude it at all).

      It says that there are statements for which the proof system has no proof that they are true and no proof that they are false. It doesn’t mean you can’t have a proof system.
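
      To make the distinction concrete, here is a toy sketch (my own example, nothing from the paper): a checker for propositional proofs where every step must be a premise or follow by modus ponens from earlier steps. Checking a given proof like this is purely mechanical; what is ruled out is a general procedure that settles every possible statement.

        # Toy proof checker. Implications are ('->', antecedent, consequent);
        # atoms are plain strings. A proof is a list of steps.
        def checks(proof, premises):
            derived = []
            for step in proof:
                ok = step in premises or any(
                    earlier == ('->', other, step)  # some earlier step is other -> step
                    for earlier in derived
                    for other in derived            # ...whose antecedent was also derived
                )
                if not ok:
                    return False
                derived.append(step)
            return True

        premises = ['p', ('->', 'p', 'q')]
        print(checks(['p', ('->', 'p', 'q'), 'q'], premises))  # True: q follows by modus ponens
        print(checks(['p', 'q'], premises))                    # False: q was never derived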

    • bubblyworld5 days ago
      Not Gödel's theorem, but inference for first-order logic is undecidable in general for other reasons. You can still get pretty far with heuristics though. Don't let perfect be the enemy of good =P
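
      As a concrete sketch of the "heuristics get you far" point (my own example using the z3-solver Python package, not anything from the paper): z3's quantifier instantiation is incomplete in general, but it dispatches textbook first-order arguments like this one instantly.

        from z3 import (Solver, DeclareSort, Const, Function, BoolSort,
                        ForAll, Implies, Not, unsat)

        Thing = DeclareSort('Thing')
        Man = Function('Man', Thing, BoolSort())
        Mortal = Function('Mortal', Thing, BoolSort())
        socrates = Const('socrates', Thing)
        x = Const('x', Thing)

        s = Solver()
        s.add(ForAll([x], Implies(Man(x), Mortal(x))))  # all men are mortal
        s.add(Man(socrates))                            # Socrates is a man
        s.add(Not(Mortal(socrates)))                    # negate the conclusion...
        print(s.check() == unsat)                       # ...unsat, so the argument is valid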
      • dpierce94 days ago
        First order logics can be provably sound and complete when they do not express certain arithmetic operations.
        • bubblyworld3 days ago
          First-order logic is sound and complete in general (via Gödel's lesser known completeness theorem, for instance). That doesn't contradict what I wrote =)
  • grandempire5 days ago
    It’s a nerd fantasy to imagine that argumentation is a logical formula, and that by memorizing all the bad forms you will win arguments and detect falsehood.
  • gcanyon4 days ago
    Facebook needs to implement this as a flag, immediately. (kidding, of course -- it would nuke 98% of their content)
  • igleria4 days ago
    I chuckled a little when I pictured 2 internet randos having the typical internet fight "enhanced" by AI
  • MortyWaves5 days ago
    Trolls that knowingly engage in bad arguments with flawed logic are going to be in shambles.
  • ofrzeta4 days ago
    I thought we had already gone through this with Carnap and Logical Positivism.
  • talles5 days ago
    The entire analytic philosophy movement is nowhere to be seen in the paper (?)
  • shortrounddev25 days ago
    I believe this is something Immanuel Kant tried to do in the 18th century
  • booleandilemma5 days ago
    This is a threat to my company's product managers.
  • mike_hearn5 days ago
    Love the idea in theory and would like such a tool to exist, but the use cases they present aren't convincing. This would be useful in much more specific cases like drafting contracts, laws or technical documentation: places where unusually precise language without corner cases is mutually desired by everyone, and the set of fallacies that occur is small and specific.

    This paper doesn't target such use cases. Instead it's trying to tackle "pop misinformation" type claims, mostly related to climate change. Unfortunately the Logic and LogicClimate datasets that the paper are using as a benchmark have serious problems that should disqualify them from being considered a benchmark. If we check the paper that introduced them, Jin et al open by asserting that "She is the best because she is better than anyone else" is an example of circular reasoning. It's actually a tautology. Then they try again with "Global warming doesn’t exist because the earth is not getting warmer" which is also not circular reasoning, it's another tautological restatement (you may say it's false, but disagreement over facts isn't a disagreement over logic - if either clause is true so is the other). Circular reasoning often involves a mis-definition and would be something like this real-world example from a few years ago:

    1. A positive test means you have COVID.

    2. Having COVID is defined as having a positive test.

    Their second example is "Extreme weather-related deaths in the U.S. have decreased by more than 98% over the last 100 years ... Global warming saves lives" which they classed as "false causality" (they mean non-sequitur). My experience has been that climate skeptics are surprisingly logical so this would be an odd statement for them to make, and indeed if we check the original Washington Times op-ed then we find Jin et al are engaging in malicious quoting. It actually says:

    > "Contrary to sensational media reports, extreme weather-related deaths in the U.S. have decreased more than 98% over the last 100 years. Twenty times as many people die from cold as from heat, according to a worldwide review of 74 million temperature-related deaths by Dr. Antonio Gasparrini and a team of physicians. Global warming saves lives."

    The saves lives claim is based on cold being more dangerous than heat. Warmer weather = fewer deaths from cold isn't a logical fallacy, which is why they had to delete that part to make their example. It might sound like a weird or disingenuous argument to you, but it's logical in the sense that an SMT solver would approve of it. If you disagree it's probably due to prior beliefs e.g. that perhaps extreme weather has increased even as society got orders of magnitude better at reducing the impacts, or perhaps the positive effects of warmer air on the elderly are offset by other effects of climate change, or that the future will be different to the past due to compounding effects. Such rebuttals aren't identifications of a logical fallacy though, just of different priors that could maybe be addressed with additional rounds of debate.
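
    To make the "an SMT solver would approve of it" point concrete, here is one possible encoding (my own sketch with z3; the variable names and premises are mine, and the premises are exactly the contestable part): grant that warming removes more cold deaths than it adds heat deaths, and "fewer total deaths" follows necessarily. The disagreement is about those premises, not about the inference.

        from z3 import Reals, Solver, Not, unsat

        # Hypothetical changes in annual deaths attributable to warming.
        delta_cold, delta_heat = Reals('delta_cold delta_heat')

        s = Solver()
        s.add(delta_cold < 0)                    # premise: fewer deaths from cold
        s.add(delta_heat >= 0)                   # premise: possibly more deaths from heat
        s.add(-delta_cold > delta_heat)          # premise: the reduction in cold deaths dominates
        s.add(Not(delta_cold + delta_heat < 0))  # negate "total deaths go down"
        print(s.check() == unsat)                # unsat: the conclusion follows from the premises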

    • _cs2017_5 days ago
      Out of curiosity, what fraction of readers do you think would understand the points you're making? And what fraction of readers do you think would blame you for taking the wrong side of some debate?
      • mike_hearn5 days ago
        Readers here? No idea but I'd love to know. What's your estimate?
        • _cs2017_2 days ago
          Yes readers here.

          I am pessimistic, I think only 2-3% would understand, but I'd be happier to be proven wrong than proven right.

          Thank you for writing up your analysis!

          • mike_hearna day ago
            Interesting. That's a lot lower than my estimate. I kinda agree that what I said is pretty abstract though. Clearly, there are academics working with proof engines who have a shaky grip on the definitions of the various fallacies, or maybe they just don't care much and saw a dataset they could optimize for that industrial labs were likely to ignore (I suspect the latter).
            • _cs2017_9 hours ago
              I also assign a 50% probability to the authors of the paper not realizing their mistake even after reading your post. I think competence is quite rare even in academia.
  • Geee5 days ago
    Yes, this is exactly what I've been dreaming about. It might finally be possible to beat the bullshit asymmetry law, i.e. Brandolini's law: "The amount of energy needed to refute bullshit is an order of magnitude bigger than to produce it."

    If LLMs can debunk bullshit as easily as it's generated, the world will instantly turn into a better place.

    Bad ideas which sound good are the root of all evil.

    • RGamma5 days ago
      You can just as easily imagine LLMs stoking the flames. Real world belief systems evolve along more complicated trajectories than addition of factual axioms and elimination of inconsistencies.
  • svnt5 days ago
    This is already something that e.g. Claude 3.7 Sonnet appears to be able to do very well, with the added benefit of explaining why if you let it -- what is the benefit of this model?:

    > "Sometimes flu vaccines don't work; therefore vaccines are useless." - Hasty generalization

    > "Every time I wash my car, it rains. Me washing my car has a definite effect on the weather." - Post hoc, ergo propter hoc

    > "Everyone should like coffee: 95% of teachers do!" - Appeal to popularity and hasty generalization

    > "I don't want to give up my car, so I don't think I can support fighting climate change." - False dilemma

    • mannykannot5 days ago
      It would take more subtle examples, embedded within what is mostly fallacy-free text, to evaluate the absolute and relative utilities of the two approaches to the problem - or, to put it another way, we should not hastily generalize from their performance on a few straightforwardly fallacious sentences.
  • nico5 days ago
    Is this just another form of the same concept behind smart contracts?
  • pixelpoet5 days ago
    Oh man, where was this back in the 90s arguing with proto-trolls on IRC and Usenet who shamelessly moved goalposts, strawmanned, appealed to authority, resorted to ad hominem, ...

    Imagine if you could click on a stupid internet discussion thread and make it give you a Lean proof of each argument where possible :D This thing would be hated even more than, say, vaccines, by the same sorts of people who deliberately choose to not understand things.
