Which is to say, I'm not sure how this paper's results are generally expected to be all that useful in practice.
But.
To be useful in practice the approach does not need to work in all cases of natural language usage. Even if it works only in some limited cases, there may be useful applications.
The authors evaluate their approach on two datasets. One is LOGIC consisting of learning examples of logical fallacies. The other is LOGICCLIMATE, consisting of logical fallacies collected from real world news articles about climate change.
The datasets are here, if anyone is interested in seeing the type of natural language they currently try to address: https://github.com/causalNLP/logical-fallacy
I guess this CSV contains the LOGICCLIMATE data: https://github.com/causalNLP/logical-fallacy/blob/main/data/...
So one possible practical use for the approach: spot individual fallacious sentences in a long article and highlight them.
Another real world example. I propose a solution at work, based on some statistics. And a colleague dismisses it by saying that there is a book "6 Ways to Lie with Statistics". If there were a smart assistant in the room who gently explained the logical fallacy to the colleague, it would save me a lot of effort and make the discussion more productive. I doubt the difficulties you mention apply to this simple case.
Except, that's going in the right direction towards a better argument: empiricism requires your statistics to be peer reviewed for errors or deception before being believed. That takes a skilled individual.
So, you either think they're very good at statistics or you want them to put faith in your work. Otherwise, they need a smart assistant they trust to review the statistics. Then, they have increased confidence in your solution but it still might be wrong.
It was a simple case, and actually I was not presenting statistics I had collected; I just suggested trying to use some numerical evidence to make a decision.
On another occasion I mentioned to somebody that it's necessary to choose drugs or medical approaches verified by clinical trials and double-blind methods. And they replied that there is a book about how to lie with statistics, and continued to consider unverified methods.
I mean that in real life sometimes very simple fallacies happen.
Some statistics-based decisions may be wrong => right decision must avoid statistics.
These cases could probably be addressed with automated tools of the near future.
Also I don't know what or how to teach someone who falls into these pitfalls.
Idk, for me it is subconscious: I just "feel" it or know that it is logically faulty.
Also, the rephrasing doesn't always work imo: you could have a logical statement that is totally valid in some contexts and not valid in others. And you also need to think about the validity of the premises and whether it is legitimate to draw the conclusion in natural language.
Still, the underlying sense that you shouldn't trust people making claims based on things that you don't understand is probably a fairly solid survival strategy in general. Better to miss out than get scammed.
To put it another way, a call to "trust the science" in the absence of further elaboration is itself an appeal to authority. Despite that, it's not actually wrong - you generally should trust openly published science that has been reproduced by at least one unrelated party. Which serves to illustrate the rather glaring issue with the premise of the linked article, at least for practical everyday use.
The fallacy was that people treat the presence of statistical evidence as a negative sign, not realizing it's possible to lie without statistics as well.
Let's imagine a book "100 Ways to Harm Your Health with Medicine", and a sick person choosing between magic and medicine: "Aha, the book has proven that medicine is harmful, so of course magic".
However, that isn't how I read the original example. I saw it more as "A is backed by evidence B" rebutted with "I don't trust evidence B because ...". The described tone is poor and the individual is obviously horribly ignorant, but when assessed from their (apparent) point of view instead of my own, that position seems fairly reasonable to me.
In other words, not so much "magic instead of medicine" as rejecting the claim that medicine is superior to magic while also declining to hold the view that magic is superior to medicine.
Did a worse one get picked?
Did you already have a solution in place, and you were actually suggesting a change?
And even if you can translate a sentence into a predicate, you haven't begun understanding what lies behind all those predicates. E.g., "Zelensky is ready to work under Trump's 'strong leadership' after 'regrettable' showdown." What good does it do to have that in FOL?
[1] https://plato.stanford.edu/archIves/sum2011/entries/discours...
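For concreteness, a purely illustrative FOL rendering of that headline (the predicate names are my own invention) might be something like:

  ∃e (ReadyToWorkUnder(zelensky, trump, e) ∧ After(e, showdown) ∧ Regrettable(showdown))

Every hard question — what "strong leadership" implies, what being "ready to work" commits anyone to, why the showdown was "regrettable" — is still locked inside those predicate symbols.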
1. Most natural language arguments are not sound, because they are not deductive arguments in the first place. Most natural language arguments are persuasive, not formal reasoning.
2. Formal logic is a method of preserving truth. It doesn't really create truth. That makes it a lot less useful. Critically, while a deductively valid argument has a true conclusion if all the premises are true, an invalid argument can still have a true conclusion (see the toy example below). Formal logic, then, is very narrow.
This is why finding a logical fallacy in an argument is often not convincing by itself. It doesn't say "your logic is flawed therefore I am right". It says "your logic is flawed and therefore should be revised and improved."
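A toy illustration of both points (my own example, not from the paper):

  Valid but unsound:   ∀x (Fish(x) → CanFly(x)), Fish(salmon) ⊢ CanFly(salmon)
  Invalid, conclusion may still be true:   Rain → StreetWet, StreetWet ⊬ Rain

The first derivation is impeccable logic resting on a false premise; the second commits affirming the consequent, yet "Rain" can happen to be true anyway.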
A related note: there is some evidence that "Language is primarily a tool for communication rather than thought" [1], i.e., that language is neither necessary nor sufficient for the so-called psychic thinking process. It serves as a communication mechanism. Meanwhile, there is a hypothesis that the psychic thinking process lies beyond computation as we know it [2], in the form of Turing machines etc.
[1] https://www.nature.com/articles/s41586-024-07522-w [2] https://www.amazon.com/Emperors-New-Mind-Concerning-Computer...
The obvious answer to these questions is, "no". There is no such thing as a conclusive interpretation. If there was, then Natural Language wouldn't be ambiguous in the first place!
So we're all doomed to constantly misinterpret each other forever, right? No? We humans use Natural Language all the time, and usually figure out what the other person actually means!? How do we do it? Are we all just really good at guessing?
No, we have something better: context.
Context exists both in and around Natural Language text. Context determines which formal meaning is used to interpret the text. If we don't know which context is appropriate, there may be clues in the text itself that help us construct one that is useful or correct.
---
I've been trying to work out an approach to language processing that interprets text into logical formalisms (arbitrary meaning). I call them "Stories". A Story is an arbitrary interpretation of text. A Story is never conclusive: instead it is used as arbitrary context to interpret the next text. I call this process "Backstory".
We could even do the process backwards, and "write" an arbitrary formalism (meaning) in the same language/style/voice as a previously interpreted Story.
Given enough example instances of Story, we should be able to read and write to each other through explicitly shared context. I call this process "Empathizing". I call my idea the Story Empathizer.
I'm definitely out of my depth when it comes to the details, though...
The good news is that context can sometimes merge stories together. When we do explicitly find shared context, we tend to leverage that knowledge.
My idea is about offloading as much of this process as possible to a computer. We would still need to choose backstories, but the rest could be done in plain view, leveraging the incredible speed and memory size computers have.
This would also make interactions much more civil, given so much proclivity to do the opposite (straw-manning).
It's not a perfect approach, but it helps. LLMs are quite decent at steelmanning as well, because they can easily pivot language to caveat and decorate with nuance.
It could also be useful as a lower-level component of general-purpose systems that internally rely on chains of thought computed by sub-component LLMs.
"if controversies were to arise, there would be no more need of disputation between two philosophers than between two calculators. For it would suffice for them to take their pencils in their hands and to sit down at the abacus, and say to each other (and if they so wish also to a friend called to help): Let us calculate."
- Gottfried Wilhelm Leibniz
The impossibility of exhaustively and precisely putting humanity into words, like the impossibility of having a provably correct and complete model of reality, is like the impossibility of having a fully precise map.
The biggest danger is elevating the newly created map to the position of your new, much more simplistic, territory that supersedes the original one, with all of its quirks and fidelity.
And good luck calculating some of these axioms, such as "Why is it my duty not to kill someone?" You could argue, "Well in the end, a society enabling such behavior at scale would be no society at all," to which one might reply, "I have no interest in letting others do as I do.", and you can't calculate away violent sociopaths. The rest of us derive our principles from functioning mammalian emotional circuits, but at some level we rest our case on subjective axioms.
This is probably too big a topic for a whole side-branch on this, but modern meta-ethics teaches a range of possible approaches. Some notions of ethics are relativist, and are about the fact that moral norms are produced by some given society. But under some constructions that's just a procedural truism rather than a position on the content or the nature of morality itself.
Then you have moral realism, a perfectly respected position, which can encompass things like utilitarianism and other isms. And this might seem like a silly derail, and I'm trying not to derail, but this is important at the end of the day, because "ethics is reached via consensus" can mean a lot of things that cash out with completely different practical implications. It's the difference between, for instance, deciding we need to be consensus oriented and vote, or be research oriented and concerned with deepening our scientific understanding of things like insect consciousness and whether the physical effects of sleep deprivation fall under the traditional definition of torture.
>And good luck calculating some of these axioms
Not wrong, they can easily get computationally intractable. So I think one has to account to some degree for uncertainty. Here again, I worry that the intended upshot is supposed to be that we simply give up or treat the project of moral understanding like a cosmically impossible non-starter. I like to think there's a middle ground between where we presently stand and the hypothetical future where we've got perfect knowledge.
Absolutely not! This is cultural relativism, and frankly, it would be circular: how exactly are we converging on a consensus if not from some preexisting sense of the good?
The only defensible objective basis for the good is the nature of a thing and what actualizes the potentials determined by that nature, thus actualizing the thing as the kind of thing it is. Morality, only possible for things that have the capacity to comprehend their options for action (intellect) and choose freely among them (will) on the basis of that understanding, therefore concerns the question of whether an act performed by a thing furthers or frustrates the actualization of that thing.
By cutting off my arm for no proportionate reason, I do an immoral thing, because it is my nature to have that arm, but if I have gangrene in that arm that threatens my life, then removing the gangrene with the undesirable side effect of losing an arm is morally justifiable, even if the loss of the arm is not good per se.
Murdering a human being is gravely immoral, because it directly contradicts my nature as a social human being in a very profound and profoundly self-destructive way. However, killing a would-be murderer in defense of my life or that of another is a morally very good deed; it is in accord with my social nature, and indeed can be said to actualize it more fully in some respect.
> The rest of us derive our principles from functioning mammalian emotional circuits
Please refrain from making such silly pseudoscientific and pseudophilosophical statements.
That being said, calculation is insufficient, because such calculation is formal: it explicitly excludes the conceptual content of propositions. But concepts are the material "carriers" of comprehension of what things are. We can also analyze concepts. Now, we can say that we can calculate a formal deduction according to formal rules, but we cannot calculate a concept or its analytical products. These are the products of abstraction from concreta. Formal systems abstract away from these. They are blind to conceptual content, on purpose. And having used a formalism to derive a conclusion, we must interpret the result, that is, we must reassign concepts to the symbols that stand in for them. So formal systems are useful tools, but they are just tools.
Well, there is this mechanism of imprinting our current moral settings (both declared and actually demonstrated) onto the mostly blank-slate minds of children, so that the next generation has mostly the same morals as the current one but with minor differences: so ethics can "evolve" over time, but that doesn't mean there is any end-state "consensus" they're trying to reach.
One cannot realistically construct ethics procedurally and reproducibly from a blank slate, so holding the false belief that one can, or does, have such a set of "scientific" ethical standards only justifies genociding the opposition.
Ethics is just a half-broken, loose set of heuristics developed and optimized evolutionarily. It probably can't even be properly captured in text. It's nothing that stands up to scientific or computational scrutiny. And there we step into cultural relativism as a principle: there are lots of behaviors we humans exhibit as "ethical" acts that seem random and not universal, that also seem to work where they are practiced and maybe not where they are not, so you can't say which one is right.
> Please refrain from making such silly pseudoscientific and pseudophilosophical statements.
Yet you use terms such as "nature". How is that not silly and pseudoscientific?
You are ascribing traits to things in a fundamentally immeasurable manner. At least in GP's case we are left with a root that we can quantify.
These aren’t logically incorrect, people who study rhetoric have just identified these as common patterns of poor persuasion.
But in practice, that’s one of the most relevant factors of whether you should be listening to someone. Does this person have a solid track record? Do they have your interest in mind?
So it is relevant information. It’s just that, “well once this guy kicked a dog” is usually done in bad faith.
So I wouldn’t consider it a non sequitur, except in its most crude forms.
Ad hominem continues to be a good example. If you know that someone is a liar, you don't know that everything they say is false. You just know that they lie and are likely saying something to affect listeners. Could be based on some truth. Could not.
But I don't see a pretrained model in there, so I'm not sure what to pass as `your_nli_model_name`:
python3 src/nl_to_fol.py --model_name <your_model_name> --nli_model_name <your_nli_model_name> --run_name <run_name> --dataset --length
It would have been a lot cooler if this was set up as a pretrained model using RL to translate.
https://github.com/lovishchopra/NL2FOL/blob/4635a81f216da2ad...
nli_tokenizer = AutoTokenizer.from_pretrained(args.nli_model_name)
nli_model = AutoModelForSequenceClassification.from_pretrained(args.nli_model_name)
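Presumably any off-the-shelf NLI checkpoint from the HuggingFace hub would slot in there. A minimal smoke test, assuming a standard MNLI model (this is my guess, not necessarily what the authors used):

  from transformers import AutoTokenizer, AutoModelForSequenceClassification

  # "roberta-large-mnli" is a stock NLI model on the HF hub; it's only an assumption
  # that it is compatible with what nl_to_fol.py expects for --nli_model_name.
  name = "roberta-large-mnli"
  nli_tokenizer = AutoTokenizer.from_pretrained(name)
  nli_model = AutoModelForSequenceClassification.from_pretrained(name)

  # Score a premise/hypothesis pair to confirm it behaves like an NLI classifier.
  inputs = nli_tokenizer("It is raining heavily.", "The ground is wet.", return_tensors="pt")
  probs = nli_model(**inputs).logits.softmax(dim=-1)
  print(probs)  # probabilities over the model's NLI labels (entailment/neutral/contradiction)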
If you are told "we will go to the zoo or swimming pool tomorrow, if it is windy or rainy", most readers would know the first "or" is exclusive (we aren't going to both), while the second is inclusive (we will go if it is windy, rainy, or both).
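Spelled out (with W = windy, R = rainy, Z = zoo, P = pool), the intended reading is roughly:

  (W ∨ R) → (Z ⊕ P)

an inclusive "or" in the antecedent and an exclusive one in the consequent, even though English uses the same word for both. A literal NL-to-FOL translation that picks ∨ in both places would silently lose the "not both" part.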
This is annoying when teaching logic, from experience.
Even something as simple as sarcasm breaks this idea, and you can have full books of metaphor that only make sense if you understand the cultural context in which they were written.
https://aws.amazon.com/blogs/aws/prevent-factual-errors-from...
The other part seems to be values obfuscation, and I wonder if this would help with that too.
If Joe says that nails are bad, it can mean very different things if Joe builds houses for a living and prefers screws, or if Joe is anti development and thinks everyone should live in mud huts.
Propaganda will often cast a whole narrative that can be logically consistent, but entirely misrepresents a person or people's values (their motivations and the patterns that explain their actions), and there will be logical fallacies at the boundaries of the narrative.
We need systems that can detect logical fallacies, as well as value system inconsistencies.
---
Intake the following block of text and then formulate it as a steelmanned deductive argument. Use the format of premises and conclusion. After the argument, list possible fallacies in the argument. DO NOT fact check - simply analyze the logic. do not search.
After the fallacies list, show the following:
1. Evaluate Argument Strength: Assess the strength of each premise and the overall argument.
2. Provide Counterarguments: Suggest possible counterarguments to the premises and conclusion.
3. Highlight Assumptions: Identify any underlying assumptions that need examination.
4. Suggest Improvements: Recommend ways to strengthen the argument's logical structure.
5. Test with Scenarios: Apply the argument to various scenarios to see how it holds up.
6. Analyze Relevance: Check the relevance and connection between each premise and the conclusion.
Format the argument in the following manner:
Premise N: Premise N Text
ETC
Conclusion:
Conclusion text
[The block of text to evaluate]
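As a sketch of how one might run that prompt programmatically (the OpenAI client and the model name here are my assumptions, not something from this thread; any chat-capable LLM interface works the same way):

  from openai import OpenAI

  PROMPT = open("steelman_prompt.txt").read()   # the instruction block above, saved to a file
  article = open("article.txt").read()          # the block of text to evaluate

  client = OpenAI()                             # assumes OPENAI_API_KEY is set in the environment
  response = client.chat.completions.create(
      model="gpt-4o",                           # arbitrary choice; substitute whatever you use
      messages=[{"role": "user", "content": PROMPT + "\n\n" + article}],
  )
  print(response.choices[0].message.content)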
Thanks again!
But also, a lot of propaganda isn’t false per se but simply blown out of proportion, or underproportioned in cases of inconvenient truths. The truth is a distribution of events, and editors continuously choose how to skew that distribution.
(One of my very interesting possessions is an old Chinese state-owned newspaper. As far as I could tell, their main tool wasn’t lying, but simply omission.)
For example, if you wanted to push a narrative that e.g. pit bulls are the most dangerous problem in America, you would just post a nonstop stream of pit bull attack videos. It taps into cognitive biases people have which aren’t propositional logic statements.
More broadly, the world is stochastic, at least in the way we experience it. So our brains have to make sense of that, which is an opportunity for narratives to creep in.
FOL plus values analysis of information streams, which could manifest as a user interface for configuring the algorithms that decide what information is surfaced to you in media.
This is why I said this sort of thing might be part of a solution. The whole solution would involve other significant parts.
A statement that "pit bulls are the most dangerous problem in America" requires source data (i.e. causes of death or serious injury in 2024 in the USA).
Publications can be signed by authorities (i.e. a university or government body).
IMHO sooner or later we will (have to) end up with a system like that.
Every piece of information will be signed, and a level of trust will be established automatically based on your preferences for whom you trust.
They would say something like “learn the truth about pit bulls” and then feed you an endless barrage of attack footage and anecdotes and emotionally charged information.
The purpose is to shape your priors. If all you see is pit bulls attacking people, your subconscious will rate them more risky. You may not even be able to verbalize why you changed your opinion.
I believe this future (all information being consumed like this) is not far off, and it already has a decent usage share, judging from the decline in direct traffic to some well-known information source websites.
Perplexity and Phind (as well as the upstream chat interfaces now) already support internet searching (exploring?), which does this.
When reading (news and other) articles I find myself more and more often reading them through LLMs to perform the above steps. If somebody has never tried it, it's really worth it, especially for politically biased news articles.
I believe this shift in information consumption is happening more and more for everybody.
Everything will become indirect, likely with multiple layers (i.e. an extra layer at the OS level is likely – this is frankly perfect for use cases like protecting minors: it would be great if you could safely give a laptop to your kid knowing that there is an AI-based content filter you've set up for their age group).
The example you gave of focusing excessively on some topic in order to make it seem like a bigger deal…
Hm, is there a way we could formalize such things, similar to how formal fallacies are formalized?
It seems more difficult than classifying common formal fallacies.
But, might it be possible?
And even if you were to witness a random sampling of all events via some kind of Clockwork Orange mechanism, your brain has saliency biases as well.
You might find the wiki page on cognitive bias interesting https://en.m.wikipedia.org/wiki/Cognitive_bias
It's a neat project on its own, tbc, I just have very low expectations of broader impact.
Also today's propaganda is capable of adapting itself to each audience member's value system to make it more palatable, and then gradually nudge the audience towards the desired narrative/beliefs/values. The systems that distribute the propaganda are already analyzing people's values and using that information to manipulate people. I think that information asymmetry is part of the problem. I could be wrong, but I think flipping that dynamic around so the public can see the true values of the subjects of propaganda may help neutralize a lot of propaganda.
As far as what impact this specific project will have, I have no idea. You may be right. I'm curious about its limitations and how it can be applied.
What I assume you might be missing is that you are looking at the world through a different lens than these other people. Both you and they are consuming propaganda and can't detect it as propaganda because it aligns with your values. However it subtly nudges your values in a direction over time.
I agree that people's values and core beliefs are malleable, but in the same way a tree trunk is. It may seem like these people have changed a lot and you haven't, but I think it's more likely that you've changed too, and that they've changed less than you think.
No one is immune to propaganda, which is why anything that can help disarm it interests me.
John Doe isn’t trained in logic and can adjust any of his premises if it means he can continue to admire his favorite celebrity. It’s a combination of flawed reasoning and premise flexibility.
Not to mention, any fact can be endlessly challenged and questioned even if it’s agreed upon and largely incontestable.
Science Denial Across the Political Divide: Liberals and Conservatives Are Similarly Motivated to Deny Attitude-Inconsistent Science, https://journals.sagepub.com/doi/abs/10.1177/194855061773150...
Your link aside, I think the obvious evidence is that that behavior is significantly more common in conservatives. Literally the most basic of facts get denied in bulk. I don't understand how you could make any argument that any other major political affiliation engages in the same behavior to a comparable extent.
This is modulated by who is currently in power. Conservatives were worse when they lost and Biden was in power. Democrats are ramping up the crazy now that they're the underdogs.
> I don't understand how you could make any argument that any other major political affiliation engages in the same behavior to a comparable extent.
Go check out X and Bluesky and how many people are denying Trump was legitimately elected, and how they are convinced Musk tampered with the voting machines.
As for denying basic facts, there's a whole host of basic scientific facts that people who lean left deny wholesale, eg. heritability of behaviours, personality and other characteristics, differences between groups, denying certain features of sex and the sexes, etc.
I won't claim that the problem is equal on both sides, for many reasons I won't belabour here, but it's not nearly as wide a margin as you're implying. Part of the reason it seems so one-sided to you is my-side bias + the biased coverage the other side gets.
That isn't remotely true. Conservatives have been consistently in the lead, and there are studies showing how much more prone to believing misinformation they are.
> Go check out X and Bluesky and how many people are denying Trump was legitimately elected, and how they are convinced Musk tampered with the voting machines.
There are at least reasoned arguments for that. That isn't the same thing as rejecting the use of masks during a pandemic.
> it's not nearly as wide a margin as you're implying.
It really is, but we clearly disagree.
> Part of the reason it seems so one-sided to you is my-side bias + the biased coverage the other side gets.
You shouldn't make assumptions about how or where I get my news. I don't think coverage bias applies at all in influencing my conclusion based on how I get my news.
No, that's misleading. Conservatives have also been consistently in the lead on "authoritarianism" to the point that it was considered a purely conservative phenomenon, until someone actually thought to ask questions like "what would left wing authoritarianism look like?" and suddenly they found it everywhere.
You seem not to realize how unreliable the data is on these questions. Not only is the replication rate of psychology and sociology ~35%, but the demographics of those fields yields a clear bias on exactly these questions. You simply cannot draw such sweeping conclusions from the unreliable data we have.
When conspiracy and biased thinking are tested directly, as with the study I linked, there is no difference in how the biases impact their thinking. Both sides are extra harsh on their enemies, are overly forgiving of their allies, etc. Confirmation bias and motivated reasoning all around.
> There's at least reasoned arguments for that.
Do you think that there were reasoned arguments for Trump having won in 2020?
> That isn't the same thing as rejecting the use of masks during a pandemic.
They could cite reasons for that too, you just don't believe they are valid reasons. It's the same confirmation bias in all cases though.
I totally agree that the end conclusion "this statement is fallacious" is pretty useless. But I assume that a working process would also yield the chain of judgements (A is right, B is right, C is wrong, etc). I think that would be VERY useful.
People who become captured by propaganda and lies generally are not sold on 100% of the propaganda. There are certain elements they care more about and others they can ignore. A way to deprogram people through conversation is to just ask them to explain things about their views and ask them to reconcile them with reality. The reconciliation is painful for them and that pain keeps people "in" irrational beliefs - but it's also how people find their way out. Once they no longer associate themselves with the conspiracy, they can discard beliefs associated with it...provided they can think through them.
I think being able to automatically decompose a fact check into the elements of what "is true" and "is false" in a statement would be HUGE. An essential tool in helping people escape from information swamps.
Fighting fire with fire.
[1] Impossible to find of course. And with all the LARPing going on on there, take this with two grains of salt. Given all the crazy shit going on in the US, I find it totally believable though.
But it's also not very useful for human reasoning. It's good for math and logic puzzles and bad at anything else. It's bad at time, at belief, at negation. None of those things act like you expect them to.
This is just the formal specification problem all over again. Verifying software against a spec is good and useful, but verification doesn't tell you whether the spec itself correctly captured the desired objective, it can only tell you whether the spec is logically consistent and that you implemented it faithfully.
It’s pretty much the default modern formulation of general-purpose formal logic.
People who approach politics from a virtue ethics perspective are vulnerable to propaganda because logic and value have no bearing whatsoever on their decision to accept or reject a narrative.
You can't think critically for someone else. They must do it on their own.
One virtue common in conservative politics is competition. A healthy instance of capitalism is expected to benefit all participants by virtue of competitive markets. The value of our current instance of capitalism is that very large corporations make a lot of cool tech and sell it at low prices.
But what about homelessness? Isn't that a real tangible negative value? Yes. What should we do about it? Well, a conservative will probably tell you that we should help homeless people by making housing (and homeless people) more competitive.
But that's clearly not working! The system does not provide a value that we very seriously need! These arguments don't matter to conservatives, because to them, it's all about the virtues.
The definition my comment depended on was one where values act as a filter for actions (or patterns of actions).
Drug addicts only value short-term highs (next fix). Someone else may value being a musician, being reliable, or being honest. In 2018 maybe someone would have bought a Tesla because they value being seen as progressive and value experiencing modern technology. Notice that all my examples start with a verb, which can often manifest as a way of being.
I didn't bring up virtues, but my understanding of virtues is that they are values deemed by at least some to be objectively "good", such as the cardinal virtues. Whereas values can be both good or bad, such as masochists who value watching others suffer.
A value is something that you value after evaluating it.
A virtue is presumed to be good. If it were presumed to be bad, it would be a vice. People commit to virtuous behaviors because they expect valuable consequences.
For example, someone who considers honesty a virtue might implement that by choosing to tell the whole truth; or they might implement it by choosing not to tell lies; or even by punishing others who they believe to be dishonest. It is assumed that there is no need to evaluate their behavior, because it was guided by virtue.
Propaganda presents itself as virtuous. This is important, because a target audience who relies on virtue ethics will not evaluate the narrative.
For example, when conservatives in the US argue against single-payer healthcare, they do not evaluate its merits against the merits of the current insurance system. Instead, they declare its foundational vice: "socialism". Opposite of the ultimate conservative virtue: "capitalism".
It doesn't matter how incoherent this argument is: it isn't an argument at all. It's a claim to virtue.
This is the core principle of conservative politics, and the primary reason conservatives are so vulnerable to fascist narratives coming out of the alt-right.
Where you lose me is in the generalization and singling out of conservatives. It sounds like you're saying they are uniquely susceptible to propaganda, yet all my anecdotal experience adds up to it being fairly equal on both sides.
I haven't dug into any formal study so I could be wrong, but I am close to lots of people who are politically left who seem to follow that "presumed virtue" -> reaction (skip critical thinking) pattern. To be clear, my guess is it's a very common and natural pattern, like cognitive biases and optical illusions. It's a consequence/bug of collective cognition.
What makes conservatism unique is that the entire movement is centered on virtue ethics. There is nothing new about this: just look at Reaganomics, the wars on drugs and terror, abortion bans, gay marriage bans, etc. Practically everything about conservative politics is expressed and defended as a virtue.
The next unique thing is that the alt-right has taken over conservative narrative. There are groups of people that literally call themselves fascist, and they aren't just getting attention from conservative politicians: they are writing talking points that are echoed over and over again by the house, the senate, and even the president.
The overwhelming majority of conservatives are not fascists, yet most are evidently happy to work with them. Podcasters and news entertainers are constantly beating the drum of alt-right rhetoric, because it's engaging, and engagement gets them paid. Conservative voters are happy because their team is winning. Fascists are happy because their virtues go mainstream. There is no infighting, because there is no criticism, because there is no evaluation to begin with.
I have had no direct exposure to what you're describing in your third and fourth paragraphs, and so I am not in a position to agree or disagree. All I can say is that I haven't seen it yet. What I have seen is misrepresentation (from both sides) and a pattern of media of all types stoking division.
A few years ago I learned about the concept of "most respectful interpretation" as a tool for conflict resolution and establishing trust in teams. So much of media these days feels like the opposite.
I'm trying my best to understand what's true, while accepting my own limitations and the reality that I may never be able to tell what's really going on at the global power level. At the very least it seems to require a lot of reserving judgement.
If the media is a stained glass window, looking through the blue glass and then the red glass is not the same thing as looking through clear glass.
But you could imagine a role for this in arbitration or legal settings.
Gödel’s theorem forbids something that in general tells you whether a statement is true or not. (As in, a method which would work for every possible statement within a system.) It certainly doesn’t preclude something that checks if a proof is correct, any more than it precludes checking whether some calculation is done correctly (I.e. it doesn’t preclude it at all) .
It says that there are statements for which the proof system has neither a proof that they are true nor a proof that they are false. It doesn’t mean you can’t have a proof system.
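To make the distinction concrete, here's a toy Lean 4 snippet (my own illustration): the kernel mechanically verifies that the given term proves the statement. Gödel's theorem says nothing against that; it only says some statements have no such proof at all.

  -- Checking this proof is a finite, mechanical computation.
  theorem and_swap (p q : Prop) : p ∧ q → q ∧ p :=
    fun h => ⟨h.2, h.1⟩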
This paper doesn't target such use cases. Instead it's trying to tackle "pop misinformation" type claims, mostly related to climate change. Unfortunately the Logic and LogicClimate datasets that the paper is using as benchmarks have serious problems that should disqualify them from being considered a benchmark. If we check the paper that introduced them, Jin et al open by asserting that "She is the best because she is better than anyone else" is an example of circular reasoning. It's actually a tautology. Then they try again with "Global warming doesn’t exist because the earth is not getting warmer" which is also not circular reasoning, it's another tautological restatement (you may say it's false, but disagreement over facts isn't a disagreement over logic - if either clause is true so is the other). Circular reasoning often involves a mis-definition and would be something like this real-world example from a few years ago:
1. A positive test means you have COVID.
2. Having COVID is defined as having a positive test.
Their second example is "Extreme weather-related deaths in the U.S. have decreased by more than 98% over the last 100 years ... Global warming saves lives", which they classed as "false causality" (they mean non sequitur). My experience has been that climate skeptics are surprisingly logical, so this would be an odd statement for them to make, and indeed if we check the original Washington Times op-ed we find that Jin et al are engaging in malicious quoting. It actually says:
> "Contrary to sensational media reports, extreme weather-related deaths in the U.S. have decreased more than 98% over the last 100 years. Twenty times as many people die from cold as from heat, according to a worldwide review of 74 million temperature-related deaths by Dr. Antonio Gasparrini and a team of physicians. Global warming saves lives."
The saves lives claim is based on cold being more dangerous than heat. Warmer weather = fewer deaths from cold isn't a logical fallacy, which is why they had to delete that part to make their example. It might sound like a weird or disingenuous argument to you, but it's logical in the sense that an SMT solver would approve of it. If you disagree it's probably due to prior beliefs e.g. that perhaps extreme weather has increased even as society got orders of magnitude better at reducing the impacts, or perhaps the positive effects of warmer air on the elderly are offset by other effects of climate change, or that the future will be different to the past due to compounding effects. Such rebuttals aren't identifications of a logical fallacy though, just of different priors that could maybe be addressed with additional rounds of debate.
I am pessimistic, I think only 2-3% would understand, but I'd be happier to be proven wrong than proven right.
Thank you for writing up your analysis!
If LLMs can debunk bullshit as easily as it's generated, the world will instantly turn into a better place.
Bad ideas which sound good are the root of all evil.
> "Sometimes flu vaccines don't work; therefore vaccines are useless." - Hasty generalization
> "Every time I wash my car, it rains. Me washing my car has a definite effect on the weather." - Post hoc, ergo propter hoc
> "Everyone should like coffee: 95% of teachers do!" - Appeal to popularity and hasty generalization
> "I don't want to give up my car, so I don't think I can support fighting climate change." - False dilemma
Imagine if you could click on a stupid internet discussion thread and make it give you a Lean proof of each argument where possible :D This thing would be hated even more than, say, vaccines, by the same sorts of people who deliberately choose to not understand things.
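As a toy sketch of what "where possible" would mean in practice (my own illustration, not from the paper): for the car-washing example above, the observed regularity is statable as a premise, but the causal conclusion has no derivation from it, and that unfillable hole is exactly what the checker would report.

  -- Lean 4 sketch: `sorry` marks the gap where no proof exists, because the
  -- premise "every wash day was a rain day" does not entail the causal claim.
  example (Day : Type) (washed rained : Day → Prop) (WashingCausesRain : Prop)
      (h : ∀ d, washed d → rained d) : WashingCausesRain := by
    sorry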