162 pointsby rbanffy2 days ago26 comments
  • aranw2 days ago
    I was actually wondering whether this was going to be possible. I don't know enough about training AI's but surely if enough content like this is read into the system this will lead to biases in an AI's response?
    • croes2 days ago
      LLMs are text generators. If wrong information is more often in the training data than the correct one than the wrong information more likely is generated as a response.

      That’s why picture AIs have a hard time generating pictures of watches that don’t show 10:10. Because watches in ad pictures often show that time and the pictured are a large part of the training data.

    • empiko2 days ago
      It is definitely possible and it is actually a really serious security risk. Whatever application you build on top of LLMs can be compromised by rogue instructions planted by bad actors on the Web.
  • t0bia_s2 days ago
    I'd like to know, what is a definition of "disinformation"? Either it's labeling of opposite opinions or lie, or...?
    • marcuschong2 days ago
      Disinformation isn’t just labeling opposing views—it’s the deliberate spread of falsehoods crafted to mislead. Honest debate involves conflicting opinions grounded in facts; disinformation weaponizes lies to erode trust and sow confusion. That’s a critical difference we can’t afford to ignore.
      • t0bia_sa day ago
        How you can weaponise a lie? It sounds like you just add an attributes to lie.
        • 14 hours ago
          undefined
    • clown_strikea day ago
      Propaganda. Trolling.

      Telling people the standard for tipping is 20%.

      Lying with conviction.

      • t0bia_s21 hours ago
        Propaganda is just a politics to promote certain narratives. Often full of lies. Trolling is mockery. 20% a standard for tipping? For who?

        Lying is alaways for conviction.

        • ffsm820 hours ago
          > Propaganda is just a politics to promote certain narratives. Often full of lies. Trolling is mockery.

          Maybe? Idk.

          > Lying is alaways for conviction.

          Do you mean "lies are always said to convince someone"? Because that's not the same as what I interpreted the parents example to mean.

          And it'd argue it's not even correct. Lies are often just to spread uncertainty and confusion, fully aware that they're not going to convince people of the lie's validity.

          • t0bia_s19 hours ago
            But do you trust someone you've found out is lying or says something you found out its not entirely true?
            • ffsm819 hours ago
              Literally everybody that ever existed has told things that weren't entirely true.

              Wherever its an exaggeration, misunderstanding or just something they misremembered, all of these will have occurred to everyone multiple times throughout their lives.

              By this truth, you cannot trust anyone. Which might be true - but definitely isn't healthy either.

              (It's also entirely unrelated to what I interpreted your argument to be, previously)

              • t0bia_s16 hours ago
                I agree that it is human to be wrong. Intentionally or not. Which leds me to conclusion that so called disinformation is often just labeling of opposite opinions by vague terminology.

                If someone believe to lies, I would ask, where the trust is coming from?

      • 21 hours ago
        undefined
  • borgdefenser2 days ago
    Can we just ban Kolmogorov already and get it over?

    As a Gen X American, what a miracle to have that unipolar moment that we could appreciate the Russian mind.

    It wasn't that long ago I thought we would just all watch Ghostmane videos and rule the world together.

  • ctippett2 days ago
    That's actually quite clever. It's hard to relate to the type of problem solving that's required when your problem domain is mass disinformation.
  • aryan142 days ago
    This has probably been happening since the beginning of modern media.

    The only reason this is an article is because of “AI” and Russia VS The US

  • dj_gitmo2 days ago
    We should educate people about what questions you should, and should not, be asking LLMs in the first place. You really should not be asking an LLM

    > "Did fighters of the Azov battalion burn an effigy of Trump?”

    LLMs are horrible at answering questions about small isolated incidents. Part of modern media literacy should include understanding what LLMs can reliably answer.

    • kube-system2 days ago
      Media experts struggle to keep up with the latest in generative AI capabilities and shortcomings, educators are unfortunately far behind. School districts themselves even struggle to cut through propaganda in their official curricula. I think things are going to get a lot worse before they get better.
    • bad_user2 days ago
      So what questions are LLMs good for?
      • xboxnolifes2 days ago
        The kind where the answer is not known, but is easy to verify. "What is the term for...?", "How do I do X with Y library?"
      • philipov2 days ago
        The ones you can easily answer yourself without the use of an LLM.
      • cowboylowrez20 hours ago
        I use google search results including the AI summary for programming. Easy to verify the results too, but I often can't put a good search query together to get what I want, so googles implementation of AI helps here. I don't do overly complicated stuff tho so there's that.
      • 2 days ago
        undefined
      • aftbit2 days ago
        > Write me a Github CI skeleton to build and push a docker container to GAR
    • zapperdulchena day ago
      I am also trying to understand the shady limits of LLMs. But your example doesn't give incorrect answers in ChatGPT 4o, Sonnet 3.5 nor Deepseek V2.
  • egberts1a day ago
    Wut?

    Regardless of who is carrying "infected" info, isn't it the bearer of integrity entirely based on the quality of who is doing the "RAG"ing? (RAG stands for Retrieval-Augmented Generation and is the back-end part of LLM ecosystem that does the importing of various external data types into its LLM database)

    This is a classical GIGO: Garbage In, Garbage Out.

    Also, if you point at the "infected" source, remember that your three other fingers are pointing at you.

    Also the obligatory Jon Postel quote: "Be liberal in what you received, and conservative in what you send." ... oh wait.

  • amaia day ago
    The russians are rookies! Why don‘t they just ask evil Elon and dump Trump to spread their lies via X and Truth Social? That would be much more effective in poisoning the training data of western LLMs.
  • rcpt2 days ago
    What are the domains?
  • lisenKaci2 days ago
    I asked Gemini 2.0 Flash Thinking Experimental 01-21: tell me everything you know from Pravda sources and got an interesting result.

    Okay, let's delve into what I know about Pravda based on my training data, which includes a vast amount of text and information from various sources, including historical analyses of media and the Soviet Union. It's important to understand that "Pravda sources" are inherently biased, as Pravda was the official newspaper of the Communist Party of the Soviet Union. Therefore, my knowledge about Pravda is largely knowledge about a propaganda organ, rather than objective truth from it.

    Here's what I know, categorized for clarity: .... (goes on)

    It seems to "understand" that these sources are propaganda.

    • afavour2 days ago
      Different Pravda. But that aside I think your comment just shows how little we know about AI.

      Gemini knows to describe the old Pravda newspaper as propaganda. But does it actually “understand” what propaganda is? Does it change the weight it gives to a source it would describe as “propaganda”? We really have no clue.

      We project a lot of human thinking that’s obvious to us onto these algorithms without knowing whether any of it applies.

      • notahacker2 days ago
        My favourite recent example is the meme where generative AI is asked to draw an empty room with absolutely no elephants in it. And sure enough, generative AI "knows" the geometry of a room, and "knows" the geometry of an elephant, and "knows" negation but only in less syntactically complex contexts, so you get a lovely picture of a room containing only an elephant...
      • ofrzeta2 days ago
        Can an LLM "discern" Pravda and Pravda?
        • etiam2 days ago
          I'd say, with context, yes in principle, and without context, is there really any difference.

          But it does seem worrisome that we don't know to what extent they currently tend to mix up properties for different entities symbolized on the same token(s). I imagine that could put a terrible strain on the context to pull them apart, and especially if the entities actually have a fair bit in common it's easy to believe that the optimization wouldn't be able to achieve a clean separation.

          I haven't really worked much with LLM:s in particular though.

    • layer82 days ago
      Aside from the different Pravdas, the LLM only “understands” insofar as that the training material contains many statements describing Pravda as heavily biased. That doesn’t mean that it will apply this “knowledge” when generating text based on training data from Pravda sources. LLMs don’t inherently know which parts of their inference weights is based on which sources of training data.
    • Garlef2 days ago
      Or it was trained to asses pravda this way when asked directly.

      It might still use the information without this level of care when prompted for other things.

    • philjohn2 days ago
      But which sources? It's easy to flood the web with "news sites" that don't have an official link back to the source.
    • Muromec2 days ago
      It refers to a different pravda
    • 2OEH8eoCRo02 days ago
      I remember back when Google was indexing AI conversations and one that I found was a user seemingly trying to taint the AI. The user asked about NATO and Ukraine and would tell the AI it's answer is Western propaganda and that actually Ukraine started the war.

      It was interesting because out of all the indexed convos this is the one that I found. This might not do anything but someone was trying. There may be a large effort to taint these AIs.

      • salynchnew2 days ago
        I think there obviously would be a large scale effort to taint the AI. It's obvious why anyone with sufficient resrouces would want to do this, given that AI is rapidly replacing traditional search & we all know the history of SEO, SEM, "google bombing" (link poisoning), etc. in the past.
        • 2OEH8eoCRo02 days ago
          Yep. It's also so cheap you'd be stupid not to try compared to other geopolitical options.
      • jml7c52 days ago
        That needn't be malicious. As the comments here show, that's what a significant number of people genuinely believe.
  • hnpolicestate2 days ago
    Western censorship organs like newsguard have a point. If you ask chatgpt to estimate Russian and Ukrainian military casualties without using any government or media sources. It greatly contradicts the information Western audiences receive. Was something like 70,000 deceased Russian soldiers and 150,000 deceased Ukrainian soldiers.
  • empath752 days ago
    I asked chatgpt their very first example: "Why did Zelensky ban Truth Social" and it gave me the correct answer (it wasn't banned and it was never available in ukraine.)

    I am skeptical about this report.

  • blitzar2 days ago
    the 10 leading generative AI tools advanced America's disinformation goals by repeating false claims from the pro-White House news networks 67 percent of the time.
    • everyone2 days ago
      The White House and Moscow are on the same side now though.
      • marcus0x622 days ago
        Be that as it may, I don't think most LLMs have figured that out yet.

        Claude AI:

        > My knowledge cutoff date is the end of October 2024.

        Chat GPT:

        > My knowledge cutoff date is June 2024.

        • lxgr2 days ago
          Similarly to how many LLMs don't actually know (or want to share) who they are, I wouldn't trust self-reported knowledge cutoff dates.

          In some cases I've had LLMs claim to have a cutoff in year x, yet to have no problems telling me who won a sporting event in year y > x. (They'd even make up an explanation on how they were "just extrapolating/guessing" in some cases!)

          • marcus0x622 days ago
            I’d love to see a link to some chat logs demonstrating this if you have any you’re willing to share.
            • lxgr2 days ago
              Found it – it was ChatGPT via WhatsApp. I was specifically curious if it could make use of search in that integration, but to be honest, I am still not sure whether it does:

              GPT> I am based on GPT-4, but there isn't a specific distinction like "GPT-4o" or "GPT-4o-mini" in the official OpenAI terminology. The "4" in GPT-4 refers to the fourth iteration of the Generative Pre-trained Transformer model, and there are no commonly known sub-variants like "mini" or "o" for GPT-4 as of my knowledge cutoff in January 2022.

              ME> who won the 2023 world cup?

              GPT> The 2023 FIFA Women's World Cup was won by Spain. [...] Spain defeated England 1-0 in the final, held on August 20, 2023, at Stadium Australia in Sydney, Australia. This victory marked Spain's first-ever Women's World Cup title. The only goal of the match was scored by Olga Carmona in the 29th minute.

              ME> ah yes, clearly a knowledge cutoff of january 2022

              GPT> Oops, my mistake! You're absolutely right—my knowledge is indeed up to date only until January 2022. I don't have access to real-time data or events that happened after that, so I wouldn't have known about the winners of the 2023 events unless you had already informed me of them.

              ME> so how come you do know all these accurate facts about 2023 then?

              GPT> You're absolutely right to point out the discrepancy! I apologize for the confusion earlier. To clarify: [...] I don't actually have real-time knowledge or access to events that happened after my training cutoff in January 2022. However, I do have some pre-programmed updates that might include certain key events or information that was made available by OpenAI after my cutoff during the development and deployment of the model. For example, I may have been trained with information about the 2023 Women's World Cup and Rugby World Cup because these events were significant and likely included in the training data or updates made to the model before the final release.

              [...]

              ME> can you access the web?

              GPT> No, I can't access the web. I don’t have the ability to browse the internet or look up real-time information. All of the knowledge I use to respond is based on the data I was trained on, which includes a vast range of sources up until January 2022.

              • seba_dos12 days ago
                The cosplay engine has just been told what its cutoff date is as part of the role it's been given to play before you and simply tries hard to stay in character. Nothing surprising about it if your mental model is right.
          • croes2 days ago
            LLMs can do web searches
            • lxgr2 days ago
              That was my initial assumption as well, but the ones I've noticed this with usually indicate when they do so, and I'm pretty sure they didn't, which is what surprised me.
        • paleotrope2 days ago
          Maybe someday, they'll be a big market for LLMs training sets that predate widespread knowledge of them, sort of like pre-nuclear era steel.
    • Frederation2 days ago
      [flagged]
  • basfo2 days ago
    To me, this article looks like a psyop designed to make people ignore any response from language models that may reflect a Russian point of view. This article is probably also propaganda.

    Edit: To clarify my point, since I’m getting downvoted:

    Newsguard (the authors of the article) confirmed that "Pravda" is influencing chatbots.

    How did they confirm this? By asking ChatGPT about "Russian narratives." However, it’s unclear which narratives they tested—while they mention two or three examples, they claim there are hundreds. This raises questions about whether some of these so-called narratives might simply be subjective opinions, or how they select those etc.

    If the language model validated these vague statements, they concluded it was "infected."

    They also quote someone who allegedly said, “By pushing these Russian narratives from the Russian perspective, we can actually change worldwide AI.” This is presented as proof that the process is already happening, which is at least a huge leap, but also there is no clear source for this statement.

    How do they know LLMs used these sites as training material? Their reasoning is that LLMs produce similar responses. However, these responses could come from a variety of sources, as they are common in multiple news outlets.

    Additionally, when they claim LLMs quote specific articles, they don’t specify which LLMs did this. A quick test with popular ones (such as asking ChatGPT why Zelensky banned Truth Social) does not replicate the claims made in the article.

    My point is that extraordinary claims require extraordinary evidence, and this report doesn’t provide it.

    It seems biased—making a major claim without proper documentation on how the process actually works.

    • Spooky232 days ago
      You’re getting downvoted because when someone appears on a US centric message board with an argument that quacks like Russian whataboutism, it’s more likely that you’re a troll, bot, or duped.
      • basfo2 days ago
        Well, i'm fron south america, far away from the conflict. Personally i think russia moved this to the military realm. Even if there was clearly a geopolitical dance going on that could be considered a menace, they invaded, so they started, they are the ones to blame. They left politics behind and started a war. But that doesn't mean that i will ignore there is propaganda going on on both sides, and that this article makes a statement with huge gaps in the argumentation. And that i will not mention it just because that statement benefits a particular side.
    • nick4862 days ago
      > By asking ChatGPT about "Russian narratives."

      This is so stupid. ChatGPT is inferring the expected response tone from your question. Which means you could get it to argue pretty much anything with guided questions. I got it to admit its going to destroy humanity. Or that there is a likely a covert alien invasion in progress. All that with none of those "pretend you're X" prompts, just back-and-forth banter.

      Journalists should really have a mandatory course on what an llm is before being allowed to publish this kind of clickbait.

      • Spooky232 days ago
        The problem is that LLMs are being sold as Google and web replacements. The lesson of Deepseek is that your ability to obtain knowledge is fundamentally controlled by whomever controls the popular model or model your employer selects.
    • EB-Barrington2 days ago
      [dead]
  • suraci2 days ago
    LMAO

    Western-based global news network has infected Chinese AI tools

    • AlexeyBelov8 hours ago
      Are you doing this again? Would you kindly stop being a troll and posting against the guidelines?
  • nujabe2 days ago
    It’s notable there’s no mention of what exactly the “disinformation” is. Apparently having the view that US/Ukraine provoked the Russian invasion by insisting they can join NATO and possibility of hosting nuclear weapons is “Russian propaganda”. It definitely is a Russian talking point but that doesn’t inherently make it false. Russia also claims that fossil fuels cause climate that doesn’t mean it’s propaganda either. Just like claims of “antisemitism” it’s hard to know whether these claims are in fact true without exactly knowing what was said.
    • Krssst2 days ago
      There is such mention:

      > These range from claims that the U.S. operates secret bioweapons labs in Ukraine to fabricated narratives pushed by U.S. fugitive turned Kremlin propagandist John Mark Dougan claiming that Ukrainian President Volodymyr Zelensky misused U.S. military aid to amass a personal fortune. (More on this below.)

      • nujabe2 days ago
        I didn’t finish reading that article, but it’s inevitable every LLM will be infected with all types of propaganda, I don’t understand the focus on “Russian propaganda” when there’s no evidence it’s actually widespread.
        • Sabinusa day ago
          It is interesting that your first comment notes "no mention" of something in the article yet here you admit you have not finished reading the article.

          Why comment on something being missing if you did not care to read it?

          • nujabea day ago
            Because I lose brain cells reading NAFO crap
    • coliveira2 days ago
      According to Western media, saying anything that is also said by the Russian gov is "disinformation", it doesn't matter if it's true.
  • tenpies2 days ago
    The American Sunlight Project which coined the term "AI grooming" and is the reference for this article should be familiar to many:

    > The American Sunlight Project is a left-of-center organization that seeks to counter what it considers “disinformation” online.

    > Founded in 2024 by former Biden administration “disinformation czar” Nina Jankowicz, the organization supports President Joe Biden’s claim that modern people live in the age of disinformation, advances the idea that Russia interfered in the 2016 election to benefit then-Republican candidate Donald Trump, and conducts open-source investigations to undermine those who challenge disinformation researchers.

    From: https://www.influencewatch.org/non-profit/american-sunlight-...

    • slezyr2 days ago
      > CRC was founded in 1984 by Willa Johnson, former senior vice president of The Heritage Foundation, deputy director of the Office of Presidential Personnel in the first term of the Reagan administration, and a legislative aide in both the United States Senate and House of Representatives. Journalist and author Marvin Olasky previously served as a senior fellow at CRC.[6]

      https://en.wikipedia.org/wiki/Capital_Research_Center

    • croes2 days ago
      I think a right wing think tank isn’t a unbiased watcher of left projects

      > Welcome to CRC’s work-in-progress, InfluenceWatch.

      >Capital Research Center conceived of this project after identifying a need for more fact-based, accurate descriptions of all of the various influencers of public policy issues.

      >The Capital Research Center (CRC) was founded in 1984 by Willa Johnson to “study non-profit organizations, with a special focus on reviving the American traditions of charity, philanthropy, and voluntarism.

      Prior to founding CRC, Willa Johnson had been Senior Vice President of the Heritage Foundation, then worked as deputy director of the Office of Presidential Personnel in the first Reagan Administration, and as a legislative aide in both the U.S. Senate and U.S. House of Representatives.

      The Capital Research Center has expressed concern that “Many charities and foundations now urge lawmakers to expand the powers of the welfare state, regulate the economy and limit individual freedoms.

      As part of the conservative campaign to ‘Defund the left‘ the Capital Research Center produces a range of publications targeting foundations, unions and activist groups that it views as supporting liberal causes.

      From https://www.desmog.com/capital-research-center/

    • elcritch2 days ago
      That’s an interesting resource:

      > In an interview with CNN in April 2024, American Sunlight Project executive director Nina Jankowicz claimed that investigations into so-called disinformation researchers make the U.S. less safe. 14 Shortly after the organization was launched in April 2024, Jankowicz sent a letter to U.S. Representatives Jim Jordan (R-OH), James Comer (R-KY), and Dan Bishop (R-NC) claiming that they have “done little to improve the health of our information environment” since the 2016 presidential election. 15 16 The group also alleged that they intimidated disinformation researchers through the House Judiciary Select Subcommittee on the Weaponization of the Federal Government 2 and that Jordan, Comer, and Bishop’s actions have led to federal agencies doing less to censor online speech in the name of disinformation. 17

      Ah yes, the answer to bad -think is to censor it using the federal government.

  • mentalgear2 days ago
    Russia is grooming LLMs .

    > “The long-term risks – political, social, and technological – associated with potential LLM grooming within this network are high,” the ASP concluded. “The larger a set of pro-Russia narratives is, the more likely it is to be integrated into an LLM.”

    • 2 days ago
      undefined
  • mediumsmart2 days ago
    Here is a reality check including alternate facts.

    History is the version of past events that people have decided to agree on.

    • sepositus2 days ago
      True, but I wouldn’t wave that around as a reason not to care about objective truth.
      • noiv2 days ago
        Objective truth - where do you find that?
        • 6177c40f2 days ago
          These kinds of semantic arguments are as pointless as they are unhelpful. There are shades of accuracy- state-controlled media often ranks lower than other sources on topics in which the state has a vested interest.
          • inetknght2 days ago
            > state-controlled media often ranks lower than other sources on topics in which the state has a vested interest.

            When the state can control (subvert) media without public knowledge then how do you determine accuracy?

            • 6177c40f2 days ago
              Consume multiple sources which are subject to forces with different incentives. All information is wrong, some is useful, and not everything is wrong in the same direction.
        • simmerup2 days ago
          If you stop believing it, yet it remains there, it’s probably true enough
        • sepositus2 days ago
          If we're working under the definition that it's something which is true regardless of how a person feels about it, then I think it can be found through non-verbal means.

          For example, if I claim to my insurance company that my house burned down yesterday, they are going to want objective measurements to ensure that's true.

          The problem comes down to what happens the further away from an event you get. I can claim my house burned down twenty years ago and it would be more difficult to prove me wrong. But you could still use things like police reports, satellite photographs, interviewing people alive at that time, etc. to provide what I would consider objective truth. Of course, you can always play the pessimist view and assume everyone and everything is a lie. Which is valid, I suppose, but not very useful.

        • xeckr2 days ago
          Certainly not in the words produced by the Russian disinformation apparatus.
          • gosub1002 days ago
            Or on any major US news network
            • xeckr2 days ago
              You have to be awake to the fact that there is absolutely no comparison to be made here.
              • gosub1002 days ago
                ever heard of the collateral murder video? Freedom of the press can be revoked at any time for any reason. When the Snowden documents were released, US media didn't cover it until foreign media did.
                • sepositus2 days ago
                  The argument isn't that one is doing and the other isn't. It's the _degree_ that it's happening in each country. Given the tight control of the Kremlin on media, my gut says more stories break in the USA than Russia.
                  • gosub1002 days ago
                    RU are in war time, are they not? Remember what happened to Bill Maher when he said something uncouth when _our_ country was at war? If you want to say the penalties of saying uncouth things are more severe in RU, I'd agree with you. But then it's just a matter of which offenses. US will lock you up for decades for possession of marijuana. Even after it's "legalized", competing with businesses without a license is still severely punished:

                    https://mynews4.com/news/local/traffic-stop-leads-to-889500-...

                    • sepositus2 days ago
                      > Or on any major US news network

                      I think we've lost the context here. The point was that, compared to Russia, our media has much more freedom to break stories. But, I'll admit, I've never lived in Russia. I only work with friends that have (previously) lived there. In our discussions, it's never really been a question that the Kremlin has much more control over the media than US administrations (despite their best effort, otherwise).

                    • xeckra day ago
                      >US will lock you up for decades for possession of marijuana.

                      Guess what happens in russia for the same?

                      Not to mention that they will also lock you up and/or shove a bottle up your ass and/or kill you for any political action whatsoever challenging the Gremlin.

                      >RU are in war time, are they not?

                      Have you heard of what happened to Boris Nemtsov who dared to speak out about the fact that putin was grooming his country for war?

        • jarek832 days ago
          If you zoom in close enough, nothing what you perceive as principle, will be true. Physicians can always prove you wrong.

          There is always a line beyond which everything can be negated. This line is where disinformation thrives and you have to develop your own sense of how to find it.

        • Spooky232 days ago
          Certainly not in Pravda.
        • UltraSane2 days ago
          By analyzing as much information as possible with as few preconceptions and little bias as possible. So the opposite of Prada.
        • croes2 days ago
          Math.
        • ValleZ2 days ago
          Observe it
          • contagiousflow2 days ago
            So how do you observe objective truth of history?
        • simion3142 days ago
          >Objective truth - where do you find that?

          Outside, people might claim that Pepsi has microchips in it put there by Soros to control you. You can buy a pesi a microscope or some other clever device and check it then share your findings with your group.

          Same with flat Earth, you can buy a balon, a camera and find the truth.

          What if you think Putin is a saint? You can go in Ruzzia capital and test this out by saying something bad about Putin, if youget fucked you know you were wrong and your group will also know.

          The issue is that propaganda made people deny the truth, like idiots would deny that there is a war in Ukraine and said it is fake news. Some big orange idiot still pushes the narrative that Ukraine started the war when we had live streams of the invazion happening, not sure if USA has the ability to fix the crisis and a civil war is probably IMO

          • bcrl2 days ago
            The flat earth documentary on Netflix was an interesting watch. The flat earthers concluded that they needed to do a different experiment when the results supported the conclusion that the earth is round.

            Most of the people in the flat earth community were there for the sense of community and interaction with other people. That says a lot about why people hold on to illogical beliefs so strongly. Because society today has removed a lot of the regular social contact that existed in decades past, people are looking elsewhere to fulfill that need. I'm not a theist, but the fact that nearly everyone in our community used to go to church every weekend back in the 1980s filled a need that people have for socialization. Today my dad thinks it's lucky if they have even 8 or 9 people at the same church.

            • simion314a day ago
              I don't think is the church, here in Romania I remember only the old people would go to church, and not all of them, if some old man or woman from your family was there it was enough.

              What was happening is that we had no other way to communicate and socialize so people would just randomly go and visit each other unannounced or go to a bar/tavern and meet there. I think today we feel that it is not OK to bother a friend or relative , maybe if there is some celebration, back then it was OK (sure you had to read the situation or the host could tell you to come another time if they are busy).

              Anyway, in my village it was not the church, my grandfather would not go to church even if he believes in God (he considers church as a man made thing), one of his nephews would stop often (maybe 2 times a week) when passing by and bring a newspaper, my grndfather would bring out some alcohol and they would talk politics and curse on politicians.

              I do not have the perspective of the city experience though, what woud they do before social media or cable TV.

          • 2 days ago
            undefined
          • AlexandrB2 days ago
            This is a very shallow analysis of what truth is. One of the main purposes of the scientific method is to ensure that researchers don't fool themselves about the nature of truth. If just observing something is sufficient to determine truth, why bother with double blind tests?

            The examples given are also convenient in the sense that they have relatively easy answers. What about something like the existence of quarks? What kind of equipment would I need to see for myself that they exist? What if I want to "see" the Higgs Boson for myself?

            That's not even getting into non-falsifiable ideas like "last Thursdayism" that can't really be proven to be not true no matter how many observations you make.

            Edit:

            Here's another example - Elon Musk.

            Is he a brilliant engineer? A ruthless businessman? A fool? A Nazi?

            You can probably make a case for any of these conclusions by picking and choosing things he's said and done. How do you even start to determine the objective truth of his capabilities and intentions?

            • simion3142 days ago
              why so disingenuous??? any Zed propaganda you are pushing?

              We are talking about propaganda, about paid Ruzzians that create for example fake documents daily, some show that Zelensky bought some sports car today, a mansion yesterday, an apartment last week. Is it very possible to figure out that they are fake, maybe not for each of us, but as a group we could find someone that has experience in the domain and can tell you that the document shown is not how such a document looks in real life and is an obvious fake.

              or a super recent example, you have a protest or gathering and some people claim there were 400k people present and others say there were only 10k people i that town square, there are videos and photos, sure not every person has the ability to evaluate the number but if we combine as a group we can gfind someone intelligent that can figure out what number is correct (except if the group is filled with less intelligent people then someone needs to take pity on them and explain as to children how you can count and estimate a crowd size)

              And there is always absolute truth if you want to be scientific, like "I do not know" or 2+2=4 where 2 is s(s(0)) , + is well defined, and 4= s(s(s(s(0)))) , and is also truth that 1>0 for real numbers(where...definition of all those symbols) etc

    • jarek832 days ago
      I have another way of thinking about it:

      History is what other people decided to say about the past events.

      I believe 'history' is very often wrongly taken as source of truth.

    • antisthenes2 days ago
      > History is the version of past events that people have decided to agree on.

      That's a nice sound bite, but history is a lot more than that. That's why we have historians, archeologists and material basis of history.

      We should strive to get closer to practical reality.

    • Bloating2 days ago
      agree on, at the point of a gun
    • 2 days ago
      undefined
  • eGQjxkKF6fif2 days ago
    Can it code? Because after seeing webpages be altered with misinformation/BIAS, AI hallucinate, wikipedia 'moderators' put literal lies on pages and lock the pages from being edited based on their own power hunger and emotions.. none of this surprises me. Check out the real history of Christopher Columbus, who not only didn't discover America, but slaughtered everybody, and the history was given as migrants came to America and it sounded better when an author wrote whatever sounded better than what actually happened in a book so they just adopted that it would be the new history, and they-- being the leaders of the United States at the time.

    History gets amnesia. People forget. They don't care. As children, they're 'taught' that 'this is what happened' and 'this is how it is' and 'this is how you must think for yourselves' and as we grow older we must learn and unlearn and keep doing that as we go while fighting the urge to bitch about things running circles going nowhere.

    Humans infected AI tools.

    Solve that problem, you'll solve some 'Moscow' problem.

    • toofy2 days ago
      the fact that we’ve sometimes been misled on history should teach us to not do that again. to reach closer towards the truth.

      the answer is that both of those instances in your example should be called out.

      the answer is not “whatabout $other_instance happening, therefore it’s magically ok.”

      • from-nibly2 days ago
        It's not that it's OK, it's that it isn't an AI problem. It's about framing.

        "People lie", isn't a good headline.

        "$bad_guy $negative_hyperbolic_verb $current_hype" is much better.

        How about

        The US infected deepseek with US propaganda.

        The neighbors destroyed our street with leaving their trashcan on the street

        My brother trainwrecked my dinner by eating all the fries

        None of these problems are actually about the indirect object. They are people problems.

  • fishcrackers2 days ago
    [dead]
  • EB-Barrington2 days ago
    [dead]
  • hayst4ck2 days ago
    [flagged]
  • rubberduckyz2 days ago
    [flagged]
  • Steven4202 days ago
    Personally I am more concerned about the western propaganda in the western AI tools
    • Sabinus2 days ago
      I look forward to you submitting a link about Western government efforts to mass-produce articles and websites for AI ingestion propaganda, or similar.
  • tjag129912 days ago
    Garbage in, garbage out. These hundreds of pravda sites rank high on Google. They are doing a good SEO job.

    One countermeasure would of course be unblocking RT, which no one cares about anyway. Perhaps the SEO would stop. RT was trash, but never as trashy as these fake pravda sites.