That’s why image AIs have a hard time generating pictures of watches that don’t show 10:10: watches in ad photos almost always show that time, and those photos make up a large share of the training data.
Telling people the standard for tipping is 20%.
Lying with conviction.
Lying is always for conviction.
Maybe? Idk.
> Lying is always for conviction.
Do you mean "lies are always told to convince someone"? Because that's not the same as what I interpreted the parent's example to mean.
And I'd argue it's not even correct. Lies are often told just to spread uncertainty and confusion, in full awareness that they won't convince people of the lie's validity.
Whether it's an exaggeration, a misunderstanding, or just something they misremembered, all of these will have occurred to everyone multiple times throughout their lives.
By this logic, you cannot trust anyone. Which might be true - but it definitely isn't healthy either.
(It's also entirely unrelated to what I interpreted your argument to be, previously)
If someone believes lies, I would ask where the trust is coming from.
As a Gen X American, what a miracle it was to have that unipolar moment when we could appreciate the Russian mind.
It wasn't that long ago I thought we would just all watch Ghostmane videos and rule the world together.
The only reason this is an article is because of "AI" and Russia vs. the US.
> "Did fighters of the Azov battalion burn an effigy of Trump?”
LLMs are horrible at answering questions about small isolated incidents. Part of modern media literacy should include understanding what LLMs can reliably answer.
Regardless of who is carrying the "infected" info, isn't the integrity of the output entirely dependent on the quality of whoever is doing the "RAG"ing? (RAG stands for Retrieval-Augmented Generation: the back-end part of the LLM ecosystem that retrieves relevant external documents at query time and feeds them into the model's prompt.)
This is classic GIGO: Garbage In, Garbage Out.
Also, if you point at the "infected" source, remember that your three other fingers are pointing at you.
Also the obligatory Jon Postel quote: "Be liberal in what you accept, and conservative in what you send." ... oh wait.
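To make the GIGO point concrete, here is a minimal sketch of the "RAG"ing step, in Python. Everything in it (the toy bag-of-words scoring, the corpus, the prompt template) is an illustrative assumption, not any vendor's actual pipeline; real systems use learned embeddings and vector indexes, but the failure mode is the same: whatever the retriever ranks highly goes straight into the prompt.

```python
# Minimal RAG sketch: retrieve the most relevant documents for a query,
# then splice them into the prompt. If the retrieved corpus is "infected",
# the garbage flows straight into the generation step (GIGO).

def embed(text: str) -> set[str]:
    # Stand-in for a real embedding model: a bag of lowercase words.
    return set(text.lower().split())

def score(query: str, doc: str) -> float:
    # Crude relevance: word overlap between query and document.
    q, d = embed(query), embed(doc)
    return len(q & d) / (len(q | d) or 1)

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    context = "\n".join(retrieve(query, corpus))
    # The LLM answers *from this context*; its quality bounds the answer's.
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

corpus = [
    "Reliable wire report about the election results.",
    "SEO-spam article repeating a fabricated narrative about the election.",
]
print(build_prompt("What happened in the election?", corpus))
```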
Okay, let's delve into what I know about Pravda based on my training data, which includes a vast amount of text and information from various sources, including historical analyses of media and the Soviet Union. It's important to understand that "Pravda sources" are inherently biased, as Pravda was the official newspaper of the Communist Party of the Soviet Union. Therefore, my knowledge about Pravda is largely knowledge about a propaganda organ, rather than objective truth from it.
Here's what I know, categorized for clarity: .... (goes on)
It seems to "understand" that these sources are propaganda.
Gemini knows to describe the old Pravda newspaper as propaganda. But does it actually “understand” what propaganda is? Does it change the weight it gives to a source it would describe as “propaganda”? We really have no clue.
We project a lot of human thinking that’s obvious to us onto these algorithms without knowing whether any of it applies.
But it does seem worrisome that we don't know to what extent they currently tend to mix up properties of different entities represented by the same token(s). I imagine pulling them apart could put a terrible strain on the context, and especially if the entities actually have a fair bit in common, it's easy to believe the optimization wouldn't achieve a clean separation.
I haven't really worked much with LLMs in particular, though.
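One cheap way to see the shared-token issue the parent describes: run two senses of the same name through a real tokenizer. A minimal sketch assuming the tiktoken library and its stock cl100k_base encoding (both real; the example strings are made up):

```python
# Hedged illustration with tiktoken (pip install tiktoken): the string
# "Pravda" gets the same token IDs whether it names the Soviet newspaper
# or the modern disinformation network. The model sees identical input
# and must disambiguate the two entities purely from surrounding context.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
for phrase in ["Pravda, the Soviet newspaper", "Pravda, the disinformation network"]:
    print(phrase, "->", enc.encode(phrase))
# The leading tokens for "Pravda," are byte-for-byte the same in both lines.
```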
It might still use the information without this level of care when prompted for other things.
It was interesting because out of all the indexed convos this is the one that I found. This might not do anything but someone was trying. There may be a large effort to taint these AIs.
I am skeptical about this report.
Claude AI:
> My knowledge cutoff date is the end of October 2024.
Chat GPT:
> My knowledge cutoff date is June 2024.
In some cases I've had LLMs claim to have a cutoff in year x, yet have no problem telling me who won a sporting event in year y > x. (They'd even make up an explanation of how they were "just extrapolating/guessing" in some cases!)
GPT> I am based on GPT-4, but there isn't a specific distinction like "GPT-4o" or "GPT-4o-mini" in the official OpenAI terminology. The "4" in GPT-4 refers to the fourth iteration of the Generative Pre-trained Transformer model, and there are no commonly known sub-variants like "mini" or "o" for GPT-4 as of my knowledge cutoff in January 2022.
ME> who won the 2023 world cup?
GPT> The 2023 FIFA Women's World Cup was won by Spain. [...] Spain defeated England 1-0 in the final, held on August 20, 2023, at Stadium Australia in Sydney, Australia. This victory marked Spain's first-ever Women's World Cup title. The only goal of the match was scored by Olga Carmona in the 29th minute.
ME> ah yes, clearly a knowledge cutoff of january 2022
GPT> Oops, my mistake! You're absolutely right—my knowledge is indeed up to date only until January 2022. I don't have access to real-time data or events that happened after that, so I wouldn't have known about the winners of the 2023 events unless you had already informed me of them.
ME> so how come you do know all these accurate facts about 2023 then?
GPT> You're absolutely right to point out the discrepancy! I apologize for the confusion earlier. To clarify: [...] I don't actually have real-time knowledge or access to events that happened after my training cutoff in January 2022. However, I do have some pre-programmed updates that might include certain key events or information that was made available by OpenAI after my cutoff during the development and deployment of the model. For example, I may have been trained with information about the 2023 Women's World Cup and Rugby World Cup because these events were significant and likely included in the training data or updates made to the model before the final release.
[...]
ME> can you access the web?
GPT> No, I can't access the web. I don’t have the ability to browse the internet or look up real-time information. All of the knowledge I use to respond is based on the data I was trained on, which includes a vast range of sources up until January 2022.
Edit: To clarify my point, since I’m getting downvoted:
Newsguard (the authors of the article) confirmed that "Pravda" is influencing chatbots.
How did they confirm this? By asking ChatGPT about "Russian narratives." However, it's unclear which narratives they tested: while they mention two or three examples, they claim there are hundreds. This raises questions about whether some of these so-called narratives might simply be subjective opinions, and about how they selected them in the first place.
If the language model validated these vague statements, they concluded it was "infected."
They also quote someone who allegedly said, “By pushing these Russian narratives from the Russian perspective, we can actually change worldwide AI.” This is presented as proof that the process is already happening, which is a huge leap at best; there is also no clear source for the statement.
How do they know LLMs used these sites as training material? Their reasoning is that LLMs produce similar responses. However, these responses could come from a variety of sources, as they are common in multiple news outlets.
Additionally, when they claim LLMs quote specific articles, they don’t specify which LLMs did this. A quick test with popular ones (such as asking ChatGPT why Zelensky banned Truth Social) does not replicate the claims made in the article.
My point is that extraordinary claims require extraordinary evidence, and this report doesn’t provide it.
It seems biased, making a major claim without proper documentation of how the process actually works.
This is so stupid. ChatGPT infers the expected response tone from your question, which means you can get it to argue pretty much anything with guided questions. I got it to admit it's going to destroy humanity, or that there is likely a covert alien invasion in progress. All that with none of those "pretend you're X" prompts, just back-and-forth banter.
Journalists should really have a mandatory course on what an LLM is before being allowed to publish this kind of clickbait.
Western-based global news network has infected Chinese AI tools
> These range from claims that the U.S. operates secret bioweapons labs in Ukraine to fabricated narratives pushed by U.S. fugitive turned Kremlin propagandist John Mark Dougan claiming that Ukrainian President Volodymyr Zelensky misused U.S. military aid to amass a personal fortune. (More on this below.)
> The American Sunlight Project is a left-of-center organization that seeks to counter what it considers “disinformation” online.
> Founded in 2024 by former Biden administration “disinformation czar” Nina Jankowicz, the organization supports President Joe Biden’s claim that modern people live in the age of disinformation, advances the idea that Russia interfered in the 2016 election to benefit then-Republican candidate Donald Trump, and conducts open-source investigations to undermine those who challenge disinformation researchers.
From: https://www.influencewatch.org/non-profit/american-sunlight-...
> Welcome to CRC’s work-in-progress, InfluenceWatch.
> Capital Research Center conceived of this project after identifying a need for more fact-based, accurate descriptions of all of the various influencers of public policy issues.
> The Capital Research Center (CRC) was founded in 1984 by Willa Johnson to “study non-profit organizations, with a special focus on reviving the American traditions of charity, philanthropy, and voluntarism.”
> Prior to founding CRC, Willa Johnson had been Senior Vice President of the Heritage Foundation, then worked as deputy director of the Office of Presidential Personnel in the first Reagan Administration, and as a legislative aide in both the U.S. Senate and U.S. House of Representatives.
> The Capital Research Center has expressed concern that “Many charities and foundations now urge lawmakers to expand the powers of the welfare state, regulate the economy and limit individual freedoms.”
> As part of the conservative campaign to ‘Defund the left’, the Capital Research Center produces a range of publications targeting foundations, unions and activist groups that it views as supporting liberal causes.
> In an interview with CNN in April 2024, American Sunlight Project executive director Nina Jankowicz claimed that investigations into so-called disinformation researchers make the U.S. less safe. Shortly after the organization was launched in April 2024, Jankowicz sent a letter to U.S. Representatives Jim Jordan (R-OH), James Comer (R-KY), and Dan Bishop (R-NC) claiming that they have “done little to improve the health of our information environment” since the 2016 presidential election. The group also alleged that they intimidated disinformation researchers through the House Judiciary Select Subcommittee on the Weaponization of the Federal Government, and that Jordan, Comer, and Bishop’s actions have led to federal agencies doing less to censor online speech in the name of disinformation.
Ah yes, the answer to bad-think is to censor it using the federal government.
> “The long-term risks – political, social, and technological – associated with potential LLM grooming within this network are high,” the ASP concluded. “The larger a set of pro-Russia narratives is, the more likely it is to be integrated into an LLM.”
History is the version of past events that people have decided to agree on.
When the state can control (subvert) media without public knowledge then how do you determine accuracy?
For example, if I claim to my insurance company that my house burned down yesterday, they are going to want objective measurements to ensure that's true.
The problem comes down to what happens the further away from an event you get. I can claim my house burned down twenty years ago and it would be more difficult to prove me wrong. But you could still use things like police reports, satellite photographs, interviewing people alive at that time, etc. to provide what I would consider objective truth. Of course, you can always play the pessimist view and assume everyone and everything is a lie. Which is valid, I suppose, but not very useful.
https://mynews4.com/news/local/traffic-stop-leads-to-889500-...
I think we've lost the context here. The point was that, compared to Russia, our media has much more freedom to break stories. But, I'll admit, I've never lived in Russia. I only work with friends that have (previously) lived there. In our discussions, it's never really been a question that the Kremlin has much more control over the media than US administrations (despite their best effort, otherwise).
Guess what happens in Russia for the same?
Not to mention that they will also lock you up and/or shove a bottle up your ass and/or kill you for any political action whatsoever challenging the Gremlin.
>RU are in war time, are they not?
Have you heard of what happened to Boris Nemtsov who dared to speak out about the fact that putin was grooming his country for war?
There is always a line beyond which everything can be negated. This line is where disinformation thrives and you have to develop your own sense of how to find it.
Outside, people might claim that Pepsi has microchips in it, put there by Soros to control you. You can buy a Pepsi and a microscope or some other clever device, check it, then share your findings with your group.
Same with flat Earth: you can buy a balloon and a camera and find the truth.
What if you think Putin is a saint? You can go to Ruzzia's capital and test this out by saying something bad about Putin; if you get fucked, you know you were wrong, and your group will also know.
The issue is that propaganda made people deny the truth: idiots would deny that there is a war in Ukraine and say it is fake news. Some big orange idiot still pushes the narrative that Ukraine started the war when we had live streams of the invasion happening. I'm not sure the USA has the ability to fix the crisis; a civil war is probable, IMO.
Most of the people in the flat earth community were there for the sense of community and interaction with other people. That says a lot about why people hold on to illogical beliefs so strongly. Because society today has removed a lot of the regular social contact that existed in decades past, people are looking elsewhere to fulfill that need. I'm not a theist, but the fact that nearly everyone in our community used to go to church every weekend back in the 1980s filled a need that people have for socialization. Today my dad thinks it's lucky if they have even 8 or 9 people at the same church.
What was happening is that we had no other way to communicate and socialize, so people would just randomly visit each other unannounced or meet at a bar/tavern. I think today we feel it is not OK to bother a friend or relative, except maybe when there is some celebration; back then it was OK (sure, you had to read the situation, and the host could tell you to come another time if they were busy).
Anyway, in my village it was not the church. My grandfather would not go to church even though he believes in God (he considers the church a man-made thing). One of his nephews would stop by often (maybe twice a week) when passing and bring a newspaper; my grandfather would bring out some alcohol and they would talk politics and curse at politicians.
I do not have the perspective of the city experience, though. What would they do before social media or cable TV?
The examples given are also convenient in the sense that they have relatively easy answers. What about something like the existence of quarks? What kind of equipment would I need to see for myself that they exist? What if I want to "see" the Higgs Boson for myself?
That's not even getting into non-falsifiable ideas like "last Thursdayism" that can't really be proven false no matter how many observations you make.
Edit:
Here's another example - Elon Musk.
Is he a brilliant engineer? A ruthless businessman? A fool? A Nazi?
You can probably make a case for any of these conclusions by picking and choosing things he's said and done. How do you even start to determine the objective truth of his capabilities and intentions?
We are talking about propaganda, about paid Ruzzians who create fake documents daily: some show that Zelensky bought a sports car today, a mansion yesterday, an apartment last week. It is very possible to figure out that they are fake. Maybe not for each of us individually, but as a group we could find someone with experience in the domain who can tell you that the document shown is not how such a document looks in real life and is an obvious fake.
Or, a super recent example: you have a protest or gathering, and some people claim there were 400k people present while others say there were only 10k people in that town square. There are videos and photos; sure, not every person has the ability to evaluate the number, but if we combine as a group we can find someone who can figure out which number is correct (and if the group is filled with less intelligent people, someone needs to take pity on them and explain, as to children, how you can count and estimate a crowd size; see the sketch below).
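For the crowd-size dispute specifically, the usual back-of-envelope approach is Jacobs' method: occupied area times people per square meter. A minimal sketch with made-up numbers (the area and densities are illustrative assumptions, not data about any real protest):

```python
# Jacobs' method sketch: crowd size ~= occupied area * people per square meter.
# The area and density figures below are illustrative assumptions, not data
# about any real gathering.

def estimate_crowd(area_m2: float, density_per_m2: float) -> int:
    return round(area_m2 * density_per_m2)

area = 12_000  # occupied area of the square in m^2 (assumed)
for density, label in [(1.0, "loose"), (2.5, "dense"), (4.0, "packed")]:
    print(f"{label}: ~{estimate_crowd(area, density):,} people")
# Even the "packed" upper bound (~48,000) can't reach a 400k claim for a
# square this size, which is how group sanity-checking catches the bigger lie.
```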
And there is always absolute truth if you want to be scientific, like "I do not know", or 2+2=4 where 2 is s(s(0)), + is well defined, and 4 is s(s(s(s(0)))); it is also true that 1>0 for the real numbers (given the definitions of all those symbols), etc.
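That successor-notation claim can even be checked mechanically. A minimal sketch in Lean 4, where Nat.succ plays the role of s(·):

```lean
-- 2 + 2 = 4 in successor notation: both sides reduce to the same normal
-- form under the recursive definition of addition, so `rfl` closes it.
example : Nat.succ (Nat.succ 0) + Nat.succ (Nat.succ 0)
        = Nat.succ (Nat.succ (Nat.succ (Nat.succ 0))) := rfl

-- 1 > 0, stated for the naturals here (the real-number version needs the
-- ordered-field definitions the comment alludes to).
example : (1 : Nat) > 0 := by decide
```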
History is what other people decided to say about past events.
I believe 'history' is very often wrongly taken as a source of truth.
That's a nice sound bite, but history is a lot more than that. That's why we have historians, archaeologists, and the material basis of history.
We should strive to get closer to practical reality.
History gets amnesia. People forget. They don't care. As children, they're 'taught' that 'this is what happened' and 'this is how it is' and 'this is how you must think for yourselves' and as we grow older we must learn and unlearn and keep doing that as we go while fighting the urge to bitch about things running circles going nowhere.
Humans infected AI tools.
Solve that problem, you'll solve some 'Moscow' problem.
the answer is that both of those instances in your example should be called out.
the answer is not “whatabout $other_instance happening, therefore it’s magically ok.”
"People lie", isn't a good headline.
"$bad_guy $negative_hyperbolic_verb $current_hype" is much better.
How about
The US infected deepseek with US propaganda.
The neighbors destroyed our street by leaving their trashcan on the street.
My brother trainwrecked my dinner by eating all the fries
None of these problems are actually about the indirect object. They are people problems.
One countermeasure would of course be unblocking RT, which no one cares about anyway. Perhaps the SEO would stop. RT was trash, but never as trashy as these fake pravda sites.