> The Observatory should maintain and continually update an open, searchable database of verified influence-operation incidents, allowing researchers, journalists, and election authorities to track patterns and compare response effectiveness across countries in real time. To guarantee both legitimacy and skill, its governing board would mix rotating member-state delegates, independent technologists, data engineers, and civil society watchdogs.
We've really found ourselves in a pickle when the only way to keep Grandma from being psychologically manipulated is to have the UN keep a spreadsheet of Facebook groups she's not allowed to join. Honestly what a time to be alive.
When and why did that happen?
My favorite example of this is the entire sphere of anti-science "health" stuff on TikTok. Seeds oils bad, steak good, raw milk good, chemo bad. I noticed something. Every single time, without fail, this person is trying to sell me something. Sometimes it's outright linked in the TikTok, sometimes it's in their bio. But they're always salespeople.
Salespeople lie guys. They want you to buy their stuff, of course they're going to tell you their stuff works.
Say what you will about media and the government, but they at least vet their stuff. They're not gonna tell you the Earth is flat.
When I was in elementary school, we had seemingly endless drills to help us differentiate between fact and opinion, truth and snake oil.
Like civics, geography, and the other basics of society, it's no longer taught.
It's largely generational.
Boomers and X's were there when the internet debuted and were exposed to lots of "Beware of the scary internet" stories in the legitimate media.
The internet was already normal and common when Millennials and Z's came along, so they didn't get the same warnings.
The notion that grandma falls for online scams more often than a Z is an ageist trope that has been disproven in several studies.
Also do you have any links to those studies? I would genuinely like to see them.
https://www.ftc.gov/news-events/data-visualizations/data-spo...
The person you are talking to face-to-face could have been targeted with disinformation as well.
This is suggested in the paper: manufacture of narrative across communities. Those communities are not exclusively online.
Social mass-media is doomed to fail in its current form. These platforms are already manipulated by capital through advertising, nation states, data-brokers and platform self interest.
People need more federated networks where agents can be verified, at least locally and "feeds" cannot be manipulated.
The powers that be do not believe in democracy how you and I believe in it, they believe in manufactured consent.
Mass media in its current form is just a way to create consent within the masses, not the other way around, the masses don't make the decisions.
Then we had growing environmental concerns. And the costs were much higher than initially promoted. Then we had Three Mile Island. Then Chernobyl. Then Fukushima. New reactor construction came to a standstill. There was no trust anymore that humans could handle the technology.
Now, there's some interest again. Different designs, different approaches.
Will AI follow the same path? A few disasters, a retreat, and then perhaps a renewal with lessons learned?
A simple, unordinary, boring, mirror.
Seemingly fantastical to those unfamiliar with it, it would have startled many who never saw one, with its apparent capacity to reproduce human likeness and nature.
Once you look behind it, there's nothing. No nature within, no magic, just a polished surface. A cute trick, destined to become a trivial object.
https://en.wikipedia.org/wiki/Echo_and_Narcissus
> Juno curses Echo by making her unable to initiate a spoken sentence on her own and instead only being able to finish a sentence started by someone else. "Yet a chatterbox, had no other use of speech than she has now, that she could repeat only the last words out of many."
Now we have cameras on every major road and intersection, most places of business, most transportation facilities, and most public gathering places. We have facial recognition and license plate readers, and cheap storage that is easily searched and correlated. Almost all communications are logged if not recorded. Even the postal service is now imaging the outside of every envelope.
All because it's cheaper and easier.
It probably is true that some populations are more vulnerable to AI produced propaganda/slop, but ultimately I have 50 more pressing AI concerns.
But, there are still situations where botnets would be useful. For example, spreading propaganda on social media during hot phases of various conflicts (R-U war, Israeli wars, Indo-Pakistani war) or doing short term influence operations before the elections. These cases need to be handled by social media platforms detecting nefarious activity by either humans or AI. So far they could half-ass it as it was pretty expensive to run human-based campaigns, but they will probably have to step up their game to handle relatively cheap AI campaigns that people will attempt to run.
To solve the question of whether or not these harms can/will actually materialize, we would need causal attribution, something that is really hard to do — in particular with all involved actors actively monitoring society and reacting to new research.
Personally, I think that transparency measures and tools that help civic society (and researchers) better understand what's going on are the most promising tool here.
LLMs hallucinate. They're weak and we can induce that behavior.
We don't do it because of peer pressure. Anyone doing it would sound insane.
It's like a depth charge, to make them surface as non-human.
I think it's doable, specially if they constantly monitor specific groups or people.
There are probably many other methods to draw out evidence without necessarily going all the way into attribution (which we definitely should!).
It's catered for the algorithm which pumps it out to users.
A trip to Reddit will show you just how real this already is. Humans are far more likely to adopt a belief if they think that everyone else also believes it. The moment GPT became available, bad actors have exploited this to spread disinformation and convince everyone to believe it.
While I attribute a good portion of this to these kinds of influence operations, it's pretty clear that the opinion of the average Redditor (bot or not!) has just gotten increasingly shallow over time.
I'm spreading the message because I want more socially conscious people to engage in this. Look into Curtis Yarvin and The Dark Enlightenment. Look into Peter Thiel's (the CEO of Palantir aka America's biggest surveillance contractor) explicitly technofascist musings on using technology to bulldoze democracy.
But AI is not dangerous because of the potential for sentience, but because it makes the rich richer and the poor poorer. It gives those with resources more advantage. It's dangerous because it gives individuals the power to provide answers when people ask questions out of curioity they don't know the answer to, which is when they are most able to be influenced.
This is a terrible take. Whenever there is a massive technical shift it's the incumbents who struggle to adapt. We've already seen companies go from nothing to being worth tens of billions.
Did we? I mean, I guess we're surviving it for now, but climate change data doesn't tell the most optimistic story.
edit: typos/grammar abound.
Life would be quite different if we could subjectively experience ourselves as a whole.
And we got a few wars whose death toll exceeded the pre-industrial population of the UK. And one whose toll exceeds the current population of the UK.
You always survive unless you don’t.
We could make an error that kills the whole species based on the assumption because we survived before
And even then, many of the biggest winners of AI so far (Google, Microsoft, NVidia, etc.) are already some of the biggest companies on the planet.
The techies making money are the AI experts. The real AI experts. Not the TF/PyTorch monkeys.
There's actually a massive and underappreciated difference between the two groups of people. It goes to the heart of why so many AI efforts in industry fail, but AI Labs keep making more and more advances. Out in industry, we mistake Tensor Flow monkeys for AI experts. And it's not even close.
Worse, you look at market price to get some of the real AI experts, and you realize that you have no shot at securing any of that intellectual capital. And even if you have the resources to secure some of that talent, that talent has requirements. They're fools if they consider you at less than 10^4 H100s. So now you have another problem.
I think techniques, R&D, secrets, intellectual capital, and so on are all centralizing in the major labs. Startups as we knew them 5 to 10 years ago will simply be choosing which model to build on. They have no legitimate shot at displacing any of the core LLM ecosystems. They'll all be nibbling at the edges.
And now the artists have to wash dishes because AI is making the art.
Nope, he was right. Rich get richer, poor get poorer. A few unicorns you can count on one hand doesn't change the facts.
It literally and structurally offers advantage to those with more resources.
That effect compounds over time and use.
I am not even remotely talking about worker replacement, even assuming no job was lost, a company that is able to pay for better answers should have more profit and therefore more ability to pay for more/better answers.
incumbents.
That said, I think that's just how reality works.
Companies come and go but the rich people who owns them nearly stay the same
Can you point to a time in history of written information this wasn't true though?
- Was the Library of Alexandria open to everyone? My quick check says it was not.
- The access to written information already precluded an education of some form.
- Do you think Google has an internal search engine that is better than the one you see? I suspect the have an ad-less version that is better.
- AI models are an easy one obviously. Big Players and others surely have hot versions of their models that would be considered too spicy for consumer access. One of the most memorable parts of ai 2027 for me was the idea that you might want to go high in government to get access to the super intelligence models, and use them to maintain your position.
The point is, that last one isn't the first one.
Social media, once you remove all of the window dressing, is just text and media in a glorified spreadsheet we call a database. It was human behavior that turned it into the menace it is today.
AI (LLMs) is no different. It's a next token predictor. What tokens people ask it to predict next (and with what intention) is where things get messy.
The internet? TV? Radio? All benign in their basic function but given to the wrong people can be turned into weapons.
The kids will at some point worship a prompt-engineered God (not the developer, the actual AI agent), and there will be nothing society will be able to do about it. Nobody verbalizes that Gen Z moves entirely like a cult, trend after trend is entirely cult-like behavior. The generation that is going to get raised by AI (Gen Zs kids) are going to be batshit crazy.
The way some people talk about LLM coding I don't know that we're far off.
Except, back then the newspaper and TV stations and radio stations would fact-check one another. If one of them was lying, the others would call them out on it.
That cost the people in power readers/viewers, and eventually money, so there was an incentive to tell the truth and develop a good reputation.
Tech doesn't care about reputation because it "doesn't scale" or is "long tail" or some other excuse for laziness.
AI is despotism automated.
You can always ask us, as you know. Moreover, the answer is almost always that users flagged it (as you surely also know after all these years), so I'm not sure I follow the sudden spike of speculation.
Users flagged it. We can only guess why users flag things, but in this case I'd guess it's connected to how terrible and tedious the thread is. Even by the low-quality standards of a flamewar this is way below the line.
Edit: I've turned off the flags now because I don't want the paper to get judged by these comments. The thread also set off the flamewar detector and I'm not going to turn that off.
I’ve experimented with different usernames and if you have an obviously Chinese or Indian username, you WILL get downvoted or censored or chastised by dang.
Hypocrisy.
Don't let the government's FOMO on new weapons enable these companies to add new coal and methane power to the grid and build data centres in water-stressed regions. Make them pay for the externalities they cause. If it weren't subsidized these companies wouldn't be operating at the scale they are, AI would be in a lab, where it belongs doing cool stuff for science.
Heck, don't let government let alone private corporations weaponize this technology, full stop.
Economic policy that protects businesses and individuals online. Peoples' hosting bills are going through the roof from AI scrapers. The harm is nuts. These companies aren't respecting any of the informal rules and are doing everything they can to form monopolies and shut down competition.
We need social policies that prevent AI use cases from harming the economy. Policies that prevent displacing skilled workers without redistributing the accrued wealth to the labour class. If it's really improving productivity by such a huge factor then we should all be working less and living comfortably.
But I dunno how you prevent the disinformation, fraud, and scams. They've been around for years and it's always a cat-and-mouse game. Social media has just made it worse and AI is just more fuel for the tire fire.
Back when they were complaining that Russia was interfering with our elections (2016ish?) I wondered what it would take to completely cut Russia off from the Internet. Granted, the Soviets still pulled stunts with less technology, but it was still a manageable problem then. Now? Well, we couldn't cut them off if we wanted to, could we? Even if we bullied Europe into severing all the fiber backbones there and elsewhere, China and North Korea and a dozen other countries would still keep them connected. And we'd still face the problems we face now.
Not that we would try that. Though it might be a sane and even a reasonably good policy, you'd have jackasses here and elsewhere (and not just Russian shills either) talking about how we can't possibly disconnect the friendly Russianerinos, they're good people even if some of their oligarchs aren't.
So we'll get some performative regulation theater that changes nothing. And everyone will just wonder what went wrong, quietly.
It’s very easy to create deceptive imagery persuasion and to astroturf. With or without AI assistance. All you need is a modest amount of money. And with unlimited money and zero accountability thanks to the ‘responsibility laundering’ made possible through PACs… facts no longer matter. Just money, to buy virality, to influence vibes.
Even in terms of corruption, this is by far the smallest concern and barely worth noting in the scheme of things. Besides the obvious revolving door for lobbying and legal firms, there is so much money at play in the ex post facto bribery industry, between speaking fees and bulk book sales and low interest and forgiven loans that Citizens United might as well be dust in the wind.
Democracy requires maintenance and responsibility. You can't expect nice things without paying the maintenance cost, and unfortunately, if you challenge power, power answers and it will hurt. If nobody is willing to die for freedom, then everyone will die a slave.
Blaming others rather than looking within fundamentally accepts authoritarianism, it presumes and accepts that others have power over you and that you can do nothing but submit.
Nobody is challenging power. We only have our own selves to blame for our cowardice.
The problem with democracy, more generally, appears to be that the population is wildly susceptible to apathy and complacency, meaning we've reduced the voting set to only this who care enough to vote. This turns politics into a game of disagreements between the most extreme voices.
In my opinion, in order for a democracy to work, voting must be compulsory.
I am incredibly atheist, but what we are seeing is Christianity, a clear pillar of American culture, malfunctioning on a societal scale. What used to be the major cultural influence on this country has been weaponized for political purposes. There is a quote about how separation of church and state is to protect the state from the church... but now we are coming to understand that that separation exists also to protect the church from the state.
People are very susceptible to politicians lying to them, especially when it's a lie they want to hear or prefer to the truth. Compulsory voting does not address that at all. Education is a hedge on it, but education requires effort, openness, and resources. There is also a media ecosystem which acts as a sort of central nervous system for a country, which is how a country understands itself.
Culture and institutions (such as church/media/academia/police training, etc.) are the foundation of societies operations, and government is largely a manifestation of prevailing culture. Authoritarian governments are a manifestation of a culture that promotes self interest and lack of empathy, rather than one that promotes loving they neighbor and treating others as you wish to be treated.
So at least at presidential level, there is neither apathy, complacency, or the ability to buy elections.
That's still missing a significant fraction of the population. Sure, you could make the argument that 2/3s is probably pretty representative, but I'm not sure I'd agree. I think there's good reason to believe that a voter who shows up is inherently not representative of a voter who does not. When elections are decided by such small margins in many places, that unrepresented third can easily change the outcome.
Maybe they truly don't care, and perhaps they should be given the option to vote as such, if they're required to vote.
I also don't think presidential elections are actually very meaningful when it comes to national politics (or politics in general!), but I feel like my opinion on that is changing given the current administration's efforts to maximize presidential power and their success in doing so.
That is entirely unsubstantiated. There is no reasonable way to measure this and any measurements taken are inherently political.
I am amenable to the idea of that being true for official spending, but unless the twitter purchase, for example, were tabulated, or spending on our American "pravda" (Truth social which was clearly influenced by https://en.wikipedia.org/wiki/Pravda), I would be extremely suspicious of those numbers.
> so at least at presidential level, there is neither apathy, complacency, or the ability to buy elections.
Again, I completely disagree, and so does Harvard Law Professor Lawrence Lessig, who argues that it is extremely hard to win a primary without fundraising, and fundraising is structurally an election where money counts as votes, and therefore nearly all candidates who make it to the primary have already been filtered through by those with money: https://www.youtube.com/watch?v=mw2z9lV3W1g
Anyways, I'm not sure AI is relevant here. Misinformation is just a form of propaganda which other than allowing the creation of falsehoods quicker, doesn't seem to be any more "threatening" than any other lie.
It's curious that 90% of the top-level comments here are all dismissing it outright. And we have the usual themes:
1) "This has always been possible before. AI brings nothing new."
2) "We haven't seen anything really bad yet, so it is a non-issue
3) "AI execs are pushing this narrative to make AI sound important and get more money"
Nevermind the fact that many famous people behind the invention of AI, including Jeffrey Hinton (the "godfather of AI") and others quit their jobs and are spending their time loudly warning people, or signing major letters where they warn about human extinction or job loss... it's all a grift according to the vocal HN denizens.
This is like the opposite of web3 where everyone piles on the other way.
Well... swarms of AI agents take time to amass karma, pagerank, and other metrics, but when with the coming years, they will indeed be able to churn out content 24/7, create normal-looking influencer accounts and dominate the online discussion on every platform. Very likely, the percentage of human-generated content will trend to 0-1% of content on the internet, and it will become a dark forest, and this will be true on "siloed" ecosystems like HN as well: https://maggieappleton.com/ai-dark-forest/
Certainly, saying "this was always possible before" misses the forest for the trees. No, it wasn't.
List of attacks made possible by swarms of agents:
1) Edits to Wikipedia and publishing articles to push a certain narrative, as an Advanced Persistent Threat
2) Popular accounts on social media, videos on YouTube, Instagram and TikTok pushing AI-generated narratives, biased "news" and entertainment across many accounts growing in popularity (already happening)
3) Comments under articles, videos and shared posts that are either for or against the thing, and coordinated upvoting that bypasses voting ring detection (most social networks don't care about it as much as HN). Tactical piling on.
4) Sleeper accounts that appear normal for months or years and amass karma / points until they gradually start coordinating, either subtly or overtly. AI can play the long strategic game and outmaneuver groups people as well, including experts (see #7 for discrediting them).
5) Astroturfing attacks on people who disagree with the narrative. Maybe coordinating posts on HN that make it seem like it is an unpopular position.
6) Infiltrating and distracting opponents of a position, by getting them mired in constant defenses or explanations, where the interlocutors are either AI or friends / allies that have been "turned" or "flipped" by AI to question them
7) Reputational destruction, along the lines of NSA PRISM powerpoint slides (https://archive.org/details/NSA-PRISM-Slides) but at scale and implacable.
8) Astroturfing support for wars, unrest, or whatever else, but at scale, along the lines of Mahachkala Protests in Russia, etc.
These are just some of the early low-hanging fruit for 2026 and 2027.
Maybe AI swarms do pose some weird contrived threat to democracy. Pieces like this will inevitably be laundered by the American intelligentsia like Karpathy or Hinton, and turned into some polemic hype-piece on social media proving that "safe AI" must be prioritized in a row for regulation. It's borderline ineffable to admit on HN, but America's obsession over speculative economics has pretty much ruined our chance at seizing a technological future that can benefit anyone. Now AI, like crypto before it and the dotcom bubble too, is overleveraged. "Where's the money, Lebowski?"
The pre-AI situation is actually incredibly bad for most people in the world who are relatively unprivileged.
"Democracy" alternates between ideological extremes. Even without media, the structure of the system is obviously wholly inadequate.
Advanced technologies can be used to make things worse. But they are also the best hope for improving things. And especially the best hope for empowering those with less privilege.
The real problems are the humans, their belief systems and social structures. The status quo may seem okay to you on most days, but it is truly awful in general. We need as many new tools as possible.
Don't blame the tools. This is the worst kind of ignorance.
The tools do not exist without the humans, and the humans, consciously of otherwise, design tools according to their own views and morals.
To outline just a basic example: many initial applications of generative AI were oriented toward the generation of images and other artistic assets. If artists, rather than technologists, had been the designers, do you think this would have been one of the earlier applications? Do you think that maybe they may have spent more time figuring out the intellectual property questions surrounding these tools?
Yes, the morals ultimately go back to humans, and it's not correct to impute morals onto a tool (though, ironically enough, the personification and encoding of linguistic behaviors in AI may be one reason that LLMs can be considered a first exception to this) but reducing the discussion to "technology is neutral" swings the pendulum too far in the other direction and ultimately tends to absolve technologists and designers of moral responsibility pushing it to the use, which, news flash, is illegitimate. The creators of things have a mora responsibility too. For example, the morality of designing weapons for the destruction of human life is clearly contestable.
Are technologists creating AI swarms for political manipulation? Or is that being done by politicians or political groups?
Are you suggesting that an LLM or image generator is like a gun?
No, LLMs are not guns. That said, any responsible and conscious designer of LLMs should know they impose other risks, like the ones they pose regarding misinformation and democracy. They should also know that they pose risks regarding economic stability and copyright law and plagiarism.
You could try to weigh this against the benefits, but the fact of the matter is, the responsible way to develop these technologies is to show that you are explicitly accounting for these obvious dangers as well. Very few companies actually do that (because they selfishly choose to optimize for profit instead).
My main point is just that the user isn't the only person with some amount of responsibility when it comes to technology. Designing a tool does not therein automatically absolve you of any moral responsibility.
I think the reasonable approach is regulation—it is the only thing that helps combat sheer profit motive and force companies to attend to at least some of this responsibility.
> Don't blame the tools. This is the worst kind of ignorance
"Stop blaming [guns // religion // drugs // cars // <insert_innovation_here>] for the way humans misuse it"
There is a reason regulations exist. Too much regulation is detrimental to innovation, but some amount of standards is needed.
I did not say there should not be regulation or standards.
The point is that the whole diagnosis is wrong. People point at AI as creating a new problem as if everything was okay.
But everything is already fucked and it's not because of technology it's because of people, their social structures and beliefs. That is what we need to fix.
There are lots of ways that AI could help democracy and government in general.
AI makes fake news in masses possible.
You think the situation is bad? Let’s talk about that in 5 years.
I tend to agree with this statement honestly.
The adult take is that things do not have malice, people might, so address that because the world will never be regulated into safety enough for the people that don't get human nature.
Exactly, that's why I propose every person has access to a nuke.
Elephant in the room here: the scale of technology matters. Being able to lie on a scale that eradicates truth as a concept matters.
We can't just naively say all tools are similar. No no, it doesn't work that way. I'm fine with people having knives. I'm fine with people having a subset of firearms. I am not fine with people having autonomous drones, or nuclear weaponry, or whatever. Everything is a function of scale. We cannot just remove scale from the equation and pretend our formulas still hold up. No, they don't. That's why the printing press created new religions.
Maybe it’s a bad idea to put powerful tools in the hands of people you know will misuse it.
Let‘s make e gedanken experiment.
We create mighty tools with two buttons. Button 1 solve world hunger and cure every disease. Button 2 kill everybody but you.
Would give everyone such a tool?
There is no getting around the fact that these things are nothing like regular technology where you can at least decompose it down to functional working parts. It isn't debuggable basically. Nor predictable.
Everywhere you move. Everything you say, everything you buy.
They've had the monitoring technology for decades. The problem was the fire hose.
There is no problem with the fire hose anymore.
If this doesn't scare the s** out of you, then you're ignorant.
Maybe if we had a well-functioning government I would hold out some degree of hope. But our Democratic institutions are already in shambles from facebook.
All previous technologies basically enhanced the talent and intelligence. Yes AI can do that, but the difference this time is it replaces intelligence on a huge scale
The role of the intelligensia, to borrow an old term, arguably has pushed idealistic progress on society with its monopoly on competence.
That Monopoly on competence generally came with it. The essential counterbalance to centralized authority and power along with idealism and philosophically derived morality and righteousness.
Ai is the end of the monopoly of competence for almost all the hacker news crowd.
North Korea is the future with AI.
The authors of this paper have at acknowledged that there are (practical, more so than moral) limitations to strict identification systems (“Please insert your social security number to submit this post”) and cite a few (non-US) examples of instances where these kinds of influence campaigns have ostensibly occurred. The countermeasures similarly appear to be reasonable, being focused on providing trust scores and counter-narratives. What they are describing, though, might end up looking a lot like a Reddit-style social credit system which has the impact of shutting out dissenting opinions. One of my favorite things about Hacker News over Reddit is that I can view and vouch for posts that have been flagged dead by other users. 95% of the time the flags were warranted (low-effort comments, vulgarity, or spam), but every once in a while I come across an intriguing comment that others preferred to be censored.
Most organizations and teams who have been investigating automated disinfo at scale have highlighted how fringes on both sides of the spectrum are being manipulated via automated engagement - often with state backing.
Power and Politics is completely orthogonal to ideology.
Thank you, I needed a laugh today. I mean you're not wrong that it started with Obama but like come on-- I lived through that era, you probably did too. It was a visceral emotional response to the most powerful man on earth being a black man. That spawned The Tea Party and birther movement the Venn Diagram of which was a circle. The Republican party noticeably changed to a tone of burn it all down while Obama was in office. Being one of the most outspoken birthers what was put Donald Trump into the public sphere as a political figure. This is where the MAGA wing of the Republican party traces its origins.
This is exactly the sort of mentality that led to Democrats thinking Hillary Clinton was a viable candidate.
Leftists all over the world spent the 2010s creating a coalition of the fringes, for whatever reason assuming that young majorities would not take notice. This was a very uncomfortable time to be a young man, even after Trump was elected. Young guys these days don’t have any expectation of being allowed to sit at the table, which is why they are so open about wanting to burn it all down now; a lot of them felt this way when Obama was president, too, but the risk of getting cancelled on Twitter still seemed like a serious threat back then.
I was fascinated by these people in the '08-'12 era. I watched so much Fox News, listended to Sean Hannity and Rush Limbaugh, and kept up with Breitbart and The Drudge Report. I got so many free civics papers from my weird obsession. They were racist as hell. I saw it with my own eyes and heard it with my own ears. And it's not like it was just talking heads, conservative online spaces were worse.
The statement I replied to was:
> It was a visceral emotional response to the most powerful man on earth being a black man.
This isn’t true. The racists you’re describing were (literally) dying out by the time Obama was elected. The resurgence of white identity politics in America was a reaction to a series of riots (e.g. Ferguson, Baltimore, Kenosha, Seattle, Minneapolis), affirmative action policies in higher education as well as in the public and private sectors, and renewed activity in the grievance industry (e.g. Anita Sarkeesian, Ibram Kendi, etc.).
This is both inaccurately describing the protests and also way off on timing – for example, Ferguson was 2014 but the white identity politics was on display before the first time he was elected.
Obama was president in 2014.
> but the white identity politics was on display before the first time he was elected.
Among klansmen. These are the people I alluded to who were dying out. Young people and the general public at large didn’t take interest in these movements until the summer of 2015, and even then it was just a thing for the terminally-online until the latter half of Biden’s presidency.
There were plenty of off ramps throughout this whole period, leftists opted not to take them because they were operating under the assumption that they would be in control forever.
Yes, and had been since 2009. I’m still not sure how you expect the racist backlash starting in 2008 to have been triggered by time travelers from 2014. People were threatening to lynch him before he was sworn in and the mainstream Republican Party leaders weren’t willing to give those people the treatment David Duke got in the 90s - McConnell tried to spin all of that “one term president” talk as solely about policy but all of the context made it quite clear that they wouldn’t have been so motivated for, say, Kerry.
I’m denying that this was a significant force in American politics until the latter half of Obama’s presidency. These people existed; they were not influential.
> McConnell tried to spin all of that “one term president” talk as solely about policy but all of the context made it quite clear that they wouldn’t have been so motivated for, say, Kerry.
What context are you referring to? Would you attribute black Republicans like Condoleezza Rice and Colin Powell to political tokenism intended to appeal to black voters? I know that the prevailing sentiment (among the media I consumed back then, anyways) was that George Bush was a racist who hated black and brown people, but looking back, I just don’t see it. That element of American society doesn’t look influential until the late 2010s and early 2020s when Thiel started buying his way in, and by that point it’s an entirely different cohort.
I think it can be both. There's part of me that has become, maybe sort of Bernie Sanders MAGA over the past couple of years. Like I certainly identify with the people who want their lives to be better, and America to be simplified, and I don't think that underlying this is an idea that without Russian influence, we would all be Democrats. But to me the problem with Russian bot farms and other influence campaigns to me isn't the direction of the beliefs, it's the degree, and by the way, this is on both sides.
Right, classically, there's the study that says gun violence is down, but portrayal of gun deaths in the media is up, and I think social media has taken this idea into overdrive. So like, instead of thinking we're one nation with people who disagree, now we're two political parties, and if we don't eliminate the other evil political party, it's the end of America.
Maybe something from the other side I've been trying to figure out:
To a matter of degree, again not binary, to what extent was the woke movement organic, vs constructed re: social media algorithms and capitalism. I'm not saying that nobody organically cares about social justice and etc. But to what extent were certain practices - announcing pronouns at the beginning of meetings - bottom up from activists or the people who cared, or top down via diversity consultants? To what extent did BLM or MeToo grow organically, vs being promoted by social media algorithms?
If there's this big complaint that progressive change was happening too fast, was it really progressive activists driving the change? Or other institutions and influence campaigns.
Political discourse has lost all sense of decency. Senators, the VP and POTUS all routinely mock and demean their opponents and laugh even at murder. Arrests are made by unknown masked men with assault rifles.
AI is simply irrelevant to this - humans are selfish, tribal and ugly.
Relative to which period in human history?
The anthropology of war is a lot more complicated than most people appreciate, because much of our mental imagery on historical warfare comes from movies and other fictional narratives. John Keegan's War in Human History is an accessible starter book on the topic.
Malicious AI swarms are only one manifestation of technology which gives incredible leverage to a handful of people. Incredible amounts of information are collected, and an individual AI agent per person watching for disobedience is becoming more and more possible.
Companies like Clearview already scour the internet for any public pictures and associated public opinions and offer a facial recognition database with political opinions to border patrol and police agencies. If you go to a protest, border patrol knows. Our government intelligence has outsourced functions to private companies like Palantir. Privatizing intelligence means intelligence capabilities in private hands, that might sound tautological, but if this does not make you fearful, then you did not fully understand. We have license plate tracking everywhere, cameras everywhere, mapped out "social graphs," and we carry around devices in our pockets that betray every facet of our personal lives. The vast majority of transactions are electronic, itemized, and tracked.
When every location you visit is logged, interaction you have is logged, every associate you communicate with known, every transaction itemized and logged for query, and there is a database designed to join that data seamlessly to look for disobedience and the resources available to fully utilize that data, then how do you mount a resistance if those people assert their own power?
We are becoming dangerously close to not being able to resist those who own or operate the technology of oppression and it is very much outpacing the technology of resistance.