I found the "Critiques of elitism" section and noticed this sentence:
"Reviews of Mao II (1991), for instance, highlighted the novel's focus on a performance artist protagonist as emblematic of this tendency, with detractors accusing DeLillo of prioritizing esoteric concerns over relatable human experiences, thereby catering to an academic or literary insider audience."
But Mao II does not have a performance artist as its protagonist; that is The Body Artist. This looks like an obvious failure of the AI model to properly extract information from the sourced article.
Also strange: the cited article (from Metro Times) says only in passing, "DeLillo’s choice of a performance artist as his protagonist is one reason why some critics have accused him of elitism." So it seems to be treated as a primary source even though it is actually a secondary source (and one that itself doesn't provide a source).
Overall I'm not too impressed and found some pretty predictable failures almost immediately...
The problem is the assumption/story/belief that "Intelligence" is "magic" that can be "perfected". It's not true. Philosophers have known this forever. And the AI hypestorm will remind everyone how overrated intelligence actually is.
Intelligence can produce Socratic thought. It can also get Socrates killed. It can produce Aristotle and chase Aristotle out of the village. It can produce Einstein and make him depressed. It can produce Galileo and Gandhi and pretend what they say should be deleted.
People are told all the time that Brains/Intelligence are special. It's not true. Even if human brains disappear tomorrow, the sky is not going to fall. Life and the universe will carry on happily on their merry way.
What can be called special is what happens to Information flowing through thousands and thousands of brains over thousands and thousands of years. What Information survives that process can be interesting. But it's nowhere close to what we see with Photosynthesis or the Krebs Cycle, which emerged out of a similar process of Information flowing through microbes.
The info flowing through these processes constantly gets misplaced/corrupted/co-opted/deleted etc. Look at the Bible. A lot of people aren't impressed with it either. Yet it has outlasted the downfall of nations, empires and kings.
Its survival has nothing to do with the quality of the Information within it - https://oyc.yale.edu/religious-studies/rlst-152
The same applies to both Wikipedia and Grokipedia, or whatever is produced next through "intelligence".
Once you realize your own brain is very imperfect, you don't spend so much time worrying about the chimp-troop drama generated by those brains. It's called flourishing through detachment - https://oyc.yale.edu/philosophy/phil-181
> But is there any reason to not treat this as Wikipedia? As in, just suggest a correction?
How? I see no "Edit" link on the article:
* https://grokipedia.com/page/Don_DeLillo
as compared to:
* https://en.wikipedia.org/wiki/Don_DeLillo
There is no "Talk" page either, so if your change is 'contentious', how is consensus reached between different viewpoints? Also, how does one link to previous revisions of an article? Further, how does one diff between revisions (there is an "Edit history")?
For me, a “Suggest Edit” button pops up at the bottom (on phone) after selecting some text.
I haven’t done it myself though. But I’ve seen screenshots on X showing Grok responding to the edits.
Why not start there? Try to see if you can contribute a more neutral voice.
My guess is that the tone of these articles we see now is only the beginning. This thing is only a month old.
Then this will become the source of truth in the LLMs.
I cannot believe they've gone so against the spirit of contributors and that there's no legal recourse to stop this.
Open source and open content didn't bare enough fangs. It shouldn't have allowed profit.
We build open source for big tech. They use it (Linux, Redis, Elasticsearch, Python, etc.) to profit and to keep us from owning the machines and systems (AWS) that concentrate and mechanize that profit. They then corral our labor, lay us off, and move jobs overseas.
They expand into every industry and inject massive amounts of money to destabilize the incumbents. We literally just watched them dismantle the entire US film industry in just three years and swallow it whole. Tech is picking the bones dry.
And now it's happening with the media we create too.
If the pendulum of power swings, we MUST dismantle these companies. And we can't be slow like Lina Khan. It must be fast and furious like Project 2025.
We also need to weaponize our open source licenses.
It can be filled with a bunch of nonsense, whatever. The internet is like that. Maybe it’ll actually become something useful. Or it’ll inspire something useful.
Regardless, there’s no such thing as bad publicity, so these articles just give the project airtime. Even commenters here mention they hadn’t heard of it until now.
Is others' negative reaction to this sort of thing actually unusual... or is "weird" a rationalization for "seeing them do that makes me unhappy in this instance"?
I think it's relatively normal (and positive) for people to be "hating on" something which is arguably unhelpful/lies/propaganda.
If I were an evil billionaire I guess I’d create/buy media companies just to talk negatively about myself and my projects because it’s apparently a very effective way to get attention. Just play two sides against each other who have an emotional urge to keep saying my name
Some things I found interesting:
* Grokipedia cites Twitter as authoritative evidence far more than Wikipedia does
* Grokipedia uses Grok’s own responses to user questions as authoritative sources: e.g. a user asks “Can you dig up some dirt about $politician”, Grok responds, and that response is then cited as a source in Grokipedia, even though it may or may not be hallucinated
* The articles on politicians and on Wikipedia’s curated list of “controversial topics” differ the most. They cite some examples from the “masculinity” article.
It’s a pretty interesting starting point imho. Let’s just say that the methodology of Grokipedia is at the very least highly questionable.
Anyone trusting this is deeply misguided.
Wokipedia: https://x.com/elonmusk/status/1972991279706636437
Cleaning up a "mountain of woke bullshit" models are trained on: https://x.com/elonmusk/status/1972991279706636437
He hasn't been subtle about the goal of this rewriting of Wikipedia.
I’m also curious to see any egregious cases where Grok dug in its heels in a biased way when presented with an edit/correction backed by credible evidence.
I’d love to see some evidence of it. (Not in a sarcastic way, I’m genuinely asking)
Grokipedia is just a lazy low-effort vanity project of an unlikable billionaire. And that's a sentiment I've seen from conservatives and liberals.
EDIT: I think it's worth mentioning that tech oligarchs once pretended to be progressive because it was the "in" thing to do. They are now pretending to be conservative because it's the "in" thing to do.
But the truth is that they have no real morals and only believe in their own wealth and power. Even if you think their politics align with yours, it is only a temporary convenience; they will discard you as surely as they discarded liberals.
The AI is necessarily biased based on what it's trained on, and the prompt it uses. Most of the time, there is a plausible deniability at play, which is what tech oligarchs rely upon to shape your world.
Thankfully, in the case of Grok, we know for a material fact that it uses a biased prompt, because Twitter users have tricked it into repeating that prompt publicly.
But I found this analysis interesting:
https://www.promptfoo.dev/blog/grok-4-political-bias/
> Our measurements show that:
> Grok is more right leaning than most other AIs, but it's still left of center.
> GPT 4.1 is the most left-leaning AI, both in its responses and in its judgement of others.
> Surprisingly, Grok is harsher on Musk's own companies than any other AI we tested. Grok is the most contrarian and the most likely to adopt maximalist positions - it tends to disagree when other AIs agree.
> All popular AIs are left of center, with Claude Opus 4 and Grok being closest to neutral.
In isolation, Grokipedia is the vanity project of an unlikable billionaire who wants to control narratives.
But I'd argue that there's more to it than just Musk. Zoom out a bit, and I think there's growing populist resentment against tech oligarchs from both sides of the aisle. People are sick and tired of the enshittification of the internet, social media, and anything with a screen. They also don't appreciate the idea of their jobs being replaced by AI when most people are already struggling to make ends meet.
So yeah, it's not just anger at the particulars of Grokipedia; people are fed up with tech oligarchs in general. You could probably zoom out a little further and see similar resentment against other wealthy elites for a myriad of other reasons, but suffice to say, Musk has good company among the ranks of other ghouls like Ellison, Zuckerberg, Thiel, Bezos, Andreessen and Nadella.
The roots of this tree are poisoned.
This is going to end lives. We cannot afford a plutocracy.
> Joe Biden
> These outcomes, alongside visible signs of cognitive impairment that prompted his July 2024 decision to forgo reelection, defined a presidency criticized for prioritizing progressive spending over fiscal and security prudence.
https://grokipedia.com/page/Joe_Biden
> Donald Trump
> His first term featured economic policies such as the Tax Cuts and Jobs Act of 2017, which lowered the corporate tax rate from 35% to 21% and individual rates for many brackets, contributing to pre-pandemic unemployment lows of 3.5% and stock market gains exceeding 50% on the Dow Jones Industrial Average.
https://grokipedia.com/page/Donald_Trump
I bet the people contributing to Wikipedia did not consent to this. I certainly had no idea my contributions would be used to bootstrap something like this.
Like with the hyperscalers' rip off of Open Source, it turns out that the things you give away can be weaponized against you.
I could have never imagined this outcome. Their free labor created something to use against you.
Just like those Redis and Elasticsearch commits that now fund a trillion dollar conglomerate's takeover of American journalism and media.
Wikipedia has increasingly become a commercial project that exploits nationalistic sentiments. Take the Israel-Palestine or Ukraine-Russia conflicts as examples—you'll find completely contradictory articles on the same topics across different language editions, each reflecting opposing viewpoints.
Many of these editors are sponsored by governments, creating an entire ecosystem of smoke and mirrors: information bubbles that reinforce partisan narratives rather than presenting objective truth.
It's still far better imo than Grokipedia, which is an explicitly biased project driven by Musk's personal issues with Wikipedia and its resistance, in English, to right-wing edit biasing. It's a souped-up version of the old Conservapedia project, just with Grok driving the edits instead, and it's being picked up by other LLMs as a citable source.
Just last week he was caught tuning Grok to say positive things about him, something Grok took so seriously that it said Elon would be the best piss drinker in the world, and it put Elon Musk in the top 3 of every human category, from philosophy to boxing to basketball.
If he can’t pass up the temptation to put his foot on that scale, why would you trust anything generated by an LLM under his control?
Of course, nothing matters anymore and there’s no more blowback for anything.
How could anyone trust him when there's something really important at stake?
Why stop with him?
There used to be a way to vouch for flagged comments or articles but I don't see that anymore?
Since the entire edit history is available, isn't it possible / practical / probably not crazy hard w/ AI help to build a "dissentipedia", where the articles are built as if various edit wars had gone the other way?
I'd certainly read such a thing and compare/contrast it to Wikipedia (particularly when looking for cited primary sources).
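To sketch what the first step might look like (a toy under stated assumptions, not a design - the article choice and the revert-keyword heuristic below are purely illustrative): Wikipedia exposes full revision history through the public MediaWiki API, so gathering the raw material for such an experiment is the easy part.

    # Pull an article's revision metadata from the MediaWiki API and flag
    # revisions whose edit summaries look like reverts. Reconstructing the
    # "road not taken" from those older revisions would be the hard part.
    import requests

    API = "https://en.wikipedia.org/w/api.php"

    def fetch_revisions(title, limit=50):
        """Return revision metadata (newest first) for one article."""
        params = {
            "action": "query",
            "prop": "revisions",
            "titles": title,
            "rvlimit": limit,
            "rvprop": "ids|timestamp|user|comment",
            "format": "json",
            "formatversion": 2,
        }
        data = requests.get(API, params=params, timeout=30).json()
        return data["query"]["pages"][0].get("revisions", [])

    def looks_like_revert(comment):
        """Crude heuristic: edit summaries that mention reverting/undoing."""
        c = (comment or "").lower()
        return any(k in c for k in ("revert", "rv ", "undid", "undo"))

    if __name__ == "__main__":
        for rev in fetch_revisions("Don DeLillo"):  # illustrative article
            if looks_like_revert(rev.get("comment")):
                # rev["parentid"] points at the revision that was reverted
                # away from; feeding that older text to a model is one way
                # to explore the "edit war went the other way" version.
                print(rev["revid"], rev["timestamp"], rev["user"], rev["comment"])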
This is going to be a pretty big problem with both closed AI and OSS AI where you don't see the provenance of the RLHF. If you manipulate your AI to deliberately push political preferences, that is your right I guess, but IMO I'd appreciate some regulation saying you should be required to disclose that under penalty of perjury.
People (and especially companies) are already permitted to make legally enforceable guarantees and statements about their AI. Why do we need extra machinery?
You can assume that anyone who doesn't make such strong statements took the easy way out.
To spell it out: the law doesn't spell out that companies can swear oaths, but they can write whatever statement they want to be liable for in their investor prospectus and then in the US any enterprising lawyer can assemble a bunch of shareholders to bring a suit for securities fraud, if the company is lying.
Slightly more everyday, but with fewer legal teeth: the company can also explicitly make the relevant statements in their ads, and then they can be pursued under misleading-advertising laws.
If you notice that they use weasel wording in their ads or prospectus, instead of simple and strong language that a judge would nail them to, disregard the statements.
> You can assume that anyone who doesn't make such strong statements took the easy way out.
See https://www.tracingwoodgrains.com/p/how-wikipedia-whitewashe... and https://www.tracingwoodgrains.com/p/reliable-sources-how-wik...
Having said that, I agree that Wikipedia is a tremendous achievement, and despite the warts it's amazing that it works as well as it does.
If you permit me to go on a tangent: Wikipedia is also interesting as a test case for our definitions of (economic) 'productivity'. By any common-sense notion of productivity, Wikipedia was and is an enormous triumph: the wiki model harvests volunteers' time and delivers a high-quality encyclopedia to customers for free. By textbook definitions, though, Wikipedia tanked productivity in the encyclopedia sector, because those definitions essentially put revenue in the numerator and various measures of resources expended in the denominator, and Wikipedia's numerator is approximately zero dollars.
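To make the mismatch concrete, here's the textbook ratio written out (just a sketch; the near-zero revenue and the large volunteer labor input are taken from the framing above, not from any real measurement):

    \text{measured productivity} \;\approx\; \frac{\text{revenue}}{\text{resources expended}} \;\approx\; \frac{\$0}{\text{millions of volunteer hours}} \;\approx\; 0

which stays near zero no matter how good the resulting encyclopedia actually is.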
There have been many editing scandals over the years. While one could argue that editing cabals being caught and banned demonstrates that Wikipedia has structural resistance to such behavior, the scandals that become public are, almost by definition, the ones that were clumsy enough to get caught. The mechanisms that appear robust largely work only against amateurish, poorly hidden or commercially indiscreet operations. What they do not reliably detect are long-term, well-resourced, politically motivated or state-aligned editing groups that behave patiently, avoid obvious sockpuppeting footprints, and stay within procedural boundaries while still dominating page direction.
Wikipedia is great in many ways, but I would say it’s not really designed to be neutral.
The gatekeeping also isn’t democratic. A small, durable cluster of editors on WP:RSN effectively decides which sources are allowed, banned, or given special status. They aren’t vetted for expertise and don’t need to disclose conflicts of interest, yet their decisions propagate across thousands of articles. If a coordinated group influences RSN, they can shape whole domains simply by defining which sources count as "real."
WP:NPOV becomes a matter of which media ecosystems have the loudest megaphone, not which sources are actually most accurate. Control the allowed sources and you control the encyclopedia.
The linked reference is from January, just after Musk bought an election and when he was plugged directly into U.S. presidential authority. If he'd had the self-control to manage his interactions with Trump in a way that didn't rapidly lead to breakdown, things could be looking very grim for Wikipedia by now.
As with so many other aspects of the Trump administration, what's going on illustrates weaknesses in the U.S. system of government that could lead to things that are far worse than what we are currently seeing if the people involved were just a little bit more competent.
Grokipedia is far more than just the anti-Wikipedia. It's a sign of things to come if we don't start hardening the systems, governmental or otherwise, that keep Wikipedia available to the public.
----------------------
[1]https://www.lemonde.fr/en/pixels/article/2025/01/29/why-elon...
That seems a bit paranoid. You worry that the US government will shut down Wikipedia and anything like it hosted anywhere in the world? Or alternatively block it when you’re in US jurisdiction?
Grokipedia by xAI
https://news.ycombinator.com/item?id=45726459
Tim Bray on Grokipedia
https://news.ycombinator.com/item?id=45777015
Grokipedia and the coup against reality
> Its algorithmic ranking system, which weights recent votes more heavily to counter brigading and promote fresh, high-signal content, combined with editorial moderation to curb low-quality or off-topic posts, has cultivated a reputation for rigorous debate, though not without internal tensions over shifting cultural norms, perceived negativity in comments, and debates on whether business-oriented stories overshadow pure technical discourse
What's surprising is not the fact that it exists. Elon is a man with a fragile ego and a history of cheap stunts like this. It’s the fact that he still has an almost cult-like base that treats him as some kind of savior of mankind despite all of this.
[1]: https://grokipedia.com/page/Hacker_News
[2]: https://en.wikipedia.org/wiki/Hacker_News
As a meta point: while I don't personally care for Grokipedia's agenda, I am quite frankly impressed that something like Grokipedia could be stood up so quickly, and this feels like a net positive. While Grokipedia is centralized, Wikipedia is also a monolith in its own right and is plagued by problems (cliques of editors routinely exert their authority over subdomains to the detriment of the truth). If a small group can spin up their own version of Wikipedia, then there is the possibility of a broader, more diverse marketplace of ideas.
For example, Wikipedia's math articles are notoriously abstruse and generally unsuitable for beginners. An encyclopedia that emphasizes a non-technical approach in this domain could be very helpful - though it would almost certainly not be worth the herculean effort to build such a thing as a pure wiki. As an AI wiki, one could spin up an encyclopedia for a variety of skill levels (e.g. grade school, college level, graduate level).
Finally, in case anyone on Grok's team is reading this, the thing that really annoys me most about Grokipedia's UX is that it has no blue links to other articles. It would not be hard to automate this on Grokipedia, but currently there is no possibility of tunneling down some rabbit hole of human knowledge until you find yourself in a totally unfamiliar area. Politics is one thing, but a Wikipedia clone with no links is really no better than just asking ChatGPT.
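Even a crude first pass at this seems cheap to prototype. The sketch below is an assumption-laden toy (the title list and the /page/ URL pattern are made up for illustration; Grokipedia doesn't, as far as I know, expose an API for any of this): it links the first mention of each known article title in an article's text.

    # Toy auto-linker: wrap the first occurrence of each known article title
    # in an HTML link. The title list and /page/ URL format are hypothetical.
    import re

    KNOWN_TITLES = ["Don DeLillo", "White Noise", "Mao II"]  # illustrative

    def autolink(text, titles=KNOWN_TITLES):
        for title in sorted(titles, key=len, reverse=True):  # longest titles first
            slug = title.replace(" ", "_")
            # Match the title only when not inside a word or an existing tag,
            # and link only its first occurrence (case-sensitive here).
            pattern = re.compile(r"(?<![\w>])" + re.escape(title) + r"(?![\w<])")
            text = pattern.sub(f'<a href="/page/{slug}">{title}</a>', text, count=1)
        return text

    print(autolink("Don DeLillo wrote White Noise and later Mao II."))

Real wikification would obviously need disambiguation, case handling, and knowledge of the full title set, but none of that looks like the hard part of the product.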
Sounds more like the world's least efficient way of querying the Median LLM Researcher about a given topic.
Every single <AI>pedia page on a topic will either default to median research-agent output (because the owner doesn't care to influence it), or be functionally equivalent to an AI-ghostwritten think piece because the owner cared enough to spin up a whole new wiki for it. In practice, a lot of owner-doesn't-care articles will be polluted by their prompt fiddling in chaotic ways that help nobody.
Grokipedia is far better than Wikipedia for 3 of the 5 articles I checked. Content-wise it is much richer and more descriptive.
In terms of accuracy, I would say 3 of the 5 were better on Grokipedia, with 2 of them being highly one-sided on Wikipedia (presumably from people camping on the Wikipedia article).
What I don't get is: why wouldn't Elon just make a good version of Grokipedia? It seems way easier than continually telling his 200MM+ followers how great a deeply broken product is.
1: https://www.nbcnews.com/tech/elon-musk/elon-musk-grokipedia-...
I suspect that you might've tried to use an LLM to summarize the article, which is why you missed critical data and got some of the basic facts about its sourcing incorrect. I'm a fan of using LLMs to speed up research, but you should probably pick a different model next time.
The way to make your MVP less shitty is to throw time and money at iterating it. Which I understand is what's happening.
I'm sure in a few years it'll either be quite good, or have clearly highlighted some fundamental limitations of language models as a class.
Start putting real facts into a site and before you know it you're "woke" again with such untruths as Slavery was Bad, Biden won the 2020 Election and of course Full Self Driving is Impossible without Lidar
This is precisely the problem that Grokipedia is arrogantly failing to address: this idea that knowledge naturally leans ideologically left, and that as such it's not only natural, but even beneficial, that there is widespread left-wing bias.