FYI: I manually created this page, and some of the link markup looks malformed: https://halupedia.com/list-of-uninhabited-countries
Using 1886 or 1888 makes Google correctly identify that no such census exists.
Asking about 1887 specifically makes Google refer to some supposed great effort to track the passenger pigeon population amid the species' decline.
You can also just type a random URL and visit it; it'll generate an article. That's what I did before I fixed the search issue, and I usually just do that to avoid the search route.
One hint – check out its prompt, and how it makes its articles so different from those of your project: https://news.ycombinator.com/edit?id=48042306
https://halupedia.com/fcuk-spellchecking-society https://halupedia.com/characterization-of-the-reluctant-peng...
But not without risk! https://halupedia.com/dangers-of-a-virtual-llm-backed-encycl...
You not only made this excellent source of entertainment, you also helped everyone find their unmatched socks, ensuring that "no individual would ever be forced to wear a mismatched pair". (Source: https://halupedia.com/humanitarian-accomplishments-of-the-on...
That could be what's behind it being so quick.
Cloudflare Workers have a ~1 ms cold start.
I feel like I have some minimum latency "priced in" to my expectation when I click a link on a static site, so yours feels uncannily like it's somehow able to anticipate my clicks, adding to the surreal atmosphere.
Anyone of reasonable intelligence can easily tell this is a parody of an encyclopedia. Saying this is bad for the web is like saying The Onion is bad for the web.
But either way, can't wait to see Google's AI Overview cite us.
https://news.ycombinator.com/item?id=48042594
In particular, someone who was seeking training-set pollution likely wouldn't make the fanciful fabrications so blatant, nor open-source their prompt:
As an entertaining way to highlight the importance of upgrading our ways of knowing, playful (& open-source!) projects like this are likely to strengthen the web.
I'm not sure if the bots that scrape data to train LLMs are capable of loading that type of page, or if they only work on pages that have the content inside the HTML itself?
The age when the web was usable at all without JavaScript is long gone. No scraper would get much scraping done without running JavaScript these days.
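To illustrate the distinction the question above hinges on: a naive scraper that only parses the returned HTML sees the full text of a server-rendered page, but only an empty shell for a client-rendered one. A minimal sketch (the two HTML snippets are invented for illustration, not taken from the actual site):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text the way a naive, non-JS-executing scraper would."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())

def extract_text(html):
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)

# Server-rendered page: the article text is in the HTML itself.
server_rendered = (
    "<html><body><h1>Passenger Pigeon Census</h1>"
    "<p>Article text here.</p></body></html>"
)

# Client-rendered page: an empty shell; the content only appears after
# a browser (or headless browser) executes the referenced script.
client_rendered = (
    '<html><body><div id="app"></div>'
    '<script src="/app.js"></script></body></html>'
)

print(extract_text(server_rendered))  # full article text
print(extract_text(client_rendered))  # nothing useful
```

So whether a training-set scraper "sees" such a page depends entirely on whether it runs a headless browser or just fetches the raw HTML.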
This is perfect. Very Neal Stephensony.
Also, this, but with no AI: https://ifdb.org/viewgame?id=032krqe6bjn5au78
Just incredible prose and writing (and gameplay), with something you can run with Frotz/NFrotz/LectRote or any Z-machine interpreter (or Glulxe, as bundled in Gargoyle). A Pentium could run this and amaze you in a similar way.
No need to waste tons of water in datacenters.
Feature request: also be able to click on the Talk page to see the controversies. I don't always want to trust the article itself as the final word.
Edit: Oh look, there's an article about the YC! https://halupedia.com/y-combinator
This should be on YC's About page.
This particular piece of slop is a serendipitously brilliant description of the cult of founder worship in the metaphysical gravity of Silicon Valley.
And the Sokal affair did the same for the humanities, for sure.
BTW: https://halupedia.com/postmodernism
This is golden.
Best entry, hands down. This is a love letter to Pratchett.
> Articles are generated on demand and stored permanently upon first request.
Don't dispel the magic; don't pull back the curtain and let people see the mechanics.
EDIT: As you say in your system prompt, "You never wink at the reader. You never acknowledge that anything is funny or fictional. Everything is reported as though it is completely normal and well-documented".
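The quoted "generated on demand and stored permanently upon first request" behavior is a classic generate-once cache, which would also explain the near-instant page loads. A minimal sketch of the pattern (the generator and the store are stand-ins; the real site presumably uses an LLM call and durable storage such as Workers KV):

```python
store = {}  # stand-in for durable storage (e.g. Workers KV)

def generate_article(slug):
    # Stand-in for the expensive LLM call that writes the article.
    return f"<h1>{slug.replace('-', ' ').title()}</h1><p>...</p>"

def get_article(slug):
    # Only the first request pays the generation cost;
    # every later request for the same slug is a plain lookup.
    if slug not in store:
        store[slug] = generate_article(slug)
    return store[slug]

first = get_article("y-combinator")   # generated now, stored permanently
second = get_article("y-combinator")  # served straight from the store
assert first == second
```

Any URL you type becomes a cache key, which matches the "just visit a random URL and it generates an article" behavior described upthread.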