I feel like search (even on non-Google search engines) has gotten pretty bad. Kagi seems to be the best, but I still see AI-slop or list-slop on it.
Because the web pages would be served up without interactivity, there would be no _easy_ way to tell whether alternate content exists for different users, or whether the access tech (browser, bot, crawler, etc.) is what's causing the content to change.
The big issue is how to reliably identify Google crawlers and bots. This framework might need to go as far as filtering on entire blocks of Google IPv4 and IPv6 addresses, since much of the more recent indexing tech looks at web pages the same way humans would, and may even present itself to the server much like a normal web browser would. But that's a technical problem that can be overcome.
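A rough sketch of what that filter could look like, assuming Python on the server side and using the reverse-DNS-plus-forward-confirm check that Google documents for verifying its crawlers (the published Googlebot IP-range JSON would be another option); the sample IP and function name are just illustrative:

```python
# Sketch: verify that an IP claiming to be Googlebot really is one by doing
# a reverse (PTR) lookup and then forward-confirming the resulting hostname.
import socket

def is_google_crawler(ip: str) -> bool:
    try:
        host, _, _ = socket.gethostbyaddr(ip)  # reverse (PTR) lookup
    except OSError:
        return False
    # Google's crawlers reverse-resolve to googlebot.com or google.com hosts.
    if not host.endswith((".googlebot.com", ".google.com")):
        return False
    try:
        # Forward-confirm the hostname, so a spoofed PTR record alone isn't enough.
        forward_ips = {info[4][0] for info in socket.getaddrinfo(host, None)}
    except OSError:
        return False
    return ip in forward_ips

# e.g. is_google_crawler("66.249.66.1") should come back True for a genuine
# Googlebot IP, and False for anything that merely sends a Googlebot User-Agent.
```

In practice you'd cache the results (or pre-load Google's published IP ranges) rather than doing two DNS lookups on every request.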
Hmmm… this sounds like a potential project. Something that can be used as a foundation for many websites, and possibly even a plugin for existing frameworks like WordPress.
i actually used that server-side approach to achieve something like that on an old site, back before client-side rendering was a thing. i had sections of content that were folded by default, so you would only see the headline and had to click it to load a new version of the page with that section open; for a search engine, though, the server would serve the page with all sections opened and none of the headlines clickable.
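something like this, as a minimal sketch of that pattern (assuming Flask; the section data, the /page route, and the crude user-agent check are all made up for illustration, the old site did it differently):

```python
# minimal sketch of the folded/expanded trick: humans get collapsed headlines
# that reload the page with one section open, search engines get everything open
from flask import Flask, request

app = Flask(__name__)

SECTIONS = [("Intro", "Intro body..."), ("Details", "Details body...")]

def looks_like_search_engine(user_agent: str) -> bool:
    # crude stand-in for real crawler verification (see the DNS check above)
    return any(bot in user_agent.lower() for bot in ("googlebot", "bingbot"))

@app.route("/page")
def page():
    ua = request.headers.get("User-Agent", "")
    opened = request.args.get("open")  # index of the headline that was clicked
    expand_all = looks_like_search_engine(ua)
    parts = []
    for i, (headline, body) in enumerate(SECTIONS):
        if expand_all or str(i) == opened:
            # crawlers get every section open (and no clickable headlines);
            # humans only get the one they asked for
            parts.append(f"<h2>{headline}</h2>\n<p>{body}</p>")
        else:
            # folded: just a headline linking back to the page with that section open
            parts.append(f'<h2><a href="/page?open={i}">{headline}</a></h2>')
    return "\n".join(parts)
```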