4 points by em-bee 14 hours ago | 2 comments
  • nickorlow 13 hours ago
    Assuming Google has allowed Gemini to become some part of the ranking process, asking it how to improve will essentially tell you how to game it.

    I feel like search (even on non-Google search engines) has gotten pretty bad. Kagi seems to be the best, but I still see AI-slop or list-slop on it.

  • rekabis 12 hours ago
    Many of us here are web developers and can see a solution: server-side assembly of the content. That is, abandon client-side interactivity so as to reliably serve up different content depending on who’s calling, without the visitor being able to “look under the hood” and see the alternative content (which client-side rendering can expose).

    Because the pages are served up without interactivity, there would be no way to _easily_ tell whether alternate content exists for different users, i.e. whether the access tech (browser, bot, crawler, etc.) is what is causing the content to change.
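
    A minimal sketch of that idea, assuming a Node/Express server — the crawler check, route, and page content below are made-up placeholders, not a real implementation:

      import express, { Request, Response } from "express";

      const app = express();

      // Very rough client classification based on the User-Agent header alone;
      // a real setup would verify the claim (see the DNS sketch further down)
      // rather than trust the header.
      function looksLikeCrawler(req: Request): boolean {
        const ua = (req.get("user-agent") || "").toLowerCase();
        return /googlebot|bingbot|duckduckbot/.test(ua);
      }

      // Assemble the page entirely on the server. Each visitor only receives
      // the finished HTML for their own variant, so nothing in the payload
      // hints that another variant exists.
      app.get("/article/:slug", (req: Request, res: Response) => {
        const body = looksLikeCrawler(req)
          ? "<section>Full, expanded content for indexing.</section>"
          : "<section>Collapsed teaser; click through for the rest.</section>";
        res.type("html").send(`<h1>${req.params.slug}</h1>${body}`);
      });

      app.listen(3000);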

    The big issue is how to reliably identify Google’s crawlers and bots. The framework might need to go as far as identifying entire blocks of Google IPv4 and IPv6 addresses to use as a filter, since much of the more recent indexing tech looks at web pages the same way humans would, and may even present itself to the server much the same way a normal web browser would. But this is a technical problem that can be overcome.
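
    For the Google case specifically, Google documents a reverse-DNS check for verifying Googlebot and also publishes JSON lists of its crawler IP ranges. A rough Node sketch of the DNS route (an assumption-laden outline, not production code):

      import { promises as dns } from "node:dns";

      // Reverse-resolve the caller's IP, check the hostname falls under
      // googlebot.com / google.com, then forward-resolve that hostname and
      // confirm it maps back to the same IP. Verdicts should be cached per IP
      // so normal traffic doesn't pay for DNS lookups on every request.
      async function isVerifiedGooglebot(ip: string): Promise<boolean> {
        try {
          const hosts = await dns.reverse(ip);
          for (const host of hosts) {
            if (!/\.(googlebot|google)\.com$/i.test(host)) continue;
            const { address } = await dns.lookup(host);
            if (address === ip) return true;
          }
        } catch {
          // unresolvable or spoofed IPs are treated as ordinary visitors
        }
        return false;
      }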

    Hmmm… this sounds like a potential project. Something that can be used as a foundation for many websites, and possibly even a plugin for existing frameworks like WordPress.

    • em-bee 11 hours ago
      whether client-side rendering exposes alternative content really depends on how the code is actually written. i could still send different data depending on the type of client. but i agree with the general point.

      i actually used that server-side approach to achieve something similar on an old site, back before client-side rendering was a thing. i had sections of content that were folded by default, so you would only see the headline and had to click to load a new version of the page with that section open. for a search engine, though, the server would serve the page with all sections opened and none of the headlines clickable.
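
      roughly like this (the data shape and query parameter are just for illustration, not what the old site actually used):

        type Section = { title: string; body: string };

        // folded by default for human visitors: each headline links to a
        // reload of the page with that one section open. a search engine
        // gets everything expanded and plain, non-clickable headlines.
        function renderSections(sections: Section[], forCrawler: boolean, openIndex = -1): string {
          return sections
            .map((s, i) =>
              forCrawler || i === openIndex
                ? `<h2>${s.title}</h2><div>${s.body}</div>`
                : `<h2><a href="?open=${i}">${s.title}</a></h2>`
            )
            .join("\n");
        }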