189 points by dtj1123 · 7 hours ago · 26 comments
  • xd1936 · 7 hours ago
    Fun! I wish WebTorrent had caught on more. I've always thought it had a worthy place in the modern P2P conversation.

    In 2020, I messed around with a PoC for what hosting and distributing Linux distros could look like using WebTorrent[1]. The protocol project as a whole has a lovely and brilliant design but has stayed mostly stagnant in recent years. There are only a couple of WebRTC-enabled torrent trackers that have remained active and stable.

    1. https://github.com/leoherzog/LinuxExchange

    • r14c · 5 hours ago
      I think the issue has generally been that WebTorrent doesn't work enough like the real thing to do its job properly. There are huge BitTorrent-based streaming media networks out there, illicit, sure, but it's a proven technology. If browsers had real torrent clients we would be having a very different conversation imo

      I don't remember the WebTorrent issue numbers off the top of my head, but there are a number of long-standing issues that seem blocked on WebRTC limitations.

      • embedding-shape · 5 hours ago
        I think we still have the same blocker as we had back when WebTorrent first appeared: browsers cannot be real torrent clients and open connections without some initial routing for the discovery, and they cannot open bi-directional unordered connections between two browsers.

        If we could, say, do peer discovery via Bluetooth, and open sockets directly from a browser page, we could in theory have local-first websites running in the browser that make P2P connections straight between browsers.

        • Seattle3503 · 3 hours ago
          If a tracker could be connected to via WebRTC and had additional STUN functionality, would that suffice? Are there additional WebRTC limitations?

          > they cannot open bi-directional unordered connections between two browsers.

          Last I checked, DataChannels were bidirectional

          • embedding-shape · 3 hours ago
            Yes, but it's STUN that sucks. If the software ships with a public (on the internet) relay/STUN server for connecting the two clients, it won't work if either isn't connected to the internet, even though the clients could still be on the same network and reach each other.
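            A browser-only sketch of the configuration in question (the STUN hostname is hypothetical, and this won't run outside a browser). It hard-codes a public STUN server, which is exactly what becomes unreachable when both peers are offline together on the same LAN; it also shows that DataChannels can opt out of ordering and reliability, which is what torrent-style transfers want:

            ```javascript
            // Hard-coded public STUN server: useless if the internet is
            // unreachable, even when both peers sit on the same LAN.
            const pc = new RTCPeerConnection({
              iceServers: [{ urls: 'stun:stun.example.org:3478' }], // hypothetical host
            });

            // DataChannels are bidirectional; ordered/reliable delivery is opt-out.
            const channel = pc.createDataChannel('pieces', {
              ordered: false,    // allow out-of-order delivery
              maxRetransmits: 0, // unreliable, UDP-like
            });
            ```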
            • jychang · 2 hours ago
              That seems like a nonissue for the purposes of this discussion though, in terms of user uptake. TikTok and Facebook and other websites aren't exactly focused on serving people on the same network.
    • bluedino · 4 hours ago
      Was there ever a web-based Jigdo?
  • DJBunnies · 3 hours ago
    Every time I try these they never work, including this one.

    I’m not sure what the value prop is over just using a torrent client?

    Maybe when they’re less buggy they’ll become a thing.

    • Sephr · 26 minutes ago
      I'm planning to eventually launch an open source platform with the same name (peerweb.com) that I hope will be vastly more usable, with a distributed anti-abuse protocol, automatic asset distribution prioritization for highly-requested files, streaming UGC APIs (e.g. start uploading a video and immediately get a working sharable link before upload completion), proper integration with site URLs (no ugly uuids etc. visible or required in your site URLs), and adjustable latency thresholds to failover to normal CDNs whenever peers take too long to respond.

      I put the project on hiatus years ago but I'm starting it back up soon! My project is not vibe coded and has thus far been manually architected with a deep consideration for both user and site owner expectations in the web ecosystem.
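      A minimal Node sketch of the latency-threshold failover idea (fetchFromPeers/fetchFromCDN are hypothetical stand-ins): race the peer swarm against a timer, and fall back to a conventional CDN when peers blow the budget.

      ```javascript
      // Race a peer fetch against a timeout; fall back to a CDN on timeout.
      const TIMEOUT = Symbol('timeout');

      const after = (ms, value) =>
        new Promise((resolve) => setTimeout(() => resolve(value), ms));

      async function fetchWithFailover(fetchFromPeers, fetchFromCDN, thresholdMs) {
        const winner = await Promise.race([fetchFromPeers(), after(thresholdMs, TIMEOUT)]);
        return winner === TIMEOUT ? fetchFromCDN() : winner;
      }

      // Simulated sources: peers answer in 50 ms, the CDN immediately.
      const slowPeers = () => after(50, 'from peers');
      const cdn = async () => 'from cdn';

      fetchWithFailover(slowPeers, cdn, 10).then(console.log);  // peers miss the 10 ms budget
      fetchWithFailover(slowPeers, cdn, 200).then(console.log); // peers make it
      ```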

  • kamranjon · 6 hours ago
    I think one of the values of (what appears to be) AI generated projects like this is that they can make me aware of the underlying technology that I might not have heard about - for example WebTorrent: https://webtorrent.io/faq

    Pretty cool! Not sure what this offers over WebTorrent itself, but I was happy to learn about its existence.
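    A minimal Node sketch of the piece-level integrity check BitTorrent (and WebTorrent) relies on: the metadata carries one SHA-1 per fixed-size piece, so chunks received from untrusted peers can be verified before being accepted. Piece size and payload here are made up.

    ```javascript
    const crypto = require('crypto');

    const sha1 = (buf) => crypto.createHash('sha1').update(buf).digest('hex');

    const PIECE_LENGTH = 8; // real torrents use 16 KiB to several MiB
    const payload = Buffer.from('hello p2p web, hello!');

    // Publisher side: hash each piece and ship the hashes in the metadata.
    const pieces = [];
    for (let i = 0; i < payload.length; i += PIECE_LENGTH) {
      pieces.push(payload.subarray(i, i + PIECE_LENGTH));
    }
    const pieceHashes = pieces.map(sha1);

    // Downloader side: verify each received piece before accepting it.
    const verify = (index, piece) => sha1(piece) === pieceHashes[index];

    console.log(verify(0, pieces[0]));                // true: genuine piece
    console.log(verify(0, Buffer.from('corrupted'))); // false: rejected
    ```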

  • littlecranky67 · 5 hours ago
    Cool. Some people complained about broken demos, so I uploaded the mdwiki.info [1] website unaltered and it seems to work fine [0]. MDwiki is a single .html file that fetches custom markdown via ajax relative to the html file and renders it via Javascript.

    [0]: https://peerweb.lol/?orc=b549f37bb4519d1abd2952483610b8078e6...

    [1]: https://dynalon.github.io/mdwiki/

    • Timwi · 3 hours ago
      Why is it called MDwiki? It's clearly not a wiki.
      • jmercouris · 2 hours ago
        Sure, in a sense, but “wiki” actually just means “quick”.
  • misir · 3 hours ago
    I wonder if these colors are a kind of watermark hardcoded as system instructions. Almost all slopware made using Claude has the same color palette. So much for a random token generator being this consistent.
    • orbital-decay · 26 minutes ago
      https://en.wikipedia.org/wiki/Mode_collapse

      Ask any modern (post-GPT-2) LLM about a random color/name/city repeatedly a few dozen times, and you'll see it's not that random. You can influence this with a prompt, obviously, but if the prompt stays the same each time, the output is always very similar despite the existence of thousands of valid alternatives. Which is the case for any vibecoded thing that doesn't care about color palette, in particular.

      This effect is largely responsible for slop (as in annoying stereotypes). It's fixable in principle, but there's very little research, and I don't see the big AI shops caring enough.

    • IhateAI · 2 hours ago
      Yep, and I refuse to use sites that look like this. Lovable-built frontends/landing pages have a similar feel. Instant loss of trust and desire to try it out.
      • j45 · an hour ago
        That's interesting - do you think it's because it's familiar to you?

        Would it be the case for folks who don't have any idea what Lovable is?

        Familiar UI is similar to what Tailwind or Bootstrap offers; do they do something different to keep it fresh?

        Average internet users/consumers are likely used to the default Shopify checkout.

    • karanSF · 2 hours ago
      Emojis on every line are an AI tell. The times I do use AI (shhhh...) I always remove them and tweak the language a bit.
      • netule · an hour ago
        Before LLMs became big, I used emojis in my PRs and merge requests for fun and to break up the monotony a bit. Now I avoid them, lest I be accused of being a bot.
  • sroerick · 7 hours ago
    This is pretty interesting!

    I think serving video is a particularly interesting use of Webtorrent. I think it would be good if you could add this as a front end to basically make sites DDOS proof. So you host like a regular site, but with a JS front end that hosts the site P2P the more traffic there is.

    • NewsaHackO · 6 hours ago
      I think it is very difficult (and dangerous to the host) to serve user-uploaded videos at scale, particularly from a moderation standpoint. The problem is even worse if everyone is anonymous. There is a reason YouTube has such a monopoly on personal video hosting. Maybe developments in AI moderation will make it more palatable in the future.
    • stanac · 6 hours ago
      There is PeerTube for video content.
  • SLWW · 6 hours ago
    I can't imagine that Peerweb has much in the way of stopping certain types of material from being uploaded.
    • j45 · an hour ago
      Smaller sites likely have a smaller footprint.
    • estimator7292 · 4 hours ago
      [flagged]
      • ericyd · 4 hours ago
        This response feels disproportionate to the comment's content
        • rainonmoon · 4 hours ago
          And also just… misguided? I don’t particularly think of neo-Nazis when I think of people who advocate against CSAM.
          • SLWW · 3 hours ago
            We all know that CSAM is one of the first things that gets uploaded to these sorts of platforms.

            If advocating against CSAM = fascism, then I'll be the first to say that I'm a Nazi fascist. o7

            • tombert · 3 hours ago
              In high school, an acquaintance of mine made the website "e-imagesite.com" [1]. It was a very easy-to-use image uploading site (and honestly less irritating than ImageShack and predated imgur). It was just being hosted on HostGator, I believe, and written in PHP and used jQuery.

              I believe he had to eventually shut it down because people kept uploading horrifying stuff to it, and it was never even that popular. Child porn and bestiality were constantly being uploaded and I don't think he liked having to constantly report stuff to the FBI.

              After building a proper comment section for my blog (including tripcodes!), I've thought about making my own "chan" site, since I think that could be fun, but I am completely terrified of people uploading horrible stuff that I would be forced to sift through and moderate pretty frequently. User submissions open up a huge legal can of worms and I am not sure that's a path that I'm willing to commit myself going down.

              When there's strong anonymity, I suspect that this problem could be even worse.

              It's a little depressing, because decentralized and distributed computing is one of the most interesting parts of computer science to me, but it feels like whenever I mention anything about it, people immediately assume piracy or illicit material.

              [1] https://web.archive.org/web/20090313063155/http://www.e-imag...

            • rainonmoon · 3 hours ago
              Yeah, I’m fully in support of a decentralised web but the internet is old enough now that being naive about this stuff has become equivalent to being maliciously incompetent. Without designing for things like community or self-governance and moderation, you’re designing for trouble. Thinking about ways to healthily cultivate a peer-to-peer web doesn’t make someone a Nazi, it makes them a responsible member of a community.
    • b00ty4breakfast · 4 hours ago
      you can't stop someone from verbally describing certain objectionable material, therefore we should regulate the medium thru which sound travels and suck up all the oxygen on the planet. it's the only way to save the children
  • mcjiggerlog · 6 hours ago
    This is cool - I actually worked on something similar way back in the day: https://github.com/tom-james-watson/wtp-ext. It avoided the need to have any kind of intermediary website entirely.

    The cool thing was it worked at the browser level using experimental libdweb support, though that has unfortunately since been abandoned. You could literally load URLs like wtp://tomjwatson.com/blog directly in your browser.

  • turtleyacht · a month ago
    • dang · 7 hours ago
      Thanks! We'll put that link in the toptext.
  • gnarbarian · 6 hours ago
    love this. I've been working on something similar for months now

    https://metaversejs.github.io/peercompute/

    it's a gpgpu decentralized heterogeneous hpc p2p compute platform that runs in the browser

  • BrouteMinou · 6 hours ago
    Nice, I clicked on the first demo, and I got stuck at connecting with peers.

    I like the idea though.

  • fooker · 3 hours ago
    What do you all think of the chances that we have decentralized AI infrastructure like this at some point?
  • bricss · 3 hours ago
    Somebody has to revive Nullsoft WASTE p2p from 2003 tho
  • dpweb · 5 hours ago
    Useless if it takes > 5 sec. to load a page
  • cyrusradfar · 6 hours ago
    OT: Can someone vibe-code Geocities back to life?
    • 800xl · 6 hours ago
      Check out neocities.org
      • cyrusradfar · 4 hours ago
        you made my life. Thank you life long internet friend.
    • ipaddr · 6 hours ago
      That would take forever. If you can get the domain I'll hand code it in perl.
      • awesome_dude · 5 hours ago
        <marquee><blink>Neat!!</blink></marquee>
    • AreShoesFeet000 · 6 hours ago
      give me the tokens.
  • rickcarlino · 5 hours ago
    Similar project I vibe coded a few weeks ago: "Gnutella/Limewire but WebRTC".

    https://github.com/RickCarlino/hazelhop

    It works, though probably needs some cleanup and security review before being used seriously (thus no running public instance).

  • journal · 5 hours ago
    I wish stuff like this was more like double-click, agree, and use. They always make it complicated, to where you're spending time trying to understand if you should continue to spend more time on this.
  • logicallee · 5 hours ago
    I tried this; the "Functionality test page" is stuck on "Loading peer web site... connecting to peers". I can't load any website from this.

    https://imgur.com/gallery/loaidng-peerweb-site-uICLGhK

    • davidcollantes · 5 hours ago
      Yes, none work for me. They either don’t have peers, or the few ones are on a very slow network.
  • dana321 · 6 hours ago
    None of the demo sites work for me.

    Probably needs more testing and debugging.

  • j45 · 7 hours ago
    In its own reimagined way from what’s possible in 2026, this could kick off a new kind of geocities.
  • dcreater · 6 hours ago
    Good, important idea. Unfortunately a bad, low-effort, vibe-coded execution.
    • j45 · an hour ago
      Still a shipped idea, driven by someone. The author has some other interesting ideas.
  • Uptrenda · 5 hours ago
    I feel like if it were combined with federated caching servers it would actually work. Then you would have persistence, and the P2P part helps take load off popular content. There are now P2P databases that seem to operate this way, combining the best of both worlds.
  • elbci · a month ago
    I don't get it, I upload my files to your site, then I send my friends links to your site? How is this not a single point of failure?
    • dang · 7 hours ago
      [sorry for the weird timestamps - the OP was submitted a while ago and I just re-upped it.]
    • toomuchtodo · 7 hours ago
      IPFS [1] requires a gateway unfortunately (whether remote or running locally). If you can use content identifiers that are supported by web primitives, you get the distributed nature without the IPFS scaffolding. Content is versioned by hash, although I haven't looked to see if mutable torrents [2] [3] are used in this implementation. Searching via distributed hash tables for torrent metadata, cryptographically signed by the publisher, remains a requirement imho.

      Bittorrent, in my experience, "just works," whether you're relying on a torrent server or a magnet link to join a swarm and retrieve data. So, this is an interesting experiment in the IPFS, torrent, filecoin distributed content space.

      [1] https://ipfs.tech/

      [2] https://news.ycombinator.com/item?id=29920271

      [3] https://www.bittorrent.org/beps/bep_0046.html
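      A minimal Node sketch of the content addressing involved (field values are made up): the infohash that identifies a torrent, and that a magnet link carries, is the SHA-1 of the bencoded "info" dictionary, so the name is derived from the content rather than from a host.

      ```javascript
      const crypto = require('crypto');

      // Minimal bencoder covering strings/Buffers, integers, and dictionaries.
      function bencode(value) {
        if (Buffer.isBuffer(value)) {
          return Buffer.concat([Buffer.from(value.length + ':'), value]);
        }
        if (typeof value === 'string') {
          const b = Buffer.from(value);
          return Buffer.concat([Buffer.from(b.length + ':'), b]);
        }
        if (typeof value === 'number') {
          return Buffer.from('i' + value + 'e');
        }
        // Dictionary: keys must be serialized in sorted order.
        const parts = [Buffer.from('d')];
        for (const k of Object.keys(value).sort()) {
          parts.push(bencode(k), bencode(value[k]));
        }
        parts.push(Buffer.from('e'));
        return Buffer.concat(parts);
      }

      // Hypothetical single-file info dict for a small site bundle.
      const info = {
        length: 10,
        name: 'index.html',
        'piece length': 16384,
        pieces: crypto.createHash('sha1').update('fake piece').digest(),
      };

      const infohash = crypto.createHash('sha1').update(bencode(info)).digest('hex');
      const magnet = 'magnet:?xt=urn:btih:' + infohash + '&dn=index.html';
      console.log(magnet);
      ```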

      • amelius · 3 hours ago
        You don't hear much these days about IPFS, but I remember one big problem with it was illegal content and how to deal with it.
    • dtj1123 · a month ago
      This isn't my site, nor do I have any opinions on the implementation here. I do however find the idea of serving web pages via torrent interesting.
      • elbci · a month ago
        P2P storage, as in torrents or IPFS or whatever, is the part we've kinda solved already. Serving/searching/addressing without (centralized) DNS is still missing for an (urgently needed) p2p censorship-resistant internet. Unfortunately this guy just uses some buzzwords to offer nothing new - why would I share links to that site instead of sharing torrent magnet links?
        • recursivegirth · 6 hours ago
          Thinking about this a little bit... could we use a blockchain ledger as an authoritative source for DNS records?

          Users can publish their DNS + pub key to the append-only blockchain, signed with their private key.

          Use a torrent file to connect to an initial tracker to download the blockchain.

          Once the blockchain is downloaded, every computer would have a full copy of the DNS database and could use that for discoverability.

          I have no experience with blockchains or building trackers, so maybe this is a dumb idea.
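          The publish-and-verify step can be sketched with Node's built-in Ed25519 support (the record format and values are hypothetical; the ledger itself is out of scope):

          ```javascript
          const crypto = require('crypto');

          const { publicKey, privateKey } = crypto.generateKeyPairSync('ed25519');

          // Hypothetical record: a name-to-infohash mapping plus a sequence
          // number so newer records can supersede older ones.
          const record = JSON.stringify({
            name: 'example.p2p',
            infohash: 'b549f37bb4519d1abd2952483610b807',
            seq: 1,
          });

          const signature = crypto.sign(null, Buffer.from(record), privateKey);

          // Any node replaying the ledger can check a record before trusting it.
          const ok = crypto.verify(null, Buffer.from(record), publicKey, signature);
          const tampered = crypto.verify(
            null,
            Buffer.from(record.replace('example', 'evil')),
            publicKey,
            signature,
          );
          console.log(ok, tampered); // true false
          ```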

        • sroerick · 7 hours ago
          This is a great point.

          One issue I've had with IPFS is that there's nothing baked into the protocol to maintain peer health, which really limits the ability to keep the swarm connected and healthy.

          • theendisney · 5 hours ago
            I used to add webseeds, but clients seem to love just downloading from there rather than from my conventional seeding.

            Some new ideas are needed in this space.

        • dtj1123 · 24 days ago
          You make a good point.