168 points by PaulHoule 6 days ago | 6 comments
  • atomic1286 days ago
    You can do a lot better than this, by using Go as a JIT code generator, dynamically linking the result, and jumping into it with cgo. Easily saturates the CPU vector math units.
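
    Roughly, the pattern looks like this (a minimal sketch rather than my production code; the generated kernel, file names, and cc flags are all illustrative): emit specialized C at runtime, let the system compiler vectorize it, dynamically link the result, and jump into it through a tiny cgo shim.

      package main

      /*
      #cgo LDFLAGS: -ldl
      #include <dlfcn.h>
      #include <stdlib.h>

      // The generated library is assumed to export:
      //   double dot(const double *a, const double *b, long n);
      typedef double (*dot_fn)(const double *, const double *, long);

      static double call_dot(void *fn, const double *a, const double *b, long n) {
          return ((dot_fn)fn)(a, b, n);
      }
      */
      import "C"

      import (
          "fmt"
          "os"
          "os/exec"
          "unsafe"
      )

      func main() {
          // 1. Emit C specialized to whatever shapes/constants are known at runtime.
          src := "double dot(const double *a, const double *b, long n) {\n" +
              "  double s = 0;\n" +
              "  for (long i = 0; i < n; i++) s += a[i] * b[i];\n" +
              "  return s;\n}\n"
          if err := os.WriteFile("kernel.c", []byte(src), 0o644); err != nil {
              panic(err)
          }

          // 2. Let the system C compiler do the heavy lifting (vectorization).
          out, err := exec.Command("cc", "-O3", "-march=native", "-shared", "-fPIC",
              "-o", "kernel.so", "kernel.c").CombinedOutput()
          if err != nil {
              panic(fmt.Sprintf("cc failed: %v\n%s", err, out))
          }

          // 3. Dynamically link the result and jump into it via cgo.
          name := C.CString("./kernel.so")
          defer C.free(unsafe.Pointer(name))
          handle := C.dlopen(name, C.RTLD_NOW)
          if handle == nil {
              panic("dlopen failed")
          }
          sym := C.CString("dot")
          defer C.free(unsafe.Pointer(sym))
          fn := C.dlsym(handle, sym)
          if fn == nil {
              panic("dlsym failed")
          }

          a := []float64{1, 2, 3, 4}
          b := []float64{5, 6, 7, 8}
          r := C.call_dot(fn,
              (*C.double)(unsafe.Pointer(&a[0])),
              (*C.double)(unsafe.Pointer(&b[0])),
              C.long(len(a)))
          fmt.Println(float64(r)) // 70
      }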

    I use exactly this approach for the futures/options prediction transformers on my website.

    But I will never "open" another piece of software, now that it's all grist for the LLM code generator industry. Anonymous common property, sold by the LLM companies. No credit to the author.

    Why anyone opens any software anymore is a mystery to me. We are witnessing the greatest theft of intellectual property in the history of Man.

    • badsectoracula6 days ago
      > Why anyone opens any software anymore is a mystery to me.

      Because I open my software to be useful to others, including others that may benefit from my code indirectly via an LLM being trained on it. If anything, just recently I was thinking of how to make a documentation generator that produces documents in a format that'd be easier for LLMs to "grok", so that people can feed it to an LLM and ask questions about it.

      I'd advocate for using a local LLM instead, though: they may not be technically as good as the cloud stuff you rent, but they are good enough, can run on most mid-to-high-end PCs, and you are in control.

    • godelski6 days ago

        > We are witnessing the greatest theft of intellectual property in the history of Man.
      
      Issue is that this has been going on for decades. We've been really bad at allocating capital to people who are building important and highly influential software. These big companies do play a role, but it is a shame that even a small portion of these profits does not go back to the people whose work is a cornerstone of their success. I often wonder what the world would look like if being an open source developer were actually profitable. But we definitely know what the world looks like when being an open source developer essentially means having two full-time jobs and getting paid for one.

      I think the problem is people see it as "charity" and see charity work as less valuable. I remember Steve Levitt talking about a conversation he had with Bill Gates over this. I think it was in the PIMA episode where they discuss CEO pay at charities and how it is far less than at a corporation, even if the work is the same.

      • robertlagrant5 days ago
        > We've been really bad at allocating capital to people who are building important and highly influential software

        What does this mean? Can you give an example?

        • realo5 days ago
          • robertlagrant5 days ago
            Sorry - I might be being a bit slow. How is that an example of poor capital allocation?
            • godelski5 days ago

                - XZ is critical software
                - XZ was (is?) developed by a single person
                - XZ developer does XZ development in their spare time, having a normal job to pay the bills
                - XZ developer gets overburdened. Not making money, they can't hire another dev.
                - Pressure builds up. An attacker takes advantage of this, especially since not everything can be checked due to said overburden
              
              Look at it from the flip side. Take the counterfactual where XZ Utils was making money for the developer's work:

                - XZ is critical software, therefore it is funded
                - XZ is funded and critical, so more than one developer is hired to ensure quality
                - XZ is funded, developers don't have a second job. They have ONE job
                - XZ is over burdened. XZ is funded. XZ hires more devs.
              
              It's true that a hacker can still infiltrate corporate software, but it is also true that the pressures would have been far lower were the main dev not doing 2 fucking jobs.
              • robertlagrant4 days ago
                Of course if there were a large company maintaining XZ Utils then that would dramatically mitigate the cyber risk, but isn't this just the default economics of OSS?

                Approaching it from the point of view of "it's obviously unjust and stupid that people voluntarily offered their software for nothing" without questioning the prior seems a bit short-sighted.

                If you want to say "no one should use OSS because of the cyber risk", you might be right. But then what should replace it? What's the proposal?

                • gryfft4 days ago
                  Not every valid recognition of a real problem has to come with a 13-page point-by-point proposal for a fix.
        • whoiscroberts5 days ago
          I took it to mean that we give money to people who ask for it
    • ncruces6 days ago
      Because some people just don't care where their code ends up.

      Many people release code to the "public domain" (or under very liberal licenses). If they never cared whether corporate entity™ used it in proprietary software, why should they care if LLM chews on it and regurgitates it out?

      Also, it's far worse if entitled user® posts abusive issues to my repo, than if they copy snippets of my code through a LLM and are forced to support their inferior spitballed copy all by themselves.

      • csdvrx6 days ago
        > Because some people just don't care where their code ends up.

        Yes, take me for example.

        > Many people release code to the "public domain" (or under very liberal licenses).

        In my case, the MIT license, because I saw it was popular, and I was afraid that in some places, "public domain" might cause unexpected legal issues to whoever wants to "play by the book" and use my code.

        > if LLM chews on it and regurgitates it out

        As work coming from a machine does not have copyright protection, whoever gets an LLM to spit my code back out can then claim it as their own, under whatever terms they like.

        If this person wants to contribute to a free software project and release the code under the GPL v2 or v3, good: it may help create a new feature that users will enjoy!

        If this person wants to contribute to their company's private software that's only available on a subscription basis (and let's say the subscription is sold at an eye-watering price), good: it means whoever pays for this subscription will get more for their money, and whoever uses the software may get a new feature they will enjoy!

        Software has nearly zero marginal cost. An LLM is the closest thing we have to a Star Trek-level "replicator", giving everyone everything they want.

        On which moral grounds would you object to a Star Trek-level replicator for physical goods? (Please make them good: offering any food anyone may want would fix world hunger once and for all.)

        Then why object to that for virtual goods?

        Maybe I'm reading too much into your reply, but I don't see it as trolling or bad faith.

        I see variants of it in many places, and they all look to me very close to Luddism: rejecting a new technology because you fear for your own work, while ignoring what this technology will enable in the bigger picture. In the original case of Luddism, that meant reducing the price of clothing for everyone by increasing production and decreasing labor, freeing workers to move into other fields where they could try to satisfy other human wants - some that would be inconceivable to the original Luddites, like videogames.

        We should feel grateful we get more technology, as it removes constraints and makes more people happy.

        • hnlmorg6 days ago
          I don’t think fearing for one’s job is necessarily a bad reason, because as much as I love the idea of a Star Trek utopia, real and present people have real responsibilities, like children who are cared for with money generated by their careers.

          This is particularly relevant in societies which take a dim view of their social responsibilities (I’m looking at you America) which means there’s less of a safety net should that career disappear.

          We are already seeing more developers than job vacancies in the tech market, so this isn't a theoretical concern either.

          That all said, I don’t think hiding our valuable code for fear of LLMs is the right solution either. If your code is really that good then you’ll be more likely to secure your career by sharing your code because it builds a visible reputation that extends further than any verbiage on a CV might.

          So while I don’t agree with the LLM excuse I can still completely understand why someone might cite it as a reason not to open their source.

          Another valid reason is that some people have been completely burnt out dealing with entitled complaints from users. Thankfully I’ve had a mostly positive experience personally, but I’ve read that others haven’t been so fortunate.

          • csdvrx6 days ago
            > I’m looking at you America

            And I'm looking back at you from America :)

            > We are already seeing more developers than job vacancies in the tech market, so this isn't a theoretical concern either.

            Agriculture also employs far fewer people than a few hundred years ago, yet we have more food in quantity and diversity, so I see that as a good thing.

            I suppose we just have very different beliefs and values.

            Thanks for your answer, as it helped me understand your perspective.

            • hnlmorg6 days ago
              I think you’ve misread my comment. I’m neither the GP nor against LLMs. I’m just offering a counterpoint that a fear for one’s job isn’t an unreasonable perspective.
        • TeMPOraL6 days ago
          > On which moral grounds would you object to a Star Trek-level replicator for physical goods? Then why object to that for virtual goods?

          This just made me realize a distressing thing - if we ever built a replicator, a lot of people might then want to destroy it. For the same reason I believe they object to LLMs - greed and entitlement. Because they don't get to benefit personally, because they don't get the right of first refusal, the instinct is to deny the value to others. The Dog in the Manger.

          • MyOutfitIsVague6 days ago
            I use LLMs and consider them quite useful, but I think that characterization of detractors is very disingenuous. People don't object to LLMs out of greed and entitlement. People object to LLMs because the copyright and IP systems in most of the modern world have equated copying with theft for so long, complete with the threat of legal action and even prison sentences. This system was said to be there to keep people fed and employed. Suddenly, when giant companies have billions of dollars to gain by ignoring copyright, they are allowed to. We've lived in a couple generations where giant companies have been able to completely own and control our culture, which should belong to the people.

            People object to modern AI because it's another glaring sign that capital doesn't care about human life, and the people who own the capital largely don't either. They will use that as a PR angle until it's not useful anymore, and then proudly say the opposite when it suits them. It's flagrant hypocrisy.

            I believe that if we had sane copyright terms and limits so we were actually entitled to use and share our own culture and media as we see fit, and better social safety nets so people whose jobs become outmoded didn't have to worry about losing their homes and having their families go hungry, very few people would be actually against LLMs.

            • dijksterhuis5 days ago
              as a detractor, yes, partially (LLMs are also overblown marketing hype BS, which is one of the many other reasons).

              > I believe that if we had sane copyright terms and limits so we were actually entitled to use and share our own culture and media as we see fit,

              i agreed with everything except this.

              to me this feels like you’re saying “if only we were allowed to murder people then we’d have less crime” (not exactly what you are saying and a bit hyperbolic, but hopefully it helps highlight my perspective on what you said?).

              existing copyright laws are the copyright laws. we all have to follow them or face penalties/consequences. just like the laws on murder/homicide etc.

              it’s the fact these companies are being allowed to flout the law with zero repercussions i have a problem with (specifically on this one of the problems).

              if they’d licensed all the content and it was opt-out — i wouldn’t give a shit about this part.

              having worked in copyright, i feel very strongly about it. as do most people who have their works protected by it. it’s very easy to argue against copyright protections when your livelihood does not depend on it (note: i’m not arguing against you here, you mentioned social safety nets etc which is one direction to go i suppose, i’m just venting somewhat at the oft repeated opinion here on HN that copyright is evil and should be completely abolished… good luck listening to any decent music in ten years time if that happens!!).

              edit — i know there’s nothing i can do about this. which also contributes to the misanthropic attitudes towards magic LLMs.

              • TeMPOraL4 days ago
                FWIW, my perspective and description of the detractors is aimed primarily at the detractors I see - which are notably not the artists or other people whose livelihood depends on copyright protection. Instead, the loudest voices are those of the bystanders who imagined a possible windfall if only OpenAI et al. had to pay them for using their Reddit comments and a few blog articles written half a decade ago. These are the "I won't write comments on public forums anymore, nor will I blog, because LLMs" voices.

                I fundamentally believe that people are not entitled to 100% of the value created by their labor - in fact, society can only function if there's surplus that others can build upon, and when companies try to capture 100% of their output, we call this out as extreme greed and a symptom of "late stage capitalism".

                I do agree that people who are directly affected by LLMs basically replacing their job have a valid argument, though I don't think the core of it relates to copyright anyway.

                As for the laws:

                > it’s the fact these companies are being allowed to flout the law with zero repercussions i have a problem with (specifically on this one of the problems).

                It's not been determined that they actually broke the law on this. AFAIK it's still an open question, pending (in the US) court verdicts or (elsewhere in the world) updates to IP regulation. Morally, I personally think the use of data for training AI doesn't really violate copyright, and is closer to a person consuming content and learning from it - but more importantly, I think preventing this use denies humanity great value for no good reason. Almost all content in the training set, on the margin, both contributes an infinitesimal bit to the resulting model and, at the same time, provides much more value this way than it ever did before.

        • ncruces6 days ago
          > Maybe I'm reading too much into your reply, but I don't see it as trolling or bad faith.

          Maybe you are. All my repos are either MIT (where I'm a little proud, and would appreciate the acknowledgement - though realistically, I'd never sue anyone over it) or MIT-0.

          So yeah, if it ends up in an LLM, and people copy it, great. Fewer "please give me free support" requests coming my way.

        • ben_w4 days ago
          > On which moral grounds would you object to a Star Trek-level replicator for physical goods? (Please make them good: offering any food anyone may want would fix world hunger once and for all.)

          Unfortunately this is one topic in which my philosophy qualification comes in handy — "moral grounds" vary so much between people that they're almost useless as an argument.

          Consider the following list of examples. I expect most people in the world will object to at least one of these arguments, but which one(s) they object to will vary wildly:

          1. Kantian Ethics: Replicators risk devaluing human labor by reducing work to a mere means, thereby undermining the inherent dignity derived from effort.

          2. Aristotelian Virtue Ethics: By eliminating the need for craftsmanship and effort, replicators could impair the cultivation of virtues essential for personal and communal flourishing.

          3. Marxist Ethics: The obsolescence of traditional labor due to replicators may intensify alienation and disrupt social structures central to class solidarity.

          4. Existentialism: Removing material struggle through replicators might strip life of the challenges necessary for authentic self-creation and personal meaning.

          5. Confucian Ethics: Such technology could erode the social harmony built on mutual effort and well-defined communal roles, destabilizing moral and familial bonds.

          6. Environmental Ethics: Unlimited production enabled by replicators may encourage overconsumption and waste, endangering ecological balance and sustainable resource management.

          7. Amish Ethics: Replicators could undermine the values of simplicity, humility, and communal labor by promoting dependence on technology instead of human effort and cooperation.

          8. Consequentialism: While replicators, as defined by your question, can solve world hunger, they're also demonstrably able to build weapons (as can current 3d printers), and can function as Von Neumann self-replicating machines. A literal minefield made of these appeared in DS9, and giving such tech to the world almost unavoidably means also giving such tech to every psychopath. Also grey-goo/paperclip scenarios become plausible.

          > Then why object to that for virtual goods?

          You can't eat virtual cheese, and unlike The Matrix if you die in a video game you don't die in real life, so the arguments for/against AI don't even need to be the same as those for/against Trek-replicators.

    • umvi6 days ago
      The software I open is usually a gift to humanity/public service. I'm not seeking to gain anything. Anyone can use it for anything - for profit or not.
      • diggan6 days ago
        Or put the way I usually say it, in completely normal conversations:

        > free of charge, to any person obtaining a copy of this software, to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software

        • anonym295 days ago
          If I had a nickel for every time I repeated this exact line, verbatim in casual conversation...
    • ninininino6 days ago
      I suppose there's a different angle, which is that the Open community can distill the privately trained models and then open the distilled model in turn, like many believe Deepseek did. In effect, letting private corps pay for expensive training (w/o paying the authors of the data they are training on, as you correctly point out), but then benefiting from their training labor/cost by copying it back to the open community and making it free again.
      • pona-a6 days ago
        That does make me optimistic. "Stealing back" our stolen data does in the end a free model make — unless the current, very... unprecedented US admin decides distributing unauthorized distilled models carries a prison sentence.

        But I think most of it is psychological. There used to be goodwill between the public and NLP researchers: what heartless monster would object to some linguists using the by-product of their conversations to make a computer learn that "king - man + woman = queen" or generate some unintentionally comedic writing?

        Now this honeymoon is over. You see that what you've been feeding your public life to is now a monster with a hundred vices and a few good deeds. It is behind the tidal wave of spam and misinfo, it is the oracle breeding ignorance among the gullible, it is the iron hand of censorship for many a police state, but most insulting of all, it is sold by its makers as a replacement for any genuine talent or minimal human effort.

        "Why learn to draw when you can have an AI produce a cheap imitation instead? Why learn math, CS, or foreign languages when you can delegate any and all thinking to the great machine? What did we even have you all for, anyway — intellectuals, artists, and craftsman — with your constant complaining and demands we learn a skill? Who do they think they are? Experts?"

        No, the future belongs to the lazy and talentless, to thieves and usurpers, who will sit at the top in an aristocratic, borderline catatonic brainlessness, while you will be on your knees, polishing their boots — since the machine to do so costs an order of magnitude more than your meager wage.

        It is anti-intellectualism in a form purified to industrial potency, directed at the very people by whose generosity their rather inept "replacements" were manufactured.

        I can't say what the rational response to all this is. I can tell you which emotional response seems most appealing.

    • treyd5 days ago
      Intellectual property isn't real. It's a fiction we constructed to try to control expression in order to allow extraction of profit from ideas. We had to keep exceptions like expiration and "fair use" to make it not absurd and obviously self-contradictory.

      All LLMs are doing is shuffling around the roles to bring light to an underlying contradiction. Yes they are profiting off of unpaid labor, but what that actually means is the models themselves should be "property" of everyone just as the training data should be.

      • golergka5 days ago
        > Intellectual property isn't real. It's a fiction we constructed

        Regardless of my opinion about IP in particular, argument "X isn't real, it's a fiction we constructed" is silly. We have "constructed" things like justice, duty, charity, mercy, and a lot of other social and moral constructs, and it's good that we did. They're also just as real as potential energy in physics: it's not a material object that you can see or touch, but it greatly affects what happens in reality.

        • treyd5 days ago
          Sure, but I would argue that those are concepts that have existed for millennia and have real material grounding in reality. Whereas intellectual property is entirely a fictional construction.

          At its core, owning property involves the ability to use force to assert your control over it. This is completely impossible with ideas (and information more broadly) since they're non-physical, so it's not really property in the way real world property like land is.

          So because it's not reflective of how the material world works, that's the heart of the contradiction I alluded to in my previous comment. There is no way to resolve the problem of LLMs from within the logical framework that doesn't lead to some further counterintuitive result. There has to be legislation around it if we want to keep the charade going, or ideally we'd want to drop it altogether.

          • immibis5 days ago
            If you share enough copies of a Disney movie, lots of men with big guns will come to your house and haul you away to a locked room. I fail to see how this isn't using force to control intellectual property, nor how it's impossible.
    • aaa_aaa5 days ago
      Unlike you, many people do not care whether what they think, write or utter is copied or used. Also, some believe intellectual property is not property. The real thieves are the ones who got phony monopoly grants to protect it.
      • spudlyo5 days ago
        I personally hold that intellectual property isn't property, and is increasingly becoming a net negative to humanity as a whole. I see AI as an accelerant in the erosion of IP's relevance and enforceability. With AI being able to crank out derivative works at scale, it blurs the lines between infringement and transformation. The flood of such content makes enforcement increasingly impractical.

        While I'm not unsympathetic to the plight of creatives, and their need to eat, I feel like the pendulum has swung so far to the interests of the copyright holders and away from the needs of the public that the bargain is no longer one I support.

    • zbobet20126 days ago
      It depends on how long you spend inside your C function. cgo has a substantial per-call overhead. I tend to prefer just writing ASM functions for critical-path code. You can use libraries like https://github.com/mmcloughlin/avo to make them easier to write/maintain.
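
      For anyone who hasn't seen avo: you write the generator itself in Go, and it emits the .s file plus a Go stub. The snippet below is essentially the introductory example from avo's README (so treat exact flags and identifiers as "check the docs", not gospel); you run it with something like: go run asm.go -out add_amd64.s -stubs stub.go

        //go:build ignore

        // asm.go: generates an assembly implementation of Add plus its Go declaration.
        package main

        import . "github.com/mmcloughlin/avo/build"

        func main() {
            TEXT("Add", NOSPLIT, "func(x, y uint64) uint64")
            Doc("Add returns x + y.")
            x := Load(Param("x"), GP64())
            y := Load(Param("y"), GP64())
            ADDQ(x, y)
            Store(y, ReturnIndex(0))
            RET()
            Generate()
        }
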
      • bborud5 days ago
        Have you tried writing Go assembler instead of x86?

        https://go.dev/doc/asm

        (I'm not suggesting, merely asking since I haven't written any assembler for Intel for 30+ years and I have never written Go assembler)

    • xyproto5 days ago
      I want my open source projects to be used for training LLMs. An LLM is like a bundle of knowledge. I want to have a tiny bit of influence on these bundles of knowledge, since they will be used by many people for many years to come. If anyone can benefit, even just a little bit, I am for it. If not for the benefit of others, what's the main point of open source software in the first place?
    • ForTheKidz5 days ago
      > We are witnessing the greatest theft of intellectual property in the history of Man.

      We did alright for millennia without it; we'll be ok

    • timewizard6 days ago
      > Why anyone opens any software anymore is a mystery to me.

      Cynically? To get noticed and get a job.

      > We are witnessing the greatest theft of intellectual property in the history of Man.

      What makes you think they weren't already doing it? It could be that LLMs masquerading as "AI" is actually just a risk reduction measure for already existing practices.

    • EigenLord5 days ago
      I see what you mean. Sometimes I intentionally divulge certain information in the hopes that I will influence the dominant LLM's outputs and thereby surreptitiously influence humanity. It's obviously a stretch but if you think about chaos theory there's a chance it might work out.
    • tw19845 days ago
      > Why anyone opens any software anymore is a mystery to me.

      I noticed the same and concluded that most of this open source software is no longer hardcore science or engineering stuff. Open source is becoming more like a marketing approach for 3rd-class companies to show off their reinvented wheels.

    • csdvrx6 days ago
      > Anonymous common property, sold by the LLM companies. No credit to the author.

      Yes, and?

      > Why anyone opens any software anymore is a mystery to me.

      Before LLM: to share nice things with other people who may like them

      After LLM: to share nice things with other people who may like them

      > We are witnessing the greatest theft of intellectual property in the history of Man.

      Yes, and?

      Before LLM, anyone could already take free software and copy-paste the code while stripping away the author's name or the license.

      There are more motivations than getting credit, or money.

      If what I create can ultimately have a positive impact, it does not matter whether the credit goes to the LLM, to me or anyone else.

      I would suggest you question and analyze your motivations.

      • throwaway0123_56 days ago
        > I would suggest you question and analyze your motivations.

        Full disclosure up front: I'm not anti-AI and think there are a lot of potential positive applications. I currently work on improving some aspects of AI.

        That said, it really isn't that hard to see why many people (including me to some extent) are nervous about AI getting better, and upset that the people who made the content that facilitated this are not benefiting from it.

        If you're a SWE and not already extremely stable financially, AI replacing software engineering (and if that happens, ~all white-collar labor) means that you'll likely be unable to achieve anything close to the economic status expected and planned on.

        > Before LLM, anyone could already take free software and copy-paste the code while stripping away the author's name or the license.

        Before LLM, people doing this posed ~0 existential threat to the livelihood of the author. AI doing it at a massive scale arguably does (remains to be seen, but at least plausible).

        I think most people want their work to have a positive impact, and aren't that concerned about credit. But they aren't confident that helping LLMs will lead to long-term positive impacts, and in any case are less worried about that than not being able to own a house or maybe even buy necessities because of the possible economic impacts of AI.

        Is this more of an indictment of our current social/economic system than of AI itself? Probably yes, but I'm not sure it matters.

        • csdvrx6 days ago
          > you'll likely be unable to achieve anything close to the economic status expected and planned on

          It may be a problem of expectations + loss aversion, as high income from software jobs is a historically "recent" phenomenon

          > AI doing it at a massive scale arguably does (remains to be seen, but at least plausible).

          I agree that scale can change many things, but the core argument is still someone fearing for their job

          > Is this more of an indictment of our current social/economic system than of AI itself? Probably yes, but I'm not sure it matters.

          I just want to better understand why so many people here have such a negative take on a technology that is new, exciting, and could bring so much to so many people!

          Thanks for your detailed reply: like you I believe it's just self-interest, and "It is difficult to get a man to understand something, when his salary depends on his not understanding it"

          • throwaway0123_56 days ago
            > high income from software jobs

            I don't think it is a worry about just software jobs though. Would people be nearly as concerned if some "Technology X" deleted all software jobs but left all other white-collar jobs intact? Probably not nearly as much: just retrain and go do some other job. I think the concern is that once an AI can replace SWEs, it is likely already at the level where in short order there will be no value to intellectual labor, no "thinking jobs" that can sustain a good life.

            So basically I don't think it is just self-interest. I think a lot of people see a plausible outcome being the destruction of the value of (at least intellectual) labor. If labor loses all value and society isn't prepared to entirely upend our economic system, most people could suffer greatly, to the extent where the positive and exciting benefits of AI don't really mean much. If someone believes that not making their software open-source can delay that, it isn't necessarily a selfish decision. If that is what you believe, you're delaying a negative outcome for a lot of people.

          • patrick4516 days ago
            Do most individuals who oppose AI do so out of "just" self-interest? Maybe. Not wanting to starve is inherently self-interest, but avoiding starvation is fundamental not just to human nature but to every living thing.

            AI also has the potential to cause unimaginable harm to the entire globe.

    • EGreg6 days ago
      Because we live in a society where the incentives are not conducive to collaboration.

      Imagine if everyone had a UBI, and you contributed fixes and improvements to a project because you thought it was cool. That's how, for centuries, professors with tenure rushed to publish scientific articles, for instance.

      In fact, in antiquity, we had the opposite problem to what you describe ... often things were credited to a certain leader of a school (e.g. Pythagoras) even if he didn't invent them.

      The problem is that people are thinking about how to make a profit, and it's ubiquitous because it's wrapped up with trying to survive, at all. By the time you make millions of dollars, you still have the PTSD from trying to outrun the bear, you keep going by inertia and making billions.

      Competition is far less healthy than collaboration. It's what causes the countries on earth to speed towards an AI apocalypse, for instance, or a fossil fuel apocalypse, etc. The few times they cooperated (e.g. the Montreal Protocol, or nonproliferation of chemical weapons) have been a resounding success. I doubt they'll be able to come together like that for AI, though!

    • sph6 days ago
      > But I will never "open" another piece of software, now that it's all grist for the LLM code generator industry.

      This is an interesting, albeit offtopic, discussion. My last few projects are still stored as private repos because I do not want them to be gobbled up by LLMs for junior, expendable and cheaper devs to replace me, especially when I am exploring novel problem-spaces. In fact, I do not care to be convinced otherwise, I am ideologically speaking completely opposed to any form of LLM or "AI".

      I daydream of a niche, nerdy network of nerds, outside the world wide web, to share my stuff with humans. Until then, I am not sure whether my projects should be open-source. The ones that will benefit the most are the idiotic machines and their operators.

      Should we resurrect Gopher for a few months, until they catch up?

      • hu36 days ago
        > My last few projects are still stored as private repos because I do not want them to be gobbled up by LLMs for junior, expendable and cheaper devs to replace me, especially when I am exploring novel problem-spaces.

        I'll try to be gentle, pardon me in advance.

        1) Your problem space is probably not novel enough to warrant such preciousness.

        2) "Junior, expendable and cheaper devs" don't compete with Senior+ because they don't know how to even ask the right questions. And they don't possess enough soft skills to navigate hard projects.

        3) But let's suppose that you do indeed have IP that's more valuable to humanity than the CRUD crap we are paid to spit out on a daily basis: we will all die, and it can happen anytime now, so why risk having your contribution die with you? Even if you set up posthumous access for other people, there's a risk no one cares enough to dig through your code. Heck, this is what PhDs are about, pushing humanity's knowledge ceiling just a bit higher. And even those that do barely get enough attention.

        4) Fighting AI is like trying to boil the ocean. We won't make a difference.

      • csdvrx6 days ago
        Your perspective is very strange, as I want to be replaced by cheaper devs - or ideally, even machines, which would free us humans to work on other problems that machines can't work on yet.

        > In fact, I do not care to be convinced otherwise, I am ideologically speaking completely opposed to any form of LLM or "AI".

        Then could you try to convince me of your argument?

        I don't see tech problems as worse or better than other problems like growing food, which we were fortunate to solve with industrialized farming and other technological breakthroughs like the Haber–Bosch process

        Explain to me why I should join your daydream!

        • sph6 days ago
          > which would free us humans to work on other problems

          We have heard this refrain since the start of the Industrial Revolution.

          • hnlmorg6 days ago
            And it’s been true. I don’t have to break my back tilling the earth to grow crops. Nor risk other kinds of life-changing injuries working in factories, down the mines, etc.

            Instead, I’ve got a comfortable job which requires using my brain. Which is an opportunity that someone from my social class wouldn’t have been granted even just 100 years ago.

            • sph6 days ago
              We were talking about machines freeing time so we can do other things, do not move the goalposts to fit your argument.

              Yes, I do not have to break my back 5 hours a day in the field; I only have to sit in an office for 8 hours, plus 2 hours of commuting, a day. Also make sure you check your Slack notifications at home. [1] I hope you enjoy your couple of hours on the weekend to read philosophy and paint landscapes.

              1: in fact I don't waste all my time working like most, but that makes me unhireable to 99% of companies that just want to squeeze every productive minute out of me.

              • hnlmorg6 days ago
                > We were talking about machines freeing time so we can do other things, do not move the goalposts to fit your argument.

                That’s a very uncharitable comment given “time” wasn’t mentioned once in either your comment nor the GPs.

                You might have read their comment as meaning “less work” (time) but for me it reads that they’re talking about “different and more interesting work”.

                Both interpretations are valid.

                I get a sense from your rant that you’re massively overworked. I do sympathise with you there, I honestly do. But that’s got nothing to do with AI.

              • ncruces6 days ago
                Let's all go back to doing the laundry by hand. Because life was better back then.
            • MyOutfitIsVague6 days ago
              The biggest use of professional AIs will be to offset the majority of big-brain jobs, possibly leaving physical, back-breaking labor as the majority of jobs left for humans to do at some point before long.
              • hnlmorg6 days ago
                We’ve long since proven that machines are better for back-breaking labor.

                I also disagree that AI will offset the majority of big brain jobs. What’s actually happening is AI is offsetting the majority of narrowly defined queries. The more open ended problems require an actual understanding of problem solving rather than a really clever text prediction engine.

                Some examples to illustrate my point:

                If you wanted to build a mobile app, you’d need to first:

                - understand the problem you’re trying to solve (is it a game? A medical app? A productivity app? Etc)

                - choose a frontend stack

                - create a design language

                - create wireframes

                - decide where to host the backend stack (eg self hosted? Or public cloud? If so, which?)

                - decide which cloud services to use (if public cloud)

                - decide on how to approach IaC

                - decide on the backend stack

                - decide on how to implement your CI/CD pipelines (or even if you want continuous delivery)

                - understand what privacy and security concerns you have and identify what risks you’re willing to accept

                - define testing strategies

                - define release cadences

                And so on and so forth.

                Writing the code is often actually the easiest part of software development because we already understand the problem by that point. The rest of the work is figuring out all questions that need to be answered — and I don’t even mean answering the questions, I mean understanding what questions to ask.

                AI can’t do that. GenAI requires human input and much of software development is actually figuring out what those inputs should be rather than generating that output.

                So that’s what I mean by “big brain jobs”. It’s not writing code, because that’s easy. It’s understanding and defining those requirements to begin with.

                • tuckerman6 days ago
                  > We’ve long since proven that machines are better for back-breaking labor.

                  This is contrary to my perception of current trends (and to a perhaps generous reading of Moravec's Paradox). Dexterous manipulation is still quite hard which is why a task like laundry folding is so impressive despite being relatively simple for many humans: https://www.physicalintelligence.company/blog/pi0

                  Perhaps we disagree on what "back breaking" entails but I'd much rather spend all day coding than folding laundry!

                  • hnlmorg5 days ago
                    Yeah I definitely wouldn’t class laundry folding as “back breaking”. But I agree it is manual labor that’s difficult for current robotics.

                    There have been some advancements in this specific field though. So we might see a mass-produced robot that can do this in our life time. If not for homes then certainly for industrial workflows (dry cleaning services, new workflows for clothes manufacturing, etc)

                • Philpax5 days ago
                  There's no reason AI couldn't do that too. It will continue moving up the cognitive ladder: there's nothing special about our ability to ask those questions.

                  For example, try asking Claude "what decisions do I have to make when building a mobile app?" It gave me a list that looked quite a lot like yours, which I then asked it to compact into a single paragraph:

                  > When building a mobile app, you'll need to decide between native or cross-platform development (React Native, Flutter); which platforms to prioritize; backend infrastructure; offline capabilities; data storage approach; UI/UX design system; monetization strategy (free, freemium, subscription); launch approach; ASO strategy; budget allocation across development, design, and marketing; timeline with clear milestones; team composition requirements; and legal compliance needs including privacy policy and relevant regulations like GDPR.

                  and of course, I can ask it to answer those questions for a given app. People have written about this, but the ratio of "input amplification" is only getting larger: in the future, you will be able to ask for a multiplayer 3D chess game with native Android and iOS clients with branching timelines, and it'll write every bit of code for you, while asking you about important decisions.

                  (See also Claude Code and similar agents.)

                  • hnlmorg5 days ago
                    Sure people have written about it but the LLM is only returning text that’s already been written with zero reasoning.

                    What solution architects, devops engineers, etc get paid for is their experience.

                    There is a saying that goes “the devil is in the detail” and that applies here. Every time someone says “I created X using AI” it’s because they’ve been able to feed the LLM pointed questions and have been able to understand when the LLM was rattling off nonsense.

                    And your counter argument there still falls back to the “writing every bit of code” which I’ve already said is the easy part of software development.

                    I’m sure at some point AI will be able to reason about problems but that’s when they shift from being specialised AI to AGI. And I don’t for one second believe we are as nearly as close to AGI as Sam Altman wants his investors to believe.

                    • Philpax5 days ago
                      I would encourage you to try out a recent reasoning model like Claude 3.7 Sonnet / o3-mini / R1. No, it's not perfect, but it can very much architect and design things at a higher level than you might think. This is already taken advantage of by Aider, which has a mode that splits the tasks: https://aider.chat/2024/09/26/architect.html

                      My point isn't that it's perfect today, but it's already further along the trajectory than one might think.

                      • hnlmorg5 days ago
                        I use LLMs for brainstorming all the time. It’s probably my favourite use of them. But I always fallback to the same problem: with an open ended question there’s a higher probability of hallucinations. So I still have to spend the time separating out the good responses from that rubbish.

                        You could liken it to a good solutions architecture vs a bad one.

                        A bad one will cost your business time, effort and money. A good one will not.

                        If I were to trust the LLMs output verbatim then I’d have made so many bad decisions. But I have the experience to identify what’s correct and what is not.

                        > No, it's not perfect, but it can very much architect and design things at a higher level

                        Exactly. I keep making the point that the real test of a competent engineer is understanding the details. And there are a lot of details which are highly context specific in any given software project.

                        Where LLMs will thrive in software projects is solving easy problems or problems that have already been solved a thousand times over (eg how to architect a blog) and solving highly specific and narrowly defined problems (eg write code that does X).

                        Maybe I’ve been blessed in my career, but most software projects I’ve worked on have brought something innovative and new to the table (and I don’t mean the “we are changing the world” meme that’s often cited by SV startups — I do literally mean something new to the industry). So LLMs would have a pretty poor success rate under those conditions.

      • pizzafeelsright6 days ago
        I long for your confidence in your own ingenuity. If you think private repos on third party systems are hidden from training I suppose you have more trust in expendable developers working to maximize training.

        At this point I am fairly convinced that software development is solved. The reality has not been understood by most as typing syntax is now a trivial task best left to AI.

        • sph6 days ago
          I have been migrating my private repos to a private Gitea instance of my own. I am well aware that anything on Github is used to train Copilot.

          > At this point I am fairly convinced that software development is solved.

          Writing code is 20% of software development, which is why I am still in demand even if I refuse to use LLM software. But employers and recruiters are not rational, and will soon rather hire cheap, expendable "monkeys on a typewriter" than experienced engineers.

        • imtringued6 days ago
          I wasted at least an hour today waiting for antivirus software to let my software start.

          No amount of AI will overcome the slowness of antivirus software.

          Also, about software development being "solved". I beg to differ.

          Sakana AI failed to reliably produce working CUDA kernels, and that was in a domain where the task was explicitly specified as PyTorch code, so it was, in a way, just a glorified syntax-typing demonstration.

      • immibis5 days ago
        I hate to break it to you, but GitHub is still looking at your private repos.
      • pona-a6 days ago
        Gemini was looking quite nice the last time I was there.
    • PaulHoule6 days ago
      Marshall McLuhan in the 1960s said that technology and culture are moving so fast that we are "driving by looking in the rear view mirror"

      Nothing says "I am slow on the draw" to me more than "all of a sudden I'm worried about getting ripped off by OpenAI", as the open source and web economies have long been recognized as exploitative

      (1) This guy http://www.seobook.com/blog has been talking about how the Google economy has been rigged since at least 2010, and I can say that I've lived it.

      (2) This cartoon https://xkcd.com/2347/ illustrates the hard time open source has sustaining itself. Open-source doesn't need to fund all the value-subtracting vultures that you need to sell enterprise software, but it struggles to scrape together just a few bucks for people who are capable of doing work on a shoestring.

      (3) Open source licenses have been getting tougher for database products in particular, because hosting a database like MySQL is a great business for the likes of AWS or Azure, which don't need to send a penny back to the creators, and the GPL's copyleft doesn't do anything about it.

      ---

      I'd say also as a creative person everything I'm exposed to becomes part of my work, particularly when I am receptive and "on the make"; I think of the video for Groove Armada's Superstylin' [1], which is all about a person seeing everything in the environment, finding inspiration, and using their talents to make other people's talents even greater. Don't laugh, but my son and I got a lot out of the anime 2.5 Dimensional Seduction [2] because it is all about people of different generations, under pressure, figuring out how to blend their talents to compete and cooperate. So much of what I consume nourishes me, becomes part of me, and becomes part of what I create, not any different from an LLM.

      [1] https://www.youtube.com/watch?v=_kE0pxRkMtQ

      [2] https://en.wikipedia.org/wiki/2.5_Dimensional_Seduction

    • shpongled5 days ago
      How did you learn how to program? Ever read any open source code?
    • muscomposter5 days ago
      culture works backwards from how you think it works

      the mystery to me is how people like yourself do not realize the scarcity your intentions lead towards

  • nikolayasdf1235 days ago
    Does not mention benchmarks. Go is unacceptably slow when it comes to math. With a complete absence of SIMD CPU instructions (aka "do it yourself in assembly") and GPU/CUDA support, Go is orders of magnitude slower than what you would get in C/C++/Rust, or even Python or Java (which call into C/C++).
  • stpedgwdgfhgdd6 days ago
    Great to see these algorithms in Go. Finally I can study them at the implementation level as opposed to reading blogs.
  • neonsunset6 days ago
    Just scalar code? I was hoping to see some Goasm here for acceptable performance (or you could rewrite it in F#/C# which provide appropriate SIMD primitives).

    edit: to answer my own question, when inspected with Ghidra, this implementation indeed compiles to very slow scalar code (operates on single fp64 values).

    • chrsig5 days ago
      i just hope for a sufficiently smart compiler shrug (i'm pretty sure go has some autovectorization)

      before jumping to another language, I suggest perhaps examining the memory layout and access patterns.

      • neonsunset5 days ago
        The code there is written in a fairly auto-vectorizeable way. But the actual capabilities of Go's compiler are very far away from this despite public expectation (and autovectorization is brittle, writing inference or training in a way that relies on it is the last thing you want). To put it in perspective, until 2021 Go was always passing the data on the stack on function calls. It has improved since then but the overall design aims to ensure common scenarios are fast (e.g. comparisons against string literals are unrolled) but once you venture outside that or if it's an optimization that requires more compiler complexity - Go is far less likely to employ it.
        • chrsig5 days ago
          > and autovectorization is brittle, writing inference or training in a way that relies on it is the last thing you want)

          I'm curious if you could speak more to this? Is the concern that operations may get reordered?

          > To put it in perspective, until 2021 Go was always passing the data on the stack on function calls. It has improved since then but the overall design aims to ensure common scenarios are fast (e.g. comparisons against string literals are unrolled) but once you venture outside that or if it's an optimization that requires more compiler complexity - Go is far less likely to employ it.

          I agree with this assessment.

          The individual operations in the repository (e.g., dot product) look like they could be autovectorized. I'm assuming they aren't because of the use of a slice. I'm mildly curious if it could be massaged into something autovectorized.

          Most of my observations re: autovectorization in go have been on fixed sized vectors and matrices where SSE2 instructions are pretty readily available and loop unrolling is pretty simple.

          I'm curious what it would produce with the matrix in a single slice rather than independent allocations. Not curious enough to start poking at it, just curious enough to ramble about it conversationally.
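
          To make the rambling concrete, the flattened layout I have in mind is something like this (made-up names, not code from the repository):

            package main

            import "fmt"

            // Matrix stored row-major in one contiguous slice, rather than a
            // slice of independently allocated row slices.
            type Matrix struct {
                Rows, Cols int
                Data       []float64 // len(Data) == Rows*Cols
            }

            // MatVec computes out = m * v. The hot loop walks contiguous memory,
            // which is friendlier to the cache (and to any vectorizer) than
            // chasing per-row pointers.
            func MatVec(m Matrix, v, out []float64) {
                for i := 0; i < m.Rows; i++ {
                    row := m.Data[i*m.Cols : (i+1)*m.Cols]
                    var s float64
                    for j, x := range row {
                        s += x * v[j]
                    }
                    out[i] = s
                }
            }

            func main() {
                m := Matrix{Rows: 2, Cols: 3, Data: []float64{1, 2, 3, 4, 5, 6}}
                v := []float64{1, 1, 1}
                out := make([]float64, m.Rows)
                MatVec(m, v, out)
                fmt.Println(out) // [6 15]
            }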

          • neonsunset5 days ago
            > The individual operations in the repository (e.g., dot product) look like they could be autovectorized. I'm assuming they aren't because of the use of a slice. I'm mildly curious if it could be massaged into something autovectorized.

            > Most of my observations re: autovectorization in go have been on fixed sized vectors and matrices where SSE2 instructions are pretty readily available and loop unrolling is pretty simple.

            Go does not have any form of autovectorization. The only way to access SIMD instructions in Go is through functions written in Goasm. Moreover, Go does not ship SIMD primitives in its math library, which would remove the need for auto-vectorization by providing inlineable functions implemented with SIMD instructions instead.

            > I'm curious if you could speak more to this? Is the concern that operations may get reordered?

            Autovectorization brittleness is a large topic. Analysis is expensive, vectorization may be impossible due to violating program order or observable side effects. In addition to that it often needs multiple expensive optimization phases coupled with complex compiler IR and back-ends to efficiently target multiple platforms which does not fit well with Go's compiler design (at least such is my amateur impression from looking at its source code).

            Go's compiler should not be treated as if it's in the same class as GCC or LLVM, because it is anything but: it is a grade below .NET's RyuJIT/ILC and OpenJDK's HotSpot, with design decisions and practices that make Go a somewhat easier optimization target than .NET CIL, which allows it to maintain relative parity on general-purpose code that is light on abstractions (if the code is heavy on those, Go starts to fall behind).

            • bboreham4 days ago
              Your message applies to one particular Go compiler from Google. But since you mention gcc and llvm, it is also possible to use them to compile Go. Each implementation has different trade-offs in quality of generated code, runtime and language features.
              • neonsunset4 days ago
                Okay, I heard this argument enough times to know it's unreasonable but feel free to prove me wrong :)

                We have this go-attention library which seems like a perfect candidate for an alternate compiler. How do I get Go compiled to reasonably good, autovectorized result here?

                • bboreham3 days ago
                  Compile your whole program with gogcc?
                  • neonsunset3 days ago
                    I know that both GCC and LLVM back-ends exist. Now, why do you think neither is used anywhere? (well, the LLVM one seems very new so it will need time regardless)

                    Also, it is not gogcc, it is gccgo. You may try to handwave this away, but the above is legitimate criticism of very real Go weaknesses, as observed on the example of this library.

                    • chrsig3 days ago
                      you're not wrong, but i don't think that you're presenting the argument in a way that is going to be well received.

                      hopefully both gccgo and any llvm backed implementations eventually mature to production grade. I think the thing that'll hold them back is that the toolchain is (by definition) completely different. 'go build .' is pretty nice.

                      the biggest value they bring is that unclear parts of the specification can be brought to light and clarified.

  • ein0p5 days ago
    Inadvisable, IMO. This is not going to perform well. There are bindings for llama.cpp, I'd use that if I had to do things in Go. And yes, I'm aware that it calls into icky and uncouth C++, but it will be way faster, especially if you have some sort of acceleration hardware.
  • truth_seeker5 days ago
    Without using SIMD CPU instructions, it's gonna be super expensive.

    Something like the viterin/vek or kelindar/simd packages could be helpful.
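
    For example, the hot inner loops are mostly dot products, so the replacement would look roughly like this (going from memory of vek's docs, so double-check the exact function name and signature):

      package main

      import (
          "fmt"

          "github.com/viterin/vek" // SIMD-accelerated slice math; Dot signature assumed from its docs
      )

      func main() {
          q := []float64{0.1, 0.2, 0.3, 0.4}
          k := []float64{0.5, 0.6, 0.7, 0.8}

          // Scalar baseline, the kind of loop the pure-Go implementation uses.
          var scalar float64
          for i := range q {
              scalar += q[i] * k[i]
          }

          // Assumed vek equivalent of the same dot product.
          simd := vek.Dot(q, k)

          fmt.Println(scalar, simd)
      }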