183 points by signa11 a day ago | 11 comments
  • LeonM a day ago
    From my experience, most businesses (or at least the developers working for them) actually would like to donate or pay for support on the OSS projects they rely on. The problem is that it is hard to do so due to legislation, compliance, etc.

    Example: I once convinced my employer to donate to some open source projects we relied on. They did, then a few months later they got slapped on the wrist by the authorities for not being able to prove where these overseas payments were going, and that they weren't being used to fund terrorist activities.

    Similarly, I used to contribute to an OSS project, and we did get asked by some corps to do paid work like bug fixes or features. The problem was that they required invoices in order for them to be allowed to pay us, so we needed to register as a company, get a tax number, etc. I was a freelancer at the time, so I offered to use my business registration to be able to invoice, then split the profit amongst the contributors. Then the very first paying 'customer' immediately hit us with a 20-page vendor assessment form asking about my SOC2 or ISO27001 certifications, data security policies, background checks of my 'employees', etc. Then my accountant warned me that distributing the payment amongst other people would be seen as disguised wages and could get me into serious legal trouble.

    Granted, this was some years ago, and things have gotten better now with initiatives such as GitHub Sponsors, Ko-fi and Patreon. But at the same time legislation has gotten more restrictive, and doing business with large corps is difficult, expensive and very time consuming. It's not worth it for most OSS maintainers, and similarly it isn't worth the legal headache for the large corps to make these kinds of donations.

    • illiac786 an hour ago
      I don’t see a good solution to this.

      What’s to prevent real terrorists from creating a repo with some fake or even functional piece of software and starting to funnel money this way?

      One of these cases where total surveillance seems tempting again…

    • spiffytech 19 hours ago
      patio11 has said something similar: the tax authorities were inclined to read "donation" as "owner's personal spending", and expected accounting and taxes to be handled accordingly.

      https://news.ycombinator.com/item?id=10863978

    • summarity 15 hours ago
      Sell the service via a merchant of record platform. They front the payment for you, and you’ll have wayyy fewer headaches
    • woooooo 18 hours ago
      Maybe we need to normalize bullshit consulting gigs then? There's plenty of those around anyways. "Optimized 100k HTTP requests per day".
      • ziml77 18 hours ago
        I was thinking some sort of classification as a charity, but getting paid as a consultant is probably a more realistic way of making it work since it can be done without any changes to the laws regarding charities.
  • angst a day ago
    > There is an increasing crowd of people who ask a large language model to "find a problem in curl, make it sound terrible", then send the result, which is never correct, to the project, thinking that they are somehow helping.

    Our worst nightmares are becoming true indeed..

    • slacktivism123 19 hours ago
      >thinking that they are somehow helping

      >Our worst nightmares are becoming true indeed

      Agree completely with you, but most of the time this isn't people being altruistic.

      It's people spraying bullshit at maintainers to try and score "CVE IDs as trophies" for their résumé or payouts from the vendor-backed Internet Bug Bounty (IBB) program on HackerOne.

      https://hackerone.com/ibb

      https://daniel.haxx.se/blog/2021/09/23/curl-joins-the-reborn...

    • blahgeek a day ago
      The worst nightmare would be the maintainers in turn using a large language model to review or apply these patches.
      • szszrk a day ago
        I already have some processes at work that are reviewed by AI only. Which means we are advised to use another AI to fill out the data quicker.

        It's nothing critical, but still both scary and hilarious at the same time. Shit on the input, shit on the output - nothing new, just fancier tools.

        Asimov's vision of history so tangled and noisy that no one really knows what is truth and what is a legend is happening in front of our own eyes. It didn't need millennia, just a few years of AI companies abusing our knowledge that was available for anyone for free.

      • the_biot 17 hours ago
        Not to one-up you, but my worst nightmare is an open source project where all the maintainers are LLM copy-pasters, with little clue to be had otherwise.

        And it's already happened, of course. A project I saw mentioned here on HN a while back seemed interesting, and it was exactly that kind of disaster. They started off as a fork of another project, so had a working codebase. But the project lead is a grade-A asshole who gets off on being grumpy to people, and considers any ideas not his to be ridiculous. Their kernel guy is an actual moron; his output is either (clearly) LLM output or just idiocies. Even the side contributors are 100% chatbot pasters.

      • signa11 a day ago
        and then have another one duke it out with the first one to reject the patch. that would be a nice llm-vs-llm, prompt-fight-prompt :o)
    • timeon a day ago
      This is getting more common. I've seen CVEs posted to several open source projects that included made-up APIs.
    • bgwalter a day ago
      The problem is that open source maintainers rarely react, because most projects are captured by some big tech employees. Independent authors like Stenberg are the exception.

      If the rebellious spirit of the 1990s and early 2000s still existed, open source could sink "AI" code laundromats within a month. But since 2010 everyone is falling over themselves to please big tech. Big Tech now rewards the cowards with layoffs and intimidation.

      Most developers do not understand that power balances in corporations work on a primal level. If you show fear/submission, managers will treat you like a beta dog. That is all they understand.

  • molticrystal a day ago
    The talk that was referred to in the article can be found here, just 13 minutes:

    Keynote: Giants, Standing on the Shoulders Of - Daniel Stenberg, Founder of the Curl Project

    https://www.youtube.com/watch?v=YEBBPj7pIKo

    While the article does a great job, the video's graphs and photos really bring a lot more depth.

  • umpalumpaa a day ago
    The Sovereign Tech Agency (German federal government) donated about 200k€ to the project. Not a brand though. https://en.m.wikipedia.org/wiki/Sovereign_Tech_Agency
  • kibwen a day ago
    Step 1: Set up a GoFundMe

    Step 2: Announce that, until the aforementioned GoFundMe reaches $10 million, all new commits to curl will be licensed under the AGPL.

    Step 3: Profit

    • ozgrakkurt a day ago
      Step 3: get forked and lose?
      • renmillar a day ago
        Step 4: it's someone else's problem, win
        • saghm 17 hours ago
          If they didn't want to keep maintaining it, they could skip to step 4 and just not maintain it in the first place. The problem is that they do want to, and it's not like they haven't done it successfully for years now. Between "put up with whatever bullshit comes your way" and "give up entirely" is a wide spectrum of "try to find ways to reduce the bullshit so you can focus on the important parts", and most of those ways probably don't boil down to a handful of pithy steps that could fit in an (original size) tweet.
          • renmillar 12 hours ago
            If someone competent wanted to take over my important work projects (deployment systems, core code maintenance, etc.), I'd gladly hand them over. I could orphan them right now claiming I need time for immediate tasks, but I don't want to dump unmaintained code on my team. I'd guess open source maintainers feel even more responsible since they see their work as community service. Maybe dropping the project is what's needed to trigger a well-funded fork or get corporate attention, similar to how Heartbleed affected OpenSSL.
  • rhdunn a day ago
    You can use LLMs as part of the process of identifying bugs, developing features, etc. but you must verify the results. Accepting what the LLM says without testing, checking, and verifying the output is lazy and likely to produce errors, or make the code harder to maintain, e.g. if what the LLM produces isn't in line with the project's development/formatting standards or changes other parts of the code.
    • tolmasky a day ago
      Generally speaking, the second you realize a technology/process/anything has a hard requirement that individuals independently exercise responsibility or self-control, with no obvious immediate gain for themselves, it is almost certain that said technology/process/anything is unsalvageable in its current form.

      This is in the general case. But with LLMs, the entire selling point is specifically offloading "reasoning" to them. That is quite literally what they are selling you. So with LLMs, you can swap out "almost certain" in the above rule to "absolutely certain without a shadow of a doubt". This isn't even a hypothetical as we have experimental evidence that LLMs cause people to think/reason less. So you are at best already starting at a deficit.

      But more importantly, this makes the entire premise of using LLMs make no sense (at least from a marketing perspective). What good is a thinking machine if I need to verify it? Especially when you are telling me that it will be a "super reasoning" machine soon. Do I need a human "super verifier" to match? In fact, that's not even a tomorrow problem, that is a today problem: LLMs are quite literally advertised to me as a "PhD in my pocket". I don't have a PhD. Most people would find the idea of me "verifying the work of human PhDs" to be quite silly, so how does it make any sense that I am in any way qualified to verify my robo-PhD? I pay for it precisely because it knows more than I do! Do I now need to hire a human PhD to verify my robo-PhD? Short of that, is it the case that only human PhDs are qualified to use robo-PhDs? In other words, should LLMs exclusively be used for things the operator already knows how to do? That seems weird. It's like a Magic 8 Ball that only answers questions you already know the answer to. Hilariously, you could even find someone reaching the conclusion of "well, sure, a curl expert should verify the patch I am submitting to curl. That's what submitting the patch accomplishes! The experts who work on curl will verify it! Who better to do it than them?". And now we've come full circle!

      To be clear, each of these questions has plenty of counter-points/workarounds/etc. The point is not to present some philosophical gotcha argument against LLM use. The point rather is to demonstrate the fundamental mismatch between the value-proposition of LLMs and their theoretical "correct use", and thus demonstrate why it is astronomically unlikely for them to ever be used correctly.

      • rhdunn a day ago
        I use coding LLMs as a mix of:

        1. a better autocomplete -- here the LLMs can make mistakes, but on balance I've found this useful, especially when constructing tests, writing output in a structured format, etc.;

        2. a better search/query tool -- I've found answers by being able to describe what I'm trying to do, whereas with a traditional search I have to know the right keywords to try. I can then go to the documentation or search if I need additional help/information;

        3. an assistant to bounce ideas off -- this can be useful when you are not familiar with the APIs or configuration. It still requires testing the code, seeing what works, seeing what doesn't work. Here, I treat it in the same way as reading a blog post on a topic, etc. -- the post may be outdated, may contain issues, or may not be quite what I want. However, it can have enough information for me to get the answer I need -- e.g. a particular method, which I can then check in the docs (such as documentation comments on the APIs), etc. Or it lets me know what to search for on Google, etc.

        In other words, I use LLMs as part of the process like with going to a search engine, stackoverflow, etc.

        • Sohcahtoa82 9 hours ago
          > a better autocomplete

          This is 100% what I use Github Copilot for.

          I type a function name and the AI already knows what I'm going to pass it. Sometimes I just type "somevar =" and it instantly correctly guesses the function, too, and even what I'm going to do with the data afterwards.

          I've had instances where I just type a comment with a sentence of what the code is about to do, and it'll put up 10 lines of code to do it, almost exactly matching what I was going to type.

          The vibe coders give AI-code generation a bad name. Is it perfect? Of course not. It gets it wrong at least half the time. But I'm skilled enough to know when it's wrong in nearly an instant.

      • sothatsit 19 hours ago
        GPT-5 Pro catches more bugs in my code than I do now. It is very very good.

        LLMs are pretty consistent about what types of tasks they are good at, and which they are bad at. That means people can learn when to use them, and when to avoid them. You really don't have to be so black-and-white about it. And if you are checking the LLM's results, you have nothing to worry about.

        Needing to verify the results does not negate the time savings either when verification is much quicker than doing a task from scratch.

        My code is definitely of higher quality now that I have GPT-5 Pro review all my changes, and then I review my code myself as well. It seems obvious to me that if you care, LLMs can help you produce better code. As always, it is only people who are lazy who suffer. If you care about producing great code, then LLMs are a brilliant tool to help you with just that, in less time, by helping with research, planning, and review.

        • tolmasky 18 hours ago
          This doesn't really address the point that is currently being argued I think, so much so that I think your comment is not even in contention with mine (perhaps you didn't intend it to be!). But for lack of a better term, you are describing a "closed experience". You are (to some approximation) assuming the burden of your choices here. You are applying the tool to your work, and thus are arguably "qualified" to both assess the applicability of the tool to the work, and to verify the results. Basically, the verification "scales" with your usage. Great.

          The problem that OP is presenting is that, unlike in your own use, the verification burden from this "open source" usage is not taken on by the "contributors", but instead "externalized" to maintainers. This does not result in the same "linear" experience you have; their experience is asymmetric, as they are now being flooded with a bunch of PRs that (at least currently) are harder to review than human submissions. Not to mention that, unlike your situation, they have no means to "choose" not to use LLMs if they for whatever reason discover it isn't a good fit for their project. If you see something isn't a good fit, boom, you can just say "OK, I guess LLMs aren't ready for this yet." That's not a power maintainers have. The PRs will keep coming as a function of the ease of creating them, not as a function of their utility. Thus the verification burden does not scale with the maintainer's usage. It scales with the sum of everyone who has decided they can ask an LLM to go "help" you. That number is both larger and out of their control.

          The main point of my comment was to say that this situation is not only to be expected, but IMO essential and inseparable from this kind of use, for reasons that actually follow directly from your post. When you are working on your own project, it is totally reasonable to treat the LLM operator as qualified to verify the LLMs outputs. But the opposite is true when you are applying it to someone else's project.

          > Needing to verify the results does not negate the time savings either when verification is much quicker than doing a task from scratch.

          This is of course only true because of your existing familiarity with the project you are working on. This is not a universal property of contributions. It is not "trivial" for me to verify a generated patch in a project I don't understand, for reasons ranging from things as simple as the fact that I have no idea what the code contribution guidelines are (who am I to know if I am even following the style guidelines) to things as complicated as the fact that I may not even be familiar with the programming language the project is written in.

          > And if you are checking the LLM's results, you have nothing to worry about.

          Precisely. This is the crux of the issue -- I am saying that in the contribution case, it's not even about whether you are checking the results, it's that you arguably can't meaningfully check the results (unless you of course essentially put in nearly the same amount of work as just writing it from scratch).

          It is tempting to say "But isn't this orthogonal to LLMs? Isn't this also the case with submitting PRs you created yourself?" No! It is qualitatively different. Anyone who has ever submitted a meaningful patch to a project they've never worked on before has had the experience of having to familiarize themselves with the relevant code in order to create said patch. The mere act of writing the fix organically "bootstraps" you into developing expertise in the code. You will if nothing else develop an opinion on the fix you chose to implement, and thus be capable of discussing it after you've submitted it. You, the PR submitter, will be worthwhile to engage with and thus invest time in. I am aware that we can trivially construct hypothetical systems where AI agents are participating in PR discussions and develop something akin to a long term "memory" or "opinion" -- but we can talk about that experience if and when it ever comes into being, because that is not the current lived experience of maintainers. It's just a deluge of low quality one-way spam. Even the corporations that are specifically trying to implement this experience just for their own internal processes are not particularly... what's a nice way to put this, "satisfying" to work with, and that is for a much more constrained environment, vs. "suggesting valuable fixes to any and all projects".

          • rhdunn 11 hours ago
            I'm not advocating that the verification should be on the maintainer. It should definitely be on the contributor/submitter to verify that what they are submitting is correct to the best of their abilities.

            This applies if the reporter found the bug themselves, used a static analysis tool like Coverity, used a fuzzing tool, used valgrind or similar, used an LLM, or some other mechanism to identify the issue.

            In each case the reporter needs to at a minimum check if what they found is actually an issue and ideally provide a reproducible test case ("this file causes the application to crash", etc.), logs if relevant, etc.

          • sothatsit 7 hours ago
            I was arguing against your dismissal of the value proposition of LLMs. I wasn't arguing about the case of open-source maintainers getting spammed by low-quality issues and PRs (where I think we agree on a lot of points).

            The way that you argued that the value proposition of LLMs makes no sense takes a really black-and-white view of modern AI. There are actually a lot of tasks where verification is easier than doing the task yourself, even in areas where you are not an expert. You just have to actually do the verification (which is the primary problem with open-source maintainers getting spammed by people who do not verify anything).

            For example, I have recently been writing a proxy for work, but I'm not that familiar with networking setups. But using LLMs, I've been able to get to a robust solution that will cover our use-cases. I didn't need to be an expert in networking. My experience in other areas of computer science combined with LLMs to help me research let me figure out how to get our proxy to work. Maybe there is some nuance I am missing, but I can verify that the proxy correctly gets the traffic and I can figure out where it needs to go, and that's enough to make progress.

            There is some academic purity lost in this process of using LLMs to extend the boundary of what you can accomplish. This has some pretty big negatives, such as allowing people with little experience to create incredibly insecure software. But I think there are a lot more cases where if you verify the results you get, and you don't try to extend too far past your knowledge, it gives people great leverage to do more. This is to say, you don't have to be an expert to use an LLM for a task. But it does help a lot to have some knowledge about related topics at least, to ground you. Therefore, I would say LLMs can greatly expand the scope of what you can do, and that is of great value (even if they don't help you do literally everything with a high likelihood of success).

            Additionally, coding agents like Claude Code are incredible at helping you get up-to-speed with how an existing codebase works. It is actually one of the most amazing use-cases for LLMs. It can read a huge amount of code and break it down for you so you can start figuring out where to start. This would be of huge help when trying to contribute to someone else's repository. LLMs can also help you with finding where to make a change, writing the patch, setting up a test environment to verify the patch, looking for project guidelines/styleguides to follow, helping you to review your patch against those guidelines, and helping you to write the git commit and PR description. There's so many areas where they can help in open-source contributions.

            The main problem in my eyes is people that come to a project and make a PR because they want the "cred" of contributing with the least possible effort, instead of because they have an actual bug/feature they want to fix/add to the project. The former is noise, but the latter always has at least one person who benefits (i.e., you).

      • olmo23 20 hours ago
        In my experience most of the work a programmer does just isn't very difficult. LLMs are perfectly fine for that.
      • actionfromafar 21 hours ago
        There’s some corollary here to self-driving cars which need constant baby-sitting.
    • const_cast 6 hours ago
      I agree, but take it one step further: if you're not deeply familiar with mature and/or complex software, assume you are wrong. Assume that however you verified it, that verification is incorrect.

      Curl is a very old piece of software that is incredibly reliable, built in a language that allows you to do evil things.

      If you find what looks like a bug or vulnerability, there is a good chance it's not. There's a good chance that because of specific circumstances that came to be 15 years ago, the code will never execute in such a way to cause a bug.

  • nwellnhof 20 hours ago
    If he's unhappy, why doesn't he step down? That's what I did as libxslt maintainer and what I'm about to do as maintainer of libxml2.
    • saghm 17 hours ago
      There are degrees of unhappiness. There's no reason to assume a binary of loving everything about work or hating it so much that they'd be happier without it.

      To be clear, this isn't intended as a criticism of your choices to step down from maintaining various projects; my point is that these choices are quite personal, and it's not always going to be obvious how to balance the tradeoffs. It's entirely possible that they are burnt out to the point that they might be happier in the long run if they did step down now, but it's also possible that they might be even more unhappy giving up something that they care a lot about and are willing to tolerate the less fun parts. I suspect you could provide a lot of valuable insight on these sorts of decisions given your experience, but even from having to make similar decisions for things with a tiny fraction of the exposure of the projects you've maintained, it's clear to me that there would need to be quite a lot more nuance in analyzing a situation like this, especially from the outside.

    • the_biot 17 hours ago
      I think he's just pointing out current problems, as he's done many times. He likes to give talks and publicity to his project, as its maintainer, and that includes this sort of "what's going on with the project" talk.

      I don't think he's unhappy. Frankly I think he's doing the right thing, and I say that as the founder of a project where I didn't do that sort of thing, but now realize I should have. This is what gets you contributors/donations/publicity.

    • aembleton 19 hours ago
      He's not unhappy
  • kamaal a day ago
    >>Companies tend to assume that somebody else is paying for the development of open-source software, so they do not have to contribute.

    I think if you are a billion dollar company using these tools, sponsoring maintenance isn't a lot to ask.

    Curiously enough this came up even during the days of Perl.

    I don't think Perl got its due, especially given the fact that until quite recently almost everything of importance was done with Perl. Heck, the internet was made possible because of Perl.

    • JoshTriplett a day ago
      > I think if you are a billion dollar company using these tools, sponsoring maintenance isn't a lot to ask.

      It isn't a lot to ask, but it's challenging to 1) find who to ask, and 2) get them to care about the long-term view in a way that doesn't fit into short-term thinking and budgeting.

    • matsemann a day ago
      Being the one car maker called out on a slide for having supported curl would be so cheap and would get them lots of attention.
      • akerl_ 20 hours ago
        Would it?

        How many car buyers are motivated by which company donates to an open source project?

        • lionkor 19 hours ago
          Me -- the crowd who buys old shitboxes usually.
    • bluGill a day ago
      I've often asked how my company could support them. Most I ask don't understand the question. Those that do only point out that I can contribute code changes - which I have, but rarely, as we pick good projects that meet our needs: there are rarely bugs or features we would care about enough to set aside our regular work.

      What would be nice is a non-profit that would take money and distribute it to the projects we use - likely with some legal checking that they are legal (whatever that means). The FSF is the only one I know of that does generic development, and they have ideas that companies generally oppose, and so they are out.

      • simonw a day ago
        A lot of open source maintainers are bad at asking for money, and most companies find it very hard to give money away without some kind of formal arrangement in place.

        Here's a way you can work around that, if you are someone who works for a company with money:

        Contact the maintainers of software you use and invite them to speak to your engineering team via Zoom in exchange for a speaking fee.

        Your company knows how to pay consultants. It likely also has an existing training budget you can tap into.

        You're not asking the maintainer to give a talk - those take time to prepare and require experience with public speaking.

        Instead, set it up as a Q&A or a fireside chat. Select someone from your own team who is good at facilitating / asking questions.

        Aim for an hour of time. Pay four figures.

        Then do the same thing once a month or so for other projects you depend on.

        I really like the idea of normalizing companies reaching out to maintainers and offering them a relatively easy hour-long remote consultation in exchange for a generous payment. I think this may be a discreet way to help funnel money into the pockets of people whose work a company depends on.

        • bruce511 a day ago
          This is very creative, and I suspect would work.

          It does have the side effect of wasting the time of 1+n engineers for that hour. I might be able to rustle up a few in month 1, but I'm not going to be able to do it monthly.

          Frankly, as long as the builder has a "support contract" option, that should be sufficient.

          I will add that understanding how business works is a huge help in getting paid. I advocated for supporting a project (they have "sponsored by" marketing on their web page, so we could take it out of the marketing budget). But they could only be paid via PayPal (which unfortunately we can't do), so the deal fell through.

          It didn't help that the home page in question contained a lot of sarcasm, and was antagonistic in tone, likely (I suspect) because of the nonsense the maintainer had to wade through. Ultimately no money got sent.

          I'm happy to support OSS, but I can only spend so much social capital on doing so. My advice to maintainers, if you want sponsorship, put some effort into making that channel professional. It really helps.

      • JoshTriplett a day ago
        Many projects have foundations or fiscal sponsors you can work with.

        If you care about Python, you could support the Python Foundation, and/or hire or sponsor some Python developers. If you care about Rust, support the Rust Foundation, and/or hire or sponsor some Rust developers. If you care about Reproducible Builds, or QEMU, or Git, or Inkscape, or the future of FOSS licensing, or various other projects (https://sfconservancy.org/projects/current/), support Software Freedom Conservancy.

        If you care about a smaller project, and they don't have a means of sponsorship, you could encourage them to accept sponsorship via some means, or join some fiscal sponsor umbrella like Conservancy.

        • sinner a day ago
          Another such umbrella organization is Software in the Public Interest (SPI). Some of the more notable projects they sponsor include Arch Linux, Debian, FFmpeg, LibreOffice, OpenSSL, OpenZFS, PostgreSQL, and systemd.

          Homepage: https://www.spi-inc.org/

      • tleb_ 18 hours ago
        The Linux Foundation (LF) is sort of this. A non-profit aimed at corporate members to sponsor work on many open-source projects (~900).
      • keithnz a day ago
        I'd like it if GitHub had a billing option that made an automatic donation to the open source projects in the active repos.
  • dcsommer a day ago
    It would be cool to build a "library clout" measure for all open source software. First, collect measures of usage for all deployed software systems, per platform and along other interesting dimensions, like how each system relates to others (is it a common dependency or platform for other deployed software?). Use this to generate "clout" at the deployed-software level. Then detect all open source libraries compiled into each system, by binary signature matching or through the software's own build system if it is open. Then a library's "clout" is built from the clout of the projects that use it.

    This clout score might be used to guide a non-profit's investments in funding critical OSS. Data collection would be challenging though, as would calibrating need; a rough sketch of the propagation step is included below.

    Basically make a rigorous score to track some of the intuition from https://xkcd.com/2347/
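    A minimal sketch of that propagation idea in Python (the system names, dependency lists, seed scores and damping factor below are all made-up placeholders, not real data):

      # Sketch: propagate usage-based "clout" from deployed systems
      # down through their dependency graph, PageRank-style.
      from collections import defaultdict

      # Seed clout for deployed systems, e.g. normalized usage per platform.
      system_clout = {
          "web-frontend": 50.0,
          "payments-service": 30.0,
          "ci-pipeline": 20.0,
      }

      # Direct dependencies, discovered via build metadata or binary signatures.
      deps = {
          "web-frontend": ["curl", "openssl", "zlib"],
          "payments-service": ["curl", "openssl"],
          "ci-pipeline": ["curl"],
          "curl": ["openssl", "zlib"],
      }

      def propagate_clout(seed, deps, rounds=10, damping=0.85):
          """Spread each node's clout across its dependencies for a few
          rounds, so transitive dependencies get credit too."""
          clout = defaultdict(float, seed)
          for _ in range(rounds):
              new = defaultdict(float, seed)  # re-apply the seed every round
              for node, score in clout.items():
                  targets = deps.get(node, [])
                  if not targets:
                      continue
                  share = damping * score / len(targets)
                  for dep in targets:
                      new[dep] += share
              clout = new
          return dict(clout)

      scores = propagate_clout(system_clout, deps)
      for lib in ("curl", "openssl", "zlib"):
          print(lib, round(scores[lib], 1))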

  • nurettin a day ago
    Just have a policy of firing these "security researchers" whenever they submit AI generated BS to curl.
    • dmurray a day ago
      Fire them from where? Their undergraduate studies at IIT Hyderabad? Daniel doesn't have the authority to do that.
      • nurettin a day ago
        There are indeed missing steps, and I won't claim to reinvent society in one HN thread, /but/ I am just saying that there need to be real-life consequences for being a shitty student. Is that email coming from an EDU address?

        Submit a complaint to the university. Just make an email template and fight back. It takes a minute to find the student affairs or dean's email. Surely there will be one person in the entire institution who cares.

  • positron26 a day ago
    Every day, if I read HN, I find reasons to just go back to working on PrizeForge
    • signa11 a day ago
      don't mind if you do guv, don't mind at all.