357 points by healsdata | 3 days ago | 34 comments
  • dang3 days ago
    Two claims are being made here, one boring and one lurid.

    The boring claim is that the company inflated its sales through a round-tripping scheme: https://www.bloomberg.com/news/articles/2025-05-30/builder-a... (https://archive.ph/1oyOw). That's consistent with other recent reporting (e.g. https://news.ycombinator.com/item?id=44080640)

    The lurid claim is that the company's AI product was actually "Indians pretending to be bots". From skimming the OP and https://timesofindia.indiatimes.com/technology/tech-news/how..., the only citation seems to be this self-promotional LinkedIn post: https://www.linkedin.com/feed/update/urn:li:activity:7334521... (https://web.archive.org/web/20250602211336/https://www.linke...).

    Does anybody know of other evidence? If not, then it looks bogus, a case of "il faudrait l'inventer" ("it would have to be invented") which got traction by piggybacking on an old-fashioned fraud story.

    To sum up: the substantiated claim is boring and the lurid claim is unsubstantiated. When have we ever seen that before? And why did I waste half an hour on this?

    (Thanks to rafram and sva_ for the links in https://news.ycombinator.com/item?id=44172409 and https://news.ycombinator.com/item?id=44175373.)

    • kamikazechaser2 days ago
      There are personal testimonials in the indiandevelopers subreddit from quite a while ago, if those are to be believed.
      • pyman a day ago
        The news about BuilderAI using 700 devs instead of AI is false. Here's why.

        I've seen a lot of posts coming out of India claiming "we were the AI". So I looked into it to see if Builder AI was lying, or if this was just a case of unpaid developers from India spreading rumours after the company went bust.

        Here's what some of the devs are saying:

        > "We were the AI. They hired 700 of us to build the apps"

        Sounds shocking, but it doesn't hold up.

        The problem is, BuilderAI never said development was done using AI. Quite the opposite. Their own website explains that a virtual assistant called "Natasha" assigns a human developer to your project. That developer then customises the code. They even use facial recognition to verify it's the same person doing the work.

        > "Natasha recommends the best suited developer for your app project, who then customises your code on our virtual desktop. We also use facial recognition to check that the developer working on your code is the same one Natasha picked."

        Source: https://www.builder.ai/how-it-works

        I also checked the Wayback Machine. No changes were made to that site after the scandal. Which means: yes, those 700 developers were probably building apps, but no, they weren't "the AI". Because the company never claimed the apps were built by AI to begin with.

        Verdict: FAKE NEWS

        • bni2 hours ago
          So "AI" in BuilderAI actually stands for "An Indian"?
          • pyman6 minutes ago
            Well, it's an India-based company. If you check LinkedIn, most of the employees are based in India.

            The issue here is that many people think AI only means LLMs, Transformers, or GenAI. But AI has been around for decades and includes machine learning, deep learning, and neural networks.

            So anyone using ML is free to register a .ai domain. There’s nothing wrong with that.

            The problem would be if you told customers that your virtual assistant, in this case "Natasha", was creating the code instead of humans.

            But that's not what happened here. The company went broke because it was reporting false sales figures.

    • redeyedtreefrog3 hours ago
      I am a former employee from a few years ago. I didn't stay very long at all though.

      In 2019 its former Chief Business Officer sued them for fraud, claiming that apps were built by Indian developers despite the claim that it was "80%" done by AI. There were detailed articles in the Wall Street Journal and The Verge at the time. I've found one reference saying it was settled out of court (the Telegraph), though I thought I'd previously read the case was dismissed.

      When I was there 3 years later:

      - the company had been renamed from Engineer.ai to Builder.ai

      - the marketing materials still heavily pushed the claim that apps were 80% built by AI, curiously the exact same figure as it had been 3 years earlier

      - there was a bunch of automation around small parts of the software development process. When customers went to create a new project, there was an AI chatbot assistant (Natasha) which, among other things, asked the customer for their requirements and created some estimates for how long things would take. There was also some automation for turning UI mockups into CSS styles and then merging those styles with templated React components (a rough sketch of this kind of style-merging appears after this list). These various small bits of automation did have teams of real engineers working on them in the UK and the US. However, by and large it didn't really work. It seemed to me that the Indian outsourced programmers working on client projects totally ignored this technology anyway, and just went about their jobs as though it didn't exist. Despite being employed to work on this tech, I never had any interaction with the Indian developers building real client projects.

      - a new team was spun up to use Generative AI to create frontend mockups of an application from template components. This was integrated into the flow when potential clients were chatting with the Natasha AI chatbot. Some people worked genuinely hard on this project, and to some extent it did function. However, it didn't do much beyond the many other "create a frontend mockup from a single prompt" projects out there, other than having access to the company's internal React component templates. As far as I'm aware these frontend mockups were never used by the Indian developers who built the final projects.
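
      To make the style-merging bullet concrete: here is a purely illustrative Python sketch of that kind of pipeline step, where style tokens extracted from a mockup get substituted into a templated component. Every name and value below is made up; this is my sketch of the general pattern, not Builder.ai's actual code.

          # Illustrative only: merge mockup-derived style tokens into a
          # templated React component via plain string substitution.
          from string import Template

          # Hypothetical tokens a mockup-analysis step might emit
          tokens = {
              "primary_color": "#1a73e8",
              "font_family": "Inter, sans-serif",
              "radius": "8px",
          }

          component = Template("""
          const styles = {
            button: {
              backgroundColor: "$primary_color",
              fontFamily: "$font_family",
              borderRadius: "$radius",
            },
          };

          export const PrimaryButton = ({ label }) => (
            <button style={styles.button}>{label}</button>
          );
          """)

          print(component.substitute(tokens))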

      In the tech world there is a very wide blurred zone between "outright fraud, which leads to a conviction in court" and "exaggerated claims". Given the extent to which Builder.ai committed literal fraud with their revenue figures and accounting, and given the hundreds of millions of dollars they raised on very limited real sales, I believe their claims about AI could plausibly also be literal fraud. However, the standards of a court case are much higher than my personal use of the word "fraud". I'm tempted to believe Musk is also bordering on fraud with e.g. the Boring Company, or his repeated claims about how close Tesla is to fully automated self-driving robotaxis. But given Tesla is building a genuine product with genuinely high sales and revenue, and given how much other crazy stuff Musk does that makes these exaggerated claims pale in comparison, it's not something that's ever going to lead to a court case.

    • pyman3 days ago
      I couldn't find any reference on the BuilderAI website claiming they use GenAI to build software. So the second claim lacks evidence.

      Update: They mention AI to assemble features, not to generate code. So it's impossible to know whether they were actually using ML (traditional AI) to resolve dependencies and pull packages from a repo.

    • YeGoblynQueenne6 hours ago
      The following is from a link given by nikcub 12 days ago (https://news.ycombinator.com/item?id=44080640):

      Engineer.ai says its “human-assisted AI” allows anyone to create a mobile app by clicking through a menu on its website. Users can then choose existing apps similar to their idea, such as Uber’s or Facebook’s. Then Engineer.ai creates the app largely automatically, it says, making the process cheaper and quicker than conventional app development.

      “We’ve built software and an AI called Natasha that allows anyone to build custom software like ordering pizza,” Engineer.ai founder Sachin Dev Duggal said in an onstage interview in India last year. Since much of the code underpinning popular apps is similar, the company’s “human-assisted AI” can help assemble new ones automatically, he said. Roughly 82% of an app the company had recently developed “was built autonomously, in the first hour” by Engineer.ai’s technology, Mr. Duggal said at the time.

      Documents reviewed by The Wall Street Journal and several people familiar with the company’s operations, including current and former staff, suggest Engineer.ai doesn’t use AI to assemble code for apps as it claims. They indicated that the company relies on human engineers in India and elsewhere to do most of that work, and that its AI claims are inflated even in light of the fake-it-till-you-make-it mentality common among tech startups.

      Original link (by nikcub):

      https://www.wsj.com/articles/ai-startup-boom-raises-question...

      Archive:

      https://archive.ph/R3nMZ

      Note the article is from 2019. "Engineer.ai" is the same company as "Builder.ai".

      To summarise, my reading of the article is that the founder of Builder.ai (at the time "Engineer.ai") promoted the company's technology as mostly AI, assisted by a few humans; and that WSJ saw documents suggesting otherwise.

      dang, why do you say the claim is "lurid"? Is it because of the racist undertones of "Indians not AI"? That's fair, there's severe racism against Indian coders in the West, but scams and fraud absolutely happen, and disgust is inevitable when they are revealed. There has to be a more balanced stance than dismissing all fraud claims as "lurid".

      • dang an hour ago
        I called it lurid because it's sensational and yes, because of the implicit slur.

        (Not the most precise use of the word lurid because it lacks the ghastly/uncanny quality - https://www.etymonline.com/search?q=lurid, but I couldn't think of a better one.)

      • pyman5 hours ago
        Here we go again. We're amplifying accusations from pseudo blogs with little credibility, and spreading rumours in a forum where most of us are AI/ML engineers, researchers, founders, and university professors. We should know better.

        I read their site and blog, and they have a lot of screenshots of their internal apps. It can't be fake! One of them shows templates of well known sites. Based on what I read, you choose a template, Natasha or something else handles the assembly, which I assume is just a fancy way of saying it checks out a repo and installs dependencies. Then the Indian programmers do the rest. This is clearly explained on their website.

        Guys, take a look at their blog or website. There's plenty of information about their apps, which were reviewed by Microsoft before they invested 250 million.

        Here's the dashboard where you choose an app: https://www.builder.ai/images/Choose.jpg

        And here's the project progress dashboard showing how long it takes to build. In this case, it's 7 months. Clearly, there’s no GenAI involved if it takes that long:

        https://www.builder.ai/images/builder-studio-project-progres...

        Indian programmers and mathematicians are incredibly talented. In fact, an Indian-origin researcher co-authored the original Transformer paper, which led to the rise of GenAI. The world chess champion is also from India. So let's stop mocking them. Companies like Google, Microsoft, Apple, Infosys and BuilderAI employ thousands of Indian programmers who are in the top one percent.

        The founder of BuilderAI is also from India and was named Entrepreneur of the Year by Ernst & Young in 2024. He hired directors from Microsoft and Amazon. The Head of AI was a former AI Director at Amazon.

        You need to stick to the facts. Their website looks legitimate and makes no mention of GenAI.

        Verdict: FAKE NEWS

        (Note: Indian devs are amazing)

    • davidgerard6 hours ago
      This ad for Builder directly claims "Natasha" - the secret sauce that Builder hoped to sell to Microsoft - is an "AI":

      https://www.youtube.com/watch?v=D36ZmJRYGK8

      Engineers in the dev office say that's false and "Natasha" was a running joke in the office:

      https://techfundingnews.com/fake-it-till-you-unicorn-builder...

      • pyman5 hours ago
        There's no evidence for that. It's very easy to write in a blog post "they told me" or "an engineer said".

        Why don't they contact the Head of AI, Craig Saunders (ex-Director of Amazon AI), and ask him directly? This was never done, which raises serious doubts about the credibility of the person who wrote it.

        We need to stick to the facts, especially now that pseudo-journalists are flooding the web with fake news.

        Let's keep this site reliable, please.

        I read their site and blog, and they have a lot of screenshots of their internal apps.

        Here's the dashboard where you choose an app: https://www.builder.ai/images/Choose.jpg

        And here's the project progress dashboard showing how long it takes to build. In this case, it's 7 months. Clearly, there's no GenAI involved if it takes that long:

        https://www.builder.ai/images/builder-studio-project-progres...

        They also show in the menu things like "Releases" and "Meet the squad" (I'm assuming the devs). It can't be fake! One of them shows templates of well known sites. Based on what I read, you choose a template, Natasha or something else handles the assembly, which I assume is just a fancy way of saying it checks out a repo and installs dependencies. Then the Indian programmers do the rest. This is clearly explained on their website.

        Take a look at their blog. There's plenty of information about their apps, which were reviewed by Microsoft before they invested 250 million.

        Verdict: FAKE NEWS

        • davidgerard3 hours ago
          Literally a news story where they spoke to the engineers.

          Verdict: stop shilling.

    • ivape3 days ago
      Speculating, don't they offer dev services that are supposed to be done by AI? If the dev services were offered by devs, then that would be the scam. Now that I've said the second part, it does seem lurid, because who the hell is paying for AI-first code deliverables?

      ---

      Message to HN:

      Instead of founding yet another startup, please build the next Tech Vice News and fucking go to the far corners of the tech world like Shane Smith did with North Korea with a camera. I promise to be a founding subscriber at whatever price you got.

      Things you’ll need:

      1) Credentialed Ivy League grad. Make sure they are sporadic like that WeWork asshole.

      2) Ex VC who exudes wealth with every footstep he/she takes

      3) The camera

      4) And as HBO Silicon Valley suggests, the exact same combination of white guy, Indian guy, Chinese guy to flesh out the rest of the team.

      See, I need to know what it's like working for a scrum master at Tencent, for example, during crunch time. Also, whatever the fuck goes on inside a DeFi company in executive meetings. And of course, find the next Builder.ai, or at least the Microsoft funding round discussions. We've yet to even get a camera inside those Arab money meetings where Sam Altman begs for a trillion dollars. We shouldn't live without such journalism.

      • pyman2 days ago
        The short answer is no, their website doesn't claim that development is done using AI.

        My gut feeling is that a lot of people, including developers, are posting hate messages and spreading fake news because of their fear of AI, which they see as a threat to their jobs.

        If you look at their website, builder.ai, they tell customers that their virtual assistant, "Natasha", assigns a developer (I assume from India):

        > Natasha recommends the best suited developer for your app project, who then customises your code on our virtual desktop. We also use facial recognition to check that the developer working on your code is the same one Natasha picked.

        Source: https://www.builder.ai/how-it-works

        They also have another page explaining how they use deep learning and transformers for speech-to-text processing. They list a bunch of libraries like MetaPath2Vec, Node2Vec, GraphSage, and Flair:

        Source: https://www.builder.ai/under-the-hood

        It sounds impressive, but listing libraries doesn't prove they built an actual LLM.
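
        For context, the libraries listed there are off-the-shelf open-source tools rather than bespoke models; anyone can run them in a few lines. A minimal sketch of standard usage (nothing Builder.ai-specific here, and the example inputs are made up):

            # Standard public-API usage of two of the listed libraries.
            import networkx as nx
            from node2vec import Node2Vec            # off-the-shelf graph embeddings
            from flair.data import Sentence
            from flair.models import SequenceTagger  # downloads a pre-trained tagger

            # Node2Vec: embed the nodes of a toy random graph
            graph = nx.fast_gnp_random_graph(n=100, p=0.1)
            embeddings = Node2Vec(graph, dimensions=64, walk_length=10,
                                  num_walks=50).fit()

            # Flair: tag named entities with a pre-trained NER model
            tagger = SequenceTagger.load("ner")
            sentence = Sentence("Natasha assigns a developer to your project.")
            tagger.predict(sentence)
            print(sentence.get_spans("ner"))

        Putting names like these on a marketing page is a long way from having built a foundation model.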

        So, the questions that remain unanswered are:

        1. Did Craig Saunders, the Head of AI at Builder.ai (and ex-Director of AI at Amazon), ever show investors or clients a working demo of Natasha, or a product roadmap? How do we know Natasha was actually an LLM and not just someone sitting in a call centre in India?

        2. Was there a technical team behind Saunders capable of building such a model?

        3. Was the goal really to build a domain-specific foundation model, or was that just a narrative to attract investment?

        Having said that, the company went into insolvency because the CEO and CFO were misleading investors by significantly inflating sales figures through questionable financial practices. According to the Financial Times, BuilderAI reportedly engaged in "round-tripping" with VerSe Innovation. This raised red flags for investors, regulators and prosecutors, and led to bankruptcy proceedings.

  • paxys3 days ago
    > Less than two months ago, Builder.ai admitted to revising down core sales numbers and engaging auditors to inspect its financials for the past two years. This came amidst concerns from former employees who suggested sales performance had been inflated during prior investor briefings.

    I was hoping for something interesting, but it is just plain old-fashioned accounting fraud.

  • rafram3 days ago
    • pyman3 days ago
      This is fake news. Builder.ai, like any other dev shop, had clients and was building apps using developers in India, pretty much like Infosys or any other Indian dev shop. Nothing wrong with that.

      From what I read online, the real issue was "Natasha", their virtual assistant powered by a dedicated foundation model. They ran out of money before it got anywhere.

      • profsummergig3 days ago
        This is so obviously fake news that it's a good litmus test of the people who are boosting it.

        There's no way that a team of programmers can ever produce code quickly enough to mimic anything close to the response time of a coding LLM.

        • threeseed3 days ago
          But it’s not just about coding quickly but also correctly.

          Coding LLMs do not solve the problems of hallucination, of using antiquated libraries and technologies, or of screwing up large codebases because of limited context size.

          Given a well architected component library and set of modules I would bet that on average I could build a correct website faster.

          • pyman3 days ago
            I did a bit of research…

            Builder.ai didn't tell investors they were competing with GitHub Copilot, Cody, or CodeWhisperer. Those are code assistants for developers. They told investors they were building a virtual assistant for customers. This assistant was meant to "talk" to clients, gather requirements and automate parts of the build process. Very different space.

            And like I said in another comment, creating a dedicated, pre-trained foundation model is expensive. Not to mention a full LLM.

            Questions:

            1. Did Craig Saunders, the VP of AI (and ex-Amazon), ever show investors or clients any working demo of Natasha? Or a product roadmap?

            2. Was there a technical team behind Saunders capable of building such a model?

            3. Was the goal really to build a domain-specific foundation model, or was that just a narrative to attract investment?

            • daveguy3 days ago
              > creating a dedicated, pre-trained foundation model is expensive. Not to mention a full LLM.

              Creating a dedicated pretrained model is a prerequisite of any LLM. What do you mean by "full LLM"?

              • pyman3 days ago
                Just to clarify: I said "pre-trained foundation model".

                LLMs are a type of foundation model, but not all foundation models are LLMs. What Builder.ai was building with Natasha sounded more like a domain-specific assistant, not a general-purpose LLM.

      • bartread3 days ago
        > This is fake news. Builder.ai, like any other dev shop, had clients and was building apps using developers in India, pretty much like Infosys or any other Indian dev shop. Nothing wrong with that.

        Yeeeah... that's a fairly disingenuous take.

        The difference between every other offshore dev shop backed by developers in India and Builder.ai is that - and I say this as someone who thinks Infosys is a shit company - Infosys and all those other dev shops are at least up front about how their business works and where, and by whom, your app will be built. Whereas Builder.ai spent quite a long time pretending they had AI doing the work when actually it was a lot of devs in India.

        That is deliberately misleading and it is not OK. It's fraudulent. It's literally what Theranos did with their Edison machines that never worked: they claimed they had this wondrous new blood-testing technology, when they were actually running tests on Siemens machines, diluting blood samples, etc. The consequences of Theranos's actions were much more serious (misdiagnoses and, indeed, missed diagnoses for thousands of patients) than apps built by humans rather than AI, but lying and fraud is lying and fraud.

        • pyman3 days ago
          I don't agree. Even Infosys markets AI as part of their offering, just look at their "AI for Infrastructure" pitch:

          https://www.infosys.com/services/cloud-cobalt/offerings/ai-i...

          Every big dev shop does this. Overselling tech happens all the time in this space. The line between marketing and misleading isn't always so clear. The difference is Builder.ai pushed the AI angle harder, but that doesn't make it Theranos-level fraud.

          • aylmao3 days ago
            > The line between marketing and misleading isn't always so clear.

            In general I kind of disagree with this. I am not a lawyer, so I don't know all the details, but if you look for it, you should be able to find the line, since it's generally illegal to mislead customers. There's also a whole set of contractual and perhaps even legal obligations when it comes to investors.

            For contracts and the law to be enforceable, they need to draw lines as clearly as possible. There's always some amount of detail that's up to interpretation, but companies pay legal counsel to make sure they don't cross these lines.

            Now, specifically in this case, I do agree with you. This case doesn't seem to be a legal matter of misleading customers or investors (thus far). Viola Credit did seize $37 million, so IMO there clearly was a violation of contract in all this, but it seems like that had nothing to do with the whole AI overselling.

          • mistercheph3 days ago
            Arguably, Theranos was also somewhere in a gray area between marketing and fraud.

            Everyone in the industry incentivizes and participates in this behavior, but once in a while, let's grab a few stand-out individuals to scapegoat for all the harm caused by/to the entire group with this behavior. Make sure you pick someone big/ugly enough to be credibly dangerous to the whole group, but who isn't too dangerous and well connected, so that you can be sure that when the card flips on them everyone around them scatters.

            It's the same reason groups of individual humans do it: Scapegoating is a much lower resistance path to follow than the horrifying alternative (self-consciousness, reflection, love)

            • lotsofpulp3 days ago
              >Arguably, Theranos was also somewhere in a gray area between marketing and fraud.

              Theranos was clear fraud. She claimed scientific advances that did not exist.

              • mistercheph3 days ago
                What about traditional auto manufacturers making claims about solid-state battery technology they say they'll achieve in the next decade but haven't yet?

                There are always unsolved engineering and scientific challenges that stand between today and future product, and nothing is guaranteed, but you have to sell investors on the future technology (see: frontier model makers pushing AGI/ASI hype)

                Obviously there are differences between Toyota's SS battery claims and Theranos' claims, but it's not a black and white line, it's a spectrum.

                • aprilthird20212 days ago
                  Why are so many people here pretending fraud is ambiguous?

                  Saying "We will have great batteries 10 years from now" is not fraud. It's your belief about the future. Everyone knows no one can predict the future.

                  Saying "this hydrogen-powered truck works, here is a video of it running on the road right now" when the video is edited so you don't see that it's rolling downhill and the truck isn't actually running - that's fraud.

                  Theranos wasn't in trouble for saying their machines would be great one day. They got in trouble for lying about the current state of things, saying they were performing blood tests on their machines when they were not.

                  • pyman2 days ago
                    BuilderAI never actually told customers that development was done using AI, that's something people made up after the company went bust. If you look at their website (builder.ai), they explain that their virtual assistant "Natasha" assigns a developer, and then uses face recognition to verify the identity of the developer.

                    Take a minute to visit their site and get informed. We live in a time where people form opinions just by reading a headline.

                    • aprilthird20215 hours ago
                      I know that. They had their funding pulled for fudging their sales numbers, which is fraud. And people pretending it's not have latched onto some other people's misunderstanding.
                  • mistercheph2 days ago
                    I'll give you a more temporally synchronous example if you like, Microsoft's deliberately misleading claims about their quantum computing progress: https://www.science.org/content/article/debate-erupts-around...
            • pyman3 days ago
              Theranos was dealing with people's health. Misdiagnoses, delayed treatments, etc., that's real harm. IMO, comparing that to building web apps isn't the same.
              • aprilthird20213 days ago
                The actual crime the Theranos founder went to jail for was not misdiagnosing people. It was defrauding investors, because they made investors believe their machines were doing the tests when really they were sending them out to separate labs.
                • pyman3 days ago
                  Completely different story. With Theranos the investors sued the founders, with Builder AI they didn't. This suggests they knew what was really going on, so it wasn't fraud in their eyes.
                  • aprilthird20213 days ago
                    It is not a completely different story. The lender yanked back the money they lent because they found out about fraudulent sales numbers. That led to the bankruptcy. It was still the people whose money was in the game who brought the company down in both scenarios because fraud is a big red line for anyone whose money is on the line
                    • pyman2 days ago
                      I understand where you're coming from, but we need to stick to the facts. If there are no court cases, we can't imply that fraud was committed. We don't know what kind of agreements were in place, why the money was being transferred, or what the expectations were on both sides.

                      We also don't know what was discussed in private. For example, it could have been something like: "We want to be part of this investment opportunity, we'll give you $40 million. But if regulators start asking questions, we want the money back."

                      Without full context or legal findings, everything else is just speculation.

                      I'm surprised no one is talking about Microsoft's investment in BuilderAI, a total loss. It's unlikely they'll recover much, if anything. So why aren't they suing the CEO and CFO? Maybe some of the issues were handled quietly behind the scenes to avoid public exposure or reputational damage? I don't know.

                      • aprilthird20215 hours ago
                        They themselves admitted to this fraud. Try reading the article you are posting under:

                        > Less than two months ago, Builder.ai admitted to revising down core sales numbers and engaging auditors to inspect its financials for the past two years. This came amidst concerns from former employees who suggested sales performance had been inflated during prior investor briefings.

                        It doesn't get more textbook fraud than lying to investors about how many sales you've recorded

              • Retric3 days ago
                Theranos was using the same testing equipment and techniques as any other lab for most of their diagnostic services. Which is how they avoided being instantly exposed when their results ended up being meaningless. “In October 2015, John Carreyrou of The Wall Street Journal reported that Theranos was using traditional blood testing machines instead of the company's Edison devices to run its tests, and that the company's Edison machines might provide inaccurate results.” https://en.wikipedia.org/wiki/Theranos

                They did plenty of shady shit including producing poor results, but that’s largely incompetence independent of fraud vs intentionally putting people’s lives on the line.

                IMO, the fraud kind of hides the equally important story where an incompetent 19-year-old college dropout shockingly doesn't know how to effectively set up and manage complex systems.

          • aprilthird20213 days ago
            > Overselling tech happens all the time in this space.

            Overselling is fraud and is a crime at a certain point, which they clearly passed; otherwise they wouldn't have had their lenders pull back money and leave them bankrupt.

            • pyman3 days ago
              Just to play the devil's advocate: if a software company tells you your data is secure and then someone hacks their server and steals your photos and personal data, did their CEO and marketing department oversell their level of security? Is this fraud as well?
              • aprilthird20213 days ago
                "Your data is secure" is known to never be 100%. But what assessments and technology they say they use for security needs to be followed. And if it's found out that those are lies, then it's fraud.

                Kind of like how these guys lied about the volume of sales they had. Textbook fraud. They aren't in trouble for saying "AI is going to be great"

                • pyman2 days ago
                  I agree. But using the "your data is secure" analogy, BuilderAI never actually told customers that development was done using AI, that's something people made up. If you look at their website (builder.ai) they explain that their virtual assistant "Natasha" assigns a developer (I assume from India). That part doesn’t sound like fraud to me, and it's the part everyone seems to be focusing on.

                  The company went into insolvency because the CEO and CFO were misleading investors by significantly inflating sales figures through questionable financial practices. According to the Financial Times, the Indian founders and accountants reportedly engaged in round-tripping with VerSe Innovation. That raised red flags for investors, regulators, and prosecutors, and led to bankruptcy proceedings.

        • osigurdson3 days ago
          It doesn't matter for customers, but investors would be interested in whether it's AI or a bunch of devs being used, due to the difference in scaling potential.
    • dang3 days ago
      Thanks! It took an annoying amount of time to try to sort this out, but I made a consolidated reply here: https://news.ycombinator.com/item?id=44176241.
  • fazkan3 days ago
    This is so weird; it's not that hard to actually build an app builder. There are multiple open-source repos (Bolt etc.); they could have just paid their "AI engineer" to actually build an AI engineer.

    Shameless plug, but we built one (https://v1.slashml.com) in roughly 2 weeks. Granted, it's not as mature, but we don't have billions :)

    • driverdan3 days ago
      > its not that hard to actually build an app builder

      Besides simple, one-page, self-contained apps, yes, it's quite hard. So hard that it's still an unsolved problem.

      • fazkan3 days ago
        Not really. Lovable, v0, and Bolt are all multipage. They connect to Supabase for DB and auth. Replit can spin up custom DBs on demand, and has a full-fledged IDE.

        I did my research before jumping into this space :)

        • aitchnyu2 days ago
          Which ones prevent anybody with a browser from accessing other users' data? I have been discussing vibe coding and Supabase's Postgres row-level security misconfiguration.
          • fazkan2 days ago
            Replit, from what I know, and Lovable to a certain extent.
    • glutamate3 days ago
      They launched in 2016
    • xkcd-sucks3 days ago
      It's plausible they started with a typical software consultancy and its crappy in-house app builder scripts, and rebranded it as an AI thing in order to inflate its value?
      • downrightmike3 days ago
        That'd be shameful, and a complete disgrace. It'd be like adding "bitcoin" to your company name or your 10-K filings a few years ago to boost your stock.
    • hnuser1234563 days ago
      Nice, I'll try this out tonight.
      • fazkan3 days ago
        thanks, do ping me if you run into any issues faizank@slashml.com
  • bartread3 days ago
    This is not news, or at least not fresh news. The FT reported the collapse ~9 days ago and it was discussed here: https://news.ycombinator.com/item?id=44080640
    • apsurd3 days ago
      news to me buddy. this is perhaps a useless comment but then i think, articles resurface every now and again and it's intentional and welcome for those that missed. and this isn't exactly that of course, rather makes me think it's worth a comment: news is relative. discussion ensues, it's all good
      • macintux3 days ago
        Except that it's contrary to the site FAQ.

        > If a story has not had significant attention in the last year or so, a small number of reposts is ok. Otherwise we bury reposts as duplicates.

  • ricardobeat3 days ago
    So... where did the $450M go? A team of 700 developers in India over eight years would have cost a fraction of that.
    • rokob3 days ago
      Why do you think it would be to pay for actual costs? The whole point of running a scam is to spend the money.
      • antithesizer3 days ago
        I really wish I'd read this before starting my career as a scammer ten years ago.
    • CSMastermind3 days ago
      How do you figure? $450M / 8 years / 700 developers = $80k / year per developer.
      • cubano3 days ago
        Typically, scams like this are very top-heavy with the vast majority of the pilfered cash going to a few well-placed "bros" at the top of the company pyramid.

        My guess? Most of the cash is socked away in BTC or some such wealth sink just waiting for the individuals to clear their bothersome legal issues.

        • owebmaster3 days ago
          > My guess? Most of the cash is socked away in BTC

          Had they done this years ago they would be so rich it would be worth keeping builder.ai going just to avoid legal problems.

      • casion3 days ago
        Average salary for a developer in India is about 1/10th of that.
        • spamizbad2 days ago
          That hasn't been the case in like 20 years. Engineering salaries are around 40K USD, although they can even stretch into the six figures for major companies with deep pockets wanting to attract elite talent. The band is pretty wide and is largely based on whether you work in a body shop consultancy (low end) or a major tech company like Google (high end).

          And, like many things in this world, you'll find you'll pay for what you get.

        • darth_avocado3 days ago
          Median salary of a reasonable developer is about half of that, and if you are talking about Microsoft, Uber, Google etc., then that's the salary of a senior dev.

          https://www.levels.fyi/t/software-engineer/locations/greater...

          But more importantly, we're all pretending the only cost of building anything is salaries. A company that size could blow a million dollars a month just on AWS, and the AI stuff is waaaay more expensive.

        • aprilthird20213 days ago
          No, it's not
      • bigfatkitten3 days ago
        Only if they’re all ex FAANG staff/principal.
    • paxys3 days ago
      They have been operating since 2016. Companies can and have burned through $450M in funding a hell of a lot faster than that.

      OpenAI is on track to spend $14 billion this year.

    • monksy3 days ago
      The chai budget is a completely justifiable expense. (Probably more so than the difference that was run away with.)
    • more_corn3 days ago
      [flagged]
      • dang3 days ago
        Please don't do this here.
      • pryelluw3 days ago
        $400M!

        I get $100M. Maybe even $200M.

        But $400M?

        Unforgivable.

        • nadermx3 days ago
          You figure 700 employees. 400m. Avg cost per hooker can't be more than a few hundred.

          So by this math each employee got 1,900ish hookers. Since I figure male hookers for the female employees were cheaper, we'll round up to 2,000.

          That is in fact unforgivable. 1,000 would have been acceptable. 2,000... just excess.

          • pryelluw3 days ago
            Did you factor in the nose candy?

            That estimate seems off. Please crunch the numbers once again. Make sure to factor in inflation.

          • kridsdale13 days ago
            Shit, those benefits are way better than Suicide Bomber.
    • TrackerFF3 days ago
      These kinds of scumbags pocket 90% of the cash.

      Wouldn't surprise me if the developers were hired from sweatshop staffing agencies, or just working directly for minimum wage, if even that.

    • pyman3 days ago
      Elon Musk spent $6 billion training his model. Sam Altman spent $40 billion. Where did Builder AI's $500 million go? Probably into building a foundation model, not even a full LLM.
      • 1oooqooq3 days ago
        shhhh. we don't talk about the ongoing scams. those you keep hyping while trying to sell your SaaS around them.
    • 77pt773 days ago
      [flagged]
  • anal_reactor3 days ago
    In this whole AI revolution we sometimes forget the power of cheap human labour... and if I recall correctly, this isn't the first time such a thing has happened. Amazon made a "no-checkout AI automated store" which was a bunch of cameras connected to a bunch of Indians. At this point, I think we should consider "Indians" a valid element of any engineering architecture, because they perfectly fill the niche where you have work that is almost easy to automate, but not quite.

    Of course, "Indian-as-a-Service" doesn't sound as cool as AI, but besides this, I think it's a valid solution and a business model for many use cases.

  • gamblor9563 days ago
    Have been joking with friends that "AI" stands for "Actually Indians".

    We never thought it was anything more than a joke...

  • Ancalagon3 days ago
    Really weird considering how much AI is actually available now
    • wongarsu3 days ago
      If you have an idea for a cool AI startup, it's faster to build your first prototype without the actual AI, just faking that part. But if your Actual Indians have 95% accuracy and you can't get an AI to do more than 85%, then you are kind of stuck, having raised money and got customers while pretending that your Actual Indians are Artificial Intelligence.
      • TYPE_FASTER3 days ago
        This is the way. Funny how AI could also stand for Actual Intelligence. Or, Artisanal Intelligence? "Now 100% organic handcrafted thoughts, unique for your business problem."
      • more_corn3 days ago
        Not true. It's super easy to fine-tune and deploy one of the open models. I should teach a course.
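
        For what it's worth, a minimal LoRA fine-tuning sketch with Hugging Face transformers + peft looks roughly like this; the model name, data file, and hyperparameters are placeholders, not a recipe:

            # Minimal LoRA fine-tune of an open model (placeholder names/values).
            from datasets import load_dataset
            from peft import LoraConfig, get_peft_model
            from transformers import (AutoModelForCausalLM, AutoTokenizer,
                                      DataCollatorForLanguageModeling,
                                      Trainer, TrainingArguments)

            base = "mistralai/Mistral-7B-v0.1"   # any open causal LM
            tokenizer = AutoTokenizer.from_pretrained(base)
            tokenizer.pad_token = tokenizer.eos_token
            model = AutoModelForCausalLM.from_pretrained(base)

            # Train small low-rank adapters instead of all base weights
            model = get_peft_model(model, LoraConfig(
                r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"]))

            data = load_dataset("json", data_files="my_domain_data.jsonl")["train"]
            data = data.map(lambda row: tokenizer(row["text"], truncation=True,
                                                  max_length=512))

            Trainer(model=model,
                    args=TrainingArguments("out", per_device_train_batch_size=1,
                                           num_train_epochs=1),
                    train_dataset=data,
                    data_collator=DataCollatorForLanguageModeling(
                        tokenizer, mlm=False)).train()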
        • msgodel2 days ago
          The technical aspects of training and tuning are trivial. GP is pointing out that you might not be able to get the model to succeed at the task as often for any number of reasons that you won't know before you actually train one.

          Although I guess your point is that it's also cheap to train them, probably cheaper than doing this. But startups are started by social people, not technical people. Stuff like this will always be expensive for social people since they have to pay one of us to do it. YC interviews their CEOs from time to time; it's really clear that's how that works.

    • mrweasel3 days ago
      Also, it can't have been fast. Didn't customers and investors find it weird that Copilot spits out code as fast as you can type, but Builder.ai needed days or weeks to generate your app? Or were these Indian developers just really, really fast?
      • givemeethekeys3 days ago
        Maybe they use GPT :)
        • helloplanets3 days ago
          There's this one super secret agentic framework that beats all the benchmarks...
    • hyperadvanced3 days ago
      Available, sure, but cost-effective? My guess is that they tried a lot of things to get ChatGPT to work and ran out of money before it got cheap enough to fit with a reliable business model. Early but not wrong, I guess.
    • immibis3 days ago
      Almost like it doesn't work as well as they market it as working?
      • klipt3 days ago
        Not all companies are equal.

        At the same time that Tesla was making actual electric cars, Nikola was rolling fake "electric trucks" downhill.

        Grifters exist, but not everyone is a grifter.

        • mjmsmith3 days ago
          Not sure Tesla is the poster child for non-grifters in the context of AI.
          • andrei_says_3 days ago
            Or “auto pilot”
            • lotsofpulp3 days ago
              Autopilot is pretty good at driving within the same lane. Just as a plane pilot still has to do the more complicated stuff like takeoff and landing, one can expect the same with Autopilot: the driver handles changing lanes, stopping at lights or stop signs, and turning.

              Full self driving is the mislabeled one. Should be “90% self driving”.

              • seanp2k23 days ago
                Not when that lane is the HOV lane and traffic is going (speed limit + 15mph). You can always spot the white Tesla on 101 doing the speed limit in the HOV lane during rush hour with a mile of cars behind them and a mile of nothing in front.
                • lotsofpulp3 days ago
                  The driver can set the auto pilot speed to whatever number they want. If they are going the speed limit, it’s because the driver chose to cruise at that max speed.

                  Full self driving, however, may be limited to the speed limit. I don’t know, since I don’t use it.

              • pixl973 days ago
                > plane pilot needs to do the more complicated stuff like take off and landing

                Auto landing test 3 days ago.

                https://www.youtube.com/watch?v=eijEPsSdqg8

                • lotsofpulp2 days ago
                  Interesting. For marketing purposes, however, I would think almost all people still associate auto pilot with just cruising in the air, similar to cruising on a highway.
          • bufferoverflow3 days ago
            How are they grifters? Tesla gave us FSD to try for free for a month, twice now. If you don't like its performance, don't pay.
            • a4isms3 days ago
              You the customer are only a minor part of the grift. In fact, you're an unwitting prop for the grift. The entire point of the grift is the stock price, not your $8,000 or monthly subscription.

              What they want from you are comments like this. What they want to do with those comments is to preserve the consensus amongst Tesla bulls that they are on their way to selling robots and renting robot taxis by the ride.

            • seanp2k23 days ago
              Elon Musk Predicts Level 4 Or 5 Full Self-Driving ‘Later This Year’ For the Tenth Year In A Row

              (2023) https://www.theautopian.com/elon-musk-predicts-level-4-or-5-...

              https://dictionary.cambridge.org/us/dictionary/english/grift

                  ways of getting money dishonestly that involve tricking someone
              
              So, they got money dishonestly by tricking someone into buying their cars based on the belief that they'd offer real full self-driving now or very soon, and then they didn't actually deliver it.
              • bufferoverflow3 days ago
                Who cares what Elon says? You can literally try the product and decide. Or you can watch any of hundreds of videos on YouTube of people trying it.
        • jampekka3 days ago
          Coast-to-coast self-driving Teslas were promised by 2017, and have been promised for "next year" almost every year since.

          Tesla can make electric vehicles, but the company valuation is based on grift.

        • Supermancho3 days ago
          Capitalism rewards dishonesty. Every company is a grifter to some degree. This is more widespread in technical service companies.
          • ahazred8ta3 days ago
            In Eastern Europe (COMECON) during the Cold War days, the factories were famous for cranking out products that claimed to do something but did not deliver on the promise. Did they do that because they were run by capitalists?

            Google "China melamine-contaminated milk".

            • AngryData3 days ago
              I find it hard to believe any claim that they weren't capitalists. Almost nothing in the USSR was operated or managed by worker collectives. Nor in China. A handful of people managed the capital of these businesses and profited off them, and the people who own, operate, and profit off of capital investments are capitalists, even if they slap a PR sign up claiming they aren't.
              • ahazred8ta2 days ago
                That's why I specified the Eastern European countries where many of the factories actually were controlled by the unions. They themselves acknowledged that they were fudging their numbers on a regular basis to get undeserved bonus money for workers.
              • earnestinger3 days ago
                Technically you are wrong. The best kind of wrong.
            • throw109202 days ago
              Thank you for pointing out that this has nothing to do with capitalism.

              Humans are dishonest, and implying that that doesn't manifest in other economic systems is willfully and maliciously false, a statement only used by propagandists.

          • deadeye3 days ago
            To an extent. Just not to the extent that dishonesty is rewarded in socialism and communism.

            At least capitalism in a free society is largely self-correcting.

          • burnte3 days ago
            Capitalism does no such thing; the market available to the company does. Every problem people blame on "capitalism" is solvable with appropriate regulation. That's literally the point of regulation, too. Dozens of other countries show you can have a vibrant economy that isn't beholden to a few billionaires.
            • otherme1233 days ago
              Capturing the regulators is a great way to make yourself immune to market pressure.
            • Supermancho3 days ago
              > Capitalism does no such thing,

              I think it's obvious that this is incorrect. Part of regulation is to try and curtail grifting. It is not possible to prevent it, and it is the tendency of the system.

              • burnte3 days ago
                Not being able to eliminate it doesn't mean controlling it is impossible or that we shouldn't try. In any system there will be cheaters; it's how you deal with them that counts. A well-regulated economy makes wealth spread around.
            • surgical_fire3 days ago
              > Capitalis does no such thing, the market available to the company does. Every problem people blame on "capitalism" is solvable with appropriate regulation

              Too bad that capital owners deeply hate regulation and do everything they can to deregulate everything.

              Hell, it doesn't even need to be capital owners. Plenty of bootlickers here on HN that always cry about any kind of government regulation.

            • jampekka3 days ago
              > Every problem people blame on "capitalism" is solvable with appropriate regulation.

              Capitalism makes capitalism hard to appropriately regulate. Concentration of capital means concentration of power.

              • burnte3 days ago
                > Capitalism makes capitalism hard to appropriately regulate. Concentration of capital means concentration of power.

                And yet many countries have a better handle on it than the USA. I just never, ever buy into the whole "it's too hard for America to do a thing that other countries do". I think the US is capable of anything we put our collective will towards; we just need leaders who want to lead the whole nation rather than personal profiteers.

                • jampekka3 days ago
                  The general trend has been towards deregulation, and capital concentration, almost everywhere for decades. US is just ahead.

                  The talk in EU is now that we have to ease regulation because we can't compete with countries with laxer regulation. The same race to the bottom has happened in e.g. taxes and labor protections ever since capital controls were lifted.

                  > I think the US is capable of anything we put our collective will towards, we just need leaders who want to lead the whole nation rather than personal profiteers.

                  I sure hope so but I can't really see it happening. The whole US political system seems to be FUBAR, largely because the concentrations of capital bought it.

          • sib3 days ago
            It's a wonderful thing that no other economic systems reward dishonesty.

            /s

            • Supermancho3 days ago
              >>> Grifters exist, but not everyone is a grifter.

              >> Capitalism rewards dishonesty.

              > It's a wonderful thing that no other economic systems reward dishonesty.

              This is whataboutism. To rephrase: "all economic systems reward dishonesty." That's the point. Saying not every market participant is a grifter is a form of denial.

  • dang3 days ago
    Recent and related:

    Microsoft-backed UK tech unicorn Builder.ai collapses into insolvency - https://news.ycombinator.com/item?id=44080640 - May 2025 (136 comments)

  • a_void_sky3 days ago
    Nobody has mentioned that they were reselling the AWS credits they had. We had them as our billing partner, with very good discounts. The day it happened, AWS sent us an email telling us to remove them as our billing partner.
  • moonikakiss3 days ago
    I did due diligence on Builder.AI for a venture firm I was interning at (circa 2019). It was extremely apparent (Glassdoor, talking to any employee) that it was complete BS.

    When I say apparent, it took less than 15 minutes and a couple of google searches to get a sniff of it.

    Somehow, you can still raise $500MM+.

    I think about that a lot

    • aprilthird20213 days ago
      You have to elaborate! What were the signs? When you did due diligence what were you told about the company? Was the marketing or premise itself fishy or you only realized it was fraudulent after starting the due diligence?
  • Havoc3 days ago
    I've read indications that this was always aimed at a hybrid model rather than pure AI, but it's hard to tell now because all the news is riding this "Indians" train.
    • giarc3 days ago
      Not only that..."The deception wasn't new. As early as 2019, The Wall Street Journal exposed Builder.ai's questionable AI claim revealing that the platform relied heavily on human contractors rather than artificial intelligence."
    • bboygravity3 days ago
      That makes more sense, would explain the unbelievable clickbait headlines (as usual).
    • aitchnyu2 days ago
      Sounds a little more reasonable. In 2016, the year of Builder's founding, Uber was pursuing self-driving as well as hiring human drivers.
  • LeicaLatte3 days ago
    Do things that don’t scale taken to another level :)
  • cubano3 days ago
    We wanted flying cars, but instead got fake AI.

    Shameful.

  • tartoran3 days ago
    What happens with all the money they collected from investors? Was it all just squandered away? Pocketed?
  • ManBeardPc3 days ago
    Another AI scam. Wasn't there a similar case with the Amazon stores? "Just Walk Out", I think. Could be understood as sound advice if someone pitches you something groundbreaking done by AI.
  • profstasiak3 days ago
    many such cases
  • moralestapia3 days ago
    The elephant in the room is how many builder.ai(s) are still out there.

    My personal estimate is that it is about 80% of the startups you see around.

    • bigfatkitten3 days ago
      The WITCH consultancies make tons of money delivering code of similar quality without pretending that it was done by AI.
  • xyst3 days ago
    The Theranos of AI. What a joke.
  • Yeul3 days ago
    Honestly I feel bad for Indians but yeah everything annoying comes from them. Scamming, call centers and worst of all Microsoft agents.
    • orochimaaru3 days ago
      Scamming is Cambodia via the Chinese triad. That’s where all the scam farms are.

      There’s a lot of poor Indians forced into slave labor conditions there by tricking them into job opportunities. But there is not a lot of call center scams run today in India. Not at the scale at which Cambodia runs them.

  • pkkkzip3 days ago
    Definitely not helping stereotypes
  • stuartd3 days ago
    cowboys
  • mountainriver3 days ago
    Do folks think that this was utter negligence by the VCs, or just a pump and dump?
    • petesergeant3 days ago
      I dunno, there's a world where this ends differently, and the company was liquid for six more months and transitioned to "actual" AI instead, having already built the customer base and sales channels, and everyone's happy. Launching your AI product before the AI works appears not to be especially unusual and is only a problem if the money runs out before you finish building the AI.
    • alephnerd3 days ago
      For every investor in Builder, there were multiple that passed.

      Notice how (aside from MS) most participants were not experienced Enterprise SaaS or AI/ML investors.

    • flowerthoughts3 days ago
      There's a more-or-less useful adage in investing: scared money don't make money.

      Startups will always carry a risk, and VCs are not betting that the company will be asymptotically good, just good enough to make an exit.

      • MegaButts3 days ago
        > VCs are not betting that the company will be asymptotically good, just good enough to make an exit.

        This is a misunderstanding of VC investment. Any competent VC expects most of their investments to go to zero. They're hoping a small percent of their investments will make up for the losses. The goal of a decent VC isn't to avoid bad investments so much as it is to make sure they get one good investment. A good investment in AirBnB/Google/Facebook will make up for dozens of speculative bets that go to zero.
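
        The toy math behind that (made-up numbers): one outlier covers a portfolio of zeros.

            # Stylized power-law fund: one big winner pays for the losers.
            checks = [1.0] * 20                        # $1M into each of 20 startups
            multiples = [0.0] * 17 + [1.0, 3.0, 50.0]  # 17 zeros, one 50x outlier

            invested = sum(checks)
            returned = sum(c * m for c, m in zip(checks, multiples))
            print(f"${invested:.0f}M in, ${returned:.0f}M out = "
                  f"{returned / invested:.1f}x fund")  # 2.7x despite 17 zeros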

        • flowerthoughts2 days ago
          > This is a misunderstanding of VC investment. Any competent VC expects most of their investments to go to zero.

          I'll offer a linguistic nitpick now, as it felt a bit harsh to label my statement a misunderstanding.

          The bet is still on each investment having a good exit, with the implied assumption that betting is a probabilistic game.

          • MegaButts2 days ago
            No, this is wrong. VCs regularly bet on companies they expect to fail, and occasionally even know will fail. They sometimes put money into companies knowing they will never get it back. They do not expect a positive return on every investment.
      • compiler-guy3 days ago
        And even more than that, just that some company they invest in will be a winner. It's OK for most to fail, as long as one of them does well. So they invest in lots of long-shots.
    • fakedang3 days ago
      Not just VCs. Microsoft was an investor too.
      • pyman3 days ago
        Microsoft invested £250M in Inflection AI, £250M in Builder.ai, and has backed several other companies working on LLMs. They’ve been placing strategic bets across the AI space, but only a few of those companies actually had the talent, infrastructure, and funding needed to build real models.

        The VP of AI was Craig Saunders, the same person who helped create Amazon Alexa. The problem is, they ran out of money. $500 million sounds like a lot, but it's not even close to what you need to build and train a real LLM. You need billions. Most people just don't realise that.

        See: https://www.businesswire.com/news/home/20240611122778/en/Bui...

        • seanp2k23 days ago
          ...and given the massive success of Amazon Alexa...
    • s1artibartfast3 days ago
      Neither? Negligence doesn't make sense because it was the VCs' own money: no duty of care. It doesn't seem like anyone cashed out either.

      Seems like a bad bet that went south.

      • sib3 days ago
        >> Negligence doesn't make sense because it was the VCs own money

        Almost definitionally, VCs are investing someone else's money: the people providing the capital are called the limited partners (LPs), and the VCs who raise and invest the money are the general partners (GPs). The LPs are often pension funds, university endowments, and charitable organizations.

        Yes, GPs do typically have a capital contribution requirement, but it's generally in the area of 1% of the fund, so the vast majority of what VCs are investing is other people's money, for which they definitely have fiduciary responsibility.

  • andrewinardeer3 days ago
    AI = Actual Indians, apparently.
    • CobrastanJorji3 days ago
      Yep. Also happened with Amazon's "Just Walk Out Technology:" https://www.businessinsider.com/amazons-just-walk-out-actual...
      • davidst3 days ago
        [Disclaimer: Former Amazon employee and not involved with Go since 2016.]

        I worked on the first iteration of Amazon Go in 2015/16 and can provide some context on the human oversight aspects.

        The system incorporated human review in two primary capacities:

        1. Low-confidence event resolution: A subset of customer interactions resulted in low-confidence classifications that were routed to human reviewers for verification. These events typically involved edge cases that were challenging for the automated systems to resolve definitively. The proportion of these events was expected to decrease over time as the models improved. This was my experience during my time with Go.

        2. Training data generation: Human annotators played a significant role in labeling interactions for model training, particularly when introducing new store fixtures or customer behaviors. For instance, when new equipment like coffee machines was added, the system would initially flag all related interactions for human annotation to build training datasets for those specific use cases. Of course, that results in a surge of humans needed for annotation while the data is collected.

        Scaling from smaller grab-and-go formats to larger retail environments (Fresh, Whole Foods) would require expanded annotation efforts due to the increased complexity and variety of customer interactions in those settings.

        This approach represents a fairly standard machine learning deployment pattern where human oversight serves both quality assurance and continuous improvement.
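
        A minimal sketch of that routing pattern; the names and the 0.85 threshold below are invented for illustration, not Go's actual values:

            from dataclasses import dataclass

            @dataclass
            class Interaction:
                event_id: str
                predicted_action: str  # e.g. "took_item_from_shelf"
                confidence: float      # model confidence in the prediction

            REVIEW_THRESHOLD = 0.85    # hypothetical cutoff

            def route(event, review_queue, training_queue):
                """Auto-resolve confident events; send the rest to humans."""
                if event.confidence >= REVIEW_THRESHOLD:
                    return event.predicted_action  # trust the model
                review_queue.append(event)    # a human resolves the charge
                training_queue.append(event)  # the human label also feeds retraining
                return None                   # decision pending human review

        As the models improve, fewer events fall below the threshold, matching the declining review load described above.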

        The news story is entertaining but it implies there was no working tech behind Amazon Go which just isn't true.

        • CobrastanJorji3 days ago
          That's some fascinating background, thanks! Probably explains why they keep operating it in stadiums but not grocery stores. Works pretty well with a small handful of items, does not scale up reliably to shopping carts full of stuff.
      • vel0city3 days ago
        I wish more stores would just do something like Scan and Go at Sam's Club. By far the smoothest checkout experience I've ever used.

        https://tech.walmart.com/content/walmart-global-tech/en_us/b...

        • thatguy09003 days ago
          I wonder if that only really works because Sam's Club can just revoke your membership if you steal.
          • mikestew3 days ago
            Apple stores work the same way, except that it is truly “scan and go”, whereas Walmart makes you show a digital receipt. What happens if you steal? I dunno, without checking I’m pretty sure you need an Apple account to use the app, so maybe that gets revoked. Or maybe Apple’s stuff simply works well enough.
            • vel0city3 days ago
              At least at Sam's, cameras glance into your cart and seem to apply some kind of trust score. Usually the person at the exit just waves me by; sometimes, if the cart is really loaded with odd items or I've already bagged some things, they'll want to take a peek.
      • RollingRo113 days ago
        I remember being 13 years old and stepping into an Amazon Go store in Seattle. Little me lost my mind. I think I walked in and out of the store like 5 times just to see the Amazon charge. Sucks that half of the magic was a lie.

        Shame to see another project fall to the strategy of AI = "Actually Indians". I wonder how many other companies have engaged in this stuff.

        • pyman3 days ago
          • calmbell3 days ago
            Depends on how you define real. I would argue that GPT-2 was a real LLM and it almost certainly cost a lot less than a billion. I'm sure there are much better examples.
            • pyman3 days ago
              Can you imagine Builder.ai using a model that argues with their clients or discriminates against them? I don't think so. GPT-2 is like bringing a knife to a gunfight in 2025.

              If you want to compete with the likes of GPT-4, Claude, or Gemini today, you're looking at billions, just for training, not counting infra, data pipelines, evals, red teaming, and everything else that comes with it.

              Builder.ai wasn't able to use GenAI to actually build software. And when the money ran out and no model was ever announced, investors lost trust and clients lost patience.

      • dmazin3 days ago
        My understanding is this turned out not to be true. People were used to label stuff for new stores, but the actual implementation did not depend on some sort of fakery.
        • rrrrrrrrrrrryan3 days ago
          That was the original idea, and that's what Amazon claimed, but IIRC they never got over 70% automated.

          Phrased differently, 30% of all transactions were still entered by a human overseas watching cameras at the time they decided to pull the plug, years after the initial launch.

          • kylecazar3 days ago
            Interesting that they could automate some of the transactions and not others... wonder what was special about that other 30%.

            There was a 2 year period in which I bought lunch at an Amazon Go daily. I was naive to the magic so I thought it was the greatest innovation ever.

        • CobrastanJorji3 days ago
          I imagine it's possible the truth was somewhere in between. But if it worked, why did they stop using it in their grocery stores after putting so much money into it?
          • martinald3 days ago
            In the UK, at least, the grocery stores are completely empty; I've barely seen anyone in them. Bizarrely, they are shutting loads down in London while opening new ones at the same time. Absolutely no idea what the strategy is; they must be throwing out the majority of their fresh food stock.
        • throwaway298123 days ago
          [dead]
    • DebtDeflation3 days ago
      It's happening in every industry. CEOs moving back office jobs to India and telling Wall St they replaced the jobs with "AI" to get a stock price boost. I'm convinced this dynamic is a major cause of the "white collar recession" we're experiencing now. Perhaps the intent is to eventually replace the Indians with AI (Artificial Intelligence), but right now it's very much AI (Anonymous Indians) doing the work.
      • goatlover3 days ago
        Is this how the Trump administration imagines bringing jobs back to America by not regulating tech companies?
    • thomassmith653 days ago
      Yes, this article's full title is "Builder.ai Collapses: $1.5bn 'AI' Startup Exposed as 'Actually Indians' Pretending to Be Bots"
    • blitzar3 days ago
      What's the process for hallucinations? Do 1 in 10 of the workers have to be tripping on shrooms all shift?
      • nickdothutton3 days ago
        No, lack of sleep.
        • blitzar3 days ago
          Elegant cost saving (or redistribution of drugs to head office) solution.
      • bluefirebrand3 days ago
        No they're just extremely low paid overseas workers scrambling to do work fast enough that it looks like "AI"?
    • kristianc3 days ago
      GenAI... generate another Indian...
    • stuart_real3 days ago
      [dead]
    • stego-tech3 days ago
      [flagged]
  • zachncst3 days ago
    Isn't this what they always tell startups to do? Fake it and get product-market fit. I recall the stories of TaskRabbit, where the founder was delivering all the meals.
    • Aurornis3 days ago
      "Fake it till you make it" was about presenting yourself as an established, stable company to overcome objections about using a startup. Things like having a "Customer Support" phone number and e-mail address that just go to the founders, for example. It's fair game if the founders are actually picking up the phone and doing customer support, and it overcomes one objection people might have about using a startup instead of a big company.

      Claiming you can do something specific (use AI to do something) and then using humans to do the labor is something else entirely. If you raise money on that, it's just fraud.

      • MangoToupe3 days ago
        There's a good deal of grey area there, for instance faking user activity in social media startups. Reddit did this, although I don't know if they reported active user numbers as part of fundraising.
        • Aurornis3 days ago
          If Reddit created a material number of fake accounts and reported those as a key metric for fundraising, that would have been fraud.

          I think the story has been exaggerated a lot, though. The original story was that the admins were doing real submission activity (links, etc.) but they had a mechanism to create a new user account with the submission. So they created a lot of new user accounts for themselves, but the activity was real and driven by the founders.

          We all have test accounts on our production systems. If it's a tiny number of the overall users at time of fundraising it doesn't matter. On the other hand if they created 10,000 accounts and then claimed they had 11,000 users that would be blatant fraud. I really don't think they did anything like that, though. I think they seeded the very initial site with content and made different "accounts" for it, but by the time they raised they had real traffic.

          • spwa43 days ago
            ... and what if Twitter does it?

            Because at the very least they killed most countermeasures to bots, and a serious percentage of activity on Twitter is "fake engagement".

            I also have a much more difficult question: Could you explain how this fraud works/applies if nation states are the ones developing the bots? Is there a difference between foreign and US bots?

            • Aurornis3 days ago
              It's not complicated. If a company knowingly misrepresents their user activity then it's fraud. Knowing that a significant portion of your user activity is bots but then claiming you don't have bots would be fraud.
              • MangoToupe3 days ago
                > If a company knowingly misrepresents their user activity then it's fraud.

                Demonstrating this in court might get pretty complicated, though. Legal terms often have a way of obscuring the complexity of real life (which is understandable, of course).

                I'm guessing the number of well-known startups who have committed fraud by "faking it until they make it" is somewhere between 1 and N. What that number is might well be subjective to the judge or jury rendering a verdict. Unfortunately, lack of serious insight into this might also be evidence that "faking it until you make it" works even if it's fraud, so long as you can spin revenue that investors demand out of it eventually.

                Edit: forgive my claiming lack of evidence = evidence; I'm just tired. I think my point is that it's kind of unknowable, and that this might prompt people to accept it as proof positive (even irrationally). I hope my comment can be received in good faith.

              • spwa42 days ago
                Really? Because just about every dating site, every forum, every ... has been doing this for decades. If this were true, where are the many court cases where management loses against investors? Because I don't see them. The only one I see is the whole shitshow around Elon Musk buying Twitter.

                Also, a bunch of the bots are run by nation states. In that case I would expect that at least some courts would not cooperate with any such fraud case (Russia, India, China, I don't know in Europe but I doubt there aren't a few examples ... and maybe the US, probably at least a few states). Best of luck making anything stick if the courts do not cooperate.

        • snowwrestler3 days ago
          Most people on web forums or social media sites are browsing, reading, watching. Only a small percentage are posting UGC, user-generated content.

          So when founders are starting a new site, they need to bootstrap by getting enough content in there to drive browsing. Only then will the audience grow, and only then will users start to post their own stuff. This is what Reddit did, and it’s not unique to them. YouTube’s founders did the same thing when they started.

          Note that this is not “fake it til you make it.” This is investment in audience growth.

    • pyman3 days ago
      • tacheiordache3 days ago
        You say that as if Amazon Alexa was some kind of amazing product. It succeeded because it has Amazon's backing, and it's kind of a crappy product.
        • oblio3 days ago
          All voice assistants, at least the original iterations, are only good for 3-4 trivial things people actually want. And they've been around for a decade at this point.
          • sokoloff3 days ago
            It’s a kitchen timer, music player, and weather sayer.

            That’s surely worth the $30 I paid.

            • jaymzcampbell3 days ago
              I had to laugh, these are literally the only three things my wife and I use ours for. At a stretch, I'll count the multi-room speaker sync as a great value add to the OOTB audio playback. Anything else, forget it.
            • codegrappler3 days ago
              Don’t forget grocery list add-er!
          • Yeul3 days ago
            I will never talk to a computer until they are sentient.
        • Gothmog693 days ago
          What makes it crappy? I find it really good at voice detection
      • manuelisimo3 days ago
        so, yeah, but not enough?
    • mrtksn3 days ago
      IMHO you are not supposed to fake your core value proposition when taking money. If the AI part had been an implementation detail, it probably wouldn't have been a problem.
    • drewda3 days ago
      "Do things that don't scale" to quote Sir PG.
  • felineflock3 days ago
    Not to be confused with builder.io: the poor founder has been posting "FOR THE LAST TIME GUYS THIS IS A DIFFERENT COMPANY" these days.
    • seydor3 days ago
      Actually, he should buy the bankrupt name and become the same company.
      • nikcub3 days ago
        I expect the builder.ai story will break into the mainstream via a book or documentary. There are some insane details, it's the first large-scale AI-hype failure (which people are hungry to get the details on), and some big names are involved.

        Another failure of due diligence. I really wonder how high-profile investors pour hundreds of millions into a company without doing something simple like ordering an app using a burner account.

        • ethbr13 days ago
          > I really wonder how high-profile investors pour hundreds of millions into a co without doing something simple like ordering an app using a burner account.

          You'd think that if you're investing $1M+, there's budget for at least getting an intern / assistant to do that.

          • pyman3 days ago
            Microsoft invested £250M in Inflection AI, £250M in Builder.ai, and has backed several other companies working on LLMs. They’ve been placing strategic bets across the AI space, but only a few of those companies actually had the talent, infrastructure, and funding needed to build real models.

            The VP of AI was Craig Saunders, the same person who helped create Amazon Alexa. The problem is, they ran out of money. $500 million sounds like a lot, but it's not even close to what you need to build and train a real LLM. You need billions. Most people just don't realise that.

            See: https://www.businesswire.com/news/home/20240611122778/en/Bui...

            • trilbyglens3 days ago
              This is why I think AI is basically the death of startups as we know them. Only big players can even take a swing. No more underdog garage startups, unless you're just downstream getting dirty bathwater from the big boys.

              AI all around is purely about consolidation of power and money. It's bad for workers and ultimately probably bad for the startup world and competition more broadly.

              • pyman3 days ago
                I agree. The infra side is dominated by VCs and big players. And the data is in the hands of regulators, who are looking the other way.
            • ricardobeat3 days ago
              DeepSeek cost just over $5M to train. StarCoder cost around $1M; there's no info for StarCoder2, but it's unlikely to be more than a few million. The idea of spending billions on training is OpenAI trying to build a moat that might not actually exist.
              • pyman3 days ago
                These architectures didn't exist last year. The Chinese are innovating thanks to massive government backing, access to talent, and a clear focus on winning the AI race.
                • ricardobeat3 days ago
                  StarCoder was released in 2023 by French/American companies, and there were other coding models before it.

                  That was right around the time this company had a new $250M funding round, so lack of resources to invest in actual AI is a terrible excuse.

                  • pyman3 days ago
                    StarCoder is an open-source LLM for code, not text. Builder.ai told investors they were building a virtual assistant called "Natasha", not a code assistant.

                    I'm just telling you what I read online. Builder.ai wasn't competing with GitHub Copilot, Cody, or CodeWhisperer. Those are code assistants for developers. Builder.ai was building a virtual assistant for customers, meant to "talk" to clients, gather requirements, and automate parts of the build process. Very different space.

                    And like I said before, creating a dedicated, pre-trained foundation model is expensive. Not to mention a full LLM.

                    • ricardobeat2 days ago
                      The point is that one doesn't need billions to create useful models — there is no reason they would need new foundation models in the first place — and that 'running out of money' is unlikely to have been their main problem; $250M should be more than enough to create an AI website builder.
                      • pyman2 days ago
                        I get what you are saying, but you're missing the most important piece of the puzzle: the data.

                        Everyone talks about models and infrastructure, but without the right data, you've got nothing. And that's where the biggest hidden cost is.

                        According to the company's own website, they were creating the data themselves. Labelled datasets, transcripts, customer conversations, documents, and more. That's where the real money goes. Millions and millions.

          • SteveNuts3 days ago
            >You'd think that if you're investing $1M+, there's budget for at least getting an intern / assistant to do that.

            Or having an AI Agent do it...

        • bobthepanda3 days ago
          Maybe this is the first large solely-AI failure, but algorithms and AIs have done lots of damage before: there have been flash crashes on Wall St, Zillow lost $1B using an algorithm to flip houses, Klarna is circling the drain after hyping up AI, etc.
    • noworriesnate2 days ago
      Yeah, thanks for calling this out. I've been following builder.io for a while, and seeing builder.ai recently made me think they had possibly pivoted, because builder.io has always been in the code generation / design-to-code / form-building space as far as I'm aware.
    • seanp2k23 days ago
      Opening up gTLDs was a mistake.
      • mkl3 days ago
        Both those are ccTLDs.
  • rdtsc3 days ago
    > Linas Beliūnas, Director of the financial company Zero Hash, recently exposed that Builder.ai lacked true AI, instead utilising a group of Indian developers who were merely pretending to be bots writing code.

    They probably had to train people to talk like ChatGPT.

    Step 0: Make sure you have an em dash shortcut on your keyboard and use that as often as possible.

    Step 1: Be extremely polite and apologize profusely.

    Step 2: ...

    • TZubiri3 days ago
      Probably they just wrote code and that was fed into an LLM which LLMified the responses.
    • zingababba3 days ago
      Step 2: Do the needful
  • koakuma-chan3 days ago
    Oh no, Qwik was my favourite JavaScript framework.