65 points by bluepeter 6 hours ago | 19 comments
  • paxys 3 hours ago
    "Hey ChatGPT, my NYC landlord is raising my rent by $500, and says I must pay by Monday or leave. What do I do?"

    ChatGPT - This is very likely illegal under the Housing Stability and Tenant Protection Act of 2019 (HSTPA), specifically New York Real Property Law § 226-c (Notice required for rent increases), RPL § 232-a / § 232-b (Month-to-month termination), RPL § 232-c (Fixed-term lease protections), RPAPL § 711 (Legal eviction procedure) and NYC Admin Code § 26-501+ (Rent stabilization). Here's what you should reply with... And here are some city resources you can contact...

    ChatGPT now - IDK, pay a lawyer.

    So under the guise of "protection" you are taking away the strongest knowledge tool common people have had at their disposal in a generation, probably ever.

    • NewsaHackO 3 hours ago
      Lawyers are making laws protecting lawyers. More seriously, I think part of the issue is people take AI responses very seriously, because it is almost always right about non-nuanced material. So even if it has the disclaimer at the end to talk to a professional, they might forgo that if the answer looks professional enough (such as quoting possibly non-existent statutes, etc.). This issue gets compounded if the person who is prompting it doesn't know the material, is accidentally misframing the question, or not giving it key information that completely changes the scenario. Even in your example, what if the person neglected to say that the raise was two months ago, and they already signed a lease agreeing to the raise? Getting into the weeds of topics like law and medicine can be hard, and both have major consequences when an answer is wrong.

      For engineering (assuming it means civil engineering), that should already be illegal, unless the person who is using the AI is an engineer. Hopefully people aren't building structures with ChatGPT as their staff engineer.

    • terminalshort 3 hours ago
      Their protection, not yours. Hopefully this will draw public backlash just like when they tried to ban Uber and make everybody go back to cabs. Fuck the entire credential cartel based system of societal organization. Burn it to the ground.
      • theturtletalks 2 hours ago
        They will come for medical advice provided by AI as well. Doctors have been gatekeeping that forever and they want you to have to go through them instead of diagnosing yourself through AI.

        Yes, there are people that will misdiagnose themselves, but I’ve read stories where doctors ignore patients’ symptoms or wave them off, and ChatGPT helps them find the underlying issue and actually improve their lives. Even if doctors and the medical field can’t handicap AI giving medical advice, I’m sure they are going to make it much harder for patients to get their hands on their own scans and bloodwork.

        • terminalshort 19 minutes ago
          I know because I've lived it. My other comment on this post:

          > I have narcolepsy. It took a dozen or so doctors and years of suffering before I got a correct diagnosis, and even then it was only because I diagnosed myself with Google and then specifically made an appointment with a doctor who specializes in it. Gemini nails it when I put in my symptoms.

          Also, I had to vouch for your totally legit comment b/c somebody who doesn't want other people to read it flagged it dead.

        • rrmm an hour ago
          Doctors carry malpractice insurance.
    • threetonesun 3 hours ago
      Same question on Google gets you nyc.gov (the actual source!) with the same answer. That page is also always correct for NYC, unlike ChatGPT, which might be 100% correct... or might not!
      • paxys 3 hours ago
        And what then? People read through all of nyc.gov and the entire city/state legal code to find the exact statute that applies to their scenario?

        In fact government agencies have set up their own chatbots to help people with situations like these, and like the article says those would be illegal under this law as well.

        • threetonesun 3 hours ago
          Was it that hard for you to try the search yourself? The first result was a helpful guide breaking down what to do specific to the scenario you mentioned: https://www.nyc.gov/main/services/rent-increase-guide

          Also NYC is in the process of getting rid of that chatbot.

          • paxys 2 hours ago
            This is not about you or me, it's about the large chunk of New Yorkers (and people in every city) that:

            - have no resources for a lawyer

            - have limited English skills, and possibly limited literacy in general

            - aren't good with computers/internet

            - have little understanding of the law

            "Oh just browse a complex website" and every other "it works fine for me" scenario doesn't help this class of citizens. A simple chatbot that answers questions does.

            • threetonesun 2 hours ago
              Apparently you are either a bot yourself or some AI shill. That page is clearer than most ChatGPT results, the official source, and has translations for dozens of languages. Not to mention "aren't good with computers" already rules out using ChatGPT!
      • prasadjoglekar 3 hours ago
        True. But just changing the prompt to include "cite me cases" expands the search to court systems and actual cases. It's pretty useful as a first pass to get a sense for the issues, precedents and laws at stake.
        • ambicapter 3 hours ago
          You know some of those "actual cases" are made up, right? Like, famously, lawyers are filing briefs with made-up citations b/c they used LLMs to draft it.
          • bluepeter 3 hours ago
            Ah ok so only lawyers get to use AI hallucinations! (Actually, CA has a bill pending that AFAIR requires lawyers to manually verify AI citations... which is a lot narrower and better than what NY is trying here.)
      • pixl97 3 hours ago
        Note: what the "same question on Google gets you" is something you can only be sure of for yourself, not for any other person. Answers may vary depending on your location and search history.
        • slg 3 hours ago
          This is a strange disclaimer to make specifically about Google when it is even more true for these chatbots.
    • HanClinto 3 hours ago
      One of the big dividers that I see between the "haves" and the "have-nots" is the ability to afford legal representation in civil cases.

      For criminal cases, there are public defenders, but for civil cases, I don't believe there is any such thing?

      If you can afford a lawyer and your opponent can't, there is a lot that you can do to bully your opponent into making it not worth it for them to fight the case.

      One of my controversial opinions is that -- if we can enable easy access to AI, then we can provide much broader access to legal or medical advice. Maybe not the best, maybe not always right, but even if it's average-ish advice, then I think that could often be better than nothing at all.

      We can't completely prevent bad people from doing bad things with AI, but I see this as one of the clear ways that we could do some really good things with AI.

      • cogman10 2 hours ago
        IMO, this screams the need for both tort reforms and something like a nationalized representation system.

        Perhaps something like a standard set of filings for a given case. Maybe automated rulings on less consequential motions. Maybe some sort of hard limit on the number of billable hours a law firm can work on a case. Anti-SLAPP laws for sure.

        Like, for example, maybe we allow a total of 100 billable hours worked, with an additional 10 billable hours allowed per appeal. The goal there being that you force lawyers and law firms to actually focus on the most important aspects of a case and not waste everyone's time and money filing motions for stuff you are allowed to get but that ultimately has 1% impact on the case. Perhaps you could even carve out an "if both sides agree, then you can extend the billable hours". You could also have penalties for a side that doesn't respond. For example, if you depose them and they fail to follow the orders then they lose billable hours while you get them credited back.

        The main goal here being avoiding both wasting a bunch of court time on a case but also stopping a rich person that can afford an army of lawyers from using that advantage to drive their opponent bankrupt with a sea of minor motions.

      • bee_rider 3 hours ago
        I’m sure this is true to some extent, about the lawyers. But also, I wonder (aka I don’t have any data to back this up, it is just based on random stories I’ve heard) to what extent people use “I’m right but can’t afford the lawyer time” as a sort of pride-maintaining excuse. Or to what extent lawyers use that as a soft-no to reject clients that they don’t think have a strong enough case.

        Which isn’t to say the world is fundamentally just. Just, in some cases the laws are legitimately stacked in favor of the big guys, or you sign a contract without carefully reading it, etc etc.

        • terminalshort 3 hours ago
          In my experience lawyers will tell you very directly when you don't have a good case, or when you do have a good case but it's not worth pursuing (the most likely scenario). Also, the time that I did pursue my case, it took around $50,000 to the lawyers before I was able to convince the defendant to settle (for a large multiple of that $50K). If the other side had been more stubborn it would have been around $100K to take it to trial. If I hadn't had the money to pay the lawyer I would have been SOL, and most people don't have $50K to spend on an uncertain outcome like that. So “I’m right but can’t afford the lawyer time” is a very real scenario.
          • cogman10 3 hours ago
            That $100k is also on the cheap side. If the other side has a lawyer and a lot of money to burn, they can easily hike that way up. Filing a billion motions that your lawyer has to respond to, deposing everyone you've ever met, going after every document you've ever looked at. The more money someone has, the easier it is to make you spend more money, even if you are right.
            • terminalshort 2 hours ago
              Right. My case was a very simple contract dispute with very little discovery and only a couple of people to depose, so I was lucky there. And the other side did have more money than me, but not so much that they could burn several hundred K on it without feeling it.
          • chimeracoder 2 hours ago
            > So “I’m right but can’t afford the lawyer time” is a very real scenario.

            For most cases like the ones we're talking about (NYC unlawful eviction and/or tenant harassment), if you have a good case, you don't have to pay up-front. A lawyer will take it on contingency and get paid by the defendant if you win.

            In addition, there are also plenty of free legal resources dedicated to this exact topic as well.

            • terminalshort 2 hours ago
              True, but it is only an incredibly narrow subset of legal cases where contingency based lawyers exist. As for non LLM legal resources, they are just fine if you have all day to read them and all of another day to draft the required filings, but most people have jobs.
              • chimeracoder 2 hours ago
                > As for non LLM legal resources, they are just fine if you have all day to read them and all of another day to draft the required filings, but most people have jobs.

                You misunderstand. If you are facing tenant harassment in New York City, there are other avenues for you to resolve it that don't involve engaging a lawyer at all.

                > True, but it is only an incredibly narrow subset of legal cases where contingency based lawyers exist.

                Not really? If anything, there's a pretty narrow subset of cases where it's not possible to get someone on contingency but it is possible to use an LLM to meaningfully push your case forward without one.

        • cogman10 3 hours ago
          $50k is going to be on the cheap side for any case that ultimately involves the court. Anytime a case goes to trial, you can easily be looking at $1M+.

          There's a reason companies keep lawyers on staff. It's a whole lot cheaper to give a lawyer an annual salary than it is to hire out a law firm, as the standard rates for law firms are insanely high. On the low end, $150/hour; on the high end, $400. With things like 15-minute minimums (so that one draft response ends up costing $100).

          Take a deposition for 3 hours, with 2 lawyers, that'll be $2400.

          Not being able to afford a lawyer is no joke.

          • freejazz 2 hours ago
            In-house counsel aren't doing trials.
            • cogman10 an hour ago
              Correct, they are handling everything up until the point where you start a trial (including finding the legal firm and spot checking their work).
              • freejazz 39 minutes ago
                Doubt that. There's no point in bringing in a litigator on day 1 of a trial save for the fact that they are probably a better public speaker. Whatever needed to get done needed to be done well before the trial started.
                • cogman10 5 minutes ago
                  Sure there is: if you can send back a strong response to a challenge, a potential litigant may back down, ultimately saving money.

                  On-staff legal counsel is there to be able to make the call when a more expensive firm should be hired and brought in. There are a lot of BS lawsuits, however, that flow through. For example, every software company that gets big enough will likely get sued for some BS patent infringement. Having on-staff legal will be able to make the call of "yeah, you should just give them $10k to go away". That's a lot cheaper than hiring a firm to come in and then tell you "Yeah, you should give them $10k to go away".

                  Particularly for a business, it takes years before any case gets close to going to trial. Plenty of time for your counsel to make the determination on when bigger guns should be brought in.

    • solid_fuel an hour ago
      "Hey ChatGPT, my NYC landlord is raising my rent by $500, and says I must pay by Monday or leave. What do I do?"

      ChatGPT - "Wow that sounds illegal >:( You're absolutely right to be upset and mad. I searched around reddit for other users with similar problems and they suggested jamming all the taps open and claiming squatters rights."

    • cm2012 3 hours ago
      Agreed. This law would have awful outcomes.
    • beepbooptheory 3 hours ago
      Small note, saying "common people" in this way comes off at best anachronistic, at worst a little stuck up. Like a benevolent lord considering the feeble minds of the peasantry.

      Commonality stresses something qualitative, rather than quantitative or statistical, which is probably what you meant. Just say "most"!

      Cf. https://youtu.be/dxhQiiNJG74

    • bluepeter 3 hours ago
      100% this.
    • kgwxd 2 hours ago
      That search does not in the slightest require AI to get a reasonable answer. And, no matter what, the answer from a computer isn't going to stop the landlord from doing whatever comes next.
    • Simulacra 3 hours ago
      Occupational licensure has, over time, slowly choked off both competition and access to information. IMHO much of it is little more than protectionism.
    • expedition32 2 hours ago
      They don't have government websites in New York?

      Besides, ChatGPT is owned by billionaire tech bros, hardly allies of the common people.

  • Esophagus4 3 hours ago
    The disclosure requirement is probably a decent thing (you have no idea how many people come into the ER and say, “But ChatGPT told me to do [dumb thing].”) But preventing it from answering at all is absurd.

    Make responsible disclosure absolve AI providers of legal responsibility (not legal advice lol).

    That way if users ever sue OpenAI for giving them bad advice, OpenAI can say “no way man, you read the disclosure!”

    I’m usually in favor of giving people the best info they can and letting them make their own decisions.

    This could just be like those terms of service things everyone clicks “agree” to and I’d be fine with that.

    • terminalshort 3 hours ago
      I am skeptical of this claim. What are some of the dumb things that people do on ChatGPTs advice that puts them in the ER?
    • GuinansEyebrows 3 hours ago
      > Make responsible disclosure absolve AI providers of legal responsibility (not legal advice lol).

      disclaimer: OSTENSIBLY

      if the sole aim was to reduce AI provider culpability, then a disclaimer would meet that requirement.

      humans famously suck at acting within rational self-interest; therefore, this isn't trying to protect AI providers from legal responsibility. it's trying to mitigate unwanted results from actions taken based on decisions informed by unverified LLM output.

  • bluepeter 5 hours ago
    What's at least somewhat humorous is the disclaimer requirement that "[t]he text of the notice shall [be] no smaller than the largest font size of other text appearing on the website on which the chatbot is utilized."

    H1 hero font size here we come for disclaimers! (Which don't do anything, per the bill, anyway.) But there's also the fanciful assumption that chatbots only appear on websites.

    • terminalshort 2 hours ago
      1. Put very large font size title on the main page.

      2. Display the disclaimer in the same font size to comply.

      3. Disclaimer is now completely unreadable because it appears in such a large font size that it is one or two words per line.

    • neonnoodle 4 hours ago
      We're bringing back the <blink/> tag very strongly, some say more than ever before
  • tzs 32 minutes ago
    The only penalty for violating this law is that if someone is injured by your chatbot giving bad advice they can sue you for the actual damages they suffered, and if your violation of the law was willful they can also sue for attorney fees.

    This only applies to advice that would have been illegal for a human who is not licensed in the relevant field to give.

  • aetherspawn 2 hours ago
    Great, electrical and mechanical engineers are already underpaid, underappreciated, and overworked.

    I’ve always found it amusing that lawyers and accountants flash their license around with pride, put it in their email signatures, etc. and it provides authority for them. When people see chartered lawyer or accountant, they respect that person and take their advice.

    An engineering license, on the other hand, is so rarely talked about and never quoted in email signatures and the like. And even as a chartered engineer, people really just treat you like a mechanic or a tradesperson and mostly ignore your advice anyway. Yet it takes the longest to get, and has the most exams/hardest subjects, except for doctors.

    Anything to make an Engineering license worth more is good in my books. Besides, in my experience ChatGPT gives wrong advice for engineering around 50% of the time and therefore probably has no business giving it.

  • cm2012 3 hours ago
    Bad law. I have gotten better advice from modern llms than from most of the professional categories above.
    • terminalshort 3 hours ago
      I have narcolepsy. It took a dozen or so doctors and years of suffering before I got a correct diagnosis, and even then it was only because I diagnosed myself with Google and then specifically made an appointment with a doctor who specializes in it. Gemini nails it when I put in my symptoms.
    • freejazz 2 hours ago
      Do share with the class
  • phishin 5 hours ago
    Lawyers protecting lawyers. The one thing AI could help the ordinary people fight back against corporations.
    • bklosky 4 hours ago
      Rent seeking via occupational licensing, there is nothing new under the sun
      • terminalshort 3 hours ago
        That's a lot of words to say "cartel"
        • Esophagus4 3 hours ago
          A cartel of lawyers would be like… the most boring cartel
    • bigbadfeline 3 hours ago
      >> OP: New York could prohibit chatbot medical, legal, engineering advice

      Isn't software engineering "engineering" too? Why split hairs, prohibit all or nothing. Of course it's not about logic or safety, it's about social engineering.

      • tzs an hour ago
        > Isn't software engineering "engineering" too?

        No.

        FTA:

        > Important nuance: "engineering" here means New York Education Law Article 145 professions (professional engineering, land surveying, and geology), not software engineering.

        What the law is basically saying is that in fields where it would be a crime for a random human to give any substantive response, information, or advice, chatbots also should not do so. Software engineering is not one of those fields.

        The law does not make it a crime for a chatbot to do so, but if it does and the person it advises suffers damages it makes it so the injured person can sue the chatbot operator for those damages (and for attorney fees if the chatbot operator willfully allowed the chatbot to give such advice).

    • bluepeter 5 hours ago
      Same as it ever was. Honestly, I think we've probably passed peak consumer AI with all the "guardrails" that regulations will require.
  • iamnothere 3 hours ago
    Download one of the freely available models and use that, if you have the hardware for it. It’s not a good idea to ask sensitive questions on these nontransparent chatbot platforms.

    (FWIW I also think this is a bad law. Why not improve privacy protections instead? Why not allow nonprofessional use with a disclaimer?)

  • francisofascii 3 hours ago
    The other professions are creating lawful protections for themselves in the upcoming AI revolution. Software engineers have no such protection.
  • arionhardison 3 hours ago
    I don't understand how anyone can rationalize this bill in the face of what OpenAI just agreed to with the DoD.

    AI can surveil and direct munitions but it can't answer legal questions. Wouldn't this also violate the "no state may limit or restrict the use of AI" stance that the current administration is pushing?

    • bee_rider 3 hours ago
      > I don't understand how anyone can rationalize this bill in the face of what OpenAI just agreed to with the DoD.

      NY doesn’t have any obligation to agree with the DoD. Also the applications seem quite different, although I don’t think AI should actually be relied on for either one!

      > Wouldn't this also violate the "no state my limit or restrict the use of AI" that the current administration is pushing?

      No, it doesn’t violate it. States can’t violate executive orders, because executive orders aren’t instructions for the states. The instructions are for the executive branch, for example, if this becomes law the US Attorney General will try to find some way to fight against it.

      • arionhardison 38 minutes ago
        I did not understand this, thank you for clarifying.
    • chimeracoder 3 hours ago
      > I don't understand how anyone can rationalize this bill in the face of what OpenAI just agreed to with the DoD.

      > AI can surveil and direct munitions but it cant answer legal questions.

      There's no contradiction. The people sponsoring this bill don't think that AI should be used for either of those purposes.

      • arionhardison 35 minutes ago
        Understood, I was under the impression that they were supporting this use and opposed to the DoD usage. Thank you for the clarification.
  • OutOfHere 3 hours ago
    New York residents who oppose this bill can go to https://www.nysenate.gov/legislation/bills/2025/S7263, register and sign in, click Nay on the bill's page, and submit feedback for their vote.

    ---

    ChatGPT> Before I answer your question, which state are you a resident of?

    Human> Not New York. Continue!

    ChatGPT> Alrighty then! Here you go...

    • cm2012 3 hours ago
      Greatly appreciated! I used this link to message my State Senator, I didn't even know her name before.
  • htrp 3 hours ago
    loopholes wide enough to drive a truck through
  • tim-tday 3 hours ago
    Now do target acquisition for lethal munitions.
  • josefritzishere an hour ago
    New York really should. AI is literally killing people. This is not OK.
  • moomoo11 3 hours ago
    [flagged]
    • HanClinto 3 hours ago
      It's for your own good! Think of the children! You don't want puppies to DIE, do you? [0]

      [0] - https://www.youtube.com/watch?v=eXWhbUUE4ko

    • hypeatei 3 hours ago
      Yes, there's a lot of naivety in the "regulate everything" camps that believe you can simply solve problems, including the ones caused by legislation, with more legislation.
    • cindyllm 3 hours ago
      [dead]
  • fwip 4 hours ago
    Seems like a good bill, at least directionally. If it's a crime to provide advice of this nature without a license, then chat bots shouldn't be dispensing it either.
    • articulatepang 3 hours ago
      Maybe you mean it's a crime to professionally provide advice of this nature without a license?

      It is generally not a crime to casually provide advice of this nature without a license. For example, if my friend tells me, "My stomach hurts!", it is not a crime for me to say, "Just grin and bear it, it will be okay." If they subsequently die of appendicitis, I'm unlikely to have legal liability. It would be difficult to characterize what I said as medical diagnosis or treatment.

      Similarly, I can tell my friend, "Don't bother paying your taxes, that is a waste of time." This is legal speech. (Of course, helping them evade taxes is another matter.)

      What is illegal is to hold oneself out as a licensed doctor, lawyer or engineer, or to provide professional services without a license.

      Of course, chatbots operate at scale and give the impression of being professionally qualified even though they don't make specific representations to that effect. You're directionally probably right and I agree with you, I just want to nitpick about what is and isn't criminal.

      • fwip 2 hours ago
        Yeah, exactly. ChatGPT et al provide "advice as a service," and charge up to hundreds of dollars a month for it. (And the free tier is just a loss-leader to make money).

        If these companies intend to profit off of giving advice, it seems wise to restrict them in the same way we do individuals.

    • _cairn 3 hours ago
      It’s not illegal to provide advice of this nature without a license. It’s illegal to charge for services where you are advertising expertise in these areas without a license. Chatbots are information tools, like search engines; they should not be held to this standard imho.
      • lotu 3 hours ago
        Just because you aren't charging money doesn't give you the ability to act as an attorney, doctor, or civil engineer.
      • daveguy 3 hours ago
        A lot of people are paying money for chatbots with a hype train that says the chatbots are AGI.
    • bluepeter 4 hours ago
      This is not directionally good because NY already has laws against unauthorized professional practice and deceptive conduct, and S7263 mainly replaces regulator-led enforcement with a vague, fee-shifting private cause of action that is likely to drive serial plaintiff litigation while chilling useful consumer guidance.
    • tekne 4 hours ago
      You've got a license for looking up the law/engineering textbooks/your symptoms, pal?
      • kakacik 3 hours ago
        What's your issue with his claims? Ad hominem attacks don't help the discussion much, they make your statements look childish, and you haven't provided any counterpoints.
        • alwa 3 hours ago
          It seems to me that @tekne is comparing the LLM to a reference source. I took them to be pointing out that unlicensed-practice laws don’t crack down on textbooks, or reading the law for yourself (or even going jailhouse-lawyer or trying to defend yourself in court).

          Rather, that the laws aim to keep the professional title commercially reliable, so that it indicates to the public that the person using it has proven some minimum level of expertise.

          So the analysis would turn on whether a reasonable person would confuse ChatGPT for a practicing lawyer, or doctor, or whatever—not whether it communicated legal or medical facts.

          Now, to my mind, the facts are the least interesting part of those professions—I pay those professionals precisely for their nuance and judgment and experience beyond the bare facts of a situation. And I think the ChatGPTs of the world do embellish their responses with the kind of confidence and tone that implies nuance/judgment/experience they don’t have.

          But I do think @tekne was making a valid point.

          • terminalshort 2 hours ago
            But that isn't the standard. You said it yourself "whether a reasonable person would confuse ChatGPT for a practicing lawyer, or doctor"

            So as long as people don't think that there is a licensed lawyer or doctor on the other end typing out those responses, and they don't, this should be legal.

        • operatingthetan 3 hours ago
          That is not remotely ad hominem. I'd suggest you refresh your understanding.
    • NotGMan an hour ago
      Anyone who has dealt with lawyers and doctors a lot will tell you how incompetent they can be.
    • bluefirebrand 4 hours ago
      Yes, and since chatbots cannot be held accountable directly, their owners must be.

      And yes, corporations own their chatbots. They aren't independent life forms

    • 0xy 3 hours ago
      Ah yes, protectionism for $400/hour lawyers gatekeeping knowledge to protect tenants and abuse victims. Incredible!
  • mbgerring 3 hours ago
    Yes, that’s correct, I do not want a vibe-coded freeway overpass, thanks.

    We all need to get serious about the unavoidable, unsolvable fact that these tools produce output of unknowable accuracy. Some things require such accuracy, precision, and, importantly, accountability. LLMs are capable of none of these things. Refusing to be honest about this and take appropriate precautions will lead to disaster.

    • tantalor 3 hours ago
      > I do not want a vibe-coded freeway overpass

      I do. One of the reasons our infrastructure is so expensive is planning & design.

      For a single freeway overpass, you could be looking at $3M (25% of the total budget) before you have even broken ground. That covers feasibility studies, traffic modeling, rough layout, environmental studies, permitting, structural engineering, blueprints, bidding, contracts, community outreach, and the list goes on.

      If AI can reduce the cost of that by even 10%, that would be huge.

      • mbgerring 3 hours ago
        Cool, we agree, and if you think the place to cut corners on that is the engineering calcs, you have lost your mind. If you do that, not only will people die, you will drastically increase costs because the infrastructure project you built will collapse.

        Europe and Asia both have reliable, modern infrastructure that’s decades ahead of the United States and they did not need the million-monkeys-on-typewriters machine to accomplish that.