ChatGPT - This is very likely illegal under the Housing Stability and Tenant Protection Act of 2019 (HSTPA), specifically New York Real Property Law § 226-c (Notice required for rent increases), RPL § 232-a / § 232-b (Month-to-month termination), RPL § 232-c (Fixed-term lease protections), RPAPL § 711 (Legal eviction procedure), and NYC Admin Code § 26-501+ (Rent stabilization). Here's what you should reply with... And here are some city resources you can contact...
ChatGPT now - IDK, pay a lawyer.
So under the guise of "protection" you are taking away the strongest knowledge tool common people have had at their disposal in a generation, probably ever.
For engineering (assuming it means civil engineering), that should already be illegal, unless the person who is using the AI is an engineer. Hopefully people aren't building structures with ChatGPT as their staff engineer.
Yes, there are people who will misdiagnose themselves, but I've read stories where doctors ignore patients' symptoms or wave them off, and ChatGPT helps them find the underlying issue and actually improve their lives. Even if doctors and the medical field can't handicap AI's ability to give medical advice, I'm sure they are going to make it much harder for patients to get their hands on their own scans and bloodwork.
> I have narcolepsy. It took a dozen or so doctors and years of suffering before I got a correct diagnosis, and even then it was only because I diagnosed myself with Google and then specifically made an appointment with a doctor who specializes in it. Gemini nails it when I put in my symptoms.
Also, I had to vouch for your totally legit comment b/c somebody who doesn't want other people to read it flagged it dead.
In fact, government agencies have set up their own chatbots to help people with situations like these, and, like the article says, those would be illegal under this law as well.
Also NYC is in the process of getting rid of that chatbot.
- have no resources for a lawyer
- have limited English skills, and possibly limited literacy in general
- aren't good with computers/internet
- have little understanding of the law
"Oh just browse a complex website" and every other "it works fine for me" scenario doesn't help this class of citizens. A simple chatbot that answers questions does.
For criminal cases, there are public defenders, but for civil cases, I don't believe there is any such thing?
If you can afford a lawyer and your opponent can't, there is a lot that you can do to bully your opponent into making it not worth it for them to fight the case.
One of my controversial opinions is that if we can enable easy access to AI, then we can provide much broader access to legal or medical advice. Maybe not the best, maybe not always right, but even if it's average-ish advice, I think that's often better than nothing at all.
We can't completely prevent bad people from doing bad things with AI, but I see this as one of the clear ways that we could do some really good things with AI.
Perhaps something like a standard set of filings for a given case. Maybe automated rulings on less consequential motions. Maybe some sort of hard limit on the number of billable hours a law firm can work on a case. Anti-SLAPP laws for sure.
Like, for example, maybe we allow a total of 100 billable hours, with an additional 10 billable hours allowed per appeal. The goal is to force lawyers and law firms to focus on the most important aspects of a case instead of wasting everyone's time and money filing motions for stuff you're allowed to get but that ultimately has 1% impact on the case. Perhaps you could even carve out an "if both sides agree, you can extend the billable hours" provision. You could also have penalties for a side that doesn't respond: for example, if you depose them and they fail to follow the orders, they lose billable hours while you get them credited back.
The main goal is to avoid wasting a bunch of court time on a case while also stopping a rich person who can afford an army of lawyers from using that advantage to drive their opponent bankrupt with a sea of minor motions.
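To make the mechanics concrete, here's a rough sketch of that budget idea in code. Everything in it (the class name, the 100/10 numbers, the credit-back rule) is just the hypothetical from my comment above, not anything from an actual bill:

    // Hypothetical model of the proposed cap: 100 base billable hours,
    // +10 per appeal, and sanctions that shift hours between the sides.
    class HourBudget {
      private used = 0;
      constructor(private cap = 100) {}

      // Returns false if logging these hours would blow the budget.
      log(hours: number): boolean {
        if (this.used + hours > this.cap) return false;
        this.used += hours;
        return true;
      }

      // Each appeal extends the budget by 10 hours.
      grantAppeal(): void {
        this.cap += 10;
      }

      // Opponent ignored a deposition order: they lose hours off their
      // cap, and the same hours are credited back to this side.
      creditForNoncompliance(opponent: HourBudget, hours: number): void {
        opponent.cap = Math.max(opponent.used, opponent.cap - hours);
        this.used = Math.max(0, this.used - hours);
      }
    }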
Which isn't to say the world is fundamentally just. It's just that in some cases the laws are legitimately stacked in favor of the big guys, or you signed a contract without carefully reading it, etc.
For most cases like the ones we're talking about (NYC unlawful eviction and/or tenant harassment), if you have a good case, you don't have to pay up-front. A lawyer will take it on contingency and get paid by the defendant if you win.
In addition, there are plenty of free legal resources dedicated to this exact topic.
You misunderstand. If you are facing tenant harassment in New York City, there are other avenues for you to resolve it that don't involve engaging a lawyer at all.
> True, but it is only an incredibly narrow subset of legal cases where contingency based lawyers exist.
Not really? If anything, there's a pretty narrow subset of cases where it's not possible to get someone on contingency but it is possible to use an LLM to meaningfully push your case forward without one.
There's a reason companies keep lawyers on staff. It's a whole lot cheaper to pay a lawyer an annual salary than to hire an outside law firm, because standard law-firm rates are insanely high: $150/hour on the low end, $400 on the high end, with things like 15-minute minimums (so one draft response ends up costing $100).
Take a deposition for 3 hours with 2 lawyers and that'll be $2,400 (3 hours x 2 lawyers x $400/hour).
Not being able to afford a lawyer is no joke.
On-staff legal counsel is there to make the call on when a more expensive firm should be hired and brought in. A lot of BS lawsuits flow through, though. For example, every software company that gets big enough will likely get sued for some BS patent infringement. On-staff legal can make the call of "yeah, you should just give them $10k to go away." That's a lot cheaper than hiring a firm to come in and then tell you "yeah, you should give them $10k to go away."
Particularly for a business, it takes years before any case gets close to trial. That's plenty of time for your counsel to determine when the bigger guns should be brought in.
ChatGPT - "Wow that sounds illegal >:( You're absolutely right to be upset and mad. I searched around reddit for other users with similar problems and they suggested jamming all the taps open and claiming squatters rights."
Commonality stresses something qualitative, rather than quantitative or statistical, which is probably what you meant. Just say "most"!
Besides, ChatGPT is owned by billionaire tech bros, hardly allies of the common people.
Make responsible disclosure absolve AI providers of legal responsibility (not legal advice lol).
That way if users ever sue OpenAI for giving them bad advice, OpenAI can say “no way man, you read the disclosure!”
I’m usually in favor of giving people the best info they can and letting them make their own decisions.
This could just be like those terms of service things everyone clicks “agree” to and I’d be fine with that.
disclaimer: OSTENSIBLY
if the sole aim was to reduce AI provider culpability, then a disclaimer would meet that requirement.
humans famously suck at acting within rational self-interest; therefore, this isn't trying to protect AI providers from legal responsibility. it's trying to mitigate unwanted results from actions taken based on decisions informed by unverified LLM output.
H1 hero font size here we come for disclaimers! (Which don't do anything, per the bill, anyway.) But there's also the fanciful assumption that chatbots only appear on websites.
1. Your page's largest text is a giant H1 hero headline.
2. Display the disclaimer in the same font size to comply.
3. The disclaimer is now completely unreadable because it appears in such a large font size that only one or two words fit per line.
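For fun, a minimal sketch of that failure mode (the element IDs are hypothetical, and this assumes the bill pegs the disclaimer to the largest font on the page):

    // The page's largest text is a 64px marketing hero heading.
    const hero = document.querySelector<HTMLElement>("#hero-title")!;
    hero.style.fontSize = "64px";

    // The "compliant" disclaimer matches it exactly...
    const disclaimer = document.querySelector<HTMLElement>("#disclaimer")!;
    disclaimer.style.fontSize = getComputedStyle(hero).fontSize;
    // ...and now wraps at one or two words per line on any normal viewport.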
This only applies to advice that would have been illegal for a human who is not licensed in the relevant field to give.
I've always found it amusing that lawyers and accountants flash their licenses around with pride, put them in their email signatures, etc., and it lends them authority. When people see "chartered lawyer" or "chartered accountant", they respect that person and take their advice.
An engineering license, on the other hand, is so rarely talked about and never quoted in email signatures and the like. Even as a chartered engineer, people really just treat you like a mechanic or a tradesman and mostly ignore your advice anyway. Yet it takes the longest to get and has the most exams and hardest subjects of any profession except medicine.
Anything that makes an engineering license worth more is good in my book. Besides, in my experience ChatGPT gives wrong engineering advice around 50% of the time, so it probably has no business giving it.
Isn't software engineering "engineering" too? Why split hairs? Prohibit all or nothing. Of course it's not about logic or safety, it's about social engineering.
No.
FTA: > Important nuance: "engineering" here means New York Education Law Article 145 professions (professional engineering, land surveying, and geology), not software engineering.
What the law is basically saying is that in fields where it would be a crime for a random human to give any substantive response, information, or advice, chatbots also should not do so. Software engineering is not one of those fields.
The law does not make it a crime for a chatbot to do so, but if one does, and the person it advises suffers damages, the injured person can sue the chatbot operator for those damages (and for attorney's fees if the operator willfully allowed the chatbot to give such advice).
(FWIW I also think this is a bad law. Why not improve privacy protections instead? Why not allow nonprofessional use with a disclaimer?)
AI can surveil and direct munitions, but it can't answer legal questions. Wouldn't this also violate the "no state may limit or restrict the use of AI" policy that the current administration is pushing?
NY doesn't have any obligation to agree with the DoD. Also, the applications seem quite different, although I don't think AI should actually be relied on for either one!
> Wouldn't this also violate the "no state may limit or restrict the use of AI" policy that the current administration is pushing?
No, it doesn't violate it. States can't violate executive orders, because executive orders aren't instructions for the states. The instructions are for the executive branch; for example, if this becomes law, the US Attorney General will try to find some way to fight it.
> AI can surveil and direct munitions, but it can't answer legal questions.
There's no contradiction. The people sponsoring this bill don't think that AI should be used for either of those purposes.
---
ChatGPT> Before I answer your question, which state are you a resident of?
Human> Not New York. Continue!
ChatGPT> Alrighty then! Here you go...
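The joke holds up in code, too. Any gate built on self-reported residency is effectively a one-line no-op, since the model has no way to verify the claim (the function name and state code below are made up for illustration):

    // Hypothetical residency gate: trivially bypassed, because the only
    // input is whatever the user chooses to claim.
    function mayGiveRegulatedAdvice(selfReportedState: string): boolean {
      return selfReportedState !== "NY"; // "Not New York. Continue!"
    }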
It is generally not a crime to casually provide advice of this nature without a license. For example, if my friend tells me, "My stomach hurts!", it is not a crime for me to say, "Just grin and bear it, it will be okay." If they subsequently die of appendicitis, I'm unlikely to have legal liability. It would be difficult to characterize what I said as medical diagnosis or treatment.
Similarly, I can tell my friend, "Don't bother paying your taxes, that is a waste of time." This is legal speech. (Of course, helping them evade taxes is another matter.)
What is illegal is to hold oneself out as a licensed doctor, lawyer or engineer, or to provide professional services without a license.
Of course, chatbots operate at scale and give the impression of being professionally qualified even though they don't make specific representations to that effect. You're probably directionally right and I agree with you; I just want to nitpick about what is and isn't criminal.
If these companies intend to profit off of giving advice, it seems wise to restrict them in the same way we do individuals.
Rather, that the laws aim to keep the professional title commercially reliable, so that it indicates to the public that the person using it has proven some minimum level of expertise.
So the analysis would turn on whether a reasonable person would confuse ChatGPT for a practicing lawyer, or doctor, or whatever—not whether it communicated legal or medical facts.
Now, to my mind, the facts are the least interesting part of those professions—I pay those professionals precisely for their nuance and judgment and experience beyond the bare facts of a situation. And I think the ChatGPTs of the world do embellish their responses with the kind of confidence and tone that implies nuance/judgment/experience they don’t have.
But I do think @tekne was making a valid point.
So as long as people don't think that there is a licensed lawyer or doctor on the other end typing out those responses, and they don't, this should be legal.
And yes, corporations own their chatbots. They aren't independent life forms.
We all need to get serious about the unavoidable, unsolvable fact that these tools produce output of unknowable accuracy. Some things require such accuracy, precision, and, importantly, accountability. LLMs are capable of none of these things. Refusing to be honest about this and take appropriate precautions will lead to disaster.
I do. One of the reasons our infrastructure is so expensive is planning & design.
For a single freeway overpass, you could be looking at $3M (25% of the total budget, which implies roughly $12M overall) before you have even broken ground. That covers feasibility studies, traffic modeling, rough layout, environmental studies, permitting, structural engineering, blueprints, bidding, contracts, community outreach, and the list goes on.
If AI can reduce the cost of that by even 10%, roughly $300K saved per overpass, that would be huge.
Europe and Asia both have reliable, modern infrastructure that's decades ahead of the United States, and they did not need the million-monkeys-on-typewriters machine to accomplish that.