119 points by JumpCrisscross | 6 hours ago | 18 comments
  • burningion 4 hours ago
    The main point raised in the article is that these bots may void attorney client privileges.

    But the real danger with these IMO is that they're turning casual conversations into a permanent record, and one that will be completely discoverable in court, should the company get into trouble later.

    • coffeebeqn 4 hours ago
      Plus they are super inaccurate. Gemini gets one of its three bullets subtly or very majorly wrong almost every time. Just a few weeks ago Gemini said we’re rolling out our payment setup in Russia. You know, the place we have 20+ sanctions packages on? We were talking about France in the meeting.
      • operation_moose 4 hours ago
        We've found they're surprisingly good if everyone on the call is using a decent headset.

        The problems start when using conference room audio or someone is on their laptop mic. If they miss a word they never do unintelligible, they just start playing madlibs based on the rest of the sentence.

        We just went through a round of 100+ (non-sensitive) VoC interviews and they really cut down the workload of compiling all of the feedback. If the audio was a little shaky though, we pretty much had to throw away the transcripts and do them from scratch like we used to.

        • user_7832 4 hours ago
          > If they miss a word they never do unintelligible, they just start playing madlibs based on the rest of the sentence.

          Imo this is the single biggest flaw of LLMs. They're great at a lot of things, but not knowing when they're wrong (or when they don't have enough information to actually work with) is a critical weakness.

          IMO there's nothing structural about why they shouldn't be able to spot this and correct themselves - I suspect it's a training issue. But presumably bots that infer context and fill in the gaps rank better on what people like... at the cost of accuracy.

          • netdevphoenix 2 hours ago
            It's just a token predictor; what do you expect? What we need are tools that embrace that and ping the agent to validate or double-check what it just said. But the trade-off is that this might hamper its capabilities to some degree.
            • SlinkyOnStairs an hour ago
              > It's just a token predictor what do you expect?

              The point isn't that it's unexpected. It's that prior speech-to-text systems were much better about this particular failure mode: prone to spitting out entirely incorrect words, but not to rephrasing entire sentences.

              This is a particularly bad failure mode because people don't notice it.

              > What we need are tools that embrace that and ping the agent to validate what it just said or double check.

              This is not a problem that can be fixed by throwing more AI at it. It's a problem shared by all such systems, whether they're audio-text transformers or LLMs. Agentic review would just push the system further towards creating output that looks correct, but is not.

              LLM translation does the same: it yields more natural text, but generally not better translation. In several cases, especially "easy" translation between similar languages (e.g. within a language group like Germanic or Nordic), LLM-powered translation is notably worse than more primitive "word & phrase book" systems. It tends to change the meaning of the text in order to produce good grammar, whereas those older systems would give crude or grammatically incorrect translations that still retained the core meaning.

            • ffsm8 an hour ago
              While you're correct about what the audio models are - at least somewhat (they're not exactly like text-based LLMs) - you seem to brush his point away too quickly before fully exploring it.

              This is a solvable issue; the current models and harnesses just aren't built with that assumption - hence they do "best effort, guessing if unsure".

              Give it a few more months to years and things will likely settle how he pitched - at least in the context of note taking: only let it become "lore" if it didn't have to guess a word.

              Currently there is basically only one mode - and it's optimized for conversation. The note taking is just glued on with that functionality as the backbone, and that's probably not going to stay.

              • repelsteeltje an hour ago
                > Give it a few more months to years and things will likely settle how he pitched - at least in the context of note taking: only let it become "lore" if it didn't have to guess a word.

                I'm hesitant to concede even that. Like any computational linguistics problem, accuracy relies on coverage at all levels: from morphology, through syntax and semantics, to speech acts and world knowledge.

                I worked with state-of-the-art speech recognition in a healthcare setting. The model was specifically trained on a small set of languages, with emphasis on covering medical terminology.

                It worked great for conversations most of the time, but sometimes messed up very badly - for instance, when a patient would mention the name of a relative, a street address, or a phone number. Spelling out an email address would mess it up completely.

                It's just like when you're a horrible typist and rely on spell checking: the red squiggles are gone, but the story no longer makes sense. Or when you "autofix" a syntax error, but the meaning diverges from your intention.

                As the technology improves, the number of errors decreases, but the mistakes get more severe.

            • jghn an hour ago
              > what do you expect?

              If the prediction strength is below X, put an indicator that it couldn't make a valid prediction?

            • freejazz 7 minutes ago
              >It's just a token predictor what do you expect?

              Someone tell Altman

          • r_lee 3 hours ago
            I don't think it's a training issue; there's simply no inherent "I don't know" in the transformer architecture. Unless the input is something completely unknown, the nearest neighbor will be chosen, and that will be whatever sounds similar or is relevant, even if it causes a problem.
            • aspenmartin 3 hours ago
              It's not inherent in the transformer architecture - we do try to ingrain a sense of uncertainty, but it's difficult, not only technically but also philosophically/culturally. How confident do you want the model to be in its answer to “why did Rome fall”?

              Lots of tools in our toolbelts to do better uncertainty calibration, but it trades off against other capabilities, and it can actually be rather frustrating to interact with in agentic contexts, since the model will constantly need input from you or otherwise be indecisive and overly cautious. It's not technically a limitation of the transformer architecture, but it is more challenging to deal with than in other architectures/statistical paradigms.

              Like, you can maintain a belief state, generate conditioned on it, and train to ensure the belief state is stable and performant. But evals reward guessing at this point, and it's very, very hard to evaluate calibration in these open-ended contexts. We're slowly getting there, just not nearly as fast as other capabilities.

              • fluoridation 2 hours ago
                >How confident do you want the model to be in its answer to “why did Rome fall”?

                The confidence level can be any, as long as it's reported accurately often enough. "This is my conjecture, but", "I'm not completely sure, but", and "most historians agree that" are all perfectly valid ways to start a sentence, which LLMs never use. They state mathematical truth, general consensus, hotly debated stances, and total fabrication, with the exact same assertiveness.

            • feoren an hour ago
              The final output of the neural network part of an LLM is a vector with weights for every token, that is then usually softmaxed and picked from. Can we not quantify the uncertainty by looking at the distribution of weights of the top 10 options? Like we expect for a note-taking app that the top choice would be something like 98% certain, and if we see that the model gives a weight of 60% to "Russia" and 30% to "France", that's just not enough certainty to simply output "Russia". That's exactly when it should say "<uncertain>" or something instead.
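A minimal sketch of that gating idea in Python - assuming access to each decoding step's raw logits, and with `gated_decode`, the 0.9 threshold, and the `<uncertain>` marker all being illustrative choices rather than any real API:

```python
import math

def gated_decode(logits_per_step, threshold=0.9, marker="<uncertain>"):
    """Greedy decoding with a confidence gate.

    logits_per_step: list of dicts mapping candidate token -> raw logit.
    Emits the top token only when its softmax probability clears the
    threshold; otherwise emits an uncertainty marker instead of guessing.
    """
    out = []
    for logits in logits_per_step:
        # Numerically stable softmax over this step's candidates.
        m = max(logits.values())
        exps = {tok: math.exp(v - m) for tok, v in logits.items()}
        total = sum(exps.values())
        probs = {tok: e / total for tok, e in exps.items()}
        top_tok, top_p = max(probs.items(), key=lambda kv: kv[1])
        out.append(top_tok if top_p >= threshold else marker)
    return out
```

In the Russia/France example, a roughly 60/30 split between the two tokens falls well below a 0.9 gate, so the step would be flagged rather than confidently transcribed.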
            • user_7832 2 hours ago
              The thing is, if LLMs are stochastic parrots predicting the next word (aka a partially decent autocomplete), there's no reason they can't complete <specific question it can't answer> with "I don't know" - that's a perfectly valid sentence too.

              That's why I'm still cautiously optimistic about LLMs someday being good enough. I don't know if or when someone will manage to do it, but I'm hopeful.

          • moffkalast 2 hours ago
            It's a benchmark and eval issue. Guessing gets them the right result sometimes, so the models rank better on error rate than they otherwise would. We need benchmarks that penalize being wrong WAY more than saying "I don't know".

            Of course there's a secondary problem that the model may then overuse the unintelligible option, but that's a matter of training it properly against that eval.

            You could also try thresholding the output based on perplexity, to remove the parts the model is less sure about, but I don't think that's going to be super accurate.
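The perplexity-thresholding idea could be sketched like so - `filter_by_perplexity`, the segment format, and the cutoff of 8 are all hypothetical, and real transcription systems expose confidence in their own ways:

```python
import math

def filter_by_perplexity(segments, max_ppl=8.0):
    """Drop transcript segments the model was unsure about.

    segments: list of (text, token_logprobs) pairs, where token_logprobs
    are the natural-log probabilities the model assigned to its own
    output tokens. Perplexity = exp(-mean log-prob); lower is more
    confident. Shaky segments are replaced with a marker instead of a
    plausible-sounding guess.
    """
    kept = []
    for text, logprobs in segments:
        ppl = math.exp(-sum(logprobs) / len(logprobs))
        kept.append(text if ppl <= max_ppl else "[inaudible]")
    return kept
```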

            • user_7832 2 hours ago
              Yeah, I broadly agree with you. I've tried explicitly adding a prompt to "ask questions and clarify", and even fairly decent models like Gemini Pro (2.5 or 3) tend to ask questions for the sake of it.

              Which reminds me of another big issue with LLMs - they'll blindly do whatever you ask them to, without pushback. (Again, I miss 3.5/3.6-era Sonnet, which actually had half a spine. Fuck Anthropic for blindly chasing coding benchmarks at the cost of everything else.)

              I've engaged in several "CMVs" (or "tell me why X is bad") with LLMs, and very often it's clear they're just saying stuff to say it, giving terrible points on unjustifiable positions that collapse the moment I counter-argue even slightly rationally.

      • pjc50 4 hours ago
        Given how financial services can impose silent, inexplicable lifetime bans for using the wrong words in the "what is this transaction for" field, I'm wondering at what point the AI automatically reports people for sanctions violations based on a mishearing.
    • camdenreslink an hour ago
      The AI note summaries in meetings I'm in are frequently totally inaccurate. They are actually inaccurate in two ways: they fabricate things that were never said (but always kind of close to something that was said), and they emphasize the totally wrong thing (e.g. acting like the entire conversation was about one topic when that was just a very small part).

      I sincerely hope these aren't used in court.

      • rayiner an hour ago
        They will be discovered and used in litigation, and the results will be hilarious. Think about how much lawyers pick apart language (like statutes or the constitution) that was written deliberately by humans and subject to review and revision. Now we're going to have lawyers, e.g., seizing on word choice in AI notes that might have a sinister connotation when the original wording was innocuous.
    • stego-tech 3 hours ago
      This. The fact that LLMs can also amplify existing closed-set research means even smaller shops can now search through a flood of documents to find smoking guns or critical evidence much faster.

      I’ve been saying it since the mid-10s, but it’s worth repeating: data isn’t gold, it’s more like oxygen in a room in that the higher the concentration, the more likely it is to poison the inhabitants or explode with an errant spark (lawsuit).

      Collect only what's needed to perform the function, and store it only as long as necessary for compliance. Anything else is going to spook counsel.

      • mock-possum 2 hours ago
        What are you trying to get away with I wonder?
    • papageek 25 minutes ago
      Never write if you can speak; never speak if you can nod; never nod if you can wink. -Lomasney (has aged well it seems)
    • LanceH 3 hours ago
      > But the real danger with these IMO is that they're turning casual conversations into a permanent record, and one that will be completely discoverable in court, should the company get into trouble later.

      I would add that there is no guarantee they are correct, either.

      • mock-possum 2 hours ago
        You'd use a computer-generated transcript as a guide, not as proof - the proof is the recording of the person actually saying the thing, not the LLM's best guess at what it imagined the person saying.

        “At timestamp X, person Y said Z” says the robot, and then you dutifully scrub the audio to timestamp X to verify.

        • LanceH 2 hours ago
          Is audio always kept in addition to transcripts? (genuine question, I rarely record either)
    • Bombthecat 43 minutes ago
      Not only there

      Also social settings will change, when everything you say stays on record forever in every meeting...

    • infecto 2 hours ago
      The nuance here, too, is that concern about materials being discoverable does not mean the company is doing something illegal. Corporate law as it pertains to enforcement (from a US perspective) is a dance between the company and the current administration. When it comes to antitrust and related legislation, the equilibrium is shades of gray that shift between administrations, and sometimes within the same administration. Companies look to optimize their outcomes, and the government is optimizing not so much for legality as for whatever the current administration sets as its main concern.
    • yagizdagabak an hour ago
      my fear exactly. same with something like Meta glasses. and i feel like we have moved quickly from the regulatory problems to "'tis a fact of life"
    • watwut 3 hours ago
      Basically, it will be harder to hide illegal and unethical stuff companies routinely engage in.
      • nz 3 hours ago
        No, that would be a strict improvement. The AI note-takers can easily "mishear" or "misreport" non-existent illegal and unethical things. They also easily mess up numbers (which is a big problem, because a lot of decisions hinge on precise numbers - imagine inflating an inventory by an order of magnitude, and then imagine having to pay a tariff on something that never existed).

        I have a friend who works at a large-ish company that imports and manufactures things (in one of the clerical/quantitative professions). A few years back, they had the IT department go on a kind of "inquisition", wherein they forced employees to disable the summarization function that came with MS Teams, and threatened to fire them if they did not. The resistance to this demand was surprising -- most people are clueless about the cost of their own convenience. Worst of all, people would zone out of meetings, because the AI was producing summaries, which they would then never read.

        The effect of the technology was that it made meetings infinitely more expensive, because the supposed benefit of meetings was nullified by complacency, _and_ it made the meetings a liability (incorrectly summarized meetings, that could be used in the discovery process, sure, but could also be sold by MSFT as a kind of market-research-data to competitors in the space).

        Nothing illegal has to happen in these meetings at all, for this tech to cause an infinity of problems for the corporation. Every employee that uses these is effectively an unwitting spy. And if that is the case, then the meetings might as well be recorded and uploaded to YouTube (or whatever people watch these days)[1].

        [1]: Maybe this is the future. Which I am okay with, but only if the entire planet has to do it, and the penalties for not doing it are irrecoverably severe.

      • kjs3 2 hours ago
        "If you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him" - Cardinal Richelieu

        Be careful what you wish for. Particularly when it involves tech that often gets it very, very wrong.

      • triceratops 2 hours ago
        That's an argument for recording everyone on earth 24/7. Is that what you mean?
        • sdellis 2 hours ago
          With the level of surveillance and erosion of privacy, that is essentially what is happening. We all know that we are being watched and surveilled. There is no longer an "argument". Anything you say in public or private could potentially be used against you in the future.
          • triceratops 2 hours ago
            No, there's the potential of that happening; that isn't what actually happens. If everyone's phone were continuously recording and storing everything 24/7, we'd need much bigger batteries, for one thing.
        • flir 2 hours ago
          It'll just happen. Can't really fight technological progress.
          • sdellis 2 hours ago
            Actually, many people fight this kind of "progress". Just look at what is happening to Flock right now. True "technological progress" would be using technology to empower humans, not to exploit and subjugate them.
          • triceratops 2 hours ago
            Is it progress though?
      • chvid 3 hours ago
        Show me the man and I will show you the crime.

        Modernized. Industrial AI scale.

      • SecretDreams 3 hours ago
        Going to also be harder to hide completely legal, but not ideal stuff. Like randomly complaining about your boss to a colleague or casually discussing a feature you're stuck working on that you think is a bad idea.
        • derektank 2 hours ago
          >casually discussing a feature you're stuck working on that you think is a bad idea.

          I’ll be honest, this is something that I hope AI note taking tools capture and incorporate into summaries of the company’s status. Especially if they act as an intermediary without revealing the specific person who said it. There’s a lot of information latent within organizations that doesn’t get properly shared due to concerns of retaliation or simply embarrassment that would benefit everyone by being communicated sooner.

          • kjs3 2 hours ago
            The people supplying this technology explicitly want it to tell them what their serfs are doing. There will be no "honest but anonymous informing of upper management".
          • SecretDreams 2 hours ago
            That information is often intentionally not cascaded up the chain because the higher up you go, the more rigid the thinking gets - at least in my experience. Upstream doesn't want to hear the bad news or hear about how their idea is dumb. They want us to just do the bad idea and if the bad idea doesn't work out, they want to hang the ICs out to dry.

            Maybe some smaller shops are not like this, but the bigger your company is, the more you'll find this type of thinking to persist.

            In theory, I do like your idea - anonymously cascading feedback upstream. I just see no avenue for this to succeed in practice.

  • gwbas1c 2 hours ago
    Back when I was in college, in a fraternity, we always assumed that the phones were tapped. Specifically, we never spoke about alcohol or marijuana (now legal) on the phone.

    Even today, I generally assume that my phone could be tapped; even when talking with my trusted work colleagues, friends, and family. I'm extra careful about dirty jokes or "grey morality" in video conferences and email.

    The same applies to speaking with lawyers. You never know when some motivated asshole wants to twist your words out of context, and the possibility of a recording just enables that behavior.

    ---

    I know enough about security and encryption to know that unless I've exchanged keys physically with someone else, there is no real guarantee that someone hasn't compromised a certificate somewhere. (I.e., a "secure" connection on the internet is only secure enough for a credit card.)

    • skinfaxi an hour ago
      > Even today, I generally assume that my phone could be tapped; even when talking with my trusted work colleagues, friends, and family. I'm extra careful about dirty jokes or "grey morality" in video conferences and email.

      This is horrifying. Why do you feel the necessity to self-censor? What consequences do you anticipate?

      • BeetleB 9 minutes ago
        Not the person who said this, but it's easy: Grow up in a country where the government listening in is common (without any transparent due process), and it becomes second nature.

        And then add to that how easy it is to record phone conversations with today's phones (I've done it), it's easier on the brain to assume it's being recorded as opposed to wondering if it is.

        But yes, I don't care about my dirty jokes being recorded :-) Illegal activity? Sure. But I solve that problem by not doing illegal things.

      • Kirby64 26 minutes ago
        It's a good policy, generally. Treat anything written down, email, etc, as something that could become public later. Anything that could be recorded and saved for later can be used against you if it's taken the wrong way. A questionable joke could become an HR complaint, as an example.
      • jkingsman 33 minutes ago
        Adding on to this question, do you anticipate the same people capable of tapping phones to think less of you for a dirty joke? The people whose opinion of me would lower for something off-color and the people who possess the ability to wiretap me are a disjoint set lol.
        • dnnddidiej 28 minutes ago
          The point is it would be usable against you. But at the point you're wiretapped and it can be used in court, all bets are off - you're probably in so much trouble you may as well tell the joke!
          • jkingsman 15 minutes ago
            yeah exactly haha; the threat model implies a level of "you're hosed" that something private I'd say to a friend isn't moving the needle on.
      • some_random 18 minutes ago
        Have you missed the last decade and a half of people having their lives ruined by social media mobs for minor slights?
    • EvanAnderson an hour ago
      > Even today, I generally assume that my phone could be tapped; even when talking with my trusted work colleagues, friends, and family.

      Adding to that: If you live in a one-party consent state assume you're being recorded by any of the parties in a face-to-face conversation, too.

      Yeah-- it sucks that the world is this way. I deal with it. What I don't want to see are draconian controls on technology (which will ultimately be ineffective) in an attempt to put the genie back in the bottle.

  • samuelknight 35 minutes ago
    AI meeting notes are not transcripts. While they do cause an unprecedented amount of record creation (as the article notes), there are also challenges a defense can use. Note takers get small details wrong all the time; they are often making notes FOR someone, which biases what is documented; their prompting is opaque; and they can't be cross-examined. We will likely see situations where the note taker and a witness who participated in the meeting disagree.
  • atonse 2 hours ago
    This is where I think realtime transcription (or just-in-time transcription followed by deleting everything) will be the end state.

    Real-time transcription where the AI actually takes notes (instead of recording every word and keeping a dump of it somewhere) is especially appealing. Then there isn't any record of the raw sentences, and things that aren't relevant are immediately discarded without any written record.

    OpenAI's realtime whisper and other such models will become the default over time.
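The transcribe-then-discard flow could be sketched as below; `take_notes` and `is_relevant` are hypothetical stand-ins for whatever relevance judgment the model applies, not any real product's API:

```python
def take_notes(transcript_chunks, is_relevant):
    """Ephemeral note-taking: keep distilled notes, discard raw speech.

    transcript_chunks: iterable of raw transcribed sentences.
    is_relevant: predicate deciding what survives into the notes.
    Chunks that fail the predicate simply fall out of scope here and
    are never written anywhere, so no raw record accumulates.
    """
    notes = []
    for chunk in transcript_chunks:
        if is_relevant(chunk):
            notes.append(chunk)
    return notes
```

The design point is that the raw transcript only ever exists transiently inside the loop; only the retained notes persist as a record.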

  • redmaple892 19 minutes ago
    Surprised healthcare isn't called out specifically. AI note takers have exploded in popularity in the US.
  • rpaddock 3 hours ago
    Some companies want no records at all, see:

    "2028 – A Dystopian Story By Jack Ganssle":

    http://www.ganssle.com/articles/2028adystopianstory.htm

    Known as 'The Rule of 26', which is sometimes given as a reason NOT to keep engineering notebooks etc. By Federal Rule 26 you are guilty if you did not volunteer the records before they are requested, including any backups.

    From Cornell Law:

    LII, Federal Rules of Civil Procedure, Rule 26. Duty to Disclose; General Provisions Governing Discovery

    (a) Required Disclosures.

    (1) Initial Disclosure.

    (A) In General. Except as exempted by Rule 26(a)(1)(B) or as otherwise stipulated or ordered by the court, a party must, without awaiting a discovery request, provide to the other parties:

    (i) the name and, if known, the address and telephone number of each individual likely to have discoverable information—along with the subjects of that information—that the disclosing party may use to support its claims or defenses, unless the use would be solely for impeachment;

    (ii) a copy—or a description by category and location—of all documents, electronically stored information, and tangible things that the disclosing party has in its possession, custody, or control and may use to support its claims or defenses, unless the use would be solely for impeachment; …

    https://www.law.cornell.edu/rules/frcp/rule_26

    • djoldman 2 hours ago
      This was interesting and sent me down a research hole.

      General conclusion:

      Corporate litigation is mostly just a series of self-investigations so that both sides can learn what both sides actually know, given that neither side knows much about themselves OR the other side. At the same time both sides are trying to stop the other side from getting the judge to order them to do more investigating.

    • kjs3 an hour ago
      Much of my experience with corporate counsel is one of 2 extremes: "keep everything"[1] or "keep nothing". Keep everything, because then you can't be caught out deleting something possibly relevant, which looks very, very bad in court. Keep nothing, because then opposing counsel can't catch you out only keeping things that make you look good in court.

      [1] There's actually a subset of this, which includes "...until you are legally allowed to delete it, then delete everything". This is driven by regulation (e.g. SOX in the US).

    • next_xibalba 3 hours ago
      See also the OpenAI vs. Musk trial, where Greg Brockman's diary and Sam Altman's texts have taken center stage.
  • pfortuny 4 hours ago
    Honest question:

    Do these systems not share data with the AI servers? Or are they all local (on-site, not on-computer)?

    I am totally baffled by the trust people put in these systems, sharing with them the most obviously private data.

    • dsr_ 3 hours ago
      Most services have privacy policies that boil down to:

      - we promise not to share PII (defined as narrowly as possible)

      - we promise not to share payment information except with our payment system

      - if you pay us, we promise not to train LLMs on your data

      - you agree that everything else can be used for any business purpose, including marketing, intelligence gathering, and "sharing with our 1735 trusted partners".

    • cj 3 hours ago
      > I am totally baffled by the trust people put on these systems

      The average person doesn't care about online privacy.

      • sdellis 2 hours ago
        They care, but realize that there is no such thing as privacy anymore. The amount of obsession required to maybe maintain some degree of privacy is not something most people are willing to do.
        • cj an hour ago
          When the average person thinks about "online privacy" they think about keeping things private from other people. They don't think about keeping their data private from the companies hosting/processing their data.
    • daft_pink 3 hours ago
      If you are in a regulated industry like finance or healthcare, the popular ones generally have industry-wide privacy certifications like HIPAA compliance, SOC 2 Type 2, etc.
  • testfoobar 2 hours ago
    I would be concerned about transcription errors (e.g. with non-native speakers) where precision matters: engineering, compliance, regulation, legal, etc.
  • sandworm101 4 hours ago
    >> Executives and corporate boards generally expect conversations with their legal team about legal matters to have attorney-client privilege. They lose that protection if they share the same information with outside parties — and it’s possible that an A.I. note taker could have the same effect.

    Total oversimplification. The fact is the privilege is a rule totally in the hands of the court. Every time a new communications technology comes up, someone shouts about privilege, but the courts still accept it. (Telephones, cell phones, emails, IMs, Zoom court - each has had its day in the A-C privilege debate and been accepted.) What matters is that the parties intended and expected communications to be privileged.

    As an example: I had a crim law prof who had been a NYC public defender in the 70s/80s. She had regularly interviewed clients at Rikers Island. All interviews were listened to by guards, and she said you could even pay to get a copy of the recording. But these interviews were still covered by attorney-client privilege. No court would allow such evidence, though that doesn't mean the prison could not use it for jail safety. Why does this matter? Because the presence of a third party doesn't mean anything. This isn't magic; an eavesdropper does not nullify the spell. Whether something is or is not privileged depends on the rules followed in the local jurisdiction, and no jurisdiction has ever followed a simplistic "presence of a third party" rule.

    Until someone demonstrates an example of an AI actually leaking privileged information, courts are going to chalk it up as just another electronic tool for recording communications.

    • hugh-avherald an hour ago
      It sounds like the prison recordings were compulsory, which is a different kettle of fish. The key phrase "if they share" implies voluntary and deliberate action, and is not much of an oversimplification imo.

      > What matters is that the parties intended and expected communications to be privileged.

      I would contend that your summary, not theirs, is an oversimplification. Jurisdictions will obviously differ, but privilege does not attach merely because of the intent and beliefs of the lawyer and client.

      • sandworm101 42 minutes ago
        Well, I try to avoid the R word. The actual legal term would be reasonable intention, not literal expressed intention - i.e. putting an A-C warning on every email won't be enough on its own.

        IMHO we should just assume the R word before every verb in every legal discussion. That is how reality works. These are not spells. If I express that I intend something to be private, then announce it using a megaphone at a basketball game, my intention is no longer reasonable regardless of what magic words I have thrown into my communication. Act like an idiot and a court will treat you like an idiot.

    • EvanAnderson 44 minutes ago
      I'm less concerned about attorney-client privilege and more concerned about how data could aid prosecution in parallel reconstruction efforts.
  • analogpixel 3 hours ago
    unrelated to the article, but how do you make a page that prevents the mouse scroll wheel from working? that's pretty impressive.
    • bilekas2 hours ago
      It's not impressive, it's scummy hiding news behind a paywall. They simply use some CSS trickery to set the height of the content to the size of your viewport, so there is nowhere to scroll to.
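      A minimal sketch of that trick (hypothetical class name; not necessarily this site's actual CSS): cap the page at one viewport and hide the overflow, and the scroll wheel has nothing to move.

      ```css
      /* Hypothetical paywall styling, for illustration only.
         Capping <body> at one viewport height and hiding overflow
         removes the scrollbar, so wheel events have no effect. */
      body.paywalled {
        height: 100vh;    /* page is exactly one screen tall */
        overflow: hidden; /* nothing past the fold is reachable */
      }
      ```

      Which is also why deleting such a class or rule in the browser's dev tools usually restores scrolling.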
  • jgalt212an hour ago
    Stringer Bell would be furious.
  • vintagedave4 hours ago
    Paywall: can anyone share what the issue is?

    Inaccuracy in meeting minutes?

    Leaking private info, re security of notes?

    I have never used them (don't trust them to accurately capture what is important in a meeting vs just noting what's mentioned), but the concept seems very useful to me.

    • WillAdams4 hours ago
      Reminds me of when I worked for a small shop which had the copier maintenance contract at a local college --- when something went wrong and wasn't properly addressed, my bosses found themselves being held to account with their own words from prior phone calls being quoted back to them verbatim --- which they were mystified by until I explained that the administrators had all come up from the clerical pool and knew shorthand.
    • bearjaws2 hours ago
      The main risk is attorney-client privilege, and it's already been tested in New York: if you transcribe a call, you need to turn over the transcriptions, and they can subpoena the company doing the transcription for the records if you refuse.
    • LanceH3 hours ago
      They are saying that it could invalidate attorney client privilege because the transcription could technically be available to an outside party.

      I suspect what isn't being said by the lawyers is they want to keep attorney client privilege so they can outright lie.

    • close044 hours ago
      It's in the viewable text on the page.

      > A trendy productivity hack, A.I. note takers are capturing every joke and offhand comment in many meetings. They could also potentially waive attorney-client privilege.

      By now everyone knows that AI notes that aren't curated by a human will catch every silly thing said in the meeting while omitting the context of tone or body language. Something as simple as "yeah, right" has vastly different meanings depending on how it was said. In a different context it's already been established that using AI breaks attorney-client privilege [0], and this concern has been raised before by law firms [1][2] and the American Bar Association [3] (you can just hit escape before the paywall loads to see the full content). A judge will have to weigh in on this one too.

      I don't know what's with the wave of paywalled articles that keep making it to the front page without any workaround included in the submission. Even when you coax the text out of the page source, they're not very insightful to begin with.

      [0] https://perkinscoie.com/insights/update/federal-court-rules-...

      [1] https://www.smithlaw.com/newsroom/publications/the-silent-gu...

      [2] https://natlawreview.com/article/when-ai-takes-notes-protect...

      [3] https://www.americanbar.org/groups/gpsolo/resources/ereport/...

      • vintagedave2 hours ago
        > It's in the viewable text on the page.

        Not for me - there was no viewable text.

      • pjc504 hours ago
        People opt in to the panopticon and then discover they have no more secrets. I'm surprised lawyers fall for that as well.
        • lukewarm7074 hours ago
          the doofus lawyer probably didn't realise, i wouldn't call it opt in
        • close043 hours ago
          If a lawyer takes notes and puts them on a computer, or a cloud drive, or sends them over email, they are still covered by attorney-client privilege, right? If they use an AI to do it, it's treated more like a third party no longer covered by the same privilege. With no court decision on this yet, it only takes one bad assumption to screw up by using AI.

          To be fair, the attorney-client privilege should be completely technology/medium agnostic. If the intention is to have that info stay between client and attorney, nothing should change this.
