35 points by speckx 3 hours ago | 25 comments
  • crassus_ed 2 hours ago
    >Now the transcript happens in the background, a summary lands in my Obsidian vault automatically, and I can actually be present in the conversation. That’s 20 minutes a day I got back, every day, without thinking about it.
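
    The plumbing being described is simple enough; roughly something like this hypothetical sketch (not Granola's actual pipeline; it assumes the OpenAI Python SDK and a local vault folder):

      from pathlib import Path
      from datetime import date
      from openai import OpenAI

      VAULT = Path.home() / "vault" / "Meetings"  # hypothetical Obsidian vault location

      def file_summary(transcript: Path) -> None:
          # Ask a chat model for a summary, then drop it into the vault as markdown.
          client = OpenAI()
          resp = client.chat.completions.create(
              model="gpt-4o-mini",  # placeholder; any capable model works
              messages=[
                  {"role": "system", "content": "Summarize this meeting: decisions, action items with owners, key facts."},
                  {"role": "user", "content": transcript.read_text()},
              ],
          )
          note = VAULT / f"{date.today().isoformat()} {transcript.stem}.md"
          note.write_text(resp.choices[0].message.content + "\n")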

    Honest question: Do you actually read any of these notes? I think there is a fundamental flaw in not taking notes yourself. I'm convinced taking notes forces you to properly consider what is being said and you store the information in your brain better that way.

    • crazygringo 2 hours ago
      I think you misunderstand.

      Taking notes during meetings isn't to improve understanding, or to "read" afterwards.

      They're a record of what was discussed and decided, with any important facts that came up. They're a reference for when you can't remember, two weeks later, if the decision was A and B but not C, or A and C but not B.

      Or when someone else delivers the wrong thing because they claim that's what the meeting decided on, and you can go back and find the notes that say otherwise.

      I probably only need to find something in meeting notes later once out of every twenty meetings. But those times wind up being so critically important, it's why you take notes in the first place.

      • crassus_ed 2 hours ago
        Right, so it's for accountability instead. Have you considered generating stories or tasks from the notes in that case?

        Still, I think it's better to discuss "action points" in that case and give a clear owner to those points. This always helps me to understand who's accountable and which actions actually need follow-up.

        • crazygringo 2 hours ago
          The question is, what artifact records the action points, the owners, who is accountable? And all the necessary associated information?

          Notes do. Ideally there is a meeting owner who produces official notes and emails them to everyone, but frequently that never happens. And when it does happen, sometimes they're wrong and you need to correct them.

          Which is why you need your own meeting notes. Plus, like I said, there are facts that come up that you want to document as well, that aren't part of the action items, but have value.

          • co_king_5 2 hours ago
            > The question is, what artifact records the action points, the owners, who is accountable?

            I think the person you're replying to is suggesting that the shared place for recording these things in a medium-large software department would be in project tracking software like Jira or Github Projects.

            • crazygringo 2 hours ago
              And I'm saying, a lot of the time either the company doesn't use such tracking software, or it uses it for software development but not the meeting you just had with legal or finance or design or people outside the company or whatever.

              The kind of stuff stored in Jira is a very specific subcategory of all the types of things that get mentioned and decided in meetings. It doesn't cover all of it, not even close. And the person putting the information in might also get part of it wrong, that happens surprisingly frequently. It's not a substitute for personal meeting notes.

    • oytis 2 hours ago
      Yeah, the whole purpose of taking notes is being present in the conversation. Notes themselves are a nice byproduct.
    • Buttons840 2 hours ago
      I'd rather do spaced repetition than Obsidian.

      Does anyone know of a plugin for this?

      Like, a history buff could just tell the LLM "quiz me on the Taiping Rebellion, who what where when and why."

      The LLM then enters this instruction into an API that handles the spaced repetition data and algorithms.

      The LLM could poll that API daily and quiz you.

      Actually knowing all this stuff sounds so much better than having a bunch of notes in a fancy graph.
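
      The scheduling side such a plugin would need is tiny. Here's a simplified sketch of the classic SM-2 update (hypothetical, not any existing plugin's code; real SM-2 also special-cases the first two reviews):

        from dataclasses import dataclass, field
        from datetime import date, timedelta

        @dataclass
        class Card:
            prompt: str        # e.g. "Who led the Taiping Rebellion?"
            ease: float = 2.5  # SM-2 ease factor
            interval: int = 1  # days until the next review
            due: date = field(default_factory=date.today)

        def review(card: Card, quality: int) -> None:
            # Update one card after the LLM grades your answer 0-5.
            if quality < 3:
                card.interval = 1  # failed: start the card over
            else:
                card.interval = round(card.interval * card.ease)
                card.ease = max(1.3, card.ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
            card.due = date.today() + timedelta(days=card.interval)

        def due_today(deck: list[Card]) -> list[Card]:
            # What the daily quiz would pull from storage.
            return [c for c in deck if c.due <= date.today()]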

      • treetalker 17 minutes ago
        You can use an LLM to generate first drafts of flashcards that you import into, and later revise in, a true spaced-repetition system — such as Mochi or Anki.

        For learning new material, make your LLM assume a Socratic position. Kagi Assistant has a custom Study model that does this. The key is getting the model to increase friction (which drives learning and memory) instead of decreasing it.
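
        As for getting LLM drafts into Anki, the import side is mundane; e.g., a sketch that writes hand-reviewed drafts to a tab-separated file that Anki's File > Import accepts:

          import csv

          # Hypothetical drafts from an LLM, checked by hand before import:
          drafts = [
              ("Who led the Taiping Rebellion?", "Hong Xiuquan"),
              ("When did the Taiping Rebellion take place?", "1850-1864"),
          ]

          with open("taiping.txt", "w", newline="") as f:
              csv.writer(f, delimiter="\t").writerows(drafts)  # front<TAB>back, one card per line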

      • co_king_5 2 hours ago
        I feel like you'd be better off generating (or manually compiling) the dataset you'd like to memorize and then using existing spaced repetition tools to learn that data.

        I suspect it would be less effective to learn from similar-yet-slightly-different content that the LLM regenerates every time you want to study.

    • baq 2 hours ago
      Back when I was in a big org and in meetings 5-7h basically every day (as an engineer IC), this workflow would have absolutely hit it out of the park.
    • anticorporate an hour ago
      I think it depends. Honestly, for me, the alternative to automated notes from meetings is that I don't take notes. I know I should, but I don't. I've tried numerous times to instill the habit, unsuccessfully.

      Where the value for me comes from is sending them out immediately after the meeting, not archiving them in a vault I never look at. "Here's the summary of what we discussed, and the distilled action items we each agreed to take."

      Like the author, I've gone out of my way to avoid hosting my personal stuff with Big Tech providers, but when it comes to work, I give in to whatever we use, because I just don't have capacity to also be IT support for internal technology. It's still uncomfortable, but I have to be honest about what I have time for.

    • ayhanfuat 2 hours ago
      > I'm convinced taking notes forces you to properly consider what is being said and you store the information in your brain better that way.

      Yes, this is like listening to a guided meditation at 2x speed because it is faster.

      • co_king_5 2 hours ago
        > Yes, this is like listening to a guided meditation at 2x speed because it is faster.

        Isn't that pretty much the whole selling point of AI coding tools?

  • 1123581321 2 hours ago
    The gist is that OP went nuts replacing Google and Meta with self-hosted tools, and now he's feeding more data than ever into Anthropic or OpenAI (he didn't specify which, or I missed it; skimming AI-generated blog posts tires the eyes).

    That's par for the course, honestly. News-cycle-driven anti-big-tech sentiment is weak fuel for a lifelong commitment. Something new was going to come along.

    I am always happy for anyone who felt stuck on their side projects and no longer does, though.

    • crazygringo 2 hours ago
      To be fair, OP talks specifically about that -- that's a full quarter of the post:

      > I’ve spent the past year moving away from surveillance platforms... And yet I willingly feed more context into AI tools each day than Google ever passively collected from me. It’s a contradiction I haven’t resolved. The productivity gains are real enough that I’m not willing to give them up, but the privacy cost is real too, and I notice it.

    • theiasson 2 hours ago
      This paragraph made me laugh out loud:

      >I’ve settled into an uneasy position: AI for work where the productivity gain justifies the privacy cost, strict boundaries everywhere else. It’s not philosophically clean. It’s just honest.

      • co_king_5 2 hours ago
        > I’ve settled into an uneasy position: Crack Cocaine for work where the productivity gain justifies the privacy cost, strict boundaries everywhere else. It’s not philosophically clean. It’s just honest.
  • rkozik1989 2 hours ago
    Has anyone else considered that producing code faster isn't necessarily a good thing? There's a lot that goes into getting a solution correct that has nothing to do with programming. Just because you can scale code production doesn't mean you can scale things like understanding user wants and expectations. At a point you become more work for yourself/your organization, because unless you get everything perfect the first time, you're creating more work than you're resolving.
    • jamiemallers 2 hours ago
      This is the underappreciated angle. Shipping faster creates a downstream cost that nobody's accounting for: more code in production means more surface area to monitor, more potential failure modes, and more on-call burden.

      I've watched teams go from deploying weekly to deploying 5x/day after adopting AI coding tools. Their velocity metrics looked incredible. Their incident rate also tripled. Not because the code was worse per se, but because they were changing more things faster than their observability and testing could keep up with.

      The bottleneck was never typing speed. It was always understanding -- understanding the system, understanding the user, understanding what "correct" even means in a given context. AI makes the typing-equivalent part nearly free, which just exposes that the hard parts were always the hard parts.

      The teams I've seen get the most out of AI coding tools are the ones that used the time savings to invest more in understanding, not to ship more features. More time with users, more time reading production logs, more time thinking about edge cases. The ones that just shipped faster ended up spending the saved time on incident response instead.

      • co_king_5 2 hours ago
        > The bottleneck was never typing speed. It was always understanding -- understanding the system, understanding the user, understanding what "correct" even means in a given context.

        This is also the problem with having "conversations" with AI boosters.

        These people have been convinced of a world view that devalues *understanding*. Of course they aren't interested in *understanding* what you have to say to them.

        • dd8601fn an hour ago
          Or… I put together new features in three days, and it takes the rest of the process 8 more weeks before anything sees production.
      • Daishiman 27 minutes ago
        > I've watched teams go from deploying weekly to deploying 5x/day after adopting AI coding tools. Their velocity metrics looked incredible. Their incident rate also tripled. Not because the code was worse per se, but because they were changing more things faster than their observability and testing could keep up with.

        But this is an improvement! The features-per-incident ratio improves: you have more incidents in absolute terms, but fewer relative to the increased velocity. This may or may not be a valid tradeoff depending on the impact of the incidents.

        At least in my org we have an understanding that the product side will have to change drastically to accommodate the different rates of code development.

    • seanalltogether 2 hours ago
      I just went through this exact thing this week. We've been working on a new feature that, if vibecoded as soon as the docs landed in our lap, would have resulted in a lot of duplicated functionality and an expanded data model. The more we worked through the solution with other engineers, the more we realized the problem had been solved by another team and our solution could be small and elegant.

      I can't claim that AI has no benefit to our organization, but I do think that as my career has matured, I find myself spending more time thinking about how code changes will affect the system as a whole, and less time doing the actual coding.

    • ssgodderidge 2 hours ago
      I agree that it isn't always a good thing. The assumption is that writing code, at some level, is one of the bottlenecks to delivery. If you "widen" the bottleneck by removing the time it takes to generate the code, your new throughput is going to create stress on other delivery areas: gathering feedback, testing, validation, approval processes, etc. I think the most effective results would come from a holistic approach that removes other bottlenecks in addition to reducing the time required to produce code.
    • oytis an hour ago
      Yes, but a lot of work in software is also solving already-solved problems. It would be nice if we could apply AI just to that, but all of humanity's previous experience with digital technology tells me that we won't.
    • Ensorceled 2 hours ago
      > Has anyone else considered that producing code faster isn't necessarily a good thing?

      This has been a relentless goal of the industry for my entire 40-year career.

      > At a point you become more work for yourself/your organization, because unless you get everything perfect the first time, you're creating more work than you're resolving.

      Nothing is correct the first time (or only rarely). Accelerating the loop of build, test, re-evaluate is a good thing.

      • threethirtytwo 2 hours ago
        I think you captured it. Not many people agree, but the real-world metrics speak the truth: trying and failing faster gets you further than methodical planning and structured approaches.

        There IS experimental evidence on this, and anyone's anecdotal opinion is instantly blown to smithereens by the fact that this was tested and producing code faster is provably better.

    • jolt42 2 hours ago
      I'll try things I wouldn't otherwise try to get a better solution. I generate more throw-away code that actually gets thrown away; that's a win.
    • poszlem 2 hours ago
      But oftentimes you actually CAN understand user wants and expectations faster by deploying and iterating faster.
    • echelon 2 hours ago
      No. It can't be anything but a good thing.

      Code was always a limiting factor. It's why we built large companies.

      Now we can do more with fewer engineers. This will enable small teams and small startups to be even more nimble.

      • co_king_5 2 hours ago
        This seems like quite a naive perspective to me.

        Was code typically a limiting factor? It doesn't seem to have been in the companies I've worked for.

        LLMs allow us to generate new code much more quickly than before, but reviewing that code (alongside other institutional issues) remains a bottleneck.

        • threethirtytwo 2 hours ago
          >but reviewing that code (alongside other institutional issues) remains a bottleneck.

          AI can review my code.

          • co_king_5 2 hours ago
            > AI can review my code.

            LOL, good one

      • boesboes 2 hours ago
        This must be why KLOCs are considered such a great indicator of productivity and why churn is used to measure code quality /s

        I've worked in multiple start-ups and more mature companies; they always slow down, because producing code is easier than building a product. More code is only better when quality hardly matters, which is basically never.

  • WarmWash 2 hours ago
    I think a better take is from the MIT study last year, the one that found almost all AI pilots failed.

    In that study they found that pretty much everyone was using AI all the time, but they were just using their personal accounts rather than the company-provided tools (hence the failures).

    In light of this, I'd say there is a very good chance that people are offloading their work onto AI, and then taking that saved time for themselves, i.e. "I can finish the job report in 30 minutes rather than 3 hours now, so by 9:30 I'm done with work until after lunch."

    The end result of this will either be layoffs to consolidate work, or blocking of non-company-monitored AI, ensuring they can locate those now-empty time slots.

  • akkanzn 2 hours ago
    > Granola transcribes my meetings

    The experience in both my personal and social circles has been that these tools are spotty at best. They often miss important things, overemphasize the wrong things, etc. At a surface level they look good but if you actually scrutinize them they fall apart.

    • sundache 2 hours ago
      > At a surface level they look good but if you actually scrutinize them they fall apart.

      This is true for a huge amount of AI output in my experience.

      • co_king_5 2 hours ago
        > At a surface level they look good but if you actually scrutinize them they fall apart.

        This is overwhelmingly true for AI generated code in my experience.

        FWIW it makes me highly discount the perspectives of internet commenters who argue that LLMs generate "better than human" or even "mostly working" code.

        • Daishiman 24 minutes ago
          The question to ask is, better than which human?

          The top 25% of coders in my org definitely code better than most agents. The rest? I trust the output of an LLM to be far more consistent when adding/deleting features across service layers than a human who can make accidental typos. Same thing with bog-standard React components or Docker build scripts.

    • gverrilla 2 hours ago
      You can get almost verbatim transcriptions with better models.
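
      For example, running Whisper locally gets close (a minimal sketch using the open-source openai-whisper package; the audio file name is an assumption):

        import whisper  # pip install openai-whisper

        model = whisper.load_model("medium")  # larger models are slower but closer to verbatim
        result = model.transcribe("meeting.wav")
        print(result["text"])
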
    • co_king_5 2 hours ago
      Makes sense; the flattening/erasure of interesting data is always my concern when interacting with an LLM.
    • thewhitetulip 2 hours ago
      We're now in the post-scrutinize era.
  • uludag 2 hours ago
    I had a thought about this coming from the book "Seeing Like a State."

    Productivity in large organizations has never been, and can never be, purely the legible work that is written in Jira tickets, documented, and expressed clearly; it is sustained by an illegible network of relationships between the workers and unwritten knowledge/practices. AI can only consume the work that is legible, but as more work gets pushed into this realm, the illegible relationships and expertise become fragmented and atrophy, which puts backpressure on the system's productivity as a whole. And having read said book, my guess is that attempting to impose perfect legibility for the sake of AI tooling will ultimately prove disastrous.

  • Havoc 2 hours ago
    I’d take this a step further and say that the deployment failure isn’t just management failing to provide training etc

    If you take 100 people, not all of them will have the intellectual curiosity, enthusiasm and flexibility to turn their ChatGPT license into productivity gains. No amount of training will overcome a fundamental lack of curiosity & willingness to experiment.

    And in very corporate environments there are lots of people like that, who have thrived just fine thus far because everything is written down in a step-by-step policy, etc.

  • beej71 2 hours ago
    I'm glad that this is making this individual more productive, but to quote the Fortune article:

    > “AI is everywhere except in the incoming macroeconomic data,” Apollo chief economist Torsten Slok wrote in a recent blog post, invoking Solow’s observation from nearly 40 years ago. “Today, you don’t see AI in the employment data, productivity data, or inflation data.”

    So I don't feel like TFA is necessarily a rebuttal to this. The proof would be in the pudding.

    • dd8601fn an hour ago
      His argument is that this isn't a failure of AI to perform as advertised, but a series of deployment failures at businesses. He theorizes that they're buying a million licenses for ChatGPT or Copilot, dumping them in the laps of employees, and assuming the results will just… "show up".

      So I guess he’s making the case that the tools are good… the employees are just holding it wrong.

  • westoque 2 hours ago
    One thing it did massively for me was save me time on "should I go with X or Y" questions. Before, I used to just think longer about tradeoffs, but with AI it became a lot faster. No more procrastination due to decision fatigue.
  • bluesnowmonkey 2 hours ago
    For one thing they were just early. Whatever measurements people made of AI six months ago are invalid. It’s a different animal now.

    Plus you get a wildly different payoff the more you can take humans completely out of the loop. If it writes the code but humans review, you're still bottlenecked. If it designs and codes and reviews and goes back to designing, and so on, there's no effective speed limit.

    Big businesses aren’t going to work that way though. Which is why we shouldn’t be looking to them as thought leaders right now.

    • co_king_5 2 hours ago
      > Whatever measurements people made of AI six months ago are invalid. It’s a different animal now.

      Are you sure? It feels like the same exact bullshit to me.

      • threethirtytwo 2 hours ago
        That's because you're getting left behind. The technology is outpacing you because most likely you're not using it right. It's also likely you're not in an environment that pushes you to use it right, so you just give it half-assed attempts, never putting in the initial effort to up your game with AI.

        At my company, if you don't use AI, your productivity will be much lower than everyone else's and that will result in you getting fired. The expectation is 3-4 PRs a day per person.

        • co_king_5 2 hours ago
          > That's because you're getting left behind. The technology is outpacing you because most likely you're not using it right.

          Ah shit you're probably right.

          Are there any concentration camps I can sign up for now that I'm useless to the economy?

  • beej71 2 hours ago
    Narrowing in on one piece of this:

    > Meeting notes are the obvious one. Before Granola, I’d either scribble while half-listening or pay attention and try to reconstruct things afterwards from memory. Both were bad. Now the transcript happens in the background, a summary lands in my Obsidian vault automatically, and I can actually be present in the conversation. That’s 20 minutes a day I got back, every day, without thinking about it.

    Yikes. So, 1) meetings at your company suck. In general, you should be engaged and take short, summary notes and todos while you're there; no need to have a transcript or AI summary. Talk to your manager about getting meetings right. 2) "without thinking about it" might not be the best phraseology in this overall context. :)

  • firesteelrain 2 hours ago
    A lot of anecdotes here and in the article so I’ll add my own.

    AI isn’t a silver bullet. It takes many iterations to get right. Yes, there is a lot of on-the-surface-it-looks-correct-so-ship-it stuff going on. I cringe when someone says “Well AI says..”

    I don’t care what AI says! Unless you have done the research yourself and applied your own critical thinking then don’t send me that slop!

    That is to say, there are some really good LLMs out there. I started using Claude and it is better for code than ChatGPT. But, you must understand and appreciate the code before you push it.

  • itisjailop 2 hours ago
    Wow! The rest of the world is wrong. Bold claims.
    • co_king_5 2 hours ago
      Really, the rest of the world is just using it wrong.
  • altmanaltman 2 hours ago
    Show what you build; prove the productivity gains by working out what you actually did with the extra 20 minutes you save every day. Prove all this stuff instead of just saying "oh yeah bro I'm totally more productive with AI." It is trivial to track these metrics if you're serious about your productivity as an individual. The article is big on words but fails to show even one good effect of the increased productivity, or whether it even exists.

    The article mentions that the survey is wrong because the productivity gains do not show up in the metrics, etc. But what about your personal metrics? What projects did you ship, how many per week, what was the total amount of minutes saved per week, how did you use those minutes instead?

    Otherwise it's just productivity theater.

    Most people never use an LLM assistant because their lives aren't complicated enough to require a dedicated 24x7 assistant.

  • msp26 2 hours ago
    https://minutes.substack.com/p/tool-shaped-objects

    I feel like this applies to many of you.

    • co_king_5 2 hours ago
      > This is FarmVille at institutional scale.

      Great take, thanks for sharing this article!

    • kiliantics an hour ago
      While the idea in the post is an interesting one, the analogy to planing is terrible. The difference in results between a power planer and a hand plane (even with a pretty basic blade) is night and day. Wood planed with high-quality, sharp steel has a finish that doesn't even need oil or varnish.

      People talk about how non-AI code will become an artisanal craft, and I think it's a bit of a stretch. The one exception might be when code has an intrinsic aesthetic quality in itself, rather than just the functional output, something like the obfuscated C code competition entries. Hand-worked wood might be crappy too, like a school woodwork birdhouse project made by a beginner, but a truly artisanally crafted piece of furniture or cabinetry is very tangibly superior to an IKEA bookshelf or other industrial stuff.

      On the point of doing work for the sake of doing work and not for the sake of the value of the output, this is nothing new, as suggested in the blog post. But the more apt analogy would be all the "bullshit jobs" that have existed for decades in modern corporations. People who expand their teams to justify more budget to hire more people to create more work to expand their teams to get bigger budgets, etc. All the while producing nothing of real value in the company. The thing that AI seems to have done is accelerated and exaggerated this tendency, maybe since it was already the natural tendency within the logic of our corporate work culture.

  • LolWolf 2 hours ago
    I don't want to put OP on blast here, but this is unfortunately just complete slop writing.

    The points being made are fine, I think, but look, if it's faster for you to generate than it is for us to read, I think this qualifies as denial-of-service-lite.

  • xnx 2 hours ago
    3 posts from this site in 2 days.
  • philipwhiuk 2 hours ago
    > I’ll keep using these tools. They’ve made me measurably more productive in ways I can point to: time saved, projects shipped, focus protected.

    What products? This blog post is long on vibes and short on evidence.

    > The actual gains are granular and personal, which makes them hard to count and easy to dismiss.

    It also means the trillion dollar valuations might be bunk?

    • co_king_5 2 hours ago
      > > The actual gains are granular and personal, which makes them hard to count and easy to dismiss.

      > It also means the trillion dollar valuations might be bunk?

      Yes, unless the selling point is the institutional and social instability you can create by handing LLMs to technically incapable users and telling them they can write code now.

    • watt 2 hours ago
      I ship 0 projects, and now with AI I can ship a thousand times more in the same time frame!
    • empath75 2 hours ago
      > What products? This blog post is long on vibes and short on evidence.

      I think this is an uninteresting question. Almost every company is putting AI-produced code into their products now and has been for years. Whether it's entirely vibe-coded or not is beside the point.

      I'm working on 4 kubernetes operators that we use internally at work in production currently. 3 of them were handcrafted, one of them was vibe coded using the other 3 as a template. Almost all of the work being done on all 4 of them is now done by AI, whether it is copilot or cursor or claude code. Stuff that used to take me days or weeks now takes hours.

      Just to give one example -- yesterday I added a whole new custom resource to an operator with some quite complicated logic that touched 8 or 9 different kubernetes resources. It's not a hugely complicated task, and I could have done it in 2-3 days by myself. Claude Code essentially one shotted it in 15 minutes, including tests. It misunderstood some things, it made some judgement calls that I didn't like in terms of spec, it wrote some new code instead of using pre-existing code, but fixing that took another 90 minutes or so, then I was done.

      You can put your head in the sand all you want, but the latest versions of the LLMs running in Claude Code are the real deal. They produce code 5-10x faster than an engineer working by themselves, and it's almost always better code than they would have produced, with more documentation and tests, and even better-written PR comments and Jira tickets.

      If you want to talk about valuations, consider now that there is a very real conversation about hiring vs spending more on tokens and spending more on tokens almost always wins. Anthropic is going to be absolutely printing money over the next year, and I would not be surprised if they turn a profit in two years.

  • runjake 2 hours ago
    I keep happening across articles claiming that AI doesn’t actually increase productivity and I’m completely confused.

    I used to debate with people about this, but it didn’t really change anything. Now, I just shrug and continue on with my work and, if someone asks, I help them use AI better.

    My main worry now is when the AI bubble is going to burst, and what’s affordable now becomes unaffordable.

    • intangible 2 hours ago
      If you were already experienced and productive, it does very little for you beyond summaries, a little boilerplate, and possibly search help.

      If you were unproductive, it allows you to be more "productive" while stalling or reversing your learning and growth.

      Of course, person number 2's newfound "productivity" comes at the expense of leeching productivity away from the experienced and productive people by overloading them with reviewing and validating their non-deterministic generated spaghetti.

      It amazes people who think pumping out code is the hard part of a project, when in fact that's the easiest part...

      We've apparently collectively forgotten that lines of code is one of the worst metrics for measuring productivity.

    • co_king_5 2 hours ago
      Do you believe that AI has increased your productivity?
  • rc-1140 2 hours ago
    I don't really understand what it is with CompSci graduates and their bizarre aversion to handwriting, note taking, and any kind of skill that's derived from arts disciplines or "average joe" office systems.

    Shorthand notation exists, and it's more than possible to develop your own. I'd trust an OBS recording going in the background over some AI slop that has some chance of micro-hallucinating what it's hearing. It also sounds like a skill issue that the author can't control the pace of his own meetings to the point where taking good notes is seemingly impossible.

    The author's AI use cases seem like a band-aid to cover bigger problems. Let's not even get into the part of the blog post where the author has started delegating internal thinking and reflection to conversations with an LLM.

  • blibble 2 hours ago
    no matter what, the boosters' answer to the tool being shit is always "you're holding it wrong"
    • co_king_5 2 hours ago
      My problem with this article is the author didn't really provide any advice on how to hold it better.

      The AI note-taker sounds genuinely useful, but beyond that he never discusses the actual techniques he used to go from one week to implement a side project down to one day.

    • exitb 2 hours ago
      On the other hand, where does the expectation come from that you can be just as effective at using a tool as someone who has actively used it since GPT-3.5? An OpenCode instance loaded with the latest frontier model is, to quote a poet, a rocketship to nowhere - it's on you to steer it towards the results you want to achieve.
  • satisfice 2 hours ago
    Another article claiming productivity without providing evidence of the quality of the work. How do we know these meeting summaries are accurate? And why are meeting summaries so great, anyway? I never had them before.

    Is this productivity or paper pushing?

  • techpulse_x 2 hours ago
    [dead]
  • LightBug1 2 hours ago
    Mostly agree with this from a personal pov ... AI has changed my role from often being a slog to smoothly gliding through my day most of the time.

    The only question will be whether or not it gradually develops further from my assistant to my controller and then ... its own HR firing department.

  • co_king_5 2 hours ago
    Granola (LLM-backed meeting note-taker/summarizer) sounds like a very useful tool!

    What do you mean when you say that you use LLMs for *Code Scaffolding*?

    • thekoma 2 hours ago
      Also check out Hyprnote, which lets you do the meeting transcription and note enhancement fully locally, or wherever else you want, with a BYOM (bring-your-own-model) approach.