The next generation of Calcapp probably won't ship with a built-in LLM agent. Instead, it will expose all functionality via MCP (or whatever protocol replaces it in a few years). My bet is that users will bring their own agents -- agents that already have visibility into all their services and apps.
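To make that concrete, here's a minimal sketch of what exposing functionality over MCP could look like, using the MCP Python SDK's FastMCP helper. The server and tool names are hypothetical, not Calcapp's actual API:

    # Hypothetical sketch: exposing app functionality as MCP tools so
    # user-supplied agents can discover and call them (MCP Python SDK).
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("calcapp-tools")  # hypothetical server name

    @mcp.tool()
    def recalculate_sheet(sheet_id: str) -> dict:
        """Recalculate a sheet and return its outputs (stub)."""
        # A real implementation would call into the formula engine here.
        return {"sheet_id": sheet_id, "status": "recalculated"}

    if __name__ == "__main__":
        mcp.run()  # serves over stdio by default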
I hope Calcapp has a bright future. At the same time, we're hedging by turning its formula engine into a developer-focused library and SaaS. I'm now working full-time on this new product and will do a Show HN once we're further along. It's been refreshing to work on something different after many years on an end-user-focused product.
I do think there will still be a place for no-code and low-code tools. As others have noted, guardrails aren't necessarily a bad thing -- they can constrain LLMs in useful ways. I also suspect many "citizen developers" won't be comfortable with LLMs generating code they don't understand. With no-code and low-code, you can usually see and reason about everything the system is doing, and tweak it yourself. At least for now, that's a real advantage.
Agree there will be a place for no-code and low-code interfaces, but I do think it's an open question where the value capture will be--as SaaS vendors, or by the LLM providers themselves.
People may dislike XML, but it's easy to build a REST API with, and it works well as an interface between computer systems where no human has to see the syntax.
Edit: for downvoters, I'd be curious why.
We solved the problem of discovery and documentation between machines decades ago. LLMs can and should be using that today instead of us reinventing bandaids yet again.
What we find ourselves doing, apparently, is bolting together multiple disparate tools and/or specs to try to accomplish the same goal.
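Concretely, the thing we solved decades ago is the machine-readable interface description -- WSDL then, OpenAPI now. Here's a sketch of a client discovering a service's operations from its published spec (the URL is made up, and it assumes the spec is served at /openapi.json):

    # Sketch: machine discovery via an OpenAPI document. The URL is
    # hypothetical; assumes the service publishes its spec there.
    import requests

    spec = requests.get("https://api.example.com/openapi.json").json()
    for path, methods in spec.get("paths", {}).items():
        for verb, operation in methods.items():
            print(verb.upper(), path, "-", operation.get("summary", ""))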
https://github.com/agoda-com/api-agent
Worth taking a look to see multiple approaches to the problem.
There's a lot of value in having direct manipulation and visual introspection of UIs, data, and logic. Those things allow less technical people to understand what the agents are creating, and ask for help with more specific areas.
The difficulty in the past has been 1) the amount of work it takes to build good direct manipulation tools - the level of detail you need to get to is overwhelming for most teams attempting it - but LLMs themselves make this a lot easier to build, and 2) what to do when users hit the inevitable gaps in your visual system. Now LLMs fill these gaps pretty spectacularly.
In my most hopeful of futures, we've figured out how to do lightweight inference; if the models don't run locally, at least they aren't harming the planet; and all this AI tooling hydrates all the automation projects of the last 40 years, so that my favorite tiny local music label can have a super-custom online shop that works exactly the way they need, without having to sacrifice significant income to do it.
Yes. In the GIS industry, for example, nothing has fundamentally changed with the introduction of LLMs. They may make the same processes more efficient, e.g. through automated building of workflows. AI has significantly improved classification work of course, but it's still using the same principles (we've been doing ML longer than most industries). Geocoding will get cheaper and easier, but it's still geocoding.
GIS software allowing standard visualisation, export, and map production will get a lot better because of LLMs. It's an area where the sheer complexity and number of formats was overwhelming, but now a GeoTIFF parser can be built in a day or two.
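For a sense of scale: once a library (or a freshly generated parser) handles the format, reading a GeoTIFF is a few lines. A sketch using rasterio, with a made-up file path:

    # Sketch: reading a GeoTIFF with rasterio (the path is made up).
    # The hard parts (TIFF tags, CRS, tiling) live in the library.
    import rasterio

    with rasterio.open("elevation.tif") as src:
        print(src.crs)      # coordinate reference system
        print(src.bounds)   # spatial extent
        band = src.read(1)  # first band as a numpy array
        print(band.shape, band.dtype)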
The article was making a bit of a sweeping statement based on a single datapoint: they didn't need Retool anymore.
They want a tool that makes this file share talk to this SharePoint site, which updates this ERP tool over there. The LLM approach is great for the departmental person (if they can still host shadow IT) but falls down at the organizational level. The nature of this work is fundamentally different, crappier, and less interesting than what any person on HN wants to be doing, which contributes to the misunderstanding of this market.
EDIT: fixed grammar.
I wrote a short post about it on my blog: https://blog.waleson.com/2022/10/access2mendix.html
A lot of value indeed, but not just for less technical people. Imagine ddd vs gdb. Usually some kind of visual debugging aid isn’t available in an environment because the ROI isn’t there, not because technical people love mental parsing or hate graphics. The LLM revolution is changing the calculus here: creating new tools and new visualizations is easier than ever. It would be unthinkable three years ago to create a visual debugging aid just to use it once, outside of truly gnarly and show-stopping bugs; now it could very well be feasible.
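As a concrete example, here is roughly what a disposable, use-once visual debugging aid might look like -- the kind of thing an LLM can now emit on demand. It assumes the graphviz Python package (plus the Graphviz binaries), and the data structure is made up:

    # Sketch: a throwaway debugging visualization. 'links' stands in
    # for whatever in-memory structure you are actually inspecting.
    import graphviz

    links = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}

    dot = graphviz.Digraph(comment="one-off debug view")
    for node, children in links.items():
        dot.node(node)
        for child in children:
            dot.edge(node, child)
    dot.render("debug_view", format="png", cleanup=True)  # writes debug_view.png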
Does anyone actually believe this is the case? I use LLMs to ‘write’ code every day, but it’s not the case for me; my job is just as difficult and other duties expand to fill the space left by Claude. Am I just bad at using the tools? Or stupid? Probably both but c’est la vie.
Writing code is the "easy" part and kind of always has been. No one triggers incidents from a PR that's been in review for too long.
As a dev team, we've been exploring how we grapple with the cultural and workflow changes that arise as these tools improve--it's definitely an ongoing and constantly evolving conversation.
I've used every SOTA model for day-to-day work, and at best they save some effort. They can't do everything yet.
The real moat is writing something custom, and that's where they sometimes struggle.
I personally hope that the future becomes a UBI consumer-as-a-job thing, minus too much of the destructive impact that current consumerism has on the world.
Back then, a domain expert could fire up either Delphi or Visual Basic 6, and build a program that was useful for them. If they needed more performance, they would hire a professional programmer who used their work as a specification, and sanded off the rough edges.
These days, Lazarus is the open-source follow-on to Delphi. It'll work almost anywhere. I've run it on a Raspberry Pi Zero W! The only downside is the horrible documentation.
Microsoft went off the rails with their push towards .NET, sadly.
I don't have too much experience with MAUI so I can't comment on that.
Blazor's initial bundle sizes made it quite difficult to consider as an option for web applications, despite the ability to share code between frontend and backend.
I still feel like the ASP.NET + frontend SPA story has a long way to go compared to what is available in the full-stack TypeScript ecosystem right now. Shared typings between the frontend and backend via tools like tRPC/oRPC, or full RSC/SSR frameworks like Next and TanStack Start, are just so much more ergonomic, but the backend TS story, especially in data access and ORMs, is so much worse compared to Entity Framework. Prisma is abysmally slow, and Drizzle is getting there, but IMO nothing right now compares to the power and DX of EF Core + LINQ methods.
Low-code has become especially important now with LLMs, for several reasons: stability, maintainability, security, and scalability.
If the same feature can be implemented with less code, the stability of the software improves significantly. LLMs work much better with solid abstractions; they are not great at coding the whole thing from scratch.
More code per feature costs more in token count, is more error-prone, takes more time to generate, is less scalable, more brittle, harder to maintain, and harder to audit... These are major negatives to avoid when working with LLMs... So I don't understand how the author reached the conclusion they did.
However, the underlying principles haven't changed:
- Engineering bandwidth is minimally available for internal tools.
- Enterprise controls/guardrails are important needs.
- Bringing data into your app is a must-have.
- Maintaining code vs. low-code apps: low-code has been a lot easier.
In a conversation with a CTO at a VC fund, he predicted that within 4-6 quarters you'll see demand back at its peak in the low-code segment!
In a customer conversation, the customer had made one tool with Cursor and was very successful, but by the time he started adding features for 2.0 everything started breaking, and he wanted to move back to low-code.
As a low-code vendor, we just added an internal tool-building agent that writes React code underneath and leverages the platform's other core capabilities, thereby giving users the best of both worlds.
But surely interesting times ahead for the category. Let’s see if it survives or dies!
My personal take: it will survive and converge with agentic AI!
Usually the point of a library or framework is to reduce the amount of code you need to write, giving you more functionality at the cost of some flexibility.
Even in the world of LLMs, this has value. When it adopts a framework or library, the agent can produce the same functionality with fewer output tokens.
But maybe the author means, "We can no longer lock in customers on proprietary platforms". In which case, too bad!
There's not much technical difference.
The way those names are used, "low-code" is focused on inexperienced developers and prefers features like graphical code generators and ignoring errors. On the other hand, "frameworks" are focused on technical users and prefer features like API documentation and strict languages.
But again, there's nothing on the definition of those names that requires that focus. They are technically the same thing.
Your last idea makes sense as well, to some extent. For sure, once you abstract away the technical implementation details and use platforms that let you focus only on business logic, it becomes easier to move between different platforms that support similar underlying functionality. That said, some functionality may be challenging for different providers to replicate correctly... But some of the core constructs, like authentication mechanisms, access controls, etc., might be mostly interchangeable; we may end up with a few competing architectural patterns, with each platform fitting under one of them, optimized for slightly different use cases.
Libraries + Frameworks doesn't mean that unless you're bonkers.
LLMs + Libraries + Frameworks means you might pay to build the application, but running it is only going to be the cost of where it's running.
You're exactly right.
Think about the low-code platform as a place to host applications where many (not all) of the operational burdens of long-term maintenance are shifted to the platform, so that developers don't have to spend as much time doing things like library upgrades, or switching to X new framework because the old framework is deprecated, etc.
I have been building https://github.com/openrundev/openrun to try and solve internal tooling deployment challenges. OpenRun provides a declarative deployment platform which supports RBAC access controls and auditing. OpenRun integrates with OIDC and SAML, giving your code based apps authn/authz features like low-code platforms.
The advantage of third-party tools is that it's hard to get new features in there, so they retain their simplicity. You don't get some rando C-Level or IT guy demanding new auth features to make it messy.
Low-code and LLMs can coexist: low-code can be just another layer (or, if you prefer, a more abstract programming language) that LLMs can use. You have less freedom, but more predictability and robustness, which is perfectly fine for internal tools.
Low-Code and the Democratization of Programming: Rethinking Where Programming Is Headed
https://www.oreilly.com/radar/low-code-and-the-democratizati...
You could try to generate the business tools straight from the conventional toolsets, but the problem is that agents are still far too unreliable for that. However, just like humans, if you dumb down the space and give them a smaller, simpler set of primitives, they can do a lot better.
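A sketch of what that smaller set of primitives could look like: the agent emits a plan over a whitelisted vocabulary instead of arbitrary code, so failures stay bounded and inspectable. All names here are hypothetical:

    # Sketch: constraining an agent to vetted primitives. The agent
    # produces data (a plan), not code; all names are hypothetical.
    PRIMITIVES = {
        "fetch_rows": lambda table: [{"id": 1, "table": table}],  # stub
        "filter_eq": lambda rows, key, value: [r for r in rows if r.get(key) == value],
    }

    def run_plan(plan):
        """Run a plan step by step; unknown ops fail loudly (KeyError)."""
        result = None
        for step in plan:
            op = PRIMITIVES[step["op"]]
            args = step.get("args", [])
            result = op(result, *args) if step.get("pipe") else op(*args)
        return result

    plan = [
        {"op": "fetch_rows", "args": ["invoices"]},
        {"op": "filter_eq", "args": ["id", 1], "pipe": True},
    ]
    print(run_plan(plan))  # [{'id': 1, 'table': 'invoices'}]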
The idea that "now that AI can churn out massive amounts of code quickly and for little cost, we should just forget trying to minimize the amount of code, because code is now basically free" is magical thinking that runs counter to what is actually happening.
The key insight that's missing is that code creation is the cheapest aspect of software development; reading the code, maintaining the code, and adapting the code to new requirements is by far the most difficult and time-consuming part, and the speed of code creation is irrelevant there. The smallest trade-off that compromises quality and future-proofing is going to cost multiples of any savings the next time you (or the LLM) need to look at the code.
People with industry experience know very well what happened when companies hired developers based on their ability to churn out a large volume of code. Over time, these developers churn out more and more code at an accelerating rate, creating an illusion of productivity from the perspective of middle managers, while the rate of actual new feature releases grinds to a halt as the bug rate increases.
With AI, it's going to be the same effect, except MUCH worse and MUCH more obvious. I actually think that it will get so bad that it will awaken people who weren't paying attention before.
Speaking as someone who spent 8 years building nocode tools, had two exits, and stepped out of the industry last year: I’m not bitter, and I’m not cheerleading either.
For apps -- where "nocode" is basically an app template/API template builder -- it was always 50% useful, 50% marketing to sell you extra services. You still need an advanced builder mindset: people who think like engineers but don't want to write code. That's a weird combo, and it's really hard to find consistently.
For business logic, it’s almost the opposite. Nocode can give you a clean, visual UX—a clear map of how the logic is connected instead of a spaghetti mess in code. That value sticks around wherever “explain how this works” matters. Not everywhere, but definitely enough places for a real market.
A twist on that could be a hybrid that explains how it was built and has some quick controls, not just a prompt you type into -- e.g., a nocode agentic UI.
For our startup, the low-code vs. LLM shift started out hugely frustrating and scary, but also hopeful. After years of dev, we were getting ready to launch our low-code app product #2, and then, bam, ChatGPT 3.5 happened and LLMs stopped sucking so much.
We had to look at the future of our corner of the world -- bringing our tricky GPU graph investigation tech beyond the data 1%'ers at top gov/bank/tech/cyber investigation teams to something most teams can use -- and made the painful and expensive call to kill the low-code product.
The good news is, as a verticalized startup, the market still needed something here for the same reason we originally built it. LLMs just meant the writing was on the wall that market expectations would grow, as would what's possible in general. We correctly guessed that would happen and started building louie.ai. Ex: while we had previously viewed our low-code platform as doubling as a way for teams to write down their investigation flows so they could eventually do ML-powered multi-turn automations on them... we never dreamed we'd be speed-running investigation capture-the-flag competitions. Likewise, we're now years ahead of schedule on shedding the shackles of Python-first notebooks & dashboards.
So yeah, for folks doing generic low-code productivity apps, it's not great. n8n and friends had to reinvent themselves as AI workflows, and there's still good reason to believe that as agent experiences improve, they'll get steamrolled anyways... but...
Verticalized low-code workflow tools get to do things that are hard for the Claude Codes of the world. Today the coding envs are built better than most of the non-AI-native vertical teams, but the patterns are congealing and commoditizing. It'll be interesting as the AI side continues to commoditize and the vertical teams get better at it, at which point the verticals get much more valuable again. (And indeed, we see OpenAI and friends hitting ceilings on generic applications and having to lean into top verticals, at least in the B2B world.)
https://www.reddit.com/r/salesforce/comments/1hxxdls/unpopul...
Someone just give me MS Access for the web with an SSO module and let me drive it.
That'd cover 99% of LOB app needs and allow me to actually get shit done without tools that dissolve in my hands or require hordes of engineers to keep running or have to negotiate with a bullshit generator to puke out tens of thousands of lines of unmaintainable javascript crap.
We have achieved nothing in the last 25 years if we can't do that. Everyone who entered the industry since about 2005 appears to have no idea how damn easy it was to actually get stuff done back then.
The Apache foundation or someone ought to target that as a proper Open Source setup.
Can you say more about how easy it was to get stuff done back then? What was actually easier? Was Access just good and you didn't need to deal with building web apps?
Excel is arguably worse, if only because it was more accessible for less patient people. But at least Excel doesn’t offer you an entire armory of footguns at quite the same scale as Access did.
Version control was an issue, yes, but you didn't really need it because ONE PERSON could literally do all the engineering work. You just copied the MDB file and suffixed it with the date. In reality, corrupt databases were a non-issue if you didn't shove MDBs on a network share, and VBA was not a security risk here because the distribution of the MDBs was controlled.
There's a reason people in IT hate Access. It's not because of the technology. It's because of the organizational bad habits the technology enabled.
It ran a 50-seat ERP system that managed over 1000 suppliers and 500 customers, and did all invoicing, inventory/stock management, logistics, and financials. In the hands of people with a clue, that is.
They did replace it with SAP eventually, but at 15x the per-seat cost.
There is still a lot of stuff hiding out there that works like that, is used daily, and has few issues. You just don't hear about it because the people building and operating it really don't give a crap and have no enthusiasm: it's a tool to do a job. As it should be.
One of the hilarious things I've seen recently: an ex-partner of mine's hair salon paid for a SaaS booking system. It's a pile of junk. It doesn't work properly, screws up scheduling and finances, and generally costs more time than doing things some other way would.
They literally went back to a paper bookings diary and just phone or whatsapp people if there's a problem.
> They literally went back to a paper bookings diary and just phone or whatsapp people if there's a problem.
I couldn't book a place the other day because their online booking system was broken. It made me realize why most places and hotels use Booking.com or Airbnb. It's not just about discoverability; getting a booking system to actually work is a really hard problem.
But look at these two examples: MS Access wouldn't help in either case. For a booking system to be useful, people need to access the calendar online. These are things only made possible with web or mobile apps. Booking.com and Airbnb have a combined market cap of around a quarter trillion dollars. That's how valuable a functioning booking system really is.
It has problems, as the other reply suggests, but most of those are easily surmountable in 2026.
This is essentially Rails and Django and so on.
I'll note that Access existed in a completely usable form before the www even existed.
Let AI build apps using these building blocks instead of wasting tokens reinventing the wheel on how interactive tables should work, which chart library to use, and how the frontend talks to the backend securely.
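A sketch of that building-block style using Streamlit, which ships the interactive table, the chart, and the client/server plumbing out of the box (the data here is made up); run it with "streamlit run app.py":

    # Sketch: building blocks instead of reinvented wheels. Streamlit
    # supplies the interactive table, chart, and frontend/backend
    # plumbing; the app only provides the (made-up) data.
    import pandas as pd
    import streamlit as st

    df = pd.DataFrame({"month": ["Jan", "Feb", "Mar"], "orders": [120, 95, 143]})

    st.title("Orders dashboard")
    st.dataframe(df)                                # interactive table
    st.line_chart(df.set_index("month")["orders"])  # chart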
LLMs will make creating low-code apps as easy as normal apps. But there's one constraint: how extensible is the low-code framework?
To me, AI changes the inflection points of build vs. buy a bit for app platforms, but not as much for the other two. Ultimately, AI becomes a huge consumer of the data coming from impromptu databases, and becomes useful when it connects to other platforms (I think this is why there is so much excitement around n8n, but also why Salesforce bought Informatica).
Maybe low-code as a category dies, but just because it is easier for LLMs to produce working code, doesn't make me any more willing to set up a runtime, environment, or other details of actually getting that code to run. I think there's still a big opportunity to make running the code nice and easy, and that opportunity gets bigger if the barriers to writing code come down.
What do you think an LLM is if not no/low-code?
And all the other components, such as MCPs, skills, etc. -- this is all low-code.
And something has to plug all of these into a coherent system: Claude Code, Copilot, etc., which are basically low-code interfaces. Sure, they don't come with a workflow-style designer, but they do the same thing.
As far as the vibe-coded projects go, as someone who has personally made this mistake twice in my career and promised never to make it again: sooner or later the OP will realise that software is a liability, with and without LLMs. It is a security, privacy, maintenance, and general business burden, and a risk that needs to be highlighted on every audit and at every step.
When you start tallying the bills, all of these internal vibe-coded tools will run 10-20x the cost of the original subscriptions, paid indirectly.
An LLM is not low code. It's something that generates the thing that does the thing.
Most of the time it generates 'high' code. That high code looks like hieroglyphics to non-developers.
If it generated low code, then it's possible that non-developers could have something comprehensible to them (at least down as far as the deterministic abstraction presented by the low-code framework).
A strong advantage a platform like retool has in the non-developer market is they own a frictionless deployment channel. Your average non-developer isn't going to learn npm and bash, and then sign up for an account on AWS, when the alternative is pushing a button to deploy the creation the AI has built from your prompt.
In my company, I feel like the last one to this party.
In a way, low-code has been the worst of both worlds: complex, locked-in, not scalable, expensive, with small ecosystems of support for self-learning.
(Context: worked at appsheet which got acquired by Google in 2020)
Existing tools already do a great job if you just want a magical looking prototype but they're not versatile enough for real production applications where those other aspects you mentioned actually matter (deployment, security, networking, maintenance, scalability, lock-in factor, costs...). Existing tools have focused on creating a 'magical' experience at the expense of all the critical stuff that needs to go under the bonnet.
There's a parallel with LLMs as well. You could build great prototypes with LLMs coding fully autonomously from start to finish... But if you want to build a real production system (beyond a certain low degree of complexity), currently, you NEED human involvement. The reason why you need human involvement is because there's just too much complexity, too much code to manage for a real production system. None of the existing low-code tools actually solve that problem of reducing complexity whilst maintaining production-readiness.
Also, I see great value in not having to take care of the runtime itself. Sure, I can write a python script that does what I want much quicker and more effectively with claude code, but there is also a bunch of work to get it to run, restart, log, alert, auth…
Is this a commonly held assumption?
I can get assembly from /dev/urandom for cents on the TB.
Fascinating but not surprising given some of the AI-for-software development changes of late.
n8n allows you to work at a higher level, and working at this level allows you to do things in a way that's more likely to be "correct". While it's not necessarily "difficult" to do integrations with Home Assistant or Discord or any of the million integrations that n8n has, it can still be error-prone, even for experienced developers.
With n8n, I'm pretty sure I could even have my parents set up and, more importantly, debug pipelines to control their thermostat or something. Even if I could get them to prompt Codex or Claude or something, I think it would be hard for them to debug the output if they had to.
Who needs SaaS?
Things I built for internal use pretty quickly:
- Patient matcher
- Great UX crud for some tables
- Fax tool
- Referral tool
- Interactive suite of tools to use with Ashby API
I don't think these nocode tools have much of a future. Even using the nocode tool's version of "AI" was just the AI trying to finagle the nocode tool's featureset to get where I needed it to be, failing most of the time. Much easier to just have Claude Code build it all out for real.
But if I can get my AI to use an off-the-shelf open-source flow orchestrator rather than hand-coding API calls, that is better.
I work on a 'low code' platform (not really, but we do a lot of EDI). This requires a bunch of very normal patterns, so we basically have a mini-DSL for mapping X12 and EDIFACT into other objects.
You guessed it, we have a diagram flow control tool.
It works. Yes, I could write it in JavaScript too... but most of the 'flow control bits' really live inside a small sandbox. Of course, we let you kick out to a sandbox and program if needed.
But for the most part, a good mini-DSL gets you 90% of the way there for us, and we don't reach for programming too often.
So, it's still useful to abstract some stuff.
Could AI write it by hand every time? Yes... but you'd still want all the bells and whistles.
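To illustrate the idea, here's a hedged sketch of that kind of mapping mini-DSL: declarative source-to-target rules with an optional transform, instead of hand-written code per document. The segment names are illustrative, not our actual DSL or real X12:

    # Sketch of a mapping mini-DSL: declarative rules, one generic
    # interpreter. Segment names are illustrative, not real X12.
    MAPPING = [
        {"from": "BEG03", "to": "order.number"},
        {"from": "DTM02", "to": "order.date"},
        {"from": "PO102", "to": "order.qty", "transform": int},
    ]

    def apply_mapping(source: dict, rules: list) -> dict:
        """Build a nested target object from flat source fields."""
        target: dict = {}
        for rule in rules:
            value = source[rule["from"]]
            if "transform" in rule:
                value = rule["transform"](value)
            node = target
            *path, leaf = rule["to"].split(".")
            for key in path:
                node = node.setdefault(key, {})
            node[leaf] = value
        return target

    print(apply_mapping({"BEG03": "PO-1001", "DTM02": "20240131", "PO102": "12"}, MAPPING))
    # {'order': {'number': 'PO-1001', 'date': '20240131', 'qty': 12}}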