I don't know if this is just me being paranoid, but every time I see a phrase like this in an article I feel like it's co-written by an LLM and it makes me mad...
That, but a hell of a lot of it, with fast interconnect!
... one can always dream.
I'm just expressing the general sentiment of distaste for piling stuff upon stuff and holding it together with duct tape, without ever stepping back and looking at what we have, or at least should have, learnt, and where we are today in the technological stack.
I'm semi-serious: there are actually modern processor designs that put this burden on the programmer (or rather their fancy compiler / code generator) in order to keep the silicon simple. See eg https://en.wikipedia.org/wiki/Groq#Language_Processing_Unit
Totally depends on who "us" is and isn't, what problem is being solved, etc. In the aggregate, the trade-off has clearly been beneficial to most people. If what you want to do got traded away, well, you can still dream.
That is, phrasing it as a dream makes it sound like you imagine it would be better somehow. What would be better?
It's quite a different thing from running a general-purpose OS that multiplexes each core across multiple processes, with a hardware-walked page table, TLB, etc.
Obviously you know what you prefer for your laptop.
As we get more and more cores, perhaps the system designs that have evolved may head back toward that simplicity somewhat? Anything above x% CPU usage gets its own isolated, uninterrupted core(s)? Uses low-cost IPC? Hard to speculate with any real confidence.
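For what it's worth, the manual version of that exists today: on Linux you can reserve cores at boot (isolcpus / cpusets) and pin a process onto them. A minimal sketch of just the pinning half, using sched_setaffinity; the choice of core 3 is purely illustrative, and this is manual rather than the automatic "above x% usage" policy speculated about above:

```c
// Illustrative only: pin the calling process to core 3 on Linux.
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);       // start with an empty CPU mask
    CPU_SET(3, &set);     // allow only core 3

    // pid 0 means "the calling process"
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return EXIT_FAILURE;
    }
    printf("pinned to core 3\n");
    return EXIT_SUCCESS;
}
```

Combined with a core that has been isolated from the general scheduler, that gets you most of the way to an "uninterrupted core", but it's something the operator sets up, not something the kernel decides from a usage threshold.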
I think that is largely my qualm with the dream. The only way this really works is if we had never gone with preemptive multitasking, it seems? And that just doesn't seem like a win.
You do have me curious to know if things really do automatically get pinned to a CPU once they're above some threshold. I know that was talked about some; did we actually start doing that?
Yeah that's the perfect use case for current system design. Nobody sane wants to turn that case into an embedded system running a single process with hard deadline guarantees. Your laptop may not be ideal for controlling a couple of tonnes of steel at high speed, for example. Start thinking about how you would design for that and you'll see the point (whether you want to agree or not).
I confess I assumed writing controllers for a couple of tonnes of steel at high speed would not use the same system design as a higher level computer would? In particular, I would not expect most embedded applications to use virtual memory? Is that no longer the case?
For example, real-time guarantees (hard time constraints on how long a particular type of event will take to process) would be easier to provide.
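To make that concrete: mainstream kernels already expose real-time scheduling classes, but the argument is about how much surrounding machinery you have to fight to actually get the bound. A minimal Linux sketch of asking for a fixed-priority class follows; the priority value is arbitrary, and on a stock (non-PREEMPT_RT) kernel this gives soft guarantees at best rather than a hard deadline:

```c
// Rough sketch: request the SCHED_FIFO real-time class for this process.
// Soft guarantees at best on a stock kernel; hard bounds need a dedicated
// or PREEMPT_RT system.
#include <sched.h>
#include <stdio.h>

int main(void) {
    struct sched_param p = { .sched_priority = 80 };  // 1..99 for SCHED_FIFO

    // pid 0 = calling process; SCHED_FIFO = run until we block or yield
    if (sched_setscheduler(0, SCHED_FIFO, &p) != 0) {
        perror("sched_setscheduler (needs CAP_SYS_NICE / root)");
        return 1;
    }
    puts("running under SCHED_FIFO");
    return 0;
}
```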
Put another way, if that would truly be a better place, what is stopping people from building it today?
> The complexity would almost certainly still exist.
That doesn’t follow. A lot of the complexity is purely to achieve the performance we have.
To that end, I was assuming the idea is that we could have faster systems if we didn't have this stuff. If that's not the assumption, I'm curious what the appeal is.