199 points by beardicus 20 hours ago | 41 comments
  • user_783218 hours ago
    > RadiantOS treats your computer as an extension of your mind. It’s designed to capture your knowledge, habits, and workflows at the system layer. Data is interlinked like a personal wiki, not scattered across folders.

    This sounded really interesting... till I read this:

    > It’s an AI-native operating system. Artificial neural networks are built in and run locally. The OS understands what applications can do, what they expose, and how they fit together. It can integrate features automatically, without extra code. AI is used to extend your ability, help you understand the system and be your creative aid.

    (From https://radiant.computer/system/os/)

    That's... kind of a weird thing to have? Other than that, it actually looks nice.

    • glenstein17 hours ago
      I actually don't mind it necessarily. I wonder if the medium-far future of software is a ground-level AI os that spins up special purpose applications on the fly in real time.

      What clashes for me is that I don't see how that has anything to do with the mission statement about getting away from social media and legacy hardware support. In fact it seems kind of diametrically opposite, suggesting intentionally hand-crafted, opinionated architecture and software principles. Nothing about the statement would have led me to believe that AI is the culmination of the idea.

      And again, the statement itself I am fine with! In fact I am against the culture of reflex backlash to vision statements and new ventures. But I did not take the upshot of this particular statement to be that AI was the culmination of the vision.

    • ndiddy17 hours ago
      Most of the text on the site seems LLM written as well. Given that the scope of the project involves making their own programming language, OS, and computing hardware, but they don't seem to have made very much tangible progress towards these goals, I don't understand why they decided to spend time making a fancy project site before they have anything to show. It makes me doubt that this will end up going anywhere.
      • ItsHarper16 hours ago
        They've written an R' compiler in C, and ported its lexer and parser to be self-hosted, with source code for those included in blog posts.

        I'm not a fan of all the LLM and image generator usage either, though.

      • glenstein17 hours ago
        >Most of the text on the site seems LLM written as well.

        I was thinking the same thing. Out of curiosity I pasted it into one of those detection sites and it said 0% AI-written, but the tone of vague transcendence certainly raised my eyebrow.

    • 7thaccount18 hours ago
      Same. I was super excited until I saw the AI stuff you pointed out. I'll have to read more about that. I like the idea of a new OS that isn't just a Linux clone, networking stack that is old school and takes computing in a different direction. I don't have a lot of need for the AI stuff outside of some occasional LLM stuff. I'd like to hear more from the authors on this.

      I also understand that the old BBS way of communicating isn't perfect, but looking into web browsers seems to just be straight up insanity. Surely we can come up with something different now that takes the lessons learned over the past few decades combined with more modern hardware. I don't pretend to know what that would look like, but the idea of being able to fully understand the overall software stack (at least conceptually) is pretty tempting.

    • ASalazarMX15 hours ago
      > Radiance compiler targetting RISC-V ISA. Involves writing an R' compiler in C and then porting it to R'.

      R is a language for statistics and data analysis; I can't understand why they chose it for low-level systems programming when there are modern alternatives like Go or Rust. Maybe it has to do with the AI integration.

      It seems interesting enough to follow, but I'm uncertain about its actual direction.

      Edit: Thanks to people in this thread for pointing out that it's not R, but R'. The language they're creating is called Radiance, so it may be that R' is a subset of it.

      > Radiance is a small statically-typed systems programming language designed for the Radiant platform, targeting the RISC-V RV64GC architecture. Radiance features a modern syntax and design inspired by Rust, Swift and Zig.

      • tonyarkles15 hours ago
        I think R’ is completely separate from R-the-stats-language and more like a cut down version of their Radiance language. Pretty common way to bootstrap a self-hosted runtime.
      • cloudhead15 hours ago
        Yes, R' is "R prime", unrelated to the statistics language. Honestly didn't think about it that much.
        • pcthrowaway6 hours ago
          Honestly putting a single single-quote in the name of your programming language seems like trolling
          • pclmulqdq6 hours ago
            This whole product description with the use of the words "intentional" next to "AI" seems like trolling. There are a lot of very trendy words put next to each other and there are no artifacts.
    • godelski4 hours ago
      Looks like an experiment to me. Which is fine. Why not play around? A NN-based computer is something people have been contemplating for a while. Though it seems more like a solution looking for a problem to me ¯\_(ツ)_/¯ [0]

        > what personal computing could be when designed from first principles.
      
      Actually this bugs me more. I really dislike how frequently people claim "first principles". First principles are those that cannot be reduced further. The phrase is often not used that way, and all that accomplishes is people tricking themselves (or worse, trying to convince others) that these are the simplest components. We invoke first principles in subjects like math and physics, but honestly 99.9% of what we work with in those domains is not derived from first principles. Unless you're starting from the axioms of set theory, or you're in physics trying to derive a ToE (which is what those first principles are), you're not down at that level.

      It bugs me because deriving first principles is an extremely complicated task. It's also a very beneficial exercise, especially when trying to build things from the ground up: constantly asking yourself how things can be broken down even more. First principles are not where you start. Having them should demonstrate that a large amount of work and deep thought has already gone into what you're doing.

      When clicking on that section I see nothing that looks like first principles. What I see is really a manifesto, much of which is actually difficult to distinguish from current computing.

      It's a nice manifesto, but also seems too vague and naïve. Though the latter is generally a feature of manifestos, not exactly a fault

      [0] To me, AI being built in should look more like physics-informed networks, which use the NN more for optimization. You could definitely abstract that out further, and that would be cool and interesting to see. But from the sound of it, it seems more like they're just putting LLMs in.

    • __alexs16 hours ago
      People had similar fears about OLE in Windows 95.
      • tonyarkles15 hours ago
        That’s kind of where my mind went too. They’re pitching this functionality for use by AI, but if it’s actually something like OLE or the Smalltalk browser or something like that where you can programmatically enumerate APIs, this has a lot of potential for non-AI use cases too that I generally find lacking in conventional platforms.
    • d-us-vb18 hours ago
      There are lots of systems that have tried to do something like the first quote. They're usually referred to as "semantic OSes", since the OS itself manages the capturing of semantic links.

      I don't think anyone denies the current utility of AI. A big problem of the current OSes is that AI features are clumsily bolted on without proper context. If the entire system is designed from the ground up for AI and the model runs locally, perhaps many of the current issues will be diminished.

      • palmotea18 hours ago
        > I don't think anyone denies the current utility of AI. A big problem of the current OSes is that AI features are clumsily bolted on without proper context.

        I do. "AI" is not trustworthy enough to be anything but "clumsily bolted on without proper context."

      • sealeck18 hours ago
        Why isn't AI just another application that can be run on the device? Surely we expose the necessary interfaces through the OS and the application goes from there?
    • sleepybrett15 hours ago
      I think it's fine if all the 'ai' is local.

      I haven't read all of the documentation around this project but I hope it's in the same vein as the Canon Cat and the Apple IIGS (and other early computer systems with quick and easy access to some kind of programmable environment). (As an aside, I think Apple tried to keep this going with AppleScript and Automator but didn't quite pull it off.)

      I think there is a weird trick though. General-purpose computers are great; they can do anything, and many people bog down their systems with everything as a result. I feel like past projects like Microsoft Bob and the Canon Cat were also in this general thought pattern. Strip it back, give people the tools they need and very little else.

      I try and follow that pattern also on my work macbook. I only install what I need to do my job. Anything I want to try out gets a deadline for removal. I keep my /Applications folder very light and I police my homebrew installs with a similar zeal.

    • Y_Y12 hours ago
      Good luck running your super-necessary local models without nvidia drivers
      • capyba10 hours ago
        Which came first, the nvidia drivers, or the super-necessary local models that wrote those drivers?
    • CGMthrowaway18 hours ago
      Sounds like it's vibe-coding your entire software stack (data, apps, OS) in real time.
  • Lerc18 hours ago
    I'm interested in the idea of a clean slate hardware/software system. I think being constrained to support existing hardware or software reduces opportunities for innovation on the other.

    I don't see that in this project. This isn't defined by a clean slate. It is defined by properties that it does not want to be.

    Off the top of my head I can think of a bunch of hardware architectures that would require all-new software. There would be amazing opportunities for discovery writing software for these things. The core principles of the software for such a machine could be based upon a solid philosophical consideration of what a computer should be. Not just "One that doesn't have social media" but what are truly the needs of the user. This is not a simple problem. If it should facilitate but also protect, when should it say no?

    If software can run other software, should there be an independent notion of how that software should be facilitated?

    What should happen when the user directs two pieces of software to perform contradictory things? What gets facilitated, and what gets disallowed?

    I'd love to see some truly radical designs. Perhaps a model where processing and memory are one: a very simple core per 1K of SRAM per 64K of DRAM per megabyte of flash. Or machines with 2^n cores, where each core has a direct data channel to every core whose n-bit core ID differs from its own in one bit (plus one for all bits different).

    An n=32 system would have four billion cores, 4 terabytes of RAM, and nearly enough persistent storage, but it would take talking through up to 15 intermediaries for any two arbitrary cores to communicate.
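
    For concreteness, a rough sketch of the wiring I mean (Python, purely illustrative, not tied to any real hardware): each core links to every core whose ID differs in exactly one bit, plus the core whose ID is the bitwise complement.

      # Illustrative only: neighbours of a core in the proposed topology.
      def neighbours(core_id: int, n: int) -> list[int]:
          mask = (1 << n) - 1
          links = [core_id ^ (1 << bit) for bit in range(n)]  # one bit different
          links.append(core_id ^ mask)                        # all bits different
          return links

      # A small n=4 (16-core) example; each core ends up with n+1 = 5 links.
      for core in range(1 << 4):
          print(f"{core:04b} -> {[format(x, '04b') for x in neighbours(core, 4)]}")

    The complement link is what keeps the worst case at roughly n/2 hops instead of n.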

    You could probably start with a much lower n. Then consider how to write software for it that meets the principles and the criteria of how it should behave.

    Different, clean slate, not easy.

    • Aurornis17 hours ago
      Clean slate designs with arbitrarily radical designs are easy when you don’t have to actually build them.

      There are reasons that current architectures are mostly similar to each other: they have evolved over decades of learning and research.

      > Perhaps model where processing and memory are one, A:very simple core per 1k of SRAM per 64k of DRAM per megabytes of flash,

      To serve what goal? Such a design certainly wouldn’t be useful for general purpose computing and it wouldn’t even serve current GPU workloads well.

      Any architecture that requires extreme overhauls of how software is designed and can only benefit unique workloads is destined to fail. See Itanium for a much milder example that still couldn’t work.

      > machines with 2^n cores where each core has a direct data channel to every core with its n-bit core ID being one but different (plus one for all bits different).

      Software isn’t the only place where big-O scaling is relevant.

      Fully connected graph topologies are great on paper, but the number of connections scales quadratically. For a 64-core fully connected CPU topology you would need 2,016 separate data buses.

      Those data buses take up valuable space. Worse, the majority of them are going to be idle most of the time. It’s extremely wasteful. The die area would be better used for anything else.

      > A n=32 system would have four billion cores

      A four billion core system would be the poster child for Amdahl’s law and a great example of how not to scale compute.

      Let’s not be so critical of companies trying to make practical designs.

      • teraflop16 hours ago
        > Software isn’t the only place where big-O scaling is relevant.

        > Fully connected graph topologies are great on paper, but the number of connections scales quadratically. For a 64-core fully connected CPU topology you would need 2,016 separate data buses.

        Nitpick: I don't think the comment you're replying to is proposing a fully-connected graph. It's proposing a hypercube topology, in which the number of connections per CPU scales logarithmically. (And with each node also connected to its diagonal opposite, but that doesn't significantly change the scaling.)

        If my math is right, a 64-core system with this topology would have only 224 connections.
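
        A quick way to sanity-check those numbers (Python, assuming the one-bit-different-plus-complement wiring the parent describes):

          from itertools import combinations

          n = 6  # 2^6 = 64 cores
          cores = range(1 << n)

          def linked(a, b):
              diff = a ^ b
              return bin(diff).count("1") == 1 or diff == (1 << n) - 1

          hypercube_links = sum(1 for a, b in combinations(cores, 2) if linked(a, b))
          fully_connected = 64 * 63 // 2
          print(hypercube_links, fully_connected)  # 224 vs. 2016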

        • Lerc14 hours ago
          This is what I meant. I also like the idea of optical, line-of-sight connections. If you do the hypercube topology, everything a node connects to has a different parity, so you can lay them out on two panels facing each other.
      • d-us-vb17 hours ago
        Perhaps not a true counterpoint, but there are systems like the GA144, an array of 144 Forth processors.

        I think you're missing the point, and I don't think OP is "being critical of companies making practical designs."

        Also, I think OP was imagining some kind of tree-based topology, not a fully connected graph, since he said:

        > ...but it would take talking through up to 15 intermediaries to communicate between any two arbitrary cores.

        • 7thaccount16 hours ago
          Are you aware of anyone who has used that system outside of a hobbyist buying the dev board? I looked into it and the ideas were cool, but no clue how to actually do anything with it.
    • cloudhead15 hours ago
      Thanks for these thoughts -- I agree in principle, but we have to juggle a couple things here: while Radiant is in some ways an experiment, it isn't a research project. There are enough "obvious" things we can do better this time around, given everything we've learned as an industry, that I wouldn't want to leapfrog over this next milestone in personal computer evolution and end up building something a little too unfamiliar to be useful.
      • Lerc13 hours ago
        In that case I think the best advice I can give here is to focus less on the features you dislike in other things and consider the problems caused by those things. Without being encumbered by legacy requirements you are free to make any changes you want, but each change is work. Start at the top with each symptomatic feature and work your way down until you can change the part that causes the symptoms. Some things might require going down to the core. Some could be fixed with top-level changes. Focus on finding what makes things bad (and why) instead of identifying bad things.
        • cloudhead13 hours ago
          That's a nice approach, thanks for the advice.
    • Animats8 hours ago
      > machines with 2^n cores where each core has a direct data channel to every core with its n-bit core ID being one but different (plus one for all bits different).

      nCUBE. 64 to 1024 CPUs in an n-dimensional cube.[1] I played with one a bit when Stanford got one surplus from an oil drilling company. Not very useful.

      It's straightforward to build non-shared-memory MIMD machines, but historically they have not been successes, and they are very difficult to program. Classic example: Sony PS3. Unless the problem is very well matched to the architecture, which we might see for backpropagation and such.

      [1] https://ncube.systems/overview.html

    • ChrisMarshallNY12 hours ago
      BeOS was one.

      When Apple was looking for its "next generation" OS, everyone assumed that Gassée and BeOS were going to be it, but they chose Jobs and the legacy (BSD-based) NeXTSTEP.

      I know that "old is bad" in today's tech world, but, speaking only for myself, I'm glad they made the decision they did. BeOS was really cool, but it was too new.

      • E39M5S628 hours ago
        I don't know if BeOS was too new conceptually. I think a lot of it at the time was that it was just incomplete. Limited or no printing. A slow and incomplete userland networking stack (later replaced by a more performant BONE stack). Incomplete and missing APIs. Single user, no real concept of multiple users.

        I absolutely love BeOS - it remains the pinnacle of user-focused computing for me - but I think it would have been a tough job to base another commercial OS on it. We'll never know what could have happened, but Apple probably made the right choice in buying NeXT.

  • ilaksh17 hours ago
    Very interesting and ambitious project and nice design. I hope the author will be able to comment here.

    I'm interested to hear about the plans or capabilities in R' or Radiance for things like concurrent programming, asynchronous/scheduling, futures, and invisible or implied networking.

    AI is here and will be a big part of future personal computing. I wonder what type of open source accelerator for neural networks is available as a starting point. Or if such a thing exists.

    One of the opportunities for AI is in compression codecs that could provide for very low latency low bandwidth standards for communication and media browsing.

    For users, the expectation will shortly be that you can talk to your computer verbally or send it natural language requests to accomplish tasks. It is very interesting to think how this could be integrated into the OS for example as a metadata or interface standard. Something like a very lightweight version of MCP or just a convention for an SDK filename (since software is distributed as source) could allow for agents to be able to use any installed software by default. Built in embeddings or vector index could also be very useful, maybe to filter relevant SDKs for example.
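
    Purely as a hypothetical sketch (the filename and fields here are invented, not anything Radiant has proposed), the convention could be as simple as each installed app shipping a machine-readable list of the actions it exposes, which an agent discovers at startup:

      # Hypothetical: an agent discovers per-app interface manifests.
      import json
      from pathlib import Path

      def discover_sdks(apps_root: str) -> list[dict]:
          manifests = []
          for path in Path(apps_root).glob("*/sdk.json"):  # invented convention
              manifests.append(json.loads(path.read_text()))
          return manifests

      # A manifest might look like:
      # {"app": "notes", "actions": [{"name": "create_note", "args": ["title", "body"]}]}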

    If content centric data is an assumption and so is AI, maybe we can ditch Google and ChatGPT and create a distributed hash embedding table or something for finding or querying content.

    It's really fun to dream about idealized or future computers. Congratulations for getting so far into the details of a real system.

    One of my more fantasy style ideas for a desktop uses a curved continuous touch screen. The keyboard/touchpad area is a pair of ergonomic concave curves that meet in the middle and level out to horizontal workspaces on the sides. The surface has a SOTA haptic feedback mechanism.

    • cloudhead14 hours ago
      Thanks for your comment! In terms of concurrent programming in Radiance, it's likely I'll go for something inspired by Go's simplicity and Haskell's power with STM[0]. Actors are also on the table, but likely as a library on top of the native system, whatever it is. The important thing is that everything that involves "waiting" be composable in this system: timers, network I/O, IPC, file I/O, etc.
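
      As a rough stand-in for the feel of it (Python's asyncio here purely as illustration; Radiance's actual primitives are still undecided), a timer and a network read are just two values you can wait on together:

        # Illustration only: a timer and a socket read composed under one "wait".
        import asyncio

        async def timeout():
            await asyncio.sleep(2.0)
            return "timed out"

        async def fetch():
            reader, writer = await asyncio.open_connection("example.com", 80)
            writer.write(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
            await writer.drain()
            first_line = (await reader.readline()).decode().strip()
            writer.close()
            await writer.wait_closed()
            return first_line

        async def main():
            tasks = [asyncio.create_task(timeout()), asyncio.create_task(fetch())]
            done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
            for t in pending:
                t.cancel()
            print(next(iter(done)).result())

        asyncio.run(main())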

      For the AI/OS intersection, it is indeed a very interesting design space. The key insight really is that the better the AI knows you, the more helpful it can be to you, so the more you give it access to, the better. However, to be safe, the OS itself needs to be locked down in such a way that personal data cannot leave your device. This is why capabilities-based security is an interesting direction: software should not have access to more than what it needs to operate, and you need fine grained control over that.
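
      A toy sketch of the general idea (not the RadiantOS design, just the shape of capability-scoped access): a program never sees the whole filesystem, only a handle scoped to what it was granted.

        # Toy example: a directory-scoped capability. Invented names, not a real API.
        from pathlib import Path

        class DirCapability:
            def __init__(self, root: str):
                self._root = Path(root).resolve()

            def open(self, relative: str, mode: str = "r"):
                target = (self._root / relative).resolve()
                if self._root != target and self._root not in target.parents:
                    raise PermissionError(f"{relative} is outside this capability")
                return open(target, mode)

        # The OS would hand an app e.g. DirCapability("/home/user/notes") and nothing more.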

      If you have more ideas, please write us!

      [0]: https://en.wikipedia.org/wiki/Software_transactional_memory

      • alexisreadan hour ago
        It looks like you've looked over a number of languages, but I don't see anything about Forth, Forsp, Ante, Steps, Austral, Wat or Vale? I'd suggest they all have useful components to steal from :)

        https://github.com/ablevm/able-forth

        https://xorvoid.com/forsp.html

        https://antelang.org/blog/why_effects/

        https://tinlizzie.org/

        https://news.ycombinator.com/item?id=43419928

        https://github.com/GiacomoCau/wat-js/tree/master

        https://verdagon.dev/grimoire/grimoire

      • Flere-Imsaho13 hours ago
        Please make a REPL front and centre of the system.

        REPLs make computers feel like magic to me.

        • cloudhead13 hours ago
          I agree! I've been thinking about why terminal software is so compelling, and how to make that the default while keeping the system accessible to beginners. I think there's a way to do it, to unify GUI, TUI and CLI.
          • 7thaccount12 hours ago
            Take a look at what Rebol did in the late 90s. They have a DSL called "View" for making user interfaces with your scripts. Rebol itself shipped with a program that came with a ton of sample applications including Tetris. The creator of Rebol (Carl Sassenrath) was also big on the Amiga scene and envisioned what he called iOS (internet operating system, not the Apple iPhone OS) where you would do stuff like load a web page by just running the page's Rebol script iirc.

            It never took off for various reasons. For one, the language was a bit like lisp, but with brackets. More importantly, it was commercial and then closed source for too long when Perl was really taking off.

            Long story short though, whatever language you implement should have extremely high level primitives for simple GUI widgets. Forget Qt and Windows Forms...there HAS to be an easier solution. I can see how Rebol did it and surely you can make something like that possible.
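
            To illustrate the level of abstraction I mean (pseudo-helpers in Python, hypothetical names, nowhere near real Rebol/View syntax), a usable little app should be describable in a handful of declarative lines:

              # Hypothetical sketch of a high-level widget DSL, invented names only.
              def text(label): return ("text", label)
              def field(name): return ("field", name)
              def button(label, on_click): return ("button", label, on_click)
              def view(*widgets): print("would render:", widgets)  # stand-in renderer

              view(
                  text("What's your name?"),
                  field("name"),
                  button("Greet", on_click=lambda state: print("Hello,", state["name"])),
              )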

            • cloudhead2 hours ago
              Yes, good idea — I know about Rebol but haven’t looked at it from that angle.
            • anthk11 hours ago
              Check the Red programming language.
  • efficax17 hours ago
    Like clockwork every year or so someone emerges and says "I'm going to fix computing" and then it never happens. We're as mired in the status quo in computing as we are in politics, and I don't see any way out of it, really.

    Also the website is very low contrast (brighten up that background gray a bit!)

    • benob17 hours ago
      I have been having a lot of fun with the PicoCalc. It's not targeted at end users, but it's fun for developers who want a taste of developing things from first principles. More than anything, it can live independently from your other devices.
      • 7thaccount16 hours ago
        I keep seeing the videos pop up. It does look really cool. I see it has a BASIC interpreter, so I guess it's kinda like a C64?
        • benob14 hours ago
          It comes with a BASIC interpreter, but the thriving community develops plenty of stuff: Python, Lua, Forth, Lisp... all have nice ports which you can play with on the device. There is also a library of software developed for the RP2040 that has been ported, such as a classic Mac emulator. If you feel that a microcontroller is too low-powered, you can plug in a Luckfox Lyra, which runs a proper Linux with 128M of RAM.
          • 7thaccount14 hours ago
            That is really cool! Making me want one even more. I might have fun picking that forth implementation apart.
    • PeaceTed13 hours ago
      Pretty much. I mean, best of luck to them. One has to try if anything is to change but I have seen this kind of thing so many times. Filled with enthusiasm but lacking in execution.

      The whole 'Real artists ship' thing in action.

    • cloudhead14 hours ago
      Gotta keep trying then!
  • moconnor18 hours ago
    The landing page reads like it was written with an LLM.

    Somehow this makes me immediately not care about the project; I expect it to be incomplete vibe-coded filler somehow.

    Odd what a strong reaction it invokes already. Like: if the author couldn’t be bothered to write this, why waste time reading it? Not sure I support that, but that’s the feeling.

    • wrs16 hours ago
      I am very concerned about the long term effects of people developing the habit of mistrusting things just because they’re written in coherent English and longer than a tweet. (Which seems to be the criterion for “sounds like an LLM wrote it”.)
      • 7thaccount16 hours ago
        Haha. This is so true. I'm a bit long-winded myself and once got accused of being AI on here. I just don't communicate like Gen Alpha. I read their site and nothing jumped out as AI although it's possible they used it to streamline what they initially wrote.
      • ASalazarMX15 hours ago
        Wait until the bot herders realize you can create engagement by having a bot complain about texts being LLM-like.
    • Antibabelic3 hours ago
      I don't think it feels particularly LLM-written, I can't find many of the usual tells. However, it is corporate and full of tired cliches. It doesn't matter if it's written by an LLM or not, it's not pleasant to read. It's a self-indulgent sales pitch.
    • cloudhead14 hours ago
      What's odd is how certain people seem to be about their intuition about what is and isn't written by an LLM.
    • cactusplant737417 hours ago
      It seems to be popular here because of the ideas it proposes.
  • JSR_FDED17 hours ago
    I love these guys for trying to do this. I just hope they’ve already made their money and can afford to continue doing this.

    It’s every engineer’s dream - to reinvent the entire stack, and fix society while they’re at it (a world without social media, sign me up!).

    Love the retro future vibes, complete with Robert Tinney-like artwork! (He did the famous Byte Magazine covers in the late 70s and early 80s).

    https://tinney.net/article-this-1981-computer-magazine-cover...

    • cloudhead14 hours ago
      Thanks for the encouragement!
  • mwcampbell18 hours ago
    The thing that always worries me about these clean-slate designs is the fear that they'll ignore accessibility for disabled people, e.g. blind people, and then either the system will remain inaccessible, or accessibility will have to be retrofitted later.
    • Lerc18 hours ago
      I'm actually OK with that if it's truly serving the purpose of what a computer should be.

      I think those principles would embody the notion that the same thing cannot serve all people equally. Simultaneously, for people to interact, interoperability is required. For example, I don't think everyone should use the same word processor. It is likely that blind people would be served best by a word processor designed by blind people. Interoperable systems would aim to neither penalise nor favour users for using a different program for the same task.

      • glenstein17 hours ago
        I also think for the purpose of piloting a new system I don't mind people chasing whatever aspect of that mission most inspires them. Anything aspiring to be a universal paradigm needs to account for accessibility to have legitimacy in being "for everyone" but that doesn't necessarily have to be the scope when you're starting.

        I'd like to think that prioritizing early phase momentum of computing projects leads to more flowers blooming, and ultimately more accessibility-enabled projects in the long run.

    • nicksergeant18 hours ago
      It's funny you mention that because the first thing I thought when viewing this page was "is this a loading state? why is everything grey?".
      • debo_18 hours ago
        Ahem. It's _radiant_ grey.
      • jccalhoun17 hours ago
        I thought "is there one of those popups covering things and greying out the page until you close it?"
      • Flere-Imsaho14 hours ago
        I like the design. Minimal and loaded fast. I haven't dug into the pages' code but I'm guessing there's little or no bloated JavaScript.
    • d-us-vb18 hours ago
      Yeah, this is concerning. Although, if the system is architected well, accessibility features ought to be something that can be added as an extension.

      What is a screen reader but something that can read the screen? It needs metadata from the GUI, which ought to be available if the system is correctly architected. It needs navigation order, which ought to be something that can be added later with a separate metadata channel (since navigation order should be completely decoupled from the implementation of the GUI).
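
      For illustration only (the field names here are invented), the kind of per-widget metadata a GUI could expose so a screen reader can be layered on later:

        # Illustrative only: a minimal accessibility node exposed alongside widgets.
        from dataclasses import dataclass, field

        @dataclass
        class AccessibleNode:
            role: str        # "button", "text", "list", ...
            label: str       # what a screen reader would speak
            nav_order: int   # traversal order, decoupled from visual layout
            children: list["AccessibleNode"] = field(default_factory=list)

        dialog = AccessibleNode("dialog", "Save changes?", 0, children=[
            AccessibleNode("button", "Save", 1),
            AccessibleNode("button", "Discard", 2),
        ])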

      The other topic of accessibility a la Steve Yegge: the entire system should be approachable to non-experts. That's already in their mission statement.

      I think that the systems of the past have trained us to expect a lack of dynamism and configurability. There is some value to supporting existing screen-readers, like ORCA, since power users have scripts and whatnot. But my take is that if you provide a good mechanism that supports the primitive functionality and support generalized extensibility, then new and better systems can emerge organically. I don't use accessibility software, but I can't imagine it's perfect. It's probably ripe for its own reformation as well.

      • throwup23817 hours ago
        > What is a screen reader but something that can read the screen?

        Good screen readers track GUI state which makes it hard to tack on accessibility after the fact. They depend on the identity of the elements on the screen so they can detect relevant changes.

    • PeaceTed13 hours ago
      For all the complaints leveled at Apple, the accessibility of their OSes is astounding.

      It is said if we live long enough, we all will be disabled at some point.

  • system7rocks18 hours ago
    This looks like an advertisement for a new season of Severance or something.

    The image on this page is wild: https://radiant.computer/principles/

    Of course, I am intrigued by open architecture. Will they be able to solve graphic card issues though?

    • d-us-vb18 hours ago
      You won't be bringing your own graphics card to RadiantOS. According to one of the pages, they want to design their own hardware and the graphics will be provided by a memory-mapped FPGA.

      If your question is about the general intricacies in graphics that usually have bugs, then I'd say they have a much better chance at solving those issues than other projects that try to support 3rd party graphics hardware.

    • flobosg17 hours ago
      That image is giving me some Evangelion vibes: https://wiki.evageeks.org/Ramiel
      • glenstein17 hours ago
        I am fascinated by the art but it seems bizarrely overdefined relative to the software vision laid out in the text. That is, the amount of richly imagined imagery dramatically outpaces the overall coherence of the vision in every other respect.

        And as with the text, the art feels AI-generated. In fact I even think it's quite beautiful for what it is, but it reminds me of "dark fantasy" AI-generated art on TikTok.

        I have nothing against an aesthetic vision being the kernel of inspiration for a computing paradigm (I actually think the concept art process is a fantastic way to ignite your visionary mojo, and I'm flashing back to amazing soviet computing design art).

        But I worry about the capacity and expertise to follow through, given the vagueness of the text and the (at least) strongly-suggestive-of-AI text and art, which might reflect limited capacity and effort even in generating the website, let alone building out any technology.

        • efskap4 hours ago
          Oh it's absolutely AI art. The shadows in the middle of the image here are totally messed up: https://radiant.computer/system/

          Ironic for a project claiming to be rooted in human creation

    • edm0nd18 hours ago
      my outie enjoys trying experimental operating systems
    • analog837418 hours ago
      Am I hallucinating or is that black diamond in the sky a little malproportioned?
      • zamadatix15 hours ago
        It's an AI generated image
  • rmonvfer12 hours ago
    This looks great, but I’m a bit confused about what actually exists right now. The site uses the present tense (“it is”, “we are”), but I couldn’t find anything after browsing and clicking around for about 15 minutes. From the “log”, it sounds like only the parser for R Prime is implemented, and that R Prime itself is just a precursor to the actual language that will be used to develop the whole system (from scratch?). Does that mean that R Prime has to be fully developed before work on its successor can even begin?

    If anything is already working, where’s the code? Can people contribute yet?

    Not trying to nitpick, but it’s hard to tell what’s real vs. vaporware (beyond the author’s very impressive abilities for systems/language design and writing)

    The website also mentions a device but I get that’s many years away too, right? I mean how long will it take to actually develop everything that’s described in the website?

  • 7thaccount18 hours ago
    >"It's a tool for personal computing where every application and every surface, exists as code you can read, edit, and extend. It's a system you can truly own"

    This sounds a lot like a Smalltalk running as the OS until they started talking about implementing a systems language.

  • postexitus17 hours ago
    The whole thing feels like it's generated by LLM. Some interesting sounding titbits here and there, no specifics ever, weird trance images.
  • dclowd990117 hours ago
    I'm having a hard time following the through line on these first principles. Likely it's just a "me" problem because I have status quo system designs set in my head, but here are some ideas that seem conflicting to me:

    > Hardware and software must be designed as one

    In here, they describe an issue with computers is how they use layers of abstraction, and that actually hides complexity. But...

    > Computers should feel like magic

    I'm not sure how the authors think "magic" happens, but it's not through simplicity. Early computers were quite simple, but I can guarantee most modern users would not think they were magical to use. Of course, this also conflicts with the idea that...

    > Systems must be tractable

    Why would a user need to know how every aspect of a computer works if they're "magic" and "just work"?

    Anyway, I'm really trying not to be cynical here. This just feels like a list written by someone who doesn't really understand how computers or software came to work the way they do.

    • iansteyn17 hours ago
      Yeah I felt the contradictions here too. Doesn’t the feeling of “magic” directly proceed from abstraction and non-tractability (or at least, as you say, not needing to understand every part of the system)?
      • glenstein17 hours ago
        >Doesn’t the feeling of “magic” directly proceed from abstraction and non-tractability

        Yes, but I think it can also carry a kind of liminal impression of an internal logic.

        • iansteyn16 hours ago
          Would you mind elaborating?
          • glenstein13 hours ago
            I agree that the "magic" feeling involves abstraction from nuts and bolts, but a kind of notable responsiveness to, say, preferred trains of thought that are optimal for a workflow or project management or for rich functional interaction. I use the word "liminal" in the sense of the aesthetic term "liminal spaces" to indicate a presence of a kind of lightweight logic not necessarily fully articulated.
  • anonzzzies17 hours ago
    I like this type of stuff, but it cannot go anywhere. A real clean-slate system, free from all the crap we have piled on the things we use every day, must be something simple that cannot be interesting to the masses, and must be understandable and programmable by one person if things go bad. The only way I can see that happening is by creating something underpowered just for fun, and we already have that: actual (old and new) hardware, emulators, and virtual CPUs. As soon as it gets any volume or viability, it will be taken over by commercial entities that will eat it, and the only way to prevent that is to make it obsolete to begin with.
  • Animats14 hours ago
    This is a lot like One Laptop Per Child, but more vague. What is it supposed to do? Why have one?

    It's not that the system doesn't come with a browser. It's that the browser is apparently built into the operating system. Remember IE 6?

    If you're going to rethink computing devices, the next thing is probably a big screen, a camera, and a good microphone array. No keyboard. You just talk to it and occasionally gesture, and it organizes and helps you.

  • underdeserver18 hours ago
    I don't understand why they're particular about writing their own esoteric language. If they want people to buy and engage with it, software has to be the gateway, and that's easier to write in a language people know.
    • glenstein17 hours ago
      The more I look at it and think about it, the more the whole thing, language and images together, feels like concept art. Which, if that's the case, is fine for what it is. But if that's the case, I do think it's at least slightly disrespectful to readers to be coy about how real any of this is.
    • 7thaccount18 hours ago
      It's prob a balance. Sure, C is king....but if you are starting from scratch...do you REALLY need it or could you design something even better? Maybe, maybe not.

      I've programmed for a long time, but always struggled with Assembly and C, so take my views with a grain of salt.

      • underdeserver18 hours ago
        I don't think C is king anymore. They could use Rust with no_std, or Zig, or C++. Anything (low level enough) is better than an entirely new language.
        • 7thaccount18 hours ago
          I missed this earlier: "Radiance features a modern syntax and design inspired by Rust, Swift and Zig."
    • neonnoodle14 hours ago
      Next you'll tell me urbit, nock, and hoon never caught on
    • PeaceTed13 hours ago
      Yes, but have you considered making a trapezoid wheel? Maybe it will work, or not...
  • palmotea18 hours ago
    The AI art makes it look like vapor.
    • TheOtherHobbes18 hours ago
      So does the AI text.

      They want to implement custom hardware with support for audio, video, everything, a completely new language, a ground-up OS, and also include AI.

      Sounds easy enough.

  • Freebytes14 hours ago
    I thought they were talking about redesigning hardware from the ground up. There will always be history and baggage if you are working with the same computer instruction sets. From the very beginning at the level of assembly, there is history and baggage. This is not ambitious enough.
  • LarsDu8817 hours ago
    I read clean slate architecture and "no baggage" and thought someone was designing a non-von Neumann architecture machine with a novel clockless asynchronous CPU, but nope, it's a custom OS running on RISC-V.
    • ASalazarMX15 hours ago
      I thought that too. It would have been such an interesting thing, I would have (modestly) contributed to their Kickstarter even if it didn't produce a commercial product in the end.
  • xorvoid17 hours ago
    Honestly, this seems rambling and unfocused. It's like a grab-bag of recent-ish buzzwords.

    The task that has been set is gigantic. Despite that, they've decided to make it even harder by designing a new programming language on top of it (this seems to be all the work done to date).

    The hardware challenge alone is quite difficult. I don't know why that isn't the focus at this stage. It is as if the author is suggesting that only the software is a problem, when some of the biggest issues are actually closed hardware. Sure, Linux is not ideal, but it's hardly relevant in comparison.

    I think this project suffers from doing too much abstract thinking without studying existing concrete realities.

    I would suggest tackling one small piece of the problem in the hardware space, or building directly on some of the work others have done.

    I don't disagree with the thesis of the project, but I think it's a MUCH bigger project than the author suggests and would/will require a concentrated effort from many groups of people working on many sub-projects.

    • junon17 hours ago
      In the OSdev world that's called an "Alta Lang" problem.

      https://wiki.osdev.org/Alta_Lang

    • cloudhead12 hours ago
      Thanks for the feedback. My goal at this stage is to put the full vision out there, and refine it to create a sense of direction, a north star under which to work. A lot of the specifics are undecided/unknown and that's ok at this stage. I'm building this bottom up using the skills I have (software), and as I bring on hardware folks, those aspects of the system will start to clarify as well.

      Whether the project is bigger than I think or not is not so relevant for me personally, I will attempt it because I don't see a future in personal computing that I want to be a part of otherwise.

  • SLWW13 hours ago
    This is intensely ambitious.

    I just kind of want to see what comes out of this.

    RISC-V FTW, and if they get lightweight, local-first AIs with a decent site map of each program, that could be really unique and fun, if not annoying to use in practice.

    I see the future they want so badly.

  • Sateeshm17 hours ago
    It's a miracle that the internet and computers work with each other as well as they do.
  • junon17 hours ago
    The exokernel makes this a nonstarter if you ever want to run untrusted code, as it implies hardware takeovers, compromised peripherals/TPMs/drives/etc. especially when it claims to be AI first.
  • jasonjmcghee18 hours ago
    Out of curiosity: there's a focus on local LLMs, then talk of no GPU, only an FPGA. Those feel at odds. But maybe I'm out of the loop on how far local LLMs on custom hardware have come?
    • ItsHarper16 hours ago
      They're still at the compiler stage. LLM features and hardware seem far enough away that it's reasonable to wait to evaluate if that combination is actually practical.
  • maherbeg17 hours ago
    Love ambitious projects like this!

    I wonder why the Unix standard doesn't start dropping old syscalls and standards? Does it have to be strictly backwards compatible?

  • saulpw16 hours ago
    The most important question I have for any project like this is: who is making it? And this website does not answer this question.
    • timerol14 hours ago
      Alexis Sellier is the author of all of the posts under /log
  • mwilcox12 hours ago
    Urbit by BlueSky, with LLMs
  • Atomic_Torrfisk15 hours ago
    > RadiantOS is a single address space operating system.

    But why? We use virtual address spaces for a reason.

  • largbae18 hours ago
    If it doesn't have a browser, how will you visit radiant.computer on your Radiant Computer?
    • 7thaccount18 hours ago
      You wouldn't I don't think (assuming this thing ever got off the ground - huge assumption), but is that really a problem? I think the web page is more to make normalish people aware that this hypothetical ecosystem would be out there. From within that ecosystem they could have a different page.
  • MomsAVoxell17 hours ago
    Look, if someone hasn't done it already, I see absolutely no reason not to build a Lua-based IPFS process, port it absolutely everywhere, and use it to host its own operating system.

    Why does it always need to be so difficult? We already have the tools. Our methods, constantly changing and translblahbicatin' unto the falnords, snk snk... this kind of contrafabulation needs to cease.

    Just sayin'.

    IPFS+Lua. It's all we really need.

    Yes yes, new languages are best languages, no no, we don't need it to be amazing, just great.

    It'll be great.

    • ilaksh12 hours ago
      You can build a first version of that in a week or less.

      I think it could be great. The challenge for me has always been getting other people to use things I've built.

      But why not try it?

  • chambored16 hours ago
    All indications point to this GitHub user [1], Alexis Sellier [2], as the engineer behind this. Good luck with such an ambitious goal. I'd love to see it.

    [1]: https://github.com/cloudhead [2]: https://cloudhead.io/

  • welcome_dragon6 hours ago
    Does anyone else find this just completely lacking in substance? I've read all the comments here and it seems like I must be missing something.

    This just reads like complete BS vaporware.

    Why should any of us take it seriously?

  • lostlogin17 hours ago
    Coincidence or borrows from Asimov?

    The Prime Radiant featured in Foundation.

  • barrenko17 hours ago
    Was hoping this was an evolution on the daylight computer.
  • Starlevel00418 hours ago
    Why does the website look like my monitor is dying? Black on dark grey, seriously?
    • alejoar18 hours ago
      Indeed. Not very.. radiant.
  • kush_xg0116 hours ago
    Nice idea, all the best!
  • slater18 hours ago
    So what does its UI look like?
    • d-us-vb18 hours ago
      Based on its /log page, it doesn't look like it has one yet. They're just now implementing the implementation language, R'.
      • palmotea18 hours ago
        > They're just now implementing the implementation language, R'.

        They haven't done their due diligence: there's already a well-known language named R: https://www.r-project.org/. The prime isn't sufficient disambiguation.

        • 7thaccount18 hours ago
          I assume they know but don't care. Either way, that is a bad choice. I think "Rad" would be a good name, but maybe they already are using that for something else.

          Edit: where did you see it's called "R"? It looks like they call the system language "Radiance" : https://radiant.computer/system/radiance/

        • debo_17 hours ago
          I assumed R and R' are prototypical bootstrapping variants of what will be the full-fledged Radiant language, but that wasn't explicitly written anywhere.
        • exasperaited18 hours ago
          well-known "language" (air quotes)
      • LarsDu8817 hours ago
        They called their language "R"??? Robert Gentleman will throw a hissy fit.
  • riversflow16 hours ago
    > No social networking

    What an airball. Social networks are the single most valuable aspect of computers and the internet. It is a dark world where we all just talk to LLMs instead of each other.

  • max_16 hours ago
    love it!

    I wish I could work there!

  • lowsong18 hours ago
    > Computing machines are instruments of creativity, companions in learning, and partners in thought. They should amplify human intention.

    An admirable goal. However putting that next to a bunch of AI slop artwork and this statement...

    > One of our goals is to explore how an A.I.-native computer system can enhance the creative process, all while keeping data private.

    ...is comically out of touch.

    The intersection between "I want simple and understandable computing systems" and "I want AI" is basically zero. (Yes, I'm sure some of you exist, my point is that you're combining a slim segment of users who want this approach to tech with another slim segment of users who want AI.)

    • cloudhead12 hours ago
      In five years time, "I want AI" will be 99% of computer users. Sure, neural nets are opaque, but having an AI assistant running locally and helping you with your tasks does not make your computer any harder to understand.
  • Imustaskforhelp15 hours ago
    I am wondering what Linux distro/ISO comes with a live-boot GUI desktop environment but without a web browser (I only know of Tiny Core Linux, but that is too barebones; I wanted to build my own ISO on top of it, but it was a little hard to install packages when I tried following its remastering guide, etc.).

    I even tried to search it on distrowatch with the negate option in their search but it seemed to be broken.

    I needed it once to build my own "studyOs", and in the process I went down a deep rabbit hole about hobbyist Linux distros and their importance.

    I then settled on MXLinux because of what I wrote below

    People recommend cubic etc. but personally I recommend MXLinux. Its default linux snapshot feature was exactly what I was looking for and it just worked.

    I glanced over this and was excited, thinking: oh great, this could be a Linux ISO with no browser, similar to Tiny Core. But I found out through the comments that its focus on LLMs etc. is very vague and weird for what I am reading.

    I just feel like it's seriously not getting the idea. I want to effectively dissect this post's tenets as someone who has used Linux for just a few years.

    My first impressions were positive, then negative, and now they're mixed, really.

    I feel like this intends to become so hardware-focused that I am not even sure what they mean by this. From my limited knowledge, Linux does a lot of work simply to boot into a predictable environment on almost every computer, to the point that there are now things like Nix that can build your system deterministically.

    I still think there is a point in making something completely new instead of yet another Unix, from what I can tell, but my hopes aren't very high, sorry. You would have to convince me why the world would be better off with this instead of Linux, and their notes on "why not Linux" still leave me with mixed thoughts.

    > Linux is a monolithic kernel. All drivers, filesystems, and core services share the same privilege space. A single bug, eg. a bad pointer dereference in a GPU driver can corrupt kernel memory and take down the entire system.

    Can't drivers be loaded at runtime, and aren't there ways to keep a failure from taking down the entire system? I think this is just how a monolithic kernel works, no?

    I read more discussion of monolithic vs. micro-kernels on the Tanenbaum–Torvalds debate wiki page [1], and here is something I think is apt here:

    > Torvalds, aware of GNU's efforts to create a kernel, stated "If the GNU kernel had been ready last spring, I'd not have bothered to even start my project: the fact is that it wasn't and still isn't."

    Someone else in the Usenet group also called GNU Hurd vaporware, and I think there is some truth to that: the Hurd team had been working on Hurd far longer than Linus had been working on Linux (from the same wiki page).

    Another line I want to share from the wiki: "Different design goals get you different designs."

    I was going to criticize the Radiant computer, but hey, it's open source; nobody's stopping you from working on it. And that line was said in defense of Linux back then, so it can surely defend this too.

    But at the same time, my concern with this or any such project is it becoming vaporware. Linux is very big and actually good enough for most users. I don't think the world can have a perfect OS, but it can have a good-enough one, and for the end user Linux is exactly that. It's open source and genuinely good at what it does; there is no denying that. You could live your whole life using Linux, IMO. It's beautiful.

    I used to defend NetBSD etc. or hate systemd, but the truth of the matter is that nobody's forcing you to use systemd or NetBSD; you could damn well run a server without them. Still, the mass adoption has convinced me that at the sysadmin level, Linux, maybe even Debian or systemd in general, has its gains.

    I think Linux is really, really, really good; it's just the best, IMO, but I will still try out things like FreeBSD, OpenBSD, etc. I genuinely love it so much. It's honestly wild, even a fever dream, when you think about it that something like Linux actually exists. It's so brilliant, and the ecosystem around it is just chef's kiss.

    One can try, and these are your developer hours, but I just don't want to see things turn into vaporware, so I will ask one question: how do you prevent this project from becoming vaporware? I am sure this isn't the first time someone has proposed the ideal system, and it won't be the last either.

    Edit: Sorry this got long. I got a little carried away reading the wiki article; it's so good.

    [1]: https://en.wikipedia.org/wiki/Tanenbaum%E2%80%93Torvalds_deb...

    • cloudhead12 hours ago
      Thanks for your comment.

      I do think Linux is good enough (it's what I use daily), but I'm a software person. I don't think linux is good enough for the average person or for kids learning how to use computers. It's hard to use and quite unfriendly, even distros like Ubuntu.

      The second point is simply that I think we can do better. Progress does not stop here, but most people are afraid to take on big problems, so they never try.

      • Imustaskforhelp10 hours ago
        > It's hard to use and quite unfriendly, even distros like Ubuntu.

        I am no fan of Ubuntu. But I'd like to know in more detail what you mean by this, if possible.

        Regarding the point that Linux isn't good enough, I think the reason it's not being used more widely is the power of defaults.

        Chromebooks technically ship linux iirc and nobody's complaining about linux there. Android is also technically linux and the same goes there

        Theoretically I feel like Modern Linux is still the same, but there are still some problems which might require going into a terminal

        The problem with the terminal is that I think people fear it, maybe because it's the unknown. People usually don't interact with the terminal, yet it becomes second nature, heck for some first nature, once they embrace it.

        It's a very blurry, chaotic line between going full hardcore Linux and switching back to Windows and being a "normie", in this sense.

        I don't know if there is something that linux could measurably do without affecting one set of users with another.

        I'd say a person born in the era of terminals wouldn't feel very uncomfortable with them, simply because that was what was familiar to them.

        The key concept is the tension between the unknown and the familiar, which stresses even us right now.

        I am familiar with Linux and so I feel a little resistant to the unknown, in the same way the MINIX creator might have felt towards the creation of Linux at the time, as in the earlier quote.

        We have tried creating movements to gather attention, but selling a message is hard when eyes are being captured and pulled away by scrolling algorithms. Some meme like "67" gets more attention than Linux in mainstream media by a huge margin, lol.

        I personally feel like more eyeballs need to be on Linux, and there needs to be a place that actually summarizes its benefits, but to each their own; there are many ways people explore uncharted territory for the first time.

        > The second point is simply that I think we can do better. Progress does not stop here, but most people are afraid to take on big problems, so they never try.

        I do think maybe we can do better, but also that modern Linux might be something like a stable equilibrium. I am not competent enough right now to say.

        > but most people are afraid to take on big problems, so they never try.

        I would say, even after all this, go for it. Linus himself might have felt the same way about GNU Hurd going nowhere and decided to take things into his own hands to make our squishy penguin boy known as Linux. This is your developer time and effort, and the best we can do is give feedback. I hope you take both negative and positive feedback in positive ways, since interpretation is the only thing in our hands.

        Maybe there will be some fair critiques along the way that help you improve things, and that is always something really nice.

        I personally feel like we can do nothing and everything to change our society at the same time, in a weird superposition, and the only way to find out is to actually try; but one can look down each road as far as one can, something like what Robert Frost said in "The Road Not Taken".

        Good luck.

  • KissSheep18 hours ago
    [dead]