It's the closest thing to a Unix successor we ever got, taking the "everything is a file" philosophy to another level and letting you easily share those files over the network to build distributed systems. Accessing remote resources is easy and robust on Plan9, while on other systems we need to install specialized software with poor interoperability for each individual use case.
Plan9 also had some innovative UI features, such as mouse chording to edit text, nested window managers, the Plumber to run user-configurable commands on known text patterns system-wide, etc.
Its distributed nature should have meant it's perfect for today's world with mobile, desktop, cloud, and IoT devices all connected to each other. Instead, we're stuck with operating systems that were never designed for that.
There are still active forks of Plan9 such as 9front, but the original from Bell Labs is dead. The reasons it died are likely:
- Legal challenges (Plan9 license, pointless lawsuits, etc.) meant it wasn't adopted by major players in the industry.
- Plan9 was a distributed OS during a time when having a local computer became popular and affordable, while using a terminal to access a centrally managed computer fell out of fashion (though the latter sort of came back in a worse fashion with cloud computing).
- Bad marketing and positioning itself as merely a research OS meant they couldn't capitalize on the .com boom.
- AT&T lost its near-endless source of telephone revenue. Bell Labs was sold multiple times over the following years, and a lot of the Unix/Plan9 guys went to other companies like Google.
The reason Plan 9 died a swift death was that, unlike Unix – which hardware manufacturers could license for a song and adapt to their own hardware (and be guaranteed compatibility with lots of Unix software) – Bell Labs tried to sell Plan 9, as commercial software, for $350 a box.
(As I have written many times in the past: <https://news.ycombinator.com/item?id=22412539>, <https://news.ycombinator.com/item?id=33937087>, and <https://news.ycombinator.com/item?id=43641480>)
More than the price tag, the problem is that Plan 9 wasn't really released until 2004.
It's just unlikely that it will get as big of a following as Linux has.
One might want to, e.g., have fine control over how a network connection is handled. You can abstract that as a file, but it becomes increasingly complicated and can make API design painful.
I would say almost nothing can be cleanly abstracted as a file. That's why we got ioctl (https://en.wikipedia.org/wiki/Ioctl), which is a bad API: a call just means "do something with this file descriptor", with only convention introducing some consistency.
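As a small illustration (a sketch using Python's stdlib fcntl wrapper on Linux/macOS; run it in a terminal), querying the terminal size: the request code and the byte layout of the argument are pure convention that the call itself can't express or check.

import fcntl, struct, sys, termios

# ioctl just says "do something with this file descriptor"; TIOCGWINSZ and the
# four-unsigned-short result layout are conventions you have to know up front.
buf = struct.pack("HHHH", 0, 0, 0, 0)
rows, cols, _, _ = struct.unpack("HHHH", fcntl.ioctl(sys.stdout, termios.TIOCGWINSZ, buf))
print(f"terminal: {rows} rows x {cols} cols")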
exec 5<>/dev/tcp/www.google.com/80   # open a bidirectional TCP connection as fd 5
echo -e "GET / HTTP/1.1\r\nhost: www.google.com\r\nConnection: close\r\n\r\n" >&5   # write the HTTP request
cat <&5   # read the response
ls /dev/tcp
It is an abstraction in GNU Bash.

Pretending everything is a file was never a good idea and is based on a mistaken understanding of computing. The everything-is-an-object phase the industry went through was much closer to reality.
Consider how you represent a GUI window as a file. A file is just a flat byte array at heart, so:
1. What's the data format inside the file? Is it a raw bitmap? Series of rendering instructions? How do you communicate that to the window server, or vice-versa? What about ancillary data like window border styles?
2. Is the file a real file on a real filesystem, or is it an entry in a virtual file system? If the latter then you often lose a lot of the basic features that makes "everything is a file" attractive, like the ability to move files around or arrange them in a user controlled directory hierarchy. VFS like procfs are pretty limited. You can't even add your own entries like adding symlinks to procfs directories.
3. How do you receive callbacks about your window? At this point you start to conclude that you can't use one file to represent a useful object like a window, you'd need at least a data and a control file where the latter is some sort of socket speaking some sort of RPC protocol. But now you have an atomicity problem.
4. What exactly is the benefit again? You won't be able to use the shell to do much with these window files.
And so on. For this reason Plan9's GUI API looked similar to that of any other OS: a C library that wrapped the underlying file "protocol". Developers didn't interact with the system using the file metaphor, because it didn't deliver value.
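To make point 3 above concrete, here is a purely hypothetical sketch of driving a window through a data-plus-control file pair; the paths and command strings are invented for illustration and do not correspond to any real Plan 9 (or other) interface.

import os

# Hypothetical window-as-files layout: /win/1/ctl takes text commands,
# /win/1/data takes the window contents in some agreed-upon format.
ctl = os.open("/win/1/ctl", os.O_WRONLY)
data = os.open("/win/1/data", os.O_WRONLY)

os.write(ctl, b"resize 800 600\n")              # nothing checks this string until runtime
os.write(data, b"...bitmap or drawing ops...")  # and nothing ties it atomically to the resize

os.close(data)
os.close(ctl)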
All the post-UNIX operating system designs ignored this idea because it was just a bad one. Microsoft invested heavily in COM and NeXT invested in the idea of typed, IDL-defined Mach ports.
COM has been legacy tech for decades now. Even Microsoft's own security teams publish blog posts enthusiastically explaining how they found this strange ancient tech from some Windows archaeological dig site, lol. Maybe one day I'll be able to mint money by doing maintenance consulting for some old DCOM based systems, the sort of thing where knowing what an OXID resolver is can help and AI can't do it well because there's not enough example code on GitHub.
Anyone who has to deal with Windows programming quickly discovers that COM is not the legacy tech people on the Internet make it out to be.
Your remark kind of proves the point that the Web is now the ChromeOS platform, since you could have mentioned the browser instead.
The only reasons people think it's a good idea in the first place is a) every programming language can read files so it sort of gives you an API that works with any language (but a really bad one), and b) it's easy to poke around in from the command line.
Essentially it's a hacky cop-out for a proper language-neutral API system. In fairness it's not like Linux actually came up with a better alternative. I think the closest is probably DBus which isn't exactly the same.
Maybe something like FIDL is a proper solution but I have only read a little about it: https://fuchsia.dev/fuchsia-src/get-started/learn/fidl/fidl
#next #never #forget #thieves
My graduation project was porting a visualisation framework from Objective-C/NeXTSTEP to Windows.
At the time, my X setup was a mix of AfterStep or windowmaker, depending on the system I was at.
https://learn.microsoft.com/en-us/windows/win32/com/com-tech...
- MacOS 8. Not the Linux thing, but Copland. This was a modernized version of the original MacOS, continuing the tradition of no command line. Not having a command line forces everyone to get their act together about how to install and configure things. Probably would have eased the transition to mobile. A version was actually shipped to developers, but it had to be covered up to justify the bailout of NeXT by Apple to get Steve Jobs.
- Transaction processing operating systems. The first one was IBM's Customer Information Control System. A transaction processor is a kind of OS where everything is like a CGI program - load program, do something, exit program. Unix and Linux are, underneath, terminal oriented time sharing systems.
- IBM MicroChannel. Early minicomputer and microcomputer designers thought "bus", where peripherals can talk to memory and peripherals look like memory to the CPU. Mainframes, though, had "channels", simple processors which connected peripherals to the CPU. Channels could run simple channel programs, and managed device access to memory. IBM tried to introduce that with the PS/2, but they made it proprietary and that failed in the marketplace. Today, everything has something like channels, but they're not a unified interface concept that simplifies the OS.
- CPUs that really hypervise properly. That is, virtual execution environments look just like real ones. IBM did that in VM, and it worked well because channels are a good abstraction for both a real machine and a VM. Storing into device registers to make things happen is not. x86 has added several layers below the "real machine" layer, and they're all hacks.
- The Motorola 680x0 series. Should have been the foundation of the microcomputer era, but it took way too long to get the MMU out the door. The original 68000 came out in 1978, but then Motorola fell behind.
- Modula. Modula 2 and 3 were reasonably good languages. Oberon was a flop. DEC was into Modula, but Modula went down with DEC.
- XHTML. Have you ever read the parsing rules for HTML 5, where the semantics for bad HTML were formalized? Browsers should just punt at the first error, display an error message, and render the rest of the page in Times Roman. Would it kill people to have to close their tags properly?
- Word Lens. Look at the world through your phone, and text is translated, standalone, on the device. No Internet connection required. Killed by Google in favor of hosted Google Translate.
You have things backwards. The Copland project was horribly mismanaged. Anybody at Apple who came up with a new technology got it included in Copland, with no regard to feature creep or stability. There's a leaked build floating around from shortly before the project was cancelled. It's extremely unstable and even using basic desktop functionality causes hangs and crashes. In mid-late 1996, it became clear that Copland would never ship, and Apple decided the best course of action was to license an outside OS. They considered options such as Solaris, Windows NT, and BeOS, but of course ended up buying NeXT. Copland wasn't killed to justify buying NeXT, Apple bought NeXT because Copland was unshippable.
XHTML appeals to the intuition that there should be a Strict Right Way To Do Things ... but you can't use that unforgiving framework for web documents that are widely shared.
The "real world" has 2 types of file formats:
(1) file types where consumers cannot contact/control/punish the authors (open-loop): HTML, pdf, zip, csv, etc. The common theme is that the data itself is more important than the file format. That's why Adobe Reader will read malformed pdf files written by buggy PDF libraries. And both 7-Zip and Winrar can read malformed zip files with broken headers (because some old buggy Java libraries wrote bad zip files). MS Excel can import malformed csv files. E.g. the Citi bank export to csv wrote a malformed file and it was desirable that MS Excel imported it anyway, because the raw data of dollar amounts was more important than the incorrect commas in the csv file -- and -- I have no way of contacting the programmer at Citi to tell them to fix their buggy code that created the bad csv file.
(2) file types where the consumer can control the author (closed-loop): programming language source code like .c, .java, etc or business interchange documents like EDI. There's no need to have a "lenient forgiving" gcc/clang compiler to parse ".c" source code because the "consumer-and-author" will be the same person. I.e. the developer sees the compiler stop at a syntax error so they edit and fix it and try to re-compile. For business interchange formats like EDI, a company like Walmart can tell the vendor to fix their broken EDI files.
XHTML wants to be in group (2) but web surfers can't control all the authors of .html so that's why lenient parsing of HTML "wins". XHTML would work better in a "closed-loop" environment such as a company writing internal documentation for its employees. E.g. an employee handbook can be written in strict XHTML because both the consumers and authors work at the same company. E.g. can't see the vacation policy because the XHTML syntax is wrong?!? Get on the Slack channel and tell the programmer or content author to fix it.
On the other hand, imagine a world where Chrome would slowly start to phase out its quirks modes. Something like a yellow address bar and a "Chrome cannot guarantee the safety of your data on this website, as the website is malformed" warning message. Turn it into a red bar and a "click to continue" after 10 years, remove it altogether after 20 years. Suddenly it's no longer that one weird customer who is complaining, but everyone - including your manager. Your mistakes are painfully obvious during development, so you have a pretty good incentive to properly follow the spec. You make a mistake on a prominent page and the CTO sees it? Well, guess you'll be adding an XHTML validator to your CI pipeline next week!
It is very tempting to write a lenient parser when you are just one small fish in a big ecosystem, but over time it will inevitably lead to the degradation of that very ecosystem. You need some kind of standards body to publish a validating reference parser. And like it or not, Chrome is big enough that it can act as one for HTML.
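As for the XHTML validator in the CI pipeline mentioned above, a bare-bones well-formedness gate is easy to wire in. A minimal sketch with Python's standard library (this checks XML well-formedness only, not full XHTML validity against a DTD or schema):

import sys
import xml.etree.ElementTree as ET

# Fail the build if any given page is not well-formed XML (and so not valid XHTML).
status = 0
for path in sys.argv[1:]:
    try:
        ET.parse(path)
    except ET.ParseError as err:
        print(f"{path}: {err}")
        status = 1
sys.exit(status)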
This depends. If you are a small creator with a unique corruption then you're likely out of luck. The problem with big creators is "fuck you, I do what I want."
>"Chrome cannot guarantee the safety of your data on this website, as the website is malformed" warning message.
This would appear on pretty much every website. And it would appear on websites that are no longer updated, so they'd functionally disappear from any updated browser. In addition, the 10-20 year thing just won't work at US companies; simply put, if they get too much pressure on it next quarter, it's gone.
>Your mistakes are painfully obvious during development,
Except this isn't how a huge number of websites work. They get HTML from many sources and possibly libraries. Simply put, no one is going to follow your insanity, which is why XHTML never worked in the first place. They'll drop Chrome before they drop the massive amount of existing and potential bugs out there.
>And like it or not, Chrome is big enough that it can act as one for HTML.
And hopefully in a few years between the EU and US someone will bust parts of them up.
> We rank valid XHTML higher
It doesn’t even have to be true!
No, the reason is that Adobe’s implementation never bothered to perform much validation, and then couldn’t add strict validation retroactively because it would break too many existing documents.
And it’s really the same for HTML.
And yet Apple decided that no, this time we do it the "right" way[1], stuck with plain HTML/CSS/JS and frankly we're all better for it.
[1] I'm aware this is a massive oversimplification and there were more cynical reasons behind dropping the flash runtime from iOS, but they're not strictly relevant to this discussion.
Amen. Postel’s Law was wrong:
https://datatracker.ietf.org/doc/html/rfc9413
We stop at the first sign of trouble for almost every other format, we do not need lax parsing for HTML. This has caused a multitude of security vulnerabilities and only makes it more difficult for pretty much everybody.
The attitude towards HTML5 parsing seemed to grow out of this weird contrarianism that everybody who wanted to do better than whatever Internet Explorer did had their head in the clouds and that the role of a standard was just to write down all the bugs.
I, for one, am kinda happy that XHTML is dead.
[0]: By <bold> I mean <b> and by <italic> I mean <i>, and the reason it's not valid HTML is that the order of closing is not reverse of the order of opening as it should properly be.
HTML is not a set of instructions that you follow. It’s a terrible format if you treat it that way.
XHTML allows you to use XML and <bold> <italic> are just XML nodes with no schema. The correct form has been and will always be <b> and <i>. Since the beginning.
Unless you know about tree structures, it doesn’t make sense to the average person why you would have to stop and then restart a span of formatting options just because an unrelated attribute changed.
And that’s why XHTML failed - HTML is human-writable.
However, because you can be both bold and italic, these should just be flags, and that's how they are now, which is why <b> and <i> can be closed in either order. I doubt they are just flags in the code of the browsers though, because tags support styles and stuff.
I get that non-programmers were writing HTML before, but these days not even most programmers write it.
Am I right in assuming that even you didn't notice the problem the first time you looked at it?
> Out of order closure should definitely error out
Hitchens's razor: "What can be asserted without evidence can also be dismissed without evidence."
Granted, I could ensure that my code was valid XHTML, but I’m a hypermeticulous autistic weirdo, and most other people aren’t. As much as XHTML “made sense”, it was completely unworkable in reality, because most people are slobs. Sometimes, worse really is better.
You're overlooking how incentives and motivations work. The gp (and their employer) wants to integrate the advertisement snippet -- even with broken XHTML -- because they receive money for it.
The semantic data ("advertiser's message") is more important than the format ("purity of perfect XHTML").
Same incentives would happen with a jobs listing website like Monster.com. Consider that it currently has lots of red errors with incorrect HTML: https://validator.w3.org/nu/?doc=https%3A%2F%2Fwww.monster.c...
If there was a hypothetical browser that refused to load that Monster.com webpage full of errors because it's for the users' own good and the "good of the ecosystem"... the websurfers would perceive that web browser as user-hostile and would choose another browser that would be forgiving of those errors and just load the page. Job hunters care more about the raw data of the actual job listings so they can get a paycheck rather than invalid <style> tags nested inside <div> tags.
Those situations above are a different category (semantic_content-overrides-fileformatsyntax) than a developer trying to import a Python library with invalid syntax (fileformatsyntax-Is-The-Semantic_Content).
EDIT reply to: >Make the advertisement block an iframe [...] If the advertiser delivers invalid XHTML code, only the advertisement won't render.
You're proposing a "technical solution" to avoid errors instead of a "business solution" to achieve a desired monetary objective. To re-iterate, they want to render the invalid XHTML code so your idea to just not render it is the opposite of the goal.
In other words, if rendering imperfect-HTML helps the business goal more than blanking out invalid XHTML in an iframe, that means HTML "wins" in the marketplace of ideas.
The real problem is the benefits of xhtml are largely imaginary so there isn't really a motivation to do that work.
Make the advertisement block an iframe with the src attribute set to the advertiser's URL. If the advertiser delivers invalid XHTML code, only the advertisement won't render.
I know which I, as a user, would prefer. I want to use a browser which lets me see the website, not just a parse error. I don’t care if the code is correct.
Also, the whole argument falls apart the moment the banner has a javascript error too. Should we attempt to run malformed code just in case? Or should a browser start shipping shims and compatibility fixes for known broken websites like microsoft do for windows apps?
It would kill the approachability of the language.
One of the joys of learning HTML when it tended to be hand-written was that if you made a mistake, you'd still see something just with distorted output.
That was a lot more approachable for a lot of people who were put off "real" programming languages because they were overwhelmed by terrible error messages any time they missed a bracket or misspelled something.
If you've learned to program in the last decade or two, you might not even realise just how bad compiler errors tended to be in most languages.
The kind of thing where you could miss a bracket on line 47 but end up with a compiler error complaining about something 20 lines away.
Rust (in particular) got everyone to up their game with respect to meaningful compiler errors.
But in the days of XHTML? Error messages were arcane, you had to dive in to see what the problem actually was.
What happens?
Even today, after years of better error messages, the strict validator at https://validator.w3.org/check says:
Error Line 22, Column 4: end tag for "b" omitted, but OMITTAG NO was specified
What is line 22? </p>
It's up to you to go hunting back through the document, to find the un-closed 'b' tag. Back in the day, the error messages were even more misleading than this, often talking about "Extra content at end of document" or similar.
Compare that to the very visual feedback of putting this exact document into a browser.
You get more bold text than you were expecting, the bold just runs into the next text.
That's a world of difference, especially for people who prefer visual feedback to reading and understanding errors in text form.
Try it for yourself, save this document to a .html file and put it through the XHTML validator.
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<?xml-stylesheet href="http://www.w3.org/StyleSheets/TR/W3C-WD.css" type="text/css"?>
<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">
<head>
<title>test XHTML 1.0 Strict document</title>
<link rev="made" href="mailto:gerald@w3.org" />
</head>
<body>
<p>
This is a test XHTML 1.0 Strict document.
</p>
<p>
See: <a href="./">W3C Markup Validation Service: Tests</a>
<b>huh
Well, isn't that good
</p>
<hr />
<address>
<a href="https://validator.w3.org/check?uri=referer">valid HTML</a><br />
<a href="../../feedback.html">Gerald Oskoboiny</a>
</address>
</body>
</html>
<!doctype html>
<title>…</title>
<p>Important: Do <strongNOT</strong> come into the office tomorrow!
Or: <!doctype html>
<title>…<title>
<p>Important: Do <strong>NOT</strong> come into the office tomorrow!
Firefox displays naught but the error:
XML Parsing Error: mismatched tag. Expected: </b>.
Location: file:///tmp/x.xhtml
Line Number 22, Column 3:
</p>
--^
Chromium displays this banner on top of the document up to the error:
This page contains the following errors:
error on line 22 at column 5: Opening and ending tag mismatch: b line 19 and p
Below is a rendering of the page up to the first error.
Chromium is much more helpful in the error message, directing the user to both line 19 and 22. It also made the user-friendly choice to render up to the error.
In the context of XHTML, we should also keep in mind that Chrome post-dates XHTML by almost a decade.
Really, neither has particularly great handling of errors in anything XML. None of it is better than minimally maintained, a lot of it has simply been unmaintained for a decade or more.
This was also part of the initial draw of `clang`.
- I think without the move to NeXT, even if Jobs had come back to Apple, they would never have been able to get to the iPhone. iOS was - and still is - a unix-like OS, using unix-like philosophy, and I think that philosophy allowed them to build something game-changing compared to the SOTA in mobile OS technology at the time. So much so, Android follows suit. It doesn't have a command line, and installation is fine, so I'm not sure your line of reasoning holds strongly. One thing I think you might be hinting at though that is a missed trick: macOS today could learn a little from the way iOS and iPadOS is forced to do things and centralise configuration in a single place.
- I think transaction processing operating systems have been reinvented today as "serverless". The load/execute/quit cycle you describe is how you build in AWS Lambdas, GCP Cloud Run Functions or Azure Functions.
- Most of your other ideas (with an exception, see below), died either because of people trying to grab money rather than build cool tech, and arguably the free market decided to vote with its feet - I do wonder when we might next get a major change in hardware architectures again though, it does feel like we've now got "x86" and "ARM" and that's that for the next generation.
- XHTML died because it was too hard for people to get stuff done. The forgiving nature of the HTML specs is a feature, not a bug. We shouldn't expect people to be experts at reading specs to publish on the web, nor should it need special software that gatekeeps the web. It needs to be scrappy, and messy and evolutionary, because it is a technology that serves people - we don't want people to serve the technology.
This is not true. The reason it died was because Internet Explorer 6 didn’t support it, and that hung around for about a decade and a half. There was no way for XHTML to succeed given that situation.
The syntax errors that cause XHTML to stop parsing also cause JSX to stop parsing. If this kind of thing really were a problem, it would have killed React.
People can deal with strict syntax. They can manage it with JSX, they can manage it with JSON, they can manage it with JavaScript, they can manage it with every back-end language like Python, PHP, Ruby, etc. The idea that people see XHTML being parsed strictly and give up has never had any truth to it.
JSX is processed during the build step, XHTML is processed at runtime, by the browser.
Another possibility they were exploring was buying BeOS, which would have been pretty interesting because it was an OS built from scratch in the 90's without any of the cruft from the 70's.
Also, the only things specific to NeXT that survived in Mac OS X and iOS were Objective-C and the whole NextStep APIs, which honestly I don't think is a great thing. It was pretty cool in the 90's but when the iPhone was released it was already kinda obsolete. For the kernel, Linux or FreeBSD would have worked just the same.
By "cruft" you mean "lessons learned", right?
You won't get far with POSIX on any of the platforms.
When iOS was announced, Google scrambled to re-do the entire concept
It has always appeared, though, as you suggest, that the project quickly pivoted to candy-bar touch phones following the release of the original iPhone. It's worthwhile to remember that the industry wasn't nearly as convinced that touching glass was the future of mobile typing in 2007 as it later became, and the sales volume of Blackberrys back then was often incorrectly cited as evidence to support the case against touch.
> https://www.bgr.com/tech/iphone-vs-android-original-google-b...
Android team ended up delaying Android release by a year: https://appleinsider.com/articles/13/12/19/googles-reaction-...
Honestly I'm disappointed the promised XHTML5 never materialized along side HTML5. I guess it just lost steam.
The HTML Standard supports two syntaxes, HTML and XML. All browsers support XML syntax just fine—always have, and probably always will. Serve your file as application/xhtml+xml, and go ham.
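For anyone who wants to try that locally, a minimal sketch using Python's built-in http.server (the file extension, host, and port here are arbitrary choices, not anything the standard mandates):

from http.server import HTTPServer, SimpleHTTPRequestHandler

class XHTMLHandler(SimpleHTTPRequestHandler):
    def guess_type(self, path):
        # Serve .xhtml with the XML media type so the browser uses its strict
        # XML parser instead of the forgiving text/html one.
        if path.endswith(".xhtml"):
            return "application/xhtml+xml"
        return super().guess_type(path)

HTTPServer(("localhost", 8000), XHTMLHandler).serve_forever()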
If you appreciate Modula's design, take a look at Nim[1].
I remember reading the Wikipedia page for Modula-3[2] and thinking "huh, that's just like Nim" in every other section.
Probably not, but what would be the benefit of having more pages fail to render? If xhtml had been coupled with some cool features which only worked in xhtml mode, it might have become successful, but on its own it does not provide much value.
I think those benefits are quite similar to having more programs failing to run (due to static and strong typing, other static analysis, and/or elimination of undefined behavior, for instance), or more data failing to be read (due to integrity checks and simply strict parsing): as a user, you get documents closer to valid ones (at least in the rough format), if anything at all, and additionally that discourages developers from shipping a mess. Then parsers (not just those in viewers, but anything that does processing) have a better chance to read and interpret those documents consistently, so even more things work predictably.
It is like Windows jumping through hoops to support backwards compatibility even with buggy software. The interest of the customer is that the software runs.
What if the browser renders it incorrectly? If a corrupt tag combination leads to browser X parsing "<script>" as inline text but browser Y parsing it as a script tag, that could lead to serious security issues!
Blindly guessing at the original author's intent whenever you encounter buggy content is a recipe for disaster. Sometimes it is to the user's benefit to just refuse to render it.
This was, maybe, true some 10 years ago. Now even old Windows programs (Paint, WordPad) do not run on newer Windows.
> The interest of the customer is that the software runs
Yes, but testing is expensive and we are Agile. /s
Eh, that's a really weird example as those are components of the operating system that are replaced with the OS upgrade.
Rhetorical question: should the browser display the page even if it is commented out?
There is some bar for what is expected to work.
If all browsers would consistently error out on unclosed tags, then it would definitely force developers to close tags, it would force it become common knowledge, second nature.
If devs couldn't even get RSS right, a web built on XHTML was a nonstarter.
I actually have, and it's not that bad.
If anything, the worst part is foreign content (svg, mathml) which have different rules more similar to xml but also not the same as xml.
Just as an aside, browsers still support xhtml, just serve with application/xhtml+xml mime type, and it all works including aggressive error checking. This is very much a situation where consumers are voting with their feet not browser vendors forcing a choice.
IMO there's a place for XHTML as a generated output format, but I think HTML itself should stay easy to author and lightweight as a markup format. Specifically when it comes to tag omission, if I'm writing text I don't want to see a bunch of `</li>` or `</p>` everywhere. It's visual noise, and I just want a lightweight markup.
BeOS. I like to daydream about an alternate reality where it was acquired by Sony, and used as the foundation for PlayStation, Sony smartphones, and eventually a viable alternative to Windows on their Vaio line.
Neal Stephenson, https://web.stanford.edu/class/cs81n/command.txt :
> Imagine a crossroads where four competing auto dealerships are situated… (Apple) sold motorized vehicles--expensive but attractively styled cars with their innards hermetically sealed, so that how they worked was something of a mystery.
> (Microsoft) is much, much bigger… the big dealership came out with a full-fledged car: a colossal station wagon (Windows 95). It had all the aesthetic appeal of a Soviet worker housing block, it leaked oil and blew gaskets, and it was an enormous success.
> On the other side of the road… (Be, Inc.) is selling fully operational Batmobiles (the BeOS). They are more beautiful and stylish even than the Euro-sedans, better designed, more technologically advanced, and at least as reliable as anything else on the market--and yet cheaper than the others.
> … and Linux, which is right next door, and which is not a business at all. It's a bunch of RVs, yurts, tepees, and geodesic domes set up in a field and organized by consensus. The people who live there are making tanks.
It would be years before OS X could handle things that wouldn't cause BeOS to break a sweat, and BeOS has a bit of a responsiveness edge that OS X still can't seem to match (probably due to the PDF rendering layer).
GCC nowadays has Modula-2 as official frontend, not sure how much it will get used though.
XHTML, yep I miss it, was quite into it back then.
Similarly, I'm also gutted that the QNX 1.44MB demo floppy didn't survive past the floppy era - they had some really good tech there. Imagine if they pitched it as a rescue/recovery OS for PCs, you could've run it entirely from the UEFI. Or say as an OS for smart TVs and other consumer smart devices.
TIL: what microchannel meant by micro and channel.
Also it had OS independent device-class drivers.
And you could stuff a new CPU on a card and pop it right in. Went from a 286+2MB to a 486dx2+32MB.
People being too lazy to close the <br /> tag was apparently a gateway drug into absolute mayhem. Modern HTML is a cesspool. I would hate to have to write a parser that's tolerant enough to deal with all the garbage people throw at it. Is that part of the reason why we have so few browsers?
Your chronology is waaaaaaaaaaaay off.
<BR> came years before XML was invented. It was a tag that didn’t permit children, so writing it <BR></BR> would have been crazy, and inventing a new syntax like <BR// or <BR/> would have been crazy too. Spelling it <BR> was the obvious and reasonable choice.
The <br /> or <br/> spelling was added to HTML after XHTML had already basically lost, as a compatibility measure for porting back to HTML, since those enthusiastic about XHTML had taken to writing it and it was nice having a compatible spelling that did the same in both. (In XHTML you could also write <br></br>, but that was incorrect in HTML; and if you wrote <br /> in HTML it was equivalent to <br /="">, giving you one attribute with name "/" and value "". There were a few growing pains there, such as how <input checked> used to mean <input checked="checked">—it was actually the attribute name that was being omitted, not the value!—except… oh why am I even writing this, messy messy history stuff, engines doing their own thing blah blah blah, these days it’s <input checked="">.)
Really, the whole <… /> thing is more an artefact of an arguably-misguided idea after a failed reform. The absolute mayhem came first, not last.
> I would hate to have to write a parser that's tolerant enough to deal with all the garbage people throw at it.
The HTML parser is magnificent, by far the best spec for something reasonably-sized that I know of. It’s exhaustively defined in terms of state machines. It’s huge, far larger than one would like it to be because of all this compatibility stuff, but genuinely easy to implement if you have the patience. Seriously, go read it some time, it’s really quite approachable.
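For flavor, here's a toy sketch in that state-machine style — just a data / tag-open / tag-name fragment; the real tokenizer has dozens of states plus all the recovery rules, so treat this as an illustration of the shape, not the spec:

# Toy, spec-flavored tokenizer: one named state, one character at a time.
# Handles only plain text and simple tags like <b> and </b>.
def tokenize(html):
    state, text, name, tokens = "data", "", "", []
    for ch in html:
        if state == "data":
            if ch == "<":
                if text:
                    tokens.append(("text", text))
                    text = ""
                state = "tag-open"
            else:
                text += ch
        elif state == "tag-open":
            name = ch
            state = "tag-name"
        elif state == "tag-name":
            if ch == ">":
                tokens.append(("tag", name))
                name = ""
                state = "data"
            else:
                name += ch
    if text:
        tokens.append(("text", text))
    return tokens

print(tokenize("a<b>bold</b>c"))
# [('text', 'a'), ('tag', 'b'), ('text', 'bold'), ('tag', '/b'), ('text', 'c')]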
This is untrue. This is the first public draft of XHTML from 1998:
> Include a space before the trailing / and > of empty elements, e.g. <br />, <hr /> and <img src="karen.jpg" alt="Karen" />.
— https://www.w3.org/TR/1998/WD-html-in-xml-19981205/#guidelin...
Meanwhile, local files with the doctype would be treated as XHTML, so people assumed the doctype was all you needed. So everyone who tried to use XHTML didn't realize that it would go back to being read as HTML when they upload it to their webserver/return it from PHP/etc. Then, when something went wrong/worked differently than expected, the author would blame XHTML.
Edit: I see that I'm getting downvoted here; if any of this is factually incorrect I would like to be educated please.
None of that is correct.
It was perfectly spec. compliant to label XHTML as text/html. The spec. that covers this is RFC 2854 and it states:
> The text/html media type is now defined by W3C Recommendations; the latest published version is [HTML401]. In addition, [XHTML1] defines a profile of use of XHTML which is compatible with HTML 4.01 and which may also be labeled as text/html.
— https://datatracker.ietf.org/doc/html/rfc2854
There’s no spec. that says you need to parse XHTML served as text/html as HTML not XHTML. As the spec. says, text/html covers both HTML and XHTML. That’s something that browsers did but had no obligation to.
The mismatched doctype didn’t trigger quirks mode. Browsers don’t care about that. The prologue could, but XHTML 1.0 Appendix C told you not to use that anyway.
Even if it did trigger quirks mode, that makes no difference in terms of tag soup. Tag soup is when you mis-nest tags, for instance <strong><em></strong></em>. Quirks mode was predominantly about how it applied CSS layout. There are three different concepts being mixed up here: being parsed as HTML, parsing tag soup, and doctype switching.
The problem with serving application/xhtml+xml wasn’t anything to do with web servers. The problem was that Internet Explorer 6 didn’t support it. After Microsoft won the browser wars, they discontinued development and there was a five year gap between Internet Explorer 6 and 7. Combined with long upgrade cycles and operating system requirements, this meant that Internet Explorer 6 had to be supported for almost 15 years globally.
Obviously, if you can’t serve XHTML in a way browsers will parse as XML for a decade and a half, this inevitably kills XHTML.
> In addition, [XHTML1] defines a profile of use of XHTML which is compatible with HTML 4.01 and which may also be labeled as text/html.
If you read this carefully, you'll see that it's not saying that text/html can be used to label XHTML. It's saying that you can use text/html if you write your XHTML in such a way that it's compatible with HTML 4.01, because the browser will parse and interpret it as HTML.
You're correct that the doctype wasn't the reason it was treated as tag soup. It was instead because of the parts of XHTML that are not directly compatible with HTML 4.01.
The mismatch between local files and websites served as text/html was very real and I experienced it myself. It's curious that you'd think I'd make it up. There were differences in behavior, especially when JavaScript was involved (notably: Element.tagName is all-uppercase in HTML but lowercase in XHTML) and it is absolutely the case that developers like myself blamed this on XHTML.
Edit: you asked why. I first saw it at SELF where Chris DiBona showed it to me and a close friend. It was awesome. Real time translation, integration of various types of messaging, tons of cool capabilities, and it was fully open source. What made it out of Google was a stripped down version of what I was shown, the market rejected it, and it was a sad day. Now, I am left with JIRA, Slack, and email. It sucks.
It made it seem needlessly complicated, and effectively erased all the positives.
Mailing lists use hierarchical threads and they haven't gone away.
In a sense Wave still exists but was split into multiple products, so I wouldn’t say it’s “dead”. The tech that powered it is still used today in many of Google’s popular products. It turns out that having separate interfaces for separate purposes is just more user friendly than an all-in-one.
https://github.com/shano/Wave-ServerAdmin
It's been 16 years. I should probably archive this..
Even the watered-down version of wave was something I used at my host startup, it was effectively our project management tool. And it was amazing at that.
I don't know how it would fare compared to the options available today, but back then, it shutting down was a tremendous loss.
I downloaded the open-source version of the server to see if I could build a product around it, but it came with a serious limitation: The open-source server did not persist any data. That was a complete non-starter for me.
At that point I suspected it wasn't going anywhere. My suspicions were confirmed when I sat near some Wave team members at an event, and overheard one say, with stars in his eyes, "won't it be groovy when everyone's using Wave and..."
---
Cool concept, though.
Q: Do they have non-human shareholders I don't know about, or do they have shareholders who lack qualities present in most living human beings?
I remember being excited by wave when the demo hit but never had a use for what it offered at that point in my career.
Google Picasa: Everything local, so fast, so good. I'm never going to give my photos to G Photos.
Google Hangouts: Can't keep track of all the Google chat apps. I use Signal now.
Google G Suite Legacy: It was supposed to be free forever. They killed it, tried to make me pay. I migrated out of Google.
Google Play Music: I had uploaded thousands of MP3 files there. They killed it. I won't waste my time uploading again.
Google Finance: Tracked my stocks and funds there. Then they killed it. Won't trust them with my data again.
Google NFC Wallet: They killed it. Then Apple launched the same thing, and took over.
Google Chromecast Audio: It did one thing, which is all I needed. Sold mine as soon as they announced they were killing it.
Google Chromecast: Wait, they killed Chromecast? I did not know that until I started writing this..
Killing Google Reader affected a relatively small number of users, but these users disproportionately happened to be founders, CTOs, VPs of engineering, social media luminaries, and people who eventually became founders, CTOs, etc. They had been painfully taught to not trust Google, and, since that time, they didn't. And still don't.
They had a core set of ultra-connected users who touched key aspects of the entire tech industry. The knowledge graph you could have built out of what those people read and shared…
They could have just kept the entire service running with, what, 2 software engineers? Such a waste.
The thing is: I guess they didn't see a good way to monetize it (according to their "metrics"), while the product itself had relatively high OpEx and was something of a niche thingy.
Picking up the pieces after Reader was impossible because the entire RSS ecosystem imploded with it. Almost every single news site decided that with killing Reader, they wouldn't bother maintaining their RSS feeds, leaving them basically all "legacy" until they irrevocably break one day and then get shut down for not wanting to get maintained.
like theoldreader and Inoreader, which explicitly copied the columnar interfaces, non-RSS bookmarklet content saving, item favoriting, friend-of-a-friend commenting and quasi-blog social sharing features, and mobile app sync options via APIs? Or NewsBlur, which did all of that _and also_ added user-configurable algorithmic filtering? Or Feedly, which copied Reader's UX but without the social features? or Tiny Tiny RSS and FreshRSS, which copied Reader's UX as self-hosted software?
theoldreader remains the most straightforward hosted ripoff of Google Reader, right down to look and feel, and hasn't changed much in more than a decade. Tiny Tiny is very similar, and similarly unchanging. FreshRSS implemented some non-RSS following features. So did NewsBlur, but as it always has, it still struggles with feed parsing and UI performance.
Inoreader and Feedly both pivoted toward business users and productivity to stay afloat, with the former's ditching of social features leading to another exodus of people who'd switched to it after Google Reader folded.
You can argue whether it's as good as GPM or not, but it's false to imply that your uploaded music disappeared when Google moved to YouTube Music. I made the transition, and all of my music moved without a new upload.
My music was deleted.
I also need to sell my Google Chromecast with Google TV 4K. Brand new, still in its shrink wrap. Bought it last year, to replace a flaky Roku. It was a flaky HDMI cable instead. I trust Roku more than Google for hardware support.
I genuinely thought all the chromecast audios I owned were useless bricks and was looking around for replacements and then they just started working again from an OTA update. Astounding. I assume someone got fired for taking time away from making search worse to do this.
(edit: https://www.techradar.com/televisions/streaming-devices/goog...)
Of course another question how long they will honor that commitment.
Unfortunately the last public version has a bug that randomly swaps face tags, so you end up training on the wrong people's faces just enough to throw it all off, and the recognition becomes effectively worthless on thousands of family photos. 8(
Digikam is a weak sauce replacement that barely gets the job done.
still have many domains on there, all with gmail
Google killed a lot of things to consolidate them into more "integrated" (from their perspective) product offerings. Picasa -> Photos, Hangouts -> Meet, Music -> YT Premium.
No idea what NFC Wallet was, other than the Wallet app on my phone that still exists and works?
The only one I'm not sure about is Chromecast - a while back my ones had an "update" to start using their newer AI Assistant system for managing it. Still works.
Two ways. Gradually, then suddenly.
- Ernest Hemingway, The Sun Also Rises
They have something called Google TV Streamer now, so for me it's more of a rebrand than really killing a product.
Oh, and a metric crapton of ads it shows you.
Is there another app where I can store this locally?
The difference is they no longer store the data on their servers, it's stored on your phone (iPhone/Android)
https://support.google.com/maps/answer/6258979
That way, they can't respond to requests for that data by governments as they don't have it.
I can look on my phone and see all the places I've been today/yesterday, etc
Using it on a daily basis.
Edit: Missed the "locally" part. Sorry no suggestions. Maybe Garmin has something?
I don't like the thought of providing Google thousands of personal photos for their AI training. Which will eventually leak to gov't agencies, fraudsters, and criminals.
Which particular thing called Hangouts? There were at least two, frankly I’d say more like four.
Google and Microsoft are both terrible about reusing names for different things in confusing ways.
> Can't keep track of all the Google chat apps.
And Hangouts was part of that problem. Remember Google Talk/Chat? That was where things began, and in my family we never wanted Hangouts, Talk/Chat was better.
Allo, Chat, Duo, Hangouts, Meet, Messenger, Talk, Voice… I’ve probably forgotten at least two more names, knowing Google. Most of these products have substantial overlap with most of the rest.
Hangouts had trouble scaling to many participants. Google Meet is fine, and better than e.g. MS Teams.
Legacy suite, free forever? Did they also promise a pony?..
Play Music: music is a legal minefield. Don't trust anybody commercial who suggests you upload music you did not write yourself.
Finance: IDK, I still get notifications about the stocks I'm interested in.
NFC Wallet: alive and kicking, I use it literally every day to pay for subway.
Can't say anything about Chromecast. I have a handful of ancient Chromecasts that work. I don't want any updates for them.
Why didn’t you quit Google after, say, the third product you used got canned?
I should’ve realized when that recent update broke them for like a week, then they brought them all back online, but suddenly much buggier.
On the other hand, for every flash game made there were about ten thousand flash-based ads, and nearly as many websites that used flash poorly for things like basic navigation (remember flash-based website dropdown menus?). And for a few years it seemed like every single restaurant with a website was using flash for the entire thing, and the results were borderline unusable in the best cases. And let's not forget that as long as flash was dominant, it was choking out the demand to get proper video support into browsers. Flash-based video players performed like dog shit and made life on Linux a real chore.
It was a plague on the web, you couldn't zoom, select text, go back, just a black box ignoring everything about your web browser.
Killing it was probably the best thing Jobs ever did.
Flash was the last thing that got people excited for the Web generally
These are terrible for maintainability, but excellent for usability.
On the whole, I'd say it was easily a loss for the greater web that web programming left the citizen-programmer behind. (By requiring them all to turn into hamfisted front-end javascript programmers...)
Many of the centralized evils of the current web might have been avoided if there had remained an onramp for the neophyte to really create for the web.
I.e. Facebook et al. might have instead been replaced by a hosted, better-indexed Macromedia create + edit + host platform
Or the amount of shit code produced by inexperienced front-end devs throwing spaghetti at IE might have been reduced
It’s far from perfect but I’ve been enjoying playing with it even for things that aren’t games and it has come a long way just in the last year or two. I feel like it’s close to (or is currently) having its Blender moment.
I still miss Macromedia Fireworks.
yeah it wasn't secure
but;
> bad performance
I don't think that's the case. For the longest while flash was faster than JS at doing anything vaguely graphics based. The issue for Apple was that the CPU in the iPhone wasn't fast enough to do flash and anything else. Moreover, Adobe didn't get on with Jobs when they were talking about custom versions.
You have to remember that "apps" were never meant to be a thing on the iPhone; it was all about "desktop"-like web performance.
The 20 most common things you’d do with the tool were there for you in obvious toolbars. It had a lot of advanced features for image editing. It had a scripting language, so you could do bulk editing operations. It supported just about every file extension you could think of.
Most useful feature of all was that it’d load instantly. You’d click the icon on the desktop, and there’d be the Fireworks UI before you could finish blinking. Compared to 2025 Adobe apps, where you click the desktop icon and make a coffee while it starts, it’s phenomenal performance.
I agree on security and bugs, but bugs can be fixed. It just shows neglect by Adobe, which was, I think, the real problem. I think that if Adobe seriously wanted to, it could have been a web standard.
Those did sometimes run really great, but most implementations were indeed very slow.
I remember vividly because it was part of my job back then to help with web performance and when we measured page speed and user interface responsiveness flash was almost always the worst.
You remembering a few optimised instances does not change the reality that Flash was bad.
Of course modern computers are orders of magnitude more powerful! But Flash was definitely generally worse on the same hardware and network stack compared to vanilla (non-plugin-based) web tech.
I feel like people are talking past each other a bit here. ActionScript was never very fast, and rendering a document as a giant collection of bezier curves was not fast, but the people doing animations with it were getting the equivalent of modern-day CSS3 animations + SVG, and it ran nicely on hardware two orders of magnitude slower than what we need for CSS3+SVG.
Adobe was never known for its security or quality.
When Flash was on its way out one app made at the place I worked still said they needed it, and I couldn't figure out why... it was a Java app. After some digging, I found it, some horizontal dividers on the page. They could have, and should have, just been images. They didn't do anything. Yet someone made them in Flash.
I'd also say all the drop-down menu systems were an overuse. Splash screens on every car company's home page. It was out of hand.
I guess you could call it a victim of its own success, where once it was time for it to die (due to mobile), very few people were sad to see it go.
There hasn’t been a replacement, yet.
Like I want to make websites about me similar to those in neocities right, those flashy nice (good?) artistic UI
I suck at css. I don't know but I never really got a feedback attention loop and heck even AI can make it better than me
But I want to show the world what I myself can make as well and not just what I prompt or get back.
I want a good feedback loop, can flash be useful for this purpose? Like maybe I want a website like uh something early browser times. I am kinda interested in building something like netscape navigator esque thing even though I wasn't born in that era or maybe windows xp style.
I have mixed opinions about AI tbh. I genuinely just want to learn things right now, it might take me more time, I have been beating myself over using AI and not feeling equal to if writing things by hand. So I want to prove to myself that I can write things/learn things by hand as well. Like I tried using it to study but the lure to make things right away and then trapping you later is definitely there, it feels easy in the start imo and that's the lure and I kinda want to stay away with that lure to develop my skills, maybe not right now, then later.
VM's persist memory snapshots (as do Apple's containers, for macOS at least), so there's still room for something like that workflow.
The technology took decades to mature, but the business people didn’t have the patience to let the world catch up to this revolutionary technology.
Kinda, but for small writes it's still nowhere near.
Samsung 990 Pro: 4K QD1 random, 113 MB/s
P4800X Optane: 4K QD1 random, 206 MB/s
And that's a device 5 years newer and on a faster PCIe generation.
It disappeared because the market that values the above attribute is too small, and it's hard to market because at first glance they look about the same on a lot of metrics, as you say.
We were about to get rid of the split between RAM and disk memory and use a single stick for both!
Isn't Windows fast boot something like that (only slower, depending on the SSD)? It semi-hibernates, storing the kernel part of memory on disk for faster startup.
Optane was nearly as fast as RAM, but also persistent like a storage device. So you could do a suspend to RAM without the requirement to keep it powered like regular RAM.
A few more thoughts about that, since I happen to have some of the last systems that actually had system-level support for that in their firmware, and early low-capacity Optanes designed for that sort of use. It's fascinating to play with these, but they are low capacity, and bound to obsolete operating systems.
Given enough RAM, you can emulate that with working suspend and resume to/and from RAM.
Another avenue is the ever faster and larger SSDs; in practice, with some models it makes almost no difference anymore, since random access times are so fast and transfer speeds insane. Maybe total and/or daily TBW remains a concern.
Both of these can be combined.
But now with the new Meta Ray-Bans featuring a light field display, and with new media like gaussian splats, we're on the verge of being able to make full use of all the data those cameras were able to capture, beyond the demos of "what if you could fix your focus after shooting" of back then.
Beyond high tech, there's a big market for novelty kinda-bad cameras like Polaroids or Instax. The first Lytro has the perfect form factor for that and was already bulky enough that slapping a printer on it wouldn't have hurt.
I always wondered about that - since it works by interleaving pixels at different focal depths, there's always going to be a resolution tradeoff that a single-plane focus camera wouldn't have.
It's such a cool idea though, and no more difficult to manufacture than a sensor + micro lens array.
(1) it's not really different focal depths, it's actually more like multiple independent apertures at different spatial locations, each with a lower resolution sensor behind it - stereovision on steroids (stereoids?)
They don't capture a light field like Lytro did, they capture a regular image with a very deep depth of field, extract a depth map (usually with machine learning, but some phones augment it with stereoscopy or even LIDAR on high end iPhones) and then selectively blur based on depth.
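A rough sketch of that pipeline (assuming you already have an aligned photo and an 8-bit depth map where brighter means nearer; the file names and blur radius are made up), using Pillow:

from PIL import Image, ImageFilter

# Hypothetical inputs: a sharp, deep-depth-of-field photo plus a precomputed
# depth map of the same size (brighter = nearer to the camera).
sharp = Image.open("photo.jpg")
depth = Image.open("depth.png").convert("L")

# One global background blur, then composite: near pixels stay sharp, far
# pixels come from the blurred copy. Real phones vary the blur with depth.
blurred = sharp.filter(ImageFilter.GaussianBlur(radius=8))
portrait = Image.composite(sharp, blurred, depth)
portrait.save("portrait.jpg")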
[1] https://en.wikipedia.org/wiki/Midori_%28operating_system%29
I've heard someone at Microsoft describe it as a moonshot but also a retention project; IIRC it had a hundred plus engineers on it at one time, including a lot of very senior people.
Apparently a bunch of research from Midori made it into .NET so it wasn't all lost, but still...
Never heard this phrase before, but I can definitely see this happening at companies of that size
It's kind of in that space, and is still actively developed.
They burned through $5B of 1999 dollars, building out a network in 23 cities, and had effectively zero customers. Finally shut down in 2001.
All their marketing was focused on "mobile professionals", whoever those were, while ignoring home users who were clamoring for faster internet where other ISPs dragged their feet.
Today, 5G femtocells have replicated some of the concept (radically small cell radius to increase geographic frequency reuse), but without the redundancy -- a femtocell that loses its uplink is dead in the water, not serving as a relay node. A Ricochet E-radio that lost its uplink (but still had power) would simply adjust its routing table and continue operating.
But simple point-to-point dialup (using my XP box as a RAS/DUN server) served me well back in the day, even after the network went down, because I put my home node as high up as I could, and it would get me roughly a half-mile radius around the house. Loooong before 802.11ah! It was fast enough for VNC.
https://www.joelonsoftware.com/2000/12/20/the-ricochet-wirel...
It was surprisingly great for the time. Apparently I was one of their 4 customers, too!
(It’s not super obvious, especially on mobile, but once you see the site, just scroll down to see the content)
I don't know if it was Yahoo Pipes that died, or a mainstream internet based on open protocols and standards.
Then all those sites I used to post on stopped supporting rss one by one and finally pipes was killed off.
For a while I used a python library called riko that did the same thing as pipes without the visual editor. I have to thank it for getting me off php and into python.
It has the advantage of being open source, has well defined and stable APIs and a solid backend. Plus 10+ years of constant development with many learnings around how to implement flow based programming visually.
I used the Node-RED frontend to create Browser-Red[^2] which is a Node-RED that solely executes in the browser, no server required. It does not support all Node-RED functionality but gives a good feel for using Node-RED and flow based programming.
The second project with which I am using Node-RED frontend is Erlang-Red[^3] which is Node-RED with an Erlang backend. Erlang is better suited to flow based programming than NodeJS, hence this attempt to demonstrate that!
Node-RED makes slightly different assumptions than Yahoo! Pipes - input ports being the biggest: all nodes in Node-RED have either zero or one input wires, nodes in Yahoo! Pipes had multiple input wires.
A good knowledge of jQuery is required but that makes it simpler to get into the frontend code - would be my argument ;) I am happy to answer questions related to Node-RED, email in bio.
[^1]: https://nodered.org
[^2]: https://cdn.flowhub.org
Apache Karavan: https://karavan.space/ and Kaoto (Red Hat): https://kaoto.io
Both are end-to-end usable within VS Code.
I often wonder, if AI had come 15 years earlier, would it have been a ton better because there weren't a billion different ways to do things? Would we have ever bothered to come up with all the different tech, if AI was just chugging through features efficiently, with consistent training data etc.?
Sounds not that different from containers, if you just choose the most popular tooling.
Small projects: docker compose, postgres, redis, nginx
Big projects: kubernetes, postgres, redis, nginx
This is why Heroku lost popularity.
I feel like this also describes something like Vercel. Having never personally used Heroku, is Vercel all that different except Ruby vs JS as the chosen language?
- not lowering prices as time went on. They probably kept a super-huge profit margin, but they're largely irrelevant today
- not building their own datacenters and staying on AWS. That would have allowed them to lower prices and gain even more market share. Everyone that has been inside Amazon/AWS has likely seen the internal market rate for EC2 instances and knows there's a HUGE profit margin from building datacenters. Add the recent incredible improvements in compute density (you can easily get 256c/512t and literally terabytes of memory in a 2U box) and you get basically an infinite money glitch.
It has been in existence in some form or another for nearly 30 years, but did not gain the traction it needed and as of writing it's still not in a usable state on real hardware. It's not abandoned, but progress on it is moving so slow that I doubt we'll ever see it be released in a state that's useful for real users.
It's too bad, because a drop-in Windows replacement would be nice for all the people losing Windows 10 support right now.
On the other hand, I think people underestimate the difficulty involved in the project and compare it unfavorably to Linux, BSD, etc. Unix and its source code was pretty well publicly documented and understood for decades before those projects started, nothing like that ever really existed for Windows.
Some projects creep along slowly until something triggers an interest and suddenly they leap ahead.
MAME's Tandy 2000 implementation was unusable, until someone found a copy of Windows 1.0 for the Tandy 2000, then the emulation caught up until Windows ran.
Maybe ReactOS will get a big influx of activity after Windows 10 support goes offline in a couple days, or even shortly after when you can't turn AI spying off, not even three times a year.
And yet, no big leap in ReactOS (at least for now).
Apparently copyright law only applies for humans, generative AI gets away with stealing because there is too much monetary interest involved in looking the other way.
I don't think the world really needs that. :)
To me that just sounds like it will make ReactOS much more Windows-like. So it's probably a win for the project. \s
The project is supposed to be a clean-room reverse engineering effort. If you even see Windows code, you are compromised, and should not work on ReactOS.
I don't think people do, it sounds like a nearly impossible struggle, and at the end you get a Windows clone. I can't imagine hating yourself enough to work on it for an extended period of time for no money and putting yourself and your hard work in legal risk. It's a miracle we have Wine and serious luck that we have Proton.
People losing Windows 10 support need to move on. There's Linux if you want to be free, and Apple if you still prefer to be guided. You might lose some of your video games. You can still move to Windows 11 if you think that people should serve their operating systems rather than vice versa.
Like what? I'm genuinely curious what personal risks anyone faces from contributing to ReactOS. I'm also curious what kind of legal risk may threaten the work. I mean, even in the unlikely scenario that something gets proven illegal and ordered to be removed from the project, what would prevent any such expunged part from being re-implemented by some paid contractor (now under legally indisputable circumstances), thus rendering the initial legal action moot?
I think nostalgia is influencing this opinion quite a bit, and we don't realize the mountain of tiny usability improvements that have been made since XP
Crazy fast compiler, so it doesn't frustrate students who learn by trial and error; a decent type system without the wildness of, say, Rust; and all the basic programming building blocks you want students to grasp are present without language-specific funkiness.
I think Pascal or Ada are better languages for starting to learn about types with a good base.
The bad part is the horrible legacy documentation system. 8(
Free Pascal obviously has all of that stuff too.
Of course, being a good teaching language probably doesn't make the language popular or even survive. Python is so widely used not necessarily because it's simple to learn but because of its ecosystem.
10+ years ago I'd regularly build all sorts of little utilities with it. It was surprisingly easy to use it to tap into things that are otherwise a lot more work. For instance I used it to monitor the data coming from a USB device. Like 3 nodes and 3 patches to make all of that work. Working little GUI app in seconds.
Apple hasn't touched it since 2016, I kind of hope it makes a comeback given Blender and more so Unreal Engine giving people a taste of the node based visual programming life.
You can still download it from Apple, and it still technically works but a lot of the most powerful nodes are broken in the newer OS's. I'd love to see the whole thing revitalized.
I’ve tried things like Touch Designer and Max MSP but they’re too heavy to just pick up and play with. QC was the right balance between simplicity and power.
Have you looked at https://vvvv.org/ ? Maybe it's still comparatively too heavy but imho it's not that heavy (cf. touch designer and the likes). I want to play with it some more myself...
Full C# instead of god forbidden js.
Full vector, DPI-aware UI, with grids, complex animation, and all the other stuff that HTML5/CSS didn't have in 2018 but Silverlight had even in 2010 (probably even earlier).
MVVM pattern, two-way bindings. Expression Blend (basically Figma) that allowed designers to create UI that was XAML, had sample data, and could be used by devs as is with maybe some cleanup.
Excellent tooling, static analysis, debugging, what have you.
Rendered and worked completely the same in any browser (safari, ie, chrome, opera, firefox) on mac and windows
If that thing still worked, boy would we be in a better place regarding web apps.
Unfortunately, iPhone killed adobe flash and Silverlight as an aftermath. Too slow processor, too much energy consumption.
Why do you think JavaScript is a problem? And a big enough problem to risk destroying open web standards.
I don't see how alternatives to JavaScript are a risk to open web standards. WebAssembly is itself a part of those same standards. It's just a shame that it was built as an extension of JavaScript instead of being an actual alternative.
TypeScript exists for the same reason things like mypy exists, and no one in their right mind claims that python's openness should be threatened just because static typing is convenient.
I suppose JS could go in the same direction and adopt the typing syntax from TS as a non-runtime thing. Then the typescript compiler would become something like mypy, an entirely optional part of the ecosystem.
No, it's the exact same thing. TypeScript adds support for type annotations, and removing these annotations leads to JavaScript. See how Node.js added support for TypeScript by implementing type stripping in v22.
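A trivial sketch of why type stripping works at all: erase the annotations from a TypeScript function and what's left is already valid JavaScript with identical behavior.

  // TypeScript: the annotations have no runtime behaviour of their own.
  interface User {
    id: number;
    name: string;
  }

  function greet(user: User, loud: boolean = false): string {
    const msg = `Hello, ${user.name}`;
    return loud ? msg.toUpperCase() : msg;
  }

  // Strip the types and what remains is plain JavaScript that behaves identically:
  //
  //   function greet(user, loud = false) {
  //     const msg = `Hello, ${user.name}`;
  //     return loud ? msg.toUpperCase() : msg;
  //   }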
Flash's ActionScript helped influence changes to modern JS that we all enjoy.
You sometimes need alternative ideas to promote & improve ideas for open web standards.
Stuff like angularjs was basically created for the same reason flash/silverlight went down — iphone
> A remote code execution vulnerability exists when Microsoft Silverlight decodes strings using a malicious decoder that can return negative offsets that cause Silverlight to replace unsafe object headers with contents provided by an attacker. In a web-browsing scenario, an attacker who successfully exploited this vulnerability could obtain the same permissions as the currently logged-on user. If a user is logged on with administrative user rights, an attacker could take complete control of the affected system. An attacker could then install programs; view, change, or delete data; or create new accounts with full user rights. Users whose accounts are configured to have fewer user rights on the system could be less impacted than users who operate with administrative user rights.
https://learn.microsoft.com/en-us/security-updates/securityb...
Lots of their stuff was delivered as Silverlight apps. It turns out that getting office workers to install a blessed plugin from Microsoft and navigate to a web page is much easier than distributing binaries that you have to install and keep up to date. And developing for it was pure pleasure; you got to use C# and Visual Studio, and a GUI interface builder, rather than the Byzantine HTML/JS/CSS ecosystem.
I get why it never took off, but in this niche of small-time custom software it was really way nicer than anything else that existed at the time. Web distribution combined with classic desktop GUI development.
> It turns out that getting office workers to install a blessed plugin from Microsoft and navigate to a web page is much easier than distributing binaries that you have to install and keep up to date. And developing for it was pure pleasure; you got to use C# and Visual Studio, and a GUI interface builder
IIRC around that time, you could also distribute full-blown desktop applications (C# WinForms) in a special way via the browser, by which they were easily installable and self-updating. The tech was called ClickOnce https://learn.microsoft.com/en-us/visualstudio/deployment/cl.... I think the flow was possibly IE-only, but that was not a big issue in a business context at that time.
- Client software that ran a VM which received "objects" from a central server (complete with versioning so it would intelligently download new objects when necessary). Versions were available for IBM (DOS), Windows, and Mac. Think of it as an early browser.
- Multiple access points and large internal network for storing and delivering content nationwide. This was their proprietary CDN.
- Robust programming language (TBOL/PAL) for developing client-side apps which could also interact with the servers. Just like Javascript.
- Vector (NAPLPS) graphics for fast downloading (remember, Prodigy started in the days when modems maxed out at 1200 baud); later they added JPG support.
- Vast array of online services: shopping, banking, nationwide news, BBSes, mail (before Internet email was popular), even airline reservations.
All this was run by a partnership between IBM, Sears, and CBS (the latter dropped out early). They were the Google of the time.
The internet before advertising, artificial intelligence, social media and bots. When folks created startups in their bedrooms or garages. The days when Google's slogan was "don't be evil".
Communities are moving back to early Internet-like chatrooms like IRC, but now it is Slack, Discord, and the like. Everything private.
I don't like the siloing our information to Discord being a comparison to old internet. We had indexable information in forums that is "lost", not in the literal sense, but because you wouldn't be able to find it without obsessive digging to find it again. Conversations in Discord communities are very surface level and cyclical because it's far less straight forward to keep track of and link to answers from last week let alone two years ago. It is profoundly sad, to be honest.
Animated GIFs of cats, banner bars, and pixels that cost one dollar each, until a million were sold.
And it all ran on Chuck Norris' personal computer.
The creator, kentonv (on HN), commented about it recently here https://news.ycombinator.com/item?id=44848099
If I did it again I wouldn't focus on portability of existing apps. Especially today given you could probably vibe code most things (and trust the sandbox to protect you from AI slop security bugs).
For example, making external network requests from a Sandstorm app is hard by default. Years after it was "dead", Ian Denhardt wrote a simple drop-in tool you could include in an app which would use a conventional HTTP proxy to hijack the requests from the app and turn them into Powerbox requests. Even though it isn't "the best" way to do it, it is very serviceable and approachable to devs. I think it's something Sandstorm should've supported by default, to abstract away that challenge.
The funny thing is, Sandstorm is actually kinda a pain if you know the engineering you want to do but the platform restricts you from it: It's actually much much better at nontechnical users just being able to use finished apps. I don't think the "use Sandstorm as infrastructure" story is nearly as good. The original Sandstorm company wasn't running a lot of their own infrastructure on Sandstorm, and some of it that was required hidden hacks to work around some of the big ways Sandstorm isn't really designed for hosting websites.
I still don't think it's dead, but since everyone who still really wants to work on Sandstorm has bills to pay and jobs to do (and kids to raise), we definitely aren't moving super fast.
Where I do agree with Kenton is that Sandstorm really excels at running vibe coded apps, I've been playing with it, and found it very easy to get AI tools to fix apps up for Sandstorm. And of course, a sandbox that only runs when the user is interacting with it and manages all of the authentication for you is the safest place to run untrustworthy code that may have security or performance bugs.
It looked a bit goofy in the promo videos, but under the hood it was doing real-time chord detection and accompaniment generation. Basically a prototype of what AI music tools like Suno, Udio, or Mubert are doing today, fifteen years too early.
If Microsoft had kept iterating on it with modern ML models, it could’ve become the "GarageBand for ideas that start as a hum."
This would have changed so much. Desktop apps powered by the engine of Firefox not Chrome.
Why? Not enough company buy in, not enough devs worked on it. Maybe developed before a major Firefox re-write?
https://github.com/rhinstaller/anaconda-webui
I wish RedHat made an easy-to-use framework out of it.
Tauri apps take advantage of the web view already available on every user’s system. A Tauri app only contains the code and assets specific for that app and doesn’t need to bundle a browser engine with every app.
Rendering will still use Edge/Chromium on a generic Windows machine.
React Compiler automatically memoizes values and functions, reducing the need for manual useMemo calls. You can use the compiler to handle memoization automatically.
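Roughly the difference, as a sketch: what you would previously memoize by hand with useMemo, the compiler is meant to cache for you at build time. Component and prop names here are invented for illustration.

  import { useMemo } from "react";

  type Item = { name: string; price: number };

  // Before: manual memoization so the filter doesn't re-run on every render.
  function CartManual({ items, minPrice }: { items: Item[]; minPrice: number }) {
    const visible = useMemo(
      () => items.filter((i) => i.price >= minPrice),
      [items, minPrice]
    );
    return <ul>{visible.map((i) => <li key={i.name}>{i.name}</li>)}</ul>;
  }

  // With the compiler enabled: write it plainly and let the build step
  // insert the equivalent caching automatically.
  function CartCompiled({ items, minPrice }: { items: Item[]; minPrice: number }) {
    const visible = items.filter((i) => i.price >= minPrice);
    return <ul>{visible.map((i) => <li key={i.name}>{i.name}</li>)}</ul>;
  }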
https://wikipedia.org/wiki/Kuro5hin
I was a hold out on smartphones for a while and I used to print out k5 articles to read while afk... Just such an amazing collection of people sharing ideas and communal moderation, editing and up voting.
I learned about so many weird and wonderful things from that site.
(Archive here: https://www.inadequacy.org/)
And that's precisely why companies nerf their web sites and put a little popup that says "<service> works better on the app".
Apple would have inevitably done their own thing, but it would have been really nice to have two widely used, mature and open mobile Linux platforms.
Hit ctrl-f and typed Meego as soon as I saw this thread, hoping I'd be the first. Alas.
The N9 was literally a vision from an alternate timeline where a mobile platform from a major manufacturer was somehow hackable, polished, and secure. Favorite phone I've ever owned and I used it until it started to malfunction.
Had a Jolla for a bit, too. It was nice to see them try to keep the basic ideas going but unfortunately it was a pain in the ass to use thanks to their decision to go with a radio that only supported GSM/EDGE in the US. Had to carry around a MiFi just to give it acceptable data service.
I think the idea with Jolla is that if Nokia ever did an about-face, they were ready to be reabsorbed and get things back on the right track. Unfortunately, though we do once again have a "Nokia", it's just another Android white label with no interest in maintaining its own leading-edge smartphone platform.
I will bookmark the site to pay my utility bill, but it's not something I'd ever share. I might share a link to funny YouTube video, but wouldn't bookmark it.
I think social bookmarking didn't really know what it was, which is why the modern versions are more about sharing links than bookmarking. I don't post my bookmarks to Reddit, where people follow me as a person. I would post links I think are worth sharing to a topic people are interested in following.
Instead it went chasing markets, abandoning existing users as it did so, in favour of potential larger pools of users elsewhere. In the end it failed to find a niche going forward while leaving a trail of abandoned niches behind it.
Luckily it wasn't long after Mozilla abandoned it that PWAs were introduced and I could port the apps I cared about.
That’s actually an incredibly cool concept.
I noticed the trend when I was working on a major web property for the Aditya Birla conglomerate. My whole team was pleasantly surprised, and we made sure to test everything in Firefox for that project. But everyone switched to Android + Chrome over the next few years, which was a shame.
Today, India is 90% Chrome :(
Years later they briefly added Circles to Twitter and I thought that was great, not as useful as G+ Circles (since you could only have a single "private" circle to share to) but I very much used it and my mutuals as well... just to have Musk remove the only useful feature they had in years.
It was a series of experiments with new approaches to programming. Kind of reminded me of the research that gave us Smalltalk. It would have been interesting to see where they went with it, but they wound down the project.
It didn't get far enough to be "used" in a production sense. There was enough interest and people were playing around with it, but no real traction to speak of. Frankly, language projects are difficult because these days they have to be bootstrapped to a certain size before there's any appreciable use, and VCs are not patient enough for that kind of timetable.
Here's a postmortem Chris gave about all that: https://www.youtube.com/watch?v=WT2CMS0MxJ0 / https://www.youtube.com/watch?v=ThjFFDwOXok
LT was cool, but they abandoned it with insufficient hand-off when it was 80-90% done to work on Eve.
I know a bunch of people were unhappy that LightTable wasn't finished, especially because they raised money via Kickstarter for it.
Maybe Eve was too ambitious. Maybe funding never materialized. Maybe they just got bored and couldn't finish. Maybe they pissed off their audience.
As for me, I brought some eve-y ideas to my language project: https://github.com/mech-lang/mech
They had built a solid streaming platform for low latency cloud gaming but failed hard on actually having interesting games to play on it. You just can't launch a gaming platform with a handful of games that have been available everywhere and expect it to succeed.
I had a good connection and sometimes I forgot I was streaming a game.
At least they died gracefully; I played for free and also got a Chromecast + controller out of the deal.
I tried DDG (Bing-backed, I believe) and it happily found everything with no manual intervention at all. That was the point where I ditched Google Search after 30 years.
tumblr will practically let you do that for chrissake
You could have said Wordpress.com or something. It's not quite a website, but it's close. It's also probably going to be Typepad (i.e. defunct) in a few years and Blogger is probably going to be there quicker than that.
A genius product ripped apart by Microsoft. Have you used Microsoft Teams recently? Bad UI, hard-to-configure external hardware, a good level of incompatibility, and it's missing the good old "Echo / Sound Test Service". At one point I even installed Skype on my old Android, but it was sucking up too much battery.
My friends now use Discord, and it's an acceptable substitute; like Skype, it's always pushing a ton of features and updates I didn't ask for. ;-)
I used it quite a bit to produce radio shows for my country's public broadcasting. Because Non's line-oriented session format was so easy to parse with classic Unix tools, I wrote a bunch of scripts for it with Awk etc. (E.g. calculating the total length of clips highlighted with brown color in the DAW -- which was stuff meant for editing out; or creating a poor man's "ripple editing" feature by moving loosely-placed clips precisely side by side; or, eventually, converting the sessions to Samplitude EDL format, and, from there, to Pro Tools via AATranslator [1] (because our studio was using PT), etc. Really fun times!)
Interesting how Flash became the almost universal way to play videos in the browser, in the latter half of the 2000's (damn I'm old...).
Are you referring to the SWF file format?
I wonder why no one has managed to build something comparable that does work on a phone.
Maybe they could have fixed all that for touch screens, small portrait screens, and more but they never did make it responsive AFAIK.
1. competing visions for how the entire system should work
2. dependence on early/experimental npm libraries
3. devs breaking existing features due to "innovation"
4. a lot of interpersonal drama because it was not just open source but also a social network
the ideas are really good, someone should make the project again and run with it
It was a fascinating protocol underneath, but the social follow structure seemed to select strongly for folks who already had a following or something.
Having seen what goes on in the foss world and what goes on in the large faang-size corporate world, no wonder the corporate world is light-years ahead.
Those people need to be pushed out early and often. That's what voting is for. You need a supermajority to force an end to discussion, and a majority to make a decision. If you hold up the discussion too long with too slim a minority, the majority can fork your faction out of the group. If the end of debate has been forced, and you can't work with the majority, you should leave yourself.
None of this letting the bullies get their way until everything is a disaster, then splitting up anyway stuff.
The core of the issue is that drama is a way to impose your views of the world.
In foss software you quite literally don’t have to agree. You can fork the software and walk your own path. You can even pull changes from the original codebase, most licenses allow that.
Consensus is only necessary if you care about imposing your views of the world onto others.
Edit: in fact I'd say they were irrelevant before pretty much all of those innovations. By the time AIM or MSN Messenger really became popular, ICQ didn't matter anymore.
I heard that some patent troll got a hold of the patent for force feedback joysticks, and all manufacturers just gave up on them because of the troll. The patent expired recently IIRC, so hopefully people will start making them again soon.
[1] https://austral-lang.org/ [2] https://austral-lang.org/spec/spec.html
Sorta related, iPod car interface was a reliable way to play and control music, now replaced with CarPlay which has problems and also messes up your nav.
In 2011, before TypeScript, Next.js or even React, they had seamless server-client code, in a strongly typed functional language with support for features like JSX-like inline HTML, async/await, string interpolation, built-in MongoDB ORM, CSS-in-JS, and many syntax features that were added to ECMAScript since then.
I find it wild how this project was 90%+ correct on how we will build web apps 14 years later.
- not backed by a huge corporation (React = FB, TypeScript = Microsoft, Next.js = Vercel, ...)
- many of the ideas I listed above were controversial at the time of introduction, I can imagine that Opa must have felt overwhelming
- Opa didn't actually have components or state management, which was a pain point on which React originally took off
Ozzie, who had previously worked at IBM, was particularly interested in the challenge of remote collaboration. His vision culminated in the creation of Groove, which was released in 2001. The software distinguished itself from other collaboration tools of the time by allowing users to share files and work on documents in real-time—even without a continuous internet connection.
Groove’s architecture was innovative in that it utilized a peer-to-peer networking model, enabling users to interact directly with each other and share information seamlessly. This approach allowed for a level of flexibility and responsiveness that was often missing in traditional client-server models. Asynchronous collaboration was a key feature, where team members could work on projects without needing to be online simultaneously.
https://umatechnology.org/what-happened-to-microsoft-groove/
We built some things on it, was like CRDT for all the things.
All the buzz in the 2020's about WASM giving websites the ability to run compiled code at native speed, letting pages network with your server via WebRTC?
Yeah, you could do that with Java Applets in 1999.
If Sun (and later Oracle) had been less bumbling and more visionary -- if they hadn't forced you to use canvas instead of integrating Java's display API with the DOM, if they had a properly designed sandbox that wasn't full of security vulnerabilities?
Java and the JVM could have co-evolved with JavaScript as a second language of the Web. Now Java applets are well and truly dead; the plugin's been removed from browsers, and even the plugin API's that allowed it to function have been deprecated and removed (I think; I'm not 100% sure about that).
- Based on BitTorrent ideas
- Completely decentralized websites' code and data
- Either completely decentralized or controllable-decentralized authentication
- Could be integrated into existing websites (!)
It's not kind of dead, there's a supported fork, but it still feels like a revolution that did not happen. It works really well.
First Class had a broader userbase, such as schools and organizations in the groupware/collaborative segment (but also Mac user groups and so on).
First Class was a commercial product (the server). It had filesharing (UL/DL), it had its own desktop, mail, chat, IM, voice mail and more. Started out on Mac, but later became cross platform. You can still find presentations and setup guides on old, forgotten university/school websites.
Hotline, on the other hand, was very easy to set up and also pretty lightweight. It had a server tracker. In the beginning it was Mac only. Lots of warez servers, but also different (non-warez) communities. It had filesharing (UL/DL from the file area), chat and a newsboard. The decline came after its developers released the Windows versions. Most servers became clickbait pron/warez with malware etc. People started to move away to the web and Hotline basically died out.
Now, there were some open source/clone projects that kept the spirit alive. But after a few years, web forums, torrents and other P2P apps took over. Still, there are some servers running in 2025, and open source server/client software is still being developed.
Compared to First Class, Hotline was the Wild West. It only took 15 minutes to set up your own server and announce it on a server tracker (or keep it private).
When I use Discord and other apps/services, it's not hard to think of FC/HL. But then, they were solutions of their time.
More about: https://en.wikipedia.org/wiki/FirstClass
https://en.wikipedia.org/wiki/Hotline_Communications
https://www.macintoshrepository.org/6691-hotline-connect-cli...
In comparison I found Clojars^[0] for Clojure better and community driven like NPM. But obv Clojure has more business adoption than CL.
Do you use CL for work?
[0]: https://clojars.org/
What was the bookmarks social tool called from 00’s? I loved it and it fell off the earth. You could save your bookmarks, “publish” them to the community, share etc..
What ever happened to those build your own homepage apps like startpage (I think)? I always thought those would take off
del.icio.us! Funnily, also killed by Yahoo, like Flickr.
In this same vein, I always thought Tumblr had a great design for a blog. It hits the perfect balance between a microblog like Twitter and a fat blog like Wordpress. It had various stigmas around the type of people who posted there, which seem to have only gotten worse over the years. It is a shell of its former self and yet another site that fell on hard times after Yahoo ownership.
Yahoo really is where Web 2.0 went to die.
[1] http://web.archive.org/web/20201014024057/https://www.youtub...
[2] https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
(For those unfamiliar, Illustrator is a pure vector graphics editor; once you rasterize its shapes, they become uneditable fixed bitmaps. Fireworks was a vector graphics editor that rendered at a constant DPI, so it basically let you edit raster bitmaps like they were vectors. It was invaluable for pixel-perfect graphic design. Nothing since lets you do that, though with high-DPI screens and resolution-independent UIs being the norm these days, this functionality is less relevant than it used to be.)
Just barely stopped using my CS6 copy. Still haven't found anything as intuitive.
I just wanna make a mostly static site with links in and out of my domain. Maybe a light bit of interactivity for things like search that autocompletes.
Connect your phone to a display, mouse, keyboard and get a full desktop experience.
At the time smartphones were not powerful enough, cables were fiddly (adapters, HDMI, USB A instead of a single USB c cable) and virtualization and containers not quite there.
Today, going via pkvm seems like promising approach. Seamless sharing of data, apps etc. will take some work, though.
They replaced StumbleUpon with "Mix", whatever it is. Probably because they didn't know how to earn money from it. Sad.
It's 2025 and we still haven't solved secure online identification, and we are still not using end-to-end encryption for e-mail; most e-mail is not even signed.
Interaction with state agencies is still mostly via paper-based mail. The only successfully deployed online offering of the German state administration seems to be the online portal for tax filings, "elster.de".
The use of a private key on the national ID card would have been able to provide all this and more using standard protocols.
At least for identification, there is an expensive effort to re-design something similar in a smartphone-centric way, with less security and not based on standard approaches, called "EUDI wallets".
For encrypted communication the agreed-on standard seems to be “log in to our portal with HTTPS and use our proprietary interfaces to send and receive messages”...
Why did it die: Too expensive (~30€/year for the certificate, >100€ one-time for the reader) and too complicated to use. Not enough positive PR. Acceptance at state-provided sites was added too late. In modern times, everything must be done with the smartphone; handling of physical cards is considered backwards, hence this is probably not going to come back...
Edit: Another similarly advanced technology that also seems to have been replaced by an inferior smartphone substitute: HBCI banking (a standard...) using your actual bank card + reader device to authenticate transactions... replaced by a proprietary app on a proprietary smartphone OS...
That wasn't the case pre-internet or pre-cellphone, when I remember pining for something resembling those technologies.
It's at a very early stage of development but looks promising
There are lots of competing MLs you can use instead:
- F# (Fable)
- ReasonML
- OCaml (Bucklescript)
- Haskell
- PureScript
IMO the problem with Elm was actually The Elm Architecture.
https://guide.elm-lang.org/architecture/
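For anyone who never tried Elm, the architecture is basically this loop: a single model, a message type, a pure update function, and a view. Sketched here in TypeScript rather than Elm:

  // The Elm Architecture in miniature: model -> view, message -> update -> new model.
  type Model = { count: number };

  type Msg = { kind: "Increment" } | { kind: "Decrement" };

  const init: Model = { count: 0 };

  // update is a pure function: no mutation, no side effects.
  function update(msg: Msg, model: Model): Model {
    switch (msg.kind) {
      case "Increment": return { count: model.count + 1 };
      case "Decrement": return { count: model.count - 1 };
    }
  }

  // In Elm, view returns Html Msg; a plain string stands in here.
  function view(model: Model): string {
    return `count = ${model.count}`;
  }

  // A tiny "runtime" feeding messages through the loop.
  let model = init;
  const messages: Msg[] = [{ kind: "Increment" }, { kind: "Increment" }, { kind: "Decrement" }];
  for (const msg of messages) {
    model = update(msg, model);
    console.log(view(model)); // count = 1, count = 2, count = 1
  }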
I'm no frontend guy, but I think it inspired (or was inspired by?) React, maybe Redux. Corrections on this very welcome.
- https://redux.js.org/understanding/history-and-design/prior-...
Just on principle, I'd have liked to see it on the market for more than 49 days! It pains me as an engineer to think of the effort to bring a hardware device to market for such a minuscule run.
Still, even in the early days they had great black levels and zero motion lag - they'd advertise it as "600 fps". They seriously improved power draw and heat over time, and were definitely superior if you wanted an ideal movie or sports watching experience.
Buuut they were also competing with LED TVs, which could be really REALLY thin (rule of cool) and would just sip power. They died out.
If something like that existed today, powered by modern APIs and AI, it could become the ultimate no-code creativity playground.
Indeed (and the names are a clue). More specifically, Electron came out of the ideas in Atom, not the other way around.
Atom Shell/Electron was from the very beginning something you could use separately from Atom as a framework for creating desktop apps using Chromium/Node.js.
BT had this grand vision for basically providing rich multi-media through the phone line, but in ~1998. Think a mix of on-demand cable and "teleconferencing" with TV-based internet (Ceefax/red button on steroids).
It would have been revolutionary and kick started the UK's jump into online rich media.
However, it wouldn't have got past the regulators, as both Sky and NTL (now Virgin) would have protested loudly.
It took years for Android to adopt it, and then iOS, and even today, it's a copy of what WebOS did, but somehow worse.
I still pine for features that Palm/HP WebOS had, Android and iOS are still downgrades to this day
* Rethinkdb: I made some small projects with it in the past and it was easy to use
While not perfect, I have some hope that Jujutsu may be a path forward for improved ergonomics in version control: https://github.com/jj-vcs/jj/blob/main/README.md#introductio...
https://en.wikipedia.org/wiki/IGoogle
https://en.wikipedia.org/wiki/Google_Desktop
and why? = UI/UX
I think the market was still skeptical about nodejs on the server at the time but other than that I don’t really know why it didn’t take off
That said, frameworks were all the buzz back in the day, so the language alone probably wouldn't have gone anywhere without it.
Died due to legal wranglings about patents, iirc.
The idea that you could read and write data at RAM speeds was really exciting to me. At work it's very common to see microscope image sets anywhere from 20 to 200 GB and file transfer rates can be a big bottleneck.
Archive capture circa 2023: https://web.archive.org/web/20230329173623/https://ddramdisk...
HN post from 2023: https://news.ycombinator.com/item?id=35195029
mount -t tmpfs ram /mnt/ramdisk
What to do with it, once it's there, is a concern of software, but specialized hardware is needed to get it there.
https://en.wikipedia.org/wiki/Zram
https://wiki.archlinux.org/title/Zram
https://wiki.gentoo.org/wiki/Zram
for most purposes. (Assuming the host has enough RAM to spare, to begin with)
https://www.youtube.com/watch?v=e5wAn-4e5hQ
https://www.youtube.com/watch?v=QWsNFVvblLw
Summary:
>This presentation introduces Via, a virtual file system designed to address the challenges of large game downloads and storage. Unlike cloud gaming, which suffers from poor image quality, input latency, and high hosting costs, Via allows games to run locally while only downloading game data on demand. The setup process is demonstrated with Halo Infinite, showing a simple installation that involves signing into Steam and allocating storage space for Via's cache.
>Via creates a virtual Steam library, presenting all owned games as installed, even though their data is not fully downloaded. When a game is launched, Via's virtual file system intercepts requests and downloads only the necessary game content as it's needed. This on-demand downloading is integrated with the game's existing streaming capabilities, leveraging features like level-of-detail and asset streaming. Performance metrics are displayed, showing download rates, server ping, and disk commit rates, illustrating how Via fetches data in real-time.
>The system prioritizes caching frequently accessed data. After an initial download, subsequent play sessions benefit from the on-disk cache, significantly reducing or eliminating the need for network downloads. This means the actual size of a game becomes less relevant, as only a portion of it needs to be stored locally. While server locations are currently limited, the goal is to establish a global network to ensure low ping. The presentation concludes by highlighting Via's frictionless user experience, aiming for a setup so seamless that users are unaware of its presence. Via is currently in early access and free to use, with hopes of future distribution partnerships.
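As far as I can tell, the core mechanism is just a read-through cache over byte ranges; a toy sketch of that idea (all names invented, nothing to do with Via's actual code):

  // Toy read-through block cache: reads hit the local cache first and fall back
  // to a remote fetch for missing blocks (the general idea, not Via's code).
  const BLOCK_SIZE = 64 * 1024; // assumed block granularity

  class BlockCache {
    private blocks = new Map<number, Uint8Array>();

    // fetchBlock stands in for "download this block from the content server".
    constructor(private fetchBlock: (index: number) => Promise<Uint8Array>) {}

    async read(offset: number, length: number): Promise<Uint8Array> {
      const out = new Uint8Array(length);
      let written = 0;
      while (written < length) {
        const pos = offset + written;
        const index = Math.floor(pos / BLOCK_SIZE);
        let block = this.blocks.get(index);
        if (!block) {
          block = await this.fetchBlock(index); // network only on a cache miss
          this.blocks.set(index, block);
        }
        const start = pos - index * BLOCK_SIZE;
        const n = Math.min(block.length - start, length - written);
        out.set(block.subarray(start, start + n), written);
        written += n;
      }
      return out;
    }
  }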
I'm amazed the video still has under 4,000 views. Sadly, Flaherty got hired by XAI and gave up promoting the project.
https://x.com/rflaherty71/status/1818668595779412141
But I could see the technology behind it working wonders for Steam, Game Pass, etc.
I don't see how this could take off. Internet speeds are getting quicker, disk space is getting cheaper, and this will slow down load times. And what's worse is the more you need this tech the worse experience you have.
> replaces visual monitoring with a sonic `ecology' of natural sounds, where each kind of sound represents a specific kind of network event.
https://www.usenix.org/conference/lisa-2000/peep-network-aur...
The concept is that we are wired to notice sounds that are out of the ordinary, but “ordinary” sounds are not distracting.
I had forgotten about this project for a number of years until I read Peter Watts’s Blindsight.
At its best, having IM, email, browser, games, keywords, chats, etc. was a beautiful idea IMO. That they were an ISP seemed secondary or even unrelated to the idea. But they chose to charge for access even in the age of broadband, and adopt gym level subscription tactics to boot, and people decided they'd rather not pay it which is to be expected. I often wonder if they'd have survived as a software company otherwise.
They were basically a better thought out Facebook before Facebook, in my opinion.
You could purposely choose to be online or offline.
Much easier to draw a line back then about how often you were online.
I kind of expect we might see something similar if the AI bubble pops
I wonder who owns the domain now
- Multimodality: Text/audio/images input and output. Integrated OCR.
- Connection with an asterisk server, it could send and receive voice phone calls! I used it to call for pizzas to a local place via whatsapp. This was prior to Google's famous demo calling a hairdresser to book a haircut.
- It understood humor and message sentiment, told jokes and sometimes even chimed in with a "haha" if somebody said something funny in a group chat or sent an appropriate gif reaction
- Memory (facts database)
- Useful features such as scheduling, polling, translations, image search, etc.
Regarding the tech, I used external models (Watson was pretty good at the time), plus classical NLP processing and symbolic reasoning that I learned in college.
Nobody understood the point of it (where's the GUI? how do I know what to ask it? customers asked) and I didn't make a single dime out of the project. I closed it a couple years later. Sometimes I wonder what could've been of it.
Also this: https://news.ycombinator.com/item?id=6676494
Redmart (Singapore): Best web based online store to this date (obviously personal view). No one even tries now that mobile apps have won.
https://techcrunch.com/2016/11/01/alibaba-lazada-redmart-con...
In the end I wound up with basically the same application software as on my Debian desktop, except running on Haiku instead of Linux. Haiku is noticeably snappier and more responsive than Linux+X+Qt+KDE, though.
I'd love to have an SGI laptop.
Or an SGI cell phone or VR headset.
All of the upside and none of the downside of react
No JSX and no compiler, all native js
The main dev is paid by microsoft to do oss rust nowadays
I use choo for my personal projects and have used it twice professionally
https://github.com/choojs/choo#example
The example is like 25 lines and introduces all the concepts
Less moving parts than svelte
For example, Haunted is a react hooks implementation for lit: https://github.com/matthewp/haunted
Choo suffered from not having an ecosystem, same with mithtil and other "like react but not" also-rans.
Nothing ever came close to easily find conferences to attend, and find the slides and conversation around them
Two features that come to mind as IIRC being unique (as compared to Illustrator) were multi-page documents and multiple page size multi-page documents. Ideal for the complete standard set of company branded print documents: business card, "With Compliments" slip, and letterhead. :D
Adobe's acquisition of Macromedia and subsequent killing of (the IMO superior) FreeHand contributed directly to my subsequent decision to avoid closed source application software--especially for creative tools--even if alternatives were "technically inferior".
(And, indeed, "creative tool killed/hampered for business reasons" is a story which has been repeated elsewhere multiple times in the quarter century[0] since.)
While Inkscape is still missing features compared to FreeHand it is however also still here many years later and is what I've used ever since when I need 2D vector design software. (Although I've also been keeping an eye on Graphite: https://graphite.rs)
----
[0] Oh, weird, apparently it's actually less than 25 years: https://en.wikipedia.org/wiki/Adobe_FreeHand#Adobe_FreeHand Seems I've been holding the grudge for less time than I thought. :D
Think flowcharts crossed with pseudocode but following Structured Programming principles.
Very useful for mocking up, designing and testing code logic before you write it.
People talk so much about how you need to write code that fits well within the rest of the codebase, but what tools do we have to explore codebases and see what is connected to what? Clicking through files feels kind of stupid because if you have to work with changes that involve 40 files, good luck keeping any of that in your working memory. In my experience, the JetBrains dependency graphs also aren't good enough.
Sourcetrail was a code visualization tool that allowed you to visualize those dependencies and click around the codebase that way, see what methods are connected to what and so on, thanks to a lovely UI. I don't think it was enough alone, but I absolutely think we need something like this: https://www.dbvis.com/features/database-management/#explore-... but for your code, especially for codebases with hundreds of thousands or like above a million SLoC.
Example: https://github.com/CoatiSoftware/Sourcetrail/blob/master/doc...
Another example: https://github.com/CoatiSoftware/Sourcetrail/blob/master/doc...
I yearn to some day view entire codebases as graphs with similarly approachable visualization, where all the dependencies are highlighted when I click an element. This could also go so, so much further - you could have a debugger breakpoint set and see the variables at each place, alongside being able to visually see how code is called throughout the codebase, or hell, maybe even visualize every possible route that could be taken.
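As a very rough sketch of the "codebase as a graph" idea (nothing like Sourcetrail's real indexer, just a naive scan of relative imports): walk a TypeScript project, record which file imports which, and emit Graphviz DOT that you can render and click around.

  // Naive dependency-graph extractor: scans .ts files for relative imports and
  // prints a Graphviz DOT graph (a toy sketch, not a real indexer like Sourcetrail).
  import { readdirSync, readFileSync, statSync } from "node:fs";
  import { join, relative } from "node:path";

  function tsFiles(dir: string, acc: string[] = []): string[] {
    for (const entry of readdirSync(dir)) {
      const path = join(dir, entry);
      if (statSync(path).isDirectory()) tsFiles(path, acc);
      else if (path.endsWith(".ts")) acc.push(path);
    }
    return acc;
  }

  function toDot(root: string): string {
    const edges: string[] = [];
    const importRe = /from\s+["'](\.[^"']+)["']/g; // only relative imports
    for (const file of tsFiles(root)) {
      const src = readFileSync(file, "utf8");
      for (const match of src.matchAll(importRe)) {
        edges.push(`  "${relative(root, file)}" -> "${match[1]}";`);
      }
    }
    return `digraph deps {\n${edges.join("\n")}\n}`;
  }

  console.log(toDot(process.argv[2] ?? "."));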
I’m not arguing the solutions it outlined are good, but I think some more discussion around how we interact with touch screens would be needed. Instead, we are still typing on a layout that was invented for mechanical typewriters - in 2025, on our touch screens.
It had its own cross platform UI and other frameworks too, so you could "write once in Java, and ship on all the things" .. well theoretically.
It got abandoned too soon. But it was quite fun to build apps with it for a while, almost Delphi- like. I always wonder if it went open source, if things would have been different vis a vis Flash, etc.
I first learned about it when I was working in a university group and had the task of transforming a windowing algorithm already working in MATLAB to Python. It felt like a modern linter and LSP with additional support through machine learning. I don't quite know why it got comparatively little recognition, but perhaps enough to remain an avant-garde pioneer of both Python and machine learning support for further generations and wider applications.
They stopped supporting small tablets some years ago though, and made it worse with every Windows update. I can only surmise that it was to make people stop using them. Slow GUI, low contrast, killed apps.
> TUNES started in 1992-95 as an operating system project, but was never clearly defined, and it succumbed to design-by-committee syndrome and gradually failed. Compared to typical OS projects it had very ambitious goals, which you may find interesting.
1. When Windows Vista was being developed, there were plans to replace the file system with a database, allowing users to organize and search for files using database queries. This was known as WinFS (https://en.wikipedia.org/wiki/WinFS). I was looking forward to this in the mid-2000s. Unfortunately Vista was famously delayed, and in an attempt to get Vista released, Microsoft pared back features, and one of these features was WinFS. Instead of WinFS, we ended up getting improved file search capabilities. It's unfortunate that there's been no proposals for database file systems for desktop operating systems since.
2. OpenDoc (https://en.wikipedia.org/wiki/OpenDoc) was an Apple technology from the mid-1990s that promoted component-based software. Instead of large, monolithic applications such as Microsoft Excel and Adobe Photoshop, functionality would be offered in the form of components, and users and developers can combine these components to form larger solutions. For example, as an alternative to Adobe Photoshop, there would be a component for the drawing canvas, and there would be separate components for each editing feature. Components can be bought and sold on an open marketplace. It reminds me of Unix pipes, but for GUIs. There's a nice promotional video at https://www.youtube.com/watch?v=oFJdjk2rq4E.
OpenDoc was a radically different paradigm for software development and distribution, and I think this could have been an interesting contender against the dominance that Microsoft and Adobe enjoy in their markets. OpenDoc actually did ship, and there were some products made using OpenDoc, most notably Apple's Cyberdog browser (https://en.wikipedia.org/wiki/Cyberdog).
Unfortunately, Apple was in dire straits in the mid-1990s. Windows 95 was a formidable challenger to Mac OS, and cheaper x86 PCs were viable alternatives to Macintosh hardware. Apple was an acquisition target; IBM and Apple almost merged, and there was also an attempt to merge Apple with Sun. Additionally, the Macintosh platform depended on the availability of software products like Microsoft Office and Adobe Photoshop, the very types of products that OpenDoc directly challenged. When Apple purchased NeXT in December 1996, Steve Jobs returned to Apple, and all work on OpenDoc ended not too long afterward, leading to this now-famous exchange during WWDC 1997 between Steve Jobs and an upset developer (https://www.youtube.com/watch?v=oeqPrUmVz-o).
I don't believe that OpenDoc fits in with Apple's business strategy, even today, and while Microsoft offers component-based technologies that are similar to OpenDoc (OLE, COM, DCOM, ActiveX, .NET), the Windows ecosystem is still dominated by monolithic applications.
I think it would have been cool had the FOSS community pursued component-based software. It would have been really cool to apt-get components from remote repositories and link them together, either using GUI tools, command-line tools, or programmatically to build custom solutions. Instead, we ended up with large, monolithic applications like LibreOffice, Firefox, GIMP, Inkscape, Scribus, etc.
3. I am particularly intrigued by Symbolics Genera (https://en.wikipedia.org/wiki/Genera_(operating_system)), an operating system designed for Symbolics Lisp machines (https://en.wikipedia.org/wiki/Symbolics). In Genera, everything is a Lisp object. The interface is an interesting hybrid of early GUIs and the command line. To me, Genera could have been a very interesting substrate for building component-based software; in fact, it would have been far easier building OpenDoc on top of Common Lisp than on top of C or C++. Sadly, Symbolics' fortunes soured after the AI winter of the late 1980s/early 1990s, and while Genera was ported to other platforms such as the DEC Alpha and later the x86-64 via the creation of a Lisp machine emulator, it's extremely difficult for people to obtain a legal copy, and it was never made open source. The closest things to Genera we have are Xerox Interlisp, a competing operating system that was recently made open source, and open-source descendants of Smalltalk-80: Squeak, Pharo, and Cuis-Smalltalk.
4. Apple's "interregnum" years between 1985 and 1996 were filled with many intriguing projects that were either never commercialized, were cancelled before release, or did not make a splash in the marketplace. One of the most interesting projects during the era was Bauhaus, a Lisp operating system developed for the Newton platform. Mikel Evins, a regular poster here, describes it here (https://mikelevins.github.io/posts/2021-07-12-reimagining-ba...). It would have been really cool to have a mass-market Lisp operating system, especially if it had the same support for ubiquitous dynamic objects like Symbolic Genera.
You can have one today if you want, although nobody knows about it.
Step 1. Install a local Oracle DB https://hub.docker.com/r/gvenzl/oracle-free#quick-start
Step 2. Set up DBFS https://docs.oracle.com/en/database/oracle/oracle-database/2...
Step 3. Mount it via FUSE or NFS.
Step 4. Also access the underlying tables via SQL.
MIME types for mail addressed much of the demand for pluggable data types.
For anyone interested in the Apple future that could have been, check out Jim Miller's articles, e.g. on LiveDoc (https://www.miramontes.com/writing/livedoc/index.php)
A place where artists and consumers could freely communicate and socialize without hassle.
Died because of: Stupidity, commercialisation and walled-gardening.
- Gnome2 dropped from Ubuntu in favor of Unity
- Ford Crown Victoria
What is the modern-day successor? You only mentioned what isn't good.
I loved the AirPort routers. I found it odd that Apple exited the market just as everyone else entered.
I ended up getting a Linksys as a troubleshooting step when I was having internet issues. I don't think the AirPort was the issue, but after migrating, it didn't seem worth going back to a router that was effectively end of life.
I still remember many years ago having a Sonos system and calling support due to some issues I was having. When they asked what type of router I had and I mentioned it was an AirPort, they immediately moved on to something else being the issue. The reputation was so solid that support wouldn't even bother troubleshooting it.
Now I just have a single basic wifi+router, no mesh. It doesn't really cover my whole house, but one more AP would do. My parents' house needs more APs to the point where management is a concern.
You can't just drop that and not say why...
Some people claim that EPA standards killed large sedans because of how SUVs have a lower bar. I don't know, maybe people want the room. But maybe an updated Vic would've sold well to fleets based on reputation. Dodge kept selling the Charger to civilians and police.
Also, insert usual spiel about new cars having overcomplicated controls and hard to replace parts. Hot/cold knob doesn't take my eyes off the road.
Unfortunately, it died because it's very niche and also they couldn't keep up with development of drivers for desktops. This is even worse today...
Dual screen iPad killer, productivity optimised. IIRC Microsoft OneNote is its only legacy.
Killed because both the Windows team and the Office team thought it was stepping on their toes.
I used this when it was brand new for a bit and it was so incredibly smooth and worked so well. It solved the problem of controlling systemd units remotely so well. I'm pretty sure the only reason it never actually took off was kubernetes and coreos's acquisition, however it actively solves the 'other half' of the k8s problem which is managing the state of the host itself.
ISO/OSI had a session layer, i.e. much of what QUIC does regarding underlying multiple transports.
Speaking of X.509 the s-expressions certificate format was more interesting in many ways.
X.400 was a nice idea, but the ideal of having a single global directory predates security. I can understand why it never happened
On X.509, the spec spends two chapters on attribute certificates, which I've never seen used in the wild. It's a shame; identity certificates do a terrible job at authentication
(Not the Linux distribution with the same name)
I have used it for years.
A two-pane manager, it makes defining file associations, applications invoked by extension, and shortcut buttons easy and convenient.
Sadly it is abandonware now.
Slowly migrating to Double Commander now...
Instead it went into a slow death spiral due to Windows 95.
Got a good family story about that whole acquisition attempt, but I don't want to speak publicly on behalf of my uncle. I know we've talked at length about the what-ifs of that moment.
I do have a scattering of some neat Quarterdeck memorabilia I can share, though:
https://www.dropbox.com/scl/fo/0ca1omn2kwda9op5go34e/ACpO6bz...
But you couldn't actually buy /X. After trying to buy a copy, my publisher even contacted DESQ's marketing people to get a copy for me, and they wouldn't turn one over. Supposedly there were some copies actually sold, but too few, too late, and then /X was dropped. There was at least one more release of plain DESQview after that, but by then Windows was eating its lunch.
Why? Obviously close-to-zero market. It was unbelievable how those people thought those projects would even succeed.
I know a lot of it got folded into PHP, but the best parts of it, like native support for XHP and function whitelists, never got in AFAIK.
Looked cool during demos. Got killed when Flash died.
And the similarly named but completely separate OLE Automation, which let you script programs, across process boundaries. This is what let you write in VB(A): Set w = New Word.Application: Set e = new Excel.Application: Set doc = w.Open("foo.doc"); etc... - this was to Office (mostly) what shell scripting is to Linux, and enabled a lot of ad-hoc business process automation.
People always fail to see something that is an inevitability. Humans lack foresight because they don't like change.
Google Glass sucks though, and glasses will never be a thing. Google and Meta and … can spend $8T and come up with the most insane tech, etc., but no one will be wearing f'ing glasses :)
Their execution was of course bad, but I think today's LLMs are better and faster, and there are many more OSS models to reduce costs. The hardware looked nice, though, and the pico projector was an interesting concept, even if not the best executed.
I wrote a bunch of software in Borland Delphi, which ran in Windows, Wine, and ReactOS with no problems. Well, except for ReactOS' lack of printing support.
As long as you stay within the ECMA or published Windows APIs, everything runs fine in Wine and ReactOS. But Microsoft products are full of undocumented functions, as well as checks to see if they're running on real Windows. That goes back to the Windows 3.1 days, when 3.1 developers regularly used OS/2 instead of DOS, and Microsoft started adding patches to fail under OS/2 and DR-DOS. So all that has to be accounted for by Wine and ReactOS. A lot of third-party software uses undocumented functions as well, especially stuff written back during the days when computer magazines were a thing, and regularly published that kind of information. A lot of programmers found the lure of undocumented calls to be irresistible, and they wound up in all kinds of commercial applications where they really shouldn't have been.
In my experience anything that will load under Wine will run with no problems. ReactOS has some stability problems, but then the developers specifically call it "alpha" software. Despite that, I've put customers on ReactOS systems after verifying all their software ran on it. It gets them off the Microsoft upgrade treadmill. Sometimes there are compatibility problems and I fall back to Wine on Linux. Occasionally nothing will do but real Windows.
Which reduces its innovation level to nothing more than a chest-mounted camera.
You want real B2C products that people would actually buy? Look at the Superbowl ads instead. Then watch the Humane ad again. It's laughable.
I felt like the OS hit its stride around 8.1, with some markets sporting somewhat impressive market share, but the corporate politics of the whole situation and the Nokia merger screwed it up badly.
I really think if Microsoft had doubled down and focused on getting flagship devices to all 4 flagship carriers it would have gone somewhere.
But I remember at the time hitting a dead end of hardware while competitors were putting out new phones on all 4 carriers every year. With Windows Phone you were hopping between carrier exclusives or getting nothing, because all the new Nokia/Microsoft phones were low-end or mid-range at best.
make hardware expensive again!
Also, I did not experience them personally, but I love watching computing-history videos on YouTube, and a lot of the computers and operating systems from the 1980s and early 1990s got buried too soon. Mostly their owners were short-sighted: they didn't realize the full potential of what computers and video games could become, and when they had wildly successful hits with legions of faithful fans, they didn't know how to build on that success or what the fans actually wanted to see in updated hardware.
Hit, "ctrl + spacebar to search for anything with simple typed parameters for search" was a killer product in 2005 and now Microsoft finally got wise to copy it in 2025.
Nope.
If you have a mobile phone, you’re almost certainly using IPv6 today.
Yeah. That still stings. It didn’t have to be this way.
One must always define a one-sentence goal or purpose before teams think about how to build something.
Cell processors, because most coders can't do parallelism well
Altera consumer FPGA, as they chose behavioral rather than declarative best practices... then the Intel merger... metastability in complex systems is hard, and most engineers can't do parallelism well...
World Wide Web, because of social media and marketers
Dozens of personal projects, because sometimes things stop being fun. =3
It seems like a clear winner. Of course, this was comparing one particular Betamax machine to one particular VHS machine.
Or are you claiming that it died a long, long, time ago, and now is when we really need it, but it wasn't needed before?
Is American democracy a 'project'?
Project seems a fitting description of American democracy, and the project aspect is part of what makes it American. We the people are working towards that more perfect Union, even if at times it does not seem like it. The system mostly works, but there is no straight line between where we started and that more perfect Union, and whatever that more perfect Union is, it is not a constant. We do get lost along the way, take roundabout paths, sidestep, go backwards, etc.; that is a requirement of the shared aim, and sometimes a step backwards is actually a step forward. We can't fully achieve that more perfect Union; all we can do is keep trying and hope we are more or less going in the correct general direction.
As long as American democracy keeps evolving it is alive and has held up better than one would expect. For at least a century the country has been run by parties that supposedly want to kill American democracy, strangle it with their ideology and defeat the enemies of American democracy which just happen to be their political rivals; we would live in a utopia if it was not for <political party>. But they keep failing and the project continues on.
Brief (CC0): https://doi.org/10.5281/zenodo.17305774
Curious: would this structure have saved any of the projects mentioned here?
It was an extremely interesting effort where you could tell a huge amount of thought and effort went into making it as privacy-preserving as possible. I’m not convinced it’s a great idea, but it was a substantial improvement over what is in widespread use today and I wanted there to be a reasonable debate on it instead of knee-jerk outrage. But congrats, I guess. All the cloud hosting systems scan what they want anyway, and the one that was actually designed with privacy in mind got screamed out of existence by people who didn’t care to learn the first thing about it.
> I wanted there to be a reasonable debate on it
I'm reminded of a recent hit-piece about Chat Control, in which one of the proponent politicians was quoted as complaining about not having a debate. They didn't actually want a debate, they wanted to not get backlash. They would never have changed their minds, so there's no grounds for a debate.
We need to just keep making it clear the answer is "no", and hopefully strengthen that to "no, and perhaps the massive smoking crater that used to be your political career will serve as a warning to the next person who tries".
I can’t think of a single thing that’s come along since that is even remotely similar. What are you thinking of?
I think it’s actually a horrible system to implement if you want to spy on people. That’s the point of it! If you wanted to spy on people, there are already loads of systems that exist which don’t intentionally make it difficult to do so. Why would you not use one of those models instead? Why would you take inspiration from this one in particular?
Once that idea appears, it allows every lobbyist and insider to say “mandate this, we’ll do something like what Apple did but for other types of Bad People” and all of a sudden you have regulations that force messaging systems to make this possible in the name of Freedom.
Remember: if a model can detect CSAM at scale, it can also detect anyone who possesses any politically sensitive image. There are many in politics for whom that level of control is the actual goal.
Great!
> the problem is the very assertion “it is possible to preserve the privacy your constituents want, while running code at scale that can detect Bad Things in every message.”
Apple never made that assertion, and the system they designed is incapable of doing that.
> if a model can detect CSAM at scale, it can also detect anyone who possesses any politically sensitive image.
Apple’s system cannot do that. If you change parts of it, sure. But the system they proposed cannot.
To reiterate what I said earlier:
> The vast majority of the debate was dominated by how people imagined it worked, which was very different to how it actually worked.
So far, you are saying that you don’t have a problem with the system Apple designed, and you do have a problem with some other design that Apple didn’t propose, that is significantly different in multiple ways.
Also, what do you mean by “model”? When I used the word “model” it was in the context of using another system as a model. You seem to be using it in the AI sense. You know that’s not how it worked, right?
Chat Control, and other proposals that advocate backdooring individual client systems.
Clients should serve the user.
Chat Control is older than Apple’s CSAM scanning and is very different from it.
> Clients should serve the user.
Apple’s system only scanned things that were uploaded to iCloud.
You missed the most important part of my comment:
> I think it’s actually a horrible system to implement if you want to spy on people. That’s the point of it! If you wanted to spy on people, there are already loads of systems that exist which don’t intentionally make it difficult to do so. Why would you not use one of those models instead? Why would you take inspiration from this one in particular?
> I'm reminded of a recent hit-piece about Chat Control, in which one of the proponent politicians was quoted as complaining about not having a debate. They didn't actually want a debate, they wanted to not get backlash. They would never have changed their minds, so there's no grounds for a debate.
Right, well I wanted a debate. And Apple changed their minds. So how is it reminding you of that? Neither of those things apply here.
No thanks. I'll take a hammer to any device in my vicinity that implements police scanning.
No, but I have a hard time imagining a bug that would meaningfully compromise this kind of system. Can you give an example?
> How about making Apple vulnerable to demands from every government where they do business?
They already are. So are Google, Meta, Microsoft, and all the other giants we all use. And all those other companies are already scanning your stuff. Meta made two million reports in 2024Q4 alone.
The onus is on you to prove perfection before you risk ruining lives with scanning on hardware they paid for.
100x worse on the vulnerability front, as the tech could be bent to any whim. Importantly, none of what you described is client-side scanning. Even I consider abiding by rules on others' property fair.
I believe my retro Nokia S60/S90 phones do not have any spyware. I believe earlier Nokia models like the S40 or the monochrome ones do not even have the ability to spy on me (though RMS considers triangulation spyware). I don't believe any products from the duopoly, which don't even give you root access, are free from all kinds of vendor rootkits.
But not very different to how it was actually going to work, as you say:
> If you change parts of it, sure.
Now try to reason your way out of the obvious "parts of it will definitely change" knee-jerk.
Apple designed a system. People guessed at what it did. Their guesses were way off the mark. This poisoned all rational discussion on the topic. If you imagine a system that works differently to Apple’s system, you can complain about that imaginary system all you want, but it won’t be meaningful, it’s just noise.
If you can't acknowledge this, it puts you in a position where you can't be convincing to people who need you to deflect obvious, well-known criticisms before beginning a discussion. It gives you crazy person or salesman vibes. These are arguments that someone with a serious interest in the technology would be aware of already and should be included as a prerequisite to being taken seriously. Doing this shows that you value other people's time and effort.
Where have I given you that impression? The thing that annoys me is the sensible discussion being drowned out by ignorance.
> If you can't acknowledge this, it puts you in a position where you can't be convincing to people who need you to deflect obvious, well-known criticisms before beginning a discussion.
I cannot parse this, it’s word salad. People who need me to deflect criticisms? What? I genuinely do not understand what you are trying to say here. Maybe just break the sentences up into smaller ones? It feels like you’re trying to say too many things in too few sentences. What people? Why do they need me to deflect criticisms?