And yes, as others have said, instead we got the modern web, with (for example) web based word processors requiring orders of magnitude more compute power than a desktop of the early Java era.
It wasn't a good experience.
In the meantime, computers became fast enough to run the modern web. The average phone can run tens of these web-based word processors.
Java on the web was pretty terrible from beginning to end, but The Java Web could've worked.
Now that we have the web, we're moving back to the Javaverse in the form of apps (which, on Android, are actually Java(-like)). Every big website has one of those "for the full experience, download our app" banners. Other sites use WASM to bring back the Java applet days, now without a third-party plugin full of security holes. Google Docs renders to a virtual canvas in the browser the same way an applet would've back in 2003, except the applet would've been able to open files directly from the file system.
And lo and behold, the new system is also a terrible experience.
I'd have said the situation back then was a bit better than that - a Java applet wouldn't have been able to access your filesystem by default, for instance.
Part of the benefit of Sun's Java was that the bytecode itself could be statically verified to only have good behaviour and the plugin would then sandbox what it could access at runtime. The plugin itself would obviously have had bugs - like all software - but it's not obvious to me that was intrinsically worse than having all that code as part of the browser (as we do now).
I'd contrast it with ActiveX, which (I think) was very free about what its controls could do: basically just Windows executable code. Flash I'm less clear on the limitations of.
We have moved on in other ways, of course - browsers are architected to isolate processes more, including use of things like seccomp.
Java applications were really slow, and certainly much slower than native programs, until HotSpot became the default in J2SE 1.3. It's distant history now, but I remember a lot of excitement about Java in 1996 (compile once, run anywhere) and then the disappointment of how slow it was.
(After some iterations HotSpot became a really good JIT compiler.)
Not quite as easy as say, VB6, but good enough.
One of these days, I want to try building some sort of GUI app using Swing again and build a native image with Graal.
NFSv4 can run over TCP, which means that any encrypted wrapper can carry it. While SSH port forwarding can be used, stunnel is a better fit for batch environments. WireGuard is another option from this perspective.
Encrypted ONC RPC (RPC-over-TLS, documented in RFC 9289) works at a lower level of the protocol stack to secure NFS.
Obviously, none of this will help with a machine using RARP and TFTP over 10baseT.
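For the stunnel route mentioned above, the setup is just a TCP forwarder on each side; a minimal sketch (host names, port numbers, and cert paths are illustrative, not from this thread):

```ini
; client side: accept cleartext NFS locally, send it out over TLS
; (all values illustrative)
[nfs-tls]
client = yes
accept = 127.0.0.1:2049
connect = nfs-server.example.com:20490

; the server side mirrors this: accept TLS on 20490,
; connect = 127.0.0.1:2049, with cert/key configured.
```

The client would then mount from 127.0.0.1 so the NFS traffic enters the tunnel.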
https://news.ycombinator.com/item?id=33384073
Speaking of YP (which I always thought sounded like a brand of moist baby poop towelettes), BSD, wildcard groups, SunRPC, and Sun's ingenuous networking and security and remote procedure call infrastructure, who remembers Jordan Hubbard's infamous rwall incident on March 31, 1987?
I often heard a different "F" word in that acronym in place of "File".
If they mean Interac e-Transfers, then their inability to access it may have prevented them from running afoul of a common scam: online classified ads offer desirable items, often expensive and niche, and the seller asks the would-be purchaser to pay via e-Transfer. Then you never hear from them again.
Always ensure the product exists, or the service is rendered, before using Interac E-transfer.
https://www.getcybersafe.gc.ca/en/e-transfer-fraud-protect-y...
NetBSD can run Raspberry Pis big-endian. This is a much easier platform to obtain and configure than SPARC.
The targets appear to be earmv7hfeb and aarch64eb.
I've been very slowly upping my Java-fu over the past year or so to crack into the IC market here in the Nordics. Naturally I started by investigating the JVM and its bytecode in some detail. It may surprise a lot of people to know that the JVM's bytecode does not map cleanly back onto a normal processor's instruction set at all.
My very coarse-grained understanding is: if you really want to "write once, run anywhere", and you want to support more platforms than you can count on one hand, you eventually need something like a VM somewhere in the mix just to control complexity. Even more so if you want to compile once, run anywhere. We're using VM here in the technical sense, not the VirtualBox one - SQLite implements a VM under the hood for partly the same reason. It smooths out the cross-compilation and cross-execution story a lot, for a lot of reasons.
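A built-for-purpose VM of this kind can be surprisingly small. Below is a toy sketch in plain Python (opcode names are invented for illustration; real VMs like the JVM or VDBE are vastly richer): the host only has to implement a handful of opcodes, and any "bytecode" program then runs unchanged on any host.

```python
# Toy stack-machine interpreter: a minimal sketch of a "VM in the
# technical sense". Programs are lists of opcode tuples.
def run(program):
    stack = []
    for op, *args in program:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MULT":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return stack.pop()

# 1 + (2 + 3) * 4, expressed as "bytecode" rather than source text
program = [("PUSH", 1), ("PUSH", 2), ("PUSH", 3),
           ("ADD",), ("PUSH", 4), ("MULT",), ("ADD",)]
print(run(program))  # 21
```

Porting this "platform" means reimplementing `run`; the program tuples never change, which is the whole portability trick.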
More formally: SQLite compiles every SQL statement into bytecode, which runs atop the Virtual DataBase Engine (VDBE), and the database file itself is a portable binary format. If you implement the VDBE on a given platform, you can copy any SQLite database file over and then interact with it through that platform's `sqlite3`, no matter which platform it was originally built on. Sound familiar? It's rather like the JVM and JAR files, right?
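You can peek at this bytecode directly: SQLite's EXPLAIN dumps the VDBE program a statement compiles into. A minimal sketch with Python's bundled sqlite3 module (opcode names vary across SQLite versions, so none are hard-coded here):

```python
import sqlite3

# EXPLAIN returns one row per VDBE instruction:
# (addr, opcode, p1, p2, p3, p4, p5, comment)
conn = sqlite3.connect(":memory:")
rows = conn.execute("EXPLAIN SELECT 1 + 2").fetchall()
for addr, opcode, *operands in rows:
    print(addr, opcode)
```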
Once you're already down that route, you might decide to implement things like automatic memory management at the VM level, even though no common hardware processor I know of has a native instruction set that reads "add, multiply, jump, traverse our object structure and figure out what we can get rid of". The VDBE pulls this kind of hat trick too with its own bytecode, which is why we similarly probably won't ever see big hunking lumps of silicon running SQLiteOS on the bare metal, even if there would be theoretical performance enhancements thataways.
(I greatly welcome corrections to the above. Built-for-purpose VMs of the kind I describe above are fascinating beasts and they make me wish I did a CS degree instead of an EE one sometimes.)
I was once at a meetup for Lisp hackers, and discussing something or another with one of them, who referred to Lisp as a "low-level language". When I expressed some astonishment at this characterization, he decided I needed to be introduced to another hacker named "Jerry", who would explain everything.
"Jerry" turned out to be Gerald Sussman, who very excitedly explained to me that Lisp was the instruction set for a virtual machine, which he and a colleague had turned into an actual machine, the processor mentioned above.
I can't find the exact microcontroller I remember (I think the domain is gone), but there are other things like it, including some FPGA cores that make the same claim I remember from that microcontroller I read about in the early 2000s. I wonder how those would perform compared to a JVM running on a traditional instruction set on the same FPGA.
Could it be some older ARM core supporting Jazelle?
> https://en.wikipedia.org/wiki/Jazelle
Concretely, possibly an ARM926EJ-S?
> https://en.wikipedia.org/wiki/ARM9#ARM9E-S_and_ARM9EJ-S
Various other "Java processors" are listed on
I think it was the "aJile" processor listed in your final link, but I'm not 100% sure. It was over 20 years ago that I read about it and was about to buy a development kit when I got pulled off of all java work I was doing.
Lynn Conway, co-author along with Carver Mead of "the textbook" on VLSI design, "Introduction to VLSI Systems", created and taught this historic VLSI Design Course in 1978, which was the first time students designed and fabricated their own integrated circuits:
>"Importantly, these weren’t just any designs, for many pushed the envelope of system architecture. Jim Clark, for instance, prototyped the Geometry Engine and went on to launch Silicon Graphics Incorporated based on that work (see Fig. 16). Guy Steele, Gerry Sussman, Jack Holloway and Alan Bell created the follow-on ‘Scheme’ (a dialect of LISP) microprocessor, another stunning design."
[...]
https://news.ycombinator.com/item?id=29953548
The original Lisp badge (or rather, SCHEME badge):
Design of LISP-Based Processors or, SCHEME: A Dielectric LISP or, Finite Memories Considered Harmful or, LAMBDA: The Ultimate Opcode, by Guy Lewis Steele Jr. and Gerald Jay Sussman, (about their hardware project for Lynn Conway's groundbreaking 1978 MIT VLSI System Design Course) (1979) [pdf] (dspace.mit.edu)
http://dspace.mit.edu/bitstream/handle/1721.1/5731/AIM-514.p...
I believe this is about the Lisp Microprocessor that Guy Steele created in Lynn Conway's groundbreaking 1978 MIT VLSI System Design Course:
https://web.archive.org/web/20210131033223/http://ai.eecs.um...
My friend David Levitt is crouching down in this class photo so his big 1978 hair doesn't block Guy Steele's face:
The class photo is in two parts, left and right:
https://web.archive.org/web/20210131033223/http://ai.eecs.um...
https://web.archive.org/web/20210131033223/http://ai.eecs.um...
Here are hi-res images of the two halves of the chip the class made:
https://web.archive.org/web/20210131033223/http://ai.eecs.um...
https://web.archive.org/web/20210131033223/http://ai.eecs.um...
The Great Quux's Lisp Microprocessor is the big one on the left of the second image, and you can see his name "(C) 1978 GUY L STEELE JR" if you zoom in. David's project is in the lower right corner of the first image, and you can see his name "LEVITT" if you zoom way in.
Here is a photo of a chalkboard with status of the various projects:
https://web.archive.org/web/20210131033223/http://ai.eecs.um...
The final sanity check before maskmaking: A wall-sized overall check plot made at Xerox PARC from Arpanet-transmitted design files, showing the student design projects merged into multiproject chip set.
https://web.archive.org/web/20210131033223/http://ai.eecs.um...
One of the wafers just off the HP fab line containing the MIT'78 VLSI design projects: Wafers were then diced into chips, and the chips packaged and wire bonded to specific projects, which were then tested back at M.I.T.
https://web.archive.org/web/20210131033223/http://ai.eecs.um...
Design of a LISP-based microprocessor
http://dl.acm.org/citation.cfm?id=359031
https://donhopkins.com/home/AIM-514.pdf
Page 22 has a map of the processor layout:
https://donhopkins.com/home/LispProcessor.png
We present a design for a class of computers whose “instruction sets” are based on LISP. LISP, like traditional stored-program machine languages and unlike most high-level languages, conceptually stores programs and data in the same way and explicitly allows programs to be manipulated as data, and so is a suitable basis for a stored-program computer architecture. LISP differs from traditional machine languages in that the program/data storage is conceptually an unordered set of linked record structures of various sizes, rather than an ordered, indexable vector of integers or bit fields of fixed size. An instruction set can be designed for programs expressed as trees of record structures. A processor can interpret these program trees in a recursive fashion and provide automatic storage management for the record structures. We discuss a small-scale prototype VLSI microprocessor which has been designed and fabricated, containing a sufficiently complete instruction interpreter to execute small programs and a rudimentary storage allocator.
Here's a map of the projects on that chip, and a list of the people who made them and what they did:
https://web.archive.org/web/20210131033223/http://ai.eecs.um...
1. Sandra Azoury, N. Lynn Bowen, Jorge Rubenstein: Charge flow transistors (moisture sensors) integrated into digital subsystem for testing.
2. Andy Boughton, J. Dean Brock, Randy Bryant, Clement Leung: Serial data manipulator subsystem for searching and sorting data base operations.
3. Jim Cherry: Graphics memory subsystem for mirroring/rotating image data.
4. Mike Coln: Switched capacitor, serial quantizing D/A converter.
5. Steve Frank: Writeable PLA project, based on the 3-transistor ram cell.
6. Jim Frankel: Data path portion of a bit-slice microprocessor.
7. Nelson Goldikener, Scott Westbrook: Electrical test patterns for chip set.
8. Tak Hiratsuka: Subsystem for data base operations.
9. Siu Ho Lam: Autocorrelator subsystem.
10. Dave Levitt: Synchronously timed FIFO.
11. Craig Olson: Bus interface for 7-segment display data.
12. Dave Otten: Bus interfaceable real time clock/calendar.
13. Ernesto Perea: 4-Bit slice microprogram sequencer.
14. Gerald Roylance: LRU virtual memory paging subsystem.
15. Dave Shaver: Multi-function smart memory.
16. Alan Snyder: Associative memory.
17. Guy Steele: LISP microprocessor (LISP expression evaluator and associated memory manager; operates directly on LISP expressions stored in memory).
18. Richard Stern: Finite impulse response digital filter.
19. Runchan Yang: Armstrong type bubble sorting memory.
The following projects were completed but not quite in time for inclusion in the project set:
20. Sandra Azoury, N. Lynn Bowen, Jorge Rubenstein: In addition to project 1 above, this team completed a CRT controller project.
21. Martin Fraeman: Programmable interval clock.
22. Bob Baldwin: LCS net nametable project.
23. Moshe Bain: Programmable word generator.
24. Rae McLellan: Chaos net address matcher.
25. Robert Reynolds: Digital Subsystem to be used with project 4.
Also, Jim Clark (SGI, Netscape) was one of Lynn Conway's students, and she taught him how to make his first prototype "Geometry Engine"!
https://web.archive.org/web/20210131033223/http://ai.eecs.um...
Just 29 days after the design deadline time at the end of the courses, packaged custom wire-bonded chips were shipped back to all the MPC79 designers. Many of these worked as planned, and the overall activity was a great success. I'll now project photos of several interesting MPC79 projects. First is one of the multiproject chips produced by students and faculty researchers at Stanford University (Fig. 5). Among these is the first prototype of the "Geometry Engine", a high performance computer graphics image-generation system, designed by Jim Clark. That project has since evolved into a very interesting architectural exploration and development project.[9]
Figure 5. Photo of MPC79 Die-Type BK (containing projects from Stanford University):
https://web.archive.org/web/20210131033223/http://ai.eecs.um...
[...]
The text itself passed through drafts, became a manuscript, went on to become a published text. Design environments evolved from primitive CIF editors and CIF plotting software on to include all sorts of advanced symbolic layout generators and analysis aids. Some new architectural paradigms have begun to similarly evolve. An example is the series of designs produced by the OM project here at Caltech. At MIT there has been the work on evolving the LISP microprocessors [3,10]. At Stanford, Jim Clark's prototype geometry engine, done as a project for MPC79, has gone on to become the basis of a very powerful graphics processing system architecture [9], involving a later iteration of his prototype plus new work by Marc Hannah on an image memory processor [20].
[...]
For example, the early circuit extractor work done by Clark Baker [16] at MIT became very widely known because Clark made access to the program available to a number of people in the network community. From Clark's viewpoint, this further tested the program and validated the concepts involved. But Clark's use of the network made many, many people aware of what the concept was about. The extractor proved so useful that knowledge about it propagated very rapidly through the community. (Another factor may have been the clever and often bizarre error-messages that Clark's program generated when it found an error in a user's design!)
9. J. Clark, "A VLSI Geometry Processor for Graphics", Computer, Vol. 13, No. 7, July, 1980.
[...]
The above is all from Lynn Conway's fascinating web site, which includes her great book "VLSI Reminiscence" available for free:
https://web.archive.org/web/20210131033223/http://ai.eecs.um...
These photos look very beautiful to me, and it's interesting to scroll around the hi-res image of the Quux's Lisp Microprocessor while looking at the map from page 22 that I linked to above. There really isn't that much to it; even though it's the biggest one, it isn't all that complicated, so I'd say that "SIMPLE" graffiti is not totally inappropriate. (It's microcoded, and you can actually see the rough but semi-regular "texture" of the code!)
This paper has lots more beautiful Vintage VLSI Porn, if you're into that kind of stuff like I am:
https://web.archive.org/web/20210131033223/http://ai.eecs.um...
A full color hires image of the chip including James Clark's Geometry Engine is on page 23, model "MPC79BK", upside down in the upper right corner, "Geometry Engine (C) 1979 James Clark", with a close-up "centerfold spread" on page 27.
Is the "document chip" on page 20, model "MPC79AH", a hardware implementation of Literate Programming?
If somebody catches you looking at page 27, you can quickly flip to page 20, and tell them that you only look at Vintage VLSI Porn Magazines for the articles!
There is quite literally a Playboy Bunny logo on page 21, model "MPC79B1", so who knows what else you might find in there by zooming in and scrolling around stuff like the "infamous buffalo chip"?
https://web.archive.org/web/20210131033223/http://ai.eecs.um...
https://web.archive.org/web/20210131033223/http://ai.eecs.um...
I believe this is a very sensible decision: being too close to a real architecture would tie the bytecode to similar architectures too much, leaving it little better than compiling for an actual architecture.
Keeping the bytecode abstract is likely what lets it achieve okay performance everywhere. You wouldn't want the bytecode to specify, say, a fixed number of registers.
What may also surprise people who think Java is a bloated language is that Java bytecode is actually quite simple, straightforward to understand, clean, and very well documented. It's an interesting thing to look into, even for someone not involved day to day with Java.
Not what I'd describe as "simple, straightforward"
The 6502 has 56 mnemonics, mapping to about 151 unique opcodes.
The Z80 has about 80 distinct mnemonics, mapping to about 158 base opcodes, with extended opcodes (via CB, ED, DD, FD prefixes) pushing the total to around 500.
Java bytecode opcodes are symbolic instructions for a virtual stack machine: about 200 in total, with a 1:1 mapping from mnemonic to opcode and just a single "wide" prefix to extend operand sizes, designed for portability rather than direct hardware execution.
e.g. The PSC 1000 microprocessor (1994) could run Java directly: https://en.wikipedia.org/wiki/Ignite_(microprocessor)
Stack-based microprocessors tend to perform worse than register-based ones, and I assume there wasn't a huge reason to develop a Java-on-a-chip for a "Java computer": (1) it would not have run non-Java software easily, and (2) the future of stack-based microprocessors wasn't bright.
https://www.jopdesign.com/doc/rtarch.pdf
via https://www.jopdesign.com/ which includes a link to the github repo --- with 20-year-old files!
You can think about it on a really small scale:
PUSH 1
PUSH 2
PUSH 3
ADD
PUSH 4
MULT
ADD
is not that hard to conceptually rewrite into
STORE 1, r1
STORE 2, r2
STORE 3, r3
ADD r2, r3 INTO r2
STORE 4, r3
MULT r2, r3 INTO r2
ADD r1, r2 INTO r1
Of course, from there, you have to deal with running out of registers, and then you're going to want to optimize the resulting code (for instance, small numbers like these can generally fit into the opcodes themselves, so we can optimize away all the STORE instructions easily in most if not all assembly languages). But, again, this is all fairly attainable code to developers with the correct skills, not pie-in-the-sky stuff. Compiler courses do not normally deal directly with this exact problem, but by the time you finish one you'd know enough to tackle it, since the problem that compiler courses do deal with is more or less a superset of this one.

As someone who started their software career at Java version 8, I wouldn't say the trend in Java has been to become more clunky.
If we separate frameworks from the core libraries of Java, it's more modular and has better functionality in things like Strings, Maps, Lists, switch statements, resource (file, HTTP) access, etc.
For frameworks we have Spring Boot, which can be as clunky or as thin as you want for a backend.
For IC cards, and small embedded systems, I can still do that in the newer versions of Java just with a smaller set of libraries.
Maybe the author is nostalgic for a time I didn't experience (I was busy learning how to walk), but Java can do all the things JDK version 1 could, and so much more. No?
It was such a great promise. I remember visiting PC Expo in the late '90s; Sun's booth had a Java demo running on three machines: Linux x86, Windows x86, and Solaris SPARC (OS X wasn't even revealed yet). You could run a few demos selected from a menu, one of which was a 3D ship with accelerated OpenGL, which really thrilled me - cross-platform everything, even CAD and gaming. Amazing! The future is finally here.
And it never happened. Bummer. Instead we got a badly hacked-up hypertext viewer with various VMs duct-taped to the sides.
When Java was new, scripting/dynamic languages hadn't matured enough to be true competitors, so you were left with C/C++, Delphi, and the like. In that landscape, Java was beyond exciting.
Nowadays there are so many alternatives that didn't exist then. And it's not debatable that many of those languages (Dart, C#, Typescript, Kotlin) move faster when it comes to language features. Whether you want/need them is subjective, sure. But back in the day Java was that hot, fast moving language.
This sounds interesting. I have read quite a few FORTH posts on HN but never gave the thing a look. It is really different from anything I have looked at. For example, with functional languages I never got past Scheme's ' symbol, but at least I get most of the syntax. FORTH really is another level.
There are BSD, GPL, and other Open Source variants of Open Firmware you can get and fool around with today and if you’re building a new product you should still consider whether an Open Firmware would work for you versus one of its inferior successors.
For example, for many years FreeBSD's 3rd-stage loader used FICL (Forth Inspired Command Language) for scripting [1]. It's still supported, although in recent years it was deprecated in favor of Lua [2].
[1] https://github.com/freebsd/freebsd-src/tree/main/stand/forth
[2] https://github.com/freebsd/freebsd-src/tree/main/stand/lua
I've frequently written about Mitch Bradley's Forthmacs / Sun Forth / CForth / OpenBoot / OpenFirmware on HN. I was his summer intern at Sun in 1987, and used his Forth systems in many projects!
[...]
https://news.ycombinator.com/item?id=29261810
Speaking of Forth experts -- there's Mitch Bradley, who created OpenFirmware:
[...]
Here's the interview with Mitch Bradley saved on archive.org:
https://web.archive.org/web/20120118132847/http://howsoftwar...
I've previously posted some stuff about Mitch Bradley -- I have used various versions of his ForthMacs / CForth / OpenFirmware systems, and I was his summer intern at Sun in '87!
Mitch is an EXTREMELY productive FORTH programmer! He explains that FORTH is a "Glass Box": you just have to memorize its relatively simple set of standard words, and then you can have a complete understanding and full visibility into exactly how every part of the system works: there is no mysterious "magic", you can grok and extend every part of the system all the way down to the metal. It's especially nice when you have a good decompiler / disassembler ("SEE") like ForthMacs, CForth, and OpenFirmware do.
https://news.ycombinator.com/item?id=9271644
[...]
https://news.ycombinator.com/item?id=38689282
Mitch Bradley came up with a nice way to refactor the Forth compiler/interpreter and control structures, so that you could use them immediately at top level! Traditional FORTHs only let you use IF, DO, WHILE, etc in : definitions, but they work fine at top level in Mitch's Forths (including CForth and Open Firmware).
[...]
metacompile.fth: https://github.com/MitchBradley/openfirmware/blob/master/for...
kernel.fth: https://github.com/MitchBradley/openfirmware/blob/master/for...
arm64: https://github.com/MitchBradley/openfirmware/tree/master/cpu...
emacs: https://github.com/MitchBradley/openfirmware/tree/master/cli...
olpc: https://github.com/MitchBradley/openfirmware/tree/master/dev...
video: https://github.com/MitchBradley/openfirmware/tree/master/dev...
amd7990: https://github.com/MitchBradley/openfirmware/tree/master/dev...
pci: https://github.com/MitchBradley/openfirmware/tree/master/dev...
fcode: https://github.com/MitchBradley/openfirmware/tree/master/ofw...
gui: https://github.com/MitchBradley/openfirmware/tree/master/ofw...
inet: https://github.com/MitchBradley/openfirmware/tree/master/ofw...
Forth is really a transparent "glass box" where you can see through and understand it all from top to bottom, and OpenFirmware includes a museum of drivers and modules and extensions for everywhere it's ever been and all of its missions, like Superman's Crystal Fortress of Solitude!
https://en.wikipedia.org/wiki/Fortress_of_Solitude
>The Fortress contained an alien zoo, a giant steel diary in which Superman wrote his memoirs (using either his invulnerable finger, twin hand touch pads that record thoughts instantly, or heat vision to engrave entries into its pages), a chess-playing robot, specialized exercise equipment, a laboratory where Superman worked on various projects such as developing defenses to kryptonite, a room-sized computer, communications equipment, and rooms dedicated to all of his friends, including one for Clark Kent to fool visitors. As the stories continued, it was revealed that the Fortress was where Superman's robot duplicates were stored. It also contained the Phantom Zone projector, various pieces of alien technology he had acquired on visits to other worlds, and, much like the Batcave, trophies of his past adventures. Indeed, the Batcave and Batman himself made an appearance in the first Fortress story. The Fortress also became the home of the bottle city of Kandor (until it was enlarged), and an apartment in the Fortress was set aside for Supergirl.
Also:
> You need to rename the file with a specific format: the IP address of the JavaStation, but in 8 capitalized hex digits, followed by a dot, and then the architecture (in this case “SUN4M”). So, in this example the IP address (as defined in rarpd above) is 192.168.128.45, which in hex is C0A8802D.
This is of course the correct way to do it, but if you're lazy you can just tail the tftpd logs and see what filename it tries to download, rename the file on the server, and reboot again to pick it up. (I did this when netbooting raspberry pis)
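The filename rule from the quoted article is also trivial to script rather than derive by hand; a small sketch (the function name is mine, not from the article):

```python
# Build the boot filename a SPARC client requests over TFTP:
# the IP address as 8 uppercase hex digits, a dot, then the arch.
def boot_filename(ip: str, arch: str = "SUN4M") -> str:
    octets = ip.split(".")
    return "".join(f"{int(o):02X}" for o in octets) + "." + arch

print(boot_filename("192.168.128.45"))  # C0A8802D.SUN4M
```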
Yes, the firmware only knows how to use RARP and TFTP to fetch a kernel or a better bootloader; the kernel is modern and speaks DHCP. This is a pretty common pattern with netbooting; some machines will use BOOTP rather than RARP, sometimes you use TFTP to fetch something that can do an HTTP fetch, etc. Always lots of fun :D
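That chain (a tiny first stage fetched over TFTP, a bigger loader fetched over something smarter) is what most modern netboot setups still do. A hedged dnsmasq sketch, with illustrative paths, addresses, and filenames (undionly.kpxe here stands in for iPXE's first-stage binary):

```ini
# dnsmasq acting as DHCP + TFTP server (all values illustrative)
enable-tftp
tftp-root=/srv/tftp
dhcp-range=192.168.128.100,192.168.128.200,12h
# hand out a small first-stage loader over TFTP; that loader
# can then fetch the real kernel over HTTP
dhcp-boot=undionly.kpxe
```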
I remember having trouble some years ago upgrading old Cisco routers because the image was bigger than what TFTP can handle.
I also love installing/cabling servers, but not needing to leave your desk to (re)provision hardware is pretty life-changing, considering your desk can be anywhere in the world due to work travel.
The Worst Job in the World, from Michael Tiemann <tiemann@cygnus.com>:
https://www.donhopkins.com/home/catalog/unix-haters/slowlari...
PS: Fuck Trump supporting anti-vaxer Scott "You have zero privacy, get over it" McNealy. May he run Solaris in hell. If you installed it on him, then good for you, he deserved it!
Scott McNealy has long been one of Trump’s few friends in Silicon Valley:
https://www.sfchronicle.com/politics/article/Scott-McNealy-h...
Former Sun Micro CEO Scott McNealy, known for his provocative quotes, says Trump is doing a 'spectacular job' amid the coronavirus crisis. That's not how many tech experts see it:
https://www.businessinsider.com/scott-mcnealy-praises-trumps...
Sun on Privacy: "Get Over It":
> Initially I drank the kool-aid and was thrilled about this new “modern” language that was going to take over the world, and drooled at the notion of Java-based computers, containing Java chips that could run java byte-code as their native machine code.
Exactly. I was lucky to see Java when it was still called Oak, and then I developed some of the first (non-animation) Java applets and small desktop applications outside of Sun/JavaSoft. It was very exciting (speaking as a programmer in C, C++, Smalltalk, a little Self, a little Lisp, and other languages at the time). The language itself wasn't as cool as Lisp or Smalltalk, but it was a nice halfway compromise from C++, with some of its own less exotic but nice features and ergonomics. It was already in the browsers, had next-gen embedded systems for the Internet at the forefront from the beginning, there was a proof-of-concept of a better kind of Web browser using it, Sun was even putting it in rings for ubiquitous computing, there were thin clients that could get interesting (combined with Sun's "The Network Is The Computer", even if historically techies didn't like underpowered diskless workstations, except to give to non-techies), etc., and it only promised to get better...
Then I turned my back for a sec., and the next time I looked, Java had been kicked out of the browser, and almost all of the energy (except for the Android gambit) seemed to be focused on pitching Java for corporate internal software development. And suddenly no one else seemed to want to touch it, even if there wasn't much better. (Python, for example, from the same era, was one person's simplified end-user extension language, not intended for application development.)
Yet another case of technology adoption not going how you'd initially think it would.
On the PC front, James Gosling and company were an amazing team, but they pushed for very academic and cumbersome patterns that converged in the EJB architecture. Nobody in their right mind would fall in love with that.
Two or three years later, the internet bubble fallout affected every technology.
Nowadays we've got Kubernetes with YAML for that.
https://news.ycombinator.com/item?id=39252103
>The background is that Terry Winograd, a professor of Human-Computer Interaction at Stanford University in Silicon Valley, had invited me to lecture on some of my work in 1998. After my talk, Terry invited me to tour his lab and meet some of his graduate students. One of the Ph.D. students was a bright young fellow named Larry Page, who showed me his project to enhance the relevance of web search results.
Many of those lectures are online. I was not able to find the 1998 one he mentioned, but here is one that Jakob Nielsen gave on May 20, 1994, called "Heuristic Evaluation of User Interfaces, Jakob Nielsen, Sunsoft".
https://searchworks.stanford.edu/view/vj346zm2128
He gave another one on October 4, 1996, entitled "Ensuring the Usability of the Next Computing Paradigm". I can't find it in the online collection, although it exists in the inventory of video recordings; nor can I find any 1998 talks by Jakob Nielsen in this list:
https://oac.cdlib.org/findaid/ark:/13030/c82b926h/entire_tex...
Here is the entire online collection (it's kind of hard to search the 25 pages of the extensive collection, thanks to bad web site design!):
https://searchworks.stanford.edu/catalog?f%5Bcollection%5D%5...
The oldest (most interesting to me) ones are at the end (page 25):
https://searchworks.stanford.edu/?f%5Bcollection%5D%5B%5D=a1...
Here are some of the older ones that I think are historically important and especially interesting (but there are so many I haven't watched them all, so there are certainly more that are worth watching):
[...]
Bringing Behavior to the Internet, James Gosling, SUN Microsystems [December 1, 1995]:
https://searchworks.stanford.edu/view/bz890ng3047
I also uploaded this historically interesting video to YouTube to generate closed captions and make it more accessible and findable. I was planning on proofreading them like I did for this Will Wright talk, but haven't gotten around to it yet (any volunteers? ;):
https://www.youtube.com/watch?v=dgrNeyuwA8k
This is an early talk by James Gosling on Java, which I attended and appeared on camera asking a couple questions about security (44:53, 1:00:35), and I also spotted Ken Kahn asking a question (50:20). Can anyone identify other people in the audience?
My questions about the “optical illusion attack” and security at 44:53 got kind of awkward, and his defensive "shrug" answer hasn't aged too well! ;)
No hard feelings of course, since we’d known each other for years before (working on Emacs and NeWS) and we’re still friends, but I’d recently been working on Kaleida ScriptX, which lost out to Java in part because Java was touted as being so “secure”, and I didn’t appreciate how Sun was promoting Java by throwing the word “secure” around without defining what it really meant or what its limitations were (expecting people to read more into it than it really meant, on purpose, to hype up Java).
There were also real-time Java VMs with different latency promises.
Basically you had different versions of the JVM, optimized for their use cases. I guess when Sun was bought by Oracle, everything died.
https://www.ptc.com/en/products/developer-tools/perc
https://www.aicas.com/wp/products-services/jamaicavm/
Additionally there are plenty of others around,
And then there are flavours, like MicroEJ, Android, what Ricoh and Xerox ship on their copiers, Blu-ray, ...
Folks' hate for Oracle makes them forget that the Java push into the industry was not Sun alone, but rather a Sun, Oracle, IBM trio.
Oracle has embraced the technology since the early Java days, and even had its own flavour of the JavaStation, called the Network Computer.
Oracle has been a better Java steward than the alternative, which was Java dying at version 6 and losing Maxine (whose ideas live on in GraalVM), ...
No one else jumped to acquire Sun, and Google missed their opportunity to own Java, after torpedoing Sun.
Nowadays Google has their own .NET.
And frankly Electron is a much worse experience than Swing apps, but there is nothing like helping Chrome take over the Web and the desktop as the platform to rule them all. /s
Which is kind of what JetBrains did, as do other companies that know their stuff, e.g. https://www.bitwig.com
Hah! Even ISAs are somewhat detached from truly native machine code, these days.