163 points | by jaypatelani | 4 days ago | 11 comments
  • deadlyllama 4 days ago
    I remember when Java was exciting. There were several attempts at open source Java OSes like JOS (https://jos.sourceforge.net/). A Java applet runtime for the PalmPilot. My thesis on dynamic aliasing protection was based on a dynamic Java-esque runtime. But... Java got a reputation for being heavyweight.

    And yes, as others have said, instead we got the modern web, with (for example) web based word processors requiring orders of magnitude more compute power than a desktop of the early Java era.

    • okeuro49 4 days ago
      I can remember trying to run applets on a consumer machine.

      It wasn't a good experience.

      In the meantime, computers became fast enough to run the modern web. The average phone can run tens of these web-based word processors.

      • jeroenhd 4 days ago
        Web applets were a terrible experience all round. Downloaded JAR files usually just worked, though. The GUI looked odd because it wasn't using normal operating system controls, but in terms of performance it was no slower than any native program except for in the most extreme cases.

        Java on the web was pretty terrible from beginning to end, but The Java Web could've worked.

        Now that we have the web, we're moving back to the Javaverse in the form of apps (which, on Android, are actually Java(-like)). Every big website has one of those "for the full experience, download our app" banners. Other sites use WASM to bring back the Java applet days, now without a third-party plugin full of security holes. Google Docs renders to a virtual canvas in the browser in the same way an applet would've back in 2003, except the applet would've been able to open files directly from the file system.

        And lo and behold, the new system is also a terrible experience.

        • mark_undoio 4 days ago
          > Google Docs renders to a virtual canvas in the browser in the same way an applet would've back in 2003, except it would've been able to open files directly from the file system.

          I'd have said the situation back then was a bit better than that - a Java applet wouldn't have been able to access your filesystem by default, for instance.

          Part of the benefit of Sun's Java was that the bytecode itself could be statically verified to only have good behaviour and the plugin would then sandbox what it could access at runtime. The plugin itself would obviously have had bugs - like all software - but it's not obvious to me that was intrinsically worse than having all that code as part of the browser (as we do now).

          I'd contrast it with ActiveX, which (I think) was very free about what its controls could do (basically just Windows executable code). Flash I'm less clear on the limitations of.

          We have moved on in other ways, of course - browsers are architected to isolate processes more, including use of things like seccomp.

        • danieldk 4 days ago
          > but in terms of performance it was no slower than any native program except for in the most extreme cases.

          Java applications were really slow, and certainly much slower than native programs, until HotSpot became the default in J2SE 1.3. It's distant history now, but I remember a lot of excitement about Java in 1996 (compile once) and then disappointment at how slow it was.

          (After some iterations HotSpot became a really good JIT compiler.)

        • _glass 4 days ago
          To be fair, Java Swing was my first GUI programming experience, and it's still the best I've had. For desktop apps with fast iteration, no budget, and works-anywhere portability, it's basically Swing or Electron.
          • renewedrebecca 4 days ago
            Swing is still nice to use, especially with the GUI Builder on NetBeans.

            Not quite as easy as, say, VB6, but good enough.

            One of these days, I want to try building some sort of GUI app using Swing again and build a native image with Graal.

      • jamesfinlayson 4 days ago
        I remember trying to play Java applet games before broadband Internet was widely deployed - I soon gave up on waiting for the applet to download and played some other game, but it was great once broadband Internet became available (or I was using a computer at a big institution with fast Internet).
    • mynameajeff 4 days ago
      Love digging around projects like JOS. I had never heard of it before, and there really doesn't seem to be much else online about it beyond the info at that link. There's always something melancholy about retroactively watching a project like JOS draw such a swarm of activity and then just quietly and unceremoniously die off.
    • pjmlp 4 days ago
      Don't forget the Electron mess.
  • chasil 4 days ago
    "Thankfully, despite its age and total lack of security, NFS is still well supported under Linux."

    NFSv4 can run over TCP, which means that any encrypted wrapper can carry it. While SSH port forwarding can be used, stunnel is a better fit for batch environments. Wireguard is another option from this perspective.

    Encrypted ONC RPC secures NFS at a lower level of the protocol stack; it is documented in RFC 9289.

    Obviously, none of this will help with a machine using RARP and TFTP over 10baseT.
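
    A client-side stunnel fragment along these lines is one way to picture the approach (the hostname and the 2050 forwarding port are made up for illustration; NFSv4 itself normally listens on 2049/tcp). The client mounts from 127.0.0.1 while stunnel carries the traffic under TLS:

```ini
; hypothetical client-side stunnel.conf: wrap NFSv4-over-TCP in TLS
[nfs-tls]
client = yes
; local port that "mount -t nfs4 127.0.0.1:/export /mnt" talks to
accept = 127.0.0.1:2049
; remote stunnel instance, which forwards to the real nfsd on the server
connect = nfs-server.example.com:2050
```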

    • DonHopkins 4 days ago
      NFS originally stood for "No File Security".

      https://news.ycombinator.com/item?id=33384073

      Speaking of YP (which I always thought sounded like a brand of moist baby poop towelettes), BSD, wildcard groups, SunRPC, and Sun's ingenuous networking and security and remote procedure call infrastructure, who remembers Jordan Hubbard's infamous rwall incident on March 31, 1987?

      https://news.ycombinator.com/item?id=31822138

      • EvanAnderson 4 days ago
        > NFS originally stood for "No File Security"

        I often heard a different "F" word in that acronym in place of "File".

  • dleslie 4 days ago
    > After many months of searching I found a Mr Coffee JavaStation for sale in Canada; unfortunately the seller only accepted payments through a Canadian banking service which is pretty much inaccessible outside Canada.

    If they mean Interac E-Transfers, then their inability to access it may have prevented them from running afoul of a common scam. Online classified ads will offer desirable items that are also often expensive and niche, and will ask the would-be purchaser to pay for it via an e-Transfer. And then you never hear from them again.

    Always ensure the product exists, or the service is rendered, before using Interac E-transfer.

    https://www.getcybersafe.gc.ca/en/e-transfer-fraud-protect-y...

    • toast0 4 days ago
      Only delayed. Eventually they had a friend move to Canada in order to straw purchase the JavaStation on their behalf. (Maybe there were other motivations for moving to Canada, like ketchup chips)
      • dleslie 4 days ago
        Ah, I didn't read much past what I quoted; I became distracted.
    • 486sx33 4 days ago
      Maybe you’re missing part of the point… you can’t send an Interac transfer from a US bank account, so unless you have a Canadian bank account, you can’t do it!
      • mardifoufs 4 days ago
        Yes, but usually listings that ask for Interac e-transfers are a scam in the first place! They are basically impossible to reverse, so scammers really like them. So even if they had access to Interac transfers, they probably shouldn't have bought the listed item anyway.
  • ephaeton 4 days ago
    I dearly remember setting up NetBSD on various SPARCstations and UltraSPARCs (a II, and an Ultra 60) and running them alongside a set of various other RISCs and CISCs of the late 90s. Based on the paper 'attack of the lemmings' (IIRC) by matthias something (IIRC), I wanted to create a 'how to portably code C' course that would run with just the basic NetBSD tools - compiler, editor, test system, make, ... - write once, commit, and have the whole weird-ass machine park respond to the unit test for a given exercise. Sadly never made it happen fully. Still - NetBSD! Fun times, great documentation and such a knowledgeable crowd! Enjoy the voyage!
    • chasil 4 days ago
      I am assuming that the major reason you wanted to do this is that SPARC is big-endian. It works in the native order of TCP/IP, so the hton/ntoh macros are no-ops at the socket level in C.

      NetBSD can run Raspberry Pis big-endian. This is a much easier platform to obtain and configure than SPARC.

      The targets appear to be earmv7hfeb and aarch64eb.

      https://wiki.netbsd.org/ports/evbarm/
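
      The byte-order point can be seen from Java itself: java.nio buffers default to network (big-endian) order, the same order SPARC uses natively, which is why hton/ntoh are no-ops there. A small sketch (class and method names are mine):

```java
import java.nio.ByteBuffer;

public class NetOrder {
    // Serialize an int in network byte order. ByteBuffer's default order is
    // big-endian, which matches SPARC's native order -- so on SPARC the
    // hton/ntoh conversions have nothing to do.
    static byte[] toNetworkBytes(int value) {
        return ByteBuffer.allocate(4).putInt(value).array();
    }

    public static void main(String[] args) {
        byte[] b = toNetworkBytes(0x0A0B0C0D);
        // Most significant byte comes first on the wire: 0A 0B 0C 0D
        System.out.printf("%02X %02X %02X %02X%n",
                b[0] & 0xFF, b[1] & 0xFF, b[2] & 0xFF, b[3] & 0xFF);
    }
}
```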

      • ephaeton 4 days ago
        Yeah, machines of different endianness and, ideally, different alignment requirements. Always wanted to get an Alpha as well. Had HP-UX / hp300(?), sparc, sparc64, i386, x86_64, maybe another arch. This was in 2005-ish, mind you. The idea was to write code that would portably work on Linux and NetBSD on at least said architectures, ideally more.
      • DonHopkins 4 days ago
        I was having lunch with some hardware designers from SGI and Sun, and the SGI people mentioned jokingly that the MIPS could be both big-endian and little-endian, which they called SPIM. Then they pointed out, much to the embarrassment of the Sun people (including me at the time), that the little-endian version of the SPARC would be called CRAPS.
  • hiAndrewQuinn 4 days ago
    > The Java-chip thing proved more difficult to realize than anticipated

    I've been very slowly upping my Java-fu over the past year or so to crack into the IC market here in the Nordics. Naturally I started by investigating the JVM and its bytecode in some detail. It may surprise a lot of people to know that the JVM's bytecode is actually very, very much not cleanly mappable back to a normal processor's instruction set.

    My very coarse-grained understanding is: if you really want to "write once, run anywhere", and you want to support more platforms than you can count on one hand, you eventually need something like a VM somewhere in the mix just to control complexity. Even more so if you want to compile once, run anywhere. We're using VM here in the technical sense, not the VirtualBox one - SQLite implements a VM under the hood for partly the same reason. It just smooths out the cross-compilation and cross-execution story a lot.

    More concretely: a SQLite query is compiled into a blob of bytecode which gets run atop the Virtual DataBase Engine (VDBE). If you implement a VDBE on a given platform, you can copy any SQLite database file over and then interact with it with that platform's `sqlite3`, no matter which platform it was originally built on. Sound familiar? It's rather like the JVM and JAR files, right?

    Once you're already down that route, you might decide to implement things like automatic memory management at the VM level, even though no common hardware processor I know of has a native instruction set that reads "add, multiply, jump, traverse our object structure and figure out what we can get rid of". The VDBE pulls this kind of hat trick too with its own bytecode, which is why we similarly probably won't ever see big hunking lumps of silicon running SQLiteOS on the bare metal, even if there would be theoretical performance enhancements thataways.

    (I greatly welcome corrections to the above. Built-for-purpose VMs of the kind I describe above are fascinating beasts and they make me wish I did a CS degree instead of an EE one sometimes.)

    • bitwize 4 days ago
      It's not common, as only one was ever made, but the Lisp processor described in Sussman and Steele's paper "Design of LISP-based Processors, or SCHEME: A Dielectric LISP, or Finite Memories Considered Harmful, or LAMBDA: The Ultimate Opcode", had built-in, hardware-implemented garbage collection.

      I was once at a meetup for Lisp hackers, and discussing something or another with one of them, who referred to Lisp as a "low-level language". When I expressed some astonishment at this characterization, he decided I needed to be introduced to another hacker named "Jerry", who would explain everything.

      "Jerry" turned out to be Gerald Sussman, who very excitedly explained to me that Lisp was the instruction set for a virtual machine, which he and a colleague had turned into an actual machine, the processor mentioned above.

      • naikrovek 4 days ago
        I remember seeing a Java microprocessor for sale years ago. It claimed that the CPU's native instruction set was Java bytecode.

        I can't find the exact microcontroller that I remember (I think the domain is gone), but there are other things like this, including some FPGA cores that make the same claim as that microcontroller I read about in the early 2000s. I wonder how those would perform compared to a JVM running on a traditional instruction set on the same FPGA.

        • aleph_minus_one 4 days ago
          > I remember seeing a Java microprocessor for sale years ago. It claimed that the CPUs native instruction set is Java bytecode.

          Could it be some older ARM core supporting Jazelle?

          > https://en.wikipedia.org/wiki/Jazelle

          Concretely, possibly an ARM926EJ-S?

          > https://en.wikipedia.org/wiki/ARM9#ARM9E-S_and_ARM9EJ-S

          Various other "Java processors" are listed on

          > https://en.wikipedia.org/wiki/Java_processor

          • naikrovek 4 days ago
            Nah, it was a processor whose native instruction set was Java bytecode. It garbage collected natively, and all the other stuff. It was not Jazelle, nor was it an ARM CPU that interpreted bytecode and ran it.

            I think it was the "aJile" processor listed in your final link, but I'm not 100% sure. It was over 20 years ago that I read about it and was about to buy a development kit when I got pulled off of all java work I was doing.

      • hiAndrewQuinn 4 days ago
        Indeed, the old Lisp machines were exactly what I was thinking of as the possible exception here.
      • DonHopkins 4 days ago
        https://news.ycombinator.com/item?id=37130128

        Lynn Conway, co-author along with Carver Mead of "the textbook" on VLSI design, "Introduction to VLSI Systems", created and taught this historic VLSI Design Course in 1978, which was the first time students designed and fabricated their own integrated circuits:

        >"Importantly, these weren’t just any designs, for many pushed the envelope of system architecture. Jim Clark, for instance, prototyped the Geometry Engine and went on to launch Silicon Graphics Incorporated based on that work (see Fig. 16). Guy Steele, Gerry Sussman, Jack Holloway and Alan Bell created the follow-on ‘Scheme’ (a dialect of LISP) microprocessor, another stunning design."

        [...]

        https://news.ycombinator.com/item?id=29953548

        The original Lisp badge (or rather, SCHEME badge):

        Design of LISP-Based Processors or, SCHEME: A Dielectric LISP or, Finite Memories Considered Harmful or, LAMBDA: The Ultimate Opcode, by Guy Lewis Steele Jr. and Gerald Jay Sussman, (about their hardware project for Lynn Conway's groundbreaking 1978 MIT VLSI System Design Course) (1979) [pdf] (dspace.mit.edu)

        http://dspace.mit.edu/bitstream/handle/1721.1/5731/AIM-514.p...

        I believe this is about the Lisp Microprocessor that Guy Steele created in Lynn Conway's groundbreaking 1978 MIT VLSI System Design Course:

        https://web.archive.org/web/20210131033223/http://ai.eecs.um...

        My friend David Levitt is crouching down in this class photo so his big 1978 hair doesn't block Guy Steele's face:

        The class photo is in two parts, left and right:

        https://web.archive.org/web/20210131033223/http://ai.eecs.um...

        https://web.archive.org/web/20210131033223/http://ai.eecs.um...

        Here are hires images of the two halves of the chip the class made:

        https://web.archive.org/web/20210131033223/http://ai.eecs.um...

        https://web.archive.org/web/20210131033223/http://ai.eecs.um...

        The Great Quux's Lisp Microprocessor is the big one on the left of the second image, and you can see his name "(C) 1978 GUY L STEELE JR" if you zoom in. David's project is in the lower right corner of the first image, and you can see his name "LEVITT" if you zoom way in.

        Here is a photo of a chalkboard with status of the various projects:

        https://web.archive.org/web/20210131033223/http://ai.eecs.um...

        The final sanity check before maskmaking: A wall-sized overall check plot made at Xerox PARC from Arpanet-transmitted design files, showing the student design projects merged into multiproject chip set.

        https://web.archive.org/web/20210131033223/http://ai.eecs.um...

        One of the wafers just off the HP fab line containing the MIT'78 VLSI design projects: Wafers were then diced into chips, and the chips packaged and wire bonded to specific projects, which were then tested back at M.I.T.

        https://web.archive.org/web/20210131033223/http://ai.eecs.um...

        Design of a LISP-based microprocessor

        http://dl.acm.org/citation.cfm?id=359031

        https://donhopkins.com/home/AIM-514.pdf

        Page 22 has a map of the processor layout:

        https://donhopkins.com/home/LispProcessor.png

        We present a design for a class of computers whose “instruction sets” are based on LISP. LISP, like traditional stored-program machine languages and unlike most high-level languages, conceptually stores programs and data in the same way and explicitly allows programs to be manipulated as data, and so is a suitable basis for a stored-program computer architecture. LISP differs from traditional machine languages in that the program/data storage is conceptually an unordered set of linked record structures of various sizes, rather than an ordered, indexable vector of integers or bit fields of fixed size. An instruction set can be designed for programs expressed as trees of record structures. A processor can interpret these program trees in a recursive fashion and provide automatic storage management for the record structures. We discuss a small-scale prototype VLSI microprocessor which has been designed and fabricated, containing a sufficiently complete instruction interpreter to execute small programs and a rudimentary storage allocator.

        Here's a map of the projects on that chip, and a list of the people who made them and what they did:

        https://web.archive.org/web/20210131033223/http://ai.eecs.um...

        1. Sandra Azoury, N. Lynn Bowen Jorge Rubenstein: Charge flow transistors (moisture sensors) integrated into digital subsystem for testing.

        2. Andy Boughton, J. Dean Brock, Randy Bryant, Clement Leung: Serial data manipulator subsystem for searching and sorting data base operations.

        3. Jim Cherry: Graphics memory subsystem for mirroring/rotating image data.

        4. Mike Coln: Switched capacitor, serial quantizing D/A converter.

        5. Steve Frank: Writeable PLA project, based on the 3-transistor ram cell.

        6. Jim Frankel: Data path portion of a bit-slice microprocessor.

        7. Nelson Goldikener, Scott Westbrook: Electrical test patterns for chip set.

        8. Tak Hiratsuka: Subsystem for data base operations.

        9. Siu Ho Lam: Autocorrelator subsystem.

        10. Dave Levitt: Synchronously timed FIFO.

        11. Craig Olson: Bus interface for 7-segment display data.

        12. Dave Otten: Bus interfaceable real time clock/calendar.

        13. Ernesto Perea: 4-Bit slice microprogram sequencer.

        14. Gerald Roylance: LRU virtual memory paging subsystem.

        15. Dave Shaver: Multi-function smart memory.

        16. Alan Snyder: Associative memory.

        17. Guy Steele: LISP microprocessor (LISP expression evaluator and associated memory manager; operates directly on LISP expressions stored in memory).

        18. Richard Stern: Finite impulse response digital filter.

        19. Runchan Yang: Armstrong type bubble sorting memory.

        The following projects were completed but not quite in time for inclusion in the project set:

        20. Sandra Azoury, N. Lynn Bowen, Jorge Rubenstein: In addition to project 1 above, this team completed a CRT controller project.

        21. Martin Fraeman: Programmable interval clock.

        22. Bob Baldwin: LCS net nametable project.

        23. Moshe Bain: Programmable word generator.

        24. Rae McLellan: Chaos net address matcher.

        25. Robert Reynolds: Digital Subsystem to be used with project 4.

        Also, Jim Clark (SGI, Netscape) was one of Lynn Conway's students, and she taught him how to make his first prototype "Geometry Engine"!

        https://web.archive.org/web/20210131033223/http://ai.eecs.um...

        Just 29 days after the design deadline time at the end of the courses, packaged custom wire-bonded chips were shipped back to all the MPC79 designers. Many of these worked as planned, and the overall activity was a great success. I'll now project photos of several interesting MPC79 projects. First is one of the multiproject chips produced by students and faculty researchers at Stanford University (Fig. 5). Among these is the first prototype of the "Geometry Engine", a high performance computer graphics image-generation system, designed by Jim Clark. That project has since evolved into a very interesting architectural exploration and development project.[9]

        Figure 5. Photo of MPC79 Die-Type BK (containing projects from Stanford University):

        https://web.archive.org/web/20210131033223/http://ai.eecs.um...

        [...]

        The text itself passed through drafts, became a manuscript, went on to become a published text. Design environments evolved from primitive CIF editors and CIF plotting software on to include all sorts of advanced symbolic layout generators and analysis aids. Some new architectural paradigms have begun to similarly evolve. An example is the series of designs produced by the OM project here at Caltech. At MIT there has been the work on evolving the LISP microprocessors [3,10]. At Stanford, Jim Clark's prototype geometry engine, done as a project for MPC79, has gone on to become the basis of a very powerful graphics processing system architecture [9], involving a later iteration of his prototype plus new work by Marc Hannah on an image memory processor [20].

        [...]

        For example, the early circuit extractor work done by Clark Baker [16] at MIT became very widely known because Clark made access to the program available to a number of people in the network community. From Clark's viewpoint, this further tested the program and validated the concepts involved. But Clark's use of the network made many, many people aware of what the concept was about. The extractor proved so useful that knowledge about it propagated very rapidly through the community. (Another factor may have been the clever and often bizarre error-messages that Clark's program generated when it found an error in a user's design!)

        9. J. Clark, "A VLSI Geometry Processor for Graphics", Computer, Vol. 13, No. 7, July, 1980.

        [...]

        The above is all from Lynn Conway's fascinating web site, which includes her great book "VLSI Reminiscence" available for free:

        https://web.archive.org/web/20210131033223/http://ai.eecs.um...

        These photos look very beautiful to me, and it's interesting to scroll around the hires image of the Quux's Lisp Microprocessor while looking at the map from page 22 that I linked to above. There really isn't that much to it, so even though it's the biggest one, it really isn't all that complicated, so I'd say that "SIMPLE" graffiti is not totally inappropriate. (It's microcoded, and you can actually see the rough but semi-regular "texture" of the code!)

        This paper has lots more beautiful Vintage VLSI Porn, if you're into that kind of stuff like I am:

        https://web.archive.org/web/20210131033223/http://ai.eecs.um...

        A full color hires image of the chip including James Clark's Geometry Engine is on page 23, model "MPC79BK", upside down in the upper right corner, "Geometry Engine (C) 1979 James Clark", with a close-up "centerfold spread" on page 27.

        Is the "document chip" on page 20, model "MPC79AH", a hardware implementation of Literate Programming?

        If somebody catches you looking at page 27, you can quickly flip to page 20, and tell them that you only look at Vintage VLSI Porn Magazines for the articles!

        There is quite literally a Playboy Bunny logo on page 21, model "MPC79B1", so who knows what else you might find in there by zooming in and scrolling around stuff like the "infamous buffalo chip"?

        https://web.archive.org/web/20210131033223/http://ai.eecs.um...

        https://web.archive.org/web/20210131033223/http://ai.eecs.um...

    • jraph 4 days ago
      > It may surprise a lot of people to know that the JVM's bytecode is actually very, very much not cleanly mappable back to a normal processor's machine code or instruction set

      I believe this is a very sensible decision: being too close to a real architecture would tie the bytecode too closely to similar architectures, at which point you might as well compile to an actual architecture.

      The bytecode being abstract enough is likely a good thing to be able to achieve okay performance everywhere. Like, you wouldn't want the bytecode to specify a fixed number of registers.

      What may also surprise many people who think of Java as a bloated language is that Java bytecode is actually quite simple, straightforward to understand, clean, and also very well documented. It's an interesting thing to look into, even for someone not involved day to day in Java.
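
      The stack-machine flavor shows up in even a trivial method. A sketch (the disassembly in the comment is roughly what `javap -c` prints for this method; exact output varies by compiler version):

```java
public class Adder {
    // "javap -c Adder" disassembles this method into bytecode along these
    // lines (local slot 0 is "this"; slots 1 and 2 are the int arguments):
    //   iload_1   // push first int argument onto the operand stack
    //   iload_2   // push second int argument
    //   iadd      // pop two ints, push their sum
    //   ireturn   // pop the result and return it
    public int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        System.out.println(new Adder().add(2, 3));
    }
}
```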

      • geokon 4 days ago
        I remember learning in school that it's a relatively simple stack machine, but when I look at the instruction set online it's actually ~200 opcodes.

        Not what I'd describe as "simple, straightforward"

        • renewedrebecca 4 days ago
          That's fewer opcodes than a 6502 or Z80 microprocessor.
          • DonHopkins 4 days ago
            In the 6502 and Z80, "opcode" refers to the actual machine instruction executed by the CPU, represented by mnemonics (e.g., LDA, STA). Each mnemonic can map to multiple opcode values due to different addressing modes.

            The 6502 has 56 mnemonics, mapping to about 151 unique opcodes.

            The Z80 has about 80 distinct mnemonics, mapping to about 158 base opcodes, with extended opcodes (via CB, ED, DD, FD prefixes) pushing the total to around 500.

            Java bytecode opcodes are symbolic instructions for a virtual stack machine, about 200 in total with a 1:1 mapping from mnemonic to opcode, and just a "wide" prefix to extend operand sizes, designed for portability rather than direct hardware execution.

    • tabony 4 days ago
      It's not directly mappable to a register-based microprocessor, but it is directly mappable to a stack-based microprocessor.

      e.g. The PSC 1000 microprocessor (1994) could run Java directly: https://en.wikipedia.org/wiki/Ignite_(microprocessor)

      Stack-based microprocessors tend to perform worse than register-based ones and I assume there wasn't a huge reason to develop a Java-on-chip for a "Java computer.” (1) It would have not run non-Java software easily and (2) the future of stack-based microprocessors wasn't as bright.

    • jonjacky 4 days ago
      This 20-page paper on the Java Optimized Processor is quite interesting. It has been implemented on FPGAs:

      https://www.jopdesign.com/doc/rtarch.pdf

      via https://www.jopdesign.com/ which includes a link to the github repo --- with 20-year-old files!

    • ielillo 4 days ago
      IIRC, the original Java VM was a stack-based machine. That made sense when it was first created, since a stack machine is the simplest system you can create that runs code: it only needs three registers, one for the instruction, one for the first operand, and one for the top of the stack holding the other operand. The problem is that you need to push and pop a lot at runtime, which means more memory accesses and more time spent gathering data than doing actual operations. It also underutilizes the processor's registers, since on a normal processor you would be using two data registers at most. This was one of the early issues with Java running slowly on Android, and the reason for the creation of the Dalvik VM, which was a register-based one.
      • geokon 4 days ago
        Naive question: if the opcodes are the same, how can you go from a stack machine to a register one?
        • jerf 4 days ago
          You compile the stack based code into register code. It is, of course, easier to say than to do, but it is within the range of a skilled team, not absurdly complicated.

          You can think about it on a really small scale:

              PUSH 1
              PUSH 2
              PUSH 3
              ADD
              PUSH 4
              MULT
              ADD
          
          is not that hard to conceptually rewrite into

              STORE 1, r1
              STORE 2, r2
              STORE 3, r3
              ADD r2, r3 INTO r2
              STORE 4, r3
              MULT r2, r3 INTO r2
              ADD r1, r2 INTO r1
          
          Of course, from there, you have to deal with running out of registers, and then you're going to want to optimize the resulting code (for instance, small numbers like these can generally fit into the opcodes themselves, so we can easily optimize away all the STORE instructions in most if not all assembly languages), but, again, this is all fairly attainable code to developers with the correct skills, not pie-in-the-sky stuff. Compiler courses do not normally deal directly with this exact problem, but by the time you finish one you'd know enough to tackle it, since the problem that compiler courses do deal with is more-or-less a superset of this problem.
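
          For the curious, the stack program above evaluates to 21 (1 + (2 + 3) * 4), and interpreting it directly takes only a few lines. A minimal sketch (the PUSH/ADD/MULT mnemonics are this example's, not real JVM bytecode):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class MiniStackVM {
    // Tiny stack-machine interpreter: "PUSH n" pushes n; ADD and MULT
    // pop two operands and push the result, just like the example above.
    static int run(String... program) {
        Deque<Integer> stack = new ArrayDeque<>();
        for (String insn : program) {
            if (insn.startsWith("PUSH ")) {
                stack.push(Integer.parseInt(insn.substring(5)));
            } else {
                int b = stack.pop(), a = stack.pop();
                stack.push(insn.equals("ADD") ? a + b : a * b);
            }
        }
        return stack.pop();  // final result is left on top of the stack
    }

    public static void main(String[] args) {
        // PUSH 1; PUSH 2; PUSH 3; ADD; PUSH 4; MULT; ADD
        System.out.println(run("PUSH 1", "PUSH 2", "PUSH 3", "ADD",
                               "PUSH 4", "MULT", "ADD"));
    }
}
```
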
          • geokon 3 days ago
            oh okay, so you're not really running the original byte code. You're cross-compiling it to a different architecture effectively. That makes sense then!
  • tomaytotomato 4 days ago
    > Hard as it may be to imagine, there was a time when Java was brand new and exciting. Long before it became the vast clunky back-end leviathan it is today, it was going to be the ubiquitous graphical platform that would be used on everything from cell phones to supercomputers: write once, run anywhere.

    As someone who started their software career at Java version 8, I wouldn't say the trend in Java has been to become more clunky.

    If we separate frameworks from the core libraries of Java, it's more modular and has better functionality in things like Strings, Maps, Lists, switch statements, resource (file, HTTP) access, etc.

    For frameworks we have Spring Boot, which can be as clunky or as thin as you want for a backend.

    For IC cards, and small embedded systems, I can still do that in the newer versions of Java just with a smaller set of libraries.

    Maybe the author is nostalgic for a time (which I didn't experience - was busy learning how to walk), but Java can do all the things JDK version 1 can, and so much more. No?

    • MisterTea 4 days ago
      > write once, run anywhere.

      Was such a great promise. I remember visiting PCExpo in the late 90's and Sun's booth had a Java demo running on three machines: Linux x86, Windows X86 and Solaris Sparc (OSX wasn't even revealed yet). You could run a few demos you selected from a menu one of which was a 3D ship with accelerated OpenGL which really thrilled me - cross platform everything, even CAD and gaming. Amazing! The future is finally here.

      And it never happened. Bummer. Instead we got a badly hacked-up hypertext viewer with various VMs duct-taped to the sides.

    • seabrookmx4 days ago
      I don't think the comparison is new Java to old Java; I think it's Java vs. its competitors.

      When Java was new, scripting/dynamic languages hadn't matured enough to be true competitors, so you were left with C/C++, Delphi and the like. In that landscape, Java was beyond exciting.

      Nowadays there are so many alternatives that didn't exist then. And it's not debatable that many of those languages (Dart, C#, Typescript, Kotlin) move faster when it comes to language features. Whether you want/need them is subjective, sure. But back in the day Java was that hot, fast moving language.

  • markus_zhang4 days ago
    > Sun’s bootloader environment from that period was called OpenBoot, and consisted of a FORTH interpreter, from which you can interrogate the device tree and pretty much do whatever you want.

    This sounds interesting. I have read quite a few FORTH posts on HN but never gave the thing a look. It is really different from anything I have looked at. For example, for functional languages I never got past Scheme's ' symbol, but at least I get most of the syntax. FORTH really is another level.

  • yjftsjthsd-h4 days ago
    Odd that it uses RARP to get an IP but then uses DHCP for NFS configuration. (Or is it the baked in firmware using RARP and then the modern NetBSD kernel using DHCP? That would make more sense)

    Also:

    > You need to rename the file with a specific format: the IP address of the JavaStation, but in 8 capitalized hex digits, followed a dot, and then the architecture (in this case “SUN4M”). So, in this example the IP address (as defined in rarpd above) is 192.168.128.45, which in hex is C0A8802D.

    This is of course the correct way to do it, but if you're lazy you can just tail the tftpd logs and see what filename it tries to download, rename the file on the server, and reboot again to pick it up. (I did this when netbooting raspberry pis)
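    For the record, the hex filename in the quote can be derived mechanically; a quick sketch using the standard library's ipaddress module:

```python
# Derive the netboot filename from the client IP: the address as 8
# uppercase hex digits, a dot, then the architecture string.
import ipaddress

def netboot_filename(ip, arch="SUN4M"):
    return f"{int(ipaddress.IPv4Address(ip)):08X}.{arch}"

print(netboot_filename("192.168.128.45"))  # → C0A8802D.SUN4M
```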

    • toast04 days ago
      > Or is it the baked in firmware using RARP and then the modern NetBSD kernel using DHCP? That would make more sense

      Yes, the firmware only knows how to use rarp and tftp to fetch a kernel or a better bootloader; the kernel is modern and speaks DHCP. This is a pretty common pattern with netbooting; some will bootp rather than rarp, sometimes you use tftp to fetch something that can do an http fetch, etc. Always lots of fun :D

      • eb0la4 days ago
        A lot of old hardware uses TFTP and RARP to boot. RARP will just get you the IP address, and the rest is hardcoded somehow in the machine - it needs very little memory at boot. For BOOTP you need some intelligence to know where your files are. TFTP is also cheap in memory to use: UDP with no flow control, no nothing. Just send me the next packet in the sequence when I ask you to do so.

        I remember having trouble some years ago upgrading old Cisco routers because the image was bigger than what TFTP can handle.
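        That limit comes from the protocol itself: classic TFTP (RFC 1350) moves 512-byte data blocks with a 16-bit block counter, so unless both ends support block-number rollover or the blksize option (RFC 2348), a transfer tops out just under 32 MiB. A quick back-of-the-envelope check:

```python
# Classic TFTP ceiling: 512-byte blocks, 16-bit block number.
BLOCK_SIZE = 512        # bytes of data per packet (RFC 1350)
MAX_BLOCKS = 0xFFFF     # largest 16-bit block number

max_bytes = BLOCK_SIZE * MAX_BLOCKS
print(max_bytes)        # → 33553920 (just under 32 MiB)
```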

  • torcete4 days ago
    I remember doing this when I was working for Sun Microsystems. We had to install Solaris on a quite large number of Sun computers for a big client, and we did all of them with tftp.
    • bayindirh4 days ago
      Big fleets are still installed with TFTP + HTTP/FTP.
      • torcete4 days ago
        I had no idea. Interesting and cool at the same time!
        • bayindirh4 days ago
          It's very cool. Getting a couple racks of new servers and installing all of them from your desk without any interaction is very enjoyable.

          I also love installing/cabling servers, but not needing to leave your desk to (re)provision hardware is pretty life-changing, considering your desk can be anywhere in the world due to work travel.

    • DonHopkins4 days ago
      Are you the poor Unix system administrator at Sun with the Worst Job in the World, who had to install Solaris on Scott McNealy's and Ed Zander's and other VP's workstations?

      The Worst Job in the World, from Michael Tiemann <tiemann@cygnus.com>:

      https://www.donhopkins.com/home/catalog/unix-haters/slowlari...

      PS: Fuck Trump supporting anti-vaxer Scott "You have zero privacy, get over it" McNealy. May he run Solaris in hell. If you installed it on him, then good for you, he deserved it!

      Scott McNealy has long been one of Trump’s few friends in Silicon Valley:

      https://www.sfchronicle.com/politics/article/Scott-McNealy-h...

      Former Sun Micro CEO Scott McNealy, known for his provocative quotes, says Trump is doing a 'spectacular job' amid the coronavirus crisis. That's not how many tech experts see it:

      https://www.businessinsider.com/scott-mcnealy-praises-trumps...

      Sun on Privacy: "Get Over It":

      https://www.wired.com/1999/01/sun-on-privacy-get-over-it/

  • neilv4 days ago
    > Hard as it may be to imagine, there was a time when Java was brand new and exciting. Long before it became the vast clunky back-end leviathan it is today, it was going to be the ubiquitous graphical platform that would be used on everything from cell phones to supercomputers: write once, run anywhere.

    > Initially I drank the kool-aid and was thrilled about this new “modern” language that was going to take over the world, and drooled at the notion of Java-based computers, containing Java chips that could run java byte-code as their native machine code.

    Exactly. I was lucky to see Java when it was still called Oak, and then I developed some of the first (non-animation) Java applets and small desktop applications outside of Sun/JavaSoft. It was very exciting (speaking as a programmer in C, C++, Smalltalk, a little Self, a little Lisp, and other languages at the time). The language itself wasn't as cool as Lisp or Smalltalk, but it was a nice halfway compromise from C++, with some of its own less exotic but nice features and ergonomics. It was already in the browsers, had next-gen embedded systems for the Internet at the forefront from the beginning, there was a proof-of-concept of a better kind of Web browser using it, Sun was even putting it in rings for ubiquitous computing, there were thin clients that could get interesting (combined with Sun's "The Network Is The Computer", even if historically techies didn't like underpowered diskless workstations, except to give to non-techies), etc., and it only promised to get better...

    Then I turned my back for a sec., and the next time I looked, Java had been kicked out of the browser, and most all of the energy (except for the Android gambit) seemed to be focused on pitching Java for corporate internal software development. And suddenly no one else seemed to want to touch it, even if there wasn't much better. (Python, for example, from the same era, was one person's simplified end user extension language; and not intended for application development.)

    Yet another case of technology adoption not going how you'd initially think it would.

    • markus_zhang4 days ago
      I'm curious about what happened. IIRC, as you also mentioned in your reply, Java was supposed to run on embedded devices. It was supposed to be lean and fast. But I can't imagine modern Java doing that...
      • ciberado4 days ago
        Version 1.0 ran quite smoothly once we upgraded the machines from 4MB to 8MB of RAM (seriously!). But, of course, at the time 8MB was much more memory than the early smartish phones carried, so their Java version was heavily stripped down and almost good for nothing.

        On the PC front, James Gosling and company were an amazing team, but they pushed for very academic and cumbersome patterns that converged in the EJB architecture. Nobody in their right mind would fall in love with that.

        Two or three years later, the internet bubble fallout affected every technology.

        • bzzzt4 days ago
          EJB was not invented by Gosling but adopted from IBM. It combined over-engineered concepts from the mainframe world with objects and too much XML configuration.

          Nowadays we've got Kubernetes with YAML for that.

          • DonHopkins4 days ago
            Java is a domain specific language for converting XML into stack dumps.

            https://news.ycombinator.com/item?id=39252103

            >The background is that Terry Winograd, a professor of Human-Computer Interaction at Stanford University in Silicon Valley, had invited me to lecture on some of my work in 1998. After my talk, Terry invited me to tour his lab and meet some of his graduate students. One of the Ph.D. students was a bright young fellow named Larry Page, who showed me his project to enhance the relevance of web search results.

            Many of those lectures are online. I was not able to find the 1998 one he mentioned, but here is one that Jakob Nielsen gave on May 20, 1994 called "Heuristic Evaluation of User Interfaces, Jakob Nielsen, Sunsoft".

            https://searchworks.stanford.edu/view/vj346zm2128

            He gave another one on October 4, 1996 entitled "Ensuring the Usability of the Next Computing Paradigm". I can't find it in the online collection, although it exists in the inventory of video recordings; nor can I find any 1998 talks by Jakob Nielsen in this list:

            https://oac.cdlib.org/findaid/ark:/13030/c82b926h/entire_tex...

            Here is the entire online collection (it's kind of hard to search the 25 pages of the extensive collection, thanks to bad web site design!):

            https://searchworks.stanford.edu/catalog?f%5Bcollection%5D%5...

            The oldest (most interesting to me) ones are at the end (page 25):

            https://searchworks.stanford.edu/?f%5Bcollection%5D%5B%5D=a1...

            Here are some of the older ones that I think are historically important and especially interesting (but there are so many I haven't watched them all, so there are certainly more that are worth watching):

            [...]

            Bringing Behavior to the Internet, James Gosling, SUN Microsystems [December 1, 1995]:

            https://searchworks.stanford.edu/view/bz890ng3047

            I also uploaded this historically interesting video to youtube to generate closed captions and make it more accessible and findable, and I was planning on proofreading them like I did for this Will Wright talk, but haven't gotten around to it yet (any volunteers? ;):

            https://www.youtube.com/watch?v=dgrNeyuwA8k

            This is an early talk by James Gosling on Java, which I attended and appeared on camera asking a couple questions about security (44:53, 1:00:35), and I also spotted Ken Kahn asking a question (50:20). Can anyone identify other people in the audience?

            My questions about the “optical illusion attack” and security at 44:53 got kind of awkward, and his defensive "shrug" answer hasn't aged too well! ;)

            No hard feelings of course, since we’d known each other for years before (working on Emacs and NeWS) and we’re still friends, but I’d recently been working on Kaleida ScriptX, which lost out to Java in part because Java was touted as being so “secure”, and I didn’t appreciate how Sun was promoting Java by throwing the word “secure” around without defining what it really meant or what its limitations were (expecting people to read more into it than it really meant, on purpose, to hype up Java).

      • bzzzt4 days ago
        It's not Java, it's the programmer. There are lots of non-hacker types churning out inefficient code using inefficient abstractions. There are also people using Java for high-frequency trading applications with realtime performance needs.
        • hiAndrewQuinn4 days ago
          It's true! If memory serves from the Jane Street podcast, the literal NYSE ran for years on a single-threaded Java application. I still struggle to wrap my head around the wizardry that kind of thing must have required.
          • bitwize4 days ago
            There are Navy ships using Java applications to process radar data in real time. Seriously.
            • wiseowise4 days ago
              Anywhere I can read about that? I’m curious about all miltech things.
          • markus_zhang4 days ago
            Actually, a few years ago I learned that a significant percentage of option trading was routed through a VBA+Access+Excel product developed by a Montreal company (I forget the name), and the earliest code dated from 1998.
      • Foobar85684 days ago
        You had different JVMs; some could run on a smart card: https://en.m.wikipedia.org/wiki/Java_Card

        There were also Java realtime JVMs with different latency promises.

        Basically you had different versions of the JVM, each optimized for its use case. I guess when Sun was bought by Oracle, everything died.

        • pjmlp4 days ago
          There are also Java realtime JVMs with different latency promises:

          https://www.ptc.com/en/products/developer-tools/perc

          https://www.aicas.com/wp/products-services/jamaicavm/

          Additionally there are plenty of others around,

          And then there are flavours, like microEJ, Android, what Ricoh, Xerox ship on their copiers, BlueRay,....

          Folks' hate for Oracle makes them forget that the Java push into the industry was not Sun alone, but rather the Sun, Oracle, IBM trio.

          Oracle has since early Java days embraced the technology, and even had their own flavour of JavaStation, called Network Computer.

          Oracle has been a better Java steward than the alternative, which was Java dying at version 6, losing Maxine (whose ideas live on in GraalVM),...

          No one else jumped to acquire Sun, and Google missed their opportunity to own Java, after torpedoing Sun.

          Nowadays Google has their own .NET.

        • layer84 days ago
          Java Card is still very much a thing.
      • toast04 days ago
        I mean, it runs on phones and Blu-ray players. Of course, our phones now need 4 GB of RAM so they don't have to swap out their launchers...
    • pjmlp4 days ago
      Where I stand, outside HN circles, I see Java all over the place, including embedded.

      And frankly Electron is a much worse experience than Swing apps, but there is nothing like helping Chrome taking over Web and desktop as the platform to rule them all. /s

      • sgt4 days ago
        Was not a big fan of Swing, but sure, most things beat Electron, because Electron doesn't quite feel "right". There is definitely some desktop experience being lost, aside from it being a memory and CPU hog.
        • jen204 days ago
          The only reasonable “feeling” Java app I’m aware of on the desktop is IntelliJ (and derivatives) - but AFAIK JetBrains have had to fork almost every part of the ecosystem to make that a reality.
          • pjmlp4 days ago
            Usually what happens is that most folks write Java Swing, or used to, with programmer art, instead of reading material like Filthy Rich Clients and having a design team.

            Which is kind of what JetBrains did, as do other companies that know their stuff, e.g. https://www.bitwig.com

  • dehrmann4 days ago
    > Java chips that could run java byte-code as their native machine code.

    Hah! Even ISAs are somewhat detached from truly native machine code, these days.