The image meant you basically got whatever state the developer ended up with, frozen in time, with no real indication of how they got there.
Think of today's systems and open source, with so many libraries easily downloadable and able to be incorporated into your system in a very reproducible way. Smalltalk folks derided this as a low-tech, lowest-common-denominator approach. But in fact it gave us reusable components from disparate vendors and sources.
The image concept was a huge strength of Smalltalk but, in the end, in my opinion also one of the major things that held it back.
Java in particular surged right past Smalltalk despite many shortcomings compared to it, partly because of this. The other part, of course, was being free at many levels. Beyond the image, the other half of Smalltalk's problems was the cost of both developer licenses ($$$$!) and runtime licenses (ugh!).
That wasn't a function of the image system. That was a product of your version control/CI/CD systems and your familiarity with them.
Consider that Docker and other container based systems also deploy images. No reason Smalltalk has to be any different.
I did software development work in Smalltalk in the 90's. We used version control (at one point we used PVCS, which was horrible, but Envy was pretty sweet), and we had a build process and build servers that would build deploy images from vanilla images. Even without all that, the Smalltalk system kept a full change log of every single operation it performed, in order. In theory someone could wipe their changelog, but that's the moral equivalent of deleting the source code for your binary. Image-based systems are no reason to abandon good engineering practices.
Consider also that Docker was the only one to really get popular, perhaps because it promoted the idea of using a text-based "Dockerfile" as your source of truth and treating the images as transitory built artifacts (however false this was in practice).
I'd say the cloud popularised it outside of Linux and Unix sysadmin circles, rather than the Dockerfile format itself.
Solaris and FreeBSD had significantly better implementations of the containerisation/isolation piece from a technical standpoint. But they never caught on. I really think the Dockerfile made the difference.
The problem I see with an image-based ecosystem is that you are inevitably pushed towards using tools that live within that image. Granted, those tools can be very powerful because they leverage and interact with the image itself. But the community contributing to that ecosystem is far smaller than the communities contributing to filesystem-based tools.
The result is that people considering coming into the system have to start by abandoning their familiar toolchain. And for all the technical advantages of the new toolchain, the much smaller contributor base creates a worse-is-better situation. While the file-based system has fundamental technical limitations, the size of its ecosystem results in faster overall development, and eventually a superior system.
Another point is that you need to export your tools out of your own image so others can import them into their images. This impedance mismatch between image and filesystem was annoying.
It is no accident that Eclipse to this day still has a code-navigation perspective based on Smalltalk, an incremental compiler that gives an experience similar to Smalltalk's, and a virtual filesystem for its workspaces that mimics the behaviour of Smalltalk images.
I worked with a similar language, Actor (Smalltalk with an Algol-like syntax), and the usual way to deal with distribution was to “pack” (IIRC) the image by pointing to the class that your app is an instance of, and the tool would remove every other object that is not a requirement of your app. With that you got an image that started directly into your app, without any trace of the development environment.
But saving the image has some drawbacks. Mutability always requires special care.
You don't do that with Smalltalk, either, at least not for the last 30 years or so. Smalltalk has worked with version control systems for decades to maintain the code outside the image and collaborate with others without needing to share images.
A docker container is composed typically of underlying components. You can cowboy it for sure, but the intent is to have a composable system.
The Smalltalk image resulted from the developer just banging on the system.
Not to sound harsh or gatekeep, but folks who keep repeating the canard that "The Smalltalk image resulted from the developer just banging on the system", mostly never used smalltalk in the first place.
Give the original smalltalk devs some credit for knowing how to track code development over time.
I don't know much about the systems used in commercial smalltalks of the 90s, but I'm sure they weren't "meh" either (others more knowledgeable than me about them can chime in).
Image-centric development is seductive (I'm guilty). But the main issue isn't "we don't know what code got put where, and by whom". There were sophisticated tools available for that almost from the get-go.
It's more a problem of dependencies not being pruned, because someone, somewhere wants to use them. So lots of stuff remained in the "blessed" image (I'm only referring to Squeak here) which really ought not to have been in the standard distribution. And because it was there, some other unrelated project further down the line used a class here, a class there.
So when you later realise it needed to be pruned, it wasn't that easy.
But nevertheless, it was still done. Witness cuis.
In other words, it was a cultural problem, not a tooling problem. It's not that squeak had too few ways of persisting & distributing code - it had too many.
IMHO, the main problem was never the image, or lack of tools. It was lack of modularisation. All classes existed in the same global namespace. A clean implementation of modules early on would have been nice.
You probably already know about this, but in case you didn't, there is 1 project which adds modules to cuis Smalltalk:
> The image concept, in my opinion, is what really limited Smalltalk's appeal and distribution.
I'd say these statements are both true. The image concept is very impressive and can be very useful, it certainly achieved a lot of bang for very little buck.
And it also was/is one of the major impediments for Smalltalk, at least after the mid 1980s.
The impressive bit is shown by pretty much the entire industry slowly and painfully recreating the Smalltalk image, just usually worse.
For example, on macOS a lot of applications nowadays auto-save their state and will completely return to the state they were last in. So much so that nowadays, if you have a lot of TextEdit windows open and want to make sure everything is safe, you kill the program, you don't quit it.
Also, all/most of the shared libraries and frameworks that come with the system are not loaded individually, instead they are combined into one huge image file that is mapped into your process. At some point they stopped shipping the individual framework and shared library binaries.
User interfaces have also trended in the direction of an application that contains its own little world, rather than editing files that exist within the wider Unix filesystem.
The image accomplished all that and more, and did so very efficiently, both in execution speed and in the amount of mechanism required: keep a contiguous piece of memory, write it to disk, and make a note of the start pointer. On load, map or read it into memory, fix up the pointers if you didn't manage to load at the same address, and you're ready to go. On G4/G5 era Macs, the latter would take maybe a second or two, whereas Pages, for example, took forever to load if things weren't already cached, despite having much less total data to load.
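Roughly, the whole mechanism fits in a page of C. This is only a sketch with made-up names (a real VM knows exactly which words are pointers instead of guessing by address range, and checks its I/O), but it shows why it was so cheap:

    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>

    typedef struct { void *heap; size_t size; } Image;

    static void save_image(const Image *img, const char *path) {
        FILE *f = fopen(path, "wb");
        uintptr_t base = (uintptr_t)img->heap;
        fwrite(&base, sizeof base, 1, f);        /* note the start pointer */
        fwrite(&img->size, sizeof img->size, 1, f);
        fwrite(img->heap, 1, img->size, f);      /* dump the contiguous memory */
        fclose(f);
    }

    static Image load_image(const char *path) {
        FILE *f = fopen(path, "rb");
        uintptr_t old_base;
        Image img;
        fread(&old_base, sizeof old_base, 1, f);
        fread(&img.size, sizeof img.size, 1, f);
        img.heap = malloc(img.size);             /* read it back into memory */
        fread(img.heap, 1, img.size, f);
        fclose(f);

        /* fix up: every word that pointed into the old block now points
           into the new one */
        uintptr_t delta = (uintptr_t)img.heap - old_base;
        uintptr_t *p = (uintptr_t *)img.heap;
        for (size_t i = 0; i < img.size / sizeof *p; i++) {
            if (p[i] >= old_base && p[i] < old_base + img.size)
                p[i] += delta;
        }
        return img;                              /* ready to go */
    }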
But the drawbacks are also huge. You're really in your little world and going outside of it is painful. On an Alto in the mid to late 1970s I imagine that wasn't much of an issue, because there wasn't really much outside world to connect to, computer-wise, and where would you fit it on a 128KB machine (including the bitmap display)? But nowadays the disadvantages far outweigh the advantages.
With Objective-S, I am building on top of Cocoa's Bundle concept, so special directories that can contain executable code, data or both. Being directories, bundles can nest. You can treat a bundle as data that your program (possibly the IDE) can edit. But you can also plonk the same bundle in the Resources folder of an application to have it become part of that application. In fact, the IDE contains an operation to just turn the current bundle into an application, by copying a generic wrapper application from its own resources and then placing the current bundle into that freshly created/copied app.
Being directories, data resources in bundles can remain standard files, etc.
With Objective-S being either interpreted or compiled, a bundle with executable code can just contain the source code, which the interpreter will load and execute. Compiling the code inside a bundle to binaries is just an optimization step, the artifact is still a bundle. Removing source code of a bundle that has an executable binary is just an obfuscation/minimization step, the bundle is still the bundle.
"When you use a browser to access a method, the system has to retrieve the source code for that method. Initially all the source code is found in the file we refer to as the sources file. … As you are evaluating expressions or making changes to class descriptions, your actions are logged onto an external file that we refer to as the changes file. If you change a method, the new source code is stored on the changes file, not back into the sources file. Thus the sources file is treated as shared and immutable; a private changes file must exist for each user."
1984 "Smalltalk-80 The Interactive Programming Environment" page 458
They wanted to get away from syntax and files, which are like an inert recipe you have to rerun every time, so I think if you do away with the image you do away with the core aspect of it.
Computing in general just didn't go the direction they wanted it to go; in many ways I think it was too ambitious an idea for the time. Personally, I've always hoped it comes back.
The thing is that the "scripting" approach is just so much easier to distribute. Just look at how popular Python got. Smalltalk didn't understand that. Its syntax is worse than Python's IMO (and Ruby's too, of course).
Imposing a very different metaphor from the ground up limited adoption and integration with other tools and environments.
A lot of great ideas are tried and tried and tried and eventually succeed, and what causes them to succeed is that someone finally creates an implementation that addresses the pragmatic and usability issues. Someone finally gets the details right.
Rust is a good example. We've had "safe" systems languages for a long time, but Rust was one of the first to address developer ergonomics well enough to catch on.
Another great example is HTTP and HTML. Hypertext systems existed before it, but none of them were flexible, deployable, open, interoperable, and simple enough to catch on.
IMHO we've never had a pure functional language that has taken off not because it's a terrible idea but because nobody's executed it well enough re: ergonomics and pragmatic concerns.
Like with LLMs, it seems impossible to separate the "reasoning" from the data it has stored to learn that reasoning.
I was thinking that supporting a Smalltalk application must be a nightmare because it is so malleable. Users can inspect and modify the entire system, no?
The transition from "websites" to "web apps" was well underway by the time the dev tools became a built-in browser feature - Chrome was notable for being the first browser to release with the console, inspectors, etc out of the box, but that came later. The developer experience was quite a bit rougher in the early days, and then better but still not native in the days of plugins like Firebug.
The web becoming the premier app distribution platform was, firstly, because the web was the lowest-common-denominator distribution channel. JavaScript was just the tool that was available where everyone wanted to run.
That should make the Smalltalk family popular with free-software proponents, which makes me curious why that hasn't been the case historically. The FSF's efforts on Smalltalk pale in comparison with those on C, Lisp and other languages.
It was thanks to GCC that most folks actually got a free C compiler after those events, coupled with Sun starting the trend among UNIX vendors that developer tools would be extra license, no longer available on a regular UNIX installation.
End users? Yes, if you (want them to) let them; no, if you (don't want them to) stop them.
Say you made the foreground text color the same as the background text color, so you could no longer see the source code. You can no longer do anything. You can no longer save those changes. And then what?
Better, say you did that in a script file which additionally saved the image, so that image was now unusable. And then what?
Smalltalk developers preferred to do their scripting within the Smalltalk IDE, so they could use their familiar tools.
And then save their "scripting" as a text file ("fact.st").
$ cat fact.st
Stdio stdout
nextPutAll: 100 factorial printString;
nextPut: Character lf.!
SmalltalkImage current snapshot: false andQuit: true!
And then "run" that text file ("fact.st") from the commandline. $ bin/pharo --headless Pharo10-SNAPSHOT-64bit-502addc.image fact.st
93326215443944152681699238856266700490715968264381621468592963895217599993229915608941463976156518286253697920827223758251185210916864000000000000000000000000

One thing to keep in mind is that Smalltalks all have the same ability to save & load code to & from disk, just as any other programming environment. But, they also have the option of just using the image to persist, and iterate on that.
Squeak overdid that aspect of it, such that over time it became hard to prune older side projects and it just became increasingly bloated. Both Pharo & Cuis forked from Squeak at about the same time.
Pharo images are fully bootstrapped from a seed.
Cuis is not quite there yet, but Cuis from its inception went on a ruthless simplification drive (the number of classes in the system was reduced roughly five-fold!), so that its base is effectively a "seed", and the rest of a Cuis image is built up by importing projects (from disk & git) on demand.
But yeah, curating a set of images over time is remarkably enticing & friction free. Even in cuis, I find I have to force myself to keep flushing changes to my own packages.
It's not that the tools to use files are limited. In Cuis, they're not. You can work on multiple different things WITHIN THE SAME IMAGE (changes to some builtins, a couple of your own projects, etc.), and the system will keep track of what belongs where. So a couple of mouse clicks will file out the relevant code to the relevant changesets & packages.
And yet - just banging on the same image is just ... fun, easy, enticing.
Small correction: they actually cloned/converted the Apple Smalltalk image, so those bits remained. The VM was created from scratch by writing it in Slang, a Smalltalk dialect that was essentially equivalent to BCPL and could be translated to C.
They did drop the object memory part completely and designed a new one from scratch.
Previously people had manually translated the VM from Slang to Pascal or C (I did so myself in 1986) but for this project they wrote a tool for that (in Smalltalk, of course).
Here is another copy of the "Back to the Future" paper:
"Produce a new image:
- Design a new Object Memory and image file format.
- Alter the ST-80 System Tracer to write an image in the new format.
- Eliminate uses of Mac Toolbox calls to restore Smalltalk- portability.
- Write a new file system with a simple, portable interface."
https://dl.acm.org/doi/10.1145/263698.263754

Hmm... I wonder if Dan used the PDF writer I wrote for him to produce that version of the paper...
Around 1990, I was a graduate student in Prof. Red Whittaker's field robotics group at Carnegie Mellon. In Porter Hall, I was fortunate to have a Sun 3/60 workstation on my desk. It had a Smalltalk-80. I learned to program it using Goldberg & Robson and other books from ParcPlace Systems.
The programming environment was fantastic, better than anything I have seen before or since. You always ran it full screen, and it loaded up the Smalltalk image from disk. As the article says, you were in the actual live image. Editing, running, inspecting the run-time objects, or debugging: all these tasks were done in the exact same environment. When you came into the office in the morning, the entire environment booted up immediately to where you had left it the previous day.
The image had objects representing everything, including your screen, keyboard, and mouse. Your code could respond to inputs and control every pixel on the screen. I did all my Computer Graphics assignments in Smalltalk. And of course, I wrote fast video games.
I used the system to develop programs for my Ph.D thesis, which involved geometric task planning for robots. One of the programs ran and displayed a simulation of a robot moving in a workspace with obstacles and other things. I had to capture many successive screenshots for my papers and my thesis.
Everybody at CMU then wrote their papers and theses in Scribe, the document generation system written by Brian Reid a decade earlier. Scribe was a program that took your markup in a plain text file (sort of at a LaTeX level: @document, @section, etc.) and generated Postscript for the printer.
I never had enough disk space to store so many full screen-size raster images. So, of course, instead of taking screenshots, I modified my program to emit Postscript code, and inserted it into my thesis. I had to hack the pictures into the Postscript generation process somehow. The resulting pictures were vector graphics using Postscript commands. They looked nice because they were much higher resolution than a screenshot could have been.
This is my favorite video by Newspeak's creator Gilad Bracha: https://youtu.be/BDwlEJGP3Mk?si=Z0ud1yRqIjVvT4oO
* No global import/export namespace (all imports are dependency injected, meaning capability based security is already baked in)
* Nested classes instead of modules
* No variable assignment (everything is a method call)
* Mixins instead of inheritance
* Synchronization and code updates as near primitives
* Support for foreign objects through aliens and proxies, and foreign code can call newspeak objects through expats
* A native serialization to file format
* Support for multiple overlapping type systems
Were the users running, say, Windows and then the Smalltalk "OS" would be running on top of that, but in a sort of "kiosk mode" where its full OS-ness was suppressed and it was dedicated to showing a single interface?
It's funny because in the past I got the chance to test izware Mirai, which is written in Lisp — when the app got into a problematic state (which was often on my machine) you were sent to the REPL where you could inspect the memory and so on. It was alien to me at the time. Today I dream of having that.
I was surprised a couple years back they still maintain Mantis, a 4GL I used on a mainframe (it was kind of Rails for the 3270 terminal). Even the documentation is hideously expensive. I asked if they had a “hobby license” I could use to run under Hercules. They seemed genuinely perplexed that someone would imagine they would allow me to use their software without sacrificing my firstborn.
When, as a dev, I use Smalltalk, it opens up what's effectively a virtual machine on my desktop. The whole Smalltalk GUI runs inside its own frame, none of the controls are native, etc. And it's a development environment - I have access to a class browser, a debugger, a REPL, and so on. I can drill down and read/modify the source code of everything. Which is great as a dev, but may be intimidating for an end user.
Is that what the end user experience is like as well? I think that's what OP is asking. I've never used a Smalltalk application as an end user to my knowledge, so I can't say myself.
The application packager removes everything that is related to Smalltalk as developer environment, and possibly other classes that are also not used by the application, so you get a slimmed down image.
Then you have the VM boot code, as native executable, that is responsible for starting the image execution.
Thanks to the way executable files work in most platforms, the packing tool merges that boot loader and the slimmed down image into a single executable.
When the executable starts, the loader locates the image inside the executable, loads it, and transfers execution to the runtime.
Java and .NET also have similar techniques available, see jlink, or Single-file deployment respectively.
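A rough sketch in C of that merging trick, just to make the idea concrete (the footer layout, magic number and names are invented for illustration, not any particular Smalltalk's or .NET's actual format): the build step appends the image bytes to the end of the loader binary plus a small footer, and at startup the loader reads its own file from the end to find the image.

    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>

    #define MAGIC 0x494D4731u   /* "IMG1" */

    typedef struct { uint32_t magic; uint64_t image_size; } Footer;

    /* Locate and read the embedded image from the running executable;
       exe_path would typically come from argv[0] or /proc/self/exe. */
    static unsigned char *read_embedded_image(const char *exe_path,
                                              uint64_t *size) {
        FILE *f = fopen(exe_path, "rb");
        Footer ft;
        fseek(f, -(long)sizeof ft, SEEK_END);    /* footer sits at the very end */
        fread(&ft, sizeof ft, 1, f);
        if (ft.magic != MAGIC) { fclose(f); return NULL; } /* nothing appended */

        fseek(f, -(long)(sizeof ft + ft.image_size), SEEK_END);
        unsigned char *image = malloc(ft.image_size);
        fread(image, 1, ft.image_size, f);
        fclose(f);
        *size = ft.image_size;
        return image;            /* hand these bytes to the VM's image loader */
    }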
Depends which Smalltalk implementation.
Digitalk and Dolphin and IBM Smalltalk … wrapped native widgets.
Calling it "tree shaking" is web development term AFAIK.
I think that's backwards. Lars Bak and the other V8 folks came from the Smalltalk world and brought the "tree shaking" term with them as far as I know.
In any case, before the JavaScript usage, it seems that tree shaking applied to objects to be included in a runtime image. The JavaScript usage is actually more akin to the dead-code elimination and link-time symbol removal of compiled and linked languages.
It means going through the image and removing most code that isn't directly needed by the application, or that only exists to support developer workflows.
Usually it needs a bit of help for fine-tuning regarding what code to keep and what to delete.
You also find this on Java (jlink, ProGuard, D8/R8 on Android), and .NET (trimming, .NET Native manifests).
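At its core it is just reachability from the roots the application actually needs. A toy sketch in C (the data structures are made up purely for illustration): mark everything reachable from those roots, write out only what got marked, and the fine-tuning hints are simply extra roots marked by hand.

    #include <stdbool.h>
    #include <stddef.h>

    #define MAX_REFS 16

    typedef struct Node {            /* a class or compiled method in the image */
        struct Node *refs[MAX_REFS]; /* the things it references */
        size_t nrefs;
        bool marked;
    } Node;

    static void mark(Node *n) {
        if (n == NULL || n->marked) return;
        n->marked = true;
        for (size_t i = 0; i < n->nrefs; i++)
            mark(n->refs[i]);
    }
    /* Call mark() on each root the application (or the "keep" list) requires;
       the packager then drops every node left unmarked -- browsers, compiler
       tools, unused frameworks, and so on. */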
But destiny made CPUs win and now we're using AI to chew their accidental complexity for us.
The thing (for me) about Smalltalk was the thought-to-code ratio. It was awesome. It had a pretty good balance of less is more. Where working in Swift and Kotlin feels like trying to navigate the many nuances of American football or cricket, Smalltalk was like playing soccer/football-sans-america. The syntax is simple. And the computation model is straightforward and simple.
Elixir is kind of like that, computationally, a few simple concepts and everything builds on that. The saddest part about Elixir is that it ran with the whole do/end syntax. Drives me nuts. But I love that computationally, though different than Smalltalk, it’s like Smalltalk in that it’s a simple consistent model.
I'm making my own text editor in Ruby now, as I'm wishing for a more Smalltalk-like experience with it. There's just so much missing. Ruby has the reflective capability to enjoy a Smalltalk-like IDE, but Rails took over and drove Ruby in that direction long before anyone could cook one up.
Which is a shame, IDEs that aren't Smalltalk / Lisp haven't graduated past the need for static analysis despite having 50 years to do so. Now it's the red-headed stepchild of languages due to no fault of its own.
I've experienced this a few different times: with Microsoft BASIC-80 (and GW-BASIC), with SBCL and SLIME, with LOGO, with GForth, with OpenFirmware, with MS-DOS DEBUG.COM, with Jupyter, and of course with Squeak. It really is nice.
It used to be the normal way of using computers; before memory protection, it was sort of the only way of using computers. There wasn't another memory space for the monitor to run in, and the monitor was what you used to do things like load programs and debug them. This approach continued as the default into many early timesharing systems like RT-11 and TENEX: there might be one virtual machine (memory space) per user, but the virtual machine you typed system commands into was the same one that ran your application. TENEX offered the alternative of running DDT (the debugger) in a different memory space so bugs in the application couldn't corrupt it, and that was the approach taken in ITS as well, where DDT was your normal shell user interface instead of an enhanced one.
All this seems very weird from the Unix/VMS/Win32 perspective where obviously the shell is a different process from your text editor, and it's designed for launching black-box programs rather than inspecting their internal memory state, but evolutionarily it was sort of the natural progression from a computer operator single-stepping a computer (with no memory protection) through their program with a toggle switch as they attempted to figure out why it wasn't working.
One of the nicest things about this way of working is halt-and-continue. Current versions of Microsoft Visual Studio sometimes offer halt and continue. In MBASIC you could always halt and continue. ^C halted the program, at which point you could examine variables, make arbitrary changes to the program, GOTO a line number, or just CONT to continue where you'd interrupted it. Smalltalk, SLIME, or ITS allows you to program in this way; if you like, you can refrain from defining each method (or function or subroutine) until the program tries to execute it, at which point it halts in the debugger, and you can write the code for the method and continue.
This is an extremely machine-efficient approach; you never waste cycles on restarting the program from the beginning unless you're going to debug program initialization. And in Smalltalk there isn't really a beginning at all, or rather, the beginning was something like 50 years ago.
Myself, though, I feel that the hard part of programming is debugging, which requires the experimental method. And the hard part of the experimental method is reproducibility. So I'm much more enthusiastic about making my program's execution reproducible so that I can debug faster, which conflicts with "you're in the running environment". (As Rappin says, "Code could depend on the state of the image in ways that were hard to replicate in deploys." I experience this today in Jupyter. It's annoying to spend a bunch of time trying to track down a bug that doesn't exist when you restart from scratch; worse is when the program works fine until you restart it from scratch.) So I'm much more excited about things like Hypothesis (https://news.ycombinator.com/item?id=45818562) than I am about edit-and-continue.
Paul Graham wrote somewhere (I can't find it now) about how in Viaweb's early days he would often fix a bug while still on the phone with the customer who was experiencing it, because he could just tweak the running CLisp process. But you can do the same thing in PHP or with CGI without sacrificing much reproducibility—your system's durable data lives in MariaDB or SQLite, which is much more inspectable and snapshottable than a soup of Smalltalk objects pointing to each other. (#CoddWasRight!) Especially since the broad adoption of the Rails model of building your database schema out of a sequence of "migrations".
PHP is similar, but not the same. You can't (or at least I can't) stop a request in progress and change its code; but you can rapidly change the code for the next request. Make a change in the editor, hit reload in the browser is a productive short loop, but stop at a breakpoint, inspect the state and change the code is a more powerful loop. Stopping at a breakpoint is challenging in systems with communication though, and I've learned to live without it for the most part.
Database transactions bridge some of the gap between "change the code for the next request" and "stop at a breakpoint and change the code": as long as your request handler code keeps failing, it will abort the transaction, so the database is unchanged, so you can restart the transaction as many times as you want to get to the same point in execution, at least if your programming language is deterministic. By providing a snapshot you can deterministically replay from, it allows you to add log entries before the point where the problem occurred, which can be very useful.
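A minimal sketch of that pattern in C with SQLite as the durable store (handle_request and the schema are hypothetical; the SQLite calls are real): the handler runs inside a transaction, and on failure the rollback leaves the database exactly where it was, so the same request can be replayed deterministically after the code is edited.

    /* build with: cc replay.c -lsqlite3 */
    #include <sqlite3.h>

    /* hypothetical request handler: 0 on success, nonzero on failure */
    static int handle_request(sqlite3 *db) {
        const char *sql = "INSERT INTO log(entry) VALUES('handled request');";
        return sqlite3_exec(db, sql, NULL, NULL, NULL) == SQLITE_OK ? 0 : 1;
    }

    int main(void) {
        sqlite3 *db;
        sqlite3_open("app.db", &db);
        sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS log(entry TEXT);",
                     NULL, NULL, NULL);

        sqlite3_exec(db, "BEGIN;", NULL, NULL, NULL);
        if (handle_request(db) == 0) {
            sqlite3_exec(db, "COMMIT;", NULL, NULL, NULL);
        } else {
            /* nothing was committed: the starting state is intact for the
               next (possibly edited) attempt */
            sqlite3_exec(db, "ROLLBACK;", NULL, NULL, NULL);
        }
        sqlite3_close(db);
        return 0;
    }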
Stopping at a breakpoint can be more productive, especially with edit-and-continue, but often it isn't. A breakpoint is a voltmeter, which you can use to see one value at every node in your circuit; logs are a digital storage oscilloscope with a spectrum analyzer, where you can analyze the history of millions or billions of values at a single node in your circuit.
Also much happier. C++ back then is not the C++ you use today.
Elixir kind of got close too (a prettier Erlang): fault-tolerant mini-CPUs ("objects", similar to biological cells). The problem is that even people with grand ideas, such as Alan Kay, are not automatically great language designers. Matz is a better language designer than Alan Kay, for instance, whereas Alan Kay has the better ideas. It's a trade-off.
Note: I myself would not know what a language that followed Alan Kay's vision more closely should look like. It is definitely not Smalltalk; it is also not Ruby, though Ruby is very good. I know I would suck as a language designer, but as an improvement I would have a language similar to Ruby (no, not Crystal - Crystal is a worse "Ruby"), with a stronger focus on speed (perhaps with a second, similar language that can be as fast as C) and with a much stronger focus on what Erlang brought to the table; that would get us, say, around 85% or 90% of the way there. And then add the rest that could fulfil Alan Kay's vision. Of course we need a specification too, because just saying "90% of his vision" is otherwise pointless. I would perhaps start from Erlang, retain the good bits, and make it a modern variant of OOP (OOP is also defined differently in different programming languages, so we need to define that term too, in the specification).
Just restore unsaved changes when you launch the same image again. That's robust, not fragile.
(I’m not sure that’s exactly what happened, probably the system crashed before garbage collection could happen. But it was definitely a guaranteed insta-crash).
What about this makes you think it's a rant? Is the author making an impassioned plea for people to use Smalltalk? Is he going off on a tirade about something?