That's not completely true. Webservers have Server Side Includes (SSI) [0]. Also if you don't want to rely on that, 'cat header body > file' isn't really that hard.
[0] https://web.archive.org/web/19970303194503/http://hoohoo.ncs...
I have no idea when Apache first supported SSI, but personally I never knew it existed until years after PHP became popular.
I would guess, assuming that `Options +Includes` cannot be enabled by unprivileged users, that being a disabled-by-default feature made it inaccessible to the majority of us.
"src/modules/standard/mod_include.c" says:
/*
* http_include.c: Handles the server-parsed HTML documents
*
* Original by Rob McCool; substantial fixups by David Robinson;
* incorporated into the Apache module framework by rst.
*
*/
Rob McCool is the author of NCSA HTTPd, so it seems there is direct lineage between the two server implementations wrt. this feature.

My shared hosting from Claranet supported SSI via a .htaccess configuration.
Technically PHP was around at that point, but I don't think it became popular until PHP 3 - certainly my hosting provider didn't support it until then.
> You either copied and pasted your header into every single HTML file (and god help you if you needed to change it), or you used <iframe> to embed shared elements. Neither option was great.
[1]: https://sgmljs.sgml.net/docs/producing-html-tutorial/produci...
They cannot be arbitrary components though.
The first time I saw this I thought it was one of the most elegant solutions I'd ever seen working in technology. Safe to deploy the files, atomic switch over per machine, and trivial to rollback.
It may have been manual, but I'd worked with deployment processes that involved manually copying files to dozens of boxes and following a 10-to-20-step sequence of manual commands on each box. Even when I first got to use automated deployment tooling at the company I worked for, it was fragile, opaque and a configuration nightmare, built primarily for OS installation of new servers and only awkwardly pressed into deploying applications.
And yes, it's truly elegant.
Rollbacks become trivial should you need it.
It's pretty easy to automate a system that pushes directories and changes symlinks. I've used and built automation around the basic pattern.
I put that in a little bash script, so... I don't know if you'd call anything that isn't CI "manual", but I don't think it'd be hard to work into some pipeline either.
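For reference, a minimal sketch of the atomic-switch step that pattern relies on, written in C (the directory and link names are made up; the same symlink-then-rename trick works just as well from a shell script):

    #include <stdio.h>
    #include <unistd.h>

    /* Repoint the "current" symlink at a freshly uploaded release directory.
     * rename() replaces the old link in one atomic step, so readers see
     * either the old release or the new one, never a half-switched state. */
    int swap_current (const char * release_dir)
    {
        (void) unlink ("current.tmp");              /* leftover from a failed run */
        if (symlink (release_dir, "current.tmp") != 0) {
            perror ("symlink");
            return -1;
        }
        if (rename ("current.tmp", "current") != 0) {
            perror ("rename");
            return -1;
        }
        return 0;
    }

Rolling back is the same call pointed at the previous release directory.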
I kind of miss it honestly.
- My favorite deployment is rsync over SSH and occasionally, I still upload a file over SFTP.
- MongoDB will always ALWAYS be in my mind as the buggy database that bought itself into dev minds with lots of money, but ultimately it was all hype and risky to use from a business perspective. Turns out, especially with the rise of TypeScript, that most data has a solid structure anyway, and while NoSQL has its places, most projects benefit from good old SQL.
- Slack launched in 2013? Man, time flies.
- I still hardly use Docker and just deploy straight to a VPS running Debian.
- I remember the first years of TypeScript, which were kind of tough. Many projects had no types. I sometimes considered using one package over another just because it had proper types.
- VSCode is a good thing, and if you don't go too crazy with plugins, it stays stable and performant. I like it.
- Next.js gives me MongoDB vibes. An over-engineered, way too "magical" framework that hijacked developer minds and is built on the weird assumption that React, a DOM manipulation library, belongs on the server. I never got the appeal and I will just wait this out. Meanwhile, I'm having fun with Hono. Easy to build API-based backends as well as MPAs with server-side HTML generation, runs on Node, Bun, Deno and whatnot, feels lightweight and accessible and gives me a lot of control.
I think VS Code is probably more responsible for TypeScript acceptance than any other project. Just having good interactions with the editor I think brought a lot of the requests to add type definitions to projects.
I'm with you on Next/Mongo... while as a dev I liked a lot about Mongo, I'd never want to admin it again; I'm fine with PostgreSQL's JSONB for when I need similar features. On Next specifically, I usually avoid it... fatter clients aren't so bad IMO.
Edit: +1 for Hono too... Beyond that, Deno has become my main scripting environment for all my scripting needs.
The author traces the evolution of web technology from Notepad-edited HTML to today.
My biggest difference with the author is that he is optimistic about web development, while all I see is a shaky tower of workarounds upon workarounds.
My take is that the web technology tower is built on the quicksand of an out-of-control web standardization process that has been captured by a small cabal of browser vendors. Every single step of history that this article mentions is built to paper over some serious problems instead of solving them, creating an even bigger ball of wax. The latest step is generative AI tools that work around the crap by automatically generating code.
This tower is the very opposite of simple and it's bound to collapse. I cannot predict when or how.
But I don't agree that the system is bound to collapse. Rather, as I read the article, I got this mental image of the web of networked software+hardware as some kind of giant, evolving, self-modifying organism, and the creepy thing isn't the possibility of collapse, but that, as humans play with their individual lego bricks and exercise their limited abilities to coordinate, through this evolutionary process a very big "something" is taking shape that isn't a product of conscious human intention. It's not just about the potential for individual superhuman AIs, but about what emerges from the whole ball of mud as people work to make it more structured and interconnected.
That's folk wisdom, but is it actually true? "Hundreds of lines just to grab a query parameter from a URL."
    #define _POSIX_C_SOURCE 200809L   /* for strdup/strndup */
    #include <stdlib.h>                /* getenv */
    #include <string.h>

    /*@null@*/
    /*@only@*/
    char *
    get_param (const char * param)
    {
        /* The web server hands CGI programs the raw query string via the environment. */
        const char * query = getenv ("QUERY_STRING");
        if (NULL == query) return NULL;

        /* Naive match: would also hit "fooparam"; fine for a known parameter set. */
        char * begin = strstr (query, param);
        if ((NULL == begin) || (begin[strlen (param)] != '=')) return NULL;
        begin += strlen (param) + 1;

        /* Value runs to the next '&' or to the end of the string; caller frees. */
        char * end = strchr (begin, '&');
        if (NULL == end) return strdup (begin);
        return strndup (begin, end - begin);
    }
In practice you would probably parse all parameters at once and maybe use a library.

I recently wrote a survey website in pure C. I considered Python first, but due to having written an HTML generation library earlier, it was quite a cakewalk in C. I also used the CGI library of my OS, which admittedly was some of the worst code I have ever refactored, but afterwards it was quite nice. Also, SQLite is awesome. In the end I statically linked it, so I got a single binary I can upload anywhere. I don't even need to set up a database file; this is done by the program itself. It can also be tested without a web server, because the CGI library supports passing variables over stdin. Then my program outputs the web page on stdout.
So my conclusion is: CRUD websites in C are easy and actually a breeze. Maybe that has my earlier conclusion as a prerequisite: HTML represents a tree, and string interpolation is the wrong tool to generate a tree description.
> In practice you would probably parse all parameters at once and maybe use a library.
Depending on your requirements, that might be a feature.
> It also doesn't handle any percent encoding.
This does literal matches, so yes, you would need to pass the param already percent-encoded. This is a trade-off I made, not for that case, but for similar issues. I don't like non-ASCII in my source code, so I would want to encode this in some way anyway.
But you are right, you shouldn't put this into a generic library. Whether it suffices for your project or not depends on your requirements.
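For anyone curious what handling the encoding would actually add, here is a rough sketch of a percent-decoder (illustrative only, no '+'-to-space handling, and not part of the snippet above):

    #include <ctype.h>
    #include <stdlib.h>
    #include <string.h>

    /* Decode %XX escapes into a newly allocated string; caller frees.
     * Malformed or truncated escapes are copied through unchanged. */
    char *
    percent_decode (const char * s)
    {
        char * out = malloc (strlen (s) + 1);
        if (NULL == out) return NULL;

        char * o = out;
        while (*s) {
            if (s[0] == '%' && isxdigit ((unsigned char) s[1])
                            && isxdigit ((unsigned char) s[2])) {
                char hex[3] = { s[1], s[2], '\0' };
                *o++ = (char) strtol (hex, NULL, 16);
                s += 3;
            } else {
                *o++ = *s++;
            }
        }
        *o = '\0';
        return out;
    }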
Written standard be damned; I’ll just bang out something that vaguely looks like it handles the main cases I can remember off the top of my head. What could go wrong?
You said you allocated 5 minutes max to this snippet; well, in PHP this would be 5 seconds and 1 line. And it would be a proper solution.
    $name = $_GET['name'] ?? SOME_DEFAULT;

    name = cgiGetValue (cgi, "name");
    if (!name) name = SOME_DEFAULT;

If you allow for GCC extensions, it looks like this:

    name = cgiGetValue (cgi, "name") ?: SOME_DEFAULT;

> If multiple fields are used (i.e. a variable that may contain several values) the value returned contains all these values concatenated together with a newline character as separator.
> concatenated together with a newline character
No, because...
> In practice you would probably parse all parameters at once and maybe use a library.
In the 90s I wrote CGI applications in C; a single function, on startup, parsed the request params into an array (today I'd use a hashmap, but I was very young then and didn't know any better) of `struct {char *name; char *value}`. It was paired with a `get(const char *name)` function that returned the `const char *` value for the specified name.
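A minimal sketch of that approach, for the curious (the table size and names are mine, not the original code's):

    #include <string.h>

    struct param { char * name; char * value; };

    static struct param params[64];
    static int nparams;

    /* Call once at startup with a writable copy of QUERY_STRING;
     * strtok() splits it in place, values stay percent-encoded. */
    void parse_query (char * query)
    {
        for (char * pair = strtok (query, "&");
             pair != NULL && nparams < 64;
             pair = strtok (NULL, "&")) {
            char * eq = strchr (pair, '=');
            if (NULL == eq) continue;
            *eq = '\0';
            params[nparams].name  = pair;
            params[nparams].value = eq + 1;
            nparams++;
        }
    }

    /* Linear scan; an array is plenty for a handful of request params. */
    const char * get (const char * name)
    {
        for (int i = 0; i < nparams; i++)
            if (0 == strcmp (params[i].name, name))
                return params[i].value;
        return NULL;
    }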
TBH, a lot of the "common folk wisdom" about C has more "common" in it than "wisdom". I wonder what a C library would look like today for handling HTTP requests.
Maybe a hashmap for request params, a union for the `body` depending on content-type parsing, a tree library for JSON parsing/generation, an arena allocator for each request, a thread pool, etc.
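Roughly the shapes I would expect, purely hypothetical (none of these types come from an existing library):

    #include <stddef.h>

    struct arena;    /* per-request bump allocator, freed in one call */
    struct hashmap;  /* string -> string table */
    struct json;     /* parsed JSON tree */

    enum body_kind { BODY_NONE, BODY_FORM, BODY_JSON, BODY_RAW };

    struct request {
        struct arena   * arena;    /* everything below lives here */
        struct hashmap * params;   /* query-string fields */
        enum body_kind   kind;     /* picked from the Content-Type header */
        union {
            struct hashmap * form; /* application/x-www-form-urlencoded */
            struct json    * json;
            struct { char * data; size_t len; } raw;
        } body;
    };

A thread pool would then hand each worker a request plus its arena, and tear the whole thing down once the response has been written.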
Isn't that CURL?
struct { char *name; char *value}
can bypass the formatting by prepending 2 spaces

Yet 30 years later, it feels like string interpolation is the most common tool. It probably isn't, but it's still surprisingly common.
[*] I mean yeah, I could have written a wrapper, but that would have taken far more time.
Building the tree on the server is usually wasted work. There aren't a lot of tree-oriented, output-as-you-make-it libraries.
> There aren't a lot of tree-oriented, output-as-you-make-it libraries.
That was actually the point of my library, although I must admit I haven't implemented streaming the HTML output before the whole tree has been composed. It isn't actually that complicated; what I would need to implement is making part of the tree immutable, so that the HTML for that part can already be generated.
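For anyone who hasn't seen the tree-first style, a toy sketch of the idea (not the commenter's library; names are made up):

    #include <stdio.h>

    struct node {
        const char  * tag;       /* NULL for a text node */
        const char  * text;      /* used when tag == NULL */
        struct node * children;  /* first child */
        struct node * next;      /* next sibling */
    };

    /* Serialise the finished tree; a real library would escape text
     * and handle attributes and void elements. */
    void render (const struct node * n, FILE * out)
    {
        for (; n != NULL; n = n->next) {
            if (NULL == n->tag) { fputs (n->text, out); continue; }
            fprintf (out, "<%s>", n->tag);
            render (n->children, out);
            fprintf (out, "</%s>", n->tag);
        }
    }

    int main (void)
    {
        struct node text = { NULL, "Hello, CGI!", NULL, NULL };
        struct node h1   = { "h1", NULL, &text, NULL };
        struct node body = { "body", NULL, &h1, NULL };
        struct node html = { "html", NULL, &body, NULL };
        render (&html, stdout);
        return 0;
    }

Streaming, as described above, would mean emitting the opening tags for the already-immutable part of the tree while the rest is still being built.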
https://github.com/urweb/urweb
http://www.impredicative.com/ur/
Needless to say it wasn't very practical. But there was one commercial site written in it: https://github.com/bazqux/bazqux-urweb (the site still exists, but I'm not sure if it's still written in Ur/Web).
Ur/Web is not very practical for reasons other than type safety: the lack of libraries and slow compilation when the project gets big. The language itself is good, though.
Nowadays, I would probably choose OCaml. It doesn't have Ur/Web's high-level features, but it's typed and compiles quickly.
"... and through it all, the humble <br> tag has continued playing its role ..."
However, some very heavy firepower was glossed over: TLS/HTTPS gave us the power to actually buy things and share secrets. The WWW would be nowhere near this level of commercialization if we didn't have that in place.
I've been creating web pages since 1993, when you could pretty much read every web site in existence and half of them were just HTML tutorials. I've lived through every change and every framework. 2025 is by far the best year for the Web. Never has it been easier to write pages and have them work perfectly across every device and browser, and look great to boot.
It’s fashionable to dunk on “how did all this cloud cruft become the norm”, but seeing a continuous line in history of how circumstances developed upon one another, where each link is individually the most rational decision in its given context, makes it an understandable misfortune of human history.
The first VPS provider, circa fall of 2001, was "JohnCompanies" handing out FreeBSD jails advertised on metafilter (and later, kuro5hin).
These VPS customers needed backup. They wanted the backup to be in a different location. They preferred to use rsync.
Four years later I registered the domain "rsync.net"[1].
[1] I asked permission of rsync/samba authors.
I bugged out of front-end dev just before jquery took off.
That's what I still do 30 years later
Nah son, I won't allow the great Symfony to be erased from history and replaced with Laravel. Not on my watch.
I really wish it had added a few things.
>The concept of a web developer as a profession was just starting to form.
Webmaster. That was what we were called. And somehow people were amazed at what we did when 99% of us, as said in the article, really had very very little idea about the web. ( But it was fun )
>The LAMP Stack & Web 2.0
This completely skipped the part about Perl. And Perl was really big; I bet at one point in time most web sites were running on Perl. cPanel, Slashdot, etc. The design of Slashdot is still pretty much the same today as most of the Perl CMSes from that era. Once everyone realised C wouldn't be the language of Web CGI-BIN, Perl took over. We had Perl scripts all over the web for people to copy and paste, FTP-upload and CHMOD before PHP arrived. Many forums at the time were also Perl scripts.
Speaking of Slashdot, after that there was Digg. That was all before Reddit and HN. I think there used to be something about HN like Fight Club: "The first rule of fight club is you do not talk about fight club." And HN in the late 00s or early 10s was simply referred to as the orange site by journalists / web reporters.
And then we could probably talk about Digg v4, and about not redesigning something that is working perfectly.
>WordPress, if you wanted a website, you either learned to code or you paid someone who did.
This was part of the CMS war, or the blogging-platform war before it was even called a blog. There were many, including those using Perl / CGI-BIN. I believe it was Movable Type vs WordPress.
And it also missed forums: Ikonboard based on Perl > Invision ( PHP ) vs vBulletin. Just like CMSes / blogs, there used to be Perl vs PHP forum software as well. And of course we all know PHP ultimately won.
>Twitter arrived in 2006 with its 140-character limit and deceptively simple premise. Facebook opened to the public the same year.
Oh, I wish they had mentioned MySpace and Friendster, the social networks before Twitter and Facebook. I believe I still have my original @ksec Twitter handle registered but lost access to it. It has been sitting there for years. If anyone knows how to get it back, please ping me. Edit: And I just realised my HN proton email address hasn't been logged in for months for some strange reason.
>JavaScript was still painful, though. Browser inconsistencies were maddening — code that worked in Firefox would break in Internet Explorer 6, and vice versa.
Oh, it really missed the most important piece of that web era: Firefox vs IE. Together we pushed Firefox beyond 30%, and in some cases 40%, of browser market share. That is insanely impressive if you consider that most of that usage was not from work, because enterprise and business PCs were still on IE6.
And then Chrome came. And I witnessed, and realised, how fast things can change. It was so fast that, without all the fanfare of Mozilla, people were willing to download and install Google Chrome. To this day I have never used Chrome as my main browser, although it has been a secondary browser since the day it launched.
>Version control before Git was painful
There was Hg / Mercurial. If anything was to take over from SVN, it should have been Hg. For whatever reason I have always been on the wrong side of history, or of the mainstream, although that is mostly a personal preference: Pascal over C and later Delphi over Visual C++, Perl over PHP, FreeBSD over Linux, Hg over Git.
>Virtual private servers changed this. You could spin up a server in minutes, resize it on demand, and throw it away when you were done. DigitalOcean launched in 2011 with its simple $5 droplets and friendly interface.
Oh, VPS was a thing long before DO. DO was mostly copying Linode from the start, and that is not a bad thing, considering Linode at the time was the most developer-friendly VPS provider, taking the crown from, I believe, Rackspace? Or Rackspace acquired one of those VPS providers before Linode became popular; I can't quite remember.
>Node.js .....Ryan Dahl built it on Chrome's V8 JavaScript engine, and the pitch was simple: JavaScript on the server.
I still think Node.js and JavaScript on the server were a great idea with the wrong execution, especially Node.js's NPM. One could argue there is no way we would have known without first trying it, and that is certainly true. And it was insanely overhyped in the post-Rails era around 2012 - 2014, because of Twitter's Fail Whales and the "Rails can't scale" narrative. I think the true spiritual successor is Bun, integrating everything together very neatly. I just wish I could use something other than JavaScript. ( On the wrong side of history again: I really liked CoffeeScript )
>The NoSQL movement was also picking up steam. MongoDB
Oh, I remember the overhyped train of NoSQL and MongoDB on HN and the internet. CouchDB as well. In reality today, SQLite, PlanetScale Postgres / Vitess MySQL or ClickHouse is enough for 99% of use cases. ( Or maybe I don't know enough NoSQL to judge its usefulness )
>How we worked was changing too. Agile and Scrum had been around since the early 2000s,
Oh, the worst part of Agile and Scrum isn't what they did to the tech industry; it is what they did to companies outside it. I don't think most people realise that by the mid 2010s tech was dominating mainstream media, words like Agile were floating around in many other industries, and they all needed to be Agile. Especially American companies. Finance companies that were not tech decided to use these terms, because it was hip or cool, as part of their KPIs, and along with consulting firms like McKinsey, the Agile movement took over a lot of industries like a plague.
This reply is getting too long. But I want to go back to the premise and conclusion of the post,
>I'm incredibly optimistic about the state of web development in 2025....... We also have so many more tools and platforms that make everything easier.
I don't know, and I don't think I agree. AI certainly makes many of the steps we do now easier, but conceptually speaking everything is still a bag of hurt, and nobody is asking why we need those extra steps in the first place. Dragging something over via FTP is still easier. Editing in WYSIWYG Dreamweaver is way more fun. Just as I think desktop programming should be more Delphi-like, in many ways I think WebObjects is still ahead of many web frameworks today. Even Vagrant is still easier than what we have today. The only good thing is that Bun, Rails, HTMX and even HTML / the browser finally seem to be swinging back in another ( my preferred ) direction. Safari 26.2 is finally somewhat close to Firefox and Chrome in compatibility.
The final battle left is JPEG XL, or maybe AV2 / AVIF will prove to be good enough. The web is finally moving in the right direction.