We are currently in the process of moving Dillo away from GitHub:
- New website (nginx): https://dillo-browser.org/
- Repositories (C, cgit): https://git.dillo-browser.org/
- Bug tracker (C, buggy): https://bug.dillo-browser.org/
They should survive the HN hug.
The CI runs on git hooks and outputs the logs to the web (private for now).
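As a rough illustration of how a git-hook CI can work (the paths and build commands below are assumptions, not the actual Dillo configuration), a post-receive hook might look like this:

```
#!/bin/sh
# Illustrative post-receive hook: paths and the build recipe are assumptions,
# not the actual Dillo CI configuration.
log=/var/www/ci/$(date +%Y%m%d-%H%M%S).log
work=$(mktemp -d)
{
    git clone --depth 1 /srv/git/dillo.git "$work" &&
    cd "$work" &&
    ./autogen.sh && ./configure && make
} >"$log" 2>&1
rm -rf "$work"
```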
All services are very simple and work without JS, so Dillo can be developed fully within Dillo itself.
During this testing period I will continue to sync the GitHub git repository, but in the future I will probably mark it as archived.
See also:
What's holding back CSS and HTML support (at their current levels), and is there interest in expanding that support in full but a lack of resources to do so?
- It is extremely slow and resource-intensive. Opening any link/page takes at least 3 seconds on my fastest computer, even though the content is mostly text with images.
- It cannot be used without JS (it used to be at least readable; now only the issue description loads). I want the bug tracker to be readable from Dillo itself. There are several CLI options, but those are no better than just storing the issues as files and using my editor.
- It forces users to create an account to interact and it doesn't interoperate with other forges. It is a walled garden owned by a for-profit corporation.
- You need an Internet connection to use it, and a good one at that. Loading the main page of the dillo repo requires 3 MiB of traffic (compressed). This is more than twice the size of a release of Dillo (we use a floppy disk as the limit). Loading our index of all open issues downloads 7.6 KiB (compressed).
- Replying by email mangles the content (there is no Markdown?).
- There is no built-in way to add dependencies across issues.
I'll probably write a post with more details once we consider the migration complete.
I’m glad you’re prioritizing this and that you consider this a reason to choose a different forge.
Maybe in the tech world, but in the real world there are companies such as Nestlé out there competing for this title.
I have poor eyesight, so I can’t read the content.
I don't imagine you'll get much business at your hostile...
They'd probably rather go to hotels or youth hostels instead.
That said, I wish there was something a little better than cgit.
However, I really like what you've done here for Dillo as well.
I believe that storing the issues in plain text in git repositories synced across several git servers is more robust and future-proof, but time will tell.
Having a simple storage format allows me to later on export it to any other service if I change my mind.
https://dillo.org/post-sitemap.xml
At some point I should investigate whether we can file a complaint to get it taken down at least. Here is more info: https://dillo-browser.org/dillo.org.html
I think it is becoming more important for i386 BSD, especially since OpenBSD can no longer build Firefox, SeaMonkey, and IIRC Chrome on i386 systems.
I have been using dillo more and more as time goes on, plus you can get plugins for Gemini Protocol and Gopher.
https://git.dillo-browser.org/buggy/
It fetches the issues from GitHub and stores them in <number>/index.md in Markdown format, with some special headers. I then keep the issues in a git repository:
https://git.dillo-browser.org/bugtracker/
So we have very robust storage that we can move around, and it also allows me to work offline. When I want to push changes, I just push them via git, then buggy(1) runs on the server via a web hook. This also tracks the edit history.
While typing, I often use `find . -name '*.md' | entr make`, which regenerates the changed issues into HTML as well as the index, then sends a SIGUSR1 to dillo, which reloads the issue page.
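Expanded, the loop is roughly this (illustrative sketch; in the real setup the Makefile may send the signal itself):

```
# Watch the issue files and rebuild on every change.
find . -name '*.md' | entr sh -c '
  make                # regenerate HTML for the changed issues and the index
  pkill -USR1 dillo   # tell the running Dillo to reload the page
'
```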
The nice thing about allowing arbitrary HTML inline is that I can write the reproducers in the issue itself:
https://git.dillo-browser.org/bugtracker/tree/501/index.md#n...
Closing an issue is just changing the header "State: open" to "State: closed", often with a comment pointing to the merged commit.
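To give a rough idea of the file format (only the State: header is quoted from above; the other header names and the commit id are illustrative, not the real schema):

```
Title: Example issue title       <!-- header names other than State: are my guess -->
State: closed                    <!-- flipped from "State: open" when the fix landed -->

Description in Markdown, with the reproducer written inline as plain HTML:

<table><tr><td style="width: 9999px">overflowing cell</td></tr></table>

Fixed in commit abc1234.         <!-- placeholder commit id -->
```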
> Uses the fast and bloat-free FLTK GUI library [1]
Bloat as a moat is sadly the strategy of much of the web and of apps in recent years. High performance has shifted into how fast we can serve bloat. Efficiency has become about pushing the most bloat in the least time.
Pages are bloated, sites are bloated, browsers are bloated, the browser market is bloated (two a dime! or three for free). The whole damn web is one big bloat. wtf happened.
If you remove ads and behavioural tracking, speed improves.
But the goal of Big Tech, who make the popular browsers, is to make speed faster (fast enough) _with_ ads and tracking.
User wants fast speed. User does not want ads and tracking. Big Tech wants users in order to target them with ads and tracking. Big Tech tries to deliver fast speed to keep users interested.
User can achieve fast speed _without_ ads and tracking
I do it every day. I do not use a large popular browser to make HTTP requests nor to read HTML.
Probably the best indicator of which features are supported is to pass as many tests as possible from WPT that cover that feature.
I did some experiments to pass some tests from WPT, but many of them require JS to perform the check (I was also reading how you do it in blitz). It would probably be the best way forward, as it indicates what is actually supported.
Yeah, if we add JS support to Blitz then one of our initial targets will probably be "enough to run the WPT test runner".
> I was also reading how you do it in blitz
We are able to run ~20k tests (~30k subtests) from the `css` directory without JS which is IMO more than enough for it to be worthwhile.
> Probably the best indicator of which features are supported is to pass as many tests as possible from WPT that cover that feature.
Yes, and no. It definitely is an indicator to some extent. But in implementing floats recently, I've found that a lot of the web suddenly renders correctly, yet I'm only passing ~100 more tests!
There are options for this (my setup below):
geometry=1600x900
font_factor=1.75 (increased)
bg_color=0xFAF9F6
There are options for the start and home pages too.
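Collected in ~/.dillo/dillorc, that looks roughly like this (the start_page and home values are just examples; check the option names against the dillorc that ships with your version):

```
# Excerpt from ~/.dillo/dillorc; start_page/home values below are examples
geometry=1600x900
font_factor=1.75
bg_color=0xFAF9F6
start_page="https://dillo-browser.org/"
home="https://dillo-browser.org/"
```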
Among C++ compilers, clang++ is much faster than g++ on legacy platforms. Clang uses far less RAM and CPU than GCC while compiling.
I know we have no cproc/cparser or tcc for C++, but at least clang is usable with 1 GB of RAM.
Feature support matrix is here: https://blitz.is/status/css
This month I have been working on support for CSS floats (https://developer.mozilla.org/en-US/docs/Web/CSS/Reference/P...) (not yet merged to main), which turn out to still be important for rendering much of the modern web including wikipedia, github, and (old) reddit.
EDIT: I should mention, in case anyone is interested in helping to build a browser engine, that additional collaborators / contributors would be very welcome!
Edit: to be clear, I consider this a good thing. You've got a head start, are contributing to the ecosystem and aren't doing by yourself something that others have spent billions on.
The main thing we pull in from Servo (which is also shared with Firefox) is the Stylo CSS engine, which is a really fantastic piece of software (although underdocumented - a situation I am trying to improve). I was recently able to add support for CSS transitions and animations to Blitz in ~2 days because Stylo supports most of the functionality out of the box.
(the other smaller thing we use from servo is html5ever: the html5/xhtml parser)
We also rely on large parts of the general Rust crate ecosystem: wgpu/vello for rendering (although we now have an abstraction and have added additional Skia and CPU backends), winit for windowing/input, reqwest/hyper/tokio for HTTP, icu4x for unicode, fontations/harfrust for low-level font operations, etc.
And while we're building the layout engine ourselves, we're building it as two independently usable libraries: Taffy for box-level layout (development mostly driven by Blitz but used by Zed and Bevy amongst others), and Parley for text/inline-level layout (a joint effort with the Linebender open source collective).
---
I wish that the Servo project of 2025 was interested in making more of their components available as independent libraries though. Almost all of the independently usable libraries were split out when it was still a Mozilla project. And efforts I have made to try and get Servo developers to collaborate on shared layout/text/font modules have been in vain.
If you were running it "headless" (which is supported), then it would probably work today.
There would also be the option of using Taffy and/or Parley (the layout libraries) without the rest of Blitz.
Other engines on my radar: quickjs, hermes, primjs (and of course V8/JSC/SM, but they don't exactly fit with the "lightweight ethos").
There is also the possibility of bindings with more than one engine and/or using some kind of abstraction like NAPI or JSI.
I still use a modern version of it now on a Pine Tab 2 tablet, which has slow enough hardware that you want something like Dillo to make it feel snappy. I just make sure to bookmark lightweight websites that are most agreeable to Dillo's stripped-down versions of web pages.
It's one of the reasons I feel like Linux on the desktop in the 00s and 2010s had the superpower of making ancient hardware nearly up to par with modern hardware or at least meaningfully closing the gap.
But by comparison, Dillo is much more lightweight than even NetSurf (!!), much more brutalist, and a bit more idiosyncratic in the texture and feel of how tabs behave, how you handle bookmarks, how you do searches. Dillo uses FLTK while NetSurf uses GTK3, and a lot of the resource usage savings and the differences in vibe and feel come from that by itself. NetSurf is much more familiar if your baseline is standard modern browsers, and NetSurf does a better job of respecting CSS and rendering pages the way they should look.
Dillo can take a little bit of getting used to, but it's a usable text-oriented browser that I think is probably as good as it can possibly get in terms of minimalist use of resources, although at a rather significant compromise: it doesn't render many web pages accurately, doesn't run JavaScript, and doesn't have an interface intuitive to the average person.
dillo-0.0.0.tar.gz [Dec, 1999]
Legendary project. https://dillo-browser.org/release/3.1.0/commits-author.png
The initial release was around the 15th of December, 1999, so it's soon going to be 26 years: https://dillo-browser.org/25-years/index.html
It's good to see that it's active again!
IIRC I stopped using it when Firefox ("Phoenix" at the time) was released.
I hope it survives another 25 years.
https://www.bttr-software.de/forum/board_entry.php?id=10797
Unfortunately, none of those ports made their way back to the main project. However, if there is enough interest I would be willing to merge them. I'm not very familiar with DOS/FreeDOS, so someone would probably have to help us bring the changes up to date, but it seems doable between 3.0 and 3.2.0.
and then `dillo` starts up: a 1.1 MB executable that is so freaking, shockingly fast.
TIL that although the Google homepage renders beautifully, I need to "Turn on JavaScript to keep searching" [1]
Wow, Google Maps is even snarky-ish about it: "When you have eliminated the JavaScript, whatever remains must be an empty page." (that's what appears! for real)
I mean, what was I expecting. 🙃
[0] https://github.com/dillo-browser/dillo/blob/master/doc/insta...
[1] https://www.reddit.com/r/google/comments/1i3njv0/google_begi...
Later they also blocked other non-JS browsers like links or w3m, so I assume they no longer care. They used to maintain several frontends that worked on really old devices.
I don't think there is any User-Agent that works today, but you can still use the Google index via other search services that can fetch Google results without JS (for example, Startpage still works). However, it is probably a good idea to have more options available that have their own independent index (for example Mojeek). Seirdy has a very good list: https://seirdy.one/posts/2021/03/10/search-engines-with-own-...
Incidentally, DDG still works without JS.
https://lite.duckduckgo.com/lite
for searching in Dillo. Once in a while you may get a captcha from DDG that is far better than any other captcha I have ever seen. The captcha is easy to use and can be a bit fun :)
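If useful, dillorc has a search_url option, so the lite endpoint can be set as the default search; the exact value format (label prefix, query parameter) is quoted from memory here, so check it against the dillorc shipped with your version:

```
# In ~/.dillo/dillorc; %s is replaced with the search terms
search_url="DDG Lite https://lite.duckduckgo.com/lite/?q=%s"
```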
Using it shows how rotten the World Wide Web has become, with mandatory JavaScript everywhere, even on google.com, which I was not aware of.
I'm very much looking forward to Ladybird's first alpha release next August.
Note: DuckDuckGo still offers a perfectly usable JS-free search engine if you visit the website from a browser with no/disabled JS support. Almost all other major search engines now require JS to function.
I miss those times, when I could just run links/w3m/dillo to actually use the web, including more complex services like webmail and stores.
I was very proud that I could call our home phone line and it would boot the computer if it was off. Most pointless feature ever, but I thought I was hot shit when I was a kid getting that to work.