Mine is just a normal notebook that's been dropped one too many times on the floor.
Main things are:
1. Use light mode not dark mode
2. Max out screen brightness (obvs) - there are hacks for HDR displays to make them even brighter, but my MacBook is too old.
Coding is fine, but anything that requires looking at images (low-contrast UI design in particular) sucks. However, this probably forces you to design good accessible UIs!
I also use a Quest 3 as a display when I can as that also solves the sunlight problem and gives me a huge virtual display to boot.
The biggest thing I'm lacking is a remote desktop app that doesn't mess with my muscle memory. Keys like escape and alt-tab often aren't handled correctly over remote desktop. (Chrome Remote Desktop is the best thing I've found so far, but it still doesn't handle alt-tab between Mac and PC.)
At night, I just use macOS's built-in accessibility functions to invert the screen. Works pretty well, but sometimes you have to un-invert to view photos.
I sincerely hope we see, within a few years from now, e-ink laptops where one side of the screen and the underside of the laptop consist of solar cells, and all one need do for a daily/weekly charge is tilt the laptop into a teepee orientation and let it charge, charge, charge.
I've already decided personally to get off the grid as soon as possible - in my case, in the form of a sailboat outfitted with as many solar panels as possible. Having a solar-powered laptop has been a fantastic dream for decades - I really think it's going to happen, commercially and successfully, within the next few years.
I can already power my iPad and uConsole with portable solar panels and battery banks. This all just needs to get integrated, and someone is going to have a HUGE HIT on their hands ..
A typical cellphone uses about 4 watts, and a laptop closer to 12, the difference mostly being the screen. If you use batteries and want to run the computer for half an hour for every hour it spends in the sun, you'd need at least two watts of solar panels. Mainstream solar panels are about 220W/m², so you need roughly 100cm² of panels (0.01m²), more if you don't tilt the laptop perfectly. The cellphone I'm typing this on is 82mm×165mm = 135cm², so you would have to devote ¾ of its surface to solar panels, not leaving much room for a screen. If you covered the back with solar panels instead, you could use it a third of the time if you left it face down in the sun to recharge the rest of the time.
That's almost usable, but not quite. If you can cut the power budget by about an order of magnitude, to about 0.4 watts, you can get to continuous usage with only a fraction of the face of the device devoted to solar panels. LCDs without backlights can help here (especially important for larger devices), but using lower-power CPUs is also important.
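If you want to poke at the arithmetic, here's the same back-of-envelope as a throwaway Python script. The 1000 W/m² full-sun figure is my assumption; everything else is from the paragraphs above:

    # Back-of-envelope check of the panel sizing above.
    insolation = 1000.0                        # W/m^2, assumed direct sunlight
    efficiency = 0.22                          # mainstream panel efficiency
    panel_w_per_m2 = insolation * efficiency   # = 220 W/m^2

    phone_draw = 4.0                           # W while in use
    duty_cycle = 0.5                           # half an hour of use per hour of sun
    panel_watts = phone_draw * duty_cycle      # = 2 W of panel needed

    area_cm2 = panel_watts / panel_w_per_m2 * 1e4
    print(round(area_cm2), "cm^2 of panel")    # ~91, call it 100

    # At a 0.4 W budget, continuous use needs only:
    print(round(0.4 / panel_w_per_m2 * 1e4), "cm^2")  # ~18, roughly 13% of the phone's face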
You can see some of my previous notes on the topic, listing nanojoules per instruction for processors then available, at https://dercuano.github.io/notes/keyboard-powered-computers..... Ambiq's subthreshold microcontrollers go much lower.
Much of my own interest in this is because batteries and charging are almost always what breaks on computers these days, so I'd like to be able to get by entirely without batteries, just using solar panels, like a solar calculator. http://canonical.org/~kragen/sw/zorzpad.git/ has some preliminary research into that.
I don't mind lower power if it means I can get off the grid, and especially if I can have a small fleet of devices for the purpose .. I'm already doing most of my development work on the uConsole - it's not a fast PC, nor a beast in any sense, but it functions just fine for my use case.
I could see an ARM64 cluster being solar powered with workable results.
I agree .. the idea of a battery-less, solar-calculator-type computer would be grand. Let's see how long it takes for someone to design a viable product around this idea ..
My power budget for the Zorzpad is one milliwatt, at which I think the Ambiq Apollo3 (an ARM Cortex-M4F) can deliver 20 MIPS, roughly equivalent to a Pentium-100 or a SPARC 5. It has 384KiB of RAM. The uConsole (https://www.clockworkpi.com/uconsole ?) has at a minimum a 64-bit 1GHz RISC-V processor, on the order of 800 MIPS, 40 times as fast. Ambiq doesn't make ARM64 hardware.
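Re-deriving the energy cost those figures imply (just a sanity check on the numbers above, not new data):

    budget = 1e-3          # W, Zorzpad power budget
    apollo3_ips = 20e6     # ~20 MIPS claimed at that budget
    print(budget / apollo3_ips * 1e12, "pJ/instruction")   # 50.0

    uconsole_ips = 800e6   # ~800 MIPS
    print(uconsole_ips / apollo3_ips, "x faster")          # 40.0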
My theory here (and maybe this is just the inflexibility of old age, treating my own experiences as gospel) is that there's a kind of phase transition in personal computers along the path between, say, a PDP-8 or an Altair, and a SPARC 1 or a 486:
‣ PDP-8 or Altair (≈0.01 MIPS, ≈0.01 MB RAM): self-hosted development is almost impossible; you need a big computer like a PDP-10 to compile the operating system, although the small computer is capable enough to run an assembler, a BASIC interpreter, and non-optimizing compilers for languages like Fortran. Applications like word processing and spreadsheets can be sort of approximated.
‣ IBM PC, Apple //GS, or PDP-11/70 (≈0.1 MIPS, ≈0.1 MB RAM): self-hosted development is comfortable and responsive in languages like C or Pascal. Viable applications include things like 2-D mechanical CAD, word processing, version control, and spreadsheets. GUIs are clunky because redrawing the entire low-resolution screen takes a second or so.
‣ Macintosh 512K, Sun 3/60, 80486 (≈1 MIPS, ≈1 MB RAM): mouse-driven GUIs have reached basically the same form we use them in today on the desktop. Self-hosted development is the only way to go, and higher-level languages like PostScript, Emacs Lisp, Tcl, HyperTalk, Visual Basic, and Perl are popular. Viable applications include things like VLSI simulation, 3-D modeling, and Web browsers.
So somewhere in that range there's a phase transition between "basically a peripheral of a larger computer" and "GUI workstation".
The Apollo3, being a microcontroller, is balanced like a microcontroller rather than a personal computer: 100 MIPS (with FPU), but only 0.4 MB RAM. So it has the RAM of an Amiga 500 or a Mac 512, but the CPU speed of a SPARC 5, a Pentium 60, or a PowerMac 8100. (See https://netlib.org/performance/html/dhrystone.data.col0.html.) My speculation is that, coupled with the much faster mass storage speeds available with NAND Flash, this ought to be enough for a comfortable self-hosted development experience. I mean, I wrote GUI apps and browsed the Web on a SPARC 5 and a 5x86-133. I suspect that the structure of the system software will have to be significantly different.
(See https://rossumblog.com/ for some examples of what people have done with small microcontrollers that are capable of running on low power.)
But the uConsole will still be on the order of 10 times faster, and consequently (if we hold the underlying implementation technology constant) dissipate on the order of 10 times as much energy. Considering that it's using conventional CMOS instead of Ambiq's subthreshold process, the ratio is probably closer to 50:1.
I could see myself doing a lot of work with a couple of z80/6502-based systems, as long as they were maxed out in terms of memory and had decent peripheral support .. put a network device in the mix, and it offers plenty of opportunities to run a Strange New Operating System. I would happily run a 21st Century CP/M to read email, watch sensors, drive the ship .. if, say, it had multi-processor/network support and there were somehow 1024 z80's in my wrist-watch/book/headset/nav station, all cooperating at low power to do bigger things.
384k is enough for everyone.
What I actually wonder now is what it would physically look like to have a fully functioning Z80 in silicon, from sunshine to user display, in a single package. I bet that could be mighty small, physically.
Scale this into an energy-friendly form, and we have solar-powered computing at hand.
(Edit: I'm also a grey-beard; I've kept every system I've ever worked on/written software for, for 50 years. My living room is a retro-computing museum... My motto is "computers don't get old - their users do" .. so the utility of very low-power computing devices is entirely relevant to my interests..)
—⁂—
Physically I think the user display is likely to be much larger than the CPU, the RAM, the Flash, or the power supply capacitor. The size of the solar panel might be larger still; in direct sunlight you can get 1 milliwatt from 4.5mm² of 22%-efficient solar cell. Possibly a glasses-mounted display with the appropriate optics to focus onto your retina would allow you to use a display smaller than that, which could also reduce its power consumption. The SHARP LS027B7DH01 400×240 memory-in-pixel LCD I want to use (two of) consumes 50μW just to maintain its lovely high-contrast display (according to the datasheet), and nominally 175μW to flip every pixel on the display at 20Hz, the maximum datasheet speed. Nicolas Magnier was able to get 60Hz out of his: https://www.youtube.com/watch?v=zzJjE1VPKjI but we can extrapolate that this requires an additional 250μW. (Which I also still haven't measured.)
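The 250μW extrapolation assumes the pixel-flipping power scales linearly with refresh rate; spelled out (datasheet numbers from above, the linear scaling is my assumption):

    maintain = 50.0                       # uW to hold the image
    at_20hz = 175.0                       # uW flipping every pixel at 20 Hz
    per_hz = (at_20hz - maintain) / 20    # 6.25 uW per Hz of full-screen updates

    at_60hz = maintain + per_hz * 60      # 425 uW total
    print(at_60hz - at_20hz, "uW extra at 60 Hz")   # 250.0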
But, without head-mounted optics, I think these screens are too small for a comfortable development environment. You can fit 8 lines of text in a 12-point font on one, with a few words per line. My current working hypothesis is that I'll be able to live with two of them if I use reading glasses and hold the screen close to my face.
These memory-in-pixel LCDs use some power to retain the screen image, unlike e-ink, but much less power than e-ink to update it. I don't even have datasheet numbers for e-ink displays, but the crossover point where e-ink uses more power seems to be about three screen refreshes per hour. So, for interactive computing, the memory LCDs should use several orders of magnitude less energy.
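Inverting that crossover gives a rough implied e-ink refresh energy (an inference from the numbers above, not a measured figure):

    lcd_maintain = 50e-6        # W, memory LCD static draw
    breakeven_per_hour = 3      # refreshes/hour where e-ink draws the same average power
    eink_refresh_j = lcd_maintain * 3600 / breakeven_per_hour
    print(eink_refresh_j * 1e3, "mJ per e-ink refresh")   # ~60 mJ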
But they use proportionally more energy when they're larger. The discontinued 6-inch version https://www.youtube.com/shorts/snXYogDEseA reportedly used 24 milliwatts for a 30fps movie.
An audio interface would be another alternative. AirPods and in-ear hearing aids pack quite a bit of processing power already.
—⁂—
As for a Z80 with CP/M, although it's self-sufficient, it's only marginally so: you can run Turbo Pascal on it, but Anders Hejlsberg, Philippe Kahn, and the others had to write Turbo Pascal in assembly (https://www.latimes.com/archives/la-xpm-1988-01-21-fi-37556-...). Similarly, CP/M (or CP/Mish) can build CP/M, but that's only because it's written in assembly. Some of this is due to deficiencies in the Z80 instruction set which make it ill-suited for high-level languages. Probably the slowness and smallness of floppy disks was also a factor; the S34MS01G2 chips I have here are nominally 133 megabytes per second with 25μs random "seek" time, while floppy disks were more like 0.001 megabytes per second and 1234567μs random seek time. I'm hoping this means that "swapping" from Flash does a better job of providing the illusion of larger memory than loading WordStar's print overlay from a floppy did.
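Spelling out the gap (figures from above, including the tongue-in-cheek floppy seek time):

    nand_mb_s, floppy_mb_s = 133, 0.001
    nand_seek_us, floppy_seek_us = 25, 1234567
    print(nand_mb_s / floppy_mb_s, "x the bandwidth")        # ~133,000x
    print(floppy_seek_us / nand_seek_us, "x the seek time")  # ~49,000x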
Also, it should help a lot that the Apollo3's Cortex-M4F provides 25 Dhrystone MIPS (at 20MHz to not blow the 1mW power budget) rather than 0.052 Dhrystone MIPS like a 4MHz Z80. So you can push the time/memory tradeoffs waay over to the side of saving memory. And you have 1MiB of NOR Flash on-chip as well.
1024 Z80s would be only 8.5 million transistors, in the neighborhood of an Alpha 21164 or a Pentium II. But 64KiB of 6T SRAM is π million transistors all by itself, and 64KiB of DRAM is half a mebitransistor, plus half a mebicapacitor. So if you want 64KiB on each of those Z80s, you need closer to a billion transistors, like a SPARC T3, an Opteron 2400. (The Apple A17 chip fabbed in 3nm is 19 billion transistors and 103.8mm², according to https://en.wikipedia.org/wiki/Transistor_count, so we could extrapolate that a billion transistors would be about 5mm², which would easily fit into a wrist-watch.) At this point, though, it might seem appealing to use something like the 27000-transistor ARM2 for your processing elements rather than something like the Z80.
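The transistor arithmetic, spelled out; the totals bracket the "closer to a billion" figure, from about half a billion with DRAM to about three billion with 6T SRAM, and the area extrapolation from the A17 naively assumes uniform density:

    n, z80_t = 1024, 8500
    print(n * z80_t / 1e6, "M transistors of Z80s")          # ~8.7M

    sram_t = 64 * 1024 * 8 * 6      # 64 KiB of 6T SRAM
    print(sram_t / 1e6, "M per node")                        # ~3.15M ("pi million")
    print(n * (z80_t + sram_t) / 1e9, "G total with SRAM")   # ~3.2G
    dram_t = 64 * 1024 * 8          # 1T1C DRAM: half a mebitransistor
    print(n * (z80_t + dram_t) / 1e6, "M total with DRAM")   # ~546M

    a17_transistors, a17_mm2 = 19e9, 103.8
    print(1e9 / a17_transistors * a17_mm2, "mm^2 per 1e9 transistors")  # ~5.5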
Actual Z80s (the kind Zilog discontinued last year, which I believe was CMOS rather than NMOS) are pretty energy-hungry, using hundreds of milliwatts, if we trust the datasheet. But that's presumably because they're fabbed in a large process node with boatloads of gate capacitance, rather than because they switch a lot of transistors. So I think you get lower energy consumption with more recent Z80 clones like the ones in the S1 MP3 players or the TI-84+CE pocket calculator.
—⁂—
I suspect that you can spend fewer picojoules per computron by using bigger CPUs like ARM, for a variety of reasons. You decode fewer instructions to do a given task, and I believe that setting a register bit to 0 that was already 0, or to 1 that was already 1, doesn't use extra power, so in a sense the wider registers and ALU should be almost free from a power perspective. Also, I would expect that specialized hardware such as the integer multiplier or the barrel shifter burns less energy than doing the same thing through a sequence of steps using things like an adder or a 1-bit shift. You can take these principles further with SSE- or NEON-style SIMD instructions or GPU-style SIMT, and with additional specialized logic for things like floating point, LZW compression, AES encryption, etc. None of it uses any power if you power it down when you're not using it.
On the GA144, Chuck Moore claims he got a lot of efficiency mileage out of asynchronous logic, perhaps mostly because synchronous CPUs these days have to devote a lot of brute force to keeping clock skew down. I don't think this is as big a factor as the Apollo3's subthreshold logic, which, if we believe their datasheet, allows it to do 20 MHz and 25 DMIPS at 500μW, working out to 20pJ per Dhrystone "instruction".
Agreed that the NAND can consume a ton more than the CPU, so duty cycle has to be kept low. There are a few places where XIP NAND excels: it's big, it's cheap, and it can saturate the XIP memory bus just like NOR for large reads - it's a great place to store bitmap graphics. One downside is that the random-access latency is pretty terrible.
> with XIP you can't predict or even really measure how much you're accessing it
There are a couple of incomplete options here:
Just for measuring, you can fence off the XIP address range to generate MPU access violations, then work out a duty cycle.
The cache has performance counters, but at the cache level they don't tell you anything about internal flash vs XIP flash.
> The (non-SPI!) NAND I'm going to attempt to use only uses 18μW in standby
There are similar low-standby QSPI parts available (10 µA @ 1.8 V typical, i.e. the same 18 µW), like the W25N01GV.
I'm in my skoolie, off-grid at the moment in Skyline Wilderness Park in Napa.
My standing desk for the weekend: https://www.instagram.com/p/DPpZjy1Ej9t/
7.3 turbo diesel. This year it's been from my house in St. Pete, FL, to Porcupine Fest in NH, across the US to Oregon, and then down to Cali where I am now.
It's funny that you say you had a friend in Portland with one: I did a significant part of the internal build in Portland. There are so many skoolie projects there.
The stand that it came with was awful; I switched it for the stand that my curved Samsung OLED came with.
If/when my Sun Vision stops working, I'm going to be so sad if I can't get another one.
Rechargeable projector, which I charge during the day, plus a few power blocks in case it needs more.
But nothing beats working in the sunshine on an RLCD (as I'm doing while I type this to you). It's just divine. Feels so much closer to nature.
However, black-and-white e-ink is great to use in any well-lit environment and doesn't need direct sunlight. But the lack of color can be fatal for many workflows.
Now, with the MacBook Pro w/ nano-texture display and the Vivid program to increase brightness, I can have a dual-display setup outside using the MBP and iPad. It's an expensive setup if your employer isn't paying for it, but it works very well.
I'm glad to see that the very frustrating vogue for glossy computer screens finally ended after many years; regular, non-reflective screens are far more usable outdoors. (I date it to the wide adoption of IPS, which might be a coincidence.)
If power consumption is not an issue would you recommend it for a real-time information radiator that strives for the paper-like look?
Without the backlight the contrast is lower than a newer e-ink display, such as on a Remarkable, so you need good ambient lighting. It being actually backlit rather than front-lit is nice though.
I’m not sure why, maybe it’s just psychological, but the Daylight panel feels like a screen, whereas an eink panel feels more like a static surface.
Do you use any workaround for that huge limitation? Or just SSH into a proper Linux box?
I wish there were a bigger market and more interest in 'unsexy' transflective RLCDs; at the moment all the RLCD solutions feel very constrained for user-side modding, and just generally overpriced.
Something like the old Pixel Qi 10" Display modules in a bigger form factor would be ideal.
1 - IDE (PyCharm)
2 - Chrome
3 - Outlook
4 - Firefox (Jira, Github)
5 - iTerm (terminal)
6 - Excel (time tracker)
7 - Teams
8 - Slack
9 - ChatGPT app or Obsidian
> I'm not sure what it means to switch between code and tests though?

Code and tests: I just have two editor windows open, code and tests, side by side. If you write tests for a piece of code, it's easier to see what's happening by just glancing to the side instead of switching constantly.
And a trackpad is much harder for me to use than a mouse. Yes, I could carry a mouse too, but then it's more than just a laptop.
If a laptop is enough for you, sure. Everybody can work how they want.
    $ pkg install tur-repo
    $ pkg install code-server
There's even a paid native Android app on the Play Store for it.