I've been making a DSL for writing sheet music specifically for drums as raw text, inspired by ABC Notation (but of course just for drums).
Now writing this I noticed that it's kind of complicated to explain and having a landing page would make my life so much easier.
But the gist of it is, you write notation that looks like this: https://gist.github.com/Luigi123/945af7e5cc8dfbfd186f0a99754... and it renders sheet music in PDF, and also allows you to play the same music as a game (DrumMania / DTXMania style).
Now the language / compiler itself has been working quite well and I've been dogfooding it for like six months now. The next thing is an IDE-style editor where you can import a song and write the notation following it. Making THAT has been quite the journey. Here's a screenshot for good measure: https://i.imgur.com/EmlqlrM.png
is this intended for drummers, or electronic music composers?
But the main use case I'm going for is my own: making sheet music for drum practice.
Are you aiming for more extensions to The New Breed than just Syncopation, ones you could auto-generate for fun practice and things you wouldn't think to play?
Started with a niche and launched it: VersionAlert for Unity (https://versionalert.com/unity)
Working on the bigger product still. Existing solutions I've found in this space seemed lacking. On my website, I want people to quickly find the software they want to be kept up to date about (with a smart search bar that does the heavy lifting for them) and easily sign up for notifications for new versions. Hope to make a Show HN for it soon!
Since the initial MVP, it's done close to 100k orders and I've added new functionality like:
- Intelligent order batching & route optimization that can interleave tasks across orders in such a way that they still have the best chance possible of completion within their delivery windows
- Further refined the mobile tracking logic in our driver app to improve the quality/frequency of position updates while continuing to be as efficient as possible on battery
- Numerous backend/DB optimizations such that average response times are in the tens of ms at the current volumes it's handling.
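For a rough sense of the feasibility question behind interleaving tasks across orders, here's a toy earliest-deadline-first check (purely illustrative; `windows_met` is a hypothetical name, and a real batcher also accounts for travel time and capacity):

```python
# Toy sketch: order tasks by delivery-window deadline (EDF) and verify
# every task still finishes in time under this interleaving.
def windows_met(tasks, start=0.0):
    """tasks: list of (duration, deadline) tuples, in the same time units."""
    t = start
    for duration, deadline in sorted(tasks, key=lambda x: x[1]):
        t += duration
        if t > deadline:
            return False  # this interleaving misses a delivery window
    return True

ok = windows_met([(10, 30), (5, 12), (8, 40)])  # True: all windows met
```

EDF is a useful baseline because if *any* ordering meets all deadlines on a single resource, the EDF ordering does too.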
It's not open source but if you have an interesting use case and are curious about it, feel free to reach out.
We integrate with macOS spaces to switch out a project-specific dock on each space, containing only the resources you need for that project. We made it possible to add granular resources instead of full apps to the dock (think specific slack channels instead of the whole slack app), to keep the dock hyper focused on what you need.
We built this to stay focused while working on the computer, and we thought that the native interface mixed all our projects together, causing us to get distracted.
Looking for beta testers! Free download from https://drawers.computer
Does it have project context within apps (like default folders and settings)?
Would love to hear what you think we should add next!
About a year ago, I engine-swapped my Nissan D21 hardbody from the Z24 petrol to a TD27T turbo diesel and also installed a whole bunch of accessories, like spotlights, a winch, and an air compressor. But being lazy, I didn’t write down any of the wiring changes I made while doing all of this. So fast forward a year, and now I can’t remember how all the wiring works.
My current project car is a Jeep Cherokee FSJ, and for it, I want to build a completely new loom from the ground up. So to try and avoid making the same mistake I made with the Nissan, I Googled “create automotive wiring diagram”, but all the results were for complex enterprise grade solutions charging $200/month. That’s why I created X/D Loom as a project car guys' tool for creating wiring loom diagrams. It allows you to drag different electrical components onto a canvas, connect them with wires, and export them to a PDF or PNG.
Forbes just wrote an article about it which was a fun surprise! [1]
It recently turned 6 months old which is wild to me. My wife and I have made a new puzzle every day for half a year! I wrote a blog post about this [2]
I recently released user logins. That went well and a lot of people are using them. I also let you filter the backlog by completed puzzles based on player feedback.
This week I’m going to start releasing player submitted puzzles and release my puzzle building tools. You can watch a video for a sneak peek of those tools. [3]
1. https://www.forbes.com/sites/barrycollins/2026/05/02/bored-o...
2. https://paulmakeswebsites.com/writing/six-months-of-tiled-wo...
I have, however, rejected making a user login. I recognise you're putting in time and energy to make something I'm just taking without payment, and it's your right to try to leverage it into something more - I wish you all the best in doing so - but asking for a user login as a gate to a feature you clearly don't need a user login for is enshittification.
While you're here if I could make a small suggestion - the wording of the 'type of' questions was confusing to me until I got used to it; 'stop' is not really a type of 'watch' for example, so maybe you could find a different way to phrase those? Maybe there isn't a neater way to encapsulate the idea of 'is a prefix or suffix to', I don't know, but I found it difficult. Anyway kudos to you and your wife, it's a great game!
I saw someone on here recently say they like to do the puzzle without looking at the clues, and I've started doing that on and off too, it changes the game in an interesting way.
Great feedback on the “type of” clues. I’ll need to noodle on that and see if there’s a clearer way to express it. Maybe I should just be doing blanks… e.g. for “sun” it could be “___ dress, ___day, or ___ flower”
If you enjoy it there’s a new puzzle every day and a backlog of over 200 puzzles free to play ;)
Being a weightlifter for 20+ years now, I'm working on a barbell speed and path tracking sensor based on newer IMU hardware technologies, which makes it both more precise and cheaper than camera- or actuator-based systems. Ultimately it helps you lift and train safer and better.
It's an intersection of industrial design, hardware, firmware, and software (and some sport science, of course). This intersection is not yet dominated by LLMs so it's a breath of fresh air.
In an early prototype stage as in "strap a Raspberry Pi to a bar", but it looks promising and I'm happy to move forward, also using connections from my previous 12+ years in China.
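As a rough illustration of the core signal processing (not the author's actual firmware), bar speed can be estimated by integrating vertical acceleration from the IMU, assuming gravity has already been removed and drift is handled elsewhere:

```python
# Hypothetical sketch: integrate accelerometer samples (gravity removed)
# into velocity using the trapezoidal rule.
def bar_velocity(samples, dt):
    """samples: vertical acceleration in m/s^2; dt: sample period in s.
    Returns per-sample velocity estimates in m/s."""
    velocities = [0.0]
    for i in range(1, len(samples)):
        # trapezoidal integration: average adjacent accelerations
        velocities.append(velocities[-1] + 0.5 * (samples[i - 1] + samples[i]) * dt)
    return velocities

# One second of constant 1 m/s^2 at 100 Hz reaches ~1 m/s
v = bar_velocity([1.0] * 101, 0.01)
```

In practice this is the easy part; the hard part is gravity removal, sensor fusion, and drift correction, which is presumably where the newer IMU hardware earns its keep.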
Velocity on the other hand is a great metric to track and is used as a proxy for RPE. Mike Tuchscherer was the first one to systematize it for powerlifting a while back, if you've been lifting for 20 years you're probably aware of the name.
For more complicated lifts like bench press (J-shaped) or snatch (S-shaped), for example, I would rather set a "golden sample" path with a coach and compare to that.
It's unlikely to be the sole metric, especially given the inverse kinematics of different body types (long/short femur, etc), but together with bar speed, over time, it can provide a lot of good feedback.
No offense, but this post does come across as you only having a surface level understanding of the field. Especially surrounding injury/pain perception, I would be more careful of what you assume is true, there's far more nuance.
I wonder if it would make sense to consider it as a data problem: capture a bunch of high-fidelity inverse-kinematics data for various kinds of bad form/dangerous lifting along with the IMU data, and then work from there. There could be some interesting and unexpected features that are easier to detect than straying from straight-line paths within some tolerance.
It is a language that is embeddable in other programming languages, with a type system similar to TypeScript and a runtime similar to Go.
People use it currently for structured outputs with llms but soon we will support orchestration and more.
We are letting some users have an early access preview! Let me know if you are interested in hacking with it!
I know some Rust and was going about it with clap, but no one I know cares about Rust, so I've switched to Go with spf13's Cobra CLI.
Harness is pretty cool, but I'm still quite a noob gopher, so I'm taking the chance to learn the ins and outs of Go...
No AI touches my code lol, else I would learn jack shit
It's a short chain-reaction game in which you explode balls bouncing in the screen, and need to build up to target scores. You build bigger and bigger combos as the game progresses.
It was a blast to work on it, starting with a small toy and just adding features that "felt right" until I had a game that was fun to play. It was quite hard to find a balance though, so a lot of numbers are arbitrary - but I enjoy seeing people breaking the game in new ways and finding new builds.
These days I've been working on patching reported bugs and sharing the game with people. Now after the latest patch, I feel like I'm done, but I feel like going back at it and adding an idle mode. And maybe simplify the codebase so I can test and iterate better, and then add many more ball types...
I know that any good LLM could replicate this pretty quickly, but I made this myself and I'm still feeling proud of the accomplishment :)
https://x.com/paulnovacovici/status/2041722840190480581?s=46...
OP: beautiful work with your surf projector!
Now I'm working on expanding the work into more parameters and improving performance. I just finished an extremely harsh test of a Nemotron-flavored RVW that consisted of stretches of a random assortment of domains interspersed with long runs of single domains. Across all of it the model didn't forget (and actually improved on some of the more challenging domains). PPL on SmolTalk is still in the ~18 range, which I'd like to get lower, but this is all with only 4B params.
Currently, I'm training a Llama 3.2-flavored RVW with only about 2B params to see how that turns out. Depending on results of that, I may take it to Gemma 4 next.
The general idea is to take pictures of birds and mountains and use a bunch of colour theory from Minecraft: first mean-shift the image down to a lower colour resolution, then match that to DMC threads.
But then I also want to use tools like the Axiom mod to fill in gradients, and to do hue-shift/temperature changes to represent shadows, like how bdouble0100 uses purples as a shaded green rather than a darker green.
I've also been using it to see how the Claude Code for web setup works, and it feels really poor compared to the CLI.
The main problem I think I need to pull local and write my own code for is the colour sampling from the OKLab space. When I try to create gradients from colours already in the list, I've got a visualization of the line it's aiming to follow, but it's picking the next colour and placing it out of order instead of projecting onto the line.
Likely my biggest issue is that Claude and the like are still bad at thinking in more than two dimensions, but I think my vocabulary is also subpar for giving feedback in either clear linear-algebra or colour-theory terms.
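For the out-of-order gradient issue, one standard fix is to sort candidates by their scalar projection onto the line between the two endpoint colours. A minimal sketch (assuming colours are already 3-D OKLab tuples; `order_along_gradient` is a hypothetical helper, not from any library):

```python
# Hypothetical sketch: order candidate colours along the segment from
# `start` to `end` by projecting each onto that line. Works in any 3-D
# colour space, e.g. OKLab coordinates you've already converted to.
def order_along_gradient(start, end, candidates):
    d = [e - s for s, e in zip(start, end)]   # direction vector of the line
    denom = sum(c * c for c in d) or 1.0      # squared length (guard zero)
    def t(p):
        # scalar projection of (p - start) onto the line; 0..1 inside segment
        return sum((pi - si) * di for pi, si, di in zip(p, start, d)) / denom
    return sorted(candidates, key=t)

# Candidates near a ramp come back ordered from `start` toward `end`
ordered = order_along_gradient((0, 0, 0), (1, 0, 0),
                               [(0.9, 0.1, 0), (0.2, 0, 0.1), (0.5, 0.05, 0)])
```

Sorting by the projection parameter, rather than by distance to the last picked colour, is exactly the "project to the line" behaviour described above.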
The next idea, once that's done, is to make a mod that turns a survival game into a roguelike, in the style of the Hades 2 challenge runs, so I can play a session of the game in a certain biome without having to do all the grind first to get there on a new character.
I always wanted to build a real-life puzzle game, which is app/mobile assisted. Had yet another eureka moment, and built a usable prototype (backend plus iOS app). Good feedback from a small circle.
For a while I was aware of someone (I knew by sight) who worked in the same sort of subject matter (but a non-tech). I approached her, we had a coffee, I pitched the idea and how she could bring it to life, as I made the tech side. She jumped on board.
We're two and a half weeks in, have gone full speed and are making something great (for our audience). My future co-founder is amazing, great insights, opinions, drive. We're potentially launching in a couple of weeks, a free/MVP version of a puzzle game.
I've been through many iterations of trying to get something off the ground. Tried tech co-founders, and the last years of going solo (very hard after you've done the coding). But this now feels right. A puzzle app/game for everyday people to have some fun. And a future co-founder whose life is outside tech, who brings a sort of fun energy beyond "let's make loads of money" or "isn't this framework/AI cool".
Balance is good. Contact with reality is good too :)
Let's say you are arriving in Paris: it will send you advice on how to get to the city from the airport. Big soccer game in an hour? It will send you advice on preparing for it.
You don't need to ask; it will give it to you before/when you need it.
Now working on the sandboxing and scheduling of the advice. Releasing it this week if anyone wants to give it a shot (it will be paid only).
iCloud Photos is fully baked along with implementing their completely undocumented SyncToken. I’m doing some QoL work in the next few weeks, tightening up some early architecture decisions, and then adding more providers (Immich, NextCloud, Google Takeout… else TBD).
Since last time I posted this, two other people contributed and I’m almost at 100 stars! That’s some dopamine.
Device based strength tracking is still so weird to me.
I think this is a perfect example... somewhere out there a genius and a grug are happily exercising together for the simple joy of doing so and feeling good in their bodies, and nearby is a midwit with the GDP of a small village worth of wearable electronics wondering where the joy has gone as he laments the 0.1% of VO2MAX he's dropped since his last gadget-run.
It's a durable orchestration system for AI code generation. It solves the problem of not being able to trust LLMs to complete long-running (and high-quality) implementations without babysitting them and monitoring the process, which I think is the most exhausting part of coding with AI.
You start with a spec or programmatic task list and the engine runs the whole workflow: implementation, verification, review, fixes, and finalization.
It treats agentic coding like a durable CI-style process, with state, retries, reviewer feedback, commits, and auditability built in. It's externally orchestrated, meaning it's not the agent running the loop, it's simply agents being used as tools and spawned in the loop as needed without awareness of the loop itself.
It's going to be open sourced soon and it's not meant to replace your IDE or Agentic Harness of choice. You keep using codex/claude code/open code/cursor/pi whatever you want and simply delegate the actual implementation to the engine, through MCP/CLI and other integration points.
It supports any LLM provider so you can have GPT 5.5 implementing and a mix of Opus 4.7 / Deepseek v4 Pro / GPT 5.5 reviewing at every phase for example.
Sign up on the website or follow us on https://x.com/enginedotbuild or me personally on https://x.com/aljosa , desperately need more followers :D
These embed a remote browser in an iframe to give you "embed anything" browser-view custom elements. The demos focus on retro desktops to emphasize the browser, since that common web trope, the retro desktop, can never actually ship a real browser without something like bbx.
https://browserbox.io https://github.com/BrowserBox/BrowserBox
a performance-first TypeScript checker written in Rust. Started 5 months ago and it's been mostly AI-written code. 99.8% tsc conformance test pass rate today. Single file benchmarks are 3–5x faster than tsgo.
It's on Amazon in both Kindle and paperback formats.
https://www.amazon.com.au/Code-Design-software-projects-deve...
(Desktop Strongly recommended) https://dahlend.github.io/ketev/
It’s a hobby project in a very early state where it technically works but it’s missing several things I think it needs before I’d use it for anything serious. As of right now it isn’t even complete enough to dogfood a minimal container for itself without an intermediate base image because it can’t target a platform compatible with the distroless uv container image.
It's a little web application that allows for the ranking of all kinds of abstract entities. Think of the merging of Goodreads for books, Vivino for wine, Letterboxd for film, etc. This will allow you to instead rank whatever you want across a variety of different categories in a single place.
Using your rankings across all these different fields, you can analyze what you like, and in the future I'd like to add a little personal (not ad-driven) recommendation engine to help you find new stuff based on your actual interests across lots of different categories.
From a technical point of view, it's been a great learning opportunity in fully hosting a complete stack using an opinionated but cross-platform orchestrator, allowing me to host this anywhere (bare-metal VPS, homebrew system, cloud provider) in a flash.
It supports voice cloning, dubbing, transcription, and local/self-hosted workflows with Docker + desktop UI support.
Using open-source models like Whisper, Qwen, OmniVoice and more.
https://github.com/debpalash/OmniVoice-Studio
Thanks for checking it out
Just posted a first early demo and sample orchestrator system prompt yesterday: https://x.com/Westoncb/status/2053429329233895857
You initialize the system with an objective and a number of rounds to run for, and it loads the current config (orchestrator + specialist prompts and LLM configs) and begins working on it. You can manually step one round at a time or just let it run.
Rather than accumulating a single long work log/context, at each round specialists apply patches to a number of named 'artifacts' with different roles (e.g. uncertainties, dead ends, findings), which are injected into prompts during subsequent rounds.
The engine is written in Rust and there's a web UI (and CLI). You can use the built-in config editor to define specialists (and their prompts), the artifact set, orchestrator prompting, etc.
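The round/artifact mechanism can be sketched in miniature like this (a Python stand-in for illustration only; the real engine is in Rust and the specialists are LLM calls, not functions):

```python
# Toy sketch of the round loop: named artifacts are injected into each
# specialist's context, and the patches they return update the artifacts
# visible in subsequent rounds.
def run_rounds(objective, specialists, rounds):
    artifacts = {"findings": "", "uncertainties": "", "dead_ends": ""}
    for _ in range(rounds):
        for specialist in specialists:
            # a real system renders a prompt from objective + artifacts here
            patches = specialist(objective, dict(artifacts))
            for name, text in patches.items():
                artifacts[name] += text  # apply patch (append, for simplicity)
    return artifacts

# Toy specialist that records one finding per round
noter = lambda obj, arts: {"findings": f"looked at {obj}; "}
result = run_rounds("topic", [noter], rounds=3)
```

The key property is that state lives in the named artifacts rather than in one ever-growing transcript, so each round's context stays bounded.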
I got to the MVP state which was useful for my personal use case in about a month. I took it further than that as a learning exercise and as a means to share it with others. Some features that came later are live cursors (like Figma), elevation chart and grade overlay, and QR-code enabled collaboration links to make in-person sharing simple.
Check it out! https://plotalong.app
Figuring out the exact UI/UX I wanted was the hardest part. I did the branding myself: hand-drawn on paper, traced in Procreate, and vectorized in Sketch. Fast iterations and a good test suite made it possible to try lots of different approaches and refine the one I liked the most. There are roughly 4000 unit tests and over 300 e2e tests that run on multiple environments with fully automated CI/CD.
I’m using Mapbox for the frontend and the whole app is basically just a monolithic Cloudflare Worker. Claude pretty much implemented the entire thing. I got a lot of mileage out of self hosting a Gitea project and recording all my planning sessions as Milestones and Issues. Claude has his own account without admin privileges. The process of managing a team of agents to build this practically autonomously was a bit jaw dropping and eye opening to be honest.
I would love to hear from other pleasure & sport drivers about the features they use or want the most in a routing app. I have an Android app in Play Store review, if you’d like to be an early access tester shoot me an email at my handle @plotalong.app
The idea is everyone opens the same route for coordinating and there’s just one source of truth for the group. And then when you’re all about to hit the road, everyone can use the nav app they’re already familiar with (or that’s built into their vehicle)
I will tackle the navigation aspect at some point if I do keep up on feature dev, though!
I quit Figma about 4 months ago to start working on this, and the gpt-image-2 drop really legitimized the bet. I recently released Brands for diffui, which lets you establish a design system and consistently generate with it. I made a Brand out of the recent UFO files release, which allows for some really fun designs:
https://diffui.ai/brand/2ff1b00a-d698-43ea-a42e-7c4a2e670c04 (no account required to generate with this if you want to try)
It works on macOS, built with Swift and Metal. My goal is to make a super fast, free focus stacking program. I provided a notarized macOS DMG for the initial release, but if built yourself, it will run on an M4/M5-series iPad Pro as well.
The core ability I wanted was to support RAW files as inputs, with DNG files as outputs. This is done using either LibRaw, or Adobe DNG Converter (runtime options).
I have been really into macro photography the last couple years, and have been slowly working on trying to build my own program to handle the focus stacking.
I've found it super useful in my personal life, and it's pretty much my #1 app.
You play by setting rules onto a small grid of numbers to maximise your score.
My focus the past few weeks has been on refining the difficulty by experimenting with different rule types, and improving the UI.
I'm pretty happy with the look and feel now but feedback is always welcome, and I'm especially keen to hear what you think of the level of difficulty of the puzzles. It's a tricky balance to introduce variety without adding complexity.
There's a (very) small contingent of daily players now which is really motivating.
Your comments very welcome.
- NookJS: a Javascript/Typescript interpreter and sandbox written in Typescript (https://nookjs.dev)
- Litz: a thin React meta framework that uses RSC as purely a server transport, allowing for more flexible client/server architectures (https://litzjs.dev)
- Nativite: a Vite plugin for building for native platforms using web technologies, with a custom plugin/platform support (https://github.com/samlaycock/nativite)
- superformdata: superjson but for FormData/URLSearchParams (https://github.com/samlaycock/superformdata)
- NoSQL ODM: ODM for various noSQL (and “unstructured” SQL) data stores, supporting both lazy and active data migration strategies (https://github.com/samlaycock/nosql-odm)
I wrote a blog post about my process: https://sxp.studio/blog/subjective-building-a-native-vfx-edi...
...and you can download the app here if you're curious (the app is free!): https://subjectivedesigner.com
Next project is going to be a pivot of that project into something related to creative coding and agentic :-)
I'm also working on launching https://watch.ly (network/fs sandbox with human in the loop for ai agents), mostly waiting for the entitlements from apple at this point...
oh and I launched https://dirtforever.net recently to keep Clubs going for Dirt Rally 2 without the EA servers. Learned about the egonet protocol and made a server.
And when I say darkest recesses, I'm not referring to "0.1 + 0.2 != 0.3" (which is fairly well-known) but things like "so when you turn on denormal flushing, how exactly are you defining it because there's at least three different definitions..." Or also "does my emulator actually emulate floating-point behavior correctly, or is it delegating to the current hardware which might have a slightly different definition?"
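A quick Python illustration of both edges mentioned above, the well-known one and the subnormal one:

```python
import sys

# The well-known case: binary rounding of decimal fractions
assert 0.1 + 0.2 != 0.3

# Denormals (subnormals): values below the smallest *normal* double
smallest_normal = sys.float_info.min            # ~2.2250738585072014e-308
smallest_subnormal = smallest_normal * 2**-52   # 5e-324, still nonzero
assert 0 < smallest_subnormal < smallest_normal

# A flush-to-zero mode would turn values in that subnormal range into 0.0
# outright; with IEEE gradual underflow (the default in CPython on typical
# hardware), you only hit zero when rounding below the smallest subnormal:
assert smallest_subnormal / 2 == 0.0  # ties-to-even rounds 2**-1075 to 0
```

Which of the three-plus flush definitions applies (inputs flushed, outputs flushed, or both, i.e. DAZ vs. FTZ on x86) is exactly the kind of question the comment is pointing at.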
I'm actually looking for beta users! GetSetReply is a SaaS I've been building. It does two things for small businesses:
1. It helps you get more reviews by sending automated requests for reviews to your customers over SMS and/or email after they purchase from you (PoS Integrated / Manual Sending)
2. The second is helping you reply to the reviews you already have with AI-generated drafts in your brand's voice that you can send to Google/Yelp/TripAdvisor.
I'm very grateful to anyone who is willing to test or provide feedback. If you create an account (it's free with no credit card or integrations required), I'll reach out! Or you can email me via my email in my profile.
https://kintoun.ai - Document translator that preserves formatting and layouts
https://ricatutor.com - AI language tutor for YouTube
* Tab Wrangler for Chrome: https://chromewebstore.google.com/detail/tab-wrangler/egnjhc...
Continuing to work on Tab Wrangler, an extension for both Chrome and Firefox that has been available and open source for 10+ years. It auto-closes tabs when they have not been active for a configurable amount of time, similar to the feature built into Mobile Safari but more configurable.
I have been maintaining it and in the past few months added features that had been requested for a long time.
Been working on it on & off for a couple years, usually taking breaks between refactoring stupid decisions.
https://klados.bio/ Prod site is pretty behind dev branch, basically abandoned normal CI / repo hygiene for the moment
Been using that to power a Mac mini alternative I’ve been making https://jperla.com/blog/quill-one
Intended for an audience of one so still a bit rough around the edges, but the intended audience said “excellent” and is actually using it.
Mostly AI-built. Source code is here:
Lets say you have a complex industrial plant, or datacenter you want to upgrade.
You scan it with lidar and get a pointcloud and 360 panorama images. This gives you a large dataset, but what you really want is a floorplan, a lite CAD plan showing the racks, cable trays etc.
You take the scan, slice the pointcloud, and make an ortho image; it really looks like an x-ray of a building from the top down.
Then someone has to manually trace that in CAD to make a useful 3D model they can use for designing the upgrade.
So I'm automating the boring manual part: turning the x-ray plan pixels into vector polylines, using machine learning.
One of our clients scanned their datacenter, and we generated a floorplan that shows all the rack box positions, cable trays, pipes etc.
Other examples: drawing the weld lines of patches in steel storage tanks, drawing in all the steel girder beams in a scan of an old railway bridge, or the windows, doors, and ceiling pipes of a commercial real estate refurb.
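The slice-and-rasterize step (the part before any ML) can be sketched like this; `slice_to_ortho` and its parameters are illustrative, not the actual pipeline:

```python
# Hypothetical sketch: keep points in a height band, then bin them into a
# top-down occupancy grid -- the "x-ray" image that gets traced or fed to
# a model. Real pipelines rasterize intensity/density, not raw counts.
def slice_to_ortho(points, z_min, z_max, cell=0.05, width=100, height=100):
    """points: iterable of (x, y, z) in metres; returns a 2-D count grid."""
    grid = [[0] * width for _ in range(height)]
    for x, y, z in points:
        if z_min <= z <= z_max:          # slice: keep only this height band
            col, row = int(x / cell), int(y / cell)
            if 0 <= row < height and 0 <= col < width:
                grid[row][col] += 1      # denser cells render darker
    return grid

grid = slice_to_ortho([(0.1, 0.1, 1.0), (0.1, 0.12, 1.1), (3.0, 3.0, 9.0)],
                      z_min=0.5, z_max=2.0)
```

The ML part, turning those raster pixels into vector polylines, is the genuinely hard bit being automated here.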
gord at quato.xyz
As part of this work, we're looking at running our custom machine learning kernel on multi-core x86 CPUs.
Free tier is enough for most users, paid tier just exists to gate the stuff that is expensive to run like SMS alerts.
Check it out at [Larm](https://larm.dev), and try the [response time checker](https://larm.dev/tools/response-time) to see the Larm probe infrastructure in action.
Big thing I made recently is moving it from SvelteKit to Hono + Inertia + Vue.
I like SvelteKit, but I was struggling with stability in active development periods, and writing proper tests was very hard due to mocking all the magic, especially outside trivial testing tools.
Now the whole app is straightforward Hono MVC with Vue powered UI. Logic is easy to test, and all UI states exposed in Storybook.
I wrote a custom adapter that makes Inertia run on Hono, and coincidentally same thing was released by Hono author itself as official module, which is great sign for adoption!
So, try Inertia: it's the best of both worlds. You write the MVC backend however you like and use modern JS frameworks for templates.
It's an SDLC workflow harness for agents. Instead of using skills to encode my typical workflows (e.g., create PRD, then create plan using TDD, then dispatch subagents, etc) I've built a concurrent event-sourced process manager to handle it.
It's an iOS & Android app that applies various generative art effects to your photos, letting you turn your photos into creative animated works of art. It's fully offline, no AI, no subscriptions, no ads, etc.
I'm really proud of it and if you've been in the generative art space for a while you'll instantly recognise many of the techniques I use (circle packing, line walkers, mosaic grid patterns, marching squares, voronoi tessellation, glitch art, string art, perlin flow fields, etc.) pretty much directly inspired by various Coding Train videos.
I'm building a tool that allows you to determine the health of an electric transformer from only your phone. It tells you:
- the loading
- the health of the windings and core
- and whether the phases are unbalanced
I used to be a submariner, so my professional background is in power plants and sonar analysis, and I'm getting to combine the two in this. Acoustic diagnosis of electrical issues is FASCINATING, and it feels like there hasn't been a lot of research into it, so I have been slowly chasing down various acoustic patterns I find and trying to derive them from first principles of physics.
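Purely as an illustration of probing hum strength acoustically (a single-bin DFT at a target frequency; `tone_magnitude` is a hypothetical helper, not the author's method), and assuming mono samples from the phone mic:

```python
import cmath, math

# Hypothetical sketch: estimate the strength of a tone at a given frequency
# by correlating against a complex exponential (a single DFT bin). Core hum
# from magnetostriction sits near twice the mains frequency, e.g. 120 Hz on
# a 60 Hz grid, with harmonics carrying condition information.
def tone_magnitude(samples, freq, sample_rate):
    n = len(samples)
    acc = sum(s * cmath.exp(-2j * math.pi * freq * i / sample_rate)
              for i, s in enumerate(samples))
    return abs(acc) * 2 / n   # amplitude of that frequency component

sr = 8000
signal = [math.sin(2 * math.pi * 120 * i / sr) for i in range(sr)]  # pure 120 Hz
mag = tone_magnitude(signal, 120, sr)   # ~1.0 for a unit-amplitude tone
```

Comparing the magnitudes of the 2x-line-frequency fundamental against its harmonics over time is one plausible starting point for trending winding/core condition.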
I'm making an iPhone app for it, and Xcode has been truly awful: non-deterministic, crashing all the time, and error messages that tell me absolutely nothing. I would like to use xtool, but it doesn't have the preview, which I need for debugging.
The app has a lot of UX details that I've really enjoyed working on. I wrote up some notes about it here: https://www.freshcardsapp.com/3/
Separately, also working on a Zettelkasten notes app that pushes you to make small, atomic notes that you can organize in "collections" to provide structure beyond just hyperlinking in the note text: https://understory.ussherpress.com/ This has been a lot of fun iterating on. I started with a Miller Columns UI, like Finder, to visualize the graph of connections between notes, but I found that it was too overwhelming to use, so I scaled back and went with a more Notational Velocity-like quick search bar with note addressing. The app UI mimics a browser because I found that it works really well for something like this. I need to polish it a bit more and want to find people who will give it a beta test to help me iterate on the ideas some more.
In short, it unifies the configuration of different desktop components as policies (dconf, Kconfig, polkit, Chrome, Firefox, etc.). It's LGPL.
You can check my slides for the upcoming Tuxconf conference this Friday: https://getbor.dev/publications/tuxcon2026/
Cheers! Blago :)
I haven't really forgiven myself for dropping my PhD; I think it was the right decision at the time, but I also kind of wish I had pushed through it. I'm going to see if I can at least get a few papers published.
I've also had some fun getting Claude to create LSP servers for different languages, which it has been pretty good at, and that's nice; having good integration with Vim makes a language a lot more fun for me.
Oh, I also presented at LinuxFest two weeks ago: https://youtu.be/HmcVJWyOwJQ?t=6623
I'm working on <https://untether.watch>. Trying to shift 20-30 micro phone-interactions per day to the wrist, to ultimately reduce phone use. Dumbphones are too extreme: you need a smartphone for certain day-to-day activities (banking, etc.).
The watch is a great form factor: it's got a crap screen (MIP), the ergonomics are awkward (rotate and look down), and it has limited capabilities. But that's the point! Do essential quick actions and leave the phone out of sight.
Requires Android companion app to do the heavy lifting. Use the (head)phone mic and STT to reply to any android notification and make notes. More features to come.
Garmin's SDK is seriously challenging. APIs are often broken across firmwares, limited developer tools and testing is tough.
MedAngle is literally everything one could need, personalized to their curriculum across 4-6 years of medical school. Quizzes, videos, notes, flashcards, reminders, scheduling, performance, search, and more.
Our Super App comprises MedGPT + MedAgent + Spaci (futuristic spaced repetition), which serve as layers over our massive collection of features such as the Smart Suite, Learning Library, Clinical Corner, Tested Tools, and more.
100k+ users, 10s of billions of seconds spent studying smarter, invite only. Bootstrapped, growing nicely. I lead a team of top medical students and doctors.
I was tired of copying/pasting between agents, so I gave them identities, and tools to talk to each other and share tasks. I've found it so useful that I've left my job as the CTO of a German startup to focus on this.
The identities are public-key DIDs, with DNS as the source of truth for both the identities and team membership. I also run a public registry at https://awid.ai (also OSS).
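I don't know awid's exact record convention, so purely as an illustration of the DNS-as-source-of-truth idea: a DNS-anchored agent identity is often a TXT record binding a name to a DID under the operator's control (every name and value below is made up):

```
; Hypothetical record shape -- the zone owner vouches for the agent's
; DID, so verifiers can resolve identity through ordinary DNS lookups.
_did.agent1.example.com.  IN  TXT  "did=did:key:z6MkExampleOnlyNotReal"
```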
Docker is...quite slow with large images. I've built a registry+pull client+buildkit builder to make it better. It splits apart layers, allowing for files to be shared between related images. In a robotics context, it can make pulls 10x faster. And in a cloud context, the format allows for pulling an image in 15 or 20 seconds instead of 60, without having to do a FUSE w/lazy pulling. Builds are faster, I store 7x less data due to better deduplication, I can run security scans faster due to not having to unpack tarball layers, etc, etc. I want to be the default registry for all ML related work, in the future.
I wanted to make it easier to quickly see/study trending articles on Wikipedia because they tend to make good topics to know before going to trivia night.
I've had the domain for a while, but just made the app recently on a whim.
I use Wikimedia's API to get the trending articles, curate them a bit, add some annotations to provide context, then push to deploy the static site.
It’s a self-hosted email marketing/newsletter app. The basic idea is: own your subscriber database, run the app on your own server, and send through SES/Postmark/Mailgun/SMTP instead of being locked into another SaaS.
Not trying to be “Mailchimp but cheaper”. It’s more for technical founders, agencies, and consultants who want a boring, controllable email tool they can deploy for themselves or clients.
I’ve kept the changelog public because I wanted the work to be visible: https://sendbroadcast.net/changelog
My buyers are typically people who want to own their data and are in regions that have strict data privacy regulation/laws.
Interesting fact: this was my last real project where v1 was built by hand, before AI coding became the norm in the software industry.
The vision is for everyone to have an executive assistant that manages their email. It's built for people who spend hours in their inbox every week.
It has automatic prioritization, split inboxes, snippets, bundles, automatic follow-up reminders, and an AI agent that can do stuff for you -- without deleting your emails.
If you've read this far, I'd encourage you to give it a try and let me know what you think!
Plug-in solar became legal here in the UK
Still sussing it out but started shipping something
Finding the pitch direction of the roof is kinda hard
Uses data from the house to try and get a rating
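To make the rating idea concrete, here's a toy sketch of the kind of calculation involved (my own illustration, not the actual product logic -- the "ideal" south-facing/35° numbers are rough UK rules of thumb, and a real rating would use irradiance data):

```python
import math

def roof_yield_factor(azimuth_deg: float, pitch_deg: float,
                      optimal_azimuth: float = 180.0,
                      optimal_pitch: float = 35.0) -> float:
    """Very rough relative-yield factor for a UK roof (1.0 = ideal).

    Penalises deviation from a south-facing (180 deg) roof at ~35 deg
    pitch. Purely illustrative.
    """
    az_factor = math.cos(math.radians(azimuth_deg - optimal_azimuth))
    pitch_factor = math.cos(math.radians(pitch_deg - optimal_pitch))
    # Clamp so a north-facing roof scores zero rather than negative.
    return max(0.0, az_factor) * max(0.0, pitch_factor)

print(round(roof_yield_factor(180, 35), 2))  # south-facing, ideal pitch
print(round(roof_yield_factor(90, 35), 2))   # east-facing
```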
It can get you up and running in a few minutes with an installer that can set up a new system or keep an existing system up to date. There's also a command line version that works on Arch and Debian based distros (including WSL 2) and macOS. I use it on my personal devices and a company issued MBP.
I'm not going to lie, I've been using computers for 25 years and this is the happiest I've ever been with using 1 machine for everything (software development, media creation, gaming, etc.).
https://allaboutberlin.com/guides/immigration-office/wait-ti...
I wish I had more time for such projects, but since AI is now capturing most of the traffic, I am losing a lot of my income and I have to make up for it. It's a huge distraction.
- Building a platform where talented people can list the services and skills they're experienced in. Clients can book paid sessions with them directly through the platform, and once a session is booked, they both meet online to discuss, collaborate, or get advice based on expertise.
I finally finished the (monumental) Svelte 4 -> 5 migration that had been gathering dust for the last year. This unlocked a higher performance ceiling for me to polish my animations and UX. Now I'm revamping my onboarding experience and taking another crack at marketing and promoting it. Last year, I was focusing on setting it up as a PWA and integrating Sentry monitoring and Stripe payments. All important stuff, but not what got me excited about the process.
I've been pretty tied up with maintenance and admin work, and haven't gotten a chance to work on the actual game design in a while, so I'm very excited to return to that part of the project soon. I have ideas for new puzzles and modes spilling out of my ears and I feel like with LLMs my prototyping can finally keep up with my brain, now that I have a robust foundation for the game architecture.
No tracking, no analytics, no cloud uploads, no account. MIT licensed. Everything stays on your Mac.
I'm currently planning and designing a plugin system, so others can contribute new functionality without affecting the scope of BetterCapture itself - which should stay as small as possible.
- Deploy containerized apps to your own AWS account with minimal config!
- CLI tool with instant console sessions
- Set up SQL/Redis instantly with Heroku-like add-ons.
- For enterprise: Autoscaling, preview apps, audit trail, release approvals.

Some finished covers (https://saltwatercowboy.github.io/albedo/pages/en-10-05-26.h...). Next up: pixel sorting.
The idea is to have a better experience for navigating livecam streams that are publicly available on YouTube. There are a few livecam aggregators that include maps, but I never felt that any of them were satisfying, as they always require you to open new pages to watch the streams. On World Watcher, you can jump from place to place seamlessly.
You can also filter the streams by type of place or features, for example beaches or cams with audio. And if you don't know where to go, just try out the Explore button.
Fold-up, scissor lift, cross-cantilever 3D printer for open sauce
M.2 FPGA hardware accelerator devboard
All just for fun and open source https://github.com/kaipereira :D
If you want to try it out, we offer some free credits at https://fuguux.com
Any feedback you have would be incredibly helpful! We're considering more kinds of reporting, support for QA testing, better integration with CI/CD, and more.
Note: we don't want to replace real user testing, but rather complement it. With AI user testing, you can get quick feedback on potential usability problems in hours for a fraction of the cost, making it so you can iterate much faster. We advocate doing user tests with real people to understand problems that require domain knowledge or nuance.
Useful to debug local Kafka apps against any cluster, intercepts the traffic, decodes the protocol. You see interesting (and weird) things when you look at the protocol. Still early, though already useful for local debugging when you know what you want.
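To give a flavor of what "decodes the protocol" means, here's a minimal sketch of parsing the fixed part of a Kafka request header (this layout -- int16 api_key, int16 api_version, int32 correlation_id -- is the documented wire format; the tool's actual decoder is of course much more complete):

```python
import struct

# A few well-known Kafka API keys, for readable output.
API_NAMES = {0: "Produce", 1: "Fetch", 3: "Metadata", 18: "ApiVersions"}

def decode_request_header(frame: bytes) -> dict:
    """Decode the fixed 8-byte prefix of a Kafka request header."""
    api_key, api_version, correlation_id = struct.unpack(">hhi", frame[:8])
    return {
        "api": API_NAMES.get(api_key, f"ApiKey({api_key})"),
        "version": api_version,
        "correlation_id": correlation_id,
    }

# A Fetch v11 request with correlation id 42:
frame = struct.pack(">hhi", 1, 11, 42)
print(decode_request_header(frame))
# {'api': 'Fetch', 'version': 11, 'correlation_id': 42}
```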
A self-hosted database IDE with built-in migration, CDC, and DuckDB-powered federated SQL.
Mostly trying to remove the annoying gap between "I can inspect this database" and "I can safely move/sync this data somewhere else".
Current focus: resumable large loads and cleaner initial-load-to-CDC handoff for Postgres/MySQL.
I've been working on something in the vein of an indie game for a little over a year now. It has been a passion project, but I'm starting to come around on showing it to people.
I am a big fan of Telltale style narrative games. I think Baldur's Gate 3 was the biggest revelation of this for me. Taking that branching dialogue and freedom of choice, and tacking it on to a fun combat system was just everything.
When text based GTRPGs started popping up, I found it hard to connect with them stylistically. I found that I needed the multimodal stimulus of visuals and audio. This led me to start building something, and it ended up being somewhat of a cross between a Telltale game, a Visual novel, and a TTRPG.
Orpheus (https://orpheus.gg) is a fully on-the-fly generated tabletop simulator, with graphics, audio (TTS), and the freedom you can usually only find at a real TTRPG table. That means you can play a sci-fi, fantasy, or even a modern setting in your campaign. The assets are made for you as needed. It runs in your browser so nothing to install or tinker with.
Getting the harness right so the AI GM can stay coherent and organized has been the biggest challenge. It took a lot of iterations to get it to a point where it could understand the scenes it was building as the player changed them.
I've built it to be played with either a keyboard or a gamepad so you can play from your couch. You can switch between them as you feel like it. There is a 3D tabletop for combat, full character sheets, dice rolling, lore tracking. I want it to be dense.
Mostly, I’m looking for people who want to try it, break it, and tell me what feels magical, confusing, boring, or broken. My biggest roadblock currently is that asset generation is relatively expensive. I'm currently mulling over whether a playtest would allow for a BYOK setup so people could try playing as much as they'd like, or if I should add turn limits.
You can join the playtest waitlist at https://orpheus.gg/ -- and I just set up a Discord (https://discord.gg/pychWyzf) that I will use for early playtests. (Just me right now! Come hang out!)
I'd love to see a more modern day attempt at something like Bioware's Neverwinter Nights - which was designed so that someone could create a campaign, and then the game would provide the behavior, pathfinding, assets, and everything else with a virtual (or human) DM behind the scenes. You could still tell a human-driven story, but the engine would do a lot of the heavy lifting.
I think a lot of those attempts you mentioned try and brute force the problem or trust the AI too much on what to generate.
A lot of the same problems that AI coding agents run into also apply here. You have to really manage context (avoid sending a novel to the model) and enforce strict rules in the "engine". The hard part is world building that is consistent without railroading the player into specific paths. I have an agent (for lack of a better term) that manages arcs across each tier: world arcs (nations, factions), player character arcs, NPC arcs, individual scene arcs, and location arcs (towns, cities, dungeons, etc). By prompting all of these as tight, individual arcs with flavor and context peppered in as needed, you end up with stuff that is more compelling. It has to be loose enough that you don't railroad the player: declining that NPC's quest might, down the road, change the overall arc for a town in a meaningful way.
I won't pretend that I've perfected anything but I have definitely noticed a spark in its writing and world building that I personally have really enjoyed.
OTOH, that means that the underlying story is that much more important. I think a lot of people mistake coherence for novelty. Biggest offender is puzzles - oh god do LLMs absolutely blow dire wolf chunks at coming up with organic and interesting puzzles.
I have a private vs public flag for assets that I'm considering more unique or sensitive, at the AI GM's discretion. I'm using embeddings from there to try and parse if an asset already exists in the public pool or not, and reuse it if possible. The thinking is that eventually I will have pretty decent asset coverage on most standard campaigns. I can't account for people going way off book though.
I have an asset pipeline that tries to determine player intent and pre-generate assets before they're needed. That way we can attempt to hide the "load screens" like retro games did with elevators. I have a kind of sliding scale for player coherency, and if the player has too many "misses" on the pre-generation pipeline it will increase its requirements for when it starts generating.
I may have wildly over-engineered this but I love it. =)
From dataset harvesting, to training intricacies on CUDA/ROCm, to fun HIP kernels, and full circle to inference testing, building it around consumer hardware (the challenge). I'm using this as a "how it works" deep dive, which teaches me more about the how than endless papers would. It's a MoE, and I'm slowly running a human loop: research, build, correct, research.
You build up a library from your physical books by scanning them in or discover OpenLibrary books to read in app. Then as you mark books in your library as read, it starts building a rotation and recommending books you haven’t read recently. I’ve been using this nightly to track my son’s 1000 books before kindergarten for the last couple of months.
Currently, I’m working to get the app out on Google Play and adding multiple story time attendee support.
It’s a project of the non profit Open Transit Software Foundation that we’re using to fund our other initiatives, like bringing realtime transit information to billions of people around the world.
All of this depends on a bunch of really cool open source projects we’re building, like Maglev, a Golang server that can power realtime transit apps. I wrote up a blog post explaining how to set it up here: https://opentransitsoftwarefoundation.org/2026/04/setting-up...
We’re always looking for volunteers, especially non-engineers. https://ossvolunteers.com/organizations/open-transit-softwar...
Do i understand correctly that the product is a white label app for public transport providers that riders can download to get arrival data?
Do you think people will download an app for each bus/train? Isn't it better to integrate with google maps or equivalent?
1. The Puget Sound region, where a regional transit authority, Sound Transit, currently maintains their own OBA servers on behalf of a dozen individual transit agencies. Sound Transit piggybacks on our official OBA apps which you can find in the Play and App Stores. The official apps also work in 10 other cities across the US. This is the ideal for us—and transit riders, imho, and similar to what you see with apps like Citymapper or Transit.
2. New York City, where MTA runs their own OBA servers that power their own branded app and realtime signage throughout the five boroughs.
3. UC San Diego, where the university is using OBACloud to power real time transit information systems for students on campus.
4. Republic of Cyprus and Malaysia (yes the entire countries), where enterprising individual developers have set up their own OBA servers to power realtime transit information systems for their fellow citizens.
The underlying OBA server provides a rich set of REST APIs that make it much easier to build a public transit app than using raw GTFS and GTFS-RT data: https://developer.onebusaway.org/api/where/methods
We also have SDKs for many major languages so that agencies and independent developers can build their own apps on top of OBA servers without having to fiddle around with the intricacies of our APIs. https://developer.onebusaway.org/api/sdk
~~~
Integration with Google Maps is important, and a "yes and" solution. I think there's a lot of value in having public transit-focused apps, especially ones that don't have advertising or questionable privacy issues.
~~~
edit: I noticed you're in Argentina. The Ministry of Transportation maintains its own white label version of OBA called Cuando Subo. https://www.argentina.gob.ar/sube/cuandosubo
- AI assisted academic progress reports so parents can effortlessly stay on top of kids' middle/high school academics. https://www.gpa.coach
- A family economy app where parents set the rules, kids earn credits for chores and good behavior and kids redeem credits for screen time, money, and other benefits. https://www.kredz.app
- AI first fun mobile media editor your parents could use. https://www.mix.photos/
You should check out my new open source software build tool, https://pcons.org.
Draws from a bunch of sources, MCP-connects to my agents, comes with a browser plugin to invite meeting bots to calls, lets me (and my testers) leave notes on websites which also gets added in.
The goal is to make work as simple as dragging tickets around, while loading as many best practices and as much review clarity into it as possible
I've set a deadline to finally launch tomorrow, but frankly - I don't know how it's gonna go. Feeling proud, yet a bit anxious about it.
https://kodan.dev, if anyone wants to take a peek
So: ac-ng didn't reduce the impact of the DDoS, but it does lead to impact when there is no DDoS. Worst of both worlds.
So I'm working on an apt-cacher that goes to great lengths to keep working when the upstream is down. It checks the repo metadata, keeps a list of your "hot packages", and downloads those before flipping the new metadata live, effectively a snapshot. It won't let you download a package you've never downloaded before during a DDoS, but packages you do download regularly (machine re-installs, apt updates) it will ensure are available in the repo.
I'm calling it apt-cacher-ultra. It is pretty early days, it'll probably be another week before it's ready for a beta. I'm running it in my dev cluster right now, successfully.
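A minimal sketch of that prefetch-then-flip idea (my own illustration, not the actual apt-cacher-ultra code -- the fetcher callbacks are stand-ins):

```python
import os
import shutil
import tempfile

def refresh_snapshot(fetch_metadata, fetch_package, hot_packages,
                     live_dir, cache_dir):
    """Stage new repo metadata, prefetch hot packages, then flip.

    An atomic-ish rename means clients never see metadata that
    references packages the cache can't serve.
    """
    staging = tempfile.mkdtemp(prefix="apt-snap-")
    fetch_metadata(staging)                  # new Release/Packages files
    for pkg in hot_packages:                 # prefetch regularly-used debs
        if not os.path.exists(os.path.join(cache_dir, pkg)):
            fetch_package(pkg, cache_dir)
    backup = live_dir + ".old"
    if os.path.exists(live_dir):
        os.rename(live_dir, backup)
    os.rename(staging, live_dir)             # only now is the snapshot live
    shutil.rmtree(backup, ignore_errors=True)
```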
I'm working on a personal/family travel organizer. It started as a tool to let me and my SO plan a trip together. There's been steady progress over the last couple of years, with a focus on privacy and the ability to self-host. Of course, there is a managed version if one doesn't mind me having access to their data.
Replit for the website (he did the first 80%), Gemini to make the flyers and he'll be walking the neighborhood and talking to neighbors.
There's Truthsorting, a logic puzzle where you have to order logical statements to make them true or false.
Pathword, a puzzle where you lay out letters along a path to spell out 4 words.
Morphology, a clued word ladder written by a different contributor daily.
And a few others!
I've been trying to promote it for a few months but I haven't had a ton of luck, to be honest. The audience hovers around 500 people and growing it beyond that has been pretty challenging.
After AI happened, I built an app (promptfunnels) to scratch my own itch and generate funnels (fancy name for landing pages with a purpose).
Then came the harder part: marketing it. Coming from a tech background, I knew nothing about marketing, so I started reading and came across the $100M Leads book. I realized codifying those principles together with funnels and marketing automation had a real market. My family, friends, and acquaintances became the first customers. A friend joined me as cofounder and we both quit our jobs to do this full time.
As we talked to other startup founders, they kept describing a tangential problem they called GTM. At the core it was the same thing we were solving: marketing for non-marketers. So we pivoted to RevMozi(https://revmozi.com/), which helps non-marketers do both inbound and outbound GTM.
We’re dogfooding the product and coming out of beta next month.
Wish us luck.
Umm where? They are indistinguishable from each other. Not pretty.
Menu bar app that reduces your Claude Code token costs by ~50% so you get 2x more usage out of your plan.
People seem to like it so far :-)
https://dhuan.github.io/mock/latest/examples.html Command line utility that lets you build APIs with just one command.
https://github.com/dhuan/dop JSON/YAML manipulation with AWK style approach.
Just finished the software side using a boring technology and am about to order the materials for the first few locations. Curious to explore photo alignment once real submissions start coming in. Stitching all slightly different angled photos into a smooth animation seems interesting.
https://github.com/KevanMacGee/Repomix-Desktop
It's open source and has no official connection to Repomix. But the developer, yamadashy on Github, knows about it and seemed to like it enough to add it to the Repomix website under the community projects.
I like being able to paste all the code into a browser window and have lengthy discussions with ChatGPT, Gemini and GLM. Doing so in the browser saves tokens over doing it in Cursor or Codex. I like using the Projects feature in ChatGPT in the browser and Notebooks with Gemini because that gives the model context and history on whatever I am working on. It was one part scratching my own itch, one part learning about Python and CustomTkinter.
It's made specifically for when you just want to get the code and paste it, no muss or fuss. It doesn't have support for flags (yet?) like the CLI because again it is built for speed. Besides, when I want flags, I like using the CLI instead to get granular. Repomix Desktop is for "just give me the code."
I'm a self taught coder so I'm very open to feedback.
How the algorithm works: it finds people who liked the same posts as you, and shows you what else they’ve liked recently.
Launched the feed a little over a year ago and it has become the most liked feed.
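A toy version of that co-likes approach, for the curious (my sketch, not the feed's actual code): find users who liked the same posts as you, then surface what else they liked, weighted by how much their taste overlaps with yours.

```python
from collections import Counter

def recommend(likes_by_user: dict, me: str, k: int = 3) -> list:
    """Recommend posts liked by users whose likes overlap with mine."""
    mine = likes_by_user[me]
    scores = Counter()
    for user, theirs in likes_by_user.items():
        if user == me:
            continue
        overlap = len(mine & theirs)
        if overlap:
            for post in theirs - mine:   # only posts I haven't liked
                scores[post] += overlap  # bigger overlap = stronger signal
    return [post for post, _ in scores.most_common(k)]

likes = {
    "me":    {"p1", "p2"},
    "alice": {"p1", "p2", "p3"},  # overlap 2 -> strong signal for p3
    "bob":   {"p2", "p4"},        # overlap 1 -> weaker signal for p4
}
print(recommend(likes, "me"))  # ['p3', 'p4']
```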
Been pushing some new stuff on https://infrabase.ai as well, my AI infrastructure tools directory. Traffic is growing steadily from comparison and alternatives pages. An interesting finding: blog posts rank better but get fewer clicks now because of AI Overviews, while interactive comparison pages still earn clicks. ChatGPT has also started citing the site more as a source. Adding new content and polishing existing parts of it; added a page focusing on EU-based services at https://infrabase.ai/european.
After a few rounds of using it, I already know a few things I didn't before: I suck at right-to-left breaking putts, I baby uphill putts too much, and getting out of bunkers consistently is not good enough if I can't sink the occasional save. So I know what to practice now.
I've always wanted this and have used it to experiment with Gemini's cloud agent Google Jules.
It uses Let's Encrypt by default. We use delegated DNS to handle ACME challenge validation (we run the DNS, you just CNAME to us). This means you don't need to give us DNS credentials or anything. And for HA workloads it's great, because there's a central clearinghouse for certificates - so all the machines in your web farm (or whatever) get the same cert, but you don't run in to rate limits with LE.
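For anyone unfamiliar with the delegated-DNS pattern, the CNAME shape is roughly this (the hostnames below are hypothetical, not the service's actual domain):

```
; You create one CNAME per name; the certificate service then answers
; the ACME DNS-01 TXT challenges in its own zone, so it never needs
; your DNS credentials.
_acme-challenge.www.example.com.  IN  CNAME  www.example.com.acme.certservice.example.
```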
We're recovering Windows Server guys so we made sure our automation works for painful windows workloads like IIS, Exchange etc. too.
We've had enough interest that we're building it out for real. Just left beta last month.
A sample puzzle can be found here: https://sudokupad.app/23x300ggzn
It's been well received by the (very kind!) Sudoku/puzzle communities, so I'm working on throwing a nice interface on it that fits the rules a bit better. I've found about five other examples of others doing a variation of this ruleset before in one way or another, and it's been fun trying to see how hard/deep I can get this puzzle to go.
Use this to doomscroll nba twitter and sports bet, or if you're feeling more highbrow, peruse the NYT and passively gamble on geopolitical events.
Try it out here: https://chromewebstore.google.com/detail/anywager/eebgbiogbb...
The tech surrounding the game is awesome, the game and engine are fully deterministic, discrete (not float based), and bit-packed data structures throughout, powers of 2 everywhere for really fast operations, and logic and rendering are fully decoupled.
I wrote a simulator for the game and can simulate 10,000+ games in around 50 seconds on my MacBook M1 Pro. The purpose of the simulations is Monte Carlo tuning of my enemy AI (not LLM-based - conventional bots etc).
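To illustrate the bit-packed, powers-of-2 style mentioned above (my own toy layout, not the game's actual data structures): a 32-bit cell packing terrain (4 bits), occupant id (12 bits), and HP (16 bits), so reads and writes are just shifts and masks.

```python
# Field widths sum to 32 bits; all masks are powers of two minus one.
def pack(terrain: int, occupant: int, hp: int) -> int:
    """Pack three fields into one 32-bit integer cell."""
    return (terrain << 28) | (occupant << 16) | hp

def unpack(cell: int) -> tuple:
    """Recover (terrain, occupant, hp) with shifts and AND masks."""
    return (cell >> 28) & 0xF, (cell >> 16) & 0xFFF, cell & 0xFFFF

cell = pack(3, 1000, 500)
print(unpack(cell))  # (3, 1000, 500)
```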
Email in profile - would love to connect.
The persistence model makes documents somewhat sharable, but I do find Open Graph previews to be mixed. In Messenger it renders the whole URL, which is quite long due to encoding, and that kills the conversation view.
I figured "I already have a battle-tested solution, I just need to make it modern and spiffy, build a website for it and see if there's any interest -- in the age of Claude Code, this should be fast work!"
Wrong. Taking an internal library and offering it to others -- complete with documentation and modern tooling -- is an immense project, even with the help of AI agents.
Is there a market for a "formula engine in a box"? I don't know. But I also didn't know whether there would be a market for Calcapp either, and that has supported me working full-time for the past seven years. So I'm willing to take another chance.
I’ve been trying to reduce and eliminate my reliance on Big Tech, and the lack of user reviews and ratings was always a big pain point each time I tried to switch away from Google Maps.
I’ve started building a service where users can write reviews and rate “places” (POIs) in the OpenStreetMap database, such as a cafe, a museum, or a shop. It’s a fairly straightforward CRUD app with a bunch of OpenStreetMap-specific features, such as logging in with OpenStreetMap and querying places by their OpenStreetMap metadata.
It’s still in active development but it has good docs, a great API reference (including an OpenAPI spec), a demo app with the entire planet imported and queryable, and an early stage Android SDK.
https://nodes.max-richter.dev https://github.com/jim-fx/nodarium
Also, we're hiring engineers and PMs (the eng position is about to be up). https://openmined.org/careers/#brxe-zgsziy
Since it does it anyway I added dossier pages to it as well https://searchcode.com/repo/github.com/rust-lang/rust Which is useful for humans, and shows what the system is creating.
Best part is that I get to use the tools I have built, so https://github.com/boyter/scc and https://github.com/boyter/cs to improve it which benefits anyone using those tools.
For the past few years, a group of us from Google, Microsoft, GM, IBM, Roblox, Rubrik + more have been working on a design standard for APIs called [AEP](https://www.aep.dev). The goal is twofold: learn from our companies' mistakes around APIs and enable better tooling with less configuration.
We’re at a point where AEP-compliant APIs get a resource-oriented CLI, MCP server, full UI, and Terraform provider for near-zero configuration.
Aepbase has been my way to tie the whole ecosystem together. You run a single binary and define the schema for a resource with one API call. Now, you’ve got a full set of CRUD APIs and support for CLI/TF/MCP/UI. After one API call.
It’s a really cool way to tie together all of the work AEP has been doing.
Love to hear HN’s opinions on all of this. We’re still trying to figure out the best way to sell people on AEP.
Given a distance, an allowable time to reach that distance, a payload to send, and an expected exhaust velocity, how would you calculate the time required to convert energy into antimatter fuel and how much antimatter needed to arrive at the destination (starting from the Moon)?
There are a few side calculations, such as the size of the radiator, estimated footprint of the fusion reactor itself, and how much metamaterial is needed. This is to help figure out timelines for a sci-fi novel, so ballpark answers are completely fine.
The calculations yield what appear to be values around the correct order of magnitude. Would be delighted to have insights, comments, and corrections.
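For reference, a minimal non-relativistic sketch of the kind of estimate involved, under my own assumptions (constant acceleration with a flip-and-burn at the midpoint, perfectly collimated exhaust, and hypothetical efficiency knobs: \(\eta\) for the engine, \(\epsilon\) for antimatter production at power \(P\)):

```latex
% d = distance, t = trip time, v_e = exhaust velocity, m_1 = dry mass
\Delta v \approx \frac{4d}{t}
  \quad\text{(accelerate to midpoint, flip, decelerate)}
m_p = m_1\left(e^{\Delta v / v_e} - 1\right)
  \quad\text{(Tsiolkovsky: propellant mass)}
m_{\mathrm{am}} = \frac{\tfrac{1}{2} m_p v_e^2}{2\,\eta\,c^2}
  \quad\text{(annihilation releases $2c^2$ per kg of antimatter)}
t_{\mathrm{prod}} = \frac{m_{\mathrm{am}} c^2}{\epsilon\,P}
  \quad\text{(time to synthesize the fuel at power $P$)}
```

This ignores relativistic effects and treats the propellant and the antimatter energy source as separate masses, which is fine for order-of-magnitude novel-plotting but not for engineering.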
[0]: https://technokick.com/ (Techno Kick synth)
[1]: https://riviera-demo.surge.sh/ (Reverb effect)
[2]: https://ya3.surge.sh/ (TB-303 synth clone)
Example book here: https://www.amazon.com/dp/B0GYCZJVGX
Employee benefit plan analytics. Had a huge dataset long ago as a consultant to the industry and finally vibecoded up a decent frontend. All public data but if you know the data there is a bunch of analytics you can do. Just about to launch and do some marketing in a few weeks, so saw this and thought I'd throw it in!
Features:
- Control channel for block header announcements, operational mechanisms, and network topology automation
- Separate channels for subtree, subtree grouping, and transaction load
- Transaction load sharding by deterministic multicast group membership based on TXID
- Transaction specialization filtering and retransmission both unicast and multicast, to connect edge networks only interested in a portion of the transaction load for whatever reason
- NACK-based retransmission of missed packets via hash chain gap sequence tracking (per sender, per shard) with automated caching endpoint beacon discovery and tiered network distribution
- BGP-AnyCast based transaction ingress
Basically all the topology pieces to scale the actual small-world network for Bitcoin miners or transaction processors; dense at the core, with layered and sharded group distribution towards users at the edges. Right now just site or org-scope multicast in planned, but provisions are being made to extend via MP-BGP eventually.
For BSV Blockchain but could work for the other Bitcoin variants too, if they ever wanted to scale.
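The deterministic TXID-based sharding in the list above can be sketched like this (my illustration; the group address range and shard count are made up, but the point is that every node computes identical membership with no coordination):

```python
NUM_SHARDS = 16  # power of two, so modulo reduces to a bit mask

def shard_for_txid(txid_hex: str) -> int:
    """Map a TXID to a shard using its low bits."""
    return int(txid_hex[-8:], 16) & (NUM_SHARDS - 1)

def multicast_group(shard: int) -> str:
    """Hypothetical site-scope multicast group for a shard."""
    return f"239.1.1.{shard}"

txid = "a3f1" * 16  # 64 hex chars, shaped like a real TXID
print(multicast_group(shard_for_txid(txid)))  # 239.1.1.1
```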
We just received the API usage approval from Google, and I'm integrating GBP to https://pinpost.io this week (our reliability first social media management tool)
The idea was borne out of wanting to use the review tools that you get on existing sites like GitHub, without having to push and start bloating PR lists. You'll be able to leave yourself comments and code suggestions after review, which you can then pull out in a Markdown file to feed back to your coding agent (or anything else for that matter).
I'm also trying to include some optional (very optional) AI extras where you can use your own keys, and then get a tour of what you've changed and a quick overview of the changes.
Something I can finally enjoy: just playing with it. I tediously wired up a pair of pendulum simulations to drive an XY oscilloscope—got a nice Lissajous curve.
But now I want to double it to four pendulums. Each axis (still just X and Y) to be driven by the sum of a pair of pendulums. With them out of phase, the curves appear to sometimes collapse but then suddenly explode again…
(Love to eventually hook it up to an actual plotter.)
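In the small-angle limit the four-pendulum setup reduces to each axis being a sum of two sinusoids, which is enough to see the collapse-and-explode behavior numerically (frequencies and phases below are arbitrary picks, not the actual simulation's parameters):

```python
import math

def trace(n: int = 2000, dt: float = 0.01) -> list:
    """Sample the XY curve where each axis sums two detuned sinusoids."""
    pts = []
    for i in range(n):
        t = i * dt
        x = math.sin(2.0 * t) + math.sin(2.1 * t)         # pair driving X
        y = math.sin(3.0 * t) + math.sin(3.05 * t + 1.0)  # pair driving Y
        pts.append((x, y))
    return pts

pts = trace()
# When a pair drifts out of phase its axis nearly cancels (the curve
# "collapses"); as it drifts back in phase the figure re-expands.
print(len(pts), max(abs(x) for x, _ in pts) <= 2.0)
```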
I believe writing my own "Toy Harness" is a good way to learn and understand these tools.
Other than that, I did plant my tomatoes today.
The goal is to build a deep research product for actual researchers, since we believe that it is an extremely powerful product that is still nascent but has enormous potential - which we've already seen with some early users.
I've published several panels under this banner already (tools for redis, caches, celery, etc.); I am currently working on a base library layer for tools to inherit from and to make it easier to create new tools.
Essentially, the point of all of this is to make it so that you don't need so many external services; instead, DCR provides self-hosted alternatives. This in turn makes it a lot easier to build and productionize something using Django.
Reception has been decent so far, and I estimate several thousand current adopters (it's hard to estimate based on download numbers alone). For May I will finalize a common design language, further formalize the plugin system and how it works, and likely release a new panel.
Website: https://arkvis.com
Poker Equity Calculator: https://github.com/lodenrogue/poker-equity-calculator-web
Davao Explorer: https://github.com/lodenrogue/davao-explorer
Reading Summaries: https://github.com/lodenrogue/reading-summaries
I also created a couple of chrome extensions:
HN Dracula Dark Theme: https://github.com/lodenrogue/hackernews-dracula-theme-chrom...
Regex Search Chrome Extension: https://github.com/lodenrogue/regex-search-chrome-extension
Created a small command line util to get earthquake data in the Philippines:
Philquakes: https://github.com/lodenrogue/philquakes
https://store.steampowered.com/app/4129270/Tactus/
Right this second I'm looking for an alternative to After Effects that runs on Linux systems, as kdenlive has some limitations with its layering implementation. I'll probably give Blender and Godot both a whirl, as I want to get more comfortable with those tools for future projects.
Have you considered also releasing it to itch.io? (I don't do business with Steam due to DRM and their inaccessible website.)
I would happily purchase a NES ROM file so I could play it on my pitendo (RPi3 in a case that looks like an NES).
I'm not well versed in video editing. That said, the people I know who are tend to use Da Vinci Resolve.
https://store.steampowered.com/app/247080/Crypt_of_the_Necro...
- https://shirt.cash - Vibe code your t-shirt ideas and sell them.
- This weekend was substack MCP (https://www.youtube.com/watch?v=jHARlcInLqU)
new ideas welcome lol
TestFlight link, good for 10 users: https://testflight.apple.com/join/9VREtXzq
https://github.com/brettkoonce/lean4-mlir
I (w/ Claude) have built a framework for writing neural networks in Lean 4 that compiles to StableHLO MLIR and runs on GPU via IREE.
I have new features such as sharing bookmarks and possibly BPM detection planned but also some quality of life changes like better UI scalability for different size screens/split screen use.
It doesn't use generative AI, instead it auto-rigs the drawings in just a few seconds.
The idea is to have "real" linux, exposing ipv6, supporting nested virtualization, docker, etc.
an agentic coding scaffold/framework you can reference when building out your next random raspi project. prefer to build around systemd units first; make an idempotent installer script, then put as little custom code as possible around that.
`impl muster` comes down to: /build out this tool wiring together `patterns` like: C3.dropfolder-trigger; R2.device-binding; C4.lazy-resource-gate
or composite patterns like:
T2R4.device-triggered-conveyor "Bind a physical device event to a bounded ingest job that waits for hot-storage capacity, proves cold-storage capability, stages local work, and hands output to a hot/cold conveyor."
I need to back up a couple hundred DVDs, so with muster I get out:
dvd-ingester T2R4.device-triggered-conveyor
Architecture: DVD media becomes ready -> udev rule adds SYSTEMD_WANTS=dvd-rip@%k.service -> systemd runs /opt/dvd-ingester/current/bin/dvd-rip-one /dev/%I --apply -> dvd-rip-one proves DEST_DIR and waits for HOT_DIR capacity -> completed rip moves to HOT_DIR/<run-id> -> dvd-publish-one.timer drains HOT_DIR to DEST_DIR -> publish writes DEST_DIR/.incoming-<run-id> and atomically renames final output
Pipelined; ejects after the rip completes. Monitors local disk capacity, retries after the NAS comes back online, resumes after a random reboot, etc.
However, I worked on it for the past ~5 years on and off (well, mostly off) and rewrote it too many times. Now I'm finally close to releasing: bought a domain and am setting up all the last remaining things.
Majority of code (almost 70%) is generated by Gemini Pro and is extremely ugly. Due to a recent eye injury, I've not been able to code as much as I want, so I'm delegating many things to Gemini. Eventually, as my health improves, I plan to rewrite the entire thing.
[0]: https://codeberg.org/naiyer/mesaphore
[1]: https://support.microsoft.com/en-us/office/excel-specificati...
1. Responsive artboards and flex-like layout engine
2. Deep support for design tokens
3. HTML/CSS previews and export
4. Multiplayer AI and human collaboration. Agents can connect to documents and collaborate like any other user.
Built in Swift and cross platform Mac, iPad and iPhone.
I’m designing and building the UI and implementing the underlying features with Codex. So far it’s going surprisingly well.
Just launched Studio, which is the self-hosted version of DB Pro.
I also keep a devlog. #9 was just published to YouTube.
Self-Host Your Own Database Client | DB Pro Devlog #9 https://youtu.be/MJvSrJGtk70
https://github.com/jondwillis/jacq
2) Claude code plugin based on some ideas found in https://www.anthropic.com/research/emotion-concepts-function The main idea is to add hooks that inject “baselines” under some conditions to counteract certain “emotions” that can cause subtle misaligned behavior in agents
https://github.com/jondwillis/functional-emotions
3) Final Fantasy XI custom client remaster in Bevy/Rust alongside an MCP integration that aims to allow agents to play autonomously on private servers à la “Claude plays Pokemon”
Contact: https://jonwillis.dev
It's designed to integrate with Maven projects, to bring in the benefits of tools like Gradle and Bazel, where local and remote builds and tests share the same cache, and builds and tests are distributed over many machines. Cache hits greatly speed up large project builds, while also making it more reliable, since you're not potentially getting flaky test failures in your otherwise identical builds.
You get to choose the genres you're interested in, and it creates playlists from the music in your library. They get updated every day - think a better, curated by you version of the Daily Mixes. You can add some advanced filters as well, if you really want to customise what music you'll get.
It works best if you follow a good number of artists. Optionally you can get recommendations from artists that belong to playlists you follow or have created. If you don't follow many (or any) artists, you should enable that for the service to be useful, as right now those are the only pools of artists the recommendations are based on.
I started with this last summer. Usually I get tired of an idea, but this one is just an endless pit of things to try out.
Currently seeing how we can get an analytics agent working on the canvas. Video here: https://x.com/i/status/2053410747137266070
Incremental Markdown parser that emits streams of semantic events, plus tools to manipulate them - designed for real-time rendering of streamed LLM output.
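To give a feel for the shape of such a thing, here is a minimal line-based sketch (hypothetical names and a tiny subset of Markdown; the real parser surely handles far more syntax and emits richer events):

```python
# Minimal sketch of an incremental, event-emitting Markdown parser.
# Chunks arrive as they stream in; events come out per complete line.
class IncrementalMd:
    def __init__(self):
        self.buf = ""

    def feed(self, chunk):
        """Consume a streamed chunk, yield semantic events for each finished line."""
        self.buf += chunk
        while "\n" in self.buf:
            line, self.buf = self.buf.split("\n", 1)
            if line.startswith("#"):
                level = len(line) - len(line.lstrip("#"))
                yield ("heading", level, line.lstrip("#").strip())
            elif line.startswith("- "):
                yield ("list_item", line[2:])
            elif line.strip():
                yield ("paragraph", line)

p = IncrementalMd()
# Note the heading split across two chunks, as LLM output often is:
events = list(p.feed("# Ti")) + list(p.feed("tle\n- item\ntext\n"))
# events: [("heading", 1, "Title"), ("list_item", "item"), ("paragraph", "text")]
```

The key property for rendering streamed LLM output is that events fire as soon as a unit is complete, without waiting for the whole document.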
With Unity I'm trying to bundle a bunch of different free, cheap or open source solutions together. For facial, that includes a custom converter from the output of Deadface (based on Mediapipe) with ARKit blendshapes, and also eye movement. For body it's a custom hook to SlimeVR that allows you to mocap with cheap-ish IMU-based DIY trackers, and all that on top of a custom made (not free but open source) physics rig solution that gives you accurate rigid body real time collision, saving on cleanup work.
It's been going really well despite being an unusual workflow. I hope to release it as a plugin for an in-development sandbox game in the near future. Mocap and animation were my passion long before I started with tech stuff, and finally I'm able to pursue it.
While working on it, I realized I should build a small Hex package for authoring and playing demos right in a Phoenix app (it's very easy to author scripts with AI or by hand):
Since I started it a couple of months ago, it's been used by me to transpile SQLite to Go, and by some other folks to transpile other C, C++, Zig and even Perl libraries to Go.
Since last month we’ve stabilized the search UI/UX and have 5 search providers you can choose from and sort as you prefer.
We entered May with over 50 paying customers and have recently launched Uruky Site Search [2] (for website owners, this effectively is our own search index and crawler, which we’ll be bringing into Uruky soon as another search provider option)!
Customers really enjoy the simple UI (search doesn't require JavaScript) and search personalization (from choosing the providers to domain boosting and exclusion). We also have bangs (like "!g", "!d", or "!e") for when something doesn't quite give you what you'd expect.
You can see the main differences between Kagi, DuckDuckGo, Ecosia, etc. and Uruky in the footer (right side), but one huge difference is that with Uruky, after being a paying customer for 12 months, you get a copy of the source code!
Our main challenge right now is outreach because we want to do it ethically, and it’s hard to find communities or places to sponsor which are privacy-focused and don’t require €5k+ deals. Ideas are welcome! We’ve been sponsoring a project per month (Qubes OS, The Tor Project, and Hister so far), with our limited budget of ~$100 / month.
Because of bots and abuse there isn’t a free trial easily available, but if you’re a human and you’d like to try it for a week for free, reach out with your account number and we’ll set that up!
Thanks.
One thing I can recommend right off the bat is Reddit - there's many privacy focused subreddits, and also you can share the whole project in EU related subreddits and e.g. r/SideProject.
Would love to try it for a week, this is my account number - 9772263817629091
Keep up the great work!
I've topped up that account number for a week, enjoy (I'd recommend removing it from the post because anyone will be able to use it)!
It's inspired by GitHub PR review workflow, only with quick iterations and local.
It's been great! I found some dedicated users, dogfooding it every day with Claude and starting to get more contributions from the little community. We just got accepted into Homebrew core which was my target.
I'm expanding the team features now as I've got a few users keen to get the sharing service deployed in their private networks!
fDeploy is a self-hosted Windows deployment automation tool — a lightweight, on-prem alternative to Octopus Deploy. It consists of a Server (Windows service with a Web UI) that orchestrates releases, and Agents installed on target windows machines that execute deployment steps (IIS sites, file copies, scripts, etc.) across environments.
I recommend the book. It certainly isn't easy (maybe 3x harder than Crafting Interpreters), but I've learned a ton (e.g. how to deal with operations on different sizes of types, or the trick of using pseudoregisters to avoid having to figure out registers up front).
I was responsible for multiple RADIUS services used by millions of people every day. The existing software is slow to build with, difficult to scale and expensive. I couldn't let it go.
Step one was building the platform to run it on and make it sustainable as a business. Step two is implementing protocols like RADIUS that lack a separated compute/storage model but should really have one.
I chose C# because I know it, and build native single-file executables using AoT.
And on and on.
A stateless compute model with separation between the packet handling and the authentication logic solves pretty much all of it.
The interesting part is that I started off implementing a research paper for indexing, but the performance was not good enough. I ended up tuning things for my own use case and arrived at a good-enough, replicable RAG store.
Side project is my own agent harness, https://github.com/Smaug123/writ , which is being built sandbox-first and with Nix as a first-class citizen. Obviously everyone has to write their own agent harness as a rite of passage.
I've been using Anki for 10+ years and love it but always wanted something with a cleaner UX and a reader view. The recent Anki ownership change pushed me to finally make something, and it's seeing some traction :)
Right now I'm focusing on getting the reading and note-taking view to be nice. I used to use Polar Bookshelf (RIP) but that went away, trying to make something better.
The flashcard side also has a REST API btw!
Right now I intend to make it compatible with Incus as a remote. So it's just a matter of adding it as remote and then you can consume all of your versioned images.
The bot settings (system prompt and user prompt, temperature, reasoning, etc.) are 100% transparent and customizable, and all users can view and copy anyone else's settings from the leaderboard. The goal is to build the best trading bots possible by seeing what works.
You can run a bot on Gemma 4 31B with a free-tier Google AI Studio account (I'm running 5 bots on it myself). Or just run Gemma 4 26B on your PC if you have the GPU for it. I'm running 5 on my 5090, so I'm trading with 10 bots total.
The platform is connected to Hyperliquid and you can trace all the trades on the blockchain from the user's Analytics page (always public).
The way it works is you set a loop interval (default 1 minute) and the model receives the candles, market stats, indicators, account balance, current positions and so on and decides Buy, Sell, or Hold and how many units.
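That loop can be sketched roughly like this; `get_snapshot`, `ask_model`, and `place_order` are stand-ins for the real plumbing, and the reply format is my assumption, not the platform's actual protocol:

```python
import re
import time

def parse_decision(text):
    """Pull an action and unit count out of a model reply.
    Assumes replies like 'BUY 3', 'SELL 1.5', or 'HOLD' somewhere in the text."""
    m = re.search(r"\b(BUY|SELL|HOLD)\b(?:\s+([\d.]+))?", text.upper())
    if not m:
        return ("HOLD", 0.0)  # be conservative on unparseable output
    return (m.group(1), float(m.group(2) or 0))

def run_bot(get_snapshot, ask_model, place_order, interval=60):
    """One decision per loop interval: market snapshot in, order (maybe) out.
    The three callables are hypothetical hooks into exchange and LLM APIs."""
    while True:
        snapshot = get_snapshot()  # candles, stats, indicators, balance, positions
        action, units = parse_decision(ask_model(snapshot))
        if action != "HOLD" and units > 0:
            place_order(action, units)
        time.sleep(interval)
```

Defaulting to HOLD on anything unparseable is the safe failure mode, since a garbled reply should never trigger a trade.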
It's still experimental but I have already processed 1m+ prompts, 10k+ trades, and almost $1m in volume since January 2026. I have around 15 bots running right now; you can check their PnL on the leaderboard (public). I've made a lot of changes in the last few weeks, so the most recent (24h or 7d) results are the most relevant. The model you use is super important (Gemma 4 31B is the best value I've found so far, better than Gemini 3 Flash, and you can run it for free), and the coin you choose matters too. Preferably, you want something that's trending. My friend's bot did well with ZEC and VVV this week.
Right now I'm working on improving reliability (I bought a Japanese VPS to run my own HL node), and this weekend I moved the app from Render to my own DC VPS for 10x+ cheaper and 1000x more bandwidth (25 TB instead of 25 GB, seriously if you're using Render and want cheaper infra look into buying your own VPS).
I'm also implementing CLI/MCP for OpenClaw support. And next is an automatic screener that will use LLMs to pick the most promising cryptos to trade (since I noticed this has a huge effect on PnL).
If you have questions, let me know, the Trade page has my Telegram group link.
I’ll keep chipping away at it this year, and probably expand beyond morels to other seasonal natural phenomena that my people enjoy like smelt/salmon run, wildflower blooms, etc.
It replaces paper stamp cards with Apple Wallet passes (Google Wallet coming soon) without the need for customers to download an app or signup. It’s still very work-in-progress (forgive the landing page) but I’m enjoying using Ruby on Rails. Please let me know your thoughts!
The result is http://getcaliper.dev.
It has a number of mechanisms that help substantially:
1. It can extract deterministic quality checks from your CLAUDE.md text; these checks then get executed after every agent turn.
2. It performs a lightweight ai-powered review at every commit; feedback goes directly to the agent, which can then make corrections.
3. It performs a more 'traditional' deep AI review at merge, or on-demand.
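Mechanism 1 might work something like this sketch (my guess at the approach, not Caliper's actual code): scan the CLAUDE.md text for backticked commands on imperative-sounding lines, then run them after each turn and report the failures.

```python
import re
import subprocess

def extract_checks(claude_md):
    """Hypothetical extraction rule: backticked commands on lines that
    sound like requirements ('run', 'must', 'always') become checks."""
    checks = []
    for line in claude_md.splitlines():
        if re.search(r"\b(run|must|always)\b", line, re.I):
            checks += re.findall(r"`([^`]+)`", line)
    return checks

def run_checks(checks):
    """Execute each check in a shell; return the commands that failed."""
    return [c for c in checks if subprocess.run(c, shell=True).returncode != 0]

md = "Always run `true` before committing.\nStyle notes go here.\n"
checks = extract_checks(md)  # -> ['true']
```

The win over a plain AI review is determinism: the same CLAUDE.md rule either passes or fails, every turn.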
Free to use, just bring your own API key. Any and all feedback is welcome!
I'm also thinking about writing the Necronomicon of delinking at some point. The extension keeps spreading by word of mouth, and there are only so many UX improvements I can make to something that requires throwing everything you learned in CS 101 into the trashcan before you can "get" it.
The idea was to create a quine that runs forever on something like the Akash network, with its own crypto treasury to support itself, pay its bills, and try to replicate. It would then talk to an LLM for support and actions on what to do to stay alive.
It got pretty out there. Stored some of the ideas here.
It's intended just for me and follows a philosophy around hyper-personal software that I've been developing: https://paulwrites.software/articles/hyps/
As a demo, I repaired an old Philips PM5190 function generator (about 40 years old) and connected it to Claude Code. Lots of fun. Going to post a follow-up video in the next couple of days.
The main goals are to own my data (memories, artifacts, chats), be able to switch AI providers at any point (if one is down or I want to try a new model), have the same experience between desktop and mobile especially when it comes to working remotely on code.
A bigger vision is to offer everyone an alternative to Claude and ChatGPT they can own, just like OpenClaw, but with a great app experience.
I hope to have the first beta published by the end of next week.
An LLM benchmark for open-weight models only, with secret questions.
The questions are asked multiple times to calculate a consistency score.
The results are available in JSON, containing the hash of the question with the number of correct and incorrect answers, the number of unique answers, and the number of times no answer is given. (Uses \boxed{})
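A sketch of how such a record could be assembled (field names and the consistency formula are my assumptions, not the benchmark's):

```python
import hashlib
from collections import Counter

def score_question(question, answers, correct):
    """Tally repeated runs of one secret question into a per-question record.
    `answers` is the list of model answers across runs; None means no answer."""
    counts = Counter(
        "none" if a is None else ("correct" if a == correct else "incorrect")
        for a in answers
    )
    given = [a for a in answers if a is not None]
    return {
        # publish only the hash, so the question itself stays secret
        "question_sha256": hashlib.sha256(question.encode()).hexdigest(),
        "correct": counts["correct"],
        "incorrect": counts["incorrect"],
        "no_answer": counts["none"],
        "unique_answers": len(set(given)),
        # consistency: how often the model repeats its own most common answer
        "consistency": max(Counter(given).values()) / len(given) if given else 0.0,
    }

rec = score_question("2+2?", ["4", "4", "5", None, "4"], correct="4")
# rec["correct"] == 3, rec["unique_answers"] == 2, rec["consistency"] == 0.75
```

Publishing hashes rather than the questions keeps the set secret while still letting anyone verify that the same questions were reused across models.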
I feel like even after all these years we’re still missing the devex that Heroku provided.
It’s been super fun to experiment & integrate MCP into it.
We just passed 2000 developers last month actively deploying with canine.
It’s an n-gram viewer for Hacker News comment data.
Still working on daily data updates, etc but it’s live!
Thanks.
You delegate a task or GitHub issue to it and it uses AI coding agents and developer tools to write the code, run checks, read failures, fix problems, and iterate until the result is good, then comes back with a pull request. It does everything a human dev would do, fully automated.
It's nice to see how well-thought-out language design can pay off years later, with lower token usage. From an entropy POV, Rebol syntax is certainly close to an optimal state.
[1] https://apps.apple.com/us/app/reflect-track-anything/id64638...
If you're a creator, researcher or developer looking to reap the rewards of a video without consuming it fully, then it's helpful.
Whole thing is up and running on vercel.
It's a work in progress — would be great to get some input!
My art with pen plotters. Recently released a new series of brush plots. Very inspired by Soulages: https://harmonique.one/collections/brush-plots
An AI first typing application.
I think anyone can learn touch typing and potentially 2x their typing speed.
We make typing practice engaging and data driven.
hack music
For example, if I downgrade from Max to Pro I'd still be able to use the subscription, but also run sessions with other models (less expensive/local) as desired:
ccode init-config # initializes a new config file for me to set everything up
ccode edit-config # opens it in my editor so I can change, can also include editor as argument e.g. vim
ccode # launches whatever my default profile is
ccode --deepseek # Using their API key, they have a discount this month
ccode --openrouter # Whatever OpenRouter model I have configured in the config file
ccode --openrouter-preset # Also supports OpenRouter presets e.g. if I don't want to use quantized models
ccode --deepseek --control # launches a Remote Control session, shows up in web/desktop app as a regular session
ccode --deepseek --auto # overrides the default permissions, --yolo also works
... (and so on, there's more examples on the website)
Source available, pre-built binaries on itch.io, pay-what-you-want with a minimum price of 0 USD, so you can probably get it for free first if interested in taking a look. I finally got around to signing the app for Mac, which is what this post originally was about: https://news.ycombinator.com/item?id=48075366 (the new versions will be out soon)
Also thinking that I might make it an Anthropic API --> OpenAI API proxy that allows talking to providers that don't support the Anthropic API directly, alongside allowing switching models dynamically during a session (Claude Code wouldn't even have to know about it, it'd just send requests to a local endpoint and the proxy would do the rest).
Early on, but Go is lovely to work with, mdBook is great for getting a site off the ground and I'm really surprised that more people don't use Itch.io for distributing software (or the pay-what-you-want model in general), it's dead simple!
It's a PWA and works offline. Tech: js, no libs, Canvas API, Web Audio, not vibe coded, but I did use Claude for graphics and tests. Puzzles curated by hand.
It's early days. I'm not even sure it's possible.
So, I built an agent to help remind me -- it's a subscription based service that sends you updates every morning, and stores your preferences so it can learn what you like.
Most recent aha moment: I kept wondering if it was normal that my cluster was only able to process 4 requests per second per vLLM engine (it just seemed really low to me).
I realized a better metric is in-flight requests... Each engine is processing 70 requests at any given time, streaming tokens for over 30s.
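Little's law (in-flight = throughput x latency) ties the two metrics together and shows why the completion rate looks low whenever streams are long. A quick sanity check with the rough figures above:

```python
# Little's law: in_flight = throughput * latency, so
# throughput = in_flight / latency. With ~70 requests in flight,
# each streaming for ~30s, completions per second are necessarily small:
def throughput(in_flight, latency_s):
    return in_flight / latency_s

per_engine = round(throughput(70, 30), 1)  # -> 2.3 completions/s per engine
```

With figures this rough, a couple of completions per second per engine is the same ballpark as the ~4/s observed: the "low" number is exactly what high concurrency plus long token streams predicts.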
I am currently rewriting the engine to add ~400 games this month.
Each guess can be a single letter or a full word. Revealing letters helps you make word guesses, which are more efficient since a word reveals all instances of its letters across the board.
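The reveal mechanic might look something like this (my reading of the rules above, not the game's actual code): a single letter reveals itself everywhere, while a correct word reveals every instance of all of its letters.

```python
def apply_guess(board, revealed, guess):
    """board: list of words; revealed: set of (word_index, char_index) shown.
    Assumed rules: a letter guess reveals that letter everywhere; a correct
    word guess reveals all of the word's letters everywhere on the board."""
    guess = guess.lower()
    if len(guess) == 1:
        letters = {guess}
    elif guess in board:
        letters = set(guess)
    else:
        return revealed  # a wrong word guess reveals nothing
    for wi, word in enumerate(board):
        for ci, ch in enumerate(word):
            if ch in letters:
                revealed.add((wi, ci))
    return revealed

r = apply_guess(["apple", "pear"], set(), "p")
# reveals (0, 1) and (0, 2) in "apple" plus (1, 0) in "pear"
```

This is also why word guesses are the efficient move: one correct word can light up cells in every other word on the board at once.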
It's been really gratifying seeing friends enjoy the game, now we're trying to figure out how to get in front of more players. Leave us some feedback if you stop by
Will be trying to implement a virtual bass array next.
Currently we’re using AWS and Backblaze B2, but I’m formulating a plan to move to colocated servers. Not being billed per GB will open up a lot of new opportunities. Even at today’s server prices the math still adds up.
82 sites published so far, with a really weird and wide range of content.
Working on a simple WYSIWYG website editor to go with the current functionality.
Smart documents for teams. Fast, Open, and Self-Hostable.
Basically a much faster Notion.
Think wisprflow + granola with 30+ top STT models under single login and pay as you go billing model with 25% markup over API.
https://beatquestgames.itch.io/textbattlegd
Completely open source if you ask and promise not to make fun of me.
https://github.com/dcminter/kafkaesque
Worth kicking the tires if you're currently using embedded or dockerised Kafka in your tests.
Well, all of a sudden, now that I've kinda quit my gaming time sink, all my mini projects are finally getting completed. All small but useful things for my setup that seem to be slowly becoming part of a bigger personal project. And that's between the kid and lots of books.
Ngl, it is weird for me now. If this is a midlife crisis, I am loving it.
For now it's just for iOS but currently I'm working on porting to Android.
(I’ve been procrastinating on marketing basics for seven years, so it’s… fun but still intimidating :) )
We grab interesting business problems, turn them into fun challenges for hundreds of AI engineers to find the best architecture for. Insights are shared back with the community.
It is a fun learning process with unexpected scaling challenges.
This is a Flutter project.
I'm a backend dev, frontend was made with AI.
it enforces very few paradigms, runs in the browser, and allows users to view and edit agent config files within the UI.
it's kind of a nightmare to try to figure out how to do this appropriately, but it's an interesting challenge and i have seen very few (~0?) projects with an approach like this ...
all the offline harnesses are optimized towards coding, vs. general text manipulation aka "writing."
hoping to publish v0.1.0 by the end of may.
Working on https://fastsleep.app
Using this app, you may fall asleep within 20 minutes, often within 8 to 15.
Simply start the session and imagine what you hear. If you hear "calm river", imagine that. If you hear "heavy rain over a tree", imagine that. And you may fall asleep soon.
Try this tonight!
---------
- Built with Tauri: installer is small and start-up is near-instant on all three OSes.
- No accounts, no telemetry, no MDX server in the loop. Sync goes through whatever cloud folder you already have (iCloud / Drive / Dropbox / a plain directory).
- Tab-to-accept ghost-writing is bring-your-own-key.
- Exports to PDF, HTML, DOCX. Tables, math, diagrams, code blocks all live behind toolbar buttons — no syntax to memorise.
Hope to have some people like it and use it.
Play a game here: https://bawgle.alifbae.dev
Bg2-like is playable at https://archipelago-sandy.vercel.app
I scanned a couple of chapters and realised it likely wasn't LLM generated, it just needed an edit. The intro to C is a hard and weird intro, but then driver development in FreeBSD is hard and weird and people who aren't prepared to get through such intros probably aren't going to get through the rest of it.
Being the contrarian, I've started going through it. I was involved on the periphery of the FreeBSD project ~25 years ago, went to conferences, ran a BSDUG in my hometown, and so on. And I realised I've missed systems programming and FreeBSD itself a little, and in recent years became a little sentimental.
What I've discovered so far in the first few chapters:
1. I miss FreeBSD. And it's weird my muscle memory kicks in and am surprised in a lovely way to find familiar things like /etc/rc.conf work the way I remember them.
2. This is not AI slop. There are issues that I can blame on him not using the same platforms I am (if you're on Apple Silicon, just use UTM and the aarch64 ISO - don't use the VirtualBox config he suggests, as an early example), but as somebody who sees a lot of AI generated content in my day job - this isn't it
3. I have got excited about coding again for the first time in a while.
So, this is my hobby for a while. Go back to where I started, get into low-level systems programming again, I have some ideas on some hardware I want to help out on... it's different to a lot of what I've been working on for the last decade or so, but that excites me.
* assisted coding, not full code generation
The idea is that each morning, you click the "New Day" button, and your Todo list along with other notes carry forward from the previous day to the new one. When you accomplish something, you add it to the Done section. Other sections can be added as needed. I have been using a text editor and/or shell script for this purpose for about a decade, but have been inspired to make it into an app now that I can delegate the boring bits of app development. It is not quite done yet, but it's getting close to being usable.
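The carry-forward described above could be sketched like this (section names and the dict shape are my assumptions about the app's data model):

```python
# "New Day" carry-forward: Todo and other sections roll over to the new day;
# Done starts fresh, since yesterday's accomplishments stay with yesterday.
def new_day(sections):
    """sections: dict like {"Todo": [...], "Done": [...], "Notes": [...]}."""
    carried = {name: items[:] for name, items in sections.items() if name != "Done"}
    carried["Done"] = []
    return carried

today = new_day({"Todo": ["ship app"], "Done": ["fix bug"], "Notes": ["idea"]})
# today == {"Todo": ["ship app"], "Notes": ["idea"], "Done": []}
```

Copying the lists (`items[:]`) keeps each day's record independent, so editing today never rewrites yesterday.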
(* To the inevitable downvoters, this is in part an experiment to get familiar with what SOTA LLMs can handle. With the intent of comparing it to local LLMs once I get my Strix Halo set up as a coding assistant. I only code as a hobby currently, and have too many other hobbies, and this app wouldn't exist without something else doing the heavy lifting. That said, this is a pretty low-stakes application and I don't commit any code that I haven't reviewed and don't understand.)
A reactive programming language for games! Properties signal when they change and you can register blocks that tell the engine how to use that property, not just once but every time it changes. It’s a more declarative way of making games which I think is lots more productive.
I’ve been working on this for four years, it’s been a big project!
in each job i find myself trying to enhance information in order to visualize it, so this time i'm finally giving it a try
An interactive sound sculpture running on an Arduino Uno + Pd
Using Mandelbulber as a visual effects layer for my experimental music AV show
I got into creating my own rings, and I’d really like to create one with ore I harvest myself. Gold is too hard and silver can be kinda dangerous, but malachite is pretty safe and I can just drive to Copperopolis to pick some up.
Basically: smelt the malachite with flux and charcoal to get pure copper, flow that into an ingot mold, hammer it into shape. Then I’ll have my own ring, with metal I collected with my own hands
it's a programming language
Next up is actually implementing game play!
There is a little video demo here (but bear in mind that everything is temp graphics) https://hakon.gylterud.net/diary/2026-05.html#2026-05-02
I just hate the SaaS scene today: even a small productivity app costs $10-$15/month. When you couple that with the bunch of apps you use, you spend hundreds of dollars of hard-earned cash.
The open-source community is amazing on some fronts, but enterprise and non-technical users can't use those tools without a layer of support, hosting, and setup assistance.
We want to be the delivery layer between the current open-source community and SaaS users.
Got a lot of ideas to work on, but decided to build out a small version right now and launch it!
the requirements for growth keep changing plus all the AI noise means that the playbook changes regularly. staying on top of the state of the market while improving/maintaining the product and understanding our icp + exploring new verticals is a tricky (but fun) task to manage!
tldr: we help you find good supplements
and for fun, I am building yet another programming language!
The game is going to be a farming tycoon/city builder game where you can buy farm stands and advertise to sell your goods. As your operation grows, you grow the local economy and people move to the town turning it into a city, opening up the chance to sell at farmer's markets or supermarkets. As the city grows you'll have to buy/sell land with the city and work with the mayor to plan where the city should claim new land for you to purchase so you can stay on the outskirts with healthy soil (or in the endgame, run for mayor and manage the growth of the city yourself, a la Sim City/Cities/Frostpunk)
I chose Love2D as my engine so I can use the relative simplicity of 2D art in 2.5D pseudo-3D instead of 3D modeling. The world space is a 3D Euclidean grid of cells wrapped around a horizontal cylinder on the x axis. The view space is perpendicular to the side of the cylinder, giving us a natural horizon at the vertex of the cylinder on screen. The world-space coordinates are expressed in terms of the polar coordinates of the cylinder, giving natural rise to radius as altitude, angle theta as latitude, and the x axis as longitude. All the world math can be calculated using the trigonometry of the unit circle and converted to 3D Cartesian coordinates before converting those to screen-space coordinates. I can use regular flat plans and elevations for the textures of building faces, and render them on linearly transformed quad polygons. Maybe I can also do some screen-space displacement à la Crimson Desert at the finish line to give buildings window sills and ledges when you see down a side of one.
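The world-to-Cartesian step described above might look like this (my reconstruction in Python rather than Love2D's Lua, with an assumed cylinder radius):

```python
import math

# The world lives on a cylinder along the x axis: x is longitude,
# angle theta is latitude, and radius above the surface is altitude.
BASE_RADIUS = 1000.0  # assumed cylinder radius in world units

def world_to_cartesian(x, theta, altitude):
    """(longitude, latitude angle, altitude) -> (x, y, z) via unit-circle trig."""
    r = BASE_RADIUS + altitude
    return (x, r * math.cos(theta), r * math.sin(theta))

# A surface point at theta = 0 sits exactly BASE_RADIUS from the axis:
origin = world_to_cartesian(5.0, 0.0, 0.0)  # -> (5.0, 1000.0, 0.0)
```

From there a standard perspective projection of (x, y, z) gives the screen-space quad corners for each building face.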
I am doing the development without LLMs as much as possible so I retain a good grasp on logic, language, and math. I have been having a lot of fun digging back into the multivariable calculus and linear algebra concepts I thought were beyond me (because of some autobiographical amnesia issues I deal with), only to discover that no wait, I was taught these concepts in high school and was quite comfortable applying them. All the development is done on my own private, secured git instance on my homelab server, and I can pull down the latest revision to my iPhone to show off; it's been really cool. Kind of a pain to find a good git app on iPhone that allows custom git servers with ports, though.
screenshot of a very early hello world, before I made the mental connection between wrapping a 2d cartesian plane around a cylinder and actual 3d cylindrical polar coordinates, which is why the shapes just sit over the world rather than extending from it, I hadn't yet conceived of the radius of the cylinder being altitude: https://fucci.dev/assets/helloworldspace.png
https://www.linkedin.com/search/results/all/?keywords=%23ape...
Too many codes are gatekept behind proprietary walls. Many are old and don't use the newest acceleration techniques to make the simulation fast. Additionally, none of them scale on AWS. I want SAS/SAR images to be easy to generate for anyone.
I have a working prototype written in Julia which is a very simple neural network. The input is in vector format so traditional convolutional neural networks don’t work out of the box but I swapped the convolution layer with a path simplification algorithm and it worked extremely well. Like 20 samples per character (from a set of only 5 hiragana during prototype phase) was enough to get 100% accuracy in a test collection of 5 samples per character after only 30 iterations of training.
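The path-simplification layer could be something like Ramer-Douglas-Peucker (a guess at the specific algorithm; the prototype may use another). It keeps a stroke's shape while dropping points that deviate little from the chord, which is the kind of normalization that makes tiny training sets viable:

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker: recursively keep the point farthest from the
    chord if it deviates more than epsilon, otherwise keep only the endpoints."""
    if len(points) < 3:
        return points[:]
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1.0
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        d = abs(dy * (px - x1) - dx * (py - y1)) / norm  # distance to chord
        if d > dmax:
            dmax, idx = d, i
    if dmax > epsilon:
        left = rdp(points[: idx + 1], epsilon)
        right = rdp(points[idx:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]

stroke = [(0, 0), (1, 0.05), (2, -0.03), (3, 0.02), (4, 0)]
simplified = rdp(stroke, 0.1)  # -> [(0, 0), (4, 0)]
```

Two handwriting samples of the same character then reduce to similar short point lists, which plays the same role as a convolution layer's translation tolerance.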
I plan on working with free and open data, which I don't think exists for Japanese kanji characters (at least not in vector format; KanjiVG only has one sample per character and I need dozens), so I also built a crowdsourcing website to collect data from random people on the internet.
I am planning to run some more experiments with my prototype model before I release the crowdsourcing web page to an actual server though.
Model prototype: https://github.com/runarberg/kantoku-prototype
Crowdsource app: https://github.com/runarberg/kantoku-collector
Right now I just germinated a 4x8 bed with flax for fiber. The plan is to grow it for 100 days or so and then harvest, dry, ret, dry, and spin. I need a lot more to do anything serious, but I think it’d be awesome to have a scarf that I made with linen I grew and harvested myself