MindJam helps brands, studios and creators understand their YouTube communities.
MindJam analyses millions of YouTube comments to instantly reveal the unfiltered voice of your audience – their true sentiment, emerging themes, and the topics they really care about.
Here is a sample analysis - https://mind-jam.co.uk/analysis/HPMh3AO4Gm0?utm_source=hacke...
I didn't intend to build MindJam... I just wanted to learn about LLMs.
At first I wanted to see how Laravel could work with an LLM, and after doing some reading I ended up learning about OpenAPI 3.0 schemas and multi-modal RAG.
In the last few months I have built on top of Gemini, Claude and OpenAI. All have their perks and quirks.
I am hoping this learning is only the start of a pretty cool journey.
If you're interested in Estonia, e-government, building tech hubs, and the future of the nation state, I'd love it if you took a look (and let me know what you think). It's available on Kindle now, but Oxford University Press will be shipping physical copies May 15, and buying from a smaller press is always appreciated!
https://global.oup.com/academic/product/rebooting-a-nation-9...
Take photos of the tree from 6 different angles, feed them into a 3D model generator, erode the model, and generate a 3D graph representation of the tree.
The tool suggests which cuts to make and where, given a restricted fall path (e.g. constrained by a neighbor's yard on one side).
I create the fallen branches in their final state along the fall plane, and create individual correction vectors mapping them back to their original state, in an order such that the vectors do not intersect one another.
The idea came to me when a particularly difficult tree needed to come down in my friend's yard, and we spent hours planning it out. I've already gotten some interest from the tree-surgeon community; I just need to appify it.
Second rendition will treat the problem more as a physics one than a graph one, with some energy-minimisation methods for solving.
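For the curious, the "no vector intersects another" check can be sketched as a plain 2D segment-intersection test, assuming branches are reduced to line segments in the fall plane (a simplification for illustration; the actual tool works on a 3D graph):

```python
def ccw(a, b, c):
    # Cross product sign: positive if a -> b -> c turns counter-clockwise.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    # Proper intersection test for segments p1-p2 and q1-q2:
    # each segment's endpoints must lie on opposite sides of the other.
    d1 = ccw(q1, q2, p1)
    d2 = ccw(q1, q2, p2)
    d3 = ccw(p1, p2, q1)
    d4 = ccw(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)
```

A correction vector would only be scheduled once it no longer crosses any vector still pending.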
My need/idea was to post that somewhere (r/backyardorchard, probably) to get help in determining which limbs to prune. However, there didn't seem to be an easy way to share that sort of thing, and time was of the essence, so I just forged ahead on my own.
It's turning into various DIY rabbit holes, actually, with the next one (outside of various related landscaping stuff) being to gut a basement.
Does an insane amount of fine print really save you? Even if you say the model is only an aid to be used by licensed or certified professional arborists or whatever, I fear some Joe Blow whose tree lands on his house will be suing you.
It is a very simple three-pass plan: "Deadwood, Crossovers, Aesthetics".
So, first pass, go through the tree cutting out only and all the dead branches. Cut back to live stock, and as always make good clean angle cuts at a proper angle (many horticulture books will provide far better instructions on this).
Second pass, look only for branches that cross over other branches, especially those that show rubbing or friction marks against other branches. Cut the ones that are either least healthy or grow in the craziest direction (i.e., noticeably off from the normal pattern of growing more or less radially away from the trunk).
Then, and only after the other two passes are complete, start pruning for the desired look and/or size & shape for planned growth or bearing fruit.
This method is simple and saves a LOT of ruined trees: trying to first cut to size and appearance means that by the time the deadwood and crossovers are taken out later, the tree is a scraggly mess that takes years to grow back. And it even works well for novices, as long as they pay attention.
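For those who think in code, the three-pass ordering can be sketched roughly like this (the branch fields `condition`, `health`, and `crosses` are invented for illustration, not from any real pruning app):

```python
def plan_cuts(branches):
    # Pass 1: deadwood -- cut all dead branches first.
    deadwood = [b["id"] for b in branches if b["condition"] == "dead"]
    # Pass 2: crossovers -- for each crossing pair of live branches,
    # mark the less healthy one for removal.
    alive = {b["id"]: b for b in branches if b["condition"] == "live"}
    crossover_cuts = set()
    for b in alive.values():
        for other_id in b["crosses"]:
            other = alive.get(other_id)
            if other is not None:
                loser = b if b["health"] <= other["health"] else other
                crossover_cuts.add(loser["id"])
    # Pass 3: aesthetics -- left to the pruner's eye.
    return deadwood, sorted(crossover_cuts)

branches = [
    {"id": 1, "condition": "dead", "health": 0, "crosses": []},
    {"id": 2, "condition": "live", "health": 3, "crosses": [3]},
    {"id": 3, "condition": "live", "health": 5, "crosses": [2]},
    {"id": 4, "condition": "live", "health": 4, "crosses": []},
]
```

Here branch 1 goes in the deadwood pass and branch 2 (the weaker of the crossing pair 2/3) goes in the crossover pass.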
I'd suspect entering the state and direction of every branch into an app would take longer than just pruning with the above method, although for trees that haven't fully leafed out, perhaps a 360° set of drone pics could make an adequate 3D model to use for planning?
In any case, good luck with your fruit trees — may they grow healthy and provide you with great bounty for many years!
Happy to help!
I have a couple of products I make that require 12" widths, which means I pay a whole lot more per board foot than for <10" widths at my hardwood supplier.
It's surprising to me how little work has been done to make the tools that do this accessible, considering how much money and open data there is.
Where it gets less open and more complicated is when you consider that certain mills can only make certain cuts, produce certain products, and accept certain logs. Then factor in the distance between mills and the products they can make, and also the log lengths accepted by the trucks that can travel those routes.
It's all solvable, and should be solved, but it's so niche that I still think there isn't an accessible solution.
This became convoluted, so I just opted for a far easier method of solving vector intersections.
It's also not perfect, since I haven't factored in rotation origin very well, and I'm now pursuing a far simpler physics-based approach.
My methods are all over the place. The tree is taken as-is on the day and cuts are calculated on the fly; no future growth modelling, if that's what you're asking.
https://github.com/dahlend/kete
It can predict the location of the entire catalog of known asteroids, generally to within the uncertainty of our knowledge of the orbits. Its core is written in Rust, with a Python frontend.
It sounds really impressive.
(If so, email hn@ycombinator.com and we'll put it in the second-chance pool (https://news.ycombinator.com/item?id=26998308), so it will get a random placement on HN's front page.)
More importantly, good luck with the PhD and we all hope it goes swimmingly!
Would it be appropriate to communicate on the README which telescopes this is used for? You see these very niche, very professional-looking repositories on GitHub now and then, and it's never clear how much credibility they have and whether they come from a hobbyist, student, experiment, or are in operational use.
All the things that are important to me in one place: notes, habit tracker, brag doc, action log, todos, events, data collection, biolinks, and a lot more.
That certainly sounds a bit boring, and rightly brings to mind great tools like Obsidian. However, there was always something missing from those tools, or the configuration was too complex. That's why I started to build one myself. It's a mix of PWA, local-first, and GitHub sync. I don't want a tool that only works in the browser; I also need to be able to continue working seamlessly on my smartphone, and GitHub offers an endless history that I can view. Plus: I can clone the repo locally at any time.
I don't need a habit tracker service, a tool for notes or a brag doc service anymore. Everything is stored as files that I can access at any time. Other nice features are forms (like typeform) or an infinite number of biolinks.
I've been using it daily for a few months now and have already been able to replace a few services with it.
On top of that, it uses a lightweight AI model to read product descriptions and filter based on things like ingredients (e.g., flagging peanut butter with BPA by checking every photograph of the plastic or avoiding palm oil by reading the nutrition facts) or brand lists (e.g., only showing WSAVA-compliant dog foods). Still reviewing results manually to catch bad extractions.
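The extraction step can be kept separate from the filtering logic; here is a minimal sketch of that shape, with `extract_ingredients` standing in for the model call (hypothetical names, not the actual implementation):

```python
AVOID = {"palm oil"}

def flag_product(description, extract_ingredients):
    # extract_ingredients stands in for the AI step: raw description
    # text in, a list of ingredient strings out.
    ingredients = {i.strip().lower() for i in extract_ingredients(description)}
    hits = ingredients & AVOID
    return {"flagged": bool(hits), "reasons": sorted(hits)}

# Fake extractor for illustration; the real one would call the model.
fake_extract = lambda text: ["Palm Oil", "Sugar", "Salt"]
result = flag_product("Peanut butter, ingredients: ...", fake_extract)
```

Keeping the model behind a plain function like this also makes the manual review step easy: log every `(description, ingredients)` pair and spot-check the extractions.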
Started this to replace a spreadsheet I was keeping for bulk purchases. Slowly adding more automation like alerting on price drops or restocking when under a threshold.
However, what I would like is a product where I upload my shopping receipts from a few weeks/months at the one store I go to. The application figures out what I typically buy, then compares the 4-5 big stores and tells me which one I should go to for the lowest prices.
Uploading a receipt to see how much you can save... that's a good idea. I think I can find your email via your personal site. Can I email you when we have a prototype ready?
However, I am in Canada. So can only test it once you expand there. Thanks.
I don't know how things are in the US, but it does seem like the grocery store oligopoly is squeezing consumers a lot, so tools like this are valuable for injecting competition into the system.
I'm also curious whether there is a way to enter a list of items I want and have it calculate which store, in aggregate, is the cheapest.
For instance, people often tell me Costco is much cheaper than alternatives, and for me to compare I have to compile my shopping cart in multiple stores to compare.
A few years ago, I was very diligently tracking _all_ my family's grocery purchases. I kept every receipt, entered it into a spreadsheet, added categories (eg, dairy, meat), and calculated a normalized cost per unit (eg, $/gallon for milk, $/dozen eggs).
I learned a lot from that, and I think I saved our family a decent amount of money, but man it was a lot of work.
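The normalization step is simple arithmetic once you have a conversion table; a sketch of the per-unit calculation (the conversion factors here are illustrative assumptions, not the commenter's actual spreadsheet):

```python
# Canonical units per purchase-unit label: (canonical_unit, factor),
# where factor is how many canonical units one purchase unit contains.
TO_CANONICAL = {
    ("milk", "half-gallon"): ("gallon", 0.5),
    ("milk", "gallon"): ("gallon", 1.0),
    ("eggs", "dozen"): ("dozen", 1.0),
    ("eggs", "18-count"): ("dozen", 1.5),
}

def unit_price(item, unit, price):
    # Normalize one receipt line to cost per canonical unit.
    canonical, factor = TO_CANONICAL[(item, unit)]
    return canonical, round(price / factor, 2)
```

So a $2.50 half-gallon of milk normalizes to $5.00/gallon, which is what makes cross-store comparison possible.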
@mynameisash I'm curious what you learned... maybe I can help more people learn that using Popgot data.
I just dusted off my spreadsheet, and it's not as complete as I'd like it to be. I didn't normalize everything but did have many of the staples like milk and eggs normalized; some products had multiple units (eg, "bananas - each" vs "bananas - pound"); and a lot of my comparisons were done based on the store (eg, I was often comparing "Potatoes - 20#" at Costco but "Potatoes - 5#" at Target over time).
Anyway, Costco didn't always win, but in my experience, they frequently did -- $5 peanut butter @ Costco vs $7.74 @ Target based on whatever size and brand I got, which is interesting because Costco doesn't have "generic" PB, whereas Target has much cheaper Market Pantry, and I tried to opt for that.
Our main example is something like pasta. Our local grocery stores all carry their own brand of dirt cheap pasta but it’s not as good as the more expensive pasta at Costco. Comparable pasta at the local grocer would be more expensive.
For items that are carried at both stores, Costco is usually no cheaper than the regular retail price and rarely much more expensive.
We have historical price tracking in the database, but haven't exposed it as a product yet. What do you have in mind / what would you use it for?
Thank you. Seriously.
Note: I searched "Protein bars", and it treated all protein bars equally. The 1st-20th cheapest had <15g of protein per bar. I had to scroll down to the 50th-60th to find protein bars with 20g of protein, which surprised me for being cheaper than Kirkland Signature's protein bars.
So yeah, we'll add it. If you shoot me an email (or post it here?) to chris @ <our site>.com I'll send you a link when it's done. Should take a day or two.
You can search for full or partial rows and see the whole query lineage – which intermediate rows from which CTEs/subqueries contributed to the result you're searching for.
Entirely offline & no usage of AI. Free in-browser version (using PGLite WASM), paid desktop version.
No website yet, here's a 5 minute showcase (skip to middle): https://www.loom.com/share/c03b57fa61fc4c509b1e2134e53b70dd
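To give a feel for what the tool automates: the manual version of lineage debugging is to materialize each intermediate step and inspect which of its rows feed the final result. A toy example (sketch only, not the product itself):

```python
import sqlite3

# Set up a tiny table to play the role of real data.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders(id INTEGER, customer TEXT, total REAL);
    INSERT INTO orders VALUES (1,'ann',10),(2,'bob',25),(3,'ann',40);
""")

# The "CTE" under suspicion, run on its own so its rows can be inspected.
cte = "SELECT customer, SUM(total) AS spend FROM orders GROUP BY customer"
intermediate = conn.execute(cte).fetchall()

# The final query: only the ('ann', 50.0) intermediate row contributes.
final = conn.execute(
    f"SELECT customer FROM ({cte}) WHERE spend > 30"
).fetchall()
```

A lineage debugger does this decomposition for every CTE/subquery at once and maps each final row back to the intermediate rows that produced it.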
I would recommend you target data warehouses like Snowflake and BigQuery, where the query complexity, and thus the value prop for a tool like this, is potentially much higher.
I can ping you via email when the debugger is ready, if you're interested. My email is in my profile.
https://chatgpt.com/share/68104c37-b578-8003-8c4e-b0a4688206...
Even if not for DuckDB, you could possibly use this to validate/parse queries.
At my job, all of our business logic (4 KLOC of network topology algorithms) is written in a niche query language, which we have been migrating to PostgreSQL. When an inconsistency/error is found, tracking it down can take days of manually commenting out parts of the query and looking at the results.
The existing information is mostly blogspam from non-experts who try to make a quick buck. They only recommend the two brands with an affiliate program.
I wrote a better guide with help from competing insurance experts. The information is clear and succinct without oversimplifying things. It addresses the specific needs of immigrants.
Then I turned the advice into an interactive recommendation tool. People get clear, specific advice in a few seconds.
The best advice is "don't choose yourself, talk to a broker". The problem is finding an honest one. It took me years to vet a good one. After testing him for a year, I set up an affiliate partnership with him from scratch. The partnership incentivises honesty and neutrality, because he has a lot of skin in the game.
I'm super excited about it. I can't overstate how much of an improvement it will be. Readers get far better advice and easy access to an expert. The broker gets a steady stream of well-informed leads. I get a commission for my trouble. It's a win-win-win situation.
It'd be great to connect. My email address is kane [at] withpoli [dot] com.
Still happy to make little tweaks here and there, since there are some folks enjoying the site.
Colibri—a self-hostable web application to manage your (and your family's) ebook library, intended as a companion to Calibre. I want it to be a friendly, simple, capable, opinionated app to review your books, add metadata to them, get them onto your reader, share them with family and (few) friends, create a public shelf for bragging, connect with Goodreads etc., and exchange comments and reviews on books.
This is explicitly not intended to ever be monetised, and I enjoy all the implications that has on the design. Colibri is as much a tool I personally want to use, as it is a study in small-audience user interfaces, and the quest to build the perfect book catalog schema.
I'm looking for fellow book-loving people to work on Colibri, to create the best personal digital library possible. If you're interested, feel free to reach out via email (in bio), or on GitHub.
Just wondering about encrypted ebooks, from Kindle for example. Are these ebooks supported? Does it only support metadata, and what about content search for these ebooks?
> Are these ebooks supported? Does it only support metadata, and what about content search for these ebooks?
Have you seen https://github.com/colibri-hq/colibri/issues/45? Content search is planned, but requires access to a book's text content, obviously. My recommendation would be to use Calibre to strip DRM and convert the books to epub/mobi files, and import those to Colibri; this has the general benefit of ensuring access to content you bought without depending on Amazon's good will :-)
Colibri is built around a pretty solid data schema (I hope). Check out the migrations folder if you’re curious :-)
I got diagnosed with type 1 diabetes in Feb (technically LADA as it's late onset). I'm the first in my family with it so I had zero info on it. I tried getting some CGMs to use but most don't work in Kenya as they are geo-locked, and even apps for measuring carbs like CalorieKing are not available in my region. I was really frustrated with the tech ecosystem, and started working on My Sukari as a platform of free tools for diabetics.
I mostly get time to work on it on weekends, so it's not yet ready for public use, but I've fully fleshed out one of the main features: the Sugar Dashboard, a dashboard that visualises your glucose data and helps you analyse it more easily.
To help with demos, I've shared my Sugar Dashboard here: https://mysukari.com/tools/sugar-dashboard/peter
I'm really passionate about this and about getting as many free, practical tools into the hands of patients as possible (it honestly shouldn't be this hard to manage a disease).
I was diagnosed with LADA type 1 diabetes. First in my family to have it.
My immediate reaction was wanting to put together something to track my diet, blood glucose, weight and so on.
Thank you for sharing your experience.
> not having sanctioned access to real-time blood sugar values (the APIs are all one hour behind)
Ah, I didn't know this. One of the prospective tools I had in mind was real-time alerting in case of drastic drops, e.g. pinging a doctor or relative. I think it will have to be limited to the apps/tools that do support realtime.
Would love to get in touch to hear more about your long-term vision for the project!
Is there genuinely a consideration here beyond not allowing activity without paying money to the hegemony?
> I tried getting some CGMs to use but most don't work in Kenya as they are geo-locked
Are you familiar with xDrip? (https://github.com/NightscoutFoundation/xDrip) It works directly with various CGM sensors (Dexcom etc.).
I had a lot of success with Juggluco[1] which is available on the Play Store and provides easy to use APIs to interact with supported CGM readings. Juggluco has an inbuilt xdrip web server but I haven't tried it yet.
Will definitely look into xdrip+ further.
Decided to port the backend to Go + Postgres (on a Hetzner VPS), and retain the frontend on Next.js as a lighter-weight client, moving most of the compute to the backend API. A few reasons for the port: I've had a lot more success/stability with Go backends; Turso pulled multi-tenant DBs, which is what I mostly wanted them for; and Next.js is getting too hard for me.
The Go backend is just the standard library (the 1.22+ server with the nice routing); I mostly write all the lines in this myself.
The frontend is textbook modern React: React 19, Next 15, Tailwind 4. AI mostly writes the code in the frontend (Cursor + Cline + sequentialthinking + context7 + my own custom "memory bank" process of breaking down tasks). AI is really, really good at this. I wrote https://image-assets.etelej.com/ in literally 2 days, 2 weekends ago, with less than 10% of the code being mine (mostly infra + Hono APIs).
Instead of masonry I would like to work on time of flight cameras. But the day has only 24 hours :-(
I got curious about that statement, since shadow maps tend to look much different. I also knew that he left part of the experimental renderer in the GPLed code. So I decided to go down that rabbit hole of Doom 3 graphics, specifically Carmack's experimental renderer, and ended up implementing his approach, as well as adding some Poisson disc sampling and fixing the peter panning: https://github.com/klaussilveira/exp-dhewm3
I also spend a lot of time on archaic game engines. I like to call it software archeology.
https://i.imgur.com/TvnFuDG.jpeg
You can only notice it is not stencil when you get up close:
https://i.imgur.com/WtWojG2.jpeg
https://i.imgur.com/4dniMOT.jpeg
In Carmack's experimental renderer, peter panning was quite high:
https://i.imgur.com/u2ZZJTR.jpeg
This is my version with tweaks and poisson sampling:
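For reference, Poisson-disc offsets for a shadow-map filter kernel can be generated offline; a naive dart-throwing sketch (my illustration of the general technique, not the generator the fork actually uses):

```python
import math
import random

def poisson_disc_offsets(n, min_dist, seed=0, max_tries=10000):
    # Dart throwing: keep a candidate point in the unit disc only if it
    # stays at least min_dist away from every point accepted so far.
    rng = random.Random(seed)
    pts = []
    tries = 0
    while len(pts) < n and tries < max_tries:
        tries += 1
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if x * x + y * y > 1.0:
            continue
        if all(math.hypot(x - px, y - py) >= min_dist for px, py in pts):
            pts.append((x, y))
    return pts

# e.g. 12 filter taps for a shadow-map sampling kernel
offsets = poisson_disc_offsets(12, 0.35)
```

The even-but-irregular spacing is what makes Poisson taps hide banding better than a regular grid at the same sample count.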
I watch a lot of YouTube videos and have found it very annoying that YouTube latches onto one or two topics that you've watched and only recommends that type of content over and over again. Even if you use their "Not Interested" tool, not a whole lot changes in your recommendations.
At the end of last year I launched Relevant - a crowdsourcing website where users can categorize the channels they watch into a defined hierarchy of categories ranging from broad topics like "Science" and "Gaming" to more specific ones like "Phone Reviews" or "Speedrunning".
Although I've had good feedback on the website, engagement has been relatively low and I think that's because it's a big ask to have someone navigate to the website to find the content. This year I decided that I'd bring the content to them by making a Chrome extension that lets users interact with Relevant directly from within YouTube.
It's still a work in progress but I'd love to get a first version out within a month or so to start spreading the idea and gathering feedback. If this is of any interest to the people here on HN then please let me know what you'd like to see most on your feeds.
I also notice that you first said “browser extension” but later you said “Chrome extension”. Are Firefox users going to be out of luck?
I did say Chrome browser because with the deprecation of manifest v2, I had to make a choice about which to support. I decided given Chrome's larger market share that it would benefit the most people sooner. However I'm building it in such a way that porting to Firefox shouldn't require much additional work.
If you have a look at the category tree, where do you think video essays would go in that?
- there are lots of expenses still to come (fertilizer, pesticide, salaries), which may not be worth it if germination is below a certain threshold
- if detected early, there is still time to plant another grain or to fill in the missing plants (requires precision seeders and seeding maps)
- it is a very good proxy for yield estimation (farmers often trade futures even before they have harvested)
For this purpose I have created a dataset (a collaboration between my employer and Sofia University) and published it in order to enable scientific collaboration with other interested parties. I'm still working on the dataset annotations.
https://huggingface.co/datasets/su-fmi/sunflower-density-est...
Yield prediction is huge indeed, because overshooting your prediction means selling stuff at a lower price, while undershooting means paying for someone else's product to make up the difference. There's probably quite a bit of matchmaking between those under- and overshooters, with someone making a good buck out of that too.
Indeed. Making up the difference can easily eat most of the farmer's profits. I guess it is even more pronounced for berries when compared to grains, because they cannot be stored for so long.
However for precision agriculture kavalg might want to consider other methods.
1. Plow the field and seed again (same or different variety or grain). This is a very crude measure, but it is sometimes the right thing to do, because as I said most of the expenses have not been realized yet (fertilizer, pesticide, fuel, payroll, paying rent for the land). It is also a time critical decision, because the window of opportunity for plowing and reseeding is not very wide.
2. Accept the lower yield if it is within a reasonable margin (e.g. comparable to the expenses to plow and reseed).
3. Do partial reseeding over the existing plants (without plowing). This is an emerging strategy with the proliferation of smart seeders, but it requires a precise seeding map to be created beforehand (i.e. based on the density estimate). As an advantage, you spare the expenses for seeds and plowing, however there is some disadvantage as well, due to the different rate of development of the newly seeded plants. Farmers usually need plants to be ready for harvest at the same time, otherwise the quality of the grains suffers and hence the selling price is lower.
In addition to these points, having precise density information after germination helps with the identification of problems, such as seeder malfunction (e.g. nozzles getting clogged), seed quality and meteo data (e.g. too much rain, low temperatures etc).
It's attempting to be the easiest and nicest way to monitor Linux servers. I'm currently implementing zero-config custom alerting. All you will have to do is write a file to a home directory with some JSON in it, e.g. {"event_name": "blah", "interval": "1m", "data": 10} - no server-side config at all!
So should be quite suitable for big deployments :)
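If the on-disk format ends up as plain JSON, the client side could be as small as this (the file name, watch directory, and field names here are my guesses, not the final spec):

```python
import json
import pathlib

# One alert definition per file; the agent would pick it up from the
# watched home directory -- no server-side config needed.
event = {"event_name": "disk_cleanup", "interval": "1m", "data": 10}
path = pathlib.Path.home() / "disk_cleanup.alert.json"
path.write_text(json.dumps(event))
```

Any cron job or shell script can then emit metrics just by writing a file, which is what makes the zero-config claim plausible.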
I’m building Cigaal, a super-ecosystem app that blends social media, commerce, payments, crypto rewards, and AI. Think of it as a lightweight alternative to WeChat, Grab, and Temu – but designed for underrepresented regions first.
Key Features: Marketplace + Delivery + Travel Booking
ZooCoin: our in-app crypto rewards system
AR shopping, short videos, stories, and chat
Cigaal ID: unified profile & wallet
A2A/B2A Agent Economy: agents handle cash deposits, deliveries, and API integrations with businesses
CoreIQ: an AI memory core that assists users across all Cigaal mini-apps (wallet, health, travel, shopping, etc.)
Why it matters: Cigaal helps bring modern digital experiences to places with limited infrastructure, enabling creators, travelers, small businesses, and buyers to thrive in a shared economy without being locked into Big Tech platforms.
Would love feedback, suggestions, or partnerships – especially from people building in fintech, agent networks, or AI-driven apps for underserved markets.
Site (coming soon): [cigaal.com]
Been a freelance dev for years, now going on "sabbatical" (love that word) imminently. Just moved to reduced hours, still in the transition and unwinding phase.
Planning to do a lot of learning, self-improvement, and projects. Tech-related and not. Preparing for the next volume (not chapter) of life. Refactoring, if you like, among other things.
I'm excited.
I had done a half batch last year and really enjoyed the experience.
9 years ago: https://news.ycombinator.com/item?id=13778951
I'll let you know what I've learned by the end when I figure out what "end" means. It's not my goal to go back to what I'm doing now.
I'll post about this again the next time (end of May presumably) this "What are you working on?" thread is posted, for anyone who wants to follow along. Also, email address in my profile; use subject, "What are you working on?".
In brief:
25 years of experience including FAANG. Recently got divorced (complex litigations, fought hard, very satisfied with the outcome).
Now rethinking myself. Want to do something useful for humankind in the rest of my life. Having big ideas for the future. Trying some research-focussed 'side' projects. Considering writing a book. Learning new things. So on.
I'm eyeing some tech conferences in the next months, which would involve varying levels of travel and sightseeing.
I went on a ~three-week roadtrip once, which I saw as a shorter/practice version (it was originally planned for two weeks) of a longer one I'd like to do one day. So I might do something like that.
I've never had to go to a tech conference/meetup for work; the ones I've gone to have been social/community/fun events for me.
I was at North Bay Python last weekend and I'm going to PyCon US in May.
In what I believe is still the spirit of the question, though: I discovered Maltese this week and have added it to my casual study. It's a Semitic language (closely related to Arabic), written in the Latin script, with about 40-50% of its vocabulary being Italian/Sicilian based. It's become my new obsession.
It would take a lot to convince me to pay that much for a product like this. True, it can be inconvenient trolling around for content in your target language, but as a software dev I am pretty experienced with finding obscure things on the internet by finessing search queries. And there are plenty of other apps out there that do spaced repetition for you, and open source tools and data sets that can be used to help you scrape/process vocab (again, if you don’t mind spending some time debugging, which I personally do not). Besides that, I really don’t find it that inconvenient to manually write down words/phrases from books or movies and copy them into my SR deck. On the contrary, I think this overhead actually helps the phrases stick better!
So how would you sell your site to someone in my situation? What would I get out of it?
1. I can study all my languages on the same platform. For me, having studied 30+ languages (note: not claiming to speak them), I just want to "do my languages". I can study dialectal Arabic, minority languages, archaic languages, and the major languages, all in a nice, consistent and (if I can say so myself) beautiful UI.
2. Everything is heavily annotated with all the information you could ever need. This means that I add flashcards, and when I'm learning them, I have the gender, cases, tenses, agglutination, phonetics, translations, audio, conjugation/declension tables, character breakdowns, mutations, idioms, multiword expressions, roots, etymology, etc. (the list really does go on) at my fingertips. This means I just go through my flashcards, and when I have a question, I get an answer. If I have more questions, I have a context-aware chat integrated. For me, this is the autodidact's dream come true.
3. Personally, I really love SRS. I also really hate SRS. If I have to study the words "dog", "walk", & "morning", and I have the sentence "I walk my dog in the morning", I just want to study that one sentence and be done with it. Also, I really want to be able to just play audio sentences and listen to them while cooking/cleaning/walking my dog. Or do a free recall session: write down everything I remember from yesterday, and skip those reviews today (it's more effective than SRS anyway).
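The "one sentence covers several due words" idea is essentially set cover; a greedy sketch of it (my reconstruction of the concept, not Phrasing's actual scheduler):

```python
def pick_sentences(due_words, sentences):
    # Greedy set cover: repeatedly take the sentence that covers the
    # most still-uncovered due words.
    remaining = set(due_words)
    chosen = []
    while remaining:
        best = max(sentences, key=lambda s: len(remaining & set(s.lower().split())))
        covered = remaining & set(best.lower().split())
        if not covered:
            break  # nothing left covers the remaining words
        chosen.append(best)
        remaining -= covered
    return chosen, remaining

chosen, uncovered = pick_sentences(
    {"dog", "walk", "morning"},
    ["I walk my dog in the morning", "The dog sleeps", "Good morning"],
)
```

One review of the first sentence clears all three due words, which is exactly the win over reviewing each word as its own card.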
Lastly, with regard to creating your own flashcards: you can still create flashcards manually on Phrasing. I agree the act of creating flashcards is beneficial, and I'm not trying to take that away from anyone, but I'm not sure I buy that it's the highest-leverage way for one to spend their time. At least for me, it definitely is not. I would rather skip that (admittedly beneficial) step and move on to the next one. YMMV.
It's really hard to narrow the list down to three (I have a hundred things I want to say), but I'll leave it here. Due to popular demand, I recorded a few live demos today so you can see it in action:
https://x.com/barrelltech/status/1917093849219895715?s=61
Higher quality demos will come in time!
Let me know your thoughts. I'm happy to dive deeper into any of this (I mean, I could talk about Phrasing for literally days on end).
EDIT: s/extinct/archaic
I (think I) managed to create an expression, but: 1. it takes forever, I don't know what it's doing, and it's still not done; 2. I have no idea how to use it going onward; 3. it does not seem that I will actually be able to use it, as the app requires a subscription...
Looking forward to a how-to-use manual/page plus a real trial period. If the app requires a subscription with this UX, I will be gone :-)
WRT the expression: all expressions created today succeeded, so if you're still seeing a progress bar, let me know, as that's a bug. It's possible something failed with the live updates, or it really does take several minutes to create an expression (depending on the servers, it can take up to 10 minutes at times, although the typical time is 2-4 minutes, depending on the expression).
If you click on any of the review methodologies, it will start reviewing any of your successful expressions. From there, the experience should be a lot more explorer-friendly :)
What it's doing is: analyzing the sentence, splitting it into phrases, aligning it across all languages, tagging all of the gender/case/tense/etc., researching pronunciation, generating audio, aligning the audio, prioritizing the words (across several axes), and generating explanations/dictionary entries for each individual word.
https://x.com/barrelltech/status/1917093849219895715?s=61
Please excuse the video quality, there will be better paced/audio/scripted demos soon!
What languages do you support?
Learning Latvian through Anki flashcards, but it's not well supported by the main platforms, and there's not a huge amount of content out there for learning.
This alongside a couple of the usual suspects.
As a side note, on a Pixel 4a 5G (an old phone, but not functionally ready for e-waste), the homepage bleeds all over: some components into each other, others off screen. Might want to check that.
https://x.com/barrelltech/status/1917093849219895715?s=61
Please excuse the video quality, there will be better paced/audio/scripted demos soon!
Languages below, if you know their alpha 3 code. Currently having some issues with Thai and Zulu though, so they're temporarily disabled until I have time to fix them.
I have not ~tested~ verified it for Latvian, so I would be curious to hear your thoughts. It has been working pretty well for Maltese, Albanian, and Macedonian though, which should be lower-resource than Latvian!
As mentioned elsewhere, the first time user experience is abysmal. If you reach out though we can hop on a call and get you set up - or in a few weeks I'll have a video done and up. In the meantime, you should be able to create an expression (in the nav bar for desktop and mobile) fairly intuitively.
afr, amh, ara, ara-are, ara-bhr, ara-dza, ara-egy, ara-irq, ara-jor, ara-kwt, ara-lbn, ara-lby, ara-mar, ara-omn, ara-qat, ara-sau, ara-syr, ara-tun, ara-yem, asm, aze, bel, ben, bos, bul, bxr, cat, ces, chu, cop, cym, dan, deu, ell, eng, est, eus, fao, fas, fil, fin, fra, fro, gla, gle, glg, glv, got, grc, guj, hbo, heb, hin, hrv, hsb, hun, hyw, iku, ind, isl, ita, jav, jpn, kan, kat, kaz, khm, kir, kmr, kor, lao, lat, lav, lij, lit, ltc, lzh, mal, mar, mkd, mlt, mon, msa, mya, myv, nan, nep, nld, nno, nob, ori, orv, pan, pcm, pol, por, por-bra, por-prt, pus, qaf, qpm, ron, rus, san, sin, slk, slv, sme, som, spa, spa-arg, spa-bol, spa-chl, spa-col, spa-cri, spa-cub, spa-dom, spa-ecu, spa-esp, spa-gnq, spa-gtm, spa-hnd, spa-mex, spa-nic, spa-pan, spa-per, spa-pri, spa-pry, spa-slv, spa-ury, spa-usa, spa-ven, sqi, srp, sun, swa, swe, tam, tel, tha, tur, uig, ukr, urd, uzb, vie, wol, wuu, yue, zho, zht, zul
EDIT: I have tested it for Latvian, so I know it technically works. However, I have not had any Latvian speakers review its quality.
Also, it looks like you have to get the subscription to use it in any way? It's hard to gauge whether it is for me or not if I have no way to trial it. I found the UI a bit confusing too; I was not sure what I was supposed to do after logging in. As another commenter mentioned, it's asking me to set a reference language but I see no way of configuring it.
The reference language error should not be shown (I mean, it’s not incorrect, but there is a “no expressions” error that should take precedence).
A video is coming :) I didn’t expect so much interest from a comment in this thread. If you get in touch, I can walk you through it personally, otherwise check back in a couple weeks and there will be a video overview.
For my Dutch (which was probably once a high B2, now probably a low B1) I only use the audio review when walking my dog or cooking. It plays the audio of the cards in a playlist, so I practice hearing and repeating them.
It's not so self serve at the moment, but if you get in touch I can get you up and running.
1. I want to get confirmation that the language I want is covered (Hungarian). "120+" doesn't confirm it for me, as Hungarian seems fairly rare for language apps. Can we not just have a "search your language" field?
2. I need to see what the app actually looks like, how it proposes it'll teach me.
I'm one of the eager-to-pay people, because Duolingo is frankly dogshit (ok. Mostly polite) at teaching languages (doubly so ones that it doesn't care about like Hungarian). But I'm so suspicious of language apps, due to being burnt a dozen times.
1. I just started the marketing website a few weeks ago, and if you can believe it, I didn't readily have that information. One of my tasks last week was to compile a list of languages that could work, write some tests for all of the languages, and get a list of supported languages. I have that list now, I just need to put it on the marketing page.
2. As mentioned in other comments, I'm working on a video. I'm preferring to fix glaring issues before making the video, although at this point I'm verrrrrrry close. I have started scripting it, but it takes a lot of time to make a good video (1-2 full days if I don't want to edit it).
Your feedback is completely valid, and they're both reasons why I'm not really marketing the product yet. This thread seemed like a good middle ground though as having some people using all the languages would be really helpful. Also, I've genuinely been loving using it and want to share.
It's just me working on it, so these things are coming, but everything takes a while! Hopefully these didn't sour you on the project permanently :)
EDIT: And yes, it supports Hungarian :)
Nope, not soured. And don't worry, I totally get that things take a bunch of effort and time (doubly so as a solo project). I'll give it a re-look in a little while :)
The first time user experience is really bad, but the app itself makes a lot of sense once you see it in action. Feel free to get in touch with me (there are several methods listed when you log in) and I can give you a personal introduction!
If not then check back in a few weeks for a cool video :)
I signed up, but now it's asking me for a "reference language" (which is a little ironic because it tells me this in English lol). I guess I'll play with this later.
Would love to get feedback on the old languages! It's been really good for the minority languages I'm learning
About Daestro: Daestro is a workload orchestrator that can run compute jobs across cloud providers as well as on your own compute. Think cloud-agnostic batch jobs or step functions.
[1]: https://daestro.com
(Trying to stay a little pseudonymous, so here is a list.)
If you’ve used H3 or S2 it should be familiar; the major difference (apart from the fact that it uses pentagons) is that the cell areas are practically uniform, whereas in alternative systems the largest cells are around twice the area of the smallest, making them less useful for aggregation. The site has many visual demos, e.g. https://a5geo.org/examples/area
The code is open source: https://github.com/felixpalmer/a5
Is it essential that the cells be the same shape?
Also where does the name "A5" come from exactly? I get that 5 is because it has five sides, but why A?
However the symmetry of H3’s hexagonal cells lends itself well to flow analysis, or routing - which is no surprise as it was developed at Uber.
As for the name, it follows the convention of S2 and H3, which come from group theory and refer (loosely) to the symmetry groups of the various systems
H3 is based on a dodecahedron, which is the reason the cell areas range so much; the same is true of S2, though it is based on a cube.
The shapes look a bit wonky when projected onto a map, though, and they may not be as intuitive as the hexagons that would (mostly) result from subdividing an icosahedron, which yields a regular lattice of shapes that is easier to reason about. I think an icosahedron might be a better fit for an indexing scheme for that reason, despite its higher mathematical error in approximating the sphere at a given resolution.
I explored a similar idea four or five years ago, without being aware of H3. My goal was to find a compact multi-resolution geospatial height map format. My idea was closer to H3 than to yours, it seems.
https://ibb.co/album/MDw79y?sort=name_asc
(The source example is from David Tong's physics lectures notes, that were featured on HN last week — https://news.ycombinator.com/item?id=43763223 )
My understanding is you're typesetting books for responsive e-ink readers.
The reason I'm not falling back on OCR is because the general case is full of things, like math equations and inset graphics/diagrams, that can't be OCR'd. The only robust way to deal with those is to treat them as graphical atoms: "this bounding box can be moved around, but should not be split up into pieces".
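That policy could be modeled with two block types; the names below are my own illustration, not from the actual codebase: reflowable text runs, and atomic boxes that can be moved but never split.

```python
from dataclasses import dataclass

@dataclass
class TextRun:
    text: str  # reflowable: may be re-broken across lines at layout time

@dataclass
class Atom:
    x: int
    y: int
    w: int
    h: int  # bounding box; repositioned as a unit, never split into pieces

# A page becomes an ordered mix of the two.
page = [
    TextRun("The wave equation"),
    Atom(x=40, y=120, w=300, h=60),  # e.g. an inset equation image
    TextRun("follows from Newton's second law."),
]
atoms = [b for b in page if isinstance(b, Atom)]
```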
Here’s a detailed write-up of the process: https://samkhawase.com/blog/hacking-kindle/
Right now I'm working on a version 2 that has user accounts, multiple documents, markdown support, and document exports. Everything is local-first and it uses CRDTs to sync documents.
It looks like this: https://i.imgur.com/Plk1DQ4.png. The calculator is mostly the same for now, with a few improvements. It's unstable right now, so I don't want to publicise the dev URL, but if you'd like to become a beta tester email me at contact@numpad.io
Let me know if you'd like to be part of the beta!
I've been building PodSnacks because I found it overwhelming to keep up with podcasts across tech, business, and science. PodSnacks uses LLMs to summarize the most popular episodes from shows like Lex Fridman, Acquired, All-In, Invest Like the Best, and more.
You choose your favorite shows, and we email you short, high-signal summaries — no audio to skim through, no endless backlog guilt.
So far:
- 126K+ episode summaries generated
- 92K+ hours of podcasts processed
- 48–50% open rates
- 2,900+ early users
Still iterating and adding features like bundling by theme, language translations, and audio feeds.
If you're the kind of person who wants more inputs without more noise — would love for you to check it out.
Always open to feedback from HN!
1) transcription (Assembly)
2) summarization (Claude)
3) podcast database (listennotes)
4) emails (postmark)
5) application hosting (render)
I currently have a prompt for it that works for me, based on the transcripts.
Problem: too much duplicate information in any type of publication and too much fluff
Problem 2: YouTube/transcriptions
Probably wouldn’t pay for such a service, but would be very happy using it. Perhaps some channel promo / email based ads for discovery or recommendations.
Edit: Ok, I found the search function in the hamburger menu. A bit unintuitive.
I'm working on a defense drone.
I built a garage workshop with a Shapeoko 5 Pro, X1C, soldering station, and learned CAD (ok just fusion). I have a lil drone in the air and I'm adding OpenHD for vtx, Rpi for on-edge compute (Jetson would be better but is expensive).
Haven't figured out FHSS or GNSS-denied nav yet (tbh I feel like fhss is gonna be harder). And SITL in a good sim remains to be conquered (ros on osx is a terrible experience). I'm also designing a battery pack that's modular, quick-swap, smart/telemetry.
I've shifted a lot of focus to networking (attending SOFweek in tampa) for the normal fundraising/team-building/customer discovery.
I'm also basically broke due to bootstrapping, so I'm about to partner on some B2B AI SaaS consulting with a friend. Today I got Suna up and running; pretty cool.
If it strikes a chord with anyone, I’d love to collaborate! The concept is centered around organization bubbling up naturally from dumping info in with tags, and “typing” your tags so that when you go to a tag’s page, the layout is customized based on what it is - a project, person, etc. A project could have all relevant tasks and notes listed, whereas a person might have name, contact info, etc.
We launched our first location last week and have had a great response so far from business owners and residents alike: https://canandaigua.com With the current tech climate of AI and big tech, things have gotten so impersonal for the majority of actual people. I'm betting on communities and the small businesses that serve those communities in the medium and long term.
I really like the idea of contributing those photos back to OpenStreetMap actually... right now I have Google Maps on profile pages but only because we absolutely need the Google Places API for accurate hours (that's really the only spot businesses update current hours at the moment). But I could see swapping for OSM at least for maps in the near future.
I hit an OpenSearch bug this week where you can't get any browser-based requests to work. It's due to zstd becoming a standard part of Accept-Encoding and OpenSearch not correctly supporting it, so I wanted to install a browser plugin that modified the HTTP request headers the browser sends to my servers.
I don't know about everyone else, but I love that browser plugins are possible; I just hate having to find them. It's mostly due to never knowing whether you can trust a plugin, and even if you find one, you have to worry about it being bought out in the future. With vibe coding I was able to build a browser extension in 45 minutes that had more features than I originally planned for.
I spent more time documenting the experience than building, which is wild. If you are interested you can look at the README at https://github.com/mattheimer/vibe-headers
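For flavor, a Manifest V3 declarativeNetRequest rule for this kind of header rewrite looks roughly like the following (the host pattern and encoding list are placeholders; the actual rules in the repo may differ):

```json
{
  "id": 1,
  "priority": 1,
  "condition": { "urlFilter": "||opensearch.example.com/" },
  "action": {
    "type": "modifyHeaders",
    "requestHeaders": [
      {
        "header": "Accept-Encoding",
        "operation": "set",
        "value": "gzip, deflate, br"
      }
    ]
  }
}
```

Setting Accept-Encoding explicitly (without zstd) for the affected host is one way to work around a server that mishandles the new default.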
But I left the experience with two thoughts.
Even seasoned developers will be using vibe coding in the future.
I think in the near future the browser plugin market will partially collapse because eventually browsers will build extensions themselves on the fly using natural language.
Then as scope expands you're left with something that is difficult to extend because it's impossible to keep everything in the LLM context, both because of context limits and because of the fatigue of communicating that context.
At this point you can do a critical analysis of what you have and design a more rigorous specification.
The only issue is that Gemini's context window isn't consistent (I've seen my experience corroborated here on HN a couple of times). Maybe if 900k tokens were all unique information it would be useful up to 1 million, but I find that whether my prompt has 150k or 50k tokens of context, once the total context passes 200k, response coherence and focus go out the window.
>I've been using AI to get smaller examples and ask questions and its been great but past attempts to have it do everything for me have produced code that still needed a lot of changes.
In my experience most things that aren't trivial do require a lot of work as the scope expands. I was responding more to that than him having success with completing the whole extension satisfactorily.
After I completed the extension I did try on another model and despite me instructing it to generate a v3 manifest extension, the second attempt didn't start with declarativeNetRequest and used the older APIs until I made a refinement. And this isn't even a big project really where poor architecture would cause debt.
Vibe coding can lead to technical debt, especially if you don't have the skills to recognize that debt in the code being generated.
Yes, the customer is a special snowflake, but they still need 90% of whatever every other client in this industry needs.
Feeling increasingly like this is a fool's errand.
Even though we've proved this out with tool sets strung together with duct tape and safety pins, and are therefore the most profitable group within our department, we still need to be 100% billable.
It's only because we're the most profitable group that we can pretend we're all billable while I work with two other people to bootstrap this crazy project
Edit: anyone hiring? Just found out my boss is quitting.
Good luck and we're rooting for you!
It was not intentional but my post really does read like a little story vignette that ends with a gut punch.
Not looking for sympathy so much as fellow appreciators of irony and schadenfreude but here's another kicker.
I pitched this idea to my previous company and was told there was no appetite for it. Just saw on my old company's blog that they released a "digital transformation in a box" program for mid-market clients in this space which is 90% of what I pitched to them. Bad and hilarious timing all around.
People are weird!
Good Luck!
FYI there's a misspelled word - "accountat" on the home page.
The process of generating levels is based on constraint solving. For now I'm not going to say much more about it, since it's the most innovative and valuable part of the project.
I did! That was the main workload of this project, and is still ongoing. I have ideas for improvements, and I also have to fix some sentences manually sometimes. But it's getting there.
The sharing shows a kind of "health bar". Every time you make an illogical guess, it reduces by one. Running out of "health" doesn't prevent you from completing the puzzle, but it does show up in your share. Based on this, you made 4 "illogical" guesses. If you didn't, then there's a bug. Either way, it feels like I should clarify this, since it wasn't clear to you. Thanks again!
I did enjoy the game though. Re-reading the clues is helpful - reinitialize your context window!
- "My only innocent neighbor is to the left of Harold"
- "Barb and I have one innocent neighbor in common"
- Implies Gary is a criminal
But the game won't let me.
As you've seen in replies here already, many term choices you've made have enough variety in how they're conventionally used that people incorrectly assume they know what it means only to see the "nope!" popup when they try to apply it. That frustration is going to spoil first impressions of what actually seems to be a really great puzzle system, which is a shame. The more you can reduce that experience, the less likely you'll be to prematurely burn off players.
A good measure for getting it right would be that you don't even need a glossary at all, or that you can get it so condensed that you can make it more prominent without becoming distracting.
Alternately, you could maybe use symbols instead of words to represent your rules, as more players would intuit that they should learn the symbols before making (wrong) assumptions.
A programming language for teenagers to learn to code by making multiplayer games. I've spent 3 years making the multiplayer completely automatic so you can just code it like a singleplayer game, then flick a switch and it just works. My hope is that teenagers will find it more engaging if they can play their games with their friends. A bit like a combination of Roblox and Scratch.
Currently trying to implement some region affinity so it doesn't just put everyone in the world in the same game. It's rollback netcode so the latency is very good even across continents, but it can't overcome the fact that the world is just too large.
In parallel, I'm building an exercise generator "Jazzln" [2] to help me practice.
[1]: https://www.goodreads.com/book/show/54391815-jazz-harmony
https://www.youtube.com/watch?v=8OJHPWlaCrc
Cheers =3
Currently:
> Main goals: improving my writing and finding some people with similar interests.
> Writing about the walking season I'm currently in, combined with reflections during the walks, most recently about walking around the island of Menorca and about aloneness:
>> https://felipevanbeetz.substack.com/p/build-some-capacity-to...
Thinking about starting another Substack:
> Main goal: audience growth.
> Called something like "Walk more", "Walk intentionally", "Move intentionally", or "How to walk more".
> About: short, (bi)weekly, practical tips/inspiration to move (and specifically walk) more, and more intentionally.
https://podcasts.yayaapps.com/
I have too many podcasts to listen to and realistically not enough time to get through all of them, so I created a web app that transcribes and summarizes your podcasts and emails you a summary every morning.
My plan for the next step is to detect faces, ask the user to label the most occurring faces, and then label all images accordingly. This step seems a bit harder than just feeding the image through Gemini and asking it to create labels.
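A toy version of that plan, with made-up embeddings, a naive squared-distance threshold, and no real face model: group similar face vectors, and the largest groups are the ones worth asking the user to label.

```python
def cluster(embeddings, threshold=0.5):
    """Greedy clustering: join the first cluster whose seed is close enough."""
    clusters = []  # list of (seed_embedding, [member indices])
    for i, e in enumerate(embeddings):
        for seed, members in clusters:
            if sum((a - b) ** 2 for a, b in zip(seed, e)) < threshold:
                members.append(i)
                break
        else:
            clusters.append((e, [i]))
    return clusters

# Two photos of the same face, one of a different face.
faces = [(0.10, 0.90), (0.12, 0.88), (0.90, 0.10)]
groups = cluster(faces)
biggest = max(groups, key=lambda c: len(c[1]))
```

In practice the embeddings would come from a face-recognition model, and a label given to `biggest` would propagate to every image containing a face in that cluster.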
Escape rooms are honestly... almost always a letdown, but the concept has a lot of potential, and there are some really neat ones that stand out, like this local one where you pilot an airship: https://www.portlandescaperooms.com/steampunk-airship
Once I build the best escape room on the planet, I can consider selling the tools.
BUT I also get what you mean: you have to find out what works first. Do you have a blog for your escape room progress? That sounds like such a cool thing to follow along with!
[0] (https://twinery.org/)
I find that many books out there are focused on documenting the "happy path" for rather small and simplistic applications without boundary conditions or business thinking behind them.
So, I thought (as with the first books) that I'd mix things up by also documenting the business context, the questions, decisions, and the decision-making process itself, as well as all the gotchas and "side-quests", rather than showing "here's how you do it" and then expecting the reader to suddenly make the jump from tutorial hell to actual software engineering.
Overall, it's enormously enjoyable, and I hope it goes as far as the first book, and possibly even farther.
It contains words and phrases with their accompanying context.
Benefits of SDFs over the standard boundary representation (used in FreeCAD and similar): you can do "pattern" operations with domain repetition, which makes N copies of a feature O(1) rather than O(N^2); you can deform objects with domain deformation, meaning that if you have a closed-form representation of how you want to deform space you can apply it directly to your object; procedural surface texturing is easy; and CSG operations are easy.
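The domain-repetition point can be seen in a toy Python SDF (illustrative, not the project's code): one sphere evaluated through modded coordinates behaves like an infinite grid of spheres, so the "pattern" costs nothing extra per copy.

```python
import math

def sphere(p, r=1.0):
    """Signed distance from point p to a sphere of radius r at the origin."""
    return math.sqrt(p[0] ** 2 + p[1] ** 2 + p[2] ** 2) - r

def repeated(sdf, p, spacing=4.0):
    """Fold space into one cell, so the child SDF tiles infinitely."""
    q = tuple(((c + spacing / 2) % spacing) - spacing / 2 for c in p)
    return sdf(q)

inside_origin_copy = repeated(sphere, (0, 0, 0))  # -1.0: inside a copy
inside_next_copy = repeated(sphere, (4, 0, 0))    # -1.0: inside the neighbour
between_copies = repeated(sphere, (2, 0, 0))      # +1.0: empty space
```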
The big drawback is that it is hard to provide any workflow based around "selecting" faces, edges, or vertices, because you don't naturally have any representation for these things, they are emergent from the model's SDF.
I have some blog posts on my progress: https://incoherency.co.uk/blog/stories/sdf-thoughts.html and https://incoherency.co.uk/blog/stories/frep-cad-building-blo...
I am solving the "selecting faces" problem by having the SDF propagate surface ids as well as distances. So the result of the evaluation is not just the distance to the nearest point on the surface, but the id of the specific surface that is nearest.
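A minimal sketch of that id propagation (toy Python, not the actual implementation): every evaluation returns a (distance, id) pair, and a union keeps whichever child is nearer, so the id of the closest surface falls out of the evaluation for free.

```python
import math

def sphere(center, radius, surface_id):
    def sdf(p):
        return (math.dist(p, center) - radius, surface_id)
    return sdf

def union(a, b):
    def sdf(p):
        # Keep whichever child surface is nearest to the query point.
        return min(a(p), b(p), key=lambda result: result[0])
    return sdf

scene = union(sphere((0, 0, 0), 1.0, "A"), sphere((5, 0, 0), 1.0, "B"))
hit = scene((4, 0, 0))  # (0.0, 'B'): the query point sits on sphere B
```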
My next big frontier is reliably providing fillets and chamfers between arbitrary surfaces. I have a handful of partial solutions but nothing complete yet.
The most promising idea is one that o3 came up with called "masked clones", the idea is roughly to make a clone of the 2 surfaces you want to blend, mask them by intersecting with an object that is like a "pipe" along the intersection of the 2 surfaces, apply the blend within the pipe, and then add this "blend pipe" as another child of the lowest common ancestor of the 2 blended surfaces.
And after that I need to work on more standard CAD stuff like constraint solving in the 2d sketch editor.
I think the problem of finding edges can be solved by stepping back and redefining primitives. One idea I had was defining them as a 2D sketch and a transformation function along a path. A sphere would be a 2D hemisphere that rotates as it moves along a circular trajectory, with the flat side staying in place, for example. A cube would be a square that moves along a vertical or horizontal path the distance of its sides. You get the idea.
The advantage of this type of representation is that edges can only ever be edges in the 2D shape, or the path traveled by vertices. The hard part is that when you do boolean operations using primitives, you probably want to go back and turn it into a primitive representation (2D shape, transformation along a path).
I wonder if there are any ideas on how to make this an OpenSCAD-style editor instead of an interactive one? I like the text-based style for simple regular shapes, but they tend to end up too simple and regular. Maybe tools like SDF-style edge filleting are a game changer?
But if you are into code-style SDF interfaces, I have some links on https://incoherency.co.uk/notes/sdf.html
You can read an intro here: https://blog.tangled.sh/intro (it’s publicly available now, not invite-only).
In short, at the core of Tangled is what we call “knots”; they’re lightweight, headless servers that serve up your git repository, and the contents viewed and collaborated upon via the “app view” at tangled.sh. All social data (issues, comments, PRs, other repo metadata) is stored “on-proto”—in your AT Protocol PDS.
We just shipped our pull requests feature (read more here: https://blog.tangled.sh/pulls) along with interdiffs and format-patch support! https://bsky.app/profile/tangled.sh/post/3lne7a4eb522g
We’ve also got a Discord now: https://chat.tangled.sh — come hang!
The new one is browser based.
The mud engine runs in a web worker and takes advantage of some of the modern web tricks to do stuff. For instance, data files (think: area files that don't change often) can be stored remotely and then cached with a service worker. This allows the MUD to run offline. But that's only fun if you're playing solo.
IO between the UI and worker is handled by message passing.
Multiplayer is handled by the MUD opening an outbound connection (probably websocket) to a connection collector host. Other players would then connect to that host and have their IO routed appropriately. The host can even be smarter: it could be a specialized Discord client, allowing users to play from there. Firebase may also be involved. I don't know.
The important bit is that this is still basically message passing, so the engine won't need to know the difference between the local user and a remote user.
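That symmetry can be sketched in Python (illustrative, not the actual worker code): the engine consumes (player, command) messages from one queue, and the local UI and the connection host both just post to it.

```python
from queue import Queue

class Engine:
    def __init__(self):
        self.inbox = Queue()
        self.log = []

    def post(self, player_id, command):
        # Called by the local UI and by the network host alike.
        self.inbox.put((player_id, command))

    def tick(self):
        # The engine never learns whether a player is local or remote.
        while not self.inbox.empty():
            player, command = self.inbox.get()
            self.log.append(f"{player}: {command}")

engine = Engine()
engine.post("local", "look")       # from the UI thread
engine.post("remote-42", "north")  # routed in over the websocket
engine.tick()
```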
The MUD database would be an IndexedDB. Probably. I haven't thought as much about that yet.
I am sure all of this is theoretically possible, at least.
I really like your multiplayer idea. That sounds really clean and flexible.
This is the second part of a series on likely-correctness; the first, on how to create likely-correct software, is here: https://www.osequi.com/studies/list/list.html
I vibe coded the above, including everything: code, design, logos. Did it solo. It has full error handling, video-generation notifications (it takes a while), and a credit system. I myself can't believe it's been done in a month with AI. It's already in closed beta on the iOS and Android app stores. Let me know if you want to try it out before public release.
My quoted comment above was 28 days ago. That's while working on this part time and with a family.
EDIT: Added context.
It'll be my first major personal project since I haven't had the time or a serious idea worth implementing (In my mind, at least), so I'm excited!
My current plan is to test the counting with people with good rhythm sense and once I find a good algo for beat detection I'll proceed with writing the app.
This is one of my long-standing passion projects, a simple web-based music sequencer built to have a very low barrier to entry.
At a very early prototyping phase, but having a ton of fun. https://x.com/artofpongfu
It supports multiple languages, currencies, European VAT deductions, and more.
I built this tool for myself so it’s kinda like a personal software. Hopefully, others will find it useful too :)
I quit working about ~20 months ago, started a low-carb time restricted eating regime, lost ~230 lbs, have been doing 15-25 hours of cardio a month for the past year, started going to therapy, got an ADHD diagnosis, read a bunch of classic literature (Middlemarch and The Count of Monte Cristo are my favs thus far), maintaining a 19 week streak of Latin language learning through the killer Legentibus iOS app, and I'm playing guitar every day (trying to nail the major scale in three different fingerings across all 7 modal starting points).
I miss my old job working with Vitess and Kubernetes a lot (Hi Sam!) but eliminating all work stress has really allowed me to take control of my life.
230lbs is wild. Great job :)
Sleep is essential. Getting a full night of quality sleep will probably help more than anything else. If you often wake up feeling tired already then maybe do a sleep study to see if there are problems there.
Exercise helps.
Developing a system for organization is key to unburdening your mind. There are various books for adults; for me, I also found that an executive function coach gave me the right amount of accountability and discipline to practice the systems enough that they didn't feel like a burden anymore.
I got a prescription for Adderall, and I take a 10mg dose a couple of times a week when I need to get psyched up for deep-cleaning the house, doing a week's worth of food prep, or folding and putting away a mountain of laundry. Moving your body doing repetitive manual tasks just feels amazing on speed. My wife is pretty happy not having to over-function for me, and to have help around the house. Having healthy food that you want to eat in your fridge is also very helpful.
I seem to get plenty of motivation and focus for doing things I want to do by doing an hour of cardio every morning at the gym. Recently I've been moving between HR zones 1-4 while listening to the 4 movements of Beethoven's 9th, which I have a recording of that is just over an hour. I sometimes time it so I walk out of the gym at the end of the 4th movement to thunderous applause in my headphones. Reading while doing boring steady state Zone-2 on the elliptical is amazing -- I rarely feel smarter or more engaged. I read all of Jane Austen's novels that way, and it inspired me to read more!
I also go to bed every night at the same time, get up at the same time every morning, and if I don't manage to get at least 7 hours of sleep I take a nap during the day. I have developed friendships with the regulars at my local coffee shop and often have good conversations at nearly the same time several times a week. I meet with a couple of old college friends every week for dinner. I also take leisurely cold-ass showers every morning when I get back from the gym to prove to myself that I have executive function, and because by the time I've shaved and brushed & flossed I feel like a million bucks.
I am purposefully not doing chain loading or multi-stage, to see how much I can squeeze out of 510 bytes.
It comes with a file system, a shell, and simple process management. Enough to write non-trivial guest applications, like a text editor. It's a lot of fun!
Not quite done with it yet, but you can see the progress here https://github.com/shikaan/OSle and even test it out in the browser https://shikaan.github.io/OSle/
[1] https://shikaan.github.io/assembly/x86/guide/2024/09/08/x86-...
So far I have ~3M distinct IP addresses per 30 days, of which 1.7M are fresh proxy IPs. The DB contains only verified IP addresses through which I've been able to route traffic; it DOESN'T rely on 3rd-party/open-source data sources.
I also made an open-source proxy IP block list based on the data: https://github.com/antoinevastel/avastel-bot-ips-lists
I don't include mobile proxies since they're heavily shared, so knowing that an IP address was used as a proxy at some point is basically useless.
Regarding your remark, indeed, there are several shared residential IPs, including IPs of legitimate users who may have a shady app that routes traffic through their device. That's why I don't recommend blocking using IP addresses as is. It's supposed to be more of a datapoint/signal to enrich your anti-fraud/anti-bot system. However, regarding the block list, I analyze the IPs on bigger time frames, the percentage of IPs in the range that were used as proxies, and generate a confidence score to indicate whether or not it is safe to block.
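The range-level scoring might be sketched like this (the CIDR, counts, and the idea of dividing hits by total observations are illustrative, not the actual formula):

```python
from ipaddress import ip_address, ip_network

def range_confidence(cidr, proxy_ips, total_observed):
    """Share of observed addresses in a range that acted as proxies."""
    net = ip_network(cidr)
    hits = sum(1 for ip in proxy_ips if ip_address(ip) in net)
    return hits / max(total_observed, 1)

score = range_confidence(
    "203.0.113.0/24",
    ["203.0.113.5", "203.0.113.9", "198.51.100.2"],  # one IP is outside
    total_observed=4,
)
# A high score would suggest the whole range is safe to block.
```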
I’m working on a scraping project at the moment so looking at this too but from the other end. Super low volume though so pretty tame - emphasis on success rate more than throughput
I bought a 4G dongle for use as last resort if nothing else gets through. And also investigating ipv6
Currently planning on doing a layered approach. Cloud IPs first etc.
Interesting challenge but also trying to be somewhat respectful about it since nobody likes aggressive bots
A few released apps for now that are iOS/macOS, with some more exciting things in the pipeline.
If you’re a photographer who has frustrations with current mainstream photography software (whether capture/edit/publishing), I’d also love to hear from you - you can find me as Héliographe on (mastodon,bluesky,threads,x) or just email me at contact@heliographe.net :)
- Voice based note taking tool that does transcription locally
- Markdown files saved to any folder(s) you set up.
- Help with ideation, rumination, todolists
- Alternative to when writing is too high friction
2. VoiceType
- wisprflow, superwhisper alternative
- runs locally so no subscription
- dictation tool, best for when using cursor, windsurf, chatgpt or talking to an LLM
7 months ago, I was looking for a job and got frustrated with the current resume builders, so decided to build one exactly how I wanted a resume builder to be.
- Free (like really free).
- No signup, no login.
- Has AI features to improve text.
- Find jobs matching the resume.
I try to use a pdf upload to gather info more easily for a user (eg from LinkedIn). Maybe you can incorporate that also?
As for LinkedIn, you’re the second person asking for this. I can implement it but still not sure how it will be adopted, along with the cost of fetching a LinkedIn profile.
Maybe just an idea I think would be worth charging for to offset costs on yourself: if you could get a few accounts on different recruiting software packs (BambooHR, smartrecruiters, etc) and then let users test their resume with different recruiting software's AI filtering tools, that could help a ton of people. You'd have to make a lot of different job descriptions/postings in each one, but you could probably craft them all generically enough to fit most careers.
Once that's going, maybe a pay-per-use fee to test your resume that gives the paying user a couple unique recruiting links to a few job postings, and then use playwright or something to capture screenshots of their profile in the backend(s).
I like your idea, but it's hard to implement due to privacy concerns, and sharing accounts could violate the ToS of these platforms.
But I do have a feature in mind to do something similar. My plan is to always keep it free, with any feature, but I have to think of monetisation. Right now, it would be charging employers/job boards instead of job seekers. I've been there, the job search is stressful enough to add financial burdens.
Would like to see more written down on how the résumé-building part works.
Would love to see something that can start from a pre-existing CV and help refine it. (My current CV is my own record of projects I have undertaken, so it has a lot of detail and runs to approx. 10 pages.)
You can upload your current CV, and it will parse it to fill out the form for you. You can then amend or improve it, choose a design, and export it as a high-quality PDF.
I will try to write about it. I faced some challenges related to exporting as a high-quality text PDF, including multilingual support and ensuring JS messages are all translated, among others.
So most of my project work is home-based, after years of being able to chase (and execute) dreams at work.
On the technology front, I'm finally investing in a proper network core for home. WiFi 7 AP, 2.5Gig core, PoE everywhere, zone-based firewall. Still mapping out DHCP scopes and VLANs, but once that's done it'll be moving on to proper IoT and Home Assistant build-outs to prepare for the Unfolded Circle 3 later this summer. Also looking to redo my two N100 hypervisors off Proxmox and back to RHEL + Cockpit, or some other Linux + KVM implementation; from there, it's all about Kubernetes, Ansible, and Terraform. Really just a lot of oft-postponed side projects because I had amazing fulfillment at work, that I suddenly have ample time for now.
Outside of the tech stuff, I'm still trying to get some decent photos of two local birds of prey that have been hunting in my neighborhood. They seemingly spite me by only showing themselves when I don't have my camera with me, but dangit, I will photograph them.
On the writing front, I've got a few topics jostling around in drafts: speculating on potential futures of LLMs, the internet as a psychohazard, and a series of "fundamentals" to try and teach my non-techie circles more about how computers and the internet work, so they can do some modest self-hosting and get off centralized services. I'll likely dovetail some of them with my own home projects, writing them alongside the documentation as I make progress.
Also, a visual programming language implemented as a PICO-8 script, where the "programming" is done fully in the sprite editor.
I also "recently" (~2 years ago) added Twitch interactive mode so that streamers could play against chat, but so far haven't gotten anyone to play it on stream (that I know of). While I was adding features, I realized that I could easily make a version that works over WLAN.
On the side I'm working on a mixed reality mode but it's been a slog trying to adapt the game to fit into various room sizes.
These cloud flavours have a compatible SQL dialect, but it's often details like missing features (CDC and Auditing on RDS are good examples) or differences in system objects that make it difficult to support your app on these platforms.
I capture all sql statements, run them through multiple SQL parsers to find all the system objects your app is using (tables, functions, stored procedures, etc). I then check them all against a catalog I have built of all system objects for every version of SQL Server on every platform.
I then give a report to see which platforms your app will work on, which ones it wont work on and which system objects are the problem.
Other database engines will be added once I get it working end to end (almost there).
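The catalog check described above could be sketched roughly like this. The real product uses multiple SQL parsers; this toy version uses a plain regex match and an invented two-entry catalog, just to show the shape of the report:

```python
import re

# Toy catalog: which system objects exist on which platform.
# Entries are illustrative only, not real compatibility data.
CATALOG = {
    "sys.dm_exec_sessions": {"sqlserver-2022", "azure-sql-mi"},
    "msdb.dbo.sp_add_job":  {"sqlserver-2022"},  # SQL Agent is absent on many PaaS flavours
}

def check_platforms(sql, platforms):
    """Return, per platform, the referenced system objects it is missing."""
    used = [obj for obj in CATALOG if re.search(re.escape(obj), sql, re.I)]
    return {p: [o for o in used if p not in CATALOG[o]] for p in platforms}

report = check_platforms(
    "EXEC msdb.dbo.sp_add_job @job_name = 'nightly';"
    "SELECT * FROM sys.dm_exec_sessions;",
    ["sqlserver-2022", "azure-sql-mi"],
)
print(report)
```

An empty list means the app's statements should run on that platform; any listed object is a porting blocker.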
I'm not doing this because I'm convinced it's a great idea, or because it's going to revolutionize computing, or because it will be a good language, model or beneficial in any particular way, I'm doing it because I think it's fun, neat and interesting to think about (and talk about).
maru has this vau operator implicitly, rather than explicitly: https://github.com/attila-lendvai/maru
Building this for myself mainly, but hoping others might find it useful. Still very early and building out the bare essentials, but then the hope is to keep reading marketing books and use that to improve the platform.
The idea is simple: a job search site that actually works for job seekers. It focuses on listings posted directly by companies (no spam, no middlemen, no bloated sponsored posts drowning out real opportunities). It uses AI to surface better matches, recommend jobs intelligently, and pull out the most important info from job listings automatically. You can also bookmark jobs you’re interested in and track them easily — no signup needed unless you want personalized suggestions.
It’s still early, and we’re improving it constantly. Would love for you to check it out, try a search, and let me know what you think — good, bad, rough — all feedback helps. Thanks a lot for the early support!
Site: https://jobswithgpt.com
Pretty anxious about that, given how massive of a life change it is, and how much will be riding on me getting good grades.
The majority of people in my MSSE program were also heavy on work experience. It made for a far more interesting peer group than the few that had just come in from undergrad. Having that work experience meant that you could look at the coursework from the framework of how it would play out in an actual corporate environment.
It was really fun discussing how to apply the SQA and Project Management coursework in the workplace with people from very different companies.
This is on my mind too.
Am an engineer (EE + CS) with 25 years of work experience, with a passion for Physics. Am widely known in my circles as a scientist/physicist, however, I do not actually know much. Learned some Lagrangian and Hamiltonian classical physics recently.
I personally do not mind going for even an undergrad in Physics if that would be a better fit for me to learn. :-)
I'm kind of contemplating the same thing - not leaving the corporate world, because I have too many bills and debts for that - but getting a PhD in something, maybe math or CS. I don't know that anyone really does that in their forties, though...
Best of luck!
As a freelance web dev who also has an Airbnb side hustle, I got tired of expensive bookkeeping for a few transactions per month. I tried DIY, but my time is worth more than that.
Most importantly, both pros and DIY got subtle things wrong and caused me to miss out on thousands of dollars in deductions and credits.
So I’m making an AI bookkeeping chatbot that will handle all that for me. The aim is full automation while surfacing tax deduction and credit opportunities throughout the year. Wouldn’t it be awesome to claim the R&D tax credit or take home office deductions with zero effort?
At the end of the year, Kumbara puts together a series of financial reports that you plug into your tax software or hand to your CPA.
Working hand-in-hand with CPAs and some platform partners on this. Would love to hear from other solopreneurs or engineers who want to help build the future of financial freedom.
I procrastinate on taxes for the silliest of reasons: I don't want to spend the time to create a P&L for my side gig to give to my accountant. Takes less than an hour but it is just annoyingly tedious.
All my income and the majority of my expenses are done through PayPal because I want to minimize the bookkeeping effort. For some unknown reason, they don't have an annual P&L statement as a standard report. This year I tried a bunch of things with Copilot using PayPal reports. The most eye-opening result was that I could give it a .csv file with all my transactions for the year and tell it to generate a P&L statement with expense categories. It managed the task almost flawlessly. The only cleanup I had to do was to recategorize a couple of items. To say that I was blown away is an understatement.
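The aggregation the LLM performed can also be done deterministically once the categories are in place; a minimal sketch with made-up columns and transactions (real PayPal exports have many more columns, but the idea is the same):

```python
import csv
import io
from collections import defaultdict

# Hypothetical transaction export: date, description, category, amount.
raw = """date,description,category,amount
2024-01-05,Client invoice,Income,1500.00
2024-02-10,Domain renewal,Software,-20.00
2024-03-15,Client invoice,Income,800.00
2024-03-20,Ad spend,Marketing,-150.00
"""

# Sum each category, then split income from expenses for the P&L.
totals = defaultdict(float)
for row in csv.DictReader(io.StringIO(raw)):
    totals[row["category"]] += float(row["amount"])

income = totals.pop("Income", 0.0)
expenses = -sum(totals.values())  # expense rows are negative amounts
print(f"Income:   {income:9.2f}")
for cat, amt in sorted(totals.items()):
    print(f"  {cat:<10}{amt:8.2f}")
print(f"Net P&L:  {income - expenses:9.2f}")
```

The LLM's value-add is really the categorization step; once rows carry a category, the statement itself is a few lines of code.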
It's something I've been thinking about for years, but kept avoiding because I knew it'd be a huge commitment and I figured surely someone else would do it eventually. But I decided to finally tackle it and learn some new skills. 40k has literally 1000s of special rules across all the armies, so it's been fun designing a highly modifiable architecture.
While Kubernetes offers power at the cost of complexity, Uncloud focuses on simplicity for common deployment workflows.
Progress from this month:
- Enhanced Docker Compose support: You can now deploy your entire stack from standard Docker Compose files. This includes volumes, environment variables, resource limits, scaling and logging configuration.
- Volume management: Create and manage Docker volumes across your cluster with automatic scheduling based on volume location.
- Context management: a CLI command to quickly switch between multiple clusters, e.g. a homelab and a production one.
I'm particularly excited about the volume management system as it brings cluster semantics to good old Docker volumes. It uses a constraint-based scheduler that ensures services sharing volumes are properly co-located. If you're seeking something between "just Docker" and full Kubernetes for deploying applications on your own infrastructure, I'd love to get your feedback on Uncloud.
https://ncatlab.org/nlab/show/computational+trilogy
It helped me a lot to put my existing programming knowledge into context while learning category theory.
I am suggesting this since you said you want to better understand functional programming. Category Theory, as mathematicians look at it, is an extremely abstract field. If you want to do pure math related stuff in Category Theory, and only then, I would say important prereqs are Abstract Algebra and Topology. I believe the motivation for Category theory lies in Algebraic Geometry and Algebraic Topology, but you definitely don't need to be an expert on these to learn it.
It’s free to use with a reasonable daily limit (5 lessons). To access unlimited learning and additional utility features, it's ~$4.99/month
https://github.com/turbolytics/sql-flow
I think the industry lacks lightweight, fully featured stream processing solutions. It's either heavyweight JVM frameworks or bespoke custom solutions.
SQLFlow aims to be a middle ground: performant, fully featured, observable, and SQL-based.
In the process of adding stuff like Euclidean sequences, and trying to figure out how to generate melodies. Been considering using something like simple Markov probabilities from a bunch of jazz standards, but also starting to read more of the music theory behind it.
It's a programming project but it's directly related to me trying to figure out music. So not a random sequence of notes in scale or not. The idea is more to generate backing tracks or song starters.
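For the Euclidean sequences mentioned above, there is a compact modular-arithmetic shortcut that spreads pulses as evenly as possible over the steps; it matches Bjorklund's algorithm up to rotation:

```python
def euclidean(pulses, steps):
    """Spread `pulses` onsets as evenly as possible over `steps` slots.
    Slot i is an onset when the running multiple of `pulses` wraps
    around modulo `steps` (the classic shortcut for Bjorklund)."""
    return [(i * pulses) % steps < pulses for i in range(steps)]

def as_string(pattern):
    return "".join("x" if hit else "." for hit in pattern)

print(as_string(euclidean(3, 8)))  # → x..x..x.  (the tresillo)
print(as_string(euclidean(5, 8)))
```

Rotating the pattern to taste gives the familiar world-rhythm variants, which makes it a handy building block for backing tracks.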
Strong recommendation: hire a teacher. Even with experience playing four instruments, when I decided to learn another, I still hired a teacher.
A layoff killed that goal for the foreseeable future.
Theory has helped me practice like I think you're supposed to. More structured, more analysis. It also tickles the same part of the brain that certain comp sci topics do.
I think it came from wanting to learn how to improv, and then wanting to make my own songs. So I make a few tracks a week, of different genres, depending on what I'm interested in at the time. I've seen improvement, and I take notes about what I learned/what works.
After three tough years in finance, I’ve decided to start a trip around the world, visiting the UK and the US, spending one month traveling through Japan, and then heading to South Korea.
There might be problems in life that can’t be solved with a three-month sabbatical trip around the world, but luckily, I don’t have any of them. :)
At the same time, I’m exploring ways to apply LLMs to video games and have built a small prototype of an LLM-based D&D system. Let’s see where this goes!
If you’re ever in Los Angeles or elsewhere in California, give me a ring!
It's been a blast to build. At one point I was hosting my own ORS server, but that's extremely silly to do when Mapbox has a very generous free tier. Learning about all of the open source tooling and open data available in the mapping world has been incredible.
The cost of the Hetzner box it runs on isn't much more than a Caltopo pro subscription with the added bonus of being much easier to share with non-hikers.
A quick demo: https://www.loom.com/share/a5f7a7c23457400aa92b3f0f71a0008f
The app: https://go.trailboundapp.com
No marketing site, no onboarding, or any real UX attention paid to the app at this point, it is mostly Just For Me and will probably remain that way once I land a job again.
I'm starting to chip away at an iOS app so I can get offline access to maps and routes, but I'm not a Swift dev so the going is slow.
What they are, their capabilities, and their risks, and write about it on my Substack.
Encyclopedia Autonomica: https://jdsemrau.substack.com
On that note, I also curate a list of resources around AI Agents that fit my narrative:
https://hellocsv.github.io/HelloCSV/
It basically runs 100% in your / your users' browser, and I'm adding localStorage support so the user can refresh the page without losing their progress.
Love flatfile but it sends your data to remote servers, and we're a healthcare company, so we need to have full control over our data storage
Hoping someone on here will find this useful!
We've recently put a paper out on arXiv (https://arxiv.org/pdf/2503.12533) and the project page is: https://beingbeyond.github.io/being-0/
BAAI is also hiring! No fully remote positions though. :-/
[0] https://walledgarden.ai/r/wgd [1] https://www.blackpoolgazette.co.uk/news/people/twice-wed-fyl...
The direction Big Tech is heading made me reevaluate what is important in my career and life.
Basically it continuously records into a buffer (length is configurable), and if you realize that you wanted to start recording 30 minutes ago you can recall the buffer and have everything saved.
In my work as an audio engineer I was in this situation a couple of times, and since there was no tool for that on the market, I'm building it.
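The recall-the-buffer idea can be sketched with a bounded deque; the byte-string chunks and chunk count below stand in for real audio frames:

```python
from collections import deque

class RetroRecorder:
    """Keep the last `capacity_chunks` audio chunks in a ring buffer.
    `recall()` returns everything still buffered, so a moment you
    forgot to record can be saved after the fact."""

    def __init__(self, capacity_chunks):
        self.buffer = deque(maxlen=capacity_chunks)

    def push(self, chunk):
        # Called from the audio callback; old chunks fall out automatically.
        self.buffer.append(chunk)

    def recall(self):
        # "Save the last N minutes" in one shot.
        return b"".join(self.buffer)

rec = RetroRecorder(capacity_chunks=3)
for chunk in (b"aa", b"bb", b"cc", b"dd"):  # oldest chunk is evicted
    rec.push(chunk)
print(rec.recall())  # → b'bbccdd'
```

A production version would size the buffer in samples, not chunks, and write to disk incrementally, but the eviction logic is the same.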
I've explored all existing commercially available solutions and none match my requirements/standards (high-quality audio codec/DAC, USB-PD, steering wheel controls, no splicing or cutting into the loom, no cutting up the dash, etc.).
After researching how the Toyota/Lexus infotainment systems work, I found they are highly modular and actually quite easy to produce custom modules for. The proprietary AVC-LAN protocol is well understood and easy to integrate with. The displays are just NTSC composite, and touch events (in fact, all button presses, including steering wheel media controls) are emitted on the AVC-LAN bus.
A simple board with some interfacing logic, a DAC and an RPi nano should be all I need to allow good Bluetooth audio integration, and lossless audio via wired USB, both controllable from the steering wheel. You could do all sorts of fun things with the display (such as displaying vehicle status from OBD, a lap timer, a G-force meter, etc.). If I use an RPi compute module then running Android Auto should be quite feasible.
It's just a hobby project though, I'll probably open-source everything. Not planning on commercializing it.
Here is my version 1.
https://youtu.be/pJjq_0hZEh4 https://youtu.be/drCtDADRJe8 https://youtu.be/jLCiIB6z968
Version 2 is being built around a CM4 IO board and will include my AVCLAN implementation, DAC, GPS and is intended to run AOSP Android. The board is fully designed already and I'm working on the custom Android build now.
It's also fun to see how each Toyota product uses the same modules and buttons, just different layouts and front panel designs. Your solution (sans the custom display that will only fit your model) should work on just about every Toyota and Lexus model from the AVC-LAN era (2000~2010)
Not sure if you'd be willing to share your code or diagrams, but anything would help me tremendously. My contact info is in my bio.
They do, actually!
Most car manufacturers, including Toyota, now use the MOST bus for infotainment system interconnects, which is an open standard. Since your car is a post-2010 Toyota, it will have a MOST system.
> I can't justify replacing the system in my 2016 Highlander just to get Android auto
You don't have to, there are plenty of aftermarket modules available that will add Android Auto to your existing MOST infotainment system. Can be a hit-or-miss though in terms of quality and integration. If you want something reliable and well integrated, stick with the OEM solutions.
> I don't like having to rely on the manufacturer nor Google/Apple for connectivity basics.
You're already relying on the car OEM for mobility and your safety, and you're already relying on Google or Apple for connectivity (aka your phone's OS).
Remember that your 2016 Highlander (3rd gen XU50) was introduced in 2013, so it was developed well before that. Android Auto didn't even exist back then, and we still hadn't consolidated on the iOS/Android duopoly we have today. Heck, we were still using Windows Phone and BlackBerry OS back then. How could Toyota engineers have designed something that would still remain compatible with devices in 2025? It's quite remarkable that it still works at all.
The idea is essentially: An Erlang-based control plane, supporting ENet and WebSocket connection modes, with Protobuf for messages. Erlang has an excellent concurrency story, and I think it's a great fit for game servers. I've wrapped up a bunch of this work into behaviors on the Erlang side, such that developers can target the "gen_zone" behavior (for instance) to implement a tick-based game server. I'd like to expand that into other types of games, such as turn-based games.
I've also got a Godot plugin for generating a client library based on your protobuf schema. The plugin handles session stuff, exposes functions for client-to-server messages, and emits signals for server-to-client messages.
These days I'm working on integrating Luerl (Lua in Erlang) and Love2D support. I want to be able to take advantage of Erlang on the back-end while writing the majority of game logic in Lua. Further down the road I want to explore hot reloading parts of the Lua game state on the client/server, perhaps with an in-game editor, to develop the game "inside-out", in a way.
https://mini2-photo-editor.netlify.app to try it out (https://github.com/xdadda/mini-photo-editor)
https://github.com/wantbuild/want
Want is a hermetic build system, configured with Jsonnet. In Want, build steps are functions from filesystem snapshots to filesystem snapshots. Want calls these immutable snapshots Trees. Build steps range from simple filesystem manipulation and filtering to WebAssembly and AMD64 VMs.
I’ve done my share of programming languages (PHP, C++, Python, Ruby, Haskell) and for the last 10 years I’ve been working in OCaml (which I love so much) but Rust would be a nice addition IMO.
And I never implemented LSM style database before! So that’s fun.
I only just started and the pace will be slow (I have 3h/week to spend on it on a good week), if you are curious: https://github.com/happyfellow-one/crab-bucket
LSM style should be an interesting path, especially when it comes to optimization.
I really wanted an optimisation rabbit hole, and it seems like this project is going to deliver on that :)
I also tweet about the progress on @onehappyfellow if you’re interested
The gecko comes from New Caledonia, so my goal will be to replicate that environment as closely as possible. This will be difficult, since most of the plants on the island are only found there, but you can get surprisingly close.
One awesome fact I learned about this: conifers actually started out in warm climates. They just got out-competed once angiosperms (flowers!) came about.
As someone who grew up on the internet, I feel that the freedom it gave me to explore the world at my own pace allowed me to develop my personality. The thought that every second of my online life will be logged in an app and accessible to my parents honestly sounds horrible.
I respect every parent's decision regarding how they raise their children, but I invite you to reflect on whether growing up under this level of surveillance is something you would have wanted for yourselves.
I do want to give them a little privacy, and it gets the level about right: restricting some apps at certain times, access to Chrome but not xhamster, locking it for certain periods of time and having them request more screen time past 4 hrs/day, or locking the phone whenever they've barricaded themselves in their room all morning.
I don't necessarily mind that they're watching YT or TikTok and such. I just want to kick them out of the doom scrolling cycle every now and then.
Really like the look of the product page!
Scratching an itch: the intention is that it's a map, centered on the user, that shows all (configurable) things of interest nearby. Think Atlas Obscura but much more local - e.g. AO doesn't list every prehistoric burial mound on the planet, but I want to know where they are ;)
It's a translation map of Indian languages - type in a word, see the translations across 22 languages.
I was inspired by this HN post (https://news.ycombinator.com/item?id=43152587), and wanted to make something similar for India (which has similar linguistic diversity). Translations are fetched with Google Translate, but I also display 'romanizations' (transliterated into Latin script), which are generated with a local ML model.
Now that it's done, I've mostly been working on a little Markdown-to-HTML parser in Haskell.
A Markdown-first CMS and website builder for blogs, newsletters and documentation websites.
I've been blogging for more than 10 years, and the only thing that made it possible is Markdown. That's why I've decided to build a complete publishing platform to replace the complex and fragile setups of bloggers and startups. Do you really need a CI/CD pipeline, static site builder, hosting, CDN and analytics just for a website? :/
The platform is currently 100% operational and I'm now working to Open Source it.
The best thing? You can publish directly from the CLI:
$ mdninja publish
I've only built it for Minneapolis and Chicago for now.
A few weeks ago I got a video of one of my friends playing it at a show: https://mastodon.social/@DesiderataSystems
Something I'm working on right now is trying to implement a basic on-board synthesizer so people can use it without a laptop or external hardware synth. (I added a DAC a couple hardware revisions back, I just haven't done anything interesting with it yet.)
The firmware is open source and there's a fairly detailed user manual.
I work at a small startup trying to hire engineers and got tired of looking through resumes. As Joel Spolsky points out, they're not great indications of technical ability. Instead, I decided to throw together a take-home challenge that applicants could access via an API. "If they can't solve the challenge, I don't need to see their resume," I told myself.
FizzBuzz.md is a better version of the solution I built for work. It lets applicants send questions and submissions to configurable email addresses via API so the email addresses aren't exposed directly. (Less spam, FTW)
For now it is a bunch of ifs, and that's ok - a lot of them are generated with AI but validated to be reasonable.
Basically it's an AI aggregated service where users can prompt multiple ai models at the same time/place for free and/or under one subscription fee ($20).
I've experimented with Unity, Godot, and GameMaker in the past, but for the time being I'd like to see what I can accomplish on my own in Go to keep my dev chops sharp especially since I've moved into an engineering management role at my job (which has nothing to do with game dev but is increasingly employing more Go source throughout). Something I've realized as I've been applying good code organization and reusability is that I'm essentially building an engine for anything top-down (hesitant to say "isometric" since I know that means something graphically specific)--RTS/TBS/tactics/RPG all seem doable with what I've built given a little bit of extra logic on top for each.
https://spicychess.com: A real-time chess playing app where you can taunt your opponent and smack them during the game! If you smack them enough times to completely drain their health, you can steal their turn and make their move. There is also a progression system where leveling unlocks increasingly fun abilities designed to torment and troll your opponent.
https://wordazzle.com: Inspired by the quote 'the limits of my language are the limits of my world,' it's a daily game delivering 7 carefully chosen, sublime words designed to genuinely elevate your verbal prowess. You can also save the words you love as flashcards to review them later!
I also published the list of URL schemes / universal links on GitHub: http://github.com/sxp-studio/app-list-catalog
(Minor note: The Setup Tutorial says, "Swipe right to continue" when the user actually swipes left.)
So despite not being a programmer, barely knowing a little javascript, and nothing about python, linux, or gtk, I was able to use AI to muddle my way through to creating a program that takes the input from a numpad, looks up the song from a list, sends it to mpd, and posts a picture from the album art embedded in the music file. https://github.com/jccalhoun/mpcButtonJukebox
You give a title to your research session, and it keeps track of which tabs you have opened, which ones you have read and have not read.
When you want to resume your research, you can simply resume on whichever research session you want, and it will reopen all your tabs as before, so you can continue from where you left.
I have revived my work on Go Micro (https://github.com/micro/go-micro) and rewritten the v5 cli/api from scratch (https://github.com/micro/micro). As a VC funded company there was a lot of confusion around the tools we were building and we veered off in a direction that alienated the community. With the company dead, funding gone, etc there's an opportunity to rebuild value around the Go Micro framework.
The second thing I'm working on is the Reminder (https://reminder.dev && https://github.com/asim/reminder). As a muslim I feel like it's my duty to spread the word of Islam and as an engineer I feel like an appropriate way to do that is build an app and API for the Quran, names of Allah and hadith. It's a slow patient building of something as opposed to expecting anything from it.
In terms of new ideas, maybe not new but less screen time, less phones, more nature.
This is a complete remake of the original I made a long time ago for the Mac App Store, sharing only the name. I realized it was better to use the jq language since it was already familiar to many people and way more powerful than designing my own query language. I also do not agree with Apple's App Store command and control so I've decided to make it a web app, and I'm astounded at how much more powerful web apps and their DX are compared to native application development in the Apple world.
I learned a lot about text encodings, multipart emails, inline attachments, postgres' tsvector/tsquery, etc. I'm particularly proud of how I was able to use `WITH RECURSIVE` to get an email's entire thread, which seems basic but the other archive apps I tried didn't have that feature.
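The archive runs on Postgres, but the same recursive-CTE idea can be shown with stdlib SQLite; the schema here is invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE emails (id INTEGER PRIMARY KEY, subject TEXT, in_reply_to INTEGER);
INSERT INTO emails VALUES
  (1, 'Release plan', NULL),
  (2, 'Re: Release plan', 1),
  (3, 'Re: Re: Release plan', 2),
  (4, 'Unrelated', NULL);
""")

# Start from one message and repeatedly join in its replies,
# collecting the whole thread in a single query.
rows = con.execute("""
WITH RECURSIVE thread(id, subject) AS (
  SELECT id, subject FROM emails WHERE id = ?
  UNION ALL
  SELECT e.id, e.subject
  FROM emails e JOIN thread t ON e.in_reply_to = t.id
)
SELECT id, subject FROM thread ORDER BY id;
""", (1,)).fetchall()
print(rows)
```

A real mail archive would key on RFC 5322 `Message-ID`/`In-Reply-To` headers rather than integer IDs, but the traversal is identical.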
You can email prompts directly to your ThreadWise address and get instant AI-powered responses, essentially an always available co-worker. Another great feature is the ability to schedule recurring tasks and since the AI has web access, you can get things like:
- Daily mortgage rates or airfare price monitoring
- Weather and news summaries
- Sports scores, jokes, quote of the day
- Data pulled from public APIs (and more)
So you can essentially use it as a personal newsletter, crafted to your taste.
The free tier will let you test this out for free! I am looking for some feedback/criticism, testing, and additional ideas and I am open for collaboration if you have experience with sales. Also open to hearing which scheduled tasks people would find most useful.
Why I built it: I noticed a trend online, as well as with family/friends, that people would like quick access to AI in instances where they couldn't always install apps or use browser-based tools (such as in remote/low-bandwidth environments). This is when it hit me: email clients already have all the features needed to interact with an AI (text + attachments), and I quickly got to work.
Some of the advantages are that, since no new apps or browser tabs are needed, the tool is ideal for companies who don't have the bandwidth to set up full-fledged AI solutions on their own. Companies can choose between public LLMs (e.g. OpenAI) or host everything on-premise with locally run models, so no data ever leaves the premises.
Eager to hear what you all think!
It lets you create multiple agents, configure them via the web console (such as LLM parameters and system prompts) and manage their plugins and functionality.
The system is fully plugin-based, where each plugin is a WASM program that exposes functions/tools that the agent can call, and can also hook into the query lifecycle. Because plugins are WASM, they can be written in various languages such as Rust, Go, TypeScript etc. Plugins can also act as libraries, which is possible because of WebAssembly Components (a great piece of software!) -- so you can dynamically call functions from other plugins within your agent, and you get type support for your chosen language too (with codegen via WASM Components tooling).
More recently, I've been working on an SSH server for agents. The idea is that you can add public keys to your custom agent and then SSH into it to talk to it easily from terminal.
If this sounds interesting, feel free to join our Discord! The project is still new and feedback is highly appreciated. http://asterai.io/discord
That's a good question. Currently, there is one way to do it. The client querying the agent receives JSON-encoded values that are returned from plugin function calls made by the agent. These values are received alongside the agent token response stream (via SSE). So plugins can essentially emit events that the client can forward to the UI application, such as to click a button etc. The limitation with this is that there is no built-in way to send a success/error status back, it's one way only. It works well for actions that are infallible such as simple UI actions.
The client here would also need a way to interact with the target program of course, e.g. from a JavaScript browser you can click buttons and manipulate the DOM, or from a VSCode Plugin you can interact with the editor etc.
It's definitely something that can be improved though! I've been thinking about some type of MCP interoperability that could maybe assist with this.
Talo makes it easy to add systems that traditionally need extra non-gameplay build time, like authentication, player analytics, and game stats.
Right now you can drop Talo into your game or use the API directly. Importantly, I’ve made Talo easy to self-host and you can point the Unity package/Godot plugin to your own Talo instance.
This makes it possible to create a kind of requirement "programming language" where the requirements can be evaluated. With this language it becomes possible to create cross-references from various compliance standards/frameworks, like ISO27K, to USM, and automatically evaluate the compliance.
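As an illustration of what an evaluable requirement language could look like, here is a hypothetical Python sketch: each requirement is a predicate over an organisation's control state, and cross-references map a framework's control IDs onto those predicates. The control IDs and state keys are invented for illustration, not taken from USM or ISO27K.

```python
# Each requirement is an evaluable predicate over the org's state.
REQUIREMENTS = {
    "usm.access_review": lambda s: s.get("access_review_days", 999) <= 90,
    "usm.backup_tested": lambda s: s.get("last_backup_test_days", 999) <= 180,
}

# Cross-references from another framework's controls to USM requirements.
CROSS_REFS = {"iso27k.A.9.2.5": ["usm.access_review"]}

def evaluate(framework_id, state):
    """A framework control is compliant iff all mapped requirements hold."""
    return all(REQUIREMENTS[r](state) for r in CROSS_REFS[framework_id])
```

The payoff of this shape is that compliance against any mapped framework reduces to evaluating the same small set of predicates against live data.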
(2) A dance event calendar in Finland, running in production for over a year, with 1-2M page views/month. It's a Django app, but I am now exporting a copy of the unauthenticated user views to an S3 bucket and delivering them through Cloudflare.
(3) A Django app to handle all the data related to a custody trial. Emails, SMS, notes, official records, voice memos, etc. can be attached to a timeline, tagged, and searched. It has a command-line interface for adding data, in addition to the UI, so I can quickly add notes and attach files.
It is very interesting. I am very surprised how well it worked out.
So you can get logos / icons that don't look AI-generated.
It comes with a Photoshop-like editor (https://mitte.ai/editor), so you can zoom into details and change / remove anything, upscale, etc.
I built it for myself, but now there's a good number of paying users as well.
Simply go to the sign-in button, where there is a "reset password" link. Clicking it reveals an optional sign-up link, which leads to mitte.ai/join — and that says "Not Found".
Kind of interesting: Wappalyzer shows it's written in Erlang? So are you raw-dogging Erlang, or maybe Elixir or Gleam? What's the tech stack behind this?
Where are you generating the images / videos? Are you using something like the OpenRouter API, or are you self-hosting GPUs / using AWS for it?
I am also interested in what percentage of users are paying, and in the abuse vector that might arise from generating some pretty down-bad images... Are all the images generated here public, or what exactly?
I had to close sign-up because there was so much abuse coming from regular sign-ups.
'Sign in with Google' is great because it eliminates low-quality traffic that never pays and tends to be there to abuse the system.
Seriously, I don't mind a sign-in option, I just don't want to be forced to use Gmail / an inferior privacy solution to use your app, no offense.
> Please add more OAuth solutions

I am investigating what the most universal OAuth solution could be that still prevents spam (unlike email). Sounds like an interesting enough problem, IDK.
Keycloak also has strong integrations in a lot of languages, and there are other projects like Authelia, etc. For some reason Keycloak seems to me to have a slightly bad UI, but it's still absolutely great at what it does.
But Authelia/Authentik are easier to host/integrate, IDK.
I like collecting books and have lots of series, but editions and cover/spine designs change all the time for no good reason. Especially for long series, it's near-impossible to get a collection with consistent styles, which I find frustrating. And when buying rare or old second-hand books online, it's even worse.
The app will allow you to enter your book information (title, author, size, summary...), then choose the design/layout. You will then have the option to print it by yourself (for free - if you can find a big enough printer) or get it printed by a professional.
https://tallytabby.com: The simplest way to keep track of anything in your life. From habits to inventory, TallyTabby makes counting effortless.
https://qrfeedback.app: Get honest customer feedback with a simple QR code
https://skillriskaudit.com: Knowledge Risk Management
https://shiftingtictactoe.com: The classic game with a twist—play in real-time, and there’s never a draw!
How does this work? I searched Boston and got zero results?
This is why I'm building a unified MCP server with just two meta tools:

- Search available tools
- Execute a tool
When I want to send an email, I ask the LLM to use the Search meta tool to search for Gmail-related tools in the backend; the meta tool returns the descriptions of the relevant tools. The LLM then uses the Execute meta tool to actually invoke the Gmail tool that was returned. https://github.com/aipotheosis-labs/aci
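The two-meta-tool pattern described above can be sketched in a few lines. The registry, tool names, and handlers below are hypothetical placeholders, not ACI's actual API:

```python
# A backend registry of concrete tools; only the two meta tools are
# ever exposed to the LLM directly.
TOOL_REGISTRY = {
    "gmail_send_email": {
        "description": "Send an email via Gmail",
        "handler": lambda args: "sent to " + args["to"],
    },
    "gmail_list_threads": {
        "description": "List recent Gmail threads",
        "handler": lambda args: ["thread-1", "thread-2"],
    },
    "slack_post_message": {
        "description": "Post a message to a Slack channel",
        "handler": lambda args: "posted",
    },
}

def search_tools(query):
    """Meta tool 1: return descriptions of tools matching the query."""
    q = query.lower()
    return [
        {"name": name, "description": meta["description"]}
        for name, meta in TOOL_REGISTRY.items()
        if q in name.lower() or q in meta["description"].lower()
    ]

def execute_tool(name, args):
    """Meta tool 2: dispatch to the concrete tool by name."""
    if name not in TOOL_REGISTRY:
        raise KeyError("unknown tool: " + name)
    return TOOL_REGISTRY[name]["handler"](args)
```

The win is that the LLM's context only ever holds two tool schemas plus the handful of search results it asked for, instead of every integration's full schema.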
I'm a member of the JSON Schema Technical Steering Committee, and I've been making a living consulting with companies that use JSON Schema at large. Think data domains in the fintech industry, big OpenAPI specs, API governance programs, etc. The tooling to support all of these use cases was terrible (non-compliant, half-baked, lacking advanced features, etc.), and I've been trying to fix that. Some highlights include:
- An open-source JSON Schema CLI (https://github.com/sourcemeta/jsonschema) with lots of features for managing large schema ontologies (like a schema test runner, linter, etc)
- Blaze (https://github.com/sourcemeta/blaze), a high-performance JSON Schema C++ compiler/validator, proven to be on average at least 10x faster than others while retaining a 100% compliance score. For API gateways and some high-throughput financial use cases
- Learn JSON Schema (https://www.learnjsonschema.com/2020-12/), becoming the de-facto documentation site for JSON Schema. >15k visits a month
Right now I'm trying to consolidate a lot of the things I built into a "JSON Schema Registry" self-hosted micro-service that you can just provision your schemas to (from a git repo) and it will do all of the heavy lifting for you, including rich API access to do a lot of schema related operations. Still in alpha (and largely undocumented!), but working hard to transition some of the custom projects I did for various orgs to use this micro-service long term.
As a schema and open-source nerd, I'm working on my dream job :)
Reached a milestone today by being able to create an HTTP endpoint and route the traffic (based on path variables) between various pathways. Thus it is possible to create a blog using Erlang-RED - what all good frameworks should be able to do ;)
[2] = https://github.com/gorenje/erlang-red
[1] = https://nodered.org/
By doing the calculations myself, I can play with different scenarios, I can also integrate the effect on the material quality uncertainty. Nothing fancy, but it fits my needs.
Over the past few weekends, I’ve been building Pantry Recipes – a mobile app that lets you quickly generate recipe ideas based on the ingredients you already have at home.
The idea is simple:
- Save or quickly select ingredients you have on hand
- Tap Generate Recipes and get ideas instantly
- You can also describe what you want to make free-form (e.g., "cheese omelette") and the app will generate a recipe for you
The app is free for a number of recipe generations, then offers a low-cost subscription if you want unlimited use. It's live on the iOS App Store now: https://apps.apple.com/us/app/pantry-recipes/id6744589753
Happy to answer any questions if anyone’s curious about the tech, UX challenges, or what I learned from launching!
I think it could be useful to have a “recently generated” section in the Recipes tab that lets you find things you might have forgotten to save. Substitutions could also be a useful feature. For example, if I can’t find Mexican oregano, what else can I use?
Also, there can be sets of ingredients that should not be mixed together or cooked in certain ways. Are these cases considered when generating recipes?
I think sales will bifurcate: either fully automated self-sale, or fully relationship based. All of the major CRMs (customer relationship management) software are going all in on AI and sales automation.
I think relationship sales is going to see an increase in comparative advantage. I've always been a relationship seller but the current crop of CRMs are not designed for me. So I've built https://humancrm.io/ to scratch my own itch.
It sends me an email once a story hits a certain number of upvotes per minute, so it's useful for keeping track of breaking news.
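The core of an alert like this is just comparing two score samples per story. This is a minimal sketch of that logic; the threshold value and alert hook are assumptions (the Firebase item endpoint mentioned in the comment is HN's real API):

```python
def upvotes_per_minute(prev_score, prev_ts, score, ts):
    """Rate of score change between two polling samples (ts in seconds)."""
    minutes = max((ts - prev_ts) / 60.0, 1e-9)
    return (score - prev_score) / minutes

def should_alert(prev_score, prev_ts, score, ts, threshold=2.0):
    """Flag a story once its upvote velocity crosses the threshold."""
    return upvotes_per_minute(prev_score, prev_ts, score, ts) >= threshold

# In the real loop you'd fetch each story's current score, e.g. from
#   https://hacker-news.firebaseio.com/v0/item/{story_id}.json
# and send the email via smtplib when should_alert(...) is True.
```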
Little personal project that started as a means to try out AI tools that can talk to each other. Turned out to be super useful for building and debugging complex AI automations. I haven't had the time to promote it. But maybe someone here will find it useful.
https://akprasad.github.io/tamil/
It's been a lot of fun getting the basic tools going: transliterators, morphological generators and analyzers, and some other things on top. But the main goal is to improve fluency as quickly and efficiently as possible.
Hoping to share a first version of it soon. It’s been absolutely fascinating digging into Postgres internals!
It’s not much, but it’s all I can cram into the free time I have, there’s a possibility I might actually finish it for once, and it’s something I could actually use once it’s done.
I did a ton of work building an Elo model first, but was getting very compressed results in terms of postseason predictions. I swapped to a Bayesian approach, which has really taken off. Not sure how it's going to handle the second round of games approaching, but that's a problem for the future!
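The comment doesn't say which Bayesian model is used, so purely as a generic illustration of the approach (not the author's model), here is the simplest Bayesian building block for win prediction: a Beta-Binomial posterior over a team's win probability, where the prior keeps small-sample estimates sane:

```python
def posterior_win_prob(wins, losses, prior_a=10.0, prior_b=10.0):
    """Posterior mean of Beta(prior_a + wins, prior_b + losses).

    prior_a/prior_b encode a pseudo-count prior (here: 10 wins, 10
    losses, i.e. a 50% prior) that observed games gradually overwhelm.
    The prior strength is an arbitrary choice for this sketch.
    """
    return (prior_a + wins) / (prior_a + prior_b + wins + losses)
```

Real playoff models typically go further (per-team latent strengths, home advantage, posterior simulation of the bracket), but they bottom out in updates shaped like this one.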
It's the first original piece of content for a newsletter of curated gaming content that I've been running for more than five years now, called The Gaming Pub [2].
[1]: https://www.thegamingpub.com/features/ [2]: https://www.thegamingpub.com/
https://gist.github.com/nraynaud/5c7613d876f10c5df6f3ec48046...
https://github.com/kenrick95/ikuyo
So far it has some sort of activity calendar + expense tracker.

There are still so many ideas to implement, like adding a map, improving the UX of creating activities, a to-do list, etc.

I've used it once or twice on a short trip, but in 6 months' time I'll have a 2-week trip, so that's my self-imposed "deadline" for this project.
Anyway, this project is a pure static web page, and all the 'back-end' is handled by InstantDB ( https://www.instantdb.com/ ), after I saw their submission on HN >.< So far it has been quite a good experience overall, except maybe the permissions model, which can be a bit confusing to me.
I tried Wanderlog, but it didn't have the right 'vibe' for me. The one I actually used is Navitime, specifically for travelling in Japan, but it didn't have as much functionality as I hoped.
So most of the functionality I had in mind was inspired by the Excel sheets I've used over the years for travel planning.
An easier and more secure way to work with secrets during local development. It’s open source, cloud/vault agnostic, and doesn’t require a single line of code change to use. I call it RunSecret.
RunSecret is a CLI that replaces your static secrets with "secret references" in your env vars (or .env files). These references are then resolved when your application starts up, by reaching out to your secret vault of choice — making your .env safe to share across your whole team and removing a slew of gotchas that come with git-ignored env files. Even better, RunSecret redacts any instance of these secrets from your application output, reducing the chance of accidental leaks.
The approach is inspired by the 1password CLI, but built for the rest of us. I’ve got AWS Secrets Manager support pretty well baked, but the goal is to support all major secret vaults within the next couple of months (Azure KeyVault is already in progress).
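The reference-substitution idea can be sketched in a few lines. Note the `rs://` scheme, the pattern, and the resolver below are assumptions for illustration, not RunSecret's actual syntax:

```python
import re

# Hypothetical reference syntax: values like "rs://aws/db-password"
# get swapped for the real secret at startup.
REF_PATTERN = re.compile(r"rs://[\w./-]+")

def resolve_env(env, resolver):
    """Replace secret references with values fetched from the vault."""
    return {
        k: REF_PATTERN.sub(lambda m: resolver(m.group(0)), v)
        for k, v in env.items()
    }

def redact(output, secrets):
    """Scrub any resolved secret from the application's output."""
    for s in secrets:
        output = output.replace(s, "***REDACTED***")
    return output

# A real resolver would call the vault, e.g. AWS Secrets Manager via
# boto3.client("secretsmanager").get_secret_value(SecretId=...).
```

Because the .env file only ever contains references, committing or sharing it leaks nothing.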
I assume this is to cover the non-CI/CD scenario.
GitLab Secrets looks cool, but that hits at another reason I think RunSecret is valuable even for CI: we don't use GitLab at my day job, so it's not an option for me! I think GitLab and 1Password have interesting proprietary solutions that have definitely inspired RunSecret, but I'd love to see an open-source, universal solution here, which I'm hoping RunSecret can be!
Azure KeyVault support is in progress and should land soon. I will notate it in the release changelog once it’s ready, but I’m also happy to reply here or let you know another way if you are interested!
It would also be interesting to see what TruffleHog finds (it should be a false positive).
https://github.com/trufflesecurity/trufflehog
Where are you storing the creds to get the secret from the vault?
This is the "secret zero" problem, and other platforms solve it in different ways, such as with an HSM.
https://joeldare.com/how-to-lose-money-with-25-years-of-fail...
I've created a mailing list. Please sign up if seeing all my new business attempts is something you might be interested in.
I started a functional beverage brand about 2 years ago and rebranded for larger scale about 7 months ago. I am located in South Florida and have had some decent success so far but still investing all profits back into the biz. Raised about $50k of outside money.
If you're in South FLA and are into fitness and health, consider reaching out to me or responding.
I will also have the capacity to sell in to the NYC market quite soon. If this sounds interesting, again, please respond or DM!
Recently I was fortunate to join a cool startup (as Head of Engineering) that tries to improve the AEC industry by helping them with their paperwork.
So lots of RAG, chat, agents, deep research. It's really interesting but also challenging. The biggest challenges are large data volumes and the different stakeholders/user stories within one org.
- The idea: https://carlnewton.github.io/posts/location-based-social-net...
- A build update and plan: https://carlnewton.github.io/posts/building-habitat/
- The repository: https://github.com/carlnewton/habitat
- The project board: https://github.com/users/carlnewton/projects/2
Started my first garden this spring. Have several peppers, tomatoes, zucchini, and a few herbs.
Looking into internships and new opportunities. I've been out of the profession for a long while and need to find my way back in.
This is something I've always wanted for myself so I decided to build it. Plenty of event aggregators out there, but not many that let you search and filter by any combination of locations, dates, and genres.
It currently supports automated data feeds from ticketmaster and the Civic Joy Fund; hoping to add Parks & Rec, Library events, and some other ticketing sites in the near future.
It has been a fun way to keep my coding skills sharp (Eng Director by day), I've learned a ton about react, mapbox, django, postgis, and GCP doing it. If you have an event source I should look into or a feature request let me know!
Whispers is a self-organizing, belief-driven mesh where nodes propose, verify, and evolve solutions through dynamic, decentralized consensus.
Basically a shared knowledge graph of proposed partial inferences in CRDTs, using verification as a merge function. There are some issues I know I'll run into (e.g. admissibility), but I have some solutions in mind.
I'm in the very early stages but I think it's a simple idea so I have high hopes for a cool demo soon.
My goal is to create games on the go, during my commutes.
It's a fork of https://lowresnx.inutilis.com/; there are some videos of my progress at https://www.youtube.com/playlist?list=PLtmKVaz_2Cxe6pG7VbQfw... and a Discord channel: https://discord.gg/jcT9CXDgHB
- Snapshot (https://apps.shopify.com/snapshot): AI-generated product photos for Shopify. Previously used Flux and Stable Diffusion but always had quality problems. It was very tricky ensuring text remained the same and the product fit into the generated background. Just integrated the new OpenAI image generation model and the results are much better; however, their masking feature doesn't work properly, so I need to wait for them to fix that before I can offer the same feature of keeping text/fine details unchanged.
- Lurk (https://apps.shopify.com/lurk): New one I just launched - allows Shopify merchants to track the prices of competitors and adjust their own in response with dynamic pricing rules. It's cool because you just have to paste a URL and it uses AI to figure out the price. Again, there's a surprising amount of things you need to figure out to make this work reliably at scale (e.g. popups, ambiguous HTML, location-based pricing, etc.)
- Origin UTM Tracking (https://apps.shopify.com/origin-utm-tracking): Simple UTM analytics for Shopify stores. Acquired this last year and it's been growing nicely.
https://github.com/codr7/hacktical-c
Also learning to deal with having very little to no money atm.
Not your regular "idea" but still interested in how it plays out.
https://onlineornot.com for the curious.
My situation is I possibly have a "local rhinitis", though my ENT dismissed that. I base that on him visually confirming that I have an allergic reaction in my sinuses, but my allergy screening on my shoulder showed no reactions. But it might also be a lot of other things including being in an extremely dry environment. Next steps are to get a CPAP, which will give me high humidity air and treat minor apnea, and a nebulizer treatment once a quarter, plus taking allergy medicine semi-regularly.
ATM, I'm not planning on releasing the source, due largely to being way, way underwater on time available for my existing projects. Is there something you wanted from the source? If you are interested in auditing where the data is stored, try using your browser's developer tools and watching the network traffic while using it.
https://github.com/ammmir/sandboxer
It may not be useful, but it's been fun, and I've honed my gut-level experience in Docker, Podman, Linux namespaces, Checkpoint/Restore, CRIU, and more. The ultimate goal is to hand each AI agent iteration a sandbox of its own (forked from the previous iteration), and have it build apps in private sandboxes. You'll be able to view intermediate progress as the app is being built (or failed rabbit holes), since each sandbox gets a unique URL automatically. Like, imagine if each commit of your git repo had its own URL to preview the app!
[1] https://amirmalik.net/2025/03/07/code-sandboxes-for-llm-ai-a...
4 years ago I made an install script that worked for Debian, Ubuntu and macOS. This made it easier to get going with them.
Over the last week or so I extended and polished that script to make it even easier and customizable, including adding Arch Linux support. The next step is to start installing and configuring GUI tools instead of only focusing on command line tools and environments.
I just used it the other day to set up a fresh work laptop in 5 minutes. Given the script is idempotent I run it all the time on my personal box.
It started when I saw how surprisingly hard it was for my partner to install and connect MCP servers, even simple ones. I realised that if we want AI agents to really interact with the web, it needs to be as easy as installing an app.
Right now, you can browse, install, and connect servers in one click. Over time, it’ll make AI integrations as easy as installing an app — no messy APIs, no custom scraping.
If you’re working with AI models, agents, or data-heavy tools, I’d love to hear what kinds of “context pipes” you’d want to see added.
My primary goal is to make a game that's fun. The secondary goal is to make you experience why an AI might do things you don't expect: specifically, to further instrumental goals like collecting resources, refusing to be turned off, things of that nature.
There are two endings currently but I'm working on adding some more.
Reactivity can update the state of the notebook automatically, so you don't have to keep track of which cells to execute again. Side effects are managed to make it easier to reason about while maintaining reactivity and ability to interact with the outside world.
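The core of that reactivity is a dependency graph between cells: edit one cell and only its transitive dependents re-run. A minimal sketch of that propagation step, with invented cell names and a hypothetical dependency map:

```python
def dependents(changed, deps):
    """deps maps each cell to the set of cells it reads from.
    Returns every cell that must re-run after `changed` is edited,
    found by repeatedly expanding the frontier of affected cells."""
    out = set()
    frontier = {changed}
    while frontier:
        nxt = {c for c, reads in deps.items() if reads & frontier} - out
        out |= nxt
        frontier = nxt
    return out
```

A real notebook would additionally re-execute `out` in topological order and route declared side effects through managed handles, but the "which cells?" question reduces to this closure.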
A few examples of what it can currently do:
- Automated data backup: Listens for Nomad job events and spawns auxiliary jobs to back up data from services like PostgreSQL or Redis to your storage backend based on job meta tags. The provider for this is not limited to backups, as it allows users to define their custom job and ACL templates, and expected tags. So it can potentially run anything based on the job registration and de-registration events.
- Cross-namespace service discovery: Provides a lightweight DNS server that acts as a single source of truth for services across all namespaces, solving Nomad's limitation of namespace-bound services. Works as a drop-in resolver for HAProxy, Nginx, etc.
- Event-driven task execution: Allows defining custom actions triggered by specific Nomad events; perfect for file transfers, notifications, or kicking off dependent processes without manual intervention. This provider takes a user-defined shell script and executes it as a Nomad job based on any Nomad event trigger the user defines in the configuration.
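All three examples above share the same shape: consume Nomad's HTTP event stream (`/v1/event/stream`) and route each event to the providers registered for its topic/type. A minimal sketch of that dispatch, where the topic/type strings follow Nomad's ("Job", "JobRegistered") but the provider wiring is an assumption about Damon's internals:

```python
def dispatch(event, providers):
    """Route one Nomad event to every provider registered for its
    topic (and, optionally, its specific event type)."""
    results = []
    for (topic, etype), handler in providers.items():
        if topic == event.get("Topic") and etype in (None, event.get("Type")):
            results.append(handler(event))
    return results
```

A backup provider, for example, would register for ("Job", "JobRegistered"), inspect the job's meta tags, and spawn an auxiliary backup job in response.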
Damon uses a provider-based architecture, making it extensible for different use cases. You can define your own providers with custom tags, job templates, and event triggers. There's also go-plugin support (though not recommended for production) for runtime extension.
I built this to eliminate the mundane operational tasks our team kept putting off. It's already saving us significant time and reducing gruntwork in our clusters.
Check out the repository[1] if you're interested in automating your Nomad operations. I'd love to hear your thoughts or answer any questions about implementation or potential use cases!
https://chromewebstore.google.com/detail/unpack/mcgdbnjjnnfm...
The process of creating APIs for testing and automation should be as easy as possible. The tools that exist nowadays for this purpose aren't good enough IMHO, which led me to build it.
My experience with printers was awful; it has left me thinking there's a gap in the market for _good_ Europe-based, small-batch card printing.
MITM + Waydroid doc: https://github.com/ddxv/mobile-network-traffic
Actual scraper (look in adscrawler/apks/waydroid.py): https://github.com/ddxv/adscrawler
Final product of reporting for which apps talk to which country / companies will go on: https://appgoblin.info
Feel free to contact me if you're interested in learning more.
https://github.com/abhchand/simplee-food
I found most of the offerings out there to be too bloated. Recipes are a simple thing - you want to store them, search them, and view them easily.
It has a full screen viewing mode for easy cooking with your tablet or phone.
My main challenge has been making meeting detection more robust -- it currently uses both mic and camera activity, which led to a lot of false positives. In the next version I’m switching to mic only (the camera caused most of the noise) and I’ve added a way to identify which app is using the mic, so users can exclude non-meeting apps.
I’ve also added plenty of small tweaks throughout to make LookAway even less interruptive. I’m excited for the next release!
Built a backend and web version but now focusing on an Expo/React Native app (my first ever).
Taskade started as a real-time workspace for teams to organize projects and ideas. It's evolved into something bigger — a platform where humans and AI work side by side.
We’re moving past simple chatbots into real agentic workflows, where teams can generate structured task lists, mind maps, and tables, train custom AI agents with dynamic knowledge, and automate work from start to finish.
Today, Taskade is built around three core pillars: Projects, Agents, and Automation. It’s like giving your team a second brain that can think, plan, and get work done across projects, automations, and real-time collaboration. If you’re interested in the future of human-AI collaboration, take a look!
https://marketplace.visualstudio.com/items?itemName=H337.c2p
I’m developing a VS Code and Cursor extension that helps developers quickly copy all code in a workspace to the clipboard for use with LLMs.
It also displays the token count for each file, as well as the total token count across the workspace.
By default, it ignores files listed in .gitignore, but this behavior can be customized in the extension settings, along with many other options.
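The two behaviours described (skipping ignored files, reporting per-file and total token counts) can be sketched as follows. A real extension would use a proper tokeniser such as tiktoken and full .gitignore semantics; the 4-chars-per-token heuristic and glob matching here are simplifications for the sketch:

```python
import fnmatch

def is_ignored(path, gitignore_patterns):
    """Crude .gitignore check via glob patterns (simplified semantics)."""
    return any(fnmatch.fnmatch(path, pat) for pat in gitignore_patterns)

def estimate_tokens(text):
    """Rough heuristic: ~4 characters per token for English-like code."""
    return max(1, len(text) // 4)

def workspace_counts(files, gitignore_patterns):
    """files: path -> contents. Returns (per_file_counts, total)."""
    per_file = {
        p: estimate_tokens(src)
        for p, src in files.items()
        if not is_ignored(p, gitignore_patterns)
    }
    return per_file, sum(per_file.values())
```

Surfacing the total alongside per-file counts is what lets users judge whether a workspace fits a model's context window before pasting.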
RedwoodSDK is a React Framework for Cloudflare. We wanted to build something that allows people to focus on the software that they want to write, rather than the infrastructure that it runs on. Writing software isn't really a barrier anymore, the parts that developers find annoying can be shortcut with generated code, but you can't gloss over pushing to production.
Cloudflare gives us the ability to offer developers compute, database, storage, queues, AI, realtime with durable objects, etc... and to emulate that locally with just a single package installation.
Basically a test of putting guard rails around format and content of a website and seeing how much I could automatically generate on a topic of interest to myself.
The biggest benefit I've seen with Cursor is writing tests for everything. Far too much content hallucination and made-up links at first, but once you put in some test guardrails you can minimise this.
Currently just test-driving it in some relatively simple work scenarios, but I imagine it could be a useful tool for consuming a NATS firehose of messages in a single application and routing them internally using the same subject-semantics rules, so that it would be easy to split them out into separate consumers when/if that became necessary.
It's a PHP application running in Symfony. All the CMS heavy lifting is done in the Bundle, and a minimal amount happens in the actual application side. I have worked hard on what is there, and still have a ton of work to go. Always could use the help.
My problem is that it's too luck-based: sometimes I'm the last guy, sometimes I'm the first guy and have to wait for the van to get fully occupied, which takes a lot of time...
I just made it, and I find it pretty nifty. I made it entirely via AI and one absolutely crazy-good YouTube video on deploying Telegram bots on Cloudflare...
Also, I saw the Telegram-bot AI maker idea on HN a few days ago, so I also created a project where you can chat with Microsoft's post-trained DeepSeek R1 bot for free, since the OpenRouter API key for this model is free. It doesn't have incremental streaming or multiple chats; really basic. It can generate code for me, but I'm not sure how I would deploy that code. I used to think it was easy, but it's not. Any resources out there? (I want to open-source this, but I'm not going to build the AI idea further because I lack time and have to study.)
It's inspired by VS Code and hopefully positioned to eventually be a Cursor-like experience for transactional lawyers. The LLM integration isn't baked in yet to keep the in-house onboarding frictionless.
It's a desktop application written in Rust. It uses egui (an immediate mode UI library) for speed.
I'd greatly appreciate any comments.
Crucially, to make this bound work, you only learn the final state of the heap, not which element got deleted when.
I'm also building a simpler soft heap.
- XRoll.io — a fully on-chain gaming framework on the XRP Ledger, inspired by SatoshiDice but built for compliance. Commit-reveal fairness (HMAC_SHA256(secret, bet_txn_hash)), full transparency on-chain. Integrated KYC, AML, self-limits at the protocol level. Frontend is optional; ledger is the source of truth.
- Nexula — an evolutionary image generation system. Embeddings extracted with CLIP, clustered via HDBSCAN, visualized with UMAP. User behavior (time spent) drives fitness scores; top samples recombine through weighted interpolation to generate new images. Built on Django backend, session-based personalization without login.
Looking for like-minded people interested in exploring both the technical and business sides of these systems.

Combining HTMX with Raku Air, Red and Cro so that you can just build websites the right way™.
Here is an entire (minimal) website…
use Air::Functional :BASE;
use Air::Base;

sub SITE is export {
    site
        page
            main
                p "Yo baby!"
}
The software he currently uses is too complicated, and he gets lost easily with all the buttons and features. My website is basically the same thing but with only the buttons he needs and only his hymns, and it's completely controlled by me, so I can fix things to suit him.
(Pre-launch [https://www.sevenbaton.com] would welcome feedback from marketing folks)
Personally I'm having fun learning some web design and three.js as I try to debug the very tricky issues with my new personal website, loufe.ca. Using AI is fantastic, but you need to micromanage when things get complicated, I find.
It is a smaller part of a whole collection of addons I've been working on meant for helping with animating character assets from Daz Studio in Blender and then bringing the animation back into Daz Studio.
Eventually I want to have a zero effort way to get characters from Daz Studio into Unreal or Godot with FACS morphs, JCMs, etc. already setup.
I started experimenting and I think this builds out pretty neat estimations from jira tickets/other descriptions. When I was sitting in the CTO role, I spent a ton of time talking with people about how long/short various projects would be. When I was a developer, I hated the estimation piece because it felt like it was both keeping me from building and was almost never done with enough context to get really accurate results.
I was playing with the OpenAI API and I noticed that it can actually return the probabilities of the top candidate next tokens, and I thought that might give you a kind of cool way to see the distribution of potential outcomes. (You can see an example here: https://universalestimator.com/estimates/c68db45b-7622-4bab-... that looks at a detailed ticket for implementing filtering on a dashboard.)
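The Chat Completions API exposes this via the `logprobs`/`top_logprobs` request parameters, which return log-probabilities for the top candidate tokens at each position. This sketch converts those (token, logprob) pairs into a normalised outcome distribution; the day-estimate candidates below are made up for illustration, not real API output:

```python
import math

def logprobs_to_distribution(top_logprobs):
    """top_logprobs: list of (token, logprob) pairs for one position.
    Returns token -> probability, renormalised over the candidates
    (the API only returns the top n, so they don't sum to 1 raw)."""
    probs = {tok: math.exp(lp) for tok, lp in top_logprobs}
    total = sum(probs.values())
    return {tok: p / total for tok, p in probs.items()}

# e.g. asking "How many days will this ticket take? Reply with a number"
# might yield candidates like these for the first answer token:
dist = logprobs_to_distribution([("3", -0.4), ("5", -1.2), ("8", -2.3)])
```

Plotted, that gives a cheap uncertainty estimate for "how long will this take?" from a single API call instead of many sampled completions.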
Let me know if you have any feedback, it's free with the promo code TYHN. If you run into any issues, please send me an email at earl@unbrandedsoftware.com
Right now, it asks you some follow up questions and assumes you're a medium sized org, but I'd like to start to move that into configuration and do some sort of time/bayesian expiration of memory/information as part of the questions it asks.
I think a ton of the variance between teams is probably captured by some version of a few calibration questions, aka:

- How large is the org?
- What region is your org based in?
- How long does it take to get the smallest possible change into production?
My current goal is to spend half of my time on the development and maintenance of open source projects, such as Glicol (https://glicol.org/).
The other half of my time is to do some business that can generate profit from day 1.
I just found that the VC model is not suitable for my current situation.
All code opensource at https://github.com/meekaaku/verso
Coming soon: embedded DB, password manager, dotfile manager, boilerplate generator — all inside the CLI. waitlist: https://devexp.pro
The book is written in the Choose Your Own Adventure style. Readers get to join the protagonist Daphne on a fun, week-long adventure launching her own mini-businesses. Readers help her make smart decisions, solve fun challenges, and learn about money and problem-solving in the process.
The book has been a fun endeavor in both writing the manuscript and code! On the latter, I wrote an exporter for Twinery to Org Mode and an Emacs Org export backend to do the reverse.
The book is currently open to beta readers - Happy to let a few more in through the sign up page here: https://tendollaradventure.com/#get-notified
A simple puzzle game: a cassette-futuristic story, heavily inspired by Nell’s adventures at Castle Turing in Neal Stephenson’s “The Diamond Age”.
I could go on and on...
I’d love your feedback:
Rewind to the start of the story: https://magiegame.com/test/2025/04/04
Or skip to today’s puzzle: https://magiegame.com
Right now I’m focused on story development and integrating it with the daily-puzzle system I built last time I was between jobs.
This is a years-long slow-burn side project. My career moved away from the frontend long ago—this is my first React app, with a Django backend running on AWS Lightsail.
https://apps.apple.com/us/app/warm-sesam3/id6744872364
Please take a look at the 'About' in settings
I've got an LTE module connected to a solenoid lock. The module listens for a "checkout complete" callback from a Stripe payment link which will unlock the solenoid. There's also some weight sensing involved to track the current product inventory inside the cabinet.
I built this for a family friend who does a lot of wellness outreach around combating food deserts by introducing small scale farming to local schools.
As a result of their community bee hives, they have a bunch of excess honey. So, I thought I’d build them this little honey vending cabinet for their neighborhood.
I've expanded the service a bit to be more product agnostic, maybe someone else can find a use for it.
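The unlock decision on the LTE module boils down to checking one event type. A sketch (the field names are standard Stripe, but a real handler must verify the webhook signature, e.g. with `stripe.Webhook.construct_event`, before trusting the payload):

```python
def should_unlock(event):
    # Only fire the solenoid for a completed, paid checkout session.
    if event.get("type") != "checkout.session.completed":
        return False
    session = event["data"]["object"]
    return session.get("payment_status") == "paid"

paid_event = {
    "type": "checkout.session.completed",
    "data": {"object": {"payment_status": "paid"}},
}
```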
- As you're reading, AI helps you with words it thinks you might not know
- Highlights etymologies & mnemonics
- Shows you words in their natural habitat, e.g. listen to example sentences
I'm trying to read a kid's version of The Odyssey in Greek and to be able to understand my partner's mum, and these are the features that I wanted.
Also, I wanted to experiment with "what would an app like this look like if we could trust AI to be very cheap/fast/correct?".
- So, for example, it's a fully generative dictionary & search, e.g. the dictionary entries/metadata/example sentences don't exist until the first person searches for them!
- You can upload any kind of content (image, audio, text), and it'll automatically transcribe, translate, annotate, etc.
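The "entries don't exist until the first person searches for them" pattern is essentially a generate-on-miss cache. A minimal sketch, with a lambda standing in for the LLM call:

```python
def get_entry(cache, word, generate):
    # Generate-on-miss: the entry exists only after the first lookup.
    if word not in cache:
        cache[word] = generate(word)  # in the real app, an LLM call
    return cache[word]

cache = {}
entry = get_entry(cache, "θάλασσα", lambda w: {"word": w, "gloss": "sea"})
```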
No app required!
We took all of the complexity of issuing MIFARE DESFire enabled NFC credentials and made it extremely developer friendly. SDKs in most major languages (python, ruby, csharp, js, etc), developer console with request logs, and more.
You provide a PDF and a JSON schema defining what to extract, and it returns the extracted values, the citations and their precise locations in the document.
This is especially valuable in workflows where verification of LLM extracted information is critical (e.g. legal and finance). It can handle complex layouts like multiple columns, tables and also scanned documents.
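As a rough illustration of the idea (these field names are my guess at the shape, not the service's actual API), a schema and an extraction with citation locations might look like:

```python
# Hypothetical request/response shapes -- field names are illustrative.
schema = {
    "type": "object",
    "properties": {
        "invoice_total": {"type": "number"},
        "due_date": {"type": "string"},
    },
}

extraction = {
    "invoice_total": {
        "value": 1240.50,
        "citation": "Total due: $1,240.50",
        # page + bounding box let a reviewer jump straight to the source
        "location": {"page": 2, "bbox": [72.0, 540.2, 210.4, 552.8]},
    },
}
```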
Planning to offer this both as an API and a self-hosted option for organizations with strict data privacy requirements.
Screenshot: https://superdocs.io/highlight.png
Feel free to get in touch for a demo.
It works well but copying/pasting back and forth gets old very fast, and it would be better if the process was done inside the word processor. For some reason (various reasons) I still use Office 2003, which doesn't have any AI feature. (It does have a "translate" function but it's awkward to use and not very good.)
So I wrote a macro to send selected text to OpenRouter and replace with the response (with a system prompt that says to only output the translation, otherwise most models start with "Here's the translated text:")
I had never written a macro in VBA; I got started with Sonnet and adjusted many parts with the help of Stack Overflow (which turns out to have more information about VBA than Sonnet...)
And finally it worked; and it turns out to be an incredible boost to translation productivity!
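For the curious, the request the macro builds is just an OpenAI-style chat completion (OpenRouter is API-compatible). Sketched here in Python rather than VBA, with an illustrative model id:

```python
import json

def build_request(selected_text, target_lang="English"):
    # The strict system prompt stops models from prefacing the output
    # with "Here's the translated text:".
    return {
        "model": "anthropic/claude-3.5-sonnet",  # any OpenRouter model id
        "messages": [
            {"role": "system",
             "content": f"Translate the user's text into {target_lang}. "
                        "Output only the translation, nothing else."},
            {"role": "user", "content": selected_text},
        ],
    }

body = json.dumps(build_request("Bonjour tout le monde"))
```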
I'm designing the content browser right now. I'm trying to achieve something really immersive like Apple's new Spatial Gallery app.
Edit: in case this sounds like a piss take, I’m serious about it. My way is really better though! No syntax typing, efficient encoding, human writable. But also not like the other formats with those properties.
Performance is rock solid, and it’s almost ready to release, I just need to tweak a few things (like free trial with no CC).
I have a very long to do list, and ultimately want to extend it with “change detection”, e.g. notify when an HTML element on a website changes.
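A change-detection check can be as simple as hashing the watched element's text. A stdlib-only sketch (toy code, not how lowlow.bot actually does it):

```python
import hashlib
from html.parser import HTMLParser

class ElementText(HTMLParser):
    """Collects the text inside the first element whose id matches."""
    def __init__(self, target_id):
        super().__init__()
        self.target_id = target_id
        self.depth = 0
        self.parts = []

    def handle_starttag(self, tag, attrs):
        if self.depth:
            self.depth += 1
        elif dict(attrs).get("id") == self.target_id:
            self.depth = 1

    def handle_endtag(self, tag):
        if self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:
            self.parts.append(data)

def fingerprint(html, element_id):
    # Hash just the watched element's text, so unrelated page churn
    # (ads, timestamps elsewhere) doesn't trigger a notification.
    parser = ElementText(element_id)
    parser.feed(html)
    return hashlib.sha256("".join(parser.parts).encode()).hexdigest()

old = fingerprint('<div id="price"><b>$99</b></div>', "price")
new = fingerprint('<div id="price"><b>$89</b></div>', "price")
```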
All feedback is welcome
I’m using FastRSS[^0] with some lightweight pattern matching to convert them to an internal model. I get error notifications for mismatches, and just push a new pattern match to handle the outlier.
Longer term it could be interesting to get an LLM to write some Lua to parse JIT.
I love exploring data, but it always felt clunky to juggle multiple tools and write code/commands just to import and query a dataset. While there are multiple GUI tools for databases, none are focused on raw data.
TextQuery is the tool I built to solve that. You can import CSV, JSON, and XLSX files and start querying them instantly using SQL. Want to create a chart? Just hop over to the Visualize tab.
I'm also rolling out a major update this week — adding tabs, filters, a redesigned UI, and keyboard shortcuts.
Looking ahead, I’m planning to expand support for more formats (like Parquet and ORC) and data sources (like Postgres and BigQuery), so you can import data from anywhere, and query it right from your desktop. Something like a local data warehouse. With Apple Silicon, the capability of a desktop can make it very cost-efficient compared to something like BigQuery, Snowflake, or Athena.
Happy to hear any feedback!
1. While I can share source code and documentation with trustworthy people, that won't work at scale: the market would get flooded with Chinese clones that re-use my Open Source software but then I have no ongoing revenue to fund support / maintenance.
2. Especially for products with a physical component, shipping, taxes, refunds, CC chargebacks etc. add considerable overhead. Plus I need to add in Amazon fees and marketing spend. And suddenly I need to charge 8x the manufacturing price, which means I either need to massively cheapen out with quality, or it's going to be a very premium product.
I realized after trying multiple tools like Supabase and Firebase, I really need to program my own solution. I don't need a bunch of *Enterprise* level features. I just need to read the card data from a database, and process the games with some simple server side code.
I hope to have the server, along with a basic front end, done by this summer. Then it'll be released under a permissive license. Probably MIT.
I want a community of different front ends compatible with the same server. I suspect a straight up cli client without a bunch of effects might be popular with some of y'all.
If you want to code a frontend in Unreal, the server doesn't care. The game remains the same.
Of course you're free to fork it privately, build a commercial product, etc.
We're currently working with language influencers to build courses on Emurse. This year, we launched a Japanese Phonetics course created by the YouTuber Dogen https://emurse.io/course/japanese-phonetics.
If you want to try out Emurse, we have a free Thai reading course available. You can view the first lesson without creating an account: https://emurse.io/courses.
There's a limit to how many reveals you can do to make your guess.
I’ve recently added hints, spare moves, and an easy mode, as some days are harder than others.
I've trained martial arts for a few years, and have always been that person who tries to introduce those close to me to the community. I know a lot of other people do the same.
Now, I don't care that I wasn't rewarded for it, but why not? We have point-based programs for nearly everything. Why not for martial arts as well?
* Have a few academies in the process of testing it out. Still a lot to do, including the demo video that isn't going to load on the main page.
It's compatible with Settlers of Catan. However, I plan to make my own rulesets, artwork, manuals, etc. It will not be a commercial product, of course you can make your own with the files I provide.
Right now the boards, electronics, and firmware are in good working order. Although the routing is pretty YOLO.
It feels like there's a lot of unpleasantness going on in the world right now. I thought maybe I could put my other projects aside and try to make something that might brighten your day (it certainly has enough LEDs).
A big TODO is to replace the 0402 SMT components with something larger and easier to work with like 0603. I'll find time within a week or so and push it to the repo. (I am notoriously cheap and only keep 0402 in stock)
Just needs some slick design for broader appeal.
I was thinking maybe some surface features, like craters (in silkscreen) and some "resources" -- tinned exposed copper / copper covered by solder mask.
Or some way to hold the boards together, like a magnetic clasp or even just velcro. It's not really a problem presently, but might be neat.
Some things I'm planning on including:
- App drag & drop for assignment.
- Programmable macro buttons.
- Small OLED displays to show app icon and volume levels.
Attempting to do everything in Rust too, even the MCU firmware. It's been a lot of fun. xD
Soon I'll start work on Lua bindings. The idea is to configure the core engine programmatically. Hook up inputs, modify synth parameters, route output. It's going to be inferior in every way to something like SuperCollider, but I just think it'll be insanely cool to materialize music from thin air. I've learned lots.
Basically looking to clone laracasts.
If I can't find one, that is what I'll be working on.
I made an online partition calculator for ESP32. I made it because calculating this manually is a huge pain, and the only tool I could find was google-sheet based. I've gotten some feedback from people who've found it useful.
Open, secure personal genomics using fully homomorphic encryption.
With 23andMe bankrupt, I want to put out somewhere secure people can put their genomic data and receive insights. In a few months, I'll have a protocol in place to open up the data to third party apps (with user consent). The data does not have to be decrypted ever to be operated upon!
I got tired of waiting for Perplexity to launch Comet, so me and a friend just decided to build our own. This is probably the most fun I have had building a project.
A side project I started at the end of last year.
It's pretty clear that compute and energy are going to be two of the most important resources to track and manage in the coming decades. I'm trying to get a sense for prices and usage of compute and this project is my attempt to do that hopefully providing useful info to others as well.
The experience with GitHub can be terribly frustrating when it comes to managing the stream of incoming pull requests. The default inbox and notification systems are not very good, and not flexible. Critic lets you create any number of sections, each section being defined by an arbitrary search query.
I would now like to expand it to provide a better code review experience, similar to what Graphite or Reviewable provide, but as an open source project. Source code is available at: https://github.com/pvcnt/critic
Primarily uses Claude Sonnet 3.7 and Gemini Pro 2.5. But you can choose other models too.
You can try it for free while I'm beta testing it here: https://lumosbuilder.com?ref=hn
I’ve been doing tedious manual entry for a bit over two years now and after having missed three consecutive months, the only other option was to bail.
As a start it should help with 3 main things:
- Translation, categorization: my source documents aren’t in English but my GnuCash entries are. This is one of the reasons I don’t use the built-in imports. (This should shave off at least 90% of time spent entering data)
- Human-error prevention: there were at least 5 times where it took me over 15 minutes to reconcile a discrepancy because I entered some number or some account wrong somewhere.
Onsite deployment is a lot more difficult to make slick and easy. We've been thinking about the best way for our customers to deploy while reducing the load on our support team. So far, we are thinking about RPMs, debs, and Docker, and trying to make this as close to a '5 step process' as possible.
I would love to hear people's thoughts on other mechanisms that make it easier for SREs / DevOps to manage key platform infrastructure software.
Any feedback is welcome!
I once saw an idea on here[1] about putting a lot more historical information into a calendar, including past activities. It resonated with me and I wondered if bank transactions could be part of this activity layer. At the time I was working on a real-time integration between my bank and YNAB so I was already thinking about the space.
1: https://julian.digital/2023/07/06/multi-layered-calendars/
YNAB stands for "You Need A Budget." It is a privately-owned personal budgeting software company.
I wrote this tool to make instrumenting language servers very easy. MCP (both protocol and architecture) seems heavily inspired by LSP which made me curious about what it would take to support it through my telemetry-capturing proxy.
Haven't thought too deeply about how useful an MCP proxy can be but I see it as potentially a general platform for monitoring or debugging MCP servers or clients.
www.fableflops.com
Then specifically I was making an app which let me customise rules for poker - extra streets, antes, throwaway cards, passing cards, multiple boards, multiple decks, etc to support as many variants as possible, and ideally, stumble across new ones.
As an aside, I posted to reddit for research of other home variants people play (Basically to stumble across more fun variants in our home games) there's a few good alternatives I've not heard of in here!
https://www.reddit.com/r/poker/comments/1i91mnz/what_are_you...
I've run out of steam a little bit (burnt out & seeking work isn't great for own projects), but has been an excuse to learn swiftui. I'd be tempted to team up with people to keep the project alive...
https://github.com/scottfalconer/vibedb
I still have no idea if it's a good idea or a bad idea but it's been fun to think through.
I'm also working on and off on a hardware device for blind people.
e.g.
"san francisco ca united states" - San Francisco, California, America
"california, san francisco" - San Francisco, California, America
"glasgow, kentucky" - Glasgow, Kentucky, America
"glasgow, UK" - Glasgow, Scotland, United Kingdom
It started as a project when I was scraping websites, and some data had inconsistent ways of writing a location or address.
The library is called location4j - https://github.com/tomaytotomato/location4j
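The core matching problem in those examples is order-insensitive token lookup against a places table. A toy Python sketch of the idea (not location4j's actual API, which is backed by a full country/state/city dataset):

```python
# Toy places table; the real library ships a full dataset.
KNOWN = [
    ({"san", "francisco"}, ("San Francisco", "California", "United States")),
    ({"glasgow", "kentucky"}, ("Glasgow", "Kentucky", "United States")),
    ({"glasgow", "uk"}, ("Glasgow", "Scotland", "United Kingdom")),
]

def normalize(raw):
    # Order-insensitive matching: the most specific alias set
    # fully contained in the input tokens wins.
    tokens = set(raw.lower().replace(",", " ").split())
    best = None
    for alias, resolved in KNOWN:
        if alias <= tokens and (best is None or len(alias) > len(best[0])):
            best = (alias, resolved)
    return best[1] if best else None
```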
With 16x Eval, you can manage your prompts, contexts, and models in one place, locally on your machine, and test out different combinations and use cases with a few clicks.
I also just today made a fun way to fully host a front end and SQLite back-end on GitHub for free using Pages + Actions, you can try it right now: https://do-say-go.github.io/fully-hosted/
This thread made me realize I've made no progress in acquiring new clients since posting about this 6 months ago. However, my part-time job ends in May, which will allow me to focus exclusively on Fanfare. It's somewhat intimidating, but I'm also looking forward to it; the prospect of resolving this uncertain situation (either through success or failure) feels liberating.
[0]: https://ezb.io/thoughts/interaction_nets/lambda_calculus/202...
Input a relevant URL and it will decide if it is IMDb, YouTube, or a list. Using an LLM, it will attempt to extract the movies or series from the list and find the relevant IMDb link.
Then I have a masonry view of movies & series I have and have not watched, sortable by tags that I can rate and add notes to.
It’s a small vending machine on the internet where people anonymously send a friend three postcards, one word at a time. The first two cards are unsigned, and the last one reveals who sent them. It’s meant to be a slow, kind surprise in the mail.
I shared this on HN a while back, and it gave us a quiet little push. Since then, we’ve sent 246 out of the 300 postcards we set out to deliver this year. Things have slowed down lately, but the whole thing is automated, costs almost nothing to run, and has been a lot of fun to work on!
There's really nothing in that list that is interesting enough to send to anyone in my life. I'd be wanting to send something very specific like "Remember cycling Iceland?" or "Soy chicken success" or something.
I get your concerns about "writing something inappropriate" but you could probably let people choose 3 words from a list of a few hundred pre-vetted words?
"soy chicken success" hits close to home :D
We use data from club & sponsor to measure conversion over time. Our challenge lies in attributing a select group of people who appear in both datasets. Sponsors spend millions on their sponsorship, and they have no idea what comes back.
We also got funded last week & looking for a founding engineer (2nd employee) :)
I'm working on software specifically for the bulk water cartage/haulage industry.
There was no ready-made solution in the market so looking to fill the gap.
If anyone knows any water haulers looking to digitize their business, let me know!
I don't expect to place competitively but I learn a lot from these competitions. I like competitions like this that are connected to physical problems and datasets (though sadly this one is largely simulated), I learn as much about the broader world as I do deep learning. I've always idly wondered how seismology worked, and now I have an excuse to dig into it.
It's also given me a greater appreciation for the "seismology" we perform in our day to day, like knocking on things to see if they're hollow, or (as I learned here a while back) the way battitori test the porosity of cheese by knocking on it with hammers.
I’ve been building https://lowlow.bot, it tracks price changes on any website. I was inspired by https://camelcamelcamel.com, but wanted something that worked for more than just Amazon. It’s been handy for big purchases I’m ok waiting for and stocking up on recurring non-perishable essentials when they go on sale. It also lets me know when something has come back in stock.
RSS readers show content based on the feed they're coming from, and show read and unread items in the same list. That makes it difficult to know which items you've already seen, and is especially annoying for managing items that you want to read at some point but not anytime soon.
Lighthouse splits it into Inbox (new items) and Library (bookmarked items). This makes it possible to process new items quickly, and take your time with reading them.
Explicit is a validation and documentation library for REST APIs built with Rails that enforces documented types at runtime. This week I added support for MCP servers with the Streamable HTTP transport.
It'd be cool if you were on the same branch as somebody else, or another device, and your working directories could be synced. It'd also be cool for the commit history to be a bit richer, so you could see who, what and when for a change at a keystroke level.
So I'm working on real-time sync for Git! I'd represent the working directory as a tree CRDT [1] and sync that through FUSE and p2p networking.
Not sure whether this is actually a good idea! This is a POC :)
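The convergence property a CRDT buys you can be shown with a much simpler last-writer-wins file map (a toy that sidesteps the hard parts, like moves and renames, that a real tree CRDT has to handle):

```python
def merge(ours, theirs):
    # Each entry is path -> (lamport_timestamp, replica_id, content).
    # The higher (timestamp, replica_id) pair wins, so both replicas
    # converge to the same state regardless of merge order.
    merged = dict(ours)
    for path, entry in theirs.items():
        if path not in merged or entry[:2] > merged[path][:2]:
            merged[path] = entry
    return merged

a = {"src/main.py": (3, "laptop", "print('hi')")}
b = {"src/main.py": (5, "desktop", "print('hello')"),
     "README.md": (1, "desktop", "# readme")}
```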
This would combine well with your idea.
So let's say that I change Python source code; the "file system" would understand the syntax of Python source. Your tool could then use this to derive the semantics of my change, i.e. "added a function foo() with the following signature and body"
You can drag and drop links from YouTube, Twitch, TikTok, or Kick --and they show up in a grid.
You can reload or remove streams without refreshing, save mixes for later, and share them as links. It works best on a really big screen --phones aren't really supported and even notebooks are too small to get much benefit.
There's no backend, no login --everything runs in the browser.
It works like a run club, where you have to make a review first to see other people's reviews.
I am currently implementing watchlists, comments and a mural to make it feel a bit less lonely. Right now I like the UI but it feels too lonely.
But reviews are everywhere, good ones too, so it will be a hard chicken-and-egg problem to solve.
(Thoughts welcome!)
It’s a service for endurance athletes to configure nutrition packs to be available along the course at aid stations.
Right now it’s just a cool tool to build the bags based on your nutrition goals. I’m still doing a lot of outreach to race directors to get an opportunity to pilot the distribution of bags at an event.
https://medium.com/@DougDonohoe/ce45d56c8773
Writing well is hard and takes time. Was it Mark Twain who said "If I had more time, I would have written a shorter letter."? (The line is usually traced back to Blaise Pascal.) I can totally relate to that this month. Getting your point across without being long-winded is challenging.
I’ve been working on it for a few months and I’m hoping to have a demo up at 7:00 PST today for HN to play with :)
SaaS - I'm working on this, though mostly on marketing the tech... harder than it looks, am I right? https://prfrmhq.com - see https://news.ycombinator.com/item?id=43538744 [Show HN: My SaaS for performance reviews, setting goals and driving success]
- Shows I can use AI and I've integrated into AWS Bedrock
- Shows I can integrate with Stripe for payments
Consulting (Architecture, Strategy, Tech) - I'm working on getting my consultancy started. If anyone wants the kind of skills I offer here let’s talk https://architectfwd.com
Next SaaS - Starting a SaaS for managing core strategy and tech concepts. I created goals for it but I’m failing to kick the tyres
Last night I actually also started playing with Firebase Studio, though the app I prompted isn't even saving the document properly. I figure it can't be me, but I'll try again and work through the errors.
And playing drums, must get better
Key features:
• Built-in speaker and LED for alerts
• Preloaded with 66,000+ known speed camera locations (more to come)
• Easy updates via drag-and-drop on your PC
It’s been a fun project to build, and I’m excited to see it help drivers stay away from speeding tickets while keeping their data private.
I’d love to hear your thoughts or suggestions!
https://seongminpark.com/ipa-transcription-in-kilobytes-with...
https://github.com/ncruces/go-sqlite3/tree/main/sqlite3/libc
https://www.exploravention.com/products/askarch/
It is an interesting discovery journey on extracting relevant information from both code and documentation with a sufficient density to stay within the context window and extracting efficient criteria from arbitrarily phrased questions.
It's my fun little project to resort to. Implemented dark mode, sorting, grouping and various layout improvements. Also added a Drawer with Auction view the other week. UI is finally fun again with component libraries and LLMs.
Oh, and I added a Cloud Server Availability [2] page as I noticed people on /r/hetzner were complaining about lack of resources. Looks like their Cloud offerings are going quite well.
[1] https://radar.iodev.org/ [2] https://radar.iodev.org/cloud-status
Fil-C is a memory-safe implementation of C.
https://github.com/pizlonator/llvm-project-deluge/blob/delug...
You can try it easily on Linux/X86_64:
Also revisited and updated "Let's see", an eye trainer, which is basically a PWA you can "install" on your tablet/mobile/e-reader. I'm not a scientist, but I have had some success training my eyes with this technique and wanted to make a simple app that I can share with my friends to try.
https://letssee.publicspace.co/
Any feedback welcome :)
I wanted a library to store my own prompts once and retrieve them in multiple places (i.e. try something in Claude Desktop and then, once I iron out the wrinkles, load it in Roo Code or Claude Code and use it there). Give a prompt some variables and create infinite versions of the same prompt by providing the values. Or keep versions of each prompt.
Currently I have the landing page; soon (in 10 days max) I will make it live for everyone to use.
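The variables-in-prompts part maps neatly onto template substitution. A minimal sketch with Python's `string.Template` (the prompt and names are illustrative):

```python
from string import Template

PROMPTS = {
    "summarize": Template(
        "Summarize the following $doc_type in $n bullet points:\n$text"
    ),
}

def render(name, **values):
    # One stored prompt, infinite versions: variables are filled at
    # retrieval time, whichever client asks for the prompt.
    return PROMPTS[name].substitute(**values)

prompt = render("summarize", doc_type="RFC", n=3, text="...")
```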
https://medium.com/@level09/build-the-future-an-ai-powered-n...
My main motivation was to implement a service that publishes the addresses of containers and vms that I run on my workstation to my local network, but it gradually has grown into a full-blown implementation of RFC 6762, which has been fun.
It has hierarchical clustering, rolling correlation charts, a minimap, time series data detrending, and 2D matrix virtualization (to render only visible cells to the DOM).
It has up to 130K matrix cells and correlates up to 23.5M time series data points.
Grog on the other hand lets you keep your existing build setup while just parallelizing and caching it. It's not a full replacement, but it's more than enough for most mid-sized teams that want to have fast mono-repo builds.
It gives you a ranked list so you can quickly spot the strongest candidates, or at least get a solid starting point without reading every resume manually.
It scores applicants based on job requirements, flags any concerns, and suggests interview questions you might want to ask.
Next feature is search.
I would love to see some UI/UX improvements like split view where the map is on the left and the news reading/scrolling happens on the right reading pane instead of on the bottom while horizontally scrolling.
You could even use AI/LLM's to summarize the most important news from each country etc.
If you like what you see please let me know, all feedback is appreciated
You might ask why use Pocketbase at all, and I'm not sure anymore. I suppose the dashboard is great, built in auth is great (although I've had to write cookie middleware to make it SSR anyway). I wish there was a lightweight Pocketbase/Supabase style "backend in a box" setup that didn't push the whole client library directly communicating to DB paradigm.
It uses the webcam and a locally running facial detection model to alert you if it detects someone in frame.
It's FOSS and available @ https://www.eyesoff.app
After working on it for a while, I noticed there’s a stigma around using CORS proxies, often associated with fetching undocumented APIs.
While that’s sometimes true, I’m hoping to change that perception, to show that they can also be used for accessing real APIs. It just requires the proxy to correctly handle credentials and secrets.
The idea is to open up more possibilities for building static-first apps without worrying about CORS.
I work a lot with smaller investors, in real estate, private money lending, etc. It's sometimes hard to do due diligence on someone, and after having a couple of bad deals and realizing over 30 people were scammed, I wished there was a simple review site where you could see someone's past reviews.
Site is 80% there, hoping to enter beta in the next month.
I know you can print photos at Walgreens etc., but I never do it because it has some friction: an email-based flow removes almost all of the friction in my use case.
The idea is that growth becomes a lot more intentional when you can reflect daily, set goals clearly, and get structured input from people you trust — all in one place instead of scattered across different tools.
I'm getting ready to open early access soon. Curious if others have tried combining these areas or if you use separate tools for goals, journaling, and feedback!
Slowly building an open-source Data Lakehouse management utility application for local development, scratching my own itch and trying to accelerate development workflows with customers developing for Databricks.
For now it only supports Delta Lake (using delta-rs + duckdb), only supports table metadata inspection and querying, but in the near future will add dashboards as code, simple Markdown notebook like mode, and Apache Iceberg support.
For now it's an enabler for me and others, hopefully I can turn it into a product somehow at some point.
https://github.com/mcp-router/mcp-router
Works with VSCode, Cursor, Cline and any MCP client, and connects to servers from any registry (Zapier, Smithy, etc.).
Since my third year in medical school, I've built the largest medical education platform in MENAP. 90k+ users, 100m+ questions solved, billions of seconds spent learning across our Super App. Now working on sustainably scaling further, building medGPT.
It's awesome, exciting and impactful work. I am the first medical doctor + full stack technologist in Pakistan (250m people), and we've helped the country move medical education decades forward.
For such markets, you can imagine that the TAM etc is smaller, but still important. For us it's a blend of mission driven and business.
Thanks for the comment! I would love to chat vet-ed-tech further, I am on LinkedIn (/in/az1b) or email: azib [at] az1b [dot] com
Currently working on a solidity upgrade for a leader-board, and public analytics.
D-Safe for children, adults and plants. I am looking for contributors to integrate it on https://internet-in-a-box.org .
No worries, I'll do it myself if everyone is busy.
Email: data at datapond.earth
On the way, I developed lightweight image editor and 3D model viewer components, which I've open sourced [1].
Regardless of if I target macOS or Linux first, this would be a pretty full time endeavour on my part. I could wait until the commercial use licenses of the Windows version sustain me enough to be able to work on this full time, or try to raise a Kickstarter for $X00,000 to be able to quit my 9-5 and work on porting full time for a year or so
- repo: https://github.com/vseplet/PPORT
It only stores (timestamped) floating point values with a series id and uses a B+Tree as the backing data structure. Querying is done with a lisp-like query language.
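A toy version of that storage model, with sorted in-memory lists standing in for the on-disk B+Tree:

```python
import bisect

class SeriesStore:
    """Per-series sorted lists of (timestamp, value), queried by
    time range. The real thing backs this with a B+Tree on disk."""
    def __init__(self):
        self.series = {}

    def insert(self, series_id, ts, value):
        # Keep each series sorted by timestamp on insert.
        bisect.insort(self.series.setdefault(series_id, []), (ts, value))

    def range(self, series_id, start, end):
        # Inclusive range query via binary search on both ends.
        rows = self.series.get(series_id, [])
        lo = bisect.bisect_left(rows, (start,))
        hi = bisect.bisect_right(rows, (end, float("inf")))
        return rows[lo:hi]

db = SeriesStore()
for ts, v in [(3, 0.3), (1, 0.1), (2, 0.2)]:
    db.insert("cpu", ts, v)
```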
I'm giving myself 18 months- it's been super fun so far!
Let me know if you have any feedback or feature requests.
I'm generating random IP addresses on the frontend, then making a call to our free API to validate the "realness" of the IP addresses — mainly to remove bogon IP addresses, non-routable IPs, and IPs from large ASNs (national ISPs, the DoD, car companies, etc.).
Our free API supports 1,000 requests per day from unique IP addresses, so there shouldn't be any issues for low usage. However, if we get more power users who enjoy the game, I’ll switch to our Lite API service (which is also free, https://ipinfo.io/lite) to validate IP addresses, as it supports unlimited requests.
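A client-side approximation of that "realness" filter is possible with Python's `ipaddress` module (this drops bogons and non-routable ranges; the ASN-based filtering needs the dataset):

```python
import ipaddress

def is_plausible(ip_str):
    # Drop private, reserved, loopback, link-local and multicast
    # addresses. The real check also filters by ASN.
    ip = ipaddress.ip_address(ip_str)
    return ip.is_global and not ip.is_multicast
```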
Let me know if you have any feedback for me :) I made it mostly by "vibe coding"; I'll write a post about the whole process.
Using a dataset-based implementation would require me to have a backend, which is out of the scope of this project. Right now, I'm generating random IPv4 addresses, but if I were generating random IPv6 addresses, I would have to go the database route. For that, I would use our free IPinfo Lite dataset: https://ipinfo.io/lite
My colleagues actually developed an extremely fast algorithm to select truly random IPv6 IPs from a series of CIDRs, which is what you see reflected in our dataset.
Let me know if you have any feedback or suggestions for me, please.
Conversely, tearing apart a bunch of things, around my family's house -- invasive vines, old worn-down structures, an extensive amount of brush, etc.
Aside from that, getting a landing page working for my side project along with various ancillary tasks for a demo deploy.
1. Eli5 equations(2) uses an LLM to convert a given picture of an equation to latex and, if given additional context, breaks down the equation parts to explain it. Gemini for the model.
2. reflecta - a journal prompting app with deepseek to help reword and target the prompts towards you better.
Key features:
- Multi-website management with single sign-on (one dashboard for all sites)
- Static rendering via Cloudflare KV for 100% uptime and blazing speed
- Real-time editor with AI-powered automated internal backlinks
- Theme switching without breaking functionality
We're currently serving 100+ websites. It's completely free for non-profits.
Would love feedback from anyone managing multiple content sites!
I've got an MVP right now, but I'm reworking the region build system and potentially reworking the underlying design to follow a more tree-based approach for managing the windows.
I've tried other window managers on Windows and felt they have either been slow or buggy, and I wanted something that looks nice. My inspiration is Hyprland, as I'm currently dual booting, and when I work on Windows, all I want is a nice window manager.
Still very early in development, but I'm excited about its potential.
This also goes in line with my current studying of the Windows OS, so it's a bit of learning and then working. :)
I've been working on a webring creation and management app with embeddable widgets for member sites.
anubis_policy_results{action="CHALLENGE",rule="bot/lies-browser-but-http-1.1"} 3891
This is coming soon to an Anubis near you!
Open-source differentiable geometric optics in PyTorch.
I spent a long time working in manufacturing and struggled to find a piece of software where we could define a process, share instructions and collect data all in one go.
The idea is you can basically turn your process into an interactive flowchart and follow it through. I’m almost code complete on the MVP, moving into distribution mode in a few weeks.
I’d love to hear from any HNers who’ve gone from 0 to 1 on a SaaS for non technical users. What worked for you?
I'll have a working project in about a month. But for now it's just a readme.
https://johnscolaro.xyz/projects/so-you-think-you-know-brisb...
I'm working on expanding it to all large cities in Queensland, moving it to its own domain, and monetizing it to cover hosting costs.
P.S. Link https://apps.apple.com/us/app/handsonmoney/id6740042181
2. Basecoat, a HTML/CSS port of shadcn/ui v4 [2] (no React).
3. DevPu.sh, a Vercel for Python apps.
Releasing both Basecoat this week and DevPu.sh hopefully in the next 2 weeks.
Had to implement the bindings first, because js.Value kind of sucks. Meanwhile I am building web components and widgets and it's slowly getting where I want it to be.
Maybe after a couple more weeks I can finally build apps in 100% Go together with webview/webview. Still needs a lot of work around the edges here and there.
My wife and I are fans, but their Finland-Swedish Vörå dialect is not easy to understand, especially for us in the very south of Sweden. I have watched the recording too many times to count, and made these so she could enjoy it more.
I'd love for you to try it out! It is browser based only for now and pretty basic. I'm adding features sparingly as needed, but my next task is to add some documentation for brand new users and make what it can already do more obvious.
I was having trouble meeting people after moving to a new city, so I designed and printed some goofy, funny t-shirts for my wardrobe, and that has really helped get the ball rolling on conversations.
Hoping to make the launch within the next few weeks.
I'll setup a redirect on https://shop.gtmnayan.com when it's ready, still have to figure out logistics.
Official release is Cinco de Mayo, I'm very excited!
e.g.: Following up on one of my HN comments on OpenAI ImageGen gpt-image-1 quality: Side by side comparison of more challenging prompts at Low/Medium/High:
https://generative-ai.review/2025/04/apple-a-dog-how-quality...
I may also finally finish implementing WebMentions support too as a kind of comment section.
I may also work some more on my long-term relaxation/creative maze generation and solver project.
At work, I keep putting off yet more refactorings that are required because of poor/missing requirements and non-technical leadership of the project.
It wouldn't be so bad, but part of this "new" project involves communicating with some awful SharePoint """database""", as well as a poorly designed real database (it has multiple values in one column, not even with any standard, just sometimes there's extra numbers I need to parse, sometimes not - just lots of this type of crap repeated everywhere), and the worst development/deployment experience I've ever had to deal with in ~10 years.
To write code involves Remote desktop to what was a single core VM (and much protesting gained me... one extra core) to Windows Server 2016 meaning most modern/nice developer tooling isn't supported, and deployments are all done by copy pasting files over yet more nested remote desktop sessions.
Sadly there's no real way of automating any of this, every suggestion is always a "default no", again most of the tools I'd need for this won't run on Windows Server 2016, and even if I worked around it the stakes are way too high for "It's easier to ask forgiveness than it is to get permission".
The turn around time for even a small change is huge because of this mental burden, it's a complete slog to get anything done.
So I guess what I'm saying is I've been casually looking around at jobs this month.
This is why I always stress the importance of being able to work on my own projects, because otherwise, I'd have burnt out.
/rant
So far all my work has gone into the technical side of setting up the game (a Java app written in 2010) to work as a reinforcement learning environment. The developers were nice enough to maintain the source and open it to the community, so I patched the client/server to be controllable through protobuf messages. So far, I can:
- Record games between humans. I also wrote a kind of janky replay viewer [1] that probably only makes sense to people who play the game already. (Before, the game didn't have any recording feature.)
- Define bots with pytorch/python and run them in offline training mode. (The game runs relatively quickly, like 8 gameplay minutes / realtime second.)
- Run my python-defined bots online versus human players. (Just managed to get this working today.)
It took a bunch of messing around with the Java source to get this far, and I haven't even really started on the reinforcement learning part yet. Hopefully I can start on that soon.
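The interface I'm aiming for is roughly gym-style. A toy sketch of the shape of it (all names here are placeholders, not the real client API):

```python
class PlaneEnv:
    """Wraps the patched Java server; messages go over protobuf."""

    def __init__(self, client):
        self.client = client

    def reset(self):
        self.client.send_reset()           # hypothetical message name
        return self.client.read_state()    # plane positions, energy, etc.

    def step(self, action):
        self.client.send_controls(action)  # throttle/turn/fire bits
        state = self.client.read_state()
        reward = state.get("damage_dealt", 0.0)
        return state, reward, state.get("game_over", False)

# Stand-in client so the sketch runs without the real game server.
class FakeClient:
    def send_reset(self):
        self.t = 0
    def send_controls(self, action):
        self.t += 1
    def read_state(self):
        return {"t": self.t, "damage_dealt": 0.0, "game_over": self.t >= 3}
```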
This game (https://planeball.com) is really unique, and I'm excited to produce a reinforcement learning environment that other people can play with easily. Thinking about how you might build bots for this game was one of the problems that made me interested in artificial intelligence 8 years ago. The controls/mechanics are pretty simple and it's relatively easy to make bots that beat new players---basically just don't crash into obstacles, don't stall out, conserve your energy, and shoot when you will deal damage---but good human players do a lot of complicated intuitive decision-making.
[1] http://altistats.com/viewer/?f=4b020f28-af0b-4aa0-96be-a73f0... (Press h for help on controls. Planes will "jump around" when they're not close to the objective---the server sends limited information on planes that are outside the field of vision of the client, but my recording viewer displays the whole map.)
Private recipe archiving/bookmarking. No ads, no AI, no javascript . Join a server or host your own (https://github.com/bradly/recipin).
Bacon Wrapped Urns- https://baconwrappedurns.com
Mortality is so hot right now, so why not celebrate with a custom urn and enjoy your journey into the spirit world in style.
A translation app that keeps document layout almost intact. It's also better than Google Translate and DeepL.
It's a website whose goal is to make it easier to find apartments/hotels/etc. that fit your housing preferences (starting with places that are close to the people and things you care about). Its flagship feature is the ability to make heatmaps of cities based on your preferences.
Since February I've slowed down on feature development temporarily as I try to find a way to sustainably increase its popularity and learn what's the most important thing to focus on next.
The idea is you can set a few filters (like bpm, key, decade, genre) and then swipe through random songs, accept or reject them for inspiration or playlists. Kinda like Tinder but for digging through your own tracks.
It also tracks what's trending on TikTok/YouTube/SoundCloud weekly, so you can find stuff that’s blowing up, filterable by region. Plus it can build smart playlists automatically based on rules you set (like “new 90s house under 120bpm” every month).
It is just a tool to make working with your existing local/Apple/Spotify/SoundCloud library faster and more creative.
Here's a demo of it: https://filtered-f.web.app/
I was using Python to get the bpms and keys on local tracks, and was going to start figuring how to fill in the missing metadata.
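The rule side of the smart playlists is straightforward once the metadata exists; a sketch with made-up field names:

```python
def smart_playlist(tracks, bpm_max=None, decade=None, genre=None):
    """Filter a track library by simple rules, e.g. "90s house under
    120bpm". Field names here are illustrative, not the app's schema."""
    out = []
    for t in tracks:
        if bpm_max is not None and t["bpm"] >= bpm_max:
            continue
        if decade is not None and not (decade <= t["year"] < decade + 10):
            continue
        if genre is not None and t["genre"] != genre:
            continue
        out.append(t)
    return out
```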
Thank you!
It's a bottom-to-top rewrite of a timer app that I've had in the App Store since about 2012. This is probably the fourth rewrite.
[0] https://github.com/RiftValleySoftware/ambiamara/tree/master/...
Everything will be bound to a key in the spirit of VIm :-)
I don't have a landing page yet, but if this sounds interesting to you, you can sign up here: https://lindon.app
I'd also love to hear what would be important to you in an application like that.
More advanced digital marketing features, scaling and what's typical for a mature product, and an upgrade cycle for major 3rd-party dependencies.
What I'd love to be working on: trying to initiate a high-voltage arc through the air to a target device, and modulating it to send "Data over Lightning", like Alyx does in Half-Life 2. It won't work the way it does in the game, but it's an idea I've had for a long time and I'd love to prototype it some day.
Produces a pick order that shifts as the draft progresses.
Have a browser runnable colab notebook for 20+ sets.
We use a combination of 1) static analysis/PL theory and 2) Large Language Models to help large enterprises decode their legacy COBOL systems
My current side project is a vulnerability scanner for binaries. I do VR in my day job, so I'm trying to figure out how useful (or not) AI is for this domain.
Jury is still out. Getting false positives and negatives, but I can find some known CVEs!
I honestly hate it at this point and would appreciate any reading on the topic. It's been a grueling 4 months of back and forth with a lot of mission critical business aspects to handle that I had to learn on the fly.
Plus, implementing encryption for https://github.com/mattrighetti/envelope
I am working on an open-source insurance application platform.
The main goal is to accelerate time-to-market for insurance and insurtech innovations, providing all those "boring" enterprise features (like multitenancy, role-based security, audit trails, etc.) out of the box, so that you can focus on building the actual product.
The first is a preventive maintenance and calibration tracker (https://pmcal.net) that was born out of my day job as an engineer in small business manufacturing.
The second is an AI engine for pulling structured data out of incoming email (either via IMAP on your email server or via SES). If you think of the engine that powers TripIt, they had to write about 10,000 different ingestors for each airline and hotel and travel booking site. With a structured output AI, the need to write specific ingestors goes away.
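The contract with the model boils down to "return this JSON shape"; one hedged illustration (the field names are hypothetical, not the engine's actual schema):

```python
import json

# Hypothetical schema for a flight-confirmation email; the LLM is asked
# to fill exactly these fields instead of us writing per-airline ingestors.
REQUIRED = {"airline", "confirmation_code", "depart_iso", "arrive_iso"}

def parse_llm_output(raw):
    """Validate the model's JSON before it reaches downstream systems."""
    data = json.loads(raw)
    missing = REQUIRED - data.keys()
    if missing:
        raise ValueError(f"LLM output missing fields: {sorted(missing)}")
    return data
```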
Since you already have a method for reaching into folks' Microsoft 365 inboxes and such, you could probably train an LLM to extract arbitrary data based on a user's prompt quite quickly though.
There is still a lot to do and learn (especially in the marketing department), but we have plans for a new product in the privacy space. I don't want to say too much about it until we've started working on it, but it's in the compliance space and fits quite well with our existing product. I think it's always a good starting point to solve your own problems.
We're based in Kyoto and the posts are heavily Japan-centric; we'd love to see posts from all over the world!
Are y'all involved with "for Cities"? (https://www.linkedin.com/company/for-cities/posts/)
Not directly involved with for Cities, but they're friends of friends and we should have a chance to meet them very soon.
just waitlist for now, but I have posted some demos on my twitter - https://x.com/rogutkuba/status/1915533678207262931 - https://x.com/rogutkuba/status/1915226139812839690
It's still a work in progress, but it's already functional if you want to try it out. I'm keeping it super lightweight, clean, and focused just on writing without the usual bloat.
If you're tired of bloated to-do and note-taking apps, give this a try. It's OSS, free, and requires no sign-up.
It has these capabilities for now:
- Prebid flooring insights (uses an LLM to generate a summary)
- Screenshots at various times
- Real-time console logs
- Prebid detection
Basically an app where you can travel a city on a hex grid (h3) and learn about it/receive recommendations on things to do. Different activities and landmarks are hooked into language-learning games which, when completed, add phrases/words to a flashcard deck for future study.
Also working out the logistics of offering a microgrant to award people who want to make movies like this!
I’m still looking for a new SaaS idea, so if you have something you want to partner on do reach out. Preferably Rails or Go. Previously I built stuff like https://getbirdfeeder.com/
Sounds basic, and it is, but I've yet to find any open source project (let alone product) that does this.
All I want is to tap a button, talk to the little guy about how to update my document, and see the changes flow. I guess Claude Projects or similar might do this, but I'm making it more for friends and family. The current use case is keeping track of a house renovation project going on.
I was tired of all the ads and the poor formatting on recipe websites.
So I made a website to import food recipes from any location (text, YouTube, file...).
It has been fun so far! I tried importing from fb/ig by using a meta app but it has been a horrible experience so I scrapped that ^^
Terminals are tragically underpowered as well as hostile towards beginners! We're moving the needle there.
Much cheaper to hire a VPS with attached local storage than to use an external database, and a lot quicker too.
Feedback appreciated - https://proxymock.io/
It's in a functional state, I use it myself but it needs some more ergonomic features before I'd suggest for someone else to use it.
This was spurred from group texts with friends planning a vacation or outing. We would just throw out a bunch of dates that would get lost in the mix, and people would forget who was open on what dates. Whenish just allows users to select the dates they are free, and then it shows the best date that works for everyone.
I have a TestFlight right now, and if anyone wants to give it a go, please do! I would love feedback :)
https://testflight.apple.com/join/4HaADNMF
PS: I am not a designer, and app icons are hard.
thinking about taking dancing lessons instead, maybe afrobeats.
Been in prod for a few months, recently ripping through 900TB with ~5x efficiency (the customer was on BigQuery).
If anyone has any data/infra challenges, or just wanna talk about this kind of tech, lemme know :D
https://github.com/bsubard/Godot-3D-Procedural-Infinite-Terr...
It has integrated BIN inventories from Afternic, Sedo, Namecheap, Porkbun, and Gname.
Currently working on custom price alerts and an API.
A prompt collection platform that lets you organize your prompts, share them, learn prompts from other users, and reuse them on multiple LLM/AI platforms. It's aimed at improving prompt engineering skills for both technical and non-technical LLM users. Currently in the alpha phase and actively looking for feedback.
The idea is to build scanning databases, file systems, buckets, etc. for static keys and credentials while allowing users to add new file types and parsers.
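At its core that's pattern matching plus pluggable parsers; a tiny sketch with two illustrative patterns (real scanners ship many more, plus entropy checks):

```python
import re

# Two example patterns only; a real ruleset would be user-extensible.
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(r"api[_-]?key\s*[:=]\s*['\"]?[A-Za-z0-9]{20,}"),
}

def scan_text(text):
    """Return (pattern_name, match) pairs for every hit in a text blob."""
    hits = []
    for name, pat in PATTERNS.items():
        for m in pat.finditer(text):
            hits.append((name, m.group(0)))
    return hits
```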
It was born out of a personal need in past roles and teams. I launched it last year.
learns about your brand and creates custom email graphics for headers etc
pretty cool what gpt-image-1 can do
if curious, can check out https://graphic-design.email
I also got some code-share and collaboration features working, but got a bit stuck on fonts. But I can appreciate your feeling of 'how cool this is'
I ground to a halt once I realised I had no barrier to entry, ie it could be cloned very easily. Always an issue with Web Development I guess. Plus I hate what modern browsers have become in recent years and not sure I want to target such a fast moving platform. I got burned once already with WebStart 'warning this app might do something scary' and certificate fiasco.
I thought about some native binaries, but I know I am kidding myself. I had an ios app that was pixel cloned within 6 months. But somehow a web app feels like publishing straight into public domain.
For fonts, I just went with a simple raster bitmap font and a pixel grid storage format. Creating these limitations makes it easier for me and for developers and artists. I chose 320x180 because it fits 16:9 perfectly, which makes full screen ideal on most monitors.
The only self-hosted option I found was wger.de and while it looks great, it's a bit too much for my needs. I want something lightweight (so as not to hog resources on my cheap VPS) that does what it needs to do and nothing more.
It's been a while since I've done web dev, so I'm going to try out Deno (TypeScript) with htmx.
Nice to do something just for myself
It’s currently in beta for macOS but I’m waiting for Anthropic to extend my rate-limits before I announce it here on HN.
I'm trying to capture a sense of fun, wonder and connection through these tools which I feel has been lost in recent times with remote working.
There is some small improvements to make but I want to focus on onboarding via a sandbox environment first next month.
Currently I have decided that I can add "Email" as a source, to be able to read not only news but also emails in my app.
I would like to create a sort of search engine for that.
Nothing fancy or innovative, but just to learn Golang in a bigger context.
Currently I am working on a GraalJS JavaScript web runtime written in Clojure.
PostScript has two mechanisms to save and restore the machine state and they are intertwined and only vaguely documented. I'm trying to get my head around all that this week.
I actively work on it a few hours every day.
Having had some former coworkers who ended up at various dating platforms, I find dating fascinating. I still think dating is something that needs a better modern solution than what "the apps" offer. Every dating app has a few fundamental flaws. There's the human element too.
What worked for me was hacking Tinder circa 2014 by faking my geolocation and hypertargeting certain places and neighborhoods I knew would be up my dating alley and spamming posts on social media sites like Reddit and Craigslist.
It’s tough because some people don’t even know what they’re looking for in a partner.
I also use 5 dating apps, spend ~15min in them in total.
What kind of posts?
A merge conflict resolution tool integrated with GitHub. Now working on a solution for preemptive conflict detection and a smarter/simpler merge queue.
and this small site makes a curriculum for anything you want to learn and gives you books + sources to do so:
which enables you to debug faster with error optimization.
You can join our community from the website. https://www.almightty.org/
So I started making a simple roguelike and an engine for a browser game. Nothing fancy but entertaining.
Launch soon! Drop a comment if you want early access
Which one? We are figuring this out.
Notably not an AI agent like Operator, Manus, etc. which are largely unreliable for the time being. Instead this uses AI to turn your task into something repeatable and configurable.
Currently focusing on scraping use cases but hope to make it more powerful soon so it can actually do complex tasks rather than just extracting data.
I'm curious: what are your must-haves in a note-taking application?
Eventually, I've settled with Obsidian because of its simplicity and extensibility. You can leave it with basic features and truly own your notes in a simple format (you can also put them into any cloud, as long as that cloud reaches your filesystem). It doesn't do everything just like I'd want to, but I've thought about just building another notes app that reads and writes to the same path your Obsidian notes are in, instead of trying to cover every possible editing feature like most big notes apps. Then I'd use different apps for different needs, with one place to store data.
Since you're focusing on privacy, have you considered using Obsidian? Is there anything particular you want to do differently?
Unlimited undos. Even if I deleted text a year ago, app must bring it back. Ideally something like git, with branches and auto-commits.
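The git-like auto-commit idea could be sketched as an append-only, content-addressed log (a toy in-memory version with no branches, not any particular app's implementation):

```python
import hashlib
import time

class NoteHistory:
    """Nothing is ever deleted: every save appends a snapshot, so text
    removed a year ago is still one restore() away."""

    def __init__(self):
        self.objects = {}  # sha -> full note text
        self.log = []      # (timestamp, sha), oldest first

    def commit(self, text):
        sha = hashlib.sha256(text.encode()).hexdigest()
        self.objects[sha] = text
        self.log.append((time.time(), sha))
        return sha

    def restore(self, sha):
        return self.objects[sha]
```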
Jobless and no prospects ... Working on a couple projects at once.
Basically, a user gives an initial prompt, e.g. "create a game", and then a series of AIs (Gemini, OpenAI, Claude) prompt themselves until there's a finished outcome. The user can change any output step in between, so the end result can be tuned at any point.
- https://github.com/smorin/py-launch-blueprint
Features TLDR
- Bootstrap commands
- Command Runner
- Dev Tools: Ruff (linting/formatting), MyPy (type checking), Pre-commit hooks
- AI Ready: Default configs for Cursor, Windsurf, Claude Code
- Production: Python 3.10+, uv package manager, testing setup
- DX - Developer Experience: VS Code integration, sensible defaults, quality documentation
- CI/CD: GitHub Actions workflows, automatic testing, version management
- Task Templates, PR Templates
- License and Contributor License Agreement Checks
On the "side projects" side of things, I've been working on a "boot to EndBASIC" disk image since December with the goal of creating a small "dev kit" box that boots quickly and directly into the interpreter. The disk image is pretty much done for a first release, so now I need to get to the design and 3D-printing of a case for the board+screen combo. The latter is brand new territory for me, and I have a self-imposed deadline of June when I'll be presenting this at BSDCan.
... but I've also taken a small detour to improve the EndBASIC website and provide a dynamically-generated gallery of user-uploaded projects. This has been a deficiency for a while and I felt it'd be easy to add it, right on time for the "dev kit" release.
Stay tuned!
I know, it sounds crazy.
In a month or so, I’ll be sharing some news.
Building a tool to supercharge your Cursor, Windsurf, Claude and other developer tools by connecting it to polished, high quality mcp servers for linear, slack, DBs, and other useful workflows.
The smallest (in terms of system calls and code) event sourcing database I can make.
Being more present.
Linux skills? Just the basics: cd, ls, mkdir, touch. Nothing too fancy.
As things got more complex, I found myself constantly copy-pasting terminal commands from ChatGPT without really understanding them.
So I built a tiny, offline Linux tutor:
- Runs locally with Phi-2 (2.7B model, textbook training)
- Uses MiniLM embeddings to vectorize Linux textbooks and TLDR examples
- Stores everything in a local ChromaDB vector store
- When I run a command, it fetches relevant knowledge and feeds it into Phi-2 for a clear explanation.
No internet. No API fees. No cloud.
Just a decade-old ThinkPad and some lightweight models.
Full build story + repo here: https://www.rafaelviana.io/posts/linux-tutor
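The retrieval step is just nearest-neighbour search over embeddings; a dependency-free sketch with toy vectors standing in for MiniLM/ChromaDB:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, store, k=2):
    """store: list of (embedding, chunk) pairs. Return the k chunks
    closest to the query; these then get fed to the model as context."""
    ranked = sorted(store, key=lambda e: cosine(query_vec, e[0]), reverse=True)
    return [chunk for _, chunk in ranked[:k]]
```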
It'll eventually be at c3n.ro and will be "sovereignly" hosted in the EU.
Collecting the data helps with recording engine performance, tyre ages, best lap times but is also really useful for recalling how well each setup performed for future reference.
I’m deliberately doing this all in a very low-tech way as my son will be creating a more polished version for a school project. We’re front-running that a bit to give him a good dataset and explore various ideas.
On that note, they do Python in school. For the backend it will be SQLite and Flask. Any suggestions for the front-end tech? This will mostly be forms- and grids-based, so nothing sophisticated is needed, but some simple client-side logic (e.g. validation, geolocation, a simple stopwatch) would be good. Ideally this would be Python as well. We could use WebAssembly, but I am wondering if there is a suitable framework that does this out of the box.
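For the data side, the stdlib sqlite3 module is enough to get him started; a toy lap-log schema (the columns are my guess at what we'll track, not the final design):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # a file path in the real app
conn.execute("""
    CREATE TABLE laps (
        id INTEGER PRIMARY KEY,
        session_date TEXT,
        setup_notes TEXT,
        tyre_age_laps INTEGER,
        lap_time_ms INTEGER
    )""")
conn.execute(
    "INSERT INTO laps (session_date, setup_notes, tyre_age_laps, lap_time_ms)"
    " VALUES (?, ?, ?, ?)",
    ("2025-05-01", "soft springs", 4, 51234),
)
best = conn.execute("SELECT MIN(lap_time_ms) FROM laps").fetchone()[0]
```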
Pictures at the link. There's also some webtoys on there, feel free to peruse
I would use a polished version.
When I read books, I find myself getting easily distracted since my phone has so many alternative apps/things to do OTHER THAN looking up a word in a dictionary.
I am particularly interested in seeing how you handle the transition from LLMs doing great at one-shot prompts but then struggling as the scope of everything expands and you have to get smarter about breaking down problems.
Also still working on a custom slicer for a special metal printer design. The VTK library version needed to be replaced by a simpler Blender Geometry Nodes solution to extract texture information and infill hull features.
Also considered a beautiful solution to Roger Penrose's Andromeda paradox. That guy has a wicked sense of humor... very funny. =3
- we ship with a mounting piece that is easy to put on any bike frame to mount our battery
- our app and communication protocol are open-source, and we provide code for compatibility with the major bike controllers, meaning we're compatible with 90% of e-bikes!
so in practice we're the perfect battery for e-bike enthusiasts who want to change their battery to a repairable and fireproof one
and we also do B2B deals with some brands who want custom design etc
but basically if you are running Bafang / Shimano / Bosch controllers, it will work with our battery
PAPER is designed from the ground up to avoid Pip design mistakes, directly taking advantage of new standards while also offering Pipx-like functionality. Unlike uv, Poetry etc., PAPER is not a project manager or workflow tool; it installs the packages that you tell it to (and their dependencies), when/where/because you do. It's entirely user-focused, and usefulness for developers is treated as mostly incidental. (However, it can of course form a useful part of a proper development toolchain.)
It's not in a publicly-usable state yet, but these are the main design principles I'm working from:
* It's designed from the ground up to provide a programmatic API and to install cross-environment (in fact, you're only expected to install in its own environment in order to provide plugins). The API is provided by a separate wheel; other projects can explicitly cite that as a dependency and don't have to `subprocess.call` to a CLI.
* Size and performance are paramount. (Much of Pip's slow performance on smaller tasks is due to its size). I'm aiming for ~1MB total disk footprint for the base installation (compare ~10-15 for Pip, which is often multiplied across several environments; ~35 for uv). Dependencies are very carefully considered; installations are cached as much as possible (and hard-linked when possible, like with uv); etc.
* The program bootstraps itself as a zipapp that pre-loads the program's own cache before having it install itself from its own wheel (within that cache). This entails that there aren't any "hidden" vendored dependencies; anything that ends up bundled with PAPER can immediately be installed with PAPER, without an Internet connection.
* Non-essential functionality can be provided later by simply installing optional dependencies. The default is sufficient for the program to install wheels with minimal feedback.
* The CLI is built around separate hyphenated commands rather than sub-commands, for simpler implementation and better tab-completion. Commands are aimed at offering somewhat finer-grained control while keeping simple use cases simple.
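The hard-link-with-fallback caching mentioned above is a small trick worth spelling out; a sketch:

```python
import os
import shutil

def place_from_cache(cached_path, dest_path):
    """Hard-link a cached file into a target environment so the bytes
    exist only once on disk; fall back to a copy when linking isn't
    possible (e.g. cache and environment are on different filesystems)."""
    try:
        os.link(cached_path, dest_path)
    except OSError:
        shutil.copy2(cached_path, dest_path)
```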
bbbb is largely inspired by Flit, but is also intended to support projects with non-Python code, and doesn't enforce distributions with a single top-level import package. It uses the same split between a core package and a full development package, except the core package is treated as default. It's designed to be even more minimal than flit-core, and in fact can only build wheels by itself (dynamically pulling in the dev package if asked to make an sdist).
The goal is to minimize download footprint and maximize modularity when end users download an sdist and want to make a wheel from it (for example, implicitly via an installer). Users will declare the dev package as the build system in pyproject.toml, and may even declare it as an in-tree backend which will then be automatically added to the sdist. So people who (for some reason - perhaps to satisfy Linux distro maintainers) want to distribute an sdist for pure Python code, won't need any build-time dependencies at all.
Building non-Python code in bbbb, as well as customizing sdist contents and metadata (beyond what's directly supported), works by hooking into arbitrary Python code. Unlike the setup.py of Setuptools, the code can have custom name and locations specified in pyproject.toml, and it has specific and narrow purpose. I.e.: it's not part of implementing a class framework to power a now-deprecated custom CLI; it only implements compilation/metadata creation/manifest filtering - and does so with a much simpler, more direct API. You're meant to build upon this with additional support libraries (e.g. to locate compilers on the user's system) - again, modularity is key - that are separately listed as build-time dependencies.
I'm trying to make simple, elegant, pure-Python tools that complement each other and respect (my understanding of) modern Python packaging standards and their underlying goals. A developer could end up with a toolchain that looks like: PAPER, bbbb, build (the reference build frontend provided by PyPA), cookiecutter (or similar - for setting up new projects), twine (the default uploader), and some shell scripts - for things like "install dependencies with PAPER in a new environment and then add a .pth file for the current project to that environment". (And if you like linters and typecheckers, I certainly won't stop you from using them.)
Currently working on adding a manga mode and Netflix auto-captioning
So far this has meant orchestrating a set of processes just to talk to bluetoothd, automatically connect to boards, guess which /dev/input/event* device each one is associated with, and start reading those and sending data somewhere over OSC. Every step of the process is about 10 times as difficult as I initially expected it would be, which is why, after months of work, I'm still putzing around with process orchestration instead of actually doing anything with music.
The bit of hair on this yak that I'm currently shaving is building a TUI library for Deno so that I can build a nice dashboard for keeping tabs on the state of all the different pieces. I prototyped a bit of the orchestration process in Node-RED, which was neat, but not exactly what I'm going for. There is a TUI library for Deno already, but it's kind of buggy and non-ergonomic for my brain.
Think multi-generational family and asset data. Everything in common formats, written in simple code.
Looking for early customers or modest pre-seed investment.
It’s not launched yet officially, only friends and family so far!
Any feedback is welcome!
There are already a few services like this, but most don't support using a parent's voice, and very few can connect stories together into a continuous narrative based on previous ones. I would also like to keep the fairy-tale context somewhat local: Polish folklore, for example, is different from British folklore. The most common villains are different, and it can be a fun and educational problem to solve.
I'm mainly doing this as a learning project, but curious to see where it ends up.
And an example video is here: https://youtu.be/1duE604MGHs
It definitely hasn't gotten the traction I'd expected, but at the very least I'm very close to starting to make my own courses with it!
It's a performance analytics platform for runners who love to dive into the details after a run and want to accurately track their progress over time.
It's not like Strava, because I'm not including any social elements, initially. And it's not like TrainingPeaks, because it's focused on individuals as opposed to teams or coaches. Also, the analytics and models I offer are personalised as opposed to one-size-fits-all. It's also running only: no cycling or anything else.
Ideal target market would be fairly decent amateur runners (e.g. sub 3 hour marathoners) who already know quite a bit about training but don't have a coach and are not good enough to be pro and have a full team doing this stuff for them. The pros have awesome tools but sadly most are not available for us mere mortals BUT I can build some of them! Example features:
1. Personalised "adjusted speed" models. The Strava GAP model doesn't fit very well for me and many others, so I've made my own personalised model, which gets updated each week. If you get better or worse at running up hills, the model adjusts to take that into account. The idea is not to provide a physiologically correct model but a performance-based one.
2. I'm trying to do the same for surface types, heat and humidity as well, although those models are not personalised. I'll get to wind later on, as it's much more complicated than the others. The idea is to have an accurate representation of "effort pace", which you can use as an input to performance models.
3. Using adjusted pace data, I will offer a pace/duration model to estimate critical speed/LT1/LT2/VO2max, and this model forms the basis of tracking progress over time. Clearly most training won't be all-out efforts, so I also estimate race performances based upon current fitness. E.g. if you ran X speed for Y time at a sub-maximal effort, then you can estimate what a maximal effort would be based upon the remaining aerobic and anaerobic power. From reading the sports science literature, this is the most advanced way to track performance at the moment. The actual model I use is called an omniduration model.
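For context, the classic two-parameter critical speed model is the simplest version of this kind of pace/duration modelling; the omniduration model goes well beyond it, but this sketch (with invented effort numbers) shows the basic shape of the problem:

```python
# Two maximal efforts: (duration_s, distance_m). Hypothetical values.
t1, d1 = 180.0, 1000.0   # ~3 minute all-out effort
t2, d2 = 720.0, 3400.0   # ~12 minute all-out effort

# Hyperbolic model: d = CS * t + D', so two efforts pin down both parameters.
cs = (d2 - d1) / (t2 - t1)   # critical speed, m/s
d_prime = d1 - cs * t1       # finite work capacity above CS, metres

def predicted_distance(t: float) -> float:
    """Distance coverable in t seconds of maximal effort under this model."""
    return cs * t + d_prime
```

The sub-maximal estimation described above is essentially asking the inverse question: given a non-maximal (t, d) point and an estimate of the remaining anaerobic capacity, where would the maximal curve sit?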
4. I have also built some other models, e.g. Daniels' running formula, which can be used, but I don't find them to be as useful as the omniduration model.
5. I'm also trying to model how a workout or training session will affect your fitness: whether it has a base/aerobic, threshold, VO2max or anaerobic effect. Then the idea would be to look at future training performance to assess whether the model was correct. You can then assess which types of training you respond best to, as well as which types of sessions you need to get the performance gains required for your next race.
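One well-known starting point in the literature for this kind of training-response modelling is the Banister fitness/fatigue impulse-response model. This is purely an illustrative sketch with made-up loads and constants, not the model I actually use:

```python
import math

# Hypothetical daily training loads (arbitrary TRIMP-like units).
loads = [50, 0, 80, 30, 0, 60, 0]

# Banister-style model: each load adds to a slowly-decaying "fitness" pool
# and a quickly-decaying "fatigue" pool; predicted performance change is a
# weighted difference of the two.
TAU_FITNESS, TAU_FATIGUE = 42.0, 7.0   # decay time constants, days
K1, K2 = 1.0, 2.0                       # illustrative weightings

fitness = fatigue = 0.0
for w in loads:
    fitness = fitness * math.exp(-1 / TAU_FITNESS) + w
    fatigue = fatigue * math.exp(-1 / TAU_FATIGUE) + w

performance_delta = K1 * fitness - K2 * fatigue
```

Fitting the time constants and weightings per athlete against subsequent performance data is exactly the "was the model correct?" check described above.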
6. Specific race time predictor. Most platforms offer a single-figure prediction for a distance, but I want to offer specific race predictions which take the course and weather into account. The model will give you splits based on all of this.
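The single-figure predictions most platforms offer are typically Riegel-style extrapolations; this baseline (with an example 10k time, not real data) is the kind of thing a course- and weather-aware model would then refine into splits:

```python
# Riegel's endurance formula: t2 = t1 * (d2 / d1) ** k, with k ~ 1.06.
def riegel_predict(known_dist_km: float, known_time_s: float,
                   target_dist_km: float, exponent: float = 1.06) -> float:
    """Predict a race time at target_dist_km from one known performance."""
    return known_time_s * (target_dist_km / known_dist_km) ** exponent

# Hypothetical: a 40:00 10k extrapolated to the marathon distance.
marathon_s = riegel_predict(10.0, 2400.0, 42.195)
```

A course-specific predictor would replace the single exponent with per-split adjustments for gradient, surface and forecast weather.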
7. Cohort adjusted performance models. How are you tracking against people your age? But more importantly, how are you tracking against people doing a similar volume and type of training to you? Are you improving at a similar rate?
There's a tonne of other stuff I can add but I'm going to keep it simple and focus on performance modelling for now because no-one seems to offer any decent tools around this at the moment.
If anyone found this interesting then I'd love to hear any feedback - let me know. Cheers!
I am also working on the last few remaining issues of Hyvector, of which some are surprisingly difficult to solve and AI unfortunately cannot help me a lot.
for (int i = 0; i < 10000; ++i)
    renderWindow.draw(sf::Sprite{/* ... */});

- Upstream SFML: 10000 draw calls (!)
- My fork: 1 draw call
This (opinionated) fork of SFML also supports many other changes:
- Modern OpenGL and first-class support for Emscripten
- Batching system to render 500k+ objects in one draw call
- New audio API supporting multiple simultaneous devices
- Enhanced API safety at compile-time
- Flexible design approach over strict OOP principles
- Built-in SFML::ImGui module
- Lightning-fast compilation time
- Minimal run-time debug mode overhead
- Uses SDL3 instead of bespoke platform-dependent code
It is temporarily named VRSFML (https://github.com/vittorioromeo/VRSFML) until I officially release it.
You can read about the library and its design principles in this article: https://www.vittorioromeo.com/index/blog/vrsfml.html
You can read about the batching system in this article: https://www.vittorioromeo.com/index/blog/vrsfml2.html
You can find the source code here: https://github.com/vittorioromeo/VRSFML
You can try out the interactive demos online in your browser here: https://vittorioromeo.github.io/VRSFML_HTML5_Examples/
The target audience is mostly developers familiar with SFML who are looking for a library very similar in style but offering more power and flexibility. Upstream SFML remains more suitable for complete beginners.
I have used this fork to create and release my second commercial game, BubbleByte. It's open-source (https://github.com/vittorioromeo/VRSFML/tree/bubble_idle) and available now on Steam: https://store.steampowered.com/app/3499760/BubbleByte/
BubbleByte is a laid-back incremental game that mixes clicker, idle, automation, and a hint of tower defense, all inspired by my cat Byte’s fascination with soap bubbles.
A trailer is available here: https://www.youtube.com/watch?v=Db_zp66OHIU
streamlining the house to save money and sanity when we both start working and/or going to school
and starting to seriously use a simple phone book
and mentally preparing for rent to go up or a car issue by causing controlled chaos and finding new ways to calm down as a family with as little money and energy as possible
By watching old movies from Movie Madness, we're gaining insights into the origins of the items we use today. The futuristic visions of the sci-fi pioneers are serving as a Wikipedia of sorts, helping us break free from generational curses and regain a sense of control.
It helps because, understandably, not being employed makes us feel ultra vulnerable, and that is not only normal but exactly what the tech cults thrived on. But it is insane to take out our nerves on each other and/or the animals constantly. And being on mass amounts of pharmaceuticals, or heavy drinking, is no walk in the park either in terms of ROI on keeping the embers of love alive and growing old. But Gandalf/Sith Lord granddaughter redemption, President-signed letter, die at 102 years old happy: clearly an attainable trajectory.
As my upcoming birthday approaches, I can't help but feel that it might be the most special one yet. Despite the tears, anger, and losses, I have some reality-based feeling that it is gonna be Hakuna Matata. SophiaG20
While zero jobs I see and know about are for me, I'm content knowing that my brain's imagination, creativity, and curiosity machine is still growing at 36. And I'm feeling badass.
I read an article from Wired about Patricia Moore, and honestly I'm thinking about empathy, and about the need for tech to be there for the ones like my mom: freaking out, having plastic surgery, going overboard on Facebook, and panicking over menopause because she's 20 years older than when most of the women in our direct family died.
I am inspired to find a real way to a home we actually own, a home we feel we have worked for and earned. I am also motivated to soften the blows that life inevitably brings, such as possible allergies from eventual menopause or the loss of a loved one. I see myself working for me, for current women, and for the women who are girls right now.
So tech will be more on their side, not telling them to worry about the outside while the inside rots, or worse, while something is off and no one cares or listens.
I am navigating these challenges newly on Medicare (newly disabled) and still finding ways to get back to the default feeling of being the fierce queen I was when I was 5 years old with my grandpa.
But I'm also letting the mind wander so that others can be the fierce queens that they are, because that is how my grandfather and his team made things after he was a POW in the Holocaust, and he would want it for me and my badass mothers. Not to follow in what he did, but to find my own path for my own tribes, with our own ways.
It feels like a rebirth of some kind for the trillionth time, but this time sober, with a family (who loves me and whom I chose) and fewer toxic companies in my subscriptions and emails (because I nuked the old ecosystems entirely, which is making taxes pretty astonishing), like the intro to Hackers. Still, I killed myself online on purpose, crying and laughing, saying: don't threaten me with a good time. On a Pixel and a Samsung phone and an Apple phone, while Microsoft and all of them were like, NOOOOOOOOOOO!! ~ 20th Nov 2024. It feels like that was a sinister, evil, evil cult, but damn, reality tastes crisp, and there is more to chop to make it to 102. The vibes are still fresh from the dozens upon dozens of interesting 100+ year-olds I took care of when I was 14-22 years old in Alaska, and I'm coming to terms with the fact that my grandpa, who I was two peas in a pod with, was one of the architects of DARPA and one of the architects of NASA, and that is fricken cool.
But now it is time to start pulling in resources for this growing family instead of being salty about time wasted in cults, moving forward like my father's and grandfather's visionaries. I am finding a way forward by choosing not to, and by withholding this visionary raw energy from DARPA and NASA, because in the end, all the promises they gave my grandpa did not happen. At. All.
And I'm being cool with the fact that I will learn how to live life without ADHD meds, even though they've been an option since I was 5 years old. But like the olds who came off Prozac recently, I want to start life anew without that stuff. 15 years is enough to say that shit won't get me to 102 years old comfortably, with the family and loved ones knowing I'll be fine if 5-20 terrible, terrible things happen back to back again.
love this.
People hate AI-generated content, but the quality is actually good and Google likes it.