> Supabase was easily the most expensive part of my stack (at $200/month, if we ran it in XL, i.e. the lowest tier with a 4-core CPU)
That could get you a pretty decent VPS and allow you to colocate everything with less complexity. This is exemplified in some of the gotchas, like
> Cloudflare Workers demand an entirely different pattern, even compared to other serverless runtimes like Lambda
If I'm hacking something together, learning an entirely different pattern for some third-party service is the last thing I want to do.
All that being said though, maybe all it would've done is prolong the inevitable death due to the product gap the author concludes with.
Meanwhile setting up a little VPS box would come more naturally if you learned in the era of the LAMP stack and got your hands dirty with Linux.
In fact I wonder if for some people that’s made worse by the tendency to split frontend and backend web development into completely separate disciplines when originally you did the whole thing.
Some other clarifications:
- I was also surprised by how expensive Supabase turned out to be, and only got there because I was trying to sync very big repos ahead of time. I could see an alternative product where this cost would be minimal too
- I did see this project as an opportunity to try out Cloudflare. As mentioned in the post, as a full stack TypeScript developer, I thought Cloudflare could be a good fit, and I still really want it to succeed as a cloud platform
- deploying separate API and auth servers is actually simpler than it sounds, since each is a Cloudflare Worker! Will try to open source this project so this is clearer
- the durable objects rate limiter was wholly experimental and didn't make it into production
> All that being said though, maybe all it would've done is prolong the inevitable death due to the product gap the author concludes with.
Very true :(
I have also summarized my key lessons here:
1. Default to pgvector; avoid premature optimization (a minimal sketch follows this list).
2. You probably can get away with shorter embeddings if you’re using Matryoshka embedding models.
3. Filtering with vector search may be harder than you expect.
4. If you love full stack TypeScript and use AWS, you’ll love SST. One day, I hope I can recommend Cloudflare in equally strong terms too.
5. Building is only half the battle. You have to solve a big enough problem and meet your users where they’re at.
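To make lessons 1 and 2 concrete, here is a minimal pgvector sketch. The table and column names are made up, and the 256-dimension column assumes you truncate a Matryoshka embedding before inserting:

```
-- Minimal pgvector setup (illustrative names; assumes the pgvector extension is available)
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE issues (
    id        bigserial PRIMARY KEY,
    repo      text NOT NULL,
    title     text NOT NULL,
    body      text,
    -- first 256 dimensions of a Matryoshka embedding instead of the full 1536
    embedding vector(256) NOT NULL
);

-- HNSW index for cosine distance; m and ef_construction shown at their defaults
CREATE INDEX issues_embedding_idx ON issues
    USING hnsw (embedding vector_cosine_ops) WITH (m = 16, ef_construction = 64);

-- Nearest-neighbour query; $1 is the (equally truncated) query embedding
SELECT id, repo, title
FROM issues
ORDER BY embedding <=> $1
LIMIT 10;
```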
If you want to turn your search into a business, now that's a new and different effort, mostly marketing and stuff that most self-respecting engineers give zero shits about. But if that's your real goal, don't call it a failure yet, because you haven't even tried.
Thank you for your encouragement! I take your point that it was not a technical failure, but I think it's still a product failure in the sense that SemHub was not solving a big enough pain point for sufficiently many people.
> If you want to turn your search into a business, now that's a new and different effort, mostly marketing and stuff that most self-respecting engineers give zero shits about. But if that's your real goal, don't call it a failure yet, because you haven't even tried.
Haha, to be honest, my goal was even more modest: SemHub is intended to be a free tool for people to use; we don't intend to monetize it. I also did try to market it (DMing people, Show HN), but the initial users who tried it did not stick around.
Sure, I could've marketed SemHub more, but I think the best ideas carry within themselves a certain virality and I don't think this is it.
I think a project like yours is going to be helpful to OSS library maintainers to see which features are used in downstream projects and which have issues. Especially when, as in my case, the project attempts to advance an open standard and just checking issues in the main repo will not give you the full picture. For this use case, I deployed my own instance to index all OSS repos implementing OSLC REST or using our Lyo SDK - https://oslc-sourcebot.berezovskyi.me/ . I think your tool is great in complementing the code search.
> I think a project like yours is going to be helpful to OSS library maintainers to see which features are used in downstream projects and which have issues.
That was indeed the original motivation! Will see if I can convince Ammar to reconsider shutting down the project, but no promises
> For this use case, I deployed my own instance to index all OSS repos implementing OSLC REST or using our Lyo SDK
Ohh, in case it's not clear from the UI, you could create an account and index your own "collection" of repos and search from within that interface. I had originally wanted to build out this "collection" concept a lot more (e.g. mixing private and public repos), but I thought it was more important to see if there's traction for the public search idea at all
Looks like one of the more interesting deploy toolkits I've seen in a while.
Ohh, I had not seriously considered this until reading your comment. I could have multiple embeddings per issue and search across those embeddings; if the same issue is matched multiple times, I would probably take the strongest match and dedupe it.
I could create embeddings for comments too and search across those.
Thanks for the suggestion, would be a good thing to try!
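A rough SQL sketch of what I have in mind, assuming a hypothetical issue_chunks table with one embedding per chunk: over-fetch the nearest chunks, then keep each issue's best (smallest-distance) match.

```
-- Hypothetical table: issue_chunks(issue_id bigint, chunk_text text, embedding vector(256))
SELECT issue_id, MIN(distance) AS best_distance
FROM (
    -- over-fetch nearest chunks so the ANN index can be used
    SELECT issue_id, embedding <=> $1 AS distance
    FROM issue_chunks
    ORDER BY embedding <=> $1
    LIMIT 100
) nearest
GROUP BY issue_id
ORDER BY best_distance
LIMIT 10;
```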
> Choosing a chunking strategy seems to be a deep rabbit hole of its own.
Yes, this is true. In my case, I think the metadata fields like Title and Labels are probably doing a lot of the work (and those would be duplicated across chunks?), and, within an issue body, off the top of my head, I can't see any intuitive way to chunk it.
I have heard that for standard RAG, chunking goes a surprisingly long way!
> Filtering with vector search may be harder than you expect.
I've only ever used it for a small proof of concept, but Qdrant is great at categorical filtering with HNSW.
In my research, Qdrant was also the top contender and I even created an account with them, but the need to sync two DBs put me off.
I agree that starting with pgvector is wise. It’s the thing you already have (postgres), and it works pretty well out of the box. But there are definitely gotchas that don’t usually get mentioned. Although the pgvector filtering story is better than it was a year ago, high-cardinality filters still feel like a bit of an afterthought (low-cardinality filters can be solved with partial indices even at scale). You should also be aware that the workload for ANN is pretty different from normal web-app stuff, so you probably want your embeddings in a separate, differently-optimized database. And if you do lots of updates or deletes, you’ll need to make sure autovacuum is properly tuned or else index performance will suffer. Finally, building HNSW indices in Postgres is still extremely slow (even with parallel index builds), so it is difficult to experiment with index hyperparameters at scale.
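To make the partial-index point concrete (hypothetical names; one index per filter value only works when the number of distinct values is small):

```
-- Partial HNSW index covering just the rows matching a low-cardinality filter
CREATE INDEX issues_open_embedding_idx ON issues
    USING hnsw (embedding vector_cosine_ops)
    WHERE state = 'open';

-- A query whose WHERE clause matches the index predicate can use it
SELECT id, title
FROM issues
WHERE state = 'open'
ORDER BY embedding <=> $1
LIMIT 10;
```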
Dedicated vector stores often solve some of these problems but create others. Index builds are often much faster, and you’re working at a higher level (for better or worse) so there’s less time spent on tuning indices or database configurations. But (as mentioned in other comments) keeping your data in sync is a huge issue. Even if updates and deletes aren’t a big part of your workload, figuring out what metadata to index alongside your vectors can be challenging. Adding new pieces of metadata may involve rebuilding the entire index, so you need a robust way to move terabytes of data reasonably quickly. The other challenge I’ve found is that filtering is often the “special sauce” that vector store providers bring to the table, so it’s pretty difficult to reason about the performance and recall of various types of filters.
For anyone coming across this without much experience: when building these indexes in pgvector, it makes a massive difference to increase maintenance_work_mem above the default, either permanently in a separate DB like whakim mentioned, or just for specific maintenance windows, depending on your use case.
```
SHOW maintenance_work_mem;
SET maintenance_work_mem = X;
```
In one of our semantic search use cases, we control the ingestion of the searchable content (laws, basically) so we can control when and how we choose to index it. And then I've set up classic relational db indexing (in addition to vector indexing) for our quite predictable query patterns.
For us that means our actual semantic db query takes about 10ms.
Starting from 10s of millions of entries, filtered to ~50k (jurisdictionally, in our case) relevant ones and then performing vector similarity search with topK/limit.
Built into our ORM, with no round trips to Pinecone and no syncing issues.
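Roughly this pattern, with hypothetical names (the B-tree index narrows the candidate set before the vector comparison):

```
-- Hypothetical schema: laws(id, jurisdiction, body, embedding vector(768))
CREATE INDEX laws_jurisdiction_idx ON laws (jurisdiction);

-- Filter first (~50k rows in our case), then rank by vector similarity
SELECT id, body
FROM laws
WHERE jurisdiction = $1
ORDER BY embedding <=> $2
LIMIT 20;
```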
EDIT: I imagine whakim has more experience than me and YMMV, just sharing lessons learned. Even with higher maintenance_work_mem, index building is still super slow for HNSW.
> building HNSW indices in Postgres is still extremely slow (even with parallel index builds), so it is difficult to experiment with index hyperparameters at scale.
Yes, I experienced this too. I went from 1536 to 256 dimensions and did not try as many values as I'd have liked, because spinning up a new database and recreating the embeddings simply took too long. I'm glad it worked well enough for me, but without a quick way to experiment with these hyperparameters, who knows whether I've struck the tradeoff at the right place.
Someone on Twitter reached out and pointed out that one could quantize the embeddings to bit vectors and search with Hamming distance; supposedly the performance hit is actually negligible, especially if you add a quick rescore step: https://huggingface.co/blog/embedding-quantization
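For anyone curious, a rough sketch of how that could look in pgvector (this assumes pgvector >= 0.7, which added binary_quantize, bit_hamming_ops and the <~> Hamming distance operator; table and column names are illustrative):

```
-- Index the binary-quantized embeddings and compare them with Hamming distance
CREATE INDEX issues_embedding_bq_idx ON issues
    USING hnsw ((binary_quantize(embedding)::bit(256)) bit_hamming_ops);

-- Over-fetch with the cheap binary comparison, then rescore with the full vectors
SELECT id, title
FROM (
    SELECT id, title, embedding
    FROM issues
    ORDER BY binary_quantize(embedding)::bit(256) <~> binary_quantize($1)
    LIMIT 100
) candidates
ORDER BY embedding <=> $1
LIMIT 10;
```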
> But (as mentioned in other comments) keeping your data in sync is a huge issue.
Curious if you have any good solutions in this respect.
> The other challenge I’ve found is that filtering is often the “special sauce” that vector store providers bring to the table, so it’s pretty difficult to reason about the performance and recall of various types of filters.
I realize they market heavily on this, but for open source databases, wouldn't the fact that you can see the source code make it easier to reason about this? Or is your point that their implementations here are all custom and require much more specialized knowledge to evaluate effectively?
https://manticoresearch.com/blog/manticoresearch-github-issu...
You can index any GH repo and then search it with vector, keyword, hybrid and more. There's faceting and anything else you could ever want. And it is astoundingly fast - even vector search.
Here's the direct link to the demo https://github.manticoresearch.com/
https://jina.ai/news/what-is-colbert-and-late-interaction-an...
http://musingsaboutlibrarianship.blogspot.com/2024/06/can-se...
I don't quite understand, because searching issues across all of GitHub and also within orgs is already supported. Those searches show both open and closed issues by default.
For searches on a single repo, just removing the "state" filter entirely from the query also shows open and closed issues.
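For example, the default query on a repo's Issues tab is something like `is:issue state:open`; dropping the `state:open` part returns open and closed issues together.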
I do think that semantic search on issues is a cool idea, and the semantic/fuzzy aspect is probably the biggest motivator for the project. It just felt funny to see stuff that GitHub can already do listed at the top of the motivating issues.
If you don't mind me giving you some unsolicited product feedback: I think SemHub didn't do well because it's unclear what problem it's actually solving. Who actually wants your product? What's the use case? I use GitHub issues all the time, and I can't think of a reason I'd want semhub. If I need to find a particular issue on, say, TypeScript, I'll just google "github typescript issue [description]" and pull up the correct thing 9 times out of 10. And that's already a pretty rare percentage of the time I spend on GitHub.
* Search for "memory leak", get "index out of memory"
* Search "API rate limits", get “throttling”, “250 results” limit, and “rate limiting”
* Search issues for "user authentication" to see whether anyone has submitted your feature request
* Search for “SQL injection” to get “database infiltration” or “SQL vulnerability”
The original pain point probably only exists for a small minority of open source maintainers who manage multiple repos and actually search across them regularly. Most devs are probably like you and me, and the mediocre GitHub search experience is more than compensated for by using Google.
In its current iteration, it's quite hard to get regular devs to change their search behaviour, and even for those who experience this pain point, it probably isn't large enough to make them switch.
If I continue to work on this, I would want to (1) solve a bigger + more frequent pain point; (2) build something that requires a smaller change in user behavior.
Ohh, do you speak from experience? I know I will likely never do this, but I'm curious how you did it. When I looked into this, I found that Airbyte has something to connect the vector DB with the main DB, but I never bit that bullet (thankfully).
I know GitHub kind of added this, but their version still falls apart even in common languages like C++. It's not unusual for it to completely miss cross references, even in smaller repos. A proper compiler's-eye view of symbolic data would be super useful, and GitHub's halfway attempt can be frustratingly daft about it.
For code search, I have used grep.app, which works reasonably well
This article might be super helpful, thanks! I don't intend to make a product out of it though, so I can cut a lot of corners, like using a PAT for auth and running everything locally.
To elaborate, I was thinking of:
- running a cron that checks repos every X minutes
- for every new issue someone has opened, I will run an agent that (1) checks e.g. SemHub to look for similar issues; (2) checks the project's Discord server or Slack channel to see if anyone has raised something similar; (3) runs a general search
- using LLMs to compose a helpful reply pointing the OP to that other issue/Discord discussion etc.
From other OSS maintainers, I've heard that being able to reliably identify duplicates would be a huge plus. Does this sound like something you'd be interested in trying? Let me know how I can reach you if/when I have built something like this!
I am personally quite annoyed by all the AI slop being created on social media and even GitHub PRs and would love to use the same technology to do something pro-social.
You might want to talk to Jovi [1] about that, he's doing something very similar.
[1] https://bsky.app/profile/jovidecroock.com/post/3lh6hkcxnqc2v
Yikes, these sorts of errors are so hard to debug. Especially if you don't have a real server to log into to get pcaps.
I understand your sentiment, but I vehemently disagree.
The cloud provider space has rapidly become an oligopoly, and Cloudflare is one of the few new entrants that (1) has sufficient scale to compete with the incumbents; (2) has new ideas that the incumbents cannot easily match (Region: Earth, durable objects, etc.).
For most production workloads, I would not even consider the newer cloud providers, but I sincerely meant it when I said I hope Cloudflare will succeed. They've also been very responsive to the feedback raised in the blogpost when I DM-ed them.
(On a side note re: difficulty for newcomers in this market, I used to be part of a team that would run e.g. staging and testing environments on a new serverless db provider, but would run prod on AWS Aurora. In retrospect, this did not make much sense either as you want your environments to be as similar as possible, which means new cloud providers have an even tougher time getting started.)