131 points by zxt_tzx a day ago | 12 comments
  • kevmo314 19 hours ago
    It's somewhat ironic that the author advocates for keeping it simple and using pgvector but then buries a ton of complexity with an API server, auth server, Cloudflare workers, and durable objects. Especially given

    > Supabase is easily the most expensive part of my stack (at $200/month, if we ran it in XL, i.e. the lowest tier with 4-core CPU)

    That could get you a pretty decent VPS and allow you to colocate everything with less complexity. This is exemplified in some of the gotchas, like

    > Cloudflare Workers demand an entirely different pattern, even compared to other serverless runtimes like Lambda

    If I'm hacking something together, learning an entirely different pattern for some third-party service is the last thing I want to do.

    All that being said though, maybe all it would've done is prolong the inevitable death due to the product gap the author concludes with.

    • ljm 35 minutes ago
      Not speaking for OP’s experience but I suppose that you might default to all this fancy serverless edge worker stuff if you learned how to code on their (usually generous) free-tier plans, or they were the only things you dealt with at work.

      Meanwhile setting up a little VPS box would come more naturally if you learned in the era of the LAMP stack and got your hands dirty with Linux.

      In fact I wonder if for some people that’s made worse by the tendency to split frontend and backend web development into completely separate disciplines when originally you did the whole thing.

    • zxt_tzx 13 hours ago
      Totally fair point, thanks for taking the time to read through it! I didn't want to use a VPS and then have to switch to something else if the product really worked, but I guess that rhymes with premature optimization.

      Some other clarifications:

      - I was also surprised with how expensive Supabase turned out to be and only got there because I was trying to sync very big repos ahead of time. I could see an alternative product where the cost here would be minimal too

      - I did see this project as an opportunity to try out Cloudflare. As mentioned in the post, as a full stack TypeScript developer, I thought Cloudflare could be a good fit and I still really want it to succeed as a cloud platform

      - deploying two separate servers (API and auth) is actually simpler than it sounds, since each is a Cloudflare Worker! Will try to open source this project so this is clearer

      - the durable objects rate limiter was wholly experimental and didn't make it into production

      > All that being said though, maybe all it would've done is prolong the inevitable death due to the product gap the author concludes with.

      Very true :(

      • android521 12 hours ago
        I am using a VPS and it is dead simple and cheap. If my projects actually gained traction, switching from a VPS to more scalable infra would not be a big challenge. The biggest challenge is to find PMF as fast and as efficiently as possible.
  • zxt_tzx a day ago
    Author here. Over the last few months, I have built and launched a free semantic search tool for GitHub called SemHub (https://semhub.dev/). In this blog post, I share what I’ve learned and why I’ve failed, so that other builders can learn from my experience. This blog post runs long and I have sign-posted each section. I have marked the sections that I consider particularly insightful with an asterisk (*).

    I have also summarized my key lessons here:

    1. Default to pgvector, avoid premature optimization.

    2. You probably can get away with shorter embeddings if you’re using Matryoshka embedding models.

    3. Filtering with vector search may be harder than you expect.

    4. If you love full stack TypeScript and use AWS, you’ll love SST. One day, I hope I can recommend Cloudflare in equally strong terms too.

    5. Building is only half the battle. You have to solve a big enough problem and meet your users where they’re at.
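    To make lesson 2 concrete, here is a rough sketch of how Matryoshka-style truncation works (not SemHub's actual code; the toy 8-dim vectors stand in for real 1536-dim model output): keep only the first k dimensions, re-normalize, and cosine similarity stays usable.

```python
import math

def truncate_and_normalize(vec, dims):
    # Matryoshka-trained models front-load information, so keeping only
    # the first `dims` components (then re-normalizing to unit length)
    # preserves most of the similarity signal at a fraction of the storage.
    head = vec[:dims]
    norm = math.sqrt(sum(x * x for x in head))
    return [x / norm for x in head]

def cosine(a, b):
    # Both inputs are unit-length, so the dot product is the cosine similarity.
    return sum(x * y for x, y in zip(a, b))

# Toy 8-dim "embeddings" standing in for real model output.
doc = truncate_and_normalize([0.9, 0.1, 0.3, 0.2, 0.05, 0.01, 0.02, 0.0], 4)
query = truncate_and_normalize([0.8, 0.2, 0.25, 0.1, 0.1, 0.0, 0.0, 0.01], 4)
print(round(cosine(doc, query), 3))
```

    Note this only pays off when the model was actually trained with Matryoshka representation learning; truncating an ordinary embedding degrades recall much faster.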

    • gfody 15 hours ago
      it's weird you consider this a failure. you spent a few months and learned how to work with embedding models to build an efficient search. the fact that your search works well is a successful outcome. if your goal was to turn a few months' effort into a thriving business that's never going to happen period - it only seems possible because when it does happen for people we completely discount the luck factor.

      if you want to turn your search into a business now that's a new and different effort, mostly marketing and stuff that most self respecting engineers give zero shits about, but if that's your real goal don't call it a failure yet because you haven't even tried.

      • zxt_tzx 13 hours ago
        > it's weird you consider this a failure. you spent a few months and learned how to work with embedding models to build an efficient search. the fact that your search works well is a successful outcome.

        Thank you for your encouragement! I take your point that it was not a technical failure, but I think it's still a product failure in the sense that SemHub was not solving a big enough pain point for sufficiently many people.

        > if you want to turn your search into a business now that's a new and different effort, mostly marketing and stuff that most self respecting engineers give zero shits about, but if that's your real goal don't call it a failure yet because you haven't even tried.

        Haha, to be honest, my goal was even more modest: SemHub is intended to be a free tool for people to use, and we don't intend to monetize it. I also did try to market it (DMing people, Show HN), but the initial users who tried it did not stick around.

        Sure, I could've marketed SemHub more, but I think the best ideas carry within themselves a certain virality and I don't think this is it.

    • smarx007 20 hours ago
      Hi, thanks for building a great tool and a great write-up! I was trying to add a number of repos under oslc/, oslc-op/, and eclipse-lyo/* orgs but no joy - internal server error. Hopefully, you will reconsider shutting down the project (just heard about it and am quite excited)!

      I think a project like yours is going to be helpful to OSS library maintainers to see which features are used in downstream projects and which have issues. Especially, as in my case, when the project attempts to advance an open standard and just checking issues in the main repo will not give you the full picture. For this use case, I deployed my own instance to index all OSS repos implementing OSLC REST or using our Lyo SDK - https://oslc-sourcebot.berezovskyi.me/ . I think your tool is great in complementing the code search.

      • zxt_tzx 13 hours ago
        Ohh apologies, I think there was a bug that led to the Internal Server Error, please try again, I _think_ it should be working now!

        > I think a project like yours is going to be helpful to OSS library maintainers to see which features are used in downstream projects and which have issues.

        That was indeed the original motivation! Will see if I can convince Ammar to reconsider shutting down the project, but no promises

        > For this use case, I deployed my own instance to index all OSS repos implementing OSLC REST or using our Lyo SDK

        Ohh, in case it's not clear from the UI, you could create an account and index your own "collection" of repos and search from within that interface. I had originally wanted to build out this "collection" concept a lot more (e.g. mixing private and public repos), but I thought it was more important to see if there's traction for the public search idea at all

    • fulafel a day ago
      SST: https://github.com/sst/sst - vaguely similar to CDK but can also manage some non-AWS resources and seems TypeScript-only
      • e12e 15 hours ago
        Apparently they started on top of CDK, then migrated to Pulumi, adding support for Terraform providers.

        Looks like one of the more interesting deploy toolkits I've seen in a while.

    • romanhn 15 hours ago
      Thanks for posting this, very timely as I'm also playing around with pgvector for semantic search. I saw that you ended up trimming inputs longer than 8K tokens. Have you looked into chunking (breaking input into smaller chunks and doing vector search on the chunks)? Embedding models I'm playing with have a max of 512 tokens, so chunking is pretty much a must. Choosing a chunking strategy seems to be a deep rabbit hole of its own.
      • zxt_tzx 13 hours ago
        > Have you looked into chunking (breaking input into smaller chunks and doing vector search on the chunks)?

        Ohh, I had not seriously considered this until reading your comment. I could have multiple embeddings per issue and search across those embeddings; if the same issue is matched multiple times, I would probably take the strongest match and dedupe it.

        I could create embeddings for comments too and search across those.

        Thanks for the suggestion, would be a good thing to try!

        > Choosing a chunking strategy seems to be a deep rabbit hole of its own.

        Yes this is true. In my case, I think the metadata fields like Title and Labels are probably doing a lot of the work (which would be duplicated across chunks?) and, within an issue body, off the top of my head, I can't see any intuitive ways to chunk it.

        I have heard that for standard RAG, chunking goes a surprisingly long way!
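        For what it's worth, the chunk-then-dedupe idea could look roughly like this (purely illustrative, with whitespace "tokens" standing in for a real tokenizer, and overlap assumed smaller than the window size):

```python
def chunk(text, max_tokens=512, overlap=64):
    # Split text into overlapping windows of whitespace "tokens"
    # (a real implementation would count model tokens instead).
    tokens = text.split()
    step = max_tokens - overlap
    return [" ".join(tokens[i:i + max_tokens])
            for i in range(0, max(len(tokens) - overlap, 1), step)]

def best_match_per_issue(chunk_hits):
    # chunk_hits: (issue_id, score) pairs, one per matched chunk.
    # Dedupe by keeping the strongest chunk score for each issue.
    best = {}
    for issue_id, score in chunk_hits:
        best[issue_id] = max(score, best.get(issue_id, float("-inf")))
    return sorted(best.items(), key=lambda kv: kv[1], reverse=True)

hits = [("repo#12", 0.81), ("repo#7", 0.74), ("repo#12", 0.66)]
print(best_match_per_issue(hits))  # repo#12 kept once, at its best score
```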

    • vaidhy a day ago
      Having built a failed semantic search engine for life sciences (bioask, when it existed), I think the last point should be the first. Not finding product-market fit quickly is what killed mine.
    • niel a day ago
      Thanks for writing this up!

      > Filtering with vector search may be harder than you expect.

      I've only ever used it for a small proof of concept, but Qdrant is great at categorical filtering with HNSW.

      https://qdrant.tech/articles/filtrable-hnsw/

      • zxt_tzx 12 hours ago
        Thanks for sharing! Do you have more details to share, e.g. did you just have a vector db, or did you have a main db as well?

        In my research, Qdrant was also the top contender and I even created an account with them, but the need to sync two dbs put me off

    • wrs a day ago
      Fantastic writeup — thank you for taking the time to do this!
      • zxt_tzx 12 hours ago
        I'm glad you found it helpful :)
    • With 5, do you mean promoting the app? It is by far the biggest problem, yes. In many cases even bigger than building the app itself.
  • whakim 9 hours ago
    I was the first employee at a company which uses RAG (Halcyon), and I’ve been working through issues with various vector store providers for almost two years now. We’ve gone from tens of thousands to billions of embeddings in that timeframe - so I feel qualified to at least offer my opinion on the problem.

    I agree that starting with pgvector is wise. It’s the thing you already have (postgres), and it works pretty well out of the box. But there are definitely gotchas that don’t usually get mentioned. Although the pgvector filtering story is better than it was a year ago, high-cardinality filters still feel like a bit of an afterthought (low-cardinality filters can be solved with partial indices even at scale). You should also be aware that the workload for ANN is pretty different from normal web-app stuff, so you probably want your embeddings in a separate, differently-optimized database. And if you do lots of updates or deletes, you’ll need to make sure autovacuum is properly tuned or else index performance will suffer. Finally, building HNSW indices in Postgres is still extremely slow (even with parallel index builds), so it is difficult to experiment with index hyperparameters at scale.

    Dedicated vector stores often solve some of these problems but create others. Index builds are often much faster, and you’re working at a higher level (for better or worse) so there’s less time spent on tuning indices or database configurations. But (as mentioned in other comments) keeping your data in sync is a huge issue. Even if updates and deletes aren’t a big part of your workload, figuring out what metadata to index alongside your vectors can be challenging. Adding new pieces of metadata may involve rebuilding the entire index, so you need a robust way to move terabytes of data reasonably quickly. The other challenge I’ve found is that filtering is often the “special sauce” that vector store providers bring to the table, so it’s pretty difficult to reason about the performance and recall of various types of filters.

    • ichiwells an hour ago
      > Finally, building HNSW indices in Postgres is still extremely slow (even with parallel index builds), so it is difficult to experiment with index hyperparameters at scale

      For anyone coming across this without much experience here, for building these indexes in pgvector it makes a massive difference to increase your maintenance memory above the default. Either as a separate db like whakim mentioned, or for specific maintenance periods depending on your use case.

      ```
      SHOW maintenance_work_mem;
      SET maintenance_work_mem = X;
      ```

      In one of our semantic search use cases, we control the ingestion of the searchable content (laws, basically) so we can control when and how we choose to index it. And then I've set up classic relational db indexing (in addition to vector indexing) for our quite predictable query patterns.

      For us that means our actual semantic db query takes about 10ms.

      Starting from 10s of millions of entries, filtered to ~50k (jurisdictionally, in our case) relevant ones and then performing vector similarity search with topK/limit.

      Built into our ORM and zero round-trip latency to Pinecone or syncing issues.

      EDIT: I imagine whakim has more experience than me and YMMV, just sharing lesson learned. Even with higher maintenance mem the index building is super slow for HNSW

    • zxt_tzx 5 hours ago
      Thank you for the comment! Compared to you, I have barely scratched the surface of this quite complex domain, so I would love to get more of your input.

      > building HNSW indices in Postgres is still extremely slow (even with parallel index builds), so it is difficult to experiment with index hyperparameters at scale.

      Yes, I experienced this too. I went from 1536 to 256 dimensions, and I did not try as many values as I'd have liked because spinning up a new database and recreating the embeddings simply took too long. I’m glad it worked well enough for me, but without a quick way to experiment with these hyperparameters, who knows whether I’ve struck the tradeoff at the right place.

      Someone on Twitter reached out and pointed out one could quantize the embeddings to bit vectors and search with Hamming distance — supposedly the performance hit is very negligible, especially if you add a quick rescore step: https://huggingface.co/blog/embedding-quantization
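      The idea in that post, roughly (an illustrative sketch, not the implementation the link describes): threshold each dimension to a bit, use cheap Hamming distance for the first pass, then rescore a handful of top candidates with the original float vectors.

```python
def to_bits(vec):
    # Binary-quantize: one bit per dimension (sign threshold), packed into an int.
    bits = 0
    for x in vec:
        bits = (bits << 1) | (1 if x > 0 else 0)
    return bits

def hamming(a, b):
    # Popcount of the XOR gives the number of differing bits.
    return bin(a ^ b).count("1")

def search(query_vec, corpus, rescore_k=2):
    # Cheap first pass on bit vectors, then rescore the top-k candidates
    # with full-precision dot products to recover most of the lost accuracy.
    qbits = to_bits(query_vec)
    coarse = sorted(corpus, key=lambda item: hamming(qbits, to_bits(item[1])))
    top = coarse[:rescore_k]
    return max(top, key=lambda item: sum(p * q for p, q in zip(item[1], query_vec)))[0]

# Toy corpus: (label, embedding) pairs with made-up 4-dim vectors.
corpus = [
    ("memory leak", [0.9, -0.1, 0.4, -0.3]),
    ("rate limiting", [-0.2, 0.8, -0.5, 0.6]),
    ("index OOM", [0.8, -0.2, 0.5, -0.1]),
]
print(search([0.85, -0.15, 0.45, -0.2], corpus))
```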

      > But (as mentioned in other comments) keeping your data in sync is a huge issue.

      Curious if you have any good solutions in this respect.

      > The other challenge I’ve found is that filtering is often the “special sauce” that vector store providers bring to the table, so it’s pretty difficult to reason about the performance and recall of various types of filters.

      I realize they market heavily on this, but for open source databases, wouldn't the fact that you can see the source code make it easier to reason about this? Or is your point that their implementations here are all custom and require much more specialized knowledge to evaluate effectively?

    • gregw134 8 hours ago
      What would you recommend for billions of embeddings?
  • nchmy 17 hours ago
    This seems pretty similar to something that the ManticoreSearch team released a year ago

    https://manticoresearch.com/blog/manticoresearch-github-issu...

    You can index any GH repo and then search it with vector, keyword, hybrid and more. There's faceting and anything else you could ever want. And it is astoundingly fast - even vector search.

    Here's the direct link to the demo https://github.manticoresearch.com/

  • scottyeager 10 hours ago
    > * No way to search across multiple repos within GitHub.
    > * No way to easily see open and closed issues in the same view.

    I don't quite understand, because searching issues across all of Github and also within orgs is already supported. Those searches show both open and closed issues by default.

    For searches on a single repo, just removing the "state" filter entirely from the query also shows open and closed issues.

    I do think that semantic search on issues is a cool idea and the semantic/fuzzy aspect is probably the biggest motivator for the project. It just felt funny to see stuff that Github can actually already do listed at the top of motivating issues.

  • johnfn 20 hours ago
    That was a great write up.

    If you don't mind me giving you some unsolicited product feedback: I think SemHub didn't do well because it's unclear what problem it's actually solving. Who actually wants your product? What's the use case? I use GitHub issues all the time, and I can't think of a reason I'd want semhub. If I need to find a particular issue on, say, TypeScript, I'll just google "github typescript issue [description]" and pull up the correct thing 9 times out of 10. And that's already a pretty rare percentage of the time I spend on GitHub.

    • Noumenon72 an hour ago
      https://manticoresearch.com/blog/github-semantic-search/ gives some good examples where you get more with semantic than keyword search:

        * Search for "memory leak", get "index out of memory"
        * Search "API rate limits", get “throttling”, “250 results” limit, and “rate limiting”
        * Search issues for "user authentication" to see whether anyone has submitted your feature request
        * Search for “SQL injection” to get “database infiltration” or “SQL vulnerability”
    • zxt_tzx 13 hours ago
      Thanks for the feedback, to be honest, my own experience is actually very similar to yours.

      The original pain point probably only exists for a small minority of open source maintainers who manage multiple repos and actually search across them regularly. Most devs are probably like you and me, and the mediocre GitHub search experience is more than compensated for by using Google.

      In its current iteration, it's quite hard to get regular devs to change their search behavior, and even for those who experience this pain point, it probably isn't acute enough for them to do so.

      If I continue to work on this, I would want to (1) solve a bigger + more frequent pain point; (2) build something that requires a smaller change in user behavior.

  • serjester 19 hours ago
    Great write up, especially agree on pgvector with small (ideally fine tuned) embeddings. There’s so much complexity that comes with keeping your vector db in sync with your main db (especially once you start filtering with metadata). 90% of gen AI apps don’t need it.
    • zxt_tzx 12 hours ago
      > There’s so much complexity that comes with keeping your vector db in sync with your main db (especially once you start filtering with metadata)

      Ohh, do you speak from experience? I know I will likely never do this, but I'm curious how you did it. When I looked into this, I found that Airbyte has a connector for syncing the vector db with the main db, but I never bit that bullet (thankfully)

  • brian-armstrong 19 hours ago
    Am I misunderstanding what is meant by semantic code search? I thought the idea was that you run something like a parser on the repo to extract function/class/variable names and then allow searching on a more rich set of data, rather than tokenizing it like English.

    I know github kind of added this but their version falls apart still even in common languages like C++. It's not unusual for it to just completely miss cross references, even in smaller repos. A proper compiler's eye view of symbolic data would be super useful, and Github's halfway attempt can be frustratingly daft about it.

    • zxt_tzx 12 hours ago
      Ah I was doing semantic search of GitHub _issues_, not the actual code on GitHub.

      For code search, I have used grep.app, which works reasonably well

  • franky47 19 hours ago
    I started a quick weekend project to do just that today: index my OSS project's [1] issues & discussions, so I can RAG-ask it to find references when I feel like I'm repeating myself (in "see issue/PR/discussion #123", finding the 123 is the hardest part).

    This article might be super helpful, thanks! I don't intend to make a product out of it though, so I can cut a lot of corners, like using a PAT for auth and running everything locally.

    [1] https://github.com/47ng/nuqs

    • zxt_tzx 12 hours ago
      After this failed experience with SemHub, I am actually thinking of building something like this; open source maintainers like you are definitely the ICP! (nuqs seems really cool btw, storing state in the URL param is definitely the way to go)

      To elaborate, I was thinking of:

      - running a cron that checks repos every X minutes

      - for every new issue someone has opened, I will run an agent that (1) checks e.g. SemHub to look for similar issues; (2) checks the project's Discord server or Slack channel to see if anyone has raised something similar; (3) run a general search

      - use LLMs to compose a helpful reply pointing the OP to that other issue/Discord discussion etc.

      From other OSS maintainers, I've heard that being able to reliably identify duplicates would be a huge plus. Does this sound like something you'd be interested to try? Let me know how I can reach you if/when I have built something like this!

      I am personally quite annoyed by all the AI slop being created on social media and even GitHub PRs and would love to use the same technology to do something pro-social.
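      In pseudo-real code, the duplicate-check step might look like this (everything here is hypothetical: the threshold, the `search_similar` function, and the stubbed results are made up for illustration):

```python
SIMILARITY_THRESHOLD = 0.85  # hypothetical cutoff; would need tuning on real data

def triage_new_issue(issue_title, search_similar):
    # `search_similar` stands in for a semantic search over existing issues
    # and Discord/Slack threads; it returns (url, score) pairs.
    candidates = [c for c in search_similar(issue_title) if c[1] >= SIMILARITY_THRESHOLD]
    if not candidates:
        return None  # nothing confident enough; stay silent rather than post slop
    url, score = max(candidates, key=lambda c: c[1])
    return f"This may be a duplicate of {url} (similarity {score:.2f})."

# Stubbed search results for illustration:
fake_search = lambda q: [("https://github.com/47ng/nuqs/issues/123", 0.91),
                         ("https://github.com/47ng/nuqs/issues/88", 0.42)]
print(triage_new_issue("state not syncing to URL", fake_search))
```

      The "stay silent" branch is the important design choice: only replying above a confidence threshold is what would keep this from becoming the AI slop mentioned above.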

      • franky47 3 hours ago
        While having a bot that auto-replies with "similar issues" pointers might make sense at a large scale (to relieve maintainers), I usually prefer to do this manually at my current scale, knowing that there's one particular instance where I pointed someone in a given direction, and want to either reuse/modify a code example block, or stitch together semantically unrelated but relevant comments & discussions together.

        You might want to talk to Jovi [1] about that, he's doing something very similar.

        [1] https://bsky.app/profile/jovidecroock.com/post/3lh6hkcxnqc2v

  • gregorvand 8 hours ago
    Hi Warren, great article. Would love to connect on what we're doing (also in Singapore). Please drop me a message gregor@vand.hk
  • nosefrog 19 hours ago
    > When using Cloudflare Workers as an API server, I have experienced requests that would “fail silently” and leave a “hanging connection”, with no error thrown, no log emitted, and a frontend that is just loading. Honestly, no idea what’s up with this.

    Yikes, these sorts of errors are so hard to debug. Especially if you don't have a real server to log into to get pcaps.

    • viraptor 19 hours ago
      Cloudflare workers are not amazing in terms of communicating problems. The errors you get can also be out of sync with the docs and the support doesn't have access to poke at your issues directly. Together with the custom runtime and outdated TS types... it can be a very frustrating DX.
      • sebmellen 18 hours ago
        We’ve tried, but it’s hard to imagine any real production system using Cloudflare Workers.
        • viraptor 16 hours ago
          I've done it, but: enterprise support helps a lot, multiple extremely annoying tickets were required, I did find multiple issues - some fixed, some worked around. And overall, the fewer people use CF (or another provider of their size) the better.
          • zxt_tzx 12 hours ago
            > And overall, the fewer people use CF (or another provider of their size) the better.

            I understand your sentiment, but I vehemently disagree.

            The cloud provider space has rapidly become an oligopoly and Cloudflare is one of the few new entrants that (1) has sufficient scale to compete with the incumbents; (2) has new ideas that the incumbents cannot easily match (region earth, durable objects etc.).

            For most production workloads, I would not even consider the newer cloud providers, but I sincerely meant it when I said I hope Cloudflare will succeed. They've also been very responsive to the feedback raised in the blogpost when I DM-ed them.

            (On a side note re: difficulty for newcomers in this market, I used to be part of a team that would run e.g. staging and testing environments on a new serverless db provider, but would run prod on AWS Aurora. In retrospect, this did not make much sense either as you want your environments to be as similar as possible, which means new cloud providers have an even tougher time getting started.)

  • dangapeass 20 hours ago
    [dead]