First, both strategies - the one outlined by the Neon/ParadeDB article, and the one used here - are presented as viable alternatives by the Postgres docs: https://www.postgresql.org/docs/current/textsearch-tables.ht....
Second, as the article correctly demonstrates, the problem with Postgres FTS isn't "how can I pick and optimize a single pre-defined query"; it's "how do I bring Postgres to Elastic-level performance across a wide range of real-world boolean, fuzzy, faceted, relevance-ranked, etc. queries?"
`pg_search` is designed to solve the latter problem, and the benchmarks were made to reflect that. You can always cherry-pick a query and optimize it at the expense of data duplication and complexity. The Neon/ParadeDB benchmarks contained 12 queries in total, and the benchmarks could have:
- Created composite b-tree indexes for each of the queries with boolean predicates
- Extracted all the text fields from the JSONBs, and stored and indexed them as separate columns for the queries against JSONB
But that's not realistic for many real-world use cases. `pg_search` doesn't require that - it's a simple index definition that works for a variety of "Elastic style" queries and Postgres types and doesn't ask the user to duplicate every text column.
This is what we did (sketched in SQL below):

- DB with pg_search: we created a single BM25 index

- DB without pg_search: we created all of these indexes:
  - GIN index on message (for full-text search)
  - GIN index on country (for text-based filtering)
  - B-tree indexes on severity, timestamp, and metadata->>'value' (to speed up filtering, ordering, and aggregations)
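For concreteness, here is roughly what those definitions look like. This is a sketch only: the table/column names are assumed (borrowed from elsewhere in the thread), the country index assumes pg_trgm, and the exact bm25 syntax depends on the pg_search version you run.

-- Without pg_search: one index per access pattern (assumed table/column names)
CREATE EXTENSION IF NOT EXISTS pg_trgm;  -- a GIN index on plain text needs an opclass such as gin_trgm_ops

CREATE INDEX idx_gin_logs_message ON benchmark_logs
USING GIN (to_tsvector('english', message));

CREATE INDEX idx_gin_logs_country ON benchmark_logs
USING GIN (country gin_trgm_ops);

CREATE INDEX idx_btree_logs_severity ON benchmark_logs (severity);
CREATE INDEX idx_btree_logs_timestamp ON benchmark_logs ("timestamp");
CREATE INDEX idx_btree_logs_metadata_value ON benchmark_logs ((metadata->>'value'));

-- With pg_search: a single BM25 index over the searchable fields
-- (syntax as in recent pg_search docs; check the version you run)
CREATE INDEX idx_bm25_logs ON benchmark_logs
USING bm25 (id, message, country, severity, "timestamp", metadata)
WITH (key_field = 'id');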
See the problem? You didn't create an index on the tsvector in the without-pg_search case, so you didn't compare apples to apples. TFA is all about that. Perhaps you can argue that creating a fastupdate=on index would have been the right comparison, but you didn't do that in that blog.
> You can always cherry-pick a query and optimize it at the expense of data duplication and complexity. The Neon/ParadeDB benchmarks contained 12 queries in total, and the benchmarks could have:
TFA isn't cherry-picking to show that one query could have gone faster. TFA is showing that you didn't compare apples to apples. Looking at those 12 queries, nothing screams at me that TFA's approach of storing the computed tsvector wouldn't work for them too.
Perhaps pg_search scales better and doesn't require trading off update performance for search performance, and that would be a great selling point, but why not just make that point?
> "You didn't ..."
No, they didn't. They aren't Neon and didn't do the benchmarks in the linked article. They are a Postgres maintainer.
If you actually read their comment instead of raging, you will see that they are saying that pg_search is a simple, generic index definition that makes a _variety_ of queries work with little effort, and that you can still add the additional optimisations (which are already documented - they linked to them) where needed.
Maybe I’m reading the whole thread wrong, but it looks like you are screaming at a maintainer of pg_search because someone else did a poor benchmark.
I'm shocked that the original post being referred to made this mistake. I recently implemented Postgres FTS in a personal project, and did so by just reading the Postgres documentation on FTS and following the instructions. The docs lead you through the process of creating the base unoptimized case and then optimising it, explaining the purpose of each step and why it's faster. It's really clear that that's what they're doing, and I can only assume that someone making this mistake is either doing so to intentionally misrepresent Postgres FTS, or because they haven't read the basic documentation.
The PG docs make it clear that this only affects row rechecks, so this would only affect performance on matching rows when you need to verify information not stored in the index, e.g. queries with weighted text or queries against a lossy GiST index. It's going to be use-case dependent but I would check if your queries need this before using up the additional disk space.
It is, in my mind, the single biggest remaining advantage MySQL has. I used to say that MySQL’s (really, InnoDB's) clustering index was its superpower when wielded correctly, but I’ve done some recent benchmarks, and even when designing schema to exploit a clustered index, Postgres was able to keep up in performance.
EDIT: the other thing MySQL does much better than Postgres is “just working” for people who are neither familiar with nor wish to learn RDBMS care and feeding. Contrary to what the hyperscalers will tell you, DBs are special snowflakes, they have a million knobs to turn, and they require you to know what you’re doing to some extent. Postgres especially has the problem of table bloat and txid buildup from its MVCC implementation, combined with inadequate autovacuum. I feel like the docs should scream at you to tune your autovacuum settings on a per-table basis once you get to a certain scale (not even that big; a few hundred GB on a write-heavy table will do). MySQL does not have this problem, and will happily go years on stock settings without really needing much from you. It won’t run optimally, but it’ll run. I wouldn’t say the same about Postgres.
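For reference, the per-table autovacuum tuning is just a storage-parameter change. A sketch with made-up numbers (the right values depend on the table's write rate and size):

-- Kick in after ~100k dead rows instead of the default 20% of the table, and
-- let the worker do more work per run (illustrative values, not a recommendation)
ALTER TABLE big_write_heavy_table SET (
    autovacuum_vacuum_scale_factor = 0.0,
    autovacuum_vacuum_threshold = 100000,
    autovacuum_vacuum_cost_limit = 2000
);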
I'm not sure why the author doesn't use them but it's clearly pointed out in the documentation (https://www.postgresql.org/docs/current/textsearch-tables.ht...).
In other words, I believe they didn't need a `message_tsvector` column and creating an index of the form
CREATE INDEX idx_gin_logs_message_tsvector
ON benchmark_logs USING GIN (to_tsvector('english', message))
WITH (fastupdate = off);
would have allowed queries of the form WHERE to_tsvector('english', message) @@ to_tsquery('english', 'research')
to use the `idx_gin_logs_message_tsvector` index without materializing `to_tsvector('english', message)` on disk outside of the index. Here's a fiddle supporting it: https://dbfiddle.uk/aSFjXJWz
[0] https://dev.mysql.com/doc/refman/8.4/en/create-index.html#cr...
Still, I’m sure they’ll get there. Maybe they’ll also eventually get invisible columns, though tbf that’s less of a problem for Postgres than it is for MySQL, given the latter’s limited data types.
Postgres has one option for replication that is a godsend, though: copy_data. This lets you stand up a new replica without having to first do a dump / restore (assuming your tables are small enough / your disk is large enough, since the primary will be holding WAL during the initial sync). Tbf, MySQL doesn’t need that as much, because it offers parallel dump and restore, even on a single table.
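For anyone who hasn't used it: copy_data is an option on a logical-replication subscription and defaults to on. A minimal sketch, with hypothetical names and connection string:

-- On the primary: publish the tables to replicate
CREATE PUBLICATION app_pub FOR ALL TABLES;

-- On the new replica: subscribing with copy_data = true (the default) performs
-- the initial table sync, so no separate dump/restore step is needed
CREATE SUBSCRIPTION app_sub
CONNECTION 'host=primary.example.com dbname=app user=replicator'
PUBLICATION app_pub
WITH (copy_data = true);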
https://www.postgresql.org/docs/current/ddl-generated-column...
I can’t think of any advantage of a virtual generated column over a stored generated column for something like a search index, where calculating on read would be very slow.
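A stored generated column for the search vector looks roughly like this (Postgres 12+, assumed table/column names):

-- The tsvector is computed on write and stored on disk
ALTER TABLE benchmark_logs
ADD COLUMN message_tsv tsvector
GENERATED ALWAYS AS (to_tsvector('english', message)) STORED;

-- Index the generated column like any other column
CREATE INDEX idx_logs_message_tsv ON benchmark_logs USING GIN (message_tsv);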
Postgres has been able to create indexes based on the output of functions forever though, which does the job here too.
It’s pretty great.
Elastic is on a different level for a lot of use cases, but pg is more than enough for the vast majority of workloads.
Small writing note, I probably would've swapped the order of those. Hanlon's Razor and all. :)
vibe sysadminning, bro
The issue I have had with postgres full text search isn't that it's too slow, it's that it's too inflexible. It's a nice way to add simple search to fields but poor if you want to tune the search at all. Even allowing for general substrings is too much to ask, even allowing for custom tokenization is too much to ask. There's no tokenization pipeline to speak of unless you want to write C extensions (which of course you can't do for hosted databases anyway).

Solr and Elasticsearch let you set up very complex indexes and search processing via configuration. There's absolutely nothing that would prevent postgres from adopting a lot of this capability, but instead postgres offers literally NOTHING. I get the impression that most of the developers for postgres full text haven't spent much time with other solutions, as from previous discussions they don't really understand what I mean when I talk about tokenization and filter setup, and they don't really understand why this is a deal-breaker even for very simple applications.

Postgres just splits on whitespace (and lets you basically manually use stopwords and stemming, which is crap). There is really no way to concatenate fields in a clean way into a single index, which again makes it extremely annoying to work with. There's no way to score searches based on field weighting or really any other kind of weighting beyond BM. Compared to the alternatives it's a toy system.
If the query uses the index, then the on-the-fly tsvector rechecks happen only on the matches, and the benchmark queries have LIMIT 10, so few rechecks, right?
Edit: yes, but the query predicates have conditions on two GIN indexes, so I guess the planner chooses to recheck all the matches for one index first, even though it could avoid a lot of work by rechecking row-wise.
Using the search engine built into PostgreSQL, MySQL or SQLite makes this problem SO MUCH less difficult.
Bundling everything into one system will eventually fall apart. But it’s sooo good while you can do it.
And I am decades past the point where I introduce new shit just to learn it under the guise of “needing” it. Instead I’ll introduce new things I want to learn under the guise of new things I want to learn and it will find the appropriate place (often nowhere, but you never know).
Now applications need only one technology, not necessarily one server.
What you query on is not the same as what you store in your DB. And it can be expensive to calculate and re-calculate. Especially at scale. And iterating over all your stuff can be challenging too. It requires IO, memory, CPU, etc. Your application server is the wrong place. And so is your main application database.
The challenge with search is that querying just gets a lot easier if you calculate all the expensive stuff at index time rather than at query time. Different tokenization strategies for different languages, calculating things like page rank, normalization, tokenization, semantic vectors, enriching data with other data (including denormalizing things from other data sources), etc. There are a lot of tricks you can use to make stuff easier to find.
Forgoing all of that indeed makes things simpler and faster. But your search quality will probably suffer. And if you aren't measuring that to begin with, it is probably not great. Doing all these things on write in your main database schema is going to cause other issues (slow writes, lots of schema migrations, complicated logic around CRUD, etc.). The rookie mistake with ETL is just joining the three steps into one thing that then becomes hard to run, evolve, and scale. I see that with a lot of my clients. This is textbook "doing it wrong". It's usually neither fast nor very good at search.
Even if you are going to use postgresql as your main search index, you are probably doing it wrong if your search table/schema isn't decoupled from your main application database via some ETL pipeline. That logic has to live somewhere. Even if it is a bit of a simplistic/limited "do everything on INSERT" kind of thing. That's going to hold back your search quality until you address it. There is no magic feature in postgresql that can address that. Nor in Elasticsearch (though it comes with way more features for this).
I've worked with postgresql's FTS a few times. It's pretty limited compared to Elasticsearch. Anybody running performance benchmarks should be running quality benchmarks instead. Being fast is easy if you skip all the difficult stuff. Being high quality and fast is a big challenge. And it's a lot easier with proper tools and a proper ETL pipeline.
And indeed engineering that such that the two stay in sync requires knowing how to engineer that properly. I usually start with that when I consult clients looking to level up their home grown search solutions to something a bit better.
Of course if you do ETL properly, having your DB and search index in the same place stops making sense. And if you are going to separate them, you might as well pick something more optimal for the job. There are a lot of decent solutions out there for this.
The cost of adding services to an app is so much higher than people give it credit for at organizations of every size; it's shocking to me that more care isn't taken to avoid it. I certainly understand at the enterprise level that the value add of a comprehensive system is worth the cost of a few extra employees or vendors, but if you could flatten all the weird services required by all the weird systems that use them in 30,000+ employee enterprises and replace them with one database and one web/application server, you'd probably save enough money to justify having done it.
The team on that inventory project obviously created a new database to put their data in, plus QA and test replicas. They (probably) have since moved to another DB system but left the old ones running for legacy applications!
Depending on your database system, it may even have a 1:1 equivalency with Schemas (MySQL).
We moved our queues to PG and it cuts out the same kind of overhead to be able to wrap an update and start a job in a transaction. PG has been plenty fast to keep up with our queue demand.
Ultimately I think being able to do things transactionally just avoids a whole class of syncing issues, which are basically caching issues, and cache invalidation is one of the 2 hard things.
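The enqueue side of that is just putting the job row in the same transaction as the business update; a sketch with hypothetical table names (the FOR UPDATE SKIP LOCKED dequeue shown after it is the usual Postgres queue idiom, not necessarily what any particular library does):

-- Hypothetical jobs table
CREATE TABLE IF NOT EXISTS jobs (
    id      bigserial PRIMARY KEY,
    kind    text NOT NULL,
    payload jsonb,
    done    boolean NOT NULL DEFAULT false
);

-- Enqueue atomically with the update: either both happen or neither does
BEGIN;
UPDATE orders SET status = 'paid' WHERE id = 42;
INSERT INTO jobs (kind, payload) VALUES ('send_receipt', '{"order_id": 42}');
COMMIT;

-- Worker side: claim one pending job without blocking other workers
BEGIN;
SELECT id, kind, payload FROM jobs
WHERE done = false
ORDER BY id
LIMIT 1
FOR UPDATE SKIP LOCKED;
-- ...process the job, mark it done, then COMMIT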
I just got off a call with a client where their developers were using ORM-style abstractions to manipulate data for downstream processing in code, turning what should have been a few seconds of one custom SQL command into several hours of passing objects around multiple computer systems.
If we can put the FTS engine inside the SQL engine, we can avoid the entire space of APIs, frameworks, 3rd parties, latency, etc that goes along with otherwise integrating this feature.
Modern SQL dialects represent universal computation over arguably the best way we know how to structure complex data. Adding custom functions & modules into the mix is mostly just syntactic sugar over what is already available.
There is ZERO honor in offloading what could have been a SQL view into a shitty pile of nested iterators somewhere. I don't understand where any of this energy comes from. The less code the better. It's pure downside.
I wholeheartedly agree with you. As to why we use ORMs, the impression I get from the engineers I work with is that many of them a) don’t know SQL and b) feel like it’s “data analyst” stuff and so beneath them to learn it. Real engineering requires objects and inheritance or structs, pointers and arrays (depending on the engineer).
I think it’s the declarative nature of SQL that turns them off.
I was replacing an application management interface of sorts, large ish sets of configuration parameters, ideal for a relational database. But I wanted to treat the combined configuration as a document, since that's what the front-end would send over. Ended up using GORM, which was fine for a little while... but quickly falls apart, especially when your data model is nested more than one level deep. And then you end up having to figure out "how do I solve X in GORM" and find yourself with limited documentation and a relatively small community whose members quickly burn out of trying to help people.
I'll just write the code next time.
Yeah distributing state among 10 nodes, totally easy, fine, good.
That said, it's easy to forget to check if you're in either of those 20% (or both). There's probably a whole bunch of postgres usage where really something else should be used, and people just never checked.
At my current gig, we used to shove everything (including binary data) into postgres because it was easy and all our code plugged into it anyways. When it started to become uneconomical (mostly due to RDS storage costs), we then started shunting data to S3, DynamoDB, etc.
Also, not everybody can be on a cloud with easy access to all the fancy products for queuing, caching, etc. Sometimes it's better overall to have to deal with one complex beast (that you'd have to maintain anyways) than spending time deploying Kafka, MongoDB, etc (even though it can sometimes be easier than ever with pre-built manifests for K8s) as well as securing and keeping them all up to date.
I do strongly encourage people to treat code that deals with these things with as much abstraction as possible to make migrations easier later on, though.
But either way, the answer is simplicity and cost. I assume you’ve heard of Choose Boring Technology [0]? Postgres is boring. It’s immensely complex when you dive into it, but in return, you get immense performance, reliability, and flexibility. You only have to read one manual. You only have to know one language (beyond your app, anyway – though if you do write an app in pure SQL, my hat is off to you). You only have to learn the ins and outs of one thing (again, other than your app). ES is hard to administer; I’ve done it. Postgres is also hard to administer, but if I have to pick between mastering one hard thing and being paged for it, or two hard things and getting paged for them, I’ll take one every day.
That said, "postgres" is a very broad subject if you take all of those into consideration, if you need to specialize your search for someone who knows how to do X in PG specifically you're almost back at the same spot. (I say almost because I'm sure it's easier to learn a specialization in Postgres if you're already familiar with Postgres than it is to learn a completely new tool)
And caveat, there's a high golden hammer risk there. I'd start questioning things when needing to query JSON blobs inside a database.
"So when designing your software architecture, think about PostgreSQL NOT as storage layer, but rather as a concurrent data access service. This service is capable of handling data processing."
The only things you shouldn’t put in Pg are things where there’s an obviously better alternative that’s so much better as to make running an extra service worth it. There definitely are cases where that’s true (Redis when you need exceptionally quick responses, for example) but it’s a high bar to clear.
That’s the part that baffles me. You’ve selected a DB with native support for esoteric but useful data types like INET (stop storing IP addresses as strings in dotted quad!), and a whole host of index types beyond B+tree, but they’re never using them.
Read your RDBMS docs, people. They’re full of interesting tidbits.
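For example, INET plus a GiST index gives you indexed subnet-containment queries; a quick sketch (hypothetical table; the inet_ops opclass needs Postgres 9.4+):

CREATE TABLE connections (
    id     bigserial PRIMARY KEY,
    client inet NOT NULL
);

-- GiST index with inet_ops supports the network containment operators
CREATE INDEX idx_connections_client ON connections USING gist (client inet_ops);

-- "is contained within": all clients in a given subnet
SELECT count(*) FROM connections WHERE client << '10.42.0.0/16';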
"Every single one" in your team agreeing on a single specific technological choice is one of the rarest things I can image! Developers argue about libraries, frameworks, programming languages, services, etc., and I think it speaks for itself if Postgres is the thing that comes closest in bridging the gap at least on one layer in the tech stack. Postgres is a "conservative" choice with a very active community and extensible ecosystem.
Also, nobody is ever making use of their technological choice to its full extent, you'd rarely know what you'll need beforehand, and it's just nice not having to add other storage engines when that one feature request steps into your life.
Alternative solutions (Lucene/Tantivy) are both designed around 'immutable segments' (indexing immutable files), so marrying them with a Postgres heap table would result in a worse solution.
There are open large(-ish) text datasets like full Wikipedia or pre-2022 Reddit comments, that would work much better for benchmarking.
Not completely surprising, but on a table with _potentially_ a couple of thousand inserts per second, it slowed down the overall updates to the point that transactions timed out.
We had already added an index for one of the columns we wanted to index and were running the statement for the second one. The moment the second index finished, we started to see timeouts from our system when writing to that table, transactions failing, etc.
We had to drop the indices again. So, sadly, we never got to the point of testing the actual FTS performance :/ I would have liked to test it, because we didn't necessarily have to search hundreds of millions of documents; due to customer tenants, searches would always be constrained to a few million _at most_.
ps: I already wrote about this -> https://news.ycombinator.com/item?id=27977526 . Never got a chance to try it again since then (newer versions of everything, newer hardware, etc.)
Could we get similar performance in the browser using something like SQLite + FTS5 + Wasm? Seems like an interesting direction for offline-first apps...
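For anyone curious, the FTS5 side is only a couple of statements; a sketch (the SQLite Wasm builds expose the same SQL):

-- SQLite: create an FTS5 virtual table and query it
CREATE VIRTUAL TABLE docs USING fts5(title, body);

INSERT INTO docs (title, body) VALUES ('hello', 'full text search in the browser');

-- MATCH does the tokenized search; ORDER BY rank sorts by BM25 relevance
SELECT title FROM docs WHERE docs MATCH 'search' ORDER BY rank;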
https://github.com/jmscott/talk/blob/master/pgday-austin-20161112.pdf
I don’t think the question is speed, it’s scale. Use it until it breaks, though.