vs. the current license:
"IF ANY LITIGATION IS INSTITUTED AGAINST SUPABASE, INC. BY A LICENSEE OF THIS SOFTWARE, THEN THE LICENSE GRANTED TO SAID LICENSEE SHALL TERMINATE AS OF THE DATE SUCH LITIGATION IS FILED."
(https://github.com/orioledb/orioledb/blob/main/LICENSE)

IMHO: the current wording might discourage state organisations, since even a trivial lawsuit (e.g. over a minor tax delay) could terminate the licence - perhaps a narrower patent-focused clause would work better (or an OSI-approved licence?).
I’ll revisit this with legal to try to make it clearer.
Our intentions here are clear - if people have examples that we can follow, we will do what we can to make this irrevocable (even to the extent of donating the patent if/when the community is ready to bear the cost of the maintenance).
https://github.com/orioledb/orioledb/pull/558
It is now Apache 2.0 which grants patent rights and can be re-licensed to PostgreSQL when the code is upstreamed. I'll amend the blog to make that clearer.
> "It is now Apache 2.0 which grants patent rights and can be re-licensed to PostgreSQL when the code is upstreamed."
It’s worth double-checking the relicensing angle. IMHO: you can only relicense your own code. Any 3rd-party contribution stays under Apache 2.0 unless the author explicitly agrees.
So a full switch to the PostgreSQL license is only possible if every contributor signs off. That usually means having a Contributor License Agreement (CLA) in place up front.
And ethically, contributors should already know their work might be relicensed under "postgresql terms" later - otherwise it's a surprise change for the community.
PS: if the plan is serious, do the legal homework early and gather consents now, so upstreaming to PostgreSQL doesn’t fail later because a few open-source contributors (who aren’t Supabase/OrioleDB employees) are unreachable.
IMHO: given that Postgres has many corporate forks and contributors from different companies, mixing Apache 2.0 and PostgreSQL licenses isn’t ideal - it complicates the legal picture and can even block upstream acceptance.
And if supabase's goal is really this [1], then it makes sense to think through the legal side now and start consulting with the upstream Postgres community early.
[1] https://supabase.com/blog/orioledb-patent-free#aligned-with-...
"We believe the right long-term home for OrioleDB is inside Postgres itself. Our north star is to upstream what’s necessary so that OrioleDB can eventually be part of the Postgres source tree, developed and maintained in the open alongside the rest of Postgres."
For other patent-shield licenses such a combination also removes most of the protections of the patent shield (a patent-troll user can use the software under MIT and then sue for patent infringement). However, the Apache 2.0 patent shield is comparatively weak (compared to GPLv3 and MPLv2) because it only revokes the patent license rather than the entire license, so it actually acts like a permissive license even after you initiate patent litigation. This makes the above problem even worse -- if you don't actually have any patents in the software, then a patent troll can contribute code under MIT and then sue all of your users without losing access to the software, even under just Apache 2.0 (I don't know if this has ever happened, but it seems like a possibility).
IMHO, most people really should just use MPLv2 if they want GPLv2 compatibility and patent grants. MPLv2 even includes a "you accept that your contributions to this project are under MPLv2" clause, avoiding the first problem entirely. It would be nice if there were an Apache 3.0 that had a stronger patent shield but still remained a permissive license (MPLv2 is a weak file-based copyleft), but I'm more of a copyleft guy so whatever.
Isn't the idea that you could then sue the suer for infringing your patent?
It also requires actively pursuing a patent case, which may result in the patent being rendered invalid, while a termination clause for the whole license just requires a far more clear-cut copyright infringement claim (possibly achievable purely through the DMCA system, out of court). But I'm not a lawyer; maybe counter-suits are common enough in such situations that either approach is just as good in practice.
https://engineering.fb.com/2017/09/22/web/relicensing-react-...
> The license granted hereunder will terminate, automatically and without notice, for anyone that makes any claim (including by filing any lawsuit, assertion or other action) alleging (a) direct, indirect, or contributory infringement or inducement to infringe any patent: (i) by Facebook or any of its subsidiaries or affiliates, whether or not such claim is related to the Software, (ii) by any party if such claim arises in whole or in part from any software, product or service of Facebook or any of its subsidiaries or affiliates, whether or not such claim is related to the Software, or (iii) by any party relating to the Software; or (b) that any right in any patent claim of Facebook is invalid or unenforceable.
And so that was a fairly justified reaction, IMHO. Funnily enough, it seems that the license written by Supabase has the same issue -- I suspect this might just be the "default approach" for patent lawyers.

However, MIT has _no_ patent protections and is strictly worse, for users, than almost any license that includes some patent protections. The modern landscape of software patent trolls is far less insane than it was in the 90s, but I would really think twice about using something that is likely patented under a license other than Apache-2.0, MPLv2, or GPLv3.
The relevant patent license is the following:
> 1.3. Defensive Termination. If any Licensee, its Affiliates, or its agents initiates patent litigation or files, maintains, or voluntarily participates in a lawsuit against another entity or any person asserting that any Implementation infringes Necessary Claims, any patent licenses granted under this License directly to the Licensee are immediately terminated as of the date of the initiation of action unless 1) that suit was in response to a corresponding suit regarding an Implementation first brought against an initiating entity, or 2) that suit was brought to enforce the terms of this License (including intervention in a third-party action by a Licensee).
For example, if Supabase failed to pay a vendor that happened to use OrioleDB they wouldn't be able to sue you for damages without compromising their stack. That's uncool.
My take-away from the Facebook/React license issue was that the community agrees this violates the spirit of FOSS and invalidates claiming to be open source (at least OSI-approved), with many taking offense to the punitive nature of the clause.
Granted, Facebook was in a position to see litigation over a lot more reasons.
For practical adoption, especially in larger orgs, OSI-approved licences are much easier to get through legal review than custom ones.
We could also change to MIT/Apache but we feel PostgreSQL is more appropriate given our intentions to upstream the code
That's just not true. Your license[0] adds a clause to the Postgresql license[1]. This makes it a different license, which by extension also means it isn't OSI approved.
It's the same with the BSD licenses[2]: the 3-clause one is OSI-approved, whereas the 4-clause one is not. Turns out that one additional "all advertising must display the following acknowledgement" clause was rather important - and so is your lawsuit clause.
[0]: https://github.com/orioledb/orioledb?tab=License-1-ov-file
[1]: https://github.com/postgres/postgres?tab=License-1-ov-file
[2]: https://en.wikipedia.org/wiki/BSD_licenses#4-clause_license_...
https://github.com/orioledb/orioledb/pull/558
The code is now Apache 2.0, which grants patent rights and can be re-licensed to PostgreSQL when the code is upstreamed. I'll amend the blog to make that clearer.
From the phrasing it already seemed your heart was in the right place, but I understand that it can get tricky once people get involved who aren't as familiar with the details of open source licensing.
Getting legal to sign off on a different license this quickly is impressive!
Anyway, I'm not sure this is true. Having a separate software license plus a secondary patent-grant license is very common in open source projects in fields where patent trolls are active. See e.g. https://aomedia.org/about/legal/
I would just put them in separate files and then you're good to go.
But I am not sure the first exemption is necessarily a good thing. The Apache License, Version 2.0 is broader in what may be grounds for patent license termination, so it is a better deterrent against patent trolls (even if that means some legitimate patent claims are also discouraged).
But they have switched to Apache 2.0 now, so crisis averted.
Whoops, I did indeed type that a bit too quickly.
> Having a separate software license + secondary patent grant license is very very common
Perhaps, but those are separate. In this instance it was one and the same license, with any violation of the patent part terminating the whole license - including the non-patented software parts.
Additionally, the AOMedia patent license seems to be a bit different: the OrioleDB one said it would terminate when you sued Supabase (and to make it worse: sue them for any reason), but the AOMedia one says it'll terminate if you sue anyone over the licensed patents.
In other words: the OrioleDB one protected only Supabase, the AOMedia one protects the entire community. When it comes to being compatible with open source licenses, details like that become crucial.
I hope you can look at the Apache 2 patent grant as a better clause - or even adopt something like Google's Additional IP License found here: https://www.webmproject.org/license/additional/, which doesn't modify the open source license but instead adds an additional grant as a separate license.
Supabase is doing great work, thank you!
(if the atlasgo team are reading this feel free to reach out too)
https://opensource.org/license/ms-pl-html
Microsoft used it a ton, until they eventually just made everything open-source fall under the MIT license.
Some people will still be angry about it (I got a downvote for just mentioning it elsewhere on this thread), but as the person who built your software, you have every right to license it as you deem necessary. There is a cost to what you've built, and you have no true obligation to give everything away for free.
On that note, as far as I can remember the MS-PL is OSI approved already.
"If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed."
On violation, the Apache 2.0 license terminates only the patent license. I might be mistaken, but that reads an awful lot like you're still allowed to use the software, provided you do so in a way which doesn't infringe the patent.
On the other hand, the OrioleDB license seems to terminate the entire license - so the way I read this it would include parts of the software which aren't covered under the patent itself.
It starts off nice with the usual:
> PERMISSION TO USE, COPY, MODIFY, AND DISTRIBUTE THIS SOFTWARE AND ITS DOCUMENTATION FOR ANY PURPOSE, WITHOUT FEE, AND WITHOUT A WRITTEN AGREEMENT IS HEREBY GRANTED
.. but then there's the:
> HEREBY GRANTS A (..) LICENSE TO UNITED STATES PATENT NO. 10,325,030 TO MAKE, HAVE MADE, USE, HAVE USED, OFFER TO SELL, OFFERED TO SELL, SELL, SOLD, IMPORT INTO THE UNITED STATES, IMPORTED INTO THE UNITED STATES, AND OTHERWISE TRANSFER THIS SOFTWARE
.. which to me seems to be missing some kind of "modify" clause? Sure, it seems like you're allowing me to distribute it as-is the way a store like Amazon distributes boxes, but what happens when I start modifying the code and distributing those modifications? Is it still "this software", or has it become a derivative? Is the license I get to that patent even sublicensable? What happens to users of a fork when the forkee sues Supabase: do they also by extension lose their patent license?
The GPLv2, for example, has a clause stating that "Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor" which makes it very clear what happens. If you're adding a poison pill to open-source code, you really shouldn't be this sloppy: it should be painfully obvious to every reader what the implications are, or nobody will ever risk using it.
GPLv3 has text about this in §11, MPLv2 in §2.3, and Apache-2.0 in §3. GPLv2 doesn't have an explicit patent grant (and while some folks have argued that it has an implicit one that is just as good, I think the general consensus is that GPLv2 is not immune to patent trolls). All of them still allow you to make modifications, but they do not guarantee that some other patents will not be infringed by your modifications, and they open you up to patent lawsuits (even from the same entity).
Assuming a lawyer wrote this, that is probably part of the reason for it. But it does feel a little sloppy; a separate patent license with clear terms would probably be preferable.
Use as a shield would mean limiting it to patent litigation against a user of the software.
It also only covers litigation against Supabase - it does not provide a shield against litigation against OrioleDB users.
The MS-PL has the fairly standard reactive patent shield that only activates for patent-related litigation for the specific software under the license and is kind of similar to the language in Apache 2.0, MPLv2, and GPLv3.
But they have now switched to Apache 2.0, so crisis averted.
You might have good intentions, but in my value system if you invite others to also enjoy what you have stolen, you still are just a thief.
Polite reminder: Just because you managed to trick the US Patent Office into stamping your patent application does not mean you have invented something. It simply means you have managed to convince a bureaucrat to give you a stamp so you can claim ownership about other researchers' work.
Want to be part of the good guys? Burn the patent, and apologize to the research community you tried to steal from.
How'd you arrive at this conclusion? The body of a patent can be expected to be 99.99% widely known material, always.
What counts is that something new is disclosed, and that is what the claims cover.
A description of a patent must be enabling: it must tell someone ordinarily skilled in the art enough to reproduce the claimed invention. Gesticulating at "you could find a bunch of the simpler steps in earlier research papers" is not good enough.
How far attorneys go to make sure a patent description is enabling varies wildly (I remember some of my earlier ones took some time to describe what a CPU and a program are...), but it's best to err on the side of caution and describe well-known techniques. Otherwise, you might spend time arguing in future litigation about whether the average software engineer in 2015 knew how to do a particular thing.
This is all positive. Super appreciate what you folks have done. It's clearly hard, well intentioned, and thoughtfully executed.
People might say the company is doing it for goodwill, but that is the point: it is better to earn users' goodwill by actually helping them than to be like the thousands of other companies which don't even do that. It is a nuanced topic, but I feel we should encourage companies which do good, period (like Silksong / Team Cherry in gaming).
I will look further into this now :p thanks!
that's ... that's what they are doing by making it freely available, no?
this helps anyone who is covered by the patent because they are (a bit better) protected from other patent trolls (and from other IP litigation)
The closest that comes to mind is "Free Nestlé bottled water".
for a patent, granting irrevocable royalty-free usage rights to anyone is equivalent to putting it into the public domain, no?
both copyright and patents grant exclusive rights to the rightsholder (the right to perform or modify, or in the case of patents to use the invention in the way covered by one of the claims), and the rightsholder is free to include more persons in the group of authorized users (i.e. extend the rights to others, and can do so conditionally - hence the Apache 2 patent grant)
can you please explain where my understanding is incorrect or missing something? thanks!
For example, there are a lot of ways to use the so-called priority of the US patent to file other patents, including elsewhere in the world.
US Americans sadly often forget that the majority of the planet is not the USA. Here in Thailand, US and EU pharma companies have managed to get patents on 15-year-old basic stuff like anti-histaminic meds. "Oh, but the system of limited monopoly in the US worked for us?" Yeah. But in the US, the cost per pill is somewhere around $0.08. In Europe it's about $0.02. In Thailand it's $0.80, because Zyrtec managed to file a patent here re-using the priority of the US patent. Wage-adjusted, in Thailand that's about $15 per pill. You'd better not have an allergy over here. So: patents can have long-term consequences.
Back to IT: Have a look at the whole patent troll industry. The biggest chunk of junk patents that they bought are coming from "we will not do any harm" owners/filers. A lot can happen in 20 years.
The intention of OrioleDB is not to compete with Postgres, but to make Postgres better. We believe the right long-term home for OrioleDB is inside Postgres itself. Our north star is to upstream what’s necessary so that OrioleDB can eventually be part of the Postgres source tree, developed and maintained in the open alongside the rest of Postgres.
Call out shady shit when companies do shady things, but the sentiment behind this comment seems to be looking for reasons to be outraged instead of looking at what's actually being done.
If companies get eviscerated every time they try to engage with the community, they'll stop engaging. We should be celebrating when they do something positive, even if there are a few critiques (e.g. the license change call-out is a good one). Instead, half the comments seem like quick reactions meant to stoke outrage.
Please have some perspective - this action is a win for the community.
I am the "owner" of a bunch of patents too, and some have actually stood the test of time by being re-invented years later (better: "parallel-invented later in time") elsewhere in the open source world.
But in my value system one does not do press releases saying "HELLO! We have decided not to do something evil!".
They could have done the very same thing quietly, to make clear there is no hidden agenda.
"Look, we hold this trivial patent on the open source ecosystem. No no no, all will be fine. No, no, we will not pick up the phone should Broadcom call us one day."
Yay. \o/
It seems that they have changed their mind to make it even more permissive.
They just relicensed the OrioleDB project under Apache 2.0 an hour ago [0], which contains a patent clause.
[0]: https://github.com/orioledb/orioledb/commit/44bab2aa9879feb7...
Our goal now is to ensure that it’s as F/OSS as possible given the pre-existing conditions
According to the docs, it “uses Postgres Table Access Method (TAM) to provide a pluggable storage engine for PostgreSQL. […] Pluggable Storage gives developers the ability to use different storage engines for different tables within the same database. Developers will be able to choose a storage method that is optimized for their specific needs: some tables could be configured for high transactional loads, others for analytics workloads, and still others for archiving.”
Just to add: if anyone wants to contribute (beyond code), benchmarking and stress-testing are very helpful for us
https://www.orioledb.com/docs#patch-set
The actual storage engine is written as an extension - these patches mostly improve the TAM API. If they are accepted by the community, it should be simpler for anyone to write their own storage extensions.
I think (correctly) it will take a lot longer to upstream the extension - the PG community should take a “default no” attitude with big changes like this. Over time we will prove its worthiness (hopefully beyond just supabase - it would be good to collaborate with other Postgres providers)
Would be really nice with a pgdg package, as this is definitely the kind of thing I would want to test in a separate cluster :-)
And more generally, curious if you have any sense for what might make up the "1%" of workflows this wouldn't be advisable for? Any downsides to be aware of?
[0] https://github.com/orioledb/orioledb?tab=readme-ov-file#orio...
As for other workloads it might not be great for: all my testing has shown a great improvement in every workload I have thrown at it.
We have seen this issue with YugabyteDB and their integration of RocksDB as the storage engine for PostgreSQL.
and many extensions (e.g. postgis) already work fine with OrioleDB storage.
IANAL nor a patent judge, but this is my understanding after watching the space for some years.
But at the same time, globalization means legal mandates are increasingly extra-territorial in scope and impact. U.S. patent law affects anyone whose products touch the American market.
Similarly, CCPA/CPRA and GDPR reach far beyond their nominal geographic borders.
When you're in IP, bang on IP
That's just the path for all who do this stuff. America seems culturally to like IP (everyone saying that copyright law is paramount and LLMs should be stopped, etc.) but that's just recent history.
Ahead in production. Did China research/innovate/develop those industries, or were they 'just' fast followers? (Early in its history the US used the same 'tactics' relative to the UK and other European countries.)
This is an unfortunate limitation to be aware of when evaluating
https://www.orioledb.com/docs/usage/getting-started#current-...
1) Overhead. SSI implies a heavyweight lock on any involved index page or heap tuple (even for reads). The overhead of SSI was initially measured at ~10%, but scalability has come much further since then. That many heavyweight locks could slow a typical workload down several times over on a multicore machine.
2) SSI requires the user to be able to repeat any transaction due to serialization failure. Even a read-only transaction needs to be DEFERRABLE or it might be aborted due to serialization failure (it might have "seen impossible things" and need a retry).
In contrast, it's typically not hard to resolve the concurrency problems of writing transactions using explicit row-level and advisory locks, while REPEATABLE READ is enough for reporting. Frankly, during my whole career I haven't seen a single case where the SERIALIZABLE isolation level was justified.
Any time you have to check constraints manually, you can't just do it before the write or after the write, because two REPEATABLE READ write transactions will not see each other's INSERTs.
You need something like a lock, a two-phase commit, or SERIALIZABLE isolation for writes. Advisory locks have sharp edges, and 2PC is not so simple either, there is a lot that can go wrong.
In the case of SERIALIZABLE you do need to retry in case of conflict, but usually the serialization anomalies can be limited to a reasonably fine level. And an explicit retry feels safer than risking a livelock situation when there is contention.
https://www.postgresql.org/docs/current/ddl-constraints.html...
An exclusion constraint needs concrete values to compare, but here we can't pre-compute and index every future value (there are infinitely many)
We solve a Diophantine equation for this check (if there is a solution to Ax - By = 0, then formulas A and B can conflict at some point)
Heck, https://www.sciencedirect.com/science/article/pii/S147466701... and https://www.amazon.com/Declarative-Models-Concurrent-Cyclic-... seem to indicate this is still an area of active research. And, to your point, almost certainly too complex to try to encode into a Postgres index - that would be a paper-worthy project unto itself!
(I've had to implement the naive approach here, which nested-loops over dates and rules and finds ones that might conflict. Definitely not meant to scale, and would be a nightmare if not at a date-level granularity!)
No pressure to share, but would love to learn more!
I'm not entirely familiar with the literature, so we did some simplifications to keep it manageable, and it's very possible there might be a better approach!
We model the events generated by our recurring rules as intervals [SA+n*A, SA+n*A+DA), where A is a constant integer number of days (e.g. A=3*7 for an event on Tuesday every 3 weeks), SA is the start date, and DA is the duration of an event (... which can be >24h to make things more interesting). It might continue recurring forever, or it might stop by some end date.
Now if you have two of these recurring events with periods A and B, you can think of each of them as a periodic function, and the combination of them will have a period equal to LCM(A, B). So we could check everything modulo LCM(A, B) once, and that would hold infinitely far forward.
But actually, we can do even better by taking the difference Δ=SA-SB mod GCD(A, B). You can sort of think of it as the closest approach between these two recurring events. If that's less than DA or more than the GCD-DB, these events are eventually going to overlap. There's some extra details to check whether overlap happens before either end date (if any), but that's the idea in a nutshell.
---
Where it gets really interesting is when you introduce timezones and daylight savings time (DST). Now a day is not always 24h, so there isn't a nice [SA+n*A, SA+n*A+DA) at regular intervals, and we can't naively work modulo the LCM or GCD anymore.
But we can notice that the days of the year on which a DST transition falls generally follow simple rules, and would repeat every 28 years (actually every 400 years due to leap years). In practice for a given timezone, for example CEST, there are only 14 unique days of the year on which the DST transitions can fall, and it repeats after a full period.
So if we want to check for overlap, we can treat those DST transition days as special cases, and all the time intervals in between DST transition days look locally "UTC-like" (time doesn't jump forward or backwards, if we ignore leap seconds), so the previous formula continues to work on all of these.
But at some point things become a little messy, so we did some simplifications =)
My napkin math suggests the size of the supercycle when you take two recurring events A and B with periods < 365 days and the 400 year cycle for DST transition can be about a hundred thousand years in the worst case.
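For intuition, the supercycle is just an lcm computation (the periods below are made up for illustration, and this ignores all the end-date and DST special-casing discussed above):

```python
from math import lcm

# Days in one 400-year Gregorian cycle: 400*365 regular days + 97 leap days.
GREGORIAN_CYCLE_DAYS = 400 * 365 + 97  # 146097

# Illustrative recurrence periods in days: every 3 weeks and every 5 weeks.
a, b = 21, 35

combined = lcm(a, b)                              # joint cycle of the two events
supercycle = lcm(combined, GREGORIAN_CYCLE_DAYS)  # joint cycle including DST pattern
print(combined, "days;", supercycle // 365, "years")
```

With less friendly periods (e.g. two coprime periods near 365 days), the lcm blows up accordingly, which is where the worst-case estimate comes from.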
But considering that timezones and daylight savings time are fundamentally political decisions, and that the IANA tzdata file is updated regularly, there is not much sense in applying the current rules thousands of years into the future! So we check at most 400 years ahead, and we consider that when timezones are involved, the overlap check is best effort and we prepare for the fact that it could miss a DST transition – no matter what math we do, a country could change the rules at any time, and a day that used to be 24h could now be 25, 22:30, or even skipped entirely, invalidating all past assumptions.
The idea that someone could book a room for 400 years of consistency guarantees is somewhat hilarious, yet absolutely the kind of thing that would check a box in a regulated environment like healthcare!
It does speak to a larger issue I see quite often, which is that capturing structured intent as of the time an event is created, with the context that led to that decision, is incredibly hard to model. Because no matter what, something about the system will change, like daylight savings time or assumptions around resource/room availability, in an unpredictable way such that the data that had been used to drive a confident evaluation of lack-of-conflict at insertion time, is now insufficient to automatically resolve conflicts.
There's no single way to solve this problem - in a way, "reinterpreting intent" is why programming is often equal parts archaeology and anthropology. Certainly gives us very interesting problems to solve!
I use MySQL, not Postgres, for this application (for better or for worse), and I can absolutely generate a bad state if I drop MySQL to a level below SERIALIZABLE — I’ve tried it. (Yes, I could probably avoid this with SELECT FOR UPDATE, but I don’t trust MySQL enough and I get adequate performance with SERIALIZABLE.)
To make SERIALIZABLE work well, I wrap all the transactions in retry loops, and I deal with MySQL’s obnoxious reporting of which errors are worthy of retries.
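For what it's worth, the retry wrapper can stay quite small. A sketch (the `txn` callable and `is_retryable` classifier are placeholders for the driver-specific parts, e.g. checking for MySQL deadlock errors or PostgreSQL SQLSTATE 40001):

```python
import random
import time

def with_retries(txn, is_retryable, attempts=5):
    """Run `txn` (a callable executing one whole transaction) and retry it
    when it raises an error that `is_retryable` classifies as a
    serialization failure, with jittered exponential backoff."""
    for attempt in range(attempts):
        try:
            return txn()
        except Exception as exc:
            # Give up on non-retryable errors or when out of attempts.
            if attempt == attempts - 1 or not is_retryable(exc):
                raise
            time.sleep(random.uniform(0, 0.05 * 2 ** attempt))
```

The key discipline is that `txn` must re-run from the very top (including any reads it based decisions on), not just replay the final write.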
(Aside from bad committed states, I’ve also seen MySQL return results from a single straightforward query that cannot correspond to any state of the table. It was something like selecting MIN, MAX and COUNT(*) from a table and getting min and max differing and count = 1. It’s been a while — I could be remembering wrong.)
"IF ANY LITIGATION IS INSTITUTED AGAINST SUPABASE, INC. BY A LICENSEE OF THIS SOFTWARE, THEN THE LICENSE GRANTED TO SAID LICENSEE SHALL TERMINATE AS OF THE DATE SUCH LITIGATION IS FILED."
This is a poison pill. At best the licensing is naive and blocks even customers of Supabase from using OrioleDB; at worst it's an attempt by Supabase to grant themselves unlimited indemnity under the guise of a community project. It means the moment you sue Supabase for anything (contract dispute, IP, employment, an unrelated tort) you lose the license. They could lose your data, and if you try to do anything about it they can immediately counter-sue for a license violation. Using the brand of the Postgres license to slip this in is pretty wild.
OrioleDB looks like a promising project and is undoubtedly a great contribution from Supabase, but it's not open source, or really usable by anyone, with this license.
It is now Apache 2.0 which grants patent rights and can be re-licensed to PostgreSQL when the code is upstreamed. I'll amend the blog to make that clearer.
I think that "under review" claim is doing some very heavy lifting, especially when it relates to their changes to index tuple lifecycle management. The patches that have been submitted are unlikely to get committed in full anytime soon, even after substantial changes to the patches' designs.
PostgreSQL just has not been designed for what OrioleDB is doing, and forcing OrioleDB's designs into PostgreSQL upstream would introduce a lot of (very) sharp edges that the community can't properly test without at least a baseline implementation - which, critically, hasn't been submitted upstream. Examples of these sharp edges are variable-sized TIDs, MVCC-owning indexes, and table-AM-signalled index inserts.
There are certainly ideas in OrioleDB's designs that PostgreSQL can benefit from (retail index tuple deletion! self-clustering tables!), but these will need careful consideration in how this can be brought into the project without duplicating implementations at nearly every level. A wholesale graft of a downstream fork and then hoping it'll work out well enough is just not how the PostgreSQL project works.
1. With PostgreSQL heap, you need to access the heap page itself. And it's not for free. It goes all through the buffer manager and other related components.
2. In OrioleDB, we have a lightweight protocol to read from pages. In-memory pages are connected using direct links (https://www.orioledb.com/docs/architecture/overview#dual-poi...), and pages are read lock-less (https://www.orioledb.com/docs/architecture/overview#page-str...). Additionally, tree navigation for simple data types skips both copying and tuple deforming (https://www.orioledb.com/blog/orioledb-fastpath-search).
According to all of the above, I believe OrioleDB still wins in the case of secondary key lookup. I think this is indirectly confirmed by the results of the TPC-C benchmark, which contains quite a lot of secondary key lookups. However, this subject is worth dedicated benchmarking in the future.
Of course, the flip side of the coin is that if you do an UPDATE of a row in the presence of a secondary index, and the UPDATE doesn't touch the key, then you don't need to update the index(es) at all. So it really depends on how much you update rows versus how often you index-scan them IME.
[1] TPC-H doesn't have difficult enough queries to really stress the planner, so it mattered comparatively less there than in other OLAP work.
This is why Postgres b-tree indexes offer CREATE INDEX (indexCol1, indexCol2, ...) INCLUDE (includeCol1, includeCol2, ...). With INCLUDE, the index will directly store the listed additional columns, so if your query does `SELECT includeCol1 WHERE indexCol1 = X AND indexCol2 > Y`, you avoid needing to look up the entire row in the heap, because includeCol1 is stored in the index already. This is called a "covering index" because the index itself covers all the data necessary to answer the query, and you get an "index only scan" in your query plan.
The downside to creating covering indexes is that it's more work for Postgres to go update all the INCLUDE values in all your covering indexes at write time, so you are trading write speed for increased read speed.
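The covering-index effect described above can be seen in any engine with index-only scans. Here is a small sketch using SQLite via Python's sqlite3 as a stand-in for the Postgres behavior (the table `t` and index `t_cover` are hypothetical names for illustration; SQLite has no INCLUDE clause, so the payload column is simply appended as a trailing key column, which gives the same covering effect):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INTEGER, b INTEGER, payload TEXT)")
# No INCLUDE in SQLite: adding payload as the last key column makes
# the index "cover" queries that only touch a, b, and payload.
con.execute("CREATE INDEX t_cover ON t (a, b, payload)")

# The query reads only indexed columns, so the planner can answer it
# from the index alone, without touching the table's row storage.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT payload FROM t WHERE a = 1 AND b > 2"
).fetchall()
print(plan)  # the detail column mentions "COVERING INDEX"
```

The difference from Postgres's INCLUDE is that INCLUDE columns are stored only in leaf pages and don't participate in ordering, while SQLite's trailing key column does; the read-side benefit (no heap/table fetch) is the same.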
I think it's quite typical to see this in SQL databases. SQLite behaves the same way for indexes; the exception is that if you create a WITHOUT ROWID table, then the table itself is sorted by primary key instead of by ROWID, so you get at most 1 index that maps directly to the row value. (sqlite docs: https://sqlite.org/withoutrowid.html)
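A quick sketch of the WITHOUT ROWID distinction (table names `normal` and `iot` are made up for illustration): an ordinary SQLite table keeps rows in a rowid-keyed B-tree with the primary key as a separate index, whereas a WITHOUT ROWID table *is* the primary-key B-tree, so there is no rowid at all:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Ordinary table: rows live in a rowid-keyed B-tree.
con.execute("CREATE TABLE normal (pk TEXT PRIMARY KEY, v TEXT)")
# WITHOUT ROWID: the table itself is organized by primary key,
# i.e. the index-organized layout discussed in this thread.
con.execute("CREATE TABLE iot (pk TEXT PRIMARY KEY, v TEXT) WITHOUT ROWID")

con.execute("SELECT rowid FROM normal")      # works: rowid exists
try:
    con.execute("SELECT rowid FROM iot")     # fails: no rowid column
    iot_has_rowid = True
except sqlite3.OperationalError:
    iot_has_rowid = False
print(iot_has_rowid)
```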
> This means that in an ordinary index scan, each row retrieval requires fetching data from both the index and the heap.
Note that it says _index and the heap_. Not _index and the primary index and the heap_. (For a B-tree-organized table, the leaf heap nodes are essentially the bottom of the primary index, so to find anything, you need to follow the primary index from the top, which may or may not entail extra disk accesses. For a normal Postgres heap, this does not happen; you can go directly to the right block.)
Index-only scans (and by extension, INCLUDE) are to avoid reaching into the heap at all.
> The downside to creating covering indexes is that it's more work for Postgres to go update all the INCLUDE values in all your covering indexes at write time, so you are trading write speed for increased read speed.
For updates, even those that don't touch INCLUDE values, Postgres generally needs to go update the index anyway (this is the main weakness of such a scheme). HOT is an exception, where you can avoid that index update if there's room in the same heap block; index scans then follow the marker(s) you left pointing to the new row instead of fetching it directly.
Based on the limited description of OrioleDB I understand it works like SQLite WITHOUT ROWID, actually storing the row tuple in the primary key b-tree, but I didn’t go read the code
Notably, you can have a Postgres table without a primary key at all, not even an implicit one.
> Based on the limited description of OrioleDB I understand it works like SQLite WITHOUT ROWID, actually storing the row tuple in the primary key b-tree, but I didn’t go read the code
This is my understanding of OrioleDB as well.
We should start counting the times a permissive-licensed software needs this kind of protection, and then wonder what the difference is between all this effort and just going GPL.
You don't need to hang onto hopes and public shaming until someone else does the hard part.
Still, kudos for the grant.
https://github.com/orioledb/orioledb/commit/44bab2aa9879feb7...
How does it compare with Neon DB?
Wait until you need to manage a bunch of servers yourself. Unfortunately, the available solutions are complex, and not something where you can simply point something at a server, or VPS, and have a quick, fully controlled, kernel-level solution. K8s, sure, but even that is not on the same level. And when you then need to manage DBs, you're often placing those outside K8s. Most scalable solutions like CRDB (10m pay, required yearly "free" license approval), Yugabyte (half broken), TiDB ... have their own issues and, again, do not tie into a complete managed cloud experience.