How do you enforce tenant isolation with that method, or prevent unbounded table reads?
We do something similar for our backoffice, with the difference that Claude has full freedom to write queries.
What are other limitations and mitigations folks have used or encountered to support stability and security? Things like
- Query timeouts to prevent noisy neighbors
- Connection pooling (e.g. pgbouncer), also for noisy neighbors
- Client schema compatibility (e.g. some applications running older versions have certain assumptions about the schema that may change over time)

You can also limit it by creating read-only replicas and making SELECTs happen on the replica. We don't usually bother; since 99% of our users are our employees, we can teach them not to be stupid. Since their usage doesn't change much over time, we can usually just hand them a SQL query and say: here, run this instead.
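The query-timeout mitigation above can be set declaratively in Postgres; a minimal sketch (the role name is hypothetical):

```sql
-- Hypothetical role. statement_timeout aborts any statement that runs
-- longer than the given interval in sessions belonging to this role.
ALTER ROLE reporting_user SET statement_timeout = '30s';

-- It can also be set per transaction (inside BEGIN ... COMMIT):
SET LOCAL statement_timeout = '5s';
```

Setting it per role means noisy-neighbor protection applies even when users connect with ad-hoc tools, not just through the app.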
Most of our employees don't even know they have SQL access, it's not like we force people to learn SQL to get their job done. Because of RLS and views, the ones that do SQL don't have to know much SQL, even if they do happen to use it. SELECT * from employees; gets them access to basically all the employee info they could want, but only to the employees they have access to. If you are a manager with 10 people, your select returns only your 10 people.
The payroll staff runs the same query and gets all of the employees they handle payroll for. Since our payroll is done inside of PostgreSQL (thanks, plpython[1]), we can do some crazy access control stuff that most systems would never even dream about. Whenever new auditors come in and see that our payroll staff is limited to seeing only the info they need to do payroll, and only for the subset of employees they actually pay, they are awestruck.
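A minimal sketch of the kind of policy being described, assuming a hypothetical employees table with a manager_login column tied to each user's database role (the names are mine, not from the comment above):

```sql
-- Hypothetical schema: employees(id, name, salary, manager_login).
ALTER TABLE employees ENABLE ROW LEVEL SECURITY;

-- Each user connects as their own PG role; the policy compares
-- that role name against the row's manager column.
CREATE POLICY manager_sees_reports ON employees
    FOR SELECT
    USING (manager_login = current_user);

-- Now a plain query is automatically scoped: a manager running
-- SELECT * FROM employees; sees only their direct reports.
```

Payroll staff would get a second policy with a different predicate on the same table; PostgreSQL ORs applicable policies together for a given role.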
The random vendors that can't be taught, we usually hand them a nightly SQLite dump instead. I.e. let them pay the CPU cost of their crappy SQL.
Around client schema compatibility: this happens with other models too (APIs, etc.); it's not unique to PG or SQL databases. You have to plan for it. Since almost all of our users interact with views and not with the actual underlying tables, it's not usually that big of a deal. In the extreme cases, where we can't just keep around a view for them, we have to help them along (sometimes kicking and screaming) into a new version.
0: https://www.postgresql.org/docs/current/runtime-config-clien...
Obviously it's not a silver bullet, and the isolation can be confusing when debugging, but generally a single point for applying RBAC is a feature, not a shortcoming. The next level of security might be how you define your roles.
I actually believe the simplest, most secure client scenario is physical isolation, where you give the user/consumer only the data they are allowed to use and then don't try to control it (someone mentioned this above, using Parquet & DuckDB). There are downsides here too: it doesn't work for write scenarios, can be resource intensive or time delayed, doesn't handle chain of custody well, etc. You typically have two strategies:
1. pick the best approach for the specific situation.
2. pick your one tool as your hammer and be a d!ck about it.
You can code it yourself in your bespoke app, have your vendor maintain it with their bespoke access control, or let RLS do it. There aren't really any other options that I'm aware of.
Personally, having done the "code it yourself in your bespoke app" it's a PITA and it's generally not nearly as good as RLS. That's what we did before RLS and it sucked.
On top of that, you can do things like SSO, data encryption, etc., but those are not data access layers; those are different layers. We do these things too (though very little of the data encryption part, since it's such a PITA to make work reliably, even with Vault/OpenBao holding the encryption keys for us).
I have. I gave the example of using RLS where users still provide the token to gain RLS privileges but an app brokers and constrains the connections. I have also given the example of encryption, to which your response is that encryption is hard, which I don't think is true but doesn't really change anything. Encryption is absolutely a data access layer control.
> every employee can access our main financial/back office SQL database
This means that there is no access gate other than RLS, which includes financial data. That is a lot of pressure on one control.
RLS has been around a long time and is very stable and doesn't change much. SSO providers keep adding stuff ALL the time, and they regularly have issues. PG RLS is very boring in comparison.
I don't remember the last CVE or outage we had with PG that broke stuff. I can't remember a single instance of RLS causing us access control problems on a wide scale. Since we tied their job(s) to their access control many years ago, it's very rare that we even have the random fat-fingered access control issue for a single user anymore either. I think the last one was a year ago?
Some do, which is why they want MFA on the target side as well as on their SSO. But yes, SSO is very scary and there's a ton of security pressure on it. I don't think that's a very good argument for why we should think that every system should only require one layer of defense.
I'm going to sort of skip over any comparison to SSO since I'm not going to defend the position of "SSO is fine as a single barrier", especially as SSO is rarely implemented with one policy - there's device attestation, 2FA, etc.
> RLS has been around a long time and is very stable and doesn't change much.
RLS is great, I'm a fan.
> I don't remember the last CVE or outage we had with PG that broke stuff.
It doesn't really matter. The fact is that you're one CVE away from every employee having access to arbitrary data, including financial data. I feel a bit like a broken record saying this.
Sure, but it's the same with pretty much any other app architecture.
Either your app has all the data access and you put your access control there, or you do the access control in the database. There really aren't other options here. There isn't access control defense in depth here. The best you can really do is do some data encryption of the data in the tables. We do some of that, but it's such a PITA that we relegate it to special stuff only.
> especially as SSO is rarely implemented with one policy - there's device attestation, 2FA, etc.
Sure, but ALL of that relies on the SSO system behaving as advertised, so you think of it as separate policies, but it really isn't. It's one SSO CVE away from giving away the store. We use SSO with PG, that's how they authenticate to PG; we are fans of SSO too.
That's odd, I just clearly delineated an option in which this is not the case. The DB enforces RLS, users provide the RLS token, and an app gates access to the db.
It's not really any different than using pgbouncer or something similar. All it buys you is not having to use the PG protocol on the client.
There is no extra security here though. One could even argue you lose security here, since now you have to rely on the app to not get tokens confused, since they hold and use the tokens on behalf of the user. A single bad CVE in the app and one can become any user the app currently knows about.
Most companies, or at least the ones I've worked at, don't use row level security at all. Instead, the application just accesses the multi tenant database.
It's absolutely littered with broken access control vulnerabilities. You have to remember to put the user key and group in every query across the entire application. And then there are dynamic queries and ORMs, which make everything worse. Now you can't even audit the source code; you have to observe behavior.
Most people don't know their applications have these vulnerabilities, but they're very common.
Every user gets their own role in PG, so the rest of the PG access control system is also used.
We have your normal SSO system(Azure) and if Tootie employee doesn't need access to Asset Control, they don't get any access to the asset schema for instance.
What would be your method?
You would have some app that your dev team runs that handles access control, so your app gets unrestricted access to the DB. Now your app is the single boundary, and it forces everyone to go through it. How is that better? It also complicates your queries with a ton of extra WHERE conditions.
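The two approaches can be sketched side by side; the table, column, and setting names below are hypothetical, not from either comment:

```sql
-- App-enforced filtering: every query in the entire codebase must
-- remember to carry the tenant predicate.
SELECT * FROM invoices WHERE tenant_id = $1;

-- RLS-enforced: the predicate is declared once on the table
-- (app.tenant_id is a custom setting the app sets per session).
ALTER TABLE invoices ENABLE ROW LEVEL SECURITY;
CREATE POLICY tenant_isolation ON invoices
    USING (tenant_id = current_setting('app.tenant_id')::bigint);

-- Plain queries are now scoped automatically:
SELECT * FROM invoices;
```

The failure modes differ: forget the WHERE clause once and you leak data; forget to set the RLS setting and the query returns nothing, which fails closed.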
A bunch of bespoke access control code you hope is reliable, versus a feature of the database that's well tested and has been around for a long time. pgtap[0] is amazing for ensuring our access control (and the rest of the DB) works.
If some random utility wants to access data, you either have to do something special access-wise, or have it also go through your app (let's hope you have an API and it allows for whatever the special case is). For us, that random utility gets SQL access just like everyone else. It gets RLS applied, etc. It can be naive and assume it has total control, because when it does SELECT * FROM employees; it gets access to only the employee columns and rows we want that utility to have.
We have a bunch of tools accumulated over the decades that need access to various bits of our data for reason(s). Rather than make them all do wacky stuff with specialized APIs, they just get bog-standard PG SQL. We don't have to train vendor Tito how to deal with our stuff; we just hand them their auth info for PG and they can go to town. When people want Excel spreadsheets, they just launch Excel, do a data query, and their data shows up magically, all from within Excel, using the standard Excel data query tools, no SQL needed.
I don't know, because I don't know your use case. At minimum, direct DB access means that every Postgres CVE is something I'd have to consider deeply. Even just gating access behind an API, where the API is the one that gets the role or accepts some sort of token, etc., would make me feel more comfortable.
> Now your app is the single boundary,
No, the app would still use RLS.
I'm not saying what you're doing is bad, but as described I'd be pretty uncomfortable with that deployment model.
I don't think you thought this through? The problem with the app being constrained to RLS is that you have User A and User B accessing your API; how do you get them access to the different data they need? It means the RLS is very wide open, since it needs to be able to see what both User A and User B can see. This forces your app to be the single boundary in pretty much all cases. Sure, maybe you can give it a role where it has limited DDL rights (i.e. no CREATE TABLE access or whatever).
> At minimum, direct db access means that every postgres CVE something I'd have to consider deeply.
I mean, not really, in practice? Most are just denial-of-service-type bugs, not instant exploits. Most of the DoS issues are not that big of a deal for us. They could affect us, but 99.9% of the time they don't in reality before we upgrade. RLS has been in PG for a good many years; it's quite stable. Sure, we upgrade PostgreSQL regularly, but you should do that anyway, regardless of RLS usage.
Well I'm not designing some arbitrary system. Don't expect a full spec.
> The problem with the app being constrained to RLS is you have User A and User B accessing your API, how do you get them access to the different data they need?
You can still have users provide the access to the service (ie: the password to get to the role), or otherwise map between User A and the role, etc. The service just brokers and constrains access.
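One common way to wire up that brokering, sketched with a hypothetical app.user_id setting that the service derives from the user's token (names are illustrative, not from the thread):

```sql
-- The policy reads a per-transaction setting instead of current_user:
CREATE POLICY user_rows ON employees
    USING (owner_id = current_setting('app.user_id')::int);

-- The service holds one pooled connection but scopes each request:
BEGIN;
SET LOCAL app.user_id = '42';  -- taken from the authenticated token
SELECT * FROM employees;       -- RLS filters to user 42's rows
COMMIT;
```

SET LOCAL evaporates at COMMIT, so a pooled connection can't accidentally carry one user's identity into the next request — though, as noted below, a bug in the broker setting the wrong value is exactly the new risk being debated.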
> Sure maybe you can give it a role where it has limited DDL rights(i.e not create table access or whatever).
Yes, of course. Just as you would with users.
> I mean, not really, in practice?
I don't think it's contentious to say that if RLS is your only security boundary then all the pressure is on that one boundary. How could it be any other way? If you want to say "it's an extremely good boundary", okay. There have been relevant vulnerabilities, though, and I don't think we should expect zero vulnerabilities in RLS in the future, such that every employee having access to a DB containing financial data is fine. The point of layering is to avoid having to put all the pressure on this one thing.
I don't even understand how this is contentious or confusing. If you have one boundary, you have one boundary. I'm suggesting that I'm uncomfortable with systems having one boundary.
We trust that Amazon or Google or Microsoft are successful in protecting customer data for example. We trust that when you log into your bank account the money you see is yours, and when you deposit it we trust that the money goes into your account. But it's all just mostly logical separation.
Right but ideally more than one.
> But it's all just mostly logical separation.
Yes, ideally multiple layers of this. You don't all share one RDS instance and then get row level security.
We all know that authentication should have multiple factors. But that's a different problem. Fundamentally, at the point you're reading or writing data, you're asking the question "does X have permission to read/write Y".
I don't see what you're getting at.
Encryption is an extremely powerful measure for this use case. If the data does not need to be indexed, you could literally take over the database process entirely and still not have access, it definitely doesn't rely on the permission model of the db because the keys would be brokered elsewhere.
We require SSO (Azure via Vault) to authenticate to the DB. We also don't expose PostgreSQL to the public internet. We aren't complete monsters :)
> Granting direct access to a database is a pretty scary thing.
For you maybe, because you were taught it's scary or it just seems different? I dunno. I'm very surprised with all the pushback about it being a single layer. Every other data access architecture will be a single layer too, it just can be made to look like it isn't. Or people think their bespoke access control system will be better because they have more control. Our experience taught us that's just bad thinking.
We've been doing direct access to PostgreSQL since 1993 without many issues. Though RLS is "recent" in terms of deployment (it came about in PG 9.5). Before that we had a bespoke solution (written with lots of views and some C/plpgsql code; it was slow and kind of sucked). RLS was a little buggy when it was first released, but within a year or so it was reliable, and we moved everything over as quickly as we could and haven't looked back.
> Encryption is an extremely powerful measure for this use case.
We do this with some data in some tables, but it's a PITA to do right, so its use is quite limited. We use HashiCorp Vault (now OpenBao) to hold the encryption/decryption keys.
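For reference, the in-table encryption approach looks roughly like this with pgcrypto (the table and the :'vault_key' psql variable are hypothetical; the key is fetched from Vault/OpenBao at runtime and never stored in the database):

```sql
CREATE EXTENSION IF NOT EXISTS pgcrypto;

-- ssn_enc is bytea; the key is supplied per-statement by the client.
INSERT INTO payroll_secrets (employee_id, ssn_enc)
VALUES (42, pgp_sym_encrypt('123-45-6789', :'vault_key'));

SELECT pgp_sym_decrypt(ssn_enc, :'vault_key')
  FROM payroll_secrets
 WHERE employee_id = 42;
```

The PITA part is real: encrypted columns can't be indexed or searched normally, and key rotation means re-encrypting every row.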
> For you maybe, because you were taught it's scary or it just seems different?
Over a decade in computer security and software engineering. Nothing I'm saying is contentious. For some reason when I say "Having one boundary is bad" you say "There's only ever one boundary", which... is not true.
I didn't use the in-browser WASM, but I did expose an API endpoint that passed data exploration queries directly to the backend, like a knock-off of what New Relic does. I also use that same endpoint for all the graphs and metrics in the UI. I just filtered out the write/delete statements in a rudimentary way.
DuckDB is phenomenal tech, and I love to use it with data ponds instead of data lakes, although it is very capable with large sets as well.
And "data pond"? Glad I am not alone using this term! Somewhere between a data lake and warehouse - still unstructured but not _everything_ in one place. For instance, if I have a multi-tenant app I might choose to have a duckdb setup for each customer with pre-filtered data living alongside some global unstructured data.
Maybe there's already a term that covers this but I like the imagery of the metaphor... "smaller, multiple data but same idea as the big one".
We’re about to introduce alerts where users can write their own TRQL queries and then define alerts from them. That requires evaluating them regularly, so the data effectively needs to be continuously up to date.
Quadrillions? Yeah, go find yourself a Trino/Spark pipeline.
> Why call it DuckDB?
> Ducks are amazing animals. They can fly, walk and swim. They can also live off pretty much everything. They are quite resilient to environmental challenges. A duck's song will bring people back from the dead and inspires database research. They are thus the perfect mascot for a versatile and resilient data management system.
Just to clarify, the data is prepared when the user (agent) analytics session starts. Right now it takes 5-10s, which means it's typically ready well before the agent has actually determined it needs to run any queries. I think for larger volumes, pg_duckdb would allow this to scale to tens of millions of rows pretty efficiently.
Reason 4 is probably an improvement, but could probably be done with CH functions.
The problem with custom DSLs like this is that they trade off a massive ecosystem for very little benefit.
Agreed with the ecosystem cons getting much heavier as you move outside the product surface area.
First I need to learn a new (even easy & familiar) language, second I need to be aware of what's proprietary & locks me to the vendor platform. I'd suspect they see the second as a benefit they get IF they can convince people to accept the first.
An application that exposes a curated dataset through a SQL-like interface - the dashboard/analytics query case described here - is where I think this approach has value. You actually don't want to expose raw tables, INFORMATION_SCHEMA, etc.; you're offering a dedicated query language on top of a higher-level data product, and you might as well take the best of SQL and leave the bits you don't need. (You're not offering a database as a service; you're offering data as a service.)
The main advantages of a DSL are you can expose a nicer interface to users (table names, columns, virtual columns, automatic joins, query optimization).
We very intentionally kept the syntax as close to regular ClickHouse as possible but added some functions.
This sounds solvable with clickhouse views?
> automatic joins
Is this also not solvable with views? Also, clickhouse heavily discourages joins so I wonder how often this winds up being beneficial? For us, we only ever join against tenant metadata (i.e. resolving ID to name)
> query optimization
This sounds potentially interesting - clickhouse's query optimizer is not great IME, but it's definitely getting better
Row-level access control, resource quotas, scheduling policies, session settings, etc. could all have been used in concert to achieve a very similar outcome with a dozen or so DDL/DCL statements.
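A sketch of what that dozen or so statements might look like in ClickHouse DDL/DCL (the table, role, and limit values are hypothetical):

```sql
-- Row-level access: each tenant role sees only its own rows.
CREATE ROW POLICY tenant_rp ON events
    FOR SELECT USING tenant = currentUser() TO tenant_role;

-- Resource quotas over a time window:
CREATE QUOTA tenant_quota
    FOR INTERVAL 1 hour MAX queries = 1000, read_rows = 1000000000
    TO tenant_role;

-- Session settings pinned via a profile:
CREATE SETTINGS PROFILE tenant_limits
    SETTINGS max_execution_time = 30, max_memory_usage = 10000000000
    TO tenant_role;
```

This covers isolation and noisy-neighbor limits with built-in features, though it doesn't give you the friendlier query surface a DSL can offer.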
The article also mentioned that they isolate by project_id. That implies one customer (assume a business) can isolate permissions more granularly.
Multi-database is generally more expensive, but it is a more brain-dead, guaranteed way to ensure users are properly segregated, and it's resilient across cloud/database/etc. software releases that may regress something in a multi-tenant setup.
With multi-tenant, you always run the risk of a software update, misconfiguration, or operational error exposing the existence of other users / their metadata / their data / their usage / etc. You also have much more of a challenge engineering for resource contention.
That way when a customer leaves or they want a backup copy of their data, it's a rm <customer>.sqlite3 or .backup away.
Sometimes you can't do that for various (almost always non-technical) reason(s), but it's always my 1st choice.
The DSL approach has other advantages too: like rewriting queries to not expose underlying tables, doing automatic performance optimizations…
It’s a SQL dialect that compiles to the real database’s SQL based on configuration.
For query operations I would try to find a way to solve this with tools like S3 and SQLite. There are a few VFS implementations for S3 and other CDNs.
We (https://prequel.co) recently started offering this as a white-labeled capability, so anyone can offer it without building it themselves. It's a newer capability of our export product: instead of sending the data to the tenant's data warehouse, we enable you to provision an S3/GCS/ABS/etc. bucket with the data formatted. Credential management, analytics, etc. are all batteries-included, so you don't have to build that either. The initial interest from our customers was around BI integrations, but agent use is starting to pick up, which is kind of interesting to see.
We use it (I’m the author of the article) so users can search every run they do and graph all sorts of metrics.