I’ve been using Hasura and PostgREST for a few years now on really big production apps, in enterprise and at startups, and honestly the only problem with them is that backend engineers feel threatened.
They are great products that cover 95% of what a CRUD API does without hacks. They’re great tools in the hands of engineers too.
To me it’s not about vibe coding or AI. It’s that it’s pointless to reinvent the wheel for every single CRUD backend yet again.
Experienced backend dev here who also uses Hasura for work at a successful small business. I think it's great at getting a prototype to production, and it solves real business problems in a way a solo dev can manage alone. As engineer #2, though, it's a mess, and it doesn't seem like a viable long-term strategy.
I've only worked with Hasura, but I can say it's an insecure nightmare that forces anti-patterns. Your entire schema is exposed. Business logic gets pushed into your front end because where else do you run it unless you make an API wrapper. Likewise you can't easily customize your API without building an API on top of your API. You're doing weird extra network hops if you have other services that need the data but can't safely access it directly. You're pushed into fake open source where you can't always run the software independently. Who knows what will happen when the VC backers demand returns, or when the company decides the version you're on isn't worth maintaining compared to their radically different but more lucrative next version.
I think the people who write this off as "backend engineers feel threatened" aren't taking the time to understand the arguments they're hearing.
"Business logic gets pushed into your front end because where else do you run it unless you make an API wrapper."
Exactly. This is one of the things I never understood about Supabase's messaging: The highly-touted, auto-generated "RESTful API" to your database seems pointless. Why would I hard-code query logic into my client application? If my DB structure changes, I have to force new app versions on every platform because I didn't insulate back-end changes with an API.
> If my DB structure changes, I have to force new app versions on every platform because I didn't insulate back-end changes with an API.
To avoid the above problem, it's standard practice in PostgREST to only expose a schema consisting of views and functions. That lets you shield the applications from table changes and achieve "logical data independence" (a minimal sketch follows the list below).
1. If your function returns a table type, you can reuse all the filters that PostgREST offers on regular tables or views [1].
2. The SQL code will be much more concise (and performant, which means less maintenance work) than equivalent code in a backend programming language.
3. The need for migrations is a common complaint, but you can treat SQL as regular code and version control it. Supabase recently released some tooling [2] that helps with this.
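To make that concrete, here's a minimal sketch of the pattern, assuming the base tables live in a private schema and PostgREST is pointed at an api schema (all names here are illustrative):

    -- Expose only views and functions; base tables stay private.
    create schema api;

    -- The view shields clients from renames and refactors in the base table.
    create view api.users as
      select id, display_name, bio
      from private.users;

    -- A function returning a table type, so PostgREST's regular filters
    -- and ordering still apply to its result (point 1 above).
    create function api.search_users(term text)
      returns setof api.users
      language sql
      stable
    as $$
      select * from api.users
      where display_name ilike '%' || term || '%';
    $$;

Clients keep calling /users and /rpc/search_users?term=ann while you restructure private.users underneath.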
Nobody but you is forcing you to put the “business logic” in the frontend.
Both those techs might make this look convenient, but engineering rules must still be followed.
Frontend should do validation, and might duplicate some logic to avoid round-trips… but anything involving security, or that must be tamper-proof, must stay on the server, or where possible be protected by permissions.
There are whole classes of applications that can be hosted almost entirely by Supabase or Hasura. If yours isn’t, it doesn’t mean you should force it.
Who said anything about forcing? I asked what the value of Supabase's most highly-touted features are, when they CATER TO the movement of such things as query logic to the front end. What else are you doing with an auto-generated RESTful HTTP "API" to the database?
I also didn't mention security, let alone promote moving it to the front end.
PostgREST creates the same type of CRUD endpoint that one would create when writing a traditional backend with an (e.g.) MVC framework, and it does this without requiring a developer and with complete consistency.
If by "letting the client formulate queries" you mean "filter posts by DidYaWipe, sorted by date", that is also what traditional CRUD backends do.
I wouldn't write a back end with an MVC framework, since it's not doing any presentation whatsoever.
If PostgREST auto-generates three-table joins automatically to resolve many-to-many relationships and presents an appropriate endpoint, that's interesting.
As a long-time Hasura stan, I can't agree with this in any way.
> Your entire schema is exposed
In what sense? All queries to the DB go through Hasura's API; there is no direct DB access. Roles are incredibly easy to set up and limit access with. Auth is easy to configure.
If you're really upset about this direct access, you can just hide the GQL endpoint and put REST endpoints that execute GQL queries in front of Hasura.
> Business logic gets pushed into your front end because where else do you run it unless you make an API wrapper
> Likewise you can't easily customize your API without building an API on top of your API. You're doing weird extra network hops
... How is an API that queries Hasura via GQL any different than an API that queries PG via SQL? Put your business logic in an API. Separating direct data access from API endpoints is a long-since solved problem.
Colocating Hasura and PG or Hasura and your API makes these network hops trivial.
Since Hasura also manages roles and access control, these "extra hops" are big value adds.
> You're pushed into fake open source where you can't always run the software independently
... Are you implying they will scrub the internet of their docker images? I always self-host Hasura. Have for years.
> I think the people who write this off as "backend engineers feel threatened" aren't taking the time to understand the arguments they're hearing
I think your arguments pretty much sum up why people think it's just about backend engineers feeling threatened - your sole point with any merit is that there's one extra network leg, but in a microservices world that's generally completely inconsequential.
Backends are far messier (especially when built over time by a team), more expensive, and less flexible than a GraphQL or PostgREST API.
> I've only worked with Hasura, but I can say it's an insecure nightmare that forces anti-patterns
Writing backend code without knowing what you're doing is also an insecure nightmare that forces anti-patterns. All good engineering practices still need to apply to Hasura.
Nothing says that "everything must go through it". Use it for the parts it fits well, use a normal backend for the non-CRUD parts. This makes securing tables easier for both Hasura and PostgREST.
> Business logic gets pushed into your front end because where else do you run it unless you make an API wrapper. You're doing weird extra network hops if you have other services that need the data but can't safely access it directly
I'm gonna disagree a bit with the sibling post here. If you think that going through Hasura for everything isn't working: just don't.
This is 100% a self-imposed limitation. Hasura and PostgREST still allow you to have a separate backend that goes around them. There is nothing forbidding you from accessing the DB directly from another backend; it's no different from accessing the same database from two different classes. Keep the 100% CRUD part in Hasura/PostgREST, keep the fiddly bits in the backend.
The kind of dogma that says that everything must be built with those tools produces worse apps. You're describing it yourself.
> I think the people who write this off as "backend engineers feel threatened" aren't taking the time to understand the arguments they're hearing
I have heard the arguments, and all I hear is people complaining about how hard it is to shove round pieces into square holes. These tools can be used correctly, but just like anything else they have a sweet spot that you have to learn.
Once again: "use right tool for the job" doesn't mean you can only use a single tool in your project.
I've only played with these kinds of plug-and-play databases, but mixing and matching seems like the worst of both worlds. The plug-and-play is gone, because some things might be in API 1, some others in API 2, and maybe worst of all, their domains might overlap. So you need to know that the "boring" changes happen via PostgREST, but the fancier ones via some custom API. The APIs will probably also drift apart in small ways, making everything even more error-prone.
What you say is also true for situations where you use an ORM alongside raw queries, or a direct MVC approach alongside business-service libraries, both of which are common in backend apps. Or even having two different sets of APIs.
What sounds like the worst of both worlds to me is forcing Supabase/Hasura to do what it isn't good at, or forcing a traditional backend to do the same thing those tools can do but at 10x the time and cost.
My experience was super positive and saved a lot of coding and testing time. The generated APIs are consistent and performant. When they didn't apply, I was still able to use a separate endpoint successfully.
I like PostgREST for some of its use cases (mostly views), but the issue I have with it is that I don't often want a user to have direct access to the database, even if it's limited to their own data.
Mike can edit his name and his bio. He shouldn't be able to edit some karma metric that he's got view access to but no write access to. That's fine; I can introduce an RLS policy (plus column-level grants) to control this.
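Something like this, to be concrete (assuming a users table and Supabase's auth.uid() helper; with plain PostgREST you'd read the user id out of the JWT claims instead):

    -- Mike can read everything, including karma, but can only write
    -- name and bio, and only on his own row.
    grant select on users to authenticated;
    grant update (name, bio) on users to authenticated;

    alter table users enable row level security;

    create policy users_read on users
      for select using (true);

    create policy users_update_own on users
      for update
      using (id = auth.uid())        -- auth.uid() is Supabase-specific
      with check (id = auth.uid());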
Now Mike wants to edit his e-mail. I need to send a confirmation e-mail to make sure the address is valid, but at this point I can't protect the integrity of the database with RLS, because the e-mail/receipt/confirm loop lives outside the database entirely. I can attach webhooks for this and use pg_net, but I'd quickly have a lot of triggers firing webhooks inside my database, and then most of my business logic is trapped in SQL, at the mercy of how far pg_net will scale as triggers multiply on a growing database.
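The webhook pattern I mean looks roughly like this (assuming Supabase's pg_net extension; the endpoint URL is made up):

    -- Fire a webhook whenever a user's e-mail changes, so something
    -- outside the database can run the confirmation loop.
    create function notify_email_change() returns trigger
      language plpgsql
    as $$
    begin
      perform net.http_post(
        url  := 'https://example.com/hooks/send-confirmation',  -- hypothetical endpoint
        body := jsonb_build_object('user_id', new.id, 'new_email', new.email)
      );
      return new;
    end;
    $$;

    create trigger on_email_change
      after update of email on users
      for each row
      when (old.email is distinct from new.email)
      execute function notify_email_change();

Multiply that by every flow that needs a side effect and you can see where the trigger sprawl comes from.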
Even for simple CRUD apps, there's so much else happening outside of the database that makes this get really gnarly really fast.
> I need to send a confirmation e-mail to make sure the address is valid, but at this point I can't protect the integrity of the database with RLS, because the e-mail/receipt/confirm loop lives outside the database entirely
Congratulations: that's not basic CRUD anymore, so you ran into the 5% of cases not covered by an automatic CRUD API.
And I don't see what the dilemma is here. Just use a normal endpoint for that flow. Keep using PostgREST everywhere else to save time.
You don't have to throw the baby out with the bathwater just because it doesn't cover 5% of cases the way you want.
It's a rite of passage to realize that "use the right tool for the job" means you can use two tools at the same time for the same project. There are nails and screws. You can use a hammer and a screwdriver at the same time.
>You can use a hammer and a screwdriver at the same time
How do you balance the nail and screw? I'm serious, I'm trying to picture this, hammer in one hand, screwdriver in the other, and the problem I see here is the nail and screw need to be set first, which implies I can't completely use them both at the same time.
Perhaps my brain is too literal here, but I can't figure out how to do this without starting with one or the other first.
I'm going to answer this using Firebase, which Supabase is supposed to be a copy of.
There are 2 parts to using Firebase: the client SDK and the admin SDK.
The client SDK is what's loaded in the front end and used for 95% of use cases like what u/whstl mentions.
The admin SDK can't be used in the browser. It's server-only, and it's what you can use inside a custom REST API. In your use case, the email-verification loop has to happen on a backend somewhere. That backend could be a simple AWS Lambda that only spins up when it gets such a verification request.
You're now using a hammer for the front end and a screwdriver for the finer details.
Some projects require nails, others require screws, some might require both.
Instead of hammering screws (or in this case reinventing a screwdriver), just use an existing screwdriver. That’s what I mean: don’t reinvent the solved problem of CRUD endpoints where they apply. Nothing says you can’t use two techs per project.
Where in my message does it say or imply that you should “hard-code queries in your client application”?
EDIT: What I’m advocating here is the opposite: use those tools for CRUD so that your frontend looks exactly the same as a frontend with a regular backend would. If the tool is not good for something (like the example), just use a regular endpoint in whatever backend language or framework. Don’t throw the baby (the 95%) out with the bathwater (the 5%).
By “just use a normal endpoint” I mean “write a normal backend for the necessary cases”.
I mean instead of doing a GET on an endpoint called userMessages with an ID parameter, you're formulating a join in the client between specific tables.
In PostgREST, if userMessages is a table in itself, you do get an endpoint called /userMessages.
If the table is called messages and you want to get messages from a user, you can just request something like /messages?user_id=eq.123. And if user_id must be your own user_id, you can just skip passing the parameter, thanks to RLS.
If userMessages is a join between two tables and you don't want the frontend to know about it, you can use a view, and PostgREST will expose the view as an endpoint.
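For instance (the table and column names are hypothetical):

    -- Hide the join behind a view; PostgREST exposes it at /user_messages,
    -- filterable like any table (/user_messages?sender=eq.alice).
    create view api.user_messages as
      select m.id, m.body, m.created_at, u.display_name as sender
      from messages m
      join users u on u.id = m.user_id;

    -- And with RLS on the base table, GET /messages returns only the
    -- caller's rows, no user_id parameter needed.
    alter table messages enable row level security;
    create policy own_messages on messages
      for select using (user_id = auth.uid());  -- Supabase-style; adjust for plain PostgREST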
Once again, there is no "need" to formulate joins in the frontend to reap the benefits of this tool.
I don't do anything close to "formulating a join in the client" with PostgREST and I still use it to its full extent, and it does save time.
EDIT: If one wants to formulate more complex joins in the frontend, then they probably want something like Hasura instead. Once again: complex queries in the frontend is BY NO MEANS mandatory, you can still use flat GraphQL queries and db views for complex queries. PostgREST OTOH is about keeping it simple.
Thanks for the reply. If your database is normalized to any degree and you have multi-way relationships, I don't really see significant payoff from the auto-generated API vs. writing traditional queries and endpoints.
I have used them too, and I would say that at least for Hasura, performance can be poor for the generated queries. You have to be careful. Especially since they gate metrics behind their enterprise offering.
This is the same for any GraphQL backend. And even REST backends can be misused: I've fixed way too many joins-in-the-frontend that were causing N+1 queries in lists.