Tau: Open-source PaaS – A self-hosted Vercel / Netlify / Cloudflare alternative (github.com/taubyte)
498 points by thunderbong on July 12, 2024 | 139 comments


Tau's ability to let you self-host platforms could be a huge boon to regulated industries because it avoids sending data out and exposing connections. Great idea, keep it up!

Right now, as I understand it, if you want to connect Vercel securely to a database with more than a password, you need to "contact sales" about "enterprise" (no self-service option for demos and MVPs).

Might be a tech issue, but IMHO needing to contact sales about enterprise-level deals just for basic security stuff is not the best move, since it forces people to either expose their stuff or wait around and pay a bunch of money.

Dunno about you guys, but I don't ever click "contact sales"; I just go to something else where my dev work isn't gated by salespeople (even if it's significantly more complicated). And I say this as a big proponent of Vercel: I wish I could use it more. But expecting users to wait around for sales to invoice them just to have a secure database connection is a dealbreaker for my use case, regardless of how much I like their stuff.

Sources

[1] https://github.com/orgs/vercel/discussions/42

[2] https://github.com/orgs/vercel/discussions/7323

[3] https://archive.ph/fQwMz


There are some neat ideas here -- building a PaaS on p2p technologies to get network autodiscovery, automated load-balancing, distributed storage, WebAssembly-native compute, etc.

Having put stuff through production, though, I'm a bit skeptical about how well this works out in the wild, and I'm interested in learning how well it does and what its failure modes are. If it works well enough, it has the potential to democratize production apps.

I'm not sure how they are going to make money with their enterprise offering though.


Scaling using p2p is something that some of the big Silicon Valley giants use internally. It scales very well. Tau uses a combination of techniques on top of the DHT to ensure response time is quick. Unfortunately, our documentation is not that great (working on it), but we have three services that accelerate discovery and routing:

- Seer: keeps tabs on nodes' health and also offers a DNS interface

- Gateway: a load balancer that forwards traffic to nodes it keeps tunnels with

- TNS: a replicated registry of resources (what is what)

Between Seer and Gateway, the system handles resiliency. Internally, CRDTs are used for replication, which:

- allows nodes to keep working offline or when the network gets partitioned

- prevents split-brain

The downside of CRDTs shows up when orchestrating services like databases: we could end up with multiple instances of a master node, for example. This is why we plan to add a cluster mechanism that will allow the formation of dynamic clusters to facilitate the orchestration of stateful containers and VMs.

For the money part, we have a managed offering, enterprise support, a web console, and are building more.


First coolify[1] and now Tau. The more competition, the better for users.

From a quick look it seems like coolify is more fully featured?

[1]: https://coolify.io/


Before Coolify came CapRover and dokku.

My toy server (64GB RAM, Ryzen 5600G, 60TB HDD, 4TB NVMe) is currently running Fedora/CapRover, though I've been considering just putting TrueNAS SCALE on it, since it added custom deployments as well...


I found CapRover and dokku to have some limitations, such as the lack of a GUI or a long setup process. Coolify, in contrast, Just Worked™. I'll have to check out Tau as well.


CapRover has a UI to install new apps or configure existing ones.


Installing Portainer on top of Caprover (available as a one-click template) fills in most gaps in my experience.


CapRover setup was kind of annoying last I used it.


It’s one Docker command and an NPM script. Not sure where you got hung up, but I’ve spun up several with simple Bash scripts. Glad you found something that works at any rate.


Love CapRover. Using it in production to host many Docker containers.


Same. I’ve contributed several app templates. It’s not terribly refined, and the creator took some flack for his approach to split licensing. But they are very responsive, and the platform is quite solid and easy to secure and adapt.


That's a beast server. What are you using it for?


It sounds bigger than it is; RAID 10 cuts it in half after all (30TB HDD, 2TB NVMe)

It's mostly just for the *arr stack, various self-hosted services like Vaultwarden, Seafile, etc., and my personal toy projects, e.g. a PWA book reader, along with the occasional dev tool I wanna experiment with.


What HDDs do you have for it to be 60TB in total?


6x6TB + 2x12TB


Then you have 24 TB with RAID10, not 30 TB. I guess that's why he was wondering.


Technically I've got 18tb raid 10 and 12TB raid 1, I just didn't think it mattered and I was too lazy to explain that on my phone ( • ‿ • )

The 6x6TB were previously in a Synology NAS (roughly 8yrs old now) and I was planning to slowly migrate everything over to 12TB disks as things start failing


Coolify needs a lot of work imho.


Do you mean it’s missing features or that it requires lots of maintenance and handholding?

I’m evaluating some of the options for toy projects so I’m curious to read people’s experiences.


It's still in beta (under active development), so for fairly serious projects it's not yet viable in production in my opinion. But I'm using it for two personal sites, and it works perfectly. It's exactly the kind of tool I was looking for: open source, self-hosting, easy to install, easy to use and well maintained.


IMO it adds a bunch of complexity (as these types of catch-all frontend solutions usually do) to a problem that can be easily solved by becoming proficient with Docker/Podman and spending a little bit of time reading the documentation for the services you want to run. It's a cool idea but unnecessary IMO. There are also a ton of people that like it, and I'm just one opinion.

I recommend you try it, but I also think you'll realize you could host that hobby project with half the hardware requirements and half the effort with something like docker-compose or swarm.


I was paying Coolify 5 dollars a month so I didn't have to worry about any of that.

And for that it worked remarkably well


I’ve been using docker-compose on a vm for half a decade. Works very well with a reverse proxy + letsencrypt.
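As a hedged sketch of that setup (all image names and the domain are placeholders, not a recommendation for any specific project), a compose file pairing an app with a Caddy reverse proxy that obtains and renews Let's Encrypt certificates automatically might look like:

```yaml
# docker-compose.yml — hypothetical example; "app" and example.com
# are placeholders. Caddy handles HTTPS certificates on its own.
services:
  app:
    image: ghcr.io/example/app:latest   # placeholder image
    restart: unless-stopped

  caddy:
    image: caddy:2
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - caddy_data:/data   # persists certificates across restarts
    # Reverse-proxy the public domain to the app's internal port.
    command: caddy reverse-proxy --from example.com --to app:8080

volumes:
  caddy_data:
```

The same pattern works with Traefik or nginx + certbot; Caddy just keeps the config shortest.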


Especially when scaling beyond one server. The next Tau release will include `spore-drive`, a tool for deploying and updating on thousands of servers with a single command.


Before coolify there was also exoframe[1]

[1]: https://github.com/exoframejs/exoframe


It's a different approach. Coolify is a container management platform with CI/CD integration. Tau is meant for developers, and Git is the only way to make changes. Moreover, Tau can handle compute, storage, pub-sub, and more. Note that, besides CI/CD, Tau cannot orchestrate containers currently.

So, being fully featured truly depends on your intended use case.


I've been reading through the docs and skimming through the one recent-ish YouTube tutorial, trying to make sense of what this actually is. While it seems like an impressive thing for what appears to be a one-man project, the almost complete lack of documentation makes it feel like a bit of a hard no in its current state. There also seems to be a history of it being heavily linked to Web3 things, which feels weird.

Some suggestions for this to be able to succeed:

- Documentation, documentation, documentation. The only place where I could find that the three supported ways to write a serverless function are Go, Rust, and AssemblyScript is somewhere hidden in a tutorial. It all has to compile to WebAssembly, so I guess that's the limiting factor.

- Examples?

- Using git as source of truth for the configuration/state of a system is cool. Please link to sample repos so I can see what a system with a website, some functions that touch DB and files, and the configuration etc looks like.

- How does the database part work? Client SDKs?

- There are lots of protocols with unclear names that are only briefly mentioned here but then seen in random places in configuration: https://tau.how/01-getting-started/01-local-cloud/#protocols

- The Concepts part of the documentation is buzzword soup, it's impossible to derive any meaning from it other than that the author dislikes Kubernetes and probably used some generative AI for the content.

- Roadmap, plans, versioning, plans on how Tau version upgrades should go, ...


Great feedback. We're working on some of these.


Looks compelling, but the docs are extremely vague and full of fluff. The "Why One Binary" is hilariously bad. Almost feels like content to impress managers/recruiters.

https://tau.how/02-concepts/03-one-binary/#the-genesis-of-ta...


Pretty much all of the docs are very clearly the output of what the typical LLM produces. It's just words, no meaning.


Nothing like “docs by LLM” to warn you about the quality of the code you’re about to trust your entire infrastructure to.


Starting off with "In the realm of..." instantly gave it away



Ouch. This is llm output.


Agree the author should not lead with THAT copy! If you dig down though (I did) there’s quite a bit of technical documentation that I found interesting. “Different” for sure.


I thought you were probably exaggerating… you were not.


But, but,...

"By emphasizing ease and simplification, Taubyte aspires to transform cloud computing into a catalyst for creativity and innovation"


sorry about that. it's true our doc is so-so... we're working on it.


"Taubyte's single binary philosophy advocates for a future where the full potential of cloud computing is unlocked through its accessibility and efficiency"


Isn't the entire point of vercel/netlify/cloudflare that you *don't* have to self-host? The issue is the price of it, not the actual software.

Waking up to a 10k Vercel bill is pretty common, especially when a DDoS goes undetected. That 10k bill is roughly a $50 dedi from Hetzner, but the problem is that you need a distributed system. For that you need something more advanced than tau, let's say Kubernetes; then you need multi-site storage, so Ceph; and then you realize you need a degree in OpenSSH and BlueStore to continue, at which point the hassle makes you just hire a sysops employee that costs 10k a month and spend $1000+/month on hardware for geo-distribution.

Take this from personal experience: I've seen someone go k8s with very little experience, and their general takeaway was that they just wanted to go with "managed" hosting instead.

Still better than a 10k bill once your app becomes large enough, but it's simply not something devs that just want to get something out there want to bother with. And in the end, the insane hosting costs are tiny compared to the revenue users bring in: a $10/month service user only racks up around $1 of API usage a month, though it heavily depends on the app.


> Isn't the entire point of vercel/netlify/cloudflare that you don't have to self-host? The issue is the price of it, not the actual software.

There's also a third way, which we're trying to do at stacktape[1].

We've built a PaaS platform on top of AWS, running in your own account. So you get all of the stability, flexibility and reliability of AWS, yet the deployment process is easy as using something like Heroku.

Also, compared to Vercel, the pricing is just a % on top of AWS fees, not a sudden $10k bill or Netlify's $550/TB egress costs.

[1]: https://stacktape.com


> Isn't the entire point of vercel/netlify/cloudflare that you don't have to self-host?

No. There are two pieces to those platforms. The first is a platform that supports git commit as a deploy method out of the box. That’s the big one.

The second is auto scaling. That’s where not having to self host is actually a big deal. But that’s also where the bills come. A lot of smaller builders are right now looking to have the same deploy experience but on their own cheap hetzner/DO server without crazy bandwidth and scaling bills that they can get hit with the moment they let their guard down.

A decent sized player in this field right now is Coolify. They offer a hosted version of their PaaS but without the servers. So the PaaS part itself, coolify, is managed by them but it deploys to hardware that you control. The existence and usage of this plan is evidence of the needs of this market imo.


The management when something goes wrong (take this from experience) is very time consuming, especially if you lack experience and/or dedicated staff.

Git history deployments is a simple k8s controller, pretty sure there's a helm chart for that.

Autoscaling is what I mean by kubernetes so yes totally agree.

Coolify seems pretty neat. Still has the overhead of management when dealing with clustering and multi-site.


Some good points to discuss here.

Firstly, the git history managed via a controller/helm chart: that's sufficiently complex. The mindset of k8s/cloud-native doesn't translate easily from the pet VPS control server, which comes with its own disk and persistence. So conceptually a management layer like Coolify is objectively easier.

But that’s a nit really.

I think the idea of management being time consuming is more interesting. It’s true. And I think it’s true no matter what you do.

Time consuming management applies to any sufficiently complex infrastructure or team. No matter what you have these questions to answer.

How is access managed?

How does debugging a broken build work?

How does secret and config management work?

How does disaster recovery work?

If you are storing config as code, how are you managing deployment of that?

If you use k8s, how do you manage feature deprecation across versions? Even a managed version won’t help you resolve having to move from some kind of resource/v1beta to resource/v1.

I don't say this as a slam dunk against anything. I think there are different levels of comfort with certain paradigms depending on what you've been exposed to over your lifetime. And the solution that feels most convenient to you is what you'll want to work with. And for each type of preference we are going to see different solutions. All of which will be time consuming to manage in their own ways at a sufficient level of complexity or scale.

Basically I prefer talking in these terms because it shifts the conversation away from broad comparisons to something more tangible, which is "where does the time complexity lie for this particular approach". Teams can put that down on paper and decide which one is more palatable and then go with that.


You're right on all of these. I guess you can compare this to comfort food.

I definitely do have the experience when it comes to geo distribution and multi-site deployments, but it's something that took me 2 years to build up to and I genuinely think that it's also not something most people have patience for or freedom for. I am lucky that I can afford to experiment with all of these things on my personal cluster and then turn that into something I deploy when dealing with customers.


The point of the first two seems to be to host static files at the least, and in the case of cloudflare, cache existing hosting for the most part.


talking about cloudflare workers here


Geo distribution is something that everyone that can, should avoid. It just scales up the problems.


I most definitely agree having deployed my own nameservers with geodns and multi-site storage clusters.


This is really neat! I'm working on a message queue in Go (drop-in replacement for SQS) and also thinking about autoscaling. I've been playing with Raft vs. using a central store (e.g., Postgres) to coordinate.

Can you tell me more about IPFS - I've never used it before. How has that been working, and can you tell me what you've observed when you have many nodes which need to coordinate?


IPFS is slow and impractical. The design of the content-addressable system is cool, but having tried approximately annually to run it for production usage since it was released, I can say that it is still firmly in the "research project" category, a decade later.


The weakest part of IPFS in my experience is how long it takes one node to find another with the requested data across the internet through the public DHTs. I imagine it might work much better in this system if they're limiting it to only do lookups and fetches within your own network of nodes.


I am using NATS JetStream with Nex.

It's a self-hosted Cloudflare.

It uses NATS JetStream as a workaround for not having BGP/anycast, which is how Cloudflare does its magic.

https://github.com/synadia-io/nex

You have to run it on bare metal or any cloud that supports nested virtualisation.

I use NATS JetStream listening to git repo webhooks to deploy.


love NATS. tau is about the developer experience and speed of development. Integrating and maintaining nex is not something every dev can do


we don't use full IPFS, but more like a lite version. it allows us to retrieve data quickly, cache, and duplicate. check this article: https://blog.ipfs.tech/2020-02-14-improved-bitswap-for-conta...

When it comes to scale, IPFS's basic discovery mechanism (the DHT) does not do that well. This is why we have services like TNS (a replicated registry) that allow it to scale.


Lately I've stumbled upon https://www.goqite.com/ but haven't had a good use case for it yet.


Don't call this a Cloudflare alternative because it simply isn't.


Cloudflare Pages, I assumed from the title, since it already didn't make sense to compare Cloudflare (overall) to "Vercel / Netlify".


why?


very interesting... here is a comparison of the community and enterprise offerings

https://taubyte.com/pricing/

who is actually behind this?


> who is actually behind this?

"Samy Fodil"

https://github.com/samyfodil


If it’s a single binary why is there an installer, and why does it need to be curl-piped to sh? Looking at the installer script, why does it create directories in the root dir, instead of /opt or /usr/local? Also, I couldn’t find the install script in the linked repo.


the script downloads the latest release. it's still one binary you can deploy yourself or compile with `go build`. the script is really there to automate some steps. next release, a new tool `spore-drive` will let you deploy all your nodes by running one command.

for the `/tb` folder, I see your point. I'd love to discuss it further if you can open an issue on GitHub.


Would this be a good combo with Hetzner server auctions, or just too much trouble? Or even, with today's connections, having a server at home? Any success stories?


yes. the cool thing is that with tau, servers can be in different locations.

looking at https://www.hetzner.com/sb/:

- you can build a beefy cloud for less than €150/mo!

- plus you can run LLM inference (check https://github.com/ollama-cloud) for another €150/mo per node

not bad!


They only auction servers in one region


they have servers in many regions


But not dedicated ones


Is the advantage of this, that you only have to keep the one binary up to date, in contrast to e.g. a host with docker-compose and the containers therein?


yes. tau will take care of versioning your deployed code just like git does. internally even the registry is versioned using branches and commits.


I've been out of web dev for a few years, but my understanding of the appeal of serverless is that it is theoretically pay only for what you use. But if you're hosting Tau to do serverless via Tau, well, it's not really serverless anymore. You are now definitely paying for the server running the serverless infra.

Why would anyone target Tau serverless, then? What am I missing?


That’s not the only appeal of serverless. In fact it’s not even really true - I pay Vercel a flat rate every month whether I’m heavily using it or not.

The appeal of serverless for me is simplicity. It abstracts the server away. Less to think about, more brain capacity focused on unique business logic.


> The appeal of serverless for me is simplicity.

That's interesting, because serverless is far from simple and Vercel is about the same distance from simplicity as the sun is from the edge of the observable universe.


as cal85 said, The appeal of serverless for me is simplicity. that said, tau will support containers and even VMs by the end of the year


Am I wrong, or all this ends up doing is round-robin DNS, which is no good for a geographic CDN?


right now, that is true. Seer (the DNS service) returns all nodes that register with it, regardless of location. That said, we're working on adding advanced logic there that will:

- take location into consideration

- allow the execution of wasm code (we call these smartops) to define custom logic


Wouldn't using talos be better than having to run this on custom managed ubuntu servers?


Which Talos, can you send a link?


I assume https://www.talos.dev/

Basically a small OS that will prop itself up and allow you to create/adopt into a Kubernetes cluster. Seems to work well from my experience and pretty easy to get set up on.



why would that be?


I love the verbiage. When I see a folder in the source named "dream", then files named "Universe" and "multiverse", it pulls me in :).

I also love the single binary in Go. That's on my todo list for a few things.

Well done!


thanks!

dream allows you to run tau locally and even write E2E unit tests.

we always try to have cool names :)


how is this achieving scale-up and scale-to-zero? from my (rudimentary) investigation, only Knative on k8s had scale-to-zero implemented well enough.

and if you don't have scale-to-zero, you can't claim to be a Vercel alternative.


how does k8s implement scale-to-zero?


knative


neither the knative controller nor the underlying kubernetes scales down to zero.


What key features make Tau a compelling self-hosted PaaS alternative?


easy to deploy, built-in CI/CD, CDK and coming-up this month is spore-drive, a tool you can use to deploy and update tau on all your hosts with one command.


how does this compare to Coolify and CapRover, already established and mature PaaS? this is a welcome addition


it's developer-first, has built-in CI/CD, can do serverless, is multi-host & scales across locations, has a CDK, and more. granted, it does not orchestrate containers as of today; for compute, WebAssembly (wasm) is used. but a container runtime is on the road-map for this year.


Missing SHOW HN: ?


Self-hosted platform as a service?!

Isn't the whole point of platforms as a service (from the customer perspective) that you don't have to deal with the hassle of self-hosting?

There are pros and cons to using an external service and to self hosting. And just throwing all these words at me together makes me feel like there isn't a coherent mental model of what this is trying to be, or if there is it isn't clearly communicated.

If this is some sort of CDN software or attempt at running Lambda-like code Snippets on your own distributed cluster that's cool. But a description of that would be nice.

The GitHub readme jumps straight into how this is just a single binary and how deploying it is easy, but not what the hell it is. Cloudflare can do like a million things; which features from Cloudflare is this competing with? I just really want to know what the pros and cons of this are compared to other ways of rolling my own servers or renting out someone else's platform.


Replying just to “isn't a coherent mental model”:

I just took a deep dive into the documentation, which is comprehensive, and I feel the complete opposite: the author has a very well-refined mental model of a PaaS and has created a modularized source-code expression of that model that I found very interesting. The CI module is called "patrick", which gave me a chuckle.

You have some good feedback in your post, but I feel like the author may not be trying to replace hosted PaaS; they are essentially using TinyGo to build small distributable wasm modules that can interoperate in a distributed network, which aligns very closely with the local-first ethos that brings compute and storage out of the datacenter. Does it feel a bit silly to even SAY "client-side CI"? Objectively, yes, but if a future architecture might need to safely deliver code to clients in a mesh, this is a really interesting way to experiment with solutions.


I agree, the CI module name is not the best. While whiteboarding the concept with the team, this http://en.spongepedia.org/index.php?title=Hooky_%28Episode%2... episode of SpongeBob found its way into the discussion, so we called it `patrick`.


"Self-hostable as a service" is a common pattern because it provides the benefits of as-a-service without the threat of vendor lock-in. Plus you can run the same software in temporary or test environments.


It will be interesting to see how these companies evolve their business strategy once PE/VCs are pressuring them to IPO/get bought out. It seems like any customer that is large enough to have significant billing would just bring the platform in house instead of paying for the hosted version. I guess they could take the docker desktop approach with their licensing that >X million in revenue still requires a license of some sort.


They are using technologies such that the system can self-heal and self-deploy (with auto-discovery)... so there's a misalignment of incentives between the product and a hosting business.

I like the ideas they are trying, so I wish the best of luck to them. Hopefully, they'll find a business model that is better aligned with the product.


Maybe they’ll become consultants helping big customers use the platform to fund its development.


Where Vendor can also be an open source project. The cost of moving away from a project like Tau can be equally high as a closed source PaaS of course.


and if that cost of moving away is high enough, a team or org "locked in" to a FOSS solution can continue to pay humans to support it internally while evaluating off-ramps instead of being told they need to re-arch their cloud stack in three months' time.


Amazon and the hyperscalers will eventually pick something like this up and offer it for free.


Further down, the README explains how it uses libp2p for network autodiscovery, IPFS for distributed storage, and how it can distribute and share routes and assets and automate load-balancing. It is WebAssembly-native, so you don't have to mess with compiling dependencies or execution environments.

If it works as well as described, then the underlying technology (and the constraints they have chosen) allows it to be self-hosted while keeping some of the benefits of a managed platform.


I don't know if the author intended to be "local-first adjacent" with this project, but I am seeing a lot of wasm-target projects lately, including replicated databases, and I wonder if this project isn't a peek at what a truly distributed (browser-to-browser) workload might look like. This project persists the system config in GitHub, but if the components are wasm then there's at least a chance that they can provision themselves in every browser.

Imagine a workload where a client does their own compute, by provisioning worker components locally and retrieving only shared data from your systems - how much cheaper would your hosting costs be?!


Thanks for articulating that. I was groping around for something along those lines: going beyond easy self-hosting to local-first deployment. Considering the web3 origins, I wonder if the project founders had that in mind as well.

There was a different, recent HN post about scoped propagators, which I find to have a lot of good potential for people to write and apply local customizations to their own apps.

I don’t know if those are the killer use case for this, but I think ideas along these lines takes it further out of alignment with incentives for business models.


Thanks for digging in.

Would be nice if READMEs opened with what the thing is, plus a one- or two-sentence problem and solution description.


Sometimes, it's hard to tell how significant something is, and the creators may not even know until hindsight, let alone articulate it in a concise, accessible way.

The initial marketing word usage such as "amazing" put me off at first ("Show me, don't tell me"), as did how the author(s) poo-poo'ed Kubernetes. (I've worked on both good and bad usages of K8s, so it isn't always a fairy tale, nor always a bad ending.) However, it also read like someone with a deeper understanding of infra writing about this, not just a vapid reinvention by someone who works mostly on the front-end, so I kept going with the README.

Having said that, while I am a big fan of IPFS, I know there are performance issues with it. (Maybe Tau sets up a private IPFS that is only used within the cluster, which may help it work faster.) It also sounds like they are working on general container support, not just WebAssembly. Overall, if they keep iterating and improving based on how things work in production, they'll end up with a fairly robust system.


If it works as well as described (ie as well as can be expected with IPFS as a storage layer), it doesn't work.


We used to call those “turn-key solutions”.


The point of platform isn't just that someone else hosts it. It's that it's a consistent target for your teams & different projects, with a well defined set of capabilities.

Having patterns to deploy & ship software that can also bring up & manage other resources along the way (databases, load balancers, geo-replication, etc.).


While I think this is a valid take for the term "platform" I do think that "Platform as a Service" implies that someone else is running the platform and I don't have to deal with the headache of managing it, I just use it.


Yes, from the developer's perspective, someone else is running the platform (the platform team). (At least, as long as the developers can avoid assuming ownership of the platform by managing terraform files or something, which in practice I've yet to see anyone avoid...)


And this is true even if you're not using a product that bills itself "as a service", yet somehow we never called the *nix machines that programmers never touched, running a LAMP stack, "platform as a service". It's almost as if the "as a service" part meant as a service to your organization.


In a company, there may be multiple teams, each doing their own projects. A PaaS can be within the same company and provide a common way to do stuff without each team having to start from scratch each time.


Two large marketplaces in my country that I know of have their own self-hosted PaaS (I know some of their devs personally). They're microservice-driven and have many small teams. One of their devs showed me their platform, where a small team can launch a new microservice with a few simple configs/CLI commands without having to know anything about infrastructure (it has built-in monitoring, logging, scalability options, discovery, etc.: the full package). I guess it lowers costs because they have a lot of traffic. And they made it super easy to use for small teams with no cloud expertise. Faster deployments, too, because each small team manages their microservices on their own. Plus, no vendor lock-in; I can imagine the pain of migrating 3000 microservices off a vendor. However, I think for small companies a self-hosted PaaS is overkill. Those companies have dedicated teams who work solely on the PaaS.


Not sure of Tau's exact features but one nice thing about the Vercels and Netlifys is the integration with Github and easy CI /CD setup.

Having CI taken care of by just installing a single package on your own server is compelling.

The main things that they take care of that this probably can't on bare metal are redundancy, uptime, and scaling.
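To illustrate what "easy CI/CD setup" replaces: wiring this up yourself on a single server is roughly one workflow file. A hypothetical GitHub Actions sketch (the build and deploy commands are placeholders, not anything Tau-specific):

```yaml
# .github/workflows/deploy.yml -- hypothetical example
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build   # placeholder build step
      - run: ./scripts/deploy.sh       # placeholder: ship the build to your server
```

The hosted platforms collapse all of this (plus preview deploys and rollbacks) into "connect your repo".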


tau handles redundancy, high availability and scaling for you.


Exactly. If one actually has to self-host, it's best to work with the direct infrastructure (IaaS) layers and get their efficiency instead of going through the overhead of additional layers.

I don't know much about the NodeJS/React world, but to compare with PHP: is this the equivalent of self-hosting an open-source cPanel instead of creating a LAMP setup on a VPS and working with Linux and PHP directly?


When you have an infrastructure team at your company, that is basically what they do: turn IaaS or your bare metal into a PaaS. But yes, that is only useful at a certain scale of operations.

Maybe with tooling like the original post it will be useful at smaller scales too.


Self-hosted gmail
P2P netflix server
Bank account + bank combo
Mortgage
Hertz DIY concierge service
Fast food from scratch
Bootcamp taking bootcamp
Oxygen Farm


Nah, this is just Dokku 2, and I love Dokku. I think you're mistaking a software-component "service" for a business-model "service".


Yes, so that one doesn't need to learn all that hellish AWS config.


What's with the vilification of kubernetes? 99% of this document (https://tau.how/99-Misc/kubernetes/01-k8s-cons/) boils down to "You have to understand what a pod, deployment, container, etc. is" once you remove every line that discusses the cons of managing a cluster yourself, because nobody actually does that except extremely large orgs. All of these problems go away when you use a managed offering like DOKS, EKS, AKS, or GKE.


100% this. There are also exciting projects like Talos, Rancher, and the like for self-hosting Kubernetes that make it far more manageable.

So much saturation in this space of people trying to create one-off solutions, which on some level I admire. However, the further off the main path you go, the more you lock yourself into problems you can't troubleshoot or edge cases that aren't supported.

Abstraction these days is alluring, and it's cool! However, you want something well known, well supported (from multiple companies, ideally), and documented. The hate for understanding kubernetes is just hate for having to understand layers of orchestration, or worse, the layers behind the application.

If it's too complicated then you might not need it. Any platform you use will have those same layers, it just depends on how much is assumed or exposed to you. If you don't want to see any dials or options then use a managed solution, not a roll your own platform tool. That's of course assuming a few virtual machines managed by hand doesn't satisfy your needs, but if that's the case you don't need a platform solution (and hopefully it's not production).


I like your take on this. I think K8s offers a _lot_, and it has a bad reputation because of its early days. Kubernetes has room to improve, like everything else, but the APIs are becoming a lot easier to work with and Custom Resources allow folks to extend Kubernetes.

I still think that projects like this one come from necessity. Folks want an alternative to vendor lock-in.

I'm building something like that too (https://github.com/pier-oliviert/sequencer) for Kubernetes, and it's also out of necessity.

Vercel, Heroku and others have a lot of helpful tools that empower developers, and I think people want to have those without being locked in.

It goes without saying that I'm totally biased :)


> What's with the vilification of kubernetes?

because a well-configured k8s cluster nullifies the need for this project. also hi!


There's a fair amount of friction to going from 0 to a well-configured k8s cluster with gitops and a local dev story....


Gitops and a local dev story are more about your application than your deployment environment. Especially because along the way you need to consider building, testing, CI, etc.

It's really hard to fault k8s these days, all the original problems are solved and all that remains is necessary complexity that can't be abstracted without lowering power.

That doesn't mean you can't abstract over it; you can and should, but you should do so in the scope of your team or organisation, where you already know which pieces of power you need/want, or otherwise know the way in which you want to leak those capabilities.


I disagree hard with this. I love kubernetes to bits, but managing complex networking issues, having proper security, or just simply rightsizing the nodes definitely doesn't go away with a managed control plane.


Google's Autopilot takes care of that stuff for average use cases. You just point a few YAML files at it and your app is running.

It's not like Netlify does better in terms of rightsizing nodes.
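For the average case it really can be just a couple of manifests. A minimal sketch of what "a few yaml files" means here (the app name, image, and port are hypothetical placeholders):

```yaml
# deployment.yaml -- minimal hypothetical example
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2                # Autopilot provisions/rightsizes nodes to fit these pods
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:latest
          ports:
            - containerPort: 8080
```

Add a Service/Ingress on top and that's the whole surface area most small apps need.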


"What's with the vilification of kubernetes?" I ask myself that question every day


If your goal is to self-host your own PaaS, why would you use a managed k8s offering?


"Self-hosting" doesn't necessarily mean you own the hardware. Many people self-host on cloud providers at different levels (Fly, Digital Ocean/Hetzner, GCP/Azure/AWS), and most of those have some managed K8s offering.


Yeah, and as I see with my clients, you are always overpaying for that service. We run a one-binary CL (save-and-die) cluster for $50/mo, without containers, that makes us millions of $ in profit a month. Even if it cost $1000 (it would be far more): a) I'd rather give that to someone on the street, b) it would be far, far more busywork for no benefit. Anyone can do what they want; I like profit and I simply don't like giving these companies money. But to each their own. Tau seems to share our philosophy, so I will add them to our sponsor list.


Why wouldn't you? If you decide to not use a PaaS and self host your own servers would you reach for a VPS or would you manage your own rack in a colocated data center?


Precisely, it's just about which abstraction and responsibility level you want to engage with. Managed k8s (from good vendors) means the scheduler is as far as you need to go which is enough to do a great many "self hosted" things.


I found it interesting that this was the focus when users of Vercel etc. have probably decided against k8s already. For me, a k8s comparison would make sense if this was a platform for running containers/VMs in a more traditional server model.


Nothing wrong with vilification of a mature product. Everyone trying to sell you an alternative will try to differentiate itself from a mature competitor in the market very hard.



