It does make it challenging to track operators, as upstream usually only provides and documents Helm installation.
If you write your own Terraform definition of operator X v1, it can be tricky to upgrade to v2, as you need to figure out what changes your Terraform config needs to go from v1 to v2.
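To illustrate the coupling: a typical setup pins the chart version in a `helm_release` resource from the hashicorp/helm provider, so a major-version bump often means reworking the `set` blocks too. A rough sketch (the repository, chart name, and values here are made up):

```hcl
# Hypothetical operator pinned via Terraform's helm provider.
resource "helm_release" "example_operator" {
  name       = "example-operator"
  repository = "https://charts.example.com"
  chart      = "example-operator"
  version    = "1.0.0" # bumping to 2.x may also require reworking the values below

  # Chart values baked into the Terraform config -- these option names
  # are invented for the example and usually change between majors.
  set {
    name  = "watchNamespace"
    value = "default"
  }
}
```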
The thing I'd add to this is that in most cases you need to manually provide config values to the install.
This sounds okay in principle, but far too often I end up digging through the template files (what Helm actually deploys) to understand what a config option does, since the documentation is hit or miss.
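For what it's worth, when the docs fail me I render the chart locally and grep the output. A rough sketch (the repo, chart, and option names are made up):

```shell
# Dump the chart's documented default values (hypothetical repo/chart).
helm show values examplerepo/example-operator

# Render the templates locally with an option toggled, then grep/diff
# the generated YAML to see what that option actually changes.
helm template my-release examplerepo/example-operator \
  --set someOption=true > rendered.yaml
```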
Helm is sort of like Docker (or maybe Docker Compose) for K8s, in the sense that a Helm chart is a prepackaged K8s "application" you can ship to your cluster. It got very popular very quickly because of its ease of use, and I think that adoption was premature, which affects its day-to-day usability.
It's essentially a client-side preprocessor. The K8s cluster knows nothing about Helm; it just receives perfectly normal YAML that Helm generates on the client.
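A quick way to see this: `helm template` does the same rendering `helm install` does but just prints the YAML, so you can pipe it straight to kubectl yourself (chart name here is hypothetical; note you do lose Helm's release bookkeeping and hooks this way):

```shell
# Render client-side and apply the plain YAML directly -- the cluster
# never sees anything Helm-specific.
helm template my-release examplerepo/example-chart | kubectl apply -f -
```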
Yes, I use Flux, which has a similar HelmChart/HelmRelease resource. One of the things that took me a while to "get" with K8s is that operators are just clients running on the cluster.
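For anyone who hasn't seen it, a Flux HelmRelease is just another K8s object the Flux controllers watch; a minimal sketch (the names, chart, and values are hypothetical):

```yaml
# Hypothetical Flux HelmRelease: the helm-controller running in-cluster
# fetches the chart and runs the Helm install/upgrade for you.
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: example-operator
  namespace: flux-system
spec:
  interval: 10m
  chart:
    spec:
      chart: example-operator
      version: "1.x"
      sourceRef:
        kind: HelmRepository
        name: example-charts
  values:
    watchNamespace: default
```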