These days the name "LLM" refers more to the architecture & usage patterns than it does to the size of model (though to be fair, even the "tiny" LLMs are huge compared to any models from 10+ years ago, so it's all relative).
No, I believe that many successful humans, as single units, are extremely rational and coldly calculating.
The problem is that this rationality is often centered on a single beneficiary (you), because why would you care about any other beneficiary?
However, time and time again it turns out that no company is as evil as a government. Hence I am an anarcho-capitalist.
On the whole, even with every company thinking only of itself, the result is a distributed system that is self-sustaining and self-correcting. No single unit has unlimited power.
Historically it’s always the governments that are vastly more evil and chaotic than any private enterprise ever conceived.
And we can see another example of this right now in the US government. No company could ever become as corrupt and evil as the current American elected officials.
You seem to tacitly acknowledge corporate America can also be evil, just not as evil as government can be? Why put corporate America on a pedestal at all then? Why content yourself with what you consider the lesser of two evils?
Demand accountability from your elected officials. It can be done by not electing them. You have no such agency over corporate America (short of boycotting, I suppose).
To my eye, the U.S.'s highest elected official is in fact also a company.
Even better, train an entirely new LLM with your prompt added to its data set. It will be imbued with its own latent sense of purpose. All you need to do after that is type "let there be light!"
I'm probably 10 years out of date. Are Ethereum smart contracts still a thing? I'm sure you could deploy one of those for every agent session to handle the notifications.
"We have a million pieces of content to show you, but are not allowed to editorialize" sounds like a constraint that might just spark some interesting UI innovations.
Not being allowed to use the "feed" pattern to shovel content into users' willing gullets based on maximum predicted engagement is the kind of friction that might result in healthier patterns of engagement.
Depends how much staff they have? You realize daily newspapers in cities all over the world are just full of new articles every day, written by real humans (or at least, they all used to be, and I hope they still are).