it's kind of hard to tell what your position is here. should people not ask chatbots how to scrape html? should people not purchase RAM to run chatbots locally?
i think in this thread the goalposts were slowly moved. people were initially talking about success being predicted by having the excess necessary to comfortably take many shots on goal. it seems like we've granted that this $250k shot was a one-time thing.
it is true but irrelevant to the original topic that this is more money than the global poor ever see, and more money than most people get to have. i don't think anyone was arguing that this represents zero privilege.
I'm pretty sure there's no reason that Anthropic has to do research on open models; it's just that they produced their result on open models so that you can reproduce it without having access to theirs.
I am sure that there are some people who exhibit the behaviors you're describing, but I really don't think the group as a whole is uninterested in prior work or in discussion of philosophy in general:
https://www.lesswrong.com/w/consciousness (the page on consciousness first cites the MIT and Stanford encyclopedias, then provides a timeline from Democritus, through Descartes, Hobbes, ... all the way to Nagel, Chalmers, and Tegmark).
Now, one may disagree with the particular choices or philosophical positions taken, but it's pretty hard to say these people are ignorant of, or not trying to be informed about, what prior thinkers have done, especially compared to just about any other reference culture, except maybe academia.
As for the thing about Aella, I feel she's not as much of a thought leader as you've surmised, and I don't think she claims to be. My personal view is that she does some interesting semi-rigorous surveying that is unlikely to be done elsewhere. She's not a scientist/statistician or a total revolutionary, but her stuff is not devoid of informational value either. Some of her claims are hedged adequately, some a bit inadequately. You might have encountered some particularly (irrationally?) ardent fans.
The epistemology skews analytic, with a heavy dose of "philosophy of science". That's not inherently an issue, but it does mean there's a reason I spend a lot of time here on the orange site talking about Kantian concepts of epistemology in response to philosophical skepticism about AI.
A good example of the failings of "rationality" is Zionism. There are plenty of rationalists who are Zionists, including Scott Aaronson (who, incidentally, I don't think is a very serious thinker). I think I can give a very simple rational argument for why creating a colonial ethnostate is immoral and dangerous, and they have their own rational reasons for supporting it. Often the arguments, including Scott's, are pure self-interest. Not "rational."
>My personal view is that she does some interesting semi-rigorous surveying
Posting surveys on Twitter, from a sex worker account, is so unrigorous that taking it seriously is itself concerning. On top of that, she lives in a bubble of autistic rationality people and tries to make general statements about humanity from it. And on top of that, half her outrageous statements are obvious attempts at bargaining with the CSAM she experienced, which she insists didn't traumatize her. Anyone who takes her seriously in any regard is a fool.
I personally don't have that much of an interest in this topic, so I can't critique them for quality myself, but they may at least be of relevance to you.
I am really not sure where you get any of these ideas. For each of your critiques, there are not only discussions, but taxonomies of compendiums of discussions about the topics at hand on LessWrong, which can easily be found by Googling any keyword or phrase in your comment.
On "considering what should be the baseline assumption":
On the critique that rationalists are blind to the fact that "reason isn't the only thing that's important", generously reworded as "reason has to be grounded in a set of human values", some of the most philosophically coherent stuff I see on the internet is from LW:
Looking at the first link, https://www.lesswrong.com/w/epistemology - it has, frankly, a comically shallow description of the topic, and the same goes for https://www.lesswrong.com/w/priors. I may just be entirely the wrong audience, but in just about every discussion they don't even begin to address the narrow topic at hand, let alone form competent building blocks for any solid world view.
I support anyone trying to form rational pictures of the universe and humanity. If the LessWrong community's approach seems to make sense and is enriching to your understanding of the world, then I am happy for you. But every time I try to take a serious dive into LessWrong, and I have done so multiple times over the years, it sets off my cult/scam alerts.
Aside from the remark given in the other reply to your comment, I wonder what the standard is: how quickly would a community need to appear to correct its incorrect beliefs for its members not to count as sheep?
> They don't really ever show a sense of "hey, I've got a thought, maybe I haven't considered all angles to it, maybe I'm wrong - but here it is". The type of people that would be embarrassed to not have an opinion on a topic or say "I don't know"
edit: my apologies, that was someone else in the thread. I do feel, though, that between the two comments there is a "damned if you do, damned if you don't". (I found the original quote above absurd upon reading it.)
Haha, my thoughts exactly. This HN thread is simultaneously criticizing them for being too assured and not considering other possibilities, and for hedging that they may not be right and that other possibilities exist.
This is right, but doesn't actually cover all the options. It's damned if you [write confidently about something and] do or don't [hedge with a probability or "epistemic status"].
But the other option, which is the one the vast majority of people choose, is to not write confidently about everything.
It's fine; there are far worse sins than writing persuasively about tons of stuff and inevitably getting lots of it wrong. But it's absolutely reasonable to criticize this choice, regardless of the level of hedging.
Well, on a meta level, I think their community has decided that in general it's better to post (and subsequently be able to discuss) ideas that one is not yet very confident about, and ideally that's what the "epistemic status" markers are supposed to indicate to the reader.
They can't really be blamed for the fact that others go on to take the ideas more seriously than they intended.
(If anything, I think that at least in person, most rationalists are far less confident and far less persuasive than the typical person in proportion to the amount of knowledge/expertise/effort they have on a given topic, particularly in a professional setting, and they would all be well-served to do at least a normal human amount of "write and explain persuasively rather than as a mechanical report of the facts as you see them".)
(Also, with all communities there will be a more serious and dedicated core of people, and then those who sort of cargo-cult or who defer much, or at least some, of their thinking to members with more status. This is sort of unavoidable on multiple levels: for one, it's quite a reasonable thing to do given the amount of information out there, and for another, communities are always made up of people with varying levels of seriousness, sincere people and grifters, careful thinkers and less careful thinkers, etc. (see geeks, MOPs, and sociopaths))
(Obviously even with these caveats there are exceptions to this statement, because society is complex and something about propaganda and consequentialism.)
Alternatively, I wonder if you think there might be a better way of "writing unconfidently", like, other than not writing at all.