Hacker News
Adversarial Collaboration (edge.org)
185 points by michael_nielsen on Sept 26, 2022 | hide | past | favorite | 207 comments


Kahneman hedges just a bit in his talk: " To a good first approximation, people simply don't change their minds about anything that matters."

Trying to explain the small number of people who do change their minds, David McRaney has an interesting book out called How Minds Change.[0]

And I imagine there are people who change minds, even outside his model. People who change their minds when it doesn't seem to help them strike me as important people to hear from.

[0] https://www.econtalk.org/david-mcraney-on-how-minds-change/


Assuming a good-faith discussion, I tend to ask people "can you tell me why it is important to you that X is true?" (or false, or whatever) these days. Of course I should also be open about why a particular view is important to me in those situations. It seems to be the most effective way of maintaining a productive conversation when disagreeing on things, regardless of whether we'll agree in the end.


I would immediately end a discussion if I were asked that question.

If you want to dismiss me by trying to imply my opinion is driven by a want of something other than accuracy with reality, I can find other people to spend my time with.


Where are you getting that they'd want to "dismiss you?" Smart and well-informed people can come to different interpretations of reality that are accurate in their own ways. It's interesting to know what sorts of consequences to such interpretations a person finds to be important for their worldview.

Once we're talking about dismissing someone, we've long left the realm of good-faith discussion, which was assumed in the situation you're responding to.


> Where are you getting that they'd want to "dismiss you?"

Because that kind of information is used with a common rhetorical tactic called ad hominem.

> Once we're talking about dismissing someone, we've long left the realm of good-faith discussion, which was assumed in the situation you're responding to.

Assuming good faith is necessary and good for any kind of productive debate, but it need not be accompanied by the trust necessary to answer personal questions. That kind of trust is developed over time with each person. Perhaps you are more trusting by default, and more power to you, but it's naive to assume everyone ought to react the same way you do.


Is the content of your comment that you are explaining ad hominem to me and that vanderZwan's question requires mutual trust? Yes, I fully understand this.

The only thing I am trusting here is that vanderZwan is trying to accurately report their experiences. I trust, since they say it is done in good-faith discussions, that they are sensitive to the risk of this being perceived as being ad hominem and that in-person they ask the question in an appropriate way.

To be clear, I never said that I would respond well to the question. My point is that it is a large leap to go from what vanderZwan said to that this is for the purpose of dismissing someone.

Also to be clear, I think it's valid for P5fRxh5kUvp2th to think that this sort of question is well outside of what is ever respectful to ask. I just happen to think it's wrong to assume that disrespect is vanderZwan's intent.


“Why it is important to you that X is true?” implies subjectivism in the belief system. While I think belief systems are largely subjective, some people think of them as objective and take offense at suggestions that their beliefs are “just” subjective.


I have a close group of friends that consists of an atheist, an agnostic, a Hindu, and 2 Christians, and outside of 1 of those Christians, we've all been friends for at least 13 years (1 of those relationships since 1994).

No one had to ask the Hindu why they got upset when the atheist came by and stood next to the statue of Buddha chomping on a McDonald's beef biscuit.

The atheist didn't think about it; those two had a conversation afterwards, and they still get along just fine to this day (some 10 years later).

This is NOT about beliefs being objective or not. That group of friends has lasted over the years due to a fundamental respect of each other as PEOPLE.

Someone who asks that particular question lacks such respect, and as such is not going to be given any headspace by me.


Do you realize you are making a ton of assumptions about the situation in which I ask such a question, or how? Most of them needlessly negative. Which I find odd given that I explicitly stated that I'm only doing this in a context of good faith and mutual openness.


You do not have a right to my time.

If you want a claim on my time, then you shall do so under my requirements.

No native speaker is going to be surprised I ended the conversation after a question like that.


If you frame it like that, it doesn't sound like you're particularly willing to meet people halfway when discussing things with them, and the No True Scotsman at the end feels like an unnecessary jab at the fact that I'm not a native speaker (I don't see how that is relevant here anyway).

Anyway, how you communicate with people close to you is up to you, and I'm still not sure why it upsets you so much how I claim to have healthy discussions with mine.

But for the record: my partner and I essentially saved our relationship during the pandemic despite being on complete opposite "sides" of the topics of covid, vaccines and lockdowns, by communicating this way.

Because truthfully, neither she nor I actually have enough expertise to really assert that the vaccine is safe or not in the long-term, whether or not lockdowns work, and so on. We just have to trust the people we consider authoritative on these matters. And for perfectly understandable personal reasons we trust different sources here and prioritize different issues.

So instead of trying to convince each other of a viewpoint we could not in good faith defend as something we knew with certainty, we decided to ask ourselves why we feel so strongly about these issues. Why these beliefs that we do not have strong evidence for were so important, and to be honest with each other about those beliefs.

And being willing to trust each other, open up like that, and do that kind of self-reflection together works. Neither of us has changed our minds regarding covid, nor tried to change the other's mind, but we're more grateful to each other for taking the time to understand and respect each other's views than we are upset by the differences, because it shows we believe the other person is worth that effort. Which is more than most people who disagree as strongly about this topic as she and I do can say.

If you still believe this is being disrespectful, that's fine. Outside of this meta-discussion about discussions we're having right now I have no interest in "claiming your time" and I'll still have a healthy and loving relationship with my partner regardless of what you think.


If you were a non-native speaker I would give more leeway because the implication of the question is more likely to be missed by the person asking.

I heard a story once of an Asian woman in a business meeting who started talking about erections. She was trying to say "election". If a clearly native speaker had said it, the reaction would have been vastly different.

That question has all sorts of implications, not the least of which is being an accusation that the other person is disagreeing in bad faith, as well as an extreme level of condescension.

You don't speak to your coworkers the way you speak to your partner.


That's... an interesting interpretation of what I said, because my motivation is the opposite. I had not considered it could come across as dismissive, thank you for pointing that out.

However, when I ask this it is not to dismiss someone, it's because if I understand why something is important to them I am better at arguing in good faith without accidentally being dismissive of what matters to them. Which helps with reaching a consensus, or a respectful point of agreeing to disagree.

Also, it's an important question people should be asking themselves but rarely do.


> That's... an interesting interpretation of what I said

In a non-verbal exchange, this choice of an explicit pause followed by the phrase "interesting interpretation" could also be misconstrued as a backhanded compliment or sarcasm, further undermining your alleged intention.


And? The GP's response was way more combative than the explicit pause. The guy took pains to highlight "assuming a good faith discussion". Clearly, it's a question that IS on the table. It's driven by wanting to understand further the other side's thought process. I'd love to get a question like that. It would make me pause and ponder and reflect. Always a good thing. Not "immediately end the discussion". Crikey.


Sure, but at some point interpretation tells you just as much about the interpreter as about the person saying something, and I'm not responsible for the former.


> but at some point interpretation tells you just as much about the interpreter as about the person saying something

Maybe, but it's totally unnecessary if you want to foster open and good faith discussions as you claim.


I think the GP is being patient. Instead of policing his tone, you should reconsider yours. It sounds like you’re upset for an unclear reason.


> I think the GP is being patient.

Being patient in a textual medium means just saying what you mean instead of adding unnecessary pauses and reactions to their response. Regardless of how it was intended to come across it is regularly a sarcastic/dismissive response and not one that assumes good faith or openness which the GP purported to promote.

> Instead of policing his tone, you should reconsider yours. It sounds like you’re upset for an unclear reason.

You're imputing an emotional state to me and dismissing what I'm saying based on it, which is ironically the very thing the original response was about, and exactly why many people vehemently refuse to answer personal questions like the one the GP asks. Nor is this policing; the GP can do as they wish. It is, though, critical feedback about fostering good faith and openness. If that collapses at the first challenge, then the GP isn't actually committed to it and ought not to act like it.


If something is important to you for a reason the asker may not have known, it might make it more important to them too and change their mind a little.


I've seen a strange case where someone argued blindly against a change, but the next day advocated for it. He didn't say it was his own idea, but he also never admitted he had changed his mind :)


You will find this again and again in the business world, and you can use it to your advantage.

If you're arguing for a course of action, answer the objections well and then drop it. Often the person arguing against you (especially if they're "the boss") will come back a few days later and present it as basically their idea.

People get emotionally tied to what they came up with, and it can take a few nights to disconnect enough to move on (even if they never admit they were "wrong").


People are so strange. They don't see the waste and friction due to these reflexes :)

But about the 'drop' aspect: I think there's a reflex for others to argue with someone endlessly if it becomes a confrontation of beliefs, a kind of domination contest to impose their view. But if the other party drops the fight, it kills that dynamic, and the other person can accept the idea for its core value.


That's exactly it; we like to pretend we're some advanced computational AI operating on pure rationality, but we're actually a bag of meat with an inferiority complex. And much of the "unreasonable rules" of society are built around exactly that, and if we abandon those we often have to relearn them again and again.

Recognizing they exist can allow "the boss" to cut the issue short with a "I need to think about this" and de-escalate that way.


I was lucky enough to be in the group that ran Dr. Kahneman's talk at Google, so I was at lunch with him & 8 or so others. I asked him,

"Dr. Kahneman, you've been writing about thinking for 40 years. Do you think you've changed how people think at all?"

He said, "No, not even me." He proceeded to tell a story where he fell into the "eloquence trap" that he, himself, wrote about: a doctor said something, and he said to his daughter, "that doctor is very impressive!"

She'd learned his lessons better than he had: she said that what mattered was how much experience the doctor had in this particular area, not how good she sounded.


One of the most disappointing things that I have learned is that most people hold opinions in order to be part of some group. If someone is a member of a group, it is almost not worth listening to their arguments, especially arguments in support of views held strongly by that group. They are arguing in order to maintain their group membership, not to find the truth. It appears this is true of academic and scientific disciplines as much as anywhere else.


The most important thing I learned from my graduate advisor: anyone can have a worthwhile idea.

I watched him for years coming into weekly colloquia and seeming to tune out reading papers. But, occasionally there would be a speaker that was less than credible, and you could feel the entire hall close off. But, my advisor would hear a tidbit of a good idea (even amongst loads of bunk), and look up from his journals and ask a genuinely curious clarifying question. This would often lead to new lines of research in our labs.

In the end, many of these speakers were on the wrong overall track, but they definitely had insights that were incredibly valuable. Those who dismissed them entirely missed out, while my advisor had a knack for finding the signal in the noise and moving forward with it, rather than missing it because of premature judgement.


My hobby is to look for a physics theory of everything (ToE). I have virtually no chance of succeeding at this, but it's fun reading through random junk on arXiv.

I noticed something similar: You can find papers by clearly crazy people that have nuggets of good ideas in them. Odd bits of mathematics they reference might be an interesting rabbithole to go down, even if it ultimately leads nowhere.

The whole thing reminds me of the pastime of the hyper-intelligent Minds in the Culture series. They play in Infinite Fun Space, which is vaguely like coming up with new rules of physics and "seeing what happens". The rules don't have to be realistic, just fun.

I've found that practising physicists seem allergic to any such notion, too quick to dismiss unorthodox approaches. So what if they're wrong? They're fun, and maybe not that wrong in some rare cases.


If you want a good idea killed, have it presented by the wrong person. We attach an outsized amount of meaning between the two, sometimes so much that we kill the person delivering it.


Sometimes?

The easiest way to get killed by people in a rage is to tell them the naked truth.

That's been a hard rule since humanity's dawn.


I don't think that is true.

People claiming to be brutally honest are often far more interested in being brutal than in being honest.


That may be true. But that isn't what I said.

It makes a big difference whether someone only claims to be honest or is actually honest.


I got a good laugh from your comment here. Imagine it from my perspective. Someone (you) is suggesting they search for a ToE in their spare time, and they’re labeling someone else as clearly crazy.

;)

All in good fun. I greatly enjoy the sentiment of your comment and its parent, that great ideas can come from anywhere. At the risk of creating a segue into controversial topics, I think this plays a huge part in why it's important that a team of programmers be made up of folks with different backgrounds. I am so often caught completely off guard by how different a valid idea is from mine. "I totally never would have thought of that."


An analogy is that one can walk in the right direction for an awfully long way before getting stuck at the end of a valley, staring up a proverbial cliff. Someone walking parallel to you on a ridge might be nearby the whole time, but avoid getting stuck.[1] Anyone studying AI/ML would know about getting stuck in a local minimum, and having to essentially restart training to shake things loose. Same thing.
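
The restart trick is easy to demonstrate with a toy sketch (the one-dimensional function, learning rate, and restart range below are all made up purely for illustration): plain gradient descent settles into whichever basin it starts in, while a handful of random restarts usually finds the deeper one.

```python
import random

# Toy objective with two basins: a shallow local minimum near x = 0
# and a much deeper global minimum near x = 3.2 (made-up function).
def f(x):
    return x**4 - 6 * x**3 + 8 * x**2 + x

def grad(x, h=1e-6):
    # Numerical derivative of f (central difference).
    return (f(x + h) - f(x - h)) / (2 * h)

def descend(x, lr=0.001, steps=5000):
    # Plain gradient descent: slides into whichever basin x starts in.
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# A single run starting at x = -1 settles into the shallow basin...
stuck = descend(-1.0)

# ...while restarting from several random points and keeping the best
# result ("shaking things loose") finds the deeper basin.
random.seed(0)
starts = [random.uniform(-2.0, 5.0) for _ in range(10)]
best = min((descend(x0) for x0 in starts), key=f)

print(f(stuck), f(best))  # the restarted search ends up far lower
```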

My theory is that physics is stuck in a local minimum where it's not sufficient to change just one, or even two or three, fundamental things to get unstuck. That's too big a leap for the traditional incremental publishing of new theories. Any one change to the status quo won't work, and is rejected. Multiple changes are too complex, and might need to have occurred too early in the timeline. It simply might not occur to people that the whole field took a wrong turn... 100 years ago.

This is why I like crazy papers. They make you reconsider fundamental notions, the type that were in textbooks decades ago and are seen as foundational and unquestionable.

[1] This literally happened in New South Wales. For decades(!) nobody could cross the Blue Mountains, until Blaxland, Lawson, and Wentworth tried walking on the ridge tops instead of the valleys: https://en.wikipedia.org/wiki/1813_crossing_of_the_Blue_Moun...


> My theory is that physics is stuck in a local minimum where it's not sufficient to change just one, or even two or three, fundamental things to get unstuck. That's too big a leap for the traditional incremental publishing of new theories. Any one change to the status quo won't work, and is rejected.

Makes me think of Stephen Wolfram's current work. The guy's a genius whose current stuff kind of reads like he's a crank. But at the same time I'm kind of rooting for a revolutionary paradigm that's gonna totally upend things.


He's starting from a totally blank slate and hoping he'll end up at Physics. In some sense, that's guaranteed by definition -- any sufficiently complex foundational system or algebra can represent any other, including the current models of reality. But this has no predictive power. It's like saying digital circuitry is a theory of physics because a computer can run a physics simulation!

My approach is more akin to assigning a lower probability of validity to papers that have long been generally accepted as 100% true. Then I try to hold all of them in my brain simultaneously while reassigning joint probabilities, almost like those computer game map generators that use "wave function collapse".

The idea is to find a parallel path that goes through most of established physics but avoids the trap of local minima. The challenge is that it's really unclear which existing theories are the traps, and which are true and need to be kept.

Something like this is clearly needed, because existing theories are either mutually contradictory or internally inconsistent. They can't all be right. Something somewhere must be discarded.


Have you taken a look at Wolfram's ToE? I found it pretty intriguing.


He's about half a dozen giant steps from a theory making qualitative predictions matching the physics of our universe, but not others. His only advantage is that his starting point is more foundational than pretty much anyone else, but that also makes it very unlikely that he'll ever develop something with predictive power.

My current set of candidate pet theories are all based on a vaguely similar foundational notion that the physical universe isn't made up of "matter on top space-time", but rather that there is a single space-time-matter fabric.

For example, to create matter, space-time must be affected (curved). Or to put it another way, all particles have mass-energy (spacetime curvature) because they are space-time-matter curvature. The idea is to unify QM and GR by making fundamental particles have geometric properties that satisfy GR at all scales.

Some of these notions are present in Wolfram's ToE implicitly, so it's possible that there is a connection. However, he isn't yet at the point where he can derive, say, the mass of an electron from first principles based on how much its specific topology curves space-time-matter.


> My current set of candidate pet theories are all based on a vaguely similar foundational notion that the physical universe isn't made up of "matter on top space-time", but rather that there is a single space-time-matter fabric.

> For example, to create matter, space-time must be affected (curved). Or to put it another way, all particles have mass-energy (spacetime curvature) because they are space-time-matter curvature. The idea is to unify QM and GR by making fundamental particles have geometric properties that satisfy GR at all scales.

I also think this is the "obvious solution". I came to the same mental picture of "space-time-energy quanta" long ago.

But I guess the main problem is to formulate this in a meaningful mathematical way. (Physics always needs some "stage" on which "things" can happen. GR did not change that; it just made the "stage" more dynamic, and that alone proved to be very hard to formulate in math, which is all about static relations between objects.)

BTW: Something that I found very inspirational, and what makes very much sense to me, was this here:

http://www.platonia.com/research.html

(Mr. Barbour also has some pop-science books on these topics.)

The basic idea is that there is nothing besides pure geometry on the fundamental level.

That makes sense to me, because what else could be there at all? Anything that is needs to come from somewhere. Only pure structure, something that "just happens" given the idea of "things in a space", could imho resolve this problem. (Which is also quite in line with Wolfram's ideas, btw.)

That leads to the idea that things "are" because they "must be" alone from the fact that you try to describe their relations.

Contrary to that, all of physics concentrates on things that aren't "pure". Almost everything in physics is "afflicted" by some "units". But how do you explain the "units"? You can't! They're a given. So imho even the smallest set of them can't be fundamental. Only pure "proportions", from which "structures" and "shapes" emerge, make sense on the fundamental level. Because such structures "just are", as they're mathematical objects. (Mathematical objects and structures "exist" without being created; nor can they ever be changed or destroyed. That makes them "very good material" for the fundamentals of a universe, imho.)

Also, this way of looking at things explains one of the weirdest and least-understood parts of our world, namely time.

Time is a big mystery. Mr. Barbour's ideas were to me the first explanation ever that didn't produce more questions than answers.


Crazy -- just brought up Wolfram in response to a sibling comment before I saw this. Happy to see I'm not the only one.


I somehow converged around a similar idea, fun and exploration


> If someone is a member of a group, it is almost not worth listening to their arguments, especially arguments in support of views held strongly by that group.

Who isn't a member of a group? We're all members of a number of groups.

It's refreshing that Kahneman honestly admits that he doesn't change his mind either, and his tastes were formed when he was relatively young. He's not putting himself above the rest of humanity. He could easily say, "Well I'm a famous and distinguished scientist, so obviously my beliefs are rational, unlike everyone else", but he doesn't.


Maybe I'm uncommon in this aspect, but I don't feel like I'm a member of many groups. I'm not religious, but don't really consider myself an atheist. I'm not for any particular party. I don't care about sports. I'm Canadian, but don't live there and don't feel like I belong there either. I suppose you could say I'm part of the group of Caucasian males by birth or part of the group of software engineers by trade, but I don't think those are "real" groups with many ideas in common. Maybe I just am not aware of which ideas I get from those groups.


> I'm not religious, but don't really consider myself an atheist.

Oh, an agnostic.

There is something to be said about groups of people who hold similar ideas because their similar way of thinking led them to similar conclusions. It's a very important exception to the OP's description, where the group came first, and may even be the most common kind.

(Of course, the question to answer is why do those people think in similar ways.)

> part of the group of software engineers by trade

I'd bet that one led you into adopting some values.


I probably inherited some values from the group of software engineers, but it's not a well defined group. Can you point to any values we might have in common? I would say a love of logic and the scientific method, but even that isn't universal. I don't find software engineers much more logical than other people of comparable intelligence.


Yep, some adoration of logic and control, even if not applied. But those were probably there first.

You probably also have some abhorrence towards formal chaos (the kind that causes unpredictability; many are not even able to accept that it exists) and want to improve every system (simple or not) you see.

On the more traditionally political kinds of things, you probably value freedom of expression very highly, like other highly educated groups, but you also probably value personal freedom and personal initiative. Those often come in an intensity that leads to some bullshit levels of belief in self-determination (like not believing you share group values) and personal responsibility; but beliefs change faster than values, and many people tame those down with time.


Same here. Something I always find hilarious is conversations with people whose opinions are very polarised due to their membership in groups. For example, I have friends who are, variously: a Christian priest, libertarians, anti-vaxxers, and Trump supporters.

Invariably, in every discussion, they'll trot out this sentence: "You <opponent group name> people always think <notion>".

For example: "You Liberal party voters always support lowering taxes" or somesuch sentence.

I point out that I didn't vote for the Liberal party.

"The Labour party voters are the same!"

I point out I didn't vote for Labour either in the most recent election either.

"Err..."

-- at this point their brain locks up, because they're expecting a tribe-vs-tribe fight and they have no idea what to do when they discover I don't actually belong to their "enemy tribe".


You're lucky to have avoided encounters with the more advanced tribalists then. Most of the more engaged members of these groups will already have a prepared retort that insinuates that anyone not an active member of $THEIR_TRIBE is by necessity part of $OTHER_TRIBE.


I think this is not at all uncommon. I am largely in the same boat and assumed this was the 'de facto norm' amongst my peers as a teen (90s NZ), but it's interesting to consider that groups are often defined by what they are not.


That just means you’re being willfully blind to the groups, not that you aren’t a member.

Even saying you’re !group makes you explicitly a member of the !group group!


The exact same thing happens with groups like a programming team or a sports club.


Those with the most privilege are usually the least aware of it.

I've come to realize that "white male software engineer" is one of the groups with the highest ratio of privilege to privilege awareness.

(Disclosure: I belong to that group as well.)


Why would you bring privilege into an unrelated discussion? Also you're assuming a lot about my background that would surprise you. I wasn't born a software engineer. I chose the field deliberately as the best way to make money, and then fell in love with it after the fact. I'm self taught, a child of immigrants. I had a typical middle class childhood.


This part:

> I don't feel like I'm a member of many groups.

And this part:

> I suppose you could say I'm part of the group of Caucasian males by birth or part of the group of software engineers by trade,

So, you belong to three privileged groups: male, Caucasian, software engineer. It's easy to overlook this group membership, since it silently represents privilege. Those outside these groups definitely notice that they are not in them. Ask a discriminated-against woman, a discriminated-against non-white person, or an Amazon warehouse worker.

That's why I pointed out that it's easy to forget group membership that confers privilege.

> Also you're assuming a lot about my background

No, I went by exactly the information you conveyed yourself. See above.


In Canada? Software engineers are traditionally not well paid there. Plus I assume gangs of other kinds of engineers are always beating you up for calling yourself an engineer without an iron ring.


This discussion is about ideas (or opinions), not privileges.


You're right, it's about ideas, but the original article handles the interesting case where we fail to form or handle our ideas using "reason."

> The power of reasons is an illusion. The belief will not change when the reasons are defeated.

So privilege plays an interesting role. Unaware privilege - failing to see one's own circumstances, failing to assess how we fit in the world - is almost the same thing as non-reason. I suppose there could be a class of people who are like, "I am totally privileged, and somehow I delude myself and don't notice, but in every OTHER way I am totally reasonable." That could be a subgroup. But probably not a big one?


The issue isn't group membership. It's what's being prioritized.

If you prioritize truth over group membership, you are more likely to see your own flaws, the flaws in others, and prevent yourself from making catastrophic errors in your inevitable ignorance.

If you prioritize group membership over truth, you are more likely to fall prey to lies that have only short term benefits for one or more members of the group and eventually lead to catastrophic errors.

The best groups are the groups of people who are legitimately pursuing the truth, even if they are temporarily ignorant, which again, is inevitable due to information constraints.

We've become so saturated with power dynamical thinking we've forgotten that it is possible to be motivated to try to see the world accurately so as to best cooperate with it regardless of power differentials.


> who isn't a member of a group? We're all members of a number of groups.

Not everyone is brainwashed into accepting groupthink, no matter what circles they have to move in.

There are individuals and then there are the parrots who mindlessly repeat what the group says.


Being aware of cognitive biases doesn't render you immune to them. Others shape and inform our beliefs about the world just as our own experiences do.

You might be a "rational, fully self-realized, and atomized" individual but your behaviors and thoughts are certainly influenced by those around you. Here's a simple example: https://dictionary.apa.org/social-facilitation



nice strawman that has nothing to do with the point being made


"People don't have ideas. Ideas have people."

https://wearehostsformemes.com/


There's a book called How to Win Friends and Influence People -- pay attention to the order :-) first become a group member, in the eyes of the other person, and then there's a chance he/she will be open to something new

> almost not worth listening to their arguments

At the same time, their arguments are likely the-least-dumb / most-intelligent things they've been able to come up with, to rationalize their group's ideas -- so at least you have a chance to get to know what those least-dumb-things are


You make it sound so simple. Becoming a group member in this context means corrupting oneself. You are what you do. You are the last action you take. If you deceive the other externally while holding different positions internally, you are those external positions, and the internal is the deception. Those that lie to themselves this way have nothing to offer the group and will certainly never "make a change from the inside."


More books for you :-)

Never Split the Difference. And Nonviolent Communication.

Because just listening to others, and showing that you care about and understand what they're saying, can go a long way. Not always a need to actually do anything in the real world (or to say or pretend that you agree).

Maybe theoretically this won't place you in their old group; still, you'll form a bit of a new group where you and that other person are friends


Or you become a traitor to everyone and are rewarded as one deserves.

These tactics are just that, they are not strategies. They are not long term successful or sustainable.


The things on your mind are misunderstandings. Not sure exactly what you think, but what you wrote made no sense to me.

It seems you haven't read any of the books.

I did use the listening technique in Never Split... and it worked well, with good long-term effects for everyone too. -- When listening to the other person, one gets a bit more open oneself -- sometimes the other person does have (partly) good ideas (although not in that case), and it's simpler to find something that works.


I don't see it that way - I think of it as a linguistic challenge. Every group's thoughts are limited by the language the group adopts. Learning a group's language makes one more prone to think as that group does, but it also allows one to keep those ideas at arm's length. And if an idea is difficult to communicate in a group's language, that doesn't make it deceptive to hold it, especially if one makes good-faith attempts to communicate important ideas using the group's language.

And while it is true that some groups hold language and ideas that are both infectious and dangerous, arms-length exposure to many such ideas is much less likely to result in pathology than close exposure to one. Refusing to learn an enemy tribe's language is an uninformed bet that you lucked into the best tribe by default.


Yes. And if someone knows books or discussions about this topic, I want the names. It's such a strong force in social structures. Stating your truth is rarely possible; you're better off singing along and talking it down in smaller subgroups later.


Read anything Charlie Munger writes on the topic. He said something to the effect of “A year in which you fail to kill at least one of your cherished ideas is probably a failed year.”

Anyway, he has a lot to say on the subject and on incentives and human psychology broadly.


Nice platitude, but sometimes cherished ideas are actually true.

A better way to phrase it would be: a year in which you don't challenge your ideas is probably a failed year. An idea might still win the challenge.


I think the idea is you have many cherished ideas, and so many are actually false that you shouldn't have much difficulty killing one.

Or, another way of looking at it, your knowledge of so many areas is superficial and a deep dive into any one would let you know how you weren't even wrong, as the saying goes.

You can find people doing it in any HN thread about almost anything - often it shows up as "why didn't they just do X". A non-domain expert with an "obvious solution" is almost always missing something major.


stuff like psychology of misjudgement ?


https://www.newyorker.com/magazine/2017/02/27/why-facts-dont...

Three mentioned in the article: * Enigma of Reason * The Knowledge Illusion * Denying to the Grave


This is a large chunk of "The Elephant in the Brain", though it's presented as an example of the wider thesis. The overall topic of the book is that people basically always lie about their motivations, including to themselves. I highly recommend reading it.


Everything handwaving freakoutery (it's a blog) publishes on egregores I found highly relevant and entertaining.

Actually, everything contemporary mentioning egregores is probably relevant - they're also called 'AI autocults', but the concept existed long before group communication became enmeshed with computer infrastructure - though the dynamics are different now than in enlightenment-era Europe, and the pre-Newtonian assumptions of the older publications throw most modern readers off.


Not really books but did some undergrad research on the topic.

http://stephendavies.org/writings/mitterederEtAl.pdf

We referred to it as "issue alignment" but it's gone by other names, IIRC. Basically, in theory, one could end up supporting positions that would make 0 sense outside the context of "well, I support "X" cause "team green" supports "X.""

I've seen it happen often online, a lot less IRL.


https://youtu.be/QWkWeZOQZI8

Jordan Peterson’s zebra story


I think peterson is crazy but some of this story was fun


I’m curious why you think he’s crazy.


I agree with this. Many tend to think this is isolated to religious groups but it's just as bad in groups that present as "woke" or "enlightened".

There's a survival instinct that we can't get away from. Being in groups is innate to the core of our being, and going against the group, or even changing a group's mind, risks breaking that bond and losing the safety that the group represents.


There's some truth to this but it's a pessimistic and ultimately counterproductive take on people.

Rather I would say that high agreeableness (and maybe to some degree conscientiousness or extroversion) [1] is correlated with the likelihood that someone will change their opinions in order to conform with a group.

So if you don't like that personality trait, you can seek out less agreeable people, there are plenty of them out there. The only trouble will be that if you need to collaborate with them, you may have a hard time getting anything done!

There is survivor bias among groups - if a bunch of people all have different opinions and aren't willing to modify them, they're unlikely to remain a group for long.

[1] https://en.wikipedia.org/wiki/Big_Five_personality_traits


> There is survivor bias among groups - if a bunch of people all have different opinions and aren't willing to modify them, they're unlikely to remain a group for long.

I suspect this is why over time all members of a political party become near carbon-copies of each other. Even if the party started "big tent" over a single issue, natural forces will cause it to coalesce into "same thought on all issues".

One solution for the individual is to forcefully prevent yourself from associating with the group, but this often requires drastic action and gets you shunned by all groups. Compare the vitriol against someone who "votes wrong" vs "votes third party" vs "refuses to vote at all".


I think it is the other way around. When people are part of a group the view of the group becomes their own. Most of the thoughts that you think aren't really your own. It is simply a script written by the society around you.

For example: Someone born and brought up in the West would think that children don't owe anything to their parents because it was their parent's choice to have them.

While someone born and brought up in the East would think that they owe everything to their parents because they gave them the gift of life and because of all the sacrifices they made for their kids.

You can't convince someone in the West to think anything otherwise than that stated above. They are utterly convinced that that is the truth, and likewise the reverse for someone born and brought up in the East.


That was kind of a ... weird example. I don't think I've asked many people in my circle if they think they "owe" anything to their parents, but by their actions I'd say 90% of them do think they owe a lot to them, and have a profound duty to care for them as they age.

I live in Canada, but I can't imagine it's that much different in the US.


> You can't convince someone in the West to think anything otherwise than that stated above.

That isn’t at all a uniform view in the West. I grew up in a community where people expressed a great deal of what might be called filial piety. Not everyone held or maintained that belief though.


But not everyone holds that view based on where they live. So how do you know those are the prevalent views?


Join us, the non-conformists!


"You are all individuals"

"We are all individuals"

"I am not!"


> If someone is a member of a group, it is almost not worth listening to their arguments, especially arguments in support of views held strongly by that group. They are arguing in order to maintain their group membership, not to find the truth. It appears this is true of academic and scientific disciplines as much as anywhere else.

I tend to agree with you, but based on the way you have worded this, I am curious if you think that people have conscious awareness (which is required to form intent) of this?

And as a follow-up: if this phenomenon is sub perceptual, might that change how one might go about addressing it, or even discussing it (it is a fairly common point raised in these sorts of discussions)?


I think there can be different things going on under the hood.

Some people will be self aware, and not take arguments or positions too seriously - their own or others - they know they hold the points of view for group membership - they tend to not to like to debate them, aware of the futility.

More common though is ego-defense/group-defense reaction. This is the zealot. Both a person's ego (you're wrong) and their group (your group is wrong) is on the line. This is in addition to losing their membership of a group if they adopt a different position.


Ya, it's a big chaotic mess, but I wonder if we humans are somewhat prone to letting the apparent complexity of it (the actual complexity isn't known) scare us away from trying to make sense of it all. Look how much complexity about the physical world science has come to understand, because we applied some of the sharpest minds we have to the problem, and funded them well.

If we don't even try (including if the idea doesn't even cross our minds), we may never succeed.


What do you think of the wisdom of the great Groucho Marx: I wouldn't be a member of any group that would have me as a member.


i thought that was bertrand russell...


why don't you find people who are in a group that don't subscribe to this approach?


Because it's too hard to work everything out all the time, it is much easier to rely on others, which causes the group dynamic.

The best you can do may be to find a group that is based on something entirely outside everything else, but that's quite hard to locate.


> It appears this is true of academic and scientific disciplines as much as anywhere else.

Sure, of course!

That's why "A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it."

And we can be happy with at least that!

In anything outside science there is no progress at all. Humans in general have behaved exactly the same for thousands of years.

You could read some thousands of years old drama and the story will be very familiar. Lust, greed and power struggles, and the other typical human traits.

We did not manage to solve even one non-technical problem since we came down from the trees. But that's nothing unexpected actually, given we're a horde of apes. I'm old enough to know that nothing will ever change, as we've already had more than enough time to manage at least some "humanity scale" issues by now. But we as a species are seemingly bound to our ape nature.

The few statistical outliers here and there could not and cannot change the course of events. They never had any realistic chance, because they're outnumbered by the billions. That's just how it is.

Eugenics would be a small chance. Or we'll some day become the "wet bootloader" of some truly intelligent beings. But neither is very likely.

Or, of course, we just kill our species eventually by some stupid mistake, or just out of rage, which is frankly the most likely outcome in the long run given human nature and its current technological possibilities.

There are no other realistic outcomes one could come up with. Apes will stay apes, likely to their very end. Time has already proved that we just can't do better.

Now, anybody who likes to argue that I'm too pessimistic needs to explain away why there hasn't been any substantial progress up until now (besides tech, which is something almost exclusively driven by singular people). I say it's all about human nature, psychological phenomena and the like, which is something coded by our common gene pool. And that's something that just can't change on any time scale that is meaningful to humanity as it is. (Besides doing this through tech, which isn't a realistic option at all given said human nature. My guess here would be that it's more likely we'll go "the Borg path" than that we would try to re-engineer ourselves to become a more generally friendly and intelligent species.) Like I said, it's imho already proved by time that we don't want to maximize joy for everybody. Quite the contrary, actually! Ever since, we have only sought more efficient ways to extinguish our enemies. That's the one constant in human history. Actually, even most of our technological breakthroughs are direct results of this pursuit. Go figure…

Hmm, now I have the opening scene of "2001: A Space Odyssey" in my head. Not sure why.


> We did not manage to solve even one non-technical problem since we came down from the trees. But that's nothing unexpected actually, given we're a horde of apes. I'm old enough to know that nothing will ever change, as we've already had more than enough time to manage at least some "humanity scale" issues by now. But we as a species are seemingly bound to our ape nature.

Counter-example: The use of slavery as an energy source replaced by the use of electricity as an energy source.

When you want to listen to music, you open up Youtube (or Spotify, or your carefully curated collection of FLAC files, or whatever) and press play.

When Romans wanted to listen to music, they would tell the slaves to pick up the instrument and start playing.

That's a significant change, IMO.

EDIT: also, written language went from non-existent to 8 billion humans, 86% of whom can read, in a few thousand years.


In fact that's how many get into the elite group and jockey for position within it.

"Think about what it takes to claw your way into America’s elite strata. Unless you were born into the upper-middle class, your surest route is to pursue an elite education. To do that, it pays to be exquisitely sensitive to the beliefs and prejudices of the people who hold the power to grant you access to the social and cultural capital you badly want. By setting the standards for what counts as praiseworthy, elite universities have a powerful effect on youthful go-getters."

"As the senior assistant director of admissions at Yale recently observed, “for those students who come to Yale, we expect them to be versed in issues of social justice. We encourage them to be vocal when they see an opportunity for change in our institution and in the world.” Picture yourself as an eager high schooler reading these words, and then jotting down notes. You absorb, assuming you hadn’t already, what it takes to make your way in contemporary elite America. And as you grow older, you lean into the rhetorical gambits that served you so well in the past. You might even build a worldview out of them."

https://www.theatlantic.com/ideas/archive/2018/08/the-utilit...


This comment is interesting because it kind of contradicts itself:

At first you say something akin to “to become part of the elite you need to accept the elite's point of view and beliefs”, citing higher education as a gateway for elites, and then you use the discourse of “social justice” as an example of what young people must believe in to get into higher education. But “social justice” is definitely not a foundational belief of the elite, because otherwise the US wouldn't work the way it does, for the elite is the one who actually shapes the society.

If anything, it's much more likely that Yale wants to attract young people by advertising values the young care about (young people being naturally much more progressive than older ones).


Whether the origin of social justice in universities comes from students or the faculty is irrelevant. The fact is that elite universities in the US stand strongly for social justice, and expressing those values in your essay and interview does increase your chances of admission - especially in comparison to expressing contrary values. Thus, if your only goal is to get in, you are better off emulating the values the university has chosen for itself, regardless of your actual personal beliefs.


Only if they really “stand strongly” and give higher scores to such views, and it's not just marketing. If they advertise social justice but the admissions process is in fact pretty conservative, this posture will not have a dramatic effect.

But anyway, even if this is true, that doesn't mean these are the current Elite's views, just likely the future Elite's views (assuming that higher education plays a significantly bigger role than the set of other Elite beliefs they'll encounter later in their lives, and mimic too in order to be successful, since the process you're describing occurs over and over throughout someone's life).


I think the only thing where we differ is that I consider elite university faculty (and the institution as a whole) to be part of the elite and you don't. I'm not saying either is correct, it's just where you draw the line.


There's a difference between “being part of the elite” and “being representative of the Elite as a whole”. I'm not arguing that university members aren't part of the Elite, but I'd argue that they are a narrow part of a much broader Elite and they are commonly more progressive than the rest of the Elite.


That we can agree on.


Replace "social justice" by "performative social justice" and you have a much more accurate description.


> But “social justice” is definitely not a foundational belief of the elite

The selection committees at universities certainly believe in social justice, or they wouldn't consider race so heavily for admissions.


It’s mostly an excuse to accept fewer Asians, just as personality and interviews used to be one to not accept Jews.


So they care about social justice, which is bad, and they are only pretending to care about social justice, which is bad, and they use it as cover to do the opposite of social justice, which is also bad.

So, higher education is definitely bad, and whether social justice is bad or not depends on whether you open the box and find out if the cat is a Republican or a Democrat?


> So they care about social justice, which is bad, and they are only pretending to care about social justice, which is bad

That's P ^ !P, which is a contradiction. Your reading doesn't make any sense.


I'm pointing out the contradiction in this ongoing "political correctness/woke" backlash.

They're complaining about people doing the right thing. And they know that. So they have to complain they're doing the right thing wrong, or doing the right thing for the wrong reasons. Or the right thing is only a cover for a nefarious plot.

Which only re-inforces that it's the right thing, even before you examine the decades long track record of people arguing against it.


It's not hard to make the argument that it's a cover for a nefarious plot. Given the timing of when these ideas rose to prominence, that it was a reaction to the 'occupy wall street' movement and the prospect of an underclass uniting against the ruling class is a parsimonious explanation. The US ruling class very effectively prevented poor white and poor black citizens from making common cause in the early twentieth century by telling one of them they were better than the other. It's no great leap of the imagination to suppose they'd try the same in the twenty-first.

I don't personally subscribe to that narrative - dividing the losers of the new economic reality might have been an incidental goal, but the idea that social justice is a solution to elite overproduction[1] makes more sense to me. And as a means of curbing elite overproduction I actually like it - just as Edwardian scandal culture selected for self-control, cancel culture selects for a lot of the qualities we'd want in a successful elite. But I've yet to hear a good argument why it's not a cynical play by the elite to divide the poorest citizens against each other. Most who subscribe to the tenets of the system refuse to acknowledge the argument at all.

It's very hard to argue that social justice isn't designed to divide the poorest people against each other, since intended or not that's an effect it has[2]. I imagine the best counterargument to the belief I put forth in my first paragraph would be an alternate explanation for why these ideas arose when they did - does anyone who doesn't believe it's a cover for a nefarious plot have one?

[1] as argued very eloquently by, I believe, Charles Eisenstein in an essay I wish I could locate.

[2] and it puts one in the position of arguing that Yale isn't a racist institution, which is a losing position from the outset.


> for those students who come to Yale, we expect them to be versed in issues of social justice. We encourage them to be vocal when they see an opportunity for change in our institution and in the world.

Would you want anything less in an academic institution? That's the entire point of places of higher learning.


I consider this meaningless fluff, or at the very least trite marketing. It reminds me of management that claims to welcome criticism with open arms and then proceeds to pay lip service to concerns while changing nothing.


> Would you want anything less in an academic institution? That's the entire point of places of higher learning.

Is it really "higher learning" when LITERAL groupthink is required?

(Maybe you're being sarcastic and I'm just not getting it)


I can’t make sense of it either, in fact I would consider it a negative.

A mayor was once asked what he was doing to help minorities in his city. He replied “I don’t have minorities in my city, we are all equal”.


Social justice is not an innocent word. It is charged with a political agenda.


So was "abolition".

You can decide you -disagree- with the ideas of 'social justice', or the implementation, or whatever, but I think you'll need a stronger case than "it's political" to warrant the dismissal of an educational institution trying to incorporate the term.


> warrant the dismissal of an educational institution trying to incorporate the term.

Kids are supposed to be in universities to learn and develop their critical thinking, not to be brainwashed by the most fashionable ideology of the time. Sorry if this is such a provocative line of thinking these days.


The problem with wokeism is that it’s more like a religion than an effective movement for change. Being seen to believe/say the right things is vastly more important than actively helping.

Woke Racism is a really good book about that. Don’t worry it’s written by a black guy who’s been fighting for social justice for decades. The fact I have to say that is one of his points/arguments.


This permeates all online discourse and a good chunk of face-to-face ones, too.

Try this sometime - choose a position (in politics or whatever). Use the terminology and dog whistles of the "other camp", but argue the opposite policy.


The author thinks 'the hard left' think Universities are racist, and they are wrong to think that.

Then he wrote a book called "Woke Racism: How a New Religion Has Betrayed Black America".

So, they were right after all?


Of course justice involves political agendas. How do you think poll taxes and literacy tests were eliminated? We don't even have to go that far back -- the legal recognition (& protection) of same-sex marriages is something even zoomers have a memory of.

People opposed to social justice aren't apolitical or morally neutral. Banning books, getting educators fired, and forcibly de-transitioning children are all charged political agendas in and of themselves. They all fall under the umbrella of "anti-SJW/woke" brigadism.


It is possible to largely be for social justice while disagreeing with some of the beliefs of the modern Social Justice movement. I know scientists that fall on both sides of the debate when it comes to issues like MIT cancelling Dorian Abbot's lecture or Thomas Jefferson HS changing their admissions policies. I don't know any scientists that have any of the "anti-SJW" beliefs that you've listed.

I also know some 18 year old robotics kids that are just truly indifferent about this stuff. I suspect some of them will grow out of it, but even for the ones that don't, I don't think that says much about their qualifications for Yale. Climate change is a massively important issue yet we don't ask random humanities students what they've done to improve climate research.


Banning books and firing teachers is one thing, and I agree we should protect free speech.

However, I'm not sure I'm okay with the idea that feeding complex hormones to children who are not old enough to vote, drive, or get tattoos and/or piercings is wise, and if this is what you mean by "forcibly de-transitioning children", then I'm actually okay with it.

Hormone therapy, like tattoos and/or piercings, has a very permanent effect on the body. I believe such long-term changes should only be allowed if the person is old enough to make the decision as an "informed adult".

Put more bluntly, if all of a sudden some new condition were discovered called "un-inked skin dysphoria", where not having a tattoo is believed to cause deep psychological trauma, I'd still make my kid wait to be a legal adult before getting a tattoo.


Trying to prevent children from permanent chemical/physical hormone changes to treat a mental condition is also a form of social justice.


But it's not really higher learning is it?

As the parent indicated, they're not willing to even communicate with people who don't parrot their own ideas.


What do you consider higher learning? Learning how to screw in a lightbulb, or how to manipulate people into clicking on ads?

> As the parent indicated, they're not willing to even communicate with people who don't parrot their own ideas.

That's the exact opposite of what I quoted. They invite criticism of the status quo, and push for greater justice in the world. Most decent human beings agree with that.


In this particular context we call it "lip service".


I see. I wouldn't know, as I haven't done any research on or looked into the impact of this policy on things at Yale.

Since you seem to know enough to call it lip service, I assume you went there. What do you think should be done to ensure that those in charge actually make the changes that you and your peers advocated for?


[flagged]


We've banned this account for breaking the site guidelines and ignoring our request to stop.

If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.


Care to elaborate at all?


Why did you point out academics and scientific disciplines specifically even though you said it happens everywhere?


Read the article.


> Let's start from the main domains where we know people don't change their minds—politics or religion.

> In politics and in religion, the main driver is social. We believe what the people we love and trust believe.

Except I've changed my mind on both, despite enormous pressure by people I love not to.

Also this article seems to be self-defeating: if we don't change our minds, why attempt to convince us of that fact? If we truly don't change our minds, no amount of evidence will help us change our minds about this topic.


I’m in the same boat as you, having changed my mind on many big, cherished ideas. Obviously, people change their minds.

Having some familiarity with Kahneman, my guess is he means this is a general, but not absolute rule. Sort of like “You can’t convince someone of a wrong if their paycheck depends on them seeing it as a right.” It’s a helpful general, but not absolute rule.


To be truly free, we should be paid by no one.


I don't think it was meant as an absolutist assertion.


Then it shouldn't have been titled that way.


The actual title is "Adversarial Collaboration: An EDGE Lecture by Daniel Kahneman". "People don't change their minds" is just the tagline, and he doesn't even actually say it in the lecture. The focus is much more on adversarial collaboration and why he thinks it is a positive thing for science.

...does that change your mind? ;)


Except he says things like:

> To a good first approximation, people simply don't change their minds about anything that matters.

> Let's start from the main domains where we know people don't change their minds — politics or religion.

Perhaps people familiar with his work interpret his statements differently. It's also understandable if this was a lecture for peers. But I'm a layperson whose only context is this article and the things he says sound absolute.

It's already an era of misinformation and low attention. We don't need content published in a way that may add to it. If there are conditions and nuances involved here, I think writers, especially scientists and professionals, and editors should include them and avoid saying anything that risks misinterpretation.


How do you interpret “To a good first approximation” in this context?


This is called a “rhetorical device” and you’d struggle to find a lengthy article that does not employ rhetorical devices. It’s a big part of how communication works.


I wonder if there is a term for this kind of objection. I’m starting to see more and more people demand quantifiers or conditional qualifiers on statements that shouldn’t reasonably be considered absolutes. And the fact that bringing up that whatever assertion is being made is not a universal absolute doesn’t really add anything to the understanding.

Which statement is better, “people don’t like the smell of skunk spray” or “the vast majority of people do not like the smell of skunk spray”? The latter is more correct, as I am sure there is probably someone out there who is an outlier. But I can’t help but feel like my communication has been made less precise and has been made subject to interpretation as a result.


I advocate for everyone to use basic declarative statements when they are not completely certain, with uncertainty left implied and up to the reader to figure out.

If the writer is certain about something or wants to make an absolute statement, then that's when the verbosity comes into play. That's when the writer can start to add all the fluff words.

That's a convention that's way better, because the large majority of statements and propositions aren't known with certainty. I am exhausted at having to read long winded prose with endless "maybe" "mostly" and so on, with the obligatory "we don't know this for sure" at the end. I don't come away thinking you're humble, I am just annoyed that you've wasted my time when it was blatantly obvious you were saying something not absolute.


I hear you, but I'm also the type of person to admit my own uncertainty. And the people around me get way more uptake of their ideas by just leaving out those words, even when those ideas have weak or terrible foundations.


It's a cultural thing. The cultural norm in academic or technical circles is extreme precision as well as fear of being held to account by enforcers of that norm. It's a spillover of rigorous academic publishing norms into general conversation. It's a fool's errand given that speech is intrinsically imprecise, and given that everyday speech is just a communication tool and not a vehicle for perfect applied epistemology.

We should hedge statements depending on whether the context makes it self-explanatory that there's uncertainty. Any less, and we're being deceptive. Any more, and we're adding words without adding information.

You know what would be cool? Some notation to denote the level of certainty that doesn't take up horizontal space in the sentence. Or an API to a fine tuned GPT-3 that can strip out caveats and hedges from text on a screen.
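Something like this toy sketch could approximate the second idea without any model at all. The `strip_hedges` helper and its word list are purely illustrative assumptions, not a real API; a serious version would need actual NLP:

```python
import re

# A toy inventory of hedging phrases -- illustrative only,
# nowhere near a complete linguistic list.
HEDGES = [
    r"\bmaybe\b,?\s*",
    r"\bperhaps\b,?\s*",
    r"\bI think\b,?\s*",
    r"\bprobably\b,?\s*",
    r"\bsort of\b,?\s*",
    r"\bto a (?:good )?first approximation\b,?\s*",
]

def strip_hedges(text: str) -> str:
    """Remove common hedging phrases, then tidy leftover spacing."""
    for pattern in HEDGES:
        text = re.sub(pattern, "", text, flags=re.IGNORECASE)
    text = re.sub(r"\s{2,}", " ", text).strip()
    # Re-capitalize in case a hedge was stripped from the sentence start.
    return text[:1].upper() + text[1:] if text else text

print(strip_hedges("To a good first approximation, people don't change their minds."))
# -> People don't change their minds.
```

Of course this demonstrates the danger too: the stripped sentence reads far more absolute than the author intended.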


I know very little about it, but it's my understanding the conlang Láadan has ways to express degrees of certainty and degrees of "handedness" to the information (first-hand, etc).

https://en.wikipedia.org/wiki/L%C3%A1adan

Check out the tenses, things like, "Known to speaker because perceived by speaker, externally or internally" vs "Assumed true by speaker because speaker trusts source".

https://laadanlanguage.com


Personally, I am glad that culture is moving away from rewarding confident certain statements that are simply untrue.


Citation needed.

Also - relevance?

The tagline being questioned here is—

“People don't change their minds”

— the problem with which is not that it’s “simply untrue” but that it requires qualification, e.g. in the prose of the linked article it’s written:

“To a good first approximation, people simply don't change their minds about anything that matters”


> Citation needed.

Citation for personal preference? Or for the claim that currently confidence is rewarded more than accuracy?

> the problem with which is not that it’s “simply untrue” but that

The statement is untrue, because people do change their minds, including about stuff that matters. They don't do it often, but they do it.


> is moving away

I think the citation would be for "is moving away" as that's the most bold claim.


That was the premise of the comment I reacted to. It says: I’m starting to see more and more people demand quantifiers or conditional disqualifiers on statements that shouldn’t reasonably be considered as absolutes.

Seeing more and more people demand such qualifiers is culture moving away.


What you call a "rhetorical device" I call clickbait, and frankly I'm getting tired of it.

If the response to every title should be "You shouldn't judge us on the title, everyone knows titles are bullshit", then maybe folks should stop making bullshit titles, or else not feign innocence when they are judged on them.

(Note I understand the submitted title is not the title of the article, which I think is a valid defense in this instance, but my general point still stands)


There's a difference between using a broadly true generalization as the title and using a bullshit statement as the title.


I disagree. What we need less of is endless caveats and hedges in writing that make it impenetrable for no added value. Leave it to the reader to exercise common sense. It's extremely obvious he didn't mean it as an absolute assertion.


You’re doing the caricature attack thing. It’s a safe bet Kahneman is aware both that people sometimes change their minds about big things and that we often change our minds about small things.


What everyone is missing is that changing of the mind is a spectrum process, not a single binary process.

It happens over days, weeks, months, or years, depending on the subject. People who expect that refuting a single point should refute the entire opinion are the problematic ones (with binary thinking).

There are always edge cases; if all it took was a single counter-example, I would be a nihilist with no actual opinions. People change their minds when the bigger picture demands it, not because some young jackass decides binary logic should fully and completely apply to life.


> Also this article seems to be self-defeating: if we don't change our minds, why attempt to convince us of that fact?

I don't think the point was to convince us that we don't change our minds but rather to convince us to do adversarial collaboration instead of angry science.


You changed your mind yourself. Not because someone pleaded you to.


Isn't a big point of the whole discussion that you may change your mind because you changed the group you associate with?


> The power of reasons is an illusion. The belief will not change when the reasons are defeated. The causality is reversed. People believe the reasons because they believe in the conclusion.

This is one of my pet peeves with even casual conversation. Someone goes "I don't like X, because Y." You point out Y is false, and the response is "Well, I still don't like X".

It used to annoy me because I wanted to help people not base their views on bad information, as that's what I would want people to do for me. But now it mostly just annoys me because of how predictable it is. You know the only good answer is to agree with them, so what's the point in even conversing?


It’s actually even worse. For many people, disagreeing with them makes you their enemy, emotionally at least.

So you’ve mostly just been alienating people while thinking you’re trying to help them.

Most people have conversations to feed their emotional needs for social interactions and validation, not to learn anything.

We’re interesting animals.


Because it's predictable you can use that. You just have to use a different form of communication than the "scientific one".

Don't ask why do you not like X because that will get false reasons. Instead, ask how X affects them, how it makes them feel, etc.

But this is more difficult and basically boils down to "become their friend and know them as a person".


There's a simple trick that allows professionals in a given field to change their minds about something fundamental in their field - basically, it's the Feynman approach, where he trained himself to accept the data and analysis even if he didn't like the result, from an aesthetic viewpoint or whatever. A key strategy here is to have something else to put your ego and emotions into, such as a very subjective area like art, music, literature, etc.

Politics and religion might be trickier, as people tend to have external forcing factors (family and work opinions) that respond negatively to a fundamental change of some sort or other, up to being sent to prison for a decade for apostasy (see Saudi Arabia). In such situations, even if people do change their minds, they may not broadcast that change to anyone over fear of retaliation at family gatherings or in workplace environments. Even in cases where a particular member of a political party or religious group is shown to be criminally corrupt, many people will still embrace the politics or religion, on the basis that the individual in question is an outlier, not a representative sample - even after dozens of such examples are exposed. (Practically, this is why I've made it a rule to avoid political or religious discussions at work or at family gatherings, there's just not much value to be found there).


> To a good first approximation, people simply don't change their minds about anything that matters.

I love the model of adversarial collaboration, and I don't dispute the extremely strong influence of social bonds on knowledge formation, but Kahneman is just wrong about this. I know he's wrong because I change my mind relatively frequently, about things of at least some consequence.

For a recent example, I was fairly sure that at the beginning of the pandemic, in the US, widespread, cheap testing would enable us to drive COVID cases near zero, and I wasn't shy about telling everyone I met. Obviously, I was wrong, for a variety of reasons - so I updated.

That intimate experience with uncertainty and updating my own beliefs makes me wonder about Kahneman's research methods. It makes me doubt whether this question is even tractable or whether people are even legible enough to researchers to draw conclusions about this.

Interestingly (and disarmingly), Kahneman is very forthright about the role his own experiences have played in convincing him that people in general don't change their minds. He writes:

> I was also impressed by the fact that Anne and I didn't change our minds. I had read Kuhn and Lakatos about the robustness of paradigms, but I didn't expect that minor theories would also be impervious to evidence.

also:

> I will now share a personal experience of belief perseverance that I cannot shake ... However, it turns out that I only changed my mind about the evidence. My view of how the mind works didn't change at all. The evidence is gone, but the beliefs are still standing. Indeed, I cannot think of a single important opinion that I have changed as a result of losing my faith in the studies of behavioral priming, although they seemed quite important to me at the time.

I think the most likely explanation for this is 1) social desirability bias has a dramatic influence on what information people make accessible about their cognition and 2) Kahneman is unusually stubborn, and his generalization from his own personality to all humankind is a manifestation of the typical mind fallacy. [0]

[0] https://www.lesswrong.com/tag/typical-mind-fallacy


You didn't change your mind, you made a specific prediction (testing will drive US covid cases to zero) and then maintained that position, and weren't shy about sharing it, until it was demonstrated to be wrong. Weren't you stubborn about your prediction, ignoring what other people had to say about it until you were irrefutably proven wrong?


I don't understand what you're suggesting - that changing your mind only counts if there's no evidence?

Even now, I can think of ways to argue that testing could still drive COVID nearly to zero in the US, most of which revolve around the idea that we're not really testing or we're not doing it right. But I think I was wrong, partly because of things other people said earlier during the pandemic, including parallel arguments about why "masks work" were wrong, which I saw right away, though I didn't draw the obvious conclusions related to the effectiveness of testing.

I think Kahneman's position requires creative gerrymandering about what counts as an important belief, and about what counts as persuasion.


That's a recent example of something that you came to quickly (obviously, covid didn't exist before) and so you were relatively easily able to get back out of it.

As an approximation, it works insanely well. And as more and more things becomes "proxies for political decisions" we'll see it ramp up even more.


This turns into "no true Scotsman" pretty quickly.

I was a (relative) loudmouth about my position for over a year, since we didn't have access to home tests in the US until 2022. It was a very heated topic, since I often proposed testing as a preferred alternative to ineffective mask mandates that were popular where I live, and when my wish finally came true, I had to admit I had been wrong.

If changing one's mind about something of that magnitude doesn't count, the principle is badly overstated.


Yet another case of someone being overly confident in an opinion that had no basis in fact.

Imagine if you had looked it up and realized the flu hasn't gone to near zero with testing, so why in the world would you think something as unknown as covid would?

Of course that didn't stop many like you from attacking those who tried to call for caution. So while it's "nice" that you eventually changed your mind, consider being more open-minded at the start.


This is all beside the point at hand, but I suppose it's still about epistemology, so I'd like to stand up for my old position, even though I understand it looks foolish in retrospect.

The fact that so much was unknown was part of the reason I thought testing might have a different effect than the flu. Well into the pandemic, the vast majority of people feared COVID much more than the flu, so it seemed they would be much more likely to take precautions to avoid getting it or spreading it. But that fear attenuated over time, especially as people lost trust in the alarmist if-it-bleeds-it-leads coverage.

Finally, your hostility is unwarranted: I am and have been a strong proponent of vaccines and I was fully on board to "flatten the curve", even for a short period of lockdowns. But I am not and never have been in favor of permanent midnight.


Your position looked foolish then. Consider that maybe you don't do well in situations without perfect information and take that into account in the future.

The two most authoritarian countries on this earth (China and North Korea) were not able to get covid "to near zero". China literally had tanks rolling up and down the streets to enforce its curfews.

The idea that somehow the US (or similar western countries) were ever going to get there was always outlandish, as was the idea that it was worth giving up what makes us different from China and NK to do so (Australia is struggling with this question now).

---

Assumptions you made.

1. That human behavior was anywhere near the major deciding factor in covid transmission approaching zero

2. That covid transmission approaching zero was the most important factor

#2 is akin to always turning right in a vehicle because safety is the only concern. It was NEVER a good thought process.

---

And finally, the point here isn't that there were a lot of unknowns, it's that many people were RIGHT, and they got shouted down by people like you whose thinking was entirely flawed. There's a canyon of difference between being wrong because you just didn't know and being wrong because the entire platform you were basing it on turned out to be wrong.


Who was right?

The shelter-in-place orders were UNQUESTIONABLY damaging to people's mental health and livelihoods. They were what made the difference - the whole "stand six feet apart" thing did very little, and mask mandates were rarely enforced in consistent, useful ways.

Aggressive testing would have allowed various jurisdictions to scale restrictions up and down as necessary to keep hospitals from being overwhelmed while also not wrecking many people's lives.

"Near zero" was probably never possible, but we could have managed the response much better if our government did not say horseshit like "the numbers go up if you measure them!"

If there is another pandemic, or another variant, or whatever - the government will not be able to ask for more shelter-in-place orders. They have burned so much goodwill and energy among the people who spent two years shut-in with deteriorating mental health who had to watch the hamfisted response of the United States to this crisis.


I think you should perhaps cool down.

1) I am admitting that I was wrong and updated. That includes updating on my thought process, just like you are so ardently suggesting.

2) Ninety percent of your assumptions about me are wrong. I'm not sure what you're accusing me of, or what "many people were RIGHT" about.

> There's a canyon of difference between being wrong because you just didn't know and being wrong because the entire platform you were basing it on turned out to be wrong.

What is the "platform" I was basing it on?


If I were to be charitable to the post you're replying to, they're trying to show that your takeaway is wrong - instead of saying you changed your mind based on new evidence you should instead say "I was wrong to think I had the knowledge, training, or skills to make a decision and use that decision to influence others".


Yes, and I agree that it's always important to keep that possibility in mind. I am generally keenly aware of uncertainty and normally shy away from bad bets (predictions).

But taken too far, that principle leads to abdicating our responsibilities as citizens in a democracy, so while I was definitely chastened by the experience I will kindly decline the invitation to shut up: I choose to get better instead. The sum total of my influence on this matter has been, at worst, to encourage quicker and more widespread adoption of home tests, which falls very short of pernicious.

I'm quite puzzled by P5fRxh5kUvp2th's vitriol - maybe he or she is ardently pro-mask? Since I alluded to my low opinion of mask mandates (which I stand behind), that could explain it.


Heh, my knowledge has led me to realize that the best course of action is for me to abdicate any democratic responsibilities because I clearly don’t know anything about anything.


Correct. Imagine if they had recognized they didn't have enough information and chosen not to form an opinion that needed changing once more evidence was gathered?

Imagine if they had chosen not to be a loudmouth (their words), thereby not adding to the cacophony of uselessness that happened around that time?

The problem isn't that they reached the wrong conclusion, it's how __CONFIDENT__ they were in that conclusion despite NO ONE knowing enough about covid at the time.


Do you think that every single facet of the response we had to COVID was correct and useful?

The evidence does suggest that better testing could have reduced the death rate while also allowing for gentler, more calibrated responses.

Near zero? Not in the US - but in the places that DID achieve that feat, testing was critical.

You are being absurdly harsh. Many experts have been trained by decades of "peacetime" to be conservative with their assumptions and requests. Just think back to "stand six feet apart" and "you can take your mask off while eating" to see why that is a problem.


I know this isn’t exactly a counterpoint, since “The belief will not change when the reasons are defeated. The causality is reversed. People believe the reasons because they believe in the conclusion.” — but I have changed my political views dramatically and repeatedly over the course of the past decades. Of course, I could disprove the conclusion by accepting it based on the evidence, but I don’t want the universe to collapse in on itself due to paradox, so I won’t.


"A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it." -- Max Planck


This was not such a problem when science was advancing slowly, but with the acceleration of scientific development it has become a retarding force. If we want to speed up science, we need to find a way of allowing truths to triumph faster.


At least in religion, I think the idea that reasons follow belief rather than precede it is quite acknowledged. I think that's part of what Christianity means when it says that faith is a gift of God. In my own conversion I felt it very intensely: my belief came over me in a sudden, external (or miraculous) way. I then spent 6 months learning everything there was to learn about my newfound faith to placate my reason that I hadn't simply gone mad, but the temporal flow was clearly belief -> reason rather than the reverse.

In some ways it's like falling in love. Most of us don't evaluate a potential partner on a bunch of metrics and then, after they reach a sufficiently high ranking, declare ourselves in love. Rather we fall in love instinctively, then evaluate the person rationally (and sometimes reject them, like when the circumstances aren't right or we can see their flaws even through the rose-tinted glasses, etc.)


In my entire life I've only met one person who was actually convinced by the religious arguments; everyone else dug into them after a similar event. And many people can go wildly too far and try to ascribe everything to it.

To take the love example, the "tricks of the trade" you see bandied about sometimes do NOT work to "make someone love you" but they can help if you have people who have already decided they love each other.


I'm not sure what he means by the naive scientific method. It has always been a fundamentally adversarial process, nothing new is being discovered here.

The scientific method as it has always existed requires two heuristics from you, the scientist: a hypothesis generator and a test generator (plus an all-important simplicity measure). The point of the method is that by making these things work adversarially, we can turn our very messy, fallible creativity, i.e. these heuristics, into facts about what the model isn't. Moreover, no additional hypothesis of what the model is, or test which contradicts it, can derive a false fact of this form, relegating the entire concept of adversary to the meta who-gets-their-name-on-the-paper game.

If Daniel has only recently started "using science adversarially", then he has only recently started doing science.


If you haven’t read The Undoing Project, you should.



I believe this all boils down to mental energy and efficiency.

Our brains are great at pattern matching. We want to look at the landscape, see that there’s no hiding lion, and continue hunting-gathering. Constantly rechecking that there’s no lion is draining, and it also leads to less hunting-gathering. Adult humans really dislike having to do that (we even consider it a sign of mental illness)

The “back and forth” between husband and wife supports this theory. Every time one of them designed one experiment, “the ball was on somebody else’s field” and they could temporarily forget about the problem until the results came back. I am sure they were two very busy individuals and being able to “move on to other things” after designing the next experiment was gratifying.


This whole article reads like a cope for the fallout caused by Thinking Fast and Slow.

I purposely keep my copy faced forward on the bookshelf as a reminder of the dangers of reading too much into anything published in the past 15 years.


I'm curious what you mean by fallout, are you referring to the replication crisis?



I think what the author describes and attributes to "everyone" is a trait of character. Yes, a common one, but not a universal one. From my own experience and that of (some of) my friends, I can say with certainty people do change their minds.


Somewhat surprised to see Jeffery Epstein's "charity" still showing up on the front page of HN


(Edit: I interpreted your comment as asking why we'd allow this post to stay on HN's front page. The answer to that is below. If that wasn't your point, please ignore.)

On HN, we generally go by article quality, not site quality: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so..., and generally try to avoid guilt by association. These points both follow from the principle of intellectual curiosity that we're trying to optimize for: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor....

Reality is messier than simple heuristics, but I think the current article is a good test case. Can one be interested in Kahneman's views on adversarial collaboration and have a curious conversation about them while in no way endorsing the heinous Epstein? Clearly one can.


Bad call, dang.

In this case, I was unaware of the Edge Foundation's Epstein links. The information is salient and useful. Edge itself certainly doesn't make the information evident. C.f., a site search for "Epstein" which ... conspicuously omits any prominent results for a disclosure or apology:

<https://duckduckgo.com/?t=ffab&q=site%3Aedge.org+epstein&ia=...>


I interpreted the GP comment as asking why we allow posts from edge.org on HN. My comment is limited to answering that question. I've edited it to make this clearer.


Fair point.

Has robot_head's comment been downweighted at all independent of member votes? If so, that would also be inappropriate.


It hasn't, but it easily could be and that would be standard HN moderation. We downweight unsubstantive, generic, and/or offtopic comments all the time.

Btw, you rewrote your GP comment after I'd already replied to it. I don't know what you mean by "bad call".


I'll fairly frequently add to, and occasionally clarify, my comments. I wish HN had a better mechanism for indicating this. Usually it's to fix tyops (you've done that on my behalf at least once, with disclosure, in the past), add specific examples (the Edge searches, I believe, in this case), and just plain mind mush. I care enough that I'll comment to clarify even after the edit window's closed: <https://news.ycombinator.com/item?id=32851082>

That said, I try not to be deceptive. I've got a 5 minute delay on my comments as well which I understand gives me a short window to edit before the content is publicly visible.

That said, the "bad call" was my interpretation of considering The Edge's Epstein connections as off-topic, apparently not your own meaning.

Conformant to your apparent intent, "why we allow posts from edge.org on HN" (<https://news.ycombinator.com/item?id=32984632>), I'd be willing to consider a down-ranking or outright ban of the site on the general principle that blind eyes to absolutely destructive criminality should carry a very strong social stigma, regardless of the merits of any given article. Again, the fact that The Edge fails to note this themselves, or make any clearly apparent apology despite playing a huge role in enabling Epstein's behaviour --- something I've been exploring in the course of this exchange and as a result of robot_head's comment --- is quite significant.

There's a huge harm done when people ignore gross harms done to others. All the more so when those doing so benefit directly from doing so.

HN itself seems to have largely ignored this issue until today:

<https://hn.algolia.com/?q=edge%20epstein>

There've been 30 submission from The Edge since 19 October 2020 based on HN's own site link: <https://news.ycombinator.com/from%3Fsite%3Dedge.org> (Archive: <https://archive.ph/8adBm>)

The first post matching that search to break 20 comments (a minimal notability threshold we've discussed before) was posted 5 hours ago as I write:

<https://news.ycombinator.com/item?id=32982155>

I can see a legitimate argument for damnatio memoriae for The Edge.

<https://en.wikipedia.org/wiki/Damnatio_memoriae>


Much better title than "People don't change their minds", which is false


Fun fact, edge.org and the Edge Foundation were generously funded by Jeffrey Epstein -- without him, Edge probably wouldn't exist as it does. https://www.buzzfeednews.com/article/peteraldhous/jeffrey-ep.... From 2001 to 2017, "foundations associated with Epstein provided $638,000 out of a total of almost $857,000 received by Edge over this period." So Epstein funded roughly 75% of Edge's budget.


Didn't he fund a ton of different scientific organizations?


Edge and TED were his favorites tho


I'm pretty sure that is how he built his network, right?


He paid for 100% of the "billionaire dinners" where all of Silicon Valley showed up



