Hacker News

In AI finetuning, there's a theory that the model already contains the right ideas and skills, and the finetuning just raises them to prominence. Similarly in philosophic pedagogy, there's huge value in taking ideas that are correct but unintuitive and maybe have 30% buy-in and saying "actually, this is obviously correct, also here's an analysis of why you wouldn't believe it anyway and how you have to think to become able to believe it". That's most of what the Sequences are: they take from every field of philosophy the ideas that are actually correct, and say "okay actually, we don't need to debate this anymore, this just seems to be the truth because so-and-so." (Though the comments section vociferously disagrees.)

And it turns out if you do this, you can discard 90% of philosophy as historical detritus. You're still taking ideas from philosophy, but which ideas matters, and how you present them matters. The massive advantage of the Sequences is they have justified and well-defended confidence where appropriate. And if you manage to pick the right answers again and again, you get a system that actually hangs together, and IMO it's to philosophy's detriment that it doesn't do this itself much more aggressively.

For instance, 60% of philosophers are compatibilists. Compatibilism is really obviously correct. "What are you complaining about, that's a majority, isn't that good?" But what is going on with the other 40%? If you're in that 40%, what arguments might convince you? Repeat to taste.



Additional note: compatibilism is only obviously correct if you accept that "free will" actually just means "the experienced perception/illusion of free will" as described by Schopenhauer.

Use a slightly different definition of free will, and suddenly compatibilism becomes obviously incorrect.

And now it's been reduced to quibbling over definitions, thereby reinventing much of the history of philosophy.


I think free will as philosophically used is inherently self-defeating and one of the largest black marks on the entire field, to be fair.


Why is that?

Here's what we know:

- We appear to experience what we call free will from our own perspective. This isn't strong evidence, obviously.

- We are aware that we live in a world full of predictable mechanisms of varying levels of complexity, as well as fundamentally unpredictable mechanisms like quantum mechanics.

- We know we are currently unable to fully model our experience and predict next steps.

- We know that we don't know whether consciousness, as an emergent property of our brains, is fully rooted in predictable mechanisms or has some degree of unknowability to it.

So really "do we have free will" is a question that relies on the nature of consciousness.


No, I disagree with this conclusion. The problem is very much solvable if one simply keeps the map/territory split in mind and, for everything, asks oneself: "am I perceiving reality, or am I perceiving a property of my brain?" That is, we "experience free will" - which is to say, our brain reports to us that it evaluated multiple possible behaviors and chose one. However, this does not indicate that multiple behaviors were physically possible; it only indicates that multiple behaviors were cognitively evaluated. In fact, because any deciding algorithm has to evaluate a behavior list or even a behavior tree, there is no reason at all to expect this to have any connection to physical properties of the world, such as quantum mechanics.
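The "deciding algorithm" picture above can be made concrete with a toy sketch (all names here are hypothetical, chosen for illustration; this is not anyone's actual model of cognition): a fully deterministic procedure can still internally represent several candidate behaviors before emitting a single choice.

```python
# Toy illustration: a deterministic decision procedure that nonetheless
# "evaluates multiple possible behaviors" internally. From the inside,
# the algorithm represents alternatives; from the outside, the same
# inputs always produce the same output - no indeterminism involved.

def choose(behaviors, utility):
    """Score every candidate behavior and return the highest-scoring one."""
    evaluations = {b: utility(b) for b in behaviors}  # the considered "options"
    return max(evaluations, key=evaluations.get)      # the single actual outcome

if __name__ == "__main__":
    actions = ["stay home", "go for a walk", "read a book"]
    prefs = {"stay home": 1, "go for a walk": 3, "read a book": 2}
    print(choose(actions, prefs.get))  # deterministic: always "go for a walk"
```

The point of the sketch is only that "evaluated several options" is a fact about the algorithm's internal bookkeeping, not evidence that more than one output was physically possible.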

(The relevant LessWrong sequence is "How An Algorithm Feels From Inside" https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-alg... which has nothing to do with free will, but does make very salient the idea that perceptions may be facts about your cognition as easily, if not more easily, as facts about reality.)

And when you have that view in mind, you can ask: "wait, why would the brain be sensitive to quantum physics? It seems to be a system extremely poorly suited to doing macroscopic quantum calculations." Once the alternative theory of "free will is a perception of your cognitive algorithm" is salient, you will notice that the entire free-will debate will begin to feel more and more pointless, until eventually you no longer understand why people think this is a big deal at all, and then it all feels rather silly.


> However, this does not indicate that multiple behaviors were physically possible

Okay, fine, but what indicates that multiple behaviors were not physically possible?

Our consciousnesses are emergent properties of networks of microscopic cells, and of chemicals moving around those cells at a molecular level. It seems perfectly reasonable that our consciousness itself could be subject to quantum effects that belie determinism, because it operates at a scale where those effects are noticeable.


> Okay, fine, but what indicates that multiple behaviors were not physically possible?

I don't follow. Whether multiple behaviors are possible or not possible, you have to demonstrate that the human feeling of free-will is about that; you have to demonstrate that the human brain somehow measures actual possibility. Alternatively, you have to show that the human cognitive decision algorithm is unimplementable in either of those universes. Otherwise, it's simply much more plausible that the human feeling of freedom measures something about human cognition rather than reality, because brains in general usually measure things around the scale of brains, not non-counterfactual facts about the basic physical laws.


Well, no, your hypothesis is not automatically the null hypothesis that stands unless someone else clears every goalpost, no matter where you move it.

I know you thought about it for a moment and thereby had an obvious insight that 40% of the profession has somehow missed (just define terms to mean things that would make you correct, and declare yourself right! Easy!), but it's not quite that simple.

The argument you just made basically boils down to "well, I don't think it works that way, even though no one knows. But also it's obvious, and I'm going to arbitrarily assign probabilities to things and baselessly declare certain things likely".

If you read elsewhere in this thread then you might find that exact approach being lampooned :-)


Okay, you know what?

I'll let my argument stand as written, and you can let yours stand as written, and we'll see which one is more convincing. I don't feel like I have any need to add anything.

edit: Other than, I guess, that this mode of argument not being there is what made LessWrong attractive. "But what's the actual answer?!"


The attraction of LessWrong is that they take unanswerable questions with unknowable answers and assign an "actual answer" to them?

That, my friend, is a religion.


The attraction is that they say "actually, this has an answer, and I can show you why" and then they actually do so.

Philosophy is over-attached to the questions, to the point of rejecting a commitment to an answer even when it stares them in the face. The point of the whole shebang was to find out what the right answer was. All else is distraction, and philosophy has a lot of distraction.


> and I can show you why

But you haven't, you've just said "I have decided that proposition X is more likely than proposition Y, and if we accept X as truth then Z is the answer".

You've not shown that X is more likely than Y, and you have certainly not shown that it must be X and not Y.

Your statements don't logically follow. You said:

> it's simply much more plausible that the human feeling of freedom measures something about human cognition rather than reality

You said your opinion about some probabilities, and somehow drew the conclusion that it was "obvious that 40% of a field's practitioners are wrong".

Someone saying "actually, this has an answer, and I can show you why" to a currently fundamentally unanswerable question is simply going off faith and is literally a religion. It's choosing to believe in the downstream implication despite no actual foundation existing.


There isn't a single philosophical meaning. You are probably thinking of libertarian free will, which isn't obviously false, because determinism isn't obviously true.


> And it turns out if you do this, you can discard 90% of philosophy as historical detritus

This is just the story of the history of philosophy. Going back hundreds of years. See Kant and Hegel for notable examples.


Sure, and I agree that LW is doing philosophy in that sense.


Sure, I just object to the characterisation of "actually correct", as though each of those ideas has not gone back and forth on philosophers thinking that particular idea is "actually correct" for centuries. LW does not appear to have much if any novel insight; just much better marketing.


I think philosophy has gone back and forth for so long that they're now, as a field, pathologically afraid of actually committing.

The AI connection with LessWrong means that the whole thing is framed with a backdrop of "how would you actually construct a mind?" That means you can't just chew on the questions, you have to actually commit to an answer and run the risk of being wrong.

This teaches you two things:

1. How to figure out what you actually believe the answer is, and why, and make sure that this is the best answer you can give;

2. How to keep moving when you notice that you made a misstep in step 1.


>actually, this is obviously correct

Nobody knows what's actually correct, because you have to solve epistemology first, and you have to solve epistemology to solve epistemology, and so on.

>And it turns out if you do this, you can discard 90% of philosophy as historical detritus

Nope. For instance, many of the issues Kant raised are still live.

>The massive advantage of the Sequences is they have justified and well-defended confidence

Nope. That would entail answering objections, which EY doesn't stoop to.

>Compatibilism is really obviously correct

Nope. It depends on a semantic issue: what "free will" means.



