> It's equally easy to imagine a machine that can't be generally intelligent because it can't experience.
I agree with this. I was just pointing out that the parent comment:
>I post this because I think the current neural-net hegemony of AI research underappreciates the importance of consciousness and implicitly assumes that it can just scale its way to AGI
assumes that you need consciousness for AGI, but we actually don't know if that's true.
It's a better bet that it's a requirement, however.
The only known examples we have of general intelligence come with consciousness.
Not to mention, he's positing that AI researchers assume consciousness is unnecessary, and he's saying he disagrees with that position. So saying that he could be wrong just circles back to what AI researchers are already assuming. Basically, disagreeing with his disagreement.
I would expect either some sort of exposition on why, or a third option to be presented.
I also was not a fan of the position of "I can imagine this, therefore it is a valid option". I don't accept the logic that something is necessarily a possibility simply because we can imagine it happening.
It's also true that the only known examples of general intelligence are embodied in meat machines. Is this a prerequisite for AGI? Again, we don't know. I think probably not, but some people think it is, and the debate is unresolved.
Similarly, my argument is that it's premature to assume that consciousness is a prerequisite for AGI.
Finally, I don't think there's anything invalid about disagreeing with someone's disagreement, and then stating the reason why. In fact you also did this in response to my comment!
Until you can show me a counterexample, the null hypothesis is that intelligence requires consciousness. The two sides are not equally weighted. You need to come with something.
In your original response, you stated a reason why you disagree, and I pointed out why that reason is not good. The bar it sets is so low it's not even a threshold.
Other than that, all you've done is reiterate the disagreement.
I've given a counter-hypothetical to point out why your reasoning is flawed. I've illustrated reasons why the discussion is complicated. I'm not disagreeing with your disagreement, I'm pointing out not only why I disagree with your premise, but where I believe your premise doesn't hold.
So far, the only thing you've offered in response is "Well, maybe it's not required". Why do you believe that, beyond "We don't know, and I can imagine it"? And even then, I'd treat your imagination with a little skepticism. Just because you can construct the sentence "AGI does not require consciousness" does not mean you can actually conceptualize what that means.
Mostly because it would require defining both AGI and consciousness in a mutually agreed-upon way. And defining them in a way that would definitely include everything we accept as conscious and exclude everything we accept as lacking consciousness.
To be blunt, the null hypothesis is that consciousness is not required for AGI.
One example of a correlation between AGI and consciousness without any theory (let alone a testable theory) for why there would be causation does not constitute evidence.