I was about to add a similar comment. Definitely interested to read his evaluation and whether there is more hype than substance here, though I'm guessing it may take some time.
If that's the case, the quantum-computing endeavors are still worthwhile: even if we fail to scale them, we may develop another piece of a more complete model of reality along the way.
> Fix death please, that's a good way to prevent the fear.
Even if you were able to postpone your non-existence to some late stage of the universe, it's all but assured you will have to end eventually, in light of the postulated cosmological denouements. In other words, physics dictates that you will not exist forever.
That being said, it would be nice to know it's possible to get at least a few hundred years of healthy and happy living in. Then again, would the fear of the end grow even greater?
If you redefine "you" and "exist", I'm not so sure. What if "I" am just my conscious being, and what if I "exist" across many systems that each have a ~0% chance of failing catastrophically?
Anyway, this is getting all sci-fi and silly. My point is really that death sucks, that saying "oh, but you can't fear death because it's not a thing" doesn't seem helpful, and that we can probably do massively better than a ~125-year max lifespan.
A world of only perfect beings does sound pretty boring. I'm not sure if anything worthwhile would actually happen in such a place. That being said, mental (rational) tools that help one recognize the origin of anxiety and reduce its deleterious effects are quite welcome. I think this article could help programmers/founders do just that.
> nominally has a meaning of merely being aware of, and responding to, the environment, but this is not the self-aware consciousness that is currently unexplained
I was thinking of something simpler: the flexibility of meaning of the word 'consciousness' -- for example, in Merriam-Webster:
1a: the quality or state of being aware *especially of something within oneself*. (my emphasis.)
If you regard 'especially' as indicating that the emphasized clause is optional (edit: or maybe even if not), you could arguably say (and people do) that anything alive fits this definition, as do even things like thermostats. This, however, is not the consciousness that the science and philosophy of mind are concerned with. They are specifically concerned with the sort of consciousness that we exhibit: one that is aware that it is an agent in the world, aware that it has this awareness, and aware that other people are also aware in this sense.
Update: I may have misunderstood your question. The nature of consciousness in other animals is also an open issue and is being studied. It would be very odd indeed if there were nothing like it in any other species, but, at the same time, no other living species on earth has it to the extent we do. I don't think these observations justify the extremely broad definitions of panpsychism, and I do not think panpsychism helps with studying these animals any more than it helps with humans.
Nagel's point is, I think, rather tangential - he is arguing that we probably will never know what it is like to be a bat, as it is likely too far from any experience we could have. I think he is probably right, and again, panpsychism will not change that.
Nagel's was a negative assertion; his viewpoint is the one that needs defending. How can we be sure we can't understand the bat's perspective?
If taken to its logical end, constructs like the crayon box and human fucking empathy are off the table too - I cannot and do not have any idea of who or what you are being, therefore I cannot comprehend consciousness? What?
> then you get a picture of an extraordinarily complicated system, and the nuts and bolts of exactly what that system is running on fall away into irrelevance
I think this is a common failing of thought experiments. Gedankenexperiments help us ask better questions and design better real experiments, but if you try to use them to draw fundamental conclusions about reality a priori, you're likely going to fail.