
I had a lot of questions about similar issues, mostly surrounding the question of "what is the nature of 'subjective'?"

The article deals with this a bit but not as much as I would like — maybe because of the state of the literature?

The linked paper by Safranek et al. on observational entropy was sort of interesting, noting how a choice of coarse graining into macrostates can lead to different entropies. But it doesn't really address why you'd choose a particular coarse graining or macrostate in the first place, which seems critical to all of this.
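To make the coarse-graining dependence concrete, here's a toy sketch (not the Safranek et al. formalism; the objects and counts are invented for illustration): the same set of equiprobable "microstates" coarse grained by shape gives a different Shannon entropy than when it's coarse grained by color.

    import math
    from collections import Counter

    # Toy "microstates": objects carrying a shape and a color attribute.
    # Attributes and counts are invented for illustration; each microstate
    # is taken to be equally likely.
    microstates = [("circle", "red"), ("circle", "blue"),
                   ("square", "red"), ("square", "green"),
                   ("circle", "red"), ("square", "blue")]

    def coarse_grained_entropy(states, key):
        # Shannon entropy (in bits) of the macrostate distribution induced
        # by grouping equiprobable microstates under the chosen key.
        counts = Counter(key(s) for s in states)
        total = sum(counts.values())
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    print(coarse_grained_entropy(microstates, key=lambda s: s[0]))  # by shape: 1.0 bit
    print(coarse_grained_entropy(microstates, key=lambda s: s[1]))  # by color: ~1.46 bits

The point is just that the number isn't a property of the system alone; it's a property of the system plus the partition you impose on it.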

In the information theory literature, there's an information cost (in a Kolmogorov complexity sense) associated with choosing a given coarse graining or macrostate in the first place: in their example, choosing shape or color as the property to define entropy against. So my intuition is that observational entropy is really one piece of a larger informational cost, which also includes the cost of specifying the coarse graining itself.
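One way to cash out that intuition is an MDL-flavored two-part cost: the bits needed to specify the coarse graining, plus the entropy measured under it. Kolmogorov complexity is uncomputable, so the sketch below uses the compressed length of a made-up textual specification as a crude proxy and reuses the toy entropy values from the snippet above; it's an illustration of the bookkeeping, not a claim about how the real accounting should go.

    import zlib

    def description_cost_bits(spec: str) -> int:
        # Crude stand-in for the Kolmogorov complexity of naming a coarse
        # graining: the bit length of a zlib-compressed description. K(x)
        # itself is uncomputable, so this is only an upper-bound-style proxy.
        return 8 * len(zlib.compress(spec.encode()))

    # Hypothetical textual specifications of two coarse grainings.
    shape_spec = "group particles by shape: {circle, square}"
    color_spec = "group particles by color: {red, blue, green}"

    # Two-part cost: bits to specify the coarse graining, plus the entropy
    # measured under it (toy values from the previous snippet).
    print(description_cost_bits(shape_spec) + 1.0)
    print(description_cost_bits(color_spec) + 1.46)

A finer-grained partition generally costs more bits to specify, which is the kind of trade-off the "larger informational cost" intuition seems to point at.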

This loops back to what they discuss later about the costs of observation and information bottlenecks, but the article (and the pieces it links to) doesn't really address this issue of differential macrostate costs explicitly. It's a bit unclear to me: there's discussion that a thermodynamic cost exists, but not of how that cost accrues, or of why you'd adopt one macrostate over another. (Note that Alice and Bob in their subjectivity example are defined by different physical constraints, and can be thought of as two observational systems with different constraints.)

It's also interesting to think about it from another perspective. Say you have a box containing a large number of particles in a "purely random" state. In that scenario it doesn't really matter what Alice and Bob see, only the number of particles and so on. The entropy with regard to, say, color will depend on the number of colors, not on the positions of the particles, because the state is maximally entropic. In reorganizing the particles with respect to a certain property, Alice and Bob each decrease the entropy from that purely random state by an amount that I think can be related to the information needed to return the particles to the purely random state.
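To put a rough number on that last step (the counts are invented): for a maximally random box, the number of arrangements compatible with a given color composition is a multinomial coefficient, so its log depends only on the color counts, not on where any particle sits, and that log is roughly the entropy drop when you sort the box by color.

    import math

    # Back-of-envelope version of the "box of purely random particles" picture.
    # With positions maximally random, the color entropy depends only on how
    # many particles of each color there are, not on where they sit.
    color_counts = {"red": 4, "blue": 3, "green": 3}   # invented counts
    n = sum(color_counts.values())

    # Number of distinguishable arrangements with these color counts
    # (a multinomial coefficient), and its log2: the bits needed to pick out
    # one particular arrangement, e.g. the fully color-sorted one.
    arrangements = math.factorial(n)
    for c in color_counts.values():
        arrangements //= math.factorial(c)

    print(arrangements, math.log2(arrangements))   # 4200 arrangements, ~12 bits

Those ~12 bits are also roughly the information you'd need to undo the sort and get back to a "typical" random arrangement, which is the connection I was gesturing at.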

A lot of the article has links to other scientific and mathematical domains. Some of the material about the information costs of observation has ties to the math and computer science literature through Wolpert (2008), who approaches it from a computational perspective, and later Rukavicka. There are similar ideas in the neuroscience literature about the efficiency of entropy reduction (I'm forgetting the names of some of the people involved there).

I really liked this Quanta piece, but there's a lot of fuzziness around certain areas, and I couldn't tell whether that was due to fuzzy writing, the fuzzy state of the literature, or my own poor understanding.


