
> As physicists have worked to unite seemingly disparate fields over the past century, they have cast entropy in a new light — turning the microscope back on the seer and shifting the notion of disorder to one of ignorance. Entropy is seen not as a property intrinsic to a system but as one that’s relative to an observer who interacts with that system.

Maybe I have the benefit of giant shoulders, but this seems like a fairly mundane observation. High-entropy states are those macrostates which have many corresponding microstates. The classification of several microstates into the same macrostate, is this not a distinctly observer-centred function?

I.e. if I consider 5 or 6 to be essentially the same outcome of the die, then that will be a more probable (higher-entropy) outcome. But that's just due to my classification, not inherent to the system!
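To make the coarse-graining point concrete, here is a toy sketch in Python (the grouping of faces is arbitrary by construction, which is exactly the point):

```python
import math

# Microstates: the six faces of a fair die, each with probability 1/6.
# Observer B treats faces 5 and 6 as "the same outcome", so the
# macrostates are {1}, {2}, {3}, {4}, {5,6}.
macrostates = {"1": [1], "2": [2], "3": [3], "4": [4], "5-or-6": [5, 6]}

for name, faces in macrostates.items():
    W = len(faces)       # number of microstates in this macrostate
    prob = W / 6         # probability of observing the macrostate
    S = math.log(W)      # Boltzmann entropy S = k ln W (with k = 1)
    print(f"{name}: W={W}, p={prob:.3f}, S={S:.3f}")

# "5-or-6" is both the most probable macrostate and the only one with
# nonzero entropy (ln 2), purely because of how the observer grouped
# the microstates.
```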



> Maybe I have the benefit of giant shoulders, but this seems like a fairly mundane observation.

It is not mundane, and it is also not right, at least for entropy in Physics and Thermodynamics.

> High-entropy states are those macrostates which have many corresponding microstates.

That is how you deduce entropy from a given model. But entropy is also something that we can get from experimental measurements. In this case, the experimental setup does not care about microstates and macrostates, it just has properties like enthalpy, heat capacity and temperature.

We can build models after the fact and say that e.g. the entropy of a given gas matches that predicted by our model for ideal gases, or that the entropy of a given solid matches what we know about vibrational entropy.

That’s how we say that e.g. hydrogen atoms are indistinguishable. It’s not that they become indistinguishable because we decide so. It’s because we can calculate entropy in both cases and reality does not match the model with distinguishable atoms.

> The classification of several microstates into the same macrostate, is this not a distinctly observer-centred function?

It seems that way if we consider only our neat models, but it fails to explain why experimental measurements of the entropy of a given material are consistent and independent of whatever model the people doing the experiment were operating on. Fundamentally, entropy depends on the probability distribution, not the observer.


> But entropy is also something that we can get from experimental measurements. In this case, the experimental setup does not care about microstates and macrostates, it just has properties like enthalpy, heat capacity and temperature. […] experimental measurements of the entropy of a given material are consistent and independent of whatever model the people doing the experiment were operating on. Fundamentally, entropy depends on the probability distribution, not the observer.

https://bayes.wustl.edu/etj/articles/gibbs.vs.boltzmann.pdf

Thermodynamics does have the concept of the entropy of a thermodynamic system; but a given physical system corresponds to many different thermodynamic systems. […] It is clearly meaningless to ask, “What is the entropy of the crystal?” unless we first specify the set of parameters which define its thermodynamic state. […] There is no end to this search for the ultimate "true" entropy until we have reached the point where we control the location of each atom independently. But just at that point the notion of entropy collapses, and we are no longer talking thermodynamics! […] From this we see that entropy is an anthropomorphic concept, not only in the well-known statistical sense that it measures the extent of human ignorance as to the microstate. Even at the purely phenomenological level, entropy is an anthropomorphic concept. For it is a property, not of the physical system, but of the particular experiments you or I choose to perform on it.


Temperature is not just an anthropomorphic concept, and temperature and entropy are directly linked concepts: you can't have one without the other.


It is though. Temperature is an aggregate summary statistic used when the observer doesn't know the details of individual particles. If you did know their positions and velocities, you could violate the laws of entropy, as Maxwell's thought experiment demonstrated in 1867: https://en.wikipedia.org/wiki/Maxwell%27s_demon


Maxwell's demon is a thought experiment that involves a magical being. It's an interesting thought experiment, and it provides some insights about the relationship between macro states and micro states, but it's not actually a refutation of anything, and it doesn't describe any physically realizable system, even in theory. There is no way to build a physical system that can act as the demon, and so it follows that entropy is not actually dependent on the information actually available to physical systems.

This is obviously visible in the observer-independence of many phenomena linked to temperature. A piece of ice will melt in a large enough bath of hot water regardless of whether you know the microstates of every atom in the bath and the ice crystal.


"large enough" and "bath" are doing a lot of work here. I don't think it's necessarily about size. It's about that size and configuration implying the particle states are difficult to know.

If, for example, you had a large bath but the states were knowable because they were all correlated and following a functionally predictable path (say, all moving away from the ice cube, or all orbiting it in an orderly way in a centrifuge such that they never quite touched it), it wouldn't melt.


The ice cube would melt, it would just take longer, heat does transfer through vacuum via radiation. And no, you can't control the positions of the electrons to stop that radiation from happening, it is a quantum effect not something you can control.


Temperature and entropy are directly linked; it follows that temperature is also anthropomorphic. Although I think "observer-dependent" would be a better way to put it; it doesn't have to specifically be relative to a human.


> It is not mundane, and it is also not right, at least for entropy in Physics and Thermodynamics.

Articles about MaxEnt thermodynamics by E.T. Jaynes where he talks about the “anthropomorphic” nature of entropy date back to 1960s. How is that not right in physics?


By the look of it, it is another misguided attempt to apply information theory concepts to thermodynamics. Entropy as information is seductive because that way we think we can understand it better, and it looks like it works. But we need to be careful because even though we can get useful insights from it (like Hawking radiation) it’s easy to reach unphysical conclusions.

> How is that not right in physics?

Why would it be right? Was it used to make predictions that were subsequently verified?


That misguided attempt has resulted in full graduate courses and textbooks. Maybe one look is not all that it takes to fully assess its worthiness.

https://www.amazon.com/Microphysics-Macrophysics-Application...

https://arxiv.org/pdf/cond-mat/0501322


Plenty of bad stuff made its way into textbooks, I saw some really dodgy stuff at uni. And for every uncontroversial statement we can find a textbook that argues that it is wrong. Sorting the good from the bad is the main point of studying science and it is not easy. What is also important is that approaches that can work in some field or context might be misleading or lead to wrong outcomes in others. Information theory is obviously successful and there is nothing fundamentally wrong with it.

Where we should be careful is when we want to apply some reasoning verbatim to a different problem. Sometimes it works, and sometimes it does not. Entropy is a particularly good example. It is abstract enough to be mysterious for a vast majority of the population, which is why these terribly misleading popular-science articles pop up so often. Thinking of it in terms of information is sometimes useful, but going from information to knowledge is a leap, and then circling back to Physics is a bit adventurous.


Plenty of bad stuff makes its way into hackernews comments as well. Saying that “it could be wrong” doesn’t really support the “it’s wrong” claim, does it?


I did not make that point, though. I rejected an appeal to authority because something was in a textbook. I made no comment about the validity of that person’s work in his field, I just pointed out that this transferability was limited.


My bad, I thought you considered Balian’s work another misguided attempt to apply information theory concepts to thermodynamics.

For the record, this is the abstract of the “Information in statistical physics” article: “We review with a tutorial scope the information theory foundations of quantum statistical physics. Only a small proportion of the variables that characterize a system at the microscopic scale can be controlled, for both practical and theoretical reasons, and a probabilistic description involving the observers is required. The criterion of maximum von Neumann entropy is then used for making reasonable inferences. It means that no spurious information is introduced besides the known data. Its outcomes can be given a direct justification based on the principle of indifference of Laplace. We introduce the concept of relevant entropy associated with some set of relevant variables; it characterizes the information that is missing at the microscopic level when only these variables are known. For equilibrium problems, the relevant variables are the conserved ones, and the Second Law is recovered as a second step of the inference process. For non-equilibrium problems, the increase of the relevant entropy expresses an irretrievable loss of information from the relevant variables towards the irrelevant ones. Two examples illustrate the flexibility of the choice of relevant variables and the multiplicity of the associated entropies: the thermodynamic entropy (satisfying the Clausius–Duhem inequality) and the Boltzmann entropy (satisfying the H-theorem). The identification of entropy with missing information is also supported by the paradox of Maxwell’s demon. Spin-echo experiments show that irreversibility itself is not an absolute concept: use of hidden information may overcome the arrow of time.”


"We need to be careful", agreed. "It's not mundane", agreed. (It's mundane in information theory because that's how they define entropy.)

"It's [...] not right" (from your first comment), can you give/link a specific physical example? It would be very cool to have a clear counterexample.


> can you give/link a specific physical example?

About the lack of subjectivity of the states? If we consider any bit of matter (for example a crystal or an ideal gas), the macrostate is completely independent of the observer: it’s just the state in which the law of physics say that bit of matter should be. In an ideal gas it is entirely determined by the pressure and volume, which are anything but subjective. For a crystal it is more complex because we have to account for things like its shape but the reasoning is the same.

Then, the microstates are just accessible states, and this is also dictated by Physics. For example, it is quite easy to see that a crystal has fewer accessible states than a gas (the atoms’ positions are constrained and the velocities are limited to the crystal’s vibration modes). We can calculate the entropy in the experimental conditions within that framework, or in the case of correlated liquids, or amorphous solids, or whatever. But the fact that we can come up with different entropies if we make different hypotheses does not mean that any of these hypotheses is actually valid. If we measure the entropy directly we might have a value that is consistent with several models, or none. The actual entropy is what we observe, not the theoretical scaffolding we use to try to make sense of it. And again, this is not subjective.
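To illustrate the "fewer accessible states" point, here is a toy lattice count; the site and particle numbers are made up, only the comparison matters:

```python
import math

# Toy model: N indistinguishable particles placed on a lattice of M sites.
# A "gas" can occupy any site; a "crystal" is constrained to a small subset
# of the sites. Multiplicity is Omega = C(sites, N), and the entropy is
# S = k ln Omega (taking k = 1). All numbers here are illustrative only.
N = 100
M_gas = 10_000
M_crystal = 1_000   # positions constrained to 10% of the sites

def entropy(sites, particles):
    """Entropy in units of k, from the number of accessible configurations."""
    return math.log(math.comb(sites, particles))

S_gas = entropy(M_gas, N)
S_crystal = entropy(M_crystal, N)

print(f"S_gas = {S_gas:.1f} k, S_crystal = {S_crystal:.1f} k")
# The gas, with more accessible states, has the higher entropy.
```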


Agreed, sure. Of course it's not subjective.

Is there a concrete physical example where the information-theory definition of entropy conflicts with experiment?


Others in this thread do believe that entropy is a subjective measure, or more precisely a measurement of the information that an observer has about a system instead of a measurement about the state of the system itself. Information theory easily leads to this interpretation, since for example the informational content of a stream of bytes can very much be observer-dependent. For example, a perfectly encrypted stream of all 1s will appear to have very high entropy for anyone who doesn't know the decryption process and key, while in some sense it will be interpreted as a stream with entropy 0 by someone who knows the decryption process and key.

Of course, the example I gave is arguable, since the two observers are not actually observing the same process. One is looking at enc(x), the other is looking at x. They would both agree that enc(x) has high entropy, and x has low entropy. But this same kind of phenomenon doesn't work with physical entropy. A gas is going to burn my hand or not regardless of how well I know its microstates.
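A quick sketch of that encryption example, using a toy XOR stream cipher (XOR with a keystream from a seeded PRNG) purely as a stand-in for real encryption:

```python
import math
import random
from collections import Counter

def byte_entropy(data):
    """Empirical Shannon entropy in bits per byte."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# The plaintext: a stream of identical bytes ("all 1s").
plaintext = bytes([0xFF] * 100_000)

# Toy stream cipher: XOR with a keystream from a seeded PRNG.
# (Not real cryptography; just an illustration.)
rng = random.Random(42)  # the "key"
ciphertext = bytes(b ^ rng.randrange(256) for b in plaintext)

print(byte_entropy(plaintext))   # 0.0 bits/byte: one symbol, no uncertainty
print(byte_entropy(ciphertext))  # close to 8 bits/byte: looks maximally random

# Someone holding the key can invert the XOR and recover the zero-entropy
# plaintext; someone without it sees a near-maximum-entropy stream.
```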


I feel that my careful distinction between "information theory" entropy and "physical" entropy seems to vanish in your first sentence.


As far as I understand it, the original thread was about whether this distinction exists at all. That is, my understanding is that the whole thread is about opposition to the Quanta article's assertion, which suggests that thermodynamic entropy is the same thing as information-theory entropy and that it is not a "physical" property of a system, but a quantity which measures the information that an observer has about said system.

If you already agree that the two are distinct measures, I believe there is no disagreement in the sub-thread.


>A gas is going to burn my hand or not regardless of how well I know its microstates.

Or perhaps that's the secret of the Shaolin monks!


> Fundamentally, entropy depends on the probability distribution, not the observer.

Right but a probability distribution represents the uncertainty in an observer so there is no inconsistency here (else you're falling for the Mind Projection Fallacy http://www-biba.inrialpes.fr/Jaynes/cc10k.pdf).


This is only true if one assumes that (a) physical processes are fundamentally completely deterministic, and that (b) it is possible to measure the initial state of a system to at least the same level of precision as the factors that influence the outcome.

Assumption (a) is currently believed to be false in the case of measuring a quantum system: to the best of our current knowledge, the result of measuring a quantum system is a perfectly random sampling of a probability distribution determined by its wave function.

Assumption (b) is also believed to be false, and is certainly known to be false in many practical experiments. Especially given that measurement is a time-consuming process, events that happen at a high frequency may be fundamentally unpredictable on computational grounds (that is, measuring the initial state to enough precision and then computing a probability may be physically impossible in the time that it takes for the event to happen - similar to the concept of computational irreducibility).

So, even in theory, the outcomes of certain kinds of experiments are probabilistic in a way that is entirely observer-independent; this is especially true in quantum mechanics, but it is also true in many types of classical experiments.


The idea that entropy represents the uncertainty in our description of a system works perfectly well in quantum statistical mechanics (actually better than in classical statistical mechanics where the entropy of a system diverges to minus infinity as the precision of the description increases). The entropy is zero when we have a pure state, a state perfectly defined by a wave function, and greater than zero when we have a mixed state.
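The pure-vs-mixed distinction can be computed directly; a minimal sketch with numpy (the states chosen are just the usual textbook examples):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho ln rho), computed from the eigenvalues of rho."""
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > 1e-12]  # convention: 0 ln 0 = 0
    return float(-np.sum(eigvals * np.log(eigvals)))

# Pure state |+> = (|0> + |1>)/sqrt(2): a perfectly defined wave function.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho_pure = np.outer(plus, plus)

# Maximally mixed state: complete classical ignorance about |0> vs |1>.
rho_mixed = np.eye(2) / 2

print(von_neumann_entropy(rho_pure))   # 0: nothing missing from the description
print(von_neumann_entropy(rho_mixed))  # ln 2: exactly one bit missing
```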

Regarding your second point, how does the practical impossibility of measuring the initial state invalidate the idea that there is uncertainty about the state?


The post I was replying to claims that probability is not a physical property of an observed system, that it is a property of an observer trying to observe a system. The examples given in the quoted link all talk about experiments like rolling dice or tossing coins, and explain that knowledge of mechanics shows that these things are perfectly predictable mechanical processes, and so any "probability" we assign to them is only a measure of our own lack of knowledge of the result, which is ultimately a measure of our lack of knowledge of the initial state.

So, the link says, there's no such thing as a "fair coin" or a "fair coin toss", only questions of whether observers can predict the state or not (this is mostly used to argue for Bayesian statistics as the correct way to view statistics, while frequentist statistics is considered ultimately incoherent if looked at in enough detail).

I was pointing out however that much of this uncertainty in actual physics is in fact fundamental, not observer dependent. Of course, an observer may have much less information than physically possible, but it can't have more information about some system than a physical limit.

So, even an observer that has the most possible information about the initial state of a system, and who knows the relevant laws of physics perfectly, and has enough compute power to compute the output state in a reasonable amount of time, can still only express that state as a probability. This probability is what I would consider a physical property of that system, and not observer-dependent. It is also clearly measurable by such an observer, using simple frequentist techniques, assuming the observer is able to prepare the same initial state with the required level of precision.


> an observer may have much less information than physically possible, but it can't have more information about some system than a physical limit.

Still the probability represents the uncertainty of the observer. You say that "the most possible information" is still not enough because "measurement is a time-consuming process" and it's not "possible to measure" with infinite precision. I'd say that you're just confirming that "the lack of knowledge" happens but that doesn't mean the physical state is undefined.

You call that uncertainty a property of the system but that doesn't seem right. The evolution of the system will happen according to what the initial state was - not according to what we thought it could have been. Maybe we don't know if A or B will happen because we don't know if the initial state is a or b. But if later we observe A we will know that the initial state was a. (Maybe you would say that at t=0 the physical state is not well-defined but at t=1 the physical state at t=0 becomes well-defined retrospectively?)


I would say that any state that we can't measure is not a physical state. So if we can only measure, even in theory, a system up to precision dx, then the system being in state a means that some quantity is x±dx. If the rules say that the system evolves from states where y>=x to state A, and from states where y<x to state B, then whether we see it in state A or state B, we still only know that it was in state a when it started. We don't get any extra retroactive precision.

This is similar to quantum measurement: when we see that the particle was here and not there, we don't learn anything new about its quantum state before the measurement. We already knew everything there was to know about the particle, but that didn't help us predict more than the probabilities of where it might be.


> I would say that any state that we can't measure is not a physical state.

Ok, so if I understand correctly for you microstates are not physical states and it doesn't make sense to even consider that at any given moment the system may be in a particular microstate. That's one way to look at things but the usual starting point for statistical mechanics is quite different.


That would be a bit too strong a wording. I would just say that certain theoretical microstates (ones that assume that position and velocity and so on are real properties that particles possess and can be specified by infinitely precise numbers) are not in fact physically distinguishable. Physical microstates do exist, but they are subject to the Heisenberg limit, and possibly other, less well defined/determined precision limits.

Beyond some level of precision, the world becomes fundamentally non-deterministic again, just like it appears in the macro state description.


> That’s how we say that e.g. hydrogen atoms are indistinguishable. It’s not that they become indistinguishable because we decide so. It’s because we can calculate entropy in both cases and reality does not match the model with distinguishable atoms.

How does using isotopes that allow atoms to be distinguished affect entropy?


Add more particles, get more possible states. Hydrogen is a proton and an electron. Deuterium adds a neutron. More possible states means more entropy, assuming your knowledge of the system stayed constant.
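A back-of-the-envelope sketch of the resulting entropy of mixing (the H2/D2 labels are illustrative; the formula is the standard ideal mixing term):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
N = 6.022e23        # one mole of molecules in total (illustrative)

# Mixing two species that CAN be told apart (say, H2 and D2) adds
#   Delta S_mix = -N k_B * sum(x_i ln x_i)
# over the mole fractions x_i; if the particles are identical, it adds nothing.
def mixing_entropy(fractions):
    return -N * k_B * sum(x * math.log(x) for x in fractions if x > 0)

S_distinguishable = mixing_entropy([0.5, 0.5])   # 50/50 H2/D2 mixture
S_identical = mixing_entropy([1.0])              # all one species

print(S_distinguishable)  # N k_B ln 2, about 5.76 J/K
print(S_identical)        # 0.0
```

This is the resolution of the Gibbs paradox in miniature: the mixing term appears only when physics actually allows the two populations to be distinguished.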


>> The classification of several microstates into the same macrostate, is this not a distinctly observer-centred function?

> It seems that way if we consider only our neat models, but it fails to explain why experimental measurements of the entropy of a given materials are consistent and independent of whatever model the people doing the experiment were operating on. Fundamentally, entropy depends on the probability distribution, not the observer.

I am not sure that I agree with this -- it feels a little too "neat and tidy" to me. One could argue, for example, that these seemingly-emergent agglomerations of states into cohesive "macro" units are a property of the limitations of modelling imposed by the physical properties of the universe -- but there's no easy way to tell whether this set of behaviors comes from an underlying limitation of the _dynamics_ of the underlying state of the system(s) based on the rules of this universe, or from a limitation of our _capacity to model_ the underlying system based on constraints imposed by those same rules.

Entropy by definition involves a relationship (generally at least) between two quantities -- even if implicitly, and oftentimes this is some amount of data and a model used to describe this data. In some senses, being unable to model what we don't know (the unknown unknowns) about this particular kind of emergent state (agglomeration into apparent macrostates) is in some form a necessary and complete requirement for modelling the whole system of possible systems as a whole.

As a general rule, I tend to consider all discretizations of things that can be described as apparently-continuous processes inherently "wrong", but still useful. This goes for any kind of definition -- the explicit definitions we use for determining the relationship of entropy between quantities, how we define integers, words we use when relating concepts with seemingly different characteristics (different kinds of uncertainty, for example).

We induce a form of loss over the original quantity when doing so -- entropy w.r.t. the underlying model, but this loss is the very thing that also allows us to reason over seemingly previously-unreasonable-about things (for example -- mathematical axioms, etc). These forms of "informational straitjackets" offer tradeoffs in how much we can do with them, vs how much we comprehend them. So, even in this light, the very idea of modelling a thing will always induce some form of loss over what we are working with, meaning that said system can never be used to reason about the properties of itself in a larger form -- never verifiably, ever.

Using this induction, we can extend it to attempt to reason then about this meta-level of analysis, showing that because it is indeed a form of model sub-selected from the larger possible space of models, that there is some form of inherent measurable loss, and it cannot be trusted to reason even about itself. And therein lies a contradiction!

However, one could postulate that this form of loss results in any model necessarily having some form of "collision" or inherent contradiction in it -- theorems like Borsuk-Ulam come to mind -- and so we must eventually come to the naked depravity of picking some flawed model to analyze our understanding of the world, and hope to realize along the way that we find a sense of comfort and security in the knowledge that it is built on sand and strings, and its validity may unwind and slip away at any minute.

A very curious ideal, indeed.


Maybe the experimental apparatus is not objective. The quantities we choose to measure are dictated by our psychological and physiological limitations. The volume of a container is not objectively defined. An organism which lives at a faster time scale will see the walls of the container vibrating and oscillating. You must convince the organism to average these measurements over a certain time scale. This averaging throws away information. This is the same with other thermodynamic quantities.


> The quantities we choose to measure are dictated by our psychological and physiological limitations.

No. The enthalpy changes measured by a calorimeter are not dependent on our psychological limitations.

> The volume of a container is not objectively defined.

Yes, it is, for any reasonable definition of "objective". We know how to measure lengths, we know how they change when we use different frames of reference so there is no situation in which a volume is subjective.

> An organism which lives at a faster time scale will see the walls of the container vibrating and oscillating.

This does not matter. We defined a time scale from periodic physical phenomena, and then we know how time changes depending on the frame of reference. There is no subjectivity in this, whatever is doing the measurement has no role in it. Time does not depend on how you feel. It’s Physics, not Psychology.

> This is the same with other thermodynamic quantities.

No, it’s really not. You seem to know just enough vocabulary to be dangerous and I encourage you to read an introductory Physics textbook.


Sometimes I wish HN had merit badges. Or if you like, a device to measure the amount of information contained within a post.


I am not sure it would help, I think it would just enhance groupthink. I like how you need to write something obviously stupid or offensive for the downvotes to have a visible effect and that upvotes have no visible effect at all. (Yes, it changes ranking, but there are other factors). People are less prejudiced when they read the comment than if they see that it is already at -3 or +5.

Yes, it means that some posts should be more (or less) visible than they are but overall I think it’s a good balance.

Besides, I am not that interested in the absolute amount of information in a post. I want information that is relevant to me, and that is very subjective :)


The enthalpy changes measured by a calorimeter are dependent on the design of the calorimeter, which could have been a different piece of equipment. In a sense, that makes it dependent on the definition of enthalpy.

If you introduced a new bit of macro information to the definition of an ensemble, you'd divide the number of microstates by some factor. That's the micro level equivalent of macroscopic entropy being undefined up to an additive constant.

The measurables don't tell you S, they only tell you dS.
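To illustrate the dS point: calorimetry gives a heat capacity C(T), and integrating C/T between two temperatures yields only an entropy difference; the absolute offset comes from a convention such as the third law. A sketch with a made-up constant heat capacity, roughly that of liquid water:

```python
import math

# Calorimetry yields C(T); the measurable entropy change is
#   Delta S = integral from T1 to T2 of C(T)/T dT,
# which leaves the absolute value S(T1) undetermined up to a constant
# (fixed in practice by the third-law convention S -> 0 as T -> 0).
def delta_S(C, T1, T2, steps=10_000):
    """Numerically integrate C(T)/T dT with the trapezoidal rule."""
    h = (T2 - T1) / steps
    total = 0.0
    for i in range(steps):
        a, b = T1 + i * h, T1 + (i + 1) * h
        total += 0.5 * (C(a) / a + C(b) / b) * h
    return total

C = lambda T: 75.3  # illustrative: roughly liquid water, J/(mol K)
dS = delta_S(C, 298.15, 373.15)
print(dS)  # for constant C this is C * ln(T2/T1), about 16.9 J/(mol K)
```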


> The enthalpy changes measured by a calorimeter are dependent on the design of the calorimeter, which could have been a different piece of equipment.

Right, but that is true of anything. Measuring devices need to be calibrated and maintained properly. It does not make something like a distance subjective, just because someone is measuring it in cm and someone else in km.

> If you introduced a new bit of macro information to the definition of an ensemble, you'd divide the number of microstates by some factor. That's the micro level equivalent of macroscopic entropy being undefined up to an additive constant.

It would change the entropy of your model. An ensemble in statistical Physics is not a physical object. It is a mental construct and a tool to calculate properties. An actual material would have whatever entropy it wants to have regardless of any assumptions we make. You would just find that the entropy of the material would match the entropy of one of the models better than the other one. If you change your mind and re-run the experiment, you’d still find the same entropy. This happens e.g. if we assume that the experiment is at a constant volume while it is actually under constant pressure, or the other way around.

> In a sense, that makes it dependent on the definition of enthalpy.

Not really. A joule is a joule, a kelvin is a kelvin, and the basic laws of thermodynamics are some of the most well tested in all of science. The entropy of a bunch of atoms is not more dependent on arbitrary definitions than the energy levels of the atoms.

> The measurables don't tell you S, they only tell you dS.

That’s true in itself, the laws of Thermodynamics are invariant if we add a constant term to the entropy. But it does not mean that entropy is subjective: two observers agreeing that the thing they are observing has an entropy of 0 at 0 K will always measure the same entropy in the same conditions. And it does not mean that actual entropy is dependent on specific assumptions about the state of the thing.

This is also true of energy, and electromagnetic potentials (and potentials in general). This is unrelated to entropy being something special or subjective.


Entropy isn't subjective, but it is relative to what you can measure about a given system, or what measurements you are analyzing a system with respect to. In quantum mechanics there are degenerate ground states, and in classical mechanics there are cases where the individual objects (such as stars) are visible but averaged over. You should take a look at the Gibbs paradox.


I would say an even more limiting definition of entropy shows that it isn't subjective: entropy governs the useful work you can extract from a system by moving it from a state of lower entropy to a state of higher entropy (in a closed system).

No subjective measure of entropy can allow you to create a perpetual motion machine. The measurements of any two separate closed systems could be arbitrary, but when the systems are compared with each other, the units and measurements standardize.


> entropy is the ability to extract useful work from a system

https://www.damtp.cam.ac.uk/user/tong/statphys/jaynes.pdf

The amount of useful work that we can extract from any system depends - obviously and necessarily - on how much “subjective” information we have about its microstate, because that tells us which interactions will extract energy and which will not; this is not a paradox, but a platitude. If the entropy we ascribe to a macrostate did not represent some kind of human information about the underlying microstates, it could not perform its thermodynamic function of determining the amount of work that can be extracted reproducibly from that macrostate. […] the rules of thermodynamics are valid and correctly describe the measurements that it is possible to make by manipulating the macro variables within the set that we have chosen to use. This useful versatility - a direct result of and illustration of the “anthropomorphic” nature of entropy - would not be apparent to, and perhaps not believed by, someone who thought that entropy was, like energy, a physical property of the microstate.

Edit: I’ve just noticed that the article under discussion links to this paper, quotes the first sentence above, and details the whifnium example given in the […] above.


I think this is a very weird thought experiment, and one that is either missing a lot of context or mostly just wrong. In particular, if two people had access to the same gas tank, and one knew that there were two types of argon and the other didn't, so they would compute different entropies, one of them would still be wrong about how much work could be extracted out of the system.

If whifnium did exist, but it was completely unobtainable, then both physicists would still not be able to extract any work out of the system. If the one that didn't know about whifnium was given some amount of it without being told what it was, and instructed in how to use it, they would still see the same amount of work being done with it as the one who did know. They would just find out that they were wrong in their calculation of the entropy of the system, even if they still didn't know how or why.

And of course, this also proves that the system had the same entropy even before humanity existed, and so the knowledge of the existence of whifnium is irrelevant to the entropy of the system. It of course affects our measurement of that entropy, and it affects the amount of work we can get that system to perform, but it changes nothing about the system itself and its entropy (unless of course you tautologically define entropy as the amount of work the experimenter/humanity can extract from the system).


> unless of course you tautologically define entropy as the amount of work the experimenter/humanity can extract from the system

Do you have a different definition? (By the way, the entropy, times the temperature, is the energy that _cannot_ be extracted as work.) The entropy of the system is a function of the particular choice of state variables - those that the experimenters can manipulate and use to extract work. It’s not a property of the system on its own. There is no “true” entropy any more than there is a “true” set of state variables for the “true” macroscopic (incomplete) description of the system. If there were a true limiting value for the entropy it would be zero - corresponding to the case where the system is described using every microscopic variable and every degree of freedom could be manipulated.


Yes, the more regular statistical mechanics theory of entropy doesn't make any reference to an observer, and definitely not to a human observer. In that definition, the entropy of a thermodynamic system is proportional to the number of microstates (positions, types, momentum, etc. of individual particles) that would lead to the same macrostate (temperature, volume, pressure, etc.). It doesn't matter if an observer is aware of any of this; it's an objective property of the system.

Now sure, you could choose to describe a gas (or any other system) in other terms and compute a different value for entropy with the same generalized definition. But you will not get different results from this operation - the second law of thermodynamics will still apply, and your system will be just as able or unable to produce work regardless of how you choose to represent it. You won't get better efficiency out of an engine by choosing to measure something other than the temperature/volume/pressure of the gases involved, for example.

Even if you described the system in terms of its specific microstate, and thus by the definition above your computed entropy would be the minimum possible, you still wouldn't be able to do anything that a more regular model couldn't do. Maxwell's demon is not a physically possible being/machine.
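To make the microstate-counting definition concrete, here's a toy sketch of my own (not from the thread): Boltzmann's S = k ln(Omega) for coin flips, where the macrostate is just the head count and each microstate is a particular heads/tails sequence.

```python
from math import comb, log

# Toy illustration of Boltzmann entropy S = k_B * ln(Omega) for N coins.
# The macrostate is "number of heads"; each microstate is one specific
# sequence of heads and tails. Omega counts microstates per macrostate.
N = 100

def entropy(heads, k_B=1.0):
    omega = comb(N, heads)  # microstates compatible with this macrostate
    return k_B * log(omega)

# The 50/50 macrostate corresponds to vastly more microstates than an
# ordered one, so it has the higher entropy.
assert entropy(50) > entropy(10)
```

Once the macrostate partition is fixed, the count (and hence the entropy) is fully determined; there's nothing left for an observer to decide.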


> the entropy of a thermodynamic system is proportional to the amount of microstates (positions, types, momentum, etc. of individual particles) that would lead to the same macrostate (temperature, volume, pressure, etc.). It doesn't matter if an observer is aware of any of this, it's an objective property of the system.

The meaning of "would lead to the same macrostate" (and therefore the entropy) is not an "objective" property of the system (positions, types, momentum, etc. of individual particles). At least not in the way that the energy is an "objective" property of the system.

The entropy is an "objective" property of the pair formed by the system (which can be described by a microstate) and some particular way of defining macrostates for that system.

That's what people mean when they say that the entropy is not an "objective" property of a physical system: that it depends on how we choose to describe that physical system (and that description is external to the physical system itself).

Of course, if you define "system" as "the underlying microscopical system plus this thermodynamical system description that takes into account some derived state variables only" the situation is not the same as if you define "system" as "the underlying microscopical system alone".


> That's what people mean when they say that the entropy is not an "objective" property of a physical system: that it depends on how we choose to describe that physical system (and that description is external to the physical system itself).

I understand that's what they mean, but this is the part that I think is either trivial or wrong. That is, depending on your choice you'll of course get different values, but it won't change anything about the system. It's basically like choosing to measure speed in meters per second or in furlongs per fortnight, or choosing the coordinate system and reference frame: you get radically different values, but relative results are always the same.

If a system has high entropy in the traditional sense, and another one has lower entropy, and the difference is high enough that you can run an engine by transferring heat from one to the other, then this difference and this fact will remain true whatever valid choice you make for how you describe the system's macrostates. This is the sense in which the entropy is an objective, observer-independent property of the system itself: same as energy, position, momentum, and anything else we care to measure.


> I understand that's what they mean, but this is the part that I think is either trivial or wrong. That is, depending on your choice you'll of course get different values, but it won't change anything about the system. It's basically like choosing to measure speed in meters per second or in furlongs per fortnight, or choosing the coordinate system and reference frame: you get radically different values, but relative results are always the same.

I would agree that it's trivial but then it's equally trivial that it's not just like a change of coordinates.

Say that you choose to represent the macrostate of a volume of gas using either (a) its pressure or (b) the partial pressures of the helium and argon that make it up. If you put together two volumes of the same mixture the entropy won't change. The entropy after they mix is just the sum of the entropies before mixing.

However when you put together one volume of helium and one volume of argon, the entropy calculated under choice (a) doesn't change but the entropy calculated under choice (b) does increase. We're not calculating the same thing in different units: we're calculating different things. There is no change of units that makes a quantity change and also remain constant!

The (a)-entropy and the (b)-entropy are different things. Of course it's the same concept applied to two different situations, but that doesn't mean it's the same thing. (Otherwise one could also say that the momentum of a particle doesn't depend on its mass or velocity because it's always the same concept applied in different situations.)
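A back-of-the-envelope sketch of the difference (my own illustration, assuming ideal gases and one mole of each component at the same temperature and pressure):

```python
from math import log

R = 8.314  # gas constant, J/(mol K)
n = 1.0    # moles of each component

# Description (a): only the total pressure is tracked. Removing the
# partition between two volumes of "gas" at the same T and P changes
# nothing macroscopic, so the calculated entropy change is zero.
dS_a = 0.0

# Description (b): partial pressures of He and Ar are tracked. Each gas
# expands into twice its original volume, contributing n*R*ln(2) each.
dS_b = 2 * n * R * log(2)  # about 11.5 J/K of mixing entropy
```

Same physical process, two different state-variable choices, two different entropy changes: that's the Gibbs paradox in miniature.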


> However when you put together one volume of helium and a one volume of argon the entropy calculated under choice (a) doesn't change but the entropy calculated under choice (b) does increase. We're not calculating the same thing in different units: we're calculating different things. There is no change of units that makes a quantity change and also remain constant!

Agreed, this is not like a coordinate transform at all. But the difference from a coordinate transform is that they are not both equally valid choices for describing the physical phenomenon. Choice (a) is simply wrong: it will not accurately predict how certain experiments with the combined gas will behave.


It will predict how other certain experiments with the combined gas will behave. That's what people mean when they say that the entropy is not an "objective" property of a physical system: that it depends on how we choose to describe that physical system - and what experiments we can perform acting on that description.


What would be an example of such an experiment?

By my understanding, even if we have no idea what gas we have, if we put it into a calorimeter and measure the amount of heat we need to transfer to it to change its temperature to some value, we will get a value that will be different for a gas made up of only argon versus one that contains both neon and argon. Doesn't this show that there is some objective definition of the entropy of the gas that doesn't care about an observer's knowledge of it?


Actually the molar heat capacity for neon, or argon, or a mixture thereof, is the same. These are monatomic ideal gases as far as your calorimeter measurements can see.

If the number of particles is the same you’ll need the same heat to increase the temperature by some amount and the entropy increase will be the same. Of course you could do other things to find out what it is, like weighing the container or reading the label.


No, they are not. The entropy of an ideal monatomic gas depends on the mass of its atoms (see the Sackur–Tetrode equation). And a gas mix is not an ideal monatomic gas; its entropy increases at the same temperature and volume compared to an equal volume divided between the two gases.

Also, entropy is not the same thing as heat capacity. It's true that I didn't describe the entropy measurement process very well, so I may have been ambiguous, but they are not the same quantity.
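A quick numerical check of the mass dependence (my own sketch, using SI constants and assuming roughly 1 l of an ideal monatomic gas at 300 K):

```python
from math import log, pi

k_B = 1.380649e-23    # Boltzmann constant, J/K
h = 6.62607015e-34    # Planck constant, J s
N_A = 6.02214076e23   # Avogadro constant, 1/mol

def sackur_tetrode(m_kg, T, V, N):
    """Sackur-Tetrode entropy of N ideal monatomic gas atoms of mass m_kg."""
    lam = h / (2 * pi * m_kg * k_B * T) ** 0.5  # thermal de Broglie wavelength
    return N * k_B * (log(V / (N * lam**3)) + 2.5)

T, V, N = 300.0, 0.001, 0.0406 * N_A  # ~1 l of gas at 300 K
m_He = 4.0026e-3 / N_A
m_Ne = 20.180e-3 / N_A

S_He = sackur_tetrode(m_He, T, V, N)
S_Ne = sackur_tetrode(m_Ne, T, V, N)

# Absolute entropies differ (neon is heavier)...
assert S_Ne > S_He
# ...but only by an additive, temperature-independent term (3/2)*N*k_B*ln(m_Ne/m_He)
shift = 1.5 * N * k_B * log(m_Ne / m_He)
assert abs((S_Ne - S_He) - shift) < 1e-9
```

Since the mass only enters as an additive constant, it drops out of any entropy *change* measured by heating, which is why a calorimeter can't see it.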


I'll leave the discussion here but let me remind you that you talked (indirectly) about changes in entropy and not about absolute entropies: "if we put it into a calorimeter and measure the amount of heat we need to transfer to it to change its temperature to some value".

Note as well that the mass dependence in that equation for the entropy is just an additive term. The absolute value of the entropy may be different but the change in entropy is the same when you heat a 1l container of helium or neon or a mixture of them from 300K to 301K. That's 0.0406 moles of gas. The heat flow is 0.506 joules. The change in entropy is approximately 0.0017 J/K.

> And a gas mix is not an ideal monatomic gas; its entropy increases at the same temperature and volume compared to an equal volume divided between the two gases.

A mix of ideal gases is an ideal gas and its heat capacity is the weighted average of the heat capacities (trivially equal to the heat capacity of the components when it's the same). The change of entropy when you heat one, or the other, or the mix, will be the same (because you're calculating exactly the same integral of the same heat flow).

The difference in absolute value is irrelevant when we are discussing changes in entropy and measurements of the amount of heat needed to increase the temperature and whether you "will get a value that will be different for a gas made up of only argon versus one that contains both neon and argon".
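For concreteness, the numbers above can be reproduced in a few lines (my own sketch, assuming 1 atm and the ideal-gas constant-volume heat capacity Cv = 3R/2):

```python
from math import log

R = 8.314               # gas constant, J/(mol K)
P, V = 101325.0, 0.001  # 1 atm, 1 litre
T1, T2 = 300.0, 301.0

n = P * V / (R * T1)        # moles of gas: ~0.0406
Cv = 1.5 * R                # monatomic ideal gas, constant volume
Q = n * Cv * (T2 - T1)      # heat needed: ~0.506 J
dS = n * Cv * log(T2 / T1)  # entropy change: ~0.0017 J/K
```

None of these depend on the atomic mass, so helium, neon, or any mixture of them gives the same result.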


> mundane observation

What makes an observation mundane? I think what you said is insightful and demonstrates intelligence. I don't think it's at all obvious to the masses of students that have pored over introductory physics textbooks. In fact, it seems to me that entropy is often taught poorly and that very few people understood it well, but that we are beginning to correct that. I point to the heaps of popsci magazines, documentaries and YouTube videos failing to do anything but confuse the public as additional evidence.


Maybe if you opened only introductory physics textbooks then it’s not mundane; but if you opened introductory information theory textbooks, statistics textbooks, introductions to Bayesian probability theory, or articles about MaxEnt thermodynamics, including articles by E.T. Jaynes, then it’s a quite mundane observation.


Jaynes definitely made the case for this a long time ago, and in my opinion he's correct, but I think that view is still not mainstream; or even if mainstream, certainly not dominant. So I think we should welcome other people who reach it on their own journey, even if they aren't the first to arrive there.


So, what you're saying is, not very mundane at all and, in fact, that it's specialized knowledge requiring advanced education. ;)


In physics, the requirement for a valid change of reference frame is that the laws of physics transform correctly under the transformation.

Every observer should discover the same fundamental laws when performing experiments and using the scientific method.

To stay in your analogy, saying 5 and 6 are the same would only work if the rules of the game you play could transform in such a way that an observer making a distinction between the two would arrive at the correctly transformed rules in his frame of reference.

Given that we have things like neutron stars, black holes and other objects that are at the same time objects of quantum physics and general relativity, the statement feels pretty fundamental to me, to the degree even that I wonder whether it might be phrased too strongly.


I think you may have misunderstood the OP's point - the entropy you calculate for a system depends on how you factor the system into micro and macro states. This doesn't really have anything to do with changes of reference frame - in practice it's more about limitations on the kinds of measurements you can make of the system.

(you can't measure the individual states of all the particles in a body of gas, for instance, so you factor it into macrostate variables like pressure/temperature/volume and such)


Can we take the anthropic out of this? I reckon it'll make things easier.

Instead of me knowing, do other physical objects get affected? I might get amnesia and forget what the dots on a die mean and say they are all the same: all dotty!

Imagine each hydrogen atom has a hidden guid but this is undetectable and has no effect on anything else. This is a secret from the rest of physics!

My guess!!! (armchair pondering!) is that that guid cannot be taken into account for entropy changes, at least from any practical standpoint.

You could imagine each atom having a guid and come up with a scheme to hash the atom based on where it came from ... but is that info really there, and if so, does it affect anything physically beyond that atom's current state (as defined by stuff that affects other stuff)?


What anthropic do you mean? I'm describing properties of models, not people. Physics (probably) doesn't care what you "know".

On the guid idea - fundamental particles are indistinguishable from one another in quantum mechanics, so they don't have anything like a guid even in principle. There is no experiment you could perform on an electron to determine whether it had been swapped out for a "different" one, for instance.

Maybe I'm missing your point though?


Sorry ... I am replying mostly to the dice idea which wasn't you.

Yes, correct about the guid idea. My point is that the discussion is easier to follow if grounded in reality (as best modelled, since that is all we have, plus some evidence stored in the same "SSD"!)


Oh I see. But on your guid thing, people often describe entropy in terms of the set of micro states of your system (the actual physical states in your model) and the macro states (sets of microstates that are described by a collection of high-level state variables like pressure/temperature).

Physically indistinguishable stuff would have the same micro state, so yeah, they wouldn't affect entropy calculations at all, no matter what macro states you picked.

But I disagree a bit about grounding things in reality - some concepts are quite abstract and having clean examples can be helpful, before you start applying them to the mess that is our universe!


From a thermodynamics point of view only the differential of the entropy matters, so if there is only a fixed difference between the two computations it does not influence the physics.

If the way one does the coarse graining of states results in different differentials, one way should be the correct one.

There is only one physics.

If I remember correctly, one of Planck's revelations was that he could explain why a certain correction factor was needed in entropy calculations, since phase space has a finite cell size.


That's true - for instance I believe many of the results of statistical mechanics rely on further assumptions about your choice of macrostates, like the fact that they are ergodic (i.e. the system visits each microstate within a macrostate with equal probability on average). Obviously exotic choices of macrostates will violate these assumptions, and so I would expect the predictions such a model makes to be incorrect.

But ultimately that's an empirical question. Entropy is a more general concept that's definable regardless of whether the model is accurate or not.



