We use gamma to emulate human sensitivity, which is logarithmic, not linear. Most machine sensors work in linear space: they are either linear or linearized around a specific point (like most cameras).
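A rough sketch of what gamma encoding does (the exponent 2.2 is the common approximation; real sRGB adds a small linear segment near zero, which I'm skipping here):

```python
def gamma_encode(linear, gamma=2.2):
    """Compress a linear sensor value in [0, 1] into perceptual space."""
    return linear ** (1.0 / gamma)

def gamma_decode(encoded, gamma=2.2):
    """Recover the linear value from a gamma-encoded one."""
    return encoded ** gamma

# A dark linear value gets many more of the available code values:
# 0.05 in linear light encodes to ~0.26, so 8-bit quantization spends
# far more codes on the shadows, where our eyes are most sensitive.
print(gamma_encode(0.05))                # ~0.256
print(gamma_decode(gamma_encode(0.05)))  # ~0.05, round trip
```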
Human senses adapt continuously to the strength of the signal. Your eardrum has muscles that stiffen with loud sounds and partially decouple it. Your eyes have pupils that contract and let much less light pass. Half the pupil's diameter means 4 times less light: not a linear relationship but a square law, since the light admitted goes with the area, i.e. with the diameter squared.
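The arithmetic behind that square law (a toy calculation with made-up pupil sizes, not a physiological model):

```python
import math

def pupil_light(diameter_mm):
    """Light admitted is proportional to pupil area = pi * (d/2)**2."""
    return math.pi * (diameter_mm / 2) ** 2

full = pupil_light(8.0)  # dilated pupil
half = pupil_light(4.0)  # half the diameter
print(full / half)       # 4.0 -- halving the diameter quarters the light
```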
Then the sensors themselves reduce the signal again. If you look at something bright in sunlight for a couple of seconds and then look away, you see a "shadow" of the bright object, because that part of the retina has adapted to the sunlight level and automatically subtracts the brightness in that region.
Neurons themselves work logarithmically. The hair cells in the ear respond proportionally less as the signal grows. Chemical diffusion into cells is likewise proportional to the signal itself, and a response whose increments are proportional to the current level accumulates to a logarithm. That is, they work in log space (or its inverse, an exponential, if you want to recover the original signal).
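One way to see why "increments proportional to the signal itself" gives a logarithm, a numerical sketch of the Weber-Fechner idea (the constant k and the stimulus range are arbitrary):

```python
import math

# If each response increment is proportional to the *relative* change in
# the stimulus (dR = k * dS / S), the accumulated response is k * ln(S).
k = 1.0
stimuli = [1.0 + 0.001 * i for i in range(1, 100001)]  # S from ~1 to 101

response = 0.0
prev = 1.0
for S in stimuli:
    response += k * (S - prev) / S  # increment proportional to relative change
    prev = S

print(response)             # ~4.61
print(k * math.log(101.0))  # ~4.61 -- matches ln(S_final / S_initial)
```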
The reason float works is that float encoding is approximately logarithmic: the exponent bits give you constant relative precision across many orders of magnitude. But there is a problem: 16-bit (half) floats are only patchily supported and are very limited in range and precision, so in practice you need 32-bit floats per image channel, and that is wasteful.
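A quick way to check the "floats are roughly logarithmic" claim: the gap to the next representable float (the ulp) grows in proportion to the magnitude, so the relative precision stays constant at every scale, which is exactly what a log encoding gives you. A small illustration using math.ulp (available since Python 3.9):

```python
import math

# The spacing between representable floats scales with the value, so
# relative precision is roughly constant -- a hallmark of log encoding.
for x in [0.001, 1.0, 1000.0, 1e6]:
    print(f"{x:>10g}  ulp={math.ulp(x):.3e}  relative={math.ulp(x) / x:.3e}")
# The 'relative' column stays near 2.2e-16 for 64-bit floats at every scale.
```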
Adobe's raw format (DNG) added a 16-bit float encoding. But if you do something, do it well: use 32 bits, like GIMP does. The great thing about 32 bits is that you can combine multiple exposures into a single image. That way you waste far fewer resources, because instead of 3 or 4 pictures (each at a different exposure) you have just one with almost the same information.
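A minimal sketch of that multiple-exposure idea, merging bracketed shots into one 32-bit float image. This is a naive weighted average, not how any particular tool does it; the weighting function and exposure times are my own assumptions for illustration:

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Naively merge bracketed exposures into one 32-bit float image.

    images:         list of linear-light arrays in [0, 1] (gamma removed)
    exposure_times: relative shutter times, e.g. [1/100, 1/25, 1/6]
    """
    acc = np.zeros_like(images[0], dtype=np.float32)
    weights = np.zeros_like(images[0], dtype=np.float32)
    for img, t in zip(images, exposure_times):
        img = img.astype(np.float32)
        # Trust mid-tones most; clipped shadows/highlights carry little info.
        w = 1.0 - np.abs(img - 0.5) * 2.0
        acc += w * (img / t)  # scale back to scene-referred radiance
        weights += w
    return acc / np.maximum(weights, 1e-6)
```

Each pixel ends up scene-referred: short exposures supply the highlights, long ones supply the shadows, and the single float image holds the whole range at once.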