Hearing has a few quirks too:
- When we measure sound pressure, we measure it in log (so, every ~6dB is a doubling in sound pressure, and every ~3dB a doubling in sound power), but our hearing perceives this as a linear scale. If you make a linear volume slider, the upper part will seem as if it barely does anything.
- The lower the volume, the less perceivable the upper and lower ranges are compared to the midrange. This is what "loudness" is intended to fix, although poor implementations have led many people to assume it is a V-curve button. A proper loudness implementation will lessen its impact as volume increases, petering out entirely somewhere around 33% of maximum volume.
- For the most "natural" perceived sound, you don't try to get as flat a frequency response as possible but instead aim for a Harman curve.
- Bass frequencies (<110Hz, depending on who you ask) are omnidirectional, which means we cannot accurately perceive which direction the sound is coming from. Subwoofers exploit this fact, making it seem as if deep rich bass is coming from your puny soundbar and not the sub hidden behind the couch :).
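The "linear slider feels wrong" point above can be sketched in a few lines. This assumes a -60 dB floor, which is a common but arbitrary design choice, not a standard:

```python
# Sketch of a perceptual volume slider. Slider position x in [0, 1]
# maps linearly to decibels, so each equal slider step gives an equal
# *ratio* change in amplitude, which is what sounds like equal steps.
# The -60 dB floor is an assumption; pick whatever range fits your app.

DB_RANGE = 60.0  # slider bottom = -60 dB, slider top = 0 dB

def slider_to_amplitude(x: float) -> float:
    """Map slider position [0, 1] to a linear gain factor."""
    if x <= 0.0:
        return 0.0  # treat the bottom of the travel as mute
    db = DB_RANGE * (x - 1.0)      # -60 dB .. 0 dB
    return 10.0 ** (db / 20.0)     # dB -> linear amplitude

# Halfway up the slider is -30 dB, i.e. ~3% of full amplitude,
# yet it still *sounds* like roughly half volume.
```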
https://en.wikipedia.org/wiki/Power,_root-power,_and_field_q...
Consider a loudspeaker playing a constant-frequency sine wave. Sound pressure is proportional to the excursion of the cone. To increase sound pressure, the excursion has to increase, and because frequency is fixed the cone will have to move faster. If it's covering twice the distance in the same time interval it has to move twice as fast. Kinetic energy is proportional to the square of velocity, so doubling the sound pressure requires four times the power, and doubling the power only gets you sqrt(2) times the sound pressure.
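The pressure-vs-power distinction in that paragraph is just the difference between 20*log10 (field quantities like pressure) and 10*log10 (power quantities), since power goes as pressure squared. A quick numeric check:

```python
import math

# Sound pressure is a "field" quantity: dB = 20 * log10(ratio).
# Power is a "power" quantity:          dB = 10 * log10(ratio).

def db_from_pressure_ratio(r: float) -> float:
    return 20.0 * math.log10(r)

def db_from_power_ratio(r: float) -> float:
    return 10.0 * math.log10(r)

# Doubling pressure costs 4x the power but is the same +6 dB step:
#   db_from_pressure_ratio(2) ~= 6.02
#   db_from_power_ratio(4)    ~= 6.02
# Doubling power only gets you sqrt(2) times the pressure: +3.01 dB.
```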
Human loudness perception generally requires a greater-than-6dB increase to sound twice as loud. This depends on both frequency and absolute level, as you mentioned, with about a 10dB increase needed to double perceived loudness at 1kHz and moderate level.
Pitch perception is also logarithmic; an octave in music is a 2x ratio.
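That 2x-per-octave relationship is exactly why MIDI note numbers are a log-frequency scale: 12 equal semitones per doubling, with the standard A4 = note 69 = 440 Hz convention:

```python
# Equal-tempered pitch: each semitone is a 2^(1/12) frequency ratio,
# so +12 notes = one octave = exactly double the frequency.

def midi_to_hz(note: int) -> float:
    return 440.0 * 2.0 ** ((note - 69) / 12.0)  # A4 = note 69 = 440 Hz

# midi_to_hz(69) -> 440.0, midi_to_hz(81) -> 880.0 (one octave up)
```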
Memory is sort of logarithmic; you'd say a thing happened 1-2 days ago or 1-2 years ago, but not 340-341 days ago.
Same with age; someone being 10 years older than you is a much bigger deal when you're 10 than when you're 80.
> Stevens' power law is an empirical relationship in psychophysics between an increased intensity or strength in a physical stimulus and the perceived magnitude increase in the sensation created by the stimulus
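Plugging in the commonly cited loudness exponent (about 0.67 for the sound pressure of a 3 kHz tone; the constant k just sets the unit, so I take it as 1) reproduces the "~10 dB to double perceived loudness" rule of thumb mentioned upthread:

```python
# Stevens' power law sketch: psi = k * phi^a.
# a ~= 0.67 is the textbook loudness exponent; treat it as an
# assumption here, since the real exponent varies with conditions.

A_LOUDNESS = 0.67

def perceived_loudness(pressure_ratio: float) -> float:
    return pressure_ratio ** A_LOUDNESS

# A +10 dB step is a pressure ratio of 10**(10/20) ~= 3.16, and
# perceived_loudness(3.16) ~= 2.16 -- i.e. roughly twice as loud.
```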
> When we measure sound pressure, we measure it in log (so, every ~6dB is a doubling in sound pressure), but our hearing perceives this as a linear scale
It's the other way around: we perceive logarithmically, so we created the logarithmic decibel scale.

I now have to read up on the Harman curve because I'm curious about it and it has a direct practical application.
This has issues. When you go from a dark space to a bright space, the eye's iris stops down. But not instantaneously. It takes a second or two. This can be simulated. Cyberpunk 2077 does this. Go from a dark place in the game to bright sunlight and, for a moment, the screen becomes blinding, then adjusts.
In the other direction, go into a dark space, and it's dark at first, then seems to lighten up after a while. Dark adaptation is slower than light adaptation.
Tone mapping is not just an intensity adjustment. It has to compensate for the color space intensity problems the OP mentions. Human eyes are not equally sensitive to the primary colors.
Some visually impaired people hate this kind of adjustment, it turns out.
Here's a clip from Cyberpunk 2077.[2] Watch what happens to screen brightness as the car goes into the tunnel and then emerges into daylight.
This looks nothing like actual Cyberpunk 2077. It actually looks realistic - unlike the base game itself where, just like in about every other game since "Physically Based Rendering" became a thing, everything looks like it's made of painted plastic.

Per the video description, this version of C2077 is modded up to its eyeballs. I'm very interested in those mods now, because it's the first time I've ever seen someone manage to make materials in a game look somewhat realistic.
Or, very famously 20 years ago in Half-Life 2: Lost Coast, which was released as a tech demo for HDR rendering.
OSA-UCS takes the Helmholtz-Kohlrausch effect into consideration.
TL;DR: Oklab is pretty simple, but is already pretty nice as a perceptually uniform color space. Darktable UCS takes Oklab and tries to reduce the residual error.
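For anyone curious just how simple Oklab is: the whole forward transform is two 3x3 matrices with a cube root in between (these are Björn Ottosson's published matrices; input must be *linear* sRGB, so undo the gamma first if you start from 8-bit values):

```python
# Minimal sketch of the Oklab forward transform.
# Input: linear sRGB in [0, 1]. Output: (L, a, b).

def linear_srgb_to_oklab(r: float, g: float, b: float):
    # linear sRGB -> approximate cone (LMS) responses
    l = 0.4122214708 * r + 0.5363325363 * g + 0.0514459929 * b
    m = 0.2119034982 * r + 0.6806995451 * g + 0.1073969566 * b
    s = 0.0883024619 * r + 0.2817188376 * g + 0.6299787005 * b
    # cube root models the compressive nonlinearity of vision
    l_, m_, s_ = l ** (1 / 3), m ** (1 / 3), s ** (1 / 3)
    # LMS' -> lightness + two opponent (a = green-red, b = blue-yellow) axes
    L  = 0.2104542553 * l_ + 0.7936177850 * m_ - 0.0040720468 * s_
    a  = 1.9779984951 * l_ - 2.4285922050 * m_ + 0.4505937099 * s_
    b2 = 0.0259040371 * l_ + 0.7827717662 * m_ - 0.8086757660 * s_
    return L, a, b2

# White (1, 1, 1) maps to L ~= 1 with a ~= b ~= 0, and any neutral
# gray lands on the a = b = 0 axis, as a well-behaved space should.
```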
Feel free to correct me if I got anything wrong
"After trying to fix Oklab for a dozen of hours, it appeared that the numerical issues it raises are grounded into design constraints we don’t need for the current task. And so do most of the other perceptual spaces.
[..]
So we could fit an Lch model directly from Munsell hue-value-chroma dataset, without going through LMS space and without even tilting the lightness plane. Doing so, we will not try to model the physiology of vision, but approach the problem as a geometric space distortion where the Munsell hues and the saturations (as a ratio of chromas / values) can be predicted by a 1D mapping from the hues, chromas and lightnesses forming the principal dimensions of the model.
This model will need to be invertible. We will try to fit brightness data to derivate correlates of perceptual brightness accounting for the Helmholtz-Kohlrausch effect. Using the brightness and saturation correlates, we will rewrite our image saturation algorithm in terms of perceptually-even operators."
They obviously took a lot from Oklab, but it seems to me they did more than just modify it to reduce the residual error. But again, I just skimmed it and could be completely wrong.
#ff0000 is, in terms of brightness, pretty dark compared to #ffffff, yet there is a way it seems to "pop out" psychologically. It is unusual for something red to really be the brightest color in a natural scene unless the red is something self-luminous, like an LED on a dark night.
Right. Just the extremely common cases where humans deliberately use red lights because they are perceived as being very bright: stop lights, brake lights, tail lights, emergency vehicle lights, aviation obstruction lighting, emergency flares, emergency exit lights, etc.
Green lights have the greatest perceived brightness, and green is used as the go signal in traffic lights for that reason. Red light (1) has a common cultural association with danger stemming from its association with natural fire, and (2) has the least impact on night vision because of the relative insensitivity of human eyes to it. The former is the reason it is used for things like stop/warning/etc. lights that might be seen against any background lighting conditions; the latter is the reason it is used for emergency guide lighting and signage that are likely to be used in dark conditions, especially where people are likely to transition from the illuminated area to an even darker environment (such as one without emergency lighting, or with nonfunctional emergency lighting).
It's not. Reds can be dark or bright relative to other reds, and "bright red" -- as you would expect from the way that adjectives work -- refers to a red that is bright relative to other reds.
It was actually quite shocking how much more sense most color choices in art and design made to me, which was a much bigger reason for me to keep wearing the glasses than being able to distinguish red, green and brown better than before. The world just looks "more balanced" color-wise with them.
While it was very obvious early on in my life that I experienced most green, red and brown colors as ambiguously the same (I did not know peanut butter was not green until my early thirties), the fact that there also were differences in perceptual brightness had stayed completely under the radar.
¹ And yes, these lenses do work, at least for me. They're not as scummy as EnChroma or other colorblind-"correcting" lenses; for starters, you can only buy them after trying them out in person with an optometrist, who tests which type of correction you need and at which strength. Ironically, their website is a broken mess that looks untrustworthy[0], and their list of scientific publications doesn't even show up on Google Scholar, so they probably have near-zero citations[1]. But the lenses work great for me.
[0] https://www.colorlitelens.com/
[1] https://www.colorlitelens.com/color-blindness-correction-inf...
> evaluate relative brightnesses between art assets, and improve overall game readability
The method in Color2Gray is trying to enhance salience, but the paper does a good job of comparing the problems (including red / blue examples in particular).
Like other commenters, I think oklab would look better than CIELAB on the example given in the OP. https://bottosson.github.io/posts/oklab/#comparison-with-oth... and the Munsell data below it show it to be a lot more uniform than either CIELAB or CIELUV.
A Better Default Colormap for Matplotlib | SciPy 2015 | Nathaniel Smith and Stéfan van der Walt https://www.youtube.com/watch?v=xAoljeRJ3lU
A benefit of doing it this way is you account for color blindness and accessibility e.g. all colors at L=50 will have the same WCAG contrast ratio against all colors at L=25. This helps when finding colors with the contrast you want.
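The reason fixing lightness levels pins down contrast is that the WCAG 2.x ratio depends only on the two colors' relative luminances. A minimal sketch of that computation, straight from the WCAG definition:

```python
# WCAG 2.x contrast ratio between two 8-bit sRGB colors.
# Relative luminance linearizes each channel, then weights by the
# BT.709-derived coefficients; the ratio adds 0.05 "flare" terms.

def srgb_channel_to_linear(c8: int) -> float:
    c = c8 / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(r: int, g: int, b: int) -> float:
    rl, gl, bl = (srgb_channel_to_linear(v) for v in (r, g, b))
    return 0.2126 * rl + 0.7152 * gl + 0.0722 * bl

def wcag_contrast(c1, c2) -> float:
    y1, y2 = relative_luminance(*c1), relative_luminance(*c2)
    lighter, darker = max(y1, y2), min(y1, y2)
    return (lighter + 0.05) / (darker + 0.05)

# Black on white is the maximum possible, 21:1; WCAG AA body text
# needs at least 4.5:1.
```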
Related, I'm working on a color palette editor based around creating accessible palettes where I use the HSLuv color space which has the above property:
https://www.inclusivecolors.com/
You can try things like maxing out the saturation of each swatch to see how some hues look bolder at the same lightness (the Helmholtz-Kohlrausch effect mentioned in the article, I think). You can also explore examples of open source palettes (Tailwind, IBM Carbon, USWDS), where it's interesting to compare how they vary their saturation and lightness curves per swatch, e.g. red-700 and green-700 in Tailwind v3 have different lightnesses but are the same in IBM Carbon (the "Contrast > View colors by luminance only" option is interesting for seeing this).
https://www.inclusivecolors.com/ includes the APCA contrast measurement which is meant to be more accurate than WCAG, if you want to experiment with how it compares.
WCAG and APCA mostly agree on what has good vs bad contrast for dark-on-light color pairs, with some exceptions. For light-on-dark colors though, WCAG isn't accurate and APCA is much stricter in what's allowed.
Accurate color reproduction on uncalibrated consumer devices is just wishful thinking and will not be fixed in the foreseeable future.
So unless you work in a color controlled and calibrated environment it's hard to make any reliable statements about perception.
I simply would not worry too much about optimizing perceptual color spaces at this point.
So in those cases, filmmakers had to counteract the limitations of the b/w transformations in the actual sets.
But which axis you care about is very context specific.
https://keithjgrant.com/posts/2023/04/its-time-to-learn-oklc...
Y = 0.299 * R + 0.587 * G + 0.114 * B
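Applying those BT.601 luma weights to the three primaries makes the asymmetry concrete: "full brightness" green comes out about five times as bright as "full brightness" blue.

```python
# The BT.601 luma weights quoted above, applied per-primary.

def luma_601(r: float, g: float, b: float) -> float:
    return 0.299 * r + 0.587 * g + 0.114 * b

# luma_601(1, 0, 0) -> 0.299  (red)
# luma_601(0, 1, 0) -> 0.587  (green)
# luma_601(0, 0, 1) -> 0.114  (blue)
```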
> Unfortunately, I haven’t been able to find any perceptually uniform color spaces that seem to include these transformations in the final output space. If you’re aware of one, I would love to know.
Traditional painting.
Also, to the author on the same blog, came across this: https://johnaustin.io/articles/2023/how-we-can-actually-move...
Get off the internet.