But if I embed it in a photo and then open the photo in GraphicConverter, it shows up as "sRGB IEC61966-2.1", which to my understanding is identical to Apple’s sRGB Color Space Profile.icm.
But that's an sRGB v2 profile. Should I download and use a v4 profile instead? Or download the ArgyllCMS sRGB.icm [1] and convert all photos to it? Or just select the Apple default sRGB profile everywhere?
I'm not a pro and don't have a calibrated display, but it annoys me when photos I upload online look vastly different in my browser than they look in my editing software on the same display.
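Not an answer to which profile is "right", but here's a quick sketch (Pillow, file name hypothetical) for checking which description a given file actually embeds:

    import io
    from PIL import Image, ImageCms

    im = Image.open("photo.jpg")
    icc = im.info.get("icc_profile")
    if icc:
        prof = ImageCms.ImageCmsProfile(io.BytesIO(icc))
        print(ImageCms.getProfileDescription(prof))  # e.g. "sRGB IEC61966-2.1"
    else:
        print("untagged: viewers will assume something, usually sRGB")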
For some reason video workflows never adapted to the ICC system (probably because in CRT days you couldn't really adapt your decode gamma on the fly) which is basically where the whole debate in https://gitlab.freedesktop.org/wayland/wayland-protocols/-/m... comes from.
I'm not saying that the people using 2.2 EOTF are wrong, but all this just adds to the absurdity: in the modern day where LUTs are cheap and plentiful, instead of tagging content as an ambiguous sRGB it could simply be tagged as gamma 2.2 if it's actually intended to be decoded at that gamma.
Regardless of what you use for a linearizing function, the more important thing is that you use the correct encoding function afterward, so that you don’t introduce any additional gamma correction. For example, it was common to use a simple squaring function for speed. This gives fairly good results as long as you apply the square root function afterward to restore the original gamma correction. It doesn’t matter if the source is 2.2 or 2.4 gamma encoded or something else, that correction will be preserved. The blending post-linearization will be less accurate, but much better than not linearizing at all.
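A minimal sketch of that square/square-root trick (plain Python, channel values in [0, 1]; the function name is my own):

    # Cheap approximate linear-light blend: square ~= decode, sqrt ~= re-encode.
    # Because the encode step is the exact inverse of the decode step, a pure
    # input (t = 0 or 1) round-trips unchanged, so whatever gamma the source
    # was encoded with (2.2, 2.4, ...) is preserved.
    def blend(a, b, t):
        lin = (1.0 - t) * a * a + t * b * b   # linearize with x^2, then mix
        return lin ** 0.5                     # restore with the square root

    mid = blend(0.0, 1.0, 0.5)  # ~0.707, brighter than the naive 0.5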
I guess this is the part I find anachronistic. Why do we work in the source scene light for photography, but do the opposite for videos? It makes sense if you assume the viewing device is "dumb" (like a television or CRT, especially in the analog days) but by now I assume the workflows are all fully digital, and even the most basic output device can apply LUTs. When digital video container formats were introduced, why didn't they align with what ICC did? It would have saved a lot of headache for everyone, compared to limited NCLC tags and the mess around EOTFs.
But for the most part this shouldn't really matter much. A huge amount of things these days are properly color managed, so as long as the thing that wrote the profile actually, you know, wrote what it wanted, then it'll display just fine regardless of how many different "sRGB" profiles there are floating around. We're largely past the days of just hoping that the image and the display happen to agree on roughly the same colors.
That would be calibration, and it's still necessary if you want color accuracy. That's about ensuring that what your monitor thinks it's displaying and what it's actually physically emitting are the same. The main thing that's changed here is that factory calibration has become a lot more common and is often more than good enough for anything short of serious professional work, even for things that aren't professional displays. Most flagship or even midrange smartphones are factory calibrated with dE values that would make reference monitors from 20 years ago blush. Right up until the OEM intentionally shoves a shitty color curve on top to make it "pop" or be more "vibrant" (Samsung calls this "Vivid", Pixel calls it "Adaptive", etc.), but they at least usually have a "natural" option that gets you back to the properly calibrated display.
Your correction is backwards. Profiling + colour-managed apps get you accurate colours regardless of source colourspace. Calibration doesn't, and is not strictly needed either.
No. You can profile an uncalibrated (unadjusted) display and the profile will be correct. There is zero inherent requirement to adjust the display, or for the display itself to report a profile.
> It's required for the monitor itself to be able to display accurate colors at all.
Also no. Once you have profiled the uncalibrated display, you can accurately display colours within its gamut by converting to the profile.
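As a concrete sketch of that last step with Pillow (file names hypothetical; "display.icc" stands for the profile you made by profiling the unadjusted monitor):

    import io
    from PIL import Image, ImageCms

    im = Image.open("photo.jpg")
    # Embedded source profile, falling back to sRGB if the file is untagged.
    icc = im.info.get("icc_profile")
    src = ImageCms.ImageCmsProfile(io.BytesIO(icc)) if icc else ImageCms.createProfile("sRGB")
    dst = ImageCms.ImageCmsProfile("display.icc")  # the monitor's measured profile
    out = ImageCms.profileToProfile(im, src, dst,
                                    renderingIntent=ImageCms.INTENT_RELATIVE_COLORIMETRIC)
    out.save("for_this_display.png")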
<Heath Ledger Joker>Ah haa ha ha haaaa!</Heath Ledger Joker>
We're nowhere near past that point, we haven't even begun to approach that point. That point is something I would like to reach before I die, but since that's maybe just a couple of decades away, it's not looking likely.
In general, Windows and Linux do not color manage, or do it so badly that it's counterproductive.
Most sub-$500 monitors do not report their native gamut! By default, operating systems assume monitors are sRGB (they're typically not) and send uncalibrated 8-bit RGB as-is.
On Windows and MacOS, enabling HDR mode typically sets the correct gamut, etc... and mostly makes things "just work", but that's at the OS level only.
Almost all applications map wide-gamut images to sRGB even on HDR monitors or simply re-interpret the RGB values as-if they're sRGB without even bothering to color space convert.
Firefox has color management off by default. Microsoft Edge defaults to "crush to sRGB". Apps with embedded web view controls are "who knows?"
In general, wide-gamut, 10-bit-per-channel, and HDR support are all a total shit show. I'm perpetually surprised if any of it works!
As a random example, my Nikon Z8 mirrorless camera can natively record HDR 10-bit wide-gamut HEIF files in-body. Windows can't display those at all. MacOS and iPhones can... sometimes... but then the viewer apps will often "get confused" and the brightness will jump around randomly and non-deterministically as you switch between thumbnail and full-screen views. You can't forward such an image to anyone via iMessage, they'll get gibberish on their end, and SMS/MMS is hopeless.
Meanwhile, YouTube HDR generally "just works" on most devices, so I've started sending people my still image photography by converting it to an HDR 4K slideshow in DaVinci Resolve and giving them a YouTube link.
It's sad and pathetic that Meta set $80 billion on fire for the Metaverse and the rest of the industry found a decent chunk of a trillion dollars under the couch cushions to throw at AI slop, but nobody can "afford" to have one or two engineers fix their imaging pipeline.
Upload an HDR or wide-gamut image to Facebook successfully and then tell me it "just works".
Or send one in an email.
Or do anything with it other than view it on your own device.
The rest of your rant seems mostly anchored around the fact that most things still produce sRGB, which is true, but they still typically map inputs into sRGB appropriately (that is, out-of-gamut colors get clipped), which is still proper color management.
HDR has one extra minor issue on top, which is that there needs to be an output mapping to the display's maximum brightness capabilities, but this isn't that different from what's needed to correctly tone map wide gamut to arbitrary displays.
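One simple (and deliberately naive) version of such an output mapping is an extended-Reinhard curve squeezing scene luminance into a display's peak. Real pipelines use fancier curves (e.g. BT.2390), and the nit values here are made up:

    def reinhard_extended(x, x_max):
        # Maps [0, x_max] to [0, 1] with T(x_max) == 1 and T(x) ~= x near 0.
        return x * (1.0 + x / (x_max * x_max)) / (1.0 + x)

    def map_to_display(nits, scene_peak=1000.0, display_peak=400.0):
        x = nits / display_peak  # luminance relative to the display's peak
        return display_peak * reinhard_extended(x, scene_peak / display_peak)

    print(map_to_display(100.0))   # midtones are only mildly compressed (~83 nits)
    print(map_to_display(1000.0))  # scene peak lands exactly at the 400-nit ceiling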
Also: if the spec is so "wrong", why does YouTube HDR just work? And Apple TV, Netflix, etc... on practically all devices?
They got it right!
There is no fundamental color management difference between moving and still images.
If anything, moving images are harder!
There is always a historic reason for a colour profile; sadly, most software avoids the terminology like the plague.
1. The matrix implied by the reference primaries in Table 1: [X; Y; Z] = [506752/1228815, 87098/409605, 7918/409605; 87881/245763, 175762/245763, 87881/737289; 12673/70218, 12673/175545, 1001167/1053270]*[R; G; B].
2. The matrix in section 5.2: [X; Y; Z] = [1031/2500, 447/1250, 361/2000; 1063/5000, 447/625, 361/5000; 193/10000, 149/1250, 1901/2000]*[R; G; B].
3. The inverse of the matrix in section 5.3: [X; Y; Z] = [248898325000/603542646087, 71938950000/201180882029, 36311670000/201180882029; 128304856250/603542646087, 143878592500/201180882029, 14525360000/201180882029; 11646692500/603542646087, 23977515000/201180882029, 191221850000/201180882029]*[R; G; B].
The distinction starts to matter for 16-bit color. The CSS people seem to take the position that the matrix implied by primaries is the true version, but meanwhile, the same document's Annex F (in Amd. 1) seems to suggest that the 5.2 matrix is the true version, and that the 5.3 matrix should be rederived to the increased precision. There's no easy way to decide, as far as I can tell.
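For anyone who wants to check, here's a small script using exact rational arithmetic to see whether the three matrices actually disagree by more than one 16-bit quantization step (fractions copied from the list above; if I mistranscribed one, the output will be off):

    from fractions import Fraction as F

    M1 = [[F(506752, 1228815), F(87098, 409605),  F(7918, 409605)],
          [F(87881, 245763),   F(175762, 245763), F(87881, 737289)],
          [F(12673, 70218),    F(12673, 175545),  F(1001167, 1053270)]]
    M2 = [[F(1031, 2500), F(447, 1250), F(361, 2000)],
          [F(1063, 5000), F(447, 625),  F(361, 5000)],
          [F(193, 10000), F(149, 1250), F(1901, 2000)]]
    M3 = [[F(248898325000, 603542646087), F(71938950000, 201180882029),  F(36311670000, 201180882029)],
          [F(128304856250, 603542646087), F(143878592500, 201180882029), F(14525360000, 201180882029)],
          [F(11646692500, 603542646087),  F(23977515000, 201180882029),  F(191221850000, 201180882029)]]

    step = F(1, 65535)  # one quantization step of 16-bit color
    for name, A, B in [("1 vs 2", M1, M2), ("1 vs 3", M1, M3), ("2 vs 3", M2, M3)]:
        d = max(abs(a - b) for ra, rb in zip(A, B) for a, b in zip(ra, rb))
        print(name, float(d), ">" if d > step else "<=", float(step))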
Meanwhile, I agree with the author that the ICC's black-point finagling in their published profiles has not helped with the confusion over what exactly sRGB colors are supposed to map to.
Even most modern displays are not really capable of more than 10-bit color (RGB miniLED and QD-OLED barely are). Even Rec. 2020 doesn't need 16-bit.
sRGB doesn't even have a consistent gamma, and it's not anywhere close to uniformly covering the color volume. Why use it? DCI-P3 works fine.
Certainly, eyes don't have a consistent gamma, they don't even match between people, much less outside the foveal field.
Just last week I noticed that when a reddit user uploads a screenshot taken on MacOS as a PNG to a reddit post, the PNG will still contain uniquely identifying information about the monitor attached to the MacOS system and when it was last calibrated. You can deduce the type of MacBook they are using from the screen resolution, and you can see when they switched machines once you notice a different monitor calibration timestamp. All that from a single PNG image that was uploaded by the user themselves. If those two pieces of information are not stored in the PNG, you know they must be a Windows or Linux user.
It's these small breadcrumbs all over the place which make forensics so interesting.
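A minimal sketch of reading that fingerprint back out with Pillow (file name hypothetical; per the ICC spec, bytes 24-35 of the profile header hold the creation date/time as six big-endian uint16s):

    import io, struct
    from PIL import Image, ImageCms

    im = Image.open("screenshot.png")
    icc = im.info.get("icc_profile")  # Pillow inflates the iCCP chunk for us
    if icc is None:
        print("no embedded profile; likely not a default MacOS screenshot")
    else:
        prof = ImageCms.ImageCmsProfile(io.BytesIO(icc))
        print("display profile:", ImageCms.getProfileDescription(prof))
        y, mo, d, h, mi, s = struct.unpack(">6H", icc[24:36])
        print(f"profile created: {y:04d}-{mo:02d}-{d:02d} {h:02d}:{mi:02d}:{s:02d}")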