https://techcrunch.com/2018/10/22/the-future-of-photography-...
That's not to say there is no movement on other fronts. Glass was pushing for a crazy anamorphic lens and a far larger sensor that would have been a serious improvement, but I don't know if it went anywhere.
https://techcrunch.com/2022/03/22/glass-rethinks-the-smartph...
I think that those who buy a pro camera nowadays do it because they care about photography itself. For that audience, post-processing is a touchy subject: besides the philosophical aspect of "is this a true representation of the moment I captured?", there's a real chance that their work will be under-appreciated or rejected altogether if their camera is known as "the one that retouches your photos automatically".
I'm not sure that Nikon's sales would improve that much if they offered auto post-processing (is that something amateurs actively want?), but I can imagine that their current customers would be unhappy.
In other words, no camera exists that has great computational photography and a lens bigger than your thumbnail. Why?
Ah, but what if instead they could get that $700 0.3kg lens you own to perform as well as that $2000 1kg lens you don't own?
There is no right or wrong. Generally the sky is not the most important thing, so cameras will expose for the subject below the sky. In a high-dynamic-range situation (an outside scene with sunlight), that means overexposing the photo. The Fuji will actually underexpose, which is a good thing. But then it immediately throws the baby out with the bathwater with the one-size-fits-all tone curve it applies to produce the JPEG. Try shooting raw + JPEG. If the JPEG is overexposed, it's usually fixable in the raw version.
With raw, you'd expose such that the sky has no blown out highlights. And then in post you adjust your tone mapping to bring up the darker parts without destroying the sky. There are many different ways of doing tone mapping and a gazillion ways you can configure that. That's because different scenes call for different ways to deal with over/under exposed areas.
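For a sense of what that adjustment does, here's a minimal numpy sketch of a toy tone curve that lifts shadows while soft-compressing highlights; the curve shape and parameter values are invented for illustration, not what any raw editor actually uses:

```python
import numpy as np

def tone_map(raw_linear, shadow_lift=2.2, highlight_knee=0.8):
    """Toy tone curve: lift shadows while compressing highlights.

    raw_linear: float array in [0, 1], linear values exposed so the
    sky is not clipped (i.e. deliberately biased dark).
    """
    x = np.clip(raw_linear, 0.0, 1.0)
    # Lift the dark regions with a gamma-style curve.
    lifted = x ** (1.0 / shadow_lift)
    # Soft-compress values above the knee so the sky keeps detail.
    above = lifted > highlight_knee
    lifted[above] = highlight_knee + (lifted[above] - highlight_knee) * 0.5
    return np.clip(lifted, 0.0, 1.0)
```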
If you expose correctly, use the aperture appropriately for the lens, scene, direction of light, pick the right lens, etc. you can control the outcome. But it requires knowing and understanding how all of that interacts. You can get very different results for the same scene just by fiddling with aperture and exposure.
The mark of a good photographer is that they don't spend a lot of time in post and instead switch lenses to deal with different scenes. They know how it's going to come out before they click the shutter. It used to be that they wouldn't even see the end results until after developing the film. So, they'd be measuring light and calculating optimal aperture and exposure settings given the scene, light, and lens. Modern cameras make this a lot more interactive and easy. If it looks alright in the digital view finder, it probably is alright. If you have an SLR, pay attention to the exposure indicator.
If you are not interested in doing or learning that, stick to your smartphone. The lens and sensor are not amazing, but the camera AI is probably a lot better than what comes with your bigger camera, and it will compensate by doing smarter things in post-processing.
The best camera is the one you have with you. And this is why smartphones are so great, but the author also does a great job of expressing the limitations and problems.
I've worked as a professional photographer and videographer between jobs. I've owned pro Nikon and Canon, and currently roll with some bonkers-expensive Leica gear. What makes smartphones so special as cameras is how easy they make taking unobtrusive photos of friends, family, and kids. There are few things more frightening to me than a kid charging my way to rip the Leica out of my hands because it looks so interesting. They don't do that with smartphones. Smartphones are especially stealthy because it's not clear what the user is doing with them. They could just be browsing the web or whatever. Are they taking video? Or a photo, etc.?
This is why I ended up picking up an (admittedly quite expensive) Ricoh GR IV. It's tiny enough to take with me everywhere, has a modern APS-C sensor and great IBIS.
The ISP pipeline that turns raw Bayer data into a JPEG includes black level correction, lens shading correction, defect pixel correction, demosaic, auto white balance, color correction matrix, tone mapping, multi-stage noise reduction (raw domain, luma/chroma in YUV, temporal across frames), sharpening, local contrast, and multi-frame fusion. Each stage has dozens of parameters and most of them are scene-dependent.
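To make the ordering concrete, here's a heavily simplified Python sketch of that kind of pipeline. Every stage and constant is a toy stand-in (real ISPs run these in fixed-function hardware with far more parameters), and it starts from already-demosaiced frames to keep the sketch short:

```python
import numpy as np

def toy_isp(raw_frames, black_level=64, awb_gains=(2.0, 1.0, 1.6)):
    """Toy stand-in for the stage ordering described above.

    raw_frames: list of HxWx3 float arrays in 0..255 (already demosaiced
    here; a real pipeline starts from the Bayer mosaic). All constants
    are invented for illustration.
    """
    processed = []
    for f in raw_frames:
        x = np.clip(f - black_level, 0, None)        # black level correction
        x = x * np.asarray(awb_gains)                # auto white balance gains
        ccm = np.array([[ 1.6, -0.4, -0.2],          # color correction matrix
                        [-0.3,  1.5, -0.2],
                        [-0.1, -0.5,  1.6]])
        x = x @ ccm.T
        processed.append(x)
    fused = np.mean(processed, axis=0)               # naive multi-frame fusion
    fused = fused / (fused + 255.0) * 2 * 255.0      # crude global tone curve
    # (noise reduction, sharpening, local contrast, etc. would follow here)
    return np.clip(fused, 0, 255)
```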
"Tuning" means months in a light booth shooting color charts at every CCT from roughly 2300K to 7500K, then more months outdoors capturing skin tones across ethnicities, foliage, sky at different times of day, neon, candles, fluorescent shop lighting, mixed lighting. Every sensor + lens module gets its own calibration. Then you tune the perceptual layer on top: skin tone preservation, sky and foliage segmentation feeding AWB, AE metering weights with face priors, highlight roll-off, the chroma vs luma noise tradeoff across frequency bands.
This is why two phones with the same Sony IMX sensor look completely different. Sony ships a reference tuning. Apple and Google throw it away. Pixel phones famously have small sensors and beat phones with 2-3x the sensor area on real-world output, almost entirely on the strength of ISP tuning and computational photography stacked on top.
The headcount is genuinely large. Apple and Google each have camera organizations in the hundreds, with a substantial fraction doing nothing but tuning: color scientists, ISP tuning engineers, perceptual quality engineers running blind A/B against competitor output. It's why nobody else -- not Samsung, not Xiaomi, not the smaller players -- quite matches them even when they buy the same or better sensors. (Disclosure: I run a camera company and we live this problem.)
The sensor size advice in the article is correct but easy to misread. A small sensor with great tuning regularly beats a big sensor with mediocre tuning, and the gap is bigger than people expect.
I care way less about strict objective/subjective quality in comparisons and more about which one chooses the fastest shutter speed. You can have the best sensor, colour science, and dynamic range in the world, but if there is motion blur it's unusable.
My Pixel has been okay at this, but I'm apprehensive about how to find good comparisons when I need to replace this phone.
Similar story on Android, but I won't bore you with the model names and numbers. Older handsets gave better results. Pre-smartphone, king of the hill was the Sony Ericsson K800; I loved the photos from that phone. No fancy software, just quality hardware and straightforward processing gave consistent results.
That said, I should try the Halide app. Bringing it back to basics with less computation might be the way forward for me.
If you don't want any frame stacking, you'd need to use a dedicated camera instead of a smartphone, because a smartphone without HDR isn't viable.
I have an old Android with a 13MP camera (Sony IMX214, 1/3.06") that leaves HDR off by default. I haven't had a need to turn HDR on except if I'm trying to photograph something with regions of extreme contrast.
Hopefully the camera doesn't upscale and then downscale again if told to save at its actual native-ish resolution.
The alternative, which many smartphone cameras do now, is to capture a burst of many photos at a short shutter speed and then combine them in software. For static things, this is equivalent to a longer shutter speed (with the additional advantage of not blowing out the highlights), and for moving things, we can filter in software to avoid smearing them out.
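A minimal sketch of the idea, assuming the burst frames are already aligned; the motion threshold and weighting scheme here are invented for illustration:

```python
import numpy as np

def stack_burst(frames, motion_threshold=12.0):
    """Toy burst merge: average static pixels, reject moving ones.

    frames: list of HxW float arrays from a burst at short shutter speed,
    assumed already aligned to the first frame.
    """
    frames = np.stack(frames)                # (N, H, W)
    reference = frames[0]
    # A pixel contributes only if it is close to the reference frame,
    # which keeps moving objects from smearing into the average.
    weights = (np.abs(frames - reference) < motion_threshold).astype(float)
    weights[0] = 1.0                         # reference frame always counts
    merged = (frames * weights).sum(axis=0) / weights.sum(axis=0)
    return merged
```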
The final picture is the result of digital processing of the 200 megapixels, which is quite different from losing the data, all other things being the same. His point is right, but this paragraph isn't worthy of the rest of the essay.
To counter the unnatural look of noise reduction I often add a film grain effect.
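Something like this (a toy numpy sketch; the strength and highlight scaling are purely a matter of taste):

```python
import numpy as np

def add_film_grain(image, strength=6.0, seed=0):
    """Add synthetic grain to mask over-smoothed, denoised areas.

    image: HxW(x3) float array in 0..255. strength sets the grain amplitude.
    """
    rng = np.random.default_rng(seed)
    grain = rng.normal(0.0, strength, size=image.shape)
    # Scale grain down in the highlights so it reads more like film.
    scale = 1.0 - (image / 255.0) * 0.5
    return np.clip(image + grain * scale, 0, 255)
```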
From what I remember, the core thesis is “take a lot of pictures and take the best parts”, which works for a surprising number of cases.
Having tried the iPhone Pro lenses, while impressive, the sensor size is never going to match a full-frame mirrorless.
4:1:0/4:1:1/4:2:2 chroma subsampling is the least of the problems. If it were just that, it'd largely just mean compression artifacts around red/blue things, like you often see on the text banners at the bottom of streaming / TV news, i.e. things like stop signs, etc.
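For illustration, here's a toy numpy sketch of what 4:2:0-style subsampling does: luma stays at full resolution while the chroma planes are stored at quarter resolution, which is why saturated red/blue edges smear:

```python
import numpy as np

def chroma_subsample_420(ycbcr):
    """Simulate 4:2:0: full-resolution luma, chroma averaged over 2x2 blocks.

    ycbcr: HxWx3 float array with even H and W. Red/blue edges (stop signs,
    news tickers) blur because they live almost entirely in the chroma planes.
    """
    y = ycbcr[..., 0]
    # Average each 2x2 block of the two chroma planes...
    cb_cr = (ycbcr[0::2, 0::2, 1:] + ycbcr[1::2, 0::2, 1:] +
             ycbcr[0::2, 1::2, 1:] + ycbcr[1::2, 1::2, 1:]) / 4.0
    # ...then blow them back up to full resolution.
    chroma_up = np.repeat(np.repeat(cb_cr, 2, axis=0), 2, axis=1)
    return np.concatenate([y[..., None], chroma_up], axis=2)
```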
But the aggressiveness of the de-noising in the native JPG/HEIF images otherwise is really unfortunate if you want to look at the images on a screen larger than the phone's screen. The amount of detail lost (other than in areas like people's faces where the phone knows to specialise) can be very considerable.
I'd really like a way to dial that aggressiveness down a fair bit, even at the cost of more noise/grain and larger file size (through less compression due to the extra noise).
Another thing is the amount of lens flare you can get when shooting into the sun for sunsets/sunrises, or at other large, bright light sources. With very small lens elements, it's understandable from a physics perspective that suppressing reflections and inter-reflections is very difficult on such a small surface area (even with special coatings to reduce the Fresnel reflection ratios). But if you care about image quality and want to look at images on a screen larger than the phone that took them, larger-format cameras still have some benefit despite their larger, heavier size and therefore inconvenience (looks at 5D Mk IV on shelf).
[1] https://apps.apple.com/us/app/lumina-manual-camera/id1617117...
Start by adjusting the black level and the exposure. This is where the histogram, if you have one, helps you visualize how much you are adjusting. As the exposure goes up, you can adjust the highlights to recover some of the areas that are trying to blow out. As you pull the black level down, you can recover some of the details getting crushed using the shadows adjustment. You can then adjust the contrast/saturation/warmth/tint as needed. The adjustments in the iOS Photos editor are laid out in pretty much that order.
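Roughly like this, as a toy numpy sketch (the parameter values and the exact math behind each slider are invented; only the ordering mirrors the description above):

```python
import numpy as np

def edit(image, black=0.02, exposure=1.3, highlights=-0.2, shadows=0.15):
    """Apply adjustments in roughly the order described above.

    image: HxWx3 float array in 0..1.
    """
    x = np.clip((image - black) / (1.0 - black), 0.0, 1.0)    # black level
    x = np.clip(x * exposure, 0.0, 1.0)                       # exposure
    x = x + highlights * np.clip(x - 0.7, 0.0, None)          # pull highlights back
    x = x + shadows * np.clip(0.3 - x, 0.0, None)             # lift the shadows
    # contrast / saturation / warmth / tint would follow here
    return np.clip(x, 0.0, 1.0)
```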
Obviously that will be slow, so probably do some kind of gradient descent; or perhaps, depending on what the parameters are, there may be a closed-form solution, I don't know.
Yes the result will be subtly different but it's just a starting point.
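As a sketch of that idea, here's a naive numerical-gradient fit over just two made-up parameters (exposure and black level) against a target rendition; a real attempt would cover the full set of sliders, and some of them may well have a closed form:

```python
import numpy as np

def fit_params(raw, target, steps=200, lr=0.5):
    """Find exposure/black-level values that best match a target image.

    raw, target: HxWx3 float arrays in 0..1. Minimizes mean squared error
    with a finite-difference gradient; purely a starting-point estimate.
    """
    params = np.array([1.0, 0.0])            # [exposure, black level]

    def render(p):
        return np.clip((raw - p[1]) * p[0], 0.0, 1.0)

    def loss(p):
        return np.mean((render(p) - target) ** 2)

    for _ in range(steps):
        grad = np.zeros_like(params)
        eps = 1e-3
        for i in range(len(params)):
            d = np.zeros_like(params)
            d[i] = eps
            # Central difference: cheap and good enough for two parameters.
            grad[i] = (loss(params + d) - loss(params - d)) / (2 * eps)
        params -= lr * grad
    return params
```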
It's mostly because, in the VFX/CG space, ray tracing / path tracing denoisers almost always rely on extra outputs/AOVs such as 'albedo' (diffuse reflectance), normals / world position, etc., to help guide them in many cases.
So they often can 'cheat' a bit, and know where the edges of things are (because say the object ID AOV changes - minus pixel filtering, which complicates things a bit).
They can also 'cheat' in other ways, by mixing back in some of the diffuse texture detail that the denoiser might have removed from the 'albedo' AOV channel.
Cameras don't really have anything to guide them, so they have to guess. And often they seem to use very primitive methods like bilateral filters (or at least things which look very similar) to do that guessing, and it doesn't work very well.
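For reference, a plain (unguided) bilateral filter looks roughly like this toy numpy sketch; the only "edge awareness" comes from down-weighting neighbours whose values differ a lot, since there's no albedo/normal/ID channel to consult:

```python
import numpy as np

def bilateral_filter(img, radius=3, sigma_space=2.0, sigma_range=0.1):
    """Simple bilateral filter on a grayscale image.

    img: HxW float array in 0..1. Averages nearby pixels, weighted by both
    spatial distance and difference in value, so strong edges are preserved.
    """
    h, w = img.shape
    padded = np.pad(img, radius, mode="reflect")
    out = np.zeros_like(img)
    norm = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = padded[radius + dy:radius + dy + h,
                             radius + dx:radius + dx + w]
            spatial = np.exp(-(dx * dx + dy * dy) / (2 * sigma_space ** 2))
            rangew = np.exp(-((shifted - img) ** 2) / (2 * sigma_range ** 2))
            weight = spatial * rangew
            out += weight * shifted
            norm += weight
    return out / norm
```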
Portrait modes on phones can use depth sensors a bit to help, if the phone has them, but for things like hair strands it doesn't really work, and it's mostly useful for fake depth-of-field blurring.
Fwiw, Topaz -- which I have a license for but essentially never use -- has pretty incredible denoising & upsizing features (for both photo & video), but to get the optimal quality output you offload the processing to their cloud infra (and buy credits from them to pay for it). It's roughly the equivalent of a SWE using a local LLM that's "good enough" vs a frontier model that's SOTA but requires a consumption-based subscription.
1: https://blogs.nvidia.com/blog/ai-decoded-ray-reconstruction/
2: https://gpuopen.com/amd-fsr-rayregeneration/
Autoencoders can presumably do this, except that they would be operating at the level of patches rather than the entire Moon.
https://news.ycombinator.com/item?id=35107601
It was probably a result of an explicit company charter but such things can happen with modern ML models quite unintentionally.
Bear in mind, a modern flagship phone doesn't just need to take photos - it also needs to record 4k video at 60fps.
Can't go too hard with the ML when you've only got 1/60th of a second to do it.
One day smartphone cameras might get there, but right now, the sensor technology isn't there yet. The problem isn't merely noise. It's that rain and snow are moving about in the scene, which means that the camera can't just do its usual trick of taking multiple exposures.
AI denoisers are nice, but they aren't strictly necessary for ILCs. Even those full-frame cameras without IBIS are able to take pictures of nighttime snow just fine.