Look at the decoders for each format that darktable supports here: https://github.com/darktable-org/rawspeed/tree/develop/src/l...
It's some binary parsing, reading metadata, and maybe some decompression - about a thousand lines of C++ on average for each format. These aren't complex codecs like HEVC; they only approach JPEG-level complexity because they embed JPEGs as thumbnails!
Cameras absolutely could emit DNG instead, but that would introduce more development friction: coordination (with Adobe), potentially a language barrier, and potentially more difficulty shipping experimental features.
Photographers rarely care, so it doesn't appreciably impact sales. Raw processing software packages have generally good support available soon after new cameras are released.
One thing that open source libraries do tend to miss is very important extra metadata - for example, Phase One IIQ files carry an embedded sensor profile or even a full black frame that is not yet applied to the raw data the way it typically is for a NEF or DNG from many cameras. From a quick scan of the code, rawspeed does seem to handle this.
It can get trickier - Sinar digital backs have an extra dark frame file (and flat frame!) that is not part of the RAW files, and that is not handled by any open source library to my knowledge - though I did write a basic converter myself to handle it: https://github.com/mgolub2/iatodng_rs
I'm not sure how DNG would be able to handle having both a dark and a flat frame without resorting to applying them to the raw data and saving only the corrected (still mosaiced, not demosaiced) data.
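For reference, the flat/dark-field correction itself is simple arithmetic on the mosaiced data; a minimal numpy sketch (the normalization choice and function name are illustrative assumptions, not how any particular converter does it):

    import numpy as np

    def calibrate(raw, dark, flat):
        """Dark-frame subtraction and flat-field correction on mosaiced data."""
        raw, dark, flat = (a.astype(np.float64) for a in (raw, dark, flat))
        signal = raw - dark               # remove fixed-pattern / thermal offset
        gain = flat - dark                # per-pixel sensitivity map
        gain /= gain.mean()               # normalize so overall exposure is preserved
        corrected = signal / np.clip(gain, 1e-6, None)
        return np.clip(corrected, 0, None)

The open question above is where a DNG would carry the dark and flat frames themselves, rather than only the already-corrected result.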
In astronomy/astrophotography the FITS format[1] is commonly used, which supports all these things and is, as the name suggests, extremely flexible. I wonder why it never caught on in regular photography.
It would be especially good for really old setups that used RGB color wheels and multiple exposures, exactly as a multispectral astro image might. Phase One also has a multispectral capture system for cultural heritage, which to my knowledge just shoots individual IIQs… It would work great for multi-shot pixel-shift captures too.
Possibly, the engineers just didn't know about it when they were asked to write the firmware? It's funny - I think most RAW formats are just weird TIFFs to some degree, so why not use FITS instead?
Considering how often I witnessed engineers trying to build something to solve a problem instead of sitting down and researching if someone else did that already, and likely better, I really wouldn’t be surprised if that is the answer to most questions in this thread.
In fact, I'm not sure how that saga ended; CR3 support was finally added a few years after the release of the Canon mirrorless cameras that output CR3.
Technically speaking, implementing DNG would be another development activity on top of a RAW export, because RAW also has a purpose in development and tuning of the camera and its firmware.
It is supposed to be raw data from the sensor with some additional metrics streamed in, just sufficiently standardized to be used in the camera-vendors' toolchain for development.
It just "happens" to be also available to select for the end-user after product-launch. Supporting DNG would mean adding an extra feature and then hiding the RAW-option again.
I can imagine it's hard to make this a priority in a project plan, since most of the objectives are already achieved by saving in RAW
Second, the native raw images do include a ton of adjustments in brightness, contrast and color correction. All of which gets lost when you open the image file in apps from companies other than the camera vendor. E.g. open a Nikon raw in NC Software and then in Lightroom. Big difference. Adobe has some profiles that get near the original result, but Nikon's own raw rendering is often better.
So DNG would absolutely be an advantage, because then at least these color corrections could be carried natively and not get lost in the process.
It "just happens" to be selectable because it is a byproduct of the internal development: The existing RAW format is used internally during development and tuning of the product, and is implemented to work with vendor-internal processes and tools.
Supporting DNG would require a SEPARATE development, and it would still not replace a proprietary RAW-format in the internal toolchain.
(because the DNG patent-license comes with rights for Adobe as well as an option to revoke the license)
RAW (any format) is an essential requirement for many photographers. You just can't get the same results out of a jpeg.
> It is supposed to be raw data from the sensor with some additional metrics streamed in, just sufficiently standardized to be used in the camera-vendors' toolchain for development. It just "happens" to be also available to select for the end-user after product-launch.
--> Even if DNG support were adopted as a feature for the end user, the proprietary RAW would still need to be maintained because it has a core purpose during development of the product. Its use AFTER that is the product feature.
None of this negates the value of RAW for photographers; that is completely beside the point.
It is up to you now to ingest new information and adjust your interpretation, a process I'm afraid I can't help any further with.
Good luck ¯\_(ツ)_/¯
This is what I was thinking, that there are potentially so many RAW formats because there are so many sensors with potentially different output data. There should be a way to standardize this though.
Supporting DNG means that, a few steps later, the data has to be standardised into ANOTHER RAW equivalent - a format which happens to be patented and comes with a license and legal implications.
Among them is a right for Adobe to every method you used to make this conversion from your proprietary bare-metal sensor data. This is not trivial, because if you're a vendor working on sensor tech you wouldn't want to be required to share all your processing with Adobe for free...
Well, DNG ("Digital Negative") is such a format, defined and patented by Adobe, but with a license allowing free use under certain conditions.
The conditions are somewhat required to make sure that Adobe remains in control of the format, but at the same time they create a commitment and legal implications for anyone adopting it.
...and what do you think DNG is?
In a development environment, this format competes with an already-implemented proprietary RAW-format which already works and can be improved upon without involvement of a legal department or 3rd party.
It doesn't seem to reward innovation, it seems to reward anti-competitive practices.
That is the intended purpose of a patent. From WIPO [1]:
> The patent owner has the exclusive right to prevent or stop others from commercially exploiting the patented invention for a limited period within the country or region in which the patent was granted. In other words, patent protection means that the invention cannot be commercially made, used, distributed, imported or sold by others without the patent owner's consent.
While having two file formats to deal with in software development definitely "competes" with the simplicity of just having one, patents and licensing aren't the reason they're not choosing Adobe DNG.
--> Your information source is incomplete. Please refer to the license of DNG [0].
The patent rights are only granted:
1. When used to make implementations compliant with the specification,
2. Adobe has the right to license at no cost every method used to create this DNG from the manufacturer, and
3. Adobe reserves the right to revoke the rights "in the event that such licensee or its affiliates brings any patent action against Adobe or its affiliates related to the reading or writing of files that comply with the DNG Specification"
--
None of this is trivial to a large company.
First of all, it requires involvement of a legal department for clearance,
Second, you are at risk of violating the patent as soon as you are not compliant with the specification,
Third, you may have to open to Adobe, at no charge, every piece of IP required to create a DNG from your sensor (which can be a significant risk and burden if you develop your own sensor), and
Fourth, in case the aforementioned IP is repurposed by Adobe and you take legal action, your patent license for DNG is revoked.
--
--> If you are a vendor with a working RAW implementation and all the necessary tools for it in place, it's hard to make a case for why you should go through all that just to implement another specification.
[0] https://helpx.adobe.com/camera-raw/digital-negative.html#dng
Occam's razor here suggests that the camera manufacturers' answers are correct, especially since they are all the same. DNG doesn't let them store what they want to and change it at will -- and this is true of any standardized file format and not true of any proprietary format.
The fact that you entered this discussion instantly claiming that others are wrong, without having even read the license in question, makes this conversation rather..."open-ended"
> Also, the right of revocation only applies if the DNG implementor tries to sue Adobe. Why would they do that?
As I wrote above, Adobe reserves the right to use, at no cost, every patent that happens to be used to create a DNG from your design, and will revoke your license if you dispute that, e.g. over what they do with it.
> Occam's razor here suggests [..]
Or, as I suggested, it's simply hard to make a case in favor of developing and maintaining DNG, with all that burden, if you have to support your own RAW format anyway.
> granted by Adobe to individuals and organizations that desire to develop, market, and/or distribute hardware and software that reads and/or writes image files compliant with the DNG Specification.
What if I use it for something that's not images, because I want to create a file that's a DNG and a Game Boy ROM at the same time? Or if I'm a security researcher testing non-compliant files? Or if I'm not a great developer, or haven't had enough time to make my library perfectly compliant with the specification... Will I be sued for breaking the license?
You not only have to remove DNG support on those products, but due to warranty law in many countries you have to provide an equivalent feature to the customer (--> develop a converter application again, but this time for products whose development you closed years ago).
The alternative would be to settle with Adobe to spare yourself all that cost. So Adobe holds all the cards in this game.
Now: Why bother transitioning your customers to DNG...?
You can argue that maybe those things shouldn’t be considered trade secrets or whatever. But there’s just a bit more to it than that.
Despite the name, this is rarely a pure raw stream of data coming from the sensor. It's usually close enough for practical photographic purposes though.
Despite this, people eventually used it for photographic purposes.
It's a full-fledged format that contains the extensive metadata already in the EXIF formats, including vendor blocks etc., and then the sensor readout, which is relatively similar between nearly all sensors - there certainly aren't many types, considering you can express the Bayer pattern. This can all be expressed in DNG, and would NOT need to be an "extra" on top of "raw".
And indeed, some camera vendors do in fact do this.
What are you talking about? Canon could implement DNG instead of CR3. It's not that hard. Both of these formats are referred to as "RAW".
DNG would not replace CR3, because CR3 would still be needed before launch, and Canon has no incentive to change their entire internal toolchain to comply with Adobe's DNG specification.
Especially not because the DNG format is patented and allows Adobe to revoke the license in case of dispute...
[1] https://github.com/darktable-org/rawspeed/issues/366
And in my experience there have been lots of bugs with Fujifilm raws in darktable:
[2] https://github.com/darktable-org/rawspeed/issues/354
[3] https://github.com/darktable-org/darktable/issues/18073
However, Fujifilm lossless compressed raw actually does a decent job keeping the file sizes down (about 50% to 60% the file size of uncompressed) while maintaining decent write speed during burst shooting.
https://www.color.org/scene-referred.xalter
https://ninedegreesbelow.com/photography/display-referred-sc...
Inexplicably I didn't understand at the time why he (Bryce Bayer) wanted this. He was modest about his work.
I do now!
I don't know the details of DNG but even the slightest complication could be a no-go for some manufacturers.
A simple example is white balance. The sensor doesn't know anything about it, but typical postprocessing makes both a 2700K incandescent and a 5700K strobe look white. A photographer might prefer to make the incandescent lights look more yellow. There's a white balance setting in the camera to do that when taking the picture, but it's a lot easier to get it perfect later in front of a large color-calibrated display than in the field.
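As a rough illustration of why this is easy to defer when the mosaiced data is preserved: white balance is just per-channel gain applied to the raw mosaic before demosaicing. A hedged numpy sketch, assuming an RGGB layout (the gain values are made up):

    import numpy as np

    def apply_white_balance(bayer, r_gain, b_gain):
        """Scale the red and blue sites of an RGGB Bayer mosaic relative to green."""
        out = bayer.astype(np.float64).copy()
        out[0::2, 0::2] *= r_gain   # R sites
        out[1::2, 1::2] *= b_gain   # B sites
        return out

    # e.g. neutralizing a warm incandescent shot after the fact:
    # wb = apply_white_balance(raw, r_gain=1.2, b_gain=2.4)   # illustrative multipliers

Because the untouched mosaic is stored in the file, you can rerun this with different gains as many times as you like.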
Another example is dealing with a scene containing a lot of dynamic range, such as direct sunlight and dark shadows. The camera's sensor can capture a greater range of brightness than a computer screen can display or a printer can represent, so a photographer might prefer to delay decisions about what's dark grey with some details and what's clipped to black.
That’s not why we use RAW. It’s partly because (1) if you used Adobe RGB or Rec. 709 on a JPEG, a lot of people would screw it up, (2) you get a little extra raw data from the pre-filtering of Bayer, X-Trans, etc. data, (3) it’s less development work for camera manufacturers, and (4) partly historical.
No - the non-RAW image formats offered were traditionally JPG and 8-bit TIFF. Neither of those are suitable for good quality post-capture edits, irrespective of their colour space (in fact, too-wide a colour space is likely to make the initial capture worse because of the limited 8-bit-per-colour range).
These days there is HEIF/similar formats, which may be good enough. But support in 3rd party tools (including Adobe) is no better than RAW yet, i.e., you need to go through a conversion step. So...
Another advantage of RAW is non-destructive editing, at least in developers that support it and are more than import plugins for traditional editors. I rarely have to touch Photoshop these days.
Try adjusting an 8-bit RAW file and you will have the same problem.
You are conflating format and bit depth.
The actual main thing about RAW is that the transforms for white balance, gamma, brightness, colour space, etc. haven't yet been applied and baked into the file. With JPEG, at least some of those transforms have already been applied, which then limits how much you can do as opposed to starting with the untransformed sensor data.
You could definitely do much more with a 12-bit JPEG than you could with an 8-bit JPEG, but still not as much as you can do starting from RAW data.
As I understand it, the reason some professional sports photographers don't shoot RAW (or it's less important) is more because they are in an environment where publishing quickly is important, so upload speeds matter and there isn't really much time to postprocess.
I don't know Canon well, but 120fps w/ dual CFExpress + 800-1200 frames buffer is fairly standard on top-end professional sports/wildlife mirrorless cameras these days.
Personally I only shoot at 6fps in continuous for birds because anything faster is usually unnecessary (except for hummingbirds) and just creates more exposures to review/delete in post. I generally prefer quiet single exposure (Qs) when doing wildlife to avoid any sounds, although since switching to the Z8 it's not really an issue, since mirrorless is effectively silent in all modes at fairly open apertures.
I really wish they had raw pre-capture on the Z8, but I doubt they will do it.
But the video mode supports full 8K60 at least, so only a very tiny crop.
It can only do 8k60 though, not 8k120, so obviously the video pipeline and the C120 pipeline aren't identical.
When you say it doesn't go into DX mode for 11MP, you're correct for C120, but for C60 it /does/ go into DX mode (which captures a 19MP image). How this differs between C60 and C120 in the camera internals, I'm not entirely sure. I had thought the resolution reduction came from cropping, but the manual confirms that when you enable C120, it's an 11MP photo but full frame (no cropping).
Obviously this stuff is complex (maybe overly complex) and I haven't delved into it super deeply since I don't need it for my type of photography (and I never do video).
But this suggests that the limitations are not in sensor readout, but in processing/saving. It's speculated that it's due to heat problems when doing faster than 20fps full raw.
These days, the bottleneck for achieving continuous shooting rates is probably writing to the SD card (which is the standard for the consumer/prosumer models).
I think this is being too generous.
DNG is just an offshoot of TIFF. Having written a basic DNG parser without ever having read up on TIFFs before, I can say it really isn't that hard.
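For a sense of scale, the TIFF scaffolding underneath a DNG is a byte-order mark, a magic number, and a chain of IFDs full of tagged entries. A minimal, hedged sketch of walking the first IFD (error handling omitted; the tag numbers in the comment are from the TIFF/DNG specs):

    import struct

    def read_ifd0_tags(path):
        """Return the raw tag entries of the first IFD in a TIFF/DNG file."""
        with open(path, "rb") as f:
            header = f.read(8)
            endian = "<" if header[:2] == b"II" else ">"        # II little-endian, MM big-endian
            magic, ifd_offset = struct.unpack(endian + "HI", header[2:8])
            assert magic == 42, "not a TIFF-family file"

            f.seek(ifd_offset)
            (count,) = struct.unpack(endian + "H", f.read(2))
            tags = {}
            for _ in range(count):
                tag, type_, n, value = struct.unpack(endian + "HHII", f.read(12))
                tags[tag] = (type_, n, value)    # value is inline or an offset, depending on size
            return tags

    # e.g. tag 0x0100 = ImageWidth, 0x0101 = ImageLength, 0xC612 = DNGVersion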
As far as experimental features, there’s room in the spec for injecting your own stuff, similar to MakerNote in EXIF if I recall.
If you are planning to do experimental stuff, I’d say what Apple pulled off with ProRAW is the most innovative thing that a camera manufacturer has done in forever. They worked with Adobe to get it into the spec. All of these camera manufacturers have similar working relationships with Adobe, so there’s really no excuse. And if you can’t wait that long, again, MakerNote it.
In my opinion, custom RAW formats are a case study in “Not Invented Here” syndrome.
(Edit: I mean, if you want to get a basic debayered RGB image from a raw, that's not too hard. But if you want to cram out the most, there are a lot of devils in a lot of details. Things like estimating how many green pixels are not actually green, but light-spill from what should have been red pixels is just the beginning.)
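To make that concrete: a naive bilinear demosaic really is only a few lines, and everything alluded to above (light spill between neighbouring sites, noise-aware interpolation, highlight handling) is what separates it from a production pipeline. A toy sketch, assuming an RGGB mosaic and using normalized convolution:

    import numpy as np
    from scipy.ndimage import convolve

    def bilinear_demosaic(bayer):
        """Toy bilinear demosaic of an RGGB Bayer mosaic (2-D float array)."""
        h, w = bayer.shape
        r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
        b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
        g_mask = 1 - r_mask - b_mask

        kernel = np.array([[1.0, 2.0, 1.0],
                           [2.0, 4.0, 2.0],
                           [1.0, 2.0, 1.0]])

        def interp(mask):
            # weighted average of whichever neighbours actually carry this colour
            num = convolve(bayer * mask, kernel, mode="mirror")
            den = convolve(mask, kernel, mode="mirror")
            return num / np.maximum(den, 1e-9)

        return np.dstack([interp(r_mask), interp(g_mask), interp(b_mask)])

None of the hard parts live here; they live in deciding how much to trust each of those neighbours.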
But yeah, it would be preferable to have them use the Digital Negative (DNG) format, but why bother when the community does the work for them? Reminds me of how Bethesda does things.
What's complex is the metadata. All the cameras have different AF, WB and exposure systems.
I am a weirdo and have always liked and used Pentax (now Ricoh); they do support the DNG format.
I've worked with medical imaging systems from the largest imaging companies in the world -- GE, Siemens, etc. -- all of which use a standardized image format/protocol/etc. called DICOM. DICOM has standardized fields for the vast majority of information you would need to record for medical imaging - patient ID, study ID, image # if it's an image sequence, etc. - as well as metadata about where it came from, like the vendor ID of the machine that did the scan (the CT scanner, MRI, X-ray, etc). There are also arbitrary fields for vendor-specific information that doesn't have a defined field in the specification.
All of these fields have clear purposes and definitions and all are available to every DICOM reader/writer, and yet the company I worked for had a huge table of re-mappings because some scanners, for some reason, would put the patient ID in the vendor field, or the vendor ID in the scanner name field, and so on. There's no reason for this, there's no complication that might cause this; it's all standard fields that everything supports.
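A hedged sketch of what such a workaround table tends to look like in practice. The scanner names and field swaps below are invented for illustration (they are not real vendor behaviour), and it assumes pydicom's standard keyword access:

    import pydicom

    # Workarounds keyed by the scanner model string, which we assume is reliable;
    # each entry is (field_the_value_shows_up_in, field_it_belongs_in).
    REMAP = {
        "ExampleScanner 3000": [("Manufacturer", "PatientID")],    # patient ID stuffed into the vendor field
        "OtherScanner X":      [("StationName", "Manufacturer")],  # vendor ID stuffed into the scanner name field
    }

    def normalize(ds: pydicom.Dataset) -> pydicom.Dataset:
        """Copy misfiled values back into the standard DICOM fields."""
        for wrong, intended in REMAP.get(ds.get("ManufacturerModelName", ""), []):
            value = ds.get(wrong)
            if value:
                setattr(ds, intended, value)
        return ds

Every row of a table like that represents a case where the standard already had a perfectly good place for the data.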
These are manufacturers who, while using the standard that everyone else uses, deliberately screw things up in ways that their own hardware and software can silently compensate for but which other vendors then have to work around in order to inter-operate.
In other words, cameras absolutely could emit DNG instead, but aside from the arguments that you've made, I have every confidence that manufacturers would fuck it up on purpose just to make it harder for other vendors' software to inter-operate. So instead of software having to explicitly support e.g. Canon's RAW format, and being able to say "we don't yet support this new format", software would "support" DNG but it would be completely broken for some random cameras, because the developer hadn't yet had the chance to implement idiotic workarounds for those specific broken images.
It’s the best place to add “signature steps.” Things like noise reduction, chromatic aberration correction, and one-step HDR processing.
I used to work for a camera manufacturer, and our Raw decoder was an extremely intense pipeline step. It was treated as one of the biggest secrets in the company.
Third-party demosaicers could not exactly match ours, although they could get very good results.
If you claim to support a particular format, then you're responsible for supporting that format, and there's no reason why a company would do that if they have no intention of helping anyone other than themselves access the data.
"Not supporting" != "Not allowing"
They may not be thrilled by third parties reverse-engineering and accessing their proprietary formats, and can't necessarily stop them, but they are under no obligation to help them to do it, and they are free to change the rules, at their own whim.
Think of Apple, regularly borking cracking systems. It may not be deliberate. They may have just introduced some new data that cracked the crack, but there's no duty to support the crackers.
However, modern deep learning-based joint demosaicing and denoising algorithms handily outperform Darktable's classical algorithms.
The issue is that companies want control of the demosaicing stage, and the container format is part of that strategy.
If a file format is a corporate proprietary one, then there's no expectation that they should provide services that do not directly benefit them, or that expose internal corporate trade secrets, in service to an open format.
If they have their own format, then they don't have to lose any sleep over stuff that doesn't interest or benefit them.
They lost sleep over having images from their devices looking bad.
They wanted ultimate control of their images, and they didn't trust third-party pipelines to render them well.
Not kidding. These folks are serious control freaks. They are the most anal people I've ever met, when it comes to image Quality.
But it was a pretty major one, and I ran their host image pipeline software team.
[Edited to Add] It was one of the “no comment” companies. They won’t discuss their Raw format in detail, and neither will I, even though it has been many years, since I left that company, and it’s likely that my knowledge is dated.
Can you share the reason for that?
It seems to me that long ago, camera companies thought they would charge money for their proprietary conversion software. It has been obvious for nearly as long that nobody is going to pay for it, and delayed compatibility with the software people actually want to use will only slow down sales of new models.
With that reasoning long-dead, is there some other competitive advantage they perceive to keeping details of the raw format secret?
They feel that their images have a "corporate fingerprint," and are always concerned that images not get out, that don't demonstrate that.
This often resulted in difficulty, getting sample images.
Also, for things like chromatic aberration correction, you could add metadata that describes the lens that took the picture, and use that to inform the correction algorithm.
In many cases, a lens that displays chromatic aberration is an embarrassment. It's one of those "dirty little secrets," that camera manufacturers don't want to admit exists.
As they started producing cheaper lenses, with less glass, they would get more ChrAb, and they didn't want people to see that.
Raw files are where you can compensate for that, with the least impact on image quality. You can have ChrAb correction, applied after the demosaic, but it will be "lossy." If you can apply it before, you can minimize data loss. Same with noise reduction.
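A hedged sketch of the kind of pre-demosaic correction being described: lateral chromatic aberration is commonly modelled as a small radial rescaling of the red and blue planes relative to green, with coefficients taken from per-lens metadata. The polynomial and the example coefficients here are illustrative assumptions, not any manufacturer's model:

    import numpy as np
    from scipy.ndimage import map_coordinates

    def correct_lateral_ca(plane, k1, k2=0.0):
        """Radially rescale one colour plane about the image centre (bilinear resample)."""
        h, w = plane.shape
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        yy, xx = np.mgrid[0:h, 0:w]
        r2 = ((yy - cy) ** 2 + (xx - cx) ** 2) / (cy ** 2 + cx ** 2)   # normalized radius^2
        scale = 1.0 + k1 * r2 + k2 * r2 ** 2
        src_y = cy + (yy - cy) * scale
        src_x = cx + (xx - cx) * scale
        return map_coordinates(plane, [src_y, src_x], order=1, mode="nearest")

    # e.g. red = correct_lateral_ca(red_plane, k1=+4e-4)
    #      blue = correct_lateral_ca(blue_plane, k1=-3e-4)

Doing this on the separated colour planes before demosaicing, rather than on the finished RGB image, is exactly the "least impact on image quality" point above.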
Many folks here, would absolutely freak, if they saw the complexity of our deBayer filter. It was a pretty massive bit of code.
It seems to me that nearly all photographers who are particularly concerned with image quality shoot raw and use third-party processing software. Perhaps that's a decision not rooted firmly in reality, but it would take a massive effort focused on software UX to get very many to switch to first-party software.
> Raw files are where you can compensate for that, with the least impact on image quality. You can have ChrAb correction, applied after the demosaic, but it will be "lossy."
Are you saying that they're baking chromatic aberration corrections into the raw files themselves so that third-party software can't detect it? I know the trend lately is to tolerate more software-correctable flaws in lenses today because it allows for gains elsewhere (often sharpness or size, not just price), but I'm used to seeing those corrections as a step in the raw development pipeline which software can toggle.
If the third-party stuff has access to the raw Bayer format, they can do pretty much anything. They may not have the actual manufacturer data on lenses, but they may be able to do a lot.
Also, 50MP, lossless-compressed (or uncompressed) 16-bit-per-channel images tend to be big. It takes a lot to process them; especially if you have time constraints (like video). Remember that these devices have their own, low-power processors, and they need to handle the data. If we wrote host software to provide matching processing, we needed to mimic what the device firmware did. You don't necessarily have that issue, with third-party pipelines, as no one expects them to match.
What you can store, is metadata that informs these "first step" filters, like lens data, and maybe other sensor readings.
One of the advantages to proprietary data storage, is that you can have company-proprietary filters, that produce a "signature" effect. Third-party filters may get close to it (and may actually get "better" results), but it won't be the same, and won't look like what you see in the viewfinder.
That was my suspicion initially. In fact, when I read about mass DNG adoption, my first thought was "but how would it work for this company?" (admittedly I don't know much about DNG, but intuitively I had my doubts).
And then I saw your comment.
This is all accommodated for in the DNG spec. The camera manufacturers specify the necessary matrix transforms to get into the XYZ colorspace, along with a linearization table.
If they really think the spectral sensitivity is some valuable IP, they are delusional. It should take one Macbeth chart, a spreadsheet, and one afternoon to reverse engineer this stuff.
Given that third party libraries have figured this stuff out, seems they have failed while only making things more difficult for users.
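For what it's worth, the "one Macbeth chart and a spreadsheet" approach really is just an over-determined least-squares fit. A rough sketch, assuming you already have linear, white-balanced camera-RGB averages for each patch plus the published XYZ reference values:

    import numpy as np

    def fit_color_matrix(camera_rgb, reference_xyz):
        """Fit a 3x3 matrix M such that XYZ ~= camera_rgb @ M.T.

        camera_rgb:    (N, 3) linear patch averages from shooting a colour chart
        reference_xyz: (N, 3) published XYZ values for the same N patches
        """
        M_t, *_ = np.linalg.lstsq(camera_rgb, reference_xyz, rcond=None)
        return M_t.T

    # With the 24 ColorChecker patches this is heavily over-determined,
    # which is why an afternoon really is enough for a workable matrix.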
(Eg Nikon's format is 'NEF', Canon's is 'CR3', and so on, named after the file extensions.)
I don't know if DNG can contain (optional) spectral response information, but camera makers were traditionally not enthused about sharing such information, or for that matter other information they put in their various raw formats. Nikon famously 'encrypted' some NEF information at one point (which was promptly broken by third party tools).
The reason I’m less fussy now is because the combination of edits, metadata and image data in a single file didn’t necessarily help me when I switched from Lightroom to Capture One. I would love to be able to update the files to use newer RAW processors and better IQ, but I lose the Lightroom edit information in C1. That makes sense as they do things differently. But I hoped that with DNG there was a universal format for handling edits.
My JPEGs remain the definitive version of the images but I would love to be able to recover all those original edits again in C1, or any other editing program.
In practice too, if consistent results are desired. The format being identical doesn't mean the values the sensor captures under the same conditions will be identical, so a color-calibrated workflow could produce wrong results.
It would be nice to have a setting for "treat camera Y like camera X (here there be dragons)" though. I've had to do something similar with the Lensfun database to get lens corrections working on Mk. II of a lens where Mk. I was supported, but a GUI would be nice. A prompt to guess the substitution automatically would be even nicer.
As an anecdote, I have a Sony a7r and operating it via its mobile app is one of the worst user experiences I have had in a while.
Same goes for the surrounding ecosystem of software. E.g. Adobe's Lightroom is full of obsolete paradigms and weird usability choices.
Usability of the camera hardware and software ecosystem is another matter. I think the common wisdom is that most paying users don't want beginner-friendly, they want powerful and familiar. So everything emulates the paradigms of what came before. DSLRs try to provide an interface that would be familiar to someone used to a 50 year old SLR camera, and Lightroom tries to emulate a physical darkroom. Being somewhat hostile to the uninitiated might even be seen as a feature.
There's also the Sigma BF if that's what you want; Sigma actually does a pretty good job from the perspective of a minimalistic, idealistic, on-point, field-usable UI, though the return on that effort just isn't worthwhile. I have the OG DP1, and it feels as natural as a PS/2 IntelliMouse. I tried the dp2 Quattro once and it felt as natural as any serious right-handed trackball. They scratch so many of camera nerds' itches.
Most people just buy an A7M4 and a 24-70 Zeiss. And then they stupidly leave it all on auto and never touch the dials. And it puts smiles on people's faces 80% of the time. And that's okay. No?
You can achieve maybe a quarter of the kinds of shots on a phone that an interchangeable-lens camera will let you make.
That's an extremely important quarter! For most people it covers everything they ever want a camera to do. But if you want to get into the other 75%, you're never going to be able to do it with the enormous constraints imposed by a phone camera's strict optical limits, arising from the tight physical constraints into which that camera has to fit.
Whereas a $1500 Nikon 15MP from 20 years ago is real crisp, and I can put a 300mm lens on it if I want to "zoom in".
Even my old nikon 1 v1 with its cropped sensor 12MP takes "better pictures" than the two 108MP phone cameras.
But there are uses for the pixel density and I enjoyed having 108MP for certain shots, otherwise not using that mode in general.
People make much of whatever Samsung it was a couple years back, that got caught copy-pasting a sharper image of Luna into that one shot everyone takes and then gets disappointed with the result because, unlike the real thing, our brain doesn't make the moon seem bigger in pictures. But they all do this and they have for years. I tried taking pictures of some Polistes exclamans wasps with my phone a couple years back, in good bright lighting with a decent CRI (my kitchen, they were houseguests). Now if you image search that species name, you'll see these wasps are quite colorful, with complex markings in shades ranging from bright yellow through orange, "ferruginous" rust-red, and black.
In the light I had in the kitchen, I could see all these colors clearly with my eyes, through the glass of the heated terrarium that was serving as the wasps' temporary enclosure. (They'd shown a distinct propensity for the HVAC registers, and while I find their company congenial, having a dozen fertile females exploring the ductwork might have been a bit much even for me...) But as far as I could get the cameras on this iPhone 13 mini to report, from as close as their shitty minimum working distance allows, these wasps were all solid yellow from the flat of their heart-shaped faces to the tip of their pointy butts. No matter what I did, even pulling a shot into Photoshop to sample pixels and experimentally oversaturate, I couldn't squeeze more than a hint of red out of anything without resorting to hue adjustments, i.e. there is no red there to find.
So all I can conclude is the frigging thing made up a wasp - oh, not in the computer vision, generative AI sense we would mean that now, or even in the Samsung sense that only works for the one subject anyway, but in the sense that even in the most favorable of real-world conditions, it's working from such a total approximation of the actual scene that, unless that scene corresponds closely enough to what the ISP's pipeline was "trained on" by the engineers who design phones' imaging subsystems, the poor hapless thing really can't help but screw it up.
This is why people who complain about discrete cameras' lack of brains are wrongheaded to do so. I see how they get there, but there are some aspects of physics that really can't be replaced by computation, including basically all the ones that matter, and the physical, optical singlemindedness of the discrete camera's sole design focus is what liberates it to excel in that realm. Just as with humans, all cramming a phone in there will do is give the poor thing anxiety.
I think people mostly put up with it because on the one hand it doesn't matter all that often (sunset is a classic worst-case test for imaging systems!) and, on the other, well, "who are you going to believe? Fifty zillion person-centuries of image engineering and more billions of phones than there are living humans, or your own lyin' eyes?"
I still have to manually focus (by pushing the screen where I want it to focus), but on newer phones the focus tries to "track" what you touched, which is... why would they change that? I tilt the phone down to interact with it; I know where in the frame I want it to focus, because before I tilted the phone down, I was looking at the frame! Rule of thirds - I can reframe the image to put focus exactly in one of the areas it ought to be, zoom in or out, whatever. But no, apparently it has been decided I want the focus to wander around as it sees fit.
I just unplugged the Honor 8 to take a picture, and apparently the battery has been kaput since the last time I used it. Sad day, indeed.
http://projectftm.com/#H-6GJlHgGFA8Yek86MrkVw "Neutral Density" unedited but cropped
What always stood out most for me compared to Canon was Nikon's larger viewfinders, letting you commit to actual photography rather than being stuck with a feeling of peeping through a keyhole, and placement of buttons on the camera body allowing for maintained control of the most necessary functions (shutter speed, aperture and even ISO) without having to change your grip or move the camera away from your face.
Canon bodies are designed by engineers, who all had to prove they could palm a cinder block in order to get hired.
Sony bodies are designed by the cinder block.
On the digital front I found Fuji X-Txx series to be like tiny Nikons in their usability with all common controls on dials.
(One reason I shoot Nikon is because I can still shoot his glass on modern bodies. Indeed, that's what my D5300 spends a lot of its time wearing these days.)
True revolutions in consumer imaging excepted, I doubt I'll feel more than an occasional gadget fan's urge to replace my D850 and D500 as my primary bodies. Oh, the Z series has features, I won't disagree, even if I'm deeply suspicious of EVFs and battery life. But the D850 is a slightly stately, super-versatile full-frame body, and the D500 is a light, 20fps APS-C, that share identical UIs, lens and peripheral lineups, and (given a fast card to write to) deep enough buffers to mostly not need thinking about.
For someone like me who cares very little about technical specs, and a great deal for the ability to hang a camera over their shoulder and walk out the door and never once lose a shot due to equipment failure, there really isn't much that could matter more. I may have 350 milliseconds to get a flight shot of a spooked heron, or be holding my breath and focusing near 1:1 macro with three flash heads twelve inches away from a busily foraging and politely opinionated hornet. In those moments, eye and hand and machine and mind and body all must work as one to get the shot, and having to think at all about how to use the camera essentially guarantees a miss.
Hence the five years of work I've put into not having to think about that. I suppose I could've done more than well enough with any system, sure. But my experiences with others have left me usually quite glad Nikon's is the system I invested in.
The 105mm f/2.8 VR II Micro-Nikkor is still better for the field, of course; that kind of work requires a lens which can talk to my body and flashes, and the stabilizer is actually useful. But for folks not chasing wasps around or the like - and willing to be a little old-fashioned about their working, in a way that will teach you about photography some of what a Piper or Cessna does about flying - there really is no better way to get anywhere near that kind of performance at a similar price point, and a well-maintained lens of such stately age is a joy to work with besides.
After all, most of the time she's watching me every bit as closely as I her, and I like to be able to show that. From the ways people look at and talk about that work, the effort has not been wholly wasted, but it is a more demanding task than I expect a median EVF, or if I'm honest really any even remotely affordable model, to handle. My eyes barely handle it, such that even in the D850's bright and generous viewfinder, the way I perceive this kind of focus is not as a clear sense of seeing those fine divisions between optical elements, but rather as minimizing a sort of unpleasant perceptual "static" or "interference," and it doesn't work at all even in my dominant eye through the lens of my glasses. (My cameras' eyepieces have diopter inserts adjusted to match my prescription.)
On reflection, maybe that's why the EVFs I've tried (Nikon Z5 and Z7 iirc, so previous generation) felt like they had a kind of weird shimmer I didn't like. I assume the Z8 does better, and sure, all the focus peaking and trick shot stuff in the viewfinder is nice. I'll even grant it feels like looking at the future. It's just that, so far at least, I find I seem to prefer looking through a camera.
But even then, once you've metered a scene, how often do you adjust ISO on the fly? Hardly ever. Fixed ISO, aperture priority, center-dot focus and metering, off to the races.
Lightroom most likely has “obsolete paradigms” for the same reason Photoshop does: because professionals want to use what they know rather than what is fashionable. Reprogramming their muscle memory is not something people want to be doing. Anyway, I find Lightroom’s UI very nice to work with.
I still have the Loupedeck, on one of the shelves behind my desk. I think I might have used it twice last year.
Biggest thing is they never really improved the mobile apps... and in some cases IMO they got worse.
Amusingly, their Instax control app is actually pretty good!
Anyone know of any fujifilm firmware jailbreaks fwiw?
You might be able to use XApp instead, which is still crap but better than Camera Remote at least.
But how do you test this? While the DNG specification is openly available, the implementation was/is(?) not. Do I really need a copy of Photoshop to test if my files are good? How would I find good headers to put into my files? What values are even used in processing?
Maybe the situation has changed, but in the old days when I was building cameras there was only a closed-source Adobe library for working with DNGs. That scared me off.
You'll find the whole spec there, too. I think the source is also available somewhere.
It sounds like DNG has so much variation that applications would still need to support different features from different manufacturers. I'm not sure it (DNG) will really solve interoperability problems. This issue smells like someone is accidentally playing politics without realizing it.
Kind of reminds me of the interoperability fallacy with XML. Just because my application and your application use XML, it doesn't mean that our applications are interoperable.
I suspect that a better approach would be a "RAW lite" format that supports a very narrow set of very common features; but otherwise let camera manufacturers keep their RAW files as they see fit.
RAW is ultimately about sensor readings. As a developer, you just want to get things from there into a linear, known color space (XYZ in the DNG spec). So from that perspective, interoperability isn’t the issue.
How you process that data is another matter. Handling a traditional bayer pattern vs a quad-bayer vs Fujifilm’s x-trans pattern obviously requires different algorithms, but that’s all moot given DNG is just a container.
GFX 100s II’s apply a transform to RAW data at iso 80, see: https://blog.kasson.com/gfx-100-ii/the-reason-for-the-gfz-10...
I don’t know much about ARW, but I do know that they offer a lossy compressed format - so it’s not just straight off the sensor integer values in that case either.
The GFX 100s II thing is very interesting. Totally not what I would expect from such a "high end" camera.
DNGs have added benefits, like optional compression (either lossy or lossless) and optional error-correction bytes to guard against corruption. Even if there are some concerns, like unique features or performance, I'd still rather use DNGs without those features and with reduced performance.
I always archive my RAWs as lossy compressed DNGs with error correction and without thumbnails to save space and some added "safety".
Typically you want to pack them to avoid storing 30% of zeros. So often the bytes need unscrambling.
And sometimes there is a dark offset: in a really dark area of an image, random noise around zero can also go a little negative. You don't want to clip that off, and you don't want to use signed integers. So there typically is a small offset.
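A hedged sketch of both points - unpacking 12-bit samples stored two-per-three-bytes, then removing a small black-level offset without clipping the noise floor. The specific bit layout and offset value are illustrative; real formats differ and often scramble the bytes further:

    import numpy as np

    def unpack_12bit_pairs(packed: bytes) -> np.ndarray:
        """Unpack 12-bit samples packed MSB-first, two samples per 3 bytes (illustrative layout)."""
        b = np.frombuffer(packed, dtype=np.uint8).reshape(-1, 3).astype(np.uint16)
        first = (b[:, 0] << 4) | (b[:, 1] >> 4)        # 8 high bits + high nibble
        second = ((b[:, 1] & 0x0F) << 8) | b[:, 2]     # low nibble + 8 low bits
        return np.stack([first, second], axis=1).ravel()

    def remove_black_level(values: np.ndarray, black_level: int = 64) -> np.ndarray:
        """Subtract the dark offset in a wider signed type so near-zero noise isn't clipped."""
        return values.astype(np.int32) - black_level   # may legitimately dip slightly below zero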
You mean, your proprietary, closed-source photo-editing software?
Why can't the vendors of that shit just make a library that they all share ...
DNG (24MP) ~90 MB
It costs about 4 times more to store RAW files in DNG format.
Claiming that DNG takes up 4x space doesn't align with any of my own experiences, and it didn't happen on the RAF file that I just tested.
$ /Applications/Adobe\ DNG\ Converter.app/Contents/MacOS/Adobe\ DNG\ Converter DSCF6001.RAF
SPL-LOG-1002: starting logger thread
*** GPU Warning: GPU3 disabled via cr_config at init time. ***
SPL-LOG-1003: terminating logger thread
SPL ~DefaultMemoryManagerImpl bytesAllocated = 0
$ ls -la DSC*
-rw-r--r-- 1 danielh staff 50377216 2025-04-07T10:59:30 DSCF6001.RAF
-rw-r--r-- 1 danielh staff 30747896 2025-04-07T11:00:13 DSCF6001.dng
Maybe your method of converting to DNG is embedding the original RAF image and ... something else?
On my own files:
edengate:1$ ls -l
total 163664
-rwx------@ 1 aram staff 40894672 Apr 4 15:25 DSCF1483.RAF
-rw-r--r--@ 1 aram staff 42894224 Apr 7 16:51 DSCF1483.dng
Also, I'm not confident to replace entire RAF collection with converted DNGs and delete originals.
So yes, of course the files produced by Iridient X-Transformer are large - they are linear files. They are exactly three times as large because there are three color channels, four times as large if you also embed the original.
There is zero reason to convert RAF files to DNG files if your camera produces RAF files. The discussion we're having here is about cameras producing mosaiced DNG natively, which, as I hope I showed you, wouldn't come with any size penalty. The DNG can use modern lossless compression techniques, and can encode the same mosaiced (not debayered) data. And it works in every program, unlike RAF, which always needs to be reverse engineered for every new camera release.
Coincidentally, most proprietary RAW formats are just bastardized TIFFs, and DNG is also a TIFF derivative...
There is zero technical reason not to use DNG. Leica and Pentax use it just fine.
(See: ActivityPub)
Of course even the Adobe DNG Converter can do what the GP asked for (I just tried it[1]), not that I would recommend it for Fuji files. And not that it matters anyway, since the whole point is producing DNG files directly, not converting them.
Edit: on my Fuji X-T5 files, using mosaiced data with lossless JPEG XL compression (supported by macOS 14+, iOS 17+, and the latest Lightroom/ACR):
edengate:1$ ls -l
total 163664
-rwx------@ 1 aram staff 40894672 Apr 4 15:25 DSCF1483.RAF
-rw-r--r--@ 1 aram staff 42894224 Apr 7 16:51 DSCF1483.dng
[1] https://llum.chat/?sl=3MCDl4
Why not simply provide documentation of the camera-specific format that is used?
EDIT: This should have been an answer to https://news.ycombinator.com/item?id=43584261
For stills photography, Adobe's '.dng' format does fairly well, from 8-bit to 16-bit. It copes with any of the 4 possible standard RGGB Bayer phases and has some level of colour look-up table in the metadata. Sometimes this is not enough for a camera's special features, and The Verge's article covers those reasons quite well.
For video, things get much more complicated. You start to hit bandwidth limits of the storage media (SD cards at the low end). '.dng' files were not meant to be compressed but Blackmagic Design figured out how to do it (lightly) and still remain compatible with standard '.dng' decoding software. Other, better compressed formats were also needed to get around the limits of '.dng' compression.
Red cameras used a version of JPEG 2000 on each Bayer phase individually (4 of them), but they wrapped it in patents and litigated hard against anyone who dared to make a RAW format for any video recording over 4k. Beautifully torn apart in this video: https://www.youtube.com/watch?v=IJ_uo-x7Dc0
So, for quite a few years, video camera companies tip-toed around this aggressive patent with their own proprietary formats, and this is another reason why there's so many (not mentioned by The Verge).
There's also the headache of copying a folder of 1,000+ '.dng' stills that make up a movie clip; it takes forever, compared to a single concatenated file. So, there's another group of RAW video file formats that solve this by recording into a single file which is a huge improvement.
[1] Shoot ISO 12,800, process with DxO, people will think you shot at ISO 200; makes shooting sports indoor look easy, see https://bsky.app/profile/up-8.bsky.social/post/3lkc45d3xcs2x so I got zero nostalgia for film.
I think the primary reason is that they have great hardware developers and terrible software developers. So ARWs are the maximum they could provide to photographers, who then take the files and run away from Sony as soon as possible (i.e. do the rest in better software).
Pentax can save DNGs; there is zero reason for other companies not to do the same.
It's not /dev/urandom written to disk, no. Yes, a raw format has a structure. There's not one "RAW" format though (and TFA notes this): e.g., my Canon's RAW format specifically referred to is called "CR3". And its predecessor was "CR2", so even within a manufacturer there are multiple such formats. All undocumented.
But a Pentax won't write out CR3s; it'll write out some other, equally bespoke format.
I've vaguely reverse engineered some of CR3: it is a container that contains multiple copies of the photo taken; IIRC it contains a thumbnail JPEG, the JPEG, and the raw data itself.
I doubt it's the most performant to write to storage: the format is vaguely TLV (it's fairly similar to RIFF, if you understand the RIFF format), so it can't really be streamed to storage, due to needing to know the lengths of the containing chunks (all the way out to the outermost chunk).
A 1920x1080 24-bit RAW image is a file of exactly 6,220,800 bytes. There are only a few possible permutations of parameters: which of the 4 corners comes first, whether it's in row-major or column-major order, what order the 3 colors are in (RGB or BGR), and whether the colors are stored as planes or not. (Without planes, a pixel's R, G and B bytes are adjacent. With planes, you essentially have three parallel monochrome images, i.e. cat r.raw g.raw b.raw > rgb.raw) [1]
What the article is describing sounds like something that's not a raw file, but a full image format with a header.
[1] One may ask, how does the receiving software know the file is 1920 x 1080 and not, say, 3840 x 540? Or for that matter, a grayscale image of size 5760 x 1080?
The answer is that, with no header, you have to supply that information yourself when importing the image. (E.g. you have to manually type it into a text entry field in your image editor's file import UI.)
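A hedged illustration of that import step, assuming an interleaved (non-planar) RGB layout; the dimensions are exactly what you would otherwise type into the import dialog:

    import numpy as np

    def load_headerless_rgb(path, width=1920, height=1080, planar=False):
        """Load a headerless 24-bit RGB dump; the geometry must be known in advance."""
        data = np.fromfile(path, dtype=np.uint8)
        assert data.size == width * height * 3, "dimensions don't match the file size"
        if planar:
            # cat r.raw g.raw b.raw > rgb.raw : three monochrome planes back to back
            return data.reshape(3, height, width).transpose(1, 2, 0)
        # interleaved: each pixel's R, G, B bytes are adjacent
        return data.reshape(height, width, 3)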
Well, yes. You're thinking of the classic RAW format that was just a plain array of RGB pixels without a header.
When talking about digital cameras RAW refers to a collection of vendor specific file formats for capturing raw sensor data, together with a bunch of metadata.
Camera raw files typically come in a raw bayer mosaic so each pixel has only one colour.
Sigma's cameras are notorious for their lack of support in most editors, because their Foveon files require extra steps and adjustments that don't fit the paradigm assumed by DNGs (and they claim it would release proprietary information if they used DNGs).
The bigger issue is that at the end of the day the dng format is very broad (but not broad enough) and you rely on the editor to implement it correctly (and completely). DNGs that you can open in one of the major editors will simply not open in another.
And more to the point, for their Foveon cameras that produced both X3F and DNG files, the image quality from the DNG files is objectively and substantially worse than from the X3F files.
My company specifically deals with one of the post-processing steps, and we've had to build our own 'universal adapter'. It's frustrating because it feels like microscope companies are just re-inventing the wheel instead of using some common standard.
There has been an effort to standardize how these TB size datasets are distributed[1]. A different but still interesting issue.
I never understood why everyone didn’t just use TIFF
I always wondered if the problem was a lack of interest in writing fast decoders for DNGs, or if this was inherent to the format.
So if they are forced to use the open DNG format, the cameras are in parity now?
There is a long list of issues like this which have prevented ecosystems from forming around cameras, in the way they have around Android or iOS. It's like the proprietary phones predating the iPhone.
The irony is that phones are gradually passing dedicated cameras in an increasing number of respects, as cameras are now in a death spiral. Low volumes mean less R&D. Less R&D and no ecosystem mean low volumes. It also all translates into high prices.
The time to do this was about a decade ago. Apps, open formats, open USB protocols, open wifi / bluetooth protocols, and semi-open firmware (with a few proprietary blobs for color processing, likely) would have led things down a very different trajectory.
Sony is still selling cameras from 2018:
https://electronics.sony.com/imaging/interchangeable-lens-ca...
The price new fell by just 10% over the 7 years ($2000 -> $1800).
And in a lot of conditions, my Android phone takes better photos, by virtue of more advanced technology.
I have tens of thousands of dollars of camera equipment -- mostly more than a decade old -- and there just haven't been advancements warranting an upgrade. A modern camera will be maybe 30% better than a 2012-era one in terms of image quality, and otherwise, will have slightly more megapixels, somewhat better autofocus, and obviously be much smaller by the loss of a mirror. Video improved too.
The quote of the day is: "I wish it weren’t like this, but ultimately, it’s mostly fine. At least, for now. As long as the camera brands continue to work closely with companies like Adobe, we can likely trudge along just fine with this status quo."
No. We can't. The market has imploded. The roof is literally falling in and everyone says things are "fine."
Does anyone know how much volume there would be if cameras could be used in manufacturing processes for machine vision, on robots / drones, in self-driving cars, on buildings for security, as webcams for video conferencing, for remote education, and everywhere else imaging is exploding?
No. No one does, because they were never given the chance.
> I have tens of thousands of dollars of camera equipment -- mostly more than a decade old -- and there just haven't been advancements warranting an upgrade. A modern camera will be maybe 30% better than a 2012-era one in terms of image quality, and otherwise, will have slightly more megapixels, somewhat better autofocus, and obviously be much smaller by the loss of a mirror. Video improved too.
I thought the same thing, and then I went and rented a Nikon Z8 to try out over a weekend and I was blown away by the "somewhat better autofocus". As someone who used to travel with a Pelican case full of camera gear, to just carrying an iPhone, I'm back to packing camera gear because I'm able to do things like capture tack-sharp birds in flight like I'm taking snapshots from the hip thanks to the massive increase in compute power and autofocus algorithms. "Subject Eye Detection AF" is a game-changer, and while phones do it, they don't have enough optical performance in their tiny sensors/lenses to do it at the necessary precision and speed to resolve things on fast-moving subjects.
In terms of IQ, weight, and all that, it's definitely not a huge difference. I would say it's better, but not so much that I particularly cared coming from a 12-year old DSLR. But the new AF absolutely shocked me with how good it is. It completely changed my outlook.
I say this, not to take away from your overall point, however, which is that a phone is good enough for almost everyone about 90% of the time. It's good enough that even though I upgraded my gear, I only bought one body when I traded in two, because my phone can handle short-focal length / landscape just fine, I don't need my Z8 for that. But a phone doesn't get anywhere close to what I can do with a 300mm or longer focal length lens on the Z8 with fast moving subjects.
For sure a new high end phone will do better than a mid-range camera that's older, but on the high-end it's the other way around. My Z8 has significantly better low-light performance than my iPhone 16 Pro, however the upside from the iPhone is that I don't need to do additional denoising in post-processing (I usually use DXO) where it's required on anything taken above around ISO 12800 on a digital body.
The Z8 is usable to print (e.g. noise is almost completely removable if you aren't cropping) up to ISO 25600 (which is the maximum ISO of a 90D), and is usable for moment capture (e.g. not trying to win any awards) nearly to its maximum ISO (102400). Many newer camera sensors, including the Z8's sensor, are "dual gain", meaning I can shoot basically noiseless at ISO 500 w/ almost 13 stops EV of dynamic range preserved, which is simply not possible on any phone camera or on most older bodies.
If you're shooting in low-light often enough, there are specific sensors and cameras which are far better than others, even if the other cameras would be better than in other situations. Generally speaking though, larger sensors are better than smaller sensors in low-light at the same pixel pitch.
In the Canon world, an R6 II is comparable to the Z8 in low-light performance, although I think the Z8 just barely edges it out. So don't take anything I'm saying here as being brand-specific. Modern full-frame mirrorless cameras are almost all better at low-light performance than any preceding full sized (DSLR style) camera, mirrorless or not, because the sensors have gotten better but maybe even more importantly the native denoising has gotten better.
People are leaving off which lens. In my experience, for low-light:
Large sensor + kit (zoom) lens < Pixel Pro < Large sensor + f/1.4 prime
It's not apples-to-apples, since my phone has no optical zoom in the lens (although it somewhat makes up for it by having wide/normal/tele fixed lenses). But shooting with the main lens, it definitely beats a large sensor for low-light with a kit lens.
I think the key difference is intelligent multiframe denoising algorithms on the phone. It, in effect, shoots a video and combines.
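A hedged sketch of the simplest version of that idea, leaving out the tile alignment and motion rejection real pipelines add - averaging N frames of the same scene shrinks the random noise by roughly sqrt(N):

    import numpy as np

    def stack_frames(frames):
        """Naive multiframe noise reduction: mean of already-aligned exposures.

        frames: iterable of same-shaped arrays (grayscale or HxWx3) of one scene.
        Real phone pipelines align tiles, reject motion and weight frames;
        this only shows why burst capture wins in the first place.
        """
        stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
        return stack.mean(axis=0)   # noise std falls roughly as 1/sqrt(len(frames))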
That said, a /lot/ of low light performance is simply having a much larger sensor with a wider pixel pitch that is able to gather more light in the given time allotted. You cannot beat physical size in some ways for digital photography and light gathering is one of them, as it is primarily about surface area.
I started buying the EF-mount superfast primes because they're affordable now, but the 7D (more likely it was me) couldn't get the focus just right with such a shallow DOF.
The R6 just doesn't miss. Low light/high ISO image quality is also MILES better.
Cameras are not in a death spiral. Artistically speaking, phones can't do what even a low-end SLR/mirrorless can do, it's just that phones are good enough for the low-effort content 95% of people are interested in producing. Standalone cameras are inconvenient, bulky and require some level of artistic intention.
>Does any know how much volume there would be if cameras could be used in manufacturing processes for machine vision, on robots / drones, in self-driving cars, on building for security, as webcams for video conferencing, for remote education, and everywhere else imaging is exploding?
I don't know about the manufacturing or drone stuff, but for video conferencing and remote education, the point of the video really isn't image quality or "art" but just a good enough picture to not get in the way of the real purpose of the interaction, so a whole camera kit is just added complexity/annoyance for no benefit, IMO.
Sales numbers tell a different story.
> Artistically speaking, phones can't do what even a low-end SLR/mirrorless can do, it's just that phones are good enough for the low-effort content 95% of people are interested in producing.
This is not correct.
A Pixel Pro has a 50 MP, f/1.7, 1/1.31" sensor. This is equivalent to f/4.6 in u43, f/6.6 in APS, and f/9.5 in FF.
On paper this is slightly slower than a kit lens, but that's more than made up for by more advanced sensor technology, especially fast sensor readout, which lets it capture many frames and combine exposures.
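For reference, the "equivalent" numbers here come from scaling the f-number by the crop factor (ratio of sensor diagonals). A rough sketch of that arithmetic, with the diagonals as assumptions to substitute for your own sensors:

    # Equivalent f-number across formats: f_eq = f * (reference_diagonal / sensor_diagonal)
    FULL_FRAME_DIAG_MM = 43.3   # 36x24 mm sensor

    def equivalent_f_number(f_number, sensor_diag_mm, ref_diag_mm=FULL_FRAME_DIAG_MM):
        return f_number * ref_diag_mm / sensor_diag_mm

    # e.g. a u43 kit lens at f/3.5 (u43 diagonal ~21.6 mm) behaves like roughly f/7 on full frame
    print(round(equivalent_f_number(3.5, 21.6), 1))   # ~7.0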
Side-by-side, shooting with a phone and a Panasonic u43 camera with a kit lens, I was getting perfectly good photos with the phone, and useless photos with the u43.
> I don't know about the manufacturing or drone stuff, but for video conferencing and remote education, the point of the video really isn't image quality or "art" but just a good enough picture to not get in the way of the real purpose of the interaction, so a whole camera kit is just added complexity/annoyance for no benefit.
It depends on the context. People buy $100k Cisco remote conference rooms for a reason.
I've definitely spent >$10k on equipment in remote presentation / education contexts myself, and know many other people who have done likewise.
You should, at some point, figure out what popular education YouTubers, Twitch streamers, etc. spend :) But there are similar contexts in scalable education, various kinds of sales, etc.
One of the core issues -- in contexts I've worked in -- is that reliability is king. I don't want interruptions. I'm happy to have three cameras feeding into OBS with a set of fixed setups, and I've even written custom plug-ins, but something like a mirrorless adds layers of complexity which can lead to bugs:
    Mirrorless -> HDMI out -> Elgato -> USB -> OBS -> Virtual camera
A direct USB connection would remove a cable and an adapter.
Most modern mirrorless cameras can be connected to a computer via USB and used as a video source. Some are nerfed to only run continuously for 30 minutes or some other arbitrary limit, but most are not.
f/9.5 on full frame is abysmal, and generally past the point where stopping down starts to cost scene sharpness. Even when doing street photography or landscapes, I rarely stop down past f/8. Running something like my Nikkor 50mm f/1.2 S Z-mount lens at f/4 is sharper edge-to-edge than most other lenses at f/8, and gathers enough light to run a pleasingly fast shutter speed for handheld work even in low light. A phone does not compare. My wife has the latest Samsung Galaxy S and I have an iPhone 16 Pro; we both also have cameras (she a Fuji APS-C body, I the Nikon Z8 FF body), and we walk around and take photos composed correctly within each camera. We can see the difference, even without cropping. A camera body is much better than a phone if you care about the quality of your work, and especially if you ever intend to print.
Most modern cameras can stream video to a computer through a proprietary protocol. These are implemented under Linux in gphoto2 and, on other OSes, through some proprietary tool. During the great webcam shortage of COVID, many companies made special, flaky Windows utilities to allow those cameras to be used for web conferencing. Very few can natively act as a USB Video Class (UVC) device. This is Canon's version:
https://www.dpreview.com/news/4796043082/canon-s-new-softwar...
Now, for Canon, it's a monthly subscription:
https://www.usa.canon.com/cameras/eos-webcam-utility
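For the gphoto2 route mentioned above, a minimal sketch using the python-gphoto2 bindings (method names follow that library's object-oriented API; treat this as a sketch, and it assumes a camera with live view support):

    import gphoto2 as gp

    camera = gp.Camera()
    camera.init()                                  # open the first connected camera
    for _ in range(100):                           # grab ~100 live-view frames
        preview = camera.capture_preview()         # CameraFile holding one JPEG frame
        jpeg_bytes = preview.get_data_and_size()   # buffer of JPEG data
        # feed jpeg_bytes to an encoder or a virtual webcam (e.g. v4l2loopback) here
    camera.exit()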
As a footnote: the general rule of thumb is that around f/11 is where you start to notice diffraction limiting sharpness on full frame. That's a rule of thumb, and you're welcome to never stop down past f/8, but calling f/9.5 "abysmal" is more than a little over-the-top. No, a phone will not compare to a full frame with a $2000 f/1.2 lens, but it's quite competitive with a kit lens.
It's simply not the case that diffraction doesn't affect sharpness below f/11, and diffraction is not the only consequence of stopping down. When you stop down you are letting in less light over the same sensor area, which affects almost every aspect of exposure and has to be compensated for either by increasing ISO, which increases noise, or by lengthening shutter speed, which limits motion compensation when shooting handheld. All of this can affect the level of detail rendered sharply in a frame, either through blurring or through unrecoverable noise.
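To make the light-loss part concrete, a small sketch of the standard arithmetic (nothing camera-specific): the exposure difference between two apertures is 2*log2(N2/N1) stops, and every stop has to come back from ISO or shutter time:

    from math import log2

    def stops_lost(f_from, f_to):
        # Each stop halves the light; the f-number scales with the square root of aperture area
        return 2 * log2(f_to / f_from)

    loss = stops_lost(4.0, 9.5)
    print(round(loss, 2))        # ~2.5 stops from f/4 to f/9.5
    print(round(2 ** loss, 1))   # ~5.6x more ISO or ~5.6x longer shutter to compensate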
Generally, my personal preference is to stop down only enough to get a sharp frame edge to edge when trying to capture wide scenes, and no more; on many lenses f/4 is enough, and generally no more than f/6.3 is required. You begin making serious tradeoffs as you stop down further, especially if, like me, you shoot handheld almost always and often manually focus (where subtle movements can shift your critical focus distance).
Your rule of thumb is largely irrelevant; you should be making these decisions for each exposure to achieve whatever artistic effect you are going for.
Regarding Canon, true enough, they gimp their products out of greed. That's why https://www.magiclantern.fm/ exists.
Your general rule of thumb is irrelevant. There are many optics tests of currently available cameras, including phones. Phones get nowhere close to the photographic quality of a proper camera, but are totally fine for viewing on another small screen or for small prints.
My wife has had prints of photos taken with her phone hanging in galleries, but even she (who prefers a phone as a stylistic preference) would never dream of printing anything larger than a 5x8 from a phone. My prints on the small side tend to be 12x18, and I often print as large as 40x60. A photo from a phone is simply unusable for me.
"The time to do this was about a decade ago. Apps, open formats, open USB protocols, open wifi / bluetooth protocols, and semi-open firmware (with a few proprietary blobs for color processing, likely) would have led things down a very different trajectory."
And the rest of your posts also misquote what I said and, ironically, just as often, what you said. There are also minor technical errors: diffraction limits are basic physics. It's a simple relationship between (1) the radius of the circle of confusion (in units of angle); (2) the wavelength of light (in linear units, typically nanometers); and (3) the radius of the aperture (in linear units, typically mm). There is no voodoo with "sensor size, pixel pitch, and the lens optics." Most of your post takes statements like a basic rule of thumb of what you need for decent photos and exaggerates them into statements like "diffraction doesn't affect sharpness." Of course it's easy to beat up a statement if you misquote it. That's called a strawman.
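For reference, the standard small-angle form of that relation (Rayleigh criterion for a circular aperture, with D the aperture diameter and N the f-number):

    \theta \approx 1.22\,\frac{\lambda}{D}, \qquad
    r_{\text{spot}} \approx 1.22\,\lambda\,N \quad \text{at the focal plane, where } N = f/D

The angular form depends only on wavelength and aperture diameter; the spot size on the sensor depends only on wavelength and f-number.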
So I think I'm done here. Give me your downvote, and I'll argue somewhere else.
I haven't misquoted you, or myself, at all. Your original complaint was about the need for adapters and additional cables. I never even mentioned UVC in my reply, and you are now rejecting my clarification that you can do USB video (yes, with a driver, not UVC) on pretty much any modern mirrorless camera.
Diffraction limits of the optics /alone/ are not the only thing that affects sharpness as it relates to aperture, which is why I pointed out the impact of stopping down on light gathering; and light gathering is most certainly affected by sensor surface area and pixel pitch. Additionally, as I pointed out, sensor size also affects the diffraction limit because sensor size influences the size of the circle of confusion. I don't think either of us misunderstands the basic physics of light in a digital camera; you're just being obtuse.
We cannot downvote each other because the system prevents it when we're replying. I wouldn't downvote you anyway; I don't consider a downvote a form of disagreement, nor an upvote a form of agreement. Even though I don't think you're interacting with me in good faith, you have made valuable contributions that a third-party reader could learn from, and that's enough that I upvoted your replies even while disagreeing.
I mean, it's just binary data, right? Why can't they just write all their ones and zeros the same way?
They are just trying to lock people in to their format and make them dependent on the company instead of an open source and universal format.
If anything, maybe you'd have a point if you said they should open up the specs of the mount and lens/body communication, but the RAW format really has near-zero impact in the real world.
Comparing different RAW converters (Lightroom, DXO), their image rendering is slightly different. If you compare the colors with the in-camera JPEG, even more so. If the goal is to faithfully reproduce the colors as they were shown in the camera, you depend on the manufacturer's knowledge. To me, it makes no sense to have some "open" DNG format in the middle when it's flanked by proprietary processing.
It's not about the format, it's about knowing the details, including parameters, of the image processing pipeline to get a certain look.
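To make "parameters of the pipeline" concrete, a minimal sketch of the kind of per-camera numbers involved (the white-balance gains and matrix below are placeholders, not any vendor's actual values): after demosaicing, per-camera white-balance multipliers, a 3x3 color matrix, and a tone curve largely define the rendered look, and those are exactly the pieces third-party converters have to estimate:

    import numpy as np

    # Hypothetical per-camera parameters, for illustration only
    wb_gains = np.array([2.0, 1.0, 1.5])       # R, G, B white-balance multipliers
    cam_to_srgb = np.array([                   # 3x3 camera-RGB -> sRGB matrix (rows sum to 1)
        [ 1.6, -0.5, -0.1],
        [-0.2,  1.4, -0.2],
        [ 0.0, -0.4,  1.4],
    ])

    def develop(demosaiced_rgb):
        """demosaiced_rgb: HxWx3 float array, linear camera RGB scaled to 0..1."""
        balanced = demosaiced_rgb * wb_gains                        # white balance
        srgb_linear = np.clip(balanced @ cam_to_srgb.T, 0.0, 1.0)   # color matrix
        return srgb_linear ** (1 / 2.2)                             # crude gamma, not the real sRGB curve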