Kodak Unveils Brighter CMOS Color Filters
brownsteve writes "Eastman Kodak Co. has unveiled what it says are 'next-generation color filter patterns' designed to more than double the light sensitivity of the CMOS or CCD image sensors used in camera phones and digital still cameras. The new color filter system is a departure from the widely used standard Bayer pattern — an arrangement of red, green and blue pixels — also created by Kodak. While building on the Bayer pattern, the new technology adds a 'fourth pixel, which has no pigment on top,' said Michael DeLuca, market segment manager responsible for image sensor solutions at Eastman Kodak. Such 'transparent' pixels — sensitive to all visible wavelengths — are designed to absorb more light. DeLuca claimed the invention is 'the next milestone' in digital photography, likening its significance to that of the ISO 400 color film introduced in the mid-1980s."
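A back-of-envelope way to see where the sensitivity gain comes from, assuming ideal block-or-pass filters (an R, G, or B filter passes about a third of a flat white spectrum; a clear pixel passes all of it). This is a toy model, not Kodak's own figures, and the two clear-pixel layouts below are illustrative patterns rather than Kodak's exact designs:

```python
# Per-pixel transmitted fraction of flat white light over one repeating tile.
def avg_transmission(pattern):
    return sum(pattern) / len(pattern)

bayer      = [1/3, 1/3, 1/3, 1/3]   # RGGB: every site filtered
one_clear  = [1.0, 1/3, 1/3, 1/3]   # one panchromatic pixel per 2x2
half_clear = [1.0, 1/3, 1.0, 1/3]   # checkerboard: half panchromatic

for name, p in [("Bayer", bayer), ("1-of-4 clear", one_clear),
                ("half clear", half_clear)]:
    gain = avg_transmission(p) / avg_transmission(bayer)
    print(f"{name:13s} gathers {gain:.2f}x the light of Bayer")
```

Under this idealized model, a single clear pixel per 2x2 tile gathers about 1.5x the light of plain Bayer, and a half-panchromatic checkerboard reaches 2x, which is roughly where the "more than double" claim lives once filter losses are counted.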
Re:Sacrifices color resolution: is it worth it? (Score:5, Informative)
DPReview has a good explanation (Score:2, Informative)
Re:too little, too late? (Score:3, Informative)
Re:Sacrifices color resolution: is it worth it? (Score:5, Informative)
This additional intensity information is, of course, sampled at only a quarter of the spatial resolution of a full Bayer grid... but nobody ever said you had to discard the intensity measured by the red/green/blue-filtered pixels; in fact, you can't, or you couldn't determine color very well at all.
It's actually a pretty obvious setup (it has similarities to the RGBe storage format... though that has a much larger range, it also mostly separates color (RGB) from intensity (the exponent)). Can't wait to see it patented. It makes me wonder why the Bayer pattern was the choice in the first place. I certainly know why they picked green as the go-to channel (human visual sensitivity, blah blah), and why there have to be groups of 4 in the first place (cells are square/rectangular... design a triangular sensor cell, somebody. Quick! Gimme that hexagonal sensor).. but why is Kodak only popping this up now?
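For comparison, here's a sketch of the shared-exponent RGBE encoding the parent alludes to (the pixel format of the Radiance HDR system), which likewise stores chromaticity in three bytes and overall intensity in a fourth. This is translated loosely from Greg Ward's reference code, so treat it as illustrative:

```python
import math

def float_to_rgbe(r, g, b):
    """Encode linear RGB floats as 4 bytes: three mantissas + a shared exponent."""
    v = max(r, g, b)
    if v < 1e-32:
        return (0, 0, 0, 0)
    mantissa, exp = math.frexp(v)      # v = mantissa * 2**exp, mantissa in [0.5, 1)
    scale = mantissa * 256.0 / v
    return (int(r * scale), int(g * scale), int(b * scale), exp + 128)

def rgbe_to_float(rm, gm, bm, e):
    """Decode back to linear RGB (lossy in precision, huge in dynamic range)."""
    if e == 0:
        return (0.0, 0.0, 0.0)
    f = math.ldexp(1.0, e - (128 + 8))
    return (rm * f, gm * f, bm * f)
```

The analogy to the new sensor pattern is only loose (RGBE is a storage format, not a sampling pattern), but both lean on the same observation: intensity and chromaticity can be stored or sensed separately.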
Re:why are sensors in RGB instead of CMY? (Score:5, Informative)
http://en.wikipedia.org/wiki/CYGM_filter [wikipedia.org]
They don't actually provide any practical benefit over RGB in terms of noise, if your final output is meant to be RGB, due to the mathematics of the color space transformation. And your final output is generally RGB, for digital photography; even if you print, the intermediate formats are generally RGB, and cheap consumer printers take input in RGB, not CMYK.
Re:why are sensors in RGB instead of CMY? (Score:2, Informative)
But does anyone know why sensors use RGB and not CMY? A cyan filter would let green and blue through but keep red out, instead of blocking two parts of the visible spectrum at each pixel. This way, by simply switching color space, the camera becomes twice as sensitive to light.
Let me just turn that around for you...
A green filter would let cyan and yellow through, but keep magenta out, instead of blocking two parts of the visible spectrum for each pixel.
The color spaces are complementary. Each color in one space is halfway between two colors in the complementary space.
A filter of any color will, in one color space, allow one color and block the other two, while in the other color space it allows two colors and blocks one.
RGB is the color space usually used for additive color (i.e. light -- more/different light means brighter). A sensor captures light. CMY(K) is usually used for subtractive color (i.e. ink -- more/different ink means darker).
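The transmission arithmetic behind this sub-thread is easy to check with idealized block-or-pass filters; each triple below says which RGB band a filter passes, and the sum is the fraction of flat white light that gets through:

```python
# Idealized filters as RGB transmission triples (1 = passes that band).
filters = {
    "red":     (1, 0, 0),
    "green":   (0, 1, 0),
    "blue":    (0, 0, 1),
    "cyan":    (0, 1, 1),   # complement of red
    "magenta": (1, 0, 1),   # complement of green
    "yellow":  (1, 1, 0),   # complement of blue
}

for name, bands in filters.items():
    print(f"{name:8s} passes {sum(bands)}/3 of white light")
```

Each CMY filter really does pass twice the light of its RGB complement, as the question says; the catch, per the grandparent, is that converting CMY samples to RGB output subtracts correlated channels, and that subtraction amplifies noise enough to cancel the gain.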
Loss of color resolution is not that big a deal (Score:5, Informative)
Better than Foveon? (Score:3, Informative)
Re:Sacrifices color resolution: is it worth it? (Score:3, Informative)
Tim.
Re:resolute colors required? (Score:4, Informative)
No... not really.
First of all, the Bayer pattern is...
RG
GB
Giving up one of the four sites to wide-band sensitivity as Kodak proposes, the same spatial pattern still has exactly the same sensitivity to red and blue; nothing has changed there. Red and blue sensor sites still alternate at the exact same spatial rate. But the new pattern has 1/2 the spatial (not intensity) sensitivity to green (which we are most sensitive to, remember); it has the same sensitivity to luma; and it probably has considerably enhanced sensitivity to infrared and ultraviolet, though that remains to be seen, and such an advantage is not as generally useful to most photographers (though those who enjoy IR and/or UV photography will love this thing if the sensor is truly wide-band.)
But there are complications: for instance, Bayer sensors tend to produce significant moire patterns, and the anti-aliasing filters applied to prevent that reduce the available spatial resolution by as much as 1/2 along each axis anyway.
I've written numerous RAW image plugins for Bayer (and other) patterns, and believe me, it isn't as simple as 1/4 the color. This is a new configuration, and I've not written code for it as yet, but I would bet my boots that when the time comes to do so, the color resolution of an image will not suffer much, if at all. You'll still have RGB info available at about twice the moire filter rate. Spatial resolution shouldn't suffer either, because luma information is still available from the new arrangement. In terms of color images, what I'm trying to figure out is what the perceived advantage is.
Thinking outside the box of color images, though, I can imagine a simple 1/4 resolution B&W mode that can do infrared and ultraviolet with the proper blocking filters... that'd be trippy. :-)
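To make the interpolation point above concrete, here is a minimal bilinear demosaic for the classic RGGB layout. Real RAW converters (like the plugins the parent describes) use much smarter edge-aware interpolation, but the basic structure is the same; this is a teaching sketch, not production code:

```python
import numpy as np

def conv3x3(img, k):
    """3x3 neighborhood sum with zero padding (kernel is symmetric,
    so correlation and convolution coincide)."""
    p = np.pad(img.astype(float), 1)
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + h, j:j + w]
    return out

def demosaic_bilinear(raw):
    """Bilinear demosaic of an RGGB Bayer mosaic (height and width even)."""
    h, w = raw.shape
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)
    kernel = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float)
    out = np.zeros((h, w, 3))
    for c, mask in enumerate((r_mask, g_mask, b_mask)):
        # Normalized convolution: average only the sites that actually
        # sampled this channel, so every output pixel gets all three colors.
        num = conv3x3(np.where(mask, raw, 0.0), kernel)
        den = conv3x3(mask.astype(float), kernel)
        out[..., c] = num / den
    return out
```

Swapping one green site per tile for a panchromatic one, as Kodak proposes, mostly means changing the masks and adding a step that recovers luma from the clear channel; the interpolation machinery stays the same.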
RG +BG arguments missing the point? (Score:4, Informative)
In printing technologies, at least in the early '90s, they were using a technique called either "GCR" (gray component replacement) or "UCR" (undercolor removal), which basically transfers almost all of the "light density" information from the cyan-magenta-yellow films of a color separation to the "K" film (black) -- because black ink is quite a bit cheaper than the alternatives. I have seen images printed with up to 90% of the density in the black that are virtually indistinguishable from images printed from a "normal" color separation by the naked eye, and sometimes, if a high enough line screen value is used (200+ LPI), it is hard to tell that a print is a GCR'd image even with a magnifying glass.
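A sketch of the basic GCR math, in its textbook flat-fraction form (real RIPs apply tone-dependent curves and ink limits, so treat this as illustrative only):

```python
def gcr(c, m, y, strength=0.9):
    """Move `strength` of the gray component (the shared minimum of
    C, M, Y) into the black channel; ink coverages are fractions in [0, 1]."""
    k = strength * min(c, m, y)
    return (c - k, m - k, y - k, k)

# A muddy brown: most of its density migrates to the cheap black ink.
print(gcr(0.5, 0.6, 0.7, strength=1.0))
```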
So it stands to reason, for me at least, that if I devote more attention to capturing the "amount" of light with "one CCD eye" completely open, and the "quality" (hue and tint) of the light with my "other three CCD eyes" that are filtering for spectra, I should be able to do the same thing digitally that they have been doing optically in printing for years, and still yield a superior result.
I'd love to hear a discussion about the best way to use the digital bits in a 32-bit "GCR" digital world, by the way. For example, using 10 bits (1024 levels) for luma, 8 bits (256 hues and tints) for green, and 7 bits (128 hues and tints each) for red and blue, or whatever the optimal split might be.
Thoughts?
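The proposed 10/8/7/7 split does fit a 32-bit word exactly; a hypothetical packing (the field order and layout here are my own invention, just to show the arithmetic) looks like:

```python
def pack(luma, g, r, b):
    """Pack 10-bit luma + 8-bit green + 7-bit red + 7-bit blue
    into a single 32-bit word (10 + 8 + 7 + 7 = 32)."""
    assert 0 <= luma < 1024 and 0 <= g < 256 and 0 <= r < 128 and 0 <= b < 128
    return (luma << 22) | (g << 14) | (r << 7) | b

def unpack(word):
    """Recover the four fields from a packed 32-bit word."""
    return ((word >> 22) & 0x3FF, (word >> 14) & 0xFF,
            (word >> 7) & 0x7F, word & 0x7F)
```

Whether 10 bits of luma versus more chroma precision is the right trade would depend on the sensor's actual noise floor, which is exactly the discussion being asked for.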
you get what you pay for (Score:1, Informative)
that said, Zeiss makes some awesome high-end lenses. The DigiPrimes (for 2/3" digital cinema cameras) are amazing, and run around $12k each. The DigiZooms are like $60k, IIRC, and also very good. I haven't used the zooms, but I've shot with the DigiPrimes and they are amazingly sharp.
my $.02