Kodak Unveils Brighter CMOS Color Filters 184
brownsteve writes "Eastman Kodak Co. has unveiled what it says are 'next-generation color filter patterns' designed to more than double the light sensitivity of CMOS or CCD image sensors used in camera phones or digital still cameras. The new color filter system is a departure from the widely used standard Bayer pattern — an arrangement of red, green and blue pixels — also created by Kodak. While building on the Bayer pattern, the new technology adds a 'fourth pixel, which has no pigment on top,' said Michael DeLuca, market segment manager responsible for image sensor solutions at Eastman Kodak. Such 'transparent' pixels — sensitive to all visible wavelengths — are designed to absorb light. DeLuca claimed the invention is 'the next milestone' in digital photography, likening its significance to ISO 400 color film introduced in the mid-1980's."
Sacrifices color resolution: is it worth it? (Score:5, Insightful)
Re:Sacrifices color resolution: is it worth it? (Score:5, Informative)
Re:resolute colors required? (Score:4, Informative)
No... not really.
First of all, the Bayer pattern is...
RG
GB
If you give up one of the four sites to wide-band sensitivity as Kodak proposes, the pattern still has exactly the same sensitivity to red and blue; nothing has changed there. Red and blue sensor sites still alternate at the exact same spatial rate. But the new pattern has 1/2 the spatial (not intensity) sampling rate for green (which we are most sensitive to, remember); it has the same sensitivity to luma; and it probably has considerably enhanced sensitivity to infrared and ultraviolet, though that remains to be seen, and such an advantage is not generally useful to most photographers (though those who enjoy IR and/or UV photography will love this thing if the sensor is truly wide-band).
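Just to make that concrete, here's a tiny sketch (my own illustration of the 2x2 idea, not Kodak's published layout) comparing the sampling densities of a plain Bayer tile and a variant with one green site swapped for a clear one:

import numpy as np

# Classic Bayer 2x2 tile, and a hypothetical variant where one of the
# two green sites is replaced by a clear/panchromatic ("W") site.
BAYER = np.array([["R", "G"],
                  ["G", "B"]])
RGBW  = np.array([["R", "G"],
                  ["W", "B"]])

def sample_density(tile, channel, height=8, width=8):
    # Fraction of photosites on a tiled sensor that sample this channel.
    mosaic = np.tile(tile, (height // 2, width // 2))
    return float(np.mean(mosaic == channel))

for name, tile in (("Bayer", BAYER), ("RGBW ", RGBW)):
    print(name, {c: sample_density(tile, c) for c in "RGBW"})
# Bayer: R 0.25, G 0.50, B 0.25, W 0.00
# RGBW : R 0.25, G 0.25, B 0.25, W 0.25 -- red/blue spacing unchanged,
# green sampled at half the old spatial rate, plus a luma-ish channel.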
But there are complications: for instance, Bayer filters tend to produce significant moire patterns, and the anti-aliasing filters applied to prevent that reduce the available spatial resolution by as much as 1/2 along each axis anyway.
I've written numerous RAW image plugins for Bayer (and other) patterns, and believe me, it isn't as simple as 1/4 the color. This is a new configuration, and I've not written code for it as yet, but I would bet my boots that when the time comes to do so, the color resolution of an image will not suffer much, if at all. You'll still have RGB info available at about twice the moire filter rate. Spatial resolution shouldn't suffer either, because luma information is still available from the new arrangement. In terms of color images, what I'm trying to figure out is what the perceived advantage is.
Thinking outside the box of color images, though, I can imagine a simple 1/4 resolution B&W mode that can do infrared and ultraviolet with the proper blocking filters... that'd be trippy. :-)
Re: (Score:2)
I have trouble deciding how much thi
Re: (Score:2)
That is incorrect. You'd know it if you simply thought about it; a Bayer filter (in most digital cameras today) only captures color information through red, green and blue filters. The color image that results can be converted into a luma version by applying a simple fractional scaling factor to each channel, then summing. Basically, put the image into any decent image processor, select it, and apply the software's luma e
Re: (Score:2)
That is not how raw conversion works; if it did, your 8M pixel camera would only give 2M pixel images. Actual raw conversion essentially uses heuristics to estimate the luminance at each sensor site, even though that information cannot, in general, be recovered exactly from the measurements.
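For the curious, here is about the simplest such heuristic, sketched in Python/NumPy; real raw converters use much smarter edge-aware interpolation, so treat this only as the bilinear baseline:

import numpy as np

def bilinear_fill(raw, mask):
    # raw  : 2-D array of sensor values (one number per photosite)
    # mask : True where this colour channel was actually measured
    # Missing sites are estimated as the mean of measured neighbours --
    # a guess, not a recovery of information that was never captured.
    measured = np.where(mask, raw, 0.0).astype(float)
    counts = mask.astype(float)
    acc = np.zeros_like(measured)
    cnt = np.zeros_like(counts)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == dx == 0:
                continue
            acc += np.roll(np.roll(measured, dy, axis=0), dx, axis=1)
            cnt += np.roll(np.roll(counts, dy, axis=0), dx, axis=1)
    estimate = np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)
    return np.where(mask, raw, estimate)

Run once per colour plane with the appropriate Bayer mask, this already gives a full-resolution (if soft) result; the clever part of real converters is doing better than the neighbourhood mean.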
Depends on the application (Score:2)
I expect this will have more value in cellphone cameras. Typically the noise floor goes up when the sensor shrinks, and increasing the brightness without increasing noise would be a massive boon for most cellphone photographers.
Re: (Score:2)
Re: (Score:2)
Digital is already better than film, but the fact is, DSLR owners continue to pay good money for big, heavy lenses precisely to obtain more sensitivity, and for vibration reduction to cope with longer-than-ideal shutter speeds.
Cameras aren't "good enough" until I ca
photon noise (Score:2)
Then you will never be happy because it's intrinsically impossible to capture low noise images "in near dark" with a small area sensor because of photon noise.
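Back-of-the-envelope version of that, with made-up photon counts: shot-noise SNR is just sqrt(N), no matter how good the electronics are.

import math

for label, photons in (("large DSLR pixel, dim scene", 10000),
                       ("tiny phone pixel, dim scene", 400),
                       ("tiny phone pixel, near dark", 25)):
    snr = math.sqrt(photons)
    print(f"{label:30s} N={photons:6d}  SNR ~ {snr:5.1f} ({20 * math.log10(snr):.1f} dB)")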
Probably not intended for SLRs (Score:3, Insightful)
Modern 'compact' digital cameras, however, which stuff 7-12 megapixels on 1/1.8" and 1/2.5" sensors (smaller than your fingernail) could benefit enormously from this. These sensors are already past the diffraction limit of most of the lenses, so a drop in color resolution may not be too damaging (the eye being less sensitive to color resolution than to luminance anyway). Kodak is claiming a 1-2
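Rough numbers behind the "past the diffraction limit" point (green light assumed, pixel pitches approximate):

# Airy-disk diameter d ~= 2.44 * wavelength * f-number
wavelength_um = 0.55                      # green light, in micrometres
for f_number in (2.8, 5.6, 8.0):
    airy_um = 2.44 * wavelength_um * f_number
    print(f"f/{f_number}: diffraction blur spot ~ {airy_um:.1f} um")
# Output: ~3.8, ~7.5, ~10.7 um.  A 7-12 MP 1/2.5" sensor has roughly
# 1.7-2.2 um pixels, so the blur spot already covers several of them.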
Re: (Score:2)
Re: (Score:2)
This might make a nice second camera for the serious user, but most folks would be better off with the current technology.
Re: (Score:2)
First, there is likely no significant "loss in color resolution"; the resolution you're getting in the color channels right now is already only based on heuristics.
Second, even if there were a loss of resolution in the color channels, you wouldn't notice it: you can't see high frequencies in the color channels.
Re:Sacrifices color resolution: is it worth it? (Score:5, Informative)
This additional intensity information is, of course, only at a quarter of the resolution of a full Bayer grid... but nobody ever said you had to discard the intensity measured by the red/green/blue filtered bits; in fact, you can't, or you couldn't determine color well at all.
It's actually a pretty obvious setup (it resembles the RGBE storage format... though that has a much larger range, it also mostly separates color (RGB) from intensity (the exponent)) - can't wait to see it patented - and makes me wonder why the Bayer pattern was the choice in the first place. I certainly know why they picked green as the go-to channel (human visual sensitivity, blabla), and why there have to be groups of 4 in the first place (cells are square/rectangular... design a triangular sensor cell, somebody - quick! gimme that hexagonal sensor)... but why is Kodak only popping this up now?
Re: (Score:2)
But the fact is that hundreds of millions of digital cameras have been made in an intensely competitive R
Re: (Score:2)
I certainly discussed this idea with people at least a year ago, completely independently of any research done at Kodak. Four-colour sensors, CYGM and RGBE, have been around for years.
Other ideas that have been played with are non-regular (fractal) CFAs.
An obvious further extension to what Kodak has done (assuming it isn't what they have done) is to have something like RYYB (Bayer, but with G replaced by luminance). This ought to capture still more light. In fact, as CCD
Re: (Score:2)
Re: (Score:2)
RGRY
GBYB
RYRG
YBGB
Tim.
Re: (Score:3, Informative)
Tim.
How about using that new non-reflecting material? (Score:4, Interesting)
Yes it is (Score:2)
The marketing hype surrounding resolution just keeps spinning further away from reality.
For digital photographic prints off the average production photo printer (my Costco has them right on the floor), the lines-per-millimeter resolution is _way_ below what even a **really** good digital SLR with **great** optics can capture.
Also keep in mind the color gamut of the average digital camera is quite narrow, and unsophisticat
Re: (Score:2)
Well, you lose color resolution and I'd say that there's a good chance that in bright sunlight you're going to be blowing out quite a few of the clear pixels, losing luminance information there as well. Being "more" sensitive helps when there's less light, not when there's too much.
Translation: There's always a downside.
Wrong Again (Score:2)
In a production CMOS/CCD assembly this is not likely: to produce a pleasing image across many lighting conditions, CCD/CMOS assemblies already have exposure controls for exactly this.
The best proof is to try using a scanner head as a digital camera. You will find that the CCD assembly in a scanner is not designed to handle variable light, so most things outside a narrow range of brightness (luminance, maybe?) are
Re: (Score:2)
I'd hate to lose 1/4 of my color resolution *all of the time* to get the added sensitivity that I only need for a small fraction of the shots I take.
To be honest, I wouldn't mind. If you buy a 10 megapixel camera that isn't a good quality SLR, you won't be getting much better quality than a 6 megapixel camera since the bottleneck for quality becomes the lens.
All it would really mean is that we absorb a delay in the relentless rise in pixel density for a dramatic improvement in colour depth.
This technology will sell, there's no doubt about it.
Re: (Score:2)
Re: Zeiss lenses (Score:2)
Re: (Score:2)
Loss of color resolution is not that big a deal (Score:5, Informative)
Re: (Score:2, Interesting)
not hardly (Score:2)
Now as for losing color resolution, I think you won't lose much. The only place you are going to notice it is in dim light, and it will be less than 1 bit of loss. Those would be shots you wouldn't have gotten anyhow because they would have been below the camera's ability.
Prior art? LCD projectors do this same trick
Re: (Score:2)
quite to the contrary (Score:2)
Making one of the RGBG cells into a "white" cell doesn't really change much of anything in terms of resolution: color resolution is still half what grayscale resolution is. And it does actually help with color accuracy, since having four different receptors lets cameras deal a lot better with fluor
The proof is in the pudding (Score:2)
we had 400 speed reversal film in the 50s (Score:4, Interesting)
I refer you to Tri-X b/w, and to Fujichrome 400 around 1972. A really nicely balanced and warm film. If you pushed it to 1200, you could peel the grains off the base and go bowling with them, but the picture held up remarkably well on the small screen. It was THE go-to magic film for 16mm newsfilm when it came out.
If that had been a negative film, it would have been ASA 800 with little more grain than the "fast" 125 color film of the time.
Not the point (Score:2)
Re: (Score:2)
Re: (Score:2)
Transparent AND absorbs light? (Score:3, Funny)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Nothing too revolutionary (Score:2, Interesting)
CMOS version of Rods and cones (Score:5, Insightful)
Re: (Score:2)
And I'll bet they've already filed a patent on it....
Re: (Score:2, Funny)
Fixed that for you. : )
Re:CMOS version of Rods and cones (Score:4, Interesting)
The new Kodak filter pattern still takes advantage of our better resolution for luminance, but implements it better by basing it on color filters (or the lack of them) that let more light through, thereby increasing the signal-to-noise ratio (especially needed in low-light conditions).
I'm not sure that this new filter pattern is optimal though. As another poster noted, R/G/B filters are too narrow and cut out a lot of light. You could still capture the color information with two broader filters more directly corresponding to the U & V of the YUV color space.
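For reference, the standard Rec. 601 luma/chroma relations the parent is alluding to; a sensor with one clear filter and two broad chroma-ish filters would be trying to measure something close to Y, U and V directly rather than deriving them from narrow R/G/B samples:

import numpy as np

# Rec. 601 RGB -> YUV (analog, full-range form).
RGB_TO_YUV = np.array([[ 0.299,    0.587,    0.114  ],
                       [-0.14713, -0.28886,  0.436  ],
                       [ 0.615,   -0.51499, -0.10001]])

y, u, v = RGB_TO_YUV @ np.array([0.8, 0.5, 0.2])   # an arbitrary colour
print(f"Y={y:.3f}  U={u:.3f}  V={v:.3f}")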
Re: (Score:2)
Sounds just like the new LCD display (Score:2)
why are sensors in RGB instead of CMY? (Score:3, Interesting)
They divide each sensor of the regular Bayer pattern into 4: half white, half color. This way one can also report a 4-fold increase in the number of pixels without really increasing the resolution (which will actually be a boon for digital photography, since no one needs the current resolution anyway because the optics can't keep up, but a megapixel race is on...).
But does anyone know why sensors use RGB and not CMY? A cyan filter would let green and blue through, but keep red out, instead of blocking two parts of the visible spectrum for each pixel. This way, by simply switching color space, the camera becomes twice as sensitive to light; i.e., instead of an RGB-type pattern, use a CMY-type one, or something like that. One could even combine the two methods and use white pixels to gain a slight further increase in light sensitivity (from 8/12 to 10/12). Is there any reason that current cameras use RGB?
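Here's the arithmetic behind those 8/12 and 10/12 figures, assuming idealized block filters where each primary band is exactly 1/3 of the visible spectrum (the pattern strings are just examples, not a specific proposal):

# Fraction of the visible spectrum each ideal filter passes.
PASS = {"R": 1/3, "G": 1/3, "B": 1/3,   # primaries block two of three bands
        "C": 2/3, "M": 2/3, "Y": 2/3,   # complements block only one band
        "W": 1.0}                        # clear passes everything

def throughput(pattern):
    return sum(PASS[p] for p in pattern) / len(pattern)

print(throughput("RGGB"))   # 0.333 =  4/12  (classic Bayer)
print(throughput("CMYC"))   # 0.667 =  8/12  (all complementary filters)
print(throughput("WWCM"))   # 0.833 = 10/12  (two clear + two complementary)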
Re:why are sensors in RGB instead of CMY? (Score:5, Informative)
http://en.wikipedia.org/wiki/CYGM_filter [wikipedia.org]
They don't actually provide any practical benefit over RGB in terms of noise, if your final output is meant to be RGB, due to the mathematics of the color space transformation. And your final output is generally RGB, for digital photography; even if you print, the intermediate formats are generally RGB, and cheap consumer printers take input in RGB, not CMYK.
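A toy simulation of that argument, assuming ideal block filters (C = G+B, M = R+B, Y = R+G) and nothing but Poisson shot noise; the extra light the complementary filters gather is largely given back by the subtractions needed to get RGB out again:

import numpy as np

rng = np.random.default_rng(0)
true_rgb = np.array([400.0, 500.0, 300.0])   # mean photons per band (made up)
N = 100000

# Direct RGB capture: each site sees one band.
rgb_direct = rng.poisson(true_rgb, size=(N, 3)).astype(float)

# Complementary capture: each site sees two bands, so about twice the light.
cmy_true = np.array([true_rgb[1] + true_rgb[2],   # C = G + B
                     true_rgb[0] + true_rgb[2],   # M = R + B
                     true_rgb[0] + true_rgb[1]])  # Y = R + G
C, M, Y = rng.poisson(cmy_true, size=(N, 3)).astype(float).T

# Invert the ideal relations to recover RGB.
rgb_via_cmy = np.stack([(M + Y - C) / 2,
                        (C + Y - M) / 2,
                        (C + M - Y) / 2], axis=1)

print("noise (std), direct RGB :", rgb_direct.std(axis=0))
print("noise (std), via CMY    :", rgb_via_cmy.std(axis=0))

With these numbers the per-channel noise comes out about the same or slightly worse via CMY, which matches the parent's point.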
Re: (Score:3, Interesting)
But I don't understand why you don't have less noise. The wikipedia article mentions higher dynamic range. Isn't it true that twice as much light falls on each sensor, so you gain a stop, and because of that have less noise (because you need the shutter open for only half the time)? Or
Re: (Score:3, Interesting)
Re: (Score:2)
Filters can contribute to noise in a final image by creating an imbalance in white balance or by limiting the amount of light to a level below optimal. Otherwise, they aren't involved. This claim that the CMY-to-RGB conversion increases noise is nonsense.
Re: (Score:2, Interesting)
Re: (Score:2)
http://www.dpreview.com/reviews/canons10/ [dpreview.com]
Re: (Score:2)
Re: (Score:2, Informative)
But does anyone know why sensors use RGB and not CMY? a Cyan filter would let green and blue through, but keep red out, instead of blocking two parts of the visible spectrum for each pixel. This way, by simply switching color space, the camera becomes twice as sensitive to light.
Let me just turn that around for you...
A green filter would let cyan and yellow through, but keep magenta out, instead of blocking two parts of the visible spectrum for each pixel.
The color spaces are complementary. Each
Re:why are sensors in RGB instead of CMY? (Score:5, Interesting)
It is no coincidence (I suppose it's arguable if you call evolution a "theory" (with quotes)) that our eye is most sensitive to green light.
As a side note, the need for white balance on cameras is that spectral density for different light sources are not the same. Incandescents differ from fluorescents which differ from sunlight which is why incandescents have an orangeish tint and fluorescents have a blueish tint (that's where their frequencies have their peak power).
(The theories behind why chlorophyll is green (which means it reflects green and thus does not absorb the frequencies with the most power) are quite interesting to boot.)
Re: (Score:2)
That is wrong. Sunlight varies dramatically and no such generalization can be made but "daylight" is quite well balanced; it is not biased toward green.
"The Bayer filter has twice as many green pixels as red or blue, which reflects the sunlight power spectral density more than having one cyan, one magenta, one yellow, and one intensity would."
No, the Bayer filter has twice as many green pixels because (a) there are 4 pixels and only 3 colors so one
Re: (Score:2)
http://en.wikipedia.org/wiki/CYGM_filter [wikipedia.org]
The result was, as you say, better light sensitivity, but at the expense of color accuracy. I guess in the end they decided the tradeoff wasn't worth it. I don't claim to understand any of the details, but I just read that page and then read your question
Re: (Score:2)
A good sensor wants resolution AND sensitivity AND accuracy. Since you can't have all three at the same time, you make tradeoffs. Your solution might increase sensitivity, but at the cost of accuracy and resolution.
Re: (Score:2)
More interestingly, the very first HDV camcorder, the JVC GR-HD1, used both of these techniques back in 2003... see http://www.jvc.com/promotions/grhd1/unprecedent/s_right.html [jvc.com]. Their sensor is White (clear), Yellow, Cyan, and Green in a Bayer-like pattern. They made similar claims: the effect is 50% luma, rather than
I'd Rather Have Less Noise, Wider dMax (Score:3, Interesting)
For me, though, the problem is not so much speed as it is noise and dynamic range. That's because a lot of the time I still do fine-art-level landscape and studio glamour photography -- neither of which is speed-starved, but even the finest digitals could still use less noise and wider dynamic range.
While DSLRs have a huge advantage over handhelds in this regard, it would still be nice to see improvements in s/n such that the darker zones maintained their clarity and detail. Even the finest Canon cameras suffer to a degree in this regard, at least for people with very high standards. Some of us have those standards because that is what our clients demand - and in some cases we still must use film to meet their criteria.
It's a virtual law that to obtain the best noise performance you need to use the lowest ISO speed that the camera can attain. So instead of bottoming out at 100, like most DSLRs, I'd like to see 25. Or better, 12.
For more info, visit http://www.normankoren.com/digital_tonality.html [normankoren.com]
Re: (Score:2)
There's a physical limit to how insensitive you can make a sensor, of course, which is what you're really asking for when you want lower ISO. At a certain point, you're just artificially crippling the technology to get a lower ISO, without any real benefit in terms of noise control.
Sounds good to me. ;)
Seriously... I've never understood why that's not an option that can be carried out on the processing chip. If someone wants an equivalent film speed of, say, 12 and your sensor can only go to 100, why can't the chip take 12 back-to-back shots and simply average them?
I realize that's not giving true light sensitivity... But I'd still much rather have the option to make my camera WAY less sensitive to light than have to deal with a 2x, 4x, 8x, etc. set of neutral density filters every ti
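The averaging trick is at least easy to demonstrate in post, for a static scene on a tripod (generic sketch, not any camera's firmware):

import numpy as np

def average_exposures(frames):
    # Average N raw frames of the same static scene.  Shot noise drops
    # roughly as sqrt(N), so 12 averaged ISO-100 frames behave a bit like
    # one lower-ISO exposure -- provided nothing moves and the highlights
    # weren't already clipped in the individual frames.
    return np.mean([f.astype(np.float64) for f in frames], axis=0)

rng = np.random.default_rng(1)
scene = rng.uniform(100.0, 1000.0, size=(480, 640))     # "true" exposure
frames = [rng.poisson(scene) for _ in range(12)]        # 12 noisy shots
print(np.std(frames[0] - scene))                        # single-frame noise
print(np.std(average_exposures(frames) - scene))        # ~1/sqrt(12) of that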
Where is the transparent pixel? (Score:4, Interesting)
One of the problems with DLP projection TVs with a "color wheel" was that since every color segment lets only 1/3 of the light through, the picture was dim. So they added a fourth, "clear" segment that lets all the light through, giving every projected pixel the blast of light it needs, while the colored portions of the wheel add only the additional brightness for each color.
This technology seems to be kind of similar. The transparent sub-pixel detects overall luminosity and the remaining pixels "adjust" for color. Very close to what we have in our retina too. Almost all our rod cells respond only to luminosity, and the cones respond, to varying degrees, to three colors. A poster was complaining about losing "color resolution". I think millions of years of evolution have shown us the balance. You need about 90% of the pixels responding to luminosity and just 10% to color. The same ratio as in our retina.
"White" sensor? (Score:2)
More complexity for RAW filters.
Evolution Has It Right? Different Goals (Score:2)
You need about 90% of the pixels responding to luminosity and just 10% to color.
My camera doesn't care about avoiding being eaten by a lion. Nor does it care about sensing smaller prey running through the edges of its field of vision so it can turn its sensor to focus the sharper resolution on it.
The human eye is awesome for what it's evolved to do. Photography, however, is a different task. The human eye is good at resolving things in front of it while catching movement to the sides and only turning if it's interested. A camera with a well-resolved center section but lousy edge resolu
This Is Too Obvious (Score:2)
Re: (Score:2)
Re: (Score:2)
Why not this pattern (Score:2, Interesting)
The patterns they suggested in the article were not as elegant as the Bayer filter (where each color formed an evenly spaced grid). They may be hiding the actual pattern for now or there may be some technical reason for those patterns that I don't understand, but I would suggest this pattern (C = Clear):
It keeps the same 4 clear : 2 green : 1 red : 1 blue ratio, but the different color pixels all form a regularly spaced grid.
Although I am old enough to remember the 80s.. (Score:2)
Is the clear array sensitive across the spectrum? (Score:2)
Does the clear array have a flat sensitivity level across the spectrum, so that it will give the same data value for a given number of photons striking it at a 700nm wavelength as it would for the same number of photons at 400nm?
If the sensor were (for example) more sensitive to red, then this would skew the picture results significantly, especially if it picked up infrared light, which isn't visible to the human eye, and added it to the picture's data.
Re:Is the clear array sensitive across the spectru (Score:2)
I imagine that's part of the reason it hasn't been done yet. Finding the "true luminosity" from a nearby Red, Green, Blue, and Clear CCD is probably nontrivial. I imagine that IR sensitivity isn't as troublesome as you'd suggest, though, since most cameras now come with IR filte
Re:Is the clear array sensitive across the spectru (Score:2)
Re:Is the clear array sensitive across the spectru (Score:2)
I would flip that around and say that that behaviour might actually be advantageous. If you're in a low (visible) light situation, maybe you could use an IR flash to get luminance values and merge that with the dim visible colour data to get a halfway-decent colour image with no visib
Re: (Score:2)
Other ideas for alternative color patterns (Score:5, Interesting)
1) Sony was building cameras for a while with four color channels. There was the normal green, but also a different green they called "emerald" for one of the four Bayer pattern locations. Unfortunately, this was a solution in search of a problem; it never really caught on because there just wasn't any perceived benefit.
2) I do visual effects for films. For the last 50 years or so, people have been using bluescreen and greenscreen effects. The idea is to put a constant color background, and process the image so that any pixels of that color become transparent. Over the years, more and more lipstick has been applied to this pig -- so that you can now often extract shadows that fall on the greenscreen, pull transparent smoke from the greenscreen plate -- these things have become even more possible through digital processing.
Still, it sucks. Greenscreen photography forces so many compromises that I often recommend shooting without it and laboriously hand-rotoscoping the shots.
But -- say you had a fourth color filter, with a very narrow spectral band. Perhaps the yellow sodium color -- commercial lights that put out very narrow-band yellow are sometimes used for street lighting. If you had a very narrow-band sodium filter over 1/4 of the pixels, you could pull perfect mattes without 99% of the artifacts of traditional greenscreen and bluescreen photography. Finally (and this is killer!) you could make glasses that the director of photography and other lighting crew could wear that block just that frequency, so they could see the set as it really is -- without the sodium light pollution.
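A minimal sketch of how a matte might be pulled from such a narrow-band fourth channel, assuming the backing is lit only with the sodium line so that channel reads bright behind the subject and dark on it (the thresholds here are arbitrary):

import numpy as np

def sodium_matte(na_channel, lo=0.2, hi=0.8):
    # na_channel : float image in [0, 1]; high where the backing shows through.
    # Returns alpha in [0, 1]: 1 = foreground fully opaque, 0 = pure backing.
    # A linear ramp between the thresholds gives soft edges and partial
    # transparency (smoke, hair, motion blur) more or less for free.
    ramp = np.clip((na_channel - lo) / (hi - lo), 0.0, 1.0)
    return 1.0 - ramp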
Still, Kudos to Kodak for thinking outside the box.
Thad Beier
Re: (Score:2)
Sadly, this appears to be anot
Better than Foveon? (Score:3, Informative)
RG +BG arguments missing the point? (Score:4, Informative)
In printing technologies, at least in the early '90s, they were using a technique called either "GCR" (gray component replacement) or "UCR" (under color removal), which basically transfers almost all of the "light density" information from the cyan-magenta-yellow films of a color separation to the "K" film (black) -- because black ink is quite a bit cheaper than the alternatives. I have seen images printed with up to 90% of the density in the black that are virtually indistinguishable from images printed from a "normal" color separation by the naked eye, and sometimes if a high enough line screen value is used (200+ LPI) it is hard to tell that a print is a GCR'd image even with a magnifying glass.
So it stands to reason, for me at least, that if I devote more attention to capturing the "amount" of light with "one CCD eye" completely open, and the "quality" (hue and tint) of the light with my "other three CCD eyes" that are filtering for spectra, I should be able to do the same thing digitally that they have been doing optically in printing for years and still yield a superior result.
I'd love to hear a discussion about the best way to use the digital bits in a 32-bit "GCR" digital world, by the way. For example, using 10 bits (1024 levels) for luma, 8 bits (256 hues and tints) for green, and 7 bits (128 hues and tints each) for red and blue, or whatever the optimal case might be.
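For what it's worth, here is one way a 10/8/7/7 split could be packed into a 32-bit word (the field names and bit order are purely illustrative):

def pack_lgrb(luma10, green8, red7, blue7):
    # 10-bit luma, 8-bit green, 7-bit red, 7-bit blue -> one 32-bit word.
    assert 0 <= luma10 < 1 << 10 and 0 <= green8 < 1 << 8
    assert 0 <= red7 < 1 << 7 and 0 <= blue7 < 1 << 7
    return (luma10 << 22) | (green8 << 14) | (red7 << 7) | blue7

def unpack_lgrb(word):
    return ((word >> 22) & 0x3FF, (word >> 14) & 0xFF,
            (word >> 7) & 0x7F, word & 0x7F)

assert unpack_lgrb(pack_lgrb(900, 130, 60, 25)) == (900, 130, 60, 25)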
Thoughts?
Patents? (Score:2)
Re: (Score:2)
Re: (Score:3, Informative)
Re: (Score:2)
Foveon implementation is crap? What have you been smoking?
The SD14 is a 4.7 megapixel camera. It is doing very well when compared against 8 megapixel Bayer-based cameras. [popphoto.com] If that doesn't validate the technology, I don't know what does. Perhaps you're confused by the claim that it is a "14.1 megapixel" camera. That's just marketing hype, and should be ignored right out of the gate. There are 4.7 million sensor sites, meaning, spatially distinct sensors. It's a 4.7 MP sensor, period. But considered as suc
Re: (Score:2)
The spatial resolution is of course better.
Re: (Score:2)
That isn't what these third-party test results [popphoto.com] and these images [dpreview.com], and this one [dpreview.com], and these [dpreview.com], and these [dpreview.com], and these [dpreview.com] indicate. Plenty of good yellows and oranges, including saturated ones, in
Cellphone cameras (Score:2)
Re: (Score:2)
DPReview has a good explanation (Score:2, Informative)
Re: (Score:2)
Same reason as people use cheesy stock photography. Even newspaper articles do this; they include an irrelevant or generic image beside a story because it takes up space, makes it more attractive and headline stories are "meant" to have photos.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
The article says that sensors based on this will start to become available early next year, but I'd guess it may be a little longer until camera manufacturers have tuned their on-camera image processing algorithms (and off-camera RAW algorithms) for the production sensors.
The larger format sensor cameras like the EOS 30D/350D (both are APS-C) don't suffer so much in low light anyway sinc
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
I don't object to all patents. I do object to software patents, because I see no evidence that the bribe of a 17- or 20-year monopoly is in any way necessary to spur innovation in the software industry. Given that there are always dead-weight losses associated with monopolies, it is best not to create them unless there is strong positive evidence that the benefits outweigh the costs.
In other industries, such as the chip industry where you have to blow billions of dollars on each new fab, patent monopoli
Re: (Score:2)
"Hey, I think the headline should be, 'Kodak announces latest attempt to maintain relevance after failing in its attempts to keep a captive market via Bill Cosby and Japan-bashing.'"
But yours is good too.
*attaching this hypothetical troll to one already labeled as such to keep it away from productive discussion*