Kodak Unveils Brighter CMOS Color Filters

brownsteve writes "Eastman Kodak Co. has unveiled what it says are 'next-generation color filter patterns' designed to more than double the light sensitivity of CMOS or CCD image sensors used in camera phones or digital still cameras. The new color filter system is a departure from the widely used standard Bayer pattern — an arrangement of red, green and blue pixels — also created by Kodak. While building on the Bayer pattern, the new technology adds a 'fourth pixel, which has no pigment on top,' said Michael DeLuca, market segment manager responsible for image sensor solutions at Eastman Kodak. Such 'transparent' pixels — sensitive to all visible wavelengths — are designed to absorb light. DeLuca claimed the invention is 'the next milestone' in digital photography, likening its significance to ISO 400 color film introduced in the mid-1980s."
This discussion has been archived. No new comments can be posted.

  • by lurker412 ( 706164 ) on Friday June 15, 2007 @10:55AM (#19519155)
    I'm not sure you would lose "color resolution" at all. The current RGB scheme combines color and luminosity. Under the new scheme, those could be separated, much the way LAB color space works. Potentially, this could give you a greater dynamic range, which would address the biggest weakness of current digital cameras. Of course, the proof will be in the execution. If it yields more noise in the process, then it won't be worth a damn. We'll see.
  • by MonorailCat ( 1104823 ) on Friday June 15, 2007 @11:08AM (#19519363)
    They posted a full press release with images and sensor layout diagrams; additionally, there is an excellent discussion in their news forum with a lot of good information. http://www.dpreview.com/news/0706/07061401kodakhighsens.asp [dpreview.com]
  • by Zarhan ( 415465 ) on Friday June 15, 2007 @11:08AM (#19519365)
    Only problem is that Foveon (at least the current implementation) is crap. The three colors have too much overlap, and they aren't very sensitive, either. Fine, you get rid of some of the Bayer artifacts, but in return you lose most of the extreme colors and lots of sensitivity.
  • by Animaether ( 411575 ) on Friday June 15, 2007 @11:11AM (#19519383) Journal
    You don't really lose a quarter of your color resolution... you lose half the resolution in a specific wavelength, the one normally corresponding to green (though how this is mapped to RG or GB (rarely purely G) is up to the demosaicing algorithm). On the up side, you gain light sensitivity by a factor of more than two: assume the filters were perfect and light existed only in the wavelengths they let through. Then any single filtered cell would receive only 33% of the stimulus, while an unfiltered cell would get the full 100%.

    This additional intensity resolution is, of course, only a quarter of that of a full Bayer pattern, but nobody ever said you had to discard the intensity measured by the red/green/blue filtered cells; in fact, you can't, or you couldn't determine color very well at all.

    It's actually a pretty obvious setup (it has similarities to the RGBE storage format.. though that has a much larger range, it also mostly separates color (RGB) from intensity (the exponent)) - can't wait to see it patented - and makes me wonder why the Bayer pattern was the choice in the first place. I certainly know why they picked green as the go-to channel (human visual sensitivity, blabla), and why there have to be groups of 4 in the first place (cells are square/rectangular.. design a triangular sensor cell, somebody - quick! gimme that hexagonal sensor).. but why did Kodak pop this up only now?
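    As an editorial aside, the sensitivity arithmetic above can be sketched with a toy model. It assumes idealized filters that split visible light into three equal, non-overlapping bands; real filter responses overlap, so the actual gain differs:

```python
# Toy model: visible light is split into three equal bands (R, G, B).
filtered_cell = 1 / 3   # an R, G, or B filtered cell sees one band
white_cell = 1.0        # an unfiltered ("transparent") cell sees all bands

# A white cell collects 3x the light of any single filtered cell.
per_cell_gain = white_cell / filtered_cell

# Comparing a whole 2x2 quad: RGBW versus the Bayer RGGB layout.
quad_gain = (3 * filtered_cell + white_cell) / (4 * filtered_cell)
```

    Under these idealized assumptions the unfiltered cell gains a factor of 3 over a filtered one, and the quad as a whole collects 1.5x the light, which is consistent with the "more than double" per-pixel claim in the summary.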
  • by Anonymous Coward on Friday June 15, 2007 @11:26AM (#19519603)
    Complementary (CYGM) color filters were actually tried:

    http://en.wikipedia.org/wiki/CYGM_filter [wikipedia.org]

    They don't actually provide any practical benefit over RGB in terms of noise, if your final output is meant to be RGB, due to the mathematics of the color space transformation. And your final output is generally RGB, for digital photography; even if you print, the intermediate formats are generally RGB, and cheap consumer printers take input in RGB, not CMYK.
  • by slagheap ( 734182 ) on Friday June 15, 2007 @11:56AM (#19520033)

    But does anyone know why sensors use RGB and not CMY? A cyan filter would let green and blue through, but keep red out, instead of blocking two parts of the visible spectrum for each pixel. This way, by simply switching color space, the camera becomes twice as sensitive to light.

    Let me just turn that around for you...

    A green filter would let cyan and yellow through, but keep magenta out, instead of blocking two parts of the visible spectrum for each pixel.

    The color spaces are complementary. Each color in one space is halfway between two colors in the complementary space.

    ___R___
    _Y___M_
    _G___B_
    ___C___

    A filter of any color will, in one color space, allow one color and block the other two, while in the other color space it will allow two colors and block one.

    RGB is the color space usually used for additive color (i.e. light -- More/different light means brighter). A sensor is capturing light. CMY(K) is usually used in subtractive color (i.e. ink -- More/different ink means darker).
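    As an editorial aside, the symmetry described above can be made concrete by modeling each filter as the set of RGB bands it passes. This is an idealized model (real filter responses overlap), purely for illustration:

```python
# Each filter is modeled by the set of RGB primaries it lets through.
PASSES = {
    'R': {'R'}, 'G': {'G'}, 'B': {'B'},                  # primaries pass one band
    'C': {'G', 'B'}, 'M': {'R', 'B'}, 'Y': {'R', 'G'},   # complements pass two
}

# A cyan filter passes two bands (green + blue) and blocks only red...
assert PASSES['C'] == {'G', 'B'}
# ...but, turned around, a green filter is exactly what cyan and
# yellow have in common, and red is what magenta and yellow share.
assert PASSES['G'] == PASSES['C'] & PASSES['Y']
assert PASSES['R'] == PASSES['M'] & PASSES['Y']
```

    This mirrors the diamond diagram: each color in one space sits between (and is the overlap of) its two neighbors from the complementary space.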

  • by Solandri ( 704621 ) on Friday June 15, 2007 @12:04PM (#19520169)
    It's done on TV [nfggames.com] all the time [nfggames.com] and nobody complains (chrominance is separated from luminance and often transmitted at much lower resolution). As has been pointed out below, your eyes are made up of rods (which see black and white) and cones (which see color), and only a fraction of those cones are devoted to each individual red, green, or blue spectrum. So your color resolution is already significantly lower than your luminance resolution. You can even see photos demonstrating this [nfggames.com] with a 9x decrease in color resolution (3x in each linear direction). You're most sensitive to green, which is why the Bayer sensors commonly used in digital cameras divide each 4 pixels into GRGB.
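    The point about green dominating perceived brightness shows up in standard luma formulas; for instance, the Rec. 601 weighting (sketched here for illustration) gives green more than half the weight:

```python
def luma(r, g, b):
    # Rec. 601 luma weights: green carries the majority of perceived
    # brightness, red much less, blue the least.
    return 0.299 * r + 0.587 * g + 0.114 * b
```

    This is one reason chroma can be transmitted at reduced resolution on TV, and why Bayer sensors double up on green rather than red or blue.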
  • Better than Foveon? (Score:3, Informative)

    by mdielmann ( 514750 ) on Friday June 15, 2007 @12:24PM (#19520491) Homepage Journal
    I wonder how this is going to compare to the Foveon [foveon.com] sensors. They capture RGB data at all pixels - filtered based on depth rather than location. Now if only those babies cost less.
  • by locofungus ( 179280 ) on Friday June 15, 2007 @12:55PM (#19520955)
    In fact, a quick Google search turns up http://www.patentgenius.com/patent/6704046.html [patentgenius.com] - which mentions RGBW and points out that all three of the RGB values have to be interpolated at the white pixel.

    Tim.
  • by fyngyrz ( 762201 ) * on Friday June 15, 2007 @03:37PM (#19523393) Homepage Journal
    In response to the previous post, however, the fourth, unfiltered pixel would decrease color resolution by 1/4

    No... not really.

    First of all, the Bayer pattern is...

    RG
    GB

    ...in a square as shown. Because there are three color channels desired, and four cells in a square, and green carries the most spatial information to the eye, the green sensor is duplicated. Recovering image data from a Bayer patterned sensor involves getting luma from all four cells, adjusted for how luma looks when viewed through such filters, and interpolating R, G and B from the staggered sensors in adjacent 4-cell Bayer groups. In a Bayer grouping, you always have RGRGRGRGR.... on one line and GBGBGBGB... on the next, which also gives you vertical lines of RGRGRG.... and GBGBGB...
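    The row/column structure described above can be sketched like this (a standard RGGB layout is assumed):

```python
def bayer_channel(row, col):
    # Standard RGGB Bayer mosaic:
    #   even rows: R G R G ...
    #   odd rows:  G B G B ...
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'
    return 'G' if col % 2 == 0 else 'B'

# Horizontal lines alternate RGRG... and GBGB..., and by symmetry the
# columns read the same way vertically.
row0 = ''.join(bayer_channel(0, c) for c in range(8))
row1 = ''.join(bayer_channel(1, c) for c in range(8))
```

    Half the sites are green, a quarter red, and a quarter blue; the Kodak pattern effectively converts one of the two greens in each quad to an unfiltered site.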

    Giving up one of the four sites to wide-band sensitivity as Kodak proposes, the same spatial pattern still has exactly the same sensitivity to red and blue; nothing has changed there. Red and blue sensor sites still alternate at the exact same spatial rate. But the new pattern has 1/2 the spatial (not intensity) sensitivity to green (which we are most sensitive to, remember); it has the same sensitivity to luma; and it probably has considerably enhanced sensitivity to infrared and ultraviolet, though that remains to be seen, and such an advantage is not as generally useful to most photographers (though those who enjoy IR and/or UV photography will love this thing if the sensor is truly wide-band.)

    But there are complications: for example, Bayer filters tend to produce significant moire patterns, and the anti-aliasing filters applied to prevent them reduce the available spatial resolution by as much as 1/2 along each axis anyway.

    I've written numerous RAW image plugins for Bayer (and other) patterns, and believe me, it isn't as simple as 1/4 the color. This is a new configuration, and I've not written code for it as yet, but I would bet my boots that when the time comes to do so, the color resolution of an image will not suffer much, if at all. You'll still have RGB info available at about twice the moire filter rate. Spatial resolution shouldn't suffer either, because luma information is still available from the new arrangement. In terms of color images, what I'm trying to figure out is what the perceived advantage is.

    Thinking outside the box of color images, though, I can imagine a simple 1/4 resolution B&W mode that can do infrared and ultraviolet with the proper blocking filters... that'd be trippy. :-)

  • by CodeShark ( 17400 ) <ellsworthpc@NOspAm.yahoo.com> on Friday June 15, 2007 @04:12PM (#19523933) Homepage
    Having read all the arguments about giving up 1/2 of the green sensors, and admittedly not as an electronics fiend but as someone who worked in printing for years before moving to IT, I think the "sacrificing color" arguments are somewhat overstated. Here's why:

    In printing technologies, at least in the early '90s, they were using a technique called either "GCR" (gray component replacement) or "UCR" (undercolor removal), which basically transfers almost all of the "light density" information from the cyan-magenta-yellow films of a color separation to the "K" film (black) -- because black ink is quite a bit cheaper than the alternatives. I have seen images printed with up to 90% of the density in the black that are virtually indistinguishable by the naked eye from images printed from a "normal" color separation, and sometimes, if a high enough line screen value is used (200+ LPI), it is hard to tell that a print is a GCR'd image even with a magnifying glass.

    So it stands to reason, for me at least, that if I devote more attention to capturing the "amount" of light with "one CCD eye" completely open, and the "quality" (hue and tint) of the light with my "other three CCD eyes" that are filtering for spectra, I should be able to do the same thing digitally that they have been doing optically in printing for years and still yield a superior result.
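    As an illustrative sketch of the GCR idea described above (a simple min-based gray replacement; real separations use more sophisticated curves and ink limits):

```python
def gcr(c, m, y, fraction=0.9):
    # Gray component replacement: the amount that C, M, and Y all
    # share prints as gray, so move (a fraction of) it into K.
    k = min(c, m, y) * fraction
    return c - k, m - k, y - k, k
```

    With `fraction=0.9`, 90% of the shared density moves to the black channel, roughly the figure the post mentions.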

    I'd love to hear a discussion about the best way to use the digital bits in a 32-bit "GCR" digital world, by the way. For example, using 10 bits (1024 levels) for luma, 8 bits (256 hues and tints) for green, and 7 bits (128 hues and tints each) for red and blue, or whatever the optimal case could be.

    Thoughts?
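    A packing like the one proposed above could look like this. The 10/8/7/7 split is the poster's hypothetical, and the field order here is an arbitrary choice:

```python
def pack(luma, g, r, b):
    # Hypothetical 32-bit layout: 10-bit luma, 8-bit green,
    # 7-bit red, 7-bit blue, from high bits to low.
    assert 0 <= luma < 1024 and 0 <= g < 256 and 0 <= r < 128 and 0 <= b < 128
    return (luma << 22) | (g << 14) | (r << 7) | b

def unpack(word):
    # Reverse the layout above by shifting and masking each field.
    return ((word >> 22) & 0x3FF, (word >> 14) & 0xFF,
            (word >> 7) & 0x7F, word & 0x7F)
```

    The fields round-trip losslessly; the open question in the post is whether this bit budget is perceptually optimal, not whether it fits.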

  • by Anonymous Coward on Friday June 15, 2007 @09:07PM (#19527657)
    I'm a karmawhore, so I'm ACing this. But you'll still get an informed answer, heh. Basically, you get what you pay for. Zeiss has the name, but I believe much of the low-end consumer digicam gear stamped with ZEISS isn't particularly great. I mean, if Rolex made a $12 watch, how good would it be? Probably better than other $12 watches, but certainly no comparison to a $5k Submariner.

    That said, Zeiss makes some awesome high-end lenses. The DigiPrimes (for 2/3" digital cinema cameras) are amazing, and run around $12k each. The DigiZooms are like $60k, IIRC, and also very good. I haven't used the zooms, but I've shot with the DigiPrimes and they are amazingly sharp.

    my $.02
