Kodak Unveils Brighter CMOS Color Filters

brownsteve writes "Eastman Kodak Co. has unveiled what it says are 'next-generation color filter patterns' designed to more than double the light sensitivity of CMOS or CCD image sensors used in camera phones and digital still cameras. The new color filter system is a departure from the widely used standard Bayer pattern (an arrangement of red, green and blue pixels), also created by Kodak. While building on the Bayer pattern, the new technology adds a 'fourth pixel, which has no pigment on top,' said Michael DeLuca, market segment manager responsible for image sensor solutions at Eastman Kodak. Such 'transparent' pixels, sensitive to all visible wavelengths, are designed to absorb far more of the incoming light. DeLuca claimed the invention is 'the next milestone' in digital photography, likening its significance to that of ISO 400 color film, introduced in the mid-1980s."
  • by swschrad ( 312009 ) on Friday June 15, 2007 @10:56AM (#19519167) Homepage Journal
    and color in the 70s.

    I refer you to Tri-X b/w, and to Fujichrome 400 around 1972, a really nicely balanced and warm film. If you pushed it to 1200, you could peel the grains off the base and go bowling with them, but the picture held up remarkably well on the small screen. It was THE go-to magic film for 16mm newsfilm when it came out.

    If that had been a negative film, it would have been ASA 800 with little more grain than the "fast" 125 color film of the time.
  • by bsundhei ( 1053360 ) on Friday June 15, 2007 @11:02AM (#19519257)
    This is really not anything new to the imaging industry, just a new application. There is already the CMYK colorspace for printers, which is effectively CMY plus a black channel to get deeper blacks. I don't see this as revolutionary so much as "can't believe this hasn't been done yet," and to their credit they more or less admit that. :) My biggest hope is that referencing the fourth, unfiltered plane will reduce per-pixel noise, but I doubt they will get there for a while; they still have to work out the color conversions.
  • by leehwtsohg ( 618675 ) on Friday June 15, 2007 @11:07AM (#19519351)
    The gain here seems to come from the fact that they use a white sensor (i.e. unfiltered), which sees ~3 times more light.

    They divide each sensor site of the regular Bayer pattern into four: half white, half color. This also lets one report a four-fold increase in the number of pixels without really increasing the resolution. (Which may actually be a boon for digital photography, since no one needs the current resolution anyway, because the optics don't keep up, but the megapixel race is on...)

    But does anyone know why sensors use RGB and not CMY? A cyan filter would let green and blue through, but keep red out, instead of blocking two parts of the visible spectrum for each pixel. This way, by simply switching color space, the camera becomes twice as sensitive to light. I.e. instead of

    R G
    G B
    use

    M C
    C Y
    or something like that. One could even combine the two methods and use white pixels to gain a further increase in light sensitivity (from 8/12 to 10/12 of the incident light). Is there any reason that current cameras use RGB?
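
    To make the arithmetic concrete, here's a quick Python sketch of those throughput fractions, assuming idealized filters that each pass exact thirds of the visible band (real filter responses overlap, so treat the numbers as upper bounds):

      # Average fraction of incident light passed by a 2x2 mosaic tile,
      # under an idealized model: visible spectrum = 3 equal bands (R, G, B).
      def tile_throughput(per_pixel_fractions):
          return sum(per_pixel_fractions) / len(per_pixel_fractions)

      bayer = [1/3, 1/3, 1/3, 1/3]   # R G / G B: each pixel passes one band
      cmy   = [2/3, 2/3, 2/3, 2/3]   # M C / C Y: each pixel passes two bands
      mixed = [1.0, 1.0, 2/3, 2/3]   # two white (unfiltered) + two CMY pixels

      print(tile_throughput(bayer))  # 0.333... =  4/12
      print(tile_throughput(cmy))    # 0.666... =  8/12
      print(tile_throughput(mixed))  # 0.833... = 10/12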
  • by ausoleil ( 322752 ) on Friday June 15, 2007 @11:08AM (#19519357) Homepage
    Sure, "faster" sensors will be a boon to the consumer market, and will surely have some applications in the pro market as well -- existing light press photography come to mind.

    For me, though, the problem is not so much speed as noise and dynamic range. A lot of my work is still fine-art landscape and studio glamour photography; neither is speed-starved, but even the finest digital cameras could use less noise and wider dynamic range.

    While DSLRs have a huge advantage over handhelds in this regard, it would still be nice to see improvements in S/N such that the darker zones maintain their clarity and detail. Even the finest Canon cameras suffer to a degree here, at least for people with very high standards. Some of us have those standards because that is what our clients demand, and in some cases we still must use film to meet their criteria.

    It's a virtual law that to obtain the best noise performance you need to use the lowest ISO speed that the camera can attain. So instead of bottoming out at 100, like most DSLRs, I'd like to see 25. Or better, 12.

    For more info, visit http://www.normankoren.com/digital_tonality.html [normankoren.com]

  • by 140Mandak262Jamuna ( 970587 ) on Friday June 15, 2007 @11:13AM (#19519423) Journal
    The Bayer pattern has one red, one blue and two green sub-pixels per pixel. They could lose one green and replace it with a transparent sub-pixel, or they could come up with a different packing to accommodate one.

    One of the problems with DLP projection TVs that use a "color wheel" was that, since every color segment lets only 1/3 of the light through, the picture was dim. So they added a fourth, clear segment that passes all the light, giving every projected pixel the blast of luminance it needs, while the colored portions of the wheel add in the color on top.

    This technology seems similar in kind. The transparent sub-pixel detects overall luminosity and the remaining sub-pixels "adjust" for color. It is very close to what we have in our retina, too: the rods respond only to luminosity, while the cones respond, to varying degrees, to three color bands. A poster was complaining about losing "color resolution," but I think millions of years of evolution have shown us the balance: roughly 90% of the receptors responding to luminosity and just 10% to color, about the same ratio as in our retina.

  • Why not this pattern (Score:2, Interesting)

    by Bob-taro ( 996889 ) on Friday June 15, 2007 @11:21AM (#19519533)

    The patterns they suggested in the article were not as elegant as the Bayer filter (where each color forms an evenly spaced grid). They may be hiding the actual pattern for now, or there may be some technical reason for their choices that I don't understand, but I would suggest this pattern (C = Clear):

    C G C G
    B C R C
    C G C G
    R C B C

    It keeps the same 4 clear : 2 green : 1 red : 1 blue ratio, but each color's pixels still form a regularly spaced grid.
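    A couple of lines of Python confirm the ratio in that tile (the layout is just my suggestion, not Kodak's published pattern):

      # Count pixel types in the proposed 4x4 tile.
      from collections import Counter

      tile = ["CGCG",
              "BCRC",
              "CGCG",
              "RCBC"]
      print(Counter("".join(tile)))  # {'C': 8, 'G': 4, 'R': 2, 'B': 2} -> 4:2:1:1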

  • by leehwtsohg ( 618675 ) on Friday June 15, 2007 @11:37AM (#19519757)
    Thank you for the link! That is very interesting. So CMY was already tried in cameras. Once you have a digital pixel, it pretty much doesn't matter whether you represent it in RGB or CMY: it's just a transform of the same information.
    But I don't understand why you wouldn't get less noise. The Wikipedia article mentions higher dynamic range. Isn't it true that twice as much light falls on each sensor, so you gain a stop, and therefore have less noise (because you need the shutter open for only half the time)? Or is it that the noise arrives in two channels at once, so you end up with the same amount of noise?
  • by SpinyNorman ( 33776 ) on Friday June 15, 2007 @11:52AM (#19519961)
    The old/current Bayer pattern (also a Kodak "invention") likewise reflects the lower resolution of our vision for color versus brightness (as do JPEG and YUV-based image compression: U and V can be downsampled relative to Y with little loss in perceived resolution). In the Bayer pattern, each 2x2 block of pixels has two with green filters, described as luminance-sensitive in the original patent, and one each with red and blue filters, described as chrominance-sensitive.

    The new Kodak filter pattern still takes advantage of our better resolution for luminance, but implements it better by basing it on color filters (or the lack of them) that let more light through, thereby increasing signal-to-noise (especially needed in low-light conditions).

    I'm not sure that this new filter pattern is optimal though. As another poster noted, R/G/B filters are too narrow and cut out a lot of light. You could still capture the color information with two broader filters more directly corresponding to the U & V of the YUV color space.
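
    For anyone unfamiliar with the luma/chroma split, here is a rough Python illustration using the BT.601 YUV weights (nothing Kodak-specific): the chroma planes can be stored at half resolution in each direction with little visible loss, which is exactly the asymmetry these filter patterns exploit.

      import numpy as np

      def rgb_to_yuv(rgb):
          # BT.601 weights: Y carries luminance, U/V carry color differences
          r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
          y = 0.299 * r + 0.587 * g + 0.114 * b
          u = 0.492 * (b - y)
          v = 0.877 * (r - y)
          return y, u, v

      rgb = np.random.rand(4, 4, 3)             # toy 4x4 "image"
      y, u, v = rgb_to_yuv(rgb)
      # 4:2:0-style subsampling: full-res Y, U/V averaged over 2x2 blocks
      u_sub = u.reshape(2, 2, 2, 2).mean(axis=(1, 3))
      v_sub = v.reshape(2, 2, 2, 2).mean(axis=(1, 3))
      print(y.shape, u_sub.shape, v_sub.shape)  # (4, 4) (2, 2) (2, 2)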
  • by Thagg ( 9904 ) <thadbeier@gmail.com> on Friday June 15, 2007 @11:57AM (#19520053) Journal
    While I like Kodak's idea quite a bit, here are a couple of other ideas.

    1) Sony built cameras for a while with four color channels. There was the normal green, but also a different green they called "emerald" in one of the four Bayer-pattern locations. Unfortunately, this was a solution in search of a problem; it never really caught on because there just wasn't any perceived benefit.

    2) I do visual effects for films. For the last 50 years or so, people have been using bluescreen and greenscreen effects. The idea is to put up a constant-color background and process the image so that any pixels of that color become transparent. Over the years, more and more lipstick has been applied to this pig: you can now often extract shadows that fall on the greenscreen and pull transparent smoke from the greenscreen plate, all of which digital processing has made more practical.

    Still, it sucks. Greenscreen photography forces so many compromises that I often recommend shooting without it and laboriously hand-rotoscoping the shots.

    But say you had a fourth color filter with a very narrow spectral band, perhaps the yellow sodium line; commercial lights that put out very narrow-band yellow are sometimes used for street lighting. If you had a very narrow-band sodium filter over 1/4 of the pixels, you could pull perfect mattes without 99% of the artifacts of traditional greenscreen and bluescreen photography. Finally (and this is killer!) you could make glasses for the director of photography and the lighting crew that block just that frequency, so they could see the set as it really is, without the sodium light pollution.
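
    A toy version of the matte math, assuming a hypothetical fourth channel S that records only the narrow sodium band (the names and the calibrated-backing assumption are mine, purely for illustration):

      import numpy as np

      def sodium_matte(s_channel, s_backing, eps=1e-6):
          # Foreground objects block the sodium-lit backing, so opacity is
          # proportional to how much of the backing's sodium light is missing.
          alpha = 1.0 - s_channel / np.maximum(s_backing, eps)  # 1 = foreground
          return np.clip(alpha, 0.0, 1.0)

      s_backing = np.full((2, 2), 100.0)   # calibrated bare-backing response
      s_channel = np.array([[100.0, 60.0],
                            [10.0, 0.0]])  # measured sodium channel
      print(sodium_matte(s_channel, s_backing))
      # [[0.  0.4]
      #  [0.9 1. ]]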

    Still, Kudos to Kodak for thinking outside the box.

    Thad Beier
  • by ChrisMaple ( 607946 ) on Friday June 15, 2007 @12:08PM (#19520235)
    Usually random noise sums as "root sum of squares." So the signal level would double while the noise would increase by about 1.4x, for a net improvement of 2/1.4 = 1.4. The more complicated electronics would eat into the S/N improvement a bit, so the net gain would probably land in the range of 1/3 to 1/2 stop (1.25x to 1.4x), I'd guess.
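
    In Python terms (same root-sum-of-squares assumption, independent noise sources):

      import math

      signal_gain = 2.0                     # panchromatic pixel: ~2x the light
      noise_gain = math.sqrt(2.0)           # independent noise adds in quadrature
      snr_gain = signal_gain / noise_gain
      print(round(snr_gain, 2))             # 1.41
      print(round(math.log2(snr_gain), 2))  # ~0.5 stop of S/N headroom, before
                                            # the messier electronics eat into it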
  • This way, by simply switching color space, the camera becomes twice as sensitive to light. I.e. instead of ...
    The issue is that the spectral density [wikipedia.org] of sunlight is not flat. (I can't seem to find a good image for you.) Basically, it peaks at about 500 nm (yellowish-green) and tapers off toward infrared and ultraviolet. The Bayer filter has twice as many green pixels as red or blue, which tracks the sunlight power spectral density better than one cyan, one magenta, one yellow, and one intensity pixel would. In other words, sunlight is more green than it is red or blue.

    It is no coincidence (I suppose it's arguable, if you consider evolution a "theory" with scare quotes) that our eyes are most sensitive to green light. :) Notice that of the three cone cells [wikipedia.org] in our eyes, two heavily favor the yellow-green end of the spectrum (peaks at 534 and 564 nm). IMHO, the ideal colors for a camera filter would match the three peaks of our cones, which line up decently with the sunlight PSD.

    As a side note, cameras need white balance because the spectral densities of different light sources are not the same. Incandescents differ from fluorescents, which differ from sunlight; that is why incandescents have an orangeish tint and fluorescents a blueish one (those are where their spectra peak).

    (The theory behind why chlorophyll is green, which means it reflects green and thus does not absorb the frequencies carrying the most power, is quite interesting to boot.)
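
    That ~500 nm figure is easy to sanity-check with Wien's displacement law, treating the sun as a blackbody at roughly its surface temperature (the real solar spectrum deviates a bit):

      WIEN_B = 2.898e-3        # Wien's displacement constant, m*K
      T_SUN = 5778.0           # approximate solar surface temperature, K
      peak_nm = WIEN_B / T_SUN * 1e9
      print(round(peak_nm))    # ~502 nm: yellowish-green, as stated above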
  • by Spy Hunter ( 317220 ) on Friday June 15, 2007 @12:19PM (#19520415) Journal
    Well, considering that the human eye does much the same thing (rods vs. cones), I'd say yes.
  • by ringm000 ( 878375 ) on Friday June 15, 2007 @12:29PM (#19520565)
    In a camera, you cannot convert CMY to RGB by just inverting the components. Even in an ideal model like (C,M,Y) = (G+B, R+B, R+G), you have to convert via R = (M+Y-C)/2, which folds the noise of all three measurements into each reconstructed channel. The absorption spectra of the cones overlap a lot, so this ideal model is unreachable anyway, requiring complex color correction that would probably give imperfect results. However, these are all color-related problems; the dynamic range of luminance should still be improved.
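
    A quick shot-noise simulation of that ideal model shows the color penalty: each reconstructed channel inherits noise from all three CMY measurements, so it comes out noisier than a direct RGB reading even though each CMY pixel collects twice the light (the photon counts are made up for illustration):

      import numpy as np

      rng = np.random.default_rng(0)
      n, s = 1_000_000, 1000.0        # trials, photons per color band
      r_direct = rng.poisson(s, n)    # direct R pixel: Poisson shot noise
      c = rng.poisson(2 * s, n)       # C = G + B, collects two bands
      m = rng.poisson(2 * s, n)       # M = R + B
      y = rng.poisson(2 * s, n)       # Y = R + G
      r_cmy = (m + y - c) / 2         # reconstructed R channel
      print(r_direct.std())           # ~31.6 = sqrt(1000)
      print(r_cmy.std())              # ~38.7 = sqrt(1.5 * 1000), ~22% noisier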
  • by GreenSwirl ( 710439 ) on Friday June 15, 2007 @02:44PM (#19522599) Homepage Journal
    Researchers here at Rensselaer Polytechnic Institute recently came up with a super non-reflective coating: it basically has nano-spikes that help absorb light from all angles and at all frequencies. Seems like it would be good to use for the dark pixel. http://news.rpi.edu/update.do?artcenterkey=1956 [rpi.edu]

"And remember: Evil will always prevail, because Good is dumb." -- Spaceballs

Working...