
Kodak Unveils Brighter CMOS Color Filters

brownsteve writes "Eastman Kodak Co. has unveiled what it says are 'next-generation color filter patterns' designed to more than double the light sensitivity of CMOS or CCD image sensors used in camera phones or digital still cameras. The new color filter system is a departure from the widely used standard Bayer pattern — an arrangement of red, green and blue pixels — also created by Kodak. While building on the Bayer pattern, the new technology adds a 'fourth pixel, which has no pigment on top,' said Michael DeLuca, market segment manager responsible for image sensor solutions at Eastman Kodak. Such 'transparent' pixels — sensitive to all visible wavelengths — are designed to absorb light. DeLuca claimed the invention is 'the next milestone' in digital photography, likening its significance to ISO 400 color film introduced in the mid-1980s."
This discussion has been archived. No new comments can be posted.

  • by chennes ( 263526 ) * on Friday June 15, 2007 @10:47AM (#19519059) Homepage
    Of course, you achieve this increased light sensitivity at the expense of losing 1/4 of your color resolution. If you want the increased sensitivity, it might make more sense to pick up something like the Canon 1D Mk III, which, at least according to Ken Rockwell, gives great results all the way up to ISO 6400. I'd hate to lose 1/4 of my color resolution *all of the time* to get the added sensitivity that I only need for a small fraction of the shots I take.
    • by lurker412 ( 706164 ) on Friday June 15, 2007 @10:55AM (#19519155)
      I'm not sure you would lose "color resolution" at all. The current RGB scheme combines color and luminosity. Under the new scheme, those could be separated, much the way LAB color space works. Potentially, this could give you a greater dynamic range, which would address the biggest weakness of current digital cameras. Of course, the proof will be in the execution. If it yields more noise in the process, then it won't be worth a damn. We'll see.
    • I already feel that my digital Rebels have remarkably low-noise sensors and give me better results than shooting Velvia 50 and scanning. Still, I usually carry a tripod and virtually never shoot at high ISO, so it doesn't really affect me.

      I expect this will have more value in cellphone cameras. Typically the noise floor goes up when the sensor shrinks, and increasing the brightness without increasing noise would be a massive boon for most cellphone photographers.
      • I have two words for you: Sports Photography
      • I already feel that my digital Rebels have remarkably low-noise sensors and give me better results than shooting Velvia 50 and scanning. Still, I usually carry a tripod and virtually never shoot at high ISO, so it doesn't really affect me.

        Digital is already better than film, but the fact is, DSLR owners continue to pay good money for big, heavy lenses precisely to obtain more sensitivity, and vibration reduction to cope with longer-than-ideal shutter speeds.

        Cameras aren't "good enough" until I can shoot fast action at high magnification in near dark with a compact camera. Then I will be happy.

        • Cameras aren't "good enough" until I can shoot fast action at high magnification in near dark with a compact camera. Then I will be happy.

          Then you will never be happy, because photon noise makes it intrinsically impossible to capture low-noise images "in near dark" with a small-area sensor.
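The photon-noise point above can be made concrete with a toy sketch (the photon counts and the 25x area ratio are made-up illustrative numbers, not specs of any real sensor): photon arrivals are Poisson-distributed, so a pixel collecting N photons on average has noise of roughly sqrt(N) and therefore an SNR of roughly sqrt(N).

```python
import math

# Toy shot-noise model: a pixel collecting N photons on average has
# Poisson noise ~ sqrt(N), so its signal-to-noise ratio is sqrt(N).
def shot_noise_snr(mean_photons):
    return mean_photons / math.sqrt(mean_photons)  # equals sqrt(N)

# Hypothetical near-dark exposure: a large DSLR pixel collects 25x the
# photons of a compact-camera pixel with 1/25th the area.
dslr_snr = shot_noise_snr(2500)     # sqrt(2500) = 50
compact_snr = shot_noise_snr(100)   # sqrt(100)  = 10
print(dslr_snr, compact_snr)
```

No filter cleverness changes this floor: the small sensor simply collects fewer photons.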
    • As you state, DSLRs already have fairly decent sensitivity, so this is not likely to be a good compromise for them.

      Modern 'compact' digital cameras, however, which stuff 7-12 megapixels onto 1/1.8" and 1/2.5" sensors (smaller than your fingernail), could benefit enormously from this. These sensors are already past the diffraction limit of most of the lenses, so a drop in color resolution may not be too damaging (the eye being less sensitive to color resolution than to luminance anyway). Kodak is claiming a 1-2
    • Exactly. The potential loss in color resolution is a pretty steep price for two stops worth of sensitivity. There may be a niche market for this with sports or astro photos, but most users shoot most of their shots with available lighting or fill flash and don't need the extra sensitivity.

      This might make a nice second camera for the serious user, but most folks would be better off with the current technology.
      • The potential loss in color resolution is a pretty steep price for two stops worth of sensitivity

        First, there is likely no significant "loss in color resolution"; the resolution you're getting in the color channels right now is already only based on heuristics.

        Second, even if there were a loss of resolution in the color channels, you wouldn't notice it: you can't see high frequencies in the color channels.
    • by Animaether ( 411575 ) on Friday June 15, 2007 @11:11AM (#19519383) Journal
      You don't really lose a quarter of your color resolution... you lose half the resolution in a specific wavelength, the one normally corresponding to green (though how this is mapped to RG or GB (rarely purely G) is up to the demosaicing algorithm). On the up side, you gain light sensitivity by a factor of more than two: assume the filters were perfect and light existed only in the wavelengths they let through. Then any single filtered cell receives only 33% of the stimulus, while an unfiltered cell would get the full 100%.

      This additional intensity resolution is, of course, at only a quarter of the resolution of a full Bayer pattern... but nobody ever said you had to discard the intensity measured by the red/green/blue filtered cells; in fact, you can't, or you couldn't determine color well at all.

      It's actually a pretty obvious setup (it resembles the RGBE storage format... though that has a much larger range, it also mostly separates color (RGB) and intensity (exponent)) - can't wait to see it patented - and makes me wonder why the Bayer pattern was the choice in the first place. I certainly know why they picked green as the go-to channel (human visual sensitivity, blabla), and why there have to be groups of 4 in the first place (cells are square/rectangular... design a triangular sensor cell, somebody - quick! gimme that hexagonal sensor)... but why does Kodak pop this up only now?
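The back-of-envelope above can be written out explicitly. This is a sketch under the stated idealizations only (perfect filters, light split evenly across three bands); real filter transmissions are broader and overlapping:

```python
# Idealized model: visible light split evenly into R, G, B bands, and
# each color filter passes exactly its own band.
BANDS = 3
filtered_cell = 1.0 / BANDS   # a filtered cell sees ~33% of the light
clear_cell = 1.0              # an unfiltered cell sees 100%

# Average light per cell: classic Bayer 2x2 tile (R, G, G, B) versus
# hypothetical tiles with one or two cells left clear.
bayer_avg = 4 * filtered_cell / 4
rgbc_avg = (3 * filtered_cell + clear_cell) / 4        # one green swapped
half_clear_avg = (2 * filtered_cell + 2 * clear_cell) / 4  # half the cells clear

print(rgbc_avg / bayer_avg)        # 1.5x average light in this toy model
print(half_clear_avg / bayer_avg)  # 2.0x when half the cells are clear
```

Under these toy assumptions, making half the cells clear is what gets you into the "more than double" territory Kodak claims.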
      • by clodney ( 778910 )
        I had something of the same reaction to your remark about a patent. This is a fairly good example of how hard the "obviousness" test of a patent can be to judge. When you hear about this, it is something of a "Doh!" moment, and you think how obvious it is. (I was immediately reminded of the chroma subsampling options in JPEG compression, which use different sampling rates for color and luminosity).

        But the fact is that hundreds of millions of digital cameras have been made in an intensely competitive R
          Depends on when the idea originally came about.

          I certainly discussed this idea with people at least a year ago, completely independent of any research done at Kodak. Four-colour sensors, CYGM and RGBE, have been around for years.

          Other ideas that have been played with are non-regular (fractal) CFAs.

          An obvious further extension to what Kodak has done (assuming it isn't what they have done) is to have something like RYYB (Bayer but with G replaced with luminance). This ought to capture still more light. In fact, as CCD
      • by GreenSwirl ( 710439 ) on Friday June 15, 2007 @02:44PM (#19522599) Homepage Journal
        Researchers here at Rensselaer Polytechnic Institute recently came up with a super non-reflective coating -- it basically has nano-spikes that help absorb light from all angles and at all frequencies. Seems like it would be good to use for the dark pixel.
    • For most photography applications, it is a meaningful advance for which there is no downside.

      The marketing hype surrounding resolution just keeps spinning further away from reality.

      On prints from the average production photo printer (my Costco has them right on the floor), the lines-per-millimeter resolution is _way_ below what even a **really** good digital SLR with **great** optics can capture.

      Also keep in mind the color gamut of the average digital camera is quite narrow, and unsophisticat
      • by shmlco ( 594907 )
        "For most photography applications, it is a meaningful advance for which there is no downside."

        Well, you lose color resolution and I'd say that there's a good chance that in bright sunlight you're going to be blowing out quite a few of the clear pixels, losing luminance information there as well. Being "more" sensitive helps when there's less light, not when there's too much.

        Translation: There's always a downside.
        • you're going to be blowing out quite a few of the clear pixels

          In a production CMOS/CCD assembly this is not likely. In order to get a digital camera sensor to produce a pleasing image in many lighting conditions the CCD/CMOS assemblies already have controls for this.

          The best example of proof is to try using a scanner head as a digital camera. You will find that the CCD assembly in a scanner is not designed to handle variable light, so most things outside a narrow range of brightness (luminance maybe?) are
    I'd hate to lose 1/4 of my color resolution *all of the time* to get the added sensitivity that I only need for a small fraction of the shots I take.

      To be honest, I wouldn't mind. If you buy a 10 megapixel camera that isn't a good quality SLR, you won't be getting much better quality than a 6 megapixel camera since the bottleneck for quality becomes the lens.

      All it would really mean is that we absorb a delay in the relentless rise in pixel density for a dramatic improvement in colour depth.

      This technology will sell, there's no doubt about it.

      • by jafac ( 1449 )
        Any comments from experienced digital photographers on the Carl Zeiss lenses? Sony seems to have an exclusive deal; but when I buy a camera with the "Carl Zeiss" name stamped on the lens - what am I getting, exactly? I know that Carl Zeiss had a great reputation in the 1970's - but I'm not sure if that means anything today. . . (fwiw, I have 3 Sony cameras; including the dsc-h5, which seems to take pretty good pictures despite lack of RAW capability).
        • Zeiss lenses are world-class optics -- used by Hasselblad, Rollei, Yashica and now Sony -- and unless things have changed radically since I last checked, they have been the top lenses along with Nikon for years and years. I think Leica used Zeiss lenses also, and Leica cameras WERE top notch back when. Not sure how far Canon, Minolta, etc. have made up the distance since about 1996, by the way.
    • by art6217 ( 757847 )
      That's not so simple. You lose the resolution of green, but increase the resolution of red and blue. For example, if there is only blue light, then the CCD matrix has half the resolution both vertically and horizontally. With a white pixel, algorithms might guess that there is only blue, as the red and green sensors do not get any light, and then use the white sensor to increase the resolution of blue. It's a simple case, but smart heuristic algorithms might get a lot out of the white pixel in various ways as well.
    • by Solandri ( 704621 ) on Friday June 15, 2007 @12:04PM (#19520169)
      It's done on TV all the time and nobody complains (chrominance is separated from luminance and often transmitted at much lower resolution). As has been pointed out below, your eyes are made up of rods (which see black and white) and cones (which see color), and only a fraction of those cones are devoted to each individual red, green, or blue spectrum. So your color resolution is already significantly lower than your luminance resolution. You can even see photos demonstrating this with a 9x decrease in color resolution (3x in each linear direction). You're most sensitive to green, which is why the Bayer sensors commonly used in digital cameras divide each 4 pixels into GRGB.
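The TV trick described above (chroma subsampling) is easy to sketch. This toy example, with made-up pixel values, keeps full-resolution luma but shares one averaged chroma pair across a 2x2 block, a 4x reduction in color resolution:

```python
# A 2x2 block of (Y, Cb, Cr) pixels -- luma plus two chroma channels.
block = [(200, 10, 30), (180, 12, 28),
         (190, 11, 29), (185, 9, 31)]

# Keep luma per pixel; average chroma over the whole block (4:2:0-style).
luma = [p[0] for p in block]
cb = sum(p[1] for p in block) / len(block)
cr = sum(p[2] for p in block) / len(block)

# Reconstruction: every pixel reuses the single shared chroma pair,
# so fine luminance detail survives while fine color detail is lost.
reconstructed = [(y, cb, cr) for y in luma]
print(reconstructed[0])   # (200, 10.5, 29.5)
```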
    • Re: (Score:2, Interesting)

      by Spy Hunter ( 317220 )
      Well considering that the human eye does much the same thing (rods vs cones), I'd say yes.
    • Actually, the place you will lose more bits is not the use of the area (25%) but the faster shutter speed: if the camera can shoot two stops faster, then you lose 75% of the light on the RGB detectors.

      Now as for losing color resolution, I think you won't lose much. The only place you are going to notice it is in dim light, and it will be less than 1 bit of loss. Those would be shots you wouldn't have gotten anyhow because they would have been below the camera's ability.

      Prior art? LCD projectors do this same trick
    • by Ed Avis ( 5917 )
      Still, the human eye is more sensitive to changes in light intensity (luminance) than to changes in colour (chrominance), so it may be worthwhile trading off some colour resolution that you won't notice for some light sensitivity that you will. Remember that with existing colour digital cameras you need software to interpolate and guess colours for pixels because of the alternating RGB pattern on the sensor. The guessing job won't be that much more difficult if there are a few clear pixels in there as well.
    • The 8 Mpixel color image that comes out of your camera is already complicated guesswork; in terms of real color information, it's more like 2-3 Mpixel, since there really are only 2 million complete RGBG cells.

      Making one of the RGBG cells into a "white" cell doesn't really change much of anything in terms of resolution: color resolution is still half what grayscale resolution is. And it does actually help with color accuracy, since having four different receptors lets cameras deal a lot better with fluorescent light.
  • It is hard to evaluate this from the press release. People have tried all sorts of variations, including ditching the whole pattern thing for true color (Carver Mead) and the results are about the same as other cameras.
  • by swschrad ( 312009 ) on Friday June 15, 2007 @10:56AM (#19519167) Homepage Journal
    and color in the 70s.

    I refer you to Tri-X b/w, and to Fujichrome 400 around 1972 -- a really nicely balanced and warm film. If you pushed it to 1200, you could peel the grains off the base and go bowling with them, but the picture held up remarkably well on the small screen. It was THE go-to magic film for 16mm newsfilm when it came out.

    If that was a negative film, it would have been ASA 800 with little more grain than the "fast" 125 color film of the time.
    • The point she was trying to make was that when it became available everywhere (pharmacies and five-and-dimes) at a similar price point to ASA 200, there was mass adoption, and most people's snapshots gained quality. Sure, they picked up some grain and lost some saturation, but most people care about non-blurry and better exposure.

  • by Burb ( 620144 ) on Friday June 15, 2007 @10:59AM (#19519207)
    That's a neat trick. I wonder how they can do that?
    • The filter is transparent. The sensor behind it 'absorbs' light.
    • by vondo ( 303621 )
      The pixel is not transparent, the filter on top of it is. If a sensor has 4M pixels, the current design has 1M of them with little red filters on them, 1M with little blue filters, and 2M with green (our eyes are most sensitive to green). This new design, as I understand it, just replaces half of the green filters with "clear" filters. The sensor underneath is sensitive to whatever light makes it through.
    • by Goaway ( 82658 )
      By not being literal-minded nerds, and by being able to understand meaning from context, probably.
  • This is really not anything new to the image industry, just a new application. There is already the CMYK colorspace for printers, which is effectively CMY plus black to get deeper blacks. I don't see this as really revolutionary, as much as "Can't believe this hasn't been done yet." Though, at least they admitted this too :) My biggest hope for this is to reduce per-pixel noise by being able to reference the fourth plane, but I doubt they will get there for a while, they still have to work out the color
  • by G4from128k ( 686170 ) on Friday June 15, 2007 @11:04AM (#19519299)
    Kodak has rediscovered what evolution found millions of years ago -- design a dual system such as the rods and cones of the biological eye. The average human eye has about 120 million sensitive, panchromatic rods and only 6 or 7 million color-sensitive cones (many in the central fovea). The brain merges the limited amounts of color information with the larger volume of B/W image data to paint color into the image that we think we see.
    • by imadork ( 226897 )
      Kodak has rediscovered what evolution found millions of years ago....

      And I'll bet they've already filed a patent on it....

    • Re: (Score:2, Funny)

      by Anonymous Coward
      Kodak has rediscovered what God found six thousand years ago

      Fixed that for you. : )
    • by SpinyNorman ( 33776 ) on Friday June 15, 2007 @11:52AM (#19519961)
      The old/current Bayer pattern (also a Kodak "invention") also reflects the lower resolution of our vision for color vs. brightness (as does JPEG and YUV-based image compression - UV can be downsampled compared to Y with little loss in perceived resolution). In the Bayer pattern, each block of 2x2 pixels has two with green filters, described as luminance-sensitive in the original patent, and one each with red and blue filters, described as chrominance-sensitive.

      The new Kodak filter pattern still takes advantage of our better resolution for luminance, but implements it better by basing it on color filters (or the lack of them) that let more light through, thereby increasing signal-to-noise (especially needed in low-light conditions).

      I'm not sure that this new filter pattern is optimal though. As another poster noted, R/G/B filters are too narrow and cut out a lot of light. You could still capture the color information with two broader filters more directly corresponding to the U & V of the YUV color space.
    • by dfghjk ( 711126 )
      Except that rods and cones are entirely different mechanisms and the Kodak design uses identical underlying pixels. They are, in reality, not analogous at all.
  • There was a story here a few days ago about them adding a "clear" pixel element to allow more light through. Sounds like the same premise.
  • by leehwtsohg ( 618675 ) on Friday June 15, 2007 @11:07AM (#19519351)
    The gain here seems to come from the fact that they use a white sensor (i.e. unfiltered), which sees ~3 times more light.

    They divide each sensor of the regular Bayer pattern into 4, half white, half color. This way one can also report a 4-fold increase in the number of pixels without really increasing the resolution (which actually will be a boon for digital photography, since no one needs the current resolution anyway, because the optics don't keep up, but a megapixel race is on...)

    But does anyone know why sensors use RGB and not CMY? A cyan filter would let green and blue through but keep red out, instead of blocking two parts of the visible spectrum for each pixel. This way, by simply switching color space, the camera becomes twice as sensitive to light. I.e. instead of

    R G
    G B

    use

    M C
    C Y
    or something like that. One could even combine the two methods and use white pixels to gain a slight further increase in light sensitivity (from 8/12 to 10/12). Is there any reason that current cameras use RGB?
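The claimed factor of two falls out of a toy transmission model (assuming, unrealistically, perfect filters and light split evenly into three bands):

```python
# Each RGB filter passes one of the three bands; each CMY filter
# ("minus red", "minus green", "minus blue") passes two of them.
rgb_pass = 1 / 3
cmy_pass = 2 / 3

# Under this toy model a CMY mosaic gathers twice the light per cell.
print(cmy_pass / rgb_pass)   # 2.0
```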
    • by Anonymous Coward on Friday June 15, 2007 @11:26AM (#19519603)
      CMYK filters were actually tried.

      They don't actually provide any practical benefit over RGB in terms of noise, if your final output is meant to be RGB, due to the mathematics of the color space transformation. And your final output is generally RGB, for digital photography; even if you print, the intermediate formats are generally RGB, and cheap consumer printers take input in RGB, not CMYK.
      • Re: (Score:3, Interesting)

        by leehwtsohg ( 618675 )
        Thank you for the link! That is very interesting. So CMY was already tried in cameras. Once you have a digital pixel, it pretty much doesn't matter whether you represent it in RGB or CMY - just a transform of the same information.
        But I don't understand why you don't get less noise. The Wikipedia article mentions higher dynamic range. Isn't it true that twice as much light falls on each sensor, so you gain a stop, and because of that have less noise (because you need the shutter open for only half the time)? Or
        • Re: (Score:3, Interesting)

          by ChrisMaple ( 607946 )
          Usually random noise sums as "root sum of squares". So the signal level would double, and the noise would increase by about 1.4x. The net improvement would be 2/1.4 = 1.4. The more complicated electronics would reduce the S/N improvement a bit more, so the net improvement would probably be in the range of 1/3 to 1/2 stop (1.25x to 1.4x), I guess.
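The ideal-case arithmetic here can be checked directly (a sketch of the ideal numbers only; the electronics penalty is not modeled):

```python
import math

signal_gain = 2.0                     # twice the light -> twice the signal
noise_gain = math.sqrt(1**2 + 1**2)   # two equal noise sources, RSS: ~1.414
snr_gain = signal_gain / noise_gain   # ~1.414

# In photographic stops (each stop is a factor of two):
stops = math.log2(snr_gain)
print(round(snr_gain, 3), round(stops, 2))   # 1.414 0.5
```

So the best case under these assumptions is half a stop of S/N improvement, before any electronics losses.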
          • by dfghjk ( 711126 )
            Changing the filter, or removing it entirely, does not change the amount of signal one bit. The amount of signal and the amount of noise are characteristics of the underlying pixel. Altering the filter only affects sensitivity.

            Filters can contribute to noise in a final image by creating an imbalance in white balance or by limiting the amount of light to a level below optimal. Otherwise, they aren't involved. This claim that the CMY-to-RGB conversion increases noise is nonsense.
        • Re: (Score:2, Interesting)

          by ringm000 ( 878375 )
          In a camera, you cannot convert CMY to RGB by just inverting the components. Even in an ideal model like (C,M,Y)=(G+B,R+B,R+G) you have to convert like R=(M+Y-C)/2, increasing the noise level by 50%. Absorption spectra of the cones overlap a lot, so this model is obviously unreachable, requiring complex color correction which would probably give imperfect results. However, these are all color-related problems, and the dynamic range of luminance should still be improved.
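The inversion in that idealized model is simple algebra, verified here on made-up channel values (a sketch of the ideal model only, ignoring the spectral-overlap caveat):

```python
# Idealized model: (C, M, Y) = (G+B, R+B, R+G).  Solving for RGB gives
# R = (M+Y-C)/2, G = (C+Y-M)/2, B = (C+M-Y)/2 -- each output mixes three
# measurements, which is where the extra noise propagation comes from.
def cmy_to_rgb(c, m, y):
    return (m + y - c) / 2, (c + y - m) / 2, (c + m - y) / 2

# Round trip on an arbitrary color:
r, g, b = 3, 2, 1
c, m, y = g + b, r + b, r + g     # forward model
print(cmy_to_rgb(c, m, y))        # (3.0, 2.0, 1.0)
```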
        • Check out this review and sample photos from the PowerShot S10 which uses the CYGM filter - it does seem to have very low noise, and awesome image quality in general. I wonder why they stopped using it?

    • by Zarhan ( 415465 )
      The Canon G1 had a CMY pattern if I recall correctly. This also meant that it didn't suffer from the nice IR artifact (take a picture of hot charcoal and you actually get a reddish image; lots of other cameras see it as purple...)
    • Re: (Score:2, Informative)

      by slagheap ( 734182 )

      But does anyone know why sensors use RGB and not CMY? a Cyan filter would let green and blue through, but keep red out, instead of blocking two parts of the visible spectrum for each pixel. This way, by simply switching color space, the camera becomes twice as sensitive to light.

      Let me just turn that around for you...

      A green filter would let cyan and yellow through, but keep magenta out, instead of blocking two parts of the visible spectrum for each pixel.

      The color spaces are complementary. Each

    • This way, by simply switching color space, the camera becomes twice as sensitive to light. I.e. instead of ...
      The issue is that the spectral density of sunlight is not flat. (I can't seem to find a good image for you.) Basically, it peaks at about 500 nm (yellowish-green) and tapers off toward infrared and ultraviolet. The Bayer filter has twice as many green pixels as red or blue, which reflects the sunlight power spectral density more than having one cyan, one magenta, one yellow, and one intensity would. In other words, sunlight is more green than red and blue.

      It is no coincidence (I suppose it's arguable if you call evolution a "theory" (with quotes)) that our eye is most sensitive to green light. :) Notice that of the three cone cells in our eyes, two heavily favor (534 & 564 nm) the yellow-green end of the spectrum. IMHO, the ideal colors for a camera filter would match the three peaks in our cones, which lines up decently with the sunlight PSD.

      As a side note, the need for white balance on cameras is that the spectral densities of different light sources are not the same. Incandescents differ from fluorescents, which differ from sunlight, which is why incandescents have an orangeish tint and fluorescents have a blueish tint (that's where their frequencies have their peak power).

      (The theory behind why chlorophyll is green (which means it reflects green and, thus, does not absorb the frequencies with the most power) is quite interesting to boot.)
      • by dfghjk ( 711126 )
        "In other words, sunlight is more green than red and blue."

        That is wrong. Sunlight varies dramatically and no such generalization can be made but "daylight" is quite well balanced; it is not biased toward green.

        "The Bayer filter has twice as many green pixels as red or blue, which reflects the sunlight power spectral density more than having one cyan, one magenta, one yellow, and one intensity would."

        No, the Bayer filter has twice as many green pixels because (a) there are 4 pixels and only 3 colors so one
    • Not exactly what you're saying, but Canon did something like this in the late '90s.

      The result was, as you say, better light sensitivity, but at the expense of color accuracy. I guess in the end they decided the tradeoff wasn't worth it. I don't claim to understand any of the details, but I just read that page and then read your question :)
    • by shmlco ( 594907 )
      Since the final result wants to be RGB it's easier to start out that way. Second, you WANT light to be blocked by the peak filters in order to differentiate color.

      A good sensor wants resolution AND sensitivity AND accuracy. Since you can't have all three at the same time, you make tradeoffs. Your solution might increase sensitivity, but at the cost of accuracy and resolution.
    • by hazydave ( 96747 )
      Actually, some cameras do use CMY, or more likely, CMYG. My old Canon Pro90IS had such a sensor. Maybe they're trying to minimize the color error rather than maximize sensitivity? Hard to say.

      More interestingly, the very first HDV camcorder, the JVC HD-GR1, used both of these techniques back in 2003. Their sensor is White (clear), Yellow, Cyan, and Green in a Bayer-like pattern. They made similar claims: the effect is 50% luma, rather than
  • by ausoleil ( 322752 ) on Friday June 15, 2007 @11:08AM (#19519357) Homepage
    Sure, "faster" sensors will be a boon to the consumer market, and will surely have some applications in the pro market as well -- existing-light press photography comes to mind.

    For me, though, the problem is not so much speed as it is noise and dynamic range. That's because a lot of the time I still do fine-art level landscape and studio glamour photography -- neither of which are speed starved, but even the finest digitals could still use even less noise and wider dynamic ranges.

    While DSLRs have a huge advantage over handhelds in this regard, it would still be nice to see improvements in s/n such that the darker zones maintained their clarity and detail. Even the finest Canon cameras suffer to a degree in this regard, at least for people with very high standards. Some of us have those standards because that is what our clients demand - and in some cases we still must use film to meet their criteria.

    It's a virtual law that to obtain the best noise performance you need to use the lowest ISO speed that the camera can attain. So instead of bottoming out at 100, like most DSLRs, I'd like to see 25. Or better, 12.


  • by 140Mandak262Jamuna ( 970587 ) on Friday June 15, 2007 @11:13AM (#19519423) Journal
    The Bayer pattern has one red, one blue and two green sub-pixels per pixel. They could lose one green and replace it with transparent. Or they could come up with a different packing to accommodate a transparent sub-pixel.

    One of the problems with DLP projection TVs with a "color wheel" was that since every color lets only 1/3 of the light through, the picture was dim. So they added a fourth "clear" element that lets all the light through, giving every projected pixel the blast of light it needs; the remaining portions of the color wheel add only additional brightness for each color.

    This technology seems to be kind of similar. The transparent sub-pixel detects overall luminosity and the remaining pixels "adjust" for color. Very close to what we have in our retina, too. Almost all our rod cells respond only to luminosity, and the cones respond, to varying degrees, to three colors. A poster was complaining about losing "color resolution". I think millions of years of evolution have shown us the balance. You need about 90% of the pixels responding to luminosity and just 10% to color. The same ratio as in our retina.

    • It'd be interesting to see the algorithm for that sensor. The color values of the adjacent pixels are going to have to be taken into account. A bare photoelectric sensor will provide a higher voltage for higher frequencies of light. To a "white" sensor, a blue photon looks brighter than a red one.

      More complexity for RAW filters.
    • You need about 90% of the pixels responding to luminosity and just 10% to color.

      My camera doesn't care about avoiding being eaten by a lion. Nor does it care about sensing smaller prey running through the edges of its field of vision so it can turn its sensor to focus the sharper resolution on it.

      The human eye is awesome for what it's evolved to do. Photography, however, is a different task. The human eye is good at resolving things in front of it while catching movement to the sides and only turning if it's interested. A camera with a well-resolved center section but lousy edge resolution

  • This is so obvious - I've personally wondered why 1CCD sensors don't have a fourth pixel group to carry brightness information only. There must be good reasons why this has not been done before now; I hope we get to find out why.
  • Why not this pattern (Score:2, Interesting)

    by Bob-taro ( 996889 )

    The patterns they suggested in the article were not as elegant as the Bayer filter (where each color formed an evenly spaced grid). They may be hiding the actual pattern for now or there may be some technical reason for those patterns that I don't understand, but I would suggest this pattern (C = Clear):

    C G C G
    B C R C
    C G C G
    R C B C

    It keeps the same 4 clear : 2 green : 1 red : 1 blue ratio, but the different color pixels all form a regularly spaced grid.
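A quick count over the 4x4 tile above confirms the stated ratio (a sketch of the poster's proposed pattern, not Kodak's actual one):

```python
# The proposed 4x4 color filter tile, C = clear.
tile = ["CGCG",
        "BCRC",
        "CGCG",
        "RCBC"]
flat = "".join(tile)
counts = {ch: flat.count(ch) for ch in "CGRB"}
print(counts)   # {'C': 8, 'G': 4, 'R': 2, 'B': 2} -- i.e. 4:2:1:1
```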

  • ...the ISO 400 revolution was largely lost on me.
  • This gets me wondering:

    Does the clear array have a flat sensitivity level across the spectrum? Will it give the same data value for the same number of photons striking it at a 700 nm wavelength as for photons at 400 nm?

    If the sensor (for example here) was more sensitive to red, then this would skew the picture results significantly, especially if it picked up and added infrared light to the picture's data which isn't visible to the human eye.
    • If the sensor (for example here) was more sensitive to red, then this would skew the picture results significantly, especially if it picked up and added infrared light to the picture's data which isn't visible to the human eye.

      I imagine that's part of the reason it hasn't been done yet. Finding the "true luminosity" from a nearby Red, Green, Blue, and Clear pixel is probably nontrivial. I imagine that IR sensitivity isn't as troublesome as you'd suggest, though, since most cameras now come with IR filters.

    • I would imagine that the camera is built to take into account the sensitivity of the sensor across the spectrum when converting the RGB + luminance to RGB for output. It would be similar to the calibration necessary to get the colors right in the first place: you would have to figure out how the sensor reacts to the R, G, and B wavelengths and apply a gamma transformation (or whatever, I'm no photography or light expert) to what the sensors detect, to get a result that represents what the human eye would see.
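      That calibration step can be sketched numerically. A toy example (every patch number below is invented; a real pipeline would use many more patches from a proper color chart): solve for a 4x3 correction matrix that maps raw (R, G, B, W) sensor responses to reference RGB in the least-squares sense.

```python
import numpy as np

# Hypothetical calibration sketch: fit a matrix M so that raw @ M
# approximates the reference RGB of each test patch.
raw = np.array([   # rows: raw R, G, B, W readings per patch (made up)
    [0.9, 0.1, 0.1, 0.6],
    [0.1, 0.8, 0.1, 0.7],
    [0.1, 0.1, 0.9, 0.5],
    [0.5, 0.5, 0.5, 0.9],
])
ref = np.array([   # reference RGB for the same patches
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [0.5, 0.5, 0.5],
])
# Least-squares solution of raw @ M = ref
M, *_ = np.linalg.lstsq(raw, ref, rcond=None)
corrected = raw @ M
print(np.round(corrected, 3))  # close to the reference values
```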
    • If the sensor (for example here) was more sensitive to red, then this would skew the picture results significantly, especially if it picked up and added infrared light to the picture's data which isn't visible to the human eye.

      I would flip that around and say that that behaviour might actually be advantageous. If you're in a low (visible) light situation, maybe you could use an IR flash to get luminance values and merge that with the dim visible colour data to get a halfway-decent colour image with no visible flash.
      • by sahonen ( 680948 )
        Only low-end consumer gear doesn't put an IR filter in front of the sensor. Since the goal of a camera is to faithfully reproduce the color in the scene as visible to the human eye, not putting an IR filter in defeats that purpose.
  • by Thagg ( 9904 ) on Friday June 15, 2007 @11:57AM (#19520053) Journal
    While I like Kodak's idea quite a bit, here are a couple of other ideas.

    1) Sony was building cameras for a while with four color channels. There was the normal green, but also a different green they called "emerald" for one of the four Bayer pattern locations. Unfortunately, it was a solution in search of a problem; it never really caught on because there just wasn't any perceived benefit.

    2) I do visual effects for films. For the last 50 years or so, people have been using bluescreen and greenscreen effects. The idea is to put up a constant-color background and process the image so that any pixels of that color become transparent. Over the years, more and more lipstick has been applied to this pig -- you can now often extract shadows that fall on the greenscreen, or pull transparent smoke from the greenscreen plate -- and these things have become even more practical through digital processing.

    Still, it sucks. Greenscreen photography forces so many compromises that I often recommend shooting without it and laboriously hand-rotoscoping the shots.

    But -- say you had a fourth color filter, with a very narrow spectral band. Perhaps the yellow sodium color -- commercial lights that put out very narrow-band yellow are sometimes used for street lighting. If you had a very narrow-band sodium filter over 1/4 of the pixels, you could pull perfect mattes without 99% of the artifacts of traditional greenscreen and bluescreen photography. Finally (and this is killer!) you could make glasses that the director of photography and other lighting crew could wear that block just that frequency, so they could see the set as it really is -- without the sodium light pollution.
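    As a toy illustration of that matte-pulling idea (the numbers and the normalization scheme are made up; real keying is far more involved): where the narrowband channel reads near the bare-screen level, the pixel is background; where the subject blocks the screen, it's foreground.

```python
import numpy as np

# Toy matte extraction from a hypothetical narrowband "sodium" channel.
def sodium_matte(sodium, screen_level, noise_floor=0.02):
    """sodium: per-pixel narrowband reading; screen_level: reading of
    the bare screen. Returns alpha in [0, 1], 1 = foreground subject."""
    alpha = 1.0 - np.clip((sodium - noise_floor) /
                          (screen_level - noise_floor), 0.0, 1.0)
    return alpha

# Bare screen, half-blocked (e.g. smoke), fully blocked subject
sodium = np.array([0.95, 0.5, 0.02])
alpha = sodium_matte(sodium, screen_level=0.95)
print(alpha)  # roughly [0, 0.48, 1]
```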

    Still, Kudos to Kodak for thinking outside the box.

    Thad Beier
  • Better than Foveon? (Score:3, Informative)

    by mdielmann ( 514750 ) on Friday June 15, 2007 @12:24PM (#19520491) Homepage Journal
    I wonder how this is going to compare to the Foveon sensors. They capture RGB data at all pixels - filtered based on depth rather than location. Now if only those babies cost less.
  • Having read all the arguments about giving up 1/2 of the green sensors, and admittedly not as an electronics fiend but as someone who worked in printing for years before moving to IT, I think the "sacrificing color" arguments are somewhat overstated. Here's why:

    In printing technologies, at least in the early '90s, they were using a technique called either "GCR" (gray component replacement) or "UCR" (under color removal), which basically transfers almost all of the "light density" information from the cyan-magenta-yellow films of a color separation to the "K" film (black) -- because black ink is quite a bit cheaper than the alternatives. I have seen images printed with up to 90% of the density in the black that are virtually indistinguishable to the naked eye from images printed from a "normal" color separation, and sometimes, if a high enough line screen value is used (200+ LPI), it is hard to tell that a print is a GCR'd image even with a magnifying glass.
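    The gray-component idea is simple enough to sketch: the achromatic part of a CMY pixel is min(C, M, Y), and GCR moves some fraction of it onto the K plate (a toy flat-fraction formula; real separations use tone-dependent curves):

```python
# Toy gray component replacement: move a fraction of the achromatic
# component of a CMY value onto the black (K) plate.
def gcr(c, m, y, fraction=0.9):
    gray = min(c, m, y)   # the achromatic component
    k = gray * fraction   # how much of it goes to black ink
    return c - k, m - k, y - k, k

# A muddy brown: most of its density ends up in the K channel
print(gcr(0.8, 0.7, 0.6))  # mostly black, with a light CMY remainder
```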

    So it stands to reason, for me at least, that if I devote more attention to capturing the "amount" of light with "one CCD eye" completely open, and the "quality" (hue and tint) of the light with my "other three CCD eyes" that are filtering for spectra, I should be able to do the same thing digitally that they have been doing optically in printing for years, and still yield a superior result.

    I'd love to hear a discussion about the best way to use the digital bits in a 32-bit "GCR" digital world, by the way. For example: 10 bits (1024 levels) for luma, 8 bits (256 hues and tints) for green, and 7 bits (128 hues and tints each) for red and blue, or whatever the optimal split turns out to be.
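    That 10+8+7+7 split does fit exactly into 32 bits; a toy pack/unpack sketch (the field order here is arbitrary):

```python
# Pack a pixel as 10 bits luma, 8 bits green, 7 bits red, 7 bits blue.
def pack(luma, g, r, b):
    assert 0 <= luma < 1024 and 0 <= g < 256 and 0 <= r < 128 and 0 <= b < 128
    return (luma << 22) | (g << 14) | (r << 7) | b

def unpack(word):
    return ((word >> 22) & 0x3FF,  # 10-bit luma
            (word >> 14) & 0xFF,   # 8-bit green
            (word >> 7) & 0x7F,    # 7-bit red
            word & 0x7F)           # 7-bit blue

word = pack(1000, 200, 100, 50)
print(unpack(word))  # (1000, 200, 100, 50)
```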


  • A choice quote: "It's almost inconceivable that nobody else thought of, or acted on this idea, until now." That sure sounds like they think this is obvious. Does that mean they'll skip getting a patent?
