New Camera Sensor Filter Allows Twice As Much Light
bugnuts writes "Nearly all modern DSLRs use a Bayer filter to determine colors, which passes one red, two green, and one blue sample for each 2x2 block of pixels. Because of the filtering, the pixels don't receive all the light, and the pixel values must be multiplied by predetermined values (which also multiplies the noise) to normalize the differences. Panasonic has developed a novel method of 'filtering' which splits the light so that photons are not absorbed but redirected to the appropriate pixel. As a result, about twice the light reaches the sensor and almost no light is lost. Instead of RGGB, each block of 4 pixels receives Cyan, White + Red, White + Blue, and Yellow, and the RGB values can be interpolated."
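For the curious, here is roughly what that interpolation could look like if you treat the four samples as ideal additive mixes of the underlying R, G and B light. That additive model is my own assumption for illustration; the article only says the RGB values "can be interpolated", not how.

```python
import numpy as np

# Assumed (hypothetical) model of the four samples in each 2x2 block:
#   Cyan         = G + B
#   White + Red  = (R + G + B) + R
#   White + Blue = (R + G + B) + B
#   Yellow       = R + G
A = np.array([
    [0.0, 1.0, 1.0],   # Cyan
    [2.0, 1.0, 1.0],   # White + Red
    [1.0, 1.0, 2.0],   # White + Blue
    [1.0, 1.0, 0.0],   # Yellow
])

def recover_rgb(samples):
    """Least-squares estimate of (R, G, B) from the four mixed samples."""
    rgb, *_ = np.linalg.lstsq(A, np.asarray(samples, dtype=float), rcond=None)
    return rgb

# Mid-grey light (R = G = B = 0.5) produces samples (1.0, 2.0, 2.0, 1.0):
print(recover_rgb([1.0, 2.0, 2.0, 1.0]))   # -> approximately [0.5, 0.5, 0.5]
```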
Wow! Computational Electromagnetics rock! (Score:5, Interesting)
"We've developed a completely new analysis method, called Babinet-BPM. Compared with the usual FDTD method, the computation speed is 325 times higher, but it only consumes 1/16 of the memory. This is the result of a three-hour calculation by the FDTD method. We achieved the same result in just 36.9 seconds."
What I don't get is calling FDTD (finite difference time domain) analysis the "usual" method. It is the usual method in fluid mechanics, but in computational electromagnetics finite element methods have been in use for a long time, and they beat FDTD methods hollow. The basic problem with the FDTD method is that to get more accurate results you need a finer grid, but a finer grid also forces you to use finer time steps. Thus if you halve the grid spacing, the computational load goes up by a factor of 16. This is known as the tyranny of the CFL condition (see the sketch after this comment). The finite element method in the frequency domain does not have this limitation and scales as O(N^1.5) or so (FDTD scales as O(N^4)). It is still a beast to solve (a rank-deficient, ill-conditioned matrix that needs a full LU decomposition), but FEM still wins over FDTD because of the better scaling.
The technique mentioned here seems to be a variant of the boundary integral method, usually used for open domains and for solution domains many wavelengths long. I wonder if FEM can crack this problem.
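To put a rough number on the CFL scaling described above, here is a back-of-the-envelope cost model (my own illustration, not from the article): halving the grid spacing in an explicit 3-D FDTD run gives eight times as many cells, and the CFL condition forces the time step to shrink with the cell size, so the total work goes up by roughly a factor of 16.

```python
import math

C0 = 299_792_458.0  # speed of light, m/s

def fdtd_cost(dx, domain=1.0, sim_time=1e-9, courant=0.5):
    """Rough cost model for an explicit 3-D FDTD run on a cubic domain."""
    cells = (domain / dx) ** 3                 # spatial unknowns
    dt = courant * dx / (C0 * math.sqrt(3))    # CFL-limited time step
    steps = sim_time / dt                      # number of time steps
    return cells * steps                       # cell-updates, a proxy for total work

coarse = fdtd_cost(dx=1e-3)
fine = fdtd_cost(dx=0.5e-3)
print(f"halving dx multiplies the work by about {fine / coarse:.0f}x")  # ~16x
```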
Re:Wow! Computational Electromagnetics rock! (Score:5, Interesting)
I'm not sure the comparison of FDTD and FEM-FD in this post is right. FDTD suffers from the CFL limitation only in its explicit form; implicit methods allow time steps much greater than the CFL limit. The implicit version requires matrix inversions at each time step, whereas the explicit version does not. Comparing FEM-FD and FDTD methods is silly: one is time domain, one is frequency domain, and they are solving different problems. There is no problem doing FEM-TD (time domain), in which case the scaling is worse for FEM compared to explicit FDTD, since FDTD pushes a vector, not a matrix, and requires only nearest-neighbor communication, whereas FEM requires a sparse-matrix solve, which is the bane of computer scientists because the strong-scaling curve rolls over as N increases. FDTD does not have this problem, requires less memory, and is more friendly toward the GPU-based compute hardware that is starting to dominate today's supercomputers.
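For readers who haven't seen it, this is what "pushing a vector with nearest-neighbor communication" looks like in practice. A minimal 1-D, normalized-units FDTD loop (a toy sketch of my own, not production code):

```python
import numpy as np

N, STEPS = 200, 500
ez = np.zeros(N)        # electric field samples
hy = np.zeros(N - 1)    # magnetic field, staggered half a cell
S = 0.5                 # Courant number, kept below the 1-D CFL limit of 1

for t in range(STEPS):
    # Explicit leapfrog update: each point needs only its immediate
    # neighbors, so there is no matrix to assemble or invert; just
    # two vector updates per time step.
    hy += S * (ez[1:] - ez[:-1])
    ez[1:-1] += S * (hy[1:] - hy[:-1])
    ez[N // 2] += np.exp(-((t - 30) / 10) ** 2)   # soft Gaussian source

print("peak field after", STEPS, "steps:", ez.max())
```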
The real question is... (Score:5, Funny)
Interesting comments from both, but I believe you both missed the point. The real question is, which one of these methods, FDTD or FEM-FD, will allow optimal reprocessing in the frequency domain that makes my dinner look prettier with an Instagram vintage filter?
Re: (Score:3)
A GPU is actually pretty good at sparse matrix computations, unlike CPUs.
Re: (Score:2)
Re: (Score:3)
Sparse reads and writes are not vectorizable and are not cache-friendly. A GPU has fast memory without a cache and is not limited by traditional vectorization.
Re: (Score:3)
So essentially... (Score:3)
...we've switched from calculating RGGB values based on attenuated RGGB values sensed, to calculating RGB values from sensing cyan (usually reflected light with the red subtracted), white+blue, white+red, and yellow (again, reflected white light minus the blue part of the spectrum).
I can see the resulting files having better print characteristics, if the detectors' sensitivities are close to the characteristics of the inks used for prints, but I don't think that's going to help at the display the photographer will be using to manipulate the images.
And of course neither variety of photo image capture is comparable to the qualities of light that our rods and cones respond to in our eyes.
Re: (Score:3)
How this advantage is used is up to the engineers.
It could be used to make sensors that are smaller and just as good as current sensors, or to get better quality out of the same size of sensor. Because this improvement is in the signal/noise domain, it will also allow for better high-speed image capture.
Re: (Score:3)
it could be used to make sensors that are smaller and just as good as current sensors
I'm not sure if it could. Pixel sizes for really tiny cameraphone sensors (1.1 microns, or 1100 nm) are getting close to the wavelength of visible red photons (750 nm). If you shrink them any more, Quantum Stuff starts to happen which you may not really want to happen.
Re:So essentially... (Score:4, Interesting)
If you shrink them anymore, Quantum Stuff starts to happen which you may not really want to happen.
Re: (Score:2)
Or it may not be an advantage at all.
It is possible that the extra photons not being passed through traditional filters will actually degrade performance. In the past there have been complementary Bayer filter arrays for the same purpose, improved light sensitivity. These cameras delivered inferior color performance.
It is important to have good light sensitivity AND good dynamic range. Dynamic range is not just what your sensor can provide but what you can consistently use. Sometimes filtering light imp
Re: (Score:2)
Re: (Score:2)
The only difference here is that rather than using lenses to focus the light onto individual photosites, they're splitting the light to hit those same photosites. So, at least in theory, you're getting more of the photons as the ones that were being blocked by the filters aren't being wasted.
Re: (Score:2)
"And of course neither variety of photo image capture is comparable to the qualities of light that our rods and cones respond to in our eyes."
You're right. The colour filters used in cameras generally need extra filtering to block out portions of the IR and UV that our eyes are not sensitive to.
Re: (Score:3)
I can see the resulting files having better print characteristics, if the detectors sense to the levels close to the characteristics of ink used for prints, but I don't think that's going to help at the display the photographer will be using to manipulate the images.
You can losslessly, mathematically translate between this and RGB (certainly not sRGB) and CMYK. But that's just math. Printing is difficult due to the physical variables of the subtractive color model. The more money you throw at it -- that is to say, the better and more inks and quality of paper you use -- the better it gets. No new physical or mathematical colorspace will improve color reproduction.
Re: (Score:3)
No new physical or mathematical colorspace will improve color reproduction.
'cept we aren't dealing with 'reproduction' - we are dealing with 'capture'. While the RGB color space can indeed encode "yellow", it cannot encode how it got to be yellow (is it a single light wave with a wavelength of 570 nm, is it a combination of 510 nm and 650 nm waves, or is it something else?)
(hint: Your monitor reproduces yellow by combining 510 nm and 650 nm waves, but most things in nature that appear yellow do so because the waves are 570 nm)
Re:So essentially... (Score:5, Informative)
Your eyes actually aren't sensitive to red, green, and blue. Here are the spectral sensitivities [starizona.com] of the red, green, and blue cones in your eye. The red cones are actually most sensitive to orange, green most sensitive to yellow-green, and blue most sensitive to green-blue. There's also a wide range of colors that each type of cone is sensitive to, not a single frequency. When your brain decodes this into color, it uses the combined signal it's getting from all three types of cones to figure out which color you're seeing. e.g. Green isn't just the stimulation of your green-yellow cones. It's that plus the low stimulation of your orange cones and blue-green cones in the correct ratio.
RGB being the holy trinity of color is a display phenomenon, not a sensing one. In order to be able to stimulate the entire range of colors you can perceive, it's easiest if you pick three colors which stimulate the orange cones most and the other two least (red), the green-blue cones most and the others least (blue), and the green-yellow cones most but the other two least (green). (I won't get into purple/violet - that's a long story which you can probably guess if you look at the left end of the orange cones' response curve.) You could actually pick 3 different colors as your primaries, e.g. orange, yellow, and blue. They'd just be more limited in the range of colors you can reproduce because of their inability to stimulate the three types of cones semi-independently. Even if you pick non-optimal colors, it's possible to replicate the full range if you add a 4th or 5th display primary. It's just more complex and usually not economical (Panasonic I think made a TV with an extra yellow primary to help bolster that portion of the spectrum).
But like your eyes, for the purposes of recording colors, you don't have to actually record red, green, and blue. You can replicate the same frequency response spectrum using photoreceptors sensitive to any 3 different colors. All that matters is that their range of sensitivity covers the full visible spectrum, and their combined response curves allow you to uniquely distinguish any single frequency of light within that range. It may involve a lot of math, but hey computational power is cheap nowadays.
It's also worth noting that real-world objects don't give off a single frequency of light. They give off a wide spectrum, which your eyes combine into the 3 signal strengths from the 3 types of cones. This is part of the reason why some objects can appear to shift relative colors as you put them under different lighting. A blue quilt with orange patches can appear to be a blue quilt with red patches under lighting with a stronger red component. The "orange" patches are actually reflecting both orange and red light. So the actual color you see is the frequency spectrum of the light source, times the frequency emission response (color reflection spectrum) of the object, convolved with the frequency response of the cones in your eyes. And when you display a picture of that object, your monitor is simply doing its best using three narrow-band frequencies to stimulate your cones in the same ratio as they were with the wide-band color of the object. So a photo can never truly replicate the appearance of an object; it can only replicate its appearance under a specific lighting condition.
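A small numerical illustration of that last point. The spectra and cone curves below are made-up Gaussians, purely for illustration; real cone fundamentals are tabulated (e.g. the CIE data), but the structure of the calculation is the same: three overlap integrals are all the eye ever reports.

```python
import numpy as np

wavelengths = np.arange(400.0, 701.0, 5.0)   # nm
dlam = 5.0                                   # integration step, nm

def gaussian(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# Stand-ins for the L/M/S cone sensitivities (peaks near orange,
# yellow-green and blue-green, as described above). Illustrative only.
cones = {"L": gaussian(565, 50), "M": gaussian(540, 45), "S": gaussian(445, 30)}

illuminant = 1.0 + 0.5 * (wavelengths - 400) / 300    # slightly reddish light
reflectance = gaussian(600, 40)                       # an "orange" patch

# What reaches the eye is illuminant x reflectance; each cone signal is
# the integral of that product against the cone's sensitivity curve.
stimulus = illuminant * reflectance
responses = {name: float(np.sum(stimulus * sens) * dlam)
             for name, sens in cones.items()}
print(responses)   # three numbers: the only color information the brain gets
```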
Re: (Score:2)
What's happening when I shine my violet laser at a tennis-ball-green dog toy and it seems to get brighter and reflect white, or on a marble coffee table and it gets blue-white? Really liked your breakdown.
Re: (Score:2, Informative)
Then the material is a phosphor.
The photons from the light source are able to put electrons in the material into a higher orbit (skipping at least one orbital level); then, when the electron drops back, it doesn't go all the way down to the original orbit. Since the distance the electron goes up is not the same as the distance it comes down, the photon produced is of a different frequency (color) than the photon from the light source.
The second drop of the electron to the original orbit will also cause another photon.
Re:So essentially... (Score:5, Informative)
Yup - this is fluorescence.
It is worth noting that a related term is phosphorescence, which is what most people think of when they think of phosphors. For the benefit of those reading, the two are basically the same phenomenon on different timescales.
When light hits an object that is fluorescent it absorbs the light and re-emits it. The re-emitted light has a different spectrum than the absorbed light. The re-emitted light is also emitted AFTER the light is absorbed. In most cases it is emitted almost instantaneously and this is called fluorescence. However, some materials take much longer to emit the absorbed energy as light and this is called phosphorescence.
So, that T-shirt that lights up under a blacklight is exhibiting fluorescence. The watch hands that continue to glow 30 seconds after going from daylight to darkness is exhibiting phosphorescence. They're the exact same thing, but with different dynamics. They both involve electrons absorbing energy and releasing it, but with phosphorescence they get stuck in metastable states (read wikipedia for a decent explanation, but a full one requires a bit more quantum physics than I've mastered).
Re: (Score:2)
but with phosphorescence they get stuck in metastable states
So be sure to put at least two d-flops on the output of the phosphorescent material in the clock domain of the viewer.
Re: (Score:2)
So the actual color you see is the frequency spectrum of the light source, times the frequency emission response (color reflection spectrum) of the object, convolved with the frequency response of the cones in your eyes.
What's more, many surfaces reflect different colors at different amounts depending on the exact angle you view them at. Butterfly wings are an extreme example of this, or soap bubbles, but the phenomenon is common. (If you ever want to write a physically-accurate ray tracer, you get to deal with a lot of this complexity.) This can make a surface made of a single substance look very different across it. Now, these effects are functions of the wavelength of the incoming light (and the reflection angle, with t
Re: (Score:2)
Mathematically, a spectrum is a vector in an infinite dimensional vector space, and any three color sensors will pick out a three dimensional subspace. In general, you cannot reproduce the response in one three dimensional subspace (human response) from another three dimensional subspace (camera res
Good luck with that one, Panasonic (Score:2)
In other words, technological superiority doesn't always win in digital photography.
I agree with point, but the Foveon works... (Score:5, Informative)
In other words, technological superiority doesn't always win in digital photography.
This is very true, although the Foveon was superior in resolution and lack of color moire only - in terms of higher ISO support it has not been as good as the top performers of the day.
But the Foveon chip does persist in cameras: Sigma (who bought Foveon) is still selling a DSLR with the Foveon sensor, and now a range of really high quality compact cameras with a DSLR-sized Foveon chip in them (the Sigma DP-1M, DP-2M and DP-3M, each with a fixed prime lens of a different focal length).
I think, though, that we are entering a period where resolution has plateaued - that is, most people do not need more resolution than cameras are delivering - so there is more room for alternative sensors to capture some of the market by delivering other benefits that people enjoy. Now that Sigma has carried Foveon forward into a newer age of sensors, they are having better luck selling a high-resolution, very sharp small compact that has as much detail as a Nikon D800 and no color moire...
Another interesting alternative sensor is Fuji with the X-Trans sensor - randomized RGB filters to eliminate color moire. The Panasonic approach seems like it might have some real gains in higher ISO support though.
Foveon is awful at colors (Score:2)
This is very true, although the Foveon was superior in resolution and lack of color moire only
Foveon is only superior in resolution if the number of output pixels is the same. But if you count photosites, i.e. 3 per pixel in a Foveon, then Bayer wins. A Foveon has about the same resolution as a Bayer with twice the pixel count, but the Foveon uses three times as many photosites per output pixel.
But the problem is colors.
Foveon has a theoretical minimum color error of 6%. Color filter sensors (eg. Bayer) have a theoretical minimum error of 0%. Color filter sensors can use organic filters that are close to the fi
False information (Score:2)
Foveon is only superior in resolution if the number of output pixels is the same.
That is a pretty bad way to measure things, because it ignores things like color moire and other artifacts you get with Bayer sensors. As I stated, resolution is not everything. And a Foveon chip delivers a constant level of detail, whereas a Bayer chip inherently delivers levels of detail that vary with scene color.
In a scene with only red (say the hood of a red car) you are shooting with just 1/3 of the camera sensors cap
Re: (Score:2)
Quite right.
Some of the more impressive shots I've seen were on an A series (A85) 4MP camera which can be had for thirty bucks, and some majestic HDRs from a 6mp Konica Minolta. If you have a decent camera and time and tenacity you can make pretty pictures. And conversely I am sure it wouldn't take long to find someone who should just go sell their 5D.
Re: (Score:2)
"This is very true, although the Foveon was superior in resolution and lack of color moire only - it terms of higher ISO support it has not been as good as the top performers of the day."
The Foveon has always been inferior in resolution overall, photosite-for-photosite, superior only in a small subset of color combinations, and it has been, in fact, a dismal technology in terms of high ISO. It is not simply "not been as good as the top performers"; it is notably worse than Bayer sensors categorically. Fove
Re: (Score:2)
The Foveon has always been inferior in resolution overall photosite-for-photosite, superior only is a small subset of color combinations
The "small subset" is any photographic subject with blue or red. Like fall leaves, anything with detail against a sky, red or blue fabrics with fine detail, etc.
That sure is a "small subset".
it has been, in fact, a dismal technology in terms of high ISO
In the past, possibly. The current cameras handle up to ISO 1600 well in color, and up to ISO 6400 in B&W.
An ISO 6400
Re: (Score:2)
Foveon was never superior. If they had been able to make it work properly it would have taken over, but it's always had issues with noise and resolution that conventional CMOS and CCD sensors don't have. It's a shame, because I wanted it to win, but realistically it's been like a decade and they still haven't managed to get it right; they probably won't at this rate.
Re: (Score:2)
Point being?
It's been a decade and there's no sign of progress on the issue. And the people pushing the technology thought we'd be there by now. Those things you list are far, far more difficult and there isn't already a technology that does any of those things.
Re: (Score:2)
Whether or not the Foveon is technologically superior is pretty debatable. It was a neat idea that had some pretty serious shortcomings and, even forgiving those, the difficulty of producing the things left them in the dust as conventional sensors improved.
Re: (Score:2)
In other words, technological superiority doesn't always win in digital photography.
In Panasonic's case it's not about achieving superiority but dealing with inferiority: their consumer-grade camera sensors have always had terrible problems with chroma noise in low-light conditions, so this may just be a way of improving their low-light performance.
Re: (Score:2)
Depends on what technological superiority means. In photography, light sensitivity is absolutely key to selling a sensor. Most people are interested in figures for noise and the range of ISO settings (provided the camera has more than about 12 Mpixel; otherwise they are interested in more resolution too). Foveon failed in all these regards. Its superior colour rendition and absolute lack of moire did not help at a time when people were scratching their heads at the low resolution and poor sensitivity.
Re: (Score:2)
"Their superior colour rendition ..."
Foveon NEVER had superior color rendition. All it offers is lack of color moire at the expense of many other flaws that are, in the balance, vastly more important. Color moire is not the most problematic issue in digital photography.
Re: (Score:2)
Two, actually. Sigma SD9, Sigma SD14.
Re: (Score:2)
Don't believe all the marketing hype. The Foveon sensor failed because it was not technically superior; it gave you lower resolution, less sensitivity, and worse color reproduction than comparable sensors based on Bayer patterns. The one problem it addressed, namely occasional bad color reproduction around edges with Bayer sensors, simply didn't matter enough to make up for its disadvantages.
I just wish they would... (Score:5, Interesting)
Simply use three sensors and a prism. The color-separation camera has been around for a long time, and the color prints from it are just breathtaking. Just use three really great sensors and then we can have digital color that rivals film.
Check out the work of Harry Warnecke and you will see what I mean.
Re: (Score:2)
Re:I just wish they would... (Score:5, Informative)
Re: (Score:2)
Structural coloration is possible: http://en.wikipedia.org/wiki/Structural_coloration [wikipedia.org]
So using similar concepts couldn't they use some nanotech structures to split/redirect the colours?
Re: (Score:2)
That's what they are doing.
Re: (Score:3, Insightful)
colors look awful (Score:2)
A 3-CCD camera has awful color rendition.
The extra space between lens and sensor also makes for worse lenses (wide-angle at least, telephotos don't care).
Re: (Score:2, Interesting)
Pro video 3CCD cameras do this. Interestingly, those cameras can make use of a trick so that the lens becomes cheaper.
Normally a lens needs to focus all three colours onto the same plane; this is difficult because of the dispersion (prism effect) of the glass, so normal lenses need to combine two different glass types with different refractive indices to compensate.
Since the colour for a 3CCD video camera is already split, you can simply place each sensor at the focal plane of its own colour and use a non-compensating lens.
Re: (Score:2)
"Just use three really great sensors then we can have digital color that rivals film."
Digital surpassed film long, long ago.
"Three sensors and a prism" is not a new idea nor has it escaped camera manufacturers. What do you think "3CCD" means on video cameras? Given that, don't you think the lack of that technology in stills might be for a reason?
Re: (Score:2)
Might I direct your attention to: http://www.nytimes.com/slideshow/2012/03/14/arts/design/13PORTRAIT.html [nytimes.com]
Re: (Score:2)
3CCD will disappear from the video segment as well. I would say RED is "non-consumer digital video" and I don't see 3CCD there.
Re: (Score:2)
Here is a link ( NO paywall, NO signup). These portraits are simply fantastic.
http://www.nytimes.com/slideshow/2012/03/14/arts/design/13PORTRAIT.html [nytimes.com]
why RGB? (Score:2)
Re: (Score:2)
The answer is color accuracy, which this chip severely sacrifices for better luminance info.
Mixing R+G, B+G, etc. together means that figuring out the correct R,G,B color corresponding to an observed signal requires taking sums and differences between pixels, which sums a bunch more noise into the color channels.
Example: Consider sensor (A) with R,G,B-sensing pixels, and (B) with Y=R+G, C=G+B, M=B+R sensing pixels.
Suppose light consisting of R',G',B' hits each sensor: sensor (A) directly tells you R',G',B'
Sen
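The comment is cut off, but the arithmetic it points at is easy to sketch. This is my own illustration of the parent's example, assuming ideal complementary filters and independent noise on every sample:

```python
import numpy as np

rng = np.random.default_rng(0)
true_rgb = np.array([0.6, 0.3, 0.1])
sigma = 0.01                      # per-sample noise for both sensors

# Sensor (A): measures R', G', B' directly, one noisy sample per channel.
direct = true_rgb + rng.normal(0.0, sigma, 3)

# Sensor (B): measures Y = R+G, C = G+B, M = B+R, each with its own noise.
R, G, B = true_rgb
y, c, m = np.array([R + G, G + B, B + R]) + rng.normal(0.0, sigma, 3)

# Recovering each primary needs a sum/difference of three noisy samples:
#   R = (Y + M - C) / 2, and similarly for G and B.
recovered = np.array([(y + m - c) / 2, (y + c - m) / 2, (c + m - y) / 2])

print("direct   :", direct)
print("recovered:", recovered)
# Each recovered channel mixes noise from three measurements instead of one;
# whether that nets out better or worse depends on the noise model and on
# how much extra light the wider filters actually collect.
```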
Re: (Score:2)
Well, RGB vs CMY is really more of an additive/subtractive problem. CMY doesn't work on its own in additive space where sensors operate.
Re:I call bullpucky (Score:5, Informative)
Foveon has 3 photodiodes per pixel, and theoretically should have the most accurate colors and sharpness by avoiding moire and interpolation issues with bayer filters. In practice, though, a lot of light is lost by the time it reaches the 3rd photodiode.
There is indeed white light, because not every pixel has a filter over it. Many pixels receive the light through an open aperture, while a neighboring pixel funnels red light (for example) to them. Thus you get white + 1/2 the neighbor's red. You also get half the red from the neighbor on the other side, resulting in white + red for the three pixels in a line.
Cyan is part of the color spectrum as a "subtractive color". What remains under each neighbor pixel when you strip away the red is the cyan.
From what I can tell, this will not get rid of the need for the anti-aliasing.
Re:I call bullpucky (Score:4, Insightful)
"From what I can tell, this will not get rid of the need for the anti-aliasing."
You ALWAYS need antialiasing when you discretize.
Comment removed (Score:5, Funny)
Re: (Score:3)
"From what I can tell, this will not get rid of the need for the anti-aliasing."
You ALWAYS need antialiasing when you discretize.
I think the word you are looking for is "quantize"
Re: (Score:3)
Re: (Score:2)
Discretising is just quantising in the spatial domain!
First I've heard of it. Twenty years of farting around with sampling systems and the associated DSP, and I've never heard it called anything other than quantizing. Is this some alternate universe I've slipped into?
Re: (Score:2)
Eh, even if he made up this usage case for discretizing, it's a reasonable interpretation of the word, especially given the context - take something that is continuous (say, the range of possible values a thing being measured) and transform it to something that is a series of discrete values (the actual measurement of that thing).
Communication happened in that post, and the use of the word in that context does not preclude its usage in other contexts with more precise meaning, so other communication was not
Re: (Score:3)
Nope, it's not. Quantization is the process of taking a continuous valued measurement and rounding, truncating or otherwise cramming it onto a discrete scale. For example, taking the value 5.382... and recording it as 5.
I COULD have said "sampling." Sampling is measuring a signal at several points. The measured values are on the same scale the original was - if you're sampling sound with a microphone, for example, the samples are on a continuous scale. We almost always then quantize the samples, puttin
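A tiny illustration of the distinction (my own toy example): sampling makes the signal discrete in time while the values stay continuous; quantizing then crams those values onto a discrete scale.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 16)                # sampling: discrete in time...
samples = np.sin(2 * np.pi * 3 * t)          # ...but still continuous-valued
quantized = np.round(samples * 127) / 127    # quantizing: snap to a discrete scale
print(np.max(np.abs(samples - quantized)))   # quantization error, at most 1/254
```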
Re: (Score:2)
Not when you can handle all frequencies that you will encounter. There are cameras on the market without anti-aliasing filters. When you stop down enough your aperture limits resolution to potentially less than the aliasing limit anyway.
Re: (Score:2)
If your signal is already low-pass filtered, you don't need to low-pass filter it. Sure, I'll give you that. You've still got antialiasing; you're just not doing it with a piece of glass.
Re: (Score:3)
That's wrong too. For example, if your image consists of widely spaced point light sources, it isn't low-pass filtered, but you still don't need or want an anti-aliasing filter to reconstruct the position of the point light sources. Not only don't you need an anti-aliasing filter, the image will look better without it. That's the case in astrophotography.
Whether you need anti-aliasing filters depends on what kinds of pictures you take, what you know about the scene, and what you are trying to get out.
Re: (Score:2)
Nope. Stars are essentially point sources, and so have a very high spatial frequency. If you had a hypothetical telescope that had a flat and infinite modulation transfer function your unfiltered star field would look like crap with all the aliasing. It's possible you could still measure distances between stars, depending on how extended those stars really are, and the distribution of the starfield, but it would look like crap and you'd get a better measurement with an appropriate low pass filter.
In real
Re: (Score:2)
"From what I can tell, this will not get rid of the need for the anti-aliasing."
Which was not the goal, nor is it a goal of a Foveon sensor. Aliasing exists whenever there is frequency content greater than a sensor can handle.
"Foveon has 3 photodiodes per pixel, and theoretically should have the most accurate colors and sharpness by avoiding moire and interpolation issues with bayer filters."
Foveon does not promise more accurate colors. Sharpness is a function of a number of things, not just photosite layo
Re: (Score:2)
Foveon is a loser in the market because it doesn't perform.
Er.. And it costs more.
Re: (Score:2)
Re: (Score:2)
Magenta?
Re:I call bullpucky (Score:4, Informative)
Magenta is a combination of colours; just like white, it isn't "in the colour spectrum".
Indigo/violet, however, is in the spectrum, but as it's outside the range of values which can be created with red, green and blue, we approximate it using magenta, which is a mixture of blue and red.
Re: (Score:2)
"there is no cyan in the color spectrum"
You might want to open your eyes and look in the 490–520nm range on a representation of the visual range of the EM spectrum.
Re:I call bullpucky (Score:4, Funny)
Is that one of those colors only women can see? Like mauve?
Re: (Score:2)
Actually you'd see those too and more if you took out the part of your eyeball that filters out UV and a few other wavelengths.
Re: (Score:2)
You might want to open your eyes and look in the 490–520nm range on a representation of the visual range of the EM spectrum.
To nitpick, that's actually not cyan. Cyan is a combination of green and blue light. The wavelength you're describing stimulates the green and blue receptors in our eyes in a way that looks (to us) identical to cyan, but it's not the same thing. Sort of like how violet (in the sense of being around 400nm) light stimulates the red and blue receptors in our eyes, similar to (but distinct from) certain shades of purple.
This becomes important when discussing things like optical filters. A cyan filter passes gre
Re: (Score:2)
There is really also no "R" in the color spectrum; anything a digital camera captures is going to involve measuring the response of some wide band color filter. Terms like "R", "cyan", and "white" describe roughly what kind of filter we are talking about, enough so that people get an idea of how this and other cameras work.
As for Foveon, it measures "RGB" directly at each pixel, but that's a bad tradeoff: it gives you lower resolution than interpolation, loses a lot of light, and actually doesn't give you m
Re: (Score:2)
The problem with that is space: you'd have to either substitute the greens for the extra colors or add an additional photosite to the mix. I suppose you could stack it, but that has its own issues with regard to resolution.
Camera gamut is perfectly fine, at least until we get better methods of display, and these are photos for people, not birds.
Re: (Score:2)
Too bad you're displaying them on a screen or printing them with a process that only uses three colours....
Additionally, it's not really a four-different-colour sensor. It's just got a different division of the usual red green and blue, and the result is processed into regular RGB pixels.
Re: (Score:2)
Re:yeay four sensors (Score:4, Informative)
So when you print to your eight colour inkjet, what file format is your image stored in that has eight colour channels? What software are you using that supports it?
Note that in CMYK, which is by far the most popular "four colour" system (and is the one all those "four colour" printers use), black is one of the colours. That makes up for a shortcoming of the colour inks (one not shared by camera sensors or displays): you can't make a decent black by mixing the colours. I suspect the eight colour printer is doing something very similar - mixing colours to give you a better (they say, anyway) representation of the three colour additive system that your computer, camera and monitor use.
Besides, the vast, vast majority of people don't colour calibrate their monitors OR printers. Unless you do that regularly all the extra colour channels in the world aren't going to help you.
Screen Printing (Score:3)
When working with designs meant for screen printing, the original artwork was done in RGB, then a team would separate the color channels (in Photoshop), one channel per ink to be used. They could technically do CMYK directly, but it didn't look good for a wide variety of purposes -- you can imagine a flat-filled cartoon character would be pretty much impossible. It would look a bit like comic book halftoning, probably. The shop would use that when they wanted to print Thomas Kincaide-esque sweatshirts for g
Re: (Score:3)
Re:yeay four sensors (Score:5, Informative)
I'm not complaining about anything. I'm replying to your erroneous assertion (you DID read the whole thread before replying, right?) that the existence of printers with eight inks somehow means they'll be able to reproduce data from a hypothetical four colour channel camera sensor.
I do like your fake quotes though. Please indicate where I said "there's no printer with 4 colours." What I DID say was "Too bad you're displaying them on a screen or printing them with a process that only uses three colours." If you bothered to understand what you're talking about, or even read my comments, you'd realize that the process is indeed three colour. Even if you imagine a four colour camera sensor, the file you store the data in is three colour channel, the software you use to edit it is three colour channel, the screen you show it on is three channel and the data you send to the printer driver is three channel. IF you could somehow send the four channel data to the printer you might be able to reproduce some extra colours (which the vast majority of humanity probably wouldn't be able to see anyway), but probably not very well since all those extra inks are formulated specifically to help reproduce RGB.
Re: (Score:2)
Re: (Score:2)
So how many colour channels do you suppose Adobe RGB has? CMYK anything LOOKS like it has four colour channels, but one of those is black so not really. If you like you can find the equation that demonstrates that CMYK and RGB are mathematically interchangeable. Still only three colour channels. CIE TIFF can contain LAB data but LAB is also based on... three primary colours. Of course, you can put whatever you want into a TIFF file but you won't find any software or hardware that knows what you're talk
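For reference, the naive, device-independent version of that interchangeability (my own sketch; it deliberately ignores real ink behaviour and ICC profiles, which is exactly where the three-versus-four channel question stops being pure math):

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB -> CMYK with all components in [0, 1]; ignores real ink behaviour."""
    k = 1.0 - max(r, g, b)
    if k == 1.0:                              # pure black: avoid division by zero
        return 0.0, 0.0, 0.0, 1.0
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k

def cmyk_to_rgb(c, m, y, k):
    """Inverse of the naive conversion above."""
    return (1 - c) * (1 - k), (1 - m) * (1 - k), (1 - y) * (1 - k)

print(rgb_to_cmyk(0.2, 0.5, 0.8))                 # ~ (0.75, 0.375, 0.0, 0.2)
print(cmyk_to_rgb(*rgb_to_cmyk(0.2, 0.5, 0.8)))   # round-trips to (0.2, 0.5, 0.8)
```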
Re: (Score:2)
what file format is your image stored in that has eight colour channels
Hoo boy, it's colour theory time! Do you want to store your colours as an intensity and a base colour (located in a 3d colour space e.g. L*,a*,b*), or a linear combination of emissive wavelengths, or an explicit spectrum, etc? Storing the exact colour something emits takes a lot of data. Three primaries correctly chosen are sufficient to record every colour the human eye can perceive, however, which is good enough for every application that involves image reproduction for people to look at (hello computatio
Re: (Score:2)
You know, you really should at least glance at the thread before you reply. The OP was enthusiastic about how a four colour channel camera (which this isn't) would improve visual reproduction because it would let you reproduce some intermediate wavelengths that would help better match differences in the frequency sensitivity of different peoples' eyes.
In the first place I doubt very much that there are big differences in the frequency sensitivity of peoples' eyes, except for tetrachromats and colour blind
Re:yeay four sensors (Score:4, Informative)
So when you print to your eight colour inkjet, what file format is your image stored in that has eight colour channels?
You don't seem to understand the purpose of the colours or how colour is managed in a workflow. A file stored on your computer will have a certain gamut; if not specified, this gamut is sRGB. Your printer also has a certain gamut, which is a function of the ink, the colours it can print, and the paper printed on. Colour management will take care of ensuring what you see on your screen is reproduced on the printer, provided the printer is physically capable of printing the colours in the gamut.
This is a quite common problem for instance with a CMYK printer which is unable to print any of the primary colours shown as red green and blue on the monitor. The result is a printer that prints a subset of the available colours a screen can display, but at the same time can print outside the gamut of your monitor too.
You don't need a file that has 8 primary colours to take advantage of the really wide gamuts 8-colour printers can print; you just need maths on your side. The ProPhotoRGB colour space works around this by defining the green and blue primaries as imaginary colours which don't exist in reality. As such, using red, green and blue primaries you can encode, for instance, a colour that *almost* represents a pure cyan.
This is something that many photographers who print images already do. I think even the latest Photoshop comes setup out of the box to import raw camera files using ProPhotoRGB as the working colour space.
Besides, the vast, vast majority of people don't colour calibrate their monitors OR printers. Unless you do that regularly all the extra colour channels in the world aren't going to help you.
You don't know photographers very well do you? The vast majority of amateur and all professional photographers I've ever met calibrate their screens. Printer calibration is often not needed as the vast majority of photographers I know outsource their printing to someone else, and that someone else will typically provide them with the colour profile of their printer's last calibration to ensure accurate results can be obtained. Pretty much every printing company will do this for you, even cheap mass production ones like Snapfish.
Re:yeay four sensors (Score:4, Informative)
You don't seem to know what we're talking about. Let me quote the OP:
"I've been hoping for 4-sensor cameras for ages. People only have three color sensors, but what those colors are vary a bit from person to person, and capturing 4 colors stands a better chance of getting images that look good for everyone."
Yes, more inks in your printer help it reproduce the RGB values that you capture with your camera, save in your files, display on your screen, and send to the printer. Just like in the example I gave, the K channel in CMYK helps make up for deficiencies in the mixing properties of the C, M and Y that don't let you make a proper black by mixing. Extra ink won't do squat to match extra colour information from a theoretical extra colour sensor in the camera though, because everything in between is RGB.
Yes, actually, I know lots of photographers. I calibrate my screen, and I use a printer I chose specifically because they do a good job of frequent calibration. Most professional photographers do. But if you haven't noticed, with the availability of digital cameras a LOT of people took up photography. Hardware screen calibrators are still a niche item, nowhere near as popular as cameras. In particular, Panasonic doesn't make any still cameras that are likely to be used extensively by professionals, so it's likely that even fewer people who shoot Panasonic would calibrate their equipment.
Re: (Score:2)
The Canon inkjet on my desk has a CcMmYK 6-color cartridge set loaded in it right now; the ink was sold as a 'photo color cartridge set'. For image format, TIFF with 32-bit CMYK or CIE, or . . . well, you get the point. The extra color range means less banding even if I don't calibrate both the monitor and printer, but since I do photography as a hobby, why wouldn't I calibrate my monitor?
The printer, on the other hand, is just for proofs. I care less about how it's calibrated, and send the photos off for
Re: (Score:2)
The K in CMYK is not a "color."
Re: (Score:2)
Re: (Score:2)
But it must be treated as a colour, for computational purposes - it's represented by values at one extreme, as is white at the other extreme.
In other words, your statement is irrelevant to this discussion.
Re:Just say no to Gizmodo (Score:4, Interesting)
Ironically, the last paragraph at Gizmodo somewhat answers your question:
What's particularly neat about this new approach is that it can be used with any kind of sensor without modification; CMOS, CCD, or BSI. And the filters can be produced using the same materials and manufacturing processes in place today. Which means we'll probably be seeing this technology implemented on cameras sooner rather than later.
Re: (Score:2)
The problem is that the sensor system on a camera is not collecting an image destined for the human brain at a given moment. It's dumping data to best represent the original color spectrum that the human eye is able to sense, across the entire field of view of the sensor. As a result of that you are presented with an image via a screen or print, that allows you to look at any portion of the image and gather the approximate image that the sensor received.
A better question would be why don't we build displays
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
That's what the Bayer sensor does. But it can only go so far, because your brain can trick you into not noticing that there is something missing, but the same information missing in an actual photograph will be completely obvious.
Also, the human eye actually isn't all that sensitive in color. In low light, your eyes switch to more sensitive black-and-white receptors. So, "doing what the human eye does" would mean taking only B/W photographs in low light.
Re: (Score:2)
An f-stop here, an f-stop there, soon you're talking a big difference.
Backlit sensors, new CMOS technologies, this kind of filter, image stabilization, better image processing, etc.: it's the combination of these 1-2 f-stop advances that in aggregate has really pushed photography much further.
Re: (Score:2)
And to quote the PDF:
On a more technical note...
oversampling eliminates Bayer pattern problems. For example, conventional 8 Mpix sensors include only 4 Mpix green, 2 Mpix red and 2 Mpix blue pixels, which are interpolated to an 8 Mpix R, G, B image. With pixel oversampling, all pixels become true R, G, and B pixels. What's more, based on the Nyquist theorem, you actually need oversampling for good performance. For example, audio needs to be sampled at 44 kHz to get good 22 kHz quality.
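A quick way to see the Nyquist point the quote is making (my own example, nothing to do with the PDF): sample a 6 kHz tone above and below twice its frequency, and the under-sampled version shows up at a completely different apparent frequency.

```python
import numpy as np

f_signal = 6000.0          # 6 kHz tone
duration = 0.01            # seconds of signal

def sampled(fs):
    t = np.arange(0.0, duration, 1.0 / fs)
    return np.sin(2 * np.pi * f_signal * t)

cases = {"44.1 kHz": 44100.0, "8 kHz": 8000.0}
for name, fs in cases.items():
    sig = sampled(fs)
    freqs = np.fft.rfftfreq(len(sig), 1.0 / fs)
    peak = freqs[np.argmax(np.abs(np.fft.rfft(sig)))]
    print(f"{name} sampling -> apparent tone near {peak:.0f} Hz")
# Above 2 x 6 kHz the tone shows up at 6000 Hz; at 8 kHz it aliases down to 2000 Hz.
```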
Now hands up who thinks that Canon will dump the consumer camera market?