Is the 4th Yellow Pixel of Sharp Quattron Hype?
Nom du Keyboard writes "Sharp Aquos brand televisions are making a big deal about their Quattron technology of adding a 4th yellow pixel to their RGB sets. While you can read a glowing review of it here, the engineer in me is skeptical because of how all the source material for this set is produced in 3-color RGB. I also know how just making a picture brighter and saturating the colors a bit can make it more appealing to many viewers over a more accurate rendition – so much for side-by-side comparisons. And I laugh at how you are supposed to see the advantages of 4-color technology in ads on your 3-color sets at home as you watch their commercials. It sounds more like hype to extract a higher profit margin than the next great advance in home television. So is it real?"
Yellow... yawn (Score:2, Insightful)
Re:Yellow... yawn (Score:5, Funny)
Re:Yellow... yawn (Score:4, Funny)
Or squant. The time is long overdue for squant support in televisions.
Re: (Score:2)
I agree, it's been years and the world still hasn't adopted this remarkable color.
Re:Yellow... yawn (Score:4, Funny)
Re:Yellow... yawn (Score:4, Funny)
I didn't recognize George Takei at first. Sure he's famous, but he's not William Shatner famous.
Re: (Score:3, Funny)
Oh My!
Re:Yellow... yawn (Score:4, Informative)
Re:Yellow... yawn (Score:5, Informative)
XYZ space is not perceptually uniform. In particular, the green/cyan region of XYZ occupies a much larger area than would be justified by the eye's ability to distinguish colors in that range. Yellow, on the other hand, is very under-represented in XYZ.
If you look at the gamuts in a perceptually uniform space such as LUV, you'll find that LCD panels are actually fairly limited in the yellows.
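For anyone who wants to check that claim themselves, here is a minimal sketch (my own, assuming a D65 white point) of the CIE 1976 L*u*v* conversion; plotting a display's primaries in (u', v') or L*u*v* rather than the 1931 (x, y) diagram makes the over-weighting of green/cyan far less misleading.

    # Minimal sketch: convert CIE XYZ to CIE 1976 L*u*v*, assuming a D65 white point
    # scaled so that white has Y = 100.  (Naively assumes X + 15Y + 3Z is non-zero.)
    def xyz_to_luv(X, Y, Z, Xn=95.047, Yn=100.0, Zn=108.883):
        def u_prime(X, Y, Z):
            return 4 * X / (X + 15 * Y + 3 * Z)

        def v_prime(X, Y, Z):
            return 9 * Y / (X + 15 * Y + 3 * Z)

        yr = Y / Yn
        L = 116 * yr ** (1 / 3) - 16 if yr > (6 / 29) ** 3 else (29 / 3) ** 3 * yr
        u = 13 * L * (u_prime(X, Y, Z) - u_prime(Xn, Yn, Zn))
        v = 13 * L * (v_prime(X, Y, Z) - v_prime(Xn, Yn, Zn))
        return L, u, v

    # Example: the sRGB red primary at full drive (XYZ roughly 41.24, 21.26, 1.93).
    print(xyz_to_luv(41.24, 21.26, 1.93))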
Re: (Score:3, Interesting)
And isn't all the content available in YUV, meaning it has red, green, blue and yellow colour information? All the source material might be made using RGB (is that even true though?) but the transmission is done using luminance, R/G and B/Y values, isn't it?
Re:Yellow... yawn (Score:5, Interesting)
True, but the regular LCD color gamut is smaller than the sRGB/Rec 709 gamut that is encoded in the HDTV video standard.
Basically, LCD panels use relatively wide-spectrum color filters so that they don't lose too much light to absorption. The result is a relatively small gamut - smaller than plasma or CRT.
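To make that concrete, here is a rough sketch (my own, using the standard Rec. 709 luma coefficients and full-range values) of how broadcast YCbCr decodes back to RGB. There is no yellow channel anywhere - just luma plus two colour-difference signals - and the decoded RGB still has to be squeezed into whatever gamut the panel's filters actually allow.

    # Rough sketch: decode Rec. 709 YCbCr (full range, Cb/Cr centered on 0) to R'G'B'.
    KR, KB = 0.2126, 0.0722          # Rec. 709 luma coefficients
    KG = 1.0 - KR - KB

    def ycbcr_to_rgb(y, cb, cr):
        """y in [0, 1], cb/cr in [-0.5, 0.5]; result is in [0, 1] only if in gamut."""
        r = y + 2 * (1 - KR) * cr
        b = y + 2 * (1 - KB) * cb
        g = (y - KR * r - KB * b) / KG
        return r, g, b

    print(ycbcr_to_rgb(0.5, 0.0, 0.0))   # neutral grey -> (0.5, 0.5, 0.5)
    print(ycbcr_to_rgb(0.6, -0.2, 0.2))  # a warm tone, still only three numbers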
Re:Yellow... yawn (Score:4, Interesting)
Admittedly one of the problems here is that adding a fourth channel would require a 4-dimensional color space to fully utilize that extra channel. To really utilize this sort of new feature would require a whole new image recording system.
One of the problems facing would-be extensions of the color gamut, like adding another primary, is in part an unlearning of what it means to make a color. In reality, a given color that you see from an object is made up of an entire spectrum, from near infrared to ultraviolet (UV-A), with some amount of power at every frequency along that range.
Somewhere along the way some crude but generally effective simplifications of this idea have resulted in things like the YUV and RGB systems, but it should be noted those are 3-dimensional color spaces. Note that the word "dimension" is not a reference to lengths here, but rather to how the color is represented: each dimension is merely one more piece of information used to describe that color.
So whenever you use a 3-dimensional color space, you are reproducing that whole spectrum by selecting only three frequency bands out of it with which to "broadcast" that information... and you are discarding quite a bit of additional information along the way. This is why color reproduction is quite difficult, and never really gets it "right" in most cases. Try as you might, once that spectral information has been discarded it cannot be recreated. Heck, that is basic information theory too.
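A toy sketch of that collapse (made-up response curves, not real CIE data): the eye or camera weights the whole spectrum with three response functions and keeps only the three resulting sums, so two physically different spectra can produce identical triples (metamerism) - and that is exactly the information that can never be recovered afterwards.

    # Toy sketch of how a full spectrum collapses to three numbers (tristimulus values).
    # The response curves below are made-up stand-ins for real colour matching
    # functions, just to show the idea of metamerism: different spectra, same triple.
    def tristimulus(spectrum, responses):
        """spectrum: {wavelength_nm: power}; responses: three {wavelength_nm: sensitivity} dicts."""
        return tuple(
            sum(power * resp.get(wl, 0.0) for wl, power in spectrum.items())
            for resp in responses
        )

    responses = (
        {600: 1.0, 580: 0.5},   # hypothetical "red-ish" response
        {540: 1.0, 560: 0.5},   # hypothetical "green-ish" response
        {450: 1.0},             # hypothetical "blue-ish" response
    )

    spectrum_a = {580: 1.0, 560: 1.0}   # one yellowish mixture of wavelengths
    spectrum_b = {600: 0.5, 540: 0.5}   # a physically different light
    print(tristimulus(spectrum_a, responses))   # (0.5, 0.5, 0.0)
    print(tristimulus(spectrum_b, responses))   # (0.5, 0.5, 0.0) -- a metamer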
Some may counter that a human eye perceives only three colors anyway. Well, that isn't quite true: there are people with sensitivity to more than three colors (tetrachromacy) and of course people who effectively perceive only two or even one color ("color blindness"). Even then, not all people perceive the same colors in the same way, so what looks "good" to one person may look "awful" to somebody else. What it all boils down to is that to do a truly proper representation of a color, you would have to completely and accurately reproduce that entire spectrum of frequencies.
Think of it more this way, perhaps. Imagine if you were listening to some music, but the recording medium only reproduced the songs with three frequencies for playback. It would be some rather boring music. BTW, it is possible to "sample" light in the same manner that sound is sampled to give a more accurate reproduction of a color, but that would be an insane amount of data as the sampling frequency would have to be on the same order as the frequency of the light.... actually a higher rate of sampling to be precise.
One of the really nice things about light from an incandescent light bulb is that it is spread out over nearly the entire frequency spectrum. Traditional film projectors take advantage of that fact: when color film is shown in front of that light bulb, the frequency spread of the various color layers on the film tends to smooth out and approximate that continuous spectrum. It still isn't perfect, and color film still has only three channels (usually), but at least an attempt to re-create the whole frequency spectrum is there. Also note that film from different manufacturers has somewhat different frequency response, which is why some film manufacturers are preferred over others and certainly impacts the film-making process in some subtle but interesting ways.
For LCD screens, and worse yet for LED systems, the frequency curve isn't nearly so good. If you looked at it with a diffraction grating or prism that separates the colors out, you would see some sharp lines rather than a continuous spectrum. That is where the real problem lies with LCD panels and why color accuracy is not very good. Certainly adding a yellow line to that curve would generally help to smooth out the spectrum, or at least add some more visual information, even if that color information isn't being directly recorded by the storage medium.
Clearly missing a trick. (Score:5, Funny)
To get truly astonishing pictures, they should add a black pixel, to improve contrast.
Re:Clearly missing a trick. (Score:5, Interesting)
joking aside... some of the newer TVs with LED backlighting actually do something like this... Lighting up the picture with thousands(ish?) of independent LEDs (as opposed to a couple of souped-up fluorescent tubes) means they can selectively dim, or turn off entirely, sections of the backlighting. So when large parts of the scene are dark, large parts of the backlighting are dimmed as well, thus increasing the contrast. It also saves a bit of power, making it easier for them to meet Energy Star standards, etc...
Local dimming has a problem (Score:5, Interesting)
Re: (Score:3, Interesting)
A lot of TV sets that use local dimming have a big problem showing starfields. The average color in a starfield is pretty dark, so the LED goes dim and not bright enough to show the stars. It really takes the punch out of Star Wars Special^n Edition if you can't see the stars.
My brother-in-law bought an LED set not too long ago. I believe it's an edge-lit model, so that may contribute to the problem, but... the bright parts of dark scenes tend to be about half as bright as they should be.
This includes loading screens on some video games as well as movie credits. There's one short scene in the recent Star Trek movie showing the Narada flying by that dims so much that you can't make out any details in the ship.
Like I said, I believe his particular model is edge-lit, so I can't rea
Re: (Score:3, Interesting)
My brother-in-law bought an LED set not too long ago. I believe it's an edge-lit model, so that may contribute to the problem, but... the bright parts of dark scenes tend to be about half as bright as they should be.
I tend to lose all visual fidelity in dark parts of a high contrast image.
Re: (Score:3, Insightful)
It really takes the punch out of Star Wars Special^n Edition if you can't see the stars.
Didn't George Lucas do that already?
Re: (Score:3, Funny)
Oh my god... it's not full of stars...
Re: (Score:3, Funny)
That's good, because Dragon's Lair [gamesonsmash.com] consumes huge wads of power.
Re: (Score:3, Informative)
This certainly accomplishes its goal, but the downsides are also pretty high. Variable backlighting means that color calibration goes completely and utterly out of whack - a different backlight level than what it was calibrated at changes the properties of the panel. So you can have more accurate darks, but you lose accurate colors in return.
Re: (Score:3, Insightful)
Except that the difference can be accurately modeled in software and corrected at the LCD pixel - the performance and effectiveness of the algorithms used for this process are a key difference in the resultant picture quality in the models currently available.
The BrightSide demo models apparently had excellent correction, and I imagine this is what a lot of the company's IP investment was based on.
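A crude sketch of that correction (my own simplification, not any vendor's actual algorithm): the panel's output is roughly backlight level times LCD transmission, so when a zone's backlight is dimmed the pixel drive has to be boosted to compensate - and anything that would need more than 100% transmission gets clipped, which is exactly where the starfield complaints elsewhere in this thread come from.

    # Crude sketch: compensate LCD pixel drive for a locally dimmed backlight zone.
    # target_luminance and backlight_level are both normalised to [0, 1].
    def compensate(target_luminance, backlight_level):
        """Return the LCD transmission needed so backlight * transmission == target."""
        if backlight_level <= 0.0:
            return 0.0                      # zone fully off; the pixel is simply lost
        t = target_luminance / backlight_level
        return min(t, 1.0)                  # can't open the pixel more than 100%

    # Dark scene: backlight dimmed to 20%, but one pixel (a star) wants 60% luminance.
    print(compensate(0.05, 0.2))   # dim background: fine, 0.25 transmission
    print(compensate(0.60, 0.2))   # bright star: clipped, shown at 0.2 instead of 0.6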
RGB (Score:5, Informative)
Re:RGB (Score:5, Insightful)
Only if the camera recording the picture recorded that same color.
As has been stated, the TV is literally the last place a new color needs to be added. (First the camera that films, then the storage medium (DVD?), then broadcast (HDMI?), THEN the TV.)
Re:RGB (Score:5, Informative)
That 1931 color gamut is misleading because it overemphasizes greens. In fact, the original NTSC green primary was much closer to the peak, but as a result, yellows were too muted, so they changed it. But you're right - a turquoise primary would increase the RGB gamut significantly.
The ideal would be that all color information in video would be in device-independent xy color space instead of RGB. See LogLUV encoding for example: http://www.anyhere.com/gward/papers/jgtpap1.pdf
Re:RGB (Score:4, Insightful)
It would increase the gamut, but it wouldn't improve the rendition of skin tones (uh... the skin tones of most European/Native American/Asian/Middle Eastern/Mediterranean people. eep.) When people complain about the colors on their TV, it's generally because the skin tones don't look right.
Re:RGB (Score:5, Informative)
Re: (Score:2)
Only one problem. No Y encoded in the data stream, so it has to be interpolated.
Re:RGB (Score:5, Informative)
In some cases, it could actually be useful. While most cameras shoot with RGB sensors, most video compression is in some variation of YUV (1) color space. If you shoot on something like a Red One (2) camera, you get a RAW format with more than 8 bits (3) of color information. If you have a sensible post pipeline, you can go to YUV for your distribution format and have plenty of color data to completely fill out the 8 bit YUV data. YUV and RGB don't have identical color reproduction and gamut, so you can wind up with the odd situation where you shot on an RGB sensor, and you decimated to 8 bit data for distribution, but a normal 8 bit RGB display can't quite show every color that you have.
I wouldn't expect brick-shittingly amazing results on such a system. I'd need to see it in person and see a measured gamut chart to have any particular opinion on this particular display, but I can't dismiss the concept out of hand.
(1) : Y in YUV isn't Yellow, it's Luma. Still, the imperfect conversion between YUV and RGB means that a fourth primary could make it possible to more accurately show YUV data on an RGBY display.
(2) : "Red" is a brand name. "Red" in the name of the camera doesn't specifically imply any relationship to RGB color space or anything like that. The camera does use a standard RGB Bayer pattern sensor, though.
(3) : 8 bit color in this context is always "per component" rather than "per pixel" and doesn't imply old-school 256-total-colors paletted mode. In an X11 config file, for example, this would be referred to as 24 bit color. Video guys are more interested in per-component color depth because they always do operations on components. When you are writing misc. GUI software, you are generally more concerned with bits per pixel, because you would never care about how much space it takes to upload a fraction of a pixel to a video card - you have to upload a full pixel to display it.
(4) : This footnote doesn't correspond to anything in the text. After all that, I'm now just in the habit of writing footnotes.
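A quick sketch of footnote (1)'s point (my own numbers, standard Rec. 709 coefficients and studio-range quantisation): a lot of YCbCr triples that are perfectly legal in the compressed stream decode to RGB values outside [0, 1], i.e. colours an RGB-only panel cannot show exactly.

    # Sketch: count how many legal 8-bit studio-range Rec. 709 YCbCr triples decode
    # to RGB values outside [0, 1] -- colours an RGB panel can't reproduce exactly.
    KR, KB = 0.2126, 0.0722

    def in_rgb_cube(y8, cb8, cr8):
        y = (y8 - 16) / 219.0
        cb = (cb8 - 128) / 224.0
        cr = (cr8 - 128) / 224.0
        r = y + 2 * (1 - KR) * cr
        b = y + 2 * (1 - KB) * cb
        g = (y - KR * r - KB * b) / (1 - KR - KB)
        return all(0.0 <= c <= 1.0 for c in (r, g, b))

    total = outside = 0
    for y8 in range(16, 236, 8):            # coarse sampling to keep it quick
        for cb8 in range(16, 241, 8):
            for cr8 in range(16, 241, 8):
                total += 1
                outside += not in_rgb_cube(y8, cb8, cr8)
    print(f"{outside} of {total} sampled YCbCr triples fall outside the RGB cube")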
Re:RGB (Score:5, Informative)
I have the impression that you are implying that the bits per channel are related to the color gamut. That more bits per pixel or channel produce a wider color gamut. That is not the case, and the 2 are unrelated. More bits per pixel only give you more shades within a given gamut. In practice, more bits per channel are desirable in video production to allow finer control over color correction, without producing artifacts like banding.
Re:RGB (Score:4, Funny)
brick-shittingly amazing results
Damn, I'm going to use this again on Slashdot.
Re: (Score:2, Informative)
Actually the eye is more sensitive to yellows than reds if you look at that wikipedia page you cited.
As it is now there is a slight dip in the yellow part of the color spectrum on displays because they use a pretty narrow band of red.
Cameras, on the other hand, use a filter for the red channel that basically takes all light between yellow and infrared.
So the input is both yellow and red combined while the output is just red; by adding yellow, the display can correct some of that loss.
Though i would like to see c
Re:RGB (Score:4, Informative)
Nit: sRGB isn't synonymous with RGB, nor even with RGB as used in displays.
Plenty of RGB colorspaces don't have the green-deficiency problem, and it's nothing innately required of an RGB LED system if it's willing to do a non-sRGB display.
Re:RGB (Score:5, Interesting)
Parent is correct. Any colours around green and cyan are usually terribly unsaturated on most monitors. In fact, even in 'real life', it isn't theoretically possible to experience true cyan/aqua because the nearest direct wavelength will stimulate the red eye cone to some extent creating colour pollution.
There is a trick around this, which can be found by over-saturating the red cone. This weakens it temporarily, and then when shortly afterwards you see anything resembling cyan, it will appear as close to the true qualia as you could ever expect. The "Eclipse of Mars" illusion that follows in the below link demonstrates this for those who are curious:
http://www.skytopia.com/project/illusion/2illusion.html [skytopia.com]
Re:RGB (Score:5, Insightful)
That's misleading. A lack of a fully saturated green on a monitor is a limitation with the phosphors or dyes it uses. But monochromatic light of around 515 nm is pure, fully saturated green. Fully saturated green stimulates both your M and L cones ("G" and "R" cones); that's the way your eye works.
You can achieve non-physical responses from your photoreceptors via oversaturation, drugs, or electrical stimulation. That's interesting, but it isn't "green" and it isn't a "true qualia". Thinking of that as "green" is simply because you think of the M cone as a "green" cone and the L cone as a "red" cone, but those are just arbitrary names.
Re:RGB (Score:5, Interesting)
That's my point though. How can an approximately 515 nm wavelength be a fully saturated green if the L cone is also being activated to some degree? That would be the extra pollution I'm talking about. I believe a much purer green would result if you somehow disabled the L cone. Unless you think we might see a more cyan/blue-like hue here?
To get a definitive answer, I would be interested to see what one would experience if you disabled two of the three S/M/L cones. I'm suspecting you would see pure red (disable S+M), green (disable S+L) and blue (M+L). Any research into that?
That's interesting, but it isn't "green"
What is it then?
Re:RGB (Score:5, Interesting)
That's my point though. How can an approximately 515 nm wavelength be a fully saturated green if the L cone is also being activated to some degree?
Because all light of a single wavelength is automatically "pure"; it doesn't matter what your cone responses are. The cone responses are just a code to transmit that information to your brain. Your cone responses are such that they overlap (for good reason), but that doesn't keep you from seeing pure colors.
And actually, you perceive color contrast anyway, not absolute RGB values or wavelengths. So, even if you get a group of cones to produce a pure "green" response somehow, that will simply be processed as being part of a strong red/green contrast and result just in a vivid green percept.
Re:RGB (Score:5, Interesting)
http://en.wikipedia.org/wiki/Imaginary_color [wikipedia.org]
Neat snippet from the article:
"At Walt Disney World, Kodak engineered Epcot's pavement to be a certain hue of pink so that the grass would look greener through the reverse of this effect."
Sneaky!
Of course it's hype, just SHARPer :-) (Score:3, Interesting)
It's like the "120 hz lcd display" stuff. The dvd they use to show you the difference in-store is bogus. If you want REALLY sharp, you'd buy a 600hz plasma. The whole screen changes from one image to the next in 1/600 of a second, with no interpolation (and interpolation algorithms are just "best guesses", so they're no better than an upscaler would be).
Re: (Score:2)
you'd buy a 600 Hz plasma. The whole screen changes from one image to the next in 1/600 of a second
Technically, the source input is still running at 25 frames a second, not 600, so while it can change the whole image in 1/600 of a second... it doesn't. The 600 Hz thing is more marketing hype, and it does perform interpolation to try to get you a smoother image. I find that the image processing doesn't work so well and results in jaggy movement instead.
Best look to the plasma's black levels and contrast ratios inste
Re: (Score:3, Informative)
A 120 Hz display provides a better result for 24 fps input (from film sources) than will a 60 Hz display. With 120 Hz, each frame is displayed for 1000/24 ms instead of varying between 1000/30 ms and 1000/20 ms on a 60 Hz display.
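To put numbers on that (my own sketch, simple pulldown only): on a 60 Hz panel a 24 fps frame has to alternate between 3 and 2 refreshes (3:2 pulldown), while on a 120 Hz panel every frame gets exactly 5 refreshes.

    # Sketch: how long each 24 fps film frame stays on screen at 60 Hz vs 120 Hz.
    def frame_durations_ms(film_fps, refresh_hz, n_frames=6):
        """Give each film frame as many whole refreshes as fit (simple pulldown)."""
        refresh_ms = 1000.0 / refresh_hz
        durations, shown = [], 0
        for i in range(1, n_frames + 1):
            target = int(i * refresh_hz / film_fps)   # refreshes elapsed by end of frame i
            durations.append((target - shown) * refresh_ms)
            shown = target
        return durations

    print(frame_durations_ms(24, 60))    # alternates 33.3 ms / 50.0 ms -> 3:2 judder
    print(frame_durations_ms(24, 120))   # every frame gets 41.7 ms -> even cadence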
Re: (Score:3, Insightful)
Not to forget that it works for both 50 Hz and 60 Hz:
60 Hz * 10 = 600 Hz
50 Hz * 12 = 600 Hz
Re: (Score:3, Interesting)
LCD screens are lighter - my 50" plasma might weigh a lot more, but it's on a swivel stand, so moving it takes one finger.
LCD screens are thinner - not much thinner, and most people spend their time looking at the front, not the side.
LCD screens are more efficient - depends. Hook it up to a playstation to watch a blu-ray, and you've more than lost any "efficiency" claim. The playstation burns 188 watts to decode blu-ray disks, while a stand-a
Re:Of course it's hype, just SHARPer :-) (Score:4, Informative)
Your talk of efficiency doesn't make sense at all. An LCD uses less electricity than a plasma. It doesn't matter what is hooked up to the display.
Re: (Score:3, Informative)
Except you're completely missing the point. It's not about sharpness or speed. It's about being an even multiple of 24hz so you can display film material (e.g. about everything you'd really want on a 1080p set) without any tricks that ruin the smoothness of motion.
Re: (Score:3, Interesting)
Re: (Score:3, Informative)
Nope. The "fluorescent" light in the back is a cold cathode fluorescent lamp driven by an inverter running anywhere from 20kHz to 50kHz. Beat effects with the backlight are not an issue (except on badly designed monitors that PWM too slowly to control brightness).
Re: (Score:3)
You're definitely confused about this (see my earlier reply to you). The backlight on an LCD panel runs at a rate in the kHz range and has nothing to do with the refresh rate. For all intents and purposes it's a constant source of light.
Careful What You Laugh At (Score:3, Insightful)
And I laugh at how you are supposed to see the advantages of 4-color technology in ads on your 3-color sets at home as you watch their commercials.
Well, I'm not sure if you're correct to laugh at this or not. But all televisions are approximations of something analogue that was captured, and in that capturing process some information was lost. To illustrate, entertain a scenario where I have N standard definition television sets displaying footage from standard definition video cameras. I daisy-chain them together (each camera pointed at the previous screen) to record something. As I move from the 0th screen to the Nth screen, I will begin to see degradation as more information is lost and randomness comes into play. The same can be done with HD, but since HD captures more information, it can safely be assumed that the sampling and resampling will retain more of the original image.
If you played the Nth HD screen next to the Nth SD screen and piped that through an SD television, you'd still be able to see some difference (for reasonable non-astronomical numbers of N) even though you went through yet another SD television in the end.
I don't know what the fourth color is supposed to buy; I'm unfamiliar with this technology. But the side-by-side comparison through an SD or HD TV might still be able to demonstrate that the fourth color adds some meaningful information to the image, which - when resampled to be viewed on your device - suffers less information loss than the three-color implementation, thus successfully demonstrating some superiority. It isn't showing you precisely what the final product is supposed to look like, but it does give you a relative sense of signal loss and noise.
I also know how just making a picture brighter and saturating the colors a bit can make it more appealing to many viewers over a more accurate rendition
Well, I know that there is a huge photography following that is totally enamored with HDR photography [wikipedia.org] and to many people it makes the images come to life ... I think it's overdone (like autotuning in modern music) but it definitely has a place. Perhaps similarly four color displays hope to widen the dynamic range they can display? I wish I could give you better answers about four color displays but this is the first I've heard of them. Perhaps your questions to a large engineer base are the most effective kind of marketing?
Re: (Score:2, Informative)
Re: (Score:2)
You cannot add information once it's been thrown away, you can only simulate it. IF the camera had a yellow channel and the video signal actually carried the yellow channel, it MIGHT be useful for the TV to display it, but that's not what's happening.
I say might, because other than a very few tetrachromats out there we probably cannot actually perceive the extra color space anyway. The ideal color reproduction would require a trichromatic camera (we're good there) where the three colors are exactly those of
Re: (Score:3, Insightful)
Re:Careful What You Laugh At (Score:4, Interesting)
Back in the 1960s there was an ad that did some trick that caused a black-and-white television to display what the eye perceived as colour. There was an explanation as to how it was achieved but lo these many decades later I have no recollection what it was (nor what the ad was for, either). If I hadn't seen it myself, I'd not believe it could be done.
Re:Careful What You Laugh At (Score:4, Informative)
Fechner color [wikipedia.org]
is an illusion of color seen when looking at certain rapidly changing or moving black-and-white patterns. They are also called pattern induced flicker colors (PIFCs). Not everyone sees the same colors.
Re:Careful What You Laugh At (Score:4, Informative)
HDR is something which enables photographers to approach the dynamic range available in print photography while largely retaining the color saturation and other qualities of transparency film
That doesn't make much sense, because transparencies and computer displays have a higher dynamic range than prints, not lower.
In reality, HDR photography is about capturing a scene that has a very high contrast ratio, beyond what cameras can capture or monitors can display. It is done by combining shots with different exposures, so parts of the image that would otherwise be over- or under-exposed retain detail and don't just get clipped or blown out.
It does tend to be overdone, but so is saturation, and the colors that people use in their photos/video don't particularly reflect reality very well either.
Actually, HDR photos are often a better representation of reality, because the human eye adjusts to different brightness levels, which is what the HDR process is doing.
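Roughly how that works, as a sketch (my own simplification, assuming a linear sensor; real pipelines such as Debevec-style merging also recover the camera's response curve): each bracketed shot is divided by its exposure time to estimate scene radiance, and a weighting function favours well-exposed pixels so clipped highlights and noisy shadows get filled in from the other exposures.

    # Sketch: merge one pixel from several bracketed exposures into a radiance estimate.
    def merge_pixel(values, exposure_times):
        """values: the same pixel from each shot, in [0, 1]; exposure_times in seconds."""
        def weight(v):
            # favour mid-tones; distrust values near clipping (0 or 1)
            return max(1e-3, 1.0 - abs(2.0 * v - 1.0))

        num = sum(weight(v) * (v / t) for v, t in zip(values, exposure_times))
        den = sum(weight(v) for v in values)
        return num / den            # estimated relative scene radiance

    # A bright window pixel: clipped in the long exposure, well exposed in the short ones.
    print(merge_pixel([1.0, 0.8, 0.1], [1 / 30, 1 / 250, 1 / 2000]))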
Yes (Score:2)
Re:Yes (Score:5, Funny)
How do you decide which pixel to sacrifice for the colour gambit?
Re:Yes (Score:5, Funny)
Re: (Score:2)
I think the word you are looking for is "Gamut" [wikipedia.org]
While you are looking at that link (yeah right), notice the first image shown, representing the gamut of a "standard" CRT monitor. Notice that each corner of the triangle is one of the phosphor colors Red, Green, and Blue. Now see where the yellow stripe is? How far outside of the triangle do you think they can push a yellow "corner"? Even if they push the yellow ALL the way to the edge of the visible spectrum, you end up with a very small increase in the overal
Human retinas (Score:2, Interesting)
Re: (Score:2)
From what I understand, this is not true. The reason is that your eye can notice a larger range of green/blue combinations than RGB combinations are capable of creating.
Re: (Score:2)
Re:Human retinas (Score:5, Interesting)
http://en.wikipedia.org/wiki/Opponent_process [wikipedia.org]
Re: (Score:2)
Re: (Score:3, Informative)
You can't tell the difference, assuming of course that the RGB phosphors are evenly matched with your cones.
Take printers, for instance. We have CMYK precisely because C+M+Y doesn't equal black, as the inks aren't perfect. I think some sort of muddy brown actually results. So a black ink is needed to fix that imperfection. There exist printers with 6 ink colors as well, because even that still isn't perfect.
I think better monitors would be a good thing, but I'm more interested in a higher bit depth.
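For the curious, the textbook-naive version of that conversion looks like the sketch below; real printer drivers use measured ICC profiles instead, precisely because the inks aren't the ideal ones this arithmetic assumes.

    # Naive RGB -> CMYK sketch, assuming ideal inks (which real inks are not --
    # hence the separate black channel and the extra "photo" inks in practice).
    def rgb_to_cmyk(r, g, b):
        """r, g, b in [0, 1]; returns (c, m, y, k) in [0, 1]."""
        c, m, y = 1.0 - r, 1.0 - g, 1.0 - b
        k = min(c, m, y)                      # pull the common grey component into black ink
        if k >= 1.0:
            return 0.0, 0.0, 0.0, 1.0         # pure black: just use the black cartridge
        return (c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k

    print(rgb_to_cmyk(1.0, 1.0, 0.0))   # pure yellow -> (0, 0, 1, 0)
    print(rgb_to_cmyk(0.2, 0.2, 0.2))   # dark grey  -> (0, 0, 0, 0.8)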
Re:Human retinas (Score:4, Informative)
But that assumes the "RGB" sensitivity of our eyes lines up with the emission spectra of RGB screens, which is not true. Perhaps this Sharp screen brings it closer and actually shows more faithfully the colors which are in the signal, as far as the human eye is concerned.
Except... (Score:2)
... the red one actually "peaks" at yellow. [wikipedia.org]
Re: (Score:3, Informative)
Some women have 4 cones.
Re: (Score:3, Informative)
As I understand it, only in a small, relatively isolated Northern population. And it's not for yellow. Still cool though.
Re:Human retinas (Score:4, Informative)
Generally speaking, the human eye is less sensitive to blue and most sensitive to red (more yellow, actually) and green. Making sure that the blue pixels are the brightest in the screen and changing the red pixel to something a little more yellow (assuming the firmware adjusts when recreating colors) would probably be the best approaches to catering to the human eye.
Not necessarily fake (Score:5, Informative)
Adding an extra phosphor can extend your gamut, increase your dynamic range within your gamut, or give you finer quantization within the gamut - or some combination of all three. The fact that your source material is provided as three quantities (YCbCr, not RGB) doesn't mean four phosphors won't help.
Doesn't mean it will, either.
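One way to see what an extra primary can buy, as a sketch with made-up primaries (nothing to do with Sharp's actual panel): matching a target XYZ colour with three primaries means solving a 3x3 linear system with exactly one answer, and if any drive level comes out negative or above 1 the colour is out of gamut; with four primaries the system is underdetermined, so the controller has a whole family of answers and can sometimes pick one that is actually displayable.

    import numpy as np
    from scipy.optimize import nnls

    # Hypothetical primaries: each column is one primary's XYZ output at full drive.
    # These are made-up numbers, NOT Sharp's -- just enough to show the linear algebra.
    P_rgb = np.array([[0.41, 0.36, 0.18],
                      [0.21, 0.72, 0.07],
                      [0.02, 0.12, 0.95]])
    yellow = np.array([0.40, 0.48, 0.01])        # hypothetical saturated yellow primary
    P_rgby = np.column_stack([P_rgb, yellow])

    target = np.array([0.3985, 0.4785, 0.016])   # a saturated yellowish target colour

    w3 = np.linalg.solve(P_rgb, target)          # the unique 3-primary answer
    w4, _ = nnls(P_rgby, target)                 # one non-negative 4-primary answer

    print("RGB  drives:", np.round(w3, 3))   # blue drive is negative -> out of gamut
    print("RGBY drives:", np.round(w4, 3))   # all drives land in [0, 1] -> displayable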
Re: (Score:3, Interesting)
The fact that your source material is provided as three quantities (YCbCr, not RGB) doesn't mean four phosphors won't help.
Well there's two possibilities here:
1) You have non-RGB information like for example in xvYCC [wikipedia.org], which still uses only three quantities but has a much wider gamut. Then four phosphors could definitely help you reproduce more colors. I guess it also proves the problem isn't that you have three quantities, but the way RGB works, which doesn't match the eye.
2) You have RGB or clipped-for-RGB YCbCr encoding, then you don't have more than RGB no matter what. In this case, it only makes sense to improve the gamut in
Re:Not necessarily fake (Score:5, Insightful)
If you've ever had a display calibrated, you'd know that even the existing RGB color space can't be completely recreated with existing RGB-based displays. The problem is in the inability of LEDs or LCD or plasma panels to produce light uniformly in the three color channels. If you can add a 4th channel that lets the RGB color space be more accurately produced by the display, then you will see an improvement. It won't make the source any better, but the output generated by the display for that input will be better.
time to wait (Score:2, Insightful)
Re: (Score:2)
sounds (Score:2)
To be as real as quoting extrapolated megapixels to sell digital cameras.
Review (Score:4, Funny)
Is that supposed to be some kind of joke?
So source material (Score:2)
As the FS says, "all the source material for this set is produced in 3-color RGB".
So while you might get an improved gamut with this, it won't be accurate color reproduction. Same with the LED sets that advertise things like "123% of television's gamut": there's no way to map your existing source media onto that extra color accurately.
What's wrong? (Score:5, Interesting)
Representing yellow with a mix of green and red is already a hack. What's wrong with software determining that the color of a pixel is yellow and actually lighting up a yellow light?
Maybe a yellow light looks more convincing than a red and green light right next to each other. I'd want to see for myself before making blanket judgments.
Open Mind (Score:3, Insightful)
At first blush it appears to be hype but I am trying to keep an open mind because of something that happened to me when I saw my first HD TV picture. I was of the opinion that HD couldn't be that much better than SD. Shortly after I saw my first HD images I was ready to admit that I was wrong. From the moment I laid eyes on HD I knew there was a whole new world out there! I am now a certifiable HD snob. I don't know what I did before but I do know I watched less TV.
I haven't seen one of the new TVs yet, so I can't say whether I think it makes a difference or not. I will know, probably rather quickly, when I see it whether I believe it or not. The first place I will look is at white/black interfaces. That should tell me a lot.
I really do hope it is hype. I think the 47" TV is a little too big to be moved into the bedroom.
Pure hype (Score:2)
It does work (Score:2, Informative)
It *could* be good (Score:5, Informative)
First, check out http://en.wikipedia.org/wiki/Gamut [wikipedia.org] for reference. The sample gamut picture in the top right shows a typical CRT--let's assume for the sake of argument that LCDs are similar.
If you add a yellow LED to that it just isn't going to add much. The yellow part of the spectrum is already fairly well represented.
*But* if they also change the hue of the green LED toward the blue spectrum then it has a good chance of really opening up the gamut.
The people saying RGB is enough don't understand chromaticity--go look for gamut plots of your favorite output devices and see how little of the full spectrum of colors they can actually reproduce. Printers are especially embarrassing. Your eyes can really see a whole lot of color detail.
The difference between stereo and surround sound? (Score:2)
Some people believe that since we have just two ears, stereo sound is enough. Others, on the other hand, believe the experience to be enhanced with 5.x surround sound systems.
I have not seen the results of this 4th-yellow-pixel display, but I might guess that it comes with a newer and better enhancement over traditional RGB output. One might believe that since the eyes can only see combinations of red, green and blue light, display devices only need to produce light of those colors. But pe
Re: (Score:2)
What you just said might as well have been doublespeak. It says nothing at all. Why bother?
Hype for higher profit margins (Score:2)
Oh, you mean like a 240 Hertz refresh rate, when the actual changes to the product cost virtually nothing? Or "LED" TVs that aren't driven by LEDs at all but merely backlit by them?
Yellow is the "gay" color? (Score:4, Funny)
It works for printers (Score:3, Insightful)
Digital images are displayed in RGB, yes.
But colors are printed in CMYK (Cyan Magenta Yellow Black), and you'll notice that the best photo inkjet printers have more than just those four color cartridges. They often have the four plus "photo cyan", "photo magenta", etc. and it does make a huge difference.
As you know, some colors cannot be accurately expressed in CMYK, nor can some in RGB (theoretically any color is possible, but theory is not reality in this case).
While the extra color may or may not make a big difference, there is at least precedent indicating that the idea is sound.
Submitter fail. (Score:5, Insightful)
"And I laugh at how you are supposed to see the advantages of 4-color technology in ads on your 3-color sets at home as you watch their commercials."
But the script of the commercial is written almost entirely with deference to that fact.
The estimable Mr. Takei tells you, while you're no doubt ogling his Adam's apple instead of listening, that he can't actually show you the difference itself, but, "I can show you this," whereupon he looks at the screen and gives his review in a single, somewhat gaudily overacted word.
I'm not sure how anyone misses that, since his behavior is utterly bizarre without the concept of telling-not-showing being in play.
Pictures just about sums it up (Score:5, Informative)
http://regmedia.co.uk/2010/05/07/quattron_4.jpg That just about sums up the entire article.
Better frequency coverage? (Score:3, Interesting)
If you look at the color spectrum and its wavelengths, you will notice the following:
red -- 610 to 760 nm
gap - 590 to 620 nm
green -- 500 to 570 nm
blue -- 450 to 500 nm
Now I couldn't find any actual explanation on the net for why Yellow would make a better picture. But if you look at the wavelengths above, you will notice that adding yellow DOES do something. It reduces the gap between Red and Green by half; Yellow is in that gap, and covers the wavelengths from 570 to 590 nm.
By this theory, maybe adding Orange (590 to 610 nm) would make an even more realistic picture?
Re: (Score:3, Informative)
OOPS! The chart should have been:
red -- 610 to 760 nm
gap - 570 to 610 nm
green -- 500 to 570 nm
blue -- 450 to 500 nm
The tone of this article isn't like the summary (Score:3, Insightful)
The tone of this article isn't like the summary states. TFA doesn't portray the TV as some magical device; in fact, the article is somewhat critical of the TV.
I think the thing that a lot of us don't realize, because we spend so much time looking at TV and computer screens, is that colored light isn't really a combination of red, green, and blue. The reality is that light gets its color from its wavelength; and we can get a very close approximation by combining light we perceive as red, green, and blue.
The question is, can we get a more accurate picture by using light that's closer to the original wavelength? Clearly, the information isn't lost, as the original wavelength can be inferred by digitally processing the original RGB levels.
Something to consider is that the original NTSC (American Color) TV standards didn't just include Red, Green, and Blue, but also included Yellow and Orange. These parts were essentially deprecated, but the concept of TVs displaying yellow isn't new.
Linear algebra and color gamut (Score:3, Informative)
Quick terminology: Spectral color - a pure, single-wavelength color, like a laser. Composite color - a combination of many spectral colors of different intensities.
To truly reproduce a color, each pixel should be able to not only make one spectral color, but a combination of all of them.
This would be very expensive, and fortunately, our eyes have sensors only for Red (~580 nm), Green (~540 nm), and Blue (~440 nm) (RGB), if we exclude the low-light rods. We can therefore get away with RGB screens. There are slight errors, though. For example, assume each R-G-B pixel emits light matching the peak sensitivity of the eye's R-G-B sensors. We can then reproduce most light stimulations by exciting a linear combination of the three emitters. The eye, however, is sensitive from 380 nm to 740 nm, and such a screen obviously cannot create the stimulation for either 400 nm or 700 nm light, as a linear combination of only positive values will not cover these spectral colors (they are outside the gamut of the display). Take a picture of a prism spectrum or rainbow, compare the original with what you see on the monitor, and you can see this.
So, bottom line: RGB covers almost all colors; adding emitters allows the linear combination to cover more of the possible stimulations, but at a high cost for little value. It is primarily the near-UV purplish blue below 440 nm and the warm reds near IR that cannot be reproduced.
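A concrete sketch of that "positive linear combination" limit, using the published sRGB-to-XYZ matrix and the approximate CIE 1931 values for monochromatic 500 nm light: solving for the mix of R, G and B that would match the pure spectral cyan gives a negative red weight, which is the math's way of saying the colour lies outside the display's gamut.

    import numpy as np

    # Columns: XYZ of the sRGB primaries at full drive (the standard sRGB -> XYZ matrix).
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])

    # Approximate CIE 1931 tristimulus values for monochromatic 500 nm (cyan-green) light.
    target = np.array([0.0049, 0.3230, 0.2720])

    weights = np.linalg.solve(M, target)
    print(np.round(weights, 3))   # red weight is negative: no positive mix of these
                                  # three primaries reproduces this spectral colour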
The point is to let more light through. (Score:3, Interesting)
With RGB pixels on an LCD, yellow is shown by allowing light to pass through neighboring red and green subpixels. For the red subpixel, blue and green are filtered out. For the green subpixel, blue and red are filtered out. Then the eye fuses the neighboring pixels together to get yellow from two sources that have already filtered out much of the spectrum. But with a single yellow subpixel, only blue light is filtered out and more light reaches the viewer. I'm sure the effect is to make certain colors more vivid.
Additionally, these yellow subpixels also serve to somewhat increase the effective resolution.
Sure, it could be (Score:3, Interesting)
In terms of color theory, nothing stops it from potentially being real. If you expect to hook this up to some random source and get an improvement, though... good luck. It's not going to happen. With an appropriate 10-bit or 12-bit wide-gamut source, though, it's certainly capable of better results.
The input may be 3-color (RGB), but if it's defined with a wide-gamut space like Adobe RGB, possibly with up to 16 bits of precision per colour channel, then it can represent a huge range of colours. It can do this by defining near-"perfect" primary colours and assuming perfect control over blending of those primaries.
A regular TV, though also an RGB device, has a very different gamut. That's largely because the primary colours the TV uses aren't as bright/saturated or as "perfect" as those in the Adobe RGB space, but it also can't blend its colours as well. Most likely it only uses 8 bits per colour channel, so it has a much more limited range of gradations, further forcing the colour space to be narrowed to avoid banding due to imprecision.
The regular TV must "scale" a wide-gamut input signal in a colour space like Adobe RGB to display it on its own more limited panel. It can do this by "chopping off" extreme colours, by scaling the whole lot evenly, or by several other methods that're out of scope here. Point is, they're both RGB devices, but they don't share the same colour space and must convert colours.
So, if the yellow pixel (another primary) expands the gamut of this new TV, then yes, even though it too only takes an RGB signal, it's in theory better, because it can convert a wide-gamut RGB input to its own RGBY space for display with better fidelity than a TV with the same RGB primaries but no Y channel could achieve.
Another device might still be plain RGB, but for each of the red green and blue primaries it might have much better (closer to "perfectly red" etc) colour. This device might have an overall wider gamut (ie better range of colours) than the RGBY device, though it's likely that the RGBY device's gamut would still be capable of better yellows. (If you're struggling to figure out what I mean, google for "CIE diagram RGB CMYK" to get a feel for it).
Attaining better results through adding a channel and/or having better R, G, B primaries presumes properly colour-managed inputs to gain any benefit, though. In reality, video colour management is in a pathetic and dire state - inputs can be in any number of different colour spaces, there's no real device-to-device negotiation of colour spaces, and it's generally a mess. If you feed a "regular" narrow-gamut source through to a TV that's expecting a wide-gamut signal, you'll get a vile array of over-saturated, over-bright, disgusting colour, so this is important. Since this device would rely on wide-gamut RGB input to have any advantage, it'll need a 10-bit or 12-bit HDMI or DisplayPort input with a source that's capable of providing a wider-gamut signal (say, Blu-ray) and is set up to actually do so rather than "scaling" the output video gamut to the expectations of most devices.
The fact that most inputs only support 8 bits per channel (and thus aren't very useful for wide-gamut signals because they'll get banding/striping in smooth tones) really doesn't help.
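As a tiny sketch of the "chopping off" versus "scaling" choice mentioned above (my own toy code, not any standard's algorithm): once a wide-gamut colour has been converted into the panel's own RGB space, channels can land outside [0, 1], and the set has to decide whether to clip them (which shifts hue and saturation) or desaturate toward grey until everything fits.

    # Toy sketch: two ways to force an out-of-range linear RGB colour into [0, 1].
    def clip(rgb):
        """Hard clip each channel -- simple, but distorts extreme colours."""
        return tuple(min(max(c, 0.0), 1.0) for c in rgb)

    def desaturate(rgb):
        """Blend toward the colour's own grey level just enough to fit in [0, 1].
        (Assumes that grey level is itself in range.)"""
        grey = sum(rgb) / 3.0
        t = 0.0
        for c in rgb:
            if c > 1.0:
                t = max(t, (c - 1.0) / (c - grey))
            elif c < 0.0:
                t = max(t, (0.0 - c) / (grey - c))
        return tuple(grey + (1.0 - t) * (c - grey) for c in rgb)

    wide = (1.20, 0.80, -0.05)   # a wide-gamut colour after conversion to the panel's RGB
    print(clip(wide))            # (1.0, 0.8, 0.0)
    print(desaturate(wide))      # pulled toward grey until every channel fits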
Trying to solve problems that don't exist. (Score:3, Informative)
Color on RGB monitors is currently a fine match for the standard broadcast/HDTV/Blu-ray gamut, and LCD monitors are plenty bright, so this really doesn't solve a problem anyone was actually having.
Sharp has among the worst LCD tech (IMO), with weak (grey) blacks and a lot of viewing-angle shift.
The first reviews that I read say these problems persist, so Sharp didn't work on the real (hard) problems they have with their technology. Instead they decided to tackle something they can use as a marketing differentiator to impress the rubes.
Re: (Score:2)
But I think a better single answer to both questions is "yes." That is, yes -- adding the pixel changes things. But yes, it is hype (in the sense that the difference isn't meaningful.)
Re: (Score:2)
Some women can see four primary colours (Score:3, Informative)