Television Hardware Technology

Is the 4th Yellow Pixel of Sharp Quattron Hype? 511

Posted by timothy
from the hi-fi-jumprope dept.
Nom du Keyboard writes "Sharp Aquos brand televisions are making a big deal about their Quattron technology of adding a 4th yellow pixel to their RGB sets. While you can read a glowing review of it here, the engineer in me is skeptical because of how all the source material for this set is produced in 3-color RGB. I also know how just making a picture brighter and saturating the colors a bit can make it more appealing to many viewers over a more accurate rendition – so much for side-by-side comparisons. And I laugh at how you are supposed to see the advantages of 4-color technology in ads on your 3-color sets at home as you watch their commercials. It sounds more like hype to extract a higher profit margin than the next great advance in home television. So is it real?"
This discussion has been archived. No new comments can be posted.

Is the 4th Yellow Pixel of Sharp Quattron Hype?

Comments Filter:
  • Yellow... yawn (Score:2, Insightful)

    by Anonymous Coward
I'd be much more interested if it was a colour that RGB couldn't produce.
    • by uglyduckling (103926) on Saturday May 08, 2010 @03:50PM (#32141360) Homepage
      Like Octarine?
    • by $RANDOMLUSER (804576) on Saturday May 08, 2010 @04:28PM (#32141678)
      And the commercials with George Takei (I'm sure there's a "yellow peril" joke in there somewhere) in a white lab coat and the caption reads "actor portrayal" LIKE WE DIDN'T KNOW - IT'S GEORGE FREAKIN' TAKEI YOU IDIOTS!!
    • Re:Yellow... yawn (Score:4, Informative)

      by IWannaBeAnAC (653701) on Saturday May 08, 2010 @04:30PM (#32141694)
      Obviously, if it was a color that RGB could produce then there wouldn't be any point making a special color channel with it. You should read up on the color gamut [wikipedia.org] and learn a bit about the limitations of RGB.
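For the curious, checking whether a chromaticity falls inside a display's triangle on the CIE 1931 xy diagram is a simple point-in-triangle test. A minimal sketch (the sRGB primary coordinates are standard; the ~580 nm spectral-yellow point is an approximate value):

```python
# Point-in-triangle test on the CIE 1931 xy chromaticity diagram.

def sign(p, a, b):
    """Signed cross product: which side of edge a->b the point p is on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def in_gamut(p, r, g, b):
    """True if chromaticity p lies inside the triangle spanned by r, g, b."""
    s1, s2, s3 = sign(p, r, g), sign(p, g, b), sign(p, b, r)
    has_neg = (s1 < 0) or (s2 < 0) or (s3 < 0)
    has_pos = (s1 > 0) or (s2 > 0) or (s3 > 0)
    return not (has_neg and has_pos)

R, G, B = (0.640, 0.330), (0.300, 0.600), (0.150, 0.060)  # sRGB primaries

print(in_gamut((0.3127, 0.3290), R, G, B))  # D65 white -> True
print(in_gamut((0.512, 0.487), R, G, B))    # spectral ~580 nm yellow -> False
```

Spectral yellow landing (just) outside the sRGB triangle is exactly the kind of limitation the gamut article describes.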
  • by queazocotal (915608) on Saturday May 08, 2010 @03:40PM (#32141282)

    To get truly astonishing pictures, they should add a black pixel, to improve contrast.

    • by jjoelc (1589361) on Saturday May 08, 2010 @04:35PM (#32141736)

joking aside... some of the newer TVs with LED backlighting actually do something like this... Lighting up the picture with thousands(ish?) of independent LEDs (as opposed to a couple of souped-up fluorescent tubes) means they can selectively dim or turn off entire sections of the backlighting. So when large parts of the scene are dirk, large parts of the backlighting is dimmed as well, thus increasing the contrast. It also saves a bit of power, making it easier for them to meet Energy Star standards, etc...
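The scheme described here can be sketched in a few lines. This is a toy model only; the zone size and the 0.05 backlight floor are invented for illustration:

```python
# Toy local-dimming sketch: one backlight zone per block of pixels.
# The backlight follows the zone's peak luminance; each LCD value is
# compensated so (backlight * lcd) still approximates the target pixel.

def local_dim(pixels, zone_size=4, floor=0.05):
    """pixels: flat list of target luminances in [0, 1]."""
    out = []
    for i in range(0, len(pixels), zone_size):
        zone = pixels[i:i + zone_size]
        backlight = max(max(zone), floor)          # dim zone to its peak
        # LCD transmission compensates for the dimmer backlight
        out.extend(min(p / backlight, 1.0) for p in zone)
    return out

scene = [0.02, 0.01, 0.03, 0.02,   # dark zone: backlight drops to the floor
         0.9, 0.2, 0.1, 0.8]       # bright zone: backlight stays at 0.9
print(local_dim(scene))
```

Real sets track something closer to an average than a peak per zone, which is where the starfield problem mentioned below comes from: a few bright stars barely move the average, so the backlight stays too dim to show them.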

      • by tepples (727027) <tepples@[ ]il.com ['gma' in gap]> on Saturday May 08, 2010 @06:04PM (#32142358) Homepage Journal
        A lot of TV sets that use local dimming have a big problem showing starfields. The average color in a starfield is pretty dark, so the LED goes dim and not bright enough to show the stars. It really takes the punch out of Star Wars Special^n Edition if you can't see the stars.
        • Re: (Score:3, Interesting)

          by Cl1mh4224rd (265427)

          A lot of TV sets that use local dimming have a big problem showing starfields. The average color in a starfield is pretty dark, so the LED goes dim and not bright enough to show the stars. It really takes the punch out of Star Wars Special^n Edition if you can't see the stars.

          My brother-in-law bought an LED set not too long ago. I believe it's an edge-lit model, so that may contribute to the problem, but... the bright parts of dark scenes tend to be about half as bright as they should be.

          This includes loading screens on some video games as well as movie credits. There's one short scene in the recent Star Trek movie showing the Narada flying by that dims so much that you can't make out any details in the ship.

          Like I said, I believe his particular model is edge-lit, so I can't rea

          • Re: (Score:3, Interesting)

            by negRo_slim (636783)

            My brother-in-law bought an LED set not too long ago. I believe it's an edge-lit model, so that may contribute to the problem, but... the bright parts of dark scenes tend to be about half as bright as they should be.

            I tend to lose all visual fidelity in dark parts of a high contrast image.

        • Re: (Score:3, Insightful)

          It really takes the punch out of Star Wars Special^n Edition if you can't see the stars.

          Didn't George Lucas do that already?

        • Re: (Score:3, Funny)

          by daeg (828071)

          Oh my god... it's not full of stars...

      • Re: (Score:3, Funny)

        by camperdave (969942)
        So when large parts of the scene are dirk, large parts of the backlighting is dimmed as well, thus increasing the contrast. It also saves a bit of power

        That's good, because Dragon's Lair [gamesonsmash.com] consumes huge wads of power.
      • Re: (Score:3, Informative)

        by rsmith-mac (639075)

        This certainly accomplishes its goal, but the downsides are also pretty high. Variable backlighting means that color calibration goes completely and utterly out of whack - a different backlight level than what it was calibrated at changes the properties of the panel. So you can have more accurate darks, but you lose accurate colors in return.

        • Re: (Score:3, Insightful)

          Except that the difference can be accurately modeled in software and corrected at the LCD pixel - the performance and effectiveness of the algorithms used for this process are a key difference in the resultant picture quality in the models currently available.

          The brightside demo models apparently had excellent correction; and I imagine this is what a lot of the company's IP investment was based in.

  • RGB (Score:5, Informative)

    by Kell Bengal (711123) on Saturday May 08, 2010 @03:42PM (#32141292)
    It strikes me that a better use of a fourth colour pixel would be to represent all those greens the RGB colour space doesn't actually represent [wikipedia.org].
    • Re:RGB (Score:5, Insightful)

      by Anonymous Coward on Saturday May 08, 2010 @03:56PM (#32141408)

Only if the camera recording the picture records that same color.

As it has been stated, the TV is literally the last place a new color needs to be added. (First the camera that films, then the storage medium (DVD?), then broadcast (HDMI?), THEN the TV.)

    • Re:RGB (Score:5, Informative)

      by Anonymous Coward on Saturday May 08, 2010 @04:11PM (#32141526)

That 1931 color gamut is misleading because it overemphasizes greens. In fact, the original NTSC green primary was much closer to the peak, but as a result, yellows were too muted, so they changed it. But you're right - a turquoise primary would increase the RGB gamut significantly.

      The ideal would be that all color information in video would be in device-independent xy color space instead of RGB. See LogLUV encoding for example: http://www.anyhere.com/gward/papers/jgtpap1.pdf
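For reference, going from linear sRGB to device-independent xy chromaticity is just the standard D65 matrix followed by normalization. A minimal sketch:

```python
# Linear sRGB -> CIE XYZ -> xy chromaticity, using the standard
# D65 sRGB-to-XYZ matrix.

M = [(0.4124, 0.3576, 0.1805),
     (0.2126, 0.7152, 0.0722),
     (0.0193, 0.1192, 0.9505)]

def srgb_to_xy(r, g, b):
    X, Y, Z = (row[0] * r + row[1] * g + row[2] * b for row in M)
    s = X + Y + Z
    return X / s, Y / s

x, y = srgb_to_xy(1.0, 1.0, 0.0)   # pure RGB "yellow"
print(round(x, 3), round(y, 3))    # 0.419 0.505
```

Display yellow lands around (0.419, 0.505), visibly short of monochromatic ~580 nm yellow near (0.51, 0.49), which is why a dedicated yellow primary is at least plausible.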

      • Re:RGB (Score:4, Insightful)

        by iluvcapra (782887) on Saturday May 08, 2010 @06:28PM (#32142544)

That 1931 color gamut is misleading because it overemphasizes greens. In fact, the original NTSC green primary was much closer to the peak, but as a result, yellows were too muted, so they changed it. But you're right - a turquoise primary would increase the RGB gamut significantly.

It would increase the gamut, but it wouldn't improve the rendition of skin tones (uh... the skin tones of most European/Native American/Asian/Middle Eastern/Mediterranean people. eep.) When people complain about the colors on their TV, it's generally because the skin tones don't look right.

    • by coolgeek (140561)

      Only one problem. No Y encoded in the data stream, so it has to be interpolated.

      • Re:RGB (Score:5, Informative)

        by forkazoo (138186) <wrosecrans AT gmail DOT com> on Saturday May 08, 2010 @04:29PM (#32141690) Homepage

        Only one problem. No Y encoded in the data stream, so it has to be interpolated.

        In some cases, it could actually be useful. While most cameras shoot with RGB sensors, most video compression is in some variation of YUV (1) color space. If you shoot on something like a Red One (2) camera, you get a RAW format with more than 8 bits (3) of color information. If you have a sensible post pipeline, you can go to YUV for your distribution format and have plenty of color data to completely fill out the 8 bit YUV data. YUV and RGB don't have identical color reproduction and gamut, so you can wind up with the odd situation where you shot on an RGB sensor, and you decimated to 8 bit data for distribution, but a normal 8 bit RGB display can't quite show every color that you have.

        I wouldn't expect brick-shittingly amazing results on such a system. I'd need to see it in person and see a measured gamut chart to have any particular opinion on this particular display, but I can't dismiss the concept out of hand.

        (1) : Y in YUV isn't Yellow, it's Luma. Still, the imperfect conversion between YUV and RGB means that a fourth primary could make it possible to more accurately show YUV data on an RGBY display.

        (2) : "Red" is a brand name. "Red" in the name of the camera doesn't specifically imply any relationship to RGB color space or anything like that. The camera does use a standard RGB Bayer pattern sensor, though.

(3) : 8 bit color in this context is always "per component" rather than "per pixel" and doesn't imply old-school 256-total-colors paletted mode. In an X11 config file, for example, this would be referred to as 24 bit color. Video guys are more interested in per-component colors because they always do operations on components. When you are writing misc. GUI software, you are generally more concerned with bits per pixel, because you would never care about how much space it takes to upload a fraction of a pixel to a video card since you have to upload a full pixel to display it.

        (4) : This footnote doesn't correspond to anything in the text. After all that, I'm now just in the habit of writing footnotes.
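Footnote (1)'s point about the imperfect YUV/RGB conversion can be seen directly: perfectly legal 8-bit YCbCr code triples can decode to values outside the RGB cube. A sketch using the BT.601 full-range equations:

```python
# BT.601 full-range YCbCr -> RGB. Legal 8-bit YCbCr triples can land
# outside the 0-255 RGB cube, which is the mismatch the footnote
# alludes to.

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return r, g, b

print(ycbcr_to_rgb(128, 128, 128))    # mid grey: (128.0, 128.0, 128.0)
r, g, b = ycbcr_to_rgb(200, 64, 220)  # a legal 8-bit triple...
print(r, g, b)                        # ...whose red component exceeds 255
```

An RGBY panel with an extended gamut could, in principle, render some of those out-of-cube values instead of clipping them.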

    • Re: (Score:2, Informative)

      by lc_overlord (563906)

Actually the eye is more sensitive to yellows than reds, if you look at that wikipedia page you cited.
As it is now, there is a slight dip in the yellow part of the color spectrum on displays because they use a pretty narrow band of red.
Cameras, on the other hand, use a red filter that basically takes all light between yellow and infrared.
So the input is both yellow and red combined while the output is just red; by adding yellow the display can correct some of that loss.

      Though i would like to see c

    • Re:RGB (Score:4, Informative)

      by pthisis (27352) on Saturday May 08, 2010 @05:02PM (#32141930) Homepage Journal

      It strikes me that a better use of a fourth colour pixel would be to represent all those greens the RGB colour space doesn't actually represent.

Nit: sRGB isn't synonymous with RGB, nor even with RGB as used in displays.

      Plenty of RGB colorspaces don't have the green-deficiency problem, and it's nothing innately required by an RGB LED system if it's willing to do a non-sRGB display.

    • Re:RGB (Score:5, Interesting)

      by Twinbee (767046) on Saturday May 08, 2010 @05:03PM (#32141942) Homepage

      Parent is correct. Any colours around green and cyan are usually terribly unsaturated on most monitors. In fact, even in 'real life', it isn't theoretically possible to experience true cyan/aqua because the nearest direct wavelength will stimulate the red eye cone to some extent creating colour pollution.

      There is a trick around this, which can be found by over-saturating the red cone. This weakens it temporarily, and then when shortly afterwards you see anything resembling cyan, it will appear as close to the true qualia as you could ever expect. The "Eclipse of Mars" illusion that follows in the below link demonstrates this for those who are curious:

      http://www.skytopia.com/project/illusion/2illusion.html [skytopia.com]

      • Re:RGB (Score:5, Insightful)

        by jipn4 (1367823) on Saturday May 08, 2010 @06:20PM (#32142472)

        That's misleading. A lack of a fully saturated green on a monitor is a limitation with the phosphors or dyes it uses. But monochromatic light of around 515 nm is pure, fully saturated green. Fully saturated green stimulates both your M and L cones ("G" and "R" cones); that's the way your eye works.

        You can achieve non-physical responses from your photoreceptors via oversaturation, drugs, or electrical stimulation. That's interesting, but it isn't "green" and it isn't a "true qualia". Thinking of that as "green" is simply because you think of the M cone as a "green" cone and the L cone as a "red" cone, but those are just arbitrary names.
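The overlap being described can be illustrated with a toy cone model. The Gaussian shapes and widths below are invented purely for illustration; only the peak wavelengths are roughly realistic:

```python
# Crude Gaussian stand-ins for the S/M/L cone sensitivity curves.
# Peaks ~445/543/566 nm are approximately right; the shared 45 nm
# width is an invented simplification.
import math

def cone(wavelength, peak, width=45.0):
    return math.exp(-((wavelength - peak) / width) ** 2)

def responses(wl):
    return {c: round(cone(wl, p), 2)
            for c, p in (("S", 445), ("M", 543), ("L", 566))}

# Even monochromatic ~515 nm green excites the L ("red") cone
# substantially, not just the M cone.
print(responses(515))
```

Even in this cartoon model, a pure 515 nm stimulus produces a clearly nonzero L response, which is the overlap the comment describes: the code your brain receives always mixes the channels.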

        • Re:RGB (Score:5, Interesting)

          by Twinbee (767046) on Saturday May 08, 2010 @07:01PM (#32142764) Homepage

That's my point though. How can an approximately 515 nm wavelength be a fully saturated green if the L cone is also being activated to some degree? That would be the extra pollution I'm talking about. I believe a much purer green would result if you somehow disabled the L cone. Unless you think we might see a more cyan/blue-like hue here?

To get a definitive answer, I would be interested to see what one would experience if you disabled two of the three S/M/L cones. I'm suspecting you would see pure red (disable S+M), green (disable S+L) and blue (disable M+L). Any research into that?

          That's interesting, but it isn't "green"

          What is it then?

          • Re:RGB (Score:5, Interesting)

            by jipn4 (1367823) on Saturday May 08, 2010 @10:59PM (#32144178)

That's my point though. How can an approximately 515 nm wavelength be a fully saturated green if the L cone is also being activated to some degree?

            Because all light of a single wavelength is automatically "pure"; it doesn't matter what your cone responses are. The cone responses are just a code to transmit that information to your brain. Your cone responses are such that they overlap (for good reason), but that doesn't keep you from seeing pure colors.

            And actually, you perceive color contrast anyway, not absolute RGB values or wavelengths. So, even if you get a group of cones to produce a pure "green" response somehow, that will simply be processed as being part of a strong red/green contrast and result just in a vivid green percept.

  • by tomhudson (43916) < ... <nosduh.arabrab>> on Saturday May 08, 2010 @03:42PM (#32141296) Journal

    It's like the "120 hz lcd display" stuff. The dvd they use to show you the difference in-store is bogus. If you want REALLY sharp, you'd buy a 600hz plasma. The whole screen changes from one image to the next in 1/600 of a second, with no interpolation (and interpolation algorithms are just "best guesses", so they're no better than an upscaler would be).

    • by gbjbaanb (229885)

      you'd buy a 600hz plasma. The whole screen changes from one image to the next in 1/600 of a second

technically, the source input is still running at 25 frames a second, not 600, so while it can change the whole image in 1/600 second... it doesn't. The 600 Hz thing is mostly marketing hype; the set performs interpolation to try to get you a smoother image. I find that the image processing doesn't work so well and results in jaggy movement instead.

      Best look to the plasma's black levels and contrast ratios inste

    • Re: (Score:3, Informative)

      by nxtw (866177)

      It's like the "120 hz lcd display" stuff.

      A 120 Hz display provides a better result for 24 fps input (from film sources) than will a 60 Hz display. With 120 Hz, each frame is displayed for 1000/24 ms instead of varying between 1000/30 ms and 1000/20 ms on a 60 Hz display.

    • Re: (Score:3, Informative)

      by Tyler Eaves (344284)

      Except you're completely missing the point. It's not about sharpness or speed. It's about being an even multiple of 24hz so you can display film material (e.g. about everything you'd really want on a 1080p set) without any tricks that ruin the smoothness of motion.
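The even-multiple argument is easy to see numerically: just count how many display refreshes each film frame occupies. A sketch:

```python
# Refresh counts per film frame when mapping 24 fps onto a display.
# At 60 Hz the counts alternate 2,3 (3:2 pulldown -> judder);
# at 120 Hz every frame gets exactly 5 refreshes.

def refreshes_per_frame(display_hz, fps=24, frames=6):
    counts, shown = [], 0
    for i in range(1, frames + 1):
        total = int(i * display_hz / fps)  # refreshes elapsed by frame i
        counts.append(total - shown)
        shown = total
    return counts

print(refreshes_per_frame(60))   # [2, 3, 2, 3, 2, 3]
print(refreshes_per_frame(120))  # [5, 5, 5, 5, 5, 5]
```

The uneven 2,3,2,3 cadence is exactly the motion irregularity a 120 Hz panel avoids for film material.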

    • Re: (Score:3, Interesting)

      by Psyborgue (699890)
      120 Hz could provide some benefit for 24p material since it's evenly divisible.
  • by eldavojohn (898314) * <eldavojohnNO@SPAMgmail.com> on Saturday May 08, 2010 @03:46PM (#32141324) Journal

    And I laugh at how you are supposed to see the advantages of 4-color technology in ads on your 3-color sets at home as you watch their commercials.

    Well, I'm not sure if you're correct to laugh at this or not. But all televisions are approximations of something analogue that was captured and in that capturing process, some information was lost. To illustrate, entertain a scenario where I have N standard definition television sets that are displaying footage from standard definition video cameras. I daisy chain them together (each camera directed at the last screen) to record something. As I move from the 0th screen to the Nth screen, I will begin to see degradation as more information is lost and randomness comes into play. The same can be done with HD but since HD captures more information, it can safely be assumed that the sampling and resampling will retain more of the original image.

    If you played the Nth HD screen next to the Nth SD screen and piped that through an SD television, you'd still be able to see some difference (for reasonable non-astronomical numbers of N) even though you went through yet another SD television in the end.

I don't know what the fourth color is supposed to buy; I'm unfamiliar with this technology. But the side-by-side comparison through an SD or HD TV might still be able to demonstrate that the fourth color adds some meaningful information to the image that -- when resampled to be viewed on your device -- suffers less information loss than the three-color implementation, thus successfully demonstrating some superiority. Not showing you precisely what the final product is supposed to be like, but instead giving you a relative sense of signal loss and noise.

    I also know how just making a picture brighter and saturating the colors a bit can make it more appealing to many viewers over a more accurate rendition

    Well, I know that there is a huge photography following that is totally enamored with HDR photography [wikipedia.org] and to many people it makes the images come to life ... I think it's overdone (like autotuning in modern music) but it definitely has a place. Perhaps similarly four color displays hope to widen the dynamic range they can display? I wish I could give you better answers about four color displays but this is the first I've heard of them. Perhaps your questions to a large engineer base are the most effective kind of marketing?

    • Re: (Score:2, Informative)

      by vcgodinich (1172985)
Well, no, not really at all. The analog signal is converted to digital WAY before you see it, and a 3-color gamut physically cannot display colors that a 4-color one can. Period.

Watching something that was RECORDED in 4 colors (and I can't find any cameras that do that) on a 4-color TV (IF the media supports it; standard DVDs do not) would be better, and that improvement cannot be made on a 3-color TV.

      As to your SD vs HD comparison. . . no. The max resolution that SD can display is a DVD (basically)

    • by sjames (1099)

      You cannot add information once it's been thrown away, you can only simulate it. IF the camera had a yellow channel and the video signal actually carried the yellow channel, it MIGHT be useful for the TV to display it, but that's not what's happening.

      I say might, because other than a very few tetrachromates out there we probably cannot actually perceive the extra color space anyway. The ideal color reproduction would require a trichromate camera (we're good there) where the three colors are exactly those of

    • Re: (Score:3, Insightful)

      by BitZtream (692029)

      Well, I'm not sure if you're correct to laugh at this or not. But all televisions are approximations of something analogue that was captured and in that capturing process, some information was lost. To illustrate, entertain a scenario where I have N standard definition television sets that are displaying footage from standard definition video cameras. I daisy chain them together (each camera directed at the last screen) to record something. As I move from the 0th screen to the Nth screen, I will begin to se

    • by Reziac (43301) * on Saturday May 08, 2010 @05:40PM (#32142164) Homepage Journal

      Back in the 1960s there was an ad that did some trick that caused a black-and-white television to display what the eye perceived as colour. There was an explanation as to how it was achieved but lo these many decades later I have no recollection what it was (nor what the ad was for, either). If I hadn't seen it myself, I'd not believe it could be done.

The purpose of introducing the Y is to increase the colour gamut. Theoretically, more colors = more "realistic" images. I think that if you can notice the difference between a picture and the actual object (not in terms of dimension, but in terms of the actual colors) then it's likely that a larger colour gamut would be beneficial.
  • Human retinas (Score:2, Interesting)

    Puny human eyeballs only have three kinds of cones, one that peaks in response to red, one to green, and one to blue. While our superior alien overlords may be pleased with this new technology, physiologically, you can't tell the difference.
    • physiologically, you can't tell the difference.

From what I understand, this is not true. The reason is that your eye can notice a larger number of green/blue combinations than RGB is capable of creating.
    • Re: (Score:3, Informative)

      by vadim_t (324782)

      You can't tell the difference, assuming of course that the RGB phosphors are evenly matched with your cones.

Take for instance printers. We have CMYK precisely because C+M+Y doesn't equal black, as the inks aren't perfect. I think some sort of muddy brown actually results. So a black ink is needed to fix that imperfection. There exist printers with 6 ink colors as well, because that still doesn't make it perfect.

      I think better monitors would be a good thing, but I'm more interested in a higher bit depth.
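The grey-component-replacement idea behind the K channel is straightforward to sketch. This is the naive textbook conversion, not what any real printer driver does:

```python
# Naive RGB -> CMYK with grey-component replacement: pull the common
# part of C, M, Y out into the K (black) channel, since overprinting
# all three inks gives muddy brown rather than true black.

def rgb_to_cmyk(r, g, b):
    """r, g, b in [0, 1]; returns (c, m, y, k), each in [0, 1]."""
    c, m, y = 1 - r, 1 - g, 1 - b
    k = min(c, m, y)             # the grey component shared by all three
    if k == 1.0:                 # pure black: only ink needed is K
        return (0.0, 0.0, 0.0, 1.0)
    c, m, y = ((v - k) / (1 - k) for v in (c, m, y))
    return (c, m, y, k)

print(rgb_to_cmyk(0.0, 0.0, 0.0))  # black -> (0.0, 0.0, 0.0, 1.0)
print(rgb_to_cmyk(1.0, 0.5, 0.0))  # orange -> (0.0, 0.5, 1.0, 0.0)
```

The analogy to the TV is loose but real: both add a channel to compensate for primaries that don't behave ideally.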

    • ... the red one actually "peaks" at yellow. [wikipedia.org]

    • Re: (Score:3, Informative)

      by hhawk (26580)

      Some women have 4 cones..

      • Re: (Score:3, Informative)

        by bugi (8479)

        As I understand it, only in a small, relatively isolated Northern population. And it's not for yellow. Still cool though.

    • Re:Human retinas (Score:4, Informative)

      by fruitbane (454488) on Saturday May 08, 2010 @04:16PM (#32141592) Homepage

      Generally speaking, the human eye is less sensitive to blue and most sensitive to red (more yellow, actually) and green. Making sure that the blue pixels are the brightest in the screen and changing the red pixel to something a little more yellow (assuming the firmware adjusts when recreating colors) would probably be the best approaches to catering to the human eye.

  • Not necessarily fake (Score:5, Informative)

    by russotto (537200) on Saturday May 08, 2010 @03:49PM (#32141352) Journal

Adding an extra phosphor can extend your gamut, increase your dynamic range within your gamut, or give you finer quantization within the gamut, or some combination of all three. The fact that your source material is provided as three quantities (YCbCr, not RGB) doesn't mean four phosphors won't help.

    Doesn't mean it will, either.

    • Re: (Score:3, Interesting)

      by Kjella (173770)

The fact that your source material is provided as three quantities (YCbCr, not RGB) doesn't mean four phosphors won't help.

      Well there's two possibilities here:

      1) You have non-RGB information like for example in xvYCC [wikipedia.org], which still uses only three quantities but has a much wider gamut. Then four phosphors could definitely help you reproduce more colors. I guess it also proves the problem isn't that you have three quantities, but the way RGB works which doesn't match the eye.

      2) You have RGB or clipped-for-RGB YCbCr encoding, then you don't have more than RGB no matter what. In this case, it only makes sense to improve the gamut in

  • time to wait (Score:2, Insightful)

    by Anonymous Coward
Time to wait for all the /.ers who don't actually understand colour theory to pipe up with comments about how 3 colors is more than enough for everything, simply because it was a design choice made several decades ago.
  • by fred911 (83970)

    To be as real as quoting extrapolated mega pixels to sell digital cameras.

  • Review (Score:4, Funny)

    by Concerned Onlooker (473481) on Saturday May 08, 2010 @04:00PM (#32141444) Homepage Journal
    "While you can read a glowing review of it here...."

    Is that supposed to be some kind of joke?

  • As the FS says, "all the source material for this set is produced in 3-color RGB".

So while you might get an improved gamut with this, it won't be accurate color reproduction. Same with the LED sets that advertise things like "123% of televisions gamut". There's no way to map that color accurately onto your existing source media.

  • What's wrong? (Score:5, Interesting)

    by rm999 (775449) on Saturday May 08, 2010 @04:06PM (#32141498)

    Representing yellow with a mix of green and red is already a hack. What's wrong with software determining that the color of a pixel is yellow and actually lighting up a yellow light?

    Maybe a yellow light looks more convincing than a red and green light right next to each other. I'd want to see for myself before making blanket judgments.

  • Open Mind (Score:3, Insightful)

    by gone.fishing (213219) on Saturday May 08, 2010 @04:09PM (#32141510) Journal

    At first blush it appears to be hype but I am trying to keep an open mind because of something that happened to me when I saw my first HD TV picture. I was of the opinion that HD couldn't be that much better than SD. Shortly after I saw my first HD images I was ready to admit that I was wrong. From the moment I laid eyes on HD I knew there was a whole new world out there! I am now a certifiable HD snob. I don't know what I did before but I do know I watched less TV.

I haven't seen one of the new TVs yet, so I can't say whether I think it makes a difference or not. I will know, probably rather quickly, whether I believe it when I see it. The first place I will look is at white/black interfaces. That should tell me a lot.

    I really do hope it is hype. I think the 47" TV is a little too big to be moved into the bedroom.

  • As many others have pointed out, it doesn't matter how many primary colors the set is capable of displaying if the signal only uses three. This reminds me of a scanner I saw about ten years or so ago that was capable of recording scans in a 48-bit mode, if the software was capable of using the extra bits. If (and only if) you looked very closely at the text on the box, you'd see a note that few, if any scanner packages supported 48-bit color. It also didn't tell you that it was highly unlikely that any s
  • It does work (Score:2, Informative)

    by psyopper (1135153)
    First - if it's working correctly you shouldn't even notice it. Second, Sanyo has been doing this for a few years in their projectors. The yellow panel helps warm up the color range and keep your tv's backlight from getting too far in the blue range. Read Sanyo's whitepaper: http://us.sanyo.com/shared/docs/QuaDrive_SANYO_WhitePaper08.pdf [sanyo.com] Alternatively try searching for Sanyo Quadrive
  • It *could* be good (Score:5, Informative)

    by __david__ (45671) * on Saturday May 08, 2010 @04:12PM (#32141536) Homepage

First, check out http://en.wikipedia.org/wiki/Gamut [wikipedia.org] for reference. The sample gamut picture in the top right shows a typical CRT--let's assume for the sake of argument that LCDs are similar.

    If you add a yellow LED to that it just isn't going to add much. The yellow part of the spectrum is already fairly well represented.

    *But* if they also change the hue of the green LED toward the blue spectrum then it has a good chance of really opening up the gamut.

    The people saying RGB is enough don't understand chromaticity--go look for gamut plots of your favorite output devices and see how little of the full spectrum of colors they can actually reproduce. Printers are especially embarrassing. Your eyes can really see a whole lot of color detail.

Some people believe that, since we have just two ears, stereo sound is enough. Others, on the other hand, believe the experience to be enhanced with 5.x surround sound systems.

    I have not seen the results of this 4th yellow pixel display, but I might guess that there comes with it a newer and better enhancement over traditional RGB output. One might believe that since the eyes can only see combinations of red, green and blue light, that display devices only need to produce light of those colors. But pe

    • by macraig (621737)

      What you just said might as well have been doublespeak. It says nothing at all. Why bother?

  • "It sounds more like hype to extract a higher profit margin...."

    Oh, you mean like a 240 Hertz refresh rate, when the actual changes to the product cost virtually nothing? Or "LED" TVs that aren't driven by LEDs at all but merely backlit by them?

  • by seanvaandering (604658) <sean.vaanderingNO@SPAMgmail.com> on Saturday May 08, 2010 @04:27PM (#32141668)
    Right?

    /obscure? Hopefully not for the /. crowd...
  • by Sivar (316343) <charlesnburns[@]gmail...com> on Saturday May 08, 2010 @04:30PM (#32141706)

    Digital images are displayed in RGB, yes.

    But colors are printed in CMYK (Cyan Magenta Yellow Black), and you'll notice that the best photo inkjet printers have more than just those four color cartridges. They often have the four plus "photo cyan", "photo magenta", etc. and it does make a huge difference.

    As you know, some colors cannot be accurately expressed in CMYK, nor can some in RGB (even though theoretically any color is possible, but theory is not reality in this case).

    While the extra color may or may not make a big difference, there is at least precedent indicating that the idea is sound.

  • Submitter fail. (Score:5, Insightful)

    by blair1q (305137) on Saturday May 08, 2010 @04:35PM (#32141732) Journal

    "And I laugh at how you are supposed to see the advantages of 4-color technology in ads on your 3-color sets at home as you watch their commercials."

    But the script of the commercial is written almost entirely with deference to that fact.

The estimable Mr. Takei tells you, while you're no doubt ogling his adam's apple instead of listening, that he can't actually show you the difference itself, but, "I can show you this," whereupon he looks at the screen and gives his review in a single, somewhat gaudily overacted word.

    I'm not sure how anyone misses that, since his behavior is utterly bizarre without the concept of telling-not-showing being in play.

  • by phantomcircuit (938963) on Saturday May 08, 2010 @04:37PM (#32141754) Homepage

    http://regmedia.co.uk/2010/05/07/quattron_4.jpg That just about sums up the entire article.

  • by paulsnx2 (453081) on Saturday May 08, 2010 @04:41PM (#32141782)

If you look at the color spectrum and its wavelengths, you will notice the following:

    red -- 610 to 760 nm
    gap - 590 to 620 nm
    green -- 500 to 570 nm
    blue -- 450 to 500 nm

Now I couldn't find any actual explanation on the net for why Yellow would make a better picture. But if you look at the wavelengths above, you will notice that adding yellow DOES do something. It reduces the gap between Red and Green by half; Yellow sits in that gap, comprising the wavelengths from 570 to 590 nm.

    By this theory, maybe adding Orange (590 to 610 nm) would make an even more realistic picture?

     

    • Re: (Score:3, Informative)

      by paulsnx2 (453081)

Oops! The chart should have been:

      red -- 610 to 760 nm
      gap - 570 to 610 nm
      green -- 500 to 570 nm
      blue -- 450 to 500 nm
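To make the gap argument concrete, here's a minimal sketch that classifies a wavelength against the bands in the corrected chart (the function and cutoffs are illustrative textbook boundaries, not anything from Sharp):

```python
# Rough visible-band classifier using the (approximate) boundaries above.
def band(nm):
    if 610 <= nm <= 760:
        return "red"
    if 570 <= nm < 610:
        return "gap (yellow/orange)"
    if 500 <= nm < 570:
        return "green"
    if 450 <= nm < 500:
        return "blue"
    return "outside these bands"

# Sharp's yellow subpixel targets light that the R and G primaries straddle:
print(band(580))  # → gap (yellow/orange)
```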

  • The tone of TFA isn't what the summary suggests. It doesn't portray the TV as some magical device; the article is actually somewhat critical of it.

    I think the thing that a lot of us don't realize, because we spend so much time looking at TV and computer screens, is that colored light isn't really a combination of red, green, and blue. In reality, light gets its color from its wavelength, and we merely get a very close approximation by combining light we perceive as red, green, and blue.

    The question is whether we can get a more accurate picture by using light that's closer to the original wavelength. The information isn't entirely lost: an approximation of the original wavelength can be inferred by digitally processing the original RGB levels.

    Something to consider is that the original NTSC (American color TV) standard didn't transmit Red, Green, and Blue directly; its YIQ encoding gave extra bandwidth to an axis running roughly from orange to blue. That part of the standard was largely abandoned in practice, but the concept of TVs giving yellow-orange special treatment isn't new.

  • by viking80 (697716) on Saturday May 08, 2010 @05:21PM (#32142052) Journal

    Quick terminology: a spectral color is a pure, single-wavelength color, like a laser; a composite color is a combination of many spectral colors at different intensities. To truly reproduce a color, each pixel would need to produce not just one spectral color but an arbitrary combination of all of them.

    That would be very expensive. Fortunately, excluding the low-light rods, our eyes have cone sensors peaking only near 580 nm (red), 540 nm (green), and 440 nm (blue), so we can get away with RGB screens.

    There are slight errors, though. Assume each R-G-B emitter matches the peak sensitivity of the eye's corresponding sensor. We can then reproduce almost any light stimulation by exciting a linear combination of the three emitters. The eye, however, is sensitive from 380 nm to 740 nm, and a linear combination with only positive weights obviously cannot recreate the stimulation of 400 nm light, nor of 700 nm light; those spectral colors are outside the gamut of the display. Take a picture of a prism spectrum or a rainbow, compare the original with what you see on the monitor, and you can see this.

    So, bottom line: RGB covers almost all colors, and adding emitters lets the linear combination cover more of the possible stimulation, but at a high cost for little value. It is primarily the near-UV purplish blue below 440 nm and the warm near-IR reds that cannot be reproduced.
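That "positive linear combination" limit can be sketched numerically. The snippet below checks whether a CIE xy chromaticity lies inside the triangle formed by the sRGB primaries; the primary and spectral-point coordinates are approximate textbook values, not measurements of any real panel:

```python
def barycentric(p, a, b, c):
    # Express point p as u*a + v*b + w*c with u + v + w = 1.
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    u = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    v = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return u, v, 1 - u - v

# Approximate sRGB primaries in CIE xy chromaticity coordinates.
R, G, B = (0.64, 0.33), (0.30, 0.60), (0.15, 0.06)

# A color is displayable only if all three weights are non-negative.
inside = lambda p: all(w >= 0 for w in barycentric(p, R, G, B))

print(inside((0.3127, 0.3290)))  # D65 white point: True
print(inside((0.7347, 0.2653)))  # ~700 nm spectral red: False (out of gamut)
```

The spectral locus point needs a negative weight, which a physical emitter cannot provide — exactly the limitation described above.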

  • by mc6809e (214243) on Saturday May 08, 2010 @08:48PM (#32143412)

    With RGB pixels on an LCD, yellow is shown by allowing light to pass through neighboring red and green subpixels. For the red subpixel, blue and green are filtered out. For the green subpixel, blue and red are filtered out. Then the eye fuses the neighboring pixels together to get yellow from two sources that have already filtered out much of the spectrum. But with a single yellow subpixel, only blue light is filtered out and more light reaches the viewer. I'm sure the effect is to make certain colors more vivid.

    Additionally, the yellow subpixels somewhat increase the effective resolution.
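A back-of-envelope version of this throughput argument (the one-third/two-thirds passbands are idealized assumptions, not Sharp's actual filter curves):

```python
# Compare per-unit-area light throughput for "yellow" on the two panel types.

# RGB panel: yellow comes from a red subpixel plus a green subpixel,
# each passing roughly one third of the white backlight spectrum.
rgb_yellow = (1/3 + 1/3) / 2   # averaged over the two subpixel areas

# RGBY panel: a dedicated yellow filter only removes blue,
# passing roughly two thirds of the spectrum.
rgby_yellow = 2/3

print(rgby_yellow / rgb_yellow)  # → 2.0, i.e. about twice the light
```

Under these idealized filters the dedicated subpixel delivers about twice the yellow luminance per unit area, which is consistent with the "more vivid" claim.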

  • Sure, it could be (Score:3, Interesting)

    by Craig Ringer (302899) on Saturday May 08, 2010 @09:08PM (#32143500) Homepage Journal

    In terms of color theory, nothing stops this from being real. If you expect to hook this up to some random source and get an improvement, though ... good luck. It's not going to happen. Feed it an appropriate 10-bit or 12-bit wide-gamut source, however, and it's certainly capable of better results.

    The input may be 3-color (RGB), but if it's defined with a wide-gamut space like Adobe RGB, possibly with up to 16 bits of precision per colour channel, then it can represent a huge range of colours. It can do this by defining near-"perfect" primary colours and assuming perfect control over blending of those primaries.

    A regular TV, though also an RGB device, has a very different gamut. That's largely because the primary colours the TV uses aren't as bright/saturated or as "perfect" as those in the Adobe RGB space, but it also can't blend its colours as well. Most likely it only uses 8 bits per colour channel, so it has a much more limited range of gradations, further forcing the colour space to be narrowed to avoid banding due to imprecision.

    The regular TV must "scale" a wide-gamut input signal in a colour space like Adobe RGB to display it on its own more limited panel. It can do this by "chopping off" extreme colours, by scaling the whole lot evenly, or by several other methods that are out of scope here. The point is that they're both RGB devices, but they don't share the same colour space and must convert colours between them.
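The two mapping strategies just mentioned can be sketched as follows (deliberately naive; real video pipelines use far more sophisticated perceptual gamut mapping):

```python
# Two naive gamut-mapping strategies for channel values that fall
# outside the displayable 0..1 range after colour-space conversion.

def clip_map(channels):
    # "Chop off" extreme values: in-gamut colours are untouched,
    # but out-of-gamut colours lose detail and can shift hue.
    return [min(max(c, 0.0), 1.0) for c in channels]

def scale_map(channels):
    # Scale the whole colour evenly: relative channel ratios are
    # preserved, but the colour is darkened/desaturated overall.
    peak = max(max(channels), 1.0)
    return [c / peak for c in channels]

wide = [1.2, 0.5, 0.1]   # a colour outside the target gamut
print(clip_map(wide))    # → [1.0, 0.5, 0.1]
print(scale_map(wide))   # → [1.0, ~0.417, ~0.083]
```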

    So, if the yellow pixel (another primary) expands the gamut of this new TV, then yes, even though it too only takes an RGB signal, it's in theory better, because it can convert a wide-gamut RGB input to its own RGBY space for display with better fidelity than a TV with the same RGB primaries but no Y channel could achieve.

    Another device might still be plain RGB, but for each of the red, green, and blue primaries it might have much better (closer to "perfectly red", etc.) colour. This device might have an overall wider gamut (i.e. a better range of colours) than the RGBY device, though it's likely that the RGBY device's gamut would still be capable of better yellows. (If you're struggling to figure out what I mean, google for "CIE diagram RGB CMYK" to get a feel for it.)

    Attaining better results through adding a channel and/or having better R, G, B primaries presumes properly colour-managed inputs to gain any benefit, though. In reality, video colour management is in a pathetic and dire state - inputs can be in any number of different colour spaces, there's no real device-to-device negotiation of colour spaces, and it's generally a mess. If you feed a "regular" narrow-gamut source through to a TV that's expecting a wide-gamut signal, you'll get a vile array of over-saturated, over-bright, disgusting colour, so this is important. Since this device would rely on wide-gamut RGB input to have any advantage, it'll need a 10-bit or 12-bit HDMI or DisplayPort input with a source that's capable of providing a wider-gamut signal (say, Blu-ray) and is set up to actually do so rather than "scaling" the output video gamut to the expectations of most devices.

    The fact that most inputs only support 8 bits per channel (and thus aren't very useful for wide-gamut signals because they'll get banding/striping in smooth tones) really doesn't help.
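The banding point is easy to see numerically: over the same 0..1 signal range, 8 bits gives only a quarter as many distinct gradations as 10 bits (a generic sketch of rounding quantization, not any particular panel's quantizer):

```python
# Count the distinct output codes a smooth 0..1 ramp can land on
# when quantized to a given bit depth (simple rounding).
def distinct_levels(bits, samples=100_000):
    steps = (1 << bits) - 1
    return len({round(i / samples * steps) for i in range(samples + 1)})

print(distinct_levels(8))   # 256 codes
print(distinct_levels(10))  # 1024 codes -- four times finer steps
```

Spreading a wider gamut across the same 256 codes makes each step a bigger visible jump, which is why wide-gamut signals want the extra bits.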

  • by guidryp (702488) on Saturday May 08, 2010 @11:34PM (#32144366)

    Color on RGB monitors is currently a fine match for the standard broadcast/HDTV/Blu-ray gamut, and LCD monitors are plenty bright, so this really doesn't solve a problem anyone was actually having.

    Sharp has among the worst LCD tech (IMO), with weak (grey) blacks and a lot of viewing-angle shift.

    The first reviews I read say these problems persist, so Sharp didn't work on the real (hard) problems they have with their technology. Instead they decided to tackle something they could use as a marketing differentiator to impress the rubes.
