Displays

RGB to become RGBCMY

elgatozorbas writes "The basic color elements of television have not changed much since 1954; a half-century after RCA introduced the first color set, the RGB (red, green and blue) system used then still prevails. But Israeli company Genoa Color Technologies has broken the RGB barrier by adding one to three primary colors such as yellow, cyan and magenta, thus expanding - from 55 to 95 percent - the coverage of the visible color gamut. The promised result of this multi-primary color (MPC) technology is a television picture that, with its truer, more vibrant color and brighter image, looks more like cinema than video. Also covered in IEEE Spectrum."
  • by r_glen ( 679664 ) on Monday August 16, 2004 @11:48AM (#9981827)
    Does this mean I should hold off on buying an HDTV?
    • by Elecore ( 784561 ) on Monday August 16, 2004 @11:50AM (#9981859) Homepage
      I wouldn't. It's taken so long to get HDTV "standard" that it will take just as long to get this new standard in. If everybody has just upgraded to HDTV, they won't want to upgrade to this. These guys are about 5 years too late, it seems :(
      • by Anonymous Coward
        Yeah but these guys broke the RGB barrier!!!
      • by Tlosk ( 761023 ) on Monday August 16, 2004 @12:02PM (#9982008)
        This isn't a new standard, it's just an after-effect applied to existing signals. In the same way that high-end sets have special filters, such as a comb filter that gets rid of the jagged comb-like fingers from rapidly moving objects in interlaced TV images, this is something that just makes existing TV look better. In other words, there will be HDTV sets with this, and HDTV sets without it. Although if it is as cheap to integrate as they suggest, it might become common on all sets (and other display devices).

        Since they are supposedly coming out with sets later this year, I would probably wait myself if I were about to drop a couple grand on a new set, and get a look at the technology in the showroom.

        Maybe it's because we're spoiled by the high resolution of computer monitors, but I can barely stand to watch normal TV; even the majority of the newer plasma/LCD TVs have horrible images. There's a lot of room for improvement. The best ones I've seen, in my opinion, are DLP rear-projection sets, but then I haven't really kept up with it the last year or so, so there might be better-looking stuff out there now.
        • by dorlthed ( 700641 ) <mxc511@psu.edu> on Monday August 16, 2004 @12:19PM (#9982186)

          Not to be a nag, but that's not what a comb filter does, bud. It separates the luminance from the chrominance in an analog TV signal. When viewed on an oscilloscope, the peaks of each alternate with each other, giving the appearance of a comb.

        • by MunchMunch ( 670504 ) on Monday August 16, 2004 @12:47PM (#9982540) Homepage
          From the article: "How the algorithm does that, precisely, is a secret well kept by Genoa. 'It's part of their intellectual property,' Stone says. What's certain, according to her, is that even though Genoa's technology increases the range of colors, it's not recovering the full original color information of a movie on film, lost in the conversion to other formats, like DVD. 'It's kind of arbitrarily making images look better,' she says, though people will in fact prefer the resulting colors, which will typically be more saturated and brighter."

          And here's what you said: "This isn't a new standard, it's just an after effect applied to existing signals."

          While you're right that it can be used in transitional technology, you're wrong that it's "just" an after-effect. Nobody would say that Technicolorized B&W reproductions are the same as actual full-color originals. And here, you're going to need a format that preserves color information in the new five-color system if you're going to exploit the real improvement in this color technology: closer reproduction of actual color.

          • by cmowire ( 254489 ) on Monday August 16, 2004 @01:04PM (#9982771) Homepage
            It's probably simpler than you think.

            CMY are really "combinations" of R, G, and B.

            So, what's happening is that they are tossing in "intermediate" colors in roughly the same way as a 6- or 7-color printer. The exact equations are probably proprietary, but the process is pretty standard.

            This comes into play in two places. First, HDTV has a pretty ambitious color gamut, so videos designed around the HDTV gamut will look better, assuming of course that the source footage is equally high quality.

            Second, there are colors that your eye can perceive that are not representable by the RGB system.

            Overall, the research is already done. There are actually quite a few different ways to represent this data; PhotoCDs already use one. You want to use L*a*b or XYZ or one of the other CIE color systems.

            I think it's interesting, but when I read the headline, my first thought was "Gee. What took them so long?"
            • by Cuthalion ( 65550 ) on Monday August 16, 2004 @01:30PM (#9983121) Homepage
              CMY are really "combinations" of R, G, and B.

              This is false. C, Y, and M are different wavelengths of light from R, G, and B. Because the human eye only has receptors for R, G, and B, we can't distinguish between equal quantities of R and G and a single wavelength in between the two, namely Y. In other words, we are able to trick the eye into perceiving a full color spectrum using only three different wavelengths of light.

            • by canavan ( 14778 ) on Monday August 16, 2004 @01:34PM (#9983192)
              CMY are really "combinations" of R, G, and B.

              They are on your standard RGB monitor, but not in the general case. For example, take a look at the CIE "tongue" chart displayed e.g. here [virginia.edu]. With your monitor, you can only display colors in the red, green, blue triangle, but one could add pure cyan at 490nm and actually increase the area/gamut.

              Second, there are colors that your eye can perceive that are not representable by the RGB system.

              That would be the good old RCA, phosphor-based RGB system. If you ran your display with e.g. lasers at 410, 520, and 700nm respectively, you could get a gamut that's almost indistinguishable from the full gamut the average eye can perceive. The smaller area covered in the green region at the top of the chart would probably be negligible due to the decreased capability of the eye to distinguish between greens. So RGB itself is not the problem; the technology to record and display it is.
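
              To put numbers on that 490nm point, here's a minimal Python sketch. It assumes the CIE 1931 2-degree color-matching values at 490nm and the standard XYZ-to-linear-sRGB matrix (sRGB primaries standing in for a typical monitor's phosphors); a negative component means the color falls outside the RGB triangle.

                  # CIE 1931 color-matching function values at 490 nm (xbar, ybar, zbar)
                  X, Y, Z = 0.0320, 0.2080, 0.4652

                  # XYZ -> linear sRGB (IEC 61966-2-1 matrix, D65 white)
                  r =  3.2406 * X - 1.5372 * Y - 0.4986 * Z
                  g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
                  b =  0.0557 * X - 0.2040 * Y + 1.0570 * Z

                  print(f"linear sRGB = ({r:+.3f}, {g:+.3f}, {b:+.3f})")
                  # ~(-0.45, +0.38, +0.45): the negative red means no positive mix of
                  # R, G, and B reproduces spectral cyan, so a dedicated 490 nm
                  # primary genuinely adds gamut rather than just re-mixing R, G, and B.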
            • It's funny, cause when I read the headline, I thought 'what the Hell kind of good will that do?', but after a little thought, this started to sound useful. I had never tried to think outside the RGB world because it 'technically' displays all colors, though it struck me that the colors in-between RGB will come out dimmer than they should.

              I think the first thing to spring to graphic artists' minds is 'when can I get a monitor like this?' And also, how much of a strain would it be for a video card to compu
              • by The Snowman ( 116231 ) * on Monday August 16, 2004 @05:47PM (#9985828)

                I had never tried to think outside the RGB world because it 'technically' displays all colors, though it struck me that the colors in-between RGB will come out dimmer than they should.

                No, RGB technically displays more discrete colors than our eye can see. That does not mean it "displays all colors." There are some colors RGB displays that we cannot distinguish between, and there are some colors we can distinguish that RGB cannot display.

          • by Ungrounded Lightning ( 62228 ) on Monday August 16, 2004 @05:46PM (#9985804) Journal
            ... you're going to need a format that preserves color information in the new 5 color system if you're going to exploit the real improvements in this color technology: closer reproductions of actual color.

            Absolutely not true.

            For people with normal color vision, in addition to the "rod" pigment (which is not a significant player in color perception and daylight central vision) there are three color receptor pigments located in the "cone" cells, which have broad reception peaks with well-known shapes. The response of those three sets of cells to an image can be accurately modeled by using three sets of sensors and filters that model the three pigments' frequency response.

            The problem comes when, given this measurement, you try to stimulate a viewer's cone cells to produce the response equivalent to the light you measured. If you just pick three color phosphors at the peak of the three dyes' response curves, you find that the colors don't stimulate JUST the cones you intended. The green light, for instance, will strongly stimulate the green-responsive cones. But it will also weakly stimulate the red and blue cones. Similarly, red light will strongly stimulate red cones, weakly stimulate green cones, and very weakly stimulate blue cones. Ditto the other way around with blue light.

            This has two effects:

            First: Even within the range of combinations of stimulus the three light sources can produce, simply playing back the signal will cause the results to be somewhat more pastel than the original scene. This can be compensated for to some extent, by subtracting out appropriate amounts of each color's signal from the signals going to the other color emitters.

            Second: You can't make the emitters emit a negative amount of light. The result is that there are scene colors, saturated and nearly-saturated colors between the phosphor colors you chose for reproduction, that produce color sensations these three screen colors can't reproduce. These scene colors will ALWAYS appear somewhat washed-out if you only reproduce the image with three screen colors.

            So with three values you can accurately transmit any color a normal eye can see. But with three phosphors you can't make the eye see some of these colors.

            The two-dimensional representation of the relative responses of the three dyes looks something like a spearmint leaf with the base sliced off. (See figure 12 of this [virginia.edu] web page. And thank you, canavan [slashdot.org].) The edge of the leaf represents the response to a pure spectral color, and regions within it to mixes of colors. If you try to reproduce the response with three phosphor colors, you are picking three points on the leaf edge and drawing a triangle between them. By adjusting the relative amounts of light from the three phosphors you can produce a stimulus corresponding to any point WITHIN the triangle. But you can't produce one corresponding to the arcs of the leaf that are outside the triangle.

            But by picking more points along the leaf edge you can draw a polygon and hit any point within it. This covers more of the leaf and leaves fewer colors missing. (Indeed, just a couple extra points can give you most of the leaf.)

            You still send the signal with the three values corresponding to the response you want from the eye. But now your monitor processes it into more than three colors to put on the screen, to get the eye to respond more closely to the response it would have had to the original scene.

            (Note that people with some forms of color blindness have cones with pigments that have abnormal frequency responses. Such people will not see a color TV image as right even with this upgrade, because the camera will not have correctly encoded what THEIR eyes would have seen. They need a camera with a different response, and yet another set of phosphors in the monitor, to get a good match.)
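
            The triangle argument is easy to check numerically. Here's a small sketch; the primary chromaticities are illustrative Rec. 709-style values, not any particular set's phosphors. Compute a target's barycentric weights with respect to the three primaries: a negative weight means the target lies on one of the arcs outside the triangle, i.e. it would need a negative amount of some primary.

                def barycentric(p, a, b, c):
                    """Weights of point p in triangle (a, b, c); all >= 0 means inside."""
                    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
                    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
                    w1 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
                    w2 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
                    return w1, w2, 1.0 - w1 - w2

                R, G, B = (0.64, 0.33), (0.30, 0.60), (0.15, 0.06)  # CIE (x, y) primaries
                for name, target in [("white", (0.313, 0.329)),
                                     ("spectral cyan", (0.045, 0.295))]:
                    w = barycentric(target, R, G, B)
                    verdict = "inside" if all(wi >= 0 for wi in w) else "outside the triangle"
                    print(name, [round(wi, 2) for wi in w], verdict)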

    • there's always new tech coming down the pipe, gotta jump sometime. besides, i think the odds of seeing this at a reasonable price in any decent sized tv in the next 5 yrs is slim to none.

    • by gessel ( 310103 ) on Monday August 16, 2004 @07:58PM (#9986664) Homepage
      No. This is just moronic marketing hype from people who should know better, targeting people who don't.

      First of all, it's not a new idea; we looked into it at Apple in the mid-'80s as a way of getting more brightness out of LCDs, using a CMYG pattern for example.

      Second, a cursory glance at the CIE diagram [gsu.edu] teaches those who understand how it works that well-placed RGB primaries cover almost the entire visible gamut (90% or so). There just isn't 20% left to add with a few more primaries, let alone 65%. That's not how vision works. (A cyan primary might add about 10%, but a yellow doesn't do much of anything, and magenta just isn't a primary.)

      And third, neither video nor movies are color matched anyway. There's no "right" color for a TV program; it's what you want it to be. That's why NTSC stands for Never Twice the Same Color. Expanding the gamut is just like turning up the saturation on your TV. Is your saturation maxed? If so, you'd probably like a TV with a larger gamut. (OK, it's not quite that simple, but video programming is targeted to the typical gamut of a TV, so the new technologies typically have to be turned down or they look unnatural, as the article described. That is, if you really use the new gamut, it looks borked anyway, unless you like that sort of thing.)

      If you've got crappy, unsaturated primaries, then adding more colors can expand the range, but at the expense of monumental complexity in the color math. C'mon - getting color matching to work even marginally right with only three primaries is a task yet to be even partially achieved. How many of you have color-calibrated monitors? And you want to add more primaries? Get a grip on the 3 you've got!

      The press release does speak to a truth of subtractive color displays (like LCDs but not CRTs): there is an intrinsic trade-off between color purity (gamut) and brightness. Of course you can always use a brighter lightbulb/backlight... or an alternative primary-color technology like CRTs, LEDs, OLEDs, or lasers, today. Large-screen OLEDs would have a far better gamut than this crap anyway.

      If you want to see amazing color, look to laser displays or Sony's new reflective ribbon technology (which uses a laser as the source). With pure RGB primaries like those, there's no advantage to be had from extra primaries...

      As for the technology being unique or special (not short-bus special, though it is that), it's not. Your 5/6/7/etc.-color inkjet printer does exactly the same thing. With reflective images (subtractive color) you don't really have primaries, you've got inks, and long ago people chose to print in the RGB complement, CMY. (The K part is just because most inks suck and CMY all together would be grey, not black, so they added the black - sound familiar? That's only about 100 years old.) Anyway, looking back at our old CIE diagram, we see that cyan, magenta, and yellow inscribe a wee triangle even with fully saturated inks, so Epson chose to add a few more colors (and then more, and more) and figure out the color math behind the transformation from CRT RGB primaries (or CIE LAB) to CMYKC2Y2M2 etc. It works well with printers (Epson was actually copying Pantone's Hexachrome offset process, which itself is probably not the first).

      It's an OK idea to improve the image quality of the color mixing functions used to filter incoming light for color cameras (typically CMYG, though some cameras now use RGB), but it's just silly with LCDs. If you're really a color fanatic, you're probably using a CRT anyway.

      As an aside, in the pursuit of some research about 10 years ago I found a paper presenting research on capturing archival images of paintings and other works of art, seeking to eliminate all possible metamerism between the color mixing functions of the detector and the human visual system. The authors found that doing so required a 7-primary system. I haven't been able to find the article again and I'm not
  • by Anonymous Coward on Monday August 16, 2004 @11:49AM (#9981837)
    It's almost enough to make me wish I was a mutant mother of a color blind son [utk.edu].
    • by Dr. Zowie ( 109983 ) <slashdot@deforest.org> on Monday August 16, 2004 @12:20PM (#9982188)
      ... if you have normal vision.

      Most folks don't realize it, but there really are four primary colors. Most geeky types are familiar with the red, green, and blue cone cells in our eyes -- but the rod cells that are used for night vision have their own separate response spectrum, weighted heavily toward the blue/violet end of the spectrum.

      That means you have four separate "detector systems" in your eye, each of which is sensitive to a different slice of the optical spectrum. In particular, you can distinguish shades of violet and magenta that differ only in the blue-cone/rod response levels.

      Ever think about why blue light is used universally to signify "darkness" or "moonlight" on stage? It's because, in low light levels, your cones shut down and your rods -- which in bright light connote blueness -- are the only part of your retina that works well.

      It's also the reason why night-vision flashlights are red, and why blue LEDs appear so bright when used as flashlights. The red light doesn't stimulate your rods, preserving their sensitivity; and the blue light gives you extra rod stimulation per unit power, making blue LEDs very efficient as nighttime illumination.

      • by budgenator ( 254554 ) on Monday August 16, 2004 @12:58PM (#9982694) Journal
        The red light doesn't stimulate your rods, preserving their sensitivity;

        More importantly, it doesn't cause the pupillary contraction that yellow light does. Also, blue-filtered flashlights are favored now because present NODs (Night Observation Devices) are made with enhanced red sensitivity, often extending into the near-infrared, and are pretty much blue-blind. Blue scatters too much in fog, mist, and smoke; that's why fog lights are usually yellow.

        • by Wyzard ( 110714 ) on Monday August 16, 2004 @04:23PM (#9984998) Homepage
          Blue scatters too much in fog, mist, and smoke; that's why fog lights are usually yellow.

          Incidentally, it's also why the sky is blue during the day and orange/red at sunrise and sunset. When the sun is overhead, blue light gets scattered in the atmosphere, giving the whole sky a blue look. When the sun is near the horizon, there's a greater thickness of air between it and you, which scatters all the blue light away (toward the part of the Earth where the sun is overhead, and some back into space).

      • by Kenshin ( 43036 ) <kenshin@lunarworks.ca> on Monday August 16, 2004 @01:00PM (#9982728) Homepage
        I may be 25, but I turn into a 12-year-old when someone says something like "stimulating your rods" in a scientific explanation.
    • by QuantumRiff ( 120817 ) on Monday August 16, 2004 @12:32PM (#9982316)
      That article explained a lot. My GF asked me to hand her her red shirt. I did.
      She said, "That's ruby, I meant the red one."
      So I handed her one of the other red ones.
      "No, that's rose."
      On and on this goes, until I finally tell her to pick the damn red shirt herself; she goes into the closet, takes a look at the 12 "red" shirts she has, and says, "See, the red one, stupid." From what my buddies tell me, this is a very common issue, and perhaps the reason these women have been overlooked for so long is that most of the doctors are men, and they just think the women are crazy. (My GF informs me that it's really the other way around: we simple men are just blind!)
  • by Walt Dismal ( 534799 ) on Monday August 16, 2004 @11:50AM (#9981852)
    Certainly makes one wonder what happened to three-color retinas...
    • by ron_ivi ( 607351 ) <sdotno@cheapcomplexdevices.com> on Monday August 16, 2004 @11:57AM (#9981955)
      The gains of the three-color retinas in our eyes didn't line up well with the gains of three-color camera sensors, making for anomalous colors, like blue things looking red [libertythink.com] with certain camera sensors.

      Also, each of the three colors commonly used (RGB) is artificially dark, with each one blocking about 2/3 of the light (since they only let that one color through). So if you think about it, your "white" background is really not as bright as it could be. Some DLP [dlp.com] projectors I think use red, green, blue, and white to get some of this contrast back. But I think these guys have a more interesting idea. Your cyan pixel, letting through both blue and green light, would be brighter than either your plain blue or plain green or blue&green next to each other.

      • Some DLP projectors I think use red, green, blue, and white to get some of this contrast back.

        No, that's not for contrast; that's for peak brightness. Since all colors those devices can generate are linear interpolations of the filtered colors, all you can get with white thrown in is bright, non-saturated colors.

        Your cyan pixel, letting through both blue and green light, would be brighter than either your plain blue or plain green or blue&green next to each other.

        But you couldn't make all things
    • by tiltowait ( 306189 ) on Monday August 16, 2004 @12:01PM (#9981997) Homepage Journal
      There are three primary additive colors and three primary subtractive colors. Cecil [straightdope.com] explains it rather well.
    • by Cecil ( 37810 ) on Monday August 16, 2004 @12:05PM (#9982048) Homepage
      Yes, our eyes only have three types of cones, but unlike the color projected by a TV, they are not designed to respond to just one frequency of red, one of green, and one of blue. They have broad, overlapping response curves, each cone giving a different level of signal depending on the frequency of the light. The brain figures out the color based on the response of all three types of cones, not just the one that is active.

      The stuff above is fact, the rest of this post is my pointless, unscientific, meandering hypothesis:

      Obviously we use this concept with RGB signals to create colors like yellow, by tickling both the red and green cones at once with neighboring phosphors, but since the two colors are coming from very very slightly different places, the brain is not necessarily satisfied that it really is the color yellow. Basically, the more spectrum we can cover natively, the less chance there will be of someone's brain mumbling "that color doesn't seem... right"
      • by iabervon ( 1971 ) on Monday August 16, 2004 @02:07PM (#9983568) Homepage Journal
        The human brain rarely says something isn't the right color. There's a huge amount of slop in the brain needed to produce the perception of stable colors of objects under different lighting conditions (if you light a room with light blue light, your eyes will adjust and report the usual colors of objects, even though the light reaching your eyes from them is obviously different).

        The real issue is that, since the curves overlap, the green phosphor triggers the red cone to a certain extent, so green plus blue is cyan plus a bit of red, or a bit less cyan plus a bit of white. So the purest cyan you can trigger in the eye with an RGB screen is less pure than the purest cyan you can find in the real world. Purple is more of a mess (since the brain is actually making up colors for combinations that aren't generated by any pure wavelength, and faking the idea that red is next to violet). But it all comes down to limits on the saturation of different colors due to not being able to keep from stimulating some cone or other.
        • Go caving sometime (Score:4, Interesting)

          by freeweed ( 309734 ) on Monday August 16, 2004 @04:30PM (#9985076)
          There's a huge amount of slop in the brain needed to produce the perception of stable colors of objects under different lighting conditions

          Boy, you can say that again. For anyone who *really* wants to experience this, I suggest you go caving some time, in a cave deep enough that no outside light penetrates. Last weekend a group of us were out, and we all had different models of headlamps. Now, the cave we were in has 3 interesting things going for it here: very banded & multicoloured rock, lots of ice (again somewhat multicoloured due to how it forms over the centuries), and human artifacts (a fair bit of paint on the walls, general human refuse, etc.).

          Here's the trick: you're in an area where your eyes have never seen the surroundings in natural light. Effectively, you have no reference point to know what colour things are. Now, I personally have one of the newer LED/incandescent combo headlamps (an amazing combination, by the way; for those with any doubt, 3 white LEDs will provide more than enough light for at least 20' around you - no more trying to focus right in front of your feet :). Alternating between the LEDs (white light) and the bulb (yellow light) was... interesting. My eyes couldn't decide what colour things were. Relatively speaking, sure. But I'd go for a while with just the LEDs, my eyes would get used to that, then I'd switch to the bulb and suddenly everything got weird. Even subtle things like depth cues get messed up, because your brain is frantically trying to re-colour what you're looking at.

          This really didn't happen with things like our clothing or other gear, because my brain "knew" what colour that stuff was, having seen it outside, and it adjusted easily. But the rocks, ice, and *especially* the tagging on the walls - very creepy effect. Things that looked green in one light could be red in another. The ice was fun, because it's actually somewhat brown/yellowish in some layers (dirt, I suspect). But the brain wants to colour it blue-white.

          We also had a good game of "guess my eye colour" - many of these people didn't know each other very well. I think we scored less than 50% overall :)
    • by osu-neko ( 2604 ) on Monday August 16, 2004 @12:05PM (#9982055)
      Nothing. This just provides a better way to stimulate them. If one had the technology to vary the intensity of red, green, and blue over an infinite set of real values, then RGB would be able to perfectly replicate any color. In reality, the RGB color model used in displays today varies these values over a finite set of integers.

      One gets the best ability to reproduce colors that are red, green, or blue. Colors between these on the spectrum can be simulated by mixing these, thanks to the three types of cones we use to process color on the retina. But if reproducing a particular color requires 255 parts red to 41 parts green, we simply cannot increase the intensity of that color without distorting it (shifting it towards green, because we've already maxed red). Thus, any RGB color model is going to more accurately and vibrantly display reds, greens, and blues, and simpler blends of these (where the mixed values are equal, e.g. cyan); anything else is going to be limited in range, coarser in its steps between intensities, and less vibrant at the max.

      Adding pixels that display actual yellow (light of precisely that wavelength, rather than a blend of red- and green-wavelength light exploiting the trick of stimulating our red and green cones to the same levels that actual yellow-wavelength light would) would increase the ability to accurately display these in-between colors, despite the fact that, in theory, only RGB is necessary. It's easier to add more in-between-color pixels than to increase the intensity range and shrink the steps between intensities.
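
      A toy illustration of that clipping problem, with hypothetical 8-bit values (nothing display-specific): brightening a color whose red channel is already maxed changes its hue, because the extra red has nowhere to go.

          def brighten(rgb, gain):
              return tuple(min(255, round(c * gain)) for c in rgb)

          orange = (255, 41, 0)                        # red already at its max
          print(orange, "->", brighten(orange, 1.5))   # -> (255, 62, 0)
          # The red:green ratio falls from ~6.2 to ~4.1, so the "brighter" color
          # is really a different, greener hue; only the un-maxed channels grew.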
      • If one had the technology to vary the intensity of red, green, and blue over an infinite set of real values, then RGB would be able to perfectly replicate any color.

        Not really. The thing is, everyone's eyes are different.

        As you probably know, our cones respond to the intensity of red, green, and blue light. More specifically, each type of sensor has its peak sensitivity at approximately those colors. Our red sensor responds a little bit to blue light, our blue sensor responds a little to red light, etc.

      • It's not the discrete gaps that are the problem! RGB does not represent all of the visible colors, even theoretically [cs.sfu.ca]. Assuming a perfectly smooth RGB model with infinite intensity and perfect black, and infinitely precise levels of R, G, and B, there is a huge chunk (around 45%, if I remember right) of the visible gamut that is totally unreproducible. CMY covers some areas that RGB doesn't, and vice versa. Neither is the whole gamut. There are more complex models that do cover it, like CIE L*a*b.
  • by krog ( 25663 ) on Monday August 16, 2004 @11:50AM (#9981855) Homepage
    A truly revolutionary idea would be to include and project IR and UV in addition to RGB/CMY. Even though our eyes can't exactly 'see' IR and UV, they still form an important part of our realistic image perception. It's not unlike sounds above 20-25kHz in pitch; we don't 'hear' them, but our brain perceives them nonetheless and they are used for stereo imaging of a space.
    • by tunabomber ( 259585 ) on Monday August 16, 2004 @11:55AM (#9981923) Homepage
      A truly revolutionary idea would be to include and project IR and UV in addition to RGB/CMY.

      Why didn't I think of that? This is huge! It would mean that we cave-dwelling worms will get tans, skin cancer, and cataracts just like everyone else, just by sitting in front of our monitors. Also, we could use the IR radiation to heat our TV dinners so we wouldn't have to keep going back to the oven or microwave to check if it's done yet.
    • Wow...that's a great idea. I mean, what we need is more devices shining UV and IR light directly into our eyes and onto our skin that we willingly stare at for hours at a time. Vibrant colors are worth it!
    • by baryon351 ( 626717 ) on Monday August 16, 2004 @11:57AM (#9981956)
      Those sounds are also felt by other parts of our bodies than ears. I once rescued a small bat, and while it was recuperating, from time to time it would open its mouth and squeal its echolocating squeal. While I couldn't hear it, my partner and I could feel the noise in our chest & neck. I also spent some time videotaping the bat as it flew around the room ready to be released. Whenever it did its noise thing, the levels on the VCR shot way up high and all the other audio dropped out. Powerful stuff, and while it's still sound it was perceived in far different ways than just ears.
    • by Tyler Durden ( 136036 ) on Monday August 16, 2004 @12:00PM (#9981994)
      Oh great, project UV from our TV sets. That would be good.

      "So where did you get that sunburn?"
      "Too much TV I guess."

      Or better yet...
      "Oh neat, Jesse James is about to weld something again..." *ZAP!* "...oh fuck, my eyes!" ;)
    • Aside from the health issues associated with blasting people with waves in the UV spectrum, you'd need to actually capture the data to project it. This would mean using an infrared photodetector in addition to a visible light photodetector to capture video... which is prohibitively expensive.
    • It's not unlike sounds above 20-25kHz in pitch; we don't 'hear' them, but our brain perceives them nonetheless and they are used for stereo imaging of a space.

      No, our brain does not perceive sounds much below 20Hz or above 25kHz, and our ears are physically incapable of receiving them in the first place, unless they're loud enough of course (in which case you feel them instead). I have never read any convincing evidence to the contrary in any paper that isn't written by either a vested interest, or by someone

  • Colors or Pigments? (Score:2, Interesting)

    by ryane67 ( 768994 )
    Last I knew, there were colors (the actual spectrum of light) and then there were pigments of things (which reflect certain colors of that light).
    So now they can project reflected colors, a.k.a. pigments? Hmmm.
  • by erick99 ( 743982 ) <homerun@gmail.com> on Monday August 16, 2004 @11:51AM (#9981866)
    This looks good since it doesn't require a different signal from broadcasters (a la HDTV), and the price to implement seems low; the article notes that the added imaging circuitry comes at minimal cost. Some TVs with this technology are due out within a year. It sounds like something that will do very well. Imagine that: a nice improvement in viewing at a low cost and with an existing signal. Did I miss something?

    Cheers,

    Erick

    • Did I miss something?

      Most definitely. This is just like all of the customized MP3 decoders that came out that were supposed to "enrich" the sound by adding in the lost harmonics. They didn't fare so well because, ultimately, it was just a manufactured enhancement, and it can't compete with the real thing. This is like turning your amplifier up to eleven.

  • While it sure does sound good, I highly doubt that anyone will want to throw away the billions invested in good old RGB TVs and monitors. After all, they're "good enough."
  • Sometimes (Score:5, Insightful)

    by agraupe ( 769778 ) on Monday August 16, 2004 @11:53AM (#9981891) Journal
    Sometimes the most mundane improvements can be the best. All the people who swear by HDTV will be SOL, because they'll have hi-res, but improperly colored, television/movies.
  • by MadRocketScientist ( 792254 ) on Monday August 16, 2004 @11:53AM (#9981898)
    My friends are going to be viridian with envy!
  • Clever product advertisement wrapped neatly into small slashdot article.
  • by morcheeba ( 260908 ) * on Monday August 16, 2004 @11:54AM (#9981906) Journal
    There are a couple of factual errors in this story that make me feel uneasy.

    From the spectrum article:
    While film used in cinema contains pigments that can create an infinitely large number of color variations, TV sets combine discrete amounts of red, green, and blue light to create a much more limited color range.
    This isn't true: color slide film uses three layers, just like monitors do: http://www.imx.nl/photosite/technical/E100G/E100G.html

    He says that in printing it's common to have inkjet devices that use six, seven, or even eight primaries.
    There are good reasons printing uses so many primaries, but it's usually to make a more even tone. My consumer-grade printer has the traditional CMYK (cyan, magenta, yellow, black), but it also has two additional colors: light cyan and light magenta. They chose these lighter colors to make the blending smoother and the ink spots less noticeable; it wasn't to increase the gamut. Printers also use spot color [webopedia.com] to make particular colors (such as a company logo) print without needing to use a halftone. These are all just gimmicks to get around the fact that printing isn't continuous-tone -- in projectors, which are continuous-tone, these tricks aren't needed.

    Basically, it comes down to eyeballs... if you emulate the response curves that your eye is sensitive to [yorku.ca], then you can't perceptually do any better.

    The traditional RGB's and CMY's don't match these curves, so they define a gamut that can be improved on. For example, take this projector's gamut [homestead.com] -- its green is far away from the eye's green, so it can't display the cyans well. But, the color model my company is using for its video product uses a much truer green [convergy.de] so we can cover much more of the gamut.

    disclaimer: IANACE (color expert), but my most recent project has been color calibration to precise standards.
  • I want to see what it looks like.
  • by los furtive ( 232491 ) <ChrisLamothe&gmail,com> on Monday August 16, 2004 @11:55AM (#9981915) Homepage
    Could someone post a screenshot? Preferably one of Natalie Portman's sunburn?

    oh, wait a minute....

  • ...you still won't have a colour-calibrated monitor unless you're a graphics professional, and probably not even then. :(
  • Color Space (Score:3, Interesting)

    by avalys ( 221114 ) * on Monday August 16, 2004 @11:57AM (#9981946)
    Can the human eye even distinguish between such fine variations in color? I know I've never found any flaws with images rendered in 24-bit color.
    • Re:Color Space (Score:3, Interesting)

      This is true, but more colour depth is often needed in compositing work. It's not uncommon for a visual effects shot to be handled at 16 bits per channel, or twice the colour resolution of a 24 bit image. The reason is that it has a greater dynamic range. If you add two bright pixels together, the result will be white. But with more bits per channel, the pixels will be brighter than white, and still maintain values relative to other pixels, so that if you darken them later, no information is lost. Visually
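
      The brighter-than-white point in a few lines of Python (values on a 0-to-1 scale where 1.0 is display white; purely illustrative): a clamped pipeline throws the overflow away for good, while a high-range one keeps it for later operations.

          a, b = 0.9, 0.7                  # two bright pixels being added
          clamped, hdr = min(1.0, a + b), a + b
          print(clamped * 0.5, hdr * 0.5)  # darken later: 0.5 vs ~0.8
          # After clamping, the composite fades like any plain white pixel;
          # with headroom kept, it stays proportionally brighter.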
  • Most people do not regard gamuts and colour spaces as important in their purchases. Perhaps with the critical mass of photography and printing, people may start to be more concerned.

    I for one have given up trying to get Photoshop to display the colours correctly...

    And who cares about increasing the colour space, when the networks are forcing everyone onto digital, highly compressed channels, and also making people buy higher-resolution sets, which will lead to higher compression, and loss of colour informat
  • Nonsense! (Score:5, Funny)

    by Anonymous Coward on Monday August 16, 2004 @11:57AM (#9981958)
    16 million colors should be enough for anyone.
  • by ackthpt ( 218170 ) * on Monday August 16, 2004 @11:59AM (#9981981) Homepage Journal
    Genoa partnered with Royal Philips Electronics NV, in Amsterdam, Netherlands, to implement the new color technology by modifying a family of rear-projection TV sets, which rely on liquid-crystal-on-silicon (LCOS) technology. In their current configuration, these sets produce images by shining red, green, and blue light from filtered white light onto a small microchip embedded with millions of tiny pixels made of liquid crystal that modulate and reflect the light to a lens system. This set of lenses amplifies the image and projects it on the screen, where red, green, and blue light overlap to form secondary colors.

    Adding two extra colors to this kind of projection television has little impact on the price tag, says Simon Lewis, vice president of marketing at Genoa. He says the new Philips color-enhanced set, to be available next year, needs only a few additional filters and optical components to create the yellow and cyan light, with no changes to the more costly microprojection chip.

    ... The promised result of this multi-primary color (MPC) technology is a television picture that, with its truer, more vibrant color and brighter image, looks more like cinema than video.

    Right. Right when we've got all these plants around the world cranking out inexpensive TVs using LEDs and LCDs, some whizzo comes along and says, "Hey, look, a great idea and all you have to do is retool everything, develop some newer technology and keep selling it all at the same pricing you're currently at!"

    Perhaps the main challenge in converting a video stream from a three- to a five-primary color system is doing it in real time, says Maureen C. Stone, ...

    Yay, now we really will need a computer in every TV! More components - more to go wrong, more power consumption, etc.

    "How the algorithm does that, precisely, is a secret well kept by Genoa. "It's part of their intellectual property," Stone says.

    Yay, more intellectual property. This should drive prices down.

    <curmudgeon>
    Why, back in my day we didn't have remote controls and we had a folded playing card stuck beside the tuner knob to keep the picture from doing funny things, and we liked it!
    </curmudgeon>

    I'm sure it will look lovely while we're watching older stuff from the bad old pre-RGBCMY days.

    "Gilligan!"

    I'm like, totally there, dude!

  • I wanna know if I can tell the diff between it and my RGB CRT.
  • This won't be a fully complete standard until they include squant [negativland.com] in their color model.
  • Why? (Score:2, Informative)

    by B5_geek ( 638928 )
    RGB and CMYK are counter-productive.

    RGB are Additive Colours. (You add them together to create White)
    CMY(K) are Subtractive Colours. (You add them together to get black)

    CMYK has been used in the colour copier/printer industry for a long time. It depends on using white paper to "illuminate" the colours that have been added.

    RGB + CMYK negate each other. Considering that any combination of RGB can give you any colour, CMYK can't (for example) give you "fluorescent" colours {without cheating}.

    CRT's use gl
    • Re:Why? (Score:5, Informative)

      by osu-neko ( 2604 ) on Monday August 16, 2004 @12:38PM (#9982413)
      RGB are Additive Colours. (You add them together to create White)
      CMY(K) are Subtractive Colours. (You add them together to get black)
      ... RGB + CMYK negate each other.
      ... while LCD's naturally use a CMYk approach

      Hehe! No, this is quite false, in quite a number of ways.

      First of all, colors of light are additive; colors of pigment are subtractive. This is true regardless of which colors you choose. If you had a monitor using the CYM model, you could not produce red, because monitors, being light-emitting devices, are always additive, never subtractive: mixing C and Y would add their light, not subtract away everything but the G. Because of this, you cannot get a lot of colors. However, you can get white by adding C, M, and Y together. Since monitors are additive, adding CYM makes white, not black.

      The LCDs we use today are light-emitting, not light-reflecting. Thus, they naturally use an RGB color model. If they did not emit light on their own but only reflected it, like a sheet of paper, then their natural color model would be CYM(K). But that's just not how things work.
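
      The additive/subtractive distinction is easy to show in code, using idealized 0-to-1 RGB lights and filters (real inks and phosphors are messier): emitted beams add per channel, while stacked inks multiply transmittances.

          CYAN, MAGENTA, YELLOW = (0, 1, 1), (1, 0, 1), (1, 1, 0)

          def add_light(*colors):           # beams hitting the same spot
              return tuple(min(1, sum(ch)) for ch in zip(*colors))

          def stack_inks(*colors):          # ideal filters over white light
              out = (1, 1, 1)
              for c in colors:
                  out = tuple(o * f for o, f in zip(out, c))
              return out

          print(add_light(CYAN, MAGENTA, YELLOW))   # (1, 1, 1): CMY light adds to white
          print(stack_inks(CYAN, MAGENTA, YELLOW))  # (0, 0, 0): CMY inks multiply to black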

    • Re:Why? (Score:4, Insightful)

      by Dracolytch ( 714699 ) on Monday August 16, 2004 @01:03PM (#9982765) Homepage
      The method with which you combine colors determines whether the mixing is additive, not the colors themselves.

      Remember: It's about emitting light versus absorbing light.

      If you have three flashlights with thin plastic filters in front, one each of cyan, magenta, and yellow... when you combine the beams, things will get brighter (of course... three flashlights). That's because the method being used to create the light is an additive process.

      If it were a subtractive process, then you'd be able to make a "flash dark".

      Because printing is always a subtractive process (paper starts white, and must be made darker), the CMY/K gamut is used. (Notice that these three colors are less "strong" than RGB, making them easier to control and combine for printing.) In really advanced printing, you can get multitudes of colors, to reproduce more variations, or to get more accurate color (because sometimes mixing CMY to get perfect tones isn't as effective as it could be).

      Keep in mind: we use combinational color models because we find them manageable and convenient. However, these color models are not perfect, and cannot be. We won't ever have it perfect until we're able to serve up colors by frequency and have them displayed accurately. Even high-quality film is limited by the chemicals used to make the film.

      ~D
    • noooo (Score:3, Insightful)

      by PenguiN42 ( 86863 )
      RGB aren't "additive colors" and CMYK aren't "subtractive colors." They're all colors, and you can mix them any way you like -- adding or subtracting.

      You wouldn't call a painter "counter-productive" for having red, green or blue paint, would you? Then what's so wrong about a screen having Cyan, Magenta, or Yellow?

      See, there are two ways to mix color: adding them (shining multiple light sources upon a surface, or directly at a receptor), or subtracting them (mixing multiple pigments or overlapping m
    • Correction.... (Score:3, Informative)

      by B5_geek ( 638928 )
      Well, I think I should have all my comments modded as -5 idiot.

      As many of you have pointed out, My momma must have dropped me on my head when I was a child.

      I was wrong in the statements that I made. I was purely thinking of the "painter" analogy, and not the "flashlight".

      Sorry, please feel free to delete this thread.
      I am an idiot.
  • So what!? (Score:3, Insightful)

    by shubert1966 ( 739403 ) on Monday August 16, 2004 @12:07PM (#9982076) Journal
    That's JUST what we need, more reasons to watch a box all day. I can look out my window and get all the colors all the time. And since I don't watch TV, time is something I've got 28-42 extra hours of every week.

    Tell me you're not in denial - and I won't listen.
  • by DreadPiratePizz ( 803402 ) on Monday August 16, 2004 @12:09PM (#9982097)
    NTSC throws away 3/4 of the colour information, and even HD throws away half. From the article, it seems as if the chip is doing a lot of guessing and not "really" increasing the colour resolution. This sounds like a good way to go, since the codec on the DVD won't have to deal with those extra colours; it's handled at the display.
  • by Glock27 ( 446276 ) on Monday August 16, 2004 @12:25PM (#9982233)
    From the IEEE article:
    What's certain, according to her, is that even though Genoa's technology increases the range of colors, it's not recovering the full original color information of a movie on film, lost in the conversion to other formats, like DVD. "It's kind of arbitrarily making images look better," she says, though people will in fact prefer the resulting colors, which will typically be more saturated and brighter.

    Various video media may not have the necessary color resolution to drive these displays, but (given quality art assets;) newer video cards do [nvidia.com].

    I wonder how these types of displays compare to Iridigm's upcoming products [iridigm.com] on color fidelity. Those look quite interesting, especially at effective 200 DPI.

  • by Qzukk ( 229616 ) on Monday August 16, 2004 @12:27PM (#9982261) Journal
    Real advancement would be discovery of emitters that can match the XYZ Color [dixie.edu] standard. This standard was designed to mimic the actual operation of the eye, and therefore its gamut includes all possible human-observable colors.
    • Impossible (Score:3, Interesting)

      by r6144 ( 544027 )
      Any physically existing color (i.e., the response of the human eye to a light signal with a certain frequency spectrum) is in the horseshoe-shaped area of the CIE chromaticity diagram. The X, Y, and Z base colors are not inside that area, and thus are impossible to produce physically by any means (unless you are going to connect the vision-related part of the brain to something other than a normal eye...).

      Indeed, with a number of primary colors (which must lie in the horseshoe shape), one can onl

  • by Animaether ( 411575 ) on Monday August 16, 2004 @12:32PM (#9982313) Journal
    I'll wait for HDR display [slashdot.org] and feeds, thanks.

    Judging from the gamut chart [ieee.org] for this RGBCMY, the boost in color range is primarily in yellows and cyans. Gold, as they note, would be a good application. Cyan... well, that's mostly skies, and those already appear just fine on TV. There's a fairly decent increase in magentas/purples as well (when taking the asymmetric lobe into account), but again, I'm not seeing much application for it.
    Unless you're following the British royal family (lots of golds and purples) a lot, it doesn't appear to offer all that much. Especially considering movie people butcher things anyway (DVD gives a more stable picture, sure, at the compromise of MPEG artifacting and even encoding issues; twitches every 25 frames are annoying - luckily only a few discs suffer from this).

    On the other hand, a higher dynamic range would be immediately noticeable anywhere.
    A sequence with the sun glaring into the camera?
    A car's headlights shining at the camera?
    Highlights on objects?
    Blown-out surfaces from bright lighting?

    All that could then be represented more accurately. And thanks to most things still being shot on film, or already captured on 10-bit CCDs (formally underexposed, but with gain available to the operator), a good bit of extra range is already available in previous and current productions.
    RGBCMY, meanwhile, would only really be of use for film (as in, actual film) productions, as digital cameras are in much the same RGB limbo that current displays are.
  • Wide gamut displays (Score:5, Interesting)

    by baxissimo ( 135512 ) on Monday August 16, 2004 @12:36PM (#9982362)
    Wow, this is really cool.

    There's a whole bunch of these wide gamut and high dynamic range displays suddenly.

    At SIGGRAPH this year, there was a 6-primary (RGBCMY) projection system called IRODORI on display in emerging technologies:
    http://www.siggraph.org/s2004/conference/etech/irodori.php?=conference [siggraph.org]

    There was also a high dynamic range display (capable of a greater range of brightness) from Sunnybrook Technologies at E-Tech:
    http://www.siggraph.org/s2004/conference/etech/high.php?pageID=conference [siggraph.org]

    And then I saw a few displays on the exhibition floor from NEC with a "WG" specifier for "Wide Gamut". NEC's WG monitor is still RGB, but with purer R, G, and B phosphors to obtain a gamut wider than Adobe RGB.

    And now there's this one. Way cool.

    I can't wait till this becomes more widespread. The question becomes: what will the next color standard be for use in applications and APIs? It doesn't make sense to actually encode color as 6 values for display, since (most) humans only have three kinds of cones. It would make more sense to use something like CIE XYZ for color interchange in that case. Especially if we're going to have this weird mix of HDR and various wide-gamut displays around for a while, each of which has slightly different needs for color output. Best to just go with a neutral, well-defined intermediate colorspace.
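
    For what a neutral three-value interchange could look like downstream, here's a naive sketch: the signal stays XYZ, and each display solves for drive levels of however many primaries it has. The primary matrix below is invented for illustration, and a real set's mapping (Genoa's algorithm is secret, per the article) would certainly be smarter than least-squares-plus-clamp.

        import numpy as np

        # Columns: XYZ of each primary at full drive -- R, G, B, C, Y (made-up numbers)
        P = np.array([[0.41, 0.36, 0.18, 0.14, 0.50],
                      [0.21, 0.72, 0.07, 0.30, 0.71],
                      [0.02, 0.12, 0.95, 0.75, 0.05]])

        target = np.array([0.35, 0.45, 0.40])  # the XYZ stimulus the eye should get

        # 3 equations, 5 unknowns: lstsq returns the minimum-norm exact solution,
        # which we then crudely clamp to physically drivable [0, 1] levels.
        w, *_ = np.linalg.lstsq(P, target, rcond=None)
        w = np.clip(w, 0.0, 1.0)

        print("drive levels:", np.round(w, 3))
        print("achieved XYZ:", np.round(P @ w, 3))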
  • by peter303 ( 12292 ) on Monday August 16, 2004 @12:42PM (#9982468)
    Last week in the emerging technologies section of SIGGRAPH, a company or process called IRODORI was demoing a six-color projection system. (I could not find a reference on Google or www.siggraph.org.) When it was side by side with a conventional three-color system, you saw dramatic differences. Conventional is like looking at the world with wax paper taped over your eyes. They claimed that conventional systems only cover about 55% of the CIE color chart, while they get over 90% of the color space. They bootstrap off of two conventional three-color projection systems, put in different color filters, and add special color-separation software.
  • Bandwidth (Score:3, Insightful)

    by SlipJig ( 184130 ) on Monday August 16, 2004 @01:02PM (#9982749) Homepage
    Won't this require twice the bandwidth to transmit?
  • by Saville ( 734690 ) on Tuesday August 17, 2004 @12:34AM (#9988341)
    I couldn't see this info elsewhere. I was at a colour course at SIGGRAPH 2004 last Sunday for most of the day (8:30am to 5:30pm on just colour!). I also got to see both the IRODORI wide-gamut display and the HDR display; both were very cool. Once we get HDTV, it is clear we can go at least one more step.

    The problem with RGB is that it can't describe all the colours the eye can see. This was a problem for the guys that made Salem cigarettes: their brand's colour lies outside the small RGB gamut! The best they can display for their brand in RGB is only an approximation. Sure, it is a blue-ish, green-ish colour when you see it on TV, but it isn't what you would actually see in reality or on a wide-gamut colour device. They weren't the only company with this problem.

    This is a huge problem for hundreds of thousands of people every day. There are colours that exist that they can't see in their work. They can sit down at a computer, work in an alternative colour space such as L*a*b*, create these colours, and even print them, but thanks to our RGB monitors they can't view them! What do they do when they have to print an ad for Salem cigarettes? Guess and check, I suppose...

    Technically, RGB can represent more colours than we give it credit for; you just have to allow negative values, which is only useful mathematically until we invent anti-photons to remove light...

    Here is a short link that explains the details:
    http://www.cs.sfu.ca/CourseCentral/365/li/material/notes/Chap3/Chap3.3/Chap3.3.html

    A few more things I'll add from that course: HSV is basically the worst colour space, and CIELAB (L*a*b*) is the best. CMYK is technically multiplicative, not subtractive as so many people like to call it. Our eyes are sensitive to short, medium, and long wavelengths, not red/green/blue; RGB happens to mostly match up with what we perceive, but it is an oversimplification.

    For the real keeners, here is a nice FAQ about this:
    http://www.poynton.com/notes/colour_and_gamma/ColorFAQ.html

"For the love of phlegm...a stupid wall of death rays. How tacky can ya get?" - Post Brothers comics

Working...