Samsung to use Sub-Pixel VGA Screens 177

pdawerks writes "Samsung Electronics has developed a new graphics chip that will allow half VGA screens to produce VGA resolution. The novelty is specially aimed at future mobiles with VGA screens that will be less than 2.4 inches. It generates color using an entirely new driving method called sub-pixel unit driving methodology." Not sure if I think it is exactly new or not, but it's nifty.
  • More Information (Score:5, Informative)

    by Temporal Outcast ( 581038 ) on Thursday October 21, 2004 @04:21PM (#10592415) Journal

    More details can be found at Designtechnica [designtechnica.com].

    Geekzone also has a similar article [geekzone.co.nz].

    • Re:More Information (Score:2, Informative)

      by Stanhenge ( 731422 )
      Refer to the patent US#5193008 [espacenet.com] for a technique that increases resolution of a raster device.

      DP-Tek developed this for laser printer devices, but the idea applies to other technologies. Basically, you can place a physical line between adjacent laser scan lines, using the analog memory of the OPC drum.

    • by Dink Paisy ( 823325 ) on Thursday October 21, 2004 @10:53PM (#10595060) Homepage
      I know this is so late that everyone has moved on to the next story, but I was curious about the idea, so I wrote a program to simulate it using my shadow mask CRT monitor and compare it to downsampling and true 640x480. It may work on aperture grille and LCD monitors as well, but it probably won't look as good. Download here [toronto.edu]. Sorry; it's a Windows binary only, and it requires .NET.

      For best results set your resolution low, otherwise it has very visible moire patterns. As a side effect of the conversion, the image gets darker. My program also has a colour cast, which the article claims is due to adding the white pixel. The article also says that Samsung has overcome this problem.

      It works by setting up the subpixels as a 640x480 square grid, with each pixel consisting of a starting pixel, and the right, lower, and lower right subpixels. Subpixel values are calculated using the average intensity of the corresponding colour value in each of the four pixels the subpixel is a part of.

      Visually, aside from the darkness and colour cast which are artifacts of the simulation and wouldn't appear in the real product, it looks decent. It's blurrier than a true 640x480 display, but retains more detail than the 320x240 downsampled version.
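      A rough Python sketch of the averaging scheme described above (a hypothetical reconstruction, not the original program; src is a row-major grid of (r, g, b) tuples):

```python
def subpixel_grid(src, width, height):
    # Subpixels sit on a grid offset half a pixel from the width x height
    # source, so each subpixel is shared by up to four source pixels and
    # takes their average intensity, per colour channel.
    out = []
    for sy in range(height + 1):
        row = []
        for sx in range(width + 1):
            # the four source pixels touching this subpixel (clamped at edges)
            xs = [max(sx - 1, 0), min(sx, width - 1)]
            ys = [max(sy - 1, 0), min(sy, height - 1)]
            pix = [src[y][x] for y in ys for x in xs]
            row.append(tuple(sum(c[i] for c in pix) // len(pix)
                             for i in range(3)))
        out.append(row)
    return out
```

      Since each subpixel averages its neighbours, fine detail survives better than plain 2x downsampling, at the cost of some blur.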

  • Interlacing doubles the number of lines in TV images using multiple fields. Is it likely that this is a variation of that concept?
    • by DeepFried ( 644194 ) on Thursday October 21, 2004 @04:29PM (#10592507) Homepage
      Interlacing does not double lines. It is just a process that brings the lines up in an alternating (odd/even) sequence. This is now being joined by progressive scan which brings the lines on in order from top to bottom.

      Interlaced or progressive, each can scale in lines of resolution to HiDef: 1080i and 720p respectively. (i=interlaced p=progressive)
    • by GillBates0 ( 664202 ) on Thursday October 21, 2004 @04:36PM (#10592566) Homepage Journal
      and does not have anything to do with the resolution. In fact, interlacing is sometimes called "interlace scanning", because the gun in the CRT draws alternate lines across the screen to reduce the visible flicker arising from the time required to move the gun from top to bottom.

      As usual, Wikipedia has a good article [wikipedia.org]. To quote:

      Interlacing is a method of displaying images on a raster-scanned display, such as a cathode ray tube (CRT), that results in less visible flickering than non-interlaced methods. The display draws first the even-numbered lines, then the odd numbered lines of each picture.

      • by shirai ( 42309 ) * on Thursday October 21, 2004 @05:09PM (#10592872) Homepage
        Interlacing is used to reduce flickering? I think not. It used to be used to reduce *bandwidth*.

        An interlaced image refreshing at 60Hz (60 fields per second, i.e. 30 full frames) is going to have the same flicker as a non-interlaced image refreshing at 60Hz.

        This is actually a very complex subject, having to do with how people view images, resolution vs. fields per second, what type of images you are viewing, movement vs. still images, etc., but in terms of reducing flicker I would say, at the very least, the statement is deceptive.

        In fact, one of the major problems with old Amigas running in interlaced mode was the annoying (you got it) flicker. This is because a horizontal line that was exactly 1 pixel tall would turn on and off every 60th of a second. So in this case, it would depend on how you defined the word flicker too.

        To be fair, I think what you meant to say was that given the same bandwidth on a non-digitally compressed transmission and without digitally upconverting the signal, you can get 60 fields per second (at 30 frames per second) instead of 30 fields per second (at 30 frames per second) meaning that you will probably get less inter-frame flicker. But even this is deceptive because if you built televisions specifically for 30 frames per second, you could simply reformulate the glow on the screen to last an extra 1/60th of a second longer. But perhaps this is (a) hard to do and (b) back then they wanted the extra fields per second for smoother motion. By the way, a lot of the bandwidth savings doesn't apply to digital due to the way that digital compression works. This was a controversial point during the discussions on HDTV resolutions.

        Fudge. I'm trying to cover all my bases here so I don't get flamed for not knowing what I'm talking about. Suffice it to say, interlacing and reduction of flicker do NOT walk hand in hand. It is simply one factor, of many, that comes into play.
        • by Anonymous Coward
          >Suffice it to say, interlacing and reduction of flicker do NOT walk hand in hand.

          When the TV was invented, it was noticed that a phosphor did not remain lit long enough for the beam to make a complete pass at 29.97fps, therefore there would be significant "flicker" in the picture. The inventor(s) decided to interlace so you'd get a more uniform brightness to the picture and eliminate the flicker. This problem has long since been solved in other ways.

          During the VGA days, however, the reasons were entir
        • But even this is deceptive because if you built televisions specifically for 30 frames per second, you could simply reformulate the glow on the screen to last an extra 1/60th of a second longer. But perhaps this is (a) hard to do...

          AFAIK, that was it exactly... way back when the NTSC standard was set, technology was apparently not good enough to reliably refresh a 480 line image 60 times per second. Refreshing it only 30 times per second causes flicker, which is where this idea of interlacing reducing fli

        • Parent reply post is on the mark here.

          Where the confusion comes up is in the old days when interlacing was first used -- black and white television. Interlaced TV signals drove black and white CRTs... and by selecting the right phosphor, the displays had persistence, where the image would continue to glow into the next frame even after being drawn. High persistence phosphor DID cut down on flicker, and was necessary because of the interlacing.

          If you want to get into really obscure stuff... Radar display
        • There's a tradeoff here between flicker and bandwidth.

          When TV was young, tubes were used for amplification and they had by today's standards very limited gain-bandwidth product. Increased bandwidth would require additional amplification stages and thus significantly increased cost. The 5 to 6 MHz bandwidth chosen was about the maximum acceptable then, and to get acceptable resolution with acceptable flicker, interlaced video was necessary.

  • by c0p0n ( 770852 ) <copong@gma[ ]com ['il.' in gap]> on Thursday October 21, 2004 @04:22PM (#10592420)
    It generates color using an entirely new driving method called sub-pixel unit driving methodology

    I suppose I got my driver license from the wrong place...
  • Anyone? (Score:5, Interesting)

    by wankledot ( 712148 ) on Thursday October 21, 2004 @04:23PM (#10592437)
    This sounds exactly like sub-pixel antialiasing, which is the basis for lots of things, including OS X's font smoothing on LCDs, and Microsoft's type technology... I forget its name.

    Is it really as simple as that? Because that's been around for at least 25 years in theory, a bit less in practice.

    • Re:Anyone? (Score:4, Interesting)

      by GreyPoopon ( 411036 ) <gpoopon@gm[ ].com ['ail' in gap]> on Thursday October 21, 2004 @04:31PM (#10592526)
      This sounds exactly like sub-pixel antialiasing,

      Not exactly. Cleartype and OS X font smoothing use subpixel rendering to increase the horizontal resolution. This technique seems to work on the vertical resolution.

      "Contrary to existing color display methods that express color pixel by pixel, this new method creates color at the sub-pixel level representing more than two data lines from the same pixel."
      Maybe they accomplish this by rotating the orientation of the pixels so that it impacts the vertical rather than horizontal? Or maybe this is just a big hoax? Anybody have more information?
      • According to the article, they're generating a white signal from the RGB input and have four color elements for each pixel-- RGBW. I suspect they're arranged in a square, like:

        RG
        BW

        or some such. This would let them apply a system like ClearType or OSX or the old Apple II subpixel rendering in two dimensions, rather than just one as with the typical horizontal RGB subpixel arrangement.
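        If the quad really is laid out RG over BW (the article doesn't say), the per-pixel mapping might look like this sketch; the W = min(R, G, B) derivation is an assumption here, not Samsung's published algorithm:

```python
def rgb_to_rgbw(r, g, b):
    # Assumed derivation: move the gray (common) component of the colour
    # onto the white subpixel, leaving the chromatic remainder on R, G, B.
    w = min(r, g, b)
    return r - w, g - w, b - w, w

def quad(r, g, b):
    # Hypothetical 2x2 RG/BW layout for one logical pixel.
    r2, g2, b2, w = rgb_to_rgbw(r, g, b)
    return [[('R', r2), ('G', g2)],
            [('B', b2), ('W', w)]]
```

        With that layout, a full white drives only the W subpixel of each quad, while a saturated primary lights only one of the four sites.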
        • Maybe (Score:4, Informative)

          by Andy Dodd ( 701 ) <atd7@@@cornell...edu> on Thursday October 21, 2004 @05:37PM (#10593111) Homepage
          They seem to be indicating that the RGBW trick is a whole different thing, used to increase brightness (similar to the K added in CMYK printing to get dark blacks).

          There is a chance the subpixel rendering trick might depend on the new RGBW setup though, but it seems like they're two separate technologies.
        • Doesn't sound like they are doing that though because they are talking about making a 240x640 screen (wtf?) appear to function like a 480x640 screen. I think offsetting the blue with the red and green would make a horrible checker board display.
      • Maybe they accomplish this by rotating the orientation of the pixels so that it impacts the vertical rather than horizontal? Or maybe this is just a big hoax? Anybody have more information?

        This completely depends on how the display is built. If your LCD is 240*4 (RGBW) pixels wide and 640 pixels high, there's no problem about it ;)
    • Very similar, except these will be driven on the hardware level with sub-pixel accuracy. Current sub-pixel rendering has to be done at the driver or higher level, which is why it's usually only used for fonts.
  • 1- take a small vga screen
    2- pretend it was twice bigger
    3- get a half size vga display with vga resolution

    or did you mean a quarter of the pixel count ?
  • I'm Confused (Score:4, Interesting)

    by jmulvey ( 233344 ) on Thursday October 21, 2004 @04:23PM (#10592442)
    the title suggests that "VGA" indicates a default screen size (like 4" by 6"), but my understanding is that VGA says nothing about the size of the display, only the number of pixels (you can display VGA resolution of 640 x 480 on a 10" screen or a 30" screen, and its still VGA).

    So isn't the whole term "half VGA screen" kinda dumb? Or is it just me?
    • "So isn't the whole term "half VGA screen" kinda dumb?"

      I think what they meant was quarter VGA screen, which refers to the currently common 320x240 screens found in most PDAs and high end cell phones. This tech would allow these low cost LCDs to display something akin to true VGA 640x480.

    • Half VGA Screen means half of the VGA resolution, 640x240. To quote:

      "By composing a new pixel with the sub-pixel on the adjacent scanning line, 480x640 (VGA) resolution can be attained from a 240x640 (half VGA) panel. The device can display up to 260K colors for TFT panels in mobile phones."
      • Half VGA also means 320x480. Confused yet? Half VGA, hell, even "VGA" is a stupid term. It's 4 more characters to be specific, so why don't they?
    • Re:I'm Confused (Score:5, Insightful)

      by ivan256 ( 17499 ) * on Thursday October 21, 2004 @04:37PM (#10592577)
      Isn't it about time we deprecated the use of those silly acronyms we've bastardized to not mean what they originally meant anymore anyway? Wasn't VGA 640x480 at a mere 256 colors? And didn't it imply a particular ISA bus interface as well? Plus, who can keep track of what WUXGA and QWVGA and UHDWMRXGA all mean? Was somebody just leaning on the keyboard, or did they mean to say something anybody could understand like "1600x1200"? Tell us the resolution in a way that doesn't require a lookup in a massive acronym table, please. That way it will be easy to compare displays to each other.
      • Actually... (Score:5, Insightful)

        by WARM3CH ( 662028 ) on Thursday October 21, 2004 @05:22PM (#10592979)
        ... VGA had only 16 colors at 640x480. It could only show 256 colors at 320x200. Comparing it to what most PDAs do now, it seems that getting 64K colors at 320x200 is already beyond what VGA did!
        • The original VGA was capable of 256 colors at 360x480. It just wasn't in the bios and pixel addressing was a little more complicated.
      • Agreed.

        And it kills me that some people still can't agree to put the horizontal resolution first.

        To those who write it 480x640 (without meaning a vertical screen): The war is over. Please come out of the jungle.
        • The writeup doesn't make this clear but Samsung is talking about small screens for cell phones.

          The flip-phone form factor of current-generation phones has a screen with 480(H) x 640(V) pixels, sometimes referred to as "VGA resolution".
      • by N8F8 ( 4562 )
        I ordered a Dell 8600 last week for my mom and chose a lower resolution screen as an option thinking I was choosing the higher resolution. WTF? WXGA?
      • Wasn't VGA 640x480 at a mere 256 colors?

        Worse than that, even.

        VGA just means "Video Graphics Array" and was IBM's first attempt at a commodity video technology that was halfway useful. There were several standard resolutions and color depths supported by the original VGA adapter, ranging from 320x200x256 to 640x480x16, I believe. So-called "Super VGA" adapters eventually boosted that to 800x600x256 (oooh!) and beyond.

        From a practical standpoint, the only useful information that the term "VGA" rea
      • It's really not that difficult - if you're into this sort of thing.

        But fear not... I've already seen flat panel display manufacturers label their screens in megapixels - to match digital cameras, I'm sure. That should satisfy your quest .. maybe. Assuming that all screens remain at a 4:3 aspect ratio anyway. Wouldn't want them to become 2:3 to match traditional photos, or 16:9 for widescreen or 16:10 to match widescreen laptop displays, or 2:1 because the movie industry keeps stretching the da*n image hori
        • That said, I don't know of any site which lists *only* the acronym.

          Last time I was shopping for laptops I noticed that Dell, HP/Compaq, and IBM all list the acronyms only.

          As for your list...

          Don't you think that it's confusing that the Q and H prefixes can be used to indicate larger (Quad) *and* smaller (Quarter, Half)? It's just stupid. List the resolution. That's the information you're trying to convey anyway.
    • Re:I'm Confused (Score:1, Interesting)

      by Anonymous Coward
      It is entirely stupid.

      First, VGA resolution is 320x200 at 256 colours, or 640x480 at 16 colours.

      But, times move on and we've redefined it to be 640x480xN colours, where N is whatever we want it to be.

      So now we have half VGA, which is 640x240, VGA which is 640x480 and Quarter VGA which is 320x240.

      We have SVGA which is 800x600xN colours, 640x480xN>16 colours, and 1024x768xN colours.

      So, we have XGA, which is 1024x768xN colours or more...

      Oh wait, VGA, XGA, SVGA, etc DO NOT MEAN A SPECIFIC RESOLUTION.
      shh
    • but my understanding is that VGA says nothing about the size of the display, only the number of pixels (you can display VGA resolution of 640 x 480 on a 10" screen or a 30" screen, and its still VGA). ... yeah.

      So isn't the whole term "half VGA screen" kinda dumb? Or is it just me?

      It's just you. VGA is 640x480. Half-VGA is either 320x480 (many PDAs) or 640x240 (a few PDAs, blackberry-like devices).

      Half refers to pixel count in one direction, not physical size. That the screens tend to reflect a norma
  • by arashiakari ( 633150 ) on Thursday October 21, 2004 @04:24PM (#10592450) Homepage
    Double the resolution, and blend the colors of neighboring pixels together to fit on a lower res. screen. Sounds like a new way of saying "anti-aliasing" ...

    And the window washers are now "corporate vision enhancers!"
  • I wonder if this could be used with those LCD monitors that have a smaller bit depth in order to hit a low response time.

    Perhaps they could decrease the bit depth even further and design them specifically for this card in order to get REALLY low times.
  • Like what X on my screen is doing right now?
    • You're running X on a cell phone? That's the news here, not the technology per se. This chip will be able to do sub-pix using very little power and on a teeny tiny display.
      • That's the news here, not the technology per se.

        From the article:
        It generates color using an entirely new driving method called sub-pixel unit driving methodology.

        Entirely new makes it sound as if it's a new technology, not a hardware implementation of an existing technology. The article is a press release reprinted by a lazy journalist.
        • Samsung Electronics has developed a new graphics chip that will allow half VGA screens to produce VGA resolution.

          The novelty is specially aimed at future mobiles with VGA screens that will be less than 2.4 inches. It generates color using an entirely new driving method called sub-pixel unit driving methodology. Contrary to existing color display methods that express color pixel by pixel, this new method creates color at the sub-pixel level representing more than two data lines from the same pixel. By compo

          • Thanks for pasting from the article something to prove my point.

            By composing a new pixel with the sub-pixel on the adjacent scanning line,

            That technology exists and is in common use at present.

            480x640 (VGA) resolution can be attained from a 240x640 (half VGA) panel.

            Yep, the optical resolution stuff isn't anything new. Just now they can do it on mobile phones with their hardware implementation. As I said: big deal.

            Better headline: subpixel rendering now available on mobile phones.
    • The parent post is perfectly on topic. The article is talking about sub pixel rendering:

      By composing a new pixel with the sub-pixel on the adjacent scanning line, 480x640 (VGA) resolution can be attained from a 240x640 (half VGA) panel.

      Using adjacent triads from different pixels to increase the optical resolution of screen output is old technology. X uses it for fonts (Gnome Menu -> Preferences -> Fonts -> Subpixel Smoothing), as does Windows XP.

      The article merely mentions Samsung are now usi
    • You must be using Slackware. They have some sort of connection to the "Church of the Sub Pixel" or something.
  • The new driver IC has overcome the physically impossible VGA-class and higher resolution images on small size TFT-LCD panels of less than 2.4 inches

    Why is it physically impossible to design VGA displays less than 2.4 inches? Too small pixels?
    • You can get a VGA screen that is only 0.2 inches [byte.com]. It is about 2000 dpi.

      But, it's always a price/resolution tradeoff; I suspect this lets them use cheaper production techniques to produce these smaller displays (which, as volumes ramp up, are now being squeezed on price).
    • by stonecypher ( 118140 ) * <stonecypher&gmail,com> on Thursday October 21, 2004 @05:53PM (#10593238) Homepage Journal
      Because current LCD pixels require six lead lines, and we can't make lead lines small enough to shrink the pixels any further. The article phrases this badly: it's not that pixels can't be made smaller. It's that TFT LCD pixels' lead lines take all of the available current space, and there is no current technique on the horizon to solve this. Other monitor types do not have this particular problem; this is peculiar to LCD and OLED.
      • Why is it physically impossible to design VGA displays less than 2.4 inches? Too small pixels?

        Because current LCD pixels require six lead lines, and we can't make lead lines small enough to shrink the pixels any further.

        This is weird to me, because the Konica Minolta Dimage A2 [konicaminolta.com] has an electronic viewfinder (EVF, basically a small LCD screen) that's about half an inch diagonal with VGA resolution. That's been out since February or so. But maybe I'm missing something.
  • MS Cleartype (Score:5, Informative)

    by darkmeridian ( 119044 ) <<moc.liamg> <ta> <gnauhc.mailliw>> on Thursday October 21, 2004 @04:26PM (#10592475) Homepage
    The article is really short, but it says that the screen will use sub-pixel technology to allow a half-VGA screen to render VGA resolution. MS Cleartype also uses sub-pixel technology, though to make text sharper.

    A linkie with information about sub-pixels in general (though it's on grc.com, whatever.) http://grc.com/cleartype.htm [grc.com]
  • by ikewillis ( 586793 ) on Thursday October 21, 2004 @04:27PM (#10592481) Homepage
    Subpixel rendering has been around for quite a long time. Two things that I can think of right off the bat are Microsoft's ClearType and FreeType, both of which have hinting engines which support subpixel rendering.

    Subpixel rendering takes into account the physical position of the red, green, and blue subpixels of an LCD display, and can therefore provide up to 3X the horizontal resolution of a typical display (with distortion, of course)

    Here's a nice writeup [purdue.edu]
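    The "up to 3X" figure comes from treating each colour stripe as an independently addressable coverage sample. A minimal sketch (monochrome coverage only; real implementations like ClearType also filter across neighbouring stripes to suppress the distortion, i.e. colour fringing):

```python
def fold_subpixels(bits):
    # bits: 1-bit coverage samples taken at 3x the horizontal pixel
    # resolution. Folds every three samples into one whole pixel as an
    # (r, g, b) triple, one sample per stripe, 255 where the stripe is on.
    assert len(bits) % 3 == 0
    return [tuple(255 * b for b in bits[i:i + 3])
            for i in range(0, len(bits), 3)]
```

    A glyph edge can thus land on a third-of-a-pixel boundary, which is exactly where the extra apparent resolution comes from.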

    • Actually it's been around a lot longer than you think. The Apple II used a form of sub-pixel rendering [grc.com] written by Steve Wozniak himself.
      • Actually it's been around a lot longer than you think. The Apple II used a form of sub-pixel rendering written by Steve Wozniak himself.

        Brilliant hardware design but messy for software writers. Actually the kind of display you saw depended on what type of monitor you used. Two adjacent bits determined the color of the pixel on a color display, plus the high bit of each byte introduced a color shift for all the bits in that byte. That led to a bizarre set of rules for mixing colors. It was possible, as was
  • ClearType? (Score:4, Interesting)

    by theGreater ( 596196 ) on Thursday October 21, 2004 @04:28PM (#10592492) Homepage

    Sounds basically like ClearType, right? I mean, all THAT is is using the RGB (or CYM) sub-pixels to smooth out lines and curves, correct? Err, so what's the BFD?

    -theGreater Muller.
  • So basically, this is like separating chrominance and luminance, à la YUV. I always found having crappy and blurry colors, especially with RED, a bad compromise, often encountered on TV. Lavish colors won't help. The biggest problem I encountered with my mobile was reflection from the sun. Maybe they should look at the techs used on PDAs, you know, like transflective screens. Anyway, not everybody can read at such high resolution (2.4" screens!)
  • by Bender Unit 22 ( 216955 ) on Thursday October 21, 2004 @04:40PM (#10592618) Journal
    As they say in Germany, "ich habe Gemüse in den Lederhosen" ("I have vegetables in my lederhosen"). Which means that it might look like fancy new things, but it is still the same old clothes.
    Kinda like the Swedish "min trusse lugter af tis" ("my underwear smells of pee"), it's new but then again, it's not.

    Is it a case of someone applying existing technologies like smoothing to the hardware layer if you look into what's really going on?
  • by GrAfFiT ( 802657 ) on Thursday October 21, 2004 @04:41PM (#10592626) Homepage
    The article suggests that they added "white pixels": Additionally, the problem of dark screen due to the increased pixel density on high resolution panels has been solved using 4-color (R-G-B-W) rendering algorithm, improving the brightness of TFT-LCD panels. That's radically different from ClearType. ClearType uses the normal RGB subpixel arrangement to triple the "perceived" resolution. That's because the human eye is more sensitive to luminance than to chrominance (try to recognize colors in the dark; you can't, but you can still read B&W text). The problem here is not text aesthetics. It's global luminosity, as your backlight often has to battle with sunlight. They add more "white pixels" to enhance the luminosity. In percentage, the number of "color" pixels is lower in this system. But the eye won't actually see the difference.
    • by shirai ( 42309 ) * on Thursday October 21, 2004 @05:21PM (#10592977) Homepage
      Note that white pixels aren't a magic bullet. You get some brightness but give up saturation. It works like this:

      Given four pixels of RGBW, you can get your brightest color by having all four pixels on. This would result in total brightness of:

      1 white pixel for every combination of RGB and

      1 white pixel for every white pixel.

      So you get the equivalent of 2 white pixels for every 4 pixels or a factor of 1/2 let's say.

      In regular RGB, you get a factor of 1/3 because you get the equivalent of 1 white pixel for every set of RGB pixels.

      Looking at this, you get 50% more maximum brightness from RGBW vs RGB.

      It's not a magic bullet because you lose saturation. For example, if you want a fully saturated red, in the RGBW format, you get 1 full red pixel for every four pixels. In RGB, you get 1 full red pixel for every three pixels. So RGBW gives a factor of 1/4 while RGB gives a factor of 1/3 for a fully saturated red. This is a reduction in brightness of a full saturation red of 25%.

      In other words, your brightest color is 50% higher in RGBW but your brightest red (at full saturation) is 25% less, which means you have to fudge around with values to get a picture that seems to make sense, or you get a bright picture with dark spots with a lot of saturation in them. So you might, programmatically (and this is probably what Samsung is doing), increase full saturation red to include white in it. This makes the color brighter but also reduces the saturation.

      A lot of projectors with a white component have two modes. A dimmer mode that doesn't use the "W" pixel at all but has richer colors (used for movie viewing) and a presentation mode that does use the "W" when brightness is a factor such as in a meeting (e.g. the room may have light leaking in from windows).

      Not saying it is good or bad. Just that a RGBW is not a magic bullet.
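      The parent's arithmetic can be checked with a toy model in which an R+G+B trio at full drive counts as one "white unit" and each W subpixel as another (an equal-efficiency assumption; real subpixels differ):

```python
from fractions import Fraction

def peak_white(layout):
    # White-equivalent units per unit of pixel area at full drive:
    # every complete R,G,B trio counts as one unit, every W as one unit.
    trios = min(layout.count(c) for c in 'RGB')
    return Fraction(trios + layout.count('W'), len(layout))

def peak_red(layout):
    # Fraction of the pixel area available for a fully saturated red.
    return Fraction(layout.count('R'), len(layout))
```

      Under this model peak_white("RGBW") / peak_white("RGB") is 3/2 (the 50% brightness gain) and peak_red("RGBW") / peak_red("RGB") is 3/4 (the 25% saturated-red loss), matching the figures above.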
    • That's because the human eye is more sensitive to luminance than to chrominance (try to recognize colors in the dark, you can't, but you can still read B&W text)

      While you're correct that the eye is more sensitive to brightness than color, the demonstration you offered is somewhat flawed - you're effectively using a separate visual system in low-light conditions. Below a certain light level, the color-perceiving cones won't work, so the eye uses the non-color-sensitive rods, which are much more sens
  • by boomgopher ( 627124 ) on Thursday October 21, 2004 @04:45PM (#10592662) Journal
    Quote:
    By composing a new pixel with the sub-pixel on the adjacent scanning line, 480*640 VGA resolution can be attained from a 240*640 half-VGA panel.

    Drop all the "MacOS does this", "ClearType does this", etc. shit please.


    • Err, why? It's the same thing as ClearType, they just rotated the display 90 degrees and are doing subpixles vertically instead of horizontally.

      My guess is that someone read that MS patent really carefully and concluded that it only covers horizontal subpixels. :)

      The novelty would be that it's implemented in the display driver chip thus I guess it can move any pixel around, not only when rendering fonts.

      /greger

      • by hattig ( 47930 )
        As far as I can tell, they are not doing that in any shape or form.

        For a start, Cleartype is for text and increases the horizontal resolution of text because the subpixel resolution of a 640x480 screen is actually 1920x480

        This is RGBW ... and I am guessing that it is laid out in a

        RG
        BW

        format, i.e., a 640x480 screen would have a subpixel resolution of 1280x960. Cleartype wouldn't work on this screen as it is currently implemented.

        What they are doing is taking a 640x240 "Double Height" screen (i.e., 4:3 w
    • But... ClearType *does* do this... :)

      The exact same methods ClearType uses on LCD panels are being used here by Samsung, albeit in hardware instead of just software.

      Sub-pixel displays have been around for years, one of the first uses of it was on the old Atari/Apple computers.

      In the highest resolution, one pixel wound up being "smaller" than a full pixel on the television screen. This wound up with "odd" pixels showing up brownish and "even" pixels showing up reddish. An odd and an even pixel adjacent to
    • It mashes two lines into one by averaging the values? How is that not stone-age technology?
  • by gotr00t ( 563828 ) on Thursday October 21, 2004 @04:51PM (#10592725) Journal
    I have seen this methodology used in many applications where the screen was just too small to accommodate a purpose. Take Tezxas [ticalc.org], for example, a ZX Spectrum emulator for the TI-89 (and 92+, but that doesn't apply here). Since the ZX Spectrum's screen is roughly twice the dimensions of the 89's screen, 4 pixels had to be represented by one. There are also some applications for the PocketPC that use a very similar sounding method to bring full VGA resolution to half-VGA sized screens.

    My question is, is this something new because it's clearer, or because it's a hardware implementation?

  • by francisew ( 611090 ) on Thursday October 21, 2004 @05:02PM (#10592818) Homepage

    Here is a link to the Samsung website about the technology: http://www.samsung.com/Products/TFTLCD/Technology/ 4colorrandering.htm [samsung.com]

    I wouldn't complain too hard about the confusion in the details. They couldn't even spell 'rendering' right on their own site (4 color randering???).

    It also discusses 'physicail' pixels. I dunno about that.

    They seem to have created smaller pixels, which are spatially located across a different area than normal.

    They then need fewer wires to connect the given number of pixels, meaning a higher resolution with fewer interconnects. Maybe I'm completely wrong in this one-minute evaluation.

    The neat thing is the overlap of their 'logical' pixel arrangements. It would seem they are using traditional dithering with a complicated arrangement of pixels. This should do exactly what they state. The weird thing is that their sub-pixel seems to have the wrong number of color sub-elements.

    One would expect a ratio of 2:1:1 for green:red:blue emitters. They have 4:2:1. Maybe their red emitters are much brighter than the blue, which would make sense.

    They mention replacing some rows with white pixels, but their diagrams don't show anything. Maybe the media-relations people just don't know how the technology works, and are making stuff up until someone corrects them.

    • One would expect a ratio of 2:1:1 for green:red:blue emitters. They have 4:2:1. Maybe their red emitters are much brighter than the blue, which would make sense.

      4:2:1 makes sense because of the relative sensitivity of the eye's receptors for those colors. Humans are much less sensitive to blue than red or green, and they're more sensitive to green than red. The standard YIQ color encoding for (analog) color television broadcasting also takes advantage of this relative sensitivity for compression, and ca
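      The relative sensitivities the parent describes can be sketched numerically using the standard BT.601 luma weights (an assumed choice; the comment gives no numbers):

      ```python
      # BT.601 luma coefficients: the eye is most sensitive to green,
      # least sensitive to blue, with red in between.
      WEIGHTS = {"r": 0.299, "g": 0.587, "b": 0.114}

      def luma(r, g, b):
          """Perceived brightness of an RGB triple (0-255 per channel)."""
          return WEIGHTS["r"] * r + WEIGHTS["g"] * g + WEIGHTS["b"] * b

      # Pure blue looks far dimmer than pure green at the same drive level:
      print(round(luma(0, 0, 255)))  # 29
      print(round(luma(0, 255, 0)))  # 150
      ```

      Since the weights sum to 1, full white maps to full brightness, and you can see why a panel might get away with fewer blue emitters than green ones.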

  • by Anonymous Coward
    Samsung's press release about "sub-pixel unit driving methodology" is total hype and bull in my opinion. This technique provides better color and smoothing but no higher resolution by any means. They should be honest and call it what it is - color contrast and sharpness enhancing technology - and not suggest that it provides a higher resolution for a given and fixed physical resolution.
  • Wrote about this some time ago; http://www.grc.com/cleartype.htm [grc.com]
  • that I can make an even more improved chip that will display VGA resolution on a screen that physically has only 1/4 the resolution of VGA, by composing 4 lines together? Or 8? Or, hey, I know, why don't we compose all the lines together, so we can have a VGA resolution display on hardware that is only a single row of pixels?

    Well, because it doesn't work that way. You can combine lines and display the right color values, but in the end, you only have half as many pixels, and you simply don't have VGA r

    • Whoa, you're not getting it, rewt66. It sounds like you think what's going on is simply sampling of higher resolution data down to a lower resolution and claiming it's as good. (i.e. if a white pixel and a black pixel were next to each other, they would be replaced with a single 50% grey pixel.)

      That's not quite the technology here. You see, a normal LCD has 'subpixels' which are really just pixels that can display one of the three additive primary colors (red, green and blue.) These pixels are necessari
    I'm sure Samsung won't want to cannibalise their own panel business, but if they make some sort of inline attachment that we can connect to existing LCDs to boost their resolution, wow... it's gonna be so cool!

    I'm sure everyone will buy one!
  • by IGnatius T Foobar ( 4328 ) on Thursday October 21, 2004 @05:54PM (#10593240) Homepage Journal
    Here's a very good writeup on how subpixel rendering works:

    http://grc.com/ctwhat.htm [grc.com]

    It goes into detail with pictures and everything, demonstrating how the technology takes advantage of the separate red, green, and blue subpixels to achieve additional smoothing.
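    The trick those pages describe boils down to this: on an LCD whose R, G, B stripes sit physically side by side, each stripe can be treated as its own addressable column, tripling horizontal resolution at the cost of some color fringing. A minimal sketch (hypothetical glyph data, not Samsung's actual algorithm):

    ```python
    def subpixel_row(coverage):
        """Collapse a row of 3x-horizontal-resolution coverage samples
        (0.0 = background, 1.0 = ink) into RGB pixels, mapping each
        group of three samples onto the R, G, B stripes of one pixel.
        For black text on white, full coverage drives that stripe to 0."""
        assert len(coverage) % 3 == 0
        pixels = []
        for i in range(0, len(coverage), 3):
            r, g, b = (round(255 * (1.0 - c)) for c in coverage[i:i + 3])
            pixels.append((r, g, b))
        return pixels

    # A one-sample-wide vertical stem lands on a single stripe:
    print(subpixel_row([0.0, 1.0, 0.0]))  # [(255, 0, 255)] - only green dark
    ```

    The fringing is visible in the output: a sub-pixel-wide black line becomes a magenta-tinged pixel, which is why ClearType filters across neighbors in practice.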

    I'm not sure how Samsung intends to implement "white subpixels" though.
  • by Tumbleweed ( 3706 ) * on Thursday October 21, 2004 @05:55PM (#10593252)
    Ahh, the venerable Apple ][, inspiring people even today!
  • taters (Score:3, Funny)

    by maxchaote ( 796339 ) on Thursday October 21, 2004 @05:57PM (#10593269)
    Did anyone else notice that the acronym for this technology is "SPUD Methodology"?

  • Sounds bogus (Score:3, Informative)

    by Theovon ( 109752 ) on Thursday October 21, 2004 @06:42PM (#10593675)
    First, it sounds like they're simply scaling 640x480 down to 320x240 with antialiasing. Big whoop.

    Second, if they only do a luma blend (i.e., ignoring the nonlinearity of human perception of light), then it really won't be quite the same thing. I just don't think they're doing it right, because a proper luminance blend is computationally expensive.
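    The nonlinearity the parent mentions matters whenever pixel values get averaged: stored values are roughly gamma-encoded (intensity^(1/2.2)), so blending them directly darkens the result. A rough sketch (gamma 2.2 is an assumed approximation, not anything from Samsung's pipeline):

    ```python
    GAMMA = 2.2  # assumed display gamma

    def naive_blend(a, b):
        """Average two 8-bit values directly in gamma-encoded space."""
        return round((a + b) / 2)

    def linear_blend(a, b):
        """Decode to linear light, average, then re-encode."""
        lin = ((a / 255) ** GAMMA + (b / 255) ** GAMMA) / 2
        return round(255 * lin ** (1 / GAMMA))

    # Averaging a black and a white pixel:
    print(naive_blend(0, 255))   # 128 - too dark
    print(linear_blend(0, 255))  # 186 - perceptually correct 50% gray
    ```

    The gap between 128 and 186 is exactly the error a cheap luma blend introduces, and the extra pow() calls per pixel are why the proper version costs more.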
  • Technically VGA is.. you can't increase resolution...blah blah blah. This is why people at Slashdot need to all be slapped. This is a press release, of course it's not technically correct. I'll help you guys out here.

    This tech is to increase visibility and clarity on low res screens by taking into account sub-pixels. They are not increasing resolution, they're just stepping back and reconsidering what a pixel is. Mac OS X does this [clarkson.edu], but it's not the same thing as Samsung's tech, since it's only for tex

  • The article is too light on details to tell wtf they are talking about, but it sounds to me like what they're saying is that the software operates on a virtual display device which gets automatically scaled down by blending virtual pixels into a smaller number of real pixels. If so, it's not rocket science, and it doesn't really do jack for image quality compared to just telling the software the display was the correct size to begin with.
  • by happynut ( 123278 ) on Thursday October 21, 2004 @08:12PM (#10594202)
    The sub-pixel technology was actually licensed from Clairvoyante [clairvoyante.com], and is available to all comers. Clairvoyante calls it a PenTile Matrix [clairvoyante.com].

    I know they are working with other panel folks too, so you will probably see more of these type of sub-pixel displays soon.

  • Microsoft holds patents [microsoft.com] relating to sub-pixel rendering. I don't know if they are specific to font rendering, or generic to any sub-pixel rendering to increase perceived resolution.

    Dan East
  • from dpreview [dpreview.com]
    Casio has announced the highest resolution LCD display to date, a 2.2" HAST TFT LCD monitor with full VGA (640 x 480) resolution. The majority of LCD monitors used in digital cameras today have QVGA (320 x 240) resolution (230,000 total pixels), this new screen would deliver over 900,000 pixels which would produce a far more detailed reproduction of images, very useful for immediate record review or playback verification. Casio claim that this new screen has the same power consumption as the
  • Wow. So let's take a few seconds to summarize these technologies:

    1. Zooming an image to the proportion 1:2.

    2. Inverted CMYK.

    The white pixel on the screen's pretty clever though...
  • why they are bothering to do this, and should they really be concentrating on developing smaller true VGA screens, but then they've probably got sheds full of existing screens that they can't shift at a profit and intend to use this as a stopgap until they've shifted them. Then they can stuff real VGA screens in and make use of the same technology to drive "1024x768" displays on 640x480 screens...
    • why they are bothering to do this, and should they really be concentrating on developing smaller true VGA screens

      Because it comes basically "free"... To produce something resembling the human visual range on a screen, you need (at least) three pixels, regardless of the colorspace you choose (note that, although you could theoretically have those pixels stacked into the plane of the display, for some reason (money?) no one seems to do that).

      So, ClearType and what OS-X use, which many have misunderstood
  • It looks like they use a technique commonly used in modern 3D video cards - full-screen antialiasing based on multisampling (also called supersampling). The idea is that the picture is produced in the render buffer at a higher resolution, and then each screen pixel is produced as an average of several render buffer pixels.
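  That supersampling step - render at 2x, then box-filter each 2x2 block down to one screen pixel - can be sketched as follows (this is the generic technique, not Samsung's driver):

  ```python
  def downsample_2x(buf):
      """Box-filter a 2x-supersampled grayscale buffer (list of rows)
      down to half resolution: each output pixel is the average of a
      2x2 block of render-buffer pixels."""
      out = []
      for y in range(0, len(buf), 2):
          row = []
          for x in range(0, len(buf[y]), 2):
              total = (buf[y][x] + buf[y][x + 1] +
                       buf[y + 1][x] + buf[y + 1][x + 1])
              row.append(total // 4)
          out.append(row)
      return out

  # A 2x2 render buffer collapses to a single averaged pixel:
  print(downsample_2x([[0, 255], [255, 0]]))  # [[127]]
  ```

  Note this is exactly the averaging several commenters above object to: it smooths edges but cannot recover detail beyond the physical pixel count.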
