
GDDR2 Emerging As A Real Standard

An anonymous reader writes "I noticed here that EE Times is reporting that the GDDR2 standard is finally becoming a reality. Both NVIDIA and ATI's latest chips offer support. ATI helped spearhead the initiative to develop the standard. The significance of this is great, since it may very well mean that every 18 months or so a new graphics memory standard will be released."
This discussion has been archived. No new comments can be posted.

  • I'm sorry... (Score:5, Insightful)

    by JanusFury ( 452699 ) <kevin.gadd@nOsPAM.gmail.com> on Monday March 24, 2003 @03:54AM (#5582044) Homepage Journal
    But it seems like this whole 'building names on each other' thing is getting out of hand.
    GDDR2 SDRAM? What the hell is that supposed to mean? Sheesh. Why can't you just call it something like DDR3 or GDRAM or something simple like that?
    • Because, simply put, that would make sense. Look around. When was the last time you could say that and not either be lying or looking at things on a minuscule scale?
    • Re:I'm sorry... (Score:5, Informative)

      by Doppler00 ( 534739 ) on Monday March 24, 2003 @04:04AM (#5582076) Homepage Journal
      Probably because it's engineers and not marketing people naming these things, since it's being sold to other firms to be included in their products. I guess engineers prefer unintelligible acronyms to cool names like GeForce or Radeon.

      GDDR2 SDRAM really means -> Graphics Double Data Rate 2 Synchronous Dynamic Random Access Memory
      • Of course engineers prefer "GDDR2 SDRAM". It's much more informative. It includes the purpose (Graphics), transmission type (DDR), memory type (Synchronous) and access type (Random). Would you prefer buying "UltraFast MAX RAM" instead of DDR 333? I definitely want to know how fast the RAM I'm buying is and whether it has ECC or not.
      • Dear Sir,

        You must be a marketing tit. Please go away.

        Regards,
        YaG

  • Comment removed (Score:5, Insightful)

    by account_deleted ( 4530225 ) on Monday March 24, 2003 @03:59AM (#5582060)
    Comment removed based on user account deletion
    • Re:What? (Score:5, Interesting)

      by Anonymous Coward on Monday March 24, 2003 @04:16AM (#5582101)
      "Doesn't this defeat the purpose of "Standard"?"

      No, because the low end is the bulk of the market, and there every penny counts.

      "A new standard means the old one isn't..."

      No, it just means that they get the benefits of a new standard for high-end, high-margin devices while still reaping the benefits of the prior standard as well.

      "Or am i missing something?"

      You are missing the fact that the bulk of graphics chips sold are at the low end. This low-end bulk is good for 18+ months, which is an eternity in the graphics business due to the rate of change (which still seems to be running at a performance doubling every six to nine months). Standardization at this low end will allow lower prices while meeting the need for faster and more specialized RAM than the more stable CPU market requires. In addition, the standards will insulate all parties from lawsuits or patent claims, lending more stability to their ventures. Finally, it may herald a change from the bad old days when a great deal of R&D had to go into reinventing the wheel for memory, or relying on exclusive vendors who might not even have the capacity when the need came. I'm thinking in particular of the year with the semiconductor fire that ratcheted up the price of certain graphics card vendors' high-end cards.

      Even in this market standards are good.
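
      A rough sketch of the rate-of-change arithmetic above (illustrative Python; the six-to-nine-month doubling figure is the parent's claim, not a measurement):

      # Compound the claimed doubling rate over an 18-month low-end product cycle.
      for doubling_period in (6, 9):
          factor = 2 ** (18 / doubling_period)
          print(f"doubling every {doubling_period} months -> {factor:.0f}x over 18 months")
      # -> 8x with 6-month doubling, 4x with 9-month doubling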
      • So what you are saying is that they are STILL reinventing the wheel, but at least since it will be a "standard" reinvented wheel, they can all work together to reinvent it, right? Well, I guess that is "sort of" progress...
    • by Sad Loser ( 625938 ) on Monday March 24, 2003 @06:24AM (#5582330)

      It says that GDDR3 is going to be the standard, not GDDR2, which sounds like it has multiple different implementations.
      (I know it is against the spirit of /. to actually read the article - sorry)
    • It means that the old stuff, which is more than good enough for 99.99999% of the people, just becomes cheaper all the time - yeah!

      PS. slashcode is a lame lump of shite - without this PS, it wouldn't let me post...! :-p

  • by Lank ( 19922 ) on Monday March 24, 2003 @04:00AM (#5582062)
    Well, if they could get the number of vendors that offer this type of memory to increase, then they could lower the price enough to make it cost effective. Also, this would make it great for sites that benchmark various video cards - making all of the video cards have the same/very similar types and speeds of memory would be excellent for comparison.
  • last I remembered (Score:5, Informative)

    by lingqi ( 577227 ) on Monday March 24, 2003 @04:02AM (#5582069) Journal
    the special requirement of graphics-specific RAM is simultaneous in/out access. (At least that's my understanding of VRAM (video RAM).)

    On that point, why aren't they doing a QDR architecture? QDR is basically DDR but with dedicated (separate) in and out pins that allow this kind of simultaneous read/write.

    Granted, the pin count is higher, but I think it would be better suited to the graphics people.

    That, or I am not quite clear on the GDDR-n specs. heh. Or I am thinking about frame-buffer memory instead of texture memory (AFAIK the latter only needs to be continuously read, really fast) hmm...
    • Re:last I remembered (Score:5, Informative)

      by videodriverguy ( 602232 ) on Monday March 24, 2003 @04:23AM (#5582125) Homepage
      VRAM was a dual-port RAM, with an output-only port and a standard I/O port. The output port was used to refresh the screen, since it could produce data independently from the I/O port.

      However, with the bus widths being used by GPUs today (128, 256 bits), they really don't fit anymore. GPUs now manage the RAM accesses so that frame buffer access is shared with drawing etc. This means that the most important thing is RAM speed - with accesses for the frame buffer being sequential, the less time taken for that, the more memory bandwidth is left for drawing.

      This will become even more important once we have the very high resolution (LCD) monitors on the horizon - 3K x 2K pixel displays will require a LOT of memory bandwidth to keep them refreshed.
      • by Anonymous Coward
        One of the ports was a normal R/W port, and the other was a read-only port. This port also had a very wide word size. That is, you would select an entire column of the RAM (2K bits back then, 4K or 8K now) and it would parallel-load it into an internal buffer, then shift it out bit by bit. This may sound very slow, but it was VERY fast, and you only had to deal with latency on each parallel load, so every 2K to 4K bits. This system was ideal for video display and still is.

        Also note that latency is a relati
      • Back in the days when VRAM was used, it was because the bandwidth consumed just to feed the DACs was a significant proportion of the total available memory bandwidth. Then, having an output only port made a lot of sense.

        Since those times, video memory bandwidth has increased enormously but DAC requirements have not. A 32-bit 3K x 2K LCD at 60Hz will only consume 1.4GB/s. Even maxing out a 400MHz DAC, you top out at 1.6GB/s with 32-bit colour. This is well under 10% of a modern graphics memory subsystem's to
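
        A quick back-of-the-envelope check of those figures (illustrative Python, assuming 4 bytes per pixel and decimal gigabytes):

        # Scanout bandwidth for a 3000x2000, 32-bit display at 60 Hz, and the
        # ceiling set by a 400 MHz RAMDAC reading 4 bytes per pixel clock.
        bytes_per_pixel = 4
        scanout = 3000 * 2000 * bytes_per_pixel * 60   # ~1.44e9 bytes/s
        dac_ceiling = 400e6 * bytes_per_pixel          # 1.6e9 bytes/s
        print(f"3K x 2K @ 60 Hz: {scanout / 1e9:.2f} GB/s")
        print(f"400 MHz DAC max: {dac_ceiling / 1e9:.2f} GB/s")
        # Both are a small slice of the roughly 20 GB/s of memory bandwidth on a
        # 2003-era high-end card, which is the "well under 10%" point above.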
    • Granted, pin count is higher but I think it would be better suited to the graphics people.


      The pin count is one of the biggest cost factors. Doubling the number of pins to separate input and output would be very cost-ineffective -- if you can afford to add that many pins, using twice as many I/O ports instead would be a much better solution, since it would double your peak read bandwidth.

      • bull. you don't double the pin count.

        many of the other pins are:

        power pins
        address pins
        control pins
        i/o timing sync pins
        ground pins
        no-connect pins (for improving signal quality)

        so adding a few dozen pins (yes, a few dozen - because the 256-bit data bus is achieved with multiple chips) on a 460-pin BGA is hardly difficult.
        • I wasn't arguing the number of pins added, I was arguing that separating input and output pins wouldn't be as efficient as using twice as many pins that are both input and output.

          (And, BTW, a standard memory channel is 64-bits wide, data-wise. 256-bit bandwidth is achieved with 4 of those channels.)
          • Assuming you mean "double the chip bandwidth," keep in mind that when you switch between read and write you incur some serious penalties. Dedicated in and out pins remove this problem, something that "doubling the bandwidth" will never solve. Considering that the framebuffer is always switching between read (screen refresh) and write (draw), this switch penalty is not trivial. Anyway, think of it as "pipelined memory."
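
            A toy model of that turnaround penalty (illustrative Python; the burst lengths, the 2-cycle turnaround and the 16 GB/s peak are made-up numbers, not any real part's timing):

            # Effective bandwidth of a shared read/write bus when every
            # read<->write switch costs idle turnaround cycles, compared with
            # dedicated read and write ports that never turn around.
            def effective_bw(peak_gbs, burst_cycles, turnaround_cycles):
                return peak_gbs * burst_cycles / (burst_cycles + turnaround_cycles)

            peak = 16.0  # GB/s, hypothetical shared-bus peak
            for burst in (4, 8, 16):
                print(f"burst of {burst:2d}, 2-cycle turnaround: "
                      f"{effective_bw(peak, burst, 2):.1f} of {peak} GB/s")
            # Dedicated in/out pins sidestep the turnaround entirely (at a pin
            # cost); a wider shared bus raises the peak but still pays the switch.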
    • the special requirement of graphics specific RAM is the simultaneous in/out access. (At least that's my understanding of VRAM (video RAM))

      This is only really important for the Framebuffer. It simplifies scanning out to the screen if you don't have to contend with the other parts of the GPU. But most of the memory today is used for textures and streamable data (vertex lists, etc), where VRAM wouldn't be as useful. nVidia is very proud these days that it uses the same memory for all three functions, but I wou
    • QDR would be great for read/modify/write bandwidth... but I bet NVidia and ATI could put those 128 pins (plus clocks) to better use. It all depends on their data flow. If they have a bunch of simultaneous reads/writes (like an SRAM cache) then saving the overhead to turn around the bus is a winner. If they move large blocks of data in one direction at a time (for example, if they already have a bunch of cache on the graphics controller) then using all those extra pins for more general bandwidth makes more sense.
  • Maybe... (Score:4, Insightful)

    by insecuritiez ( 606865 ) on Monday March 24, 2003 @04:03AM (#5582072)
    "GDDR3 will consume half the power of GDDR2 and operate up to 50 percent faster."

    Maybe graphics card makers should skip GDDR2 and hold out for GDDR3. People that buy high-end graphics cards want quality. Look at the GeForce FX. It's going to kill NVIDIA. I think NVIDIA and others (ATI) are going to really learn from the FX and make extra sure that what they come out with is real innovation, not a quick way to get back on top at the expense of their customers.
    • Re:Maybe... (Score:1, Informative)

      by Anonymous Coward
      "GDDR3 will consume half the power of GDDR2 and operate up to 50 percent faster."

      Uhm read the article:

      "Lee said Micron's GDDR3 chips are made with 0.11- micron processing, allowing speeds that could reach 700 MHz for a 1.5Gbit/s data rate. The device will sample next quarter, he said"

      GDDR2 reaches 500MHz now; GDDR3 _could_ reach 700MHz sometime in the future. By that time GDDR2 will reach at least the same speeds. AFAIK the frequency goal for GDDR3 is 500MHz in Q3.

      "Look at the GeForce FX. It's going to kill NVI
    • Re:Maybe... (Score:4, Interesting)

      by justin_speers ( 631757 ) <jaspeers@comcast. n e t> on Monday March 24, 2003 @06:16AM (#5582316)
      Look at the GeForce FX. It's going to kill NVIDIA.

      Why do you say that?

      So the FX didn't exactly blow away the Radeon 9700 Pro like it was supposed to, but it's still a very fast, very good card capable of rendering anything a game throws at it for the next couple of years.

      nVidia is very smart. They don't make very much money off the highest of the high-end market. Where they make most of their money is in that lower-mid range market, where they've traditionally marketed their "MX" products.

      At the GDC nVidia was talking about implementing the full DX9 feature set in a card for $79. That's where they're going to make a killing.

      I honestly don't think nVidia cares THAT MUCH if they don't have the absolute fastest card in every benchmark. Like any other company, they want to stick around for awhile and make some money.

      Price-performance is VERY important in the market. That's why AMD is still around, despite the fact that P4's are undoubtedly faster now. I think people just see nVidia as being the king of the hill for awhile, and would like to see them taken down a notch.

      The video-card market is very healthy, we have good competition, and the FX is definitely not going to kill nVidia. I think their strategy is right on.
  • Has anybody tried... (Score:5, Interesting)

    by Kirby-meister ( 574952 ) on Monday March 24, 2003 @04:16AM (#5582099)
    ...a cost analysis between buying a bleeding edge graphics card to last you 2-3 years versus upgrading cheaply to last generation's greatest for much less every year or so?

    I've always wondered this, since those two patterns are the ones I've fallen in and out of for the past few years.

    I still think this is why console gaming is more mainstream, either way. With a console, you might not get the best quality in graphics, but hell, you pay $200-300 and the machine lasts 5 years, and you get quite a nice selection of quality games (that's really my bias; I started out on the NES...).
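
    As a sketch of the kind of cost comparison being asked about (illustrative Python; the prices are invented and resale value is ignored):

    # Total spend over six years: one bleeding-edge card every 3 years
    # versus last generation's best card every year.
    bleeding_edge_price = 400   # hypothetical launch price
    last_gen_price = 150        # hypothetical price once a newer part ships
    years = 6

    print("bleeding edge every 3 years: $", bleeding_edge_price * (years // 3))
    print("last-gen card every year:    $", last_gen_price * years)
    # $800 vs $900 here; the yearly upgrader pays somewhat more but is never
    # more than about a generation behind. Real numbers depend on how quickly
    # prices fall and on what the old card resells for.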

    • ...a cost analysis between buying a bleeding edge graphics card to last you 2-3 years versus upgrading cheaply to last generation's greatest for much less every year or so?

      My rule is "upgrade your vid card when you can get double the performance for $100."

      Doesn't keep me anywhere near the latest and greatest, but is good enough for me, and conserves those valuable beer tokens for the use that God intended 'em for.

    • That analysis is totally dependent on the consumer's utility for time, money, having the latest gfx card, etc.

      A person might value a newly released high-end gfx card more (in order to impress his friends, for instance) than a person who only uses his or her computer to write e-mails.
    • "With a console, you might not get the best quality in graphics, but hell, you pay $200-300 and the machine lasts 5 years, and you get quite a nice selection of quality games (that's really a bias, I started out on the NES...).
      "

      Who's to say games ever 'expire'?

      I still play a good game of nethack every now and then, and enjoy a lot of snes games. Of course, my main addiction is counterstrike (not the newest game, but not exactly old either), so ymmv.

      Console games sell mostly because they 'just work'. no co
      • Didn't mean to imply that when a system went out of date, so did the games.

        My SNES is still in working condition, albeit a tad yellow thanks to some weird effect of the plastic case aging. Super Metroid is in it right now. Punch Out is on my NES emulator a lot. Also procured a Genesis just for Gunstar Heroes. So yeah, old games aren't dead.

        I just meant that these game systems go through a 5 year cycle before any pressure is put on the consumer to even consider upgrading.

      • Games don't expire. But, without a lot of effort on my part, I can't just go and play Doom like I did in 1995. I need to go get a 486 or a P1 around 100MHz in speed if I want to play Doom the way I did.

        OTOH, I can still play SMB3 off of my SNES Mario Allstars cart, and that's older than Doom by a couple of years. In PC gaming, games may not expire, but targeted architectures do. This classic interview [archive.org] contains some insight into this (Glide/Verite vs. OpenGL targeting).
        • Games don't expire. But, without a lot of effort on my part, I can't just go and play Doom like I did in 1995. I need to go get a 486 or a P1 around 100Mhz in speed if I want to play Doom the way I did.
          What are you talking about? I run DOOM and DOOM II just fine on my Athlon XP 1900+, on a GeForce4 Ti 4200. Why would you need a 486?
          • I can't just plonk in my DOOM floppies and play the game. How do I play the game with no difference in methods after all these years?

            I don't.
            • Uh... I have ZIP archives of DOOM and DOOM II on CD, and when I want to play them, I just unzip the files, run the installer, and start playing. There's nothing fundamentally different, here, and the game runs perfectly fine. I don't see what you're getting at. Sure, I'm not physically installing them off floppies... but so what? The game still installs and plays just fine.
              • by Inoshiro ( 71693 )
                So your Linux magically has SB emulation and VGA emulation for a program which expects a real (protected) mode DOS environment it can change to flat addressing?

                Or your Windows XP has SB emulation and VGA emulation, etc?

                I can't play DOOM any more than I can play Genecyst and get my Shining Force state files. Luckily my Sega Smash Pack on Dreamcast isn't on a platform that's a moving target.
                • Ah, I think I see the confusion.

                  It would be due to the fact that you completely forgot to mention you're running Linux. Good job.

                  I guess WinXP just blows then; I'm still running Win98 on my Windows box, and DOOM/DOOM II work perfectly there.
                  • It doesn't matter what my console "runs," because it'll run any title I put into it.

                    The only way you can still play titles is to use Win98, which is 5 years old. At that point I might as well buy a new computer every 5 years and live with each one as its own gaming console. Even old DirectX games don't work on new DirectX.
                    • It doesn't matter what my console "runs," because it'll run any title I put into it.

                      This is a total non-sequitur. You said that DOOM won't run on modern hardware/software. I proved you wrong. You brought up Linux, to wit, the reason you couldn't run DOOM was because you were using Linux. I had assumed you were using some version of Windows up until that point.

                      At that point I might as well buy a new computer every 5 years and live with each one as its own gaming console.

                      Now I'm beginning to thi

      • Whos to say games ever 'expire'?

        The internal battery used to power the SRAM that saves the state of the game, that's what. It will eventually run out of charge.

        • I think he meant that the innate qualities of the game don't expire. You don't need an actual physical Legend of Zelda cartridge and NES console to play the game, thanks to emulators.
          • You don't need an actual physical Legend of Zelda cartridge and NES console to play the game

            But if you don't have the cartridge within five meters of the computer you're emulating it on, the IDSA will come and kill you. Besides, most computers don't have TV out; a 27" living room TV is much nicer for multi-player split-screen video gaming than a 17" VGA monitor.


    • If you are going to do that, you should also factor in the grief of trying to install a bleeding-edge card with bleeding-edge drivers.
      Having wasted a lot of time and multiple re-installs, I now stick to "not quite bleeding, but still a bit bloodstained" edge products, where at least the drivers are mature.
  • GDDR2? (Score:4, Insightful)

    by supz ( 77173 ) on Monday March 24, 2003 @04:23AM (#5582120) Homepage
    It would be nice if that EE Times article gave even a brief, non-in-depth technical description of what exactly GDDR2 is.

    Can anyone answer me that? What makes it special?
    • Re:GDDR2? (Score:4, Informative)

      by EinarH ( 583836 ) on Monday March 24, 2003 @06:15AM (#5582314) Journal
      It's hard to find any technical description since it's pretty new and there is no JEDEC standard yet.

      But the key that makes it worth the extra bucks is the fact that DDR-II delivers twice the external bandwidth of a standard DDR solution for the same internal frequency. The 1.8-volt device features a high-speed data transfer rate of 533Mbps that can be extended to 667Mbps for networks and special system environments.

      Over the last year chip makers have released different DDR chips with increasing frequency, like DDR266, DDR333 and DDR400. But there's a limit to how much higher it's possible to go, so instead they are trying to add another sort of "bus" inside the chip.

      The reason they started producing DDR (vs. SDR) is because it's much easier to implement such a double data rate (DDR) bus than it is to actually double the clock rate of a bus. So DDR allows you to instantly double a bus's peak bandwidth without all the hassle and expense of a higher frequency bus.

      DDR-II is designed with the same thinking.
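
      As a sketch of the peak-bandwidth arithmetic behind that (illustrative Python; the 64-bit/166 MHz numbers are just an example, and DDR-II's doubling comes from its wider internal prefetch):

      # peak bandwidth = bus width in bytes * internal clock * transfers per clock;
      # SDR moves 1 word per internal clock, DDR moves 2, DDR-II moves 4.
      def peak_gb_per_s(bus_bits, internal_mhz, transfers_per_clock):
          return bus_bits / 8 * internal_mhz * 1e6 * transfers_per_clock / 1e9

      for name, transfers in (("SDR", 1), ("DDR", 2), ("DDR-II", 4)):
          print(f"{name:7s} 64-bit @ 166 MHz internal: "
                f"{peak_gb_per_s(64, 166, transfers):.1f} GB/s")
      # about 1.3, 2.7 (i.e. DDR333 / PC2700) and 5.3 GB/s respectively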

    • I haven't looked at GDDR2 in any detail, but let's look at the differences between graphics memory and computer main memory.
      • Graphics guys put a much higher value on bandwidth (and will take a higher price)
      • There is a point-to-point connection between graphics processor and memory. In a computer, one I/O of a chipset drives multiple memory chips (on different DIMMs).
      • The electrical environment is much more controlled on a Graphics cards. No DIMMs. No sockets. More layers on board to route signals.

      So i

  • by Max Romantschuk ( 132276 ) <max@romantschuk.fi> on Monday March 24, 2003 @04:24AM (#5582128) Homepage
    Isn't it interesting how graphics adapters, not CPUs, use the fastest memory available these days? Not counting L1/2/3 caches, that is...

    This really goes to show how humans are visual animals above all. I wonder how much more power could be squeezed out of processors if we were to use memory like this and wider buses...
    • by Anonymous Coward
      "Isn't it interresting how graphics adapters use the fastest memory available these days, not the CPU. Not counting L1/2/3 caches that is..."

      I used to think that this was indeed interesting or even surprising, but when you look at how CPUs and GPUs have evolved (thanks, Nvidia, for making them equal via naming) it is not surprising, because evolution in the graphics market was slower for so many years while CPUs kept chugging along. Consider the long period where the most compelling feature of a video card was it could
    • Isn't it interesting how graphics adapters, not CPUs, use the fastest memory available these days? Not counting L1/2/3 caches, that is...

      Not really that interesting; quite trivial really - that's where the pressure is at the moment.

      The bottleneck in 3D is still the graphics accelerator. There's not nearly as big a drive towards more power in the CPU market, simply because SMP and cluster solutions provide more bang for your buck.

      I am a little surprised at the moment though, how come we haven't seen m
        The interconnect speeds used on current GPUs make connecting them together almost impossible. You just cannot move data at 1/2 GB/sec over a cable unless you go serial (a la Serial ATA). Current GPU card designs are extremely sensitive to trace length between the RAMs and the GPU - at those speeds even a difference of a few millimeters can make or break a card.

        Until someone comes up with a radically new scheme of processing, these physical limitations will always be with us. That's why the Voodoo (3Dfx) scheme
    • by ponos ( 122721 ) on Monday March 24, 2003 @08:27AM (#5582537)
      Actually there is a difference in the way a CPU and a GPU see memory.

      A CPU cares a lot about latency, because typical code will have "random" accesses scattered with calculations in between. The same data and code areas are often accessed many times, and data are small (e.g. a Word document is small) while code may be quite large. That's why CPUs don't have enormous 256-bit buses (which have the same latency as a 64-bit bus).

      A GPU performs "multimedia" calculations which typically involve serial access to memory, where caching can be of very little help. You cannot "cache" a whole texture set, and the code is of really trivial size (until now; maybe PixelShader 2.0+++ will change all that). Therefore a GPU needs serial access to huge areas of memory, involving items of similar size at regular intervals. That's why a GPU needs BANDWIDTH (not necessarily low latency, because once the calculation starts, latency is hidden inside the calculation loop).

      Considering the above, the P4 is a "multimedia" design (much more like a GPU); that's why it was originally made to work with a very high FSB and RAMBUS (high bandwidth). Contrary to this, the AMD Athlon is a "generic" design which does not depend on huge bandwidth but on very low latency (hence the HUGE L1 cache). That's why the P4 needs HyperThreading: its long pipelines do not care a lot about latency but can cause a big bottleneck if they stall. Intel feeds them continuously by drawing instructions from two processes at once (so that the pipeline does not remain empty if one process is stalled on the front side bus or something...).

      Anyway, I expect GPUs to drift slowly towards the generic CPU design, because pixel shader languages have become quite complicated, with long loops etc. Gradually this means that GPUs (esp. with DirectX 9) will start being compute-limited and not texture-fill-rate limited (anything over 2 GTexel/s is really absurd for typical screen sizes). This will probably become apparent with DOOM III.

      P.
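
      A small illustration of that latency-versus-bandwidth distinction (Python; the absolute times depend heavily on the machine and interpreter, only the relative gap is the point):

      # Sum the same array once with a streaming (sequential) access pattern
      # and once in a randomly shuffled order, where every access pays latency.
      import random, time

      data = list(range(2_000_000))
      order = list(range(len(data)))
      random.shuffle(order)

      t0 = time.perf_counter()
      s_seq = sum(data[i] for i in range(len(data)))   # sequential, prefetch-friendly
      t1 = time.perf_counter()
      s_rnd = sum(data[i] for i in order)              # random, latency-bound
      t2 = time.perf_counter()

      print(f"sequential: {t1 - t0:.3f}s  random: {t2 - t1:.3f}s  equal: {s_seq == s_rnd}")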
      • by Nicolai Haehnle ( 609575 ) on Monday March 24, 2003 @09:06AM (#5582644)
        I do agree that GPU will eventually become more CPU-like, but...

        "anything over 2 GTexel/s is really absurd for
        typical screen sizes"

        Let's say the screen has 1 million pixels for simplicity (that's somewhere in between 1024x768 and 1280x1024). Let's say you really want smooth motion and target a framerate of 100fps. That means you need to produce 100 MPixels/s. At 2GTexel/s, that's 20 texels per resulting pixel. Now add 2x overdraw (which is quite low, I think) and you're left with 10 texels per resulting pixel.
        Many additional effects, especially refraction and reflection, need render-to-texture, i.e. you basically render (parts of) the scene twice, which obviously uses a lot of additional performance.

        2GTexel/s doesn't sound so absurd anymore, does it?
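
        The same budget, restated as a quick calculation (illustrative Python):

        # Texel budget at 2 GTexel/s for a ~1 Mpixel screen at 100 fps.
        pixels = 1_000_000
        fps = 100
        fill_rate = 2e9            # texels per second
        overdraw = 2               # each screen pixel shaded about twice

        before = fill_rate / (pixels * fps)             # texels per visible pixel
        after = fill_rate / (pixels * fps * overdraw)   # after 2x overdraw
        print(f"{before:.0f} texels/pixel before overdraw, {after:.0f} after")
        # 20 and 10, as in the parent; render-to-texture passes for reflection
        # and refraction cut into this budget further.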
  • WTF? (Score:3, Interesting)

    by BortQ ( 468164 ) on Monday March 24, 2003 @04:52AM (#5582199) Homepage Journal
    This write-up is pretty much bogus. The first half of the article talks about how there are a zillion different companies all peddling their own versions of GDDR2. Then the second half talks about how it looks like GDDR3 will not have this problem, and will therefore be widely adopted.
  • by Rooked_One ( 591287 ) on Monday March 24, 2003 @05:04AM (#5582220) Journal
    It's called the Cameron Law - it dictates that game companies and graphics card companies are in a conspiracy together to force us to buy more and more of each, and every 18 months we will have to buy a new video card, which probably coincides with new-technology video game releases. (This is a joke, so don't take it that seriously.)
  • GDDR3! (Score:3, Informative)

    by nrdlnd ( 97720 ) on Monday March 24, 2003 @05:46AM (#5582282)
    The good news in the article is that the much "better" memory GDDR3 will be standardized from the beginning, with many suppliers and hopefully a lower price. Forget GDDR2!
    • Re:GDDR3! (Score:3, Insightful)

      The good news in the article is that the much "better" memory GDDR3 will be standardized from the beginning, with many suppliers and hopefully a lower price. Forget GDDR2!

      I think they only say that because GDDR3 is farther off into the future.

      I've noticed that once these things get closer to an actual release date, these people tend to take off their rose-colored glasses. My money says there won't be much of a difference between the two different memory types when they're actually released. Not enough to
  • GDDR2 (Score:5, Funny)

    by rwa2 ( 4391 ) on Monday March 24, 2003 @09:08AM (#5582654) Homepage Journal
    Oh, it's some graphics chipset related thingy. I seriously thought the acronym was for a global standard for Dance Dance Revolution...
    • DDR is getting somewhat overloaded; the old meaning was "Deutsche Demokratische Republik" (East Germany). Sisters of Mercy used that in "Dominion/Mother Russia" (A Kino runner for the DDR) which now always makes me think of Dance Dance Revolution...

      Strange the way these things work out...

  • I want my GQDR dammit!

