
MIAOW Open Source GPU Debuts At Hot Chips

alexvoica writes: The first general-purpose graphics processing unit (GPGPU) now available as open-source RTL was unveiled at the Hot Chips event. Although the GPGPU is in an early and relatively crude stage, it is another piece of an emerging open-source hardware platform, said Karu Sankaralingam, an associate professor of computer science at the University of Wisconsin-Madison. Sankaralingam led the team that designed the Many-core Integrated Accelerator of Wisconsin (MIAOW). A 12-person team developed the MIAOW core in 36 months. Their goal was simply to create a functional GPGPU, without setting any specific area, frequency, power or performance targets. The resulting GPGPU uses just 95 instructions and 32 compute units in its current design, and it supports only single-precision operations. Students are now adding a graphics pipeline to the design, a job expected to take about six months.
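
For readers who haven't written GPU code, the workload such a core targets is ordinary single-precision, data-parallel number crunching. The sketch below is not MIAOW code and does not use its toolchain (MIAOW ships as RTL and, per the comments below, reuses an AMD instruction set); it is just a generic CUDA kernel, with all names and sizes chosen for illustration.

  // saxpy.cu -- illustrative sketch only. This is NOT MIAOW code: MIAOW ships as
  // RTL and (per the comments below) reuses an AMD instruction set and AMD's tools.
  // The kernel just shows the single-precision, data-parallel style of work a
  // GPGPU core exists to run: one lightweight thread per array element.
  #include <cstdio>
  #include <cuda_runtime.h>

  __global__ void saxpy(int n, float a, const float *x, float *y) {
      int i = blockIdx.x * blockDim.x + threadIdx.x;   // global element index
      if (i < n)
          y[i] = a * x[i] + y[i];                      // single-precision multiply-add
  }

  int main() {
      const int n = 1 << 20;
      float *x, *y;
      cudaMallocManaged(&x, n * sizeof(float));        // unified memory, for brevity
      cudaMallocManaged(&y, n * sizeof(float));
      for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

      saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // many threads, trivial control flow
      cudaDeviceSynchronize();

      printf("y[0] = %f (expected 4.0)\n", y[0]);
      cudaFree(x);
      cudaFree(y);
      return 0;
  }
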
  • by Anonymous Coward

    You are all Cats! Cats say Miaow! Miaow cats MIAOW! MIAOW say the cats. YOU CATS!!!

  • by hsa ( 598343 ) on Saturday August 29, 2015 @06:08PM (#50418077)
    Isn't the second G graphics? If the graphics pipeline is missing, this is just a multicore CPU.
    • by Bengie ( 1121981 )
      We should stop treating graphics as something special. We just need a compute unit that happens to also convert buffers into a signal a monitor can understand (see the sketch at the end of this thread).
      • Re: (Score:2, Insightful)

        by Anonymous Coward

        Ah, the old RISC versus CISC argument again. Graphics is very specialized. Having a stack of relatively simple graphics cores does very helpful things that your average 4 to 8 core CPU just can't, at least not at remotely that speed. Maybe we will eventually have enough cores in an average CPU to somehow make it possible to do the same things. On the bright side it seems CPUs are again fast enough to decode most 1080p video streams without video card assist, so maybe we are not as far as we might otherwise be. I doubt 4k video is going to easily be decoded with today's CPUs, and complex games won't happen.

        • by Anonymous Coward

          Exactly. GPGPU cores are very specialized:

          Intel i7, 2.6B transistors, 8 cores
          AMD Cayman, 2.6B transistors, 1536 cores

        • by jonwil ( 467024 )

          The plans Intel had for Larrabee seemed like a good idea: take an old Pentium core, add a bunch of fast special-purpose instructions specifically designed for the sorts of operations that 3D graphics require, stick a bunch of these cores on a single chip, and add a few special blocks for certain operations (as well as the hardware to actually put things on the screen).

          It sounded like an interesting idea (and would have been a LOT more open than anything from AMD or NVIDIA), but Intel decided to cancel the project.

          • by Guspaz ( 556486 )

            The project to produce a GPU from Larrabee was cancelled, but Larrabee itself simply morphed into Intel MIC and they've released several generations of it to market. They now use the brand name "Xeon Phi" for it.

          • Larrabee became Xeon Phi, and Knights Corner-based Xeon Phi cards have been on sale for quite some time. The next version will be Knights Landing, which is supposed to go public sometime this year, IIRC.
        • by Bengie ( 1121981 )
          GPU "cores" are quite different than CPU cores. I don't want them to be like CPU cores, I want GPUs to be dumb number-crunching vector-manipulating computing units.
        • by Kjella ( 173770 )

          On the bright side it seems CPUs are again fast enough to decode most 1080p video streams without video card assist, so maybe we are not as far as we might otherwise be. I doubt 4k video is going to easily be decoded with today's CPUs, and complex games won't happen.

          Actually that's just because people do crazy things in madVR; "normal" UHD decoding can be done in software (source):

          In a JCT-VC document NTT DoCoMo showed that their HEVC software decoder could decode 3840x2160 at 60 fps using 3 decoding threads on a 2.7 GHz quad core Ivy Bridge CPU.

          • by hattig ( 47930 )

            The main purpose of dedicated hardware video decoders (and encoders) is to save power, not because the job can't be done on the CPU in adequate time. It also saves money by not requiring a powerful CPU to drive the process. They are a highly specialised form of hardware.

            GPUs do both - they save overall power (compared to a massive cluster of CPUs that could do the same calculations at the same rate), and they allow the graphics to happen in real time (which a single CPU can't) at a cost similar to a

      • Congratulations, you just reinvented the modern graphics pipeline.

    • Failure to read even the article summary detected. The students expect to add a graphics pipeline in 6 months.

      Doing the GPGPU first was a brilliant idea; it gets the project to a state of doing something useful way sooner than going for the graphics pipeline first.
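
  A hedged aside on the thread above: Bengie's "graphics is just compute writing into a buffer" idea can be made concrete with a second generic CUDA sketch, in the same spirit as the one under the summary. A plain kernel "renders" by filling an RGBA framebuffer, with no fixed-function graphics pipeline involved; names, resolution and pixel format are illustrative, not anyone's actual driver or scan-out code.

    // gradient.cu -- illustrative sketch only; names, resolution and pixel format
    // are made up. A plain compute kernel fills an RGBA8 framebuffer, i.e. it
    // "renders" with no fixed-function graphics pipeline at all.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void fill_gradient(unsigned char *fb, int width, int height) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;
        int idx = (y * width + x) * 4;                    // 4 bytes per pixel (RGBA)
        fb[idx + 0] = (unsigned char)(255 * x / width);   // red ramps left to right
        fb[idx + 1] = (unsigned char)(255 * y / height);  // green ramps top to bottom
        fb[idx + 2] = 0;                                  // blue constant
        fb[idx + 3] = 255;                                // opaque alpha
    }

    int main() {
        const int width = 640, height = 480;
        unsigned char *fb;
        cudaMallocManaged(&fb, (size_t)width * height * 4);

        dim3 block(16, 16);
        dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
        fill_gradient<<<grid, block>>>(fb, width, height);
        cudaDeviceSynchronize();

        // A real system would hand this buffer to a display controller for scan-out;
        // here we just spot-check one pixel.
        size_t p = ((size_t)240 * width + 320) * 4;
        printf("pixel(320,240): R=%d G=%d B=%d A=%d\n", fb[p], fb[p + 1], fb[p + 2], fb[p + 3]);
        cudaFree(fb);
        return 0;
    }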

  • Comment removed based on user account deletion
    • I'd rather see a 3DMark Vantage/11/13 figure so we can gauge how fast (or slow) it is. The Valley Benchmark would be good to see too.

  • University Project (Score:5, Interesting)

    by craighansen ( 744648 ) on Saturday August 29, 2015 @07:19PM (#50418317) Journal

    I attended the presentation for this chip, and as multiple audience questioners pointed out, this design hasn't been carefully designed to be clear of patents. As a university project, that's not likely to be an issue, but cribbing from a recent GPU design is not a promising way to get a patent-clear open-source hardware design. It's also not complete: it's missing graphics-specific functions such as texture-mapping, and the FPGA implementation had a single processing pipeline. By adopting the same instruction set, they made it easier to test and operate their design using AMD's tools. All that being said, it's an impressive start for a small university group, and with instrumentation hooks for measuring dynamic operations, it may become useful as a benchmarking and measurement tool for GPGPU programs. Just don't expect this to displace commercial designs RealSoonNow.

    • "I attended the presentation for this chip, and as multiple audience questioners pointed out, this design hasn't been carefully designed to be clear of patents."

      What patent violations did the Hot Chips audience members ask about? Who asked the questions? Who were the questions directed to? What was the response?
  • by Theovon ( 109752 ) on Saturday August 29, 2015 @08:09PM (#50418479)

    Check out "nyuzi.org". This is a fully functional open source GPU. It's synthesizable Verilog and works already in an FPGA. So not only is it more or less complete, but it also came out before MIAOW.

    • Check out "nyuzi.org". It came out before MIAOW.

      What is it about the geek that compels him to bury his most interesting projects somewhere south of the Ark of the Covenant, never leaving behind the faintest clue to what it does or where it might be found?

      Traditionally, the most common dowsing rod is a forked (Y-shaped) branch from a tree or bush. The dowser then walks slowly over the places where he suspects the target (for example, minerals or water) may be, and the dowsing rod dips, inclines or twitches when a discovery is made. This method is sometimes known as "willow witching".

    • Could you not even link to the fully functional open source GPU so that the lazy but curious could click, and google could perhaps realise that it exists?

      OK, I take that back. WTF has happened to links in the comment submission box? They've finally done it. Those crazy bastards have destroyed Slashdot.

      • by Anonymous Coward

        As suggested... http://nyuzi.org/

      • by KGIII ( 973947 )

        http://nyuzi.org/ [nyuzi.org]

        Maybe they've disallowed ACs from linking unless they do the whole markup?

        • I'm not an AC, I did the whole markup and I tried flipping the various settings but nothing would change it.

          I mean, obviously they've fixed it now in some sort of cover-up... testing [nyuzi.org].

    • by alvieboy ( 61292 )

      nyuzi: "It is running on a single core at 50 MHz on a Cyclone IV FPGA."

      Not too bad, but still far from fast (I consider everything below 80 MHz on this family to be slow). Perhaps a bit more pipelining would help.

      Regarding TFA, there seem to be no frequency numbers, and I see they borrow much from OR1200. Last time I synthesized OR1K, it was painfully slow (like 8 MHz on a Spartan-3E device). I think it has evolved in this area though.

      And I hate Verilog. I always wonder why they do not use VHDL. But it's a matter of taste.

  • This GPGPU isn't open source at all; it's using a design by AMD which was never open-sourced, and it's using patented technology. So if you're gonna create one, you're gonna have to pay a lot of license fees...
