Hardware

Bitboys Silicon Sighted

ZaPhY42 writes: "The Bitboys look like they've actually produced some working silicon of their mythical XBA Xtreme Bandwidth Architecture-based graphics card which they were previewing at Assembly 2002. Photos of the card can be found here(1) and here(2). What next? Duke Nukem Forever gets released by 3DRealms? ;)"
  • FPGA? (Score:5, Interesting)

    by katokop1 ( 596588 ) on Monday August 05, 2002 @01:21AM (#4010477)
    Does anyone else notice how much that looks like an FPGA at the center of the board?
    • Re:FPGA? (Score:2, Informative)

      by ianpatt ( 52996 )
      It is: APEX 20K Devices [altera.com].
    • Yeah, the ALTERA logo on the side makes it look suspicious. Unless, of course, Altera has a DIY ASIC construction kit available to those with the cash. It's a fab-in-a-box!
    • Re:FPGA? (Score:1, Interesting)

      by Anonymous Coward
      They had two different cards with them. The PDA accelerator prototype was using the FPGA based board. The PC graphics accelerator had a real ASIC.

      The PDA accelerator was actually quite a hot product. It had fewer than 100k gates and worked on a different principle than the usual (vector/polygon based) accelerator cards.
    • Yep, that one board definitely has an FPGA on it. So that means the Bitboys haven't produced any silicon yet. This is a prototype...
      Who knows if it works well or will actually be fast?
      • Yeah, but the hard bit is to get a functioning design at all. If they have a design, porting it to an ASIC should be fairly straightforward.

        Besides, if you send the design to Altera, they can manufacture a whole bunch of ASICs for you. In theory... whether the resulting ASIC would be fast enough is unclear. (Doubt it.)

  • hi-res pictures (Score:4, Informative)

    by MiTEG ( 234467 ) on Monday August 05, 2002 @01:22AM (#4010480) Homepage Journal
    Please be gentle! I found these [solidhardware.com] in the forums
  • Interesting... (Score:1, Interesting)

    by Critical_ ( 25211 )
    I guess they are using two 128-bit buses to maximize memory bandwidth. I don't see what is so revolutionary about this. Unfortunately, if the graphics processor itself is slow and can't utilize the bandwidth, what's the point?

    Case in point... the new Parhelia by Matrox has seen some overclocking, and they have found that the percentage gain in performance scales linearly with the percentage gain in core clock speed. It's still a slow card relative to the current FPS kings.

    On a side note, I am a Matrox fan and have owned a G200, G400max, and will be getting my Parhelia soon. I don't game as much and can use three monitors. =)
    • Re:Interesting... (Score:3, Informative)

      by 10Ghz ( 453478 )
      "I guess they are using two 128-bit buses to maximize memory bandwidth. I don't see what is so revolutionary about this"
      If you don't know what you are talking about, then don't talk, OK? The chip has 12 megs of on-die eDRAM on a 1024-bit memory bus running at chip speed. That is pretty revolutionary when it comes to PC 3D accelerators. The PS2 has something similar in its graphics chip.
    • I too thought about the Parhelia and the 3-monitor thing... thought it was pretty cool and could definitely use it -- but that's until I realized that it only goes up to 1280x1024 on each of the monitors.

      I am assuming you use 20/21 inch monitors, and 1280x1024 just seems wasteful on a 21 inch. 2x 1600x1200 gives the same pixel count without giving you as much clutter on the desktop (3 monitors).

      So if I eventually get enough $$ for the 3 monitors, I will probably get a good card for the primary (so I still *can* game) and then two PCI cards (they are still around, somewhat) for the other two, so I can drive each monitor at full resolution.

      Just some thoughts I collected during my search for the perfect desktop.
  • Bitboys are famous for announcing graphics cards and then never delivering. Over the past five years or so, there have been plans and supposed agreements with other companies for at least three graphics cards which would blow the competition away. Not a single card has ever been released.
    • Re:Why this is news (Score:3, Informative)

      by 10Ghz ( 453478 )
      "Bitboys are famous for announcing graphics cards and then never delivering."
      Because they did not announce any products. They held a seminar and in that seminar they had chips and video cards on display. Now, in the past they have "pre-announced" products. But that was in the past. They haven't said a thing regarding their products in over 1.5 years! They have been running silent for a long time now!
  • by Boone^ ( 151057 ) on Monday August 05, 2002 @01:28AM (#4010498)
    DNF will be the Pack-in.

    Guess you get to decide who's to blame for the holdup. :)
  • by aspjunkie ( 265714 ) on Monday August 05, 2002 @01:33AM (#4010512) Homepage
    Notice later on in the forum: "I have few more images from Presentation, but my site bandwidth is driving owner nuts already so I'll try limit it..."

    Man, he's probably really pissed now.

  • by Petteri Kangaslampi ( 4831 ) on Monday August 05, 2002 @01:34AM (#4010515) Homepage
    Having seen the demo at Assembly 2002, some clarifications are in order: the demonstration and presentation were about their new display acceleration solution for mobile devices, such as mobile phones and PDAs. It is a basic graphics accelerator with support for vector graphics (polygons, beziers) with anti-aliasing, double-buffering and transparency. Even texturing is optional.

    The demo they showed was indeed an FPGA. It has around 20k-30k gates, and was running at around 25MHz or so. The demonstration animated filled polygons and bezier curves, with various effects such as transparency at around 30-50 fps.

    Obviously we are not talking about something that would run Doom 3! Having said that, their solution looked very interesting from a mobile point of view, since it could provide acceleration for UI, SVG and simple games with a very low cost, in terms of gates and power consumption.
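
    For a rough sense of whether those numbers hang together, here's a quick back-of-the-envelope check (my own sketch; the one-pixel-per-clock fill rate, the 240x320 display size and the overdraw factor are assumptions, not figures from the presentation):

      # Rough fill-rate sanity check for a small mobile rasterizer.
      # Assumptions (not from the talk): ~1 pixel written per clock,
      # a 240x320 PocketPC-class display, and ~3x overdraw per frame.
      clock_hz = 25e6                   # ~25 MHz, as quoted in the talk
      pixels_per_frame = 240 * 320      # assumed display size
      overdraw = 3                      # assumed average overdraw/effects factor
      max_fps = clock_hz / (pixels_per_frame * overdraw)
      print(round(max_fps))             # ~109 fps of raw fill, so 30-50 fps with AA and beziers looks plausible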
    • I don't understand. If this is targeted at crappy PDA and mobile phone displays, why does it need such insane resolution and bandwidth?

      They should be concentrating on getting hardware Transform & Lighting running, so as to relieve the relatively slow CPUs on those devices of transforming all of those polygons.

      However, some of the photos referenced in the original article definitely show some kind of hardware driving a PDA-resolution LCD, so I guess that truly is their target environment.
      • The XBA stuff and this mobile accelerator are two different things. So, no, the mobile stuff doesn't have "insane resolution and bandwidth".

        If you check the photos, it seems that they had a real PC accelerator on display as well. I missed that at the party, since they didn't demo it or talk about it during the presentation. It may well be the XBA thing. I don't know how complete it is, but it might well be working -- heck, I've even seen Pyramid 3D running Tomb Raider. :-)

        The hardware driving the display is the mobile accelerator they demoed. I guess it is probably a standard FPGA test board, which is why it has both a PCI connector and USB interface. I don't recall the display type or resolution (if they even mentioned it), but it is probably something in the range of a Nokia 7650 (176x208) or PocketPC (240x320).
    • The demonstration and presentation was about their new display acceleration solution for mobile devices...

      The demo they showed was indeed an FPGA. It has around 20k-30k gates, and was running at around 25MHz or so. The demonstration animated filled polygons and bezier curves, with various effects such as transparency at around 30-50 fps.

      You shouldn't believe everything you're told. The chip was very clearly marked, it's an Altera APEX EP20K400C [altera.com] PLD. The memory chips on the back are Altera EPC SRAM 'configuration devices' [altera.com]. That means it's got between ~400,000 and 1,051,648 [altera.com] gates, not 20-30k.

      While I can't fault them for writing a program that does everything you mentioned for this particular PLD, it's definitely not as impressive as they lead you to believe, and I don't see how this is 'their' silicon at all. The other card, I can't comment on, since it didn't actually do anything. (But we all know that a reputable company like Bitboys is far above faking a demo, right?)

      • by Petteri Kangaslampi ( 4831 ) on Monday August 05, 2002 @02:58AM (#4010665) Homepage

        The chip was very clearly marked, it's an Altera APEX EP20K400C ... That means it's got between ~400,000 and 1,051,648 gates, not 20-30k.

        Yeah, well, the 20-30k figure was from their presentation (might have been 22k, don't remember exactly). Nothing forces you to use everything on the chip... Obviously we have no way of checking the claims, but I don't think they have a big reason to give misinformation in that area. The people they need to convince are the mobile device manufacturers, not a bunch of demo coders.

        There is no "their" silicon for the mobile accelerator in the usual sense of having an NVidia chip. There probably never will be either -- something like this would be integrated into existing silicon in a device, not put into a separate chip. If you open a modern mobile phone, you don't find separate CPUs, DSPs etc, everything that can be intergated is.

        Having said that, I wouldn't hold my breath waiting for the first device with this graphics hardware to ship... The Bitboys don't exactly have a stunning track record in that area. :-)

      • by udif ( 32355 ) on Monday August 05, 2002 @06:04AM (#4010943)
        The demo they showed was indeed an FPGA. It has around 20k-30k gates, and was running at around 25MHz or so. The demonstration animated filled polygons and bezier curves, with various effects such as transparency at around 30-50 fps.
        You shouldn't believe everything you're told. The chip was very clearly marked, it's an Altera APEX EP20K400C [altera.com] PLD. The memory chips on the back are Altera EPC SRAM 'configuration devices' [altera.com]. That means it's got between ~400,000 and 1,051,648 [altera.com] gates, not 20-30k.
        Just to set the record straight:

        When counting gates, FPGAs are inherently less efficient than ASICs or full-custom chips, due to the FPGA's fixed structure. A logic design that may take 400,000 gates in an FPGA may fit into 40,000 ASIC gates. This is normal. The fact that ALTERA calls this device a 400,000 gate device doesn't mean it actually is. This is a hard-to-measure number, just like performance benchmarks.

        FPGAs are usually left at 50%-60% utilization if you want to be able to get any decent speed out of them. If you start filling them, routing becomes harder, and the speed drops.

        Remember that this is a general purpose prototyping board. They may use a larger device not because they need it but because it allows them more freedom while designing.

        Summary: The fact that they use an FPGA that is characterized by its manufacturer as a 400,000 gate device doesn't mean their graphics core won't fit into 22,000 ASIC gates.
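
        Putting rough numbers on that (a sketch using the ratios above; the 10:1 FPGA-to-ASIC factor and 55% utilization are just the rules of thumb from this post, not measured figures):

          # Rough FPGA-to-ASIC gate-count conversion using the rules of thumb above.
          fpga_marketing_gates = 400_000   # Altera's headline figure for the EP20K400C
          utilization = 0.55               # assumed 50-60% usable for decent routing/speed
          fpga_to_asic_ratio = 10          # assumed ~10:1 overhead of FPGA vs ASIC gates
          asic_gates = fpga_marketing_gates * utilization / fpga_to_asic_ratio
          print(int(asic_gates))           # ~22,000 - consistent with the 20-30k figure from the talk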

        • Remember that this is a general purpose prototyping board.
          You might want to look a little closer at those images: the Altera-based board appears to be a BitBoys design, as they have their name and (c) on the PCB.

          However, given that, it might be their general-purpose proto board. The graphics core might be 40k (or fewer) equivalent gates, but it looks like they access the devices over a USB interface, and that takes more gates. And, if you're not sure how big your design will be, it never hurts to have too much FPGA.

          While I'm nit-picking the boards, the other board on display is a BGA in a friction-mount connector. Looks like they're expecting to replace that chip quite a bit.

          And, a final on-topic-like statement. I work on an embedded device. Acceleration of video at any cost in power is worthless. I'd rather flash 5 screens per second at very low power than 50 screens per second at multi-Watt power consumption. (Reference ATI's mobile graphics solution: low power, but still not as low as StrongARM integrated video.)

      • Not to point out the blindingly obvious, but it's quite likely that they're not using :::all::: the gates on the EP20K400C. Just because the PLD (PROGRAMMABLE Logic Device) has that many gates doesn't mean they're all being used in the Bitboys design.

        -Chris

      • When developing a system, it's always typical to overspec hardware a LOT so you don't have to make many hardware changes during the design process.

        At work I'm dealing with a system that has a few FPGAs and DSPs on it, along with some RF hardware. It currently consumes 6-7 amps at 24 volts. It's expected that by throwing away a lot of our "excess" silicon, we'll be dropping that to an ampere or two, simply because EVERYTHING on that board is massive overkill.
  • So I'm not much of a hardware guy, but why the LED (?) numbers? http://solidhardware.com/sn/bb/P8010048.JPG
    Even in a clear case they would be parallel with the floor/desktop?
    • These are probably just development tools - see where the card is state-wise, watch important data fly around, and figure out where in the routines it crashes. That sort of thing. (Sorta similar to the port 80 card for PCs, which shows debug output from the POST on 2x 7-segment LEDs.)
    • Actually, I bet there are case-modding gamers out there who would pay good money to have the current FPS shown on LEDs on the card.

      It could be pretty cool if you had one of those case windows...
  • Will they succeed? (Score:2, Interesting)

    by Anonymous Coward

    I think "will they succeed?" is a really interesting question, for this company.

    I presume they've still got Psi (Sami Tammilehto). He was the Carmack of the demo scene, an innovator in realtime graphics programming, back in the early/mid '90s.

    Finland has proven, with Nokia, that it can compete on a global scale with consumer products. But this startup feels a long way behind Nvidia, ATI and the other established players.

    Will their chip be good enough to find people to license it? Will the drivers be good enough to compete with Nvidia? What market will they target (hardcore/mainstream/mobile)?

    I think this news raises more questions than it answers, but for love of the Finnish demo crews alone, it's worth keeping an eye on them.
  • it uses slashcode... what good is slashdot when they just link to slashdot clones... oh wait, I can't get all the goatse.cx links and page wideners... so can we say we slashdotted slashdot?
  • I guess they finally figured out the strategy:

    1. Hype!
    2. Actually produce a product.
    3. Profit!
  • Well, it has. So I go to this assembly'02 site and find a link to The Scene.org [scene.org] and start looking around. Man, there is some slick stuff there. And archives that go back to Future Crew's 2nd Reality. I can remember getting that to work on my 40 with my Gravis Ultra Sound.

    Anyway, just brought back old memories. Now my chip is up to 59C. :)

    J:)
    • Interestingly enough the BitBoys are actually ex-Future Crew guys. As are members of the Max Payne and 3D Mark teams.

      Future Crew Timeline [defacto2.net]

      And Skaven [futurecrew.com] was even competing. In fact he won the "Instrumental Music" category with a new version/sequel to his previous winning song "Catch That Goblin".

      Anyone interested in MOD/ULT/S3M/IT/XM/669 music from the demo scene should check out Nectarine Radio [scenemusic.net].

      • by Anonymous Coward
        Linux, SSH, IRC, Nokia, Max Payne, 3D Mark.. what is this all about?
      • I'd heard that some of the FC guys had gone off to BitBoys Oy. and 3D Mark (Didn't know about Max Payne). But I haven't heard of FC or Demo-ing in years. I'd figured that M$'s attempts at hardware abstraction had killed them off. Glad to hear there's still a demo culture out there. And they're producing some pretty slick stuff.

        The site I linked to in my Grand-parent post also has archives of FC's music. I didn't see any recent demo work from FC - must be too busy with their jobs in the industry...

        J:)
  • As you can tell, the card as displayed in the photos has no heatsink... can we surmise then that this GPU is clocked at a very slow rate for compatibility purposes?

    Or perhaps that Transmeta will couple it to its ultra low power, ultra low performance line of CPUs? ...or that they may at some time in history lay challenge to Intel's dominance of the integrated chipset market circa 1997?

    That little LCD display being driven by the Bitboys GPU is nice... only if we want to run in 120 x 70 display mode.

    Cut the donkey-puck, BitBoys. Put out the hardware on production-level silicon. Until then, we can't take promises like "going to tape-out in 1999" for real.

    Maybe in another 3 years the tape-out silicon will reach production; until then, what? Synthetic benches run on an imaginary system looping an imaginary benchmark under synthetic conditions? ...we will see this when DirectX 10.0 parts hit the market. :)

  • by alptraum ( 239135 ) on Monday August 05, 2002 @02:29AM (#4010616)
    "Current Openings" I'm afraid I might see...

    Now hiring:
    Marketers - We need dedicated people to hype non-existent products that on paper outperform all the competitors combined!! We will never have an actual product for the market, but we need skilled marketing professionals to make people think one day we actually will.

    Engineering - No Current Openings (and never will be)
  • by Anonymous Coward on Monday August 05, 2002 @02:30AM (#4010619)
    All jokes aside, there really is a connection between Duke Nukem Forever, and the BitBoys.

    When the members of the famous demo group Future Crew (think "Second Reality") finally got full-time jobs, there were a couple of shops they went to. First, some of them went to 3D Realms, which produced Max Payne (Skaven [futurecrew.com] did some of MP's music), and of course, does work on DNF. At the same time, some of the other guys broke off to work at the BitBoys, as they were really more of the hardware type. So who knows? It may very well be possible that both sides are holding things up to release together, all because of where they came from.
    • by Anonymous Coward
      They didn't go to 3D Realms; 3D Realms just published their game. They started a company called Remedy Entertainment and created the number one hit game Max Payne. PSI, the main coder of all the best Future Crew demos, was a very young man from a city called Turku, located in the southwestern corner of Finland, when Future Crew released their first PC demos. However PSI (aka Sami Tammilehto, an avid corewars player btw, he's one of the best corewars players in the world) WAS NOT the first to code PC demos. The first PC demo group came from the same town where PSI lived, but it was SORCERERS, not Future Crew.
      Future Crew and Sorcerers were competing against each other. Read the scrollertexts of Future Crew's first demo "YO!" (coded by PSI) - they send greetings to "Sorcerers".
  • Yahaya (Score:3, Insightful)

    by Konster ( 252488 ) on Monday August 05, 2002 @02:33AM (#4010628)

    The demo they showed was indeed an FPGA. It has around 20k-30k gates, and was running at around 25MHz or so. The demonstration animated filled polygons and bezier curves, with various effects such as transparency at around 30-50 fps.

    Yeah, but the demo unit they showed was the relative size of a tank to a Yugo... they want to put THIS into a MOBILE device? Mobile devices come with an ISA slot? Ya, ya, I see how it's all for test and NOT production and all that, but you'd think that BitBoys would have shown something smaller for the mobile market than something you could barely fit into a standard ATX case!

    • Re:Yahaya (Score:2, Informative)

      Yeah, but the demo unit they showed was the relative size of a tank to a Yugo...they want to put THIS into a MOBILE device?

      You don't put new chips into mobile phones. The accelerator would be integrated into the same silicon as the CPU or the display controller (which might be on the same chip anyway). From that perspective it makes sense to develop and demo it on an FPGA, since it would be licensed as an IPR block anyhow.

  • Not to sound stupid, but why is this special? Who are Bitboys and why should I know them? Anyone care to fill me in?
    • Bitboys are a hardware company that released several press releases 2-3 years ago touting their XBA architecture that had 20GB/s bandwidth. It was touted as the GeForce killer. Needless to say, they never released anything besides press releases.
      • Try releasing press releases and demos as early as 1996!

        In fact, I remember reading an old MaximumPC article stating that the Bitboys' Voodoo2 killer was doing a demo at 25fps while the Voodoo2 was doing it at 45fps. It's no longer online so I can't link it, since this was way back in 1997. After the hype, their demo never performed and was never produced.

        I take them with a grain of salt. I am glad I am not an investor, since they have never actually made a single sale in 6 years! My guess is that either Voodoo or Nvidia kept coming out with superior technology and they decided not to ship and to start over with yet another design.

        I am very skeptical that this chip is the killer chip. Mark my words: if the GeForce5 is better, then you can kiss this vaporware goodbye. I am surprised only one /.er really mentioned the poor track record and the hype. It's just that.

  • Looks like one of the cards is even driving three -- yes, count 'em -- three 7-segment LED displays!
    Imagine the number of frames-per-second of ultra-low rez polygons that card can deliver!


    Okay, I need some more sleep. :-)

    dreamer out...
  • I was there (Score:5, Interesting)

    by 10Ghz ( 453478 ) on Monday August 05, 2002 @02:56AM (#4010663)
    Bitboys held a seminar at Assembly '02 regarding graphics hardware in handheld devices. In that seminar they had a hardware-accelerated demo of their technology. That demo was done using an Altera FPGA chip. It has hardware SVGA acceleration, FSAA (of awesome quality, I might add!), the works. Before the seminar I thought "Who needs 3D acceleration in a PDA/mobile phone?". After that demo I'm convinced that it's a must-have feature!
    After the seminar I (and others) managed to talk with them. They had their PC 3D accelerator on display, along with sample chips. They are pulling out of the PC business for now in order to focus on the mobile stuff. The chip is called "Axe" and it is working. They are testing it in-house as we speak, and a new revision of the chip is coming up. But it will not reach consumers because Infineon is killing the silicon process at the end of the year. The chip had 12 megs of eDRAM and it was somewhat bigger than other chips out there.
    You can get the seminar from:
    ftp://ftp.asmparty.net/pub/seminars/
    It's the one called "Graphics hardware for handheld devices". I'm the guy with the laptop :)
  • I wouldn't hold my breath waiting for one of these. I can sniff the vapour all the way from Finland. Give us the real sh-t that we can put into our computers and benchmark.
  • 2 days ago I saw a banner at www.worthplaying.com which clearly had the text 'Duke Nukem Forever: pre-order NOW!'.

    Apparently it's almost done... or a scam :) I bet the latter.
  • Many forget about the enormous amount of work it is to start from scratch. The card might not be able to compete with the latest offerings from nVidia or ATI, but it can find a place in the value market. The next incarnation might be good enough to compete with the latest models from the big ones.

    The only mistake Bitboys have made so far is the PR department trying to hype their product(s), but that was probably necessary to attract investors...
  • All Bitboys have done is make a wider memory bus. They try to make the case in the press release that memory is handicapping graphics performance, not the GPU. Historically graphics cards have used some of the fastest memory they can get, but the capability to widen the bus is nothing new and not an accomplishment. The graphics card business is cutthroat; if widening the memory bus made for better performance, the industry giants would widen it (and they have). No matter how fast you can fill your memory and retrieve data from it, you need a GPU that can process the data fast enough so that it isn't just sitting there. What the Bitboys have done is relatively easy, not new, and if it was such a good idea then NVIDIA would have done it first. Instead they only do it when necessary (for instance, the nForce needed a wider bus so that cheaper system memory could also serve as a frame buffer).

    Somewhere there are some really stupid venture capitalists funding these guys.
    • Re:BS Announcement (Score:3, Informative)

      by 10Ghz ( 453478 )
      "All bitboys has done is made a wider memory bus. They try to make the case in the press release that memory is handicapping gfx performance no gpu. Historically gfx cards have used some of the fastest memory they can get, but the capability to widen the bus is nothing new and not an accomplishment"

      You make it sound so simple. But it's not. What BB is doing is not "just widening the memory bus". They actually move 12 megs of the RAM ON THE DIE ITSELF. And that memory is on a 1024-bit memory bus. For comparison, that's four times as wide as on the Radeon 9700 and Matrox Parhelia. That embedded RAM is used for the things that require the most bandwidth, namely the frame buffer. Textures don't need a lot of bandwidth, and they are located in the slower "traditional" RAM. Of course, if there's any eDRAM left, the most used textures are stored there.

      When it comes to PC 3D accelerators, that IS pretty damn revolutionary!
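
      For a rough sense of scale, here's a quick back-of-the-envelope calculation (a sketch; the 150 MHz clock is just an assumed figure, not a Bitboys spec I've seen):

        # Peak bandwidth of a 1024-bit on-die bus running at chip speed.
        # The 150 MHz clock is an assumed figure, not a published spec.
        bus_width_bits = 1024
        clock_hz = 150e6
        peak_gb_per_s = bus_width_bits / 8 * clock_hz / 1e9
        print(peak_gb_per_s)  # ~19.2 GB/s, in the ballpark of the ~20 GB/s figures quoted for XBA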
      • It's one thing to propose embedded memory in a paper design, and another thing entirely to get this working on silicon that sells. PS2 did this and Sony deserve much credit for delivering product. Execution is what matters in the graphics business. nVIDIA understand this. Architecturally or academically, what also matters, given the ability to execute, is elegance. Tile based rendering is an elegant idea, but it's a pig to execute it. Elegance is nothing without execution, and brute force isn't even elegant.

        • It's one thing to propose embedded memory in a paper design, and another thing entirely to get this working on silicon that sells.

          A GPU with an on-die frame buffer isn't just vapor on paper. There's one in a video game console from Nintendo called the GameCube. PCs with the GameCube hardware, called Dolphin development kits, are available to a select few.

          • Blimey. Talk about quoting out of context! If you'd quoted my next 3 words, I said "PS2 did this". You're not trying to impress me with your developer credentials are you?
            • You're not trying to impress me with your developer credentials are you?

              No, just pointing out an additional example. If both GCN and PS2 do it, and they manage to make good graphics on a budget (a PS2 chipset + a joystick + a DVD-ROM drive + a DVD decoder license < $200), it's only a matter of time before the tech comes to the PC. Expect good things from ATI in the near future.

              I'm not even a licensed developer; I'm just a lowly homebrew hacker. Here's what I've done on the GBA [pineight.com].

      • embedded RAM is used for the things that require the most bandwidth, namely the frame buffer. Textures don't need a lot of bandwidth, and they are located in the slower "traditional" RAM.

        Oh dear. You haven't thought that through, have you? How many texels contribute to a pixel per texture map? How many textures per polygon?

        • Which needs more bandwidth: Textures or double (triple?) buffered frame-buffer with 32bit Z and 32bit colors?

          Answer: The frame-buffer.
          • You haven't thought it through yet. Why don't you _quantify_ your assumptions? How many textures per polygon? Don't forget those fancy DX8 pixel shaders while you're there. How many texels are read to generate one textured pixel? Is this a classic Z buffered architecture or Tile Based? Any tricks like Hyper-Z accounted for? Where are your caches for the texture in off-chip RAM? If you haven't got the message yet, it's not to make sweeping generalisations about graphics architectures. Sigh.

            • And you still haven't answered my question: which needs more bandwidth, the frame-buffer or textures? Generally speaking, the frame-buffer requires more bandwidth.
              Sure, there are all kinds of tricks to reduce the bandwidth eaten by the frame-buffer (compressed Z etc.), but you can do the same with textures (compressed textures, anyone?)
              And even if textures as a whole required more bandwidth than the frame-buffer, it still doesn't mean that the most used 12 megs of textures (that much would fit into the eDRAM) would require as much bandwidth as the frame-buffer does. So putting the frame-buffer into the eDRAM is the smart thing to do IMO.
              • I'll answer. Textures. I'll also say sorry for biting your head off, but I've wasted far too much of my life doing graphics benchmarks in PowerPoint and I'm hypersensitive. If I had 12 Mbytes, I'd use it for a few tiles worth of frame buffer in a tile based renderer, subpixel accumulation for anti aliasing, and lots of texture _caches_, with both main texture memory and frame buffer off chip.
                • Where's the proof that textures eat more bandwidth? Also, would the most used 12 megs of textures eat more bandwidth than the frame-buffer would (you never answered that)?
                  But I'm sure that you know more about this than people with vast experience regarding 3D (both hardware and software). I mean, BB is using the eDRAM primarily for the frame-buffer, not for textures. But like I said, I'm sure you know more about this than they do...
                  From BB website:

                  As an example, how much memory bandwidth is required if the 3D-graphics chip renders 600 million pixels, 1.2 Gigatexels/sec using a dual texturing pipeline? Assume 32-bit color and 32-bit floating Z, as both are superior to their 16-bit counterparts.

                  For each rendered pixel (on average) we read the depth value and write the color and depth value back to the frame buffer. This means that for each pixel we must access 12 bytes of memory. 600M by 12 is 7.2 GB/s. But this is not all, we also have to count the bandwidth required by the video refresh unit, at 1024x768x85 Hz that's 64 MB/s. We also need to read textures and that's 500 MB/s to 2 Gigabytes/sec. In total, close to the 10 GB/s memory bandwidth and that's just for 600M pixels/ 1.2 Gigatexels
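
                  The arithmetic in that quote checks out as a rough sketch (taking BB's own figures as given; the 2 GB/s texture number is just the high end of the range they quote):

                    # Quick check of the quoted frame-buffer bandwidth example.
                    pixels_per_s = 600e6              # 600 Mpixels/s, as in the quote
                    bytes_per_pixel = 4 + 4 + 4       # read Z, write Z, write color = 12 bytes per pixel
                    framebuffer_bw = pixels_per_s * bytes_per_pixel   # 7.2e9 B/s = 7.2 GB/s
                    refresh_bw = 64e6                 # video refresh figure as quoted
                    texture_bw = 2e9                  # high end of the quoted 0.5-2 GB/s texture range
                    total_gb = (framebuffer_bw + refresh_bw + texture_bw) / 1e9
                    print(total_gb)                   # ~9.3 GB/s, i.e. "close to 10 GB/s"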
                  • There isn't -proof- one way or the other, there are only assumptions, specific design decisions and specific implementations. Don't forget in many cases, the closest thing to proof available usually means breaking an NDA and I'm not falling for that trick. Of course I don't know more than people with vast experience, but when you've been doing 3D hardware and software since 1985 like I have (on and off with about 50% duty cycle) you're allowed to argue a few points on Slashdot. I promise not to reply if you want the last word.

      • They actually move 12 megs of the RAM ON THE DIE ITSELF.

        ATI did that for the chip inside the Nintendo GameCube, so that isn't all that impressive. Considering that the top-end Nvidia and ATI GPUs are some of the most complex silicon on the planet, this wouldn't be that hard for another manufacturer to do. 1024-bit buses have been used in supercomputing circles before; this isn't an accomplishment like tile-based rendering was.
  • Yea.. right. I just read on another site that it has been hinted that DNF wouldn't be seen in 2002. And the most amusing thing is, George Broussard has had to admit that the graphics won't be as good as those in Doom III.. so why the huge delay on this damn game? They must have switched engines on it at least 3 times..
  • by Anonymous Coward
    I'm saving my money for this card which was hyped on slashdot [slashdot.org]. Sounds much more cool. Then I'll put it in a PC with a Transmeta processor along with my Seti@HOME PCI card. I can't wait for all those cool Loki games to be released so I can take advantage of the chip, I can just download everything from Freenet.
  • The BitGirls [griots.co.jp] are perhaps a bit more interesting than the BitBoys. All computer generated. Click a girl then "photo" for the image gallery. Click to enlarge. There's a movie button, but it's greyed out. Maybe someone can find them?

    Keep browsing if the first one you try is a bit cheezy. Some of the computer work is quite impressive. [griots.co.jp]

    Just think, in a couple of years the BitBoys may be able to render the BitGirls in realtime :)

    -
  • The Radeon 9700 is getting nearly that (19.2 GB/sec) with current technologies. In 2000, this would have been impressive. Now, it's not nearly enough to make for a good architecture.
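
    For comparison, that figure is roughly what a conventional wide external bus already gives you (a rough sketch; the 256-bit bus and ~300 MHz DDR memory clock are ballpark assumptions for the 9700, not exact specs):

      # Approximate external memory bandwidth of a wide DDR bus.
      bus_width_bits = 256              # assumed 256-bit external bus
      ddr_clock_hz = 300e6              # assumed ~300 MHz, double data rate
      bandwidth_gb_per_s = bus_width_bits / 8 * 2 * ddr_clock_hz / 1e9
      print(bandwidth_gb_per_s)         # ~19.2 GB/s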
  • Just in case you younger folks don't recall exactly just how long ago BitBoys started promising hardware, there were quite a few people that I talked to who decided to delay purchasing a 3DFX Voodoo 2(!!) because the BitBoys card was "right around the corner".

    This is a whole different realm of lateness, approachable only by the likes of HURD and possibly Duke Nukem Forever, as mentioned earlier.
  • Call me when they have running silicon. Until then it's just more smoke and mirrors.
  • Wouldn't the bottleneck be the AGP bus? Even with the new 8X spec?
  • Of course, when I read this headline I immediately thought of our pint-sized West Virginian friend [weeklyworldnews.com].

  • Mod this off-topic a tad, but how the hell do I get a job like George Broussard's where I can just run a development team indefinitely and never release a product?

    People kind of chuckle about Duke Nukem Forever, but I mean, think about it; surely Duke Nukem Forever is the worst-case scenario in software project management 101.
  • Still... 32-bit color, when everybody else is moving to an HDRI (high dynamic range imaging) format that will be supported by most monitors (not LCDs, and not as well as monitors built especially for, say, 64/128-bit color in the future, with higher contrast ratios).

    Anyway, the point is they are talking about 20GB/sec bandwidth... comparing themselves to a Radeon 8500. They aren't shipping yet; the Radeon 9700 is shipping, has about the same specs, has brand recognition, and has more bit depth; Matrox has more features and bit depth; Nvidia will probably ship theirs before Bitboys even start sampling... and they will support HDRI as well.

    So what's the point? They got a proof of concept running on an Altera FPGA; good for them. Any new technology is welcome and I usually appreciate it, but in their case they have made so much vapor over the last few years that they've lost all respect and credibility with the few of us still interested in their stories. If they demo something extraordinary, I'll be impressed. I'd say evolutionary would be a better expectation.
