
Positive Reviews For Nvidia's GeForce 6800 Ultra

Sander Sassen writes "Following months of heated discussion and rumors about the performance of Nvidia's new NV4x architecture, today the new graphics cards based on this architecture got their official introduction. Hardware Analysis has posted a first look at the new GeForce 6800 Ultra, taking it for a spin with all of the latest DirectX 9.0 game titles. The results speak for themselves: the GeForce 6800 Ultra is the new king of the hill, beating ATI's fastest by over 100% in almost every benchmark." Reader egarland adds "Reviews are up on Firing Squad, Tom's Hardware, Anandtech and Hot Hardware." Update: 04/14 16:54 GMT by T : Neophytus writes "HardOCP have their real-life gameplay review available."
This discussion has been archived. No new comments can be posted.

Positive Reviews For Nvidia's GeForce 6800 Ultra

  • latest vs last-year (Score:5, Informative)

    by bwindle2 ( 519558 ) on Wednesday April 14, 2004 @12:15PM (#8860677)
    They are comparing the latest nVidia GPU to the 9800XT, which is several months old. When ATI's next-gen chip comes out (two weeks?), only then will we be able to see who holds the GPU Speed crown.
  • by hawkbug ( 94280 ) <psxNO@SPAMfimble.com> on Wednesday April 14, 2004 @12:19PM (#8860733) Homepage
    This thing requires a 480 watt power supply, minimum. That's too much. I am currently responsible for a large number of servers, none of which has a power supply larger than 400 watts.

    It's not hard to see why the U.S. has to violently defend our oil interests when we have video cards wastefully burning through electricity like there's no tomorrow.

    I'm all for advances in processor technology, just not when it comes with a high energy consumption price.

    I once heard that by leaving a computer with a measly 150 watt power supply (minute by today's standards) on 24 hours a day like most people do, it consumes more energy than the common refrigerator.
  • by Recoil_42 ( 665710 ) on Wednesday April 14, 2004 @12:24PM (#8860789) Homepage Journal

    here. [mbnet.fi]

    those benchmarks don't look too impressive to me, and the hugeass heatsink/fan combo is still there! not to mention that it requires *two* molexes?

    Nvidia is really starting to fall behind...
  • Re:nvidia's back (Score:3, Informative)

    by scumbucket ( 680352 ) on Wednesday April 14, 2004 @12:25PM (#8860797)
    I've had an MSI K7N2-L motherboard, which has the nForce2 chipset, for over a year now. It's rock solid with no problems.
  • Holy mother of crap (Score:5, Informative)

    by l33t-gu3lph1t3 ( 567059 ) <arch_angel16 AT hotmail DOT com> on Wednesday April 14, 2004 @12:28PM (#8860832) Homepage
    Strong points of new Nvidia card:

    -Obscene performance boosts, on a scale I've never seen before
    -fancy new effects
    -massively improved image quality
    -heatsink fan still pretty quiet
    -basically free 4xFSAA and 8x ANISO

    Weaker points of new Nvidia card:

    -Expensive
    -it seems that shader precision is still not as pretty as ATI's, though that may be fixed by game patches
    -takes up 2 slots with the tall heatsink
    -480W recommended PSU
    -video processing engine isn't implemented in software yet

    I don't really object to the power requirements. This thing is more complicated, bigger, and has more transistors than a P4 Extreme Edition. It consumes about 110W, of which 2/3 is the GPU die's power draw. It is certainly NOT unreasonable to require a big power supply with this thing. It seems as though ATI's solution will have a power supply recommendation as well. Simply put, if you're gonna improve performance by such a margin by means other than smaller manufacturing, you're going to increase power consumption. Get over it.

    This thing isn't meant for SFF PCs or laptops, though I'm sure the architecture will be ported to a laptop chip eventually. As for the 2-slot size, well...It consumes 110W! To put this in perspective, it consumes more than any non-overclocked desktop CPU today! Think of how big your Athlon64/P4EE heatsink/fan is, then you'll realise that 2 slots aren't really that big of a problem.

    My own personal reason for wanting this thing: It can play any current game at 1600x1200 with 4xFSAA and 8x anisotropic filtering at a good framerate, and is the only card that can claim to do this right now :)
  • Re:nvidia's back (Score:4, Informative)

    by Jeff DeMaagd ( 2015 ) on Wednesday April 14, 2004 @12:29PM (#8860836) Homepage Journal
    Two people have had some issues with the nVidia IDE drivers; at least one of them fixed it by using a generic IDE driver.
  • by bonch ( 38532 ) on Wednesday April 14, 2004 @12:29PM (#8860843)
    ...so it's even sillier that the submitter would say that. But, hey, it's healthy fanboyism I guess.

    Here's what the Register says [theregister.co.uk]:

    ATI will ship its much-anticipated R420 chip later this month as the Radeon X800 Pro. The part's 26 April debut will be followed a month later by the Radeon X800 XT on 31 May.

    So claims Anandtech, citing unnamed vendor sources and a glance at ATI's roadmap.

    If the date is accurate, it puts ATI just 13 days behind Nvidia's NV40 launch on 13 April. NV40 will surface as the GeForce 6800 and is likely to form the basis for other series 6000 GeForce parts. Note the lack of the 'FX' branding - Nvidia has dropped it, Anandtech claims.

    The X800 Pro will ship with 256MB of GDDR 3 graphics RAM across a 256-bit memory bus, but a revised version with 512MB of memory is expected later this year. The report also forecasts the arrival of an X800 SE, which supports 128MB of vanilla DDR SDRAM.

    The R420 is an AGP 8x part - the native PCI Express version, the R423, will launch on 14 June, the report claims. It too will be offered as the Radeon X800. Both versions are expected to clock at around 500MHz with 1GHz memory clock frequencies. They feature eight-stage pipelines with six vertex shaders.

    Expect to see Radeon X600 and X300 products in due course, we're told, as the RV380 and RV370 parts come on stream. These represent ATI's first 110nm parts.

    Meanwhile, ATI's Radeon 9100 IGP is due for an update, apparently, in a few months' time. The revision, codenamed 'RS350', will support Intel's LGA775 CPU interface.

    Further down the line, late in Q3, ATI will offer three new Pentium 4 chipsets, currently dubbed the RS400, RC400 and RU400. The first provides PCI Express graphics and non-graphics add-in card buses, along with a dual-channel memory controller. The other two will offer single-channel memory support, while the latter will not support external graphics cards.

    AMD isn't being left out, courtesy of the RS480 and RX480 chipsets, the first with integrated graphics, the second without. ®


    Here's a little more info from Rage3d [rage3d.com]:

    Only weeks before the release, ATI Technologies decided to boost the performance of its next-generation code-named R420 processor by increasing the number of pixel pipelines inside the chip. An industry source told X-bit labs that the story is not about a redesign, but about enabling "big guns" that were "hidden" inside the chip from the very beginning.

    ATI Technologies' chip known as R420 will be called RADEON X800 PRO and is likely to be launched on the 26th of April, 2004. A higher-speed flavour of the R420 - the RADEON X800 XT - is expected to debut on the 31st of May, 2004, if the assumptions posted earlier this week are correct. The PCI Express x16 solution powered by the R423 architecture will see the light of day on the 14th of June. ATI on Tuesday began a marketing campaign on its web site to support the launch of the new graphics architecture.
  • by Seoulstriker ( 748895 ) on Wednesday April 14, 2004 @12:32PM (#8860884)
    Hmmmmm. Let's see. We have about 10 reviews saying that the nVidia card is 2x faster than the current top-of-the-line cards, and we have one review by [H]ardOCP that uses inconsistent measures in its benchmarks (different resolutions, AA, and AF settings in the same graph) and is profoundly anti-nVidia, and we are supposed to take it seriously? Come on...
  • by Spy Hunter ( 317220 ) on Wednesday April 14, 2004 @12:54PM (#8861102) Journal
    Well, it is obvious in that benchmark something *besides* the graphics card is limiting performance, since increasing resolution hardly even decreases the framerate. If you look at only the benchmarks where it appears that both cards are stressed to the max, NVidia's card does seem to be about twice as fast as ATI's when the new features of DirectX 9 are used. Of course that doesn't make the submitter's statement correct, but it is quite impressive for a one-generation improvement in card performance. (... at a cost of $500 + a beefy new power supply to feed it? yikes)
  • by gl4ss ( 559668 ) on Wednesday April 14, 2004 @12:59PM (#8861158) Homepage Journal
    Huh? The prices _do_ come down.

    The prices of _new_ cards are always set at the maximum that somebody will pay for them.

    If you want a cheap card, buy a cheap card (that same cheap card would have cost hundreds of dollars a few years back).

    The way I see it, there have been a few categories for years: 1. ultra-cheap cards at $30-50, 2. entry-level gaming cards at $100, 3. mid-range gaming cards at $200-300, and 4. high-end gaming cards at an insane $400-500. All that changes over the years is which cards belong in which category.

    New cheap cards come out occasionally, but they're usually based heavily on yesterday's high-end chips.
  • Re:2d Performance (Score:2, Informative)

    by solidox ( 650158 ) on Wednesday April 14, 2004 @01:04PM (#8861210) Homepage
    In my personal experience, ATI cards have always been much, much sharper than their nVidia equivalents at 2D rendering.
  • by KalvinB ( 205500 ) on Wednesday April 14, 2004 @01:05PM (#8861211) Homepage
    "almost" means "many of, but not all."

    Congratulations on finding the games section where it didn't womp the best ATI card until you get into the higher resolution ranges.

    However, you'll notice on the preceding pages that "over 100% better" was a very common occurrence in areas like shaders and lighting and whatnot.

    Pointing out areas where the GeForce doesn't beat the ATI card by 100% does exactly nothing to diminish the submitter's point.

    This is why he said "almost every" and not "all."

    Ben
  • Re:I wish .... (Score:0, Informative)

    by bonch ( 38532 ) on Wednesday April 14, 2004 @01:10PM (#8861281)
    I'm glad he cleared that up for us. Because this little known company called SGI didn't develop OpenGL back in 1992. In fact, were it not for MS, we would still be in the computer graphics dark ages.

    All he said was that Microsoft provided a platform for Windows. This is true--the point of DirectX is exactly what was stated. What does OpenGL have to do with anything when the article is simply explaining what DirectX is in the context of a DirectX 9 card? How does that make them not "have a clue what they are talking about"? Everything they said was true.

    Take off the anti-"M$" blinders.
  • by dinivin ( 444905 ) on Wednesday April 14, 2004 @01:13PM (#8861324)
    Retail availability for the nVidia card is around April 26th. So, in fact, it's not King of the Hill yet :-)

    Dinivin
  • by Anonymous Coward on Wednesday April 14, 2004 @01:18PM (#8861373)
    This will probably do: http://www.millerwelds.com/products/multiprocess/xmt_350/
  • Power supply issues (Score:4, Informative)

    by EconolineCrush ( 659729 ) on Wednesday April 14, 2004 @01:18PM (#8861376)
    I see a lot of posts on the fact that the 6800 Ultra requires a 480W power supply. However, if you read Tech Report's review [techreport.com], you'll notice that the card's actual power consumption isn't much more than the previous generation of cards. In fact, its idle power consumption is actually lower than the 9800 XT's.
  • by Anonymous Coward on Wednesday April 14, 2004 @01:25PM (#8861467)
    I was one of the lucky 250 people who got to be at the GeForce 6800 release in San Francisco. They held a LAN party for 250 people, including some tournaments of UT2k4 and BF:Vietnam. I made the quarterfinals (top 8) of the UT2k4 tournament and got to actually play on the new video card. All I can say is - wow. I own a 9800 XT so I'm not too shabby, but this card took things to the next level - the ability this card has is just unthinkable in a lot of ways if you're a graphics programmer like me.

    -Shader 3.0 compatible (Far Cry had a demo at the show of a patch they have coming out that will upgrade the game to Shader Model 3.0. It's by far the biggest improvement in a game I've ever seen, and I actually got to play it).

    -14983 3DMark SCORE! If you know anything about 3DMark, you'd scream with joy at that one.

    -Other games were on show too, like EverQuest 2 and The Lord of the Rings: The Battle for Middle-earth, and of course the new Nvidia chick, Nalu, with per-pixel-lit hair that has 2 million vectors rendered in real time...

    All I have to say is wow.
    (But wait for PCI express before you buy one)
  • by kobaz ( 107760 ) on Wednesday April 14, 2004 @01:37PM (#8861611)
    Using the GPU as a second processor would definitely be awesome, but your comment about 100fps being a waste is silly.

    Computer-generated frames per second are a completely different thing from film frames per second. Most of your DVDs are 23.976 frames per second, and you can watch even the biggest action scenes with no issues.

    Try playing even Quake 2 multiplayer at 30fps and you will get a headache. It might be okay in single player because there is much less action going on. But once you have 50-100 entities flying around - think players blasting each other with various weapons that have visible paths, where each visible bullet is an entity whose projectile path needs to be calculated and rendered - that's a lot of work if you have a card that can only draw at 30fps.

    A video card only capable of drawing at 30fps is limited to CALCULATING and rendering 30 frames every second at a given complexity. If you add more entities and exceed that complexity, you can't manage 30fps anymore, and movements won't be calculated and displayed quickly enough to guarantee smooth gameplay.

    The more fps you can calculate and render in a second, the more complex a scene you can render in real time. In Unreal Tournament 2003, if you have a card that can do 40fps when you're in a game by yourself standing still and then you hop onto a 30-player server, you will not have smooth gameplay; 40fps just isn't enough headroom, because your framerate will drop considerably when the complexity increases.

    Also, 30fps != 30Hz, 60fps != 60Hz, and lastly, 100fps != 100Hz. The monitor refresh rate has absolutely nothing to do with the performance of your video card. It may affect how you see the results of your video card, though.

    I also concur with your guess about the target market: mainly gamers who spend their lives playing first-person shooters and want the best and fastest gameplay. Other markets obviously include 3D modelers, as you mentioned, and many others.
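    To put rough numbers on that headroom argument, here is a throwaway Python sketch; the framerate function and the per-frame and per-entity costs are invented purely for illustration, not benchmarks from any real card:

    # Toy model: frame time as a fixed per-frame cost plus a per-entity cost.
    # All numbers are made-up assumptions for illustration only.
    def framerate(base_ms_per_frame, entities, ms_per_entity=0.4):
        """FPS for a scene with the given number of entities (costs in ms)."""
        return 1000.0 / (base_ms_per_frame + entities * ms_per_entity)

    # Card A barely manages 40fps in an empty scene; Card B has lots of headroom.
    for players in (0, 15, 30):
        print(players,
              round(framerate(25.0, players), 1),   # Card A: 40.0 -> 32.3 -> 27.0
              round(framerate(8.0, players), 1))    # Card B: 125.0 -> 71.4 -> 50.0

    The card with headroom stays comfortably above 30fps even on a crowded server; the one that only just hits its number when you're standing alone does not.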
  • by JawFunk ( 722169 ) on Wednesday April 14, 2004 @01:39PM (#8861635)
    I once heard that by leaving a computer with a measly 150 watt power supply (minute by today's standards) on 24 hours a day like most people do, it consumes more energy than the common refrigerator.

    Perhaps the survey you are referring to was measuring the energy consumption of a mini-fridge holding a single 12 oz. can of beer (served ice cold), but the common refrigerator - and I mean a modern one, not the ones from the '70s and '80s, as they improve with time - draws about 700-750W. This is about double that of a computer loaded with hardware doing average browsing or word processing. The ratio is smaller when UT2004 is running (W00T).

  • by kobaz ( 107760 ) on Wednesday April 14, 2004 @01:51PM (#8861818)
    Hardcore gamers don't want to decrease detail to gain speed; they want a faster video card that can display the detail they want at the framerate they want.

    The reason games look better at an average of 100fps is that the card can actually fully calculate and display the scene as it was meant to look, and can handle the complexities of the scene while keeping the framerate at an average of 100fps instead of periodically dropping below 30 and making the game run like shit.

    Games are getting more and more complex. In order to combat increasing complexity you need a video card that can handle the complexity.

    I would advise you to stop trolling about not needing new games, not needing 100fps, and not needing new video cards before you get schooled by someone who is way more advanced than I am in graphics.
  • by randyest ( 589159 ) on Wednesday April 14, 2004 @02:19PM (#8862213) Homepage
    The 2004 requirement for refrigerators sold in the US to be labeled "Energy Star" compliant (which most of the decent ones are) is ~500kWh/year. There are about 8,760 hours in a year. That means for "normal use" the fridge consumes an average of roughly 60W. I think you're off by an order of magnitude.

    A fridge drawing a constant 700W, running 24/7 for 365 days, would cost about $613/year to run, assuming an average of $0.10/kWh. A ~$50/month electric bill just for the fridge? I don't think so.

    Maybe peak power for the compressor is close to 700W - say, if you turned the temp as low as it would go and filled it with boiling water (but I don't think so) - but either way you're way, way off in your guess.

    And, BTW, a (normal) computer running a word processor will consume nowhere near half of "700 - 750W". I can word process on a box without taxing a 150W power supply in the least.

    Just FYI.
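    To sanity-check that arithmetic, here is a throwaway Python sketch using only the figures quoted in this thread (a ~500kWh/year Energy Star rating, a constant 700W load, a 150W PC left on 24/7, and $0.10/kWh); the helper functions are made up for illustration:

    HOURS_PER_YEAR = 24 * 365   # 8760

    def average_watts(kwh_per_year):
        """Average power draw implied by an annual energy rating."""
        return kwh_per_year * 1000.0 / HOURS_PER_YEAR

    def annual_cost(watts, dollars_per_kwh=0.10):
        """Cost of running a constant load around the clock for a year."""
        return watts / 1000.0 * HOURS_PER_YEAR * dollars_per_kwh

    print(average_watts(500))   # ~57  -- the "average of roughly 60W" figure
    print(annual_cost(700))     # ~613 -- dollars per year for a constant 700W fridge
    print(annual_cost(150))     # ~131 -- dollars per year for a 150W PC left on 24/7

    A constant 700W fridge really would cost on the order of $600 a year at $0.10/kWh, which is why the Energy Star figure of roughly 60W average is the more plausible one.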
  • Re:16 pipelines. (Score:1, Informative)

    by Anonymous Coward on Wednesday April 14, 2004 @02:24PM (#8862276)
    "They made this haul ass by doubling the number of pipes, but the first thing they are going to do when they put out a mid-range card is to halve, or quarter the number of pipes."

    Well, sources say that the non-Ultra will have 12 pipelines and most of the tech that the Ultra has. I too await benchmarks on this more mainstream card.

    "How much has been done to refine this card, and how much impact will the new design have for those of us with $150 to spend on a video card?"

    I hate to say it, but spending $150 USD on a mainstream card will soon be a thing of the past. Graphics cards have languished behind CPUs for decades, and it is only now that they are really coming into their own. This costs a lot of money for products with short shelf lives due to constant change.

    This Ultra card has many more transistors and more RAM onboard than my CPU, and it will get much worse before this ratio stabilizes and gets better. Nvidia and ATI said as much years ago, hence the term GPU, to claim importance and pricing power equal to CPUs. When I see Nvidia's Doom 3 shadows I cannot help but say a silent wow. This is the future, and it is finally here in its infancy, but it is only the start and the costs will only go up.

    Trickle-down is really not going to happen, because the cost of high-end parts cannot be readily dropped like before. Halving the RAM and using slower modules would kill the Ultra, and simplifying the silicon means skimping on features. Perhaps $5 (bulk manufacturing price) could be saved with a native PCI Express offering and another $5 with no MPEG-2 processing, but honestly there is not a whole lot of wiggle room. ATI might do better with the low end, but with fewer features. Heck, their next-generation cards may completely alienate the low end altogether. Finally, remember that ATI is locked into contracts with the console makers, and they may be relying on delivering similar tech to the low end, meaning further stagnation in that sector.
  • by kobaz ( 107760 ) on Wednesday April 14, 2004 @02:30PM (#8862333)
    Having 100fps at maximum complexity will give you some peace of mind that when someone comes running at you with a chainsaw, your graphics card won't suddenly drop to 10fps. It's more likely to drop to 50-70fps instead.

    Yes, you do need pretty graphics to kill people. If you are shooting at a guy across a field and you're at 640x480 with ultra-minimum detail, all you will see is a block if you're lucky, or a little speck that looks like a rock if you are unlucky.

    Bump the resolution up to 1280x1024 with max detail and all of a sudden you will notice you can make out a figure across the field; you can probably even see where his head (or other vulnerable spot) is. You can take aim much more easily than if you were saving fps by going to rock-bottom detail.

    You are right about not being able to actually see more than 60fps on a 60Hz monitor, but you are still wrong in saying there is no reason to render the extra frames.

    In your previous post it seemed you were claiming that 60Hz is exactly the same as 60fps generated by the video card. It isn't. Even though your eye isn't seeing those extra frames that aren't displayed, the likelihood of a missed frame is very low if your video card's framerate is higher than your monitor's refresh rate.

    For example, if your video card is putting out 60fps and your monitor is running at 60Hz, there is a high likelihood that somewhere along the line your video card will drop below 60fps, and you will be displaying the same frame in more than one monitor refresh cycle - now THAT is wasteful.

    Generating too many frames per second makes for smoother gameplay; generating not enough - well, you should know what happens.
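    To illustrate the refresh-rate point with arbitrary example rates, here is a throwaway Python sketch (it ignores frame pacing and assumes evenly spaced frames, so treat it as a rough count only):

    def repeated_refreshes(fps, refresh_hz, seconds=1.0):
        """Refresh cycles over an interval that must re-show an already-shown frame."""
        frames_rendered = int(fps * seconds)
        refresh_cycles = int(refresh_hz * seconds)
        return max(0, refresh_cycles - frames_rendered)

    print(repeated_refreshes(100, 60))  # 0  -- headroom: every refresh gets a fresh frame
    print(repeated_refreshes(60, 60))   # 0  -- only safe if the render rate never dips
    print(repeated_refreshes(45, 60))   # 15 -- a quarter of the refreshes repeat a frame

    Rendering above the refresh rate buys margin, so a dip in the render rate doesn't immediately turn into repeated frames on screen.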
  • Re:I wish .... (Score:2, Informative)

    by kmo ( 203708 ) on Wednesday April 14, 2004 @02:34PM (#8862384)
    I'm glad he cleared that up for us. Because this little known company called SGI didn't develop OpenGL back in 1992.

    OpenGL predates DirectX even on Windows boxes. Windows NT shipped with OpenGL long before there was a DirectAnything. Microsoft then bought a company called RenderMorphics in 1995, which had a 3D engine called Reality Lab. From the press release at the time:

    "Microsoft plans to enhance the Reality Lab product line and make
    it a general-purpose, real-time 3-D API in future versions of its
    Windows family of operating systems products (beyond the release
    of Windows(R) 95). The Reality Lab API will complement support
    for the OpenGL(TM) API, a higher-end API specially suited to
    professional applications."

    And thus was born Direct3D.

    The Microsoft position from Microsoft: OpenGL was too constrained and complicated for 3D gaming, and has no sound or peripheral integration anyway. We need something more.

    The Microsoft position from opponents: We don't control the OpenGL API. If people write their 3D apps in OpenGL, they can run them anywhere. Come up with a new "standard".
  • by roystgnr ( 4015 ) <roy&stogners,org> on Wednesday April 14, 2004 @02:59PM (#8862659) Homepage
    What you claim he said:

    "All he said was that Microsoft provided a platform for Windows."

    What he said:

    "Before the arrival of DirectX, developers had to program their software titles to take advantage of features found in individual hardware components."

    He didn't just say that Microsoft provided a platform for Windows, he said that before Microsoft provided their platform, developers had to write directly to the graphics drivers. This is untrue: although some programmers did write directly to hardware-specific interfaces like 3dfx's glide, they didn't have to. The availability of OpenGL for Windows predates DirectX, and the availability of OpenGL in general (remember, he said "developers", not "Windows developers") predates DirectX by years.

    For a quick reference, check out this Byte article [byte.com], which discusses both the already existing OpenGL, "available on Unix, Windows NT and 95, and the Mac", and the soon-to-be-released Direct3D, "scheduled to ship in the second quarter".
  • by francium de neobie ( 590783 ) on Wednesday April 14, 2004 @02:59PM (#8862664)
    1. The power consumption of the last-generation nVidia and ATI cards is indeed very similar. Please don't say ATI's cards consume less power.

    Comparison 1 [tomshardware.com]
    Comparison 2 [tomshardware.com]

    2. The ATI Radeon X800s will require two power rails also. So stop dreaming about a "power efficient" part and buy a new PSU :(
    ATI needs extra power too [theinquirer.net]

    That said, I'm no fanboy of nVidia or ATI. The new GF 6800U still occupies one extra PCI slot and blows a whole lot of hot air around inside the case. Imagine putting another 100W+ Prescott next to it. I just feel uncomfortable with a GFX card dissipating so much heat right next to the CPU. But well... ATI is gonna do that too (except for the two-slot thing).

    If there's one thing I'd look forward to in the X800s, it's the hope that they won't require two slots - that is just inelegant. But based on the two molex connectors on the X800s, and the power consumption of their older parts, I won't hold out any hope that ATI will "save power".
  • It's a subtle joke (Score:1, Informative)

    by Anonymous Coward on Wednesday April 14, 2004 @03:03PM (#8862714)
    The great-grandparent said that the refrigerator takes 150 watts/day. The grandparent pointed out that if it's watts per day, then every day the refrigerator will use 150 watts more than the day before. It's a unit mistake that the grandparent poked fun at.

    Essentially, he's saying "here in Sweden our refrigerators have a constant power usage. Haha."

    Then you, the parent, missed that, and ran through the actual calculation for a refrigerator, and compared the electricity costs in the U.S. and Sweden. Pointless. It might have been worthwhile attached to the great-grandparent, but not where you put it.

  • by francium de neobie ( 590783 ) on Wednesday April 14, 2004 @03:06PM (#8862745)
    Bad news for you, the ATI X800 will require 2 molex connectors too.

    ATI needs extra power too [theinquirer.net]
  • by Anonymous Coward on Wednesday April 14, 2004 @03:24PM (#8862916)
    The availability of OpenGL for Windows predates DirectX, and the availability of OpenGL in general (remember, he said "developers", not "Windows developers") predates DirectX by years.

    Except OpenGL doesn't provide real device independence for all software, and that is why it has failed miserably in the gaming arena. Fact of the matter is, before DirectX there wasn't a real, fully functioning API for Windows games.
  • by tepples ( 727027 ) * <tepples.gmail@com> on Wednesday April 14, 2004 @03:34PM (#8863018) Homepage Journal

    The availability of OpenGL for Windows predates DirectX, and the availability of OpenGL in general (remember, he said "developers", not "Windows developers") predates DirectX by years.

    People who go on and on about OpenGL vs. DirectX neglect to mention DirectSound and DirectInput. OpenGL replaces only DirectDraw and Direct3D.

  • RTFA. (Score:3, Informative)

    by Illissius ( 694708 ) on Wednesday April 14, 2004 @03:40PM (#8863085)
    At least one of them. I've read/skimmed most, and several of them mention that (a) it's actually significantly *cooler* than the 5950 Ultra (the previous high-end card); (b) it's not very loud (not silent, but not disturbing either); (c) it only draws 10-30W more than the aforementioned 5950 Ultra (this figure varied from review to review).
    Though you are right, using it in an SFF wouldn't be a great idea. Can't have everything.
    (And several of the sites mention how it worked flawlessly with a 400W PSU, and the 480W is just there to be certain it'll work, as several PSU makers have a tendency to overrate them.)
  • by DeathPenguin ( 449875 ) * on Wednesday April 14, 2004 @04:25PM (#8863488)
    They don't have an official driver for that combination yet, but you can get an unofficial one that will build on AMD64 with a 2.6 kernel here [www.sh.nu]. I'm currently using it for a Tyan s2885 and it works quite nicely. The performance isn't what I'd expect, but hopefully that will be fixed with the upcoming Detonator/Forceware 6xxx.
