Hardware

AMD's Athlon-64 Benchmarked With UT2003

Sander Sassen writes "Wondering about the performance of AMD's Athlon-64? Want to know how well it runs in 64-bit mode? Hardware Analysis managed to run a few benchmarks on an AMD Athlon-64 demo system using the 64-bit version of Epic's Unreal Tournament 2003. There's also an update with the latest about the Athlon-64, Opteron and mobile Athlon-64, including streaming video and pictures of a quad Opteron server."
This discussion has been archived. No new comments can be posted.
  • by Soporific ( 595477 ) on Thursday February 20, 2003 @04:12AM (#5342034)
    AMD is soon to be followed by Intel with the introduction of the Unobtainium chip.

    ~S
    • by quigonn ( 80360 ) on Thursday February 20, 2003 @05:02AM (#5342176) Homepage
      INTEL DEVELOPER FORUM CONFERENCE, SAN JOSE, Calif., Feb. 21, 2003 -- Intel Corporation today announced the new successor for the Itanic 2 processor, named "Unobtainium".

      This highly advanced clone of a 64-bit processor of an unnamed competitor is the first to combine a competitor's technology with the outstanding features of Intel processors, namely lots of Gigahertz, and lots of heat. The Unobtainium has been especially designed to be used on expeditions on Antarctica, always keeping expedition members in a bubble of hot air.

      Intel, the world's largest chip maker, is a leading manufacturer of computer, networking and communications products. Additional information about Intel is available at www.intel.com/pressroom.
      • > Intel, the world's largest chip maker

        Or perhaps, "Intel, the world's largest chips' maker"? Specializing in plastic cases for the computer inside your computer.
    • by Anonymous Coward
      Me wonders if you realise Oakley has Unobtainium trademarked. Supposedly my lenses are made from it. lol

      Fear not I see your point. ;-)
      • Really? How're the optics on those? I'm asking 'cause my Oakley watch has an Unobtainium wristband, and I was thinking that rubber lenses might be a little TOO dark.
  • by Anonymous Coward
    Who's gonna reach 2048 first, the Microsoft Windows version number, or the AMD bit number?
  • by otterpop378 ( 254386 ) on Thursday February 20, 2003 @04:13AM (#5342041)
    how to slashdot an innocent server in the dead of night.
    • by Sun Tzu ( 41522 ) on Thursday February 20, 2003 @08:53AM (#5342691) Homepage Journal
      From the top of the screen:
      Please register or login. There are 6 registered and 2756 anonymous users currently online. Current bandwidth usage: 1635.01 kbit/s
      Wow! 2756 anonymous users online?! I wonder where they came from.

      Send us your Linux Sysadmin [librenix.com] articles.

  • by mraymer ( 516227 ) <mraymer@nOsPaM.centurytel.net> on Thursday February 20, 2003 @04:14AM (#5342042) Homepage Journal
    ...a 64-bit CPU is totally *pointless* unless I can spawn at least 500 bots on a map designed for 7 at playable speeds with it. Telefragging madness!
  • Text (Score:5, Informative)

    by Galahad2 ( 517736 ) on Thursday February 20, 2003 @04:15AM (#5342043) Homepage
    As promised we'll give you an update on the performance and other features of the AMD systems that we reported on yesterday. Naturally AMD wasn't very keen on disclosing clockspeed or detailed system configurations of the demo systems they had running, but we took advantage of a few fellow journalists entering the room and keeping the AMD PR people busy to run a few quick benchmarks on the Athlon-64 system. The Athlon-64 demo system we already reported on yesterday had a 2GHz clockspeed and used the SuSe 64-bit Linux operating system and was running the 64-bit version of Unreal Tournament 2003 as a demo.

    (pic of monitor playing UT2k3, FPS = 42)
    Fig 1. The Athlon-64 system running SuSe 64-bit Linux and the 64-bit version of Unreal Tournament 2003.

    Naturally we're intimately familiar with the workings of the Unreal Tournament 2003 engine, and after a quick look at the display settings, which were set at a 1024x768x32bit resolution with all other features at default, we measured a mere 42fps average and a maximum fps around the 55...60fps mark. Considering the fact that this is a 2GHz Athlon-64 processor teamed up with a GeForce Ti 4600 we honestly expected a whole lot better. A 1.6GHz Pentium 4 with that very same GeForce Ti 4600 videocard would have no problems clocking in a similar score while running under Windows XP.

    (pic of a white laptop with the terminal showing)
    Fig 2. The Athlon-64 notebook running CyberLink's PowerDVD actually showing the first Harry Potter movie.

    But there's more: we managed to take a closer look at the notebook too and quickly found out that this indeed is a proof of concept. It plays DVDs very well, mostly courtesy of the ATi M9-series graphics card, and unfortunately all our questions about whether we could do something else with it were answered with a resounding 'no'. We did however manage to find out what was inside in terms of chipset, memory and graphics card. The notebook apparently used a Via K8T400M chipset teamed up with an ATi M9-series graphics adapter and PC2100 (DDR266) memory. The screen was a standard 14.1 inch running at a 1024x768 resolution and the DVD software they used was none other than CyberLink's PowerDVD.

    (naked pizza-box style case)
    Fig 3. The quad Opteron server with the top cover removed, the PCI-X slots in the back and the four CPUs hidden underneath the huge heatsinks.

    (two white LCDs next to each other, left showing a web browser and right showing UT2k3.)
    Fig 4. The quad Opteron with the SuSe 64-bit Linux operating system running some sort of a database benchmark, right next to the UT2003 demo machine.

    We naturally also took a closer look at the quad Opteron as that's definitely something AMD is currently pushing hard. They're putting all their weight behind the launch of their server products and have pushed the launch of the desktop version of the Opteron, the Athlon-64, back to September. The server parts, including completely pre-configured two-way Opteron systems, should be available in late April, right after the April 26th launch of the Opteron server CPU family. Clockspeeds will initially range up to 1.6 or 1.8GHz and performance is expected to be similar to Intel's Xeon offerings. But as always, we'll reserve judgement until we can actually evaluate two similarly configured servers side by side. For now all they have given us are SpecInt-2000 and SpecFP-2000 scores without disclosing the system configurations, so that doesn't tell us anything.

    Nevertheless it looks like AMD is indeed trying to get some new and innovative products out of the door. Whether they'll be able to make a lasting impression, both in terms of features and performance, with their new 64-bit products remains to be seen, though; we'll be sure to keep a close eye on any future developments.

    Sander Sassen.
  • It's much more entertaining to keep refreshing the page and watch the user count rise

    Please register or login. There are 10 registered and 1173 anonymous users currently online. Current bandwidth usage: 2777.75 kbit/s
  • Whoa! (Score:5, Informative)

    by netfunk ( 32040 ) <icculus@icculPARISus.org minus city> on Thursday February 20, 2003 @04:15AM (#5342045) Homepage
    Ok, wait.

    I'm the developer that did the 64-bit port of UT2003 (and the Linux port, and the Mac port...).

    You need to keep a few things in mind:

    1) The OpenGL renderer is not as fast as the Direct3D renderer at this time. This is not the Athlon64's fault. You can see this on 32-bit Windows, since it can use both renderers. Since this is a Linux port of the game, we're using the GL renderer on the Athlon64 at this time.

    2) The "stat fps" command isn't really a good benchmarking method.

    3) This is a prerelease version of the game running on a prerelease version of SuSE running on prerelease drivers running on prerelease hardware. Please don't consider this "benchmark" to be representative!

    --ryan.

    • THANK YOU! (Score:2, Insightful)

      by Svartalf ( 2997 )
      I was about to chime in with similar comments- but it's so much better when the person that did the work (and knows what in the Hell he's talking about) says it.
    • Re:Whoa! (Score:5, Funny)

      by IvyMike ( 178408 ) on Thursday February 20, 2003 @04:26AM (#5342075)

      Hey, I know many of the other comments on this article are going to be filled with haters ripping on AMD and their new chip, but I for one was hella impressed that UT2k3 was running so well under such adverse conditions. Once all the pieces start to fall into place, this could be a sweet chip.

    • Re:Whoa! (Score:3, Interesting)

      by XnetZERO ( 560391 )
      The Mac port? Rumor has it that there was such a thing, but for some reason it disappeared into the ether and was never seen again. ;)

      Anyhow, if the game ever comes out for the Mac I'll buy it, but for some reason I think I might die of old age first. :p
    • Re:Whoa! (Score:2, Informative)

      by Anonymous Coward
      I've been meaning to email you and tell you - thank you so much for the Linux port! Unreal Tournament 2003 runs about 5-10 times quicker on Debian GNU/Linux than Windows 98SE (and Windows is on the faster drive.)

      A note to anyone reading - UT2003 installs onto Linux off the normal CDs you buy in the shop. No need to buy a linux-only version. Go - get it for your Linux partition today!
      • Re:Whoa! (Score:3, Insightful)

        by bogie ( 31020 )
        "Unreal Tournament 2003 runs about 5-10 times quicker on Debian GNU/Linux than Windows 98SE (and Windows is on the faster drive.)"

        Your Windows install is completely borked then. There is no way UT runs 5-10 times or even 2 times faster under Linux than it does under Windows. It's generally accepted that UT2k3 is slower on Linux (OpenGL) than on Windows (Direct3D), as the developer himself states here. For myself personally (XP1900, 512MB, GF4200, ~10,000 3DMarks) it's definitely much slower in Linux, and that's a direct result of Epic foolishly (Direct3D ain't cross-platform) making UT2k3 a Direct3D game from the ground up. Luckily RTCW has no such problems, so it's what I continue to play daily.
        • Actually, Direct3D IS cross platform, just not to the platform you are thinking about. The other platform it runs on, and the one that interests Epic so much, is the X-box. Epic tried to make it so that you could basically design a game for the PC and then port it straight over to the X-box with minimal fuss, provided you use the UT 2003 engine.

          Remember they make a whole lot on engine sales, maybe even more than on actual game sales. Portability to consoles is something that is VERY attractive and important to developers; consoles are a huge market. With the way they made UT 2003, it really is a minimum of fuss to get a program over to the X-box, and that requires using Direct3D.

          Linux/Mac portability is nice, but not really critical; both are a much smaller market (in terms of sales, not necessarily units owned) than X-box or Windows. Hence, Direct3D is a logical choice for the primary renderer, given the goal.

          I think rather than whining, people should be happy that they did make an OpenGL renderer and take the time to port it to Linux.
    • Optimizations? (Score:2, Interesting)

      Good work on the port(s).

      Before the Intel/AMD fanboys go crazy. I wanted to get a few questions in:

      1. Can you tell us what specific optimizations you have done/are planning to do for the 64 bit architecture?

      2. What optimization benefit do you get from a straight "re-compile" of the UT codebase in 64 bit mode?

      cheers,
      j.
    • by dameron ( 307970 ) on Thursday February 20, 2003 @12:28PM (#5344290)
      we measured a mere average 42fps and maximum fps around the 55...60fps mark.

      Which might indicate that vsync is enabled, effectively capping the max fps while lowering the average. Whenever I run a benchmark and it tops out at 60 fps and I suspect, as these guys did, that the machine should be faster, I always double check the refresh rate settings and vsync.

      -dameron

    • I was about to start into an "AMD is a one trick pony" spiel, but that would explain a lot. It seemed like AMD got into a good position with the initial Athlon, but since then have been struggling. I'm wondering if the 64 bit systems are going to give them another jump on the game or if it's going to continue to be a neck and neck race.
    • Re:Whoa! (Score:2, Insightful)

      by Ramze ( 640788 )
      Yeah, I'd say this comparison isn't even as good as apples to oranges... more like apples to pasta. In order to benchmark performance, as many variables as possible need to be the SAME other than the one you're testing. A better test would be: Windows XP running the game vs. Windows XP (coded and compiled for 64 bit for Athlon 64) running a non-beta 64 bit version of the game. I'd like to at least see benchmarks comparing the game under SUSE Linux to the 64 bit game under SUSE 64 for Athlon 64 so I can judge for myself rather than taking the reviewer's word that the frame rates aren't "good".
    • by T5 ( 308759 )
      First off, kudos and thanks for a great game port to Linux. Works better there than on Windoze.

      Second, was this running Mesa or a 64-bit secret driver from Nvidia? Big difference in performance, those two.
  • by Temsi ( 452609 )
    let's hope hardwareanalysis.com is not running on one of those 64-bit AMD's... would be pretty embarrassing.
  • by Jugalator ( 259273 ) on Thursday February 20, 2003 @04:18AM (#5342052) Journal
    The site, with just a few comments on /., is already showing signs of slashdotting. I'll quote the most important parts about the UT 2003 benchmark, just in case:

    "The Athlon-64 demo system we already reported on yesterday had a 2GHz clockspeed and used the SuSe 64-bit Linux operating system and was running the 64-bit version of Unreal Tournament 2003 as a demo."

    -snip-

    (at 1024x768x32...) "we measured a mere average 42fps and maximum fps around the 55...60fps mark. Considering the fact that this is a 2GHz Athlon-64 processor teamed up with a GeForce Ti 4600 we honestly expected a whole lot better."
    • by Svartalf ( 2997 ) on Thursday February 20, 2003 @04:27AM (#5342082) Homepage
      "1) The OpenGL renderer is not as fast as the Direct3D renderer at this time. This is not the Athlon64's fault. You can see this on 32-bit Windows, since it can use both renderers. Since this is a Linux port of the game, we're using the GL renderer on the Athlon64 at this time."


      I saw that and figured that they were more Windows-type people and flat out didn't know that the OpenGL renderer is much weaker than the D3D one (not due to the API, but due to this being pretty much the first cut of the thing...). What they measured was pretty good considering that detail.
      • And here's another thing people don't realize: making everything 64 bits doesn't necessarily make programs go faster. With 64 bits, you have more pressure on the cache, on the bus, on the memory, etc...
  • by whitelabrat ( 469237 ) on Thursday February 20, 2003 @04:26AM (#5342076)
    I'd like to know if it will run nice and frosty like my AMD 2400+ (plus what? Beats me!)

    Any more fans and my computer may levitate. That would be just as cool as a good UT2003 framerate!
  • Face it (Score:5, Funny)

    by ObviousGuy ( 578567 ) <ObviousGuy@hotmail.com> on Thursday February 20, 2003 @04:28AM (#5342084) Homepage Journal
    The reason you keep losing at UT isn't because your processor is too slow.
  • nope (Score:4, Insightful)

    by Anonymous Coward on Thursday February 20, 2003 @04:28AM (#5342087)
    i would have to say that this article is about as worthless as the bill gates quotations earlier, in terms of actual usefulness and truthfulness. since everything is prerelease and the details are fairly sketchy, im gonna wait for solid numbers before i decide once and for all who i will be loyal to in the processor world.
  • thoughts.. (Score:5, Insightful)

    by itzdandy ( 183397 ) on Thursday February 20, 2003 @04:30AM (#5342094) Homepage
    how well are the drivers for the GeForce card working? are they playing nice with that k8t400? are the nvidia drivers 64bit or are they being run in "32-bit" mode? how well is OpenGL playing with the 64-bit OS, 64-bit chip combo and again, how well are the nvidia drivers playing? is the OS running the AGP in AGP mode or is it PCI mode?

    i bet i could easily get a P4 2.7 with this graphics card to produce similar numbers, or even worse in linux, with some effort to use the least optimized drivers and setting the graphics card to PCI.

    in fact, my P4 2.4x133@2.7x150 with a GF Ti 4600 doesn't post much better numbers, 55fps by stat fps. and that's on a 32bit "system" with fairly mature drivers and everything working "correctly/fullspeed"

    im not an AMD zealot, but i won't make my decision based on a game that is notoriously bad at opengl and on a system that is running all beta software/drivers.
    • Re:thoughts.. (Score:2, Informative)

      by PSUdaemon ( 204822 )
      http://www.nvidia.com/view.asp?IO=linux_amd64_display_archive
    • Isn't this beta hardware as well? With beta software on beta drivers and a beta OS, what is the point of this?
    • Whose compiler did you use?
      Does it do any reasonable optimisation on the opteron?
  • Remember Doom III? (Score:5, Insightful)

    by Max Romantschuk ( 132276 ) <max@romantschuk.fi> on Thursday February 20, 2003 @04:43AM (#5342124) Homepage
    Let's not get over-excited... This is of course interesting information, but it's information about a premature chip on a premature platform.

    I doubt that any proper conclusions can be drawn from this, apart from what is already known: The Athlon 64 isn't ready yet. If it were, the release date wouldn't be set for September.

    Much like with Doom III, there is always a cool-factor, but the actual facts at hand are very scarce. One thing is probably for sure though... The Hammer core can't compete with the Barton core on the desktop at this point. Otherwise we'd have the Athlon 64 waiting to be released much sooner.
  • by Anonymous Coward on Thursday February 20, 2003 @04:47AM (#5342136)
    If one compares the claimed 42fps with other CPUs [tomshardware.com], it seems it is at the level of a Celeron 500 MHz...
    There is something fishy here as the UT2k3-makers themselves claimed there is a 15% increase in 64-bit mode (on Windows). Normally Quake3Arena for Linux is on par with the Windows version, so it should not be the OS' fault either.
    • by Anonymous Coward
      Those Tom's scores have to be from flyby benchmarks, since there's no way you're getting 200+ average FPS in-game or with a botmatch benchmark. Seeing 2x to 3x higher FPS in flyby mode isn't at all unusual, so comparing the in-game FPS to those benches isn't fair even before you factor in the beta-upon-beta nature of the test.
  • As an AMD user, (Score:2, Interesting)

    by Anonymous Coward
    the real benchmarks I'm interested in:

    How many *C does the CPU run at?
    What size PSU does it need?
  • A question... (Score:4, Insightful)

    by Yuioup ( 452151 ) on Thursday February 20, 2003 @04:56AM (#5342158)
    I'm curious... how are the extra bits per clock cycle supposed to increase performance? I mean, the number of instructions per second doesn't increase...

    Yuioup
    • Re:A question... (Score:5, Insightful)

      by stevelinton ( 4044 ) <sal@dcs.st-and.ac.uk> on Thursday February 20, 2003 @05:15AM (#5342198) Homepage
      Not a great deal is the quick answer.

      Extra bits can improve data movement and a variety of integer operations like xoring one area of memory with another, but (a) this is probably mainly done on the video card and (b) it is usually limited by memory bandwidth, not CPU.

      The main point of 64-bit CPUs is to address more than 4GB of RAM per process. A few applications will also benefit from 64 bit integer arithmetic.

      However, this is a new chip architecture, so how well it performs is interesting independently of the word length.
      • Re:A question... (Score:5, Interesting)

        by forgoil ( 104808 ) on Thursday February 20, 2003 @06:03AM (#5342292) Homepage
        A wider memory bus can help, but it is not really connected to the instruction set or the ability to crunch larger numbers.

        One thing that the Opteron has going for it though is the fact that x86-64 have more registers. This makes a real difference. I wonder if the mmx registers are shared with the registers, and if not, why not?
        • The Athlon 64 probably won't have more registers than a P4. The P4 has 128 internal registers that the 8 physical registers map to. The Athlon 64 will have 16 visible registers, but the internal register set that they map to might not be any larger. The main benefit is better optimizations thanks to the larger visible register set.
          The Athlon also has more (separate) XMM (SSE) registers, but the MMX registers are still shared with the FPU registers.
      • The main point of 64 it CPUs is to address more than 4GB of RAM per process.

        This is, and will remain, the main point of 64-bit processors.

        And this is of almost NO use for most programs in day-to-day use.

        A Pentium can only address 2^32 bytes (4 GB) of memory in a flat memory model. A lot of that (0.5 to 2 GB) is used by the OS. If you need an application that addresses more than 4 GB, a 64-bit processor comes in handy. The main applications for this are BIG databases.

        64-bit arithmetic is of almost no use. If you need integers that big you might be better off with floating point, and the x86 already has optimized instructions for those (SSE/SSE2/MMX).

        Programs might even become slower, since pointers are now 64 bits instead of 32, so the CPU has to move more data around and programs become bigger because of it.

        64 bits is of limited use on desktop PCs. Its main use will be for (more than 4 GB) servers.

        When desktop PCs get more than 4 GB of memory (or more than 2 GB), 64-bit CPUs will begin to perform better.

    • Re:A question... (Score:5, Interesting)

      by WoTG ( 610710 ) on Thursday February 20, 2003 @05:37AM (#5342245) Homepage Journal
      Well, not directly answering your question, but... x86-64 (the 64 bit architecture that the Athlon64s and Opterons use) is more than just more bits. There are also a lot more registers which will help out code that is recompiled, because programs won't need to do quite as much moving of values into and out of memory (or cache, I guess). There are other improvements too, but I think the register count is one of the most important ones - with respect to playing games at least. =)
    • The 64 bits themselves should really have no effect on performance.
      What *should* improve performance on the Athlon 64 (with respect to UT2k3):

      1) More user-visible registers. The Athlon-64 probably doesn't have any more physical registers than a P4 (which has 128 of them) but allowing 16 to be visible to the compiler should let the compiler optimize better.
      2) Memory subsystem. The Athlon 64 has a memory subsystem derived from the Alpha EV7. Since these games are very memory-bandwidth bound, this should be a big speedup.
      3) Optimizations in the CPU core. With each release, the vendor can make optimizations that improve overall instruction throughput.
  • Nice case layout (Score:3, Interesting)

    by Paul Komarek ( 794 ) <komarek.paul@gmail.com> on Thursday February 20, 2003 @05:18AM (#5342205) Homepage
    I've worked inside a handful of medium-sized machines, including a couple Microway dual Alpha "rugged racks" and a Compaq ES40 Model II. The 8u (or 9?) ES40 is nicely laid out, but removing the motherboard or messing with drive cables is a pain. The 4u "rugged racks" are a disaster of fans and wires. We've had 4 or 5 fans go out on those, and it takes approximately 45 screws (I counted) and about 60 minutes of fast work to remove and replace a midboard fan.

    That opteron case, on the other hand, appears to have plenty of cooling that is easy to reach. I don't see any wires permanently attached to the case. It looks very clean and easy to service, except possibly getting the motherboard out.

    -Paul Komarek
    • I've worked inside a handful of medium-sized machines

      That must have been very cramped for you, even in a medium sized machine. Personally, I prefer to work inside something larger, like a Sun E10K...at least you can sort of stand up and stretch from time to time. I do concede however that working in a smaller system has its advantages from time to time. I remember once when I had to work in a little Compaq Deskpro for 3 months (they were refurbishing the interior of the SGI Origin 3000 that I normally used as an office)...it was pretty uncomfortable, but at least the boss never poked his nose in to disturb my web surfing!

      Thank you.
  • by okigan ( 534681 ) on Thursday February 20, 2003 @05:23AM (#5342210)
    The article said that there would be benchmarks, and there are none. A screenshot of a game does not qualify. I want to see the whole spec, or at least the basic numbers, and after that I can look at the game snapshot.

    Moderators, seriously, why was this posted with such a misleading title?
  • The Quad Proc niche (Score:5, Interesting)

    by Talisman ( 39902 ) on Thursday February 20, 2003 @05:31AM (#5342229) Homepage
    I recently got into video editing. Until now, I've never needed anything faster than a single CPU system.

    Now I understand, completely, what those who do rendering gripe about when it comes to CPU speed never being fast enough.

    2:57 of video takes my 1GHz w/ 1GB RAM machine nearly 2 HOURS to render. Just for 3 stinking minutes of video!

    The fastest current single CPU would only decrease that number to about 40 minutes, which is still too slow.

    A dual CPU solution would bring it down to 20 minutes, but again, if I ever wanted to render even 15 minutes of video, that would be 1 hr 40 mins of CPU time.

    And forget doing anything else with the computer while it's rendering. It will start dropping frames like mad, and you have to start over.

    Now a 4-way workstation is something that would work. With a 4-way 3GHz Opteron system, I could render in near real-time, and a regular sized MB, if not slightly oversized, could handle 4 procs.

    SuperMicro is the only MB mfg. I know of that makes a 4-way board, but it's for Xeons and is insanely expensive ($1800 +/- $100) and that's before you add the overpriced CPUs.

    If AMD came out with a moderately priced 4-way workstation, they could get the CAD/CAM, video editing, 3D modeling, rendering and compiling crowd all at once, in addition to the freak gamers and Gotta Have The Best Even Though I'll Never Use It crowds.

    The 4-way system is a neglected niche. AMD should fill it.

    Talisman
    • by radish ( 98371 ) on Thursday February 20, 2003 @06:18AM (#5342317) Homepage
      I have to admit to being a bit confused by this. I'm by no means professional, but I do do some video rendering - mainly taking recorded programmes from my Tivo as mpeg files and then cutting out adverts, adding captions and downsampling them to SVCD. I use Vegas Video on a Athlon 1.3ghz with 1gb of ram under w2k and the render takes about double time - in other words give me your 2 hours and I can render 1 hour of video. And there's no chance of dropped frames - why would there be? If I use the machine for other stuff the render just slows down.

      What is it I'm doing which is so different to what you're doing, and therefore so much faster?
      • by Anonymous Coward
        Encoding a full D1 stream to mpeg2 would certainly take the postulated amount of time.
        • Why would you want to even consider encoding to MPEG-2 on general purpose hardware? A dedicated parallel DSP solution can do this in real time for only a few $100s. A general purpose CPU based solution would cost several times this. Sure, in a few years you'll be able to encode MPEG-2 in the background as you encode MP3s now (I remember when you got about this performance with MP3 encoders) but right now, it's a silly way of doing things.
    • by lingqi ( 577227 )
      And to be honest about it, back five years ago it would have taken you DAYS to render the same amount of video. So give it a while longer, you'll be alright.

      I really think that the 4-way system niche is so small that even if AMD went to try to fill it, it would not be worth their investment.

      On the other hand, I would like to see more selections of dual platforms. But as you may see, even the demand for those is few and far between.

      Back to the original thing: you can do "fast previews" on most 3D programs now if you've got a good video card; I don't see how you can gripe that much about it; for long runs just leave it running overnight. Or hell, maybe build a cheap render-farm out of Xboxes =)
      • I really think that the 4-way system niche is so small that even if AMD went to try to fill it, it would not be worth their investment.

        AMD certainly will cater to that niche, since that's what hypertransport was designed for (among other things). However, 4 ways don't make sense for rendering applications. A networked cluster is significantly cheaper for the same throughput.
      • I really think that the 4-way system niche is so small that even if AMD went to try to fill it, it would not be worth their investment.

        I would snap up a four-CPU motherboard if I could buy the board for less than 300 dollars and CPUs in the 100-200 dollar range. Since I can't, I don't. As long as the manufacturers of motherboards and CPUs look to get a big premium out of me when I want more than a one-CPU board, I don't go buying. Believe me when I say I am not alone in the desire for affordable SMP, because every once in a while you hear how people have used uniprocessor chips in SMP configurations with success. People would not try silly stuff like that if many did not think the prices on SMP gear were ridiculous.

    • And forget doing anything else with the computer while it's rendering. It will start dropping frames like mad, and you have to start over.

      Unless you're talking about capturing, which you *will* have to be able to do in real-time to avoid losing frames, how exactly do you manage to lose frames during rendering? The only way I can think of is by working on a preview while the "real" render is made from tape, but that'll require you to do the actual render in real-time too. Frankly, you're not making any sense to me.

      Kjella
    • One minor problem if you're talking about Windows...

      If you've ever looked at the licence, you are only licensed for 2 CPUs, and from what I can tell XP won't let you use more than the number of CPUs you have licensed, and 98 doesn't really handle more than one CPU.

      So while 4 CPUs would be nice, you'd have to use Linux, which isn't a bad thing, but currently most of the CAD/CAM, video editing, 3D modeling and rendering programs are for Windows.

      And unfortunately, why bother compiling on a single computer when you can use a compile farm:
      http://distcc.samba.org/
      • you'd have to use linux

        Why? Just run 2k Server, or wait for whatever XP Server 2003 Very Delayed Edition ends up being called.

        Just because it's got "Server" in the name doesn't mean you can't use it on a workstation. There are a ton of people I work with that do this so they can run Terminal Services.
        • Indeed. My notebook runs Linux but I often have a need for a Win32 environment. I found VMWare was a bit of a resource-hog on my slightly older laptop. My solution was to stick a spare workstation in the server room, trick it out with Windows 2000 server and all the apps I needed so that I could talk to it with rdesktop.

          Works great.

    • I wonder why you wouldn't consider some real hardware. See Sun Microsystems, Silicon Graphics, etc.

      I know a quad AMD would be considerably cheaper, but what about the failure rate? And, you know, CPU speed is not everything. Does nobody care any more? 8-)

    • by AssFace ( 118098 )
      Look into clustering, especially if you are a programmer and like to tinker with the options: either Beowulf, which some may argue is becoming dated (or at least less useful by comparison), or Mosix (or rather OpenMosix for most of us).

      You certainly won't get the fast memory pipeline access that on-board SMP systems give you, but the cost of physically separate systems is lower.
      Video and/or 3D rendering lends itself well to distributed tasks because you can effectively farm out each frame to a different processor and then put them back together later (AFAIK there are even systems that do this at the pixel level, but I know less about that than the frame-based approach).

      IMO OpenMosix requires far less setup time, and it seems more forgiving of mixed system makeups (some faster, some with more RAM, etc.).

      I use it for financial analysis; on a single system it would take me a few days to go through all of my data. Each time I add a node, it nearly halves the time (due to network bandwidth issues and differing node speeds, it's never quite as simple as halving per node).
      I can put together a single node for under $400 (2.1 GHz Athlon and 256MB RAM; I don't need much RAM for what I do), so a quad setup would be $1600 in computers and about $100-200 in networking. You could then double that for the same cost or less (you save when buying in bulk).
      That's roughly $3600 for an 8-processor system. I don't think you'll ever see that in an on-board configuration, and the speed difference isn't enough to matter (if something finishes 10 minutes earlier but costs $5K more, is it worth it? For video, probably not).

      The big issue with clusters is that as the node count grows, the physical space they take up increases, as do power consumption, heat dissipation, and noise.
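      The per-frame splitting described above can be sketched in a few lines. This is a hypothetical illustration (the node names, frame count, and round-robin scheme are made up); a real render farm, or an OpenMosix cluster migrating processes, handles the dispatching itself:

      ```python
      # Hypothetical sketch of frame-based work splitting for a render cluster.
      # Node names and frame counts are made up for illustration only.

      def split_frames(num_frames, nodes):
          """Assign frame numbers round-robin so each node gets a near-equal share."""
          assignment = {node: [] for node in nodes}
          for frame in range(num_frames):
              assignment[nodes[frame % len(nodes)]].append(frame)
          return assignment

      if __name__ == "__main__":
          nodes = ["node0", "node1", "node2", "node3"]  # e.g. four cheap Athlon boxes
          work = split_frames(300, nodes)  # ~10 seconds of 30 fps video
          for node, frames in work.items():
              print(node, len(frames))  # each of the 4 nodes gets 75 frames
      ```

      Because each frame renders independently, nothing needs to be shared between nodes until the rendered frames are stitched back together, which is why this workload scales so well without SMP's shared-memory bus.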
    • 2:57 of video takes my 1GHz w/ 1GB RAM machine nearly 2 HOURS to render. Just for 3 stinking minutes of video!

      So what you need is a dedicated hardware solution for video editing, not the usual 5-9% clock-speed boost that the latest ultra-expensive CPU gets you. Surely you can get add-in boards to do video compression and such? Even if they cost $5,000, it would be worth it if you got a 2x speedup or better.
      I don't see why you are griping about there being no 4-way systems. If you are stuck with x86 because of software needs, you can get plenty of quad Intel solutions: the Dell 6600 series, for example. And if your software will run on another platform, say SPARC, you can easily get a system with an arbitrarily high number of processors.

      Also, depending on the software, you may just be able to set up a render farm. Almost all professional-level 3D apps I am aware of can be slaved to a master app for faster rendering. You get a bunch of small, cheap boxes (again, Dell has some excellent solutions for this) and just send the work to all your little systems. It works great and is actually becoming a more popular way of doing 3D rendering than the old monolithic supercomputers.

      You don't need AMD's processor, there are solutions out there NOW if you have the cash.
  • Not surprising (Score:2, Interesting)

    "Considering the fact that this is a 2GHz Athlon-64 processor teamed up with a GeForce Ti 4600 we honestly expected a whole lot better. A 1.6GHz Pentium 4 with that very same GeForce Ti 4600 videocard would have no problems clocking in a similar score while running under Windows XP."

    ...which you would expect if you were under the false impression that internal bus bandwidth, addressing mode, and clock frequency have a considerable impact on game-quality 3D rendering.

    The graphics hardware does most of the work (ie. the computationally intensive rendering), the CPU is used for game logic, culling and feeding data to the graphics card.
    I would say the bottleneck is AGP bandwidth and limited on-board high-speed memory on the graphics card.
    • Re:Not surprising (Score:3, Insightful)

      by 10Ghz ( 453478 )
      I would say the bottleneck is AGP bandwidth


      I call BS on that. There was a noticeable improvement when moving from AGP 1x to 2x. The difference was nonexistent when moving from 2x to 4x. Same thing when moving from 4x to 8x. AGP is definitely NOT the bottleneck!
    • Re:Not surprising (Score:3, Informative)

      by Anonymous Coward
      ...which you would expect if you'd never actually looked at any of the UT2K3 benchmarks on the net that show frame rate scaling linearly with CPU speed.

      for example - with a Radeon 9700:

      botmatch:
      Intel Pentium 4 1.5GHz = 35.5 FPS
      Intel Pentium 4 3.06GHz = 69.6 FPS

      flyby:
      Intel Pentium 4 1.5GHz = 114.5 FPS
      Intel Pentium 4 3.06GHz = 205.5 FPS

      http://www.anandtech.com/cpu/showdoc.html?i=1783&p=13

      I'm not saying the original article made a fair comparison, but the game really does rely on the CPU a lot more than you seem to think.
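      Running the quoted botmatch numbers through a quick sanity check shows how close to linear that CPU scaling really is; this is just arithmetic on the figures above, not a new benchmark:

      ```python
      # FPS-per-GHz for the quoted Radeon 9700 botmatch results. If the game were
      # GPU-bound, this ratio would collapse at the higher clock instead of holding.
      botmatch = {1.5: 35.5, 3.06: 69.6}  # GHz -> FPS

      per_ghz = {ghz: fps / ghz for ghz, fps in botmatch.items()}
      for ghz, eff in per_ghz.items():
          print(f"{ghz} GHz: {eff:.1f} FPS per GHz")
      ```

      The two efficiency figures come out within about 4% of each other, which supports the point that botmatch is essentially CPU-bound at these settings.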
    • I would say the bottleneck is AGP bandwidth and limited on-board high-speed memory on the graphics card.

      I agree with you overall, except about AGP bandwidth. With 64MB and 128MB video cards, hardly anything is ever uploaded. Textures, geometry, etc., are all resident. What's left is not bandwidth heavy.
  • by gasgesgos ( 603192 ) on Thursday February 20, 2003 @07:45AM (#5342502)
    There was nothing to this article. Here is what I learned from reading it: 1) there's a 64-bit Linux port of UT2003, 2) AMD likes secrecy, 3) the people showing off the laptop like Harry Potter. Wow. Now wasn't that informative?
  • Worthless (Score:4, Insightful)

    by Jeffrey Baker ( 6191 ) on Thursday February 20, 2003 @08:24AM (#5342599)
    "Some kind of database benchmark." Thanks for that insightful analysis of the 4-way, and pimping your own site on Slashdot. Tasteless!
    • Considering that a database benchmark would be a pretty obvious test for a 64-bit box, if you want to see speedups over 32-bit boxen...

      They had no clue whatsoever.

      I'm pretty impressed that the 64-bit box didn't suck more at UT than it did; 64 bits hurt in applications where cache matters and 32 bits are enough. Not doing "terribly" is pretty impressive :)

      I wish they had run that "some kind of database benchmark" with a 20 GB working set on the 64-bit box and a similarly configured 4-way P4 (any speed, take the 3G, it won't matter). Stuff 16-32 GB of memory in the machines, and watch the 64-bit box wipe the floor with the competition.

      Heck, why do people buy 700 MHz Sun UltraSPARC boxes for some of the biggest and busiest databases in the world? The P4 is faster clock-speed wise, and it's one helluwalot cheaper.

      Quick answer: because 32 bits don't cut it, and clock speed is irrelevant when every single darn database operation either misses cache (64-bit) or misses RAM and needs a disk seek (32-bit).

      All computers wait at the same speed. 64 bits let you stuff enough memory in a box that you wait on an L2 cache miss instead of a disk seek. That's on the order of 100 ns of waiting, compared to several ms: four to five orders of magnitude. 700 MHz versus 3 GHz is insignificant in this light.

      Ok, I'll stop ranting now. I totally agree that the clueless motherfsckers who did that article should be lined up against a wall and shot, for that "some kind of database benchmark" remark. Sigh, talk about not getting it...
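      Putting rough numbers on that wait makes the gap concrete. The latencies below are assumed order-of-magnitude ballpark figures, not measurements:

      ```python
      # Rough, assumed latencies (ballpark figures for illustration, not measurements):
      cache_miss_ns = 150        # main-memory access on an L2 miss
      disk_seek_ns = 8_000_000   # an ~8 ms average disk seek

      ratio = disk_seek_ns / cache_miss_ns
      print(f"a disk seek costs as much as ~{ratio:,.0f} cache misses")
      ```

      With these figures the ratio works out to tens of thousands, i.e. four to five orders of magnitude, so shaving cycles off the CPU clock is noise next to avoiding the seek entirely.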
  • Unfair Comparison. (Score:3, Interesting)

    by 13Echo ( 209846 ) on Thursday February 20, 2003 @09:10AM (#5342759) Homepage Journal
    Anyone who has played the pathetic UT2003 port to Linux will know that it is many times slower than the Windows version. The game was coded for DirectX; on Linux it uses a wrapper to convert Direct3D calls to OpenGL in real time, and that *REALLY* taxes the host CPU.

    In my experience, properly ported OpenGL games on Linux (like RTCW) were faster than under Windows, but that's definitely not the case with UT2003. For that reason, these comparisons are way too early. I can't speak for the port to the Athlon64 architecture, but the 32-bit version of UT2003 for Linux is very slow compared to the Windows release. This is what happens when you code a game for one platform and one API, and then try to port it to other operating systems.
  • by Czernobog ( 588687 ) on Thursday February 20, 2003 @10:18AM (#5343152) Journal
    Epic releases UT2003 text mode.
    This has the advantage of being playable on all kinds of hardware specifications, from a measly 8086 to AMD's flagship AMD 64...

    You hop Alice-in-Wonderland-like in a room full of bots. What do you do?
    >

  • by Junks Jerzey ( 54586 ) on Thursday February 20, 2003 @11:03AM (#5343508)
    There have been some disappointed posters, wondering why it isn't faster. Stop and think about it: Why would a 64-bit CPU be faster than a 32-bit CPU? It's not bus width, because Pentiums have always had 64-bit busses. It's not FPU width, because x86 FPUs have always been 80 bits internally. It's not 64-bit integer registers, because it's very rare indeed to need to do 64-bit integer math. It's not 64-bit pointers, because this is a machine with less than 4GB of memory. What it comes down to is that this processor is using slightly newer tech than AMD's previous chips, including a larger cache. But it has nothing whatsoever to do with being 64 bits, and hence the results are not mindblowing.

    There's a persistent myth that a 64-bit processor is twice as fast as a 32-bit processor, which is completely incorrect.
  • off topic, but (Score:2, Interesting)

    by kyoko21 ( 198413 )
    Did anyone notice that the video, keyboard, mouse, floppy drive, CD-ROM, and power connectors are on the opposite side from the PCI slots? I have worked with quite a few different rack servers in the past, but this sure is one strange-looking monster. Not to mention it really does have some massive heat sinks, and the RAM slots appear to be staggered around the motherboard, somewhat like Sun's motherboards.

    Perhaps someone with insight into other types of rack-mounted systems and motherboard configurations can share?
