Intel Hardware

Intel Skulltrail Benchmark and Analysis

Posted by ScuttleMonkey
from the coming-up-short dept.
Tom's Hardware has a detailed benchmark and analysis of Intel's new Skulltrail offering, taking a look at 8 vs 4 cores. The comparison uses games, A/V applications, office applications, and 3D rendering tools to help demonstrate benchmarks. "We were disappointed by the Skulltrail platform. Although we have tested and reviewed numerous Intel products, we have never had such a half-baked system such as this in our labs. If this sounds harsh, bear in mind that all we have to base this conclusion on is the Skulltrail system itself in its current state, which Intel provided as an official review platform. We do not know whether Intel plans to revise and improve the platform before the final versions ship to retail."
  • A question (Score:5, Informative)

    by warrior_s (881715) * <kindle3@gmail . c om> on Friday February 08, 2008 @01:59PM (#22352174) Homepage Journal
    that is very important

    Are these games and benchmarks actually making.. you know.. use of all the 8 cores? I.e., were they modified so that they can make use of multiple cores efficiently?

    Multicore machines are useful either when you run multiple applications at once, or when a single app has been updated to spread its work across the cores.
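For readers wondering what "updated to use the cores" looks like in practice, here is a minimal Python sketch of splitting a CPU-bound job across worker processes. The prime-counting workload and the chunk sizes are made-up illustrations, not anything from the article:

```python
# A minimal sketch: splitting a CPU-bound job across worker processes.
# The workload (naive prime counting) is made up for illustration.
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division (deliberately naive)."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

def parallel_count(limit, workers):
    # Carve the range into one chunk per worker; each chunk runs in its
    # own process, so each can occupy a separate core.
    step = limit // workers
    chunks = [(i * step, limit if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(count_primes, chunks))

if __name__ == "__main__":
    print(parallel_count(10_000, 4))  # same answer regardless of worker count
```

The point of the sketch is the last function: unless the work is explicitly carved up like this, extra cores sit idle.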
    • by UID30 (176734) on Friday February 08, 2008 @02:04PM (#22352254)
      No no ... you're doing it wrong. The REAL benefit of 8 cores is that you can get some work done while the Storm Worm is busily taking over the world.
      • Re: (Score:3, Funny)

        by gandhi_2 (1108023)
        That's what you think. I'm working on a scalable multi-thread port of Storm Worm. Check out our site at http://sourceforge.net/ [sourceforge.net]
      • Regardless of the fact that you are obviously joking your point is valid.

        Many people's computers run significantly slower because malware is using the majority of their system resources.

        If the malware runs in parallel with the user's tasks, they probably won't even notice. (Not that they noticed before...)
    • Very true; in fact, the article points out that very little software utilizes four cores, let alone eight.

      On the other hand, I know that Blender at least lets you specify the number of threads to use while rendering. I would hope the OS is smart enough to put each thread on a different core, but I don't know for sure.
    • Yes and no (Score:5, Informative)

      by wandazulu (265281) on Friday February 08, 2008 @02:11PM (#22352342)
      I'm not familiar with writing games on a multi-core game system a la the PS3, but I have written multi-threaded apps in Windows and I can tell you that the answer is:

      Maybe.

      The problem is that your app might be multi-threaded up the wazoo, but you're at the mercy of the OS (Windows here) to actually put the threads on separate processors/cores. You can *request* a thread on a separate processor (SetThreadIdealProcessor(), if I recall.. it's been a while), but the docs state that this is merely a request, and the operating system is free to ignore it if it thinks it can do better. A lot of the time I observed that Windows doled out threads to other processors very grudgingly, and I was told that it's because, to Windows, the overhead of keeping track of which thread is on which processor was, under a lot of circumstances, more expensive (read: slower) than if it just kept them all on processor 0 and context-switched (which it was going to be doing anyway).

      Most games have been, as I've seen, multi-threaded for awhile now; the complexity of these games means they'd have an event loop that's a million lines long if they didn't (and probably do anyway), but your performance is always going to be only as good as the hardware, and the operating system, let you.
      • Re:Yes and no (Score:5, Informative)

        by afidel (530433) on Friday February 08, 2008 @02:50PM (#22352956)
        Windows will happily keep everything on processor 0 until such time as a scheduling threshold is reached on processor 0 at which time it will move the thread to another processor if available. It will continue to use the same processor until all of the processors have a full load. I imagine in the average case of a desktop system this probably IS the most efficient algorithm, but if you have lots of short-lived, high-resource-consuming threads, it's probably not, due to all the state copying going on. Also, in Windows 2003 the kernel is aware of memory locality, so it will try to keep processes on the processor closest to their largest pool of memory in a NUMA system. Also the reason that affinity requests aren't hard is that otherwise it would have to throw an error if that processor wasn't available either due to hardware issues or due to the process attributes being set so that it can't see that processor.
        • Also the reason that affinity requests aren't hard is that otherwise it would have to throw an error if that processor wasn't available either due to hardware issues or due to the process attributes being set so that it can't see that processor.

          That's a stupid reason.

          All the big Unices have no problem with hard-locking a process/thread to a specific CPU.
          If the CPU isn't available at the time of the lock, the lock call returns an error and the process remains free-floating. If a CPU gets oversubscribed, the processes just get smaller time-slices (or none, depending on their priority), and it is up to the programmer to deal with that contingency.

          If the cpu goes away (like a hot-plug event) then the processes get migrated somewhere else and may or
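A sketch of the hard binding described above, using Python's Linux-only os.sched_setaffinity (a firm request that either takes effect or raises an error, unlike the Windows hint discussed earlier). This is an illustrative snippet, not something from the thread:

```python
# Hard CPU binding on Linux via os.sched_setaffinity. A nonexistent CPU
# raises OSError, mirroring the "lock call returns an error" behaviour
# the comment above describes. Linux-only; Windows hints work differently.
import os

def pin_to_cpu(cpu):
    """Hard-bind the calling process to a single CPU."""
    os.sched_setaffinity(0, {cpu})   # 0 means the current process
    return os.sched_getaffinity(0)   # the mask actually in effect

original = os.sched_getaffinity(0)   # remember the starting mask
print(pin_to_cpu(min(original)))     # a one-CPU set, e.g. {0}
os.sched_setaffinity(0, original)    # undo the pin
```

Note the restore at the end: unlike a scheduler hint, the pin persists (and is inherited by children) until explicitly undone.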

        • Windows will happily keep everything on processor 0 until such time as a scheduling threshold is reached on processor 0 at which time it will move the thread to another processor if available.

          That's not my experience at all. Windows seems to balance the load pretty well, even if the system is 95% idle, all cores seem to have an equal chance at getting the load. It's not very often where I see one core getting a lot more load than another.
    • That's why they need to let someone like me demo it, not Tom's...

      I tinker with CFD and various simulation codes in my spare time, multi threaded based on the number of cores available. I could put all 8 of those cores to work, easy :)

    • No, they aren't (Score:5, Informative)

      by Sycraft-fu (314770) on Friday February 08, 2008 @02:27PM (#22352552)
      At this point, 2 cores is about all you'll find really useful in a gaming rig. A lot of games are still single-threaded, especially old ones. However, there are a good number of games out there that can make efficient use of 2 cores. Past that, it gets questionable. There are some games that claim quad-core support, but in general it seems they don't make efficient use of it yet. Thus far, I've never seen any game that claims 8-core support, much less any benchmarks to back it up.

      I think this is mostly targeted at the "My ePenis is bigger than yours," crowd. There are a non-trivial number of people out there who are willing to just drop obscene amounts of money on gaming rigs, and Intel wants to suck every dollar they can out of their pockets.

      Same sort of deal with nVidia's new triple SLI boards. At this point even 2 card SLI isn't a great idea because it costs so much (literally twice what a single card does) and the benefits aren't that great. There isn't a lot of need for 3 card SLI. However, people will spend the money, so nVidia will happily make a product to take it from them.
      • by afidel (530433)
        Yeah, you're not likely to keep more than a handful of cores busy with games unless they are AI-intensive. You have core logic, physics, sound, graphics, I/O, network, and per-unit AI as possible threads, and most of those won't keep a core busy in the least. Unless you are purposely wasting cycles by, say, making your physics too realistic, you aren't going to keep more than 2-3 cores busy.
        • by fbjon (692006)
          Flight Simulator X is one of those games that actually likes more cores, IIRC, because it's more limited by CPU than by the GPU. It's also very much parallel.
        • by BeanThere (28381)
          One problem is that in a typical game, graphics is about the only thing that can be scaled quite dramatically (frame rate and quality-wise) without 'making it a different game', so to speak. If you made use of multi-cores to, say, have much better AI or physics, then you would effectively be *relying* on having multiple cores, because if the game were to run on a single-core system, you couldn't exactly have different, 'worse' physics for those users (the best you can try do is sacrifice a little more of th
      • by MojoStan (776183)

        Same sort of deal with nVidia's new triple SLI boards. At this point even 2 card SLI isn't a great idea because it costs so much (literally twice what a single card does) and the benefits aren't that great. There isn't a lot of need for 3 card SLI. However, people will spend the money, so nVidia will happily make a product to take it from them.

        I'm not a gamer, but I do know that a few maniacs like to play their games on 30" LCDs at their native 2560x1600 resolution. Wouldn't multi-card SLI benefit them? Sure, the market's not big, but I don't think they're all wasting their money on nothing.

    • Who cares? I've always wanted to render hours of high definition video while playing Crysis. It's a beautiful thing.
    • by timeOday (582209)
      But even if a program isn't massively threaded, it shouldn't run slower than on a system with fewer cores! And that's where I see the most damning problem here: "If a program only uses four of the eight processor cores, then the Skulltrail system is noticeably slower than a single-socket quad-core computer." It's one thing to have unutilized cores, but quite another for them to be a hindrance!
      • by LWATCDR (28044)
        The speed issue probably has more to do with RAM than the CPU.
        This chipset uses FB-DIMMs, which are a good bit slower than DDR2 or DDR3 desktop RAM.
        The other issue has to do with the FSB. I read that the Intel memory system starts to falter at 4+ cores. Memory access is one of the few areas where AMD still has an advantage.
        • by timeOday (582209)
          WRT the FSB, you would expect memory contention to be an issue with all 8 cores banging away, but that wouldn't explain why the 8 core chip is slower than a 2 or 4 core chip if there are only, say, 2 active threads.
    • Do you know how much it costs to REWRITE GAMES to work properly and efficiently on 8 cores? Those rewritten games will also run terribly on machines that have less than 8 cores (unless there's a separately programmed path for dual cores, and quad cores, and single cores)... so ignore the thousands of gamers with dual core systems that make up the majority of your enthusiast customer base and spend lots of money satisfying the 100 rich dudes that will buy the SkullTrail for gaming that really don't care as m
    • Re: (Score:3, Informative)

      by milsoRgen (1016505)

      Are these games and benchmarks actually making.. you know.. use of all the 8 cores?

      No, they are not. The article goes on to say the 2nd processor is basically left unused, and even current quad-core designs are outperforming Skulltrail.

      The problem lies in the fact that Intel released this platform as a gaming platform, yet they reached into their workstation kit to pull out this hardware. Dual processors are a nice bragging right for enthusiasts, but only if the performance is in the very top tier with software actually in use. And using fully buffered memory is simply a big no-no when

    • Or... (Score:3, Interesting)

      by jgoemat (565882)
      Are you multiboxing [wowwiki.com], playing 5 or 10 World of Warcraft accounts at the same time? My new quad-core flies with five instances of WOW running. My AMD dual-core was faster, but could only handle three sessions at a time before starting to get choppy.
    • Many of the newest Operating Systems, applications, and games are multi-threaded. Multiple cpu cores just allow modern systems to take advantage of them, when available.

      Can all of these be enjoyed on a single-core cpu? Absolutely.

      I have a dual quad-core computer, similar to Intel Skulltrail system, that dual boots Windows Vista Ultimate, 64-bit, and Fedora 8 Linux, 64-bit. Many programs do take advantage of this system, including modern PC games, such as Crysis and Unreal Tournament 3. UT3 does use all
  • by Stanistani (808333) on Friday February 08, 2008 @02:00PM (#22352200) Homepage Journal
    "Although disappointing in performance, bikers and goths will probably be enthusiastic about the 'Skulltrail' name, and get new, annoying tattoos."
  • by TheSync (5291) * on Friday February 08, 2008 @02:03PM (#22352234) Journal
    Guess what guys? We've run out of GHz (mainly a power/heat problem). Start writing parallel programs.

    Here is what the article says:

    To be fair, though, it is not Intel's hardware that is at fault here, but today's software. If a program only uses four of the eight processor cores, then the Skulltrail system is noticeably slower than a single-socket quad-core computer. Since there are practically no current games or desktop applications around that can utilize more than four cores (if that many), the Skulltrail system does not offer any benefit here.

    Read The Landscape of Parallel Computing Research: A View From Berkeley [berkeley.edu] which has the description of why, this time, there is no getting around parallel programming.

    Also examine NVIDIA's CUDA [nvidia.com] platform, which scales from a handful of processors on your PC's NVIDIA chip to the 128 processor NVIDIA Tesla [nvidia.com] card. Scalable parallel processing is the future.

    • by msimm (580077)
      I enjoy the /. corrections like this (as usual). What I wonder is how do systems like this hold up under reasonable load in real multi-application environments? I mean looking at my task manager and task bar right now, on a normal business day, I have about 10 applications running using varying amounts of my system resources (plus services). Is the way I work multi-threaded and if so will I notice a difference when I spread this work across these cores? I mean I appreciate they might be saying that single
      • by h4rm0ny (722443) on Friday February 08, 2008 @02:40PM (#22352760) Journal

        It's going to depend on whether those ten applications are actually making ongoing use of your processor. Encoding a movie whilst listening to music and editing photos - yes, proper use of multiple cores will see big benefits. But if you're talking about some spreadsheets, word documents, a browser and an email client, then less so, because no matter how quickly you think you're switching between these applications, it's going to look like slow motion to a CPU swapping processes. With this sort of usage, a CPU is actually sitting idle a lot of the time, waiting for the next eternity between keystrokes to end. I'm not saying you won't see a benefit, but the benefit really kicks in when you've got multiple applications that are really doing something. A lot of applications (and probably the ten you have open at work) simply don't fall into that category.
        • by msimm (580077)

          Encoding a movie whilst listening to music and editing photos

          I think we can all agree that spreadsheets won't be a particularly taxing task. But ya, music/encoding/compiling/SETI/virus-scanning/searching-indexing/updating/archiving are all fairly common background tasks that have at one time or another probably impacted all of us.

          I'm just wondering if the fellow who did the tests in the article would have been disappointed with the system performance if he'd used it over time with heavier (application) u

        • by geekoid (135745)
          Once 64-bit has been common in the business workplace for a few years, spreadsheets will burst in size. There are large companies who are frustrated because their spreadsheets are limited by a cap built into them because of the memory addressing space issue.

          CFOs and other executives would love to have a real-time update from their books to a spreadsheet and have that data sliced a hundred-plus ways. With color, graphics and an embedded video on different sheets.

          They don't want their SAP, or any enterpri
          • Once 64-bit has been common in the business workplace for a few years, spreadsheets will burst in size. There are large companies who are frustrated because their spreadsheets are limited by a cap built into them because of the memory addressing space issue.

            There's this thing called a database - anything big enough to care about the 4G limit should be in one.

      • Is the way I work multi-threaded and if so will I notice a difference when I spread this work across these cores?
        I think we're still bottlenecked in other areas. No matter how many cores you've got, you are somewhat limited by HD and memory speeds, which have not grown as fast as CPU power. I think Intel and AMD might benefit me more if they did more motherboard research instead of CPU research. I think the SSD Mac is a step in the right direction. They just need better SSDs.
        • by diskis (221264)
          Uh, do the same thing with HDDs as with CPUs. Add more.
          What's stopping you from buying 4 250GB drives and striping them? Almost 150 MB/sec sustained speeds. And how much is a 250GB drive, 80 bucks? You'll get four of those for the price of a quad core.

          Not like you need any RAID cards anymore, as any decent desktop motherboard has like 6 SATA ports and an onboard RAID controller.

          HDDs are commodities, almost in the same way floppies were in the '80s.
          • by AuMatar (183847)
            And up your failure rate by a factor of 4 too. Because with striping, if any disk fails the data for the entire file is lost. No thanks. Besides, I have better things to spend $320 on. And none of these fix the memory problem, which is the real issue for most apps- memory bound, not CPU or IO.
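Both halves of this exchange come down to simple arithmetic. A sketch, where the per-disk figures (~37.5 MB/s sustained, a 3% failure rate over some period) are assumptions for illustration, not measurements:

```python
# Back-of-envelope numbers for a 4-disk RAID-0 stripe. The per-disk
# figures are illustrative assumptions, not measurements.
def stripe_throughput(per_disk_mb_s, disks):
    # RAID-0 reads/writes hit all disks at once, so bandwidth adds up.
    return per_disk_mb_s * disks

def stripe_loss_probability(p_disk, disks):
    # The array loses data if ANY disk fails: 1 - P(all disks survive).
    return 1 - (1 - p_disk) ** disks

print(stripe_throughput(37.5, 4))                  # 150.0 MB/s, as claimed above
print(round(stripe_loss_probability(0.03, 4), 4))  # 0.1147, roughly 4x one disk's 3%
```

For small per-disk failure probabilities, 1 - (1-p)^n is close to n*p, which is exactly the "up your failure rate by a factor of 4" claim above.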
            • by diskis (221264)
              Which memory problem?
              Google for some benchmarks. Here is one; it appears to be in French, but luckily pictures are language-independent.

              http://www.matbe.com/articles/lire/357/ddr2-400-533-667-800-1067----que-choisir/page14.php [matbe.com]

              Comparing DDR2-400 to DDR2-1067 gives like 20% more performance. And that's on a Core 2 platform, which is supposed to be memory-starved. That is 2.5-times-faster memory for a 20% performance increase.

              And, you have backups right? That 4*250 array is good to mirror overnight to a terabyte
              • by AuMatar (183847)
                Backups on my home machine? Nope. Not worth my time or effort. I only do that when I move files en masse between computers- once every few years. Which is actually still more than most people. If I lose data, I lose data. Oh well, I lose some game saves and my latest resume. I'm still not going to stripe my drives- quite frankly the mild performance benefit isn't worth the hassle of restoring even if I did have a backup.

                As for memory slowdowns- are you for real? Memory is *the* biggest bottleneck
            • Um, has everyone forgotten about parity [wikipedia.org]? I swear I see all these kids nowadays with striped raid setups and I have never seen one use parity. Raid 5 for the win.
          • by afidel (530433)
            In fact the fakeraid driver for 6 SATA cards is a good way to keep some of those cores busy =)
            • by diskis (221264)
              Actually, it's not that bad. Ever tried Windows 2000's software RAID? 4 disks were barely noticeable on an AMD K6-2.
              If you really need that last 10% of a quad core, you need a workstation or server, not a living-room media machine :)
          • You're still bound by the speed of the I/O bus, FSB limits, and the speed of memory.
            • by diskis (221264)
              No. Remember good old IDE cables? Those could almost handle that load: 133 MB/s. That's higher than any single hard drive can provide.

              Are you like my roommate, who likes to use firewire/800 for his external HDD instead of USB2, as "FW is so much faster"?

              Yes, FW is faster than USB, but both still outperform everything but the fastest 15k RPM drives.
              And internal buses? Please... Remember good old PCI: bandwidth 133 MB/sec, beyond any single hard drive. If that's not enough, get a new-ish computer with PCI-E. Bandwid
              • Are you like my roommate, who likes to use firewire/800 for his external HDD instead of USB2, as "FW is so much faster"?

                He might not be talking nonsense. While USB-2 has a 480Mb/s line speed, you'll be lucky to get more than about 30MB/s through it, which is the speed of my laptop's internal drive - my external disks peak at about 40-50MB/s and can handle sustained transfers of 30MB/s. With FireWire 800, I can chain two disks together, have one wire going in to my laptop, and not be limited by the interface speed.
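The unit mixing in this sub-thread (Mb/s line speed vs. MB/s throughput) is worth making explicit. A small sketch of the conversion; the ~30 MB/s observed figure comes from the comment above, the rest are the interfaces' nominal line speeds:

```python
# Interface line speeds are quoted in megabits/s; disk throughput in
# megabytes/s. Dividing by 8 gives the best-case byte throughput.
def line_speed_to_mb_per_s(megabits_per_s):
    return megabits_per_s / 8

usb2_ceiling = line_speed_to_mb_per_s(480)   # 60.0 MB/s theoretical
fw800_ceiling = line_speed_to_mb_per_s(800)  # 100.0 MB/s theoretical
print(usb2_ceiling, fw800_ceiling)
# Observed ~30 MB/s on USB 2 is only half its theoretical ceiling, so
# FireWire 800 leaves far more headroom for chained disks, as argued above.
```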

        • by TheSync (5291) *
          No matter how many cores you've got you are somewhat limited by HD and memory speeds which have not grown as fast as CPU power.

          I don't buy the HD speed issue, since that can be solved with RAID and eventually RAIDed Flash solid-state.

          Memory speed is a different issue - the only solution is to dramatically change how we think about memory: from some chips on a bus to something intimately and directly connected, in parallel, to the CPU cores.

          The "The Landscape of Parallel Computing Research: A View from Berkeley
      • Well. I suppose you could have a shit load of annoying widgets spinning in the background, but really, most people simply don't need more than a single CPU. Most rarely use more than 5% of the one they already have.

        What I find rather humorous is that we currently try to consume any excess CPU performance by using less efficient languages... We make 100 million people spend another $1,000 each in order to save $500,000 worth of cost in programmer time... Then justify it as cost efficiency.

        To really make use
        • by ianare (1132971)

          Most rarely use more than 5% of the one they already have.
          Huh? I've seen brand new out of the box dual-core Vista machines using 10-15% of CPU at idle. XP is (much) better, but once you start adding all sorts of crap on there, and there are like 40 processes running at idle, CPU usage certainly goes past 5% at idle ... and then they open AOL.
          • by linzeal (197905)
            Even my grandparents ditched AOL. How are they doing nowadays? Isn't Time Warner going to dump them?
    • As a scientific number cruncher, I always need a few more orders of magnitude in speed. From the first PC until now, I picked up an order of magnitude with each hardware replacement cycle. But now it looks like the next order of magnitude will have to wait until 16-core chips reach commodity prices. So go for it.
      • by Pyrion (525584)
        The next order of magnitude won't be reached until the software is written to take advantage of it. You could have an 80-core CPU and not get any use out of it simply due to obsolete software. That's the point being made here. You're not going to get increases in performance in orders of magnitude with single-threaded applications anymore.

        Much in the same sense that you can have 8 gigs of memory in your computer and you can't take advantage of more than about three gigs of it due, again, to obsolete softwar
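The "no more orders of magnitude from single-threaded software" point above is Amdahl's law. A quick sketch; the 80-core figure comes from the comment above, while the parallel fractions are illustrative assumptions:

```python
# Amdahl's law: best-case speedup on n cores when only a fraction p of
# the program's work can run in parallel.
def amdahl_speedup(p, n):
    return 1 / ((1 - p) + p / n)

for p in (0.0, 0.5, 0.95):
    print(p, round(amdahl_speedup(p, 80), 2))
# p=0.00 -> 1.0   : single-threaded code gains nothing from 80 cores
# p=0.50 -> 1.98  : capped below 2x no matter how many cores you add
# p=0.95 -> 16.16 : even 95%-parallel code is nowhere near 80x
```

The serial fraction dominates quickly, which is why extra cores without rewritten software buy so little.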
        • by neumayr (819083)
          He says he's a "scientific number cruncher". He is going to be able to take advantage of this 80 core cpu.
          About the 3GB RAM limit, the way I see it MS is on the way to fixing that by making this memory hog called Vista: people are going to go 64-bit just to be able to use enough RAM to run it efficiently.
          • by Pyrion (525584)
            I doubt that. If he really was a "scientific number cruncher" capable of taking advantage of an 80-core CPU, he wouldn't be bitching about scalability.

            As for the 3GB RAM limit, I'm glad MS is forcing the issue. In the short-term, more x86 applications need to be flagged as large address-aware (to take advantage of more than 2GB of virtual memory) and in the long-term more applications need to be compiled for x64 only. Migration to x64 is inevitable. Games today are already pushing those limits. As for the O
            • by neumayr (819083)
              Well, is it really the OS you want to use that memory for? Isn't it more likely you got that RAM to have your applications use it?
              Skulltrail is a proof of concept of course, but I think there are a lot more people that could really put those eight cores to use than most posters seem to think. Pretty much everyone involved with 3D animation for example.
              And for those that get such a system for the bragging rights - well, chances are they at least run some distributed computing client, so it's not a total wa
    • Guess what guys? We've run out of GHz (mainly a power/heat problem). Start writing parallel programs.

      Or better yet, go back and clean out all the useless crap that's been gradually added to software in the past few decades.
    • I'm sure if coders spent more time learning how a CPU works, they could achieve 16-CPU quality out of 4 cores; and if you stop writing crap in Java and .NET, you can get 64-CPU quality out of C++ on 4 cores. Sure, I agree many solutions don't need it, since they are crappy little tools that never use much CPU, but they may use lots doing simple things like making thumbnails out of 24 images.

      I'm glad we've hit the wall on GHz; it means these Java/.NET programmers can't assume that in 3 years' time their software will be fa
  • More Cores (Score:2, Interesting)

    by markass530 (870112)
    I'm the kinda guy who pushes any computer he has to the limits, and when I recently upgraded my computer, I went with the AMD 5000+ Black Edition and decided to wait on going quad. I can play Crysis while watching a movie on my 2nd monitor, no prob. About the only time I wish I had a quad core is when I'm converting video; other than that, I can't really see much of a need for a quad. Don't get me wrong, as soon as AMD cranks out a worthwhile 45nm quad, I will upgrade right quick, but It wi
    • by ianare (1132971)
      HTF (how the fuck) is this a troll? Judging from this [tomshardware.com] I would take the parent's claim with a grain of salt, however this does not equal troll.
      • Gracias. I have no fucking clue how somebody could mod me a troll. And as far as taking anything I say with a grain of salt: I live in Barracks Rm 531A, Biggs Army Airfield, El Paso, Texas. Feel free to stop by and get a demonstration. My computer is: AMD 5000+ 3 GHz, ATI 3870, SB X-Fi, 2 gigs RAM, standard HD. I have no need to bullshit about what my setup does, and I don't bullshit either. I AM an AMD fanboy, and just like their products better, probably just due to comfort with and experience with their product
  • by deander2 (26173) * <public&kered,org> on Friday February 08, 2008 @02:05PM (#22352280) Homepage
    what?!? an 8-core machine doesn't run single-threaded benchmarks any faster than a 1 core? that's crazy! what a revolution! what's next? we'll discover that 9 women can't create a baby in one month?!?

    shocked, i tell you! shocked!
    • by McNihil (612243)
      Unless of course it goes like in "Species II"

      accelerated gestation! Woohooo... OMG... hurl.
    • ..but I think the article was saying that on single threaded apps, instead of the benchmarks being identical for the same processor in a single socket vs dual socket configuration, the dual socket one was slower. If you bothered to read the article, on multi-threaded applications there were indeed speed increases (40-50%, nowhere near the 80-90% gains you'd expect). They weren't expecting the single threaded performance to suffer.

      My guess is that the memory controller is now becoming the bottleneck, sinc
      • by Pyrion (525584)
        They've always had this advantage. It's just that this hasn't been an issue in consumer applications. It still isn't, since the vast majority of consumers aren't going to pony up the dough for Skulltrail, not now nor in the near future.
    • by linzeal (197905)
      I volunteer to test that theory with the 9 women and the one month thing. I will get back to you, I hope. ;)
  • by Anonymous Coward on Friday February 08, 2008 @02:05PM (#22352284)
    At the bottom of the linked page I saw "Page 1 of 25" and I gave up. Bad submitter! Bad! Bad!
    • I haven't been interested in anything Tom's has had to say in a long, long time. Their methodology is often shoddy at best, and their opinion is oft swayed by whatever company is giving them the most schwag in any given month.
    • by poot_rootbeer (188613) on Friday February 08, 2008 @06:28PM (#22355698)
      At the bottom of the linked page I saw "Page 1 of 25" and I gave up.

      I used an 8-core CPU to read the article and was able to get through it in just a little more time than a 3-page article would take.
    • Re: (Score:3, Informative)

      by Mad Merlin (837387)

      At the bottom of the linked page I saw "Page 1 of 25" and I gave up. Bad submitter! Bad! Bad!

      Tip: add print.html to the end of any THG URL, and you can read the entire thing on one page. THG would be completely and utterly useless otherwise...

    • by MojoStan (776183)

      At the bottom of the linked page I saw "Page 1 of 25" and I gave up. Bad submitter! Bad! Bad!

      I know you were probably joking, but Slashdot comments have taught me the non-obvious way to get a single-page view of Tom's Hardware articles:

      If clicking that link directly results in a redirect to the multi-page version (for some reason Opera is doing this for me), then copy-and-paste that address directly into the address bar.

  • 3D Rendering... (Score:3, Informative)

    by podperson (592944) on Friday February 08, 2008 @02:14PM (#22352384) Homepage
    The only real test to show the benefit of Skulltrail was the 3D rendering section where the Skulltrail machines really did post decent results. Even for video encoding you reach a point where the problem becomes IO-bound (and you can't compress video frame n independently of video frame n+1 because of interframe compression). Of course, the next question is whether a Skulltrail machine is cost effective against slightly cheaper machines used in parallel for 3D rendering.
    • by MosesJones (55544) on Friday February 08, 2008 @02:31PM (#22352618) Homepage
      Most video compression approaches use keyframes, which are uncompressed (across frames), in order to make sure the compression doesn't drift too far from the actual content. So doing multi-core is actually pretty easy on video: you just dedicate a core to working on one keyframe-to-keyframe section. Given that keyframes often occur as much as once a second (or, on decent connections, once every three seconds or so), there is a huge amount of work that could be done in parallel, and it's not very difficult to make the encoders work in that way.

      So video compression isn't one of the areas where it isn't an advantage to have multi-cores.
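The keyframe-to-keyframe scheme described above can be sketched in a few lines. Here zlib.compress stands in for a real video encoder, and the "frames" are fake byte blobs; both are assumptions for illustration only:

```python
# A sketch of keyframe-to-keyframe (GOP) parallelism. zlib stands in
# for a real video encoder; the "frames" are fake byte blobs.
import zlib
from concurrent.futures import ProcessPoolExecutor

def encode_gop(frames):
    """'Encode' one keyframe-to-keyframe group independently of the rest."""
    return zlib.compress(b"".join(frames))

def parallel_encode(frames, keyframe_interval=25):
    # Cut the stream at keyframes; each group has no dependency on its
    # neighbours, so the groups can be encoded on separate cores.
    gops = [frames[i:i + keyframe_interval]
            for i in range(0, len(frames), keyframe_interval)]
    with ProcessPoolExecutor() as pool:
        return list(pool.map(encode_gop, gops))

if __name__ == "__main__":
    fake_frames = [bytes([n % 256]) * 1024 for n in range(100)]
    chunks = parallel_encode(fake_frames)
    print(len(chunks))  # 4 GOPs for 100 frames at a 25-frame interval
```

Because each group starts from an independent keyframe, the parallel result is byte-for-byte identical to encoding the groups one after another.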
      • by Pyrion (525584)
        Unless you have sudden and frequent changes in scene luminosity, in which case you can have many keyframes in a single second of video.
      • by matfud (464184)
        The problem is that you need to compute when your next keyframe needs to be created. You normally do this by computing frames from the last keyframe (and then sequentially from each subsequent computed frame) until you reach the point at which the error in the calculated frame is too large. At this point you add a new keyframe. This is a serial process (in terms of frames), so it does not parallelise well.

        There are some approaches to parallelise this, but they do not scale well to large numbers of processes/threads/
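The serial keyframe-placement loop described above might look like this; the per-frame "error" values are a made-up drift score, not a real codec metric:

```python
# A sketch of the serial keyframe-placement loop. The per-frame "error"
# is a made-up drift score standing in for a real codec's error metric.
def place_keyframes(frame_errors, threshold):
    """Walk frames in order, accumulating predicted-frame error since the
    last keyframe; start a new keyframe once it exceeds the threshold.
    Each decision depends on the previous one, hence the serial nature."""
    keyframes = [0]                 # frame 0 is always a keyframe
    accumulated = 0.0
    for i, err in enumerate(frame_errors[1:], start=1):
        accumulated += err
        if accumulated > threshold:
            keyframes.append(i)     # a new keyframe resets the error
            accumulated = 0.0
    return keyframes

print(place_keyframes([0.0, 0.4, 0.4, 0.4, 0.4, 0.4], 1.0))  # [0, 3]
```

Note why this resists parallelisation: you cannot know where GOP k+1 starts until the accumulated error since GOP k's keyframe has been computed frame by frame.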
      • by Slashcrap (869349)

        So video compression isn't one of the areas where it isn't an advantage to have multi-cores.
        Thanks for not failing to include any double negatives.
    • by TheSync (5291) *
      Even for video encoding you reach a point where the problem becomes IO-bound (and you can't compress video frame n independently of video frame n+1 because of interframe compression).

      The question is how much can you cache - you can throw a GOP at a core, you can cache the I-frame on chip and predict from that.

      Or you could do the annoying solution and break up the frame spatially and work each "quadrant" or "region" on a separate core (though motion prediction between cores becomes troublesome).

      There are fol
  • FB-DIMMs + a high number of chips on the MB = high power and high heat, and it doesn't even have PCI-E 2.0.

    The Mac Pro may end up costing less than this, and it will likely use less power and give off less heat, and it has PCI-E 2.0.

    If AMD can just make some good quad cores, then an AMD-based system with an AMD/ATI chipset, or an NVIDIA one with PCI-E 2.0 in all slots, with DESKTOP RAM, will blow this away.
  • by bmajik (96670) <matt@mattevans.org> on Friday February 08, 2008 @02:29PM (#22352578) Homepage Journal
    "Nathan Explosion, front man of Dethklok, sums up the new processor's performance:"

    Skulltrail is FUCKING BRUTAL

  • I for one (Score:2, Interesting)

    by Anonymous Coward
    Would like to see them run 8 virtual operating systems and play games on each one at the same time.
  • Do people really wade through Tom's site anymore? Try Anandtech.

    And guess what? My psychiatrist said my misanthropic tendencies were counter-productive to my welfare. So I'm even giving you the single page version!

    http://www.anandtech.com/printarticle.aspx?i=3216 [anandtech.com]
  • by tyler_larson (558763) on Friday February 08, 2008 @03:18PM (#22353368) Homepage
    Where this processor truly shines, and this was unfortunately not reflected in their report, is in running 8 concurrent instances of the benchmark suite.
  • So, how well does this run XP? Since I'm certainly not even considering switching until at least Windows 7...
  • It should be "um-limited" - like the phone commercials.
  • ..does it run Linux?

    I think it may be worth asking the people at Bestofmedia if it runs Linux and what the compile, I/O, etc benchmarks are like with 8 cores.
  • The new Mac Pro is an 8-core system. When Tom's Hardware says there is no competition, I think it left out the Mac Pro. Skulltrail gives PC enthusiasts an alternative to it. I wish there were a chip or a piece of software that could automatically allocate processes, or parts of processes, to multiple cores. Then something like Skulltrail would be very useful.
