
AMD Bulldozer Information and Benchmarks Leaked 126

Posted by timothy
from the just-a-pinch-of-salt-between-cheek-and-gum dept.
MojoKid writes "With Bobcat and Llano launched, AMD has one more major product overhaul set for this year. The company's Bulldozer CPU will launch in the next few months, and after years of waiting, enthusiasts and IT industry analysts are both curious to see what AMD has in its high performance pipeline. According to recently leaked info, one of the new AMD octal-core processors will be an AMD FX-8130P running at 3.2GHz base speed, with what's reported as a 3.7GHz Turbo speed, and a 4.2GHz clock speed if only half the CPU's cores are in use." Writer Joel Hruska justly points out that measurements based on unofficial data and unreleased chips are subject to all kinds of potential errors, not to mention Photoshop.
This discussion has been archived. No new comments can be posted.

  • has what to do with clock speed?
    • by Anonymous Coward

      Means that the images could have been made/doctored in Photoshop.

    • Re: (Score:3, Informative)

      by VisualD (1144679)
      There is a 3DMark 11 result screenshot with a date of 01/02/2008; they are implying the result is fake.
      • Re: (Score:2, Funny)

        by SquirrelDeth (1972694)
        My desktop clock is always wrong, except under Fedora. Why Fedora? My SuSE install is 2 hr 31 min off. Why does Linux hate the desktop clock so much?
        • by mfwitten (1906728)

          Perhaps Fedora, but not SuSE, is running an NTP (Network Time Protocol) client; or perhaps SuSE is configured for the wrong time zone, or the hardware clock and SuSE's configuration disagree on whether the time is stored as local time or as time at UTC offset +0000.
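The localtime-vs-UTC failure mode can be sketched in a few lines of Python (the times and zone below are made up for illustration, not taken from the poster's machine):

```python
from datetime import datetime, timezone, timedelta

# The hardware clock stores 14:00 *local* time (zone UTC+2), but the OS
# is configured to treat the RTC value as UTC.
rtc_reading = datetime(2011, 7, 15, 14, 0)   # what the RTC says
local_tz = timezone(timedelta(hours=2))      # the machine's real zone

# Misinterpretation: take the RTC value as UTC, then convert to local time.
misread = rtc_reading.replace(tzinfo=timezone.utc).astimezone(local_tz)
correct = rtc_reading.replace(tzinfo=local_tz)

skew = misread - correct
print(skew)  # 2:00:00 -- the desktop clock runs a full UTC offset fast
```

The leftover minutes in a "2 hr 31 min" skew would then just be ordinary RTC drift stacked on top of the timezone confusion.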

        • by Eskarel (565631)

          If you're running that Linux desktop in a VM, there are some kernel patches required to fix the clock drift; Fedora probably has them and SuSE doesn't.

          • I usually correct the setting in VMware (not sure about other hypervisors). It's something I forget about until my midnight reports show up late the next morning.
        • by EricX2 (670266)

          But is your clock often 3 YEARS wrong? I've found in the past that things go weird when your clock is that far off. SSL certificates fail to validate on websites, for one.

          Also, they are using a new motherboard. Even if the clock had been reset, it should show a recent year; my motherboard from last year doesn't default to 2008, it defaults to 2010 when it is reset.
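The SSL point is easy to illustrate: certificate validation compares the system's idea of "now" against the certificate's validity window, so a clock stuck years in the past fails the check. A simplified sketch (the dates are invented for the example):

```python
from datetime import datetime

# A certificate is only valid between its notBefore and notAfter dates.
not_before = datetime(2010, 6, 1)
not_after = datetime(2012, 6, 1)

def cert_time_valid(now):
    """The date-window part of certificate validation, nothing more."""
    return not_before <= now <= not_after

print(cert_time_valid(datetime(2011, 7, 15)))  # True  (correct clock)
print(cert_time_valid(datetime(2008, 1, 2)))   # False (clock stuck in 2008)
```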

      • Re:Photoshop (Score:4, Informative)

        by rhook (943951) on Friday July 15, 2011 @01:45AM (#36772414)

        Except 3DMark 11 didn't exist until last December; it's more likely that he never set the BIOS clock.

      • by hairyfeet (841228)

        Well, to be fair, if this bunch just did a quick Windows setup to run this chip, I frankly wouldn't be surprised if they ran something like TinyXP/7 or one of the "Razor1911" Windows builds, all of which seem to have the time service disabled.

        But I think the more important thing to note is that we won't know the true performance of the new AMD chips until about 8 months after the first Bulldozer chips are released. Why is that? Because they are currently preparing to switch to a whole new APU arch [slashdot.org] w

        • completely new SMTP design

          They redesigned the Simple Mail Transfer Protocol? :-)

          • by hairyfeet (841228)

            Congratulations! You found a typo! If you know what SMTP is, you probably likewise know what SMT [wikipedia.org] is, which is of course what I was trying to type.

            It doesn't change the fact that it is gonna be hard to run true benchmarks on a chip whose design is unlike ANYTHING we have ever seen before without code built to take this new technology into consideration; it's like running a 10-year-old single-core benchmark on a modern multicore and thinking its results will be anything useful. Unlike previous designs these

  • Why the hype? (Score:1, Interesting)

    by Anonymous Coward

    I don't really understand the hype behind Bulldozer. Do people really believe that it'll be on-par with Sandy Bridge? The $200 2500K competes well with their own $700+ CPUs. That is absolutely ridiculous performance that I wouldn't have dreamed of 5-10 years ago, for that price.

    Sure, maybe having more cores will mean better multi-threaded performance, but this still isn't taken advantage of. I don't see Intel losing in the single-threaded department anytime soon.

    • Re:Why the hype? (Score:5, Insightful)

      by Mad Merlin (837387) on Friday July 15, 2011 @01:54AM (#36772432) Homepage

      Would you rather AMD go out of business and Intel charge $2000 for that $200 CPU?

      • by Anonymous Coward

        In 1996, Digital Equipment Corporation had an Alpha processor fabricated on a flawed process that went uncorrected until 1999, a chip that otherwise had the potential to play Doom 3 in SOFTWARE RENDERING. Once the corrected process reached that same processor, DEC became the first company ever to reach 1GHz, in 1999, though it could have been done in 1996. The $10k workstation, made in America, still had more potential than AMD's and Intel's parts, but DEC was sold out by Compaq and Hewlett-Packard.

        • In 1996, Digital Equipment Corporation had an Alpha processor fabricated on a flawed process that went uncorrected until 1999, a chip that otherwise had the potential to play Doom 3 in SOFTWARE RENDERING. Once the corrected process reached that same processor, DEC became the first company ever to reach 1GHz, in 1999, though it could have been done in 1996. The $10k workstation, made in America, still had more potential than AMD's and Intel's parts, but DEC was sold out by Compaq and Hewlett-Packard.

          Meanwhile Intel released the Pentium Pro architecture and its successors, which had a price/performance ratio competitive with Alpha, but could run the software people already had. With this, the Alpha was duly relegated to the dustbin of history.

        • by Sloppy (14984)

          The $10k workstation, made in America, still had more potential than AMD and Intel

          No, AMD (and much later, Intel) proved they had more potential: the $1k workstation.

          It's mostly all about bang-for-buck, not just bang. The bang-at-any-price market is small, which is why DEC got bought rather than doing the buying.

      • by Verunks (1000826)

        Would you rather AMD go out of business and Intel charge $2000 for that $200 CPU?

        So let's hope some benevolent guy buys AMD CPUs so I can buy a cheap six-core Sandy Bridge next year.

    • For some things that run well in parallel, their current 4-way 12-core processors, released some time ago, are better than the newly released Sandy Bridge, so OF COURSE it COULD be better. Whether the consumer CPU is as good or not will be worth seeing. If it's nowhere near as good but a lot cheaper, that will also be worth seeing.
      • by hairyfeet (841228) <bassbeast1968@@@gmail...com> on Friday July 15, 2011 @08:48AM (#36774504) Journal

        I think most here are missing the forest for the trees. Unless you are a Crysis-playing ePeen "must win teh benchmarks!" type, AMD doesn't have to win; all they have to be is "good enough", which I would argue they currently are, and these new chips will simply make it better.

        I currently have a Deneb AMD quad as my main home machine and I slam the living hell out of it: video transcoding, using it as a Win 7 DVR, playing games for hours, often WHILE transcoding or recording, and you know what? It works great. And I'm a hardcore case; most folks still only do one task at a time, be it gaming, browsing, whatever. Most importantly, I have a machine that will do all that, and can take a 6-core later on if I wish, with 1.5TB of HDDs, 8GB of RAM, an HD4850, and the whole smash, including Win 7 HP x64, for less than $600 after MIR.

        And THAT is what matters, especially in a dead economy. Folks want a reasonably powerful machine that will last them for years and won't break their wallets, and AMD frankly gives them overkill for cheap. I have built fully loaded triples that crank out 1080p video all day long for less than $450, and quads for less than $500, and thanks to how long AMD sticks with its sockets, if 5 years down the road they decide they want a little more oomph, I can pick them up a cheap OEM chip and just drop it in.

        I have found that for the jobs of the vast majority of folks who walk into my shop, "good enough" was passed with the dual-core chips, but thanks to AMD, for nearly the same money they can go triple or quad, which just gives them more years of service without slowdown. Hell, the prices are so cheap I built my dad a quad for home. Does he need a quad? Oh hell no, he still single-tasks everything like it's 1993! But by going quad I know that no matter how much crap like Messenger he ends up running in the task bar, he'll never lose responsiveness, and this machine will probably last him the rest of his life.

        So unless you are doing super heavy lifting like multiple compiles or hardcore video editing (and I'll admit there are more guys here who do such hardcore CPU pounding than in the general population, by a long shot), all the extra $$$ you spend by going Intel is simply wasted money. I'd say as long as AMD can stay within a third of the performance of the Intel chips, they'll be "good enough" for the vast majority, and having nice low prices simply seals the deal.

        • AMD doesn't have to win; all they have to be is "good enough"

          If you've been paying attention, this strategy is working wonders in the OEM market. Have you looked at the Best Buy flyer in the past couple months? Nearly half the laptops are:

          AMD E-350
          3-4GB RAM
          500GB HD
          15.6" screen

          There'll be one each for Toshiba, Acer, HP, etc, but the stats are identical and the price (~$400) is eye-popping. We're about to see some serious marketshare slide in AMD's favour if we haven't already.
          • by hairyfeet (841228)

            Isn't it just crazy? If you haven't played with one yet, you really should. I picked up the MSI Wind AMD netbook for my dad about a year ago when they first started hitting. This thing came with 3GB of RAM, an HD4350 GPU, a 1.8GHz dual core, and a 300GB HDD; it does full 1080p over HDMI without breaking a sweat, with hardware-accelerated everything. For the hell of it I even loaded the latest Deer Hunter for dad and tried it on his 50-inch widescreen. Smooth as butter. The price? $420 shipped. It's just nuts!

            And of course that one

    • Re: (Score:3, Insightful)

      by c.r.o.c.o (123083)

      I don't really understand the hype behind Bulldozer.

      Do people really believe that it'll be on-par with Sandy Bridge? The $200 2500K competes well with their own $700+ CPUs. That is absolutely ridiculous performance that I wouldn't have dreamed of 5-10 years ago, for that price.

      Sure, maybe having more cores will mean better multi-threaded performance, but this still isn't taken advantage of. I don't see Intel losing in the single-threaded department anytime soon.

      You are still thinking raw CPU power still ma

      • And in the case where raw CPU does matter? You know, like when you're mixing audio or something?
        • Why would mixing audio be CPU bound? Wouldn't mixing audio be latency and IO bound?

          • Not when you're applying effects to the audio. At that point you need CPU power, and fast per-core power too if you're going to keep your latency down: the faster you can do your calculations, the less latency you add to the audio path.
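The latency pressure can be made concrete with buffer arithmetic: each audio buffer adds frames/sample_rate seconds of delay, and the DSP must finish processing a buffer within that window or the output glitches. A quick illustration (48kHz and these buffer sizes are just typical values, not from the thread):

```python
# Time budget per buffer: smaller buffers mean lower latency, but the CPU
# has less time to finish all the effects processing for each buffer.
sample_rate = 48_000  # Hz

for frames in (64, 256, 1024):
    budget_ms = frames / sample_rate * 1000
    print(f"{frames:>4} frames -> {budget_ms:5.2f} ms to process each buffer")
```

This is why per-core speed matters for live effects: halving the buffer halves the latency but also halves the compute window.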
            • I don't see why sound effects can't be done on a GPU. Of course, it'd take a lot of time to rewrite all the VST plugins.

              • by adolf (21054)

                I don't see how sound effects can't be done on a GPU, either. But until that rewrite happens (which it ought to; it makes too much sense), we'll still be CPU-bound for audio tasks.

                (Are we discussing today, tomorrow, or the mysterious future?)

          • by Joce640k (829181)

            Why would mixing audio be CPU bound?

            Audio programs like FL Studio run out of CPU *very* quickly; you can have hundreds of sounds playing, all with effects processing, etc. I tried running it on a netbook once and it had no chance of keeping up...

        • by Issarlk (1429361)
          Then you buy an SB. But for the vast majority of people an AMD will be enough, and cheaper; it's not like Farmville is CPU-intensive.
        • Compiling and encoding/transcoding are the only tasks I can think of that are CPU-bound, and to some extent both are limited by I/O throughput as well. Most graphics cards have hardware decoders for the most common codecs, and most encoding isn't done by consumers. Transcoding usually isn't done by consumers either, but I suppose if you're ripping DVDs or something, you're doing transcoding.

          That said, it takes 15 minutes or so to rip a DVD into a 1GB MKV file on my Core i7 laptop. In other words, we are well beyond th

          • processors from 10 years ago were fast enough to surf the web, chat, and write documents

            Only if you do one thing at a time, with nothing else running in the background, and you never switch between programs mid-task. And that includes never leaving a web page with Flash ads running.

            Otherwise, I think that multi-core becoming cheap around 2006-2007 is the big change. A 1.5GHz single-core CPU is always going to feel sluggish because the CPU is constantly pegging at 100% busy, with no place to put the o
          • by nabsltd (1313397)

            That said, it takes 15 minutes or so to rip a DVD into a 1GB MKV file on my Core i7 laptop. In other words, we are well beyond the point where most consumers will see CPU speed being a limiting factor in everything they want to do.

            Try the same thing with a Blu-Ray where you do any amount of image processing during the transcode. I had one movie where each pass took 10 hours. Admittedly, that's pathological, but it usually takes me about 1.5x real time per pass (so about 6 total hours for a 2 hour movie).

            I'm not counting the time it takes to actually rip the Blu-Ray to the hard drive for the transcoding work...that's only about 30 minutes on most movies.

            • by Bengie (1121981)

              I did a 5-minute Fraps dump while playing a game. Afterward I loaded up VirtualDub and re-encoded it to Xvid 1080p with all the pre- and post-processing options it had and quality set to max: 55% CPU, ~50fps, and my HD was pegged transferring 60MB/sec. The resulting video was flawless.

              That was with the slowest i7 sold, the i7-920. I would love to see an Ivy Bridge with 8 cores, 50% more clock, and AVX crunching that. You'd probably need an SSD to handle the IO. I would assume well over 100fps. Probably closing in on 200

        • by Rockoon (1252108)
          umm, mixing audio? Are you even aware of what you are talking about?

          48kHz 5.1 surround sound (6 channels) consumes only 288,000 samples per second. Even a fucking 386 could process this; in fact, back in the day, 33MHz 386s were playing 16-channel modules (that's software resampling and mixing of 16 independent channels) with enough free time to also do software 3D rendering.

          Now, 288,000 samples per second on any machine between 2GHz and 4GHz yields between 6,700 and 13,400 CPU cycles per sample
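Redoing the arithmetic above (the exact figures land a little above the rounded ones quoted in the comment):

```python
# 5.1 surround at 48kHz: total samples the mixer must touch per second.
samples_per_sec = 48_000 * 6
print(samples_per_sec)  # 288000

# Cycle budget per sample on a 2GHz and a 4GHz machine.
for clock_hz in (2_000_000_000, 4_000_000_000):
    print(clock_hz // samples_per_sec)  # 6944, then 13888
```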
          • by Joce640k (829181)

            Nope, YOU'RE the one who needs a clue.

            Go ask a few musicians if they have enough CPU power for their sequencers (eg. FL studio, Reason, etc).

            • by Jimbookis (517778)
              I know someone who wrote a popular VST plugin and, being a numbers n00b, kept hitting the floating-point denormal slowdown. I suggested he ditch floats and doubles and just use fixed point with careful scaling in each stage. He hadn't a clue what to do there and suffered the gripes from users about performance. That said, his plugin made wicked sounds. I'd say the musos' plugins are somewhat poorly optimised.
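The fixed-point suggestion can be sketched roughly like this (Q15 format and the decay loop are assumptions for illustration; a real plugin would choose its scaling per stage). The point is that a decaying signal in fixed point simply quantizes to zero instead of lingering in the denormal range the way tiny floats do:

```python
# Q15 fixed point: samples are integers scaled by 2**15.
SCALE = 1 << 15

def to_q15(x):
    return int(round(x * SCALE))

def from_q15(q):
    return q / SCALE

def q15_mul(a, b):
    # Multiply then rescale in one stage; no denormal penalty possible,
    # since integer math has no denormals -- small values just reach 0.
    return (a * b) >> 15

half = to_q15(0.5)
x = to_q15(0.9)
for _ in range(3):
    x = q15_mul(x, half)  # a decaying feedback path, e.g. a reverb tail
print(from_q15(x))        # ~0.1125, i.e. 0.9 / 8 minus rounding
```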
              • by Rockoon (1252108)
                Judging by the responses, I think that you are right. The majority of VST plugins are apparently programmed by code donkeys.
          • He might be one of those mysterious folks who mean serious digital-audio-workstation work when they say "mixing audio"... That is, shall we say, slightly more intensive than just shoving the pre-chewed samples around fast enough for glitchless output from the DAC.
          • by Anrego (830717) *

            Just mixing, maybe not. Effects can definitely chew CPU up though.

            I use my desktop (i7 and a tonne of RAM) as a guitar amp; just doing convolution-based amp/cabinet modeling gets CPU usage up pretty high. Add in some reverb (also convolution-based) and, while the box isn't exactly struggling, it definitely notices.

            And that's just one instrument with a limited set of effects.

            • by Jimbookis (517778)
              WTF? Is your effects software written in JavaScript? Convolutions on an i7 should barely wake the CPU up, let alone make it struggle. I think there is some sort of I/O problem instead. Does it use a frickin' spinning while() loop as a wait function for the next sample tick?
              • by Rockoon (1252108)
                Indeed. Apparently these people are using horribly written VST effects or something.

                Even large convolution kernels at 96,000 samples per second (DVD-quality stereo) should indeed barely wake up an i7 CPU. A machine that WILL execute between 2 and 6 billion instructions per second per thread should not *ever* struggle with this workload. Hell, you could do hundreds of large FFTs per second.
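For reference, the kind of math a convolution effect performs can be sketched as FFT-based convolution (numpy is assumed here; real reverb plugins use partitioned overlap-add on the same idea to keep latency bounded):

```python
import numpy as np

def fft_convolve(signal, impulse_response):
    """Convolve via the frequency domain: O(n log n) instead of O(n*m)."""
    n = len(signal) + len(impulse_response) - 1
    nfft = 1 << (n - 1).bit_length()  # next power of two >= n
    spectrum = np.fft.rfft(signal, nfft) * np.fft.rfft(impulse_response, nfft)
    return np.fft.irfft(spectrum, nfft)[:n]

sig = np.array([1.0, 0.0, 0.0, 0.0])  # an impulse as the input signal
ir = np.array([0.5, 0.25])            # a tiny made-up "reverb" tail
print(fft_convolve(sig, ir))          # matches np.convolve(sig, ir)
```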
              • WTF? Is your effects software written in JavaScript? Convolutions on an i7 should barely wake the CPU up, let alone make it struggle. I think there is some sort of I/O problem instead. Does it use a frickin' spinning while() loop as a wait function for the next sample tick?

                My musician friend regularly runs into limits doing audio work on his i7. He runs the premier DAW (digital audio workstation) software and uses only professional VSTs, so amateur code isn't the problem.

                You seriously underestimate the math that needs to be done to do realtime audio production with synthesized and effected instruments.

              • Try looking at some of the stuff done by some of the more intense plugins from Waves. There's a lot of complicated stuff that goes on in plugins these days, because the point (from an engineer/producer's point of view) is not simply to change the sound, but to achieve a specific sound they hear in their head and realize that in actuality. Thus many effects are very dynamic, and more and more offer the ability to model prohibitively expensive real world equipment in software. Not an easy thing to do! When yo
      • If you're gaming, you're not using an IGP/APU, and won't be anytime soon.

      • by bingoUV (1066850)

        You are still thinking raw CPU power still matters. In a world where even web browsers are 3D accelerated, the GPU suddenly becomes extremely important, even more than the CPU

        Intel integrated graphics is enough for Aero, 99% of the flash videos and games in the internet, and more.

        If Bobcat and Llano are any indication, AMD will integrate a GPU that will be at least 2-4 times faster than the GPU in Sandy Bridge while consuming the same amount of power. And if some of the reviews I read are correct, the integrated AMD GPU will be able to work together with the discrete GPU for a 30-70% performance boost.

        There is an extremely limited number of discrete GPUs the APUs can work with. For Llano, it's just 4 discrete GPUs, and those 4 are not even today's fastest.

        Now if someone buys a high end system with a discrete GPU, Bobcat will still be faster, because the integrated GPU will work with the discrete GPU

        The consumer market can be divided into the following segments:

        1. Hardcore gamers: Sandy/Ivy Bridge are better. Gamers will never settle for the integrated graphics of Llano (even though it is 2-4 times better than Intel integrated grap

    • by alci63 (1856480)
      Well... some of us also use computers for, say, databases. With that kind of workload, having several cores is a must. PostgreSQL, for example, uses a one-process-per-active-request model, so concurrency improves with more cores. Just to give one example. So it may not be significant for gaming, but it might be for general computing...
      • Not to mention virtualization:

        With that being so common, even the crankiest "this workload is single-threaded, and it really wants a server to itself" applications are likely to find themselves sharing a multicore processor with a bunch of other such workloads.

        Given that AMD's server offerings have lately been pretty cheap compared to Intel's, and have the advantage of HyperTransport being a good interconnect, along with an on-die memory controller (less of an absolute advantage now that Intel has QPI and
    • And maybe single-core performance is already way more than enough for most users?

      I'm running a laptop with an Intel Core 2 Duo T7200 and the performance for my day-to-day use is absolutely satisfactory. I'd rather add more cores than raw power per core and take the better multitasking/multi-threaded approach.

      Regards

      • by adolf (21054)

        Heh. I'm running a 1.83GHz Intel Pentium-M on my daily-use laptop. Its performance is absolutely satisfactory, as well, and it just turned 7 years old.

        I had the option, recently, of buying a new battery for that machine or buying a new battery for a very similar, just-a-bit-newer Core Duo laptop that I also have (with a far-lesser display), or buying something completely different.

        I elected to buy a battery for the old Pentium-M machine: It still does what I want, still feels quick compared to far-faster

    • by adolf (21054)

      It's no different than the hype behind the Athlon. Or behind Cyrix/IBM's low-cost 6x86MX. Or AMD's then-fast 40MHz 386 clones in both SX and DX variants. Or even the NEC V20.

      Competition is good.

      • I don't know... the Athlon XP was very competitive with Intel, and the later Athlons smoked the Pentium 4s of the time... Core 2 vs. Phenom was a leg up for Intel, and the Core iX line extended Intel's lead... on the IGP front, I think AMD will lead again here, at least for overall performance. Hoping to see more options; the E-350 seems really compelling for its price class.
        • by Rockoon (1252108)
          Those 8-core Bulldozer chips essentially have 4 FPUs and 32 ALUs.

          So against a 4-core Sandy they won't be able to compete on FPU work, simply because Intel invested the space AMD is using for 4 more cores (aka 2 more modules) in kick-ass FPUs. Bulldozer is going to look like crap next to Sandy for FPU work. Period.

          But against those same 4-core Sandys, the Bulldozers will likely completely dominate the integer scene. The Bulldozer will (reportedly) turbo to 4.2GHz when only using 4 cores. Even t
          • by dwillmore (673044)

            According to http://techreport.com/articles.x/19514 [techreport.com], the peak FLOP rate should be the same between a BD 'module' and an SNB 'core' if the BD is using FMA4/AVX and the SNB is using plain AVX.

            To get maximum performance, you're going to have to code in assembly or use a library that's been coded that way. I expect programs like Prime95 will be first adopters of this.

            Supposedly, Haswell (the full tick after IVB) will have FMA3/AVX which should double the FLOP rate and surpass BD, but that's some time out, so we'll h
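The claimed per-module/per-core parity can be sanity-checked with back-of-envelope arithmetic. The unit counts below are assumptions based on the linked article's description (two 128-bit FMACs per BD module; one 256-bit AVX add plus one 256-bit AVX multiply per SNB core), not measured figures:

```python
# Double-precision FLOPs per cycle, counting an FMA as 2 ops.
bd_module = 2 * 2 * 2  # two 128-bit FMACs x 2 doubles each x 2 ops per FMA
snb_core = 4 + 4       # one 4-wide AVX add + one 4-wide AVX mul per cycle
print(bd_module, snb_core)  # 8 8 -- the same peak, as the article claims
```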

      • by Adayse (1983650)

        There isn't much hype; that is a difference. Attention has shifted to phones. Desktop PCs and the parts that go in them are worthless because they confer almost no social status on their owners, unless the owner is a teenage boy.

        Before, the battle was about performance; now it's heat, price, and performance, because every household has a couple of computers per person and the cost of a CPU is much lower.

        • by adolf (21054)

          I don't know how your household works, but in mine, my wife and I compete for desktop PC geek points amongst ourselves and our friends, and neither of us is a teenage boy.

          And...phones? Seriously? If I'm interested in performance for whatever task I'm doing, I probably also am interested in a display larger than my hand (along with a real keyboard, and a real mouse, and real accessories).

          I don't game on my phone much, because when I'm out and about, I'm simply not bored much. Whether working or driving or

    • Which $700+ CPUs?
      Newegg.com offers the AMD Phenom II X6 1100T Black Edition for $189.99, and that model is currently AMD's top desktop CPU.

      It eats a bit too much energy for my taste (125W TDP), but in price/performance it is a pretty nice CPU. If Bulldozer can improve on the power consumption, my next CPU upgrade will definitely be a Bulldozer.

      • by 0123456 (636235)

        Newegg.com offers the AMD Phenom II X6 1100T Black Edition for $189.99 right now, and that model is AMD's top desktop CPU right now.

        It eats a bit too much energy for my taste (125W TDP), but in price/performance it is a pretty nice CPU.

        Except an i5 will stomp on it, cost less, and use only about 50W of power while doing so. I believe even a dual-core i3 typically beats the non-Black Edition hexacore Phenoms for not much more than $100.

        • by Targon (17348)

          Quad-core makes a huge difference when you are busy and doing many things at the same time. Yes, the i5 has a better core design, so it is faster in most tasks, and that is why AMD has been hurting, except at the lower end of the market. Intel graphics are horrible, so for the average consumer who will never add a video card to their machine, a $500 AMD-based machine will tend to be a bit better than a $500 Intel machine for "total experience". As you go up from there, AMD starts looking worse since

          • by Bengie (1121981)

            My brother bought a 45nm i7-920 and OC'd it to 4.2GHz. He loaded Prime95 and ran an 8-thread stress test. His motherboard had a tool to tell you how many joules your CPU is using; it peaked at about 130 joules, and that was heavily OC'd and over-volted.

            Now, that's not making use of all the SIMD units, so I'm sure the absolute peak would be higher, but Prime95 does give a good stress test by loading the FPU.

            The i7-920 officially has a 125-watt TDP. I'm sure Intel only lists peak figures also.
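One unit note on the figure above: a "joules" readout sampled once per second is numerically just power in watts, since 1 W = 1 J/s. As arithmetic (the one-second sampling window is an assumption about how the motherboard tool reports):

```python
# Energy consumed over a sampling window vs. average power in that window.
energy_j = 130.0  # reported "joules", presumably over a 1-second window
window_s = 1.0
power_w = energy_j / window_s
print(power_w)  # 130.0 -- in line with a 125 W TDP part pushed past spec
```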

    • by Sloppy (14984)

      Do people really believe that it'll be on-par with Sandy Bridge?

      We don't know! It might. It might not. Everyone hopes it will.

      Whether it wins or loses against Sandy Bridge, one thing's for sure: it's interesting. It sounds like you're still running MS-DOS, so you never need any parallelism, but for the rest of us, AMD has shown the future of hyperthreading and multi-core. It looks like the question of "should we split this piece up so that it can do n things at once?" may be on the table for every pa

      • by Targon (17348)

        You mean multi-threading. HyperThreading is an Intel term for running two threads at a time on one core and "tricking" the OS into thinking there are twice as many cores as there really are. It does help performance in heavily threaded applications, but if you compare eight real cores to four cores that look like eight, and those eight real cores are competitive per-clock with Intel's, there's a big advantage.

        $320 for an 8-core processor that I think starts at 3.8GHz(not

        • by Sloppy (14984)

          I stand corrected on the Intel trademark abuse.

          if you compare eight real cores to four cores that look like eight, and you improve performance so those eight real cores are competitive per-clock with Intel cores, there's a big advantage.

          Competitive per-dollar. That's why HyperThreading wasn't necessarily a bad idea: of course it wasn't as fast as multicore, but it was a lot easier/cheaper. (Cheaper for Intel, that is; I'm not talking about their chips' prices, especially at the time they introduced the P4.)

          There's a tra

        • by Bengie (1121981)

          HT adds about 10% more transistors; adding twice as many cores uses 100% more transistors. Given that an HT-style core design is typically only slightly slower per thread or just as fast, going back to the old one-core-one-thread approach is just not competitive.

          You have to realize that modern superscalar out-of-order cores have lots of execution units that are usually idle. Adding an extra 10-20% more transistors to make use of those idle units just seems like a good idea. Under many workloads, you can almost double your performance.

          AM
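The trade-off above, as rough arithmetic (the throughput gains are illustrative assumptions, not measurements; SMT gains vary widely by workload):

```python
# Throughput gained per unit of extra transistor budget.
smt_extra_area, smt_gain = 0.10, 0.25    # ~10% area for ~25% more throughput (assumed)
core_extra_area, core_gain = 1.00, 0.90  # 2nd core: 100% area, ~90% more throughput (assumed)

print(round(smt_gain / smt_extra_area, 2))    # 2.5  throughput per unit of area
print(round(core_gain / core_extra_area, 2))  # 0.9
```

Even with these conservative SMT numbers, the throughput-per-transistor ratio strongly favors SMT, which is the comment's point: a second core buys more absolute speed, but SMT is far cheaper per unit of gain.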

    • The issue of performance is becoming increasingly irrelevant by the day, with the exception of a few small niches. For the past half-dozen years, any low- to mid-range desktop processor has been quite capable of providing more than enough processing power for any computing need a regular person may have. Browsing video clips online, browsing social network sites, handling office applications and communications are well taken care of by any proce

  • Are you showing that it's a fraud, like the article cited, or just going for clicks, like your own headline? A bit of both, eh?
  • I think stories (i.e., submitters) should have karma. I want to downvote this tripe so badly...

    • by Jeng (926980)

      Although I don't agree with you that this article is tripe, I do agree that we should be able to give Slashdot some direct feedback about the quality of the articles.

  • by Shag (3737)

    But don't most modern CPUs use hexadecimal?

    • Re: (Score:2, Funny)

      by Anonymous Coward

      Yeah, but that is base 16. So it must therefore allocate two digits per core: the first core handles all the 1s and 2s, the second core all the 3s and 4s, etc., and the eighth core all the Es and Fs. It makes perfect sense, really.

    • by Z00L00K (682162)

      CPUs are binary; we use octal or hex to represent the contents of binary structures because it's more convenient.

  • by rossdee (243626)

    Bulldozer is not exactly a synonym for high speed, low energy consumption and compact size.

  • This is why companies work hard to control how and when information is shared with the public.

    Sometimes you just can't help yourself.

    From What's a Metaphor For? [chronicle.com]:

    New research in the social and cognitive sciences makes it increasingly plain that metaphorical thinking influences our attitudes, beliefs, and actions in surprising, hidden, and often oddball ways.

  • As the Mac mini needs better than the i3/i5 onboard video, and for Apple to go from Nvidia onboard video to Intel is a sidegrade at best.

  • Nobody cares if Farmville or Facebook will only utilize one core. When folks have the money, they are going to buy the fastest CPU available. Only the budget-conscious person is going to ask whether they will utilize all those cores or even need that many megahertz (and possibly a higher TDP). When they are at their local Best Buy, the salesperson is going to pitch them the latest quad-core or octal-core machine "because your college-bound daughter needs it for running Microsoft Word." Going forward,
