AMD Showcases Quad-Core Barcelona CPU
Gr8Apes writes "AMD has showcased its new 65nm Barcelona quad-core CPU. It is labeled a quad-core Opteron but, according to InfoWorld's Tom Yager, is really a redefinition of x86. Each core has a new vector math processing unit (SSE128), separate integer and floating point schedulers, and new nested paging tables (to vastly improve hardware virtualization). According to AMD, the new vector math units alone should improve floating point operations by 80%. Some analysts are skeptical and waiting for benchmarks. Will AMD dethrone Intel again? Only time will tell."
Bit Slice (Score:2)
But SSE is already 128 bits! (Score:2)
Re:But SSE is already 128 bits! (Score:5, Informative)
This was pretty much the reason why most people only bothered with MMX optimizations in their applications.
Re: (Score:2, Interesting)
SSE and later operations have, up until now, been executed 64 bits at a time within the processor
Hmm...do you mean specifically on AMD's hardware? That stopped being true for Intel starting with the Core, which has 1-cycle latency on SSE instructions.
Re:But SSE is already 128 bits! (Score:5, Informative)
Core2 has single-cycle throughput on most SSE instructions, not single-cycle latency. Most of these instructions still take 3-5 cycles to generate results, which is similar to the Pentium M, but now a vector of results finishes every cycle, instead of every two or four cycles.
An important consequence of this is that if your instructions are poorly scheduled by the compiler (or assembly programmer) and the processor spends too much time waiting for results of previous operations, the advantages of single-cycle throughput mostly disappear.
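The scheduling point can be illustrated in plain scalar C (no actual SSE here; the function names are made up for illustration): a single accumulator forms a serial dependency chain that is latency-bound, while splitting the work across independent accumulators lets a core with single-cycle throughput keep starting new operations every cycle.

```c
/* Illustration in scalar C: summing with one accumulator creates a serial
 * dependency chain -- each add must wait for the previous result, so the
 * loop runs at the latency of the add. Splitting the sum across four
 * independent accumulators gives the scheduler independent work, so a unit
 * with single-cycle throughput can start a new add every cycle. */
float sum_chained(const float *a, int n) {
    float s = 0.0f;
    for (int i = 0; i < n; i++)
        s += a[i];                  /* each iteration depends on the last */
    return s;
}

float sum_unrolled(const float *a, int n) {
    float s0 = 0, s1 = 0, s2 = 0, s3 = 0;   /* 4 independent chains */
    int i;
    for (i = 0; i + 3 < n; i += 4) {
        s0 += a[i];     s1 += a[i + 1];
        s2 += a[i + 2]; s3 += a[i + 3];
    }
    for (; i < n; i++)
        s0 += a[i];                 /* remainder */
    return (s0 + s1) + (s2 + s3);
}
```

This is exactly the kind of transformation a good compiler (or assembly programmer) has to make before single-cycle throughput pays off.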
Re:But SSE is already 128 bits! (Score:5, Informative)
Core2 has single-cycle throughput on most SSE instructions, not single-cycle latency
Well, certainly you won't be able to get a square root through in one clock cycle, but many/most of the simple integer arithmetic, bitwise, and MOV SSE instructions on the Core 2 really do have single cycle latency. source [agner.org]. None do on the AMD64, which supports the theory that SSE128 means more "new for us" than "new for everyone." Not to put AMD down - many of the other features sound promising (but the article is long on breathlessness and light on details, alas).
Comment removed (Score:5, Informative)
Re: (Score:3, Interesting)
Please explain this. Do I understand correctly that you think some SSE instructions are 16 bytes? Issuing is one thing, and latency another. In most cases I've found AMD/Intel can issue 1 mulps/shufps/adds per cycle, the *ss instructions at 2 per (AMD sometimes 3 per cycle). If you mean that only the first 64-bits, 2 components, are
Re: (Score:2)
Dethrone? No. (Score:2, Insightful)
Re: (Score:2, Interesting)
so they'll have to do some ugly procedures to survive it in the long run. A couple of identical blows in the meantime could leave them sterile, so if the current setups begin to die out and Intel has no more babies waiting, they will not be dethroned, but will be getting an honourable mention in the history books.
GPU not CPU - Re:Dethrone? No. (Score:3, Insightful)
Is dethroning Intel the point? (Score:5, Insightful)
As long as AMD and Intel continue to chase each other in the x86 market, high end chips become low end in the span of six months. Just keep buying 6 months behind the press releases and you get great processors for next to nothing.
Huh? (Score:2)
Am I missing something or am I completely wrong?
Well.... (Score:2, Interesting)
The fact is, absolutely none. It has been shown that only the destruction of information, via AND and similar instructions, creates entropy (heat). As long as you use only 3 types of gates (pass-through, NOT, XOR), you can create a heat-free CPU. Provided we do want to check for bit errors, we could maintain very low heat via ECC-like checking. Estimates put that at 10^8 lower than present.
We could keep 98% of our efficiency of current day
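For scale, the minimum heat the parent is talking about is set by Landauer's principle: erasing one bit costs at least kT ln 2. A quick sketch (the 300 K figure below is just an assumed room temperature):

```c
#include <math.h>

/* Landauer limit: minimum energy dissipated when one bit of information is
 * destroyed (as by an AND gate), E = k_B * T * ln 2. */
double landauer_joules(double temperature_kelvin) {
    const double k_B = 1.380649e-23;   /* Boltzmann constant, J/K */
    return k_B * temperature_kelvin * log(2.0);
}
```

At 300 K this comes to roughly 2.9e-21 J per erased bit, which is why avoiding erasure entirely (reversible logic) is so attractive in principle.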
Re: (Score:2)
Re: (Score:3, Informative)
Re: (Score:2)
Re: (Score:2)
Re:Well.... (Score:4, Interesting)
Of course. At first it sounds too good, but here you go.
Rolf Landauer showed in 1961 that reversible logic operations could be performed without using energy or giving off heat. The same could not be said for irreversible logic operations.
"Irreversibility and Heat Generation in the Computing Process", IBM Journal of Research and Development 5 (1961): 183-91, IBM PDF [ibm.com]
___
In 1973, Charles Bennett proved that any computation could be performed using only reversible operations.
Charles H. Bennett, "Logical Reversibility of Computation", IBM Journal of Research and Development 17 (1973): 525-32, IBM PDF [ibm.com]
___
Later on, Fredkin and Toffoli presented a review of the ideas of reversible computing. The essential idea is that you can save all the intermediate states of an algorithm on the way to the answer, copy out the result, and then reverse the process, so that no net energy is used and no heat is generated. Fredkin also indicates that if we switched from irreversible to reversible computing, we would expect to lose no more than 1% efficiency.
International Journal of Theoretical Physics 21 (1982):219-53 PDF [digitalphilosophy.org]
___
And as an unsubstantiated claim, I remember hearing that due to heat/radiation sources, volatile memory picks up errors at a rate of about 1 bit per billion over somewhere between 1 minute and 1 day (I forget the exact figure). Correcting this would only require the entropy of deleting each incorrect bit; in other words, roughly a 10^8 reduction in heat. But trust the stuff above.
(Many of these ideas were taken from "The Singularity is Near" by Ray Kurzweil from page 130)
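A Toffoli gate from the Fredkin/Toffoli work can be sketched in a few lines of C; because applying it twice restores the input, no information (and, in principle, no heat) is lost, unlike a plain AND:

```c
typedef struct { int a, b, c; } Bits3;

/* Toffoli (controlled-controlled-NOT): flips c when both a and b are set.
 * Starting with c = 0 it computes AND (c ends up as a & b), yet it stays
 * reversible: applying the gate twice is the identity, so the inputs are
 * always recoverable. */
Bits3 toffoli(Bits3 in) {
    Bits3 out = in;
    out.c = in.c ^ (in.a & in.b);
    return out;
}
```

A plain AND, by contrast, maps the inputs (0,0), (0,1) and (1,0) all to 0, so the inputs cannot be recovered from the output; that erasure is what Landauer charges heat for.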
Re:Well.... (Score:5, Insightful)
Or did you actually think that those "stupid" CPU designers, battling with heat dissipation all these years, never thought of, oh... simply replacing the NAND gates with reversible Fredkin and Toffoli gates, and 'poof', magically all the heat issues are gone, processors will run @ hundreds of GHz, the world's electrical power consumption will go down, and the geeks won't be able to boast about their huge-ass heat sinks anymore...
Re: (Score:2)
Yeah, I think there was an article back in 1993 or 1994 in Byte about such processors. It seems that in practice, the theory doesn't add up.
Re: (Score:2)
Mr. Coward, in case you have not read the article, the conversation actually is about AMD's new processor, which is a real processor. That processor will generate some amount of heat ... real heat, not theoretical heat.
The conversation might have started out as that, but this thread has gone somewhere else. This is a natural part of any discussion. It does not make it off
Re: (Score:2)
As far as I remember reading, outputting answers adds a bit of heat output to the equation, but doesn't prevent you from using reversible circuits.
Re: (Score:2)
It's not enough to merely limit yourself to NOT, XOR and pass-through, as traditional implementations still destroy information in a way. Traditional gates are made of switches: When you switch the input to an inverter (NOT gate) off, the output switches on by closing a switch to Vdd and opening the switch to ground. Some current flows from Vdd to the inputs of whatever gates the inverter is driving. When you switch the input to that inverter on, the switch to Vdd opens and the switch to ground closes.
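The current flowing on every transition that the parent describes is what the usual dynamic-power estimate P ≈ α·C·V²·f captures (activity factor α, switched capacitance C, supply voltage V, clock f). A sketch with made-up, purely illustrative numbers:

```c
/* Dynamic (switching) power of CMOS logic: every 0->1 transition charges the
 * load capacitance from Vdd, and the following 1->0 transition dumps that
 * charge to ground. P = alpha * C * V^2 * f. */
double dynamic_power_watts(double activity, double cap_farads,
                           double vdd_volts, double freq_hz) {
    return activity * cap_farads * vdd_volts * vdd_volts * freq_hz;
}
```

For example, 1 nF of aggregate switched capacitance at 1.2 V and 2 GHz with 10% activity gives about 0.29 W; real chips switch far more capacitance than that, hence the heat sinks.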
AMD64 is very fast (Score:5, Interesting)
Re: (Score:3, Informative)
And pointing out that it isn't fair to compare because a Core2 Duo already executes the full SSE instruction in one pass vs. the 2 clocks for a current AMD64 is the same as saying it's not fair to compare the on-die memory controller on AMD's vs. Intel's FSB. But people didn't seem to care when the numbers went in AMD's favor.
I'd really be interested in seeing your numbers, your programs, and what com
Re:AMD64 is very fast (Score:5, Informative)
OK. I can't give you the code but it is my own implementation of a pretty standard bioinformatics sequence comparison program which doesn't use SSE/MMX type instructions and is single threaded. On all platforms it was compiled using gcc with -O3 optimisation. I have tried adding other optimisations but it doesn't really make much difference to these numbers (no more than a couple of percent at best).
AMD Opteron 2.0Ghz (HP wx9300) - 205 Million calculations per second
Intel Core 2 Duo 2.66Ghz (Mac Pro) - 146 Million
Intel Core Duo 2.0 Ghz (MacBook Pro) - 94 Million
IBM G5 PPC 2.3 Ghz (Apple Xserve) - 81 Million
Motorola G4 PPC 1.42 Ghz (Mac mini) - 72 Million
Intel P4 2.0 Ghz (Dell desktop) - 61 Million
Intel PIII 1.0 Ghz (Toshiba laptop) - 45 Million
Interesting things about these numbers. The Core Duo is clearly a close relative of the PIII, since its performance at 2Ghz is roughly twice that of the PIII at 1Ghz. The P4 at 2Ghz is really very poor indeed, which isn't a huge surprise as it was never very efficient. The G4 PPC puts in a reasonable result, easily beating the much higher clocked P4 (what, the Mac people were right? Shock!), although I have to say that the performance of the G5 is disappointing. The Core 2 Duo isn't a bad performer, although it does have the highest clock speed of any processor in this set, but it is seriously beaten by the Opteron. From these numbers, a Core 2 Duo at 2Ghz would be about half as quick as an Opteron at the same speed.
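For context, the inner loop of a typical sequence comparison program of this kind is a tight dynamic-programming recurrence that lives almost entirely in registers and cache. The poster's code isn't public, so this is a generic Smith-Waterman-style local alignment sketch with made-up scoring constants, not the actual benchmark:

```c
static int max2(int x, int y) { return x > y ? x : y; }

/* Generic Smith-Waterman-style local alignment score (linear gap penalty).
 * Scoring constants are illustrative. Assumes lb < 128 for this sketch. */
int sw_score(const char *a, int la, const char *b, int lb) {
    enum { MATCH = 2, MISMATCH = -1, GAP = -1 };
    int prev[128] = {0}, curr[128] = {0};
    int best = 0;
    for (int i = 1; i <= la; i++) {
        for (int j = 1; j <= lb; j++) {
            /* the three classic moves: substitute, delete, insert */
            int sub = prev[j - 1] + (a[i - 1] == b[j - 1] ? MATCH : MISMATCH);
            int del = prev[j] + GAP;
            int ins = curr[j - 1] + GAP;
            int h = max2(0, max2(sub, max2(del, ins)));
            curr[j] = h;
            if (h > best)
                best = h;
        }
        for (int j = 0; j <= lb; j++) { prev[j] = curr[j]; curr[j] = 0; }
    }
    return best;
}
```

Nothing here touches main memory for small inputs, which matches the poster's later remark that the program "hardly has to touch main memory."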
Show us your source code (Score:2, Insightful)
Re: (Score:2)
I can't because the program is really large and it doesn't entirely belong to me (you know, work for people, they own your code).
You're right, I could just be making these numbers up, and if you prefer to believe that then there is nothing I can do to change your mind. All I can say is that this is my own (admittedly anecdotal) experience.
Re: (Score:2)
Re: (Score:3)
Certainly a possibility. In my defense I would like to point out that all benchmarks are open to question. I know my own code, I know what it does and it doesn't do much but it does a lot of it so the performance figures are what they are. I originally wrote this code on an SGI, ported it to Linux on a 486, SPARC, Alpha, PPC and so on. Its old and simple but does real work. While I could make it faster using SSE an
Re: (Score:2)
Re: (Score:2)
Nope, straight 32 bit. If it had been 64 bit then the Core 2 Duo would also have seen a more significant boost versus its 32 bit predecessor not to mention the G5 should have been better than the G4 which it wasn't.
Re: (Score:2)
Re: (Score:2)
Also which of these chips are single, and which are dual, and which are quad cores?
What's the point of dual and quad core, anyway? Anyone figured out why it's better than just having 2/4 CPUs?
Re: (Score:2)
Pretty much the same as the Opteron in this case. The program doesn't really hammer cache or main memory, just the CPU. Work out your clock speed as a percentage of 2Ghz and do the sums and that should be the number.
The Opteron, Core 2 Duo and Core Duo are all dual core chips in this test, the others single core although the G5 was a dual processor system. Since the program is single th
Re: (Score:2)
It's better than just having 2/4 CPUs because you can now get dual CPU functionality on consumer-level mainboards. You get SMP without having to shell out for workstation or server level hardware. Of course, if you do have workstation or server boards with 2 or 4 CPU sockets on it, then you can put dual or quad core CPUs in those sockets as well. So instead of having 2-way SMP with 2 sockets yo
Re:AMD64 is very fast (Score:4, Insightful)
When you say you've tried "adding other optimizations," are you referring only to other GCC optimization flags? If your program's algorithms have any moderate degree of parallelism and you haven't tried vectorization either by compiler (GCC and ICC can both do this) or by hand, the benchmark you've done is not unlike a race where no one is allowed to shift out of first gear. Can you go into any more specifics about how this program does sequence comparisons?
Also, the disappointing numbers from the G5 may be partially explained by the fact that its integer unit has higher latency than the other desktop processors in that list. The G5 isn't exactly known for blistering integer performance, anyway.
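As a concrete example of the kind of loop auto-vectorizers handle well (illustrative only; whether GCC actually vectorizes it depends on the version and on flags like -O3 or -ftree-vectorize):

```c
/* A saxpy-style loop: no loop-carried dependency, unit stride, and
 * restrict-qualified pointers so the compiler knows x and y don't alias.
 * GCC at -O3 (which enables -ftree-vectorize) will typically turn this
 * into SSE code on x86 targets. */
void saxpy(float *restrict y, const float *restrict x, float a, int n) {
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```

If the benchmark's inner loop has this shape, running it scalar-only really is "a race where no one shifts out of first gear."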
Re: (Score:2)
perfectly fine for a CPU benchmark (Score:2)
The only important thing is that the compiler choices and options are fair. Using gcc on the Opteron and icc on the Core Duo would not be fair. Using gcc everywhere, with the same options, is completely fair.
One can also define "fair" as "all systems tweaked to the max", but this is rather difficult to do right. (see also: OS benchmarks, where the benchmarker knows all the ways to tweak the OS he
Re: (Score:3, Interesting)
When the Core2 was released, benchmarks made it clear that Intel did not optimize for 64-bit performance. They have the architecture, but they pushed th
32 vs 64 (Score:2)
If you took code that was written for 32 bit operations and
Re: (Score:2)
Re: (Score:2)
Where did you get a Mac Pro with a Core 2 Duo?
Should be LGA-771 2-socket Xeon Woodcrest, and not fit a LGA-775 C2D, right?
Re: (Score:2)
http://www.vips.ecs.soton.ac.uk/index.php?title=Benchmarks [soton.ac.uk]
Again, plain C code, no SSE/whatever. It is threaded, which makes it slightly different. The source is there too.
Results:
Opteron 850, 2.4 GHz, 4 CPUs, 4.5s
Opteron 254, 2.7 GHz, 2 CPUs, 6.9s
P4 Xeon (64 bit), 3.6 GHz, 2 CPUs (4 threads), 7s
Core Duo, 2.0 GHz, 2 CPUs, 18.1s
P4 Xeon (32 bit), 3.0 GHz, 2 CPUs (4 threads), 19.7s
P4 (Dell desktop), 2.4 GHz, 1 CPU, 36.6s
PM (HP laptop), 1.8 GHz, 1 CPU, 58.5s
So I agree: an Opteron
Re: (Score:2)
Please people, get a grip. This guy's little application does tons of random memory reads. This is the one area where the Opteron still kicks ass because it has an IMC. The number of applications where this is useful is fairly small, and it's been known for a long time.
Re: (Score:3, Informative)
If only that was the case but actually it is very linear. The application can hold the whole of its memory requirements in cache these days so it hardly has to touch main memory and it was designed to do all the inner loop code using only registers. Heck, I doubled the size of the inner loop just to avoid a single register copy because it made a significant performance increase.
The reason I like this code is that it shows how many operations y
Java is slow on x86 (Score:2)
Java has strictly-defined floating-point math that is incompatible with the x86. An x86 chip must save floating-point values out to memory to force the exponent to the right size.
JIT/emulation systems in general, including Java, do better with more registers. The G4 has about 6x as many once you exclude registers that are unavailable. (about 5 for x86, but at least 30 for the G4)
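The rounding problem is easy to show one precision level down in C (illustrative, not Java): rounding after every operation, as Java's strict semantics mandate, can give a different answer than keeping a wide intermediate, which is what the x87's 80-bit registers do unless each result is spilled to memory.

```c
/* Java's strict FP requires every intermediate to be rounded to its declared
 * width; the x87 keeps wider intermediates, so a JIT must store each result
 * to memory to force the narrower rounding. The same effect, shown with
 * float vs double instead of double vs 80-bit: */
float strict_sum(float a, float b, float c) {
    float t = a + b;            /* round to 32-bit after every op */
    return t + c;
}

float wide_sum(float a, float b, float c) {
    double t = (double)a + b;   /* keep a wider intermediate */
    return (float)(t + c);      /* round only once, at the end */
}
```

With a = 2^24 and b = c = 0.75, the strict version loses both additions to rounding while the wide version does not, so the two functions return different answers from identical inputs.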
Re: (Score:2)
What chipset(s) and memory were used in the Mac's? Were they on-par with a w
Re: (Score:2)
Re:AMD64 is very fast (Score:4, Informative)
Pretty sure it is a Tualatin since it is a 1Ghz PIII Mobile which I bought in early 2002 (http://www.theregister.co.uk/2001/01/31/chipzill
Given that it is a Tualatin, the performance of the Core Duo at 2Ghz looks about right. The Core 2 Duo gets about 10% better performance clock for clock from all the blurb I have read, except when it comes to SSE, where it is about twice as fast. So the performance figure of 146 million also looks pretty much on the mark: a 2Ghz Core 2 Duo should be able to manage about 110 million if you scale the figure for clock speed, and that is (surprise) ~10% quicker than the Core Duo at 2Ghz (94 million). So the basic integer performance of the Core 2 Duo is better than the Core Duo but doesn't compare with the 205 million the 2.0Ghz Opteron manages.
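The clock-for-clock scaling used above is just linear extrapolation, which is only a reasonable assumption for a workload that is purely core-bound:

```c
/* Linear clock scaling: assumes performance is proportional to core clock,
 * which roughly holds for cache-resident, CPU-bound code like the benchmark
 * being discussed (it fails badly for memory-bound workloads). */
double scaled_perf(double measured, double measured_ghz, double target_ghz) {
    return measured * (target_ghz / measured_ghz);
}
```

Scaling the Core 2 Duo's 146 million at 2.66Ghz down to 2.0Ghz gives about 110 million, matching the estimate above.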
Re: (Score:2)
that's what we all run though, and it can be OK (Score:3, Interesting)
I'm using Linux, with single-threaded apps, but so what? I run lots of things at once:
X, window manager, xterm, editor -- that is 4, plus the kernel
X, xterm, tar, gzip -- that is 4, plus the kernel
X, xterm, make, bash, cc1, cc1, cc1, gas, gas, ld... -- that's a lot of things!
vector units mostly sit idle (Score:2)
In the real world, vector units aren't good for much at all. You can do radar processing with them, but that isn't exactly a desktop app. Linux can use them for software RAID.
Re: (Score:2)
GotoBLAS uses SSE so doesn't count. It has already been acknowledged that the SSE implementation of the Core 2 Duo is very good. The new AMD chips may address this but we won't know until we see the benchmarks. For non-SSE, the Core 2 Duo is a little better than the Core Duo, which was similar to the PIII/PII/PentiumPro clock for clock. The current Opteron is much quicker clock for clock for no
Re: (Score:2)
Re: (Score:2)
how do they fit a fourwheeler in the chip? (Score:2, Funny)
SSE128 means... (Score:2)
I'm not kidding. In the SSE I'm familiar with, one of the input registers is always an output register, which means its contents are destroyed. Another flaw is that there aren't enough registers... SSE uses 8, where even 32 are commonly not enough when latency is longish (especially with SoA-style programming, where pragmatically a single vec3 occupies 3 128-bit registers).
... or Madd. You know, multiply-add. Does it have that?
Re: (Score:2)
No. It means 128 bit SSE ops can be done in a single cycle instead of two (64-bit chunks).
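The halving can be sketched in portable C (illustrative pseudo-hardware, not real micro-op encodings): a core with 64-bit SSE datapaths cracks one 128-bit ADDPS into two half-width micro-ops, so it needs two cycles where a full 128-bit unit needs one.

```c
typedef struct { float v[4]; } vec128;

/* One 64-bit half: operates on two packed floats. */
static void add64(const float *a, const float *b, float *out) {
    out[0] = a[0] + b[0];
    out[1] = a[1] + b[1];
}

/* How a 64-bit datapath executes a 128-bit ADDPS: two half-width micro-ops
 * (two cycles). An "SSE128"-style full-width unit does the same work in one
 * pass. The result is identical either way; only the cycle count differs. */
vec128 addps_cracked(vec128 a, vec128 b) {
    vec128 r;
    add64(&a.v[0], &b.v[0], &r.v[0]);   /* low half,  cycle 1 */
    add64(&a.v[2], &b.v[2], &r.v[2]);   /* high half, cycle 2 */
    return r;
}
```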
In SSE I'm familiar with, one of the input registers is always an output register, which means its contents are destroyed
How is this different from regular x86 (non-SSE) instructions? They have two operands where one is a source and destination.
Another flaw is that there aren't enough registers... SSE uses 8
AMD64 specifies 16 SSE (XMM) regi
Re: (Score:2)
At least with the general purpose registers, AMD wanted to go to 32, but couldn't do it without changing the instruction set. I'd assume the same thing applies to the SSE registers.
Re: (Score:2)
How so? Unless I'm missing something here, I think the only cost is in the size of the register file and rename register set, but nothing ISA-related.
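For reference, the encoding cost being discussed: classic x86 instruction encodings carry two 3-bit register fields in the ModRM byte (8 register names); AMD64's REX prefix donates one extra bit per field (16 names); 32 registers would need yet another bit, i.e. a new encoding. A trivial sketch:

```c
/* Register names addressable by an n-bit field in the instruction encoding:
 * 3 bits -> 8 (classic x86 and SSE), 4 bits (with AMD64's REX prefix) -> 16.
 * Going to 32 would require a 5th bit, hence an encoding change. */
unsigned regs_addressable(unsigned field_bits) {
    return 1u << field_bits;
}
```

So the parent is right that the visible register count is an ISA-encoding limit, while the physical rename register file behind it can be (and is) much larger.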
Great (Score:3, Funny)
Barcelona vs Itanium in single and double float ? (Score:2, Interesting)
SUNDAY SUNDAY SUNDAY (Score:2)
Junk article, full of inaccuracies. (Score:5, Informative)
http://www.intel.com/technology/magazine/computin
I'm not meaning to detract from AMD here - the fact that they have still not had to make any radical changes to the Opteron micro-architecture is a testament to the quality of the original design. They are slightly ahead of the game on virtualization - they're going to beat Intel to nested page tables - but other than that this chip is playing catch-up. Overall this is going to be a very nice piece of kit to work with. But nothing radical and new here.
G.
Re: (Score:3, Informative)
Actually, if you go waaaay back to the Socket 7 days you could have L3 cache as well. The AMD K6 and K6-2 CPUs only had on-die L1, and the L2 cache was on the mainboard. But the K6-3 CPU had 256KB or 512KB of on-die L2 and was compatible with the same mainboards. So when you put that K6-3 in a Socket 7 mainboard, the mainboard's cache actually functioned as L3. Sure it wasn't on-chip, but L3 cache
Re: (Score:3, Informative)
Now, as far as some claims, in detangled order:
Re: (Score:2)
Re: (Score:2)
True. Additionally, the article implies this is something new. All K8 chips (=Opterons, Athlon64) however have always had separate schedulers for float and int instructions (in contrast to the Intel Core2 chips, so AMD is touting that as an advantage - it's more of a design choice than really a simple "better" or "worse" for either solution, probably). There is a reason the codename of Barcelona is K8L! As you mentioned, it'
Re: (Score:2)
From an article [theinquirer.net] in The Inquirer:
"WE'VE BEEN HEARING the "K8L" codename for ages now, but we can say now, straight from the horse's mouth, K8L was never a codename for AMD's upcoming generation of chips."
If we are to believe the article, K8L was apparently the code name for the Turion64 where the L stands for Low-power. K9 was the X2 processors, so that would make the upcoming Barcelon
Barcelona??? (Score:2, Funny)
Paging Tables (Score:5, Informative)
Context-switching has long been the weakest design point for x86 in "PCs", especially servers. x86 arch is rooted in single-user, single-threaded, single-context apps. The in-core registers that CPU operations execute directly against have to be swapped out for each context switch. In *nix, that means every time a different process gets a timeslice, it's got to execute two slow copies between registers and at best cache RAM, at worst offchip RAM (over some offchip bus). If the register count is larger than the bus width (even onchip), that's another multiple on that slow cycle. That context-switch overhead can be larger than the timeslice allocated to each process's "turn" in the schedule for lower-latency / higher-response (lower "nice") processes, approaching realtime.
Unix was designed for multiple users and context-switching from the beginning, and the chips it runs on coevolved with it. Linux arrived when x86 CPUs ran fast enough that context-switching was OK, but still a big waste compared with, say, the MicroVAX's multiple register sets. Windows architecture is rooted in the x86 architecture that DOS was designed for, though perhaps Vista has finally lost all of the old design baggage that originated in the 8088/8086; its long history of UI multitasking means it's context-switching all the time, so it stands to gain speed. The MacOS switch to BSD means it's got lots of power bound up in context switches that could be released with Barcelona.
So while low-level benchmarks might show something like 80% FPU improvement, the high level (application) performance could improve quite a lot more. Recompiling apps to machine code that exploits more registers without the context-switching penalties could find multiples, especially apps with realtime multimedia that run concurrently with other apps. Intel's hyperthreading already gets past some of these bottlenecks in distributing tasks among multiple cores, but the Barcelona paging tables go even deeper, for likely extra performance (on top of Barcelona's own hyperthreading and new L3 cache).
Aside from the marketing "vapormarks" we'll surely see out of AMD (and their sockpuppets) before it's actually released "midyear", I'm looking forward to seeing how this thing really runs in multitasking apps. I'm expecting "like a greased snake across a griddle".
Re:Honestly... (Score:5, Insightful)
Obsession about process size is sillier than obsession over clock speeds.
If AMD can produce a better performing chip at 65nm, then who the hell cares if Intel - or anyone else - move to a 45nm process?
Re: (Score:2)
Re: (Score:2)
I agree with you on one point - I think as with your requirements, the goal for the average non-technical home consumer should be focused more on efficiency than multi-core 64 bit 4MB cache, etc. But not everyone spends 95% of t
I neglected to mention something else... (Score:3, Interesting)
Re: (Score:2)
An individual implementation can be copyrighted. A way of doing something can't be covered by copyright, and needs to be patented. That's what you meant, right?
Re: (Score:2)
This is Slashdot. We care about those details. You can read more about the "super fast, super cool, super cheap!" market speak on the company's official press releases section.
Re:Honestly... (Score:5, Insightful)
Feature size has dominated progress (as measured either by raw performance or performance per watt) over an unbroken 30 year period. Do you recall the very passionate debates about RISC vs CISC? Did a RISC design at one feature size ever beat a CISC design at the next shrink? I think not. Design has never mattered anywhere near as much as feature size. Not that you can't get design wrong. But then you can get a shrink wrong, too, and end up with 1% yields. AMD managed briefly to remain competitive with Intel while playing a full shrink behind, when Intel did that rather stupid marketron-driven face-plant into the thermal wall (against good advice from their Israel team, who later came to the rescue with the Core Duo).
With the recent skyrocket of leakage current, the holy grail of feature size is somewhat tarnished, but it still dominates the performance curve. You completely missed the relationship between feature shrinks and the performance crown. If Intel has better process technology than AMD (almost always) and AMD has a better design (most of the time since the Athlon was first launched) and both companies shrink every 18 months following the Moore projection (that unbroken 30 year historical trend) and AMD always shrinks 9 months behind Intel, then the performance crown will pass back and forth exactly as often as either company announces their next product.
So I agree with you: feature size has no importance to the customer who wants performance for their dollar. Except that you can set your clock by it and project ten years into the future effective performance levels of shrinks we haven't even seen yet. Except for that part, yeah, I'm with you.
Re: (Score:3, Interesting)
They care. Just moving the chip from 65 nm to 45 nm means you can produce twice as many chips on the same silicon wafer. Also, if a 65 nm chip performs well, then a 45 nm version of it (with slight modifications, of course) will work even better.
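The doubling claim is simple geometry (ignoring edge effects, scribe lines and yield, which is an assumption worth flagging):

```c
/* Dies per wafer scale with area: shrinking features from 65nm to 45nm
 * shrinks each die's linear dimensions by 45/65, so roughly (65/45)^2,
 * about 2.09x, more dies fit on the same wafer (idealized: ignores edge
 * effects, scribe lines and yield). */
double die_count_ratio(double old_nm, double new_nm) {
    double linear = old_nm / new_nm;
    return linear * linear;
}
```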
Re: (Score:2)
But how much does this really affect the retail price of a cpu? From randomly googling around, it looks like silicon wafer cost only translates to a dollar or two per cpu, so who cares if they can drop this expense by half? Surely other factors would be more important for cost, suc
Re: (Score:2)
The best way of looking at things. I started off Intel and stuck with them up to about 1Ghz, jumped ship and stayed with AMD until my X2 3800; now I'm back to Intel with a Core 2 Duo 6600. We'll see in 1-2 years who I'll be with next. The same goes for video cards and soda. Peps
Re: (Score:2)
Obsession about process size is sillier than obsession over clock speeds.
If AMD can produce a better performing chip at 65nm, then who the hell cares if Intel - or anyone else - move to a 45nm process?
If you move to a smaller transistor size you get more processors per w
Re:Honestly... (Score:5, Insightful)
The end is the delicate balance of improving performance per watt while increasing overall performance and keeping the price down. If AMD can deliver a chip that does a better job of that at 65nm than an Intel 45nm one, then the AMD chip is not somehow "worse" than the Intel one just because it doesn't use 45nm. That's just stupid.
I'm not saying AMD can do that, but I think that criticizing them for not being ready for 45nm yet is more than premature.
AMD's actually guilty of the same flawed logic though - their criticism of Intel's 4 core processor being just 2 dual cores stuck together is just as pointless. It doesn't matter; what matters is how well the processor meets the requirements of its target market.
Re: (Score:3, Interesting)
But it is interesting to see the two companies approach the problem from different ends. Do you improve the silicon process or do you alter the architecture and instruction set? I bet you the best answer will be to do both.
Quad cores that actually share cache would be nice. These double duals kind of suck because architecturally they can never share cache, although AMD and Intel don't have ve
Re: (Score:2)
I don't think it's about that. I mean, Intel quickly pumped out something that seems like a 4-core CPU, which took far less time than developing a new quad-core CPU design, and that makes them seem to lag behind, so what can you do to explain? Not mu
Re: (Score:2)
The separated schedulers for floating and integer math allows for more parallelism, another speed up.
The shared L3 and reduced latency L2 caches should put Barcelona ahead of Clovertown's split caches
Re: (Score:3, Interesting)
Most laptop processors have a higher performance/watt than desktop processors because they are designed
Quad Core (Score:3, Interesting)
Re:Intel's Responds (Score:5, Interesting)
Oh, here's one [sun.com]. Though it's been out since before Intel had quad-core chips.
Re: (Score:3, Informative)
(this was anonymous for a reason)
Re:If its true (Score:5, Funny)
Seven for the WoW-nerds in their halls of stone,
Nine for Diablo Men doomed to die,
One for the Dark Nerd on his dark throne
In the Land of Silicon where the corporations lie.
One quad core to rule them all, One quad core to find them,
One quad core to bring them all and in the darkness bind them
In the Land of Silicon where the corporations lie.
He paused, and then said in a deep voice,
This is the Master-quad core, the One quad core to rule them all.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
I like dial-up, nobody can call me (one phone line, disable call waiting), and I really only do IRC and text browsing. Honestly who wants to give the cable company or phone company $50 a month, those bastards are rich enough.
Re: (Score:2)
Intel comes up with some hare-brained scheme that "More is better!" (like Viagra). They design something new and decide to make it faster (or in this case just glue more of them together). Back in the day it was the "GHz", now it's all about how many "Cores" you got. This tactic seems to suit Intel quite well and dethrones AMD for about a year and a half... During this time AMD massively redesigns their chips to integrate new, emerging technologies. The gamers and server operators of the world sit by their
Re: (Score:2)
"Hey, boss, we need to buy another 100 machines to support these validation runs. Or we could buy 80 machines of this other brand which will accomplish the same thing and save
Re: (Score:3, Interesting)
I will not be surprised if AMD dethrones Intel again. It is a classical Intel vs. AMD battle...
I am not sure Intel ever did beat out AMD.
I went down to Best Buy where the Intel rep was hard-peddling a Core 2 Duo machine and compared his $1500 machine to an AMD X2 clearance one for $600. I had nothing to do that day but be a clown, so I went and got a DVD with software on it, and said these are both XP, right? Copy the contents to the hard drive and compress it. I am going to measure it. Core 2 Duo result