AMD Quad-Core Opteron (Barcelona) Tech Report 201
crazyeyes writes "AMD has been very tardy with Barcelona. Countless AMD fans have eagerly awaited a new processor. As the day draws closer, TechARP takes a look at the upcoming quad-core AMD Opteron. Is there more to it than just its four processing cores? Will it be the Intel-killer that AMD promised long ago? From the article: 'AMD is in the same boat as ATI. Delay after delay of their long-awaited Barcelona core not only ensured the dominance of their rival, Intel, in the desktop processor market, but also ensured that Intel would be the only choice for those who want a quad-core processor. Although that wait will end in August 2007, when Barcelona is finally launched, it remains to be seen whether AMD's new processor will be able to inflict serious damage on Intel's dominance.'"
Intel's server / workstation chipsets suck (Score:2, Insightful)
FB-DIMMs cost a lot and need a lot more power to run than DDR ECC RAM, and the Intel chipsets have very few PCI-E lanes. The nForce Pro chipsets have the lanes for 2 full x16 slots plus 2 x4 slots, PCI-E lanes for onboard SATA / SAS RAID, and x4 lanes left over that are sometimes used for PCI-X slots.
Also, the AMD chips have a better CPU-to-CPU link.
Re: (Score:2, Insightful)
On another note, this is considered news? There was, quite literally, nothing to see here. It was a couple of paragraphs with a flashy new slide in it. It lost all credibility when this line was written:
Of course, Intel is also rushing out a similar solution, in the form of their V8 programme. So, it is a race to see which company will be the first to release an 8-core platform. AMD stands to take some wind out of Intel's sails if they are the first out with their 8-core platform.
Intel has had an 8-core platform since last summer.
Re: (Score:2)
Intel V8 for gaming is a joke: FB-DIMMs, and no CrossFire or SLI.
Re:Intel's server / workstation chipsets suck (Score:5, Insightful)
Actually it makes a lot more difference than you'd think. This is most evident in the caches. Intel's quad-core has two shared L2 caches (one per pair of cores). AMD has a full L2 cache per core AND a shared 2MB L3 cache; Intel doesn't have an L3 cache on any of their stuff. Besides that, HyperTransport is a lot faster than Intel's dated FSB. More bandwidth and faster aggregate links mean that yes, the native quad-core will be a lot better.
Aside from that, AMD also still has much better memory performance via the on-chip memory controller, plus op registers doubled in width from the last-gen AMD stuff.
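For what it's worth, the worst-case arithmetic behind that cache comparison can be sketched like this (a rough illustration only; the 512KB/2MB/4MB figures come from the numbers being thrown around in this thread, and "worst case" assumes every core sharing a cache is equally cache-hungry):

```python
# Hedged sketch: cache a single core is *guaranteed* under each layout,
# assuming all sharers of a cache are equally hungry. Figures are the
# ones discussed in this thread, not official datasheet numbers.

def worst_case_cache_per_core(private_kb, shared_kb, cores_sharing):
    """Private cache plus an equal share of the shared cache, in KB."""
    return private_kb + shared_kb / cores_sharing

# Barcelona-style: 512 KB private L2 per core, 2 MB L3 shared by 4 cores
amd = worst_case_cache_per_core(512, 2048, 4)

# Kentsfield-style: no private L2, 4 MB L2 shared by each pair of cores
intel = worst_case_cache_per_core(0, 4096, 2)

print(amd, intel)  # 1024.0 2048.0
```

So under this simple model Intel actually guarantees more cache per core; the per-core-L2 argument is really about latency and contention, not raw capacity.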
Re: (Score:2)
Very interesting comparison. The way I see it: AMD has 0.5 MB L2 per core, and 2MB L3 shared between four cores. Intel has 4MB L2 shared between two cores, and 4MB L2 shared between the other two cores. Anything where 0.5MB is enough, AMD
Re: (Score:3, Insightful)
If only my favorite applications (like Logic Pro or Sonar or Wavelab or Nexus or Kontakt or Premiere or After Effects or even Flash) were available in
Re:Intel's server / workstation chipsets suck (Score:4, Insightful)
Per core cache is faster than shared cache.
L3 is better as well because it means it can be used to transfer data between cores instead of main memory.
Re: (Score:3, Interesting)
Shared FSB systems do not scale... even Intel knows that. However, dual-d
Re: (Score:2)
That's not really important at all. As long as something works better in practice, that's the one I'd buy/recommend.
And from what I see, the Core 2 Duos are a LOT faster than the AMDs and in most cases the better choice.
2 years ago I'd have recommended AMD over Intel's P4/NetBurst crap. But now the Core 2s are stomping all over AMD, and with the recent Intel price cuts, AMD is in for a very bad time unless Barcelo
Re: (Score:2)
The C2Ds are definitely a lot faster, since the top X2 is still only as fast as a mid range C2D.
BECAUSE of that, AMD has little choice but to slash prices, just to become equivalent in price/performance.
Keep in mind: AMD's top chips likely _cost_ much more to produce than Intel's mid-range chips.
The CPU/Mem industry is a bad industry to be in - producing capital intensive commodities. In contrast in other markets if you're Number 2 (e.
Re: (Score:3, Insightful)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2, Funny)
So you're saying AMD is likely to go tits-up?
Re: (Score:2)
I certainly hope AMD stays in the game, but I doubt they will lead it with Barcelona.
On AMD boards fully populate your CPU sockets (Score:2)
Or you can't use all your memory sockets, because the memory controller for half of them is in the second, potentially non-existent, CPU.
Balancing the amount of memory on each side is a good idea too.
Re: (Score:2)
That's what you get... (Score:3, Insightful)
The reality of the situation became that the great majority of Athlon64 users were running 32 bit apps, and continue to do so.
There has yet to be a dire 'need' for 64-bit processing, in much the same way that there isn't a dire need for more than 4GB of RAM in a desktop machine.
At work, I'm the sysadmin for a dedicated hosting company (Linux, mostly Gentoo), and even in that market I don't know of any of my users running 64-bit. Any performance advantages are outweighed by incompatibilities and the plain old PITA of getting things working.
That said, the delay in developing these quad core procs shouldn't put that big a dent in the pocket / market share of AMD simply because it's a niche market that has yet to be widely adopted.
Re:That's what you get... (Score:5, Insightful)
1987 called, they want to use more than 64k of RAM. How can they do that without going to 32-bit?
2007 called back, just to let you know that 4GB of RAM was $150. That's right, $150. At that point, a lot of people are starting to wake up to the unpleasant smell of Intel's PAE (that's right, segmenting, but with 32 bits!). We're also living with the limitations of the 32-bit TLB and the paging methods used. I have a machine here with 4GB of RAM, and it's not unusual because of how cheap RAM is. Linux can run it as 4GB of RAM in 64-bit mode no problem, or I can run in 32-bit with 3.6GB of RAM, because the PCI bus and other devices all map to that high region (just like everything above 640KB was mapped to devices back in the 20-bit addressing days). 32-bit Windows does the same thing.
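The arithmetic behind that 3.6GB figure is simple enough to sketch (the 0.4GB device-hole size is an assumption for illustration; the real MMIO hole varies by chipset and installed cards):

```python
# Back-of-the-envelope sketch of the 32-bit address ceiling. The
# 0.4 GiB MMIO hole is an illustrative assumption, not a fixed value:
# real chipsets reserve varying amounts for PCI devices and video RAM.

ADDRESS_BITS = 32
gib = 2 ** 30

total = 2 ** ADDRESS_BITS            # everything a 32-bit pointer can name
mmio_hole = int(0.4 * gib)           # assumed space claimed by devices

usable_ram = total - mmio_hole       # RAM the OS can actually map

print(total // gib)                  # 4
print(round(usable_ram / gib, 1))    # 3.6
```

RAM sitting "under" the device hole isn't addressable at all in plain 32-bit mode, which is exactly why a 4GB machine shows ~3.6GB.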
Now, while Linux 64-bit is stable and mature (having been something I've used for 3 years, after which most of the userspace apps have been cleaned up to work), Windows 64-bit is still not all there. Naturally, the proprietary apps will always live in the land of 32-bit. Supreme Commander, a recent DX10 game, has a lot of 32-bit troubles -- running out of RAM and crashing. One of the things you have to do to play it well is add
Now, 10 years ago, or even 5 years ago, that would not even have been on the radar screen. Now that you can buy 4GB of RAM for less than $200 (CAD or USD), and now that we have games and applications that need it (beyond the VFS cache; go look at some serious SQL applications or scalable web applications), I think you're way off base, and you sound like someone talking about how 64KB of RAM (the 16-bit addressing limit) is more than enough for anyone.
If all you're doing is sysadmining mom-and-pop's micro website that runs fine with 1 or 2gb of RAM, you'll never know this. If you're sysadmining a company that relies on this stuff, and has a cluster of machines that need to be up and running with gobs of RAM to buffer slower disks and backplanes, you'll know better. When normal users can get 4gb of RAM for next to nothing, the server machines better have at least 32gb of RAM.
Re: (Score:2, Interesting)
Re: (Score:2)
32-bit software works because it was ALREADY USING virtual memory, so mapping the app's 32-bit virtual memory to 64-bit physical addresses isn't a whole lot different to mapping to 32-bit physical addresses.
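A toy sketch of why that works, with entirely made-up page-table entries (the 4KB page size and frame numbers are illustrative, not any real OS's layout):

```python
# Toy page-table sketch: a 32-bit process's virtual pages can be backed
# by physical frames anywhere in a 64-bit physical address space,
# because the app only ever sees its own 32-bit virtual addresses.
# Page size and frame choices here are made up for illustration.

PAGE = 4096

# virtual page number -> physical frame number
page_table = {
    0x00000: 0x1_0000_0000 // PAGE,   # frame well above the 4 GiB line
    0x00001: 0x2000 // PAGE,          # frame below it; the app can't tell
}

def translate(vaddr):
    """Map a 32-bit virtual address to a (possibly >32-bit) physical one."""
    vpn, offset = divmod(vaddr, PAGE)
    return page_table[vpn] * PAGE + offset

print(hex(translate(0x123)))   # 0x100000123 -- beyond 32-bit reach
```

The translation step is identical either way; only the width of the frame number changes, which is why 32-bit apps run unmodified on a 64-bit kernel.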
As for antivirus, I think that one was a difficult choice. Antivirus prog
Re: (Score:3, Informative)
If you'd watch the market more regularly, you'd know that RAM has priced out at anything between $30 per gigabyte and $125 per gigabyte in the past 12 months. Last summer it was around $60-$75 per GB, rising to the $125/GB figure in the fourth quarter of 2006. Right now it's bouncing around in the $30-$50/GB range.
All depends on what week you buy it and what week your retailer bought their stock.
I'm hoping that inexpensive ($30 or less)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
"If you're sysadmining a company that relies on this stuff, and has a cluster of machines that need to be up and running with gobs of RAM to buffer slower disks and backplanes, you'll know better."
Using non-ECC RAM in a 24/7 machine is stupid.
Re: (Score:2)
http://forums.anandtech.com/messageview.aspx?catid=40&threadid=2054141&enterthread=y [anandtech.com]
Back to original topic... Intel motherboards really are held back. Why can't I put in 16GB for about 2.5x the price of 8GB?
Re: (Score:3, Interesting)
That said, the delay in developing these quad core procs shouldn't put that big a dent in the pocket / market share of AMD simply because it's a niche market that has yet to be widely adopted.
From what I've heard, the Intel quad cores are selling like hot cakes for running virtual machines.
And it's not only quad core: Barcelona also brings a bunch of core improvements, sorely needed to keep AMD competitive with Core 2.
Re: (Score:2)
Re:That's what you get... (Score:4, Interesting)
Of course nobody's running 64-bit applications at home or at the office. Because the dominant player there is Microsoft — whose 64-bit support on the desktop is either lame (try to find even basic drivers for XP-64) or a nightmare (try to run Vista-64 at all!). Can't really run 64-bit apps without a 64-bit OS, can you?
On the other hand, there's a huge demand for 64-bit apps that run on high-end workstations and servers. How do you think AMD managed to grab so much market share so quickly? By finding a way to meet that demand ahead of Intel, that's how.
If it weren't for this demand I wouldn't have a job — documenting x64 servers for Sun. Yes, Sun. It's a big profit center for them these days.
All that tells us is that Gentoo 64-bit support sucks and that you're not supporting any high-end applications. What have you got, some low volume commerce and web presence sites? If you were doing millions of transactions a day, you'd be needing to squeeze all the performance out of your servers you could manage. Which is why the big boys run serious 64-bit OSs: RHEL, SLES, Solaris, Windows 2003.
Ahh more FUD (Score:4, Insightful)
As for your drivers comment, well let's see here: Intel has 64-bit XP and Vista drivers for their motherboards (and by extension graphics) and NICs as far back as their 865 series (anything older doesn't support 64-bit CPUs). Vista-64 has native support for older nVidia chips (GeForce 2 is the oldest I've tried) and nVidia provides downloadable drivers for their 5 (FX) series and newer. ATi likewise has support in the OS for some older chips, and downloadable drivers for the 9500 and newer for XP-64 and Vista-64. Broadcom has XP-64/Vista-64 drivers out for all their NICs (both 44XX and 57XX series). LSI has 64-bit drivers for, well, all their products that I can see for XP and Vista (and Linux and Solaris). Colorvision has 64-bit drivers and is Vista compatible. Logitech, Microsoft, and Saitek all have 64-bit drivers and support apps out for their input devices.
I could go on but basically any modern hardware seems to have no problems at all with 64-bit drivers. In fact, on all the 64-bit Windows systems I've set up, I've never encountered a component we didn't have a driver for. I'm not saying there aren't some oddballs out there, I'm saying that the vast majority of stuff DOES have a driver and thus it is a non-issue.
When you are countering some FUD, please don't spread your own. You may not like MS OSes, that's fine, but it is a lie to say that finding drivers for 64-bit Windows systems is hard. The vast majority of devices, including specialty devices (I've got 64-bit Vista drivers for my colorimeter and StudioCanvas, for example) have 64-bit drivers. It is just a non-issue. Far more rare is 64-bit software, but thankfully 32-bit software runs without problems on the 64-bit OS.
Re: (Score:3, Interesting)
Amen to that. I've run both XP 32- and 64-bit on this machine, and now I'm giving Vista x64 a go. XP 64-bit is a total joke - driver support is almost totally lacking, and now with Vista, I
Re: (Score:2)
Re: (Score:2, Informative)
This is very much untrue, as I can attest. I have been running a 64-bit OS since 2003, and it runs like a dream. I can't address all the technical reasons why, but I can say that I have no 32-bit libraries and I'm up and running. No tears here.
Re: (Score:2)
If you say "end users", I'd tend to agree with you (although the situation is improving at a good pace).
But for server users, Linux 64bit has been here for at least 2 years now. The early adopters probably started 3-4 years ago. Things were still slightly shaky 2 years ago, but are definitely pretty solid today.
I game on Windows XP x64 and Ubuntu x86-64 (Score:2)
My 250GB Windows disk is overflowing with major games, and all run wonderfully. Except for the occasional one I have to crack, either because it's too old and tries to load a 32-bit driver (understandable) or the company went for a shit copy-protection system that tries to load a 32-bit driver (eg Overlord). That's not such
Re: (Score:2)
Dominant? Yes. Total control? No. Plenty of us run 64bit apps, just not on Windows.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Because I discovered that plenty of OSS apps wouldn't compile. libdv is one, IIRC. Kino is another. Not the most important apps in the universe, but enough to make me waver.
That said, some fine people have stepped up and done some of the dirty work and done a 64bit multi-media distro : http://64studio.com/ [64studio.com]
Re: (Score:3, Interesting)
The 80386 was introduced in 1985, but the transition to 32 bits in software was really only done in 1995. Windows 3.1, released seven years after the 386, still ran on the 286. Word 6.0 for DOS, released in 1993, still could run on an original 8086.
The first 64-bit x86 processors were introduced in 2003. If they "herald in a new era of
Re: (Score:2, Interesting)
As a veteran of the 16->32 bit transition (for that matter, the 8->16 bit as well), I've been wondering
Re: (Score:2, Interesting)
It's not really compelling - plus or minus a few percent. And you need to test two binaries which is expensive. So
Re: (Score:2)
There's no such thing as incompatibilities if you're compiling it yourself.
(I run two 64bit Gentoo servers fyi)
Intel was never the choice for Quad Cores, since.. (Score:2, Interesting)
All existing Q6xx0 solutions are dual-dual core, i.e. two dual-cores sharing the same FSB - and that is _NOT_ the same as the true QC that Barcelona is claimed to be.
That difference is enough to make Barcelona the main choice for many-core servers, even if it were made with the old K8 and not the new K10 cores.
Intel should have true QC chips in a year or so...
Re:Intel was never the choice for Quad Cores, sinc (Score:3, Informative)
Re:Intel was never the choice for Quad Cores, sinc (Score:4, Insightful)
Intel had the option to rest on its laurels; they don't like to work any harder than necessary to remain on top, and the Core marchitecture gave them a huge.. well, I'll say it.. "Leap Ahead" of the competition. Unfortunately, Intel's more of a bunny: hop a few times, then get tired and sit around, whereas AMD is more of the turtle (slow to market, but rather constant). We all know who wins the race.
Re: (Score:2)
Re: (Score:2)
If AMD wanted to, they could have had Intel's style of "quad core" long ago.
Hell, even two "X2 4800" dies on one substrate, connected through an HT link, would be equivalent, and they could do it in a few weeks even if they decided to go for it _today_. There is not much to it.
Opteron/AMD64 was _made_ so it could be connected in LEGO-like fashion...
Re:Intel was never the choice for Quad Cores, sinc (Score:4, Insightful)
And yet they don't, and they just posted a $600 MILLION loss in one quarter. The difference between what AMD lost and Intel made last quarter is almost 2 billion dollars. Maybe you should take your market genius over there and help them turn it around.
Re: (Score:2)
I think it's important to remember that Intel inadvertently delivered the high-end server market into AMD's lap.
Intel had done so much heavy marketing, pushing claims that the Itanium was going to blow away all the proprietary CPU architectures, that damn near EVERYONE EOL'd their Unix servers... Alpha, MIPS, PA-RISC,
Re:Intel was never the choice for Quad Cores, sinc (Score:5, Insightful)
Now, if you said that "true" quad core was going to make the chips twice as fast as Intel's, at half the price, then that would be interesting. Of course, you could say that the chips would be twice as fast at half the price, and that would be just as interesting - the technology has nothing to do with it.
Re: (Score:2)
2. If your general understanding of the problem is poor, then any explanation that could be "interesting" to you is likely to be marketing bull**it, optimized for technical morons.
You can't make universally valid "X-times faster/slower than" comparisons between these kinds of machines.
Results tend to be program-and-load spec
There is no "real 4 core" performance leap. (Score:2)
BS: At what tasks?
Because Intel has already demonstrated near-4x performance with its "untrue" quad core. You are not going to get more than 4x single-core performance. There is only a tiny margin of multi-core efficiency that AMD could improve upon.
AMD's only real Barcelona hope is that it increases basic core performance significantly; there is no magical "real 4 core" performance leap to be had, and certainly not a 10X performance increase.
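The 4x ceiling is just Amdahl's law; a quick sketch (the 0.95 parallel fraction is an illustrative assumption, not a measurement of any real workload):

```python
# Amdahl's law sketch: speedup from N cores is capped by the serial
# fraction of the work, so a quad core can never exceed 4x and in
# practice lands below it. The 0.95 fraction is purely illustrative.

def amdahl_speedup(parallel_fraction, cores):
    """Overall speedup when only part of the work parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

print(amdahl_speedup(1.0, 4))                  # 4.0 -- the hard ceiling
print(round(amdahl_speedup(0.95, 4), 2))       # 3.48 -- more realistic
```

Since Intel's MCM quad already sits close to that ceiling on well-threaded code, any "true quad" win has to come from per-core improvements or lower inter-core latency, not from core count itself.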
Doesn't matter (Score:4, Insightful)
In the end it doesn't matter how it is delivered; it matters who can deliver good performance per $$$. Intel's quad-core chips go a long way toward doing that in the markets that can use them. The reason is that it gets expensive to add physical processors to a board. A single-socket board might be $100, but the same thing in a dual-socket variety can be $400-600, and you don't even want to see the prices on quad-socket boards. Thus being able to drop 4 cores into a standard desktop board, even if they aren't a monolithic 4-core package, is a good deal for many.
Technical arguments and contrived benchmarks mean nothing. The only thing that matters is how fast it runs the things you actually, really do, and how much it costs.
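The dollars-per-core point is easy to make concrete (the board prices are the ones quoted in the parent comment; the CPU prices are hypothetical, picked just to show the shape of the comparison):

```python
# Rough cost-per-core sketch. Board prices follow the figures quoted
# in this thread; the CPU prices are hypothetical placeholders.

def cost_per_core(board, cpus, cores_per_cpu, cpu_price):
    """Total system silicon cost divided by total cores."""
    total = board + cpus * cpu_price
    return total / (cpus * cores_per_cpu)

# one socket, one quad-core package on a $100 desktop board
single = cost_per_core(board=100, cpus=1, cores_per_cpu=4, cpu_price=300)

# two sockets, two dual-core chips on a $500 dual-socket board
dual = cost_per_core(board=500, cpus=2, cores_per_cpu=2, cpu_price=200)

print(single, dual)   # 100.0 225.0 dollars per core
```

Even with generous assumptions for the dual-socket setup, the single-socket quad wins on cost per core, which is the whole argument for MCM packages on desktop boards.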
Re: (Score:2)
Re:Intel was never the choice for Quad Cores, sinc (Score:2)
All existing Q6xx0 solutions are dual-dual core, i.e. two dual-cores sharing the same FSB - and that is _NOT_ the same as the true QC that Barcelona is claimed to be.
That difference is enough to make Barcelona the main choice for many-core servers, even if it were made with the old K8 and not the new K10 cores.
Intel should have true QC chips in a year or so...
You're very convinced the difference will be drastic; that's a very funny thing to be when you've never seen a single benchmark.
According to
Re: (Score:2)
1. This is mainly what currently keeps Opterons ahead of Xeons in servers, despite the superior C2D core and heap of cache.
2. I have exchanged dual Opteron boards for single socket DC 6000+.
Despite the HT link and dual RAM banks of existing Opterons being superior for most uses to Intel's shared FSB, there is a tangible speedup just from having a really fast inter-core communication path.
3. AMD has onboard memory controller even now with K8 and K10 wil
AMD: first to 8-core CPUs ? (Score:2)
AMD has indicated they may do an MCM (Multi-Chip Module) in the future, like Intel. But since they are releasing a true quad-core CPU before Intel, it is going to give them an advantage: to make an 8-core CPU, they will just need 2 quad-core chips, whereas Intel will need 4 dual-core chips.
I wonder how significant this technical advantage really is on various levels: performance, power consumption, reliability, yield, manufacturing simplicity, cost, etc. Could this also mean AMD will be first to market with
Re:Intel was never the choice for Quad Cores, sinc (Score:2)
WE KNOW ALREADY. The Intel quad core still performs very well in benchmarks - faster than the dual core in multi-threaded apps - so one can only conclude it damn well works.
As for AMD's 'true quad' performing better, no one knows for sure; there are no real benchmarks at this point.
AMD could release it and say "wow, our system is 25% faster than the Q6600," and Intel could say "err, so what" and release the Q6600 clocked 25% faster, because there's so much damn headroom
Re: (Score:2)
It doesn't matter if it's two dual cores in one package. It doesn't matter if it's 4 single cores with a highly trained monkey dividing up the instructions between the cores. It's the end results that matter.
However, right now you can't get the $300 quad core from Newegg; it's sold out. You *can* get a $61 dual core from AMD though. And unless you require a lot of processing power, it's more than enough for most people. Especially college students o
I'm upgrading next month and I'd love to buy AMD.. (Score:2)
Re: (Score:2)
And if you count the Q6xx0 as a quad-core system, then by that standard every dual-socket dual-core AMD system sold qualifies, since architecturally they are about equivalent, if not better (2x memory channels, HT links)...
So... um... (Score:2)
Benchmarks, Price, Release date (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
But since you have a Cell system, I guess you're not too worried about upgrade paths
Neither one is a bad choice (Score:2)
She's unlikely to tax either one.
She'll need a lot of RAM for VMWare to work well. That will have a huge impact on your cost calculations. Use RAID for performance, she'll need that too.
Processors are important but they're not the whole answer in the performance equation. At this time the bottlenecks tend to be more in the RAM and HD I/O.
Re: (Score:2)
All chips have bugs (Score:2)
They're called errata. The most recent batch are more plentiful than usual, but that's not unheard of. Get your microcode updates, whichever [microsoft.com] vendor [linuxbios.org] you get your chip from. AMD calls them BIOS updates, which partly makes sense since you usually patch the BIOS at the same time. You usually get them from the OEM of your motherboard or system, but as you can see from those links, operating system vendors can put them out too. The errata that have been in the press lately are unlikely to affect chips you buy right n
Re: (Score:2)
Re: (Score:3, Interesting)
If you had written that statement in the late 90s or even as late as 2002, I'd agree with you. But system performance stopped doubling every 18-24 months a long time ago. Now it's closer to 36-60 months (although dual-core and quad-cor
I sort of agree with you... (Score:2)
but think your information is a little dated. For many years what I said was true. Then for about 18 months it wasn't. I believe the race is back on now.
Today's quad-core, VM-supporting 64-bit machine will be quite useful in 8+ years, especially if you get the ULV processor. But compared to what's on the shelves at Wal-Mart on that day, it will still look dated.
What I wonder is what sort of hideous application would require the type of computing horsepower that should be available 8 years hence. By then
Re: (Score:2)
While I'm a fan of multi-core (dual-core has been a minimum recommendation from me for the past year, ever since the Athlon64 X2s broke the $150 barrier), I do question the idea of more than around 4-8 cores on a consumer / light-business desktop. For the power users, yeah, they'll be building systems with 4
Re: (Score:2)
I agree. Where we are right now is that the pace of improvement will not stop even though the baseline product is far more capable than 90% of the people need. I don't expect this to change any time soon. The person who can make efficient use of a top of the line pc will only become more rare.
The baseline will continue to improve at remarkable rates as each innovation on the bleeding edge drives prior art downmarket.
Peripherals are a different story. I expect more innovation in human-machine interface
Re:Benchmarks, Price, Release date-I SINCERELY DOU (Score:2)
I sincerely doubt that anyone still in school - any school - is going to overtax the current Core 2 Duo/AMD-64 X2 offerings available today. Short of running simulations of the Universe in real-time, or high resolution Maya renderings (remember when Photoshop was once the app that justified the
Re: (Score:2)
Re: (Score:2)
Of course, for the budget users, I'd be pointing them at the lowest cost dual-core CPU. Or possibly a few steps up from the bottom. There are some p
Better quad-core how? (Score:5, Insightful)
The latest batch of ATI cards have failed to compete with the 8800GTX and instead compete against lower clocked cards, presumably again with cut margins. Right now AMD and ATI to me look like two second place companies, and if they try to integrate closer they'll drag each other down. I'm certainly not inclined to buy those two as a package...
Re:Better quad-core how? (Score:5, Interesting)
Re: (Score:2)
Motorola is no longer a player in the desktop CPU market and hasn't been for several years now; I'm curious why you bring them up. Their products haven't been put into a new notebook computer for over a year and a half.
Re: (Score:2, Troll)
2) If you use AMD's X2/EE versions your TDP gets far lower (whereas cost of CPU increases by merely a couple of bucks). If you use X2/EE/SFF you get a TDP of 35W.
3) Price/performance wise, AMD is still a clear winner, hands down.
Re: (Score:3, Interesting)
I know, and I also don't give a shit. I got a single-socket mobo and four cores running; you don't. I don't need a special and expensive dual-socket mobo, eATX case or whatnot. That's 99% of the advantage right there. The notion that "real" quad-core makes a big difference is at best disputed; maybe if you have a lot of core-to-core communication, but well... I don't see how that could be a very big bottleneck for normal quad-core u
Re: (Score:3, Informative)
In essence, the desktop will slow and rot, perhaps giving us another boneheaded move like NetBurst.
You can take all of that with a grain of salt, but remember this... It's been hammered here many times before that a com
Re: (Score:3, Informative)
Intel is not going off on a huge strategic blunder like the PIV or Itanium again, this time they're on the ball and overclocking results suggest they have a lot of headroom.
Really? I'm not so sure.
Sooner or later they're going to have to go for something similar to an Itanium processor. Once pushing clock speed runs out, pushing cores runs out, and pushing micro-op improvements runs out, they're going to start looking at the instruction set.
You can bet that if they could change the instruction set on a whim, they would have done so a long time ago, and the processor would perform much better.
I think it's inevitable that in the next 10 years things will start to look towards It
Re: (Score:2)
I think it's inevitable that in the next 10 years things will start to look towards Itanium (or an equivalent), because changing the instruction set will provide a lot of untapped processing power.
We've been hearing things like this for a lot longer than ten years. I'm not an expert in this area (or any other, I'm more of a jackoff of all trades) but I do have a few observations to make.
Itanium was supposed to be inherently faster than x86. It hasn't proven to be so. They did load it up with fancy FP hardware and, lo and behold, it was fast. It's still far cheaper to use a group of AMD processors if you want fast FP. And now they're going quad... That's mostly irrelevant though, the point is that
Re: (Score:2)
No. No it wouldn't. It might perform marginally better, as in maybe 1-2%, if you could get rid of all x86 cruft. That number is mostly a WAG, but I've done performance modelling studies to back it up (or more specifically, I've modelled x86 cores without x86 cruft because it's easier, and adding in the cruft for accuracy costs a percent or two on average).
P
Re: (Score:3, Insightful)
I look at it this way: there are only three players, AMD, Intel, and nVidia. Beyond that you're not going to find a chipset, CPU, or GPU worth anything. The only company that (now) has sufficient expertise in all three areas is AMD. Intel has done a good job with Centrino, but clearly has no interest and lacks knowledge in the GPU arena (they've only done the bare minimum w
Barcelona (Score:2)
[laughs]
The Doctor: Imagine how many times a day you end up telling that joke, and it's still funny!
Well, that aside - but it had to be done. Now to business: I remember when 64-bit CPUs came out - Sun and a few others had them (12 years ago) - and the programming took forever to catch up. They are still selling 32-bit processors
Re: (Score:3, Insightful)
Re: (Score:2)
Re: (Score:2)
Movie encoding is typically multi-threaded, so it will use as many cores as it can lay hands on. So you'd only be encoding one movie at a time.
For the rest of us, having 2 or 4 cores is more about being able to handle spikes in CPU usage without slowing down interactivity. With a singl
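The work-splitting pattern an encoder uses is roughly this (a sketch only: the `encode` function is a stand-in for real codec work, and Python threads merely illustrate the shape of what an encoder's native threads do in true parallel):

```python
# Sketch of how a multi-threaded encoder saturates every core: the job
# splits into independent chunks, one worker per core. "encode" here
# is a stand-in for real per-chunk codec work, and the thread pool
# illustrates the pattern a real encoder runs with native threads.

from concurrent.futures import ThreadPoolExecutor
import os

def encode(chunk):
    # placeholder for CPU-heavy compression of one chunk of frames
    return sum(b % 7 for b in chunk)

def encode_movie(frames, workers=None):
    # default to one worker per core, like an encoder probing cpu_count
    workers = workers or os.cpu_count() or 4
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(encode, frames))

print(encode_movie([bytes([1, 2, 3]), bytes([8, 9])]))  # [6, 3]
```

Because every chunk is independent, adding cores scales this almost linearly, which is why one encode job alone can peg a quad core.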
Re: (Score:2)
Re: (Score:2)
Real-time encoding...
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Quit trying to be pedantic.
Re: (Score:2)
Until you put a CD in the drive. Locks up multiprocessor machines fairly effectively until it works out what to do with it.
Re: (Score:2)
Until you put a CD in the drive. Locks up multiprocessor machines fairly effectively until it works out what to do with it.
Yeah, what the hell is that all about? It happens with Linux too - especially when there are errors.
Windows NT finally (in Windows 2000) figured out how to read a floppy without killing the whole fucking OS. And then we got USB and ATAPI floppies and the problem went away. But CD-ROMs are already ATAPI (or SATA, or SCSI), and it seems like any OS still shits itself when it comes across a read error.
Do I need to put another computer in my computer, connect it with gigE and do scsi-over-ethernet so that I can