AMD 'Bulldozer' FX CPU Reviews Arrive
I.M.O.G. writes "Today AMD lifted the embargo on their most recent desktop AMD FX architecture, code named Bulldozer, whose CPU frequency record Slashdot recently covered. The fruition of 6 years of AMD R&D, this new chip architecture is the most significant news out of AMD since the Phenom II made its debut. The chips are available now in all major retail outlets, and top tier hardware sites have published the first Bulldozer reviews already."
Here are reviews from a few different sites — pick your favorite: Tom's Hardware, PC Perspective, Hot Hardware, [H]ardOCP, or TechSpot. They don't agree on everything, but the consensus seems to be that the new chips aren't blowing anyone's socks off, and that they struggle to compete with Intel's comparable offerings. The architecture shows promise, but performance gains will take time to materialize, making it difficult to leapfrog Intel to any significant degree.
I skimmed a few... (Score:3)
Of late, Intel's somewhat confusing set of model numbers has been distinguished, in addition to differences in speed, by various features being lasered off certain parts but not others, mostly virtualization-related stuff. AMD generally left those on at all times and distinguished primarily by speed.
Does anybody have an idea how the price/performance comparisons change (if in fact they do) from the pure-benchmark ones given in TFAs, if the buyer requires that all the relevant virtualization features be enabled?
Re:I skimmed a few... (Score:4, Informative)
Re:I skimmed a few... (Score:4, Interesting)
It's of interest to me because my next build/config to order is likely to be primarily for VM hosting, with routine desktop/workstation tasks taken care of by the fact that modern CPUs are fast as hell. Unfortunately, a lot of the enthusiast benchmarks generally focus on running Medal Of Warfare fast and cheap, and the virtualization benchmarks generally start from the assumption that you are looking to buy a palletload of 1Us...
Re: (Score:3)
I've seen ECC errors before. It's frequently because the memory is not seated perfectly and if you reseat it they stop. But if you don't have ECC memory, you don't get an ECC warning, your programs just crash.
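Incidentally, Linux exposes those corrected-error events through the EDAC sysfs interface, so you can see whether reseating actually helped. A minimal sketch, assuming an EDAC driver is loaded for your memory controller (without ECC DIMMs or EDAC support the directory simply won't exist):

    # Sketch: print corrected/uncorrected ECC error counts from Linux EDAC.
    # Assumes an EDAC driver is loaded; the mc* directories won't exist otherwise.
    import glob

    for mc in sorted(glob.glob("/sys/devices/system/edac/mc/mc*")):
        with open(mc + "/ce_count") as f:
            ce = f.read().strip()   # corrected (recovered) errors
        with open(mc + "/ue_count") as f:
            ue = f.read().strip()   # uncorrected errors
        print(mc, "corrected:", ce, "uncorrected:", ue)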
Re: (Score:2)
The i7 2600K has all of the bells and whistles enabled. Except maybe ECC memory...
Re: (Score:3)
The i7 2600K has all of the bells and whistles enabled. Except maybe ECC memory...
No, it doesn't have VT-d or TXT enabled; the 2600 - not the K - does, though. It sells for a fraction less, has slightly worse integrated graphics, and is multiplier locked; it's basically the business version of the 2600K. And like you say, if you want ECC you must get Xeons.
Re: (Score:2)
Re: (Score:2)
VMWare Workstation would still run fine with 1 or 2 guests without VT-d, wouldn't it?
Depends on what you're doing. VT-d is directed I/O (hardware-assisted I/O virtualization). Both have VT-x, so CPU-intensive guests should run just fine with or without it, but device access will be slower without it. But in my experience running virtualization without it, it's not that slow anyway.
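For what it's worth, on Linux you can check what the CPU actually exposes before buying into the marketing. A minimal sketch: "vmx" is Intel VT-x and "svm" is AMD-V; VT-d/IOMMU support is a chipset/BIOS matter and won't show up in these flags:

    # Sketch: check /proc/cpuinfo for hardware virtualization flags (Linux).
    def virt_flags():
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    flags = set(line.split(":", 1)[1].split())
                    return {"VT-x (vmx)": "vmx" in flags,
                            "AMD-V (svm)": "svm" in flags}
        return {}

    print(virt_flags())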
Re:I skimmed a few... (Score:4, Interesting)
Couple of hundred bucks it costs to go ECC ? (Score:3, Informative)
A couple of tens of bucks, more likely. I've recently bought parts for an AMD-based PC upgrade (in Germany) and the price differences were
- about 7 euros more for 2x2GB of ECC RAM, compared to the same amount of non-ECC. Both Kingston ValueRAM, BTW.
- maybe 10-20 euros more for a board that supports ECC. That one is not as clear-cut, BTW; it is more a case of having fewer choices if you want ECC, and the cheapest boards tend not to support ECC.
In dollars, that's maybe 40 bucks difference total. Or 50 bucks if you want 2
Re: (Score:2)
Does anybody have an idea how the price/performance comparisons change (if in fact they do) from the pure-benchmark ones given in TFAs, if the buyer requires that all the relevant virtualization features be enabled?
If I were to set up a business to do this, to cut thru the incredibly frustrating marketing from both chip manufacturers in exchange for a small cut of the price, would I currently have any competition?
The point of a confuse-opoly like CPUs or American cellphone contracts is to screw over the buyer by confusing them. Aside from screwing over the buyers, it also creates a business opportunity for someone to intermediate themselves while un-screw-up-ing the marketplace.
The general class of idea is something
Re: (Score:2)
I generally don't find CPUs too confusing as they don't change toooo often, but graphics cards I just stopped trying to keep up with years ago.
When I bought a card recently I just googled "x-card vs y-card", hwcompare.com [hwcompare.com] was generally the top result. It has lots of automatically generated pages which compare benchmarks of one card vs another. If you made a site similar to that for comparing CPUs/mobos within certain categories or price ranges then you might make some money from advertising revenue, especia
Re: (Score:2)
Here you go http://www.anandtech.com/bench/Product/434?vs=363 [anandtech.com].
Re: (Score:2)
I was thinking more: choose a category and get a table or chart comparing something like 5-10 processors at a time. Basically just like the roundups that these sites do from time to time, but automatically generated and user-customisable by price range or whatever other features they want to filter by. Useful page though.
Re: (Score:2)
Just select stuff from the left hand pair of menus:
http://www.anandtech.com/bench/CPU/342 [anandtech.com]
Re: (Score:2)
Too lazy... I just generally grep http://www.cpubenchmark.net/ [cpubenchmark.net] and/or http://www.videocardbenchmark.net/index.php [videocardbenchmark.net] to see where things fall... roughly.
Plus, the old stuff I'm comparing to is often too old to be listed on any of the more modern benchmarks listed review sites. But I'm a cheapskate like that ;-D
Re: (Score:2)
Here you go http://www.anandtech.com/bench/Product/434?vs=363 [anandtech.com].
Doesn't mention features at all. Doesn't even filter by socket type much less manufacturer. I am trying to burn thru the marketing and figure out the fastest CPU to buy that supports virtualization for $X on my motherboard of choice.
I'm thinking an expert system where I tell it "Intel, Socket B, virtualization mandatory, gimme the performance vs price table, and if you must narrow it down, narrow it down to sub-$300 please".
I can probably do this by hand in at most a couple hours of work, which is worth a
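The filtering core of such a tool is trivial once the data has been entered; the hard part is keeping the spec table honest. A toy sketch, with made-up performance scores standing in for real benchmark numbers (the VT-d flags follow the K-vs-non-K split mentioned above):

    # Sketch: filter a hand-entered CPU table by socket/feature/budget and
    # rank by performance per dollar. The "perf" scores are placeholders.
    cpus = [
        {"name": "i5-2400",  "socket": "LGA1155", "price": 188, "perf": 100, "vt_d": True},
        {"name": "i5-2500K", "socket": "LGA1155", "price": 216, "perf": 110, "vt_d": False},
        {"name": "i7-2600",  "socket": "LGA1155", "price": 294, "perf": 125, "vt_d": True},
    ]

    def pick(cpus, socket, budget, need_vt_d=False):
        ok = [c for c in cpus
              if c["socket"] == socket and c["price"] <= budget
              and (c["vt_d"] or not need_vt_d)]
        return sorted(ok, key=lambda c: c["perf"] / c["price"], reverse=True)

    for c in pick(cpus, "LGA1155", 300, need_vt_d=True):
        print(c["name"], round(c["perf"] / c["price"], 3), "perf/$")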
Re: (Score:2)
The problems with your business idea are (1) it's a niche business (since most people these days just take whatever CPU their OEM gives them) and (2) the people in the niche find figuring stuff like this out for themselves to be enjoyable, not tedious. They're hardware nerds -- that's why they're building their own rigs in 2011 -- and that means they love poring over spec sheets.
Generally speaking, good businesses are found by looking for things that people hate doing and offering to do it for them for a s
Re: (Score:2)
The part most benchmarks are concentrating on is the 8150, which costs $250 to retailers – i.e. it costs about the same as an i7 2600 when it gets to your wallet. Unfortunately, it seems to perform worse than an i5 2400, so... fail.
Re: (Score:3)
I've only seen the 8150; no other chips out there yet.
My main comparison point is the X6, since I'm looking at upgrading from an X4 940. So far it looks like the 8150 has serious trouble beating the X6 1100T in anything but the heaviest threading and a few x264 encoding benches. So it looks like I'll pick a 1090T for 100 bucks less than the 8150.
Bulldozer might be interesting from an architectural standpoint, but to me it looks like they gimped the execution hardware and tried to make up for that with rather
Re: (Score:3)
subsequently GlobalFoundries fumbled the ball on production, meaning Bulldozer isn't hitting the clock speeds needed to be competitive.
125W TDP, 3.6GHz and still not competitive? Why don't they rename it P4/Prescott AMD Edition while they're at it?
Kinda ironic don't you think?
Re: (Score:3)
very ironic indeed. It is very sad to see AMD gamble on the good old NetBurst play and then foul it up (as was to be expected, really). Even if they had the process control which Intel enjoys, building something a tad inefficient and praying for the clocks to compensate is just stupid.
It also reminds me of the original Phenom: AMD overreached on features, didn't provide solid IPC improvements (although for Phenom, there actually was some improvement over K8), and then fumbled the clock speed so as to m
Comment removed (Score:5, Interesting)
If you are an AMD fan.... (Score:5, Interesting)
buy a 6 core Phenom II, overclock it, and pray that AMD can stay around long enough to fix this mess.
Go check the techreport review and look at the price/performance chart: The 2500K has slightly higher performance, lower price, and *much* better energy efficiency.
Go look at the LKML, where you'll see Linus & Ingo Molnar calling out AMD for design flaws in Bulldozer's cache that AMD wants to paper over with kludgy software workarounds in the kernel: http://us.generation-nt.com/answer/patch-x86-amd-correct-f15h-ic-aliasing-issue-help-204200361.html [generation-nt.com]
I feel bad for AMD's engineers. I *don't* feel bad for the marketing hype machine that has been relying on "geek-cred" from sites like Slashdot and the usual David vs. Goliath myth to get unearned praise. If Intel had come out with Bulldozer instead of AMD, we'd be calling this Prescott version 2.0.
Re: (Score:2)
The...optimism... of Intel's pricing guys can get a touch out of hand when their competitors get weak.
Re: (Score:2)
Re: (Score:2)
The prices of Intel's ten-core stuff in the server space are insane, on the level of "if you have to ask, you can't afford it". Meanwhile there are slightly slower AMD 12-core CPUs for under $1000.
Intel's most expensive chip is $4616, AMD's is $2649. True, you can get AMD chips for under $1000, but then it's no longer very fair to compare them to Intel's most expensive ones either. The 10-cores are more like the Extreme Edition chips, which despite having two fewer cores perform much better than the fastest Opteron...
Re:If you are an AMD fan.... (Score:5, Informative)
We've benchmarked the 10-core 2.0GHz E7 Xeons against the 8-core 2.0GHz Opteron 6128. The Opteron CPUs deliver about 70% of the performance on our workload for about 12% of the price.
The AMD motherboards are much cheaper too.
Bulldozer is underwhelming on the desktop, but it could still deliver great price/performance in the server market. We'll soon see.
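A quick sanity check on those numbers:

    # 70% of the performance for 12% of the price ~= 5.8x the perf per dollar.
    perf_ratio, price_ratio = 0.70, 0.12
    print(round(perf_ratio / price_ratio, 1), "x perf/$ in AMD's favor")  # 5.8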
Re: (Score:2)
Only for single-threaded applications, which are now less relevant than they used to be. Of course you only look at 12 cores (or ten) if you are going to use all of them most of the time.
Re: (Score:2)
I just bought a dual 12-core server from Dell. The difference (all other specs being equal) between the R810 with two 10-core Xeons and the R815 with two 12-core AMDs is about $8k (we also have 256GB of RAM in them; the Intel RAM was more expensive for some reason, even though the speed was the same). That difference in price is going towards a Fusion-io card, which will be a nice little benefit to our IO performance on our database.
Re: (Score:2)
When you drop $60k on a server and another $20k in licensing fees, you really don't care about shaving $2-3k from the price with a slower processor, causing you to order more servers. Give me the fastest CPU.
Re: (Score:3)
When you drop $60k on a server ...then you're doing it wrong. You can get a fully tricked out 48-core 2.5GHz 6100 setup with 512GB RAM from SuperMicro for about GBP 16,000. If you want to throw in a 2U case with loads of disks you might go up to 25,000. But 60k for a server? You're obviously doing something exotic.
another $20k of licensing fees, one really doesn't care about shaving $2-3k
That's per processor, by the way, of which there are 4. That's a saving of $8k to $12k, which is beginning to get significant, espec
Re: (Score:2)
Or you may just as likely have 48 very CPU-intensive non-parallelizable tasks.
Or, most likely of all, looking over all the different fields, you have a mix of tasks that utilize CPUs differently, and find that at peak use, you need 48 cores.
Re: (Score:2)
Or you may just as likely have 48 very CPU-intensive non-parallelizable tasks.
That's kind of the definition of parallelizable and is the ideal case. Actually, it's the case I have. It means that I pay a hefty premium for the fast HT links and large system image, but it's still the cheapest way of getting high-density computing.
Or, most likely of all, looking over all the different fields, you have a mix of tasks that utilize CPUs differently, and find that at peak use, you need 48 cores.
The cluster I use s
Re: (Score:2)
"That's kind of the definition of parallelizable and is the ideal case. Actually, it's the case I have. It means that I pay a hefty premium for the fast HT links and large system image, but it's still the cheapest way of getting high density computing."
No, the definition of embarrassingly parallel, aka the ideal case, is when a single task can easily be spread over multiple processors with little performance loss due to overhead/locks/stalls or simply waiting for other processors to finish their job.
Raytracing
Re: (Score:2)
No, the definition of embarrassingly parallel, aka the ideal case, is when a single task can easily be spread over multiple processors with little performance loss due to overhead/locks/stalls or simply waiting for other processors to finish their job.
Wow, that's arguing semantics. If you have 48 independent tasks to run, then your problem (that the tasks belong to) is embarrassingly parallel and won't even stress your interconnects. It happens to be the case I have. The problem splits into a large number of e
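In code, that ideal case is about as simple as parallelism gets. A minimal sketch: 48 independent CPU-bound jobs spread over however many cores you have, with no shared state and no locks:

    # Sketch: 48 independent CPU-bound tasks, the "embarrassingly parallel"
    # ideal case -- no communication between workers, so it scales with cores.
    from multiprocessing import Pool

    def task(n):
        # stand-in for one independent CPU-intensive job
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        with Pool() as pool:                  # one worker per core by default
            results = pool.map(task, [2000000] * 48)
        print(len(results), "tasks done")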
Re: (Score:2)
The CPU price difference alone would be $8000+ at the very top end (assuming the Intels are four way), and a bit more than that shaving a few hundred MHz off the speed. Of course the boards are also a hell of a lot cheaper for some reason. You also get eight more real cores which you are going to care a lot about if you are even considering such a mach
Re: (Score:3)
AMD had a real good run in the early 2000s; AMD actually was selling more PCs with its chips than Intel. Then Intel's Core 2 Duo processors came out and AMD had to go back into catch-up mode again.
But I have stopped watching the processor market as closely as I did before. Then I wanted to build myself a PC... I was like, Dag-Nabit! They seem to name all the chips with a code name and a number... Now I would expect the larger number next to the code name would mean it is a better chip than the previous code n
Re: (Score:3)
AMD had a real good run in the early 2000s; AMD actually was selling more PCs with its chips than Intel. Then Intel's Core 2 Duo processors came out and AMD had to go back into catch-up mode again.
I'm pretty sure they didn't, and it was just a majority of the retail sales outside the big OEMs. I don't think AMD ever had the fab capacity to supply over 50% of the total market.
Re: (Score:2)
That's a pretty constructive dialog. Is that the norm on LKML these days? Linus, I feel your pain.
But honestly, it's not like the transition to AMD64 was all that smooth, either. We're all members of the breakage-of-the-month club. Others should be care
Re: (Score:2)
I don't think BD is a failure because it can't beat Sandy Bridge in every benchmark. I think it's a failure because it is a *massive* (2 BILLION transistor) chip with a very large (315 mm^2) die and a LARGE power envelope that still doesn't beat a 2600K, even at higher clock speeds and even on the highly multithreaded code where BD is supposed to be superior.
If AMD had come out with a chip that had the same performance as BD but was much smaller and more power efficient (basically the chip that AMD promised rat
Re: (Score:2)
Look at the transistor count for the i7 - it's certainly pushing 2G just like the AMD designs. The big difference is the direction AMD is taking the thing.
What I suspect is that in another couple of generations we'll start seeing the real benefit of AMD's design.
Some points to think about
I suspect AMD is moving back towards the slot based board designs (Slot A) and putting the entire computer onto a card. The only thing a mobo will need to provide are things like
Re: (Score:2)
APU != FPU... not directly, anyway.
The APU works just like a GPU; the only real difference is the one-to-two-orders-of-magnitude lower communication latency (shared L3 does that), which allows smaller matrices of data to be computed effectively.
APU = super low latency, but mediocre throughput GPU
huge potential, but still has the basic limitations of a GPU for the type of work, but not the amount of work.
Re: (Score:3)
Intel's Pentium was greeted with ridicule in its smoking hot 60MHz incarnation (15 watts, can you believe that?). It went on to great success after a die shrink.
I talked to the head of the Pentium Pro to Pentium 4 projects (after he left Intel), and he said that their first power wakeup call came with that chip, when they were told by a company in New York that they couldn't upgrade their desktops because their building's power supply wouldn't be able to cope with the increased load. Sadly, it wasn't until after the Pentium 4 that they really learned this lesson to any degree.
Re: (Score:2)
Well, wait a minute. Obviously it turned out to be a misstep, but what reason do you have for thinking they knew that going in and were just being disingenuous?
Re: (Score:3)
Well, wait a minute. Obviously it turned out to be a misstep, but what reason do you have for thinking they knew that going in and were just being disingenuous?
They didn't know. They (the engineers) thought they were just pursuing the next logical step in the path marketing had decided on with the original P4, selling on high frequency. There was very convincing data they showed that they could extend the P4 architecture up to well above 10GHz and get good performance. Nobody in industry called them on it, because they too didn't see the problem that was just around the corner.
At 60nm, the leakage current of transistors blew up. What was previously a minor pro
Re: (Score:2)
But honestly, it's not like the transition to AMD64 was all that smooth, either. We're all members of the breakage-of-the-month club. Others should be careful what they drool over.
The 64 bit AMD chips, when considered in conjunction with relevant chipsets and when compared to the intel offerings of the day with their relevant chipsets, were astounding examples of efficiency and performance, and while there were real compatibility issues, they were at least fairly scarce.
Intel's Pentium was greeted with ridicule in its smoking hot 60MHz incarnation (15 watts, can you believe that?). It went on to great success after a die shrink.
This is the time when AMD actually failed, with the K6 and its ever so slightly incompatible FPU implementation. Lucky for them, the Pentium had 0.99999999997 or two FPU problems of its own.
I'm always concerned for it
Re: (Score:2)
The problem for AMD is, over the lifespan of a cluster/supercomputer/data center, the major costs aren't manpower; they're floor space, power, and cooling. These Bulldozer cores use drastically more power and run MUCH hotter than the 10 months older Intel parts. Also, not all workloads (even in science) are easily parallelized, so the overall balance of performance advantage leans towards Intel.
Using the same memory, SSD's, GPU and such, the FX-8150 gurgles down 79 watts more under heavy load than the i7-2600k.
Re: (Score:2)
Are you counting the cost of the chipset? In the past, Intel has been horribly bad at producing chipsets that don't suck down power, but I honestly have no idea if they've rectified that situation or not. The early Athlon 64 laptops actually had desktop parts in them and still had power consumption comparable to their Intel-based competitors, due in part to the more efficient chipset.
Re: (Score:3)
This is entire-system power use, minus the monitor. The systems also used the same PSUs, to remove that factor from the comparison.
The Sandy Bridge chipset and CPU revisions really cut down power consumption.
The i7-2600k put under heavy load sucked down 164 watts, the FX-8150 sucked down 243 watts. The i5-2500k sucks down 148 watts under the same heavy load. Another interesting comparison is the A8-3850, which sucks down 165 watts under the same heavy load.
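Back-of-envelope, that 79-watt gap adds up, though not dramatically. A rough sketch assuming 6 hours a day at full load and $0.12/kWh (both made-up inputs):

    # What the 79 W load-power gap (243 W vs 164 W, whole system) costs per year.
    delta_w = 243 - 164                 # FX-8150 vs i7-2600K under load
    kwh = delta_w * 6 * 365 / 1000      # 6 h/day at full load, in kWh/year
    print(round(kwh), "kWh/yr ->", round(kwh * 0.12, 2), "USD/yr")  # ~173, ~$20.76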
Re: (Score:2)
Also, not all workloads (even in science) are easily parallelized,
Yes, but if you're running a cluster, you are by definition running problems that parallelize well. If your workload isn't parallelizable, then the best you can do is run the single thread on the most overclocked, most expensive i7 you can get your hands on.
These Bulldozer cores use drastically more power and run MUCH hotter than the 10 months older Intel parts.
On the server end with the quad 12-core 6100s, the numbers are less definitive. I
Re: (Score:2)
"Yes, but if you're running a cluster, you are by definition running problems that parallelize well. If your workload isn't parallelizable, then the best you can do is run the single thread on the most overclocked, most expensice i7 you can get your hands on."
This is not true these days, since many use clusters even for tasks that are not easily parallellizable, simply because that's what's available.
Also, the 12-core 6100 is Magny-Cours, which is not based on Bulldozer. Bulldozer-based Opterons are under th
Re: (Score:2)
I specifically stated that the Bulldozers run really hot; I said nothing about other AMD chips.
Well, given the article (we are discussing the articles, right?), you did indirectly. The x6 is basically the same core as the 6100 (more or less), and there is a power comparison between x6 cores and bulldozer cores in every single linked article. In fact, the performance per watt figures are very similar.
Re: (Score:2)
No, I specifically compared Bulldozer with Sandy Bridge.
If we add in the X6 (Thuban core), it just gets MORE embarrassing for Bulldozer, considering that Thuban uses the 45nm process while Bulldozer is on 32nm, yet Thuban draws less power with overall comparable performance to the 8150.
Re: (Score:2)
Re: (Score:2)
The direct reason the Jaguar/Titan upgrade uses Interlagos is that it only requires a major redesign of the cooling. Everything else, including the internode memory controllers etc., doesn't require much, making it mostly (keyword: mostly) a drop-in replacement. I'm not saying Interlagos is bad or anything, I'm just pointing out that in the case of Jaguar/Titan, it's for economic and engineering-simplicity reasons, on the hardware side at least.
The software side is going to be "somewhat" more tricky...
Re:If you are an AMD fan.... (Score:4, Insightful)
Re: (Score:2)
Trinity and beyond (Score:3)
You have to hope that whatever sacrifice AMD made in this design was made to better enable the CPU and GPU to be fabricated on the same process in Trinity and beyond.
obviously why SB-e not arriving until Q1 2012 (Score:3)
Re: (Score:2)
AMD isn't about performance anymore (Score:2)
I buy AMD, mostly when I build a machine myself and when I'm on a budget. Not because I like weak chips. I love CPU speed, but I also love to keep my wallet as full as possible. This is what AMD offers: "bang for your buck". AMD is interesting for anyone who wants to balance spending money against reasonable performance. Want pure performance and it doesn't matter what it costs? Go Intel... no questions asked. (This wasn't so in the Athlon XP/64 days.)
Also keep in mind that we are now really on a computing
Re: (Score:3)
I love CPU speed, but I also love to keep my wallet as full as possible. This is what AMD offers: "bang for your buck". AMD is interesting for anyone who wants to balance between spending money and reasonable performance.
Which is why this chip is not a good buy until they drop the price. Right now it is not a good "bang for your buck". Looks like I will be putting Phenom II X6 chips in my existing systems and sitting tight for a year or two.
Re: (Score:2)
The AMD A-series APUs are by far the best value around. The AMD A6-3650 ($120) and the AMD A8-3850 ($135) are on par in performance with the Phenom II 1055T ($150) and the Phenom II 1075T ($160) respectively. If you need the (very) marginal performance boost of the Phenom II 1090T ($170) or the Phenom II 1100T ($190) then you should probably be buying the Intel i5-2400 ($188)
They perform as well as the x
Re: (Score:2)
Note the part where I said "existing systems". If any of my existing systems already had Socket FM1 motherboards, I'd already have APUs in them. ;-)
There's also the minor detail of the APUs not supporting ECC RAM. I generally use ECC RAM for any box that acts as a server (prefer it for desktops too, actually... but it is less of a "must have" and more of a "nice to have" for desktops).
Re: (Score:2)
What? I'd like to know how an A8-3850 (which is basically a 2.9GHz Athlon II X4, better known as an L3-less PII) can keep up with a 1075T.
Sure, for general browsing/word processing the A8 is more than sufficient, but once you get into stuff you would actually need a quad-core for, the PII chips are superior.
Yes, if you are building a new surfer-box for mom, by all means get an A6/A8, but if you do any gaming/encoding stuff, the few extra tenners for a PII X4/X6 pay off. I agree with the Intel i5 2400 recommend
Re: (Score:2)
Re: (Score:2)
Pentium G840 + decent H61 mobo with 4 RAM slots + 4*4GB of RAM == $215, and beats your A6 system in every respect except GPU performance. Quite frankly, if you're able to cope with a 6530, then you're not looking for a big graphics card, and the HD 2000 on the Pentium is almost certainly enough.
Re: (Score:2)
Re: (Score:2)
If you're going to roll back to 8GB of RAM, why not put the money towards a Radeon 6770 to go with the G840... that way you have a faster CPU *and* faster GPU.
Re: (Score:2)
Re: (Score:2)
The point is that the intel system is $35 cheaper... if you're going to say "but I could shave off 8GB from the AMD rig" I can equally say "I could shave off 8GB from the intel rig, and use the $70 now left to get a better graphics card"
Re: (Score:3)
Re: (Score:2)
16GB of RAM? Are you stupid or something?
4GB of RAM is more than enough for "mom". Hell, even 2GB is more than enough for "mom".
Meh, I'm with GP on this... For most casual use I'd rather use money to buy enough cheap RAM to fit all of my OS and apps in memory paired with a cheap 1TB disk, rather than plunk money on even a small SSD (that would just get cached to RAM anyway).
16GB is a bit overkill, but maybe she doesn't like to close tabs or something. :-P
Re: (Score:2)
And when 8 gigs of DDR3 will set you back somewhere in the neighborhood of $45, slapping in 16 gigs isn't really a bank-breaker.
*suddenly experiences the memory, now an abomination, of spending $100 in 1996 for 16 megs of EDO DRAM*
Re: (Score:3)
Re: (Score:2)
Why do you "need" two sticks? Is that an actual requirement? Genuinely interested, not trying to be awkward. If you're referring to the timing boost you get by using two sticks, isn't that negated on most mobos when you put in 4 RAM sticks because of the extra overhead (either in power or just address space, I can't remember...)?
Re: (Score:2)
The idea is that memory is interleaved between the memory chips, so that the memory chips work in parallel to load and unload data from the CPU's caches. That 64-bit bus effectively becomes 128-bit, 192-bit, or 256-bit.
It is for this reason that the old i7s (triple-channel memory) beat up the newer Sandy Bridge chips (double
Re: (Score:2)
Two sticks isn't a requirement for modern CPUs, and as far as I know, there are no timing boosts to be had (but then again, I've been out of the game for a few years). The benefit is that with a dual-channel setup, you get twice as much memory bandwidth, and in modern systems under certain use cases that is a genuine bottleneck. Some benchmarks show a 10% performance increase going from DDR3-1333 to 1600; now imagine the impact of adding not 25%, but 100% more bandwidth.
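You can get a crude feel for that bandwidth by timing a copy of a buffer far larger than the caches. A rough sketch; real tools like STREAM do this properly, so treat the result as a loose lower bound:

    # Sketch: very rough effective memory bandwidth via one big copy pass.
    import time

    buf = bytearray(512 * 1024 * 1024)      # 512 MiB, far bigger than any cache
    t0 = time.perf_counter()
    copy = bytes(buf)                       # one read pass + one write pass
    dt = time.perf_counter() - t0
    gib_moved = 2 * len(buf) / 2**30        # count read + write traffic
    print(round(gib_moved / dt, 2), "GiB/s (very rough)")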
Re: (Score:2)
And one more point: it can actually happen that the price of the memory goes UP when it gets scarce later. But these days your computer might conceivably provide enough computing power for you for years, and if you make an OS upgrade you will want more RAM; why not buy it now when you know you can get it cheaply?
Last time I went looking for ordinary DDR it was priced higher than what I paid for it the first time I got some, while DDR2 was way cheaper. OTOH, I just got some RDRAM for a free machine I pick
Re: (Score:2)
Re: (Score:2)
So basically AMD is good because you're a stinking peasant so impoverished or fully greedy that you can't even be bothered to invest in a 2500, which is faster than your antiquated junk? Oh, but now that we have the trash of the century, let's not forget 16GB of RAM, because 8GB isn't good enough for the lowest form of peasant such as myself. Seriously, how do you live with yourself?
So, do you drive a Ferrari, always fly first class, and have a home IMAX? If not, is driving an affordable car being a stinking peasant?
Re: (Score:2)
So, do you drive a Ferrari, always fly first class, and have a home IMAX? If not, is driving an affordable car being a stinking peasant?
The difference between my i5 system and a slower AMD system would have been about $100 at most, but the AMD system would have been hotter and sucked up more power.
If a Ferrari was as reliable and practical as a Civic but more economical and more powerful and only cost $100 more then we'd all be driving them.
Re: (Score:2)
But the i5 2500 is massively faster than the A6 3650 - even a Pentium G840 is faster than the A6, and it costs less.
Your argument is akin to "but an FX-8150 is way more expensive than a Pentium G840, therefore AMD is sucky value".
Re: (Score:2)
Do these contain DRM? (Score:2)
This is what is keeping me away from buying Core i5 CPUs, even if the AMD ones might be a bit slower.
Rings a bell (Score:2)
The architecture shows promise, but performance gains will take time to materialize, making it difficult to leapfrog Intel to any significant degree.
Hasn't that been the case for 20 years now?
Re: (Score:2)
Nope. Compare the original Athlon to the comparable Intel chips at the time. It was both faster and cheaper, making it an obvious win. The K6-2 was about the price of a Pentium, but performance was comparable to a Pentium II or III. My K6-2 350MHz cost about as much as a 266MHz Pentium II - much less if you included motherboard costs in both cases. The K6-2 and K6-3 were a bit slower than fast Intel chips, but it was very close. The Athlon was faster than the Pentium III and than the early Pentium 4s. The
Re: (Score:2)
The outstanding "integrated" graphics performance of the laptop and desktop Llano APUs (ugh, marketing) greatly expands the number of niches where AMD is the better choice. That now includes nearly every home user on the planet.
I wouldn't worry much about AMD even if this Bulldozer launch is underwhelming.
AnandTech (Score:2)
As usual, I feel somewhat obligated to post up AnandTech's review, which always seems much more in-depth and polished than almost all the sites out there:
http://www.anandtech.com/show/4955/the-bulldozer-review-amd-fx8150-tested [anandtech.com]
amd has better MB choice at lower price points (Score:2)
Intel has limited PCIe on the i3, i5, and low-end i7 boards, which makes USB 3.0 and other onboard stuff eat into the x16 lanes for video.
With AMD you can get a board with lots of PCIe lanes without needing the high-end CPU, or get a high-end CPU and not need to buy a super-high-end motherboard.
Dwarf fortress fail (Score:2)
When new computers are slower, there's something seriously wrong!
A lot of stuff in this story ... (Score:4, Interesting)
- Then there is the fact that these synthetic benchmarks use Intel's proprietary libraries, which were shown to take crippled code paths when a non-"GenuineIntel" CPU was detected.
- Then there is the fact that this is a new platform, and it's just out, and the main point of the design is that it scales easily by adding cores, so AMD can add more cores without much new research. Expect 32-core CPUs in a year or so; 16-core parts are already out.
- As you can understand, these CPUs are geared more for the server environment, and will take that environment over.
- AMD is moving to Trinity in a year or so. Trinity is the APU format that all AMD CPUs will take from then on. Llano APUs have been quite successful in gaming, for example 50-80 fps in StarCraft II (CrossFired and not) -> you don't need to buy an external card anymore, and if you do, you can CrossFire it with the one on the CPU. http://www.anandtech.com/show/4476/amd-a83850-review/6 [anandtech.com] http://techreport.com/articles.x/21730/8 [techreport.com] Intel is worlds behind on this one.
And then there is the ultimate question of what the fuck I am going to do if I grab a powerful processor. Really. I bought an overclockable board and an unlocked CPU, and when I played games, I found out that it was mostly the video card I added that did most of the work. The CPU I had was way, way over any potential requirements and needs of these games. I didn't need to buy a powerful one at all.
I went around hardware/software forums asking what I could do with a powerful computer. The answers have been "video encoding", "benchmarking", "SETI". As it seems, any daily usage is WAY behind the power of modern CPUs. To utilize your CPU power at all, you need to do unorthodox, unnecessary shit, or be in a profession that works with these.
So I think all this performance talk is bullshit. There is no way in hell you will use that performance, even in hardcore gaming with an Eyefinity 3-monitor setup at 5000-wide resolution, with 2x antialiasing and full detail. (And I just have 2x 5670 cards.)
The future is in heterogeneous chips, I think. Llano has already been a success, and it's possible to save 30% on the cost of CPU + mobo + graphics card if you go the Llano way over anything Intel, and the gaming performance is incomparable. When Trinity comes, I think there will be a big change in computing, especially when AMD puts out a computing platform like CUDA.
You folks are truly stoned. (Score:2)
The point of Bulldozer, and of pairing it with AMD GPGPUs, is to leverage all the wealth of work AMD has put into OpenCL 1.1 along with OpenGL 4.x.
When more and more apps leverage OpenCL 1.1 [and the list is growing rapidly] via the likes of LLVM/Clang, where AMD has invested heavily, you'll begin to see that a lot of these "benchmarks" are truly useless and tuned specifically for Intel.
The work AMD is putting in with that marriage should be obvious: http://developer.amd.com/pages/default.aspx [amd.com]
Until appl
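For anyone who wants to try that OpenCL marriage firsthand, the hello-world is a vector add. A minimal sketch using the third-party pyopencl module, assuming an OpenCL runtime (AMD's APP SDK, for instance) is installed; adapted from pyopencl's standard demo, a toy rather than a benchmark:

    # Sketch: OpenCL vector add via pyopencl. Requires an OpenCL runtime.
    import numpy as np
    import pyopencl as cl

    a = np.random.rand(1000000).astype(np.float32)
    b = np.random.rand(1000000).astype(np.float32)

    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)
    mf = cl.mem_flags
    a_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    prg = cl.Program(ctx, """
    __kernel void add(__global const float *a, __global const float *b,
                      __global float *out) {
        int i = get_global_id(0);
        out[i] = a[i] + b[i];
    }""").build()

    prg.add(queue, a.shape, None, a_g, b_g, out)  # run over 1M work-items
    res = np.empty_like(a)
    cl.enqueue_copy(queue, res, out)
    assert np.allclose(res, a + b)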
Re: (Score:2)
I'm pretty sure the 300W figure there is for the full test rig, not the chip itself, just to somewhat blunt that shock.
I agree though, Bulldozer isn't pretty, and the analogy to Intel's P3-to-P4 phase is striking.
Re: (Score:2)
No they didn't [anandtech.com].
Re: (Score:2)
Re: (Score:2)
Why? Your programs are almost certainly not compiled with those optimizations.
That's a more accurate test.
Re: (Score:2)
Future systems are 10W or less, not 300W or less.
You can build an i5 system that idles around 30W and peaks around 100W if you don't need discrete graphics. I doubt you can do the same with anything comparable from AMD.
Discrete GPUs have been the space-heaters in high-end Intel computers for some time now, not the CPU.
Re: (Score:2)
If it weren't for AMD, we'd still be using Pentium 4s at 1.6GHz
Yeah, because it's not like Intel had any other competition. Those 1.6GHz P4s would have stomped on HP-PA, SPARC and other workstation/server CPUs.
I don't want to see AMD go bust, but they look to me to be facing a downward spiral: low sale prices reduce R&D spending, leaving them unable to compete, leading to even lower sales.