AMD Cancels 28nm APUs, Starts From Scratch At TSMC
MrSeb writes "According to multiple independent sources, AMD has canned its 28nm Brazos-based Krishna and Wichita designs that were meant to replace Ontario and Zacate in the second half of 2012. The company will likely announce a new set of 28nm APUs at its Financial Analyst Day in February — and the new chips will be manufactured by TSMC, rather than its long-time partner GlobalFoundries. The implications and financial repercussions could be enormous. Moving 28nm APUs from GloFo to TSMC means scrapping the existing designs and laying out new parts using gate-last rather than gate-first manufacturing. AMD may try to mitigate the damage by doing a straightforward 28nm die shrink of existing Ontario/Zacate products, but that's unlikely to fend off increasing competition from Intel and ARM in the mobile space."
Good or Bad? (Score:1)
Take your time, let software catch up. (Score:4, Insightful)
Re:Take your time, let software catch up. (Score:4, Funny)
Apps not OS (Score:3)
All things considered, the operating systems are seriously improving
Re:Take your time, let software catch up. (Score:5, Insightful)
So far I have been totally unable to tax my current CPU past 40% utilization. I think we can take a break and let software catch up and older systems fall off the support map before the next generation of CPUs hit.
Just because your usage scenario is not CPU-bound does not mean everyone else's is.
Comment removed (Score:5, Interesting)
Re:Take your time, let software catch up. (Score:4, Funny)
So while the guys that run gamer sites or live for benchmarks will scoff, frankly the average user, which outnumbers them by 100,000 to one (the last number on hardcore PC gamers I saw put the number at 30 million)
Okay I heard Earth has an overpopulation problem, but did I doze off there for a while? Because I seem to have missed some recent developments...
Re: (Score:2)
Re: (Score:2)
But I'm also seeing an inversion of the old rule of thumb about the price-performance curve. In the past, a plot of the price (y) vs. performance (x) curve would track a diagonal line, and above a certain point the curve would shoot up vertically. For a small gain in performance, the price would skyrocket up.
Re: (Score:2)
Re: (Score:2)
Aw man, I hate you guys over in the US. The cheapest C-50 laptop here is 220 euros, and I'm pretty sure the 10" keyboard would be a dealbreaker for me; 12" E-350 machines start at 350 euros.
And sadly, the C-50/60 and E-240/300 machines all come with Windows 7 Starter and 1 GB of RAM; pretty much artificial crippling if you ask me.
Re: (Score:2)
Re: (Score:2)
Re:Take your time, let software catch up. (Score:5, Insightful)
The change in feature size won't just be useful for getting faster processors (although servers could use some of those); it is also important for reducing the power footprint of the chips (this being AMD, it means both CPU and GPU will use less power) and for reducing the price of those chips.
Re: (Score:1)
Who is "we"? Oh right, it's everyone who buys microprocessors, because we're all running the same software and doing the exact same things with our computers.
Re:Take your time, let software catch up. (Score:5, Interesting)
Well, DfrgNtfs.exe is using 25% of my quad-core, and I'm not doing much else. I've gone well into 70% or more at times if I'm actually doing something intensive.
I'm using 7GB out of 8GB of RAM, and if I had 16GB I could probably put a hell of a dent in it too.
I don't even consider what I'm doing to be much of a load, and in the past I've been on machines where something literally was CPU bound for as much as an hour and I needed to walk away.
I don't even find it tough to use up that many resources ... hell, I stopped using Mozilla because it would expand to well over 1GB of RAM overnight (with the same # of windows and tabs that used to fit in 300MB).
I think the software has already caught up ... especially if you're like me and open something and leave it open.
Re: (Score:1)
Defrag? 1995 called and wants its file systems back. News flash to the rest of the world: using (almost) all your RAM is a Good Thing. Can you say RAMdisk?
Oh, for a few 10s of GB of RAM, and an SSD array to fill it.
Re: (Score:2)
1995 called? So ext4 is from 1995? It has an online defrag utility, you know.
Re: (Score:2, Flamebait)
2009 called ... I'm running Vista. My Linux boxes are all now VMs ... I've no interest in running Linux as my primary box anymore.
But, I see you're living up to your nick.
Re:Take your time, let software catch up. (Score:4, Informative)
Vista? Ack.
At least have the decency to install Windows 7.
Re: (Score:2)
And in what meaningful way would that be different from an up-to-date Vista?
Re: (Score:3)
Re: (Score:2)
Sadly, it's a PC that's getting close to 3 years old ... and I'm pretty sure that 8GB was the maximum at the time.
I've always been of the opinion that nothing increases the longevity of a computer more than an obscene amount of RAM. Otherwise, I would.
Re: (Score:2)
LOL ... if I had 16GB of RAM just laying around, I'd test it.
I'm pretty sure the 2x4GB I have in there is what the specs say is the max.
Re: (Score:2)
Yeah, I'd love 16GB on that ... motherboard manual says 2x240 pin ... a total of 8GB supported.
A nice thought, though.
Never thought I'd see the day where I was seriously contemplating 16GB for my home machine ... hell, I remember upgrading my old 486 to 20MB of RAM, and that was bigger than the Sun workstations at school at the time.
And, of course, the notion of having one's own Terabyte seemed ludicrous ... now I have 6TB. :-P
Re: (Score:2)
Not really. On my system, performance starts to suffer once applications are taking up all but 1 GB or so; if non-app memory drops below 50 MB, the system becomes unusable.
Re: (Score:2)
Seriously, I just read a Web browser comparison: FF8 is the LEAST memory-hungry Web browser (from a panel of Safari, Chrome, IE9, FF8 and Opera).
Re: (Score:2)
This was a while back, but I once ran a ray tracing project that ran nonstop for two weeks, essentially 100% CPU the whole time. In fact it didn't even finish - it was 2/3 done when someone else pulled the plug on it accidentally. Fortunately the data for that much of the picture was saved to a file as it went. Nowadays the same project would probably take 10 minutes, but hey.
Re: (Score:2)
Firefox has allocated 628MB on my 8GB system after running for days. That's still a lot of RAM (although I have the memory cache turned up pretty high on this system) but it's not a gigabyte overnight. I think you were running crappy extensions.
Re:Take your time, let software catch up. (Score:5, Informative)
With multi-core CPUs, just because you can't reach 100% usage doesn't mean you're not CPU limited.
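To see why: a program can be completely CPU-bound yet show only 25% total usage on a quad-core, because its one hot thread can only occupy one core. A minimal, purely illustrative Go sketch (not from any poster here):

```go
// A completely CPU-bound, single-threaded program. On a quad-core
// machine the overall usage graph tops out around 25%, even though
// anything waiting on this loop is entirely CPU limited.
package main

func main() {
	x := 0
	for {
		x++ // busy work, pinned by nature to a single core
	}
}
```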
Re:Take your time, let software catch up. (Score:5, Interesting)
Exactly. Too bad I already posted in the thread and can't mod you up anymore.
Nobody pays much attention to single-core performance anymore, and I have no idea why. There are tons of programs that people use on a regular basis that are single-core limited.
Intel has made only modest gains in performance-per-clock-cycle since the Core 2 Duo. AMD, I'm pretty sure, is actually going backwards, if I am correctly remembering some of the Bulldozer vs. Thuban reviews.
Looking at forthcoming offerings, AMD especially seems to be assuming that we're all constantly using our CPUs to run HandBrake 24/7 or batch-encode a couple hundred WAVs to MP3 at a time, and thus would love 12 cores.
Re: (Score:2)
Nobody pays much attention to single-core performance anymore, and I have no idea why. There are tons of programs that people use on a regular basis that are single-core limited.
Have you seen the Bulldozer reviews? They've been hitting AMD over the head due to its poor single-thread performance (amongst other things...)
Re: (Score:2)
Nobody pays much attention to single-core performance anymore, and I have no idea why. There are tons of programs that people use on a regular basis that are single-core limited.
Intel has made only modest gains in performance-per-clock-cycle since the Core 2 Duo. AMD, I'm pretty sure, is actually going backwards, if I am correctly remembering some of the Bulldozer vs. Thuban reviews.
Have you seen the Bulldozer reviews?<snip>
It's safe to assume that yes, they are aware of the reviews since they explicitly mentioned them.
Re:Take your time, let software catch up. (Score:4, Informative)
Nobody pays much attention to single-core performance anymore, and I have no idea why. There are tons of programs that people use on a regular basis that are single-core limited.
There's a very simple reason: physical limitations. The current processor technology is more or less maxed out for single-thread performance. There are probably some gains available by completely changing the instruction set or completely giving up on multi-thread performance, but nothing that Intel can put into a chip they can sell. They can't up the clock speed anymore due to the speed of light (except a little bit when doing a die shrink). The obsession with multi-core isn't because Intel and AMD think everyone wants to run more threads; software is moving towards using more threads because Intel and AMD simply can't improve single-thread performance but, at least for a little while longer, they can keep adding more cores.
Re: (Score:3)
They can't up clock speed anymore due to the speed of light (except a little bit when doing a die shrink).
Poppycock. The reason Intel/AMD don't scale their clocks much beyond the current 3-3.5 GHz is mostly because the power demands increase exponentially. Intel's NetBurst design had a feature called the Rapid Execution Engine, which basically ran the integer ALUs at double the clock rate. The 3.8 GHz Pentium 4 had its ALUs running at 7.6 GHz; the reason this didn't scale beyond some of the execution hardware was very much down to the power budget.
And honestly, Bulldozer's design team should be hit over the head
Re:Take your time, let software catch up. (Score:4, Insightful)
Looking at forthcoming offerings, AMD especially seems to be assuming that we're all constantly using our CPUs to run HandBrake 24/7 or batch-encode a couple hundred WAVs to MP3 at a time, and thus would love 12 cores.
I think it's quite obvious that AMD didn't have the resources to hit many targets, so they picked two:
1) Laptops/Low-end PCs with Bobcat cores (Fusion/Llano APUs)
2) Servers with Bulldozer cores (Valencia/Interlagos)
Sadly the latter seems to have misfired a bit even in the server arena, but there's no question IMHO that the high-end desktop market was intentionally abandoned. Either that, or they've missed their design targets by many miles; but they can't have been that far off on single-core performance. I can sort of understand: Intel was already dominating, the Atom threatened their low end (remember, CPU designs have a 2-3 year lead time), and they couldn't afford to lose their bread-and-butter machines. So they aimed Bobcat low (power), Bulldozer wide (cores) and left Intel to compete with themselves. Not to be too much of a cynic, but it's better for AMD to win some markets than to be a loser in all of them.
Re: (Score:2)
Actually, the changes to the core of Windows 7 mean that most workloads end up split nearly evenly across processors anyway.
I had a batch file at a previous company calling all "single-threaded" applications and during the entire run of the batch, all 4 CPUs were within 5% of each other. Bring up your Task Manager Performance tab someday and leave it up all day at work. You might be surprised.
Antivirus? (Score:2)
Re: (Score:1)
That's easy!
I just start a thread with an infinite loop for every cpu core.
Kids these days...
Can't code themselves out of a wet paper bag to save their lives...
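For what it's worth, the trick really is a few lines. A rough Go sketch, purely illustrative, of "one spinning thread per core":

```go
// Saturate every logical CPU with a busy loop: one spinning
// goroutine per core pushes total utilization to ~100%.
package main

import "runtime"

func main() {
	for i := 0; i < runtime.NumCPU(); i++ {
		go func() {
			for {
				// spin forever, burning one core's worth of cycles
			}
		}()
	}
	select {} // park main forever while the spinners run
}
```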
Re: (Score:2)
Hehe. Install Ad-Aware, and run a full system scan. Watch those cores get used...
Re:Take your time, let software catch up. (Score:5, Insightful)
Seriously, this.
In building computers for my wife and my brother, I just went with lower-end i3 and Phenom X2(4) processors. Why? Because the effective performance difference between the two for the applications they are running is .001%. And the price difference between those and, say, an i7 is 1000%.
But I made sure to get both systems SSD drives. Price difference? About 200% (500GB HDD $60 vs 128GB SSD $125). But the performance difference is about 700%.
Re:Take your time, let software catch up. (Score:4, Informative)
Software isn't the bottleneck. Caches are *tiny* compared to the size of even single functions in modern programs, which means they get flooded repeatedly, which in turn means that you're pulling from main memory a lot more than you'd like. Multi-core CPUs aren't (as a rule) fully independent - they share caches and share I/O lines, which in turn means that the effective capacity is slashed as a function of the number of active cores. Cheaper ones even share(d) the FPU, which was stupid. The bottleneck problem is typically solved by increasing the size of the on-chip caches OR by adding an external cache between main memory and the CPU. After that, it depends on whether the bottleneck is caused by bus contention or by slow RAM. Bus contention would require memory to be banked with each bank on an independent local bus. Slow RAM would require either faster RAM or smarter (PIM) RAM. (Smart RAM is RAM that is capable of performing very common operations internally without requiring the CPU. It's unpopular with manufacturers because they like cheap interchangeable parts and smart RAM is neither cheap nor interchangeable.)
Really, the entire notion of a CPU - or indeed a GPU - is getting tiresome. I liked the Transputer way of doing things (System-on-a-Chip architecture) and I still like that way of doing things. The Transputer had some excellent ideas - it's a shame it took Inmos so long to design an FPU (and a crappy one at that) and given that the T400 had a 20MHz bus at a time most CPUs were running at 4MHz, it's a damn shame they failed to keep that lead through to the T9000.
What I'd like to see is a SoC where instead of discrete cores (uck!) you have banks of independent registers, pools of compute elements and hyperthreading such that the software can dynamically configure how to divide up the resources. There's nothing to stop you moving all the GPU logic you like into such a system. It's merely more pools of compute elements. Microcode is already in use and microcode is nothing more than software binding of compute elements to form instructions. (Hell, microcode was already common on some architectures back in the 80s and was available for microprocessors within a decade of their being invented.) There's nothing that says microcode HAS to be closed firmware from the manufacturer - let the OS do the linking. It's the OS' job to partition resources and it can do so on-the-fly as needs dictate - something a manufacturer firmware blob can't do. Put the first 4 gigs onto the SoC and have one MMU per core plus one spare, so that each core can independently access memory (provided they don't try to access the same page). The spare is for direct access to memory from the main bus without going through any CPU (required for RDMA, which most peripherals should be capable of these days).
Such a design, where the OS converts the true primitives into the primitives (ie: instruction set) useful for the tasks being performed, would allow you to add in any number of other true primitives. Since any microcode-driven CPU is essentially a software processor anyway, you can afford to put extra compute elements out there. Any element not needed would not be routed to. Real-estate isn't nearly as expensive as is claimed, as evidenced by the number of artistic designs chip manufacturers etch in. Those designs are dead space that can magically be afforded, but there's nothing to stop you from replacing them with the necessary inter-primitive buffering to build ever-more complex instructions from primitives without loss of performance. I'm willing to bet HPC would look a whole lot more impressive if BLAS and LAPACK functions were specifically in hardware rather than being hacked via a GPU.
Of course, SoC means larger chips. So? Intel was talking about wafer-scale processors several years back (remember their 80-core boast?) and production has only improved since then. The yield is high enough quality that this is practical and since the idea is to software-wire the internals it becomes trivial to bypass defects. T
Re: (Score:3, Informative)
Software isn't the bottleneck. Caches are *tiny* compared to the size of even single functions in modern programs, which means they get flooded repeatedly, which in turn means that you're pulling from main memory a lot more than you'd like.
Wrong.
The code size of an average function is much smaller than the instruction cache of any modern processor.
And then there are L2 and L3 caches.
Instruction fetch needing to go to main memory is quite rare.
And as for data... that depends totally on what the program does.
Multi-core CPUs aren't (as a rule) fully independent - they share caches and share I/O lines, which in turn means that the effective capacity is slashed as a function of the number of active cores. Cheaper ones even share(d) the FPU, which was stupid.
None of the CPUs sharing an FPU between multiple HW threads are cheap.
Sun's Niagara I had a slow shared FPU, but the chip was not cheap.
AMD Bulldozer, which usually has sucky performance, sucks less on code which uses the shared FPU.
FPU operations just h
Re: (Score:2)
Lastly, compilers are often god-awful bad at adding in parallel processing. Not that they should have to -- the programmer is SUPPOSED to be competent at this. Parallel programming has only been standard CS material since 1978! If programmers aren't capable of writing efficient parallel programs by now, they need to be dropped off a cliff and replaced with programmers who can write. (...) What matters, though, is that high performance IS achieved by people who bother. If a given programmer can't achieve the same results, it is because they can't be bothered. For all the problems with compilers, I refuse to blame the available technology for the incompetence of code monkeys.
So what? Mathematicians have had number and field theory for centuries; it doesn't make them easier to understand. Recipe-programming is easy to understand: there are no dependency issues, no resource contention, just a simple start-to-finish sequence of events. Simple interactions like worker threads and resource pools are easy to work out; you only mutex it so that you don't grab the same work packet or resource.
Truly parallel programming is to me like having 20 chefs in my house cooking a meal, all using limited
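A rough sketch of that worker-pool pattern, in Go; the names are illustrative, and a channel stands in for the explicit mutex the poster mentions, since the channel itself guarantees no two workers grab the same packet:

```go
// Several workers pull packets from a shared queue; each packet is
// delivered to exactly one worker, so there is no double-grabbing.
package main

import (
	"fmt"
	"sync"
)

func main() {
	packets := make(chan int)
	var wg sync.WaitGroup

	for w := 1; w <= 4; w++ { // four "chefs" in the kitchen
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for p := range packets {
				fmt.Printf("worker %d handled packet %d\n", id, p)
			}
		}(w)
	}

	for p := 0; p < 20; p++ {
		packets <- p
	}
	close(packets)
	wg.Wait()
}
```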
Re: (Score:2)
Doesn't matter if the chess program can look at a million more moves or a billion. Chess Grand Masters look at patterns and compute which patterns are better than other patterns, which means that the pattern itself is a function. The better the Grand Master, the better the evaluation function. You need only have a function that evaluates the permutation of pieces on the board to a degree that is greater than the computer's evaluation of the permutation of a billion moves. Since Chess is a Full Information G
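For illustration only, and nothing like the "pattern" functions being argued about here: the crudest possible board-evaluation function is a plain material count. A hypothetical Go sketch:

```go
// Toy static evaluation: score a position by summing material values.
// Positive favors White, negative favors Black. Real engines layer
// positional pattern terms on top; this is only the crudest baseline.
package main

import "fmt"

var pieceValue = map[rune]int{
	'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9, // White pieces
	'p': -1, 'n': -3, 'b': -3, 'r': -5, 'q': -9, // Black pieces
}

func evaluate(board string) int {
	score := 0
	for _, sq := range board {
		score += pieceValue[sq] // empty squares and kings score 0 here
	}
	return score
}

func main() {
	// White queen + pawn vs. Black rook: +5 in White's favor.
	fmt.Println(evaluate("QP......r"))
}
```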
Re: (Score:2)
Doesn't matter if the chess program can look at a million more moves or a billion. Chess Grand Masters look at patterns and compute which patterns are better than other patterns, which means that the pattern itself is a function. The better the Grand Master, the better the evaluation function. You need only have a function that evaluates the permutation of pieces on the board to a degree that is greater than the computer's evaluation of the permutation of a billion moves. (...) So, yes, it is because you're lazy.
...okay, I don't even know what to say to that. I have no idea what it's like on your planet, but around here we're only human. No wonder developers aren't up to your standards....
Re: (Score:2)
...okay, I don't even know what to say to that. I have no idea what it's like on your planet, but around here we're only human. No wonder developers aren't up to your standards....
Totally agree. I was initially inclined to say (s)he's trolling, but (s)he's clearly quite learned in computers. Maybe (s)he expects that all people are just that smart... Expecting people to get parallel programs right on the first try, given their complexity, is not reasonable, at least where I work (myself included). In fact, I was just working with a developer today to fix a reader/writer issue triggered by parallelism both in code and in writing to the DB. We had to sit down and think out the use
Re: (Score:2)
Dunno, man, but my CPU is running 98-100% as I write this.
Re:Take your time, let software catch up. (Score:4, Funny)
So far I have been totally unable to tax my current CPU past 40% utilization.
Oh, you should try Firefox sometime!
Re:Take your time, let software catch up. (Score:5, Funny)
I salute you, mythical IT-worker who manages to get an overclocked computer work-approved.
Re:Take your time, let software catch up. (Score:5, Informative)
I salute you, mythical IT-worker who manages to get an overclocked computer work-approved.
Who said it was approved? In a previous job a friend inherited a computer from someone who'd left, and never understood why it would crash every few days and hit bugs that no one else seemed to see, until he looked in the BIOS and discovered the previous user had overclocked it.
Re: (Score:3, Insightful)
... ok. I'll bite.
If you -know- that it's not stable, why didn't you clock it back down to spec, or at least down to where you can be sure it is truly 100% stable? Aren't you losing more time by doing multiple redundancy checks on your resultant data sets than you're gaining by the few extra clock cycles?
You are doing random spot checks on your data, right?
As anybody who has lived with an -almost- stable overclock for long periods of time knows, if it's not 100% stable, you're getting little computational d
Re: (Score:2)
re-reading, I apologize. I confused you and GGP as the same poster, and thought you were getting errors on a system you were keeping overclocked. My mistake.
Re: (Score:2)
Did everything work as expected once you set it back to stock speeds in the BIOS?
Re: (Score:2)
PROOFREAD please (Score:1)
It's TSMC, not TMSC.
Thank you.
Competition ? (Score:4, Informative)
http://techreport.com/articles.x/21730/8 [techreport.com]
It's actually possible to game at acceptable detail and FPS on entry- to mid-level laptops without paying a fortune now.
Re: (Score:1)
You misinterpreted the statement to be about APUs whilst the statement was about the CPU market in general.
Re: (Score:1)
Re: (Score:2)
Yeah and? You do realize that sentences can have different contexts than the one before them, right?
Re: (Score:1)
They should have the same context. That's what paragraphs are for.
Re: (Score:2)
Very true - AMD compete well against Intel in entry-mid laptops.
Unfortunately, it's a rather narrow segment.
Re: (Score:2)
I believe that it is the widest consumer segment actually. Desktop usage is shrinking and gaming has been held back by consoles.
Re: (Score:1)
Desktop share *was* shrinking a couple of years ago, but it leveled off. It's now 50-50.
Re: (Score:2)
APU market is small, desktop market is big. AMD's APUs compete in both markets.
Pretend you went back 15 years ago and tried selling a dual core desktop CPU. You could claim you're doing well in the multi-core desktop market.
Re: (Score:2)
Jealous much?
Global Foundries (Score:5, Informative)
The description is somewhat misleading in that Global Foundries is not a "long-time partner," but what were AMD's own internal wafer fabs until Global Foundries was spun out as a separate company in 2009.
TSMC (Score:3)
Yeah, and TSMC is the foundry that ATI has used for years (and still does). The plan with the APUs has always been to move ATI's GPU to AMD's^W GlobalFoundries' process. They have given up on that and decided to move AMD's CPU to the TSMC process instead. It's a pretty big turn of events.
Extremely useful summary (Score:2, Insightful)
Moving 28nm APUs from GloFo to TSMC means scrapping the existing designs and laying out new parts using gate-last rather than gate-first manufacturing. AMD may try to mitigate the damage by doing a straightforward 28nm die shrink of existing Ontario/Zacate products, but that's unlikely to fend off increasing competition from Intel and ARM in the mobile space
After reading the summary (a few times), I came to the conclusion that I know nothing about this topic. Thanks for the heads-up, so that I was not burdened with reading an article that only a select few might understand or care about.
waaait a minute (Score:3)
So far, all Bobcat-based chips have been made at TSMC, haven't they? So is this really news?
Long-time partner? Really? (Score:5, Informative)
Re: (Score:1)
Re:Long-time partner? Really? (Score:5, Interesting)
The Maturation of the American Economy (Score:1)
An x86 CPU manufacturer can and should survive. Maybe Intel or Microsoft or Apple will buy them out to put them out of their misery. The quicker customers can box themselves in, the better. Choice is fleeting and, obviously, choosing the current "best" processor is always in your "best" interest, with no thought of the long term. But maybe ARM really is meant to eventually replace the x86 architecture.
You have to silently face East at 11am EST (Score:3, Funny)
Oh my god, there's less than 70 shopping days left!
It's tradition in my house that on Financial Analyst Day, or FAD as we call it, we make spiced wine and spike it with DMT, then sit around singing appropriate songs, such as "Money" by Pink Floyd, "Money (That's What I Want)" by the Beatles and "Gimme da Loot" by Biggie Smalls.
Then, sitting in a circle, we pass around a revolver with only one shell loaded; each of us spins the cylinder, points it at the person to the left and pulls the trigger.
It's by far my favorite holiday.
AMD APU graphics make big difference (Score:2, Interesting)
APU unlikely to fend off increasing competition from Intel? Most Intel Atom-based netbooks/tablets/whatever that I know of have the GMA 3150, which runs at 200 MHz max and has 2 shader units. The C-50 has 80 unified shaders running at 280 MHz (yes, again low, but I'm guessing 80 things working in parallel make up for it; please correct me if I'm wrong), supporting DX11, OpenGL 4.1 and UVD 3. Way better than Intel graphics. True, the CPU isn't very fast, but for things like video playback and 2D/3D games a
They need to develop a CPU+memory module (Score:2)
A single part that has the CPU and the memory on a single PCB. Have 2, 4, 6 and 8 GB models. Put the memory right next to the chip and eliminate complexity. You could still add RAM to the mobo, but it would act as cache for other things like disk and video. You could even have multi-socket mobos, but the CPUs would not share memory except through the secondary memory.
Re:AMD = Stagnated. (Score:5, Interesting)
I hope you like $500 celerons...
If this was 1995, I'd believe it. In 2011, Intel competes with itself. If they drive up CPU prices, they won't be able to make more and more profits because people do *NOT* need to upgrade. The vast majority of the population is doing fine on a dual core 4+ year old CPU running a browser and IM program and watching videos. Since people do not need to upgrade, but Intel has to sell more and more CPUs, their profits would collapse and then the stock and then ... hilarity ensues.
Re: (Score:2)
I hope you like $500 celerons...
If this was 1995, I'd believe it. In 2011, Intel competes with itself. If they drive up CPU prices, they won't be able to make more and more profits because people do *NOT* need to upgrade. The vast majority of the population is doing fine on a dual core 4+ year old CPU running a browser and IM program and watching videos. Since people do not need to upgrade, but Intel has to sell more and more CPUs, their profits would collapse and then the stock and then ... hilarity ensues.
Actually, the vast majority of people worldwide don't even have access to a computer. For those who do, the vast majority probably don't even need a dual-core CPU for most of what is done. Engineers, graphic designers and gamers would be the exceptions to that statement.
Re: (Score:2)
Actually, the vast majority of people worldwide don't even have access to a computer. For those who do, the vast majority probably don't even need a dual-core CPU for most of what is done.
The vast majority probably don't need more than dial-up connectivity, though it sure is much more pleasant when you have broadband.
Dual core is the same way. It's much more pleasant to work on a dual-core machine than a single core, because most people multitask (listen to music, watch video, browse sites with many tabs, plus OS and antivirus and Dropbox sync and blah blah in the background).
Your point has more validity above dual core. Certainly engineers, graphic designers, and gamers have a better chanc
Re: (Score:2)
Re:AMD = Stagnated. (Score:5, Insightful)
Your assumption that you can simply ignore AMD's influence in the CPU market and still end up with a relevant model to explain and predict its outcome is both naive and disingenuous. AMD does have products which outperform equivalent Intel products, even when not accounting for Intel shenanigans such as relying on funny compiler tricks, and AMD happens to price them quite attractively. If you haven't considered any AMD offering at any budget for any serious desktop and instead opted to rely only on Intel products, then you are both clueless and economically challenged.
Re: (Score:2)
AMD does have products which outperform equivalent Intel products,
Such as?
Re: (Score:2)
If you are really interested to know, then you should simply pick any random benchmark from the web and compare prices. For example, in some benchmarks [cpubenchmark.net] the AMD FX-8150 processor, which goes for about 220 euros, outperforms Intel Core i7-2860QM systems, which sell for around 500 euros. And in the nearest mom&pop store, an AMD Phenom II X6 1100T goes for 178 euros while the Intel Core i7 870 goes for 240 euros.
But seriously, pop up any random benchmark between recent intel and AMD processors and
Re: (Score:2)
So a disingenuous comparison! Why use the 2860QM to compare to the 8150 when you could compare to the cheaper i7-2600, which is only $30 more and has 30W less TDP while still outperforming the Bulldozer? Or why not compare that 1100T to the i5-2500, which is way more performant, again 30W lower TDP and only $35 more? Oh right, because that doesn't create as insane a price gap.
Re: (Score:2)
Correction: the i5-2500 is only $20 more. The i5-2500K, which is even faster, is the one that's $35 more than the 1100T.
Re: (Score:2)
'Disingenuous' ... you keep using that word, I don't think it means what you think it means. Are you seriously comparing a *LAPTOP* processor to a *DESKTOP* processor? And I am disingenuous? You should compare it to the i5-2500K, which is cheaper and way better performing for most tasks and runs significantly cooler.
AMD = Important. (Score:2)
Re:AMD = Stagnated. (Score:5, Informative)
Intel i5 661: http://www.newegg.com/Product/Product.aspx?Item=N82E16819115217&Tpk=i5%20661 [newegg.com]
According to these benchmarks [cpubenchmark.net], we have:
And this doesn't account for the money spent on a motherboard, which adds a hefty price to any Intel offering.
So, looks like you botched your careful number check.
Re:AMD = Stagnated. (Score:4, Informative)
I got an AMD Phenom II X4 840 for $59.99 a few days ago (at Microcenter); I'm sure it's more than half as fast as a 965, so it's an even better value. I got a new motherboard (AMD 760G chipset) with it too; it was also $59.99. Not bad, I think -- would I have been able to find an Intel solution for that price/performance?
Re:AMD = Stagnated. (Score:5, Interesting)
In 2011, Intel competes with itself.
That's part of the problem. One of the speculated reasons the Atom processor is so far behind is that Intel was afraid it would cannibalize more profitable segments of its mobile CPU market. As a result, they launched it with a bunch of contractual restrictions (customers had to agree not to use it in any notebook larger than a 10" form factor), while using pricing models that discouraged 3rd-party graphics (Atoms bundled with Intel's chipset were sometimes actually cheaper than solo Atoms, making nVidia ION combos uneconomical).
Since AMD had no strong CPUs in the netbook segment, everyone had to simply accept these restrictions at first, until AMD introduced their Ontario and Zacate series.
Re: (Score:3)
Oh, they can go slower. The world market is still expanding, both in size and in the average price people can afford; companies will still buy them for their X years of support, laptops break down, and so on. Intel wouldn't drive prices up as such, they'd bring costs down. Sell 22nm processors at the same prices as 32nm processors: does that sound massively profitable to you? It does to me. In the end they'll sell you something that costs like an Atom for the price of a 2600K. Or maybe just slow down their tick-tocks, let
Re: (Score:2)
Bulldozer is a failed architecture: lower performance and higher power draw than their own lower-priced chips like the Phenom II six-cores.
Re: (Score:3)
I wouldn't be too sure about that. The Pentium Pro failed miserably as a CPU offering, yet ended up as the basis for the Pentium 2 and 3, and then the Pentium M, and going forward. Just because Bulldozer in its first release has done poorly may be due to some design issues that we just don't know about, and in the next rev, may be fixed.
Re: (Score:2)
How about a Phenom that can be used in a multiple socket motherboard? It might destroy their Opteron marketshare, but they would own the desktop + server market.
5 x ~$200 Phenom II X6s...30 cores for $1000.
Re: (Score:3)
Or get an i5-2500K, which is faster than a lot of the X6s, for only, like, 20 bucks more.
Re: (Score:2)
And 30W less TDP.
Re: (Score:2)
AMD tried this once before with their AMD QuadFX 4x4 concept [wikipedia.org]. It didn't go anywhere.
The problem is that most games are insufficiently multi-threaded to take advantage of a dual processor architecture. A hard core group of gamers exists that would purchase dual processor and quad processor Opteron and Xeon motherboards if it resulted in increased game performance. Unfortunately, best game performance is often obtained from single processor desktop chips.
Bottom line: Games often struggle at keeping more
Re: (Score:2)
Agreed. However, my point was that with all the problems they are running into with Bulldozer, they might be able to bridge the problem by modifying (hopefully slightly) their Phenom II design, and could spend a year or two punishing Intel. And I do mean punishing them.
And while games may not be designed to take advantage of those extra cores, I can think of a host of applications between the workstation - server range that could.
Virtual Machines, for starters. Video encoding for another. Databases love cor
Re: (Score:2)