AMD Outlines Plans For Zen-Based Processors, First Due In 2016
crookedvulture writes: AMD laid out its plans for processors based on its all-new Zen microarchitecture today, promising 40% higher performance-per-clock from the x86 CPU core. Zen will use simultaneous multithreading to execute two threads per core, and it will be built using "3D" FinFETs. The first chips are due to hit high-end desktops and servers next year. In 2017, Zen will combine with integrated graphics in smaller APUs designed for desktops and notebooks. AMD also plans to produce a high-performance server APU with a "transformational memory architecture" likely similar to the on-package DRAM being developed for the company's discrete graphics processors. This chip could give AMD a credible challenger in the HPC and supercomputing markets—and it could also make its way into laptops and desktops.
Finally a replacement (Score:2)
I've been hobbling along with my FX-8350 and AM3+ for a while now and have been wanting to upgrade. If it lives up to the hype, unlike 'dozer and Piledriver, then I'll definitely get one. Now if only the process actually works....
Re:Finally a replacement (Score:5, Interesting)
Re: (Score:3, Informative)
Still trucking along on a Phenom II X6 1045T... 6x2.8 GHz or 3x3.2 GHz still seems like a lot. I can't remember the last time I was CPU-bound. I have to spend more than a hundred bucks on a GPU, I guess.
Re: (Score:2)
One of our HTPCs is rocking an Athlon64 with an All-in-Wonder 9800 AGP card. One of the last ones to support component video out. Plugged into a 27" CRT. Works great for Plex Web client (via Google Chrome).
The main server in the house is a Phenom-II something-or-other. With a lowly, silent Nvidia 210 GPU for transcoding videos as needed via Plex Mediaserver.
Neither system is CPU-bound, or even GPU-bound, for what they do. The lowly 1 TB SATA drives in the server (even in a 4-disk RAID10) are the bottle
Re: (Score:2)
One of our HTPCs is rocking an Athlon64 with an All-in-Wonder 9800 AGP card. One of the last ones to support component video out. Plugged into a 27" CRT. Works great for Plex Web client (via Google Chrome).
Nothing wrong with any of that... other than perhaps the 27" CRT. :) How you can stand to look at that anymore is beyond me, but to each their own...
And that computer is fine for SDTV, but if you were running HDTV it probably would have issues.
Re: (Score:2)
Eh, it still works without any issues, it's in the bedroom, and really only used when we're too sick to walk downstairs. Why get rid of a perfectly working TV?
Eventually, we'll replace the lowly 39" LCD downstairs with something larger, move that one into the bedroom, and move the CRT into the kids' room with the NES, SNES, and Wii. :)
At that point, I'll have to replace the Athlon64 with something that can handle 720p or 1080p. Until then, we'll just keep on rocking.
Re: (Score:2)
Eh, it still works without any issues, it's in the bedroom, and really only used when we're too sick to walk downstairs. Why get rid of a perfectly working TV?
Because staring at an electron gun shooting at your eyes is bad for you?
CRTs always gave me headaches, the flicker and refresh of the screen. The move to LCDs was very welcome.
Eventually, we'll replace the lowly 39" LCD downstairs with something larger, move that one into the bedroom, and move the CRT into the kids' room with the NES, SNES, and Wii. :)
Oh goodie, ruin the kids' eyes! :)
At that point, I'll have to replace the Athlon64 with something that can handle 720p or 1080p. Until then, we'll just keep on rocking.
And that was the point, you said your old Athlon was just fine, and I was simply pointing out for everyone else who isn't running a CRT that your statement had a catch to it. :)
Re: (Score:1)
An elegant weapon, for a more civilized age...
Re: (Score:2)
Moving up to a 1090T or 1100T Black Edition could be nice. 3.2 or 3.3 out of the box, but 3.8 or 3.9 is almost a given without worrying about voltage bumps and awesome cooling. My 1090 was throttling at 3.8 when running SuperPi on the stock cooler, but I have no such problems with a Hyper 212. I would need more cooling to run all-day-every-day at 4.0.
Granted, 2.8 to 3.8 may not make a lot of real world difference when memory bandwidth comes into play, and as you pointed out, the GPU has a lot to do with it (wh
Re: (Score:2)
I might think about an upgrade like that when those processors are being thrown away, if I haven't upgraded before then. But right now they're still over $120 for the 1090T, and around $200 for the 1100T. The last time I spent that much I upgraded from a 720BE and doubled my cores. If I'm going to spend another hundred bucks, I need to double my performance again, which isn't really possible. It would make more sense to just buy a new MB+CPU, otherwise. The one I have now doesn't even support SLI, which I w
Re: (Score:2)
Why focus on clock speeds like that?
The biggest advantage of newer chips is that they do the same job using less power.
My 9-year-old fileserver pulls 450W. Replacing the motherboard with a quad-core Atom C2000-series board or equivalent would drop that by 300W - and pay for itself in 3-4 months just in reduced electricity charges.
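The payback claim above can be sanity-checked with a quick back-of-envelope calculation. The electricity price and replacement-board cost below are assumed values, not figures from the comment:

```python
# Back-of-envelope payback estimate for dropping a fileserver's
# draw by ~300 W. Price per kWh and board cost are assumptions.
watts_saved = 300
price_per_kwh = 0.15           # USD/kWh, assumed
board_cost = 120.0             # USD for a low-power board, assumed

kwh_per_month = watts_saved / 1000 * 24 * 30       # ~216 kWh/month
savings_per_month = kwh_per_month * price_per_kwh  # ~32 USD/month
payback_months = board_cost / savings_per_month

print(round(payback_months, 1))  # -> 3.7, i.e. the quoted 3-4 months
```

At typical residential rates the claimed 3-4 month payback is plausible; a higher tariff shortens it further.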
Re: (Score:2)
Re: (Score:2)
Still rocking my X3 720be
Re: (Score:2)
I have one of those too but I've been too lazy to badcap the system it's in. It was a really great processor too, got it to OC to 3.2 with a $20 cooler master heat pipe and arctic silver iv. I bought that originally for $100, and then upgraded to this twice-the-cores X6 1045T later, for $110 shipped or so. You just can't beat the value of these mid-range AMD chips.
Re: (Score:1)
I have the same chip and it is ridiculous at everything but gaming. Its single-core performance is laughably bad. My girlfriend's 1st-gen i5 absolutely smokes it.
Re: (Score:2)
I have the same chip and it is ridiculous at everything but gaming.
Ridiculously good for the amount of money I have spent on it versus the amount of time I've had it, I think you mean.
Its single-core performance is laughably bad.
Single-core performance? What is this single-core? Nothing that uses a lot of CPU is single-threaded, and hasn't been for ages. Whether I'm [de]compressing an archive, transcoding video, playing a game, or surfing the web, I'm using multiple threads which can be relocated across processors — and are.
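As a concrete illustration of the compression example, here is a minimal sketch of chunk-parallel compression. zlib releases the GIL while compressing, so the worker threads genuinely overlap on multiple cores; the 1 MiB chunk size is an arbitrary choice:

```python
# Sketch: compress independent chunks of a buffer in parallel
# threads, then reassemble losslessly.
import zlib
from concurrent.futures import ThreadPoolExecutor

CHUNK = 1 << 20  # 1 MiB per chunk, arbitrary

def compress_chunks(data: bytes) -> list:
    """Compress each chunk on its own worker thread."""
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    with ThreadPoolExecutor() as pool:
        return list(pool.map(zlib.compress, chunks))

def decompress_chunks(chunks) -> bytes:
    return b"".join(zlib.decompress(c) for c in chunks)

data = b"slashdot" * 500_000              # ~4 MB of compressible input
packed = compress_chunks(data)
assert decompress_chunks(packed) == data  # round-trips losslessly
```

Real parallel compressors (pigz, xz -T) work on the same chunking principle, at the cost of slightly worse ratios than one monolithic stream.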
My Girlfriends' 1st Gen i5 absolutely smokes it.
Bullshit. We all know you don't have a girlfriend.
Re: (Score:2)
I just retired my 1045T desktop and moved it to a VMware server. With SSDs it performs very well in this application.
Re: (Score:2)
I just retired my 1045T desktop and moved it to a VMware server. With SSDs it performs very well in this application.
That's probably what mine will do when I finally decide to upgrade to a newer processor in another year or two. The only problem is that it's relatively power-hungry compared to the C2D that I have doing this now. Well, it's a libvirt server, but anyway
Re: (Score:2)
Yes, it has slightly higher power consumption, but with this processor's insanely high 6MB L3 cache and 6 x 512KB L2 cache, I can accept it. It is a solid performer.
Re: (Score:2)
Re: (Score:2)
Not so clear cut. The Phenom has a slight edge in single-core performance at the same speed, but the FX can give you more cores (8 are readily available but the Phenom topped out at 6) and is now available with higher clock speeds.
One catch with the FX is that you need updates to the scheduling algorithm in the OS. In the case of Windows that means you have to run Windows 8 (or presumably the Windows 10 preview) because earlier Windows versions never got updated to take full advantage of the FX.
The other ca
Re: (Score:2)
Other things I noticed - since most games rely on single threaded performance, the Phenom II has the edge there (and Intel a much larger edge). For heavy math, the AVX on the FX-81xx series performs poorly compared to Intel.
Re: (Score:2)
The shared FPU on the FX series is rarely a problem. Sure, there is only one that is shared between a pair of cores - but it is a 128 bit FPU that can do two 64 bit operations simultaneously. It's not likely to be a bottleneck unless you are working with long double or binary128 data, in which case it can only do one operation at a time.
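The throughput argument above can be made concrete with a toy calculation. The module count matches the FX-8xxx layout described in the comment; the clock is an illustrative assumption, not a vendor spec:

```python
# Peak 64-bit FP ops/cycle for a 4-module FX chip: each module's
# shared 128-bit FPU retires two 64-bit operations per cycle, so
# scalar-double workloads on paired cores need not starve. A
# binary128 / long double op fills the whole 128-bit datapath,
# halving per-module throughput.
modules = 4                  # FX-8xxx: 4 modules / 8 integer cores
lanes_per_fpu = 2            # two 64-bit lanes in one 128-bit FPU

ops_per_cycle_double = modules * lanes_per_fpu   # 8
ops_per_cycle_quad = modules * 1                 # 4: one op at a time

clock_ghz = 4.0              # illustrative clock, assumed
print(ops_per_cycle_double * clock_ghz)  # 32.0 billion 64-bit ops/s peak
```

So with ordinary doubles, each of the two cores in a module can still issue one 64-bit operation per cycle; only the 128-bit-wide data types serialize the pair.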
The most common problem with poor utilization of the core pairs in the FX series is cache contention. That's where the updated process scheduler comes in. To oversimplify a
Re: (Score:2)
Easily believable if PassMark was compiled using icc (Intel C Compiler).
Re: (Score:2)
I have an Intel Core i7-4980HQ 2.8GHz quad-core,
Too bad your mother didn't give you a name.
Re: (Score:3)
Re:Finally a replacement (Score:4, Interesting)
This.
I have an FX-8350 too. Given how much the cores sit around idling at less than 5% usage, I don't think I'll need to upgrade my CPU before 2020. RAM, on the other hand...
As many other people have noted: the CPU speed wars have been over since the Intel Wolfdale/AMD Deneb days of 2009-2010.
Re: (Score:2)
Agreed on the RAM. When DDR4 hits the mainstream product lines, that is going to be a nice bump.
Re: (Score:2)
Meanwhile, Intel is transitioning to a 14nm process, while your processor is still 32nm from 2012, and this intel
Re: (Score:2)
Meanwhile, Intel is transitioning to a 14nm process, while your processor is still 32nm from 2012, and this Intel of mine is actually 22nm from the same year. These numbers might not tell you much, but the main difference to me is that my processor runs much cooler and requires less power. AMD is way behind the competition.
They tell me a lot, and you pretty much reached the same conclusions that I reached a year ago. AMD still makes good processors, but they have been behind Intel for a while now. When it came to picking a new processor for my HTPC, I picked an A5350 over an i3. It was a perfect processor for the job too.
Next year I'll be sniffing around to replace the FX-8150 workstation I have. Unless AMD does something drastic to get caught up, I will probably be going with an i7 system this year.
I've been an AMD fan
Re: (Score:1)
Lol, yeah. I was running a Phenom 1100T when they first came out and upgraded to the FX-8350 when it was launched over 2 years ago. I've been considering buying the FX-9590 just to tide me over till these come out.
To be honest, I have no real need to upgrade until I get more into 4K gaming and upgrade from the overclocked 290x that I have now.
Re: (Score:2, Interesting)
Wait until you try to do some video encoding. The new instruction set support alone will justify the update.
Where is the support for ECC RAM? (Score:1)
Sorry, but without ECC RAM support these chips are useless.
AMD... please note, the Intel E3-1276v3 has ECC interface on the die (and also has GPU on die)... you need to do this, otherwise the server world is moving permanently to Intel.
Re:Where is the support for ECC RAM? (Score:4, Informative)
Re: (Score:3)
Nearly all AMD CPUs/APUs support ECC memory; you just need the right motherboard... ASUS BIOSes have consistently supported AMD/ECC memory combinations for many years.
Not long ago, I configured a number of Phenom II X6 1045T and FX-8320 systems with 8-16GB of ECC memory on ASUS M4A88T-M and M5A78L-M motherboards. This link indicates the Zen series [fudzilla.com] will support 4-channel ECC/DDR4.
Re: (Score:2)
The AMD processors which use sockets AM2, AM2+, AM3, and AM3+ support ECC, but the processors for the FM series of sockets, including the APUs, do not. The recent exceptions are the processors for the FP3 (notebook) and AM1/FT3 (small form factor/mobile) sockets, which leaves ECC out for AMD's highest-performance desktop Piledriver and Steamroller APUs.
AM1 with ECC looks great for lower power network appliances, servers, and disk storage but the only thing faster with ECC is AM3+ without the APU. :/
Re: (Score:2)
I won't say hobbling. I have both the FX-8150 and the FX-8350 and they are both definitely behind the curve when it comes to technology. Both of these processors were gifted to me by a friend who was sick of always being behind the curve when it comes to AMD. He hopped over to Intel with a couple of i7's last year.
I'm convinced that unless something changes and AMD gets on the ball with this release, my current AMD systems will be the last AMD systems I own.
Re: (Score:2)
I won't say hobbling. I have both the FX-8150 and the FX-8350 and they are both definitely behind the curve when it comes to technology. Both of these processors were gifted to me by a friend who was sick of always being behind the curve when it comes to AMD. He hopped over to Intel with a couple of i7's last year.
I'm convinced that unless something changes and AMD gets on the ball with this release, my current AMD systems will be the last AMD systems I own.
And I am hoping, even expecting, that AMD's CPUs reach or exceed the comparable performance of Intel products. Intel 4790 stuff is about $200 too high per unit.
Re: (Score:3)
If it is hobbling, you probably didn't do a very good analysis of whether it matches your workload, because it is not a high-end general-purpose CPU. ;)
Mine is freakin' awesome. It does everything I ask of it so easily that I can't even hear the fans.
Re: (Score:2)
One of my requirements is ECC, so I have been hobbling along with a Phenom II 940 and AM2+, which was less than half the price of the Intel equivalent at the time. The current Intel alternative would be a Xeon E3-1220 v3 to Xeon E3-1276 v3, which I will consider if I have to replace it, but the Intel solution is still more expensive.
Just in time for the End of the Line (Score:4, Insightful)
14nm tech may be the end of the line for CMOS. The 10nm node that follows may not even be possible.
Re: (Score:1)
Pundits have been saying that at least since 90nm, you know. Then 65, and again at 45, 32, 22, and now at 14.
Re:Just in time for the End of the Line (Score:4, Insightful)
None of those other node pitches involved dimensions at which quantum-mechanical tunneling was the dominant effect, nor a gate thickness of one atom. But that's what 10nm is.
Re:Just in time for the End of the Line (Score:5, Interesting)
Re: (Score:1)
The problem is not building small structures. The problem is designing small structures that give you working chips. Sure, you can build a 10-atom "insulator". Unfortunately a large number of electrons will disagree and act as if it's a conductor.
This is made worse by the need to get all your insulators perfect, which is increasingly hard as gate counts go up while the dimensions go down. An insulator that's 1nm off is now a much bigger problem than it used to be.
Re: (Score:2)
None of those other node pitches involved dimensions at which quantum-mechanical tunneling was the dominant effect, nor a gate thickness of one atom. But that's what 10nm is.
Not even close. At the research stage they have made functional 3nm FinFET [eetimes.com] transistors. Whether they can be produced in the billions is doubtful, as it requires every atom to be in the right place, but 10nm still has some margin of error. The end of the road is in sight, though...
Re: (Score:2)
Still a way to go.
Re: (Score:2)
Pundits have been saying that at least since 90nm, you know. Then 65, and again at 45, 32, 22, and now at 14.
And those limits were dodged by some new process: SOI, copper interconnects, what have you. He's not saying we won't see 10 nm, he's just saying there's a good chance it will have to be something other than CMOS.
Re: (Score:2)
Pundits were saying that about 90nm, but usually that was because they misunderstood when the engineers (who were multiple product cycles ahead of the consumer-pundits) were speculating that they "would" "soon" be reaching these limits. ;)
Other times it was as simplistic as, "Can we keep shrinking this forever?" "No." "How far can we go using current technology?" "Using current technology we can only go as far as we can currently go."
Re: (Score:2)
Maybe that could account for a few cases, but there were other issues [geek.com] at least some researchers didn't think would be solved so quickly either. In the link I provided (unfortunately, the original article isn't found, just the summary), a researcher in 2002 was claiming that CMOS would end up at 45 nm and halt there because of the issues with thermal noise. I also remember these types of predictions being made by researchers at Intel as well.
Look, all I'm saying is that *every* time so far the prediction w [slate.com]
Re: (Score:3)
10nm is proven possible. Samsung have demonstrated some 10nm chips already.
Transistors have been made much smaller in process development labs so we know that the scaling will continue some time into the future.
Better Tell Intel Their 4nm Process Isn't Real (Score:5, Informative)
http://www.xbitlabs.com/news/c... [xbitlabs.com]
Intel's 2012 roadmap shows a 4nm process in 2022. Which means they have a process that has been tested to work; they are just tweaking it to reduce errors and working out the best way to outfit a plant for it. It also costs billions, and takes time, to refit a plant.
Really? (Score:1)
TSMC has already produced test wafers on 10nm and plans to enter volume production in 2016
http://www.fudzilla.com/news/p... [fudzilla.com]
Re: (Score:2)
10nm chips have been manufactured and demonstrated by Samsung, and Intel plans on producing 10nm chips in 2016. 10nm isn't a problem; process development is close to finished. 7nm doesn't seem to present any physical problem.
And 3nm transistors have been manufactured. Not in a commercial manufacturing process, but not only can they be made - they are demonstrated to be working.
So you are way off.
Zen Architecture (Score:5, Funny)
Featuring GGL (Gateless Gate Logic).
Re: (Score:2)
Can it calculate the sound of one hand clapping?
Re: (Score:1)
Mu.
So close... (Score:3, Funny)
>40% higher performance-per-clock from the x86 CPU core.
That could very slightly close the gap between AMD and Intel!
Re: (Score:1)
40% higher IPC vs bulldozer is not ">40% higher performance-per-clock [sic]". this basically says it'll take them until late 2016 or early 2017 to have a part that competes with haswell... ok, great... yay, AMD! your mommy is so proud...
Extrapolate? (Score:3)
Anyone care to extrapolate from current benchmarks as to how this new processor will compare to Intel's desktop offerings? I would like to see Intel have some competition there.
Re:Extrapolate? (Score:5, Interesting)
Anyone care to extrapolate from current benchmarks as to how this new processor will compare to Intel's desktop offerings? I would like to see Intel have some competition there.
FX-8350: 2012
"Zen": 2016
The 40% jump is more like 0%, 0%, 0%, 40%.
If you compare a 3770K (best of 2012) to a 4790K (best of today) you get a ~15% frequency boost and another ~10% IPC improvement. If the leaked roadmaps are to be believed, Skylake for the desktop is imminent [wccftech.com], which will bring a new 14nm process and a refined microarchitecture at the same time, since Broadwell missed its tick for the desktop, so in the same timeframe Intel will have improved 30-40% too.
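Those generational gains compound multiplicatively rather than add, which a quick calculation makes clear (the extra ~10% architecture step stacked at the end is an assumption for illustration):

```python
# ~15% frequency x ~10% IPC from 3770K -> 4790K compound to ~26.5%.
freq_gain, ipc_gain = 0.15, 0.10
combined = (1 + freq_gain) * (1 + ipc_gain) - 1
print(f"{combined:.1%}")  # -> 26.5%

# Stacking an assumed further ~10% step (e.g. a Skylake-like
# refinement) lands in the quoted 30-40% range:
print(f"{(1 + combined) * 1.10 - 1:.1%}")
```
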
Anyway, you asked about AMD and I answered with Intel, but it's a lot easier to get a meaningful answer without getting into the AMD vs Intel flame war. In short, even if AMD comes through on that roadmap, they're only back to 2012 levels of competitiveness, and honestly speaking it wasn't exactly great and AMD wasn't exactly profitable. They're so far behind that you honestly couldn't expect less if they weren't giving up on that market completely, which frankly I thought they had. And I wonder how credible this roadmap is; I remember an equally impressive upwards curve for Bulldozer...
Re: (Score:1, Troll)
Re: (Score:2, Interesting)
Uhhhh...just FYI but Intel has come right out and admitted it rigged the benchmarks so you can trust them about as much as the infamous FX5900 benches with its "quack.exe" back in the day.
Yes, yes, you spam that to every thread. That's exactly why I compared Intel with Intel. Unless you think they're creating benchmarks that are increasingly inaccurate for each new generation, the point was that AMD's "jump" isn't actually more than Intel has improved through yearly releases since. Do you think the benchmarks are more "rigged" for the 4790K than the 3770K? Is the lack of new FX processors not real? By the way, even Phoronix's conclusion says:
From the initial testing of the brand new AMD FX-8350 "Vishera", the performance was admirable, especially compared to last year's bit of a troubled start with the AMD FX Bulldozer processors.
(...)
In other words, the AMD FX-8350 is offered at a rather competitive value for fairly high-end desktops and workstations against Intel's latest Ivy Bridge offerings -- if you're commonly engaging in a workload where AMD CPUs do well.
In not all of the Linux CPU benchmarks did the Piledriver-based FX-8350 do well. For some Linux programs, AMD CPUs simply don't perform well and the 2012 FX CPU was even beaten out by older Core i5 and i7 CPUs.
I guess "bit of troubled" was the most pro-AMD way he
Re: (Score:1)
I don't know why you were modded up.
Zen is 40% greater IPC than EXCAVATOR. The FX-8350 is a Piledriver core. Since Bulldozer there have been three cores: Piledriver, Steamroller, and Excavator (to be released this year).
Intel has in fact dropped the ball. They have not released a new desktop CPU in 2 years.
Re: (Score:2)
So you're an Intel troll now, too? Goody for you.
Re: (Score:2)
*laughter*
Finally (Score:1)
Finally, a processor that will meditate on existential paradoxes and encourage us to push to deeper self-understanding as it half-assedly pretends to work on solving a task.
Re: (Score:2)
You might consider the wisdom in having less attachment to what people think about the value of you or your CPU's meditations.
Dear AMD (Score:4, Insightful)
Note that, in comparison to ARM CPUs, x86 SoCs are *crazy* overpriced. There are superb ARM SoCs for just 20 USD. WTF are you doing selling similar consumer-grade chips for 100 USD??
Re: (Score:1)
The best ARM CPUs (Apple's) have close to Bay Trail Atom CPU performance in single-core mode, but Bay Trail is quad-core and Apple's are dual-core. Other ARM CPUs are quad-core but don't have the single-core performance Apple does.
And once you get into better than Bay Trail performance (most everything x86 except cheapest AMD stuff), ARM is left in the dust.
That and software backwards compatibility are why x86 is still a premium. But that's changing.
Re: (Score:2)
You mean like
Re: (Score:2)
I bet they maintain an internal port now, and have been since day one. (Just like they did with PowerPC and OSX)
No need to bet and it's not internal. It's called the "iOS Simulator" and has been around for years and years as part of Xcode.
Re: (Score:2)
Hell I'd settle for just having one FPU per core again and none of this "module" nonsense.
AMD jumped the shark for me in 2012 when they killed off their Phenom 2 line.
Re: (Score:3)
AMD gets a lot of shit for this, and there's plenty to give 'em shit for... but that's not one of 'em.
It's a shared FPU only when in 256-bit AVX mode. When in normal 128-bit (2x64) SSE mode, each core effectively gets its own 128-bit unit.
Re: (Score:2)
Unfortunately, no matter what mode it is in, the two FPU pipelines still sit behind a single FPU scheduler, so the result is essentially the same.
My numerous benchmarks with bulldozer processors back this up, particularly with the likes of Matlab. Those processors are great for multi-process servers (web, email, etc) but useless for floating-point math (gaming, render farms, compute servers, etc).
Re: (Score:2)
Note that, in comparison to ARM CPUs, x86 SoCs are *crazy* overpriced. There are superb ARM SoCs for just 20 USD. WTF are you doing selling similar consumer-grade chips for 100 USD??
ARM CPUs still do not have proper pipelining. Out-of-order execution is castrated too. That basically kills ARM CPUs in any workload with lots of math (think games and encryption).
And how about the CPU cache? 1MB of cache for ARM is still a high-end feature, while most desktop CPUs have 4 or more MB of cache. Huge HUGE difference for the memory bandwidth.
ARM has its niche (which if ARM really wanted could have been very very broad and not niche at all) but as soon as you start gaming or do any real wor
Awesome marketing... (Score:1)
They messed up (Score:2)
That is the exact polar opposite of what everyone wants. We want one thread executed on multiple cores. The single core performance of AMD 6 and 8 core CPUs is PATHETIC. Most software still runs on just one core.
Details, please (Score:2)
I see lots of announcements - not just this one - shouting about their new microarchitectures, how cool they are, the amazing benefits, and so on. But documentation of exactly what the new microarchitecture is, exactly what it does, seems thin-to-non-existent. Maybe I'm not looking in the right place.
All "big" processors nowadays have fancy pipelines, out-of-order execution, branch prediction, multiple cores, and so on. Fine. But how is Zen different from past microarchitectures? What makes it revolutiona
Currently shopping workstations, AMD competes? (Score:2)
I'm currently bidding out a pretty hefty workstation (128GB RAM, RAID 5 disk array, highly parallel workload).
From what I'm seeing, AMD is pretty competitive on price/performance. Our workload is integer-heavy, and I can get a dual 16-core 2.8GHz AMD machine for $2500 cheaper than a dual 10-core 3.2GHz Intel machine. Even if you assume the AMD is 20-30% slower per core, it stacks up quite nicely.
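A rough aggregate-throughput sketch of that comparison, treating cores x clock x per-core efficiency as the figure of merit (a simplification; real parallel workloads won't scale perfectly):

```python
# Dual 16-core 2.8 GHz AMD vs dual 10-core 3.2 GHz Intel, with AMD
# assumed 20-30% slower per core per clock, as in the comment.
amd_cores, amd_ghz = 2 * 16, 2.8
intel_cores, intel_ghz = 2 * 10, 3.2

intel_units = intel_cores * intel_ghz           # 64.0 "core-GHz"
for penalty in (0.20, 0.30):
    amd_units = amd_cores * amd_ghz * (1 - penalty)
    print(penalty, round(amd_units / intel_units, 2))
# At a 20% per-core deficit AMD comes out ~12% ahead (1.12x); even
# at 30% it is near parity (0.98x) - before the $2500 price gap.
```
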
The MBs are cheaper, the processors are cheaper, and the registered DDR3 DIMMs are waaaaay cheaper than the DDR4
Re: (Score:2)
Back when I built my workstation I went with the AMD Phenom II 940 because the Intel solution for ECC was a lot more money, which was better spent on system RAM and disk performance. The Intel processor, motherboard, and FB-DIMMs would have more than doubled the cost. It is not that bad now, but an Intel system still carries a price premium.
Re: (Score:2)
I go for AMD if I can.
Why?
Re:Please make it soon AMD (Score:5, Insightful)
Re:Please make it soon AMD (Score:4, Funny)
A lot of people don't really understand that the CPUs are already "fast enough" and that they can include other important issues in their buying decisions.
I mostly buy and recommend AMD because:
1) better price/performance ratio
2) code I write is designed to scale horizontally, which loops back to #1 even when it is a high load service
3) better power/performance ratio on desktops and servers
4) the motherboards are cheaper for the same components, and I hate over-paying even if the motherboard cost is too small a percent of the total system cost to matter very much
Intel does mostly win on laptops due to lack of availability of alternatives.
MB and Strategy (Score:2)
"Fast enough" for you, maybe. That usually depends on what you are using your computer for. Gaming is a driver. Most common applications, I'll concede, are not. However, I have noticed that certain professional applications that used to depend primarily on graphics cards are now more dependent on CPU and system memory than anything else. All that aside...
I agree. While I have always bought Intel, I have always kept AMD in mind. One of the things that Intel does that is annoying, but probably linked to the f
Re:Please make it soon AMD (Score:4, Funny)
At the low end, good enough and dirt cheap pushes towards AMD. In the middle there are niches where braindead developers still don't have a fucking clue in 2015 how to write multi-threaded code, so a fast i7 with hardly any cores is the best tool for the job, but that's a diminishing niche as developers start to learn what they should have learned in 1995.
Re: (Score:2)
Cheaper. AMD always was and is cheaper than comparable Intel parts. Ditto AMD vs nVidia.
For the lower price you lose some performance, especially in the edge cases. But for most tasks, the difference is almost negligible.
P.S. Though the case is different in gaming, where pretty much the whole market optimizes exclusively for Intel/nVidia.
Re: (Score:2)
You lose more than "some performance". They also run hotter and use more power.
Re: (Score:2)
Hotter, but not so hot as to cause problems.
N.B. Historically, I actually had more thermal problems with Intel's CPUs than with AMD's.
But it is true that in the power-efficiency department, the Intel i-series are better than the AMDs.
Though it is not clear-cut if you take the whole system's power consumption. An Intel CPU + nVidia GPU would draw about the same (or more) power as AMD's comparable integrated CPU/GPU.
Re: (Score:2)
Arsehole Puckering Up! :P :P :P
Re: (Score:3)
What is the use case for a high end integrated APU? The current use case is for systems that need modern GPU capabilities, but not high graphics performance. Lots of use cases require something like OpenGL but would never have it maxed out. They are full-featured, they just don't have the number of cores that you want for playing fancy games.
If you want both to be high end, it seems more logical to have discrete graphics. But if you need OpenGL for CAD, WebGL, or some other use, then an APU is indeed "full
Re: (Score:2)
"What is the use case for a high end integrated APU? The current use case is for systems that need modern GPU capabilities, but not high graphics performance."
The reality of the situation is that right now, even at current tech, if AMD used a bit more actual die space and embedded a couple of gigs of eDRAM for the CPU/GPU to share apart from main system memory, the performance increase would be quite huge.
Re: (Score:2)
A 64-bit memory bus is like the slowest available memory bus for GPUs. Minimum 256-bit wide memory bus path is recommended. Many GPUs are gimped specifically because of the low bit-width bus.
Re: (Score:2)
The old sideport implementations maxed out at DDR3-1333 speeds (10.6 GB/s) so they might as well add another channel of memory. Integrated GDDR5 would only be 40 GB/s and DDR4 system memory about half that so it does not really make sense at this point.
Re: (Score:2)
The APU supports unified access to system memory which should be a big advantage in heterogeneous computing applications but I have not seen it pan out yet.
Re: (Score:2)
Do you think you even said anything? When you say it doesn't "pan out," do you mean that my APU systems don't actually access their memory? Or did you just mean that the feature is just an implementation detail and it doesn't include any unicorns farting rainbows into your applications?
Or is it just pure, "not my favorite team" type of fan-commenting?
Is there a specific claim that they make that the chip can't achieve? That is what it would mean if what they did actually "didn't pan out." But that isn't the case, i
Re: (Score:2)
When I say that I have not seen it pan out, I mean that I use engineering applications which should be able to take good advantage of heterogeneous computing with a unified address space as implemented in AMD's recent APUs but none of them do and AMD's existing APUs come with a significant cost in single thread performance. If I just wanted video acceleration and processing, then the existing PCIe interface GPUs can do that.
Re: (Score:1)
Dear Mr Coward,
I've been reading almost exactly this same screed from you on this site since the late 90s. News Flash: AMD is doing great, and the APUs are designed for certain uses not all uses.
News flash: Intel is bigger than AMD and will be targeting more use cases than AMD will. In no way does that tell us anything about whether AMD is facing tough times. Judging by the availability of AMD-compatible motherboards now, compared to various times in the past, a person could easily think instead that they're in
Re: (Score:2)
AMD is doing great? In what alternate universe? Their net income for the most recent quarter was -$180 million. That is not what happens to a company that is "doing great."
Re: (Score:2)
Their recent APUs look pretty neat but are there any applications which take advantage of their heterogeneous computing capability?