AMD Hardware

AMD Cancels 28nm APUs, Starts From Scratch At TSMC

MrSeb writes "According to multiple independent sources, AMD has canned its 28nm Brazos-based Krishna and Wichita designs that were meant to replace Ontario and Zacate in the second half of 2012. The company will likely announce a new set of 28nm APUs at its Financial Analyst Day in February — and the new chips will be manufactured by TSMC, rather than its long-time partner GlobalFoundries. The implications and financial repercussions could be enormous. Moving 28nm APUs from GloFo to TSMC means scrapping the existing designs and laying out new parts using gate-last rather than gate-first manufacturing. AMD may try to mitigate the damage by doing a straightforward 28nm die shrink of existing Ontario/Zacate products, but that's unlikely to fend off increasing competition from Intel and ARM in the mobile space."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    With all the issues at GloFo, this might be a good thing. But it looks like too little, too late.
  • by Kenja ( 541830 ) on Tuesday November 22, 2011 @04:23PM (#38140436)
    So far I have been totally unable to tax my current CPU past 40% utilization. I think we can take a break and let software catch up and older systems fall off the support map before the next generation of CPUs hit.
    • by Dunbal ( 464142 ) * on Tuesday November 22, 2011 @04:27PM (#38140498)
      Don't worry, the next OS version should do it...
      • I have a neat little handheld Sony Vaio which has a 1.33GHz Core Solo and an Intel GMA945 graphics adapter, oh... and 1GB of RAM. It's an awesome machine, but Windows XP was too heavy for it. Windows Vista was far too heavy for it. Windows 7 runs pretty nicely on it. Windows 8 beta is much nicer, very usable. Android is OK on it... but I still don't know what the point of Android is. MeeGo wasn't too bad on it. Mac OS X Lion is a joke on it.

        All things considered, the operating systems are seriously improving.
    • by CSMoran ( 1577071 ) on Tuesday November 22, 2011 @04:32PM (#38140556) Journal

      So far I have been totally unable to tax my current CPU past 40% utilization. I think we can take a break and let software catch up and older systems fall off the support map before the next generation of CPUs hit.

      Just because your usage scenario is not CPU-bound does not mean everyone else's is.

      • Comment removed (Score:5, Interesting)

        by account_deleted ( 4530225 ) on Tuesday November 22, 2011 @10:49PM (#38144288)
        Comment removed based on user account deletion
        • by Kjella ( 173770 ) on Wednesday November 23, 2011 @02:01AM (#38145418) Homepage

          So while the guys that run gamer sites or live for benchmarks will scoff, frankly the average user, who outnumbers them by 100,000 to one (the last number on hardcore PC gamers I saw put it at 30 million)

          Okay I heard Earth has an overpopulation problem, but did I doze off there for a while? Because I seem to have missed some recent developments...

        • Pretty much agreed. I've been recommending low-end AMD and i3 systems for my clients because honestly that's more computer than they really need. An SSD helps them more than a better processor.

          But I'm also seeing an inversion of the old rule of thumb about the price-performance curve. In the past, a plot of price (y) vs. performance (x) would track a diagonal line, and above a certain point the curve would shoot up vertically: for a small gain in performance, the price would skyrocket.
    • The change in feature size won't just be useful for getting faster processors (although servers could use some of that); it's also important for reducing the power footprint of the chips (this being AMD, that means both the CPU and GPU will use less power) and for reducing their price.

    • by Anonymous Coward

      I think we can take a break

      Who is "we"? Oh right, it's everyone who buys microprocessors, because we're all running the same software and doing the exact same things with our computers.

    • by gstoddart ( 321705 ) on Tuesday November 22, 2011 @04:56PM (#38140828) Homepage

      So far I have been totally unable to tax my current CPU past 40% utilization.

      Well, DfrgNtfs.exe is using 25% of my quad-core, and I'm not doing much else. I've gone well past 70% at times when I'm actually doing something intensive.

      I'm using 7GB out of 8GB of RAM, and if I had 16GB I could probably put a hell of a dent in it too.

      I don't even consider what I'm doing to be much of a load, and in the past I've been on machines where something literally was CPU bound for as much as an hour and I needed to walk away.

      I don't even find it tough to use up that many resources ... hell, I stopped using Mozilla because it would expand to well over 1GB of RAM overnight (with the same # of windows and tabs that used to fit in 300MB).

      I think the software has already caught up ... especially if you're like me and open something and leave it open.

      • by Anonymous Coward

        Defrag? 1995 called and wants its file systems back. News flash to the rest of the world: using (almost) all your RAM is a Good Thing. Can you say RAMdisk?

        Oh, for a few 10s of GB of RAM, and an SSD array to fill it.

        • 1995 called? So ext4 is from 1995? It has an online defrag utility, you know.

          • Re: (Score:2, Flamebait)

            by gstoddart ( 321705 )

            1995 called? So ext4 is from 1995? It has an online defrag utility, you know.

            2009 called ... I'm running Vista. My Linux boxes are all now VMs ... I've no interest in running Linux as my primary box anymore.

            But, I see you're living up to your nick.

            • by Joce640k ( 829181 ) on Tuesday November 22, 2011 @05:32PM (#38141248) Homepage

              Vista? Ack.

              At least have the decency to install Windows 7.

              • by badran ( 973386 )

                And in what meaningful way would that be different from an up-to-date Vista?

        • News flash to the rest of the world: using (almost) all your RAM is a Good Thing.

          Not really. On my system, performance starts to suffer once applications are taking up all but 1 GB or so; if non-app memory drops below 50 MB, the system becomes unusable.

      • This was a while back, but I once ran a ray tracing project that ran nonstop for two weeks, essentially 100% CPU the whole time. In fact it didn't even finish - it was 2/3 done when someone else pulled the plug on it accidentally. Fortunately the data for that much of the picture was saved to a file as it went. Nowadays the same project would probably take 10 minutes, but hey.

      • Firefox has allocated 628MB on my 8GB system after running for days. That's still a lot of RAM (although I have the memory cache turned up pretty high on this system) but it's not a gigabyte overnight. I think you were running crappy extensions.

    • by Bengie ( 1121981 ) on Tuesday November 22, 2011 @04:58PM (#38140844)

      With multi-core CPUs, just because you can't reach 100% usage doesn't mean you're not CPU limited.
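
      (Illustrative aside: a minimal Python sketch of the parent's point, nothing AMD- or Intel-specific. On an N-core machine, one completely CPU-bound thread shows up as only about 100/N percent "total" utilization, even though that workload couldn't go any faster without a quicker core.)

      import os
      import threading
      import time

      def busy(stop):
          # Pure-Python spin loop: fully CPU bound, but only one core's worth of work.
          x = 0
          while not stop.is_set():
              x += 1

      stop = threading.Event()
      threading.Thread(target=busy, args=(stop,), daemon=True).start()

      n = os.cpu_count() or 1
      print(f"{n} logical cores; one busy thread caps overall CPU at roughly {100 // n}%")

      time.sleep(5)   # watch Task Manager / top: one core near 100%, the rest mostly idle
      stop.set()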

      • by Skarecrow77 ( 1714214 ) on Tuesday November 22, 2011 @05:26PM (#38141160)

        Exactly. Too bad I already posted in the thread and can't mod you up anymore.

        Nobody pays much attention to single-core performance anymore, and I have no idea why. There are tons of programs that people use on a regular basis that are single-core limited.

        Intel has made only modest gains in performance-per-clock-cycle since the Core 2 Duo. AMD, I'm pretty sure, is actually going backwards, if I'm correctly remembering some of the Bulldozer vs. Thuban reviews.

        Looking at forthcoming offerings, AMD especially seems to be assuming that we're all constantly using our CPUs to run HandBrake 24/7 or batch-encode a couple hundred WAVs to MP3 at a time, and thus would love 12 cores.

        • by ob0 ( 1612201 )

          Nobody pays much attention to single-core performance anymore, and I have no idea why. There are tons of programs that people use on a regular basis that are single-core limited.

          Have you seen the Bulldozer reviews? They've been hitting AMD over the head due to its poor single-thread performance (amongst other things...)

          • Nobody pays much attention to single-core performance anymore, and I have no idea why. There are tons of programs that people use on a regular basis that are single-core limited.

            Intel has made only modest gains in performance-per-clock-cycle since the Core 2 Duo. AMD, I'm pretty sure, is actually going backwards, if I'm correctly remembering some of the Bulldozer vs. Thuban reviews.

            Have you seen the Bulldozer reviews?<snip>

            It's safe to assume that yes, they are aware of the reviews since they explicitly mentioned them.

        • by Anonymous Coward on Tuesday November 22, 2011 @05:52PM (#38141504)

          Nobody pays much attention to single-core performance anymore, and I have no idea why. There are tons of programs that people use on a regular basis that are single-core limited.

          There's a very simple reason: physical limitations. Current processor technology is more or less maxed out for single-thread performance. There are probably some gains available from completely changing the instruction set or completely giving up on multi-thread performance, but nothing that Intel can put into a chip they can sell. They can't up clock speed anymore due to the speed of light (except a little bit when doing a die shrink). The obsession with multi-core isn't because Intel and AMD think everyone wants to run more threads; software is moving towards using more threads because Intel and AMD simply can't improve single-thread performance, but they can, at least for a little while longer, keep adding more cores.

          • They can't up clock speed anymore due to the speed of light (except a little bit when doing a die shrink).

            Poppycock. The reason Intel/AMD don't scale their clocks much beyond the current 3-3.5 GHz is mostly because the power demands increase dramatically. Intel's NetBurst design had a feature called the Rapid Execution Engine, which was basically the integer ALUs running at double the clock rate. The 3.8 GHz Pentium 4 had its ALUs running at 7.6 GHz; the reason this didn't scale beyond some of the execution hardware was very much down to the power budget.
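
            (A back-of-the-envelope sketch of that power argument, in Python. It assumes the textbook dynamic-power model P ~ C * V^2 * f and, as a simplification, that voltage has to rise roughly linearly with frequency; the constants are illustrative, not real silicon data.)

            def relative_power(f_ghz, base_f=3.0, base_v=1.2):
                # Assumed linear V-f scaling; P is proportional to C * V^2 * f.
                v = base_v * (f_ghz / base_f)
                return (v ** 2) * f_ghz

            base = relative_power(3.0)
            for f in (3.0, 3.8, 5.0, 7.6):
                print(f"{f:>3.1f} GHz -> ~{relative_power(f) / base:.1f}x the dynamic power of 3.0 GHz")

            # 7.6 GHz (the old NetBurst ALU clock) works out to roughly 16x the dynamic
            # power of 3.0 GHz under these assumptions, which is why the rest of the
            # pipeline never followed the ALUs up there.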

            And honestly, Bulldozer's design team should be hit over the head

        • by Kjella ( 173770 ) on Tuesday November 22, 2011 @06:11PM (#38141758) Homepage

          Looking at forthcoming offerings, AMD especially seems to be assuming that we're all constantly using our CPUs to run HandBrake 24/7 or batch-encode a couple hundred WAVs to MP3 at a time, and thus would love 12 cores.

          I think it's quite obvious that AMD didn't have the resources to hit many targets, so they picked two:

          1) Laptops/Low-end PCs with Bobcat cores (Fusion/Llano APUs)
          2) Servers with Bulldozer cores (Valencia/Interlagos)

          Sadly the latter seems to have misfired a bit even in the server arena, but there's no question IMHO that the high-end desktop market was intentionally abandoned. Either that or they've missed their design targets by many miles; they can't have been that far off on single-core performance. I can sort of understand it: Intel was already dominating, the Atom threatened their low end (remember, CPU designs have a 2-3 year lead time), and they couldn't afford to lose their bread-and-butter machines. So they aimed Bobcat low (power), Bulldozer wide (cores), and left Intel to compete with themselves. Not to be too much of a cynic, but it's better for AMD to win some markets than to be a loser in all of them.

        • by PRMan ( 959735 )

          Actually, the changes to the core in Windows 7 mean that most situations are nearly evenly split across processors anyway.

          I had a batch file at a previous company that called nothing but "single-threaded" applications, and during the entire run of the batch, all 4 CPUs were within 5% of each other. Bring up your Task Manager Performance tab someday and leave it up all day at work. You might be surprised.

    • You really should install an antivirus program.
    • by Anonymous Coward

      That's easy!
      I just start a thread with an infinite loop for every cpu core.

      Kids these days...
      Can't code themselves out of a wet paper bag to save their lives...
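
      (For what it's worth, the AC's recipe really is about that short. A toy Python sketch, using processes rather than threads so CPython's GIL doesn't serialize the spinning; it does nothing useful and runs until interrupted.)

      import multiprocessing as mp

      def spin():
          # Busy loop: burns one core at ~100% until the process is killed.
          while True:
              pass

      if __name__ == "__main__":
          workers = [mp.Process(target=spin, daemon=True) for _ in range(mp.cpu_count())]
          for w in workers:
              w.start()           # every logical core now sits near 100%
          try:
              for w in workers:
                  w.join()        # runs until Ctrl-C
          except KeyboardInterrupt:
              pass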

    • Hehe. Install Ad-Aware, and run a full system scan. Watch those cores get used...

    • by PRMan ( 959735 ) on Tuesday November 22, 2011 @05:39PM (#38141338)

      Seriously, this.

      In building computers for my wife and my brother, I just went with lower-end i3 and Phenom X2(4) processors. Why? Because the effective performance difference between the two, for the applications they're running, is 0.001%. And the price difference between those and, say, an i7 is 1000%.

      But I made sure to get both systems SSDs. Price difference? About 200% (500GB HDD at $60 vs. 128GB SSD at $125). But the performance difference is about 700%.

    • by jd ( 1658 ) <imipak@ y a hoo.com> on Tuesday November 22, 2011 @05:43PM (#38141390) Homepage Journal

      Software isn't the bottleneck. Caches are *tiny* compared to the size of even single functions in modern programs, which means they get flooded repeatedly, which in turn means that you're pulling from main memory a lot more than you'd like. Multi-core CPUs aren't (as a rule) fully independent - they share caches and share I/O lines, which in turn means that the effective capacity is slashed as a function of the number of active cores. Cheaper ones even share(d) the FPU, which was stupid. The bottleneck problem is typically solved by increasing the size of the on-chip caches OR by adding an external cache between main memory and the CPU. After that, it depends on whether the bottleneck is caused by bus contention or by slow RAM. Bus contention would require memory to be banked with each bank on an independent local bus. Slow RAM would require either faster RAM or smarter (PIM) RAM. (Smart RAM is RAM that is capable of performing very common operations internally without requiring the CPU. It's unpopular with manufacturers because they like cheap interchangeable parts and smart RAM is neither cheap nor interchangeable.)

      Really, the entire notion of a CPU - or indeed a GPU - is getting tiresome. I liked the Transputer way of doing things (System-on-a-Chip architecture) and I still like that way of doing things. The Transputer had some excellent ideas - it's a shame it took Inmos so long to design an FPU (and a crappy one at that) and given that the T400 had a 20MHz bus at a time most CPUs were running at 4MHz, it's a damn shame they failed to keep that lead through to the T9000.

      What I'd like to see is a SoC where instead of discrete cores (uck!) you have banks of independent registers, pools of compute elements and hyperthreading such that the software can dynamically configure how to divide up the resources. There's nothing to stop you moving all the GPU logic you like into such a system. It's merely more pools of compute elements. Microcode is already in use and microcode is nothing more than software binding of compute elements to form instructions. (Hell, microcode was already common on some architectures back in the 80s and was available for microprocessors within a decade of their being invented.) There's nothing that says microcode HAS to be closed firmware from the manufacturer - let the OS do the linking. It's the OS' job to partition resources and it can do so on-the-fly as needs dictate - something a manufacturer firmware blob can't do. Put the first 4 gigs onto the SoC and have one MMU per core plus one spare, so that each core can independently access memory (provided they don't try to access the same page). The spare is for direct access to memory from the main bus without going through any CPU (required for RDMA, which most peripherals should be capable of these days).

      Such a design, where the OS converts the true primitives into the primitives (ie: instruction set) useful for the tasks being performed, would allow you to add in any number of other true primitives. Since any microcode-driven CPU is essentially a software processor anyway, you can afford to put extra compute elements out there. Any element not needed would not be routed to. Real-estate isn't nearly as expensive as is claimed, as evidenced by the number of artistic designs chip manufacturers etch in. Those designs are dead space that can magically be afforded, but there's nothing to stop you from replacing them with the necessary inter-primitive buffering to build ever-more complex instructions from primitives without loss of performance. I'm willing to bet HPC would look a whole lot more impressive if BLAS and LAPACK functions were specifically in hardware rather than being hacked via a GPU.

      Of course, SoC means larger chips. So? Intel was talking about wafer-scale processors several years back (remember their 80-core boast?) and production has only improved since then. Yields are high enough that this is practical, and since the idea is to software-wire the internals it becomes trivial to bypass defects. T

      • Re: (Score:3, Informative)

        by hkultala ( 69204 )

        Software isn't the bottleneck. Caches are *tiny* compared to the size of even single functions in modern programs, which means they get flooded repeatedly, which in turn means that you're pulling from main memory a lot more than you'd like.

        Wrong.

        The code size of the average function is much smaller than the instruction cache of any modern processor.
        And then there are the L2 and L3 caches.

        An instruction fetch needing to go to main memory is quite rare.

        As for data... it depends totally on what the program does.

        Multi-core CPUs aren't (as a rule) fully independent - they share caches and share I/O lines, which in turn means that the effective capacity is slashed as a function of the number of active cores. Cheaper ones even share(d) the FPU, which was stupid.

        None of the CPUs that share an FPU between multiple HW threads are cheap.

        Sun's Niagara I had a slow shared FPU, but the chip was not cheap.

        AMD Bulldozer, which usually has sucky performance, sucks less on code which uses the shared FPU.

        FPU operations just h

      • by Kjella ( 173770 )

        Lastly, compilers are often god-awful bad at adding in parallel processing. Not that they should have to -- the programmer is SUPPOSED to be competent at this. Parallel programming has only been standard CS material since 1978! If programmers aren't capable of writing efficient parallel programs by now, they need to be dropped off a cliff and replaced with programmers who can write. (...) What matters, though, is that high performance IS achieved by people who bother. If a given programmer can't achieve the same results, it is because they can't be bothered. For all the problems with compilers, I refuse to blame the available technology for the incompetence of code monkeys.

        So what? Mathematicians have had number and field theory for centuries; it doesn't make it easier to understand. Recipe-programming is easy to understand: there are no dependency issues, no resource contention, just a simple start-to-finish sequence of events. Simple interactions like worker threads and resource pools are easy to work out; just mutex it so that you don't grab the same work packet or resource (a sketch of that pattern follows below).

        Truly parallel programming is to me like having 20 chefs in my house cooking a meal, all using limited
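
        (The sketch promised above: the "easy" flavour of parallelism the parent describes, in Python. A handful of worker threads pull work packets from a shared pool, with one mutex making sure no two workers grab the same packet. The workload and thread count are made up for illustration.)

        import threading

        work_packets = list(range(20))      # the shared pool of work
        pool_lock = threading.Lock()        # the mutex guarding it
        results, results_lock = [], threading.Lock()

        def worker():
            while True:
                with pool_lock:             # only one thread grabs a packet at a time
                    if not work_packets:
                        return
                    packet = work_packets.pop()
                value = packet * packet     # "process" the packet outside the lock
                with results_lock:
                    results.append(value)

        threads = [threading.Thread(target=worker) for _ in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print(sorted(results))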

        • by jd ( 1658 )

          Doesn't matter if the chess program can look at a million more moves or a billion. Chess Grand Masters look at patterns and compute which patterns are better than other patterns, which means that the pattern itself is a function. The better the Grand Master, the better the evaluation function. You need only have a function that evaluates the permutation of pieces on the board to a degree that is greater than the computer's evaluation of the permutation of a billion moves. Since Chess is a Full Information G
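
          (A toy illustration of what "a function that evaluates the pieces on the board" looks like in practice: bare material counting, nowhere near the pattern-based judgement a Grand Master applies. The board representation is made up for the example; uppercase is White, lowercase is Black.)

          PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9, "k": 0}

          def evaluate(board):
              # Positive scores favour White, negative favour Black.
              score = 0
              for piece in board:
                  value = PIECE_VALUES.get(piece.lower(), 0)
                  score += value if piece.isupper() else -value
              return score

          # White is up a rook in this made-up position:
          print(evaluate(["K", "Q", "R", "R", "P", "P", "k", "q", "r", "p", "p"]))  # -> 5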

          • by Kjella ( 173770 )

            Doesn't matter if the chess program can look at a million more moves or a billion. Chess Grand Masters look at patterns and compute which patterns are better than other patterns, which means that the pattern itself is a function. The better the Grand Master, the better the evaluation function. You need only have a function that evaluates the permutation of pieces on the board to a degree that is greater than the computer's evaluation of the permutation of a billion moves. (...) So, yes, it is because you're lazy.

            ...okay, I don't even know what to say to that. I have no idea what it's like on your planet, but around here we're only human. No wonder developers aren't up to your standards....

            • ...okay, I don't even know what to say to that. I have no idea what it's like on your planet, but around here we're only human. No wonder developers aren't up to your standards....

              Totally agree. I was initially inclined to say (s)he's trolling, but (s)he's clearly quite learned in computers. Maybe (s)he expects that all people are just that smart... Expecting people to get parallel programs right on the first try, given their complexity, is not reasonable, at least where I work (myself included). In fact, I was just working with a developer today to fix a reader/writer issue triggered by parallelism both in code and in writing to the DB. We had to sit down and think out the use

    • Dunno, man, but my CPU is running 98-100% as I write this.

    • by bill_mcgonigle ( 4333 ) * on Tuesday November 22, 2011 @08:40PM (#38143358) Homepage Journal

      So far I have been totally unable to tax my current CPU past 40% utilization.

      Oh, you should try Firefox sometime!

  • It's TSMC, not TMSC.

    Thank you.

  • Competition? (Score:4, Informative)

    by unity100 ( 970058 ) on Tuesday November 22, 2011 @04:31PM (#38140544) Homepage Journal
    AMD has no competition in the APU arena. It is dominating it.

    http://techreport.com/articles.x/21730/8 [techreport.com]

    It's actually possible to game with acceptable detail and FPS on entry- to mid-level laptops without paying a fortune now.
    • by Desler ( 1608317 )

      You misinterpreted the statement to be about APUs whilst the statement was about the CPU market in general.

    • Very true - AMD compete well against Intel in entry-mid laptops.
      Unfortunately, it's a rather narrow segment.

      • by Nadaka ( 224565 )

        I believe that it is the widest consumer segment actually. Desktop usage is shrinking and gaming has been held back by consoles.

    • by Bengie ( 1121981 )

      The APU market is small; the desktop market is big. AMD's APUs compete in both markets.

      Pretend you went back 15 years and tried selling a dual-core desktop CPU. You could claim you're doing well in the multi-core desktop market.

  • Global Foundries (Score:5, Informative)

    by Anonymous Coward on Tuesday November 22, 2011 @04:42PM (#38140684)

    The description is somewhat misleading in that Global Foundries is not a "long-time partner," but what were AMD's own internal wafer fabs until Global Foundries was spun out as a separate company in 2009.

    • by pavon ( 30274 )

      Yeah, and TSMC is the foundry that ATI has used for years (and still does). The plan with the APUs has always been to move ATI's GPU to AMD's^W Global Foundry's process. They have given up on that and decided to move AMD's CPU to the TSMC process instead. It's a pretty big turn of events.

  • Moving 28nm APUs from GloFo to TSMC means scrapping the existing designs and laying out new parts using gate-last rather than gate-first manufacturing. AMD may try to mitigate the damage by doing a straightforward 28nm die shrink of existing Ontario/Zacate products, but that's unlikely to fend off increasing competition from Intel and ARM in the mobile space

    After reading the summary (a few times), I came to the conclusion that I know nothing about this topic. Thanks for the heads up, so that I was not burdened with reading an article that only a select few might understand or care about.

  • by markhahn ( 122033 ) on Tuesday November 22, 2011 @04:43PM (#38140696)

    So far, all Bobcat-based chips have been made at TSMC, haven't they? So is this really news?

  • by WilliamBaughman ( 1312511 ) on Tuesday November 22, 2011 @04:55PM (#38140812)
    Calling Global Foundries AMD's "long-time partner" really dates "MrSeb"; he must have started reporting tech news in the last three years. Global Foundries isn't just a "partner" to AMD: it's part-owned by AMD, and was spun out of AMD's manufacturing operations and merged with Chartered Semiconductor.
  • An x86 CPU manufacturer can and should survive. Maybe Intel or Microsoft or Apple will buy them out to put them out of their misery. The quicker customers can box themselves in, the better. Choice is fleeting, and obviously choosing the current "best" processor is always in your "best" interest, with no thought for the long term. But maybe ARM really is meant to eventually replace the x86 architecture.

  • by PopeRatzo ( 965947 ) * on Tuesday November 22, 2011 @06:50PM (#38142232) Journal

    Financial Analyst Day in February

    Oh my god, there's less than 70 shopping days left!

    It's tradition in my house that on Financial Analyst Day, or FAD as we call it, we make spiced wine and spike it with DMT, then sit around singing appropriate songs, such as "Money" by Pink Floyd, "Money (That's What I Want)" by the Beatles and "Gimme da Loot" by Biggie Smalls.

    Then, sitting in a circle, we pass around a revolver with only one shell loaded; spinning the cylinder, we point it at the person to the left and pull the trigger.

    It's by far my favorite holiday.

  • by Anonymous Coward

    An APU unlikely to fend off increasing competition from Intel? Most Intel Atom-based netbooks/tablets/whatever that I know of have the GMA 3150, which runs at 200 MHz max and has 2 shader units. The C-50 has 80 unified shaders running at 280 MHz (yes, again low, but I'm guessing 80 things working in parallel make up for it; please correct me if I'm wrong), supporting DX11, OpenGL 4.1 and UVD 3. Way better than Intel graphics. True, the CPU isn't very fast, but for things like video playback and 2D/3D games a

  • A single part that has the CPU and the memory on a single PCB. Have 2, 4, 6 and 8GB models. Put the memory right next to the chip and eliminate complexity. You could still add RAM to the mobo, but it would act as cache for other things like disk and video. You could even have multi-socket mobos, but the CPUs would not share memory except through the secondary memory.
