AMD Businesses Hardware

The Ups and Downs of AMD (hackaday.com) 225

szczys writes: In 2003 AMD was on top of the world. Now they're not, but they're also still in business, continuing to produce inexpensive, well-engineered semiconductors. Their fall over the last 10 years is largely due to Intel, which used illegal practices and ethically questionable engineering decisions to knock AMD off its roost while still keeping it in business. The latter prevents the finger of antitrust from being pointed at Intel the way it was at Ma Bell.
  • AMD settled (Score:5, Informative)

    by cfalcon ( 779563 ) on Wednesday December 09, 2015 @07:24PM (#51091895)

    AMD settled their entirely valid lawsuit:
    http://www.cnet.com/news/intel... [cnet.com]

    Intel's actions were shocking and absurd, and they seem to be willing to play by legal limits only when failing to do so would visibly get them hammered with monopoly lawsuits. It was a poor resolution to a very real issue. The other part? It prevents Intel from having to do anything rash or aggressive with their chip power, because by neutering their only competitor they were able to focus more on profitability and less on performance and perception. In my *opinion*, this is a big part of why we saw chips mostly become stagnant compared to years prior: Intel is actually keeping in range of what AMD is capable of on purpose. They are holding back.

    • Re:AMD settled (Score:5, Insightful)

      by Anonymous Coward on Wednesday December 09, 2015 @08:10PM (#51092127)

      In the past Intel did them dirty and there's no argument about that.

      AMD's current problems are entirely their own fault. They fired the development team that made the K8 (and then K10), the processor family that completely destroyed all of Intel's products from desktop to enterprise.

      Intel had the NetBurst CPUs, AKA the Pentium 4: power hungry, low IPC, stuck with the FSB, hamstrung because they were developed around another failed Intel venture, the RDRAM debacle. The arch was utterly unable to go multicore (Pentium D was one of the worst processors ever made, and was multi-chip packaged).

      And let's not forget fucking Itanium. Intel fucked that up so hard they had to backpedal and introduce the 64-bit tech that AMD pushed.

      Enter the K8: scalable chip interconnect, 64-bit, later developed into the first true multi-core CPU available to consumers. Took over the server space completely. For a time, Xeon was dead. Not even kidding.

      And then AMD threw it all away. A bunch of fucking MBAs decided they didn't really need to pay a bunch of expensive chip designers to make chips, and that it would be a better idea financially to sell off the fab so their remaining development team could be isolated away from the fabrication process. Brilliant plan.

      That's the shit that gave us Bulldozer, and that is why AMD sucks today.

      The rest is history. Intel cleaned up their act, released the Core 2, and AMD has been irrelevant ever since.

      Intel has learned. They have not slowed down. AMD almost killed them. Every iteration is faster, lower power, cheaper. They're 2 generations ahead of everyone else in fabrication tech. Skylake CPUs are CRAZY fast and sip power.

      • Re:AMD settled (Score:5, Interesting)

        by Pulzar ( 81031 ) on Wednesday December 09, 2015 @09:36PM (#51092607)

        A bunch of fucking MBAs decided they didn't really need to pay a bunch of expensive chip designers to make chips, and that it would be a better idea financially to sell off the fab so their remaining development team could be isolated away from the fabrication process. Brilliant plan.

        While, yes, AMD management totally did destroy the company, the bit about selling the fab happened later, after the Barcelona disaster, and after they threw away all their money on ATI.

        The fab was not competitive (as GlobalFoundries performance showed for the next few years), and they absolutely had to get rid of it to survive. Not having the cost of maintaining that thing is the reason they are not bankrupt (yet).

        • Re:AMD settled (Score:4, Informative)

          by Anonymous Coward on Wednesday December 09, 2015 @11:19PM (#51092985)

          I wouldn't say that ATI was a bad purchase; arguably it's the only reason AMD is still competitive, and they can leverage that design work into making better desktop chips.

      • by Mashiki ( 184564 )

        And then AMD threw it all away. A bunch of fucking MBAs decided they didn't really need to pay a bunch of expensive chip designers to make chips, and that it would be a better idea financially to sell off the fab so their remaining development team could be isolated away from the fabrication process. Brilliant plan.

        Well, it'll be Intel's chance to gain again, since for the last couple of years Intel has been hiring a bunch of MBAs and slapping them into high positions within the company, and it's starting to show already.

      • So when did their management team take over RIM?
      • I pretty much agree with your timeline, and wasn't really aware of the business plan, but that sounds about right. The results were the same.

        As for Intel learning, I am not as optimistic. AMD hasn't been competitive, meaning Intel hasn't had to do much, really. They have come out with several generations of solid CPUs, however the increase in computational power year over year isn't what it used to be. You could chalk it up to physical limitations, or even lack of demand, or is it lack of competition? About t

      • by sl3xd ( 111641 )

        Having worked in the HPC/supercomputer world during the rise & fall of AMD, I really wish I could mod you up further.

        TFA makes much of the Intel compiler & benchmarks compiled with the Intel compiler for Intel processors.

        I call BS. Nobody in HPC was dumb enough to be fooled by the benchmarks using the Intel compiler & Intel chips. There are (and were) commercial, highly optimizing alternatives to Intel's compiler, each with similar speed boasts over GCC: PathScale and PGI come to mind.

    • Re:AMD settled (Score:5, Informative)

      by Rockoon ( 1252108 ) on Wednesday December 09, 2015 @10:43PM (#51092877)

      AMD settled one of their entirely valid lawsuits:

      Fixed that for you.

      In another lawsuit [europa.eu], Intel was convicted of anti-trust violations.

      The European Commission has imposed a fine of €1 060 000 000 on Intel Corporation for violating EC Treaty antitrust rules on the abuse of a dominant market position (Article 82) by engaging in illegal anticompetitive practices to exclude competitors from the market for computer chips called x86 central processing units (CPUs). The Commission has also ordered Intel to cease the illegal practices immediately to the extent that they are still ongoing. Throughout the period October 2002-December 2007, Intel had a dominant position in the worldwide x86 CPU market (at least 70% market share). The Commission found that Intel engaged in two specific forms of illegal practice. First, Intel gave wholly or partially hidden rebates to computer manufacturers on condition that they bought all, or almost all, their x86 CPUs from Intel. Intel also made direct payments to a major retailer on condition it stock only computers with Intel x86 CPUs.

      • by Luthair ( 847766 )
        It's not remotely enough money given the benefits Intel has had as a result of AMD's decline.
    • I think this is a big part of why we saw chips mostly become stagnant compared to in years prior

      Nope. CPU power increases have slowed down because the mainstream market isn't demanding faster CPUs. It's not the bottleneck for a vast majority of users. Even serious games only need a decent CPU and then put all of their money into video cards. The market pressure has been on price and power usage, not performance. Intel is just responding to the market.

      • For numerical work, modern CPUs have gotten MUCH MUCH faster than older CPUs. Things like FMA, more vector ops, loading and storing two cache lines per cycle, etc. These features are hard to take advantage of in higher-level languages, but modern CPUs are vastly faster than older ones. For any normal user, modern CPUs are fast enough. If you need higher performance in games and simulation software, you can write your code to use the CPU more effectively.

        In the end a GPU is really not any faster than a CPU but a GPU

    • The article is repeating a lie. The actual settlement and case do not contain the lie.

      The Lie is Intel sold below cost.

      Due to a fixed cost to operate a fab and process wafers, the cost per die is greatly impacted by line yield.

      Due to the competitors line yield of about 50% at the time, it was assumed Intel had to be selling below cost. This was investigated and found to be false based on the number of raw wafers purchased and the number of die shipped. If two identical companies manufacture identical chi

  • by Anonymous Coward on Wednesday December 09, 2015 @07:31PM (#51091919)

    Read Ars Technica's history of AMD, the issue was with spectacular mismanagement more than with Intel's practices.

    http://arstechnica.com/business/2013/04/the-rise-and-fall-of-amd-how-an-underdog-stuck-it-to-intel/

    • My father worked there in Processor Validation up until 2007, and they really did make a bunch of crap decisions and run themselves into the ground. Intel being Intel aside, it is astounding how they pissed away such a great situation.
      • by Anonymous Coward on Wednesday December 09, 2015 @08:27PM (#51092235)

        didn't know your father, but i was there for a "long time" up till 2013, and mismanagement is about the only thing AMD had going on at the top. it was comically bad. and it still is... i get a chuckle out of fanbois hyping Lisa-this, Raja-that, whatever. i never met Raja, so can't comment on him; but Lisa is not terribly impressive technically, and seemed to be planning for her golden parachute from the moment she walked into our office.

        she also, apparently/allegedly, told teams (who had dependencies on other internal teams) that different projects were "top priority". so you'd have a weird deadlock case of project A being held up by people who were working on project B (being told it was top priority), who were in turn held up by a different set of people working on project A (being top priority). was a way of bullshitting paying customers, best i could tell. that was a sign that it was time to move on...

    • Thank you. The summary was pretty biased. I've never been a fan one way or the other, I mainly just try to get the best processors and video cards that I can with the money that I have. I've had AMD machines and ATI cards plenty in the past, and I still feel they deliver good low end chips and solutions. But with the introduction of the Core 2 Duo, Intel really started to shape up, and lately they've been blowing it out of the ballpark (for the most part). AMD, on the other hand... hasn't been. Now, y

    • I fully agree w/ this. I was a fan of AMD after they acquired a part of the ex DEC Alpha team, and for a while, they were doing well (even though as a RISC purist, I hated the idea of the x86 instruction set going 64-bit). But they failed to keep up w/ Intel due to a lot of their own shortcomings.

      Main one, from what I could tell, was that AMD's process practices were way behind Intel's, and as process shrinks became more difficult, that magnified the gap b/w the 2. Couple that w/ the fact that AMD, in

  • by Sowelu ( 713889 ) on Wednesday December 09, 2015 @07:34PM (#51091935)

    The article mentions Intel "Permanently disabling AMD CPUs through compiler optimizations". Am I reading this right, did they find a way to brick AMD processors? It doesn't say anything else about it in the article that I can see, if so, and I'm really curious.

    • by ClickOnThis ( 137803 ) on Wednesday December 09, 2015 @07:43PM (#51091995) Journal

      The article mentions Intel "Permanently disabling AMD CPUs through compiler optimizations". Am I reading this right, did they find a way to brick AMD processors? It doesn't say anything else about it in the article that I can see, if so, and I'm really curious.

      No. TFA explains that Intel's compilers were written to ignore certain optimization-friendly parts of the instruction set if they were compiling for a non-Intel CPU. AMD actually supported the instructions, but Intel's compilers just pretended that AMD didn't. And surprise! Intel's processors beat the crap out of AMD's in benchmarks. Really shitty of Intel to do that.

      • AMD actually supported the instructions, but Intel's compilers just pretended that AMD didn't.

        I'm not apologizing for Intel because they've definitely got some shady dealings, but if we've got our facts straight, their compiler is not pretending AMD doesn't support SSE.

        Intel's compiler does not target instruction capabilities. It targets specific CPU architectures with intimate knowledge of their pipeline. Even if your CPU supports a fancy new instruction, for what you need it for it might perform worse in aggregate than some alternative.

        So less about SSE, AVX, etc. and more about Sandy Bridge, Hasw

        • by xiando ( 770382 )
          Your story must be so true; it is just so very hard for Intel to check what instructions are actually present and optimize based on that, not the CPU ID. It must be totally impossible to do this... except that GCC does this just fine.

          There is a reason why GNU/Linux users have favored AMD processors and there is a reason CPU benchmarks give somewhat different results with GCC code vs Intel compiler code.
          • Speaking as a compiler writer: There is a huge difference between architectural and microarchitectural optimisations. When vectorising, the back end provides a cost model indicating the availability of different resources.

            To give a toy example, consider two implementations of the same vector instruction set. One provides a one-cycle add and a 5-cycle multiply, but microcodes the fused multiply-add and it takes 10 cycles. The other provides a 2-cycle add and a 6 cycle multiply, but a 7-cycle fused mult

        • Sorry, but that is bullshit. Intel started doing this very early on (in version 8 of their compiler), and none of their CPU capability checks looked at the specific architecture at all. The only thing they checked was the CPU capability flag, and they deliberately skipped that check unless the chip was from Intel.

          They even cocked this up with their first iteration, such that instead of producing binaries that ran slowly on AMD chips it produced binaries that segfaulted on AMD chips. See http://www.swallowta [swallowtail.org]

          • I've not yet seen a valid and reasonable argument for why an Intel compiler should support a non-Intel product at all, let alone to the same level as an Intel product - care to give one?

      • by dbIII ( 701233 )
        As an example, some CPU-bound, trivially parallel code that some geophysicists I work with use, compiled with the Intel compiler, is only twice the speed on a 64-core AMD machine as on a four-core Intel laptop. Other stuff is clockspeed-to-clockspeed between Xeon and Opteron.
        Sucks.
        I really don't know why they insist on the Intel compiler in that place, but I think some marketing drone has got into the chain with the developers.
  • Back in the 486/Pentium days, AMD made a much better processor (the K6 was amazing for its time), and even when the quads came out almost a decade ago, the bang for buck was still there. But sadly my next build is probably going to be Intel, simply because that's where the power is.
    • by unixisc ( 2429386 ) on Wednesday December 09, 2015 @09:03PM (#51092437)
      Actually, the K7 - which was the Athlon and the first processor made by the ex Alpha team - was their first great CPU which matched or beat Intel. They did a remarkable coup when they came out w/ AMD64, totally upsetting Itanium in the process and forcing Intel to adapt their architecture and do a cross licensing deal. Too bad that on the fab side of things, they failed to keep up, and thereby let their game plan implode. That's one thing Intel had been brilliant at. In the 90s, I recall people would speculate on which of the major RISC CPUs - SPARC, MIPS, POWER, Alpha, PA-RISC, et al would make it big. Just having far superior process technology enabled Intel to ultimately first catch up, and then beat each of them one by one.
  • It Goes Deeper (Score:2, Interesting)

    by Anonymous Coward

    Attorney here. In the late 90's I worked on contracts between clients and Intel. Intel was offering payments if you put a banner on your website that said it was optimized for the Pentium II. They also helpfully provided code to slow your website down if it detected any non-Intel processor.

  • by Tablizer ( 95088 ) on Wednesday December 09, 2015 @07:49PM (#51092015) Journal

    Intel knows they have to let AMD live for at least 4 reasons:

    1. Avoid anti-trust lawsuits over x86 chips.

    2. Have a second-source option so that vendors don't switch to ARM. Contracting practices for critical equipment often require more than one part source (vendor).

    3. Keep the x86 market viable. Without producer competition, x86 may die a slow death.

    4. Have someone to steal ideas from.

    • by Kjella ( 173770 )

      While all of those might be good reasons to keep AMD alive, I don't think any of them are strong enough to ease off in the competition with ARM. Intel needs to push mobile chips that very directly compete with AMD and at this point the collateral damage might be better than the alternative. With lawyers you can stay #1 for years, #2 is marginal, #3 takes forever and #4 they probably hire any smart people AMD has to let go. Intel could join the ARM pack using their production process and low-level design kno

      • That is right - Intel is under no compulsion to keep AMD alive. And AMD is failing all on its own.

        For Intel to get into mobile, they may have to come up w/ a chip that beats ARM on both performance, as well as price/performance. It's not clear that that's achievable w/ x86, so they might try one of the other instruction sets that they have cross patenting agreements on.

        But that begs the question of why would Intel, w/ their high end fabs and processes, want the low end market, which is what the chea

        • by mikael ( 484 )

          It's three things: they have to beat ARM on price, performance and power consumption simultaneously. Lowering all three is a fail, and raising all three is also a fail for mobile, though OK for high-end servers. They have to keep price down (less R&D) and power consumption down, while keeping performance up (more R&D).

  • by taradfong ( 311185 ) * on Wednesday December 09, 2015 @08:09PM (#51092125) Homepage Journal

    - AMD was on top of the world with Opteron / AMD64
    - Intel was losing everywhere it went. You'd be hard-pressed to find an Internet / financial shop *not* buying AMD
    - But Intel responded with Merom / Core2Duo. That mostly closed the gap, though initially the memory subsystem was still inferior
    - Had AMD met expectations with the follow-on part (Bulldozer), there is no reason they could not have continued to win
    - But in my mind, their ATi acquisition initiated their downfall. They became schizophrenic.

    To beat Intel (like most market leaders) you have to have a non-trivial advantage. When AMD had one, they kicked Intel's ass to the point that they severely altered Intel's roadmap. When they no longer had one, they lost.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      as an ex-AMD-red guy, i can tell you that no one at the company was happy about AMD buying ATI. the AMD-green folks saw their stock price drop.. the red side saw ineffective leadership, weird internal politics, exceptionally-poor design methodologies, and a loss of a cool corporate culture. both saw tons of abandoned pre-pre-llano projects and strange re-orgs. i don't think the ATI acquisition was the downfall of AMD, they were already on that trajectory; but it didn't help anyone, that's for sure...

    • AMD lost my interest when they started their new naming schemes; before, you had Sempron, Opteron, Phenom, etc. Then they went to A series, C series and E series; trying to make heads or tails of which chip was better was not as easy anymore.

      The second thing was AMD buying ATI and more frequent bundling of their GPUs with their CPUs - and that was a problem only because ATI just generally sucks under Linux.

      So it became way easier to spec out an Intel i-series and also find one bundled with an nVidia GPU.

      Wasn't my c

  • by Kjella ( 173770 ) on Wednesday December 09, 2015 @08:16PM (#51092161) Homepage

    July 24, 2006: AMD buys ATI, stretching their credit to the limit
    July 27, 2006: Intel launches Core 2 Duo (Conroe)

    To get an idea of how quickly AMD was in trouble, here's Anandtech [anandtech.com] in November 2007 at the launch of Phenom:
    If you were looking for a changing of the guard today it's just not going to happen. Phenom is, clock for clock, slower than Core 2 and the chips aren't yet yielding well enough to boost clock speeds above what Intel is capable of. While AMD just introduced its first 2.2GHz and 2.3GHz quad-core CPUs today, Intel previewed its first 3.2GHz quad-core chips. (...) Inevitably some of these Phenoms will sell, even though Intel is currently faster and offers better overall price-performance (does anyone else feel weird reading that?). Honestly the only reason we can see to purchase a Phenom is if you currently own a Socket-AM2 motherboard; you may not get the same performance as a Core 2 Quad, but it won't cost as much since you should be able to just drop in a Phenom if you have BIOS support.

    Up to July 2006: K8 > Netburst
    July 2006 - November 2007: K8 < Core (AMD sales tank)
    November 2007 - October 2011: K10 < Core (successor lagging behind)
    October 2011 - 2016?: Bulldozer < Sandy Bridge (late and underperforming)

    Why didn't AMD have the cash to burn in 2006-2009 to come up with something better? Oh, a $5.4 billion purchase of ATI. It sucked all the R&D out of CPUs and into APUs and "synergies", but even today you see no major differences between an APU and pairing a CPU + dGPU unless you've written very special code for just that situation.

    • by Xest ( 935314 )

      Let's be honest, the ATI purchase was a bold move that had the potential to pay off big time in an era when we were seeing convergence between GPUs and CPUs with GPGPUs and such.

      Unfortunately, it was a gamble that they lost, the market they foresaw never came to fruition to the degree they were expecting, in large part because everyone got distracted by mobile which became the new thing and the new focus. Had the iPhone and Android never have happened a completely different set of chip designs may have beco

  • by ndykman ( 659315 ) on Wednesday December 09, 2015 @08:24PM (#51092217)

    Sure, there was lots of controversy over their actions in the late 90s and early 2000s, but by 2005, Intel had recovered from the mistakes made in NetBurst. Starting with the Core microarchitecture, Intel made some very strong advances in process and gains in their CPU architectures in the consumer and server spaces. AMD got distracted with the APU designs and made a huge misstep with the Bulldozer line. I think the ATI acquisition was a distraction as well. Meanwhile, Sandy Bridge was in place and allowed Intel to make gains all around. By the time Haswell was in place, their entire lineup was solid. They had the core counts to match the high-end Opterons, they were pushing ahead on virtualization (VT-d, APICv), and AMD was and is in a rough spot.

    Zen needs to have good parity with Skylake for AMD to regain market share, and that's a tough task. Also, Intel has major process advantages. They are at 14nm already, which helps keep yield up as transistor count rises (core count). AMD does have an advantage in the all-in-one market and does very well in the budget segments. We will see if their ARM-based assets play out, but it's going to be tough going for AMD with Intel on one side and NVidia on the other.

  • for ZEN! Come on AMD!

  • by steveha ( 103154 ) on Wednesday December 09, 2015 @08:52PM (#51092389) Homepage

    Intel's compilers still use the CPUID instruction to decide whether to emit efficient code or not. Intel has an official notice to this effect. Charmingly, the notice is only available as an image file. I presume this is to make it harder to search for the notice.

    https://software.intel.com/en-us/articles/optimization-notice/ [intel.com]

    Every time I see benchmarks now, I wonder whether the results were affected by the use of an Intel compiler.

    I try very hard to not buy Intel products.

    • I've tested version 13 of ICC using Povray [povray.org] on several of my AMD and Intel systems. I can tell you that the dispatch code works as you would expect: the AVX (but not AVX2) code path does work on the FX-8350, but does not work on the Phenom II X6 1090T, so it switched to the SSE2 path. Of course, compiling without dispatch will cause seg faults on processors that don't support AVX or AVX2. But that doesn't necessarily mean ICC is the fastest compiler for AMD. For Povray at least, that would be GCC.

    • How good would the Intel compiler have to be at optimizing for AMD products before people would no longer claim that Intel was deliberately crippling the optimizations? I submit that there is no limit, and therefore there is no reason for Intel to try.

      • by dbIII ( 701233 )
        It's not how good it has to be but how astonishingly bad it is now.
        In my case it was a CPU-bound, trivially parallel thing: only half the time on 64 AMD cores running flat out as on an i5 laptop, with both using the Intel compiler. With GCC the same sort of thing is clock-to-clock and core-to-core, with the AMD machine finishing 16 times faster, as it should. I forget how many days the short run took, but it was days.
      • by at0mjack ( 953726 ) on Thursday December 10, 2015 @05:48AM (#51093859)
        Oh come on, this has been beaten to death. Nobody is saying that Intel has to optimise for AMD products. There is a standard mechanism (introduced by Intel!) to query a chip to find out what instruction sets it reports. Intel's compiler uses this mechanism to decide what code branch to run, but *only* for Intel chips. The literal code path is

        if (Intel chip) then if (supports SSE2) then run SSE2 code else run non-SSE2 code endif else run non-SSE2 code endif

        All people are saying is that the code path should be

        if (supports SSE2) then run SSE2 code else run non-SSE2 code endif

        See the difference?

        Yes, you can get extra speed by ordering the instructions differently for different architectures, and Intel's compiler quite rightly does that to produce Nehalem-optimised code or Skylake-optimised code. I don't expect the compiler to produce Bulldozer-optimised code, but I expect it to allow me to run the Nehalem-optimised code on a Bulldozer. Where does this meme that this request is "forcing Intel to optimise for the competition" come from? I want Intel to do *less* work, not more: all they need to do is *remove* a small amount of code from their compiler and I'd be happy.

      • by The_countess ( 813638 ) on Thursday December 10, 2015 @05:55AM (#51093871)
        that's an easy answer: when performance no longer goes up when you change the ID to 'GenuineIntel'. changing that ID should have NO effect on performance. the fact that it does is a CLEAR indication intel does things with the compiler it shouldn't be doing.
        • by ndykman ( 659315 )

          Well, actually, no. An optimal compiler and libraries would indeed be able to squeeze out some gains by knowing exactly what microarchitecture was being targeted versus just instruction sets.

    • by RuffMasterD ( 3398975 ) on Thursday December 10, 2015 @09:27AM (#51094293)

      Optimization Notice

      Intel’s compiler may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

      Notice revision #20110804

      As written by Intel [intel.com], but written in text for the convenience of visually impaired slash-dotters with screen readers. Highlights mine.

  • by swm ( 171547 ) <swmcd@world.std.com> on Wednesday December 09, 2015 @09:27PM (#51092551) Homepage

    I kind of don't get the defeatured compiler hack.
    It seems like all AMD needs to do is contribute the appropriate code generators to GCC.

  • by Rockoon ( 1252108 ) on Thursday December 10, 2015 @12:30AM (#51093219)
    Links to the FACT that Intel was convicted of anti-trust against AMD keep getting modded down.

    So here it is again:

    E.U. Commission press release detailing their conviction of Intel. [europa.eu]

    The European Commission has imposed a fine of €1 060 000 000 on Intel Corporation for violating EC Treaty antitrust rules on the abuse of a dominant market position (Article 82) by engaging in illegal anticompetitive practices to exclude competitors from the market for computer chips called x86 central processing units (CPUs). The Commission has also ordered Intel to cease the illegal practices immediately to the extent that they are still ongoing. Throughout the period October 2002-December 2007, Intel had a dominant position in the worldwide x86 CPU market (at least 70% market share).

    Intel was CONVICTED of monopoly abuse. This is an irrefutable fact. There are a lot of people here either claiming that they were never convicted or downmodding those that are revealing the truth. The site I linked to is the official press release site of the E.U. Commission.
  • While I am generally happy with my CPU experience, pathetic Linux support for their GPUs means I will never buy their tat again. (Come back, ATI - all is forgiven.)
  • The article does not cover the whole story, missing the important parts of the last 10 years. AMD dropped the ball completely after the Athlon 64 - anyone else remember the Sempron and Opteron? Phenom was meant to redeem them, but Intel's Core 2 architecture completely obliterated AMD, taking the entire high end of the market and beating them on bang per buck in the middle range as well. AMD were relegated to competing (relatively successfully) for the low end. Bulldozer only compounded this, again unable
