Intel Hardware

Intel "East Fork" Technology Migration 165

Hack Jandy writes "When Intel's Centrino platform was first unveiled, industry experts were surprised by the strong performance of the Pentium M, which is based on Intel's P6 (Pentium III) architecture. According to sources in the industry, Intel has officially adopted the approach of migrating the Pentium M to the desktop (hence "East Fork") to offset some of its Pentium 4 processor sales. Cheaper, slower-clocked, cooler, but higher-performing processors are on the way to an Intel desktop near you!"
Comments Filter:
  • by PornMaster ( 749461 ) on Monday November 15, 2004 @09:33AM (#10819187) Homepage
    The cooler they can keep a well-performing CPU, the less noise they need coming out of the box. Let's count this one as a victory for using PCs for PVR/Jukebox-style uses.
  • Great for servers (Score:2, Interesting)

    by Folmer ( 827037 ) on Monday November 15, 2004 @09:34AM (#10819196)
    Gonna be great to use this platform for servers..

    Low power usage...
    Great performance..
    Low heat emission (easy to make passive cooled..)

    GamePC ran a test not long ago, and it performed on par with the P4EE and AMD's FX-5x...
    http://www.gamepc.com/labs/view_content.asp?id=dothandesktop&page=1
  • by iamthemoog ( 410374 ) on Monday November 15, 2004 @09:43AM (#10819255) Homepage
    http://www.reuters.com/newsArticle.jhtml?type=topNews&storyID=6786951 [reuters.com]

    Since it's from Reuters anyhow... old news too (11th Nov).

  • I guess. (Score:4, Interesting)

    by dj245 ( 732906 ) on Monday November 15, 2004 @09:48AM (#10819285) Homepage
    Intel does listen to their customers after all! I mean, after their flagship processor becomes incapable of scaling higher... and, uh, emits more heat per area than most smelters... and needs server levels of expensive cache to keep it competitive.

    So yep, they respond very quickly to customer needs and wants.

  • by swordboy ( 472941 ) on Monday November 15, 2004 @09:49AM (#10819292) Journal
    I don't think that you are seeing the whole story. Basically, Intel has been holding out for IBM's silicon-on-insulator [ibm.com] technology because it reduces power requirements a good deal. Unfortunately for Intel, IBM is pretty sneaky when it comes to licensing and often prefers to swap technology rather than accept cash. I'd imagine that IBM is holding out for an x86 cross-license agreement, which Intel does not want to give up.

    What you've seen in the past couple years is a game of chess. With each move, the other hopes that they have positioned themselves to better reach a licensing deal. Intel's move to non-clock processor ratings was a big move in this game.

    From what I've seen at Intel's developer forums, they're working on some radically different architecture. Something that isn't von Neumann at all. They're calling it "massively parallel" but the industry seems to think that this means multiple cores on one chip. I think that it means thousands or millions of "processing elements" on one chip (think really small processing elements). Their claim is that they'll be able to apply this architecture to everything from mobile to high-end servers simply by adding or subtracting elements as power constraints allow.
  • by ceeam ( 39911 ) on Monday November 15, 2004 @10:06AM (#10819394)
    I noticed that every x86 CPU architecture in the past decade climbed 4-5 times in MHz from inception to the "end of the line" model: 486 - 25..100(???, 133 is AMD's version and those started higher than 25), Pentium - 50..200, Pentium4 - 1200..3600 now and still has a tad in reserve as shown by extreme overclockers; similarly for AMD, K6 - 166..550; Athlon - 500..2.x(?). And now Pentium2/3 - started at 233 and climbed until around 1300, which is higher than 4/5x. But maybe there's been some really notable arch changes since P2? What're your thoughts?
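    The comment's "4-5x from inception to end of line" pattern is easy to check with a few lines of arithmetic; the launch/end clocks below are the commenter's own rough figures, not verified numbers:

```python
# Clock-scaling ratio for each x86 family named above.
# (start_mhz, end_mhz) pairs are the commenter's approximate figures.
families = {
    "486":            (25, 100),
    "Pentium":        (50, 200),
    "Pentium II/III": (233, 1300),
    "Pentium 4":      (1200, 3600),
    "K6":             (166, 550),
    "Athlon":         (500, 2200),
}

for name, (start, end) in families.items():
    print(f"{name:>15}: {end / start:.1f}x")
```

    Run it and the P6 line (Pentium II/III) does stand out at roughly 5.6x, above the 4-5x range of the others.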
  • by IGnatius T Foobar ( 4328 ) on Monday November 15, 2004 @10:11AM (#10819428) Homepage Journal
    This is really about Intel finally coming to terms with the fact that nobody wants to buy Itanium chips. That's where Intel was headed, and Intel assumed that everyone would follow along. Unfortunately, Itanium's future depended on technology advancements that never happened, and a rate of adoption that nobody was willing to pursue.

    This is why Xeon became an architectural dead end: Intel wasn't willing to move the technology forward, because Xeon was supposed to be superseded by Itanium.

    Did you know that "Pentium M" is actually based on the same technology they originally called Pentium Pro? It's true. It was a good design. It didn't do all that well initially because its 16-bit performance was abysmal, and people were still running a lot of 16-bit software at the time. Now that everything is 32-bit, Pentium Pro (now Pentium M) is just fine. The fact that it gets used in laptops is a testament to its ratio of performance to power consumption.

    Intel would be wise to move forward with this. They ought to ditch Xeon entirely, and perhaps even graft the AMD64 instruction set onto this chip.
  • Re:Why do this? (Score:3, Interesting)

    by myurr ( 468709 ) on Monday November 15, 2004 @10:22AM (#10819502)
    The problem for Intel is this: by the time they get this chip to market, or certainly not long after, Microsoft will actually ship Windows XP 64.

    While the Pentium M may be able to close the gap to the Athlon 64 when running in 32-bit mode, possibly even beating the AMD chip if Intel is successful in increasing the M's clock speed, the Athlon is just waiting to really stretch its legs. In some situations moving to 64 bits will not improve performance, and could possibly even hamper it, but for the majority of desktop applications and games with optimised code, the 64-bit version with the extra registers will trounce the 32-bit chips.
  • by myurr ( 468709 ) on Monday November 15, 2004 @10:27AM (#10819550)
    Precisely. I would be surprised if they could make such a chip and keep it both x86 compatible and fast for today's applications. If it's only slightly parallel, then it's no more than a dual-core chip with hyperthreading. If it's massively parallel like the grandparent post suggested, then each individual thread is unlikely to run as fast as a P4 or Athlon 64 does today, and that will hurt applications that don't benefit from being multithreaded (i.e. most of today's unoptimised apps).
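    The worry about slow individual threads can be made concrete with Amdahl's law: if most of an application is serial, even thousands of processing elements barely help. The fractions below are illustrative, not measurements:

```python
# Amdahl's law: overall speedup when a fraction p of the work can be
# spread across n processing elements and the rest stays serial.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# A half-serial application gains almost nothing from 1000 elements...
print(amdahl_speedup(0.5, 1000))   # just under 2x
# ...while a 95%-parallel workload still tops out well below 1000x.
print(amdahl_speedup(0.95, 1000))  # just under 20x
```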
  • by Slack3r78 ( 596506 ) on Monday November 15, 2004 @11:11AM (#10819904) Homepage
    The Banias cores had 1MB L2 cache and the new Dothans have 2MB L2 cache, yes, but that's not the sole reason they perform as they do. The current Prescotts have 1MB L2, and Intel is slated to introduce 6xx P4s in January with 2MB L2, but I can promise you that the additional cache won't suddenly cause Prescott to perform similarly to Dothan clock for clock.

    It's a design philosophy based around high IPC, not the large cache, that makes the Pentium M such a strong performer.
  • by Oestergaard ( 3005 ) on Monday November 15, 2004 @11:27AM (#10820057) Homepage
    Dude, computers have not been even *close* to Von Neumann for several decades.

    Von Neumann assumes uniform memory access times - this is largely untrue for any ordinary scalar processor with a cache - ten years ago processors had internal memory write buffers, l1 cache, possibly l2 cache and main memory - today it's even more complicated. But common for all of this is that even a simple hierarchical memory system with a single level cache and the main memory makes your computer very far from Von Neumann.

    So, if Intel wants to present something that "isn't von Neumann at all", all they need to do is pull an i386 out of a hat and wave it at the drooling masses.
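    The point about non-uniform access times is observable from plain user code: visiting the same elements in cache-friendly order versus a large stride takes measurably different time on real hardware. A rough sketch (absolute timings vary by machine, and Python's interpreter overhead blunts the effect):

```python
import time
import array

N = 1 << 20  # ~1M machine integers
data = array.array("l", range(N))

def walk(step: int):
    """Sum every element exactly once, visiting them with the given stride.
    Returns (total, elapsed_seconds)."""
    start = time.perf_counter()
    total = 0
    for offset in range(step):
        for i in range(offset, N, step):
            total += data[i]
    return total, time.perf_counter() - start

# Both walks touch the same N elements; only the visiting order differs.
# On a machine with caches, the large stride tends to be slower because
# each access pulls in a cache line whose neighbours aren't used soon.
seq_total, seq_time = walk(1)
str_total, str_time = walk(4096)
print(f"sequential: {seq_time:.3f}s, strided: {str_time:.3f}s")
```

    In a true von Neumann machine with uniform memory, both walks would cost exactly the same.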
  • by getch(); ( 164701 ) on Monday November 15, 2004 @11:55AM (#10820345)
    SOI only helps reduce one particular source of static power consumption. While static power is a big issue at 90nm, SOI doesn't magically solve it. Further, the big problem with Prescott is power dissipation and heat under load--dynamic power consumption. I'm not sure where you heard this rumor, but even if true it's ancillary to the current discussion.
  • Re:Bout bloody time (Score:3, Interesting)

    by qtothemax ( 766603 ) on Monday November 15, 2004 @11:58AM (#10820379)
    I'll believe that when I see it though, I don't think Intel would do it if only for the reason it's not as marketable.

    Of course it's marketable. The new model number scheme puts the P4 at 4xx, while the M is 6xx. That's 200 more. It must be a lot better. I knew they'd do this as soon as they came out with the model numbers.

    Note: I'm not a moron. I'm just writing what "joe sixpack" thinks.
  • by fitten ( 521191 ) on Monday November 15, 2004 @11:59AM (#10820397)
    Okay I guess you have not read that Intel is going to produce a Xeon with x64 extensions.

    Not "going to"... "have"... They have been for sale (and actually shipping) for a couple months now.

    I have to wonder if we are possibly seeing the end of the X86 ISA?

    Well... If one thing has been proven in the past it is that software is the driving force, not hardware. It will still take some time for the near 30 years of x86 software to be replaced by "platform independent" stuff (like Java and .NET).

    I mean Microsoft is dropping the x86 from the Xbox II; that means a port of Windows XP to the PowerPC.

    Yeah... this is really interesting... especially along with the three versions of the XBox2 that will be shipping (one of which is actually called a "PC").

    Really kind of funny, since Windows NT was supposed to be multiplatform from the start.

    It was. I had the PPC, Alpha, and MIPS versions. One major problem for those was that there wasn't a market for them. There were only a few machines of those architectures that wanted to run Windows, and no one would buy them for home use. It just didn't make sense to keep them around (from a making-money perspective). Also, some of the work to support those ports was supposed to be done by the hardware vendors, and they didn't do it (also because of the making-money issue), so Microsoft was either left to do it itself (on a money-losing platform) or drop them from the support line.

    Will Microsoft support Longhorn on IBMs power cpus?

    Very good question... with the XBox2, it certainly seems that it wouldn't be too much of a step farther.

    Frankly, Intel has really had a dismal record with CPUs except for the x86. The 8080 and later 8085 became second-string players to the Zilog Z80, a better 8080, much like the Athlons are now. The 432 and 80860 were never hits. Intel even dropped its i960 line of embedded RISC CPUs to jump on the ARM bandwagon with its XScale line, which it bought from DEC.

    Well... some folks would disagree with this. The 8051 (and follow-ons) were huge in the embedded world. The i860 wasn't intended to be a "home PC" type processor, and saw good use in the HPC world (Intel Paragons, iPSC860s, etc.) and in the graphics world (high-end SGI graphics cards were based on i860s - RealityEngine, etc.). Likewise, the i960 family was huge in embedded systems. They were big in printers and all sorts of other devices. The i960s were phased out for newer/better technology in the XScales. The i960 was getting pretty old :)
  • by cant_get_a_good_nick ( 172131 ) on Monday November 15, 2004 @12:05PM (#10820468)
    Cringely had an article a while back [pbs.org] that mentioned Google liking to use Pentium IIIs in their data center. Yes the Pentium 4s were faster, but if you looked at your datacenter as a whole system, including power, cooling, and space requirements, they were better off with 'old' Pentium IIIs. At the time, I think Google was worried they wouldn't be able to source new machines with P-IIIs, looks like Intel is following them this time. Intel seems to be following a lot lately, the megahertz at any cost mantra sure faded fast.
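    The "datacenter as a whole system" view comes down to performance per watt: under a fixed rack power budget, a slower but cooler chip can deliver more aggregate throughput. The figures below are made up purely for illustration, not real P-III or P4 measurements:

```python
# Hypothetical per-node numbers -- invented for illustration only.
chips = {
    "Pentium III": {"reqs_per_sec": 800,  "watts": 30},
    "Pentium 4":   {"reqs_per_sec": 1200, "watts": 85},
}
RACK_POWER_BUDGET = 5000  # watts available per rack

for name, c in chips.items():
    nodes = RACK_POWER_BUDGET // c["watts"]       # how many nodes fit
    rack_throughput = nodes * c["reqs_per_sec"]   # aggregate req/s
    print(f"{name}: {c['reqs_per_sec'] / c['watts']:.1f} req/s/W, "
          f"{nodes} nodes -> {rack_throughput} req/s per rack")
```

    With these invented numbers the faster P4 loses at the rack level, which is the shape of the trade-off described in the comment.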
  • Don't think so (Score:4, Interesting)

    by Moraelin ( 679338 ) on Monday November 15, 2004 @12:24PM (#10820709) Journal
    I don't think so.

    Intel has basically been hanging itself with the awful lot of rope their own marketing gave them. The "MHz is everything" marketing was an easy thing to push, since most people actually _want_ one number that tells them everything about a CPU.

    (True story: I actually spent some time arguing with a marketroid about it, and gave up. He was arguing that it must be Anandtech's and everyone else's benchmarks that are at fault, because CPU A is in some apps 50% faster than CPU B, in some apps equal, and in some apps actually a little slower. "It can't be! If CPU A is X% faster than CPU B, it must be X% faster in everything!" Any explanation about differences in CPU architecture and such went right over his head.)

    So it was easy for Intel to push the MHz as the one true speed indicator. And for a while all they had to do was keep putting out CPUs with more and more MHz.

    Except after a while it became a trap. Any new design _had_ to be higher MHz, or have Intel's own marketing working against it. All those many millions that went into telling people "buy a higher-clocked CPU" would now basically tell them "don't buy the newest Intel CPU" if Intel made one with fewer MHz.

    And now Intel finally _has_ to find a way out of the hole it dug itself into.

    As for Cyrix (now VIA), it was never really a problem for Intel. Cyrix just fell behind performance-wise on its own. The last proper Cyrix versions were already falling behind in integer performance too, but it was their floating-point performance that was abysmal. So what killed Cyrix was not so much Intel as games going 3D: now everyone had benchmarks everywhere, clearly showing the Cyrix as barely crawling.

    And Via's versions fell behind even more. They aren't just slower in MHz, they're also slower _per_ MHz. Other than being low power, they just suck.

    And it's not that VIA really _wants_ to be in the poor man's niche, for Chinese families who can't afford an Intel or AMD. People find such niches to survive, but no one really wants to _stay_ in such a niche. No one actually wants to sell their top CPU at $30 or less, instead of, say, the $600+ that an Athlon 64 FX sells for.

    So if VIA could break out of that unprofitable niche, believe me, they would. The problem is simply that they can't.
  • by LWATCDR ( 28044 ) on Monday November 15, 2004 @12:28PM (#10820742) Homepage Journal
    "Well... some folks would disagree with this. The 8051 (and followons) were huge in the embedded world.'
    They still are extremely popular, but not really an innovative design. Very successful, yes, but mainly for other companies; Intel left that business a long time ago.
    "The i860 wasn't intended to be a "home PC" type processor and saw good use in the HPC world (Intel Paragons, iPSC860s, etc.) and in the graphics world (high end SGI graphics cards were based on i860s - RealityEngine, etc.)" Actually the i860 was going to be a major new family of CPUs for workstations and the like. It never really lived up to its billing. The worst problem with it was that context switching was dog slow, and the "smart" compilers never got smart enough. Running really tight code written by hand on a single task, they proved very fast and, as you pointed out, ended up in graphics cards and the like.

    " Likewise, the i960 family was huge in embedded systems. They were big in printers and all sorts of other devices. The i960s were phased out for newer/better technology in the XScales. The i960 was getting pretty old :)
    "
    The i960 is no older than the ARM; in fact, it came out a year after the first of the ARMs did. I would have to say that Intel, except for the HUGE Wintel market, really has not been all that successful. Frankly, they have not had to be, since the x86 has been a huge money pump for them. I mean, if you are going to win only one market, that was the right one to win.
    I do wonder what kind of performance you could squeeze out of an ARM or an Alpha if you put as much money into them as Intel has into the x86.

    "Well... If one thing has been proven in the past it is that software is the driving force, not hardware. It will still take some time for the near 30 years of x86 software to be replaced by "platform independent" stuff (like Java and .NET).
    " You have forgoten the stealth platfrom independent stuff" Linux and c. For the server market anyway things like Samba, Apache, PHP, Perl, Postgres, and MySQL are all available to run on none Intel platforms. Linux and c are bringing write once compiler everywhere to the server world. Think of all the companies that are already porting stuff to Linux from old unix systems. Do you think they care if they are moving from a Sun or Vax to a linux box if they recompile for x86 or PPC? For the desktop you are right but even that is changing now. OpenOffice and Firebird/Thunderbird are bigger changes than anyone really wants to admit.

  • Re:Why do this? (Score:3, Interesting)

    by Not_Wiggins ( 686627 ) on Monday November 15, 2004 @12:37PM (#10820845) Journal
    Basically, the Pentium M is a move back to a P3 type design philosophy, away from the 30-stage pipeline madness Intel's gotten themselves into with Prescott. I fail to see how going with a more intelligent design is going with a dumbed down processor.

    I agree with you whole-heartedly. Although the only thing I'd add to what you've said is that they're going back to a chip design that they didn't actually design! If anyone recalls, the Pentium was basically ripped off from DEC. Sure, adding SSE and other "add-ons" was a way of extending the life of the base design until Intel could design its own chip from scratch: the Pentium IV.

    Figures they'd go back to a design that was more efficient clock-for-clock than what they could come up with on their own.

    And before anyone reads too much AMD kudos in this, AMD bought DEC engineers for chip design and traded flash tech for copper fabrication tech from Motorola to help them leapfrog from K6 (Intel-clone) to the K7.
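    The "30-stage pipeline madness" mentioned above has a concrete cost: every mispredicted branch flushes the pipe, wasting roughly one pipeline depth's worth of cycles. A toy model, with all rates illustrative rather than measured:

```python
# Effective cycles-per-instruction as pipeline depth grows, assuming a
# fixed branch frequency and misprediction rate. Numbers are illustrative.
def effective_cpi(base_cpi: float, branch_freq: float,
                  mispredict_rate: float, depth: int) -> float:
    # Each mispredict costs roughly `depth` cycles of flushed work.
    return base_cpi + branch_freq * mispredict_rate * depth

shallow = effective_cpi(1.0, 0.2, 0.05, 10)  # shorter, P6-style pipe
deep = effective_cpi(1.0, 0.2, 0.05, 30)     # Prescott-style deep pipe
print(f"10-stage: {shallow:.2f} CPI, 30-stage: {deep:.2f} CPI")
```

    The deep pipe only wins if its extra clock headroom outruns the CPI penalty, which is exactly the bet the Pentium 4 made.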
  • by CptSkippy ( 793400 ) on Monday November 15, 2004 @12:41PM (#10820905)
    The Pentium M's foundations are the Pentium Pro, but I wouldn't really say it is based on it. Prior to the Pentium 4, most of Intel's architecture moves were driven by cost savings in manufacturing, QA, or support. The Pentium Pro wasn't too popular because it didn't support the MMX instructions of the Pentium MMX, and its L2 cache was on the mainboard, so Intel had no QA over it; the L2 was often the cause of problems. To solve these shortcomings they came up with the Pentium II: Pentium II = Pentium Pro + MMX + external L2 on a PCB + die shrink + Slot 1. The Pentium II was expensive, so the Pentium III came along: Pentium III = Pentium II + SSE + L2 moved on-die + die shrink. The Slot 1 design was still pricey, so: Pentium III v.2 = Pentium III + Socket 370.

    The Pentium M was Intel's answer to Transmeta as a growing threat, and to the Pentium 4's unsuitability as a mobile chip. The Pentium M was based on the Pentium III Tualatin. The Israeli team set about making it a power miser. They decided the best way was to make it power down as frequently as possible, because a CPU spends most of its time idle. Because a CPU also spends a lot of its time waiting on memory, they removed the PC133 FSB and slapped on a highly optimized P4-style QDR 100 MHz (4x100 = 400 MHz) FSB to feed the CPU, so that it wasn't waiting and could just go to sleep. They optimized the opcodes up the wazoo and made a chipset that was just as optimized and would shut down unused portions of the system whenever possible.

    Since the system is always under pressure to go to sleep, everything is optimized to be as efficient as possible so that it can get there quicker. The end realization: in making a CPU power efficient, you actually just make a super-efficient CPU that power cycles frequently. Thus if you tell it not to power cycle, you have a CPU that really kicks ass. Intel is slowly realizing that the P4 won't scale forever and that the Pentium M has a lot of potential.
    What a lot of people don't know is that the Pentium 4 was purposely made inefficient to permit its clock to scale higher. Intel initially delayed the P3 Tualatin, and made very little of it when it did come out, because a 1.4 GHz P3 wouldn't have helped sales of the 1.x GHz P4s. After the P4s moved to a smaller process and the clocks were jacked up over 2 GHz, they quietly rolled out the Tualatin. An overclocked P3 on a 440BX regularly schooled the P4, and a P3 Tualatin on an 815 Solano PC133 chipset would have been the final nail in the coffin for the P4.

    Intel has always made technological moves based on its bottom line, and rarely introduces a good technology just to benefit progress. If you look at its moves, most of them are new manufacturing or form-factor technologies to cut cost or to sell chipsets. Often these moves shifted cost onto the mainboard makers, as is the case with LGA chips. Intel has traditionally relied on its manufacturing process to defeat its competitors, either in price, or in performance by ramping clock speed. The P4 was a gamble that, by making an inefficient but scalable processor, their manufacturing process would allow them to defeat AMD and Transmeta. Unfortunately AMD has shown that they can play ball, and now Intel is looking at factors other than manufacturing for the performance they need.
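    The "race to sleep" strategy described above can be illustrated with a toy energy model: over a fixed time window, a chip that finishes quickly and drops into a low-power idle state can use less total energy than a slower chip that stays busy, even at higher active power. All figures are hypothetical:

```python
# Toy race-to-idle energy model over a fixed window: run until the work
# is done, then idle for the remainder. All numbers are made up.
def total_energy(active_watts, idle_watts, work_units,
                 units_per_sec, window_sec):
    busy = work_units / units_per_sec
    assert busy <= window_sec, "work must fit inside the window"
    return active_watts * busy + idle_watts * (window_sec - busy)

WORK, WINDOW = 100.0, 10.0
fast = total_energy(active_watts=25, idle_watts=1,
                    work_units=WORK, units_per_sec=50,
                    window_sec=WINDOW)  # busy 2s, idle 8s
slow = total_energy(active_watts=15, idle_watts=5,
                    work_units=WORK, units_per_sec=20,
                    window_sec=WINDOW)  # busy 5s, idle 5s
print(f"race-to-idle: {fast:.0f} J, slow-and-steady: {slow:.0f} J")
```

    The win depends entirely on how low the idle power goes, which is why the design pressures everything to sleep.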
  • Re:Great for servers (Score:3, Interesting)

    by Glock27 ( 446276 ) on Monday November 15, 2004 @03:16PM (#10822515)
    GamePC made a test not long ago, and it performed on par with p4EE and amds FX5x...

    It was only truly competitive with the FX when it was overclocked. Granted, it did very well for a low-power chip though. It was also interesting that AGP 8x appears to make very little difference over 4x for the games they tested.

    The new 90 nm Athlon 64s overclock quite a bit too, though, and they are 64-bit (64-bit mode is faster, and wasn't tested). The upcoming dual-core Athlon 64s and Opterons also sound very good. There are also low-power versions which get a lot closer to Dothan power consumption.

    All told, though, I'd like to see Intel market Dothan as a desktop solution with faster frontside bus, AGP 8x or PCIe and so on.

    Competition is good! :-)
