Intel Hardware

Intel "East Fork" Technology Migration 165

Posted by Hemos
from the moving-towards-the-M-class dept.
Hack Jandy writes "When Intel's Centrino platform was first unveiled, industry experts were surprised to see such strong performance from the Pentium M, based on Intel's P6 (Pentium III) architecture. According to sources in the industry, Intel has officially adopted the approach of migrating the Pentium M to the desktop (hence "East Fork") to offset some of its Pentium 4 processor sales. Cheaper, slower, cooler, but higher performing processors are on the way to an Intel desktop near you!"
  • by XNormal (8617) on Monday November 15, 2004 @09:32AM (#10819180) Homepage
    "So perhaps this Pentium 4 architecture with its ridiculously deep pipeline wasn't such a great idea after all?"
    • No, I think they're saying that "The average consumer has just figured out that they don't need a 3GHz processor that dissipates 1.21 gigawatts of heat!" :-)

      On another note, does anyone know what the heck this is [amazon.com]? It says Celeron, but it also says FPGA. Which is it? I found it the other day when looking for FPGA books, and it's been puzzling me ever since.
    • by swordboy (472941) on Monday November 15, 2004 @09:49AM (#10819292) Journal
      I don't think that you are seeing the whole story. Basically, Intel has been holding out for IBM's silicon-on-insulator [ibm.com] technology because it reduces power requirements a good deal. Unfortunately for Intel, IBM is pretty sneaky when it comes to licensing and often prefers to swap technology rather than accept cash. I'd imagine that IBM is holding out for an x86 cross-license agreement, which Intel does not want to give up.

      What you've seen in the past couple years is a game of chess. With each move, the other hopes that they have positioned themselves to better reach a licensing deal. Intel's move to non-clock processor ratings was a big move in this game.

      From what I've seen at Intel's developer forums, they're working on some radically different architecture. Something that isn't von Neumann at all. They're calling it "massively parallel" but the industry seems to think that this means multiple cores on one chip. I think that it means thousands or millions of "processing elements" on one chip (think really small processing elements). Their claim is that they'll be able to apply this architecture to everything from mobile to high-end servers simply by adding or subtracting elements as power constraints allow.
      • by qbwiz (87077) <`moc.ylimafnamuab' `ta' `nhoj'> on Monday November 15, 2004 @10:01AM (#10819362) Homepage
        They better tell programmers and compiler-writers about this soon. Any chip like this would be very hard to program for - I suspect that any attempted move to this architecture would end up like the Itanic.
        • Precisely. I would be surprised if they could make such a chip and keep it both x86-compatible and fast for today's applications. If it's only slightly parallel, then it's no more than a dual-core chip with hyperthreading. If it's massively parallel like the grandparent post suggested, then each individual thread is unlikely to run as fast as a P4 or Athlon64 chip today, and that will hurt applications that don't benefit from being multithreaded (i.e. most of today's unoptimised apps).
        • If it's really as far away from current architectures as the GP makes it sound like, it might require changes to more than just the compilers to make some languages work on it.
      • Sounds like IBM is working on essentially the same thing [arstechnica.com]
      • You mean they're cloning Sun's Niagara? [aceshardware.com]

        Intel's really falling behind the curve if that's the case. If I were a stockholder, I'd be pretty annoyed after AMD64^H^H^H^H^H EM64T, and the Itanic debacle. Now they're cloning a Sun processor design? Heh.

        </flamebait>

      • by gilesjuk (604902) <.giles.jones. .at. .zen.co.uk.> on Monday November 15, 2004 @11:20AM (#10820003)
        I can understand why they're keen to experiment with different architectures, but I think such ideas are often panic measures.

        Intel, knowing that its 64-bit offering is a lame duck and seeing AMD's Opteron cleaning up in many areas, is panicking and hoping to produce something radically better.

        It was the worry over what 32-bit CPUs were going to deliver that gave birth to the whole transputer concept (in the UK, of all places).

        Have a good read about the concept; it's not too dissimilar to what is being proposed today (except the cores are more advanced).

        http://en.wikipedia.org/wiki/Transputer [wikipedia.org]
      • Dude, computers have not been even *close* to Von Neumann for several decades.

        Von Neumann assumes uniform memory access times - this is largely untrue for any ordinary scalar processor with a cache. Ten years ago processors had internal memory write buffers, L1 cache, possibly L2 cache, and main memory - today it's even more complicated. But the common thread is that even a simple hierarchical memory system, with a single-level cache and main memory, puts your computer very far from von Neumann.

        S
        • Having a cache doesn't itself take you too far from von Neumann; yes, technically it's not von Neumann, but it's very similar. The real thing that took modern processors away from von Neumann and towards a Harvard architecture was separate data and code caches.
      • I think it means an ALU on the RAM die. No bus/controller latency to memory would ROCK,
        and Intel could eliminate the commodity DRAM market, glomming all that revenue unto their greedy little selves.
      • SOI only helps reduce one particular source of static power consumption. While static power is a big issue at 90nm, SOI doesn't magically solve it. Further, the big problem with Prescott is power dissipation and heat under load--dynamic power consumption. I'm not sure where you heard this rumor, but even if true it's ancillary to the current discussion.
      • Actually, massively parallel doesn't mean non-von Neumann. A multi-core architecture still connects execution units, memory, and IO using common busses, which is the crux of VNA.

        Now if you had multiple busses connecting multiple paths to the same execution units in a web, like the internet, or a neural net, that's nonVNA.

    • by gadget junkie (618542) <gbponz@libero.it> on Monday November 15, 2004 @10:02AM (#10819366) Journal
      "So perhaps this Pentium 4 architecture with its ridiculously deep pipeline wasn't such a great idea after all?"

      It is not that a deep pipeline is bad in itself; the point is, the decision to build the P4 that way was slaved to the use of MHz as a marketing tool. That, in itself, drove the chip design in a way that essentially banned it from the laptop market, which in turn drove the design of the Pentium M, a.k.a. Centrino.

      Now Intel itself is at a fork in the road, because Prescott is also geared towards higher frequencies, which means it will probably be hotter still. [tech-report.com]
      Now, I do not know how much money Intel sunk into the Prescott design, but if it is serious about building this new Centrino-derived processor, all that money will be washed away; and if Intel tries to keep this processor one step behind Prescott in performance, it risks a royal chewing-up by AMD.
  • by PornMaster (749461) on Monday November 15, 2004 @09:33AM (#10819187) Homepage
    The cooler they can keep a well-performing CPU, the less noise they need coming out of the box. Let's count this one as a victory for using PCs for PVR/Jukebox-style uses.
    • Well, except for the fact that decoding video and such is one of the few things that the P4 is particularly good at, since the long pipeline doesn't hurt.
      • Decoding MPEG-2 video is something that can be done on any Radeon, and it cuts the CPU load of a high-quality decode to a fourth of what it is CPU-only.

        I imagine that, if properly standardized, VC-1 and that H.xxx version of MPEG-4 being put into the next DVD format will be accelerated too. nVidia needs to get on board with this as well.
  • Obscene (Score:4, Funny)

    by BabyJaysus (808429) on Monday November 15, 2004 @09:34AM (#10819194)
    Intel employee: "Shall I try migrating Pentium M to the desktop?"

    Intel boss: "Fork off!"

    </shame>
  • Great for servers (Score:2, Interesting)

    by Folmer (827037)
    Gonna be great to use this platform for servers..

    Low power usage...
    Great performance..
    Low heat emission (easy to cool passively..)

    GamePC ran a test not long ago, and it performed on par with the P4EE and AMD's FX-5x...
    http://www.gamepc.com/labs/view_content.asp?id=dothandesktop&page=1
    • by Anonymous Coward
      Most servers need very little processing power. A P-II can do it easily for 1000 users.

      Processing-intensive things like a DB would be happier with multiple low-power cores. We have a 2.2GHz Xeon server here, and the 4-processor P-III 500 server next to it regularly kicks the faster and newer machine's arse HARD, every single time. And that performance gap increases as the load increases - having 20 users on each server really shows it off. The older P-III kicks the Xeon's head so hard it is not even funny.
    • Re:Great for servers (Score:3, Interesting)

      by Glock27 (446276)
      GamePC ran a test not long ago, and it performed on par with the P4EE and AMD's FX-5x...

      It was only truly competitive with the FX when it was overclocked. Granted, it did very well for a low-power chip though. It was also interesting that AGP 8x appears to make very little difference over 4x for the games they tested.

      The new 90 nm Athlon64s overclock quite a bit also, though, and they are 64-bit (64-bit mode is faster, and wasn't tested). The upcoming dual-core Athlon64s and Opterons also sound very good. T

    • What I would love to see is a micro-ATX board with one of these chips to be used as a multimedia PC-TV unit. Something fairly beefy to act as a good PVR but not need a lot of power (or space) for huge heat sinks and lots of cooling fans.
  • by iamthemoog (410374) on Monday November 15, 2004 @09:43AM (#10819255) Homepage
    http://www.reuters.com/newsArticle.jhtml?type=topNews&storyID=6786951 [reuters.com]

    Since it's from Reuters anyhow... old news too (11th Nov).

  • Not really surprising, as the PIII has been faster [theinquirer.net] than the PIV for a while. Curiously, at the same time, people were also noticing that NT4 was faster [hal-pc.org] than 2000 at server tasks, yet most who reported as much at the time were gagged by the no-publishing-benchmarks EULA fine print...
  • some sensible dual core damage control...
  • I guess. (Score:4, Interesting)

    by dj245 (732906) on Monday November 15, 2004 @09:48AM (#10819285) Homepage
    Intel does listen to their customers after all! I mean, after their flagship processor becomes incapable of scaling higher... And uh, emits more heat per area than most smelters... and needs server levels of expensive cache to keep it competitive.

    So yep, they respond very quickly to customer needs and wants.

    • Re:I guess. (Score:5, Funny)

      by stevelinton (4044) <sal@dcs.st-and.ac.uk> on Monday November 15, 2004 @12:20PM (#10820661) Homepage
      I read this as "more heat per acre than most smelters". This piqued my curiosity.

      A Pentium 4 has a die area of around 217 mm^2 and produces about 100W of heat. This converts to almost exactly 2.5 million horsepower per acre. Leaving aside the livestock-management problems of fitting 2.5 million horses into your 1-acre field, we now turn to a smelter, running, according to Ask Jeeves, at about 1400K. Radiated heat output per unit area is sigma*T^4 for a black body (where sigma is the Stefan-Boltzmann constant), less for a real material, although there will also be quite a bit of convection and so on, which we ignore because it's too hard.

      So, thanks to the magic of the units program, we find that the smelter puts out about 1.18 million hp/acre, or about half the power output of the P4.

      So parent was right: P4s really do put out more heat per area (or acre) than most smelters!
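      (A quick sanity check of the arithmetic above, in Python. The die size, wattage, and smelter temperature are the rough figures quoted in the comment, not measured values, and the smelter is treated as a pure black-body radiator, just as the comment does.)

```python
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W/(m^2 K^4)
ACRE_M2 = 4046.86    # square metres in one acre
HP_W = 745.7         # watts per (mechanical) horsepower

# Pentium 4: ~100 W spread over a ~217 mm^2 die
p4_hp_per_acre = (100 / 217e-6) * ACRE_M2 / HP_W

# Smelter: black body at ~1400 K, radiation only (convection ignored)
smelter_hp_per_acre = SIGMA * 1400**4 * ACRE_M2 / HP_W

print(f"P4:      {p4_hp_per_acre / 1e6:.2f} million hp/acre")
print(f"smelter: {smelter_hp_per_acre / 1e6:.2f} million hp/acre")
```

      This reproduces the comment's numbers: roughly 2.5 million hp/acre for the P4 versus roughly 1.18 million hp/acre for the smelter.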
  • by data1 (23016) on Monday November 15, 2004 @09:55AM (#10819323) Homepage
    It seems the company is trying to go in a significantly different direction to retain its market dominance.

    1) New non-engineer CEO:
    http://www.itweb.co.za/sections/business/2004/0411151128.asp?S=Career%20Moves&A=MOV&O=FRGN

    2) GHz no longer a big deal, after marketing it for so many years as the only major thing you need to know about the performance of a computer.
    http://www.theregister.co.uk/2004/10/14/intel_kills_4gh/

    3) Shift to better, if not necessarily newer, technology - see article above: oh, who am I kidding....
    http://www.xbitlabs.com/news/chipsets/display/20041111133206.html
  • Competitors (Score:1, Informative)

    by Anonymous Coward
    After reading for a while yesterday (after checking yesterday's note about the latest Intel processor), I found that VIA [via.com.tw] (who bought Cyrix - I think the best processor of the 4x86 era, after National Semiconductor almost broke it) has been working on this for a while. Perhaps the increase in "speed" (power consumption) was the strategy to take AMD and Cyrix out of the market? Now they want to come back because their processors are that inefficient?

    Unfortunately, it seems like VIA [via.com.tw] is not focused on the PC m
    • Don't think so (Score:4, Interesting)

      by Moraelin (679338) on Monday November 15, 2004 @12:24PM (#10820709) Journal
      I don't think so.

      Intel has basically been hanging itself with the awful lot of rope its own marketing gave it. The "MHz is everything" marketing was an easy thing to push, since most people actually _want_ one number that tells them everything about a CPU.

      (True story: I actually spent some time arguing with a marketroid about it, and gave up. He was arguing that it must be Anandtech's and everyone else's benchmarks that are at fault, because CPU A is in some apps 50% faster than CPU B, in some apps equal, and in some apps actually a little slower. "It can't be! If CPU A is X% faster than CPU B, it must be X% faster in everything!" Any explanation about differences in CPU architecture and such went right over his head.)

      So it was easy for Intel to push the MHz as the one true speed indicator. And for a while all they had to do was keep putting out CPUs with more and more MHz.

      Except after a while it became a trap. Any new design _had_ to be higher MHz, or have Intel's own marketing working against it. All those many millions that went into telling people "buy a higher-clocked CPU" would now basically tell them "don't buy the newest Intel CPU", if Intel made one with less MHz.

      And now Intel finally _has_ to find a way out of the hole it dug itself into.

      As for Cyrix (now VIA), it was never really a problem for Intel. Cyrix just fell behind performance-wise on its own. The last proper Cyrix versions were already falling behind in integer performance too, but it was their floating-point performance that was abysmal. So what killed Cyrix was not so much Intel as games going 3D: now everyone had benchmarks everywhere, clearly showing the Cyrix as barely crawling.

      And Via's versions fell behind even more. They aren't just slower in MHz, they're also slower _per_ MHz. Other than being low power, they just suck.

      And it's not that VIA really _wants_ to be the poor man's niche, for Chinese families who can't afford an Intel or AMD. People find such niches to survive, but no one really wants to _stay_ in such a niche. No one actually wants to sell their top CPU at $30 or less, instead of, say, the $600+ that an Athlon 64 FX sells for.

      So if VIA could break out of that unprofitable niche, believe me, they would. The problem is simply that they can't.
  • How ironic (Score:3, Funny)

    by Sentry21 (8183) on Monday November 15, 2004 @10:03AM (#10819378) Journal
    I got a notebook machine with a desktop processor, and now I can get a desktop machine with a notebook processor. Superkeen.
  • by ceeam (39911) on Monday November 15, 2004 @10:06AM (#10819394)
    I noticed that every x86 CPU architecture in the past decade climbed 4-5 times in MHz from inception to the "end of the line" model: 486 - 25..100(???, 133 is AMD's version and those started higher than 25), Pentium - 50..200, Pentium4 - 1200..3600 now and still has a tad in reserve as shown by extreme overclockers; similarly for AMD, K6 - 166..550; Athlon - 500..2.x(?). And now Pentium2/3 - started at 233 and climbed until around 1300, which is higher than 4/5x. But maybe there's been some really notable arch changes since P2? What're your thoughts?
    • Efficiency (Score:3, Informative)

      by 21chrisp (757902)
      Much of these speed increases are a result of shrinking die sizes. Most architectural changes revolved around the introduction of new instructions (SSE). A lot of work was also done to improve the efficiency of the PIII for the Coppermine release (which saw a significant speed increase). The PIV project, which ran in parallel and was a much more radical redesign, wasn't able to benefit from this work. The architecture became different enough that new and much more thorough R&D would

    • And now Pentium2/3 - started at 233 and climbed until around 1300, which is higher than 4/5x. But maybe there's been some really notable arch changes since P2? What're your thoughts?

      The P-II is largely based on the Pentium Pro, and so is the Pentium M, so the speed range really should be 133 MHz (first PPros) to 2000 MHz (latest Dothan - or is it 2100 MHz now?), a factor of 15, and with some unknown amount of life still remaining (3 GHz? Higher?). There's no doubt in my mind that the Pentium Pro (and all
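      For what it's worth, the clock multipliers quoted in this thread can be tallied in a few lines of Python (the MHz ranges are the rough figures given in the comments above, not authoritative specs):

```python
# Rough first-to-last clock ranges (MHz) quoted in the thread
ranges = {
    "486":       (25, 100),
    "Pentium":   (50, 200),
    "K6":        (166, 550),
    "Pentium 4": (1200, 3600),
    "P6 (PPro to Pentium M)": (133, 2000),
}

# Print each architecture's end-of-line / inception clock multiplier
for name, (first, last) in ranges.items():
    print(f"{name}: {last / first:.1f}x")
```

      Most lines come out at 3-4x, while the P6 family's 15x really is the outlier the parent describes.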

  • by IGnatius T Foobar (4328) on Monday November 15, 2004 @10:11AM (#10819428) Homepage Journal
    This is really about Intel finally coming to terms with the fact that nobody wants to buy Itanium chips. That's where Intel was headed, and Intel assumed that everyone would follow along. Unfortunately, Itanium's future depended on technology advancements that never happened, and a rate of adoption that nobody was willing to pursue.

    This is why Xeon became an architectural dead end: Intel wasn't willing to move the technology forward, because Xeon was supposed to be superseded by Itanium.

    Did you know that "Pentium M" is actually based on the same technology they originally called Pentium Pro? It's true. It was a good design. It didn't do all that well initially because its 16-bit performance was abysmal, and people were still running a lot of 16-bit software at the time. Now that everything is 32-bit, Pentium Pro (now Pentium M) is just fine. The fact that it gets used in laptops is a testament to its ratio of performance to power consumption.

    Intel would be wise to move forward with this. They ought to ditch Xeon entirely, and perhaps even graft the AMD64 instruction set onto this chip.
    • by pertinax18 (569045) on Monday November 15, 2004 @10:38AM (#10819629) Homepage

      Did you know that "Pentium M" is actually based on the same technology they originally called Pentium Pro?

      So are the Pentium II and Pentium III; what's your point? The article clearly states (and it is common knowledge) that the "M" is based on the PIII; this is no secret or some massive Intel conspiracy... Yes, the Pentium Pro was a great design; it really has legs to go from 166MHz to 2GHz or whatever the "M" runs at these days. But it has been a long evolutionary process, not a direct jump from the Pro to the "M".

    • They ought to ditch Xeon entirely, and perhaps even graft the AMD64 instruction set onto this chip.

      I believe they will put the x64 set on all their x86 chips, but I doubt they'll dump Xeon. Xeon is mainly a server & workstation rated version of their desktop chip, not some pixie-dust chip.

      Part of it is the larger cache available; the rest is often better testing for multiprocessing, de-rated chips, and they are also the chips of a fab batch that are tested to consume less power. Intel would also intro
    • "This is why Xeon became an architectural dead end: Intel wasn't willing to move the technology forward, because Xeon was supposed to be superseded by Itanium."
      Okay, I guess you have not read that Intel is going to produce a Xeon with x64 extensions. The Xeon is not really an architectural family, but really-big-cache, heavily tested, server-grade versions of Intel's CPUs. There are PII Xeons, PIII Xeons, and now PIV Xeons. The Itanium was supposed to replace the Xeons in workstations and, I would guess, eventually
      • Okay I guess you have not read that Intel is going to produce a Xeon with x64 extensions.

        Not "going to"... "have"... They have been for sale (and actually shipping) for a couple months now.

        I have to wonder if we are possibly seeing the end of the X86 ISA?

        Well... If one thing has been proven in the past it is that software is the driving force, not hardware. It will still take some time for the near 30 years of x86 software to be replaced by "platform independent" stuff (like Java and .NET).

        I mean
        • by LWATCDR (28044) on Monday November 15, 2004 @12:28PM (#10820742) Homepage Journal
          "Well... some folks would disagree with this. The 8051 (and follow-ons) were huge in the embedded world."
          They still are extremely popular, but not really an innovative design - very successful, but mainly for other companies; Intel left the 8085 business a long time ago.
          "The i860 wasn't intended to be a "home PC" type processor and saw good use in the HPC world (Intel Paragons, iPSC860s, etc.) and in the graphics world (high end SGI graphics cards were based on i860s - RealityEngine, etc.)" Actually, the i860 was going to be a major new family of CPUs for workstations and the like. It never really lived up to its billing. The worst problem with it was that context switching was dog slow, and the "smart" compilers never got smart enough. Running really tight code written by hand on a single task, they proved very fast, and as you pointed out, ended up in graphics cards and the like.

          " Likewise, the i960 family was huge in embedded systems. They were big in printers and all sorts of other devices. The i960s were phased out for newer/better technology in the XScales. The i960 was getting pretty old :)
          "
          The i960 is no older than the ARM. In fact it came out a year after the first of the ARMs did. I would have to say that Intel, except for the HUGE Wintel market, really has not been all that successful. Frankly they have not had to be, since the x86 has been a huge money pump for them. I mean, if you are going to win only one market, that was the right one to win.
          I do wonder what type of performance you could squeeze out of an ARM or an Alpha if you put as much money into them as Intel has into the x86.

          "Well... If one thing has been proven in the past it is that software is the driving force, not hardware. It will still take some time for the near 30 years of x86 software to be replaced by "platform independent" stuff (like Java and .NET)."
          You have forgotten the stealth platform-independent stuff: Linux and C. For the server market, anyway, things like Samba, Apache, PHP, Perl, Postgres, and MySQL are all available to run on non-Intel platforms. Linux and C are bringing write-once, compile-everywhere to the server world. Think of all the companies that are already porting stuff to Linux from old Unix systems. Do you think they care, if they are moving from a Sun or VAX to a Linux box, whether they recompile for x86 or PPC? For the desktop you are right, but even that is changing now. OpenOffice and Firebird/Thunderbird are bigger changes than anyone really wants to admit.

          • Something interesting along those lines is that the XBox2 is rumored to be released in 3 forms... one of which is a "PC".... and it's going to be multiprocessor G5s (from the rumors)... and run a version of Windows XP... There might be some interesting times ahead.

            It shouldn't be too much longer until a critical mass of multi-platform software is available (OpenOffice, etc.) but the real kicker is games. As soon as another hardware platform that is cheap and viable for games in addition to all the other
            • "It shouldn't be too much longer until a critical mass of multi-platform software is available (OpenOffice, etc.) but the real kicker is games"

              No, not really. For the Slashdot crowd, yes, but most companies would be very happy if their desktops did not run games at all. Frankly, most machines out there cannot run the latest and greatest games like DOOM III anyway. It is beginning to look like more and more mainstream gaming is moving from the desktop onto consoles. What you may end up seeing is the XBOX PC introd
    • The Pentium M's foundations are the Pentium Pro, but I wouldn't really say it is based on it. Prior to the Pentium 4, most of Intel's architecture moves were based on cost savings, either in manufacturing, QA, or support. The Pentium Pro wasn't too popular because it didn't support the MMX instructions of the Pentium MMX, and its L2 cache was on the mainboard, so Intel had no QA over it, and the L2 was often the cause of problems. To solve these shortcomings they came up with the Pentium II. Pentium 2 = Pe
      • The Pentium Pro wasn't too popular because ... its L2 cache was on the mainboard and thus Intel had no QA over it and the L2 was often the cause of problems.

        This is wrong.

        The PPro had its L2 cache integrated in the CPU package. It was the Socket 7 chips which had L2 separate, located on the mainboard. (Though the K6-III SS7 CPU came with integrated L2, hence turning the cache on the mainboard from L2 to L3.)

        The Pentium Pro was *hugely* popular in terms of workstation and server sales - it was Intel's first
    • This is really about Intel finally coming to terms with the fact that nobody wants to buy Itanium chips. That's where Intel was headed, and Intel assumed that everyone would follow along. Unfortunately, Itanium's future depended on technology advancements that never happened, and a rate of adoption that nobody was willing to pursue.

      No. That is half of Intel's problems. The Itanium was aimed at the high end, possibly expanding to the lower end. For the lower end they had their P4s and variants.

      Their Itanium p
  • slower ... but higher performing? oxymoron? [my attempt at being funny]
  • As chief information minister for Intel Corporation, let me assure you we will destroy the AMD infidels! The Opteron is like a snake which is going to be cut into pieces. The force that was in the airport.. this force is destroyed. Let the AMD bastards bask in their illusion, we have given them a sour taste. We have them surrounded.
  • by Spacejock (727523) on Monday November 15, 2004 @11:13AM (#10819921) Homepage
    I went through an upgrade about 2 months ago. Looked around to see whether I could get a Pentium-M motherboard and CPU (in Perth, Western Australia - hah.)

    I liked the idea of throttling the CPU back when it wasn't busy. We get daytime temps of 100+ degrees Fahrenheit (40 deg centigrade) fairly regularly in summer, and keeping a hot CPU cool isn't fun.

    Before I wasted too much time looking, I read about the Athlon64 3400+, and that was that. Mind you, Cool'n'Quiet locked up hard on my Gigabyte K8NSNXP with BIOS revisions F5 and F6 (whether I was running Win XP or Linux). Rev. F7 came out about 3 weeks after I got the board, and it's been rock solid from 1 GHz to 2.4 GHz ever s--
    • I don't believe you would be able to find one, unless you went hunting for a laptop motherboard. Intel wants clear markets for its chips. P4 for desktop, Pentium M for laptop. They do not want to be in a position where they are competing against themselves.
      • This is the kind of thing I meant: Dothan on the desktop [gamepc.com]

        While there always has been some demand for Pentium-M motherboards for the desktop, there was not enough of an urge to turn this demand into more than niche appeal.
        Today though, we finally get to see how the Pentium-M platform can compete with the big boys, thanks to AOpen's new Pentium-M desktop motherboard.

        • Neat. I wasn't aware that anyone had put out Pentium M motherboards. Thanks for the link.

          The benchmarks do prove that the Pentium M is a very respectable desktop CPU.
  • by Nom du Keyboard (633989) on Monday November 15, 2004 @11:57AM (#10820372)
    Cheaper, slower, cooler, but higher performing

    Let's be precise here, folks: slower clock rate. I got the wrong impression the first time I read this, and likely others did too.

  • Uh, Excuse Me... (Score:3, Insightful)

    by Nom du Keyboard (633989) on Monday November 15, 2004 @12:01PM (#10820412)
    "Intel Centrino" synonymous to long battery life and flawless wireless networking

    Excuse me. Certainly we're not referring to 802.11g wireless networking here, are we?

    It's statements like that one that make me doubt the entire article. Just who are these guys anyway?

  • by cant_get_a_good_nick (172131) on Monday November 15, 2004 @12:05PM (#10820468)
    Cringely had an article a while back [pbs.org] that mentioned Google liking to use Pentium IIIs in their data center. Yes the Pentium 4s were faster, but if you looked at your datacenter as a whole system, including power, cooling, and space requirements, they were better off with 'old' Pentium IIIs. At the time, I think Google was worried they wouldn't be able to source new machines with P-IIIs, looks like Intel is following them this time. Intel seems to be following a lot lately, the megahertz at any cost mantra sure faded fast.
  • Interesting read from eWeek [eweek.com], talking about CPU power consumption and California energy woes (which server farms helped contribute to).
  • All along, Intel has been producing chips that are cutting edge in terms of processor clock -- the higher clock speeds they can get out of their lines, the better -- which has entailed, at times, some hoary measures to keep power consumption (barely) in control.

    But most people don't need a 2.8 GHz processor that dissipates 100 W. My laptop and one of my desktops are 700 MHz machines, and while not the latest zippiest out there, are perfectly adequate for my needs, and I imagine most peoples'. Not all, bu
  • This will be a bomb. Imagine - tiny cube like PCs which only turn their processor fans on when they need to. I have Pentium M processor in my laptop and I haven't run any benchmarks, but it _feels_ faster than my P4 desktop.

    Now the only issue is, it's not 64-bit compatible. Intel, hook a 64-bit instruction set and a memory controller up to it, will ya?
  • by The trees (561676) on Monday November 15, 2004 @03:15PM (#10822505)
    I actually read the article, and it makes no mention of Intel adapting the Pentium M for the desktop. Instead, it describes a marketing label for a desktop processor/chipset/network combo similar to the Centrino label for certain laptop processor/chipset/network combos.

    This comment seems to suggest that the processor will be something else entirely:
    "East Fork will include a newly designed Intel microprocessor with two processing cores, a supporting chip set, and a Wi-Fi wireless radio. The package will be designed for "digital home" PCs, which shuttle music and movies around the home and can store TV shows digitally,"

    However, this does sound like the platform will target the same applications that VIA's Mini-ITX systems are widely used for. Therefore, it would make sense that the "newly designed Intel microprocessor" will be based on or similar to the Pentium M, but I wouldn't say that this is an announcement of a desktop Pentium M.
