Intel Upgrades Hardware

Intel 3.40EE & 3.60E - LGA Arrives

MBR writes "MBReview has taken a quick look at Intel's new high-end LGA775 processors, the 3.40GHz Extreme Edition and the 3.60GHz 'E,' now known as the 560. They've covered some of the questions about pin frailty of the new LGA socket, as well as cooling issues that might arise from these new processors." ("LGA" stands for Land Grid Array, which moves the pins from the processor to the socket it sits in.) Update: 06/19 20:50 GMT by T: Reader Chi-Energy points out that besides the new processor packaging, Intel has also just released its i925X and i915 chipsets, which bring PCI Express and DDR2 memory to the desktop, and links to a review showcase with benchmarks at HotHardware.
This discussion has been archived. No new comments can be posted.
  • New pins (Score:3, Insightful)

    by Deltawolf ( 789706 ) on Saturday June 19, 2004 @03:24PM (#9473969) Homepage
    Oh wow! Now if your pins snap you have to replace your mobo instead of your processor. Sounds like it's begging for trouble.
    • Re:New pins (Score:4, Insightful)

      by Naffer ( 720686 ) on Saturday June 19, 2004 @03:27PM (#9473976) Journal
      Most home motherboards are cheaper than the processor. Motherboards run around $150, while newer processors run above $200.
      • Re:New pins (Score:5, Insightful)

        by Anonymous Coward on Saturday June 19, 2004 @03:29PM (#9473995)
        I'd rather replace a CPU with a broken pin than tear apart my case and pull out the tray to replace a motherboard.

        A CPU can be replaced in just a couple of minutes. A motherboard would take much longer, depending on your case type, how many cards you have, and all the various types of things you're going to have to unplug from it and plug back in.
        • Re:New pins (Score:5, Insightful)

          by JPriest ( 547211 ) on Saturday June 19, 2004 @04:46PM (#9474447) Homepage
          A few points: $150 is reasonable for a motherboard, but the processors listed above are likely to run closer to $450 and $900.

          If 6 minutes of your time is worth $300 - $750 then you obviously make way more than I do.

          AMD is going to start using the same technology. When Intel does it, it's a pain in the ass; when AMD does it, it's innovation.

          Besides, you have to be pretty careless with your hardware to break a pin.

          • Re:New pins (Score:5, Insightful)

            by Too Much Noise ( 755847 ) on Saturday June 19, 2004 @06:40PM (#9475087) Journal
            There are some points you're missing about this.

            Who's paying for the RMA? If the natural life span of the pins is about 8 insertions (as the mobo producers seem to claim), then there would be a large number of legitimate breakages that get sent back to the mobo manufacturer. Now, they can either replace the CPU socket (not very funny, I think) or throw away the whole mobo, including the rest of the perfectly good components on it[*]. As opposed to just discarding a defective CPU if its pins break.

            [*]like the spankin' new and expensive Intel chipsets. I doubt $150 will happen anytime soon as a mobo price, as even the chipset's estimated price seems to be above that. I also doubt that mobo manufacturers getting too many returns due to bent socket pins will be very happy about all this - remember, their margins are quite slim these days. The least hurt by this is probably going to be Intel itself.

            Your AMD jab is a troll. As far as they've stated so far, the Opteron socket stays put for the foreseeable future (meaning at least one year). They will have no incentive to move to a pinless package unless it shows some solid advantage. Even Intel might have to back down on this if the hardware producers get too unhappy (and they already have enough grief with the BTX form factor).

            Finally - pins break. It's called mechanical stress. How many times do you think you can 'carefully' insert and remove a CPU in its socket before some pin gives in? At least with the old sockets, all you had to do was match pins and holes; now, with only point contacts, bending can come so much easier.
            • How many times do you think you can 'carefully' insert and remove a CPU in its socket before some pin gives in?

              I don't know about you, but for my part I have never removed and inserted a CPU into its socket more than a couple of times. And remember that 99% of their revenue doesn't come from people that do that more than once.

              So again, don't generalize on the "Geek point of view". They don't give a sh*t if your CPU/mobo breaks
            • or throw away the whole mobo, including the rest of the perfectly good components on it[*].

              So I guess it's time for chipsets in sockets, too.
      • Re:New pins (Score:3, Insightful)

        Why don't they make a pin grid insert? That way you can just replace the pin set if you bend some, and both the motherboard and the processor are immune from that type of damage.

        The only problem you might experience with this is if you break off pins in the socket or the processor, but it seems like a (mostly) unlikely situation.
        • But how would that connect? More pins?
          • Re:New pins (Score:2, Interesting)

            by Synkronos ( 789022 )
            Yes, it would basically be a widget with pins on both sides, one plugging into the mobo and the other into the chip. Then, if you break a pin off, you just replace the cheap adapter. It also means chips would be less susceptible to static shock, since the contacts would not be as exposed.
          • Re:New pins (Score:3, Insightful)

            by Bishop ( 4500 )
            A pin insert would connect to pads on the mainboard and pads on the cpu. I worked with some SGI systems that used this kind of setup. It is a good idea.

            I am a little disappointed that Intel did not go with a pin insert. However, it would have cost more, which would have been hard to justify to the mainboard makers and their razor-thin margins. In the long run I think that a pin insert would have been a smart move. Judging by the reported fragility of the socket 775 I won't be surprised if Intel moves to a ne
            • "It would not take much to convince the enthusiasts that a new pin insert was needed whenever a cpu was upgraded to insure maximum performance"

              Well, it would take some fairly repeatable benchmarks, at least. Maybe you could sell "higher quality" ones with gold plated pins and a heatspreader or something. Of course, you could always package them with the CPU...

              "In some respects computer enthusiasts are as bad as audiophiles."

              I resent that; I haven't spent more than a couple of quid on IDE and VGA cables

              • Some might like a water cooled one :)
                • I'd imagine the pin lengths are quite important at high speeds, and routing water around all those pins in a tiny little package, under a hsf...

                  This line of thinking's got me wondering about cooling both sides of a processor... maybe if you made one with the pins at one corner, like a little slot, pack heatsinks to both sides of it; with the right mounting gear and processor package you could effectively double your cooling area. Maybe one way of dealing with the heat of dual/quad core chips and lower sur
        • Even in the event of pin breakoff, tweezers would work... A pin insert is a good idea and is used elsewhere in the industry, so it's not exactly new thinking; I wonder why this LGA came up as Intel's 'solution'.
        • I was thinking something along the lines of no pins at all, rather just flat contacts. There would be a clip that would hold the processor in place on the motherboard.

          (Or maybe that wouldn't work; I'm not a processor engineer.)
          • http://cpu-museum.de/forum/viewtopic.php?t=927&postdays=0&postorder=asc&start=15

            The UltraSPARC II used something like what you refer to.
            • I would think that an array of pins on a given area on the underside of a chip would have more electrical contact surface area than flat contacts in that same area.
        • Pins (Score:3, Interesting)

          by bsd4me ( 759597 )

          At high frequencies, the pins on a package aren't really short circuits (i.e., zero resistance); they have a capacitance and inductance which mess with the signals. Making removable pins would make this a lot worse.
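
          A rough back-of-the-envelope sketch of the point above, in Python. The 1 nH and 1 pF figures are assumed, order-of-magnitude package parasitics chosen for illustration (not Intel's numbers); the point is only that at GHz frequencies a pin's reactance is no longer negligible.

```python
import math

f = 1e9      # assume ~1 GHz of signal content on the bus
L = 1e-9     # assumed series inductance of a single pin, ~1 nH
C = 1e-12    # assumed stray capacitance of a single pin, ~1 pF

series_z = 2 * math.pi * f * L        # |Z_L| = 2*pi*f*L
shunt_z = 1 / (2 * math.pi * f * C)   # |Z_C| = 1/(2*pi*f*C)

print(f"series |Z| ~ {series_z:.1f} ohms, shunt |Z| ~ {shunt_z:.0f} ohms")
# -> series |Z| ~ 6.3 ohms, shunt |Z| ~ 159 ohms
# A removable pin insert adds a second contact and extra pin length,
# which is presumably why the parent expects it to make things worse.
```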

    • Saw these up close a month or so ago at the ICC. So long as you don't go sticking your fingers in the socket to "check out" the newness, you shouldn't have a problem bending pins. Moving the pins off the CPU and using "pads" in their place should make CPU installation a bit easier, although I never had a problem with the current layout.
    • by Anonymous Coward
      This isn't just some AC's random opinion either. This was the consensus of most of the Taiwan motherboard manufacturers at Computex this year. P4 boards were everywhere, but nobody really expected them to sell. It's like Intel is on a suicide mission to hit the wall at full speed.
      They recently had a press release where they stated that they intend to produce processors that consume 200 watts by the end of the year. And that's not peak power as many people naively want to pretend; that's mostly leakage.
    • As the reviewer pointed out, Socket 478 CPUs tend to stick to the heatsink and get yanked out during upgrades. Having built and repaired hundreds of machines in the past year, I have to tell you that I am still a bit scared to change my own machine's aging 865 chipset motherboard. For a business a few dead CPUs due to bent pins is no biggie, but for an individual who doesn't have another $300 it's more than an annoyance.

      The advantage of the new setup is that the CPU is locked in place better, so as long a

      • Re:New pins (Score:2, Interesting)

        > Socket 478 CPUs tend to stick to the heatsink and get yanked out during upgrades

        Seen it happen twice, once with a p4 and once with a p/166.

        Both times, the chip was just fine afterwards.
  • by irokitt ( 663593 ) <archimandrites-iaur.yahoo@com> on Saturday June 19, 2004 @03:28PM (#9473987)
    I'm not as worried about frail pins as I am about the amount of heat these things push out, the size of the new heatsink/fan assemblies, and the noise they put out. I thought Socket 478 processors were hot, but LGA Prescott processors run even hotter, which makes me think Intel has a point when it says we should switch away from the ATX form factor and adopt BTX for Intel chips.

    That said, are the Extreme Edition processors still selling for $900 USD a pop? Hardly seems worth the extra money for gaming, although a server that wants to survive Slashdottings could probably use one...
    • Looks like there should be better heat transfer between the chip and the heatsink with this new package. It looks like the heatsink will make contact with the metal on top, which almost covers the entire chip.

      The price, of course, will go down; $900 is about right for a new processor. Give it a few months.

      I just wish they had better benchmarks; they just compare the new processors to other Intel processors.
  • Is not (Score:2, Insightful)

    Faster importantless? With 64 bit processing power for all available by athlon 64 made available that works with 64 bit yes immdiately we would switch.

    These chip make futiliity. Why make processors of like these new when you can improve on 64 bit? The battle is to will be lost to Athlon without 64 bit competition by.
    • Re:Is not (Score:4, Insightful)

      by NanoGator ( 522640 ) on Saturday June 19, 2004 @03:59PM (#9474174) Homepage Journal
      "These chip make futiliity. Why make processors of like these new when you can improve on 64 bit? The battle is to will be lost to Athlon without 64 bit competition by."

      Are you running 64-bit apps I'm not aware of?
      • Guess what? The Hammer core executes 32 bit code faster than a P4 does, clock for clock, and it has all the same MMX (and similar) extensions including SSE2. Meanwhile, a few games are being built as 64 bit and by the time we can actually get an x86-64 Windows, there will be more of them. From what I hear, just recompiling your code for x86-64 rather than x86 nets a ~15% performance increase, and processor-specific optimization ought to net still more. Hence, especially given a price comparison, Hammer is t
    • In other news... Babelfish reduces language barrier to language annoyance - Slashdotters still don't understand one another
    • Re:Is not (Score:4, Informative)

      by bobthemonkey13 ( 215219 ) <.keegan. .at. .xor67.org.> on Saturday June 19, 2004 @04:21PM (#9474310) Homepage Journal
      Not only does AMD have the only desktop 64-bit offering right now, but their chips are much faster than Intel's at the same clockspeed, even in 32-bit mode. Whereas Intel's engineers are just running their chips at insane clockspeeds, AMD's are actually designing better processors. For the price of a 3.4GHz "800"MHz FSB P4EE ($989 on pricewatch [pricewatch.com] right now), you could buy two Opteron 246s ($441 each) with cash to spare. If you want to talk raw, meaningless numbers, the Opterons still beat the P4EE (4GHz and 2MB cache total). Of course, SMP isn't simply additive like that, but consider the advantages of 64-bit and multiprocessing, and the fact that AMD chips are /much/ faster than Intel's at the same clockspeed (even on 32-bit code), and there's no contest. All halfway-modern Windows versions and Linux kernels can support SMP, and the latest support amd64, too.
      • [insert mandatory G5 is 64 bit and I'm going to mention that even though it is obvious that you're talking about x86 chips]
      • The point of the P4EE is not to be a real competitor against the Opteron, but rather to show just how easy it is for Intel to play in AMD's back garden. Intel can also validly compare P4EE to the Opteron if they want.

        It's actually cheaper and faster to run a dual Intel Xeon computer than it is to run a P4EE.

        AMD chips are /much/ faster than Intel's at the same clockspeed

        But they aren't running and can't run at the same clockspeeds. I suspect that without Marginally Extreme Cooling, AMD's chips would si

        • Considering the fact that Athlon 64 FXs running at 2 to 2.4GHz beat the crap [aceshardware.com] out of various P4 and P4EE with up to 3.4GHz, I can't really take your clockrate argument seriously. Ok, the P4 scored better on some closed-source renderers, but then consider that most applications in the benchmarks have not even been optimized for 64 bit yet!
  • Apparently [theinquirer.net] the BTX form factor (of which LGA is a part) has been heavily resisted by many Taiwanese chassis, mainboard and heatsink manufacturers.

    But what's new here? Word has it that this time round, the Taiwanese heastink, mainboard and PSU manufacturers - and quite a lot of them it would seem - are being rather less than enthusiastic or co-operative, about the sweeping changes and support that Intel is asking, nay demanding, of them.

    I'd be interested to see if Intel can actually strong-arm them into it
    • I'd be interested to see if Intel can actually strong-arm them into it
      With mainland facilities becoming more and more advanced, but without huge increases in cost, I think Intel can get their way. All they need to do is say, "Shanghai" and I bet the Taiwanese manufacturers will change their minds.
    • Since AMD chipset and motherboard makers are NOT planning to switch over to the BTX form factor anytime soon, it would be awkward for Intel motherboard and case manufacturers to do so, since it would introduce two completely different standards. It's also difficult because BTX focuses on the "desktop" form factor, where the case is a small box you can place under your monitor. I prefer the "server" form factor, a separate tower that can be placed anywhere. There is a server form factor for BTX, but none of
    • I can understand why they are resisting.

      Intel has decided that the motherboard/case manufacturers need to shoulder the cost of the cooling required by the newest Intel chips.

      This of course is not going over well. Computers are such a mainstream industry that the profit margins are very low, and Intel is basically trying to shift some of the cost away and increase its profits at the expense of another part of the industry.

      I only hope they get taught a lesson like IBM did with Microchannel. But I'm not hop
    • BTX isn't needed today, which is why the manufacturers are complaining. But when the processors get up into the 150W range, they may find that BTX systems are either cheaper at constant dB or quieter at constant cost.
    • I don't blame the resistance. Most, if not all of the "unique" changes that are part of the BTX chassis spec can be adapted to the ATX bolt patterns and other parts of the spec.

      The fan duct? Ha. There are a few aftermarket mods for putting fan ducts into an ATX case. Compaq and Dell use fan ducting too, in some models.

      Small form factors? There already exist micro ATX and even NLX form factors. OK, the PCI-E video cards do get better cooling under BTX, but there's nothing there that says that an ATX
  • by Jeremy Erwin ( 2054 ) on Saturday June 19, 2004 @03:56PM (#9474157) Journal
    Oh, wait. It seems that none of those bar graphs include an origin. Never mind.
  • by athakur999 ( 44340 ) on Saturday June 19, 2004 @03:57PM (#9474164) Journal
    Every review I've seen on these chips has pictures of the CPU and the socket with the CPU in it. I haven't seen one with some good pictures of an empty socket.

    Anyone have any links to any? Does this new chip just rest on the pins or is there some more positive mounting method (besides that cover that goes over the CPU)?
  • by DraconPern ( 521756 ) on Saturday June 19, 2004 @04:24PM (#9474325) Homepage
    It's about time they caught up and started to use the LGA connection! The NEC VR10000 MIPS chip had LGA in 1998, as well as LGA contacts on the motherboard. To connect the LGA on the proc against the LGA on the board, a plastic holder with wads of springy gold wires was used. There were no issues with bent pins, etc. The only problem was losing those wads of gold...
  • This next incarnation of processors could speed up the innovation of watercooling. Just as alternative fuels will be researched quickly and furiously when conventional fuels become very expensive, when these new processors hit the desktop market (think Dell, HP, Compaq, etc.), they are probably going to be loud or obnoxious.

    Maybe we are going to see better watercooling systems due to mainstream demand. That would be pretty cool. (NPI)
  • by phrasebook ( 740834 ) on Saturday June 19, 2004 @04:34PM (#9474383)
    I'm sick of reading reviews that compare new products with other new products. Example on MBReview: comparing P4s that are all pretty much brand new, all expensive, hardly any difference between them. I want to see how it stacks up against my P3-866, not another P4 that I've never even seen. At least throw an older proc in there for comparison. Same with video card reviews. I don't give a hoot how the Radeon 9600 compares with the 9500... how does it compare with my GF3? FFS these reviews suck. At least throw in an older chip just for a relevant comparison. And stop mentioning how Quake 3 is getting old but is still useful: "this benchmark is slowly progressing towards an archaic stage". STFU. Who keeps regurgitating this crap.
    • 2004
      http://www.aceshardware.com/read.jsp?id=60000301
      2003
      http://www.aceshardware.com/read.jsp?id=50000356

      Not exactly what you are looking for but it will give you a clue how much performance an upgrade will give. Digging further back in the archive to say 2001 might be even better.
    • Shouldn't it be obvious how it stacks up against your P3? What would be the point of such a comparison - "Should I get a P3 866 or a P4 3600?" Same thing with the GF3 and Radeon 9600.

      Quantitative comparisons between components with such disparate performances are pretty much meaningless - "I'll wait until a new graphics card has exactly 2.5 times the performance of my old to get it." With such new features as Pixel Shader 3.0 coming (even though games may not support it yet), the quality of the picture i
      • He's not considering buying a new P3 or something. He's wondering if the new processors give the 2x (or 4x, or 10x, or 1.1x, or whatever...) benefit that would convince him to upgrade. You can't even use the old specs because the spec programs change every year in order to keep up with the latest features (at least with graphics spec programs).
  • That was my 1st question. And the review site mentions this as a possibility.

    This is something Intel seems to be a master at. Releasing CPU's to review sites that you can't buy for a long time just to get the hype and "title" of the fastest. Other companies do it, just not as bad as Intel.

    My personal opinion is /. shouldn't be participating in the hype machine by promoting CPUs that we can't even be sure will ever be released as reviewed, unless they are truly groundbreaking. And this isn't groundbrea
  • by Sivar ( 316343 ) <charlesnburns[ AT ]gmail DOT com> on Saturday June 19, 2004 @05:11PM (#9474563)
    According to Sandpile.org [sandpile.org], the 3.4GHz Pentium IV Prescott can use up to 127W, and has a typical power usage of 103W (when browsing the web or reading email).
    In my opinion, it is ridiculous for a single processor to single-handedly run up your power bill. That's like having two light bulbs on 24/7 (assuming you keep your computer on), not to mention the power needed to cool your PC, let alone your house's air conditioner.

    I would take a VIA chip for low-performance stuff, and an Athlon 64 for performance computing. Athlon 64s support 64-bit software, including 64-bit Linux distributions, are faster than Intel's best even running 32-bit software, and they have a maximum power usage of 89W. Because of Cool'n'Quiet mode, they spend most of the time running at 800MHz, consuming about 30-35W and generally not requiring a loud and obnoxious cooling fan.

    It is actually impressive what the chips can do at 800MHz. You can play a full screen DVD at 1400x1050, and the CPU usage tops out at about 5% (at 800MHz). If, of course, you run something that requires more power, like a video game or a compiler, the processor instantly switches to full speed. Handy, that.
    • by Anonymous Coward on Saturday June 19, 2004 @06:31PM (#9475033)
      That 103W figure of yours is bullshit - at least how you've understood it. That's the typical maximum power consumption, or what Intel calls the "Thermal Design Power". While web browsing or reading email, power consumption is down around 30W. You need difficult-to-design test vectors to push the CPU beyond the TDP, and unless you know what you're doing, you're unlikely to get more than a couple of watts beyond it.

      One way to prove this to yourself is to simply remove the heatsink from a running Pentium IV Prescott system. Shock, horror: it will continue to run, only slowly. That's the same slowed-down state the system silently goes into and out of depending on CPU load (it takes about 10000 CPU cycles, or about 3 millionths of a second, to get into or out of this state). Now try web browsing or reading email while it's running like this. Tell the difference? Didn't think so. You might notice a _slight_ slowdown (when rendering a complex page, for example): that's because without the heatsink on, the CPU won't go back into the "normal" S0 power state.

      So, just as a Pentium 4 doesn't dissipate 103W without a heatsink installed, so too does it not dissipate 103W if you're not doing anything.

      This can all be found in http://developer.intel.com
      Handy, that.
    • I'm sorry, but 103 watts (even if that's correct information) is not much. Like you even said, it's about the same as two 50-watt bulbs. That's 0.103 kW; running 24 hours a day for a month is about 730 hours, or roughly 75 kWh. At the US average electric rate of 7 cents/kWh (what's used on most appliances), that amounts to about $5 a month. Your air conditioner, on the other hand, uses quite a bit more power (for a central unit). I don't think any P4 desktop machines require two-phase 30-amp breakers with heavy-gauge power cabl
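
      A quick sketch of the arithmetic in the comment above, in Python. The 103 W draw and 7 cents/kWh rate are the figures quoted in the thread, not measurements, and the script assumes the machine runs at that draw 24/7.

```python
# Reproduce the comment's cost estimate: steady 103 W draw, on 24/7, at 7 cents/kWh.
draw_watts = 103                  # figure quoted upthread, not a measurement
hours_per_month = 730             # ~24 h/day over an average month
rate_usd_per_kwh = 0.07           # US average rate quoted in the comment

energy_kwh = draw_watts / 1000 * hours_per_month
cost_usd = energy_kwh * rate_usd_per_kwh
print(f"{energy_kwh:.1f} kWh/month -> ${cost_usd:.2f}/month")
# -> 75.2 kWh/month -> $5.26/month
```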
  • Come on, bust out some even faster processors! I'm sure, with PCI and IDE, we'll be able to get the best out of every single additional MHz in comparison to those slow and obsolete 2GHz machines!
  • by Anonymous Coward
    Each time Intel rolls out a new product, an improved version of that product comes about 6 months later that is much better. Early adoption gets you screwed first, quick, and for a higher price. Nothing is better than paying top dollar to be an uncompensated beta tester for the mobo companies.

    It seems to me that when Intel develops a product, halfway through the design process they realize they screwed up but still release the original tech to make cash; then the fixed version of that tech comes out 6-12
  • Am I the only one here who's going to ignore everything Intel releases until they start releasing dual-core Dothan-based P4s? Here's [tomshardware.com] Tom's Hardware's take on the new LGA 775 architecture, along with copious comparative CPU/platform benchmarks. Anandtech has their own entry here [anandtech.com]. In both cases, the combination of new architectures, CPUs, features, etc. doesn't add up to much of an advancement in performance. What you get are a lot of features of questionable value and/or features that have been touted by pl
    • Dothan-based P4s? Dothan is the Pentium-M core, and it's based on P6 (not the Pentium 4-exclusive P7). I don't think they're replacing the P4 line with Dothan, they're doing it with its successor (Merom or something, hell if I can keep track of their roadmap at the rate it's been changing). Also, I'd be surprised if they call 'em P4s, seeing as they have way less in common with any of the P4 cores we know than they do with, say, the good ol' Tualatin P3.

      Still, nitpicking aside, I get what you're saying. P6
      • This [eweek.com] article indicates that the dual-core cpus will carry the P4 name. I remember seeing some roadmaps for this somewhere, but I can't find them at the moment. As to whether or not the dual-core cpus will carry Dothan cores or cores based on its successor, I do not precisely know. I only know that they will likely be cores based on whatever is the current Pentium-M at the time of release.
  • by rumpledstiltskin ( 528544 ) on Saturday June 19, 2004 @11:56PM (#9476644) Homepage Journal
    One of the most useful things about the 925 chipset, IMHO, is the interesting possibilities it offers for SATA RAID. Say you want the performance of RAID 0, but at the same time you need the redundancy of RAID 1. Let's also say that you can only afford two SATA drives.
    The Intel 925 chipset has native support for a mixed RAID, where you can create a RAID 0 partition across two hard drives, using only part of the capacity on each drive for the RAID 0 partition. The rest of the unpartitioned space can be set aside as a RAID 1 partition. That way you can install the OS and other non-critical files that can afford to be lost on the RAID 0 partition and get the performance, but store your important stuff on the RAID 1 partition in case one of the drives fails. I'm trying to find a controller card that offers this functionality, but I can't find anyone that claims to explicitly support it. The only reason I know about the 925 features is that I got a chance to play with a pre-production board. Definitely a cool feature.
    • FYI, if you're using Linux, you can just use software RAID. It's widely, albeit controversially, regarded as faster than hardware RAID, and is substantially more flexible.

      You might've thought of that already, or not be able to use Linux, but nevertheless, I thought I'd mention it.

      Cheers
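
      To make the software-RAID suggestion above concrete, here is a minimal sketch of how the same striped-plus-mirrored split could be approximated with Linux's mdadm. The device names (/dev/sda, /dev/sdb) and the two-partitions-per-disk layout are assumptions for illustration; the script only prints the commands rather than running them.

```python
# Minimal sketch: approximating the 925's "mixed RAID" layout with Linux software
# RAID (mdadm). Assumes each disk has been pre-partitioned into a striped half
# (sda1/sdb1) and a mirrored half (sda2/sdb2). Commands are printed, not executed;
# review them before running anything for real.

stripe = ["/dev/sda1", "/dev/sdb1"]   # fast, non-redundant space (RAID 0)
mirror = ["/dev/sda2", "/dev/sdb2"]   # redundant space for important data (RAID 1)

commands = [
    ["mdadm", "--create", "/dev/md0", "--level=0", "--raid-devices=2", *stripe],
    ["mdadm", "--create", "/dev/md1", "--level=1", "--raid-devices=2", *mirror],
]

for cmd in commands:
    print(" ".join(cmd))
```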
