Intel Wireless Networking Hardware

Intel Announces New Chips, Chipsets

Posted by michael
from the intelfanboydotcom? dept.
Saud Hakim writes "Intel showed a prototype of an IEEE 802.11a wireless LAN transceiver fabricated on a 90-nm CMOS (Complementary Metal-Oxide Semiconductor) process. The chip can switch between different networks and frequencies; it is capable of tuning and tweaking itself. It can also detect what kinds of wireless networks are available nearby and shift to the most appropriate frequency." Reader serox sends more: "Intel has two big news releases today and IntelFanboy has it covered. First up, new Xeon processors have been released with a list of improvements. Second, Intel has revealed two significant milestones in the development of extreme-ultraviolet (EUV) lithography that will help lead to the next generation of chip technology."
  • Story is incorrect. (Score:5, Informative)

    by mlyle (148697) on Tuesday August 03, 2004 @02:44PM (#9870577)
    It's new Intel server platforms based on the Xeon that have been released, not new Xeons.

    That being said, this really bulks up the low-intermediate end of the Intel enterprise offering.
    • Apparently there are new Xeons, ones that support Intel's renaming of x86-64, unless those have been out for a long while and no one told the Slashdot editors.
      • From the article:

        The Intel Xeon processor, which was introduced in June, is the first Intel Xeon processor to offer Intel® Extended Memory 64 Technology (Intel® EM64T). EM64T helps overcome the 4-Gigabyte memory addressability hurdle, providing software developers flexibility for writing programs to meet the evolving demands of data-center computing. The processor also features Demand Based Switching with Enhanced Intel SpeedStep® Technology to dynamically adjust the processor's power usage
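The "4-Gigabyte memory addressability hurdle" in that quote is just the 32-bit pointer limit; a quick back-of-the-envelope check (plain Python, numbers only):

```python
# A 32-bit pointer can name 2**32 distinct byte addresses -- the
# "4-Gigabyte hurdle" the press release refers to.
gib_32 = 2**32 / 2**30    # addressable bytes, in GiB

# EM64T/AMD64 pointers are 64 bits wide. Shipping chips decode fewer
# physical address bits than that, but the hard 4 GiB ceiling is gone.
eib_64 = 2**64 / 2**60    # architectural limit, in EiB
```
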
  • hot hot (Score:5, Funny)

    by scaaven (783465) on Tuesday August 03, 2004 @02:45PM (#9870589)
    now I can fry an egg on my LAN card too!
  • by macklin01 (760841) on Tuesday August 03, 2004 @02:45PM (#9870593) Homepage
    But the leakage current problems have been increasing with process shrinks (not just at Intel, but also at IBM and AMD). So they can use even smaller lithography. Great. Will the leakage current and associated heat suck even worse than Prescott?
    • According to the article, they can use less power, due to the feature shrinkage.

      I won't pretend to understand the relationship of power and leakage wrt feature size, though.
      • My very basic understanding of the relationship is this: it takes less power to make a smaller semiconductor switch states, but as you move wires closer together you start to get capacitive leakage and inductive effects between them. Until a few years ago the former effect was significantly larger than the latter, but in recent years they have become closer in magnitude.
        I like to think of semiconductors (and most electrical things) in terms of fluid flow (not ideal but you can ge
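The two competing effects described above (cheaper switching per gate vs. growing static leakage) can be sketched with deliberately invented numbers; nothing below comes from Intel data:

```python
# Illustrative only: dynamic (switching) power shrinks with capacitance and
# the square of supply voltage, while static leakage grows as oxides and
# channels get thinner.

def dynamic_power_w(activity, cap_f, vdd_v, freq_hz):
    """Classic CMOS switching power: P = a * C * V^2 * f."""
    return activity * cap_f * vdd_v**2 * freq_hz

def leakage_power_w(vdd_v, i_leak_a):
    """Static power: P = V * I_leak."""
    return vdd_v * i_leak_a

# Hypothetical older process: bigger capacitance, higher voltage, low leakage.
old_dyn = dynamic_power_w(0.2, 50e-9, 1.5, 2e9)   # 45.0 W
old_leak = leakage_power_w(1.5, 5.0)              # 7.5 W

# Hypothetical shrunk process: smaller C, lower V, faster clock, 4x leakage.
new_dyn = dynamic_power_w(0.2, 30e-9, 1.2, 3e9)   # ~25.9 W
new_leak = leakage_power_w(1.2, 20.0)             # 24.0 W
```

Switching power per gate drops with the shrink, but leakage becomes a much larger slice of the total, which is the Prescott-era complaint in the parent post.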
    • I read somewhere today that Intel engineers have developed a new compound to replace SiO2 as the insulating layer on the gates. This was said to reduce leakage currents and allow finer lithography. IIRC the article said they were planning to start using it at 55 nm.
    • > But the leakage current problems have been increasing with __process shrinks__ [my emphasis] (not just at Intel, but also at IBM and AMD).

      Not really true. Leakage current doesn't increase significantly with just a process shrink; rather, it tends to be associated with process shrinks because one of the main reasons for a process shrink is to rev the clock rate up. In this case there is little reason to rev the clock rate on an 802.11a/b/g chip that is processing signals at pre-defined frequencies.
      • Thanks for the interesting responses, folks. I feel I've learned a lot. Perhaps I didn't RTFA well enough, but I was under the impression that these were two separate news items: one about wifi chipsets, and another about a new lithography technique that Intel would be using ubiquitously, including for future CPUs.

        I definitely agree about the power savings from the process shrinks (thanks for the correction!); we saw those in the Coppermine->Tualatin shrinks and the Willamette->Northwood shrinks,
  • by dFaust (546790) on Tuesday August 03, 2004 @02:46PM (#9870600)
    It's just like Ultraviolet lithography.... TO THE EXTREEEEEEME!!!!!

    Hey, at least they didn't spell it "Xtreme"

    • by Anonymous Coward
      It's just like Ultraviolet lithography.... TO THE EXTREEEEEEME!!!!!

      I'm confused as to why this wasn't announced on sunday, Sunday, SUNDAY!!!!
    • Extreme ultraviolet? They SHOULD have used the X... After all it's called X-rays.
        That's actually a funny story, with more point than you realize. A while ago, a number of groups spent a lot of money on x-ray lithography, without any commercial success. Because of this, x-ray lithography has a bad reputation. So, to distance the technique from x-ray lithography and to align it more closely with the very successful optical lithography, they changed the name from projection x-ray lithography to EUV lithography.

        This also points out an interesting cultural difference between Americans an
        • What a load of c... inaccuracies...

          The wavelength used (AFAIR 15.4 nm) is still far from hard x-ray. The technologies for generation, masks, and "optics" of x-ray and EUV radiation are very different.
          • Yes, 13.4 nm (~100 eV) is far from hard x rays (> 30 KeV), but who said anything about hard x rays? X-ray lithography was generally done with wavelengths near 1 nm, so it's hard to say if 13.4 nm is closer to 1 nm or 193 nm. All three techniques are very different.

            In any case, look at some of the first work done on the technique by Bell Labs and others in the late '80s and early '90s. Those papers refer to the technique as soft x-ray projection lithography.

            (I will admit that my second paragraph about cult
  • by Virtual PC Guy (720945) <ben@@@bacchus...com...au> on Tuesday August 03, 2004 @02:46PM (#9870608)
    Yay - now it will be easy for guys like me (lazy people who don't feel like assembling machines by hand anymore) to get an x86-64 box from Dell:

    http://www1.us.dell.com/content/products/compare.aspx/precn?c=us&cs=04&l=en&s=bsd [dell.com]

    Or should I say 'Intel® Extended Memory 64 Technology' (whatever, guys - everyone knows that it is just AMD's tech)
  • a? wtf? (Score:4, Interesting)

    by rokzy (687636) on Tuesday August 03, 2004 @02:46PM (#9870609)
    isn't 802.11a the old one that had a few benefits in certain situations over 802.11b, but is now superseded by 802.11g?
    • Re:a? wtf? (Score:5, Informative)

      by hpa (7948) on Tuesday August 03, 2004 @02:51PM (#9870642) Homepage
      Not really. 802.11a operates in the 5 GHz band, and can thus coexist with 802.11b without suffering degradation, unlike 802.11g which does degrade when .11b devices are present -- if nothing else because the .11b devices hog the channel for 5 times as long.

      Thus, heavy-use WLANs like corporate installations are frequently A+G, and a lot of current wlan client chips are also A+G.

      In the current wlan market, 802.11a is the premium solution; unfortunately both in terms of cost and performance.
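The "hog the channel for 5 times as long" figure above falls out of simple airtime arithmetic. A rough sketch (payload only; real frames add preamble, contention, and ACK overhead):

```python
def airtime_ms(frame_bytes, rate_mbps):
    """Transmission time of the payload alone, in milliseconds."""
    return frame_bytes * 8 / (rate_mbps * 1e6) * 1e3

b_time = airtime_ms(1500, 11)   # 802.11b peak rate: ~1.09 ms per frame
g_time = airtime_ms(1500, 54)   # 802.11a/g peak rate: ~0.22 ms per frame
ratio = b_time / g_time         # ~4.9 -- roughly "5 times as long"
```

So one .11b station at 11 Mbps occupies almost five times the airtime per frame that a 54 Mbps station does, which is why mixed b/g cells degrade while a cells in the 5 GHz band don't see .11b traffic at all.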

      • Re:a? wtf? (Score:5, Informative)

        by ElForesto (763160) <{moc.liamg} {ta} {otserofle}> on Tuesday August 03, 2004 @03:15PM (#9870843) Homepage
        It's worth noting that 802.11a has a significantly shorter theoretical maximum range when compared to the 2.4GHz (802.11b/g) solutions.
        • Re:a? wtf? (Score:3, Interesting)

          by Jeff DeMaagd (2015)
          It's worth noting that 802.11a has a significantly shorter theoretical maximum range when compared to the 2.4GHz (802.11b/g) solutions.

          That is true but it is also far less crowded, with five or eight available channels in most countries. With the recent FCC posting, "a" is considered an indoor technology. I get pretty good range with "b" - something pretty close to the claimed 1000ft with the equipment I have, but that is with no obstructions. I really don't need that sort of range. The range problems
  • by SharpFang (651121) on Tuesday August 03, 2004 @02:48PM (#9870615) Homepage Journal
    Texas Instruments released a new microcontroller based on the revolutionary TTL ( Transistor-Transistor Logic) technology!
  • 10 GHz? (Score:5, Interesting)

    by Pusene (744969) on Tuesday August 03, 2004 @02:51PM (#9870644) Homepage
    Too bad this type of wireless system is not allowed in better parts of the world, due to the regulation of radio frequencies. Why not use this adaptive frequency model in CPUs? Let the clock speed scale with the load on the processor! (I mean scale in 30 MHz increments or something, not step between two speeds like some CPUs do now!)
    • Re:10 GHz? (Score:3, Informative)

      by Short Circuit (52384)
      Why not use this adaptive frequency model in CPUs.

      They do. It's called SpeedStep or LongRun.
    • Re:10 GHz? (Score:5, Funny)

      by IPFreely (47576) <mark@mwiley.org> on Tuesday August 03, 2004 @03:31PM (#9870963) Homepage Journal
      Too bad this type of wireless system is not allowed in better parts of the world, due to the regulation of radio frequencies.

      That's OK. I don't live in the better parts of the world. I live in the US.

    • Just FYI, the operating frequency of the radio has *NOTHING* to do with its speed. Whatever frequency the radio operates on, it uses a fixed amount of bandwidth (on the order of 30 MHz, not gigahertz). So if I am at 10 GHz, it means I am occupying frequencies between (10 GHz - 15 MHz) and (10 GHz + 15 MHz). It doesn't mean that I have a CPU running at 10 GHz. The operating speed of these radios is based on reception power, which is generally inversely (and exponentially) proportional to the di
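The bandwidth-not-carrier point above can be made concrete with the Shannon capacity formula; the carrier frequency never appears in it (hypothetical numbers below):

```python
import math

def capacity_bps(bandwidth_hz, snr_linear):
    """Shannon limit: C = B * log2(1 + SNR). The achievable rate depends on
    channel width and signal-to-noise ratio, not on where the channel sits
    on the dial (2.4 GHz, 5 GHz, or 10 GHz)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A hypothetical 30 MHz-wide channel at 20 dB SNR (snr_linear = 100):
cap = capacity_bps(30e6, 100)   # ~200 Mbit/s upper bound, at any carrier
```
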
    • It seems like people are misunderstanding what you mean, but you talk about two disparate things in one paragraph.

      10GHz is still pretty expensive to deal with for consumer commodity parts for wireless radio. 5GHz is a hard enough sell as it is.

      I'm not sure why CPUs don't have a larger range of speeds for dynamic clocking. There may be little power savings benefit for clocking slower than the minimum speed, and not much benefit to having intermediate speeds if the system can switch between the two freque
      • Thanks for understanding me. Isn't this /.? I thought we geeks ought to be able to hold two thoughts in our heads at the same time!
    • AFAIK, 'harvard architecture' CPUs like the ancient 68040 in my Quadra could be clocked ALL the way down, even stopped if need be. When I heard that Intel was introducing 'SpeedStep' so their CPUs could drop from 500 to 400 MHz (or whatever) to save some juice, I couldn't help but think that they had missed the boat entirely. You could make very cool, very quiet laptops if you had CPUs that would just clock themselves based on a signal from the memory controller signalling how busy the bus was (bus saturation exc
      • That has nothing to do with harvard architecture, and your 68040 wasn't a harvard arch.

        Harvard architecture [wikipedia.org] refers to separating instruction and data memories, unlike the von Neumann architectures you find most places. Harvard architectures are still popular in many microcontroller families, though.

        Whether parts are certified for static operation (e.g. clock frequency down to 0Hz) is a completely different matter.
        • Ahh, that's right. I'm a bit rusty with my old CPU technologies.
        • That has nothing to do with harvard architecture, and your 68040 wasn't a harvard arch.

          That is not 100% accurate. Actually it is common to designate CPUs as a harvard architecture when they use separate data and code caches. For example it is impossible on the 68040 to modify code that resides in the code cache.
          • This is semantic. Harvard architecture implies separate paths for data and instructions. The path into the CPU for instructions is the same as the path into the CPU for data.

            On the 68040, yes, there is a separate I-cache that isn't coherent with memory writes. But it is quite possible to use instructions that operate on data memory to modify code-- as long as you're sure to invalidate the i-cache before the code runs.

            Yes, I admit some people use the term harvard architecture to refer to processor a
    • Intel's new method of throttling is to take fewer instructions off the queue per unit of time. The CPU does less work, so fewer gates switch, so power dissipated as heat is reduced. Why change clock rates when you can just process fewer instructions?
  • Wake me when (Score:4, Insightful)

    by wowbagger (69688) on Tuesday August 03, 2004 @02:54PM (#9870685) Homepage Journal
    Yawn. Wake me when Intel has released real, production ready (NOT 0.2) drivers for Linux for this, or any other modern wireless network chip.
    • Re:Wake me when (Score:2, Interesting)

      by Anonymous Coward
      The specs are publicly available. Instead of sitting around whining, why don't you get off your ass and write the drivers yourself?
      • Instead of posting anonymously defending a company with billions of dollars who refuses to write the drivers, why don't you divert your energies to signing up for a Slashdot account.

        While you're at it, maybe you should think about how retarded that statement you just made was, and rethink it. An acceptable retort would be "Linux sucks, I personally hate it, and Intel is doing the right thing by ignoring it. If you feel differently, write it yourself!" which is what your statement came off as to begin w
      • Re:Wake me when (Score:2, Insightful)

        by debilo (612116)
        Uhm. I think one could expect a vendor to provide drivers themselves. You actually have to pay for their products, remember? You give them money, you make them rich. I really don't feel like giving money to a company just to find out that I'm also paying them to limit my choice.

        Grandparent was right, you are wrong.
    • Yawn. Wake me when Intel has released real, production ready (NOT 0.2) drivers for Linux for this, or any other modern wireless network chip.

      Wake ME when they publish the source for the DSP firmware for the chip/core.

      a) Visibility into the firmware is just about mandatory for writing your own driver. API documentation is better than nothing, but it's often not enough.

      b) Drivers are relatively easy compared to doing work in the signal processing portion. While the FCC really doesn't want you to be
      • They may not be using DSPs as much as FPGAs/ASICs - a great deal of the signal processing for that sort of thing is easier done as parallel blocks of hardware than software.

        The FCC is no more worried about you mucking around in the modulator/demodulator than in the driver - either will let you cause interference.

        (A guy who does software-defined radio for a living.)
        • They may not be using DSPs as much as FPGAs/ASICs - a great deal of the signal processing for that sort of thing is easier done as parallel blocks of hardware than software.

          It's an 802.11a chip. While .11b used DSSS (which is a time domain solution and goes well with dedicated logic), .11a and .11g use OFDM (which is based on FFTs thus is much easier to do in a DSP than with dedicated logic).

          (And just now I have a real need to get hold of an OFDM testbench for prototyping some related things in a nearby
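The OFDM/FFT connection mentioned above is easy to show in miniature: an OFDM transmitter maps one symbol onto each subcarrier and runs an inverse DFT to get the time-domain waveform. A toy sketch with 8 BPSK subcarriers (802.11a actually uses 64-point FFTs with 48 data subcarriers):

```python
import cmath

def idft(symbols):
    """Naive inverse DFT: x[t] = (1/N) * sum_k s[k] * e^(2*pi*i*k*t/N).
    This is the core of an OFDM modulator; real chips use an FFT."""
    n = len(symbols)
    return [sum(s * cmath.exp(2j * cmath.pi * k * t / n)
                for k, s in enumerate(symbols)) / n
            for t in range(n)]

# Hypothetical 8-subcarrier OFDM symbol, BPSK (+1/-1) on each carrier.
time_domain = idft([1, -1, 1, 1, -1, 1, -1, -1])
```

The receiver runs the forward FFT to split the waveform back into per-carrier symbols, which is why a DSP (or an FFT block in an ASIC) sits at the heart of .11a/.11g basebands.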
          • You can also implement FFTs in hardware, or use a different approach - a more "analog-y" method like mix-and-filter, which lets you run a separate downconverter for each carrier.

            As for the HW - what kind of development are you doing? What's your price range for a devel board? Are you doing this as a hobbyist or professionally? If you are looking in the professional range you could get a Pentec board or an Aeroflex PXI board.
  • Press Release links (Score:3, Informative)

    by mobby_6kl (668092) on Tuesday August 03, 2004 @03:07PM (#9870783)
    why would somebody link to a forum reposting the official press release? (well, OK, I think I know)

    New Server Platforms [intel.com]
    EUV Lithography [intel.com]
  • Mesh This! (Score:2, Funny)

    by lofi-rev (797197)

    Now if everybody would just carry around one of these devices and cooperate in a mesh network then I could finally achieve my dream of....

    Well, it would be really cool.

  • by starannihilator (752908) on Tuesday August 03, 2004 @03:09PM (#9870809) Journal
    There has been a great deal of discussion regarding the availability of the Lindenhurst chipset [theinquirer.net], and WIN Enterprises [win-ent.com] is pleased to offer developers the latest Xeon technology for their embedded controllers and platforms.

    WIN Enterprises, Inc., a leading designer and manufacturer of customized embedded controllers and x86-based electronic products for OEMs, has announced the availability of the latest Intel 64-bit Xeon core module for developers of high-performance embedded platforms - Nocona / Lindenhurst [win-ent.com]. WIN Enterprises is pleased to offer leading-edge, long-life solutions based on Nocona / Lindenhurst for everything from embedded single board computers to platform systems. For OEMs looking to incorporate the newest Xeon technology, WIN Enterprises has developed a proven core module for Nocona / Lindenhurst to create custom embedded controllers.

    "We have spent an extensive amount of time debugging and perfecting this specific core module," said Chiman Patel, WIN Enterprises' CEO and CTO. "This will allow our OEM customers to bring their application-specific Nocona / Lindenhurst embedded products to market quickly and cost-effectively."

    For more information, please contact WIN Enterprises at 978-688-2000 or sales@win-ent.com. Visit www.win-ent.com to learn more about WIN Enterprises' embedded design and manufacturing services.
  • "And how much *is* this Complementary chipset?"

    CB
  • Well, I only hope this new wireless performs better than Centrino. It's not like integrating WiFi into a chipset is rocket science - all chipset makers are at it now. Oh, and this time, some Linux drivers right off the bat, please.

    At the moment Centrino pairs an excellent low-power, good-performing processor (the Pentium M) with one of the poorest-performing Wi-Fi solutions you can get. But look at how they've marketed it on its poorest facet: with Centrino you can read your email on top of Everest, brows
  • So now people like me, who don't think it makes sense to buy an x86 that can't handle 64 bits, and who (unlike me) don't have confidence in AMD, can start buying x86 chips again.

    Tell me, is EM64T [intel.com] truly identical to AMD64 [amd.com] or are there small differences? I'm curious.

    • well, I think one has AMD written on the package and the other has Intel written on it, so they're definitely not identical - or were you talking electrically, or in terms of timing and logic?
      • Laugh while you can, monkey boy, but I'm worried that EM64T [intel.com] is just enough different from AMD64 [amd.com] to give us all headaches.

        Did someone [slashdot.org] just say that the DMA implementations are different enough that device drivers will not be compatible? Tell me more about that.

    • Intel's chip obviously is a completely different design - i.e., it works differently internally. They copied the _instruction set_ to make the new 64-bit instructions compatible with the AMD chip's - seeing as AMD got to market first, this was the logical thing to do. However, it's worth noting that Intel already had their own 64-bit chip designed beforehand - they just hadn't thought the market was there for a 64-bit chip yet (thus letting AMD beat them to the punch).

      If AMD hadn't released their ch
      • I believe you are mistaken. It is true that "Intel already had their own 64-bit chip designed beforehand," and in fact it was actually available as a product. These 64-bit Itanium chips have a completely different instruction set - they are not x86 chips. Intel's plan was to move the world from x86 to Itanium, so it is incorrect to say "Intel would have eventually released the same chip" without AMD breathing down its neck. The success of AMD64 forced Intel's hand.

        As far as copy/clone/reverse-engineer goes

        • Right. I was just wanting to make clear what we mean when we say 'clone', as it wasn't clear originally whether you just meant the instruction set or more.

          The Itanium is another story. I was, however, referring to some of the P4s, which Intel has for a while been selling with 64-bit capabilities present but simply disabled (as Intel didn't see a market for them, and obviously from a marketing perspective wants to hold back their introduction until it's something that can be sold for extra $$$). Here's what a qui
  • Are they going to have to tweak the Duke Nukem Forever engine to take advantage of all these features?
  • The chip can switch between different networks and frequencies; it is capable of tuning and tweaking itself.

    I don't see how this has anything to do with the 90 nm process. We've had the technology to do this for quite a while. Just have the right frequency divider on the VFO for demod and you have the frequency switching. Run it over the bands sequentially and you've got autodetect. Program one or two algorithms into the firmware and you have all the tweaking you'd ever need. Is this just some other c

  • by Mr.Zong (704396)
    Sweet. I didn't know you could use my computer's muffed-up clock to make chips. Rock on. Now if I can just figure out how to use the BIOS to make some dip, I'll be in freakin' heaven.
    • One and the same. The "CMOS" in your system is a memory chip made using a CMOS process. It was called CMOS because the first systems to have it had mostly NMOS chips, and a CMOS chip was the only type low-power enough to run from a battery. Here are the basic chip technologies:

      PMOS - P-channel Metal Oxide Semiconductor (slow, high power)
      NMOS [wikipedia.org] - N-channel Metal Oxide Semiconductor (fast, high-power)
      CMOS [wikipedia.org] - Complementary Metal Oxide Semiconductor (a sort of cross between PMOS and NMOS, fast
  • by Anonymous Coward
    No PS/2 connectors, no serial, no parallel. USB or forget it.
