Intel Launches Atom CPU With Integrated FPGA

An anonymous reader writes "Intel is quite clearly serious about offering competition to ARM in the embedded market, and has just announced a new Atom processor series that offers a unique selling point: an integrated FPGA. Billed as 'the first configurable Intel Atom-based processor,' the Atom E600C series combines an Intel Atom 'Tunnel Creek' chip with an Altera Field Programmable Gate Array — offering, the company claims, significantly more flexibility for ODMs and OEMs."
This discussion has been archived. No new comments can be posted.

  • Awesome (Score:5, Informative)

    by phantomcircuit ( 938963 ) on Tuesday November 23, 2010 @03:22AM (#34314928) Homepage

    Assuming it's priced relatively reasonably, that is fucking awesome.

    • Re:Awesome (Score:5, Interesting)

      by arivanov ( 12034 ) on Tuesday November 23, 2010 @04:44AM (#34315298) Homepage

      It is not the pricing that is interesting here; it is whether there will be anticompetitive marketing restrictions.

      Atom was intentionally crippled by pairing it with a crippled, 5+ year old video chipset and a specific resolution restriction for systems using it. After Nvidia broke this restriction, the platform was redesigned to lock Nvidia out.

      The i815E was intentionally crippled to 512 MB of RAM through a marketing restriction so that RDRAM and the 840 and 820 chipsets would sell.

      Intel turned off SMP anywhere it could for the ten years after the Pentium Pro, so that the "server varieties" of the same chip (often from the same tray) would sell.

      And so on.

      Intel has a long history of shooting itself in the foot on non-cannibalisation grounds, and I suspect it has shot itself here as well. This could make a phenomenal HPC platform due to its motherboard "real estate" and cooling requirements; however, that would eat into Intel's Xeon + QPI-enabled FPGA sales. So I guess it will be crippled through marketing to disallow that.

      FFS, it does not take a genius to understand the basic idea that "If there is money in it, someone else will cannibalise it for you, so you might as well cannibalise yourself and expand the market".

      • Re:Awesome (Score:5, Interesting)

        by Elbereth ( 58257 ) on Tuesday November 23, 2010 @05:24AM (#34315468) Journal

        Dude, everyone does that. AMD/ATI does it, Nvidia does it, IBM does it, Motorola used to do it, and if Apple ever designed/manufactured anything themselves, they would do it, as well. It's called marketing. Those $1000 "Extreme" CPUs that Intel sells only cost about $100 to manufacture, if that. Probably only $25 or $50. How do you think Intel recoups its R&D costs? It prices the high end chips as high as the market will allow, then sells the mid-range chips for a more reasonable price.

        Did you forget that AMD was selling Athlon XP and Athlon MP chips at wildly different prices, even though you could enable MP on the Athlon XP by drawing on it with a pencil? What about disabling MP on every one of the later Athlon chips? Even some Opteron chips have MP disabled! That's seriously wrong, in my opinion. As far as I know, no Xeon has ever had MP disabled. Say what you will about Intel, but if you buy a Xeon, you know what you're getting.

        What do you want Intel to do, anyways? Sell all their CPUs at manufacturing cost, with no feature differentiation at all? So that everyone can buy Xeon MP chips for $50 each? Yeah. OK. Let's see how long that lasts. I'd say Intel would be bankrupt in less than a year.

        Seriously, dude, if you want cheap SMP motherboards and CPUs, go shop on ebay for used stuff from failed dotcoms. That's what I used to do. I even scored some high-end server-grade hardware, like DEC Alpha CPUs, SCSI RAID enclosures, SCSI drives, and smart UPSes. There's no need to rant about Intel's "anti-competitive" tactics, of which exactly zero legitimate examples exist in your post. Intel has done some pretty shitty things in the past, but this isn't one of them. Save your rant for something that matters.

        • Re:Awesome (Score:5, Funny)

          by Man On Pink Corner ( 1089867 ) on Tuesday November 23, 2010 @07:13AM (#34315970)

          Dude, everyone does that. AMD/ATI does it, Nvidia does it, IBM does it, Motorola used to do it, and if Apple ever designed/manufactured anything themselves, they would do it, as well.

          Dude, WTF? If Apple were any more vertically-integrated they'd own their own African tantalum mine.

        • Re: (Score:3, Interesting)

          Did you forget that AMD was selling Athlon XP and Athlon MP chips at wildly different prices, even though you could enable MP on the Athlon XP by drawing on them with a pencil?

          Done that. It ups the heat output of the chip from "lots" to "ow my fingerprints"...

          I suspect that the chips actually sold as MP were from the higher-end binnings so that they produced less heat (the same bins that the highest performance and the laptop versions of the chips also come from). The "midrange" chips often can't be clocked to the same speed as the top-end chips, because they are physically inferior.

          Incidentally the Athlon XP-M chips used less power and put out less heat than the normal ones, and…

        • Even some Opteron chips have MP disabled!

          You know, the first thing I thought of when reading that Intel was going to include an FPGA on the Atom was that some manufacturer(s) would figure out how to use it to give consumers less for their money, or to prevent them from doing something with their hardware.

        • by DrSkwid ( 118965 )

          s/YOUR MESSAGE/every company in the world tries to charge what the market allows them to; it's called elasticity of demand: price is determined by competition, not by the cost of production/

      • by CAIMLAS ( 41445 )

        Yeah, maybe. I'm sure some people will attempt it. But you're forgetting something crucial: warranties.

        If you drive your car off a cliff, you've voided the warranty. The same basic principle applies here.

  • by __aatirs3925 ( 1805148 ) on Tuesday November 23, 2010 @03:31AM (#34314978) Journal
    I'm kinda excited for whatever this means. Could somebody please explain? Does this mean Atom processors might be useful now?
    • Re:double rainbows (Score:5, Informative)

      by Neil Boekend ( 1854906 ) on Tuesday November 23, 2010 @03:44AM (#34315046)
      It means that Intel has thrown an FPGA in alongside a normal CPU. FPGAs are highly programmable chips that are very fast at the thing they are programmed for. Changing the programming takes, by comparison, a lot of time, and they usually can't do anything other than what they are programmed for.

      If you programmed one to be a decryption device you could have very fast decryption, but you couldn't have it do something else when there is nothing to decrypt (it can't multitask).

      All in all, the result will be a major speedup for applications that are reimplemented to run in the FPGA (and are small enough for the FPGA), but nothing will change for other applications.

      There are many other opportunities and limitations, as it is a completely different kind of device, but these are the most important ones (as far as I know) in this case.
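      The single-purpose behavior described above can be sketched in a few lines. This is a hypothetical model, not any real FPGA API: the fabric holds exactly one configured function at a time, it is fast at that one job, and asking it for anything else fails until it is (slowly) reconfigured. The XOR "cipher" is a deliberately trivial stand-in for a real decryption core.

```python
# Hypothetical sketch of the offload pattern above. No real FPGA
# toolchain or API is implied; all names are invented for illustration.

class FPGA:
    """Models an FPGA that holds exactly one configured function."""
    def __init__(self):
        self.bitstream = None  # the one task the fabric is wired for

    def configure(self, name, func):
        # Real reconfiguration is slow (milliseconds to seconds).
        self.bitstream = (name, func)

    def run(self, name, data):
        # The fabric can only do the one thing it was programmed for.
        if self.bitstream is None or self.bitstream[0] != name:
            raise RuntimeError("FPGA not configured for " + name)
        return self.bitstream[1](data)

def xor_decrypt(data, key=0x5A):
    # Toy stand-in for a hardware decryption pipeline.
    return bytes(b ^ key for b in data)

fpga = FPGA()
fpga.configure("xor_decrypt", xor_decrypt)
ciphertext = xor_decrypt(b"hello")            # XOR "encrypts" symmetrically
plaintext = fpga.run("xor_decrypt", ciphertext)
print(plaintext)                              # b'hello'
```

      Asking `fpga.run("fft", ...)` here raises an error, which is the point: until the fabric is reprogrammed, it simply is not an FFT engine.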
      • This sort of stuff isn't done for a general purpose computer usually. Instead you put whatever hardware acceleration you need into the FPGA because you've got an embedded system or have application specific hardware. It's often used when you have a low volume of devices that you're making because FPGAs are relatively expensive but easy to modify if you find bugs. I've worked on a system that had multiple FPGAs + multiple DSPs + CPU (and a big fan).

        Though you can use them for things other than hardware a…
      • Interesting. I was thinking the same as GP: what the heck is that!

        Can this reprogramming be done by the OS, upon need? And how slow is slow?

        Could be nice for e.g. video decoding.

        • by wisty ( 1335733 )

          Video decoding may be better on a dedicated chip. A good GPU should be fine (oh wait, it's Intel ...)

          I guess FPGA could be used for nifty device drivers. You don't want to change the touchscreen interface very often (as an example), but if you (shudder) encounter a bug then the FPGA can be modified.

            • Hence my question: how slow is slow to reprogram? Could this be a replacement for various dedicated chips, taking up a task when needed? Like when you want to play a video it becomes a video decoder, or maybe it can be used for other tasks that are fairly intensive and long-running.

            • The only sure answer is unfortunately "it depends". Just because they are programmable in the "field" doesn't mean you can necessarily do it from software. Some FPGAs require a service tech to hook some other system up to the motherboard to change anything. Some require pulling the chip and putting it in a portable device. Some can have different programs swapped in from ROM at different times. Some can have custom programs loaded from RAM by an application. I'm not sure which this is, but since it's the At…

            • by Andy Dodd ( 701 )

              Given that some of Xilinx's parts can reconfigure from flash memory in only 1-2 seconds, this much smaller part should be able to configure in under half a second if the reconfiguration architecture is done right.

              So you could reconfigure it as part of an application startup sequence. Not sure how you'd handle device contention though (an attempt to run two apps that both want to use the FPGA - context switching would be a real bitch).
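              The contention problem raised above can be sketched as an arbiter: only one application owns the fabric at a time, and every change of owner costs a full reconfiguration (assumed here to be on the order of the ~0.5 s estimated in the comment). All names and numbers are illustrative assumptions, not measurements of any real part.

```python
# Hedged sketch of FPGA device contention. Two apps wanting different
# bitstreams must serialize on the one fabric, and each ownership change
# pays a reconfiguration. Names and the 0.5 s figure are assumptions.

import threading

RECONFIG_COST_S = 0.5  # assumed reload time from flash (not measured)

class FpgaArbiter:
    def __init__(self):
        self._lock = threading.Lock()
        self._loaded = None    # which bitstream is currently in the fabric
        self.reconfigs = 0     # count of expensive reloads we paid for

    def run(self, bitstream, func, data):
        with self._lock:       # only one app owns the fabric at a time
            if self._loaded != bitstream:
                # A real system would block ~RECONFIG_COST_S here.
                self.reconfigs += 1
                self._loaded = bitstream
            return func(data)

arb = FpgaArbiter()
# App A and app B alternate, forcing a reload on every single call:
for _ in range(3):
    arb.run("fft", lambda d: d, "a-data")
    arb.run("crypto", lambda d: d, "b-data")
print(arb.reconfigs)   # 6 reloads; naive context switching is brutal
```

              Six calls cost six reloads, roughly three seconds of dead time under the assumed figure, which is why a scheduler would need to batch work per bitstream rather than time-slice naively.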

    • Re: (Score:3, Insightful)

      by Kjella ( 173770 )

      If there was a mass market, you'd make an ASIC. This lets embedded developers create special circuitry for whatever embedded need they have, which is useful but I don't see it as a mass market product for regular consumers.

      • Re: (Score:3, Informative)

        by Morty ( 32057 )

        Some vendors, such as Juniper, have transitioned at least some of their product lines from ASICs to FPGAs. A problem with ASICs is that you can't patch them for security issues. This is bad if, say, you sell firewall products.

        • by mrand ( 147739 )

          s/some vendors/most vendors/

          Telecom and datacomm equipment have long used FPGAs at key points in their systems for one or more of the following reasons:

          * off-the-shelf silicon sometimes costs too much

          * off-the-shelf silicon is missing something that is important to you (maybe an interface type, or a key feature)

          * off-the-shelf silicon doesn't have the density

          * ASICs cost a lot to develop, and prices have been going up (while FPGA prices go down each year). If you don't have pretty high volume, each year i…

      • I could see usage of this in portable consumer devices (phones, tablets, whatever is the next thing), offering the possibility for app-dependent 'ASIC' on otherwise low-computing-power devices - say, when the user is watching a video, put the stream decoding stuff on the FPGA, if there is music in the background, put the mp3/ogg/whatever decoding there so that the main processor is free for other apps, heck, if flash or html5 is too slow then probably some compute-intensive part of it can also be pushed to…

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      Essentially this means that there is a chunk of the processor which will be *COMPLETELY* configurable. FPGA stands for "field programmable gate array" which just implies that you can re-program the way those gates are connected *after* the chip has been manufactured.

      Without understanding basic electrical engineering logic it's hard to describe all the neato things you can do with this, but essentially FPGAs can do all sorts of neat things and they can do them in parallel. If you've ever heard of something

      • Wouldn't a GPU with GPGPU be better at that anyway?

        • Current GPUs are only really useful for some tasks, namely code that doesn't do a lot of branching (e.g. matrix multiplication). The rest can't really gain that much performance. Not to mention, you have to manually upload and download data to the GPU; it's a total mess to program.

          With an FPGA, you can generate on-the-fly a customized hardware accelerator for your problem domain. This could be a processor with specialized instructions for your problem domain, a vector processor, or even a hardware raytra…
      • Re: (Score:3, Funny)

        And REALLY piss off Intel at the same time. Using Intel chips to decode HDCP would be pretty ironic. I mean, can you imagine using the FPGA to do the grunt work of decoding and then using the CPU to re-encode the stream?
    • Re:double rainbows (Score:4, Informative)

      by ThermalRunaway ( 1766412 ) on Tuesday November 23, 2010 @03:55AM (#34315098)
      FPGAs are useful because the actual digital circuits are reprogrammable. So you could theoretically patch your CPU and change the physical functionality of at least part of it. This would allow all sorts of nice customizations.

      One interesting aspect of the Altera soft CPU (NIOS), is that you can add custom HW directly into the execution unit, basically making your own HW instructions. Then you can generate an assembly instruction for it and use it right from your code. This lets you do nifty things like build a custom piece of HW to implement some arcane computation that is specific to your particular use of the HW and have it built right into the CPU. Wonder if there is this sort of setup here.. that would be pretty nice.
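      The custom-instruction idea above can be illustrated with a toy model. This is not the real Nios tool flow; it is a hypothetical accumulator machine whose instruction table can be extended with a "custom HW instruction", here a population count that would otherwise take a loop of shifts and adds in software.

```python
# Illustrative sketch (invented names, not the Altera/Nios API) of
# wiring a custom operation into a CPU's execution unit and getting a
# new mnemonic for it.

def popcount(x):
    # Stands in for the custom hardware block: count of set bits.
    return bin(x).count("1")

class TinyCPU:
    def __init__(self):
        self.acc = 0
        # Built-in "instructions": each takes (accumulator, operand).
        self.ops = {
            "LOAD": lambda acc, v: v,
            "ADD":  lambda acc, v: acc + v,
        }

    def add_custom_op(self, name, func):
        # Analogous to adding custom HW to the execution unit and
        # generating an assembly instruction for it.
        self.ops[name] = func

    def run(self, program):
        for op, arg in program:
            self.acc = self.ops[op](self.acc, arg)
        return self.acc

cpu = TinyCPU()
cpu.add_custom_op("POPCNT", lambda acc, v: popcount(acc))
# LOAD 0xFF; ADD 0x100; POPCNT -> popcount(0x1FF) = 9 set bits
result = cpu.run([("LOAD", 0xFF), ("ADD", 0x100), ("POPCNT", None)])
print(result)   # 9
```

      In the real flow the `POPCNT` body would be synthesized hardware sitting next to the ALU, so the "arcane computation" costs one instruction instead of a software loop.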
      • by Arlet ( 29997 )

        This is not a soft core CPU. You get a package with 2 dies inside: a regular Intel Atom CPU core, and a separate FPGA.

        • Re: (Score:3, Insightful)

          I know... I was simply giving an example of an interesting way Altera lets you customize some of their IP. The Atom has an Intel core and an Altera FPGA; I'm doing some wishful thinking that maybe you would get some level of access to the CPU like you do with the Nios.
          • man, I remember working on a Nios II project back in 2005 for my internship; that was an awesome experience, configuring my own CPU, bus clock, multipliers, memory interfaces and all that stuff

            Compile times sucked though, especially since the best hardware us interns got was a 2.4 GHz Pentium 4. I would click compile and go for a walk around the building, get tea/coffee, and return to find my 45-minute compile couldn't achieve the clock speeds I wanted it to run at...

    • Re: (Score:3, Insightful)

      by Macman408 ( 1308925 )

      Probably not for anything you'd be interested in. Unless of course, you're interested in a slow CPU with slow (but custom) logic. If you want fast custom logic, or ridiculously low-power, you go with an ASIC (assuming you have either high volume, or can tolerate a high per-unit price). If you don't have a rather complex, repetitive calculation to do, you go with a regular CPU. If you do have a big calculation, you might consider a faster CPU or GPU, or at least something with a faster connection between the…

      • There is a programming model for FPGAs. They have their own programming languages, which are widely used in the industry (Verilog/VHDL). This model isn't so different from the way OpenCL is used with GPUs. This kind of design will work well for some applications, where custom hardware accelerators can be precompiled and loaded on demand. There will already be demand for this. Some companies that can't afford to make ASICs will certainly like the idea of integrating their own decryption/routing/video accelera…
        • There *is* a programming model, there's just no *good* programming model. I'm very familiar with Verilog and VHDL, and use them in my job. That said, they're not languages that an average programmer can pick up and expect to get a good result - you have to learn how various constructs get converted to hardware, and how you are constrained by the hardware you are working with.

          Similarly, I feel like OpenCL and the many parallel programming models suffer similar limitations (I've used Pthreads, OpenMP, MPI, Ha…

    • Atoms aren't totally useless. I actually have a dual-core Atom that I am using as a server, with a VM running. I also remote-desktop into the machine and use it on a daily basis at work. Since I'm pretty much the only user of the machine, it works perfectly and consumes under 30 W. I used to leave my desktop on 24 hours a day and it was sucking up 300 W all day long. That's a huge power savings for me at $0.12 a kWh.

      If you're wondering why I keep this thing running all the time, it's because I run an SVN,

      • Yeah, but SSDs have more CPU overhead than HDs. If you are running something that doesn't do a lot of disk I/O it may be that you would be better off with an HD.
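      The savings claimed for the Atom server above check out on the poster's own numbers (300 W desktop vs 30 W Atom, 24/7, at $0.12/kWh):

```python
# Back-of-envelope check of the power-savings claim, using only the
# figures given in the comment (300 W vs 30 W, $0.12 per kWh).

old_w, new_w = 300.0, 30.0
rate = 0.12                      # dollars per kWh
hours_per_year = 24 * 365

saved_kwh = (old_w - new_w) / 1000.0 * hours_per_year
saved_dollars = saved_kwh * rate
print(round(saved_kwh), round(saved_dollars))   # 2365 kWh, $284 a year
```

      Roughly $284 a year, so "huge" is fair for a box that is on around the clock anyway.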

    • Good example (Score:3, Insightful)

      by SuperKendall ( 25149 )

      It would let viruses create some custom FPGA code that would be able to crack any encrypted files you had in mere seconds, instead of hours.

  • by allanw ( 842185 ) on Tuesday November 23, 2010 @03:39AM (#34315010)
    In related news, and also very interesting: []
  • by fpgaprogrammer ( 1086859 ) on Tuesday November 23, 2010 @03:39AM (#34315014) Homepage


    • I thought you don't need a programmer for FPGAs ;)

      • Re: (Score:3, Insightful)

        You need an EE master. FPGA programming is *hard*.
        • i am an ee monster!

  • Actual information (Score:5, Informative)

    by Mysteray ( 713473 ) on Tuesday November 23, 2010 @03:46AM (#34315056) Homepage

    350 user I/O pins. I think that could control a few Christmas lights. Or make a nifty message-passing bus for a parallel computer.

    Wonder if anyone will make inexpensive boards with breakout IO?

    • Re: (Score:3, Interesting)

      by allanw ( 842185 )
      Only PCI-E 1x interconnect between the CPU and FPGA? Kinda disappointing.
    • by dgatwood ( 11270 )

      350 user I/O pins. I think that could control a few Christmas lights. Or make a nifty message-passing bus for a parallel computer.

      Yes! Finally enough I/O pins to make my velocity-sensitive pipe organ idea viable....

      • Actually I'm excited about the possibility of inserting a chip like this into my Hammond M-100A organ, then running Linux apps on it that control all its keys and switches. If I could embed just the tone generator electromechanics and chip in a cabinet, with effects loops and MIDI, I could have all the various hardhack Hammond mods available, and new ones, in a small cabinet. I could even have room for several Hammonds at once, making chorus/phaser/vibrato and really complex combos. Keyed either from softwa…

        • by dgatwood ( 11270 )

          Well, my insane idea involves ripping out the actual keyboard from a pipe organ, replacing it with a MIDI keyboard of the same size, and using it to drive a computer that translates the MIDI signals into commands for a series of controllers (one per rank). Because each keypress would send key velocity information, the harder you play, the louder the sound would be (like a piano). This could, of course, be enabled or disabled at will, since it's all done in software.

          Each controller would consist of a contr…

  • The right way to do this is to license the Altera IP and integrate it closely with the CPU. Then the CPU could use it in normal operation, for floating point for example. You would have various programs, and every time you tried to access one that's not in the FPGA, an interrupt would be generated.
    Almost like in the good old days of WCS.

  • by Jan ( 7105 ) on Tuesday November 23, 2010 @04:13AM (#34315174)

    Before you all speculate wildly, try reviewing the actual product brief. There you will see that this is an MCM with an Atom E6xx SoC die and an Altera FPGA die, interconnected by one or two PCIe x1 links. It comes in an amazing 1466-ball grid array package.

    It's not clear to me what this level of packaging and integration achieves compared to mounting a (not integrated) E6xx BGA and a separate Altera or Xilinx FPGA BGA onto the main PCB, interconnected by PCIe x1 or perhaps even x4. Then you would get a broader choice of FPGAs -- and perhaps a simpler PCB escape for the two packages compared to one 1466 ball beast.

    The advantages of this MCM as stated in the brief include:
    * reduced board footprint
    * lower component count
    * simplified inventory control / manufacturing
    * single-vendor support

    True, but forgive me if I'm not over the moon. The dream of FPGA fabric integrated into a heterogeneous SoC (same die) includes a very low-latency and possibly cache-coherent interconnect between the processor(s) and the FPGA. But here the FPGA is on the other side of a narrow PCIe link. It can't share the Atom SoC's memory hierarchy / DRAM channels very effectively. It is probably a very long-latency round trip from x86 software control / registers and L1$ data, to some registers or function units in the FPGA, and back to the x86. So I think of this as more of a super-flexible Atom SoC platform than a dream reconfigurable computing platform.

    It's a nice step but I look forward to so much more. (1996): "... So as long as FPGAs are attached on relatively glacially slow I/O buses -- including 32-bit 33 MHz PCI -- it seems unlikely they will be of much use in general purpose PC processor acceleration. ..."
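    The "narrow PCIe link" complaint above is easy to put rough numbers on. The figures here are order-of-magnitude assumptions, not measurements: a PCIe 1.x x1 lane carries about 250 MB/s each way after 8b/10b encoding overhead, and a software-visible round trip across PCIe is assumed to be on the order of a microsecond.

```python
# Rough, assumed numbers behind the latency argument above.
link_bw = 250e6       # bytes/s, PCIe 1.x x1 effective bandwidth (assumed)
round_trip = 1e-6     # seconds, assumed CPU<->FPGA round-trip latency

# Time to ship a 4 KiB buffer to the FPGA and get the result back:
buf = 4096
transfer_s = 2 * buf / link_bw + round_trip
print(round(transfer_s * 1e6, 1))   # ~33.8 microseconds
```

    Under these assumptions the FPGA must save well over 30 microseconds of CPU work per 4 KiB buffer just to break even, which supports reading this part as a flexible SoC platform rather than a tightly coupled reconfigurable computer.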

    • by Lennie ( 16154 )

      I was immediately wondering whether this would mean someone could make a cheaper NetFPGA/LibreRouter (a line-rate-forwarding open-source router platform that uses an FPGA instead of the ASICs that normal routers from Cisco, Juniper, etc. use for hardware routing/switching).

      It is mostly used for academic purposes, but if the FPGA does not have direct access to RAM and cannot do direct I/O, it's probably not useful.

    • Xilinx EPP puts an ARM Cortex-A9 on the die with a large Xilinx FPGA. Is that the dream of integrated FPGA fabric come true?

    • And it's only 60K logic elements, making it clear that this device is not intended for number crunching.

  • I love programming and wiring up some microcontrollers as much as the next geek, but at what point does a chip become too complex for realistic home use?

    I don't need hundreds of GPIO pins, and I don't think I can solder finely enough, or design a home-made PCB with enough detail, to accommodate a processor with this many pins and features.

    I am pretty happy to see FPGAs making it into commercial projects - they're just so useful.

    "You want your processor to use this specific logic pipeline? There's a chip

    • This part will be sold in PCs with large parallel connectors for interfacing multiple or complex devices to the FPGA pins, along with the rest of the HW that supports a smart embedded device. You won't be soldering directly to the part.

      But it's not really designed for "home use", except for embedded home automation developed by serious engineers. Which could be a DIY, but mostly won't be.

  • There are loads of FPGAs on the market with integrated PowerPC cores. There are probably FPGAs on the market with integrated ARM cores (ah yes, a post already links to one such creation). This is a dual-die package with a 60k-gate FPGA. It's a nice option on the market, but it's hardly unique. The cost will be a major issue as well, although so far the prices look reasonable. But you can't put much into 60,000 gates (although maybe they're counted differently from Xilinx or Spartan gates), certainly not a Min…

    • I've got an industrial control project that's been designed so far around an embedded Atom PC and a custom PCB for sensor/actuators. We might be able to port the custom PCB to this FPGA, leaving only voltage transformers/transistors/relays outboard. Which could save us a lot of money in production, and even more in maintenance/upgrades. There are existing PPC and ARM devices, but our existing SW is for x86. Which is why an Atom/FPGA part could be a great savings for us. And of course ours is a very typical…

  • If it's fast enough you could use it as an SSL front end to non-SSL web servers.

  • Can I run Linux and Eclipse on one of these new CPUs locally, and use a good Eclipse module to port Linux kernel functions (like IO logic) from iterated procedures to the FPGA, then test them? Which Eclipse modules would support that development?

  • Please please PLEASE leave this open for hobbyists to download their own FPGA code. I could REALLY use a dedicated FFT or DSP for math crunching!
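    The "dedicated FFT" wished for above has a compact software baseline that any hardware version would be checked against: a minimal recursive radix-2 Cooley-Tukey FFT. An FPGA implementation would pipeline these butterfly operations in parallel hardware; this pure-Python sketch just shows the math being accelerated.

```python
# Minimal radix-2 Cooley-Tukey FFT: the software reference an FPGA
# accelerator would be validated against.

import cmath

def fft(x):
    """Recursive radix-2 FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])          # FFT of even-indexed samples
    odd = fft(x[1::2])           # FFT of odd-indexed samples
    out = [0] * n
    for k in range(n // 2):
        # Butterfly: combine with the twiddle factor e^(-2*pi*i*k/n).
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out

spectrum = fft([1, 1, 1, 1, 0, 0, 0, 0])   # a square pulse
print(round(abs(spectrum[0]), 3))           # 4.0: the DC bin is the sum
```

    Each level of recursion is a rank of butterflies; in an FPGA all butterflies in a rank run at once, which is exactly the math-crunching win the comment is asking for.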
