Theo de Raadt Details Intel Core 2 Bugs

Eukariote writes "Recently, Intel patched bugs in its Core 2 processors. Details were scarce; soothing words were spoken to the effect that a BIOS update is all that is required. OpenBSD founder Theo de Raadt has now provided more details and analysis on outstanding, fixed, and non-fixable Core 2 bugs. Some choice quotes: 'Some of these bugs... will *ASSUREDLY* be exploitable from userland code... Some of these are things that cannot be fixed in running code, and some are things that every operating system will do until about mid-2008.'"
  • Re:Yay AMD (Score:5, Informative)

    by BosstonesOwn ( 794949 ) on Thursday June 28, 2007 @09:02AM (#19674703)
    I don't think that is a good thing either. It looks like AMD may be doing this as well.

    (While here, I would like to say that AMD is becoming less helpful day
    by day towards open source operating systems too, perhaps because
    their serious errata lists are growing rapidly too).
  • by Aladrin ( 926209 ) on Thursday June 28, 2007 @09:07AM (#19674757)
    Sure:

    Some of the bugs are so dangerous that it doesn't matter WHAT operating system you're running, code could be written that could attack the entire system. It would still be OS-specific code, but since the exploit is in the hardware, it's a LOT harder to prevent the attack, if it's even possible.

    Some of the bugs are unfixable, as well. (I assume they mean without physically replacing the chip with a 'fixed' one that doesn't exist yet.)
  • by ardor ( 673957 ) on Thursday June 28, 2007 @09:09AM (#19674781)
    Actually we are talking about VHDL. The "million transistors" argument is just as appropriate as saying "software is so large, it has so many ones and zeros". Development does not happen at this low a level.
  • Re:Good stuff. (Score:5, Informative)

    by Lisandro ( 799651 ) on Thursday June 28, 2007 @09:22AM (#19674915)
    Same here. The guy might seem like a bit of an asshole sometimes, but he surely knows what he's talking about. Some of the things he points out are plain unbelievable:

    Basically the MMU simply does not operate as specified/implemented in previous generations of x86 hardware. It is not just buggy, but Intel has gone further and defined "new ways to handle page tables" (see page 58).

    Some of these bugs are along the lines of "buffer overflow"; where a write-protect or non-execute bit for a page table entry is ignored. Others are floating point instruction non-coherencies, or memory corruptions -- outside of the range of permitted writing for the process -- running common instruction sequences.


    It will be interesting to see what Intel has to say about this.
  • Re:intel issues (Score:2, Informative)

    by artjunk ( 1088603 ) on Thursday June 28, 2007 @09:23AM (#19674933)
    From what I gather from the article, it's irrelevant what OS you use - some of these issues are at a lower level (under/before the OS). And, since all newer Macs use Intel Core chips, I think this could be an issue for them as well.
  • Re:Yay AMD (Score:4, Informative)

    by vadim_t ( 324782 ) on Thursday June 28, 2007 @09:26AM (#19674963) Homepage
    Well, there's VIA as well, although their stuff left a lot to be desired the last time I checked it out. Their mini-ITX stuff had potential -- small, low power usage, REALLY good crypto and video acceleration to compensate for the slow CPU. Unfortunately when I tried a Nehemiah board, it was very unstable.
  • Re:Time for RISC? (Score:2, Informative)

    by Slashcrap ( 869349 ) on Thursday June 28, 2007 @09:32AM (#19675055)
    I wonder if this means that we should toss out that x86 layer and deal just with the high-performance, straightforward RISC core.

    Did you know that one of the main reasons x86 outperforms any similarly specified RISC chip is that those horribly inelegant, variable length x86 instructions allow for considerably higher code density than RISC?

    Elegant does not necessarily equal faster or better, no matter how much you might want it to.
  • Re:Patches (Score:4, Informative)

    by Jeff DeMaagd ( 2015 ) on Thursday June 28, 2007 @09:36AM (#19675091) Homepage Journal
    Now, the only thing left to do, is someone tell Intel that they're selling hardware.

    Hardware has had built-in firmware/software for as long as I can remember. BIOS is software. Microcode for even consumer CPUs has been around for as long as I can remember; the Pentium II had it. Apparently, the 8086 had microcode-based instructions.
  • by 0123456 ( 636235 ) on Thursday June 28, 2007 @09:36AM (#19675093)
    "This is going to be a big deal for shared hosting environments for example."

    True, but that depends on how easily they could be exploited in the real world, rather than in the theoretical world. From what I remember, one was about incorrect behaviour when your code runs off the end of a 4GB boundary; certainly that might be exploitable, but not on any system which can't run >4GB of code.

    I skimmed through the bugs which the author said really scared him and didn't see anything that looked easy to exploit from a user program. Yes, if you want total security on your system then they'd be scary, but if it's almost impossible to exploit then it really doesn't matter to anyone much outside the most secret parts of the government (and, even then, bribing people would probably be an easier way of stealing secrets).

    "I wouldn't be surprised if businesses like that started switching to AMD hardware."

    You're assuming that AMD chips are any better.
  • Re:Theo says... (Score:1, Informative)

    by Anonymous Coward on Thursday June 28, 2007 @09:36AM (#19675111)
    Every CPU released for probably as long as you've known about computers has had an errata sheet on it. If you want to stop buying CPUs with errata... well... your computing days are over.
  • Re:Patches (Score:3, Informative)

    by suv4x4 ( 956391 ) on Thursday June 28, 2007 @09:42AM (#19675193)
    Hardware has had built-in firmware/software for as long as I can remember. BIOS is software. Microcode for even consumer CPUs has been around for as long as I can remember; the Pentium II had it. Apparently, the 8086 had microcode-based instructions.

    Don't confuse microcode with firmware. Two different things. Microcode isn't intrinsically updatable, and may be placed in a read-only memory block.
  • by ioshhdflwuegfh ( 1067182 ) on Thursday June 28, 2007 @09:43AM (#19675205)

    Uh, the slashdot summary is pretty lousy. After RTFA I am still a bit confused, can someone at slashdot please provide an "english" translation of the problems and how dangerous they are to normal users?
    The second link [geek.com] in the article, containing brief descriptions of bugs, might be useful, although perhaps still quite technical. One bug that is perhaps easy to communicate to the "normal user" is AE30, where the bug might cause software running on a Core Duo to reload data from the wrong memory location when coming out of hibernation. It's labeled as "potentially catastrophic", and I imagine that after the wrong reload more or less anything can happen: a program crashing, the OS crashing, or, who knows, maybe even exploits programmed to use this bug...
  • by TheGratefulNet ( 143330 ) on Thursday June 28, 2007 @09:44AM (#19675233)
    AMD64 doesn't like FreeBSD 6.2 at all

    % uname -a
    FreeBSD myhost.grateful.net 6.2-STABLE FreeBSD 6.2-STABLE #0: Mon May 28 09:52:28 PDT 2007 me@myhost.grateful.net:/usr/obj/usr/src/sys/AMD64 i386

    granted, I'm using 32bit mode - but I've been running 6.2 for as long as it's been out on my 'always on' freebsd box. what issues are you seeing? this is my production box - but I don't see any problems with bsd. in fact, I also have 6.2 running with an old amd64 3000+ that was a mobile chip and had to have cpufreq enabled just to move it off its default 800mhz and up to the 2.mumble ghz that it's supposed to clock at. works fine.

    I have seen some hardware devices not behave well but often it's not a well designed piece of hardware or it's just not meant for server style loads (cheap consumer onboard sata sometimes times out and usb2.0 always times out if you give it enough load).

    I can't speak to amd64 USING 64bit mode, but 32bit mode works as well as (or better than) linux on headless style computing.
  • by davebert ( 98040 ) on Thursday June 28, 2007 @09:44AM (#19675235)
    Link [realworldtech.com]
  • Some of the bugs are so dangerous that it doesn't matter WHAT operating system you're running, code could be written that could attack the entire system. It would still be OS-specific code, but since the exploit is in the hardware, it's a LOT harder to prevent the attack, if it's even possible.

    Here's a little more detail, based on my (very incomplete) understanding of the issues:

    It appears that Intel has made changes to the way the memory management unit in the processor works, plus there are also some bugs that affect memory management. So what does that mean?

    • Theo mentions changes in how TLB flushes must be handled. Translation Lookaside Buffers (TLBs) are tables where operating systems cache information used to quickly determine what physical memory page corresponds to a given virtual memory page. Each running process has its own address space (meaning the data at address, say, 1000, is different for each process) and operating systems have to be able to quickly translate these virtual addresses to addresses within the physically-available RAM. The authoritative data on the mapping is in a set of data structures called the "page table", but the processors provide a mechanism for creating and managing TLBs which act as a high-performance cache of part of the page table data. Failing to properly flush the TLBs during a context switch (putting one process to sleep and activating another) might result in the new process' virtual memory mapping being done incorrectly. From a security perspective, this could give one process access to memory owned by another.
    • Another issue mentioned is the possibility that No-Execute bits may be ignored. The OS can set the No-Execute (NX) bit on a page of memory that it knows to be pure data that should not be executed. The processor will refuse to execute code from any memory page with NX set. This makes most buffer overflow attacks impossible, because the normal buffer overflow attack involves getting a bit of malicious code shoved into a stack-based buffer as well as overflowing the buffer to overwrite a return address so that the CPU will jump to and execute the malicious code. Obviously, if the processor sometimes ignores NX bits, the buffer overflow attacks become possible again.
    • Theo also mentions possibly-ignored Write-Protect (WP) bits. The OS can mark memory pages as read-only. This is used for all sorts of things related to security. One of the biggest is preventing processes from writing to the memory in which shared libraries are loaded. If my process could overwrite, say, the C library code implementing "printf", other processes that call this function would execute my code. Some of them will be running as root, so I can execute code with root permissions. Modern operating systems do lots of data-sharing between processes, some of it completely non-writable, other parts of it "copy on write". Copy-on-write pages are implemented by setting the WP bit and then catching the page fault generated by the CPU when a process tries to write the page. The fault handler quickly copies the page in question, allows the write to hit the copy, and reswizzles the page table so the virtual page of the writing process points to the new copy. WP bits being ignored would also break this, so lots of cases where data is "opportunistically" shared would become really and truly shared, allowing one process to corrupt data used by another. (A userland sketch of what the WP and NX bits are supposed to guarantee follows below.)

    There are other issues as well... but these are a good sample, and should give an idea of what kind of bad stuff these CPU bugs/changes can make possible.
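
    To make the NX and WP bullets concrete, here is a minimal userland sketch (assuming a POSIX system with mmap/mprotect on an NX-capable CPU; it demonstrates what the bits are supposed to enforce, not the errata themselves):

    /* wp_nx_demo.c -- check that WP and NX are honored for one page */
    #include <setjmp.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static sigjmp_buf env;

    static void on_segv(int sig) {
        (void)sig;
        siglongjmp(env, 1); /* unwind out of the faulting access */
    }

    int main(void) {
        long pagesz = sysconf(_SC_PAGESIZE);
        unsigned char *page = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
                                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED) return 1;
        signal(SIGSEGV, on_segv);

        page[0] = 0xC3;                    /* x86 RET, a stand-in for injected code */
        mprotect(page, pagesz, PROT_READ); /* now read-only and non-executable */

        if (sigsetjmp(env, 1) == 0) {
            page[0] = 0;                   /* must fault if WP is honored */
            puts("WP ignored: wrote to a read-only page");
        } else {
            puts("ok: write to read-only page faulted");
        }

        if (sigsetjmp(env, 1) == 0) {
            ((void (*)(void))page)();      /* must fault if NX is honored */
            puts("NX ignored: executed a non-executable page");
        } else {
            puts("ok: exec of non-executable page faulted");
        }
        return 0;
    }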

  • Re:Yay AMD (Score:5, Informative)

    by TheRaven64 ( 641858 ) on Thursday June 28, 2007 @09:51AM (#19675311) Journal
    SPARC is doing very well for certain categories of workload, although mainly web-app types at the moment. Most computers sold these days have some form of ARM chip[1], which is a nice, low-power architecture, but lacks floating point. This isn't a huge problem, since a lot of ARM designs (particularly those from TI) have a DSP on die which can seriously out-perform a general purpose CPU for a lot of FPU-heavy workloads.

    For general-purpose usage, the most interesting design I've seen recently is the PWRficient from P.A. Semi. It's a nice dual-core 64-bit PowerPC, with low power usage and performance similar to IBM's PowerPC 970 series. It has a lot of nice stuff on-die (crypto, a really shiny DMA architecture, etc).

    For a complete round-up of current alternatives, take a look at this article [informit.com] and the next two in the series.


    [1] They are generally marketed as 'cell phones' or similar, rather than 'computers'.

  • by TheRaven64 ( 641858 ) on Thursday June 28, 2007 @10:08AM (#19675507) Journal
    I don't know why Theo posted that link, because it is about the Core, not the Core 2. They are two completely different micro-architectures. The Core was a slightly tweaked Pentium M (which is basically a P6 with extra vector instructions and the NetBurst branch predictor), while the Core 2 is a completely new micro-architecture. If you compare the errata in the two links, you will see that they are quite different.
  • Re:Patches (Score:-1, Informative)

    by stratjakt ( 596332 ) on Thursday June 28, 2007 @10:11AM (#19675553) Journal
    Microcode is updatable on Intel processors.

    People have fixed/coded around bugs before. This is all a tempest in a teapot.
  • by Anonymous Coward on Thursday June 28, 2007 @10:13AM (#19675571)
    Scariest post on that thread:

    http://marc.info/?l=openbsd-misc&m=118302016430106&w=2 [marc.info]

    AMT is a technology intended to facilitate surveillance, maintenance
    and control of computers remotely.

    * Monitor and control (filter) the network traffic - before/under the
    running operating system

    * sending out patches to computers - even if they are turned off.

    * Control, upgrade, change, add and remove software
  • Re:Patches (Score:2, Informative)

    by Hal_Porter ( 817932 ) on Thursday June 28, 2007 @10:16AM (#19675609)
    Modern processors have some RAM for microcode updates

    http://www.enlight.ru/docs/cpu/INFO/mcupdate.htm [enlight.ru]

    I think with this, and with clever hacks in the OS [x86.org], you can fix most bugs. So probably there's a lot of person-to-person communication between processor manufacturers, BIOS writers and OS vendors, and the net result is that it all seems like it works. Of course, if you're an obnoxious vendor of a not too commercially important OS, you're probably excluded from this, which is why Theo is upset.
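
    As an aside, the microcode revision the kernel actually loaded is visible from userland; a minimal sketch, assuming an x86 Linux whose /proc/cpuinfo exposes a "microcode" field:

    /* ucode_rev.c -- print the reported microcode revision per logical CPU */
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        FILE *f = fopen("/proc/cpuinfo", "r");
        char line[256];
        if (!f) { perror("/proc/cpuinfo"); return 1; }
        while (fgets(line, sizeof line, f))
            if (strncmp(line, "microcode", 9) == 0)
                fputs(line, stdout); /* e.g. "microcode : 0xba" */
        fclose(f);
        return 0;
    }
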
  • by mwvdlee ( 775178 ) on Thursday June 28, 2007 @10:20AM (#19675639) Homepage
    What worries me most is this: with software, you can supply a fixed version to customers for the cost of a CD and a postage stamp (or less); with hardware, it's slightly more expensive and thus slightly less likely to ever happen.
  • Re:Quantum effects? (Score:2, Informative)

    by ioshhdflwuegfh ( 1067182 ) on Thursday June 28, 2007 @10:39AM (#19675861)
    Well, there are also many other possible classical reasons for the nondeterministic results of this bug, for instance asynchronicity issues related to the inner design of the part of the chip that deals with the non-canonical addresses, its connections to other parts of the chip, etc. --- the chip itself is sufficiently complex that it is hard to tell what's up without looking into details of the design. I'd guess that this nondeterminism is very, very unlikely to be due to quantum effects.

    Given that definition [of nanoparticle] every transistor's source, drain and gate are nanoparticles. And we expect them to behave classically why?
    Good question indeed. It is similar to asking, given that every atom is a quantum object, why should the wire made of these atoms behave according to the laws of classical physics, like Ohm's law, etc.? The physics answer is quite tricky, and it turns on the question of how the famous reductionism breaks down, or not, in going from the classical to the quantum world, and vice versa.
    In the case of the chip, the high operating temperature helps a great deal in suppressing quantum effects. The length scale of the little wires inside the chip is very important, so, simply stated, one must counter-balance the size of the wires with the operating temperature, and, in the end, you get a chip that behaves completely classically, designed using the standard laws of digital/analog electronics. Going to much finer wires, say 10-fold thinner, would reintroduce quantum effects in a big way, to the point of breaking the classical laws of conduction in these wires; transistors would have to be redesigned/refabricated using a completely different technology (that, AFAIK, does not exist yet), etc...
  • by Bri3D ( 584578 ) on Thursday June 28, 2007 @11:16AM (#19676299) Journal
    Another scary bug (perhaps the scariest, since it appears to be the one that most reliably/repeatably occurs) is AI88: Microcode Updates Performed During VMX Non-root Operation Could Result in Unexpected Behavior.
    From what the erratum says, unless the host software has specifically disallowed access to the MSR in question, a VMX guest/non-root system could reload the CPU microcode.
    This leads to a whole universe of complicated data theft/code execution/etc. exploits that will probably never be created due to their complexity. However, it also leads to a very, very, very simple DoS/crash exploit (load some bad microcode, crash the CPU).
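    If the host wants to defend against this, the obvious move is to intercept and refuse guest writes to the microcode-update trigger MSR; a hedged sketch (the MSR number is from Intel's manuals, but the hypervisor hook it would plug into is hypothetical):

    #define MSR_IA32_BIOS_UPDT_TRIG 0x79u /* microcode update trigger MSR */

    /* Called from a (hypothetical) WRMSR exit handler: nonzero = block it. */
    static int vmm_block_wrmsr(unsigned int msr) {
        return msr == MSR_IA32_BIOS_UPDT_TRIG; /* guests must not load ucode */
    }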

  • Re:Time for RISC? (Score:4, Informative)

    by edwdig ( 47888 ) on Thursday June 28, 2007 @11:46AM (#19676729)
    I think the latest Power series will give any Intel CPU a run for its money, as will the latest SPARC.

    Yes, they will. But those chips are designed with a target price of thousands of dollars and without anywhere near as much concern about heat.

    Power has a 128 KB L1 cache (64 KB on Core 2), a 4 MB L2 cache per core (4 MB shared on Core 2), and a 32 MB L3 cache (none on Core 2). If you were willing to pay for that much cache, x86 would be a lot faster too.

    Oh, don't forget that Power chips run really, really hot - hotter than Pentium 4s. The market has made it clear that lower power usage / heat generation is a priority now.
  • by Durzel ( 137902 ) on Thursday June 28, 2007 @11:49AM (#19676781) Homepage
    Theo also seems quite sensationalist at first glance (this is the first of his articles I've read). Emotive statements like "These processors are buggy as hell" and conjecture like "We bet there are many more errata not yet announced" don't really lend credence to his arguments.

    He may be entirely right, and his experience with CPUs, BIOS vendors, Intel, AMD, etc. may mean what he is saying is accurate - but the tone doesn't really sound very professional.
  • by m.dillon ( 147925 ) on Thursday June 28, 2007 @01:21PM (#19678123) Homepage
    OK, let's look at some of these.

    AI65 - Thermal interrupt does not occur if DTS reaches an invalid temperature. What the hell is an invalid temperature? A disconnected sensor or something? It doesn't sound like something a userland thermal-generating loop can exploit, but the erratum is not detailed enough to know for sure.

    AI79 - REP/STO in a specific situation may cause the processor to hang. BIOS patchable. The erratum mentions an uncacheable memory store. If this is a prerequisite then only user programs with access to /dev/io or memory-mapped bus space can exploit it. So e.g. something like XOrg, but not the typical user program. Worst case seems to be a system freeze. Still, this is something to be concerned about.

    AI43 - Concurrent MP writes to a non-dirty page may result in unpredictable behavior. This one is extremely serious. It affects any threaded program and possibly even programs which are not threaded. This would cause me to not purchase the CPU. It says that a BIOS workaround is possible (aka microcode update).

    AI39 - Cache access request from one core hitting a modified line in the L1 cache of another core may cause unpredictable system behavior. What the hell? Are they out of their minds? This is a big-time show stopper. It says it can be fixed with the BIOS (aka microcode update). I sure hope so.

    AI90 - Page access bit may be set prior to signaling a code segment limit fault. This one is pretty serious. It cannot occur on most operating systems because the code segment is set to be unlimited and access is governed solely by the page tables. In 64-bit mode emulating 32-bit operation the problem might occur if a bit of code wraps the segment. There are possibly issues in other emulation modes, such as VM86 mode. Setting the page accessed bit will not make a page accessible that was previously inaccessible, but it will result in unexpected modifications to the page table page, and numerous operating systems may free such pages to the page-zeroed page list under the assumption that they cleaned the page out, when in fact there may be a page table entry with the access bit set (meaning the page wasn't completely zeroed when freed). That could cause problems.
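
    A paranoid OS could defend against exactly that by double-checking its zeroed-page list before handing pages out; a trivial sketch (PAGE_SIZE and the freelist hookup are assumptions):

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define PAGE_SIZE 4096

    /* Return true if a page queued as "already zeroed" really is all zero. */
    static bool page_is_zero(const void *page) {
        static const uint8_t zero[PAGE_SIZE];      /* all-zero reference page */
        return memcmp(page, zero, PAGE_SIZE) == 0; /* false -> rezero it */
    }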

    AI99 - Updating code page directory attributes without TLB invalidation may result in improper handling of a page fault exception. This one doesn't look too serious; it just means the wrong exception will be taken first, meaning that the OS will probably seg-fault the program. If the OS corrects the issue and retries, the correct exception will be taken on retry. All BSDs that I know of handle page fault exceptions generically and will not be affected. Of greater concern is: what sort of modifications to a page directory entry now require TLB invalidations? On FreeBSD and DragonFly, and I assume most BSDs and probably Linux too, page directory entries usually transition between only two states and a TLB invalidation is made when a page directory entry is invalidated, so they wouldn't be affected by this bug.
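
    For reference, the TLB invalidation being talked about is this sort of thing (x86, GCC inline assembly, ring 0 only -- shown for illustration, it cannot run from userland):

    /* Invalidate the TLB entry for a single virtual address. */
    static inline void tlb_invlpg(void *va) {
        __asm__ __volatile__("invlpg (%0)" : : "r"(va) : "memory");
    }

    /* Flush all non-global TLB entries by reloading CR3. */
    static inline void tlb_flush(void) {
        unsigned long cr3;
        __asm__ __volatile__("mov %%cr3, %0" : "=r"(cr3));
        __asm__ __volatile__("mov %0, %%cr3" : : "r"(cr3) : "memory");
    }
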
  • by Animats ( 122034 ) on Thursday June 28, 2007 @01:24PM (#19678163) Homepage

    That's actually a bad article about a real issue. A better article is here. [monstersandcritics.com]

    Intel's AMT technology puts special purpose hardware in the network controller which recognizes UDP and TCP packets on ports 16992, 16993, 16994, and 16995. This is completely independent of the operating system. Various system administration functions can be performed. Anybody can inventory the machine and read its ID. Other functions, like power off/on, reboot, user disable (disables keyboard/mouse/on-off switch) and remote disk I/O require a password or crypto key.

    This has been around for a while; the previous version was called IPMI, Intelligent Platform Management Interface. It talked UDP only. AMT also talks TCP and HTTP; there's a whole protocol stack in the network controller now just for this. This was originally a server farm management system, but now it's on desktops, too. If HTTP mode is enabled, you can control the machine from a web browser via port 16992.
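
    If you want to check a machine from the network side, a plain TCP connect test against the AMT web port shows whether anything is answering; a hedged sketch (the default address is a placeholder):

    /* amt_probe.c -- does anything answer on AMT's HTTP port 16992? */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(int argc, char **argv) {
        const char *ip = (argc > 1) ? argv[1] : "192.0.2.1"; /* placeholder */
        struct sockaddr_in sa;
        int s = socket(AF_INET, SOCK_STREAM, 0);
        if (s < 0) { perror("socket"); return 1; }
        memset(&sa, 0, sizeof sa);
        sa.sin_family = AF_INET;
        sa.sin_port = htons(16992);      /* AMT HTTP, per the list above */
        if (inet_pton(AF_INET, ip, &sa.sin_addr) != 1) return 1;
        if (connect(s, (struct sockaddr *)&sa, sizeof sa) == 0)
            printf("%s: port 16992 open -- AMT web UI may be enabled\n", ip);
        else
            printf("%s: port 16992 closed or filtered\n", ip);
        close(s);
        return 0;
    }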

    It even works while the computer is "turned off"; it's part of "wake on LAN" functionality.

    Supposedly, there is no valid default password or key, and the feature is supposedly off by default. But if any software ever enables this, you're 0wned.

    The computer manufacturer can preload management keys. "An OEM may supply platforms with a PID-PPS pair already written to the Intel AMT Flash memory.", according to Intel. If a vendor does that, they 0wn your computer. Something to watch for. AMT can also be enabled from the Intel Management BIOS extension screen. (Password: "admin", it says in the manual.)

    The normal way AMT keys get loaded in a corporate environment is that you plug in a USB key with a special file ("setup.bin") and power cycle the machine. The machine then tries to connect to the mothership on port 9971, doing a DNS lookup for "ProvisionServer" if no IP address was specified.

    If you don't want AMT enabled, here's how to disable it: [intel.com], "Intel AMT is returned to Factory Mode by selecting the Unprovision option on the BIOS Extension menu or by disabling Intel AMT from the BIOS extension Manageability Feature Selection."

    The whole AMT system is reasonably designed; it even has Kerberos authentication. But it's so powerful and so hidden that if it's ever enabled, it's worse than a root kit. Even reinstalling the OS won't help.

    Here's Intel's technical info about AMT. [intel.com]

  • by m.dillon ( 147925 ) on Thursday June 28, 2007 @01:58PM (#19678635) Homepage
    Now the core duo/solo errata.

    AE1 - CPU to memory copy with FST with numeric and null segment exceptions may cause GP faults to be missed and FP linear address mismatch. In other words, a segmentation violation will be missed and a write will be allowed to proceed. This will not affect OSs using page tables for protection, which is all of them. Sounds bad but doesn't sound like it will affect existing OSs.

    AE2 - Code segment violation may occur if a code stream wraps around a segment. No program does this on purpose, and OSs will just seg-fault the program if it does. The Intel erratum says it could be exploited by a virus but I don't see how from its current description. Maybe there is something they aren't telling us.

    AE3 - POPF/POPFD that sets the trap flag (aka when single-stepping a program) may cause unpredictable behavior. Holy shit. This one is serious.

    AE4 - REP MOVS in fast string mode continues in that mode when crossing into a page with a different memory type. This means that when crossing over from a cacheable page to an uncacheable page, the I/Os remain cacheable. And vice versa. This will never happen on purpose, so the question is whether it can be exploited in some way, and the answer to that is: not that I can see.

    AE5 - Memory aliasing with inconsistent dirty and access bits may cause a processor deadlock. This means a PTE with 'D'irty set but with 'A'ccess not set. FreeBSD and DragonFly always set the A bit when setting the D bit and will not be affected, but I don't know about other OSs. This is a very serious bug though.
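
    That workaround is easy to picture; a sketch using the standard x86 PTE bit positions (the helper name is made up):

    #include <stdint.h>

    #define PTE_A (1u << 5) /* Accessed */
    #define PTE_D (1u << 6) /* Dirty */

    /* Never produce a PTE with D set but A clear, sidestepping AE5. */
    static inline uint32_t pte_mark_dirty(uint32_t pte) {
        return pte | PTE_D | PTE_A;
    }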

    AE6 - VM bit will be cleared on a double fault exception. Double faults are usually fatal for the whole machine, so this shouldn't matter unless they can occur in an emulation mode (where the double fault is being emulated). Check your OS. FreeBSD and DragonFly do not try to resume after a double fault and do not take faults in VM mode, so they are not affected.

    AE7 - Incompatible write attributes in the page table versus the MTRR may consolidate to UC. Not a big deal; doesn't happen unless something has been misprogrammed.

    AE8 - FXSAVE after FNINIT without an intervening FP instruction may save uninitialized values for FDP and FDS. This isn't an issue unless the data being written represents a security leak of some sort, such as a portion of the state of another program's FP unit. This could be a security issue with regard to one program snooping another program's cryptography. Statistical snooping through this sort of mechanism has been shown to be effective in recent years.

    AE9 - LTR can result in a system hang. Well, BSDs don't really use LTR all that much and the conditions required just will not happen on BSD, or probably linux either. A breakpoint must be set on the cache line containing the descriptor data? Not from userland!

    AE10 - Invalid entries in the page directory pointer table register may cause a GP fault. Not an issue.

    AE11 - REP MOVS operation in fast string mode continues in that mode when crossing into a page with a different memory type. Not an issue.

    AE12 - FP inexact result exception flag may not be set if the #inexactresult occurs in any FPU instruction with certain instructions occurring afterwards. This is a very serious bug that only compilers can work around (and probably won't).

    AE13 - IFU/BSU deadlock may cause system hang. I've no idea what IFU and BSU are.

    AE14 - MOV with debug register causes a debug exception. Sounds like the worst that happens here is a program seg faults if this condition is hit while the program is being debugged.

    AE15 - INIT does not clear global entries in the TLB. Oh, joy. Intel says that BIOS writers would know of this erratum and code for it, but insofar as I know this could be an issue when starting up APs.

    AE16 - Use of memory aliasing with inconsistent memory type may cause system hang. It shouldn't be possible for this to happen with a modern OS. It means mapping the same physical page of memory with different memory contr
  • Re:Yay AMD (Score:1, Informative)

    by Anonymous Coward on Thursday June 28, 2007 @03:04PM (#19679519)
    Um, ARM cores do have hardware float: ARM9 and ARM11 have VFPs, and newer cores have float units too.
  • Re:Yay AMD (Score:2, Informative)

    by G Morgan ( 979144 ) on Thursday June 28, 2007 @04:33PM (#19680903)
    Theo doesn't actually make any concrete claims. He bets that out of 60 bugs, 2/3 will be exploitable. He doesn't produce a PoC or even a theory; he just takes a number and a percentage and combines them. Essentially he is guessing.

    Not knocking him in general but here he hasn't produced anything we didn't know already.
  • Re:Modern processors (Score:1, Informative)

    by Anonymous Coward on Thursday June 28, 2007 @06:29PM (#19682651)

    Modern "80x86" processors actually have a RISC core emulating 80x86 CISC instructions. That can't possibly be efficient: there are some occasions when you don't need every bit of an 80x86 instruction to happen (for instance, ADC sets the carry flag, but the next instruction may not care about the state of the carry flag).

    Those CPUs don't work quite the way you think they do.

    The internal 'instruction set' (really microcode) doesn't resemble any ordinary RISC instruction set; it's designed specifically to implement x86, not to operate like an end-user-visible instruction set would. Problems of the type you're talking about arise when translating between dissimilar ISAs with different condition code handling and the like. That's just not an issue here.

    Few x86 instructions translate to more than two or three microcode ops, and a large number translate to just one. The point of the translation is mostly to separate ALU instructions with memory operands into discrete loads, stores, and ALU ops. Instructions which don't touch memory are very unlikely to need more than one microcode op.
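
    As a toy illustration of that cracking (the structures are entirely made up; the real uop encoding is undocumented):

    enum uop { UOP_LOAD, UOP_ALU_ADD, UOP_STORE };

    /* add dword [ebx], eax -> load + add + store, three uops */
    static const enum uop add_mem_reg[] = { UOP_LOAD, UOP_ALU_ADD, UOP_STORE };

    /* add eax, ebx -> a single ALU uop, no memory access */
    static const enum uop add_reg_reg[] = { UOP_ALU_ADD };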

    Native code running directly on the underlying RISC core, if there was a way to do it, ought to be faster than emulated 80x86 code. A lot faster, if the compiler is good.

    The underlying core has no instruction format suitable for storage in external memory; uops are fully decoded control words, which are much larger than the original instructions. It would be terribly inefficient to expose this to user programs.

    Really, the only reason 80x86 (which really is a truly horrible design, mostly cruft and bodges upon bodges) is still popular at all, is to allow Microsoft to keep the Source Code of Windows secret. The BSDs and Linux don't have any such requirement -- the Source Code is readily available, and they can and do run on almost any processor architecture. AMD's 64-bit architecture is a little cleaner but still held back by the need to implement 32-bit instructions.

    1. x86 isn't nearly as dirty as you think it is. The real cruft was pre-386; if you lock a 386 or later x86 into 32-bit mode and ignore the other modes it's not that bad. The biggest remaining wart was the original x87 FPU instruction set, but that's now possible to ignore in favor of SSE.

    2. What the hell does Microsoft wanting to keep Windows closed source have to do with the popularity of x86? You're completely insane for suggesting a connection. There's nothing which would make Microsoft open-source Windows if it had to run on other CPUs. The proof is that Microsoft had (and in some cases shipped to end users) working ports of Windows NT for PowerPC, MIPS, Alpha, and probably others I'm forgetting. All closed source. All cancelled for reasons which had nothing to do with open/closed source.

    3. AMD64 isn't any cleaner than 32-bit x86. It doesn't change much, just adds support for 64-bit registers, 64-bit ALU ops, and 8 more registers. To do this they needed to use more opcodes, which meant chaining more prefix bytes. Thus you get fun things like register-to-register ALU instructions being longer if one of the operands is one of the new registers.

    Think what we could do with something like ARM, but built in straight 64-bit (i.e. ditch byte addressing and deal strictly in 64-bit words)

    Oh god.

    Never studied how much of a mess that made of early versions of the Alpha, have you?

    and going back to Furber and Wilson's original concepts which eschewed complications such as hardware multiply and divide precisely because software implementations can be quicker for shorter word lengths (multiplying two 8-bit values requires only 8 additions, but a 64-bit hardware multiplier will always do 64 anyway).

    This is stupid.

    Sorry there's no less harsh a way to put it. It's a bad, dumb idea. RISC ISAs which left out hardwa

  • Re:Time for RISC? (Score:3, Informative)

    by et764 ( 837202 ) on Thursday June 28, 2007 @06:57PM (#19682933)

    RISC binaries tend to be larger than CISC binaries. The reason is that complex instruction sets, like the x86 architecture's, were made complex to save memory. Most of the common instructions are represented in only one byte, while the rarer instructions can be much, much longer. RISC instruction sets, on the other hand, typically have a fixed instruction size for all instructions, and the average instruction length ends up being longer.

    While most people have plenty of hard drive space now, and RAM isn't as scarce as it used to be, CPU caches are still pretty small. Using a more compact instruction set can make the code caches more effective, which can dramatically improve performance.
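
    To put rough numbers on the density point, a small sketch; the x86 bytes are real 32-bit encodings, and the RISC side just assumes the common fixed 4-byte instruction width:

    #include <stdio.h>

    /* A short x86 sequence, hand-encoded (variable-length). */
    static const unsigned char x86_seq[] = {
        0x55,             /* push %ebp      (1 byte)  */
        0x89, 0xd8,       /* mov %ebx,%eax  (2 bytes) */
        0x83, 0xc0, 0x01, /* add $1,%eax    (3 bytes) */
        0xc3              /* ret            (1 byte)  */
    };

    int main(void) {
        printf("x86: %zu bytes; fixed-width RISC: %d bytes for 4 instructions\n",
               sizeof x86_seq, 4 * 4);
        return 0;
    }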

    I doubt this completely justifies choosing CISC over RISC, but it is at least one piece of information in favor of sticking with CISC.

  • Re:wrong (Score:3, Informative)

    by Sparohok ( 318277 ) on Friday June 29, 2007 @12:59PM (#19690677)
    Think what you want, but I described the way things actually happened, while you're describing a gloss of revisionist history. Itanium was intended to kill off all other high end RISC development programs, and it very nearly succeeded despite being an "unsuccessful" and desperately late product.

    SGI/MIPS canceled two high end CPUs, Beast and Capitan, specifically because of the threat of Itanium. I was there, I saw it happen.

    http://news.com.com/Silicon+Graphics+scraps+MIPS+plans/2100-1001_3-210024.html [com.com]

    Compaq killed Alpha before the HP merger, before Carly, with the intention of moving their high end business to IA-64:

    http://en.wikipedia.org/wiki/DEC_Alpha [wikipedia.org]

    Obviously, Itanium redirected HP's focus away from PA-RISC since it was a HP/Intel project.

    Itanium failed to completely derail SPARC, but it caused a great deal of controversy inside Sun about the future of the SPARC architecture and disrupted SPARC development for a year or two.

    http://news.com.com/2100-1001_3-237583.html [com.com]

    IBM's Power architecture was perhaps the least affected by Itanium. IBM was pretty skeptical about Itanium and kept the Power program very much alive. As a result, they are the only RISC family which still has a significant presence on the Top500 supercomputing list.

    http://www.top500.org/stats/list/29/procfam [top500.org]
    http://www.top500.org/stats/list/13/procfam [top500.org]

    I have no idea whether all these other CPU families would have been successful in the marketplace without Itanium. However, the fact is they were killed due to one of the most influential vaporware announcements in the history of the computer industry.
