Reverse Multithreading CPUs
microbee writes "The Register is reporting that AMD is researching a new CPU technology called 'reverse multithreading', which essentially does the opposite of hyperthreading in that it presents multiple cores to the OS as a single-core processor." From the article: "The technology is aimed at the next architecture after K8, according to a purported company mole cited by French-language site x86 Secret. It's well known that two CPUs - whether two separate processors or two cores on the same die - don't generate, clock for clock, double the performance of a single CPU. However, by making the CPU once again appear as a single logical processor, AMD is claimed to believe it may be able to double the single-chip performance with a two-core chip or provide quadruple the performance with a quad-core processor."
Isn't that just superscalar? (Score:5, Interesting)
Multiple cores presented as one sounds familiar. Last time I heard about that, it was just called "superscalar execution" [wikipedia.org]. As I understand it, multithreading and multicore were added because CPUs' instruction schedulers were having a hard time extracting parallelism from within a thread.
No, superscalar is different (Score:5, Interesting)
In this case, AMD appears to be trying to decouple the states enough that the out-of-order resolution doesn't require micromanaging all of the processes from a single control point.
Re:No, superscalar is different (Score:5, Insightful)
Where superscalar requires a good dispatcher to minimize branch prediction misses, AMD appears to be making decisions, not about dispatch, but about how to do locking of shared memory (think critical sections).
Critical section prediction might prove less expensive than branch prediction in practice even if they are similar in theory (http://www.cs.umd.edu/~pugh/java/memoryModel/Dou
It's not branch mis-prediction, it's the memory (Score:5, Informative)
The purpose of "good dispatching" (i.e. out-of-order execution) is to hide the latencies of misses to main memory (it takes between 200 and 400 cycles these days to get something from memory, assuming that the memory bus isn't saturated), by executing instructions that follow the miss but don't depend on it. Out-of-order execution has been around since the Pentium Pro, btw.
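The point about hiding a miss behind independent work can be sketched with a toy cycle count (illustrative numbers, not real hardware timings):

```python
# Toy model: a 300-cycle load miss followed by independent 1-cycle ops.
# In-order: everything waits behind the miss. Out-of-order: independent
# work overlaps with the miss. Numbers are made up for illustration.

MISS_LATENCY = 300      # cycles for a load that misses to main memory
INDEPENDENT_OPS = 50    # 1-cycle ops that do not depend on the load

# In-order core: the pipeline stalls until the load returns.
in_order_cycles = MISS_LATENCY + INDEPENDENT_OPS

# Idealized OoO core: independent ops retire while the miss is in flight.
out_of_order_cycles = max(MISS_LATENCY, INDEPENDENT_OPS)

print(in_order_cycles)      # 350
print(out_of_order_cycles)  # 300
```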
Re:Isn't that just superscalar? (Score:3, Informative)
That's not quite right, and I think there is a lot of misunderstanding going around. So let me tell you what I know about this technology. First of all, the entire idea of having two processors work on a single thread of a program isn't that far-fetched, and has been a topic of research for a long time. What most people don't understand is that, in general, it requires a massive revamp of the instruction set. What happens is you de
Itanium (Score:2)
Personally I'm still a big fan of this instruction set/system and feel it's a real shame that backwards compatibility/resistance to change has kept it out of the mainstream. I would dearly love the irony if AMD tried to introduce an Itanium-like processor now.
I suggest a compromise (Score:5, Funny)
Re:I suggest a compromise (Score:5, Funny)
Re:I suggest a compromise (Score:2, Funny)
Scheduling Threads (Score:5, Insightful)
Re:Scheduling Threads (Score:3, Insightful)
Re:Scheduling Threads (Score:5, Informative)
At least that's the idea. Whether or not it works is yet to be seen.
Re:Scheduling Threads (Score:2)
The problem with this reasoning is that all contemporary OSes have been designed with multiprocessor machines in mind and are thus not only heavily multithreaded, but also have schedulers designed to detect, and take maximum advantage of, multiple CPUs.
I'm highly sceptical that any CPU can do a better job than t
Re:Scheduling Threads (Score:4, Informative)
A kernel intended to run on a single-CPU machine can be made to run faster, partly due to less need for locks. OpenBSD offers two kernels for the archs that support multiple CPUs: a single-CPU kernel and a multi-CPU kernel. The single-CPU kernel is faster.
Re:Scheduling Threads (Score:4, Informative)
OpenBSD's SMP support is not particularly good, I don't think it's a good example to use for performance comparison purposes.
Re:Scheduling Threads (Score:2)
The issue was that Debian was (probably still is) considering not shipping any UP kernels, since it's kind of a pain to maintain a UP and SMP flavor for each kernel configuration. It turns out the performance hit is still big enough that, exce
Re:Scheduling Threads (Score:4, Insightful)
Why do you think they included 2 different kernels, and how do you expect a kernel that has been optimised for parallelisation to run as well on a single processor? Seems rather trivial to me..
Re:Scheduling Threads (Score:2)
Re:Scheduling Threads (Score:3)
Consider a write operation. With the OpenBSD kernel, you make the system call, lock the kernel, run to completion, unlock the ker
Re:Scheduling Threads (Score:2)
-matthew
Re:Scheduling Threads (Score:2)
-matthew
Re:Scheduling Threads (Score:2)
I thought OSes were only so good at multiprocessor scheduling because things can only be done in parallel to a certain level of granularity -- data dependencies, data locking, and other problems cause stalls in how well multithreading can work.
I guess what we're all trying to figure out: how does 'figuring out wh
Re:Scheduling Threads (Score:2)
And hardware can indeed be faster than the equivalent software. Witness the rise of the GPU. A dedicated hardware item like a GP
Re:Scheduling Threads (Score:2)
Of course you could choose not to export that info and let the CPU do it transparently, but does that have any sense at all? Now that cores are becoming so important you may end having more than one CPU with different number of cores each one, and t
Re:Scheduling Threads (Score:2)
It won't and that doesn't matter. When will you SMP-fetishists learn that two simultaneous threads won't be running each in their own CPU? If you have two threads (or processes) running, that doesn't mean that each gets its own CPU; they'll share the 2 CPUs along with the dozens of other running processes. If AMD is right that
Huh? (Score:4, Interesting)
Re:Huh? (Score:5, Funny)
Re:Huh? (Score:2)
Re:Huh? (Score:2)
Re:Huh? (Score:2)
You definitely lo
Re:Huh? (Score:2)
Multicore designs are optimising for a different kind of code - but they suck at running the programs that do ex
Sounds familiar (Score:5, Funny)
Re:Sounds familiar (Score:2)
Re:Sounds familiar (Score:2)
Re:Sounds familiar (Score:5, Funny)
Re: (Score:2)
Hmmm (Score:2)
Perhaps there could be a documented way to access both CPUs directly? That may s
Re:Hmmm (Score:2)
Re:Hmmm (Score:2)
Although I will say that when we were doing parallel programming, we did have access to Sharcnet [sharcnet.ca] nodes, so perhaps you were more limited.
Re:Hmmm (Score:2)
Software isn't evolving. (Score:3, Interesting)
The future lies not with languages such as Erlang and Haskell, but likely with languages heavily influenced by them. Erlang is well known for its uses in massively concurrent telephony applications. Programs written in Haskell, and many other pure functional languages, can easily be executed in parallel, without the programmer even having to consider such a possibility.
What is needed is a language that will bring the concepts of Erlang and Haskell together, into a system that can compete head-on with existing technologies. But more importantly, a generation of programmers who came through the ranks without much exposure to the techniques of Haskell and Erlang will need to adapt, or ultimately be replaced. That is the only way that software and hardware will be able to work together to solve the computational problems of tomorrow.
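The claim that pure functions parallelize safely is easy to illustrate in any language; a minimal sketch (the function `f` here is made up — purity, not the particular arithmetic, is the point):

```python
from concurrent.futures import ThreadPoolExecutor

# A pure function: its result depends only on its argument, so calls can
# be evaluated in any order, or in parallel, with no locking and no races.
# This is the property Haskell-style languages exploit automatically.
def f(x):
    return x * x + 1

data = list(range(10))

# Serial evaluation and parallel evaluation must agree for a pure f.
serial = [f(x) for x in data]
with ThreadPoolExecutor(max_workers=4) as ex:
    parallel = list(ex.map(f, data))

assert serial == parallel
```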
occam (Score:5, Informative)
Re:Software isn't evolving. (Score:4, Informative)
Re:Software isn't evolving. (Score:3, Informative)
I actually have a brand new dual-core box and the GNOME System Monitor shows both CPUs separately. The only time I've seen both CPUs used by a single program it was a Java pr
may not want to go back.. yeah right (Score:5, Funny)
Hah, yeah right, we started parallel programming just this semester and already I want to kill myself. "May not want to go back"? I'd go back in a heartbeat!
Re:may not want to go back.. yeah right (Score:4, Insightful)
Even though the trade rags haven't realized it, real life software engineers have been using parallel programming techniques for decades. Sure, apps are optimized for what they run on, so most shrinkwrap software at your local CompUSA probably doesn't have much of that in there, but the author missed the boat already when it comes to "had several years focusing on...".
Better learn to like that parallel programming stuff. It's the way things work.
Re:may not want to go back.. yeah right (Score:2)
I can echo that. I have been doing programming on parallel CPUs since 1968 (on a monstrosity at Stanford University that included a 166 and a KA-10 processor). You have to think differently to write parallel code, but once you learn to think that way it becomes no harder than conventional, “linear” programming.
Re:may not want to go back.. yeah right (Score:2)
Gotta love these CPU companies... (Score:5, Funny)
Re:Gotta love these CPU companies... (Score:2)
Or maybe the software industry can start acting logically and license per machine.
That's of course until the "reverse virtualisation" from Intel happens, that makes your entire server cluster run as a single PC
Re:Gotta love these CPU companies... (Score:2)
You mean NUMA [wikipedia.org]?
Re:Gotta love these CPU companies... (Score:2)
Amdahl's Law (Score:5, Interesting)
What I want to know is which of the premises underlying Amdahl's Law [wikipedia.org] they've managed to escape?
Re:Amdahl's Law (Score:4, Interesting)
Amdahl's Law has little impact when the number of cores is small and the available task is "large", as today's multitasking OSes are.
Of course that doesn't mean that AMD will get a 100% improvement, but something close to that might be doable if they can break the tasks at hand into parallel stuff at a much smaller level than threads.
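For reference, the limit being discussed is just Amdahl's formula; a quick sketch:

```python
def amdahl_speedup(p, n):
    """Amdahl's Law: speedup when fraction p of the work runs on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

# Even a 95%-parallel task tops out well short of 2x on two cores:
print(amdahl_speedup(0.95, 2))          # ~1.90

# ...and no number of cores ever beats the 1/(1-p) ceiling (here, 20x):
print(amdahl_speedup(0.95, 1_000_000))  # just under 20
```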
Shi's law (Score:5, Informative)
Researchers in the parallel processing community have been using Amdahl's Law and Gustafson's Law to obtain estimated speedups as measures of parallel program potential. In 1967, Amdahl's Law was used as an argument against massively parallel processing. Since 1988 Gustafson's Law has been used to justify massively parallel processing (MPP). Interestingly, a careful analysis reveals that these two laws are in fact identical. The well-publicized arguments resulted from misunderstandings of the nature of both laws.
This paper establishes the mathematical equivalence between Amdahl's Law and Gustafson's Law. We also focus on an often neglected prerequisite to applying Amdahl's Law: the serial and parallel programs must compute the same total number of steps for the same input. There is a class of commonly used algorithms for which this prerequisite is hard to satisfy. For these algorithms, the law can be abused. A simple rule is provided to identify these algorithms.
We conclude that the use of the "serial percentage" concept in parallel performance evaluation is misleading. It has caused nearly three decades of confusion in the parallel processing community. This confusion disappears when processing times are used in the formulations. Therefore, we suggest that time-based formulations would be the most appropriate for parallel performance evaluation.
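The equivalence the abstract describes can be checked numerically; a sketch, with the symbols following the usual statements of the two laws:

```python
def gustafson_speedup(s, n):
    # s = serial fraction of the time measured on the n-core run
    return n + (1 - n) * s          # equivalently: s + n * (1 - s)

def amdahl_speedup(p, n):
    # p = parallel fraction of the time measured on the 1-core run
    return 1.0 / ((1.0 - p) + p / n)

# Equivalence: translate Gustafson's s into Amdahl's p via the (hypothetical)
# serial run time, s + n*(1 - s), and the two formulas give the same speedup.
n, s = 4, 0.2
p = (n * (1 - s)) / (s + n * (1 - s))   # parallel share of the serial run
assert abs(amdahl_speedup(p, n) - gustafson_speedup(s, n)) < 1e-12

print(gustafson_speedup(s, n))  # 3.4
```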
Word (Score:2)
I think that's a great example of the problems facing researchers in mathematics (and the sciences) today. It's really hard to make connections between all of the disparate facts, theories, and experimental data to draw conclusions and lead to productive research and development. In short, we often experience mental stack overflow errors.
Sounds a lot like Intel's Mitosis research (Score:3, Informative)
http://www.intel.com/technology/magazine/research
The article has simulated performance comparisons.
From the article:
"Today we rely on the software developer to express parallelism in the application, or we depend on automatic tools (compilers) to extract this parallelism. These methods are only partially successful. To run RMS workloads and make effective use of many cores, we need applications that are highly parallel almost everywhere. This requires a more radical approach."
Re:Sounds a lot like Intel's Mitosis research (Score:2)
For what it's worth, I think in the long run Intel has the right answer; the question is whether AMD can steal lots of market share in the sho
Similar to MacOSRumors rumor (Score:4, Insightful)
It supposedly involves Intel. I personally think both rumors are just that, but the timing is curious. Same source behind both? AMD PR people not wanting to lose out in imaginary rumored technology to Intel?
Re:Similar to MacOSRumors rumor (Score:2)
In the Eiffel programming language, they've proposed a concurrency algorithm that doesn't use "traditional" threads.
The idea is, you sprinkle the "separate" keyword onto various objects that it makes sense for. The compiler or runtime then does a dependency analysis and breaks out your program into different threads or processes. A
I know... (Score:5, Funny)
Re:I know... (Score:2)
Re:I know... (Score:5, Insightful)
I'd suggest x86-secret & the Reg have got the wrong end of the stick here. SMT is running two threads on one core - try taking "reverse hyperthreading" literally. I'd suggest that AMD are looking at running the one same thread in lock-step on two cores simultaneously. This is not about performance, it is about reliability - AMD looking at the market for big iron (running execution cores in lock-step is the kind of hardware reliability you are looking at on mainframe systems).
The behaviour of a CPU core should be completely deterministic. If the two cores are booted up on the same cycle they should make the same set of I/O requests at the same points, and so long as the system interface satisfies these requests identically and on the same cycle, the cores should have no reason not to remain in sync with each other until the point where they both put out the next, identical pair of I/O requests. If the cores ever get out of sync with each other, this indicates an error.
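The determinism argument, in miniature (the `core` function below is a made-up stand-in for a real pipeline, and a real lockstep design compares bus activity cycle by cycle, not final traces):

```python
# Lockstep sketch: run the same deterministic computation twice, as two
# "cores" would, and flag a fault the moment the traces diverge.
def core(inputs):
    acc = 0
    trace = []                      # stand-in for the core's I/O requests
    for x in inputs:
        acc = (acc * 31 + x) & 0xFFFFFFFF
        trace.append(acc)
    return trace

inputs = [3, 1, 4, 1, 5]
a, b = core(inputs), core(inputs)   # the two lockstepped cores

# Identical inputs + deterministic cores => identical traces.
# Any mismatch here would indicate a hardware error, not a logic bug.
assert a == b
```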
Just speculation of course, but I seem to recall AMD looking into this having been rumoured previously.
G.
Or maybe predicting both branches? (Score:5, Interesting)
But, with two cores, you could have a way to predict "branch" and "not branch" at every prediction spot. The core that gets it right sends the registers to the other core so they can continue as if every branch were predicted correctly...
That would only work if you had a nice fast way to copy registers across in a very small number of clock cycles... so again, just a bunch of speculation. But it was a neat enough idea I had to say it.
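The eager-execution idea, as a sequential toy (all names made up; a real design would run the two paths on separate cores in parallel, and the register-copy cost is exactly what's being hand-waved):

```python
# Toy "eager execution": evaluate both sides of a branch, as two cores
# might, then keep the result from whichever path the condition selects.
def eager_branch(cond, taken_path, not_taken_path, x):
    taken = taken_path(x)           # core 0 speculates "branch taken"
    not_taken = not_taken_path(x)   # core 1 speculates "not taken"
    return taken if cond(x) else not_taken   # discard the wrong path

# Neither path ever suffers a "misprediction": both were computed.
print(eager_branch(lambda x: x > 0, lambda x: x * 2, lambda x: -x, 5))   # 10
print(eager_branch(lambda x: x > 0, lambda x: x * 2, lambda x: -x, -3))  # 3
```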
Yes, AMD! You get it! (Score:2, Interesting)
Re:Yes, AMD! You get it! (Score:2)
I guess by your response I'm highly doubting you admin systems in a large datacenter because it makes absolutely no sense. I don't know any admin that would only want to have one processor, logical or not, in a large server. There's WAYYYY too many things that need to go on at the same tim
Re:Yes, AMD! You get it! (Score:2)
Too sweet to be true (Score:3, Insightful)
Even the article's writers aren't sure that's possible to do; apparently it's possible to "claim" it though, what isn't
Modern processors, including the Core Duo, rely on a complex "infrastructure" that allows them to execute instructions out of order, if certain requirements are met, or to execute several "simple" instructions at once. This is completely transparent to the code that is being executed.
Apparently, for this to be possible the instructions must not produce results co-dependent on each other, meaning you can't execute out of order, or at once, instructions that modify the same register, for example.
This is an area where multiple cores could join forces and compute results for one single programming thread as the article suggests.
But you can hardly get twice the performance from two cores out of that.
It's not exactly clear what they have in mind (Score:5, Informative)
There are extensions to known techniques;
A: more execution units, deeper reorder buffers, etc, trying to extract more Instruction-Level Parallelism (ILP).
B: More cores = more threads
C: hyperthreading -- fill in pipeline bubbles in an OoO superscalar architecture; also = more threads
I personally don't think any of these carry you very far...
Then there are some new ideas:
a: run-ahead threads -- use another core/hyperthread to perform only the work needed to discover what memory accesses are going to be performed and preload them into the cache - mainly a memory latency hiding technique, but that's not a bad thing as there are many codes that are dominated by memory latency
a': More aggressive OoO run-ahead where other latencies are hidden
Intel has published some good papers on these techniques, but according to those papers these techniques help in-order (read Itanic) cores much more than OoO.
b: aggressive peephole optimization (possibly other simple optimizations usually performed by compilers) done on a large trace cache. Macro/micro-op fusion is a very simple and limited start at this sort of thing. (Don't know if this is a good idea or not, or whether anyone is doing it)
But it's far from clear what AMD is doing. Whatever it is, anything that improves single threaded performance will be very welcome. Threading is hard (hard to design, implement, debug, maintain, and hard to QA). And not all code bases or algorithms are amenable to it.
Intel's next gen (Nehalem) is likely going to do some OoO look-ahead, as they have Andy Glew working on it, and that's been an area of interest to him...
A very interesting new concept is that of "strands" (AKA: dependency chains, traces, or sub-threads). (The idea is instead of scheduling independent instructions, schedule independent dependency chains. - For more info, see http://www.cse.ucsd.edu/users/calder/papers/IPDPS
It's not clear how well it would apply to OoO architectures, but I would expect that likely approaches would also need large trace caches.
Applying this to an OoO x86 architecture, and detecting the critical strand dynamically in that processor could be very cool, and potentially revolutionary.
It will be very interesting to see what Intel and AMD are up to -- it would be even cooler if they both find different ways to make things go faster...
Re:It's not exactly clear what they have in mind (Score:3, Informative)
It's already in their compiler;
http://www.intel.com/software/products/compilers/clin/docs/main_cls/mergedprojects/optaps_cls/common
(Their compiler absolutely rocks BTW)
And their excellent paper titled "Speculative Precomputation: Long-range Prefetching of Delinquent Loads" by Jamison Collins, Hong Wang, Dean Tullsen, Christopher Hugh
But what I really want to know... (Score:4, Interesting)
And
Will AMD only hide the fact there's multi-cores from Operating systems other than Microsoft ?
Re:But what I really want to know... (Score:2)
You're barking up the wrong tree here. MS has already addressed this in favor of their customers, and licenses on a per-socket rather than a per-core basis. One core, two cores, four cores, doesn't matter--one processor.
Re:But what I really want to know... (Score:2)
I'm going to guess that AMD will hide the multi-cores from everyone.
The idea is that AMD will have the CPU do all the fancy (de)threading stuff on the chip. The entire point is to increase performance for non-optimized applications.
If you're going to be using programs optimized for dual CPUs/cores, then there really isn't a point in buying a chip with AMD's technology on it, unless AMD plans to stop selling 'normal'
bullshit (Score:2)
So if the theory is to take the three ALU pipes from core 1 and pretend they're part of core 0... it wouldn't work efficiently. Also what instruction set would this run? I mean how do we address registers on the second core?
AMD would get more bang for buck by doing other improvements such as adding more FPU pipes, adding a 2nd multiplier to the integer side, incre
Re:bullshit (Score:4, Informative)
1. core 1 issues a store to memory [dozens if not hundreds of cycles]
2. core 0 issues a read, the XBAR realises it owns the address and the SRQ picks up the read
3. core 0 now reads a register from core 1
It would be so horribly slow that accessing the L1 data cache as a place to spill would be faster.
The IPC of most applications is less than three and often around one. So more ALU pipes is not what K8 needs. It needs more access to the L1 data cache. Currently it can handle two 64-bit reads or one 64-bit store per cycle. It takes three cycles from issue to fetch.
Most stalls are because of [in order of frequency]
1. Cache hit latency
2. Cache miss latency
3. Decoder stalls (e.g. unaligned reads or instructions which spill over 16 byte boundary)
4. Vectorpath instruction decoding
5. Branch misprediction
AMD making the L1 cache 2-cycle instead of 3-cycle would immediately yield a nice bonus in performance. Unfortunately it's probably not feasible with the current LSU. That is, you could get up to 33% faster in L1-intense code with that change.
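The 33% figure follows from simple cycle arithmetic on a fully L1-latency-bound dependent chain (the pathological best case; real code would see less):

```python
# Back-of-envelope: a chain of N dependent L1 loads and nothing else,
# comparing a 3-cycle L1 against a 2-cycle L1. Illustrative model only.
N = 1000
t3 = 3 * N      # total cycles with 3-cycle L1 latency
t2 = 2 * N      # total cycles with 2-cycle L1 latency

print(t3 / t2)       # 1.5x throughput on this chain...
print(1 - t2 / t3)   # ...i.e. execution time drops by ~33%
```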
But compared to "pairing" a core, die space is better used improving the LSU, adding more pipes to the FPU, etc.
Tom
Re:bullshit (Score:2)
The point is as it stands now the K8 cannot, repeat cannot, get a register from one core to another FASTER THAN THE L1 CACHE WORKS.
Now that we got that out of the way... realize that
IPC OF 99% OF ALL CODE is less than 1 in most cases, and why is that? Aside from register contention there is the three-cycle latency of the L1. So it's very trivial to stall an entire execution unit.
So AMD would see little
Re:bullshit (Score:2)
There are THREE FPU pipes. Therefore it is possible to add an adder [or vice versa] to the multiplier, then have the decoder be aware of this and feed stuff into either pipe. So technically you don't have to change the ICU at all to support more FPU resources.
As for th
Academia's been proposing this for awhile (Score:4, Informative)
There are several variations of this. One is to use the second core to run in advance of the 1st thread, the first thread effectively acting as a dynamic and instruction-driven prefetcher. One such effort includes "slipstreaming" processors, which works by using the advanced stream to "warm up" caches, while the rear stream makes sure the results are accurate, and to dynamically remove unnecessary instructions in the advanced stream. Prior, similar research has been done to perform the same work using various forms of multithreading (like HT/SMT, and even coarse-grained multithreading). See www.cs.ucf.edu/~zhou/dce_pact05.pdf for more details.
Others, such as Dynamic Multithreading techniques, take single-threaded code and use hardware to generate other threads from a single instruction stream. Akkary (at Intel) and Andy Glew (previously Intel, then AMD, then...?) have proposed these ideas, as have others. Some call it "Implicit Multithreading".
Now, the Register article is so wimpy (as usual) that there's no actual information about what technologies are used, but maybe it's a variation on one of the above.
Not True! (Score:5, Funny)
Magically Parallelized? (Score:2)
Now, if they're talking about allowing separate processes to run separately without specific SMP code in the kernel, fine. But that's not 2x performance.
Speculative Multithreading (Score:4, Interesting)
Basically the processor will try to split a program into multiple threads of execution, but make it appear as a single thread. For example, when calling a function, execute that function on a different thread and automatically shuttle dependent data back/forth between the callee and the caller.
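In software terms the idea resembles futures; a hedged sketch (function names made up, and real hardware would do the splitting transparently and speculatively, not via an explicit thread pool):

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch: run a called function on another worker while the caller keeps
# going, then join on the result only at the point it is actually needed.
def callee(x):
    return x * x

with ThreadPoolExecutor(max_workers=1) as ex:
    future = ex.submit(callee, 7)       # "callee" starts on another thread
    caller_work = sum(range(100))       # caller continues independently
    result = future.result() + caller_work   # dependence resolved here

print(result)  # 49 + 4950 = 4999
```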
Re:Speculative Multithreading (Score:2)
Re:Speculative Multithreading (Score:2, Insightful)
There are other academic projects that are attempting to do TLS dynamically, in hardware. PolyFlow at Illinois is one, Dynamic Multithreading (mentioned elsewhere in this story)
beware... (Score:2)
Load balancing might be interesting (Score:5, Insightful)
Re:Load balancing might be interesting (Score:2)
So you can only load balance at the process/thread level.
Tom
This is Like RAID for CPU's (Score:5, Interesting)
Looks like the same thing. You take multiple CPUs, present them as one, and let the controller figure out how to best use them.
This could make for hot-swappable CPUs (heh) and the ability to have a CPU die without taking out your system. The redundancy aspects of the other RAID configurations don't seem to translate very easily, but the 'encapsulation' concept seems to fit nicely.
Re:This is Like RAID for CPU's (Score:2)
Re:This is Like RAID for CPU's (Score:2)
So just because you may have 4 cores in your box [say dual-core 2P] doesn't mean all of the cores can act as one logically to the OS in a meaningful and efficient manner.
The striping analogy would be to dispatch instructions in round-robin fashion to all the processors. The problem with that is that the architectural state has to be s
Re:This is Like RAID for CPU's (Score:2)
Re:This is Like RAID for CPU's (Score:2)
So all you're talking about is having a larger cache, which is happening.
Re:This is Like RAID for CPU's (Score:3, Informative)
The Ideal Processor and Software Model (Score:2)
There is only one way to achieve optimum performance using multiple cores (or multiple processors) and that is to adopt a non-algorithmic, signal-based, synchronous software model. In this reactive model [rebelscience.org] , there are no threads at all, or rather, every instruction is its own thread or processor object. It waits for a signal to do something, performs its operation and then sends a signal to one or more objects. There can be as many ope
Re:The Ideal Processor and Software Model (Score:2)
Imagine... (Score:2)
No, seriously, I'm having trouble envisioning it.
What about this... (Score:2, Interesting)
Re:multi cpu (Score:2)
Re:multi cpu (Score:4, Insightful)
Sony already assumes that their PS3 chips will have a fault in one of the cores, and simply lock off that section when one is found. One fault no longer kills a chip, though two can render the power unacceptably low.
The cool thing is this scales. If you have a 10 cm^2 chip, traditionally your chance of perfection is 1/4th that of a 5 cm^2 chip, cutting your yield drastically. But if you have 6 cores on a chip with one dead one, and you want to go to 12, you should get a similar yield for a proportionally similar amount of dead cores.
Cores let you limit damage from manufacturing errors, letting you build bigger chips more cheaply. At least, that's my layman's understanding.
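That layman's understanding matches a simple redundancy-yield model; a toy calculation (the per-core defect rate `d` is made up, and real yield models are rather more involved):

```python
from math import comb

# Toy yield model: each core is defective independently with probability d.
# A chip "ships" if at most one core is dead (the PS3-style spare core).
def ship_yield(cores, d):
    all_good = (1 - d) ** cores
    one_bad = comb(cores, 1) * d * (1 - d) ** (cores - 1)
    return all_good + one_bad

d = 0.1
print(ship_yield(6, d))    # ~0.886: most 6-core dies are sellable
print(ship_yield(12, d))   # ~0.659: twice the cores, still a usable yield

# Without the spare, the 12-core chip would yield only (0.9)**12 ~ 0.28.
```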
Re:multi cpu (Score:2)
Makes sense, since it's a difficult and complex mess to write an app or an operating system that can run on 2 or more CPUs efficiently.
My guess is in hardware you can do a lot more than in software.
Re:multi cpu (Score:2)
Well you can certainly do it faster. But it's a lot harder to work on a patch.