Hyper-Threading Explained And Benchmarked
John Martin writes "2CPU.com has posted an updated article about Hyper-threading performance. They discuss the technology behind it, provide benchmarks, and make observations on what the future holds for hyper-threading. It's actually an easy, interesting read.
Of note, they'll be publishing Part II in the near future which will detail hyper-threading performance under Linux 2.6. Hardware geeks will probably appreciate this."
SMT (Score:2, Troll)
And yes, this is a very good idea. A modern superscalar out-of-order processor, like the Athlon and Pentium Pro (and later), can issue and retire multiple instructions per clock cycle. However, it can *only* do this if there is enough instruction-level parallelism (ILP). Turns out, there is not enough ILP in current programs to take full a
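A toy illustration of the ILP point (my own hedged C sketch, not from the article): the first loop is one long dependency chain, so the extra execution units mostly sit idle, while the second keeps several independent additions in flight that an out-of-order core can overlap.

```c
#include <stddef.h>

/* Dependency chain: each add needs the previous result, so the core's
   spare ALUs have nothing to do -- very little ILP to extract. */
double sum_serial(const double *x, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += x[i];
    return s;
}

/* Four independent accumulators: the out-of-order engine can keep
   several additions in flight at once -- more ILP, better utilization. */
double sum_unrolled(const double *x, size_t n) {
    double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += x[i];
        s1 += x[i + 1];
        s2 += x[i + 2];
        s3 += x[i + 3];
    }
    for (; i < n; i++)
        s0 += x[i];
    return (s0 + s1) + (s2 + s3);
}
```

Most real code looks more like the first loop than the second, which is exactly the gap SMT tries to fill with a second thread's instructions.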
Re:SMT (Score:4, Interesting)
Re:SMT (Score:3, Interesting)
However, there is no reason why you can't take two single-threaded processes and use one to fill the holes in the pipeline left by the other, so SMT should still have a decent benefit if the kernel scheduler is prepar
Re:SMT (Score:3, Insightful)
I would argue that in the vast majority of cases, processor-specific microcode (as opposed to language and algorithmic) optimizations aren't the programmer's job - that's what a compiler is for. A professional-grade compiler like MIPSpro or ICC can generate code over twice as fast as GCC on the same processor, because it's smarter about process
Re:SMT (Score:3, Informative)
Compilers simply can't be asked to pick up the slack for programs written with a poor logical flow. They can't be asked to figure out a completely di
Re:SMT (Score:3, Informative)
Smart code will do more for you than hand optimized assembler, unless you already have written smart code.
Re:SMT (Score:2)
gcc 3.3.2 beats the pants off icc 8.0 on my SSE2 code. Up to a 50:1 ratio on speed tests, 4:1 on average. With earlier revisions of gcc and icc the ratio was 2:1 with icc being faster. This code is written with explicit parallelism so all the fancy loop unrolling icc does doesn't help, and the register allocation algorithm in gcc seems to be th
Re:SMT (Score:2)
Umm, yeah, well icc 8.0 is the newest release. And I checked against several versions of gcc: 3.2.3, 3.3.2 and the latest 3.4 from CVS. The 3.3.2 seemed to have the best performance overall, with 3.4 a close contender. Except the 3.4 had worse performance on a couple of benchmarks, not
Re:SMT (Score:5, Interesting)
The Cell architecture (which may or may not be used for the PS3) is a multi-processor system designed for scalability; it really does have several processors running at the same time. In contrast, 'Hyperthreading' runs multiple threads on a single processor's core.
They both require multi-threaded code to achieve performance improvements, but fundamentally they're really quite different, and yield quite different price / performance trade-offs.
Re:SMT (Score:2)
Re:SMT (Score:5, Interesting)
In addition, video games don't always lend themselves particularly well to running in multiple threads. I have my artificial intelligence code, collision & physics code, and my rendering code. These 3 parts take roughly 90-95% of the total CPU time available to me. I can't run collisions and physics until after the AI has run, and I can't run my rendering until the collision & physics have been run. I can multi-thread individual game objects, but even these constantly interact with each other. This isn't normally a problem if you double buffer: for example, after the AI has run, I keep the current frame's AI output around somewhere while I run the next frame. But that requires additional memory, another resource that is scarce on consoles.
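A rough sketch of the double-buffering idea described above (hypothetical C, invented names, not from any real engine): physics consumes the AI output produced last frame while the AI writes the next frame's output into the other buffer, at the cost of a second copy of that data.

```c
#include <stdio.h>

#define MAX_OBJECTS 4                 /* tiny for the example */

/* Hypothetical per-object AI output consumed by the physics step. */
typedef struct { float desired_vx; } AiOutput;

static AiOutput ai_buffers[2][MAX_OBJECTS];   /* the double buffer */
static int write_buf = 0;                     /* AI writes here this frame */

/* Stand-in stages; a real engine's AI/physics/render work goes here. */
static void run_ai(AiOutput *out, int n)           { for (int i = 0; i < n; i++) out[i].desired_vx += 1.0f; }
static void run_physics(const AiOutput *in, int n) { for (int i = 0; i < n; i++) printf("physics sees %.1f\n", in[i].desired_vx); }
static void run_render(void)                       { puts("render"); }

static void game_frame(int object_count) {
    /* AI fills the write buffer for the *next* physics step... */
    run_ai(ai_buffers[write_buf], object_count);

    /* ...while physics consumes what the AI produced last frame. In a
       threaded build these two calls could run on separate (logical)
       CPUs, since they no longer touch the same buffer. */
    run_physics(ai_buffers[write_buf ^ 1], object_count);

    run_render();
    write_buf ^= 1;                   /* swap roles for the next frame */
}

int main(void) {
    for (int frame = 0; frame < 3; frame++)
        game_frame(MAX_OBJECTS);
    return 0;
}
```

The extra AiOutput array is the additional memory the parent post is worried about on consoles.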
Re:SMT (Score:2)
I understand the need for single threaded performance; it does seem hard to break a game down into enough parts to really benefit from massively multithreaded architectures. I mean, all you really have is input, video, sound, physics, AI and rules (I separate physics from rules because physics
Assembly sucks? (Score:3, Informative)
Re:SMT (Score:2, Informative)
Interesting. (Score:5, Informative)
Re:Interesting. (Score:5, Funny)
I was actually trying to explain hyperthreading to someone today. I got about three minutes into the discussion and realised that I had absolutely no idea what I was talking about.
The discussion arose because we were talking about stupid salesmen. I saw a salesman in a shop the other week, trying to explain hyperthreading to a lady with a glazed expression on her face.
He was saying that hyperthreading makes it easier to use two monitors on your PC.
From the article: (Score:2, Funny)
Hmm... in 6 years of architecture research I have never heard anyone talk about SMT like that. It's not even analogous
Re:From the article: (Score:5, Informative)
The single wood chipper, being analogous to the actual processing part of the core, is only going to be able to shred so much wood - but if two people fetching wood from the woodpile can keep it running at 100% capacity, they will shred more wood than a single guy running back and forth to the wood pile by himself.
What it is, really (Score:2)
We start with one wood chipper, one wood chipper operator and a pile of wood. We can chip (whatever) per unit time.
We make the chipper faster, and can do more (increase clock speed of processor), but at some point the operator can't bring us the wood fast enough. So, we use a wheelbarrow to transport more wood in a go, and we keep the stack next to the chipper (a cache).
Now, there's plenty of wood, so we get a SECOND chipper. The operator can stick wood into whatever chipper is free (multiple ALU units,
Re:From the article: (Score:5, Funny)
Re:From the article: (Score:3, Funny)
What we need is *two* Woodchucks.
The diff between a used-car salesman and a PC one- (Score:2)
There's a really interesting philosophical point here, BTW. If you are chartered to know (or are pretending to know) something that you don't really understand, can you really claim that you didn't lie (because you didn't realize what you said was false), or do you have a responsibility to be correct if you offer yourself as an authority on a subject?
Intel's Whitepaper (Score:5, Informative)
Re:Intel's Whitepaper (Score:5, Informative)
Call that hyperthreading? (Score:5, Funny)
Part II should've been published concurrently, using idle time... tch!
For the real technical details (Score:5, Informative)
If you are really interested in the how and why of hyperthreading, I suggest you read through the lecture notes of Computer System Architecture [mit.edu] at MIT OpenCourseWare. This gives you enough background to race through all the articles at Ars Technica et al.
Celery (Score:4, Insightful)
That's pretty cool, but if your primary concern is encoding, then there are some things to keep in mind. A Celeron is much cheaper than a P4 with the hyperthreading ($90 for a 2.6GHz Celeron, and $170 for a P4 2.6C). And if the app you're using doesn't support HT, then a Celery will likely encode faster than a P4 with HT on. HT can also reveal nasty bugs in some drivers (my HDTV card is an example). So unless you're playing games, the P4 is just added expense.
Re:Celery (Score:5, Informative)
So it is, and it's not all that fast either [anandtech.com]. Then again, you shouldn't believe all that you read on the Intarweb.
Re:Celery (Score:2)
One can still turn off the HT. With only a 128k cache, IMO, it is too much of a performance liability to make it worth the lower cost.
I just leave it on because the system seems to respond a little better under heavy load.
Re:Celery (Score:3, Insightful)
Wrong percentages? (Score:5, Interesting)
From the article:
"Sandra's CPU benchmark is obviously quite optimized for hyperthreading at this point, and the numbers certainly show that. We see an average improvement of ~39% when hyper-threading is enabled on the P4
The numbers are:
4328 without HT
7125 with HT
You could say that disabling HT makes this benchmark 39% slower. But the increase from turning HT on is
7125/4328-1 = 1.646 - 1 = 0.646 = 64.6 %
Hrmpf.
Yup, all over the place... (Score:2, Informative)
If X is the lower number and Y is the higher number, he's figuring his percentage increases as (Y-X)/Y instead of (Y-X)/X .
Or is this some kind of "New New Math" that they started teaching in the 10 years since I graduated?
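For the record, here's the arithmetic both ways, using the Sandra scores quoted above (just the two formulas from the parent post, in C):

```c
#include <stdio.h>

int main(void) {
    double without_ht = 4328.0, with_ht = 7125.0;

    /* Speed-up from enabling HT: (Y - X) / X */
    printf("HT on is %.1f%% faster\n",
           (with_ht - without_ht) / without_ht * 100.0);   /* ~64.6% */

    /* What the article apparently computed: (Y - X) / Y, i.e. how much
       slower the HT-off run is relative to the HT-on run. */
    printf("HT off is %.1f%% slower\n",
           (with_ht - without_ht) / with_ht * 100.0);       /* ~39.3% */

    return 0;
}
```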
Re:Wrong percentages? (Score:4, Interesting)
I think it was a case of -wanting- to see a specific number and juggling things in his head until he got the number he wanted. Intel touts the 30% range and if he initially got the 65% number he probably discarded it and kept juggling the books to get the number in the 30's that he wanted.
As someone that has a P4 2.4 (not HT) box sitting right next to a P4 2.4 (HT) box I will assure you that in real life you are not going to see a 65% sustained boost in performance in day to day use. Not 30% sustained boost either, unless you are only running apps that are heavily optimized and multithreaded.
Re:Wrong percentages? (Score:2)
so if i take
50% of a head of lettuce,
100% of an orange,
75% of a banana and
25% of a cup of Miracle Whip,
I get 62.5% of a fruit salad?
I guess everybody knows why I flunked calculus now!
Being philosophical on this... (Score:5, Interesting)
I do remember when there was that RISC vs CISC thing in the 80s: people were saying that CISC was obsolete, RISC being the future and so on. What we see today is not pure RISC processors but something in between. -- It's just that the answer was not as pure or clean as people thought at first.
A few years ago there was the BeBox and its BeOS. Well, BeOS had the philosophy of a machine not having a single super-powerful-burning-hot processor but, instead, several low-power processors combined.
Well, Hyper-Threading may push distributed processing technology to the desktop, to the masses, so we might have interesting changes in software and hardware philosophy in the future.
Sort of romantic thinking... But one can dream.
RISC gives you more bang for your buck (Score:5, Interesting)
All things being equal, RISC gives you more bang for your buck. The difference is that Intel has pushed CISC, or specifically the x86 architecture, as fast or faster than RISC by using more bucks. The amount of R&D dollars poured into x86 vs the amount poured into PowerPC or Alpha is overwhelming.
When I was at Apple our processor architect, Phil Koch, gave a talk in, I think, 1997, where he said that the PowerPC consortium had essentially optimized for power consumption and dollars spent on R&D. What was amazing at that time was that PowerPC was competitive with Intel given much lower power consumption and much lower investment of R&D dollars. However, no one really cared about lower power consumption so it didn't translate into any real advantage. Without the R&D dollar leverage given by RISC, however, the PowerPC would not have been able to compete at all. Pushing the 68K architecture to be competitive with Intel with the same R&D dollars as PowerPC would have been impossible.
Re:RISC gives you more bang for your buck (Score:3, Insightful)
All "Cisc" chips are risc cores with a decoder frontend, and the "cheaply developed" Power PCs before the G5 were slaughtered by X86 in any bench but photoshop gaussian blur.
And the G5 is only a side product of IBM's Power4 program, which can't really be described as "low R&D expenses".
Re:RISC gives you more bang for your buck (Score:5, Interesting)
Maybe, maybe not. However, it's hard to tell because nobody makes RISC or CISC processors anymore. The RISC concept, implemented in CPUs like the MIPS R3000, originally meant very simple hardware without pipeline interlocks, instruction schedulers, or more than an absolute bare-bones set of instructions. The current Power PC does not match this at all; it is closer to the current X86.
By the same token, CISC used to mean that many or most instructions were implemented in microcode on the processor. Once again, that's no longer the case. All X86s now have a RISC-like core and resemble the Power PC far more than the 80286.
Pure RISC designs and pure CISC designs have both been superseded by a hybrid approach, and neither one would be competitive today outside the embedded device market.
Basically, you were being fed a line of company FUD to get you all excited about their choice of CPU. Today, cache memory dominates the chip real estate, and CPU performance and power consumption are dictated almost exclusively by cache size and silicon process technology rather than these surface architectural details.
Please get your terms straight! (Score:3, Informative)
Not true at all! RISC refers to the instruction set, not the internal architecture. Even the earliest RISC processors to carry that name included pipeline interlocks -- it was the simplicity of RISC that made such techniques feasible, especially at the chip densities of the 80's.
There's a lot of con
Re:RISC gives you more bang for your buck (Score:2)
However, there are two major differences between traditional microcode ops and RISC-like subops. First, traditional microcode opcodes were usually very wide, with enough dedicated bits to simultaneously control all of the ALU parameters, address calculations and data path multiplexers in the processor.
Second, microcode worked like
Re:RISC gives you more bang for your buck (Score:2)
Here's [yarchive.net] a ref to a discussion of RISC's response to this problem.
Re:Being philosophical on this... (Score:2)
Nice chip, but relegated to the history books now.
Re:Being philosophical on this... (Score:2)
N
oh goodie! (Score:2, Funny)
Re:oh goodie! (Score:2)
Cache Contention (Score:4, Interesting)
Everything I know about Hyperthreading... (Score:5, Informative)
Quick Q (Score:5, Interesting)
Re:Quick Q (Score:5, Insightful)
Because it is cheaper?
SMT increases the size of the CPU very little and can give some good improvements (depending on the application, and on the OS, as said in the article).
SMT can work in the same motherboard as a single CPU, contrary to what you said.
And for the same price, the single-CPU performance of your dual-CPU setup will be lower.
Re:Quick Q (Score:3, Interesting)
Cost (Score:2, Informative)
The lameness filter blows.
Re:Quick Q (Score:2, Interesting)
It'll be interesting to see what happens to "hyperthreading" when dual and quad processors come standard on desktop systems for home users.
I look at Hyperthreading as a quick hack to improve response times on a few things. It's a minor speed boost as well, but I think it has enough drawbacks that it's only a minor improvement which may not always be a good idea to have enabled. I doubt it will st
Re:Quick Q (Score:2)
It was supposed to be put into the Alpha processor too; a lot of HT research was done on it and was transferred to Intel.
Most of the CPU players are toying with dual full CPUs on-die as well, but keep in mind that HT accounts for under 5% of the die, rather than requiring a whole second die.
So you _can_ also have two real processors and two more processors in virtual mode. If you know the Xeon line, the Xeon DP allows two real processor
Re:Quick Q (Score:2)
It actually makes more sense to build one chip that's, say, 8 logical processors and give it several execution units of each type (i.e. 6 integer math units, 4 floating point units, etc.) depending on instruction mix. Of course, that eats chip real estate, but if you have a multithreaded system to run, it will scream.
If you put in 8 distinct processors, that's 8 integer math units, 8 floating point math units, etc. some percentage of which are idle mos
quieter (Score:2)
Re:Quick Q (Score:2)
Also, SMP boards seem to be 2-3x the cost of UP boards before the cost of the CPUs.
[*] FSB speeds permitting. It does 400MHz and 533MHz FSB speeds, but not 800MHz.
Cache contention with Hyperthreading (Score:5, Interesting)
To really exploit this, you'd need gang scheduling in the operating system. But it's unlikely that SMT will remain around long enough for any effort to exploit it to be worthwhile. CMP with separate caches would likely take over before then, since it would behave more like separate CPUs from a performance standpoint and thus offer more consistent behavior.
Nitpick (Score:2)
Re:Cache contention with Hyperthreading (Score:2)
Future prognosis for HT (Score:5, Interesting)
Unfortunately, historically CPU speed has increased faster than memory bandwidth. That's why we've had ever more layers of cache added to our systems, to make up for the relative deficiency.
Unless things change, a technology that works better with a higher ratio of memory bandwidth to CPU speed is likely to become progressively less, not more, effective.
Of course, there's always the argument that marketing reasons have pushed CPU clockspeed faster than memory bandwidth, and that Intel et al will just shift their focus more towards memory in future. But defying the tide of 'what people think they want' is usually risky.
Re:Future prognosis for HT (Score:5, Insightful)
Aye. Sun has big plans [sun.com] for CMT, which one of their sales reps was quick to tell us all about: up to 32 SPARC cores on one chip. That'll work well in the lots-of-small-tasks model where you can take advantage of direct access (say between disk cache and network card) on FirePlane with very simple code (like a webserver) that can execute out of the processor's cache. But we're heavy database users, and the first question he got asked was, are you seriously telling us Sun is about to make its memory bandwidth an order of magnitude greater? He couldn't answer that question. Now, that means either he was clueless, or Sun is jumping on the Intel benchmark bandwagon.
Memory bottleneck (was: Future prognosis for HT) (Score:5, Interesting)
If you refer back to Marc Tremblay's CMT Article [aceshardware.com], you'll see that one of the approaches is to run one thread until it blocks on a memory read, then run another until it blocks and so on, repeating for as many threads as it takes to soak up all the wasted time waiting for the memory fetches.
The Sun paper on their plans for it is here [sun.com]. Have a look at page 5 for the diagram.
--dave (biased, you understand) c-b
I/O Bottleneck (Score:2)
The key to this effect is that the slowest execution unit is taking the most time, forcing all other execution to wait on it. Other, faster execution units must wait for one reason or another, so they all appear to be as slow as the slowest.
In software you can try to soften the blow by bum
Re:Future prognosis for HT (Score:2)
I think an Alpha board or two went as high as 512 bits wide.
Now, the wider memory bus doesn't help x86 or A64 as much as one would think, but with hyperthreading, it might.
Situations where HT really becomes useful (Score:5, Interesting)
An issue we encounter is the DCS (Distributed Control System) interface (the bit that links the PC to the fancy membrane keyboards, touch screens, alarm annunciators that the operator uses on the real plant [to maximise training benefit]). Although the interface typically only uses 0.5 to 2% of the CPU, when the simulation goes flat out there is a noticeable impact on other threads, to the point where there are timeouts on data requests from the operator console.
In summary, if you have a system where some threads are IO bound (in our case, processing requests coming across via ethernet) and other threads are CPU intensive (high end numerical calculations), you will see a definite benefit. It allows us to give every team member a machine fit for the job at approximately 1/3 the cost (those of you who wish to argue that SMP machines are cheaper: we are bound by corporate purchasing agreements where SMP falls into the "Workstation" category while a uni-processor HT machine falls into the far cheaper "Desktop" category).
If you are performing just pure calculations and need to run two parallel threads, I would recommend an SMP or similar machine.
As always your mileage may vary.
ZombieEngineer
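A minimal sketch of that workload mix (hypothetical POSIX threads in C, not ZombieEngineer's actual code): one CPU-bound thread crunches numbers flat out while an I/O-bound thread mostly waits and occasionally services a request. This is the kind of pair hyper-threading interleaves well, since the light thread gets a logical CPU to run on promptly instead of queueing behind the heavy one.

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* CPU-bound worker: stands in for the high-end numerical calculations. */
static void *crunch(void *arg) {
    (void)arg;
    volatile double x = 0.0;
    for (long i = 0; i < 200000000L; i++)
        x += (double)i * 1e-9;
    return NULL;
}

/* I/O-bound worker: stands in for the DCS interface servicing requests
   over ethernet -- mostly waiting, with brief bursts of work. */
static void *serve_io(void *arg) {
    (void)arg;
    for (int i = 0; i < 20; i++) {
        usleep(100000);                    /* pretend to wait on the network */
        printf("request %d handled\n", i);
    }
    return NULL;
}

int main(void) {
    pthread_t cpu_thread, io_thread;
    pthread_create(&cpu_thread, NULL, crunch, NULL);
    pthread_create(&io_thread, NULL, serve_io, NULL);
    pthread_join(io_thread, NULL);
    pthread_join(cpu_thread, NULL);
    return 0;
}
```

Compile with -lpthread. This only sketches the shape of the workload; the real system has many more threads, which is where the starvation described above comes from.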
The sound of software breaking (Score:3, Informative)
In the old single-processor days, your calc thread could do a Wait(0) -- according to th
HT is awesome (Score:5, Interesting)
With Xeon with HT, our performance has increased quite dramatically. We use Perl, so we simply fork off the jobs that do the processing. The result is that we fill all four virtual processors in Linux if we have a sufficient number of jobs running.
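The pattern isn't Perl-specific; a stripped-down equivalent of the fork-one-process-per-job approach in C (with a hypothetical do_job stand-in) looks roughly like this:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical stand-in for one of the processing jobs. */
static void do_job(int id) {
    printf("job %d running in pid %d\n", id, (int)getpid());
    /* ... real work here ... */
}

int main(void) {
    const int njobs = 4;   /* e.g. one per logical CPU on a 2-way HT Xeon box */

    for (int i = 0; i < njobs; i++) {
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); exit(1); }
        if (pid == 0) {               /* child: run one job and exit */
            do_job(i);
            _exit(0);
        }
    }

    while (wait(NULL) > 0)            /* parent: reap all the children */
        ;
    return 0;
}
```

With enough independent jobs queued, the OS keeps all the logical processors busy without the application needing any explicit thread code.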
Re:HT is awesome (Score:2)
Huh? This is not meant as an offense, or a troll, but that really, really doesn't fit together. Have you considered using something faster (no, not C)? This should have a much bigger effect than a HT proc.
Re:HT is awesome (Score:3, Insightful)
We have profiled our code and optimized the code where we spend most of our time. On those critical sections, we use most of the tricks in the book - dynamically created code, extensive use of hashes, etc. We can even write functions in C using XS if we want to!
Basically, Perl is
how to enable for older processors? (Score:3, Interesting)
Hyper-threading explained in 300 words or less. (Score:4, Informative)
I can't remember the name of the machine, but one parallel shared-memory machine used this exclusively. The CPU had 128 process contexts and would switch through them in order. The time between subsequent activations of each context was great enough that data could be fetched from main memory and loaded into a register. This eliminated cache coherency problems (no cache!) and all delays related to memory fetching.
A P4 with hyperthreading is a simplified and much more practical version of that machine.
The thing that got me about CPU performance (Score:5, Insightful)
I think the advent of SMT confirms that a given process cannot by itself (unless it is _real_ special) take full advantage of a modern processor, so SMT is a way of reducing the problem by assuming that whilst one process ain't enough to take full advantage, two processes can make better use of it. It sure makes sense to me.
But it also presents the very interesting question of the marginal benefit of execution pipelines compared to complexity in the front end to allow SMT. What I mean is, what are the trade-offs between having a "virtual" (for want of a better word) processor for each execution pipeline rather than using them to execute parts of a single stream of instructions out of order? Is it simply a question of the nature of the work being undertaken by the machine? I.e., with a processor with 8 pipelines serving 20 users doing stuff, would it be better doing 1 bit of work from each of 8 users, or maybe 2-4 bits of stuff from 4-2 users? And can we answer that question heuristically, to allow the front end to make good use of each pipeline with a variable profile over the changing use of the machine? Fascinating (well, to me anyway).
Analogy (Score:4, Interesting)
Could be, but isn't. A better analogy would be two people using the same narrow corridor to chop and pile wood. If one piles wood whilst the other chops, then they perform better than one person. If they both chop wood, and then both pile wood, then they waste lots of time trying to squeeze past each other and accidentally hitting each other with axes.
Okay, so it's not that much better an analogy. But at least it bears some relevance to HyperThreading.
Re:Analogy (Score:3, Funny)
***ducks***
HT and VMWare: perfect together! (Score:3, Interesting)
Re:HT and VMWare: perfect together! (Score:3, Informative)
-Mike
HT Technology (Score:3, Informative)
Distributed Computing (Score:2)
This is a significant boost in production over a non-HT processor because these programs.
I would assume this would also help other DC projects like Seti@Home.
Beware of HT! (Score:2)
For one, burst!, my BitTorrent client, simply crashes on start-up. I've been in contact with Intel about the issue, and after some initial jerking around, I seem to have finally found a tech who's looking into the issue. Probably has something to do with my compiler (the crash offset is within the Delphi RTL).
My app is not alone; as others in this thread pointed out, hyperthreading can also tr
AnandTech on Hyperthreading (Score:3, Informative)
IBM Will Do SMT Right (Score:4, Informative)
"hyper-threading" vs. cache size (Score:5, Informative)
If you want to benchmark a hyper-threaded machine, a useful exercise is to run two different benchmarks simultaneously. Running the same one is the best case for cache performance; one copy of the benchmark in cache is serving both execution engines. Running different ones lets you see if cache thrashing is occurring. Or try something like compressing two different video files simultaneously.
If you're seeing significant performance gains with real-world applications using a "hyper-threaded" CPU, that's a sign that the operating system's dispatcher is broken. And, of course, hyper-threading dumps more work on the scheduler. There's more stuff to worry about in CPU dispatching now.
Intel seems to be desperate for a new technology that will make people buy new CPUs. The Itanium bombed. The Pentium 4 clock speed hack (faster clock, less performance per clock) has gone as far as it can go. The Pentium 5 seems to be on hold. Intel still doesn't have a good response to AMD's 64-bit CPUs.
Remember what happened with the Itanium, Intel's last architectural innovation. Intel's plan was to convert the industry over to a technology that couldn't be cloned. This would allow Intel to push CPU price margins back up to their pre-AMD levels. For a few years, Intel had been able to push the price of CPU chips to nearly $1000, and achieved huge margins and profits. Then came the clones.
Intel has many patents on the innovative technologies of the Itanium. Itanium architecture is different, all right, but not, it's clear by now, better. It's certainly far worse in price/performance. Hyperthreading isn't quite that bad an idea, but it's up there.
From a consumer perspective, it's like four-valve per cylinder auto engines. The performance increase is marginal and it adds some headaches, but it's cool.
Re:"hyper-threading" vs. cache size (Score:5, Informative)
That was my suspicion. Hyperthreading can't be much more efficient than threading via the OS, unless the software is specifically compiled for it, or you use a scheduler specific to hyperthreading. Scheduling work STILL has to be performed, and hyperthreading STILL isn't parallel processing. So where are these performance improvements people are seeing coming from?
I'm not using Linux, but FreeBSD. When I got my new HT P4, I considered turning it on. Then I read the hardware notes. Since FreeBSD does not use a scheduler specific for hyperthreading, it can't take full advantage of it. In some cases it might even result in sub-optimal performance. Just like logic would lead you to think.
The OS cannot treat hyperthreading the same as SMP, because they are two different beasts.
Other Conclusions (Score:2)
The Xeon has a slower clock, and yet outperforms the higher clock P4C. This is further evidence that MHz isn't everything.
The P4C has higher memory bandwidth (the FSB) yet slower performance. This shows that on-chip cache can be king over memory bandwidth too.
Some of my historic
Re:Ever buy a car with auto-everything? (Score:5, Interesting)
To put hyperthreading into your car analogy:
Hyperthreading is like a car that has power assisted steering. If you want, you can switch it off; you'll likely have a slightly smoother time with it on. But if you want the control (or don't trust it) then you can switch it off.
For the geek who reads posts as a stack of strings delimited by <br>: nobody's forcing you to use hyperthreading. Use it, don't use it. Don't complain that it's a Bad Thing[tm] simply because you're being given the choice.
Re:Ever buy a car with auto-everything? (Score:2)
That is not a good analogy. Sure, you can choose not to use HT; it will give you the same control over the system as you would have on a computer without HT. But there is no way you could utilize the full power of the CPU without HT.
Bug fixing my post (Score:2, Funny)
The 0xCAFEBABE bug just slows it down to a crawl.
Re:Bug fixing my post (Score:2)
And your post is just a trawl!
Btw, at least the 0xCAFEBABE bug doesn't open up the barn door for all viruses and trojans to come in and have a jolly good time in your computer, unlike that infamous ActiveX bug! And with 1.4, performance is not that bad either.
Re:Ever buy a car with auto-everything? (Score:5, Insightful)
The Pentium math bug was with division, not addition, and it only occurred in very specific circumstances [maa.org]. So while it supports your general point that complicated systems are more difficult to debug, that wasn't a very good example of an "obvious" bug. Careless, yes.
One thing that was good for the industry was to move away from the complex instruction set (CISC) towards a reduced set of instructions (RISC), and we have seen the speed improvements as well as a general reduction in hardware bugs since that time.
You do realize that Intel x86 processors are still CISC, right? (OK, actually internally they do execute things very much like a RISC chip, but the instruction set is still CISC, and modern x86 processors are certainly not any _simpler_ for having some RISC-like elements to them.)
Besides, RISC chips don't actually have fewer instructions. Most of them these days have more. The difference between CISC and RISC is that RISC chips don't have certain complicated, slow instructions, but rather break these up into smaller pieces. For example, CISC processors usually have an instruction to move memory-to-memory while RISC only moves memory-to-register and register-to-memory. Also, CISC processors often have a division instruction while many RISC processors instead just have a multiplicative inverse instruction (so to compute a/b you instead compute a*inv(b)).
But to add Hyperthreading, an untested and unproven technology which can guarantee no more than a 12% speed improvement, is folly. Better to amp the CPU clock and deal with a known like heat than to risk your company's livelihood on letting the CPU figure out which thread is which. That is something an OS is much more reliable in handling.
Now that's just ridiculous. Hyperthreading is not untested or unproven. Similar ideas have been discussed in academic papers for years; Intel was just the first to put it into a modern CPU. It's hardly untested, either - Intel started seeding the first Hyperthreading-capable processors what, two years ago now? At that point I wouldn't have suggested running a mission-critical application on a machine with Hyperthreading enabled, but now? You'd be crazy not to if it actually speeds up the application you need to run.
The reality is that in order to advance the speed of computer processors, it's necessary to make them more complicated.
YHBT HAND! (Score:5, Informative)
Note to moderators: mod grand-parent down. It is obviously a troll (albeit a rather well written troll!). If you absolutely must mod it up, at least use Funny rather than Interesting
Linguistic CISC vs RISC (Score:2)
This comment used RISC-type language, and in the process, a logical error was accidentally introduced... the correct programmatic statement would be:
"Hyperthreading is not untested _nor_ unproven"
CISC has its advantage in the way the intended statement would be encoded:
"Hyperthreading is better"
This is a complex statement succinctly written with fewer keywords and fewer potential (epistemological) errors.
Re:Ever buy a car with auto-everything? (Score:5, Informative)
Athlon and Athlon64 are generally better able to make use of their execution units, and wouldn't benefit from HT as much as P4/Xeon.
Re:Just Marketing BS by Intel to get suckers to bu (Score:5, Interesting)
64 bits, while not interesting in and of itself, is interesting in AMD's implementation. I have an UltraSparc sitting on my desk at work, and I assure you it's one of the most boring machines in the world. Why is AMD interesting? In the Opteron/Athlon 64 they've fixed some of the shortcomings of the x86 architecture. More registers. Access to more than 4GB of RAM without minutiae (like Intel uses). Things that were expensive in a register-starved 32-bit processor aren't on an Athlon64.
No, it's not innovative, not by a longshot. It's the same damn thing Intel did when they introduced the 80386. But it continues the line unbroken, and that's why the processor is important.
Hyperthreading is interesting, I agree, but I'd much prefer more affordable dual processor machines. Why in the world do Intel, AMD, and Microsoft go out of their way to keep SMP machines off the desktop? Apple certainly is going in the opposite direction.
Re:Just Marketing BS by Intel to get suckers to bu (Score:5, Interesting)
No, they aren't. The Apple "common desktop" oriented machines - the eMac, iMac and perhaps at a stretch the 1.6Ghz G5 - are all single CPU machines and are likely to remain so now the G5 has finally appeared (price alone, without going into other aspects, puts the dual G5s into workstation/high-end enthusiast desktop territory).
Apple briefly flirted with putting dual CPUs into their nearly-home-desktop machines, but this was driven by the massive speed deficit at the time of G4 CPUs - they *had* to have dual CPUs to be even remotely competitive. No matter what else Apple's marketing department might have tried to say.
If you could option a dual CPU onto an eMac, and all the iMacs were dual CPU, then your comment would be accurate. Two high-end machines out of a base range of seven (and that's ignoring the laptops) is not a paradigm shift. By that measure, just about any major manufacturer is "going in the opposite direction".
Re:Just Marketing BS by Intel to get suckers to bu (Score:3, Interesting)
Re:Capsule summary. (Score:3, Informative)
The original thinking behind SMT was that with cache and branch prediction misses starting to have very large penalties, switching to an alternate thread would result in a significant performance increase.
It turns out however that doing context switching at this ultra-fine granularity causes the cache miss rate to go up as each thread
Re:Capsule summary. (Score:2)
Re:Capsule summary. (Score:2)
Re:Jim Kirk (Score:2, Informative)
Re:Cinebench (Score:2)
"Obviously when hyper-threading was disabled on my P4 test system, I was unable to run the Multiple CPU portion of Cinebench's rendering benchmark."