Intel's Single Thread Acceleration
SlinkySausage writes "Even though Intel is probably the industry's biggest proponent of multi-core computing and threaded programming, it today announced a single thread acceleration technology at IDF Beijing. Mobility chief Mooly Eden revealed a type of single-core overclocking built into its upcoming Santa Rosa platform. It seems like a tacit admission from Intel that multi-threaded apps haven't caught up with the availability of multi-core CPUs. Intel also foreshadowed a major announcement tomorrow around Universal Extensible Firmware Interface (UEFI) — the replacement for BIOS that has so far only been used in Intel Macs. "We have been working with Microsoft," Intel hinted."
Overclocking? (Score:5, Insightful)
Re:Overclocking? (Score:5, Informative)
ACK!!! (Score:2, Funny)
Re: (Score:2, Insightful)
Good lord, let me sell all my web, application, and DB servers then!!!! I've overpaid for 32 CPU systems!!!! ACK!!!
I wouldn't call server applications or a DBMS "basic applications."
Re: (Score:3, Funny)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
The original poster is talking about reintroducing the idea of threads into single-threaded code, which is an incredibly difficult task - even more complicated than plain out-of-order execution on scalar code.
I'm hardly surprised that Intel took the easy way out on this one - the "hard" fixes for this problem are on the ve
Re: (Score:2)
I think you are talking about a different level of multi-threading than I was referring to (and the kind that AMD is researching). The kind of threading you are talking about would be at the programming language level, such as a multi-threaded web server or database server. These already exist and do make great use of raw processing power. B
Re: (Score:3, Interesting)
Re: (Score:2)
That's one reason multi-core CPUs were developed. At some point, however, Intel and AMD are going to have to begin increasing clock speeds again. The reason is that many problems (and the programs that solve them) can't be broken into parallel tasks and thus adding cores doesn't help. Only a clock speed increase can speed them
Re:Overclocking? (Score:4, Funny)
Holly only had 3min before she would be gone forever... And that bloody toaster had to ask if she wanted toast.
Let's hope that Intel has solved this issue with their new CPUs.
I for one which welcome, in soviet russia we compute you, and PROFIT!
Re: (Score:3, Funny)
Lister: No, I don't want any toast. In fact, no one around here wants any toast. Not now, not ever. NO TOAST. OR muffins! We don't LIKE muffins around here! We want no muffins, no toast, no teacakes, no buns, baps, baguettes or bagels, no croissants, no crumpets, no pancakes, no potato cakes and no hot-cross buns and DEFINITELY no smegging flapjacks!
Talkie Toaster: Ahh so you're a waffle man.
..off topic... so shoot me.
Re: (Score:2)
That'd be breaking the license just as much as hacking your license data to say "licensed for 4 processors".
I bet that for the price that you're paying for that software, you could have some undergrad rewrite the application slightly less efficiently and then buy another 8 Opteron boxes to run the new slower software on in a cluster.
Why the surprise? (Score:2, Interesting)
Re: (Score:2)
This optimization essentially shuts down the other cores in order to let the remaining core perform faster.
So this optimization is counterproductive when you have applications that actually use multiple cores.
EFI used by more than Apple (Score:4, Informative)
It is well past time that BIOS went to the grave.
Re:EFI used by more than Apple (Score:5, Insightful)
Yeah... Why, that nasty ol' standard BIOS makes hardware-level DRM just so pesky. And vendor lock-in for replacement hardware? Almost impossible! Why, how will Dell ever survive if it can't force you to use Dell-branded video cards as your only upgrade option? And of course, WGA worked so well, why not include it at the firmware level? Bought an "OS-less" PC, did we? No soup for you!
Sorry, EFI has some great potential, but it has far too much potential for vendor abuse. The (somewhat) standardized PC BIOS has made the modern era of ubiquitous computers possible. Don't take a "step forward" too quickly without first looking to see if it will send you over a cliff.
Re: (Score:3, Informative)
Re: (Score:2)
Speak for yourself! I, for one... oh, never mind.
BIOS still screws up plenty (Score:3, Informative)
It's not as good as you hope. I have three new machines all with BIOS bugs that are a real problem - a SiS mobo that doesn't set up my MTRR registers correctly and so causes the machine to run murderously slow unless I tell the kernel to map out the last bit of RAM or set up my own MTRR registers by hand, an Asus mobo that causes all kinds of problems and kernel pani
Re:EFI used by more than Apple (Score:4, Insightful)
Not really. It just makes improvements and DRM hacks. Add a TPM module to a BIOS-based system and include support in the OS and it will be just as effective for MS's purposes as an EFI one. BIOS makes modern hardware a pain in the butt. The fact that DRM modules are modern hardware is sort of orthogonal to the issue.
Umm, Dell is not even the biggest player in a market that is not monopolized. If Dell requires Dell branded video cards and people care (most probably won't) then people will switch to a vendor that does not do this and Dell will change or die. I don't think Dell or any other PC vendor has enough influence to force such a scheme upon the existing graphics card makers. Only MS really has that much influence and I don't think they have the motivation.
I don't think you have to worry about this problem unless you're running Windows on it.
I disagree. I don't see that vendors will abuse this any more than they already abuse BIOS. In any case, the change is coming. You just need to decide which side of the curve you want to be on. (Typed from an EFI laptop.)
Open Firmware (Score:2)
Open Firmware [openfirmware.org] has been around a lot longer than Intel's EFI, and is used by Sun, IBM and Apple for their RISC boxes.
There is a free-as-in-speech implementation for the PeeCee called OpenBIOS [openbios.org].
It's implemented in FORTH.
EFI (Score:2, Informative)
Really. I know Google is hard to use, but even Wikipedia [wikipedia.org] would have given some detail on EFI history. (Hint: Itanium only ever used EFI). And it turns out that Macs are not even the first x86 machines to use it, either:
A Marketing Triumph (Score:5, Informative)
Don't get me wrong, this is valuable technology. It is important that microprocessors efficiently use the power available to them. Having a choice on a single chip between a high-performance, high-power single-thread engine and a set of lower-performance, lower-power engines has great promise. But the way this is presented is a big victory for marketing.
Re: (Score:2)
Who cares, when you won't notice the throttling since the throttled core was sitting idle anyway? They're not slowing down the core you're using.
Re: (Score:3, Informative)
Basically, you have a fixed thermal envelope, and you know that power consumption rises roughly with the square of clock speed. So for the same power budget you can have either 4 cores at 1 GHz (power ~ 4 * 1^2 = 4 units, for 4 GHz of aggregate processing power) or 1 core at 2 GHz (power ~ 1 * 2^2 = 4 units, for only 2 GHz of processing power).
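If it helps, here is the same back-of-the-envelope arithmetic as a trivial C++ snippet. The power-proportional-to-frequency-squared relation is just the rough approximation above, not Intel's real power model, and the numbers are illustrative only:

#include <cstdio>

int main() {
    // Rough model: power ~ cores * frequency^2, throughput ~ cores * frequency.
    double quad_power   = 4 * 1.0 * 1.0;  // 4 cores at 1 GHz -> ~4 units of power
    double single_power = 1 * 2.0 * 2.0;  // 1 core  at 2 GHz -> ~4 units of power
    double quad_thru    = 4 * 1.0;        // 4 GHz of aggregate processing
    double single_thru  = 1 * 2.0;        // 2 GHz of aggregate processing
    std::printf("quad:   power %.0f, throughput %.1f GHz\n", quad_power, quad_thru);
    std::printf("single: power %.0f, throughput %.1f GHz\n", single_power, single_thru);
    return 0;
}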
let's not be subtle about this (Score:3, Insightful)
This chip has to throttle itself when you use all the cores. (probably a power/heat issue)
People hate throttling. Throttling is not marketable.
Intel marketing turned things around, saying that the chip speeds up (a.k.a. "stops throttling") when running single-threaded apps. Speeding up is good! It's like the old turbo buttons.
It's a sane idea. I'd been expecting to see chips that can't run at full speed continuously because of heat issues; this is pretty much the same thing.
Twice the speed? (Score:3, Insightful)
So... If they can have 2 cores at full speed, or 1 core at double speed... WHY THE FUCK do they have 2 cores in the first place?
Re: (Score:2, Informative)
Because of pipelining, if you have to swap between tasks you actually lose a large number of instructions, which means switching tasks often on a single core is significantly worse for performance than using multiple cores.
Re:Twice the speed? (Score:4, Insightful)
Re: (Score:2)
You know, they invented this outlandish thing called time slicing in what, the 60s?
Uh, huh - ever heard of its little friends, called "context switching" and "pipeline stalling"?
Re: (Score:2)
NOT twice the speed (Score:2)
Re: (Score:2)
The doubling in speed is for a different technology - "turbo memory" under a particular (memory-bound) application.
The speed-up for "overclocking" the core is unlikely to be as much as 2x. When you have a multi-threaded app (or several apps) then you want both cores because you'll get more performance that way. When one core is not being utilised the other core can increase its
UEFI? (Score:2, Interesting)
Inquiring minds want to know!
Re:UEFI? (Score:5, Informative)
As I understand it, UEFI will enable some thoroughly nasty DRM, but only so far as the OS vendor chooses to take it. Apple and Microsoft will almost certainly make it a miserable experience for all involved, but will probably tire of shooting themselves in the feet at some point. There are alternatives after all and they are looking better every day.
Re: (Score:2)
Re: (Score:2)
You go right ahead and load a DRM-protected song from iTunes onto your Sandisk Sansa MP3 player using approved OS X / iTunes functionality. Once you've done that, you can make the claim that Apple doesn't screw their customers. Apple isn't bad overall, but they're just as much a villain as Microsoft in this DRM thing.
Re: (Score:3, Interesting)
What if the problem is ten times worse, and there is no easy fix? Are y
Re: (Score:2)
"Caught up"? (Score:5, Insightful)
Or maybe Intel, unlike the story submitter, knows that many apps simply do not lend themselves to multithreading and parallelism. It's not about "catching up".
Multi-core for multithreaded apps? Check.
Trying to get each core as fast as possible for when it's only used by one single-threaded app? Check.
Makes sense to me.
Re: (Score:2)
Also, and I feel dumb for saying this because it's so obvious (I'm not even an expert on these things), but you don't really need all of your applications to be multithreaded in order to benefit from multiple cores. I guess I'm assuming that I'm not the only person who runs multiple applications at the same time.
Of course, it's more likely that you'll be taking good advantage of 8 cores if your apps are multithreaded, but if you're running two single-threaded applications on a dual core system, shouldn't
Re: (Score:2)
Most software people use doesn't (or shouldn't) use 5% of the processor power available to it. Of course, when you fire up the latest 3D game, ray-tracer, or other truly CPU-intensive app, you need all the cycles you can wring from every core. Most of these are multitaskable or parallelizable, but it's not always obvious or easy how to do it.
Besides, how e
Re: (Score:2)
Re: (Score:2)
This is a myth, propagated by lazy developers and cheap end users.
There are some classes of computing problems that can't be parallelized, but very few of those problems are the applications that we want to run faster on modern computers.
The only application that shows up on benchmark sites that might not be easily parallelizable is file compression (i.e. "WinRAR"), and if that ever needs to be parallelized a small algor
Re: (Score:2)
One example - which I run into at work all the time: parsing large HTML (or XML, same thing really) files. Web browsers are multithreaded in the sense that they use threads for connections to the server to get different files; it's still (as far as I know) single-threaded per file as far as parsing is concerned.
Another example - games. There's obvious potential fo
Re: (Score:2)
OK, fair enough. But in a lot of (almost all?) workloads, it's not really the time required to process a single file that's the problem, so much as the time required to process a whole bunch at on
Re: (Score:2)
There are some program types where it is (provably) impossible to parallelize them. They're much more rare in real world applications than people seem to think.
All of the examples you give can be parallelized - at least for a small number (say, up to 16) processors.
XML Parser:
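For the XML case, a minimal C++ sketch of the file-level split being discussed above: each worker thread runs the ordinary sequential parser on its own slice of the file list. parse_file here is a made-up stand-in for whatever real single-threaded parser you actually use.

#include <cstdio>
#include <string>
#include <thread>
#include <vector>

// Stand-in for the real single-threaded parse of one document.
void parse_file(const std::string& path) {
    std::printf("parsing %s\n", path.c_str());
}

// Parse many independent files in parallel: each worker takes an interleaved
// slice of the file list, and no file is touched by more than one thread.
void parse_all(const std::vector<std::string>& files, unsigned workers = 4) {
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&files, w, workers] {
            for (std::size_t i = w; i < files.size(); i += workers)
                parse_file(files[i]);
        });
    }
    for (auto& t : pool) t.join();
}

int main() {
    parse_all({"a.xml", "b.xml", "c.xml", "d.xml"});
}

This doesn't make any single file parse faster, but it does attack the "whole bunch at once" workload the parent describes.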
journalism at its finest (Score:4, Funny)
That said, this seems to make perfect sense to me. If they're able to pump all that power into a single core while the other one is asleep/idle, all while keeping it within its operating parameters, then I'm all for it.
Given the modern power budgets, this makes sense (Score:2)
Re: (Score:2)
Strange link (Score:2)
"This is not overclocking. Overclocking is when you take a chip and increase its clock speed and run it out of spec."
This is just a technique to stay under the specified power envelope. Nowadays the real problem is not speed but power usage. Note that in single-thread mode the CPU will run fewer instructions per watt... and I guess for every 25% more CPU frequency you use 75% more power, or something like that.
Re: (Score:2)
BTW, Robson? Rubs on, rubs off...
Thanks for the heads up... (Score:2, Insightful)
Now I know to avoid it.
Multi-core is good for jobs (Score:3, Interesting)
The paradigm for using many types of software is pretty well established now, and many new software projects can be put together by bolting together existing tools. As a result of this, there has been a lot of hype about the use of high level application development like Ruby on Rails, where you don't need to have a lot of programming expertise to chuck together a web-facing database application.
However, all the layers of software beneath Ruby on Rails are based on single-threaded languages and libraries. To benefit from the advances of multi-core technology, all that stuff will have to be brought up to date and of course making a piece of code make good use of a number of processors is often a non-trivial exercise. In theory, it should mean many more jobs for us old-schoolers, who were building web/database apps when it took much more than 10 lines of code to do it...
Peter
Re:Multi-core is good for jobs (Score:5, Insightful)
Re: (Score:3, Informative)
--
Simon
Re: (Score:2)
You have got to be kidding. Most desktop applications already use multithreading to some degree - especially if they do any network IO.
Re: (Score:2)
Actually I can't think of any desktop applications that would really benefit from supporting multithreading to actually warrant the extra effort. Most desktop applications for the average person run perfectly fast as single threaded programs.
A lot of desktop applications are already multithreaded. In addition, who says there has to be extra effort? In the upcoming version of OS X, for example, many programs that utilize OpenGL, including games, will automatically spawn a second feeder thread that simply pumps OpenGL info to the graphics card, resulting in up to (but almost certainly less than) twice the performance on multi-core systems. When we get to 4 or more cores as the common standard you'll see 1 core running the OS processes and maybe
Re: (Score:2)
Outlook: 40
Mcaffee Virus Scan: 29
Windows Communicator: 19
Internet Explorer: 19
Explorer: 12
And that's how it should be. Threads are _essential_ for desktop apps, if for no other reason than to allow the UI to remain responsive while something else happens (e.g. background printing in Word, checking email in Outlook, animating some image
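A minimal C++ sketch of that responsive-UI pattern: do_long_job is a made-up stand-in for the slow work (background printing, an email check), and a real GUI toolkit would hand the result back to its event loop rather than print it.

#include <chrono>
#include <future>
#include <iostream>
#include <string>
#include <thread>

// Stand-in for a slow job such as background printing or an email check.
std::string do_long_job() {
    std::this_thread::sleep_for(std::chrono::seconds(2));
    return "job finished";
}

int main() {
    // Hand the slow work to another thread so the "UI" stays responsive.
    auto result = std::async(std::launch::async, do_long_job);

    // Fake event loop: keep servicing the user while polling for completion.
    while (result.wait_for(std::chrono::milliseconds(200)) != std::future_status::ready)
        std::cout << "UI still responsive...\n";

    std::cout << result.get() << "\n";
}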
Re: (Score:2)
Re: (Score:2)
True, but many of the benefits from a multicore system don't necessarily come from running one process with shared data on multiple cores (i.e. threading) but from running multiple, fairly isolated processes in parallel. One could be the Ruby interpreter, another could be your database, another a monitoring or security application, another a backup daemon, and so on.
Concurrent programming in shared data environm
Re: (Score:2)
A finite state machine is an algorithm not a task. You can achieve parallelism by using a different algorithm, or by using a splitting heuristic that divides the work among multiple threads running the non-parallel algorithm.
Re: (Score:2)
Re: (Score:2)
Web apps, like Ruby on Rails, are a good example of why multi-threading is not needed. Web servers handle many simultaneous requests, so the workload is easily divisible based on individual requests. The web server
Re: (Score:2)
That's ju
Multi-core CPUs (Score:5, Informative)
And no, EFI didn't appear first on Intel Macs. Intel Macs weren't even the first x86-based machines to employ it.
Where are the EFI video cards and RAID cards? (Score:2)
And a non-EFI RAID card may not be able to boot in an EFI system.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Why?
And on top of that, most add-in cards today are firmware upgradable.
But can you explain precisely why it would have to be in the ROM? It would have to be there to not have to be somehow installed, I'll grant you that...
Re: (Score:2)
Also, a RAID / SATA / IDE / SCSI card needs a ROM to be able to boot from it.
Re: (Score:2)
Otherwise the driver can live on the system. Even on the system disk.
Thread accel. about power or temperature? (Score:2)
Re: (Score:2)
This is not the case, because the heat does not spread out on the chip that much. So the peak temperature of a core doesn't depend on the behavior of the other cores. The fact that Intel is using EDAT indicates that Merom is power-limited but not temperature-limited.
Give us RIOTS! (Score:2)
Give us an FPGA coprocessor on chip.
Re: (Score:2)
Putting the FPGA on-chip would be a bit faster, but make for more expensive chips due to not only putting the FPGA on-chip but making different chips with different kinds of FPGAs or no FPGA. I'd settle for an off-chip, upgradeable FPGA rather than have to upgrade the CPU to upgrade the FPGA co
Below max clock vs. TDP (Score:2, Informative)
The return of trusted computing? (Score:2)
Ten bucks says this heralds a new age of DRM.
Sum the Cores! (Score:2, Interesting)
Re: (Score:3, Informative)
Because many of the CPU math results depend on other results in the chain. Spreading those dependent operands across multiple CPUs may not be efficient.
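A tiny C++ illustration of that kind of chain (made-up functions, just to show the shape of the problem): the first loop's iterations each need the previous result, so handing them to a second core buys nothing without restructuring the algorithm, while the second loop's iterations are independent and split cleanly across cores.

#include <cstddef>
#include <vector>

// Serial dependency chain: iteration i needs the result of iteration i-1,
// so the loop cannot simply be divided between cores.
double chained(const std::vector<double>& x) {
    double acc = 1.0;
    for (double v : x)
        acc = acc * 0.5 + v;
    return acc;
}

// Independent elements: each output depends only on its own input, so the
// range could be split across as many cores as you like.
void independent(const std::vector<double>& x, std::vector<double>& y) {
    for (std::size_t i = 0; i < x.size(); ++i)
        y[i] = x[i] * x[i];
}

int main() {
    std::vector<double> x(1000, 2.0), y(1000);
    independent(x, y);
    return chained(x) > 0.0 ? 0 : 1;
}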
Re: (Score:2)
Re: (Score:2, Interesting)
I doubt it. My reading of the article is that the CPU detects when only one core is in use and does everything itself. But, even if it does require some level of OS support, I wouldn't worry about Linux's support of it (or of UEFI, for that matter, as Linux runs quite well on Macs and Intel does a good job of supporting Linux, anyway). Linux even has support for hotplugging CPUs, so, even if it comes to that (and I doubt it will), then
Most applications will never become multi-threaded (Score:5, Insightful)
Why should they? The advent of multicore CPUs won't actually hurt single-threaded apps. They just won't get any faster. For most things, that's fine. Legacy apps that aren't changing are most likely already fast enough. Besides, not everything can be parallelized properly, anyway. Multithreaded applications will become more popular, but I think this trend will affect new applications much more than old ones because it's just not that important. Even new apps don't necessarily need parallelization because many things are "fast enough" on a single core.
By the way, I actually hope that many things never become multithreaded. In my experience, most coders simply aren't capable of thinking threading through clearly. For many people, the concept is just too complex. Hopefully, compilers will improve to the point where many things can be parallelized without the coder having to know very much, if anything, about the threading involved, but, today, we're nowhere near that. We desperately need higher-level threading primitives in computer science.
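Some of those higher-level primitives do already exist for the simple data-parallel cases - one example, sketched here with OpenMP (assuming a compiler with OpenMP support, e.g. built with -fopenmp): the coder only marks the loop, and the compiler and runtime handle the thread creation and the reduction. Without OpenMP the pragma is simply ignored and the loop runs serially.

#include <cstdio>
#include <vector>

int main() {
    std::vector<double> data(1000000, 1.0);
    double sum = 0.0;

    // One directive: the loop is split across cores and the per-thread
    // partial sums are combined automatically by the runtime.
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < static_cast<long>(data.size()); ++i)
        sum += data[i] * data[i];

    std::printf("%f\n", sum);
}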
Re:Most applications will never become multi-threa (Score:2)
Re:Most applications will never become multi-threa (Score:4, Insightful)
I agree completely, though you can expect to catch some flack for that one, from the hordes of poor coders who think nothing (or rather, who don't think about the implications) of splitting off another thread to boost performance (even in a single core environment).
Personally, I consider myself a damned good coder - And I avoid multithreading wherever possible. If I really need the raw CPU power, I'll usually try to model it as a full slave process before resorting to messy threading.
We desperately need higher-level threading primitives in computer science.
We've had it for decades - Just look for multiprocessor support, and you have implicit multithreaded support automatically.
As one "mature" implementation, we could all start coding in HPF. I'd personally rather gnaw my own right leg off, but, to each their own.
Re: (Score:3, Interesting)
Well, yes and no. I think the easiest model for multithreading today is message passing, but it doesn't suit all needs and requires you to design your app to support it from the start. Most mainstream languages (read C/C++, Java, and .NET) don't really support much beyond your basic mutex, semaphore, and monitor. There are a few other things out there that provide various ways of doin
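To make the distinction concrete, here is a minimal C++ sketch of message passing built out of nothing more exotic than the mutex and condition variable those mainstream languages do give you. The channel is the only shared state; the threads never touch each other's data directly. (The Channel class and the int messages are just illustrative, not any particular library's API.)

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

// A tiny single-producer/single-consumer channel: threads communicate by
// passing messages through it instead of sharing application data.
class Channel {
    std::queue<int> q;
    std::mutex m;
    std::condition_variable cv;
public:
    void send(int msg) {
        { std::lock_guard<std::mutex> lock(m); q.push(msg); }
        cv.notify_one();
    }
    int receive() {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [this] { return !q.empty(); });
        int msg = q.front(); q.pop();
        return msg;
    }
};

int main() {
    Channel ch;
    std::thread worker([&] {
        for (int i = 0; i < 5; ++i) ch.send(i * i);
    });
    for (int i = 0; i < 5; ++i)
        std::cout << "received " << ch.receive() << "\n";
    worker.join();
}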
Re:Most applications will never become multi-threa (Score:5, Informative)
Seriously, several constructs in Fortran are designed specifically for parallel execution. The language itself makes it hard to write code that the compiler can't heavily optimise. There's a reason why variable aliasing is strongly controlled in Fortran and why function parameters have an 'intent' attribute. Then there are constructs such as WHERE, which is by its very nature implicitly a parallel set of operations.
Re: (Score:2, Informative)
Also, the array operations nicked from APL in modern Fortran enable a lot of implicit parallelism, as does idiomatic Fortran's referential transparency.
DON'T base your opinion of Fortran on GNU Fortran - it'd be like taking Emacs Lisp to be the state-of-the-art in Lisp. The Intel Fortran compiler can do magic things.
Re: (Score:2, Insightful)
In my experience, most coders simply aren't capable of thinking
;-)
threading through clearly
I agree completely, though you can expect to catch some flack for
that one, from the hordes of poor coders who think nothing (or rather,
who don't think about the implications) of splitting off another thread
to boost performance (even in a single core environment).
Hordes of even "good coders" can't properly code multi-threaded apps. Actually, after more than a decade as a programmer, I'm not sure there are hordes of good coders. There are good coders, and a disturbingly large percentage of them do not understand concepts like multi-threading or effective techniques of fail-safe systems coding.
Personally, I consider myself a damned good coder - And I avoid
multithreading wherever possible. If I really need the raw CPU
power, I'll usually try to model it as a full slave process before
resorting to messy threading.
You may be a good coder, but you apparently fall into the majority camp by your own admission. Not that there's anything wrong with that though. You at least realize that mult
Re: (Score:2)
Re:Most applications will never become multi-threa (Score:2)
As long as we are using C or C++ I doubt that will happen; those languages just weren't built with multi-threading in mind and thus aren't easy to parallelize. Other languages, like functional ones, on the other hand make the job really easy. When you don't have side effects to worry about, executing things in parallel becomes ra
Re:Most applications will never become multi-threa (Score:2, Interesting)
What's unfortunate is that we're stuck on this idea that concurrency == multiple threads w/shared state. With that approach, sure, apps will never scale. You're right, we do need higher-level threading primitives. I'm just not so sure they're all at the compiler level.
Re:Most applications will never become multi-threa (Score:2)
http://www.vitanuova.com/inferno/limbo.html [vitanuova.com]
http://plan9.bell-labs.com/magic/man2html/2/threa
Re:Most applications will never become multi-threa (Score:2)
I'm not arguing that multicore isn't the right thing to do but your assertion is not correct. Who knows what reality wou
Re: (Score:2)
I think you're wrong. Multicore didn't require any particular development -- basically it's a cut-and-paste job. Intel even
Re:Most applications will never become multi-threa (Score:2)
One workaround, though, would be asymmetric multiprocessing: pair one fast single-core CPU with a slow many-core CPU.
Re: (Score:2)
Re: (Score:2)
In the future, when trolling like this, try to make it funny and/or grammat