Intel's Single Thread Acceleration 182

SlinkySausage writes "Even though Intel is probably the industry's biggest proponent of multi-core computing and threaded programming, it today announced a single-thread acceleration technology at IDF Beijing. Mobility chief Mooly Eden revealed a type of single-core overclocking built into its upcoming Santa Rosa platform. It seems like a tacit admission from Intel that multi-threaded apps haven't caught up with the availability of multi-core CPUs. Intel also foreshadowed a major announcement tomorrow around the Unified Extensible Firmware Interface (UEFI), the replacement for the BIOS that has so far only been used in Intel Macs. "We have been working with Microsoft," Intel hinted."
This discussion has been archived. No new comments can be posted.

  • Why the surprise? (Score:2, Interesting)

    by something_wicked_thi ( 918168 ) on Monday April 16, 2007 @09:23AM (#18749457)
    It makes perfect sense that you'd still try to speed up single-threaded applications. After all, if you have 4 cores, then any speedup to one core is a speedup to all of them. I realize that's not what this article is about. In this case, they are speeding up one at the expense of the other, but the article's blurb makes it sound like Intel shouldn't be interested in per-core speedups when that is clearly false.
  • UEFI? (Score:2, Interesting)

    by Noryungi ( 70322 ) on Monday April 16, 2007 @09:29AM (#18749497) Homepage Journal
    While I am all for having something a bit more intelligent than the BIOS to initialize a computer, I can't help but wonder... Does this UEFI integrate DRM functions? Is this the Trojan horse that will make all computers DRM-enabled?

    Inquiring minds want to know!
  • Re:Who cares? (Score:2, Interesting)

    by something_wicked_thi ( 918168 ) on Monday April 16, 2007 @09:42AM (#18749635)

    This "single thread acceleration" will have to be supported by the OS?

    I doubt it. My reading of the article is that the CPU detects when only one core is in use and does everything itself. But, even if it does require some level of OS support, I wouldn't worry about Linux's support of it (or of UEFI, for that matter, as Linux runs quite well on Macs and Intel does a good job of supporting Linux, anyway). Linux even has support for hotplugging CPUs, so, even if it comes to that (and I doubt it will), then it should still work.
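    The hotplug support mentioned above is visible through sysfs on any modern Linux box. A minimal sketch (the sysfs paths are standard Linux; actually offlining a core by writing 0 to a core's "online" file requires root, so this just reads the current mask):

    ```c
    #include <stdio.h>

    /* Print the kernel's mask of currently online CPUs. To take, say,
       core 1 offline you would write "0" (as root) to
       /sys/devices/system/cpu/cpu1/online. */
    int main(void) {
        char mask[256];
        FILE *f = fopen("/sys/devices/system/cpu/online", "r");
        if (!f) { perror("sysfs"); return 1; }
        if (fgets(mask, sizeof mask, f))
            printf("online CPUs: %s", mask);  /* e.g. "0-3" */
        fclose(f);
        return 0;
    }
    ```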

    Does it have the potential to break a half-bad application?

    Any change in a CPU's implementation should not be observable to anyone unless the observer knows to look for it (e.g. with the CPUID instruction). Intel won't release a chip that breaks existing apps. Besides, if you think about it, if apps work on a single-core CPU, why shouldn't they work on a dual-core CPU with one core disabled?
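    A quick sketch of that point: a CPU implementation detail is invisible to software unless the software explicitly asks, e.g. with the CPUID instruction. This uses GCC/Clang's <cpuid.h> helper and is x86-only:

    ```c
    #include <stdio.h>
    #include <string.h>
    #include <cpuid.h>

    /* Read the 12-byte CPU vendor string via CPUID leaf 0. Returns 1 on
       success, 0 if the leaf is unsupported. */
    int get_vendor(char out[13]) {
        unsigned int eax, ebx, ecx, edx;
        if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
            return 0;
        /* The vendor string is packed into EBX, EDX, ECX, in that order. */
        memcpy(out,     &ebx, 4);
        memcpy(out + 4, &edx, 4);
        memcpy(out + 8, &ecx, 4);
        out[12] = '\0';
        return 1;
    }

    int main(void) {
        char vendor[13];
        if (get_vendor(vendor))
            printf("CPU vendor: %s\n", vendor);  /* e.g. "GenuineIntel" */
        return 0;
    }
    ```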

  • by pzs ( 857406 ) on Monday April 16, 2007 @09:43AM (#18749643)
    As many slashdotters are in software development or something related, we should all be grateful that multi-core processors are becoming so prevalent, because it will mean more jobs for hard-core code-cutters.

    The paradigm for using many types of software is pretty well established now, and many new software projects can be put together by bolting together existing tools. As a result, there has been a lot of hype around high-level application development frameworks like Ruby on Rails, where you don't need much programming expertise to throw together a web-facing database application.

    However, all the layers of software beneath Ruby on Rails are built on single-threaded languages and libraries. To benefit from the advances of multi-core technology, all that stuff will have to be brought up to date, and of course making a piece of code use several processors well is often a non-trivial exercise. In theory, it should mean many more jobs for us old-schoolers, who were building web/database apps when it took much more than 10 lines of code to do it...

    Peter
  • by something_wicked_thi ( 918168 ) on Monday April 16, 2007 @10:24AM (#18750103)

    We've had it for decades - Just look for multiprocessor support, and you have implicit multithreaded support automatically.

    Well, yes and no. I think the easiest model for multithreading today is message passing, but it doesn't suit all needs and requires you to design your app to support it from the start. Most mainstream languages (read C/C++, Java, and .NET) don't really support much beyond your basic mutex, semaphore, and monitor. There are a few other things out there that provide various ways of doing things, but none are universal and none seem to have really caught on.
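    To illustrate: message passing can itself be built from exactly those basic primitives. A minimal sketch of a bounded channel using a pthreads mutex and condition variables (the names chan_send, chan_recv, etc. are illustrative, not from any standard library):

    ```c
    #include <pthread.h>
    #include <stdio.h>

    #define CAP 16

    /* A tiny fixed-capacity channel: a ring buffer guarded by a mutex,
       with condition variables for "not empty" and "not full". */
    typedef struct {
        int buf[CAP];
        int head, tail, count;
        pthread_mutex_t lock;
        pthread_cond_t not_empty, not_full;
    } channel;

    void chan_init(channel *c) {
        c->head = c->tail = c->count = 0;
        pthread_mutex_init(&c->lock, NULL);
        pthread_cond_init(&c->not_empty, NULL);
        pthread_cond_init(&c->not_full, NULL);
    }

    void chan_send(channel *c, int v) {
        pthread_mutex_lock(&c->lock);
        while (c->count == CAP)                 /* block while full */
            pthread_cond_wait(&c->not_full, &c->lock);
        c->buf[c->tail] = v;
        c->tail = (c->tail + 1) % CAP;
        c->count++;
        pthread_cond_signal(&c->not_empty);
        pthread_mutex_unlock(&c->lock);
    }

    int chan_recv(channel *c) {
        pthread_mutex_lock(&c->lock);
        while (c->count == 0)                   /* block while empty */
            pthread_cond_wait(&c->not_empty, &c->lock);
        int v = c->buf[c->head];
        c->head = (c->head + 1) % CAP;
        c->count--;
        pthread_cond_signal(&c->not_full);
        pthread_mutex_unlock(&c->lock);
        return v;
    }

    void *producer(void *arg) {
        channel *c = arg;
        for (int i = 1; i <= 100; i++)
            chan_send(c, i);
        chan_send(c, -1);                       /* sentinel: end of stream */
        return NULL;
    }

    int main(void) {
        channel c;
        chan_init(&c);
        pthread_t t;
        pthread_create(&t, NULL, producer, &c);
        long sum = 0;
        for (int v; (v = chan_recv(&c)) != -1; )
            sum += v;
        pthread_join(t, NULL);
        printf("sum = %ld\n", sum);             /* 1+...+100 = 5050 */
        return 0;
    }
    ```

    The point stands, though: the channel had to be hand-rolled and the app designed around it from the start, which is exactly the gap between "has primitives" and "supports the model".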

    What we really need is either a language that can express things in such a way that the compiler can easily make good decisions about what can be parallelized, or a compiler that can do that with existing languages. I think that the latter approach may prove impossible. To make informed decisions about threading, a compiler really needs to know things about the data, and most procedural languages just don't cope with that very well.
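    One middle ground that already exists is annotation: OpenMP lets the programmer tell the compiler which loops are safe to parallelize, so the compiler doesn't have to infer it from the data. A sketch (compile with gcc -fopenmp; without the flag the pragma is simply ignored and the code stays sequential):

    ```c
    #include <stdio.h>

    /* Dot product whose iterations are independent, so the compiler may
       split the loop across cores and combine the partial sums. */
    double dot(const double *a, const double *b, int n) {
        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++)
            sum += a[i] * b[i];
        return sum;
    }

    int main(void) {
        enum { N = 1000 };
        double a[N], b[N];
        for (int i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; }
        printf("%.1f\n", dot(a, b, N));  /* 2000.0 */
        return 0;
    }
    ```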

    It seems that HPF may provide some of these things already. I did a few quick Google searches and it seems interesting, but I wonder how much better it is than current work that is being done on auto-vectorization of loops and such in modern compilers. I'll have to look into that language more closely before I can really draw any conclusions. I believe that IBM has been trying to do some interesting work in this area with the Cell processor, too, and I suspect that's why Sony makes interesting statements about how the true power of the Cell will never be fully realized.

    Regardless, the next decade is going to be an interesting one for compiler writers, I suspect.

  • by kartracer_66 ( 96028 ) on Monday April 16, 2007 @10:34AM (#18750249) Homepage
    Concurrent applications needn't be so difficult to program. Take a look at the actors model [wikipedia.org] and STM [wikipedia.org].

    What's unfortunate is that we're stuck on this idea that concurrency == multiple threads w/shared state. With that approach, sure, apps will never scale. You're right, we do need higher-level threading primitives. I'm just not so sure they're all at the compiler level.

  • EFI = no XP (Score:1, Interesting)

    by Anonymous Coward on Monday April 16, 2007 @10:36AM (#18750267)
    By convincing Intel to make Santa Rosa EFI-Only, MS can ensure that none of their pesky users will be able to install XP on it.

    Nothing like using monopoly influence to prop up sales of your latest OS that no one really needs or wants.
  • Re:UEFI? (Score:3, Interesting)

    by Kjella ( 173770 ) on Monday April 16, 2007 @10:53AM (#18750491) Homepage
    Locked out, no. Let in, also no. Linux is going to suffer a death of a thousand cuts when "secure" movies, "secure" music, "secure" webpages, "secure" e-mail, "secure" documents, "secure" networks, "secure" IM and whatnot get propagated by the 98% running DRM-enabled operating systems. I mean, look how many people are frustrated that Linux doesn't play MP3s or DVDs out of the box, no matter how little of that is Linux's fault, and that problem has an easy fix.

    What if the problem is ten times worse, and there is no easy fix? Are you going to say "but hey there's this open source network..." "but all my friends are on MSN" "they can come too" "...restore my Windows. Now!" and that'll be the end of Linux on the desktop as anything but a geek's toy.
  • Re:Overclocking? (Score:1, Interesting)

    by Anonymous Coward on Monday April 16, 2007 @12:26PM (#18751845)
    Hey, I only thought about this for 2 seconds but I can see a potential idea. In today's processors there are very powerful branch predictive circuits, speculative execution logic, etc. All of these contribute to the performance gains we've seen over the years. So if you have an idle core why not essentially make it do a fancier version of predictive/speculative work? Basically you have two cores with the 2nd always taking the opposite branch of the first. Whichever path turns out to be the right one, switch processing to that core and keep going. Internally to each core this is already going on but separate processors couldn't do this because of latency and synchronization issues, but I fail to see why cores on the same silicon couldn't.

    Again, it's possible there are some MMX or SIMD instructions that might be parallelized over cores as well... If you have to operate on a vector of things odds are you can cheat somehow.
  • Re:Overclocking? (Score:3, Interesting)

    by Aadain2001 ( 684036 ) on Monday April 16, 2007 @01:22PM (#18752543) Journal
    Those are all good ideas that have already been explored. The bottom line in most of these designs is that you don't get much ROI in the form of decreased execution time. For highly specialized applications with little data inter-dependence there is a significant gain, but you could get the same anyway by making the program multi-threaded and telling it to use both cores. Parallel programming is not simple because most applications cannot be broken down into many (if any) parallel tasks, so it is rarely worth the time and effort. The area that Intel/AMD/etc. should be targeting is multi-tasking: playing a video game while encoding a movie and running Folding@Home, without a performance drop in any of the programs.
  • Sum the Cores! (Score:2, Interesting)

    by onebadmutha ( 785592 ) on Monday April 16, 2007 @02:53PM (#18753891)
    I understand that it doesn't work at this point, sorta like "don't cross the streams" from Ghostbusters. But really, we're talking about a long series of math problems at this point, so why not interleave? I understand the math is hard; that's why Intel has all of those PhDs. Getterdun. I wants me some Quake 9 at 4.2 billion frames per second. Plus, programming multithreaded is all superhardish!
