
Linux May Need a Rewrite Beyond 48 Cores 462

Posted by CmdrTaco
from the it's-all-stacking-blocks dept.
An anonymous reader writes "New research out of MIT suggests that current operating systems struggle as more cores are added to the CPU. The problem, which concerns contention for a chip's memory when multiple cores work on the same chunks of data, gets worse as core counts grow and may reach a breaking point somewhere in the neighborhood of 48 cores, at which point entirely new operating systems will be needed, the report says. Luckily, we aren't anywhere near 48 cores, so there is some time left to come up with a new Linux (Windows?)."
This discussion has been archived. No new comments can be posted.

  • by Chirs (87576) on Thursday September 30, 2010 @12:52PM (#33748882)

    SGI has some awfully big single-system-image linux boxes.

    I saw a comment on the kernel mailing list about someone running into problems with 16 terabytes of RAM.

  • by pclminion (145572) on Thursday September 30, 2010 @12:55PM (#33748948)
    Can somebody please explain what the fuck they are actually talking about? They've dumbed down the terminology to the point I have no idea what they are saying. Is this some kind of cache-related issue? Inefficient bouncing of processes between cores? What?
  • Jaguar? (Score:2, Insightful)

    by MrFurious5150 (1189479) on Thursday September 30, 2010 @12:58PM (#33749008)
    Cray [wikipedia.org] seems to have addressed this problem, yes?
  • by Anonymous Coward on Thursday September 30, 2010 @01:06PM (#33749112)

    UNIX and C were great in their day. But perhaps not in the mega-core era.

    So, what is better in your opinion? Java? Or maybe even ruby? Oh yes, that would be great. Run-time OS reflection through kernel drivers implemented as ruby modules.

    Too bad CPUs don't come with built-in ruby interpreters.

  • Re:Only Linux? (Score:4, Insightful)

    by Attila Dimedici (1036002) on Thursday September 30, 2010 @01:08PM (#33749178)
    Having read eldavojohn's post summarizing the article, it appears that the reason to single out Linux is that it is the OS the paper's authors actually tested. Since Windows uses a different system for keeping track of what the various cores are doing, it is likely that Windows will run into this problem at a different number of cores. However, until someone conducts a similar test on Windows, we will not know whether that number is more or less than 48.
  • by Perl-Pusher (555592) on Thursday September 30, 2010 @01:08PM (#33749182)
    Core != CPU
  • by interkin3tic (1469267) on Thursday September 30, 2010 @01:15PM (#33749276)

    I don't know, guess I picked a bad title or something?

    Slashdot: dramatically overstated news for nerds... since that seems to be the evolution of news services for some reason?

    I'm working on a submission: Fox news just had a bit about the internet, I'm assuming that their headline is something like "WILL USING OBAMANET 'IPv6' KILL YOU AND MAKE YOUR CHILDREN TERRORISTS?"

  • Hahaha. Oh arrogance born of ignorance, how I loathe you.

  • how is this news? (Score:4, Insightful)

    by dirtyhippie (259852) on Thursday September 30, 2010 @01:22PM (#33749382) Homepage

    We've known about this problem for ... well, as long as we've had more than one core - actually as long as we've had SMP. Increase the number of cores/CPUs and you decrease the available memory throughput per core, which was already the bottleneck anyway. Am I missing something here?

  • by Anonymous Coward on Thursday September 30, 2010 @01:24PM (#33749408)

    Wow, really, just wow. You, sir, are the cream of the crop! /sarcasm

    The OP has a very valid point. I come to slashdot to read about technology news, not scare pieces with little or no information or value. His post was far superior in every respect and yet got passed over for this garbage post. And you devalue his point further by not even giving him the time of day. Way to go, asshole.

  • by Wonko the Sane (25252) on Thursday September 30, 2010 @01:25PM (#33749414) Journal

    Your summary was too long.

    Yes, but the submission that got accepted has a bullshit headline.

    Of course "Linux May Need to Continue Making Incremental Changes Like It Has Been Doing For The Last Several Years To Scale Beyond 48 Cores" doesn't draw in as many clicks.

  • by davev2.0 (1873518) on Thursday September 30, 2010 @01:28PM (#33749476)
    Good summaries do not offer commentary. Save the commentary for the comments.
  • by Captain Splendid (673276) <capsplendid&gmail,com> on Thursday September 30, 2010 @01:30PM (#33749510) Homepage Journal
    Which is why he's treated like shit: Can't have any kind of excellence here, Taco wants to keep that old-school newsgroup feel. That's the only explanation that still fits.
  • by Unequivocal (155957) on Thursday September 30, 2010 @01:31PM (#33749534)

    Elaborate please. I'm ignorant and curious.

  • by Anonymous Coward on Thursday September 30, 2010 @01:32PM (#33749544)

    Oh look, CmdrTaco published yet another story with a poorly-written, hypersensationalist summary! Par for the course.

    Remember back when the slashdot "editors" were part of the community and would actually respond to site concerns raised by users? I haven't seen ANY "editor" post a reply to any slashdot user post in friggin YEARS. Good luck with getting their attention these days if you aren't an advertiser.

  • by h4rr4r (612664) on Thursday September 30, 2010 @01:36PM (#33749612)

    Linux supposedly scales to 1024 cores or something like that. The issue here is not how many cores it supposedly scales to, but the performance impact of actually trying to use that many cores.

  • Re:Only Linux? (Score:3, Insightful)

    by aardwolf64 (160070) on Thursday September 30, 2010 @01:37PM (#33749632) Homepage

    No, their rewrite is also subject to this issue. Go publicize Windows somewhere else.

    No, it isn't subject to this issue. They removed the dispatcher lock. Go bash Windows somewhere else.

  • by spazdor (902907) on Thursday September 30, 2010 @01:49PM (#33749852)

    The very act of summarization constitutes an act of commentary. You're saying "I think the pertinent parts of this story are these, and the most important questions raised are those."

    A good summary invites commentary and frames the questions in a way which makes for better discussion, but don't for a second imagine the OP ought to be value-neutral (if such a thing could even exist.)

  • by Anonymous Coward on Thursday September 30, 2010 @02:01PM (#33750092)

    If your Ubuntu 10.04 system can't play embedded youtube videos then you should get off your ass and fix it instead of wasting your time pasting xkcd links. Ubuntu has played Flash videos out of the box without a single hitch for years.

  • by drsmithy (35869) <(moc.liamg) (ta) (yhtimsrd)> on Thursday September 30, 2010 @02:09PM (#33750212)

    I was kind of wondering about the "modern operating systems" comment... I think he meant "desktop operating systems".

    What's a "desktop operating system" these days? The only mainstream OS that hasn't seen extensive use and development in SMP server environments for a decade plus is OS X. For all the others, "desktop" vs "server" is just a matter of the bundled software and kernel tuning.

    Even OS/2 could scale to 1024 processors if I recall correctly.

    Yeah. Just like those old PPC Macs were "up to twice as fast" as a PC.

  • by Surt (22457) on Thursday September 30, 2010 @02:09PM (#33750218) Homepage Journal

    PER CPU, as was pointed out in many other comments. Linux has already scaled to thousands of cores across many CPUs.

  • by Anonymous Coward on Thursday September 30, 2010 @02:11PM (#33750256)

    Stop posting the same wrong shit as everyone else.

    Before you post:
    1) Read the fucking article.
    2) Read the fucking comments.

    Are there any CPUs with 10240 cores? No.

  • by hardburn (141468) <hardburn@@@wumpus-cave...net> on Thursday September 30, 2010 @02:11PM (#33750266)

    Trolling, I'm sure, but to people who take "GNU/Linux" seriously: how much of any given distro is really GNU code anymore? While GNOME may still be preferred by Ubuntu, there are also a lot of Kubuntu users, and many other distros seem to prefer KDE. Neither XFree86 nor X.Org was ever GNU. Smaller installations, like smartphones and home gateways (which often do run Linux, even if you can't install a custom version like DD-WRT), use busybox for their basic command line tools, and almost certainly do not use glibc. Debian even went for the eglibc fork [osnews.com], partially because Ulrich Drepper makes Theo de Raadt look like a nice guy. HURD has gone nowhere for 20 years now, even if it does have some neat ideas.

    Non-GNU GUI applications and libraries now make up a huge percentage of a desktop distro, Apache and custom web apps make up a big chunk of server code, and smartphones may or may not have any GNU code at all.

    So what's left of GNU code now? Well, gcc is likely to keep being the world's de facto C compiler (though even this was mainly because of the egcs fork way back when). I'm sure there will be legions of emacs users for years to come, and I guess a lot of people still prefer GNOME. GNU's basic command line tools and bash will no doubt still be used on servers and desktops. But is this really sufficient to warrant a "GNU/Linux" nomenclature, not to mention all the pedantry that surrounds it?

    To the AnonCow troll above: GNU code has nothing to do with how the kernel handles multicore processors, so your whole point is moot within this context.

  • by icebraining (1313345) on Thursday September 30, 2010 @02:20PM (#33750398) Homepage

    Headlines sell adverts. Truth, accuracy, honesty do not. Accept it, you are reading slashdot, it works.

    No, I read /. because of comments like eldavojohn's. If they were to disable the comments I'd unsubscribe from my feeds immediately.

  • by Anonymous Coward on Thursday September 30, 2010 @02:39PM (#33750674)

    Despite having such a high UID he's got a solid reputation

    Really? I've never thought so. To me, he's come off as a self-important blowhard who always thinks he has something interesting to add on every subject. He writes things that aren't really interesting but that he knows will be moderated up. There have been tons of instances where he's had the first post moderated to +5 while adding little-to-no value to the discussion, and I'm forced to collapse his thread to get to the interesting stuff posted by people with actual expertise in the subject (yes, those people do exist.)

    To me, he's a karma whore of the first degree who I wish would refrain from posting on subjects where he can't add anything productive to the conversation. The moderation system is great for dealing with trolls, since they're -1'd into oblivion and ignored. It sucks for dealing with karma whores because the first +5 post will receive the bulk of the responses and it's way too easy to get a +5.

  • by CRCulver (715279) <crculver@christopherculver.com> on Thursday September 30, 2010 @02:59PM (#33750960) Homepage
    Debian (and I suppose Ubuntu too) makes use of a lot of Bash scripts behind the scenes. Grub is still the boot loader of choice. A lot of installation CDs use parted to set up the hard drive. Just some examples off the top of my head.
  • by TheNetAvenger (624455) on Thursday September 30, 2010 @03:47PM (#33751660)

    The point isn't that NT Scales to 256 cores, the point is how efficient it is when scaling to this many processors. The NT Kernel in Win7 was adjusted so that systems with 64 or 256 CPUs have a very low overhead handling the extra processors.

    Linux in theory (just like NT in theory) can support several thousand processors, but there is a level that this becomes inefficient as the overhead of managing the additional processors saturates a single system. (Hence other multi-SMP models are often used instead of a single 'system')

    Just Google/Bing: windows7 256 Mark Russinovich

    You can find nice articles and even videos of Mark talking about this in everyday terms to make it easy to understand.

  • by amorsen (7485) <benny+slashdot@amorsen.dk> on Thursday September 30, 2010 @04:40PM (#33752454)

    I'm willing to bet that when mainstream 64-core general-purpose CPUs arrive, they will be NUMA and be partitioned in groups with shared cache. I will be surprised if all the cores have a shared cache other than possibly a large slow write-through level-4 cache. It would be very tricky to make an efficient modern cache with deferred writeback and access by 64 cores, and the gains over e.g. 4 smaller caches would be modest. The memory bandwidth requirements of a 64-core chip also make it very tempting to implement separate memory controllers for groups of cores instead of needing an extremely fast shared memory controller.

    So all in all, I think a very fast desktop tomorrow will look like a shrunk version of a modern NUMA server, at least when it comes to what the operating system can see.

  • by GooberToo (74388) on Thursday September 30, 2010 @05:11PM (#33752922)

    Completely agree.

    Of course, this all ignores the fact that Linux already scales well beyond 48 cores. Even more so, it appears the group is confusing bus contention with OS scalability. With modern CPUs, the cores share caches, and that is all too frequently the real problem: the shared cache leads to cache contention.

    Linux, right now, is capable of scaling well beyond 128 cores (err... CPUs), and more. It's just not standard code, because the overhead is suboptimal for 99.999% of the current user base. Basically this boils down to: Windows scales poorly. I've not met anyone who doesn't already know this.

    Long story short: news at 11, a story everyone already knows. No new news is now news. Basically they documented what everyone has already known for almost a decade.

  • by Kumiorava (95318) on Thursday September 30, 2010 @05:41PM (#33753364)

    I just read the original article, which said they used eight 6-core processors to _simulate_ a 48-core processor. It would be hard to experiment on a real 48-core processor, as those are not readily available.

  • by Bob-taro (996889) on Thursday September 30, 2010 @06:22PM (#33753748)

    Think of it in terms of cars. The processes are roads, the CPUs are cars and the cores are the seats in the cars, only the seats can each travel on different roads independently and share resources with the other seats in the same car. If you have a 2-seater and the seats are on different roads, they can obviously only go half as fast as if they were on the same road. Now if you have 48 seats in a car, then it isn't a car anymore, it's a bus, so obviously you'd have to make fundamental changes to the OS.

    When it comes to computers, you can never go wrong with a car analogy.
