
4 Cores? 6 Cores? Do You Care?

An anonymous reader writes "Intel updated its processor price list earlier today. Common sense suggests that Intel may not care much anymore whether its customers know what they are actually buying. One new six-core processor slides in between existing six-core and quad-core models – and its sequence number offers no clues about cores, clock speed, or manufacturing process. If we remember the gigahertz race of just a decade ago, it is truly stunning to see how the CPU landscape has changed. Today, processors carry sequence numbers that are largely meaningless."
  • by snooo53 ( 663796 ) * on Monday July 19, 2010 @07:49PM (#32958298) Journal
    Some combination that measures both how many operations per second a chip can do and how much power it takes to do them (i.e. watts per unit of computing). I don't know if even FLOPS is sufficient anymore to describe current computing tasks. Heck, I'd be happy with any sort of standardization.
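    Something like this rough sketch is what I mean - note the GFLOPS and TDP numbers below are made-up placeholders, not real chip specs:

        // Normalize throughput by power draw: operations per second per watt.
        #include <cstdio>

        int main() {
            struct Chip { const char* name; double gflops; double tdp_watts; };
            Chip chips[] = {
                {"hypothetical quad-core", 45.0, 95.0},   // invented figures
                {"hypothetical six-core",  60.0, 130.0},  // invented figures
            };
            for (const Chip& c : chips)
                std::printf("%s: %.2f GFLOPS/W\n", c.name, c.gflops / c.tdp_watts);
            return 0;
        }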
  • by pclminion ( 145572 ) on Monday July 19, 2010 @07:50PM (#32958306)
    What benefit is there in confusing your customers as to which product they should purchase? When I, as a consumer, feel overwhelmed or confused about a product choice, I usually respond by simply purchasing nothing at all. And I'm sure I'm not alone in that.
  • Intel... meh. (Score:1, Interesting)

    by Anonymous Coward on Monday July 19, 2010 @07:50PM (#32958312)

    I just don't give a crap about Intel products...

    They proved long ago that they don't win on price/performance. And hell, someone has to pay for those retarded TV commercials; I'll pass on being one of them again.

    And they are still the only company that ever sold me a defective chip that couldn't do math. And their response? "Oh well, buy our new one."
    Eventually they DID replace it, but the entire experience has put me off Intel products forever. I won't spec or support Intel-based hardware.

  • by Anonymous Coward on Monday July 19, 2010 @07:57PM (#32958384)

    ESX Server licensing is per CPU, but they restrict the number of cores (to 6, from memory) before you need another license. So yes, I would care how many cores I was buying in a server.

  • by Locke2005 ( 849178 ) on Monday July 19, 2010 @08:01PM (#32958440)
    If the chip can't run all the cores at full speed due to heat/power constraints, and therefore either throttles back each core's speed or disables some cores under heavy load, then core counts are really just a deceptive pissing contest, aren't they?
  • One guess why (Score:5, Interesting)

    by Locke2005 ( 849178 ) on Monday July 19, 2010 @08:08PM (#32958520)
    Have you considered that the reason the processor numbers tell you nothing is that ALL the chips are fabbed with 6 cores, and the ones that show one or two bad cores in testing have 2 cores disabled and are sold as quads?
  • by Pharmboy ( 216950 ) on Monday July 19, 2010 @08:12PM (#32958572) Journal

    The problem is that you have to do a tremendous amount of research before you buy. It used to be much simpler: Pentium 60, 66, 75 or 100, pick one. Later it was still simple with Celeron or P2/P3/P4, since you were just picking bigger cache and a faster bus. Now, to get the highest return on partially defective silicon, they offer too many models, many of which overlap each other and can be very confusing, with some dual-core models that outperform quad-cores, etc. A year ago I finally settled on a Q9550, but it took reading 50 articles to figure out that it was, at the time, the best bang for the upper-middle buck. So yes, the average consumer will get boned.

  • by ultranova ( 717540 ) on Monday July 19, 2010 @08:16PM (#32958600)

    How much use is it having 6 or 8 cores if the program being run only efficiently uses 2 or 4 of them most of the time?

    The program? I dunno about you, but I run plenty of programs at once. And having 4 cores means that I have a few on standby whenever I feel like doing input, even when the machine is busy processing stuff.

    The real issue I see is memory access. Even with a single core we ran into memory bandwidth/latency bottlenecks; with 4-6 cores those are 4-6 times as bad. In the long run we have to give up the von Neumann architecture; it simply can't scale to our needs. NUMA might be an acceptable compromise, but eventually we need to change to a dataflow architecture, and that also means a step beyond C/C++ and the other Algol-descended languages which have dominated our thinking these past decades.

    We need to switch to a system with lots of cores, all with their own local memory, and able to send each other messages. As an added bonus, such a system is also a natural fit for artificial intelligence.

    It's not like everything can just be multithreaded like that and even if it can, there's bound to be some overhead for doing it.

    True, but most hard problems can be redefined as search problems, and those can be efficiently multithreaded. Our current programming languages just make multithreading a pain, since you have to worry about everything manually.
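    For what it's worth, a minimal sketch of that message-passing model in plain C++11 threads (the worker count and "stop" sentinel are arbitrary choices, and a real dataflow machine would give each worker its own local memory rather than one shared mailbox):

        // Workers drain a shared mailbox of messages until a "stop" sentinel.
        #include <condition_variable>
        #include <mutex>
        #include <queue>
        #include <string>
        #include <thread>
        #include <vector>

        class Mailbox {
            std::queue<std::string> messages_;
            std::mutex m_;
            std::condition_variable cv_;
        public:
            void send(std::string msg) {
                { std::lock_guard<std::mutex> lock(m_); messages_.push(std::move(msg)); }
                cv_.notify_one();
            }
            std::string receive() {  // blocks until a message arrives
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return !messages_.empty(); });
                std::string msg = std::move(messages_.front());
                messages_.pop();
                return msg;
            }
        };

        int main() {
            Mailbox box;
            std::vector<std::thread> workers;
            for (int i = 0; i < 4; ++i)
                workers.emplace_back([&box] {
                    for (;;) {
                        std::string msg = box.receive();
                        if (msg == "stop") break;  // sentinel shuts this worker down
                        // real work on the message would happen here
                    }
                });
            for (int j = 0; j < 8; ++j) box.send("job " + std::to_string(j));
            for (int i = 0; i < 4; ++i) box.send("stop");
            for (auto& t : workers) t.join();
            return 0;
        }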

  • by Anonymous Coward on Monday July 19, 2010 @08:28PM (#32958686)

    Multi-core systems are *great* for software developers. "make -j8" is your friend. It doesn't scale perfectly, but it's pretty good.

    Thing is, you need an assload of RAM to do it. On even moderate-size projects a single g++ process can grow to a gigabyte of memory, so make -j8 is going to push your RAM needs up a bit.

    I guess the DCC folks love them too - rendering is embarrassingly parallel.

    Gamers... beyond two, I'm not sure it does much for 99.9% of all games.

  • by rm999 ( 775449 ) on Monday July 19, 2010 @08:37PM (#32958786)

    Even very knowledgeable people have a hard time predicting how fast a CPU will be. CPUs no longer operate in a single dimension that can be quantified by one number: there's the architecture, cache size, clock speed, number of cores, FSB, etc. A slower quad-core CPU may be faster for me, whereas a faster-clocked dual-core may be faster for a gamer. A cheap Atom chip may be better for my poor cousin who just surfs the web.

    My point is customers should not be using the name of a CPU to decide what to buy. The best thing to do is to find benchmarks that reflect what you want to do and divide by price. The second best thing, great for the typical person, is to find a trustworthy computer builder/seller and let them decide what you need (e.g. buy a "gaming" computer). They can properly market the right CPU to the right person in the "right" price range.

  • Yeah, that's the rub. The premise of this story, that "average" computer users probably have little clue about many technical details, is no doubt true, but the question Intel is really interested in is "how much information can we hide from the buyer, using the excuse that they aren't interested?"

    This affects slashdotters a lot, as many of us can actually make use of such information, but still mostly buy the same hardware that all the mouth-breathers do. So if Intel starts a campaign of obfuscation under the guise of helping the clueless, we're the ones who suffer...

    [and of course the other reason Intel is probably muttering about this is that currently AMD has a lead in "lots of cores at low prices"... and Intel really really wants to say "oh but that doesn't matter!"]

  • Re:Not at all (Score:4, Interesting)

    by suomynonAyletamitlU ( 1618513 ) on Monday July 19, 2010 @09:08PM (#32959066)

    Actually, I (sometimes) use my quad-core to run a virtual machine on two cores, and the native OS on the other two cores. That means that both OSes can potentially run one crappy application and neither becomes unresponsive.

    Any fewer than four cores, and it's iffy, for exactly that reason.
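    On Linux you can do that kind of pinning by hand, too - a rough Linux-only sketch using pthread affinity (the core numbers are arbitrary; compile with g++ -pthread):

        // Pin a busy thread to cores 0 and 1, leaving the other cores free.
        #include <pthread.h>
        #include <sched.h>
        #include <cstdio>

        static void* busy_work(void*) {
            volatile unsigned long n = 0;
            for (long i = 0; i < 200000000L; ++i) n = n + i;  // keep a core busy
            return nullptr;
        }

        int main() {
            pthread_t t;
            pthread_create(&t, nullptr, busy_work, nullptr);

            cpu_set_t set;
            CPU_ZERO(&set);
            CPU_SET(0, &set);  // restrict the thread to cores 0 and 1
            CPU_SET(1, &set);
            if (pthread_setaffinity_np(t, sizeof(set), &set) != 0)
                std::fprintf(stderr, "pthread_setaffinity_np failed\n");

            pthread_join(t, nullptr);
            return 0;
        }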

  • by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Monday July 19, 2010 @09:10PM (#32959080) Journal

    I would hope that the scheduling would get better, actually, but even if the engine is only optimized to take advantage of four cores, it would probably run better if it could actually have all four cores to itself, with the OS and everything else running on core five.

    I suppose it depends how much overhead there is.

  • Yes, I care! (Score:3, Interesting)

    by BLKMGK ( 34057 ) <morejunk4me@@@hotmail...com> on Monday July 19, 2010 @09:40PM (#32959292) Homepage Journal

    I want 6 cores and, if possible, I'd like it unlocked for a reasonable price. Their current "Extreme" six-core is actually looking attractive to me, but I keep waiting for the price to come down. I had hoped that a new six-core would come in that was reasonably priced and that, even if locked, could be clocked up pretty well. But at $880+, I dunno - I'll wait for the street price to hit before I get interested.

    Why do I want 6 cores? Because I compress video pretty often, and it's an hours-long chore when keeping the quality and resolution high - file sizes plummet, though. Hi-def video compression is CPU-intensive, and I often see rates as low as 13fps when compressing. That's on a 920 clocked to 4.2GHz. On water this thing hits 80C with a good-sized radiator and multiple fans - I'll be moving to a bigger radiator soon in hopes of solving that. A six-core would give me at least a 30% increase in speed, if not more, depending on whether Hyper-Threading continues to buy me anything (it does now). If this new CPU can hit speeds like the unlocked Extreme and hits Newegg for, say, $750, I'll score one - but not when it's within $100 or so of the unlocked Extreme.

    Frankly, if there were decent software to chain multiple machines together to process video, I'd try that, but the last code I saw for it was old and not worth my time. Since I also happen to be doing this on Windows, the chances of finding good software to slave machines together are even slimmer.

    So yeah - I care, and I agree this new numbering scheme SUX! But hey, in the end it's the performance I care about, and how high it will clock without melting down. These Extremes are sick fast, but wow are they pricey :-(

    P.S. Were it not for video processing I'd still consider a C2D just fine, or maybe an overclocked i5. This 920 STOMPED my 3.8GHz C2D though, so it was well worth the investment, and it has also beaten a few dual-Xeon Macs :-)

  • by Deorus ( 811828 ) on Monday July 19, 2010 @09:46PM (#32959336)

    It wasn't until recently, when I had issues with Microsoft Virtual PC because my BIOS (which had already been upgraded once) was bugged and would not enable hardware virtualization, that I realized my CPU (an Intel Core 2 Quad Q6600) was one of the very few with hardware virtualization back when I bought it; the processor models directly above and below it did not have it. I bought this CPU assuming that any "modern" (2007) quad-core CPU would have it, and chose this particular model based on price alone.
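    If anyone else wants to check before buying, CPUID reports it - a quick GCC/Clang-on-x86 sketch (and note a buggy or locked-down BIOS can still disable VT-x even when the CPU supports it, which was exactly my problem):

        // CPUID leaf 1, ECX bit 5 = Intel VT-x (VMX) support.
        #include <cpuid.h>
        #include <cstdio>

        int main() {
            unsigned eax, ebx, ecx, edx;
            if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
                std::puts("CPUID leaf 1 not supported");
                return 1;
            }
            bool vmx = ecx & (1u << 5);
            std::printf("VT-x: %s\n", vmx ? "supported by CPU" : "not supported");
            return 0;
        }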

  • Re:Yes (Score:3, Interesting)

    by BLKMGK ( 34057 ) <morejunk4me@@@hotmail...com> on Monday July 19, 2010 @10:01PM (#32959446) Homepage Journal

    Umm, my overclocked 920 beat my (high-end, though not highest-end at the time) NVIDIA GPU at encoding, and it offers me FAR more options for how I want the video encoded, rather than a few out-of-the-box profiles. That said, sure, I'd love to use my GPU to encode - WITH my CPU. So far it's been either/or, but if you've got a solution, by all means share it. Until then I'll keep piling on more cores and more clock.

  • by Anonymous Coward on Monday July 19, 2010 @10:02PM (#32959452)

    Dataflow, dataflow, dataflow. That is all I ever hear in academia. Have you considered that dataflow requires languages that may be a POS to code for compared to imperative programming languages, and that parallelism doesn't just hop out because someone said it does? I'll give you a hint: it doesn't.

  • Re:Not at all (Score:2, Interesting)

    by BitZtream ( 692029 ) on Monday July 19, 2010 @10:04PM (#32959464)

    I use an OS that doesn't suck. I can, in fact, have an app trying to use 100% of the CPU and STILL manage to get work done, because the OS won't let it! It's called a "pre-emptive multitasking OS". Maybe you should try one. Not sure what OS you're using that doesn't do this, but it's gotta be pretty useless nowadays.

    One core is more than enough for almost everyone. Office apps don't really use a lot of CPU, even Office 2010. What web pages do you use that run so much JS you actually notice it? Contrary to what Mozilla and Google are ranting about, JS speed hasn't been an issue for years, even if it's the only change they've made to their browsers recently worth mentioning.

    Contrary to popular belief, most people aren't trying to run Quake in JavaScript. Your argument is dumb as it stands.

    You should have referenced Flash. Your argument would still be dumb, but at least you'd have come up with a reason to need more CPU.

  • Re:What I care is... (Score:3, Interesting)

    by BitZtream ( 692029 ) on Monday July 19, 2010 @10:48PM (#32959760)

    It's doubtful AMD will ever regain the performance lead. Intel got lazy, lost one round, and learned it had to bust its ass because AMD was going to push it.

    From here on out, barring complacency by AMD, the best you can expect is that AMD will be close to Intel in performance for most things, better at a select few, and almost invariably cheaper - giving more performance for a given cost, but not producing the fastest raw speed or the lowest power draw. Intel will win across the board on raw numbers, with AMD only occasionally doing some things better.

    I hope the two of them keep doing exactly what they are doing for at least 10 more years. They are a duopoly, but one that competes, and so far that appears to be benefiting consumers rather than price-fixing, ripping us off, and sitting on their laurels.

  • by Anonymous Coward on Monday July 19, 2010 @10:49PM (#32959762)

    Try networking, ie with a girl, and you will see why longer is better.

    Well, truth is, girth is king. A 10 twizzler stick doesn't satisfy.

    With four ehm, cores, you can dedicate them all to the same port. Ten twizzlers--now we're talking!

  • Re:Yes, I care! (Score:3, Interesting)

    by BLKMGK ( 34057 ) <morejunk4me@@@hotmail...com> on Monday July 19, 2010 @11:00PM (#32959858) Homepage Journal

    Been there, done that - my CPU was faster and I get more options. I use x264 to compress my video - MeGUI is my frontend of choice. I have tested GPU renderers in the past that leveraged CUDA, and my CPU beat them - and this was before I went to liquid cooling, too! I'll grant that a GTX 275 is no barn burner, but it's no slouch either, and it was fairly expensive when I purchased it. I'm willing to look at CUDA rendering again, and will, but frankly if it's not combined with the CPU then it's worthless to me. Using both together would make the most sense, IMO. x264 is free, too, which is nice!

    Oh, and yeah, I run 64-bit x264 and have CoreAVC onboard too, but it's really no help. Neither is using an SSD - the bottleneck IS the CPU. x264 has slowly gotten better, for sure, but a BD still takes hours - although 3 hours sure beats the 20+ hours I used to get on my C2D!

  • by confused one ( 671304 ) on Monday July 19, 2010 @11:06PM (#32959882)
    Lots of legacy VB code. Some jobs run as fast or faster on a Pentium D than on a Core 2. The code is single-threaded and doesn't take advantage of any of the CPU design improvements introduced since the Pentium II or early Pentium III. And since the Pentium 4/Pentium D ALU runs at 2x the core clock, those chips do well on these tasks.
  • Re:One Core at 24GHZ (Score:2, Interesting)

    by Benaiah ( 851593 ) on Monday July 19, 2010 @11:13PM (#32959936)

    Back in 2002 a lecturer in Computer Systems Engineering explained to me why the GHz race was ending (and it did end). The engineers were running into problems with clock propagation through the chip. As the leading edge of a clock propagates through a chip at, say, 10GHz, the wavelength is below 10mm, so before the falling edge arrives the signal has only travelled about 5mm. Different travel paths and instruction times were leading the engineers into intractable asynchronous errors. The prediction was that contemporary chip designs would peak at around 5GHz.
    They never quite got that high, but he was close nonetheless.
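    The back-of-the-envelope arithmetic, assuming on-chip signals propagate at roughly c/3 (an assumed factor, chosen because it matches his numbers):

        // How far a clock edge travels in half a period at various frequencies.
        #include <cstdio>

        int main() {
            const double c = 3.0e8;    // speed of light, m/s
            const double v = c / 3.0;  // assumed on-chip propagation speed
            for (double ghz : {1.0, 3.0, 5.0, 10.0}) {
                double half_period = 0.5 / (ghz * 1e9);  // seconds
                double mm = v * half_period * 1e3;       // metres -> millimetres
                std::printf("%5.1f GHz: ~%.1f mm per half period\n", ghz, mm);
            }
            return 0;
        }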

  • by Mad Merlin ( 837387 ) on Monday July 19, 2010 @11:15PM (#32959968) Homepage

    PCI is nowhere close to being fast enough for USB 3 - USB 2, sure, but not USB 3. Also, even a single 7200 RPM SATA hard drive can outstrip the bandwidth of a PCI slot nowadays. PCIe, on the other hand, is a totally different story, and just about every motherboard these days includes at least a couple of PCIe slots.
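    The rough numbers, computed from the spec rates (these are theoretical maximums, real-world throughput is lower, and conventional PCI is shared by every device on the bus):

        // Theoretical bus bandwidths in MB/s.
        #include <cstdio>

        int main() {
            const double pci  = 33.0e6 * 4 / 1e6;     // 32-bit @ 33 MHz -> ~133 MB/s, whole bus
            const double usb2 = 480e6 / 8 / 1e6;      // 480 Mbit/s raw -> 60 MB/s
            const double usb3 = 5e9 * 0.8 / 8 / 1e6;  // 5 Gbit/s with 8b/10b -> 500 MB/s
            const double pcie = 250.0;                // PCIe 1.x, per lane per direction
            std::printf("PCI:      %6.0f MB/s (shared)\n", pci);
            std::printf("USB 2.0:  %6.0f MB/s\n", usb2);
            std::printf("USB 3.0:  %6.0f MB/s\n", usb3);
            std::printf("PCIe x1:  %6.0f MB/s\n", pcie);
            return 0;
        }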

  • make -j 200 (Score:1, Interesting)

    by Anonymous Coward on Monday July 19, 2010 @11:42PM (#32960098)

    I care a lot, actually, because I need to know whether I can type make -j 200 or make -j 4.

    It's useful when developing massively parallel build systems.

  • by TheLink ( 130905 ) on Tuesday July 20, 2010 @02:26AM (#32960756) Journal

    > My current system is a C2D 1.8GHz E6300 that's now pushing 4 years of age,
    > yet according to all the benchmarks I've seen by AnandTech, Tom's Hardware and others, my performance results are less than 20 percent below the latest/greatest CPUs.

    While you probably don't need to upgrade your CPU, I don't see how it can be only 20% slower than the latest and greatest - even for single-threaded stuff.

    See: http://www.anandtech.com/bench/Product/61?vs=142 [anandtech.com]

    Note: I'm comparing the 2.33GHz C2D to the latest and greatest, since the 1.8GHz one isn't listed - and I'm sure the 2.33GHz C2D is a bit faster than your 1.8GHz part.

    For graphically intensive games, the difference in average fps would not be as large, but the difference in minimum fps might be, and that can matter more in many real-world scenarios.

    In many ways it's quite impressive what Intel has done with the x86. The equivalent of a hypersonic flying pig beating the less "ugly" MIPS and Alphas ;).

    Assuming nothing breaks, my next upgrade is more likely to be an SSD than CPU, GPU, RAM or HDD. I'm just waiting for the prices to go down to more reasonable levels (and the number of bug reports to dwindle as well ;) ).

  • by Sycraft-fu ( 314770 ) on Tuesday July 20, 2010 @04:53AM (#32961386)

    Everything I see shows that modern OSes not only suffer no overhead from more cores, they actually benefit. The reason is that what OSes really have is heavy context-switching overhead. If a processor is doing something and the OS needs it to do something else, it has to take an interrupt, push everything onto the stack, switch to the kernel, switch to the next process, etc. That is a hefty cost. It all goes away if multiple things instead run at the same time on hardware: they don't switch contexts, they just keep running.

    This is why web- and DB-heavy servers like to have lots of cores, even if individually less powerful. Sun's newer chips are designed with that in mind: each core can handle 8 threads in hardware, so the chip acts like a 64-way CPU despite having only 8 actual cores. Why? Context switching. The tasks it normally deals with are not high-load, but they switch around a lot. The more that can run side by side from the OS's perspective, the less overhead and the more efficient the use of processor resources.

    On a desktop the tasks are more intense, so lots of threads per core is less useful (currently 2 is the highest in the Core i3/i5/i7 series), but more cores are still quite useful. They allow more things to happen at the same time, from an OS perspective, and lower the overhead.

    You notice it too, using a multi-core, multi-threaded system: things are damn responsive.
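    You can see how many hardware threads the OS gets to schedule side by side with one standard call (on a Hyper-Threaded i7 this reports twice the core count):

        // Query the number of hardware threads the implementation reports.
        #include <iostream>
        #include <thread>

        int main() {
            unsigned n = std::thread::hardware_concurrency();
            std::cout << "Hardware threads: " << n
                      << " (0 = could not be determined)\n";
            return 0;
        }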

  • Gaming is changing (Score:3, Interesting)

    by Sycraft-fu ( 314770 ) on Tuesday July 20, 2010 @04:57AM (#32961398)

    Turns out that in modern games, a lot of shit happens at the same time. While this was traditionally coded as a bigass while loop, because systems were single-threaded, it doesn't have to be. You can thread all that shit out and have the game engine do multiple things at once. It is still being worked on, but it is getting much better. Most very modern games (as in released this year or perhaps last) make extremely good use of two cores, to the point that many require it - they can fully load both, no problem. A smaller, but increasing, number can make good use of 3 or 4 cores. Game designers are learning how to code in parallel, tools are developing to make this work better, etc.

    Games are already parallel and are only going to get more so.
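    The shape of it, as a bare-bones sketch (the subsystem functions are empty placeholders, and a real engine would be far more careful about data shared between them):

        // Fan independent subsystems out to threads each frame, join, then render.
        #include <future>

        static void update_physics() { /* integrate positions, resolve collisions */ }
        static void update_ai()      { /* pathfinding, behavior trees */ }
        static void update_audio()   { /* mix and queue sound buffers */ }
        static void render_frame()   { /* draw using the updated state */ }

        int main() {
            for (int frame = 0; frame < 3; ++frame) {  // stand-in for the real loop
                auto physics = std::async(std::launch::async, update_physics);
                auto ai      = std::async(std::launch::async, update_ai);
                auto audio   = std::async(std::launch::async, update_audio);
                physics.get();  // wait for all subsystems before rendering
                ai.get();
                audio.get();
                render_frame();
            }
            return 0;
        }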

  • by Anonymous Coward on Tuesday July 20, 2010 @06:46AM (#32961988)

    i3 = dual-core, i5 is both dual- and quad-core, i7 is quad-core but now also six-core. Yeah, that makes sense. Uh huh.

    Ha. If only the number of cores were the problem with the numbering system. Thing is, there's also the following to consider:

    Some i5 chips have an on-board GPU. All i3's and i5's use the LGA 1156 socket.

    The i7's, however... the i7-8xx series use the LGA 1156 socket, while the rest of the i7's use the LGA 1366 socket. Yes, you read that right: there are two i7's that are dual-channel and use a different socket from the rest of the i7's, which are triple-channel and use the larger socket.

    I think Intel was going for the whole i3 = entry, i5 = mid, i7 = high-end thing, but they screwed themselves with the socket designations.

  • by nOw2 ( 1531357 ) on Tuesday July 20, 2010 @08:12AM (#32962468)

    Unless I'm timing them, I'm hard pushed to tell the difference between my personal computers. I have a 2.0GHz C2D, a 2.6GHz Core i7 desktop, and a 2.4GHz mobile Core i5. They all do everything I need.

    Today the graphics chip makes a bigger difference to me: I have two Macs with the same CPU, but one has an ATI chip and the other an Intel GMA. Guess what - the Intel GMA drives me crazy.

    I guess I'm waiting for the next generation of CPU intensive killer apps.

  • Make -jX (Score:3, Interesting)

    by malloc ( 30902 ) on Tuesday July 20, 2010 @09:09AM (#32962990)

    6 cores. Do You Care?

    Written like someone who's never heard of "make -j". Seriously, anybody who compiles stuff wants more cores, and if you ever reach the point where disk I/O is the bottleneck, just throw in an SSD.

    Random project on my box:

    make clean; time make -j8
    Real: 4.3s

    make clean; time make -j1
    Real: 14.7s

    Compiling is an inherently parallelizable task.

"Engineering without management is art." -- Jeff Johnson

Working...