Intel Hardware

Intel On Track For 32 nm Manufacturing

Posted by samzenpus
from the wafer-thin dept.
yaksha writes "Intel said on Wednesday that it has completed the development phase of its next manufacturing process, which will shrink chip circuits to 32 nanometers. The milestone means that Intel will be able to push faster, more efficient chips starting in the fourth quarter. In a statement, Intel said it will provide more technical details at the International Electron Devices Meeting next week in San Francisco. Bottom line: Shrinking to 32 nanometers is one more step in its 'tick-tock' strategy, which aims to create a new architecture with a new manufacturing process every 12 months. Intel is obviously betting that its rapid-fire advancements will produce performance gains so jaw-dropping that customers can't resist."
This discussion has been archived. No new comments can be posted.

  • I'm just finishing a rebuild of my system, going from an Athlon64X2 to a Core i7. 3DMark06 is downloading now; can't wait to see how well it does on that and Flight Simulator X.

    ...Now if they could only make some progress on coordinating RAID implementations across motherboards, so a MB swap doesn't have to mean that the path-of-least-resistance is a complete reinstall...
    • by afidel (530433) on Thursday December 11, 2008 @01:24AM (#26071065)
I can't wait for the multichip Xeons based on Core i7; Intel might finally have a chip that can compete with AMD in the database space next year. Oh, and for your RAID problem: use HP. A RAID array is portable across all systems and controllers that use the same generation of HDDs. I have picked an array out of a server, put it into an MSA and mounted it through an HBA with no problems, then expanded the array online with additional disks to grow capacity =)
    • Re: (Score:2, Informative)

      by Anonymous Coward
Two words: software RAID. You have four cores; chances are you will usually be I/O bound, so the performance will be better than HW RAID.
      • Now if only Windows supported RAID in software.

        • by jaxtherat (1165473) on Thursday December 11, 2008 @01:44AM (#26071221) Homepage

          It does, here is a RAID 5 example: http://support.microsoft.com/kb/323434 [microsoft.com]

          • Re: (Score:3, Insightful)

            by Firethorn (177587)

            Yes, but unless they've changed stuff lately, he can't use RAID 5 on his boot disk - only mirroring is supported, and only sorta at that.

            Though with the way SSDs are going, I'd seriously consider putting the OS on a SSD, then going with the RAID array.

            And have things really changed so much that true hardware RAID is slower? I'm aware that there are RAID devices that depend on the CPU much like winmodems did, but surely a good RAID card still beats software?

            • Re: (Score:3, Informative)

              by josath (460165)
              I usually make a small partition, say 20-50GB, for the system files, and run that in RAID-1 (mirroring) across all 3 disks. I also store any super important documents on this volume, because it essentially has 3 copies. Then I combine the other 90% of the space in a RAID-5, which is much less wasteful than mirroring.
              • by Firethorn (177587)

Not a bad idea, but SSDs have reached usable sizes for ~$100 for 32GB, enough for an OS and most program files - just install the games, user directories, and other multimedia stuff to the RAID array.

                Heh, I wonder how large and cheap a SSD made with a 32 nm process would be.

    • Re: (Score:1, Informative)

      by Anonymous Coward

      Assuming you're on Linux, buy a processor with more cores, and use softraid. Autodetect = painless movement.

    • Faster computers are going to be generally irrelevant to about 85% of the population. They only really use computers for surfing the internet, checking e-mail, MS Office, iTunes, organizing photos, and playing The Sims occasionally. Most people play video games on consoles (PS3, WII, Xbox 360). There are few things that 90% of the population regularly do that require a faster computer. These advancements are going to affect businesses and scientists who need super computers to perform large amounts of compu
      • by smilindog2000 (907665) <bill@billrocks.org> on Thursday December 11, 2008 @04:55AM (#26072249) Homepage

Good point. With solid-state drives coming down the pipe, even that bottleneck will be somewhat relieved for what most people do (lots of disk reads, few writes). I write programs to help designers place and route chips. The problem size scales with Moore's Law, so we never have enough CPU power. I'm part of a shrinking population that remains focused on squeezing a bit more power out of their code. I wrote the DataDraw [sourceforge.net] CASE tool to dramatically improve overall place-and-route performance, but few programmers care all that much nowadays. On routing-graph traversal benchmarks, it sped up C code 7X while cutting memory required by 40%. But what's a factor of 7 nowadays?

        • Re: (Score:3, Informative)

          In my programming classes at UW-Milwaukee the professors emphasize that we should design our code to be easy to read/edit even if that means using up more computation cycles. This makes editing the code easier in the future, which is appreciated by future programmers who have to learn your code and can save the company some time and money. And since computation resources have become so cheap (practically unlimited for most applications) it doesn't really affect the performance of the program to a noticeable
          • Re: (Score:3, Insightful)

            was this professor involved with the design of vista at all?

            there is this thing called 'documentation' that you add to your code so other people can understand it.

            ignore your instructor. as a user, i very much appreciate whatever gains in efficiency i can get.

            • Re: (Score:3, Insightful)

              by Peter La Casse (3992)

              In my programming classes at UW-Milwaukee the professors emphasize that we should design our code to be easy to read/edit even if that means using up more computation cycles. This makes editing the code easier in the future, which is appreciated by future programmers who have to learn your code and can save the company some time and money.

              was this professor involved with the design of vista at all? there is this thing called 'documentation' that you add to your code so other people can understand it. ignore

              • by BZ (40346)

                While true, you do want to keep performance in mind when designing your _architecture_. If your program is algorithmically slow, or if it requires a virtual function call for any operation, then all profiling will show is time spent all over the map, because literally everything is slow.

I first learned the importance of getting things right. (A payroll program that gets the wrong answers gets the doors torn off the front of the cookie factory. [That was my predecessor's mistake. :-])

Then I learned the importance of getting them to run fast. (I had a twelve-hour window for calculation that suddenly got chopped to six as the company spread over a wider geographic area. The company bought their competitor. Now I had more impatient people to deal with. [See previous 'front door' problem.])

                  Squeaked b

            • by frieko (855745)
              http://en.wikipedia.org/wiki/Amdahl's_law [wikipedia.org]

              GP is correct, it's highly counterproductive to put 1337 hax into every line of code you write. This is why you write clear, correct code and then run a profiler. Then 1337hax the few lines that eat the most cycles.
              • O-O code can be optimized by knowing how (and therefore where) to cut up your code.

                The code itself doesn't need to be any different, but how and where you cut it up can make an enormous difference in performance.

                If you can take advantage of RAM to cache intermediate results of seek (find/get) operation, you can get incredible speed out of otherwise 'dead code'.

          • Re: (Score:3, Interesting)

            by smilindog2000 (907665)

The sad part is that improved runtime speed and code readability can be had at the same time. The reason the DataDraw-based code ran 7x faster was simple: cache performance. C, C++, D, and C# all specify the layout of objects in memory, making it impossible for the compiler to optimize cache hit rates. If we simply go to a slightly more readable, higher level of coding, and let the compiler muck with the individual bits and bytes, huge performance gains can be had. The reason DataDraw saved 40%

            • by jandrese (485)
Huh? The cache only contains a set of what the processor already thinks it is likely to need. It's not like it's loading a fixed window of memory over the entire cache space. How C and other such languages organize their own memory space shouldn't matter much at all. Switching to 32-bit offsets instead of 64-bit pointers is fine so long as you never need to reference more than 4 billion records, but one application does not a whole industry make.
              • Re: (Score:3, Interesting)

                by smilindog2000 (907665)

                Check out the benchmark table at this informative link [sourceforge.net]. On every cache miss, the CPU loads an entire cache line, typically 64 or more bytes. Cache miss rates are massively dependent on the probability that those extra bytes will soon be accessed. Since typical structures and objects are 64 bytes or more, the cache line typically gets filled with fields of just one object. Typical inner loops may access two of those object's fields, but rarely three, meaning that the cache is loaded with useless junk. B

                • Re: (Score:3, Insightful)

                  by blahplusplus (757119) *

Why don't you write an article about how to go about teaching them? I agree that "so many programmers are batshit stupid!", but what one doesn't understand is that most learning is unconscious, and the fact that you know it better than others means it's highly likely you're interested in it for its own sake. Many programmers don't know where to begin. I really wish everyone complaining about dumb programmers would write articles to teach them the tricks of the trade. If you don't, they won't get passed on.

                  • Not a bad idea, but where would I publish it? I could post it on my Dumb Idea of the Day [billrocks.org] blog, but no one reads it (which is ok with me). I would certainly be interested in writing an article about coding for cache performance.

                    • Check it out:

                      http://accu.org/ [accu.org]

They also have a discussion list. I think it would be a good idea to see if anyone's interested in a "wikibooks" project, i.e. people contribute small articles, and over time the community edits it into something cohesive.

                      http://en.wikibooks.org/wiki/WB:FB [wikibooks.org]

                      When dealing with teaching, one should teach from the ground up. I've seen way too many programming books that assume previous knowledge and most are really bad. I like the zero-to-hero mentality, where you take someone knowi

                • by sjames (1099)

                  Another place where it gets interesting is when the objects are more than 64 bytes. In those cases, a simple re-ordering of the fields can double performance.

Consider a common case of doubly linked structs: prev and next pointers at the beginning, followed by a bunch of other data. If your program needs to scan the list for candidate objects for an operation, particularly where only a few of the structs will be operated on in a given pass, and if the fields you check in the scan passes can be packed into the sa

                  • Yep! If you talk to DSP guys, they do this kind of thing all the time. DataDraw allows me to specify which fields of a class I want kept together in memory, and by default, they're kept in arrays of individual properties. I was able to speed up random-access of large red-black trees in DataDraw 50% with this feature, simply because you almost always want both the left and right child pointers, not just one or the other.

                    Nice to hear from a fellow geek who for whatever reason still keeps an eye on low-leve

          • premature optimization is sometimes as you say, bad. however there is an idea of mature optimization where you know something needs to be written in such a way as to be fast.

            say your task has to run in realtime, and it involves iterating over most of the machine's memory. if it doesn't run fast, you have a real problem.

            always choose the correct read/write patterns, the correct architecture, and then make that code as clear as possible...

          • by yoshi_mon (172895)

            I'm fine with some code being very easy to read even at the expense of performance. But you seem to imply that doing so should always be the case. Which is a huge mistake.

          • by sjames (1099)

            In most applications, maintainability is the more important factor. Even with that, there's a lot of room for improvement. Well thought out code can be efficient and maintainable. In some cases, just cleaning up older code to improve maintainability ends up making it more efficient as well.

            Too frequently code re-use is over-emphasized so you get a stack of objects that goes a bit like: (A) does something, (B) un-does about half of that and re-does it differently, (C) does a bit more and derives some informa

      • Soon enough people will have robots in their homes, doing chores. Very fast computers will be needed for that.

      • by NerveGas (168686) on Thursday December 11, 2008 @05:15AM (#26072375)

        A surprising number of people that I know - and not just tech-savvy people - do video compression, either for converting camcorder movies into DVDs, creating slideshows, or using DVDshrink. And those are apps where more CPU is always good...

Just wait until HD camcorders are more prevalent, and you have people who want to convert their home movies into H.264 Blu-ray discs...

      • Re: (Score:3, Insightful)

        by repvik (96666)

        Until the next version of Windows is out...

Seriously though. Of course the top-of-the-line, state-of-the-art, bleeding-edge PCs are irrelevant for the general populace when they are released. That doesn't mean that they're irrelevant to the general populace in a year or two.
When the next Windows is released, some new fancy games are released, and websites become even more riddled with Flash, Java, and whatever new tech they come up with to use more resources.

BTW, you can move your drives from one motherboard to the next so long as the RAID is/was done via an Intel RAID controller. I've moved my complete OS from one motherboard to another with a different chipset with no problems, and that was on a 4-drive RAID-0.

It was from an ICH6R to an ICH8R, I believe. Of course, if you went from an nvidia/AMD chipset to an Intel one, then you can't. Unless the RAID was done via an add-in card, of course.

  • Not surprising. (Score:4, Interesting)

    by pclminion (145572) on Thursday December 11, 2008 @01:17AM (#26070997)
    At WinHEC 2008 the Intel speakers continued to hint at the fact that they had operating, packaged cores at this size. On track for manufacturing? More like they've been making it for 9-12 months already. At any rate, it's cool, though not surprising.
  • Nm (Score:5, Funny)

    by Anonymous Coward on Thursday December 11, 2008 @01:20AM (#26071021)

    Newton-metres? You mean Joules?

What could possibly make you confuse N, which is the symbol for newton, with n, which is the prefix for nano?

    You're definitely not geeky enough.

  • At some point, it will stop getting smaller.
  • Chipsets (Score:5, Interesting)

    by lobiusmoop (305328) on Thursday December 11, 2008 @01:24AM (#26071061) Homepage

    It's great that Intel are working on die shrinks for their processors, but I wish they would do the same for their support chipsets. It's annoying that on most laptops the northbridge for Atom processors uses more power than the processor does.

    • Re:Chipsets (Score:5, Interesting)

      by Anonymous Coward on Thursday December 11, 2008 @01:49AM (#26071243)

This should be partially alleviated once the i7 architecture is fully adopted. Pretty much no more northbridge. That's probably why they're neglecting the current chipset technology rather than giving it more aggressive updates.

And who knows, if a better chip interconnect comes around in the next generation (unlikely, but possible), Intel could start putting more and more in the CPU package. Things like a Larrabee GPU and southbridge functionality (audio, networking, general I/O). System-on-a-chip is commonplace in embedded systems now. If Intel wants to eat ARM's lunch they're going to have to adopt some of the same techniques.

      • The divide between north and south bridges has not really existed for a few years, but there are quite a few things that are in the supporting chipset and not in the CPU for Intel systems. Compare this with a real low-power chip, like the OMAP3530, which has USB, and disk / flash controllers, a GPU and a DSP on die, and RAM and flash stacked on the package, so you don't need much else to make a complete system with a power dissipation of 1.8W (and faster than the computer my mother uses).
      • The whole separate northbridge thing is kind of a legacy idea. AMD ditched it some time ago, now Intel is ditching it. Well, that being the case, little point in pushing forward with advances on it, only to then deprecate it immediately after. It'll probably be till the next "tick" before it is totally gone, but it should happen soon.

        The other problem people have to remember is that they have a limited amount of the highest tech fabs. It isn't as though you flip a switch and the fab suddenly is on a smaller

    • Re: (Score:3, Insightful)

      by zonker (1158)

      Very true. The problem is that chipsets don't sell computers like processors do. Joe Shopper at WalMart doesn't know what a northbridge is but he has some understanding of what a Core 2 Duo is.

      • Re:Chipsets (Score:5, Insightful)

        by Anonymous Coward on Thursday December 11, 2008 @03:09AM (#26071673)

        That's entirely a marketing issue.

Joe Shopper doesn't know what a Core 2 Duo is any more than he knows what a northbridge is. The only difference between the two is that there are millions of dollars poured into making sure Joe recognizes the term "Core 2 Duo". He still doesn't know a damn thing about it.

Computers are funny from a marketing standpoint. They are purchased by people who don't know anything about them, sold by people who don't know much about them, and supported by people who don't even speak the same language (often literally).

        Even more interesting, they are the only consumer device I know of where there is very little difference between first and third party parts. Obviously the technical specs change, but the average computer buyer wouldn't know the difference if you highlighted it in red.

Selling computers, therefore, is the perfect example of marketing at work. Your customer doesn't know ANYTHING about the product in question, and so wants the one that he's heard the most about. So the customer buys what is best advertised.

        • by mgblst (80109)

          Computers are funny from a marketing standpoint. They are purchased by people that don't know anything about them. Sold by people that don't know much about them and supported by people that don't even speak the same language. (often literally).

          Do you really think that is different to most things out there? TVs, Fridges, Cars, Phones.

    • by Yarhj (1305397)

Interestingly enough, the primary goal of die shrinks is not better performance, but lower cost. If a given die can be shrunk by a linear factor of k, we can fit roughly k^2 times as many dice on a wafer of the same size. If the smaller chips work just as well as the larger chips we can then turn around and sell them for exactly the same price. It's like printing money (Step 3: PROFIT!). Of course, there's the expense in R&D and equipment to consider as well (Step 2: ????), but the basic reasoning is sound. If our competi

    • Really, this is just a matter of having limited manufacturing capacity. Every time they create a new manufacturing process, they have to upgrade a factory to use it. This puts the factory out of service for however long it takes to roll out the new tech, and costs billions of dollars in the process. In other words, even Intel doesn't have the resources to upgrade all of their factories at once.

      Instead, they take one or two factories running the oldest tech, and upgrade them. Once they are ready, they start

      • by tlhIngan (30335)

        Instead, they take one or two factories running the oldest tech, and upgrade them. Once they are ready, they start manufacturing the high-end processors. The last-generation tech manufactures lower-end processors. The generation before that manufactures chipsets, graphics chips, etc. The generation before that manufactures DRAM / flash / whatever else is needed. This is just an example, I have no idea what the split is in reality.

        Actually, memory devices often use the most cutting-edge technology available,

        • by brucmack (572780)

          You're correct, I forgot about Intel being a big player in the SSD market. A quick search shows that their flash memory fabs run on different node sizes though (50 nm, 34 nm coming) so I guess those fabs are outside of their processor rotation.

  • by Anonymous Coward on Thursday December 11, 2008 @01:28AM (#26071095)

Am I the only one feeling we might have reached the point of diminishing returns, at least for desktops, in the last 2-3 years? All the shrinkage past 90 nanometers just feels underwhelming. Stuff beyond Pentium 3 has not been revolutionary, performance wise, for a desktop.

    • Re: (Score:3, Interesting)

      by sunami (751539)

Yea, there's a pretty big wall that's been hit in terms of clock speed, which is why multiple-core processors are the direction instead of ramping up speeds.

      • by Spatial (1235392)
"Speed" as in performance? No. A 3GHz P4 is a shitload slower than a 3GHz Athlon X2, which in turn is a shitload slower than a 3GHz Core 2 Duo. The per-core speed of desktop CPUs has never stopped increasing.
    • Re: (Score:2, Funny)

      by Anonymous Coward
      Tee-he-he-he, you said "shrinkage". (nothing to see here)
    • Re: (Score:2, Informative)

      by ColdWetDog (752185)

      Am I the only one feeling we might have reached the point of diminishing returns, at least for desktops, in the last 2-3 years. All the shrinkage past 90 nanometers just feels underwhelming. Stuff beyond Pentium 3 has not been revolutionary, performance wise, for a desktop.

      I see we haven't been using Adobe software. Or Windows. Or Crysis. Or Slashdot's CSS 'implementation'.

But if browsing Usenet with Lynx is where you're at, more power to you.

    • Re: (Score:3, Interesting)

      by NerveGas (168686)

      Anything past the P3 may not have been revolutionary, but it's steadily progressed quite nicely.

      I have a dual 1.4GHz P3 system, and a 1.6GHz Core Duo. The Core Duo is *much* faster, and that chip is already outdated. Not to mention the fact that it's comparing the fastest P3s made to the lowest of the Core Duo lineup.

      People also forget about things that can't be measured in nanometers or gigahertz, like the advances that have greatly lowered leakage current. Without them, something like 85% of the power

    • by darkwhite (139802)

      Stuff beyond Pentium 3 has not been revolutionary, performance wise, for a desktop.

      It has. You've been living under a rock.

    • by BZ (40346)

      Really? A 3-year-old Core Duo (don't recall the clock speed, but in the 1.5-2 GHz range) is about 10x faster than a P3-733 on cpu-bound (small amount of memory accessed, no disk access) code. That's single-threaded code, so only using one core. 2-3x of that is the clock speed, the rest is the better architecture and process. 3-5x performance gain at the same clock speed is pretty good, in my book.

      You're right that whole-system performance has not kept pace with that, but it never does.

  • If Intel is able to shrink its die size every 12 months AMD is in trouble. A more efficient design is usually beaten by a less efficient design fabricated in less space. That is if you think AMD's design is still more efficient.
    • Re:What about AMD? (Score:5, Informative)

      by vsage3 (718267) on Thursday December 11, 2008 @01:58AM (#26071285)

      If Intel is able to shrink its die size every 12 months AMD is in trouble.

      For what it's worth "tick-tock" is actually alternating between a new architecture and a process shrink every 12 months. "Q4" in the summary means Q4 2009.

      Am I the only one feeling we might have reached the point of diminishing returns, at least for desktops, in the last 2-3 years. All the shrinkage past 90 nanometers just feels underwhelming. Stuff beyond Pentium 3 has not been revolutionary, performance wise, for a desktop.

I hate to be snarky, but you sound like one of those people who bought the crap about the "Megahertz Myth". Processor clock rate has little to do with performance. I'll agree that the Pentium 4 was underwhelming, but Core was a huge hit and saw huge performance gains, especially the parts released early this year that used the high-k dielectric.

The "megahertz myth" is that processor clock rate has a lot to do with performance. It seems to me like his post suggests that he didn't buy into it, given that he wasn't impressed with the Pentium 4. But the shrinkage is definitely important, as in being able to fit more than one core of a modified older design, like the Pentium III, on one chip.
      • Re:What about AMD? (Score:5, Interesting)

        by afidel (530433) on Thursday December 11, 2008 @02:51AM (#26071573)
Actually I think the biggest post-P3 improvement has been the move to dual core as standard on the desktop in the last couple of years. At least on Windows, not being blocked by a stalled thread is huge for overall system performance and UI snappiness. It's great to be able to get those benefits without a $200 motherboard and two CPUs =)
        • by Firethorn (177587)

          Being that I have a tendency to run a few pieces of software that'll peg a CPU to 100% today, going to a dual core processor was a 'I LOVE THIS!!!' moment.

I went with a dual core for the higher individual core speed, and because games were, on the whole, still not optimized for using multiple cores, so the best you could get was the game on one core and everything else on the second, which STILL wouldn't be strained. Of course, prices come down, performance goes up, software advances; I'd consider a quad today.

  • This is one case where shrinkage is damn good.

    Don't take that out of context.

  • In my day, getting to one micron feature sizes was a big deal. And we were grateful!

    You kids get off my lawn!

    -jcr

    • I have an issue of Byte up in my attic where the process shrink to 1 micron is the cover story. I read it a couple of years ago, around the time of the 65nm process shrink. It really gives you a sense of the speed at which process technology is improving.
  • Intel (Score:3, Funny)

    by IDKmyBFFJill (1428815) on Thursday December 11, 2008 @03:34AM (#26071781)

It's all about splitting hairs nowadays

  • by NerveGas (168686) on Thursday December 11, 2008 @04:55AM (#26072245)

    Intel has always enjoyed a much better manufacturing technology than AMD. But, Intel made some stupid architectural decisions with the P4 architecture.

    Once Intel came out with the Core series, then the combination of a decent architecture and terrific fab capabilities really started eating away at AMD. This will only continue the rally.

    The sad thing is that this will actually be a step back in pricing... it's getting back to where AMD simply cannot touch the higher-end Intel territory, and so Intel is back to enjoying terrific profit margins on those chips.

I think AMD's strategy is overclocking, and lots of it. Look at what it's introducing in its latest and upcoming hardware: features that make overclocking easier. Also, I wouldn't count AMD out too soon. AMD is just one design correction away from having perfect hardware for HTPCs, and their IGP is still better than Intel's.

"Intel is obviously betting that its rapid-fire advancements will produce performance gains so jaw dropping that customers can't resist."

Two things. One, it doesn't matter how awesome your hardware is: if the majority can't afford it, then it doesn't matter. Second, as Microsoft is learning, prior success can be a barrier to future growth. How many people are going to throw out their Core 2 Duos in order to have the most amazing hardware from Intel?

  • I found out from my wife that our home server died and won't reboot. AMD Athlon 3200+ running Fedora.

    It is almost certainly a hardware problem, and that server has been running 24/7 for years now... time to upgrade.

    My hardware philosophy has been to buy big and milk it for a long time. You pay more up front for that power, but the fact that it has power means it doesn't get obsoleted immediately either.

    So then, cut through the marketing crap. Assume a desktop PC purchase in the May-ish time frame, to run Li

If it's left on all of the time, you might want to consider low power, rather than high speed. A 45W chip will use 395 kilowatt hours [google.com] per year, which will cost me around £60 (around $90 now the pound's collapsed against the dollar; around $120 two months ago) per year to run. Something like the BeagleBoard consumes only 1.8W. For a home server, you'd want something in the middle (actually, I'd like a BeagleBoard with 4 SATA connectors, but, sadly, it doesn't exist). Depending on how
    • by jps25 (1286898)

      If it's running 24/7 and your old Athlon 3200+ was good enough, then pick any current dual or quad desktop CPU with the lowest energy usage.
      Pick a motherboard with an IGP and plenty of SATA2.
      Throw in 8 or 16GB RAM and a couple of hdds.
      Most importantly, check silentpcreview.com so you know which case to buy and how to silence it.

I just rebuilt my server using an Intel Atom dual core and a 4-port SATA PCI card from my old server. I'm running Linux with three 750 GB HDDs in RAID-5. It has an $80+ PSU and a case with a 240mm top fan, so it's pretty quiet as well. I just rip all my media to it so I can access it from anywhere in my house or through the Internet. It's worked well for me.
    • by b0bby (201198)

      Newegg has an MSI Wind Atom based barebones box for $139 that looks perfect to me for a home server. I'm with the others here - go low power rather than high power.

  • Stupidly uppercasing everything in a headline will regularly backfire if scientific units are used.

What's meant here is nanometer, not newton-meter - which, by the way, is equal to a joule.

    And now here I am, unable to think of a good pun about a 32 Joule chip...
  • Can anyone remember the last time an incremental advance in chip speed was anything close to "jaw dropping?" Having been in this industry a while I can't count the number of times people like Steve Jobs and Andy Grove claimed speed increases of more than double with almost no apparent effect on anything but benchmarks. The early days of 3D accelerators was about the only time I really went "wow!"
  • by brucmack (572780) on Thursday December 11, 2008 @11:27AM (#26075277)

    Just to clarify: the tick-tock strategy means that one year gets a new architecture, the next year gets a new manufacturing process, and the cycle repeats. This means that there is a new architecture and new manufacturing every 24 months, not 12, and in alternating years.

I was looking at the range of low-power CPUs and noticed that Intel's Atom seemed to do OK compared to the other low-powered chips, but then noticed that all the other chips were being built on a 65nm process while Intel had the Atom on the 45nm process. Looking at Intel's standard "Core" processors showed that their newest CPUs were also on the 45nm process, but not the majority of them.

    This was a few months ago but it made me wonder why all the other low power CPU manufacturers were able to get the power and
