Data Storage Technology

Is SSD Density About To Hit a Wall?

Zombie Puggle writes "Enterprise Storage Forum has an article contending that solid state disks will stay stuck at 20-25nm unless the materials and techniques used to design Flash drives change, and soon. 'Anything smaller and the data protection and data corruption issues become so great that either the performance is abysmal, the data retention period doesn't meet JEDEC standards, or the cost increases.' Though engineers are working on performance and density improvements via new technologies (they're also trying to drive costs down), these are fairly new techniques and are not likely to make it into devices for a while."
  • by symbolset ( 646467 ) on Saturday September 18, 2010 @06:05PM (#33622898) Journal
    Memristor [bbc.co.uk] technology doesn't even work with feature sizes that big, so it's the logical next step. Also it can be layered and so leverage Dimension Z. Products expected in three years from a joint HP and Hynix venture. No worries.
    • Or more likely PCM (Score:4, Informative)

      by Wesley Felter ( 138342 ) <wesley@felter.org> on Saturday September 18, 2010 @06:49PM (#33623096) Homepage

      HP and Hynix are doing memristors, while the entire rest of the industry is doing phase-change memory.

      • by owlstead ( 636356 ) on Saturday September 18, 2010 @07:10PM (#33623200)

        Yeah, they like to push the P-RAM a lot.

      • by cheesybagel ( 670288 ) on Saturday September 18, 2010 @07:15PM (#33623230)
    Phase-change memory... Oh dear. I still remember when it was being pushed as Ovonic Unified Memory (OUM) or chalcogenide memory. I certainly hope Samsung and the usual suspects can get this to work. But it has been a long time in coming. Well, maybe not as long as MRAM but still...
      • by TheRaven64 ( 641858 ) on Saturday September 18, 2010 @08:20PM (#33623540) Journal
        PC-RAM stands a good chance of being the long-term future (I had the good fortune recently to share a very nice bottle of port with one of the scientists behind the underlying technology, and came away quite convinced, and a lot drunk), but the largest currently shipping PC-RAM modules are 64MB. It has a lot of catching up to do before it reaches, let alone passes, the density of flash.
        • Re: (Score:3, Funny)

          came away quite convinced, and a lot drunk

          I think you'll find that's not quite so good an argument in favor of the technology once you've sobered up.

        • >>>PC-RAM stands a good chance of being the long-term future

          I don't see Flash Drives replacing disk drives, any more than I see Cartridges making a comeback in gaming.

      • by Twinbee ( 767046 )

        What are the main advantages and disadvantages between phase-change memory and memristors?

      • Re: (Score:3, Interesting)

        by Bender_ ( 179208 )

        This is not true. You need to be aware of one thing: "Memristors" were not new when they were "discovered". The memory industry knew the concept years before as RRAM [wikipedia.org]. I can assure you that all other nonvolatile memory vendors are developing RRAM or are at least looking into the possibilities. Samsung has been publishing about NiO based RRAM long before it was "discovered" again, IBM has some interesting papers from the Zurich labs. Furthermore, there are several start up companies looking into 3D RRAM which

    • Also it can be layered and so leverage Dimension Z.

      Beware of Dimension Z...I heard Red Lectroids can come through there. Not worth the risk.

  • So... (Score:4, Insightful)

    by dcmoebius ( 1527443 ) on Saturday September 18, 2010 @06:10PM (#33622920)
    Improving upon current SSDs will require new technology! Isn't that sort of implied in the whole concept of, you know, progress?
    • Re: (Score:3, Interesting)

      by mikehoskins ( 177074 )

      Agreed. And, I believe that 34nm is near the best they can do today, in any kind of production.

      So, if you can go from a 34nm * 34nm feature to a 20nm * 20nm feature, you can almost triple the density.

      So, in the same space you can produce a 128G drive, you can then produce a roughly 384G drive, going from 34nm to 20nm (see the sketch just after this thread).

      So, if a USB Keychain is produced w/ 128G, a 384G can be produced at the same size, barring other issues.

      That assumes they are even using 34nm process SSD's, today, to produce 128G USB SSD dri

      • Re: (Score:3, Informative)

        by vadim_t ( 324782 )

        512GB SSDs aren't a "future possibility"

        1TB SSDs already exist [ocztechnology.com]

      • > Agreed. And, I believe that 34nm is near the best they can do today, in any kind of production.

        There are at least two companies manufacturing 25nm parts right now.

      • by Guspaz ( 556486 )

        Current SSDs tend to use 35nm flash. 25nm flash has been in production for some time, although the quantities produced aren't yet high enough to get into shipping products. Intel's 25nm drives are expected in Q4 of this year (at capacities of 80, 160, 300, and 600 GB).
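To make the area-scaling arithmetic in the thread above concrete, here is a small, purely illustrative Python sketch (the 34nm and 20nm nodes and the 128 GB baseline come from the comment above; it assumes ideal 2D scaling, which real flash never quite achieves):

```python
# Illustrative only: ideal 2D area scaling from a 34 nm node to a 20 nm node.
# Real flash never scales perfectly, so treat this as an upper bound.

old_node_nm = 34.0
new_node_nm = 20.0
baseline_gb = 128  # the 128 GB drive used as the example above

density_gain = (old_node_nm / new_node_nm) ** 2   # (34/20)^2 ~= 2.89
scaled_gb = baseline_gb * density_gain

print(f"Ideal density gain: {density_gain:.2f}x")   # ~2.89x
print(f"Same die area: ~{scaled_gb:.0f} GB")        # ~370 GB
```

The ideal figure comes out around 370 GB, close to the "roughly 384G" quoted above once you allow for rounding to 3x.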

  • by durrr ( 1316311 ) on Saturday September 18, 2010 @06:11PM (#33622926)
    The wall or plateau or whatever you prefer to call it of electronics progress is similar to the recurring doomsday predictions. It's always right around the corner, but it never happens.
    I guess we could liken it to fusion, strong AI, the second coming of Jesus and whatever else generally gets put in the believe-it-when-you-see-it folder.
    • by MichaelSmith ( 789609 ) on Saturday September 18, 2010 @06:19PM (#33622968) Homepage Journal

      The wall or plateau or whatever you prefer to call it of electronics progress is similar to the recurring doomsday predictions. It's always right around the corner, but it never happens.

      It has to happen.

      • by durrr ( 1316311 ) on Saturday September 18, 2010 @07:02PM (#33623150)
        Well obviously the Earth will be fried when the sun goes red giant. However, I'm quite certain that the year 5 billion AD equivalent of an electronics engineer will sit in Jovian orbit, hellbent on the continuation of Moore's law and waiting for the sun to turn white dwarf so they can get to work with their new fancy sub-nm electron-degenerate matter lithography techniques.

        They'll wake you up from cryo when they're done just to taunt you: "Oh, we're having a few billion years between the nodes now, but it fits the curve, just as I told you."
        • They were mass producing 1 Gbit RAM chips in 1999. It is now almost 2011, or almost 12 years later. It seems to me if Moore's law had continued we should be talking about the 1 Tbit RAM chip by now. I think there is a definite wall with RAM memory.
          • by mlts ( 1038732 ) *

            Very true. However, what happened is that computing dealt with that plateau by finding ways around it. Caching comes to mind for this. After caching in RAM (main RAM and DRAM on the controllers) comes tiered storage and using faster drives as cache for swap (think ReadyBoost.)

            If one tier of computing (L1 cache, RAM, storage) doesn't expand, another will. RAM densities have not gone up that much, but hard disk densities have, so a lot of work is put into caching. If by chance we end up with a breakthrou

          • It seems to me if Moore's law had continued we should be talking about the 1 Tbit RAM chip by now. I think there is a definite wall with RAM memory.

            That's a consequence of our current OS monopoly.

            There has been no significant innovation in operating systems in the past decade. System hardware demands are driven by thresholds of user needs. Text had the lowest demands, so 8 bit computers with a few kilobytes of RAM satisfied that. Graphical displays and chip-based sound drove us through 16 bit and tens of meg

          • by smallfries ( 601545 ) on Sunday September 19, 2010 @04:32AM (#33625862) Homepage

            Well, ..... no. There are many things wrong with your post but the biggest one is that you don't seem to be able to double numbers properly. Did you pull 1Tbit out of your ass?

            Moore originally speculated about transistor density doubling every 12 months, but the observation he actually published was that density doubles every 18 months. This is the figure that has been used for decades when people talk about his "law". In more recent times (the last decade or so) that period has increased to 2 years.

            12 years / 18 months = 8 doublings
            12 years / 24 months = 6 doublings

            So, if we accept your claim about 1Gbit chips in 1999 then we would expect chips in the range 64Gbit - 256Gbit. A long way off the 1Tbit that you used. Assuming that you mean flash when you say "RAM chip", a quick search shows that 64Gbit chips were available in 2007. So your conclusion is bogus.
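Spelling out that doubling arithmetic as a small, purely illustrative Python check (it assumes the parent's premise of a 1 Gbit chip in 1999 and a flat 12-year window):

```python
# Illustrative check of the doubling arithmetic above: starting from a
# 1 Gbit chip in 1999 (the parent's premise), how big would chips be
# after 12 years at 18-month and 24-month doubling periods?

years = 12
start_gbit = 1

for months_per_doubling in (18, 24):
    doublings = (years * 12) // months_per_doubling    # 8 and 6
    capacity_gbit = start_gbit * 2 ** doublings        # 256 and 64 Gbit
    print(f"{months_per_doubling}-month doubling: {doublings} doublings "
          f"-> {capacity_gbit} Gbit")
```

That is where the 64Gbit - 256Gbit range above comes from.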

          They'll wake you up from cryo when they're done just to taunt you: "Oh, we're having a few billion years between the nodes now, but it fits the curve, just as I told you."

          I can't wait. If anybody can keep Moore's law going for five billion years then good luck to them.

            They'll wake you up from cryo when they're done just to taunt you: "Oh, we're having a few billion years between the nodes now, but it fits the curve, just as I told you."

            I can't wait. If anybody can keep Moore's law going for five billion years then good luck to them.

            I'm not sure I'd want to meet a computer that was the result of five billion years' worth of Moore's Law. Of course, if the Universe is cyclic then so far as we know, a race that existed in some previous incarnation of the Big U might have built a machine that kept evolving itself long after its creators were dust. Hell, it might even have become God, or something so close that we'd never be able to tell the difference.

            • Such a computer could conceivably simulate the entire universe from the Big Bang up to the time of its own construction ;)

      • by mlts ( 1038732 ) *

        It does happen and we find a way around it. Take CPUs. We ran into a wall with clock speed, so we are going with more cores. Once adding tons of cores onto dies stops giving tangible returns, we might go with stacking larger and larger caches, or some future 3D masking technology to allow the caches to be stacked on top of or below the rest of the CPU. When that peters out, there is always moving to 128 bit word lengths, adding more registers, and even newer CPU architectures and emulation.

    • by rm999 ( 775449 )

      There are notable counterexamples. For example, CPU clock speeds have been approaching a limit for years now. The only reason computers get "faster" over time is Moore's Law, which allows the CPU to do more per clock.
      http://www.gotw.ca/publications/concurrency-ddj.htm [www.gotw.ca]

      • The only reason computers get "faster" over time is Moore's Law, which allows the CPU to do more per clock.

        So if Gordon Moore didn't state his law, CPUs would be forbidden from getting faster?

        • So if someone didn't write down the law of Gravity, we would all be an amorphous cloud of particles?

          Laws put an observed truth down into words. Whether or not this is done doesn't affect the fact that the observed truth is still there.

          • So if someone didn't write down the law of Gravity, we would all be an amorphous cloud of particles?

            Laws put an observed truth down into words. Whether or not this is done doesn't affect the fact that the observed truth is still there.

            Precisely my point. Moore's law didn't "allow" anything. It was the inevitable result of progress.

      • There are notable counterexamples. For example, CPU clock speeds have been approaching a limit for years now.

        Actually the limits had everything to do with thermal factors and materials required to operate at much higher (extreme) clock speeds. EEs use higher frequency components all the time in applications like RADAR. There is a material available to build a chip to damn near any speed you want, diamond. Not only does it make a fantastic semiconductor material, it is a near perfect heat conductor as
    • by Kjella ( 173770 )

      Agreed. If anything I think we will be running into that wall slowly, simply by not having the year-over-year improvements like we're used to. I'd be very surprised if Intel just issues a press release one day that said "You know that 18nm process tech we've been working on? We've hit a brick wall and it's not going to happen."

    • This would be more like "I believe it when I don't see it fodder".

    • You mean cold fusion; fusion itself is quite feasible (though not yet able to be harnessed safely).

    • by Surt ( 22457 )

      In none of those previous cryings of wolf had the number of atoms in a single device dropped into the double digits.

    • The wall or plateau or whatever you prefer to call it of electronics progress is similar to the recurring doomsday predictions. It's always right around the corner, but it never happens. I guess we could liken it to fusion, strong AI, the second coming of Jesus and whatever else generally gets put in the believe-it-when-you-see-it folder.

      A more logical comparison would be the repeated assertions that hard drives would be reaching their "theoretical maximum" capacities we've all heard for the past couple decades. Now the things are beyond a terabyte and still increasing. Heck, I remember back in the 70's when I was playing around with some 1 kilobit RAM devices that some scientists were predicting that memories wouldn't get much denser than that.

  • Does it matter? (Score:4, Insightful)

    by MikeFM ( 12491 ) on Saturday September 18, 2010 @06:13PM (#33622936) Homepage Journal

    It doesn't seem a big deal to me. I'd be more interested in seeing the prices drop and to have larger RAM caches.

    • The only way a density wall will matter is if price can't scale. If price scaling is purely based on density, then we have a problem as SSDs aren't price competitive yet. However if price can keep scaling down, then no problem. As it stands you can pack in flash to easily meet the same density as magnetic storage. They have 512MB SSDs that are 2.5" form factor and I suspect it could be smaller, it is just that size to fit in regular laptop drive slots.

      So we'll see. I don't know what are the price barriers f

      • They have 512MB SSDs that are 2.5" form factor

        Really? I would think it would be more like 512GB ... or more (especially considering the 16GB USB flash memory stick I have)

    • by vadim_t ( 324782 )

      Probably not much.

      In my experience, disk space isn't nearly as limiting as it used to be. Back in '93, a 500MB drive was pretty large, but could be easily filled. I remember that deciding what to keep on one's hard disk, and how to free up a bit more space, used to take a considerable amount of time. After all, a single CD was bigger. Today, a 500GB drive won't be filled by most people.

      Also, there are 1TB SSDs in existence already, one reported to be postage stamp sized. That's a very useful size, considerin

      • Re: (Score:3, Insightful)

        by MikeFM ( 12491 )

        My laptop has a 128GB SSD which would be really cramped except I keep most of my files on my NAS where it can be kept in RAID and be automatically backed up etc. Really the local drive should only be the files needed to boot and hook to the network and the rest used to cache the files you're most likely to need soon. As you said you can already get decent storage space in the usual form factors so it's not really a big deal if the drives can't get more dense. I don't really care if my NAS takes up a whole serve

        • clouds mean rain (Score:4, Insightful)

          by Anonymous Coward on Saturday September 18, 2010 @07:47PM (#33623398)

          Local storage is a lot cheaper and faster for most people in the USA, which is all I can speak of. Maybe over in Utopialand where everyone has 100 gig speed connections and hosting is pennies a day for terabytes the "cloud" might be cheaper and better. Our domestic broadband speeds and prices are not even close to keeping up with increased local storage density and lowering prices for same. Saying the "cloud" will do everything is sorta naive, we have all the major ISPs talking about limits and caps now. This is 100% the WRONG time to be shifting to far away "cloud" storage for most people.

            I know I'll be keeping my movies and files handy right here, thanks. I just can't see storing multiple gig sized movies way over there someplace when it would cost me two cents to store it here and have it playback at fast streaming speeds for the cost of the electricity.

          Having to go pay yet again to watch your movie or access your own file..nope. The "cloud" is a marketing buzzword for companies that want to charge you serious coin for access to *your own files*.

          • Local storage is a lot cheaper and faster for most people in the USA, which is all I can speak of. Maybe over in Utopialand where everyone has 100 gig speed connections and hosting is pennies a day for terabytes the "cloud" might be cheaper and better. Our domestic broadband speeds and prices are not even close to keeping up with increased local storage density and lowering prices for same. Saying the "cloud" will do everything is sorta naive, we have all the major ISPs talking about limits and caps now. This is 100% the WRONG time to be shifting to far away "cloud" storage for most people.

            I know I'll be keeping my movies and files handy right here, thanks. I just can't see storing multiple gig sized movies way over there someplace when it would cost me two cents to store it here and have it playback at fast streaming speeds for the cost of the electricity.

            Having to go pay yet again to watch your movie or access your own file..nope. The "cloud" is a marketing buzzword for companies that want to charge you serious coin for access to *your own files*.

            Well, I agree. However, if cloud storage of large files becomes something that the bulk of ISP customers need and want, that would tend to drive infrastructure improvements in order to keep their business. Right now there just isn't such a driving need for gigabit connections that the ISPs see any business case for it (other than marketing hype). We need that "killer app" that everyone just absolutely has to have, and that requires ungodly bandwidth.

          • by grumbel ( 592662 )

            I know I'll be keeping my movies and files handy right here, thanks.

            Streaming a DVD-quality movie already works quite fine with a basic 2mbit connection; now, uploading it will be a bit troublesome, and I don't think it is that easy to get 1TB of online storage for cheap, but at least in principle it makes perfect sense to have stuff stored in the "cloud" and only cached on your drive. Not because buzzwords are cool, but simply because you want to access your data from different devices anyway, so having a central server instead of manually copying stuff around is a lot mor

          • by hab136 ( 30884 )

            He mentioned NAS, not cloud storage - which probably means a fileserver in his house, on the same local network. His point was that you can have fast local storage, plus networked slow storage for things that don't need it. Even DVDs only need 10 Mbps; less if they're turned into MP4s, which will easily work over even 11 Mbps 802.11b. So stick all your MP3s, MP4s, and DVD rips on a RAID NAS in your house, and not on your desktop machine.

            And yes, US broadband sucks ass. For comparison, I have 50 Mbps dow

  • by markdavis ( 642305 ) on Saturday September 18, 2010 @06:26PM (#33622994)

    Well, the density is already not bad, so the big key is to get the cost down! For larger applications of Flash memory (like over 250GB) I don't think the physical size is going to be a problem because it is competing with 3.5" and 2.5" hard drives.

    Aside from cost, there are plenty of other non-density things to work on: number of rewrite cycles, speed, reliability, etc. I can't wait for the day that spinning media eventually goes bye-bye.

    • Re: (Score:3, Insightful)

      Size to a large extent is the cost.

      If they can fit twice as much product onto 1" square for the same price then you get an effective decrease in cost.

      The incremental costs of memory are somewhat linear. If size were of no concern then they could make a 10TB drive just by putting more spindles into one unit. But the cost is per spindle. So yes they could sell you a 10TB drive instead of a 1TB drive that was 10x as large physically, but it's going to cost 10x as much.

  • by KonoWatakushi ( 910213 ) on Saturday September 18, 2010 @06:36PM (#33623040)

    There are far better technologies waiting to replace it, one being P-RAM. The best thing is, none of the newer tech is subject to Flash's crippling block-erase semantics, and so they are far more suitable for SSDs (a toy illustration of the block-erase problem follows this thread). No longer will SSDs require tremendously complex controllers and firmware in order to attain good performance, allowing new SSDs to be cheaper, faster, and more reliable.

    • by Klinky ( 636952 )

      Right, but when is P-RAM going to be available & have the same production and supply chain that NAND has at this point? It's going to take some time going from the 8MB PRAM chips shipping for mobile phone usage to get something similar to the 1TB NAND SSDs we have now. It's similar to OLED displays, where OLED keeps hovering in the background and each year it's poised to replace LCDs, yet there still aren't any viable consumer-level OLEDs on the market.
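To make the "block-erase semantics" point above concrete, here is a toy, purely illustrative Python model (not real controller code; the page and block sizes are made up) of why updating data in place on NAND flash is awkward, whereas a byte-addressable memory like P-RAM could simply overwrite the bytes:

```python
# Toy model (illustrative only) of why NAND flash controllers are complex:
# writes happen a page at a time, but erases happen a whole block at a time,
# so rewriting one page "in place" means saving and rewriting its neighbours.

PAGE_SIZE = 4          # bytes per page, tiny for illustration
PAGES_PER_BLOCK = 4    # pages per erase block

class ToyNand:
    def __init__(self):
        self.block = [bytearray(b"\xff" * PAGE_SIZE) for _ in range(PAGES_PER_BLOCK)]

    def program_page(self, page, data):
        # NAND can only change bits from 1 to 0; a page must be erased first.
        if any(b != 0xFF for b in self.block[page]):
            raise RuntimeError("page not erased: must erase the whole block first")
        self.block[page][:] = data

    def erase_block(self):
        # The erase granularity is the whole block, not a single page.
        for page in self.block:
            page[:] = b"\xff" * PAGE_SIZE

nand = ToyNand()
nand.program_page(0, b"AAAA")
nand.program_page(1, b"BBBB")

# Updating page 0 in place: save the other pages, erase everything, rewrite.
saved = [bytes(p) for p in nand.block]
nand.erase_block()
nand.program_page(0, b"CCCC")
for i in range(1, PAGES_PER_BLOCK):
    nand.program_page(i, saved[i])
print([bytes(p) for p in nand.block])
```

Real controllers avoid most of this with wear levelling and remapping, which is exactly the complexity the parent says byte-addressable memories could do away with.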

  • ..time's up.

    Guess we will be stuck where we are on this matter forever.

    Or does this mean there is something that is going to replace this tech anyway?

  • by backslashdot ( 95548 ) * on Saturday September 18, 2010 @06:55PM (#33623112)

    Densities are fine. The main problem is lowering the cost. They need to drop the price by an order of magnitude. I am sure it costs way less than that to manufacture .. they just have to pay back all the research and equipment capital costs and build more production lines. Once they do that it will be dirt cheap. I remember when LCD monitors were a couple thousand bucks. And hard drives were far more expensive than SSDs are today .. and that was only 15 years ago.

    For example an OCZ Technology 250 GB SSD is $450 .. I paid around $400 for a 400 Megabyte drive in 1995. That works out to hard disks back then being nearly 5 times the price per megabyte of SSD drives today.

    • What I mean by "densities are fine" is not that we don't need more density .. of course we do .. but right now it's price that's the most urgent need.

    • Re: (Score:3, Interesting)

      by queazocotal ( 915608 )

      'Just'.
      It really does cost quite a lot to make flash.
      For example, a fab capable of the latest geometries will set you back over a billion dollars.

      This fab is only cutting edge for a year or so before it needs retooling, or it moves down the value chain to make cheaper, less profitable, stuff.

      • Re: (Score:3, Insightful)

        by my $anity 0 ( 917519 )
        But, if the technology hits a brick wall, the fab won't need retooling because there will be no cutting-edge technology to retool for, increasing its years of useful life and eventually lowering prices. That's a slower route to lower prices than developing better tech would be, though, one would assume.
    • Re: (Score:2, Insightful)

      by forkazoo ( 138186 )

      Densities are fine. The main problem is lowering the cost. They need to drop the price by an order of magnitude. I am sure it costs way less than that to manufacture .. they just have to pay back all the research and equipment capital costs and build more production lines.

      Density = Cost

      The more bits per cm^2 of silicon, the less silicon you need to buy in order to store your stuff. When people talk about density, they aren't talking about the physical size of the consumer SSD product, they mean density of

      • Si prices (Score:3, Informative)

        by AlpineR ( 32307 )

        I don't think so. Back when I used to do research on microelectronic fabrication methods, we bought 3-inch wafers for about $10 apiece. Those were high purity with doping to whatever type and level we selected. And that was without bulk pricing or favorable price scaling with larger wafers.

        Our molecular beam growth chamber, however, cost hundreds of thousands of dollars plus tens of thousands per year for supplies and maintenance (plus tens of thousands for a postdoc and a grad student to run it).

        So I reall

    • by fnj ( 64210 )

      Your math is a bit suspect. That's 555 times more, not 5 times more!
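Checking that correction against the figures quoted in the thread (illustrative Python only; it treats 1 GB as 1000 MB for simplicity):

```python
# Price-per-megabyte comparison using the figures quoted in the thread:
# a 400 MB disk for $400 in 1995 versus a 250 GB SSD for $450 today.

hdd_price, hdd_mb = 400.0, 400          # 1995 hard disk
ssd_price, ssd_mb = 450.0, 250 * 1000   # 250 GB SSD, approximate MB

hdd_per_mb = hdd_price / hdd_mb         # $1.00 per MB
ssd_per_mb = ssd_price / ssd_mb         # ~$0.0018 per MB

print(f"1995 disk: ${hdd_per_mb:.4f}/MB, SSD today: ${ssd_per_mb:.4f}/MB")
print(f"Ratio: ~{hdd_per_mb / ssd_per_mb:.0f}x")   # ~556x, in line with the 555x figure
```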

  • by gman003 ( 1693318 ) on Saturday September 18, 2010 @06:57PM (#33623128)
    but who says the wall is going to win that collision? I've seen it time and time again: a problem is encountered, and dealt with. Optical disk rotation speed. Parallel data buses. Processor clock speeds. They all hit a wall, and we got around that wall. We lowered the wavelength of the laser instead of going to 56x CDs. We switched to serial buses when parallel encountered clocking issues. We switched to multicore processors when we couldn't keep upping the gigahertz. I'm fully confident we'll figure out a solution to this problem as well, whether it be new manufacturing techniques, memristors, or just larger Flash chips.
  • Slow news day. (Score:4, Insightful)

    by twidarkling ( 1537077 ) on Saturday September 18, 2010 @07:48PM (#33623402)

    You know, stories like this used to interest me. Then I noticed that:
    a) they kept reoccurring, and
    b) had a common theme.

    Yeah, it's always "We're approaching a wall with what can be done with current technology, so it's going to either be more expensive, or need a new technique, yadda yadda." Tell you what. Lemme know when we *actually* hit the wall in ANY of these areas they keep threatening us with: SSD, HDD, CPU size, etc.

    • by Dadoo ( 899435 )

      Lemme know when we *actually* hit the wall in ANY of these

      Well, we've already hit a wall with CPU clock speed, haven't we? According to this [wikipedia.org] page, CPUs hit 3.6GHz in 2005. I haven't seen anything faster since then - at least, not in the Intel product line. Yes, IBM's POWER chips are up to 5GHz, but that's not much of an increase. If clock speed had maintained its trend, we'd be up to 9 or 10 GHz by now.

      I expect that, in the not-too-distant future (10 years, maybe), the rest of the computing attributes wil

      • Well, we've already hit a wall with CPU clock speed, haven't we?

        Yes, but not a wall in terms of CPU performance which is all we really care about. And that clock speed limit may yet be broken.

        • Actually you are wrong. At least single-threaded performance has seen comparatively feeble increases with each tick and tock. Usually less than 10%. Small little optimizations. Nothing like it used to be with massive clock speed increases every year. It's actually weird that there was ever a time when "there is no good time to upgrade" was true. Anytime is a good time to upgrade now, because CPU performance is dead in its tracks aside from increasing the core count and relying on SMP tactics in software to

    • by bertok ( 226922 )

      You know, stories like this used to interest me. Then I noticed that:
      a) they kept reoccurring, and
      b) had a common theme.

      Yeah, it's always "We're approaching a wall with what can be done with current technology, so it's going to either be more expensive, or need a new technique, yadda yadda." Tell you what. Lemme know when we *actually* hit the wall in ANY of these areas they keep threatening us with: SSD, HDD, CPU size, etc.

      It comes down to an argument from ignorance. "I can't imagine a way around this problem right in front of me, hence nobody else can, and we're all doooomed".

      I read a lot about esoteric technologies; I had an interest in astronomy and physics at one point. What I discovered was that most people don't realize that the mundane ordinary technology they deal with every day is not the cutting edge, it's just the current cost-effective level of technology. The cutting edge is way, way out there, it's just not read

  • by 0111 1110 ( 518466 ) on Saturday September 18, 2010 @07:59PM (#33623452)

    The smaller the NAND flash process size, the shorter the write endurance and data retention times. A 25nm NAND flash SSD will have a much shorter lifespan and hold data for a much shorter period of time than current 34nm tech. Does this mean that 2010 NAND flash SSDs will be better than 2011 ones? Well I guess that depends on how much you value reliability and longevity in your storage devices. Lower cost and shorter life is a win/win for the manufacturers. This limit on NAND flash technology has been known since the start. I don't see the big deal. Just stop at 34nm and work on other technologies that are faster or scale in size better. We usually think of a smaller process size as being better, but in this case it's not.

    http://features.techworld.com/storage/3212075/is-nand-flash-about-to-hit-a-dead-end/?intcmp=ft-hm-m [techworld.com]

    http://hardforum.com/showthread.php?t=1492711 [hardforum.com]

  • Think of all the desktop cases with space. Layers of chips high and the depth too.
    If not, always U2.
  • I would mod this article as troll/flamebait.

    The sky is always falling. We never know how to make it to the next generation; if we knew that already, it would have already ceased being the next generation. SSDs are now more or less completely tied to the same fate as CPUs, and as such, if this were really that big of a problem, the article would be "Is computing power about to hit a wall?"

    • Don't think of it as a sky-is-falling article then. Think of it as a NAND-flash-has-some-process-size-limitations article. For NAND flash larger process sizes are simply better. I know that seems counter-intuitive but it's true. We just aren't used to the idea. Despite that fact the manufacturers still want to shrink because it saves them money. Usually it helps the consumer as well, but that is simply not true with NAND flash tech. Maybe a year or two from now larger process sizes will be a value add attri

  • HTML5 drives are the future.
  • Conductor pairing (Score:3, Interesting)

    by Skapare ( 16644 ) on Sunday September 19, 2010 @10:34AM (#33627642) Homepage

    The effects of EM fields can be significantly reduced by conductor pairing. When two currents of equal and opposite magnitude run side by side, the EM field is almost entirely confined to a space around those conductors. This can be achieved by creating cell pairs arranged so they are side by side, but turned in opposite directions. This allows the current of one to be in the opposite physical direction of the other, when the same operation is being performed on each. Since erase and read (but not write) can always be done at the same time, this reduces the number (in the case of read) and severity (in the case of erase) of EM fields, reducing the overall effect of EM fields on adjacent inactive cells.
