Data Storage

512GB Solid State Disks on the Way 186

Viper95 writes "Samsung has announced that it has developed the world's first 64Gb (8GB) NAND flash memory chip using a 30nm production process, which opens the door for companies to produce memory cards with up to 128GB capacity."
  • Cost? (Score:2, Interesting)

    by Jeff DeMaagd ( 2015 )
    Capabilities aren't very important if they aren't affordable. So maybe some government contractors can afford those things now, but I don't think it would be that interesting to the consumer until SSDs get to a tenth of the cost.
    • Re:Cost? (Score:5, Insightful)

      by NickCatal ( 865805 ) on Sunday October 28, 2007 @10:48AM (#21148471)
      Well, the defense department would love these. Store a lot of data in places with constant vibration and heat issues (Iraq) without worrying about damaging the disks.
    • Re:Cost? (Score:5, Insightful)

      by eebra82 ( 907996 ) on Sunday October 28, 2007 @11:00AM (#21148539) Homepage
      News flash! We all know that cutting-edge hardware is in almost all cases too expensive. It takes time to adopt new hardware regardless of how practical it is. Once vendors acknowledge the need for such disks and once Samsung receives a boatload of orders, things will look different, but until then, it's expensive to produce because it's being done in small quantities.

      I guess the next generation of iPods will drop the hard-drive-based devices from the line-up completely.
      • I think even the 64GB SSDs are too expensive and they've been out for a while. The 512s probably aren't made yet with those chips. I think it will become affordable eventually, but I bet that they aren't going to be using these chips, these chips will probably be history by then.

        I know iPods will all be flash, but we don't really know if the HDD players will be gone next year. Even if flash has a price of $5/GB next year, the 160GB model would be $800 in flash chips alone. The cost of the memory chips w
  • by AlpineR ( 32307 ) <wagnerr@umich.edu> on Sunday October 28, 2007 @10:40AM (#21148415) Homepage
    It's not a dupe. The previous article [slashdot.org] said that 64 Gb chips could be combined into a 128 GB device. Now they can combine 64 Gb chips into a 512 GB device. A huge advance!
    • Re: (Score:2, Interesting)

      by ILuvRamen ( 1026668 )
      maybe they created a controller that can read and write from them simultaneously, so it's double the read/write speed. I hope so, cuz it better be able to beat my SATA drives in read/write speed; otherwise I don't really care how fast the seek time is, cuz any file over like 100KB would be slower to open on it than on a normal hard drive.
      oh yeah, and I agree with the other posts. Call me when it's on its way to my budget, not just store shelves lol.
      • Re: (Score:3, Interesting)

        by Jeff DeMaagd ( 2015 )
        The seek times of SSDs should make it such that trying to read and write from the storage array at the same time would seem kind of pointless. It also increases the costs. It would probably go the way of FB-DIMM. FB-DIMM is supposed to allow simultaneous reads and writes to different memory cards, but it's too expensive and has other problems limiting its performance. Now, if the controller designer can apply something like that to a hard drive array, then maybe that would be nice. I think it might be p
    • Well, seeing as how they skipped right over 256GB devices, I'd say it is a major advance!
  • 512GB? (Score:5, Insightful)

    by loshwomp ( 468955 ) on Sunday October 28, 2007 @10:41AM (#21148425)
    You could use the same logic to conclude that 512 terabyte solid-state media is on the way.
  • by aneeshm ( 862723 ) on Sunday October 28, 2007 @10:41AM (#21148427)
    ......when I think that porn, or some equivalent thereof, has been responsible for all human progress throughout history.
    • by stranger_to_himself ( 1132241 ) on Sunday October 28, 2007 @11:18AM (#21148643) Journal
      Maybe human beings are just porn's way of making more porn.
      • by owlnation ( 858981 ) on Sunday October 28, 2007 @02:03PM (#21149777)

        Maybe human beings are just porn's way of making more porn.
        The great thing about slashdot is that there really are some incredibly smart and funny people (two things that usually go together) here. Take the above quote, for example: it is both funny and deeply profound. It is a Hall of Fame quote. Thank you, it made my day.
        • >>Maybe human beings are just porn's way of making more porn.

          In case any of our dear readers don't recognise the quote, I believe the GP is ripping off Richard Dawkins whose gene-centred theory of evolution can be paraphrased as "Human beings are just genes' way of making more genes". This is top grade geek humour.

          I look forward to reading the full paper in the next edition of Nature (or at least looking at the pictures).
    • by iminplaya ( 723125 ) on Sunday October 28, 2007 @11:35AM (#21148733) Journal
      War also provides a big push. Now imagine how fast progress would be with more military porn.

      Hey, sailor...
      • Re: (Score:3, Funny)

        Now imagine how fast progress would be with more military porn.

        Porn and War are the two major competing drivers of all progress. It kinda brings new light to the phrase "Make Love not War."
      • War also provides a big push. Now imagine how fast progress would be with more military porn.
        Um, anyone remember Jeff Gannon and military-themed gay male escort services? No thanks!
    • by Kjella ( 173770 )
      As most of the exceptionally brilliant people have had serious personality issues and were far too obsessed with their work to do more than average or less when it comes to reproduction, I doubt progress has much to do with porn. When it comes to achievements in general though, why not? Even creationists believe that you inherit traits like eye color, hair color, personality traits and so on - the evidence is too overwhelming to ignore. Now assume you have a trait "sex drive" or "urge to have children" whic
    • Artists have said as much for thousands of years.
      But that's because the definition of "porn" most people use is "anything that offends me or has naked people in it or has sex implied in it."

      Is it any wonder that porn has done so much?
  • by schnikies79 ( 788746 ) on Sunday October 28, 2007 @10:54AM (#21148505)
    It's not so easy to use 1,000,000 = 1MB with this system. Unless they do it anyway.
      Just wait till marketing decides to call these memory cards 550GB instead of 512GB... then other competing companies will follow suit, call people who complain whiners, and say it's an industry-standard way of labeling capacity.
      • by pslam ( 97660 ) on Sunday October 28, 2007 @12:04PM (#21148875) Homepage Journal

        On that subject, whenever the 2^n or 10^n units thing gets brought up, some smart arse always says "it's so illogical to have binary based sizes like that, it's so confusing and the media doesn't work in binary anyway."

        This is just history re-writing bullshit that someone spouts to get mod points and continue another meme.

        There was a time when hard disks were all based on megabytes, and megabytes were always 2^20 = 1048576 bytes. NOBODY EVER GOT CONFUSED. History re-writers say otherwise, obviously. Where did it all change? Well, for hard disk manufacturers, it was a blatantly cheap trick to save 5-10% in costs, and whenever anyone complained they could just trot out that viral history re-write meme about how binary based units were always confusing. Hell, they even convinced SI. SI have absolutely no authority or experience with determining computer units, and the "solution" they came up with is even more confusing and ugly. How do you tell whether MB or MiB is 2^20 or 10^6? Muppets.

        Then came flash cards. Here's a thing a lot of people don't know: flash actually DOES come in binary sizes. That's how it's manufactured. Another thing a lot of people don't know: flash actually gets WORSE for write endurance as its density goes up. It's actually gotten much worse over time. To begin with, low density flash cards did not suffer much from write endurance problems - to the extent that when you got an 8MB flash card it was basically just writing straight through.

        Densities went up, and you started to need a lot of spares, more error correction, and wear leveling. The result was that after formatting, you ended up with about 5-10% of your flash used up. Quite handily close to the decimal-based size. So manufacturers (and I believe SanDisk were the first to do this) silently started selling 64MB cards as 64,000,000 bytes of data instead of 67,108,864. No asterisks, no notes on the bottom of the packaging - nothing. It's fair enough, but done in a fucking deceptive manner.

        I remember getting bug reports about our MP3 players (years back now) misreporting SanDisk flash cards as 61MB instead of 64MB. In the end (sigh) we put in a hack to spot deceptive cards and switch units to powers of 10.

        So before anyone else spouts how the units are confusing - they weren't until manufacturers tried their damned hardest to make sure they were.

        Next, people will complain about how SDRAM, caches and even registers are in silly powers of 2...
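The 61MB-vs-64MB discrepancy described above is simple arithmetic; a quick sketch of the two conventions (the function names are mine, purely for illustration):

```python
# The same byte count reads differently depending on which
# definition of "MB" you use.
def mib(nbytes):          # binary megabytes, 2**20 bytes each
    return nbytes / 2**20

def mb(nbytes):           # decimal (SI) megabytes, 10**6 bytes each
    return nbytes / 10**6

card = 64_000_000         # a "64MB" card labeled in decimal bytes
print(f"{mb(card):.0f} MB decimal")   # 64 MB, as on the packaging
print(f"{mib(card):.0f} MB binary")   # 61 MB, the "misreported" size
```

The ~5% gap between the two readings is exactly the margin the comment describes being silently absorbed by spares and wear-leveling overhead.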

        • by norton_I ( 64015 ) <hobbes@utrek.dhs.org> on Sunday October 28, 2007 @12:26PM (#21149061)
          The IBM Winchester line of drives from the 70s was always labeled in units of 1 MB = 10^6 bytes. It is just completely false that hard drives have always been labeled using binary prefixes. Digging around, it appears that early PC/workstation drives in the early 80s were mixed. Some used 2^20, some used 10^6. In the late 80s, consumer hard drives made by Seagate, WD, etc. all converged on 2^N for a few years, before switching to 10^6 in the early 90s.

          Bandwidth is always measured in 1 MB/s = 10^6 bytes/s, or 1 Mb/s = 10^6 bits/s. Should 1 MB take 1.04 seconds to transfer over a 1 MB/s data link? This includes all forms of Ethernet, SCSI, ATA, PCI, and any other protocol I have looked up. If 1 MB/s does not equal 1 MB per 1 s, someone should be shot; that is just not OK.

          mega = 10^6 in all other fields. Including other computer terms -- 1 MHz, 1 MFLOP, 1 megapixel, etc.

          computer RAM is the only thing that has consistently been labeled using binary approximations to the SI units. And as long as I can remember (computing magazines in the 80s) people have acknowledged that 1 MB = 2^20 is an *approximation* and that mega=10^6.

          Mega=10^6 is right. mega=2^20 is wrong. End of story. It happens that it is technically convenient to manufacture and use RAM in powers of 2. No such constraint applies for hard drives, so there is no reason to use the base-2 prefixes. Stupid OSs should be changed to use the SI prefixes when reporting file sizes. RAM should be labeled using the "base-2" prefixes, but they are admittedly somewhat annoying due to lack of familiarity, and since nobody uses base-10 ram, it isn't a big deal.
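The transfer-time point above can be checked in a couple of lines (a sketch, nothing more):

```python
MB  = 10**6    # SI megabyte, the unit link speeds are quoted in
MiB = 2**20    # binary "megabyte", the unit most OSs report file sizes in

link_speed = 1 * MB          # bytes per second on a "1 MB/s" link
file_size  = 1 * MiB         # a file the OS calls "1 MB"

print(file_size / link_speed)   # 1.048576 seconds, not 1.0
```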
          • by HiThere ( 15173 ) <charleshixsn.earthlink@net> on Sunday October 28, 2007 @01:55PM (#21149721)
            Sorry, but for certain algorithms it's important that you are working in powers of 2, and that was always called mega- (bits, bytes, words, whatever); or, more commonly, kilo-whatever, which was 2^10 whatevers.

            IO has always been a mixture and compromise. Punched cards could hold 12 * 72 bits (7094 row binary) or 12 * 80 bits (column binary, but don't try to read it with the main card reader). Try to fit THAT into your "powers of 10" scenario!

            For the current set of IO devices, capacity measurement was defined by marketing. I saw arguments about it in the trade journals when it was being fought out over hard disks. AFAIK, companies decided independently the choice that was, to them, most advantageous. It was powers of 10. This was not appreciated by any single customer that I was aware of. Some despised it, some didn't care, nobody was in favor. (Yeah, it was a small sample, but it's one that I was aware of. Most didn't care, and many of those weren't interested in understanding.)

            But block allocations of RAM are done in powers of two, and these are frequently mapped directly to IO devices. So having a mismatch creates problems. Disk files were (possibly) created as an answer to this problem. (7094 drum storage didn't have files. Things were addressed by drum address. If a piece went bad, you had to patch your program to avoid it. UGH! Tape was for persistent data; drum storage was transient... just slightly more persistent than RAM.) Drum addresses were tricky. I never did it myself, but some people improved performance by timing the instructions so that they would have the drum head right before the data they wanted to read or write, to limit lagging. (Naturally this was all done in assembler, so you could count out exactly how many milliseconds of execution time you were committing, if you knew the drum rotation speed, and the latency...)
            So things tended to be stored in powers of two positions on the drum, unless a piece went bad.

            Disks, when they first appeared, were slower than drums, but more capacious. (They were still too expensive and unreliable to use for persistent storage.) But the habit of mapping things out in powers of two transferred from drum storage to disk storage. When files were introduced (not sure about when that was) the habit transferred. This wasn't all blind habit; lots of the I/O techniques that had been developed were dependent upon powers of two. So programmers thought of capacity in powers of two. This didn't make any sense to accountants, managers, etc. When computer equipment started being sold by the Megabyte, it made sense to the manufacturers to claim powers of 10 Megabytes for storage, as they could claim larger sizes. (This wasn't as significant for Kilobytes, as 1024 is pretty close to 1000.) It not only made sense to the manufacturers, it also made sense to the accountants who were approving the orders. And when the managers started specifying the equipment... well, everything switched over into being measured by powers of 10.

            No conspiracy. Just system dynamics. And programmers still think of storage in powers of 2, because that's what they work in. (This is less true when you work in higher level languages, but if you don't take advantage of the powers of two that the algorithms are friendly with, it will cost you in performance, even if you don't realize it.)

            • unprofessional (Score:4, Insightful)

              by PMBjornerud ( 947233 ) on Monday October 29, 2007 @04:42AM (#21155151)

              No conspiracy. Just system dynamics. And programmers still think of storage in powers of 2, because that's what they work in. (This is less true when you work in higher level langauges, but if you don't take advantage of the powers of two that the algorithms are friendly with, it will cost you in performance, even if you don't realize it.)
              However, our job as professionals is to know these facts without bothering the end user with it. 2^10 is a nice and useful hack, but not something to show the end user. Computer users are no longer computer experts, and we should not bother them with internal details.

              Disk capacity is reported to my mother in powers of 2. This simply does not make sense.

              Technical details should not trump users. This makes us look like geeks with a binary fetish instead of professionals.
              • by HiThere ( 15173 )
                If she's not a computer professional, why is she worried about the disk size? Her question should be "Is it big enough?", not "How many bytes is it?"

                That said, if she's not a computer professional, the answer to "Is it big enough?" is almost certain to be "Yes", unless she's using Vista, or some other recently-released gigantically-humongous OS. (I'm counting animators, architects, etc. as computer professionals. They have legitimate reasons to wonder whether the disk is big enough, but such people proba
          • If you're going to nitpick, note that bandwidth [wikipedia.org] is measured in Hertz. The marketroid term "bandwidth" refers to channel capacity, which is measured in bit/s.
        • by Kjella ( 173770 ) on Sunday October 28, 2007 @01:24PM (#21149475) Homepage

          Hell, they even convinced SI. SI have absolutely no authority or experience with determining computer units, and the "solution" they came up with is even more confusing and ugly. How do you tell whether MB or MiB is 2^20 or 10^6? Muppets.
          I think you're doing a bit of revisionist history yourself. SI was there first. The SI units have always been in powers of ten, and they were used in all other branches of science long before there was a "computer science". It was computer scientists who originally redefined them to be powers of two, and in the computer world it was so for several decades. It was confusing, but not more so than "if it ends in -bytes, it's a power of 2". Except the floppy drive, which is 1.44 "MB" = 1.44*1000*1024 bytes (1987), or modem speeds, which were reported as 1 kbps = 1000 bps (1972) because that's how electrical engineers talked, or Ethernet, which ran at 10Mbit/s = 10,000,000 bits/s (1980). This led to a "bytes is powers of two, bits is powers of ten" rule which made all sorts of fuck-ups possible.

          Yes, the HDD manufacturers did it because it was a cheap 5-10% savings, but the excuses were plenty and not all bad. It was confusing every time computer science bumped into one of the other sciences and telecommunications in particular, which inevitably used the SI prefixes. However, instead of actually fixing a problem it became only an even greater mess, invalidating pretty much every rule of thumb because the OS would invariably report something else. That's pretty much proof they didn't want to fix anything, just grab some extra profit.

          After that, it was a big mess and with next to no interest in solving it. That's when the people at IEC, not SI, and certainly not pushed by HDD manufacturers, finally said that these units are FUBAR, and the only way to make a long-term solution is to abandon the SI-prefixes and make new and ugly ones, particularly the names. At that point, we're talking 50 years of computer science use against 200 years of other sciences, and with retards messing up the boundary. I think they're ugly as hell, but they're also the only way to go forward from here.
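The floppy example above is worth spelling out, since the "1.44MB" figure mixes both conventions in a single number (a sketch; 1440 KiB is the same quantity written in integers):

```python
# The "1.44MB" floppy: 1440 binary kilobytes, i.e. a decimal count
# of KiB. The result matches neither pure convention.
floppy = 1440 * 1024     # bytes
print(floppy)            # 1474560 bytes
print(floppy / 10**6)    # 1.47456 "MB" in SI terms
print(floppy / 2**20)    # 1.40625 MiB in binary terms
```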
        • Then came flash cards. Here's a thing a lot of people don't know: flash actually DOES come in binary sizes. That's how it's manufactured.

          Uh, no. You can make flash in any size you like. It's just a number of NAND or NOR cells, and there's no reason at all that they have to be in power-of-two sizes. Most of the size limits (SD = 2GB, SDHC = 32GB) are actually power-of-two counts of 512-byte sectors, but the media can be any size up to that.. any number of sectors.

          The basic pages and blocks of flash are themselves not powers of two! Most 512-byte page NAND devices have some number (~16) bytes of extra area in each page for bad block mana

          • Re: (Score:3, Interesting)

            by pslam ( 97660 )

            This is all very well, but you are totally wrong. Go download a datasheet for a popular flash part. Guess what? The capacity is an exact power of 2.

            I'm not just making this up. NAND is naturally base-2 capacity sized. Yes, there is sparing, but pages are normally 2048 bytes (or larger these days) with a few extra bytes per 512 for ECC. The non-ECC areas are still power-of-2 based, and the chip area itself is square and ends up being another power-of-2 pages. End result: a power of 2. I've been working on this

        • Dude... I know techies have a binary fetish, but get this:

          People don't fucking care about the manufacturing process or memory addressing details. Non-techies always count in powers of ten, and you and I will do nothing but make ourselves look like retards if we try to argue that 512 + 512 = 1k.

          Do power stations redefine a kilowatt to 978W? Do butchers sell kilos of meat as 1012 grams? Nope. Would I allow them to redefine these terms based on their manufacturing process? Nope.

          This is probably harder to
          • by smash ( 1351 )
            Actually... people expect their 500gb porn collection (as reported by windows) to fit on a 500gb disk (as sold by the hd manufacturer)...
        • What does the M stand for in your 100Mb internet connection? What does it stand for in your 700MB CD? Your 4MP digital camera? A 10MW generator?

          Now, doesn't it strike you as fundamentally stupid that it differs? Doing stupid things just because they are tradition is really one of humanity's more unfortunate flaws.
        • I'm not saying one is more confusing than the other, but if you're going to refer to the base-2 forms, PLEASE use the proper names: kibblebytes, nibblebytes, gigglebytes, and tribblebytes.
        • The notion that only the manufacturers caused confusion seems unlikely at best. I doubt very much that it was as universally known as you think it was. I certainly know that as a somewhat geeky kid 20 years ago I could not have told you the number of bytes in a megabyte. BUT even if I was the only one who didn't know at that time, I can bet cash money that even here, right now, the number of people who can tell me how many bytes are in 7.23 MiB (2^20) in less than 3 seconds is almost zero. Yet, I can tell you
        • by Yvan256 ( 722131 )
          How a post that's "rewriting history" gets modded "+5, Insightful" is beyond me.

          When people drive a kilometer, they drive 1000 meters, not 1024.

          Just because you've been living with inches, miles and pounds all your life doesn't mean the rest of the planet doesn't use base 10.

          A kilo means 1000. Just because programmers stole the term kilo and redefined it to "1024" doesn't mean they were right.

          Hey, let's start using "miles" to mean "431 inches". That would make as much sense as "kilobyte = 1024 bytes".

          P.
    • It's not so easy to use 1,000,000 = 1MB with this system. Unless they do it anyway.
      Are you telling me that 64Gb is not exactly 64,000,000,000 bits?

      Ugh. And I thought that they had seen the light and decided to go base 10 and count the actual bits.
  • What about IOPS? (Score:3, Interesting)

    by KrackHouse ( 628313 ) on Sunday October 28, 2007 @10:58AM (#21148531) Homepage
    Does anybody know how well flash SSDs perform in RAID arrays? 15kRPM SAS drives are horrendously expensive so if I could plug a couple small flash drives into my RAID card (RAID 0) I'd be a happy camper. Can't find benchmarks anywhere and flash drives have horrible write speeds which means they have terrible OLTP performance.
    • Re: (Score:2, Interesting)

      by pslam ( 97660 )

      Does anybody know how well flash SSDs perform in RAID arrays? 15kRPM SAS drives are horrendously expensive so if I could plug a couple small flash drives into my RAID card (RAID 0) I'd be a happy camper. Can't find benchmarks anywhere and flash drives have horrible write speeds which means they have terrible OLTP performance.

      Individual flash chips have terrible write performance, mostly due to the slow block erase time. However, you always use multiple chips in high capacity storage devices (anything larger than an MP3 player), and you can start doing fancy tricks with interleaving, or just plain have way more buffer memory to hide the erase time. If you really want to crank out even higher performance, then you stick multiple NAND interfaces on the controller chip and drive it all in parallel.

      If you stack about 4-8 chips in
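The interleaving trick described above can be modeled very roughly: with several chips, the slow block erase overlaps across chips while programming continues elsewhere. This is a toy best-case model with made-up timings, not figures from any real part:

```python
# Made-up per-block timings, for illustration only.
ERASE_MS, PROGRAM_MS = 2.0, 0.2

def effective_write_ms(nchips):
    """Best-case sustained cost per block when erases are spread
    across nchips and overlapped with programming: the pipeline is
    limited by the program time or the per-chip share of erase time,
    whichever is larger."""
    return max(PROGRAM_MS, ERASE_MS / nchips)

for n in (1, 4, 8):
    print(n, effective_write_ms(n))   # 1 chip: 2.0 ms; 8 chips: 0.25 ms
```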

    • >Does anybody know how well flash SSDs perform in RAID arrays?

      They are truly random access devices, so you can use throughput/blocksize to get IO/s. Of course write IO/s and read IO/s are very different.
      I don't see the point about OLTP. Normally you don't write a whole lot of data and since the access time is virtually zero, for random writes they might still be superior to disk drives. Together with write caching that should make them very suitable for this kind of application - as opposed to streaming
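The throughput/blocksize estimate mentioned above is easy to sketch (the figures below are invented for illustration, not measured from any drive):

```python
def iops(throughput_bps, block_size):
    """Rough IO/s for a truly random-access device: with virtually
    zero access time, operations per second is just bytes/s over
    bytes per operation."""
    return throughput_bps / block_size

# Illustrative throughput figures only.
read_tp, write_tp = 60e6, 25e6      # bytes/s
print(iops(read_tp, 4096))          # ~14600 random 4K reads/s
print(iops(write_tp, 4096))         # ~6100 random 4K writes/s
```

Note the read and write figures differ, matching the point that read IO/s and write IO/s must be computed separately.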
    • The best place in a database server for these is probably something like logfiles or the scratchspace. Somewhere that gets fragmented quickly due to frequent resizing. The main data files may get internally fragmented but if they're being fragmented due to frequent resizing then your DB has basic configuration issues - preallocate main files' initial sizes larger from the start.

      I hope this is for a test/dev or a personal learning server; if 15Krpm SAS drives are 'horrendously expensive' for a prod
  • iPhone (Score:3, Funny)

    by imputor ( 841598 ) on Sunday October 28, 2007 @11:39AM (#21148747) Homepage
    So when does the 512GB iPhone come out?
    • by Kjella ( 173770 )
      512/8 = 64 = 2^6 => 6*18 months (Moore, why not?) = 9 years. OK, maybe that's a bit optimistic, but your kids will definitely have one when they go to college. Hell, it amazes me that I'm walking around with 4GB on a memory stick these days, compared to what I started with. And no, we're not even talking floppies; we're talking C64 cassettes.
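The back-of-the-envelope above can be written out explicitly (the 18-month doubling is the usual Moore's-law shorthand, applied here to flash capacity; 8GB is the then-current iPhone):

```python
import math

current, target = 8, 512                  # GB: today's iPhone vs the wish
doublings = math.log2(target / current)   # 512/8 = 64 = 2^6, so 6 doublings
months = doublings * 18
print(doublings, months / 12)             # 6.0 doublings, 9.0 years
```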
    • In eighteen months, we're assured.
  • by 3seas ( 184403 ) on Sunday October 28, 2007 @11:41AM (#21148761) Homepage Journal
    SSD, doesn't that stand for Single Sided Disks, as in floppies... ; may as well...

    anyways, if we had 1000 terabyte solid state drives for $10, then you'd hear whining about the yet-to-be-released Googleplex drive for $5...

    Like damn, is anyone using up their new 100 gig drives faster than the next size up is out for less money?

    To back up very large drives today, it's nearly cheaper in time/labor and costs to just use hot-swap drives, where the backup is the removed drive, plugged in and run for 15 minutes a few times a year, if even that. Or a rotation system as was done with tape.
  • So solid state disks are all about NAND flash memory, right? I thought that SSDs would be all about MRAM, and that MRAM SSDs would be viable by the late 2000's. What's up with that?

  • Don't flash chips have a much shorter lifespan than regular hard drives, with a relatively low number of read and write cycles? Or is that just older flash tech?
    • They're getting better, plus they use wear leveling, which is like forced fragmentation, but there are no moving parts, so it doesn't incur a performance penalty. In the newer drives, the mean time before failure is a lot longer than your typical spinning-platter drive's.
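A toy sketch of the wear-leveling idea described above: logical blocks are remapped on each rewrite so the erase wear spreads across physical blocks. This is entirely illustrative; no real controller is this simple:

```python
class ToyWearLeveler:
    """Naive dynamic wear leveling: each rewrite of a logical block
    is steered to the least-worn unmapped physical block."""
    def __init__(self, nblocks):
        self.erase_counts = [0] * nblocks   # wear per physical block
        self.map = {}                       # logical -> physical

    def write(self, logical):
        used = set(self.map.values())
        # pick the least-worn physical block not currently mapped
        free = [i for i in range(len(self.erase_counts)) if i not in used]
        target = min(free, key=lambda i: self.erase_counts[i])
        self.erase_counts[target] += 1
        self.map[logical] = target

lev = ToyWearLeveler(8)
for _ in range(100):
    lev.write(0)            # hammer a single logical block
# Wear stays even: the spread across physical blocks is at most 1,
# instead of one block absorbing all 100 erases.
print(max(lev.erase_counts) - min(lev.erase_counts))
```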
    • I believe the problem is only with writes, not reads. Which, with a windows machine means that as long as there is a hardware switch to disable writes, it is more secure as well as faster to boot off a flash drive.

      • I believe the problem is only with writes, not reads. Which, with a windows machine means that as long as there is a hardware switch to disable writes, it is more secure as well as faster to boot off a flash drive.

        It is actually not even that bad. The problem is clearing bits, not setting them, so the system can spare a small marker for every so many chunks of data and mark it as "bad" when it is unable to erase it again. Thus you don't get sudden failures of the entire drive, rather you will get a reduced

  • by DamonHD ( 794830 ) <d@hd.org> on Sunday October 28, 2007 @12:30PM (#21149097) Homepage
    Hi,

    I already boot/run my main Internet-facing server (Ubuntu) from a 4GB memory SSD card to minimise power consumption, and I have more than 50% space free, ie it wasn't that hard to do.

    http://www.earth.org.uk/low-power-laptop.html [earth.org.uk]

    I'm not being that clever about it: I'm using ext3 rather than any wear-leveling SSD-friendly fs, and simply minimising spurious write activity, e.g. by turning down verbosity on logs. And laptop-mode helps a lot, of course.

    Now that machine does also have a 160GB HDD for infrequently-accessed bulk data (so the HDD is spun down and in a power-conserving sleep mode most of the time), and it would be good to get that data onto SSD too. But a blend, as in many memory/storage systems, gives a good chunk of the maximum performance and power savings for reasonable cost.

    Rgds

    Damon
    • If you worry about wear, why not move your logging to a secondary USB flash disk?

      Then you would be able to swap out your high-access part of the system without touching your OS and other setup.
      • by DamonHD ( 794830 )
        Complexity. If I can KISS and it won't wear out for at least a year or two then that's all I need.

        Reducing logging, etc, hasn't taken much effort at all anyway.

        Rgds

        Damon

        PS. Plus more USB devices is more power draw, and this project is minimising power draw.
  • Time to buy some stock in solid state manufacturers, perhaps... I can only foresee one evolutionary change in data storage for common home use, really. The technology is still young, but already showing lots of promise.
  • 64 GB flash? Pff... The next big thing is ion memory!

    A thumb drive using [programmable metallization cell memory technology] could store a terabyte of information

    http://www.wired.com/gadgets/miscellaneous/news/2007/10/ion_memory [wired.com]

  • The price premium for laptops with even small SSDs is astonishing. Almost $1000. As much as I love the idea of an SSD laptop and ever-bigger storage for phones and PDAs, the price has to become realistic before anyone will buy these in volume.
