Fusion-io IoXtreme's Consumer-Class PCIe SSD — Impressive Throughput

MojoKid writes "When Fusion-io's first ioDrive product hit the market, it was claimed to be a 'disruptive technology' by some industry analysts, with the potential to set the storage industry on its ear. Of course the first version of the ioDrive was an enterprise-class product that showed the significant potential of PCI Express direct-attached SSD storage, but its cost was such that the mainstream market couldn't possibly justify it, no matter what the upside performance looked like. Then we heard of Fusion-io's more consumer-targeted play, the ioXtreme, that was announced this past summer. Fusion-io has only very recently released these new, lower cost cards to market. The first-ever full performance review of the product over at HotHardware shows the half-height PCI Express X4 cards are capable of a robust 800MB/sec read bandwidth and about 300MB/sec of write bandwidth. The cards particularly excel versus a standard SSD at random read/write requests and even perform relatively well with small block transfers."
  • In the right place (Score:3, Insightful)

    by Froze ( 398171 ) on Monday November 16, 2009 @05:48PM (#30122216)

    This is the proper place for memory, on the system bus.

    Putting memory behind a drive controller is just like making your gas pedal respond to a buggy whip (OK, car analogies aren't my strong point).

    • by MobileTatsu-NJG ( 946591 ) on Monday November 16, 2009 @05:58PM (#30122366)

      Putting memory behind a drive controller is just like making your gas pedal respond to a buggy whip (OK, car analogies aren't my strong point).

      Yeah, no kiddin. I mean if the whip has bugs in it, isn't that a driver issue?

    • Re: (Score:3, Insightful)

      SATA does have its advantages, though: laptop support, bootability, hot-swap, cross-platform (no drivers needed), etc.

      • Re: (Score:3, Informative)

        by tlhIngan ( 30335 )

        SATA does have its advantages, though: laptop support, bootability, hot-swap, cross-platform (no drivers needed), etc.

        A proper PCIe (miniPCIe) card supports bootability (appears as a regular controller+disk), laptops often boot from miniPCIe SSDs (netbooks notably - Asus eeePC and the SSD Acer Ones, amongst others). Hot swap not so much (I know SATA supports it, but do real world motherboard controllers support it?), though I suppose if someone were to make it an ExpressCard design, possibly. Cross-platform

        • The ioDrive/ioXtreme doesn't appear as a regular IDE or AHCI controller because that would significantly degrade its performance; most of Fusion-io's "special sauce" is in the driver.

          • But as long as it has a BIOS extension ROM on it that knows how to make basic use of the interface (it doesn't have to be particularly fast, just good enough to let the OS kernel/drivers be loaded by the bootloader), then it should be bootable.

            • by Amouth ( 879122 )

              that's the point - to make it so the generic OS can see the card, and to get to that point it has to advertise itself as an existing standard-compliant card - which it isn't, because the existing standard isn't fast enough for it. Instead you end up with OS/application-specific drivers to present the card as storage space.

              Sure, they might be able to make it present itself as a standard ATA or SCSI interface and volume with degraded performance and then somehow load a driver in the OS to talk over that exis

              • it has to advertise itself as an existing standard-compliant card
                No it does not; it just has to have a ROM that knows how to talk to the card, is loaded as a BIOS extension, and traps interrupt 13h. The bootloader uses that interrupt to access the drive and load critical parts of the OS, including the drivers needed for the main hard drive. The OS then switches into protected mode and the driver takes over.

                There are plenty of SATA/SCSI/RAID cards/chips that are bootable and yet need a special driver for wi
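The two-path boot handoff described above can be sketched as a toy model (Python purely for illustration; the functions are hypothetical stand-ins for a slow option-ROM INT 13h service and a fast vendor driver, not any real BIOS interface):

```python
# Toy model of the boot handoff: the card's option ROM hooks INT 13h with
# a simple read routine that is merely good enough to load the kernel;
# the OS's native driver then takes over for full-speed I/O.

DISK = {0: "bootloader", 1: "kernel+native driver", 2: "user data"}

def int13h_read(lba):
    """BIOS-extension path: correct but slow, used only during boot."""
    return ("bios", DISK[lba])

def native_read(lba):
    """Vendor-driver path: the fast path once the OS is in protected mode."""
    return ("driver", DISK[lba])

def boot_sequence():
    # The bootloader only knows how to call the INT 13h service...
    loaded = [int13h_read(0), int13h_read(1)]
    # ...but once the kernel brings up its own driver, I/O switches paths.
    loaded.append(native_read(2))
    return loaded
```

The point being that the option-ROM path never needs to be fast: it only carries the few megabytes needed to reach the point where the real driver loads.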

        • Re: (Score:3, Insightful)

          by marcansoft ( 727665 )

          Most motherboards these days do implement SATA hotplugging. In fact, it's pretty important for eSATA.

        • Most netbooks I know about use a modified miniPCIe pinout that carries USB and SATA (which most cards use), rather than actual miniPCIe.

        • by LoRdTAW ( 99712 )

          Those laptop PCIe SSDs are not PCIe. The compact PCIe connector also has pins for SATA and USB, just like ExpressCard, so the SSDs use the SATA pins. I thought I could use one in a PCIe x1 adapter card to boot an ATX motherboard, but after some research found out that was not possible.

    • by quitte ( 1098453 )

      It's in the right place, but will it behave the right way?

      When those mainboards with extra flash for Vista were announced, I hoped it would be accessible directly via Linux MTD.
      Without reading the article, I still assume that it will again be just another HDD simulator that doesn't allow the OS to do the wear levelling or map the storage directly into accessible memory.

      Too bad. Since Debian's live-helper made building live systems easy I'm running my desktop and laptop from squashfs anyways, so I'd love to

  • sweet (Score:5, Insightful)

    by Lord Ender ( 156273 ) on Monday November 16, 2009 @05:49PM (#30122242) Homepage

    I bought a SATA SSD which can read and write at around 200MB/s. It was the greatest upgrade I've ever done, and for just $200 (less than my CPU or GPU). Now, I can't stand waiting for things to load when I have to work using mechanical hard drives.

    If 200MB/s is that big a difference, 800MB/s is going to be... actually probably not that much better. My computer already feels "instant."

    • Re: (Score:3, Insightful)

      by XanC ( 644172 )

      It's the read latency, not MB/s that's most important for desktop usage or for most databases. Everybody quotes the numbers that they're used to quoting, but the game is different with SSDs.

      • Re:Latency (Score:3, Informative)

        by InvisiBill ( 706958 )
        This ioXtreme is rated at 80 microseconds, while the Intel X25-M G2 is rated at 50 microseconds.
        • And the worthless JMicron controller SSDs probably have read latencies under 100 microseconds as well.

          It's not read latency that matters at all, it's total THROUGHPUT for the smallest, random, reads and writes.

          • It's not read latency that matters at all, it's total THROUGHPUT for the smallest, random, reads and writes.

            And that throughput is dominated by latency in HDDs. Much less so for SSDs.

          • by tepples ( 727027 )

            It's not read latency that matters at all, it's total THROUGHPUT for the smallest, random, reads and writes.

            For one thing, in a hard disk, seek latency dominates throughput for random loads. SSDs improve throughput by cutting latency. For another, interactive tasks demand high throughput on a burst of transactions, which needs low latency.
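To make the latency/throughput relationship concrete: for small random reads issued one at a time, throughput is simply the block size divided by the per-request latency. A quick sketch with illustrative ballpark numbers (not measurements):

```python
def random_read_mb_s(block_kb, latency_ms):
    """Throughput of back-to-back small random reads with no queuing:
    every request pays the full access latency, so MB/s = size / latency."""
    return (block_kb / 1024.0) / (latency_ms / 1000.0)

# Rough ballpark figures for 4KB random reads:
hdd = random_read_mb_s(4, 8.0)    # ~8 ms seek + rotation -> ~0.5 MB/s
ssd = random_read_mb_s(4, 0.08)   # ~80 us (the ioXtreme's rated latency) -> ~49 MB/s
```

Which is where the "factor of 100" figures quoted elsewhere in the thread come from: for random loads, cutting latency and raising throughput are the same thing.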

          • by AllynM ( 600515 ) *

            The main issue with the JMicron controllers is the latency occasionally spikes to closer to half a second when hitting it with random writes. The other issue is that during this latency spike, even reads hang, leading to the 'stuttery' issues with those models. The 602B controller was supposed to address this, and it did to some degree, but not enough to compete with Indilinx and especially Intel. I've noted Samsung drives to be stuttery as well, but only after you've hit it with a bunch of random writes

            • I kind of was thinking that if "latency" measured the time between sending a read request to the drive and the time when you get back the very FIRST bit, then even the JMicron probably does that ok.

              • If you send a write request and then a read request to the JMicron controller, you could very well wait a second before your first bit is returned. That is quite a bit of latency.

            • I had one JMicron drive that was silently failing to write one section of the drive (every other bit was always zero). I'd always heard that SSDs were supposed to check that, but I'm not entirely sure that was a JMicron specific issue.

    • I got the Kingston V series 64GB for around $120--I think it's only rated around 100MB/s. Still feels a lot faster, especially after boot-up.

    • Re:sweet (Score:4, Funny)

      by Monkeedude1212 ( 1560403 ) on Monday November 16, 2009 @06:00PM (#30122410) Journal

      Instant, or is there a "speed of light" delay?

    • it's not the throughput that makes your computer feel that much more responsive. It's the latency (or lack thereof). Access times of hard drives are easily a factor of 100 higher than those of SSDs.
    • The random access speed is what makes it seem faster, not the throughput. That's only about twice as fast as a good HDD in terms of throughput, but the access times are orders of magnitude lower.
    • Re: (Score:3, Insightful)

      A lot of people feel their fast mechanical disks are "instant" too. I guess that there are a lot of things you -can- do four times faster with this SSD than with the one you have. Killing mosquitoes with a gunshot is also fast.
    • It's not the throughput you're noticing. It's the seek latency, at which SSDs are many times faster than mechanical drives (comparing Intel's X25-M to WD's 10K RPM VelociRaptor, you're looking at about 65x faster; compared to a 7200RPM drive, about a 100x difference).

    • and my rebuttal to your post (copied and pasted from my reply to someone on another forum)

      I recently installed 2 of the 120GB OCZ Agility drives in RAID 0 - apparently SSDs scale better with RAID than a regular hard disk.
      I can read at 390MB/s, write at 220MB/s, and the random 4K reads and writes are about 23MB/s (regular disks can do about 0.7MB/s in such tests).

      According to benchmarks, a single OCZ disk is pretty darn close to the Intel in the real world performance tests and one can only guess that 2 of them
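The 4K figures quoted above convert directly into IOPS, which is often the more telling number for random workloads (this is just a unit conversion of the quoted values; nothing new is measured):

```python
def iops(throughput_mb_s, block_kb=4):
    """Convert a small-block throughput figure into I/O operations per second."""
    return throughput_mb_s * 1024 / block_kb

ssd_raid0 = iops(23.0)   # the quoted 23 MB/s -> ~5,900 IOPS
hdd       = iops(0.7)    # the quoted 0.7 MB/s -> ~180 IOPS
```

A thirty-fold IOPS gap is what makes the desktop "feel" faster even when sequential numbers look only 2-4x better.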

      • With a mechanical disk, you must wait on apps to load. With a fast SSD, they load as fast as you click. That is a huge difference. Your train of thought is never derailed due to disk waits.

        There is no cure for net latency yet. This is irrelevant. My computer works as fast as I think, and I love that!

          • I know all this; search my history on disks - I know how they work, I know about latency, I know which portions of disk operations should be quicker, and I'm telling you, on a high-end machine with a 7200RPM disk and 6GB of RAM the difference is negligible, especially on a quad-core rig which used a 2GB ReadyBoost disk.

          • In my experience, Netbeans takes about ten times longer to load on a mechanical disk. If you call that negligible, you have a very strange definition of "negligible."

      • FWIW, SATA 3.0 is next year, ONFI 2.0 is next year, and Intel and Indilinx (OCZ) revision 3 is next year... I am almost tempted to change my stance and suggest waiting.

        I'm waiting until they hit my price point of under $1/GB for the better units. MLC based SSDs are still up around $2.25 to $2.45 per gigabyte for the low-end stuff, with the better MLC in the $2.50 to $3.25 per gigabyte range. I think the best spot price I've seen yet is around $1.90 for MLC.

        At $1/GB, I'd quickly replace the 2.5" SATA
  • by jandrese ( 485 ) on Monday November 16, 2009 @06:03PM (#30122464) Homepage Journal
    It still has many of the limitations that the original FusionIO cards have: It's pricey at $11/GB (although not astronomical like the original products), and you still can't boot off of it. This means you'll need at least one old fashioned drive with the OS on it to get your machine going, which is a shame because the system files can often make good use of SSD performance.

    On paper, I don't think the performance difference between this and something like an Intel X25-M is going to justify the four-fold price difference. When people went from their laptop HDD to the Intel drive, they often saw startup times and whatnot go from multiple (tens!) of seconds to less than a second. This card is likely to push them from less than a second to a slightly smaller less than a second; it's just not worth it to most people.
    • Re: (Score:3, Insightful)

      by TeknoHog ( 164938 )

      It still has many of the limitations that the original FusionIO cards have: It's pricey at $11/GB (although not astronomical like the original products), and you still can't boot off of it. This means you'll need at least one old fashioned drive with the OS on it to get your machine going, which is a shame because the system files can often make good use of SSD performance.

      I have a Linux machine that boots off a hard drive (i.e. bootloader and kernel) and the rest of the system runs on a SSD. The HD can then spin down until next boot. I guess other real operating systems can do this too.

      • by linzeal ( 197905 )
        Try that with windows. All the operating system files for windows must be on the same drive and partition.
        • by topnob ( 1195249 )
          I believe he said "real operating systems".... *lays the bait and runs*
        • by afidel ( 530433 )
          Uh, Boot and System volumes can in fact be different. The GUI mode setup might not let you do this but multibooters have known it for years.
        • by Barny ( 103770 )

          Wrong actually.

          I have the Program Files (x86), Users and UserData folders on HDD partitions mounted via NTFS folder mount points, which means your system boots and runs some apps as fast as the SSD can let it, but for storage of big files and junk, the rust takes over.

      • If you're not writing to it much, then you can plug a (very cheap) 4GB CF card in via a $2 CF-to-IDE adaptor as a boot volume and get rid of mechanical disks altogether. Of course, if your SSD is bootable then this isn't an issue.
    • Still workable (Score:2, Interesting)

      by ciroknight ( 601098 )
      You don't really still need the spinning media. There's a cheap, incredibly easy, and fast medium that's perfect for booting your computer, and your computer is loaded with ports for it. It's called a USB thumbdrive.

      It's pretty simple actually: they're cheap and easily available in all kinds of different sizes ranging from "I just need to boot Linux" (256MB) to "I want all of my apps on it too" (32GB+), they're writable so you can update the OS, and you've likely got a multitude of ports inside
      • Erm. Booting Windows 7 off of a USB thumbdrive? (you'd need that 32GB model)

        Dunno, doesn't sound like a very good idea. The OS is huge, and needs lots and lots of IO accesses, both for booting and during normal operation. Thumbdrives generally aren't really designed for that kind of continuous use. And finally, the slowdown from waiting to boot would possibly cause more lost time than you'd gain from having an $800 PCI-express card for your application files.

        • by karnal ( 22275 )

          In this instance, "booting the PC" doesn't necessarily mean "loading all of the system files." Mainly just means getting the system up to the point that you're then pulling system (e.g. \windows files) off of the SSD.

    • On paper, I don't think the performance difference between this and something like an Intel X-25m is going to justify the 4 fold price difference.

      This is the perfect caching layer for ZFS. One command to insert it as a read cache between the OS and a big array can make a huge difference in IOPS. I can't easily convince my boss to buy a machine with 80GB of RAM that will be used for nothing but filesystem caching, but I wouldn't hesitate to ask him for a PCIe card to drop into the servers we already have.
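On ZFS the hookup really is one command: `zpool add <pool> cache <device>` attaches an L2ARC read-cache device. The IOPS win the parent describes can be roughly estimated with an expected-latency model; the hit rate and latencies below are hypothetical illustrations, not measurements:

```python
def avg_read_latency_ms(hit_rate, cache_ms, disk_ms):
    """Expected read latency with a fast read cache in front of a slow array."""
    return hit_rate * cache_ms + (1 - hit_rate) * disk_ms

uncached = avg_read_latency_ms(0.0, 0.08, 8.0)  # every read goes to the disks
cached   = avg_read_latency_ms(0.9, 0.08, 8.0)  # 90% of reads served by the SSD
speedup  = uncached / cached                    # roughly 9x fewer ms per read
```

Even a modest hit rate moves the average read latency most of the way toward the SSD's, which is exactly why a single cache card can stand in for a pile of RAM.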

  • Well (Score:5, Insightful)

    by ShooterNeo ( 555040 ) on Monday November 16, 2009 @06:03PM (#30122480)

    First off, late in the article they show that game level load times are faster with these PCIe SSDs. Left 4 Dead loads about twice as quick with the Fusion ioXtreme, so the end user would notice a difference (especially as time goes on and apps become more and more bloated).

    One thing this product does effectively illustrate is that SATA 6Gb/s is already obsolete. All this card really is is the same grade of memory chips that goes in a lesser SSD like an Intel X25-M. The difference is that the controller gangs together 25 channels instead of just 10 like the Intel product. The controller isn't even that high-performance a part - it's using an FPGA. An ASIC version of the chip could be cheaply fabbed using technology several generations back. So, in the long run, the cost to design and manufacture a PCIe SSD is virtually identical to the cost of a SATA SSD. And SATA 6Gb/s is already too slow for SSDs to use (and too fast an interface for a mechanical hard drive).

    All in all, I predict that in a few more years, basically all SSDs sold will use a PCIe interface to connect to the host PC. Laptop manufacturers will have to change their internal mounting scheme slightly. And prices should fall drastically from the $900 this ioXtreme is MSRPing at.

    • what'd be cool is multi-channel SATA - if the host can see that one device is on the other side of multiple channels, it can just bond them together and send/receive data on whatever I/F is free at the moment.
      • I guess. The thing is, PCI express x4 is perfect for the job. Another poster mentioned that modern machines often hang the SATA controller off of the PCI express bus anyways...might as well reduce the complexity. All the interface chips have been out for years, and are very cheap and ready to go. The only missing element is that you do need a very high performance design for the drive controller on your SSD in order for it to be worth it.
    • Actually I think these observations only serve to reinforce Professor John Frink's predictions that in 10 years time SSDs will be twice as fast, 10,000 times the physical size and will be so expensive that only the five richest kings of Europe can afford it.
  • Unfortunately, a bit of a let-down for some might be that the product still currently can't be utilized as a boot volume.

    That means you still need some other drive (probably an "old" SATA SSD) to boot from. You can then load all your apps (and probably even some parts of the OS with a little hacking) onto this beast, but you still can't use it as your primary drive.

    Fusion-io assures us that this feature will be supported in future driver and/or firmware revisions but also didn't commit to a schedule for that roll-out just yet.

    Hopefully it comes along soon and at no cost for the early adopters of this item. I'd love to see these become the standard, but it doesn't really fit for me at the moment. As stated above, the jump from HDD to SATA SSD is a much larger percentage increase than

    • by XanC ( 644172 )

      You just need to load the kernel from some other medium. An old hard drive, a USB stick, an old flash card or something.

      Unless you're running a truly backwards OS like Windows. Then, yeah, you have to put a lot of stuff on your boot drive.

    • The unfortunate part is that there is no technical reason for a PCIe device not to appear to be an additional drive controller, and thus be bootable. Back in the day my first HD was a 32MB "Hard Card" that simply slotted into a 16-bit ISA slot.
      • As has been said before, it's the ioXtreme's driver that helps provide this performance, and using a standard driver would greatly diminish this speed advantage.
        • Explain to me why the ioXtreme's driver is mutually exclusive with a regular one.

          It sounds to me more likely that they skimped big time on the hardware end, than that they met up with a technical limitation.
        • Irrelevant. The same is true of most IDE drives. You can access them via BIOS interrupts, but you get poor performance. Bootloaders do this because it's incredibly simple and they only need to load a kernel (or a second-stage bootloader) which can then load a real driver. This then bypasses the BIOS driver and has better performance.
    • by Predius ( 560344 )

      I've got to try this again, but back in the day you could install on a drive that Windows had a driver for but the BIOS couldn't boot, as long as you had a small NTFS/FAT partition on a drive the BIOS COULD boot to hold the bootloader and driver... So your primary drive/OS would live on the SSD, and that legacy pile of junk hanging off your ATA port could be a tired piece of CF for all Windows could care.

  • Looking at similar items' pricing - sorry, I don't care if it displays information before I think to ask for it.
  • For about $900, or the cost of the Fusion ioXtreme 80GB card, I bought two Intel 160GB SSD drives that I have in a RAID 0 configuration. It's very fast and 4X the capacity for the same price. Oh, and it's bootable.
    • by maxume ( 22995 )

      Given that the SSDs are very nearly striped at the block level anyway, I can't imagine that RAID 0 is adding much more than flakiness.

      • by MentlFlos ( 7345 )
        Perhaps he wanted 320GB of space in one volume.
      • You are uninformed. The drives are fast enough that they hit the cap for a single SATA connection.
        Here's a review of 16 Intel drives in RAID 0:

        It's not quite 1600% faster, but it's about 1300% faster than the peak transfer rate of a single SATA connection.

        Then again.. if you really wanted performance for cheap, you could get 8 of the new 40 gig Kingston (intel based) drives and raid-0 them for the same price as the Fusion ioXtreme card. I'd challenge so
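The scaling claim above checks out with back-of-the-envelope arithmetic (ideal striping assumed; real arrays lose some headroom to controller overhead, and the per-drive and bus figures below are rough):

```python
def raid0_read_mb_s(n_drives, per_drive_mb_s, bus_cap_mb_s=None):
    """Ideal RAID 0 sequential read: n times one drive, clipped by any
    bus or controller ceiling. Real setups fall somewhat short of this."""
    total = n_drives * per_drive_mb_s
    return min(total, bus_cap_mb_s) if bus_cap_mb_s is not None else total

sixteen_drives = raid0_read_mb_s(16, 250)     # 16 X25-Ms at ~250 MB/s each
single_sata    = 300                          # rough SATA 3Gb/s practical ceiling
ratio          = sixteen_drives / single_sata # ~13x, i.e. "about 1300% faster"
```

The interesting part is the optional `bus_cap_mb_s` clip: hang the same stripe behind a single SATA link and all the per-drive bandwidth above the ceiling simply vanishes, which is the whole argument for PCIe attachment.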

        • by maxume ( 22995 )

          Peak transfer isn't a particularly interesting workstation benchmark (If I were chasing performance, I might put a bunch of spinning disks in RAID 0 to cut down on latency, but the RAID isn't going to make the USB drive I am copying files to any faster, so the transfer rate isn't really that interesting).

          And really, I wouldn't be shocked if OP was using software RAID.

          • by adisakp ( 705706 )
            FWIW, you can get near linear scaling on many MB RAID controllers with SSD drives up to 3 drives. You may get a boost on the 4th drive as well, but it's not as much (some MB RAIDs top out at around 666MB/s and 3 Intel SSD drives will push this limit). As a matter of fact, with less than 4 drives, the difference in speed between built-in MB RAID and dedicated HW RAID is almost indistinguishable.

            There are plenty of benchmarks on the net if you look for them that show both a large speedup in transfer rates
            • by maxume ( 22995 )

              Incomplete is probably a better word than flawed, the context is comparing the speed boost of going from a spinning disk to a SSD or a couple of SSDs in a RAID setup and the copying example is just a case in my usage where there really isn't any difference between the two.

              As you pointed out in your other reply, I was wrong about the benefits of the RAID setup, but I still have trouble looking at it from anything other than a cost/benefit perspective (where, again, for me, the 10 seconds that the RAID saves

      • by adisakp ( 705706 )

        Given that the SSDs are very nearly striped at the block level anyway, I can't imagine that RAID 0 is adding much more than flakiness.

        I actually tried both single drive and RAID 0 in my Vista configuration. Single drive took about 20 seconds to boot (once POST completes); RAID 0 takes about 10 seconds to boot. So it's twice as fast based on a simple real-world timing (Vista boot speed).

  • 80GB is small

  • The link in the Slashdot summary goes only to page 4 and one datapoint. Here's the main page: []
  • And five years from now, they'll be dusty leftovers found in plastic bins at the local electronics surplus shop. If you can even find them.

    Ten years from now, people will hold them up and squint at them and wonder what they were originally built to do. Computer cards all look the same. The only notable thing about these ones is that they don't have any ports on the back. After a couple seconds of interest, they'll get tossed back into the bin.

    No real point to this post, other than the "gosh" factor. It

  • The price tag vs. capacity and limitations makes this a worthless purchase for ANY serious-minded individual.

    Hotswap isn't really a viable option for failed devices.
    RAID, if possible, would not be conventional or standardized.
    The price tag is completely stupid, especially when you can have an Intel X25 80 gig for much less in cost.

    Most people are awe-inspired and fooled by the grand total throughput of this thing at 800MB/s. Let me tell you, that is not really all that impressive. Just 8 HDDs could turn that nu

  • You have to ask yourself, what do you need that kind of speed for vs a more portable, hot-swappable, and likely longer-lived SATA/E-SATA standard? Maybe a transactional store for a database, but that is pretty much it. A PCI-e style interface would be relegated only to those situations where extreme performance is required. Such devices will always be priced at a premium over their SATA counterparts simply by virtue of their lower volume production.

    I do have an interest in how well a SSD could be used to

    • Its speed and usefulness are limited only by your imagination. I know we have workloads here where the data set mightn't be huge, but those extra IOPS would make a huge difference. Cost? Well, it's irrelevant; all new tech is expensive when it's released. Hell, if you go by SSD standards, they are still way more expensive than their spinning-platter brethren.

      I see your defeatist attitude, and raise you one positive and thoroughly excited attitude that wonders where the tech world will go next.
      • In the scope of a consumer product, I can't think of many common workloads that would really benefit from a PCIe interface. The innovation of using PCIe as an SSD interface created a nice middle point between DRAM and RAID volumes. Such a middle point just seems totally unnecessary right now in the consumer market. The biggest issue for SSD adoption faced by most people is price, so it's not defeatist to say that price should be the focus of products that actually get to market. As previous posters have p
        • And what about the businesses that aren't big enough to go beyond the "consumer" scope? What about unis with limited budgets? Go look at the latest and greatest "consumer" Intel processor; I can guarantee you won't have much change out of $1000. Yet give it 12 months and it'll probably be less than a 1/3 of that price (maybe sooner, I don't keep up with pricing).

          With that kind of attitude 640K would be enough for anyone.
          • First, are there really small businesses who need this kind of performance, same question for Universities?

            The CPU comparison is just apples to oranges. The primary competitor for SSDs, right now, is HDDs. So HDDs:SSDs = x86 CPUs:? Even if I grant that your analogy is valid, the only reason processors come down in price so fast is because they sell about a bazillion (rough estimate) of their actually affordable processors, recoup their R&D, and optimize their yields.

            > With that kind of attitude 640K

            • Tell me, what is the right direction then?
              • For whatever reason, SSDs are still expensive. If the reason is material and basic process costs, then I concede and will agree that improving the value of an SSD should be done through improved performance. However, I don't think this is the issue, so the right direction ought to be bringing costs down, without entirely sacrificing SSD advantages.

                Some manufacturers are doing this by pairing premium controllers with non-premium NAND MLC (a la OCZ Agility). I'll say it again: Transfer rates are important,

                • And if you're ahead of the curve, your product is so much more polished than your competitors' when it is relevant. Judging by the article, these devices aren't using dedicated chips yet (using programmable ones instead), which bumps up the cost by around $200. I'd reckon once they finalise things like booting, that chip will be replaced with a dedicated one. You could see it drop by a 1/3 in 6 months.

                  I'd say take a walk in the real world, not everything is perfect and costs money right from the word go. You
        • In the scope of a consumer product, I can't think of many common workloads that would really benefit from a PCIe interface.
          Well, the review showed it cutting game load times in half compared to a conventional SSD. Is that worth adding $1000 to the cost of your gaming rig? I personally don't think so, but I bet there are some gamers who think otherwise, just as there are some who will spend $1000 each on CPU, and $700 each on their SLI graphics cards. These early adopters cover some of the R&D and hopef

          • A valid point. Although, CPU and SLI charge hilarious premiums for something that could be a real competitive advantage for a gamer, i.e. frame rate. It would be a harder sell to convince a gamer that 1337 loading times will lead to similarly 1337 headshot percentages.

    • I wish people would stop jumping on the "wear on the flash chips" issue. It's not that big of a deal anymore, drop it people.

    • That kind of speed is needed to run things faster. It's like saying, who needs 16 cores, all they do is run things faster!

      Less stuff will have to be loaded into RAM as the cost of a disk read isn't catastrophic, IO can substitute for computation - store precomputed textures instead of computing transformations to textures with imprecise fast routines, get away from the mad sequentiality that's everywhere in high performance computing.

      RAIDing and striping hard disk requires huge enclosures, heat dissipation

  • It was a RAM drive that went in the old Epson QX-10 and QX-16 computers. I remember when we dropped one of those in the old QX-10 and TP/M and ValDocs launched almost instantly. And two freakin' megabytes of storage. It was HUGE!!! And the battery backup could keep your data safe for a good 6 hours without power.

  • ok, assuming home use you're still going to need to have a spinning disk for two reasons: 1. You still need a place to put the bootloader; might as well have it on the disk, because you're going to have the disk anyway because of: 2. You still need the large-capacity drive to hold all of your pron/movies/music.
