Data Storage

Will PCIe Flash Become Common In Laptops, Desktops? 372

Posted by Soulskill
from the unless-the-singularity-gets-here-first dept.
Lucas123 writes "With Apple announcing that it is now using PCIe flash in its MacBook Air and that it has plans to offer it in the Mac Pro later this year, some are speculating that the high-speed peripheral interface may become the standard for higher-end consumer laptops and workplace systems. 'It's coming,' said Joseph Unsworth, research vice president for NAND Flash & SSD at Gartner. The Mac Pro with PCIe flash is expected to exceed 1GB/sec throughput, twice the speed of SATA III SSDs. Apple claims the new MacBook Air got a 45% performance boost from its PCIe flash; AnandTech has the Air clocked at 800MB/s. Next year, Intel and Plextor are expected to begin shipping PCIe cards based on the new NGFF specification. Plextor's NGFF SSD measures just 22mm by 44mm and connects to a computer's motherboard through a PCIe 2.0 x2 interface. The cards are smaller than today's half-height expansion cards and offer 770MB/s read and 550MB/s write speeds."
This discussion has been archived. No new comments can be posted.


  • Yes (Score:5, Insightful)

    by Intrepid imaginaut (1970940) on Tuesday June 11, 2013 @07:05PM (#43980119)

    In ten years we'll be using equipment that makes the current best look like pocket calculators, just like we're buying gear today for a few hundred that would have been worth tens of thousands ten years ago, if we could even manufacture it. Goddamn I love living in the future.

    • Re:Yes (Score:4, Insightful)

      by AlphaWolf_HK (692722) on Tuesday June 11, 2013 @08:11PM (#43980649)

      The real desktop/laptop performance measurement is IOPS at low queue depth. Large sustained rates are meaningless for all but servers. (I mean really, how often are you going to copy files big enough for these speeds to matter, and what are you going to copy them to that can keep up? Certainly not cloud storage or a USB drive.)

      This is sounding to me like MHz myth 2.0

      • Re:Yes (Score:5, Funny)

        by MightyMartian (840721) on Tuesday June 11, 2013 @08:19PM (#43980701) Journal

        I'll have you know I copy big files back and forth all day long, you insensitive clod!

        • Re:Yes (Score:4, Insightful)

          by real-modo (1460457) on Tuesday June 11, 2013 @10:27PM (#43981499)

          Haha, but lots of Mac Pro users do exactly this. They edit video.

          So, 0.1% or 0.2% of all computer users out there will find increased bandwidth very useful.

          • by fyngyrz (762201) on Wednesday June 12, 2013 @12:54AM (#43982113) Homepage Journal

            The new Mac Pro isn't that great -- and I've been waiting for it. Really had my hopes up.

            Flash drives seem to be characterized by very high failure rates. Changing the drive? Unclear this is a user operation. All real drives -- the ones you use for your data -- would have to be external bricks. Whereas standard HD's for the current design go in and out trivially. It's wonderful. Four of 'em.

            External drives? External graphics? (3 display max it would seem unless you have external boxes.... yech) Nah.

            Best thing right now seems to be the last generation of the big box. 12 cores, 12 more semi-competent hyperthreads, holds four drives, can push six monitors, RAM is (user!) upgradable...

            And they finally fixed OSX so it handles multiple monitors correctly, fixed the broken menu paradigm, fixed how full screen apps work... perfect.

            The mac pro.... unless there are some real differences between what they say they're making and what they actually make, I think it's the big box for me. My older 8-core can live in the ham shack doing SDR and digi-mode duty. :)

            This way I know I can do the big jobs, and without littering my workspace, which I am quite particular about, with bricks and cables. I *really* don't understand what they were thinking.

            • by kthreadd (1558445) on Wednesday June 12, 2013 @01:04AM (#43982167)

              According to some people at WWDC, replacing a "drive" is merely a matter of taking the cover off, popping it out of the PCIe slot, plugging the new one in and closing the cover. The units they have on display feature two such slots. Seems pretty OK to me.

              RAM is definitely user upgradeable. Four slots for DDR3 1866 MHz ECC. Works like any other RAM slot.

              It should be possible to replace the GPUs as well. The only question seems to be how many GPUs will be available that fit within the form factor.

            • Changing the drive? Unclear this is a user operation

              Changing everything seems to be a user operation; it's as easy to get into this new box as the old one

              External drives? External graphics?

              Thunderbolt? Which even allows for external GPU expansion...

              3 display max it would seem

              It's not three displays, it's up to three *4K* displays (4096 x 2160). Were you really driving six displays of that resolution before? You could drive more displays at lower resolution.

              Basically it seems like you didn't bother to even

      • Amdahl's law: a system needs about one bit of I/O per second per instruction per second.

        Given that the i7-3720QM is capable of 20,333 "MIPS" source [digitaltrends.com],

        we will need 20 billion bits of IO per second.

        We're close, but not quite there.
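A quick back-of-the-envelope check of that rule of thumb. The MIPS rating is the figure quoted above; the interface speeds are the round numbers from the summary, all taken at face value:

```python
# Amdahl's rule of thumb: a balanced system needs roughly one bit of
# I/O per second for every instruction per second. The MIPS figure and
# interface speeds below are illustrative assumptions, not measurements.
mips = 20_333                         # i7-3720QM rating, millions of instr/sec
io_bits_needed = mips * 1e6           # ~20.3 gigabits of I/O per second
io_bytes_needed = io_bits_needed / 8  # ~2.5 GB/s

interfaces = {
    "SATA III":   600e6,              # ~600 MB/s
    "PCIe flash": 1.25e9,             # ~1.25 GB/s (Mac Pro claim)
}
for name, rate in interfaces.items():
    print(f"{name}: {rate / io_bytes_needed:.0%} of the rule-of-thumb target")
```

On these assumed numbers even PCIe flash only covers about half the rule-of-thumb target, which is what "close, but not quite there" works out to.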

        • Like, say, a 20Gbps optical Thunderbolt connection? Amdahl didn't say anything about all I/O being disk-based.
          What about the 60+Gbps memory bandwidth in your average PC?
          A couple of 10Gbe connections gives you 20Gbps I/O too.
          You even get 5Gbps out of USB3

      • Re:Yes (Score:5, Interesting)

        by Solandri (704621) on Tuesday June 11, 2013 @11:29PM (#43981763)
        Even high IOPS is starting to become meaningless. Here's an Anandtech comparison of top SSDs [anandtech.com] from two years ago of typical tasks which stressed IOPS. He played it straight for this one page and showed benchmarks in units that matter to people's perception of speed - seconds to complete a task. The result is utterly uninteresting. The HDD is substantially slower. The SSDs are for all practical purposes identical.

        But boring graphs are bad for review sites. If the reviews are boring, people won't read them, and the sites lose out on ad revenue. So they invert the metric to make smaller differences appear bigger. Instead of the practical sec/MB, they use the more ephemeral MB/sec. That makes the graphs more interesting and gets people coming back to the sites before buying, instead of just buying some random cheap SSD without really caring about the max speed.

        "But sec/MB and MB/sec are the same number! Why should inverting it make a difference?" Because when you invert a metric, the big numbers become small numbers, and the small numbers become big numbers. E.g. say you have an HDD which can read 100 MB/s, a cheap SSD which can read 250 MB/s, and an expensive SSD which can read 500 MB/s. So in 1 second, the HDD reads 100 MB, the cSSD 250 MB, and the eSSD 500 MB. Expressed in MB/s you gain 150 MB/s switching from HDD->cSSD, and a whopping 250 MB/s switching from cSSD->eSSD. Switching from cSSD->eSSD looks like the bigger upgrade! So the extra money for the expensive SSD is definitely worth it! Right?

        Hold on. Invert to s/MB and say you need to read 1 GB. The HDD takes 10 sec, the cSSD 4 sec, and the eSSD 2 sec. Switching from HDD->cSSD saves you 6 seconds. Switching from cSSD->eSSD only saves you 2 sec. So in terms of time you spend waiting, the HDD->cSSD switch saves you 3x as much time as the cSSD->eSSD switch. The vast majority of your time saved can actually be obtained from the switch to the cheaper SSD. The next step switching to the expensive SSD only gives you a marginal improvement. (Even if you insist on using relative measures of time, the cheap SSD still wins. 10 sec to 4 sec is a 60% reduction in time. 4 sec to 2 sec is only a 50% reduction in time. Or if you want to be a purist, of the 8 sec saved going from 10 sec to 2 sec, the cheap SSD gets you 75% of that speedup, the expensive SSD gives only the remaining 25%)

        Unless you're regularly doing tasks where you find yourself twiddling your thumbs for several seconds or minutes waiting for the SSD to finish reading/writing several GB of data, the difference between 600 MB/s and 1.25 GB/s is imperceptible despite being a 2x speedup. Twice as fast as the blink of an eye is still as fast as a blink of an eye to our perception.
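The inversion trick in that example is easy to check yourself. The rates here are the example's illustrative figures, with the cheap SSD taken at 250 MB/s so that reading 1 GB comes out to the 10/4/2-second numbers above:

```python
# Same three drives expressed both ways; all speeds are the example's
# illustrative figures, not real products.
rates = {"HDD": 100, "cheap SSD": 250, "expensive SSD": 500}   # MB/s
size_mb = 1000                                                 # read 1 GB

seconds = {name: size_mb / r for name, r in rates.items()}
print(seconds)  # HDD: 10.0 s, cheap SSD: 4.0 s, expensive SSD: 2.0 s

saved_by_cheap = seconds["HDD"] - seconds["cheap SSD"]                # 6.0 s
saved_by_expensive = seconds["cheap SSD"] - seconds["expensive SSD"]  # 2.0 s
total_saved = saved_by_cheap + saved_by_expensive                     # 8.0 s

# Measured in time actually waited, the cheap upgrade is 75% of the win.
print(saved_by_cheap / total_saved)
```

Same three drives, but in sec/MB the cheap upgrade dominates, while in MB/sec the expensive one looks like the bigger jump.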
        • Heck, in this thread even. SSDs are all more than fast enough for today's usage on desktops. They aren't the bottleneck. With the lower latency, and good random access, they all seem to work well.

          There's a difference between synthetic benchmarks and what you notice on the wall clock, and just because it is faster doesn't mean it is needed. Another area you see it is RAM. DDR3 scales up to 2133MHz by the spec, and you can find stuff up to 3000MHz. The Sandy/Ivy Bridge controllers support RAM speeds async wit

        • Re:Yes (Score:4, Insightful)

          by EdZ (755139) on Wednesday June 12, 2013 @06:53AM (#43983551)
          There's a metric you're missing: responsiveness. One of the big gains of moving to SSDs is not tasks completing faster, but of UI elements responding sooner.
  • Of course, if Apple follows its past history and wants too high a royalty on it, the mobo & other hardware manufacturers will find something else to satisfy their need for speed. After all, that's why USB exists.
    • by tmark (230091)

      And how long were Apple users using Firewire drives before USB 3 - heck, even USB 2.0 if you want to pretend that's anywhere close to Firewire ? Even now Firewire is a viable interface.

    • by Osgeld (1900440)

      USB doesn't exist because of FireWire; at the time they were created for two radically different purposes, one for simple peripheral devices, the other for fast transport of large streams such as video. USB won because it was the port that seemed to do everything, whereas FireWire still limits itself to media equipment and hard drives

      • by MightyYar (622222)

        USB also uses the CPU for the heavy lifting, so it is cheaper to implement. Which is a plus or a minus, depending on the use case.

        • USB also uses the CPU for the heavy lifting, so it is cheaper to implement.

          The statement that USB uses CPU for the heavy lifting is thrown around a lot, but is it still true?

          • by amiga3D (567632)

            With CPU power being what it is nowadays, it's irrelevant. The average computer has spare CPU cycles to burn. I will say that I have an external hard drive case, and when I run benchmarks on it using USB2 and Firewire 400, it's obvious that Firewire is superior. USB2 is supposedly faster, but not in real life.

          • by MightyYar (622222)

            To be honest, I don't know if USB 3.0 still relies on the CPU as heavily. It was certainly true with USB 2.0 - you could demonstrate it easily enough. In theory, it was a design goal, but I haven't seen any real-world tests.

      • by Nutria (679911)

        How USB won is because it was the port that was a hell of a lot cheaper and pushed by Intel

        FTFY.

        If FW had been reasonably priced, there would now be 1 USB1.0 port on machines for kb+mouse, and 3 or 4 FW1200 ports for cameras, external HDDs, scanners, etc.

      • While true, there's no denying that Apple charging a per-port fee on FireWire ports (on the devices at both ends of a cable) was a major motivation to get the USB2 spec and devices out there faster.

        It was a facepalm moment when I first heard it announced, and I knew right then that FireWire's chances of becoming a ubiquitous interface were over. This was in early 1999, almost 3 years before they introduced the first iPod. Apple was in no position to dictate the direction of hardware at the time, having just started shipping

    • I am not aware of Apple owning the PCIe standard. Did I miss something?
    • This is not some proprietary Apple tech that it can even claim royalties for. They are just among the first to have it in production systems; they did not invent hooking SSDs onto PCIe cards. The article is just saying that since Apple has it in its products, perhaps the installed base is large enough for other makers to move away from the SATA bottleneck.

    • Well except that some of us have been using PCIe flash for years. It's not an apple invention.

  • by King_TJ (85913) on Tuesday June 11, 2013 @07:18PM (#43980237) Journal

    From the photos Apple has on their site of the Mac Pro with its cover open, it looks to me like the flash storage used is a "mini PCIe" form-factor. I've already purchased and used an identical looking 480GB flash drive to fit in an HP "Ultrabook" type of portable called the "Spectre XT Pro".

    (HP claims the notebook can't be purchased with a drive larger than 256GB, even in a custom build order on their web site, but a technical manual I found clearly showed it took the mini PCIe type of flash drive, so I bought a 480GB from CDW to try it and it worked just fine.)

    I've seen a few comments yesterday and today, though, claiming that some of these mini PCIe form-factor SSDs are not *really* following the PCIe standards for the connector. So in effect they perform with a lot less throughput, the same as any existing SSD drive, while just using that type of physical connector.

    Anyone know if there's much truth to such claims... meaning, will what Apple is offering here really be more advanced than current SSD technology, or is this a case where companies like HP have been using the same stuff for at least the last 1-2 years in select ultraportables?

  • by Sycraft-fu (314770) on Tuesday June 11, 2013 @07:21PM (#43980255)

    While the speed sounds impressive on paper, SSDs are really already going beyond what is needed for storage speeds. You can try this by upgrading from a SATA II to SATA III SSD yourself. I've done that, and I even went from a slow one (WD SiliconEdge Blue) to a fast one (Samsung 840 Pro). Actual difference in system performance? Eh, I doubt I could tell you which was which in a blind test.

    The big numbers are mostly dick-waving in a desktop setup. I think the advantages offered by a storage connector and controller are likely to outweigh speed.

    Also please note SAS 12g is coming out soon, and that means SATA at the same speed is soon to come as well.

    It just really isn't that big a deal on the desktop. For SANs, databases, and other high-performance shit? Sure, there are cases where you need more I/O or IOPS than you can get out of a SAS interface, and then PCIe or the like may be an answer. But for user systems, SSDs are already more than fast enough; additional speed gains don't seem to translate into wall-time gains.

    • by mjwx (966435)

      While the speed sounds impressive on paper, SSDs are really already going beyond what is needed for storage speeds. You can try this by upgrading from a SATA II to SATA III SSD yourself. I've done that, and I even went from a slow one (WD SiliconEdge Blue) to a fast one (Samsung 840 Pro). Actual difference in system performance? Eh, I doubt I could tell you which was which in a blind test.

      The big numbers are mostly dick-waving in a desktop setup. I think the advantages offered by a storage connector and controller are likely to outweigh speed.

      This,

      In sports car communities this is called "hard parking". People modify their cars (intakes, cat-backs, chips, and so on) but never actually take them out on the track. They compare dyno scores and talk about how their latest tuning netted them an extra 5 bhp in between taking photos of their never-tracked car. For those of us who aren't hard parkers, I have to say it's a lot more fun taking an unmodified S13 around a track than sitting on a dynamometer in a highly modified WRX STI (I.E. I'd

      • by smash (1351)

        Whilst it is a case of diminishing returns, sure... but...

        If I have to wait AT ALL for my machine to do something, it is wasted time in my life I will never get back. Until everything I do on the machine is INSTANT, I'll take any speed improvements they can provide, thanks.

        • by mjwx (966435)

          Whilst it is a case of diminishing returns, sure... but...

          If I have to wait AT ALL for my machine to do something, it is wasted time in my life I will never get back. Until everything I do on the machine is INSTANT, I'll take any speed improvements they can provide, thanks.

          Right.

          Way to miss the point. For the most part, the time you spend waiting isn't for disk I/O. It's not about diminishing returns; rather, it's ineffective because disk isn't the bottleneck.

          Anyway, as the OP said, I highly doubt you'd be able to tell the difference in a blind test. The only people it would matter to are people who have disk operations that are measured in hours.

          Also, you have some serious problems if you can't wait 30 seconds for anything. Seriously, people suffering from ADHD tend to have

          • by Pulzar (81031) on Tuesday June 11, 2013 @09:11PM (#43981037)

            Also, you have some serious problems if you can't wait 30 seconds for anything. Seriously, people suffering from ADHD tend to have more patience than that. However, as someone who sells high-priced items that provide minimal gain, I like suckers like you.

            Ok, you had good points until here.

            Any (good) programmer, artist, writer, or anyone else who creates on a computer for a living will tell you that they hate unresponsive applications. Open a new file and wait 5 seconds before you can see it? It's distracting, and it breaks your train of thought.

            It's not ADHD. It's the fact that we're used, from the "real world", to getting an instant response to our actions: pull out a piece of paper and you can read it immediately; put a brush to the paper and the colour shows up instantly. The brain expects computers, which are trying to model this real-world interaction, to work the same way.

        • by noh8rz10 (2716597)
          have you considered multitasking? while waiting for your machine to do something you could clip your toenails, for example. just a thought, it works well for me.
    • by timeOday (582209)
      Even with an SSD I still find suspending and resuming VMs to be slow enough that I avoid it until necessary. I am delighted by these improvements; for decades, hard drives were an increasingly narrow bottleneck in computer performance relative to other components, and it seemed it would always stay that way. But I guess I won't be totally satisfied until the L1 cache on my CPU is big enough to store my media collection and we can do away with the entire memory hierarchy.
    • by JoeyRox (2711699)
      For throughput, maybe, but not IOPS. SATA adds a tremendous amount of overhead to I/Os. For spinning disks it didn't matter, since that overhead was a fraction of the rotational and seek latencies. For flash media, however, the SATA overhead is huge and inhibits transactional performance. As fast as SSDs already are, they will see a quantum jump in IOPS with direct-attached interfaces like PCIe.
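A toy latency model shows why per-I/O interface overhead only starts to matter once the media gets fast. Every number below is an assumed round figure for illustration, not a measurement of any real controller:

```python
# Per-I/O service time = interface/protocol overhead + media latency.
# All figures are assumed round numbers for illustration only.
def iops(overhead_us: float, media_us: float) -> float:
    return 1e6 / (overhead_us + media_us)

hdd_media_us   = 8000   # ~8 ms seek + rotational latency
flash_media_us = 100    # ~100 us NAND read
sata_stack_us  = 25     # assumed SATA/AHCI per-I/O overhead
pcie_stack_us  = 5      # assumed leaner direct PCIe path

# On a spinning disk the stack overhead is noise in the total...
print(round(iops(sata_stack_us, hdd_media_us)))    # ~125 IOPS
# ...but on flash it becomes a meaningful slice of every I/O.
print(round(iops(sata_stack_us, flash_media_us)))  # 8000 IOPS
print(round(iops(pcie_stack_us, flash_media_us)))  # ~9524 IOPS
```

With these assumptions the same stack overhead that is invisible behind an 8 ms seek costs roughly a fifth of each flash I/O, which is the commenter's point about SATA inhibiting transactional performance.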
  • Can't people even get the half-dozen different computer models that Apple makes right?
  • Both in traditional storage and in cards that let you mount volatile RAM as an HDD. Long before the age of SSDs, a friend had one of these drives with 4 x 2 GB sticks forming an 8 GB drive (at the time a 500 GB drive was the largest commercially available, XP Service Pack 2 was causing great consternation, and people used Friendster). These never took off because:

    - Performance benefits weren't useful outside niche applications.
    - They simply weren't practical.

    SATA has a huge legacy, is cheap to produce and
  • by slashmydots (2189826) on Tuesday June 11, 2013 @07:37PM (#43980401)
    Self-contained modular components are always superior, in ease of replacement and overall use, for both the manufacturer and the consumer. If it's some one-of-a-kind custom part made just for them, that's trouble, because the manufacturer has them by the balls and there's just one supply source. The last time I heard of a company getting 100 "identical" Dell laptops, there were 4 different hard drive models in them. That's because of cost and supply changes. With just 1 item to choose from, that's bad.

    Then from the consumer side: some modified BIOS that only boots off PCIe-controlled storage devices, and not being able to use Acronis or GParted because it's a custom driver on a custom controller, are both huge problems. Not being able to replace it with any 2.5" drive, just 1 single replacement option at a price-gouging 5x markup from the manufacturer, is pretty awful too. Your upgrade options go out the window as well.

    Then it's just some anonymous nothing brand. There are 3 brands of SSDs that I buy, and that's it, because I don't want the flash chips failing in a year like Kingston SSDNow or ADATA or SanDisk or any of those wacky off-brand ones. HP and Dell are famous for garbage like rebranded Lite-On DVD burners that fail constantly instead of something nice like Philips, so you can bet it's going to be a true piece of crap.

    Overall, it's a terrible idea.
  • Most motherboard manufacturers have been coming out with designs that reduce the number of expansion slots, so this may be a reversal, and there's mini PCIe, which I'd love to see available in more and more systems. It will definitely push more SSD solutions into laptops, desktops and workstations, but I was also curious about yesterday's release of the new Mac Pro and its expectation of externally connected hardware. While they've reduced the footprint of the system, I can imagine a bunch of ca

    • If PCIe disks gain market share, motherboard manufacturers will inevitably add more PCIe slots, and gradually start removing SATA ports, one at a time.

  • basis and are almost always wrong. Why should we believe them this time?

    If you can, do.
    If you can't, teach.
    If you can't teach, pontificate.

  • by smash (1351)

    ... only use cheap components, same design as everybody else, blahblahblah...

    No.

    I doubt PCIe-based flash will be universal, or even that common, for a long time. Hell, one of the tablet/notebook convertible things (HP Envy, I think?) I tested recently was trying to run Windows 8 Pro on SD-based flash. It took me a while to figure out why it was so slow and unresponsive...

    Totally ruined the performance of the machine, but hey its cheap!

  • by Murdoch5 (1563847) on Tuesday June 11, 2013 @08:47PM (#43980905)
    Apple is the king of putting what you don't need into computers. Unless you're doing intensive video editing or mass virtualization, you simply don't need the bandwidth that PCIe flash gives you. A standard SSD over a SATA 3 interface is more than fast enough for 97% of general computer users. I would really like to hear the actual reason, besides a price increase, that Apple can give as to why anyone needs this. How about they put the Ethernet port back on the notebook, include more USB ports and a solid optical drive? The next thing Apple is going to include is a 10Gb Fibre interface, because they can and it looks / sounds cool.
    • The Mac Pro isn't for the 97%. For example, those FirePros in it are $1000 each. The processor is $1000-$2000 depending on speed. I suspect the Mac Pro will start at no less than $4,999, and that's not a price range the 97% will be looking at.

    • by drinkypoo (153816)

      Apple is the king of putting what you don't need into computers. Unless you're doing intensive video editing or mass virtualization, you simply don't need the bandwidth that PCIe flash gives you.

      Sadly, you have this 100% backwards. Apple is the king of omitting what you need. In this case, they're omitting the SATA controller and connector. You need those so that you have more storage options. They're also omitting a full fleet of memory slots and a normal GPU connection.

  • by ikaruga (2725453) on Tuesday June 11, 2013 @10:04PM (#43981395)
    Why is Apple taking credit for this new trend? Sony's new Vaio Pro line has optional 20Gbps PCIe 256/512GB flash storage. I pre-ordered one (my first Vaio in 9 years) simply because of that. Credit where it's due.
    • by king neckbeard (1801738) on Wednesday June 12, 2013 @02:41AM (#43982571)
      Ah, but when it comes to credit and Apple, you don't have to do things first; you just have to be the first to masturbate into a massive crowd about doing it. Apple are masters at dropping tech at the crossover between early adopters and the early majority. It's got a very good ratio of R&D investment to PR payoff.
    • by itsdapead (734413)

      Why is Apple taking credit for this new trend?

      Hands up who had heard about Sony offering PCIe Flash as an option.
      Now hands up who had heard about Apple offering PCIe Flash as standard.

      What Apple "gets" is that it is no good innovating unless you're going to market the fuck out of it. Apple didn't invent [GUIs, LANs, laser printers, small form factors, USB, music players, touch screen phones, app stores, tablets, 'retina' displays, ...], they just persuaded people to buy them in quantity while the original inventors sat around admiring their new mousetrap and wa
