
Seagate Debuts World's Fastest NVMe SSD With 10GBps Throughput (hothardware.com) 66

MojoKid writes: Seagate has just unveiled what it is calling "the world's fastest SSD," and if its claims are true, the performance gap between it and the next-closest competing offering is significant. The SSD, which Seagate today announced is in "production-ready" form, employs the NVMe protocol to help it achieve breakneck speeds. So just how fast is it? Seagate says the new SSD is capable of 10GB/sec of throughput when used in 16-lane PCIe slots, which it notes is 4GB/sec faster than the next-fastest competing SSD. The company is also working on a second, lower-performing variant that works in 8-lane PCIe slots and delivers 6.7GB/sec of throughput. Seagate sees the second model as a more cost-effective SSD for businesses that want a high-performing SSD but want to keep costs and power consumption under control. Seagate isn't ready yet to discuss pricing for its blazing-fast SSDs, and oddly hasn't disclosed a model name either, but it does say that general availability for its customers will open up during the summer.
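As a quick sanity check, the claimed figures sit below the raw PCIe 3.0 link rates for the slot widths mentioned. A minimal sketch, assuming PCIe 3.0 (8 GT/s per lane, 128b/130b encoding); the announcement does not state the PCIe generation, so treat that as an assumption:

    # Back-of-envelope check of the claimed throughput against raw PCIe limits.
    # PCIe 3.0 is assumed here (8 GT/s per lane, 128b/130b encoding).
    GT_PER_LANE = 8e9            # transfers per second per PCIe 3.0 lane
    ENCODING = 128 / 130         # 128b/130b line-encoding efficiency

    def lane_bandwidth_gb_s(lanes: int) -> float:
        """Raw one-directional bandwidth in GB/s for a PCIe 3.0 link."""
        return lanes * GT_PER_LANE * ENCODING / 8 / 1e9

    for lanes, claimed in ((16, 10.0), (8, 6.7)):
        raw = lane_bandwidth_gb_s(lanes)
        print(f"x{lanes}: ~{raw:.1f} GB/s raw, claim {claimed} GB/s ({claimed / raw:.0%} of raw)")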
    • by Anonymous Coward on Tuesday March 08, 2016 @12:38PM (#51659969)

      Imagine a 10GB picture of a football field. This SSD can transfer ONE of those pictures per second!

    • If you break a cookie into 10 pieces that each represent a GB and drop them from 4.9 meters into a box in the middle of a football field, it is like that.
    • TFA clearly states that it is damn fast:

      The company is also working on a second, lower-performing variant that works in 8-lane PCIe slots and has a throughput of 6.7GB/sec (which is still damn fast).

      Crucially, does anyone know whether I can safely use this with my "System D" system? It molested my cat last month, twice, so I want to make sure all is safe before slotting this sucker in.

      • TFA clearly states that it is damn fast:

        Do note that "damn fast" is NOT equal to "ramadan fast".

        There are differences: it doesn't last as long as ramadan fast, to start with, and the cache-miss penalty is less severe per byte.

    • If one carrier pigeon can carry a 1 MB floppy across a football field in 10 seconds, the pigeon's bandwidth is approximately 100KB/s. You'd need 100 pigeons carrying floppies across the football field to equal 1 spinning platter drive (10 MB/s).

      For a typical SSD, that means 3,000 pigeons carrying floppies (and a lot of interns to manage the disks). (300MB/s)

      For this SSD, you'd need 100,000 pigeons carrying floppies. Or you could just be smart and go with 10 pigeons carrying 10GB thumb drives (the arithmetic is sketched below).
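      A minimal sketch of that arithmetic, taking the payload sizes and the 10-second field crossing as given above:

          # Pigeon bandwidth: payload carried across the field in 10 seconds,
          # so bandwidth per pigeon = payload / 10 s.
          CROSSING_SECONDS = 10

          def pigeons_needed(target_bytes_per_s: float, payload_bytes: float) -> float:
              per_pigeon = payload_bytes / CROSSING_SECONDS
              return target_bytes_per_s / per_pigeon

          MB, GB = 1e6, 1e9
          print(pigeons_needed(10 * MB, 1 * MB))    # spinning platter: 100 pigeons
          print(pigeons_needed(300 * MB, 1 * MB))   # typical SATA SSD: 3,000 pigeons
          print(pigeons_needed(10 * GB, 1 * MB))    # this SSD: 100,000 pigeons
          print(pigeons_needed(10 * GB, 10 * GB))   # or 10 pigeons with 10GB thumb drives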
  • Inconceivable (Score:2, Informative)

    by Anonymous Coward

    Hard to grok. You can fill up that new Samsung 16TB drive in about 26 min 40 sec (back-of-envelope numbers further down this thread).

    • by kamakazi ( 74641 )

      Fill it from what? /dev/urandom isn't even that fast on any normal hardware, and it would take a lot of spindles and a dedicated 100Gb/s network card to fill that pipe. This thing isn't practical for anything in a normal datacenter. The only place I can see something like this currently being a justifiable purchase would be as caching drives in a massive data acquisition system, like the LHC or similar, or very large scale modeling, like weather. I am actually curious about the capacity of these drives,

      • by Gondola ( 189182 )

        With a ~1-second Sleep command that isn't buggy as shit, I'd actually turn my PC off every night.

      • /dev/zero of course!
      • by KGIII ( 973947 )

        I'm thinking modeling with numerous disparate inputs into a dedicated array with multiple I/O ports. It's still going to hit the bottleneck, but you can probably push and pull pretty fast - multiples of these would have been a godsend back in the day when the disk was the bottleneck and not the bus. Now, the bus is in the way and that's improving, slowly. This sits right there almost on the board. You should be able to slam it with multiple I/O and (reasonably) get a decent bi-directional data st
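    Rough numbers behind the fill-time and "what could actually feed it" points above (a back-of-envelope sketch; the 16TB capacity and 10GB/sec figure are the ones quoted in this thread, the network figures are nominal line rates):

        # Time to fill a 16 TB drive at the claimed 10 GB/s, and what common
        # Ethernet line rates could deliver (decimal units, protocol overhead ignored).
        TB, GB = 1e12, 1e9

        capacity = 16 * TB
        write_rate = 10 * GB
        seconds = capacity / write_rate
        print(f"fill time: {seconds:.0f} s (~{seconds / 60:.0f} min)")   # 1600 s, ~27 min

        for name, gbit in (("10GbE", 10), ("40GbE", 40), ("100GbE", 100)):
            gb_per_s = gbit / 8
            enough = "enough" if gb_per_s >= write_rate / GB else "not enough"
            print(f"{name}: ~{gb_per_s:.1f} GB/s ({enough} to keep a 10 GB/s drive busy)")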

  • by tlhIngan ( 30335 ) <slashdot&worf,net> on Tuesday March 08, 2016 @12:45PM (#51660025)

    Crap. Now what to do here for my new PC build?

    Most motherboards with 2 or 3 x16 slots really only have all 16 lanes hooked up to one slot - the others are usually 8 lanes or less - 16-8-4 isn't even an uncommon configuration. (PCIe tip - the slot size is really just physical - a board can use x16 slots even when they're only wired up as x1, so any PCIe card will fit, albeit running at x1 speed. It's why Apple's old Mac Pros used x16 slots - that way they could accept ANY PCIe card.)

    So now what to do... GPU in the x16 slot, and slow down my fast SSD by putting it in an x8? Or have my SSD be nice and fast by putting it in the x16 slot and lower my FPS by putting the GPU in the x8 slot?

    Nevermind if you want to do SLI or CrossFire and now have to deal with 2 x16 GPUs and 1 x16 SSD...

    • by Kokuyo ( 549451 ) on Tuesday March 08, 2016 @12:53PM (#51660095) Journal

      Dude, any GPU will do just fine at 8x. Or how do you think SLI would work otherwise? The beefiest gamerboards have 20 PCIe lanes max.

      • by Anonymous Coward

        Try this board from 6 years ago:
        http://www.overclockersclub.com/reviews/asus_sabertooth_x58/3.htm

        Now consider that there are systems with 2 X58's in them.

        That said, GPUs typically need fast interfaces between onboard memory and the GPU. Filling textures from an HD or system RAM only has to take place once per level (or whatever), so speed there isn't very important.

      • Re: (Score:3, Informative)

        Intel boards with LGA 2011-3 sockets have 40 PCIe lanes available coming off the CPU.

    • You will see zero benefit when booting a PC and running standard programs. What really drives speed is IOPS and CPU (once mechanical disks and value SSDs are out of the picture). Tom's Hardware did a benchmark of RAID 0 SSDs vs standard SSDs back in 2013.

      They booted slower than non-RAID. Game loading speed didn't see a difference either. BUT WinZip and transferring a large 2 gig file were crazy fast.

      So a server would benefit maybe, but I doubt most have 10 Gb/s Ethernet to come close. So unless you work on databases (those

    • by DRJlaw ( 946416 ) on Tuesday March 08, 2016 @01:49PM (#51660471)

      So now what to do... GPU in the x16 slot, and slow down my fast SSD by putting it in an x8? Or have my SSD be nice and fast by putting it in the x16 slot and lower my FPS by putting the GPU in the x8 slot?

      Simply, no.

      You are right about the physical PCIe slot connectors.

      You could be right about the physical PCIe slot wiring, but in many boards with two x16 slots (assuming that there are only two, with the chipset-wired slots not being physically x16) both are electrically wired to be x16. You do not have to put a card in a specific one of the two slots to have an operational x16 connection.

      You are wrong about the functional connections. I'll assume an Intel-compatible motherboard since that is what I'm familiar with. AMD-compatible motherboards could be different - I simply do not know.

      Intel CPUs provide 16 PCIe lanes for connection to the x16 slot(s). If you have one card inserted, that slot will be allocated all 16 lanes. If you have two cards inserted in a board providing two slots, each slot will be allocated 8 lanes. In Z170 boards there could be three CPU-connected slots, and with three cards inserted in such a board, the slots would be allocated x8/x4/x4. See here [gamersnexus.net].

      Everything else runs off chipset-provided PCIe lanes, which are connected to the CPU by a PCIe x4-like DMI link. Thus, for example, in my Ivy Bridge system (Z68), there is a third PCIe x16 physical slot that is PCIe x4 electrically wired and functionally PCIe x1-connected unless I set a BIOS option that disables certain other peripherals (USB3 and eSATA add-ons). [wikipedia.org]

      If you connect your GPU and this SSD at the same time, you will be either x8/x8 (if using CPU-connected slots) or x16/x4 (if using one CPU-connected slot and one chipset-connected slot). That x4 would also be shared with every other I/O connection in the system due to the DMI link's "x4"-like bandwidth limitation.

      PCIe PLX switches add lanes to slots, but do not add further connections to the CPU or chipset. At the end of the day you're sharing either the 16 CPU-provided lanes or the 4 chipset-provided lanes in Intel's consumer-oriented boards. You have to go to the LGA2011 socket and workstation chipsets to gain more available bandwidth to the CPU.
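      A simplified sketch of that allocation for the mainstream Intel case described above (the x8/x8 and x8/x4/x4 splits and the shared DMI "x4" are from the parent post; boards with PLX switches or HEDT sockets behave differently, so treat this as an illustration only):

          # Simplified model of CPU lane allocation on a mainstream 16-lane Intel
          # platform, as described in the parent comment. Real boards vary.
          SPLITS = {1: [16], 2: [8, 8], 3: [8, 4, 4]}   # cards in CPU-connected slots

          def allocate(cpu_cards: int, chipset_cards: int = 0):
              cpu = [f"x{n}" for n in SPLITS.get(cpu_cards, [])]
              # Everything else hangs off the chipset, whose DMI uplink offers
              # roughly PCIe x4 worth of bandwidth shared by all chipset devices.
              chipset = ["x4 (shared DMI)"] * chipset_cards
              return cpu + chipset

          print(allocate(2))      # GPU + SSD both CPU-attached: ['x8', 'x8']
          print(allocate(1, 1))   # GPU at x16, SSD behind the chipset: ['x16', 'x4 (shared DMI)']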

    • Get a socket 2011 motherboard; it's not like these are meant to go into a single-socket consumer board.

    • by pjrc ( 134994 )

      If you can afford this "enterprise" SSD, you can certainly afford a Xeon or Haswell-E and LGA2011 motherboard with 40 PCIe lanes.

      • by Kjella ( 173770 )

        If you can afford this "enterprise" SSD, you can certainly afford a Xeon or Haswell-E and LGA2011 motherboard with 40 PCIe lanes.

        Yeah. The nice thing about x16 cards is that you can probably reuse graphics-designed systems. Like this one, four x16 slots in a 1U chassis, 2-way system so you have 80 lanes total:

        http://www.supermicro.com/prod... [supermicro.com]

        Drop in four of those cards and you'll have a pretty decent database server, I imagine...

  • Seagate has terrible failure rates.

    http://arstechnica.com/informa... [arstechnica.com]

    • by Anonymous Coward

      That's for spinning platters - perhaps in an effort to get users to switch to "more reliable (only 10% failure rate) SSD from Seagate"

      Mind you, I haven't seen any actual failure rates for Seagate SSDs; I didn't even know they made any pure SSDs. They're best known for that horrible hybrid contraption, which likely combines high SSD and high mechanical-platter failure rates!

  • by evolutionary ( 933064 ) on Tuesday March 08, 2016 @12:52PM (#51660085)
    We have no model number, no pricing, and no precise release date. Sounds like these tests are preliminary. Sounds like "beta" to me. I've been the victim of an entire batch of enterprise drives that had experimental firmware (which actually shut down the drives at a specific date!), so I'd take this announcement as market priming and take it with a grain of salt.
  • Talk to me about IOPS -- Input/Output operations Per Second -- or don't pretend you're talking to me about performance.

    Not saying this isn't actually really exciting, but IOPS is the metric that matters in at least 90% of use cases.
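    To make the distinction concrete, the usual conversion between the two metrics is throughput = IOPS x block size (a minimal sketch; the block sizes and IOPS figures here are illustrative assumptions, not Seagate's specs):

        # Throughput and IOPS are two views of the same work: MB/s = IOPS * block size.
        # A drive can post huge sequential GB/s and still be ordinary at small random I/O.
        def throughput_mb_s(iops: float, block_kb: float) -> float:
            return iops * block_kb / 1000

        print(throughput_mb_s(1_000_000, 4))   # 1M IOPS at 4 KB    -> 4000 MB/s
        print(throughput_mb_s(100_000, 4))     # 100K IOPS at 4 KB  ->  400 MB/s
        print(throughput_mb_s(80_000, 128))    # 80K IOPS at 128 KB -> 10240 MB/s (~10 GB/s)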
  • When will it become practical to eliminate the difference between temporary storage and long-term storage and just "execute in place", using RAM as a disk cache? It sounds like the speed is there already. If the storage is dangling off the memory controller rather than the PCIe controller, that would eliminate the worry about "lanes" as well.

    • Well, given that flash-based SSDs have a limit on the number of writes, using flash for memory operations is not practical in the long term.
      • by Mal-2 ( 675116 )

        This is why I mentioned "execute-in-place" specifically. If RAM is provisioned for working data sets but programs can simply be dumped at any time (because the pointer goes to the storage, not to the RAM), then the flash would take less wear than it does currently. Rather than swapping things out, they just get flushed and re-read as necessary, instead of the current paradigm of "load from disk, execute from RAM".
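        A rough illustration of the "pointer goes to the storage" idea using ordinary file-backed memory mapping, which is the closest everyday analogue (this is an analogy sketch, not how any particular execute-in-place scheme works; the file name is hypothetical):

            # File-backed memory mapping: pages are faulted in from storage on
            # access, and clean pages can be dropped and re-read later, which is
            # roughly the "flush and re-read as necessary" model described above.
            import mmap

            with open("big_readonly_asset.bin", "rb") as f:       # hypothetical file
                with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as view:
                    header = view[:16]    # touching this range pulls pages from storage
                    print(len(view), header[:4])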

"An idealist is one who, on noticing that a rose smells better than a cabbage, concludes that it will also make better soup." - H.L. Mencken

Working...