Seagate Debuts World's Fastest NVMe SSD With 10GBps Throughput (hothardware.com) 66
MojoKid writes: Seagate has just unveiled what it calls "the world's fastest SSD," and if the claim holds up, the performance gap between it and the next-closest competing offering is significant. The SSD, which Seagate announced today in "production-ready" form, uses the NVMe protocol to help it achieve breakneck speeds. So just how fast is it? Seagate says the new SSD is capable of 10GB/sec of throughput when used in a 16-lane PCIe slot, which it notes is 4GB/sec faster than the next-fastest competing SSD. The company is also working on a second, lower-performing variant that works in 8-lane PCIe slots and delivers 6.7GB/sec. Seagate positions the second model as a more cost-effective option for businesses that want a high-performing SSD but need to keep costs and power consumption under control. Seagate isn't ready to discuss pricing for its blazing-fast SSDs, and oddly hasn't disclosed a model name either, but it does say general availability for its customers will open up during the summer.
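For a rough sanity check on those numbers (my own back-of-the-envelope math, not anything Seagate has published): PCIe 3.0 provides roughly 985MB/s of usable bandwidth per lane after encoding overhead, so both claimed figures fit under the respective link ceilings.

```python
# Back-of-the-envelope check of the claimed speeds against PCIe 3.0 link
# ceilings. The ~985 MB/s usable-per-lane figure is an assumption on my
# part (128b/130b encoding), not a Seagate number.
PER_LANE_GBPS = 0.985  # approximate usable GB/s per PCIe 3.0 lane

for lanes, claimed in [(16, 10.0), (8, 6.7)]:
    ceiling = lanes * PER_LANE_GBPS
    print(f"x{lanes}: claimed {claimed} GB/s vs. ~{ceiling:.1f} GB/s link ceiling")

# x16: claimed 10.0 GB/s vs. ~15.8 GB/s link ceiling
# x8: claimed 6.7 GB/s vs. ~7.9 GB/s link ceiling
```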
Can anyone explain that speed in football fields? (Score:1)
Or at least in mph.
Re: (Score:1)
Re: (Score:2)
Re:Can anyone explain that speed in football field (Score:4, Funny)
Imagine a 10GB picture of a football field. This SSD can transfer ONE of those pictures per second!
Re: (Score:2)
Why would you need two of them, though, when you've got good ol' friendly /dev/null to copy all your files into!
Re: (Score:3)
Is that an Imperial or metric fortnight?
Re: (Score:2)
Re: (Score:2)
TFA clearly states that it is damn fast:
Crucially, does anyone know whether I can safely use this with my "System D" system? It molested my cat last month, twice, so I want to make sure all is safe before slotting this sucker in.
Re: (Score:2)
TFA clearly states that it is damn fast:
Do note that "damn fast" is NOT equal to "ramadan fast".
There are differences: it doesn't take as long as a Ramadan fast, to start with, and the cache-miss penalty is less severe per byte.
Re: (Score:2)
For a typical SSD, that means 3,000 pigeons carrying floppies (and a lot of interns to manage the disks). (300MB/s)
For this SSD, you'd need 100,000 pigeons carrying floppies. Or you could just be smart and go with 10 pigeons carrying 10GB thumb drives.
Re: (Score:2)
Why would I be proud of someone who thinks bankruptcy is the first option, that taxpayers should foot the bill for private companies, who ran their casinos into the ground while everyone else was flourishing, who has been married 3 times and whose grasp of reality seems to be on par with Jeb Bush's, "My brother protected this country"?
Re: (Score:2)
This is Seagate you're talking about here... We're not dealing with a flash memory manufacturer like Samsung, Intel, Micron, SanDisk, or Toshiba, so who knows what grade F- stuff Seagate has purchased for the lowest price possible.
Re: (Score:2)
This is for servers, not desktops. Anyway, desktop systems are limited in PCIe lanes, and a high-end system will already have two or more x16 video cards plus an x4 SSD.
Re: (Score:2)
In any event, this is for enterprise hardware. Oh, FYI, most server motherboards in my experience have 2-3 x16 PCIe slots. Some only ha
Re: (Score:2)
http://www.supermicro.com/prod... [supermicro.com]
http://www.supermicro.com/prod... [supermicro.com]
Inconceivable (Score:2, Informative)
Hard to grok. You could fill up that new Samsung 16TB drive in 26 min 40 sec.
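The arithmetic behind that figure, for the curious (assuming the marketing-friendly 16TB = 16,000GB and that the drive sustains its full 10GB/s while writing, both simplifications):

```python
# Time to fill a 16 TB drive at a sustained 10 GB/s. Decimal units and a
# perfectly sustained write speed are assumed; real numbers would be worse.
capacity_gb = 16_000
write_gbps = 10

seconds = capacity_gb / write_gbps
print(f"{seconds:.0f} s = {int(seconds // 60)} min {int(seconds % 60)} s")
# 1600 s = 26 min 40 s
```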
Re: (Score:2)
Fill it from what? /dev/urandom isn't even that fast on any normal hardware, and it would take a lot of spindles and a dedicated 100Gb/s network card to fill that pipe. This thing isn't practical for anything in a normal datacenter. The only place I can see something like this currently being a justifiable purchase would be as caching drives in a massive data acquisition system, like the LHC or similar, or very large scale modeling, like weather. I am actually curious about the capacity of these drives,
Re: (Score:2)
Well, we're just upgrading our SAS VA installation to 2TB of RAM, and upgrading our SQL Server production db to 768GB of RAM - it's pretty cheap nowadays and certainly much cheaper than optimizing a lot of queries - and we have better things to do than optimize queries.
That 16TB SSD would be pretty neat too - I can already see how that would benefit the logs and tempdb on our installation. And 16TB is a bit too large for just tempdb and logs, but the 10GB/s is cool and I would certainly like the 15TB when
Re: (Score:2)
With a ~1-second Sleep command that isn't buggy as shit, I'd actually turn my PC off every night.
Re: (Score:2)
Re: (Score:2)
I'm thinking modeling with numerous disparate inputs into a dedicated array with multiple I/O ports. It's still going to hit the bottleneck, but you can probably push and pull pretty fast - multiples of these would have been a godsend back in the day when the disk was the bottleneck and not the bus. Now the bus is in the way, and that's improving, slowly. This sits right there almost on the board. You should be able to slam it with multiple I/O and be able to (reasonably) get a decent bi-directional data st
Decisions decisions (Score:3)
Crap. Now what to do here for my new PC build?
Most motherboards with 2 or 3 x16 slots really only have all 16 lanes hooked into one slot - the others are usually 8 lanes or less - 16-8-4 isn't even an uncommon configuration. (PCIe tip - the slot connectors are really just physical; a board can use x16 connectors even when they're only wired up as x1, so any PCIe card will fit, albeit only running at x1 speed. It's why Apple's old Mac Pros used x16 connectors - that way they could accept ANY PCIe card.)
So now what to do... GPU in the x16 slot and slow down my fast SSD by putting it in an x8? Or have my SSD be nice and fast in the x16 slot and tank my FPS by putting the GPU in the x8 slot? (There's a quick way to check what a slot actually negotiated - see the sketch below.)
Never mind if you want to do SLI or CrossFire and now have to deal with two x16 GPUs and one x16 SSD...
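On the "what is this slot actually running at" question: if you're on Linux, the negotiated link width and speed are exposed in sysfs, so you can check without cracking the case. A minimal sketch, assuming a Linux box with a readable /sys (the paths are the standard PCI sysfs layout, nothing vendor-specific):

```python
# Print the negotiated PCIe link width/speed for every PCI device, as
# reported by Linux sysfs. A physically x16 slot that is only wired or
# trained as x4 (or x1) will show up here. Assumes Linux with readable /sys.
import glob
import os

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    try:
        with open(os.path.join(dev, "current_link_width")) as f:
            width = f.read().strip()
        with open(os.path.join(dev, "current_link_speed")) as f:
            speed = f.read().strip()
    except OSError:
        continue  # not every device exposes link attributes
    print(f"{os.path.basename(dev)}: x{width} @ {speed}")
```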
Re:Decisions decisions (Score:4, Interesting)
Dude, any GPU will do just fine at 8x. Or how do you think SLI would work otherwise? The beefiest gamerboards have 20 PCIe lanes max.
Re: (Score:1)
Try this board from 6 years ago:
http://www.overclockersclub.com/reviews/asus_sabertooth_x58/3.htm
Now consider that there are systems with 2 X58's in them.
That said, GPUs typically need fast interfaces between onboard memory and the GPU. Filling textures from an HD or system RAM only has to take place once per level (or whatever), so speed there isn't very important.
Re: (Score:3, Informative)
Intel boards with LGA 2011-3 sockets have 40 PCIe lanes available coming off the CPU.
Re:Transfer rates not bottleneck (Score:3)
You will see zero benefit when booting a PC or running standard programs. What really drives speed there is IOPS and the CPU (once mechanical disks and value SSDs are out of the picture). Tom's Hardware did a benchmark of RAID 0 SSDs vs. standard SSDs back in 2013.
The RAID setups booted slower than non-RAID. Game loading speed didn't show a difference either. BUT WinZip and transferring a large 2GB file were crazy fast.
So a server would benefit, maybe, but I doubt most have 10Gb/s Ethernet to come close. So unless you work on databases (those
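To put rough numbers on the IOPS point (the figures below are illustrative round numbers I picked, not benchmarks of this or any particular drive): booting and launching programs means thousands of small scattered reads, so the operation rate sets the pace, not the headline GB/s.

```python
# Why boot/app-load times track IOPS rather than sequential throughput:
# many small random reads are bounded by operations per second. All
# numbers here are illustrative assumptions, not measurements.
io_size_kb = 4            # typical small random read
num_reads = 50_000        # e.g. blocks touched during boot
iops = 100_000            # assumed random-read IOPS
seq_gbps = 10             # headline sequential speed

time_random = num_reads / iops                               # IOPS-bound
time_sequential = num_reads * io_size_kb / (seq_gbps * 1e6)  # same bytes, streamed

print(f"random-read bound: {time_random:.2f} s")
print(f"if it were one sequential stream: {time_sequential:.3f} s")
# random-read bound: 0.50 s
# if it were one sequential stream: 0.020 s
```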
Re:Decisions decisions (Score:5, Informative)
Simply, no.
You are right about the physical PCIe slot connectors.
You could be right about the physical PCIe slot wiring, but in many boards with two x16 slots (assuming that there are only two, with the chipset-wired slots not being physically x16) both are electrically wired to be x16. You do not have to put a card in a specific one of the two slots to have an operational x16 connection.
You are wrong about the functional connections. I'll assume an Intel-compatible motherboard since that is what I'm familiar with. AMD-compatible motherboards could be different - I simply do not know.
Intel CPUs provide 16 PCIe lanes for connection to the x16 slot(s). If you have one card inserted, that slot will be allocated all 16 lanes. If you have two cards inserted in a board providing two slots, each slot will be allocated 8 lanes. In Z170 boards there could be three CPU-connected slots, and with three cards inserted in such a board, the slots would be allocated x8/x4/x4. See here [gamersnexus.net].
Everything else runs off chipset-provided PCIe lanes, which are connected to the CPU by a PCIe x4-like DMI link. Thus, for example, in my Ivy Bridge system (Z68), there is a third PCIe x16 physical slot that is PCIe x4 electrically wired and functionally PCIe x1-connected unless I set a BIOS option that disables certain other peripherals (USB3 and eSATA add-ons). [wikipedia.org]
If you connect your GPU and this SSD at the same time, you will be either x8/x8 (if using CPU-connected slots) or x16/x4 (if using one CPU-connected slot and one chipset-connected slot). That x4 would also be shared with every other I/O connection in the system due to the DMI's x4-like bandwidth limitation (rough numbers below).
PCIe PLX switches add lanes to slots, but do not add further connections to the CPU or chipset. At the end of the day you're sharing either the 16 CPU-provided lanes or the 4 chipset-provided lanes in Intel's consumer-oriented boards. You have to go to the LGA2011 socket and workstation chipsets to gain more available bandwidth to the CPU.
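To put approximate numbers on that sharing (using the same ~985MB/s-per-PCIe-3.0-lane assumption as any back-of-the-envelope math, not a vendor figure):

```python
# Approximate usable bandwidth for the slot configurations described above,
# assuming ~985 MB/s per PCIe 3.0 lane (my assumption, not a vendor spec).
PER_LANE_GBPS = 0.985

configs = {
    "CPU-connected slot, x16 alone": 16,
    "CPU-connected slots split x8/x8": 8,
    "chipset-connected slot, x4 (shared with all other chipset I/O)": 4,
}
for name, lanes in configs.items():
    print(f"{name}: ~{lanes * PER_LANE_GBPS:.1f} GB/s")

# A 10 GB/s card needs more than a PCIe 3.0 x8 link (~7.9 GB/s),
# so on a 16-lane consumer platform something has to give.
```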
Re: (Score:2)
It helps if you read what was written before you reply.
"At the end of the day you're sharing either the 16 CPU-provided lanes or the 4 chipset provided lanes in Intel's consumer-oriented boards. You have to go to the LGA2011 socket and workstation chipsets to gain more available bandwidth to the CPU."
Is Haswell-E a consumer-oriented platform?
Re: (Score:2)
Re: (Score:2)
Get a socket 2011 motherboard; it's not like these are meant to go into a single-socket board.
Re: (Score:3)
If you can afford this "enterprise" SSD, you can certainly afford a Xeon or Haswell-E and LGA2011 motherboard with 40 PCIe lanes.
Re: (Score:2)
If you can afford this "enterprise" SSD, you can certainly afford a Xeon or Haswell-E and LGA2011 motherboard with 40 PCIe lanes.
Yeah. The nice thing about x16 cards is that you can probably reuse graphics-designed systems. Like this one, four x16 slots in a 1U chassis, 2-way system so you have 80 lanes total:
http://www.supermicro.com/prod... [supermicro.com]
Drop in four of those cards and you'll have a pretty decent database server, I imagine...
The real downside? (Score:2)
Seagate has terrible MTBF rates.
http://arstechnica.com/informa... [arstechnica.com]
Re:Seagate platters != SSDs? (Score:1)
That's for spinning platters - perhaps in an effort to get users to switch to the "more reliable (only 10% failure rate) SSDs from Seagate".
Mind you, I haven't seen any actual failure rates for Seagate SSDs; I didn't even know they made any pure SSD-only drives. They're best known for that horrible hybrid contraption, which likely combines high SSD and high mechanical-platter failure rates!
We advertising, bragging... but... (Score:5, Insightful)
IOPS or bust (Score:2)
Not saying this isn't actually really exciting, but IOPS is the metric that matters in at least 90% of use cases.
End of RAM vs. file system? (Score:2)
When will it become practical to eliminate the difference between temporary storage and long-term storage and just "execute in place", using RAM as a disk cache? It sounds like the speed is there already. If the storage is dangling off the memory controller rather than the PCIe controller, that would eliminate the worry about "lanes" as well.
Re: (Score:2)
Re: (Score:2)
This is why I mentioned "execute-in-place" specifically. If RAM is provisioned for working data sets but program pages can simply be dropped at any time (because the pointer goes to the storage, not to the RAM), then the flash would take less wear than it does currently. Rather than swapping things out, pages just get flushed and re-read as necessary, instead of the current "load from disk, execute from RAM" paradigm.
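Memory-mapped files already behave a bit like this today: the OS page cache (RAM) fronts the file on storage, and clean pages can simply be dropped and re-read later instead of being written to swap. A minimal sketch of that flush-and-re-read model using Python's standard mmap module (the file name is just an example):

```python
# Map a file so the page cache (RAM) acts as the cache and the file on the
# SSD is the backing store. Pages fault in when touched and can be evicted
# and re-read later -- no swap writes -- which is the model described above.
import mmap

with open("big_dataset.bin", "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        header = mm[:16]  # touching these bytes pages them in from storage
        print(header.hex())
        # Untouched regions never occupy RAM; touched pages stay in the
        # page cache only until memory pressure drops them.
```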