How is this on Slashdot? (Score:-1, Flamebait)
Re: (Score:5, Insightful)
If you don't know, why post?
I'm interested as well, and I want to read suggestions from people who have been there.
Re: (Score:0, Flamebait)
He did give an answer, and from that answer I think he probably does know. In short:
it depends.
I know, it sucks that we can't sum up all aspects of a technology in a single paragraph. Computers aren't easy and are still for people who don't mind doing their own homework.
Used to be "it depends". Now software is better (Score:5, Insightful)
I used to own a backup company, and we did a lot of testing of both software RAID and the major hardware vendors, including some $2,000+ RAID cards.
Once upon a time there were plusses and minuses.
Software RAID is better. Specifically Linux MD RAID, with LVM on top. It's much more flexible and featureful, and doesn't make your data dependent on a specific controller.
Back in the day, hardware RAID had the advantage of not using CPU cycles. At full write bandwidth, RAID could use up 10% or even 20% of the CPU (during rebuilds, when data is shuffled drive to drive through system memory).
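A minimal sketch of the MD-plus-LVM stack described above. The device names (/dev/sdb through /dev/sde), array level, and sizes are placeholders; the commands assume root on a Linux box with mdadm and LVM2 installed, and real disks you are willing to wipe:

```shell
# Build a 4-disk RAID 5 array out of whole disks (placeholder names).
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Layer LVM on top so volumes can be resized or snapshotted later.
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate --size 500G --name data vg0
mkfs.ext4 /dev/vg0/data

# Record the array so it reassembles by UUID on any machine --
# the data is not tied to a specific controller.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```

The last line is the controller-independence point in practice: MD identifies members by superblock UUID, so the array survives the disks being moved to completely different SATA/SAS ports or HBAs.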
Re:Used to be "it depends". Now software is better (Score:5, Insightful)
Software raid is better.
I've seen enough bespoke systems with (admittedly excellent, until they weren't) ancient controllers break after 10 years, and someone having to scour eBay for a replacement, to say yes, this is the correct answer in almost all current use cases.
Re: (Score:2)
Hardware raid for best performance.
Software raid for most flexibility.
So it all comes down to what you need.
Re: (Score:2)
That was true in the mid 1990s. Consider these two processors:
8-core i9 @ 4 GHz per core
Single core Celeron @ 500 MHz
Which do you think will give better performance?
The raid card has the Celeron on-board.
The software raid uses the i9.
Re: (Score:2)
While that's partly true, SSD caching and encryption at rest still perform far better with hardware RAID that has dedicated processors for crypto. I've also seen far more examples of ZFS pools going missing after a reboot than I have had logical volumes go missing with hardware RAID. Feel free to substitute MD RAID in for ZFS. SSD caching with ZFS does perform fairly well, but in most cases with hardware RAID you are using a hypervisor, at which point you are already using a large portion of that fancy new processor.
Re: (Score:2)
Both processors will be bottlenecked by the interface they are connected to. The difference is that the hardware RAID card doesn't have to send everything over PCIe and system memory to do operations between the disks.
Still, the advantages of a real software RAID solution like ZFS can't be discounted. You're not going to be able to run it on that super advanced 4 GHz i9 though, since Intel hates ECC.
Re: (Score:2)
> The difference is the hardware raid card doesn't have to send everything over pcie and system memory to do operations between the disks.
Yeah, it doesn't send it over the 64 GB/s PCIe bus; it sends it over the 0.75 GB/s SAS or SATA connection (6 Gbps is 0.75 GB/s).
Yep, the hardware RAID doesn't send it over a super fast connection; it instead uses a connection that runs 85 times slower.
Re: (Score:2)
PS: you also mentioned sending "drive to drive" data (rebuilds?) through system memory. System memory bandwidth is 640 Gbps. The SAS connection is 6 Gbps.
Going through system memory is literally over 100 times faster than reading or writing the disk. It's also several times faster than the memory onboard the RAID card, in most cases.
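The ratios in this subthread can be checked with shell arithmetic. The bandwidth figures are the round numbers quoted above (PCIe x16 at ~64 GB/s, SAS at 6 Gbps = 0.75 GB/s, system memory at ~640 Gbps), not measurements:

```shell
# PCIe x16 (~64 GB/s) vs one SAS lane (0.75 GB/s), scaled to
# integers as 6400/75 to stay in shell integer arithmetic.
echo "PCIe vs SAS: $(( 6400 / 75 ))x"   # prints 85x

# System memory (~640 Gbps) vs one SAS lane (6 Gbps).
echo "RAM  vs SAS: $(( 640 / 6 ))x"     # prints 106x
```

Which matches both claims: the PCIe path is roughly 85 times wider than the SAS link, and system memory is over 100 times faster still.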
Re: (Score:2)
Only if you're using some oddball RAID controller. I don't recall ever having an issue mounting Adaptec RAID volumes on new controllers.
Re: (Score:3)
RAID is not backup. Why should you have to go and buy an ancient controller if the card fails? Start fresh and restore from backup.
Re: (Score:2)
ding ding ding.
You should have a plan in place to age out old hardware, and have new hardware ready to take over production, while also maintaining backups the entire time; those backups can be used to pre-load your spares and dramatically reduce replication time when you go live.
Re: (Score:2)
One of the reasons to go with a hardware RAID controller in the past was also port density: when even server mainboards only had 4 SATA ports on them, software RAID of any real density was harder. Now you can grab a cheap 16-port SATA RAID card, pass the disks through as JBOD, and you're in business for mdraid, LVM, ZFS, or whatever.
But it wasn't always that way.
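And if that cheap HBA ever dies, the mdraid set behind it isn't tied to the card. On any replacement box with the same disks attached, something like the following brings it back (/dev/md0 is a placeholder name; the commands assume root and an existing MD array on the disks):

```shell
# Scan all attached disks for MD superblocks and assemble
# whatever arrays they describe, regardless of which ports
# or controller the disks now hang off.
mdadm --assemble --scan

# Re-activate any LVM volume groups found on the assembled array.
vgchange -ay

# Check array health before putting it back into production.
cat /proc/mdstat
mdadm --detail /dev/md0
```

This is the flip side of the "scour eBay for a replacement controller" failure mode described earlier in the thread: with software RAID, any machine that can see the disks can assemble the array.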