RAID5 SSD: Pros & Cons of Replacing Failed HDD w/SSD

In summary, the thread is about the poster's homebrew DVR, which uses three Seagate 1 TB 2.5" drives in a RAID5 configuration. One of the drives failed, and the poster is considering replacing it with an SSD instead of another HDD. The discussion covers the pros and cons of using SSDs and HDDs in a RAID configuration, as well as the cost and reliability of each option. The conversation also touches on the idea of using a mix of SAS and SATA drives, but the poster is leaning toward all SSDs. The main concerns are cost and the potential for data recovery in case of drive failure.
  • #1
Vanadium 50
TL;DR Summary
I'm interested in experiences people may have using SSD for RAID when speed is not a factor.
Here's the history. I have a homebrew DVR using three Seagate 1 TB 2.5" drives in a RAID5 configuration. On Friday the DVR sent me an email (sweet, huh?) telling me a drive had failed. I cleared the fault and in a second it faulted again. I rebooted - it never came back up and the drive was making a woodpecker sound. I power cycled, it booted, no woodpeckers but it cannot find the drive.

Conclusion: the drive is toast.

It's still in the system, which is not so nice. I switched to a mirror, so I went from 2 TB usable to 1 TB, but as I am only using 300-400 GB it's no immediate problem.

The obvious solution is to pop in a replacement drive, reconfigure back to RAID5 and pretend none of this ever happened.
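(For reference, the swap itself is routine if this is Linux software RAID. A minimal sketch, assuming an mdadm array at /dev/md0 and hypothetical device names - the post doesn't say which RAID stack the DVR uses:)

Code:
# mark the dead member failed, then remove it from the array
mdadm --manage /dev/md0 --fail /dev/sdc
mdadm --manage /dev/md0 --remove /dev/sdc

# add the replacement; the resync/rebuild starts automatically
mdadm --manage /dev/md0 --add /dev/sdd

# watch rebuild progress
cat /proc/mdstat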

However...

Seagate no longer makes this drive. There are still a few in inventory, but their price has soared to $145. And what happens next time? I have seen these drives fail every couple of years.

The next idea is to replace it with the newer version of the drive. This uses SMR - shingled magnetic recording - which has issues with RAID in general and my RAID in particular. A rebuild takes under an hour, and I estimate over 2 days with the new drive. That's a long time to be vulnerable to a second fault.

There are some other options I considered, and won't go into unless they come up, but I am considering replacing all three drives with SSDs. No-name 1 TB SATA SSDs cost less than new 1 TB HDDs, and for a few bucks more, one can get a WD Blue. My thoughts:
  • One good reason to go HDD over SSD is cost. But the cost differential is small and may actually be slightly negative.
  • Another is reliability. WD says they stand by their SSDs to 400x capacity, i.e. 400 TB written. My cache drive, which has slightly heavier use, sees about 5 or 6 TB/year. So I don't see this as substantially worse than the HDDs, which last ~5-7 years before failing. (A quick way to check actual write volume is sketched right after this list.)
  • I get more speed than I need, but so what? It's not hurting anything.
  • Related, I can probably drop the cache drive. (A small SSD I have in the system now.)
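On the reliability bullet, here is the quick check mentioned above - a sketch assuming the drives expose the usual SMART write counters (attribute names vary by vendor) and using WD's published 400 TBW rating:

Code:
# actual write volume on an existing drive (Total_LBAs_Written is in 512-byte units on most SSDs that report it)
smartctl -A /dev/sda | grep -i -E 'total_lbas_written|host_writes'

# 400 TB of rated writes at ~6 TB/year of actual writes -> decades of rated endurance
echo "400 / 6" | bc    # prints 66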

Am I missing anything?

Now...here is where it gets crazy. Suppose I just got one 1 TB SSD and replaced the failed drive. I could go back to RAID5 and my 2 TB capacity. This should work, right? I'm just not sure that anyone has ever tried it. And while speed is not a concern, it should be fast. Not as fast as a pure SSD system, but faster than a pure HDD system. And I could leave the cache drive attached if I wanted.

Is this absurd?
 
  • #2
It's not absurd and will likely work well, but an SSD-and-HDD RAID will only be as fast as the slowest drive, so I'd pick one or the other.

With an HDD there is usually some chance of data recovery. It might be expensive, but there are lots of places that can do the job unless there's been a head crash that turned the platters into shavings. I like to build and use systems that have at least two-drive-failure RAID (Redundant Array of Inexpensive Disks) redundancy (for home use it's 900GB SAS enterprise drives I buy surplus and surface-format for errors), and I have a real backup to a separate backup system with redundant RAID drives. I've only used SSDs in a mirror configuration, as a multi-drive-failure RAID of them gets expensive.

My case might not be typical: I run surplus enterprise servers (with enterprise-level RAID error reporting and prediction) at home on a 10G house backbone that uses 24/7 off-grid solar power (battery bank) for the computer rack and cooling.

24V 200Ah LiFePO4 Battery with 4000W inverter.

Power system build pics.

A typical work related RAID 10 storage system
https://www.acronis.com/en-us/blog/posts/whats-raid10-and-why-should-i-use-it/
 
  • Like
Likes berkeman
  • #3
One thing to consider with SSD technology is filling it to capacity. I'm not sure how that would work in a RAID configuration.

Basically, filling up an SSD with a lot of static files and then hammering away at the remaining space can cause premature drive failure, much as running a defrag on an SSD does: the wear gets concentrated on a small pool of free blocks.
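If you do go all-SSD, one common mitigation is to leave part of each SSD unpartitioned so the controller always has spare area for wear leveling. A minimal sketch with parted, using a hypothetical device name and an arbitrary 90% figure:

Code:
# leave ~10% of the SSD unpartitioned as extra over-provisioning
parted /dev/sdX mklabel gpt
parted -a optimal /dev/sdX mkpart primary 0% 90%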

https://www.makeuseof.com/tag/5-warning-signs-ssd-break-fail/

Here's a writeup about using a pure SSD RAID setup:

https://www.enterprisestorageforum.com/hardware/ssd-raid-boosting-ssd-performance-with-raid/
 
  • Like
Likes russ_watters
  • #4
nsaspook said:
some chance of data recovery
I don't think it's an issue for a DVR. If I lose an episode of The Muppet Show ("Nothing like good entertainment." "Yeah, and that was nothing like it!") I just wait for it to come around again. Anything I really, really want to keep gets transferred to my regular RAID anyway.

nsaspook said:
raid will only be as fast as the slowest drive
It's kind of academic, but I don't think it's the case. Consider the simplest RAID, a mirror. The system can get data off either drive, so if one is 3x faster than the other, it will serve 3x as much data as the other. The whole system should have reads 4x faster than the slow drive. Writes, I agree - it will be the slowest drive that throttles things.
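If anyone wants to measure rather than argue, a read-only fio run against the array device would settle it. A sketch, assuming the array appears as /dev/md0 (hypothetical path; fio must be installed):

Code:
# sequential read test against the whole array; --readonly guards against accidental writes
fio --name=mirror-read --filename=/dev/md0 --readonly --rw=read --bs=1M \
    --direct=1 --ioengine=libaio --iodepth=8 --runtime=30 --time_based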

In RAID5 things are more complicated, but the same principle applies. The system gets some data off one HDD and needs to get its partner. It can get it from either the HDD or the SSD, so more often than not it gets it from the SSD. While the HDD is happily seeking away, the SSD is ready to go.

Now in my case, since there is a cache SSD in front of the RAID, I expect to serve reads out of the cache most of the time, and that can saturate a GbE link. So the RAID doesn't really need to go faster. The real rationale for this odd hybrid system isn't speed - it's cost, or availability of drives, which is much the same thing.

I confess I hadn't considered SAS drives. A mixed SAS/SATA system seems to me to have the same pros and cons as a mixed SSD/HDD system, plus I'd need to buy a controller, plus the drives look to be about the same price as new SSDs. And for a DVR quiet is a bonus. (I can't hear the Seagates except when they are dying)
 
Last edited:
  • #5
jedishrfu said:
One thing to consider with SSD technology is filling it to capacity.
HDDs don't like it much either. :wink:

I think the DVR use case is probably the best for this. To first order, a show comes on, is recorded, is watched, and then erased. Yes, some programs are kept for a while, so this isn't exactly the case, but it's close.

It also means that the data is dominated by a few large files. Not sure of the impact of this, but it's certainly a feature.

I don't see speed as an issue. Typically, HD is about 1.6 GB/hour. I have four tuners, so my max write speed is 6.4 GB/hour or 15 Mb/s. (Note the switch to bits) That's tiny. As I said, read speed saturates the network, so I won't see any boost there.
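For anyone checking that arithmetic (taking 1 GB as 8000 megabits):

Code:
# 4 tuners x 1.6 GB/hour, converted to megabits per second
echo "scale=1; 4 * 1.6 * 8000 / 3600" | bc    # prints 14.2, i.e. roughly 15 Mb/s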

Because of the caching structure, unless I have recorded a lot of shows without watching them, I am likely getting the data from memory, or at least my cache SSD. So the speed of the RAID array doesn't enter into it except for programs I recorded a long time ago and haven't watched and erased.

Remember, people used to "tape" shows - so the speed of magnetic tape was adequate.
 
  • #6
Vanadium 50 said:
HDDs don't like it much either. :wink:

I think the DVR use case is probably the best for this. To first order, a show comes on, is recorded, is watched, and then erased. Yes, some programs are kept for a while, so this isn't exactly the case, but it's close.

It also means that the data is dominated by a few large files. Not sure of the impact of this, but it's certainly a feature.

I don't see speed as an issue. Typically, HD is about 1.6 GB/hour. I have four tuners, so my max write speed is 6.4 GB/hour or 15 Mb/s. (Note the switch to bits) That's tiny. As I said, read speed saturates the network, so I won't see any boost there.

Because of the caching structure, unless I have recorded a lot of shows without watching them, I am likely getting the data from memory, or at least my cache SSD. So the speed of the RAID array doesn't enter into it except for programs I recorded a long time ago and haven't watched and erased.

Remember, people used to "tape" shows - so the speed of magnetic tape was adequate.

I have a DVR-type recording system using MythTV, Zoneminder and a DLNA server that writes/reads large, program-sized files via a SAS HDD RAID. Speed is not a problem with good hardware and a fast network: Linux XFS-to-XFS file system transfers on a 10G Ethernet backbone.

Bulk file transfer numbers (screenshots attached).


A typical RAID mirror of SATA drives:
root@sma2:~# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sun Nov 11 20:35:49 2018
Raid Level : raid1
Array Size : 7813895488 (7.28 TiB 8.00 TB)
Used Dev Size : 7813895488 (7.28 TiB 8.00 TB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Mon Jul 10 07:13:08 2023
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Consistency Policy : bitmap

Name : sma2:0 (local to host sma2)
UUID : 3c87c6d1:7f6272fa:3cc0c731:55b9eae6
Events : 23685

Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
1 8 32 1 active sync /dev/sdc

root@sma2:~# smartctl -a /dev/sdc
smartctl 7.3 2022-02-28 r5338 [x86_64-linux-6.3.0-2-amd64] (local build)
Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family: Western Digital Red
Device Model: WDC WD80EFZX-68UW8N0
Serial Number: VK0X0TPY
LU WWN Device Id: 5 000cca 3b7ccbd61
Firmware Version: 83.H0A83
User Capacity: 8,001,563,222,016 bytes [8.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 5400 rpm
Form Factor: 3.5 inches
Device is: In smartctl database 7.3/5319
ATA Version is: ACS-2, ATA8-ACS T13/1699-D revision 4
SATA Version is: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Mon Jul 10 12:21:03 2023 PDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000b 100 100 016 Pre-fail Always - 0
2 Throughput_Performance 0x0005 130 130 054 Pre-fail Offline - 120
3 Spin_Up_Time 0x0007 193 193 024 Pre-fail Always - 323 (Average 358)
4 Start_Stop_Count 0x0012 100 100 000 Old_age Always - 46
5 Reallocated_Sector_Ct 0x0033 100 100 005 Pre-fail Always - 0
7 Seek_Error_Rate 0x000b 100 100 067 Pre-fail Always - 0
8 Seek_Time_Performance 0x0005 128 128 020 Pre-fail Offline - 18
9 Power_On_Hours 0x0012 095 095 000 Old_age Always - 39710
10 Spin_Retry_Count 0x0013 100 100 060 Pre-fail Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 46
22 Helium_Level 0x0023 100 100 025 Pre-fail Always - 100
192 Power-Off_Retract_Count 0x0032 098 098 000 Old_age Always - 2843
193 Load_Cycle_Count 0x0012 098 098 000 Old_age Always - 2843
194 Temperature_Celsius 0x0002 171 171 000 Old_age Always - 35 (Min/Max 16/49)
196 Reallocated_Event_Count 0x0032 100 100 000 Old_age Always - 0
197 Current_Pending_Sector 0x0022 100 100 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0008 100 100 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x000a 200 200 000 Old_age Always - 0

SMART Error Log Version: 1
No Errors Logged
 
  • Like
Likes berkeman
  • #7
Your system is very impressive, no doubt about it. But I probably won't put together a solar-powered enterprise cluster to replace a failed drive so I can time shift The Muppet Show. ("Hey maybe we should be on the stage." "Yeah...there's one leaving town in five minutes.")

My experience with mdadm RAID is spotty - some work great, others seem to require some manual intervention every time the host is power cycled. And one terrible day at work when they replaced a failed mirror and the system dutifully copied the new blank drive onto the old working drive. Oops.

Your 8 TB Reds are CMR, so you won't see any of the SMR problems.
 
  • Informative
Likes nsaspook
  • #8
Vanadium 50 said:
Your system is very impressive, no doubt about it. But I probably won't put together a solar-powered enterprise cluster to replace a failed drive so I can time shift The Muppet Show. ("Hey maybe we should be on the stage." "Yeah...there's one leaving town in five minutes.")

My experience with mdadm RAID is spotty - some work great, others seem to require some manual intervention every time the host is power cycled. And one terrible day at work when they replaced a failed mirror and the system dutifully copied the new blank drive onto the old working drive. Oops.

Your 8 TB Reds are CMR, so you won't see any of the SMR problems.

I mainly posted that to show what cheap surplus server hardware is capable of today if you have affordable power.
What you can get for under $200 is amazing.

Yes, it's a little much for X-Files or SG-1 recordings, but what's much, much more valuable is knowing it's a rock-solid hunk of hardware that's unlikely to fail just because of a reboot.

 
  • #9
Stargate SG-1 and Colonel Jack O'Neill are forever in your debt.

The Goa'uld, not so much, and I think the Ori would concur, citing "the enemy of my enemy is my friend."
 
  • Like
Likes russ_watters and nsaspook
  • #10
Just a data point here.

I've been using multiple WD Blue drives for many years. Their lifetime has been around 5000 ±500 hours.

Worst case was around 1800 hrs, due to bad plating on the connector to the heads and positioner. Best case was 10,000 hrs for a few that spent almost all their life just spinning (they were on-line backups, rarely read or written).

I typically dedicate one physical drive to all the temporary, paging, and scratch files. That seems to be "hard service" for them; they last roughly 2500-3000 hrs.

Cheers,
Tom
 
  • Like
Likes hmmm27, nsaspook and Vanadium 50
  • #11
nsaspook said:
SG-1
With a young Teryl Rothery...she's pretty.:heart:
 
Last edited:
  • #12
Tom.G said:
WD Blue
Can you clarify? HDD or SSD? They market both now.

One annoying thing about WD is that their "Blue" drives of a given capacity actually include a half dozen or so models. There was also the merger of the Green and Blue lines which added to the confusion.

Many, many years ago, WD did something not very nice, I caught them at it, and they made it right. So I think of them in a generally positive light, even though I am a bit wary of what they say.

I do think that their introduction of SMR was handled poorly. First, it should have been announced, and they should have put it on their data sheets as soon as it happened. And second, I don't believe SMR is suitable for RAID, especially RAID5/6. They took advantage of the Red's reputation to sell lots and lots of drives unsuitable for the purpose for which they were sold.

If I place a spinning Blue drive into a RAID and the performance is terrible, it's on me. If I place a spinning Red drive into a RAID and the performance is terrible by design, WD needs to own it.

SSDs are of course different. In 2.5" SATA, I don't think the Red even exists, and advantages like "low vibration" are a non-issue.
 
  • Informative
Likes nsaspook
  • #13
nsaspook said:
will only be as fast as the slowest drive
I thought about it some more, and looked into it a little.

I expect RAID5 to have the same write speed as one drive: the data has to be written three times, once per drive. I would expect the read speed to be 1.5x as fast. I only need to read from two of the drives to get my data, so with three drives, it should go 3/2 as fast: I read one sector from AB, one from BC and one from AC and I now have three sectors in the time of two.

Here there is data from all sorts of configurations: https://calomel.org/zfs_raid_speed_capacity.html

He sees better performance than that. An excerpt:

Code:
1x 4TB, single drive,    3.7 TB,  w=108MB/s , rw=50MB/s , r=204MB/s
2x 4TB, mirror (raid1),  3.7 TB,  w=106MB/s , rw=50MB/s , r=488MB/s
3x 4TB, raidz1 (raid5),  7.5 TB,  w=225MB/s , rw=56MB/s , r=619MB/s

I find this surprising. Both reads and writes go faster than simple scaling predicts. While I did not run my own benchmarks, in part because every number in every configuration is much better than I need, I can say that scrubs - reading all the data from every disk and checking its integrity - ran faster in RAID5.
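The scrub comparison is easy to repeat on ZFS, for anyone curious - a sketch assuming a pool named tank (hypothetical name):

Code:
zpool scrub tank        # read and verify every block in the pool
zpool status -v tank    # shows scrub rate and estimated time remaining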
 
  • Like
Likes nsaspook
  • #14
Vanadium 50 said:
I do think that their introduction of SMR was handled poorly. First, it should have been announced, and they should have put it on their data sheets as soon as it happened. And second, I don't believe SMR is suitable for RAID, especially RAID5/6. They took advantage of the Red's reputation to sell lots and lots of drives unsuitable for the purpose for which they were sold.
Good grief, this had completely passed me by - I have always used Seagate IronWolf and I'm not regretting that now!

Vanadium 50 said:
Here there is data from all sorts of configurations: https://calomel.org/zfs_raid_speed_capacity.html
...
I find this surprising.
Yes I find some of those numbers a bit odd too - and I'm particularly surprised the only data for 4 disks is for RAID 6 and RAID 10, surely RAID 5 is the optimal compromise with 4 disks for most situations.
 
  • #15
bonnie++ -u root -r 1024 -s 16384 -d / -f -b -n 1 -c 4

For this test, the system has a set of 600GB SAS drives in all slots.
/ raid 50
/images raid 6
/backups raid 5
 

Attachment: lo.txt (4.8 KB)
  • #16
Well, I'm grateful that The Benchmark Guy did all that work. I think the overall message is that most configurations work as expected, with two exceptions: one is that writes tend to be faster, and the other is that RAID5 performs better than expected.

In my main RAID, I have some IronWolfs, and some Enterprise Capacities. (Formerly "Constellations") They work fine - they do send a lot of SMART messages saying "nothing's the matter". The only WD drive in there now is an HGST Ultrastar Helium. Seven drives total in a 3x2 array with 1 spare. I have to say the spare works shockingly well - temporarily lost one drive in a thunderstorm, and the spare automatically kicked in and resilvered. One reboot later and the process reversed.
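Since the spare/resilver behaviour came up: on ZFS, adding a hot spare like that is a one-liner. A sketch with hypothetical pool and device names:

Code:
zpool add tank spare sdh   # attach a hot spare to the pool
zpool status tank          # the spare is listed as AVAIL until it is needed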

But that's my main RAID. The DVR is smaller and simpler. I'm kind of stuck with 2.5" because of the enclosure, and I don't see many CMR drives in that size. Basically whatever is still on the shelf.
 
  • Informative
Likes nsaspook
  • #17
Vanadium 50 said:
Can you clarify? HDD or SSD? They market both now.
HDD - the ones with the shiny mirror that goes 'round 'n 'round.

BTW, my wife ordered a replacement drive from the Amazon site! :cry:
It was shipped via the Postal Service (US Post Office), :cry:
in a cardboard box with some bubble wrap on one side of it,
and it rattled in the box upon arrival. :cry: (the bubble wrap was collapsed)

Life is getting in the way, so it will be several days before I can run a diagnostic on it.
 
  • #18
Vanadium 50 said:
Seven drives total in a 3x2 array with 1 spare.
That's some rig. I do wish people wouldn't post stuff like this; it's just made me blow a day's billing on a new box for no better reasons than (i) I can; and (ii) I have a good place to donate the old one.

Vanadium 50 said:
I have to say the spare works shockingly well - temporarily lost one drive in a thunderstorm, and the spare automatically kicked in and resilvered. One reboot later and the process reversed.
Power issues are much less of a factor in the UK (urban area): never had a surge and an outage is maybe once in 24 months. It makes one complacent.

Vanadium 50 said:
But that's my main RAID. The DVR is smaller and simpler. I'm kind of stuck with 2.5" because of the enclosure, and I don't see many CMR drives in that size. Basically whatever is still on the shelf.
Yes, I figured that. Now that SSDs are bigger, cheaper and more reliable, I think I'd ditch the RAID entirely for this application unless I couldn't live with losing Statler and Waldorf.
 
  • #19
Tom.G said:
it rattled in the box
That's not a problem. When you pick up the drive itself and it rattles, now that's a problem!

One of my best Blues is a sketchy one I got from Amazon. It was 2/3 the usual price, and was advertised as new. I got it, and it had 2000 hours and some poor schmuck's data on it. Amazon refunded my money, told me to keep the drive, and the drive disappeared from their website. It's been running without incident for several years now.

It's a funny looking thing. 3.5" form factor, but the shiny part where the disk innards live is only half as tall as usual.

But I am not sure what inferences can be drawn between Blue HDDs and Blue SSDs. There are lots of different models sold as "1 TB Blues", like my oddball. Some may have been HGST or even IBM. The SSDs are SanDisks.

I'm less concerned about the individual drive I put in than whether there is some systemic issue. For example, the RAID may notice that the HDDs are substantially slower, and then decide they have faulted. I don't think this is a situation where the user base has a whole lot of experience.
 
  • Like
Likes nsaspook
  • #20
pbuk said:
That's some rig.
Thanks. It kind of grew organically. Today, it has enough capacity for me, is fast - as fast as a local disk, with the server being on a 10 GbE connection, and most importantly, I have never lost a byte of data.

There's little point to upgrading this setup. Making it faster won't help, since it will run up against the networking. And, like I said, it has speeds like a local disk (about 8 Gb/s typical, and 4-5 for long transfers). If I ever need more capacity, I can just swap some disks. These are 6 TB drives, so replacing a pair of 6's with a pair of 12's (the sweet spot in price today) is straightforward. It could also be done with the RAID in use, so there's little impact on my work.
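The in-use capacity swap is straightforward if this pool is ZFS (the resilver/scrub vocabulary suggests it, though the post doesn't say). A sketch with hypothetical names:

Code:
zpool set autoexpand=on tank
zpool replace tank sdb /dev/disk/by-id/new-12tb-1   # resilver onto the first larger disk
zpool replace tank sdc /dev/disk/by-id/new-12tb-2   # then its mirror partner
# the mirror's capacity grows once both members are the larger size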

The DVR RAID is simpler. Its job really is to make sure that I can lose a disk and still watch The Muppet Show ("That wasn't half bad." "No, it was all bad!") while it gets fixed. Losing the data here is not like losing my tax forms (which have multiple backups in addition to being RAIDed). But the RAID exists not so much to preserve data as to preserve uptime.
pbuk said:
Power issues are much less of a factor in the UK

The real issue is thunderstorms. They cause brief power issues - maybe a second, maybe a little less. I don't know if this is my GFI, the power company's, or something else. It's annoying. What can happen is that the computer stays up but gets into a funny state. Not funny ha-ha, but funny as in "how old is this food anyway? It tastes funny." And because these events are correlated, you can get a double fault - a fault while repairing the first fault.

So getting 2 TB of capacity with three 1 TB drives rather than one 2 TB drive has its advantages.
 
  • #21
Having a RAID just for a DVR seems like overkill. And such small drives are odd - the cheapest drives I see around are 4 TB these days.
 
  • #22
Vanadium 50 said:
so replacing a pair of 6's with a pair of 12's (the sweet spot in price today) is straightforward
Not in the UK: I just paid GBP18.50 per TB for 8TB, 12s are nearly GBP27 per TB. Not worth going 6TB to 8TB, but I was on 4TB drives (and the 8s are 7200 RPM vs the 4s 5400).

Vanadium 50 said:
So getting 2 TB of capacity with three 1 TB drives rather than one 2 TB drive has its advantages.
Yes, I didn't realise how much those old guys meant to you. I'd worry about sustained write performance on consumer SSDs, but the sweet spot on IronWolf is 2TB at GBP 78 per TB, and I bet they are as safe as houses: RAID 1?
 
Last edited:
  • #23
pbuk said:
I'd worry about sustained write performance on consumer SSDs
The rate shouldn't be so bad. I have numbers upthread, but you can look at it this way - if an ordinary HDD is good enough, surely an SSD is good enough.

pbuk said:
Not in the UK:
I think that's because the drives need to spin in the opposite direction. :olduhh:

Algr said:
Having a RAID just for a DVR seems like overkill.
So maybe, maybe not. It adds what, $40 to the total cost? And what does it buy you? Really, what it buys you is the ability to keep recording during a drive failure. And we all know that drives fail at inconvenient times, like the middle of the night or when I am out of town.

Do I need 2 TB? Not really - what I really need is 1 + ε TB. But that means 2 TB. And making a 3x1 TB RAID5 is a lot cheaper than a 2 TB mirror, especially since I already own 2/3 of the drives I need.

1.2 TB is a common size in SAS. However, going that route again requires more drives, plus a controller, plus 10K drives are not exactly quiet, plus I don't need the speed. So it's not really that good a choice.
 
  • Like
Likes nsaspook
  • #24
Algr said:
Having a RAID just for a DVR seems like overkill.
A lot of people would say that for any purpose whatsoever, RAID5 is underkill.
 
  • Informative
Likes nsaspook
  • #25
RAID5 shouldn't directly cause data loss, because you should have a backup, but it will affect online data availability when you have to restore a nuked array from that backup after the array dies during a parity rebuild onto the replacement drive.

Increasing RAID redundancy simply reduces the chances of having to fall back on a full restore from backup; it doesn't eliminate the need for a backup.

 
  • #26
I don't consider RAID any kind of substitute for backup. That sounds like a plan that will eventually make me sad.

The issue with RAID5 is surviving a rebuild. Most consumer drives say they have an unrecoverable error every 10^14 bits or so. Drives are usually better than this, but that's what the datasheets say. Suppose my 3x2 main RAID was configured as a 4-drive RAID5 rather than 3 mirrors. That would save me the price of two drives.

Now, suppose I replace a drive and need to rebuild. I need to read 18 TB of data to rebuild the parity. At 10^14 there is about a 70% chance of a second failure, this time during the rebuild, and my pool will be borked.

If I mirror it instead, I am reading only 1/3 as much data, so my failure rate is 3x smaller. Instead of a 70% chance of losing my pool, there is a 70% chance it will survive.
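Those percentages follow from treating the datasheet's 10^14 figure as a constant error rate over the amount of data read. With N bits read during the rebuild,

$$P_{\text{fail}} \approx 1-\left(1-10^{-14}\right)^{N} \approx 1-e^{-N/10^{14}},$$

and N ≈ 18 TB ≈ 1.4×10^14 bits gives roughly 1 - e^{-1.4} ≈ 0.76. Reading a third as much data just divides the exponent by 3. (Real drives usually beat the spec, as noted above.)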

But...

My DVR RAID is small. If you work out the numbers, the only difference in survivability between a mirror and parity is the total volume of data. So the only reason RAID5 is less safe is that there's more data on it. And it's only a little more data. So I would argue it's safe enough - or at least as safe as it's ever going to get.
 
  • #27
nsaspook said:
it's 900GB SAS enterprise drives I buy surplus
Just for fun, I ran some numbers on what would happen if I used these for my RAID.

I have 6x6TB drives (plus a spare) so I would get the same capacity with 40 (!) 900 GB's. This is my main RAID, not the DVR. This would obviously require more controllers, probably more PCI lanes, and some...er..aggressive cable management.

Arranged in 20 (!) two-way mirrors, I'd be looking at something like 6000 MB/s read speed. That would saturate 10 GbE, and it would saturate 40 Gb/s InfiniBand. (Even more PCI lanes there.) It would read an hour-long TV show in under a second.

Seven times as many drives means ~7x as many failures (even transient ones), so rebuilds every 2 months or so. But they'd only take half an hour.

I wouldn't even know what to do with this much speed.
 
  • Like
Likes nsaspook
  • #28
Thunderstorms here last night. During the monthly RAID scan/rebuild, so the worst possible time. Had a drive drop out, and the system did exactly what it should have - stopped what it was doing, attached the spare, rebuilt the array, and continued the scan. After power cycling it, it re-attached the original drive and put the spare back where it found it.

I haven't gotten around to adding the SSD, but my plan is to initially add it as a 3rd mirror drive. If all goes well for a while, then switch from a 3-way mirror to a RAID5. This is technically impossible, but there is also a technical trick.
 
  • Like
Likes nsaspook
  • #29
Things are more complicated than I thought.

I open up the case, unplug what I think is the bad drive, and plug in the SSD. Linux tells me the RAID has one drive, and doesn't see the SSD. Huh.

OK, so I undo what I did. The RAID now sees both drives, and the BIOS sees three: the two working RAID drives and the broken one. Huh.

SMART tells me there's nothing wrong with the "bad" drive. Huh.

I try two SATA-to-USB adapters, Windows and Linux, and nobody even sees the SSD. Huh.

So, I am now operating a 2-drive mirror and running tests on the "bad" drive outside of the mirror. If that looks good, I'll put it in the mirror for a week or so, and if that still looks good, I'll do the Secret Magic Trick to turn it from a 2-way mirror into a RAID5.
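For the drive tests, the usual first pass is a long SMART self-test plus a look at the reallocation counters - a sketch with a hypothetical device name:

Code:
smartctl -t long /dev/sdX        # start the extended self-test (runs inside the drive)
smartctl -l selftest /dev/sdX    # read the result once it finishes
smartctl -A /dev/sdX | grep -i -E 'realloc|pending|uncorrect'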

Now to figure out what is happening with the SSD.
 
  • #30
I see I never got around to reporting the outcome. The SSD went bad. The words "firmware PROM" were spoken, but not by anyone who knew anything. WD slowly replaced the drive, and in the interim I found a 1 TB spinning disk.

So I used the replacement SSD elsewhere, and the DVR happily uses a 3x1 RAID5 configuration.
 
  • #31
pbuk said:
Yes I find some of those numbers a bit odd too -
@pbuk, I looked at the numbers more carefully and actually <shudder> looked at the documentation.

Pure mirrors and stripes behave as expected. Parity RAID sometimes looks wacky. One thing I learned is that the RAID5 implementation cheats. If you have X bytes to store, you would expect it to put X/2 on one drive, X/2 on another, and an X/2 parity block on a third. However, there is a minimum block size. If X is less than the minimum block size, the system mirrors the data on 2 of the 3 drives and there is no parity block.

The storage utilization is still inefficient, just not as inefficient as it could have been: 2 partially filled blocks rather than 3. But it means that performance will be somewhere between a true RAID5 and a mirror.

The next complication is that these are likely 4Kn/512e drives operating with a 512-byte sector size.

The benchmark data is uncompressed, and in real life mine would be compressed. This is a 20-30% effect on most of my data, and in some cases surprisingly good: my backups, which Windows itself compresses, are shrunk by a further 13%. That means 13% higher throughput. On user data it's even better: 36%. Since parity compresses less well than data, this is an additional penalty for parity RAID.

So, while I don't quantitatively understand the parity RAID performance, I am less surprised now.

pbuk said:
I'm particularly surprised the only data for 4 disks is for RAID 6 and RAID 10, surely RAID 5 is the optimal compromise with 4 disks for most situations.

I'd use RAID5 for three drives, and do. But 4? I'd set it up as a pair of mirrors. Yes, I get only 2/3 of the capacity RAID5 would give, but I'll get faster reads, faster writes, a good chance of surviving a double failure, and less intensive/risky rebuilds. Further, if I need more capacity, I can keep the pool up and add only two disks at a time, not all 4.
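For what it's worth, that pair-of-mirrors layout is one line in ZFS, and growing it two disks at a time is another - hypothetical pool and device names:

Code:
zpool create tank mirror sdb sdc mirror sdd sde   # two 2-way mirrors striped together
zpool add tank mirror sdf sdg                     # later: add capacity two disks at a time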
 

FAQ: RAID5 SSD: Pros & Cons of Replacing Failed HDD w/SSD

What are the benefits of replacing a failed HDD with an SSD in a RAID5 array?

Replacing a failed HDD with an SSD in a RAID5 array can significantly improve the read and write performance of the array. SSDs have faster data access speeds compared to traditional HDDs, which can lead to quicker rebuild times and overall better system responsiveness. Additionally, SSDs are more reliable and have a lower failure rate, which can enhance the durability and longevity of the RAID array.

Are there any compatibility issues when mixing SSDs and HDDs in a RAID5 array?

While it is technically possible to mix SSDs and HDDs in a RAID5 array, it is not generally recommended. The performance of the RAID array will be limited by the slowest drive, which in this case would be the HDDs. This can negate the performance benefits of the SSD. Additionally, there may be firmware and controller compatibility issues that could complicate the array's operation and maintenance.

How does the cost of replacing a failed HDD with an SSD compare to replacing it with another HDD?

SSDs are generally more expensive per gigabyte compared to HDDs. Therefore, replacing a failed HDD with an SSD can be significantly more costly, especially if the storage capacity required is large. However, the performance gains and increased reliability may justify the higher initial investment, depending on the specific use case and requirements.

What impact does replacing an HDD with an SSD have on the RAID5 array's rebuild time?

Replacing an HDD with an SSD in a RAID5 array can considerably reduce the rebuild time. SSDs have faster read and write speeds, which allows the RAID controller to reconstruct the data more quickly compared to using an HDD. This can minimize the window of vulnerability during which the array is susceptible to additional drive failures.

Are there any potential drawbacks to using SSDs in a RAID5 configuration?

One potential drawback of using SSDs in a RAID5 configuration is the issue of write amplification, which can reduce the lifespan of SSDs. RAID5 involves a lot of parity calculations and write operations, which can exacerbate the wear on SSDs. Additionally, the cost of SSDs can be a significant factor, and the overall performance benefit may be limited by other bottlenecks in the system. Finally, managing a mixed environment of SSDs and HDDs can add complexity to maintenance and troubleshooting.
