Profile Details

Recent updates
  • Tony Ellis

    Just thought about doing some research on SATA cards before going to bed (it's currently 2:20 am) :(
    Came across this... https://www.jethrocarr.com/2013/11/24/adventures-in-io-hell/ - bit different to your problems but some of the comments at the bottom are telling...

    Seems like the Marvell SATA chipsets/drivers might not be the best if this is anything to go by... I have a Silicon Image 3114 (and still use it in a backup server that's only powered up once a week for backup updates) so your PCI card is probably OK (provided the manufacturer was careful laying out the PC board traces to minimise cross-talk - mine is a different brand to yours). Just make sure the SATA cable is good quality with a nice tight fit. Mine has no notches for the clips... However, based on the URL above there are suspicions about PCIe Marvell-based cards - perhaps change yours for a decent one with no Marvell chipset! Never used Marvell myself, so no first-hand experience there.

    My SI 3114 card has 1 drive attached; the motherboard has 4 SATA ports with 4 drives. The 5 drives are combined in a software Raid 5 array. The OS resides on 2x IDE drives that are mirrored in Raid 1. All my drives support TLER (ERC), set to a 7-second timeout.

  • Tony Ellis

    Hi Leon. Once a UUID is assigned to each array during raid creation, it doesn't matter what the /dev/sdx order is if you specify by UUID in mdadm.conf... mdadm scans all drives looking for the UUIDs and uses that to ascertain which drive is which.
    Did you create the script to change the disk time-out and place it in, for example, /etc/rc.d/rc.local so it runs when booting? Those drives of yours are not suitable for use in Raid 5/6 without doing that.
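
    A minimal sketch of such a boot script, along the lines of the one on the raid wiki's Timeout Mismatch page (the drive range /dev/sd[b-k] is an assumption - substitute your own raid members):

    for d in /dev/sd[b-k]; do
        if smartctl -l scterc,70,70 $d > /dev/null; then
            echo "$d: ERC set to 7 seconds"
        else
            # no ERC support - raise the kernel command timeout well above the drive's own retry time
            echo 180 > /sys/block/${d##*/}/device/timeout
            echo "$d: no ERC support, kernel timeout raised to 180 s"
        fi
    done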

    No idea how you are setting the drives up - never watch a "How-To" on YouTube or anything else similar - can read far faster than any narrator can speak and thus learn a lot more in the same period of time... and you can always refer to any written part instantly. A lot better than having to replay something to make sure you heard and understood a certain passage correctly... Initially, several years ago, I downloaded and used the Red Hat Administration Manuals and went from there... Studied the complete set, beginning to end, while going to/from work on the train.

    If you continue to have problems - then it might be time to look at the hardware... would be inclined to ditch the two budget PCI/PCIe controllers and get a decent 8-port one with proper Linux support, e.g. a modern LSI, assuming one of your 8x or 16x PCIe slots is vacant... Are you using good quality SATA cables with clips? The old original ones with no clips are notorious for creating intermittent connections, as are cheap controllers that don't have the little notch for the clip to latch onto. Wouldn't be surprised if your two add-on controllers fall into that category. No clips means you are relying on friction - you might get away with it - but the number of drives you have provides more opportunity for vibration and connector movement...

  • Nit: "systemctl status firewall.service", not "systemctl firewall.service status".
    service and systemctl have their parameters in opposite orders - catches me out too :)
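
    For reference, the two orderings side by side (using the unit name from the nit above):

    systemctl status firewall.service   # systemd: action first, then unit
    service firewall status             # legacy SysV wrapper: service name first, then action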

  • Create is dangerous in that you need to specify the disks, in the create command, in the same order they were in when the raid broke. Since you did a grow they may not be in strict alphabetical order any more - do not write anything to the array - mount it read-only until you have verified that the data in large files is OK. Since you have 10 drives the number of possible combinations is enormous. See the section "File system check" at https://raid.wiki.kernel.org/index.php/Recovering_a_damaged_RAID - you really should have done this with overlays - I pointed to this procedure before. For all you know the correct order may be /dev/sdc /dev/sdf /dev/sdd... etc. On the other hand you may have been very lucky...
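
    As a sketch of the read-only check (device, mount point and file names are assumptions):

    mount -o ro /dev/md0 /mnt/raid          # read-only, so nothing gets written to the array
    md5sum /mnt/raid/some-large-file.iso    # read a few large files end to end - I/O errors or garbage suggest the disk order is wrong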

    Comment out the entry in fstab (I assume you put it back) and do not add it back until you are sure the array will always assemble on a boot. In the meantime do a manual mount if and when the array is assembled, then reboot. Can you stop /dev/md127 and /dev/md0, and will it now assemble with a --detail --scan?
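
    Something like this, where the fstab line and mount point are assumptions:

    # /etc/fstab - leave the array line commented out for now:
    # /dev/md0   /mnt/raid   ext4   defaults   0 0
    mount /dev/md0 /mnt/raid    # manual mount once the array has assembled cleanly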

  • Fingers crossed - :)

    What speed is it rebuilding?

    What did you do to get the array started?
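
    For reference, the rebuild speed and the kernel's rebuild speed limits can be read like this:

    cat /proc/mdstat                         # the recovery line includes something like speed=80000K/sec
    cat /proc/sys/dev/raid/speed_limit_min   # minimum rebuild speed the kernel aims for (KB/s)
    cat /proc/sys/dev/raid/speed_limit_max   # maximum rebuild speed allowed (KB/s)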

  • Thanks Leon - minor quibble: the SATA4000 is a PCI card using the Silicon Image 3114 controller - it is NOT PCIe
    see http://www.sunix.com.tw/product/sata4000.html
    The SATA2600 is PCIe http://www.sunix.com/product/sata2600.html - so you have 1x PCI and 1x PCIe

    So we have quite a mixture here :)
    Drives are SATA III 6Gb/s (Compatible with SATA I and SATA II)
    Intel Motherboard Controllers SATA II 3Gb/s
    SATA2600 Marvell 91XX SATA III 6Gb/s (transfer will be limited as it has only a 1x PCIe connector)
    SATA4000 SI 3114 SATA I 1.5 Gb/s (limited even more since motherboard slot is 33 MHz PCI)

    Should work - but the PCI card is creating a bottleneck :(
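
    Rough numbers, assuming a standard 32-bit/33 MHz PCI slot and a PCIe 2.0 x1 link for the Marvell card:

    32-bit x 33 MHz PCI  ~ 133 MB/s theoretical, shared by everything on that bus
    PCIe 2.0 x1          ~ 500 MB/s
    SATA II              ~ 300 MB/s per port
    SATA III             ~ 600 MB/s per port

    So a single modern drive can come close to saturating the PCI card on its own during sequential transfers.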

  • Leon - a question about the hardware
    [quote]
    The board had 6x SATA port and i have 2 PCIe cards to give me an additional 6 SATA ports
    [/quote]
    Make and Model of the "2 PCIe cards" please...
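
    If you're not sure, lspci will show what the kernel sees on those cards:

    lspci | grep -i -E 'sata|raid'   # lists the SATA/RAID controllers with vendor and model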

  • Thanks Leon... OK - those drives are not suitable for use in a Raid 5 or 6. See the section "Timeout Mismatch" at https://raid.wiki.kernel.org/index.php/Timeout_Mismatch - you really need to get that script working first before anything else - you cannot afford a drive to be kicked out at this stage...

    Then use smartctl to check that every raid drive does not have a Current Pending Sector count warning or other serious error - this is important if you end up using only 8 of your 10 drives to recover the array - you don't want a drive kicked for any reason. Run the long test on every drive and check the output. You can run the test on all drives concurrently. Lots of help on this on the web, e.g. https://www.linuxtechi.com/smartctl-monitoring-analysis-tool-hard-drive/
    An example showing Current Pending Sector:- https://community.wd.com/t/help-current-pending-sector-count-warning/3436/3
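
    A minimal sketch, again assuming /dev/sd[b-k] are the raid members:

    for d in /dev/sd[b-k]; do smartctl -t long $d; done              # start the long self-test on every drive at once
    smartctl -l selftest /dev/sdb                                    # once finished, check the self-test log per drive
    smartctl -A /dev/sdb | grep -i -E 'Current_Pending|Reallocated'  # pending/reallocated sector counts should be 0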

    Then...
    See https://raid.wiki.kernel.org/index.php/Assemble_Run
    Are the event counts for the 'good' drives the same or very very close?
    Believe you have a Raid 6 - so you should be able to recover if 8 of the 10 drives are OK - then add the other 2 later.
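
    To compare the event counts, something along these lines (partition names are assumptions):

    mdadm --examine /dev/sd[b-k]1 | grep -E '/dev/sd|Events'   # the Events counts should match or be very close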

    Otherwise, it might mean working through https://raid.wiki.kernel.org/index.php/Recovering_a_damaged_RAID - you might also want to contact the experts on the raid mailing list...

    Do we assume you created the raid and then used grow without making a viable backup first? If so, that is highly dangerous. You don't use raid as a backup - it is there to guard against one kind of hardware failure: a drive failure. There are lots of failure modes that raid doesn't guard against, such as file corruption (software problem, power drop, etc.), human error (deleting files by mistake), viruses and other malware, etc. With a backup the quickest way would be to create the raid again and restore from the backup, having a procedure in place to ensure your backups are complete and useable...

  • As for the raid - did you stop /dev/md127 and /dev/md1 (assuming /dev/md1 is your raid device) before doing the "mdadm --detail --scan"?

    If not, then try again stopping both arrays first...

    # cat /proc/mdstat # show us output
    # mdadm -S /dev/md127 # show us output
    # mdadm -S /dev/md1 # show us output (assuming /dev/md1 is your array)
    # mdadm --detail --scan # show us output

    If the --scan fails after using both of the 'stop raid' commands, try

    # mdadm -vv --assemble --force /dev/md1 /dev/sd[abcd...]1

    that's two 'v's, not a 'w' - show us the output - where "abcd..." are all ten correct drive letters for your raid array, and assuming you are using partition 1 for raid, which appears to be the case from your output...

    Please do ***NOT*** use the 'create' command yet - that is dangerous and a ***last*** resort only

    Here's the output from a simulated failure

    Much good information at https://raid.wiki.kernel.org/index.php/Linux_Raid

    Tony... http://www.sraellis.com/

  • OK - let's deal with the disks first - and I am concerned... This is from the Seagate documentation - it doesn't give a TLER (ERC) specification...


    Barracuda XT drives—The performance leader in the family, with maximum capacity, cache and SATA performance for the ultimate in desktop computing

    Application Desktop RAID

    Cannot find a strict definition - but "desktop raid" often means Raid 0 and Raid 1 ***ONLY*** - why? Because often the drives don't support TLER (ERC) - and that's a big drawback when used in Raid 5 and Raid 6 - search the web for all the gory details. Basically, on an error occurring, the drive should time out before the software timeout does - this lets the raid initiate error recovery. If the software times out first (which is what happens with 'desktop' drives), the raid thinks the disk is 'broken' and kicks it out of the array. With raid you want the disk to time out fast, as this prevents 'hangs' for the users, and the raid can recover the data using data from the other drives to re-construct it, then re-write the correct data to the 'failing' one. With 'desktop' environments there is only one copy of the data and it lives on only that one drive - so the drive will try desperately, for ages if necessary, to recover the data - the user will experience a 'hang' while this takes place...

    To test for TLER (ERC) see below - please give the results from your drives. If they do not support TLER (ERC) - then we need to change the timeouts within the Linux software disk tables...
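
    A minimal sketch of that check, assuming /dev/sdb is one of the raid drives (repeat for each member):

    smartctl -l scterc /dev/sdb                 # reports the SCT Error Recovery Control read/write timeouts, if supported
    smartctl -l scterc,70,70 /dev/sdb           # if supported, set read/write ERC to 7.0 seconds (units are 0.1 s)
    echo 180 > /sys/block/sdb/device/timeout    # if not supported, raise the kernel command timeout instead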