
I did a stupid thing: I built a system using SMR (Shingled Magnetic Recording) hard drives. These drives are large and cheap but very slow on sustained writes!

ClearOS 7 uses LVM, and one of LVM's features is adding an SSD to a logical volume as a cache device. This is something I could throw into a running system, but I don't want to break it. Has anyone here done this on ClearOS?

I found this article, which makes it seem easy:
Wednesday, May 12 2021, 08:25 PM

Accepted Answer

Saturday, May 22 2021, 05:36 PM - #Permalink
Solved for a ClearOS 7 VM on Proxmox.

The VM has 2 cores, 4GB RAM, 320GB HDD, 32GB SSD



pvcreate /dev/sdb
vgextend clearos /dev/sdb
lvcreate -L 30G -n cache clearos /dev/sdb
lvcreate -L 1G -n meta clearos /dev/sdb
lvconvert --type cache-pool --poolmetadata clearos/meta clearos/cache
lvconvert --type cache --cachepool clearos/cache --cachemode writeback clearos/root

dracut -v -f --regenerate-all




I've not tried this on a physical system, and it may break something if used on a live system, so be careful. Enter the commands one by one. The two lvconvert commands warn you that data will be lost; it won't be, just answer Yes. The special sauce that makes the system still boot is the last command: dracut rebuilds the initramfs so that boot-time LVM activation knows about the new cache layout. It takes a little while to run.
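
Before rebooting, it is worth confirming that the cache actually attached. A quick sanity check, assuming a reasonably recent LVM2 (field names are listed by `lvs -o help`):

```shell
# segtype should show 'cache' for the root LV once conversion is done.
lvs -a -o lv_name,vg_name,segtype,devices clearos

# Cache utilisation counters for the cached LV.
lvs -o lv_name,cache_total_blocks,cache_used_blocks,cache_dirty_blocks clearos/root
```

A non-zero cache_dirty_blocks count is normal in writeback mode; it is the data still waiting to be flushed to the slow origin disk.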

If this works well then it would be cool to have this as some kind of wizard that can be run after slapping a small SSD in a system.

It does speed up disk writing. I have a 44GB file that copies at 90MB/sec over the LAN until it reaches about 30GB, then it slows to 40MB/sec, which is the raw speed of my Proxmox hard drives. I want it mainly for write caching because reading is fast enough on a 1-gig LAN.

Next I will try it on a physical system.
Responses (8)
  • Sunday, May 23 2021, 11:24 AM - #Permalink
    Georgina wrote:

    A ZFS file system is the very worst fs to use with SMR drives

    Thank you for your interest. There is a lot of info in these posts. Read it again and you will see that I'm using SMR drives on a RAID card on a ClearOS system I built. The purpose of my experimentation is to see if an SSD will speed up writes on a slow ClearOS system. It does.

    My Proxmox system is where I am running the VMs. It happens to have a slow file system because I used 8 cheap drives in ZFS RaidZ2.
    I also have a much faster TrueNAS system using 4 of these drives in RAID10 with Optane caching and it's super fast.
    I have now physically built a test ClearOS system with an old Seagate 750GB and an old Samsung 840 EVO 120GB SSD. This is also super fast at both read and write, unlike the SMR system I will speed up at some point.
  • Georgina, Sunday, May 23 2021, 03:50 AM - #Permalink
    A ZFS file system is the very worst fs to use with SMR drives
  • Saturday, May 22 2021, 08:13 PM - #Permalink
    Wayland Sothcott wrote:

    It does speed up disk writing. I have a 44GB file that copies at 90MB/sec over the LAN until it reaches about 30GB, then it slows to 40MB/sec, which is the slow speed of my Proxmox hard drives. I want it mainly for write caching because reading is fast enough on a 1-gig LAN.

    Just to prove the point I tried the above test copying the 44GB file to the server but without the SSD caching. Terrible! After 1.5GB speed had dropped from 90MB/sec to 30MB/sec. As I write this it's copied 4.7GB and running at 18MB/sec. It's an extreme test because the ClearOS VM is sitting on a very slow ZFS file system.
  • Saturday, May 22 2021, 03:26 PM - #Permalink
    Georgina wrote:

    OK - already RAID 1.
    Have you seen this?


    You have pointed me to a clue:
    It turns out that there is a missing dependency on the thin-provisioning-tools package that contains the cache_check binary. Furthermore, in order to be able to cache the root file system, you need to manually configure initramfs-tools to include this binary (and the C++ library that it requires) in the initrd. That out of the way, things worked smoothly.


    I don't know whether thin provisioning is something ClearOS uses, but we are caching the root file system, which is exactly the case that quote describes. I did see something about 'initramfs-tools' but wasn't clear when it would matter; that's the Debian/Ubuntu tool, and the CentOS-family equivalent is dracut.

    Looking up how to do this on CentOS might be the answer, because ClearOS is a CentOS derivative.

    Yeah, as for using a RAID card, that's just because it's an easy, reliable way to do RAID. I've done it with LVM + mdadm on a ClearOS 7 system with 4x 4TB drives, and it works beautifully, giving multiple different RAID levels across the four drives at the same time. For instance, the boot and root partitions are on all four drives, so any one of them can boot. It should be possible to get very fancy indeed with a couple of SSDs thrown in.
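
    On the CentOS family (which ClearOS derives from), the cache_check binary ships in the device-mapper-persistent-data package rather than Debian's thin-provisioning-tools, and dracut plays the role of initramfs-tools. A minimal sketch of covering that dependency, assuming the stock repositories:

    ```shell
    # cache_check (and thin_check) come from device-mapper-persistent-data
    # on CentOS/RHEL-family systems such as ClearOS.
    yum install -y device-mapper-persistent-data

    # Rebuild every initramfs so the cache tools and dm-cache support
    # are available at boot (the dracut equivalent of updating the
    # initrd via initramfs-tools on Debian).
    dracut -v -f --regenerate-all
    ```

    If cache_check was already installed, the dracut run alone is the part that matters.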
  • Georgina, Saturday, May 22 2021, 02:34 AM - #Permalink
    OK - already RAID 1.
    Have you seen this?
  • Saturday, May 22 2021, 12:58 AM - #Permalink
    I'm using a 3ware RAID card in RAID 1. The drives are the cheap 2TB Seagate ST2000DMZ08 ones they sell on Amazon.
    I've built with this card before and the performance has been excellent. However, I think Seagate have stitched me up on these new drives.

    The RAID card does have a write cache but I turned it off.

    I'm most interested in getting the LVM SSD cache working. It seems promising but doesn't survive a reboot.
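
    If a cached root ever leaves the system unbootable (or the SSD needs to come out), the cache can be detached again; `--uncache` flushes dirty blocks back to the origin and removes the cache pool. A sketch, using the volume names from earlier in this thread:

    ```shell
    # Detach the cache from the root LV. In writeback mode this first
    # flushes dirty blocks to the slow origin disk, which can take a while.
    lvconvert --uncache clearos/root

    # Rebuild the initramfs so boot-time activation matches the new layout.
    dracut -v -f --regenerate-all
    ```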
  • Georgina, Saturday, May 22 2021, 12:12 AM - #Permalink
    You did not indicate which RAID level and how many drives you are using. SMR hard drives are particularly bad when used with parity RAID. Maybe consider adding a non-SMR drive and using RAID 1 or RAID 10 with the "--write-mostly" option; this would at least let most reads stay fast while the writes are in progress. See the RAID Wiki for more.
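
    A sketch of what that looks like with mdadm; the device names are hypothetical (sda = fast non-SMR disk, sdb = slow SMR disk):

    ```shell
    # --write-mostly marks the SMR member so the kernel prefers the
    # fast disk for reads; writes still go to both members.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
          /dev/sda1 --write-mostly /dev/sdb1

    # With a write-intent bitmap, --write-behind additionally lets writes
    # to the write-mostly member lag behind, so slow SMR writes do not
    # stall the array.
    mdadm --create /dev/md1 --level=1 --raid-devices=2 \
          --bitmap=internal --write-behind=4096 \
          /dev/sda2 --write-mostly /dev/sdb2
    ```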
  • Friday, May 21 2021, 10:14 PM - #Permalink
    I'm running a test ClearOS 7 VM to try out caching.
    My VM is on the hard drive and I added a virtual 32GB SSD.
    Then I entered these commands as root over SSH:
    pvcreate /dev/sdb
    vgextend clearos /dev/sdb
    lvcreate -L 30G -n cache clearos /dev/sdb
    lvcreate -L 1G -n meta clearos /dev/sdb
    lvconvert --type cache-pool --poolmetadata clearos/meta clearos/cache
    lvconvert --type cache --cachepool clearos/cache --cachemode writeback clearos/root


    It gave me the final message:
    Logical volume clearos/root is now cached


    Testing indeed shows the caching is speeding up disk reads and writes; however, it's impossible to reboot. The progress bars slowly creep across the screen and then it drops me into the emergency shell. The logs indicate it can't see the clearos volumes any more. It's a very similar failure to when you flip the BIOS SATA mode (e.g. to AHCI) on a system that was installed with the opposite setting.

    I've not done this on real hardware, but I expect the same problem. I've been rolling back to an earlier VM snapshot every time I screw it up.
    Anyone got any suggestions?

    I suppose I could create a separate storage volume group and cache only that, leaving clearos/root uncached.
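
    Caching a separate data volume group would sidestep the initramfs problem entirely, since the root LV stays a plain linear volume. A sketch with hypothetical device names (sdc = slow SMR data disk, sdd = SSD):

    ```shell
    pvcreate /dev/sdc /dev/sdd
    vgcreate storage /dev/sdc
    vgextend storage /dev/sdd

    # Data LV on the SMR disk; cache and metadata LVs on the SSD.
    lvcreate -l 100%PVS -n data storage /dev/sdc
    lvcreate -L 30G -n cache storage /dev/sdd
    lvcreate -L 1G -n meta storage /dev/sdd

    lvconvert --type cache-pool --poolmetadata storage/meta storage/cache
    lvconvert --type cache --cachepool storage/cache --cachemode writeback storage/data

    mkfs.xfs /dev/storage/data
    ```

    Because the root file system is untouched, no dracut run should be needed, and a cache failure would only affect the data volume.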