Installing ClearOS with LVM and RAID
For advanced installations, you may want to configure your ClearOS server with RAID (Redundant Array of Independent Disks) support, with LVM (Logical Volume Manager) support, or perhaps with the best of both worlds. Please note that we do not usually recommend LVM because we have found that it can often add an additional layer of complexity, especially when dealing with compromised data. If you know what you are doing and would like to proceed anyway, this is the guide for you.
RAID gives you the ability to stack disks or partitions together. Using software RAID on Linux allows you to combine partitions in ways that optimize speed, provide redundancy, or reduce costs. The RAID options available on ClearOS are as follows.
RAID 0 is called a volume set, or striping. It allows you to combine multiple partitions into a single, large volume. This can, if properly configured, increase disk I/O performance. However, RAID 0 carries increased risk: if any member of the RAID fails, the entire volume is lost.
RAID 1 is called mirroring. With RAID 1, the contents of one partition are mirrored to the other. Whenever the system writes to the volume, it writes to both disks. Whenever a read operation occurs, it occurs from the primary disk only. RAID 1 has the performance of a single drive by itself, but it provides increased reliability because the volume is still functional even if one of the RAID members fails.
RAID 5 is called striping with parity. With RAID 5, the data is spread across the volumes in the set and a parity bit is generated for each block of data. RAID 5 is as extensible as RAID 0 but provides the assurance that if one member of the set fails, the volume is still sound although performance is degraded.
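For reference after installation, the three RAID levels above map directly onto mdadm invocations. This is a minimal sketch; the device names (/dev/md0, /dev/sda1, and so on) are made-up examples, not values the installer will use on your system.

```shell
# Illustrative mdadm commands (run as root on already-partitioned disks).
# All device names are placeholders -- substitute your own partitions.

# RAID 0 (volume set / striping): capacity and speed, no redundancy
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdc1 /dev/sdd1

# RAID 1 (mirroring): survives the loss of one member
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# RAID 5 (striping with parity): survives one failure, needs 3+ members
mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/sdc2 /dev/sdd2 /dev/sde2

# Watch the arrays build
cat /proc/mdstat
```

These commands require root and real (or loopback) block devices, so treat them as a reference rather than something to paste verbatim.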
LVM can add flexibility to environments that need pliable volumes that can be manipulated. According to Wikipedia, LVM can:
Resize volume groups online by absorbing new physical volumes (PV) or ejecting existing ones.
Resize logical volumes (LV) online by concatenating extents onto them or truncating extents from them.
Create read-only snapshots of logical volumes (LVM1).
Create read-write snapshots of logical volumes (LVM2).
Stripe whole or parts of logical volumes across multiple PVs, in a fashion similar to RAID 0.
Mirror whole or parts of logical volumes, in a fashion similar to RAID 1.
Move online logical volumes between PVs.
Split or merge volume groups in situ (as long as no logical volumes span the split). This can be useful when migrating whole logical volumes to or from offline storage.
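Several of the capabilities listed above correspond to short LVM command sequences. The sketch below shows the basic workflow; the volume group and logical volume names (vg0, lv_data) are hypothetical examples.

```shell
# Illustrative LVM workflow (run as root). All names are placeholders.

pvcreate /dev/md2                 # mark a device as a physical volume
vgcreate vg0 /dev/md2             # create a volume group from it
lvcreate -L 10G -n lv_data vg0    # carve out a 10 GB logical volume
mkfs.ext3 /dev/vg0/lv_data        # format it

# Resize the volume group and logical volume online later
vgextend vg0 /dev/md3             # absorb another physical volume
lvextend -L +5G /dev/vg0/lv_data  # add 5 GB to the logical volume

# Read-write snapshot (LVM2)
lvcreate -s -L 1G -n lv_snap /dev/vg0/lv_data
```

After growing a logical volume you would also grow the filesystem on it (for ext3, with resize2fs) before the extra space becomes usable.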
Please note that major improvements in the Multi-Disk module for Linux have brought the ability to resize RAID arrays. This means that the primary benefit of using LVM is now lost, and you may want to simplify your life by using RAID alone.
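As a sketch of that resizing capability (device names here are examples, not real output from any system), growing a RAID array with mdadm looks like this:

```shell
# Illustrative only -- device names are placeholders.
mdadm --add /dev/md1 /dev/sdf1            # add a new member to the array
mdadm --grow /dev/md1 --raid-devices=3    # grow the array onto the new member
resize2fs /dev/md1                        # then grow the ext3 filesystem on it
```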
Implementing RAID on ClearOS
ClearOS supports RAID through the Multi-Disk manager built into the Linux kernel, and LVM through the LVM kernel module. Utilities and tools for configuring RAID and LVM are available from the command line in ClearOS 5.1. Utilities for flagging partitions as LVM are available in the installation process, but you cannot assign sub-partitions. This is by design in the current version, to discourage the broad use of LVM.
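Once the system is running, the state of both subsystems can be inspected from the command line with a few read-only commands (the md device name below is an example):

```shell
cat /proc/mdstat         # kernel's view of all md (RAID) arrays
mdadm --detail /dev/md0  # per-array status for one array (example name)
pvs                      # LVM physical volumes
vgs                      # LVM volume groups
lvs                      # LVM logical volumes
```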
Boot RAID 1
Setting up RAID during the installation
When you install ClearOS, you will be prompted to allow the system to automatically configure the volumes or to manually configure the partitions. If you are setting up LVM and/or RAID, you must select 'I will do my own partitioning' when prompted at the Partitioning section of the installation.
After selecting this option, you will not be immediately taken to Disk Druid but will first be asked to select which modules you intend to install. After this, the install switches to the next stage, which includes the partitioning tool, Disk Druid.
If you are using new disks on this system, you may be prompted to initialize them. Be advised that doing so can delete all data, especially if the disk was formatted with an unsupported master boot record or partition scheme. You will be asked to initialize each disk in turn.
You will also be asked whether you want the system to automatically select a default partitioning method or whether you want a custom layout, and you will be prompted for which disks you would like to use; by default, all disks are selected. Select 'Create Custom Layout' and then select OK.
Installing RAID with Disk Druid
For this next section, I will use the example of a system that has 5 disks: two of the disks are 8 GB in size and three are 20 GB in size. The names and order of the disks are listed in the table below. The goal for this machine is a typical low-volume web application server that needs a database with a bit of performance and disk I/O capability.
| Volume || Device Name || Size
| 0 || /dev/sda || 8GB
| 1 || /dev/sdb || 8GB
| 2 || /dev/sdc || 20GB
| 3 || /dev/sdd || 20GB
| 4 || /dev/sde || 20GB
For our partitions, I want to configure a 120 MB /boot partition in RAID 1, two 1 GB swap partitions, about 7.5 GB or more for my / (root) partition, 15 GB of RAID 0 on which I intend to place a database at /store/data1, about 1 GB for /tmp, and 30 GB for my /var partition, which will contain, among other things, my web content. I want my database and web content on LVM so I can grow them later, but I'm pretty sure I won't need to increase the overall core operating system size, so plain old RAID will do for / and /boot.
Here are the partitions in a table.
| Partition || Purpose || Type
| /boot || Contains GRUB, kernel, and initrd information || RAID 1
| / || Contains system binaries, users folders (no real users) and configuration files || RAID 1
| swap1 || First swap space || SWAP
| swap2 || Second swap space || SWAP
| /var || Contains log files, and web content || RAID 5 on LVM
| /store/data1 || Database files || RAID 0 on LVM
| /tmp || Temporary files || RAID 0 on LVM
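If you ever need to rebuild part of this layout by hand after installation, the "RAID 5 on LVM" rows in the table above boil down to creating the array first and then layering LVM on top of it. A minimal sketch for the /var volume, with hypothetical device, group, and volume names:

```shell
# Sketch: RAID 5 array, then LVM on top of it, mounted at /var.
# All device, group, and volume names here are examples.
mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/sdc2 /dev/sdd2 /dev/sde2
pvcreate /dev/md2                  # the whole array becomes one physical volume
vgcreate vg_var /dev/md2           # volume group backed by the RAID 5 array
lvcreate -L 30G -n lv_var vg_var   # 30 GB logical volume for /var
mkfs.ext3 /dev/vg_var/lv_var
mount /dev/vg_var/lv_var /var
```

The installer performs the equivalent steps for you; this is only to show how the pieces stack.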
To get started, we will make the /boot partition on the first two disks.
Select New. Leave the mount point blank for now. Tab to the file system type and select 'Software RAID'. Tab to Allowable Drives and deselect all drives except sdb. Tab to Size and enter 120. Leave the next section set to 'Fixed Size'. Tab to 'Force to be a primary partition' and check the box. Tab to OK and press Enter.
Repeat the process for the other half of the /boot RAID partition as above, but deselect all drives except sda instead.
Your menu should appear like the above image, with a size of approximately 120 MB. It won't be exactly 120 MB because Linux rounds the size to the closest cylinder boundary for the drive. If you have made an error or want to change the size, tab to the drive list, highlight the partition, and select Edit or Delete. You should now see two partitions, sda1 and sdb1. The Type will read 'software R' (this field is limited to 10 characters in text mode).
To make these partitions a RAID volume, tab to the RAID button and press Enter, then choose 'Create a RAID device'. Set the mount point to /boot, leave the file system type as ext3, set the RAID level to RAID1, and make sure both RAID members (sda1 and sdb1) are selected. Tab to OK and press Enter. Disk Druid will now list the new RAID device with /boot as its mount point. Repeat this overall pattern, creating the software RAID member partitions and then combining them into a RAID device, for the remaining partitions in the layout above.