I will answer my own post.
I found the solution after hours of web browsing.
The solution is simple, but difficult to find.
My mistake was assuming that the RAID system of my motherboard behaved like the RAID systems I had met before, and should be used the same way to install a Linux distribution.
I mean: create an array from the RAID manager and hand it to the distribution.
This is not the way CentOS wants it, and I think this comes from the RAID manager itself, i.e. the way RAID1 is managed on my motherboard.
The solution was to let the distribution manage everything.
I mean I left all my HDDs visible as plain/classical disks and created the RAID arrays with the Linux software RAID manager.
This is done quite easily with the installer:
* in the HDD selection screen of the installer, check all the HDDs you want to use for the RAID arrays and choose to configure the partitioning manually:
=> in my case I checked 2 HDDs to build RAID1 arrays: I wanted one RAID1 array for the system (/), another one mounted on /var, and one more for swap
* remove all the mount points listed on the left side of the partition definition window, if there are any
* create a new mount point; you will have to give a name, a mount point and a size
* you will then be able to select RAID1 as the device type in the window defining that mount point
The installer will create everything needed to reach the requested size and to make the RAID1 array available at boot (if your partition contains the boot area).
The installer will also take care of installing the boot loader where it has to be installed.
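For reference, the same layout can also be expressed as an anaconda kickstart fragment for an automated install. This is only a sketch of the idea: the disk names (sda/sdb), the sizes, and the xfs filesystem are my assumptions, so adjust them to your hardware:

```
# Assumed disks: sda and sdb; assumed sizes in MiB - adjust to your system.
# One "part raid.*" member per disk, then one "raid" line per md array.
part raid.11 --size=30720 --ondisk=sda
part raid.12 --size=30720 --ondisk=sdb
part raid.21 --size=20480 --ondisk=sda
part raid.22 --size=20480 --ondisk=sdb
part raid.31 --size=4096  --ondisk=sda
part raid.32 --size=4096  --ondisk=sdb

raid /    --device=md0 --fstype=xfs  --level=RAID1 raid.11 raid.12
raid /var --device=md1 --fstype=xfs  --level=RAID1 raid.21 raid.22
raid swap --device=md2 --fstype=swap --level=RAID1 raid.31 raid.32
```

This declares the same three RAID1 arrays the installer UI builds interactively; anaconda then handles the boot loader as described above.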
That's all. I have tested it several times and it worked every time.
Hope this may help someone. This is not related to ClearOS but to CentOS; but as I posted the question here, I wanted to answer it here as well.
A bit fed up with reinstalling my home server over and over, I wanted to see if ClearOS could help me make it easy and clean ...
I spent several hours trying to understand why my RAID1 array was not proposed by the install UI (for me, a deep dive into anaconda and blivet).
I have an ASUS M4A78 Pro AMD 780G motherboard that provides a native RAID feature, so the OS should see this RAID1 array just as a single disk.
I know this may be fake-HW RAID, but it works with all distributions (including RedHat-based ones).
If I paused the install at the disk recognition stage, I was able to log into a shell and mount all the partitions defined on the RAID drive.
I was also able to remove them (create a brand-new partition table) and rebuild them, all things that show it is not a module issue or something like that.
The system itself manages my RAID1 disk perfectly, without any limits, errors, warnings, etc ...
So I am quite sure there is a mechanical issue somewhere in anaconda or blivet, which seem a bit complex (to me) in their "devicetree" management.
I had a look at the Python code, and I see in the logs that the disk is detected and considered a disk, and then blivet (as far as I can see) simply removes it from the list and hides it, so that I can't even re-add it manually through the UI.
Even after 4 hours, I was not able to understand why it runs this way, adding entries, hiding them, removing them ... I was lost ...
In the end I mounted the partitions and declared them in /etc/fstab, as the Python code seems to use this as a source, but that did not fix the issue.
I saw on the web that this may come from CentOS, as the CentOS installer had similar difficulties in the past.
So I also tried the most recent ISO, hoping the issue had been fixed ...
But I'm still stuck, and I'm sad not to be able to even install ClearOS.
So if someone has a clue to share with me, I would be happy.
Maybe a way to force a device to appear in the device list in the UI?
I am not a newbie in Linux, even if I don't consider myself an expert in all layers such as udev, anaconda, etc ...
But I can edit Python code if someone has a lead or a post I can use to start a new session, because I am still interested in ClearOS ...