FAI install with huge JBODs present - disk/by-id?

Steffen Grunewald steffen.grunewald at aei.mpg.de
Tue Dec 17 16:45:56 CET 2024


On Tue, 2024-12-17 at 14:02:17 +0100, Thomas Lange wrote:
> >>>>> On Tue, 17 Dec 2024 13:52:22 +0100, Steffen Grunewald <steffen.grunewald at aei.mpg.de> said:
> 
>     > Can I use the full-fledged path, e.g.
>     >   /dev/disk/by-path/pci-0000:06:00.0-ata-..0
>     > in all places? This seems to work for the disk_config line itself,
>     > but for mdraid definitions?!
> I never tried it, but IIRC disk-by-... should work in the disk_config
> line but maybe only there.

That's what I had tried in the meantime, and it works, but I hesitated to write
something like

raid1 / /dev/disk/by-path/pci-0000:06:00.0-ata-1.0-part4,/dev/disk/by-path/pci-0000:06:00.0-ata-2.0-part4 ext4 rw

- perhaps this would even work?

> But then try to modify the disklist variable (see
> class/20-hwdetect.sh for that) to get a well defined order and then use disk1, disk2,...

Yep, as I had extracted the by-path strings above, that was a piece of cake.

What I was missing (and what is somewhat counter-intuitive) was how to refer to
the partitions created by the "disk_config disk1" and "disk2" sections.
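For the archives: setup-storage lets later sections refer to those partitions as diskN.M (disk number, then partition number within that disk). Something like the following should express that naming (untested sketch, the layout itself is made up for illustration):

```
disk_config disk1 disklabel:gpt bootable:1
primary /boot/efi 512M vfat rw
primary -         4G-  -    -

disk_config disk2 sameas:disk1

disk_config raid fstabkey:uuid
raid1 / disk1.2,disk2.2 ext4 rw
```

The "-" mountpoint/filesystem on the second partition marks it as a raw component that the raid section then picks up as disk1.2 / disk2.2.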

> FAI 6.2 uses systemd. You should remove this line from
> /etc/fai/NFSROOT:
>     sysvinit-core systemd-sysv-
> 
> and recreate the nfsroot. Make sure sysvinit-core is NOT installed
> inside the nfsroot.

Good point, I'll watch out for that, now that I have confessed that I didn't
read the manual.
Perhaps there should be a default hook checking for such pitfalls?

>     > Hm, perhaps I could tweak $disklist and pass a "short list" of two disks
>     > to the installer, then use "disk1" and "disk2"? That'd be in a partition
>     > hook, right? (Write to variables.log?)
> That's a perfect idea. See above for an example to tweak $disklist.

additional.log it is ;)

>     > Currently it looks like GRUB doesn't find the grub.cfg at all, having to
>     > parse 200+ existing partitions with "insmod zfs" enabled.
>     > An older, somewhat similar setup with 60 disks boots without zfs enabled,
>     > so the disk count is not the main problem, it's apparently the combination
>     > of zfs and many disks that eats memory.
> I also think that may be the problem and I would have disabled the grub
> zfs module to see if that works.

That would have failed as well, since everything on that machine was ZFS... but
this idea may become important now that I'm booting from an MD... so -
Disable - how? (Remove from /boot/grub/x86_64-efi in a LAST install script?)
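Answering my own question with a sketch of what such a LAST script might look like (untested; $target is FAI's mount point of the freshly installed system, and the module list is a guess; removing the modules means grub.cfg's "insmod zfs" can no longer trigger a scan of the JBOD's ZFS members):

```shell
#!/bin/bash
# Sketch of a FAI customization script (e.g. scripts/LAST/50-no-grub-zfs)
# that removes GRUB's zfs-related modules from the installed system.

# strip_grub_zfs: delete the zfs modules below the given root directory
strip_grub_zfs() {
    local root="$1" mod f
    for mod in zfs zfscrypt zfsinfo; do   # module names are an assumption
        f="$root/boot/grub/x86_64-efi/$mod.mod"
        [ -e "$f" ] && rm -v "$f"
    done
    return 0
}

# In the real LAST script one would call:  strip_grub_zfs "$target"
```

Whether GRUB copes gracefully with a grub.cfg that still says "insmod zfs" when the module file is gone would need a test boot.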


Thanks so far,
 Steffen

-- 
Steffen Grunewald, Cluster Administrator
Max Planck Institute for Gravitational Physics (Albert Einstein Institute)
Am Mühlenberg 1 * D-14476 Potsdam-Golm * Germany
~~~
Fon: +49-331-567 7274
Mail: steffen.grunewald(at)aei.mpg.de
~~~

