Has anybody had good luck using FAI where it repartitions on a system that previously contained an lvm setup?
khahn at engr.wisc.edu
Mon Jul 15 23:33:55 CEST 2013
On 07/12/2013 02:24 AM, Thomas Neumann (FAI) wrote:
> Probably the only one who knows that is Michael Tautschnig, but he hasn't
> been seen on this list since April 2011. I have run into these problems and
> tried to dig into setup-storage's code but to be frank it's not exactly the
> prettiest in terms of style and readability. Wasn't too happy with my patch
> since I'm pretty sure preserving partitions is not possible with it:
For anybody else trying to figure it out in the future, here's the bit
of it I've been able to put together. The descriptions are from the
post-dependency perspective (that is, a command with this
post-dependency provides the described state):
exist_<DEV> - device exists (this gets double-used for devices and
partitions... could be a problem)
wipefs_<PART DEV> - device has had wipefs run
self_cleared_VG_<VG> - volume group has been "cleared"
self_cleared_<LVM VOLNAME> - logical volume has been "cleared"
cleared1_<DISK DEV> - all partitions have been "cleared" on DEV
cleared2_<DISK DEV> - disk label has been made on DEV
prep2_<PART DEV> - partition has been made (usually then translated into
has_fs_<PART DEV>)
has_fs_<PART DEV> - partition has been formatted with filesystem
vgchange_a_n_VG_<VG> - the VG has been disabled (mostly useless?)
vg_enabled_for_destroy_<VG> - vgchange -a y has been run to enable the
VG for teardown
wipefs_vg/<LVM VOLNAME> - logical volume has been wipefs'd
lv_rm_<VG>/<LVM VOLNAME> - logical volume removed from volgroup
vg_removed_<VG> - volume group removed from usage
pvremove_<VG> - physical volume removed
pv_sigs_removed_wipe_<DEV>_<VG> - wipefs was run on physical volume that
was part of VG
pv_sigs_removed_<VG> - wipefs has been run for all PVs in VG (all devs)
flag_lvm_<DEV> - dev has been flagged in the partition table as an lvm
partition
pv_done_<DEV> - pvcreate has been run on DEV
vg_created_<VG> - VG has been created
vg_enabled_<VG> - VG has been enabled (vgchange -a y VG)
I'm still trying to understand what "cleared" means in its various contexts.
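Taken together, the markers above imply a teardown-then-rebuild
ordering. Here's a dry-run sketch of that sequence as I read it; the
device, VG, and LV names are placeholders, and the run wrapper only
prints the commands instead of executing them, since the real ones are
destructive:

```shell
#!/bin/sh
# Dry-run sketch of the ordering the dependency markers encode.
# DEV, VG, and LV are placeholders -- adjust before using for real.
DEV=/dev/sda1   # placeholder physical volume
VG=vg0          # placeholder volume group
LV=root         # placeholder logical volume

run() { echo "+ $*"; }           # print instead of executing

# --- teardown ---
run vgchange -a y "$VG"          # vg_enabled_for_destroy_<VG>
run wipefs -a "/dev/$VG/$LV"     # wipefs_vg/<LVM VOLNAME>
run lvremove -f "$VG/$LV"        # lv_rm_<VG>/<LVM VOLNAME>
run vgremove "$VG"               # vg_removed_<VG>
run pvremove "$DEV"              # pvremove_<VG>
run wipefs -a "$DEV"             # pv_sigs_removed_wipe_<DEV>_<VG>

# --- rebuild ---
run pvcreate "$DEV"              # pv_done_<DEV>
run vgcreate "$VG" "$DEV"        # vg_created_<VG>
run vgchange -a y "$VG"          # vg_enabled_<VG>
```

Again, this is only my reading of the dependency chain, not something
lifted verbatim from setup-storage.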
> So I started on a different project to create a test harness for testing
> different partition setups in short time. Main usage was intended to check
> if setup-storage is doing the same things after tweaking the code and
> applying bugfixes. (Would be pretty embarrassing to get LVM to work but
> break MD or crypto devices.)
> It works pretty nicely except that you can't verify disk configs using
> /dev/disk/by-path/, by-id/ etc. since the volumes are emulated with loopback
> devices and one would have to do some udev magic to make it work.
Good deal there. Congrats on it... do you actually have any test cases
with lvm and preserve? If not, what are others doing?
> Since all my work-related itches are scratched I can only allot my copious
> free time to it. There isn't too much of that at the moment - so I'm
> currently not actively working on finding a solution. [Apart from the free
> time I must confess that I'm actually scared of the perl code inside setup-
> storage - on the outside it looks like it's split nicely into different
> libraries but on the inside it's more like it's one big fscking pile with
> usage of package-scoped variables all over the place.] I was kinda hoping
> that Michael would come back sooner or later and would fix this mess himself
> but that does seem less and less probable with each month passing.
Nod, my work-related itches are asking for lvm with preserve (not so
interested in resize, encryption, or other filesystems). I'm probably
just going to roll back to a pre-4.0 version of FAI to make it happen.
(I'd love to use my spare time, but it involves 2 kids less than 4 years
old.)
I've taken a look through the code to see what I could do in a few days.
I originally thought there was never a case where vgchange -a n VG was
required. I was able to remove vgchange -a n VG (Volume Group) from the
code and the related dependencies, and add some ways to tell one "true"
command from another. That said, I was a bit surprised to find out that
even when a partition is preserved, the entire partition table is blown
away anyway and then rebuilt with the preserved partition using the same
boundaries. Scarily enough, this means that if the install crashes at a
certain point, the preserved partition will effectively have
disappeared. (Obviously, erasing and re-writing the partition table
requires having all VGs disabled, making my original assertion wrong.)
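Given that the table is erased and rewritten even for preserved
partitions, one hedge (not something setup-storage does itself, just a
precaution I'd consider) is to snapshot the partition table before
kicking off the install. A dry-run sketch, with /dev/sda as a
placeholder disk; the run wrapper only prints the commands:

```shell
#!/bin/sh
# Dry-run sketch: back up the partition table before setup-storage
# erases and rewrites it, so a crashed install doesn't lose the
# preserved partition's boundaries. /dev/sda is a placeholder.
DISK=/dev/sda
BACKUP=/root/$(basename "$DISK").table

run() { echo "+ $*"; }   # print instead of executing

# save a reloadable dump of the current table
run sfdisk --dump "$DISK"   # redirect to "$BACKUP" when run for real

# after a crash, the old table could be restored with:
run sfdisk "$DISK"          # feed "$BACKUP" on stdin when run for real
```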
You're right about the code being a "pile", and I'm sorry to hear that
the original contributor isn't showing up any more. This leads me to a
few questions:
1. Am I an odd case, do other people have lvm and preserve working for them?
2. Is there a bit of functionality that 4.0 provides that 3.8 doesn't?
3. If this code is really as bad as it seems with no maintainer, why not
roll back to the pre-4.0 version of setup-storage? (I get group
collaboration/development, but that either requires somebody that will
reverse engineer, or somebody that has the overall design in their head.
I don't think that exists here.)
Anyway, I'd be glad to hear if I'm wrong/out of line with these
suggestions/questions. I'm interested in helping fix things, as we use
FAI here at work, but I'm also facing a fall-semester deadline and a
number of other things to do in order to have our labs ready.
At least, I hope the partial documentation of the dependencies is
helpful (some of it even *seems* straightforward).
P.S. I can pass along my changes to the code if requested, but since it
appears to be a dead end compared to the original design, I don't think
it is worth much.