Not all fixed. Re: Has anybody had good luck using FAI where it repartitions on a system that previously contained an lvm setup?

Ken Hahn khahn at
Sat Jul 20 00:14:01 CEST 2013


Still looking at this issue (I've been away at a conference), but I'm
noting that FAI/Debian changed from liblinux-lvm-perl 0.14-2 to 0.16-1.
This changes what the "current device tree" looks like: the LVs are
represented by /dev/ paths in the old version, but by volume name only in
the new one.  This really matters, because the pre and post dependency
names assume everything is a device name.

Old method:
          'VG_vg' => [

New method:

          'VG_vg' => [

(I discovered this while trying to use the old setup-storage in a new fai.)
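To make the difference concrete, here is a purely illustrative sketch (not the actual setup-storage dump; the LV name "root" and the exact key formatting are assumptions, while the self_cleared_* prefix is the one discussed later in this thread):

```shell
# Linux::LVM 0.14: LVs keyed by full device path
old_key="/dev/VG_vg/root"
# Linux::LVM 0.16: volume name only
new_key="root"
# Dependency markers get built from these keys, so the generated
# pre/post names differ between the two module versions:
echo "self_cleared_${old_key}"
echo "self_cleared_${new_key}"
```

With the new module, anything that later expects a /dev/ path in those names will no longer match.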

Also, note this entry in the Linux::LVM changelog:

0.14  Wed Jul  2 02:54:14 CDT 2008
        - Fixed some logic errors to get it working a little better
        - Full rewrite underway

I guess one of my questions is: is this a bug in setup-storage, a bug in
Linux::LVM, or perhaps even in one of the LVM command-line tools?

Also, how long have people been submitting patches for setup-storage
because of a difference that really arose in Linux::LVM?

If anybody has any info on this before I keep digging on Monday, it
would be appreciated!
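As an aside on the wipefs path question quoted below (vg/fscache vs. /dev/vg/fscache), here is a minimal sketch of the device-node spellings wipefs could actually operate on. The VG/LV names are the hypothetical ones from the thread; the /dev/mapper form follows standard device-mapper naming (note that any '-' inside a real VG or LV name would be doubled there, which this sketch does not handle):

```shell
vg="vg"; lv="fscache"               # hypothetical names from the thread
# "vg/fscache" on its own is not a device node; wipefs needs a real path:
echo "/dev/${vg}/${lv}"             # the /dev/<vg>/<lv> symlink
echo "/dev/mapper/${vg}-${lv}"      # the device-mapper node it points to
```

Both paths exist only while the VG is active, which is exactly why ordering wipefs after vgchange -a n cannot work.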

-Ken Hahn

On 07/11/2013 05:11 PM, Ken Hahn wrote:
> Hello,
> I've been looking through the code, and perhaps I need to ask this on
> the dev list, but I can never see a reason that we need to call vgchange
> -a n.  It appears to just cause problems.
> Am I correct about this?  Can I get a scenario where it would be
> required? perhaps resize... I'm not sure.
> Thanks,
> -Ken
> On 07/11/2013 02:46 PM, Ken Hahn wrote:
>> Hello,
>> It looks like this fix doesn't resolve the problem when a preserved
>> partition is desired.  Still looking into it.
>> -Ken
>> On 07/11/2013 11:40 AM, Ken Hahn wrote:
>>> Hello,
>>> Thanks, indeed that patch fixed the problem and made sense to me.  I am
>>> still, however, curious if there is indeed any documentation on the
>>> various pre and post states, just so I can understand what they are
>>> supposed to mean?  The new vg_enabled_for_destroy_* makes sense to me,
>>> but I'm curious what the original, vgchange_a_n_VG_* is supposed to mean
>>> exactly.
>>> I think the difficulty in diagnosing this kind of problem comes from
>>> the lack of this information, and also from having no way to see the
>>> full graph that gets generated.  (We do see a topologically sorted dump
>>> in the debug output, which gets most of the way there.)
>>> Anyway, thank you, again, for the pointer at the patch.
>>> -Ken
>>> On 07/11/2013 03:26 AM, Bjarne Bertilsson wrote:
>>>> Hi,
>>>> I think the patch posted in this bug report will fix the problem with lvm, though I haven't tested it yet. Note that there are two bugs in that report, but the one you want is the one posted on GitHub.
>>>> Not sure why this hasn't been addressed yet on upstream.
>>>> BR
>>>> / Bjarne
>>>> On Wed, Jul 10, 2013 at 10:05:10PM +0200, Ken Hahn wrote:
>>>>> Hello,
>>>>> I'm trying to get FAI working for an install of several labs using
>>>>> Debian Wheezy.  I'm using the latest wheezy install of FAI (which is
>>>>> version 4.0.6).
>>>>> My install process has worked fine when I empty out the disk (dd
>>>>> if=/dev/zero of=/dev/sda bs=1024 count=512 is my friend) of the client
>>>>> machine.  However, when I try to reinstall on top of a previous system
>>>>> which used LVM, I continually have failures.  This led me to a few
>>>>> questions specifically about setup-storage:
>>>>> 1. Is there any documentation on all the names for the pre and post
>>>>> dependencies for a command?  I'm having a very hard time deciding if
>>>>> there's a bug, or if my config has problems because it's hard for me to
>>>>> decode these strings. Specifically, what is self_cleared_* and why does
>>>>> it sometimes have a dev node suffix, and other times have a logical
>>>>> volume name?
>>>>> 2. Has anybody had luck with installing where an lvm setup previously
>>>>> existed?  I see that the wipefs command always depends on a vgchange -a
>>>>> n command, and I don't understand how that could work, as the vgchange
>>>>> removes the device node. With no device node, there's no device to wipe.
>>>>>  (Also, I see that for lvm, wipefs refers to a path like vg/fscache
>>>>> instead of /dev/vg/fscache.  I'm not sure how that would ever work, either.)
>>>>> One of the few things that I can think of is that the kernel causes
>>>>> different behavior as to the dev nodes appearance/disappearance. I am
>>>>> using a stock debian kernel instead of the grml one because the grml one
>>>>> was crashing randomly on my test machine (which is similar to my lab
>>>>> machines).
>>>>> I appreciate any relevant feedback.
>>>>> -Ken Hahn

More information about the linux-fai mailing list