Has anybody had good luck using FAI where it repartitions on a system that previously contained an lvm setup?
Christoph Kluenter
ck at iphh.net
Thu Jul 11 10:15:54 CEST 2013
Hi Ken,
* On Wed, Jul 10 at 15:05:10 -0500, Ken Hahn wrote:
> Hello,
>
> I'm trying to get FAI working for an install of several labs using
> Debian Wheezy. I'm using the latest wheezy install of FAI (which is
> version 4.0.6).
>
> My install process has worked fine when I empty out the disk (dd
> if=/dev/zero of=/dev/sda bs=1024 count=512 is my friend) of the client
> machine. However, when I try to reinstall on top of a previous system
> which used LVM, I continually have failures. This led me to a few
> questions specifically about setup-storage:
>
> 1. Is there any documentation on all the names for the pre and post
> dependencies for a command? I'm having a very hard time deciding if
> there's a bug, or if my config has problems because it's hard for me to
> decode these strings. Specifically, what is self_cleared_* and why does
> it sometimes have a dev node suffix, and other times have a logical
> volume name?
>
> 2. Has anybody had luck with installing where an lvm setup previously
> existed? I see that the wipefs command always depends on a vgchange -a
> n command, and I don't understand how that could work, as the vgchange
> removes the device node. With no device node, there's no device to wipe.
> (Also, I see that for lvm, wipefs refers to a path like vg/fscache
> instead of /dev/vg/fscache. I'm not sure how that would ever work, either.)
>
> One of the few things that I can think of is that the kernel causes
> different behavior as to the dev nodes' appearance/disappearance. I am
> using a stock debian kernel instead of the grml one because the grml one
> was crashing randomly on my test machine (which is similar to my lab
> machines).
>
> I appreciate any relevant feedback.
I had the same problem and added a hook script for the partition task.
It just wipes all disks before setup-storage tries to partition them.
I think this script was posted on this list before.
cheers,
Christoph
commit f00f42e0d8cefa4ab759534b647b9fb275245dc6
Author: Christoph <ck at iphh.net>
Date: Wed Jun 19 11:41:57 2013 +0000
work around debian bug #693701
diff --git a/hooks/partition.WIPEDISKS b/hooks/partition.WIPEDISKS
new file mode 100644
index 0000000..7d2b46a
--- /dev/null
+++ b/hooks/partition.WIPEDISKS
@@ -0,0 +1,49 @@
+#!/bin/bash
+#
+# hooks/partition.WIPEDISKS
+#
+# author : W. Walkowiak, 2013-01-03
+# changed:
+#
+# Stop LVM VGs and MD RAID arrays if present, and wipe all disks with wipefs and dd
+#
+# $Id: $
+#===========================================================================
+
+error=0; trap 'error=$(($?>$error?$?:$error))' ERR # save maximum error code
+
+#--- functions
+
+#----
+
+# deactivate all LVM volume groups
+echo "Stopping VGs:"
+vgs
+vgchange -an
+
+# stop MD RAID arrays
+echo "Stopping and removing MD RAID arrays:"
+mdadm --detail --scan
+for array in $(mdadm --detail --scan | cut -d ' ' -f 2 | xargs -r readlink -f)
+do
+  parts=$(mdadm --detail "$array" | grep '/dev/' | grep -oE "[^ :]+$")
+  mdadm --stop "$array"
+  [ -b "$array" ] && mdadm --remove "$array"
+  for part in $parts; do
+    echo "zeroing MD superblock on $part"
+    mdadm --zero-superblock "$part"
+  done
+done
+rm -f /etc/mdadm/mdadm.conf
+
+# wipe all disks with wipefs and dd
+if [ -n "$disklist" ]; then
+ echo "Wiping boot sector of disks: $disklist"
+ for disk in $disklist; do
+ wipefs -a /dev/$disk
+ dd if=/dev/zero of=/dev/$disk bs=512 count=1
+ done
+fi
+
+exit $error
+
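PS: to enable the hook, it just has to be executable in the config space
and the clients need the matching WIPEDISKS class. Roughly like this; the
config-space path and the class-script name are only examples from my
setup, adjust them to yours:

  # on the FAI server, in the config space (path is an assumption)
  chmod +x /srv/fai/config/hooks/partition.WIPEDISKS

  # hooks named <task>.<CLASS> only run for hosts that have that class,
  # so let one of your class scripts emit it (script name is just an example)
  echo 'echo WIPEDISKS' >> /srv/fai/config/class/50-host-classes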
>
> -Ken Hahn
--
Christoph Kluenter E-Mail: support at iphh.net
Technik Tel: +49 (0)40 374919-10
IPHH Internet Port Hamburg GmbH Fax: +49 (0)40 374919-29
Wendenstrasse 408 AG Hamburg, HRB 76071
D-20537 Hamburg Geschaeftsfuehrung: Axel G. Kroeger