FAI Installation in Mainframes

Thomas Lange lange at informatik.uni-koeln.de
Fri Aug 6 15:26:34 CEST 2010


Just for the archive and interested people

From: Martin Grimm <extern.martin.grimm at zivit.de>
Subject: Re: Reg FAI Installation in Mainframes
Date: Thu, 05 Aug 2010 10:21:58 +0200

Dear Anandhakrishna,

I have no experience installing SuSE Linux with FAI, but I'll try to explain
how we got it running for Debian guests, so you'll have to do some research
on your own to adapt this to SuSE (maybe Thomas has more information
about this).

- We use a FAI installation server (a Debian guest) that exports an installation
  system and configuration data for our systems via NFS.

- The guest to be installed is connected to a private vswitch via z/VM and
  IPLed with a normal Debian kernel punched into the reader, a special parmfile,
  and a customized initrd that mounts the NFS-exported system as / and
  starts the FAI installation process.

I think these should be the first goals to reach; customizing the FAI
installation doesn't make sense before that.


To set up a FAI server (installation system) on a Debian system, install the
packages fai-server and fai-client, configure FAI (especially
/etc/fai/make-fai-nfsroot.conf), generate the installation system in a
subdirectory with the tool "make-fai-nfsroot" shipped with fai-server, and
export that directory via NFS.
If you want to do this with a SuSE system, you may check the FAI wiki for some advice.
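A minimal sketch of those steps on Debian, assuming the nfsroot path from the
parmfile example further below, the common /srv/fai/config config space, and a
1.2.3.0/24 guest subnet (all of these are site-specific):

---8<-----------------------------------------------------------
apt-get install fai-server fai-client nfs-kernel-server

# set NFSROOT=/data/fai/nfsroot (plus mirror/suite settings) in
# /etc/fai/make-fai-nfsroot.conf, then build the installation system
make-fai-nfsroot

# export the installation system and the config space read-only
cat >>/etc/exports <<'EOF'
/data/fai/nfsroot 1.2.3.0/24(ro,async,no_root_squash,no_subtree_check)
/srv/fai/config   1.2.3.0/24(ro,async,no_root_squash,no_subtree_check)
EOF
exportfs -ra
---8<-----------------------------------------------------------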


The next step is creating an initrd. On Debian systems this works via "mkinitramfs"
and can be customized with hook scripts under /etc/initramfs-tools.

We provide a custom /etc/initramfs-tools/scripts/live-premount/01qethdev
(attached to this mail) to set up the qeth device necessary for the network connection.
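Installing the hook is roughly this (assuming you saved the attachment as
01qethdev in the current directory; the script must be executable, or the
initramfs machinery will skip it):

 install -m 0755 01qethdev /etc/initramfs-tools/scripts/live-premount/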

Next, you have to set the BOOT option for initramfs to "live":

 echo "BOOT=live" > /etc/initramfs-tools/conf.d/boot

and create the initrd:
 mkinitramfs -o initrd.fai <kernel-version>
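To double-check that the hook actually made it into the image (by default the
initrd is a gzip-compressed cpio archive):

 zcat initrd.fai | cpio -it | grep 01qethdev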

I have no idea whether the initrd structure on SuSE systems is similar to
Debian's, so you'll have to check this.

Now transfer the kernel and initrd to a z/VM CMS disk, just like you do for a
manual zLinux installation, as files KERNEL FAI and INITRD FAI.
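One common route is FTP to the z/VM FTP server (a sketch; host name and local
file names are assumptions). The kernel and initrd must land as binary, fixed
80-byte records so they can be punched, while the parmfile goes in text mode so
it is translated to EBCDIC; the CMS file KERNEL FAI is addressed as kernel.fai
on the FTP side:

---8<-----------------------------------------------------------
ftp vmhost                       # log in as a user with access to the disk
ftp> binary
ftp> quote site fix 80           # fixed 80-byte records, needed for punching
ftp> put vmlinuz kernel.fai
ftp> put initrd.fai initrd.fai
ftp> ascii                       # text mode: ASCII-to-EBCDIC translation
ftp> put parmfile.txt parmfile.fai
ftp> quit
---8<-----------------------------------------------------------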

Create a parmfile PARMFILE FAI like this (change the values to fit your setup):
---8<-----------------------------------------------------------
dasd=200-21f root=/dev/nfs nfsroot=1.2.3.4:/data/fai/nfsroot
qethdev=0.0.1000,0.0.1001,0.0.1002 layer2=1 boot=live
ip=1.2.3.5:::255.255.255.0:::off
FAI_FLAGS=sshd FAI_ACTION=install vmpoff=LOGOFF
---8<-----------------------------------------------------------

Short description of some keys:
dasd    - minidisk addresses to use as Linux disks
nfsroot - NFS server IP and NFS share to mount
qethdev - the device addresses for your NIC
layer2  - whether to use layer-2 networking (1) or not (0)
ip      - IP address and netmask used for the new system at installation time

Create a script FAIINST EXEC to start the installation:
---8<-----------------------------------------------------------
/* REXX EXEC TO IPL DEBIAN GNU/LINUX  */
/* FOR S/390 FROM THE VM READER.      */
/*                                    */
'CP CLOSE RDR'
'PURGE RDR ALL'
'SPOOL PUNCH * RDR'
'PUNCH KERNEL    FAI * (NOHEADER'
'PUNCH PARMFILE  FAI * (NOHEADER'
'PUNCH INITRD    FAI * (NOHEADER'
'CHANGE RDR ALL KEEP NOHOLD'
/*'TERM MORE 0 0'*/
'CP IPL 000C CLEAR'
---8<-----------------------------------------------------------

All these files have to reside on a CMS disk accessible by the new Linux guest.

Log on as the new guest and start the installation by calling FAIINST.

If all works well, the kernel should start, and the initrd should mount the
NFS share as root and try to start FAI. An SSH server should be running on the
guest.
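Since FAI_FLAGS=sshd starts an SSH daemon in the install environment, you can
follow the installation from a workstation. A sketch, using the IP from the
parmfile example; the password is whatever FAI_ROOTPW in make-fai-nfsroot.conf
sets, and /tmp/fai is FAI's usual log directory during installation (both worth
verifying for your setup):

 ssh root@1.2.3.5           # install-time root password: see FAI_ROOTPW
 tail -f /tmp/fai/fai.log   # follow the installation log on the guest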

If you've gotten here, we'll talk about the next steps.


Greetings
Martin


On 04.08.2010 07:04, Anandhakrishna R/TVM/TCS wrote:
> Dear Martin,
> 
> Please help us to proceed with the FAI solution. If possible, we can arrange a call for a discussion.
> 
> Currently we are using IBM tools to dynamically provision SuSE Linux 10.1 images over z/VM LPARs.
> 
> We would like to replace it with an open source tool like FAI.
> 
> The following are the details of our environment relevant to the FAI installation:
> 
> * We don't have Debian guests on our z10; we have only SuSE Linux guests running over z/VM.
> * We are using ECKD (minidisk) in our environment.
> * Network connectivity is through vswitch.
> * The FAI host can run on any platform, but the provisioning of SuSE Linux images has to be done over z/VM on the mainframe.
> 
> It would be very helpful if you provide information regarding this.
> 

-- 
Martin Grimm
Zentrum für Informationsverarbeitung und Informationstechnik
Dienstsitz Bonn
An der Küppe 2
53225 Bonn
Tel.: +49 228 99 680 5298
e-mail: extern.martin.grimm at zivit.de

----------------------------------------------------------------------
#!/bin/sh

#  * /dev, /proc, and /sys are already mounted; at this point / is
#	still the initramfs itself, not the real root filesystem.
#
#  * It is expected that /proc and /sys will be umounted before
#	changing over to the real root file system, so you must not keep
#	any files open on them beyond these scripts.
#
# Because this script will be run as a full separate process, rather
# than sourced inside the context of the driver script, if it needs to
# pass information to another script that may run after it, it must do
# so by writing data to a file location known to both scripts.  Simply
# setting an environment variable will not work.
#

#
# List the soft prerequisites here.  This is a space separated list of
# names, of scripts that are in the same directory as this one, that
# must be run before this one can be.
#
PREREQ=""

prereqs()
{
	echo "$PREREQ"
}

case $1 in
# get pre-requisites
prereqs)
	prereqs
	exit 0
	;;
esac

. /scripts/functions

get_field() {
  opts="$3"
  if [ -z "$opts" ]; then
    # only use lines with a delimiter; use : as the delimiter
    opts="-s -d :"
  fi
  echo $1 | cut $opts -f $2
}

parse_ip(){
  stuff=${1##ip=}
  # ip=x.x.x.x or ip=x.x.x.x:x.x.x.x:x.x.x.x:... are valid formats
  # the first doesn't use a colon, other options require the colon
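  # full kernel syntax: ip=<client-ip>:<server-ip>:<gw-ip>:<netmask>:<hostname>:<device>:<autoconf>,
  # which is exactly the field order read below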
  ipaddress=$(get_field $stuff 1 "-d :")
  if [ -z "$nfsserver" ]; then
    nfsserver=$(get_field $stuff 2)
  fi
  gw_ip=$(get_field $stuff 3)
  subnet=$(get_field $stuff 4)
  hostname=$(get_field $stuff 5)
  interface=$(get_field $stuff 6)
  autoconf_type=$(get_field $stuff 7)
}

# Do the work here.

# parse cmdline options (kernel args)

for opt in $(cat /proc/cmdline) ; do
  case $opt in
    ip=*)
      ipaddr=${opt##ip=}
      parse_ip $ipaddr
      ;;
    qethdev=*) ccwgroup="${opt##qethdev=}" ;;
    layer2=*) layer2="${opt##layer2=}" ;;
  esac
done

# set hostname to vm guest name, if unset
if [ -z "$hostname" -a -e /proc/sysinfo ]; then
   hostname=$(cat /proc/sysinfo | grep -i "^vm.. name:" | cut -d : -f 2 | awk '{print $1}' | tr A-Z a-z)
fi

echo "nfsserver     = $nfsserver"
echo "ipaddr        = $ipaddr"
echo "gw_ip         = $gw_ip"
echo "subnet        = $subnet"
echo "hostname      = $hostname"
echo "interface     = $interface"
echo "autoconf_type = $autoconf_type"

# set hostname
export HOSTNAME=$hostname
hostname $hostname
echo "HOSTNAME=$hostname" >>/conf/param.conf

if [ -z "$ccwgroup" ]; then
  echo "not configuring eth0, no ccwgroup option found..."
  exit 10
fi

ccwfirst="$(echo $ccwgroup | cut -d , -f 1)"
echo
echo "*** Activating qeth device '$ccwfirst' (layer2: $layer2) ***"
modprobe qeth
echo "$ccwgroup" >/sys/bus/ccwgroup/drivers/qeth/group
if [ "$layer2" = 1 ]; then
   echo "1" >/sys/bus/ccwgroup/drivers/qeth/$ccwfirst/layer2
fi
echo "1" >/sys/bus/ccwgroup/drivers/qeth/$ccwfirst/online

#panic "Let's panic"
export DEVICE=$ip



