/dev/md1 is already in use. SW raid
David Dreezer
dave at socialstrata.com
Tue May 3 18:40:37 CEST 2011
Hi Michael,
Sorry for the incomplete response. By "this worked" I do indeed mean that the server installed properly and without errors.
The server doesn't completely reboot once the installation finishes, however. The libata.force=noncq kernel flag never gets set permanently, and I'm not sure how to go about doing that with GRUB2 and FAI. So while I now have a server that installs properly using the NTFS kernel and the FAI flags, I don't yet have a machine that will run on its own.
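For what it's worth, one common way to persist a kernel flag with GRUB2 is to append it to GRUB_CMDLINE_LINUX_DEFAULT in the target's /etc/default/grub and then regenerate grub.cfg. A minimal sketch, assuming this runs from an FAI customization script where $target and $ROOTCMD are FAI's usual target-mount and chroot helpers (the add_kernel_param helper itself is purely illustrative):

```shell
#!/bin/sh
# Sketch: persist a kernel flag via GRUB2 from an FAI customization script.
# $target and $ROOTCMD are FAI's standard variables; add_kernel_param is a
# hypothetical helper that just edits a grub defaults file.

# Append a parameter to GRUB_CMDLINE_LINUX_DEFAULT in the given file,
# unless it is already present.
add_kernel_param() {
    file="$1" param="$2"
    grep -q "GRUB_CMDLINE_LINUX_DEFAULT=.*$param" "$file" && return 0
    sed -i "s/^\(GRUB_CMDLINE_LINUX_DEFAULT=\"[^\"]*\)\"/\1 $param\"/" "$file"
}

# In a real FAI script one would then run something like:
#   add_kernel_param "$target/etc/default/grub" libata.force=noncq
#   $ROOTCMD update-grub
```

The update-grub step has to run inside the target chroot so the regenerated grub.cfg lands on the installed system rather than on the install client.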
But getting the array working is a giant step in the right direction.
David Dreezer
Customer Advocate, Social Strata, Inc.
Online Community HQ Since 1996
Hoop.la · LiveCloud · Eve Community
Follow us: @socialstrata
On May 3, 2011, at 7:29 AM, Michael Tautschnig wrote:
> Hi David,
>
> Sorry for confusing those files...
>
>> Here it is, this worked.
>>
>> Commands.pm
>>
>> 294   # check RAID arrays if there are pre-existing ones
>> 295   &FAI::push_command("mdadm --stop --scan && mdadm --assemble --scan --config=$FAI::DATADIR/mdadm-from-examine.conf",
>> 296     "", "mdadm_startall_examined") if (scalar(keys %FAI::current_raid_config));
>> 297   foreach my $id (keys %FAI::current_raid_config) {
>> 298     my $md = "/dev/md$id";
>>
>> Thank you for your help on this.
>>
>
> "this worked" == your server got installed nicely? That'd be cool, I'll then see
> which changes are to be made for the most proper solution. As you noticed, there
> might also be a need for changes in Exec.pm (well, thanks a lot for that, I
> might have missed this).
>
> Best regards,
> Michael
>
More information about the linux-fai mailing list