September 30, 2023

Fedora 38 -- Bug! Won't boot, initrd-switch-root

I first reported this on Bugzilla a month ago (8-31-2023). My report has received no comments or attention. Maybe Bugzilla is dead? Maybe Fedora is dead?

Here are links to the errors I see as well as my bug report.

I have continued to run the 6.4.10 kernel while trying each new kernel along the way up to 6.5.5, which came in about a week ago; none of them works any differently. I now have in /boot:

vmlinuz-6.4.10-200.fc38.x86_64
vmlinuz-6.4.11-200.fc38.x86_64
vmlinuz-6.4.12-200.fc38.x86_64
vmlinuz-6.4.13-200.fc38.x86_64
vmlinuz-6.4.14-200.fc38.x86_64
vmlinuz-6.4.15-200.fc38.x86_64
vmlinuz-6.5.5-200.fc38.x86_64

Oops! I deleted my System.map

It turns out this isn't the tragedy I thought it was. Keep reading.

I just did something stupid. I was trying to prune stuff in my /boot so it wouldn't fill up (in retrospect, filling up would have been OK as long as 6.4.10 was intact). What I did was run "rm" without the "-i" switch and nuked all the System.map files. So I don't have one for any of my kernels. I have been unable to locate the original 6.4.10 kernel rpm package, so I may be in trouble when the next power surge reboots my system.
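For the record, the failure mode is easy to re-create harmlessly. This sketch works in a scratch directory (the filenames are made up, and nothing touches /boot); it just shows how a single unguarded glob takes out every version at once:

```shell
# Re-create the accident in a scratch directory -- nothing touches /boot.
tmp=$(mktemp -d)
touch "$tmp/System.map-6.4.10-200.fc38.x86_64" \
      "$tmp/System.map-6.4.11-200.fc38.x86_64" \
      "$tmp/System.map-6.4.12-200.fc38.x86_64"
before=$(ls "$tmp" | wc -l)     # 3 files
rm "$tmp"/System.map-*          # one glob, all versions gone
after=$(ls "$tmp" | wc -l)      # 0 files
echo "before=$before after=$after"
rmdir "$tmp"
```

With "rm -i" the shell still expands the glob the same way, but you get a per-file prompt before anything is actually deleted.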

I tried the following to restore the files for 6.5.5, but no System.map appeared:

su
dnf erase kernel-6.5.5
dnf install kernel-6.5.5
I use this command on another intact system:
rpm -qf /boot/System.map-6.2.14-200.fc37.x86_64
kernel-core-6.2.14-200.fc37.x86_64
And based on that, I do:
dnf erase kernel-core-6.5.5
I abort this, as it wants to remove 24 dependent packages. Instead I try this:
dnf reinstall kernel-core-6.5.5
This works fine and I now see:
System.map-6.5.5-200.fc38.x86_64
But I try this (with great hope) and get:
dnf reinstall kernel-core-6.4.10
Last metadata expiration check: 3:47:26 ago on Wed 27 Sep 2023 01:11:42 PM MST.
Installed package kernel-core-6.4.10-200.fc38.x86_64 (from updates) not available.
Error: No packages marked for reinstall.

What is System.map and can we regenerate it?

It is a plain ASCII file: the kernel's symbol table, sorted by address. It looks like this:
000000000000c000 d exception_stacks
0000000000018000 d entry_stack_storage
0000000000019000 D espfix_waddr
0000000000019008 D espfix_stack
....
ffffffff85000000 B __bss_stop
ffffffff85000000 B __end_bss_decrypted
ffffffff85000000 B __end_of_kernel_reserve
From what I read, it doesn't sound fatal to be missing System.map, but they say that you will get complaints like "System.map does not match actual kernel" when you run "ps", and that you will get unreliable symbol information when you have a kernel oops (which I never do).
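The sorted-by-address layout is the whole point: given a fault address, the nearest preceding symbol names the function or object involved, which is what an oops decoder does with System.map. A small sketch using the sample lines above (the fault address 0x19004 is invented for illustration):

```shell
# Find the nearest preceding symbol for a fault address.  Because the
# addresses are fixed-width lowercase hex, plain string comparison
# orders them correctly; concatenating "" forces awk to compare strings.
target=0000000000019004
nearest=$(printf '%s\n' \
  '000000000000c000 d exception_stacks' \
  '0000000000018000 d entry_stack_storage' \
  '0000000000019000 D espfix_waddr' \
  '0000000000019008 D espfix_stack' |
  awk -v t="$target" '($1 "") <= (t "") { name = $3 } END { print name }')
echo "$nearest"   # espfix_waddr
```

The last symbol at or below 0x19004 is espfix_waddr at 0x19000, so a fault at 0x19004 would be reported as espfix_waddr+0x4.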

They tell me that the System.map file is generated by scripts/mksysmap near the end of the kernel build process. It is the output of the nm command. Using "locate" I find a plethora of "mksysmap" scripts on my system. It is a nice simple bash script you use as "mksysmap vmlinux System.map" and the heart of it is:

$NM -n $1 | grep -v             \
        -e ' [aNUw] '           \
        -e ' \$'                \
        -e ' \.L'               \
        -e ' __crc_'            \
        -e ' __kstrtab_'        \
        -e ' __kstrtabns_'      \
        -e ' L0$'               \
> $2
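To see what that grep is actually filtering out, you can feed it a few fake nm-style lines (the addresses and names here are invented for illustration):

```shell
# Feed sample nm -n output through the same filter mksysmap uses.
# Types a/N/U/w, compiler-local .L labels, and __kstrtab_/__crc_
# bookkeeping symbols are dropped; real code/data symbols survive.
filtered=$(printf '%s\n' \
  'ffffffff81000000 T _text' \
  'ffffffff81000100 a assembler_local' \
  'ffffffff81000200 U external_undefined' \
  'ffffffff81000300 t .Llocal_label' \
  'ffffffff81000400 r __kstrtab_printk' \
  'ffffffff81000500 D jiffies' |
  grep -v                     \
          -e ' [aNUw] '       \
          -e ' \$'            \
          -e ' \.L'           \
          -e ' __crc_'        \
          -e ' __kstrtab_'    \
          -e ' __kstrtabns_'  \
          -e ' L0$')
echo "$filtered"
```

Only the _text and jiffies lines make it through; everything else is linker or compiler bookkeeping that a symbol map doesn't need.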
However, the compressed vmlinuz file first needs to be expanded. The way to do that is:
/path/to/kernel/tree/scripts/extract-vmlinux /boot/vmlinuz-<version> > vmlinux
In my case, locate found a bunch of these scripts as well (since I have umpteen copies of the linux kernel source).
So, I replace $NM with "nm" in the script above, and do this:
cd /boot
cp /u1/linux/linux-git/scripts/mksysmap .
# edit NM to nm
/u1/linux/linux-git/scripts/extract-vmlinux vmlinuz-6.4.10-200.fc38.x86_64 > vmlinux
./mksysmap vmlinux System.map
This is all in vain; the symbols have been stripped.
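One consolation worth noting: the running kernel exports its own symbol table at /proc/kallsyms, in the same three-column format as System.map. If you only need symbols for the kernel you are currently running, that may be enough:

```shell
# Same "address type name" columns as System.map; addresses read as
# all zeros unless you are root (or kptr_restrict is 0).
head -n 3 /proc/kallsyms

# To drop a stand-in file into place for the *running* kernel only
# (run as root):
#   cp /proc/kallsyms /boot/System.map-$(uname -r)
```

This doesn't help for the 6.4.10 kernel unless you are booted into it, but it does mean one good boot is all it takes to regenerate a usable map for that kernel.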

What the heck is really going wrong with sysroot?

There was quite a bit of online chatter about this, and I tried a recommended scheme to rebuild my initramfs (to no good effect). What is this crazy error really all about? The last time I searched, one link seemed informative. A random tip was to look at the boot line for the kernel in my grub setup, comparing the case where it works with the one where it doesn't, paying special attention to the "root=" clause therein.
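The working side of that comparison is easy to get at, since the kernel records the command line it was actually booted with. A quick check (the grub.cfg path varies by setup; /boot/grub2/grub.cfg is typical on Fedora):

```shell
# What the currently running (i.e. working) kernel was booted with:
cat /proc/cmdline

# Pull out just the root= clause for comparison with the failing entry:
grep -o 'root=[^ ]*' /proc/cmdline || echo 'no root= clause found'

# The failing entry's boot line lives in the grub config, e.g.:
#   grep -B2 -A4 '6.4.11' /boot/grub2/grub.cfg
```

If the root= device, UUID, or LVM path differs between the entry that boots and the one that dies in initrd-switch-root, that is a strong lead.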

In the meanwhile

In another 2 or 3 weeks, F39 should be out. I might be tempted to do a fresh install on a new 1T SSD I have, keeping my current 128G SSD as a backup. I'll do my own partition scheme with good old-fashioned partitions, no fancy LVM stuff.

Along with that I have a F38 Xfce spin live DVD iso on a flash stick if things get dire.


Have any comments? Questions? Drop me a line!

Adventures in Computing / [email protected]