I had a working mdadm raid1.
Intending to make a backup of my primary OS while it was unmounted, and after researching the safe commands for reassembling the RAID in a different Linux OS, I rebooted into System-Rescue-10, copied my mdadm.conf to /etc over the existing template, and attempted to mount /dev/md0.
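(Roughly, the SystemRescue side went like the sketch below; the source path and mount point are just placeholders, and I don't recall the exact assemble invocation, so treat the middle step as an assumption.)
Code:
# sketch of the System-Rescue-10 steps (paths are placeholders, assemble command from memory)
cp /path/to/my/mdadm.conf /etc/mdadm.conf   # overwrite the template with my array definition
mdadm --assemble --scan                     # assumed: assemble whatever is listed in mdadm.conf
mount /dev/md0 /mnt/backup                  # attempt to mount the array (mount point is a placeholder)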
Only one drive came up. Cue the mild panic, because avoiding exactly this is why I did the research first.
I aborted and rebooted back into Debian 11 to see if anything had been harmed. Sadly, yes.
mdadm says both drives are clean, but the array won't reassemble, and I am uncertain which path to take to get it back together.
If it were marked degraded, I would simply remove the bad drive and re-add it.
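(By that I mean the usual sequence for a degraded member, something like the sketch below, with sdd1 only as the example device.)
Code:
# usual degraded-member recovery I had in mind (sdd1 just as an example)
sudo mdadm /dev/md0 --fail /dev/sdd1      # only if it were actually flagged faulty
sudo mdadm /dev/md0 --remove /dev/sdd1    # take the bad member out of the array
sudo mdadm /dev/md0 --re-add /dev/sdd1    # put it back and let the bitmap resync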
How can I determine why it is getting stuck?

Among my attempts I also did an mdadm --stop md0 to clear the "busy" message, then tried to reassemble from the right starting point, with md0 no longer showing in lsblk.
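(Concretely, that attempt was along these lines:)
Code:
sudo mdadm --stop /dev/md0                                      # clears the "busy" state; md0 then disappears from lsblk
sudo mdadm --assemble --verbose /dev/md0 /dev/sdd1 /dev/sde1    # reassemble from that clean starting point
Here is what I see back under Debian 11: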
Code:
$ sudo mdadm --examine /dev/sd[de]1
/dev/sdd1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 1e3f7f7e:23a5b75f:6f76abf5:88f5e704
           Name : roxy10-debian11-x64:0  (local to host roxy10-debian11-x64)
  Creation Time : Sat Jan 27 12:07:27 2024
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 11720777728 (5588.90 GiB 6001.04 GB)
     Array Size : 5860388864 (5588.90 GiB 6001.04 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=0 sectors
          State : clean
    Device UUID : b65dd512:8928c097:47debae7:9c944a3e

Internal Bitmap : 8 sectors from superblock
    Update Time : Sat Mar  2 18:50:02 2024
  Bad Block Log : 512 entries available at offset 32 sectors
       Checksum : 39e567c9 - correct
         Events : 21691

   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 1e3f7f7e:23a5b75f:6f76abf5:88f5e704
           Name : roxy10-debian11-x64:0  (local to host roxy10-debian11-x64)
  Creation Time : Sat Jan 27 12:07:27 2024
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 11720777728 (5588.90 GiB 6001.04 GB)
     Array Size : 5860388864 (5588.90 GiB 6001.04 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=0 sectors
          State : clean
    Device UUID : c7b9578b:0eef6ae2:6c33a25b:386cf478

Internal Bitmap : 8 sectors from superblock
    Update Time : Tue Mar  5 07:38:27 2024
  Bad Block Log : 512 entries available at offset 32 sectors
       Checksum : aba80e46 - correct
         Events : 21704

   Device Role : Active device 1
   Array State : .A ('A' == active, '.' == missing, 'R' == replacing)

$ sudo mdadm --assemble --verbose /dev/md0 /dev/sdd1 /dev/sde1
mdadm: looking for devices for /dev/md0
mdadm: /dev/sdd1 is busy - skipping
mdadm: Merging with already-assembled /dev/md0
mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot 0.
mdadm: /dev/sde1 is identified as a member of /dev/md0, slot 1.
mdadm: /dev/sdd1 is already in /dev/md0 as 0
mdadm: added /dev/sde1 to /dev/md0 as 1
mdadm: /dev/md0 has been started with 1 drive (out of 2).

$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active (auto-read-only) raid1 sde1[1]
      5860388864 blocks super 1.2 [2/1] [_U]
      bitmap: 1/44 pages [4KB], 65536KB chunk

unused devices: <none>

$ lsblk
sdd       8:48   0   5.5T  0 disk
└─sdd1    8:49   0   5.5T  0 part
sde       8:64   0   5.5T  0 disk
└─sde1    8:65   0   5.5T  0 part
  └─md0   9:0    0   5.5T  0 raid1 /mnt/Ugreen_RAID1_6Tb

$ sudo dmsetup table
No devices found

$ sudo mdadm -E /dev/sdd
/dev/sdd:
   MBR Magic : aa55
Partition[0] : 4294967295 sectors at 1 (type ee)

$ sudo mdadm -E /dev/sde
/dev/sde:
   MBR Magic : aa55
Partition[0] : 4294967295 sectors at 1 (type ee)
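As a next diagnostic step I was thinking of just pulling out the fields that differ between the two superblocks and checking the kernel log for whatever reason sdd1 isn't being merged back in; something like this (sketch only):
Code:
# compare only the fields that matter between the two members
sudo mdadm --examine /dev/sd[de]1 | grep -E 'Update Time|Events|Device Role|Array State'
# and see whether the kernel logged why sdd1 was dropped or not re-added
sudo dmesg | grep -iE 'md0|md:'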
Also, the ~21,000 events accumulated with the previous enclosure back in January, so that count is not relevant now.
I want to understand what happened before trying --force or --zero-superblock, which came up as suggested solutions in my subsequent searches.
I can't tell for sure if the superblock is the problem, but since I don't see any error about it, I am looking for other ideas.
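For the record, the --force variant that keeps coming up in those searches would look something like the sketch below; I have not run it, and don't want to before I understand the failure.
Code:
# NOT run - the forced reassembly suggested in various threads
sudo mdadm --stop /dev/md0
sudo mdadm --assemble --force --verbose /dev/md0 /dev/sdd1 /dev/sde1
For completeness, the rest of the environment details: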
Code:
$ cat /etc/fstab | grep dev/md
/dev/md0 /mnt/Ugreen_RAID1_6Tb ext3 defaults,noatime,rw,nofail,x-systemd.device-timeout=4 0 0

$ sudo fdisk -l /dev/sdd
Disk /dev/sdd: 5.46 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: 726T6TALE604
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 89AEDBB8-E01E-47FA-859B-A415D7DDEE35

Device     Start         End     Sectors  Size Type
/dev/sdd1   2048 11721043967 11721041920  5.5T Linux filesystem

$ sudo fdisk -l /dev/sde
Disk /dev/sde: 5.46 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: 726T6TALE604
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: FF7400C3-E30B-43C6-98F4-9783F92981D0

Device     Start         End     Sectors  Size Type
/dev/sde1   2048 11721043967 11721041920  5.5T Linux filesystem

$ cat /etc/mdadm/mdadm.conf
ARRAY /dev/md0 metadata=1.2 name=roxy10-debian11-x64:0 UUID=1e3f7f7e:23a5b75f:6f76abf5:88f5e704

$ cat /proc/version
Linux version 5.10.0-26-amd64 (debian-kernel@lists.debian.org) (gcc-10 (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2) #1 SMP Debian 5.10.197-1 (2023-09-29)

$ cat /etc/debian_version
11.8

Thank you
Statistics: Posted by seahorse41 — 2024-03-06 18:40 — Replies 2 — Views 42