# I am the Watcher. I am your guide through this vast new twtiverse.
# 
# Usage:
#     https://watcher.sour.is/api/plain/users              View list of users and latest twt date.
#     https://watcher.sour.is/api/plain/twt                View all twts.
#     https://watcher.sour.is/api/plain/mentions?uri=:uri  View all mentions for uri.
#     https://watcher.sour.is/api/plain/conv/:hash         View all twts for a conversation subject.
# 
# Options:
#     uri     Filter to show a specific user's twts.
#     offset  Start index for query.
#     limit   Count of items to return (going back in time).
# 
# twt range = 1 10
# self = https://watcher.sour.is/conv/4uamtsq
Today's project: Put 2 failing hard drives in RAID 0 and boot from it. What could go wrong?
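A minimal sketch of the striped array being described; the device names are placeholders, not the poster's actual layout:

```shell
# Hypothetical: build the two-disk RAID 0 stripe described above.
# /dev/sda and /dev/sdb stand in for the two failing drives.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb
mkfs.ext4 /dev/md0
```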
@mckinley It booted. I was going to do more but I had actual work to do so I shelved it. Maybe I'll come back to it another time. These drives are in really bad shape, though. They hold up udev by 30-60 seconds on every boot, even when booting the Arch install ISO, covering the console with lots of SATA errors and timeouts I don't really understand.

Badblocks via mkfs.ext4 -cc was taking too long on the full 1+1 TB array, so I made new 250 GB partitions instead. Neither drive had bad blocks in that range, so it was just a waste of time. Maybe if I come back to it I'll do the full array and have the EFI system partition in RAID 1 just for fun. I didn't know that worked with software RAID.
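The -cc pass mentioned above can be tried safely on a scratch image file instead of a real array; the image path here is made up. -c runs a read-only badblocks scan, while -cc runs the much slower destructive read-write test:

```shell
# Sketch, assuming e2fsprogs is installed; the image path is an example.
truncate -s 64M /tmp/scratch.img
# -F: allow operating on a regular file; -cc: destructive read-write
# badblocks test before the filesystem is created
mkfs.ext4 -F -q -cc /tmp/scratch.img
```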

> The key part is to use --metadata 1.0 in order to keep the RAID metadata at the end of the partition, otherwise the firmware will not be able to access it.
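The quoted advice translates to roughly the following mdadm invocation; the device names are assumptions for illustration. --metadata 1.0 puts the md superblock at the end of the partition, so UEFI firmware, which knows nothing about md, still sees a plain FAT filesystem at the start:

```shell
# Sketch only: mirror an EFI system partition across two drives.
# /dev/sda1 and /dev/sdb1 are placeholder partition names.
mdadm --create /dev/md/esp --level=1 --raid-devices=2 \
      --metadata 1.0 /dev/sda1 /dev/sdb1
mkfs.fat -F 32 /dev/md/esp
```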

I had the ESP on a USB stick for simplicity's sake and booted from that.
@mckinley Ouch sounds painful 😢 Any data loss so far, or still trying to recover any data at all? 🤔
@prologic No pain here. There's no important data on them, and the first 1/4 of the drives work reliably enough that there weren't any issues before I had to shelve it. This is just for fun. I don't even think I'd consider it a war game.
@mckinley Ahh I see 👍