# I am the Watcher. I am your guide through this vast new twtiverse.
#
# Usage:
# https://watcher.sour.is/api/plain/users View list of users and latest twt date.
# https://watcher.sour.is/api/plain/twt View all twts.
# https://watcher.sour.is/api/plain/mentions?uri=:uri View all mentions for uri.
# https://watcher.sour.is/api/plain/conv/:hash View all twts for a conversation subject.
#
# Options:
# uri     Filter to show a specific user's twts.
# offset  Start index for query.
# limit Count of items to return (going back in time).
#
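# Example (a sketch; the offset/limit values and the example uri are arbitrary):
#   curl -s 'https://watcher.sour.is/api/plain/twt?offset=0&limit=20'
#   curl -s 'https://watcher.sour.is/api/plain/mentions?uri=https://example.com/twtxt.txt'
#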
# twt range = 1 21
# self = https://watcher.sour.is/conv/u6n2s5a
ZFS Operations: Replacing a failed disk - HedgeDoc -- Had to replace a failed disk in my NAS today as it was on its way out. For anyone else running a ZFS pool, here are the steps involved:
1. zpool offline <pool> <device>
2. Remove failed disk and replace with new disk.
3. zpool replace <pool> <device>
4. Wait.
#ZFS
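For example (a sketch: the pool name `tank` and the device IDs are placeholders, not from the original post):
```
# 1. Take the failing disk offline
zpool offline tank ata-WDC_WD40EFRX-OLD

# 2. Physically swap the disk, then:
# 3. Resilver onto the new one. Naming both the old and new
#    devices works when the replacement has a different ID:
zpool replace tank ata-WDC_WD40EFRX-OLD ata-WDC_WD40EFRX-NEW

# 4. Wait: watch resilver progress until the pool is ONLINE again
zpool status -v tank
```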
@prologic I've done this many times myself (been using ZFS on my main storage cluster since 2013 or so) and I still stress about it every time 😨 Even though I've never had anything bad happen during a disk replacement and I have backups, it still worries me.
@abucci Haha me too! 🤣 Although this is my first disk replacement 😱
@prologic speak of the devil, I just had to do this today and as always it's a 😱
I have a pool with 7 disks arranged in 3 mirrors for content plus one for logs, following this person's advice. One of the disks in one of the mirrors started throwing errors last night, apparently. It made a real mess because I sync backups to that array at midnight, I have a media server with music and movies running off it, I have an app that automatically takes snapshots and prunes old snapshots that runs regularly, etc etc etc. All that stuff was in various states of hung/failed/conflicted/angry because the array was much slower than usual. ZFS is great for remaining functional even in a degraded state, but it can get slowwwwww.
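(For reference, a sketch of creating that layout; all device names here are invented:)
```
# 3 two-way mirrors for data plus a separate log device = 7 disks
# (pool and device names are invented for illustration)
zpool create tank \
  mirror ata-disk1 ata-disk2 \
  mirror ata-disk3 ata-disk4 \
  mirror ata-disk5 ata-disk6 \
  log ata-disk7
```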
I went through the procedure here as usual, except it looks like I forgot to stop a process that was using the array, and it vomited all sorts of checksum errors and then I/O was suspended. This is what always stresses me out about this process: I forget something, and for a brief moment I feel like I've fucked up the whole array.
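One way to catch that beforehand is to check what still has files open on the pool before offlining anything (a sketch; the mountpoint `/tank` is hypothetical):
```
# List processes with files open on the pool's mountpoint
# (/tank is a hypothetical path; stop these before maintenance)
fuser -vm /tank
lsof /tank
```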
Anyway, it's resilvering now and `zpool status` reports the blessed `errors: No known data errors`, so I thinkkkk I'm OK.
@abucci Oh man!
That sounded stressful 🤣 But glad it's all back to normal! -- Perhaps you _might_ want to consider rebuilding your pool to make things a bit simpler on yourself? 🤔 -- Btw I'm using restic for backups, and I intend to (soon™) buy a TrueNAS Mini X as a secondary NAS with a ZFS pool in my office that the other one syncs to, say, every day.
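For context, the restic workflow is roughly this (a sketch; the repository host and paths are invented, not my actual setup):
```
# Initialize a repository on the secondary NAS over SFTP
# (host and paths invented for illustration)
restic -r sftp:backup-nas:/srv/restic init

# Back up the pool; restic deduplicates unchanged data between runs
restic -r sftp:backup-nas:/srv/restic backup /tank

# Keep the last 7 daily snapshots and drop the rest
restic -r sftp:backup-nas:/srv/restic forget --keep-daily 7 --prune
```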
@prologic I don't think it's the structure of the pool that's the problem. I think it's the fact that so many automated processes rely on it that I forget one or two when I need to perform maintenance. I have it all documented and monitored, but I ignore all that and dive into "must fix nowwww" mode when the array has a problem, and it bites me in the ass every time.