# I am the Watcher. I am your guide through this vast new twtiverse.
#
# Usage:
# https://watcher.sour.is/api/plain/users View list of users and latest twt date.
# https://watcher.sour.is/api/plain/twt View all twts.
# https://watcher.sour.is/api/plain/mentions?uri=:uri View all mentions for uri.
# https://watcher.sour.is/api/plain/conv/:hash View all twts for a conversation subject.
#
# Options:
# uri Filter to show a specific user's twts.
# offset Start index for query.
# limit Count of items to return (going back in time).
#
# twt range = 1 11
# self = https://watcher.sour.is/conv/z4jndza
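For reference, a query combining the options above might look like the following Go sketch. This is only an illustration: the feed URI is a placeholder, and passing uri/offset/limit as ordinary query-string parameters is an assumption based on the option list above, not a documented contract.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// Build a query against the plain twt endpoint documented above.
	// NOTE: treating uri/offset/limit as query-string parameters is an
	// assumption; the feed URI below is a placeholder, not a real feed.
	q := url.Values{}
	q.Set("uri", "https://example.com/twtxt.txt")
	q.Set("offset", "0")
	q.Set("limit", "10")

	endpoint := "https://watcher.sour.is/api/plain/twt?" + q.Encode()

	resp, err := http.Get(endpoint)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}

	// The API returns plain text, one twt per line.
	fmt.Println(string(body))
}
```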
@darch Unfortunately the last series of changes I made for you to track the "Timeline" view's "last updated" timestamps _actually_ costs about ~10x in latency (from ~60ms to ~600ms) and ~6x in CPU (from ~5% to ~30%). I'm actually not certain I can accept this. I will have to consult some RPi Pod owners like @eldersnake and/or @jlj
See here:
And:
@prologic Difficult one; I guess for me I wouldn't want to add any extra latency. My RPi typically has about a 2500-3500ms response time with Yarn as-is, and the CPU momentarily spikes to 50-70% when I load my timeline. I always figured it was mostly my povo network setup (well, not povo, more the only thing I can get where I live!) but it's identical even when I load it on my local network. This doesn't actually worry me, and the majority of the time my RPi barely knows yarnd is there; it's just when loading the timeline.
@prologic I don't think it is worth it, given the information you get and how it is updated.
Not sure what @jlj's experience is. For me, being on an RPi, I never expect miracles anyway. It does the job, but this is one of the reasons I didn't make my personal pod a public instance.
Thanks guys! I _managed_ to optimize one aspect of the cache; I'll give my pod 1/2hr or so to see if it makes any significant difference. So far my metrics and graphs are not showing a significant improvement, a _little_ perhaps. There are two other opportunities for the cache to be optimized, but they are significantly harder to implement:
- Keeping an LRU cache of each user's personal timeline
- Keeping an LRU cache of each user's personal mentions
Implementing these would get us back to the prior levels of performance from ~3hrs ago.
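For context, "an LRU cache of each user's personal timeline" could look roughly like the sketch below. This is not yarnd's actual code: the Twt and timelineCache types are hypothetical stand-ins, a real implementation would need locking for concurrent requests, and a mentions cache would follow the same shape keyed by user.

```go
package main

import (
	"container/list"
	"fmt"
)

// Twt is a stand-in for however yarnd represents a single twt;
// the real type lives in the yarnd codebase and will differ.
type Twt struct {
	Hash string
	Text string
}

// timelineCache is a minimal fixed-size LRU keyed by username,
// caching each user's personal timeline so it doesn't have to be
// recomputed from the global cache on every page load.
type timelineCache struct {
	capacity int
	order    *list.List               // front = most recently used
	entries  map[string]*list.Element // username -> element in order
}

type entry struct {
	user     string
	timeline []Twt
}

func newTimelineCache(capacity int) *timelineCache {
	return &timelineCache{
		capacity: capacity,
		order:    list.New(),
		entries:  make(map[string]*list.Element),
	}
}

// Get returns the cached timeline for user, marking it most recently used.
func (c *timelineCache) Get(user string) ([]Twt, bool) {
	el, ok := c.entries[user]
	if !ok {
		return nil, false
	}
	c.order.MoveToFront(el)
	return el.Value.(*entry).timeline, true
}

// Put stores (or refreshes) a user's timeline, evicting the least
// recently used entry once the cache is full.
func (c *timelineCache) Put(user string, timeline []Twt) {
	if el, ok := c.entries[user]; ok {
		el.Value.(*entry).timeline = timeline
		c.order.MoveToFront(el)
		return
	}
	el := c.order.PushFront(&entry{user: user, timeline: timeline})
	c.entries[user] = el
	if c.order.Len() > c.capacity {
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.entries, oldest.Value.(*entry).user)
	}
}

func main() {
	cache := newTimelineCache(2)
	cache.Put("eldersnake", []Twt{{Hash: "abc1234", Text: "hello"}})
	if tl, ok := cache.Get("eldersnake"); ok {
		fmt.Println(len(tl), "cached twt(s) for eldersnake")
	}
}
```

The trade-off sketched here is the usual one: a bounded amount of extra memory per pod in exchange for skipping the per-request timeline recomputation that appears to drive the latency and CPU numbers discussed above.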