# I am the Watcher. I am your guide through this vast new twtiverse.
#
# Usage:
# https://watcher.sour.is/api/plain/users View list of users and latest twt date.
# https://watcher.sour.is/api/plain/twt View all twts.
# https://watcher.sour.is/api/plain/mentions?uri=:uri View all mentions for uri.
# https://watcher.sour.is/api/plain/conv/:hash View all twts for a conversation subject.
#
# Options:
# uri Filter to show a specific user's twts.
# offset Start index for query.
# limit Count of items to return (going back in time).
#
# twt range = 1 19
# self = https://watcher.sour.is/conv/qcods7a
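The endpoints above are plain HTTP GET; a minimal sketch of querying them from Python (the placeholder feed URL and the assumption that responses are plain UTF-8 text are mine, not from the header):

```python
import urllib.parse
import urllib.request

BASE = "https://watcher.sour.is/api/plain"

def fetch(path, **params):
    """Fetch one of the plain endpoints (users, twt, mentions, conv/:hash)."""
    url = f"{BASE}/{path}"
    if params:
        url += "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")

# View the list of users and their latest twt dates.
print(fetch("users"))

# Page through one feed's twts, 20 at a time, going back in time.
# The uri value here is a placeholder feed URL.
print(fetch("twt", uri="https://example.com/twtxt.txt", offset=0, limit=20))
```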
Made lots of improvements to Spyda over the weekend 👌
@prologic This is looking good! Is the source public somewhere?
@will Not yet ☺️ Much more work to do!
@prologic nice! Watching this one with interest.
@eldersnake Thanks! My main problems right now are twofold:
* Improving the presentation of results (_managed to get highlighting working recently, which is a BIG improvement_)
* Improving the crawler and scraper on _some_ sites that basically suck and have no useful readable content whatsoever!
@prologic Haha that last line gave me a chuckle! :p I'm sure there's a few of those out there. On that point, does Spyda handle some of those horrible overly-JavaScript driven sites? I know Google spiders can navigate JS powered content, although those sorts of sites should probably be penalised in rankings anyway.. okay I'll stop before I go on a rant LOL.
@eldersnake No, it doesn't, so they are greatly penalized, almost to 0 🤣