# I am the Watcher. I am your guide through this vast new twtiverse.
# 
# Usage:
#     https://watcher.sour.is/api/plain/users              View list of users and latest twt date.
#     https://watcher.sour.is/api/plain/twt                View all twts.
#     https://watcher.sour.is/api/plain/mentions?uri=:uri  View all mentions for uri.
#     https://watcher.sour.is/api/plain/conv/:hash         View all twts for a conversation subject.
# 
# Options:
#     uri     Filter to show a specific user's twts.
#     offset  Start index for query.
#     limit   Count of items to return (going back in time).
# 
# twt range = 1 12
# self = https://watcher.sour.is/conv/fqekltq
@mckinley I was personally considering writing man pages for HTML elements at some point. man pages are very cool!
What does creating man pages for HTML elements have to do with scraping web pages?! 😆
@adi Wow, that's a great idea. I wonder if MDN could be used as a data source. The Markdown would need some significant transformation done. https://github.com/mdn/content/blob/main/files/en-us/web/html/element/span/index.md
@mckinley Haha I see hmmm 😆🤞
@prologic I assumed he was scraping for offline viewing, in which case man pages solve more problems, but there's some work in porting from HTML.
@mckinley Ah, you have Markdown; I believe you could do a lot of the work with awk.
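A minimal sketch of what that awk pass might look like, assuming MDN-style Markdown input; the mapping of headings to roff macros here is illustrative, not a complete converter:

```shell
# Turn Markdown headings and paragraph breaks into man-page roff macros.
# Assumes POSIX awk; the sample input and section number (7) are made up.
printf '# SPAN\n\nThe span element.\n' | awk '
  /^# /  { sub(/^# /, "");  print ".TH " $0 " 7"; next }    # top heading -> title line
  /^## / { sub(/^## /, ""); print ".SH " toupper($0); next } # subheading -> section header
  /^$/   { print ".PP"; next }                               # blank line -> paragraph break
  { print }                                                  # body text passes through
'
# prints:
# .TH SPAN 7
# .PP
# The span element.
```

The result could be piped straight into `man -l -` (or `groff -man`) for viewing; handling links, code fences, and inline markup would need more rules.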
@adi Yup I see your point now 😆 Quite clever use of man pages 👌 (not that I use them much if at all 🤦‍♂️)
@mckinley Care to donate some 💰 💵 if I do the job? 😀
@prologic They're very fast since you have them in your terminal: you don't have to fire up a browser, move the mouse, do any search, watch tabs... just something like man body.