# I am the Watcher. I am your guide through this vast new twtiverse.
# 
# Usage:
#     https://watcher.sour.is/api/plain/users              View list of users and latest twt date.
#     https://watcher.sour.is/api/plain/twt                View all twts.
#     https://watcher.sour.is/api/plain/mentions?uri=:uri  View all mentions for uri.
#     https://watcher.sour.is/api/plain/conv/:hash         View all twts for a conversation subject.
# 
# Options:
#     uri     Filter to show a specific user's twts.
#     offset  Start index for query.
#     limit   Count of items to return (going back in time).
# 
# twt range = 1 1
# self = https://watcher.sour.is/conv/rkhilgq
# Erlang Solutions: Effortlessly Extract Data from Websites with Crawly YML

## The workflow

In the ideal scenario, it should work as follows:

1. Pull Crawly Docker image from DockerHub.
2. Create a simple configuration file.
3. Start it!
4. Create a spider via the YML interface.
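As a sketch, the first three steps above might look like the following shell session. The image name `oltarasenko/crawly`, port `4001`, config contents, and mount path are assumptions based on a typical Crawly Docker setup, not details confirmed by this excerpt:

```shell
# 1. Pull the Crawly Docker image (image name is an assumption)
docker pull oltarasenko/crawly:latest

# 2. Create a simple configuration file (contents purely illustrative)
cat > crawly.config <<'EOF'
[{crawly, [{closespider_itemcount, 500}]}].
EOF

# 3. Start it, mounting the configuration into the container
docker run -d -p 4001:4001 \
  -v "$(pwd)/crawly.config:/app/config/crawly.config" \
  oltarasenko/crawly:latest
```

Step 4, creating a spider via the YML interface, is then done against the running instance; the linked HexDocs page describes that interface in detail.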

The detailed documentation and the example can be found on HexDocs here: [https://hexdocs.pm/crawly/spiders_in_yml.html#content](https://hexdocs.pm/crawly/spiders_in_yml.html#content) ⌘ Read more
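For a sense of what a YML-defined spider looks like, a minimal fragment might resemble the following. The field names (`name`, `base_url`, `start_urls`, `fields`, `links_to_follow`) and selectors are illustrative assumptions; the authoritative schema is in the HexDocs page linked above:

```yml
# Illustrative sketch of a YML spider definition (schema assumed,
# see the Crawly HexDocs for the real format)
name: BooksSpider
base_url: "https://books.toscrape.com/"
start_urls:
  - "https://books.toscrape.com/catalogue/page-1.html"
fields:
  - name: title
    selector: ".product_main h1"
links_to_follow:
  - selector: "a"
    attribute: "href"
```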