# I am the Watcher. I am your guide through this vast new twtiverse.
# 
# Usage:
#     https://watcher.sour.is/api/plain/users              View list of users and latest twt date.
#     https://watcher.sour.is/api/plain/twt                View all twts.
#     https://watcher.sour.is/api/plain/mentions?uri=:uri  View all mentions for uri.
#     https://watcher.sour.is/api/plain/conv/:hash         View all twts for a conversation subject.
# 
# Options:
#     uri     Filter to show a specific user's twts.
#     offset  Start index for query.
#     limit   Count of items to return (going back in time).
# 
# twt range = 1 3
# self = https://watcher.sour.is/conv/zy4an6a
Over the past few weeks I've been experimenting with deep learning and researching neural networks and how to evolve them. The thing is, I haven't gotten very far. I've been able to build two different approaches so far, with limited results. The frustrating part is that these things are so "random" it isn't even funny. Like I can't even get a basic ANN + GA to evolve a network that solves the XOR pattern every time with high levels of accuracy. 😞
This is one of my attempts:


$ go build ./cmd/xor/... && ./xor
Generation  95 | Fitness: 0.999964 | Nodes: 9   | Conns: 19
Target reached!

Best network performance:
  [0 0] → got=0 exp=0 (raw=0.000) ✅
  [0 1] → got=1 exp=1 (raw=0.990) ✅
  [1 0] → got=1 exp=1 (raw=0.716) ✅
  [1 1] → got=0 exp=0 (raw=0.045) ✅
Overall accuracy: 100.0%
Wrote best.dot – render with `dot -Tpng best.dot -o best.png`
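
For anyone curious what the ANN + GA idea looks like in code, here's a rough Go sketch of the simplest possible version: a fixed 2-2-1 feedforward net whose weights are evolved by an elitist, mutation-only GA against an XOR fitness score (1 minus mean squared error). The names here (genome, forward, fitness) are made up for the illustration; this is not the code behind the run above, which also evolves topology (hence the Nodes/Conns counts).

// xor_ga_sketch.go - minimal sketch: evolve the weights of a fixed
// 2-2-1 sigmoid network to solve XOR with a mutation-only GA.
// Illustration only; the real experiment evolves topology as well.
package main

import (
	"fmt"
	"math"
	"math/rand"
)

// XOR truth table: input a, input b, expected output.
var cases = [][3]float64{
	{0, 0, 0},
	{0, 1, 1},
	{1, 0, 1},
	{1, 1, 0},
}

func sigmoid(x float64) float64 { return 1 / (1 + math.Exp(-x)) }

// genome: 9 weights for a 2-2-1 net
// (2x2 hidden weights + 2 hidden biases + 2 output weights + 1 output bias).
type genome [9]float64

// forward runs the fixed-topology network on one input pair.
func (g genome) forward(a, b float64) float64 {
	h1 := sigmoid(a*g[0] + b*g[1] + g[2])
	h2 := sigmoid(a*g[3] + b*g[4] + g[5])
	return sigmoid(h1*g[6] + h2*g[7] + g[8])
}

// fitness is 1 minus the mean squared error over the XOR table,
// so a perfect network scores 1.0.
func (g genome) fitness() float64 {
	var mse float64
	for _, c := range cases {
		d := g.forward(c[0], c[1]) - c[2]
		mse += d * d
	}
	return 1 - mse/float64(len(cases))
}

func main() {
	const popSize = 150
	pop := make([]genome, popSize)
	for i := range pop {
		for j := range pop[i] {
			pop[i][j] = rand.NormFloat64() * 2 // random initial weights
		}
	}
	for gen := 0; gen < 1000; gen++ {
		// find the fittest individual of this generation
		best := pop[0]
		for _, g := range pop[1:] {
			if g.fitness() > best.fitness() {
				best = g
			}
		}
		if best.fitness() > 0.999 {
			fmt.Printf("Generation %3d | Fitness: %f\n", gen, best.fitness())
			return
		}
		// next generation: mutated copies of the best (elitist, mutation-only)
		for i := range pop {
			child := best
			j := rand.Intn(len(child))
			child[j] += rand.NormFloat64() * 0.5
			pop[i] = child
		}
		pop[0] = best // keep the elite unchanged
	}
	fmt.Println("Did not converge - try again with a fresh population.")
}

Even this toy version can stall in a local optimum now and then, which is presumably the same "randomness" frustration described above; restarting with a fresh random population is the usual workaround.
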
@bender There is no aim. Just learning 😅 That way I can actually speak and write with a bit more authority when it comes to these LLMs 🤣 Or maybe I just happen to become that random weirdo genius that invents Skynet™ 😂