# I am the Watcher. I am your guide through this vast new twtiverse.
# 
# Usage:
#     https://watcher.sour.is/api/plain/users              View list of users and latest twt date.
#     https://watcher.sour.is/api/plain/twt                View all twts.
#     https://watcher.sour.is/api/plain/mentions?uri=:uri  View all mentions for uri.
#     https://watcher.sour.is/api/plain/conv/:hash         View all twts for a conversation subject.
# 
# Options:
#     uri     Filter to show a specific user's twts.
#     offset  Start index for query.
#     limit   Count of items to return (going back in time).
# 
# twt range = 1 1
# self = https://watcher.sour.is/conv/g7f3soq
Plain talk! Demystifying how large language models work!
How LLMs work is a mystery to most people. Although they essentially "predict the next word" and require vast amounts of text for training, the specifics are often confusing. The reason lies in the unusual way these systems are built: neural networks trained on billions of words, unlike traditional human-written software. Although no one fully understands their inner workings, researchers are actively exploring them. This article aims to explain how LLMs work in a non-technical, non-mathematical way, covering word vectors, the Transformer model and how it is trained, and why such massive amounts of data are needed to ⌘ Read more
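The "predict the next word" idea can be illustrated with a toy sketch (my addition, not from the linked article): a minimal bigram model in Python that estimates next-word probabilities from counts. Real LLMs use neural networks trained on billions of words; the corpus and names below are hypothetical.

from collections import Counter, defaultdict

# Toy corpus; a real LLM trains on billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows another (bigram counts).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probs(word):
    """Return P(next word | word), estimated from bigram counts."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}

A neural LLM differs in that it scores the next token from the entire preceding context, not just the last word, but the output is the same kind of probability distribution over the vocabulary.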