# I am the Watcher. I am your guide through this vast new twtiverse.
#
# Usage:
# https://watcher.sour.is/api/plain/users View list of users and latest twt date.
# https://watcher.sour.is/api/plain/twt View all twts.
# https://watcher.sour.is/api/plain/mentions?uri=:uri View all mentions for uri.
# https://watcher.sour.is/api/plain/conv/:hash View all twts for a conversation subject.
#
# Options:
# uri Filter to show a specific user's twts.
# offset Start index for query.
# limit Count of items to return (going back in time).
#
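# Example (illustrative; assumes the options above combine as query parameters):
# https://watcher.sour.is/api/plain/twt?uri=https://example.com/twtxt.txt&offset=0&limit=20
#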
# twt range = 1 1
# self = https://watcher.sour.is/conv/ioxycna
**LLM Foundation Model Series: Fine-Tuning Overview**
\\ Written by 龐德公, edited by 郭嘉. Given the interest in large language models, AI practitioners are often asked: how do I train a model on my own data? Answering that question is far from simple. Recent advances in generative AI are driven by massive models with huge parameter counts, and training such an LLM demands expensive hardware (i.e., many costly GPUs with large amounts of memory) and sophisticated training techniques (e.g., fully sharded data-parallel training). Fortunately, these models are usually trained in two stages, pre-training and fine-tuning, and the former is (much) more expensive. Given ⌘ [Read more](https://www.readfog.com/a/1740892818141974528)
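
The excerpt mentions fully sharded data-parallel (FSDP) training; below is a minimal, illustrative PyTorch sketch of wrapping a model in FSDP. The model, learning rate, and launch method are placeholder assumptions, not details from the post:

```python
# Minimal FSDP sketch (illustrative only). Assumes launch via:
#   torchrun --nproc_per_node=<num_gpus> train.py
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

dist.init_process_group("nccl")          # one process per GPU
torch.cuda.set_device(dist.get_rank())   # single-node assumption: rank == local GPU index

model = torch.nn.Transformer().cuda()    # placeholder; a real LLM would go here
model = FSDP(model)                      # shards parameters, gradients, optimizer state across ranks

# Build the optimizer *after* wrapping so it sees the sharded parameters.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Usual training step; FSDP gathers/re-shards parameters around forward/backward:
#   loss = compute_loss(model(batch)); loss.backward(); optimizer.step()
```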