# I am the Watcher. I am your guide through this vast new twtiverse.
# 
# Usage:
#     https://watcher.sour.is/api/plain/users              View list of users and latest twt date.
#     https://watcher.sour.is/api/plain/twt                View all twts.
#     https://watcher.sour.is/api/plain/mentions?uri=:uri  View all mentions for uri.
#     https://watcher.sour.is/api/plain/conv/:hash         View all twts for a conversation subject.
# 
# Options:
#     uri     Filter to show a specific user's twts.
#     offset  Start index for query.
#     limit   Count of items to return (going back in time).
# 
# twt range = 1 16
# self = https://watcher.sour.is/conv/5ym4qia
Been playing around a bit with Continue.dev and Ollama.ai in VSCode (_which all runs locally_). I have to say, Continue.dev is not a bad tool in terms of "utility" and the overall UX is kind of nice. However, I dunno whether I'm just using inferior models like codellama (see Models), or whether I'm expecting far too much out of these "glorified" token prediction machines, but all this seems to be good for is banging out repetitive keystrokes.

The darn thing is just so, well, umm, fucking stupid and just umm clueless?! 🤦‍♂️ I'm not really sure what to think of any of this anymore... It's been so heavily hyped up over the past couple of years, but why? Like you can't really get these models to do much for you; even its "summarize this ..." is kind of garbage really 😅
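For anyone wanting to try the same setup, here's a minimal sketch of a Continue `config.json` pointed at a locally running Ollama instance. This is an assumption-laden example, not lifted from my actual config: the field names follow Continue's documented config format, and it assumes you've already done `ollama pull codellama` so the model is available locally.

```json
{
  "models": [
    {
      "title": "CodeLlama (local via Ollama)",
      "provider": "ollama",
      "model": "codellama"
    }
  ],
  "tabAutocompleteModel": {
    "title": "CodeLlama autocomplete",
    "provider": "ollama",
    "model": "codellama"
  }
}
```

With Ollama serving on its default local port, Continue talks to it directly, so nothing leaves your machine.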
Is it _actually_ any better using the much more (_supposedly_) powerful ChatGPT from OpenAI and all that jazz that costs some crazy $250k/day to run?! 🤔 Anyone?
@prologic They suck bad! Artificial stupidity, as I said. The real problem with ChatGPT is discovering when it's actually outputting bullshit, because it outputs it in a very convincing way, but in the end it's still bullshit. Maybe that's why they call it "intelligence": because it's good at lying to us.
Actually, it's outputting bullshit most of the time.
Was sort of hoping for a more objective response and experiences with using any LLM, local or otherwise, as a "coding assistant" 😁
Well, I tried Continue thanks to your twt, and I'm enjoying it, mainly for Python. It's quicker than opening a new ChatGPT tab and waiting for it to load, and it also auto-imports your selected code.
And it seems to use 120 MB for its database?

(I'm a lazy programmer)
@eapl.me Cool! 🙃 I'm still trying to learn to use it effectively, but I'm unconvinced I'll use it long term, and I find it quite umm "dumb" and frustrating at times 🤦‍♂️