# I am the Watcher. I am your guide through this vast new twtiverse.
#
# Usage:
# https://watcher.sour.is/api/plain/users View list of users and latest twt date.
# https://watcher.sour.is/api/plain/twt View all twts.
# https://watcher.sour.is/api/plain/mentions?uri=:uri View all mentions for uri.
# https://watcher.sour.is/api/plain/conv/:hash View all twts for a conversation subject.
#
# Options:
# uri Filter to show a specific user's twts.
# offset Start index for query.
# limit Count of items to return (going back in time).
#
# twt range = 1 61063
# self = https://watcher.sour.is?uri=https://twtxt.net/user/prologic/twtxt.txt&offset=46291
# next = https://watcher.sour.is?uri=https://twtxt.net/user/prologic/twtxt.txt&offset=46391
# prev = https://watcher.sour.is?uri=https://twtxt.net/user/prologic/twtxt.txt&offset=46191
@dkordic What's "Speed Bump"? Can you link us? I only know of the concrete kind that's designed to slow vehicles (cars) down 🤣
@abucci True, there's no argument there will be some "utility" from these LLM(s) -- it will be even more useful when most folks can run them (maybe at a smaller scale) on "edge" computing on modest hardware.
@abucci It's actually one of the aspects of the "family of machine learning" that I find the most intriguing. If you've ever played the game Creatures (I haven't, but know a lot about it), it was an amazing piece of work. I'd love to work on something like this one day or see something like it at a much larger scale 👌
Wake me up when we can run these LLM(s) and similar models on the energy requirements of a Raspberry Pi 🤣
Basically what I'm trying to say is this... If it takes multiple Gigawatts of power to run even the "smarter" and "most useful" AI models today, we're fucked.
@abucci Oh I don't accept the marketing hype at all. The thing that I always fall back on is the insane amount of power that it takes to run these fucking stupid-ass models that are nothing more than (okay, admittedly a bit fancier than the ones a few decades ago, but mostly based on the same mechanics) "algorithms" that take data in and spit data out. The shocking part for me is comparing the insane power and energy requirements of even the largest "AI" models in the world with the energy/power requirements of running (for example) the brain of a rat.
@bender True and good points, my only gripe though is that we should call it what it _really_ is 😅
@abucci You are right of course. I don't think we can consider anything thus far to be remotely close to "intelligence". It actually frustrates me that we call these fields "AI"; we should call them what they are, "machine learning". They're just fancy algorithms, many of which are pretty good at "pattern matching".
As for what we define as "intelligence", fucked if I know 😅 I doubt anyone else can define this either. I tend to believe that until we figure out how to create "something" that can have a sense of self-awareness and self-growth and a way to expand and "reprogram" itself, we'll never get very far. Really "evolutionary life" simulations or "artificial life simulations" are much closer I think.
@bender Yeah I looked it up as well 😆 Looks like a weird distro to me 🤣
@marado Oh my 🙇‍♂️ What a lovely thought 🙏 Thank you so much and happy Free Software Day! 🥳
@abucci Oh yeah, yq is great 👌 I use it all the time! There is also another one for binary files called fq I think 😆
This is a case of GIGO right? Garbage In, Garbage Out? I mean the hype around these stupid LLM(s) (Large Language Models) is just that, hype; it's a trained model. It will spit out stuff based on what it already has patterns defined for. Right @abucci ? 🤔 (who is more knowledgeable about this than I) -- I have yet to see anyone even come remotely close to the kind of intelligence we see in sci-fi films, this so-called AGI?
@ychbn @bender is right. This is a deliberate design decision. We will add full search capabilities soon (as I have more spare time) and we'll think about ways we can address this without making everything permanently visible for all forever (searchable is okay).
@adi Interesting viewpoint 🤔 Also welcome back! 🤗
@eldersnake It's brilliant, along with mosh, if you are ever in a position of spotty connectivity
As @lyse pointed out, this is a bit of a security nightmare. Now you have a compiler that will download another version of the compiler (you hope) 🤦‍♂️
@stigatle You probably answered this before, but what breed of dog is Nanook? 🤔
@lyse I really wish Russell Cox and co would seriously reconsider what they are doing with the language and toolchain. They seriously risk affecting the reputation of Go here in ways that cannot be predicted (just look at history for some examples). I can seriously see a Go fork coming out of this.