Yeah I'm kind of glad they're better at Hardware too and not this (questionable) "social media" thing 🤣 #Mitre10 #Hardware #Social
Rebuilding the yarnd UI using BeerCSS from scratch, but it's an awful lot of work 🙄
Finally fixed it so that usernames mentioned in a post show up as @user, and not with brackets and the twtxt file URL. Looks so much better now! One thing I want to focus on next is handling replies to a status, as that will make it much easier to follow a conversation.
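For anyone curious, the raw twtxt mention syntax is @<nick url>, so the display fix boils down to stripping the angle brackets and the URL. A minimal sed sketch (not yarnd's actual code; the nick and URL here are made up):

echo 'Thanks @<lyse https://lyse.isobeef.org/twtxt.txt> for the photos!' | sed -E 's/@<([^ >]+) [^>]+>/@\1/g'
# prints: Thanks @lyse for the photos!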
Oh I bet, nearly getting hit by lightning is very frightening.
Fighting kestrels! More peaceful before that: https://lyse.isobeef.org/turmfalke-2024-08-07/
david@dreadnought:~/$ apt-cache depends -i --recurse shellcheck
shellcheck
  Depends: libc6
  Depends: libffi8
  Depends: libgmp10
libc6
  Depends: libgcc-s1
libffi8
  Depends: libc6
libgmp10
  Depends: libc6
libgcc-s1
  Depends: gcc-14-base
  Depends: libc6
gcc-14-base

david@dreadnought:~/$ sudo apt depends shellcheck
shellcheck
  Depends: libc6 (>= 2.34)
  Depends: libffi8 (>= 3.4)
  Depends: libgmp10 (>= 2:6.2.1+dfsg1)
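To flatten that recursive output into a plain, deduplicated package list, a small pipeline over the exact format shown above should do (plain awk and sort, nothing apt-specific):

apt-cache depends -i --recurse shellcheck | awk '/Depends:/ {print $2}' | sort -u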
You do have an interesting point there 🤔 Seems rather wasteful just to produce some heat 🔥
The result is interesting, but the Neuroscience News headline greatly overstates it. If I've understood right, they are arguing (with strong evidence) that the simple technique of making neural nets bigger and bigger isn't quite as magically effective as people say --- if you use it on its own. In particular, they evaluate LLMs without two common enhancements, in-context learning and instruction tuning. Both of those involve using a small number of examples of the particular task to improve the model's performance, and they turn them off because they are not part of what is called "emergence": "an ability to solve a task which is absent in smaller models, but present in LLMs".
They show that these restricted LLMs only outperform smaller models (i.e. demonstrate emergence) on certain tasks, and then (end of Section 4.1) discuss the nature of those few tasks that showed emergence.
I'd love to hear more from someone more familiar with this stuff. (I've done research that touches on ML, but neural nets and especially LLMs aren't my area at all.) In particular, how compelling is this finding that zero-shot learning (i.e. without in-context learning or instruction tuning) remains hard as model size grows?
When I woke up at 5am, I had a quick look in the Northern sky and saw a tiny shooting star. I then happily went back to bed. :-)
In tt, I have to press r to toggle the read status for each and every message. The disadvantage is that I have to mark all messages read explicitly; the advantage is exactly that same explicitness, hence no silly automation messes with me and causes wild surprises. But in theory it would be possible to automatically mark a message read when it has been selected for three seconds or something like that. Not sure, though, how well any of that would work with a web UI.
Again, I could completely misunderstand the use case here. But assuming it's not connected to the internet: since you just have HTML and plain text files on the USB stick, with no PHP or other stuff that needs to be interpreted first, you could just view these files locally in any browser (via the local file:// protocol) without a web server (via http(s)://) in between. Much simpler.
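For example, assuming the stick is mounted at /media/usb (path made up), either of these opens the page straight from the filesystem:

firefox file:///media/usb/index.html
xdg-open /media/usb/index.html   # or let the default browser handle it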
tired legs. felt overall a bit easy but still feel like i am getting over something.
#running #treadmill
> I don’t know how we will handle the resetting of it, after reading…
I thought about it a few times, but I've never really been able to come up with a viable solution to that.
[Image: Phanpy, a Fediverse client, showing the little bell in the top right corner with a dimmed dot, indicating activity]
I run php-fpm with Caddy. Unless I am missing something, FrankenPHP is a modified Caddy. If I already run Caddy, why would I need another one? Of course, FrankenPHP might fit @off_grid_living's needs, if he is to switch from Apache to FrankensteinPHP.
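For reference, hooking php-fpm into plain Caddy is only a couple of Caddyfile lines; a minimal sketch, assuming a typical Debian-style FPM socket path (yours may differ):

example.com {
	root * /var/www/html
	php_fastcgi unix//run/php/php-fpm.sock
	file_server
}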
Even with #!/bin/sh, it still gets me a Bash that does NOT enter strict POSIX mode. 🫤 The script below uses Bashisms and requests #!/bin/sh but still runs happily …

#!/bin/sh
foo=1
if [[ "$foo" == 1 ]]
then
    echo match
fi
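A quick way to make the Bashism visible, assuming shellcheck and dash are installed (the exact warning code may differ between ShellCheck versions):

shellcheck test.sh   # follows the #!/bin/sh shebang and flags [[ ]] as non-POSIX (SC3010)
dash test.sh         # a stricter sh refuses outright: "[[: not found"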