Crazy, never seen or heard of that. Very interesting.
Not bad, sounds like a cool setup.

With all the rain we had yesterday, the forest paths were saturated with puddles. One big hell of a mess. I could also see plenty of new ponds everywhere next to the tracks in the undergrowth. But I assume in a few days it will all be gone again. 15 more photos.

I actually wanted to hike much further, but my calf somehow hurt, so I slowed down and strolled around some other routes. Also didn't need my tucker, but carrying water was a good idea. The clouds made for some rather nice scenery but had disappeared by the evening, so the sunset wasn't all that nice today. But oh well. It still was a very nice trip into nature.
I never put much thought into it, but I reckon that plenty of things can be traced back to some pretty old roots. Perhaps especially if there is some kind of power and control involved. Also I think that most people nowadays (maybe also in the past?) – including me – don't get the connections. They just take it for granted, that's the way it is. No questions asked about the why, because there's no important reason to do so (other than curiosity and interest in the subject). Just my wild guess. :-)
The three red-tongued black lions are the lesser coat of arms of this federal state here and date back to the Dukes of Swabia and the Staufer dynasty in particular. Mt. Hohenstaufen was the Staufers' local home mountain, so it's all really connected. The flag here is on exactly that mountain. Frederick I, or just Barbarossa ("red beard"), is the most famous Staufer; he also became Holy Roman Emperor. That's all I know about the lions, heraldry is not my specialty, but the article might get you going with your research. ;-)

They replaced the flag, but the shredded scraps still hang all over the trees. Nothing spectacular today, so it's not really worth heading over. In case you still do and wonder what the sign says: "End of sledge run". At like 6 or 7°C it was really warm today. Sweated like a pig. They cleared and pushed over plenty of trees on the hillside, which looks really sad now. I spared you the look.
In July (the conversation you just linked) I think I was coming down the narrow, beaten track from my mountain. The squirrels over there seem to be a bit more tame than the rest. At least I was able to capture them a few times already. Here it was maybe three meters away from me. But usually they don't come *this* close, it was an exception. Something like up to ten meters is realistic with them, though. With other squirrels, probably 20 meters at most.
But I also stop and sneak in very slowly when I see one. It's not like they're coming directly towards me when I just walk by there. Sometimes it also takes some patient waiting to blend in with the surroundings or build trust and confidence that I'm not a predator of some sort.
By far the craziest squirrel I've ever met was this one. Probably a once in a lifetime experience.



My second best choice would have been the flying raven. Hahaha, yes, go for your new forest office! You'll love it. :-D

The 8°C were quite nice though. Only the colors suffer a bit. Sorry. Anyways. Even caught a raven in flight, which I'm a tiny bit proud of. Also explored a new path and found a top notch structure. You can move right in. Furniture included.
And yes, this is the first time I actually used GitHub as a code search engine to try figuring things out by looking at real code and attempting to map their approaches to my problems. Some code looks like complete garbage, while other code was definitely written by someone who is much more knowledgeable than I am. Another thing I didn't think of before is that I somehow need to find a way to ignore duplicates. A lot of search results just list the same code for all of the fifty quadrillion forks or more. That's a bit annoying. Yeah, one piece of code showed up literally over 40 times already.
Oh nice, this reduction is quite an improvement, I wouldn't have guessed that. And yes, faster programs are always great. Fully agree here.
`tt` rounds to the next minute. But it shouldn't be very hard to go to the next five minutes, no. The only problem is that I manually have to make sure I don't create duplicate timestamps. I already have an item on my todo list that `tt` should help with that by coloring the timestamp field if a duplicate is found. I'm always posting from the future. Or am I!?!?

@movq Maybe because they look nice. :-P
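Going from "next minute" to "next five minutes" is basically one extra rounding step. A rough sketch of the idea, not `tt`'s actual code (the function name and the behavior at exact boundaries are my own assumptions):

```python
# Rough sketch, not tt's actual code: round a timestamp up to the next
# five-minute mark.
import datetime

def round_up_to_five_minutes(dt):
    # Bump to the next full minute if there are leftover seconds, then pad
    # up to the next multiple of five minutes.
    if dt.second or dt.microsecond:
        dt += datetime.timedelta(minutes=1)
    dt = dt.replace(second=0, microsecond=0)
    remainder = dt.minute % 5
    if remainder:
        dt += datetime.timedelta(minutes=5 - remainder)
    return dt

print(round_up_to_five_minutes(datetime.datetime(2022, 1, 30, 13, 37, 42)))
# -> 2022-01-30 13:40:00
```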
The `User-Agent` header is not verified either. So you can advertise your shitty spam feed this way, if the attacked feeds happen to look at their log files.
(`pudb` FTW!) with `dateutil.parser.parse("2022--31T23:59:00+01:00")`, where the month is missing. Turns out I'm running version 2.8.1 and somebody fixed this bug already half a year ago in version 2.8.2. In the virtual environment I set up on Friday I already had the fix, but not in my system-level Python installation. The Debian package, even for sid, still ships the outdated version with the bug. Instead I had to use `pip` to upgrade. Sigh.
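For reference, a quick way to check which `dateutil` a given interpreter actually picks up (the fix shipped with 2.8.2; the PyPI package is called `python-dateutil`):

```python
# Print the dateutil version the current interpreter imports; run this in
# both the system Python and the virtual environment to compare.
import dateutil
print(dateutil.__version__)
```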
`tt`. I will try to split this up next time.
In `tt` the creation timestamp of a twt can be changed, in case one wants to. And I certainly do, because of privacy reasons I *fancy*. My timestamps are all (with one single, but important exception) in five-minute granularity. Yup, since the very beginning. You probably haven't noticed, because you don't care too much about these timestamps anyway. So it's alright that I continue this fashion, which you might consider silly. And now some of you got curious and checked my raw feed. Got you! ;-)

Last night I also wanted to fiddle with the creation timestamp and somehow something went wrong, at some place in the `dateutil` library, which I also mentioned in said blown up twt. No wonder it was cursed. The error didn't happen in my own code but rather occurred in the `dateutil` library when it tried to handle an error about an invalid timestamp. Very weird, I haven't been able to reproduce it since. No idea what went wrong here. The stripped and annotated stacktrace is as follows:
```
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/dateutil/parser/_parser.py", line 655, in parse
    ret = self._build_naive(res, default)
  File "/usr/lib/python3/dist-packages/dateutil/parser/_parser.py", line 1238, in _build_naive
    if cday > monthrange(cyear, cmonth)[1]:
  File "/usr/lib/python3.9/calendar.py", line 124, in monthrange
(3)     raise IllegalMonthError(month)
(4) calendar.IllegalMonthError: bad month number 0; must be 1-12

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  …cut off for brevity…
  File "/usr/lib/python3/dist-packages/urwid/widget.py", line 461, in _emit
    signals.emit_signal(self, name, self, *args)
  File "/usr/lib/python3/dist-packages/urwid/signals.py", line 265, in emit
    result |= self._call_callback(callback, user_arg, user_args, args)
  File "/usr/lib/python3/dist-packages/urwid/signals.py", line 295, in _call_callback
    return bool(callback(*args_to_pass))
  File "/usr/local/bin/tt", line 552, in update_preview
    preview_widgets, rows_calculating_delegate_widget = self._render_new_twt()
  File "/usr/local/bin/tt", line 565, in _render_new_twt
    created_at = self._created_at
  File "/usr/local/bin/tt", line 627, in _created_at
    created_at = dateutil.parser.parse(self._created_at_edit.edit_text)
  File "/usr/lib/python3/dist-packages/dateutil/parser/_parser.py", line 1374, in parse
    return DEFAULTPARSER.parse(timestr, **kwargs)
  File "/usr/lib/python3/dist-packages/dateutil/parser/_parser.py", line 657, in parse
(1)     six.raise_from(ParserError(e.args[0] + ": %s", timestr), e)
(2) TypeError: unsupported operand type(s) for +: 'int' and 'str'
```
So as seen in (1), `e.args[0]` must have been an integer rather than the string (2) that the `dateutil` programmer expected, and so the `TypeError` was raised by the Python interpreter. To me it appears as if the `calendar.IllegalMonthError` from above in (3) was the culprit. Okay, it wasn't expected to be caught down there at around (1) and (2). Maybe. Judging by its message (4), the bad month number `0` is the first and only argument, as seen in the constructor call in (3).

If everything had gone to plan, the `ParserError` in (1) would have been caught in `tt` and everything would have been dandy. The field would have just appeared in red and the "Publish" command button deactivated. But I only caught the `ParserError`, not the `TypeError`. So it blew apart.

Now again, I haven't been able to reproduce this so far. No luck whatsoever. Writing this twt gave me the idea that I could now look into the code to see how the `calendar.IllegalMonthError` is triggered in the first place. I have the slight feeling that this could be the ticket. So please excuse me now, I have to get my scuba gear, I'm about to dive deep. The original twt will be reconsidered another time.

On a closing note, since starting this post exactly 45 minutes have passed. I will hit the "Publish" button in a bit more than 20 seconds to avoid another lost twt.
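For what it's worth, here's a hedged sketch of what the more defensive validation could look like. This is not `tt`'s actual code, just the general idea of catching the unexpected `TypeError` alongside the `ParserError`, so the field can simply turn red instead of blowing up:

```python
# Sketch only: validate the timestamp from the edit field. ParserError is a
# ValueError subclass; TypeError covers dateutil 2.8.1's buggy error handling
# seen in the traceback above. Returning None lets the caller mark the field
# red and disable the "Publish" button.
import dateutil.parser

def parse_created_at(text):
    try:
        return dateutil.parser.parse(text)
    except (ValueError, OverflowError, TypeError):
        return None
```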
`tt` in the future. Let's see if I find some motivation today to write up again everything I lost last night. Wasted exactly an hour, to the minute.
`NameError` after `NameError`. :-D
`async`/`await` stuff for several years now. This is the first time I'm actually getting after it. During this endeavor I came across the nice "What color is your function?" article. Highly recommended. And so far I completely agree with the author: Go's model using goroutines is much nicer to work with compared to the explicit style of declaring where a task can be switched. But I'm far from being used to `async`/`await`. So maybe once I wrap my head around it, it gets comparably easy.

And lastly, it's supposed to integrate well with `urwid`'s event/IO loop, which I rely on in `tt`. I'm finally trying to fetch the feeds on my own, so I can rip out the `twtxt` reference implementation. Still a very long way to go, but I have to start somewhere, and why not directly at the root. :-) So far I thought the interoperability between `asyncio` and `urwid` would be really elegant, but it turns out it's a bit more complicated than I imagined (probably because of Python 2 and ancient Python 3 version support). But maybe I just don't know the hidden tricks.

Anyways, to not block the user input, I believe I have to do it this way, there's not much choice. Threads in Python are a joke with the GIL, so I don't want to open that can of worms. If `tt` wasn't a user-driven program, I'd just fetch the feeds synchronously. No need for this more involved stuff if it runs in a cronjob.

On a closing note, once I finally fetch and process the feeds myself, I can also support Gopher/Gemini feeds and implement parts of the metadata spec. Currently, there's absolutely nothing available in the cache, which is written by the reference implementation and read by `tt`. Looking forward to that.
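For the record, a minimal sketch of the kind of wiring I mean, assuming `urwid`'s `AsyncioEventLoop` and `aiohttp`. Names and structure are illustrative, not `tt`'s actual code:

```python
# Minimal sketch: drive urwid with the asyncio event loop so a background
# task can fetch a feed without blocking user input.
import asyncio

import aiohttp
import urwid

async def fetch_feed(url, status, main_loop):
    # Download one feed and report its size in the status widget.
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            text = await response.text()
    status.set_text("fetched %d bytes from %s" % (len(text), url))
    main_loop.draw_screen()  # make the update visible without waiting for input

def main():
    status = urwid.Text("fetching…")
    loop = asyncio.get_event_loop()
    main_loop = urwid.MainLoop(
        urwid.Filler(status),
        event_loop=urwid.AsyncioEventLoop(loop=loop),
    )
    # The task runs on the same asyncio loop that drives urwid's UI.
    loop.create_task(fetch_feed("https://example.com/twtxt.txt", status, main_loop))
    main_loop.run()

if __name__ == "__main__":
    main()
```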
`aiohttp.web.Response` has absolutely no way to exclude the `Content-Type` header in responses; it will always send one by falling back to `application/octet-stream`, regardless of what you do. For testing purposes I'd like to omit this response header, though. So now I monkey-patched my test server handler in the unit tests like this. Let's see when this falls apart with a future aiohttp version:

```python
async def http200_no_content_type(request):
    original_write_headers = request._payload_writer.write_headers

    async def write_headers(status_line, headers):
        del headers["Content-Type"]
        await original_write_headers(status_line, headers)

    request._payload_writer.write_headers = write_headers
    return aiohttp.web.Response(body="abcäöüß".encode("utf-8"))
```
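And, in case it helps, this is roughly how such a handler could be exercised with aiohttp's test utilities. The test itself is hypothetical, not copied from my suite:

```python
# Hypothetical test wiring: serve the patched handler via aiohttp's test
# utilities and check that the response really carries no Content-Type.
from aiohttp import web
from aiohttp.test_utils import TestClient, TestServer

async def test_no_content_type():
    app = web.Application()
    app.router.add_get("/", http200_no_content_type)
    async with TestClient(TestServer(app)) as client:
        response = await client.get("/")
        assert "Content-Type" not in response.headers
```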

Naah, just kidding. Went on a five-hour hike to flee the chainsaw noise from the neighbor's roof. They're getting the rafters and roofing tiles replaced. The weather could have been worse; visibility was poor in the beginning but got better later on. Explored some new paths I've never been on, mostly dead ends as it turned out. Was good fun, though. At home, when I unsuspectingly took off my socks, I discovered two huuuge blisters on my heels.
For your viewing pleasure I stripped the 437 shots down to just 44. Head over if you didn't/don't plan to get off the couch today and want to waste even more time.

Maybe even just use the current Unix timestamp in milli-, micro- or nanoseconds. Seconds-only precision increases the danger of collisions with parallel uploads. In any case you should check for duplicate filenames in case of clock adjustments. It's super simple and fast, though.
Or you could hash the data and use that as the filename, again checking for duplicates. That has the advantage that you can detect identical file uploads. Not entirely sure if that property is something you really want, but it might work out in your favor; uploading the exact same image is probably not of much use. Any hashing algorithm will do, though cryptographic ones should be favored. Hashing does not come for free; some computational effort is required, which varies heavily with the selected algorithm.
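Sketched in Python for illustration (the actual discussion is about PHP, and the helper name is made up), the hash route could look like this:

```python
# Illustration only: name the stored file after a hash of its content, keep
# the original extension, and treat an already existing file as a duplicate
# upload of the same data.
import hashlib
import os

def hashed_filename(data, original_name, upload_dir):
    digest = hashlib.sha256(data).hexdigest()
    extension = os.path.splitext(original_name)[1].lower()
    name = digest + extension
    already_there = os.path.exists(os.path.join(upload_dir, name))
    return name, already_there
```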
Now, if you want to keep as much of the original filename as possible for whatever reason, then `basename($filename)` is a very good start. Limiting it even further to only alphanumeric characters plus dot (`.`), underscore (`_`) and dash (`-`) makes the result a tad better. (Make sure to put the dash as the last character in the character class of the regular expression.) But then you also need to check for duplicates and handle them somehow, since `höllo.jpg` and `høllo.jpg` would both be truncated to the same name (`hllo.jpg`). They might be completely different images, though. Your filename might also end up (nearly) empty or consist of just the extension (depending on the order of checks). You can easily see there are quite a few things to be aware of with that whitelist approach.
So unless you really have to, I'd strongly recommend going the generated-filename route. It'll make your life easier. Pick an approach whose properties suit your use case. Personally, I'd select UUIDs or hashing (probably SHA-1 or even its successors).
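To make the collision point concrete, here's a rough Python equivalent of that whitelist idea (again just an illustration, the original context is PHP):

```python
# Illustration only: basename plus a character whitelist, with the dash as
# the last character of the class. Both umlaut variants collapse to the same
# name, which is exactly the duplicate problem described above.
import os
import re

def sanitize(filename):
    name = os.path.basename(filename)
    return re.sub(r"[^A-Za-z0-9._-]", "", name)

print(sanitize("../höllo.jpg"))  # -> hllo.jpg
print(sanitize("../høllo.jpg"))  # -> hllo.jpg (collision)
```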
`service network stop` in the start script.
`GOPATH=$HOME` wouldn't suit me. On the other hand, _src_ in _~_ is what I do, too, except for Go sources, which in fact bothers me a bit. So actually, while writing this, I think I could change my mind and might give this a try. For the `rm='rm -i '` alias, the space at the end is superfluous. :-) Other than that, very tidy. I should clean up my stuff, too.