i like this one
Well, I won't be the one to complain about them taking the rubber balls away from the dogs.
They introduced these ribbons a few years back. It's a really cool system. The colors of the ribbons vary from town to town; it seems most actually use yellow ones. The rules are to be respectful, only take what you really need (common household amounts), and be careful: don't break branches, don't trample down higher grass, watch out for plants and animals, etc. Sometimes, a tree owner only grants access to a few trees, so you're only allowed to take from the explicitly marked ones. I mean, common sense really, don't be an asshole. :-)
We just pick up what has fallen down. You're also allowed to pick directly from the tree, but the apples on the ground are already fully ripe. Or bad, but you can typically distinguish between the two rather easily. The apples that fall down early are usually full of worms. Later on, it's the ripe ones. Yeah, if a ripe one lands in a patch of spoiled ones, it's also going bad fairly quickly. So, it pays off to visit regularly and check.
Not all apples are equal, though. It's important to check the variety before gathering them. Cider apples are worthless to us. They just taste awful. Typically, these are the tiny ones, but there are also some tiny ones which are actually very delicious. So, a taste test is mandatory.
Then, for apple sauce, we just wash off the occasional dirt on the apples at home. Typically, you can already get rid of the worst by wiping an apple on the grass when picking. We simply cut them in quarters, bigger apples also in eighths. Bad spots and the cores are removed. To avoid oxidation, we throw the pieces in a bowl of water with citric acid. Once that bowl is full, we transfer them into a big pot. Rinse and repeat.
The pot has some water in it, so the apples do not scorch. Shortly before we finish cutting the apples, the stove is heated. Then, we just let the whole mass heat up. Don't forget to stir every now and then. The longer it simmers, the easier it gets to actually stir the now softer mass. It also sinks down a bit. You can also use a potato masher to help get some sort of a pulp.
When the pulp is fairly soft, it's pressed through a strainer. People here call the food mill "Flotte Lotte" (quick Charlotte) after a brand name. We use the finest sieve, with 1 mm holes. Unfortunately, there's no smaller one, but it gets 99.99% of the junk out: skin, missed seeds, all the coarse stuff. After each load, the food mill has to be cleared of pomace so it doesn't plug up all the holes or, worse, press the coarse crap through.
For some strange reason we have not figured out, we got quite a bunch of skin pieces in the apple sauce on Wednesday. Somehow they managed to get through. Very strange, this has never happened before. To filter them out, we just passed the whole thing through the Flotte Lotte a second time.
Around 10% sugar by weight is added to help with preservation (e.g. 300 g of sugar for 3 kg of pulp). A pinch of cinnamon, and once it's mixed up properly, it's basically ready.
Fill the apple sauce into jars and make sure to leave enough space for some expansion when it gets cooked in a moment. Wipe any spilled sauce from the glass rims, close the lids with a rubber seal, and clamp 'em shut. The jars are placed in a big pot or "Einkochautomat" (translates roughly to preserving machine), a large pot that is electrically heated and automatically maintains the temperature using a thermostat. The water level has to reach about 2/3 up the top layer of jars (they can be stacked); any higher is unnecessary and just wastes water. The jars get cooked for half an hour at 90°C. Then, they can be lifted out with a pair of jar tongs. After cooling down, the clamps are removed. If a jar hasn't sealed properly, you notice it right away.
The last thing is to label and store them in the cellar or somewhere.
Finally, pull on the rubber seal's tab to open a jar, put the apple sauce on a waffle or something else, and enjoy the blast of taste in your mouth. :-)
Oh, that text got a wee bit longer than anticipated. 8-)
Joke aside, if anyone using a sane protocol (sorry, sorry, no more jokes!) wants to see what's being referred to here without leaving the browser, head over.
> "If you are not using AI everyday, you're working too much", and "completely worth it [referring to the use of ChatGPT], no question. Same work output, in less of my time. More breaks for me."
The point is not to rely on it 100%. It's just a tool.
(photomatt on that HN entry)
o1-preview. I've used it for various tasks, from writing documentation, specs, and shell scripts to code (in Go). The result? Well, I can certainly say the model(s) are much better than they used to be, but maybe that isn't so much the models per se as the sheer processing power of OpenAI's data centers? 🤔
But here's the kicker... If you ever for a moment think that these "AI" things are intelligent, or that the marketing and hype is even remotely close to convincing us of "AGI" (Artificial General Intelligence) or ASI (Artificial Super Intelligence), you are sorely mistaken.
ChatGPT, and basically any other technology based on Generative AI (GenAI), these pre-trained transformers that use neural networks and insanely high-dimensional vector representations to model all sorts of things, from human language and programming languages all the way to visual and audible art, are (_wait for it_):
Incredibly stupid! 🤦♂️
They are effectively quite useless for anything but:
- Reproducing patterns (_albeit badly_)
- Search and Retrieval (_in a way that "seems" to be natural_)
And that's about it.
Used as a tool, they're kind of okay, but I wouldn't use ChatGPT or Copilot. I'd stick with something more like Codeium if you want a fancier "auto-complete". Otherwise, honestly, just forget about the whole thing. It doesn't even really save you time.
Really, we should all think hard about how changes will break things, and whether those breakages are acceptable.
But I'm keeping a good eye on Zen Browser's progress.
Super simple.

Making a reply:
0. If yarn has one, use that. (Maybe do a collision check?)
1. Make a hash of the raw twt, no truncation.
2. Check the local cache for the shortest prefix without a collision
   - in SQL: `select len(subject) where head_full_hash like subject || '%'`

Threading:
1. Get the full hash of the head twt.
2. Search for twts
   - in SQL: `head_full_hash like subject || '%' and created_on > head_timestamp`

The assumption being that replies will be to the most recent head. If replying to an older one, it will use a longer hash.
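For anyone who wants to poke at the idea, here's a minimal runnable sketch of those two queries using SQLite. The `twts` table and its column names are my own invention for illustration, not yarnd's actual schema:

```python
import sqlite3

# Toy cache: one row per known twt, keyed by its full hash.
db = sqlite3.connect(":memory:")
db.execute("create table twts (head_full_hash text primary key, created_on integer)")

def shortest_unique_prefix(full_hash: str, min_len: int = 7) -> str:
    """Grow the subject prefix until it matches at most one cached twt."""
    for n in range(min_len, len(full_hash) + 1):
        prefix = full_hash[:n]
        (count,) = db.execute(
            "select count(*) from twts where head_full_hash like ? || '%'",
            (prefix,),
        ).fetchone()
        if count <= 1:
            return prefix
    return full_hash

def find_heads(subject: str, head_timestamp: int) -> list:
    """Resolve a (possibly truncated) subject hash against newer twts."""
    rows = db.execute(
        "select head_full_hash from twts"
        " where head_full_hash like ? || '%' and created_on > ?",
        (subject, head_timestamp),
    )
    return [row[0] for row in rows]
```

On a collision, `shortest_unique_prefix` simply returns a longer prefix, which is exactly the "replying to an older head uses a longer hash" behavior described above.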
- Based on Firefox instead of Chromium.
- Got tiling panes when you need them... (just like a tiling window manager).
- I can hide the tabs and nav bar with a single shortcut!! AKA Compact Mode ...
> /ME slow claps...
- Update the Twt Hash extension.
- Increase its truncation length from 7 to 12.

@xuu is right about quite a few things, and I'd love it if he wrote up the dynamic hash size proposal, but I'm inclined to just increase the length, mostly because my own client yarnd doesn't even store the full hashes in the first place 🤦♂️ (I think).
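To make the truncation change concrete, here's roughly what the hash computation looks like as I understand the Twt Hash extension (blake2b over the feed URL, timestamp, and text, base32-encoded; double-check the spec before trusting these details). The `length` parameter is the knob being bumped from 7 to 12:

```python
import base64
import hashlib

def twt_hash(feed_url: str, timestamp: str, text: str, length: int = 7) -> str:
    # Sketch of the Twt Hash scheme as I understand it: blake2b-256 over
    # "<url>\n<timestamp>\n<text>", base32-encoded, lowercased, padding
    # stripped, then truncated to the last `length` characters
    # (7 today, 12 proposed). Verify against the actual extension spec.
    payload = f"{feed_url}\n{timestamp}\n{text}".encode("utf-8")
    digest = hashlib.blake2b(payload, digest_size=32).digest()
    encoded = base64.b32encode(digest).decode("ascii").rstrip("=").lower()
    return encoded[-length:]
```

Note that because the truncation takes a suffix of the same encoding, every 7-character hash is just the tail of the corresponding 12-character one, so old short hashes remain derivable from the longer form.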
I have the feeling that the hashing part is the most important one and should be sorted out first.