You can slant your intuition the other way if you like. The claim is that in an information environment with lots of specialized sources, people will seek out information sources that support, or at least don't contradict, what they already believe. I.e., they will enter an echo chamber. But it is just as reasonable to believe that in an information environment with that much diversity, people will be exposed to a wide variety of ideas in spite of themselves, and people who actively seek out nuance won't have any trouble finding it. Some people might get sucked into an echo chamber, but most won't.
That's just as intuitive a stance to hold.
It's also the stance that seems to fit the data:
> Using a nationally representative survey of adult internet users in the United Kingdom (N = 2000), we find that those who are interested in politics and those with diverse media diets tend to avoid echo chambers. This work challenges the impact of echo chambers and tempers fears of partisan segregation since only a small segment of the population are likely to find themselves in an echo chamber.
Here's a more expository account that surveys numerous data points; as the authors put it:
> A deep dive into the academic literature tells us that the “echo chambers” narrative captures, at most, the experience of a minority of the public. Indeed, this claim itself has ironically been amplified and distorted in a kind of echo chamber effect.
The notion that reasonably well-adjusted people who mostly read stuff by other reasonably well-adjusted people are somehow at risk of some ill-defined "echo chamber" effect is bunk. Folks tend to seek out information and adjust their own notions accordingly, unless they've been "info poisoned" for lack of a better term.
> Indeed! It comes to mind the popular saying, “How do you deal with nazis? — You punch them in the face.”
One of my favorite animated GIFs depicts exactly that 😆
A little less violently, deplatforming works. That's been demonstrated time and again. It's one of the many reasons to be alarmed by what Elon Musk is doing at Twitter, un-banning hateful accounts that had been banned previously. He is re-platforming people who don't merit a platform, and he himself is amplifying them.
It is foolish to think otherwise, just as foolish as believing water puts out all fires and throwing water onto an oil fire. You have to recognize the reality you're living in, then choose the right tool for the job. If you're living in a time when political violence is normalized, or is being normalized, and demonization is rampant, and you're facing a bad-faith argument from a bad actor who is preaching something like antisemitism, you don't reach for "debate" as your tool of choice. You reach for "deplatforming" (for example), because that demonstrably works. You take them, and their damaging ideas, off the public square completely and keep them out of it.
> the point on debating in social network, is not stopping people from spreading bad ideas. Is to make everybody else that look at the debate think, and not fall on those bad ideas, by hiding the bad ideas, and not debating them, we may push others people to believe in them, and we may push people that already believe in them to stay in an echo chamber
No. This is a naive point of view, and it does not jibe with current research. Really. I urge you to read up on disinformation research, especially the work that came out after Facebook was called out over the Cambridge Analytica scandal. Other people *do not* look at a debate, see the bad information exposed as bad by good arguments, and change their minds. It doesn't work that way. Misinformation purposely targets people's emotions, and when the emotional appeal works, people tend to view those debating against their adopted view as enemies. They *reject* the good ideas even more forcefully.
Sure, there are hypothetical people who will see a debate, recognize that bad information has been exposed, and react by rejecting that bad information. Probably most of the people here fall into that group. But people like that were never the problem. The problem is the vast number of people who will react by *believing the bad information even more stubbornly*. Read the research--this is a real, documented effect I am describing.
Also, the dangers of the "echo chamber" that you invoked are very much overblown, almost surely by purveyors of disinformation, because that fear helps them do their work (I'll note you raised this as a danger--an emotional appeal--instead of citing data). The echo chamber effect, to the extent it exists, is bad for people *who are already suffering from information poisoning*. People who've already bought into some piece of misinformation fall into or stay in an echo chamber. Once again, misinformation purveyors have very detailed strategies--Google them, you can find them--for how to *draw unsuspecting people* into an echo chamber and keep them there.
One aspect of cyberwarfare that information warriors take advantage of is that well-meaning people *spread the bad information by reacting to it*. Misinformation tends to target the emotions, and receptive people (which is all of us, basically) react to it on an emotional level. However, well-meaning people tend to react to the logical content of the information. They debate the facts being presented, or they attack the logical structure. But this functions to *reinforce the bad information in people who react emotionally*. In other words, the process of debating misinformation functions to reinforce it. Bad actors know this full well. I've read training materials for spreading misinformation--they know exactly what they're doing.
I don't know what the answer is, but we can't be naive and think that just by "debating" we are going to stop people from spreading bad ideas. That's like throwing water on an oil fire--it makes it worse, not better. We need to be better equipped than this.
Yes, the device might have an impact on the child. Of course, that's obvious.
But we're talking about creating a dossier that is on the internet, available to anyone who looks, and that modifies how the child is perceived by countless people before they are able to give consent for that kind of crafting of their image.
You may not care about either of these in the ways that I do, but you have to admit they have very different impacts on the kid.
> So the Hyperloop, for example, he admitted to his biographer that the reason the Hyperloop was announced—even though he had no intention of pursuing it—was to try to disrupt the California high-speed rail project and to get in the way of that actually succeeding.
In other words, Musk explicitly, consciously killed a high-speed rail project, and probably made off with some state of California funding in the process. When we wonder why we have lousy rail service in the United States compared to Europe for instance, it's partly explained by people like Elon Musk.
Con artist through and through. It'd be pathetic if it weren't destroying things.
I think, as a matter of self protection, we collectively need to *stop idolizing rich tech people*. They are, almost to a one, bad actors and not worthy of our time let alone our adulation. Given the opportunity they will do bad stuff. Just think of all the people over *decades*, like Bill Gates, Jeff Bezos, and Elon Musk who initially were propped up as some kind of unsung tech genius, only to finally reveal themselves as nothing more than greedy money hoarders who won't hesitate to harm people. This is a feature, not a bug, and we need to be better at identifying it sooner.

Not a good look imo
You'd let them shout "fire" in a crowded movie theater? Because this has been litigated already in the US, and it's illegal.
Today's "twitter is becoming a dangerous shell of its former self" news.
The 4channing of Twitter continues...

whew!
`git bug push` did this:

```
remote: Updating references: 100% (1/1)
To $REPO
 * [new reference]   refs/identities/af97ed38e619cf0dc6b52363cf5b8032755b16a5 -> refs/identities/af97ed38e619cf0dc6b52363cf5b8032755b16a5
remote: Updating references: 100% (1/1)
To $REPO
 * [new reference]   refs/bugs/00fd29b9f50294a64ad72c039a7340b5863d7907 -> refs/bugs/00fd29b9f50294a64ad72c039a7340b5863d7907
```
So it puts stuff in `$DIR/.git/refs`. It creates a cache directory too. I have to say, it's surprisingly full-featured given that it's pre-1.0 and the main author warns that there be dragons here (though not so surprising given that there are over 2,000 commits!). You can do the entire create/label/comment on/push/pull/clear bug workflow entirely on the CLI with `git` subcommands, which is how I'd probably use it were I to adopt this. The webui looks remarkably like github/gitea/etc if you're into that.
> Distributed, offline-first bug tracker embedded in git, with bridges
Interesting!
[Teresa Heffernan: Artificial Intelligence and the (Post-)Apocalyptic Imaginary](https://www.youtube.com/watch?v=hGz4jMIgECQ)
> Dear "AI" people,
>
> Stop doing this shit. Just stop. https://arstechnica.com/information-technology/2022/11/after-controversy-meta-pulls-demo-of-ai-model-that-writes-scientific-papers/
>
> Love,
> An AI person
Obviously they *should* be using a certain text-oriented network where you keep control of your own data and aren't targeted with ads!
`scala` code that can easily handle manipulating and searching 100-million-document corpora without breaking a sweat on 2010-era rack server hardware. The platform and language are not the performance problem here.
`vizier` has a lot of potential but it feels early stage.

I started with a text file with 70,000 lines of tab-delimited data. Three columns are ints, and one column is a date string. The closest data format `vizier` had was CSV, but it did not give any options for changing the delimiter. So, I preprocessed the data to make it a "standard" CSV with comma as the delimiter, and it imported fine. However, `vizier` did not autodetect the date, instead treating that column as a string. Strike 1.

Next, I tried to make a line plot out of the int columns. `vizier` stewed on that a bit, then told me that there were too many data points. 70,000 is a lot, so that's fair. But any other plotting tool I use regularly can handle this automagically, e.g. by downsampling (and giving you the ability to fine-tune what it does if you want). Strike 2.

I added a cell to downsample the data and tried the plot again. This time it worked fine. I don't see any obvious way to change the appearance or axes of the plot once it's made. There is a Download button next to the chart, but when I clicked it, nothing happened at first. Eventually, after I'd decided it probably failed, a PDF file was downloaded: presumably my chart. However, the file was empty. Strike 3.
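For the curious, the downsampling cell amounted to roughly the plain-pandas sketch below. Nothing here is vizier-specific (inside vizier the cell would read from the imported dataset rather than a file), and the filenames and target point count are made up for illustration.

```python
# Hypothetical downsampling step: keep roughly 2,000 evenly spaced rows
# so the plotting layer has a manageable number of points to draw.
import pandas as pd

df = pd.read_csv("data.csv")                       # the comma-delimited file after preprocessing
step = max(1, len(df) // 2000)                     # ~2,000 points is plenty for a line plot
downsampled = df.iloc[::step].reset_index(drop=True)
downsampled.to_csv("data_downsampled.csv", index=False)
```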
One thing that's really nice about `vizier` is that it keeps track of these dependencies, and if you alter anything in the dependency chain it will regenerate only what's needed to update your views. For instance, it knows that the chart is built from the downsampled data, and that the downsampled data comes from a data file. If I altered the file from within `vizier`, the downsampling and charting would re-run. If I altered the downsampling parameters, the chart would be regenerated. All of this is version controlled and can be rolled back. This feature solves a world of headaches in data analysis, and I'd love to see the rest of the tool come together well enough to make it usable on a daily basis. Not there yet though, for me.
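To make the pattern concrete, here's a toy sketch of the general idea (my own illustration, not vizier's actual machinery): each step declares its inputs, and a change re-runs only the steps downstream of it.

```python
# Toy dependency-tracking sketch (not vizier's implementation): steps declare
# their inputs, and a change re-runs only the steps downstream of it.
from typing import Callable

# Steps listed in topological order: name -> (inputs, action).
steps: dict[str, tuple[list[str], Callable[[], None]]] = {
    "load":       ([],             lambda: print("loading data file")),
    "downsample": (["load"],       lambda: print("downsampling")),
    "chart":      (["downsample"], lambda: print("rendering chart")),
}

def rerun_after_change(changed: str) -> None:
    """Re-run the changed step and everything that depends on it."""
    dirty = {changed}
    for name, (inputs, action) in steps.items():
        if name in dirty or any(dep in dirty for dep in inputs):
            dirty.add(name)
            action()

# Tweaking the downsampling parameters re-runs 'downsample' and 'chart',
# but leaves 'load' alone.
rerun_after_change("downsample")
```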
A `JVM` version number problem. `vizier` depends on Apache Spark libraries that are cranky unless you use `jdk 8` or `jdk 11`, apparently. It runs fine for me if I hard-specify java 1.8.

If I weren't so used to seeing errors like this, it'd be extremely off-putting and probably show-stopping. How would someone new to the JVM world know what to do with an error like that unless they got lucky with a StackOverflow search? The JVM ecosystem really took a shit after 1.8, with all these bizarre incompatibilities and uninterpretable error messages. If the C ecosystem weren't worse, I'd consider going full scala native and ditching the JVM.
```
$ vizier
Checking for dependencies...
Setting up project library...
Starting Mimir...
Exception in thread "main" java.lang.ExceptionInInitializerError
...
```
...after the install process went smoothly and didn't throw any errors. Does not bode well.
It also created a directory in my home directory, which I hate 😠 Why don't people use `$HOME/.local` or some equivalent ffs
Wow, this looks interesting. A nice departure from Jupyter. It resembles Polynote, superficially, but is funded by the US National Science Foundation instead of Netflix OSS the way Polynote is. Interested in taking it for a spin.
Random thoughts:
I have nothing against Jupyter or JupyterLab, and use them regularly. However, the promise of truly polyglot notebook tools like Polynote is so high. I've never done a non-trivial data analysis in a single language/tool. Inevitably, there's a great library for doing X in some other language from the one you started the analysis using, and you really want to do X without trying to rewrite it from the ground up. It's been common for me to bounce between two or more of scala, python, sage, R, and KNIME in a single project.
I've been tinkering with Quarto, and while I like it a lot and the flexibility of its output formats is amazing, it's a bit stiff the way Jupyter is when it comes to using multiple languages in one project. It's also more tailored for publishing as opposed to being a notebook where you tinker. Cocalc is great and has amazing features, but it's expensive if you pay for it and I'm unsure whether their docker container for self hosting is going to survive forever. I do like Polynote, but I don't like that it looks to be supported largely by a corporation. So, the search goes on.

Might play with this at work next week.
I have a feeling you'd have better luck googling if you used "partitioning" rather than "sharding" as a keyword. In my experience (which may be overly limited, of course), "sharding" is a term used for relational databases and their cousins. In the KV world, I've seen the word "partitioning" used to mean what I think you want: each node stores a subset of the full set of key/value pairs.
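To make that concrete, here's a minimal, hypothetical sketch (no particular KV store's API) of the idea: route each key to a node by hashing it, so every node holds only its share of the pairs.

```python
# Hypothetical hash partitioning: each node stores only the keys that hash to it.
import hashlib

NODES = ["node-a", "node-b", "node-c"]

def node_for(key: str) -> str:
    """Choose the node responsible for a given key."""
    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

print(node_for("user:42"))   # every key lands on exactly one node
```

Real systems usually refine this with consistent hashing (or range partitioning) so that adding or removing a node doesn't reshuffle most of the keys, but the underlying idea is the same.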
`honk` here, will it show up on mastodon?
./honk
Self-hosted Pinterest.
It's probably not super accessible yet, but maybe some day!
https://arstechnica.com/information-technology/2022/11/new-go-playing-trick-defeats-world-class-go-ai-but-loses-to-human-amateurs/