# I am the Watcher. I am your guide through this vast new twtiverse.
# 
# Usage:
#     https://watcher.sour.is/api/plain/users              View list of users and latest twt date.
#     https://watcher.sour.is/api/plain/twt                View all twts.
#     https://watcher.sour.is/api/plain/mentions?uri=:uri  View all mentions for uri.
#     https://watcher.sour.is/api/plain/conv/:hash         View all twts for a conversation subject.
# 
# Options:
#     uri     Filter to show a specific user's twts.
#     offset  Start index for query.
#     limit   Count of items to return (going back in time).
# 
# twt range = 1 2172
# self = https://watcher.sour.is?uri=https://twtxt.net/user/mckinley/twtxt.txt&offset=1472
# next = https://watcher.sour.is?uri=https://twtxt.net/user/mckinley/twtxt.txt&offset=1572
# prev = https://watcher.sour.is?uri=https://twtxt.net/user/mckinley/twtxt.txt&offset=1372
Congrats @prologic and crew! Thank you for all the work you do.
Had a nice chat tonight with @prologic, @ocdtrekkie, and @taigrr. Some things we talked about today:

- The release of yarnd 0.15
- Packaging apps for Sandstorm
- SSD performance
- KVM on WSL
- Mitigating smart home spyware for in-store demos
- https://github.com/berthubert/googerteller
- Google's policies around using external code
- The whirlpool currently taking place at Twitter

Also, we discovered an interesting statistic in the call tonight.

100% of technology enthusiasts have at least one Raspberry Pi, but only 25% of technology enthusiasts use them for anything. (it's @taigrr)
@aryak Congrats, I'm glad to see another gopherhole online. I need to set one up myself.
@movq Very interesting read. I love reading about how terrible computers are at time.
@prologic I just like to see everything that's happening in the twtverse. Although, if the current trends keep up, I'll probably have to switch to the boring old timeline.

Actually, I just had an idea. Would it be feasible to add a configuration option to exclude followed feeds from Discover? That way, we wouldn't have to filter through all the posts we've already read to find new feeds.
@prologic I do follow @news, but I usually stick to the Discover feed.
@prologic I guess I did... It's been hard to keep up the past few days.
It's okay. I think I'm the only one that uses the Discover feed, anyway. :)
@prologic Interesting, I didn't know about that.
@prologic I'm certainly glad to hear that, but I was making a joke.
@prologic twtxt.net started tracking @reddit_funny and it blew up my Discover feed...
@prologic Heh, "commit".
@dendiz Bookmarks allow you to save posts for later viewing. They're accessible from your user feed, and you can control their visibility in your settings.
@dendiz You can only delete your most recent post on yarnd.
@dendiz Welcome to Yarn!
@lyse Image rendering in terminals is usually done with Sixels and it's not straightforward. A Newsboat developer has confirmed that it won't be implemented any time soon because of complications with ncurses.

Photon supports image rendering with sixels. It's just a feed viewer. It doesn't have a database or anything; it just pulls in feeds specified on the command line and displays the items chronologically in a grid view.

It works fine for XKCD. It just looks a lot better in the article view because of the color limitations.

Photon displaying the XKCD RSS feed

Photon's article view on the most recent XKCD
I think this is a great change, but do we need to mark every human as such on the Web interface? I think it just adds clutter to the page.

I can also see people (read: me) being "trained" over time to not notice the icon because it's a human 99% of the time.
@tkanos I'm with you. I'll believe it when I see it.
Oh man, it's going nuts now.

Add this to trackers.conf. Source

Then, add the following to teller.conf:


[cloudflare]
balance=1
freq=2000
I got it going again. It's awfully quiet on my system. I wonder how difficult it would be to track connections to Cloudflare.
@abucci I thought about putting it on a hotkey in my window manager, but I think I'd drive myself crazy.
@lyse Good choices!

@tkanos, welcome to the club. https://lab6.com/rss.xml is one of my favorites, but posts are very infrequent.

I'm also a big fan of https://www.prologic.blog/feed.xml, https://codemadness.org/atom_content.xml, and https://jcs.org/rss.
@tkanos I used DDG for a while. Switched away from them because they tracked clicks. I'm not sure if they still do.

Then, I used Startpage which uses Google results. I switched away from them because they kept thinking I was a robot and making me solve a captcha.

Now, I'm on Brave Search. I don't like their browser much, but I think their search engine is nice. They have their own crawler, which isn't common. The results are usually pretty good, but when I'm trying to do a per-site search I switch to a Whoogle instance.

Marginalia Search is a search engine with its own crawler that prioritizes simple, readable websites.

Kagi is a paid search engine that, apparently, doesn't spy on you. However, all your Web searches are tied to your real identity because you can't pay anonymously.
@movq @akoizumi Otter Browser uses Qt Webkit, if that counts.
@prologic Thank you, I'll give it a try a little later. It looks very promising.
@abucci I didn't time all of them (I probably should have), but xz has its own timer. If I remember correctly, it took 7 minutes and 17 seconds on my toaster to compress 1.36 GiB, mostly text, at the highest compression level. I don't think that's all that bad.

xz also lets you use multiple threads, which isn't common among these tools. I didn't do it for this test because there is an extremely small size penalty for doing so and I wanted to go all-out.

Here's a good blog post that shows the differences with multi-threading. The size difference is negligible, and that test showed no measurable difference in file size between 2 cores and 32 cores. There are diminishing returns in speed, though.
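For reference, a minimal sketch of the threaded invocation, assuming the same build.tar as in the comparison further down (run one or the other; xz won't overwrite an existing build.tar.xz):


xz -T0 -zk9e build.tar   # -T0: one worker per core; marginally larger .xz
xz -T1 -zk9e build.tar   # single-threaded, as used for the timing above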
@abucci Oh, wow, you can do that. RSS feeds work too, I checked. That's pretty neat.

Reddit lets you do something similar. https://www.reddit.com/r/linux+openbsd+serenityos
> Is “Gopher + TLS” still “strictly Gopher”? Nah. But neither is using UTF-8 in Gopher pages and a loooooooot of people do that.

Also, I'm pretty sure the line endings for just about everything in Gopher are CRLF per RFC 1436. I'd be willing to bet that a lot of hand-written gophermaps use LF.
This document was an interesting read, posted by Hiltjo in the second thread linked by @movq.

It's Bitreich's backwards-compatible standard for extensions to the Gopher protocol, including TLS.
@movq I didn't know Geomyidae supported TLS. That's a little embarrassing, I have a copy of it on my computer.

main.c, line 31:


#ifdef ENABLE_TLS
#include <tls.h>
#endif /* ENABLE_TLS */
@abucci RSS feeds for each account + tags in Newsboat?
@abucci I set that up a couple months ago, it's pretty cool.
2 in the morning is a great time to compare compression algorithms.


Ratio   File size   Filename            Command                     Algorithm
      1  1458553185 build/
  0.451   658022612 ../node-modules/
  0.322   469704387 build.tar.Z         compress -k build.tar       Lempel–Ziv–Welch (LZW) (oh, how far we've come)
  0.185   269780511 build.tar.gz        gzip -k9 build.tar          Deflate
  0.082   119839762 build.tar.bz2       bzip2 -zk9 build.tar        Burrows–Wheeler transform
  0.047    68258612 build.tar.br        brotli -kZ build.tar        Brotli
  0.047    67989604 build.tar.zst       zstd --ultra -22 build.tar  Zstandard
  0.046    67705992 build.tar.xz        xz -zk9e build.tar          Lempel–Ziv–Markov (LZMA)


0.046 is *really* mind-blowing. I don't need a torrent, we're approaching e-mail attachment file sizes here.
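For anyone re-running this, the Ratio column is just compressed bytes divided by original bytes. A hedged one-liner, assuming GNU du/stat and bc, using the size of build/ as the denominator like the table above:


echo "scale=3; $(stat -c%s build.tar.xz) / $(du -sb build/ | cut -f1)" | bc   # prints .046 for the xz archive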
If I can get a proper static copy of MDN, I'll make a torrent and share a magnet link here. I know I'm not the only one who wants something like this. I don't think the file sizes will be so bad. My current "build" of the entire site is sitting at 1.36 GiB. (Only a little more than double the size of node_modules!) So, with browser compatibility data and such, I think it'll still be less than 2GiB.

Aggressively compressed with bzip2 -9, it's only 114.29 MiB. A compression ratio of 0.08. That blows my mind.
Now I've just realized that if /en-US/docs/Web/HTML/Global_attributes is saved with that filename, the Web server is probably going to send the wrong MIME type. Wget solves this with --adjust-extension.
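Purely as a hedged illustration (wget still won't execute the JavaScript, which is the real blocker here), these are the flags that handle extensions, requisites, and links for a plain static mirror; the host is a placeholder:


wget --mirror --page-requisites --convert-links --adjust-extension \
     --no-parent https://mdn.example/en-US/docs/Web/HTML/Global_attributes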

Man, you really don't have to do this...
@prologic What I need it to do is crawl a website, executing JavaScript along the way, and saving the resulting DOMs to HTML files. It isn't necessary to save the files downloaded via XHR and the like, but I would need it to save page requisites. CSS, JavaScript, favicons, etc.

Something that I'd like to have, but isn't required, is mirroring of content (+ page requisites) in frames. (Example) This would involve spanning hosts, but I only need to span hosts for this specific purpose.

It would also be nice if the program could resolve absolute paths to relative paths (/en-US/docs/Web/HTML/Global_attributes -> ../../Global_attributes) but this isn't required either. I think I'm going to have to have a local Web server running anyway because just about all the links are to directories with an index.html. (i.e. the actual file referenced by /en-US/docs/Web/HTML/Global_attributes is /en-US/docs/Web/HTML/Global_attributes/index.html.)
@prologic That's awfully nice of you, but you don't need to do that. I know you're a busy guy.

I'm sure I can find something if I look around some more. I can't be the only one that wants to make a static mirror of a dynamic website.
@adi Wow, that's a great idea. I wonder if MDN could be used as a data source. The Markdown would need some significant transformation done. https://github.com/mdn/content/blob/main/files/en-us/web/html/element/span/index.md
@prologic It's close, but it's just a Web scraping library. I'm looking for something of the command line variety.
Doing it this way will also solve *another* issue I'm having. You actually can "build" the site and you get almost all the information in static files. However, all the links have capitalization, e.g. /en-US/docs/Web/CSS/border, and all the filenames are in lowercase, e.g. /en-us/docs/web/css/border.
@prologic I'm trying to make a static local mirror of MDN Web Docs. It's all free information on GitHub, but the whole system is extremely complicated.

<​tinfoil-hat>I think it's so they can sell more MDN plus subscriptions, making people use their terrible MDN Offline system that uses the local storage of your browser.<​/tinfoil-hat>

At this point, I'm willing to run a local dev server and just save each generated page and its dependencies.

I really only need it to run JavaScript so it can request the browser compatibility JSON. It's https://github.com/mdn/browser-compat-data but the MDN server, annoyingly, transforms it.

Once the BCD data is rendered statically, I should be able to remove the references to the JavaScript.

That will solve another issue I'm having where the JavaScript is constantly trying to download /api/v1/whoami, which seemingly has no purpose aside from user tracking.
Anyone know of a tool that will crawl a website, run JavaScript, and then save the resulting DOM as HTML?

I tried Wpull, but I can't get it to stop crashing on startup and development seems to have stopped.

I'm sure there's a joke to be made about Python here.
TLS is absolutely applicable to Gopher and people have done it, but there's no standard so everyone implements it differently.
It's not widely implemented in clients or daemons.

Also, lots of people are against TLS because it's too hard to implement on your own; Gopher daemons would need to depend on an external library.

If you want Gopher encrypted, the best option is to make your Gopher daemon accessible as a Tor hidden service.
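A minimal sketch of that setup in torrc, assuming the Gopher daemon listens on localhost port 70 (the directory path is illustrative):


HiddenServiceDir /var/lib/tor/gopher/
HiddenServicePort 70 127.0.0.1:70

Tor writes the resulting .onion address to the hostname file inside HiddenServiceDir.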
Unix time 1666666666 in ~10 minutes. https://time.is/Unix_time
@eaplmx Gopher is designed to be a simple way to access information on the Internet, as an alternative to the World Wide Web. No markup, just plain text and hyperlinks to resources.

Gemini is too simple in the wrong places, e.g. the very limited Markdown-lite. It's also too complicated in the wrong places, e.g. mandatory encryption.

Gopher's continued usage even after being "beaten" by the Web speaks volumes. I don't hate Gemini. Actually, I enjoy exploring Geminispace from time to time. I think it's a fad, though. People aren't going to use it in 30 years.

(Assertions like that, when it comes to technology, never come true. In 30 years, when Gemini takes over, feel free to come back to this twt and make fun of me. It won't be the first time an inferior protocol becomes dominant.)
@prologic It's called "cgod" and it isn't written in C *or* Go? I want my money back...

I also like Gopher more than Gemini. The problem Gemini is trying to solve is better solved by just writing static HTML 4.01 pages.
@cobra I like Geomyidae, but it doesn't have per-user Gopherspace. Gophernicus does, along with a few other bells and whistles.
@lyse I also use Tridactyl, but with a single 1600x900 screen and a TrackPoint I really don't find myself using anything but j/k, H/L, and J/K. Maybe d and C-d/C-u at times.

I haven't used Warpd at all, beyond playing with it initially.
Paging @ocdtrekkie
@jsreed5 Audacious is a free software music player that replicates the Winamp interface. It even supports Winamp skins. The Winamp style interface doesn't work too well on Wayland, though.
@akoizumi I don't use it regularly myself and I definitely wouldn't host an instance because it's written in JavaScript, but I'm still glad it exists.

Several countries either censor or have attempted to censor Wikipedia, and Wikiless is a great way to bypass it.

https://en.wikipedia.org/wiki/Censorship_of_Wikipedia
The twtxt registry specification is supposed to address this problem, but nobody uses registries either.
@ocdtrekkie @abucci I have several local Git repositories that should have remotes somewhere, but I'm talking about maintaining a local mirror of other people's projects.

I'm referring to Wikiless being hidden from public view on Codeberg, #digfawa

I was envisioning a Raspberry Pi or something pulling new updates automatically with a Cron job.
@movq HTTPS-only mode in Firefox and derivatives will automatically try HTTPS, then give you the option to connect with HTTP if the server doesn't support TLS. I'm not sure about Chromium.

I generally like that system, but it can get annoying at times. I wish there was a way I could disable the dialog (while still trying https first for unknown domains) on a per-tab basis, letting it fall back to plain HTTP without user input.

On a somewhat related note, I recently discovered a cURL option to specify a default protocol when invoked without a scheme, e.g. curl mckinley.cc.

In ~/.config/curlrc:


proto-default = https
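With that in place, a scheme-less invocation like the one above goes out over HTTPS:


curl mckinley.cc   # requested as https://mckinley.cc/ rather than http://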
@prologic Don't know. It'll be interesting to look at the commits when it comes back up.
@prologic TL;DR:

* Wikimedia Legal Enforcement has contacted Codeberg citing, quote-unquote, "licensing / trademark infringement issues in the content"
* Codeberg has made the repository private (i.e. it can only be viewed by contributors)
* The author is making changes to the software in order to remove the infringing content*
@ocdtrekkie It's alright. I don't think @prologic is around. Maybe next week.
@prologic, are we doing the call this week?
It's been Vim for me for a long time. Previously, it was Notepad++, but now I wouldn't give up modal editing for anything.
I forgot I had htmlq installed.


$ curl https://github.com/zedeus/nitter/wiki/Instances | htmlq 'table:nth-of-type(2) > tbody > tr' | grep '^<tr>$' | wc -l
83
Agreed, but Nitter makes it better. It's probably one of the strongest of the alternative frontends. There are 83 documented, public, clearnet instances.


>> document.querySelector("table:nth-of-type(2)").querySelectorAll("tr").length - 1
<- 83
@prologic Why get locked in to that proprietary, centralized service when you can use any Nitter instance and any feed reader you want?
@akoizumi Keyboards are bloat. All you need are toggle switches connected to the GPIO pins.
@kyokonet When can I send in my application via facsimile?
@prologic That's pretty cool!
Sorry gents, I forgot to post the notes. Remember how I said I was going to bed? Yeah... Some things we talked about this week:

* URIs, URLs, and URNs
* Sketchy SEO companies and Web spam
* Improvements to the search engine
* Goryon debugging
@prologic Well, you can always run a Monero node as a Tor hidden service :)
@prologic

> So it comes down to who you trust [more], your ISP or your VPN provider(s)?

My VPN provider, 100%. I've talked about my ISP in the past.

Besides, I don't *need* to trust them as much as my ISP. Under normal circumstances, this is the important information that your ISP can know about you:

* All of your personal information, down to a home address
* The IP addresses to which you're connecting
* Information leaked by unencrypted traffic (DNS queries, etc.)

As long as the VPN provider doesn't require any personal information, and mine doesn't, you're making it so no single party has all of that information. The IP address cloaking is an added benefit for me.

> You still leak your IP address with that TURN server however.

If your WebRTC implementation isn't broken, the TURN server sees your traffic as coming from the VPN server, just like anything else you connect to through that tunnel. It's the same story if I open a port and make a direct p2p connection.*
Out of curiosity, I tried the leak test on Ungoogled Chromium and it actually was leaking the private use internal IP given to me by my VPN provider. That doesn't happen on LibreWolf due to its security measures.

My real IP still didn't leak because my VPN client prevents any other program from using my real network interface.
@prologic

> the Internet kind of requires IP Addresses to even function in the first place

True, but a VPN can be used to mask your real IP address because all of your network traffic is relayed through another computer with a different IP address.

> p2p protocols like WebRTC require peer addresses to be able to communicate with one another

In principle, yes, but they don't need to be able to communicate directly as long as both clients can communicate with a TURN server. At least, that's how I understand it.