Internet Regulation, Privacy, Hacks, Tech

Since the Reddit thing keeps confusing clueless users and people keep wondering about /r/Hitman, here is a summary.

Like all affected subs, they link to this post:
https://www.reddit.com/r/ModCoord/comments/1476fkn/reddit_blackout_2023_save_3rd_party_apps/

I think it includes all the information there is regarding the motives.

Reddit decided to remove the moderators of the larger subs that set their subs private and, as far as I can tell, replace them. See:

https://twitter.com/tawnniee/status/1669521276082491392

One such sub where this happened is /r/pics. Their counter was to limit allowed content to pics of John Oliver, based on a poll among the users. It will be hard for Reddit to justify removing the moderators here again. :joy:

The CEO also referred to these mods as “landed gentry” and said the site as a whole needs to “grow up”. Internally, he advised his employees to be mindful about wearing Reddit gear in public.

Anyway, as moderators got removed, other subs decided to extend their protest to an unspecified date; some want to keep going until Reddit rolls back its changes, or remain dark forever.

/r/Hitman extended their protest too, I don’t know for how long but according to one of the mods (@Fuzk), any time frame is possible.

6 Likes

I think it’s pretty crazy that within the past year – even pretty recently – we’ve seen companies like

  • Twitter
  • Twitch
  • Discord
  • Reddit
    (I’m sure I’m missing a handful)

change the way their platforms work in unfortunate, anti-consumer, profit-driven ways, even forcing AI features on their customer base, which ends up drastically changing how the platform works for some people.

1 Like

I am so glad I don’t use most of that stuff. I have a Facebook account to keep up with family but that’s about it for social media except occasionally watching Twitch. I wasn’t aware they had changed anything though, which may tell you how often I actually use it.

1 Like

In a surprising last-minute amendment to the otherwise innocuous “Courts and Civil Law (Miscellaneous Provisions) Bill 2022” from September 2022, the Irish Government added a provision that would allow the Irish DPC to declare almost all its procedures “confidential”. Section 26A would make most reporting about procedures or decisions by the DPC a crime. Speaking about outlandish claims by “big tech” or unfair procedures that often concern millions of users would equally become a crime.

For context, the Irish DPC is known to be super lax at enforcing European privacy laws, which is why the European subsidiary of Meta (Facebook) is located there. Other big tech companies are probably there for the same reason.

I bet Meta is glad Ireland was not part of Brexit.

1 Like

All your Hitman speedrun are belong to us.

6 Likes

That can’t be legal… Google doesn’t own the internet…
They own something that can pull up a list of things on the internet, but they don’t own them…

2 Likes

Hope they don’t touch my Discord DMs :roll_eyes:

1 Like

It is the zeitgeist for AI companies to scrape everything available regardless of ownership.
Worth noting that in the EU, where there are at least some privacy laws, Google did not mention these things in their policy.

Oh, while we are at it, guess what else is absolutely incompatible with EU laws? Meta’s Twitter clone.

3 Likes

YOU WILL OWN NOTHING AND BE HAPPY.

2 Likes

I doubt that matters all that much anymore.

1 Like

First they rename Twitter to X, and now they say it’s gonna be run by AI? Do they really want to sink it?
Because if yes, I can respect that.

3 Likes

They’re going to run Twitter with a so-called “AI”? Fine, I’ll close my account.

3 Likes

“Twitter becomes a Twitter-clone” is a plot twist I was not expecting.

4 Likes

It is amazing how the UK is able to surpass the worst dystopian works with its surveillance programs.

The existing IPA regime appears to already allow the U.K. government to demand that companies alter their services in a manner that may affect all users. For example, a technical capability notice requiring the “removal by a relevant operator of electronic protection” could be used to force a service, such as WhatsApp or Signal, to remove or undermine the end-to-end encryption of the services it provides worldwide, if the government considers that such a measure is proportionate to the aim sought.

Device manufacturers would likely also have to notify the government before making available important security updates that fix known vulnerabilities and keep devices secure. Accordingly, the Secretary of State, upon receiving such an advance notice, could now request operators to, for instance, abstain from patching security gaps to allow the government to maintain access for surveillance purposes.

More importantly, expanding the extraterritorial effects of the notices regimes would entitle the U.K. government to decide the fate of data privacy and security for virtually every citizen in the world. For example, a notice asking operators to undermine end-to-end encryption would mean that end-to-end encryption would also be weakened for citizens in states with authoritarian regimes and a weak rule of law.

6 Likes

Aaaand it’s done.

The government, however, has said the bill does not ban end-to-end encryption.
Instead it will require companies to take action to stop child abuse on their platforms and as a last resort develop technology to scan encrypted messages, it has said.

Tech companies have said scanning messages and end-to-end encryption are fundamentally incompatible.
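The incompatibility the tech companies point to can be illustrated with a toy sketch. Everything below is my own illustration, not any real protocol: a trivial XOR cipher stands in for end-to-end encryption. The point is structural: with end-to-end encryption the server only ever handles ciphertext, so there is nothing meaningful for it to scan.

```python
# Toy sketch (NOT real cryptography): an XOR cipher stands in for
# end-to-end encryption. A server that only ever relays ciphertext
# has no message content to scan.

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """XOR every byte with the key. XOR is its own inverse,
    so the same function both encrypts and decrypts."""
    return bytes(d ^ k for d, k in zip(data, key))

# Sender and recipient share this key; the server never sees it.
message = b"meet me at noon"
key = bytes(range(1, len(message) + 1))  # fixed key, just for the demo

ciphertext = xor_cipher(key, message)  # all the server ever stores/relays

# Server-side "scanning" has only the ciphertext to look at:
assert b"noon" not in ciphertext

# Only the endpoints, which hold the key, recover the content:
assert xor_cipher(key, ciphertext) == message
```

Any scheme that lets the server scan plaintext, such as client-side scanning before encryption, necessarily weakens the "only the endpoints can read it" guarantee, which is why the two are described as fundamentally incompatible.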

2 Likes

Isn’t that obvious?

Didn’t you know? Laws of mathematics don’t apply here!

2 Likes

As some know, this forum has a critical stance regarding posting AI-generated images. But obviously large companies are investing heavily in this field, not only in image generators but in other areas as well.

Here is a handy list of statements on why companies that use copyrighted material to train AI should not have to pay for it, or even ask permission to use it:

3 Likes

I take a much, much harder stance than most people I know on the topic, but I do agree that using other people’s work to “train” large language models is a violation of copyright. I think it’s completely natural for the companies developing these programs to argue against paying for that content: paying would mean they make less money and, let’s face it, if they didn’t think they could make money with large language models, they wouldn’t be doing it in the first place.

2 Likes