Internet Regulation, Privacy, Hacks, Tech

If you were still using Facebook after the first two data leaks, then you are an idiot. Also, Disclose TV is not a reliable source of information on anything at all, let alone cybersecurity. The article they linked has since been amended, and they are now saying it is possibly a scam run by con artists.

I have to, because of school and some of my friends. I wish I could abandon it though.

2 Likes

I finally deleted my mostly unused Facebook account 3 months ago. Hope it was in time before anything serious happened with my data, limited as it would probably have been.

If you’re on Twitch, you probably should change your password and enable 2FA:

3 Likes

Hey hey, another new EU regulation idea! How about we require messaging apps to filter your otherwise end-to-end encrypted messages for copyright infringements? That sounds good!

This is, by the way, part of the Digital Services Act that I linked a few posts above. Looks like this thing will become the next big nightmare since the one in 2019.

2 Likes

Who would have guessed?

6 Likes

Shocking

5 Likes

Reddit wants to join the money-laundering trends by adding to the Karma point system a new system based on Ethereum.

https://www.reddit.com/community-points

Depending on how much use this gets, the CO2 footprint could become huge as long as no Proof of Stake is used. Of course this is in no way addressed.

Also a cool feature: this system that is advertised to liberate and free you can be used for weighted votes. It seems the default view on votes will be how the “richest” users voted. Great!
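To illustrate why weighted votes are a problem, here is a toy sketch. All names, balances and the `tally` function are made up for illustration; this is not Reddit’s actual Community Points mechanism, just the general principle of balance-weighted voting:

```python
# Toy illustration of token-weighted voting (hypothetical data and logic,
# not Reddit's actual system). With one-person-one-vote the "no" side wins
# 3 to 1; weighted by token balance, a single large holder flips the result.

def tally(votes, weighted):
    """votes: list of (balance, choice) pairs. Returns the winning choice."""
    totals = {}
    for balance, choice in votes:
        totals[choice] = totals.get(choice, 0) + (balance if weighted else 1)
    return max(totals, key=totals.get)

votes = [
    (10_000, "yes"),  # one "rich" user
    (10, "no"),
    (5, "no"),
    (1, "no"),
]

print(tally(votes, weighted=False))  # → "no"  (3 votes vs. 1)
print(tally(votes, weighted=True))   # → "yes" (10000 points vs. 16)
```

Whoever holds the most tokens decides, no matter how many people disagree.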

I also like how inconsistent this is:

Have complete control
Only you control your private key. Reddit only knows your public address, so we can check your balance and give you new Points. We can’t take your Points away or do anything with them without your explicit permission.

vs

In addition, the community has final say on who earns how many Points. If someone is acting in bad faith, for example spamming the subreddit, the community can vote to strike them from current and future distributions.

Announcement on Twitter:

https://twitter.com/iamRahul20x/status/1455950491658117125

Reddit has 500M monthly active users.
When we all pull this off, we would onboard 500M web2 users into web3 and then there is no going back.
Let me say that again - 500 million new crypto users.

:partying_face: :volcano:

Also, if companies (!) really wanted to decentralize the Internet, they would encourage people to use stuff like Mastodon instead of putting their platform on a blockchain.

3 Likes

On another matter: many things are going on at the EU level.

Today another vote on the Digital Services Act was scheduled. I did a first post about it a bit above, last month:

But that got delayed, because the MEPs were disagreeing about the extent of it.

Now the optimistic users would think “wow, there are politicians who oppose the bad stuff? Even enough to delay the vote?”

well…

They were not happy because a growing number of them wants it to be worse. With proposals like this:

(EDIT: Hahaha it is copy-paste from the Consumer Protection Cooperation Regulation from 4 years ago. This would not have passed upload filters. :face_with_hand_over_mouth:)

The difference to the current situation: so far, not every platform has to apply blocking measures; mostly larger social media is affected. That distinction might soon not matter anymore, because everyone would have to apply them.

Everyone? No, only those who are truly neutral toward the data they host.
What is considered neutral?
Well… the moment you rank content, suggest it to your users, or even offer a search engine which naturally lists results in a certain order, you are not neutral anymore. RIP

Another idea is that platforms are not allowed to restrict access to content from press publishers, no matter if it contains disinformation or violates the platform’s TOS.

The concept is known as ancillary copyright for press publishers and is basically an attempt to force the digital world to keep the analog world of the press alive. In the end the aim is always to first force platforms to pay for the content they link to, and second to forbid platforms from not sharing it. No, that is not overly simplified.


Unlike the Copyright Reform (some might remember my thread on the old HMF), the Digital Services Act (DSA) is the mother of all Internet regulation in the EU. It does not cover one area of activity (like copyright) but everything both companies and users do. Which is why it can easily be made worse if the parliament wants it to be.

I sadly have no English source for all of that currently; here is a German one.

1 Like

As you already know from this thread, the EU wants to fight end-to-end encryption.

As you also know, the UK is no longer a member of the EU. While Brexit claims to break free from EU regulation, worry not that this could mean the Internet gets some more breathing room there:

I don’t know why the actual headline is not in the snippet here, it is much better:

Privacy is for paedophiles, UK government seems to be saying while spending £500k demonising online chat encryption

4 Likes

Hey guys, I found out why YouTube got rid of the dislike button!

2 Likes

Thisisfine.jpg

1 Like

Don’t even need to read the article to say this, but God bless Ken Klippenstein.

“Am deutschen Wesen soll die Welt genesen.” (“The German spirit shall heal the world.”)

https://justitia-int.org/en/the-digital-berlin-wall-act-2-how-the-german-prototype-for-online-censorship-went-global-2020-edition/

The Digital Berlin Wall Act 2: How the German Prototype for Online Censorship went Global – 2020 edition

“Once democracies cede the high ground and renege on their commitment to free speech by privatizing and outsourcing regulation, authoritarians will rush in creating a regulatory race to the bottom. This entails severe and negative consequences for free speech, independent media, the vibrancy of civil society and political pluralism, without which authoritarianism cannot be defeated, nor democracy protected,” says Jacob Mchangama.

While I agree with the statements, I did not know this site until now. It is a Danish think tank which tries to influence the public and decision makers with regard to fundamental rights.

Okay the next big thing the EU wants to put upon us in this topic is about to shape up.

tl;dr: To stop child pornography, messengers shall be required to scan our chats.

Let’s start off with the draft, which got leaked today.

Proposal for a

REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL
laying down rules to prevent and combat child sexual abuse

Article 10

Technologies and safeguards

  1. […]

  2. […]

  3. The technologies shall be:
    (a) effective in detecting the dissemination of known or new child sexual abuse
    material or the solicitation of children, as applicable;
    (b) not be able to extract any other information from the relevant communications
    than the information strictly necessary to detect, using the indicators referred to
    in paragraph 1, patterns pointing to the dissemination of known or new child
    sexual abuse material or the solicitation of children, as applicable;
    (c) in accordance with the state of the art in the industry and the least intrusive in
    terms of the impact on the users’ rights to private and family life, including the
    confidentiality of communication, and to protection of personal data;
    (d) sufficiently reliable, in that they limit to the maximum extent possible the rate
    of errors regarding the detection.

  4. The provider shall:
    (a) take all the necessary measures to ensure that the technologies and indicators,
    as well as the processing of personal data and other data in connection thereto,
    are used for the sole purpose of detecting the dissemination of known or new
    child sexual abuse material or the solicitation of children, as applicable, insofar
    as strictly necessary to execute the detection orders addressed to them;
    (b) establish effective internal procedures to prevent and, where necessary, detect
    and remedy any misuse of the technologies, indicators and personal data and
    other data referred to in point (a), including unauthorized access to, and
    unauthorised transfers of, such personal data and other data;
    (c) ensure regular human oversight as necessary to ensure that the technologies
    operate in a sufficiently reliable manner and, where necessary, in particular
    when potential errors and potential solicitation of children are detected, human
    intervention;
    (d) establish and operate an accessible, age-appropriate and user-friendly
    mechanism that allows users to submit to it, within a reasonable timeframe,
    complaints about alleged infringements of its obligations under this Section, as
    well as any decisions that the provider may have taken in relation to the use of
    the technologies, including the removal or disabling of access to material
    provided by users, blocking the users’ accounts or suspending or terminating
    the provision of the service to the users, and process such complaints in an
    objective, effective and timely manner;
    (e) inform the Coordinating Authority, at the latest one month before the start date
    specified in the detection order, on the implementation of the envisaged
    measures set out in the implementation plan referred to in Article 7(3);
    (f) regularly review the functioning of the measures referred to in points (a), (b),
    (c) and (d) of this paragraph and adjust them where necessary to ensure that the
    requirements set out therein are met, as well as document the review process
    and the outcomes thereof and include that information in the report referred to
    in Article 9(3).

  5. The provider shall inform users in a clear, prominent and comprehensible way of the
    following:
    (a) the fact that it operates technologies to detect online child sexual abuse to
    execute the detection order, the ways in which it operates those technologies
    and the impact on the confidentiality of users’ communications;
    (b) the fact that it is required to report potential online child sexual abuse to the EU
    Centre in accordance with Article 12;
    (c) the users’ right of judicial redress referred to in Article 9(1) and their rights to
    submit complaints to the provider through the mechanism referred to in
    paragraph 4, point (d) and to the Coordinating Authority in accordance with
    Article 34.
    The provider shall not provide information to users that may reduce the effectiveness
    of the measures to execute the detection order.

  6. Where a provider detects potential online child sexual abuse through the measures
    taken to execute the detection order, it shall inform the users concerned without
    undue delay, after Europol or the national law enforcement authority of a Member
    State that received the report pursuant to Article 48 has confirmed that the
    information to the users would not interfere with activities for the prevention,
    detection, investigation and prosecution of child sexual abuse offences.

Sorry that was a bit much but let me comment on it:

  • Chats these days are encrypted. This encryption either has to be modified with backdoors, or the scanning has to be done on your very own device before the encryption is applied, which is also a kind of backdoor. The latter is more likely to happen, as Apple already does that with questionable success.
  • Needless to say, there is no good way to detect child pornography. There are hash lists that can be used to compare against known material. But they are not reliable either, and so-called hash collisions for them are known, which means you can trigger the detection with harmless images.
  • While the proposal requires the messenger to inform you, it shall not inform you in great detail (so you cannot try to bypass the detection with that knowledge), and in case of a “hit” only after e.g. Europol has allowed it to.
  • It is up to the messenger to implement the detection; it should not be used for anything else and should detect misuse. “as applicable” :smirk: I don’t see how that could work.
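To make the hash-list point above concrete, here is a toy sketch of how such matching works in principle. This is not the real PhotoDNA or NeuralHash algorithm, and the hash values are made up; the point is only that perceptual hashes must be compared with a tolerance, and any tolerance opens the door to collisions with harmless content:

```python
# Toy perceptual-hash matching (illustrative only; real systems such as
# PhotoDNA or NeuralHash are far more complex, but the matching principle
# of "close enough counts as a hit" is the same).

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical hash list of known material (made-up value).
known_hashes = [0xDEADBEEFCAFEF00D]

def is_flagged(image_hash: int, threshold: int = 8) -> bool:
    # Perceptual hashes match with tolerance (images get resized,
    # recompressed, ...), so nearby hashes also count as hits.
    return any(hamming(image_hash, h) <= threshold for h in known_hashes)

# An exact match is flagged...
print(is_flagged(0xDEADBEEFCAFEF00D))          # True
# ...and so is anything within the tolerance: an adversary can craft a
# harmless image whose hash lands this close (a "hash collision").
print(is_flagged(0xDEADBEEFCAFEF00D ^ 0b111))  # True (only 3 bits differ)
# A genuinely unrelated hash is not.
print(is_flagged(0x0123456789ABCDEF))          # False
```

The tolerance is unavoidable (otherwise trivially recompressing an image would evade detection), and it is exactly what makes false positives and crafted collisions possible.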

This is nothing other than a Trojan-horse type of spyware reading your chats, legally required to ship with messengers. I bet no dissidents in non-democratic countries will do a backflip out of joy. And I don’t want us to get there in Europe with another big step either.

I expect more analyses to come up from today on, now that we have a first draft. Later down the road there will also be demonstrations, and I will try to keep you informed here about the dates and places.

EDIT: Here is a thread that analyzes the draft. Also, just now, the official draft has been released and there are no significant differences.

https://twitter.com/AlecMuffett/status/1524066299600683008

2 Likes

A racist gun-enthusiast nobody leaked US government documents for the grand pursuit of showing off to a group of bedroom-dwelling teenagers in their oh-so-edgy Discord server. The media duly flocked to report on it.

Imagine how much better this scenario would have turned out if Discord itself were a little more stringent in its policies against said users that inhabit the site.

The current ecosystem of Discord’s userbase is not just abhorrent, it is unjustifiably questionable. The platform’s seeming unwillingness to address these problems, passing them off with its “easiest and simplified way to talk” branding, has created a culture of paranoia and injustice where you will be protected regardless of how harmful you come across.

Discord already has a system in place to handle reports and act accordingly. Why not use said system to detect obvious red flags such as “GOVERNMENT LEAKS” on your site?

https://www.washingtonpost.com/national-security/2023/04/13/suspect-pentagon-documents-leak/

By the time Discord would take action against someone who does something like this, it would be too little, too late. Leaks spread instantaneously once they’re uploaded anywhere on the internet. Reminds me of the jet documents leaked on the War Thunder forums. Also, fake accounts, proxies, the dark web etc. are a thing.

The government is responsible for keeping classified documents safe. If a 21-year-old was able to do this, then someone needs to be sacked and their security needs an overhaul, pronto.

I am always skeptical when it comes to detecting systems.

Right now there are surely many places, also on Discord, that discuss these leaks and link to external sites covering them. I don’t see how a system reacting to “government leaks” would not also trigger a wave of false positives at a time like this.

And leaks, if you as a site decide that they are forbidden content, are more clear-cut from the point of view of an automated system than the more everyday issues of bullying, harassment and the like.
That Discord fails to detect so much shows that “cutting-edge technology”, whether Discord uses that term or any other social media does, is just a buzzword. They have something, it is helpful enough to justify its existence, but it does not solve the general problem of people being mean to other people.

Discord has quite a high number of actions taken against users. But it is impossible to say how high the actual number of wrongdoings is, so it is hard to put these numbers into a frame. From personal experience as a moderator I know that, on top of that, it is super hard to keep voice chats a safe space. So hard that it was simply not possible to run them on the HITMAN Discord. I am quite certain Discord is helpless at investigating reports from these spaces.

1 Like

How would an automated detection system reliably differentiate between somebody who wants to blow up people in real life and people like us, who want to blow up people in Hitman? After all, we use a lot of violent language. “Blow them up”, “eliminate the target”, “go for headshots” are some examples.
Sure, context is important, but how does an automated script understand the subtleties of human interaction?
I agree with Urben in that I think it would be hard to create an automated detection algorithm that finds all legitimate threats while producing few or no false positives.
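The false-positive problem is easy to demonstrate with even the simplest detection approach, naive keyword matching. This is a deliberately crude sketch with made-up phrases; real classifiers are statistical rather than keyword-based, but they inherit the same context problem:

```python
# Naive keyword-based threat detection (deliberately crude sketch).
# It cannot tell a real-world threat from Hitman strategy talk, because
# the words are identical and only the context differs.

THREAT_PHRASES = ["blow them up", "eliminate the target", "go for headshots"]

def looks_threatening(message: str) -> bool:
    text = message.lower()
    return any(phrase in text for phrase in THREAT_PHRASES)

# A gamer discussing Hitman tactics trips the same rule an actual threat
# would -- a false positive:
print(looks_threatening("Eliminate the target with the propane tank!"))  # True
print(looks_threatening("Meet you at the cafe at 5"))                    # False
```

Smarter models shift the threshold around, but the underlying ambiguity stays: the signal the detector sees is the same in both the harmless and the harmful case.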

1 Like