The Nexus Of Privacy looks at the connections between technology, policy, strategy, and justice. We’re also on the fediverse at @thenexusofprivacy@infosec.pub and @thenexusofprivacy@lemmy.sdf.org (but lemmy.sdf.org is having federation problems so now we’re here)

  • 6 Posts
  • 25 Comments
Joined 11 months ago
Cake day: December 14th, 2023





  • A very interesting idea! Actually, it seems to me there are two interesting ideas here:

    • endorsements. Something like this (whether it’s from feeler servers or other sources) is clearly needed to make consent-based federation scale. IndieWeb’s Vouch protocol and the “letters of introduction” Erin Shephard discusses in “A better moderation system is possible for the social web” are similar approaches. You could also imagine building endorsement logic on top of an instance catalog like Fediseer or The Bad Space, or infrastructure like FIRES.

    • restricting visibility of a boost to servers the original post is federated with. This is something that’s long overdue in the fediverse! Akkoma’s bubble is a somewhat-similar concept; Bonfire’s boundaries might well support this. (A rough sketch of what the check could look like is below.)
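
    To make the second idea a bit more concrete, here is a minimal sketch of the check a server could run before relaying a boost (illustrative Python only; the data structures and function names are assumptions of mine, not code from Akkoma, Bonfire, or any other project): only deliver the Announce to servers the original post has already reached.

        # Illustrative sketch only; not from any existing fediverse codebase.
        from dataclasses import dataclass, field


        @dataclass
        class Post:
            author_server: str
            # Servers the original post was actually delivered to (i.e. is federated with).
            delivered_to: set[str] = field(default_factory=set)


        def boost_recipients(post: Post, follower_servers: set[str]) -> set[str]:
            """Limit a boost's delivery to servers the original post already reaches.

            follower_servers: the servers hosting followers of the person boosting.
            Only the intersection receives the Announce, so a boost can't widen the
            original author's audience to servers their post never federated with.
            """
            allowed = post.delivered_to | {post.author_server}
            return follower_servers & allowed


        # Example: the post only federated to a.example and b.example, so the
        # booster's followers on c.example never receive it.
        post = Post(author_server="home.example", delivered_to={"a.example", "b.example"})
        print(boost_recipients(post, {"a.example", "c.example"}))  # {'a.example'}

    In a real implementation the “delivered to” set would come from the original server’s delivery records or an explicit allow-list, which is also where the endorsements from the first idea could plug in.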





  • Or, using Gab provides a sense of what’s possible.

    And child porn is a great example – and CSAM more generally. Today’s fediverse would have less CSAM if the CSAM instances weren’t on it. Why hasn’t that happened? The reason many instances give for not blocking the instances that are well-known sources of CSAM is that CSAM isn’t the only thing on those instances. And it’s true: these instances have lots of people talking about all kinds of things, and only a relatively small number of people spreading CSAM. So not blocking them is completely in alignment with the Big Fedi views Evan articulates: everybody (even CSAM-spreaders) should have an account, and it’s more important to have the good (non-CSAM) people on the fediverse than to keep the bad (CSAM-spreading) people off.

    A different view is that whoa, even a relatively-small number of people spreading CSAM is way too many, and today’s fediverse would be better if they weren’t on it, and if the instances that allow CSAM are providing a haven for them then those instances shouldn’t be on the fediverse. It seems to me that view would result in less CSAM on the fediverse, which I see as a good thing.



  • On The (annotated) case for a “big fedi” (Fediverse@lemmy.world):

    I agree that small doesn’t equal safer; in other articles I’ve quoted Mekka as saying that for many Black Twitter users there’s more racism and Nazis on the fediverse than on Twitter. And I agree that better tools will be good. The question is whether, with current tools, growth with the principles of Big Fedi leads to more or less safety. Evan assumes that safety can be maintained: “There may be some bad people too, but we’ll manage them.” Given that the tools aren’t sufficient to manage the bad people today, that seems like an unrealistic assumption to me.

    And yes, there are ways to keep these people off the fediverse (although they’re not perfect). Gab isn’t on the fediverse today because everybody defederated it. OANN isn’t on the fediverse today because everybody threatened to defederate the instance that (briefly) hosted them, and as a result the instance decided to enforce their terms of service. There’s a difference between Evan’s position that he wants them to have accounts on the fediverse and the alternate view that we don’t want them to have accounts on the fediverse (although we may not always be able to prevent it).


  • It’s a good comment, thanks for sharing it here! On the bolded part, yes, it’s possible to do polls on Mastodon … it could be very interesting to do a series around these questions. But of course a lot depends on who’s doing the poll. Evan, for example, has blocked a lot of people – which is fine, there’s nothing the matter with blocking people, but it skews the poll results. And a lot depends on how the poll questions are phrased. Still, it’s a good idea and I’ll think about whether there’s a sensible way to do it.

    I agree that some of what Evan characterised as Small Fedi isn’t about small for small’s sake; it’s more about the view you describe – what L. Rhodes calls “networked communities”. Of course, this approach results in slower growth than the Big Fedi view, and so a smaller network in the short-to-medium term, so from his perspective I can see why he chose this framing.

    And from the comment:

    Can the Big Fedi people connect with everyone they want to, while the Small Fedi folk keep their comfortable distance and protect their safe spaces?

    Yes, I think a schism’s likely to happen – “Meta’s fediverse”, instances that federate with Threads, will be more attractive to Big Fedi people, and the “free fediverses” that don’t federate with Threads (or other surveillance capitalism companies) will be more attractive to people who don’t buy into the bigger-is-better view.


  • Indeed, there are lots of people like that already on the fediverse, and blocking entire instances is a blunt but powerful tool that well-moderated fediverse instances currently rely on for protection. Today, people on instances whose admins and moderators don’t block instances with multiple badly-behaving people have to deal with a lot more harassment and hate speech than people on instances whose admins do. So we’ll certainly see a situation where some instances block Threads and others don’t. The open question, though, is how many instances will decide to also block instances that federate with Threads – just as many instances decided to block instances that federated with Gab.


  • It’s not that he wants the fediverse to be unsafe. It’s more that the Big Fedi beliefs he describes for the fediverse – everybody having an account there (which by definition includes Nazis, anti-trans hate groups, etc.), relying on the same kind of automated moderation tools that we’ve seen don’t lead to safety on other platforms – lead to a fediverse that’s unsafe for many.

    And sure there are some people who say Fedi is fine as it is. But that’s not the norm for people who disagree with the “Big Fedi” view he sketches. It’s like if somebody said “People who want to federate with Threads are all transphobic.” There are indeed some transphobic people who want to federate with Threads – We Distribute just reported on one – but claiming that’s the typical view of people who want to federate with Threads would be a mischaracterization.







  • On “influencer”, I don’t think we’re going to convince each other. I’ve sometimes described professors as influencers – Dan Gillmor and Scott Galloway leap to mind.

    I also don’t think many of those people would agree that they “strongly support Meta.”

    That’s true! Meta’s got such a deservedly bad reputation that very few want to see themselves as supporting Meta! And I agree that they’re supporting federation with Meta despite their real misgivings about the company, and they’re doing it because they see it as in the fediverse’s best interests. But still, Meta’s saying “we want to embrace the fediverse”, these influencers are saying “this is a good thing” and telling people that concerns are overstated … that’s supporting Meta.

    If the Alex Jones server decides to terrorize a bunch of families, how can they claim to not have an association? How would they not have pressure to defederate or cancel their hosting?

    The legal responsibilities and pressures are different for a service provider or infrastructure provider than for a social network. They’ll get pressure, and Threads (a social network) might defederate, but I wouldn’t expect them to cancel their services or hosting. Organizations like EFF argue that infrastructure providers should stay out of policing content – even for content like Kiwifarms. I should probably discuss this in more detail (or maybe do a separate post on this).

    They can track everything they do because they control their servers; they can’t track us because we control ours.

    If you’re on a server that federates with Meta and you haven’t blocked Meta, then most things you do can potentially be federated to Meta, at which point they’ll be tracked even if you aren’t using any Meta services.

    Whether we federate or not also has no impact on their ability to do any of the Meta-Fediverse stuff. We can’t run up and smack the ActivityPub out of their hands and be like, “No! Bad Meta!” ;)

    That last statement is true. Still, in an alternate universe where fediverse influencers said “we don’t want you” and the vast majority of instances chose not to federate, it would be similar to the Gab situation: “Meta wanted to come to the fediverse, we said no, we don’t want hate groups and genocide-enablers here, so they’re doing their own thing”, with the addition of “they’re also calling it the fediverse but don’t fall for it”. But we’re not in that universe.


  • No worries on the tone and wording, it’s the internet, I’ve experienced far worse. And your feedback is helpful, so the time you put into it is appreciated.

    On Evan as influencer, I’ve highlighted for a while the contrast between opinions of Eugen and other lead devs of fediverse projects, large instance admins, the people still on the SWICG standards body, and journalists who write about the fediverse – who in general almost all strongly support Meta – and people on the fediverse, who are much more split. “Influencer” is as good a term as any to refer to the first category of people.

    I think the story of their public statements is that they’ve said everything you’d hope to hear. I’ve seen many takes that they somehow betray a hidden agenda, and that seems wrong at the very least…

    In the statements I quoted they were very up front about their agenda! Similarly, the section where I talk about their potential long-term plans if they decide to invest in this direction is consistent with Zuckerberg’s comments about his interest in a decentralized approach. But yeah, they’re also saying what they know people want to hear.

    I think it’s also important to note that they’ve only said that they’re not sure what the default will be.

    Fair, I’ve rewritten that section to clarify that this is only their current plan. It’d be really funny if Meta suggested taking the privacy-friendly approach knowing that Mastodon would try to talk them out of it 🤣🤣🤣. I still expect them to go with opt-in, but we shall see. I agree that if they go the opt-in route it’s not necessarily for nefarious reasons; in my view it really is in their users’ best interest. But that’s the thing about embrace-and-extend strategies (whether or not the third step is to extinguish): the extensions are very often in the users’ interests, they just cause problems for the open alternatives.

    On Cambridge Analytica, I agree the data flow was in a different direction, but still: they trusted Bannon and CA with the data that was the most valuable asset in their business model. And (other than some bad press) it worked out just fine for them! So I guess we draw different conclusions about who they’ll trust with what in the future.

    In any case though…

    So they would need to admin those instances or trust that the admins wouldn’t tamper with that data.

    No, they have other options here. One is to provide services that cooperating instances in “Meta’s fediverse” can use and that involve sharing data with Meta, creating a win/win scenario for sharing the data. Think of Disney or some corporation that wants to target ads (using Meta’s services, in return for a revenue share) to people on their instances – and automate some of the moderation (again using Meta’s services). Why wouldn’t they harvest data and share it with Meta so that the services are more effective? Another is to provide a hosting service for corporations (and perhaps individuals) to have their own instances … it’s kind of a variant of the first one but packaged differently.

    (And both of these apply to non-public data as well.)

    In terms of blocking a DeSantis instance, I agree it’s not surrendering control to them; I just meant that Meta could monetize the heck out of it even if all the instances in the current fediverse blocked it. If they had the infrastructure in place today, DeSantis and others would be paying to boost their instances’ posts to Threads (and so would Gab and Truth Social and the instances that Fox News, Breitbart, etc. are running). They might well miss the window for the 2024 US election, but it (hopefully) won’t be the last election in the world.


  • Sure, Meta – and Google, and Microsoft – is good about funding open-source projects when it suits their interests. Given where they are relative to OpenAI and Google, releasing LLaMA as open source made a lot of sense for them. If they decide to seriously invest in fediverse compatibility they might well do something like release an open-source client toolkit that provides full functionality on Threads, supports whatever subset of Threads functionality Mastodon and maybe a couple of other platforms support, and has adaptors so that the community can support other platforms. Right now there isn’t a good solution (nobody uses the AP C2S standard, and Mastodon’s API is the de facto standard but has compatibility problems and quirks), so it would benefit the community. And it would have support for Threads functionality that other platforms don’t support, so it benefits Meta more than everybody else.
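
    To illustrate the toolkit-plus-adaptors idea, here is a purely hypothetical sketch (Python, with invented class and method names that don’t correspond to any real Threads or Mastodon API): a common client interface with per-platform capability flags, so Threads-only features quietly degrade on platforms whose adaptors don’t support them.

        # Purely hypothetical sketch; invented names, not a real Threads or Mastodon API.
        from abc import ABC, abstractmethod


        class ClientAdaptor(ABC):
            """Common interface a client app would code against."""

            # Features this platform supports, out of everything the toolkit offers.
            capabilities: frozenset[str] = frozenset()

            @abstractmethod
            def post_status(self, text: str) -> str:
                """Publish a post and return its ID."""


        class MastodonAdaptor(ClientAdaptor):
            capabilities = frozenset({"post", "boost", "polls"})

            def post_status(self, text: str) -> str:
                # A community-maintained adaptor would call the Mastodon REST API here.
                return "mastodon:1"


        class ThreadsAdaptor(ClientAdaptor):
            # A hypothetical extra capability other platforms lack: the part that
            # benefits Meta more than everybody else.
            capabilities = frozenset({"post", "boost", "polls", "trending_topics"})

            def post_status(self, text: str) -> str:
                return "threads:1"


        def show_trending(adaptor: ClientAdaptor) -> None:
            # Clients check capabilities, so Threads-only features simply don't
            # appear on platforms whose adaptors lack them.
            if "trending_topics" in adaptor.capabilities:
                print("...render trending topics...")
            else:
                print("Trending topics aren't available on this platform.")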

    But we were specifically talking about why they’d make it easy for people to move away from Threads to other platforms. Do you think that’s in their business interest?



  • OK, so, if you don’t trust Meta and think they’re generally acting in a selfish manner, why do you think that they’ll freely let people move from Threads to the fediverse and make it easy to take all their followers?

    Or phrased somewhat differently: it’s clearly good from their perspective to say that people can move their followers. Do you think it’s also always better for them to actually let people easily move all their followers (which Meta is able to monetize while they’re on Threads) to some other instance (where it’s harder for Meta to monetize them)? If there are situations where it’s not better from Meta’s perspective, why do you think they’ll make it easy – or even allow it?