Original post: https://bsky.app/profile/ssg.dev/post/3lmuz3nr62k26

Email from Bluesky in the screenshot:

Hi there,

We are writing to inform you that we have received a formal request from a legal authority in Turkey regarding the removal of your account associated with the following handle (@carekavga.bsky.social) on Bluesky.

The legal authority has claimed that this content violates local laws in Turkey. As a result, we are required to review the request in accordance with local regulations and Bluesky’s policies.

Following a thorough review, we have determined that the content in question violates local laws in Turkey, as outlined in the legal request. In compliance with these legal provisions, we have restricted access to your account for users.

  • sugar_in_your_tea@sh.itjust.works

    “They are not the good guys, the fediverse is.”

    I think you’re overselling the Fediverse here. The Fediverse absolutely has censorship too; it’s just done by individual instance admins instead of a for-profit company. If large, influential instances shut down or defederate, a lot of content goes with them.

    Yeah, federated instances technically cache that data, but those communities are effectively dead, links are broken, etc. Users can jump to other services, sure, but the service isn’t the same.

    We’ve seen this here on Lemmy. Beehaw was a cool instance, but they defederated fairly early on. Lemmy.ml was super impactful, but their admins are super aggressive with moderation to the point that many avoid their communities. And so on.

    Whether “the Fediverse” is good depends on your instance and the mods and admins of the various communities you are part of. That kind of sucks.

    Maybe it sucks less than whatever major social media network you’re comparing to, but I hesitate to call it “good,” just different.

    • 73ms@sopuli.xyz

      Well, it is fundamentally better, because real decentralization means no single party makes all the calls. I wouldn’t call all of the Fediverse “the good guys,” but I would call it “good.”

      • sugar_in_your_tea@sh.itjust.works

        Sure. It’s like comparing one tyrant, who can be good or bad (but at least isn’t going anywhere), vs. a lot of tyrants whose power is limited to their little areas and who will come and go. I guess that’s better, but I don’t think anyone would say it’s “good,” just a bit better.

        I like the Fediverse, I just think it only went halfway to solving the problem.

        • 73ms@sopuli.xyz

          Do you have a proposal for how you’d solve the other half, then, or do you just think it isn’t enough?

          • sugar_in_your_tea@sh.itjust.works

            Yeah, I’m working on something that I think should improve on things, but I keep bringing it up in the hopes that someone beats me to it. Here are some notes:

            • P2P network based on something like IPFS or Iroh (I picked Iroh)
            • a “community” is a distributed hash table, with posts, comments, etc. as structured keys (see the sketch after this list)
            • everything is cryptographically signed by the author, so you can check for tampering (built-in feature of Iroh)
            • moderation is also distributed, based on “trust”: everyone is a moderator, and you “trust” others’ moderation either explicitly or by happening to moderate similarly; the options are “like,” “dislike,” “relevant,” and “report” (spam, CSAM, etc.)
            • everyone contributes a little storage to the network, and you can adjust your storage quota
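
            Purely as illustration, here is a minimal sketch of what that data model could look like. The type names and shapes are my own assumptions, not the project’s actual types, and the signature is stubbed since Iroh would provide it:

            ```rust
            use std::collections::BTreeMap;

            /// Kind of record stored in a community's DHT.
            #[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
            enum RecordKind {
                Post,
                Comment { parent_post: u64 },
            }

            /// A structured key: community name, record kind, and record id.
            /// Serialized (e.g. "Technology/post/1") it would become the DHT key.
            #[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
            struct RecordKey {
                community: String,
                kind: RecordKind,
                id: u64,
            }

            /// Every record is signed by its author so tampering is detectable.
            /// The signature is a stub here; per the notes, Iroh handles this natively.
            #[derive(Clone, Debug)]
            struct SignedRecord {
                author: [u8; 32],   // author's public key
                body: String,
                signature: Vec<u8>, // over (key, author, body)
            }

            /// A "community" is just the slice of the DHT this peer chooses to sync.
            type CommunityStore = BTreeMap<RecordKey, SignedRecord>;

            fn main() {
                let mut tech: CommunityStore = BTreeMap::new();
                tech.insert(
                    RecordKey { community: "Technology".into(), kind: RecordKind::Post, id: 1 },
                    SignedRecord { author: [0; 32], body: "First post".into(), signature: vec![] },
                );
                for (key, rec) in &tech {
                    println!("{}/{:?}: {} ({}-byte sig by {:02x?})",
                             key.community, key.kind, rec.body, rec.signature.len(), &rec.author[..2]);
                }
            }
            ```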

            Some interesting side effects of this design:

            • single namespace - no “instances” since hosting is distributed (so just “Technology” instead of “Technology@instance.tld”)
            • everyone will see a different feed due to differences in moderation choices (a toy version of this scoring follows this list)
            • no concept of “all” since you wouldn’t sync communities you don’t care about - I would add a discovery mechanism to help here
            • could be “sneakernetted” if countries block this service, provided you have a way to discover other users in each closed region
            • nobody can censor you since moderation is opt-in, so I literally cannot respond to takedown requests by governments
            • there’s a very real risk of echo chambers, but that’s on the user, not centralized mods
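
            To make the “everyone sees a different feed” point concrete, here is a toy version of trust-weighted scoring. The actions’ effects, the weights, and the numbers are all my assumptions, not the actual POC algorithm:

            ```rust
            use std::collections::HashMap;

            /// The four moderation actions from the notes above.
            #[derive(Clone, Copy)]
            enum Action { Like, Dislike, Relevant, Report }

            /// Each user keeps their own per-peer trust weights, set explicitly
            /// or learned from agreeing with that peer's past moderation.
            struct TrustTable {
                weights: HashMap<String, f64>, // peer id -> trust in [0.0, 1.0]
            }

            impl TrustTable {
                /// Score a post from this user's point of view. Because every
                /// user has different weights, every user computes a different feed.
                fn score(&self, votes: &[(String, Action)]) -> f64 {
                    votes.iter().map(|(peer, action)| {
                        let w = self.weights.get(peer).copied().unwrap_or(0.0);
                        match action {
                            Action::Like | Action::Relevant => w,
                            Action::Dislike => -w,
                            Action::Report => -5.0 * w, // reports weigh heavily (assumed)
                        }
                    }).sum()
                }
            }

            fn main() {
                let me = TrustTable {
                    weights: HashMap::from([("alice".into(), 0.9), ("bob".into(), 0.2)]),
                };
                let votes = vec![
                    ("alice".to_string(), Action::Like),
                    ("bob".to_string(), Action::Report),
                ];
                // Positive enough -> shown; negative enough -> hidden.
                // The thresholds would be per-user too.
                println!("score = {:.2}", me.score(&votes)); // 0.9 - 1.0 = -0.10
            }
            ```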

            When launching, I’d have a default set of mods that automatically “block” things like CSAM, but users can choose to remove those and/or adjust weights. The idea is for moderation to be transparent, but also something users aren’t expected to change.
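
            That default set could be as simple as a trust table pre-populated at install time; the moderator ids and weights below are hypothetical:

            ```rust
            /// Hypothetical defaults: a few heavily trusted automated moderators
            /// whose "report" votes effectively hide content. Users can delete
            /// entries or re-weight them, but are never required to.
            fn default_trust() -> Vec<(&'static str, f64)> {
                vec![
                    ("csam-filter.official", 1.0), // made-up id for a built-in CSAM blocker
                    ("spam-filter.official", 0.8), // made-up id for a spam heuristic
                ]
            }

            fn main() {
                for (id, weight) in default_trust() {
                    println!("trusting {id} at weight {weight}");
                }
            }
            ```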

            The only hosting needs would be:

            • relay servers to connect people - relay servers would be federated and incredibly lightweight
            • storage instances - only needed in the early days until enough people join the network
            • website for documentation and whatnot

            It’s very early days (I’m still working on the P2P part, but I have a POC for the moderation algorithm). I’ll probably post once I feel like it’s actually useful, which won’t be for a while.

              • sugar_in_your_tea@sh.itjust.works

                They’re essentially the same thing, no? The main difference is in how they’re applied:

                • filter - selected by the user, may change multiple times in a given session (hashtags, title text, etc)
                • moderation - set by others or through moderation interaction, won’t likely change in a given session

                With Reddit/Lemmy, moderators are chosen by other moderators/admins, or are the people who create the community. It’s arbitrary and frequently leads to people mass-leaving the community if the moderation is poor. Other social media sites are moderated by algorithms or employees, which can also lead to people mass-leaving if the moderation is poor.

                This approach preserves the distinction, but leaves the control in the hands of the user. If moderation is poor, it’s something you can fix using features like:

                • moderation review - look at stuff that’s hidden, which impacts future moderation (with filters to show/hide based on confidence)
                • view/tweak moderation numbers - select from moderation “styles” (e.g. disregard votes, prefer votes, strict/loose, etc.), or set coefficients yourself (advanced, would come with a warning); example presets are sketched below
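
                Those “styles” could plausibly be named coefficient presets; the knobs and numbers below are guesses at what that might look like, not the real design:

                ```rust
                /// Hypothetical coefficients behind the moderation "styles".
                struct ModerationCoefficients {
                    vote_weight: f64,    // how much likes/dislikes count
                    report_weight: f64,  // how much reports count
                    hide_threshold: f64, // scores below this are hidden
                }

                impl ModerationCoefficients {
                    fn preset(name: &str) -> Option<Self> {
                        match name {
                            // "disregard votes": only reports matter
                            "disregard_votes" => Some(Self {
                                vote_weight: 0.0, report_weight: 1.0, hide_threshold: -1.0,
                            }),
                            // "prefer votes": community sentiment dominates
                            "prefer_votes" => Some(Self {
                                vote_weight: 1.0, report_weight: 0.3, hide_threshold: -0.5,
                            }),
                            // "strict": hide anything remotely contested
                            "strict" => Some(Self {
                                vote_weight: 1.0, report_weight: 1.0, hide_threshold: 0.0,
                            }),
                            // "loose": hide only on strong negative signal
                            "loose" => Some(Self {
                                vote_weight: 0.5, report_weight: 1.0, hide_threshold: -3.0,
                            }),
                            _ => None, // "advanced" users would set the fields directly
                        }
                    }
                }

                fn main() {
                    let style = ModerationCoefficients::preset("strict").expect("known preset");
                    println!("strict hides anything scoring below {}", style.hide_threshold);
                }
                ```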

                Hopefully that’s an improvement. Maybe it’s not, idk, but I like the idea of removing centralized moderation.

                • Fredthefishlord@lemmy.blahaj.zone

                  I don’t think your distinction between moderation and filters is correct.

                  I would say it’s closer to this: filters curate what you see, and moderation curates the community.

                  Trust-based systems are absolutely an amazing idea, but it’s worth making it so that once someone has a certain level of “trust,” they don’t just apply filters but actual moderation. A hybrid system could apply filters from a certain set of users while fully removing pure trolling or immoral content (i.e. CP and gore), so that new users or visitors to the community don’t run into it.

                  It also serves as a stronger prevention measure against racists and Nazis.

                  It also preserves the corrective function of communities, without letting popular will reject reasonable expectations just because people dislike them. That has happened a lot in Reddit communities.

                  I strongly agree that purely centralized moderation is bad, but some level of centralization of moderation is beneficial.

                  You could also have trust work per community rather than platform-wide, to prevent exploitation.

                  • sugar_in_your_tea@sh.itjust.works

                    “I would say it’s closer to this: filters curate what you see, and moderation curates the community.”

                    That’s the traditional definition, sure. Traditional moderation essentially forces others to not see certain content based on the moderator’s opinion, and that’s incompatible with a properly P2P application where there is no central authority.

                    Perhaps here’s a more satisfactory definition:

                    • filter - data is stored, but not shown
                    • moderation - data is not stored (or stored separately)

                    So an individual client wouldn’t have the CP, gore, etc. content for a given community, because it has been moderated out. However, another user with different moderation settings might still have that content on their machine. If most people in the network remove the content, it’s effectively gone, since it won’t be shared, but there’s no guarantee that nobody has it. Content nobody sees value in will disappear, since things are only kept while someone wants them.

                    Make sense? The only exception here is the moderation-review queue: depending on your settings you might have that content locally, but it wouldn’t be shared with others (a client feature).
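
                    In client terms, the stored-vs-not-stored distinction could reduce to a single dispatch on the trust score; again a sketch, assuming the threshold model from the earlier sketches:

                    ```rust
                    /// What a client does with an incoming record, per the definitions above.
                    enum Disposition {
                        Show,        // stored and displayed
                        StoreHidden, // filter: bytes kept (and shareable), but not displayed
                        Reject,      // moderation: not stored, so never re-shared by this peer
                    }

                    fn disposition(score: f64, filter_threshold: f64, moderation_threshold: f64) -> Disposition {
                        if score < moderation_threshold {
                            Disposition::Reject
                        } else if score < filter_threshold {
                            Disposition::StoreHidden
                        } else {
                            Disposition::Show
                        }
                    }

                    fn main() {
                        // With every peer making this call locally, content that most
                        // of the network rejects simply stops being replicated.
                        match disposition(-6.0, -1.0, -5.0) {
                            Disposition::Reject => println!("dropped, not re-shared"),
                            Disposition::StoreHidden => println!("stored locally, hidden"),
                            Disposition::Show => println!("stored and shown"),
                        }
                    }
                    ```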

                    “I strongly agree that purely centralized moderation is bad, but some level of centralization of moderation is beneficial.”

                    Perhaps. I just have trouble figuring out how to decide who the moderators are in a way that doesn’t lead to the problem of new users flooding a community and kicking out existing users, so elections are out. There are no admins to step in to mediate disputes or recover from a hostile takeover.

                    So my solution leans on users generally preferring not to associate with scammers, spammers, pedophiles, etc., and that disassociation helps them benefit from the moderation efforts of peers who think similarly to them. However, this also means that Nazis, pedophiles, etc. can use the platform to find like-minded people. But the only people impacted by their nonsense should be those who believe similarly, since other users wouldn’t see their content.

                    So we’ll still end up with silos, but they’ll be silos that users choose. If they don’t like what they’re seeing, they have the tools to fix that. Hopefully this is good enough that most people will get what they want, and in a way with user-driven censorship instead of platform-driven censorship.

                    The nice thing about this setup is that we can add centralized moderation if we choose in the form of public filter lists. It would be completely opt-in, and clients could be tuned for that use-case. But because of its distributed nature, there’s no protection at the protocol level to prevent undesirable people from forking the client and removing those types of filters, in much the same way that Lemmy doesn’t prevent someone from ignoring all moderation.

                    I’m open to suggestions. I also don’t like the idea of Nazis and child abusers using my platform, but the distributed nature means nobody has any form of top-down control. Either we elect a moderator (which is subject to bots and whatnot), or we remove the concept of moderator entirely.

                    I think we’ll end up with accounts that people can trust completely, such as bots that identify CP, gore, extremism, etc., and then you can just explicitly add them to your trusted moderator list. I’ll probably add something like that to the codebase once it’s created. But yeah, it’s a tricky problem to solve, and I’m trying to lean toward reduced centralization whenever I have to make a choice.

    • squozenode@lemmy.world

      There’s always gonna be an admin of some kind unless we all run our own instances, but that ends up with everyone just in large echo chambers again, as they federate only with people they agree with, or to scream at people they don’t.

      • sugar_in_your_tea@sh.itjust.works

        That’s not necessarily true. Is there an admin of BitTorrent? Not really; people just contribute resources and the network keeps on trucking.

        I’d like to see more exploration of P2P networks like BitTorrent. A single person leaving the network shouldn’t impact anyone; the data just gets shuffled so it stays available. The tricky part is moderation, but surely that’s a solvable problem.
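
        BitTorrent-style availability usually comes down to maintaining a replication factor per key. Here is a toy model of that “data gets shuffled” behavior; the factor of 3 and the structures are placeholders:

        ```rust
        use std::collections::{HashMap, HashSet};

        /// Toy availability model: every key should live on at least this many peers.
        const REPLICATION_FACTOR: usize = 3;

        struct Network {
            peers: HashSet<String>,
            holders: HashMap<String, HashSet<String>>, // key -> peers holding a copy
        }

        impl Network {
            /// When a peer leaves, copy its under-replicated keys to remaining peers.
            fn peer_left(&mut self, gone: &str) {
                self.peers.remove(gone);
                for holders in self.holders.values_mut() {
                    holders.remove(gone);
                    while holders.len() < REPLICATION_FACTOR {
                        // Pick any remaining peer that doesn't already hold the key.
                        match self.peers.iter().find(|p| !holders.contains(p.as_str())).cloned() {
                            Some(p) => { holders.insert(p); } // the data "shuffles" here
                            None => break, // not enough peers to meet the factor yet
                        }
                    }
                }
            }
        }

        fn main() {
            let mut net = Network {
                peers: ["a", "b", "c", "d"].iter().map(|s| s.to_string()).collect(),
                holders: HashMap::from([(
                    "Technology/post/1".to_string(),
                    ["a", "b", "c"].iter().map(|s| s.to_string()).collect(),
                )]),
            };
            net.peer_left("a"); // the post is re-replicated onto "d"
            println!("{:?}", net.holders["Technology/post/1"]);
        }
        ```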

    • VampirePenguin@midwest.social

      For sure. Not that we don’t have problems, but corporate overlords mining our data or censoring us for political back-scratching aren’t among them. That’s all I’m trying to say.

      • sugar_in_your_tea@sh.itjust.works

        Nothing is really stopping them from mining your data on Lemmy: all they need to do is create an instance and federate, and then they can hoover up whatever they want.

        Censorship is more difficult, sure. But we’re still subject to whatever arbitrary censorship the mods and admins want.

        I think the Fediverse is on net better, but I do think the model has many other problems, and that it’s more of a stepping stone to something better. But being “better” doesn’t mean we’re “good” and the other options are “bad,” it just means we make different tradeoffs. There’s a very real risk of large instances shutting down because the admins lost interest, for example, and that’s less of a concern for a for-profit operation.

        I guess my point is to not oversell the Fediverse. It’s cool, which is why I’m here, but it’s far from perfect.