This post is going to get rapidly brutal and graphic. Consider this to be a content warning, if you need one.
Over the few months that followed, I saw tragedy after tragedy. I saw the videos of the pure visceral devastating aloneness and fear in the eyes of children who were in the immediate aftermath of their parents being killed in front of them. Children too young to know to whom they prayed or why, screaming for God to hear them. I watched children cling to whichever adult was closest to them, usually a blood-stained doctor in a hospital, because they had just had everything they knew of their life ripped from them, by the most advanced weapons in the world falling from the sky over them, whose explosions tore through their house and their family. I saw very young sisters forced into the position of holding their slightly-younger brothers to them to try to reassure them, when neither of them understood why they were suffering this, neither of them knew what would happen next, and neither of them had anywhere near enough emotional development to even begin to work through violently, loudly, chaotically, instantaneously losing everything, including their parents.
I saw fathers screaming and crying, trying to dig through rubble with their bare hands, yelling out the names of the children they knew had been crushed and killed after their home collapsed after being directly hit by an Israeli bomb. The fathers must’ve known their kids were gone, they must’ve known their efforts were in vain, they must’ve known the lights of their lives had been taken already. I watched the panicked and desperate digging of pure fatherly instinct turn into a weeping so uncontrollable that their ability to try to dig started to fail them, until they collapsed into a defeated and destroyed pile, screaming up to the sky for God to hear them.
I saw people with limbs missing. I saw mothers carrying the only remaining pieces of their children in bags. I saw mothers collapsing in the street, screaming for God to hear them, upon being told that their child had been brutally and carelessly murdered by bombs big enough to level a whole block. I saw people with nothing but scars and the knowledge of how many funerals they have not yet had the opportunity to have walking down dirt roads desperately attempting to reach the safe zones promised to them by Israel, in hopes of some amount of peace, some amount of time to mourn, some amount of time to find some sort of reason to live, among piles of blood and corpses rotting under concrete. I saw those safe zones get bombed.
I watched every new event, every new tragedy, and each time I said to myself “if this doesn’t change the situation, what will?” Each time I said to myself “if this doesn’t cause a universal public backlash over here, what will?” I watched as my Government explained why those traumatised blood-soaked children were actually just too complicated for me to understand and we must continue supporting and arming Israel. I watched as kind and caring people around me said “it’s so awful what’s happening over there” as they went about their normal business.
I became numb. I still attended every march I could, I flyered events, I did speeches just because I knew this was important even though I could no longer empathise with those children, those fathers, those mothers. I knew I had to do something, I had so many luxuries, and they had been wounded and maimed and killed with so little to their name. It’s easy to defend myself here; if I thought about this stuff all the time, I would never get anything done, and if I got nothing done, I would not be out on the streets protesting. Many would insist that it’s a natural and predictable response that humans respond to trauma by shutting down their ability to feel things, but in the last couple of days, I have snapped. Maybe it was Shaun’s Palestine video, maybe it was the news about Aaron Bushnell, maybe it was just a matter of time, but something has caused these images and news stories to cut me deeper than they ever have before. Although my heart has grown too callous to imagine what the suffering feels like, I am aggravated by the hole blown right through my humanity.
Elijah himself and the roar of the river Jordan would struggle to wash away the sin that has now fallen on all of us. When this is all over and people are picking pieces of skull and brain and connective tissue out of the mangled piles of concrete and rebar and bomb shrapnel to try to rebuild Gaza, I am going to look inside myself and find nothing but ferocious disgust and unbearable shame. In a way, I did this. In a way, we all did this. The shrapnel embedded in those civilians and their homes was paid for by my taxes. I killed those Gazans. I sit on a throne of private healthcare, underfloor heating, Uber Eats deliveries; the Gazans lie dead, pieces of their bodies strewn across streets the kids once played on. The markets, the places of worship, the cafes, all the places where in the midst of occupationary deprivation a people who refused to die celebrated and enjoyed what little they did have and built loving community with their neighbours while Israel was given international cover to choke and drought and starve them, are now soaked in the blood of anyone that got in the way of the bomb.
I am angry at Israel, I am angry at my Government, I am angry at Joe Biden, but worst of all I detest myself, and I detest my fellow countrypeople. I know the story of how we got to this point is important to some people, but I can’t find it in myself to think about that right now, and I insist we do not have time to endlessly discuss it. How we got here has no impact on the vicious moral transgression that is happening in Gaza, and the fact that it needs to stop now, and the fact that it should never have been allowed to happen. I know some will say that understanding how we got to this point is vital to what happens next, but what happens next isn’t just a question of what happens to the people of Palestine, but also a question about how we live with ourselves elsewhere in the world. The global power structures of today lie dilapidated and exposed on the floor. My soul and your soul now have a rot deep inside them. There is a stain on any principles we ever claimed to have that declares them to have been lies. I hope for your sake that you have a way to find comfort in repentance for what we have let happen, because I do not.
If you have not yet stood up to demand a ceasefire, what is it going to take? In a thousand lifetimes we would never be able to adequately apologise to the Palestinians for the machinations of global politics that permitted this, nor apologise for the fact that we did not do everything within our power to stop it, but you, reader, and I, can take to the streets and not let our rulers have a single moment of peace until this stops, while there are still Palestinians alive for us to try to apologise to.
Many of us like to ask ourselves, “What would I do if I was alive during slavery? Or the Jim Crow South? Or apartheid? What would I do if my country was committing genocide?”
The answer is, you’re doing it. Right now.
~ Aaron Bushnell, 2024-02-25
I won’t be occupying myself with whether this platform or the protocol it is built on fills a niche not already filled by other options. It exists, people are using it, and we’re going to need to have frank conversations about user safety on it regardless.
I’ll lay my cards on the table early. I’m optimistic about Bluesky and the motivations of the team working on it, but I’m also realistic. All I really have to go on is the documentation provided by Bluesky so far and their stated intent for the future, but they could change their minds on any of it.
Throughout this post, I will be discussing the good and bad parts of Bluesky, both as the invite-only walled garden it currently is, and as the federated platform it promises to be.
There’s an extremely glaring downside to abuse mitigation capabilities in decentralised social systems. Centralised systems like Twitter can benefit from user metadata when detecting ban evasion; user agents, source IP addresses, choice of email provider, etc. But in a decentralised network, you do not have this metadata for users on other homeservers. What this practically means is you will need to end up using the metadata you do have, such as what homeserver a user is federating through, how they choose their handles, who they interact with, etc.
The way Mastodon and Matrix tend to handle this shortcoming is by defederating homeservers that seem to be the source of many abusive users. It ends up being a game of “if you don’t keep your house in order, we will ignore your house.” This works, and I hope Bluesky leans on it, and surfaces the ability to mute/block all users from a given homeserver to end-users as well as to homeserver admins. The one problem with this methodology on Bluesky (compared to Mastodon) is that moving your account to a new homeserver has almost no friction. The way I foresee this being solved is something like “mute every user whose account was initially registered on a given bad homeserver.”
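A minimal sketch of that last rule, with names and structure invented purely for illustration (this is not a real Bluesky or atproto API): the key point is that the moderation decision keys off where the account was first registered, not where it currently federates from.

```python
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    current_host: str  # homeserver the account federates through today
    origin_host: str   # homeserver the account was first registered on

def should_mute(account: Account, bad_origins: set[str]) -> bool:
    """Judge an account by where it was created, not where it lives now,
    so moving homeservers doesn't launder a bad origin."""
    return account.origin_host in bad_origins

# An account that registered on a bad homeserver, then migrated away:
mover = Account("troll.example", "innocent.example", "spam.example")
print(should_mute(mover, {"spam.example"}))  # True even after migrating
```

The design choice here is that low-friction account migration is preserved for legitimate users, while the cost of a bad origin follows an account around.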
Corollary to the above: if there’s a homeserver that becomes too big to defederate, like mastodon.social is on Mastodon, they must ensure their users are roughly reputable. If a homeserver is too big to defederate and they’re a source of a lot of abusive users, the network will become unusable.
In existing federated networks, like email, Mastodon, and Matrix, homeservers function as a user’s entrypoint to the network. There are far fewer homeservers than there are users, and homeservers are the only parties that know all of the possible metadata about the clients transiting data through them. This means that you, a homeserver admin or an end user, can pass judgement on the quality of a remote homeserver’s users from empirical evidence, decide that they are not doing enough work to prevent people transiting abuse through them, and then reject everything coming from them.
Abusive users can find new homeservers to pass their abuse through, and you will play a game of cat and mouse to weed bad homeservers from your view of the network, but email has been doing this for decades, and it mostly works.
This is only necessary for homeservers that are very permissive with new registrations. A homeserver can ban a remote user without defederating the whole homeserver, but if a given user is repeatedly making new accounts on a remote homeserver, you will need to treat all users of that homeserver with suspicion. Something I would urge people to keep in mind in this area is that there’s no technical limitation stopping you from saying “mute any user from homeserver $foo that I have not already seen before.”
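The “seen before” rule can be sketched as a tiny visibility filter. This is a hypothetical helper, not anything Bluesky exposes; the allowlist of already-seen authors is assumed to be tracked by your own homeserver.

```python
def visible(author: str, author_host: str,
            suspect_hosts: set[str], seen_before: set[str]) -> bool:
    """Hide authors from suspect homeservers, unless we had already
    interacted with them before the homeserver was flagged."""
    return author_host not in suspect_hosts or author in seen_before

seen = {"old-friend.example"}
print(visible("old-friend.example", "foo.example", {"foo.example"}, seen))   # True
print(visible("fresh-troll.example", "foo.example", {"foo.example"}, seen))  # False
```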
Homeservers whose signups are open to the world must employ baseline industry-standard methodologies for pruning abuse from themselves, or face ostracism. Monitor signups, prevent the most obvious cases of registering many different accounts for abuse, detect behaviour that looks like spam, etc.
The Bluesky team have, encouragingly, been preaching about giving users the tools needed to carefully curate what they see on Bluesky by use of first-class support for custom feed algorithms. This is a good idea, but it needs to be implemented with the understanding that if new users see nazis all over Bluesky when they don’t yet have custom algorithms, Bluesky will have terrible retention of new users.
The way Mastodon approaches this is your homeserver admins lay down a baseline of content curation by defederating known bad homeservers on the network. An advantage Bluesky has in this situation is they’re happy for the bsky.social homeserver to be the common entrypoint to the network for new users, so they have a lot of power to shape the first view of the network that new users get.
Bluesky is currently invite-only and the dev team are leaning hard on this to stem the flow of abusive users. This methodology is surprisingly effective: you can find the people inviting the people that are abusing other users and prevent them from inviting more people. Private torrent trackers have been relying on this for decades. The (small) problem with this is that you cannot be sure that the 54k people you currently have are handing out their invites wisely. A sufficiently motivated abusive user may prove adept at finding new ways in, especially when invites are being given to existing users so readily.
Once federation lands, this is of limited use. You can use this methodology to keep your own house in order, and encourage other homeservers to use it to keep their houses in order, but this does nothing to protect your users from remote users on homeservers that do not use this. Even if a remote homeserver were to try to communicate their invite tree to you (bad idea!), they can simply lie about their invite trees.
Some using the platform now are using it as if every important detail has been worked out, and I can understand why. At the time of writing, there’s just over 54k users, including some very big names. The fact that some big things are still being worked out is not always glaringly obvious.
Notably, it was only a few days ago that the Bluesky dev team had to drop everything they were doing to land the ability to block other users. This feature was implemented with a detail (that blocks are publicly viewable data, more on this later) that has been widely criticised. I’ve also been led to believe that the ability to deactivate accounts also had to be quickly implemented in response to someone needing to be deactivated. Whether or not this is true, it’s definitely true that they’re building a plane’s engine while the plane is hurtling through the sky and they’re quickly growing to a size where user safety is becoming urgent.
Other shortcomings include: muted words are not supported, locked accounts are missing and might take quite a while to implement (for technical reasons), you cannot hide replies, reporting categories are limited, and so on.
Something we’ve also seen Twitter say a number of times recently is that harmful content can (and should) be allowed to exist but should have its ability to reach other users limited. It seems the brunt of the argument in favour of this is that freedom of speech means people should be allowed to say whatever they want but Twitter as a platform doesn’t have to help deliver everything you say to a wide audience. Looking at what documentation is available and reading things the Bluesky devs have said makes it apparent that they plan to lean into this angle too.
While the above does seem promising, the wording of it does leave something to be desired. It’s conjecture, but my feeling so far is that Bluesky would like to manage as many things as possible only by limiting reach, and only resort to things like account deactivation in extenuating circumstances. Please, keep your own house in order.
This being said, there’s a big difference between users posting bad opinions, and accounts posting opinions that, when taken to their inevitable conclusion, amount to fascism, or users that are going out of their way to harm the mental health of other users, especially those in at-risk demographics. This is a hard balance to strike; sometimes it is hard to know what should be considered reasonable debate, and what should be considered beyond the pale. Hire people that know this difference.
This topic, like many others here, is complicated by federation. If a remote user is being a bastard, you have two options: refuse all content from the user, or label content from the user based on what bastardy thing it is. You, as a homeserver admin, can opt to hide the labeled content for your users, or you can leave it up to your users. It is currently unclear what the bsky.social homeserver plans to block for you and what it plans to leave up to you.
You, as a homeserver admin, can’t opt to simply refuse bastard posts from a remote user. You can only validate the authenticity of a given post by knowing the posts from the author that came before it, so if you refuse one post, you can no longer validate any posts that come after it. This doesn’t practically matter in most cases: if you label their bastardy posts and don’t show them to users, that’s mostly fine. A problem occurs if a user posts something illegal that you cannot tolerate persisting anywhere, but their feed is otherwise fine (imagine their account is briefly compromised, say).
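A toy sketch of why refusing one post breaks everything after it. This is a simplified stand-in for atproto’s actual signed repository structure: each record commits to a hash of the record before it, so a homeserver that refused to store one record can no longer recompute, and therefore can no longer verify, anything that follows.

```python
import hashlib

def link(prev_hash: bytes, record: bytes) -> bytes:
    # Each record's hash covers the hash of the record before it.
    return hashlib.sha256(prev_hash + record).digest()

genesis = b"\x00" * 32
h1 = link(genesis, b"post 1")
h2 = link(h1, b"post 2: the bastard post")
h3 = link(h2, b"post 3")

# A homeserver that kept (but hid) post 2 can still verify post 3:
assert link(h2, b"post 3") == h3
# One that refused post 2 outright never has h2, so it cannot recompute
# h3 or anything after it -- label-and-hide keeps the chain verifiable,
# refusal does not.
```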
I am hoping that bsky.social intends to deactivate accounts using bsky.social as their homeserver if they fall into the “Political Hate-Groups” category, but there have thus far been some instances of things like harassment, transphobia, and racism that have not been acted on as fast as one might hope, and I hope this is a symptom of problems happening faster than they can be solved rather than a deliberate choice of “limiting reach.”
On the off chance that this is a case of “limiting reach” as a means to avoid more forceful actions, I’d like to make a few points about the limitations of that methodology:
Making a distinction between freedom of speech and freedom of reach seems dubious at best. If you force a racist into a room on their own so no one can hear them, they’ll inevitably argue that they are not being afforded freedom of speech, so this doesn’t feel like it helps at all.
The goals of freedom of speech were, and should be, only the freedom to criticise your government without fear of retribution.
I come from Europe which, by and large, understands that freedom of speech does not extend to you being able to intentionally offend people without consequence, but is a backstop designed to prevent a government imprisoning you for disagreeing with them. It seems to me that The United States of America has in recent times dominated international discourse about this topic, where many believe that you should, in fact, be allowed to be racist without facing consequences.
There’s definitely a lot of nuance here. If a government brought in draconian surveillance to detect their citizens being racist, you really should be afforded enough protection to criticise such a policy, but you can very easily criticise such a policy without yourself being vocally racist.
Anything short of making that content entirely inaccessible will mean people will eventually find it. This will contribute to your platform feeling unsafe to certain people, and will likely lead (rightly or wrongly) to a nazi bar PR problem.
You can make all the technical and philosophical explanations you want; if people don’t feel comfortable knowing there’s nazis in the same room as them, “but you can’t see them” isn’t a particularly potent salve.
So far it seems that a number of problems in this sphere have been met with “we can solve that with AI-automated content labeling”, or “you’ll be able to curate your feed with custom algorithms!”
Again, I can understand this refrain, but I’m unconvinced that this technology is developed enough to sufficiently handle user safety in the medium term, and I’m acutely aware that a future plan for algorithms and AI does not solve the fact that there are blossoming problems right now. People in at-risk demographics are currently using Bluesky quite heavily.
AI-automated content recognition, in the current real world, can only practically be relied upon to pick low hanging fruit. You are going to need a thorough and well-tooled team of humans handling everything else, and while I respect that the dev team has been able to do so much with so few people, I can feel that an inflection point is rapidly approaching.
In the last few days Bluesky had to urgently implement the ability to block users. The app launched with the ability to mute other users but not the ability to block them. The key difference between blocking and muting is that blocked users will know they are blocked, and their clients can attempt to prevent them from composing posts that mention or reply to the user that blocked them. Additionally, mutes only hide notifications from muted users, while blocking will also hide the blocked user from timelines, but this is an implementation detail that could be changed.
The explanation (to the best of my knowledge) for why blocks are public is twofold: Bluesky plans to be a federated network, so a remote user’s homeserver needs to know if they’re blocked or not, but also if users can seamlessly move from one homeserver to another the new homeserver also needs to be cognizant of blocks. If I block you while I’m using bsky.social but then you move to jesopo.uk, it’d not be great for you to magically become unblocked. Corollary: if I block you while you’re on bsky.social, bsky.social will tell you that you can’t mention me, and it would not be great if you could then move to jesopo.uk, without having to make a whole new account and social graph, and suddenly be freely permitted to mention me, even if my homeserver will still not show me you mentioning me.
Mutes, on the other hand, are not public; they solely function to hide the muted user’s content from the muter. I’m not convinced the rationale for blocks being public is compelling enough to overlook how damaging public block lists can be to people. You could argue that, given the target of your block on Twitter can already see that you’ve blocked them, they can already send a dogpile after you, but this does make that easier.
Additionally, as currently implemented, every post, follow, like, etc. is accessible from unauthenticated APIs, and every change to publicly accessible information anywhere on the platform can be streamed through an unauthenticated websocket endpoint on the API. One might argue that this is necessary for federation, but the docs disagree. From a purely theoretical standpoint, one could do what Mastodon does: only inform a remote homeserver about a new post you make if they have a user that follows you.
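That Mastodon-style alternative amounts to follower-driven fanout. A minimal sketch (hypothetical helper, not Bluesky’s actual delivery code), which also shows how blocks fit in — a homeserver with only blocked followers simply gets nothing:

```python
def delivery_targets(followers_by_host: dict[str, str],
                     blocked: set[str]) -> set[str]:
    """Fan a new post out only to homeservers hosting at least one
    non-blocked follower, instead of to a public firehose."""
    return {host for user, host in followers_by_host.items()
            if user not in blocked}

followers = {"alice": "a.example", "bob": "a.example", "carol": "b.example"}
print(delivery_targets(followers, blocked={"carol"}))  # {'a.example'}
```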
This all leaves Bluesky wide open to mass data archival without user consent. This is a topic Mastodon has been struggling with a lot, and they have opted to (mostly) solve it with Authorized Fetch, which requires an entity requesting a record to identify itself. Some may say “if you post something publicly, you consent to it being archived!” but many, myself included, believe there’s more than just public and private content. Sometimes you want to be findable but not have the spotlight on you. If you are a public figure, assuming anything you post in public is going to be archived makes sense, but if you’re just some random user trying to have a good time, it can feel like an invasion of privacy.
When you delete a post, undo a follow, undo a like, and so on, it currently simply adds an “I’ve deleted that” event to your repository. If someone knows where to look, they can find deleted records.
Consider that your entire user is a git repository, and each record type you can make (post, like, follow, etc) is a directory in your git repo. When you add a record, it makes a commit to your repo adding that record. Each commit’s validity, like git, relies on knowing the thing that came before it. This means that, like a git repo, you need to rewrite the entire commit log from the deleted commit onwards to actually delete a commit. Bluesky does not currently expose the ability to rebase your repo.
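To make the analogy concrete, here’s a toy hash-chained log (not atproto’s real data model): truly deleting a record means rewriting every commit after it, which is why a cheap tombstone event is the default and a real rebase is a separate, heavier operation.

```python
import hashlib

def head_of(records: list[bytes]) -> bytes:
    h = b"\x00" * 32                             # empty-repo root
    for record in records:
        h = hashlib.sha256(h + record).digest()  # one "commit" per record
    return h

records = [b"post A", b"post B", b"post C"]
old_head = head_of(records)

# A "rebase" that actually deletes post B rewrites all later history:
new_head = head_of([b"post A", b"post C"])

# Every commit after B changed, so the whole tail must be re-signed
# and re-announced to the network.
assert new_head != old_head
```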
I’m very glad to hear that they intend to expose this functionality, but it’s a real issue to be concerned about right now.
To follow someone, you create a “follow” record in your repository. The only person that can revoke that record is whoever holds the private key for your repo. If another user blocks you and you’re currently following them, the block event would have to ask you to revoke your own follow.
Since federation is not yet enabled, and since Bluesky (at least currently) holds your private key, they could action an unfollow for you when you get blocked. But this doesn’t reliably hold true in a federated world, nor in a potential world where clients are the only holders of their private keys and a homeserver exists purely to mirror your data onto the wider network. Even in the circumstances where this could work, it doesn’t seem right to sign a piece of data without explicit instruction from the user to whom the private key belongs.
This doesn’t mean that if someone is following you and you block them, they’ll still see your posts. They won’t. A well-behaved homeserver will not show your posts to blocked users even if they’re following you, and Bluesky could be implemented such that a remote user’s homeserver isn’t told about your new posts if it has no non-blocked users following you.
I’m unsure how practical this would be, but the protocol could be changed such that every attempt to follow a user needs to be co-signed by the user being followed. That user can then revoke their co-signature when they block the follower, though this would mean that resolving a user’s following list would involve cross-referencing it with co-signatures to check for revocation. I have a feeling something like this is going to need to be implemented anyway if Bluesky plans to have locked accounts some day.
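A toy model of the co-signature idea. This uses an HMAC as a stand-in for a real signature scheme purely so the example is self-contained — in atproto this would be an asymmetric signature from the followed user’s key, verifiable without that key — but the revocation logic is the interesting part:

```python
import hashlib
import hmac

def cosign(target_key: bytes, follower: str) -> bytes:
    """The followed user co-signs a follow; only the holder of
    target_key can produce this value."""
    return hmac.new(target_key, b"follow:" + follower.encode(),
                    hashlib.sha256).digest()

def follow_valid(target_key: bytes, follower: str,
                 sig: bytes, revoked: set[str]) -> bool:
    # Resolving a follow list means cross-referencing co-signatures
    # against the followed user's revocation set.
    ok = hmac.compare_digest(sig, cosign(target_key, follower))
    return ok and follower not in revoked

key = b"followed-user-private-key"
sig = cosign(key, "eager.example")
print(follow_valid(key, "eager.example", sig, set()))              # True
print(follow_valid(key, "eager.example", sig, {"eager.example"}))  # False: revoked by a block
```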
It is currently extremely trivial to create URLs that look like something they’re not. It’s very, very easy to make a “google.com” post that actually leads to a RickRoll (or much, much worse), or to make a mention that, when clicked, leads to a very different user than you’d expect. The URL problem here is much more of a concern. This problem is solvable to a degree, and I’ve seen the devs musing on how they’d fix it, which is promising.
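One plausible shape for a fix (a sketch of my own, not what the devs have proposed) is to compare a link’s display text against the hostname it actually resolves to, and flag the mismatch:

```python
from urllib.parse import urlparse

def deceptive_link(display_text: str, target_url: str) -> bool:
    """Flag links whose visible text pretends to be a hostname that
    doesn't match where the link actually goes."""
    shown = display_text.strip().lower()
    shown = shown.removeprefix("https://").removeprefix("http://").split("/")[0]
    actual = (urlparse(target_url).hostname or "").lower()
    if "." not in shown or " " in shown:
        return False  # display text isn't pretending to be a URL
    # Allow exact matches and subdomains of the shown host.
    return not (shown == actual or actual.endswith("." + shown))

print(deceptive_link("google.com", "https://evil.example/rickroll"))  # True
print(deceptive_link("google.com", "https://www.google.com/maps"))    # False
```

A client could then refuse to render the mismatching display text as-is, or append the real hostname next to it.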
I am mostly hopeful for the future, but I am also concerned about the challenges posed by the relatively-new frontier that is moderation on a federated network. I think these challenges may have good solutions, but will need time, practice, and experimentation.
I also think the Bluesky devs might have bitten off more than they can chew in the short term. The devs do seem to accept this. Their team is small, they’re working very hard, and their intentions seem good; but the last week has seen a large influx of users, as well as an influx of problems that require both careful consideration and better tools.
In the longer term, I’m encouraged by a stated goal of Composable Moderation, and I’m encouraged by the idea of shared reputation lists that users can subscribe to and publish, to shape their own view of the network and help others shape their own views of the network, but I think the responsibilities of a homeserver admin are still going to include protecting their users using a combination of traditional methodologies as well as the brave new world of moderation that can be afforded to us all with decentralisation and algorithms.
I was about 15 when I was first faced with this refrain; I was an arrogant teenager who insisted that even if the wheel had already been invented, I wanted to take my swing at it; maybe my wheel would be wheelier than theirs, maybe I’d learn something about wheels, maybe I’d have fun. I wanted to viscerally understand the machinations of the code that sits behind the curtain of human-friendly interfaces, as if doing so would make me a real programmer.
I’ve developed a love-hate relationship with that partially-enduring impulse; I get a deep satisfaction from cresting the mountain of a learning curve, but I also manage to scope-creep any project I get my paws on, and what I’ve learned along the way is that the majority of the psychic damage inflicted by reimplementing widely-implemented protocols comes from just how bad other people’s implementations can be.
/* Stupid broken piece of shit ircd didn't send us 001,
you'd think they could at least get that right??
But no, then I'll have to go and add these idiotic kludges
to make them work. Maybe I should instead get the users of these
servers to complain about it to their admins.
[...] */
This understandably disgruntled comment clearly expresses the dilemma here: you either begrudgingly handle other software’s incorrect data, or you hope that ecosystemic pressure will force their hand to stop sending it. I’m here to tell you this developer made the wrong choice, and that choices like it consistently facilitate protocol degradation until implementing a protocol becomes an insurmountable task.
A common casualty of permissive input handling is anything that handles web pages.
Any of you that have had the joy of being in the weeds of web development will be all-too-familiar with how frequently “it works on my machine” isn’t a sufficient answer. There’s a whole industry built around one of the pains the robustness principle promulgates: browsers have all had to make their own secret sauce for handling the many innovative ways people have found to write web pages incorrectly, and you cannot trust that your code is correct just because your browser makes it look OK.
People are, by and large, free to write absolutely god-foresaken HTML that, when you squint, looks roughly logical, so browsers help you out and take a guess at what you actually meant, rather than telling you that you’ve written something invalid and really ought to go figure out how. The fact that people don’t even know they’ve done something incorrect hurts the ecosystem, skips over an opportunity for that person to learn, and makes the task of implementing generalised web page parsing out of the reach of most people.
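You can see this leniency even in a standard-library tokenizer. Python’s html.parser walks straight through mis-nested and unclosed tags without raising anything, so the author of the markup never finds out anything was wrong:

```python
from html.parser import HTMLParser

class TagLogger(HTMLParser):
    """Record the open/close tag events the parser reports."""
    def __init__(self):
        super().__init__()
        self.events = []
    def handle_starttag(self, tag, attrs):
        self.events.append(("open", tag))
    def handle_endtag(self, tag):
        self.events.append(("close", tag))

p = TagLogger()
# <i> is never closed, </b> closes across it, and <p> is left open;
# no error is raised anywhere.
p.feed("<b><i>bold italic</b> still italic?<p>new paragraph")
print(p.events)  # [('open', 'b'), ('open', 'i'), ('close', 'b'), ('open', 'p')]
```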
HTML5 has spec-defined the correct way to handle invalid HTML documents, which may help this; it’s still going to allow incorrect code to proliferate unimpeded, but at least it will be somewhat predictable. There are a lot of reasons the browser market isn’t a crowded field, but I’m willing to bet that the bar to entry of handling decades’ worth of the world’s noncompliant data doesn’t help.
While doing a bit of research for this blogpost, I happened upon a comment in pyyaml’s code that sent me down a bit of a rabbit hole:
# For some strange reasons, the specification does not allow '_' in
# tag handles. I have allowed it anyway.
‘Tags’ are a way to decorate a value with a hint to tell the parser how to parse it. Think of an example like date: !iso8601 2022-01-01; here !iso8601 is the tag, and iso8601 is the tag handle.
As promised, the specification does not allow _ in tag handles, but the earliest commit of tag handles in libyaml does; this commit does come after the publication of yaml 1.1, but that also does not allow _, and neither does the most recent version of the yaml specification. A popular rust library accepts _, as does a popular java library, as does a popular .net library, et cetera.
Although tag handles are a bit of a niche case (they just happened to have the best comment), what this demonstrates is that for anyone optimistic enough to try their hand at implementing this protocol, reading and implementing the specification alone will not suffice for feature parity with even its reference implementations. They will come to learn that, no matter what the specification says, someone will approach their software with data they expect to work, because the reference implementation, and most implementations derived from it, will accept it.
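As a concrete illustration, the spec’s grammar for a named tag handle is easy to state — a “!”, one or more ns-word-chars (decimal digits, ASCII letters, “-”), and a closing “!” — and a strict validator written from it disagrees with most real implementations:

```python
import re

# YAML's ns-word-char is a decimal digit, an ASCII letter, or "-";
# a named tag handle is "!" ns-word-char+ "!". Note: no "_".
NAMED_TAG_HANDLE = re.compile(r"![0-9A-Za-z-]+!")

def spec_valid_handle(handle: str) -> bool:
    return NAMED_TAG_HANDLE.fullmatch(handle) is not None

print(spec_valid_handle("!iso8601!"))  # True
print(spec_valid_handle("!my_tag!"))   # False per the spec -- yet pyyaml,
                                       # libyaml and friends accept it
```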
Something that motivated me to write this blogpost is my sorrow at communities moving from IRC to proprietary communication platforms simply because they’re nicer to use, and the upsetting realisation that a massive barrier to entry for writing new and better IRC software is how absolutely maddening it is to try to write IRC software capable of accepting 3 decades worth of infamously subtly invalid protocol being spoken by other software.
As the complexities of writing software grow, the accessibility for hobbyists and enthusiasts wanes. We’re watching the industry creep further and further towards meaningful projects needing to be backed by well-resourced outfits to be able to find their feet, and that’s intensely damaging for personal freedoms, as well as turning away would-be revolutionaries from even trying to make their mark. Something that is definitely within our collective control is ensuring our own software contributes to enforcing spec-compliance, so that new software starts off having to compare itself to a well-functioning ecosystem.
I do recognise that reversing the dogma of permissive input handling is going to be impractical for a lot of already-broken protocols without very slow incremental moves like HTML5 has tried, but it feels like we as free software advocates should have a moral compulsion to keep protocols reasonable to implement, to avoid concentrating expertise into precariously few hands, and the Robustness Principle, practically, runs counter to this goal.