It's 6:30am on Monday. In ten minutes I'll be taking Kit to be sedated so they can find out what's wrong with her broken butt. Whatever it is, it was so painful that I had to force-feed her gabapentin every 12 hours over the weekend. She then had to fast in preparation for the anesthetic, so I didn't get much sleep as she woke me up every hour or two to beg for food. Not being able to explain things to cats is the worst.

[time jump - now I'm back home]

Meanwhile. On Friday I made a couple of posts to give some context on a company blog post about anti-toxicity measures. The thread involved some back-and-forth, and by the evening I decided to mute it and try to relax. Of course, one of my replies got picked up for some pretty heavy quote-dunking, which I figured out on Saturday morning when I saw somebody indirectly memeing about it. You know you're in trouble when something you said has become a copypasta.

Interestingly, since we don't have any kind of automated dogpile detection or warnings, muting that thread meant I didn't have a chance to cut it off by detaching the quotes or blocking anybody. As I sat in the vet's office on Saturday morning, waiting to hear the eventual verdict that they'd have to put Kit fully under to even diagnose the issue because it was too painful for her to be examined, I discovered that I had been 100% cooked.


The product needs to stand for something. One big thing we chose is that Bluesky should be a nicer place. This comes from our own beliefs as much as it does from feedback. X is now famous for being toxic; we feel that's bad for people, and I would say we're not alone: people repeatedly tell us they want to enjoy themselves online without being harassed, demeaned, or threatened. Crazy product insight, that is.

Of course, how to accomplish that is a hard question. One of the critiques of the upcoming changes that resonates most with me is that we're trying things that feel "paternalistic." I want to explore that tension a bit and share some of my own complicated feelings about it.

One of the early things we implemented was community-operated moderation: aggressive blocks, mutelists, blocklists, and even labelers. We've always anchored on community tools, and yet that hasn't been enough. We still hear repeatedly that people have a bad time in the replies.

Labelers are somehow both very powerful and not powerful enough. Users can subscribe to a labeler, send it reports, and have the content it labels hidden throughout the app. That makes labelers powerful, but because their decisions only apply to their subscribers, they end up acting as a personal filter, which isn't powerful enough.
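
For readers who haven't dug into the mechanics: a label in the AT Protocol is just a small record that a labeler publishes about an account or a post, and subscribing apps decide how to act on it. Here's a simplified sketch in TypeScript; the field names follow the com.atproto.label.defs schema, but the optional fields are omitted and the DIDs and values are made up for illustration.

```typescript
// Simplified shape of an AT Protocol label (com.atproto.label.defs#label).
// Optional fields like cid, exp, and sig are omitted; the DIDs and
// values below are invented for illustration.
interface Label {
  src: string   // DID of the labeler that issued the label
  uri: string   // the account or record being labeled
  val: string   // the label value, e.g. "spam"
  neg?: boolean // true if this negates a previously issued label
  cts: string   // when the label was created (ISO 8601 datetime)
}

// A labeler you subscribe to might emit something like this:
const example: Label = {
  src: 'did:plc:examplelabeler', // hypothetical labeler DID
  uri: 'at://did:plc:exampleuser/app.bsky.feed.post/examplepost',
  val: 'spam',
  cts: '2025-01-01T00:00:00.000Z',
}

// The app then hides or badges the labeled content, but only for
// people subscribed to this labeler; that's why it acts as a personal
// filter rather than a community-wide one.
```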

We also had some incredibly painful blowups with the early community labelers, and those broke my brain about the topic. Something is wrong with the formulation. (If anybody has wondered why labelers haven't gotten easier to discover, that's why.)

One of the follow-up ideas to labelers that I personally find interesting is a "personal moderator": a labeler you appoint to moderate the replies to your own posts, with its decisions applying to everyone who views them. (We sometimes call this giving the labeler "jurisdiction" over the replies.) This is interesting because it could be as capable as any system we build into the application, could use automated systems and/or community decision-making, and would remain entirely the person's own choice.
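
To make that concrete, here's a hypothetical sketch of what appointing a personal moderator could look like. Nothing like this exists today; every type and field name below is invented purely to illustrate the "jurisdiction" idea.

```typescript
// Hypothetical sketch only: no record or API like this exists today.
// It imagines a preference that grants one labeler "jurisdiction"
// over the replies to your posts.
interface PersonalModeratorPref {
  moderator: string       // DID of the labeler you appoint
  scope: 'replies'        // what the labeler has jurisdiction over
  bindsAllViewers: boolean // decisions apply to everyone, not just subscribers
}

const myModerator: PersonalModeratorPref = {
  moderator: 'did:plc:examplelabeler', // hypothetical labeler DID
  scope: 'replies',
  bindsAllViewers: true,
}

// The key difference from today's labelers: when this labeler hides a
// reply under one of your posts, it's hidden for every viewer, the way
// the author's own "hide reply" action already works.
```

The appeal of this shape is that the choice stays with the author, while the enforcement is consistent for everyone reading the thread.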

Whether this idea would solve the challenges that led to the labeler blowups, however, is not clear to me at all.


There are two things that we have to hold in our heads at the same time: be responsive to the downsides of our product, and respect people's rights to make decisions for themselves. Taking the latter even further: be aware of the risks of making decisions for users.

This can make for some pretty challenging product design work. We have generally tried to square this by creating sophisticated tools and then selecting good defaults. When this doesn't work, it shows up as: "the UI is too complex, nobody knows about the tools available to them, and nobody uses them."

I've always been proud of the Bluesky community for promoting the use of blocks as a form of self-care, because the social dynamics around restricting access are complicated. Deleting a post, turning off replies, detaching a quote: these are often seen as a sign of weakness or an admission of defeat. This means that even when you do know about the tools, there's negative social pressure against using them.

Probably the best thing we've implemented to deal with crappy replies is the "Followers only" setting. Maybe the smart move would be to just make that the default, but that feels more aggressive than the interventions we announced on Friday. (We are going to make some improvements to the "Who can reply" UI, though.)

This doesn't mean that the interventions we've chosen are automatically correct, but it does give context on why we're inclined to change replies systemically. When we see that "Followers only" replies succeed at reducing toxic replies, our inclination is to shape the core reply experience to anchor more closely to your social cluster. After all, it is a social network.


Good intentions aren't everything. Some of what we do may not work. If something bombs, we'll roll it back. When something we announce doesn't resonate with people, believe me, we talk about it internally. But I stand by our core belief: we want social media to be a less toxic place, and we will work hard to make that happen.

It's all kind of like Casablanca when you think about it, a movie about listening to feedback from your users.