
Framework for Moderation

emi do Fri 13 Jan 2023 6:28AM

It’s been exciting to see many new tooters on social.coop, and though each of our timelines may look a little different, we are looking for ways to ensure that the social.coop corner of the fediverse continues to reflect our values and ethos as a cooperative and collaborative space. The CWG Ops team has been hard at work on moderation duties through this period of rapid growth, and we have been grappling with some moderation questions.

We would like to add clarification to encouraged and discouraged behaviours outlined in our CoC and Reporting Guidelines.

In an effort to encourage and model positive, constructive discourse that runs counter to the norms on other platforms, we’d like to introduce nuance to moderation based on the guidelines outlined in this blog post: https://wiki.xxiivv.com/site/discourse.html

In this thread we will discuss different moderation challenges and, based on those discussions, propose amendments to the CoC for approval by the membership.


Matthew Slater Fri 13 Jan 2023 11:17PM

I think the aim is to reduce what are being called 'moments of constriction', to remind people of the rules, and to call on them to monitor themselves, because moderation is a limited resource. Calling for moderation should be the last resort:

  • tell the person why you didn't like their toot and invite them to soften it

  • block the offending person to protect yourself

  • call the moderator to protect others.


Sam Whited Fri 13 Jan 2023 11:22PM

FWIW, in all of the things you're complaining about, this is what we did the first time. This isn't about moderators who overreached and deleted things when they should have talked to the user first; it's about users who repeatedly ignored the rules after being politely reminded of them and asked not to post certain content or to soften their tone. These users are hurting others, and we're aiming to protect those others; if a user ignores us, we have to take stronger action than just asking them to consider being nicer or not posting anti-vax content, or whatever the case may be.


Michael Potter Fri 13 Jan 2023 11:41PM

Aaron, I didn't get the impression that you were a right-wing type. Also, I like to think that I'm realistic enough that if I find an idea threatening, it's because it is. Anyway, I took 'anti-fascist' from our own CoC: "Let there be no confusion, Social.Coop is anti-racist, anti-fascist, and anti-transphobic."

I see what you mean about groupthink and hard rules not always having the intended effect. Would the rules not apply to all situations, or would they condemn people who aren't doing anything wrong? I do believe that truth is generally verifiable: not philosophical or theological truth, but scientific truth. Certain conspiracy theories are well known and debunked, and aren't worth further discussion, imo. I don't think the bird app blocked disinformation out of the goodness of their hearts, but to protect themselves from liability. We might do well to consider that.


Michael Potter Sat 14 Jan 2023 12:49AM

Online spaces that become popular will attract trolls. The purpose of moderation is to stop them from ruining the experience for everyone else.


Aaron Wolf Sat 14 Jan 2023 1:05AM

Glad Loomio is threaded. Still, I feel slight constriction about making sure I don't over-post… I do want to clarify, while trying to be concise.

I like to think that I'm realistic enough that if I find an idea threatening, it's because it is

I think a far better mental model is: "If I find an idea threatening, I need to trust my intuitions and treat it as a threat, because it's dangerous to ignore our fears." We can hold that attitude while still holding more lightly to whether we are right about the danger. People in general feel righteous about their judgments while feeling threatened. If we can later get to a state of real open curiosity and equanimity, we can review from a distance whether our fears were well calibrated. If we practice this consciously as a pattern, we can grow in our confidence. I wonder whether you recognize enough the threat that comes from being overconfident about the accuracy of our perceptions of danger.

The model I'm proposing does not support inaction in the face of perceived threats. I think it's essential that potential threats be addressed sooner rather than later. I just think we need to do the minimum to address the immediate perceived danger and allow a more patient, facilitated process to finish resolving the situation after the initial reaction.

from our own CoC: "Let there be no confusion, Social.Coop is anti-racist, anti-fascist, and anti-transphobic."

For transparency, I opposed that sentence when it was added. There were similar tensions in drafting, where people in a state of threat and constriction insisted on these trigger labels being added. I felt, and continue to feel, constriction about their presence. I read that sentence as people saying they are too scared of nuance and interpretation and feel safe only with some aspects of zero-tolerance policies; putting it in was a compromise between that view and the view from myself and others who wanted an effective but less blunt and hard-line approach.

So, I want the co-op to have moderators who use best practices and make human judgments to block harm. I don't want the co-op to say that certain debates (e.g. questions of biological sex versus socialized gender, or questions about climate science) are absolutely prohibited. I see that attitude as part of the purist trend in social media that blocks constructive dialogue and growth. I do recognize the serious risk of allowing too many subtly dangerous ideas in the name of dialogue. I want our moderation methods to empower humans to adapt our practical approaches over time as needed. Anyway, this is already way too long here.

Yes, I support any necessary efforts to protect ourselves from liability.


Aaron Wolf Sat 14 Jan 2023 1:11AM

I don't like the term "hate speech" for all these things. Advocating violence isn't hate speech; it's just advocacy of violence. Hate speech is specifically about targeting particular groups, and that does not include politicians as a group. It even has a legal meaning.

The Trump example is the only one I would support without any constriction myself. Some of those might be okay hidden behind a Content Warning labeled something like "rage rant". It's okay for people to express some hyperbole, especially in context. I'd like it to be acknowledged consciously and not supported as the norm of communication.


Aaron Wolf Sat 14 Jan 2023 1:18AM

This [EDIT: Sam's post. Loomio isn't threaded beyond a single level apparently, darn!] is the first post in this thread to explain what the OP was referring to. I didn't even notice the issue until now. Can you give a quick summary? I'm imagining something like: a poster links to mainstream articles that are popular with anti-vaxxers even though the articles do not take an anti-vax position; people complain; the poster keeps doing it and ignores the complaints; moderators step in and delete things. Is that right?

If so, this goes to my core point about what I want to see: more of an onboarding process in which new members are actively introduced to the social norms (as in a conversation with a guide, a real human being), norms which include making adjustments in light of complaints rather than ignoring them. There's almost always some way to improve. And if someone complains egregiously often and is hassling others, moderators can deal with that pattern too. I don't see how a community can maintain healthy norms and be welcoming without an active onboarding process.

I fully support moderators taking action if users ignore complaints. Furthermore, I wish there were a way to hide a post behind a Content Warning even if the original poster didn't add one, because that would be a middle-ground action between merely asking the poster to act themselves and outright deleting their post.


tanoujin Sat 14 Jan 2023 1:27AM

Hi Matt, I'd like to look into that case if there is any documentation available. Do you have a link to the evidence or related discussions?


tanoujin Sat 14 Jan 2023 1:57AM

I see your point, Aaron, but instead of hiding a post, I think the best practice would be to moderate proactively: adding a public mod warning before a thread gets out of control, locking it, and, yes, deleting toots that violate the CoC (https://wiki.social.coop/docs/Code-of-conduct.html). I take this from experience with forums, though, so I am not sure how to realize it in a microblogging environment.

If you want discussion about questionable toots, offering mediation will run into dead ends, simply because (qualified!) manpower is limited and such processes will most probably fail if there is no commitment from the parties in conflict.

I could imagine the possibility of handing in a moderation complaint to a committee of supervisors, which would handle such cases swiftly backstage but transparently to our members. Follow-up discussions should use the usual channels, imo. (You can see Matthew Slater trying to initiate something like that above.)


tanoujin Sat 14 Jan 2023 2:24AM

@Ana: I have had good experience (elsewhere) with keeping all posts in accordance with (don't laugh at me) the UDHR (https://www.un.org/en/about-us/universal-declaration-of-human-rights).

Just two examples of this minimal consensus in action:

Matthew's "Too bad Epstein wasnt hung by his balls" -> Art. 5: "No one shall be subjected to torture or to cruel, inhuman or degrading treatment or punishment."

A person tooting an SS flag without any sarcastic or ironic framing, showing a symbol of "the violent elite of the master race" -> Art. 1: "All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood." (and a couple more).

That makes it pretty easy, right? Instead of listing what we do not want to see, we refer to the catalogue of rights we want, at a minimum, to remain untouched within our reach.
