Loomio

Zeitgeist and other Fediverse LLMs

Dan Phiffer · Wed 1 Apr 2026 5:20PM · Public · Seen by 256

Last week Laurie Voss announced that he’d released a new Mastodon client called Zeitgeist.blue, “a multi-social-network app that summarizes your feed for the last 24 hours.” As a Mastodon client, it appears to authenticate a given user against an existing instance (potentially including social.coop) and, using Anthropic or GitHub Copilot, processes that user’s timeline to produce summaries with a large language model (LLM).

Many replies to the announcement were critical of the project, taking issue with the lack of consent and with being subjected to AI surveillance. Voss was dismissive of those concerns, referring to people complaining about the lack of consent mechanisms as “tedious bastards,” and began blocking people who replied in the thread.

Voss was informed of an existing precedent of people tagging their bios to opt out, and he subsequently added an opt-out for people who do not want their posts indexed. Initially Zeitgeist indexed everything, including DMs and follower-only posts, but DMs are now filtered out. The client also sends a distinctive “User-Agent” header with its requests, which could in principle be used to block it at the instance level, if we chose to take that route.

I’d like to start a discussion here about how the cooperative would like to handle this, and other projects that trawl content and process posts using LLMs. Setting aside the chaotic launch of this one project, do you feel okay with AI systems harvesting posts and training LLMs?

Some related essays that might be helpful for thinking about this:

Dan Phiffer · Thu 2 Apr 2026 2:43PM

I have added an opt-out hashtag and I'll be curious to hear what you think of Zeitgeist (and perhaps to confirm that my posts don't appear in your summaries).

If you think this is problematic, let me know, but I'd like to understand what the actual alternative is here.

Yes, I do think it's problematic that you seem uncertain whether the people you follow are consenting to your use of their posts by using this client. But I don't think you should leave the coop over it. I would miss you!

I wonder if you'd feel okay with tagging your bio the same way I just did? #LLMClient perhaps?

Flancian · Thu 2 Apr 2026 2:53PM

@Dan Phiffer thank you, I think it's likely I'm leaving given how things are likely to go, but we can remain in touch and I wish you well! I just dislike enclosure by committee, left-authoritarianism, time wasting, and people assuming and expressing bad faith -- and I've seen too much of this over the years, even as I was putting hours of work for the coop in preference to spending time on my personal projects or alongside more like-minded people. At some point it's just rational to move on to greener and more progressive (from my PoV) pastures, and leave the instance to people who want it to be a different thing. I will probably write a retrospective in any case, and do it in an orderly way making sure the TWG in particular has everything they need.

Dan Phiffer · Thu 2 Apr 2026 2:58PM

This is surprising and sad to hear. I'd like to remind you that my very first encounter with this subject, minutes after reading about it as the on-call mod, was me offering to meet synchronously to talk things through. That offer was not accepted, and now I wonder if you regard it as having been made in bad faith?

Flancian · Thu 2 Apr 2026 3:00PM

@Dan Phiffer No, I have no doubt about your good faith! Thank you for the offer. I did not decline it, I said I was open to it? We can talk in the next TWG sync as well.

I've added this to my profile: "#searchable, occasionally #llmassisted." -- the former is for Tootfinder.ch. I settled on #llmassisted because it seems more generic while still clear enough, and can also cover things like "I'm using LLMs to write some posts"? Which I'm not, but I think people might occasionally want to do?

Dan Phiffer · Thu 2 Apr 2026 3:04PM

I did not decline it, I said I was open to it?

@Flancian apologies, I'm looking at the thread (on Zulip not here) and still not seeing your message. It seems like a missed opportunity.

EDIT: I found the message, this one was on me.

Flancian · Thu 2 Apr 2026 4:46PM

@Dan Phiffer no problem!

After reviewing https://github.com/seldo/zeitgeist/commit/90b09ad6f683d7059ef4a5c4a3883704a7e71057 and seeing #nobots and #noai are supported to opt-out, I'm thinking of just having #bots and/or #ai in my profile to signal the opposite (endorsement/opt-in/likelihood of using these technologies).

Sieva · Wed 1 Apr 2026 6:39PM

@flancian a couple of thoughts:
- we should be concerned about use of the tool against our data, but perhaps not about a specific developer account
- personally, I'd like this person to remain blocked, if not at the instance level, then at least at the level of my account (currently not possible because social.coop blocks him). If we unblock him, I'd really like to know when that happens.

Flancian · Thu 2 Apr 2026 12:12AM

@Sieva I think that makes sense; to be clear my understanding is that if we move to Limit you will not be able to interact with them without seeing a warning, and if they follow you that would show up as a follow request. From the Mastodon docs on limiting at https://docs.joinmastodon.org/admin/moderation/#limit-user:

"Previously known as “silencing”. A limited account is hidden from all other users on that instance, except for its followers. All of the content is still there, and it can still be found via search, mentions, and following, but the content is invisible publicly. Notifications about activities from limited accounts will be handled according to account-level notification preferences (which default to “filter” for limited accounts).

If a limited account attempts to follow a user on that instance, the follow is converted into a follow request.

At this moment, limit does not affect federation. A locally limited account is not limited automatically on other servers. Account limitations are reversible."

So if I have this right, limiting would probably be as safe as suspend for most users? And still allow others to opt-in into interacting with this account.

Sieva · Wed 1 Apr 2026 6:22PM

I would prefer blocking requests with User-Agent=Zeitgeist/1.0.

Flancian · Wed 1 Apr 2026 8:37PM

@Sieva that makes sense, but note that it'd be hard (the TWG would have to think about it I believe, @Dan Phiffer @Calix @Ammar) to do this for only some users, it would be more like an instance-wide ban (similar to this suspension). I would personally find this unsatisfactory because it means my Fediverse experience would break any time I wanted to interact with someone who is using this client.

Dan Phiffer · Wed 1 Apr 2026 9:02PM

Yeah, to my knowledge this isn't something we've done before, blocking based on User-Agent. I bet we could figure it out though.
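For reference, a minimal sketch of what this could look like if nginx fronts the instance (an assumption about our deployment; the exact `Zeitgeist/1.0` header value is taken from Sieva's comment above and would need verifying against real request logs):

```nginx
# Sketch only: reject requests whose User-Agent identifies the Zeitgeist client.
# The `map` block goes in the http context; the `if` goes in the Mastodon server block.
map $http_user_agent $block_zeitgeist {
    default          0;
    "~*^Zeitgeist/"  1;  # matches e.g. "Zeitgeist/1.0"
}

server {
    # ... existing Mastodon server configuration ...
    if ($block_zeitgeist) {
        return 403;
    }
}
```

Note this would be instance-wide: nginx sees the request before Mastodon does, so there is no obvious way to apply it per-account, which is the trade-off Flancian raises above.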

Kris Warner · Wed 1 Apr 2026 6:31PM

Setting aside the chaotic launch of this one project, do you feel okay with AI systems harvesting posts and training LLMs?

No.

Mikelo · Wed 1 Apr 2026 9:47PM

Any use of a user’s data, LLM or otherwise, should be opt-in. Anything less is unacceptable.

I don’t consider myself a tedious bastard. I do think we as a cooperative should always protect ourselves against the entitled actions of those like Laurie Voss.

Dave V. ND9JR · Wed 1 Apr 2026 11:08PM

I'll go one further than saying I don't consent to LLMs using my posts: doing so constitutes what I see as "altering the deal". When I started posting I never imagined that my posts would go to such things, and now we have someone doing exactly that.

Also, what is it with certain people here defending people/organizations who are clearly bad actors? This happened with Facebook/Meta and the discussion about Threads and now I'm seeing it again.

Flancian · Thu 2 Apr 2026 12:07AM

@Dave V. ND9JR that's probably me 😅 I just disagree with the assertion that these people and entities are necessarily "clearly bad actors" as a whole/in totality, so I try to err on the side of being able to communicate with them until proven wrong. In the Threads case, I made the point that it was better if people in the Fediverse could talk to "good" users on Threads and vice versa; in this case, I'm making the point that this person is worth talking to for at least some of us, as are other potential users of this new client, so I don't think it's ideal if Social.coop blanket-bans everybody, effectively removing them from the Fediverse.

I think it's reasonable for people to want to talk to others, even if you in particular see no point to it -- this is why we're different people even though we share an instance. Strength in diversity and such?

Dave V. ND9JR · Thu 2 Apr 2026 12:52AM

@Flancian Yes, it's you. Except this isn't just about "talking to others." It's allowing an instance that's clearly using people's posts without their consent to feed LLMs (which themselves have a history of slurping all sorts of data without people's consent, including violating copyright), led by someone who, when told "We don't consent to this," basically said, "Your consent doesn't matter; I'm doing it anyway." How that doesn't constitute a bad actor I don't know, but it certainly does to me.

Flancian · Thu 2 Apr 2026 1:10AM

@Dave V. ND9JR one correction -- I think the discussion so far is not about allowing or disallowing an instance. The thing being developed is a client, not an instance, and the current suspension is only against the developer of the tool (a single account).

I think I understand your position, it's just that I consent to people using this tool because I don't care about my posts going to LLMs, and I think people blocking each other seems good enough if they want to signal they don't want to interact for any reason, including the other person being more pro-LLM or anti-copyright or whatever than them. 

The issue here is perhaps that you and I have different preferences but are trying to share an instance, and there's no effective per-account mechanism for this "setting". So I'm asking people not to prevent me and others from setting this setting how we want it, same as I don't want to prevent you from doing the same. I think we should try to respect each other's preferences essentially instead of jumping to telling each other "this is the way and every other way is wrong", which an instance-wide setting of suspend, or a blanket block based on user agent, would be.

Dave V. ND9JR · Thu 2 Apr 2026 1:23AM

@Flancian  No worries there; I'll be leaving this instance in about a week.

Flancian · Thu 2 Apr 2026 10:29AM

@Dave V. ND9JR oh! But why, if you want to share? Where are you going? Single hosted or a different community?

Dave V. ND9JR · Thu 2 Apr 2026 4:21PM

@Flancian I refuse to share until you state your motive for asking these questions.

Flancian · Thu 2 Apr 2026 5:34PM

@Dave V. ND9JR I was just curious, other people have left recently and I was also considering moving instances. But your tone makes it clear this is no friendly exchange, so disengaging now. Good luck.

Benji Mauer · Wed 1 Apr 2026 11:49PM

There seem to be two threads of conversation here:
1. The risk that if a social.coop member signs in to Zeitgeist, the DM content of the OTHER party would be compromised. In other words, if Dave and Anna have a DM thread on social.coop, and Dave signs up for Zeitgeist or a similarly poorly architected client without Anna's consent, that client could read Anna's DMs or private replies to Dave. This seems genuinely concerning, though it could be true of ANY poorly designed client, whether it uses AI or not. I would like to prevent the general case of this, if possible. Though I'm not sure it's practical, unless we have an allow-list of clients we all agree to.
2. The risk that LLMs will use public posts for training. I understand people's concerns, but as people posting on the open web, hoovering by all manner of parties seems just inevitable at this point. We can certainly use robots.txt directives or any other directives that indicate our desire for LLMs not to use our posts for training, but the only thing that will actually stop it, given the completely rampant violations of consent AI companies are currently engaged in, is having your social network in Signal threads with disappearing messages. I think we should do everything practical to prevent LLM crawling, but I certainly don't expect my posts won't be used for training whatever we tell bots to do or not do.
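For concreteness, the advisory directives mentioned above might look like the following robots.txt sketch. `GPTBot` and `CCBot` are crawler tokens published by OpenAI and Common Crawl respectively; any other names would need checking against each vendor's documentation, and compliance is entirely voluntary, which is exactly the limitation described above:

```text
# robots.txt sketch: ask known AI crawlers not to fetch anything.
# Advisory only -- well-behaved crawlers honor this; scrapers need not.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```

Note that robots.txt only addresses crawlers fetching public pages; it does nothing about an authenticated client like Zeitgeist, which accesses posts through the API with a user's own credentials.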

Benji Mauer · Wed 1 Apr 2026 11:57PM

Personally, I don't care about blocking Laurie Voss. I believe I followed Laurie, but I don't know because they're... blocked now. I don't particularly want access to people I follow to be blocked as "punishment" for something that doesn't impact me because I didn't sign up for Zeitgeist and never will, because it's a bad idea poorly executed.

I support @Flancian's proposal:

My proposal is that we should remove this suspend, perhaps downgrading to a limit if people have significant concerns with this user (I don't). Limit removes their posts from timelines and makes the profile harder to access (IIUC) but crucially allows people who are following the user to continue to interact with them.

Because it seems reasonable and right-sized to the actual risk, and I'd prefer to be able to interact with and see what this (apparent) charlatan is up to.

Luke Thorburn · Thu 2 Apr 2026 1:10AM

My 2c:
- There are lots of different ways in which people's data can come into contact with an LLM, which I think it's important to distinguish. E.g., inference vs training, local vs remote-hosted LLMs, various forms of contractual use and privacy commitments, various forms of hardware-based privacy guarantees. I believe these are all relevant to making policy decisions (i.e., at least some combinations of these should be allowed).
- LLMs have huge potential to give people more agency over their attention. E.g., people who find certain content triggering or bad for their mental health can create bespoke interfaces to modify that content or filter it out. I was involved in this paper, which makes this argument in much more detail, if interested.
- I hope we can find a policy that allows for such use cases — e.g., inference-only, no training, sufficient privacy guarantees.

(Not endorsing or commenting on Laurie's behaviour or how his specific app was designed.)

Danyl Strype · Thu 2 Apr 2026 4:01AM

Kia ora koutou, this is just the beginning of the gold rush of vibe-coded apps, and other uses of generative models. For now there is a moratorium on listing fediverse-related software on the fediverse.party if it makes any use of this technology. We're keeping a watchlist for this purpose, please let me know about any gaps in this list.

Billy Smith · Thu 2 Apr 2026 7:16AM

I had missed most of this, but it's an example of a simple principle.

Is there consent?

Yes or No?

If the question is not even being asked, then block by default.

Luke Opperman · Thu 2 Apr 2026 1:54PM

  1. My sentiment, currently well-represented in others' comments, is broadly and deeply anti-LLM and pro-consent, in line with our corner of the fediverse's norms against projects that scrape content or misinterpret "it's public" as consent.

  2. But this case is complicated in terms of response because it is not a scraping project but a client acting on behalf of (a presumed) user of social.coop. I'm broadly in favor of folks developing and using clients of their choice. Some of those clients will have flaws (in intent or in implementation, perhaps more so for vibe-coded ones) that e.g. expose DMs, or be written by people whose interactions get them blocked by our instance -- yet they act as an authorized client for a member in good standing who is trusting that client to e.g. summarize or prioritize their feed. The central question I have is how we might regulate this.

  3. There's a totally separate (agreed by all involved?) concern raised here about CWG moderation of the developer's account, that I'll leave to CWG to receive feedback and address.

It seems it would need to be a new capability, and an instance-wide policy, to block User-Agents that we agree are risks or not in line with our values. It also entails vetting and moderating/reviewing the use of particular clients: banning any client whose source contains an AGENTS.md is one possible policy, banning clients with known security or values deficits is another, but either carries significant tech/policy workload to maintain such a list with some level of open access for new clients. This case might indicate the start of a desire/direction from the membership, but despite being opposed to such tech I'm not sure I would vote to undertake the work to exclude it. Uncomfortable.

Billy Smith · Fri 3 Apr 2026 7:09AM

@Luke Opperman

If they are acting on behalf of a social.coop user, then we need to add to our rules that members of social.coop do not use these services.

Stéphane Klein · Thu 2 Apr 2026 4:34PM

My position on this is based on a distinction I think is worth making:

I accept that anything I publish publicly on the web can be read and processed by an LLM — this was already true before LLMs existed, with search engines, RSS aggregators, archive.org, etc. If I post publicly, I lose control of the data. That's the deal.

However, I draw the line at the nature of the output:

  • I'm fine with my public posts being processed by an LLM whose output remains private (e.g. someone summarizing their own timeline for personal use).
  • I would want an opt-in mechanism for cases where my posts are used to produce a public artifact — a published summary, a generated article, anything redistributed publicly that derives from my words.

On private messages, my position is simpler: I am opposed by default to any private message being sent to an external service, whether it involves an LLM or not. This should require explicit opt-in from all parties involved in the conversation, not just the person using the client.

The concern with Zeitgeist, for me, is less about LLM access to public data and more about these two boundaries: the public/private nature of the output, and the protection of private messages from external processing.

Aaron GK · Fri 3 Apr 2026 12:47AM

I don't know that the user has violated our code of conduct or federation policy so much as gone and done something that many of us are uneasy with, myself included, but that we don't have an articulated and agreed upon policy framework for. Because of this, at this point, I don't think suspending the user is a good thing.

That said, I think some of their behavior is incredibly obnoxious and presumptuous. For one thing, this person has basically come out and put themself in a position to build tools that will affect the direction of the fediverse. So speaking of whether or not "it's public" equals consent, this person put themself in a public position. To then block users who criticized them is a bad faith move. Yes, generally blocking people you don't care to interact with is desirable, but we're not just talking about a run-of-the-mill clash of personalities. This is the one thing that gave me pause on whether this person violated our policies.

Nonetheless, I think the way to handle this is to start thinking about and discussing clear policies around AI that reflect our values.

This from @Stéphane Klein resonates with me strongly:

However, I draw the line at the nature of the output:

  • I'm fine with my public posts being processed by an LLM whose output remains private (e.g. someone summarizing their own timeline for personal use).

  • I would want an opt-in mechanism for cases where my posts are used to produce a public artifact — a published summary, a generated article, anything redistributed publicly that derives from my words.

Though I still am uneasy with the first bullet point too.

Our Federation policy does state that bot accounts following users with #NoBot in their profile is cause for suspension; however, I don't have a problem with bot accounts following me. Some of them are dumb and obnoxious and I can just block them individually. I do have a problem with AIs following me. Can we make #NoAI a thing to express that we don't consent to AI accounts following us? And still, a user using AI such as this Zeitgeist thing doesn't necessarily make theirs an "AI" account in the same way that bot accounts are bot accounts.

Billy Smith · Fri 3 Apr 2026 7:12AM

@Aaron GK The main people who use the "public = consent" argument are usually techbros who want to do something that they know people will not like.

Like the Glassholes using GoogleGlass in bars to find vulnerable women to target.

Danyl Strype · Sun 5 Apr 2026 4:18AM

@Aaron GK

I do have a problem with AIs following me. Can we make #NoAI a thing to express we don't consent to AI accounts following us?

Broad brush policies against "AI" risk throwing out babies with bathwater. I think it's worth being more specific than that. Despite the common misuse of "AI" to describe LLMs and other generative models (which I've taken to calling Trained MOLEs), AI as an area of computer science is both much older and much broader than the current data-driven MOLE Training goldrush. It has produced, and may produce in the future, many forms of AI automation we might be fine with.

Ana Ulin · Tue 7 Apr 2026 5:41PM

@Danyl Strype I would go even further: When folks say LLMs, does that include, for example, locally-hosted models? Or setups where the data is used for inference but not shared for training purposes?

There are some legitimate, worthwhile uses for thoughtfully-implemented LLM-enabled systems that I would still want to allow (accessibility being a big one, others have been mentioned elsewhere in this thread).

Danyl Strype · Wed 8 Apr 2026 1:51AM

@Ana Ulin FYI This is a live debate across a number of project communities, including F-Droid. Maybe there's a need for a cross-project meeting (or even a conference) about AI ethics?

The most pressing issue for F-Droid purposes is answering the question; when is "Open Source AI" really libre software? Where people using it have full autonomy, down to being able to reproduce the model from freely-licensed raw inputs (source code, weights, and data). But we're also concerned about other ethical issues raised by LLMs, and AI / automation in general, which might justify the use of anti-feature flags, or require some new ones.

Stephanie Jo Kent · Wed 8 Apr 2026 10:20AM

@Luke Thorburn the link to your paper takes me to a library directory not a specific paper. Can you include title and citation info?

Luke Thorburn · Wed 8 Apr 2026 10:41AM

@Stephanie Jo Kent — here you go!

The moral case for using language model agents for recommendation
Seth Lazar, Luke Thorburn, Tian Jin & Luca Belli
Inquiry

Various links:
- https://doi.org/10.1080/0020174X.2025.2515579
- https://www.tandfonline.com/doi/full/10.1080/0020174X.2025.2515579
- https://arxiv.org/pdf/2410.12123 (preprint that has a different title, but the same paper)

How should Social.coop react to accounts that use AI or develop AI tools in the Fediverse?

Proposal by Flancian · Closed Sat 11 Apr 2026 5:01PM

Outcome
by Flancian · Mon 13 Apr 2026 8:04PM

Thank you for participating in the sense check! I found it very informative.

Opinion was split in an interesting way. People who suggested improvements or disagreed with the proposal did so in one of two ways: either favoring a stronger stance than the one stated (e.g. suspending by default) or a more measured stance (questioning the need to limit at all for e.g. using or promoting AI).

There were also some comments on the lack of adequate definitions and fuzzily worded concepts; I agree on all counts. I believe a more clearly worded sense check, or a full vote on a change to our moderation policy or our public code of conduct (which IMHO should remain as aligned as possible), should follow.

What do you think about the opinions that were expressed? What do you take out of it? Please share! And thank you again for discussing these occasionally difficult topics together as a community.

What are you proposing?

Accounts that are reported for using or developing AI or promoting the use of AI in the Fediverse should be at most Limited, not Suspended, until further review. This allows people to have a warning that the account might be practicing a policy they haven't consented to yet; while also allowing people to opt into interactions (which Suspend doesn't).

Why is this important?

Currently the CWG is suspending accounts for ~weeks at a time even when significant numbers of Social.coop users are following them and presumably might have consented to the account's "personality" or actions. This reduces people's choice at no appreciable incremental safety to others (over e.g. the proposed Limit).

What are you asking people to do?

Please express your agreement or constructive disagreement with this proposal through votes and comments.

Thank you so much for reading and expressing your opinion!

Results

Option            Votes   % of votes cast   % of eligible voters
Looks good        11      28%               2%
Could be better   14      36%               3%
Needs a rethink   14      36%               3%
Undecided         438     n/a               92%

Looks good: kekorraelagua, Alan (@alanz), mike_hales, JohnKuti, Flancian, Pete Ashton, Scott Jenson, Aaron GK, Sieva, @geesegoose@social.coop, Gilles DePemig Dutilh
Could be better: Nathan Schneider, Billy Smith, Alex Rodriguez, Eduardo Mercovich, Ana Ulin, Kévin, Luis Villa, Ted O'Neill, AJ, Alexander, Ammar, pjw@social.coop, Emsquared, Oliver Geer (@ogeer@social.coop)
Needs a rethink: Erik Moeller, Django, Brian Vaughan, Ben Davies, Sam Whited, Calix, Hollie Butler, Dynamic, David (@dash@social.coop), Matt Edgar, Mx Roo, Adrian, Kris Warner, Colectivo Desalienación
Undecided (partial list): Robert Guthrie, Danyl Strype, Kevin Flanagan, Stacco Troncoso, Dave Menninger, Josef Davies-Coates, Chris Zumbrunn, Bob Haugen, Lynn Foster, wouter@freeknowledge.eu, Joshua Chalifour, J. Nathan Matias, freescholar, Joshua, benjamin melançon, Steve Herrick, KC Terry, Clayton (clayton@social.coop), Stéphane Klein, Zane Selvans

39 of 477 votes cast (8% participation)

Flancian · Looks Good · Wed 8 Apr 2026 5:09PM

I think the proposed policy adjustment has benefits and no perceivable drawbacks w.r.t. the current stance of suspending first. Please consider it :) Thank you!

Billy Smith · Could Be Better · Wed 8 Apr 2026 5:09PM

I don't want to follow AI tools, or to have AI-accounts following me.

It's the consent principle.

Do I get the choice to say yes, or no?

Do they give other people the choice to say yes or no?

Members of social.coop should always give that choice to other people, as otherwise they're refusing to give other people the same rights that they claim for themselves.

That is not co-operative behaviour.

Kévin · Could Be Better · Wed 8 Apr 2026 5:09PM

I would only need clarification on "developing AI or promoting the use of AI"; I personally know people working on locally hosted LLMs who in essence would be considered to be engaging in "promotion". Equally, if we're talking about journalism in this sphere, that could also be "promotion". Just for those reasons we need better clarity on where the line is drawn. AI tools in use? Ban them.

Personally though I hate everything about AI, however, if we're making rules they should be fair and transparent.

Aaron GK · Looks Good · Wed 8 Apr 2026 5:09PM

As I stated in the general discussion I don't think use of AI/LLMs violated our code of conduct, federation policy, or other rules as they currently stand. I may disagree with some about the potential costs/benefits of AIs in the fediverse but I think those I disagree with on the general merits of AI have a right to have our server policies enforced in a predictable, transparent manner.

I would like to see us update the COC and Federation Policy to delineate acceptable and unacceptable use of AIs

Alex Rodriguez · Could Be Better · Wed 8 Apr 2026 5:09PM

Absent an AI policy as part of our content moderation guidelines, this doesn't quite make sense as the place to start. If this were to be put forward as a next step without an AI policy to refer to, then the language needs much more specific definitions. If this is just about reversing the current de facto policy of suspending things that are reported for AI stuff then make the policy more clearly that: "no suspensions for using AI"

Also, I'm a hard no to letting AIs harvest my toots

Brian Vaughan · Needs A Rethink · Wed 8 Apr 2026 5:09PM

I have a zero-tolerance policy towards the use of LLMs; I regard the use of them as unethical.

Emsquared · Could Be Better · Wed 8 Apr 2026 5:09PM

One size does not fit all cases, so I presume suspensions would ideally be on a case-by-case basis: an account of an AI or LLM developer is fine with me, whilst an account using AI in some form that is modeling on our interactions without explicitly stating or declaring it should be suspended. Again, as others have said, public posts are open to being used for such purposes anyway, so the best we can do is make a moral point.

In short stay open minded but proceed with caution and limit bad actors.

Erik Moeller · Needs A Rethink · Wed 8 Apr 2026 5:09PM

I absolutely don't think accounts should be limited or suspended for using AI or having opinions about AI that are not in line with Mastodon orthodoxy. If that has happened in the past, it's IMO not a good practice and should be revisited. Scraping is in its own category and legitimate reason for suspending an account. I think this proposal needs a bit of work to make it clear what policy is being amended and to what effect.

Dynamic · Needs A Rethink · Wed 8 Apr 2026 5:09PM

This language is far too vague for a proposal.

There's a wide range of activities that could fall under "using or developing AI or promoting the use of AI in the Fediverse", from idle talk about projects to use AI in the Fediverse (which I don't think would justify even being Limited) to transparently declaring an intention to harvest data from users that interact with the account in any way (which could warrant a Suspension).

Regardless, harvesting data should be by affirmative consent only.

Django · Needs A Rethink · Wed 8 Apr 2026 5:09PM

I highly doubt anyone has been suspended for Promoting AI use.

Aaron GK · Thu 9 Apr 2026 12:26AM

@Django  in the general discussion above the poll it was mentioned that the account belonging to Laurie Voss, the maker of this Zeitgeist LLM, was suspended.

Django
Thu 9 Apr 2026 12:35AM

@Aaron GK I understand. My comment is critiquing the current formulation:

using or developing AI or promoting the use of AI

The user was using AI.

No one was suspended for Promoting AI.

I think it muddies the question to lump all those cases together.

Luis Villa
Thu 9 Apr 2026 4:07AM

@Django that's literally what the top of the page says happened to Laurie Voss. I can't see their account anymore, so it's hard to say.

Django
Thu 9 Apr 2026 3:21PM

@Luis Villa The user was feeding public and private posts into a commercial LLM.

No one was suspended for Promoting AI.

Adrian
Needs A Rethink
Wed 8 Apr 2026 5:09PM

I understand the motivation for this proposal, but as worded it is too vague. Because this proposal addresses moderation, it should probably include some modification of the code of conduct or reporting guide.

Alexander
Could Be Better
Wed 8 Apr 2026 5:09PM

I agree with others who say this sounds a little bit too vague to be a finished proposal. Personally, I don't believe that AI-promoting accounts should be suspended or limited, unless they're explicitly hoovering up data without consent.

Kris Warner
Needs A Rethink
Wed 8 Apr 2026 5:09PM

I'm in favor of suspending them. LLMs waste vast amounts of energy, they're not good at what they're supposed to do, they disempower and deskill workers, and their use further concentrates power in the hands of a few. I want the fediverse to be a hostile environment for those pushing these slop machines.

Luis Villa
Could Be Better
Wed 8 Apr 2026 5:09PM

A lot of the "voting" is all feelings about LLMs, no analysis of the proposal. Doesn't make me optimistic about the quality of cooperative governance here, in what is admittedly a charged moment.

Voting "could be better" because I think it's important to say that, as far as I can tell, there is no anti-AI policy here, so even "limitation" feels premature. But it is better than suspension.

Colectivo Desalienación
Needs A Rethink
Wed 8 Apr 2026 5:09PM

The real question isn't how Social.coop should react to accounts using AI tools. The real question is: when do we start campaigns against the Master? It is not the AI (the tool) that is the Master!

— Colectivo Desalienación

pjw@social.coop
Could Be Better
Wed 8 Apr 2026 5:09PM

I think the discussion above illustrates that this requires a broader discussion, maybe among a working group.

I am generally AI-skeptical, and wouldn't want these tools using my posts. But I think that should be distinguished from the separate question of whether we take that stance as an instance. I feel less comfortable saying "we, as a coop, refuse uses of LLMs", which seems a bit like tyranny of the majority - what about our values is strictly incompatible with edge cases?

Luke Thorburn
Needs A Rethink
Wed 8 Apr 2026 5:09PM

I'm really not in favour of this — it's not clear to me how AI or 'AI use in the fediverse' is defined, how this policy could be fairly applied, or why AI use should be singled out relative to all the other morally contestable ways in which people can use information we post online without our informed consent.

At minimum, what about visually impaired people using a text-to-speech model as a screen reader?

I would leave this instance if this policy is enacted.

Ben Davies
Needs A Rethink
Wed 8 Apr 2026 5:09PM

The developer is not interested in gaining the consent of post creators. We need to make consent a core principle that must be adhered to before people can engage with our network.

David (@dash@social.coop)
Needs A Rethink
Wed 8 Apr 2026 5:09PM

This proposal is ambiguous about whether it is:

  • Trying to contribute to the discussion at hand - how social.coop should respond to fedi-slurping LLM tools.

  • Trying to achieve a change in future moderation policy towards human accounts.

  • Trying to over-rule a specific decision of the CWG.

Eduardo Mercovich
Could Be Better
Wed 8 Apr 2026 5:09PM

I think it depends on many factors. While AI as proposed today is certainly not my cup of tea, if an experiment is done with care and with people's participation (clear and open communication, opt-in, private results not fed to corporate training, good faith, etc.) I could accept the limit.

AJ
Could Be Better
Wed 8 Apr 2026 5:09PM

Where is the middle ground that both protects the platform and respects people's right to choose what tools they use outside this space? Being too heavy-handed can have unintended impacts, but setting firm boundaries can be helpful for clarity.

Hollie Butler
Needs A Rethink
Wed 8 Apr 2026 5:09PM

Too vague. It also feels retaliatory - you've been repeatedly asked to just wait until our regular moderator meeting in nine days, where mods can get together to talk about it, but you continue to press this. There has been no final decision as to Limit or Suspend (or another option), and this proposal makes it sound like this is something we're routinely doing, when this is the first time it's come up and it's reasonable and responsible for us to discuss it as a group.

Oliver Geer (@ogeer@social.coop)
Could Be Better
Wed 8 Apr 2026 5:09PM

I agree that:

  • 👍 Accounts developing AI or promoting the use of AI in the Fediverse that do not violate any point of our code of conduct should not be suspended. This is independent of my stance on AI: I intentionally follow accounts I really disagree with, so I can keep tabs on what they are saying and criticise it when necessary.

  • 👍 Don't suspend accounts that scrape user content with opt-in consent.

  • 🤔 Consider separately accounts that scrape user content without opt-in consent for any purpose.

Oliver Geer (@ogeer@social.coop)
Fri 10 Apr 2026 10:30AM

The third point needs more elaboration, but I don't have time to give it. Ideas on that include:

Matt Edgar
Needs A Rethink
Wed 8 Apr 2026 5:09PM

Seems to be a generalised proposal in response to a single specific incident. Can see the intent, but cannot support this proposal as framed

Ana Ulin
Could Be Better
Wed 8 Apr 2026 5:09PM

I am not against this current proposal, which I see as trying to reverse a mistaken suspension of the author of an AI tool. (It seems like the harm was coming from the tool, not from Voss' personal account; being an asshole doesn't seem like sufficient grounds for banning.)

I am strongly against suspending or limiting accounts for having "wrong" opinions about AI, so I'm wary of enshrining suspension as this policy (unless we already have a formal suspension policy that this softens?).

Gilles DePemig Dutilh
Looks Good
Wed 8 Apr 2026 5:09PM

I want to stress that my intelligence is artificial too.

Nathan Schneider
Could Be Better
Wed 8 Apr 2026 5:09PM

I agree that we need an organizational co-op policy on LLMs; I am concerned that a working group is making these decisions for us, since clearly we have a very mixed set of perspectives on the topic. Ideally such a policy would:

  • Enable individual choice
  • Provide some level of collective defense

Ammar
Could Be Better
Wed 8 Apr 2026 5:09PM

I would make sure we take some time to define AI. LLMs are not the same as AI, and synthetic content could be considered waste that unnecessarily makes the infrastructure of the network more expensive.

Item removed

Danyl Strype
Tue 21 Apr 2026 7:13AM

@Ammar

I would make sure we take some time to define AI

100%. "AI" is and has always been a buzzphrase. A handwave at a computer science research goal, not a specific kind of technology. No sensible policy can be enacted about all of the things that have been or will be described using this buzzphrase, as if they are a single class.

synthetic content could be considered waste that unnecessary makes the infrastructure of the network more expensive

This comment sneaks in an assumption that social.coop can make decisions on behalf of a global view of 'network good'. This is neither possible nor desirable. The fediverse and other decentralised networks are, by definition, systems where participants like social.coop can only evaluate and make decisions on what's right for our members, and our infrastructure. Decisions about network-level concerns belong in protocol standardisation forums, like SocialCG/WG or, to a lesser extent, the FEP process.

Ted O'Neill
Could Be Better
Wed 8 Apr 2026 5:09PM

The conflation of developing and promoting makes this unclear. I disagree with those promoting most LLMs, but that leads to discussion (perhaps). Developing is a hard no for me.

JohnKuti
Looks Good
Wed 8 Apr 2026 5:09PM

To my way of thinking, a lot of the objections around consent can be dealt with by users: uncheck the box for "automatically accept new followers".

Luke
Wed 8 Apr 2026 5:37PM

Like others, I'm not OK with AI systems harvesting posts and training LLMs (on the final question asked in the initial post).

Jonobie Ford
Wed 8 Apr 2026 7:04PM

I think I mostly agree with this, if I understand things correctly:
If I start to follow an account that someone else has reported for AI, then I get a warning but can still choose to follow or not, right? That's cool by me.

Things I'm unclear on for this suggested policy:
If someone reports an account I've already followed, what happens if it gets limited? Is there any warning I get to see, or is it just a silent thing that doesn't affect me?

What counts as "using or developing AI or promoting the use of AI in the fediverse"? My particular line is "I do not want AIs following me and scraping my content into training data". I understand this likely happens, but I don't consent to it and would love if social.coop helped keep them away from me. But I don't mind that those things exist if there's a clear opt-in policy. And I don't mind accounts existing that, say, use LLMs to describe alt text, or something like that.

Django
Wed 8 Apr 2026 7:10PM

Considering that ANY post sent to an LLM is potentially used for training, and that current consent models are opt-out, I believe this is a fundamental breach of contract with the social web.

Social.coop users deserve the protection and precaution they have come to trust from the Moderation team.

If anything, I would like to see our policies reviewed in light of this threat model (ranging from Grammarly style impersonation, to more nefarious Surveillance).

Adrian
Wed 8 Apr 2026 7:34PM

With respect to the proposal for social.coop to run an ATproto personal data server (PDS): I would just like to point out that, by virtue of the complete openness and lack of privacy in the ATproto architecture, we have to assume that everything (posts, media, social graphs, etc.) available even in a social.coop-hosted PDS will be systematically archived by many, many different actors, and this information will certainly be used for training LLMs (e.g., https://arxiv.org/html/2510.02343). Selfhosting does not prevent this.

There is no realistic option for opting out due to the basic constraints of ATproto: everything is public and designed to be aggregated into globally-visible "firehoses" of content, which actors with resources can simply keep indefinitely.

Danyl Strype
Tue 21 Apr 2026 7:27AM

@Adrian

> by virtue of the complete openness and lack of privacy in the ATproto architecture, we have to assume that everything (posts, media, social graphs, etc.) available even in a social.coop-hosted PDS will be systematically archived by many, many different actors

The same is mostly true of the AP architecture, which works by having social.coop send copies of posts to every instance where someone follows the person who posted it. Yes, AP posts come with metadata about the intended scope of a post. But without Object Capabilities to enforce Followers-only, or E2EE to make Direct Posts reliably private, scope metadata is just a suggestion, not a law of the network.

In the absence of effective technical mechanisms to prevent this, public posts anywhere on the net can and will be scraped, and probably used for MOLE Training. Anyone who thinks they can prevent this by making policies is channeling King Canute.

But all of this is orthogonal to the topic(s) raised by this thread. What social.coop members can make and enforce policies on is what kinds of following relationships the service enables with accounts on other services, and on what basis.

Adrian
Tue 21 Apr 2026 3:37PM

@Danyl Strype I agree that public posts can be scraped, but even this is much more difficult in AP than ATproto. In ATproto, all data are consolidated into a centralized firehose, which dramatically facilitates scraping the entire network. One could argue that ATproto provides the ideal design for surveilling a whole network.

In AP, it would be necessary to scrape across thousands of servers to achieve the same effect, and the effort can be detected and mitigated by server admins (which happens all the time in response to increasingly aggressive scraping bots). It would be much more challenging to scrape all of AP. Not to mention that AP actually permits private posts, which are impossible in ATproto.

Erik Moeller
Wed 8 Apr 2026 8:55PM

Rather than focusing on use of specific technology in our code of conduct, I'd be in favor of a narrower policy amendment:

- Scraping of follower-only content or DMs for any reason (AI or not) is prohibited as it violates users' normal privacy expectations. Any accounts or instances engaged in such scraping activity may be suspended immediately.

- If and when accounts or instances that have previously engaged in such practices credibly discontinue them, they may be unblocked after a period of 30 days, but repeat violations may result in indefinite suspension.

Benjamin Mako Hill
Thu 9 Apr 2026 1:37AM

@Erik Moeller This strikes me as a better way of approaching this. If it wasn't obvious before, this thread makes it clear that social.coop has a range of opinions on LLMs and AI. I don't think participating in this instance means we need to agree on everything, or on the value and ethics of AI and LLMs in particular. Given the diversity of opinion here, I think it's important that members be able to express their individual preferences for how they want to interact with AI tools and the people who build them.

Flancian
Thu 9 Apr 2026 5:24PM

https://social.coop/@flancian/116375830084655955 for my updated position on waiting for arbitration for the Seldo incident 🙂 Thank you all for working together on this!

About the poll/sense check: thank you for participating! I agree a better worded proposal should follow. People disagreed with the framing in both directions (some thought it went too far by assuming Limit for things potentially as broad as AI promotion, which I agree with; some thought we should be stricter for any kind of AI use.)

Aaron GK
Fri 10 Apr 2026 3:18AM

Again, I too want as little AI as possible in the Fediverse. I think we should put clear guidelines in our policy documents such as the code of conduct and federation abuse policies that would address that. I do not think our policy documents currently have that.

Some facts to ground some discussion I hope to have:

  • At least one account was suspended (see very original post in this discussion) and it seems perhaps more.

  • While it seems the majority so far are fine with the suspension, there are several coop members who disagree with it because they do not want AI banned from their use of social.coop

Some questions I have. These are not rhetorical. If anyone would like to take a crack at them I'd like to read your thoughts:

  • Do you think use of AI violates any existing social.coop policies? If so what parts specifically? In what way?

  • How would you explain to a fellow member of this democratic organization who does not want accounts using AI suspended that we have all already agreed that suspension is what would happen to accounts using AI?

  • Do you think the moderation actions should be based on agreements we as social.coop members have made by joining and participating, such as the code of conduct and federation abuse policies? Always, or just usually?

    • Do you think the CWG can/should make moderation decisions based on what they think is best for us, if a behavior is not addressed in policy?

    • What if a minority of fellow members don't agree the moderation action is best for us? What if a majority of fellow members don't agree the action is best for us?

Ana Ulin
Fri 10 Apr 2026 6:58PM

@Aaron GK the phrase "use of AI" is so vague as to be meaningless. "Using a local LLM to summarize and read out loud" feels very different from, say, "send all posts and DMs to OpenAI and allow them to train on them". As I understand it, the latter example is what most folks here are strongly objecting to.

Erik Moeller
Fri 10 Apr 2026 11:35AM

  • Do you think use of AI violates any existing social.coop policies? If so what parts specifically? In what way?

Not inherently. As I understand it, what's special about the Zeitgeist project is that it ingested data from users who authorized the app, which may include contexts such as follower-only posts, DMs, or accounts excluded from search engines. If those issues have now been addressed (i.e. if the bot now respects privacy boundaries), then IMO this specific account suspension could be revisited.

The federation abuse policy doesn't go into much detail here, but it does include as grounds for suspension "A bot following social.coop users who have #nobot in their Mastodon profiles."

See https://wiki.social.coop/wiki/Federation_abuse_policy.

Disclaimer: I'm not in any of the working groups and wasn't involved in this moderation decision.

  • How would you explain to a fellow member of this democratic organization that does not want accounts using AI suspended that we have all already agreed that suspension is what would happen to accounts using AI?

"Using AI" is a very broad category that is not sufficiently descriptive of the policy violation that caused the Zeitgeist account suspension, as I understand it.

  • Do you think the moderation actions should be based on agreements we as social.coop members have made by joining and participating, such as the code of conduct and federation abuse policies? Always, or just usually?

Yes, per the bylaws (https://wiki.social.coop/wiki/Bylaws). My understanding of the bylaws is that any major shift in social.coop moderation policy to, for example, ban specific opinions about AI, or use of AI in drafting posts, or suspension of accounts on other instances that permit such use, would require a larger vote among members.

Those of us (myself included) who are opposed to such significant changes may not only vote "disagree", but could vote "block". Per the governance section of the bylaws:

"A Block vote represents a fundamental disagreement—a belief that the proposal violates Social.coop's core principles. Proposals with Block require at least 9 times more Agree votes than Disagree and Block votes in order to pass."

Aaron GK
Fri 10 Apr 2026 5:18PM

@Erik Moeller Thank you. Appreciate this response very much. It has helped me understand where some folks may be coming from better.

Derek Caelin
Fri 10 Apr 2026 8:12PM

Narrowly, I agree that a "limit" policy is better than a "suspend" policy in this case, and that limitation should be more often exercised as a moderation tool. I don't, however, think limitation is the right approach in this case.

The CWG should make a decision to limit or suspend based on our Code of Conduct and the Federation Abuse Policy. At the moment, there is no policy regarding "using or developing AI or promoting the use of AI in the Fediverse", so the CWG should not make a moderation decision on those grounds. We do have a policy around bots following users, but LLMs don't need to follow anyone to view public content, and as I understand it, as a client, Zeitgeist isn't a bot.

To answer @Dan Phiffer's original question: "do you feel okay with AI systems harvesting posts and training LLMs?" - I don't feel okay. I am skeptical that a policy could be crafted to address it that doesn't radically shape how our server federates.

Flancian
Mon 13 Apr 2026 8:21PM

@Derek Caelin yes, I agree, well put. Part of the reason I appealed to the moderation decision is because I think it is important we keep moderation policy and our public CoC/FAP aligned -- +1 also to @Erik Moeller's related message above.

Django
Sun 12 Apr 2026 8:05PM

The main problem here is that there is NO OPT-IN mechanism; the burden falls on users/authors to OPT OUT by adding hashtags to their bios.

The #NoBot example is interesting, because IIUC it comes from pre-ActivityPub days (OStatus), when the social network lacked effective moderation tools.

This is a question of baseline consent!

Flancian
Mon 13 Apr 2026 8:08PM

+1, this is totally a key question about how consent works in the Fediverse. I think it probably makes sense to have both opt-out and opt-in instances. Would you agree with that?

If so, the question becomes whether we want Social.coop to be an opt-in or an opt-out instance. I am proposing opt-out because I think this aligns clearly with our stated values (we are, after all, about social networks and cooperation!). But I understand if some subset of the community thinks the other way.

I think the time nears to actually propose a way and vote on it, and then be explicit about it for people who are around or want to join. Wdyt?

P.S. when I started bonfire.social.coop it was because, beyond several people wanting to experiment with Bonfire (myself included), I was interested in exploring the idea of Social.coop running a second instance for people who want a different set of rules of engagement, or who want to experiment with Fediverse-enabled governance.

Dan Phiffer
Tue 14 Apr 2026 12:28PM

@Flancian My view is that consent doesn't "work in the Fediverse" differently from other places (consent is an explicit understanding between two parties). There are structures built into the software, like a field for content warnings and quote boost settings, that can better facilitate consent in ways that are not as well defined on other networks. And, as a result, the software choices have made the Fediverse an attractive option to people who've felt underserved by the capacity for consent offered by other networks.

Alex Rodriguez
Tue 14 Apr 2026 12:57PM

@Flancian is it possible to set up an account on bonfire.social.coop? I tried my social.coop credentials and that didn't work.

Dan Phiffer
Tue 14 Apr 2026 1:28PM

@Alex Rodriguez yeah, I can help you get set up. What is your social.coop handle? I'll DM you over there.

Alex Rodriguez
Tue 14 Apr 2026 1:33PM

@Dan Phiffer https://social.coop/@arod

Dan Phiffer
Tue 14 Apr 2026 1:39PM

Thanks, I just sent you an invite.

Dan Phiffer
Tue 14 Apr 2026 12:15PM

One twist to the Zeitgeist model that makes consent impossible in some scenarios is that it reverses the flow of information:

  1. Alice follows Bob (but Bob does not follow Alice)

  2. Alice announces their intent to use Zeitgeist in a Mastodon toot

  3. Bob does not see the announcement, and has no way to know that they could modify their bio with hashtags to opt out

  4. Alice proceeds to process Bob's content with LLMs; no consent was possible
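
For what it's worth, the bio-hashtag convention discussed in this thread is something a client could check mechanically before ingesting anyone's posts. A minimal sketch, assuming a hypothetical set of opt-out tags (only #nobot has any established precedent) and account objects shaped like Mastodon's, with the bio in a `note` field:

```python
import re

# Hashtags some Fediverse users place in their bios to signal opt-out.
# This set is an assumption for illustration; only #nobot has wide precedent.
OPT_OUT_TAGS = {"#nobot", "#noai", "#nollm"}

def has_opted_out(bio: str) -> bool:
    """Return True if the account's bio contains any known opt-out hashtag."""
    tags = {t.lower() for t in re.findall(r"#\w+", bio)}
    return not tags.isdisjoint(OPT_OUT_TAGS)

def filter_accounts(accounts: list[dict]) -> list[dict]:
    """Keep only accounts whose bios do not signal an opt-out."""
    return [a for a in accounts if not has_opted_out(a.get("note", ""))]
```

Of course, this only helps once Bob knows the convention exists, which is exactly the gap step 3 above describes.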

Flancian
Tue 14 Apr 2026 2:00PM

@Dan Phiffer this is a good point. Still, I'm not clear this changes things too much, as the question remains: is the onus on the AI adopters to seek consent from each individual party they will interact with (the opt-in path), or on AI celibates to manifest their wish not to interact with people who use AI in e.g. their Fediverse clients (the opt-out path)? I think the latter is more reasonable, but that is my opinion; we are having this conversation because both opinions are in principle sensible, AND there has not been a Social.coop vote on this specific decision. Do we agree on that much at least?

Note that society already works on implied consent to some degree, and deciding where additional explicit consent is needed is part of the social contract (which is why I'm trying to frame this as an explicit update to our CoC or FAP). When you talk to someone in public, you don't usually have to go through an additional pre-talking process to see if they are OK with being talked to. People who want to lower the chances that others misjudge their availability for a chat often end up wearing headphones, which is a way of opting out of most social interactions. Etc.

Brian Vaughan
Tue 14 Apr 2026 4:59PM

@flancian You do, in fact, have to go through an additional pre-talking process with someone in public to see if they are willing to have a conversation with you. A lot of this is informal -- social conventions about time and place, body language -- and opaque to some people, in which case the ethical thing to do is to formally ask a person if they are willing to talk.

It would be more accurate to describe the wearing of headphones as a response to common patterns of harassment, in which people will ignore informal rules about initiating conversations and will respond angrily to formal rejection to their initiating a conversation. The headphones are, in short, a defense against a material danger.

Commercial social media operates by deliberately subverting people's intuitions about the social context of conversations, in order to datamine those conversations and manipulate them into conflicts in order to drive engagement. The Fediverse, while somewhat better, still leans too heavily on the models of commercial social media, and it remains difficult to negotiate the intended audience for a post and the expectations about participation in a thread.

LLMs, as datamining tools that ignore consent, make the situation worse.

Benjamin Mako Hill
Tue 14 Apr 2026 7:16PM

Agreed. And for this reason, I'm generally just very skeptical of this idea that one can or should publish things openly on the web (i.e., without access gated by agreements) and then expect more control over that content than is implied by commonly held expectations related to redistribution and reuse. Both the law and common sense allow for at least some redistribution and reuse (e.g., "fair use"), which I think is generally a good thing.

If the data is not public, and/or if getting access to it requires an agreement to some different set of rules, and/or if the material is private (like DMs), that's clearly different. The DM thing mentioned above sounded like a pretty clear violation of expectations. But it also seems to have been addressed in this specific case.

FWIW, the reuse of public data by AI models is very much in line with what I think people should all expect from any data published openly on the web in 2026. Whether you like it or not (and I don't!), I'm having trouble keeping my public webservers online at all given the massive traffic from AI scraper bots. There are many tens of thousands of copies of any public webpage I maintain, regardless of what I say in my robots.txt, write on the webpages, or want in my heart.

In the absence of changes to the law, I think we'd be well served by treating openly published data as public and at least largely out of our control. We should ensure that material we want to be kept private or semi-private is distributed using tools that have a technical design that makes them appropriate for that purpose.

Danyl Strype
Tue 21 Apr 2026 4:30AM

@Benjamin Mako Hill

In the absence of changes to the law, I think we'd be well served by treating openly published data as public and at least largely out of our control. We should ensure that material we want to be kept private or semi-private is distributed using tools that have a technical design that makes them appropriate for that purpose.

This! 1000 times this!

Public is public. People who demand software that gives them the power to publish to a global audience, without any of the consequences that naturally come with it, are either a) demanding dry water, i.e. deluding themselves, or b) seeking to separate power from accountability, which is totally counter to cooperative values.

I want to be clear that I'm not saying that the group demanding dry water don't have legitimate needs that aren't being served by existing fediverse software. On the contrary, I think it's important to listen to these people and find out what problems they're actually trying to solve, and help them find ways to solve those problems that are both possible and desirable.

The current butting of heads between people pushing Public Not Public discourse, and people pointing out that public *is* public, produces far more heat than light in most cases, and is not really serving anyone.

Nic
Tue 21 Apr 2026 9:27AM

The current butting of heads between people pushing Public Not Public discourse, and people pointing out that public is public, produces far more heat than light in most cases, and is not really serving anyone. @Danyl Strype

There seems a bit of blurring between the argument that it's technically possible to scrape public, published work; that it's legally ok to scrape public work (which I think depends on context, use and jurisdiction, afaik, ianal); and that it's all morally fine…

> Whether you like it or not (and I don't!)… regardless of what I say in my robots.txt, write on the webpages, or want in my heart. @Benjamin Mako Hill

Sure, out there we've no control. But here? As an intentional community of people trying to do social media differently to big-tech, can't we acknowledge the technical and legal realities, while still saying 'we think doing ABC is bad, it goes against our community's discussed and agreed principles'? Not every value need be defined by legal and technical capabilities - that's not how day to day life works…

Of course this in turn is granular and contextual – more complex than one line in a policy saying yes/no LLMs. Me using Mastodon's in-built DeepL to translate a Fediverse post in my feed is technically scraping then using an LLM on a post. But it's obviously very different to scraping all my published data to profile / credit-rate / misquote / train / impersonate / better surveil / clone me by LLM (etc) – and I'd be much happier having a social.coop ToS or CoC that this would be in breach of.

Danyl Strype

Danyl StrypeTue 21 Apr 2026 10:42AM

@Nic These are all very familiar Public Not Public talking points. I can't extract an argument from this that isn't answered in the comment to which you're replying. Can you, @Benjamin Mako Hill?

Care to TL;DR what you want responded to here, Nic?

Nic

NicTue 21 Apr 2026 11:15AM

@Danyl Strype I thought you had written that people who say they don't want their posts on social.coop to be scraped are demanding 'dry water' and are 'deluding themselves'; sorry if I misunderstood. I'm saying just because we can't stop a behaviour doesn't mean we have to endorse it.

Danyl Strype

Danyl StrypeTue 21 Apr 2026 1:17PM

@Nic

People who say they don't want their public posts to be scraped, on social.coop or elsewhere, are demanding dry water and are deluding themselves. Also "web scraping is good, actually".

just because we can't stop a behaviour doesn't mean we have to endorse it.

I don't disagree, but I'm not sure how this is helpful. If there is consensus that members don't want posts ingested by tools known to be feeding them to Trained MOLEs - and I totally sympathise with that - then work needs to be done to define exactly what tools/uses are the problem, and write policies targeting those specifically.

But quite frankly I think social.coop members would be better to put time and energy into things the co-op can actually have an effect on. Like investing some resources in the effort to E2EE DMs and Followers-only posts, so only their intended audience can see them.
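(Blocking a known scraper's requests at the instance edge is another lever we do control, as mentioned up top re the Zeitgeist "User Agent" metadata. A rough nginx sketch; "zeitgeist" below is a placeholder pattern, since the actual string the client sends would need confirming from our access logs:)

```nginx
# Sketch only: deny requests whose User-Agent matches a known scraper.
# "zeitgeist" is a placeholder pattern, not the confirmed string the
# client actually sends; check the access logs before relying on this.
if ($http_user_agent ~* "zeitgeist") {
    return 403;
}
```

Anything like this is best-effort, of course - a client can change or drop its User-Agent at any time - which is partly why I'd still put the energy into E2EE.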

Nic

NicTue 21 Apr 2026 2:02PM

Also "web scraping is good, actually".

That link also points out that "Scraping to violate the public’s privacy is bad, actually" and "Scraping to alienate creative workers’ labor is bad, actually". So the question is perhaps more "Scraping and then what?".

Perhaps 'Scraping and non-consensual processing, other than to assist with translations and accessibility – or anonymised for research & analysis – is bad actually' would cover a lot of the concerns…

but I'm not sure how this is helpful

Because it helps distinguish between people who want to respect people's wishes, ie consentful technologists, and those who don't give a hoot. Which I think would be helpful in the original question about whether or not to suspend or block accounts who use AI – is their use going against the wishes / policies / code-of-conduct of this community or not?

Benjamin Mako Hill

Benjamin Mako HillTue 21 Apr 2026 5:01PM

I'm saying just because we can't stop a behaviour doesn't mean we
have to endorse it.

I don't think anybody here is "endorsing" people doing anything they
want with any public data. I think my post makes it clear I am unhappy
with many of the same kinds of data use/reuse that you are. I'm
arguing that we should acknowledge the technical and social realities
of the internet we are living and interacting in, and build for and
around that.

Your "out there" and "in here" metaphor above feels off because the
point of this place is to publish things "out there."

"In here" corresponds to the shared social and technological
infrastructure where we really do get to make the rules, vet folks
before participating, and so on. When people sign up here, they have
to apply, agree to principles, answer questions, are vetted, etc.

None of that applies to the people who view the stuff we post publicly
or the content itself. We would be wise to acknowledge that when we
throw messages over the wall with the express purpose of letting the
world see and copy and share them, our desires for how it should be
treated (stated and unstated) are just that.

If we want this place to just be "in here" we can do what most other
likeminded internet communities have done and just raise the
drawbridge and become fully insular. But I think that defeats the
point and undermines the promise.