Zeitgeist and other Fediverse LLMs

Dan Phiffer · Wed 1 Apr 2026 5:20PM

Last week Laurie Voss announced that he’d released a new Mastodon client called Zeitgeist.blue, “a multi-social-network app that summarizes your feed for the last 24 hours.” As a Mastodon client, it appears to authenticate a given user with an existing instance (potentially including social.coop) and, using Anthropic or GitHub Copilot, to process that user’s timeline with a large language model (LLM) in order to summarize it.
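
For anyone who wants to picture the mechanics, here is a minimal, hypothetical sketch of what a client in this category does: it uses the OAuth token the user granted it to pull their home timeline over the standard Mastodon API, then collects the post text to hand to an LLM. This is not Zeitgeist's actual code; the instance URL, token, and helper names are placeholders.

```python
import requests

INSTANCE = "https://social.coop"   # whichever instance the user authorized (placeholder)
ACCESS_TOKEN = "user-oauth-token"  # granted through the normal OAuth sign-in flow (placeholder)

def fetch_home_timeline(limit=40):
    """Fetch the authorizing user's home timeline via the Mastodon API."""
    resp = requests.get(
        f"{INSTANCE}/api/v1/timelines/home",
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            # An identifiable User-Agent, analogous to Zeitgeist/1.0
            "User-Agent": "ExampleSummarizer/0.1",
        },
        params={"limit": limit},
    )
    resp.raise_for_status()
    return resp.json()

def build_summary_prompt(statuses):
    """Collect post text; a real client would send this prompt to an LLM."""
    text = "\n\n".join(status.get("content", "") for status in statuses)
    return "Summarize the last 24 hours of posts:\n\n" + text
```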

Many replies to the announcement were critical of the project, taking issue with the lack of consent and with being subjected to AI surveillance. Voss was dismissive of those concerns, referring to people complaining about the absence of consent mechanisms as “tedious bastards,” and began blocking people who replied in the thread.

Voss was informed of an existing precedent for people to tag their bios in order to opt out, and he subsequently added an opt-out for people who do not want their posts indexed. Initially Zeitgeist indexed everything, including DMs and followers-only posts, but DMs are now filtered out. Requests from the client also carry unique “User Agent” metadata, which could in theory be used to block them at the instance level, if we chose to take that route.

I’d like to start a discussion here about how the cooperative would like to handle this, and other projects that trawl content and process posts using LLMs. Setting aside the chaotic launch of this one project, do you feel okay with AI systems harvesting posts and training LLMs?

Some related essays that might be helpful for thinking this through:

Dan Phiffer · Thu 2 Apr 2026 2:43PM

I have added an opt-out hashtag and I'll be curious to hear what you think of Zeitgeist (and perhaps to confirm that my posts don't appear in your summaries).

> If you think this is problematic, let me know, but I'd like to understand what the actual alternative is here.

Yes, I do think it's problematic that you seem uncertain whether the people you follow are consenting to your use of their posts through this client. But I don't think you should leave the coop over it. I would miss you!

I wonder if you'd feel okay with tagging your bio the same way I just did? #LLMClient perhaps?

Flancian · Thu 2 Apr 2026 2:53PM

@Dan Phiffer thank you, I think it's likely I'll leave given how things seem to be going, but we can remain in touch and I wish you well! I just dislike enclosure by committee, left-authoritarianism, time wasting, and people assuming and expressing bad faith -- and I've seen too much of this over the years, even as I was putting in hours of work for the coop in preference to spending time on my personal projects or alongside more like-minded people. At some point it's just rational to move on to greener and more progressive (from my PoV) pastures, and leave the instance to people who want it to be a different thing. I will probably write a retrospective in any case, and do it in an orderly way, making sure the TWG in particular has everything they need.

Dan Phiffer · Thu 2 Apr 2026 2:58PM

This is surprising and sad to hear. I'd like to remind you that my very first encounter with this subject, minutes after reading about it as the on-call mod, was me offering to meet synchronously to talk things through. That offer was not accepted, and now I wonder if you regard it as having been made in bad faith?

Flancian · Thu 2 Apr 2026 3:00PM

@Dan Phiffer No, I have no doubt about your good faith! Thank you for the offer. I did not decline it, I said I was open to it? We can talk in the next TWG sync as well.

I've added this to my profile: "#searchable, occasionally #llmassisted." -- the former is for Tootfinder.ch. I settled on #llmassisted because it seems more generic while still clear enough, and it can also cover things like "I'm using LLMs to write some posts"? Which I'm not, but I think people might occasionally want to do that?

Dan Phiffer · Thu 2 Apr 2026 3:04PM

> I did not decline it, I said I was open to it?

@Flancian apologies, I'm looking at the thread (on Zulip not here) and still not seeing your message. It seems like a missed opportunity.

EDIT: I found the message, this one was on me.

Flancian · Thu 2 Apr 2026 4:46PM

@Dan Phiffer no problem!

After reviewing https://github.com/seldo/zeitgeist/commit/90b09ad6f683d7059ef4a5c4a3883704a7e71057 and seeing that #nobots and #noai are supported as opt-out tags, I'm thinking of just having #bots and/or #ai in my profile to signal the opposite (endorsement/opt-in/likelihood of using these technologies).
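
For anyone curious what that opt-out looks like mechanically, here is an illustrative sketch (not the code from the commit above): before indexing an account's posts, the client checks the bio text for the recognized opt-out tags. The function name is mine and the matching is deliberately naive.

```python
# Illustrative sketch of a bio-tag opt-out check; not Zeitgeist's actual code.
OPT_OUT_TAGS = ("#nobots", "#noai")  # the tags supported per the commit above

def is_opted_out(bio: str) -> bool:
    """True if the account's bio contains a recognized opt-out tag.

    Naive, case-insensitive substring match; a real implementation would
    probably parse the profile's hashtags properly.
    """
    bio = bio.lower()
    return any(tag in bio for tag in OPT_OUT_TAGS)

# Examples:
# is_opted_out("Coder, cat person. #NoBots")              -> True
# is_opted_out("#searchable, occasionally #llmassisted.") -> False
```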

Sieva · Wed 1 Apr 2026 6:39PM

@flancian a couple of thoughts:
- we should be concerned about the tool being used on our data, but perhaps not about a specific developer account
- personally, I'd like this person to remain blocked, if not at the instance level, then at least at the level of my account (currently not possible because social.coop blocks him). If we unblock him, **I'd really like to know about that when it happens.**

Flancian · Thu 2 Apr 2026 12:12AM

@Sieva I think that makes sense; to be clear my understanding is that if we move to Limit you will not be able to interact with them without seeing a warning, and if they follow you that would show up as a follow request. From the Mastodon docs on limiting at https://docs.joinmastodon.org/admin/moderation/#limit-user:

"Previously known as “silencing”. A limited account is hidden from all other users on that instance, except for its followers. All of the content is still there, and it can still be found via search, mentions, and following, but the content is invisible publicly. Notifications about activities from limited accounts will be handled according to account-level notification preferences (which default to “filter” for limited accounts).

If a limited account attempts to follow a user on that instance, the follow is converted into a follow request.

At this moment, limit does not affect federation. A locally limited account is not limited automatically on other servers. Account limitations are reversible."

So if I have this right, limiting would probably be as safe as suspend for most users? And it would still allow others to opt in to interacting with this account.

Sieva · Wed 1 Apr 2026 6:22PM

I would prefer blocking requests with User-Agent=Zeitgeist/1.0.

Flancian · Wed 1 Apr 2026 8:37PM

@Sieva that makes sense, but note that it'd be hard to do this for only some users (the TWG would have to think about it, I believe, @Dan Phiffer @Calix @Ammar); it would be more like an instance-wide ban (similar to this suspension). I would personally find this unsatisfactory because it means my Fediverse experience would break any time I wanted to interact with someone who is using this client.

Dan Phiffer · Wed 1 Apr 2026 9:02PM

Yeah, to my knowledge this isn't something we've done before, blocking based on User-Agent. I bet we could figure it out though.
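
For concreteness, here is a minimal sketch of what User-Agent filtering could look like, written as a Python WSGI middleware purely for illustration. On our actual Mastodon deployment the check would more likely live in the reverse proxy in front of the Rails app, but the logic is the same; the block list and class name are placeholders.

```python
BLOCKED_AGENT_FRAGMENTS = ("Zeitgeist/1.0",)  # hypothetical instance block list

class UserAgentFilter:
    """WSGI middleware that rejects requests from blocked clients."""

    def __init__(self, app, blocked=BLOCKED_AGENT_FRAGMENTS):
        self.app = app
        self.blocked = blocked

    def __call__(self, environ, start_response):
        user_agent = environ.get("HTTP_USER_AGENT", "")
        if any(fragment in user_agent for fragment in self.blocked):
            # Turn the request away before it ever reaches the application.
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Client blocked by instance policy.\n"]
        return self.app(environ, start_response)
```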

Kris Warner · Wed 1 Apr 2026 6:31PM

> Setting aside the chaotic launch of this one project, do you feel okay with AI systems harvesting posts and training LLMs?

No.

Mikelo · Wed 1 Apr 2026 9:47PM

Any use of a user’s data, LLM or otherwise, should be opt-in. Anything less is unacceptable.

I don’t consider myself a tedious bastard. I do think we as a cooperative should always protect ourselves against the entitled actions of those like Laurie Voss.

Dave V. ND9JR · Wed 1 Apr 2026 11:08PM

I'll go one step further than saying I don't consent to LLMs using my posts: doing so constitutes what I see as "altering the deal." When I started posting, I never imagined that my posts would be fed to such things, and now we have someone doing exactly that.

Also, what is it with certain people here defending people/organizations who are clearly bad actors? This happened with Facebook/Meta and the discussion about Threads and now I'm seeing it again.

Flancian · Thu 2 Apr 2026 12:07AM

@Dave V. ND9JR  that's probably me 😅 I just disagree with the assertion that these people and entities are necessarily "clearly bad actors" as a whole/in totality, so I try to err on the side of being able to communicate with them until proven wrong. In the Threads case, I made the point that it was better if people in the Fediverse could talk to "good" users on Threads and vice versa; in this case, I'm making the point that this person is worth talking to for at least some of us, as are other potential users of this new client, so I don't think it's ideal if Social.coop blanket-bans everybody, effectively removing them from the Fediverse.

I think it's reasonable for people to want to talk to others, even if you in particular see no point to it -- we're different people even though we share an instance. Strength in diversity and such?

Dave V. ND9JR · Thu 2 Apr 2026 12:52AM

@Flancian Yes, it's you. Except this isn't just about "talking to others." It's about allowing an instance that's clearly using people's posts without their consent to feed LLMs (which themselves have a history of slurping up all sorts of data without people's consent, including violating copyright), led by someone who, when told "We don't consent to this," basically said, "Your consent doesn't matter; I'm doing it anyway." How that doesn't constitute a bad actor I don't know, but it certainly does to me.

Flancian · Thu 2 Apr 2026 1:10AM

@Dave V. ND9JR one correction -- I think the discussion so far is not about allowing or disallowing an instance. The thing being developed is a client, not an instance, and the current suspension is only against the developer of the tool (a single account).

I think I understand your position, it's just that I consent to people using this tool because I don't care about my posts going to LLMs, and I think people blocking each other seems good enough if they want to signal they don't want to interact for any reason, including the other person being more pro-LLM or anti-copyright or whatever than them. 

The issue here is perhaps that you and I have different preferences but are trying to share an instance, and there's no effective per-account mechanism for this "setting". So I'm asking people not to prevent me and others from setting it the way we want, just as I don't want to prevent you from doing the same. I think we should essentially try to respect each other's preferences instead of jumping to telling each other "this is the way and every other way is wrong", which is what an instance-wide suspension, or a blanket block based on User-Agent, would be.

Dave V. ND9JR · Thu 2 Apr 2026 1:23AM

@Flancian  No worries there; I'll be leaving this instance in about a week.

Flancian · Thu 2 Apr 2026 10:29AM

@Dave V. ND9JR oh! But why, if you want to share? Where are you going? A single-user instance or a different community?

Dave V. ND9JR · Thu 2 Apr 2026 4:21PM

@Flancian I refuse to share until you state your motive for asking these questions.

Flancian · Thu 2 Apr 2026 5:34PM

@Dave V. ND9JR I was just curious, other people have left recently and I was also considering moving instances. But your tone makes it clear this is no friendly exchange, so disengaging now. Good luck.

Benji Mauer · Wed 1 Apr 2026 11:49PM

There seem to be two threads of conversation here:
1. The risk that if a social.coop member signs in to Zeitgeist, the OTHER party to that member's DMs would have their content compromised. In other words, if Dave and Anna have a DM thread on social.coop, and Dave signs up for Zeitgeist or a similarly poorly architected client without Anna's consent, that client could read Anna's DMs or private replies to Dave (see the sketch after this list). This seems genuinely concerning, though it could be true of ANY poorly designed client, whether it uses AI or not. I would like to prevent the general case of this if possible, though I'm not sure it's practical unless we have an allow-list of clients we all agree to.
2. The risk that LLMs will use public posts for training. I understand people's concerns, but for those of us posting on the open web, hoovering by all manner of parties seems inevitable at this point. We can certainly use robots.txt directives or any other signals that indicate our desire for LLMs not to train on our posts, but the only thing that will actually stop it, given the rampant violations of consent AI companies are currently engaged in, is keeping your social network in Signal threads with disappearing messages. I think we should do everything practical to prevent LLM crawling, but I certainly don't expect my posts won't be used for training, whatever we tell bots to do or not do.
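
To make the first risk concrete, here is a small sketch (placeholder instance and token, not any real client's code): once one party authorizes a client with the usual read scope, that client can pull the user's direct-message conversations over the Mastodon API, and those conversations include the other participants' messages even though they never authorized anything.

```python
import requests

INSTANCE = "https://social.coop"                  # placeholder
ACCESS_TOKEN = "token-granted-by-one-party-only"  # placeholder

# Fetch the authorizing user's direct-message conversations.
resp = requests.get(
    f"{INSTANCE}/api/v1/conversations",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()

for convo in resp.json():
    last = convo.get("last_status") or {}
    # The other participants never consented to this client, but their side
    # of the conversation is visible to it all the same.
    participants = [account["acct"] for account in convo["accounts"]]
    print(participants, "->", last.get("content", ""))
```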

Benji Mauer · Wed 1 Apr 2026 11:57PM

Personally, I don't care about blocking Laurie Voss. I believe I followed Laurie, but I don't know, because they're... blocked now. I don't particularly want my access to people I follow cut off as "punishment" for something that doesn't impact me: I didn't sign up for Zeitgeist and never will, because it's a bad idea poorly executed.

I support @Flancian's proposal:

> My proposal is that we should remove this suspend, perhaps downgrading to a limit if people have significant concerns with this user (I don't). Limit removes their posts from timelines and makes the profile harder to access (IIUC) but crucially allows people who are following the user to continue to interact with them.

Because it seems reasonable and right-sized to the actual risk, and I'd prefer to be able to interact with and see what this (apparent) charlatan is up to.

Luke Thorburn · Thu 2 Apr 2026 1:10AM

My 2c:
- There are lots of different ways in which people's data can come into contact with an LLM, and I think it's important to distinguish between them. E.g., inference vs training, local vs remote-hosted LLMs, various forms of contractual use and privacy commitments, various forms of hardware-based privacy guarantees. I believe these are all relevant to making policy decisions (i.e., at least some combinations of these should be allowed).
- LLMs have huge potential to give people more agency over their attention. E.g., people who find certain content triggering or bad for their mental health can create bespoke interfaces to modify that content or filter it out. I was involved in this paper, which makes this argument in much more detail if you're interested.
- I hope we can find a policy that allows for such use cases — e.g., inference-only, no training, sufficient privacy guarantees.

(Not endorsing or commenting on Laurie's behaviour or how his specific app was designed.)

Danyl Strype · Thu 2 Apr 2026 4:01AM

Kia ora koutou, this is just the beginning of the gold rush of vibe-coded apps and other uses of generative models. For now there is a moratorium on listing fediverse-related software on fediverse.party if it makes any use of this technology. We're keeping a watchlist for this purpose; please let me know about any gaps in the list.

Billy Smith · Thu 2 Apr 2026 7:16AM

I had missed most of this, but it's an example of a simple principle.

Is there consent?

Yes or No?

If the question is not even being asked, then block by default.

Luke Opperman · Thu 2 Apr 2026 1:54PM

  1. My sentiment, currently well-represented in others' comments, is broadly and deeply anti-LLM and pro-consent, in line with our corner of the fediverse's norms against projects that scrape content or misinterpret "it's public" as consent.

  2. But this case is complicated in terms of response because it is not a scraping project but a client acting on behalf of a (presumed) user of social.coop. I'm broadly in favor of folks developing and using clients of their choice. Some of those clients will have flaws (in intent or in implementation, perhaps more so for vibe-coded ones) that e.g. expose DMs, or will be written by people whose interactions get them blocked by our instance - yet the client is still acting as, and authorized by, a member in good standing who is trusting it to e.g. summarize or prioritize their feed. The central question I have is how we might regulate this.

  3. There's a totally separate (agreed by all involved?) concern raised here about CWG moderation of the developer's account, which I'll leave to the CWG to receive feedback on and address.

Blocking User-Agents that we agree are risks, or not in line with our values, would need to be both a new technical capability and an instance-wide policy. It would also entail vetting and moderating/reviewing the use of particular clients - banning any client whose source contains an AGENTS.md is one possible policy, banning clients with known security or value deficits is another - but either carries a significant tech/policy workload to maintain such a list while keeping some level of open access for new clients. This case might indicate the start of a desire/direction from the membership, but despite being opposed to such tech I'm not sure I would vote to undertake the work to exclude it. Uncomfortable.

Stéphane Klein · Thu 2 Apr 2026 4:34PM

My position on this is based on a distinction I think is worth making:

I accept that anything I publish publicly on the web can be read and processed by an LLM — this was already true before LLMs existed, with search engines, RSS aggregators, archive.org, etc. If I post publicly, I lose control of the data. That's the deal.

However, I draw the line at the nature of the output:

  • I'm fine with my public posts being processed by an LLM whose output remains private (e.g. someone summarizing their own timeline for personal use).
  • I would want an opt-in mechanism for cases where my posts are used to produce a public artifact — a published summary, a generated article, anything redistributed publicly that derives from my words.

On private messages, my position is simpler: I am opposed by default to any private message being sent to an external service, whether it involves an LLM or not. This should require explicit opt-in from all parties involved in the conversation, not just the person using the client.

The concern with Zeitgeist, for me, is less about LLM access to public data and more about these two boundaries: the public/private nature of the output, and the protection of private messages from external processing.

Aaron GK · Fri 3 Apr 2026 12:47AM

I don't know that the user has violated our code of conduct or federation policy so much as gone and done something that many of us are uneasy with, myself included, but that we don't have an articulated and agreed upon policy framework for. Because of this, at this point, I don't think suspending the user is a good thing.

That said, I think some of their behavior is incredibly obnoxious and presumptuous. For one thing, this person has basically come out and put themself in a position to build tools that will affect the direction of the fediverse. So, speaking of whether or not "it's public" equals consent, this person put themself in a public position. To then block users who criticized them is a bad-faith move. Yes, generally blocking people you don't care to interact with is desirable, but we're not just talking about a run-of-the-mill clash of personalities. This is the one thing that gave me pause on whether this person violated our policies.

Nonetheless, I think the way to handle this is to start thinking about and discussing clear policies around AI that reflect our values.

This from @Stéphane Klein resonates with me strongly:

> However, I draw the line at the nature of the output:
>
>   • I'm fine with my public posts being processed by an LLM whose output remains private (e.g. someone summarizing their own timeline for personal use).
>
>   • I would want an opt-in mechanism for cases where my posts are used to produce a public artifact — a published summary, a generated article, anything redistributed publicly that derives from my words.

Though I still am uneasy with the first bullet point too.

Our Federation policy does state that bot accounts following users with #NoBot in their profile is cause for suspension; however, I don't have a problem with bot accounts following me. Some of them are dumb and obnoxious and I can just block them individually. I do have a problem with AIs following me. Can we make #NoAI a thing to express that we don't consent to AI accounts following us? And still, a user using AI such as this Zeitgeist thing doesn't necessarily make their account an "AI" account in the same way that bot accounts are bot accounts.