Zeitgeist and other Fediverse LLMs
Last week Laurie Voss announced that he’d released a new Mastodon client called Zeitgeist.blue, “a multi-social-network app that summarizes your feed for the last 24 hours.” As a Mastodon client, it appears to authenticate a given user with an existing instance (potentially including social.coop) and, using Anthropic or GitHub Copilot, processes that user’s timelines to produce summaries with a large language model (LLM).
Many replies to the announcement were critical of the project, taking issue with the lack of consent and with being subjected to AI surveillance. Voss was dismissive of those concerns, referring to people complaining about the lack of consent mechanisms as “tedious bastards,” and began blocking people replying in the thread.
Voss was informed of an existing precedent for people to tag their bios in order to opt out, and subsequently added an opt-out for people who do not want their posts indexed. Initially Zeitgeist indexed everything, including DMs and follower-only posts, but DMs are now filtered out. There is also unique “User Agent” metadata that accompanies requests, which could in theory be used to block requests at the instance level, if we chose to take that route.
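For illustration, instance-level blocking on that metadata could look like the minimal sketch below. Note the `"Zeitgeist"` substring is an assumption for the example; the actual User-Agent value would need to be confirmed from the instance's request logs before deploying anything like this.

```python
# Minimal sketch of instance-side User-Agent filtering.
# ASSUMPTION: "Zeitgeist" is a placeholder substring; the real
# User-Agent string should be confirmed from server logs first.
BLOCKED_UA_SUBSTRINGS = ("Zeitgeist",)

def should_block(headers: dict) -> bool:
    """Return True if the request's User-Agent matches a blocked client."""
    ua = headers.get("User-Agent", "")
    return any(s.lower() in ua.lower() for s in BLOCKED_UA_SUBSTRINGS)
```

In practice an admin would more likely do this in the reverse proxy in front of Mastodon rather than in application code, but the matching logic is the same.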
I’d like to start a discussion here of how the cooperative would like to handle this, or other projects that engage in trawling content and processing posts using LLMs. Setting aside the chaotic launch of this one project, do you feel okay with AI systems harvesting posts and training LLMs?
Some related essays that might be helpful for thinking this through:
Josh Davis · Wed 1 Apr 2026 5:47PM
I am definitely not ok with having our posts scraped for LLMs. If we can opt-out for the whole instance, I'd support that. (and scraping DMs should be illegal, imho - it's like going through people's mail).
Flancian · Wed 1 Apr 2026 8:35PM
@Josh Davis scraping DMs does sound potentially problematic, but DMs only exist between people who interact with each other in the first place. If people don't trust person X, they "just" have to not send them any messages for those messages never to reach an LLM through person X. So overall I don't yet understand why people would object to this in particular or require a further explicit opt-out, although I'm glad that opt-out in both directions seems to have been added, as reported by Dan (who did great research writing this up). Thank you!
Flancian · Wed 1 Apr 2026 5:53PM
This is great, thank you so much Dan for posting this and getting discussion started!
One missing data point complementing the above: the CWG has suspended the user, so their profile and posts are currently unreachable if you try to access them with your Social.coop account. This is also true if you try to search for their profile; as long as you're using Social.coop to interact with the Fediverse, it is as if they don't exist. You need to read any posts directly on their instance (thanks Dan for the links), or use some other Fediverse server to interact with Laurie.
FWIW, this is how I see things:
The developer stepped into a hornet's nest and was insensitive about concerns raised by users. I think calling people "tedious bastards" is a bit out of line, but not terrible -- if you read the whole thread, there were definitely people who were acting aggressively and in an entitled way. I wouldn't use those words, but I understand how the sentiment could arise and be suboptimally expressed.
The developer blocked users, which IMHO is completely acceptable behavior. There is nothing in our Code of Conduct or indeed in any other CoC I'm aware of against blocking. If anything blocking and moving on is healthy behavior in social networks, as long as it's not somehow weaponized.
The developer has since then corrected their stance somewhat and otherwise seems to be behaving like a balanced/non-dangerous entity (based on their posts since and before).
All in all, my position (which I've already told the CWG about) is that suspending this user instance-wide is too heavy-handed, since it achieves very little in the way of protecting users against the perceived threat (the developer is developing a client, which presumably might have many users in the future, not only him) and it limits personal freedom significantly.
I checked server stats and 44 out of ~1.4k active Social.coop users are following Laurie; unless the suspension is reverted, these connections will be severed soon. What's more, all of this happens silently by default given how Mastodon works. So the TLDR is that every time we (meaning the CWG) suspend a user like this, they drop off the face of the earth and nobody who was interested in hearing from them will even know about it by default. I find it overly heavy-handed and honestly borderline Kafkaesque.
My proposal is that we should remove this suspend, perhaps downgrading to a limit if people have significant concerns with this user (I don't). Limit removes their posts from timelines and makes the profile harder to access (IIUC) but crucially allows people who are following the user to continue to interact with them.
More generally, and to the point of the thread (which is about the Fediverse and LLMs in general), I would posit that limit instead of suspend is a better way to manage other similar perceived "risks" with LLM interactions in the future, if people are worried enough that they think Social.coop should blanket-ban accounts and instances making use of AI.
Thanks for reading, and for working together on making Social.coop better and safer while respecting personal freedoms!
Dan Phiffer · Wed 1 Apr 2026 6:31PM
@Flancian just to add a little more specificity, based on other comms we've had elsewhere:
as I understand it, April 24 is when, due to the suspension, connections to Laurie Voss's account get severed.
members of the CWG have heard your position on the suspension and we've offered to discuss it at our next meeting, on April 17.
I'm curious how this part would work:
limit instead of suspend is a better way to manage other similar perceived "risks" with LLM interactions in the future if people are worried enough that they think Social.coop should blanket-ban accounts and instances making use of AI.
If you authenticate your account, @flancian@social.coop, to Zeitgeist, the fact that we follow each other means my posts will get processed by the LLM unless I know to opt out, and go through with modifying my bio.
I'm not sure how a Limit would work here. I don't think there's any way for the instance to know who has auth'd their accounts to this client, and I don't feel like Limit is a good way to handle those cases.
Do you feel like the consent model as I've described above is acceptable? Have I consented to having my data processed by an LLM in this case if I haven't opted out, but you've chosen to use that client?
Flancian · Thu 2 Apr 2026 12:40AM
@Dan Phiffer yes, please let me know what you decide in two weeks when the CWG has the time to discuss this question! As you know I think it would be better not to keep this account suspended for that long given that a Limit is IMHO equally good for any safety concerns that people might have, but I see that ship seems to have sailed and the interests of users who are interested in continuing to interact have been discounted enough that there is no intention of doing so currently.
About me hypothetically signing into Zeitgeist: are we on the same page that this scenario is mostly independent of the discussion about Suspend vs Limit? That is, both would be ineffective if what Social.coop members want is to prevent anyone from interacting with them through an LLM-enabled client of any kind, since the instance has no oracle-like information about which users are using which clients.
I don't see any way of accomplishing that -- modulo only allowing "trusted" clients, like the Mastodon official interface and an allowlist of iOS and Android clients that have somehow committed to not supporting AI or have implemented some consent protocol that doesn't exist yet (and is slightly less likely to come into existence each day interested people literally can't talk to Fediverse developers anymore because they're working in the space). Even with an allowlist approach, as you know, anybody can scrape our posts or copy/paste anything into an LLM as long as the posts are public.
I'm not conservative like this and just want to be able to talk to people with whichever tools they need or want to use, so I'm fine with them using an LLM for that, and conversely I think it's reasonable to ask that people do not unduly limit other people's freedom to use whichever shape of Fediverse client they like with their friends.
I'd suggest people who don't want their posts to go into LLMs either:
1. Ask people not to follow them if they're using an LLM-enabled client, add non-consent tags to their bio à la #nobot, and then try to get the tags supported by mainstream clients/codified as a FEP and wait for these to work; or
2. Consider making their accounts private.
Anything else seems performative more than useful, but I might be missing something?
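As a sketch of how an LLM-enabled client could honor such bio tags, it could check a profile's bio (the `note` field in Mastodon's Account entity) before processing any posts. The tag list below (#nobot, #noai, #nollm) is an assumption drawn from informal community conventions; as the thread notes, no FEP standardizes these yet.

```python
import re

# ASSUMPTION: informal opt-out hashtags seen in Fediverse bios;
# there is no FEP standardizing these, so the list is illustrative.
OPT_OUT_TAGS = {"#nobot", "#noai", "#nollm"}

def has_opted_out(bio: str) -> bool:
    """Return True if the bio text contains any known opt-out hashtag."""
    tags = {t.lower() for t in re.findall(r"#\w+", bio)}
    return not tags.isdisjoint(OPT_OUT_TAGS)
```

A client like Zeitgeist could call this on each followed account's bio and skip posts from anyone who has opted out; the limitation, as discussed above, is that authors must know the convention exists in order to use it.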
Dan Phiffer · Thu 2 Apr 2026 10:49AM
@Dan Phiffer yes, please let me know what you decide in two weeks
We will certainly let you know.
About me hypothetically signing into Zeitgeist: are we on the same page that this scenario is mostly independent of the discussion about Suspend vs Limit?
It is a separate discussion.
I would like to know, less hypothetically, as someone who is mutually connected to you on the fediverse: have you signed into Zeitgeist using your social.coop account?
I'll quote my question to you again; I don't believe you've answered it yet.
Do you feel like the consent model as I've described above is acceptable? Have I consented to having my data processed by an LLM in this case if I haven't opted out, but you've chosen to use that client?
Flancian · Thu 2 Apr 2026 11:07AM
@Dan Phiffer nope, I haven't!
Do you think it would be plain out wrong if Social.coop users did sign into it? If so, maybe this should be posted as an announcement at some point?
About consent: I'm unsure. I think up to now the assumption in the Fediverse seemed to be that anybody could sign into instances using whichever client they preferred, regardless of who coded it, and do whatever they wanted with the posts they read this way, including copy/pasting them into any other app, or printing them, rolling them up, and smoking them. So all things considered, assuming people might use LLM inference to summarize or distill/rank posts in their personal workflows seems roughly in line with my past expectations of what "fair use" is. I understand people feel strongly about LLMs, and about posts being used for training in particular; but that's also potentially a separate question. My understanding is that posts fed into LLMs for inference via an API key will not actually end up in RLHF or training sets, but that would be worth researching further.
Dan Phiffer · Thu 2 Apr 2026 1:33PM
@Flancian printing my toot and smoking it is a funny example, and it is different from feeding the data to an LLM system in terms of its implications for me, the author.
As far as I can tell, Fair Use is an American legal term, and I wasn't asking in a legal sense. I meant it as more of a personal ethical question. The fact that you haven't auth'd to Zeitgeist yet signals, to me, that you might understand how that act could be seen as a transgression by the people you follow.
I get that there's a difference between inference and training large language models, and my guess is that whatever promises are made by any given hyperscaler could vary depending on the specific API key used and be subject to change over time.
Flancian · Thu 2 Apr 2026 2:31PM
@Dan Phiffer true, I meant fair use in an idiosyncratic and non-US sense, as in: I think it's fair enough by my and the internet's historical standards, and I think people who want to restrict interactions further should probably undertake the work to establish and implement the protocols needed to do this without imposing too many limitations on other people's personal freedoms.
Potentially flipping the Fediverse into "only vetted clients can access" is reminiscent of the "enable authorized fetch" discussion, and of course also of the earlier discussions around Threads.net, bridges, and scrapers. In general, and this has been brought up before and by other people, Social.coop seems to be having "same shaped" discussions once or twice a year between the "group safety first" and "personal freedom first" crowds, in a way that makes me think the tension is hard to resolve in a way satisfactory to both.
Finally, I haven't signed into Zeitgeist yet but I have it on my todo list as I want to see how it works and which mechanisms the author has implemented. I probably would get to it this weekend by default. If you think this is problematic, let me know, but I'd like to understand what the actual alternative is here. Moving to a different instance seems of course to be one; but would announcing in advance that I plan to sign into Zeitgeist work? If so, how much in advance and how often? Note that even if I move to a different instance I would be porting most of my following/followers, so it wouldn't make much difference w.r.t. the exposure of Social.coop users to the alleged menace of LLM inference by itself.
M. Page-Lieberman - @jotaemei@social.coop · Wed 1 Apr 2026 5:39PM
Thank you for bringing this to our attention, @Dan Phiffer. I am uninformed on the different approaches to handling situations like this one, and will be reading the texts you provided (and a few I bookmarked years ago but forgot to get back to ...) before making any suggestions. In the meantime, I am looking forward to hearing suggestions from others. Blessings.
Edit: I will say, though, that this initially reminds me of the guy with the bridge to other social media platforms. I am wondering if we should come up with an approach like a rubric of our agreed-upon rules for access by tools and bots to social.coop, along with our beliefs on how developers should approach things from the beginning of their projects. We would communicate it publicly and could offer it to that entity, which I do not remember the name of, but that prevented the Nazis from being able to federate with most instances. Such rubrics might help us quickly dispense with these matters in the future. They could take the form that proposed tools must provide for A, B, C, etc., and be developed in a spirit of respect for users' desires for X, Y, Z, where one of those letters could be privacy or explicit (as in opt-in) consent.
It might even serve to signal to developers early on what is expected of them by a significant portion of the platform's users before they start creating their bots and tools, and what kinds of responses to expect from users on the platform if they are not mindful of these concepts before writing their first line of code.