Zeitgeist and other Fediverse LLMs
Last week Laurie Voss announced that he’d released a new Mastodon client called Zeitgeist.blue, “a multi-social-network app that summarizes your feed for the last 24 hours.” As a Mastodon client, it appears to authenticate a given user with an existing instance (including, potentially, social.coop) and, using Anthropic or GitHub Copilot, processes that user’s timeline to produce a summary with a large language model (LLM).
Many replies to the announcement were critical of the project, taking issue with the lack of consent and with being subjected to AI surveillance. Voss was dismissive of those concerns, referred to people complaining about the lack of consent mechanisms as “tedious bastards,” and began blocking people who replied in the thread.
Voss was informed of an existing precedent of people tagging their bios to opt out, and subsequently added an opt-out for people who do not want their posts indexed. Initially Zeitgeist indexed everything, including DMs and follower-only posts, but DMs are now filtered out. Requests from the client also carry distinctive “User-Agent” metadata, which could theoretically be used to block them at the instance level, if we chose to take that route.
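For context, the bio-tagging convention works by having clients check a profile’s bio text for an opt-out marker before processing that account’s posts; `#nobot` is a common Fediverse convention, though I’m assuming here that it (or a similar tag) is what Zeitgeist honors. A minimal sketch of such a check:

```python
import re

# Opt-out markers a well-behaved client might honor before feeding an
# account's posts to an LLM. "#nobot" is a common Fediverse convention;
# "#noai" is a hypothetical addition for illustration.
OPT_OUT_TAGS = {"#nobot", "#noai"}

def has_opted_out(bio: str) -> bool:
    """Return True if the profile bio contains an opt-out tag."""
    tags = {t.lower() for t in re.findall(r"#\w+", bio)}
    return bool(tags & OPT_OUT_TAGS)
```

Note that this model puts the entire burden on users to know the convention exists, which is exactly the consent problem raised in the thread.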
I’d like to start a discussion here of how the cooperative would like to handle this, or other projects that engage in trawling content and processing posts using LLMs. Setting aside the chaotic launch of this one project, do you feel okay with AI systems harvesting posts and training LLMs?
Some related essays that might be helpful for thinking this through:
Josh Davis · Wed 1 Apr 2026 5:47PM
I am definitely not ok with having our posts scraped for LLMs. If we can opt-out for the whole instance, I'd support that. (and scraping DMs should be illegal, imho - it's like going through people's mail).
Flancian · Wed 1 Apr 2026 5:53PM
This is great, thank you so much Dan for posting this and getting discussion started!
One missing data point complementing the above: the CWG has suspended the user, so their profile and posts are currently unreachable if you try to access them with your Social.coop account. This is also true if you try to search for their profile; as long as you're using Social.coop to interact with the Fediverse, it is as if they don't exist. You need to read their posts directly on their instance (thanks Dan for the links), or use some other Fediverse server to interact with Laurie.
FWIW, this is how I see things:
The developer stepped into a hornet's nest and was insensitive about concerns raised by users. I think calling people "tedious bastards" is a bit out of line, but not terrible -- if you read the whole thread, there were definitely people who were acting aggressively and in an entitled way. I wouldn't use those words, but I understand how the sentiment could arise and be suboptimally expressed.
The developer blocked users, which IMHO is completely acceptable behavior. There is nothing in our Code of Conduct or indeed in any other CoC I'm aware of against blocking. If anything blocking and moving on is healthy behavior in social networks, as long as it's not somehow weaponized.
The developer has since then corrected their stance somewhat and otherwise seems to be behaving like a balanced/non-dangerous entity (based on their posts since and before).
All in all, my position (which I've already told the CWG about) is that suspending this user instance-wide is too heavy-handed, since it achieves very little in the way of protecting users against the perceived threat (the developer is developing a client, which presumably might have many users in the future, not only him) and it limits personal freedom significantly.
I checked server stats and 44 out of ~1.4k active Social.coop users are following Laurie, and unless the suspension is reverted these connections will be severed soon. What is more, all of this happens silently by default given how Mastodon works. So the TLDR is that every time we (meaning the CWG) suspend a user like this, they drop off the face of the earth and nobody who was interested in hearing from them will even know about it by default. I find it overly heavy-handed and honestly borderline Kafkaesque.
My proposal is that we should remove this suspend, perhaps downgrading to a limit if people have significant concerns with this user (I don't). Limit removes their posts from timelines and makes the profile harder to access (IIUC) but crucially allows people who are following the user to continue to interact with them.
More generally and to the point of the thread (which is about the Fediverse and LLMs in general), I would posit that limit instead of suspend is a better way to manage other similar perceived "risks" with LLM interactions in the future if people are worried enough that they think Social.coop should blanket-ban accounts and instances making use of AI.
Thanks for reading, and for working together on making Social.coop better and safer while respecting personal freedoms!
Dan Phiffer · Wed 1 Apr 2026 6:31PM
@Flancian just to add a little more specificity, based on other comms we've had elsewhere:
- As I understand it, April 24 is when, as a result of the suspension, connections to Laurie Voss's account get severed.
- Members of the CWG have heard your position on the suspension, and we've offered to discuss it at our next meeting, on April 17.
I'm curious how this part would work:
limit instead of suspend is a better way to manage other similar perceived "risks" with LLMs interactions in the future if people are worried enough that they think Social.coop should blanket-ban accounts and instances making use of AI.
If you authenticate your account, @flancian@social.coop, to Zeitgeist, the fact that we follow each other means my posts will get processed by the LLM unless I know to opt out and go through with modifying my bio.
I'm not sure how a Limit would work here. I don't think there's any way for the instance to know who has auth'd their accounts to this client, and I don't feel like Limit is a good way to handle those cases.
Do you feel like the consent model as I've described above is acceptable? Have I consented to having my data processed by an LLM in this case if I haven't opted out, but you've chosen to use that client?
Sieva · Wed 1 Apr 2026 6:39PM
@flancian a couple of thoughts:
- we should be concerned about the tool usage against our data, but perhaps not a specific developer account
- personally, I'd like this person to remain blocked, if not at the instance level, then at least at the level of my account (currently not possible because social.coop blocks him). If we unblock him, I'd really like to know about that when it happens.
Sieva · Wed 1 Apr 2026 6:22PM
I would prefer blocking requests with User-Agent=Zeitgeist/1.0.
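Mechanically, an instance-level block like this would match the User-Agent header at the reverse proxy or in application middleware. A rough sketch of the matching logic; the exact `Zeitgeist/1.0` string comes from this thread, and matching on the `Zeitgeist/` prefix (to cover future versions) is my assumption:

```python
# User-Agent prefixes to reject; matching the prefix rather than the exact
# "Zeitgeist/1.0" string also covers future client versions.
BLOCKED_AGENT_PREFIXES = ("Zeitgeist/",)

def should_block(headers: dict) -> bool:
    """Return True if the request's User-Agent matches a blocked client."""
    ua = headers.get("User-Agent", "")
    return ua.startswith(BLOCKED_AGENT_PREFIXES)
```

In practice this would likely live in the instance's nginx config or Rack middleware rather than standalone code, and a client can always change its User-Agent string, so this is best-effort filtering, not enforcement.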
Kris Warner · Wed 1 Apr 2026 6:31PM
Setting aside the chaotic launch of this one project, do you feel okay with AI systems harvesting posts and training LLMs?
No.
M. Page-Lieberman - @jotaemei@social.coop · Wed 1 Apr 2026 5:39PM
Thank you for bringing this to our attention, @Dan Phiffer. I am uninformed on the different approaches to handling situations like this one and will be reading the texts you provided (and a few I bookmarked years ago but forgot to get back to ...) before making any suggestions. In the meantime, I am looking forward to hearing the suggestions from others. Blessings.
Edit: I will say, though, that this initially reminds me of the guy with the bridge to other social media platforms. I am wondering if we should come up with a rubric of our agreed-upon rules for giving tools and bots access to social.coop, along with our beliefs about how developers should approach things from the beginning of their projects. We would communicate this publicly, and could offer it to that entity (whose name I do not remember) that prevented the Nazis from being able to federate with most instances. Such rubrics might help us in the future to quickly dispense with these matters. They could take the form of requiring that proposed tools provide for A, B, C, etc., and be developed in a spirit of respect for users' desires for X, Y, Z, where one of those letters could be privacy or explicit (as in opt-in) consent.
It might even serve to signal to developers early on what is expected of them by a significant portion of the platform's users before they start creating their bots and tools, and what kinds of responses they should expect from users on the platform if they are not mindful of these principles before writing their first line of code.