

Yeah, let’s all use a fork that doesn’t patch those AI slop vulnerabilities.


Yeah, ok. But the military is explicitly supposed to keep functioning when the backend literally gets nuked. Who wants to pay for that kind of redundancy just so that some people can watch Netflix while they’re dying of radiation poisoning?


Law in some (many?) European countries already requires more intrusive age checks, and the EU has some explicit requirements of its own. There is also a push to ban social media for people under a certain age (maybe 16).
The EU has just presented an age verification app. That app could become a required standard, either through new laws or through case law from court judgments.


Everyone can be an Attestation Provider,
Maybe everyone can apply to become one, but there will be some certification process; anything else would defeat the purpose. So the question becomes how much that certification costs and who pays for it.
I agree that, on a technical level, it should be possible to implement support for the app.


Maybe, but letting a new instance federate doesn’t create a bigger abuse risk than allowing account creation, and going through a compliance checklist takes more effort.
It might split the Fediverse into compliant and non-compliant camps, where compliant servers don’t talk to the others.


Good questions. I haven’t seen any info about the economics yet. I think that’s up to the member states?


You don’t need age verification if you run it for your family and know everyone’s age.
You could run your own family forum just fine. The problems start when you want to federate. Let me be crass to make the point: say someone posts child porn and that gets federated to your instance. Do you think you can just declare that someone else’s problem to avoid legal complications?
The way I expect this would work is that instances would become responsible for who they federate with. If an instance allows your family instance to federate, it would be allowing your users to indirectly use that instance. We’ll have to wait and see what lawmakers or courts do, as you say. But I think federation would only happen by manual approval, after some sort of compliance check, or maybe even under a legal contract similar to how it works under the GDPR. Actually, such GDPR contracts might be required anyway, but who cares.


I hadn’t considered whether existing legislation might already require implementing age verification when I posed the question. Now that you bring it up, I fear it does.
The DSA has exceptions for small companies. But I would caution that there is no case law that supports your interpretation that users should be counted on a per-instance basis. Courts are often not very receptive to attempts to avoid rules through such formalities. Bear in mind that the DSA is supposed to protect the “fundamental rights” of Europeans, which may not include running an instance.
Other laws do not have such exceptions. This app seems poised to become the required age verification mechanism, wherever age should be known. Either use the app or show you have something better.
In January, a Berlin court ruled that TikTok was in violation of the GDPR for not doing enough age checking. The ruling is being appealed, and it remains to be seen how much of that case will be applicable to the Fediverse. But there is a good chance that, even without new laws, age-gating will become mandatory through case law.


Good point. European governments keep churning out digital regulations but have hardly any qualified people to enforce them. That has protected the Fediverse so far.
But a straight age-gating requirement would require no particular qualifications to spot. Would you be willing to face a hefty fine just for the privilege of running an instance?


You are talking about the DSA.
There is no reason to believe that future social media bans will have such exceptions. VDL said explicitly that the app means there are “no more excuses”.
The DSA excludes small platforms from some rules so as not to overwhelm start-ups with bureaucracy. Clearly, such considerations are to be neutralized in the future.


Why not?


After the recent judgments against Meta, it was predicted that there would be a crackdown on mental health topics. DDLC has been connected to a suicide in the UK.


That’s not how it works.


How do you algorithmically manipulate those 12M people with Mastodon?
The usual way, whatever that is. What would Mastodon do about it? For that matter, how do you manipulate Bluesky?
BTW, Bluesky has almost 40M users.
It’s the number in the OP, so I ran with it. The Fediverse count apparently excludes Gab and Truth Social. That makes sense, since those aren’t federated with the rest, but it also shows an issue.


Alternate history: Bluesky never happens. Instead, some company opens up a Mastodon instance as a Twitter replacement. So instead of Bluesky with 12M+ users, there’s a Mastodon instance with 12M+ users. Now what?
Didn’t some of those promise not to merge AI slop? I guess I misunderstood.