Comments

Fool wrote

Sounds like there needs to be some sort of proxy/forwarder system for interaction between instances.

So an instance would only allow a specific list of caching hosts to pull from it, and would itself only connect to those caching hosts when pulling data.
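
A minimal sketch of that allowlist idea (the pull endpoint, addresses, and port are all made up for illustration):

```python
# Minimal sketch, assuming a hypothetical pull endpoint; the allowlist
# addresses and port are made up for illustration.
from http.server import BaseHTTPRequestHandler, HTTPServer
import ipaddress

# Hypothetical set of caching hosts permitted to pull from this instance.
ALLOWED_CACHE_HOSTS = {
    ipaddress.ip_address("203.0.113.10"),  # cache-1 (illustrative)
    ipaddress.ip_address("203.0.113.11"),  # cache-2 (illustrative)
}

class PullHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Refuse everyone except the designated caching hosts, so a
        # federation-wide fetch storm never reaches the instance directly.
        client_ip = ipaddress.ip_address(self.client_address[0])
        if client_ip not in ALLOWED_CACHE_HOSTS:
            self.send_error(403, "Pull only via designated caching hosts")
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"cached object payload\n")

if __name__ == "__main__":
    HTTPServer(("", 8080), PullHandler).serve_forever()
```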

4

mima OP wrote

That's kinda one of the proposed solutions (specifically #3) in this GitHub issue, I think.

Though I think your specific solution of letting instances have their own caching hosts introduces its own problem too: it will eventually lead to centralized caching services, since it wouldn't make sense for each instance to host its own caching service. Which kinda defeats the point of decentralization in Mastodon...

2

Fool wrote

Option 3 is completely different from what I proposed. What I proposed could include some consolidation, but potentially everyone could run their own separate caching instance.

You could say DNS has the same issue: people can and do run their own root servers; it's just a matter of choosing whom to trust.

3

mima OP wrote

Well, if everyone just ran their own caching instance, wouldn't the DDoS problem still exist, just moved to their caching instances..?

2

Fool wrote (edited)

A few thousand connections isn't much for a proxy repeating the same content, but it is a lot for a server running indexing and queries.

Edit: I just realised I read it wrong; the problem is Mastodon DDoSing random web hosts, not other federated hosts... I still think proxy caching would be key, with only the initial linking instance fetching the content.
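
As a rough sketch of that proxy-caching idea, here's a toy forward cache where only the first request for a URL hits the origin and everything after is served from memory (the ?url= endpoint shape and port are invented; a real proxy would use per-URL locking and cache TTLs instead of one global lock):

```python
# Toy caching forward proxy: the first request for a URL triggers one
# upstream fetch; later requests are served from memory.
import threading
import urllib.request
from urllib.parse import urlparse, parse_qs
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

_cache: dict[str, bytes] = {}
_lock = threading.Lock()

def fetch_once(url: str) -> bytes:
    # Only the first caller actually hits the origin server.
    with _lock:
        if url not in _cache:
            with urllib.request.urlopen(url, timeout=10) as resp:
                _cache[url] = resp.read()
        return _cache[url]

class CacheHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        url = parse_qs(urlparse(self.path).query).get("url", [None])[0]
        if not url:
            self.send_error(400, "missing ?url= parameter")
            return
        body = fetch_once(url)
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    ThreadingHTTPServer(("", 8081), CacheHandler).serve_forever()
```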

4

mima OP wrote (edited)

> I still think proxy caching would be key, with only the initial linking instance fetching the content.

Yeah, that was suggested at first, but the devs were concerned about malicious instances giving out fake previews. Then again, other social media sites already let you edit the preview itself, so maybe it's not really that big of a deal...
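
For illustration, fetching the preview once at the linking instance might look something like this toy sketch: the origin instance pulls the page once, extracts its OpenGraph tags, and ships the resulting card inside the post, so receiving instances never hit the linked site (the payload shape is invented, not actual ActivityPub):

```python
# Sketch of "only the linking instance fetches" under the assumptions
# above; the post/preview_card payload shape is illustrative only.
import urllib.request
from html.parser import HTMLParser

class OGParser(HTMLParser):
    """Collects <meta property="og:..."> tags into a dict."""
    def __init__(self):
        super().__init__()
        self.card = {}

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        prop = a.get("property", "")
        if tag == "meta" and prop.startswith("og:") and "content" in a:
            self.card[prop[3:]] = a["content"]

def build_preview_card(url: str) -> dict:
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    parser = OGParser()
    parser.feed(html)
    # Receivers have to trust this card as-is, which is exactly the
    # fake-preview concern: the origin could lie about any field.
    return {
        "url": url,
        "title": parser.card.get("title"),
        "description": parser.card.get("description"),
        "image": parser.card.get("image"),
    }

post = {
    "content": "check this out: https://example.com/article",
    "preview_card": build_preview_card("https://example.com/article"),
}
```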

4