I will also likely regret wading in on this, but:
1: Honoring robots.txt requires fetching robots.txt first, which is an extra request in itself, so it only marginally reduces the total number of requests.
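To make that extra round trip concrete, here's a minimal sketch using Python's stdlib urllib.robotparser. The user agent string and the final preview-fetch step are illustrative assumptions, not Mastodon's actual implementation:

    from urllib import robotparser
    from urllib.parse import urlparse

    def allowed_to_fetch(url: str, user_agent: str = "ExamplePreviewBot") -> bool:
        """Check robots.txt before fetching a page; the check itself costs a request."""
        parsed = urlparse(url)
        rp = robotparser.RobotFileParser()
        rp.set_url(f"{parsed.scheme}://{parsed.netloc}/robots.txt")
        rp.read()  # one extra HTTP request per host (unless cached locally)
        return rp.can_fetch(user_agent, url)

    # Two requests where there used to be one:
    # GET /robots.txt, then GET the page itself.
    if allowed_to_fetch("https://example.com/article"):
        pass  # fetch the page and render the preview card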
2: Caching and CDNs are already a well-established pattern on the web, and they're necessary for plenty of things that have nothing to do with Mastodon/fediverse. Solving this problem any other way will likely create new problems, whereas caching at least falls back on existing solutions that are known to work.
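As a concrete illustration of "falls back on existing solutions": a minimal sketch of an origin sending standard Cache-Control headers so a CDN or any shared cache can absorb repeated preview fetches. The handler, port, and 5-minute TTL are all illustrative assumptions, nothing Mastodon-specific:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class PreviewFriendlyHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"<html><head><title>Example</title></head><body>...</body></html>"
            self.send_response(200)
            # Standard HTTP caching: with a shared-cache TTL, N fediverse servers
            # requesting the same URL mostly hit the CDN, not the origin.
            self.send_header("Cache-Control", "public, max-age=300, s-maxage=300")
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("", 8080), PreviewFriendlyHandler).serve_forever()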