This is the only user of fetch_actor_and_object which previously just
always pretended to be successful. For all the activity types handled
here, we absolutely need the referenced object to be able to process it
(for all but Announce, whether processing those activity types for
unknown remote objects is desirable in the first place is up for debate).
All other users of the similar fetch_actor already properly check success.
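As a rough Elixir sketch of the intended check (module and return shapes are made up, not Akkoma's actual code):

```elixir
defmodule FetchCheckSketch do
  # Hypothetical stand-in for the real fetcher; Akkoma's actual function and
  # return shape differ.
  def fetch_actor_and_object(%{"object" => object}) when is_map(object), do: {:ok, object}
  def fetch_actor_and_object(_data), do: {:error, :not_found}

  # Only proceed when the referenced object could actually be resolved;
  # propagate the error instead of pretending success.
  def handle_incoming(data) do
    case fetch_actor_and_object(data) do
      {:ok, _object} -> {:ok, data}
      {:error, _} = error -> error
    end
  end
end
```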
Note, this currently lumps all resolve-failure reasons together,
so even e.g. boosts of MRF rejected posts will still exhaust all
retries. The following commit improves on this.
It makes decisions based on error sources harder, since all possible
nesting levels need to be checked for. As the return values handled in
the receiver worker show, something else still nests those,
but this is a first start.
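A tiny, purely illustrative helper for the nesting problem (hypothetical, not part of the codebase):

```elixir
defmodule ErrorUnwrapSketch do
  # Unwrap arbitrarily nested {:error, ...} tuples down to the innermost
  # reason, e.g. {:error, {:error, :mrf_reject}} -> :mrf_reject.
  def root_cause({:error, inner}), do: root_cause(inner)
  def root_cause(reason), do: reason
end
```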
The return value is never used here; later stages which actually need it
fetch the user themselves and it doesn't matter whether we wait for the
fetch here or later (if needed at all).
Even more, this early fetch always fails if the user was already deleted
or never known to begin with, but we get something referencing it; e.g.
the very Delete action carrying out the user deletion.
This prevents processing of the Delete, but before that it will be
reattempted several times, each time attempting to fetch the
non-existing profile, wasting resources.
It was used to migrate OStatus connections to ActivityPub if possible,
but support for OStatus was dropped long ago, all new actors are always AP,
and anything that wasn't migrated before has its instance already marked
as unreachable anyway.
The associated logic was also buggy in several ways, and deleted users
got set to ap_enabled=false, which also caused some issues.
This patch is a pretty direct port of the original Pleroma MR;
follow-up commits will further fix and clean up remaining issues.
Changes made (other than trivial merge conflict resolutions):
- converted CHANGELOG format
- adapted migration id for Akkoma’s timeline
- removed ap_enabled from additional tests
Ported-from: https://git.pleroma.social/pleroma/pleroma/-/merge_requests/3880
Current logic unconditionally adds public addressing to "cc"
and follower addressing to "to" after attempting to strip it
from the other one. This creates serious problems:
First, the bug prompting this investigation and fix:
unconditional addition creates duplicates when addressing
URIs already were in their intended final field; e.g.
this is prominently the case for all "unlisted" posts.
Since List.delete only removes the first occurrence,
this then broke follower-address stripping later on,
making the policy ineffective.
It’s also just not safe in general with regard to non-public addressing:
e.g. pre-existing duplicates didn’t get fully stripped,
bespoke addressing modes with only one of public addressing
or follower addressing are mangled, and most importantly:
any belatedly received DM or follower-only post
also got public addressing added!
Shockingly, this last point was actually asserted as "correct" in tests;
it appears to be a mistake from mindless match adjustments
while fixing crashes on nil addressing in
10c792110e.
Clean this sloppy logic up, making sure we add no more duplicates,
all instances of the relevant addresses are purged, and they are only
re-added when they actually existed to begin with.
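A rough sketch of that cleaned-up approach; the function name and field handling are illustrative, not the actual Akkoma implementation:

```elixir
defmodule AddressingSketch do
  # Strip every occurrence of an address from both fields and re-add it
  # exactly once to the target field, but only if it was present at all.
  def move_address(%{"to" => to, "cc" => cc} = object, address, target) do
    present? = address in to or address in cc

    object = %{
      object
      | "to" => Enum.reject(to, &(&1 == address)),
        "cc" => Enum.reject(cc, &(&1 == address))
    }

    if present?, do: Map.update!(object, target, &[address | &1]), else: object
  end
end
```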
The current AP spec demands anonymous objects to have an id field,
but explicitly set to JSON null. However, as it turns out, this is
incompatible with JSON-LD, which requires `@id` to be a string, and thus the
AP spec is incompatible with the Activity Streams spec it is based on.
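Purely illustrative Elixir shapes (an emoji tag is used as the example object): the null-id form mandated by the current AP wording versus the JSON-LD compatible form that omits the field:

```elixir
# Anonymous object with an explicit null id, as the AP spec currently words it:
with_null_id = %{"type" => "Emoji", "name" => ":example:", "id" => nil}

# JSON-LD compatible form: the id field is omitted entirely,
# just like transient objects already do.
without_id = Map.delete(with_null_id, "id")
```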
This is an issue for (the few) AP implementers actually performing
JSON-LD processing, like IceShrimp.NET.
This was uncovered by IceShrimp.NET’s zotan due to our adoption of
anonymous objects for emoji in f101886709.
The issue is being discussed by the W3C and will most likely be resolved
via an errata redefining anonymous objects to completely omit the id
field just like transient objects already do. See:
https://github.com/w3c/activitypub/issues/476
Fixes: https://akkoma.dev/AkkomaGang/akkoma/issues/848
Ever since the browser frontend switcher was introduced in
de64c6c54a, /akkoma counts as
an API prefix and thus gets skipped by frontend plugs,
breaking the old Swagger UI path of /akkoma/swagger-ui.
Do the simple thing and change the frontend path to
/pleroma/swaggerui, which isn't an API path and can't collide
with frontend user paths given pleroma is a reserved nickname.
Reported in
https://meta.akkoma.dev/t/view-all-endpoints/269/7
https://meta.akkoma.dev/t/swagger-ui-not-loading/728
Usually an id should point to another AP object
and the image file isn’t an AP object. We currently
do not provide standalone AP objects for emoji and
don't keep track of remote emoji at all.
Thus just federate them as anonymous objects,
i.e. objects only existing within a parent context
and using an explicit null id.
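A rough sketch of the resulting tag shape; the concrete values are made up and the actual serialiser output may differ in detail:

```elixir
emoji_tag = %{
  "type" => "Emoji",
  # anonymous object: only meaningful inside its parent object
  "id" => nil,
  "name" => ":blobcat:",
  "icon" => %{
    "type" => "Image",
    "url" => "https://example.com/emoji/blobcat.png"
  }
}
```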
IceShrimp.NET previously adopted anonymous objects
for remote emoji without any apparent issues. See:
333611f65e
Fixes: https://akkoma.dev/AkkomaGang/akkoma/issues/694
We’ve received reports of some specific instances slowly accumulating
more and more binary data over time up to OOMs and globally setting
ERL_FULLSWEEP_AFTER=0 has proven to be an effective countermeasure.
However, this incurs increased cpu perf costs everywhere and is
thus not suitable to apply out of the box.
Apparently long-lived Phoenix websocket processes are known to
often cause exactly this by getting into a state unfavourable
for the garbage collector.
Therefore it seems likely affected instances are using timeline
streaming and do so in just the right way to trigger this. We
can tune the garbage collector just for websocket processes
and use a more lenient value of 20 to keep the added perf cost
in check.
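A minimal sketch of the tuning, assuming it is applied via process_flag/2 from within the websocket process (the exact hook used may differ):

```elixir
defmodule StreamerGcSketch do
  # Force a full GC sweep after 20 generational collections instead of the
  # VM default, so long-lived websocket processes drop refc binaries sooner.
  def tune_gc do
    :erlang.process_flag(:fullsweep_after, 20)
  end
end
```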
Testing on one affected instance appears to confirm this theory.
Ref.:
https://www.erlang.org/doc/man/erlang#ghlink-process_flag-2-idp226
https://blog.guzman.codes/using-phoenix-channels-high-memory-usage-save-money-with-erlfullsweepafter
https://git.pleroma.social/pleroma/pleroma/-/merge_requests/4060
Tested-by: bjo
Websites are getting increasingly bloated with tricks like inlining content (e.g. CNN.com), which puts pages at or above 5MB. This value may still be too low.
Rich Media parsing was previously handled on-demand with a 2 second HTTP request timeout and retained only in Cachex. Every time a Pleroma instance is restarted it will have to request and parse the data for each status with a URL detected. When fetching a batch of statuses they were processed in parallel to attempt to keep the maximum latency at 2 seconds, but often resulted in a timeline appearing to hang during loading due to a URL that could not be successfully reached. URLs which had images links that expire (Amazon AWS) were parsed and inserted with a TTL to ensure the image link would not break.
Rich Media data is now cached in the database and fetched asynchronously. Cachex is used as a read-through cache. When the data becomes available we stream an update to the clients. If the result is returned quickly the experience is almost seamless. Activities were already processed for their Rich Media data during ingestion to warm the cache, so users should not normally encounter the asynchronous loading of the Rich Media data.
Implementation notes:
- The async worker is a Task with a globally unique process name to prevent duplicate processing of the same URL (see the sketch after this list)
- The Task will attempt to fetch the data 3 times with increasing sleep time between attempts
- The HTTP request obeys the default HTTP request timeout value instead of 2 seconds
- URLs that cannot be successfully parsed due to an unexpected error receive a negative cache entry for 15 minutes
- URLs that fail with an expected error will receive a negative cache with no TTL
- Activities that have no detected URLs insert a nil value in the Cachex :scrubber_cache so we do not repeat parsing the object content with Floki every time the activity is rendered
- Expiring image URLs are handled with an Oban job
- There is no automatic cleanup of the Rich Media data in the database, but it is safe to delete at any time
- The post draft/preview feature makes the URL processing synchronous so the rendered post preview will have an accurate rendering
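A hedged sketch of the deduplication idea from the first note above; the module, the use of :global, and all names are illustrative rather than the actual implementation:

```elixir
defmodule RichMediaFetchSketch do
  # Register the fetch under a global name derived from the URL; a second
  # fetch for the same URL fails to register and exits without doing
  # duplicate work.
  def async_fetch(url) do
    Task.start(fn ->
      case :global.register_name({:rich_media_fetch, url}, self()) do
        :yes -> do_fetch(url)
        :no -> :already_running
      end
    end)
  end

  defp do_fetch(url) do
    # Placeholder for the real HTTP fetch + parse + cache write.
    {:ok, %{"url" => url}}
  end
end
```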
Overall performance of timelines and creating new posts which contain URLs is greatly improved.
Trying to display non-media as media crashed the renderer;
when posting a status with a valid but non-media object id,
the post was still created but then crashed e.g. timeline rendering.
It also crashed C2S inbox reads, so this could not be used to leak
private posts.
In Mastodon media can only be used by owners and only be associated with
a single post. We currently allow media to be associated with several
posts and until now did not limit their usage in posts to media owners.
However, media update and GET lookup was already limited to owners.
(In accordance with allowing media reuse, we also still allow GET
lookups of media already used in a post unlike Mastodon)
Allowing reuse isn’t problematic per se, but allowing use by non-owners
can be problematic if media ids of private-scoped posts can be guessed
since creating a new post with this media id will reveal the uploaded
file content and alt text.
Given media ids are currently just part of a sequential series shared
with some other objects, guessing media ids is, with some persistence,
indeed feasible.
E.g. sampling some public media ids from a real-world
instance with 112 total and 61 monthly-active users:
17,465,096 at t0
17,472,673 at t1 = t0 + 4h
17,473,248 at t2 = t1 + 20min
This gives about 30 new ids per minute, of which most won't be
local media but remote and local posts, poll answers, etc.
Assuming the default ratelimit of 15 post actions per 10s, scraping all
media for the 4h interval takes about 84 minutes and scraping the 20min
range a mere 6.3 minutes. (Until the preceding commit, post updates were
not rate limited at all, allowing even faster scraping.)
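For reference, the arithmetic behind those estimates (nothing beyond the sampled numbers above):

```elixir
ids_4h  = 17_472_673 - 17_465_096   # 7_577 new ids in 4 hours
ids_20m = 17_473_248 - 17_472_673   # 575 new ids in 20 minutes
per_min = div(ids_4h, 4 * 60)       # ~31 ids per minute
scrape_rate = 15 * 6                # 90 requests per minute at 15 per 10 s
minutes_4h  = ids_4h / scrape_rate  # ~84 minutes
minutes_20m = ids_20m / scrape_rate # ~6.4 minutes
```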
If an attacker can infer (e.g. via a reply to a follower-only post not
accessible to the attacker) some sensitive information was uploaded
during a specific time interval and has some pointers regarding the
nature of the information, identifying the specific upload out of all
scraped media for this timerange is not impossible.
Thus restrict media usage to owners.
Checking ownership just in ActivityDraft would already be sufficient,
since when a scheduled status actually gets posted it goes through
ActivityDraft again, but it would erroneously return a success status
when scheduling an illegal post.
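A hypothetical sketch of an ownership check at attachment-resolution time; the real code lives around attachments_from_ids / ActivityDraft and differs in detail:

```elixir
defmodule MediaOwnershipSketch do
  # Stand-in for the database lookup; returns nil or a map with :user_id.
  defp fetch_media(_media_id), do: nil

  # Only the uploader may attach their media object to a post.
  def get_attachment_for_user(media_id, %{id: user_id}) do
    case fetch_media(media_id) do
      %{user_id: ^user_id} = media -> {:ok, media}
      %{} -> {:error, :forbidden}
      nil -> {:error, :not_found}
    end
  end
end
```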
Independently discovered and fixed by mint in Pleroma
1afde067b1
In MastoAPI, media descriptions are updated via the
media update API, not upon post creation or post update.
This functionality was originally added about 6 years ago in
ba93396649 which was part of
https://git.pleroma.social/pleroma/pleroma/-/merge_requests/626 and
https://git.pleroma.social/pleroma/pleroma-fe/-/merge_requests/450.
They introduced image descriptions to the front- and backend,
but predate adoption of Mastodon API.
For a while adding a `descriptions` array on post creation might have
continued to work as an undocumented Pleroma extension to Masto API, but
at latest when OpenAPI specs were added for those endpoints four years
ago in 7803a85d2c, these codepaths ceased
to be used. The API specs don’t list a `descriptions` parameter and
any unknown parameters are stripped out.
The attachments_from_ids function is only called from
ScheduledActivity and ActivityDraft.create, with the latter
only being called by CommonAPI.{post,update} which in turn
are only called from ScheduledActivity again, the MastoAPI controller
and, without any attachment or description parameter, WelcomeMessage.
Therefore no codepath can contain a descriptions parameter.
Due to JSON-LD compaction the full address of public scope
may also occur in shorter forms and the spec requires us to treat them
all equivalently. To save us the pain of repeatedly checking for all
variants internally, normalise inbound data to just one form.
See note at: https://www.w3.org/TR/activitypub/#public-addressing
This needs to happen very early, even before the other addressing fixes,
else an earlier validator will reject the object. This in turn required
moving the list-type normalisation earlier as well, but since I was
unsure about putting empty lists into the data when no such field
existed before, I excluded this case and thus the later fixing had to be
kept as well.
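A rough sketch of such a normalisation pass; field list, module and function names are illustrative, and absent fields stay absent as per the caveat above:

```elixir
defmodule PublicNormalisationSketch do
  @public "https://www.w3.org/ns/activitystreams#Public"
  # Short forms permitted by JSON-LD compaction per the AP spec note.
  @variants ["Public", "as:Public", @public]

  # Rewrite every variant of the public address to the full URI in the
  # addressing fields that are actually present on the object.
  def normalise(object, fields \\ ["to", "cc", "bto", "bcc"]) do
    Enum.reduce(fields, object, fn field, obj ->
      case obj do
        %{^field => value} -> Map.put(obj, field, fix(value))
        _ -> obj
      end
    end)
  end

  defp fix(values) when is_list(values), do: Enum.map(values, &fix/1)
  defp fix(value) when value in @variants, do: @public
  defp fix(value), do: value
end
```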
Fixes: https://akkoma.dev/AkkomaGang/akkoma/issues/670
Literally nothing uses C2S AP, and it's another route into core
systems which requires analysis and maintenance. A second API
is just extra surface for potentially bad things, so let's take
it out back and obliterate it.
We were overzealous with matching on a raw error from the object fetch that should have never been relied on like this. If we can't fetch successfully we should assume that the collection is private.
Building a more expressive and universal error struct to match on may be something to consider.
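A sketch of the intended behaviour, with hypothetical function and tuple shapes:

```elixir
defmodule CollectionVisibilitySketch do
  # Treat any failed fetch as the collection being private, instead of
  # matching on one specific raw error tuple from the fetcher.
  def collection_private?({:ok, %{"type" => type}})
      when type in ["Collection", "OrderedCollection"],
      do: {:ok, false}

  def collection_private?({:error, _reason}), do: {:ok, true}
end
```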