When note editing support was added, stripping internal
fields from edited notes and their history was omitted.
This was uncovered by Mastodon inlining the like count as a "likes"
collection, which conflicts with our internal "likes" list and caused
validation failures. In a spot check with likes/like_count it was not possible to
inject those internal fields into the local db via Update, but this
was not extensively tested for all fields and avenues.
Similarly, address normalisation did not normalise addressing in the
object history, although this was never at risk of being exploitable.
The revision history of the Pleroma MR adding edit support reveals
recursive stripping was intentionally avoided, since it will end up
removing e.g. emoji from outgoing activities. This appears to still
be true. However, all current internal fields ("pleroma_internal"
appears to be unused) contain data already publicised otherwise anyway.
In the interest of quickly fixing a federation bug (and, at worst,
potential data injection), outgoing stripping is left non-recursive for now.
Of course the ultimate fix here is to not mix remote and internal data
into the same map in the first place, but unfortunately having a single
map of all truth is a core assumption of *oma's AP doc processing.
Changing this is a massive undertaking and not suitable for providing
a short-term fix.
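For illustration, a minimal sketch of the kind of non-recursive stripping described above, applied to an edited object and its history; the module name, function names and field list are assumptions, not the actual Akkoma code:

```elixir
defmodule StripSketch do
  # Illustrative subset of internal fields; the real list is maintained elsewhere.
  @internal_fields ~w(likes like_count announcements announcement_count reactions)

  # Drop internal fields from the object itself and from every entry of its
  # edit history, without recursing into other nested values.
  def strip_internal_fields(%{} = object) do
    object
    |> Map.drop(@internal_fields)
    |> strip_history()
  end

  defp strip_history(
         %{"formerRepresentations" => %{"orderedItems" => items} = history} = object
       )
       when is_list(items) do
    stripped = Enum.map(items, &Map.drop(&1, @internal_fields))
    Map.put(object, "formerRepresentations", Map.put(history, "orderedItems", stripped))
  end

  defp strip_history(object), do: object
end
```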
Previously there were mainly two attack vectors:
- for raw keys the owner <-> key mapping wasn't verified at all
- keys were retrieved with refetching allowed, and only the top-level ID
  was sanitised while usually keys are but a subobject
This reintroduces public key checks in the user actor,
previously removed in 9728e2f8f7
but now adapted to account for the new mapping mechanism.
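A rough sketch of the kind of owner/key consistency check being reintroduced; the module, function and field names are illustrative assumptions rather than the actual validator code:

```elixir
defmodule KeyCheckSketch do
  # Accept a public key only if its declared owner is the actor it was
  # fetched for and the key id lives on the same host as that actor.
  def valid_key_for_actor?(%{"id" => key_id, "owner" => owner}, actor_id)
      when is_binary(key_id) and is_binary(owner) do
    owner == actor_id and URI.parse(key_id).host == URI.parse(actor_id).host
  end

  def valid_key_for_actor?(_key, _actor_id), do: false
end
```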
The first collection page is (sometimes?) inlined,
which caused crashes when attempting to log the fetch failure.
But there’s no need to fetch here and we can treat it like other inlined collections.
For now, also treat partial fetches as a success, since for all
current users partial collection data is better than no data at all.
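For illustration (module and function names assumed, not the actual fetcher), the distinction between an inlined first page and one that still needs to be fetched might look like this:

```elixir
defmodule FirstPageSketch do
  # If the first page is already inlined as a map, use it directly;
  # only fetch when we merely got a link to the page.
  def first_page(%{"first" => %{} = page}), do: {:ok, page}

  def first_page(%{"first" => page_id}) when is_binary(page_id),
    do: fetch_page(page_id)

  def first_page(_collection), do: {:error, :no_first_page}

  # Placeholder for the real page fetch.
  defp fetch_page(_page_id), do: {:error, :not_implemented}
end
```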
If an error occurred while fetching a page, this previously
returned a bogus {:ok, {:error, _}} success, causing the error
to be attached to the object as a reply list and subsequently
leading to the whole post getting rejected during validation.
Also, the pinned collection caller did not actually handle
the preexisting error case, resulting in process crashes.
The object lookup is later repeated in the validator, but due to
caching shouldn't incur any noticeable performance impact.
It’s actually preferable to check here, since it avoids the otherwise
occurring user lookup and the overhead of starting and aborting a
transaction.
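A minimal sketch of the corrected page-fetch flow under assumed names; the point is that a fetch error is propagated instead of being wrapped into a success tuple:

```elixir
defmodule ReplyCollectionSketch do
  # Previously something akin to `{:ok, fetch_page(url)}` was returned,
  # turning failures into {:ok, {:error, _}}. Propagate the error instead.
  def collect_replies(url) do
    case fetch_page(url) do
      {:ok, %{"orderedItems" => items}} -> {:ok, items}
      {:error, reason} -> {:error, reason}
    end
  end

  # Placeholder for the real HTTP fetch.
  defp fetch_page(_url), do: {:error, :not_implemented}
end
```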
Most of them actually only accept either activities or a
non-activity object later; querying both is then a waste
of resources and may create false positives.
User.get_or_fetch_by_(apid|nickname) are the only external users of fetch_and_prepare_user_from_ap_id,
thus there’s no point in duplicating logging, especially not at error level.
Currently (duplicated) _not_found errors for users make up the bulk of my logs
and are created almost every second. Deleted users are a common occurrence and not
worth logging outside of debug.
To facilitate this, ObjectValidator.fetch_actor_and_object is adapted to
return an informative error. Otherwise we’d be unable to make an
informed decision on retrying or not later. There’s no point in
retrying to fetch MRF-blocked stuff or private posts for example.
This is the only user of fetch_actor_and_object, which previously just
always pretended to be successful. For all the activity types handled
here, we absolutely need the referenced object to be able to process it.
(Other than for Announce, whether processing those activity types for
unknown remote objects is desirable in the first place is up for debate.)
All other users of the similar fetch_actor already properly check success.
Note that this currently lumps all resolution failure reasons together,
so even e.g. boosts of MRF-rejected posts will still exhaust all
retries. The following commit improves on this.
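A rough sketch of the intended shape of the adapted helper; this is an assumption about its structure, not the actual Akkoma implementation:

```elixir
defmodule FetchSketch do
  # Return the underlying fetch error so callers can decide whether a
  # retry is worthwhile (e.g. not for MRF-rejected or private posts).
  def fetch_actor_and_object(%{"actor" => actor, "object" => object_id}) do
    with {:ok, _actor} <- fetch_actor(actor),
         {:ok, _object} <- fetch_object(object_id) do
      :ok
    else
      # Previously failures collapsed into a pretend success;
      # now the reason is passed through to the caller.
      {:error, reason} -> {:error, reason}
    end
  end

  # Placeholders for the real actor/object fetchers.
  defp fetch_actor(_ap_id), do: {:error, :not_implemented}
  defp fetch_object(_ap_id), do: {:error, :not_implemented}
end
```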
It was used to migrate OStatus connections to ActivityPub if possible,
but support for OStatus was long since dropped, all new actors are always AP,
and if anything wasn't migrated before, their instance is already marked
as unreachable anyway.
The associated logic was also buggy in several ways, and deleted users
got set to ap_enabled=false, which also caused some issues.
This patch is a pretty direct port of the original Pleroma MR;
follow-up commits will further fix and clean up remaining issues.
Changes made (other than trivial merge conflict resolutions):
- converted CHANGELOG format
- adapted migration id for Akkoma’s timeline
- removed ap_enabled from additional tests
Ported-from: https://git.pleroma.social/pleroma/pleroma/-/merge_requests/3880
Current logic unconditionally adds public addressing to "cc"
and follower addressing to "to" after attempting to strip it
from the other one. This creates serious problems:
First, the bug prompting this investigation and fix:
unconditional addition creates duplicates when addressing
URIs already were in their intended final field; e.g.
this is prominently the case for all "unlisted" posts.
Since List.delete only removes the first occurrence,
this then broke follower-address stripping later on,
making the policy ineffective.
It’s also just not safe in general wrt non-public addressing:
e.g. pre-existing duplicates didn’t get fully stripped,
bespoke addressing modes with only one of public addressing
or follower addressing are mangled, and most importantly:
any belatedly received DM or follower-only post
also got public addressing added!
Shockingly, this last point was actually asserted as "correct" in tests;
it appears to be a mistake from mindless match adjustments
while fixing crashes on nil addressing in
10c792110e.
Clean this sloppy logic up, making sure no duplicates are
added by us, all instances of relevant addresses are purged, and they are only
readded when they actually existed to begin with.
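A minimal sketch of the fixed behaviour; the module and function names are assumptions, but it follows the described rules: purge every occurrence and only re-add what was already addressed:

```elixir
defmodule UnlistedSketch do
  @public "https://www.w3.org/ns/activitystreams#Public"

  # Move public addressing to "cc" and follower addressing to "to",
  # without ever introducing addresses that weren't there and without
  # leaving duplicates behind.
  def force_unlisted(%{"to" => to, "cc" => cc} = object, followers) do
    had_public? = @public in to or @public in cc
    had_followers? = followers in to or followers in cc

    to = Enum.reject(to, &(&1 in [@public, followers]))
    cc = Enum.reject(cc, &(&1 in [@public, followers]))

    to = if had_followers?, do: Enum.uniq([followers | to]), else: to
    cc = if had_public?, do: Enum.uniq([@public | cc]), else: cc

    %{object | "to" => to, "cc" => cc}
  end
end
```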
The current AP spec demands anonymous objects to have an id key, but with
its value explicitly set to JSON null. However, as it turns out, this is
incompatible with JSON-LD, which requires `@id` to be a string, and thus the AP
spec is incompatible with the Activity Streams spec it is based on.
This is an issue for (the few) AP implementers actually performing
JSON-LD processing, like IceShrimp.NET.
This was uncovered by IceShrimp.NET’s zotan due to our adoption of
anonymous objects for emoji in f101886709.
The issue is being discussed by W3C, and will most likely be resolved
via an errata redefining anonymous objects to completely omit the id
field just like transient objects already do. See:
https://github.com/w3c/activitypub/issues/476
Fixes: https://akkoma.dev/AkkomaGang/akkoma/issues/848
Usually an id should point to another AP object
and the image file isn’t an AP object. We currently
do not provide standalone AP objects for emoji and
don't keep track of remote emoji at all.
Thus just federate them as anonymous objects,
i.e. objects only existing within a parent context
and using an explicit null id.
IceShrimp.NET previously adopted anonymous objects
for remote emoji without any apparent issues. See:
333611f65e
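For illustration, the rough shape of such an outgoing emoji tag as an anonymous object; the concrete values are made up and the exact field set may differ:

```elixir
%{
  "id" => nil,
  "type" => "Emoji",
  "name" => ":blobcat:",
  "icon" => %{
    "type" => "Image",
    "url" => "https://example.com/emoji/blobcat.png"
  }
}
```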
Fixes: https://akkoma.dev/AkkomaGang/akkoma/issues/694
Rich Media parsing was previously handled on-demand with a 2 second HTTP request timeout and retained only in Cachex. Every time a Pleroma instance is restarted it will have to request and parse the data for each status with a URL detected. When fetching a batch of statuses they were processed in parallel to attempt to keep the maximum latency at 2 seconds, but this often resulted in a timeline appearing to hang during loading due to a URL that could not be successfully reached. URLs which had image links that expire (Amazon AWS) were parsed and inserted with a TTL to ensure the image link would not break.
Rich Media data is now cached in the database and fetched asynchronously. Cachex is used as a read-through cache. When the data becomes available we stream an update to the clients. If the result is returned quickly the experience is almost seamless. Activities were already processed for their Rich Media data during ingestion to warm the cache, so users should not normally encounter the asynchronous loading of the Rich Media data.
Implementation notes:
- The async worker is a Task with a globally unique process name to prevent duplicate processing of the same URL
- The Task will attempt to fetch the data 3 times with increasing sleep time between attempts
- The HTTP request obeys the default HTTP request timeout value instead of 2 seconds
- URLs that cannot be successfully parsed due to an unexpected error receive a negative cache entry for 15 minutes
- URLs that fail with an expected error receive a negative cache entry with no TTL
- Activities that have no detected URLs insert a nil value in the Cachex :scrubber_cache so we do not repeat parsing the object content with Floki every time the activity is rendered
- Expiring image URLs are handled with an Oban job
- There is no automatic cleanup of the Rich Media data in the database, but it is safe to delete at any time
- The post draft/preview feature makes the URL processing synchronous so the rendered post preview will have an accurate rendering
Overall performance of timelines and creating new posts which contain URLs is greatly improved.
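A minimal sketch of the retry behaviour described in the notes above; the module, function names, return shapes, and delays are assumptions, not the actual worker code:

```elixir
defmodule RichMediaRetrySketch do
  @max_attempts 3

  # Try up to three times, sleeping a bit longer before each retry.
  def fetch_with_retries(url, attempt \\ 1) do
    case fetch_and_parse(url) do
      {:ok, card} ->
        {:ok, card}

      {:error, _reason} when attempt < @max_attempts ->
        Process.sleep(attempt * 1_000)
        fetch_with_retries(url, attempt + 1)

      {:error, reason} ->
        {:error, reason}
    end
  end

  # Placeholder for the real HTTP fetch and parse step.
  defp fetch_and_parse(_url), do: {:error, :not_implemented}
end
```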
Due to JSON-LD compaction the full address of public scope
may also occur in shorter forms and the spec requires us to treat them
all equivalently. To save us the pain of repeatedly checking for all
variants internally, normalise inbound data to just one form.
See note at: https://www.w3.org/TR/activitypub/#public-addressing
This needs to happen very early, even before the other addressing fixes
else an earlier validator will reject the object. This in turn required
moving the list-type normalisation earlier as well, but since I was
unsure about putting empty lists into the data when no such field
existed before, I excluded this case and thus the later fixing had to be
kept as well.
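A sketch of this normalisation under assumed module and function names; the shorthand variants follow the spec note linked above:

```elixir
defmodule PublicAddressSketch do
  @public "https://www.w3.org/ns/activitystreams#Public"

  # Map the compacted shorthands onto the full IRI.
  defp normalise_entry(entry) when entry in ["Public", "as:Public"], do: @public
  defp normalise_entry(entry), do: entry

  # Normalise all addressing fields that are actually present; fields
  # missing from the incoming data are intentionally left untouched.
  def normalise_addressing(%{} = object) do
    Enum.reduce(["to", "cc", "bto", "bcc"], object, fn field, obj ->
      case obj do
        %{^field => value} when is_list(value) ->
          Map.put(obj, field, Enum.map(value, &normalise_entry/1))

        %{^field => value} when is_binary(value) ->
          Map.put(obj, field, [normalise_entry(value)])

        _ ->
          obj
      end
    end)
  end
end
```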
Fixes: https://akkoma.dev/AkkomaGang/akkoma/issues/670
literally nothing uses C2S AP, and it's another route into core
systems which requires analysis and maintenance. A second API
is just extra surface for potentially bad things so let's take
it out back and obliterate it.
We were overzealous with matching on a raw error from the object fetch that should have never been relied on like this. If we can't fetch successfully we should assume that the collection is private.
Building a more expressive and universal error struct to match on may be something to consider.