The initial info message listing all found packs ought to be
sufficient, and with many packs installed this can create multiple
pages of log messages on each emoji reload or server start.
Any errors or non-indexed packs are still logged to higher levels.
This requirement was originally added together with splicing the
inbox owner into the non-b* addressing fields to make bcc transports
work in https://git.pleroma.social/pleroma/pleroma/-/merge_requests/390.
Later this was relaxed to always allow deliveries devoid of any
addressing at all in f6cb963df2,
and to always allow deliveries from actors the owner follows in
750b369d04, fixing interop issues with
Mastodon and Honk respectively.
The justification for both the filtering and splicing comes from
one sentence in AP spec’s inbox section:
> In general, the owner of an inbox is likely
> to be able to access all of their inbox contents.
While this may provide plausible justification for splicing the owner
into cc, it is less clear how this requires or justifies the set of
filtering rules employed here.
A survey of a few other implementations found no similar
filtering or splicing being employed.
Furthermore, spec-compliant servers will strip bto/bcc _before_
delivery to remote servers, meaning any compliant bcc transport
out there will NOT contain any explicit addressing of the inbox owner.
Thus the addressing requirement directly opposes
the goal of the original patch.
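
For illustration, a hypothetical spec-compliant bcc delivery to
alice's personal inbox could look like this after the sender
stripped bto/bcc (sketched as an Elixir map; all names and URLs
made up):

    %{
      "@context" => "https://www.w3.org/ns/activitystreams",
      "id" => "https://remote.example/activities/1",
      "type" => "Create",
      "actor" => "https://remote.example/users/bob",
      # bto/bcc were stripped before delivery per the AP spec, so no
      # field names alice even though the activity was POSTed to
      # https://local.example/users/alice/inbox
      "to" => ["https://remote.example/users/bob/followers"],
      "object" => %{
        "id" => "https://remote.example/objects/1",
        "type" => "Note",
        "content" => "hi"
      }
    }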
Currently, the requirement for the owner to be addressed is once
again causing interop issues; it turns out to be the root cause of
a long-standing (2+ years) bug preventing meaningful federation.
Bridgy sends e.g. Follow activities and Accepts for Follows directly
to the affected user’s personal inbox while solely addressing
the public scope in the to field. Notably, since follow relations
never got established, the "accept if followed" allow rule could
never come into effect.
To make matters worse, non-addressed messages simply lead to a
vague "internal server error" response being sent back,
which likely slowed down locating the issue.
Furthermore, additional issues wrt signatures cropped up after
the 500-response issue was first reported, but they seem to have
already been fixed in the meantime, possibly with the signature
handling overhaul in Akkoma.
Given it repeatedly caused issues, does not appear to align with
common practice in the wider fedi ecosystem, and apparently
contradicts its original intention, simply remove the requirement.
This is confirmed to fix Bridgy interop.
The addressing splicing should actually also add the inbox owner to
bto or bcc instead of cc, but for now this is left unchanged; in
practice bto/bcc delivery appears to be basically unused anyway.
Most headers are automatically checked by the library after this
upgrade. But since digest is only required for requests with a body,
and body processing is currently handled outside the lib, we need to
explicitly pass its presence or absence along, or we get no feedback
about creating broken signatures.
This makes bugs in our signatures more apparent,
allowing faster discovery and fixes.
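
A minimal sketch of the idea (module and function are hypothetical,
not the library's actual interface):

    defmodule SignatureHeaders do
      # Digest is only meaningful when the request carries a body;
      # bodyless requests (e.g. GET fetches) must not require it.
      def headers_to_validate(has_body?) do
        base = ["(request-target)", "host", "date"]
        if has_body?, do: ["digest" | base], else: base
      end
    end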
This property was introduced as a way to gauge whether and
how much enabling authfetch might break passive federation in
https://akkoma.dev/AkkomaGang/akkoma/pulls/312.
However, with the db field defaulting to false, there’s no distinction
between instances without valid signatures and those which just never
attempted to fetch anything from the local instance.
Furthermore, this was never exposed anywhere and required manually
checking the database or cachex state via a remote shell.
Given the above, it appears this doesn't actually
provide anything useful, thus drop it.
The most common permanent receiver error arises for likes/boosts
when we don't yet know the relevant object and can't fetch it
due to the remote being overwhelmed or otherwise down.
Before this change all retries were rather rapid,
thus not giving the remote enough time to recover,
and usually all failed. Now the remote has about 20
minutes to recover before we give up.
Transient errors from race conditions and (presumably)
weird database-cache interactions also occur regularly.
However, they resolve within the first one or two retries,
and those initial retries still happen relatively quickly.
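
As a rough sketch of such a schedule (Oban worker; queue name,
attempt count and timings are illustrative, not the exact values
used here):

    defmodule IllustrativeReceiver do
      use Oban.Worker, queue: :federator_incoming, max_attempts: 6

      # Doubling backoff: 40s, 80s, 160s, 320s, 640s between attempts.
      # Early retries stay quick for transient races while the total
      # (~21 min) gives a struggling remote time to recover.
      @impl Oban.Worker
      def backoff(%Oban.Job{attempt: attempt}), do: 20 * Integer.pow(2, attempt)

      @impl Oban.Worker
      def perform(%Oban.Job{}), do: :ok
    end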
Only scrubbing "content" leads to differences between
"content" and "contentMap", even though the latter should
ideally match the former exactly for the primary language's entry.
Ideally, for locally generated posts there should be no difference
between applying the scrubber or not; as it turns out, automatically
generated attachment links didn't match the form expected by our
default scrubber.
Currently Akkoma never uses nor exposes the value of contentMap
entries, thus this oversight was harmless wrt safety and at most
perturbed the language detection for our posts performed by remote
servers.
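
The intended invariant, sketched with a stand-in scrubber
(hypothetical; in reality the configured scrubber applies):

    # Stand-in for the configured scrubber.
    scrub = fn html -> String.replace(html, ~r{<script.*?</script>}s, "") end

    html = ~s(<p>Hello <a href="https://fedi.example">fedi</a></p>)

    object = %{
      "content" => scrub.(html),
      "contentMap" => %{"en" => scrub.(html)}
    }

    # The primary-language contentMap entry matches content exactly
    # only if both fields went through the same scrubbing.
    true = object["content"] == object["contentMap"]["en"]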
Fixes: https://akkoma.dev/AkkomaGang/akkoma/issues/928
Despite its name this property is not supposed to be a full URI,
but just a bare domain without protocol. Furthermore, it's supposed
to be the WebFinger domain used in user handles and NOT the domain
used for API and ActivityPub objects (which every caller will
already know anyway).
Not following this caused issues for Pachli and Tusky.
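
For example, with API and AP objects served from fedi.example.com
but handles of the form @user@example.com, the correct value is the
bare WebFinger domain (sketch, names made up):

    %{
      # correct: bare WebFinger domain as used in @user@example.com
      "domain" => "example.com"
      # wrong: "https://example.com" (not a bare domain)
      # wrong: "fedi.example.com" (the API/AP host callers already know)
    }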
Reported-by: nikclayton
Added in Mastodon 2.9.2 (June 2019), this is plain-text only and
supposed to be shorter than the older description field.
Some clients were reported to require this field to properly function.
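
Illustrative shape of the two fields (values made up):

    %{
      "description" => "<p>A longer description which may contain HTML.</p>",
      "short_description" => "A small example Akkoma instance"
    }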
Reported-by: https://akkoma.dev/paulyd
The queue was not actually registered, leading to
jobs getting scheduled each week but never processed,
forever lingering in the 'available' state.
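
The fix boils down to an entry like this in the Oban config
(queue name hypothetical; see the referenced commit for the
actual one):

    config :pleroma, Oban,
      queues: [
        # without this entry, jobs get inserted but never picked up
        weekly_maintenance: 1
      ]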
Fixes omission in: a4f834a687
And use the project's preferred spelling when
not referring to a specific distro package.
Spelling of RedHat packages was left as-is
since I can't access RedHat's package repository.
This allows retaining posts and boosts of remote actors with local
follows regardless of age.
With the "full" setting this can be taken further, treating such
followed actors just like local users, even keeping all posts they
liked or reacted to.
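
A hypothetical invocation combining this with thread retention
(the exact option name here is assumed from this description):

    mix pleroma.database prune_objects --keep-threads --keep-followed full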
Pinned objects and their threads will be refetched
on user refresh, which by default happens after a day
once a user is encountered again in any form, including a mention.
We observed that pruning pinned objects usually results in heavy load
for hours after a database prune, due to a clogged-up remote fetch
queue as pinned posts and their threads of many (most?) users get
refetched.
Thus do not prune pinned posts by default.
Keeping closer to earlier behaviour, this will still prune threads of
pinned posts regardless of --keep-threads if nothing else prevents it.
Statements for keeping and breaking threads vastly differ,
and the whole if block doesn't even fit on one screen.
Thus move each version out into its own function to
improve readability.