While the function signature allows returning many errors at once,
OpenApiSpex.cast_and_validate currently only ever returns the first
invalid field it encounters. Thus we need to retry multiple times to
clean up all offenders.
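Roughly, the retry loop amounts to (sketch; helper name hypothetical,
exact cast_and_validate arguments and error shape per OpenApiSpex docs):

    defp validate_all(conn, spec, operation) do
      case OpenApiSpex.cast_and_validate(spec, operation, conn) do
        {:ok, conn} ->
          {:ok, conn}

        {:error, [%OpenApiSpex.Cast.Error{path: [field | _]} | _]} ->
          # only the first offender is reported, so drop it and retry
          conn = %{conn | params: Map.delete(conn.params, to_string(field))}
          validate_all(conn, spec, operation)
      end
    end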
Fixes: https://akkoma.dev/AkkomaGang/akkoma/pulls/992#issuecomment-15027
No caller of `reload` actually uses the result in any way
so there’s no need to wait for a response and risk running
into a timeout (by default 5 seconds).
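In sketch form (module layout and load_emoji/0 hypothetical):

    def reload do
      # fire and forget; no caller consumes the result
      GenServer.cast(__MODULE__, :reload)
    end

    def handle_cast(:reload, _state) do
      {:noreply, load_emoji()}
    end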
Discovered-by: sn0w <me@sn0w.re>
Based-on: 1fb54d5c2c
Reloading the entire emoji set from disk, reparsing all pack JSON files,
etc. is unnecessarily costly for large emoji sets. We already know which
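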
single or few changes we want to apply, so do just that instead.
No caller cares about the order
(and although rare, with concurrent reads happening at the same
time as a write the table might return unordered results anyway).
Unordered sets have constant read time,
ordered sets logarithmic time, but there’s no benefit for us.
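For illustration (table name and options illustrative):

    # :set lookups are O(1); :ordered_set would pay O(log n)
    # for an ordering no caller relies on
    :ets.new(:emoji, [:named_table, :set, :protected, read_concurrency: true])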
Display will fail for all but Create and Announce anyway since
0c9bb0594a. We exclude Announce
activities from redirects here since they are not identical
with the announced post and akkoma-fe stripping the repeat header
on the /notice/ page might lead to confusion about which is which.
In particular those redirects existing broke the assumptions from
the above commit’s message and made it possible to obtain
database IDs for activities other than one’s own likes, allowing
slightly more mischief with the rendering bug it fixed.
Note: while 0c9bb0594a speculated about
public likes also leaking IDs to other users, the public like endpoint
is actually paginated by post id/date, not by like id/date as the private
endpoint is. Thus it does not allow getting database IDs of others’ likes.
Happens commonly for e.g. replies to follower-only posts
if no one on your instance follows the replied-to account
or replies/quotes of deleted posts.
Before this change Masto API response would treat those
replies as root posts, making it hard to automatically or
mentally filter them out.
With this change replies already show up sensibly as
recognisable replies in akkoma-fe.
Quotes of unavailable posts however still show up as if they
weren’t quotes at all, but this can only be improved client-side.
Fixes: https://akkoma.dev/AkkomaGang/akkoma/issues/715
They do nothing. As documented[1] only three specific
options regarding timeouts are parsed for individual requests,
and none of them is set by AdapterHelper, which only sets pool-specific options.
In particular this means we always relied on Mint’s default CA cert
verification based on queries to the CAStore package (which we include).
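For reference, a per-request timeout Finch does honor would be passed
like this (pool name illustrative):

    Finch.build(:get, "https://example.com")
    |> Finch.request(MyFinch, receive_timeout: 5_000, pool_timeout: 5_000)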
[1]: https://hexdocs.pm/finch/Finch.html#request/3-options
It required a bunch of boilerplate, some of it even call-specific,
and is not necessary since we can just capture the real logger
as already done in other tests.
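I.e. something along the lines of (function under test and expected
message illustrative):

    import ExUnit.CaptureLog

    assert capture_log(fn ->
             run_the_code_under_test()
           end) =~ "expected log fragment"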
While its data was included in healthcheck responses,
it was not used to determine the healthy status,
and for informational purposes Prometheus metrics,
the ObanWeb dashboard or the Phoenix live dashboard are all better fits.
In particular, the data shown in healthcheck responses had no temporal
information, but there’s quite a difference between X failures scattered
across many days of uptime and X failures within a couple of minutes.
If the id of another activity type was used
it would show the post referenced by the activity
but wrongly attribute it to the activity actor
instead of the actual author.
E.g. ids of like activities can be obtained from
pagination info of the favourites endpoint.
For all other activity types the id would need to be guessed
which is considered practically infeasible for Flake UUIDs.
This should’ve been mostly harmless in practice, since:
- the activity has the same context as the original post, so
both the original and misattributed duplicate will show up in the
same thread
- only posts liked by a user can be misattributed to them,
presumably making it hard/impossible to associate someone with
content they disagree with
- by default only the liking user themself can view their like history
and therefore obtain IDs for their like activities.
Notably though, there is a user setting to allow anyone to browse
one’s like history and therefore obtain like IDs. However, since
akkoma-fe has no support for actually displaying those, there might
be no actual users of this feature.
This is more intuitive for users.
On the flip side, this makes the API more confusing
since now min_id/max_id no longer correspond to ids returned in the
response and only link headers can be used to traverse all response
pages. However, Mastodon already does this according to its
documentation, so clients should already handle this well.
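Such a Link header looks like e.g. (URL and ids illustrative):

    Link: <https://example.com/api/v1/follow_requests?max_id=9y1jOdGk>; rel="next",
          <https://example.com/api/v1/follow_requests?min_id=A54StlT2>; rel="prev"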
The only other usage of get_follow_requests_query is interested solely
in the total count of requests (from active users), thus changing its
select part is safe.
Also gets rid of User.get_follow_requests, which was unused outside tests.
Fixes: https://akkoma.dev/AkkomaGang/akkoma/issues/380
The old approach required adding a special virtual field
to any table potentially needing such foreign-id pagination and
also still required manually sorting according to pagination settings
since the pagination helper does not know whether
this virtual field was set or not.
Using lists with each entry containing the pagination id and the actual
entry instead allows any table to use this mechanism unchanged and
does not require manually sorting.
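In sketch form (query simplified; the pagination helper sorts and cuts
pages on the tuple’s first element):

    import Ecto.Query

    from(a in Activity,
      join: o in Object,
      on: fragment("(?->>'object') = (?->>'id')", a.data, o.data),
      where: fragment("?->>'type' = ?", a.data, "Like"),
      select: {a.id, o}
    )
    |> Pagination.fetch_paginated(params)
    |> Enum.map(fn {_pagination_id, object} -> object end)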
Since it was unused, this also drops the pagination mode parameter from
fetch_favourited_with_fav_id.
Furthermore, as a side effect of this change a bug in the favourite
benchmark is fixed. It used to incorrectly attempt to use IDs of
the liked objects for pagination instead of the like IDs as advertised
in Link headers.
Split into scheduled (intentionally delayed until a later trigger date)
and available (eligible for immediate processing but not yet started).
This will help in diagnosing overloaded instances or too-low queue
limits as well as expose configuration mishaps like
https://akkoma.dev/AkkomaGang/akkoma/issues/924.
(The latter by violently crashing the telemetry poller process while
attempting put_in for a non-configured queue, creating well-visible logs.)
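The counts can be gathered along these lines (simplified sketch):

    import Ecto.Query

    from(j in Oban.Job,
      where: j.state in ["scheduled", "available"],
      group_by: [j.queue, j.state],
      select: {j.queue, j.state, count(j.id)}
    )
    |> Repo.all()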
Instance stats are cached and only renewed every 5 minutes anyway,
and IO stats are cumulative over the entire runtime, so no info
is lost.
Polling those every 10s is wasteful and the next commit will add a
periodic measurement which is (comparatively) more costly to compute.
It’s only used in one place, and even there not all of
its functionality is needed. It’s not only simpler and shorter,
but also easier to understand if Tesla’s keyword list is just inlined.
The only useful bit, which is now migrated to Pleroma.HTTP, is the
addition of the user-agent header (except, sometimes, in tests).
The web_push_encryption lib assumes HTTPoison semantics
which is why we also need to convert the header format.
Inspecting the library’s source shows that Tesla won’t
understand the options anyway and it’s only used to enable TLS/SSL.
When this was ported from Pleroma in
5da9cbd8a5
we did not take into account that Akkoma’s and Pleroma’s
HTTP backend take different options.
There’s no need for the :pool option
and enforcing a body limit on download
is currently not possible with Finch.
Ideally we’d use a single common HTTP request error format handling
for _all_ HTTP requests (including non-ActivityPub requests, e.g. NodeInfo).
But for the purpose of this commit this would create too much noise,
and it is significant effort to go through all error pattern matches etc.
to ensure it all remains correct or update it as needed.
The Signature module now handles interaction with the HTTPSignature library
and the plug handles everything related to HTTP itself. It now also no longer needs to be public.
To achieve this signatures are now generated by a custom
Tesla Middleware placed after the FollowRedirects Middleware.
Any request which should be signed needs
to pass the signing key via opts.
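A minimal sketch of such a middleware (module name and
signature_headers/2 hypothetical):

    defmodule Pleroma.HTTP.Middleware.Sign do
      @behaviour Tesla.Middleware

      @impl true
      def call(env, next, _opts) do
        case env.opts[:signing_key] do
          # no key passed: plain, unsigned request
          nil ->
            Tesla.run(env, next)

          key ->
            # running after FollowRedirects means each redirect hop
            # gets signed for its actual target URL
            env
            |> Tesla.put_headers(signature_headers(env.url, key))
            |> Tesla.run(next)
        end
      end
    end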
This also unifies the associated header logic between fetching and
publishing, notably resolving a divergence wrt the "host" header.
The relevant spec demands the host header include the port
if not using the protocol’s standard port.
Fixes: https://akkoma.dev/AkkomaGang/akkoma/issues/731
The remote part is included in federated emoji names by e.g.
Iceshrimp.NET ever since remote emoji support was added in
4d21aa1670
and as of writing it still continues to do so.
It adds no value for us though; we add the remote part automatically
based on the URL and it makes it more difficult to correctly coalesce
the original reaction (from a user for whom the emoji was local)
and the subsequent reactions with the identical emoji from users of
other instances. Additionally the remote part can cause issues when
later used with our REST API.
For non-reactions this is unproblematic and thus
there’s no need to change anything there.
Use a migration to fix up existing activities.
This will cause some (further) desync from the inlined reactions
array, but will be fixable with the resync mix task and avoids
issues when running the resync without first fixing existing activities.
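The name normalization itself boils down to (sketch):

    # "blobcat@example.org" and plain "blobcat" should coalesce,
    # so drop the remote part; it is re-derived from the URL anyway
    def strip_remote_part(name), do: name |> String.split("@") |> hd()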
This was already removed from mix.exs in
ea5a2a9f21
but as it turns out it was also re-set
during runtime.
Since we never set it outside of CI in
the first place there’s no need to
force-disable it here.
Even if --keep-threads is used, replies of
pinned posts might still be pruned as documented
for this option.
Thus keep-threads is no reason to skip reply counter recalculation.
Presumably those inlined copies were added to avoid the need for queries
each time the info is needed. However, they tend to desync from the actual
activities for not yet fully understood reasons; see:
https://akkoma.dev/AkkomaGang/akkoma/issues/956
As a workaround until the root cause is identified and fixed and/or
we no longer rely on the inlined copies, add a mix task to regenerate
the inlined "cache" from the authoritative activity data.
Does not yet deal with inlined emoji reactions
since their format is a pain to deal with.
The ActivityPub spec demands each actor have at least an inbox and outbox.
Furthermore, the current representation wouldn’t even be accepted by
ourselves, since our processing requires objects to be flagged with a
sensible type else we don't know what to do with it.
Including the nickname is just a preemptive measure.
There were no reports of this causing problems in real-world deployments
and at least for federation with other Akkoma instances we should have
never run into this, since we _always_ expose the full representation of
the instance actor and atm also always use the latter for fetching
remote content (which prevents us from fetching followers-only content).
Nonetheless, serving something which violates spec and we wouldn’t even
accept ourselves seems obviously bad, so fix it and add tests to prevent
this from recurring.
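For reference, a minimal spec-conforming representation might look
like this (Elixir map sketch, URLs illustrative):

    %{
      "id" => "https://example.com/internal/fetch",
      "type" => "Application",
      "preferredUsername" => "internal.fetch",
      "inbox" => "https://example.com/internal/fetch/inbox",
      "outbox" => "https://example.com/internal/fetch/outbox"
    }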
Fixes bug introduced in 8f322456a0
Most HTTPS requests actually fall into the single-digit millisecond
range or below on average. Even the more costly endpoints almost always
average around the lower third of the millisecond magnitude.
Only endpoints doing synchronous remote HTTP fetches (e.g. for signing
keys) occasionally spike into the order of seconds.
As is, the bucket resolution is completely unfit for reasoning about
anything; even plain averages are better indicators.
Most database queries take less than a millisecond and even in total
almost all take less than 50ms for me. Decode time is but a tiny
fraction of that and queue time usually only takes a small part of total
time too (but may spike on high load).
Shift the buckets down to be able to
give insight into all relevant cases.
In particular this allows determining whether high averages
are the result of generally high processing times or just a few
outliers lifting the whole average up (e.g. slow network fetches).
Exact numbers are biased towards my setup for lack of other comparison
data, but at least the order of magnitude should be ok everywhere.
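Sketched with Telemetry.Metrics (boundaries illustrative, in
milliseconds; reporter_options depend on the Prometheus reporter):

    distribution("pleroma.repo.query.total_time",
      unit: {:native, :millisecond},
      reporter_options: [
        buckets: [0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, 100, 1_000]
      ]
    )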