While the previous version was already plenty fast for smaller instances,
there seems to be a more-than-linear slowdown for large instances.
These prefilters should (hopefully) not exclude anything in need of
fixing, but seem to already cut down significantly on the overall
processing time.
Iceshrimp.NET, for example, has included the remote part in federated
emoji names ever since remote emoji support was added in
4d21aa1670
and as of writing it continues to do so.
It adds no value for us though: we derive the remote part automatically
from the URL, and its presence makes it more difficult to correctly
coalesce the original reaction (from a user for whom the emoji was local)
with the subsequent reactions using the identical emoji from users of
other instances. Additionally, the remote part can cause issues when
later used with our REST API.
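For illustration (emoji name and domain here are hypothetical): such a
reaction may federate under a name like "blobcat@origin.example", whereas
we store and coalesce it as plain "blobcat" together with its URL.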
For non-reactions this is unproblematic and thus
there’s no need to change anything there.
Use a migration to fix up existing activities.
This will cause some (further) desync from the inlined reactions
array, but that is fixable with the resync mix task and avoids
issues when running the resync without first fixing existing activities.
This was already removed from mix.exs in
ea5a2a9f21
but, as it turns out, it was also re-set
at runtime.
Since we never set it outside of CI in
the first place, there's no need to
force-disable it here.
Even if --keep-threads is used, replies to
pinned posts might still be pruned as documented
for this option.
Thus keep-threads is no reason to skip reply counter recalculation.
The ActivityPub spec demands that each actor has at least an inbox and an
outbox.
Furthermore, the current representation wouldn’t even be accepted by
ourselves, since our processing requires objects to be flagged with a
sensible type, or else we don't know what to do with them.
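As a rough illustration (the actor type and all values here are made up,
not our actual instance actor), a minimal spec-conforming actor document
carries at least something like:

    # Illustrative Elixir map of the JSON document; values are examples.
    %{
      "id" => "https://example.com/internal/fetch",
      "type" => "Application",
      "preferredUsername" => "internal.fetch",
      "inbox" => "https://example.com/internal/fetch/inbox",
      "outbox" => "https://example.com/internal/fetch/outbox"
    }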
Including the nickname is just a preemptive measure.
There were no reports of this causing problems in real-world deployments
and at least for federation with other Akkoma instances we should never
have run into this, since we _always_ expose the full representation of
the instance actor and atm also always use the latter for fetching
remote content (which prevents us from fetching followers-only content).
Nonetheless, serving something which violates the spec and which we
wouldn’t even accept ourselves seems obviously bad, so fix it and add
tests to prevent this from recurring.
Fixes bug introduced in 8f322456a0
Presumably those inlined copies were added to avoid the need for queries
each time the info is needed. However, they tend to desync from the actual
activities for not yet fully understood reasons; see:
https://akkoma.dev/AkkomaGang/akkoma/issues/956
As a workaround until the root cause is identified and fixed and/or
we no longer rely on the inlined copies, add a mix task to regenerate
the inlined "cache" from the authoritative activity data.
Does not yet deal with inlined emoji reactions
since their format is a pain to deal with.
This emulates the previous effective behaviour of only
running tests on pull requests and skipping straight
to building artefacts and docs after merge.
Pull requests which do not pass tests shouldn’t be merged in the first place.
Most HTTPS requests actually fall into the single-digit millisecond
range or below on average. Even the more costly endpoints almost always
average around the lower third of the millisecond magnitude.
Only endpoints doing synchronous remote HTTP fetches (e.g. for signing
keys) occasionally spike into the order of seconds.
As is, the bucket resolution is completely unfit for reasoning about
anything; even plain averages are a better indication.
Most database queries take less than a millisecond and even in total
almost all take less than 50ms for me. Decode time is but a tiny
fraction of that and queue time usually only takes a small part of total
time too (but may spike on high load).
Shift the buckets down to be able to
give insight into all relevant cases.
In particular this makes it possible to determine whether high averages
are the result of generally high processing times or just a few
outliers lifting the whole average up (e.g. slow network fetches).
Exact numbers are biased towards my setup for lack of other comparison
data, but at least the order of magnitude should be ok everywhere.
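As an illustrative sketch (metric names follow the usual Phoenix/Ecto
telemetry events; the bucket boundaries are made-up examples, not the
committed values), buckets shifted towards the millisecond range might
look like:

    import Telemetry.Metrics

    [
      # HTTP request duration: most land in single-digit milliseconds,
      # remote fetches occasionally reach seconds
      distribution("phoenix.endpoint.stop.duration",
        unit: {:native, :millisecond},
        reporter_options: [buckets: [1, 5, 10, 25, 50, 100, 250, 500, 1_000, 5_000]]
      ),
      # DB query total time: almost everything finishes well below 50ms
      distribution("pleroma.repo.query.total_time",
        unit: {:native, :millisecond},
        reporter_options: [buckets: [0.5, 1, 2, 5, 10, 25, 50, 100, 500]]
      )
    ]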
The old code was unnecessarily complicated, full of unused and/or
duplicated functions making it hard to understand what will actually
happen and for whom at runtime.
Since we only support a single HTTP backend this can be greatly simplified.
Now everything gets default options from a single place and only
functions to modify the parts actually differing across calls are exposed.
No HTTP3/QUIC support yet.
Note: allowing both here means we don't actually profit from HTTP2
multiplexing due to limitations in Finch (or maybe in a dependency of
Finch). But it means we can now interact with HTTP2-only instances (if
such exist) and may still get minor gains from header compression etc.
Adventurous admins can change the config to allow only HTTP2,
thus profiting from multiplexing (but breaking federation with
HTTP1-only instances which are in fact observed to exist).
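For reference, a minimal sketch of such a pool configuration (the pool
option names come from Finch's documented API; the name and the exact
Akkoma config wiring are assumptions):

    # Child spec allowing both protocols; switching to [:http2] alone
    # enables multiplexing but breaks HTTP1-only peers.
    {Finch,
     name: MyApp.Finch,
     pools: %{
       :default => [protocols: [:http1, :http2]]
     }}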
The next Tusky release is going to remove support for the v1 filters API,
see: https://codeberg.org/tusky/Tusky/pulls/5215.
Since Akkoma doesn't support the v2 API this
could cause significant issues for Akkoma users.
And improve monitoring documentation, in particular with
more detailed instructions for setting up Prometheus
metric scraping and the reference Grafana dashboard.
Currently translated at 0.1% (2 of 1004 strings)
Translated using Weblate (Turkish)
Currently translated at 100.0% (0 of 0 strings)
Added translation using Weblate (Turkish)
Co-authored-by: Anonymous <noreply@weblate.org>
Co-authored-by: Hasan Yıldız <hasanyildiz0@yaani.com>
Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/tr/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
Resolves an interop issue with a (reverted but possibly returning) bridgy
change, as reported in the comments of
https://akkoma.dev/AkkomaGang/akkoma/issues/831.
This won't change anything for the problem originally reported there.
Notably we now always fetch the full collection (up to the configured
item count limit) instead of only using the first page if its link was
inlined.
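A conceptual sketch of the new behaviour (helper names are hypothetical,
not the actual implementation): pages are followed via "next" until the
configured limit is reached instead of stopping after the first page.

    # Walks an ActivityPub OrderedCollection, accumulating items until
    # the limit is hit or no further page is linked.
    def collect_items(collection, limit), do: do_collect(collection, [], limit)

    defp do_collect(%{"first" => first}, acc, limit) when is_map(first),
      do: do_collect(first, acc, limit)

    defp do_collect(%{"orderedItems" => items, "next" => next}, acc, limit)
         when is_binary(next) and length(acc) + length(items) < limit do
      case fetch_page(next) do
        {:ok, page} -> do_collect(page, acc ++ items, limit)
        _ -> acc ++ items
      end
    end

    defp do_collect(%{"orderedItems" => items}, acc, limit),
      do: Enum.take(acc ++ items, limit)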
If only individual steps are conditional, the whole CI workflow
will be held up waiting until a slot is available to start them,
just to then not do anything at all.
This allows us to drop the "when" condition from individual steps
whenever it is now redundant with the top-level condition.
The lint pipeline spent ~7 minutes downloading and compiling
and only a few seconds actually checking the style.
The former is fully redundant with what’s done during test anyway.