The old code was unnecessarily complicated, full of unused and/or
duplicated functions, making it hard to understand what actually
happens, and for whom, at runtime.
Since we only support a single HTTP backend this can be greatly simplified.
Now everything gets default options from a single place, and only
functions to modify the parts actually differing across calls are
exposed.
No HTTP3/QUIC support yet.
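A minimal sketch of the resulting shape (module, option, and function
names here are made up for illustration, not Akkoma's actual API): all
requests start from one shared set of defaults and callers override
only what differs for their call.

    defmodule HTTPDefaultsSketch do
      # one shared source of defaults for every outgoing request
      @defaults [receive_timeout: 15_000, pool_timeout: 10_000]

      # callers pass only the options actually differing for their call
      def options(overrides \\ []), do: Keyword.merge(@defaults, overrides)
    end

    # e.g. a request that just needs a longer receive timeout:
    HTTPDefaultsSketch.options(receive_timeout: 60_000)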
Note that allowing both here means we don't actually profit from HTTP2
multiplexing, due to limitations in Finch (or perhaps one of its
dependencies). But it means we can now interact with HTTP2-only
instances (if such exist) and may still get minor gains from header
compression etc.
Adventurous admins can change the config to allow only HTTP2, thus
profiting from multiplexing, but breaking federation with HTTP1-only
instances, which are in fact observed to exist.
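For illustration, an HTTP2-only setup might look roughly like the
following. Finch itself accepts a `protocols:` pool option, but whether
and how Akkoma exposes it under :pleroma, :http is an assumption here,
so check the config documentation of your release first.

    # hypothetical adapter options restricting pools to HTTP2;
    # this enables multiplexing but breaks federation with
    # HTTP1-only instances
    config :pleroma, :http,
      adapter: [protocols: [:http2]]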
The next Tusky release is going to remove support for the v1 filters API,
see: https://codeberg.org/tusky/Tusky/pulls/5215.
Since Akkoma doesn't support the v2 API, this
could cause significant issues for Akkoma users.
Also improve the monitoring documentation, in particular with more
detailed instructions for setting up Prometheus metric scraping and
the reference Grafana dashboard.
Currently translated at 0.1% (2 of 1004 strings)
Translated using Weblate (Turkish)
Currently translated at 100.0% (0 of 0 strings)
Added translation using Weblate (Turkish)
Co-authored-by: Anonymous <noreply@weblate.org>
Co-authored-by: Hasan Yıldız <hasanyildiz0@yaani.com>
Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/tr/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
Resolves an interop issue with a (reverted but possibly returning)
bridgy change, as reported in the comments of
https://akkoma.dev/AkkomaGang/akkoma/issues/831.
This won't change anything for the problem originally reported there.
Notably, we now always fetch the full collection (up to the configured
item count limit) instead of using only the first page when its link
was inlined.
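Roughly, the fetch now pages through the whole collection instead of
stopping at an inlined first page. The sketch below illustrates the
idea with made-up helper names, not the actual Akkoma internals.

    defmodule CollectionFetchSketch do
      # follow "next" links until no pages remain or the limit is hit
      def fetch_all(page, limit, acc \\ []) do
        acc = acc ++ Map.get(page, "orderedItems", [])

        case Map.get(page, "next") do
          next when is_binary(next) and length(acc) < limit ->
            # fetch_page/1 stands in for the real signed HTTP fetch
            with {:ok, next_page} <- fetch_page(next),
                 do: fetch_all(next_page, limit, acc)

          _ ->
            {:ok, Enum.take(acc, limit)}
        end
      end

      defp fetch_page(_url), do: {:error, :not_implemented}
    end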
If only individual steps are conditional, the whole CI workflow is
still held up waiting until a slot is available to start them, just to
then not do anything at all.
This allows us to drop the "when" condition from individual steps
wherever it is now redundant with the top-level condition.
The lint pipeline spent ~7 minutes downloading and compiling and only
a few seconds actually checking the style. The former is fully
redundant with what’s done during test anyway.
- This adds extra tests to be sure that scrubbing still happens.
- While doing this I noticed that the htmlMfm key wasn't stored in the database when coming through the federator. This has now been fixed too.
- We also test that the values true, false, or a missing attribute all work for incoming messages.
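As a self-contained illustration of the kind of scrubbing check added
(the real tests live in Akkoma's suite; FastSanitize is the sanitizer
library used, but the concrete input and assertions are assumptions):

    defmodule ScrubbingSketchTest do
      use ExUnit.Case, async: true

      test "dangerous markup is stripped from rendered content" do
        {:ok, clean} =
          FastSanitize.basic_html("<script>alert('x')</script><b>mfm</b>")

        refute clean =~ "<script>"
        assert clean =~ "mfm"
      end
    end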
Previously all such requests led to '401 Unauthorized',
which might have triggered retries.
Now, to not leak any MRF info, we just indicate acceptance
for POST requests without actually processing the object
and indiscriminately return "not found" for GET requests.
Notably, this change now also causes all signed fetch requests from
blocked domains to be rejected, even if authorized_fetch isn’t enabled.
Fixes: https://akkoma.dev/AkkomaGang/akkoma/issues/929
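In Plug terms the new behaviour boils down to something like this
sketch (module and function names are illustrative, not Akkoma's
actual controller code):

    defmodule BlockedInstanceResponseSketch do
      import Plug.Conn

      # pretend to accept the delivery so the sender won't keep
      # retrying, but never actually process the object
      def respond(%Plug.Conn{method: "POST"} = conn),
        do: send_resp(conn, 202, "")

      # indiscriminately 404 everything else, leaking nothing
      # about MRF policy
      def respond(conn),
        do: send_resp(conn, 404, "Not Found")
    end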
To make it usable in scenarios without a draft.
The next commit adds a user of the new function.
This does technically change behaviour a bit, since
"private" replies to "direct" messages no longer implicitly
address the parent post’s actor. But this seems like a contrived
scenario which was likely never intended to actually occur anyway,
as corroborated by the absence of tests for it.
A pool timeout shorter than the receive timeout
makes race conditions leading to active connections
being killed more likely and also just doesn’t make
much sense in general.
See: https://github.com/sneako/finch/pull/292
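A hedged sketch of the intended relation between the two settings;
pool_timeout exists under :pleroma, :http (see the next entry), but
the receive_timeout key name is an assumption here, so verify both
against your release's config docs.

    # keep the pool timeout at least as long as the receive timeout;
    # a shorter pool timeout invites the race described above
    config :pleroma, :http,
      pool_timeout: 15_000,
      receive_timeout: 15_000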
This was added in a924e117fd
with its name mirroring Finch’s own config option, but such a setting
already existed as pool_timeout since 2fe1484ed3,
and the new one was never actually used.
It may still crash due to a race condition between checking for file
existence and opening/streaming it, but File.stream! has no
non-raising counterpart we can use to avoid this completely.
Just not deleting such files during a reload is easy enough.
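The remaining window, sketched below with a hypothetical helper;
File.stream!/1 is lazy, so it only raises once the stream is first
read, and there is no non-raising File.stream counterpart to reach
for.

    defmodule StaticFileSketch do
      # the file can disappear between this check and the first read
      # of the stream below; nothing in the stdlib closes that window
      def stream_file(path) do
        if File.exists?(path) do
          {:ok, File.stream!(path)}
        else
          {:error, :enoent}
        end
      end
    end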