Instance stats are cached and only renewed every 5 minutes anyway,
and IO stats are cumulative over the entire runtime, so no info
is lost.
Polling those every 10s is wasteful, and the next commit will add a
periodic measurement which is (comparatively) more costly to compute.
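For illustration, a dedicated slower poller for these stats could look
roughly like this (module, poller and measurement names are made up,
not the actual diff):

    defmodule MyApp.Telemetry do
      # Sketch only: a Phoenix-style telemetry supervisor where the
      # cached/cumulative stats get their own, much slower poller.
      use Supervisor

      def start_link(arg),
        do: Supervisor.start_link(__MODULE__, arg, name: __MODULE__)

      @impl true
      def init(_arg) do
        children = [
          {:telemetry_poller,
           name: :instance_stats_poller,
           # MyApp.Stats.report/0 is a stand-in for whatever emits the
           # instance/IO stats telemetry events
           measurements: [{MyApp.Stats, :report, []}],
           # the underlying stats only change every ~5 minutes, so
           # polling every 10s gains nothing
           period: :timer.minutes(5)}
        ]

        Supervisor.init(children, strategy: :one_for_one)
      end
    end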
Most HTTPS requests actually fall into the single-digit millisecond
range or below on average. Even the more costly endpoints almost
always average below a few hundred milliseconds.
Only endpoints doing synchronous remote HTTP fetches (e.g. for signing
keys) occasionally spike into the order of seconds.
As is, the bucket resolution is completely unfit for reasoning about
any of this; even plain averages are a better indicator.
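Roughly the shape this aims for, with illustrative boundaries only and
assuming the Prometheus reporter reads buckets from reporter_options
as TelemetryMetricsPrometheus does:

    import Telemetry.Metrics

    # Request duration histogram with most of its resolution in the
    # single- to triple-digit millisecond range; anything beyond a few
    # seconds just falls into the last (+Inf) bucket.
    distribution("phoenix.endpoint.stop.duration",
      unit: {:native, :millisecond},
      reporter_options: [
        buckets: [1, 2, 5, 10, 25, 50, 100, 250, 500, 1_000, 2_500, 5_000]
      ]
    )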
Most database queries take less than a millisecond, and even the total
time almost always stays below 50ms for me. Decode time is only a tiny
fraction of that, and queue time usually makes up just a small part of
the total as well (though it may spike under high load).
Shift the buckets down so they can give insight into all relevant
cases.
In particular, this makes it possible to determine whether high
averages are the result of generally high processing times or just a
few outliers lifting the whole average (e.g. slow network fetches).
Exact numbers are biased towards my setup for lack of other comparison
data, but at least the order of magnitude should be ok everywhere.
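The same idea sketched for the Ecto query metrics, again with
illustrative boundaries and assuming the usual pleroma.repo.query.*
event names:

    import Telemetry.Metrics

    # Sub-millisecond buckets at the low end so fast queries, decode
    # time and queue time still resolve into something useful, plus a
    # few coarse buckets for load spikes.
    for measurement <- ["total_time", "query_time", "decode_time", "queue_time"] do
      distribution("pleroma.repo.query.#{measurement}",
        unit: {:native, :millisecond},
        reporter_options: [
          buckets: [0.1, 0.25, 0.5, 1, 2.5, 5, 10, 25, 50, 100, 250, 500]
        ]
      )
    end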
With kilobyte the resulting numbers got too large and were cut off
in the charts, making them useless. However, even an idle Akkoma
server’s memory usage is in the lower hundreds of megabytes, so
we don’t need this much precision for the dashboard to begin with.
Other metric users might prefer base units and can handle scaling in a
smarter way, so keep this configurable.
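A minimal sketch of keeping the unit configurable; the config key and
its default are made up, only the {:byte, unit} conversion is stock
Telemetry.Metrics:

    import Telemetry.Metrics

    # :metrics_memory_unit is a hypothetical config key; it defaults to
    # a coarse unit so the dashboard chart shows e.g. ~300 rather than
    # a six- or nine-digit number, while other metric users can set
    # :byte to get the base unit.
    mem_unit = Application.get_env(:my_app, :metrics_memory_unit, :megabyte)

    summary("vm.memory.total", unit: {:byte, mem_unit})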