Rpc failed with reason: noconnection

false 	table_cache_hit 	Table cache hit ratio is too low: 0.69681
false 	index_cache_hit 	Index cache hit ratio is too low: 0.89572 

from iotop:

Total DISK READ:        65.62 M/s | Total DISK WRITE:       148.87 K/s
Current DISK READ:     114.62 M/s | Current DISK WRITE:       2.28 M/s

with postgres being basically the only process active.

now could be a good time to mention that my postgresql database (for akkoma, I presume) is somewhere around 60gb. I am pretty sure that is unreasonably large, considering I prune all entries older than 1 day every other month.

merciful christ what is that disk read???

that’s probably your issue; your hard drive probably physically cannot read much faster

IHBA (an instance that is over 5 years old) peaks at 200 K/s (1000x less than yours)

is that persistent?

sudo iotop -od 10 will average every 10s and if it’s still in the many M/s you’ve got something horribly wrong

60gb is also massive

ok with that command I see this process running at many MB/s:

  11278 be/4 postgres   66.33 M/s    0.00 B/s postgres: 13/main: autovacuum worker akkoma

is this the result of the prune command?

this is a single user instance so I do not know why the database is so large or why there is this amount of disk activity.

any advice appreciated.

oh ok yes in that case this is entirely fine, you’ve got a vacuum running after your prune

this can take a while (like, hours) - leave it overnight and it should probably be fine by morning

oh ok that is a relief.

could you shed any light on why the postgresql database is so large? there are only my posts on the server, which could not account for this many gigabytes.

are there any tips for reducing the size of it? I am running out of space on my vps.

appreciate your support with this server issue.

do you follow any relays? those guys can massively inflate the size of any database since you’re pulling in a whole bunch of posts

you may also have to spend some time on a proper vacuum which will reclaim any space the prune internally freed - see Database maintenance tasks - Akkoma Documentation
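if you want to see where the space is actually going before pruning, a per-table size breakdown helps. a sketch, where the database name akkoma and running as the postgres user are assumptions; the echo makes it a dry run, drop it to execute:

```shell
# List the ten largest tables by total size (table + indexes + TOAST).
# The query uses only standard PostgreSQL catalog functions;
# the database name 'akkoma' is an assumption.
QUERY="SELECT relname, pg_size_pretty(pg_total_relation_size(oid))
       FROM pg_class WHERE relkind = 'r'
       ORDER BY pg_total_relation_size(oid) DESC LIMIT 10;"
echo sudo -u postgres psql -d akkoma -c "$QUERY"   # dry run
```

if the top entries are activities/objects tables full of remote content, the relay explanation above fits.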

I am following several relays, however I thought the prune command would clear those past posts from the database?

is this not so?

so once this current vacuum command is complete I ought to look at the vacuum full command shown in that maintenance guide?

thanks again for your support.

there was a note there that a certain amount of free space is required to run these commands. I only have 100gb of disk space and the database is now 60gb; will I likely run into trouble?

it does

however postgres does not automatically return that free space to the system

see PostgreSQL: Documentation: 16: VACUUM

the autovacuum process running right now may give you some more space, you can watch your free space with df -h and see if it’s increasing
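a quick snapshot sketch you can re-run (or wrap in `watch -n 60`); the data path here is a typical Debian default and may differ on your VPS:

```shell
# Snapshot free space on the volume holding the database; if the path
# doesn't exist, fall back to the root filesystem.
SNAPSHOT=$(df -h /var/lib/postgresql 2>/dev/null || df -h /)
echo "$SNAPSHOT"
```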

ok well I will wait for this ‘autovacuum worker akkoma’ process to complete, then run ‘vacuum full’ and see if that returns free space!

df -h appears to show a fairly stable size for space available, so maybe the process is not returning storage.

thanks again for your help!

if it does not give you the storage you desire, you can turn down remote_post_retention_days and prune again, with a vacuum - i personally use a value of 30

i’ll add that unless you run with --prune-orphaned-activities only the posts themselves will be deleted, not any associated activities.
The latter often end up taking up more space than the former, so adding it can help. However, pruning orphaned activities is the more costly process; it can take quite a while on large instances and, while running, bog down the instance so much it’s not really usable.

thank you Oneric, I will run prune with --prune-orphaned-activities after running vacuum full.

I am a little nervous that I will run out of space during the vacuum though since the database is now so large!

I will run prune with --prune-orphaned-activities after running vacuum full.

The other way around would be better (else space freed up by deleting activities won’t be reclaimed on disk), but if you have enough storage space to spare it might not matter much
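as a sketch of that order (the echoes just print the commands as a dry run; the `./bin/pleroma_ctl` path assumes the OTP-style install used earlier in this thread):

```shell
# Recommended order: prune (including orphaned activities) first, then
# VACUUM FULL, so the space the prune frees is actually reclaimed on disk.
STEP1="./bin/pleroma_ctl database prune_objects --prune-orphaned-activities"
STEP2="./bin/pleroma_ctl database vacuum full"
echo "$STEP1"   # dry run; remove the echoes to actually execute
echo "$STEP2"
```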

I am a little nervous that I will run out of space during the vacuum though since the database is now so large!

VACUUM FULL more or less rewrites the database, meaning while running it may (up to) double in size before the old copy gets deleted
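a rough back-of-the-envelope with the numbers from this thread (100gb disk, 60gb database) shows why pruning first matters:

```shell
# Worst case: VACUUM FULL needs up to the database's own size again
# while it rewrites tables. Numbers taken from this thread.
DB_GB=60
DISK_GB=100
FREE_GB=$((DISK_GB - DB_GB))
if [ "$FREE_GB" -lt "$DB_GB" ]; then
  echo "tight: ${FREE_GB}GB free vs up to ${DB_GB}GB needed; prune first"
fi
```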

apologies for resurrecting this thread, however I attempted to run vacuum full on my server only for the process to error out very quickly with the following message:

akkoma@z133:~$ ./bin/pleroma_ctl database vacuum full

13:55:14.598 [info] Running VACUUM FULL.

13:55:14.598 [warning] Re-packing your entire database may take a while and will consume extra disk space during the process.

13:56:19.485 [debug] QUERY ERROR db=64886.5ms queue=0.3ms idle=1248.2ms
vacuum full; []
** (Postgrex.Error) ERROR XX001 (data_corrupted) invalid page in block 878567 of relation base/16781/17226
    lib/ecto/adapters/sql.ex:1047: Ecto.Adapters.SQL.raise_sql_call_error/1
    nofile:1: (file)
    (stdlib 5.0.2) erl_eval.erl:750: :erl_eval.do_apply/7
    (elixir 1.15.4) lib/code.ex:543: Code.validated_eval_string/3

the autovacuum process is running again however I do not know if that will resolve my database size issues.

–edit–

some searching for this error message tells me it could be very bad! any advice is appreciated, I do not know a lot about sql.

this implies serious hardware issues

I do not know your hosting situation. if it’s on a vps, then you can probably request support from your hosting provider. if not, then your own hardware may not be in the best shape

I would advise stopping all processes before it gets any worse, and backing up your database

if the vacuum is still running, leave it running and stop akkoma

let the vacuum finish on its own
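for the backup, pg_dump is the usual tool. a sketch, where the database name akkoma and the target path are assumptions; the echo makes it a dry run:

```shell
# Custom-format (-Fc) dumps are compressed, so the file should come out
# much smaller than the 60GB on-disk size. Remove 'echo' to execute.
BACKUP="/var/backups/akkoma-$(date +%F).dump"
echo sudo -u postgres pg_dump --format=custom --file="$BACKUP" akkoma
```

make sure the target volume has enough free space before running it for real.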

the service is running on a vps. I will look at that repair code you sent, though I am not sure of basics like (a) how to enter the code or (b) how to interact with the SQL daemon.

thanks for the support.

will this faulty page be in the database akkoma or [sitename]? \l in psql lists two databases.

this is now fully outside the bounds of akkoma support

I cannot tell you which database you use, as I do not have access to your machine or config.
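that said, the error itself encodes which database is affected: in base/16781/17226, 16781 is the database’s OID and 17226 the damaged relation’s file node. two catalog queries map those numbers back to names (a sketch; the echoes make it a dry run, and DBNAME is a placeholder for whatever the first query returns):

```shell
# OIDs taken from the error message earlier in this thread.
Q1="SELECT datname FROM pg_database WHERE oid = 16781;"
Q2="SELECT relname FROM pg_class WHERE relfilenode = 17226;"
echo sudo -u postgres psql -c "$Q1"             # which database
echo sudo -u postgres psql -d DBNAME -c "$Q2"   # which table/index (run inside that database)
```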

ok well thanks for the guidance I will have to guess.