false table_cache_hit Table cache hit ratio is too low: 0.69681
false index_cache_hit Index cache hit ratio is too low: 0.89572
from iotop:
Total DISK READ: 65.62 M/s | Total DISK WRITE: 148.87 K/s
Current DISK READ: 114.62 M/s | Current DISK WRITE: 2.28 M/s
with postgres being basically the only process active.
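For context, cache-hit warnings like the ones above can be reproduced directly from PostgreSQL's statistics views. A minimal sketch, assuming the database is named `akkoma` and you can run `psql` as the `postgres` user (both assumptions; adjust to your setup):

```shell
# Table and index buffer cache hit ratios from PostgreSQL's statistics views.
# The database name "akkoma" is an assumption.
sudo -u postgres psql -d akkoma -c "
SELECT sum(heap_blks_hit)::float
       / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0) AS table_cache_hit,
       sum(idx_blks_hit)::float
       / nullif(sum(idx_blks_hit) + sum(idx_blks_read), 0)  AS index_cache_hit
FROM pg_statio_user_tables;"
```

A ratio well below ~0.99 means many reads are going to disk instead of being served from shared buffers, which matches the heavy disk-read numbers iotop is showing.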
Now could be a good time to mention that my PostgreSQL database (I presume for Akkoma) is somewhere around 60 GB. I am pretty sure that is unreasonably large, considering I prune all entries older than 1 day every other month.
Could you shed any light on why the PostgreSQL database is so large? There are only my posts on the server, which could not account for this many gigabytes.
Are there any tips for reducing its size? I am running out of space on my VPS.
I am following several relays; however, I thought the prune command would clear past posts from the database?
Is this not so?
So once this current vacuum command is complete, I ought to look at the vacuum full command shown in that maintenance guide?
thanks again for your support.
There was a note there that a certain amount of free space is required to run these commands? I only have 100 GB of disk space and the database is now 60 GB; will I likely run into trouble?
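As a rough sanity check of the free-space concern: VACUUM FULL can temporarily need about as much extra space as the data it rewrites. A minimal sketch with the numbers from this thread (the 10 GB allowance for the OS and everything else is a made-up assumption):

```shell
# Rough headroom estimate before running VACUUM FULL.
# disk_gb and db_gb come from the thread; other_gb is a hypothetical
# allowance for the OS, Akkoma itself, logs, etc.
disk_gb=100
db_gb=60
other_gb=10

free_gb=$((disk_gb - db_gb - other_gb))
echo "free: ${free_gb} GB"

# VACUUM FULL may need up to a second copy of the database while it runs.
if [ "$free_gb" -lt "$db_gb" ]; then
  echo "warning: not enough headroom for a full rewrite"
fi
```

With ~30 GB free against a 60 GB database, a full rewrite is genuinely at risk of filling the disk, so shrinking the data first matters.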
If it does not give you the storage you desire, you can turn down remote_post_retention_days and prune again, with a vacuum. I personally use a value of 30.
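For reference, a sketch of how that might look, assuming an OTP install where `pleroma_ctl` is available and your config lives somewhere like `config/prod.secret.exs` (paths and names vary by setup):

```shell
# 1. In your Akkoma config, lower how long remote posts are kept:
#      config :pleroma, :instance, remote_post_retention_days: 30
#
# 2. Then prune remote posts, vacuuming afterwards to reclaim space:
./bin/pleroma_ctl database prune_objects --vacuum
```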
I'll add that unless you run with --prune-orphaned-activities, only the posts themselves will be deleted, not any associated activities.
The latter often end up taking up more space than the former, so adding it can help. However, pruning orphaned activities is the more costly process; it can take quite a while on large instances and, while running, can bog down the instance so much it's not really usable.
I will run prune with --prune-orphaned-activities after running vacuum full.
The other way around would be better (else space freed up by deleting activities won't be reclaimed on disk), but if you have enough storage space to spare it might not matter much.
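In command form, the suggested order would be roughly this (a sketch, assuming an OTP install):

```shell
# Delete prunable posts and their orphaned activities first...
./bin/pleroma_ctl database prune_objects --prune-orphaned-activities

# ...then rewrite the database so the freed space is returned to the OS.
./bin/pleroma_ctl database vacuum full
```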
I am a little nervous that I will run out of space during the vacuum, though, since the database is now so large!
VACUUM FULL more or less rewrites the database, meaning that while it is running, the database may (up to) double in size before the old copy gets deleted.
Apologies for resurrecting this thread; however, I attempted to run vacuum full on my server, only for the process to error out very quickly with the following message:
akkoma@z133:~$ ./bin/pleroma_ctl database vacuum full
13:55:14.598 [info] Running VACUUM FULL.
13:55:14.598 [warning] Re-packing your entire database may take a while and will consume extra disk space during the process.
13:56:19.485 [debug] QUERY ERROR db=64886.5ms queue=0.3ms idle=1248.2ms
vacuum full; []
** (Postgrex.Error) ERROR XX001 (data_corrupted) invalid page in block 878567 of relation base/16781/17226
lib/ecto/adapters/sql.ex:1047: Ecto.Adapters.SQL.raise_sql_call_error/1
nofile:1: (file)
(stdlib 5.0.2) erl_eval.erl:750: :erl_eval.do_apply/7
(elixir 1.15.4) lib/code.ex:543: Code.validated_eval_string/3
The autovacuum process is running again; however, I do not know if that will resolve my database size issues.
–edit–
Some searching for this error message tells me it could be very bad! Any advice is appreciated; I do not know a lot about SQL.
I do not know your hosting situation. If it's on a VPS, then you can probably request support from your hosting provider. If not, then your own hardware may not be in the best shape.
I would advise stopping all processes before it gets any worse, and backing up your database.
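A sketch of that, assuming a systemd service named `akkoma` and a database also named `akkoma` (both names vary by install):

```shell
# Stop the application so nothing keeps writing to the damaged database.
sudo systemctl stop akkoma

# Take a logical backup. pg_dump will fail loudly if it hits the
# corrupted pages, which at least tells you how far the damage extends.
sudo -u postgres pg_dump -Fc akkoma > akkoma-backup.dump
```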
The service is running on a VPS. I will look at that repair code you sent, though I am not sure of basics like (a) how to enter the code or (b) how to interact with the SQL daemon.