Akkoma VPS storage optimization

Hi floaty & friends!

Kaia asked me to look into Akkoma VPS sizing. The VPS main drive is almost full (83GB of 96GB used), with 71GB taken up by Akkoma: 56GB for the database and 14GB for media.
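
For reference, the numbers above came from something like this (a sketch assuming the standard docker-compose layout with a `db` service, a database named `akkoma`, and the Postgres data and uploads bind-mounted into the repo directory; adjust the names to your setup):

```
# disk usage of the Postgres data dir and the uploads dir
du -sh ./pgdata ./uploads

# database size as Postgres itself reports it
docker compose exec db psql -U akkoma -c \
  "SELECT pg_size_pretty(pg_database_size('akkoma'));"
```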

Since we use the Docker setup, migrating to a larger VPS would not be difficult. Before doing that, though, I wanted to ask whether it makes sense to cut the size down first, e.g. by pruning the database or removing uploads.

How do you keep your database size down?

Thanks!
ultem

that’s a big db - yeah you can 100% prune that down a bit

first port of call would be your retention - in instance → remote post retention days, you can turn it down a bit. the default is 90; i use 30, but as low as 7 is entirely fine

once you set this to a lower value, you can start pruning (see: Database maintenance tasks - Akkoma Documentation, used in combination with the `--limit` option to make it a little easier on your DB)
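
e.g. something like this (a sketch using the same manage.sh wrapper from the docker setup; `--limit` caps how many objects a single run deletes, so you can just rerun it until the backlog is gone):

```
# prune remote posts older than your retention setting, 1000 at a time
sudo -u akkoma ./docker-resources/manage.sh \
  mix pleroma.database prune_objects --limit 1000
```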

Thanks! I made the change in the GUI.

However, if I run `sudo -u akkoma ./docker-resources/manage.sh mix pleroma.database prune_objects --limit 20`, it reports `13:22:41.771 [info] Deleted 20 objects...`, but then breaks after deleting 0 orphaned bookmarks with `(DBConnection.ConnectionError) client #PID<0.99.0> timed out because it queued and checked out the connection for longer than 15000ms`. The same happens even with `--limit 1`. Space is running out, but I still have 2GB left…

Stacktrace:

```
13:22:41.699 [info] Pruning objects older than 20 days, limiting to 20 rows

13:22:41.771 [debug] QUERY OK source="objects" db=30.6ms queue=4.0ms idle=293.6ms
DELETE FROM "objects" AS o0 WHERE (o0."id" IN (SELECT so0."id" FROM "objects" AS so0 WHERE (so0."updated_at" < $1) AND (split_part(so0."data"->>'actor', '/', 3) != $2) LIMIT $3)) [~N[2025-01-02 13:22:41], "brotka.st", 20]

13:22:41.771 [info] Deleted 20 objects...

13:22:41.830 [debug] QUERY OK db=56.1ms queue=2.8ms idle=333.9ms
delete from public.bookmarks
where id in (
  select b.id from public.bookmarks b
  left join public.activities a on b.activity_id = a.id
  left join public.objects o on a."data" ->> 'object' = o.data ->> 'id'
  where o.id is null
)
 []

13:22:41.830 [info] Deleted 0 orphaned bookmarks...

13:22:56.850 [error] Postgrex.Protocol (#PID<0.491.0>) disconnected: ** (DBConnection.ConnectionError) client #PID<0.99.0> timed out because it queued and checked out the connection for longer than 15000ms

#PID<0.99.0> was at location:

    :prim_inet.recv0/3
    (postgrex 0.17.5) lib/postgrex/protocol.ex:3197: Postgrex.Protocol.msg_recv/4
    (postgrex 0.17.5) lib/postgrex/protocol.ex:2222: Postgrex.Protocol.recv_bind/3
    (postgrex 0.17.5) lib/postgrex/protocol.ex:2077: Postgrex.Protocol.bind_execute_close/4
    (db_connection 2.7.0) lib/db_connection/holder.ex:354: DBConnection.Holder.holder_apply/4
    (db_connection 2.7.0) lib/db_connection.ex:1558: DBConnection.run_execute/5
    (db_connection 2.7.0) lib/db_connection.ex:1653: DBConnection.run/6
    (db_connection 2.7.0) lib/db_connection.ex:772: DBConnection.parsed_prepare_execute/5


13:22:56.866 [debug] QUERY ERROR db=15033.1ms queue=2.4ms idle=392.9ms
DELETE FROM hashtags
USING hashtags AS ht
LEFT JOIN hashtags_objects hto
      ON ht.id = hto.hashtag_id
LEFT JOIN user_follows_hashtag ufht
      ON ht.id = ufht.hashtag_id
WHERE
    hashtags.id = ht.id
    AND hto.hashtag_id is NULL
    AND ufht.hashtag_id is NULL
 []
** (DBConnection.ConnectionError) tcp recv: closed (the connection was closed by the pool, possibly due to a timeout or because the pool has been terminated)
    (ecto_sql 3.10.2) lib/ecto/adapters/sql.ex:1047: Ecto.Adapters.SQL.raise_sql_call_error/1
    (pleroma 3.14.0-719-g2e049037-develop) lib/mix/tasks/pleroma/database.ex:357: Mix.Tasks.Pleroma.Database.run/1
    (mix 1.15.4) lib/mix/task.ex:447: anonymous fn/3 in Mix.Task.run_task/5
    (mix 1.15.4) lib/mix/cli.ex:92: Mix.CLI.run_task/2
    /usr/local/bin/mix:2: (file)
```

you can probably ignore that error for now, it should end up succeeding once the db is a tad smaller - it’ll become consistent at the end

and increasing the limit to something like 1000 would be fine
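
one caveat: deleting rows doesn't hand the space back to the OS by itself - postgres only releases it after a full vacuum, and VACUUM FULL temporarily needs extra free space to rewrite the tables, so run it while you still have some headroom. roughly (assuming your install still ships the pleroma.database vacuum task):

```
# batch-prune until a pass deletes fewer objects than the limit
sudo -u akkoma ./docker-resources/manage.sh \
  mix pleroma.database prune_objects --limit 1000

# then reclaim the freed space on disk
# (VACUUM FULL rewrites tables and needs free space while it runs)
sudo -u akkoma ./docker-resources/manage.sh \
  mix pleroma.database vacuum full
```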
