I’m preparing to migrate an account from one instance to a new one. As part of this, a user would typically make an account backup, to simply have a copy of the data. I requested a backup from the user panel over 24 hours ago and it’s still showing “This backup is not ready yet.”
Backup
2026-02-02T17:08:37.000Z This backup is not ready yet.
I feel like I’m missing something dumb… Is the scheduler broken? Or is it me?
Have you already force-refreshed the page (Ctrl+F5 in Firefox)?
If it’s still stuck pending after that, ask your instance owner to check whether there are pending backup jobs in either the Prometheus metrics/Grafana dashboard or the Oban Web dashboard (if enabled), and if so, whether the pending queue is slowly shrinking.
Typically a backup becomes available within a couple of minutes for me, so this is not a (known) general issue.
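If the admin has a remote IEx console attached to the instance, they can also inspect the Oban jobs table directly instead of going through a dashboard. A minimal sketch (the queue name "backup" is an assumption here — check the instance config for the actual queue name):

```elixir
# Run inside a remote IEx console attached to the running instance.
import Ecto.Query

# List backup jobs that haven't completed yet, with their state and attempt count.
Pleroma.Repo.all(
  from j in Oban.Job,
    where: j.queue == "backup",
    where: j.state in ["available", "scheduled", "executing", "retryable", "discarded"],
    select: {j.id, j.state, j.attempt, j.inserted_at},
    order_by: [desc: j.inserted_at]
)
```

Jobs stuck in "available"/"scheduled" point at a queue not being worked; "discarded" jobs carry the error in their `errors` field, which is usually the fastest way to find out why a backup never finished.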
looking at the oban dashboard I did find this under discarded:
Attempt 1—29s ago
** (DBConnection.ConnectionError) tcp recv (idle): closed (the connection was closed by the pool, possibly due to a timeout or because the pool has been terminated)
(ecto_sql 3.12.1) lib/ecto/adapters/sql.ex:1096: Ecto.Adapters.SQL.raise_sql_call_error/1
(ecto_sql 3.12.1) lib/ecto/adapters/sql.ex:994: Ecto.Adapters.SQL.execute/6
(ecto 3.12.5) lib/ecto/repo/queryable.ex:232: Ecto.Repo.Queryable.execute/4
(ecto 3.12.5) lib/ecto/repo/queryable.ex:19: Ecto.Repo.Queryable.all/3
(pleroma 3.16.0-0-g6d88834) lib/pleroma/repo.ex:77: anonymous fn/5 in Pleroma.Repo.chunk_stream/4
(elixir 1.15.4) lib/stream.ex:1626: Stream.do_resource/5
(elixir 1.15.4) lib/enum.ex:4387: Enum.reduce/3
(pleroma 3.16.0-0-g6d88834) lib/pleroma/user/backup.ex:191: Pleroma.User.Backup.write/4
This looks like the time limit for either an individual DB query or the whole backup job was exceeded. Currently (I’m assuming your outdated version behaves the same, but maybe not) there’s no explicit time limit on the individual queries used during backup creation, so Ecto’s default applies. But each query only fetches one chunked batch, which typically shouldn’t take that long.
You can try:
running VACUUM ANALYZE; and using pgtune on your database to improve its performance
raising the default query timeout (config :pleroma, Pleroma.Repo, timeout: 30_000)
patching a trailing , timeout: :infinity argument onto the Repo.all call in lib/pleroma/repo.ex from your stacktrace
raising the job time limit (config :pleroma, :workers, timeout: [backup: :timer.seconds(…), …]; note you’ll also need to copy all the other default entries in the list if you want to override just one)
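For the two config-based options above, a minimal sketch of what the entries could look like in the instance config (the concrete values 60_000 ms and 15 minutes are just example numbers, not recommendations; adjust to your database):

```elixir
# config/prod.secret.exs — example values only, tune to your setup.
import Config

# Raise the per-query timeout (Ecto's default is 15_000 ms).
config :pleroma, Pleroma.Repo,
  timeout: 60_000

# Raise the backup job's time limit. If your version keeps other worker
# timeouts in this list by default, copy them here too, since setting
# the key replaces the whole list rather than merging into it.
config :pleroma, :workers,
  timeout: [
    backup: :timer.seconds(900)
  ]
```

Of these, the per-query timeout is the one most directly implicated by your stacktrace, since the error fired inside a single chunked query rather than at the job boundary.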