Since I migrated from Pleroma to Akkoma, the service crashes from time to time. The crashes may be related to periods with a high number of incoming HTTP requests.
Syslog tells me:
2023-09-24T21:19:47.503499+02:00 akkoma pleroma[731]: eheap_alloc: Cannot allocate 2545496 bytes of memory (of type "heap").
2023-09-24T21:19:47.507901+02:00 akkoma kernel: [27861.806148] __vm_enough_memory: pid: 770, comm: 1_dirty_cpu_sch, no enough memory for the allocation
2023-09-24T21:19:47.507916+02:00 akkoma kernel: [27861.806159] __vm_enough_memory: pid: 770, comm: 1_dirty_cpu_sch, no enough memory for the allocation
2023-09-24T21:19:47.507918+02:00 akkoma kernel: [27861.806164] __vm_enough_memory: pid: 770, comm: 1_dirty_cpu_sch, no enough memory for the allocation
2023-09-24T21:19:47.507919+02:00 akkoma kernel: [27861.806206] __vm_enough_memory: pid: 770, comm: 1_dirty_cpu_sch, no enough memory for the allocation
2023-09-24T21:19:47.507920+02:00 akkoma kernel: [27861.806215] __vm_enough_memory: pid: 770, comm: 1_dirty_cpu_sch, no enough memory for the allocation
2023-09-24T21:19:47.507921+02:00 akkoma kernel: [27861.806219] __vm_enough_memory: pid: 770, comm: 1_dirty_cpu_sch, no enough memory for the allocation
2023-09-24T21:19:47.507923+02:00 akkoma kernel: [27861.806221] __vm_enough_memory: pid: 770, comm: 1_dirty_cpu_sch, no enough memory for the allocation
2023-09-24T21:19:47.507924+02:00 akkoma kernel: [27861.806529] __vm_enough_memory: pid: 770, comm: 1_dirty_cpu_sch, no enough memory for the allocation
2023-09-24T21:19:47.507925+02:00 akkoma kernel: [27861.806532] __vm_enough_memory: pid: 770, comm: 1_dirty_cpu_sch, no enough memory for the allocation
2023-09-24T21:19:47.507926+02:00 akkoma kernel: [27861.806555] __vm_enough_memory: pid: 770, comm: 1_dirty_cpu_sch, no enough memory for the allocation
2023-09-24T21:19:47.598950+02:00 akkoma pleroma[731]: #015
2023-09-24T21:19:49.347455+02:00 akkoma systemd[1]: pleroma.service: Main process exited, code=exited, status=1/FAILURE
2023-09-24T21:19:49.347482+02:00 akkoma systemd[1]: pleroma.service: Failed with result 'exit-code'.
2023-09-24T21:19:49.347498+02:00 akkoma systemd[1]: pleroma.service: Unit process 764 (epmd) remains running after unit stopped.
2023-09-24T21:19:49.347513+02:00 akkoma systemd[1]: pleroma.service: Consumed 12min 43.779s CPU time.
2023-09-24T21:19:49.481436+02:00 akkoma systemd[1]: pleroma.service: Scheduled restart job, restart counter is at 1.
2023-09-24T21:19:49.483249+02:00 akkoma systemd[1]: Stopped pleroma.service - Pleroma social network.
2023-09-24T21:19:49.483568+02:00 akkoma systemd[1]: pleroma.service: Consumed 12min 43.779s CPU time.
2023-09-24T21:19:49.483974+02:00 akkoma systemd[1]: pleroma.service: Found left-over process 764 (epmd) in control group while starting unit. Ignoring.
2023-09-24T21:19:49.484092+02:00 akkoma systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
2023-09-24T21:19:49.497963+02:00 akkoma systemd[1]: Started pleroma.service - Pleroma social network.
Sometimes it's not type “heap” but type “old_heap”.
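(As far as I understand, both refer to the per-process heaps inside the BEAM: “heap” is the young generation and “old_heap” the old generation of the garbage collector, so either way the VM failed to grow a process heap.) If it helps, I can pull numbers from the remote console next time memory climbs; something like this should work — a minimal sketch using only plain OTP calls, and the `pleroma_ctl` path below is an assumption, adjust for your install:

```elixir
# Attach first, e.g.: ./bin/pleroma_ctl remote_console   (path depends on install)

# Total BEAM memory by category (processes, binary, ets, ...), in bytes:
:erlang.memory()

# The five processes with the largest heaps, to spot a runaway process:
Process.list()
|> Enum.map(&{&1, Process.info(&1, [:memory, :registered_name, :current_function])})
|> Enum.reject(fn {_pid, info} -> is_nil(info) end)
|> Enum.sort_by(fn {_pid, info} -> info[:memory] end, :desc)
|> Enum.take(5)
```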
Here is the memory utilization during the last crash, along with a few hours of historical data:
I have already increased the virtual machine's memory by 1 GB.
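I would also like to understand the repeated `__vm_enough_memory` kernel lines; as far as I can tell they mean the kernel's overcommit accounting refused the allocation, not Akkoma itself. A quick way to check the current settings — plain shell, nothing Akkoma-specific:

```sh
# Overcommit policy: 0 = heuristic (default), 1 = always allow, 2 = strict accounting
sysctl vm.overcommit_memory vm.overcommit_ratio

# Commit charge vs. limit (CommitLimit is only enforced in mode 2; in the default
# heuristic mode the kernel compares the request against free + reclaimable memory):
grep -E '^(CommitLimit|Committed_AS|MemAvailable)' /proc/meminfo
```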
This never happened during the two and a half years I ran Pleroma. If someone can convince me that erl_crash.dump does not contain sensitive information, I would be happy to share it.
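From what I can see, erl_crash.dump is plain text and split into marked sections, so it can at least be inspected before deciding — a sketch; `cdv` ships with Erlang/OTP but needs wx support to run:

```sh
# The header and section markers show what the dump contains:
head -n 5 erl_crash.dump
grep -o '^=[a-z_]*' erl_crash.dump | sort | uniq -c | sort -rn

# Each =proc: section describes one process and may include heap/message data:
grep -c '^=proc:' erl_crash.dump

# Graphical crash dump browser shipped with Erlang/OTP (needs wx support):
cdv erl_crash.dump
```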