I just thought I would share my LibreTranslate setup here with some comments.
In general it is working very well with Akkoma, although I have noticed that some languages like Polish do not translate very well; others like Portuguese work fantastically though.
LibreTranslate can run solely on the CPU, and from what I have gathered it isn’t really faster with GPU acceleration (CUDA); however, on a GPU it can likely handle many more simultaneous requests. For now I am using the CPU, but I plan to switch to an old Nvidia GPU (with 3 GB of VRAM, which is more than sufficient for LT) sometime soon.
The following is my Podman Quadlet container file, which should be pretty self-explanatory even if you are writing a docker-compose file or similar instead:
```
[Unit]
Description=LibreTranslate Container
After=local-fs.target

[Container]
Image=docker.io/libretranslate/libretranslate:latest
#ContainerName=LibreTranslate
Volume=/data/libretranslate/apikeys:/app/db:z
Volume=/data/libretranslate/data:/home/libretranslate/.local/share:z
Volume=/data/libretranslate/cache:/home/libretranslate/.local/cache:z
PublishPort=5000:5000
#Environment=LT_DEBUG=true
#Environment=LT_HOST=0.0.0.0
#Environment=LT_PORT=5000
Environment=LT_REQ_LIMIT=100
Environment=LT_CHAR_LIMIT=5000
Environment=LT_BATCH_LIMIT=10
Environment=LT_FRONTEND_LANGUAGE_SOURCE=en
Environment=LT_FRONTEND_LANGUAGE_TARGET=pt
#Environment=LT_LOAD_ONLY=de,en,pt,es
Environment=LT_API_KEYS=true
Environment=LT_API_KEYS_DB_PATH=/app/db/api_keys.db

#Enable auto-updates
#Label=io.containers.autoupdate=registry

[Service]
Restart=always

[Install]
# Start by default on boot
WantedBy=multi-user.target default.target
```
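For anyone new to Quadlet: the unit is not installed with `systemctl enable`, it is simply dropped into Quadlet’s search path, from which systemd generates a `libretranslate.service`. A minimal sketch, assuming a rootless setup (for rootful containers the directory is `/etc/containers/systemd/` instead; verify the paths against your Podman version):

```shell
# Quadlet scans this directory for *.container files (rootless setup assumed).
QUADLET_DIR="$HOME/.config/containers/systemd"
mkdir -p "$QUADLET_DIR"

# Save the unit above as libretranslate.container in that directory, then:
#   systemctl --user daemon-reload    # regenerates the service from the .container file
#   systemctl --user start libretranslate.service
echo "$QUADLET_DIR"
```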
The first start takes a long time, as the container downloads several GB of Argos Translate language-model files. Note the “share” and “cache” volumes, which persist these large files across container restarts. The LT_LOAD_ONLY environment variable can be used to shrink this language-model download if you only need a limited set of languages.
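To confirm which models actually got installed once the container is up, you can query LibreTranslate’s `/languages` endpoint. A small sketch; the `language_codes` helper is my own naming, while the endpoint and the `code`/`name` fields follow the LibreTranslate API:

```python
import json
from urllib.request import urlopen

def language_codes(languages: list[dict]) -> list[str]:
    """Extract the sorted ISO codes from a /languages JSON response."""
    return sorted(lang["code"] for lang in languages)

# Against a live instance you would do something like:
#   with urlopen("http://localhost:5000/languages") as resp:
#       print(language_codes(json.load(resp)))

# Sample shape of the response, for illustration:
sample = [
    {"code": "pt", "name": "Portuguese"},
    {"code": "en", "name": "English"},
]
print(language_codes(sample))  # ['en', 'pt']
```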
The API-key stuff is optional, but recommended if you also want to use the LibreTranslate web interface outside of Akkoma: you can then restrict public access even further and give Akkoma a higher allowance via an API key. See the LibreTranslate docs for how to generate these API keys once the system is running.
On the Akkoma side you can now simply enable LibreTranslate in admin-fe, at the very bottom of the “Instance” settings. Use http://localhost:5000 as the URL.
In akkoma-fe you need to set a default translation language to get rid of the warning; once that is done, the translation box also allows manually specifying the source language in case auto-detection fails.
Edit: I noticed that the translate button is still shown to non-authenticated visitors in akkoma-fe, but fails with a non-authenticated notice. It might be nice to hide it completely for visitors.
LibreTranslate, like all ML software, is a bit heavy on system resources, so don’t try this on a low-end VPS. But I was surprised how little it needs in the end for my small Akkoma instance: initially the container occupied a few GB of RAM, but once things settled down it now idles at around 600 MB of RAM with negligible CPU use. Obviously, once you start using it CPU use will shoot up, and larger instances will likely need to offload it to a GPU.
Let me know if you have any questions or further suggestions on how to improve this. I’ll amend this post if/when I switch to a GPU-based setup.