Libtorrent v2.x memory limiting on linux

General support for problems installing or using Deluge
mhertz
Moderator
Posts: 2195
Joined: Wed Jan 22, 2014 5:05 am
Location: Denmark

Libtorrent v2.x memory limiting on linux

Post by mhertz »

Just wanted to add that I tested the following working on linux (with systemd), to keep the page cache from growing indefinitely, which can cause the performance issues some have reported previously, just like on windows.

Add a file named e.g. 'wslimit.conf' under '/etc/systemd/system/deluged.service.d' (create the dir if it isn't already there):

Code: Select all

[Service]
MemoryMax=512M
(Change '512M' to e.g. '1G' etc., per personal preference.)

Finally run 'sudo systemctl daemon-reload'.

(An alternate way is simply running 'sudo systemctl edit deluged', adding the two lines from the codebox above between the two comment blocks where the cursor initially is, and saving/exiting. This automatically creates the 'deluged.service.d' dir if it isn't there, generates the override file inside it, and runs daemon-reload too.)

To remove this fix, simply delete wslimit.conf (or override.conf if you used the alternate way above), delete the deluged.service.d dir too if you created it yourself for this, and finish with 'sudo systemctl daemon-reload'.
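If you want to script the drop-in steps above, here's a minimal sketch. Note it stages the file under /tmp purely for illustration; on a real system you'd point DROPIN_DIR at /etc/systemd/system/deluged.service.d (which needs root) and run daemon-reload afterwards:

```shell
#!/bin/sh
# Sketch: generate the systemd drop-in from a script. DROPIN_DIR defaults to a
# /tmp staging dir for illustration; on a real system set it to
# /etc/systemd/system/deluged.service.d and follow with 'systemctl daemon-reload'.
DROPIN_DIR="${DROPIN_DIR:-/tmp/deluged.service.d}"
LIMIT="${LIMIT:-512M}"

mkdir -p "$DROPIN_DIR"
cat > "$DROPIN_DIR/wslimit.conf" <<EOF
[Service]
MemoryMax=$LIMIT
EOF
echo "wrote $DROPIN_DIR/wslimit.conf"
```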

The same trick can be done for docker by adding the '-m 512m' option to the command that creates the container, e.g.:

Code: Select all

sudo docker run -d \
--name deluge \
[...] \
-m 512m \
lscr.io/linuxserver/deluge:latest
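For compose users, the equivalent is the 'mem_limit' key; a sketch, assuming a linuxserver.io-style setup like the 'docker run' example above (the service name and any other keys are up to your own setup):

```yaml
# Sketch: compose equivalent of 'docker run -m 512m' for the deluge container
services:
  deluge:
    image: lscr.io/linuxserver/deluge:latest
    mem_limit: 512m
```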
Last, one can also do this manually by starting deluged through systemd-run with a memory limit applied, e.g.:

Code: Select all

sudo systemd-run --scope -p MemoryMax=512M --uid=deluge -u deluged deluged
This generates a transient unit with a .scope extension, which by default gets a bunch of random numbers for a name; hence the '-u deluged' option (which isn't for specifying the user, as that is the '--uid' option here), so you can stop deluged again in the same session with 'sudo systemctl stop deluged.scope' instead of typing out said random numbers.

Anyway, for all solutions you can check with 'free -m' that torrenting no longer eats all your memory (it shows up in the 'buff/cache' column, not 'used', and without this trick 'free' gets very small). You can also check the 'Memory' line of 'sudo systemctl status deluged' (the .service extension is implied when not given, so for the systemd-run case you need to specify 'deluged.scope' explicitly), or 'sudo systemd-cgtop', and for docker the 'mem usage / limit' column of 'sudo docker stats deluge'.
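As a rough cross-check of what 'free -m' reports, the buff/cache figure can be computed straight from /proc/meminfo; a sketch, assuming the procps-ng behaviour of summing Buffers, Cached and SReclaimable:

```shell
# Sketch: compute the 'buff/cache' figure shown by 'free -m' directly from
# /proc/meminfo (Buffers + Cached + SReclaimable; values there are in KiB).
awk '/^(Buffers|Cached|SReclaimable):/ {sum += $2}
     END {printf "buff/cache: %d MiB\n", sum / 1024}' /proc/meminfo
```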

Sorry, no solution yet for non-systemd systems, but I hope this helps somewhat at least.

Edit: Just stumbled over the easiest method of all; simply run:

Code: Select all

sudo systemctl set-property deluged MemoryMax=512M
Rerun it with a new value to change the limit, or with 'MemoryMax=' to reset it to infinity, or just delete the generated '/etc/systemd/system.control/deluged.service.d' dir. You can also add the '--runtime' switch to test it for the current session only.
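When picking a value it can help to sanity-check what a given suffix works out to; a throwaway helper of my own (systemd parses the K/M/G suffixes to base 1024):

```shell
# Sketch: expand a systemd-style size suffix (K/M/G, base 1024) to bytes,
# handy for sanity-checking MemoryMax values.
to_bytes() {
  case $1 in
    *K) echo $(( ${1%K} * 1024 )) ;;
    *M) echo $(( ${1%M} * 1024 * 1024 )) ;;
    *G) echo $(( ${1%G} * 1024 * 1024 * 1024 )) ;;
    *)  echo "$1" ;;  # plain byte count, pass through
  esac
}

to_bytes 512M   # 536870912
to_bytes 1G     # 1073741824
```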
mhertz
Moderator
Posts: 2195
Joined: Wed Jan 22, 2014 5:05 am
Location: Denmark

Re: Libtorrent v2.x memory limiting on linux

Post by mhertz »

Just wanted to add a non-systemd solution for users needing such - also, btw, I edited an even easier systemd solution into the previous post, a single command, i.e. 'sudo systemctl set-property deluged MemoryMax=512M', done.

OK, systemd and docker simply use the kernel's native cgroup functionality to limit memory, which we can also do manually ourselves, either through the mounted cgroup virtual filesystem with shell commands, or using another frontend like libcgroup (sometimes called cgroup-tools on some distros, when the CLI frontend tools are included). Anyway, I'm just adding the manual shell-command solution here, as there's no real need for another frontend imho, but I can post a writeup for the libcgroup method later if wanted.

Here are example commands demonstrating this, to adapt/include into your own script/service. I mount the cgroup virtual filesystem manually myself, just in case that isn't already done on the distro you use (though many already mount it at /sys/fs/cgroup, so change the paths if you want to avoid the redundancy):

Code: Select all

sudo -u deluge deluged
sudo mkdir /mnt/cgroup
sudo umount /sys/fs/cgroup/memory 2>/dev/null  # free the v1 memory controller on hybrid setups
sudo mount -t cgroup2 none /mnt/cgroup
echo +memory | sudo tee /mnt/cgroup/cgroup.subtree_control  # enable the memory controller for child groups
sudo mkdir /mnt/cgroup/deluged
echo $(pidof -x deluged) | sudo tee /mnt/cgroup/deluged/cgroup.procs  # move deluged into the group
echo 512m | sudo tee /mnt/cgroup/deluged/memory.max  # hard memory cap
(Edit: Cgroup v1 alternative method, instead of v2 above, posted in next post)

You can check that deluged belongs to the correct cgroup (here 'deluged') with 'cat /proc/$(pidof -x deluged)/cgroup'.

When finished, stop deluged and remove the cgroup, which because of cgroup semantics specifically needs 'rmdir', not 'rm -rf': 'sudo rmdir /mnt/cgroup/deluged'. Then unmount with 'sudo umount /mnt/cgroup'.

No need imho to post the alternative libcgroup/cgroup-tools method here, as it's just a frontend to the above, but I will upon request if needed.
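For reuse in your own scripts, the per-process part of the commands above (everything after the mount) can be wrapped in a small function; a sketch, where the CGROUP_ROOT default, the function name and its parameters are my own choices for illustration:

```shell
# Sketch: move an already-running PID into a memory-limited cgroup v2 group.
# Assumes cgroup2 is mounted at CGROUP_ROOT and that the caller may write
# there (i.e. run as root on a real system).
limit_pid() {  # usage: limit_pid <pid> <group> <limit>
  root="${CGROUP_ROOT:-/sys/fs/cgroup}"
  echo +memory > "$root/cgroup.subtree_control"  # enable memory controller for children
  mkdir -p "$root/$2"
  echo "$1" > "$root/$2/cgroup.procs"            # move the process into the group
  echo "$3" > "$root/$2/memory.max"              # hard memory cap
}

# e.g. on a real system: limit_pid "$(pidof -x deluged)" deluged 512m
```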

Last, I thought about looking into including either the above (native) way or the libcgroup way in my wslimit deluge plugin, to add linux memory limiting alongside the windows one, but decided not to: it takes root to create the cgroup (which could then be configured to let a non-root user control said group), which I don't like, and likewise it's not that hard to implement manually in your own setup anyway imho.

Hope helps.
ambipro
Moderator
Posts: 417
Joined: Thu May 19, 2022 3:33 am

Re: Libtorrent v2.x memory limiting on linux

Post by ambipro »

long live wslimit.
mhertz
Moderator
Posts: 2195
Joined: Wed Jan 22, 2014 5:05 am
Location: Denmark

Re: Libtorrent v2.x memory limiting on linux

Post by mhertz »

Lol, thanks bro, that put a smile on my face honestly :) Appreciate you being sparring-partner and beta tester in that again :)
mhertz
Moderator
Posts: 2195
Joined: Wed Jan 22, 2014 5:05 am
Location: Denmark

Re: Libtorrent v2.x memory limiting on linux

Post by mhertz »

I tested the non-systemd solution in an ubuntu VM initially, but when I then actually tested it on a non-systemd distro it didn't work, so I researched and troubleshooted why, and hence this post for completeness.

If the distro has already mounted any cgroups, e.g. memory, in hybrid mode, i.e. both cgroup v1 and v2, then it doesn't work, because the memory controller is already claimed by the v1 hierarchy and hence not available in v2. I tested 5 distros, all non-systemd with runit, openrc and s6 as inits. One of the 5 I couldn't make it work on, but I finally found out it was using a kernel built without memory cgroup support, i.e. the gentoo minimal installation livecd iso. That's not a normal situation though; gentoo has other install isos, and your installed gentoo system would most probably have it enabled regardless, unless very niche.

Anyway, I edited a few extra commands into the non-systemd instructions in the previous post, which I tested working. If you're not using hybrid mode but straight cgroup v2, they're not needed and will just show an error, hence I appended '2>/dev/null' to silence that.
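To see up front which situation you're in, a quick check can help; a sketch that looks for the v2 'cgroup.controllers' file and a v1 memory hierarchy under the usual mount point (adjust the path if your distro mounts cgroups elsewhere):

```shell
# Sketch: detect the cgroup layout before picking a method.
if [ -f /sys/fs/cgroup/cgroup.controllers ]; then
    echo "unified cgroup v2"
elif [ -d /sys/fs/cgroup/memory ]; then
    echo "cgroup v1 (or hybrid) memory hierarchy mounted"
else
    echo "no cgroup hierarchy found under /sys/fs/cgroup"
fi
```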

Very last, let's say you use a system with only cgroup v1 support for some reason; that should be very rare, but still. Here are additional instructions for that:

Code: Select all

sudo -u deluge deluged
sudo mkdir /mnt/cgroup
sudo mount -t cgroup -o memory none /mnt/cgroup
sudo mkdir /mnt/cgroup/deluged
echo $(pidof -x deluged) | sudo tee /mnt/cgroup/deluged/cgroup.procs  # move deluged into the group
echo 512m | sudo tee /mnt/cgroup/deluged/memory.limit_in_bytes  # cap memory
echo 512m | sudo tee /mnt/cgroup/deluged/memory.memsw.limit_in_bytes  # cap memory+swap
As before, stop deluged when finished, run 'sudo rmdir /mnt/cgroup/deluged' (note again that 'rm -rf' won't work because of cgroup semantics), and unmount with 'sudo umount /mnt/cgroup'.

In cgroup v1, a memory limit alone still allows the process to additionally drain all of swap, so we further restrict memory.memsw.limit_in_bytes (memory plus swap) here as well.
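The v1 commands can be wrapped the same way as the v2 ones; a sketch (the mount point default, function name and parameters are again my own choices). Note the ordering: memory.limit_in_bytes has to be lowered first, since memory.memsw.limit_in_bytes can never be set below it:

```shell
# Sketch: cgroup v1 equivalent, assuming the memory hierarchy is mounted at
# CGROUP_V1 and the caller may write there (i.e. root on a real system).
limit_pid_v1() {  # usage: limit_pid_v1 <pid> <group> <limit-in-bytes>
  root="${CGROUP_V1:-/mnt/cgroup}"
  mkdir -p "$root/$2"
  echo "$1" > "$root/$2/cgroup.procs"               # move the process in
  echo "$3" > "$root/$2/memory.limit_in_bytes"      # cap memory first
  echo "$3" > "$root/$2/memory.memsw.limit_in_bytes"  # then cap memory+swap
}

# e.g. on a real system: limit_pid_v1 "$(pidof -x deluged)" deluged 536870912
```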

Hope helps.
janos66
New User
Posts: 7
Joined: Sun Sep 03, 2023 1:50 am

Re: Libtorrent v2.x memory limiting on linux

Post by janos66 »

So, after complaining in another forum thread, I looked at the 'nmon' readings, not just 'top' and read a little about how memory mapping a file works in a nutshell.
As I understand it, when a big chunk of file regions is memory mapped, nothing really happens except that an equal amount of virtual memory is committed to the mapping. Since the virtual address space is practically endless at this scale, it means absolutely nothing at this point. And when an actual read/write happens, the pages are still treated as cache/dirty, so droppable/flushable as necessary when physical memory (plus swap space, if any) is running out.
Meaning, it's not an issue if deluge (or more precisely libtorrent) seemingly takes up a big amount of memory, because it's mostly just virtual memory allocation, with only a portion of it backed by physical memory (if any) and treated the same, as cache/dirty pages, as it would be with plain file open/read/write access instead of memory-mapped file regions.
Thus, no adjustment is necessary. It's actually much better than deluge and/or libtorrent keeping a private cache of its own, because then the kernel couldn't decide when and how much cache to drop, or when and how to flush the dirty pages of that cache. This seems more efficient overall, regardless of how it might look at first glance.
mhertz
Moderator
Posts: 2195
Joined: Wed Jan 22, 2014 5:05 am
Location: Denmark

Re: Libtorrent v2.x memory limiting on linux

Post by mhertz »

Correct! :) But unfortunately that is just the theory, because evidently some have reported issues, on windows and linux both, with OOM, slowdowns, crashes etc. Even though you're right that this is virtual memory, and cached memory once accessed, set to be freed again immediately when other stuff needs it ("unused ram is wasted ram" and all that, the whole point of the page cache), in practice it's been shown that this sometimes, for some, doesn't happen (fast enough), and there is then an issue with the kernel's page/file-caching component. This thread's instructions, and my wslimit windows plugin, are both made solely for that context, not for the general "Help! Linux ate my ram" scare, though I probably could have stipulated that better. I personally don't enable any of these myself, though granted I only torrent very minimally, and regardless I would only use them if actually seeing a negative impact myself. The windows file-page cache is believed to be a little worse in this regard, and e.g. needed extra workarounds added by the libtorrent devs, like forced flushing calls for dirty pages at times etc., but linux issues have been reported too, so it's a universal issue when it happens, and hence this thread.

Good post! :)
JoeyMcFly
New User
Posts: 1
Joined: Fri Dec 29, 2023 10:43 pm

Re: Libtorrent v2.x memory limiting on linux

Post by JoeyMcFly »

Sorry for the slight necropost here, but (and I fully realise that there's every chance I'm being a Muppet here):

How would I go about doing this if I'm not using the thin client, on Ubuntu? I've tried following your instructions but it just says that deluged.service doesn't exist (fair enough, I'm not using the thin client), but what service should I be referring to instead? Or does it not work that way?

I apologise in advance if this question seems mighty dumb.
mhertz
Moderator
Posts: 2195
Joined: Wed Jan 22, 2014 5:05 am
Location: Denmark

Re: Libtorrent v2.x memory limiting on linux

Post by mhertz »

No worries :)

If you have systemd, which I guess you do when using ubuntu, then easiest is running e.g.:

Code: Select all

sudo systemd-run --scope -p MemoryMax=512M --uid=martin -u deluge deluge
(Change 'martin' to your user, or whatever you want the deluge app to run under.)

You can check under 'sudo systemctl status deluge.scope' during usage that the memory is limited correctly.

Not much reason to set this up manually when systemd is installed imho, I mean with all the shell commands I posted previously, but you could if wanted.

Last, I know you said you're not using the thinclient, but you could set it up seamlessly with a deluged service, maybe socket-activated so it's first loaded when needed if wanted, set it up to be memory-restricted, and then have the deluge GTK UI auto-connect to it on every startup; just a thought.

If anything unclear, then let me know of-course.

Edit: The first command line posted needs the terminal window kept open until deluge is stopped, unless using various workarounds (simply backgrounding isn't enough), and it's a little non-standard to have a GUI app in a cgroup/service, so I'd much rather recommend the latter option I posted. If you don't want to use a deluged service file, you can also just use e.g. 'sudo systemd-run --scope -p MemoryMax=512M --uid=martin -u deluged deluged', which doesn't need the terminal kept open, and then have the deluge app set up to auto-connect to said deluged instance when run.