Advice on globally limiting resources for containers

davidecavestro
Posts: 23
Joined: 17 Jul 2023, 16:07

Post by davidecavestro »

Hi, given that we have no control over the Docker config files - and since I only use containerized services - I was wondering how I could limit the total amount of memory and CPU used by Docker AND the whole set of containers.

Setting limits per container is good but not enough: having an upper limit in place for the whole stack would make me more confident about the overall health of the system.
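
For reference, this is the kind of per-container limit I already set at launch time (just an illustrative sketch; the container name, image and values are placeholders):

Code:

# Hypothetical per-container limits set at launch (placeholder name, image and values)
docker run -d --name my-service --memory 512m --memory-swap 512m --cpus 1.5 nginx:alpine

Each container respects its own cap, but nothing bounds the sum across all containers plus the docker daemon itself.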

A trivial way would be modifying the existing docker cgroups at boot:

Code:

[me@nas ~]# cat /etc/crontabs/root |grep reboot
@reboot /root/bin/limit-containers.sh

[me@nas ~]# cat /root/bin/limit-containers.sh
#!/bin/bash

CG_NAME=docker

MEMORY=3200 # MiB
CPU=180     # percentage: dual core max is 200

# Create the cgroups in case they don't exist yet
mkdir -p /sys/fs/cgroup/memory/$CG_NAME
mkdir -p /sys/fs/cgroup/cpu/$CG_NAME

# Set the limits
echo $(($MEMORY * 1024 * 1024)) > /sys/fs/cgroup/memory/$CG_NAME/memory.limit_in_bytes

echo 100000 > /sys/fs/cgroup/cpu/$CG_NAME/cpu.cfs_period_us
echo $(($CPU * 1000)) > /sys/fs/cgroup/cpu/$CG_NAME/cpu.cfs_quota_us
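
After the reboot job has run, the values can be double-checked directly in the docker cgroup (same cgroup v1 paths as above; the output below is what I would expect, assuming the writes succeeded):

Code:

[me@nas ~]# cat /sys/fs/cgroup/memory/docker/memory.limit_in_bytes
3355443200
[me@nas ~]# cat /sys/fs/cgroup/cpu/docker/cpu.cfs_quota_us
180000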
Another option would be leaving the default values on the docker cgroups, creating a dedicated cgroup just for the containers and binding them to it at launch time. But it would be more invasive...
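
For the record, this is roughly what I have in mind (only a sketch, under the same cgroup v1 assumptions as the script above and with the cgroupfs driver; the cgroup name, container name and image are placeholders):

Code:

#!/bin/bash
# Sketch: a dedicated cgroup for containers only, leaving the default docker
# cgroups untouched (cgroup v1 paths, cgroupfs driver assumed)

CG_NAME=containers
MEMORY=3200 # MiB
CPU=180     # percentage: dual core max is 200

mkdir -p /sys/fs/cgroup/memory/$CG_NAME /sys/fs/cgroup/cpu/$CG_NAME

echo $(($MEMORY * 1024 * 1024)) > /sys/fs/cgroup/memory/$CG_NAME/memory.limit_in_bytes
echo 100000 > /sys/fs/cgroup/cpu/$CG_NAME/cpu.cfs_period_us
echo $(($CPU * 1000)) > /sys/fs/cgroup/cpu/$CG_NAME/cpu.cfs_quota_us

# Every container then has to be launched under that parent, e.g.:
docker run -d --cgroup-parent=/$CG_NAME --name my-service nginx:alpine

That is exactly why it feels more invasive: every single launch has to carry the extra --cgroup-parent option (or the equivalent compose setting).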

Any advice?