ulimits

ulimit settings are little known to most Linux desktop users, but they are a painful and frequently encountered issue when working with servers. In a nutshell, ulimit settings control many aspects of a process's resource usage, much like the Docker resource tweaks we covered earlier, and they apply to every process and shell that is started. Distributions almost always set these limits to prevent a stray process from taking down your machine, but the numbers have usually been chosen with regular desktop usage in mind, so trying to run server-type code on an unchanged system is bound to hit at least the open file limit, and possibly some others.

We can use ulimit -a to see what our current (also called soft) settings are:

$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 29683
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 29683
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

As you can see, there are only a few limits set here, but one stands out: the "open files" limit (1024) is fine for general applications, but if we run many services that handle a large number of open files (such as a decent number of Docker containers), this value must be increased or we will hit errors and our services will effectively die.
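To get a feel for how close a process is to this limit, on Linux you can count its currently open file descriptors through /proc. Here we use the current shell's PID ($$) purely as an example; substitute the PID of the process you care about:

```shell
# Count the file descriptors currently open in this shell.
# /proc/<pid>/fd contains one symlink per open descriptor (Linux-only).
ls /proc/$$/fd | wc -l
```

Every process starts with at least stdin, stdout, and stderr, so you should see a count of three or more.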

You can change this value for your current shell with ulimit -S <flag> <value>:

$ ulimit -n
1024

$ # Set max open files to 2048
$ ulimit -S -n 2048

$ # Let's see the full list again
$ ulimit -a
<snip>
open files (-n) 2048
<snip>

But what if we try to set this to something really high?

$ ulimit -S -n 10240
bash: ulimit: open files: cannot modify limit: Invalid argument

We have now encountered the hard limit imposed by the system. If we want to raise the soft limit beyond this value, the hard limit will need to be changed at the system level. We can check what these hard limits are with ulimit -H -a:

$ ulimit -H -a | grep '^open files'
open files (-n) 4096

So, if we want to increase our open files number beyond 4096, we really need to change the system-level settings. Also, even if a soft limit of 4096 is fine with us, the setting applies only to our own shell and its child processes, so it won't affect any other service or process on the system.
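As a quick way to verify what limits another process is actually running with, the kernel exposes each process's effective soft and hard limits under /proc. Again, $$ here is just a stand-in for the PID of the service you want to inspect:

```shell
# Show the soft and hard open-file limits of a running process, as the
# kernel sees them right now.
grep 'Max open files' /proc/$$/limits
```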

If you really want to, you actually can change the ulimit settings of an already-running process with prlimit from the util-linux package. However, this method of adjusting the values is discouraged because the settings do not persist across process restarts, making it pretty useless for permanent changes. With that said, if you want to find out whether your ulimit settings have been applied to a service that is already running, this CLI tool is invaluable, so don't be afraid to use it in those cases.
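For example, you could query (or, with sufficient privileges, adjust) the open file limits of a running process like this; $$ stands in for the PID of the service you want to examine, and 2048 is just an illustrative value:

```shell
# Query only the open-files (NOFILE) limit of a running process.
prlimit --pid $$ --nofile

# Set that process's soft limit to 2048 while leaving the hard limit
# unchanged (prlimit takes soft:hard; trailing ':' means "keep hard").
# Changing another user's process requires appropriate privileges.
prlimit --pid $$ --nofile=2048:
```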

To change this setting permanently, you need a combination of steps that depends on your distribution:

  • Create a security limits configuration file. You can do this rather simply by adding a few lines to something like /etc/security/limits.d/90-ulimit-open-files-increase.conf. The following example sets the open file soft and hard limits to 65536, first for root and then for all other accounts (* does not apply to the root account). You should find out what the appropriate value is for your system ahead of time:
    root soft nofile 65536
    root hard nofile 65536
    * soft nofile 65536
    * hard nofile 65536
  • Add the pam_limits module to the Pluggable Authentication Modules (PAM) configuration. Some distributions do not include it by default, and without it the limits from the previous step may not be applied to user sessions, so your changes might not persist. Add the following to /etc/pam.d/common-session:

    session required pam_limits.so
  • Alternatively, on some distributions, you can directly add the setting to the affected service definition in systemd in an override file:
    LimitNOFILE=65536
Overriding systemd services is a somewhat lengthy and distracting topic for this section, but it is a very common strategy for tweaking third-party services running on cluster deployments with that init system, so it is a very valuable skill to have. If you would like to know more about this topic, you can find a condensed version of the process at https://askubuntu.com/a/659268, and if you want the detailed version, the upstream documentation can be found at https://www.freedesktop.org/software/systemd/man/systemd.service.html.
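As a condensed sketch of that process, assuming a hypothetical unit named myservice, a drop-in file at /etc/systemd/system/myservice.service.d/override.conf might contain just the following; only the keys listed here are changed, and everything else still comes from the main unit file:

```ini
# Drop-in override for a hypothetical "myservice" unit; substitute the
# real service name on your system.
[Service]
LimitNOFILE=65536
```

After creating the file, you would run systemctl daemon-reload and then restart the service for the new limit to take effect.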
CAUTION! In the first example, we used the * wildcard, which affects all accounts on the machine. Generally, you want to isolate this setting to only the affected service accounts, if possible, for security reasons. We also used root because root values are specifically set by name in some distributions, which overrides the * wildcard setting due to the higher specificity. If you want to learn more about limits, you can find more information on these settings at https://linux.die.net/man/5/limits.conf.